Phonograph, CD, MP3—What’s Next?

The Beatles finally make it to iTunes.

“I am particularly glad to no longer be asked when the Beatles are coming to iTunes,” said Ringo Starr last week as the Fab Four’s record company finally agreed to have their music sold through digital downloads by Apple. This agreement could mark the beginning of the end of the digital dislocation of the music industry—the first industry to be completely disrupted by information-age technology.

Apple CEO Steve Jobs said, “It has been a long and winding road to get here,” understating the case. Members of the band and other rights-holders had long objected to Apple’s practice of selling songs separately from albums, disagreed with its pricing, and feared illegal file-sharing of the songs if they were ever available online.

All 17 of the Beatles’ albums were among the top 50 sellers on iTunes the day they were made available. Apple is even selling a virtual “box set” of the Beatles. The top single was “Here Comes the Sun,” appropriately enough, since the lyrics “it’s been a long cold lonely winter” summarize a music industry just emerging from the destruction element of creative destruction.

Music has been a test case for technology transitions before. In the 19th century, the sheet-music publishers of Tin Pan Alley dominated the industry but were disrupted by recorded sound when Thomas Edison invented the phonograph. This was in turn replaced by newer physical forms of recordings, from eight-track tapes to cassettes and CDs. In the Internet era, sales of albums—bundles of music—broke down as consumers downloaded just the songs they wanted, usually illegally.

The iTunes store, launched in 2003, popularized legal downloads. Streaming music online has also become popular. Today one quarter of recorded-music revenue comes from digital channels. This tells us that technology can reward both creators and consumers, even as traditional middlemen such as record companies get squeezed.

A Beatles song plays on an iPod.

The Beatles have been accused of being digitally backward, but last year the group targeted younger listeners by cooperating with a videogame maker on “The Beatles: Rock Band” that lets people play along.

“We’ve made the Beatles music,” Paul McCartney told London’s Observer last year. “It’s a body of work. That’s it for us—it’s done. But then what happens is that somebody will come up with a suggestion,” like a video game.

Consumers get more choice through digital products and seem happy to pay for the convenience of downloads through iTunes, despite the availability of free music. Apple can charge more for a Beatles download than Amazon can charge for a CD, even though CDs are usually higher-quality and the songs can be transferred to devices such as iPods.

Several years ago the big legal battle featured music industry companies suing some 35,000 people who illegally downloaded songs. Piracy continues, but now the industry is instead looking for new revenue streams. Sean Parker, co-founder of the original downloading service, Napster, has advice for music companies. “The war on piracy is a failure,” he says. “Labels must offer services that consumers are willing to pay for, focusing on convenience and accessibility.”

Some musicians still hold out against digital downloads. Country star Kid Rock explained to Billboard magazine recently why he stays off iTunes. “I have trouble with the way iTunes says everybody’s music is worth the same price. I don’t think that’s right. There’s music out there that’s not worth a penny. They should be giving it away, or they should be making the artist pay people to listen to it.”

Still, there are encouraging signs that creators and distributors are coming together. Artists often skip the music industry altogether by using new technology to make songs cheaply, then market them on the Web. For many musicians, the real money comes from concerts and merchandising. For bands that appeal to older audiences, such as the Beatles, CD sales remain brisk.

For music and many content-based industries, the shift to the Information Age from the Industrial Age is a shift to digital versions from older analog versions. The older forms don’t disappear altogether. Instead, traditional products find a more limited role alongside newer versions that take advantage of new technology to deliver different experiences to consumers. Sellers may lose scarcity value for their goods as digital tools make copying easy, but as iTunes has shown, convenience is also a service worth buying.

If the music industry can learn new tricks, there’s hope for all the other industries that are being transformed as technology continues to give consumers more choices. The best alternative for smart industries is to take the advice of the Beatles song “Let It Be”—make the most of technological progress, and recognize that certain things are beyond anyone’s control.

L. Gordon Crovitz, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704496104575627282994471928.html

Forget any ‘Right to Be Forgotten’

Don’t count on government to censor information about you online.

The stakes keep rising in the debate over online privacy. Last week, the Obama administration floated the idea of a privacy czar to regulate the Internet, and the European Union even concocted a new “right to be forgotten” online.

The proposed European legislation would give people the right, any time, to have all of their personal information deleted online. Regulators say that in an era of Facebook and Google, “People should have the ‘right to be forgotten’ when their data is no longer needed or they want their data to be deleted.” The proposal, which did not explain how this could be done in practice, includes potential criminal sanctions.

Privacy viewed in isolation looks more like a right than it does when seen in context. Any regulation to keep personal information confidential quickly runs up against other rights, such as free speech, and many privileges, from free Web search to free email.

There are real trade-offs between privacy and speech. Consider the case of German murderer Wolfgang Werle, who does not think his name should be used. In 1990, he and his half brother killed German actor Walter Sedlmayr. They spent 15 years in jail. German law protects criminals who have served their time, shielding them even from references to their crimes.

Last year, Werle’s lawyers sent a cease-and-desist letter to Wikipedia, citing German law, demanding the online encyclopedia remove the names of the murderers. They even asked for compensation for emotional harm, saying, “His rehabilitation and his future life outside the prison system is severely impacted by your unwillingness to anonymize any articles dealing with the murder of Mr. Sedlmayr with regard to our client’s involvement.”

Censorship requires government limits on speech, at odds with the open ethos of the Web. It’s also not clear how a right to be forgotten could be enforced. If someone writes facts about himself on Facebook that he later regrets, do we really want the government punishing those who use the information?

UCLA law Prof. Eugene Volokh has explained why speech and privacy are often at odds. “The difficulty is that the right to information privacy—the right to control other people’s communication of personally identifiable information about you—is a right to have the government stop people from speaking about you,” he wrote in a law review article in 2000.

Indeed, there’s a good argument that “a ‘right to be forgotten’ is not really a ‘privacy’ right in the first place,” says Adam Thierer, president of the Progress and Freedom Foundation. “A privacy right should only concern information that is actually private. What a ‘right to be forgotten’ does is try to take information that is, by default, public information, and pretend that it’s private.”

There are also concerns about how information is collected for advertising. A Wall Street Journal series, “What They Know,” has shown that many online companies don’t even know how much tracking software they use. Better disclosure would require better monitoring by websites. When used correctly, these systems benignly aggregate information about behavior online so that advertisers can target the right people with the right products.

Many people seem happy to make the trade-off in favor of sharing more about themselves in exchange for services and convenience. On Friday, when news broke of potential new regulations in the U.S., the Journal conducted an online poll asking, “Should the Obama administration appoint a watchdog for online privacy?” Some 85% of respondents said no.

As Brussels and Washington were busily proposing new regulations last week, two of the biggest companies were duking it out over consumer privacy, a new battlefield for competition. Google tried to stop Facebook from letting users automatically import their address books and other contact details from their Gmail accounts, arguing that the social-networking site didn’t have a way for users to get the data out again.

When users tried to import their contacts to Facebook, a message from Gmail popped up saying, “Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won’t let you get it out?” The warning adds, “We think this is an important thing for you to know before you import your data there. Although we strongly disagree with this data protectionism, the choice is yours. Because, after all, you should have control over your data.”

One of the virtues of competitive markets is that companies vie for customers over everything from services to privacy protections. Regulators have no reason to dictate one right answer to balancing acts that consumers are fully capable of performing for themselves.

L. Gordon Crovitz, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704658204575610771677242174.html

The Power To Control

Is Internet freedom threatened more by dominant companies or by the government’s efforts to hem them in?

In the early days of the radio industry, in the 1920s, almost anyone could become a broadcaster. There were few barriers to entry, basically just some cheap equipment to acquire. The bigger broadcasters soon realized that wealth creation depended on restricting market entry and limiting competition. Before long, regulation—especially the licensing of radio frequencies—transformed the open radio landscape into a “closed” oligopoly, with few players instead of many.

In “The Master Switch,” Tim Wu, a professor at Columbia University, argues that the Internet also risks becoming a closed system unless certain steps are taken. In his telling, information industries—including radio, television and telecommunications—begin as relatively open sectors of the economy but get co-opted by private interests, often abetted by the state. What starts as a hobby or a cottage industry ends up as a monopoly or cartel.

In such an environment, success often depends on snuffing out competitors before they become formidable. In Greek mythology, Kronos—ruler of the universe—was warned by an oracle that one of his children would dethrone him. Logically, he took pre-emptive action: Each time his wife gave birth, he seized the new child and ate it. Applied to corporate strategy, the “Kronos Effect” is the attempt by a dominant company to devour its challengers in their infancy.

In the late 19th century, Western Union, the telegraph company, tried to put AT&T out of business in the infancy of the telephone—by commissioning Thomas Edison to design a better phone and then rolling out tens of thousands of telephones to consumers, rendering AT&T a “bit player.” It was a sound Kronos strategy, but AT&T survived and eventually prevailed over Western Union, thanks in part to aggressive patent litigation. Later AT&T, in its turn, applied the Kronos strategy to every upstart that challenged it.

Mr. Wu notes that, for most of the 20th century, AT&T operated the “most lucrative monopoly in history.” In the early 1980s, the U.S. government broke the monopoly up, but its longevity was the result of government regulation. In 1913, AT&T entered into the “Kingsbury Commitment” with the Justice Department. The deal was meant to increase competition by forcing AT&T, among other things, to allow independent operators to connect their local exchanges with AT&T’s long-distance lines. But the agreement, by forestalling the break-up of AT&T, was really, Mr. Wu says, the “death knell” of both “openness and competition.”

In the past, then, even arrangements aimed at maximizing competition have ended up entrenching the dominant player. Some argue that the Internet will avoid this fate because it is “inherently open.” Mr. Wu isn’t so sure. In fact, he says, “with everything on one network, the potential power to control is so much greater.” He worries about major players dominating the Internet, stifling innovation and free speech.

Mr. Wu’s solution is to propose a “Separation Principle,” a form of industry self-regulation to be overseen by the Federal Communications Commission (though he concedes that there is an ever-present danger of regulatory capture—whereby the FCC or other agencies become excessively influenced by the businesses they are meant to be regulating). The key to competition in the information industry, Mr. Wu believes, is a complete independence among its three layers: content owners (e.g., a games developer); network infrastructure (e.g., a cable company or cellular-network owner); and tools of access (e.g., a mobile handset maker). Obviously vertical integration, where one company participates in more than one layer, would be prohibited. The biggest effect of such a rule would be to separate content and conduit: Comcast, the cable giant, would plainly not be allowed to complete its planned acquisition of NBC Universal, a content provider.

The process that Mr. Wu describes—of a few companies dominating the information industry and requiring regulatory intervention to tame them—plays down the disruptive effects of technology itself. In 1998, the Justice Department launched an antitrust action against Microsoft, partly to prevent it from using Windows, its operating system, to control the Web. But it was innovation by competitors that put paid to Microsoft’s potential dominance. A decade ago, AOL (when it was still called America Online) seemed poised to dominate cyberspace. Then broadband came along and AOL, a glorified dial-up service provider, quickly became an also-ran.

Similarly, mobile carriers, like AT&T Wireless, long enjoyed near-complete control over mobile applications—until Apple’s iPhone arrived. The App Store decimated that control and unleashed a wave of mobile innovation. Mr. Wu notes that Apple, which at first forbade some competing applications, was “shamed” into allowing apps like Skype and Google Voice on its phones. True enough, but surely that is evidence of market forces creating openness, not the need for more mechanisms to enforce it.

The legitimate desire to prevent basic “discrimination” (e.g., Comcast blocking Twitter) is not enough to justify the broad restrictions that Mr. Wu advocates. Besides, enforcing the new rules would itself stifle innovation, create arbitrary distinctions and protect rival incumbents. Google’s bid for wireless spectrum and its Nexus One smartphone would certainly have crossed “separation” lines—as would Apple’s combination of access devices (the iPhone) and a content-distribution business (iTunes). Mr. Wu’s proposal would blunt the competitive pressure that Google and Apple apply to each other, as well as to Verizon Wireless, Microsoft, Nokia and just about everyone else. As Mr. Wu himself shows when tracing the history of earlier technology-based industries, the effort to regulate openness can often do more harm than good.

Mr. Philips is chief executive of Photon Group, an Australia-based communications company.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704462704575589621694834704.html

Taking on Google by Learning From Ants

Fifteenth- and 16th-century European explorers helped to transform cartography during the Age of Discovery. Rather than mapping newly discovered worlds, Blaise Agüera y Arcas is out to invent new ways of viewing the old ones.

Mr. Agüera y Arcas is the architect of Bing Maps, the online mapping service that is part of Microsoft Corp.’s Bing Internet search engine. Bing Maps does all the basics, like turn-by-turn directions and satellite views that offer a peek into the neighbor’s backyard, but Mr. Agüera y Arcas has attracted attention in the tech world by pushing the service to do a lot more.

Blaise Agüera y Arcas, in Bellevue, Wash.

He helped to cook up a technology that allows people to post high-resolution photocollages that explore the interiors of buildings. For New York’s Metropolitan Museum of Art, for example, 1,317 still images dissolve into each other, giving an online visitor the sensation of touring the Greek and Roman art wing. By dragging a mouse, the viewer can circle a marble statue of Aphrodite and zoom in on the exhibit’s sign to read that the statue, Venus Genetrix, was created between the first and second centuries A.D.

For a user who wants to check out a particular street, Mr. Agüera y Arcas has devised an elegant visual transition that provides the feel of skydiving to the ground. He says that these transitions will become even better over time. “I want this all to become cinematic,” he said.

Mr. Agüera y Arcas, 35, imagines these projects from a cramped office on the 22nd floor of a new high-rise in Bellevue, Wash., facing the jagged geography of the Cascade Range. One wall is covered in chalk notes and equations. He messily applied a coat of blackboard paint to the wall himself because he dislikes the odor of whiteboard markers.

“Technically, I don’t think I was supposed to do that,” said Mr. Agüera y Arcas. With short-cropped hair and a scruffy beard, he has the appearance of a graduate student.

When he’s brainstorming, Mr. Agüera y Arcas paces his office, talking to himself out loud. He said the process “doesn’t look very good,” but the self-dialogue is essential for working out new ideas. “First you try to beat something down and show why it’s a stupid idea,” he said. “Then you branch out. What’s the broadest range of solutions I can come up with? It’s all very dialectical. You kind of argue with yourself.”

He often shares “new pieces of vision,” as he calls these early-stage concepts, in presentations and documents that are distributed to other members of the team. Mr. Agüera y Arcas, who manages about 60 people, said the most stimulating meetings he has are “jam sessions,” in which people riff on each other’s ideas. “Without all of that input, I don’t think I would be doing interesting things on my own,” he said.

Prototypes, he said, are crucial. These include everything from crude bits of functional code to storyboard sketches. Mr. Agüera y Arcas demonstrated one such prototype: a short video, done with a designer, that shows a street-level map in which typography representing street names is upright and suspended off the ground so that it’s easier to see.

“Presenting an idea in the abstract as text or as something you talk about doesn’t have anything like the galvanizing effect on people or on yourself,” he said.

His most productive moments often occur outside the office, without the distraction of meetings. After he has dinner and puts his two young children to bed, Mr. Agüera y Arcas says he and his wife, a neuroscientist at the University of Washington, often sit side-by-side working on their laptops late into the night.

__________

Points of Interest

• Though Mr. Agüera y Arcas has assumed greater management responsibilities over the years, he still considers it vital to find time to develop projects on his own. “You see people who evolved in this way, and sometimes it looks like their brains died,” he said.

• He is a coffee connoisseur, fueling himself throughout the workday with several trips to a café downstairs in his building. Because he can’t always break away from the office, a gleaming chrome espresso maker and coffee grinder sit in the corner “for emergencies,” he said.

• He finds driving a car “deadening,” so he takes a bus to work from his home, reading or working on his laptop during the commute.

• When he was young, Mr. Agüera y Arcas dismantled things both animal and inanimate, from cameras to guinea pigs, so that he could see how they worked.

__________

He finds unlikely sources of inspiration. Mr. Agüera y Arcas once cobbled together software that automatically clustered related images on a photo-sharing site, with the goal of creating detailed 3-D reconstructions composed of pictures from many different photographers. The software was inspired by research he had read about how ant colonies form the most efficient pathways to food sources. He used the software to build a 3-D view of Cambodia’s Angkor Wat temple.
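
The ant-colony idea is easy to caricature in code. The sketch below is an invented illustration in TypeScript, not Mr. Agüera y Arcas’s software: simulated “ants” wander a graph of photos whose links carry image-similarity scores, each traversal deposits “pheromone” on the link used, all links slowly evaporate, and the trails that survive suggest clusters of related images.

    // A loose sketch of the ant-colony mechanic; all names, parameters and
    // similarity scores are invented for illustration.
    type Link = { from: number; to: number; similarity: number; pheromone: number };

    function antClusters(links: Link[], photoCount: number, walks = 200): Link[] {
      for (let w = 0; w < walks; w++) {
        // Drop an "ant" on a random photo and let it take a short walk.
        let here = Math.floor(Math.random() * photoCount);
        for (let step = 0; step < 5; step++) {
          const options = links.filter((l) => l.from === here || l.to === here);
          if (options.length === 0) break;
          // Prefer links that are similar and already well travelled.
          const weights = options.map((l) => l.similarity * (1 + l.pheromone));
          const total = weights.reduce((a, b) => a + b, 0);
          let pick = Math.random() * total;
          let chosen = options[0];
          for (let i = 0; i < options.length; i++) {
            pick -= weights[i];
            if (pick <= 0) { chosen = options[i]; break; }
          }
          chosen.pheromone += 1; // reinforce the trail the ant actually used
          here = chosen.from === here ? chosen.to : chosen.from;
        }
        links.forEach((l) => (l.pheromone *= 0.95)); // trails evaporate over time
      }
      // Links with strong surviving trails mark clusters of related photos.
      return links.filter((l) => l.pheromone > 1);
    }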

Another time, he stumbled on a project inside Microsoft’s research group called WorldWide Telescope that offers access to telescope imagery of the universe over the Web. Now when Bing Maps users are viewing a location at street level, they can gaze up at the sky to see constellations appear overhead. (Microsoft is testing this and other features on an experimental version of its site before rolling them out to a wider audience.)

A marble statue of Aphrodite at New York’s Metropolitan Museum of Art can be viewed through an app on Bing Maps.

Mr. Agüera y Arcas draws from an eclectic set of skills and interests. The son of a Catalan father and an American mother who met on an Israeli kibbutz, he learned how to program computers during his childhood in Mexico City. As a teenager on a summer internship with a U.S. Navy research center in Bethesda, Md., he reprogrammed the guidance software for aircraft carriers to improve their stability at sea, which helped to reduce seasickness among sailors.

He studied physics, neuroscience and applied math at Princeton University but stopped short of completing his doctoral dissertation. Instead, he chose to apply his quantitative skills to his long fascination with the Early Modern period of history, devoting several years to analyzing the typography of Gutenberg Bibles from the 1450s using computers and digital cameras.

During his research—which cast doubt on Johannes Gutenberg’s role in creating a form of type-making commonly credited to him—he had to create software that was capable of displaying extremely high-resolution images of book pages on a computer screen. That technology inspired him to create a startup, Seadragon Software, that he sold to Microsoft in 2006; its technology is used in a Microsoft program that lets consumers interact with high-resolution images on Bing Maps.

Though his work has helped to build buzz for Bing Maps, Mr. Agüera y Arcas concedes that the site lags its big rival, Google Maps, in some areas. Google has photographed many more streets and roads than Microsoft has for its street-level views. He said that competition with Google is a stimulus for innovation in the maps category, but he avoids doing direct clones of new Google Map features.

“You can always be inspired, but the moment you start copying, you guarantee you will never get ahead,” he said.

Nick Wingfield, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704361504575552661462672160.html

Prize Descriptions

I visit Wikipedia every day. I study the evolving entries for Internet-specific entities like World of Warcraft, Call of Duty, Foursquare and Picasa, often savoring the lucid exposition that Wikipedia brings to technical subjects that might not be expected to inspire poetry and for which no vocabulary has yet been set.

Wikipedia is a perfectly serviceable guide to non-Internet life. But as a companion to the stuff that was born on the Internet, Wikipedia — itself an Internet artifact — will never be surpassed.

Every new symbolic order requires a taxonomist to make sense of it. When Renaissance paintings and drawings first became fashionable in the art market in the early 20th century, the primary task of critics like Bernard Berenson was to attribute them, classify them and create a taste for them. Art collectors had to be introduced to the dynamics of the paintings, the names of the painters and the differences among them. Without descriptions, attributions and analysis, Titian’s “Salomé With the Head of St. John the Baptist” is just a clump of data.

Wikipedia has become the world’s master catalogue raisonné for new clumps of data. Its legions of nameless authors are the Audubons, the Magellans, the Berensons of our time. This was made clear to me recently when I unknowingly quoted the work of Randy Dewberry, an anonymous contributor to Wikipedia, in a column on the video game Angry Birds. Dewberry’s prose hit a note rare in exposition anywhere: both efficient and impassioned. (“Players take control of a flock of birds that are attempting to retrieve their eggs from a group of evil pigs that have stolen them.”)

The passage described Angry Birds so perfectly that I assumed it came from the game’s developers. Who else could know the game so well? But as Dewberry subsequently explained to me in an e-mail, that’s not what happened. In fact, according to the entry’s history, the original description of Angry Birds was such egregious corporate shilling that Wikipedia planned to drop it. That’s when Dewberry, a Wikipedian and devoted gamer, introduced paragraphs so lively they made the pleasure of the game palpable. The entry remained.

Like many Wikipedians, Dewberry is modest to the point of self-effacement about his contributions to the site. Because entries are anonymous and collaborative, no author is tempted to showboat and, in the pursuit of literary glory, swerve from the aim of clarity and utility. “No one editor can lay absolute claim to any articles,” Dewberry told me. “While editors will acknowledge when a user puts a substantial amount of work into an article, it is not ‘their’ article.”

For more information on the house vibe around credit-claiming, Dewberry proposed I type “WP:OWN” into Wikipedia to read its policy about “ownership” of articles. My jaw dropped. The page is fascinating for anyone who has ever been part of a collaborative effort to create anything.

At the strenuously collectivist Wikipedia, it seems, “ownership” of an article — what in legacy media is called “authorship” — is strictly forbidden. But it’s more than that: even doing jerky things that Wikipedia calls “ownership behavior” — subtle ways of acting proprietary about entries — is prohibited. As an example of the kind of attitude one editor is forbidden to cop toward another, Wikipedia cites this: “I have made some small amendments to your changes. You might notice that my tweaking of your wording has, in effect, reverted the article back to what it was before, but do not feel disheartened. Please feel free to make any other changes to my article if you ever think of anything worthwhile. Toodles! :)”

The magazine business could have used some guidelines about this all-too-familiar kind of authorship jockeying decades ago.

Wikipedia is vitally important to the culture. Digital artifacts like video games are our answer to the album covers and romance novels, the saxophone solos and cigarette cases, that previously defined culture. Today an “object” that gives meaning might be an e-book. An MP3. A Flash animation. An HTML5 animation. A video, an e-mail, a text message, a blog. A Tumblr blog. A Foursquare badge. Around these artifacts we now form our identities.

Take another such artifact: the video game Halo. The entry on Wikipedia for Halo: Combat Evolved, which Wikipedia’s editors have chosen as a model for the video-game-entry form, keeps its explanations untechnical. Halo, according to the article, is firmly in the tradition of games about shooting things, “focusing on combat in a 3D environment and taking place almost entirely from a character’s eye view.” But not always: “The game switches to the third-person perspective during vehicle use for pilots and mounted gun operators; passengers maintain a first-person view.” At last, Halo: I understand you!

At first blush the work of composing these anonymous descriptions may seem servile. Hundreds of thousands of unnamed Wikipedia editors have made a hobby of perfecting the descriptions of objects whose sales don’t enrich them. But their pleasure in the always-evolving master document comes through clearly in Wikipedia itself. The nameless authors tell the digital world what its components are, and thereby create it.

MINE! MINE!
With authorship disputes, Wikipedia advises, “stay calm, assume good faith and remain civil.” The revolutionary policy outlined on “Wikipedia: Ownership of Articles” — search Wikipedia or Google for it — is stunningly thorough.

SINGLE-SHOOTER POETS
For the best-written articles on video games, search Wikipedia for WP:VG/FA. These are all featured articles, and as Wikipedia notes, they have “the status which all articles should eventually achieve.”

TWO CENTS OF DATA
It’s time to contribute to Wikipedia — even if you just want to make a small correction to the Calvin Coolidge, “Krapp’s Last Tape” or Bettie Serveert entries. Join the project by following links from Wikipedia’s homepage, and then read WP:YFA, Wikipedia’s page on creating your first article.

Virginia Heffernan, New York Times

__________

Full article and photos: http://www.nytimes.com/2010/11/07/magazine/07FOB-medium-t.html?ref=magazine

True to type

YOU’RE sick of Helvetica, aren’t you? That show-off changed its birth name, Neue Haas Grotesk, had plastic surgery in the 1980s to get thinner (and fatter), and even has its own movie. Helvetica and its online type brethren Arial, Georgia, Times and Verdana appear on billions of Web pages. You’re sick of these other faces, too, even if you don’t know them by name.

No one questions the on-screen aesthetics of the fonts; Georgia and Verdana were designed specifically for computer use by 2010 MacArthur Foundation grant recipient Matthew Carter, one of the greatest modern type designers. The others have varying pedigrees, and work fine in pixels. They aren’t Brush Script and Marker Felt, for heaven’s sake. But those faces dominate the web’s fontscape purely because of licensing. Most or all of the faces are pre-installed in Mac OS X, Windows, and several mobile operating systems. Their overuse imposes a homogeneity that no graphic designer—or informed reader—would ever tolerate in print. Those not educated in type’s arcana can be forgiven for not caring at a conscious level, even as the lack of differentiation pricks at the back of their optic nerves.

That’s about to change. An entente has formed in a cold war lasting over a decade between type foundries that create and license typefaces for use, and browser makers that want to allow web designers the freedom of selection available for print. The testiness between the two camps arose as a result of piracy and intellectual-property protection concerns. Foundries don’t want their valuable designs easily downloaded and copied, which was possible in one iteration of web font inclusion. For a time, foundries looked to digital rights management (DRM) to encrypt and protect use. Microsoft built such a system in 1998 for Internet Explorer 4. Simon Daniels, the company’s typography supremo, says that even with its browser’s giant market share at the time, it wasn’t very widely used.

Such protection is complicated, and requires an infrastructure and agreements that often prevent use across systems. It also has precious little effect in deterring piracy. DRM may actually push potential buyers into pirates’ arms, out of a desire for simplicity and portability rather than an unwillingness to pay. Apple once sold only protected music that would play in its iTunes software and on its iPods, iPhones and iPads. The music industry tried to break Apple’s hegemony over digital downloads by removing DRM, which in turn allowed song files to be played on any device. That had some effect, but probably not enough. The industry is now moving towards streaming, where a recurring monthly fee or viewing advertisements unlocks audio from central servers on demand. Fonts may follow a similar path. Foundries have accepted a compromise that removes protection in exchange for a warning label and a kind of on-demand font streaming from central depositories.

This compromise, the WOFF (Web Open Font Format), was thrashed out by an employee of Mozilla, the group behind Firefox, and members of two type houses. It’s a mercifully brief technical document that defuses political and financial issues. WOFF allows designers to package fonts using either of the two major desktop formats—themselves remnants of font wars of yore—in a way approved by all major and most minor foundries. It doesn’t protect the typefaces with encryption, but with a girdle of ownership defined in clear text. Future versions of browsers from the three groups will add full WOFF support. Apple’s Safari and its underlying WebKit rendering engine, used for nearly all mobile operating systems’ browsers, will adopt WOFF, as will Google Chrome and its variants. WOFF was proposed in October 2009, presented to the World Wide Web Consortium (W3C) in April 2010 by Microsoft, the Mozilla Foundation and Opera Software, and adopted as a draft in July, remarkably quickly for such an about-face.

At the annual meeting of the typoscenti at the Association Typographique Internationale (ATypI) last month in Dublin, all the web font talk was about WOFF and moving forward to offer more faces, services and integration, says John Berry, the president of ATypI, and part of Mr Daniels’ typography group at Microsoft. “The floodgates have opened,” says Mr Berry. “All the font foundries and many of the designers are offering their fonts or subsets of their fonts.” Several sites now offer a subscription-based combination of font licensing and simple JavaScript code to insert on web pages to ensure that a specified type loads on browsers—even older ones still in use. Online font services include TypeKit, Webtype, and Monotype’s Fonts.com, to name but a few. Designers don’t load the faces on their own websites, but stream them as small packages, cached by browsers, from the licence owner’s servers.
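
The mechanics are straightforward. The fragment below is a minimal sketch in TypeScript, not any service’s actual loader code, and the URL and family name are invented: a script registers a WOFF face with a CSS @font-face rule, and the browser fetches the packaged font from the licence owner’s server and caches it.

    // A minimal sketch of web-font loading; the URL and family name are
    // hypothetical, and real font services wrap this in their own scripts.
    const fontFaceRule = `
      @font-face {
        font-family: "ExampleSerif"; /* hypothetical licensed face */
        src: url("https://fonts.example.com/exampleserif.woff") format("woff");
      }
    `;

    // Inject the rule; the browser fetches the WOFF package on first use
    // and caches it like any other web resource.
    const style = document.createElement("style");
    style.textContent = fontFaceRule;
    document.head.appendChild(style);

    // Text set in the family renders in the licensed face once it arrives,
    // falling back to a pre-installed font in the meantime.
    document.body.style.fontFamily = '"ExampleSerif", Georgia, serif';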

The long-term effect of the campaign for real type will be a gradual branding of sites, whether they are created by talented individuals or by multi-billion-dollar corporations, or built from the templates used on blogging and other platforms. Just as a regular reader of the print edition of this newspaper can recognise it in a flash across a room, so, too, will an online edition have the pizazz (or lack thereof) of a print publication. Mr Berry notes,

It’s most obvious in display type and headlines and things, but it’s going to make a huge difference just in reading and text, to have something besides Arial, Verdana, and Georgia. It will make real web publications possible that you want to read, as opposed to a poor substitute.

Expect an equivalent of the Cambrian explosion in typography. And Cambria—another dedicated computer font—won’t be the only new face in town.

__________

Full article and photo: http://www.economist.com/blogs/babbage/2010/10/web_fonts_will_flourish

Kant on a Kindle?

The technology of the book—sheaves of paper covered in squiggles of ink—has remained virtually unchanged since Gutenberg. This is largely a testament to the effectiveness of books as a means of transmitting and storing information. Paper is cheap, and ink endures.

In recent years, however, the act of reading has undergone a rapid transformation, as devices such as the Kindle and iPad account for a growing share of book sales. (Amazon, for instance, now sells more e-books than hardcovers.) Before long, we will do most of our reading on screens—lovely, luminous screens.

The displays are one of the main selling points of these new literary gadgets. Thanks to dramatic improvements in screen resolution, the words shimmer on the glass; every letter is precisely defined, with fully adjustable fonts. Think of it as a beautifully printed book that’s always available in perfect light. For contrast and clarity, it’s hard for Gutenberg to compete.

And these reading screens are bound to get better. One of the longstanding trends of modern technology is to make it easier and easier to perceive fine-grained content. The number of pixels in televisions has increased fivefold in the last 10 years, VHS gave rise to the Blu-ray, and computer monitors can display millions of vibrant colors.

I would be the last to complain about such improvements—I shudder to imagine a world without sports on HDTV—but it’s worth considering the ways in which these new reading technologies may change the nature of reading and, ultimately, the content of our books.

Let’s begin by looking at how reading happens in the brain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped to demonstrate that the literate brain contains two distinct pathways for making sense of words, each activated in different contexts. One pathway, known as the ventral route, is direct and efficient: We see a group of letters, convert those letters into a word and then directly grasp the word’s meaning. When you’re reading a straightforward sentence in a clear format, you’re almost certainly relying on this neural highway. As a result, the act of reading seems effortless. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The brain’s second reading pathway, the dorsal stream, is turned on when we have to pay conscious attention to a sentence. Perhaps we’ve encountered an obscure word or a patch of smudged ink. (In his experiments, Mr. Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Mr. Dehaene’s research demonstrates that even adults are still forced to occasionally decipher a text.

The lesson of his research is that the act of reading observes a gradient of awareness. Familiar sentences rendered on lucid e-ink screens are read quickly and effortlessly. Unusual sentences with complex clauses and odd punctuation tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra cognitive work wakes us up; we read more slowly, but we notice more. Psychologists call this the “levels-of-processing” effect, since sentences that require extra levels of analysis are more likely to get remembered.

E-readers have yet to dramatically alter the reading experience; e-ink still feels a lot like old-fashioned ink. But it seems inevitable that the same trends that have transformed our televisions will also affect our reading gadgets. And this is where the problems begin. Do we really want reading to be as effortless as possible? The neuroscience of literacy suggests that, sometimes, the best way to make sense of a difficult text is to read it in a difficult format, to force our brain to slow down and process each word. After all, reading isn’t about ease—it’s about understanding. If we’re going to read Kant on the Kindle, or Proust on the iPad, then we should at least experiment with an ugly font.

Every medium eventually influences the message that it carries. I worry that, before long, we’ll become so used to the mindless clarity of e-ink that the technology will feed back onto the content, making us less willing to endure challenging texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a thorny stretch of prose. And that would be a shame, because not every sentence should be easy to read.

Jonah Lehrer is the author, most recently, of “How We Decide.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703499604575512270255127444.html