Phonograph, CD, MP3—What’s Next?

The Beatles finally make it to iTunes.

“I am particularly glad to no longer be asked when the Beatles are coming to iTunes,” said Ringo Starr last week as the Fab Four’s record company finally agreed to have their music sold through digital downloads by Apple. This agreement could mark the beginning of the end of the digital dislocation of the music industry—the first industry to be completely disrupted by information-age technology.

Apple CEO Steve Jobs said, “It has been a long and winding road to get here,” understating the case. Members of the band and other rights-holders had long objected to Apple’s practice of selling songs separately from albums, disagreed with its pricing, and feared illegal file-sharing of the songs if they were ever available online.

All 17 of the Beatles’ albums were among the top 50 sellers on iTunes the day they were made available. Apple is even selling a virtual “box set” of the Beatles. The top single was “Here Comes the Sun,” appropriately enough, since the lyrics “it’s been a long cold lonely winter” summarize a music industry just emerging from the destruction element of creative destruction.

Music has been a test case for technology transitions before. In the 19th century, the sheet-music publishers of Tin Pan Alley dominated the industry but were disrupted by recorded sound when Thomas Edison invented the phonograph. This was in turn replaced by newer physical forms of recordings, from eight-track tapes to cassettes and CDs. In the Internet era, sales of albums—bundles of music—broke down as consumers downloaded just the songs they wanted, usually illegally.

The iTunes store, launched in 2003, popularized legal downloads. Streaming music online has also become popular. Today one quarter of recorded-music revenue comes from digital channels. This tells us that technology can reward both creators and consumers, even as traditional middlemen such as record companies get squeezed.

A Beatles song plays on an iPod.

The Beatles have been accused of being digitally backward, but last year the group targeted younger listeners by cooperating with a videogame maker on “The Beatles: Rock Band,” a game that lets people play along.

“We’ve made the Beatles music,” Paul McCartney told London’s Observer last year. “It’s a body of work. That’s it for us—it’s done. But then what happens is that somebody will come up with a suggestion,” like a video game.

Consumers get more choice through digital products and seem happy to pay for the convenience of downloads through iTunes, despite the availability of free music. Apple can charge more for a Beatles download than Amazon can charge for a CD, even though CDs are usually higher-quality and the songs can be transferred to devices such as iPods.

Several years ago the big legal battle featured music industry companies suing some 35,000 people who illegally downloaded songs. Piracy continues, but now the industry is instead looking for new revenue streams. Sean Parker, founder of the original downloading service, Napster, has advice for music companies. “The war on piracy is a failure,” he says. “Labels must offer services that consumers are willing to pay for, focusing on convenience and accessibility.”

Some musicians still hold out against digital downloads. Country star Kid Rock explained to Billboard magazine recently why he stays off iTunes. “I have trouble with the way iTunes says everybody’s music is worth the same price. I don’t think that’s right. There’s music out there that’s not worth a penny. They should be giving it away, or they should be making the artist pay people to listen to it.”

Still, there are encouraging signs that creators and distributors are coming together. Artists often skip the music industry altogether by using new technology to make songs cheaply, then market them on the Web. For many musicians, the real money comes from concerts and merchandising. For bands that appeal to older audiences, such as the Beatles, CD sales remain brisk.

For music and many content-based industries, the shift to the Information Age from the Industrial Age is a shift to digital versions from older analog versions. The older forms don’t disappear altogether. Instead, traditional products find a more limited role alongside newer versions that take advantage of new technology to deliver different experiences to consumers. Sellers may lose scarcity value for their goods as digital tools make copying easy, but as iTunes has shown, convenience is also a service worth buying.

If the music industry can learn new tricks, there’s hope for all the other industries that are being transformed as technology continues to give consumers more choices. The best alternative for smart industries is to take the advice of the Beatles song “Let It Be”—make the most of technological progress, and recognize that certain things are beyond anyone’s control.

L. Gordon Crovitz, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704496104575627282994471928.html

Forget any ‘Right to Be Forgotten’

Don’t count on government to censor information about you online.

The stakes keep rising in the debate over online privacy. Last week, the Obama administration floated the idea of a privacy czar to regulate the Internet, and the European Union even concocted a new “right to be forgotten” online.

The proposed European legislation would give people the right, any time, to have all of their personal information deleted online. Regulators say that in an era of Facebook and Google, “People should have the ‘right to be forgotten’ when their data is no longer needed or they want their data to be deleted.” The proposal, which did not explain how this could be done in practice, includes potential criminal sanctions.

Privacy viewed in isolation looks more like a right than it does when seen in context. Any regulation to keep personal information confidential quickly runs up against other rights, such as free speech, and many privileges, from free Web search to free email.

There are real trade-offs between privacy and speech. Consider the case of German murderer Wolfgang Werle, who does not think his name should be used. In 1990, he and his half-brother killed German actor Walter Sedlmayr. They spent 15 years in jail. German law protects criminals who have served their time, shielding them even from references to their crimes.

Last year, Werle’s lawyers sent a cease-and-desist letter to Wikipedia, citing German law, demanding the online encyclopedia remove the names of the murderers. They even asked for compensation for emotional harm, saying, “His rehabilitation and his future life outside the prison system is severely impacted by your unwillingness to anonymize any articles dealing with the murder of Mr. Sedlmayr with regard to our client’s involvement.”

Censorship requires government limits on speech, at odds with the open ethos of the Web. It’s also not clear how a right to be forgotten could be enforced. If someone writes facts about himself on Facebook that he later regrets, do we really want the government punishing those who use the information?

UCLA law Prof. Eugene Volokh has explained why speech and privacy are often at odds. “The difficulty is that the right to information privacy—the right to control other people’s communication of personally identifiable information about you—is a right to have the government stop people from speaking about you,” he wrote in a law review article in 2000.

Indeed, there’s a good argument that “a ‘right to be forgotten’ is not really a ‘privacy’ right in the first place,” says Adam Thierer, president of the Progress and Freedom Foundation. “A privacy right should only concern information that is actually private. What a ‘right to be forgotten’ does is try to take information that is, by default, public information, and pretend that it’s private.”

There are also concerns about how information is collected for advertising. A Wall Street Journal series, “What They Know,” has shown that many online companies don’t even know how much tracking software they use. Better disclosure would require better monitoring by websites. When used correctly, these systems benignly aggregate information about behavior online so that advertisers can target the right people with the right products.

Many people seem happy to make the trade-off in favor of sharing more about themselves in exchange for services and convenience. On Friday, when news broke of potential new regulations in the U.S., the Journal conducted an online poll asking, “Should the Obama administration appoint a watchdog for online privacy?” Some 85% of respondents said no.

As Brussels and Washington were busily proposing new regulations last week, two of the biggest companies were duking it out over consumer privacy, a new battlefield for competition. Google tried to stop Facebook from letting users automatically import their address and other contact details from their Gmail accounts, arguing that the social-networking site didn’t have a way for users to get the data out again.

When users tried to import their contacts to Facebook, a message from Gmail popped up saying, “Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won’t let you get it out?” The warning adds, “We think this is an important thing for you to know before you import your data there. Although we strongly disagree with this data protectionism, the choice is yours. Because, after all, you should have control over your data.”

One of the virtues of competitive markets is that companies vie for customers over everything from services to privacy protections. Regulators have no reason to dictate one right answer to these balancing acts among interests that consumers are fully capable of making for themselves.

L. Gordon Crovitz, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704658204575610771677242174.html

The Power To Control

Is Internet freedom threatened more by dominant companies or by the government’s efforts to hem them in?

In the early days of the radio industry, in the 1920s, almost anyone could become a broadcaster. There were few barriers to entry, basically just some cheap equipment to acquire. The bigger broadcasters soon realized that wealth creation depended on restricting market entry and limiting competition. Before long, regulation—especially the licensing of radio frequencies—transformed the open radio landscape into a “closed” oligopoly, with few players instead of many.

In “The Master Switch,” Tim Wu, a professor at Columbia University, argues that the Internet also risks becoming a closed system unless certain steps are taken. In his telling, information industries—including radio, television and telecommunications—begin as relatively open sectors of the economy but get co-opted by private interests, often abetted by the state. What starts as a hobby or a cottage industry ends up as a monopoly or cartel.

In such an environment, success often depends on snuffing out competitors before they become formidable. In Greek mythology, Kronos—ruler of the universe—was warned by an oracle that one of his children would dethrone him. Logically, he took pre-emptive action: Each time his wife gave birth, he seized the new child and ate it. Applied to corporate strategy, the “Kronos Effect” is the attempt by a dominant company to devour its challengers in their infancy.

In the late 19th century, Western Union, the telegraph company, tried to put AT&T out of business in the infancy of the telephone—by commissioning Thomas Edison to design a better phone and then rolling out tens of thousands of telephones to consumers, rendering AT&T a “bit player.” It was a sound Kronos strategy, but AT&T survived and eventually prospered over Western Union, thanks in part to aggressive patent litigation. Later AT&T, in its turn, applied the Kronos strategy to every upstart that challenged it.

Mr. Wu notes that, for most of the 20th century, AT&T operated the “most lucrative monopoly in history.” In the early 1980s, the U.S. government broke the monopoly up, but its longevity was the result of government regulation. In 1913, AT&T entered into the “Kingsbury Commitment” with the Justice Department. The deal was meant to increase competition by forcing AT&T, among other things, to allow independent operators to connect their local exchanges with AT&T’s long-distance lines. But the agreement, by forestalling the break-up of AT&T, was really, Mr. Wu says, the “death knell” of both “openness and competition.”

In the past, then, even arrangements aimed at maximizing competition have ended up entrenching the dominant player. Some argue that the Internet will avoid this fate because it is “inherently open.” Mr. Wu isn’t so sure. In fact, he says, “with everything on one network, the potential power to control is so much greater.” He worries about major players dominating the Internet, stifling innovation and free speech.

Mr. Wu’s solution is to propose a “Separation Principle,” a form of industry self-regulation to be overseen by the Federal Communications Commission (though he concedes that there is an ever-present danger of regulatory capture—whereby the FCC or other agencies become excessively influenced by the businesses they are meant to be regulating). The key to competition in the information industry, Mr. Wu believes, is a complete independence among its three layers: content owners (e.g., a games developer); network infrastructure (e.g., a cable company or cellular-network owner); and tools of access (e.g., a mobile handset maker). Obviously vertical integration, where one company participates in more than one layer, would be prohibited. The biggest effect of such a rule would be to separate content and conduit: Comcast, the cable giant, would plainly not be allowed to complete its planned acquisition of NBC Universal, a content provider.

The process that Mr. Wu describes—of a few companies dominating the information industry and requiring regulatory intervention to tame them—plays down the disruptive effects of technology itself. In 1998, the Justice Department launched an antitrust action against Microsoft, partly to prevent it from using Windows, its operating system, to control the Web. But it was innovation by competitors that put paid to Microsoft’s potential dominance. A decade ago, AOL (when it was still called America Online) seemed poised to dominate cyberspace. Then broadband came along and AOL, a glorified dial-up service provider, quickly became an also-ran.

Similarly, mobile carriers like AT&T Wireless long enjoyed near-complete control over mobile applications—until Apple’s iPhone arrived. The App Store decimated that control and unleashed a wave of mobile innovation. Mr. Wu notes that Apple, which at first forbade some competing applications, was “shamed” into allowing apps like Skype and Google Voice on its phones. True enough, but surely that is evidence of market forces creating openness, not the need for more mechanisms to enforce it.

The legitimate desire to prevent basic “discrimination” (e.g., Comcast blocking Twitter) is not enough to justify the broad restrictions that Mr. Wu advocates. Besides, enforcing the new rules would itself stifle innovation, create arbitrary distinctions and protect rival incumbents. Google’s bid for wireless spectrum and its Nexus One smartphone would certainly have crossed “separation” lines—as would Apple’s combination of access devices (the iPhone) and a content-distribution business (iTunes). Mr. Wu’s proposal would blunt the competitive pressure that Google and Apple apply to each other, as well as to Verizon Wireless, Microsoft, Nokia and just about everyone else. As Mr. Wu himself shows when tracing the history of earlier technology-based industries, the effort to regulate openness can often do more harm than good.

Mr. Philips is chief executive of Photon Group, an Australia-based communications company.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704462704575589621694834704.html

Taking on Google by Learning From Ants

Fifteenth- and 16th-century European explorers helped to transform cartography during the Age of Discovery. Rather than mapping newly discovered worlds, Blaise Agüera y Arcas is out to invent new ways of viewing the old ones.

Mr. Agüera y Arcas is the architect of Bing Maps, the online mapping service that is part of Microsoft Corp.’s Bing Internet search engine. Bing Maps does all the basics, like turn-by-turn directions and satellite views that offer a peek into the neighbor’s backyard, but Mr. Agüera y Arcas has attracted attention in the tech world by pushing the service to do a lot more.

Blaise Agüera y Arcas, in Bellevue, Wash.

He helped to cook up a technology that allows people to post high-resolution photocollages that explore the interiors of buildings. For New York’s Metropolitan Museum of Art, for example, 1,317 still images dissolve into each other, giving an online visitor the sensation of touring the Greek and Roman art wing. By dragging a mouse, the viewer can circle a marble statue of Aphrodite and zoom in on the exhibit’s sign to read that the statue, Venus Genetrix, was created between the first and second centuries A.D.

For a user who wants to check out a particular street, Mr. Agüera y Arcas has devised an elegant visual transition that provides the feel of skydiving to the ground. He says that these transitions will become even better over time. “I want this all to become cinematic,” he said.

Mr. Agüera y Arcas, 35, imagines these projects from a cramped office on the 22nd floor of a new high-rise in Bellevue, Wash., facing the jagged geography of the Cascade Range. One wall is covered in chalk notes and equations. He messily applied a coat of blackboard paint to the wall himself because he dislikes the odor of whiteboard markers.

“Technically, I don’t think I was supposed to do that,” said Mr. Agüera y Arcas. With short-cropped hair and a scruffy beard, he has the appearance of a graduate student.

When he’s brainstorming, Mr. Agüera y Arcas paces his office, talking to himself out loud. He said the process “doesn’t look very good,” but the self-dialogue is essential for working out new ideas. “First you try to beat something down and show why it’s a stupid idea,” he said. “Then you branch out. What’s the broadest range of solutions I can come up with? It’s all very dialectical. You kind of argue with yourself.”

He often shares “new pieces of vision,” as he calls these early-stage concepts, in presentations and documents that are distributed to other members of the team. Mr. Agüera y Arcas, who manages about 60 people, said the most stimulating meetings he has are “jam sessions,” in which people riff on each other’s ideas. “Without all of that input, I don’t think I would be doing interesting things on my own,” he said.

Prototypes, he said, are crucial. These include everything from crude bits of functional code to storyboard sketches. Mr. Agüera y Arcas demonstrated one such prototype: a short video, done with a designer, that shows a street-level map in which typography representing street names is upright and suspended off the ground so that it’s easier to see.

“Presenting an idea in the abstract as text or as something you talk about doesn’t have anything like the galvanizing effect on people or on yourself,” he said.

His most productive moments often occur outside the office, without the distraction of meetings. After he has dinner and puts his two young children to bed, Mr. Agüera y Arcas says he and his wife, a neuroscientist at the University of Washington, often sit side-by-side working on their laptops late into the night.

__________

Points of Interest

• Though Mr. Agüera y Arcas has assumed greater management responsibilities over the years, he still considers it vital to find time to develop projects on his own. “You see people who evolved in this way, and sometimes it looks like their brains died,” he said.

• He is a coffee connoisseur, fueling himself throughout the workday with several trips to a café downstairs in his building. Because he can’t always break away from the office, a gleaming chrome espresso maker and coffee grinder sit in the corner “for emergencies,” he said.

• He finds driving a car “deadening,” so he takes a bus to work from his home, reading or working on his laptop during the commute.

• When he was young, Mr. Agüera y Arcas dismantled things both animal and inanimate, from cameras to guinea pigs, so that he could see how they worked.

__________

He finds unlikely sources of inspiration. Mr. Agüera y Arcas once cobbled together software that automatically clustered together related images on a photo-sharing site, with the goal of creating detailed 3-D reconstructions composed of pictures from many different photographers. The software was inspired by research he had read about how ant colonies form the most efficient pathways to food sources. He used the software to build a 3-D view of Cambodia’s Angkor Wat temple.

Another time, he stumbled on a project inside Microsoft’s research group called WorldWide Telescope that offers access to telescope imagery of the universe over the Web. Now when Bing Maps users are viewing a location at street level, they can gaze up at the sky to see constellations appear overhead. (Microsoft is testing this and other features on an experimental version of its site before rolling them out to a wider audience.)

A marble statue of Aphrodite at New York’s Metropolitan Museum of Art can be viewed through an app on Bing Maps.

Mr. Agüera y Arcas draws from an eclectic set of skills and interests. The son of a Catalan father and an American mother who met on an Israeli kibbutz, he learned how to program computers during his childhood in Mexico City. As a teenager on a summer internship with a U.S. Navy research center in Bethesda, Md., he reprogrammed the guidance software for aircraft carriers to improve their stability at sea, which helped to reduce seasickness among sailors.

He studied physics, neuroscience and applied math at Princeton University but stopped short of completing his doctoral dissertation. Instead, he chose to apply his quantitative skills to his long fascination with the Early Modern period of history, devoting several years to analyzing the typography of Gutenberg Bibles from the 1450s using computers and digital cameras.

During his research—which cast doubt on Johannes Gutenberg’s role in creating a form of type-making commonly credited to him—he had to create software that was capable of displaying extremely high-resolution images of book pages on a computer screen. That technology inspired him to create a startup, Seadragon Software, that he sold to Microsoft in 2006; its technology is used in a Microsoft program that lets consumers interact with high-resolution images on Bing Maps.

Though his work has helped to build buzz for Bing Maps, Mr. Agüera y Arcas concedes that the site lags its big rival, Google Maps, in some areas. Google has photographed many more streets and roads than Microsoft has for its street-level views. He said that competition with Google is a stimulus for innovation in the maps category, but he avoids doing direct clones of new Google Map features.

“You can always be inspired, but the moment you start copying, you guarantee you will never get ahead,” he said.

Nick Wingfield, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704361504575552661462672160.html

Prize Descriptions

I visit Wikipedia every day. I study the evolving entries for Internet-specific entities like World of Warcraft, Call of Duty, Foursquare and Picasa, often savoring the lucid exposition that Wikipedia brings to technical subjects that might not be expected to inspire poetry and for which no vocabulary has yet been set.

Wikipedia is a perfectly serviceable guide to non-Internet life. But as a companion to the stuff that was born on the Internet, Wikipedia — itself an Internet artifact — will never be surpassed.

Every new symbolic order requires a taxonomist to make sense of it. When Renaissance paintings and drawings first became fashionable in the art market in the early 20th century, the primary task of critics like Bernard Berenson was to attribute them, classify them and create a taste for them. Art collectors had to be introduced to the dynamics of the paintings, the names of the painters and the differences among them. Without descriptions, attributions and analysis, Titian’s “Salomé With the Head of St. John the Baptist” is just a clump of data.

Wikipedia has become the world’s master catalogue raisonné for new clumps of data. Its legions of nameless authors are the Audubons, the Magellans, the Berensons of our time. This was made clear to me recently when I unknowingly quoted the work of Randy Dewberry, an anonymous contributor to Wikipedia, in a column on the video game Angry Birds. Dewberry’s prose hit a note rare in exposition anywhere: both efficient and impassioned. (“Players take control of a flock of birds that are attempting to retrieve their eggs from a group of evil pigs that have stolen them.”)

The passage described Angry Birds so perfectly that I assumed it came from the game’s developers. Who else could know the game so well? But as Dewberry subsequently explained to me in an e-mail, that’s not what happened. In fact, according to the entry’s history, the original description of Angry Birds was such egregious corporate shilling that Wikipedia planned to drop it. That’s when Dewberry, a Wikipedian and devoted gamer, introduced paragraphs so lively they made the pleasure of the game palpable. The entry remained.

Like many Wikipedians, Dewberry is modest to the point of self-effacement about his contributions to the site. Because entries are anonymous and collaborative, no author is tempted to showboat and, in the pursuit of literary glory, swerve from the aim of clarity and utility. “No one editor can lay absolute claim to any articles,” Dewberry told me. “While editors will acknowledge when a user puts a substantial amount of work into an article, it is not ‘their’ article.”

For more information on the house vibe around credit-claiming, Dewberry proposed I type “WP:OWN” into Wikipedia to read its policy about “ownership” of articles. My jaw dropped. The page is fascinating for anyone who has ever been part of a collaborative effort to create anything.

At the strenuously collectivist Wikipedia, it seems, “ownership” of an article — what in legacy media is called “authorship” — is strictly forbidden. But it’s more than that: even doing jerky things that Wikipedia calls “ownership behavior” — subtle ways of acting proprietary about entries — is prohibited. As an example of the kind of attitude one editor is forbidden to cop toward another, Wikipedia cites this: “I have made some small amendments to your changes. You might notice that my tweaking of your wording has, in effect, reverted the article back to what it was before, but do not feel disheartened. Please feel free to make any other changes to my article if you ever think of anything worthwhile. Toodles! :)”

The magazine business could have used some guidelines about this all-too-familiar kind of authorship jockeying decades ago.

Wikipedia is vitally important to the culture. Digital artifacts like video games are our answer to the album covers and romance novels, the saxophone solos and cigarette cases, that previously defined culture. Today an “object” that gives meaning might be an e-book. An MP3. A Flash animation. An HTML5 animation. A video, an e-mail, a text message, a blog. A Tumblr blog. A Foursquare badge. Around these artifacts we now form our identities.

Take another such artifact: the video game Halo. The entry on Wikipedia for Halo: Combat Evolved, which Wikipedia’s editors have chosen as a model for the video-game-entry form, keeps its explanations untechnical. Halo, according to the article, is firmly in the tradition of games about shooting things, “focusing on combat in a 3D environment and taking place almost entirely from a character’s eye view.” But not always: “The game switches to the third-person perspective during vehicle use for pilots and mounted gun operators; passengers maintain a first-person view.” At last, Halo: I understand you!

At first blush the work of composing these anonymous descriptions may seem servile. Hundreds of thousands of unnamed Wikipedia editors have made a hobby of perfecting the descriptions of objects whose sales don’t enrich them. But their pleasure in the always-evolving master document comes through clearly in Wikipedia itself. The nameless authors tell the digital world what its components are, and thereby create it.

MINE! MINE!
With authorship disputes, Wikipedia advises, “stay calm, assume good faith and remain civil.” The revolutionary policy outlined on “Wikipedia: Ownership of Articles” — search Wikipedia or Google for it — is stunningly thorough.

SINGLE-SHOOTER POETS
For the best-written articles on video games, search Wikipedia for WP:VG/FA. These are all featured articles, and as Wikipedia notes, they have “the status which all articles should eventually achieve.”

TWO CENTS OF DATA
It’s time to contribute to Wikipedia — even if you just want to make a small correction to the Calvin Coolidge, “Krapp’s Last Tape” or Bettie Serveert entries. Join the project by following links from Wikipedia’s homepage, and then read WP:YFA, Wikipedia’s page on creating your first article.

Virginia Heffernan, New York Times

__________

Full article and photos: http://www.nytimes.com/2010/11/07/magazine/07FOB-medium-t.html?ref=magazine

True to type

YOU’RE sick of Helvetica, aren’t you? That show-off changed its birth name, Neue Haas Grotesk, had plastic surgery in the 1980s to get thinner (and fatter), and even has its own movie. Helvetica and its online type brethren Arial, Georgia, Times and Verdana appear on billions of Web pages. You’re sick of these other faces, too, even if you don’t know them by name.

No one questions the on-screen aesthetics of the fonts; Georgia and Verdana were designed specifically for computer use by 2010 MacArthur Foundation grant recipient Matthew Carter, one of the greatest modern type designers. The others have varying pedigrees, and work fine in pixels. They aren’t Brush Script and Marker Felt, for heaven’s sake. But those faces dominate the web’s fontscape purely because of licensing. Most or all of the faces are pre-installed in Mac OS X, Windows, and several mobile operating systems. Their overuse provides a homogeneity that no graphic designer—or informed reader—would ever tolerate in print. Those not educated in type’s arcana can be forgiven for not caring at a conscious level, even as the lack of differentiation pricks at the back of their optic nerves.

That’s about to change. An entente has formed in a cold war lasting over a decade between type foundries that create and license typefaces for use, and browser makers that want to allow web designers the freedom of selection available for print. The testiness between the two camps arose as a result of piracy and intellectual-property protection concerns. Foundries don’t want their valuable designs easily downloaded and copied, which was possible in one iteration of web font inclusion. For a time, foundries looked to digital rights management (DRM) to encrypt and protect use. Microsoft built such a system in 1998 for Internet Explorer 4. Simon Daniels, the company’s typography supremo, says that even with its browser’s giant market share at the time, it wasn’t very widely used.

Such protection is complicated, and requires an infrastructure and agreements that often prevent use across systems. It also has precious little effect in deterring piracy. DRM may actually push potential buyers into pirates’ arms, out of a desire for simplicity and portability rather than an unwillingness to pay. Apple once sold only protected music that would play in its iTunes software and on its iPods, iPhones and iPads. The music industry tried to break Apple’s hegemony over digital downloads by removing DRM, which in turn allowed song files to be played on any device. That had some effect, but probably not enough. The industry is now moving towards streaming, in which a recurring monthly fee or the viewing of advertisements unlocks audio from central servers on demand. Fonts may follow a similar path. Foundries have accepted a compromise that removes protection in exchange for a warning label and a kind of on-demand font streaming from central depositories.

This compromise, the WOFF (Web Open Font Format), was thrashed out by an employee of Mozilla, the group behind Firefox, and members of two type houses. It is a mercifully brief technical document that resolves political and financial issues as much as technical ones. WOFF allows designers to package fonts using either of the two major desktop formats—themselves remnants of font wars of yore—in a way approved by all major and most minor foundries. It doesn’t protect the typefaces with encryption, but with a girdle of ownership defined in clear text. WOFF was proposed in October 2009, presented to the World Wide Web Consortium (W3C) in April 2010 by Microsoft, the Mozilla Foundation and Opera Software, and adopted as a draft in July, remarkably quickly for such an about-face. Future versions of browsers from those three groups will add full WOFF support, as will Apple’s Safari and its underlying WebKit rendering engine, used by the browsers of nearly all mobile operating systems, along with Google Chrome and its variants.
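
From a designer’s point of view the packaging is invisible to readers: a stylesheet simply points at a WOFF file. A minimal sketch of the relevant CSS, assuming a hypothetical font name and server address (neither is taken from any real foundry or service):

```css
/* Declare a WOFF-packaged face. "Example Serif" and the URL are
   illustrative placeholders, not a real licensed font. */
@font-face {
  font-family: "Example Serif";
  src: url("https://fonts.example.com/example-serif.woff") format("woff");
}

/* Use it, listing fallbacks in case the download fails. */
body {
  font-family: "Example Serif", Georgia, serif;
}
```

The fallback list matters: a browser that cannot fetch or parse the WOFF file quietly drops back to the pre-installed faces whose ubiquity the article laments.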

At the annual meeting of the typoscenti at the Association Typographique Internationale (ATypI) last month in Dublin, all the web font talk was about WOFF and moving forward to offer more faces, services and integration, says John Berry, the president of ATypI, and part of Mr Daniels’ typography group at Microsoft. “The floodgates have opened,” says Mr Berry. “All the font foundries and many of the designers are offering their fonts or subsets of their fonts.” Several sites now offer a subscription-based combination of font licensing and simple JavaScript code to insert on web pages to ensure that a specified type loads on browsers—even older ones still in use. Online font services include TypeKit, Webtype, and Monotype’s Fonts.com, to name but a few. Designers don’t load the faces on their own websites, but stream them as small packages, cached by browsers, from the licence owner’s servers.

The long-term effect of the campaign for real type will be a gradual branding of sites, whether they are created by talented individuals or multi-billion-dollar corporations, or simply based on the template choices offered by blogging and other platforms. Just as a regular reader of the print edition of this newspaper can recognise it in a flash across a room, so, too, will an online edition have the pizazz (or lack thereof) of a print publication. Mr Berry notes:

It’s most obvious in display type and headlines and things, but it’s going to make a huge difference just in reading and text, to have something besides Arial, Verdana, and Georgia. It will make real web publications possible that you want to read, as opposed to a poor substitute.

Expect an equivalent of the Cambrian explosion in typography. And Cambria—another dedicated computer font—won’t be the only new face in town.

__________
Full article and photo: http://www.economist.com/blogs/babbage/2010/10/web_fonts_will_flourish

Kant on a Kindle?

The technology of the book—sheaves of paper covered in squiggles of ink—has remained virtually unchanged since Gutenberg. This is largely a testament to the effectiveness of books as a means of transmitting and storing information. Paper is cheap, and ink endures.

In recent years, however, the act of reading has undergone a rapid transformation, as devices such as the Kindle and iPad account for a growing share of book sales. (Amazon, for instance, now sells more e-books than hardcovers.) Before long, we will do most of our reading on screens—lovely, luminous screens.

The displays are one of the main selling points of these new literary gadgets. Thanks to dramatic improvements in screen resolution, the words shimmer on the glass; every letter is precisely defined, with fully adjustable fonts. Think of it as a beautifully printed book that’s always available in perfect light. For contrast and clarity, it’s hard for Gutenberg to compete.

And these reading screens are bound to get better. One of the longstanding trends of modern technology is to make it easier and easier to perceive fine-grained content. The number of pixels in televisions has increased fivefold in the last 10 years, VHS gave way to Blu-ray, and computer monitors can display millions of vibrant colors.

I would be the last to complain about such improvements—I shudder to imagine a world without sports on HDTV—but it’s worth considering the ways in which these new reading technologies may change the nature of reading and, ultimately, the content of our books.

Let’s begin by looking at how reading happens in the brain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped to demonstrate that the literate brain contains two distinct pathways for making sense of words, each activated in different contexts. One pathway, known as the ventral route, is direct and efficient: We see a group of letters, convert those letters into a word and then directly grasp the word’s meaning. When you’re reading a straightforward sentence in a clear format, you’re almost certainly relying on this neural highway. As a result, the act of reading seems effortless. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The brain’s second reading pathway, the dorsal stream, is turned on when we have to pay conscious attention to a sentence. Perhaps we’ve encountered an obscure word or a patch of smudged ink. (In his experiments, Mr. Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Mr. Dehaene’s research demonstrates that even adults are still forced to occasionally decipher a text.

The lesson of his research is that the act of reading observes a gradient of awareness. Familiar sentences rendered on lucid e-ink screens are read quickly and effortlessly. Unusual sentences with complex clauses and odd punctuation tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra cognitive work wakes us up; we read more slowly, but we notice more. Psychologists call this the “levels-of-processing” effect, since sentences that require extra levels of analysis are more likely to get remembered.

E-readers have yet to dramatically alter the reading experience; e-ink still feels a lot like old-fashioned ink. But it seems inevitable that the same trends that have transformed our televisions will also affect our reading gadgets. And this is where the problems begin. Do we really want reading to be as effortless as possible? The neuroscience of literacy suggests that, sometimes, the best way to make sense of a difficult text is to read it in a difficult format, to force our brain to slow down and process each word. After all, reading isn’t about ease—it’s about understanding. If we’re going to read Kant on the Kindle, or Proust on the iPad, then we should at least experiment with an ugly font.

Every medium eventually influences the message that it carries. I worry that, before long, we’ll become so used to the mindless clarity of e-ink that the technology will feed back onto the content, making us less willing to endure challenging texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a thorny stretch of prose. And that would be a shame, because not every sentence should be easy to read.

Jonah Lehrer is the author, most recently, of “How We Decide.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703499604575512270255127444.html

The Pen That Never Forgets

In the spring, Cincia Dervishaj was struggling with a take-home math quiz. It was testing her knowledge of exponential notation — translating numbers like “3.87 x 10²” into a regular form. Dervishaj is a 13-year-old student at St. John’s Lutheran School in Staten Island, and like many students grappling with exponents, she got confused about where to place the decimal point. “I didn’t get them at all,” Dervishaj told me in June when I visited her math class, which was crowded with four-year-old Dell computers, plastic posters of geometry formulas and a big bowl of Lego bricks.
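
The notation she was wrestling with is mechanical once seen: the exponent says how many places to shift the decimal point. A quick sketch in Python; only 3.87 x 10² comes from the quiz in the story, the other values are made-up examples:

```python
# Scientific notation: the exponent moves the decimal point.
# 3.87 x 10^2 is written 3.87e2 in Python. Only that first value
# comes from the quiz in the story; the rest are illustrative.
for s in ["3.87e2", "1.2e3", "9.05e1"]:
    print(s, "=", float(s))
# 3.87e2 = 387.0  (the decimal point shifts two places to the right)
```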

To refresh her memory, Dervishaj pulled out her math notebook. But her class notes were not great: she had copied several sample problems but hadn’t written a clear explanation of how exponents work.

She didn’t need to. Dervishaj’s entire grade 7 math class has been outfitted with “smart pens” made by Livescribe, a start-up based in Oakland, Calif. The pens perform an interesting trick: when Dervishaj and her classmates write in their notebooks, the pen records audio of whatever is going on around it and links the audio to the handwritten words. If her written notes are inadequate, she can tap the pen on a sentence or word, and the pen plays what the teacher was saying at that precise point.

Dervishaj showed me how it works, flipping to her page of notes on exponents and tapping a set of numbers in the middle of the page. Out of a tiny speaker in the thick, cigar-shaped pen, I could hear her teacher, Brian Licata, explaining that precise problem. “It’s like having your own little personal teacher there, with you at all times,” Dervishaj said.

Having a pen that listens, the students told me, has changed the class in curious ways. Some found the pens make class less stressful; because they don’t need to worry about missing something, they feel freer to listen to what Licata says. When they do take notes, the pen alters their writing style: instead of verbatim snippets of Licata’s instructions, they can write “key words” — essentially little handwritten tags that let them quickly locate a crucial moment in the audio stream. Licata himself uses a Livescribe pen to provide the students with extra lessons. Sitting at home, he’ll draw out a complicated math problem while describing out loud how to solve it. Then he’ll upload the result to a class Web site. There his students will see Licata’s handwriting slowly fill the page while hearing his voice explaining what’s going on. If students have trouble remembering how to tackle that type of problem, these little videos — “pencasts” — are online 24 hours a day. All the students I spoke to said they watch them.

LIKE MOST PIECES of classroom technology, the pens cause plenty of digital-age hassles. They can crash. The software for loading students’ notes onto their computers or from there onto the Web can be finicky. And the pens work only with special notepaper that enables the pen to track where it’s writing; regular paper doesn’t work. (Most students buy notepads from Livescribe, though it’s possible to print the paper on a color printer.) There are also some unusual social side-effects. The presence of so many recording devices in the classroom creates a sort of panopticon — or panaudiocon, as it were. Dervishaj has found herself whispering to her seatmate, only to realize the pen was on, “so we’re like, whoa!” — their gossip has been recorded alongside her notes. Although you can pause a recording, there’s currently no way to selectively delete a few seconds of audio from the pen, so she’s forced to make a decision: Delete all the audio for that lesson, or keep it in and hope nobody else ever hears her private chatter. She usually deletes.

Nonetheless, Licata is a convert. As the students started working quietly on review problems, their pens making tiny “boop” noises as the students began or paused their recording, Licata pulled me aside to say the pens had “transformed” his class. Compact and bristling with energy, Licata is a self-professed geek; in his 10 years of teaching, he has seen plenty of classroom gadgets come and go, from Web-based collaboration software to pricey whiteboards that let children play with geometric figures the way they’d manipulate an iPhone screen. Most of these gewgaws don’t impress him. “Two or three times a year teachers whip out some new technology and use it, but it doesn’t do anything better and it’s never seen again,” he said.

But this time, he said, was different. This is because the pen is based on an age-old classroom technique that requires no learning curve: pen-and-paper writing. Livescribe first released the pen in 2008; Licata encountered it when a colleague brought his own to work. Intrigued, he persuaded Livescribe to donate 20 pens to the school to outfit his entire class. (The pens sell for around $129.) “I’ve made more gains with this class this year than I’ve made with any class,” he told me. In his evenings, Licata is pursuing a master’s degree in education; separately, he intends to study how the smart pens might affect the way students learn, write and think. “Two years ago I would have told you that note-taking is a lost art, that handwriting was a lost art,” he said. “But now I think handwriting is crucial.”

TAKING NOTES HAS long posed a challenge in education. Decades of research have found a strong correlation between good notes and good grades: the more detailed and accurate your notes, the better you do in school. That’s partly because the act of taking notes forces you to pay closer attention. But what’s more important, according to some researchers, is that good notes provide a record: most of the benefits from notes come not from taking them but from reviewing them, because no matter how closely we pay attention, we forget things soon after we leave class. “We have feeble memories,” says Ken Kiewra, a professor of educational psychology at the University of Nebraska and one of the world’s leading researchers into note-taking.

Yet most students are very bad at taking notes. Kiewra’s research has found that students record about a third of the critical information they hear in class. Why? Because note-taking is a surprisingly complex mental activity. It heavily taxes our “working memory” — the volume of information we can consciously hold in our heads and manipulate. Note-taking requires a student to listen to a teacher, pick out the most important points and summarize and record them, while trying not to lose the overall drift of the lecture. (The very best students do even more mental work: they blend what they’re hearing with material they already know and reframe the concepts in their own words.) Given how jam-packed this task is, “transcription fluency” matters: the less you have to think about the way you’re recording notes, the better. When you’re taking notes, you want to be as fast and as automatic as possible.

All note-taking methods have downsides. Handwriting is the most common and easiest, but a lecturer speaks at 150 to 200 words per minute, while even the speediest high-school students write no more than 40 words per minute. The more you struggle to keep up, the more you’re focusing on the act of writing, not the act of paying attention.

Typing can be much faster. A skilled typist can manage 60 words a minute or more. And notes typed into a computer have other advantages: they can be quickly searched (unlike regular handwritten notes) and backed up or shared online with other students. They’re also neater and thus easier to review. But they come with other problems, not least of which is that typing can’t capture the diagrammatic notes that classes in math, engineering or biology often require. What’s more, while personal computers and laptops may be common in college, that isn’t the case in cash-strapped high schools. Laptops in class also bring a host of distractions — from Facebook to Twitter — that teachers loathe. And students today are rarely taught touch typing; some note-taking studies have found that students can be even slower at typing than at handwriting.

One of the most complete ways to document what is said in class is to make an audio record: all 150-plus words a minute can be captured with no mental effort on the part of the student. Kiewra’s research has found that audio can have a powerful effect on learning. In a 1991 experiment, he had four groups of students listen to a lecture. One group was allowed to listen once, another twice, the third three times and the fourth was free to scroll back and forth through the recording at will, listening to whatever snippets the students wanted to review. Those who relistened were increasingly likely to write down crucial “secondary” ideas — concepts in a lecture that add nuance to the main points but that we tend to miss when we’re focused on writing down the core ideas. And the students who were able to move in and out of the audio stream performed as well as those who listened to the lecture three times in a row. (Students who recorded more secondary ideas also scored higher in a later quiz.) But as anyone who has tried to scroll back and forth through an audio file has discovered, reviewing audio is frustrating and clumsy. Audio may be richer in detail, but it is not, like writing and typescript, skimmable.

JIM MARGGRAFF, the 52-year-old inventor of the Livescribe pen, has a particular knack for blending audio and text. In the ’90s, appalled by Americans’ poor grasp of geography, he invented a globe that would speak the name of any city or country when you touched the location with a pen. In 1998, his firm was absorbed by Leapfrog, the educational-toy maker, where Marggraff invented toys that linked audio to paper. His first device, the LeapPad, was a book that would speak words and play other sounds whenever a child pointed a stylus at it. It quickly became Leapfrog’s biggest hit.

In 2001, Marggraff was browsing a copy of Wired magazine when he read an article about Anoto, a Swedish firm that patented a clever pen technology: it imprinted sheets of paper with tiny dots that a camera-equipped pen could use to track precisely where it was on any page. Several firms were licensing the technology to create pens that would record pen strokes, allowing users to keep digital copies of whatever they wrote on the patterned paper. But Marggraff had a different idea. If the pen recorded audio while it wrote, he figured, it would borrow the best parts from almost every style of note-taking. The audio record would help note-takers find details missing from their written notes, and the handwritten notes would serve as a guide to the audio record, letting users quickly dart to the words they wanted to rehear. Marggraff quit Leapfrog in 2005 to work on his new idea, and three years later he released the first Livescribe pen. He has sold close to 500,000 pens in the last two years, mostly to teachers, students and businesspeople.

I met Marggraff in his San Francisco office this summer. He and Andrew Van Schaack, a professor in the Peabody College of Education at Vanderbilt University and Livescribe’s science adviser, explained that the pen operated, in their view, as a supplement to your working memory. If you’re not worried about catching every last word, you can allocate more of your attention to processing what you’re hearing.

“I think people can be more confident in taking fewer notes, recognizing that they can go back if there’s something important that they need,” Van Schaack said. “As a teacher, I want to free up some cognitive ability. You know that little dial on there, your little brain tachometer? I want to drop off this one so I can use it on my thinking.” Marggraff told me Livescribe has surveyed its customers on how they use the pen. “A lot of adults say that it helps them with A.D.H.D.,” he said. “Students say: ‘It helps me improve my grades in specific classes. I can think and listen, rather than writing.’ They get more confident.”

Livescribe pens often inspire proselytizing among users. I spoke to students at several colleges and schools who insisted that the pen had improved their performance significantly; one swore it helped boost his G.P.A. to 3.9 from 3.5. Others said they had evolved highly personalized short notations — even pictograms — to make it easier to relocate important bits of audio. (Whenever his professor reeled off a long list of facts, one student would simply write “LIST” if he couldn’t keep up, then go back later to fill in the details after class.) A few students pointed to the handwriting recognition in Livescribe’s desktop software: once an individual user has transferred the contents of a pen to his or her computer, the software makes it possible to search that handwriting — so long as it’s reasonably legible — by keyword. That, students said, markedly sped up studying for tests, because they could rapidly find notes on specific topics. The pen can also load “apps”: for example, a user can draw an octave of a piano keyboard and play it (with the notes coming out of the pen’s speaker), or write a word in English and have the pen translate it into Spanish on the pen’s tiny L.E.D. display.

Still, it’s hard to know whether Marggraff’s rosiest ambitions are realistic. No one has yet published independent studies testing whether the Livescribe style of enhanced note-taking seriously improves educational performance. One of the only studies thus far is by Van Schaack himself. In the spring, he conducted an unpublished experiment in which he had 40 students watch a video of a 30-minute lecture on primatology. The students took notes with a Livescribe pen, and were also given an iPod with a recording of the lecture. Afterward, when asked to locate specific facts on both devices, the students were 2.5 times faster at retrieving the facts on the pen than on the iPod. It was, Van Schaack argues, evidence that the pen can make an audio stream genuinely accessible, potentially helping students tap into those important secondary ideas that we miss when we’re scrambling to write solely by hand.

Marggraff suspects the deeper impact of the pen may not be in taking notes when you’re listening to someone else, but when you’re alone — and thinking through a problem by yourself. For example, he said, a book can overwhelm a reader with thoughts. “You’re going to get ideas like crazy when you’re reading,” Marggraff says. “The issue is that it’s too slow to sit down and write them” — but if you don’t record them, you’ll usually forget them. So when Marggraff is reading a book at home or even on a plane, he’ll pull out his pen, hit record and start talking about what he’s thinking, while jotting down some keywords. Later on, when he listens to the notes, “it’s just astounding how relevant it is, and how much value it brings.” No matter how good his written notes are, audio includes many more flashes of insight — the difference between the 30 words per minute of his writing and the 150 words per minute of his speech, as it were.

Marggraff pulls out his laptop to show me notes he took while reading Malcolm Gladwell’s book “Outliers.” The notes are neat and legible, but the audio is even richer; when he taps on the middle of the note, I can hear his voice chattering away at high speed. When he listens to the notes, he’ll often get new ideas, so he’ll add notes, layering analysis on top of analysis.

“This is game-changing,” he says. “This is a dialogue with yourself.” He has used the technique to brainstorm patent ideas for hours at a time.

Similarly, in his class at St. John’s, Licata has found the pen is useful in capturing the students’ dialogues with themselves. For instance, he asks his students to talk to their pens while they do their take-home quizzes, recording their logic in audio. That way, if they go off the rails, Licata can click through the page to hear what, precisely, went wrong and why. “I’m actually able to follow their train of thought,” he says.

Some experts have doubts about Livescribe as a silver bullet. As Kiewra points out, plenty of technologies in the past have been hailed as salvations of education. “There’s been the radio, there’s been the phonograph, moving pictures, the VCR” — and, of course, the computer. But the average student’s note-taking ability remains as dismal as ever. Kiewra says he now believes the only way to seriously improve it is by painstakingly teaching students the core skills: how to listen for key concepts, how to review your notes and how to organize them to make meaning, teasing out interesting associations between bits of information. (As an example, he points out that students taking notes on the planets will learn lots of individual facts. But if they organize them into a chart, they’ll make discoveries on their own: sort the planets by distance from the sun and speed of rotation, and you’ll discover that the farther you go out, the more slowly they spin.) Kiewra also says that an effective way to get around the problem of incomplete and disorganized note-taking is for teachers to give out “partial” notes — handouts that summarize key concepts in the lecture but leave blanks that the students must fill in, forcing them to pay attention. Some studies have found that students using partial notes capture a majority of the main concepts in a lecture, more than doubling their usual performance.
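
Kiewra’s planet example is easy to make concrete. A sketch in Python, using rounded textbook values for distance from the sun and mean orbital speed (the figures are standard astronomical values supplied here for illustration, not from the article):

```python
# Kiewra's point: the same facts, organized into a chart, reveal a pattern.
# Distances (AU) and mean orbital speeds (km/s) are rounded standard values.
planets = [
    ("Mercury", 0.39, 47.4), ("Venus", 0.72, 35.0),
    ("Earth", 1.00, 29.8),   ("Mars", 1.52, 24.1),
    ("Jupiter", 5.20, 13.1), ("Saturn", 9.58, 9.7),
    ("Uranus", 19.2, 6.8),   ("Neptune", 30.1, 5.4),
]

# Sort by distance from the sun...
planets.sort(key=lambda p: p[1])
# ...and the speeds fall monotonically: the farther out, the slower.
speeds = [speed for _, _, speed in planets]
assert speeds == sorted(speeds, reverse=True)

for name, dist, speed in planets:
    print(f"{name:8} {dist:6.2f} AU  {speed:5.1f} km/s")
```

The discovery falls out of the organization, not the individual facts — exactly the associative work Kiewra argues note-takers should be taught to do.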

Indeed, many modern educators say that students shouldn’t be taking notes in class at all. If it’s true that note-taking taxes their working memory, they argue, then teachers should simply hand out complete sets of notes that reflect everything in the lecture — leaving students free to listen and reflect. After all, if the Internet has done anything, it has made it trivially easy for instructors to distribute materials.

“I don’t think anyone should be writing down what the teacher’s saying in class,” is the blunt assessment of Lisa Nielsen, author of a blog, “The Innovative Educator,” who also heads up a division of the New York City Department of Education devoted to finding uses for new digital tools in classrooms. “Teachers should be pulling in YouTube videos or lectures from experts around the world, piping in great people into their classrooms, and all those things can be captured online — on Facebook, on a blog, on a wiki or Web site — for students to be looking at later,” she says. “Now, should students be making meaning of what they’re hearing or coming up with questions? Yes. But they don’t need to write down everything the teacher’s said.” There is some social-science support for the no-note-taking view. In one experiment, Kiewra took several groups of students and subjected them to different note-taking situations: some attended a lecture and reviewed their own notes; others didn’t attend but were given a set of notes from the instructor. Those who heard the lecture and took notes scored 51 percent on a subsequent test, while those who only read the instructor’s notes scored 69 percent.

Of course, if Marggraff has his way, smart pens could become so common — and so much cheaper — that bad notes, or at least incomplete ones, will become a thing of the past. Indeed, if most pen-and-paper writing could be easily copied and swapped online, the impacts on education could be intriguing and widespread. Marggraff intends to release software that lets teachers print their students’ work on dot-patterned paper; students could do their assignment, e-mail it in, then receive a graded paper e-mailed back with handwritten and spoken feedback from the teacher. Students would most likely swap notes more often; perhaps an entire class could designate one really good note-taker and let him write while everyone else listens, sharing the notes online later. Marggraff even foresees textbooks in which students could make notes in the margins and have a permanent digital record of their written and spoken thoughts beside the text. “Now we really have bridged the paper and the digital worlds,” he adds. Perhaps the future of the pen is on the screen.

Clive Thompson, a contributing writer for the magazine, writes frequently about technology and science.

__________

Full article and photo: http://www.nytimes.com/2010/09/19/magazine/19Livescribe-t.html

A virtual counter-revolution

The internet has been a great unifier of people, companies and online networks. Powerful forces are threatening to balkanise it

A fragmenting virtual world

THE first internet boom, a decade and a half ago, resembled a religious movement. Omnipresent cyber-gurus, often framed by colourful PowerPoint presentations reminiscent of stained glass, prophesied a digital paradise in which not only would commerce be frictionless and growth exponential, but democracy would be direct and the nation-state would no longer exist. One, John Perry Barlow, even penned “A Declaration of the Independence of Cyberspace”.

Even though all this sounded Utopian when it was preached, it reflected online reality pretty accurately. The internet was a wide-open space, a new frontier. For the first time, anyone could communicate electronically with anyone else—globally and essentially free of charge. Anyone was able to create a website or an online shop, which could be reached from anywhere in the world using a simple piece of software called a browser, without asking anyone else for permission. The control of information, opinion and commerce by governments—or big companies, for that matter—indeed appeared to be a thing of the past. “You have no sovereignty where we gather,” Mr Barlow wrote.

The lofty discourse on “cyberspace” has long changed. Even the term now sounds passé. Today another overused celestial metaphor holds sway: the “cloud” is code for all kinds of digital services generated in warehouses packed with computers, called data centres, and distributed over the internet. Most of the talk, though, concerns more earthly matters: privacy, antitrust, Google’s woes in China, mobile applications, green information technology (IT). Only Apple’s latest iSomethings seem to inspire religious fervour, as they did again this week.

Again, this is a fair reflection of what is happening on the internet. Fifteen years after its first manifestation as a global, unifying network, it has entered its second phase: it appears to be balkanising, torn apart by three separate but related forces.

First, governments are increasingly reasserting their sovereignty. Recently several countries have demanded that their law-enforcement agencies have access to e-mails sent from BlackBerry smart-phones. This week India, which had threatened to cut off BlackBerry service at the end of August, granted RIM, the device’s maker, an extra two months while authorities consider the firm’s proposal to comply. However, it has also said that it is going after other communication-service providers, notably Google and Skype.

Second, big IT companies are building their own digital territories, where they set the rules and control or limit connections to other parts of the internet. Third, network owners would like to treat different types of traffic differently, in effect creating faster and slower lanes on the internet.

It is still too early to say that the internet has fragmented into “internets”, but there is a danger that it may splinter along geographical and commercial boundaries. (The picture above is a visual representation of the “nationality” of traffic on the internet, created by the University of California’s Co-operative Association for Internet Data Analysis: America is in pink, Britain in dark blue, Italy in pale blue, Sweden in green and unknown countries in white.) Just as it was not preordained that the internet would become one global network where the same rules applied to everyone, everywhere, it is not certain that it will stay that way, says Kevin Werbach, a professor at the Wharton School of the University of Pennsylvania.

To grasp why the internet might unravel, it is necessary to understand how, in the words of Mr Werbach, “it pulled itself together” in the first place. Even today, this seems like something of a miracle. In the physical world, most networks—railways, airlines, telephone systems—are collections of more or less connected islands. Before the internet and the world wide web came along, this balkanised model was also the norm online. For a long time, for instance, AOL and CompuServe would not even exchange e-mails.

Economists point to “network effects” to explain why the internet managed to supplant these proprietary services. Everybody had strong incentives to join: consumers, companies and, most important, the networks themselves (the internet is in fact a “network of networks”). The more the internet grew, the greater the benefits became. And its founding fathers created the basis for this virtuous circle by making it easy for networks to hook up and for individuals to get wired.

Yet economics alone do not explain why the internet rather than a proprietary service prevailed (as Microsoft did in software for personal computers, or PCs). One reason may be that the rapid rise of the internet, originally an obscure academic network funded by America’s Department of Defence, took everyone by surprise. “The internet was able to develop quietly and organically for years before it became widely known,” writes Jonathan Zittrain, a professor at Harvard University, in his 2008 book, “The Future of the Internet—And How To Stop It”. In other words, had telecoms firms, for instance, suspected how big it would become, they might have tried earlier to change its rules.

Whatever the cause, the open internet has been a boon for humanity. It has not only allowed companies and other organisations of all sorts to become more efficient, but enabled other forms of production, notably “open source” methods, in which groups of people, often volunteers, all over the world develop products, mostly pieces of software, collectively. Individuals have access to more information than ever, communicate more freely and form groups of like-minded people more easily.

Even more important, the internet is an open platform, rather than one built for a specific service, like the telephone network. Mr Zittrain calls it “generative”: people can tinker with it, creating new services and elbowing existing ones aside. Any young company can build a device or develop an application that connects to the internet, provided it follows certain, mostly technical conventions. In a more closed and controlled environment, an Amazon, a Facebook or a Google would probably never have blossomed as it did.

Forces of fragmentation

However, this very success has given rise to the forces that are now pulling the internet apart. The cracks are most visible along geographical boundaries. The internet is too important for governments to ignore. They are increasingly finding ways to enforce their laws in the digital realm. The most prominent is China’s “great firewall”. The Chinese authorities are using the same technology that companies use to stop employees accessing particular websites and online services. This is why Google at first decided to censor its Chinese search service: there was no other way to be widely accessible in the country.

But China is by no means the only country erecting borders in cyberspace. The Australian government plans to build a firewall to block material showing the sexual abuse of children and other criminal or offensive content. The OpenNet Initiative, an advocacy group, lists more than a dozen countries that block internet content for political, social and security reasons. They do not need especially clever technology: governments go increasingly after dominant online firms because they are easy to get hold of. In April Google published the numbers of requests it had received from official agencies to remove content or provide information about users. Brazil led both counts (see chart 1).

Not every request or barrier has a sinister motive. Australia’s firewall is a case in point, even if it is a clumsy way of enforcing the law. It would be another matter, however, if governments started tinkering with the internet’s address book, the Domain Name System (DNS). This allows the network to look up the computer on which a website lives. If a country started its own DNS, it could better control what people can see. Some fear this is precisely what China and others might do one day.
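The lookup the article describes — the DNS translating a website's name into the address of the computer hosting it — can be illustrated in a few lines of Python. The hostname below is an arbitrary example; a country running its own DNS could return a different answer for the same name, or none at all.

```python
import socket

# Ask the system's configured DNS to resolve a name to an IP address.
# A government-controlled resolver could answer differently, which is
# how a national DNS would let authorities control what people see.
hostname = "example.com"  # illustrative hostname
try:
    ip_address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip_address}")
except socket.gaierror:
    print(f"{hostname} could not be resolved")
```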

To confuse matters, the DNS is already splintering for a good reason. It was designed for the Latin alphabet, which was fine when most internet users came from the West. But because more and more netizens live in other parts of the world—China boasts 420m—last October the Internet Corporation for Assigned Names and Numbers, the body that oversees the DNS, allowed domain names entirely in other scripts. This makes things easier for people in, say, China, Japan or Russia, but marks another step towards the renationalisation of the internet.

Many media companies have already gone one step further. They use another part of the internet’s address system, the “IP numbers” that identify computers on the network, to block access to content if consumers are not in certain countries. Try viewing a television show on Hulu, a popular American video service, from Europe and it will tell you: “We’re sorry, currently our video library can only be streamed within the United States.” Similarly, Spotify, a popular European music-streaming service, cannot be reached from America.

Yet it is another kind of commercial attempt to carve up the internet that is causing more concern. Devotees of a unified cyberspace are worried that the online world will soon start looking as it did before the internet took over: a collection of more or less connected proprietary islands reminiscent of AOL and CompuServe. One of them could even become as dominant as Microsoft in PC software. “We’re heading into a war for control of the web,” Tim O’Reilly, an internet savant who heads O’Reilly Media, a publishing house, wrote late last year. “And in the end, it’s more than that, it’s a war against the web as an interoperable platform.”

The trend to more closed systems is undeniable. Take Facebook, the web’s biggest social network. The site is a fast-growing, semi-open platform with more than 500m registered users. Its American contingent spends on average more than six hours a month on the site and less than two on Google. Users have identities specific to Facebook and communicate mostly via internal messages. The firm has its own rules, covering, for instance, which third-party applications may run and how personal data are dealt with.

Apple is even more of a world apart. From its iPhone and iPad, people mostly get access to online services not through a conventional browser but via specialised applications available only from the company’s “App Store”. Granted, the store has lots of apps—about 250,000—but Apple nonetheless controls which ones make it onto its platform. It has used that power to keep out products it does not like, including things that can be construed as pornographic or that might interfere with its business, such as an app for Google’s telephone service. Apple’s press conference to show off its new wares on September 1st was streamed live over the internet but could be seen only on its own devices.

Even Google can be seen as a platform unto itself, if a very open one. The world’s biggest search engine now offers dozens of services, from news aggregation to word processing, all of which are tied together and run on a global network of dozens of huge data centres. Yet Google’s most important service is its online advertising platform, which serves most text-based ads on the web. Being the company’s main source of revenue, critics say, it is hardly a model of openness and transparency.

There is no conspiracy behind the emergence of these platforms. Firms are in business to make money. And such phenomena as social networks and online advertising exhibit strong network effects, meaning that a dominant market leader is likely to emerge. What is more, most users these days are not experts, but average consumers, who want secure, reliable products. To create a good experience on mobile devices, which more and more people will use to get onto the internet, hardware, software and services must be more tightly integrated than on PCs.

Net neutrality, or not?

Discussion of these proprietary platforms is only beginning. A lot of ink, however, has already been spilt on another form of balkanisation: in the plumbing of the internet. Most of this debate, particularly in America, is about “net neutrality”. This is one of the internet’s founding principles: that every packet of data, regardless of its contents, should be treated the same way, and the best effort should always be made to forward it.

Proponents of this principle want it to become law, out of concern that network owners will breach it if they can. Their nightmare is what Tim Wu, a professor at Columbia University, calls “the Tony Soprano vision of networking”, alluding to a television series about a mafia family. If operators were allowed to charge for better service, they could extort protection money from every website. Those not willing to pay for their data to be transmitted quickly would be left to crawl in the slow lane. “Allowing broadband carriers to control what people see and do online would fundamentally undermine the principles that have made the internet such a success,” said Vinton Cerf, one of the network’s founding fathers (who now works for Google), at a hearing in Congress.

Opponents of the enshrining of net neutrality in law—not just self-interested telecoms firms, but also experts like Dave Farber, another internet elder—argue that it would be counterproductive. Outlawing discrimination of any kind could discourage operators from investing to differentiate their networks. And given the rapid growth in file-sharing and video (see chart 2), operators may have good reason to manage data flows, lest other traffic be crowded out.

The issue is not as black and white as it seems. The internet has never been as neutral as some would have it. Network providers do not guarantee a certain quality of service, but merely promise to do their best. That may not matter for personal e-mails, but it does for time-sensitive data such as video. What is more, large internet firms like Amazon and Google have long redirected traffic onto private fast lanes that bypass the public internet to speed up access to their websites.

Whether such preferential treatment becomes more widespread, and even extortionary, will probably depend on the market and how it is regulated. It is telling that net neutrality has become far more politically controversial in America than it has elsewhere. This is a reflection of the relative lack of competition in America’s broadband market. In Europe and Japan, “open access” rules require network operators to lease parts of their networks to other firms on a wholesale basis, thus boosting competition. A study comparing broadband markets, published in 2009 by Harvard University’s Berkman Centre for Internet & Society, found that countries with such rules enjoy faster, cheaper broadband service than America, because the barrier to entry for new entrants is much lower. And if any access provider starts limiting what customers can do, they will defect to another.

America’s operators have long insisted that open-access requirements would destroy their incentive to build fast, new networks: why bother if you will be forced to share it? After intense lobbying, America’s telecoms regulators bought this argument. But the lesson from elsewhere in the industrialised world is that it is not true. The result, however, is that America has a small number of powerful network operators, prompting concern that they will abuse their power unless they are compelled, by a net-neutrality law, to treat all traffic equally. Rather than trying to mandate fairness in this way—net neutrality is very hard to define or enforce—it makes more sense to address the underlying problem: the lack of competition.

It should come as no surprise that the internet is being pulled apart on every level. “While technology can gravely wound governments, it rarely kills them,” Debora Spar, president of Barnard College at Columbia University, wrote several years ago in her book, “Ruling the Waves”. “This was all inevitable,” argues Chris Anderson, the editor of Wired, under the headline “The Web is Dead” in the September issue of the magazine. “A technology is invented, it spreads, a thousand flowers bloom, and then someone finds a way to own it, locking out others.”

Yet predictions are hazardous, particularly in IT. Governments may yet realise that a freer internet is good not just for their economies, but also for their societies. Consumers may decide that it is unwise to entrust all their secrets to a single online firm such as Facebook, and decamp to less insular alternatives, such as Diaspora.

Similarly, more open technology could also still prevail in the mobile industry. Android, Google’s smart-phone platform, which is less closed than Apple’s, is growing rapidly and gained more subscribers in America than the iPhone in the first half of this year. Intel and Nokia, the world’s biggest chipmaker and the biggest manufacturer of telephone handsets, are pushing an even more open platform called MeeGo. And as mobile devices and networks improve, a standards-based browser could become the dominant access software on the wireless internet as well.

Stuck in the slow lane

If, however, the internet continues to go the other way, this would be bad news. Should the network become a collection of proprietary islands accessed by devices controlled remotely by their vendors, the internet would lose much of its “generativity”, warns Harvard’s Mr Zittrain. Innovation would slow down and the next Amazon, Google or Facebook could simply be, well, Amazon, Google or Facebook.

The danger is not that these islands become physically separated, says Andrew Odlyzko, a professor at the University of Minnesota. There is just too much value in universal connectivity, he argues. “The real question is how high the walls between these walled gardens will be.” Still, if the internet loses too much of its universality, cautions Mr Werbach of the Wharton School, it may indeed fall apart, just as world trade can collapse if there is too much protectionism. Theory demonstrates that interconnected networks such as the internet can grow quickly, he explains—but also that they can dissolve quickly. “This looks rather unlikely today, but if it happens, it will be too late to do anything about it.”

__________

Full article and photos: http://www.economist.com/node/16941635

Ten Fallacies About Web Privacy

We are not used to the Internet reality that something can be known and at the same time no person knows it.

Privacy on the Web is a constant issue for public discussion—and Congress is always considering more regulations on the use of information about people’s habits, interests or preferences on the Internet. Unfortunately, these discussions lead to many misconceptions. Here are 10 of the most important:

1) Privacy is free. Many privacy advocates believe it is a free lunch—that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: The more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.

2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe they are borne entirely by firms. Yet consumers get tremendous benefits from the use of information.

Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services—all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.

3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers—who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes—for example, in credit rating—then the cost of credit for all consumers will decrease.

4) Information use is “all or nothing.” Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted.

For example, search engines can better target searches if they know what searchers are looking for. (Google’s “Did you mean . . .” to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.

5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are “known” about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.

6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps) then consumers unambiguously benefit.

7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don’t bother to learn about information use on the Web precisely because there is no harm from the way it is used.

8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.

9) Restricting the use of information (such as by mandating consumer “opt-in”) will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.

10) Targeted advertising leads people to buy stuff they don’t want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient—and we have all seen how that works.

Mr. Rubin teaches economics at Emory University.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704147804575455192488549362.html

New Law to Stop Companies from Checking Facebook Pages in Germany

Potential bosses will no longer be allowed to look at job applicants’ Facebook pages, if a new law comes into force in Germany.

Good news for jobseekers who like to brag about their drinking exploits on Facebook: A new law in Germany will stop bosses from checking out potential hires on social networking sites. They will, however, still be allowed to google applicants.

Lying about qualifications. Alcohol and drug use. Racist comments. These are just some of the reasons why potential bosses reject job applicants after looking at their Facebook profiles.

According to a 2009 survey commissioned by the website CareerBuilder, some 45 percent of employers use social networking sites to research job candidates. And some 35 percent of those employers had rejected candidates based on what they found there, such as inappropriate photos, insulting comments about previous employers or boasts about their drug use.

But those Facebook users hoping to apply for a job in Germany should pause for a moment before they hit the “deactivate account” button. The government has drafted a new law which will prevent employers from looking at a job applicant’s pages on social networking sites during the hiring process.

According to reports in the Monday editions of the Die Welt and Süddeutsche Zeitung newspapers, Interior Minister Thomas de Maizière has drafted a new law on data privacy for employees which will radically restrict the information bosses can legally collect. The draft law, which is the result of months of negotiations between the different parties in Germany’s coalition government, is set to be approved by the German cabinet on Wednesday, according to the Süddeutsche Zeitung.

Although the new law will reportedly prevent potential bosses from checking out a candidate’s Facebook page, it will allow them to look at sites that are expressly intended to help people sell themselves to future employers, such as the business-oriented social networking site LinkedIn. Information about the candidate that is generally available on the Internet is also fair game. In other words, employers are allowed to google potential hires. Companies may not be allowed to use information if it is too old or if the candidate has no control over it, however.

Toilets to Be Off-Limits

The draft legislation also covers the issue of companies spying on employees. According to Die Welt, the law will expressly forbid firms from video surveillance of workers in “personal” locations such as bathrooms, changing rooms and break rooms. Video cameras will only be permitted in certain places where they are justified, such as entrance areas, and staff will have to be made aware of their presence.

Similarly, companies will only be able to monitor employees’ telephone calls and e-mails under certain conditions, and firms will be obliged to inform their staff about such eavesdropping.

The new law is partially a reaction to a number of recent scandals in Germany involving management spying on staff. In 2008, it was revealed that the discount retail chain Lidl had spied on employees in the toilet and had collected information on their private lives. National railway Deutsche Bahn and telecommunications giant Deutsche Telekom were also involved in cases relating to surveillance of workers.

Online data privacy is increasingly becoming a hot-button issue in Germany. The government is currently also working on legislation to deal with issues relating to Google’s Street View service, which is highly controversial in the country because of concerns it could violate individuals’ privacy.

__________

Full article and photo: http://www.spiegel.de/international/germany/0,1518,713240,00.html

End of the Net Neut Fetish

What the Google-Verizon deal really means for the wireless future.

Historians, if any are interested, will conclude that the unraveling of the net neutrality movement began when the iPhone appeared, instigating a tsunami of demand for mobile Web access.

They will conclude that an ancillary role was played when carriers (even some non-wireless) began talking about metered pricing to meet the deluge of Internet video.

Suddenly, those net neut advocates who live in the real world (e.g., Google) had to face where their advocacy was leading—to usage-based pricing for mobile Web users, a dagger aimed at the heart of their own business models. After all, who would click on a banner ad if it meant paying to do so?

Thus Google and other realists developed a new appreciation of the need for incentives to keep their telco and cable antagonists investing in new broadband capacity. They developed an appreciation of “network management,” though it meant discriminating between urgent and less urgent traffic.

Most of all, they realized (whisper it quietly) that they might soon want to pay out of their own pockets to speed their bits to wireless users, however offensive to the net neutrality gods.

Hence a watershed this week in the little world of the net neut obsessives, as the realists finally parted company with the fetishists. The latter are those Washington-based groups that have emerged in recent years to gobble up Google’s patronage and declaim in favor of “Internet freedom.” You can easily recognize these groups today—they’re the ones taking Google’s name in vain.

The unraveling of the net neut coalition is perhaps the one meaningful result of the new net neut “principles” enunciated this week by former partisans Google and Verizon.

While these principles address in reasonable fashion the largely hypothetical problem of carriers blocking content and services that compete with their own, Verizon and Google insist the terms aren’t meant to apply to wireless. Funny thing—because wireless is precisely what brings these ex-enemies together in the first place. They’re partners in promoting Google’s Android software as a rival platform to Apple’s iPhone.

All their diversionary huffing and puffing, in fact, is a backhanded way of acknowledging reality: The future is mobile, and anything resembling net neutrality on mobile is a nonstarter thanks to the problem of runaway demand and a shortage of spectrum capacity.

Tasteless as it may be to toot our own horn, this column noted the dilemma last year, even forecasting Google’s coming apostasy on net neutrality. Already it was clear that only two economic solutions existed to a coming mobile meltdown. Either wireless subscribers would have to face usage-based pricing, profoundly disturbing the ad-based business models of big players whose services now appear “free” to users. Or Google and its ilk would have to be “willing to subsidize delivery of their services to mobile consumers—which would turn net neut precisely on its head.”

Our point was that the net neut fetish was dead, and good riddance. All along, competition was likely to provide a more reasonable and serviceable definition of “net neutrality” than regulators could ever devise or enforce. That rough-and-ready definition would allow carriers to discriminate in ways that consumers, on balance, are willing to put up with because it enables acceptable service at an acceptable price.

Even now, Google and its CEO Eric Schmidt, in their still-conflicted positioning, argue that the wired Internet has qualities of a natural monopoly, because most homes are dependent on one cable modem supplier. This treats the phone companies’ DSL and fiber services as if they don’t exist. It also overlooks how people actually experience the Internet.

Users don’t just get the Internet at home, but at work and on their mobile devices, and they won’t stand for being denied on one device services and sites they’re used to getting on the others. That is, they won’t unless there’s a good reason related to providing optimum service on a particular device.

You don’t have to look far for an example: Apple iPhone users put up with Apple’s blocking of most Web video on the iPhone because, on the whole, the iPhone still provides a satisfying service.

This is the sensible way ahead as even Google, a business realist, now seems to recognize. The telecom mavens at Strand Consult joke that Google is a “man with deep pockets and short arms, who suddenly disappears when the waiter brings the bill.” Yes, on the wired Net, Google remains entrenched in the position that network providers must continue to bury the cost to users of Google’s services uniformly across the bills of all broadband subscribers.

That won’t work on the wireless battlefield, and Google knows it. Stay tuned as the company’s business interests trump the simple net neutrality that the fetishists believe in—and that Google used to believe in.

Holman W. Jenkins, Wall Street Journal

__________

Full article: http://online.wsj.com/article/SB10001424052748704164904575421434187090098.html

Spies, secrets and smart-phones

SOME sort of a deal seems to have been thrashed out over the weekend, according to reports from Saudi Arabia, under which its spooks will be able to snoop to their heart’s content on messages sent over BlackBerrys within the kingdom. All last week, as it negotiated with the Saudi, United Arab Emirates (UAE) and Indian authorities over their demands for monitoring, the smart-phones’ Canadian maker, Research In Motion (RIM), was dodging journalists’ demands for proper explanations about what exactly is negotiable about the phones’ security. The Economist asked five times in four days for an interview, and got nowhere. Other news organisations had a similar experience.

The best we could get from the company was a series of tight-lipped statements, of which the least cryptic was this one:

RIM has spent over a decade building a very strong security architecture to meet our enterprise customers’ strict security requirements around the world. It is a solution that we are very proud of, and it has helped us become the number one choice for enterprises and governments. In recent days there has been a range of commentary, speculation, and misrepresentation regarding this solution and we want to take the opportunity to set the record straight. There is only one BlackBerry enterprise solution available to our customers around the world and it remains unchanged in all of the markets we operate in. RIM cooperates with all governments with a consistent standard and the same degree of respect. Any claims that we provide, or have ever provided, something unique to the government of one country that we have not offered to the governments of all countries, are unfounded. The BlackBerry enterprise solution was designed to preclude RIM, or any third party, from reading encrypted information under any circumstances since RIM does not store or have access to the encrypted data.

RIM cannot accommodate any request for a copy of a customer’s encryption key, since at no time does RIM, or any wireless network operator or any third party, ever possess a copy of the key. This means that customers of the BlackBerry enterprise solution can maintain confidence in the integrity of the security architecture without fear of compromise.

Seems, at first glance, pretty categorical and reassuring, doesn’t it? But hang on. First, all of the reassurances about message security seem only to apply to “enterprise” customers—large organisations that give BlackBerrys to their staff, and which route messages through a server on their own premises. RIM’s statement appears to make no promises to the millions of BlackBerry users worldwide who are contracted directly to a mobile-telecoms operator. Their messages are routed via RIM’s own servers, which are dotted around the world. Wherever RIM puts them, it has to comply with local authorities’ demands for access. It is reported that RIM has agreed to put servers inside Saudi territory, which would of course be under Saudi jurisdiction. Presumably the other governments demanding greater access to message monitoring will want something similar, since the company does say it co-operates with all governments “with a consistent standard”.

RIM’s guarantee of the impregnability of customers’ encryption keys is also less impressive than it appears. Let’s leave aside for a moment the long history of “uncrackable” codes proving crackable after all. All that RIM is saying is that while the message is encrypted it is not possible to provide a key to decrypt it. What about at either end of the encryption process? E-mails sent encrypted from a BlackBerry handset at some point have to be decrypted and sent to the recipient’s e-mail server. That is done either by the “enterprise” server, for those large BlackBerry users that have them, or by RIM’s own servers in the case of people who have their BlackBerry contract with a local telecoms firm. So at the very least, anyone who has a BlackBerry contract with a Saudi telecoms operator, or whose Saudi employer provides his BlackBerry, would now seem to have his e-mails at risk of being read if the authorities demand this.

But what the Saudis were concerned about was not so much e-mails but those “uncrackable” instant-messaging chats. When the company says it does not have, and cannot provide, a key to decrypt them as they travel from handset to handset, what this may mean, says Ross Anderson, professor of security engineering at Cambridge University in England, is that a new key is generated for each chat, and that only the paired handsets at either end have that key. If that is the case, he says, it might be rather difficult to decode those messages’ contents while they are encrypted and in transmission (though it would not be hard to detect who has sent a message to whom, and when).
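If, as Mr Anderson suggests, a fresh key is generated for each chat and held only by the two paired handsets, the natural mechanism is a key-agreement scheme of the Diffie-Hellman family. The toy sketch below is purely illustrative (the parameters and names are not RIM’s actual design, which is not public); it shows why an eavesdropper who sees only the public values exchanged over the air cannot compute the resulting session key, even though both endpoints can:

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement: each chat session gets a fresh key
# that only the two endpoints can compute. Parameters are illustrative;
# real systems use standardized large groups or elliptic curves.
P = 2**127 - 1   # a Mersenne prime, big enough for demonstration
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1      # secret, never leaves the handset
    return priv, pow(G, priv, P)             # public value, sent over the air

a_priv, a_pub = keypair()                    # handset A
b_priv, b_pub = keypair()                    # handset B

# Each handset combines its own secret with the other's public value.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
assert a_shared == b_shared                  # both ends derive the same secret

# Hash the shared secret down to a per-chat session key.
session_key = hashlib.sha256(a_shared.to_bytes(16, "big")).digest()
```

A network observer sees only `a_pub` and `b_pub`; recovering the session key from those would require solving the discrete-logarithm problem, which also explains Mr Anderson’s caveat that traffic metadata (who messaged whom, and when) remains visible even when the content is not.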

The weakest link

However, as we have reported before, the handsets themselves are the weakest link in BlackBerry security. Last year the UAE’s state-controlled telecoms operator, Etisalat, sent out what it insists was a software patch to improve BlackBerrys’ performance. RIM put out an indignant statement saying that “independent sources” had concluded that the patch could “enable unauthorised access to private or confidential information stored on the user’s smartphone.” In plain language: it appeared to be spyware. RIM gave users advice on how to remove it from their handsets.

The easiest way for spooks to read all of a surveillance target’s messages (including e-mails, texts, web forms) might be to do more stealthily what Etisalat seems (if you accept RIM’s theory) to have tried so clumsily to do: push a piece of spyware out to his handset—perhaps disguised as, or hidden in, a software update. This blogger receives software patches regularly and without warning on his company BlackBerry and would have no idea if one of them were part of a dastardly MI5 plot (paranoid, moi?).

According to an Indian government document leaked to the Economic Times last week, RIM has promised to provide the “tools”, within 8 months, for Indian spooks to read BlackBerry instant-messaging chats. It would be a huge blow to its reputation if it were ever found to have helped spy agencies put spyware on users’ handsets. So perhaps RIM itself would not risk that. But maybe others can provide a “solution” that can push snooping software on to handsets. America’s spies seem to think China’s spies can do this: last year Joel Brenner, then a senior counterintelligence official, told a security conference near CIA headquarters that during the Beijing Olympics “your phone or BlackBerry could have been tagged, tracked, monitored, and exploited between your disembarking the airplane and reaching the taxi stand at the airport. And when you emailed back home, some or all of the malware may have migrated to your home server. This is not hypothetical.”

Mark Rasch, former head of the computer crimes unit at the United States Department of Justice, told Reuters that the ability to tap into messages is routine for security agencies around the world, and he should know. American authorities have huge powers, under the post-9/11 Patriot Act and other laws, to demand compliance with wiretapping orders, to gag those who are complying with them and to grant them immunity against any legal consequences. So basically, it’s a licence to fib, or at least to keep schtum: if any smart-phone or telecoms provider were letting Uncle Sam take a peep at our messages, they wouldn’t be able to tell us, and even if we found out we couldn’t sue them. Is it plausible that the American authorities, after 9/11, would let people walk around with devices that send completely uncrackable messages? Surely they can read them, says Bruce Schneier, another internet-security expert: “You know they do.”

Given India’s tough line (unsurprising, considering its terrorism worries), if it doesn’t get the “tools” to read messenger chats, RIM may be shut out of a huge market; on the other hand, if BlackBerry services are not blocked in India in the coming months, that is bound to raise suspicions that the authorities have somehow gained (not necessarily from RIM itself) the means to read chats and other messages.

All this leaves RIM in a difficult situation. It doesn’t want to be, and perhaps may not be able to be, entirely open about what sort of access to messages it offers the authorities in different countries. The trouble is, as it notes in its statement, it has to a large degree built its brand on the supposed uncrackability of BlackBerry messages—more than rival brands have done. The feature that set its products apart from other smart-phones is now being thrown into doubt, and at an especially awkward time. The launch last week of the new-generation BlackBerry, the Torch, was overshadowed not just by the disputes with various governments over monitoring, but by a Nielsen survey which showed that, unlike iPhone and Android users, only a minority of BlackBerry owners are thinking of buying another BlackBerry next time. The company’s evasiveness on the security issue is hardly going to encourage them to stay loyal.

Pretending not to listen

What about all those other supposedly hack-proof means of communication, such as Skype internet telephony and Google Mail, both of which are “encrypted”? A security pundit interviewed on BBC television’s “Newsnight” a few days ago speculated that the American authorities are only pretending when they claim they still can’t tap into Skype calls. This was then put to Lord West, a former British security minister. His response was fascinating:

When I come on a programme like this I’m always very nervous, ‘cos I know so much. And also people…don’t necessarily always tell the truth. That sounds an awful thing to say but do you want anyone to know that you can get into very high-encrypted stuff? No, you can say “we don’t, we can’t do it”. 

He then went on to say how “mind-boggling” are the capabilities of America’s National Security Agency and its British counterpart, GCHQ. To this blogger, that sounded like: “Yes of course we can hack Skype calls and all the rest, but we have to pretend we can’t”. Mr Anderson notes that there are all sorts of other internet-based services that provide encrypted messaging, including various dungeons-and-dragons online games. As these proliferate, providing terrorists and crime gangs with secure cyber-meeting places, the spooks will have to keep chasing them: serving papers on the hosts where possible, seeking deals with them otherwise. This is tricky but not impossible if you are the United States. For less powerful nations like the UAE, it is harder to get co-operation, and simply blocking all such secure-message services would do great economic damage.

Not all governments may get all of the snooping powers they want (RIM seems to be trying to persuade some to make do with the “metadata” of messages—who sent a message to whom, and when—rather than their contents). Even so, whether you are an international terrorist, an investment banker, or indeed an intelligence agent, given the technical capacity and the legal powers at the disposal of the big world powers, it seems that even on “secure” and “encrypted” channels, you can never be quite sure that someone isn’t listening in:

Number Two: We want information, information, information…
The Prisoner: You won’t get it.
Number Two: By hook or by crook, we will.

__________

Full article and photo: http://www.economist.com/blogs/babbage/2010/08/blackberrys_and_encryption

The Internet Generation Prefers the Real World

They may have been dubbed the “Internet generation,” but young people are more interested in their real-world friends than Facebook. New research shows that the majority of children and teenagers are not the Web-savvy digital natives of legend. In fact, many of them don’t even know how to google properly.

Seventeen-year-old Jetlir is online every day, sometimes for many hours at a time and late into the night. The window of his instant messaging program is nearly always open on his computer screen. A jumble of friends and acquaintances chat with each other. Now and again Jetlir adds half a sentence of his own, though this is soon lost in the endless stream of comments, jokes and greetings. He has in any case moved on, and is now clicking through sports videos on YouTube.

Jetlir is a high school student from Cologne. He could easily be a character in one of the many newspaper stories about the “Internet generation” that is allegedly in grave danger of losing itself in the virtual world.

Jetlir grew up with the Internet. It’s been around for as long as he can remember. He spends half of his leisure time on Facebook and YouTube, or chatting with friends online.

In spite of this, Jetlir thinks that other things — especially basketball — are much more important to him. “My club comes first,” Jetlir says. “I’d never miss a training session.” His real life also seems to come first in other respects: “If someone wants to meet me, I turn off my computer immediately,” he says.

‘What’s the Point?’

Indeed, Jetlir does not actually expect very much from the Internet. Older generations may consider it a revolutionary medium, enthuse about the splendors of blogging and tweet obsessively on the short-messaging service Twitter. But Jetlir is content if his friends are within reach, and if people keep uploading videos to YouTube. He’d never dream of keeping a blog. Nor does he know anybody else his age who would want to. And he’s certainly never tweeted before. “What’s the point?” he asks.

The Internet plays a paradoxical role in Jetlir’s life. Although he uses it intensively, he isn’t that interested in it. It’s indispensable, but only if he has nothing else planned. “It isn’t everything,” he says.

Jetlir’s easy-going attitude towards the Internet is typical of German adolescents today, as several recent studies have shown. Odd as it may seem, the first generation that cannot imagine life without the Internet doesn’t actually consider the medium particularly important, and indeed shuns some of the latest web technologies. Only 3 percent of young people keep their own blog, and no more than 2 percent regularly contribute to Wikipedia or other comparable open source projects.

Similarly, most young people in Germany ignore social bookmarking websites like Delicious and photo-sharing portals such as Flickr and Picasa. Apparently the netizens of the future couldn’t care less about the collaborative delights of Web 2.0 — that, at least, is the finding of a major study by the Hans Bredow Institute in Germany.

The Net Generation

For years, experts have been talking about a new kind of tech-savvy youth who are mobile, networked, and chronically restless, spoilt by the glut of stimuli on the Internet. These young people were said to live in perpetual symbiosis with their computers and mobile phones, with networking technology practically imprinted in their genes. The media habitually referred to them as “digital natives,” “Generation @” or simply “the net generation.”

Two of the much cited spokesmen of this movement are the 64-year-old American author Marc Prensky and his 62-year-old Canadian colleague, Don Tapscott. Prensky coined the expression “digital natives” to describe those lucky souls born into the digital era, instinctively acquainted with all that the Internet has to offer in terms of participation and self-promotion, and streets ahead of their elders in terms of web-savviness. Prensky classifies everyone over the age of 25 as “digital immigrants” — people who gain access to the Internet later in life and betray themselves through their lack of mastery of the local customs, like real-world immigrants who speak their adopted country’s language with an accent.

A small group of writers, consultants and therapists thrives on repeating the same old mantra, namely that our youth is shaped through and through by the online medium in which it grew up. They claim that our schools must, therefore, offer young people completely new avenues — surely traditional education cannot reach this generation any longer, they argue.

Little Evidence

There is little evidence to back such theories up, however. Rather than conducting surveys, these would-be visionaries base their arguments on impressive individual cases of young Internet virtuosos. As other, more serious researchers have since discovered, such exceptions say very little about the generation as a whole, and those researchers are now avidly trying to correct the mistakes of the past.

Numerous studies have since revealed how young people actually use the Internet. The findings show that the image of the “net generation” is almost completely false — as is the belief in the all-changing power of technology.

A study by the Hans Bredow Institute entitled “Growing Up With the Social Web” was particularly thorough in its approach. In addition to conducting a representative survey, the researchers conducted extensive individual interviews with 28 young people. Once again it became clear that young people primarily use the Internet to interact with friends. They go on social networking sites like Facebook and the popular German website SchülerVZ, which is aimed at school students, to chat, mess around and show off — just like they do in real life.

There are a few genuine net pioneers who compose music online with friends from Amsterdam and Barcelona, organize spontaneous protests to lobby for cheaper public transport passes for schoolchildren, or use the virtual arena in other imaginative ways. But most of the respondents saw the Internet as merely a useful extension of the old world rather than as a completely new one. Their relationship to the medium is therefore far more pragmatic than initially posited. “We found no evidence whatsoever that the Internet is the dominating influence in the lives of young people,” says Ingrid Paus-Hasebrink, the Salzburg-based communication researcher who led the project.

Not Very Skilled

More surprising yet, these supposedly gifted netizens are not even particularly adept at getting the most out of the Internet. “They can play around,” says Rolf Schulmeister, an educational researcher from Hamburg who specializes in the use of digital media in the classroom. “They know how to start up programs, and they know where to get music and films. But only a minority is really good at using it.”

Schulmeister should know. He recently ploughed through the findings of more than 70 relevant studies from around the globe. He too came to the conclusion that the Internet certainly hasn’t taken over the real world. “The media continue to account for only a part of people’s leisure activities. And the Internet is only one medium among many,” he says. “Young people still prefer to meet friends or take part in sports.”

Of course that won’t prevent the term “net generation” being bandied about in the media and elsewhere. “It’s an obvious, cheap metaphor,” Schulmeister says. “So it just keeps cropping up.”

In Touch with Friends around the Clock

In purely statistical terms, it appears that ever-greater proportions of young people’s days are focused on technology. According to a recent study carried out by the Stuttgart-based media research group MPFS, 98 percent of 12- to 19-year-olds in Germany now have access to the Internet. And by their own estimates, they are online for an average of 134 minutes a day — just three minutes less than they spend in front of the television. 

However, the raw figures say little about what these supposed digital natives actually do online. As it turns out, the kids of today are very similar to previous generations of young people: They are mainly interested in communicating with their peers. Today’s young people spend almost half of their online time interacting socially; e-mail, instant messaging and social networking together account for the bulk of that time.

For instance Tom, one of Jetlir’s classmates, remains in touch with 30 or 40 of his friends almost around the clock. Even so, the channels of communication vary. In the morning Tom will chat briefly on his PC, during lunch recess he’ll rattle off a few text messages, after school he’ll sit down for his daily Facebook session and make a few calls on his cell phone, and in the evening he’ll make one or two longer video calls using the free Internet telephony service Skype.

The Medium Is Not the Message

For Tom, Jetlir, and the others of their age, it doesn’t seem to matter whether they interact over the Internet or via another medium. It seems that young people are mainly interested in what the particular medium or communication device can be used for. In the case of the Internet in particular, that can be one of many things: Sometimes it acts as a telephone, sometimes as a kind of souped-up television. Tom spends an hour or two every day watching online videos, mostly on YouTube, but also entire TV programs if they’re available somehow. “Everyone knows how to find episodes of the TV series they want to watch,” says fellow pupil Pia.

The second most popular use of the Internet is for entertainment. According to a survey conducted by Leipzig University in 2008, more young people now access their music via various online broadcasting services than listen to it on the radio. As a consequence, the video-sharing portal YouTube has become the global jukebox, serving the musical needs of the world’s youth — although its rise to prominence as a resource for music on demand has gone largely unnoticed. Indeed, there are few songs that cannot be dredged up somewhere on the site.

“That’s also practical if you’re looking for something new,” Pia says. Searching for specific content is incredibly simple on YouTube. In general all you need to do is enter half a line of some lyrics you caught at a party, and YouTube supplies the corresponding music video and the song itself.

In this way the Internet is becoming a repository for the content of older media, sometimes even replacing them altogether. And youthful audiences, always on the lookout for entertainment or something to share, are now increasingly using the Internet to find this content. But it’s not exactly the kind of behavior that would trigger a lifestyle revolution.

Teens Still Enjoy Meeting Friends

What’s more, there’s still plenty of life beyond the many screens at their disposal. A 2009 study by MPFS found that nine out of every 10 teenagers put meeting friends right at the top of their list of favorite non-media activities. More striking still, 76 percent of young people in Germany take part in sport several times a week, although among girls that figure is only 64 percent.

In January, the authors of the “Generation M2” survey by the Kaiser Family Foundation published the remarkable finding that even the most intense media users in the US exercised just as much as others of their age.

So how can they pack all that into a single day? Simply adding together the amount of time devoted to each activity creates a very false picture. That’s because most young people are excellent media multitaskers, simultaneously making phone calls, checking out their friends on Facebook and listening to music. And it appears that they’re primarily online at times they would otherwise spend lounging around.

“I go online when I have nothing better to do,” Jetlir says. “Unfortunately that’s often when I should already be sleeping.” Thanks to cell phones and MP3 players, young people can also fill gaps in their busy schedules even when they’re away from static media sources like TVs, computers and music systems. Media use can therefore increase steadily while still leaving plenty of time for other activities.

‘Time’s Too Precious’

What’s more, many young people still aren’t the least bit interested in all the online buzz. Some 31 percent of them rarely or never visit social networking sites. Anna, who attends the same school as Jetlir, says she would “probably only miss the train timetable” if the Internet ceased to exist, while fellow student Torben thinks “time’s too precious” to waste on computers. He plays handball and soccer, and says “10 minutes a day on Facebook” is all he needs.

By contrast, Tom will occasionally get so wrapped up in Facebook and his instant messaging that he’ll forget the time altogether. “It’s a strange feeling to realize you’ve spent so much time on something and have nothing to show for it,” he admits. But he also knows that others find the temptations of the virtual world much harder to resist. “Everyone knows a few people who are online all day,” Pia says, though Jetlir suggests that’s only for want of something better to do. “None of them would turn down an offer to go out somewhere instead,” he adds.

But even the most inveterate netizens aren’t necessarily natural experts in the medium. If you want to make use of the Internet, you first have to understand how the real world works. And that’s often the sticking point. The only advantage that young people have over their elders is their lack of inhibitions with regard to computers. “They simply try things out,” says René Scheppler, a teacher at a high school in Wiesbaden. “They discover all sorts of things that way. The only thing is they don’t understand how it works.”

‘I Found It on Google’

Occasionally the teacher will ask his students big-picture questions about the medium they take for granted. Questions like: Where did the Internet come from? “I’ll get replies like, ‘What do you mean? It’s just there!'” Scheppler says. “Unless they’re prompted to do so, they never address those sorts of questions. For them it’s like a car: All that matters is that it works.”

And because teenagers are basically inexperienced, they are all the more likely to overestimate their own abilities. “They think they’re the real experts,” Scheppler says. “But when it comes down to it, they can’t even google properly.”

When Scheppler scheduled a lesson about Google to teach his pupils how to better search the Web, they thought it was hilarious. “Google?!” they gasped. “We know all about that. We do it all the time. And now Mr Scheppler wants to tell us how to use Google!”

He, therefore, set them a challenge: They were to design a poster on globalization based on the example of Indian subcontractors. Now it was the teacher’s turn to laugh. “They just typed a series of individual keywords into Google, and then they went click, click, click: ‘Don’t want that! Useless! Let’s try another one!'” Scheppler recalls. “They’re very quick to jettison things, sometimes even relevant information. They think they can tell the wheat from the chaff, but they just stumble about — very rapidly, very hectically and very superficially. And they stop the moment they get a hit that looks reasonably plausible.”

Few have any idea where the information on the Web comes from. And if their teacher asks for references, he often gets the reply, “I found it on Google.”

Learning How to Use the Internet Productively

Recent research into the way people conduct Internet searches confirms Scheppler’s observations. A major study conducted by the British Library came to the sobering conclusion that the “net generation” hardly knows what to look for, quickly scans over results, and has a hard time assessing relevance. “The information literacy of young people has not improved with the widening access to technology,” the authors wrote. 

A few schools have now realized that the time has come to act. One of them is Kaiserin Augusta School in Cologne, the high school that Jetlir, Tom, Pia, and Anna attend. “We want our pupils to learn how to use the Internet productively,” says music teacher André Spang, “not just for clicking around in.”

Spang uses Web 2.0 tools in the classroom. When teaching them about the music of the 20th century, for example, he got his 12th-graders to produce a blog on the subject. “They didn’t even know what that was,” he says. Now they’re writing articles on aleatoric music and musique concrète, composing simple 12-tone rows and collecting musical examples, videos, and links about it. Everyone can access the project online, see what the others are doing and comment on each other’s work. The fact that the material is public also helps to promote healthy competition and ambition among the participants.

Blogs are not technically challenging and are quick to set up. That’s why they are also being used to teach other subjects. Piggybacking on the enormous success of Wikipedia, the collaborative online encyclopedia produced entirely by volunteer contributors, wikis are also being employed in schools. The 10th-graders in the physics class of Spang’s colleague Thomas Vieth are currently putting together a miniature encyclopedia of electromagnetism. “In the past all we could do was give out group assignments, and people would just rattle off their presentations,” Vieth says. “Now everyone reads along, partly because all the articles are connected and have to be interlinked.”

Not Interested in Fame

One positive side-effect is that the students are also learning how to find reliable information on the Internet. And so that they understand what they find online, there are regular old-fashioned sessions on learning how to learn, including reading, comprehension and summarizing exercises. So instead of tech-savvy young netizens challenging the school, the school itself is painstakingly teaching them how to benefit from the online medium.

For most of the pupils it was the first time they had contributed their own work to the Internet’s pool of data. They’re not interested in widespread fame. Self-promoters are rare, and most young people even shun anonymous role-playing such as that found in the online world Second Life. The youth of today, it turns out, is much more obsessed with real relationships. Whatever they do or write is directed at their particular group of friends and acquaintances.

That also applies to video, the medium most tempting for people to try out for themselves. An impressive 15 percent of young people have already uploaded at least one home-made video, mostly shot on a cell phone.

Part of Their Social Life

One student, Sven, has uploaded a video he made to YouTube. It shows him and a few friends in their bathing suits first by a lake, then all running into the clearly icy water. “No, really,” Sven says, “people are interested in this. They talk about it!” There are indeed already 37 comments under the video, all from his circle of friends.

“And here,” Sven adds, pointing to the screen. “Here on Facebook someone recently posted just a dot. Even so, seven people have clicked on the ‘Like’ button so far, and 83 commented on the dot.”

Older people might consider such activity inane, but for young people it’s part of their social life and no less important than a friendly wave or affable clowning around in the offline world. The example of the dot shows how normal the Internet has become, and debunks the idea that it is a special world in which special things happen.

“Media are used by the masses if they have some relevance to everyday life,” says Rolf Schulmeister, the educational researcher. “And they are used for aims that people already had anyway.”

Turning Point

Young people have now reached this turning point. The Internet is no longer something they are willing to waste time thinking about. It seems that the excitement about cyberspace was a phenomenon peculiar to their predecessors, the technology-obsessed first generation of Web users.

For a brief transition period, the Web seemed to be tremendously new and different, a kind of revolutionary power that could do and reshape everything. Young people don’t feel that way. They hardly even use the word “Internet,” talking about “Google”, “YouTube” and “Facebook” instead. And they certainly no longer understand it when older generations speak of “going online.”

“The expression is meaningless,” Tom says. Indeed the term is a relic of a time when the Internet was still something special, evoking a separate space distinct from our real life, an independent, secretive world that you entered and then exited again.

Tom and his friends just describe themselves as being “on” or “off,” using the English terms. What they mean is: contactable or not.

__________

Full article and photo: http://www.spiegel.de/international/zeitgeist/0,1518,710139,00.html

The Web’s New Gold Mine: Your Secrets

A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series.

Hidden inside Ashley Hayes-Beaty’s computer, a tiny file helps gather personal details about her, all to be put up for sale for a tenth of a penny.

The file consists of a single code—4c812db292272995e5416a323e79bd37—that secretly identifies her as a 26-year-old female in Nashville, Tenn.

The code knows that her favorite movies include “The Princess Bride,” “50 First Dates” and “10 Things I Hate About You.” It knows she enjoys the “Sex and the City” series. It knows she browses entertainment news and likes to take quizzes.

“Well, I like to think I have some mystery left to me, but apparently not!” Ms. Hayes-Beaty said when told what that snippet of code reveals about her. “The profile is eerily correct.”

Ms. Hayes-Beaty is being monitored by Lotame Solutions Inc., a New York company that uses sophisticated software called a “beacon” to capture what people are typing on a website—their comments on movies, say, or their interest in parenting and pregnancy. Lotame packages that data into profiles about individuals, without determining a person’s name, and sells the profiles to companies seeking customers. Ms. Hayes-Beaty’s tastes can be sold wholesale (a batch of movie lovers is $1 per thousand) or customized (26-year-old Southern fans of “50 First Dates”).
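The 32-character code found on Ms. Hayes-Beaty’s computer has the shape of an MD5 hash, a common way for trackers to boil identifying attributes down to a compact, opaque token that can be matched across sites without storing a name. A hypothetical sketch of the idea (the fingerprint fields are invented for illustration; the Journal does not say how Lotame actually derives its codes):

```python
import hashlib

def profile_token(fingerprint: str) -> str:
    """Reduce a bundle of identifying attributes to an opaque
    32-hex-character token, usable as a stable profile key."""
    return hashlib.md5(fingerprint.encode("utf-8")).hexdigest()

# Hypothetical browser attributes a tracking "beacon" might observe.
token = profile_token("Mozilla/5.0|1280x800|en-US|Nashville, TN")
```

The same inputs always yield the same token, which is what lets profile data accumulate against it over time, yet the token on its own reveals nothing human-readable about the person it tags.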

“We can segment it all the way down to one person,” says Eric Porres, Lotame’s chief marketing officer.

One of the fastest-growing businesses on the Internet, a Wall Street Journal investigation has found, is the business of spying on Internet users.

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

• The study found that the nation’s 50 top websites on average installed 64 pieces of tracking technology onto the computers of visitors, usually with no warning. A dozen sites each installed more than a hundred. The nonprofit Wikipedia installed none.

• Tracking technology is getting smarter and more intrusive. Monitoring used to be limited mainly to “cookie” files that record websites people visit. But the Journal found new tools that scan in real time what people are doing on a Web page, then instantly assess location, income, shopping interests and even medical conditions. Some tools surreptitiously re-spawn themselves even after users try to delete them.

• These profiles of individuals, constantly refreshed, are bought and sold on stock-market-like exchanges that have sprung up in the past 18 months.

The new technologies are transforming the Internet economy. Advertisers once primarily bought ads on specific Web pages—a car ad on a car site. Now, advertisers are paying a premium to follow people around the Internet, wherever they go, with highly specific marketing messages.

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen—tracking companies, data brokers and advertising networks—competing to meet the growing demand for data on individual behavior and interests.

The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges.

“It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.”

The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, WSJ.com.) It then analyzed the tracking files and programs these sites downloaded onto a test computer.

As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles.

But over two-thirds—2,224—were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.

The top venue for such technology, the Journal found, was IAC/InterActive Corp.’s Dictionary.com. A visit to the online dictionary site resulted in 234 files or programs being downloaded onto the Journal’s test computer, 223 of which were from companies that track Web users.


The information that companies gather is anonymous, in the sense that Internet users are identified by a number assigned to their computer, not by a specific person’s name. Lotame, for instance, says it doesn’t know the name of users such as Ms. Hayes-Beaty—only their behavior and attributes, identified by code number. People who don’t want to be tracked can remove themselves from Lotame’s system.

And the industry says the data are used harmlessly. David Moore, chairman of 24/7 RealMedia Inc., an ad network owned by WPP PLC, says tracking gives Internet users better advertising.

“When an ad is targeted properly, it ceases to be an ad, it becomes important information,” he says.

Tracking isn’t new. But the technology is growing so powerful and ubiquitous that even some of America’s biggest sites say they were unaware, until informed by the Journal, that they were installing intrusive files on visitors’ computers.

The Journal found that Microsoft Corp.’s popular Web portal, MSN.com, planted a tracking file packed with data: It had a prediction of a surfer’s age, ZIP Code and gender, plus a code containing estimates of income, marital status, presence of children and home ownership, according to the tracking company that created the file, Targus Information Corp.

Both Targus and Microsoft said they didn’t know how the file got onto MSN.com, and added that the tool didn’t contain “personally identifiable” information.

Tracking is done by tiny files and programs known as “cookies,” “Flash cookies” and “beacons.” They are placed on a computer when a user visits a website. U.S. courts have ruled that it is legal to deploy the simplest type, cookies, just as someone using a telephone might allow a friend to listen in on a conversation. Courts haven’t ruled on the more complex trackers.

The most intrusive monitoring comes from what are known in the business as “third party” tracking files. They work like this: The first time a site is visited, it installs a tracking file, which assigns the computer a unique ID number. Later, when the user visits another site affiliated with the same tracking company, it can take note of where that user was before, and where he is now. This way, over time the company can build a robust profile.
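The mechanism the Journal describes can be modeled in a few lines. This is a rough sketch, not any company's actual code: a single tracker embedded on many affiliated sites assigns each browser a unique ID on first contact, then logs every later visit under that ID.

```python
import uuid

class ThirdPartyTracker:
    """Toy model of a third-party tracking service (illustrative only)."""

    def __init__(self):
        self.profiles = {}  # tracker ID -> list of sites visited

    def handle_visit(self, browser_cookies, site):
        # First visit to any affiliated site: assign a unique ID
        # and store it in a third-party cookie.
        tracker_id = browser_cookies.get("tracker_id")
        if tracker_id is None:
            tracker_id = str(uuid.uuid4())
            browser_cookies["tracker_id"] = tracker_id
        # Every visit: note where this browser is now.
        self.profiles.setdefault(tracker_id, []).append(site)
        return tracker_id

# One browser (one cookie jar) visiting three sites that all embed the tracker.
tracker = ThirdPartyTracker()
cookies = {}
for site in ["cars.example", "news.example", "health.example"]:
    uid = tracker.handle_visit(cookies, site)

print(tracker.profiles[uid])  # browsing history accumulated under one ID
```

Because the cookie jar carries the same ID to every affiliated site, the tracker ends up with a single cross-site profile even though no site ever learned the user's name.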

One such ecosystem is Yahoo Inc.’s ad network, which collects fees by placing targeted advertisements on websites. Yahoo’s network knows many things about recent high-school graduate Cate Reid. One is that she is a 13- to 18-year-old female interested in weight loss. Ms. Reid was able to determine this when a reporter showed her a little-known feature on Yahoo’s website, the Ad Interest Manager, that displays some of the information Yahoo had collected about her.

Yahoo’s take on Ms. Reid, who was 17 years old at the time, hit the mark: She was, in fact, worried that she may be 15 pounds too heavy for her 5-foot, 6-inch frame. She says she often does online research about weight loss.

“Every time I go on the Internet,” she says, she sees weight-loss ads. “I’m self-conscious about my weight,” says Ms. Reid, whose father asked that her hometown not be given. “I try not to think about it…. Then [the ads] make me start thinking about it.”

Yahoo spokeswoman Amber Allman says Yahoo doesn’t knowingly target weight-loss ads at people under 18, though it does target adults.

“It’s likely this user received an untargeted ad,” Ms. Allman says. It’s also possible Ms. Reid saw ads targeted at her by other tracking companies.

Information about people’s moment-to-moment thoughts and actions, as revealed by their online activity, can change hands quickly. Within seconds of visiting eBay.com or Expedia.com, information detailing a Web surfer’s activity there is likely to be auctioned on the data exchange run by BlueKai, the Seattle startup.

Each day, BlueKai sells 50 million pieces of information like this about specific individuals’ browsing habits, for as little as a tenth of a cent apiece. The auctions can happen instantly, as a website is visited.

Spokespeople for eBay Inc. and Expedia Inc. both say the profiles BlueKai sells are anonymous and the people aren’t identified as visitors of their sites. BlueKai says its own website gives consumers an easy way to see what it monitors about them.

Tracking files get onto websites, and downloaded to a computer, in several ways. Often, companies simply pay sites to distribute their tracking files.

But tracking companies sometimes hide their files within free software offered to websites, or hide them within other tracking files or ads. When this happens, websites aren’t always aware that they’re installing the files on visitors’ computers.

Often staffed by “quants,” or math gurus with expertise in quantitative analysis, some tracking companies use probability algorithms to try to pair what they know about a person’s online behavior with data from offline sources about household income, geography and education, among other things.

The goal is to make sophisticated assumptions in real time—plans for a summer vacation, the likelihood of repaying a loan—and sell those conclusions.

Some financial companies are starting to use this formula to show entirely different pages to visitors, based on assumptions about their income and education levels.

Life-insurance site AccuquoteLife.com, a unit of Byron Udell & Associates Inc., last month tested a system showing visitors it determined to be suburban, college-educated baby-boomers a default policy of $2 million to $3 million, says Accuquote executive Sean Cheyney. A rural, working-class senior citizen might see a default policy for $250,000, he says.

“We’re driving people down different lanes of the highway,” Mr. Cheyney says.

Consumer tracking is the foundation of an online advertising economy that racked up $23 billion in ad spending last year. Tracking activity is exploding. Researchers at AT&T Labs and Worcester Polytechnic Institute last fall found tracking technology on 80% of 1,000 popular sites, up from 40% of those sites in 2005.

The Journal found tracking files that collect sensitive health and financial data. On Encyclopaedia Britannica Inc.’s dictionary website Merriam-Webster.com, one tracking file from Healthline Networks Inc., an ad network, scans the page a user is viewing and targets ads related to what it sees there. So, for example, a person looking up depression-related words could see Healthline ads for depression treatments on that page—and on subsequent pages viewed on other sites.
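A contextual scanner of the kind described can be approximated as simple keyword matching. The category names and keyword lists below are hypothetical; a real ad network's taxonomy and matching logic are proprietary and far richer.

```python
# Hypothetical keyword-to-ad-category table (illustrative only).
AD_CATEGORIES = {
    "depression-treatment": {"depression", "antidepressant", "mood"},
    "weight-loss": {"diet", "calorie", "weight"},
}

def categorize_page(page_text):
    """Return ad categories whose keywords appear in the page text."""
    words = set(page_text.lower().split())
    return sorted(cat for cat, keywords in AD_CATEGORIES.items()
                  if words & keywords)

print(categorize_page("definition of depression a low mood"))
```

Once a page is tagged this way, the same category label can follow the user to later pages, which is how a dictionary lookup turns into depression-treatment ads on other sites.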

Healthline says it doesn’t let advertisers track users around the Internet who have viewed sensitive topics such as HIV/AIDS, sexually transmitted diseases, eating disorders and impotence. The company does let advertisers track people with bipolar disorder, overactive bladder and anxiety, according to its marketing materials.

Targeted ads can get personal. Last year, Julia Preston, a 32-year-old education-software designer in Austin, Texas, researched uterine disorders online. Soon after, she started noticing fertility ads on sites she visited. She now knows she doesn’t have a disorder, but still gets the ads.

It’s “unnerving,” she says.

Tracking became possible in 1994 when the tiny text files called cookies were introduced in an early browser, Netscape Navigator. Their purpose was user convenience: remembering contents of Web shopping carts.

Back then, online advertising barely existed. The first banner ad appeared the same year. When online ads got rolling during the dot-com boom of the late 1990s, advertisers were buying ads based on proximity to content—shoe ads on fashion sites.

The dot-com bust triggered a power shift in online advertising, away from websites and toward advertisers. Advertisers began paying for ads only if someone clicked on them. Sites and ad networks began using cookies aggressively in hopes of showing ads to people most likely to click on them, thus getting paid.

Targeted ads command a premium. Last year, the average cost of a targeted ad was $4.12 per thousand viewers, compared with $1.98 per thousand viewers for an untargeted ad, according to an ad-industry-sponsored study in March.

The Journal examined three kinds of tracking technology—basic cookies as well as more powerful “Flash cookies” and bits of software code called “beacons.”

More than half of the sites examined by the Journal installed 23 or more “third party” cookies. Dictionary.com installed the most, placing 159 third-party cookies.

Cookies are typically used by tracking companies to build lists of pages visited from a specific computer. A newer type of technology, beacons, can watch even more activity.

Beacons, also known as “Web bugs” and “pixels,” are small pieces of software that run on a Web page. They can track what a user is doing on the page, including what is being typed or where the mouse is moving.
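In spirit, a beacon is an event recorder bundled into the page. The sketch below simulates one in Python (real beacons run as scripts inside the browser); the event names and payload format are invented for illustration.

```python
class Beacon:
    """Toy model of what a page beacon records and reports."""

    def __init__(self):
        self.events = []

    def on_keystroke(self, field, text):
        # Capture what the user typed, and into which field.
        self.events.append(("typed", field, text))

    def on_mouse_move(self, x, y):
        # Capture where the mouse is on the page.
        self.events.append(("mouse", x, y))

    def payload(self):
        # What would be sent back to the tracking company's servers.
        return list(self.events)

b = Beacon()
b.on_keystroke("comment", "loved this movie")
b.on_mouse_move(120, 45)
print(b.payload())
```

The point of the model is the asymmetry: the user sees only a page, while the beacon sees a running transcript of interaction with it.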

The majority of sites examined by the Journal placed at least seven beacons from outside companies. Dictionary.com had the most, 41, including several from companies that track health conditions and one that says it can target consumers by dozens of factors, including zip code and race.

Dictionary.com President Shravan Goli attributed the presence of so many tracking tools to the fact that the site was working with a large number of ad networks, each of which places its own cookies and beacons. After the Journal contacted the company, it cut the number of networks it uses and beefed up its privacy policy to more fully disclose its practices.

The widespread use of Adobe Systems Inc.’s Flash software to play videos online offers another opportunity to track people. Flash cookies originally were meant to remember users’ preferences, such as volume settings for online videos.

But Flash cookies can also be used by data collectors to re-install regular cookies that a user has deleted. This can circumvent a user’s attempt to avoid being tracked online. Adobe condemns the practice.
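The respawning trick works because the ID is mirrored in two stores, and deleting one leaves the other intact. A minimal sketch, with the two stores as plain dictionaries and a fixed placeholder ID standing in for a real random identifier:

```python
def visit(http_cookies, flash_storage):
    """Simulate one page load with a respawning tracker (toy model)."""
    # Look for an existing ID in either store.
    uid = http_cookies.get("uid") or flash_storage.get("uid")
    if uid is None:
        uid = "user-123"  # hypothetical; in reality a random unique ID
    # The tracker mirrors the ID in both stores on every visit.
    http_cookies["uid"] = uid
    flash_storage["uid"] = uid
    return uid

http, flash = {}, {}
first = visit(http, flash)
http.clear()                  # the user deletes regular cookies...
second = visit(http, flash)   # ...but the Flash copy restores the same ID
print(first == second)
```

Deleting only the regular cookies therefore accomplishes nothing: on the next visit the Flash copy quietly re-creates them, which is exactly the practice Adobe condemns.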

Most sites examined by the Journal installed no Flash cookies. Comcast.net installed 55.

That finding surprised the company, which said it was unaware of them. Comcast Corp. subsequently determined that it had used a piece of free software from a company called Clearspring Technologies Inc. to display a slideshow of celebrity photos on Comcast.net. The Flash cookies were installed on Comcast’s site by that slideshow, according to Comcast.

Clearspring, based in McLean, Va., says the 55 Flash cookies were a mistake. The company says it no longer uses Flash cookies for tracking.

CEO Hooman Radfar says Clearspring provides software and services to websites at no charge. In exchange, Clearspring collects data on consumers. It plans eventually to sell the data it collects to advertisers, he says, so that site users can be shown “ads that don’t suck.” Comcast’s data won’t be used, Clearspring says.

Wittingly or not, people pay a price in reduced privacy for the information and services they receive online. Dictionary.com, the site with the most tracking files, is a case study.

The site’s annual revenue, about $9 million in 2009 according to an SEC filing, means the site is too small to support an extensive ad-sales team. So it needs to rely on the national ad-placing networks, whose business model is built on tracking.

Dictionary.com executives say the trade-off is fair for their users, who get free access to its dictionary and thesaurus service.

“Whether it’s one or 10 cookies, it doesn’t have any impact on the customer experience, and we disclose we do it,” says Dictionary.com spokesman Nicholas Graham. “So what’s the beef?”

The problem, say some industry veterans, is that so much consumer data is now up for sale, and there are no legal limits on how that data can be used.

Until recently, targeting consumers by health or financial status was considered off-limits by many large Internet ad companies. Now, some aim to take targeting to a new level by tapping online social networks.

Media6Degrees Inc., whose technology was found on three sites by the Journal, is pitching banks to use its data to size up consumers based on their social connections. The idea is that the creditworthy tend to hang out with the creditworthy, and deadbeats with deadbeats.

“There are applications of this technology that can be very powerful,” says Tom Phillips, CEO of Media6Degrees. “Who knows how far we’d take it?”

Julia Angwin, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703977004575393173432219064.html

To Tweet, Or Not to Tweet

How weary, stale, flat and unprofitable—at times—seem all the digital distractions of this world.

A catastrophic event unfolds. A seemingly healthy professional embarks on his daily commute, only to come to the frightening realization that his battered and beloved BlackBerry lies vulnerable and unused in a distant corner of his home. An unwholesome panic descends. No matter how far away from home he is, and no matter how needless the device may be in a practical sense, he is impelled to hightail it back to his house and reconnect with the world.

William Powers offers this beleaguered man (me), and everyone else who has faced a similar ordeal, a roadmap to contentment in “Hamlet’s BlackBerry,” a rewarding guide to finding a “quiet” and “spacious” place “where the mind can wander free.”

The book grew out of the author’s much-discussed 2006 National Journal essay, “Hamlet’s BlackBerry: Why Paper Is Eternal” (and how I wish that were true). In it, the former Washington Post staff writer argues that the distractions of manic connectivity often lead to a lack of productivity and, if allowed to permeate too deeply, to an assault on the beauty and meaning of everyday life.

Obviously this is not a unique grievance, or a fresh one: As Mr. Powers acknowledges, concerns about the deleterious effects of a new world supplanting the old go back to Plato. But there has been an awful lot of grousing about digital distraction lately—Nicholas Carr’s “The Shallows: What the Internet Is Doing to Our Brains” came out just a few weeks ago—and it is easy to feel skeptical of worrywarts agonizing about Americans “wrestling” with too many choices and “coping” with the effects of too much Internet use.

There is simply too much good that comes of innovation for that sort of Luddite hand-wringing. The farmer a century ago who pulled himself off the straw mattress at 4 a.m. to till the earth so his family wouldn’t starve led a fairly straightforward, undistracted existence, but he was almost certainly miserable most of the time. And he probably regarded the arrival of radio as a sort of miracle. In discussions of this type I tend to rely on the wisdom of P.J. O’Rourke: “Civilization is an enormous improvement on the lack thereof.”

But even a jaded reader is likely to be won over by “Hamlet’s BlackBerry.” It convincingly argues that we’ve ceded too much of our existence to what Mr. Powers calls Digital Maximalism. Less scold and more philosopher, Mr. Powers certainly bemoans the spread of technology in our lives, but he also offers a compelling discussion of our dependence on contraptions and of the ways in which we might free ourselves from them. I buy it. I need quiet time.

To accept “Hamlet’s BlackBerry” is to accept that we are super busy. “It’s staggering,” writes Mr. Powers, “how many balls we keep in the air each day and how few we drop. We’re so busy, sometimes it seems as though busyness itself is the point.” Though I don’t find all that ball-juggling as staggering as the author, and I don’t know anyone who acts as if chaos is the point of it all, it would be foolish not to concede that our lives have become far more complex than ever before.

What can be done? What should be done? Mr. Powers’s answer is, in essence: Just say no. Try to cultivate a quieter or at least more focused life. The most persuasive and entertaining parts of “Hamlet’s BlackBerry” are found in Mr. Powers’s efforts to practice what he preaches. (Most of us, it should be noted, do not have the option of moving from a dense Washington, D.C., suburb to an idyllic Cape Cod town to grapple with the demons of gadgetry addiction.) His skeptical wife and kids agree that if they’re allowed to use their laptops during the week, they will turn the computers off on the weekend. Mr. Powers discovers that friends and relatives quickly adapt to the family’s digital disconnect (they call it the “Internet Sabbath”). The family spends more time face-to-face instead of Facebooking.

Mr. Powers proposes that we take into account the “need to connect outward, as well as the opposite need for time and space apart.” It is a powerful desire, the balanced life. Most of us yearn for it. Neither technology nor connectivity is injurious unless we allow them to consume us. Mr. Powers argues that letting life turn into a blizzard of snapshots—that’s what all those screenviews amount to, after all—isn’t enough. We would be happier freeing ourselves for genuine, unfiltered experience and then reflecting on it, not tweeting about it. The busy person will pause here to nod in sympathy.

I’m not sure that many of us have found that spacious place where our minds can wander free of technological intrusions, of beeps and buttons and emails and tweets, but “Hamlet’s BlackBerry” makes the case that we can—or should—find it. Recently, while watching some hypnotically dreadful movie, I instinctively reached for my BlackBerry to fetch some worthless biographical information about a third-rate actress that would do no more than clog my brain still further.

Then I remembered something in Mr. Powers’s book—which takes its title from a scene in “Hamlet” when the prince refers to an Elizabethan technical advance: specially coated paper or parchment that could be wiped clean. A book that included heavy, blank, erasable pages made from such paper—an almanac, for example—was called a table. “Yea, from the table of my memory / I’ll wipe away all trivial fond records,” Hamlet says. Or, as Mr. Powers paraphrases: ” ‘Don’t worry,’ Hamlet’s nifty device whispered, ‘you don’t have to know everything. Just the few things that matter.’ ”

Mr. Harsanyi is a nationally syndicated columnist for the Denver Post.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704895204575321220596115064.html

An empire gives way

Blogs are growing a lot more slowly. But specialists still thrive

ONLINE archaeology can yield surprising results. When John Kelly of Morningside Analytics, a market-research firm, recently pored over data from websites in Indonesia he discovered a “vast field of dead blogs”. Numbering several thousand, they had not been updated since May 2009. Like hastily abandoned cities, they mark the arrival of the Indonesian version of Facebook, the online social network.

Such swathes of digital desert are still rare in the blogosphere. And they should certainly not be taken as evidence that it has started to die. But signs are multiplying that the rate of growth of blogs has slowed in many parts of the world. In some countries growth has even stalled.

Blogs are a confection of several things that do not necessarily have to go together: easy-to-use publishing tools, reverse-chronological ordering, a breezy writing style and the ability to comment. But for maintaining an online journal or sharing links and photos with friends, services such as Facebook and Twitter (which broadcasts short messages) are quicker and simpler.

Charting the impact of these newcomers is difficult. Solid data about the blogosphere are hard to come by. Such signs as there are, however, all point in the same direction. Earlier in the decade, rates of growth for both the numbers of blogs and those visiting them approached the vertical. Now traffic to two of the most popular blog-hosting sites, Blogger and WordPress, is stagnating, according to Nielsen, a media-research firm. By contrast, Facebook’s traffic grew by 66% last year and Twitter’s by 47%. Growth in advertisements is slowing, too. Blogads, which sells them, says media buyers’ inquiries increased nearly tenfold between 2004 and 2008, but have grown by only 17% since then. Search engines show declining interest, too.

People are not tiring of the chance to publish and communicate on the internet easily and at almost no cost. Experimentation has brought innovations, such as comment threads, and the ability to mix thoughts, pictures and links in a stream, with the most recent on top. Yet Facebook, Twitter and the like have broken the blogs’ monopoly. Even newer entrants such as Tumblr have offered sharp new competition, in particular for handling personal observations and quick exchanges. Facebook, despite its recent privacy missteps, offers better controls to keep the personal private. Twitter limits all communication to 140 characters and works nicely on a mobile phone.

A good example of the shift is Iran. Thanks to the early translation into Persian of a popular blogging tool (and crowds of journalists who lacked an outlet after their papers were shut down), Iran had tens of thousands of blogs by 2009. Many were shut down, and their authors jailed, after the crackdown that followed the election in June of that year. But another reason for the dwindling number of blogs written by dissidents is that the opposition Green Movement is now on Facebook, says Hamid Tehrani, the Brussels-based Iran editor for Global Voices, a blog news site. Mir Hossein Mousavi, one of the movement’s leaders, has 128,000 Facebook followers. Facebook, explains Mr Tehrani, is a more efficient way to reach people.

The future for blogs may be special-interest publishing. Mr Kelly’s research shows that blogs tend to be linked within languages and countries, with each language-group in turn containing smaller pockets of densely linked sites. These pockets form around public subjects: politics, law, economics and knowledge professions. Even narrower specialisations emerge around more personal topics that benefit from public advice. Germany has a cluster for children’s crafts; France, for food; Sweden, for painting your house.

Such specialist cybersilos may work for now, but are bound to evolve further. Deutsche Blogcharts says the number of links between German blogs dropped last year, with posts becoming longer. Where will that end? Perhaps in a single, hugely long blog posting about the death of blogs.

__________

Full article: http://www.economist.com/node/16432794?story_id=16432794&source=hptextfeature

Hooked on Gadgets, and Paying a Mental Price

Brenda and Kord Campbell, with iPads, at breakfast

When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.

Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.

“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”

The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.

While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.

His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”

This is your brain on computers.

Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.

These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.

The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.

While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.

And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers.

“The technology is rewiring our brains,” said Nora Volkow, director of the National Institute on Drug Abuse and one of the world’s leading brain scientists. She and other researchers compare the lure of digital stimulation less to that of drugs and alcohol than to food and sex, which are essential but counterproductive in excess.

Technology use can benefit the brain in some ways, researchers say. Imaging studies show the brains of Internet users become more efficient at finding information. And players of some video games develop better visual acuity.

More broadly, cellphones and computers have transformed life. They let people escape their cubicles and work anywhere. They shrink distances and handle countless mundane tasks, freeing up time for more exciting pursuits.

For better or worse, the consumption of media, as varied as e-mail and TV, has exploded. In 2008, people consumed three times as much information each day as they did in 1960. And they are constantly shifting their attention. Computer users at work change windows or check e-mail or other programs nearly 37 times an hour, new research shows.

The nonstop interactivity is one of the most significant shifts ever in the human environment, said Adam Gazzaley, a neuroscientist at the University of California, San Francisco.

“We are exposing our brains to an environment and asking them to do things we weren’t necessarily evolved to do,” he said. “We know already there are consequences.”

Mr. Campbell, 43, came of age with the personal computer, and he is a heavier user of technology than most. But researchers say the habits and struggles of Mr. Campbell and his family typify what many experience — and what many more will, if trends continue.

For him, the tensions feel increasingly acute, and the effects harder to shake.

The Campbells recently moved to California from Oklahoma to start a software venture. Mr. Campbell’s life revolves around computers.

He goes to sleep with a laptop or iPhone on his chest, and when he wakes, he goes online. He and Mrs. Campbell, 39, head to the tidy kitchen in their four-bedroom hillside rental in Orinda, an affluent suburb of San Francisco, where she makes breakfast and watches a TV news feed in the corner of the computer screen while he uses the rest of the monitor to check his e-mail.

Major spats have arisen because Mr. Campbell escapes into video games during tough emotional stretches. On family vacations, he has trouble putting down his devices. When he rides the subway to San Francisco, he knows he will be offline 221 seconds as the train goes through a tunnel.

Their 16-year-old son, Connor, tall and polite like his father, recently received his first C’s, which his family blames on distraction from his gadgets. Their 8-year-old daughter, Lily, like her mother, playfully tells her father that he favors technology over family.

“I would love for him to totally unplug, to be totally engaged,” says Mrs. Campbell, who adds that he becomes “crotchety until he gets his fix.” But she would not try to force a change.

“He loves it. Technology is part of the fabric of who he is,” she says. “If I hated technology, I’d be hating him, and a part of who my son is too.”

Always On

Mr. Campbell, whose given name is Thomas, had an early start with technology in Oklahoma City. When he was in third grade, his parents bought him Pong, a video game. Then came a string of game consoles and PCs, which he learned to program.

In high school, he balanced computers, basketball and a romance with Brenda, a cheerleader with a gorgeous singing voice. He studied too, with focus, uninterrupted by e-mail. “I did my homework because I needed to get it done,” he said. “I didn’t have anything else to do.”

He left college to help with a family business, then set up a lawn mowing service. At night he would read, play video games, hang out with Brenda and, as she remembers it, “talk a lot more.”

In 1996, he started a successful Internet provider. Then he built the start-up that he sold for $1.3 million in 2003 to LookSmart, a search engine.

Mr. Campbell loves the rush of modern life and keeping up with the latest information. “I want to be the first to hear when the aliens land,” he said, laughing. But other times, he fantasizes about living in pioneer days when things moved more slowly: “I can’t keep everything in my head.”

No wonder. As he came of age, so did a new era of data and communication.

At home, people consume 12 hours of media a day on average, when an hour spent with, say, the Internet and TV simultaneously counts as two hours. That compares with five hours in 1960, say researchers at the University of California, San Diego. Computer users visit an average of 40 Web sites a day, according to research by RescueTime, which offers time-management tools.

As computers have changed, so has the understanding of the human brain. Until 15 years ago, scientists thought the brain stopped developing after childhood. Now they understand that its neural networks continue to develop, influenced by things like learning skills.

So not long after Eyal Ophir arrived at Stanford in 2004, he wondered whether heavy multitasking might be leading to changes in a characteristic of the brain long thought immutable: that humans can process only a single stream of information at a time.

Going back a half-century, tests had shown that the brain could barely process two streams, and could not simultaneously make decisions about them. But Mr. Ophir, a student-turned-researcher, thought multitaskers might be rewiring themselves to handle the load.

His passion was personal. He had spent seven years in Israeli intelligence after being weeded out of the air force — partly, he felt, because he was not a good multitasker. Could his brain be retrained?

Mr. Ophir, like others around the country studying how technology bent the brain, was startled by what he discovered.

The Myth of Multitasking

The test subjects were divided into two groups: those classified as heavy multitaskers based on their answers to questions about how they used technology, and those who were not.

In a test created by Mr. Ophir and his colleagues, subjects at a computer were briefly shown an image of red rectangles. Then they saw a similar image and were asked whether any of the rectangles had moved. It was a simple task until the addition of a twist: blue rectangles were added, and the subjects were told to ignore them.

The multitaskers then did a significantly worse job than the non-multitaskers at recognizing whether red rectangles had changed position. In other words, they had trouble filtering out the blue ones — the irrelevant information.

So, too, the multitaskers took longer than non-multitaskers to switch among tasks, like differentiating vowels from consonants and then odd from even numbers. The multitaskers were shown to be less efficient at juggling problems.

Other tests at Stanford, an important center for research in this fast-growing field, showed multitaskers tended to search for new information rather than accept a reward for putting older, more valuable information to work.

Researchers say these findings point to an interesting dynamic: multitaskers seem more sensitive than non-multitaskers to incoming information.

The results also illustrate an age-old conflict in the brain, one that technology may be intensifying. A portion of the brain acts as a control tower, helping a person focus and set priorities. More primitive parts of the brain, like those that process sight and sound, demand that it pay attention to new information, bombarding the control tower when they are stimulated.

Researchers say there is an evolutionary rationale for the pressure this barrage puts on the brain. The lower-brain functions alert humans to danger, like a nearby lion, overriding goals like building a hut. In the modern world, the chime of incoming e-mail can override the goal of writing a business plan or playing catch with the children.

“Throughout evolutionary history, a big surprise would get everyone’s brain thinking,” said Clifford Nass, a communications professor at Stanford. “But we’ve got a large and growing group of people who think the slightest hint that something interesting might be going on is like catnip. They can’t ignore it.”

Mr. Nass says the Stanford studies are important because they show multitasking’s lingering effects: “The scary part for guys like Kord is, they can’t shut off their multitasking tendencies when they’re not multitasking.”

Melina Uncapher, a neurobiologist on the Stanford team, said she and other researchers were unsure whether the muddied multitaskers were simply prone to distraction and would have had trouble focusing in any era. But she added that the idea that information overload causes distraction was supported by more and more research.

A study at the University of California, Irvine, found that people interrupted by e-mail reported significantly increased stress compared with those left to focus. Stress hormones have been shown to reduce short-term memory, said Gary Small, a psychiatrist at the University of California, Los Angeles.

Preliminary research shows some people can more easily juggle multiple information streams. These “supertaskers” represent less than 3 percent of the population, according to scientists at the University of Utah.

Other research shows computer use has neurological advantages. In imaging studies, Dr. Small observed that Internet users showed greater brain activity than nonusers, suggesting they were growing their neural circuitry.

At the University of Rochester, researchers found that players of some fast-paced video games can track the movement of a third more objects on a screen than nonplayers. They say the games can improve reaction and the ability to pick out details amid clutter.

“In a sense, those games have a very strong both rehabilitative and educational power,” said the lead researcher, Daphne Bavelier, who is working with others in the field to channel these changes into real-world benefits like safer driving.

There is a vibrant debate among scientists over whether technology’s influence on behavior and the brain is good or bad, and how significant it is.

“The bottom line is, the brain is wired to adapt,” said Steven Yantis, a professor of brain sciences at Johns Hopkins University. “There’s no question that rewiring goes on all the time,” he added. But he said it was too early to say whether the changes caused by technology were materially different from others in the past.

Mr. Ophir is loath to call the cognitive changes bad or good, though the impact on analysis and creativity worries him.

He is not just worried about other people. Shortly after he came to Stanford, a professor thanked him for being the one student in class paying full attention and not using a computer or phone. But he recently began using an iPhone and noticed a change; he felt its pull, even when playing with his daughter.

“The media is changing me,” he said. “I hear this internal ping that says: check e-mail and voice mail.”

“I have to work to suppress it.”

Kord Campbell does not bother to suppress it, or no longer can.

Interrupted by a Corpse

It is a Wednesday in April, and in 10 minutes, Mr. Campbell has an online conference call that could determine the fate of his new venture, called Loggly. It makes software that helps companies understand the clicking and buying patterns of their online customers.

Mr. Campbell and his colleagues, each working from a home office, are frantically trying to set up a program that will let them share images with executives at their prospective partner.

But at the moment when Mr. Campbell most needs to focus on that urgent task, something else competes for his attention: “Man Found Dead Inside His Business.”

That is the tweet that appears on the left-most of Mr. Campbell’s array of monitors, which he has expanded to three screens, at times adding a laptop and an iPad.

On the left screen, Mr. Campbell follows the tweets of 1,100 people, along with instant messages and group chats. The middle monitor displays a dark field filled with computer code, along with Skype, a service that allows Mr. Campbell to talk to his colleagues, sometimes using video. The monitor on the right keeps e-mail, a calendar, a Web browser and a music player.

Even with the meeting fast approaching, Mr. Campbell cannot resist the tweet about the corpse. He clicks on the link in it, glances at the article and dismisses it. “It’s some article about something somewhere,” he says, annoyed by the ads for jeans popping up.

The program gets fixed, and the meeting turns out to be fruitful: the partners are ready to do business. A colleague says via instant message: “YES.”

Other times, Mr. Campbell’s information juggling has taken a more serious toll. A few weeks earlier, he once again overlooked an e-mail message from a prospective investor. Another time, Mr. Campbell signed the company up for the wrong type of business account on Amazon.com, costing $300 a month for six months before he got around to correcting it. He has burned hamburgers on the grill, forgotten to pick up the children and lingered in the bathroom playing video games on an iPhone.

Mr. Campbell can be unaware of his own habits. In a two-and-a-half hour stretch one recent morning, he switched rapidly between e-mail and several other programs, according to data from RescueTime, which monitored his computer use with his permission. But when asked later what he was doing in that period, Mr. Campbell said he had been on a long Skype call, and “may have pulled up an e-mail or two.”

The kind of disconnection Mr. Campbell experiences is not an entirely new problem, of course. As they did in earlier eras, people can become so lost in work, hobbies or TV that they fail to pay attention to family.

Mr. Campbell concedes that, even without technology, he may work or play obsessively, just as his father immersed himself in crossword puzzles. But he says this era is different because he can multitask anyplace, anytime.

“It’s a mixed blessing,” he said. “If you’re not careful, your marriage can fall apart or your kids can be ready to play and you’ll get distracted.”

The Toll on Children

Father and son sit in armchairs. Controllers in hand, they engage in a fierce video game battle, displayed on the nearby flat-panel TV, as Lily watches.

They are playing Super Smash Bros. Brawl, a cartoonish animated fight between characters that battle using anvils, explosives and other weapons.

“Kill him, Dad,” Lily screams. To no avail. Connor regularly beats his father, prompting expletives and, once, a thrown pillow. But there is bonding and mutual respect.

“He’s a lot more tactical,” says Connor. “But I’m really good at quick reflexes.”

Screens big and small are central to the Campbell family’s leisure time. Connor and his mother relax while watching TV shows like “Heroes.” Lily has an iPod Touch, a portable DVD player and her own laptop, which she uses to watch videos, listen to music and play games.

Lily, a second-grader, is allowed only an hour a day of unstructured time, which she often spends with her devices. The laptop can consume her.

“When she’s on it, you can holler her name all day and she won’t hear,” Mrs. Campbell said.

Researchers worry that constant digital stimulation like this creates attention problems for children with brains that are still developing, who already struggle to set priorities and resist impulses.

Connor’s troubles started late last year. He could not focus on homework. No wonder, perhaps. On his bedroom desk sit two monitors, one with his music collection, one with Facebook and Reddit, a social site with news links that he and his father love. And his iPhone kept him texting relentlessly with his girlfriend.

When he studied, “a little voice would be saying, ‘Look up’ at the computer, and I’d look up,” Connor said. “Normally, I’d say I want to only read for a few minutes, but I’d search every corner of Reddit and then check Facebook.”

His Web browsing informs him. “He’s a fact hound,” Mr. Campbell brags. “Connor is, other than programming, extremely technical. He’s 100 percent Internet savvy.”

But the parents worry too. “Connor is obsessed,” his mother said. “Kord says we have to teach him balance.”

So in January, they held a family meeting. Study time now takes place in a group setting at the dinner table after everyone has finished eating. It feels, Mr. Campbell says, like togetherness.

No Vacations

For spring break, the family rented a cottage in Carmel, Calif. Mrs. Campbell hoped everyone would unplug.

But the day before they left, the iPad from Apple came out, and Mr. Campbell snapped one up. The next night, their first on vacation, “We didn’t go out to dinner,” Mrs. Campbell mourned. “We just sat there on our devices.”

She rallied the troops the next day to the aquarium. Her husband joined them for a bit but then begged off to do e-mail on his phone.

Later she found him playing video games.

The trip came as Mr. Campbell was trying to raise several million dollars for his new venture, a goal that he achieved. Brenda said she understood that his pursuit required intensity but was less understanding of the accompanying surge in video game playing.

His behavior brought about a discussion between them. Mrs. Campbell said he told her that he was capable of logging off, citing a trip to Hawaii several years ago that they called their second honeymoon.

“What trip are you thinking about?” she said she asked him. She recalled that he had spent two hours a day online in the hotel’s business center.

On Thursday, their fourth day in Carmel, Mr. Campbell spent the day at the beach with his family. They flew a kite and played whiffle ball.

Connor unplugged too. “It changes the mood of everything when everybody is present,” Mrs. Campbell said.

The next day, the family drove home, and Mr. Campbell disappeared into his office.

Technology use is growing for Mrs. Campbell as well. She divides her time between keeping the books of her husband’s company, homemaking and working at the school library. She checks e-mail 25 times a day, sends texts and uses Facebook.

Recently, she was baking peanut butter cookies for Teacher Appreciation Day when her phone chimed in the living room. She answered a text, then became lost in Facebook, forgot about the cookies and burned them. She started a new batch, but heard the phone again, got lost in messaging, and burned those too. Out of ingredients and shamed, she bought cookies at the store.

She feels less focused and has trouble completing projects. Some days, she promises herself she will ignore her device. “It’s like a diet — you have good intentions in the morning and then you’re like, ‘There went that,’ ” she said.

Mr. Nass at Stanford thinks the ultimate risk of heavy technology use is that it diminishes empathy by limiting how much people engage with one another, even in the same room.

“The way we become more human is by paying attention to each other,” he said. “It shows how much you care.”

That empathy, Mr. Nass said, is essential to the human condition. “We are at an inflection point,” he said. “A significant fraction of people’s experiences are now fragmented.”

Matt Richtel, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/06/07/technology/07brain.html

Immersed and Confused

Jaw-dropping graphics, engrossing action and . . . vapid storytelling.

Tom Bissell has purchased four Xbox 360 videogame consoles in the past five years. And he has given away three. In an attempt to kick his videogame habit, Mr. Bissell would bestow each recently acquired console on a friend or family member, only to run out and buy another one a short time later. No doubt Microsoft is gratified. We should just be glad that Mr. Bissell was able to drag himself away from playing “Grand Theft Auto” and “Fallout” long enough to write “Extra Lives,” his exploration of, as the subtitle has it, “Why Video Games Matter.”

Unusually for the videogame book genre, Mr. Bissell brings to his subject not only a handy way with a game controller but also a deft literary style and a journalist’s eye. He writes for Harper’s magazine and The New Yorker and is the author of the short-story collection “God Lives in St. Petersburg” (2005). “Extra Lives” is mostly a travelogue recounting Mr. Bissell’s journey, over the course of several years, through a series of immense, immersive videogames, such as “Far Cry 2.” It’s much less tedious than it sounds.

Mr. Bissell is so descriptively alert that his accounts of pixelated derring-do may well interest even those who are immune to the charm of videogames. Here, for instance, is his description of a scene in “Fallout 3,” a post-apocalyptic, role-playing shoot-’em-up game that mostly takes place in Washington, D.C.: “I was running up the stairs of what used to be the Dupont Circle metro station and, as I turned to bash in the brainpan of a radioactive ghoul, noticed the playful, lifelike way in which the high-noon sunlight streaked along the grain of my sledgehammer’s wooden handle.”

He’s funny, too. In a section arguing that the artistic merits of videogames can’t be judged by the worst of the breed, he writes: “Every form of art, popular or otherwise, has its ghettos”—for instance, “the crack houses along Michael Bay Avenue.”

But what makes “Extra Lives” so winning is Mr. Bissell’s sense of absurdity. He recounts a discussion with some fellow customers at a videogame store about the artistic merits of the game “Left 4 Dead.” The little colloquy continued until he realized: “I was contrasting my aesthetic sensitivity to that of some teenagers about a game that concerns itself with shooting as many zombies as possible. It is moments like this that can make it so dispiritingly difficult to care about videogames.”

Running through “Extra Lives” is a thread of seriousness. Mr. Bissell wonders why, despite their technical sophistication, videogames are so bad at telling stories. It’s a more complex question than you might think.

The best narrative art forms are necessarily authoritarian. In books, film or theater, the creator tells his story with near total control. In a certain way, the audience might as well not even exist. Videogames are participatory. And the fact of participation creates all sorts of problems for narrative authority.

Yet many videogame producers do aspire to tell meaningful stories. The game “BioShock,” for instance, attempts to explore the philosophical tensions within Ayn Rand’s Objectivism and to meditate on the costs of individual freedom—but with plenty of genetic mutants to splatter. The script for the game “Mass Effect”—that is, the on-screen characters’ dialogue, not the computer code—is 300,000 words. But to little avail. The stories just aren’t much as stories. Videogames seem “designed by geniuses and written by Ed Wood Jr.,” Mr. Bissell laments.

The most interesting person Mr. Bissell crosses paths with is Jonathan Blow, a videogame designer and a sort of philosopher of the medium. Mr. Blow has spent a good deal of his life thinking about storytelling and the “dynamical meaning” of simply getting through a game. He believes that the central problem with storytelling in videogames is that the actual mechanics of playing a game—moving your character to jump over a barrel, or eat a power pellet, or punch an enemy—are divorced from the stories that videogame makers are trying to tell.

Like all games, videogames are constructed around rules. You can shoot this. You can’t shoot that. This hamburger restores your health. That sword gives you extra power. And so on. “Games have rules, rules have meaning, and game-play is the process by which those rules are tested and explored,” Mr. Bissell explains. And as Mr. Blow notes, if those rules are fake, unimportant or arbitrary, audiences sense it. And, let’s face it, assigning power to a hamburger is a little arbitrary. No matter how impressive a game is, in its rules-ridden immersiveness it will not be able to tell a coherent, meaningful story. The very nature of the medium, Mr. Blow believes, “prevents the stories from being good.”

As if to prove the rule, Mr. Blow designed a game called “Braid.” It concerns a young scientist who discovers how to go back in time and decides to use this power to revisit the period when he lost his great love. “Braid” is a meditation on time travel, choices and consequences. A crucial aspect of playing the game is the player’s ability, at any moment, to rewind the clock to undo his mistakes. It is “dynamical meaning” in harmony with narrative ambition. And because of it, “Braid” occupies a lonely place in the pantheon of videogames as something that approaches art.

When Mr. Blow departs the scene in “Extra Lives,” the book loses some of its sharpness. And toward the end reader interest may flag even more as Mr. Bissell’s videogame addiction merges unsettlingly with his cocaine addiction. Drug stories, like dreams, are interesting only to the person who has them.

Even so, “Extra Lives” is the most fun you’ll ever have reading about videogames. It may prove even more entertaining than playing them.

Mr. Last is a senior writer at The Weekly Standard.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704875604575280760632222980.html

A Web Smaller Than a Divide

AT first glance, there’s a clear need for expanding the Web beyond the Latin alphabet, including in the Arabic-speaking world. According to the Madar Research Group, about 56 million Arabs, or 17 percent of the Arab world, use the Internet, and those numbers are expected to grow 50 percent over the next three years.

Many think that an Arabic-alphabet Web will bring millions online, helping to bridge the socio-economic divides that pervade the region.

But such hopes are overblown. Although there are still problems — encoding glitches and the lack of a standard Arabic keyboard — virtually any Arabic speaker who uses the Web has already adjusted to these challenges in his or her own way. And it’s no big deal: educated Arabs are exposed, in various degrees, to English and French in school.

The very idea of an “Arabic Web” is misleading. True, before Icann announced that Arabic characters could be used throughout domain names, U.R.L.’s had to be written at least in part in Latin script. But once one passes the Latin domain gate, the rest is all done in Arabic characters anyway.

Nowadays almost every computer can be made to write Arabic, or any other script, and there is plenty of Arabic software. Most late-model electronic devices are equipped with Arabic. I text with friends using Arabic on my iPhone. Many computer keyboards are now even made with Arabic letters printed on the keys.

And where there’s no readily available solution, Arabic Internet users have found a way to adjust. Many use the Latin script to transliterate messages in Arabic when there’s no conversion program or font set available. Phonetic spelling is common. For sounds that have no written equivalent in Latin script, they’ve gotten creative: for example, the number 3 is commonly used for the “ayn” sound and 7 stands in for the “ha,” because their shapes closely resemble the corresponding Arabic letters.
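This chat-alphabet convention, often called “Arabizi,” is at bottom a small substitution table. As a toy illustration (the table and function name here are hypothetical, not from the article), the digit substitutions it describes could be sketched like this:

```python
# A toy transliteration table for the Arabic chat alphabet described above.
# Digits stand in for Arabic letters whose shapes they resemble.
CHAT_TO_ARABIC = {
    "3": "ع",  # the "ayn" sound
    "7": "ح",  # the "ha" sound
    "2": "ء",  # hamza, another common substitution beyond those in the article
}

def chat_to_arabic(text: str) -> str:
    """Replace chat-alphabet digits with the Arabic letters they stand for."""
    return "".join(CHAT_TO_ARABIC.get(ch, ch) for ch in text)

print(chat_to_arabic("3arabi"))  # the leading 3 becomes the letter ayn
```

A real transliterator would also need multi-character sequences and context rules; this sketch only shows the digit-for-letter idea the article describes.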

So what will happen? In the short term, of course, some additional users will move to the Web, especially as they take advantage of the new range of domain names. Over time, though, this will peter out, because, as in most of the world, the digital divide still tracks closely with the material and political divide. The haves are the ones using computers, and many of them are also the ones long accustomed to working with Latin script. The have-nots are unlikely to have the luxury of jumping online. Changing the alphabet used to form domain names won’t exactly attract millions of poor Arabs to the Internet.

We should all celebrate the diversity that comes with an Internet no longer tied to a single alphabet. But we should be realistic, too. The Web may be a revolutionary technology, but an Arabic Web is not about to spur an Internet revolution.

Sinan Antoon, an assistant professor of Arabic literature at New York University, is the author of the novel “I`jaam: An Iraqi Rhapsody.”

__________

Full article: http://www.nytimes.com/2010/05/16/opinion/16Antoon.html

Search Engine of the Song Dynasty

BAIDU.COM, the popular search engine often called the Chinese Google, got its name from a poem written during the Song Dynasty (960-1279). The poem is about a man searching for a woman at a busy festival, about the search for clarity amid chaos. Together, the Chinese characters bǎi and dù mean “hundreds of ways,” and come out of the last lines of the poem: “Restlessly I searched for her thousands, hundreds of ways./ Suddenly I turned, and there she was in the receding light.”

Baidu, rendered in Chinese, is rich with linguistic, aesthetic and historical meaning. But written phonetically in Latin letters (as I must do here because of the constraints of the newspaper medium and so that more American readers can understand), it is barely anchored to the two original characters; along the way, it has lost its precision and its poetry.

As Web addresses increasingly transition to non-Latin characters as a result of the changing rules for domain names, that series of Latin letters Chinese people usually see at the top of the screen when they search for something on Baidu may finally turn into intelligible words: “a hundred ways.”

Of course, this expansion of languages for domain names could lead to confusion: users seeking to visit Web sites with names in a script they don’t read could have difficulty putting in the addresses, and Web browsers may need to be reconfigured to support non-Latin characters. The previous system, with domain names composed of numbers, punctuation marks and Latin letters without accents, promoted standardization, wrangling into consistency and simplicity one small part of the Internet. But something else, something important, has been lost.

Part of the beauty of the Chinese language comes from a kind of divisibility not possible in a Latin-based language. Chinese is composed of approximately 20,000 single-syllable characters, 10,000 of which are in common use. These characters each mean something on their own; they are also combined with other characters to form hundreds of thousands of multisyllabic words. Nǐhǎo, for example, Chinese for “Hello,” is composed of nǐ — “you,” and hǎo — “good.” Isn’t “You good” — both as a statement and a question — a marvelous and strangely precise breakdown of what we’re really saying when we greet someone?

The Romanization of Chinese into a phonetic system called Pinyin, using the Latin alphabet and diacritics (to indicate the four distinguishing tones in Mandarin), was developed by the Chinese government in the 1950s. Pinyin makes the language easier to learn and pronounce, and it has the added benefit of making Chinese characters easy to input into a computer. Yet Pinyin, invented for ease and standards, only represents sound. In Chinese, there are multiple characters with the exact same sound. The sound “bǎi,” for example, means 100, but it can also mean cypress, or arrange. And “Baidu,” without diacritics, can mean “a failed attempt to poison” or “making a religion of gambling.” In the case of Baidu.com, the word, in Latin letters, has slipped away from its original context and meaning, and been turned into a brand.

Language is such a basic part of our lives, it seems ordinary and transparent. But language is strange and magical, too: it dredges up history and memory; it simultaneously bestows and destabilizes meaning. Each of the thousands of languages spoken around the world has its own system and rules, its own subversions, its own quixotic beauty. Whenever you try to standardize those languages, whether on the Internet, in schools or in literature, you lose something. What we gain in consistency costs us in precision and beauty.

When Chinese speakers Baidu (like Google, it too is a verb), we look for information on the Internet using a branded search engine. But when we see the characters for bǎi dù, we might, for one moment, engage with the poetry of our language, remember that what we are really trying to do is find what we were seeking in the receding light. Those sets of meanings, layered like a palimpsest, might appear suddenly, where we least expect them, in the address bar at the top of our browsers. And in some small way, those words, in our own languages, might help us see with clarity, and help us to make sense of the world.

Ruiyan Xu is the author of the forthcoming novel “The Lost and Forgotten Languages of Shanghai.”

__________

Full article: http://www.nytimes.com/2010/05/16/opinion/16xu.html

Goddess English of Uttar Pradesh

Mumbai, India

A FORTNIGHT ago, in a poor village in Uttar Pradesh, in northern India, work began on a temple dedicated to Goddess English. Standing on a wooden desk was the idol of English — a bronze figure in robes, wearing a wide-brimmed hat and holding aloft a pen. About 1,000 villagers had gathered for the groundbreaking, most of them Dalits, the untouchables at the bottom of India’s caste system. A social activist promoting the study of English, dressed in a Western suit despite the hot sun and speaking as if he were imparting religious wisdom, said, “Learn A, B, C, D.” The temple is a gesture of defiance from the Dalits to the nation’s elite as well as a message to the Dalit young — English can save you.

A few days later, the Internet Corporation for Assigned Names and Numbers, a body that oversees domain names on the Web, announced a different kind of liberation: it has taken the first steps to free the online world from the Latin script, which English and most Web addresses are written in. In some parts of the world, Web addresses can already be written in non-Latin scripts, though until this change, all needed the Latin alphabet for country codes, like “.sa” for Saudi Arabia. But now that nation, along with Egypt and the United Arab Emirates, has been granted a country code in the Arabic alphabet, and Russia has gotten a Cyrillic one. Soon, others will follow.
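Under the hood, these internationalized names still travel over the ASCII-only domain name system: each non-Latin label is converted to an “xn--”-prefixed ASCII string by the Punycode-based IDNA encoding. A minimal sketch, using Python’s built-in idna codec (which implements the older IDNA 2003 rules; modern registries follow the updated IDNA 2008 standard):

```python
# Convert a Cyrillic domain label to its ASCII-compatible form and back,
# using Python's built-in "idna" codec.
label = "пример"  # Russian for "example"

# Encoding produces an ASCII string with the "xn--" prefix that
# actually travels over the DNS.
ascii_form = label.encode("idna").decode("ascii")
print(ascii_form)  # an "xn--"-prefixed ASCII string

# Decoding recovers the original Unicode label: the mapping is lossless.
round_trip = ascii_form.encode("ascii").decode("idna")
print(round_trip == label)
```

The second print shows the round trip is exact, which is the point of the scheme: browsers and registries exchange plain ASCII while users see their own script.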

Icann calls it a “historic” development, and that is true, but only because a great cliché has finally been defeated. The Internet as a unifier of humanity was always literary nonsense, on par with “truth will triumph.”

The universality of the Latin script online was an accident of its American invention, not a global intention. The world does not want to be unified. What is the value of belonging if you belong to all? It is a fragmented world by choice, and so it was always a fragmented Web. Now we can stop pretending — but that doesn’t mean this is a change worth celebrating.

Many have argued that the introduction of domain names and country codes in non-Latin scripts will help the Web finally reach the world’s poor. But it is really hard to believe that what separates an Egyptian or a Tamil peasant from the Internet is the requirement to type in a few foreign characters. There are far greater obstacles. It is even harder to believe that all the people who are demanding their freedom from the Latin script are doing it for humanitarian reasons. A big part of the issue here is nationalism, and the East’s imagination of the West as an adversary. This is just the latest episode in an ancient campaign.

A decade ago I met Mahatma Gandhi’s great-grandson, Tushar Gandhi, a jolly, endearing, meat-eating man. He was distraught that the Indians who were creating Web sites were choosing the dot-com domain over the more patriotic dot-in. He was trying to convince Indians to flaunt their nationality. He told me: “As long as we live in this world, there will be boundaries. And we need to be proud of what we call home.”

It is the same sentiment that is now inspiring small groups of Indians to demand top-level domain names (the suffix that follows the dot in a Web address) in their own native scripts, like Tamil. The Tamil language is spoken in the south Indian state of Tamil Nadu, where I spent the first 20 years of my life, and where I have seen fierce protests against the colonizing power of Hindi. The International Forum for Information Technology in Tamil, a tech advocacy and networking group, has petitioned Icann for top-level domain names in the Tamil script. But if it cares about increasing the opportunities available to poor Tamils, it should be promoting English, not Tamil.

There’s no denying that at the heart of India’s new prosperity is a foreign language, and that the opportunistic acceptance of English has improved the lives of millions of Indians. There are huge benefits in exploiting a stronger cultural force instead of defying it. Imagine what would have happened if the 12th-century Europeans who first encountered Hindu-Arabic numerals (0, 1, 2, 3) had rejected them as a foreign oddity and persisted with the cumbersome Roman numerals (IV, V). The extraordinary advances in mathematics made by Europeans would probably have been impossible.

But then the world is what it is. There is an expression popularized by the spread of the Internet: the global village. Though intended as a celebration of the modern world’s inclusiveness, it is really an accurate condemnation of that world. After all, a village is a petty place — filled with old grudges, comical self-importance and imagined fears.

Manu Joseph, the deputy editor of the Indian newsweekly OPEN, is the author of the forthcoming novel “Serious Men.”

__________

Full article: http://www.nytimes.com/2010/05/16/opinion/16joseph.html

Five Ways to Keep Online Criminals at Bay

THE Web is a fount of information, a busy marketplace, a thriving social scene — and a den of criminal activity.

Criminals have found abundant opportunities to undertake stealthy attacks on ordinary Web users that can be hard to stop, experts say. Hackers are lacing Web sites — often legitimate ones — with so-called malware, which can silently infiltrate visiting PCs to steal sensitive personal information and then turn the computers into “zombies” that can be used to spew spam and more malware onto the Internet.

At one time, virus attacks were obvious to users, said Alan Paller, director of research at the SANS Institute, a training organization for computer security professionals. Now, he explained, the attacks are far more silent. “Now it’s much, much easier infecting trusted Web sites,” he said, “and getting your zombies that way.”

And there are myriad lures aimed at conning people into installing nefarious programs, buying fake antivirus software or turning over personal information that can be used in identity fraud.

“The Web opened up a lot more opportunities for attacking” computer users and making money, said Maxim Weinstein, executive director of StopBadware, a nonprofit consumer advocacy group that receives funding from Google, PayPal, Mozilla and others.

Google says its automated scans of the Internet recently turned up malware on roughly 300,000 Web sites, double the number it recorded two years ago. Each site can contain many infected pages. Meanwhile, malware doubled last year, to 240 million unique attacks, according to Symantec, a maker of security software. And that does not count the scourge of fake antivirus software and other scams.

So it is more important than ever to protect yourself. Here are some basic tips for thwarting these attacks.

Protect the Browser

The most direct line of attack is the browser, said Vincent Weafer, vice president of Symantec Security Response. Online criminals can use programming flaws in browsers to get malware onto PCs in “drive-by” downloads without users ever noticing.

Internet Explorer and Firefox are the most targeted browsers because they are the most popular. If you use current versions and download security updates as they become available, you can surf safely. But there can still be exposure between when a vulnerability is discovered and an update becomes available, so you will need up-to-date security software as well to try to block any attacks that may emerge, especially if you have a Windows PC.

It can help to use a more obscure browser like Chrome from Google, which also happens to be the newest browser on the market and, as such, includes some security advances that make attacks more difficult.

Get Adobe Updates

Most consumers are familiar with Adobe Reader, for PDF files, and Adobe’s Flash Player. In the last year, a virtual epidemic of attacks has exploited their flaws; almost half of all attacks now come hidden in PDF files, Mr. Weafer said. “No matter what browser you’re using,” he said, “you’re using the PDF Reader, you’re using the Adobe Flash Player.”

Part of the problem is that many computers run old, vulnerable versions. But as of April, it has become easier to get automatic updates from Adobe, if you follow certain steps.

To update Reader, open the application and then select “Help” and “Check for Updates” from the menu bar. Since April, Windows users have been able to choose to get future updates automatically without additional prompts by clicking “Edit” and “Preferences,” then choosing “Updater” from the list and selecting “Automatically install updates.” Mac users can arrange updates using a similar procedure, though Apple requires that they enter their password each time an update is installed.

Adobe said it did not make silent automatic updates available previously because many users, especially at companies, were averse to them. To get the latest version of Flash Player, visit Adobe’s Web site.

Any software can be vulnerable. Windows PC users can identify vulnerable or out-of-date software using Secunia PSI, a free tool that scans machines and alerts users to potential problems.

Beware Malicious Ads

An increasingly popular way to get attacks onto Web sites people trust is to slip them into advertisements, usually by duping small-time ad networks. Malvertising, as this practice is known, can exploit software vulnerabilities or dispatch deceptive pop-up messages.

A particularly popular swindle involves an alert that a virus was found on the computer, followed by urgent messages to buy software to remove it. Of course, there is no virus and the security software, known as scareware, is fake. It is a ploy to get credit card numbers and $40 or $50. Scareware accounts for half of all malware delivered in ads, up fivefold from a year ago, Google said.

Closing the pop-up or killing the browser will usually end the episode. But if you encounter this scam, check your PC with trusted security software or Microsoft’s free Malicious Software Removal Tool. If you have picked up something nasty, you are in good company; Microsoft cleaned scareware from 7.8 million PCs in the second half of 2009, up 47 percent from the 5.3 million in the first half, the company said.

Another tool that can defend against malvertising, among other Web threats, is K9 Web Protection, free from Blue Coat Systems. Though it is marketed as parental-control software, K9 can be configured to look only for security threats like malware, spyware and phishing attacks — and to bark each time it stops one.

Poisoned Search Results

Online criminals are also trying to manipulate search engines into placing malicious sites toward the top of results pages for popular keywords. According to a recent Google study, 60 percent of malicious sites that embed hot keywords try to distribute scareware to the computers of visitors.

Google and search engines like Microsoft’s Bing are working to detect malicious sites and remove them from their indexes. Free tools like McAfee’s SiteAdvisor and the Firefox add-on Web of Trust can also help — warning about potentially dangerous links.

Antisocial Media

Attackers also use e-mail, instant messaging, blog comments and social networks like Facebook and Twitter to induce people to visit their sites.

It’s best to accept “friend” requests only from people you know, and to guard your passwords. Phishers are trying to filch login information so they can infiltrate accounts, impersonate you to try to scam others out of money and gather personal information about you and your friends.

Also beware the Koobface worm, variants of which have been taking aim at users of Facebook and other social sites for more than a year. It typically promises a video of some kind and asks you to download a fake multimedia-player codec to view the video. If you do so, your PC is infected with malware that turns it into a zombie (making it part of a botnet, or group of computers, that can spew spam and malware across the Internet).

But most important, you need to keep your wits about you. Criminals are using increasingly sophisticated ploys, and your best defense on the Web may be a healthy level of suspicion.

Riva Richmond, New York Times

__________

Full article: http://www.nytimes.com/2010/05/20/technology/personaltech/20basics.html

To catch a thief

Spotting video piracy

A new way to scan digital videos for copyright infringement

ONLINE video piracy is a big deal. Google’s YouTube, for example, is being sued for more than $1 billion by Viacom, a media company. But it is extremely hard to tell if a video clip is copyrighted, particularly since 24 hours of video are uploaded to YouTube every minute. Now a new industry standard promises to be able to identify pirated material with phenomenal accuracy in a matter of seconds.

The technique, developed by NEC, a Japanese technology company, and later tweaked by Mitsubishi Electric, has been adopted by the International Organisation for Standardisation (ISO) for MPEG-7, the latest standard for describing audio-visual content. The two existing methods do not do a very good job. One is digital “watermarking,” in which a bit of computer code is embedded in a file to identify it. This works only if content owners take the trouble to affix the watermark—and then it only spots duplicates, not other forms of piracy such as recording a movie at a cinema. A second approach is to extract a numeric code or “digital fingerprint” from the content file itself by comparing, say, the colours or texture of regions in a frame. But this may not work if the file is altered, such as by cropping or overlaying text.

NEC’s technology extracts a digital signature that works even if the video is altered. It does this by comparing the brightness in 380 predefined “regions of interest” in a frame of the video. This could be done for all or only some of the frames in a film. The brightness is assigned a value: -1, 0, or +1. These values are encapsulated in a digital signature of 76 bytes per frame.

The beauty of the technique is that it encompasses both granularity and generality. The 380 regions of interest are numerous, so an image can be identified even if it is doctored. At the same time, the array of three values simplifies the complexity in the image, so even if a video is of poor quality or a different hue, the information about its relative luminance is retained. Moreover, the compact signature is computationally easy to extract and use.
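The scheme the article describes can be sketched in a few lines of code. This is only an illustration of the idea, not NEC’s actual algorithm: the region layout, the brightness threshold, and the base-3 packing format here are all assumptions chosen to match the figures in the article (380 regions, ternary values, 76 bytes per frame).

```python
REGIONS = 380   # predefined "regions of interest" per frame, per the article
SIG_BYTES = 76  # 380 ternary values pack into 76 bytes at 5 digits per byte

def ternary_signature(region_means, frame_mean, threshold=8.0):
    """Assign each region -1, 0, or +1 by comparing its mean brightness
    to the whole frame's mean, with a dead band to absorb small changes."""
    sig = []
    for m in region_means:
        diff = m - frame_mean
        if diff > threshold:
            sig.append(1)
        elif diff < -threshold:
            sig.append(-1)
        else:
            sig.append(0)
    return sig

def pack_base3(sig):
    """Pack ternary digits into bytes, 5 digits per byte (3**5 = 243 <= 255),
    so 380 digits occupy exactly 76 bytes."""
    out = bytearray()
    for i in range(0, len(sig), 5):
        val = 0
        for d in sig[i:i + 5]:
            val = val * 3 + (d + 1)  # shift -1/0/+1 into 0/1/2
        out.append(val)
    return bytes(out)

def similarity(sig_a, sig_b):
    """Fraction of regions whose ternary values agree between two frames."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Because each region is reduced to one of three coarse values relative to the frame’s own average, a recompressed, dimmed, or slightly cropped copy tends to produce a nearly identical signature, which is the robustness property the article highlights.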

NEC says the system could be used to automate what is currently a manual procedure of checking that video uploaded to the internet is not pirated. The technology is said to have an average detection rate of 96% and a low rate of false alarms: a mere five per million, according to tests by the ISO. It can detect if a video is pirated from clips as short as two seconds. And an ordinary PC can be used with the system to scour through 1,000 hours of video in a second. There are other potential uses too, because it provides a way to identify video content. A person could, say, use the signature in a clip to search for a full version of a movie. Piracy will still flourish—but the pirates may have to get smarter.

__________

Full article: http://www.economist.com/science-technology/displaystory.cfm?story_id=16103864&source=hptextfeature

Tell-All Generation Learns to Keep Things Offline

“I am much more self-censoring,” said Sam Jackson, a student.

Min Liu, a 21-year-old liberal arts student at the New School in New York City, got a Facebook account at 17 and chronicled her college life in detail, from rooftop drinks with friends to dancing at a downtown club. Recently, though, she has had second thoughts.

Concerned about her career prospects, she asked a friend to take down a photograph of her drinking and wearing a tight dress. When the woman overseeing her internship asked to join her Facebook circle, Ms. Liu agreed, but limited access to her Facebook page. “I want people to take me seriously,” she said.

The conventional wisdom suggests that everyone under 30 is comfortable revealing every facet of their lives online, from their favorite pizza to most frequent sexual partners. But many members of the tell-all generation are rethinking what it means to live out loud.

While participation in social networks is still strong, a survey released last month by the University of California, Berkeley, found that more than half the young adults questioned had become more concerned about privacy than they were five years ago — mirroring the number of people their parents’ age or older with that worry.

They are more diligent than older adults, however, in trying to protect themselves. In a new study to be released this month, the Pew Internet Project has found that people in their 20s exert more control over their digital reputations than older adults, more vigorously deleting unwanted posts and limiting information about themselves. “Social networking requires vigilance, not only in what you post, but what your friends post about you,” said Mary Madden, a senior research specialist who oversaw the study by Pew, which examines online behavior. “Now you are responsible for everything.”

The erosion of privacy has become a pressing issue among active users of social networks. Last week, Facebook scrambled to fix a security breach that allowed users to see their friends’ supposedly private information, including personal chats.

Sam Jackson, a junior at Yale who started a blog when he was 15 and who has been an intern at Google, said he had learned not to trust any social network to keep his information private. “If I go back and look, there are things four years ago I would not say today,” he said. “I am much more self-censoring. I’ll try to be honest and forthright, but I am conscious now who I am talking to.”

He has learned to live out loud mostly by trial and error and has come up with his own theory: concentric layers of sharing.

His Facebook account, which he has had since 2005, is strictly personal. “I don’t want people to know what my movie rentals are,” he said. “If I am sharing something, I want to know what’s being shared with others.”

Mistrust of the intentions of social sites appears to be pervasive. In its telephone survey of 1,000 people, the Berkeley Center for Law and Technology at the University of California found that 88 percent of the 18- to 24-year-olds it surveyed last July said there should be a law that requires Web sites to delete stored information. And 62 percent said they wanted a law that gave people the right to know everything a Web site knows about them.

That mistrust is translating into action. In the Pew study, to be released shortly, researchers interviewed 2,253 adults late last summer and found that people ages 18 to 29 were more apt to monitor privacy settings than older adults are, and they more often delete comments or remove their names from photos so they cannot be identified. Younger teenagers were not included in these studies, and they may not have the same privacy concerns. But anecdotal evidence suggests that many of them have not had enough experience to understand the downside to oversharing.

Elliot Schrage, who oversees Facebook’s global communications and public policy strategy, said it was a good thing that young people are thinking about what they put online. “We are not forcing anyone to use it,” he said of Facebook. But at the same time, companies like Facebook have a financial incentive to get friends to share as much as possible. That’s because the more personal the information that Facebook collects, the more valuable the site is to advertisers, who can mine it to serve up more targeted ads.

Two weeks ago, Senator Charles E. Schumer, Democrat of New York, petitioned the Federal Trade Commission to review the privacy policies of social networks to make sure consumers are not being deliberately confused or misled. The action was sparked by a recent change to Facebook’s settings that forced its more than 400 million users to choose to “opt out” of sharing private information with third-party Web sites instead of “opt in,” a move which confounded many of them.

Mr. Schrage of Facebook said, “We try diligently to get people to understand the changes.”

But in many cases, young adults are teaching one another about privacy.

Ms. Liu is not just policing her own behavior, but her sister’s, too. Ms. Liu sent a text message to her 17-year-old sibling warning her to take down a photo of a guy sitting on her sister’s lap. Why? Her sister wants to audition for “Glee” and Ms. Liu didn’t want the show’s producers to see it. Besides, what if her sister became a celebrity? “It conjures up an image where if you became famous anyone could pull up a picture and send it to TMZ,” Ms. Liu said.

Andrew Klemperer, a 20-year-old at Georgetown University, said it was a classmate who warned him about the implications of the recent Facebook change — through a status update on (where else?) Facebook. Now he is more diligent in monitoring privacy settings and apt to warn others, too.

Helen Nissenbaum, a professor of culture, media and communication at New York University and author of “Privacy in Context,” a book about information sharing in the digital age, said teenagers were naturally protective of their privacy as they navigate the path to adulthood, and the frequency with which companies change privacy rules has taught them to be wary.

That was the experience of Kanupriya Tewari, a 19-year-old pre-med student at Tufts University. Recently she sought to limit the information a friend could see on Facebook but found the process cumbersome. “I spent like an hour trying to figure out how to limit my profile, and I couldn’t,” she said. She gave up because she had chemistry homework to do, but vowed to figure it out after finals.

“I don’t think they would look out for me,” she said. “I have to look out for me.”

Laura M. Holson, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/05/09/fashion/09privacy.html

Emperors and beggars

The rise of content farms

Can technology help make online content pay?

Our ceramic-mugs correspondent writes

THIS week the Wall Street Journal, the pride of News Corporation’s stable of newspapers, launched a 12-page daily section of local news in New York, in a direct challenge to the New York Times. The premise behind the launch is that expensive, thorough reporting will pay for itself by attracting readers and advertisers. Indeed, Rupert Murdoch, News Corp’s boss, recently proclaimed, “Content is not just king, it is the emperor of all things electronic.” However, a new brand of media firms dubbed “content farms” takes the opposite view: that online, at any rate, revenue from advertising or subscriptions will never cover the costs of conventional journalism, so journalism will have to change.

Newspaper articles are expensive to produce but usually cost nothing to read online and do not command high advertising rates, since there is almost unlimited inventory. Mr Murdoch’s answer is to charge for online content: another of his newspapers, the Times of London, will start to do so this summer (the Journal already does). Content farms like Demand Media and Associated Content, in contrast, aim to produce content at a price so low that even meagre advertising revenue can support it.

Demand Media’s approach is a “combination of science and art”, in the words of Steven Kydd, who is in charge of the firm’s content production. Clever software works out what internet users are interested in and how much advertising revenue a given topic can pull in. The results are sent to an army of 7,000 freelancers, each of whom must have a college degree, writing experience and a speciality. They artfully pen articles or produce video clips to fit headlines such as “How do I paint ceramic mugs?” and “Why am I so tired in winter?”

Although an article may pay as little as $5, writers make on average $20-25 an hour, says Mr Kydd. The articles are copy-edited and checked for plagiarism. For the most part, they are published on the firm’s 72 websites, including eHow, answerbag and travels.com. But videos are also uploaded onto YouTube, where the firm is by far the biggest contributor. Some articles end up on the websites of more conventional media, including USAToday, which runs travel tips produced by Demand Media. In March, Demand Media churned out 150,000 pieces of content in this way. The company is expected to go public later this year, if it is not acquired by a big web portal, such as Yahoo!, first.

AOL, a web portal which was recently spun off from Time Warner, a media giant, does not like to be compared to such an operation. Tim Armstrong, its boss, intends to turn it into “the largest producer of quality online content”. The firm already runs more than 80 websites covering topics from gadgets (Engadget.com) and music (Spinner.com) to fashion (Stylelist.com) and local news (Patch.com).

In AOL’s journalistic universe there are three groups of contributors. The first two are salaried journalists and freelancers with expertise in a certain domain, who currently number more than 3,000. Then there are amateurs who contribute to individual projects, for instance when AOL recently compiled profiles of all 2,000 bands at the SXSW music festival in Texas. (Contributors were paid $50 for each profile.) All this is powered by a system like Demand Media’s that uses data and algorithms to predict what sorts of stories will appeal most to readers, what advertisers are willing to pay for them and what freelancers should therefore be offered. So far, however, the numbers are small: in the week of April 25th, 61 writers published 155 articles in this way on 33 AOL sites.

Predictably, many commentators are appalled. Demand Media has been called “demonic”. But, argues Dan Gillmor, a professor of journalism at Arizona State University, “the firm is at least interested in what people want to know—which is nothing to sneer at”. And unlike many other services that take advantage of “user generated content”, he says, Demand Media actually pays its contributors.

The problem with content farms, Mr Gillmor and others say, is that they swamp the internet with mediocre content. To earn a decent living, freelancers have to work at a breakneck pace, which has an obvious impact on quality. Moreover, content that is designed to appear high up in the results produced by search engines could lose its audience if the search engines change their rules.

In AOL’s case, the question is whether the infrastructure for the three tiers of contributors will work financially, not just journalistically and technically. Clay Shirky, a new-media expert at New York University, suggests that content produced cheaply by freelancers could serve to fund more ambitious projects. If AOL can make that work, the pundits will cheer.

__________

Full article and photo: http://www.economist.com/business-finance/displaystory.cfm?story_id=16010291&source=hptextfeature

When Apple Calls the Cops

The First Amendment doesn’t only belong to journalists.

Jason Chen is a newsman. Or is he?

That’s just one question raised by the raid on Mr. Chen’s home by the San Mateo County, Calif., Sheriff’s Office, which carted off some computers and other electronic equipment. The search warrant appears to be the result of an investigation into whether Mr. Chen broke the law when he bought an iPhone prototype that an Apple engineer left in a bar where he was celebrating his birthday.

Because Mr. Chen reported on the new iPhone for his website, Gizmodo.com, the seizure of his computers has renewed a heated debate about whether bloggers are real journalists. Traditionally, many in the mainstream press have disparaged bloggers, though in this case at least some press organizations—including the parent company that runs Mr. Chen’s blog—argue that he is a full-time journalist whose home is his newsroom. The irony is how few connect Mr. Chen’s First Amendment freedoms to those for corporations that were recently upheld in a landmark Supreme Court ruling.

The case was Citizens United v. Federal Election Commission. Citizens United is a nonprofit corporation that produced a documentary on Hillary Clinton. It sought to distribute the film via video-on-demand back when she was running in the Democratic presidential primaries. When a lower court agreed with the FEC that the McCain-Feingold restrictions applied to the Hillary film, the group appealed and won at the Supreme Court this past January.

Not long after, in his State of the Union Address, Barack Obama disparaged members of the Supreme Court sitting before him by accusing them of opening “the floodgates for special interests.” Many focused on the president’s rudeness. More troubling was his message: What President Obama was really saying is that the Wall Street Journal and ABC News and your hometown daily should be free to print or broadcast what they want during an election. But not organizations like Citizens United.

The High Court wisely rejected that logic. Writing for the majority, Justice Anthony Kennedy said that “The First Amendment protects speech and speaker, and the ideas that flow from each.” In other words, the government can’t restrict First Amendment rights based on the identity of the speaker.

Steve Simpson, a lawyer for the Institute for Justice, a libertarian public interest law firm, puts it this way: “Once the government gets in the business of deciding who can speak based on identity, it will then necessarily be involved in deciding what viewpoints get heard.”

The classic view of the First Amendment holds all Americans are entitled to its rights by virtue of citizenship. These days, alas, too many journalists and politicians assume that a free press should mean special privileges for a designated class. The further we travel in this direction, the more the government will end up deciding which Americans qualify and which do not.

It’s not just Mr. Chen. Two weeks ago in New Jersey, a state appeals court ruled that a hockey mom who blogs is not a journalist for the purposes of protecting her sources. The woman was being sued for derogatory comments she posted on a message board about a company that supplies software for the porn industry. At the federal level, meanwhile, a “shield law” protecting journalists from revealing their sources remains bogged down in Congress as legislators are forced to define who is legitimately a journalist and who is not.

Mr. Simpson points to another irony: Legislation now being pushed by Sen. Chuck Schumer (D., N.Y.) to scale back the Supreme Court’s January decision would limit political speech for government contractors, for companies that owe TARP money, and for those that pass some threshold for foreign ownership.

It’s an interesting proposition. I wonder: How many among the press who favor these chains being wrapped around corporations have thought through the implications for news organizations? The implications will be especially interesting if Congress ever does get around to approving that bailout for failing newspapers that the president says he’s at least open to.

In Mr. Chen’s case, all this may be moot if his troubles really have to do with buying property that is considered stolen under California law. In its reporting on the case, Gizmodo has already admitted paying $5,000 for the iPhone prototype. If the criminal case comes down to stolen property, whether or not he is deemed a bona fide journalist may not make much difference.

The larger point is that the best guarantee of good, independent journalism has always been the willingness of reporters and editors and publishers to run with the truth, protect their sources, and accept the consequences—even jail, if it comes to that. In short, we’ll all be better served by a First Amendment that remains a fundamental right for all rather than a class privilege for some.

William McGurn, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704342604575222501056696836.html

Mrs. Clinton, Tear Down this Cyberwall

The State Department is sitting on funds to free the flow of information in closed societies.

When a government department refuses to spend money that Congress has allocated, there’s usually a telling backstory. This is doubly so when the funds are for a purpose as uncontroversial as making the Internet freer.

So why has the State Department refused to spend $45 million in appropriations since 2008 to “expand access and information in closed societies”? The technology to circumvent national restrictions is being provided by volunteers who believe that with funding they can bring Web access to many more people, from Iran to China.

A bipartisan group in Congress intended to pay for tests aimed at expanding the use of software that brings Internet access to “large numbers of users living in closed societies that have acutely hostile Internet environments.” The most successful of these services is provided by a group called the Global Internet Freedom Consortium, whose programs include Freegate and Ultrasurf.

When Iranian demonstrators last year organized themselves through Twitter posts and brought news of the crackdown to the outside world, they got past the censors chiefly by using Freegate to get access to outside sites.

The team behind these circumvention programs understands how subversive their efforts can be. As Shiyu Zhou, deputy director of the Global Internet Freedom Consortium, told Congress last year, “The Internet censorship firewalls have become 21st-century versions of Berlin Walls that isolate and dispirit the citizens of closed-society dictatorships.”

Repressive governments rightly regard the Internet as an existential threat, giving people powerful ways to communicate and organize. These governments also use the Web as a tool of repression, monitoring emails and other traffic. Recall that Google left China in part because of hacking of human-rights activists’ Gmail accounts.

To counter government monitors and censors, these programs give online users encrypted connections to secure proxy servers around the world. A group of volunteers constantly switches the Internet Protocol addresses of the servers—up to 10,000 times an hour. The group has been active since 2000, and repressive governments haven’t figured out how to catch up. More than one million Iranians used the system last June to post videos and photos showing the government crackdown.

Mr. Zhou tells me his group would use any additional money to add equipment and to hire full-time technical staff to support the volunteers. For $50 million, he estimates the service could accommodate 5% of Chinese Internet users and 10% in other closed societies—triple the current capacity.

So why won’t the State Department fund this group to expand its reach, or at least test how scalable the solution could be? There are a couple of explanations.

The first is that the Global Internet Freedom Consortium was founded by Chinese-American engineers who practice Falun Gong, the spiritual movement suppressed by Beijing. Perhaps not the favorites of U.S. diplomats, but what other group has volunteers engaged enough to keep such a service going? As with the Jewish refuseniks who battled the Soviet Union, sometimes it takes a persecuted minority to stand up to a totalitarian regime.

The second explanation is a split among technologists—between those who support circumvention programs built on proprietary systems and others who put their faith in open-source code. A study last year by the Berkman Center at Harvard gave more points to open-source efforts, citing “a well-established contentious debate among software developers about whether secrecy about implementation details is a robust strategy for security.” But whatever the theoretical objections, the proprietary systems work.

Another likely factor is realpolitik. Despite the tough speech Hillary Clinton gave in January supporting Internet freedom, it’s easy to imagine bureaucrats arguing that the U.S. shouldn’t undermine the censorship efforts of Tehran and Beijing. An earlier generation of bureaucrats tried to edit, as overly aggressive, Ronald Reagan’s 1987 speech in Berlin urging Mikhail Gorbachev: “Tear down this wall.”

It’s true that circumvention doesn’t solve every problem. Internet freedom researcher and advocate Rebecca MacKinnon has made the point that “circumvention is never going to be the silver bullet” in the sense that it can only give people access to the open Web. It can’t help with domestic censorship.

During the Cold War, the West expended huge effort to get books, tapes, fax machines, radio reports and other information, as well as the means to convey it, into closed societies. Circumvention is the digital-age equivalent.

If the State Department refuses to support a free Web, perhaps there’s a private solution. An anonymous poster, “chinese.zhang,” suggested on a Google message board earlier this year that the company should fund the Global Internet Freedom Consortium as part of its defense against Chinese censorship. “I think Google can easily offer more servers to help to break down the Great Firewall,” he wrote.

L. Gordon Crovitz, Wall Street Journal

__________

Full article: http://online.wsj.com/article/SB10001424052748704608104575219022492475364.html

Brave New World of Digital Intimacy

On Sept. 5, 2006, Mark Zuckerberg changed the way that Facebook worked, and in the process he inspired a revolt.

Zuckerberg, a doe-eyed 24-year-old C.E.O., founded Facebook in his dorm room at Harvard two years earlier, and the site quickly amassed nine million users. By 2006, students were posting heaps of personal details onto their Facebook pages, including lists of their favorite TV shows, whether they were dating (and whom), what music they had in rotation and the various ad hoc “groups” they had joined (like “Sex and the City” Lovers). All day long, they’d post “status” notes explaining their moods — “hating Monday,” “skipping class b/c i’m hung over.” After each party, they’d stagger home to the dorm and upload pictures of the soused revelry, and spend the morning after commenting on how wasted everybody looked. Facebook became the de facto public commons — the way students found out what everyone around them was like and what he or she was doing.

But Zuckerberg knew Facebook had one major problem: It required a lot of active surfing on the part of its users. Sure, every day your Facebook friends would update their profiles with some new tidbits; it might even be something particularly juicy, like changing their relationship status to “single” when they got dumped. But unless you visited each friend’s page every day, it might be days or weeks before you noticed the news, or you might miss it entirely. Browsing Facebook was like constantly poking your head into someone’s room to see how she was doing. It took work and forethought. In a sense, this gave Facebook an inherent, built-in level of privacy, simply because if you had 200 friends on the site — a fairly typical number — there weren’t enough hours in the day to keep tabs on every friend all the time.

“It was very primitive,” Zuckerberg told me when I asked him about it last month. And so he decided to modernize. He developed something he called News Feed, a built-in service that would actively broadcast changes in a user’s page to every one of his or her friends. Students would no longer need to spend their time zipping around to examine each friend’s page, checking to see if there was any new information. Instead, they would just log into Facebook, and News Feed would appear: a single page that — like a social gazette from the 18th century — delivered a long list of up-to-the-minute gossip about their friends, around the clock, all in one place. “A stream of everything that’s going on in their lives,” as Zuckerberg put it.
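The shift Zuckerberg describes is, in architectural terms, a move from polling (each user repeatedly checking every friend's page) to push, or "fan-out on write": an update is copied into each friend's feed the moment it is posted. A minimal sketch of that idea, with invented names and no resemblance to Facebook's actual implementation:

```python
from collections import defaultdict

class MiniFeed:
    """Toy fan-out-on-write feed: posts are pushed to friends at write time."""

    def __init__(self):
        self.friends = defaultdict(set)   # user -> set of friends
        self.feeds = defaultdict(list)    # user -> updates waiting to be read

    def befriend(self, a, b):
        self.friends[a].add(b)
        self.friends[b].add(a)

    def post(self, user, update):
        # Instead of friends polling this user's page, the update is
        # delivered into every friend's feed immediately.
        for friend in self.friends[user]:
            self.feeds[friend].append((user, update))

feed = MiniFeed()
feed.befriend("tim", "lisa")
feed.befriend("tim", "persaud")
feed.post("tim", "changed relationship status to single")
print(feed.feeds["lisa"])   # lisa sees the news without visiting tim's page
```

The trade-off is the one the article goes on to describe: the same mechanism that spares readers the work of "zipping around" also broadcasts every change, whether or not its author wanted an audience.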

When students woke up that September morning and saw News Feed, the first reaction, generally, was one of panic. Just about every little thing you changed on your page was now instantly blasted out to hundreds of friends, including potentially mortifying bits of news — Tim and Lisa broke up; Persaud is no longer friends with Matthew — and drunken photos someone snapped, then uploaded and tagged with names. Facebook had lost its vestigial bit of privacy. For students, it was now like being at a giant, open party filled with everyone you know, able to eavesdrop on what everyone else was saying, all the time.

“Everyone was freaking out,” Ben Parr, then a junior at Northwestern University, told me recently. What particularly enraged Parr was that there wasn’t any way to opt out of News Feed, to “go private” and have all your information kept quiet. He created a Facebook group demanding Zuckerberg either scrap News Feed or provide privacy options. “Facebook users really think Facebook is becoming the Big Brother of the Internet, recording every single move,” a California student told The Star-Ledger of Newark. Another chimed in, “Frankly, I don’t need to know or care that Billy broke up with Sally, and Ted has become friends with Steve.” By lunchtime of the first day, 10,000 people had joined Parr’s group, and by the next day it had 284,000.

Zuckerberg, surprised by the outcry, quickly made two decisions. The first was to add a privacy feature to News Feed, letting users decide what kind of information went out. But the second decision was to leave News Feed otherwise intact. He suspected that once people tried it and got over their shock, they’d like it.

He was right. Within days, the tide reversed. Students began e-mailing Zuckerberg to say that via News Feed they’d learned things they would never have otherwise discovered through random surfing around Facebook. The bits of trivia that News Feed delivered gave them more things to talk about — Why do you hate Kiefer Sutherland? — when they met friends face to face in class or at a party. Trends spread more quickly. When one student joined a group — proclaiming her love of Coldplay or a desire to volunteer for Greenpeace — all her friends instantly knew, and many would sign up themselves. Users’ worries about their privacy seemed to vanish within days, boiled away by their excitement at being so much more connected to their friends. (Very few people stopped using Facebook, and most people kept on publishing most of their information through News Feed.) Pundits predicted that News Feed would kill Facebook, but the opposite happened. It catalyzed a massive boom in the site’s growth. A few weeks after the News Feed imbroglio, Zuckerberg opened the site to the general public (previously, only students could join), and it grew quickly; today, it has 100 million users.

When I spoke to him, Zuckerberg argued that News Feed is central to Facebook’s success. “Facebook has always tried to push the envelope,” he said. “And at times that means stretching people and getting them to be comfortable with things they aren’t yet comfortable with. A lot of this is just social norms catching up with what technology is capable of.”

In essence, Facebook users didn’t think they wanted constant, up-to-the-minute updates on what other people are doing. Yet when they experienced this sort of omnipresent knowledge, they found it intriguing and addictive. Why?

Social scientists have a name for this sort of incessant online contact. They call it “ambient awareness.” It is, they say, very much like being physically near someone and picking up on his mood through the little things he does — body language, sighs, stray comments — out of the corner of your eye. Facebook is no longer alone in offering this sort of interaction online. In the last year, there has been a boom in tools for “microblogging”: posting frequent tiny updates on what you’re doing. The phenomenon is quite different from what we normally think of as blogging, because a blog post is usually a written piece, sometimes quite long: a statement of opinion, a story, an analysis. But these new updates are something different. They’re far shorter, far more frequent and less carefully considered. One of the most popular new tools is Twitter, a Web site and messaging service that allows its two-million-plus users to broadcast to their friends haiku-length updates — limited to 140 characters, as brief as a mobile-phone text message — on what they’re doing. There are other services for reporting where you’re traveling (Dopplr) or for quickly tossing online a stream of the pictures, videos or Web sites you’re looking at (Tumblr). And there are even tools that give your location. When the new iPhone, with built-in tracking, was introduced in July, one million people began using Loopt, a piece of software that automatically tells all your friends exactly where you are.

For many people — particularly anyone over the age of 30 — the idea of describing your blow-by-blow activities in such detail is absurd. Why would you subject your friends to your daily minutiae? And conversely, how much of their trivia can you absorb? The growth of ambient intimacy can seem like modern narcissism taken to a new, supermetabolic extreme — the ultimate expression of a generation of celebrity-addled youths who believe their every utterance is fascinating and ought to be shared with the world. Twitter, in particular, has been the subject of nearly relentless scorn since it went online. “Who really cares what I am doing, every hour of the day?” wondered Alex Beam, a Boston Globe columnist, in an essay about Twitter last month. “Even I don’t care.”

Indeed, many of the people I interviewed, who are among the most avid users of these “awareness” tools, admit that at first they couldn’t figure out why anybody would want to do this. Ben Haley, a 39-year-old documentation specialist for a software firm who lives in Seattle, told me that when he first heard about Twitter last year from an early-adopter friend who used it, his first reaction was that it seemed silly. But a few of his friends decided to give it a try, and they urged him to sign up, too.

Each day, Haley logged on to his account, and his friends’ updates would appear as a long page of one- or two-line notes. He would check and recheck the account several times a day, or even several times an hour. The updates were indeed pretty banal. One friend would post about starting to feel sick; one posted random thoughts like “I really hate it when people clip their nails on the bus”; another Twittered whenever she made a sandwich — and she made a sandwich every day. Each so-called tweet was so brief as to be virtually meaningless.

But as the days went by, something changed. Haley discovered that he was beginning to sense the rhythms of his friends’ lives in a way he never had before. When one friend got sick with a virulent fever, he could tell by her Twitter updates when she was getting worse and the instant she finally turned the corner. He could see when friends were heading into hellish days at work or when they’d scored a big success. Even the daily catalog of sandwiches became oddly mesmerizing, a sort of metronomic click that he grew accustomed to seeing pop up in the middle of each day.

This is the paradox of ambient awareness. Each little update — each individual bit of social information — is insignificant on its own, even supremely mundane. But taken together, over time, the little snippets coalesce into a surprisingly sophisticated portrait of your friends’ and family members’ lives, like thousands of dots making a pointillist painting. This was never before possible, because in the real world, no friend would bother to call you up and detail the sandwiches she was eating. The ambient information becomes like “a type of E.S.P.,” as Haley described it to me, an invisible dimension floating over everyday life.

“It’s like I can distantly read everyone’s mind,” Haley went on to say. “I love that. I feel like I’m getting to something raw about my friends. It’s like I’ve got this heads-up display for them.” It can also lead to more real-life contact, because when one member of Haley’s group decides to go out to a bar or see a band and Twitters about his plans, the others see it, and some decide to drop by — ad hoc, self-organizing socializing. And when they do socialize face to face, it feels oddly as if they’ve never actually been apart. They don’t need to ask, “So, what have you been up to?” because they already know. Instead, they’ll begin discussing something that one of the friends Twittered that afternoon, as if picking up a conversation in the middle.

Facebook and Twitter may have pushed things into overdrive, but the idea of using communication tools as a form of “co-presence” has been around for a while. The Japanese sociologist Mizuko Ito first noticed it with mobile phones: lovers who were working in different cities would send text messages back and forth all night — tiny updates like “enjoying a glass of wine now” or “watching TV while lying on the couch.” They were doing it partly because talking for hours on mobile phones isn’t very comfortable (or affordable). But they also discovered that the little Ping-Ponging messages felt even more intimate than a phone call.

“It’s an aggregate phenomenon,” Marc Davis, a chief scientist at Yahoo and former professor of information science at the University of California at Berkeley, told me. “No message is the single-most-important message. It’s sort of like when you’re sitting with someone and you look over and they smile at you. You’re sitting here reading the paper, and you’re doing your side-by-side thing, and you just sort of let people know you’re aware of them.” Yet it is also why it can be extremely hard to understand the phenomenon until you’ve experienced it. Merely looking at a stranger’s Twitter or Facebook feed isn’t interesting, because it seems like blather. Follow it for a day, though, and it begins to feel like a short story; follow it for a month, and it’s a novel.

You could also regard the growing popularity of online awareness as a reaction to social isolation, the modern American disconnectedness that Robert Putnam explored in his book “Bowling Alone.” The mobile workforce requires people to travel more frequently for work, leaving friends and family behind, and members of the growing army of the self-employed often spend their days in solitude. Ambient intimacy becomes a way to “feel less alone,” as more than one Facebook and Twitter user told me.

When I decided to try out Twitter last year, at first I didn’t have anyone to follow. None of my friends were yet using the service. But while doing some Googling one day I stumbled upon the blog of Shannon Seery, a 32-year-old recruiting consultant in Florida, and I noticed that she Twittered. Her Twitter updates were pretty charming — she would often post links to camera-phone pictures of her two children or videos of herself cooking Mexican food, or broadcast her agonized cries when a flight was delayed on a business trip. So on a whim I started “following” her — as easy on Twitter as a click of the mouse — and never took her off my account. (A Twitter account can be “private,” so that only invited friends can read one’s tweets, or it can be public, so anyone can; Seery’s was public.) When I checked in last month, I noticed that she had built up a huge number of online connections: She was now following 677 people on Twitter and another 442 on Facebook. How in God’s name, I wondered, could she follow so many people? Who precisely are they? I called Seery to find out.

“I have a rule,” she told me. “I either have to know who you are, or I have to know of you.” That means she monitors the lives of friends, family, anyone she works with, and she’ll also follow interesting people she discovers via her friends’ online lives. Like many people who live online, she has wound up following a few strangers — though after a few months they no longer feel like strangers, despite the fact that she has never physically met them.

I asked Seery how she finds the time to follow so many people online. The math seemed daunting. After all, if her 1,000 online contacts each post just a couple of notes a day, that’s several thousand little social pings to sift through daily. What would it be like to get thousands of e-mail messages a day? But Seery made a point I heard from many others: awareness tools aren’t as cognitively demanding as an e-mail message. E-mail is something you have to stop to open and assess. It’s personal; someone is asking for 100 percent of your attention. In contrast, ambient updates are all visible on one single page in a big row, and they’re not really directed at you. This makes them skimmable, like newspaper headlines; maybe you’ll read them all, maybe you’ll skip some. Seery estimated that she needs to spend only a small part of each hour actively reading her Twitter stream.

Yet she has, she said, become far more gregarious online. “What’s really funny is that before this ‘social media’ stuff, I always said that I’m not the type of person who had a ton of friends,” she told me. “It’s so hard to make plans and have an active social life, having the type of job I have where I travel all the time and have two small kids. But it’s easy to tweet all the time, to post pictures of what I’m doing, to keep social relations up.” She paused for a second, before continuing: “Things like Twitter have actually given me a much bigger social circle. I know more about more people than ever before.”

I realized that this is becoming true of me, too. After following Seery’s Twitter stream for a year, I’m more knowledgeable about the details of her life than the lives of my two sisters in Canada, whom I talk to only once every month or so. When I called Seery, I knew that she had been struggling with a three-day migraine headache; I began the conversation by asking her how she was feeling.

Online awareness inevitably leads to a curious question: What sort of relationships are these? What does it mean to have hundreds of “friends” on Facebook? What kind of friends are they, anyway?

In 1998, the anthropologist Robin Dunbar argued that each human has a hard-wired upper limit on the number of people he or she can personally know at one time. Dunbar noticed that humans and apes both develop social bonds by engaging in some sort of grooming; apes do it by picking at and smoothing one another’s fur, and humans do it with conversation. He theorized that ape and human brains could manage only a finite number of grooming relationships: unless we spend enough time doing social grooming — chitchatting, trading gossip or, for apes, picking lice — we won’t really feel that we “know” someone well enough to call him a friend. Dunbar noticed that ape groups tended to top out at 55 members. Since human brains were proportionally bigger, Dunbar figured that our maximum number of social connections would be similarly larger: about 150 on average. Sure enough, psychological studies have confirmed that human groupings naturally tail off at around 150 people: the “Dunbar number,” as it is known. Are people who use Facebook and Twitter increasing their Dunbar number, because they can so easily keep track of so many more people?

As I interviewed some of the most aggressively social people online — people who follow hundreds or even thousands of others — it became clear that the picture was a little more complex than this question would suggest. Many maintained that their circle of true intimates, their very close friends and family, had not become bigger. Constant online contact had made those ties immeasurably richer, but it hadn’t actually increased the number of them; deep relationships are still predicated on face time, and there are only so many hours in the day for that.

But where their sociality had truly exploded was in their “weak ties” — loose acquaintances, people they knew less well. It might be someone they met at a conference, or someone from high school who recently “friended” them on Facebook, or somebody from last year’s holiday party. In their pre-Internet lives, these sorts of acquaintances would have quickly faded from their attention. But when one of these far-flung people suddenly posts a personal note to your feed, it is essentially a reminder that they exist. I have noticed this effect myself. In the last few months, dozens of old work colleagues I knew from 10 years ago in Toronto have friended me on Facebook, such that I’m now suddenly reading their stray comments and updates and falling into oblique, funny conversations with them. My overall Dunbar number is thus 301: Facebook (254) + Twitter (47), double what it would be without technology. Yet only 20 are family or people I’d consider close friends. The rest are weak ties — maintained via technology.

This rapid growth of weak ties can be a very good thing. Sociologists have long found that “weak ties” greatly expand your ability to solve problems. For example, if you’re looking for a job and ask your friends, they won’t be much help; they’re too similar to you, and thus probably won’t have any leads that you don’t already have yourself. Remote acquaintances will be much more useful, because they’re farther afield, yet still socially intimate enough to want to help you out. Many avid Twitter users — the ones who fire off witty posts hourly and wind up with thousands of intrigued followers — explicitly milk this dynamic for all it’s worth, using their large online followings as a way to quickly answer almost any question. Laura Fitton, a social-media consultant who has become a minor celebrity on Twitter — she has more than 5,300 followers — recently discovered to her horror that her accountant had made an error in filing last year’s taxes. She went to Twitter, wrote a tiny note explaining her problem, and within 10 minutes her online audience had provided leads to lawyers and better accountants. Fitton joked to me that she no longer buys anything worth more than $50 without quickly checking it with her Twitter network.

“I outsource my entire life,” she said. “I can solve any problem on Twitter in six minutes.” (She also keeps a secondary Twitter account that is private and only for a much smaller circle of close friends and family — “My little secret,” she said. It is a strategy many people told me they used: one account for their weak ties, one for their deeper relationships.)

It is also possible, though, that this profusion of weak ties can become a problem. If you’re reading daily updates from hundreds of people about whom they’re dating and whether they’re happy, it might, some critics worry, spread your emotional energy too thin, leaving less for true intimate relationships. Psychologists have long known that people can engage in “parasocial” relationships with fictional characters, like those on TV shows or in books, or with remote celebrities we read about in magazines. Parasocial relationships can use up some of the emotional space in our Dunbar number, crowding out real-life people. Danah Boyd, a fellow at Harvard’s Berkman Center for Internet and Society who has studied social media for 10 years, published a paper this spring arguing that awareness tools like News Feed might be creating a whole new class of relationships that are nearly parasocial — peripheral people in our network whose intimate details we follow closely online, even while they, like Angelina Jolie, are basically unaware we exist.

“The information we subscribe to on a feed is not the same as in a deep social relationship,” Boyd told me. She has seen this herself; she has many virtual admirers that have, in essence, a parasocial relationship with her. “I’ve been very, very sick lately, and I write about it on Twitter and my blog, and I get all these people who are writing to me telling me ways to work around the health-care system, or they’re writing saying, ‘Hey, I broke my neck!’ And I’m like, ‘You’re being very nice and trying to help me, but though you feel like you know me, you don’t.’ ” Boyd sighed. “They can observe you, but it’s not the same as knowing you.”

When I spoke to Caterina Fake, a founder of Flickr (a popular photo-sharing site), she suggested an even more subtle danger: that the sheer ease of following her friends’ updates online has made her occasionally lazy about actually taking the time to visit them in person. “At one point I realized I had a friend whose child I had seen, via photos on Flickr, grow from birth to 1 year old,” she said. “I thought, I really should go meet her in person. But it was weird; I also felt that Flickr had satisfied that getting-to-know-you urge, so I didn’t feel the urgency. But then I was like, Oh, that’s not sufficient! I should go in person!” She has about 400 people she follows online but suspects many of those relationships are tissue-fragile. “These technologies allow you to be much more broadly friendly, but you just spread yourself much more thinly over many more people.”

What is it like to never lose touch with anyone? One morning this summer at my local cafe, I overheard a young woman complaining to her friend about a recent Facebook drama. Her name is Andrea Ahan, a 27-year-old restaurant entrepreneur, and she told me that she had discovered that high-school friends were uploading old photos of her to Facebook and tagging them with her name, so they automatically appeared in searches for her.

She was aghast. “I’m like, my God, these pictures are completely hideous!” Ahan complained, while her friend looked on sympathetically and sipped her coffee. “I’m wearing all these totally awful ’90s clothes. I look like crap. And I’m like, Why are you people in my life, anyway? I haven’t seen you in 10 years. I don’t know you anymore!” She began furiously detagging the pictures — removing her name, so they wouldn’t show up in a search anymore.

Worse, Ahan was also confronting a common plague of Facebook: the recent ex. She had broken up with her boyfriend not long ago, but she hadn’t “unfriended” him, because that felt too extreme. But soon he paired up with another young woman, and the new couple began having public conversations on Ahan’s ex-boyfriend’s page. One day, she noticed with alarm that the new girlfriend was quoting material Ahan had e-mailed privately to her boyfriend; she suspected he had been sharing the e-mail with his new girlfriend. It is the sort of weirdly subtle mind game that becomes possible via Facebook, and it drove Ahan nuts.

“Sometimes I think this stuff is just crazy, and everybody has got to get a life and stop obsessing over everyone’s trivia and gossiping,” she said.

Yet Ahan knows that she cannot simply walk away from her online life, because the people she knows online won’t stop talking about her, or posting unflattering photos. She needs to stay on Facebook just to monitor what’s being said about her. This is a common complaint I heard, particularly from people in their 20s who were in college when Facebook appeared and have never lived as adults without online awareness. For them, participation isn’t optional. If you don’t dive in, other people will define who you are. So you constantly stream your pictures, your thoughts, your relationship status and what you’re doing — right now! — if only to ensure the virtual version of you is accurate, or at least the one you want to present to the world.

This is the ultimate effect of the new awareness: It brings back the dynamics of small-town life, where everybody knows your business. Young people at college are the ones to experience this most viscerally, because, with more than 90 percent of their peers using Facebook, it is especially difficult for them to opt out. Zeynep Tufekci, a sociologist at the University of Maryland, Baltimore County, who has closely studied how college-age users are reacting to the world of awareness, told me that athletes used to sneak off to parties illicitly, breaking the no-drinking rule for team members. But then camera phones and Facebook came along, with students posting photos of the drunken carousing during the party; savvy coaches could see which athletes were breaking the rules. First the athletes tried to fight back by waking up early the morning after the party in a hungover daze to detag photos of themselves so they wouldn’t be searchable. But that didn’t work, because the coaches sometimes viewed the pictures live, as they went online at 2 a.m. So parties simply began banning all camera phones in a last-ditch attempt to preserve privacy.

“It’s just like living in a village, where it’s actually hard to lie because everybody knows the truth already,” Tufekci said. “The current generation is never unconnected. They’re never losing touch with their friends. So we’re going back to a more normal place, historically. If you look at human history, the idea that you would drift through life, going from new relation to new relation, that’s very new. It’s just the 20th century.”

Psychologists and sociologists spent years wondering how humanity would adjust to the anonymity of life in the city, the wrenching upheavals of mobile immigrant labor — a world of lonely people ripped from their social ties. We now have precisely the opposite problem. Indeed, our modern awareness tools reverse the original conceit of the Internet. When cyberspace came along in the early ’90s, it was celebrated as a place where you could reinvent your identity — become someone new.

“If anything, it’s identity-constraining now,” Tufekci told me. “You can’t play with your identity if your audience is always checking up on you. I had a student who posted that she was downloading some Pearl Jam, and someone wrote on her wall, ‘Oh, right, ha-ha — I know you, and you’re not into that.’ ” She laughed. “You know that old cartoon? ‘On the Internet, nobody knows you’re a dog’? On the Internet today, everybody knows you’re a dog! If you don’t want people to know you’re a dog, you’d better stay away from a keyboard.”

Or, as Leisa Reichelt, a consultant in London who writes regularly about ambient tools, put it to me: “Can you imagine a Facebook for children in kindergarten, and they never lose touch with those kids for the rest of their lives? What’s that going to do to them?” Young people today are already developing an attitude toward their privacy that is simultaneously vigilant and laissez-faire. They curate their online personas as carefully as possible, knowing that everyone is watching — but they have also learned to shrug and accept the limits of what they can control.

It is easy to become unsettled by privacy-eroding aspects of awareness tools. But there is another — quite different — result of all this incessant updating: a culture of people who know much more about themselves. Many of the avid Twitterers, Flickrers and Facebook users I interviewed described an unexpected side-effect of constant self-disclosure. The act of stopping several times a day to observe what you’re feeling or thinking can become, after weeks and weeks, a sort of philosophical act. It’s like the Greek dictum to “know thyself,” or the therapeutic concept of mindfulness. (Indeed, the question that floats eternally at the top of Twitter’s Web site — “What are you doing?” — can come to seem existentially freighted. What are you doing?) Having an audience can make the self-reflection even more acute, since, as my interviewees noted, they’re trying to describe their activities in a way that is not only accurate but also interesting to others: the status update as a literary form.

Laura Fitton, the social-media consultant, argues that her constant status updating has made her “a happier person, a calmer person” because the process of, say, describing a horrid morning at work forces her to look at it objectively. “It drags you out of your own head,” she added. In an age of awareness, perhaps the person you see most clearly is yourself.

Clive Thompson, New York Times

__________

Full article and photo: http://www.nytimes.com/2008/09/07/magazine/07awareness-t.html

Tinker, Tailor, Soldier, Hacker

The Internet was designed for easy communication. Security? Not so much.

Worrying about threats to the electric grid is all the rage these days, with anxious planners troubled by electromagnetic pulse attacks or even solar superflares that could melt down the power net for months or even years, bringing civilization to a halt. But Richard Clarke and Robert Knake warn in “Cyber War” that if such a calamity occurs, the culprit behind it might not be a high-altitude nuclear burst or strange solar weather but a computer hacker in Beijing or Tehran.

Over the past few decades, American society has become steadily more wired. Devices talk to one another over the Internet, with tremendous increases in efficiency: Copy machines call their own repairmen when they break down, stores automatically replenish inventory as needed and military units stay in perpetual contact over logistical matters—often without humans in the loop at all. The benefits of this nonstop communication are obvious, but the vulnerabilities are underappreciated. The Internet was designed for ease of communication; security was (and is) largely an afterthought. We have created a hacker’s playground.

Worse yet, computer hardware, usually made in China, is sometimes laced with “logic bombs” that will allow anyone who has the correct codes—the Chinese government comes to mind—to turn our own devices against us. Messrs. Clarke and Knake are particularly concerned with risks to the electric grid. Hackers might be able not only to trick generators into turning themselves off but also to command expensive custom equipment to tear itself apart—damage that could take months or longer to fix. The result wouldn’t be a short-term blackout of the sort we’re familiar with but something more like Baghdad after the Iraq invasion. And that’s probably a best-case scenario.

Nor are electric-generating facilities, already the target of thousands of known hack attacks, the only vulnerability. Military secrets and valuable intellectual property are also at risk, Messrs. Clarke and Knake note. Yet efforts to protect against hacker attacks have lagged behind increasingly sophisticated threats as the Pentagon concentrates on offensive, not defensive, cyberwar techniques. The emphasis may reflect the unhappy truth that, in a cyberwar, first-strike capability is an enormous advantage. The instigator can launch an attack before the targeted country has raised its defenses or disconnected vital services from the Internet altogether. The targeted country may be damaged so badly that it cannot respond in kind, and a weaker response would probably meet a well-prepared defense. The incentive to strike first, Messrs. Clarke and Knake argue, is destabilizing and dangerous—and all the more reason to bolster our preparedness.

Not that every first strike is malign; sometimes it produces a happy result. Messrs. Clarke and Knake are convinced that an Israeli air strike in 2007 against a secret North Korean-designed nuclear facility being constructed in the Syrian desert was a textbook case of cyber-aided warfare. Israeli computers “owned” Syria’s elaborate air defenses, the authors say, “ensuring that the enemy could not even raise its defenses.” How the Israelis accomplished the task isn’t known, but Messrs. Clarke and Knake speculate that a drone aircraft may have been used to commandeer Syrian radar signals, or Israeli agents may have inserted a “trapdoor” access point in the computer code of the Russian-designed defense system, or an intrepid Israeli agent deep in Syria may have spliced into a fiber-optic cable linked to the defense system and then sent commands clearing the way for the bombing run.

Stealthy online intrusion and malicious hacking have evolved from low-level intelligence-gathering tools to weapons that are, potentially, as destructive as bombs and missiles. (How many Americans would die if the electricity went out for a week? A month? Six months?) Yet many policy-makers still seem to regard the threat as a sideshow. The Pentagon plans “net-centric” warfare without addressing the vulnerability of the “net” part; diplomats who discuss arms control deal almost exclusively with traditional weaponry, without considering more modern threats. Generals are astounded to hear about digital military weaknesses that already haunt every captain and major. Presidents Clinton and George W. Bush largely ignored the problem, and President Obama shows no sign of doing any better.

In some intelligence circles the threat of cyber attacks is scoffed at, but I think that Messrs. Clarke and Knake are right to sound the alarm. (Mr. Clarke, we should recall, was the head of counterterrorism security in the Clinton and George W. Bush administrations.) As Henry Fielding remarked long ago, those who lay the foundation of their own ruin find that others are apt to build upon it. By constructing, and then relying on, vulnerable systems that are now entwined with almost every aspect of American life, we have laid just such a foundation. The time has come to fix it or at least to refine the systems to avoid catastrophic failure.

“Just-in-time” inventory systems are highly vulnerable to transportation problems; “network computing” fails when the network does; and smart grids are open invitations to smart hackers. Too much of our critical infrastructure operates with increased vulnerabilities and reduced margins for error. “The same way that a hand can reach out from cyberspace and destroy an electric transmission line or generator,” the authors note, “computer commands can derail a train or send freight cars to the wrong place, or cause a gas pipeline to burst.”

Promoters of something called “resilience engineering” suggest that planners should put more effort into designing systems that resist disruption and that degrade gracefully, rather than failing calamitously when stressed. Such an approach would reduce our vulnerability to cyberwar—and to many other kinds of trouble as well.

Mr. Reynolds, who teaches Internet law at the University of Tennessee, hosts “Instavision” at PJTV.com.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704671904575193942114368842.html