The Speech Obama Hasn’t Given

What are we doing in Libya? Americans deserve an explanation.

It all seems rather mad, doesn’t it? The decision to become involved militarily in the Libyan civil war couldn’t take place within a less hospitable context. The U.S. is reeling from spending and deficits, we’re already in two wars, our military has been stretched to the limit, we’re restive at home, and no one, really, sees President Obama as the kind of leader you’d follow over the top. “This way, men!” “No, I think I’ll stay in my trench.” People didn’t hire him to start battles but to end them. They didn’t expect him to open new fronts. Did he not know this?

He has no happy experience as a rallier of public opinion and a leader of great endeavors; the central initiative of his presidency, the one that gave shape to his leadership, health care, is still unpopular and the cause of continued agitation. When he devoted his entire first year to it, he seemed off point and out of touch. This was followed by the BP oil spill, which made him look snakebit. Now he seems incompetent and out of his depth in foreign and military affairs. He is more observed than followed, or perhaps I should say you follow him with your eyes and not your heart. So it’s funny he’d feel free to launch and lead a war, which is what this confused and uncertain military action may become.

What was he thinking? What is he thinking?

Which gets me to Mr. Obama’s speech, the one he hasn’t given. I cannot for the life of me see how an American president can launch a serious military action without a full and formal national address in which he explains to the American people why he is doing what he is doing, why it is right, and why it is very much in the national interest. He referred to his aims in parts of speeches and appearances when he was in South America, but now he’s home. More is needed, more is warranted, and more is deserved. He has to sit at that big desk and explain his thinking, put forward the facts as he sees them, and try to garner public support. He has to make a case for his own actions. It’s what presidents do! And this is particularly important now, because there are reasons to fear the current involvement will either escalate and produce a lengthy conflict or collapse and produce humiliation.

Without a formal and extended statement, the air of weirdness, uncertainty and confusion that surrounds this endeavor will only deepen.

The questions that must be answered actually start with the essentials. What, exactly, are we doing? Why are we doing it? At what point, or after what arguments, did the president decide U.S. military involvement was warranted? Is our objective practical and doable? What is America’s overriding strategic interest? In what way are the actions taken, and to be taken, seeing to those interests?

From those questions flow many others. We know who we’re against—Moammar Gadhafi, a bad man who’s done very wicked things. But do we know who we’re for? That is, what does the U.S. government know or think it knows about the composition and motives of the rebel forces we’re attempting to assist? For 42 years, Gadhafi controlled his nation’s tribes, sects and groups through brute force, bribes and blandishments. What will happen when they are no longer kept down? What will happen when they are no longer oppressed? What will they become, and what role will they play in the coming drama? Will their rebellion against Gadhafi degenerate into a dozen separate battles over oil, power and local dominance?

What happens if Gadhafi hangs on? The president has said he wants U.S. involvement to be brief. But what if Gadhafi is fighting on three months from now?

On the other hand, what happens if Gadhafi falls, if he’s deposed in a palace coup or military coup, or is killed, or flees? What exactly do we imagine will take his place?

Supporters of U.S. intervention have argued that if we mean to protect Libya’s civilians, as we have declared, then we must force regime change. But in order to remove Gadhafi, they add, we will need to do many other things. We will need to provide close-in air power. We will probably have to put in special forces teams to work with the rebels, who are largely untrained and ragtag. The Libyan army has tanks and brigades and heavy weapons. The U.S. and the allies will have to provide the rebels training and give them support. They will need antitank missiles and help in coordinating air strikes.

Once Gadhafi is gone, will there be a need for an international peacekeeping force to stabilize the country, to provide a peaceful transition, and to help the post-Gadhafi government restore its infrastructure? Will there be a partition? Will Libyan territory be altered?

None of this sounds like limited and discrete action.

In fact, this may turn out to be true: If Gadhafi survives, the crisis will go on and on. If Gadhafi falls, the crisis will go on and on.

Everyone who supports the Libyan endeavor says they don’t want an occupation. One said the other day, “We’re not looking for a protracted occupation.”

Protracted?

Mr. Obama has apparently set great store by the fact that he was not acting alone, that Britain, France and Italy were eager to move. That’s good—better to work with friends and act in concert. But it doesn’t guarantee anything. A multilateral mistake is still a mistake. So far the allied effort has not been marked by good coordination and communication. If the conflict in Libya drags on, won’t there tend to be more fissures, more tension, less commitment and more confusion as to objectives and command structures? Could the unanticipated results of the Libya action include new strains, even a new estrangement, among the allies?

How might Gadhafi hit out, in revenge, in his presumed last days, against America and the West?

And what, finally, about Congress? Putting aside the past half-century’s argument about declarations of war, doesn’t Congress, as representative of the people, have the obvious authority and responsibility to support the Libyan endeavor, or not, and to authorize funds, or not?

These are all big questions, and there are many other obvious ones. If the Libya endeavor is motivated solely by humanitarian concerns, then why haven’t we acted on those concerns recently in other suffering nations? It’s a rough old world out there, and there’s a lot of suffering. What is our thinking going forward? What are the new rules of the road, if there are new rules? Were we, in Libya, making a preemptive strike against extraordinary suffering—suffering beyond what is inevitable in a civil war?

America has been through a difficult 10 years, and the burden of proof on the need for U.S. action would be with those who supported intervention. Chief among them, of course, is the president, who made the decision as commander in chief. He needs to sit down and tell the American people how this thing can possibly turn out well. He needs to tell them why it isn’t mad.

Peggy Noonan, Wall Street Journal

__________

Full article: http://online.wsj.com/article/SB10001424052748704604704576221142167651286.html

Plural trouble

Uh-oh. Now there’s more than one kind of Prius

You might not think linguistic issues would be high on the agenda at a car show, but don’t tell that to Toyota. At the Chicago Auto Show two weeks ago, the company announced the results of an online poll it held to determine the “proper” plural of its popular hybrid car, the Prius.

The poll was part of the company’s announcement of three new Prius models: a crossover hybrid, a small concept car, and a plug-in hybrid. Although none is on the market yet, the prospect of more than one kind of Prius meant the question of the proper plural had become urgent, at least to Toyota.

The Prius, of course, is tricky to pluralize, because it ends with an “s” and sounds Latin, both of which tend to throw English speakers into fits. As part of its marketing campaign, Toyota also created a series of jokey videos with James Lipton, host of “Inside the Actors Studio.” In one, Lipton interviews an octopus, who of course picks Prii (pronounced pree-eye) as its preferred plural; in another, William Shakespeare plumps for the faux-Latin Prium. More than 1.8 million votes were cast over six weeks, and the final winner was Prii, with 25 percent, followed closely by Prius, 24 percent; Priuses, 20 percent; Prien, 18 percent; and Prium, 13 percent (sorry, Bill). The winner has now been added to Dictionary.com’s entry for Prius. (Toyota has long been an advertiser on Dictionary.com’s site, including a 2009 campaign that linked Prius ads to words such as sustainable, green, and moonroof.)

Toyota is not the first to try to leverage people’s strong opinions about language to draw attention to a brand. Other companies have run similar campaigns: Nestle aired television ads featuring Shaquille O’Neal arguing about how to pronounce caramel (CARE-uh-muhl v. CAR-muhl); old cartoon ads for Heinz Worcestershire sauce played on the difficulty of pronouncing Worcestershire; and just recently The New York Times reported that Italian children’s brand Chicco is running a contest for parents to record their children saying “Chicco” (pronounced KEE-ko), with the winners appearing on a billboard in Times Square.

Toyota’s enthusiastic embrace of the plural is unusual in the business world. Trademark holders typically like to avoid their marks being entered in dictionaries at all. When I was an editor at a traditional dictionary, I had a thick file of letters from trademark holders demanding special treatment for their trademarks — or their immediate removal. The more knee-jerk the use of a trademark for the branded object becomes (think Band-Aid, Kleenex), the more the company starts worrying about “genericization,” the risk that its valuable brand will become so widely used that it loses trademark status. Xerox runs regular ads to remind people that the company prefers you to use Xerox only as an adjective (as in “Xerox brand photocopiers”) instead of as a verb or even as a plural noun (which they give, if you must, as Xeroxes). No fanciful ad campaign asking us to choose between Xeroxes and Xeroxim!

Toyota must feel that the risk to its trademark is outweighed by the positive publicity. Steve Rivkin, a branding expert and coauthor of six books on marketing strategy and innovation, agrees. He found the Toyota campaign “very much in keeping with the brand’s irreverent, cheeky personality.”

But have the people really spoken? Before the vote, both expert language opinion and general usage seemed to be firmly on the side of Priuses. Kory Stamper, an associate editor for the Merriam-Webster Dictionary, was quoted in January by The Detroit Free Press as being “tempted as an English lexicographer to pluralize it as a regular English noun, Priuses.” Ben Zimmer, of the (recently canceled) On Language column for The New York Times Magazine, also threw in his support for Priuses: “We might as well form the word the way English plurals are formed.” In normal usage, people seem to treat Prius just like any other regular English plural, slapping an “es” on it. Newspapers and magazines, by and large, have unselfconsciously used Priuses since the brand entered the public eye in the late 1990s. (The Prius was first released in Japan in 1997.)

Despite the votes and the Dictionary.com imprimatur, I’d bet against Prii, and put my money on the more pedestrian Priuses. Most people will use the first form that comes to mind, the clearer and more English-y Priuses. People who care deeply about etymologically motivated plurals will use the correct Latin (Priora, meaning “earlier, better, or more important [things],” as Jan Freeman reported in this space back in 2007, citing Harry Mount, author of “Carpe Diem: Put a Little Latin in Your Life”). That leaves only the people who enjoy voting in online polls to throw their weight behind Prii.

And me? I admit a sneaking fondness for the completely unjustifiable and rarely suggested Prixen, which makes the cars sound like one of Santa’s reindeer.

Erin McKean is a lexicographer and founder of Wordnik.com.

The power of lonely

What we do better without other people around

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

One ongoing Harvard study indicates that people form more lasting and accurate memories if they believe they’re experiencing something alone. Another indicates that a certain amount of solitude can make a person more capable of empathy towards others. And while no one would dispute that too much isolation early in life can be unhealthy, a certain amount of solitude has been shown to help teenagers improve their moods and earn good grades in school.

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city…and experience a feeling of freedom.”

Figuring out what solitude is and how it affects our thoughts and feelings has never been more crucial. The latest Census figures indicate there are some 31 million Americans living alone, which accounts for more than a quarter of all US households. And at the same time, the experience of being alone is being transformed dramatically, as more and more people spend their days and nights permanently connected to the outside world through cellphones and computers. In an age when no one is ever more than a text message or an e-mail away from other people, the distinction between “alone” and “together” has become hopelessly blurry, even as the potential benefits of true solitude are starting to become clearer.

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination;” in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

With his graduate adviser and a researcher from the Forest Service at his side, Long identified a number of different ways a person might experience solitude and undertook a series of studies to measure how common they were and how much people valued them. A 2003 survey of 320 UMass undergraduates led Long and his coauthors to conclude that people felt good about being alone more often than they felt bad about it, and that psychology’s conventional approach to solitude — an “almost exclusive emphasis on loneliness” — represented an artificially narrow view of what being alone was all about.

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

Today other researchers are eagerly diving into that gap. Robert Coplan of Carleton University, who studies children who play alone, is so bullish on the emergence of solitude studies that he’s hoping to collect the best contemporary research into a book. Harvard professor Daniel Gilbert, a leader in the world of positive psychology, has recently overseen an intriguing study that suggests memories are formed more effectively when people think they’re experiencing something individually.

That study, led by graduate student Bethany Burum, started with a simple experiment: Burum placed two individuals in a room and had them spend a few minutes getting to know each other. They then sat back to back, each facing a computer screen the other could not see. In some cases they were told they’d both be doing the same task, in other cases they were told they’d be doing different things. The computer screen scrolled through a set of drawings of common objects, such as a guitar, a clock, and a log. A few days later the participants returned and were asked to recall which drawings they’d been shown. Burum found that the participants who had been told the person behind them was doing a different task — namely, identifying sounds rather than looking at pictures — did a better job of remembering the pictures. In other words, they formed more solid memories when they believed they were the only ones doing the task.

The results, which Burum cautions are preliminary, are now part of a paper on “the coexperiencing mind” that was recently presented at the Society for Personality and Social Psychology conference. In the paper, Burum offers two possible theories to explain what she and Gilbert found in the study. The first invokes a well-known concept from social psychology called “social loafing,” which says that people tend not to try as hard if they think they can rely on others to pick up their slack. (If two people are pulling a rope, for example, neither will pull quite as hard as they would if they were pulling it alone.) But Burum leans toward a different explanation, which is that sharing an experience with someone is inherently distracting, because it compels us to expend energy on imagining what the other person is going through and how they’re reacting to it.

“People tend to engage quite automatically with thinking about the minds of other people,” Burum said in an interview. “We’re multitasking when we’re with other people in a way that we’re not when we just have an experience by ourselves.”

Perhaps this explains why seeing a movie alone feels so radically different than seeing it with friends: Sitting there in the theater with nobody next to you, you’re not wondering what anyone else thinks of it; you’re not anticipating the discussion that you’ll be having about it on the way home. All your mental energy can be directed at what’s happening on the screen. According to Greg Feist, an associate professor of psychology at San Jose State University who has written about the connection between creativity and solitude, some version of that principle may also be at work when we simply let our minds wander: When we let our focus shift away from the people and things around us, we are better able to engage in what’s called meta-cognition, or the process of thinking critically and reflectively about our own thoughts.

Other psychologists have looked at what happens when other people’s minds don’t just take up our bandwidth, but actually influence our judgment. It’s well known that we’re prone to absorb or mimic the opinions and body language of others in all sorts of situations, including those that might seem the most intensely individual, such as who we’re attracted to. While psychologists don’t necessarily think of that sort of influence as “clouding” one’s judgment — most would say it’s a mechanism for learning, allowing us to benefit from information other people have access to that we don’t — it’s easy to see how being surrounded by other people could hamper a person’s efforts to figure out what he or she really thinks of something.

Teenagers, especially, whose personalities have not yet fully formed, have been shown to benefit from time spent apart from others, in part because it allows for a kind of introspection — and freedom from self-consciousness — that strengthens their sense of identity. Reed Larson, a professor of human development at the University of Illinois, conducted a study in the 1990s in which adolescents outfitted with beepers were prompted at irregular intervals to write down answers to questions about who they were with, what they were doing, and how they were feeling. Perhaps not surprisingly, he found that when the teens in his sample were alone, they reported feeling a lot less self-conscious. “They want to be in their bedrooms because they want to get away from the gaze of other people,” he said.

The teenagers weren’t necessarily happier when they were alone; adolescence, after all, can be a particularly tough time to be separated from the group. But Larson found something interesting: On average, the kids in his sample felt better after they spent some time alone than they did before. Furthermore, he found that kids who spent between 25 and 45 percent of their nonclass time alone tended to have more positive emotions over the course of the weeklong study than their more socially active peers, were more successful in school, and were less likely to self-report depression.

“The paradox was that being alone was not a particularly happy state,” Larson said. “But there seemed to be kind of a rebound effect. It’s kind of like a bitter medicine.”

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

Research is still far from offering any concrete guidelines. Insofar as there is a consensus among solitude researchers, it’s that in order to get anything positive out of spending time alone, solitude should be a choice: People must feel like they’ve actively decided to take time apart from people, rather than being forced into it against their will.

Overextended parents might not need any encouragement to see time alone as a desirable luxury; the question for them is only how to build it into their frenzied lives. But for the millions of people living by themselves, making time spent alone productive may require a different kind of effort. Sherry Turkle, director of the MIT Initiative on Technology and Self, argues in her new book, “Alone Together,” that people should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

John Cacioppo of the University of Chicago, whose 2008 book “Loneliness” with William Patrick summarized a career’s worth of research on all the negative things that happen to people who can’t establish connections with others, said recently that as long as it’s not motivated by fear or social anxiety, then spending time alone can be a crucially nourishing component of life. And it can have some counterintuitive effects: Adam Waytz in the Harvard psychology department, one of Cacioppo’s former students, recently completed a study indicating that people who are socially connected with others can have a hard time identifying with people who are more distant from them. Spending a certain amount of time alone, the study suggests, can make us less closed off from others and more capable of empathy — in other words, better social animals.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” Cacioppo said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh is the staff writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2011/03/06/the_power_of_lonely/