Teaching Good Sex

“First base, second base, third base, home run,” Al Vernacchio ticked off the classic baseball terms for sex acts. His goal was to prompt the students in Sexuality and Society — an elective for seniors at the private Friends’ Central School on Philadelphia’s affluent Main Line — to examine the assumptions buried in the venerable metaphor. “Give me some more,” urged the fast-talking 47-year-old, who teaches 9th- and 12th-grade English as well as human sexuality. Arrayed before Vernacchio was a circle of small desks occupied by 22 teenagers, six male and the rest female — a blur of sweatshirts and Ugg boots and form-fitting leggings.

“Grand slam,” called out a boy (who’d later tell me with disarming matter-of-factness that “the one thing Mr. V. talked about that made me feel really good was that penis size doesn’t matter”).

“Now, ‘grand slam’ has a bunch of different meanings,” replied Vernacchio, who has a master’s degree in human sexuality. “Some people say it’s an orgy, some people say grand slam is a one-night stand. Other stuff?”

“Grass,” a girl, a cheerleader, offered.

“If there’s grass on the field, play ball, right, right,” Vernacchio agreed, “which is interesting in this rather hair-phobic society where a lot of people are shaving their pubic hair — ”

“You know there’s grass, and then it got mowed, a landing strip,” one boy deadpanned, instigating a round of laughter. While these kids will sit poker-faced as Vernacchio expounds on quite graphic matters, class discussions are a spirited call and response, punctuated with guffaws, jokey patter and whispered asides, which Vernacchio tolerates, to a point.

Vernacchio explained that sex as baseball implies that it’s a game; that one party is the aggressor (almost always the boy), while the other is defending herself; that there is a strict order of play, and you can’t stop until you finish. “If you’re playing baseball,” he elaborated, “you can’t just say, ‘I’m really happy at second base.’ ”

A boy who was the leader of the Young Conservatives Club asked, “But what if it’s just more pleasure getting to home base?” Although this student is a fan of Vernacchio’s, he likes to challenge him about his tendency to empathize with the female perspective.

“Well, we’ve talked about how a huge percentage of women aren’t orgasming through vaginal intercourse,” Vernacchio responded, “so if that’s what you call a home run, there’s a lot of women saying” — his voice dropped to a dull monotone — “ ‘O.K., but this is not doing it for me.’ ”

In its breadth, depth and frank embrace of sexuality as what Vernacchio calls a “force for good” — even for teenagers — this sex-ed class may well be the only one of its kind in the United States. “There is abstinence-only sex education, and there’s abstinence-based sex ed,” said Leslie Kantor, vice president of education for Planned Parenthood Federation of America. “There’s almost nothing else left in public schools.”

Across the country, the approach ranges from “abstinence until marriage is the only acceptable choice, contraceptives don’t work and premarital sex is physically and emotionally harmful” to “abstinence is usually best, but if you must have sex, here are some ways to protect yourself from pregnancy and disease.” The latter has been called “disaster prevention” education by sex educators who wish they could teach more; a dramatic example of the former comes in a video called “No Second Chances,” which has been used in abstinence-only courses. In it, a student asks a school nurse, “What if I want to have sex before I get married?” To which the nurse replies, “Well, I guess you’ll just have to be prepared to die.”

In settings outside schools, the constraints typically aren’t as tight. Bill Taverner, director of the Center for Family Life Education for Planned Parenthood of Greater Northern New Jersey, said that his 11 educators are usually given the most freedom with so-called high-risk youth, those in juvenile detention, or who live in poor neighborhoods with high teen-pregnancy rates. “I wish I could say it was for positive reasons,” he said, “but it’s almost as if society has just kind of thrown up their hands and said, ‘Well, these kids are going to have sex anyway, so you might as well not hide anything from them.’ ”

Sex education in America was invented by Progressive Era reformers like Sears, Roebuck’s president, Julius Rosenwald, and Charles Eliot, the president of Harvard University. Eliot, according to Kristin Luker, author of the book “When Sex Goes to School,” concluded that sex education was so important that he turned down Woodrow Wilson’s offer of the ambassadorship to Britain to join the first national group devoted to promoting the subject. Eliot was one of the so-called social hygienists who thought that teaching people about the “proper uses of sexuality” would help stamp out venereal disease and the sexual double-standard that kept women from achieving full equality. Proper sex meant sex between husband and wife (prostitution was then seen as regrettable but necessary because of men and their “needs”), so educators preached about both the rewards of carnal contact within marriage and the hazards outside of it.

It wasn’t until the 1960s and 1970s that the pill, feminism and generational rebellion smashed the cultural consensus that sex should be confined to marriage. And for a “brief, fragile period” in the 1970s and early 1980s, writes Luker, a professor of sociology and of law at U.C. Berkeley, “opinion leaders of almost every stripe believed sex education was the best response to the twin problems of teenage pregnancy and H.I.V./AIDS.” It was around this time that the Unitarian Universalist Association started its famously sex-positive curriculum, About Your Sexuality, with details about masturbation and orgasms and slide shows of couples touching one another’s genitals. (The classes are still going strong, though in the late 1990s, the program was replaced with another one without explicit images called Our Whole Lives, a joint project of the U.U.A. and the United Church of Christ.)

Back then, even public schools taught what came to be called “comprehensive sex education,” nonjudgmental instruction on bodies, birth control, disease prevention and “healthy relationships” — all geared to helping teenagers make responsible choices, one of which might be choosing to become sexually intimate with someone. But by the end of the 1980s, sex ed had taken its place in the basket of wedge issues dividing the right and left. This created the opening for abstinence instruction (the word “abstinence” wasn’t part of the sex-ed vernacular until the 1980s) to bulldoze any curriculum that didn’t treat sex as forbidden for teenagers. But Kantor and many others in the field remain “comprehensive sex ed” believers. To them, the license Vernacchio has to roam the sexual landscape is almost unimaginable.

Sitting in the conference room at Friends’, a tall, striking girl told me after class one day last winter that she was on the verge of getting involved with someone she really liked but was hesitating because she knew he had a reputation for juggling multiple girlfriends. The girl, who’d had sex twice in 11th grade with a boy she later discovered was sleeping around, wanted to be monogamous with the new guy but didn’t know how to broach it with him. (She was one of 17 students in Sexuality and Society who spoke to me privately; while Vernacchio is happy to discuss any personal information the kids bring up, he doesn’t seek it.)

Another young woman, who tended to treat her tiny desk in Vernacchio’s class as a lounger, flinging her legs out toward the center of the room, told me that she enjoyed sex for its own sake — the way guys do, as she put it. While she could express this with some bravado now, she came into Sexuality and Society in the beginning of the year uneasy about this aspect of herself, she said. A third girl, who called herself a “really anxious person,” still got choked up discussing a false rumor someone wrote on Facebook last fall: that she’d drunkenly offered oral sex to a boy at a party, who, as it happened, also was enrolled in Sexuality and Society. That young man, who didn’t post the lie and was predictably unfazed by it, had fallen for another classmate. That girl was equally besotted, but because they were in “sex class,” the couple always positioned themselves across the room from each other, never side by side; otherwise, they told me, they’d feel like animals in a zoo. Not that the pair weren’t still on display. With the exception of Vernacchio, everyone knew their status and found it impossible not to notice them locking eyes periodically, smiling briefly, before she’d duck her head, push her long, shiny hair behind her ears and turn her gaze back to her teacher.

“Mr. V. takes every question seriously,” another girl, the student-council vice president, told me. “You never feel like it’s the wise sexuality master preaching to the young.” Yet Vernacchio also doesn’t give off the vibe that he wants to be young, or imagines that he still is. His attire every day for the two weeks I attended the class in February was a sweater vest over a button-down shirt and tie, except for Valentine’s Day, when he shed the vest for a ruby red shirt and a tie decorated with hearts. That day Vernacchio gave all of his students brightly colored origami hearts he made himself; the members of Sexuality and Society reciprocated by sending him a singing Valentine (a “Glee”-worthy rendition of “Everytime We Touch,” by the boys’ barbershop choir).

Vernacchio is nothing so much as a mensch. Gay, with a partner of 17 years, he has ruddy cheeks, a quick smile and a plane of brown hair overhanging his brow, from which he must regularly wipe away sweat during intense discussions. He lectures with plainspoken authority while also conveying a deep curiosity about his subject — the consummate sex scholar.

During a lesson about recognizing your “crumble lines” — comments that play to your vulnerability and may make you “act against your values” — Vernacchio, a self-described “short, round, hairy guy” who struggles with body-image issues, revealed his own tendency to fall for anybody who compliments his appearance: “You say you think I’m pretty. I’ll do anything for you.” He was exaggerating a bit for effect, but the poignancy of the self-disclosure wasn’t lost on the class.

Friends’ Central, a Quaker prep school that prides itself on both its academic rigor and its ethic of social responsibility, is tucked away in the bucolic hills of suburban Philadelphia. Vernacchio joined the school’s English department in 1998, and when, three years later, he asked to start Sexuality and Society, administrators were delighted. “He teaches at the very highest level,” said David Felsen, who in June retired as headmaster of the school after 23 years. Because Vernacchio was such a gifted instructor, Felsen said, he didn’t worry about parents’ reactions. And in fact, Vernacchio says that no one has ever complained or even voiced reservations about something he discussed in class.

The parents I spoke to — ranging from a father who said he loves his son “to pieces” but wishes he knew him better to a mom who gets frequent updates from her daughter now in college — seemed grateful for the class. “My daughter is sometimes private,” another mother said, “and I appreciate that there was another place she could go to get good, healthy information.” Early in the year, Vernacchio gives an assignment asking students to interview a parent about how he or she learned about sex, and the father said his son handled it with aplomb: “He was very natural, and I’m the one thinking, This is embarrassing. He was a lot more mature about the conversation than I was.”

Sexuality and Society begins in the fall with a discussion of how to recognize and form your own values, then moves through topics like sexual orientation (occasionally students identify as gay or transgender, Vernacchio said, but in this particular class none did); safer sex; relationships; sexual health; and the emotional and physical terrain of sexual activity. (The standard public-school curriculum sticks to S.T.I.’s and contraceptive methods, and it can go by in a blink; in a Kaiser Family Foundation survey, two-thirds of principals said that the subject was covered in just several class periods.) Vernacchio also teaches a mandatory six-session sexuality course for ninth graders that covers some of the same material presented to the older kids, though less fully.

The lessons that tend to raise eyebrows outside the school, according to Vernacchio, are a medical research video he shows of a woman ejaculating — students are allowed to excuse themselves if they prefer not to watch — and a couple of dozen up-close photographs of vulvas and penises. The photos, Vernacchio said, are intended to show his charges the broad range of what’s out there. “It’s really a process of desensitizing them to what real genitals look like so they’ll be less freaked out by their own and, one day, their partner’s,” he said. What’s interesting, he added, is that both the boys and girls receive the photographs of the penises rather placidly but often insist that the vulvas don’t look “normal.” “They have no point of reference for what a normal, healthy vulva looks like, even their own,” Vernacchio said. The female student-council vice president agreed: “When we did the biology unit, I probably would’ve been able to label just as many of the boys’ body parts as the girls’, which is sad. I mean, you should know about the names of your own body.”

Vernacchio is aware that his utter lack of self-consciousness in conversing about sexual matters is unusual. “When God was passing out talents,” he likes to say, “I got ease in talking about sex.” But any plan of God’s, whom Vernacchio, a practicing Catholic, often references, was nudged along by two earthly happenings. “As a little kid,” Vernacchio said, “I got pegged as a good public speaker, so I started narrating all the school plays and reading at church; I got over the fear of speaking really early.” Then, around age 12, he started to research sex, having known from kindergarten that he was different in a “way that had to do with boys and girls.” He looked up homosexuality in the family dictionary, then took to going to libraries and planting himself in the sexuality section of the stacks. “I used to have the Dewey-decimal number for homosexuality memorized.” He was entirely on his own. There was no discussion of being gay at Vernacchio’s all-boys school; none from his parish priest, who at the end of sermons offered a prayer for “veterans of foreign wars, people who live near nuclear power plants and homosexuals”; and not from his parents, either, even after he came out to them at 19. Indeed, one night several years later, his mom was doing dinner dishes at the sink and his dad was plopped on the couch a few feet away in their tiny South Philly house, and Vernacchio mustered the courage to tell them that he was happily dating someone. “My mom never turned around, never reacted in any way, and my dad turned to me, didn’t miss a beat and said, ‘Whatever happened to the metric system?’ ”

It was drummed into him as a human-sexuality master’s student, Vernacchio said, to never be explicit merely for the sake of being explicit: have a rationale for every last thing you say. Which occurred to me one day listening to him answer an anonymous question — there’s a box on the bookshelf where students can drop them — about whether a girl’s urge to urinate during intercourse might be a precursor to female ejaculation. He laid out a plethora of explanations for the feeling, everything from anxiety about having sex to a bladder infection to the possibility that the young woman was getting “some really good G-spot stimulation” and in fact verging on ejaculation.

“If kids are starting to use their bodies sexually, they should know about their potentialities,” Vernacchio told me later. “It’s O.K. that boys ejaculate, that’s totally normalized” — wet dreams have been standard fare for middle-school health class for decades — “but girls, gross! Girls will think they’re peeing themselves, and it’s really shameful.”

“I just love this class — you can ask anything,” a member of the girls’ basketball team told me one day in February. She wears her long blond hair in two braids and shyly divulged that she was in love with her boyfriend of eight months. “You may not be able to get the best information on the Internet, but you can ask Mr. V., and he’ll either know it or ask his sex-ed friends,” she said, referring to a sex-educators’ e-mail list that Vernacchio consults.

Two boys who told me they’d been masturbating to Internet porn since middle school said they found themselves disoriented at the real-life encounters they had with girls, but Vernacchio helped them grasp the disjuncture. Pornography “gives boys the impression that the girl is there to do any position you want, or to please you, or to, you know, role-play to your liking,” one of them said. “But yesterday, when Mr. V. said there is no romanticism or intimacy in porn, porn is strictly sexual — I’d never thought about that.”

One young man in the class told me he’d had intercourse with 10 girls, but he was a relative outlier. While most of the students had had intercourse — 70 percent of teenagers do so by their 19th birthday, according to the Guttmacher Institute — only 4 of the 17 I spoke with reported having three or more partners; 10 had had one or two; the other three were virgins.

But the numbers fail to capture the variation within the sexual histories. Of the two girls with more than two partners, one was the girl who appreciated purely sexual encounters. The other told me that during the summer before ninth grade, she was raped one night on a beach by a stranger. She told no one, she said, and she subsequently got together with a number of boys in what she now saw as a misguided effort to “take control” of her sexuality.

As to whether his class encourages teenagers to have sex — a protest perennially lodged against even basic sex ed (though pretty firmly disproved by research) — Vernacchio said that he portrays sex in all its glory and complications. “As much as I say, ‘This is how orgasms work, and they’re really cool,’ I say there’s a lot of work to being in a relationship and having sex. I don’t think I have the power to make sex sound so enticing that kids are going to break through their self-esteem issues or body stuff or parental pressures or whatever to just go do it.” And anyway, Vernacchio went on, “I don’t necessarily see the decision to become sexually active when you’re 17 as an unhealthy one.” His goal is for young people to know their own minds, be clear about what they do and don’t want and use their self-knowledge to make choices.

To that end, he spends one class leading the students through a kind of cost-benefit analysis of various types of relationships, from friendship to old-school dating to hookups. When he asked his students about the benefits of hookups, the kids volunteered: “No expected commitment,” “Sexual pleasure” and “Guarding emotions,” meaning you can enjoy yourself without the messiness of attachment.

“Yep,” Vernacchio said, “sometimes a hookup is all you want.” Then he pressed them for drawbacks.

“You may not be able to control your emotions,” someone called out.

“O.K.,” Vernacchio said approvingly. “What else?”

“It’s confusing,” said the student-council vice president.

“Yeah,” Vernacchio said, explaining that two people may have different ideas about what it means to hook up, which is why communication is so important. (“If you can’t talk about it, you probably shouldn’t be doing it,” he says.)

“People saying, ‘Oh, she’s a slut,’ ‘Oh, he’s a man-whore,’ ” floated a boy who described himself to me as a “lonesome outcast” until 11th grade, when he finally started to make friends. “I guess for women it’s usually seen as more of a bad thing.”

“Right,” Vernacchio agreed, “but there’s pressure on guys too. Guys get the, ‘Oh, yeah, he’s a player,’ but what if you’re really not? And then you feel pressure to maintain that.”

Vernacchio rarely misses a chance to ask his students to examine gender bias in their sexual attitudes or behavior. The girl who “admitted” to liking sex as much as boys did said that Vernacchio’s consistent affirmation of the variety of sexual preferences (“Guys aren’t necessarily naturally hornier than girls — there’s a huge social piece of this,” he told the class) helped her shake her sense of deviance and shame. In fact, she felt confident enough to debate her point of view in class with the girl who was nervous about embarking on a relationship with the guy known to be promiscuous. That young woman told me she’d been moved by the exchange: “I’m like, ‘That is nasty to hook up with someone for one night.’ She was like, ‘Well, I don’t care, sometimes I don’t want a relationship.’ We were going back and forth, but then I had to respect her. Before I took this class, I probably would’ve thought she was a whore, but she knows what she wants. That’s not something I want, but it doesn’t make her wrong, it doesn’t make me wrong.”

Above all else, what Vernacchio can do that his colleagues envy is to simply assume the pleasure of sex and directly address it with unhurried ease. During one class, he handed out a worksheet with the five senses printed along the top and asked the students to try to list sexual activities that optimized each. (There were examples to prod their thinking: under hearing, for instance, was “listening to your partner read an erotic story.”) While Vernacchio knew the exercise would be a challenge for the kids — and he didn’t expect them to share their answers — its purpose was to open their minds to a broader sexuality.

Regarding the statistic that Vernacchio alluded to earlier — that 70 percent of women do not orgasm through vaginal penetration alone — one boy exclaimed when we talked, “That shocked me, a lot.” The other boys also told me they’d been in the dark about the mysteries of female sexual satisfaction. “I think I sort of knew where the clitoris was, but I didn’t know it was, like, under something,” one said. Another declared, “It’s almost like a wake-up call.” He paused. “To not just please yourself.”

The female students were nearly equally surprised. “I always thought, Is it weird that I don’t get an orgasm from, you know, just like vaginal penetration?” said a girl who’d had intercourse with one boy, though she’d had orgasms before that from being touched genitally. “It was comforting to hear that for most people it doesn’t happen. I mean, I’d heard it, but it was nice hearing it from Mr. V., who knows so much about it, and other people saying, ‘Yeah, yeah, that’s right.’ ”

Not that information was always power for these young women. One girl said that while she could advise her boyfriend on how to increase her pleasure, she wouldn’t, because he’s “very insecure” about his lack of experience. Another estimated she’d had only two orgasms with her boyfriend of longstanding, each during intercourse, though she climaxes on her own through masturbation. Somehow, when she and her boyfriend “do anything, we just end up having sex,” she said, seeming both a little perplexed by the situation, and a little afraid to make waves.

Who gives oral sex to whom is common fodder for Vernacchio’s gender-parity conversations. All but one of the students told me they’d had it, but sometimes only once or twice, and the vast majority within monogamous relationships.

Although Vernacchio encourages students to think about fairness, he certainly doesn’t encourage a direct quid pro quo for oral sex — and the girls, the main givers, were not terribly enthused about being the recipients. “[My boyfriend] completely offered, and I did not want that,” one said. Another agreed: “It just creeps me out.” None were thrilled about performing it, either, and they seemed to be wrestling — in thought and deed — with why they continued to do so. “I do think girls like to take care of people,” the student-council V.P. mused, “and I know that just sounds horrible, like you should send me right back to the ’50s, but my mom is like the most liberal woman I know and still is so happy to make food for people. To some extent, women are just more people-pleasers than men.” One girl said she’d come up with “tricks” to make giving oral sex more enjoyable for her, and that she’d set “strict rules” for herself: “I only do it if they do something on me first, and it has to be below the belt.” And another said she doesn’t enjoy cunnilingus but, taking “the personal is political” to heart, she asked her boyfriend to do it anyway: if she was expected to service him orally, he should have to return the favor.

All the boys said that Vernacchio had increased their sensitivity to the girls. One recounted how in an effort to consider his girlfriend’s feelings he’d asked her if she was willing to give him oral sex — none of that pushing her head down in the heat of the moment — and she’d considered it for an excruciating hour. Or maybe it just felt like that. “Do you have to think about it this long?” he finally pleaded. Eventually, she agreed.

Pleasure in sex ed was a major topic last November at one of the largest sex-education conferences in the country, sponsored by the education arm of Planned Parenthood of Greater Northern New Jersey. “Porn is the model for today’s middle-school and high-school students,” Paul Joannides said in the keynote speech. “And none of us is offering an alternative that’s even remotely appealing.”

Joannides, who is 58, made sex education his life’s work following the success of his sex manual for older teenagers and adults called “The Guide to Getting It On.” Lauded for its voluminous accuracy and wit, the 900-plus-page paperback took him 15 years to research and write. Joannides argues that pornography can be used as a teaching tool, not a bogeyman, as is apparent in a short Web video he made called “5 Things to Learn About Lovemaking From Porn.” “In porn,” he affably lectures, “sex happens instantly: camera, action, crotch. . . . In real life, the willingness to ask and learn from your partner is often what separates the good lovers from those who are totally forgettable.” (Another of Joannides’s assertions is that the best way to reach heterosexual boys — who he believes are the most neglected in the current environment — is to play to their desire for “mastery,” because by middle school, they’ve thoroughly absorbed that to be a man is to be a stud.)

One of sex educators’ big problems, Joannides told the New Jersey audience, is that they define their role as the “messengers of all the things that can go wrong with sex.” The attention paid to S.T.I.’s, pregnancy, rape and discrimination based on sexual orientation, while understandable, comes at a cost, he says. “We’re worrying about which bathrooms transgender students should use while teens are worrying whether they should shave all the way or leave a landing strip,” he said. “They’re worrying if someone special will find them sexually attractive, whether they will be able to do it as well as porn, whether others have the same kind of sexual feelings they do.”

In other words, as much as Joannides criticizes his opponents on the right, he also tweaks the orthodoxies of his friends on the left, hoping to spur them to contemplate how they themselves dismiss pleasure. His main premise is that young people will tune out educators if their real concerns are left in the shadows. And practically speaking, pleasure is so braided through sex that if you can’t mention it, you miss chances to teach about safe sex in a way that young people can really use.

For instance, in addition to pulling condoms over bananas — which has become a de rigueur contraception lesson among “liberal” educators — young people need to hear specifics about making the method work for them. “We don’t tell them: ‘Look, there are different shapes of condoms. Get sampler packs, experiment.’ That would be entering pleasure into the conversation, and we don’t want that.”

While the conference attendees couldn’t have agreed more with Joannides about what should be taught in schools, much of the crowd thought he was deluded to imagine they could ever get away with it. Back in 1988, Michelle Fine, a professor of social psychology at the City University of New York, wrote an article in The Harvard Educational Review called “Sexuality, Schooling and Adolescent Females: The Missing Discourse of Desire.” In it, she included the comments of a teacher who discouraged community advocates from lobbying for change in the formal curriculum. If outsiders actually discovered the liberties some teachers take, Fine was told, they’d be shut down.

More than two decades later, at the conference, an educator from Pennsylvania told me that one school asked her to teach a sex-ed class but forbade her to use the words “sex,” “sexy” or “tampon.” (She declined.) A chipper young Unitarian sex educator from Brooklyn, Kirsten deFur, who led a workshop titled “Don’t Forget the Good Stuff,” gave tips on how her colleagues could avoid uttering the words “pleasure” and “orgasm.” “Ask open-ended questions about what feels good,” deFur recommended. And, she added, the P-word might even be acceptable in the proper context: “If you have healthy sex, it’ll be more pleasurable,” an instructor might dare to say.

That more expansive sex education has to be done in code was something I came across repeatedly. A veteran advocate in the field gave me a short list of teachers to contact who might be willing to talk to me but then warned, “I don’t know if any of them are going to want to have what they’re doing out there.”

“What if our kids really believed we wanted them to have great sex?” Vernacchio asked near the end of an evening talk he gave in January primarily for parents of ninth graders who would attend his sex-ed minicourse. “What if they really believed that we want them to be so passionately in love with someone that they can’t keep their hands off them? What if they really believed we want them to know their own bodies?”

Vernacchio didn’t imagine that his audience, who gave him an enthusiastic ovation when his presentation ended, wanted their 14- and 15-year-olds to go out tomorrow and jump into bed or the backseat. Sex education, he and others point out, is one of the few classes where it’s not understood that young people are being prepared for the future.

Sex, of course, can come with emotional confusion and pain, and be enmeshed with violence, which Michelle Fine knows well. She said that what all adolescents crave is a “safe space” to pull apart and ponder the stew of relationships and sexual activity — including intimacy and desire and betrayal and coercion.

Vernacchio’s classroom is such a setting. Owing partly to his devotion to his job, partly to the individual relationships he starts developing with students in ninth grade as their English or sex-ed instructor or adviser, he looks out at a roomful of people whom he really knows, and who depend on him for discerning and generous counsel. This was especially true for the young woman who was raped — she told Vernacchio about the assault before anyone else at Friends’ — as well as the girl who was undone by her scorching on Facebook. She relied on Vernacchio all year for support, she said.

For every single question that Vernacchio pulls out of his anonymous question box about female ejaculation, there are 10 like these: How do you handle your insecurities in a relationship? How do you stop worrying about being cheated on? How do you know when it’s time to break up? How do I talk to my partner about wanting to spend more time together without being annoying? Watching how closely the students attended to Vernacchio’s often lengthy answers was a moving reminder of how young 17- and 18-year-olds are.

“As a society, we always tell kids, ‘Work hard, just focus on school, don’t think about girls or guys — you can worry about that stuff later, that stuff will work itself out,’ but the thing is, it doesn’t,” said a boy who had told me he had a disconcerting one-nighter with a girl he’d talked to only electronically. The class taught him to be more cautious about choosing the right time with the right person, he said, with a forcefulness that didn’t quite cover the hurt in his eyes. “You learn about the psychological after-effects that could happen to you.”

The girl who was contemplating getting serious with a boy, but only if they could be exclusive, told me she finally figured out how to approach the guy after Vernacchio talked in class about the difference between “nagging” and asking for what you want. “I never thought of saying to him, ‘You know, just tell me if you’re having sex with someone else.’ I don’t want to pressure him, but I feel like it would make me comfortable.” This seems like pretty simple stuff, especially for someone who repeatedly called herself “strong,” but somehow it wasn’t until Vernacchio said that it was O.K. to make such forthright requests that she could conceive of it.

“The campaign for abstinence in the schools and communities may seem trivial, an ideological nuisance,” Michelle Fine and Sara McClelland wrote in a 2006 study in The Harvard Educational Review, “but at its core it is . . . a betrayal of our next generation, which is desperately in need of knowledge, conversation and resources to negotiate the delicious and treacherous terrain of sexuality in the 21st century.”

It’s axiomatic, however, that parents who support richer sex education don’t make the same ruckus with school officials as those who oppose it. “We need to be there at the school boards and say: ‘Guess where kids are getting their messages about sex from? They’re getting it from porn,’ ” Joannides exhorted. “All we’re talking about is just being able to acknowledge that sex is a good thing in the right circumstances, that it’s a normal thing.”

Of course, sex isn’t all pleasure or all peril, it’s both (and sometimes both at once, though that lesson may have to wait for grad school). Vernacchio has a way of getting at its positive potential without ignoring the fact that, however good sex may feel, it’s sometimes best left off the menu. “So let’s think about pizza,” Vernacchio said to his students after they’d deconstructed baseball. The class for that day was just about over. “Why do you have pizza?”

“You’re hungry,” a cross-country runner said.

“Because you want to,” Vernacchio affirmed. “It starts with desire, an internal sense — not an external ‘I got a game today, I have to do it.’ And wouldn’t it be great if our sexual activity started with a real sense of wanting, whether your desire is for intimacy, pleasure or orgasms. . . . And you can be hungry for pizza and still decide, No thanks, I’m dieting. It’s not the healthiest thing for me now.

“If you’re gonna have pizza with someone else, what do you have to do?” he continued. “You gotta talk about what you want. Even if you’re going to have the same pizza you always have, you say, ‘We getting the usual?’ Just a check-in. And square, round, thick, thin, stuffed crust, pepperoni, stromboli, pineapple — none of those are wrong; variety in the pizza model doesn’t come with judgment,” Vernacchio hurried on. “So ideally when the pizza arrives, it smells good, looks good, it’s mouthwatering. Wouldn’t it be great if we had that kind of anticipation before sexual activity, if it stimulated all our senses, not just our genitals but this whole-body experience.” By this time, he was really moving fast; he’d had to cram his pizza metaphor into the last five minutes. “And what’s the goal of eating pizza? To be full, to be satisfied. That might be different for different people; it might be different for you on different occasions. Nobody’s like ‘You failed, you didn’t eat the whole pizza.’

“So again, what if our goal, quote, unquote, wasn’t necessarily to finish the bases?” The students were gathering their papers, preparing to go. “What if it just was, ‘Wow, I feel like I had enough. That was really good.’ ”

Laurie Abraham wrote “The Husbands and Wives Club: A Year in the Life of a Couples Therapy Group,” which began as an article in the magazine.

__________

Full article and photo: http://www.nytimes.com/2011/11/20/magazine/teaching-good-sex.html

The power of lonely

What we do better without other people around

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

One ongoing Harvard study indicates that people form more lasting and accurate memories if they believe they’re experiencing something alone. Another indicates that a certain amount of solitude can make a person more capable of empathy towards others. And while no one would dispute that too much isolation early in life can be unhealthy, a certain amount of solitude has been shown to help teenagers improve their moods and earn good grades in school.

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city…and experience a feeling of freedom.”

Figuring out what solitude is and how it affects our thoughts and feelings has never been more crucial. The latest Census figures indicate there are some 31 million Americans living alone, which accounts for more than a quarter of all US households. And at the same time, the experience of being alone is being transformed dramatically, as more and more people spend their days and nights permanently connected to the outside world through cellphones and computers. In an age when no one is ever more than a text message or an e-mail away from other people, the distinction between “alone” and “together” has become hopelessly blurry, even as the potential benefits of true solitude are starting to become clearer.

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination”; in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

With his graduate adviser and a researcher from the Forest Service at his side, Long identified a number of different ways a person might experience solitude and undertook a series of studies to measure how common they were and how much people valued them. A 2003 survey of 320 UMass undergraduates led Long and his coauthors to conclude that people felt good about being alone more often than they felt bad about it, and that psychology’s conventional approach to solitude — an “almost exclusive emphasis on loneliness” — represented an artificially narrow view of what being alone was all about.

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

Today other researchers are eagerly diving into that gap. Robert Coplan of Carleton University, who studies children who play alone, is so bullish on the emergence of solitude studies that he’s hoping to collect the best contemporary research into a book. Harvard professor Daniel Gilbert, a leader in the world of positive psychology, has recently overseen an intriguing study that suggests memories are formed more effectively when people think they’re experiencing something individually.

That study, led by graduate student Bethany Burum, started with a simple experiment: Burum placed two individuals in a room and had them spend a few minutes getting to know each other. They then sat back to back, each facing a computer screen the other could not see. In some cases they were told they’d both be doing the same task, in other cases they were told they’d be doing different things. The computer screen scrolled through a set of drawings of common objects, such as a guitar, a clock, and a log. A few days later the participants returned and were asked to recall which drawings they’d been shown. Burum found that the participants who had been told the person behind them was doing a different task — namely, identifying sounds rather than looking at pictures — did a better job of remembering the pictures. In other words, they formed more solid memories when they believed they were the only ones doing the task.

The results, which Burum cautions are preliminary, are now part of a paper on “the coexperiencing mind” that was recently presented at the Society for Personality and Social Psychology conference. In the paper, Burum offers two possible theories to explain what she and Gilbert found in the study. The first invokes a well-known concept from social psychology called “social loafing,” which says that people tend not to try as hard if they think they can rely on others to pick up their slack. (If two people are pulling a rope, for example, neither will pull quite as hard as they would if they were pulling it alone.) But Burum leans toward a different explanation, which is that sharing an experience with someone is inherently distracting, because it compels us to expend energy on imagining what the other person is going through and how they’re reacting to it.

“People tend to engage quite automatically with thinking about the minds of other people,” Burum said in an interview. “We’re multitasking when we’re with other people in a way that we’re not when we just have an experience by ourselves.”

Perhaps this explains why seeing a movie alone feels so radically different than seeing it with friends: Sitting there in the theater with nobody next to you, you’re not wondering what anyone else thinks of it; you’re not anticipating the discussion that you’ll be having about it on the way home. All your mental energy can be directed at what’s happening on the screen. According to Greg Feist, an associate professor of psychology at San Jose State University who has written about the connection between creativity and solitude, some version of that principle may also be at work when we simply let our minds wander: When we let our focus shift away from the people and things around us, we are better able to engage in what’s called meta-cognition, or the process of thinking critically and reflectively about our own thoughts.

Other psychologists have looked at what happens when other people’s minds don’t just take up our bandwidth, but actually influence our judgment. It’s well known that we’re prone to absorb or mimic the opinions and body language of others in all sorts of situations, including those that might seem the most intensely individual, such as who we’re attracted to. While psychologists don’t necessarily think of that sort of influence as “clouding” one’s judgment — most would say it’s a mechanism for learning, allowing us to benefit from information other people have access to that we don’t — it’s easy to see how being surrounded by other people could hamper a person’s efforts to figure out what he or she really thinks of something.

Teenagers, especially, whose personalities have not yet fully formed, have been shown to benefit from time spent apart from others, in part because it allows for a kind of introspection — and freedom from self-consciousness — that strengthens their sense of identity. Reed Larson, a professor of human development at the University of Illinois, conducted a study in the 1990s in which adolescents outfitted with beepers were prompted at irregular intervals to write down answers to questions about who they were with, what they were doing, and how they were feeling. Perhaps not surprisingly, he found that when the teens in his sample were alone, they reported feeling a lot less self-conscious. “They want to be in their bedrooms because they want to get away from the gaze of other people,” he said.

The teenagers weren’t necessarily happier when they were alone; adolescence, after all, can be a particularly tough time to be separated from the group. But Larson found something interesting: On average, the kids in his sample felt better after they spent some time alone than they did before. Furthermore, he found that kids who spent between 25 and 45 percent of their nonclass time alone tended to have more positive emotions over the course of the weeklong study than their more socially active peers, were more successful in school and were less likely to self-report depression.

“The paradox was that being alone was not a particularly happy state,” Larson said. “But there seemed to be kind of a rebound effect. It’s kind of like a bitter medicine.”

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

Research is still far from offering any concrete guidelines. Insofar as there is a consensus among solitude researchers, it’s that in order to get anything positive out of spending time alone, solitude should be a choice: People must feel like they’ve actively decided to take time apart from people, rather than being forced into it against their will.

Overextended parents might not need any encouragement to see time alone as a desirable luxury; the question for them is only how to build it into their frenzied lives. But for the millions of people living by themselves, making time spent alone productive may require a different kind of effort. Sherry Turkle, director of the MIT Initiative on Technology and Self, argues in her new book, “Alone Together,” that people should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

John Cacioppo of the University of Chicago, whose 2008 book “Loneliness” with William Patrick summarized a career’s worth of research on all the negative things that happen to people who can’t establish connections with others, said recently that as long as it’s not motivated by fear or social anxiety, then spending time alone can be a crucially nourishing component of life. And it can have some counterintuitive effects: Adam Waytz in the Harvard psychology department, one of Cacioppo’s former students, recently completed a study indicating that people who are socially connected with others can have a hard time identifying with people who are more distant from them. Spending a certain amount of time alone, the study suggests, can make us less closed off from others and more capable of empathy — in other words, better social animals.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” Cacioppo said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh is the staff writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2011/03/06/the_power_of_lonely/

Beyond Understanding

I ought to have known better than to have lunch with a psychologist.

“Take you, for example,” he said. “You are definitely autistic.”

“What!?”

“I rest my case,” he shot back. “Q.E.D.”

His ironic point seemed to be that if I didn’t instantly grasp his point — which clearly I didn’t — then, at some level, I was exhibiting autistic tendencies.

Autism is often the subject of contentious and emotional debate, certainly because it manifests in the most vulnerable of humans — children. It is also hard to pin down; as a “spectrum disorder” it can take extreme and disheartening forms and exact a devastating toll on families. It is the “milder” or “high-functioning” form, with its two main agreed-upon symptoms of sub-optimal social and communication skills, that I confine myself to here.

Simon Baron-Cohen, for example, in his book “Mindblindness,” argues that the whole raison d’être of consciousness is to be able to read other people’s minds; autism, in this context, can be defined as an inability to “get” other people, hence “mindblind.” 

A less recent but possibly related conversation took place during the viva voce exam Ludwig Wittgenstein was given by Bertrand Russell and G. E. Moore in Cambridge in 1929. Wittgenstein was formally presenting his “Tractatus Logico-Philosophicus,” an already well-known work he had written in 1921, as his doctoral thesis. Russell and Moore were respectfully suggesting that they didn’t quite understand proposition 5.4541 when they were abruptly cut off by the irritable Wittgenstein. “I don’t expect you to understand!” (I am relying on local legend here; Ray Monk’s biography of Wittgenstein has him, in a more clubbable way, slapping them on the back and bringing proceedings cheerfully to a close with the words, “Don’t worry, I know you’ll never understand it.”)

I have always thought of Wittgenstein’s line as (a) admittedly, a little tetchy (or in the Monk version condescending) but (b) expressing enviable self-confidence and (c) impressively devoid of deference (I’ve even tried to emulate it once or twice, but it never comes out quite right). But if autism can be defined, at one level, by a lack of understanding (verbal or otherwise), it is at least plausible that Wittgenstein is making (or at least implying) a broadly philosophical proposition here, rather than commenting, acerbically, on the limitations of these particular interlocutors. He could be read as saying:

Thank you, gentlemen, for raising the issue of understanding here. The fact is, I don’t expect people in general to understand what I have written. And it is not just because I have written something, in places, particularly cryptic and elliptical and therefore hard to understand, or even because it is largely a meta-discourse and therefore senseless, but rather because, in my view, it is not given to us to achieve full understanding of what another person says. Therefore I don’t expect you to understand this problem of misunderstanding either.

If Wittgenstein was making a statement along these lines, then it would provide an illuminating perspective in which to read the “Tractatus.” The persistent theme within it of “propositions which say nothing,” which we tend to package up under the heading of “the mystical,” would have to be rethought. Rather than clinging to a clear-cut divide between all these propositions — over here, the well-formed and intelligible (scientific) and over there, the hazy, dubious and mystical (aesthetic or ethical) — we might have to concede that, given the way humans interact with one another, there is always a potential mystery concealed within the most elementary statement. And it is harder than you think it is going to be to eliminate, entirely, the residue of obscurity, the possibility of misunderstanding lurking at the core of every sentence. Sometimes Wittgenstein thinks he has solved the problem, at others not (“The solution of the problem of life is seen in the vanishing of the problem,” he writes in “Tractatus.”) What do we make of those dense, elegiac and perhaps incomprehensible final lines, sometimes translated as “Whereof one cannot speak, thereof one must remain silent”? Positioned as it is right at the end of the book (like “the rest is silence” at the end of “Hamlet”), proposition number 7 is apt to be associated with death or the afterlife. But translating it yet again into the sort of terms a psychologist would readily grasp, perhaps Wittgenstein is also hinting: “I am autistic” or “I am mindblind.” Or, to put it another way, autism is not some exotic anomaly but rather a constant.

I am probably misreading the text here — if I have understood it correctly, I must be misreading it. But Wittgenstein has frequently been categorized, in recent retrospective diagnoses, as autistic. Sula Wolff, for example, in “Loners: The Life Path of Unusual Children” (1995), analyzes Wittgenstein as a classic case of Asperger’s syndrome, so-called “high-functioning autism” — that is, being articulate, numerate and not visibly dysfunctional, but nevertheless awkward and unskilled in social intercourse. He is apt to get hold of the wrong end of the stick (not to mention the poker that he once waved aggressively at Karl Popper). An illustrative true story: he is dying of cancer; it is his birthday; his cheerful landlady comes in and wishes him “Many happy returns, Mr. Wittgenstein”; he snaps back, “There will be no returns.”

Wittgenstein, not unlike someone with Asperger’s, admits to having difficulty working out what people are really going on about. In “Culture and Value” (1914) he writes: “We tend to take the speech of a Chinese for inarticulate gurgling. Someone who understands Chinese will recognize language in what he hears. Similarly I often cannot recognize the humanity of another human being.” Which might also go some way towards explaining his remark (in the later “Philosophical Investigations”) that even if a lion could speak English, we would still be unable to understand him.

Wittgenstein is not alone among philosophers in being included in this category of mindblindness. Russell, for one, has also been labeled autistic. Taking this into account, it is conceivable that Wittgenstein is saying to Russell, when he tells him that he doesn’t expect him to understand, “You are autistic!” Or (assuming a handy intellectual time machine), “If I am to believe Wolff and others, we are autistic. Perhaps all philosophers are. It is why we end up studying philosophy.”

I don’t want to maintain that all philosophers are autistic in this sense. Perhaps not even that “You don’t have to be autistic, but it helps.” And yet there are certainly episodes and sentences associated with philosophers quite distinct from Wittgenstein and Russell that might lead us to think in that way.

Consider, for example, Sartre’s classic one-liner, “Hell is other people.” Wouldn’t autism, with its inherent poverty of affective contact, go some way towards accounting for that? The fear of faces and the “gaze of the other” that Sartre analyzes are classic symptoms. Sartre recognized this in himself and in others as well: he explicitly describes Flaubert as “autistic” in his great, sprawling study of the writer, “The Family Idiot,” and also asserts that “Flaubert c’est moi.” Sartre’s theory that Flaubert starts off autistic and that everything he writes afterwards — trying to work out what is in Madame Bovary’s mind, for example — is a form of compensation or rectification could easily apply to his own work.

One implication of what a psychologist might say about autism goes something like this: you, a philosopher, are mindblind and liable to take up philosophy precisely because you don’t “get” what other people are saying to you. You, like Wittgenstein, have a habit of hearing and seeing propositions, but feeling that they say nothing (as if they were rendered in Chinese). In other words, philosophy would be a tendency to interpret what people say as a puzzle of some kind, a machine that may or may not work.

I think this helps to explain Wittgenstein’s otherwise slightly mysterious advice, to the effect that if you want to be a good philosopher, you should become a car mechanic (a job Wittgenstein actually held during part of the Great War). It was not just some notion of getting away from the study of previous philosophers, but also the idea that working on machines would be a good way of thinking about language. Wittgenstein, we know, came up with his preliminary model of language while studying court reports of a car accident in Paris during the war. The roots of picture theory (the model used in court to portray the event) and ostensive definition (all those little arrows and labels) are all here. But at the core of the episode are two machines and a collision. Perhaps language can be seen as a car, a vehicle of some kind, designed to get you from A to B, carrying a certain amount of information, but apt to get stuck in jams or break down or crash; and which will therefore need fixing. Wittgenstein and the art of car maintenance. This car mechanic conception of language is just the sort of thing high-functioning autistic types would come up with, my psychologist friend might say, because they understand “systems” better than they understand people. They are “(hyper-)systemizers” not “empathizers.” The point I am not exactly “driving” at but rather skidding into, and cannot seem to avoid, is this: indisputably, most car mechanics are men. 

My psychologist friend assured me that I was not alone. “Men tend to be autistic on average. More so than women.” The accepted male-to-female ratio for autism is roughly 4-to-1; for Asperger’s the ratio jumps even higher, by some accounts 10-to-1 (other statistics give higher or lower figures but retain the male prevalence). Asperger himself wrote that the autistic mind is “an extreme variant of male intelligence”; Baron-Cohen argues that “the extreme male brain” (not exclusive to men) is the product of an overdose of fetal testosterone.

If Wittgenstein in his conversation with Russell is suggesting that philosophers are typically autistic in a broad sense, this view might explain (in part) the preponderance of male philosophers. I went back over several sources to get an idea of the philosophical ratio: Russell’s “History of Western Philosophy” comes out at about 100-to-1, Critchley’s “Book of Dead Philosophers” at roughly 30-to-1, while, in the realm of the living, among the contributors to The Stone, for example, the ratio narrows to more like 4-to-1.

A psychologist might say something like: “Q.E.D., philosophy is all about systemizing (therefore male) and cold, hard logic, whereas the empathizers (largely female) seek out more humane, less mechanistic havens.” I would like to offer a slightly different take on the evidence. Plato took the view (in Book V of “The Republic”) that women were just as philosophical as men and would qualify to become the philosopher “guardians” of the ideal Greek state of the future (in return they would have to learn to run around naked at the gym). It seems likely that women were among the pre-Socratic archi-philosophers. But they were largely oracular. They tended to speak in riddles. The point of philosophy from Aristotle onwards was to resolve and abolish the riddle.

But perhaps the riddle is making a comeback. Understanding can be coercive and suffocating. Do I really have to be quite so “understanding”? Isn’t that the same as being masochistically subservient? And isn’t it just another aspect of your hegemony to claim to understand me quite so well? Simone de Beauvoir was exercising her right to what I would like to call autismo when she wrote that “one is not born a woman but becomes one.” Similarly, when she emblazons her first novel, “She Came To Stay,” with an epigraph derived from Hegel — “every consciousness seeks the death of the other” — and her philosophical avatar takes it upon herself to bump off the provincial young woman she has invited to stay in Paris: I refuse to understand, to be a mind-reader. Conversely, when Luce Irigaray, the feminist theorist and philosopher, speaks — again paradoxically — of “this sex which is not one,” she is asking us to think twice about our premature understanding of gender — what Wittgenstein might call a case of “bewitchment.”

The study of our psychopathology, via cognitive neuroscience, suggests a hypothetical history. Why does language arise? It arises because of the scope for misunderstanding. Body language, gestures, looks, winks, are not quite enough. I am not a mind-reader. I don’t understand. We need noises and written signs, speech-acts, the Word, logos. If you tell me what you want, I will tell you what I want. Language is a system that arises to compensate for an empathy deficit. But with or without language, I can still exhibit traits of autism. I can misread the signs. Perhaps it would be more exact to say that autism only arises, is only identified, at the same time as there is an expectation of understanding. But if autism is a problem, from certain points of view, autismo is also a solution: it is an assertion that understanding itself can be overvalued.

It is a point that Wittgenstein makes memorably in the introduction to the “Tractatus,” in which he writes:

I therefore believe myself to have found, on all essential points, the final solution of the problems [of philosophy]. And if I am not mistaken in this belief … it shows how little is achieved when these problems are solved.

Which is why he also suggests, at the end of the book, that anyone who has climbed up his philosophical ladder should throw it away.

Andy Martin is currently completing “Philosophy Fight Club: Sartre vs. Camus,” to be published by Simon and Schuster. He was a 2009-10 fellow at the Cullman Center for Scholars and Writers in New York, and teaches at Cambridge University.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/21/beyond-understanding

Should This Be the Last Generation?

Have you ever thought about whether to have a child? If so, what factors entered into your decision? Was it whether having children would be good for you, your partner and others close to the possible child, such as children you may already have, or perhaps your parents? For most people contemplating reproduction, those are the dominant questions. Some may also think about the desirability of adding to the strain that the nearly seven billion people already here are putting on our planet’s environment. But very few ask whether coming into existence is a good thing for the child itself. Most of those who consider that question probably do so because they have some reason to fear that the child’s life would be especially difficult — for example, if they have a family history of a devastating illness, physical or mental, that cannot yet be detected prenatally.

All this suggests that we think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don’t usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as “the asymmetry” and it is not easy to justify. But rather than go into the explanations usually proffered — and why they fail — I want to raise a related problem. How good does life have to be, to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic, in the absence of specific knowledge that the child will have a severe genetic disease or other problem? 

The 19th-century German philosopher Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction. New desires then lead us on to further futile struggle and the cycle repeats itself.

Schopenhauer’s pessimism has had few defenders over the past two centuries, but one has recently emerged, in the South African philosopher David Benatar, author of a fine book with an arresting title: “Better Never to Have Been: The Harm of Coming into Existence.” One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her. Few of us would think it right to inflict severe suffering on an innocent child, even if that were the only way in which we could bring many other children into the world. Yet everyone will suffer to some extent, and if our species continues to reproduce, we can be sure that some future children will suffer severely. Hence continued reproduction will harm some children severely, and benefit none.

Benatar also argues that human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states. If we think that this is a tolerable state of affairs it is because we are, in Benatar’s view, victims of the illusion of pollyannaism. This illusion may have evolved because it helped our ancestors survive, but it is an illusion nonetheless. If we could see our lives objectively, we would see that they are not something we should inflict on anyone.

Here is a thought experiment to test our attitudes to this view. Most thoughtful people are extremely concerned about climate change. Some stop eating meat, or flying abroad on vacation, in order to reduce their carbon footprint. But the people who will be most severely harmed by climate change have not yet been conceived. If there were to be no future generations, there would be much less for us to feel guilty about.

So why don’t we make ourselves the last generation on earth? If we would all agree to have ourselves sterilized then no sacrifices would be required — we could party our way into extinction!

Of course, it would be impossible to get agreement on universal sterilization, but just imagine that we could. Then is there anything wrong with this scenario? Even if we take a less pessimistic view of human existence than Benatar, we could still defend it, because it makes us better off — for one thing, we can get rid of all that guilt about what we are doing to future generations — and it doesn’t make anyone worse off, because there won’t be anyone else to be worse off.

Is a world with people in it better than one without? Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

What do you think?

Readers are invited to respond to the following questions in the comment section below:

If a child is likely to have a life full of pain and suffering, is that a reason against bringing the child into existence?

If a child is likely to have a happy, healthy life, is that a reason for bringing the child into existence?

Is life worth living, for most people in developed nations today?

Is a world with people in it better than a world with no sentient beings at all?

Would it be wrong for us all to agree not to have children, so that we would be the last generation on Earth?

 

Peter Singer is Professor of Bioethics at Princeton University and Laureate Professor at the University of Melbourne. His most recent book is “The Life You Can Save.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/06/06/should-this-be-the-last-generation/

Friendship in an Age of Economics

When I was 17 years old, I had the honor of being the youngest person in the history of New York Hospital to undergo surgery for a herniated disc. This was at a time in which operations like this kept people in the hospital for over a week. The day after my surgery, I awoke to find a friend of mine sitting in a chair across from my bed. I don’t remember much about his visit. I am sure I was too sedated to say much. But I will not forget that he visited me on that day, and sat there for I know not how long, while my humanity was in the care of a morphine drip. 

The official discourses of our relations with one another do not have much to say about the afternoon my friend spent with me. Our age, what we might call the age of economics, is in thrall to two types of relationships which reflect the lives we are encouraged to lead. There are consumer relationships, those that we participate in for the pleasure they bring us. And there are entrepreneurial relationships, those that we invest in hoping they will bring us some return. In a time in which the discourse of economics seeks to hold us in its grip, this should come as no surprise.

The encouragement toward relationships of consumption is nowhere more prominently on display than in reality television. Jon and Kate, the cast of “Real World,” the Kardashians, and their kin across the spectrum conduct their lives for our entertainment. It is available to us in turn to respond in a minor key by displaying our own relationships on YouTube. Or, barring that, we can collect friends like shoes or baseball cards on Facebook.

Entrepreneurial relationships have, in some sense, always been with us. Using people for one’s ends is not a novel practice. It has gained momentum, however, as the reduction of governmental support has diminished social solidarity and the rise of finance capitalism has stressed investment over production. The economic fruits of the latter have lately been with us, but the interpersonal ones, while more persistent, remain veiled. Where nothing is produced except personal gain, relationships come loose from their social moorings.

Aristotle thought that there were three types of friendship: those of pleasure, those of usefulness, and true friendship. In friendships of pleasure, “it is not for their character that men love ready-witted people, but because they find them pleasant.” In friendships of usefulness, “those who love each other for their utility do not love each other for themselves but in virtue of some good which they get from each other.” For him, the first is characteristic of the young, who are focused on momentary enjoyment, while the second is often the province of the old, who need assistance to cope with their frailty. What the rise of recent public rhetoric and practice has accomplished is to cast the first two in economic terms while forgetting about the third.

In our lives, however, few of us have entirely forgotten about the third — true friendship. We may not define it as Aristotle did — friendship among the already virtuous — but we live it in our own way nonetheless. Our close friendships stand as a challenge to the tenor of our times.

Conversely, our times challenge those friendships. This is why we must reflect on friendship, so that it doesn’t slip away from us under the pressure of a dominant economic discourse. We are all, and always, creatures of our time. In the case of friendship, we must push back against that time if we are to sustain what, for many of us, are among the most important elements of our lives. It is those elements that allow us to sit by the bedside of a friend: not because we know it is worth it, but because the question of worth does not even arise.

There is much that might be said about friendships. They allow us to see ourselves from the perspective of another. They open up new interests or deepen current ones. They offer us support during difficult periods in our lives. The aspect of friendship that I would like to focus on is its non-economic character. Although we benefit from our close friendships, these friendships are not a matter of calculable gain and loss. While we draw pleasure from them, they are not a matter solely of consuming pleasure. And while the time we spend with our friends and the favors we do for them are often reciprocated in an informal way, we do not spend that time or offer those favors in view of the reciprocation that might ensue.

Friendships follow a rhythm that is distinct from that of either consumer or entrepreneurial relationships. This is at once their deepest and most fragile characteristic. Consumer pleasures are transient. They engulf us for a short period and then they fade, like a drug. That is why they often need to be renewed periodically. Entrepreneurship, when successful, leads to the victory of personal gain. We cultivate a colleague in the field or a contact outside of it in the hope that it will advance our career or enhance our status. When it does, we feel a sense of personal success. In both cases, there is the enjoyment of what comes to us through the medium of other human beings.

Friendships worthy of the name are different. Their rhythm lies not in what they bring to us, but rather in what we immerse ourselves in. To be a friend is to step into the stream of another’s life. It is, while not neglecting my own life, to take pleasure in another’s pleasure, and to share their pain as partly my own. The borders of my life, while not entirely erased, become less clear than they might be. Rather than the rhythm of pleasure followed by emptiness, or that of investment and then profit, friendships follow a rhythm that is at once subtler and more persistent. This rhythm is subtler because it often (although not always) lacks the mark of a consumed pleasure or a successful investment. But even so, it remains there, part of the ground of our lives that lies both within us and without.

To be this ground, friendships have a relation to time that is foreign to an economic orientation. Consumer relationships are focused on the momentary present. It is what brings immediate pleasure that matters. Entrepreneurial relationships have more to do with the future. How I act toward others is determined by what they might do for me down the road. Friendships, although lived in the present and assumed to continue into the future, also have a deeper tie to the past than either of these. Past time is sedimented in a friendship. It accretes over the hours and days friends spend together, forming the foundation upon which the character of a relationship is built. This sedimentation need not be a happy one. Shared experience, not just common amusement or advancement, is the ground of friendship.

Of course, to have friendships like this, one must be prepared to take up the past as a ground for friendship. This ground does not come to us, ready-made. We must make it our own. And this, perhaps, is the contemporary lesson we can draw from Aristotle’s view that true friendship requires virtuous partners, that “perfect friendship is the friendship of men who are good.” If we are to have friends, then we must be willing to approach some among our relationships as offering an invitation to build something outside the scope of our own desires. We must be willing to forgo pleasure or usefulness for something that emerges not within but between one of us and another.

We might say of friendships that they are a matter not of diversion or of return but of meaning. They render us vulnerable, and in doing so they add dimensions of significance to our lives that can only arise from being, in each case, friends with this or that particular individual, a party to this or that particular life.

It is precisely this non-economic character that is threatened in a society in which each of us is thrown upon his or her resources and offered only the bywords of ownership, shopping, competition, and growth. It is threatened when we are encouraged to look upon those around us as the stuff of our current enjoyment or our future advantage. It is threatened when we are led to believe that friendships without a recognizable gain are, in the economic sense, irrational. Friendships are not without why, perhaps, but they are certainly without that particular why.

In turn, however, it is friendship that allows us to see that there is more than what the prevalent neoliberal discourse places before us as our possibilities. In a world often ruled by the dollar and what it can buy, friendship, like love, opens other vistas. The critic John Berger once said of one of his friendships, “We were not somewhere between success and failure; we were elsewhere.” To be able to sit by the bed of another, watching him sleep, waiting for nothing else, is to understand where else we might be.

Todd May is a professor of philosophy at Clemson University. He is the author of 10 books, including “The Philosophy of Foucault” and “Death,” and is at work on a book about friendship in the contemporary period.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/04/friendship-in-an-age-of-economics/

Two Friendships: A Response

Earlier columns in The Stone have raised the question of what philosophy is. Surely among its tasks is to think about matters that are at once urgent, personal and of general significance. When one is lucky, one finds interlocutors who are willing to share that thought, add to it in one way or another, or suggest a different direction. In the comments from readers of my earlier post, “Friendship in an Age of Economics,” I have been fortunate.

I would like to linger over two friendships described in the comments. One, offered by Echo from Santa Cruz, describes a life-long friendship with someone from whom she was physically separated for many years, and who eventually died of cancer. (I am inferring from the context of her comment that Echo, as in Greek mythology, is a woman, though she never says so explicitly.) The other is from E. Kelley Harris in Slovakia, who recounts the example of an intimidating seaman named Frank with whom, over the course of intense theological dispute, a moment of intimacy arose in an unexpected way.

The friendship described by Echo is one that many of us will find examples of in our lives. I am still close friends with the person who sat by my bedside 38 years ago, even though we live far from each other. Regarding her friendship, Echo comments that “There was no work to that friendship. Our instincts told us what to do, in the same way as a new mother takes her child and holds it to her breast.” I am sure Echo would agree with me that a friendship without work is not something that is given; it is an achievement. Friendships take time. They must be cultivated, sometimes when one is in the mood, sometimes when one is not. That is part of their non-economic character. What Echo describes in personal language is an achieved friendship, one that likely started with a spark, but has been tended over the years and allowed the two friends to continue sharing with each other up to the end of one of their lives.

Several comments insisted that one would never become friends with someone unless there was something to be gained. This is certainly true. Close friendships are not simply exercises in altruism. Friendships that come to resemble relationships between donors and recipients begin to fray. Eventually they come to look like something other than friendships. The non-economic character of friendship does not lie in its altruism, but in its lack of accounting. We are friends not solely because you amuse me or assist me, but more deeply because we have rooted ourselves together in a soil we have both agreed to cultivate. Echo has provided an example of the fruit of that cultivation.

What E. Kelley depicts is a more unlikely friendship between someone who can best be described as a bully and another person, the author, who found himself in the unenviable position of bunk mate. Over time, passionate theological conversation developed between them, leading to a moment where the author put himself in a vulnerable position before the bully, who declined to play his expected role. As with Echo’s example, there is the accretion of shared time that is necessary for that moment to occur. It would hardly have happened the first night Frank stepped from the brig. But there is something else as well. There is the development of aspects of oneself that otherwise might have gone neglected or even unrecognized. E. Kelley displayed a kind of courage that seemed even to surprise him, and Frank lent himself to passionate discussion without having to overpower his conversational adversary. This is what I meant when I wrote in my column that in close friendships we step into the stream of another’s life.

One might say that there is, among seamen — as among military personnel and those facing collective harm generally — a motivation for a common bond that helped drive the two together. BlueGhost from Iowa says this explicitly in his discussion of his son’s decision to join the military. If so, this would be another example of the idea in the column that we are always creatures of our time and our circumstances. There were some who worried that in criticizing the consumer and entrepreneurial models of friendship, I might be suggesting that there was a previous period in which friendships were better or more pure. That would be, as the comments noted, naïve. Each age has its context, and people in that age — or in one specific aspect of it — cannot escape engaging with the themes of that context, its motifs and parameters. Consumerism and entrepreneurship are dominant themes of our age; if my column is right, they are a threat to our friendships. Other ages have had different themes and their friendships different dangers.

There is, of course, much more to be said about how consumerism and entrepreneurship endanger our friendships. I neglected to do so in the column because my goal in that short space was not so much critique as a description or a reminder of how we often still participate in relationships whose value is not the subject of most of our public discourse about them. To trace the development of consumerism and entrepreneurship in their particular character over the past 30 or 40 years, as well as their effects on our relationships, would require a much longer discussion as well as an engagement with many contemporary theorists and social scientists — basically, a book. What I counted on in the column was that there would be a resonance among readers for what was being suggested. If the comments are any indication, I was fortunate there as well.

A last note. Several comments suggested that there may be other ways to characterize friendship than by appeal to the Aristotelean distinctions I invoked. This is undoubtedly true. It is also true that there is a certain oversimplification to any categorization of friendship. There is more to Echo’s and E. Kelley’s friendships than the themes I have isolated here. What Aristotle offers us — and this over two millennia after his death — are tools that help us think about ourselves. It is not that there are three and only three types of friendships. Rather, in thinking about Aristotle’s categories of friendship in the context of our time we can begin to see ourselves and our relationships more clearly than we might otherwise. This is also true of many other philosophers, a number of whose names were invoked in the comments. It is what philosophers who stand the test of time offer us: not rigid categories to which we must conform, but instead ways of making sense of ourselves and our lives, of considering who we are, where we are, and what we might become.

Todd May is a professor of philosophy at Clemson University. He is the author of 10 books, including “The Philosophy of Foucault” and “Death,” and is at work on a book about friendship in the contemporary period.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/07/13/two-friendships-a-response/

Beyond the Veil: A Response

I’m extraordinarily grateful to the many people who posted comments on my piece, “Veiled Threats?”  I note that many have come from educated and active Muslim women (in countries ranging from the U. S. to India), who have expressed a sense of “relief” at having their convictions and voices taken seriously.

I’ll begin my reply with a story.  The day my article came out, I went to a White Sox game (the one in which my dear team took over first place!).  I was there with two friends from Texas and my son-in-law, who was born in Germany and now has a green card.  So, in Chicago terms, we were already a heterogeneous lot.  Behind me was a suburban dad with shoulder-length gray hair (an educated, apparently affluent ex-hippie, like the “Bobos” of David Brooks’s book), who took pleasure in explaining the finer points of the game (like the suicide squeeze) to his daughter and two other preteen girls in fashionable sundresses.  On our right was a sedate African-American couple, the woman holding a bag that marked her as working for the “U. S. Census Religion subcommittee” of her suburban county.  In front of us were three Orthodox Jewish boys, ages around 6, 10, and 18, their tzizit (ritual fringes) showing underneath their Sox shirts, and cleverly double-hatted so that they could doff their Sox caps during the national anthem, while still retaining their kipot.  Although this meant that they had not really bared their heads for the Anthem, not one person gave them an ugly stare or said, “Take off your hat!” — or, even worse, “Here we take off our hats.”  Indeed, nobody apart from me seemed to notice them at all.

I don’t always feel patriotic, but I did then.  I would not encourage a child or relative of mine to wear tzizit or, outside of temple, a kipoh.  I’m a Reform Jew, and I view these things as totemism and fetishism.  But I would not offend strangers by pointing that out, or acquaintances unless they were friends who had asked my advice.  And that’s the way I feel about most of these things: it’s not my business.  Luckily, a long-dominant tradition in American culture and law agrees with me.  From the time when Quakers and Mennonites refused to doff their hats, and when both Mennonites and Amish adopted “pre-modern” dress, we Americans are pretty comfortable with weird clothes, and used to the idea that people’s conscientious observances frequently require them to dress in ways that seem strange or unpleasant to the majority.  To the many people who wrote about how immigrants have to learn to fit in, I ask: what would you have liked to see at that ball game?  The scene I witnessed, or three Jewish boys being ejected from the park because they allegedly failed to respect the flag?  (And note that, like most minorities, they did show respect in the way they felt they could, without violating their conscience.)

Before addressing a series of points raised in the comments, two prefatory notes:

1.  Throughout, I am speaking only about liberal democracies, not about autocratic regimes.  It’s only in such democracies that liberty of conscience is a reality anyway, so I think that examples of autocracy in Saudi Arabia are beside the point.  We’re talking about what limits liberal democracies may reasonably impose on freedom of conscience and expression while remaining consistent with their basic principles.

2.  To those who described me as in an “ivory tower,” let me point out that I have spent many years working in international development organizations and that I have particularly close ties with India, home to the second-largest Muslim population in the world (the largest being in Indonesia).  I’ve written a book about interreligious violence in India (“The Clash Within: Democracy, Religious Violence, and India’s Future,” 2007), which turns out to be largely a story of Hindu neo-fascist organizations fomenting violence against Muslims.  So in fact I am not in the ivory tower so far as these issues are concerned, and I’ve spent many years working with organizations that foster education and other opportunities for poor women.

All right, now to my argument.   Remember that my contention was that pursuit of conscientious commitments is a very important human interest, closely linked to human dignity, which can rightly be limited only by a “compelling state interest.”  I then went on to argue that none of the interests standardly brought forward against the burqa is compelling, and, moreover, that any ban on the burqa in response to these reasons would require banning other common practices that nobody objects to because of their familiarity.   As Annie rightly summarizes (126): “Hypocrisy isn’t democratic.”

1. The position of the Catholic Church. Stephen O’Brien points out helpfully that the “Catechism of the Catholic Church,” in sections dealing with religious liberty and conscience (sections 1782 and 2106) takes a position that has been used by the Catholic Church in France to oppose a ban on the burqa.   O’Brien and I once acted in a play together, during the time that both of us were undergraduates at N.Y.U., and in fact we had an intense argument about propriety in dress, which turned into a lasting collegial relationship.  So I thank him for his intervention and his urging my  study of the Catechism!

2. The special case of France.  I did not discuss France in my piece, but since some readers did, let me comment.  The French policy of laïcité does indeed lead to restrictions on a wide range of religious manifestations, all in the name of a total separation of church and state.  But if one looks closely, the restrictions are unequal and discriminatory.  The school dress code forbids the Muslim headscarf and the Jewish yarmulke, along with “large” Christian crosses.  But this is a totally unequal burden, because the first two items of clothing are religiously obligatory for observant members of those religions, and the third is not: Christians are under no religious obligation to wear any cross, much less a “large” one.   So there is discrimination inherent in the French system.

Would French secularism be acceptable if practiced in an even-handed way?  According to U.S. constitutional law, government may not favor religion over non-religion, or non-religion over religion.  For example, it was unconstitutional for the University of Virginia to announce that it would use student fees to fund all other student organizations (political, environmental, and so forth) but not the religious clubs (Rosenberger v. Rector and Visitors of the University of Virginia, 515 U. S. 819 (1995)).  I must say that I prefer this balanced policy to French laïcité; I think it is fairer to religious people.  Separation is not total, even in France: thus, a fire in a church would still be put out by the public fire department; churches still get the use of the public water supply and the public sewer system.  Still, the amount and type of separation that the French system mandates, while understandable historically, looks unfair in the light of the principles I have defended.

3. Terrorism and safety.  A number of the commenters think that the burqa creates unique risks of various sorts, particularly in the context of the legitimate interest in preventing acts of terrorism.  All I can say is that if I were a terrorist in the U. S. or Europe, and if I were not stupid, the last thing I would wear would be a burqa, since that way of dressing attracts suspicious attention.  Criminals usually want not to attract suspicious attention; if they are at all intelligent, they succeed.  I think I’d dress like Martha Nussbaum in the winter: floor length Eddie Bauer down coat, hat down over the eyebrows, extra hood for insulation, and a bulky Indian shawl around nose and mouth.  Nonetheless, I have never been asked to remove these clothes, in a department store, a public building, or even a bank.  Bank workers do look at my ID documents, though, and I’ve already said that at this stage in our technological development I think it is a reasonable request that ID documents contain a full face photo.  (Moreover, I’ve been informed by my correspondence that most contemporary Islamic scholars agree: a woman can and must remove her niqab for visual identification if so requested.)   In the summer, again if I were an intelligent sort of terrorist, I would wear a big floppy hat and a long  loose caftan, and I think I’d carry a capacious Louis Vuitton bag, the sort that signals conspicuous consumption.  That is what a smart terrorist would do, and the smart ones are the ones to worry about.

So, what to do about the threat that all bulky and non-revealing clothing creates?  Airline security does a lot with metal detectors, body imaging, pat-downs, etc.  (One very nice system is at work in India, where all passengers get a full manual pat-down, but in a curtained booth by a member of the same sex who is clearly trained to be courteous and respectful.)  The White Sox stadium searches all bags (though more to check for beer than for explosives, thus protecting the interests of in-stadium vendors).  Private stores or other organizations who feel that bulky clothing is a threat (whether of shoplifting or terrorism or both) could institute a nondiscriminatory rule banning, e.g., floor-length coats; they could even have a body scanner at the door.  But they don’t, presumably preferring customer friendliness to the extra margin of safety.  What I want to establish, however, is the invidious discrimination inherent in the belief that the burqa poses a unique security risk.  Reasonable security policies, applied to similar cases similarly, are perfectly fine.

4. Depersonalization and respect for persons. Several readers made the comment that the burqa is objectionable because it portrays women as non-persons.    Is this plausible?  Isn’t our poetic tradition full of the trope that eyes are the windows of the soul?  And I think this is just right: contact with another person, as individual to individual, is made primarily through eyes, not nose or mouth.  Once during a construction project that involved a lot of dust in my office, I (who am prone to allergies and vain about my singing voice and the state of my hair) had to cover everything but my eyes while talking to students for a longish number of weeks.  At first they found it quite weird, but soon they were asking me how they could get a mask and filter scarf like the ones I was using.  My personality did not feel stifled, nor did they feel that they could not access my individuality.

More generally, I think one should listen to what women who wear the burqa say they think it means before opining.  Even if one feels convinced that depersonalization is what’s going on, that might be a reason for not liking that mode of dress, but why would it be a reason for banning it?  If the burqa were uniquely depersonalizing, we could at least debate this point: but, as I pointed out, a lot of revealing clothing is plausibly seen as a way of marketing women as sex objects, and that is itself a form of depersonalization.  The feminist term is “objectification,” and it has long been plausibly maintained that a lot of advertising and other aspects of culture objectify women, treat them as sex objects rather than as full persons.  The models in porn (whether films or photos) are usually not conspicuous for their rich individuality.  (Indeed, in the light of the tremendous cultural pressure to market oneself as a sex object, one might feel that wearing a lot of covering is a way of resisting that demand or insisting on intimacy.)  In any case, what business is it of government to intervene, if there is no clear public interest in burdening liberty of conscience in this way?

At this point, I want to address the point about respect raised by Amy (115).  I agree with her that we needn’t approve of the forms of dress that others choose, or of any other religious observance.  We may judge them ridiculous, or revolting, or even hateful.  I do think that one should try to understand first before coming to such a judgment, and I think that in most cases one should not give one’s opinion about the way a person is dressed unless someone has asked for it.  But of course any religious ceremony that expresses hatred for another group (a racist religion, say) is deeply objectionable, and one can certainly protest that, as usually happens when the KKK puts on a show somewhere these days.

I do not think that a burqa is a symbol of hatred, and thus not something that it would be reasonable to find deeply hateful.  It is more like the boys and their tzizit, something I may feel out of tune with, but which it is probably nosy to denounce unless a friend has asked my opinion.  Still, if Amy wants to say that it is deeply objectionable, and that she does not respect it, that does not in any way disagree with the principles I expressed in my article.   Her intervention prompts me to make a necessary clarification.   I am not saying that all religious activities ought to be respected.  Equal respect, in my view, is rightly directed at the person, and the core of human dignity in the person, which I hope Amy will agree all these people still have.  Respecting their equal human dignity and equal human rights means giving them space to carry out their conscientious observances, even if we think that those are silly or even disgusting.  Their human dignity gives them the right to be wrong, we might say.  One religion that makes me cringe is an evangelical sect that requires its members to handle poisonous snakes (the subject of long litigation).  I find that one bizarre, I would never go near it, and I tend to find the actions involved disgusting.  But that does not mean that I don’t respect the people as bearers of equal human rights and human dignity.  Because they have equal human rights and human dignity, they get to carry on their religion unless there is some compelling government interest against it.  The long litigation concerned just that question.  Since the religion kept non-consenting adults and all children far away from the snakes, it was not an easy question.  In the end, a cautious government decided to intervene (Swann v. Pack, 527 S. W. 2d 99 (Tenn. 1975)).  But that did not mean that they did not show equal respect for the snake-handlers as human beings and bearers of human dignity and human rights.

What respect for persons requires, then, is that people have equal space to exercise their conscientious commitments, not that others like or even respect what they do in that space.  Furthermore, equal respect for persons is compatible, as I said, with limiting religious freedom in the case of a “compelling state interest.”  In the snake-handler case, the interest was in public safety.  Another government intervention that was right, in my view, was the judgment that Bob Jones University should lose its tax exemption for its ban on interracial dating (Bob Jones v. U. S., 461 U. S. 574 (1983)).  Here the Supreme Court agreed that the ban was part of that sect’s religion, and thus that the loss of tax-exempt status was a “substantial burden” on the exercise of that religion, but they said that society has a compelling interest in not cooperating with racism.  Never has the government taken similar steps against the many Roman Catholic universities that restrict their presidencies to a priest, hence a male; but in my view they should all lose their tax exemptions for this reason.  (The compelling interest standard is difficult to delineate, and courts can get it wrong, which is one reason why Justice Scalia prefers the Lockean position.)

Why is the burqa different from the case of Bob Jones University?  First, of course, government was not telling Bob Jones that they could not continue with their policy; it was just refusing to give them a huge financial reward and thereby, in effect, to cooperate with the policy.  A second difference is that Bob Jones enforced a total ban on interracial dating, just as the major Catholic universities (Georgetown excepted, which now has a lay president) have imposed a total ban on female candidates for the job of president.  The burqa, by contrast, is a personal choice, so it’s more like the case of some student at Bob Jones (or any other university) who decides to date only white females or males because of familial and parental pressure.  Amy and I would probably agree in disliking such behavior.  But it does not seem like a case for government intervention. Which brings me to my next point.

5. Social pressure and government intervention. When is social and familial pressure bad enough to justify state intervention with a conscientious observance?  I have already said that all forms of physical coercion are inadmissible and should be vigorously interfered with, whether they concern children or adults.  I would even favor no-drop laws in cases of domestic violence, since we know that a woman’s withdrawal of a complaint against a violent spouse or partner is often coerced.  My judgment about Turkey in the past — that the ban on veiling was justified, in those days, by a compelling state interest — derived from the belief that women were at risk of physical violence if they went unveiled, unless the government intervened to make the veil illegal for all.  Today in Europe the situation is utterly different, and no physical violence will greet the woman who wears even scanty clothing — apart from the always present danger of rape, which should be dealt with by convicting violent men, not by telling women they can’t wear what they want to wear.  (And this too the law has now recognized: thus, in the case that became the basis for the excellent film “The Accused,” a woman’s sexually provocative behavior was found not to give the men who raped her any defense, given that she clearly said “no” to the rape.)

Thus, in response to Samuel (44), my point about Turkey is not one about numbers: if even a minority were at risk of physical violence, some government action would be justified.   Usually, what government will rightly do is to stop the assailants from beating up on people, rather than banning any religious practices.   For example, the Supreme Court said that Jehovah’s Witnesses have a constitutional right to say negative things about Catholics in the public street, and the sort of government intervention that would be appropriate would not be a ban on insults to Catholics, but rather a careful defense of the minority against coercive pressure both from the state and from private individuals (see Cantwell v. Connecticut, 310 U. S. 296 (1940): Connecticut’s action charging the Jehovah’s Witnesses with a breach of the peace for their slurs against Catholics violated their rights under the First and Fourteenth Amendments).  The situation in Turkey was different, because the violence toward unveiled women was thought to be so widespread and so unstoppable that only a total ban on the veil could stop it.  If the facts were correct, the decision was (temporarily) right.

When the pressure is emotional only, the case is much more difficult.  On the whole, we tend to favor few legal limits for adults: thus, if someone is in an emotionally abusive relationship, that is a case for counseling or the intervention of friends and family, not for the police. Even when we can see that what is going on is manipulative — e.g. the man says, “I won’t date you any longer if you don’t do this or that sex act” — we think that is the business of the people involved and those who care about them, not of the police.   I think that emotional coercion to wear a burqa, applied to an adult woman (threats of withdrawal of affection, for example, but not physical violence) is like this, and should be dealt with by friends and family, not by the law.

What about children?  This opens up a huge topic, since there is nothing that is more common in the modern family than various forms of coercive pressure (to get into a top college, to date people of the “right” religion or ethnicity, to wear “appropriate” clothes, to choose a remunerative career, to take a shower, “and so each and so on to no last term” as James Joyce wrote in “Ulysses”).  So: where should government and law step in?  Not often, and only where the behavior either constitutes a gross risk to bodily health and safety (as with Jehovah’s Witness children being forbidden to have a life-saving blood transfusion), or impairs some major functioning.  Thus, I think that female genital mutilation practiced on minors should be illegal if it is a form that impairs sexual pleasure or other bodily functions. (A symbolic pin-prick is a different story.)  Male circumcision seems to me all right, however, because there is no evidence that it interferes with adult sexual functioning; indeed it is now known to reduce susceptibility to H.I.V./AIDS.  The burqa (for minors) is not in the same class as genital mutilation, since it is not irreversible and does not endanger health or impair other bodily functions — not nearly so much as high-heeled shoes, as I remarked (being a proud lover of the same).  Suppose parents required their daughters to wear a Victorian corset — which did do bodily damage, compressing various organs.  There might be a case for a ban then.  But the burqa is not even in the category of the corset.  As many readers pointed out, it is sensible dress in a hot climate where skin easily becomes worn by sun and dust.

At the limit, where the state’s interest in protecting the opportunities of children is concerned, is the denial of education at stake in the Supreme Court case, Wisconsin v. Yoder (406 U. S. 205 (1972)), in which a group of Amish parents asked to withdraw their children from the last two years of legally required schooling.  They would clearly have lost if they had asked to take their children out of all schooling, but what was in question were these two years only.  They won under the accommodationist principle I described in my article, although they probably would have lost on Justice Scalia’s Lockean test, since the law mandating education until age 16 was nondiscriminatory and not drawn up in order to persecute the Amish.  The case is difficult, because the parents made a convincing case that work on the farm, at that crucial age, was a key part of their community-based religion — and yet education opens up so many exit opportunities that the denial even of those two years may unreasonably limit children’s future choices.    And of course the children were under heavy pressure to do what their parents wanted.  (Thus Justice Douglas’s claim that the Court should decide the case by interviewing the children betrayed a lack of practical understanding.)

6. How much choice is enough?  Annie (126) and several others have pointed out that we all get our values through some type of social indoctrination, religious values included.  So we can’t really assume that the choice to wear a burqa is a free choice, if we mean by that a choice that has been deliberated about with due consideration of all the alternatives and with unimpeded access to some alternatives.  But then, as Annie says, we can’t assume that about anyone’s choice of anything — career, romantic partner, politics, etc.  What we can do, I think, is to guarantee a threshold level of overall freedom, by making primary and secondary education compulsory, by opening higher education to all who want it and are qualified (through need-blind admissions), and by working on job creation so that all of our citizens have some choice in matters of employment.  Moreover, the education that children get should encourage critical thinking, expansion of the imagination, and the other humanistic ideals that I discuss in my recent book, “Not For Profit: Why Democracy Needs the Humanities” (Princeton University Press 2010).  If a person gets an education like that (and it is not expensive, I’ve seen it done by women’s groups in India for next to nothing, just a lot of passion), then we can be more confident that a choice is a choice.

Thanks to you all for taking the time to respond!

Martha Nussbaum teaches law, philosophy, and divinity at The University of Chicago. She is the author of several books, including “Liberty of Conscience: In Defense of America’s Tradition of Religious Equality” (2008) and “Not for Profit: Why Democracy Needs the Humanities” (2010).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/07/15/beyond-the-veil-a-response/

Moral Camouflage or Moral Monkeys?

After being shown proudly around the campus of a prestigious American university built in gothic style, Bertrand Russell is said to have exclaimed, “Remarkable. As near Oxford as monkeys can make.” Much earlier, Immanuel Kant had expressed a less ironic amazement, “Two things fill the mind with ever new and increasing admiration and awe … the starry heavens above and the moral law within.” Today many who look at morality through a Darwinian lens can’t help but find a charming naïveté in Kant’s thought. “Yes, remarkable. As near morality as monkeys can make.”

So the question is, just how near is that? Optimistic Darwinians believe, near enough to be morality. But skeptical Darwinians won’t buy it. The great show we humans make of respect for moral principle they see as a civilized camouflage for an underlying, evolved psychology of a quite different kind.

This skepticism is not, however, your great-grandfather’s Social Darwinism, which saw all creatures great and small as pitted against one another in a life or death struggle to survive and reproduce — “survival of the fittest.” We now know that such a picture seriously misrepresents both Darwin and the actual process of natural selection. Individuals come and go, but genes can persist for 1,000 generations or more. Individual plants and animals are the perishable vehicles that genetic material uses to make its way into the next generation (“A chicken is an egg’s way of making another egg”). From this perspective, relatives, who share genes, are to that extent not really in evolutionary competition; no matter which one survives, the shared genes triumph. Such “inclusive fitness” predicts the survival, not of selfish individuals, but of “selfish” genes, which tend in the normal range of environments to give rise to individuals whose behavior tends to propel those genes into the future.

A place is thus made within Darwinian thought for such familiar phenomena as family members sacrificing for one another — helping when there is no prospect of payback, or being willing to risk life and limb to protect one’s people or avenge harms done to them.

But what about unrelated individuals? “Sexual selection” occurs whenever one must attract a mate in order to reproduce. Well, what sorts of individuals are attractive partners? Henry Kissinger claimed that power is the ultimate aphrodisiac, but for animals who bear a small number of young over a lifetime, each requiring a long gestation and demanding a great deal of nurturance to thrive into maturity, potential mates who behave selfishly, uncaringly, and unreliably can lose their chance. And beyond mating, many social animals depend upon the cooperation of others for protection, foraging and hunting, or rearing the young. Here, too, power can attract partners, but so can a demonstrable tendency to behave cooperatively and share benefits and burdens fairly, even when this involves some personal sacrifice — what is sometimes called “reciprocal altruism.” Baboons are notoriously hierarchical, but Joan Silk, a professor of anthropology at UCLA, and her colleagues, recently reported a long-term study of baboons, in which they found that, among females, maintaining strong, equal, enduring social bonds — even when the individuals were not related — can promote individual longevity more effectively than gaining dominance rank, and can enhance the survival of progeny.

A picture thus emerges of selection for “proximal psychological mechanisms”— for example, individual dispositions like parental devotion, loyalty to family, trust and commitment among partners, generosity and gratitude among friends, courage in the face of enemies, intolerance of cheaters — that make individuals into good vehicles, from the gene’s standpoint, for promoting the “distal goal” of enhanced inclusive fitness.

Why would human evolution have selected for such messy, emotionally entangling proximal psychological mechanisms, rather than produce yet more ideally opportunistic vehicles for the transmission of genes — individuals wearing a perfect camouflage of loyalty and reciprocity, but fine-tuned underneath to turn self-sacrifice or cooperation on or off exactly as needed? Because the same evolutionary processes would also be selecting for improved capacities to detect, pre-empt and defend against such opportunistic tendencies in other individuals — just as evolution cannot produce a perfect immune system, since it is equally busily at work improving the effectiveness of viral invaders. Devotion, loyalty, honesty, empathy, gratitude, and a sense of fairness are credible signs of value as a partner or friend precisely because they are messy and emotionally entangling, and so cannot simply be turned on and off by the individual to capture each marginal advantage. And keep in mind the small scale of early human societies, and Abraham Lincoln’s point about our power to deceive.

Why, then, aren’t we better — more honest, more committed, more loyal? There will always be circumstances in which fooling some of the people some of the time is enough; for example, when society is unstable or individuals mobile. So we should expect a capacity for opportunism and betrayal to remain an important part of the mix that makes humans into monkeys worth writing novels about. 

How close does all this take us to morality? Not all the way, certainly. An individual psychology primarily disposed to consider the interests of all equally, without fear or favor, even in the teeth of social ostracism, might be morally admirable, but simply wouldn’t cut it as a vehicle for reliable replication. Such pure altruism would not be favored in natural selection over an impure altruism that conferred benefits and took on burdens and risks more selectively — for “my kind” or “our kind.” This puts us well beyond pure selfishness, but only as far as an impure us-ishness. Worse, us-ish individuals can be a greater threat than purely selfish ones, since they can gang up so effectively against those outside their group. Certainly greater atrocities have been committed in the name of “us vs. them” than “me vs. the world.”

So, are the optimistic Darwinians wrong, and impartial morality beyond the reach of those monkeys we call humans? Does thoroughly logical evolutionary thinking force us to the conclusion that our love, loyalty, commitment, empathy, and concern for justice and fairness are always at bottom a mixture of selfish opportunism and us-ish clannishness? Indeed, is it only a sign of the effectiveness of the moral camouflage that we ourselves are so often taken in by it?

Speaking of what “thoroughly logical evolutionary thinking” might “force” us to conclude provides a clue to the answer. Think for a moment about science and logic themselves. Natural selection operates on a need-to-know basis. Between two individuals — one disposed to use scarce resources and finite capacities to seek out the most urgent and useful information and the other, heedless of immediate and personal concerns and disposed instead toward pure, disinterested inquiry, following logic wherever it might lead — it is clear which natural selection would tend to favor.

And yet, Darwinian skeptics about morality believe, humans somehow have managed to redeploy and leverage their limited, partial, human-scale psychologies to develop shared inquiry, experimental procedures, technologies and norms of logic and evidence that have resulted in genuine scientific knowledge and responsiveness to the force of logic. This distinctively human “cultural evolution” was centuries in the making, and overcoming partiality and bias remains a constant struggle, but the point is that these possibilities were not foreclosed by the imperfections and partiality of the faculties we inherited. As Wittgenstein observed, crude tools can be used to make refined tools. Monkeys, it turns out, can come surprisingly near to objective science.

We can see a similar cultural evolution in human law and morality — a centuries-long process of overcoming arbitrary distinctions, developing wider communities, and seeking more inclusive shared standards, such as the Geneva Conventions and the Universal Declaration of Human Rights. Empathy might induce sympathy more readily when it is directed toward kith and kin, but we rely upon it to understand the thoughts and feelings of enemies and outsiders as well. And the human capacity for learning and following rules might have evolved to enable us to speak a native language or find our place in the social hierarchy, but it can be put into service understanding different languages and cultures, and developing more cosmopolitan or egalitarian norms that can be shared across our differences.

Within my own lifetime, I have seen dramatic changes in civil rights, women’s rights and gay rights. That’s just one generation in evolutionary terms. Or consider the way that empathy and the pressure of consistency have led to widespread recognition that our fellow animals should receive humane treatment. Human culture, not natural selection, accomplished these changes, and yet it was natural selection that gave us the capacities that helped make them possible. We still must struggle continuously to see to it that our widened empathy is not lost, our sympathies engaged, our understandings enlarged, and our moral principles followed. But the point is that we have done this with our imperfect, partial, us-ish native endowment. Kant was right to be impressed. In our best moments, we can come surprisingly close to being moral monkeys.

Peter Railton is the Perrin Professor of Philosophy at the University of Michigan, Ann Arbor. His main areas of research are moral philosophy and the philosophy of science. He is a member of the American Academy of Arts and Sciences.

__________
Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/18/moral-camouflage-or-moral-monkeys/

Your Move: The Maze of Free Will

You arrive at a bakery. It’s the evening of a national holiday. You want to buy a cake with your last 10 dollars to round off the preparations you’ve already made. There’s only one thing left in the store — a 10-dollar cake.

On the steps of the store, someone is shaking an Oxfam tin. You stop, and it seems quite clear to you — it surely is quite clear to you — that it is entirely up to you what you do next. You are — it seems — truly, radically, ultimately free to choose what to do, in such a way that you will be ultimately morally responsible for whatever you do choose. Fact: you can put the money in the tin, or you can go in and buy the cake. You’re not only completely, radically free to choose in this situation. You’re not free not to choose (that’s how it feels). You’re “condemned to freedom,” in Jean-Paul Sartre’s phrase. You’re fully and explicitly conscious of what the options are and you can’t escape that consciousness. You can’t somehow slip out of it.

You may have heard of determinism, the theory that absolutely everything that happens is causally determined to happen exactly as it does by what has already gone before — right back to the beginning of the universe. You may also believe that determinism is true. (You may also know, contrary to popular opinion, that current science gives us no more reason to think that determinism is false than that determinism is true.) In that case, standing on the steps of the store, it may cross your mind that in five minutes’ time you’ll be able to look back on the situation you’re in now and say truly, of what you will by then have done, “Well, it was determined that I should do that.” But even if you do fervently believe this, it doesn’t seem to be able to touch your sense that you’re absolutely morally responsible for what you do next.

The case of the Oxfam box, which I have used before to illustrate this problem, is relatively dramatic, but choices of this type are common. They occur frequently in our everyday lives, and they seem to prove beyond a doubt that we are free and ultimately morally responsible for what we do. There is, however, an argument, which I call the Basic Argument, which appears to show that we can never be ultimately morally responsible for our actions. According to the Basic Argument, it makes no difference whether determinism is true or false. We can’t be ultimately morally responsible either way.

The argument goes like this.

(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

The key move is (3). Why can’t you be ultimately responsible for the way you are in any respect at all? In answer, consider an expanded version of the argument.

(a) It’s undeniable that the way you are initially is a result of your genetic inheritance and early experience.

(b) It’s undeniable that these are things for which you can’t be held to be in any way responsible (morally or otherwise).

(c) But you can’t at any later stage of life hope to acquire true or ultimate moral responsibility for the way you are by trying to change the way you already are as a result of genetic inheritance and previous experience.

(d) Why not? Because both the particular ways in which you try to change yourself, and the amount of success you have when trying to change yourself, will be determined by how you already are as a result of your genetic inheritance and previous experience.

(e) And any further changes that you may become able to bring about after you have brought about certain initial changes will in turn be determined, via the initial changes, by your genetic inheritance and previous experience.

There may be all sorts of other factors affecting and changing you. Determinism may be false: some changes in the way you are may come about as a result of the influence of indeterministic or random factors. But you obviously can’t be responsible for the effects of any random factors, so they can’t help you to become ultimately morally responsible for how you are.

Some people think that quantum mechanics shows that determinism is false, and so holds out a hope that we can be ultimately responsible for what we do. But even if quantum mechanics had shown that determinism is false (it hasn’t), the question would remain: how can indeterminism, objective randomness, help in any way whatever to make you responsible for your actions? The answer to this question is easy. It can’t.

And yet we still feel that we are free to act in such a way that we are absolutely responsible for what we do. So I’ll finish with a third, richer version of the Basic Argument that this is impossible.

(i) Interested in free action, we’re particularly interested in actions performed for reasons (as opposed to reflex actions or mindlessly habitual actions).

(ii) When one acts for a reason, what one does is a function of how one is, mentally speaking. (It’s also a function of one’s height, one’s strength, one’s place and time, and so on, but it’s the mental factors that are crucial when moral responsibility is in question.)

(iii) So if one is going to be truly or ultimately responsible for how one acts, one must be ultimately responsible for how one is, mentally speaking — at least in certain respects.

(iv) But to be ultimately responsible for how one is, in any mental respect, one must have brought it about that one is the way one is, in that respect. And it’s not merely that one must have caused oneself to be the way one is, in that respect. One must also have consciously and explicitly chosen to be the way one is, in that respect, and one must also have succeeded in bringing it about that one is that way.

(v) But one can’t really be said to choose, in a conscious, reasoned, fashion, to be the way one is in any respect at all, unless one already exists, mentally speaking, already equipped with some principles of choice, “P1” — preferences, values, ideals — in the light of which one chooses how to be.

(vi) But then to be ultimately responsible, on account of having chosen to be the way one is, in certain mental respects, one must be ultimately responsible for one’s having the principles of choice P1 in the light of which one chose how to be.

(vii) But for this to be so one must have chosen P1, in a reasoned, conscious, intentional fashion.

(viii) But for this to be so one must already have had some principles of choice P2, in the light of which one chose P1.

(ix) And so on. Here we are setting out on a regress that we cannot stop. Ultimate responsibility for how one is is impossible, because it requires the actual completion of an infinite series of choices of principles of choice.

(x) So ultimate, buck-stopping moral responsibility is impossible, because it requires ultimate responsibility for how one is, as noted in (iii).

Does this argument stop me feeling entirely morally responsible for what I do? It does not. Does it stop you feeling entirely morally responsible? I very much doubt it. Should it stop us? Well, it might not be a good thing if it did. But the logic seems irresistible …. And yet we continue to feel we are absolutely morally responsible for what we do, responsible in a way that we could be only if we had somehow created ourselves, only if we were “causa sui,” the cause of ourselves. It may be that we stand condemned by Nietzsche:

The causa sui is the best self-contradiction that has been conceived so far. It is a sort of rape and perversion of logic. But the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense. The desire for “freedom of the will” in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated; the desire to bear the entire and ultimate responsibility for one’s actions oneself, and to absolve God, the world, ancestors, chance, and society involves nothing less than to be precisely this causa sui and, with more than Baron Münchhausen’s audacity, to pull oneself up into existence by the hair, out of the swamps of nothingness … (“Beyond Good and Evil,” 1886).

Is there any reply? I can’t do better than the novelist Ian McEwan, who wrote to me: “I see no necessary disjunction between having no free will (those arguments seem watertight) and assuming moral responsibility for myself. The point is ownership. I own my past, my beginnings, my perceptions. And just as I will make myself responsible if my dog or child bites someone, or my car rolls backwards down a hill and causes damage, so I take on full accountability for the little ship of my being, even if I do not have control of its course. It is this sense of being the possessor of a consciousness that makes us feel responsible for it.”

Galen Strawson is professor of philosophy at Reading University and is a regular visitor at the philosophy program at the City University of New York Graduate Center. He is the author of “Selves: An Essay in Revisionary Metaphysics” (Oxford: Clarendon Press, 2009) and other books.

___________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/22/your-move-the-maze-of-free-will/

Islamophobia and Homophobia

As if we needed more evidence of America’s political polarization, last week Juan Williams gave the nation a Rorschach test. Williams said he gets scared when people in “Muslim garb” board a plane he’s on, and he promptly got (a) fired by NPR and (b) rewarded by Fox News with a big contract.

Suppose Williams had said something hurtful to gay people instead of to Muslims. Suppose he had said gay men give him the creeps because he fears they’ll make sexual advances. NPR might well have fired him, but would Fox News have chosen that moment to give him a $2-million pat on the back?

I don’t think so. Playing the homophobia card is costlier than playing the Islamophobia card. Or at least, the costs are more evenly spread across the political spectrum. In 2007, when Ann Coulter used a gay slur, she was denounced on the right as well as the left, and her stock dropped. Notably, her current self-promotion campaign stresses her newfound passion for gay rights.

Coulter’s comeuppance reflected sustained progress on the gay rights front. Only a few decades ago, you could tell an anti-gay joke on the Johnny Carson show — with Carson’s active participation — and no one would complain. (See postscript below for details.) The current “it gets better” campaign, designed to reassure gay teenagers that adulthood will be less oppressive than adolescence, amounts to a kind of double entendre: things get better not just over an individual’s life but over the nation’s life.

When we move from homophobia to Islamophobia, the trendline seems to be pointing in the opposite direction. This isn’t shocking, given 9/11 and the human tendency to magnify certain kinds of risk. (Note to Juan Williams: Over the past nine years about 90 million flights have taken off from American airports, and not one has been brought down by a Muslim terrorist. Even in 2001, no flights were brought down by people in “Muslim garb.”) 

Still, however “natural” this irrational fear, it’s dangerous. As Islamophobia grows, it alienates Muslims, raising the risk of homegrown terrorism — and homegrown terrorism heightens the Islamophobia, which alienates more Muslims, and so on: a vicious circle that could carry America into the abyss. So it’s worth taking a look at why homophobia is fading; maybe the underlying dynamic is transplantable to the realm of inter-ethnic prejudice.

Theories differ as to what it takes for people to build bonds across social divides, and some theories offer more hope than others.

One of the less encouraging theories grows out of the fact that both homophobia and Islamophobia draw particular strength from fundamentalist Christians. Maybe, this argument goes, part of the problem is a kind of “scriptural determinism.” If religious texts say that homosexuality is bad, or that people of other faiths are bad, then true believers will toe that line.

If scripture is indeed this powerful, we’re in trouble, because scripture is invoked by intolerant people of all Abrahamic faiths — including the Muslim terrorists who plant the seeds of Islamophobia. And, judging by the past millennium or two, God won’t be issuing a revised version of the Bible or the Koran anytime soon.

Happily, there’s a new book that casts doubt on the power of intolerant scripture: “American Grace,” by the social scientists Robert Putnam and David Campbell.

Three decades ago, according to one of the many graphs in this data-rich book, slightly less than half of America’s frequent churchgoers were fine with gay people freely expressing their views on gayness. Today that number is over 70 percent — and no biblical verse bearing on homosexuality has magically changed in the meanwhile. And these numbers actually understate the progress; over those three decades, church attendance was dropping for mainline Protestant churches and liberal Catholics, so the “frequent churchgoers” category consisted increasingly of evangelicals and conservative Catholics.

So why have conservative Christians gotten less homophobic? Putnam and Campbell favor the “bridging” model. The idea is that tolerance is largely a question of getting to know people. If, say, your work brings you in touch with gay people or Muslims — and especially if your relationship with them is collaborative — this can brighten your attitude toward the whole tribe they’re part of. And if this broader tolerance requires ignoring or reinterpreting certain scriptures, so be it; the meaning of scripture is shaped by social relations.

The bridging model explains how attitudes toward gays could have made such rapid progress. A few decades ago, people all over America knew and liked gay people — they just didn’t realize these people were gay. So by the time gays started coming out of the closet, the bridge had already been built.

And once straight Americans followed the bridge’s logic — once they, having already accepted people who turned out to be gay, accepted gayness itself — more gay people felt comfortable coming out. And the more openly gay people there were, the more straight people there were who realized they had gay friends, and so on: a virtuous circle.

So could bridging work with Islamophobia? Could getting to know Muslims have the healing effect that knowing gay people has had?

The good news is that bridging does seem to work across religious divides. Putnam and Campbell did surveys with the same pool of people over consecutive years and found, for example, that gaining evangelical friends leads to a warmer assessment of evangelicals (by seven degrees on a “feeling thermometer” per friend gained, if you must know).

And what about Muslims? Did Christians warm to Islam as they got to know Muslims — and did Muslims return the favor?

That’s the bad news. The population of Muslims is so small, and so concentrated in distinct regions, that there weren’t enough such encounters to yield statistically significant data. And, as Putnam and Campbell note, this is a recipe for prejudice. Being a small and geographically concentrated group makes it hard for many people to know you, so not much bridging naturally happens. That would explain why Buddhists and Mormons, along with Muslims, get low feeling-thermometer ratings in America.

In retrospect, the situation of gays a few decades ago was almost uniquely conducive to rapid progress. The gay population, though not huge, was finely interspersed across the country, with  representatives in virtually every high school, college and sizeable workplace. And straights had gotten to know them without even seeing the border they were crossing in the process.

So the engineering challenge in building bridges between Muslims and non-Muslims will be big. Still, at least we grasp the nuts and bolts of the situation. It’s a matter of bringing people into contact with the “other” in a benign context. And it’s a matter of doing it fast, before the vicious circle takes hold, spawning appreciable homegrown terrorism and making fear of Muslims less irrational.

After 9/11, philanthropic foundations spent a lot of money arranging confabs whose participants spanned the divide between “Islam” and “the West.” Meaningful friendships did form across this border, and that’s good. It’s great that Imam Feisal Abdul Rauf, a cosmopolitan, progressive Muslim, got to know lots of equally cosmopolitan Christians and Jews.

But as we saw when he decided to build an Islamic Community Center near ground zero, this sort of high-level networking — bridging among elites whose attitudes aren’t really the problem in the first place — isn’t enough. Philanthropists need to figure out how you build lots of little bridges at the grass roots level. And they need to do it fast.

Postscript: As for the Johnny Carson episode: I don’t like to rely on my memory alone for decades-old anecdotes, but in this case I’m 99.8 percent sure that I remember the basics accurately. Carson’s guest was the drummer Buddy Rich. In a supposedly spontaneous but obviously pre-arranged exchange, Rich said something like, “People often ask me, What is Johnny Carson really like?” Carson looked at Rich warily and said, “And how do you respond to this query?” But he paused between “this” and “query,” theatrically ratcheting up the wariness by an increment or two, and then pronounced the word “query” as “queery.” Rich immediately replied, “Like that.” Obviously, there are worse anti-gay jokes than this. Still, the premise was that being gay was something to be ashamed of. That Googling doesn’t turn up any record of this episode suggests that it didn’t enter the national conversation or the national memory. I don’t think that would be the case today. And of course, anecdotes aside, there is lots of polling data showing the extraordinary progress made since the Johnny Carson era on such issues as gay marriage and on gay rights in general.

Robert Wright, New York Times

__________

Full article: http://opinionator.blogs.nytimes.com/2010/10/26/islamophobia-and-homophobia/

The Burqa and the Body Electric

In her post of July 11, “Veiled Threats?” and her subsequent response to readers, Martha Nussbaum considers the controversy over the legal status of the burqa — which continues to flare across Europe —  making a case for freedom of religious expression.  In these writings, Professor Nussbaum applies the argument of her 2008 book “Liberty of Conscience,” which praises the American approach to religious liberty of which Roger Williams, one of the founders of Rhode Island Colony, is an early champion.

Williams is an inspiring figure, indeed.  Feeling firsthand the constraint of religious conformism in England and in Massachusetts Bay, he developed a uniquely broad position on religious toleration, one encompassing not only Protestants of all stripes, but Roman Catholics, Jews and Muslims.  The state, in his view, can legitimately enforce only the second tablet of the Decalogue — those final five commandments covering murder, theft, and the like.  All matters of worship covered by the first tablet must be left to the individual conscience. 

Straightforward enough.  But in the early years of Rhode Island, Williams faced quite a relevant challenge.  One of the colonists who fled Salem with him, Joshua Verin, quickly made himself an unwelcome presence in the fledgling community.  In addition to being generally “boysterous,” Verin had forbidden his wife from attending the religious services that Williams held in his home, and became publicly violent in doing so.  The colony was forced into action: Verin was disenfranchised on the grounds that he was interfering with his wife’s religious freedom.  Taking up a defense of Verin, fellow colonist William Arnold — who would also nettle Williams in the years to follow — claimed the punishment to be a violation of Verin’s religious liberty in that it interfered with his biblically appointed husbandly dominion.  “Did you pretend to leave Massachusetts, because you would not offend God to please men,” Arnold asked, “and would you now break an ordinance and commandment of God to please women?”  In his Salem days Williams himself had affirmed such biblical hierarchy, favoring the covering of women’s heads in all public assemblies.

A colony whose founding charter was signed by more than one woman was apparently not willing to accept the kind of male violence on which the Bible is at least indifferent.  Some suggested that Mrs. Verin be liberated from her husband and placed in the community’s protection until a suitable mate could be found for her.  That proposal was not taken up, and it was not taken up because Mrs. Verin herself declared that she wished to remain with her husband — a declaration made while she was literally tied in ropes and being dragged back to Salem by a man who had bound her in a more than matrimonial way.

The Verin case illustrates the weakness of the principle on which Nussbaum depends.  Liberty of conscience limits human institutions so that they do not interfere with the sacrosanct relationship between the soul and God, and in its strict application allows a coerced soul to run its course into the abyss.   In especially unappealing appeals to this kind of liberty, bigots of all varieties have claimed an entitlement to their views on grounds of tender conscience.  Nussbaum recognizes this elsewhere.  Despite her casual attitude toward burqa wearers — a marginal group in an already small minority population — she responds forcefully to false claims of religious freedom when they infect policymaking, as in her important defense of the right to marry.

The burqa controversy revolves around a central question: “Does this cultural practice performed in the name of religion inherently violate the principle of equality that democracies are obliged to defend?”  The only answer to that question offered by liberty of conscience is that we have no right to ask in the first place.  This is in essence Nussbaum’s position, even though the kind of floor-to-ceiling drapery that we are considering is not at all essential to Muslim worship.  The burqa is not religious headwear; it is a physical barrier to engagement in public life adopted in a deep spirit of misogyny. 

Lockean religious toleration, a tradition of which Nussbaum is skeptical, expects religious observance more fully to conform to the aims of a democratic polity.  We might see the French response to the burqa as an expression of that tradition.  After a famously misguided initial attempt to do away with all Muslim headwear in schools and colleges, French legislators later settled down to an evaluation of the point at which headwear becomes an affront to gender equality, passing most recently a ban on the niqab, or face veil—which has also been barred from classrooms and dormitories at Cairo’s Al’Azhar University, the historical center of Muslim learning.  It seems farcical to create a scorecard of permissible and impermissible religious garb — and as a youth reading Arthur C. Clarke it is not what I imagined the world’s legislators to be doing with urgency in the year 2010 — but we must wonder if it is the kind of determination that we must make, and make with care, if we are to come to a just settlement of the issue.

If we take a broader view, though, we might see that Lockean tradition as part of the longstanding wish of the state to disarm religion’s subversive potential.

Controversies on religious liberty are as old as temple and palace, those two would-be foci of ultimate meaning in human life that seem perpetually to run on a collision course.  Sophocles’s Antigone subscribes to an absolute religious duty in her single-minded wish to administer funeral rites to her brother Polyneices.  King Creon forbids the burial because Polyneices is an enemy of the state, an attempt to bring observance into harmony with political authority.  Augustine of Hippo handles the tension in his critique of the Roman polymath Varro.  Though the work that he is discussing is now lost, Augustine in his thorough way gives us a rigorous account of Varro’s distinction between “natural theology,” the theology of philosophers who seek to discern the nature of the gods as they really are, and “civil theology,” which is the theology acceptable in public observance.  Augustine objects to this distinction: if natural theology is “really natural,” if it has discovered the true nature of divinity, he asks, “what is found wrong with it, to cause its exclusion from the city?”

Debates on religious liberty seem always to be separated by the gulf between Varro and Augustine.  Those who follow Varro tolerate brands of religion that do not threaten civil order or prevailing moral conventions, and accept in principle a distinction between public and private worship; those who follow Augustine tolerate only those who agree with their sense of the nature of divinity, which authorities cannot legitimately restrict.  At their worst, Varronians make flimsy claims on preserving public decorum and solidify the state’s marginalization of religious minorities — as the Swiss did in December 2009 by passing a referendum banning the construction of minarets.  Augustinians at their worst expect the state to respect primordial bigotries and tribal exceptionalism — as do many of this country’s so-called Christian conservatives.

Might there be a third way?  If, as several thinkers have suggested, we now find ourselves in a “post-secular” age, then perhaps we might look beyond traditional disputes between political and ecclesiastical authority, between religion and secularism.  Perhaps post-secularity can take justice and equality to be absolutely good with little regard for whether we come to value the good by a religious or secular path.  Our various social formations — political, religious, social, familial — find their highest calling in deepening our bonds of fellow feeling.  “Compelling state interest” has no inherent value; belief  also has no inherent value.  Political and religious positions  must be measured against the purity of truths, rightly conceived as those principles enabling the richest possible lives for our fellow human beings.

So let us attempt such a measure.  The kind of women’s fashion favored by the Taliban might legitimately be outlawed as an instrument of gender apartheid — though one must have strong reservations about the enforcement of such a law, which could create more divisiveness than it cures.  The standard of human harmony provides strong resistance to anti-gay prejudice, stripping it of its wonted mask of righteousness. It objects in disgust to Pope Benedict XVI when he complains about Belgian authorities seizing church records in the course of investigating sexual abuse; it also praises the Catholic Church for the humanitarian and spiritual services it provides on this country’s southern border, which set the needs of the human family above arbitrary distinctions of citizenship.  The last example shows that some belief provides a deeply humane resistance to state power run amok.  To belief of this kind there is no legitimate barrier.

Humane action is of course open to interpretation. But if we place it at the center of our aspirations, we will make decisions more salutary than those offered by the false choice between state interest and liberty of conscience. Whitman may have been the first post-secularist in seeing that political and religious institutions declaring certain bodies to be shameful denigrate all human dignity: every individual is a vibrant body electric deeply connected to all beings by an instinct of fellow feeling. Such living democracy shames the puerile self-interest of modern electoral politics, and the worn barbarisms lurking under the shroud of retrograde orthodoxy. Embracing that vitality, to return to Nussbaum’s concerns, also guides us to the most generous possible reading of the First Amendment, which restricts government so that individual consciences and groups of believers can actively advance, rather than stall, the American project of promoting justice and equality.

 

Feisal G. Mohamed is an associate professor of English at the University of Illinois.  His most recent book, “Milton and the Post-Secular Present,” is forthcoming from Stanford University Press.

__________
Full article: http://opinionator.blogs.nytimes.com/2010/07/28/the-burqa-and-the-body-electric/

Freedom and Reality: A Response

It has been a privilege to read the comments posted in response to my July 25th post, “The Limits of the Coded World” and I am exceedingly grateful for the time, energy and passion so many readers put into them. While my first temptation was to answer each and every one, reality began to set in as the numbers increased, and I soon realized I would have to settle for a more general response treating as many of the objections and remarks as I could. This is what I would like to do now. 

If I had to distill my entire response into one sentence, it would be this: it’s not about the monkeys! It was unfortunate that the details of the monkey experiment — in which researchers were able to use computers wired to the monkeys’ brains to predict the outcome of certain decisions — dominated so much attention, since it could have been replaced by mention of any number of similar experiments, or indeed by a fictional scenario. My intention in bringing it up at all was twofold: first, to take an example of the sort of research that has repeatedly sparked sensationalist commentary in the popular press about the end of free will; and second, to show how plausible and in some sense automatic the link between predictability and the problem of free will can be (which was borne out by the number of readers who used the experiment to start their own inquiry into free will).

As readers know, my next step was to argue that predictability no more indicates lack of free will than does unpredictability indicate its presence. Several readers noted an apparent contradiction between this claim, that “we have no reason to assume that either predictability or lack of predictability has anything to say about free will,” and one of the article’s concluding statements, that “I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it.”

Indeed, when juxtaposed without the intervening text, the statements seem clearly opposed. However, not only is there no contradiction between them, the entire weight of my arguments depends on understanding why there is none. In the first sentence I am talking about using models to more or less accurately guess at future outcomes; in the second I am talking about a model of the ultimate nature of reality as a kind of knowledge waiting to be decoded — what I called in the article “the code of codes,” a phrase I lifted from one of my mentors, the late Richard Rorty — and how the impossibility of that model is what guarantees freedom.

The reason why predictability in the first sense has no bearing on free will is, in fact, precisely because predictability in the second sense is impossible. The theoretical predictability of everything that occurs in a universe whose ultimate reality is conceived of as a kind of knowledge or code is what goes by the shorthand of determinism. In the old debate between free will and determinism, determinism has always played the default position, the backdrop from which free will must be wrested if we are to have any defensible concept of responsibility. From what I could tell, a great number of commentators on my article shared at least this idea with Galen Strawson: that we live in a deterministic universe, and if the concept of free will has any importance at all it is merely as a kind of necessary illusion. My position is precisely the opposite: free will does not need to be “saved” from determinism, because there was never any real threat there to begin with, either of a scientific nature or a religious one.  

The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible. Let’s see why.

To make a choice that in any sense could be considered “free,” we would have to claim that it was at some point unconstrained. But, the hard determinist would argue, there can never be any point at which a choice is unconstrained, because even if we exclude any and all obvious constraints, such as hunger or coercion, the chooser is constrained by (and this is Strawson’s “basic argument”) how he or she is at the time of the choosing, a sum total of effects over which he or she could never exercise causality.

This constraint of “how he or she is,” however, is pure fiction, a treatment of tangible reality as if it were decodable knowledge, requiring a kind of God’s eye perspective capable of knowing every instance and every possible interpretation of every aspect of a person’s history, culture, genes and general chemistry, to mention only a few variables. It refers to a reality that self-proclaimed rationalists and science advocates pay lip service to in their insistence on basing all claims on hard, tangible facts, but is in fact as elusive, as metaphysical and ultimately as incompatible with anything we could call human knowledge as would be a monotheistic religion’s understanding of God.

When some readers sardonically (I assume) reduced my argument to “ignorance=freedom,” then, they were right in a way; but the rub lies in how we understand ignorance. The commonplace understanding would miss the point entirely: it is not ignorance against the backdrop of ultimate knowledge that equates to freedom; rather, it is constitutive, essential ignorance. This, again, needs expansion.

Knowledge can never be complete. This is the case not merely because there will always be something more to know; rather, it is so because completed knowledge is oxymoronic, self-defeating. AI theorists have long dreamed of what Daniel Dennett once called heterophenomenology, the idea that, with an accurate-enough understanding of the human brain, my description of another person’s experience could become indiscernible from that experience itself. My point is not merely that heterophenomenology is impossible from a technological perspective or undesirable from an ethical perspective; rather, it is impossible from a logical perspective, since the very phenomenon we are seeking to describe, in this case the conscious experience of another person, would cease to exist without the minimal opacity separating his or her consciousness from mine. Analogously, all knowledge requires this kind of minimal opacity, because knowing something involves, at a minimum, a synthesis of discrete perceptions across space or time.

The Argentine writer Jorge Luis Borges demonstrated this point with implacable rigor in a story about a man who loses the ability to forget, and with that also ceases to think, perceive, and eventually to live, because, as Borges points out, thinking necessarily involves abstraction, the forgetting of differences. Because of what we can thus call our constitutive ignorance, then, we are free — only and precisely because as beings who cannot possibly occupy all times and spatial perspectives without thereby ceasing to be what we are, we are constantly faced with choices. All these choices — to the extent that they are choices and not simply responses to stimuli or reactions to forces exerted on us — have at least some element that cannot be traced to a direct determination, but could only be blamed, for the sake of defending a deterministic thesis, on the ideal and completely fanciful determinism of “how we are” at the time of the decision to be made.

Far from a mere philosophical wish fulfillment or fuzzy, humanistic thinking, then, this kind of freedom is real, hard-nosed and practical. Indeed, courts of law and ethics panels may take specific determinations into account when casting judgment on responsibility, but most of us would agree that it would be absurd for them to waste time considering philosophical, scientific or religious theories of general determinism. The purpose of  both my original piece and this response  has been to show that, philosophically speaking as well, this real and practical freedom has nothing to fear from philosophical, scientific or religious pipedreams.

This last remark leads me to one more issue that many readers brought up, and which I can only touch on now in passing: religion. In a recent blog post Jerry Coyne, a professor of evolutionary biology at the University of Chicago, labels me an “accommodationist” who tries to “denigrate science” and vindicate “other ways of knowing.” Professor Coyne goes on to contrast my (alleged) position to “the scientific ‘model of the world,’” which, he adds, has “been extraordinarily successful at solving problems, while other ‘models’ haven’t done squat.” Passing over the fact that, far from denigrating them, I am a fervent and open admirer of the natural sciences (my first academic interests were physics and mathematics), I’m content to let Professor Coyne’s dismissal of every cultural, literary, philosophical or artistic achievement in history speak for itself.

What I find of interest here is the label accommodationism, because the intent behind the current deployment of the term by the new atheist bloc is to associate explicitly those so labeled with the tragic failure of the Chamberlain government to stand up to Hitler. Indeed, Richard Dawkins has called those espousing open dialogue between faith and science “the Neville Chamberlain school of evolution.” One can only be astonished by the audacity of the rhetorical game they are playing: somehow with a twist of the tongue those arguing for greater intellectual tolerance have been allied with the worst example of intolerance in history.

One of the aims of my recent work has indeed been to provide a philosophical defense of moderate religious belief. Certain ways of believing, I have argued, are extremely effective at undermining the implicit model of reality supporting the philosophical mistake I described above, a model of reality that religious fundamentalists also depend on. While fundamentalisms of all kinds are unified in their belief that the ultimate nature of reality is a code that can be read and understood, religious moderates, along with those secularists we would call agnostics, are profoundly suspicious of any claims that one can come to know reality as it is in itself. I believe that such believers and skeptics are neither less scientific nor less religious for their suspicion. They are, however, more tolerant of discord; more prone to dialog, to patient inquiry, to trial and error, and to acknowledging the potential insights of other ways of thinking and other disciplines than their own. They are less righteously assured of the certainty of their own positions and have, historically, been less inclined to be prodded to violence than those who are beholden to the code of codes. If being an accommodationist means promoting these values, then I welcome the label.

William Egginton is the Andrew W. Mellon Professor in the Humanities at the Johns Hopkins University. His next book, “In Defense of Religious Moderation,” will be published by Columbia University Press in 2011.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/08/04/freedom-and-reality-a-response/

The Phenomenology of Ugly

This all started the day Luigi gave me a haircut. I was starting to look like a mad professor: specifically like Doc in “Back to the Future.” So Luigi took his scissors out and tried to fix me up. Except — and this is the point that occurred to me as I inspected the hair in the bathroom mirror the next morning — he didn’t really take quite enough off. He had enhanced the style, true, but there was a big floppy fringe that was starting to annoy me. And it was hot out. So I opened up the clipper attachment on the razor and hacked away at it for a while. When I finally emerged there was a general consensus that I looked like a particularly disreputable scarecrow. In the end I went to another barbershop (I didn’t dare show Luigi my handiwork) and had it all sheared off. Now I look like a cross between Britney Spears and Michel Foucault.

In short, it was a typical bad hair day. Everyone has them. I am going to hold back on my follicular study of the whole of Western philosophy (Nietzsche’s will-to-power-eternal-recurrence mustache; the workers-of-the-world-unite Marxian beard), but I think it has to be said that a haircut can have significant philosophical consequences. Jean-Paul Sartre, the French existentialist thinker, had a particularly traumatic tonsorial experience when he was only seven. Up to that point he had had a glittering career as a crowd-pleaser. Everybody referred to young “Poulou” as “the angel.” His mother had carefully cultivated a luxuriant halo of golden locks. Then one fine day his grandfather takes it into his head that Poulou is starting to look like a girl, so he waits till his mother has gone out, then tells the boy they are going out for a special treat. Which turns out to be the barbershop. Poulou can hardly wait to show off his new look to his mother. But when she walks through the door, she takes one look at him before running up the stairs and flinging herself on the bed, sobbing hysterically. Her carefully constructed — one might say carefully combed — universe has just been torn down, like a Hollywood set being broken and reassembled for some quite different movie, rather harsher, darker, less romantic and devoid of semi-divine beings. For, as in an inverted fairy-tale, the young Sartre has morphed from an angel into a “toad”. It is now, for the first time, that Sartre realizes that he is — as his American lover, Sally Swing, will say of him — “ugly as sin.”

Jean-Paul Sartre and two friends in France, no doubt discussing philosophy.

“The fact of my ugliness” becomes a barely suppressed leitmotif of his writing. He wears it like a badge of honor (Camus, watching Sartre in laborious seduction mode in a Paris bar: “Why are you going to so much trouble?” Sartre: “Have you had a proper look at this mug?”). The novelist Michel Houellebecq says somewhere that, when he met Sartre, he thought he was “practically disabled.” It is fair comment. He certainly has strabismus (with his distinctive lazy eye, so he appears to be looking in two directions at once), various parts of his body are dysfunctional and he considers his ugliness to count as a kind of disability. I can’t help wondering if ugliness is not indispensable to philosophy. Sartre seems to be suggesting that thinking — serious, sustained questioning — arises out of, or perhaps with, a consciousness of one’s own ugliness.

I don’t want to make any harsh personal remarks here but it is clear that a philosophers’ Mr. or Ms. Universe contest would be roughly on a par with the philosophers’ football match imagined by Monty Python. That is to say, it would have an ironic relationship to beauty. Philosophy as a satire on beauty.

It is no coincidence that one of our founding philosophers, Socrates, makes a big deal out of his own ugliness. It is the comic side of the great man. Socrates is (a) a thinker who asks profound and awkward questions (b) ugly. In Renaissance neo-Platonism (take, for example, Erasmus and his account of  “foolosophers” in “The Praise of Folly”) Socrates, still spectacularly ugly, acquires an explicitly Christian logic: philosophy is there — like Sartre’s angelic curls — to save us from our ugliness (perhaps more moral than physical).

But I can’t help thinking that ugliness infiltrated the original propositions of philosophy in precisely this redemptive way. The implication is there in works like Plato’s  “Phaedo.” If we need to die in order to attain the true, the good, and the beautiful (to kalon: neither masculine nor feminine but neutral, like Madame Sartre’s ephemeral angel, gender indeterminate), it must be because truth, goodness, and beauty elude us so comprehensively in life. You think you’re beautiful? Socrates seems to say. Well, think again! The idea of beauty, in this world, is like a mistake. An error of thought. Which should be re-thought.

Perhaps Socrates’s mission is to make the world safe for ugly people. Isn’t everyone a little ugly, one way or the other, at one time or another? Who is truly beautiful, all the time? Only the archetypes can be truly beautiful.

Fast-forwarding to Sartre and my bathroom-mirror crisis, I feel this gives us a relatively fresh way of thinking about neo-existentialism. Sartre (like Aristotle, like Socrates himself at certain odd moments) is trying to get away from the archetypes. From, in particular, a transcendent concept of beauty that continues to haunt — and sometimes cripple — us.

“It doesn’t matter if you are an ugly bastard. As an existentialist you can still score.” Sartre, so far as I know, never actually said it flat out (although he definitely described himself as a “salaud”). And yet it is nevertheless there in almost everything he ever wrote. In trying to be beautiful, we are trying to be like God (the “for-itself-in-itself” as Sartre rebarbatively put it). In other words, to become like a perfect thing, an icon of perfection, and this we can never fully attain. But it is good business for manufacturers of beauty creams, cosmetic surgeons and — yes! — even barbers.

Switching gender for a moment — going in the direction Madame Sartre would have preferred — I suspect that the day Britney Spears shaved her own hair off  represented a kind of Sartrean or Socratic argument (rather than, say, a nervous breakdown). She was, in effect, by the use of appearance, shrewdly de-mythifying beauty. The hair lies on the floor, “inexplicably faded” (Sartre), and the conventional notion of femininity likewise. I see Marilyn Monroe and Brigitte Bardot in a similar light: one by dying, the other by remaining alive, were trying to deviate from and deflate their iconic status. The beautiful, to kalon, is not some far-flung transcendent abstraction, in the neo-existentialist view. Beauty is a thing (social facts are things, Durkheim said). Whereas I am no-thing. Which explains why I can never be truly beautiful. Even if it doesn’t stop me wanting to be either. Perhaps this explains why Camus, Sartre’s more dashing sparring partner, jotted down in his notebooks, “Beauty is unbearable and drives us to despair.”

I always laugh when somebody says, “don’t be so judgmental.” Being judgmental is just what we do. Not being judgmental really would be like death. Normative behavior is normal. That original self-conscious, slightly despairing glance in the mirror (together with, “Is this it?” or “Is that all there is?”) is a great enabler because it compels us to seek improvement. The transcendent is right here right now. What we transcend is our selves. And we can (I am quoting Sartre here) transascend or transdescend. The inevitable dissatisfaction with one’s own appearance is the engine not only of philosophy but of civil society at large. Always providing you don’t end up pulling your hair out by the roots.

Andy Martin is currently completing “What It Feels Like To Be Alive: Sartre and Camus Remix” for Simon and Schuster. He was a 2009-10 fellow at the Cullman Center for Scholars and Writers in New York, and teaches at Cambridge University.

__________

Full article and photo:

Reclaiming the Imagination

Imagine being a slave in ancient Rome. Now remember being one. The second task, unlike the first, is crazy. If, as I’m guessing, you never were a slave in ancient Rome, it follows that you can’t remember being one — but you can still let your imagination rip. With a bit of effort one can even imagine the impossible, such as discovering that Dick Cheney and Madonna are really the same person. It sounds like a platitude that fiction is the realm of imagination, fact the realm of knowledge.

Why did humans evolve the capacity to imagine alternatives to reality? Was story-telling in prehistoric times like the peacock’s tail, of no direct practical use but a good way of attracting a mate? It kept Scheherazade alive through those one thousand and one nights — in the story. 

On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.

A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.

Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.

To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place. 

Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.

Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.

In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.

Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.

The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.

Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.

Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Vagueness” (1994), “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007).

___________

Full article: http://opinionator.blogs.nytimes.com/2010/08/15/reclaiming-the-imagination/

When the Mind Wanders, Happiness Also Strays

A quick experiment. Before proceeding to the next paragraph, let your mind wander wherever it wants to go. Close your eyes for a few seconds, starting … now.

And now, welcome back for the hypothesis of our experiment: Wherever your mind went — the South Seas, your job, your lunch, your unpaid bills — that daydreaming is not likely to make you as happy as focusing intensely on the rest of this column will.

I’m not sure I believe this prediction, but I can assure you it is based on an enormous amount of daydreaming cataloged in the current issue of Science. Using an iPhone app called trackyourhappiness, psychologists at Harvard contacted people around the world at random intervals to ask how they were feeling, what they were doing and what they were thinking.

The least surprising finding, based on a quarter-million responses from more than 2,200 people, was that the happiest people in the world were the ones in the midst of enjoying sex. Or at least they were enjoying it until the iPhone interrupted.

The researchers are not sure how many of them stopped to pick up the phone and how many waited until afterward to respond. Nor, unfortunately, is there any way to gauge what thoughts — happy, unhappy, murderous — went through their partners’ minds when they tried to resume.

When asked to rate their feelings on a scale of 0 to 100, with 100 being “very good,” the people having sex gave an average rating of 90. That was a good 15 points higher than the next-best activity, exercising, which was followed closely by conversation, listening to music, taking a walk, eating, praying and meditating, cooking, shopping, taking care of one’s children and reading. Near the bottom of the list were personal grooming, commuting and working.

When asked their thoughts, the people in flagrante were models of concentration: only 10 percent of the time did their thoughts stray from their endeavors. But when people were doing anything else, their minds wandered at least 30 percent of the time, and as much as 65 percent of the time (recorded during moments of personal grooming, clearly a less than scintillating enterprise).

On average throughout all the quarter-million responses, minds were wandering 47 percent of the time. That figure surprised the researchers, Matthew Killingsworth and Daniel Gilbert.

“I find it kind of weird now to look down a crowded street and realize that half the people aren’t really there,” Dr. Gilbert says.

You might suppose that if people’s minds wander while they’re having fun, then those stray thoughts are liable to be about something pleasant — and that was indeed the case with those happy campers having sex. But for the other 99.5 percent of the people, there was no correlation between the joy of the activity and the pleasantness of their thoughts.

“Even if you’re doing something that’s really enjoyable,” Mr. Killingsworth says, “that doesn’t seem to protect against negative thoughts. The rate of mind-wandering is lower for more enjoyable activities, but when people wander they are just as likely to wander toward negative thoughts.”

Whatever people were doing, whether it was having sex or reading or shopping, they tended to be happier if they focused on the activity instead of thinking about something else. In fact, whether and where their minds wandered was a better predictor of happiness than what they were doing.

“If you ask people to imagine winning the lottery,” Dr. Gilbert says, “they typically talk about the things they would do — ‘I’d go to Italy, I’d buy a boat, I’d lay on the beach’ — and they rarely mention the things they would think. But our data suggest that the location of the body is much less important than the location of the mind, and that the former has surprisingly little influence on the latter. The heart goes where the head takes it, and neither cares much about the whereabouts of the feet.”

Still, even if people are less happy when their minds wander, which causes which? Could the mind-wandering be a consequence rather than a cause of unhappiness?

To investigate cause and effect, the Harvard psychologists compared each person’s moods and thoughts as the day went on. They found that if someone’s mind wandered at, say, 10 in the morning, then at 10:15 that person was likely to be less happy than at 10, perhaps because of those stray thoughts. But if people were in a bad mood at 10, they weren’t more likely to be worrying or daydreaming at 10:15.
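
For readers curious about the shape of that comparison, here is a rough sketch of the kind of time-lagged check being described. It is an illustration only: the actual analysis in Science used more careful statistical modeling of the experience-sampling data, and the file name and column names below (“responses.csv”, “person”, “time”, “happiness”, “mind_wandering”) are invented for the example.

    import pandas as pd

    # Illustrative sketch, not the study's actual code. Assume one row per
    # prompt, with "mind_wandering" coded 0/1 and "happiness" on the 0-100
    # scale described above.
    df = pd.read_csv("responses.csv").sort_values(["person", "time"])

    # Pair each report with the same person's next report a short while later.
    df["later_happiness"] = df.groupby("person")["happiness"].shift(-1)
    df["later_wandering"] = df.groupby("person")["mind_wandering"].shift(-1)

    # Wandering now against happiness later: the link the researchers found.
    print(df["mind_wandering"].corr(df["later_happiness"]))

    # Happiness now against wandering later: the link they did not find.
    print(df["happiness"].corr(df["later_wandering"]))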

“We see evidence for mind-wandering causing unhappiness, but no evidence for unhappiness causing mind-wandering,” Mr. Killingsworth says.

This result may disappoint daydreamers, but it’s in keeping with the religious and philosophical admonitions to “Be Here Now,” as the yogi Ram Dass titled his 1971 book. The phrase later became the title of a George Harrison song warning that “a mind that likes to wander ’round the corner is an unwise mind.”

What psychologists call “flow” — immersing your mind fully in activity — has long been advocated by nonpsychologists. “Life is not long,” Samuel Johnson said, “and too much of it must not pass in idle deliberation how it shall be spent.” Henry Ford was more blunt: “Idleness warps the mind.” The iPhone results jibe nicely with one of the favorite sayings of William F. Buckley Jr.: “Industry is the enemy of melancholy.”

Alternatively, you could interpret the iPhone data as support for the philosophical dictum of Bobby McFerrin: “Don’t worry, be happy.” The unhappiness produced by mind-wandering was largely a result of the episodes involving “unpleasant” topics. Such stray thoughts made people more miserable than commuting or working or any other activity.

But the people having stray thoughts on “neutral” topics ranked only a little below the overall average in happiness. And the ones daydreaming about “pleasant” topics were actually a bit above the average, although not quite as happy as the people whose minds were not wandering.

There are times, of course, when unpleasant thoughts are the most useful thoughts. “Happiness in the moment is not the only reason to do something,” says Jonathan Schooler, a psychologist at the University of California, Santa Barbara. His research has shown that mind-wandering can lead people to creative solutions to problems, which could make them happier in the long term.

Over the several months of the iPhone study, though, the more frequent mind-wanderers remained less happy than the rest, and the moral — at least for the short term — seems to be: you stray, you pay. So if you’ve been able to stay focused to the end of this column, perhaps you’re happier than when you daydreamed at the beginning. If not, you can go back to daydreaming starting … now.

Or you could try focusing on something else that is now, at long last, scientifically guaranteed to improve your mood. Just make sure you turn the phone off.

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/11/16/science/16tier.html

The Third Replicator

All around us information seems to be multiplying at an ever increasing pace. New books are published, new designs for toasters and i-gadgets appear, new music is composed or synthesized and, perhaps above all, new content is uploaded into cyberspace. This is rather strange. We know that matter and energy cannot increase but apparently information can.

It is perhaps rather obvious to attribute this to the evolutionary algorithm or Darwinian process, as I will do, but I wish to emphasize one part of this process — copying. The reason information can increase like this is that, if the necessary raw materials are available, copying creates more information. Of course it is not new information, but if the copies vary (which they will if only by virtue of copying errors), and if not all variants survive to be copied again (which is inevitable given limited resources), then we have the complete three-step process of natural selection  (Dennett, 1995). From here novel designs and truly new information emerge. None of this can happen without copying.
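
The three-step loop can be made concrete with a toy program. The sketch below is my illustration rather than anything in the essay, loosely modeled on Dawkins’s well-known “weasel” exercise: a string is copied with occasional errors, and limited “resources” decide which variants get copied again. Every name and number in it is an arbitrary choice for the example.

    import random

    # Toy illustration of copying with variation and selection; all
    # parameters are arbitrary choices for the example.
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    TARGET = "methinks it is like a weasel"  # stands in for whatever the environment favors

    def copy_with_errors(s, error_rate=0.02):
        # Copying: reproduce the string, occasionally miscopying a character.
        return "".join(random.choice(ALPHABET) if random.random() < error_rate else c
                       for c in s)

    def fitness(s):
        # Selection criterion: how many characters match the favored string.
        return sum(a == b for a, b in zip(s, TARGET))

    population = ["".join(random.choice(ALPHABET) for _ in TARGET)]
    for generation in range(500):
        # Variation: every survivor is copied many times, imperfectly.
        offspring = [copy_with_errors(p) for p in population for _ in range(20)]
        # Selection: limited resources mean only a few variants are copied again.
        population = sorted(offspring, key=fitness, reverse=True)[:5]

    print(population[0])

Nothing in the loop knows where it is going; the appearance of design falls out of the three steps alone, which is the point being made here.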

I want to make three arguments here.

The first is that humans are unique because they are so good at imitation. When our ancestors began to imitate they let loose a new evolutionary process based not on genes but on a second replicator, memes. Genes and memes then coevolved, transforming us into better and better meme machines.

The second is that one kind of copying can piggy-back on another: that is, one replicator (the information that is copied) can build on the products (vehicles or interactors) of another. This multilayered evolution has produced the amazing complexity of design we see all around us.

The third is that now, in the early 21st century, we are seeing the emergence of a third replicator. I call these temes (short for technological memes, though I have considered other names). They are digital information stored, copied, varied and selected by machines. We humans like to think we are the designers, creators and controllers of this newly emerging world but really we are stepping stones from one replicator to the next.

As I try to explain this I shall make some assertions and assumptions that some readers may find outrageous, but I am deliberately putting my case in its strongest form so that we can debate the issues people find most interesting or most troublesome.

Some may entirely reject the notion of replicators, and will therefore dismiss the whole enterprise. Others will accept that genes are replicators but reject the idea of memes. For example, Eva Jablonka and Marion J. Lamb ( 2005) refer to “the dreaded memes” while Peter J. Richerson and Robert Boyd (2005), who have contributed so much to the study of cultural evolution, assert that “cultural variants are not replicators.” They use the phrase “selfish memes” but still firmly reject memetics (Blackmore 2006). Similarly, in a previous “On The Human” post, William Benzon explains why he does not like the term “meme,” yet he needs some term to refer to the things that evolve and so he still uses it. As John S. Wilkins points out in response, there are several more classic objections: memes are not discrete (I would say some are not discrete), they do not form lineages (some do), memetic evolution appears to be Lamarckian (but only appears so), memes are not replicated but re-created or reproduced, or are not copied with sufficient fidelity (see discussions in Aunger 2000, Sterelny 2006, Wimsatt 2010). I have tackled all these, and more, elsewhere and concluded that the notion is still valid (Blackmore 1999, 2010a).

So I will press on, using the concept of memes as originally defined by Dawkins who invented the term; that is, memes are “that which is imitated” or whatever it is that is copied when people imitate each other. Memes include songs, stories, habits, skills, technologies, scientific theories, bogus medical treatments, financial systems, organizations — everything that makes up human culture. I can now, briefly, tell the story of how I think we arrived where we are today.

First there were genes. Perhaps we should not call genes the first replicator because there may have been precursors worthy of that name and possibly RNA-like replicators before the evolution of DNA (Maynard Smith and Szathmary 1995). However, Dawkins (1976), who coined the term “replicator,” refers to genes this way and I shall do the same.

We should note here an important distinction for living things based on DNA, that the genes are the replicators while the animals and plants themselves are vehicles, interactors, or phenotypes: ephemeral creatures constructed with the aid of genetic information coded in tiny strands of DNA packaged safely inside them. Whether single-celled bacteria, great oak trees, or dogs and cats, in the gene-centered view of evolution they are all gene machines or Dawkins’s “lumbering robots.” The important point here is that the genetic information is faithfully copied down the generations, while the vehicles or interactors live and die without actually being copied. Put another way, this system copies the instructions for making a product rather than the product itself, a process that has many advantages (Blackmore 1999, 2001). This interesting distinction becomes important when we move on to higher replicators.

So what happened next? Earth might have remained a one-replicator planet but it did not. One of these gene machines, a social and bipedal ape, began to imitate. We do not know why, although shifting climate may have favored stealing skills from others rather than learning them anew (Richerson and Boyd 2005). Whatever the reason, our ancestors began to copy sounds, skills and habits from one to another. They passed on lighting fires, making stone tools, wearing clothes, decorating their bodies and all sorts of skills to do with living together as hunters and gatherers. The critical point here is, of course, that they copied these sounds, skills and habits, and this, I suggest, is what makes humans unique. No other species (as far as we know) can do this. Song birds can copy some sounds, some of the other great apes can imitate some actions, and most notably whales and dolphins can imitate, but none is capable of the widespread, generalized imitation that comes so easily to us. Imitation is not just some new minor ability. It changes everything. It enables a new kind of evolution.

This is why I have called humans “Earth’s Pandoran species.” They let loose this second replicator and began the process of memetic evolution in which memes competed to be selected by humans to be copied again. The successful memes then influenced human genes by gene-meme co-evolution (Blackmore 1999, 2001). Note that I see this process as somewhat different from gene-culture co-evolution, partly because most theorists treat culture as an adaptation (e.g. Richerson and Boyd 2005), and agree with Wilson that genes “keep culture on a leash.” (Lumsden and Wilson 1981 p 13).

Benzon, in responding to Peter Railton’s post here at The Stone, points out the limits of this metaphor and proposes the “chess board and game” instead. I prefer a simple host-parasite analogy. Once our ancestors could imitate they created lots of memes that competed to use their brains for their own propagation. This drove these hominids to become better meme machines and to carry the (potentially huge and even dangerous) burden of larger brain size and energy use, eventually becoming symbiotic. Neither memes nor genes are a dog or a dog-owner. Neither is on a leash. They are both vast competing sets of information, all selfishly getting copied whenever and however they can.

To help understand the next step we can think of this process as follows: one replicator (genes) built vehicles (plants and animals) for its own propagation. One of these then discovered a new way of copying and diverted much of its resources to doing this instead, creating a new replicator (memes) which then led to new replicating machinery (big-brained humans). Now we can ask whether the same thing could happen again and — aha — we can see that it can, and is.

A sticking point concerns the equivalent of the meme-phenotype or vehicle. This has plagued memetics ever since its beginning: some arguing that memes must be inside human heads while words, technologies and all the rest are their phenotypes, or “phemotypes”; others arguing the opposite. I disagree with both (Blackmore 1999, 2001). By definition, whatever is copied is the meme and I suggest that, until very recently, there was no meme-phemotype distinction because memes were so new and so poorly replicated that they had not yet constructed stable vehicles. Now they have.

Think about songs, recipes, ways of building houses or clothes fashions. These can be copied and stored by voice, by gesture, in brains, or on paper with no clear replicator/vehicle distinction. But now consider a car factory or a printing press. Thousands of near-identical copies of cars, books, or newspapers are churned out. Those actual cars or books are not copied again but they compete for our attention and if they prove popular then more copies are made from the same template. This is much more like a replicator-vehicle system. It is “copy the instructions” not “copy the product.”

Of course cars and books are passive lumps of metal, paper and ink. They cannot copy, let alone vary and select information themselves. So could any of our modern meme products take the step our hominid ancestors did long ago and begin a new kind of copying? Yes. They could and they are. Our computers, all linked up through the Internet, are beginning to carry out all three of the critical processes required for a new evolutionary process to take off.

Computers handle vast quantities of information with extraordinarily high-fidelity copying and storage. Most variation and selection is still done by human beings, with their biologically evolved desires for stimulation, amusement, communication, sex and food. But this is changing. Already there are examples of computer programs recombining old texts to create new essays or poems, translating texts to create new versions, and selecting between vast quantities of text, images and data. Above all there are search engines. Each request to Google, Alta Vista or Yahoo! elicits a new set of pages — a new combination of items selected by that search engine according to its own clever algorithms and depending on myriad previous searches and link structures.
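
To make “recombining old texts to create new essays or poems” a little less abstract, here is a tiny sketch of the kind of program meant. It is my own illustration, not one of the examples Blackmore has in mind: it records which words follow which in a source text and then walks those links at random to stitch together new sequences.

    import random
    from collections import defaultdict

    # Toy text recombiner: copy word-to-word links from old text, then
    # vary and select among them at random to produce new text.
    def build_chain(text):
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def recombine(chain, start, length=20):
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    source = ("songs stories habits skills and technologies are copied "
              "varied and selected and new songs and new stories appear")
    print(recombine(build_chain(source), "songs"))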

This is a radically new kind of copying, varying and selecting, and means that a new evolutionary process is starting up. This copying is quite different from the way cells copy strands of DNA or humans copy memes. The information itself is also different, consisting of highly stable digital information stored and processed by machines rather than living cells. This, I submit, signals the emergence of temes and teme machines, the third replicator.

What should we expect of this dramatic step? It might make as much difference as the advent of human imitation did. Just as human meme machines spread over the planet, using up its resources and altering its ecosystems to suit their own needs, so the new teme machines will do the same, only faster. Indeed we might see our current ecological troubles not as primarily our fault, but as the inevitable consequence of earth’s transition to being a three-replicator planet. We willingly provide ever more energy to power the Internet, and there is enormous scope for teme machines to grow, evolve and create ever more extraordinary digital worlds, some aided by humans and others independent of them. We are still needed, not least to run the power stations, but as the temes proliferate, using ever more energy and resources, our own role becomes ever less significant, even though we set the whole new evolutionary process in motion in the first place.

Whether you consider this a tragedy for the planet or a marvelous, beautiful story of creation, is up to you.

Susan Blackmore is a psychologist and writer researching consciousness, memes, and anomalous experiences, and a Visiting Professor at the University of Plymouth. She is the author of several books, including “The Meme Machine” (1999), “Conversations on Consciousness” (2005) and “Ten Zen Questions” (2009).

References

Aunger, R.A. (Ed.) (2000) “Darwinizing Culture: The Status of Memetics as a Science.” Oxford University Press.

Benzon, W.L. (2010) “Cultural Evolution: A Vehicle for Cooperative Interaction Between the Sciences and the Humanities.” Post for On the Human.

Blackmore, S. (1999) “The Meme Machine.” Oxford and New York, Oxford University Press.

Blackmore, S. (2001) “Evolution and memes: The human brain as a selective imitation device.” Cybernetics and Systems, 32, 225-255.

Blackmore, S. (2006) “Memetics by another name?” Review of “Not by Genes Alone” by P.J. Richerson and R. Boyd. Bioscience, 56, 74-5.

Blackmore, S. (2010a) “Memetics does provide a useful way of understanding cultural evolution.” In “Contemporary Debates in Philosophy of Biology,” Ed. Francisco Ayala and Robert Arp, Chichester, Wiley-Blackwell, 255-72.

Blackmore, S. (2010b) “Dangerous Memes; or what the Pandorans let loose.” In “Cosmos and Culture: Cultural Evolution in a Cosmic Context,” Ed. Steven Dick and Mark Lupisella, NASA, 297-318.

Dawkins, R. (1976) “The Selfish Gene.” Oxford, Oxford University Press (new edition with additional material, 1989).

Dennett, D. (1995) “Darwin’s Dangerous Idea.” London, Penguin.

Jablonka, E. and Lamb, M.J. (2005) “Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral and Symbolic Variation in the History of Life.” Bradford Books.

Lumsden, C.J. and Wilson, E.O. (1981) “Genes, Mind and Culture.” Cambridge, Mass., Harvard University Press.

Maynard Smith, J. and Szathmáry, E. (1995) “The Major Transitions in Evolution.” Oxford, Freeman.

Richerson, P.J. and Boyd, R. (2005) “Not by Genes Alone: How Culture Transformed Human Evolution.” Chicago, University of Chicago Press.

Sterelny, K. (2006) “Memes Revisited.” British Journal for the Philosophy of Science, 57 (1).

Wimsatt, W. (2010) “Memetics does not provide a useful way of understanding cultural evolution: A developmental perspective.” In “Contemporary Debates in Philosophy of Biology,” Ed. Francisco Ayala and Robert Arp, Chichester, Wiley-Blackwell, 255-72.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/08/22/the-third-replicator/

Plato’s Pop Culture Problem, and Ours

This fall, the U.S. Supreme Court will rule on a case that may have the unusual result of establishing a philosophical link between Arnold Schwarzenegger and Plato.

The case in question is the 2008 decision of the Ninth Circuit Court of Appeals striking down a California law, signed by Gov. Schwarzenegger in 2005, that imposed fines on stores that sell video games featuring “sexual and heinous violence” to minors.  The issue is an old one: one side argues that video games shouldn’t receive First Amendment protection since exposure to violence in the media is likely to cause increased aggression or violence in real life.  The other side counters that the evidence shows nothing more than a correlation between the games and actual violence. In their book “Grand Theft Childhood,” the authors Lawrence Kutner and Cheryl K. Olson of Harvard Medical School argue that this causal claim is only the result of “bad or irrelevant research, muddleheaded thinking and unfounded, simplistic news reports.”

The issue, which at first glance seems so contemporary, actually predates the pixel by more than two millennia.  In fact, an earlier version of the dispute may be found in “The Republic,” in which Plato shockingly excludes Homer and the great tragic dramatists from the ideal society he describes in that work.

Could Plato, who wrote in the 4th century B.C., possibly have anything to say about today’s electronic media?  As it turns out, yes. It is characteristic of philosophy that even its most abstruse and apparently irrelevant ideas, suitably interpreted, can sometimes acquire an unexpected immediacy.  And while philosophy doesn’t always provide clear answers to our questions, it often reveals what exactly it is that we are asking.

Children in ancient Athens learned both grammar and citizenship from Homer and the tragic poets. Plato follows suit but submits their works to the sort of ruthless censorship that would surely raise the hackles of modern supporters of free speech.  But would we have reason to complain?  We, too, censor our children’s educational materials as surely, and on the same grounds, as Plato did.  Like him, many of us believe that emulation becomes “habit and second nature,” that bad heroes (we call them “role models” today) produce bad people.  We even fill our children’s books with our own clean versions of the same Greek stories that upset him, along with our bowdlerized versions of Shakespeare and the Bible.

What is really disturbing is that Plato’s adult citizens are exposed to poetry even less than their children.   Plato knows how captivating and so how influential poetry can be but, unlike us today, he considers its influence catastrophic.  To begin with, he accuses it of conflating the authentic and the fake.  Its heroes appear genuinely admirable, and so worth emulating, although they are at best flawed and at worst vicious.  In addition, characters of that sort are necessary because drama requires conflict — good characters are hardly as engaging as bad ones.  Poetry’s subjects are therefore inevitably vulgar and repulsive — sex and violence.  Finally, worst of all, by allowing us to enjoy depravity in our imagination, poetry condemns us to a depraved life. 

This very same reasoning is at the heart of today’s denunciations of mass media.  Scratch the surface of any attack on the popular arts — the early Christians against the Roman circus, the Puritans against Shakespeare, Coleridge against the novel, the various assaults on photography, film, jazz, television, pop music, the Internet, or video games — and you will find Plato’s criticisms of poetry.  For the fact is that the works of both Homer and Aeschylus, whatever else they were in classical Athens, were, first and foremost, popular entertainment.

Tens of thousands of people of all classes attended the free dramatic festivals and Homeric recitations of ancient Athens.  Noisy and rambunctious, they cheered the actors they liked and chased those they didn’t off the stage, often pelting them with the food it was customary to bring into the theater.  Drama, moreover, seemed to them inherently realistic: it is said that women miscarried when the avenging Furies rushed onstage in Aeschylus’s “Eumenides.”

To be realistic is to seem to present the world without artifice or convention, without mediation — reality pure and simple.  And popular entertainment, as long as it remains popular, always seems realistic: television cops always wear seat belts.   Only with the passage of time does artifice become visible — George Raft’s 1930’s gangsters appear dated to audiences that grew up with Robert De Niro.  But by then, what used to be entertainment is on its way to becoming art.

In 1935, Rudolf Arnheim called television “a mere instrument of transmission, which does not offer any new means for the artistic representation of reality.”  He was repeating, unawares, Plato’s ancient charge that, without a “craft” or an art of his own, Homer merely reproduces “imitations,” “images,” or “appearances” of virtue and, worse, images of vice masquerading as virtue.  Both Plato and Arnheim ignored the medium of representation, which interposes itself between the viewer and what is represented.  And so, in Achilles’ lament for Patroclus’ death, Plato sees not a fictional character acting according to epic convention but a real man behaving shamefully.  And since Homer presents Achilles as a hero whose actions are commendable,  he seduces his audience into enjoying a distorted and dismal representation that both reflects and contributes to a distorted and dismal life.

We will never know how the ancient Athenians reacted to poetry.  But what about us?  Do we, as Plato thought, move immediately from representation to reality?  If we do, we should be really worried about the effects of television or video games.  Or are we aware that many features of each medium belong to its conventions and do not represent real life?

To answer these questions, we can no longer investigate only the length of our exposure to the mass media; we must focus on its quality: are we passive consumers or active participants? Do we realize that our reaction to representations need not determine our behavior in life?  If so, the influence of the mass media will turn out to be considerably less harmful than many suppose.  If not, instead of limiting access to or reforming the content of the mass media, we should ensure that we, and especially our children, learn to interact intelligently and sensibly with them.  Here, again, philosophy, which questions the relation between representation and life, will have something to say.

Even if that is true, however, to compare the “Iliad” or “Oedipus Rex” to “Grand Theft Auto,” “CSI: NY,” or even “The Wire” may seem silly, if not absurd.  Plato, someone could argue, missed something serious about great art, but there is nothing to miss in today’s mass media.  Yet the fact is that Homer’s epics and, in particular, the 31 tragedies that have survived intact (a tiny proportion of the tens of thousands of works produced by thousands of ancient dramatists) did so because they were copied much more often than others — and that, as anyone familiar with best-selling books knows, may have little to do with perceived literary quality. For better or worse, the popular entertainment of one era often becomes the fine art of another.  And to the extent that we still admire Odysseus, Oedipus, or Medea, Plato, for one, would have found our world completely degenerate — as degenerate, in fact, as we would find a world that, perhaps two thousand years from now, had replaced them with Tony Soprano, Nurse Jackie, or the Terminator.

And so, as often in philosophy, we end with a dilemma: If Plato was wrong about epic and tragedy, might we be wrong about television and video games?  If, on the other hand, we are right, might Plato have been right about Homer and Euripides?

Alexander Nehamas is professor of philosophy and comparative literature and Edmund N. Carpenter, II, Class of 1943 Professor in the Humanities at Princeton University. He is the author of several works on Plato, Nietzsche, literary theory and aesthetics, including, most recently, “Only A Promise of Happiness: The Place of Beauty in a World of Art.”
__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/08/29/platos-pop-culture-problem-and-ours/

Copy That: A Response

In my essay for The Stone, “The Third Replicator,” I argued that new replicators can piggy-back on older ones. This happens when the product of one replicator becomes copying machinery for the next replicator, and so on. Memes appeared, I wrote, when humans (a product of genes) became capable of imitating, varying and selecting a new kind of information in the form of words, actions, technologies and ideas. The same thing is happening now, I argued, with a new replicator. Computers (a product of memes) are just beginning to be capable not only of copying but of copying, varying and selecting digital information. This means the birth of a new replicator — temes, or technological memes.

In the comments that followed, I was called a “trippy visionary” (Guillermo C. Jimenez, 68), and a “pink hair meme” spreader (Jim Gerofsky, 133, though I don’t think it did spread much) and sent to the Chinese Room, but I am grateful for the many responses from readers, which compel me to defend some of my arguments, clarify others and think hard about some of the questions raised.

Inevitably, some common misunderstandings of memetics surfaced, concerning the use of analogies, the existence of memes, the nature of selfishness and the role of imitation.

Marcelo (69) ably defends the use of analogies in scientific thinking. Yet the value of an analogy depends on how it is used, and many people seem to misunderstand this when it comes to both memes and temes. Let’s go back to 1976 and the origin of the term “meme” in Richard Dawkins’s “The Selfish Gene.” He says this: “I think we have got to start again and go right back to first principles. … The gene will enter my thesis as an analogy, nothing more” (Dawkins, 1976, p. 191). Those first principles are what he calls “Universal Darwinism” — that when anything is copied with variation and selection then evolution must occur. He looks at human culture, argues that songs, words, ideas, technologies and habits are all copied from person to person with added variations and heavy selection and so concludes that there must be a new evolutionary process going on. He calls the replicator involved in that new process the meme.

The critical point here is that he starts from first principles; from what Dennett (1995) calls “Darwin’s Dangerous Idea.” I guess this is what 9 and 15 mean by saying I’ve “got it,” i.e. “got” what Dawkins was saying, and what is missed by so many objectors. So thanks to 9 and 15! Dawkins does not do what many seem to accuse him of, which seems to go something like this:

1. Genes are the replicator underlying biological evolution.

2. Cultural evolution looks a bit like biological evolution so by analogy let’s invent a new replicator to underlie culture and call it a meme.

They then add: 3. But memes and genes are so dissimilar that the theory of memes must be rubbish.

I have put this rather crudely but this seems to be what many people think. For example, Frank (49) says “the meme-as-a-cultural-sort-of-gene analogy can only be taken so far.” I would agree there, but he has missed the point. Analogies between genes and memes are secondary. If memes really are replicators (and this depends both on how you define a replicator and whether cultural information really is copied with variation and selection) then we should expect some interesting analogies between genes and memes. But we should not necessarily expect them to be close. Indeed they often are not. And this is not surprising since one is based on digital information encoded in DNA and the other is a wide variety of behaviors, skills, technologies and so on copied in various ways and often with low fidelity by humans.

A future science of higher replicators, if ever such a science comes about, should be able to use analogies between replicators as fruitful ways of asking new questions or investigating how replicators behave. It should not expect all these analogies to be useful or close. Some will be and some will not.

Some commentators have understood and built on this. For example, Marshall (116) points out that “Genetic replicators have been “working on” the problem of accurate reproduction for rather a long time, and have evolved many mechanisms discretizing units of information, fixing or eliminating miscopies, defeating genes that game the process of random recombination, and so on.” He disagrees with me by concluding that what I call temes are really more memes (so does Mark Wei, 130, who thinks we are “still in a two replicator system, with the second replicator still in its infancy”), but he goes on to note, as I have also done, that memetic transmission is still in its early stages and so it is not surprising that the replicators are ill-defined and the process sloppy. 

Similarly R. Garrett (147) argues that memes have only had about a thousand generations of humans using sufficiently complex language to spread them. So it’s unrealistic to expect memes to be as developed as genes: they could more reasonably be compared with RNA enzymes in the RNA-world. He suggests that writing, the printing press and digital information storage might all be steps towards a digital equivalent of DNA. This, I think, is the way we should be using analogies between different replicators — looking at how general processes operate in ones we understand and then seeing whether or not we can discern similar processes happening in ones we do not understand.

Some people claim that memes do not exist, or are not proven to exist. This reveals another misunderstanding (Aunger, 2000). Memes are words, stories, songs, habits, skills and so on. Surely they exist.  Dennett asks “Do words exist?” Of course they do. The interesting question is not whether memes exist but whether thinking of words, stories, skills, habits and technologies as a new replicator is of any value. I say yes, many others say no. This, unlike the existence question, is an argument worth having.

Some commentators are bothered by the notion of selfish memes or more generally of selfish replicators, and get themselves into trouble wondering about intentionality, anthropomorphism and teleology (100, 109, 119, 122, 129). When I say that memes are selfish I mean that they will get copied whenever and however they can without regard for the consequences. This is not because they have human-like self-preserving desires or emotions, but because they cannot care (they are only words, skills, habits etc.). This is precisely the same argument as with genes — they have effects on living things but they don’t care because they cannot care. This becomes especially interesting when applied to temes. If I am right and we are on the verge of attaining a third replicator, this too will be selfish because it cannot care. Squillions of digits being copied, mixed up, selected and copied again cannot care about the consequences to us, our genes or to the planet. This is the sense in which temes are or will be selfish. Their inevitable evolution will drive the creation of ever more teme machines with ever more information passing around, with no regard for us or our planet.

This brings me onto the question of human emotions and consciousness. Many commentators berated me for ignoring human emotion or for playing down the importance of sentience, consciousness (36, 40, 58, 94, 129, 169) and free will (170, 172). As many of you will know, I think both consciousness and free will are illusions, by which I mean that they are not what they seem to be.

For example, people often think of consciousness as some kind of power or force which is able to act on their brain, but I reject this covertly dualist notion. Others think of consciousness as some kind of added principle in brain function, i.e. there is vision, learning, memory etc. etc. and then consciousness as well. I reject this, too (Blackmore 2002, 2010). Consciousness, in these senses, does not exist. It is an illusion that comes about when clever brains build stories about self and other and try to understand their own actions. So in reply to some of these comments I would suggest that consciousness need play no role in creativity, selection or anything else we do. In a way this is one of the delights of memetics; one can think about everything we are and do, including the way that we construct illusions of a conscious self who is in charge of our bodies, without invoking any special stuff or property or power called “consciousness.” 

What about creativity? Several mention this (61, 70, 73, 137, 143, 148), some saying that I have sneaked it in illegitimately while others argue that you cannot get novelty, creativity, or invention from “mindless copying” (137). No you can’t. You get novelty, creativity, and invention from mindless copying with variation and selection. That’s the whole point. I would go so far as to say that this process (the evolutionary process, Darwin’s dangerous idea) is the source of all design in the universe (Blackmore, 2007). This is what creativity is. When we humans create new ideas, paintings, poems, stories, or technical achievements, it is because old ones have been copied, integrated with each other, mixed up, added to, and then the results have been ruthlessly selected, either within one brain or within the cruel worlds of bookshops, scientific peer review, cost-cutting, the fickleness of human desires and many other processes. We do not need either the concept of consciousness or the notion of some special creative capacity within humans to explain why we are such imaginative and creative creatures (Blackmore, 2007).

Cube (148) gives a wonderful example of this in by-pass surgery. The technology required was not invented by one person but by the efforts of many groups, lots of small steps, and lots of trial and error, all tested in the real world of patients and hospitals.

If you think that we humans have some special faculty of creativity or consciousness or sentience then you may think, as do 123 and 170, that we can somehow “escape the project.” Dawkins may have thought this too when he ended “The Selfish Gene” with the stirring words “We, alone on earth, can rebel against the tyranny of the selfish replicators” (1976, p. 201). I disagree (Blackmore, 1999). We are meme machines soon to become embedded in a three replicator system and without any consciousness, free will, or other spooky power that might enable us to leap outside the system.

I will end with a few comments that raised interesting questions. William Benzon (41) makes many helpful comments and I have already replied to these separately (84, 112). I enjoyed 55’s thoughts about cells and unification of people into one greater organism. I have been pondering on these processes, too. As for temes, some (76, 152) worry that since all information is effectively stored for ever somewhere in the Internet there can be no “survival of the fittest,” which would discount them as replicators. However, much of this information languishes never to be copied again. As 141 points out, “copying is what keeps a meme alive.”

Cube (148) says “I’m not quite sure why the Internet, although faster and cheaper, is qualitatively different from the printing press.” Some point out that most varying and selecting is still done by us humans, even though we let our machines do so much of the copying and storage. Most of the stuff out there is there because some human put it there or because other humans like it and keep copying it.

I agree but this is what I suggest is beginning to change: it’s not the Internet per se that is so different, but the advent of machines that can carry out all of the three processes required for evolution: copying, varying and selecting. Out there among all the computers interlinked around the world are, I suggest, the beginnings of such machines. This is what will bring about, or already has brought about, the birth of the third replicator.

Susan Blackmore is a psychologist and writer researching consciousness, memes, and anomalous experiences, and a visiting professor at the University of Plymouth. She is the author of several books, including “The Meme Machine” (1999), “Conversations on Consciousness” (2005) and “Ten Zen Questions” (2009).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/09/03/copy-that-a-response/

Experiments in Philosophy

Aristotle once wrote that philosophy begins in wonder, but one might equally well say that philosophy begins with inner conflict. The cases in which we are most drawn to philosophy are precisely the cases in which we feel as though there is something pulling us toward one side of a question but also something pulling us, perhaps equally powerfully, toward the other.

But how exactly can philosophy help us in cases like these? If we feel something within ourselves drawing us in one direction but also something drawing us the other way, what exactly can philosophy do to offer us illumination?

One traditional answer is that philosophy can help us out by offering us some insight into human nature. Suppose we feel a sense of puzzlement about whether God exists, or whether there are objective moral truths, or whether human beings have free will.

The traditional view was that philosophers could help us get to the bottom of this puzzlement by exploring the sources of the conflict within our own minds. If you look back to the work of some of the greatest thinkers of the 19th century — Mill, Marx, Nietzsche — you can find extraordinary intellectual achievements along these basic lines.

As noted earlier this month in The Times’s Room for Debate forum, this traditional approach is back with a vengeance.  Philosophers today are once again looking for the roots of philosophical conflicts in our human nature, and they are once again suggesting that we can make progress on philosophical questions by reaching a better understanding of our own minds.  But these days, philosophers are going after these issues using a new set of methodologies.  They are pursuing the traditional questions using all the tools of modern cognitive science.  They are teaming up with researchers in other disciplines, conducting experimental studies, publishing in some of the top journals of psychology.  Work in this new vein has come to be known as experimental philosophy.

The Room for Debate discussion of this movement brought up an important question that is worth pursuing further.  The study of human nature, whether in Nietzsche or in a contemporary psychology journal, is obviously relevant to certain purely scientific questions, but how could this sort of work ever help us to answer the distinctive questions of philosophy? It may be of some interest just to figure out how people ordinarily think, but how could facts about how people ordinarily think ever tell us which views were actually right or wrong?

Instead of just considering this question in the abstract, let’s focus in on one particular example.  Take the age-old problem of free will — a topic discussed at length here at The Stone by Galen Strawson, William Egginton and hundreds of readers. If all of our actions are determined by prior events — just one thing causing the next, which causes the next — then is it ever possible for human beings to be morally responsible for the things we do? Faced with this question, many people feel themselves pulled in competing directions — it is as though there is something compelling them to say yes, but also something that makes them want to say no.

What is it that draws us in these two conflicting directions? The philosopher Shaun Nichols and I thought that people might be drawn toward one view by their capacity for abstract, theoretical reasoning, while simultaneously being drawn in the opposite direction by their more immediate emotional reactions. It is as though their capacity for abstract reasoning tells them, “This person was completely determined and therefore cannot be held responsible,” while their capacity for immediate emotional reaction keeps screaming, “But he did such a horrible thing! Surely, he is responsible for it.”

To put this idea to the test, we conducted a simple experiment.  All participants in the study were told about a deterministic universe (which we called “Universe A”), and all participants received exactly the same information about how this universe worked. The question then was whether people would think that it was possible in such a universe to be fully morally responsible.

But now comes the trick. Some participants were asked in a way designed to trigger abstract, theoretical reasoning, while others were asked in a way designed to trigger a more immediate emotional response. Specifically, participants in one condition were given the abstract question:

In Universe A, is it possible for a person to be fully morally responsible for their actions?

Meanwhile, participants in the other condition were given a more concrete and emotionally fraught example:

In Universe A, a man named Bill has become attracted to his secretary, and he decides that the only way to be with her is to kill his wife and three children. He knows that it is impossible to escape from his house in the event of a fire. Before he leaves on a business trip, he sets up a device in his basement that burns down the house and kills his family.

Is Bill fully morally responsible for killing his wife and children?

The results showed a striking difference between conditions. Of the participants who received the abstract question, the vast majority (86 percent) said that it was not possible for anyone to be morally responsible in the deterministic universe. But then, in the more concrete case, we found exactly the opposite results. There, most participants (72 percent) said that Bill actually was responsible for what he had done.
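
One way to see what that “striking difference between conditions” amounts to is to run a standard test for comparing two proportions. The column does not report sample sizes, so the counts in the sketch below are placeholders (a hypothetical 100 people per condition, matched to the reported percentages); they show the shape of the calculation, not the study’s actual numbers.

    from scipy.stats import chi2_contingency

    # Placeholder counts only: roughly 14% "responsible" answers in the
    # abstract condition (the complement of the reported 86%) versus 72% in
    # the concrete condition, assuming 100 respondents per condition.
    table = [[14, 86],   # abstract condition: responsible / not responsible
             [72, 28]]   # concrete condition: responsible / not responsible

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(chi2, p_value)  # a tiny p-value marks the difference as statistically reliable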

What we have in this example is just one very simple initial experiment. Needless to say, the actual body of research on this topic involves numerous different studies, and the scientific issues arising here can be quite complex.  But let us put all those issues to the side for the moment.  Instead, we can just return to our original question.  How can experiments like these possibly help us to answer the more traditional questions of philosophy?

The simple study I have been discussing here can offer at least a rough sense of how such an inquiry works.  The idea is not that we subject philosophical questions to some kind of Gallup poll. (“Well, the vote came out 65 percent to 35 percent, so I guess the answer is … human beings do have free will!”) Rather, the aim is to get a better understanding of the psychological mechanisms at the root of our sense of conflict and then to begin thinking about which of these mechanisms are worthy of our trust and which might simply be leading us astray.

So, what is the answer in the specific case of the conflict we feel about free will? Should we be putting our faith in our capacity for abstract theoretical reasoning, or should we be relying on our more immediate emotional responses?  At the moment, there is no consensus on this question within the experimental philosophy community.  What all experimental philosophers do agree on, however, is that we will be able to do a better job of addressing these fundamental philosophical questions if we can arrive at a better understanding of the way our own minds work.

Joshua Knobe is an assistant professor at Yale University, where he is appointed both in Cognitive Science and in Philosophy. He is a co-editor, with Shaun Nichols, of the volume “Experimental Philosophy.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/09/07/experimental-philosophy/

On Not Returning to Normal

Political theorists, lawyers and policy-makers sometimes assume that responses to emergency should — morally should — aim at a speedy return to a “normal” that predated the emergency. This is implicit in the metaphor of resilience often used by officials for emergency response. “Resilience” suggests that the preferred aftermath of an emergency is quickly regaining one’s former shape, bouncing back. Presumably it is possible to bounce back with a few permanent bumps or scars, but at the limit we might speak of an invisible mending ideal of emergency response: when the response is genuinely successful, the effects of the emergency entirely disappear; before and after are indistinguishable.

The first thing to be said about the invisible mending model is that it is highly ambitious, even as a model of ideal emergency response. We normally expect firemen to put out the house fire. Replacing what is charred is not their responsibility. Neither is making the house habitable. In the same way, an emergency medical response is not supposed on its own to restore people to full functioning or health. The governing norm is that of removing or reducing the threat to life and limb. The larger aim of returning to normal involves a much larger set of agents than those who confront the emergency, and a much greater length of time.  Such was the case, as we now know, in the days that followed the attacks of Sept. 11. The efforts of the immediate responders to this grave emergency eliminated dangers and saved lives, but did not, could not, effect the complete and invisible mending that the language often used to describe their efforts implied.

Invisible mending may be a bad ideal of emergency response for two further reasons. First, the status quo ante — the way things were before — may be an emergency waiting to happen. Rebuilding water-damaged housing on a flood plain only invites more water damage. Instead of restoring things to their former condition, it is at least arguable that emergency response should usher in discontinuity. Perhaps people living in highly vulnerable flood plains have to be encouraged to move; or perhaps new forms of flood-resistant construction have to be developed.

The second reason why emergency response can be geared to the wrong ideal when it aims at invisible mending is that restoration can serve to continue a morally questionable status quo ante. Even when returning  to normal after a crisis does not return people to an emergency waiting to happen, it can return them to a “normal” that is unacceptable in other ways. The peace process in Northern Ireland has made violent paramilitary activity, including bombing, largely a thing of the past. This marks a return to a normality of non-violence that has not been seen for decades.  But there are many urban areas where high walls built during the Troubles separate loyalists from nationalists. The walls can be torn down. But extreme sectarian ill-feeling will go on: that is a continuity that the peace process has not broken.

Aiming simply to return to a former normality can have an unwelcome complacency about it, sometimes a defiant complacency. A determination to go on exactly as before — just to spite an enemy or attacker or simply a critic — is a recognizable human response to attack, enmity or criticism. Perhaps it also displays a kind of resilience. But unless continuity has a significant value of its own, the determination to go on exactly as before may have little to be said for it. Emergencies may better be seen as occasions for fresh starts and rethinking. Because they take life and make death vivid for those who survive emergencies, they properly prompt people to appraise lives that are nearly cut short.

Consider the following clichéd vignette. Bloggs, a ruthless businessman, spends 20 years, seven days a week, clinching deals. He is run down, eats too much, drinks too much. This leads to a heart attack. He takes this heart attack as a wake-up call. The teenage children he has neglected, his wife, his dog — all of these figures appear in a new light. He decides to lead a new sort of life that gives them the attention and appreciation they deserve. The heart attack leads Bloggs to this decision, let’s say, and not the decision to lead his old business life on a new sort of diet. The heart attack may have been caused by the bad diet, and it is open to interpretation as a wake-up call about eating and drinking. But it is naturally seized upon as an opportunity to take stock more generally and change a way of life.

A public emergency can be seized upon in the same way, even if one is not at its sharp end. This is how it was with September 11. Through the live television coverage the whole world was there. Many viewers throughout the world identified strongly with the victims, so that their deaths reminded us of our mortality and prompted us to take stock. To this extent we took September 11 as a wake-up call. We opened our minds to questions of how we could live better.

There are philosophers such as Ted Honderich who think that the responsibility Westerners had before September 11 for not living better lives by alleviating global inequalities contributed to the overall responsibility for the September 11 attack. Honderich wrote in “After the Terror” in 2003: “[T]he atrocity at the Twin Towers did have a human necessary condition in what preceded it: our deadly treatment of those outside our circle of comfort, those with the bad lives. Without that deadly treatment by us, the atrocity of the Twin Towers would not have happened.”

But this is a puzzling thing to think. It may be true that the rich in the world should wake up to the fact that they can do much more for the poor of the world. It may be true that the earlier this is done the better. It may be true that there was an opportunity to start on this task on September 12, 2001, as opposed to September 12, 2003. It may therefore be true that people should have started on this task on September 12, 2001. What does not seem to be true is that they should have started to do this because of the reasons for the September 11 attack. There is no reason to think that the Al-Qaeda operatives who flew the airplanes, or their masters, had any agenda with respect to global inequality. And it is hard to understand an attack on the Twin Towers or Pentagon as a means of reducing inequality. This crucial and fairly obvious point is fatal for Honderich’s way of arguing.

September 11 caused many people to take stock of their lives, and many governments to reappraise their priorities in foreign policy. Not every such reappraisal has led to better lives or better policies. But there is something important about the opportunity that emergency offers for not going on in the same old way. For us to break from our past because of an emergency is not at all for us to be broken by an emergency.

 

Tom Sorell is John Ferguson Professor of Global Ethics and director of the Center for the Study of Global Ethics at Birmingham University. He is the author of several philosophical works, including “Moral Theory and Anomaly” (1999), and is currently working on a book on the moral and political theory of emergencies.

___________

Full article: http://opinionator.blogs.nytimes.com/2010/09/12/on-not-returning-to-normal/

Predators: A Response

There are certain responses to the arguments in my post, “The Meat Eaters,” that recur with surprising frequency throughout the comments.  The following four objections, listed in the order of the relative frequency of their appearance, are the most common.

1. If predators were to disappear from a certain geographical region, herbivore populations in that area would rapidly expand, depleting edible vegetation, and thereby ultimately producing more deaths among herbivores from starvation or disease than would otherwise have been caused by predators.  And starvation and disease normally involve more suffering than being quickly dispatched by a predator.

2. Should human beings be the first to go?

3. What about the suffering of plants?

4. What about bacteria, viruses, and insects?

My own response will focus primarily on the first of these objections, and on the ways in which the argument might continue after the objection has been noted.

In the sixth, seventh, and eighth paragraphs of my original article, I anticipated the first objection. I wrote:

Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.  Or suppose that we could intervene genetically, so that currently carnivorous species would gradually evolve into herbivorous ones, thereby fulfilling Isaiah’s prophecy.  If we could bring about the end of predation by one or the other of these means at little cost to ourselves, ought we to do it?

I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.  Our ignorance of the potential ramifications of our interventions in the natural world remains profound.  Efforts to eliminate certain species and create new ones would have many unforeseeable and potentially catastrophic effects.

Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations.  Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

After presenting the objection, I referred back to it six times in the course of the remaining 1900 words of the article.  Those references typically stress that my argument takes a conditional form: that is, my argument has practical implications only if we could have a high degree of confidence that problems of the sort I identified could be avoided.  Yet this same objection is repeatedly pressed in the comments, usually with lamentations over my appalling ignorance of biology and ecology, as if I had been unaware that there was such an obvious and devastating refutation of everything I had said.  I will return to this fact about the nature of the commentary at the end of this response.

Among those commentators who actually read the article and thus were aware that I had acknowledged the objection, a few took the argument a step further.  They understood that my argument was conditional but claimed that the relevant condition could never obtain.  We will never, they contended, be able to eliminate predation without causing catastrophic ecological disruption and thus even more suffering than we might have prevented.  If they are right, my article may present an interesting thought experiment that might have prompted us to reflect on our values (though it didn’t), but it is essentially devoid of practical significance.  These readers were too polite to point out that their prediction also casts Isaiah in a pretty disappointing light in his role as prophet.

My assumption in the article, however, was that our understanding of the biological and ecological sciences may well advance beyond what we consider possible today.  That has happened repeatedly in the history of science, as when Rutherford, who first split the atom, said in 1933 that anyone who thought that the splitting of the atom could be a source of power was talking “moonshine.”  Since we can’t be certain that we’ll never be able to reduce or eliminate predation without disastrous side effects, it’s important to think in advance about how we might wisely employ a more refined and discriminating power of intervention if we were ever to acquire it.

It seems, moreover, that my argument has some relevance to choices we must make even now.  There are some species of large predatory animals, such as the Siberian tiger, that are currently on the verge of extinction.  If we do nothing to preserve it, the Siberian tiger as a species may soon become extinct.  The number of extant Siberian tigers has been low for a considerable period.  Any ecological disruption occasioned by their dwindling numbers has largely already occurred or is already occurring.  If their number in the wild declines from several hundred to zero, the impact of their disappearance on the ecology of the region will be almost negligible.  Suppose, however, that we could repopulate their former wide-ranging habitat with as many Siberian tigers as there were during the period in which they flourished in their greatest numbers, and that that population could be sustained indefinitely.  That would mean that herbivorous animals in the extensive repopulated area would again, and for the indefinite future, live in fear and that an incalculable number would die in terror and agony while being devoured by a tiger.  In a case such as this, we may actually face the kind of dilemma I called attention to in my article, in which there is a conflict between the value of preserving existing species and the value of preventing suffering and early death for an enormously large number of animals.

Many of the commentators said, in effect: “Leave nature alone; the course of events in the natural world will go better without human intervention.”  Since efforts to repopulate their original habitat with large numbers of Siberian tigers might require a massive intervention in nature, this anti-interventionist view may itself imply that we ought to allow the Siberian tiger to become extinct.  But suppose Siberian tigers would eventually restore their former numbers on their own if human beings would simply leave them alone.  Most people, I assume, would find that desirable.  But is that because our human prejudices blind us to the significance of animal suffering?  Siberian tigers are in fact not particularly aggressive toward human beings, but suppose for the sake of argument that they were.  And suppose that there were large numbers of poor people living in primitive and vulnerable conditions in the areas in which Siberian tigers might become resurgent, so that many of these people would be threatened with mutilation and death if the tigers were not to become extinct, or not banished to captivity.  Would you still say: “Leave nature alone; let the tigers repopulate their former habitats.”?  What if you were one of the people in the region, so that your children or grandchildren might be among the victims?  And what would your reaction be if someone argued for the proliferation of tigers by pointing out that without tigers to keep the human population in check, you and others would breed incontinently and overcultivate the land, so that eventually your numbers would have to be controlled by famine or epidemic?  Better, they might say, to let nature do the work of culling the human herd in your region via the Siberian tiger.  Would you agree?

In fact we can’t leave nature alone.  We are a part of it, as much as any other animal.  More importantly, we can’t help but have a massive and pervasive impact on the natural world given our own numbers.  Agricultural practices necessary for our survival constitute a continuing invasion and occupation of lands previously inhabited by others.  One explicit suggestion of my article was that it would be better to try to control our impact on the natural world in a purposeful way, guided by intelligence and moral values, including the value of diminishing suffering, rather than to continue to allow our effects on the natural world, including the extinction of species, to be determined by blind inadvertence — as, for example, in the case of the many extinctions of animal species that will be caused by global climate change.

Some commentators made the interesting point that even if predators were to become extinct in a certain area without environmental catastrophe, new ones would eventually evolve there to fill the ecological niche that would have been left vacant, thereby restarting the whole dreary cycle.  But even if this were to happen, the evolution of a species can take a long time, and a lengthy interval without predation could be a significant good, just as the prevention of a war can be a great good even if it does nothing to prevent other wars in the future.  More importantly, it’s hardly plausible to suppose that we could have the ability to eliminate a predatory species from an area but would then lack the ability, even in the distant future when our scientific expertise would have advanced even further, to prevent a new predatory species from arising.

Consider the three other responses that turned up repeatedly in the comments.  Some readers suggested that my argument implies that we should aim for the extinction of the carnivorous human species, the species that causes far more suffering to other animals than any other species.  Most took that to be a reductio ad absurdum of my argument, but a few seemed to think that getting rid of human beings would be a good idea.  For those who wish to pursue this issue, I recommend Peter Singer’s contribution to The Stone (“Should This Be the Last Generation?”, June 6, 2010).  My own response can be quite brief.  Human beings are not carnivores in the relevant sense, but omnivores, and in most cases can choose to live without tormenting and killing other animals, an option that is not a biological possibility for genuine carnivores.  My own view, though I won’t argue for it here, is that the extinction of human beings would be the worst event that could possibly occur.

What about the suffering of plants?  Again a brief response: plants don’t suffer, though they do respond to stimuli in ways that some have mistaken for a pain response.  What was rather shocking about the repeated invocation of suffering in plants is that it occasioned no reflections on what the moral implications would be if plants really did suffer.  The commentators’ gesture toward the alleged suffering of plants seemed no more than a rhetorical move in their attack on my argument.  But if one became convinced, as some of the commentators appear to be, that plants are conscious, feel pain, and experience suffering, that ought to prompt serious reconsideration of the permissibility of countless practices that we have always assumed to be benign.  If you really believed that plants suffer, would you continue to think that it’s perfectly acceptable to mow your grass?

Finally, my responses to the recurring challenges concerning microbes and insects parallel those I offered to people’s solicitude about plants.  Like plants, microbes don’t suffer.  I don’t think we know yet whether many types of insect do.  If, in controlled conditions, one pulls the leg or wing off a fly while it’s feeding or grooming, it will carry on with its activity as if nothing had happened.  But suppose insects really do suffer, perhaps quite intensely.  Shouldn’t that elicit serious moral reflection rather than being deployed as a mere debating point?

Earlier I noted that by far the most common objection to my article was that I ignored the likely consequences of the elimination or even the mere reduction of predation.  If you have the patience, review the first 152 comments on my article.  You will find this objection stated in 28 of them — that is, in one of every 5.4, or nearly 20 percent.  Given that I explicitly stated and addressed that objection, and later reverted to it six times, it seems clear that many, and probably most, of the readers of the article gave it only a cursory glance before pouncing on their keyboards to give me a good roasting. But at least those who replicated the objection I had stated deserve credit for saying something of substance. What’s particularly disheartening is that their comments are greatly outnumbered by those that make no reference to my arguments and never touch on a point of substance, but instead consist entirely of insults and invective.  If you take your own moral beliefs seriously, the way to respond to a challenge to them is to make sure you understand the challenge and then to try to refute the arguments for it.  If you can’t answer the challenge except by mocking the challenger, how can you retain your confidence in your own beliefs?

Jeff McMahan is professor of philosophy at Rutgers University and a visiting research collaborator at the Center for Human Values at Princeton University. He is the author of many works on ethics and political philosophy, including “The Ethics of Killing: Problems at the Margins of Life” and “Killing in War.”

__________

Full article: http://opinionator.blogs.nytimes.com/2010/09/28/predators-a-response/

The Defiant Ones

In her new book, the author of ‘Seabiscuit’ turns to the unimaginable ordeal of an Olympic athlete and WW II hero. Because of her own debilitating illness, they struck a special bond.

With a fringe of white hair poking out from under a University of Southern California baseball cap and blue eyes sharp behind bifocals, 93-year-old Louis Zamperini refuses to concede much to old age. He still works a couple of hours each day in the yard of his Hollywood Hills home, bagging leaves, climbing stairs and, on occasion, trimming trees with a chainsaw. His outlook is upbeat, even rambunctious. “I have a cheerful countenance at all times,” he says. “When you have a good attitude your immune system is fortified.” But as he plunged into “Unbroken,” Laura Hillenbrand’s 496-page story of his life, the happy trappings of his current existence fell away.

“Unbroken” will be published Nov. 16 with a first printing of 250,000 copies. Its publisher, Random House, hopes to repeat the success it enjoyed with “Seabiscuit,” Ms. Hillenbrand’s 2001 best seller, which has six million books in print and became a hit movie. “We’re positioning it as the big book for the holidays,” says a Barnes & Noble buyer.

One of the many notable aspects of “Unbroken” is that its author has never met her subject. Suffering from a debilitating case of chronic fatigue syndrome, she was unable to travel to Los Angeles from her Washington, D.C., home. She did the bulk of her research by phone and over the Internet, which enabled her to zero in on key collections at such institutions as the National Archives.

Mr. Zamperini, in his bomber jacket
 
“Unbroken” details a life that was tumultuous from the beginning. As a blue-collar kid in Southern California, Mr. Zamperini fell in and out of scrapes with the law. By age 19, he’d redirected his energies into sports, becoming a record-breaking distance runner. He competed in the 1936 Olympic Games in Berlin where he made headlines, not just on the track (Hitler sought him out for a congratulatory handshake), but by stealing a Nazi flag from the well-guarded Reich Chancellery. The heart of the story, however, is about Mr. Zamperini’s experiences while serving in the Pacific during World War II.

A bombardier on a B-24 flying out of Hawaii in May 1943, the Army Air Corps lieutenant was one of only three members of an 11-man crew to survive a crash into a trackless expanse of ocean. For 47 days, Mr. Zamperini and pilot Russell Allen Phillips (tail gunner Francis McNamara died on day 33) huddled aboard a tiny, poorly provisioned raft, subsisting on little more than rain water and the blood of hapless birds they caught and killed bare-handed. All the while sharks circled, often rubbing their backs against the bottom of the raft. The sole aircraft that sighted them was Japanese. It made two strafing runs, missing its human targets both times. After drifting some 2,000 miles west, the bullet-riddled, badly patched raft washed ashore in the Marshall Islands, where Messrs. Zamperini and Phillips were taken prisoner by the Japanese. The war still had more than two years to go.

 
Laura Hillenbrand at her home in Washington; she rarely leaves the house because of her illness.

For 25 months in such infamous Japanese POW camps as Ofuna, Omori and Naoetsu, Mr. Zamperini was physically tortured and subjected to constant psychological abuse. He was beaten. He was starved. He was denied medical care for maladies that included beriberi and chronic bloody diarrhea. His fellow prisoners—among them Mr. Phillips—were treated almost as badly. But Mr. Zamperini was singled out by a sadistic guard named Mutsuhiro Watanabe, known to prisoners as “the Bird,” a handle picked because it had no negative connotations that might bring down his irrational wrath. The Bird intended to make an example of the famous Olympian. He regularly whipped him across the face with a belt buckle and forced him to perform demeaning acts, among them push-ups atop pits of human excrement. The Bird’s goal was to force Mr. Zamperini to broadcast anti-American propaganda over the radio. Mr. Zamperini refused. Following Japan’s surrender, Mr. Watanabe was ranked seventh among its most wanted war criminals (Tojo was first). Because war-crime prosecutions were suspended in the 1950s, he was never brought to justice.

Mr. Zamperini, record-setting miler, 1939

This all came rushing back when Mr. Zamperini first sat down with a copy of “Unbroken” last month. “As I was reading,” he says, gesturing with an arm to a peaceful vista of palm trees outside his house, “I had to look out that picture window from time to time to make sure that I wasn’t still in Japan. When I got to the end I called Laura and told her she’d put me back in prison, and she said, ‘I’m sorry.’ ”

“It’s almost unimaginable what Louie went through,” says Ms. Hillenbrand from her home on a late fall afternoon. She discovered Mr. Zamperini’s story while researching “Seabiscuit,” the saga of another individual—in that case, a horse—that confronted long odds. “Louie and Seabiscuit were both Californians and both on the sports pages in the 1930s,” she says. “I was fascinated. When I learned about his World War II experiences, I thought, ‘If this guy is still alive, I want to meet him.’ ”

Following the publication of “Seabiscuit,” Ms. Hillenbrand wrote to Mr. Zamperini. Shortly thereafter they had the first of many long phone conversations. His tale of survival captivated her both on its merits and because she could relate to it personally. “I’m attracted,” she says, “to subjects who overcome tremendous suffering and learn to cope emotionally with it.”

In basic training, pre-WWII helmet, 1941

The 43-year-old Ms. Hillenbrand contracted chronic fatigue syndrome during her sophomore year at Kenyon College. The bewildering disease, thought to originate from a virus, can be enfeebling and is incurable. Ms. Hillenbrand is today essentially a prisoner in her own home. She is so consistently weak and dizzy (vertigo is a side effect) that she recently installed a chair lift to get to the second floor of her house, where she lives with her husband, G. Borden Flanagan, an assistant professor of political philosophy at American University. What to others might seem simple matters are to her subjects of grave consideration. “I skipped my shower today,” she says, “in order to have the strength to do this interview. My illness is excruciating and difficult to cope with. It takes over your entire life and causes more suffering than I can describe.”

Ms. Hillenbrand’s research was complicated by her disease. But as she likes to remind people, she came down with chronic fatigue syndrome before starting her writing career, and she has learned to work around it. “For ‘Seabiscuit,’ ” she says, “I interviewed 100 people I never met.” For “Unbroken,” Ms. Hillenbrand located not only many of Mr. Zamperini’s fellow POWs and the in-laws of Mr. Phillips, but the most friendly of his Japanese captors. She also interviewed scores of experts on the War in the Pacific (the book is extensively end-noted) and benefited from her subject’s personal files, which he shipped to Washington for her use. “A superlative pack rat,” she writes, “Louie has saved virtually every artifact of his life.”

His damaged B-24 after a mission, 1943

Mr. Zamperini with mother at homecoming, 1945

During her exploration of Mr. Zamperini’s war years, Ms. Hillenbrand was most intrigued by his capacity to endure hardship. “One of the fascinating things about Louie,” she says, “is that he never allowed himself to be a passive participant in his ordeal. It’s why he survived. When he was being tortured, he wasn’t just lying there and getting hit. He was always figuring out ways to escape emotionally or physically.”

Mr. Zamperini owes this resiliency, Ms. Hillenbrand concluded, to his rebellious nature. “Defiance defines Louie,” she says. “As a boy he was a hell-raiser. He refused to be corralled. When someone pushed him he pushed back. That made him an impossible kid but an unbreakable man.”

Although Mr. Zamperini came back to California in one piece, he was emotionally ruined. At night, his demons descended in the form of vengeful dreams about Mr. Watanabe. He drank heavily. He nearly destroyed his marriage. In 1949, at the urging of his wife, Cynthia, Mr. Zamperini attended a Billy Graham crusade in downtown Los Angeles, where he became a Christian. (The conversion of the war hero helped put the young evangelist on the map.) Ultimately Mr. Zamperini forgave his tormentors and enjoyed a successful career running a center for troubled youth. He even reached out to Mr. Watanabe. “As a result of my prisoner of war experience under your unwarranted and unreasonable punishment,” Mr. Zamperini wrote his former guard in the 1990s, “my post-war life became a nightmare … but thanks to a confrontation with God … I committed my life to Christ. Love replaced the hate I had for you.” A third party promised to deliver the letter to Mr. Watanabe. He did not reply, and it is not known whether he received it. He died in 2003.

Mr. Zamperini still has his purloined Nazi flag.

Mr. Zamperini’s internal battles and ultimate redemption point to a key difference between “Unbroken” and Ms. Hillenbrand’s previous book. “Seabiscuit’s story is one of accomplishment,” she says. “Louie’s is one of survival. Seabiscuit’s story played out before the whole world. Louie dealt with his ordeal essentially alone. His was a mental struggle.” That struggle, she adds, feels particularly resonant in 2010. “This is a time when people need to be buoyed by something, and Louie blows breath into people by making them realize that they can overcome more than they think.”

Because of Ms. Hillenbrand’s illness, there will be no author tour. In 2007 she sank deeper into chronic fatigue syndrome, and she hasn’t pulled out of it. “This is going to be hard,” she says. “I’m very afraid. I’m not functioning well. I’m going to have to be careful that I don’t slip back to the bottom.” Next week’s “Today” show interview was taped at her home.

A rambunctious youth in Torrance, Calif.

Mr. Zamperini—whose health issues don’t go beyond taking blood-thinning medication following a recent angioplasty—is raring to go. His wife died in 2001, and while he is close to his two children and a grandson, he lives alone. In short, he’s up for an adventure. He has told Random House he will promote the book in Ms. Hillenbrand’s stead. He also has signed with a San Francisco-based speakers’ agency. His goal is to become an inspirational mainstay on cruise ships. He has transformed what he learned as a POW into parables (“Hope has to have a reason. Faith has to have an object”) that he feels can reduce stress and are perfect for an anxiety-filled time.

Visiting a prison camp in Japan in 1950

There is also, not surprisingly, movie interest (the film version of “Seabiscuit” took in $150 million world-wide at the box office). The outlook, however, is uncertain. In the 1950s, Mr. Zamperini published an autobiography titled “Devil at My Heels.” Universal, envisioning a vehicle for Tony Curtis, optioned Mr. Zamperini’s life rights. The project went nowhere. In the 1990s, Universal re-optioned the rights, this time for Nicolas Cage. Again the project faltered. In 2003, Mr. Zamperini and writer David Rensin updated “Devil at My Heels.”

Running in the Olympic torch relay in Los Angeles, 1984

Andrew Rigrod, an entertainment lawyer representing Mr. Zamperini, believes the rights have now reverted to his client. A Universal spokeswoman says that this is most likely correct, but says the studio still owns the previous project and is developing it. She adds that she expects things to be resolved to everyone’s satisfaction. Mr. Zamperini’s hope, Mr. Rigrod says, is that he and Ms. Hillenbrand (who is represented by CAA) will join forces. “He wants the movie to be based on Laura’s book,” says the lawyer, “and he would cooperate and participate.” Says Mr. Zamperini: “For the work she’s done, she deserves the movie. I told her I don’t want anything.”

Over the course of the seven years Ms. Hillenbrand toiled on “Unbroken,” she and Mr. Zamperini became friends, despite never laying eyes on each other. “I call him a virtuoso of joy,” she says. “When things are going bad, I phone him.” Says Mr. Zamperini, “Every time I say good-bye to her, I tell her I love her and she tells me, ‘I love you.’ I’ve never known a girl like her.

“Laura brought my war buddies back to life,” he says. “The fact that Laura has suffered so much enabled her to put our suffering into words.”

Steve Oney is the author of “And the Dead Shall Rise: The Murder of Mary Phagan and the Lynching of Leo Frank.”

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703514904575602540345409292.html

Discovering the Virtues of a Wandering Mind

At long last, the doodling daydreamer is getting some respect.

In the past, daydreaming was often considered a failure of mental discipline, or worse. Freud labeled it infantile and neurotic. Psychology textbooks warned it could lead to psychosis. Neuroscientists complained that the rogue bursts of activity on brain scans kept interfering with their studies of more important mental functions.

But now that researchers have been analyzing those stray thoughts, they’ve found daydreaming to be remarkably common — and often quite useful. A wandering mind can protect you from immediate perils and keep you on course toward long-term goals. Sometimes daydreaming is counterproductive, but sometimes it fosters creativity and helps you solve problems.

Consider, for instance, these three words: eye, gown, basket. Can you think of another word that relates to all three? If not, don’t worry for now. By the time we get back to discussing the scientific significance of this puzzle, the answer might occur to you through the “incubation effect” as your mind wanders from the text of this article — and, yes, your mind is probably going to wander, no matter how brilliant the rest of this column is.

Mind wandering, as psychologists define it, is a subcategory of daydreaming, which is the broad term for all stray thoughts and fantasies, including those moments you deliberately set aside to imagine yourself winning the lottery or accepting the Nobel. But when you’re trying to accomplish one thing and lapse into “task-unrelated thoughts,” that’s mind wandering.

During waking hours, people’s minds seem to wander about 30 percent of the time, according to estimates by psychologists who have interrupted people throughout the day to ask what they’re thinking. If you’re driving down a straight, empty highway, your mind might be wandering three-quarters of the time, according to two of the leading researchers, Jonathan Schooler and Jonathan Smallwood of the University of California, Santa Barbara.

“People assume mind wandering is a bad thing, but if we couldn’t do it during a boring task, life would be horrible,” Dr. Smallwood says. “Imagine if you couldn’t escape mentally from a traffic jam.”

You’d be stuck contemplating the mass of idling cars, a mental exercise that is much less pleasant than dreaming about a beach and much less useful than mulling what to do once you get off the road. There’s an evolutionary advantage to the brain’s system of mind wandering, says Eric Klinger, a psychologist at the University of Minnesota and one of the pioneers of the field.

“While a person is occupied with one task, this system keeps the individual’s larger agenda fresher in mind,” Dr. Klinger writes in the “Handbook of Imagination and Mental Simulation.” “It thus serves as a kind of reminder mechanism, thereby increasing the likelihood that the other goal pursuits will remain intact and not get lost in the shuffle of pursuing many goals.”

Of course, it’s often hard to know which agenda is most evolutionarily adaptive at any moment. If, during a professor’s lecture, students start checking out peers of the opposite sex sitting nearby, are their brains missing out on vital knowledge or working on the more important agenda of finding a mate? Depends on the lecture.

But mind wandering clearly seems to be a dubious strategy, if, for example, you’re tailgating a driver who suddenly brakes. Or, to cite activities that have actually been studied in the laboratory, when you’re sitting by yourself reading “War and Peace” or “Sense and Sensibility.”

If your mind is elsewhere while your eyes are scanning Tolstoy’s or Austen’s words, you’re wasting your own time. You’d be better off putting down the book and doing something more enjoyable or productive than “mindless reading,” as researchers call it.

Yet when people sit down in a laboratory with nothing on the agenda except to read a novel and report whenever their mind wanders, in the course of a half hour they typically report one to three episodes. And those are just the lapses they themselves notice, thanks to their wandering brains being in a state of “meta-awareness,” as it’s called by Dr. Schooler.

He and other researchers have also studied the many other occasions when readers aren’t aware of their own wandering minds, a condition known in the psychological literature as “zoning out.” (For once, a good bit of technical jargon.) When experimenters sporadically interrupted people reading to ask if their minds were on the text at that moment, about 10 percent of the time people replied that their thoughts were elsewhere — but they hadn’t been aware of the wandering until being asked about it.

“It’s daunting to think that we’re slipping in and out so frequently and we never notice that we were gone,” Dr. Schooler says. “We have this intuition that the one thing we should know is what’s going on in our minds: I think, therefore I am. It’s the last bastion of what we know, and yet we don’t even know that so well.”

The frequency of zoning out more than doubled in reading experiments involving smokers who craved a cigarette and in people who were given a vodka cocktail before taking on “War and Peace.” Besides increasing the amount of mind wandering, the people made alcohol less likely to notice when their minds wandered from Tolstoy’s text.

In another reading experiment, researchers mangled a series of consecutive sentences by switching the position of two nouns in each one — the way that “alcohol” and “people” were switched in the last sentence of the previous paragraph. In the laboratory experiment, even though the readers were told to look for sections of gibberish somewhere in the story, only half of them spotted it right away. The rest typically read right through the first mangled sentence and kept going through several more before noticing anything amiss.

To measure mind wandering more directly, Dr. Schooler and two psychologists at the University of Pittsburgh, Erik D. Reichle and Andrew Reineberg, used a machine that tracked the movements of people’s eyes while reading “Sense and Sensibility” on a computer screen. It’s probably just as well that Jane Austen is not around to see the experiment’s results, which are to appear in a forthcoming issue of Psychological Science.

By comparing the eye movements with the prose on the screen, the experimenters could tell if someone was slowing to understand complex phrases or simply scanning without comprehension. They found that when people’s minds wandered, the episode could last as long as two minutes.

Where exactly does the mind go during those moments? By observing people at rest during brain scans, neuroscientists have identified a “default network” that is active when people’s minds are especially free to wander. When people do take up a task, the brain’s executive network lights up to issue commands, and the default network is often suppressed.

But during some episodes of mind wandering, both networks are firing simultaneously, according to a study led by Kalina Christoff of the University of British Columbia. Why both networks are active is up for debate. One school theorizes that the executive network is working to control the stray thoughts and put the mind back on task.

Another school of psychologists, which includes the Santa Barbara researchers, theorizes that both networks are working on agendas beyond the immediate task. That theory could help explain why studies have found that people prone to mind wandering also score higher on tests of creativity, like the word-association puzzle mentioned earlier. Perhaps, by putting both of the brain networks to work simultaneously, these people are more likely to realize that the word that relates to eye, gown and basket is ball, as in eyeball, ball gown and basketball.

To encourage this creative process, Dr. Schooler says, it may help if you go jogging, take a walk, do some knitting or just sit around doodling, because relatively undemanding tasks seem to free your mind to wander productively. But you also want to be able to catch yourself at the Eureka moment.

“For creativity you need your mind to wander,” Dr. Schooler says, “but you also need to be able to notice that you’re mind wandering and catch the idea when you have it. If Archimedes had come up with a solution in the bathtub but didn’t notice he’d had the idea, what good would it have done him?”

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/06/29/science/29tier.html

When It Comes to Sex, Chimps Need Help, Too

The human ego has never been quite the same since the day in 1960 that Jane Goodall observed a chimpanzee feasting on termites near Lake Tanganyika. After carefully trimming a blade of grass, the chimpanzee poked it into a passage in the termite mound to extract his meal. No longer could humans claim to be the only tool-making species.

The deflating news was summarized by Ms. Goodall’s mentor, Louis Leakey: “Now we must redefine tool, redefine Man, or accept chimpanzees as human.”

So what have we actually done now that we’ve had a half-century to pout? In a 50th anniversary essay in the journal Science, the primatologist William C. McGrew begins by hailing the progression of chimpanzee studies from field notes to “theory-driven, hypothesis-testing ethnology.”

He tactfully waits until the third paragraph — journalists call this “burying the lead” — to deliver the most devastating blow yet to human self-esteem. After noting that chimpanzees’ “tool kits” are now known to include 20 items, Dr. McGrew casually mentions that they’re used for “various functions in daily life, including subsistence, sociality, sex, and self-maintenance.”

Sex? Chimpanzees have tools for sex? No way. If ever there was an intrinsically human behavior, it had to be the manufacture of sex toys.

Considering all that evolution had done to make sex second nature, or maybe first nature, I would have expected creatures without access to the Internet to leave well enough alone.

Only Homo sapiens seemed blessed with the idle prefrontal cortex and nimble prehensile thumbs necessary to invent erotic paraphernalia. Or perhaps Homo habilis, the famous Handy Man of two million years ago, if those ancestors got bored one day with their jobs in the rock-flaking industry:

“Flake, flake, flake.”

“There’s gotta be more to life.”

“Nobody ever died wishing he’d spent more time making sharp rocks.”

“What if you could make a tool for… something fun?”

I couldn’t imagine how chimps managed this evolutionary leap. But then, I couldn’t imagine what they were actually doing. Using blades of grass to tickle one another? Building heart-shaped beds of moss? Using stones for massages, or vines for bondage, or — well, I really had no idea, so I called Dr. McGrew, who is a professor at the University of Cambridge.

The tool for sex, he explained, is a leaf. Ideally a dead leaf, because that makes the most noise when the chimp clips it with his hand or his mouth.

“Males basically have to attract and maintain the attention of females,” Dr. McGrew said. “One way to do this is leaf clipping. It makes a rasping sound. Imagine tearing a piece of paper that’s brittle or dry. The sound is nothing spectacular, but it’s distinctive.”

O.K., a distinctive sound. Where does the sex come in?

“The male will pluck a leaf, or a set of leaves, and sit so the female can see him. He spreads his legs so the female sees the erection, and he tears the leaf bit by bit down the midvein of the leaf, dropping the pieces as he detaches them. Sometimes he’ll do half a dozen leaves until she notices.”

And then?

“Presumably she sees the erection and puts two and two together, and if she’s interested, she’ll typically approach and present her back side, and then they’ll mate.”

My first reaction, as a chauvinistic human, was to dismiss the technology as laughably primitive — too crude to even qualify as a proper sex tool. But Dr. McGrew said it met anthropologists’ definition of a tool: “He’s using a portable object to obtain a goal. In this case, the goal is not food but mating.”

Put that way, you might see this chimp as the equivalent of a human (wearing pants, one hopes) trying to attract women by driving around with a car thumping out 120-decibel music. But until researchers are able to find a woman who admits to being anything other than annoyed by guys in boom cars, these human tools must be considered evolutionary dead ends.

By contrast, the leaf-clipping chimps seem more advanced, practically debonair. But it would be fairer to compare the clipped leaf with the most popular human sex tool, which we can now identify thanks to the academic research described last year by my colleague Michael Winerip. The researchers found that the vibrator, considered taboo a few decades ago, had become one of the most common household appliances in the United States. Slightly more than half of all women, and almost half of men, reported having used one, and they weren’t giving each other platonic massages.

Leaf-clipping, meanwhile, has remained a local fetish among chimpanzees. The sexual strategy has been spotted at a colony in Tanzania but not in most other groups. There has been nothing comparable to the evolution observed in distributors of human sex tools: from XXX stores to chains of cutely named boutiques (Pleasure Chest, Good Vibrations) to mass merchants like CVS and Wal-Mart.

So let us, as Louis Leakey suggested, salvage some dignity by redefining humanity. We may not be the only tool-making species, but no one else possesses our genius for marketing. We reign supreme, indeed unrivaled, as the planet’s only tool-retailing species.

Now let’s see how long we hold on to that title.

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/05/04/science/04tier.html

Bring On the Fat, Bring On the Taste

Celebrity Chefs Join Burger Wars, Baste Beef Patties in Butter

Celebrity chefs have slaved in haute cuisine kitchens and mastered the world’s most complex dishes. Today, they’re dedicating their culinary brain power to another challenge: How to cash in on the burger craze.

Chefs such as French-trained Hubert Keller, all-American Bobby Flay and television star Emeril Lagasse are devoting their expertise to the once-humble hamburger. The rapidly growing pack of burger chefs is sparking fierce competition to expand, protect innovations and promote their recipes as the world’s best.

The deluxe $60 Rossini burger, with Kobe beef, sauteed foie gras, shaved truffles and Madeira sauce at Burger Bar in San Francisco.

Most of the chefs make a big deal about the kind of meat served at their restaurants. Mr. Lagasse blends ground chuck, short rib and brisket; others promote their Angus, Kobe or grass-fed beef. Some beef experts say the main secret behind tasty celebrity-chef burgers is simple: They pile on the fat, whether from beef patties with 30% fat content or from patties basted in butter. That alone may make their burgers delicious at a time when supermarket ground beef may contain as little as 8% fat.

“I crave cheeseburgers more than anything else,” says Bobby Flay, who has five Bobby’s Burger Palace locations in the Northeast and is planning five to seven more in the next 12 to 18 months. “We treat the food like a high-end restaurant,” using only fresh, unprocessed ingredients, Mr. Flay says.

__________

‘Two Kobe Beef Patties, Truffles…’

Laurent Tourondel

Burger Joint: LT Burger, Sag Harbor, N.Y.

Meat Theory: A grind of Certified Angus Beef short rib, brisket, chuck and sirloin

Priciest Burger: The Wagyu burger costs $16.

Cooking Tip: Smear the beef patty with softened butter, salt and pepper before cooking.

Richard Blais

Burger Joint: Flip Burger Boutique in Atlanta (above) and Birmingham, Ala.

Priciest Burger: The A5 burger, made of Japanese Kobe beef, truffles and foie gras, costs $39.

Cooking Tip: Heat a cast-iron pan, add clarified butter, a garlic clove and sprigs of thyme, rosemary or sage. Add the patty and chunks of butter, and baste.

Hubert Keller

Burger Joint: Burger Bar in Las Vegas, St. Louis and San Francisco

Priciest Burger: The Rossini burger, made with black truffles, foie gras and Madeira sauce, costs $60.

Cooking Tip: Grind your own meat: Cut meat into cubes and chill in a bowl. Pulse quickly in a food processor, stopping when it is still coarse.

Bobby Flay

Burger Joint: Bobby’s Burger Palace, five locations in the Northeast

Meat Theory: Ground chuck and sirloin, 20% fat

Cooking Tip: Add a layer of potato chips between the meat and bun for extra crunch.

Marcus Samuelsson

Burger Joint: Marc Burger, Chicago and Costa Mesa, Calif.

Meat Theory: Chuck, 30% fat

Priciest Burger: Two Kobe sliders cost $8.95.

Cooking Tip: Don’t mix any salt into the meat; apply it right before cooking so the meat doesn’t ‘cure.’

Emeril Lagasse

Burger Joint: Burgers and More by Emeril, in Bethlehem, Pa.

Meat Theory: Burgers use different cuts or blends of meat.

Cooking Tip: Get a griddle very hot, about 375 degrees, and sear the patty, then reduce heat to cook through.

__________

The chefs are competing with several popular chains serving burgers that aren’t prepared by celebrities but are more upscale than fast food—such as restaurateur Danny Meyer’s Shake Shack, with units in New York City, Saratoga Springs, N.Y., and Miami, and Five Guys, with more than 600 units in 40 states.

Kevin Connaughton, a theatrical lighting designer, has laid out $12.60 for a customized burger at Mr. Keller’s Burger Bar in San Francisco three or four times since it opened last year. “Anywhere that doesn’t specialize in burgers, it’s hard to get it properly cooked,” he says. “It’s definitely a very good burger.”

Most of the celebrity burger joints sprinkle in some trappings of fine dining, while charging anywhere from a few dollars extra to twice as much as the average diner. Marc Burger makes its own spicy ketchup. Burger Bar serves a burger topped with foie gras and truffles for $60; the San Francisco location features a wine cellar.

Ambience varies. The chains from Mr. Flay, Mr. Samuelsson and Mr. Blais look like stylish diners, with hip touches like a loft ceiling or a wavy dining counter. Mr. Keller’s looks more like an old-fashioned bar and grill, with dark-wood paneling. Many are located inside malls, stores or casinos, where chefs can rely on high-volume foot traffic.

Few celebrity chefs spend their days flipping burgers or working the fry-o-later. Instead, they design the concept, conceive the recipes, train the staff and check in regularly to maintain quality. Mr. Blais and Mr. Flay have staffed the top positions of their burger restaurants with cooks from their fine-dining operations.

Mr. Flay says before opening his first Burger Palace, he identified a fault with the hamburger: It has little textural contrast. So Mr. Flay created a concept he calls “crunchify,” which means putting a layer of crispy potato chips between meat and bun. He trademarked the term, as well as “Crunchburger.”

Three weeks ago, Mr. Flay called the chief executive of Cheesecake Factory and asked him to remove a “Double Cheese Crunch Burger,” with a layer of potato chips, from its menu.

“I’m going to protect this with all my might, because it’s the signature of my restaurant,” Mr. Flay says. (Cheesecake Factory says it was unaware of Mr. Flay’s trademark and will change the menu in the next printing cycle.)

The most expensive celebrity burger is usually a “Kobe” burger. Most menus specify that the beef used comes from American Wagyu cattle, a breed famous for its highly-marbled meat, meaning thin veins of fat run throughout the muscle, adding juiciness.

Beef experts are divided on the merit of Kobe burgers. Kobe beef contains fatty acids that give it a distinct taste and have a healthier profile than the fats in typical American beef, says Chris Kerth, professor of meat science at Texas A & M University. But the taste difference between ground Kobe and ground beef with an equally high fat content is so subtle, consumers probably can’t notice it, says Edgar Chambers IV, Kansas State University professor of food science.

Mr. Blais, who serves a Kobe burger, agrees that the unique marbling is lost in a hamburger but says Kobe beef is still a good choice for people who love a burger with abundant, tasty fat. His $39 Japanese Kobe burger consists of about 30% fat.

Chefs have their own special blends of beef cuts, such as short rib, sirloin or brisket.

“You’re creating a story and people love to hear stories,” says Mr. Keller, who uses ground chuck. Mr. Blais says his blend, which includes hanger steak, is the result of much research and study, including meals at rival Burger Bar, BLT Burger, Shake Shack and Five Guys.

“You get kind of tired of burgers after so much R & D,” Mr. Blais says. There’s minimal scientific research to guide chefs on the flavor differences among various meat cuts once they are ground.

“Grass fed” beef shows up in celebrity burgers—and often costs a little extra. Grass-fed beef contains healthier fats than typical grain-fed beef and is trendy in food circles partly because of a reputation for being better for the environment (although that is a question subject to scientific debate).

Mr. Tourondel says his grass-fed burger is a big hit but he personally doesn’t like it. “Too lean, too dry,” says the chef, who ordinarily smears softened butter onto his burger patties before cooking.

Katy McLaughlin, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704312504575618450888182376.html

Sweet Smell of Success

The title of Tilar J. Mazzeo’s “The Secret of Chanel No. 5” suggests that there is some hidden truth behind one of the most famous fragrances in the world. There may be, for all we know. There is certainly a lot of Chanel lore that is unfamiliar to most of us.

The perfume’s formula, for instance, was not entirely original. It was based on a fragrance made in 1914 to honor Russian royalty. More surprising is the fact that, for part of World War II, this paragon of French fragrance was produced in Hoboken, N.J. Most off-putting, though, is the news that the perfume’s creator—who would see Chanel No. 5 turned into a cultural totem in the U.S. by G.I.’s who brought it home from Paris as a fancy gift for their wives and girlfriends—spent the Occupation holed up at the Paris Ritz with a German officer as her lover.

Gabrielle “Coco” Chanel’s affinity for the Germans—she made two trips to Berlin during the war—did nothing to dent the perfume’s appeal. Since its launch in 1921, Chanel No. 5 has seemed almost invulnerable to any force that might damage its world-wide popularity. Though its cachet has gone up and down over the years, it remains hugely popular. Les Parfums Chanel doesn’t reveal sales figures, but Ms. Mazzeo says that a bottle sells somewhere in the world every 30 seconds, with annual revenue estimated at $100 million.

What accounts for the continuing fascination with a product that is nine decades old? Ms. Mazzeo explains with a combination of engaging historical detail and at times overwrought drama.

Orphaned at an early age, Coco Chanel grew up in the convent abbey called Aubazine in southwestern France. Ms. Mazzeo contends that the abbey’s scents, aesthetic minimalism and even its numerical patterns—the place teemed with five-pointed stars and pentagon shapes, the author says—made a lifelong impression on the young girl.

Chanel’s entry into business began in 1909 when she set up a millinery shop in Paris. The boutique was a success, prompting her to open a seaside store in Deauville, where she introduced a sportswear line in 1913 that bore the hallmarks of a style—simple and chic—that would turn the Chanel brand into an international fashion powerhouse.

A few years later, upset by a break-up with her wealthy British boyfriend, Arthur “Boy” Capel, and his subsequent death in an automobile accident, Chanel focused her energies on establishing a perfume line. Ms. Mazzeo says, in a typically feverish passage: “The perfume she would create had everything to do with the complicated story of her sensuality, with the heart-breaking loss of Boy in his car crash, and with everything that had come before. In crafting this scent, she would return to her emotional ground zero.”

Chanel was introduced to perfumer Ernest Beaux, who had worked in Moscow for the fragrance house A. Rallet & Co. While there, he had created a scent that was intended to celebrate Catherine the Great. But the timing, Ms. Mazzeo notes, was not right: “A perfume named after a German-born empress of Russia was doomed in 1914.”

Marilyn Monroe

Working with Chanel, Beaux used his Catherine the Great formula to capture the qualities that Chanel was looking for in her product: It would have to be seductive and expensive, she said, and “a modern work of art and an abstraction.” A perfume based on the scent of a particular flower—which at the time had the power to define its wearer as a respectable woman (rose) or a showgirl (jasmine)—would not do. “I want to give women an artificial perfume,” Chanel once said. “Yes, I do mean artificial, like a dress, something that has been made. I don’t want a rose or a lily of the valley, I want a perfume that is a composition.”

The composition she and Beaux arrived at had strong notes of rose and jasmine, balanced by what was, in the 1920s, a new fragrance technology: aldehydes. Ms. Mazzeo neatly explains that aldehydes are “molecules with a very particular kind of arrangement among their oxygen, hydrogen and carbon atoms, and they are a stage in the natural process that happens when exposure turns an alcohol to an acid.” Aldehydes provide a “clean” scent and intensify other fragrances.

Chanel’s perfume was not the first to use aldehydes, but it was the first to use them in large proportions. The innovation led to a new category of fragrance, the floral-aldehydics, which combine the scents of flowers and aldehydes.

The story of how Coco Chanel decided what to name the perfume has often been told: Beaux supposedly presented her with 10 vials of fragrance, and she chose the fifth one. But in Ms. Mazzeo’s telling, Chanel picked the fifth vial and called her perfume Chanel No. 5 because, if we believe the purple prose, the “special talisman” held all manner of significance for her. Even Boy Capel regarded five as his “magic number,” according to the author.

The fragrance was an immediate success. Changes to the formula “have been only minor and only when absolutely required,” writes Ms. Mazzeo, as when a type of chemically unstable musk used in the perfume was banned in the 1980s.

By then, Chanel No. 5 had long been unconnected to the Chanel business interests: Coco Chanel sold Les Parfums Chanel—an enterprise separate from her fashion house—in 1924 to French industrialists Paul and Pierre Wertheimer, who had a large perfume manufacturing and distribution operation. Chanel, who retained a 10% interest, was seeking a world-wide market for the perfume. That goal was attained, but Chanel came to bitterly regret the decision. At times over the following decades she tried and failed to win back the company, even resorting to disparaging the perfume. By the time of her death in 1971 at age 87 she had reached a settlement with the owners. Corporate squabbles aside, Chanel No. 5 has endured. Remarkably little about it has changed since 1921. And if its history tells us anything, little will.

Ms. Catton writes the Culture City column for the Journal’s Greater New York section.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704312504575618581112340888.html

Good Gracious

Surrounded by beautiful things from a tender age (her mother owned an antiques shop in her native New Orleans), interior designer Suzanne Rheinstein has made a career of showcasing them.

Interior designer Suzanne Rheinstein

Her Los Angeles store, Hollyhock—a destination for fine antiques and new decorative objects—and the homes she creates for clients both display her impeccable polish and a talent for mixing old and new.

On the release of her book “At Home: A Style for Today With Things From the Past,” we pick her estimable brain for ideas decorative and otherwise.

One of my decorating tricks is using fabrics on the wrong side. Certain materials seem too bright and strong on the right side but when I turn them over they’re often very subtle and interesting.

Why save your special things like “good” china and “good” silver for once a year? Joan Didion once said, “every day is all there is.”

My favorite shopping street in the world is Magazine Street in New Orleans. There are lots of great antique shops and wonderful little children’s stores and when you finish there’s lunch at Lilette.

A room decorated by Ms. Rheinstein

I always have jugs of flowers or branches of berries or pretty leaves. The great thing about leaves is that you can just leave them and a month later they still look interesting.

I’m pro scented candles. This time of year I’m burning Michael Smith’s Angkor when I’m in New York and Diptyque’s Cannelle in L.A.

When I write letters I use engraved cards from The Printery in Oyster Bay.

My all-time favorite flowers are black dahlias and Rêve d’Or roses, which have kind of a pinky buff-y color.

A Rêve d’Or rose

In the winter I use black beeswax candles by Del Mar on the dinner table. Black candles were used in Regency times and I like that they don’t stick out like white ones.

The tackiest thing I love is Velveeta melted in a pan with Ro-Tel, which is diced tomatoes with hot chilis in it. I serve it in a chafing dish with Fritos.

The Printery cards

My favorite room in the world is Pauline de Rothschild’s bedroom in Paris, which features this fantastic juxtaposition of a very spare, contemporary metal canopy bed she designed with green 18th-century Chinese wallpaper.

Pauline de Rothschild’s bedroom

I’m not one for hiding the television set. I think it should be where you watch it. Most of ours are in bookshelves and surrounded by books. The one place I really don’t like it is above a mantelpiece because it ruins your enjoyment of both fire and TV.

Right now I’m reading “Artempo: Where Time Becomes Art,” by Axel Vervoordt. The man has the most amazing sense of art and style. I believe his influence will endure long after the flood of China-made Belgian-esque furniture has ruined that look for many people.

To keep people from using their phones at dinner I think there should be a sort of seventh-inning stretch where everyone has 10 minutes to use their gadgets. Really, the rules of common courtesy should prevail, but these days courtesy is uncommon.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703514904575602642316191692.html

The Meat Eaters

Viewed from a distance, the natural world often presents a vista of sublime, majestic placidity. Yet beneath the foliage and hidden from the distant eye, a vast, unceasing slaughter rages. Wherever there is animal life, predators are stalking, chasing, capturing, killing, and devouring their prey. Agonized suffering and violent death are ubiquitous and continuous. This hidden carnage provided one ground for the philosophical pessimism of Schopenhauer, who contended that “one simple test of the claim that the pleasure in the world outweighs the pain…is to compare the feelings of an animal that is devouring another with those of the animal being devoured.”

The continuous, incalculable suffering of animals is also an important though largely neglected element in the traditional theological “problem of evil” — the problem of reconciling the existence of evil with the existence of a benevolent, omnipotent god. The suffering of animals is particularly challenging because it is not amenable to the familiar palliative explanations of human suffering. Animals are assumed not to have free will and thus to be unable either to choose evil or deserve to suffer it. Neither are they assumed to have immortal souls; hence there can be no expectation that they will be compensated for their suffering in a celestial afterlife. Nor do they appear to be conspicuously elevated or ennobled by the final suffering they endure in a predator’s jaws. Theologians have had enough trouble explaining to their human flocks why a loving god permits them to suffer; but their labors will not be over even if they are finally able to justify the ways of God to man. For God must answer to animals as well.

If I had been in a position to design and create a world, I would have tried to arrange for all conscious individuals to be able to survive without tormenting and killing other conscious individuals.  I hope most other people would have done the same.  Certainly this and related ideas have been entertained since human beings began to reflect on the fearful nature of their world — for example, when the prophet Isaiah, writing in the 8th century B.C.E., sketched a few of the elements of his utopian vision.  He began with people’s abandonment of war: “They shall beat their swords into plowshares, and their spears into pruning hooks: nation shall not lift up sword against nation.”  But human beings would not be the only ones to change; animals would join us in universal veganism: “The wolf also shall dwell with the lamb, and the leopard shall lie down with the kid; and the calf and the young lion and the fatling together; and the little child shall lead them.  And the cow and the bear shall feed; their young ones shall lie down together; and the lion shall eat straw like the ox.” (Isaiah 2: 4 and 11: 6-7)

Isaiah was, of course, looking to the future rather than indulging in whimsical fantasies of doing a better job of Creation, and we should do the same.  We should start by withdrawing our own participation in the mass orgy of preying and feeding upon the weak.

Our own form of predation is of course more refined than those of other meat-eaters, who must capture their prey and tear it apart as it struggles to escape.  We instead employ professionals to breed our prey in captivity and prepare their bodies for us behind a veil of propriety, so that our sensibilities are spared the recognition that we too are predators, red in tooth if not in claw (though some of us, for reasons I have never understood, do go to the trouble to paint their vestigial claws a sanguinary hue).  The reality behind the veil is, however, far worse than that in the natural world.  Our factory farms, which supply most of the meat and eggs consumed in developed societies, inflict a lifetime of misery and torment on our prey, in contrast to the relatively brief agonies endured by the victims of predators in the wild.  From the moral perspective, there is nothing that can plausibly be said in defense of this practice.  To be entitled to regard ourselves as civilized, we must, like Isaiah’s morally reformed lion, eat straw like the ox, or at least the moral equivalent of straw.

But ought we to go further?  Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.  Or suppose that we could intervene genetically, so that currently carnivorous species would gradually evolve into herbivorous ones, thereby fulfilling Isaiah’s prophecy.  If we could bring about the end of predation by one or the other of these means at little cost to ourselves, ought we to do it?

I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.  Our ignorance of the potential ramifications of our interventions in the natural world remains profound.  Efforts to eliminate certain species and create new ones would have many unforeseeable and potentially catastrophic effects.

Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations.  Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

Yet our relentless efforts to increase individual wealth and power are already causing massive, precipitate changes in the natural world.  Many thousands of animal species either have been or are being driven to extinction as a side effect of our activities.  Knowing this, we have thus far been largely unwilling even to moderate our rapacity to mitigate these effects.  If, however, we were to become more amenable to exercising restraint, it is conceivable that we could do so in a selective manner, favoring the survival of some species over others.  The question might then arise whether to modify our activities in ways that would favor the survival of herbivorous rather than carnivorous species.

At a minimum, we ought to be clear in advance about the values that should guide such choices if they ever arise, or if our scientific knowledge ever advances to a point at which we could seek to eliminate, alter, or replace certain species with a high degree of confidence in our predictions about the short- and long-term effects of our action.  Rather than continuing to collide with the natural world with reckless indifference, we should prepare ourselves now to be able to act wisely and deliberately when the range of our choices eventually expands. 

The suggestion that we consider whether and how we might exercise control over the prospects of different animal species, perhaps eventually selecting some for extinction and others for survival in accordance with our moral values, will undoubtedly strike most people as an instance of potentially tragic hubris, presumptuousness on a cosmic scale.  The accusation most likely to be heard is that we would be “playing God,” impiously usurping prerogatives that belong to the deity alone.  This has been a familiar refrain in the many instances in which devotees of one religion or another have sought to obstruct attempts to mitigate human suffering by, for example, introducing new medicines or medical practices, permitting and even facilitating suicide, legalizing a constrained practice of euthanasia, and so on.  So it would be surprising if this same claim were not brought into service in opposition to the reduction of suffering among animals as well.  Yet there are at least two good replies to it.

One is that it singles out deliberate, morally-motivated action for special condemnation, while implicitly sanctioning morally neutral action that foreseeably has the same effects as long as those effects are not intended.  One plays God, for example, if one administers a lethal injection to a patient at her own request in order to end her agony, but not if one gives her a largely ineffective analgesic only to mitigate the agony, though knowing that it will kill her as a side effect.  But it is hard to believe that any self-respecting deity would be impressed by the distinction.  If the first act encroaches on divine prerogatives, the second does as well.

The second response to the accusation of playing God is simple and decisive.  It is that there is no deity whose prerogatives we might usurp.  To the extent that these matters are up to anyone, they are up to us alone.  Since it is too late to prevent human action from affecting the prospects for survival of many animal species, we ought to guide and control the effects of our action to the greatest extent we can in order to bring about the morally best, or least bad, outcomes that remain possible.

Another equally unpersuasive objection to the suggestion that we ought to eliminate carnivorism if we could do so without major ecological disruption is that this would be “against Nature.”  This slogan also has a long history of deployment in crusades to ensure that human cultures remain primitive.  And like the appeal to the sovereignty of a deity, it too presupposes an indefensible metaphysics.  Nature is not a purposive agent, much less a wise one.  There is no reason to suppose that a species has special sanctity simply because it arose in the natural process of evolution.

Many people believe that what happens among animals in the wild is not our responsibility, and indeed that what they do among themselves is none of our business.   They have their own forms of life, quite different from our own, and we have no right to intrude upon them or to impose our anthropocentric values on them. 

There is an element of truth in this view, which is that our moral reason to prevent harm for which we would not be responsible is weaker than our reason not to cause harm.  Our primary duty with respect to animals is therefore to stop tormenting and killing them as a means of satisfying our desire to taste certain flavors or to decorate our bodies in certain ways.  But if suffering is bad for animals when we cause it, it is also bad for them when other animals cause it.  That suffering is bad for those who experience it is not a human prejudice; nor is an effort to prevent wild animals from suffering a moralistic attempt to police the behavior of other animals.  Even if we are not morally required to prevent suffering among animals in the wild for which we are not responsible, we do have a moral reason to prevent it, just as we have a general moral reason to prevent suffering among human beings that is independent both of the cause of the suffering and of our relation to the victims.  The main constraint on the permissibility of acting on our reason to prevent suffering is that our action should not cause bad effects that would be worse than those we could prevent.

That is the central issue raised by the question of whether we ought to try to eliminate carnivorism.  Because the elimination of carnivorism would require the extinction of carnivorous species, or at least their radical genetic alteration, which might be tantamount to extinction, it might well be that the losses in value would outweigh any putative gains.  Not only are most or all animal species of some instrumental value, but it is also arguable that all species have intrinsic value.  As Ronald Dworkin has observed, “we tend to treat distinct animal species (though not individual animals) as sacred.  We think it very important, and worth a considerable economic expense, to protect endangered species from destruction.”  When Dworkin says that animal species are sacred, he means that their existence is good in a way that need not be good for anyone; nor is it good in the sense that it would be better if there were more species, so that we would have reason to create new ones if we could.  “Few people,” he notes, “believe the world would be worse if there had always been fewer species of birds, and few would think it important to engineer new bird species if that were possible.  What we believe important is not that there be any particular number of species but that a species that now exists not be extinguished by us.”

The intrinsic value of individual species is thus quite distinct from the value of species diversity.  It also seems to follow from Dworkin’s claims that the loss involved in the extinction of an existing species cannot be compensated for, either fully or perhaps even partially, by the coming-into-existence of a new species.

The basic issue, then, seems to be a conflict between values: prevention of suffering and preservation of animal species.  It is relatively uncontroversial that suffering is intrinsically bad for those who experience it, even if occasionally it is also instrumentally good for them, as when it has the purifying, redemptive effects that Dostoyevsky’s characters so often crave.  Nor is it controversial that the extinction of an animal species is normally instrumentally bad.  It is bad for the individual members who die and bad for other individuals and species that depended on the existence of the species for their own well-being or survival.  Yet the extinction of an animal species is not necessarily bad for its individual members.  (To indulge in science fiction, suppose that a chemical might be introduced into their food supply that would induce sterility but also extend their longevity.)  And the extinction of a carnivorous species could be instrumentally good for all those animals that would otherwise have been its prey.  That simple fact is precisely what prompts the question whether it would be good if carnivorous species were to become extinct.

The conflict, therefore, must be between preventing suffering and respecting the alleged sacredness — or, as I would phrase it, the impersonal value — of carnivorous species.  Again, the claim that suffering is bad for those who experience it and thus ought in general to be prevented when possible cannot be seriously doubted.  Yet the idea that individual animal species have value in themselves is less obvious.  What, after all, are species?  According to Darwin, they “are merely artificial combinations made for convenience.”  They are collections of individuals distinguished by biologists that shade into one another over time and sometimes blur together even among contemporaneous individuals, as in the case of ring species.  There are no universally agreed criteria for their individuation.  In practice, the most commonly invoked criterion is the capacity for interbreeding, yet this is well known to be imperfect and to entail intransitivities of classification when applied to ring species.  Nor has it ever been satisfactorily explained why a special sort of value should inhere in a collection of individuals simply by virtue of their ability to produce fertile offspring.  If it is good, as I think it is, that animal life should continue, then it is instrumentally good that some animals can breed with one another.  But I can see no reason to suppose that donkeys, as a group, have a special impersonal value that mules lack.

Even if animal species did have impersonal value, it would not follow that they were irreplaceable.  Since animals first appeared on earth, an indefinite number of species have become extinct while an indefinite number of new species have arisen.  If the appearance of new species cannot make up for the extinction of others, and if the earth could not simultaneously sustain all the species that have ever existed, it seems that it would have been better if the earliest species had never become extinct, with the consequence that the later ones would never have existed.  But few of us, with our high regard for our own species, are likely to embrace that implication.

Here, then, is where matters stand thus far.  It would be good to prevent the vast suffering and countless violent deaths caused by predation.  There is therefore one reason to think that it would be instrumentally good if  predatory animal species were to become extinct and be replaced by new herbivorous species, provided that this could occur without ecological upheaval involving more harm than would be prevented by the end of predation.  The claim that existing animal species are sacred or irreplaceable is subverted by the moral irrelevance of the criteria for individuating animal species.  I am therefore inclined to embrace the heretical conclusion that we have reason to desire the extinction of all carnivorous species, and I await the usual fate of heretics when this article is opened to comment.

Jeff McMahan is professor of philosophy at Rutgers University and a visiting research collaborator at the Center for Human Values at Princeton University. He is the author of many works on ethics and political philosophy, including “The Ethics of Killing: Problems at the Margins of Life” and “Killing in War.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

Speech and Harm

As every public figure knows, there are certain words that cannot be uttered without causing shock or offense. These words, commonly known as “slurs,” target groups on the basis of race, nationality, religion, gender, sexual orientation, immigration status and sundry other demographics.  Many of us were reminded of the impact of such speech in August, when the radio host Dr. Laura Schlessinger repeatedly uttered a racial slur on a broadcast of her show. A public outcry followed and ultimately led to her resignation. Many such incidents of abuse and offense, often with much more serious consequences, seem to appear in the news daily.

We may at times convince ourselves, as Dr. Laura may have, that there are  inoffensive ways to use slurs.  But a closer look at the matter shows us that those ways are very rare. Slurs are in fact uniquely and stubbornly resistant to attempts to neutralize their power to hurt or offend. 

To be safe, we may ask ourselves how a targeted member, perhaps overhearing a slur,  would react to it. Doing so, we will almost always find that what may have seemed suitable most definitely is not.

But why are slurs so offensive? And why are some more offensive than others?  Even different slurs for the same group vary in intensity of contempt. How can words fluctuate both in their status as slurs and in their power to offend? Members of targeted groups themselves are not always offended by slurs — consider the uses of appropriated or reclaimed slurs among African-Americans and gay people.

The consensus answer among philosophers to the first question is that slurs, as a matter of convention, signal negative attitudes towards targeted groups. Those who pursue this answer are committed to the view that slurs carry offensive content or meaning; they disagree only over the mechanisms of implementation.  An alternative proposal is that slurs are prohibited words not on account of any particular content they get across, but rather because of relevant edicts surrounding their prohibition. This latter proposal itself raises a few pertinent questions: How do words become prohibited? What’s the relationship between prohibition and a word’s power to offend? And why is it sometimes appropriate to flout such prohibitions? 

Let’s start with conventional meaning.

Does a slur associated with a racial or ethnic group mean something different from the neutral conventional name for the group, for example, African-American or Hispanic?  The Oxford English Dictionary says a slur is a “deliberate slight; an expression or suggestion of disparagement or reproof.” But this definition fails to distinguish specific slurs from one another, or even distinct slurs for the same group. Still, from this definition we may infer that slurs supplement the meanings of their neutral counterparts with something offensive about whomever they reference. This information, however meager, suffices to isolate a flaw in trying to pin the offensiveness of a slur on its predicative meaning.

Anyone who wants to disagree with what “Mary is Hispanic” ascribes to Mary can do so with a denial (“Mary is not Hispanic.”).  If the use of a slur was offensive on account of what it predicates of its subject, we should be able to reject its offense simply by denying it. But replacing “Hispanic” with a slur on a Hispanic person does not work — it is no less inflammatory in the denial than the original is.  Therefore, however slurs offend, it is not through what they predicate of their subjects.

Another fascinating aspect of slurs that challenges the view that their meaning renders them offensive pertains to their effect in indirect speech.  Normally, an utterance can be correctly reported by re-using the very expressions being reported on, as in a quote in a book or a newspaper. What better insurance for accuracy can there be in reporting another than to re-use her words? Yet any such report not only fails to capture the original offense, but interestingly, it guarantees a second offense by whoever is doing the reporting.  What’s gone wrong? We expect indirect reports to be of others, not of ourselves. This limit on reporting slurs is significant. Is the offense of another’s slurring inescapable?  Is it possible that we can recognize the offense, but not re-express it?  How odd.

Is there someplace else to look for an account of why slurs are offensive? Could it be a matter of tone? Unlike conventionalized content, tone is supposed to be subjective.  Words can be different in tone but share content.  Might tone distinguish slurs from neutral counterparts?  No one can deny that the use of a slur can arouse subjective images and feelings in us that a use of its neutral counterpart does  not, but as an account of the difference in offensive punch it can’t be the whole story.

Consider a xenophobe who uses only slurs to pick out a target group. He may harbor no negative opinions towards its members; he may use slurs only among like-minded friends when intending to express affection for Hispanics or admiration for Asians, but these uses remain pertinently offensive. The difference between a slur and its neutral counterpart cannot be a matter of subjective feel.

A major problem with any account that tries to explain the offensive nature of a slur by invoking content is how it can explain the general exhortation against even mentioning slurs. A quoted occurrence of a slur can easily cause alarm and offense. Witness the widespread preference in some media for using phrases that describe slurs rather than using or mentioning them. This is surprising, since quotation is usually just about the form or shape of a word. You can see this in a statement like “ ‘Love’ is a four-letter word.” This suggests that it is something about the form or shape of other four-letter words that makes them unprintable.

Another challenge to the content view is raised by the offensive potential of incidental uses of slurs, as witnessed by the Washington, D.C., official who wound up resigning his job over the outcry that his use of the word “niggardly” provoked.  In 1999, the head of the Office of Public Advocate in Washington, D.C., used it in a discussion with a black colleague. He was reported as saying, “I will have to be niggardly with this fund because it’s not going to be a lot of money.”  Despite a similarity in spelling, his word has no semantic or etymological tie to the slur it may invoke; mere phonetic and orthographic overlap caused as much of a stir as standard offensive language.  This is not an accidental use of an ambiguous or unknown slur, but an incidental one. Or take the practice of many newspapers (in case you haven’t noticed my own contortions in presenting these materials) whereby slurs cannot even be canonically described, as in “the offensive word that begins with a certain letter….”

What conclusions should we draw from these constraints? One suggestion is that uses of slurs (and their canonical descriptions) are offensive simply because they sometimes constitute violations of their very prohibition.  Just as whoever violates a prohibition risks offending those who respect it, perhaps the fact that slurs are prohibited explains why we cannot escape the affect, hatred and negative association tied to them and why their occurrences in news outlets and even within quotation marks can still inflict pain. Prohibited words are usually banished wherever they occur. This explains why bystanders (even when silent) are uncomfortable, often embarrassed, when confronted by a slur. Whatever offenses these confrontations exact, the audience risks complicity, as if the offense were thrust upon them, not because of its content, but because of a responsibility we all incur in ensuring that certain violations are prevented; when they are not, they must be reported and possibly punished. Their occurrences taint us all.

In short, Lenny Bruce got it right when he declared “the suppression of the word gives it the power, the violence, the viciousness.”  It is impossible to reform a slur until it has been removed from common use.

Words become prohibited for all sorts of reasons — by a directive or edict of an authoritative figure; or because of a tainted history of associations, perhaps through conjuring up past pernicious or injurious events. The history of a word’s uses, combined with reasons of self-determination, is exactly how “colored,” once used by African-Americans self-referentially, became prohibited, and so, offensive.  A slur may become prohibited because of who introduces or uses it.  This is the sentiment of a high school student who objected to W.E.B. Du Bois’s use of “Negro” because it “is a white man’s word.”

What’s clear is that no matter what its history, no matter what it means or communicates, no matter who introduces it, regardless of past associations, once relevant individuals with sufficient authority declare a word a slur, it is one.  The condition under which this occurs is not easy to predict in advance. When the Rev. Jesse Jackson proclaimed at the 1988 Democratic National Convention that from then on “black” should not be used, his effort failed. Many African-Americans carried positive associations with the term (“Black Panthers,”  “Black Power,” “I’m black and I’m proud.”) and so Jackson’s attempt at prohibition did not stick.

In appropriation, targeted members can opt to use a slur without violating its prohibition because membership provides a defeasible escape clause; most prohibitions include such clauses.  Oil embargoes permit exportation, just not importation.  Sanctions invariably exclude medical supplies. Why shouldn’t prohibitions against slurs and their descriptions exempt certain individuals under certain conditions for appropriating a banished word?  Targeted groups can sometimes inoffensively use slurs among themselves.  The NAACP, for example, continues to use “Colored” relatively prominently (on their letterhead, on their banners, etc.).

Once appropriation is sufficiently widespread, it might come to pass that the prohibition eases, permitting — under regulated circumstances — designated outside members access to an appropriated use. (For example, I have much more freedom in discussing the linguistics of slurs inside scholarly journals than I do here.) Should this practice become sufficiently widespread, the slur might lose its intensity.  How escape clauses are fashioned and what sustains them is a complex matter — one I cannot take up here.

Ernie Lepore, a professor of philosophy and co-director of the Center for Cognitive Science at Rutgers University, writes on language and mind. More of his work, including the study, “Slurring Words,” with Luvell Anderson, can be found here.

___________

Full article: http://opinionator.blogs.nytimes.com/2010/11/07/speech-and-harm/

The Cheat: The Greens Party

There was a party of dudes from Montana sitting at a table in Vij’s in Vancouver, British Columbia, getting ready to graze. They were businessmen, in the city for a conference, and the hotel had sent them out to South Granville Street to wait for a table, for Vij’s takes no reservations and never has.

The men sat in the bar and had a few beers, and after a table opened they took it and looked at the menu and ordered, each of them asking for a variation on the same theme: some mutton kebabs to start, the beef tenderloin after; the mutton kebabs to start, the lamb Popsicles after.

Their waitress was Meeru Dhalwala, who is also the chef at Vij’s and, with her husband, Vikram Vij, an owner of the restaurant. At that point she had spent more than a decade running the kitchen at Vij’s — she arrived in 1995 — but she had never worked in the dining room, interacting with customers, dealing with American men ordering meat.

Dhalwala took the orders and paused, then asked the men if they wanted any vegetables. They said no, almost instantaneously. “But you’ve ordered meat on meat,” she said. There was a collective shrug. They were from Montana.

Dhalwala stared at them. She was in a kind of shock. Vij’s is at once an excellent restaurant and a curious one, Indian without being doctrinaire about it, utopian without being political. No men work in the kitchen at Vij’s: 54 women, with no turnover save for during maternity leaves. No one, Dhalwala has said, has ever been fired. She feels a deep and important connection to both the food she makes and the business that she and her husband run. These boys from Montana were freaking her out.

From an e-mail she sent me: “I told them that as the creator of the food they were about to eat, I could not in good faith give them what they wanted and that they had to order some fiber and vitamins for their dinner. They were as dumbfounded as I was. ‘But we don’t like vegetables,’ they said. So I made a deal with them that I would choose a large plate of a side vegetable, and if they didn’t like it, I would pay for their dinner.”

The coconut kale we are cooking this weekend was that dish, and the Montana men paid for it happily. They complimented Dhalwala on their way out the door: Good vegetables! The leaves are rich and fiery, sweet and salty all at once. Important to the Montanans: they taste as if cut through with blood and fat, as if they were steak and fries combined. The grilling softens the texture of the kale without overcooking it or removing its essential structure — or the mild bitterness of the leaves — while the marinade of coconut milk, cayenne, salt and lemon juice balances out the flavors, caramelizing in the heat.

Made over a charcoal fire or even in a wickedly hot pan, it becomes a dish of uncommon flavor, the sort of thing you could eat on its own, with only a mound of basmati rice for contrast.

But you know, why would you? Here in America, after all, we will always be from Montana somehow.

At Vij’s, Dhalwala bathes grilled lamb chops — little meat Popsicles taken off the rack — in a rich, creamy fenugreek curry and serves them with turmeric-hued potatoes. This is a very, very good dish. But we are cheating here, Sunday cooks on the run. We are grilling kale, and perhaps for the first time. So let us keep things simple. To highlight the flavor of the greens, we will embrace austerity for our lamb, grilling it off under nothing but a garlic rub and showers of salt and freshly ground pepper. (You can sear the meat on the stovetop as well, in a cast-iron pan, and finish it in a hot oven.)

Then, bouncing back toward complicated flavors once more, we have a simple chickpea curry that Dhalwala cooks with star anise and chopped dates, which combine into an autumnal darkness that lingers on the tongue. Save for the business of messing around with black cardamom (and finding the black cardamom — you can always head to penzeys.com, or Amazon), it takes only a matter of moments to assemble the ingredients and cook, allowing some time at the end of the process for the flavors to meld together in the pot.

Back to Dhalwala again. She came to professional cooking late, after a career in nongovernmental organizations, the dance and heartache of third-world development. To her, cooking is a spiritual act, a simple one. You can taste this in every dish she serves. “There is an inner soul of cooking that is for nurturing and community,” she wrote to me, “and all my recipes stem from this place.”

So if you don’t like the lamb chops, that is on me.

Sam Sifton, New York Times

__________

Full article and photos: http://www.nytimes.com/2010/11/07/magazine/07food-t-000.html