Moral Camouflage or Moral Monkeys?

After being shown proudly around the campus of a prestigious American university built in gothic style, Bertrand Russell is said to have exclaimed, “Remarkable. As near Oxford as monkeys can make.” Much earlier, Immanuel Kant had expressed a less ironic amazement, “Two things fill the mind with ever new and increasing admiration and awe … the starry heavens above and the moral law within.” Today many who look at morality through a Darwinian lens can’t help but find a charming naïveté in Kant’s thought. “Yes, remarkable. As near morality as monkeys can make.”

So the question is, just how near is that? Optimistic Darwinians believe, near enough to be morality. But skeptical Darwinians won’t buy it. The great show we humans make of respect for moral principle they see as a civilized camouflage for an underlying, evolved psychology of a quite different kind.

This skepticism is not, however, your great-grandfather’s Social Darwinism, which saw all creatures great and small as pitted against one another in a life-or-death struggle to survive and reproduce — “survival of the fittest.” We now know that such a picture seriously misrepresents both Darwin and the actual process of natural selection. Individuals come and go, but genes can persist for 1,000 generations or more. Individual plants and animals are the perishable vehicles that genetic material uses to make its way into the next generation (“A chicken is an egg’s way of making another egg”). From this perspective, relatives, who share genes, are to that extent not really in evolutionary competition; no matter which one survives, the shared genes triumph. Such “inclusive fitness” predicts the survival, not of selfish individuals, but of “selfish” genes, which tend in the normal range of environments to give rise to individuals whose behavior tends to propel those genes into the future.

A place is thus made within Darwinian thought for such familiar phenomena as family members sacrificing for one another — helping when there is no prospect of payback, or being willing to risk life and limb to protect one’s people or avenge harms done to them.

But what about unrelated individuals? “Sexual selection” occurs whenever one must attract a mate in order to reproduce. Well, what sorts of individuals are attractive partners? Henry Kissinger claimed that power is the ultimate aphrodisiac, but for animals who bear a small number of young over a lifetime, each requiring a long gestation and demanding a great deal of nurturance to thrive into maturity, potential mates who behave selfishly, uncaringly, and unreliably can lose their chance. And beyond mating, many social animals depend upon the cooperation of others for protection, foraging and hunting, or rearing the young. Here, too, power can attract partners, but so can a demonstrable tendency to behave cooperatively and share benefits and burdens fairly, even when this involves some personal sacrifice — what is sometimes called “reciprocal altruism.” Baboons are notoriously hierarchical, but Joan Silk, a professor of anthropology at UCLA, and her colleagues recently reported a long-term study of baboons in which they found that, among females, maintaining strong, equal, enduring social bonds — even when the individuals were not related — can promote individual longevity more effectively than gaining dominance rank, and can enhance the survival of progeny.

A picture thus emerges of selection for “proximal psychological mechanisms”— for example, individual dispositions like parental devotion, loyalty to family, trust and commitment among partners, generosity and gratitude among friends, courage in the face of enemies, intolerance of cheaters — that make individuals into good vehicles, from the gene’s standpoint, for promoting the “distal goal” of enhanced inclusive fitness.

Why would human evolution have selected for such messy, emotionally entangling proximal psychological mechanisms, rather than produce yet more ideally opportunistic vehicles for the transmission of genes — individuals wearing a perfect camouflage of loyalty and reciprocity, but fine-tuned underneath to turn self-sacrifice or cooperation on or off exactly as needed? Because the same evolutionary processes would also be selecting for improved capacities to detect, pre-empt and defend against such opportunistic tendencies in other individuals — just as evolution cannot produce a perfect immune system, since it is just as busy improving the effectiveness of viral invaders. Devotion, loyalty, honesty, empathy, gratitude, and a sense of fairness are credible signs of value as a partner or friend precisely because they are messy and emotionally entangling, and so cannot simply be turned on and off by the individual to capture each marginal advantage. And keep in mind the small scale of early human societies, and Abraham Lincoln’s point about our power to deceive.

Why, then, aren’t we better — more honest, more committed, more loyal? There will always be circumstances in which fooling some of the people some of the time is enough; for example, when society is unstable or individuals mobile. So we should expect a capacity for opportunism and betrayal to remain an important part of the mix that makes humans into monkeys worth writing novels about. 

How close does all this take us to morality? Not all the way, certainly. An individual psychology primarily disposed to consider the interests of all equally, without fear or favor, even in the teeth of social ostracism, might be morally admirable, but simply wouldn’t cut it as a vehicle for reliable replication. Such pure altruism would not be favored in natural selection over an impure altruism that conferred benefits and took on burdens and risks more selectively — for “my kind” or “our kind.” This puts us well beyond pure selfishness, but only as far as an impure us-ishness. Worse, us-ish individuals can be a greater threat than purely selfish ones, since they can gang up so effectively against those outside their group. Certainly greater atrocities have been committed in the name of “us vs. them” than “me vs. the world.”

So, are the optimistic Darwinians wrong, and impartial morality beyond the reach of those monkeys we call humans? Does thoroughly logical evolutionary thinking force us to the conclusion that our love, loyalty, commitment, empathy, and concern for justice and fairness are always at bottom a mixture of selfish opportunism and us-ish clannishness? Indeed, is it only a sign of the effectiveness of the moral camouflage that we ourselves are so often taken in by it?

Speaking of what “thoroughly logical evolutionary thinking” might “force” us to conclude provides a clue to the answer. Think for a moment about science and logic themselves. Natural selection operates on a need-to-know basis. Between two individuals — one disposed to use scarce resources and finite capacities to seek out the most urgent and useful information and the other, heedless of immediate and personal concerns and disposed instead toward pure, disinterested inquiry, following logic wherever it might lead — it is clear which natural selection would tend to favor.

And yet, as even Darwinian skeptics about morality must concede, humans somehow have managed to redeploy and leverage their limited, partial, human-scale psychologies to develop shared inquiry, experimental procedures, technologies and norms of logic and evidence that have resulted in genuine scientific knowledge and responsiveness to the force of logic. This distinctively human “cultural evolution” was centuries in the making, and overcoming partiality and bias remains a constant struggle, but the point is that these possibilities were not foreclosed by the imperfections and partiality of the faculties we inherited. As Wittgenstein observed, crude tools can be used to make refined tools. Monkeys, it turns out, can come surprisingly near to objective science.

We can see a similar cultural evolution in human law and morality — a centuries-long process of overcoming arbitrary distinctions, developing wider communities, and seeking more inclusive shared standards, such as the Geneva Conventions and the Universal Declaration of Human Rights. Empathy might induce sympathy more readily when it is directed toward kith and kin, but we rely upon it to understand the thoughts and feelings of enemies and outsiders as well. And the human capacity for learning and following rules might have evolved to enable us to speak a native language or find our place in the social hierarchy, but it can be put into service understanding different languages and cultures, and developing more cosmopolitan or egalitarian norms that can be shared across our differences.

Within my own lifetime, I have seen dramatic changes in civil rights, women’s rights and gay rights. That’s just one generation in evolutionary terms. Or consider the way that empathy and the pressure of consistency have led to widespread recognition that our fellow animals should receive humane treatment. Human culture, not natural selection, accomplished these changes, and yet it was natural selection that gave us the capacities that helped make them possible. We must still struggle continuously to see to it that our widened empathy is not lost, that our sympathies remain engaged, our understandings enlarged, and our moral principles followed. But the point is that we have done this with our imperfect, partial, us-ish native endowment. Kant was right to be impressed. In our best moments, we can come surprisingly close to being moral monkeys.

Peter Railton is the Perrin Professor of Philosophy at the University of Michigan, Ann Arbor. His main areas of research are moral philosophy and the philosophy of science. He is a member of the American Academy of Arts and Sciences.


Your Move: The Maze of Free Will

You arrive at a bakery. It’s the evening of a national holiday. You want to buy a cake with your last 10 dollars to round off the preparations you’ve already made. There’s only one thing left in the store — a 10-dollar cake.

On the steps of the store, someone is shaking an Oxfam tin. You stop, and it seems quite clear to you — it surely is quite clear to you — that it is entirely up to you what you do next. You are — it seems — truly, radically, ultimately free to choose what to do, in such a way that you will be ultimately morally responsible for whatever you do choose. Fact: you can put the money in the tin, or you can go in and buy the cake. You’re not only completely, radically free to choose in this situation. You’re not free not to choose (that’s how it feels). You’re “condemned to freedom,” in Jean-Paul Sartre’s phrase. You’re fully and explicitly conscious of what the options are and you can’t escape that consciousness. You can’t somehow slip out of it.

You may have heard of determinism, the theory that absolutely everything that happens is causally determined to happen exactly as it does by what has already gone before — right back to the beginning of the universe. You may also believe that determinism is true. (You may also know, contrary to popular opinion, that current science gives us no more reason to think that determinism is false than that determinism is true.) In that case, standing on the steps of the store, it may cross your mind that in five minutes’ time you’ll be able to look back on the situation you’re in now and say truly, of what you will by then have done, “Well, it was determined that I should do that.” But even if you do fervently believe this, it doesn’t seem to be able to touch your sense that you’re absolutely morally responsible for what you do next.

The case of the Oxfam box, which I have used before to illustrate this problem, is relatively dramatic, but choices of this type are common. They occur frequently in our everyday lives, and they seem to prove beyond a doubt that we are free and ultimately morally responsible for what we do. There is, however, an argument, which I call the Basic Argument, which appears to show that we can never be ultimately morally responsible for our actions. According to the Basic Argument, it makes no difference whether determinism is true or false. We can’t be ultimately morally responsible either way.

The argument goes like this.

(1) You do what you do — in the circumstances in which you find yourself — because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

The key move is (3). Why can’t you be ultimately responsible for the way you are in any respect at all? In answer, consider an expanded version of the argument.

(a) It’s undeniable that the way you are initially is a result of your genetic inheritance and early experience.

(b) It’s undeniable that these are things for which you can’t be held to be in any way responsible (morally or otherwise).

(c) But you can’t at any later stage of life hope to acquire true or ultimate moral responsibility for the way you are by trying to change the way you already are as a result of genetic inheritance and previous experience.

(d) Why not? Because both the particular ways in which you try to change yourself, and the amount of success you have when trying to change yourself, will be determined by how you already are as a result of your genetic inheritance and previous experience.

(e) And any further changes that you may become able to bring about after you have brought about certain initial changes will in turn be determined, via the initial changes, by your genetic inheritance and previous experience.

There may be all sorts of other factors affecting and changing you. Determinism may be false: some changes in the way you are may come about as a result of the influence of indeterministic or random factors. But you obviously can’t be responsible for the effects of any random factors, so they can’t help you to become ultimately morally responsible for how you are.

Some people think that quantum mechanics shows that determinism is false, and so holds out a hope that we can be ultimately responsible for what we do. But even if quantum mechanics had shown that determinism is false (it hasn’t), the question would remain: how can indeterminism, objective randomness, help in any way whatever to make you responsible for your actions? The answer to this question is easy. It can’t.

And yet we still feel that we are free to act in such a way that we are absolutely responsible for what we do. So I’ll finish with a third, richer version of the Basic Argument that this is impossible.

(i) Interested in free action, we’re particularly interested in actions performed for reasons (as opposed to reflex actions or mindlessly habitual actions).

(ii) When one acts for a reason, what one does is a function of how one is, mentally speaking. (It’s also a function of one’s height, one’s strength, one’s place and time, and so on, but it’s the mental factors that are crucial when moral responsibility is in question.)

(iii) So if one is going to be truly or ultimately responsible for how one acts, one must be ultimately responsible for how one is, mentally speaking — at least in certain respects.

(iv) But to be ultimately responsible for how one is, in any mental respect, one must have brought it about that one is the way one is, in that respect. And it’s not merely that one must have caused oneself to be the way one is, in that respect. One must also have consciously and explicitly chosen to be the way one is, in that respect, and one must also have succeeded in bringing it about that one is that way.

(v) But one can’t really be said to choose, in a conscious, reasoned, fashion, to be the way one is in any respect at all, unless one already exists, mentally speaking, already equipped with some principles of choice, “P1” — preferences, values, ideals — in the light of which one chooses how to be.

(vi) But then to be ultimately responsible, on account of having chosen to be the way one is, in certain mental respects, one must be ultimately responsible for one’s having the principles of choice P1 in the light of which one chose how to be.

(vii) But for this to be so one must have chosen P1, in a reasoned, conscious, intentional fashion.

(viii) But for this to be so one must already have had some principles of choice P2, in the light of which one chose P1.

(ix) And so on. Here we are setting out on a regress that we cannot stop. Ultimate responsibility for how one is is impossible, because it requires the actual completion of an infinite series of choices of principles of choice.

(x) So ultimate, buck-stopping moral responsibility is impossible, because it requires ultimate responsibility for how one is, as noted in (iii).

Does this argument stop me feeling entirely morally responsible for what I do? It does not. Does it stop you feeling entirely morally responsible? I very much doubt it. Should it stop us? Well, it might not be a good thing if it did. But the logic seems irresistible …. And yet we continue to feel we are absolutely morally responsible for what we do, responsible in a way that we could be only if we had somehow created ourselves, only if we were “causa sui,” the cause of ourselves. It may be that we stand condemned by Nietzsche:

The causa sui is the best self-contradiction that has been conceived so far. It is a sort of rape and perversion of logic. But the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense. The desire for “freedom of the will” in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated; the desire to bear the entire and ultimate responsibility for one’s actions oneself, and to absolve God, the world, ancestors, chance, and society involves nothing less than to be precisely this causa sui and, with more than Baron Münchhausen’s audacity, to pull oneself up into existence by the hair, out of the swamps of nothingness … (“Beyond Good and Evil,” 1886).

Is there any reply? I can’t do better than the novelist Ian McEwan, who wrote to me: “I see no necessary disjunction between having no free will (those arguments seem watertight) and assuming moral responsibility for myself. The point is ownership. I own my past, my beginnings, my perceptions. And just as I will make myself responsible if my dog or child bites someone, or my car rolls backwards down a hill and causes damage, so I take on full accountability for the little ship of my being, even if I do not have control of its course. It is this sense of being the possessor of a consciousness that makes us feel responsible for it.”

Galen Strawson is professor of philosophy at Reading University and is a regular visitor at the philosophy program at the City University of New York Graduate Center. He is the author of “Selves: An Essay in Revisionary Metaphysics” (Oxford: Clarendon Press, 2009) and other books.



Islamophobia and Homophobia

As if we needed more evidence of America’s political polarization, last week Juan Williams gave the nation a Rorschach test. Williams said he gets scared when people in “Muslim garb” board a plane he’s on, and he promptly got (a) fired by NPR and (b) rewarded by Fox News with a big contract.

Suppose Williams had said something hurtful to gay people instead of to Muslims. Suppose he had said gay men give him the creeps because he fears they’ll make sexual advances. NPR might well have fired him, but would Fox News have chosen that moment to give him a $2-million pat on the back?

I don’t think so. Playing the homophobia card is costlier than playing the Islamophobia card. Or at least, the costs are more evenly spread across the political spectrum. In 2007, when Ann Coulter used a gay slur, she was denounced on the right as well as the left, and her stock dropped. Notably, her current self-promotion campaign stresses her newfound passion for gay rights.

Coulter’s comeuppance reflected sustained progress on the gay rights front. Only a few decades ago, you could tell an anti-gay joke on the Johnny Carson show — with Carson’s active participation — and no one would complain. (See postscript below for details.) The current “it gets better” campaign, designed to reassure gay teenagers that adulthood will be less oppressive than adolescence, amounts to a kind of double entendre: things get better not just over an individual’s life but over the nation’s life.

When we move from homophobia to Islamophobia, the trendline seems to be pointing in the opposite direction. This isn’t shocking, given 9/11 and the human tendency to magnify certain kinds of risk. (Note to Juan Williams: Over the past nine years about 90 million flights have taken off from American airports, and not one has been brought down by a Muslim terrorist. Even in 2001, no flights were brought down by people in “Muslim garb.”) 

Still, however “natural” this irrational fear, it’s dangerous. As Islamophobia grows, it alienates Muslims, raising the risk of homegrown terrorism — and homegrown terrorism heightens the Islamophobia, which alienates more Muslims, and so on: a vicious circle that could carry America into the abyss. So it’s worth taking a look at why homophobia is fading; maybe the underlying dynamic is transplantable to the realm of inter-ethnic prejudice.

Theories differ as to what it takes for people to build bonds across social divides, and some theories offer more hope than others.

One of the less encouraging theories grows out of the fact that both homophobia and Islamophobia draw particular strength from fundamentalist Christians. Maybe, this argument goes, part of the problem is a kind of “scriptural determinism.” If religious texts say that homosexuality is bad, or that people of other faiths are bad, then true believers will toe that line.

If scripture is indeed this powerful, we’re in trouble, because scripture is invoked by intolerant people of all Abrahamic faiths — including the Muslim terrorists who plant the seeds of Islamophobia. And, judging by the past millennium or two, God won’t be issuing a revised version of the Bible or the Koran anytime soon.

Happily, there’s a new book that casts doubt on the power of intolerant scripture: “American Grace,” by the social scientists Robert Putnam and David Campbell.

Three decades ago, according to one of the many graphs in this data-rich book, slightly less than half of America’s frequent churchgoers were fine with gay people freely expressing their views on gayness. Today that number is over 70 percent — and no biblical verse bearing on homosexuality has magically changed in the meanwhile. And these numbers actually understate the progress; over those three decades, church attendance was dropping for mainline Protestant churches and liberal Catholics, so the “frequent churchgoers” category consisted increasingly of evangelicals and conservative Catholics.

So why have conservative Christians gotten less homophobic? Putnam and Campbell favor the “bridging” model. The idea is that tolerance is largely a question of getting to know people. If, say, your work brings you in touch with gay people or Muslims — and especially if your relationship with them is collaborative — this can brighten your attitude toward the whole tribe they’re part of. And if this broader tolerance requires ignoring or reinterpreting certain scriptures, so be it; the meaning of scripture is shaped by social relations.

The bridging model explains how attitudes toward gays could have made such rapid progress. A few decades ago, people all over America knew and liked gay people — they just didn’t realize these people were gay. So by the time gays started coming out of the closet, the bridge had already been built.

And once straight Americans followed the bridge’s logic — once they, having already accepted people who turned out to be gay, accepted gayness itself — more gay people felt comfortable coming out. And the more openly gay people there were, the more straight people there were who realized they had gay friends, and so on: a virtuous circle.

So could bridging work with Islamophobia? Could getting to know Muslims have the healing effect that knowing gay people has had?

The good news is that bridging does seem to work across religious divides. Putnam and Campbell did surveys with the same pool of people over consecutive years and found, for example, that gaining evangelical friends leads to a warmer assessment of evangelicals (by seven degrees on a “feeling thermometer” per friend gained, if you must know).

And what about Muslims? Did Christians warm to Islam as they got to know Muslims — and did Muslims return the favor?

That’s the bad news. The population of Muslims is so small, and so concentrated in distinct regions, that there weren’t enough such encounters to yield statistically significant data. And, as Putnam and Campbell note, this is a recipe for prejudice. Being a small and geographically concentrated group makes it hard for many people to know you, so not much bridging naturally happens. That would explain why Buddhists and Mormons, along with Muslims, get low feeling-thermometer ratings in America.

In retrospect, the situation of gays a few decades ago was almost uniquely conducive to rapid progress. The gay population, though not huge, was finely interspersed across the country, with representatives in virtually every high school, college and sizeable workplace. And straights had gotten to know them without even seeing the border they were crossing in the process.

So the engineering challenge in building bridges between Muslims and non-Muslims will be big. Still, at least we grasp the nuts and bolts of the situation. It’s a matter of bringing people into contact with the “other” in a benign context. And it’s a matter of doing it fast, before the vicious circle takes hold, spawning appreciable homegrown terrorism and making fear of Muslims less irrational.

After 9/11, philanthropic foundations spent a lot of money arranging confabs whose participants spanned the divide between “Islam” and “the West.” Meaningful friendships did form across this border, and that’s good. It’s great that Imam Feisal Abdul Rauf, a cosmopolitan, progressive Muslim, got to know lots of equally cosmopolitan Christians and Jews.

But as we saw when he decided to build an Islamic Community Center near ground zero, this sort of high-level networking — bridging among elites whose attitudes aren’t really the problem in the first place — isn’t enough. Philanthropists need to figure out how you build lots of little bridges at the grass roots level. And they need to do it fast.

Postscript: As for the Johnny Carson episode: I don’t like to rely on my memory alone for decades-old anecdotes, but in this case I’m 99.8 percent sure that I remember the basics accurately. Carson’s guest was the drummer Buddy Rich. In a supposedly spontaneous but obviously pre-arranged exchange, Rich said something like, “People often ask me, What is Johnny Carson really like?” Carson looked at Rich warily and said, “And how do you respond to this query?” But he paused between “this” and “query,” theatrically ratcheting up the wariness by an increment or two, and then pronounced the word “query” as “queery.” Rich immediately replied, “Like that.” Obviously, there are worse anti-gay jokes than this. Still, the premise was that being gay was something to be ashamed of. That Googling doesn’t turn up any record of this episode suggests that it didn’t enter the national conversation or the national memory. I don’t think that would be the case today. And of course, anecdotes aside, there is lots of polling data showing the extraordinary progress made since the Johnny Carson era on such issues as gay marriage and on gay rights in general.

Robert Wright, New York Times



The Burqa and the Body Electric

In her post of July 11, “Veiled Threats?” and her subsequent response to readers, Martha Nussbaum considers the controversy over the legal status of the burqa — which continues to flare across Europe — making a case for freedom of religious expression.  In these writings, Professor Nussbaum applies the argument of her 2008 book “Liberty of Conscience,” which praises the American approach to religious liberty of which Roger Williams, one of the founders of Rhode Island Colony, is an early champion.

Williams is an inspiring figure, indeed.  Feeling firsthand the constraint of religious conformism in England and in Massachusetts Bay, he developed a uniquely broad position on religious toleration, one encompassing not only Protestants of all stripes, but Roman Catholics, Jews and Muslims.  The state, in his view, can legitimately enforce only the second tablet of the Decalogue — those final five commandments covering murder, theft, and the like.  All matters of worship covered by the first tablet must be left to the individual conscience. 

Straightforward enough.  But in the early years of Rhode Island, Williams faced quite a relevant challenge.  One of the colonists who fled Salem with him, Joshua Verin, quickly made himself an unwelcome presence in the fledgling community.  In addition to being generally “boysterous,” Verin had forbidden his wife from attending the religious services that Williams held in his home, and became publicly violent in doing so.  The colony was forced into action: Verin was disenfranchised on the grounds that he was interfering with his wife’s religious freedom.  Taking up a defense of Verin, fellow colonist William Arnold — who would also nettle Williams in the years to follow — claimed the punishment to be a violation of Verin’s religious liberty in that it interfered with his biblically appointed husbandly dominion.  “Did you pretend to leave Massachusetts, because you would not offend God to please men,” Arnold asked, “and would you now break an ordinance and commandment of God to please women?”  In his Salem days Williams himself had affirmed such biblical hierarchy, favoring the covering of women’s heads in all public assemblies.

A colony whose founding charter was signed by more than one woman was apparently not willing to accept the kind of male violence on which the Bible is at least indifferent.  Some suggested that Mrs. Verin be liberated from her husband and placed in the community’s protection until a suitable mate could be found for her.  That proposal was not taken up, and it was not taken up because Mrs. Verin herself declared that she wished to remain with her husband — a declaration made while she was literally tied in ropes and being dragged back to Salem by a man who had bound her in a more than matrimonial way.

The Verin case illustrates the weakness of the principle on which Nussbaum depends.  Liberty of conscience limits human institutions so that they do not interfere with the sacrosanct relationship between the soul and God, and in its strict application allows a coerced soul to run its course into the abyss.   In especially unappealing appeals to this kind of liberty, bigots of all varieties have claimed an entitlement to their views on grounds of tender conscience.  Nussbaum recognizes this elsewhere.  Despite her casual attitude toward burqa wearers — a marginal group in an already small minority population — she responds forcefully to false claims of religious freedom when they infect policymaking, as in her important defense of the right to marry.

The burqa controversy revolves around a central question: “Does this cultural practice performed in the name of religion inherently violate the principle of equality that democracies are obliged to defend?”  The only answer to that question offered by liberty of conscience is that we have no right to ask in the first place.  This is in essence Nussbaum’s position, even though the kind of floor-to-ceiling drapery that we are considering is not at all essential to Muslim worship.  The burqa is not religious headwear; it is a physical barrier to engagement in public life adopted in a deep spirit of misogyny. 

Lockean religious toleration, a tradition of which Nussbaum is skeptical, expects religious observance more fully to conform to the aims of a democratic polity.  We might see the French response to the burqa as an expression of that tradition.  After a famously misguided initial attempt to do away with all Muslim headwear in schools and colleges, French legislators later settled down to an evaluation of the point at which headwear becomes an affront to gender equality, passing most recently a ban on the niqab, or face veil — which has also been barred from classrooms and dormitories at Cairo’s Al’Azhar University, the historical center of Muslim learning.  It seems farcical to create a scorecard of permissible and impermissible religious garb — it is not what, as a youth reading Arthur C. Clarke, I imagined the world’s legislators to be doing with urgency in the year 2010 — but we must wonder if it is the kind of determination that we must make, and make with care, if we are to come to a just settlement of the issue.

If we take a broader view, though, we might see that Lockean tradition as part of the longstanding wish of the state to disarm religion’s subversive potential.

Controversies on religious liberty are as old as temple and palace, those two would-be foci of ultimate meaning in human life that seem perpetually to run on a collision course.  Sophocles’s Antigone subscribes to an absolute religious duty in her single-minded wish to administer funeral rites to her brother Polyneices.  King Creon forbids the burial because Polyneices is an enemy of the state, an attempt to bring observance into harmony with political authority.  Augustine of Hippo handles the tension in his critique of the Roman polymath Varro.  Though the work that he is discussing is now lost, Augustine in his thorough way gives us a rigorous account of Varro’s distinction between “natural theology,” the theology of philosophers who seek to discern the nature of the gods as they really are, and “civil theology,” which is the theology acceptable in public observance.  Augustine objects to this distinction: if natural theology is “really natural,” if it has discovered the true nature of divinity, he asks, “what is found wrong with it, to cause its exclusion from the city?”

Debates on religious liberty seem always to be separated by the gulf between Varro and Augustine.  Those who follow Varro tolerate brands of religion that do not threaten civil order or prevailing moral conventions, and accept in principle a distinction between public and private worship; those who follow Augustine tolerate only those who agree with their sense of the nature of divinity, which authorities cannot legitimately restrict.  At their worst, Varronians make flimsy claims on preserving public decorum and solidify the state’s marginalization of religious minorities — as the Swiss did in December 2009 by passing a referendum banning the construction of minarets.  Augustinians at their worst expect the state to respect primordial bigotries and tribal exceptionalism — as do many of this country’s so-called Christian conservatives.

Might there be a third way?  If, as several thinkers have suggested, we now find ourselves in a “post-secular” age, then perhaps we might look beyond traditional disputes between political and ecclesiastical authority, between religion and secularism.  Perhaps post-secularity can take justice and equality to be absolutely good with little regard for whether we come to value the good by a religious or secular path.  Our various social formations — political, religious, social, familial — find their highest calling in deepening our bonds of fellow feeling.  “Compelling state interest” has no inherent value; belief also has no inherent value.  Political and religious positions must be measured against the purity of truths, rightly conceived as those principles enabling the richest possible lives for our fellow human beings.

So let us attempt such a measure.  The kind of women’s fashion favored by the Taliban might legitimately be outlawed as an instrument of gender apartheid — though one must have strong reservations about the enforcement of such a law, which could create more divisiveness than it cures.  The standard of human harmony provides strong resistance to anti-gay prejudice, stripping it of its wonted mask of righteousness. It objects in disgust to Pope Benedict XVI when he complains about Belgian authorities seizing church records in the course of investigating sexual abuse; it also praises the Catholic Church for the humanitarian and spiritual services it provides on this country’s southern border, which set the needs of the human family above arbitrary distinctions of citizenship.  The last example shows that some belief provides a deeply humane resistance to state power run amok.  To belief of this kind there is no legitimate barrier.

Humane action is of course open to interpretation. But if we place it at the center of our aspirations, we will make decisions more salutary than those offered by the false choice between state interest and liberty of conscience. Whitman may have been the first post-secularist in seeing that political and religious institutions declaring certain bodies to be shameful denigrate all human dignity: every individual is a vibrant body electric deeply connected to all beings by an instinct of fellow feeling. Such living democracy shames the puerile self-interest of modern electoral politics, and the worn barbarisms lurking under the shroud of retrograde orthodoxy. Embracing that vitality, to return to Nussbaum’s concerns, also guides us to the most generous possible reading of the First Amendment, which restricts government so that individual consciences and groups of believers can actively advance, rather than stall, the American project of promoting justice and equality.


Feisal G. Mohamed is an associate professor of English at the University of Illinois.  His most recent book, “Milton and the Post-Secular Present,” is forthcoming from Stanford University Press.


Freedom and Reality: A Response

It has been a privilege to read the comments posted in response to my July 25th post, “The Limits of the Coded World,” and I am exceedingly grateful for the time, energy and passion so many readers put into them. While my first temptation was to answer each and every one, reality began to set in as the numbers increased, and I soon realized I would have to settle for a more general response treating as many of the objections and remarks as I could. This is what I would like to do now.

If I had to distill my entire response into one sentence, it would be this: it’s not about the monkeys! It was unfortunate that the details of the monkey experiment — in which researchers were able to use computers wired to the monkeys’ brains to predict the outcome of certain decisions — dominated so much attention, since it could have been replaced by mention of any number of similar experiments, or indeed by a fictional scenario. My intention in bringing it up at all was twofold: first, to take an example of the sort of research that has repeatedly sparked sensationalist commentary in the popular press about the end of free will; and second, to show how plausible and in some sense automatic the link between predictability and the problem of free will can be (which was borne out by the number of readers who used the experiment to start their own inquiry into free will).

As readers know, my next step was to argue that predictability no more indicates a lack of free will than unpredictability indicates its presence. Several readers noted an apparent contradiction between this claim, that “we have no reason to assume that either predictability or lack of predictability has anything to say about free will,” and one of the article’s concluding statements, that “I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it.”

Indeed, when juxtaposed without the intervening text, the statements seem clearly opposed. However, not only is there no contradiction between them, the entire weight of my arguments depends on understanding why there is none. In the first sentence I am talking about using models to more or less accurately guess at future outcomes; in the second I am talking about a model of the ultimate nature of reality as a kind of knowledge waiting to be decoded — what I called in the article “the code of codes,” a phrase I lifted from one of my mentors, the late Richard Rorty — and how the impossibility of that model is what guarantees freedom.

The reason why predictability in the first sense has no bearing on free will is, in fact, precisely because predictability in the second sense is impossible. The theoretical predictability of everything that occurs in a universe whose ultimate reality is conceived of as a kind of knowledge or code is what goes by the shorthand of determinism. In the old debate between free will and determinism, determinism has always played the default position, the backdrop from which free will must be wrested if we are to have any defensible concept of responsibility. From what I could tell, a great number of commentators on my article shared at least this idea with Galen Strawson: that we live in a deterministic universe, and if the concept of free will has any importance at all it is merely as a kind of necessary illusion. My position is precisely the opposite: free will does not need to be “saved” from determinism, because there was never any real threat there to begin with, either of a scientific nature or a religious one.  

The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible. Let’s see why.

To make a choice that in any sense could be considered “free,” we would have to claim that it was at some point unconstrained. But, the hard determinist would argue, there can never be any point at which a choice is unconstrained, because even if we exclude any and all obvious constraints, such as hunger or coercion, the chooser is constrained by (and this is Strawson’s “basic argument”) how he or she is at the time of the choosing, a sum total of effects over which he or she could never exercise causality.

This constraint of “how he or she is,” however, is pure fiction, a treatment of tangible reality as if it were decodable knowledge, requiring a kind of God’s eye perspective capable of knowing every instance and every possible interpretation of every aspect of a person’s history, culture, genes and general chemistry, to mention only a few variables. It refers to a reality that self-proclaimed rationalists and science advocates pay lip service to in their insistence on basing all claims on hard, tangible facts, but is in fact as elusive, as metaphysical and ultimately as incompatible with anything we could call human knowledge as would be a monotheistic religion’s understanding of God.

When some readers sardonically (I assume) reduced my argument to “ignorance=freedom,” then, they were right in a way; but the rub lies in how we understand ignorance. The commonplace understanding would miss the point entirely: it is not ignorance against the backdrop of ultimate knowledge that equates to freedom; rather, it is constitutive, essential ignorance. This, again, needs expansion.

Knowledge can never be complete. This is the case not merely because there will always be something more to know; rather, it is so because completed knowledge is oxymoronic, self-defeating. AI theorists have long dreamed of what Daniel Dennett once called heterophenomenology, the idea that, with an accurate-enough understanding of the human brain, my description of another person’s experience could become indiscernible from that experience itself. My point is not merely that heterophenomenology is impossible from a technological perspective or undesirable from an ethical perspective; rather, it is impossible from a logical perspective, since the very phenomenon we are seeking to describe, in this case the conscious experience of another person, would cease to exist without the minimal opacity separating his or her consciousness from mine. Analogously, all knowledge requires this kind of minimal opacity, because knowing something involves, at a minimum, a synthesis of discrete perceptions across space or time.

The Argentine writer Jorge Luis Borges demonstrated this point with implacable rigor in a story about a man who loses the ability to forget, and with that also ceases to think, perceive, and eventually to live, because, as Borges points out, thinking necessarily involves abstraction, the forgetting of differences. Because of what we can thus call our constitutive ignorance, then, we are free — only and precisely because as beings who cannot possibly occupy all times and spatial perspectives without thereby ceasing to be what we are, we are constantly faced with choices. All these choices — to the extent that they are choices and not simply responses to stimuli or reactions to forces exerted on us — have at least some element that cannot be traced to a direct determination, but could only be blamed, for the sake of defending a deterministic thesis, on the ideal and completely fanciful determinism of “how we are” at the time of the decision to be made.

Far from a mere philosophical wish fulfillment or fuzzy, humanistic thinking, then, this kind of freedom is real, hard-nosed and practical. Indeed, courts of law and ethics panels may take specific determinations into account when casting judgment on responsibility, but most of us would agree that it would be absurd for them to waste time considering philosophical, scientific or religious theories of general determinism. The purpose of both my original piece and this response has been to show that, philosophically speaking as well, this real and practical freedom has nothing to fear from philosophical, scientific or religious pipedreams.

This last remark leads me to one more issue that many readers brought up, and which I can only touch on now in passing: religion. In a recent blog post Jerry Coyne, a professor of evolutionary biology at the University of Chicago, labels me an “accommodationist” who tries to “denigrate science” and vindicate “other ways of knowing.” Professor Coyne goes on to contrast my (alleged) position to “the scientific ‘model of the world,’” which, he adds, has “been extraordinarily successful at solving problems, while other ‘models’ haven’t done squat.” Passing over the fact that, far from denigrating them, I am a fervent and open admirer of the natural sciences (my first academic interests were physics and mathematics), I’m content to let Professor Coyne’s dismissal of every cultural, literary, philosophical or artistic achievement in history speak for itself.

What I find of interest here is the label accommodationism, because the intent behind the current deployment of the term by the new atheist bloc is to associate explicitly those so labeled with the tragic failure of the Chamberlain government to stand up to Hitler. Indeed, Richard Dawkins has called those espousing open dialogue between faith and science “the Neville Chamberlain school of evolution.” One can only be astonished by the audacity of the rhetorical game they are playing: somehow, with a twist of the tongue, those arguing for greater intellectual tolerance have been allied with the worst example of intolerance in history.

One of the aims of my recent work has indeed been to provide a philosophical defense of moderate religious belief. Certain ways of believing, I have argued, are extremely effective at undermining the implicit model of reality supporting the philosophical mistake I described above, a model of reality that religious fundamentalists also depend on. While fundamentalisms of all kinds are unified in their belief that the ultimate nature of reality is a code that can be read and understood, religious moderates, along with those secularists we would call agnostics, are profoundly suspicious of any claims that one can come to know reality as it is in itself. I believe that such believers and skeptics are neither less scientific nor less religious for their suspicion. They are, however, more tolerant of discord; more prone to dialogue, to patient inquiry, to trial and error, and to acknowledging the potential insights of other ways of thinking and other disciplines than their own. They are less righteously assured of the certainty of their own positions and have, historically, been less inclined to be prodded to violence than those who are beholden to the code of codes. If being an accommodationist means promoting these values, then I welcome the label.

William Egginton is the Andrew W. Mellon Professor in the Humanities at the Johns Hopkins University. His next book, “In Defense of Religious Moderation,” will be published by Columbia University Press in 2011.



The Phenomenology of Ugly

This all started the day Luigi gave me a haircut. I was starting to look like a mad professor: specifically like Doc in “Back to the Future.” So Luigi took his scissors out and tried to fix me up. Except — and this is the point that occurred to me as I inspected the hair in the bathroom mirror the next morning — he didn’t really take quite enough off. He had enhanced the style, true, but there was a big floppy fringe that was starting to annoy me. And it was hot out. So I opened up the clipper attachment on the razor and hacked away at it for a while. When I finally emerged there was a general consensus that I looked like a particularly disreputable scarecrow. In the end I went to another barbershop (I didn’t dare show Luigi my handiwork) and had it all sheared off. Now I look like a cross between Britney Spears and Michel Foucault.

In short, it was a typical bad hair day. Everyone has them. I am going to hold back on my follicular study of the whole of Western philosophy (Nietzsche’s will-to-power-eternal-recurrence mustache; the workers-of-the-world-unite Marxian beard), but I think it has to be said that a haircut can have significant philosophical consequences. Jean-Paul Sartre, the French existentialist thinker, had a particularly traumatic tonsorial experience when he was only seven. Up to that point he had had a glittering career as a crowd-pleaser. Everybody referred to young “Poulou” as “the angel.” His mother had carefully cultivated a luxuriant halo of golden locks. Then one fine day his grandfather takes it into his head that Poulou is starting to look like a girl, so he waits till his mother has gone out, then tells the boy they are going out for a special treat. Which turns out to be the barbershop. Poulou can hardly wait to show off his new look to his mother. But when she walks through the door, she takes one look at him before running up the stairs and flinging herself on the bed, sobbing hysterically. Her carefully constructed — one might say carefully combed — universe has just been torn down, like a Hollywood set being broken and reassembled for some quite different movie, rather harsher, darker, less romantic and devoid of semi-divine beings. For, as in an inverted fairy-tale, the young Sartre has morphed from an angel into a “toad”. It is now, for the first time, that Sartre realizes that he is — as his American lover, Sally Swing, will say of him — “ugly as sin.”

[Photo caption: Jean-Paul Sartre and two friends in France, no doubt discussing philosophy.]

“The fact of my ugliness” becomes a barely suppressed leitmotif of his writing. He wears it like a badge of honor (Camus, watching Sartre in laborious seduction mode in a Paris bar: “Why are you going to so much trouble?” Sartre: “Have you had a proper look at this mug?”). The novelist Michel Houellebecq says somewhere that, when he met Sartre, he thought he was “practically disabled.” It is fair comment. He certainly has strabismus (a distinctive lazy eye, so that he appears to be looking in two directions at once), various parts of his body are dysfunctional, and he considers his ugliness to count as a kind of disability. I can’t help wondering if ugliness is not indispensable to philosophy. Sartre seems to be suggesting that thinking — serious, sustained questioning — arises out of, or perhaps with, a consciousness of one’s own ugliness.

I don’t want to make any harsh personal remarks here but it is clear that a philosophers’ Mr. or Ms. Universe contest would be roughly on a par with the philosophers’ football match imagined by Monty Python. That is to say, it would have an ironic relationship to beauty. Philosophy as a satire on beauty.

It is no coincidence that one of our founding philosophers, Socrates, makes a big deal out of his own ugliness. It is the comic side of the great man. Socrates is (a) a thinker who asks profound and awkward questions (b) ugly. In Renaissance neo-Platonism (take, for example, Erasmus and his account of “foolosophers” in “The Praise of Folly”) Socrates, still spectacularly ugly, acquires an explicitly Christian logic: philosophy is there — like Sartre’s angelic curls — to save us from our ugliness (perhaps more moral than physical).

But I can’t help thinking that ugliness infiltrated the original propositions of philosophy in precisely this redemptive way. The implication is there in works like Plato’s “Phaedo.” If we need to die in order to attain the true, the good, and the beautiful (to kalon: neither masculine nor feminine but neutral, like Madame Sartre’s ephemeral angel, gender indeterminate), it must be because truth, goodness, and beauty elude us so comprehensively in life. You think you’re beautiful? Socrates seems to say. Well, think again! The idea of beauty, in this world, is like a mistake. An error of thought. Which should be re-thought.

Perhaps Socrates’s mission is to make the world safe for ugly people. Isn’t everyone a little ugly, one way or the other, at one time or another? Who is truly beautiful, all the time? Only the archetypes can be truly beautiful.

Fast-forwarding to Sartre and my bathroom-mirror crisis, I feel this gives us a relatively fresh way of thinking about neo-existentialism. Sartre (like Aristotle, like Socrates himself at certain odd moments) is trying to get away from the archetypes. From, in particular, a transcendent concept of beauty that continues to haunt — and sometimes cripple — us.

“It doesn’t matter if you are an ugly bastard. As an existentialist you can still score.” Sartre, so far as I know, never actually said it flat out (although he definitely described himself as a “salaud”). And yet it is nevertheless there in almost everything he ever wrote. In trying to be beautiful, we are trying to be like God (the “for-itself-in-itself” as Sartre rebarbatively put it). In other words, to become like a perfect thing, an icon of perfection, and this we can never fully attain. But it is good business for manufacturers of beauty creams, cosmetic surgeons and — yes! — even barbers.

Switching gender for a moment — going in the direction Madame Sartre would have preferred — I suspect that the day Britney Spears shaved her own hair off represented a kind of Sartrean or Socratic argument (rather than, say, a nervous breakdown). She was, in effect, by the use of appearance, shrewdly de-mythifying beauty. The hair lies on the floor, “inexplicably faded” (Sartre), and the conventional notion of femininity likewise. I see Marilyn Monroe and Brigitte Bardot in a similar light: one by dying, the other by remaining alive, were trying to deviate from and deflate their iconic status. The beautiful, to kalon, is not some far-flung transcendent abstraction, in the neo-existentialist view. Beauty is a thing (social facts are things, Durkheim said). Whereas I am no-thing. Which explains why I can never be truly beautiful. Even if it doesn’t stop me wanting to be either. Perhaps this explains why Camus, Sartre’s more dashing sparring partner, jotted down in his notebooks, “Beauty is unbearable and drives us to despair.”

I always laugh when somebody says, “don’t be so judgmental.” Being judgmental is just what we do. Not being judgmental really would be like death. Normative behavior is normal. That original self-conscious, slightly despairing glance in the mirror (together with, “Is this it?” or “Is that all there is?”) is a great enabler because it compels us to seek improvement. The transcendent is right here right now. What we transcend is our selves. And we can (I am quoting Sartre here) transascend or transdescend. The inevitable dissatisfaction with one’s own appearance is the engine not only of philosophy but of civil society at large. Always providing you don’t end up pulling your hair out by the roots.

Andy Martin is currently completing “What It Feels Like To Be Alive: Sartre and Camus Remix” for Simon and Schuster. He was a 2009-10 fellow at the Cullman Center for Scholars and Writers in New York, and teaches at Cambridge University.



Reclaiming the Imagination

Imagine being a slave in ancient Rome. Now remember being one. The second task, unlike the first, is crazy. If, as I’m guessing, you never were a slave in ancient Rome, it follows that you can’t remember being one — but you can still let your imagination rip. With a bit of effort one can even imagine the impossible, such as discovering that Dick Cheney and Madonna are really the same person. It sounds like a platitude that fiction is the realm of imagination, fact the realm of knowledge.

Why did humans evolve the capacity to imagine alternatives to reality? Was story-telling in prehistoric times like the peacock’s tail, of no direct practical use but a good way of attracting a mate? It kept Scheherazade alive through those one thousand and one nights — in the story. 

On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.

A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.

Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.

To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place. 

Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.

Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.

In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.

Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.

The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.

Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo’s and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.

Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Vagueness” (1994), “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007).

