It has been a privilege to read the comments posted in response to my July 25th post, “The Limits of the Coded World,” and I am exceedingly grateful for the time, energy and passion so many readers put into them. While my first temptation was to answer each and every one, reality began to set in as the numbers increased, and I soon realized I would have to settle for a more general response treating as many of the objections and remarks as I could. This is what I would like to do now.
If I had to distill my entire response into one sentence, it would be this: it’s not about the monkeys! It was unfortunate that the details of the monkey experiment — in which researchers were able to use computers wired to the monkeys’ brains to predict the outcome of certain decisions — dominated so much attention, since it could have been replaced by mention of any number of similar experiments, or indeed by a fictional scenario. My intention in bringing it up at all was twofold: first, to take an example of the sort of research that has repeatedly sparked sensationalist commentary in the popular press about the end of free will; and second, to show how plausible and in some sense automatic the link between predictability and the problem of free will can be (which was borne out by the number of readers who used the experiment to start their own inquiry into free will).
As readers know, my next step was to argue that predictability no more indicates a lack of free will than unpredictability indicates its presence. Several readers noted an apparent contradiction between this claim, that “we have no reason to assume that either predictability or lack of predictability has anything to say about free will,” and one of the article’s concluding statements, that “I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it.”
Indeed, when juxtaposed without the intervening text, the statements seem clearly opposed. However, not only is there no contradiction between them, the entire weight of my arguments depends on understanding why there is none. In the first sentence I am talking about using models to more or less accurately guess at future outcomes; in the second I am talking about a model of the ultimate nature of reality as a kind of knowledge waiting to be decoded — what I called in the article “the code of codes,” a phrase I lifted from one of my mentors, the late Richard Rorty — and how the impossibility of that model is what guarantees freedom.
The reason why predictability in the first sense has no bearing on free will is, in fact, precisely because predictability in the second sense is impossible. The theoretical predictability of everything that occurs in a universe whose ultimate reality is conceived of as a kind of knowledge or code is what goes by the shorthand of determinism. In the old debate between free will and determinism, determinism has always occupied the default position, the backdrop from which free will must be wrested if we are to have any defensible concept of responsibility. From what I could tell, a great number of commentators on my article shared at least this idea with Galen Strawson: that we live in a deterministic universe, and if the concept of free will has any importance at all it is merely as a kind of necessary illusion. My position is precisely the opposite: free will does not need to be “saved” from determinism, because there was never any real threat there to begin with, either of a scientific nature or a religious one.
The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible. Let’s see why.
To make a choice that in any sense could be considered “free,” we would have to claim that it was at some point unconstrained. But, the hard determinist would argue, there can never be any point at which a choice is unconstrained, because even if we exclude any and all obvious constraints, such as hunger or coercion, the chooser is constrained by (and this is Strawson’s “basic argument”) how he or she is at the time of the choosing, a sum total of effects over which he or she could never exercise causality.
This constraint of “how he or she is,” however, is pure fiction, a treatment of tangible reality as if it were decodable knowledge, requiring a kind of God’s eye perspective capable of knowing every instance and every possible interpretation of every aspect of a person’s history, culture, genes and general chemistry, to mention only a few variables. It refers to a reality that self-proclaimed rationalists and science advocates pay lip service to in their insistence on basing all claims on hard, tangible facts, but is in fact as elusive, as metaphysical and ultimately as incompatible with anything we could call human knowledge as would be a monotheistic religion’s understanding of God.
When some readers sardonically (I assume) reduced my argument to “ignorance=freedom,” then, they were right in a way; but the rub lies in how we understand ignorance. The commonplace understanding would miss the point entirely: it is not ignorance against the backdrop of ultimate knowledge that equates to freedom; rather, it is constitutive, essential ignorance. This, again, needs expansion.
Knowledge can never be complete. This is the case not merely because there will always be something more to know; rather, it is so because completed knowledge is oxymoronic, self-defeating. AI theorists have long dreamed of what Daniel Dennett once called heterophenomenology, the idea that, with an accurate-enough understanding of the human brain, my description of another person’s experience could become indiscernible from that experience itself. My point is not merely that heterophenomenology is impossible from a technological perspective or undesirable from an ethical perspective; rather, it is impossible from a logical perspective, since the very phenomenon we are seeking to describe, in this case the conscious experience of another person, would cease to exist without the minimal opacity separating his or her consciousness from mine. Analogously, all knowledge requires this kind of minimal opacity, because knowing something involves, at a minimum, a synthesis of discrete perceptions across space or time.
The Argentine writer Jorge Luis Borges demonstrated this point with implacable rigor in a story about a man who loses the ability to forget, and with that also ceases to think, perceive, and eventually to live, because, as Borges points out, thinking necessarily involves abstraction, the forgetting of differences. Because of what we can thus call our constitutive ignorance, then, we are free — only and precisely because as beings who cannot possibly occupy all times and spatial perspectives without thereby ceasing to be what we are, we are constantly faced with choices. All these choices — to the extent that they are choices and not simply responses to stimuli or reactions to forces exerted on us — have at least some element that cannot be traced to a direct determination, but could only be blamed, for the sake of defending a deterministic thesis, on the ideal and completely fanciful determinism of “how we are” at the time of the decision to be made.
Far from a mere philosophical wish fulfillment or fuzzy, humanistic thinking, then, this kind of freedom is real, hard-nosed and practical. Indeed, courts of law and ethics panels may take specific determinations into account when casting judgment on responsibility, but most of us would agree that it would be absurd for them to waste time considering philosophical, scientific or religious theories of general determinism. The purpose of both my original piece and this response has been to show that, philosophically speaking as well, this real and practical freedom has nothing to fear from philosophical, scientific or religious pipedreams.
This last remark leads me to one more issue that many readers brought up, and which I can only touch on now in passing: religion. In a recent blog post Jerry Coyne, a professor of evolutionary biology at the University of Chicago, labels me an “accommodationist” who tries to “denigrate science” and vindicate “other ways of knowing.” Professor Coyne goes on to contrast my (alleged) position to “the scientific ‘model of the world,’” which, he adds, has “been extraordinarily successful at solving problems, while other ‘models’ haven’t done squat.” Passing over the fact that, far from denigrating them, I am a fervent and open admirer of the natural sciences (my first academic interests were physics and mathematics), I’m content to let Professor Coyne’s dismissal of every cultural, literary, philosophical or artistic achievement in history speak for itself.
What I find of interest here is the label accommodationism, because the intent behind the current deployment of the term by the new atheist bloc is to associate explicitly those so labeled with the tragic failure of the Chamberlain government to stand up to Hitler. Indeed, Richard Dawkins has called those espousing open dialogue between faith and science “the Neville Chamberlain school of evolution.” One can only be astonished by the audacity of the rhetorical game they are playing: somehow, with a twist of the tongue, those arguing for greater intellectual tolerance have been allied with the worst example of intolerance in history.
One of the aims of my recent work has indeed been to provide a philosophical defense of moderate religious belief. Certain ways of believing, I have argued, are extremely effective at undermining the implicit model of reality supporting the philosophical mistake I described above, a model of reality that religious fundamentalists also depend on. While fundamentalisms of all kinds are unified in their belief that the ultimate nature of reality is a code that can be read and understood, religious moderates, along with those secularists we would call agnostics, are profoundly suspicious of any claims that one can come to know reality as it is in itself. I believe that such believers and skeptics are neither less scientific nor less religious for their suspicion. They are, however, more tolerant of discord; more prone to dialogue, to patient inquiry, to trial and error, and to acknowledging the potential insights of ways of thinking and disciplines other than their own. They are less righteously assured of the certainty of their own positions and have, historically, been less inclined to be prodded to violence than those who are beholden to the code of codes. If being an accommodationist means promoting these values, then I welcome the label.
William Egginton is the Andrew W. Mellon Professor in the Humanities at the Johns Hopkins University. His next book, “In Defense of Religious Moderation,” will be published by Columbia University Press in 2011.