ttwngcbt (ttwngcbt) wrote in phil_of_mind,

Phil of Mind, Cog Sci: Anti-Computationalism

This is a (short) paper I wrote for a Cog-Sci/Phil-of-Mind class. The idea is that human minds must be functionally superior to modern-day standard computers. Any comments are welcome.

I go to UMCP, home of Prof. Georges Rey (Rey at Wikipedia). This paper began as a response to a section of his Contemporary Philosophy of Mind: A Contentiously Classical Approach in which he discusses the implications of Gödel's incompleteness results. But the paper mostly appeals to the views of a professor at Rensselaer Polytechnic Institute, Selmer Bringsjord, to defend anti-computationalism. The response to Rey's discussion appears briefly in the closing paragraph.

x-posts: philosophy, real_philosophy




Minds, Not (Yet?) Machines

The claim I endorse in this paper is that the abilities of human minds surpass those of any standard computer. In defense of this claim I will appeal to the following pair of articles by Professor Selmer Bringsjord, a contemporary computer science (CS) theorist at Rensselaer Polytechnic Institute: An Argument for the Uncomputability of Infinitary Mathematical Expertise (1995), and The Modal Argument for Hypercomputing Minds (2004). [The second article is co-authored with Konstantine Arkoudas of MIT. -Ed] As their titles indicate, these articles present two distinct arguments for denying computationalism. However, both arguments share a common foundation in the mathematical results of Gödel and Turing. Before presenting the arguments themselves, I will explain why I chose them in particular for my defense, and why they are relevant in the context of contemporary philosophy of mind.


Before Bringsjord, anti-computationalist sentiments based on incompleteness results were famously expressed by philosopher J. R. Lucas (1961) and subsequently by mathematical physicist Roger Penrose (1989, 1994). Their arguments have inspired quite a bit of debate, in which Bringsjord himself has been an active participant. In fact, Bringsjord has found flaws in the particular arguments of both Lucas and Penrose. Nevertheless, he endorses their anti-computationalist view, and so his own arguments have the advantage of not repeating their mistakes: he has provided new arguments for the old stance against computationalism. From my meager vantage point, his arguments are difficult to comprehend, but they do seem more reasonable, more straightforward, and more clearly spelled out than those of either Lucas or Penrose. For these reasons, I have chosen to follow Bringsjord rather than his more famous predecessors.


In a moment I will present the arguments themselves, but first I want to explain their relevance in the light of contemporary philosophy of mind. (Incidentally, as I do so, I will elucidate the terms 'mind' and 'computation'.) Philosopher Jaegwon Kim offers a succinct expression of the core problems in contemporary philosophy of mind:

How can the mind exercise its causal powers in a causally closed physical world? Why is there, and how can there be, such a thing as the mind, or consciousness, in a physical world? We will see that these two problems, mental causation and consciousness, are intertwined, and that, in a sense, they make each other insoluble (Kim 2005:13).

An implicit assumption here is that mental phenomena have some sort of autonomous status, at least as entities on their own level of explanation. They may be in some sense reducible to physical phenomena, but any adequate explanation of the regular behavior of mental beings, on this view, will require explicit use of mental concepts and terms (e.g., 'belief', 'desire', etc.), just as any adequate explanation of the anatomy of animals requires using the terms of evolutionary biology. On the physical level, the world is causally closed. However, on the mental level, physical causes alone cannot account for empirical regularities in the behavior of mental agents. These are the commitments of modern machine functionalism.


According to Kim (1998), machine functionalists believe that "mentality, or having a mind, consists in realizing an appropriate Turing machine" (p. 91). Standard computers, roughly speaking, instantiate (physically constrained) Turing machines. And I will follow Penrose (1994) in defining a "computation" as "an action of a Turing machine" (p. 17). Now, if we take the actions of a 'mind' to correspond with the actions of particular individual persons, then we have defined, at least approximately, the conceptual ingredients of our argument. To say that the mind surpasses standard computers is thus to deny machine functionalism. However, another philosopher, Georges Rey (1997), defends a different version of functionalism, called psycho-functionalism, which is itself founded on the "computational representational theory of thought" (or CRTT):

CRTT is certainly not committed to supposing that our mental architecture remotely resembles that of a Turing Machine, or a von Neumann architecture, or any deterministic or serial automaton. CRTT presents only a quite weak constraint on possible architectures, viz. that logico-syntactic properties be causally efficacious; and this is compatible with an indefinitely rich variety of architectures, for example, ones that might be implemented on connectionist or other massively parallel processors (p. 269).

Thus, prima facie, Bringsjord's and Rey's views are logically compatible. Nevertheless, it is in refuting at least some versions of functionalism that Bringsjord's views find their relevance for contemporary philosophy of mind.
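Since everything that follows turns on Penrose's definition of a computation as "an action of a Turing machine," it may help to see that definition made concrete. The sketch below is my own illustration, not anything drawn from Bringsjord or Rey; the simulator and the toy "unary successor" machine are hypothetical examples, written in Python.

# A minimal single-tape Turing machine simulator (illustrative sketch only).
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no rule applies."""
    cells = dict(enumerate(tape))              # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions: # no applicable rule: halt
            break
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    output = "".join(cells[i] for i in sorted(cells)).strip(blank)
    return state, output

# Toy machine: append a '1' to a unary numeral, i.e., compute n + 1.
unary_successor = {
    ("start", "1"): ("start", "1", +1),        # scan right across the 1s
    ("start", "_"): ("done", "1", +1),         # write one more 1, then halt
}
print(run_tm(unary_successor, "111"))          # -> ('done', '1111')

Roughly speaking, anything a standard computer does can be recast as such a sequence of table-driven steps; that is the sense of "computation" at issue below.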


Now to Bringsjord's specific arguments. In both articles, he presents them as lines of premises followed by a conclusion, in which form they are very short, only five or six lines long. The bulk of the argumentation, therefore, lies in the prefatory remarks and in the replies to anticipated objections. This pattern also occurs in the writings of Lucas and Penrose. This is telling, I think, not necessarily of any defect in the arguments, but of the enduring elusiveness of any fixed, formalized solution to the problem of the computational nature of mentality, probably owing to recurring vagueness in the notions of 'mind' and 'computation'. For that reason, I have tried here to pin these notions down in simple terms.


Let me repeat the title of Bringsjord's first article: An Argument for the Uncomputability of Infinitary Mathematical Expertise. We may ignore the notion of "expertise" here and instead think simply of human mental competence: plainly, if a human being can do it, then it counts as a mental action. The argument, then, is that certain mental actions are not computable, i.e., not the action of any Turing machine. The relevant actions are those of logicians when reasoning in the infinitary system Lω₁ω, which permits countably infinite conjunctions and disjunctions. This system is opposed to that of traditional first-order logic, I. Bringsjord explains formally how two concepts (the finitude of models, and Peano's mathematical induction) can be expressed in Lω₁ω, but not in I. The crucial premise is that Turing machine computations are equivalent to deductions in I. (This is a well-known theorem in theoretical CS.) Thus, a logician who reasons in Lω₁ω can perform mental actions not equivalent to any deductions in I. We are now in a position to appreciate Bringsjord's formal argument (an illustration of the expressive gap follows just after it):

A1. Suppose all human mentality is computable. (Supposition)

A2. For every mental action A there exists a Turing Machine (TM) M such that some computation C of M satisfies A=C.

A3. For every computation C of every TM M there is an equivalent deduction D in first-order logic.

A4. For every mental action A there exists a first-order deduction D such that A=D.

A5. There exists a mental action A* (an appropriate instance of reasoning in Lω₁ω) such that for every first-order deduction D, A* ≠ D.

A6. ⊥ (A4, A5)

A7. By contradiction, not all mental actions are computable.
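To make the expressive gap exploited in A5 concrete, here is the standard textbook illustration of the first of Bringsjord's two concepts, the finitude of models; the formulation is mine, not quoted from his article. Because Lω₁ω permits countably infinite disjunctions, "the domain is finite" can be written as a single sentence:

\[
\varphi_{\mathrm{fin}} \;\equiv\; \bigvee_{n \ge 1} \exists x_1 \cdots \exists x_n \,\forall y \,\bigl(y = x_1 \vee \cdots \vee y = x_n\bigr).
\]

By the compactness theorem, any first-order theory with arbitrarily large finite models also has an infinite model, so no sentence (or set of sentences) of I is true in exactly the finite models. Hence φ_fin has no first-order counterpart, and reasoning that essentially depends on it is not captured by any deduction in I; that is the sort of mental action A* that A5 invokes.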

The anticipated objections in this article mostly rely on a finitistic philosophy of mathematics, which Bringsjord says is "generally thought to be untenable" (p. 21). He cites Bertrand Russell's critique of such a philosophy in "The Limits of Empiricism" (from 1936).


I now quote Bringsjord at length:

[T]he only reason I, and others like me, reject (or recast) computationalism is that we understand and are moved by arguments like the one I've just given! Before I learned a thing or two about the math underlying what I was doing when trying to get a computer to do snazzy, intelligent things, I was not only a computationalist, I was a rabid, evangelical computationalist.

[…] Just because some species of expertise is uncomputable doesn't mean that it will resist scientific analysis (a point noted by the likes of Douglas Hofstadter, Peter Kugel, Roger Penrose, Hao Wang, and Kurt Gödel, to name a few). After all, computer science includes an entire sub-field devoted to the rigorous study of uncomputability. We know that there are grades of uncomputability, we know much about the relationships between these grades, we know how uncomputability relates to computability, and so on; uncomputability theory, mathematically speaking, is no different than any other mature branch of classical mathematics. So, why can't uncomputability theory be linked to work in psychology devoted to the scientific analysis of the uncomputable side of human expertise? …In short, perhaps we can come to understand mathematical expertise scientifically, while at the same time acknowledging that we can't (yet?) give such expertise to computers.

[…] Nothing I've said herein precludes success in the attempt to engineer a computational system which appears to have infinitary expertise. What I purport to have shown, or at least made plausible, is that no such system can in fact enjoy such expertise.



Now for the second argument. I repeat the title of the second article: The Modal Argument for Hypercomputing Minds. The argument here is more subtle than the first, and I'm afraid I'll do it less justice. Nevertheless, it's worth at least glimpsing the formalities. Again Bringsjord (now with Arkoudas) refers to a standard theorem in CS: "For a fixed Turing machine m0, there is no algorithm that can determine, given an input string i, whether m0 accepts [i.e., halts on] i" (p. 172). Letting the three-place predicate Dmm*i mean “Turing machine m determines whether Turing machine m* halts on input i,” the theorem becomes ∀m∃i¬Dmm0i. So, in the following argument, the first line represents this theorem (the standard reasoning behind it is sketched just below the argument), and the second represents a supposition to be contradicted. The supposition is that computationalism obtains, i.e., that every person p is the realization of some Turing machine, or ∀p∃m (p = m):

B1. "m$Dmm0i.

B2. "p$m (p= m). (Supposition)

B3. "p$Dpm0i.

B4. "p"iDpm0i. (By the nature of persons.)

B5. ^ (B3, B4)

B6. ¬"p$m (p= m).

The critical premise here is B4, that it is logically possible for a person to determine whether even m0 will halt on any input. How so? By performing infinitely many steps, each in a shorter and shorter time span. For instance, the first step takes ½ second, the second ¼, the third ⅛, and so on. Bringsjord again cites Russell (from 1915): "[Philosopher] Ambrose says it is logically impossible [for a man] to run through the whole expansion of π. I should have said it was medically impossible." Bringsjord therefore concludes that human beings cannot be computers, qua instantiated Turing machines. Rather, since it is possible that they are hypercomputers, that, in fact, is what they must be. Again he points to the established study of hypercomputing devices within CS, implying that human intelligence remains within the scope of scientific pursuit.
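The arithmetic behind this Zeno-style procedure is just a geometric series; the gloss below is mine, added for clarity. If the n-th step of the simulation of m0 takes 1/2^n of a second, then all infinitely many steps are completed within

\[
\sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1 \ \text{second},
\]

so a reasoner able to keep halving the time per step would see, inside one second, whether m0 ever halts on a given input. Whether such acceleration is physically realizable is exactly what the Russell remark concerns; B4 claims only that it is logically possible.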


Perhaps, indeed almost certainly, my paraphrases of Bringsjord's arguments will not satisfy the reader. But I would still be satisfied if my presentation has at least instilled some interest in the arguments themselves. If the claims of Lucas and Penrose have not convinced the AI community, perhaps those of Bringsjord will prove more persuasive.


Admittedly, I have ignored two sources of subtlety in the arguments above. I have done so in part because Bringsjord ignores them, and also because I did not have space. However, in closing, I'll briefly mention them. First, I avoided the varieties of AI claims, namely, "Strong AI", "Weak AI", and idiosyncratic variations thereof. This seems to be a point of contention among critics of anti-computationalists. Perhaps Bringsjord's "computationalism" is a straw man from the perspective of AI; I'm not sure. In any case, the second nuance I overlooked was the question of consistency. The relevant theorems of Gödel and Turing assume that arithmetic is consistent. (This is my understanding: as long as arithmetic is consistent, the meta-mathematics used to demonstrate the truth of Gödel sentences will be consistent, since that reasoning is itself mirrored in arithmetic.) Beyond that, I don't understand what Rey means when he speaks of a human's being inconsistent, without qualification. Is a person who never utters a word consistent or inconsistent? Or is the point that the set of a person's internally represented beliefs is either consistent or not? I imagine that is the point. But such a position seems to beg the question, as it seems to require endorsing CRTT beforehand; certainly it requires endorsing RTT. And if the representations are causally ineffectual, what function do they serve? So, another reason I avoided this issue is that I didn't understand it. And perhaps it is a weakness of Bringsjord's arguments. Nevertheless, again, I would be satisfied simply if this paper aroused some interest in his claims, if not necessarily in their validity. Though I am of the mind that he is probably right.




Bibliography


Bringsjord, Selmer. An Argument for the Uncomputability of Infinitary Mathematical Expertise. 1995.

Bringsjord, Selmer, and Arkoudas, Konstantine. The Modal Argument for Hypercomputing Minds. Theoretical Computer Science 317 (2004): 167–190.

Kim, Jaegwon. Philosophy of Mind. Boulder, CO: Westview Press, 1996.

________. Physicalism, or Something Near Enough. Princeton: Princeton University Press, 2005.

Lucas, J. R. Minds, Machines, and Gödel. Philosophy XXXVI (1961): 112–127.

Penrose, Roger. Shadows of the Mind. Oxford: Oxford University Press, 1994.

Rey, Georges. Contemporary Philosophy of Mind. Cambridge: Blackwell Publishers, 1997.
