Archive for the ‘Philosophy’ Category

Replies to the Chinese Room Argument

September 4th, 2010

Criticisms of the narrow Chinese Room argument against Strong AI have often followed three main lines, which can be distinguished by how much they concede:

  1. Some critics concede that the man in the room does not understand Chinese, but hold that at the same time there is some other thing that does understand. These critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created. There might be understanding by a larger, or different, entity. This is the strategy of The Systems Reply and the Virtual Mind Reply. These replies hold that there could be understanding in the original Chinese Room scenario.
  2. Other critics concede Searle’s claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. However, these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“The Robot Reply”), or it might be a system that simulated the detailed operation of an entire brain, neuron by neuron (“the Brain Simulator Reply”).
  3. Finally, some critics do not concede even the narrow point against AI. These critics hold that the man in the original Chinese Room scenario might understand Chinese, despite Searle’s denials, or that the scenario is impossible. For example, critics have argued that our intuitions in such cases are unreliable. Other critics have held that it all depends on what one means by “understand”, a point discussed in the section on the Intuition Reply. Others (e.g. Sprevak 2007) object to the assumption that any system (e.g. Searle in the room) can run any computer program. And finally, some have argued that if it is not reasonable to attribute understanding on the basis of the behavior exhibited by the Chinese Room, then it would not be reasonable to attribute understanding to humans on the basis of similar behavioral evidence (Searle calls this last the “Other Minds Reply”).

In addition to these responses, critics also independently argue against Searle’s larger claim, holding that one can get semantics (that is, meaning) from syntactic symbol manipulation, including the sort that takes place inside a digital computer; this question is discussed in the section on syntax and semantics.

The Chinese Room argument

September 4th, 2010

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese.

The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle’s argument is a direct challenge to proponents of Artificial Intelligence, and the argument has broad implications for functionalist and computational theories of meaning and of mind. As a result, there have been many critical replies to the argument.
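To make the syntax-versus-semantics point concrete, here is a minimal sketch of the kind of purely formal symbol manipulation Searle has in mind. The rule book below is an invented two-entry placeholder (any real conversational rule set would be vastly larger): the “room” pairs incoming character strings with outgoing character strings by their shape alone, and nothing in the procedure involves knowing what any symbol means.

    # A toy "Chinese Room": replies are produced by looking up the shape of the
    # input string in a rule book; the procedure never represents meaning.
    # The two rules are invented placeholders, not an actual rule book.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # matched purely as a sequence of characters
        "今天天气怎么样？": "今天天气很好。",
    }

    def room(input_symbols: str) -> str:
        """Return whatever string the rule book pairs with the input.

        The lookup is driven entirely by the form of the symbols; the function
        has no access to, and no need for, what they mean.
        """
        return RULE_BOOK.get(input_symbols, "请再说一遍。")  # default: "please say that again"

    print(room("你好吗？"))  # looks conversational from outside the room

From outside, the replies look fluent; inside, the procedure is exhausted by string matching, which is exactly the gap between behavior and understanding that the argument trades on.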

Loebner Prize Competition

September 4th, 2010

The Loebner Prize is an annual competition in artificial intelligence that awards prizes to the chatterbot considered by the judges to be the most human-like. The format of the competition is that of a standard Turing test: a human judge poses text questions to a computer program and a human being via computer and, based upon the answers, must decide which is which. In 2008, a variety of judges, including experts and non-experts, adults and children, and native and non-native English speakers, took part in the contest hosted by the University of Reading.

The contest was begun in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies in Massachusetts, United States. It has since been associated with Flinders University, Dartmouth College, the Science Museum in London, and most recently the University of Reading. In 2004 and 2005, it was held in Loebner’s apartment in New York City.

Within the field of artificial intelligence, the Loebner Prize is somewhat controversial; the most prominent critic, Marvin Minsky, has called it a publicity stunt that does not help the field along.

There is little doubt that Turing would have been disappointed by the state of play at the end of the twentieth century.

On the one hand, participants in the Loebner Prize Competition, an annual event in which computer programs are submitted to the Turing Test, come nowhere near the standard that Turing envisaged. (A quick look at the transcripts of the participants for the past decade reveals that the entered programs are all easily detected by a range of not-very-subtle lines of questioning.)

On the other hand, major players in the field often claim that the Loebner Prize Competition is an embarrassment precisely because we are so far from having a computer program that could carry out a decent conversation for a period of five minutes; see, for example, Shieber (1994). (The programs entered in the Loebner Prize Competition are designed solely with the aim of winning the minor prize of best competitor for the year, with no thought that the embodied strategies would actually yield something capable of passing the Turing Test.)
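As a rough illustration of why the entrants are so easy to unmask, here is a sketch of the keyword-and-template style of chatterbot that has typically competed. The patterns are invented for the example rather than taken from any actual entrant: anything a judge asks that falls outside the keyword list drops through to a canned deflection, which even a not-very-subtle line of questioning exposes within a few exchanges.

    import re

    # Invented keyword -> reply templates in the style of simple chatterbots.
    PATTERNS = [
        (re.compile(r"\bhow are you\b", re.I), "I'm fine, thanks. How are you?"),
        (re.compile(r"\bweather\b", re.I),     "I never get tired of talking about the weather."),
        (re.compile(r"\byour name\b", re.I),   "My friends call me Chatterbox."),
    ]

    def reply(question: str) -> str:
        """Answer by keyword match; deflect whenever nothing matches."""
        for pattern, template in PATTERNS:
            if pattern.search(question):
                return template
        return "That's interesting. Tell me more."  # canned deflection

    print(reply("How are you?"))                           # plausible
    print(reply("If I put two bricks on one, how many?"))  # deflection gives the game away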

The Turing Test

September 3rd, 2010

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion.

However, if we consider the more precise and somehow related question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then at least in Turing’s eyes we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

Turing’s Imitation Game

Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’, but does not know which of them is ‘X’; at the end of the game he says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:

I believe that in about fifty years’ time it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

There are two kinds of questions that can be raised about Turing’s Imitation Game.

  • First, there are empirical questions, e.g., Is it true that we have now made, or will soon make, computers that can play the imitation game so well that an average interrogator has no more than a 70 percent chance of making the right identification after five minutes of questioning?
  • Second, there are conceptual questions, e.g., Is it true that, if an average interrogator had no more than a 70 percent chance of making the right identification after five minutes of questioning, we should conclude that the machine exhibits some level of thought, or intelligence, or mentality?
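A schematic rendering of the game may help fix the quantities these questions mention. The sketch below is only an illustration of the protocol’s structure, with placeholder respondents and a guessing stand-in for the interrogator; it counts how often the interrogator’s identification is right over many sessions, which is the rate Turing’s 70 percent, five-minute criterion is about.

    import random

    def play_round(interrogator) -> bool:
        """One imitation game: a person and a machine are hidden behind labels X and Y.

        Returns True when the interrogator identifies the person correctly.
        A real round would consist of five minutes of typed questions and answers;
        here the respondents are placeholders and no conversation takes place.
        """
        roles = ["person", "machine"]
        random.shuffle(roles)                      # hidden assignment to the labels
        labels = {"X": roles[0], "Y": roles[1]}
        guess = interrogator(labels)               # which label the interrogator calls the person
        return labels[guess] == "person"

    def guessing_interrogator(labels) -> str:
        # Stand-in judge who cannot tell the two apart and therefore guesses.
        return random.choice(["X", "Y"])

    trials = 10_000
    correct = sum(play_round(guessing_interrogator) for _ in range(trials))
    print(f"right identifications: {correct / trials:.1%}")
    # Pure guessing hovers around 50%; Turing's criterion asks whether a real
    # interrogator, questioning a well-programmed machine, can stay above 70%.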

Philosophy of AI

September 3rd, 2010

The philosophy of artificial intelligence attempts to answer such questions as,

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

These three questions reflect the divergent interests of AI researchers, philosophers and cognitive scientists respectively. The answers to these questions depend on how one defines “intelligence” or “consciousness” and exactly which “machines” are under discussion.
Important propositions in the philosophy of AI include:

  • Turing’s “polite convention”: If a machine acts as intelligently as a human being does, then it is as intelligent as a human being is.
  • The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.
  • Newell and Simon’s physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means for general intelligent action.
  • Searle’s strong AI hypothesis: The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

One can expect there to be an academic subject called the philosophy of artificial intelligence, analogous to the existing fields of philosophy of physics and philosophy of biology. By analogy, it will be a philosophical study of the research methods of AI and will propose to clarify the philosophical problems that AI raises.

I suppose it will take up the methodological issues raised by Hubert Dreyfus and John Searle, even the idea that intelligence requires that the system be made of meat.

Presumably, some philosophers of AI will do battle with the idea that AI is impossible (Dreyfus), that it is immoral (Weizenbaum) and that the very concept is incoherent (Searle).

It is unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.

What Has AI in Common with Philosophy?

September 3rd, 2010

Artificial intelligence and philosophy have more in common than a science usually has with the philosophy of that science. This is because human-level artificial intelligence requires equipping a computer program with some philosophical attitudes, especially epistemological ones.

The program must have built into it a concept of what knowledge is and how it is obtained.

If the program is to reason about what it can and cannot do, its designers will need an attitude to free will. If it is to do meta-level reasoning about what it can do, it needs an attitude of its own to free will.

If the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that.

Unfortunately, in none of these areas is there any philosophical attitude or system sufficiently well defined to provide the basis of a usable computer program.

Most AI work today does not require any philosophy, because the system being developed does not have to operate independently in the world and have a view of the world. The designer of the program does the philosophy in advance and builds a restricted representation into the program.

Not all philosophical positions are compatible with what has to be built into intelligent programs. Here are some of the philosophical attitudes that seem to me to be required.

  1. Science and common sense knowledge of the world must both be accepted. There are atoms, and there are chairs. We can learn features of the world at the intermediate size level on which humans operate without having to understand fundamental physics. Causal relations must also be used for a robot to reason about the consequences of its possible actions.
  2. Mind has to be understood a feature at a time. There are systems with only a few beliefs and no belief that they have beliefs. Other systems will do extensive introspection. Contrast this with the attitude that unless a system has a whole raft of features, it is not a mind and therefore it cannot have beliefs.
  3. Beliefs and intentions are objects that can be formally described; a minimal sketch of what such a description might look like follows this list.
  4. A sufficient reason to ascribe a mental quality is that it accounts for behavior to a sufficient degree.
  5. It is legitimate to use approximate concepts that are not capable of if-and-only-if (iff) definition. For this, it is necessary to relax some of the criteria for a concept to be meaningful. It is still possible to use mathematical logic to express approximate concepts.
  6. Because a theory of approximate concepts and approximate theories is not available, philosophical attempts to be precise have often led to useless hair-splitting.
  7. Free will and determinism are compatible. The deterministic process that determines what an agent will do involves its evaluation of the consequences of the available choices. These choices are present in its consciousness and can give rise to sentences about them as they are observed.
  8. Self-consciousness consists in putting sentences about consciousness in memory.
  9. Twentieth century philosophers became too critical of reification. Many of the criticisms do not apply when the entities reified are treated as approximate concepts.
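As a minimal sketch of attitudes 3 and 4, beliefs can be treated as formally described objects and ascribed to a system just insofar as they account for its behavior. The thermostat example below is invented for illustration (the classes, sentences, and ascription rule are all assumptions, not any standard formalism): it is a system with only a few beliefs and no belief that it has beliefs.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Belief:
        """A belief as a formal object: an agent paired with a sentence it holds true."""
        agent: str
        sentence: str

    @dataclass
    class Thermostat:
        """A system with only a few ascribable beliefs and no belief about its beliefs."""
        setpoint: float
        reading: float

        def beliefs(self) -> set:
            # Ascribe exactly the beliefs needed to account for the switching
            # behavior below (attitude 4); nothing more is claimed about the device.
            if self.reading < self.setpoint:
                return {Belief("thermostat", "the room is too cold")}
            return {Belief("thermostat", "the room is warm enough")}

        def act(self) -> str:
            return "heat on" if self.reading < self.setpoint else "heat off"

    t = Thermostat(setpoint=20.0, reading=18.5)
    print(t.act(), t.beliefs())

A richer system could additionally store sentences about its own beliefs in memory, which is the move attitude 8 points toward.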