
Philosophy of AI

The philosophy of artificial intelligence attempts to answer such questions as:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

These three questions reflect the divergent interests of AI researchers, philosophers and cognitive scientists respectively. The answers to these questions depend on how one defines “intelligence” or “consciousness”, and on exactly which “machines” are under discussion.

Important propositions in the philosophy of AI include:

  • Turing’s “polite convention”: If a machine acts as intelligently as a human being does, then it is as intelligent as a human being is.
  • The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.
  • Newell and Simon’s physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action.
  • Searle’s strong AI hypothesis: The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

One can expect there to be an academic subject called the philosophy of artificial intelligence, analogous to the existing fields of philosophy of physics and philosophy of biology. By analogy, it will be a philosophical study of the research methods of AI, and it will propose to clarify the philosophical problems that AI research raises.

I suppose it will take up the methodological issues raised by Hubert Dreyfus and John Searle, including even the idea that intelligence requires that the system be made of meat.

Presumably, some philosophers of AI will do battle with the idea that AI is impossible (Dreyfus), that it is immoral (Weizenbaum) and that the very concept is incoherent (Searle).

The philosophy of AI is unlikely to have any more effect on the practice of AI research than the philosophy of science generally has on the practice of science.
