Posts Tagged ‘Philosophy of Science’

Philosophy of AI

September 3rd, 2010

The philosophy of artificial intelligence attempts to answer such questions as:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

These three questions reflect the divergent interests of AI researchers, philosophers and cognitive scientists respectively. The answers to these questions depend on how one defines “intelligence” or “consciousness” and exactly which “machines” are under discussion.

Important propositions in the philosophy of AI include:

  • Turing’s “polite convention”: If a machine acts as intelligently as a human being does, then it is as intelligent as a human being is.
  • The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.
  • Newell and Simon’s physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means for general intelligent action.
  • Searle’s strong AI hypothesis: The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

One can expect there to be an academic subject called the philosophy of artificial intelligence, analogous to the existing fields of philosophy of physics and philosophy of biology. By analogy, it will be a philosophical study of the research methods of AI, and it will propose to clarify the philosophical problems the field raises.

I suppose it will take up the methodological issues raised by Hubert Dreyfus and John Searle, even the idea that intelligence requires that the system be made of meat.

Presumably, some philosophers of AI will do battle with the idea that AI is impossible (Dreyfus), that it is immoral (Weizenbaum) and that the very concept is incoherent (Searle).

It is unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.

What Has AI in Common with Philosophy?

September 3rd, 2010

Artificial intelligence and philosophy have more in common than a science usually has with the philosophy of that science. This is because human-level artificial intelligence requires equipping a computer program with some philosophical attitudes, especially epistemological ones.

The program must have built into it a concept of what knowledge is and how it is obtained.

If the program is to reason about what it can and cannot do, its designers will need an attitude toward free will. If it is to do meta-level reasoning about what it can do, it needs an attitude of its own toward free will.

If the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that.

Unfortunately, in none of these areas is there any philosophical attitude or system sufficiently well defined to provide the basis of a usable computer program.

Most AI work today does not require any philosophy, because the system being developed does not have to operate independently in the world and have a view of the world. The designer of the program does the philosophy in advance and builds a restricted representation into the program.

Not all philosophical positions are compatible with what has to be built into intelligent programs. Here are some of the philosophical attitudes that seem to me to be required.

  1. Science and common-sense knowledge of the world must both be accepted. There are atoms, and there are chairs. We can learn features of the world at the intermediate-size level on which humans operate without having to understand fundamental physics. Causal relations must also be used for a robot to reason about the consequences of its possible actions; the first sketch after this list illustrates the idea.
  2. Mind has to be understood a feature at a time. There are systems with only a few beliefs and no belief that they have beliefs. Other systems will do extensive introspection. Contrast this with the attitude that unless a system has a whole raft of features, it is not a mind and therefore it cannot have beliefs.
  3. Beliefs and intentions are objects that can be formally described; the second sketch after this list illustrates the idea.
  4. A sufficient reason to ascribe a mental quality is that it accounts for behavior to a sufficient degree.
  5. It is legitimate to use approximate concepts not capable of if-and-only-if (iff) definition. For this, it is necessary to relax some of the criteria for a concept to be meaningful. It is still possible to use mathematical logic to express approximate concepts.
  6. Because a theory of approximate concepts and approximate theories is not available, philosophical attempts to be precise have often led to useless hair-splitting.
  7. Free will and determinism are compatible. The deterministic process that determines what an agent will do involves its evaluation of the consequences of the available choices. These choices are present in its consciousness and can give rise to sentences about them as they are observed.
  8. Self-consciousness consists in putting sentences about consciousness in memory.
  9. Twentieth-century philosophers became too critical of reification. Many of the criticisms do not apply when the entities reified are treated as approximate concepts.
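
To make the causal reasoning in point 1 concrete, here is a minimal toy sketch, loosely in the style of the situation calculus. Everything in it (the result function, the fluent and action names) is an illustrative assumption of mine, not something prescribed by the argument above:

    # Toy causal reasoning about actions, loosely in the style of the
    # situation calculus. A situation is modeled as the set of fluents
    # (facts) that hold in it; result() encodes one causal rule per action.
    # All names are illustrative assumptions, not a standard library.

    def result(action, situation):
        """Return the set of fluents holding after the action is performed."""
        if action == "PickUp(Block)" and "OnTable(Block)" in situation:
            return (situation - {"OnTable(Block)", "HandEmpty"}) | {"Holding(Block)"}
        return situation  # actions with no matching rule change nothing

    s0 = frozenset({"OnTable(Block)", "HandEmpty"})
    s1 = result("PickUp(Block)", s0)

    # The robot evaluates a possible action by inspecting its consequences.
    print("Holding(Block)" in s1)  # True

The causal rule operates entirely at the intermediate-size level of blocks and grippers on which the robot acts, with no appeal to fundamental physics, which is the point of accepting both atoms and chairs.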
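
As a toy illustration of points 2, 3 and 8, here is a hedged sketch in which beliefs and intentions are ordinary formal objects the program can store and inspect. The Agent class, the encoding of sentences as strings, and the introspect method are hypothetical choices of mine:

    from dataclasses import dataclass, field

    # Hypothetical sketch: beliefs and intentions as formal, inspectable
    # objects. A "sentence" is a string standing in for a logical formula.

    @dataclass
    class Agent:
        beliefs: set = field(default_factory=set)
        intentions: set = field(default_factory=set)

        def believes(self, sentence):
            return sentence in self.beliefs

        def introspect(self):
            # Point 8: self-consciousness as putting sentences *about*
            # the agent's own mental state back into memory.
            for s in list(self.beliefs):
                self.beliefs.add(f"Believes(self, {s})")

    robot = Agent(beliefs={"Raining"}, intentions={"StayDry"})
    assert robot.believes("Raining")                  # a belief about the world
    robot.introspect()
    assert robot.believes("Believes(self, Raining)")  # a belief about a belief

A system built with only the beliefs field and no introspect method is the minimal case of point 2; adding introspection gives the richer case, one feature at a time, without requiring a whole raft of features before beliefs can be ascribed.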