
Posts Tagged ‘Intelligence’

Computer Vision

September 4th, 2010

Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.

As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Computer vision is, in some ways, the inverse of computer graphics. While computer graphics produces image data from 3D models, computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines.
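
To make this concrete, here is a minimal sketch, not from the original post, of a program that extracts information from a single image. It assumes the OpenCV library (the cv2 Python package), and the file name scene.jpg is only a placeholder.

```python
# A minimal sketch of extracting information from image data,
# assuming the OpenCV library (pip install opencv-python).
# "scene.jpg" is a placeholder file name.
import cv2

# Load the image as a single-channel grayscale array.
image = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Edge map: a low-level description of the scene's structure.
edges = cv2.Canny(image, 100, 200)

# Corner features: candidate points for tracking across views or
# for reconstructing 3D structure from multiple images.
corners = cv2.goodFeaturesToTrack(image, 100, 0.01, 10)

print("edge pixels:", int((edges > 0).sum()))
print("corner features:", 0 if corners is None else len(corners))
```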

An intelligent computer system can go a long way in reducing human labor. However, if such a system can be provided with a method of actually interacting with the physical world, its usefulness is greatly increased. Robotics gives AI the means to exhibit real-world intelligence by directly manipulating its environment. That is, robotics gives the artificial mind a body.

An essential component of robotics has to do with artificial sensory systems in general and artificial vision in particular. While robotic systems do exist (including many successful industrial robots) that have no sensory equipment, or only very limited sensors, they tend to be very brittle. They need their work area perfectly lit, with no shadows or clutter. They must have the parts they need in precisely the right position and orientation, and if they are moved to a new location, they may require hours of recalibration. A system that could make sense of a visual scene would greatly enhance the potential for robotics applications. It is therefore not surprising that the study of artificial vision and robotics go hand in hand.

Approaches to AI

September 4th, 2010

Researchers have branched artificial intelligence into different approaches, all sharing the same goal of creating intelligent machines. Let us introduce some of the main approaches to artificial intelligence. They fall into two main lines of thought: the bottom-up approach and the top-down approach.

Neural Networks

This is the bottom-up approach. It aims to mimic the structure and functioning of the human brain in order to create intelligent behavior. Researchers are attempting to build silicon-based electronic networks modeled on the form and workings of the human brain. Our brain is a network of billions of neurons, each connected to many others.

At an individual level, a neuron has very little intelligence, in the sense that it operates by a simple set of rules, conducting electrical signals through its network. The combined network of all these neurons, however, creates intelligent behavior that is so far unrivaled. Researchers therefore created networks of electronic analogues of neurons, based on Boolean logic. Memory was recognized to be an electrical signal pattern in a closed neural network.

The human brain works by learning to recognize patterns and remembering them. Similarly, the neural networks developed so far can learn and remember patterns. The approach is limited by the scale and complexity of replicating a human brain exactly, since its neurons number in the billions. Currently, researchers create virtual neural networks through simulation. The approach has not yet achieved the ultimate goal, but progress in the field is encouraging, and advances in parallel computing will aid it in the future.
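
To give a flavor of the bottom-up idea, here is a minimal sketch, written for this post rather than taken from any particular system, of a tiny neural network that learns the XOR pattern by gradient descent using NumPy. The network size, learning rate, and iteration count are illustrative choices.

```python
# A tiny neural network (2 inputs, 3 hidden units, 1 output) trained
# by gradient descent to reproduce the XOR pattern. Illustrative only;
# a different random seed may need more iterations to converge.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [0, 1, 1, 0]
```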

Expert Systems

This is the top-down approach. Instead of starting at the base level of neurons, followers of the expert-systems approach take advantage of the phenomenal computational power of modern computers to design intelligent machines that solve problems by deductive logic. It is akin to the dialectic approach in philosophy.

This is an intensive approach, as opposed to the extensive approach of neural networks. As the name suggests, expert systems are machines devoted to solving problems in very specific niche areas. They have deep expertise in a single domain of human thought. Their tools are like those of a detective or sleuth: they are programmed to use statistical analysis and data mining to solve problems, and they arrive at a decision through a logical flow of yes-no questions, as sketched below.
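
Here is a toy sketch of that yes-no decision flow. The domain (a simple car-fault diagnosis) and its rules are invented purely for illustration and are not drawn from any real expert system.

```python
# A toy rule base: each node asks a yes/no question and branches
# until it reaches a conclusion. The rules are invented for illustration.
DECISION_TREE = {
    "question": "Does the engine turn over?",
    "yes": {
        "question": "Does the engine start?",
        "yes": "No fault detected.",
        "no": "Check the fuel supply and spark plugs.",
    },
    "no": {
        "question": "Are the dashboard lights on?",
        "yes": "Suspect the starter motor.",
        "no": "Suspect a dead battery.",
    },
}

def diagnose(node, answer_fn):
    """Walk the tree, asking yes/no questions until a conclusion is reached."""
    while isinstance(node, dict):
        answer = answer_fn(node["question"])  # True for yes, False for no
        node = node["yes"] if answer else node["no"]
    return node

# Example run: answer "no" to both questions.
answers = iter([False, False])
print(diagnose(DECISION_TREE, lambda q: next(answers)))  # Suspect a dead battery.
```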

Chess computers such as Fritz, and IBM's Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, are examples of expert systems. Chess is known as the drosophila, or experimental specimen, of artificial intelligence.

Philosophy of AI

September 3rd, 2010

The philosophy of artificial intelligence attempts to answer such questions as,

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

These three questions reflect the divergent interests of AI researchers, philosophers and cognitive scientists respectively. The answers to these questions depend on how one defines “intelligence” or “consciousness” and exactly which “machines” are under discussion.

Important propositions in the philosophy of AI include:

  • Turing’s “polite convention”: If a machine acts as intelligently as a human being does, then it is as intelligent as a human being is.
  • The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.
  • Newell and Simon’s physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action.
  • Searle’s strong AI hypothesis: The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

One can expect there to be an academic subject called the philosophy of artificial intelligence, analogous to the existing fields of philosophy of physics and philosophy of biology. By analogy, it will be a philosophical study of the research methods of AI, and it will attempt to clarify the philosophical problems that AI raises.

I suppose it will take up the methodological issues raised by Hubert Dreyfus and John Searle, even the idea that intelligence requires that the system be made of meat.

Presumably, some philosophers of AI will do battle with the idea that AI is impossible (Dreyfus), that it is immoral (Weizenbaum) and that the very concept is incoherent (Searle).

It is unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.

AI in Common with Philosophy?

September 3rd, 2010

Artificial intelligence and philosophy have more in common than a science usually has with the philosophy of that science. This is because human-level artificial intelligence requires equipping a computer program with some philosophical attitudes, especially epistemological ones.

The program must have built into it a concept of what knowledge is and how it is obtained.

If the program is to reason about what it can and cannot do, its designers will need an attitude toward free will. If it is to do meta-level reasoning about what it can do, it needs an attitude of its own toward free will.

If the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that.

Unfortunately, in none of these areas is there any philosophical attitude or system sufficiently well defined to provide the basis of a usable computer program.

Most AI work today does not require any philosophy, because the system being developed does not have to operate independently in the world and have a view of the world. The designer of the program does the philosophy in advance and builds a restricted representation into the program.

Not all philosophical positions are compatible with what has to be built into intelligent programs. Here are some of the philosophical attitudes that seem to me to be required.

  1. Science and common sense knowledge of the world must both be accepted. There are atoms, and there are chairs. We can learn features of the world at the intermediate size level on which humans operate without having to understand fundamental physics. Causal relations must also be used for a robot to reason about the consequences of its possible actions.
  2. Mind has to be understood a feature at a time. There are systems with only a few beliefs and no belief that they have beliefs. Other systems will do extensive introspection. Contrast this with the attitude that unless a system has a whole raft of features, it is not a mind and therefore it cannot have beliefs.
  3. Beliefs and intentions are objects that can be formally described (see the sketch after this list).
  4. A sufficient reason to ascribe a mental quality is that it accounts for behavior to a sufficient degree.
  5. It is legitimate to use approximate concepts that are not capable of precise if-and-only-if definition. For this, it is necessary to relax some of the criteria for a concept to be meaningful. It is still possible to use mathematical logic to express approximate concepts.
  6. Because a theory of approximate concepts and approximate theories is not available, philosophical attempts to be precise have often led to useless hair-splitting.
  7. Free will and determinism are compatible. The deterministic process that determines what an agent will do involves its evaluation of the consequences of the available choices. These choices are present in its consciousness and can give rise to sentences about them as they are observed.
  8. Self-consciousness consists in putting sentences about consciousness in memory.
  9. Twentieth-century philosophers became too critical of reification. Many of the criticisms do not apply when the entities reified are treated as approximate concepts.
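
As a toy illustration of point 3 above, my own sketch rather than part of any particular AI system, beliefs and intentions can be treated as formal objects that a program stores and queries:

```python
# A toy sketch of beliefs and intentions as formal, inspectable objects.
# The agent name and propositions are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Belief:
    agent: str
    proposition: str

@dataclass(frozen=True)
class Intention:
    agent: str
    goal: str

knowledge_base = {
    Belief("robot", "the door is closed"),
    Belief("robot", "the key is on the table"),
    Intention("robot", "open the door"),
}

def believes(agent, proposition):
    """Ascribe a belief when the corresponding object is in the knowledge base."""
    return Belief(agent, proposition) in knowledge_base

print(believes("robot", "the door is closed"))  # True
print(believes("robot", "the door is open"))    # False
```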