Archive

Posts Tagged ‘Computer Science’

History of Knowledge Representation

September 4th, 2010

In computer science, particularly artificial intelligence, a number of representations have been devised to structure information.

Knowledge representation (KR) is most commonly used to refer to representations intended for processing by modern computers, and in particular to representations consisting of explicit objects (the class of all elephants, or Clyde, a certain individual) and of assertions or claims about them (‘Clyde is an elephant’, or ‘all elephants are grey’). Representing knowledge in such explicit form enables computers to draw conclusions from knowledge already stored (‘Clyde is grey’).
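As a rough illustration (not something from the original post), a few lines of Python can mimic this kind of explicit storage and inference; the predicates and the forward_chain helper below are invented for the example.

    # Minimal forward-chaining sketch: derive 'Clyde is grey' from stored knowledge.
    # The facts, the single rule, and the helper name are illustrative only.

    facts = {("elephant", "clyde")}                    # 'Clyde is an elephant'
    rules = [(("elephant", "?x"), ("grey", "?x"))]     # 'all elephants are grey'

    def forward_chain(facts, rules):
        """Apply rules to known facts until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (pred, _), (new_pred, _) in rules:
                for fact_pred, individual in list(derived):
                    if fact_pred == pred and (new_pred, individual) not in derived:
                        derived.add((new_pred, individual))
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # {('elephant', 'clyde'), ('grey', 'clyde')} -- the program concludes 'Clyde is grey'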

Many KR methods were tried in the 1970s and early 1980s, such as heuristic question-answering, neural networks, theorem proving, and expert systems, with varying success. Medical diagnosis (e.g., Mycin) was a major application area, as were games such as chess.

In the 1980s, formal computer knowledge representation languages and systems arose. Major projects attempted to encode wide bodies of general knowledge; for example the “Cyc” project (still ongoing) went through a large encyclopedia, encoding not the information itself, but the information a reader would need in order to understand the encyclopedia: naive physics; notions of time, causality, motivation; commonplace objects and classes of objects.

Through such work, the difficulty of KR came to be better appreciated. In computational linguistics, meanwhile, much larger databases of language information were being built, and these, along with great increases in computer speed and capacity, made deeper KR more feasible.

Several programming languages have been developed that are oriented to KR. Prolog, developed in 1972 but popularized much later, represents propositions and basic logic, and can derive conclusions from known premises. KL-ONE (1980s) is more specifically aimed at knowledge representation itself. In 1995, the Dublin Core standard of metadata was conceived.

In the electronic document world, languages were being developed to represent the structure of documents, such as SGML (from which HTML descended) and later XML. These facilitated information retrieval and data mining efforts, which have in recent years begun to relate to knowledge representation.

Application Areas

September 4th, 2010
Agents
Agriculture
Architecture & Design
Art
Artificial Noses … and Taste
Assistive Technologies
Astronomy & Space Exploration
Automatic Programming
Autonomous Vehicles, Robots, Rovers, Explorers
Banking, Finance & Investing
Bioinformatics
Business & Manufacturing
Drama, Fiction, Poetry, Storytelling & Machine Writing
Earth & Atmospheric Sciences
Engineering
Expert Systems
Filtering
Fraud Detection & Prevention
Games & Puzzles
Hazards & Disasters
Information Retrieval & Extraction
Intelligent Tutoring Systems
Knowledge Management
Law
Law Enforcement & Public Safety
Libraries
Machine Learning
Machine Translation
Marketing, Customer Relations/Service & E-Commerce
Medicine
Military
Music
Natural Language Processing
Natural Resource Management and the Environment
Networks – including Maintenance, Security & Intrusion Detection
Petroleum Industry
Politics & Foreign Relations
Public Health & Welfare
Robots
Scientific Discovery
Smart Rooms, Smart Houses and Household Appliances
Social Science
Sports
Telecommunications
Transportation & Shipping
Video Games, Toys, Robotic Pets & Entertainment
Vision

Artificial intelligence, in the form of expert systems and neural networks, has applications in every field of human endeavor. These systems combine computational power and precision with logical reasoning to solve problems and reduce errors in operation. Already, robots and expert systems are taking over many jobs in industries where the work is dangerous or beyond human ability. Some of the applications, divided by domain, are as follows:

Heavy Industries and Space

Robotics and cybernetics have taken a leap forward when combined with artificially intelligent expert systems. Entire manufacturing processes are now fully automated, controlled, and maintained by computer systems in car manufacturing, machine-tool production, computer-chip production, and almost every other high-tech process. Robots also carry out dangerous tasks such as handling hazardous radioactive materials, and robotic pilots perform the complex maneuvers required of unmanned spacecraft. Japan is the world’s leading country in robotics research and use.

Finance

Banks use intelligent software applications to screen and analyze financial data. Software that can predict trends in the stock market has been created and has been known to beat humans in predictive power. Credit card providers, telephone companies, mortgage lenders, banks, and the U.S. Government employ AI systems to detect fraud and expedite financial transactions, with daily transaction volumes in the billions. These systems first use learning algorithms to construct profiles of customer usage patterns, then use the resulting profiles to detect unusual patterns and take appropriate action (e.g., disabling the credit card). Such automated oversight of financial transactions is an important component of a viable basis for electronic commerce.
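A minimal sketch of that profile-then-flag idea in Python; the per-customer statistics, the z-score threshold, and the sample amounts are invented for illustration and are not taken from any real fraud-detection system.

    # Sketch of profile-based screening: learn a customer's typical spending,
    # then flag transactions that fall far outside that profile.
    from statistics import mean, stdev

    def build_profile(past_amounts):
        """Summarize a customer's historical transaction amounts."""
        return {"mean": mean(past_amounts), "stdev": stdev(past_amounts)}

    def is_suspicious(profile, amount, z_threshold=3.0):
        """Flag a transaction whose amount is an extreme outlier for this customer."""
        if profile["stdev"] == 0:
            return amount != profile["mean"]
        z = abs(amount - profile["mean"]) / profile["stdev"]
        return z > z_threshold

    history = [42.50, 18.99, 63.10, 25.00, 54.75, 30.20]   # hypothetical past purchases
    profile = build_profile(history)

    for amount in (47.80, 2999.00):
        action = "review / disable card" if is_suspicious(profile, amount) else "approve"
        print(f"${amount:>8.2f} -> {action}")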

Computer Science

Researchers in the quest for artificial intelligence have created spin-offs such as dynamic programming, object-oriented programming, symbolic programming, intelligent storage-management systems, and many other such tools. The primary goal of creating an artificial intelligence remains a distant dream, but people are getting an idea of the ultimate path that could lead to it.

Weather Forecast

Neural networks are used for predicting weather conditions. Previous data is fed to a neural network, which learns the pattern and uses that knowledge to predict future weather.
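As one small, self-contained sketch of that idea (with synthetic temperature data and an arbitrary little network; none of this reflects a real forecasting system), past readings can be windowed into training pairs and a tiny network trained to predict the next reading:

    # Feed previous temperature readings to a small neural network and train it
    # to predict the next reading. Data, network size, and hyperparameters are
    # made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "previous data": a noisy seasonal temperature series (deg C).
    days = np.arange(500)
    temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1.0, days.size)

    # Normalize, then turn the series into (last 3 readings -> next reading) pairs.
    mu, sigma = temps.mean(), temps.std()
    z = (temps - mu) / sigma
    window = 3
    X = np.stack([z[i:i + window] for i in range(len(z) - window)])
    y = z[window:]

    # One hidden layer, trained by plain gradient descent on mean squared error.
    W1 = rng.normal(0, 0.5, (window, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1));      b2 = np.zeros(1)
    lr = 0.05

    for _ in range(3000):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = (h @ W2 + b2).ravel()          # predicted (normalized) next reading
        d_pred = (2 * (pred - y) / len(y))[:, None]
        # Compute all gradients before updating any weights.
        gW2, gb2 = h.T @ d_pred, d_pred.sum(0)
        d_h = d_pred @ W2.T * (1 - h ** 2)
        gW1, gb1 = X.T @ d_h, d_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    last = z[-window:]
    forecast = (np.tanh(last @ W1 + b1) @ W2 + b2)[0] * sigma + mu
    print("last 3 readings:", np.round(temps[-window:], 1))
    print("predicted next reading: %.1f" % forecast)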

Swarm Intelligence

This is both an approach to and an application of artificial intelligence, somewhat like neural networks. Programmers study how intelligence emerges in natural systems such as swarms of bees, even though at the individual level a bee just follows simple rules. They also study relationships in nature, such as predator-prey relationships, that give insight into how intelligence emerges in a swarm or collective from simple individual-level rules. They then develop intelligent systems by creating agent programs that mimic the behavior of these natural systems.
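Particle swarm optimization is one well-known algorithm in this family. The toy sketch below shows a swarm of simple agents collectively minimizing an arbitrary function; the objective, swarm size, and coefficients are chosen only for illustration.

    # Particle swarm optimization: each particle follows simple local rules
    # (keep some momentum, pull toward its own best point and the swarm's best
    # point), yet the swarm as a whole homes in on a good solution.
    import random

    def objective(x, y):
        """Function to minimize; its true minimum is at (3, -2)."""
        return (x - 3) ** 2 + (y + 2) ** 2

    random.seed(1)
    n_particles, steps = 20, 100
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, personal pull, social pull

    pos = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    best_own = list(pos)                                  # each particle's best point so far
    best_all = min(pos, key=lambda p: objective(*p))      # swarm's best point so far

    for _ in range(steps):
        for i in range(n_particles):
            (x, y), (vx, vy) = pos[i], vel[i]
            r1, r2 = random.random(), random.random()
            # Per-particle update rule: momentum + pull toward the two bests.
            vx = w * vx + c1 * r1 * (best_own[i][0] - x) + c2 * r2 * (best_all[0] - x)
            vy = w * vy + c1 * r1 * (best_own[i][1] - y) + c2 * r2 * (best_all[1] - y)
            x, y = x + vx, y + vy
            pos[i], vel[i] = (x, y), (vx, vy)
            if objective(x, y) < objective(*best_own[i]):
                best_own[i] = (x, y)
            if objective(x, y) < objective(*best_all):
                best_all = (x, y)

    print("swarm's best point:", tuple(round(v, 3) for v in best_all))
    # Converges near (3, -2) even though each particle follows only simple rules.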

The Turing Test

September 3rd, 2010

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question of whether machines can think. According to Turing, the question of whether machines can think is itself “too meaningless” to deserve discussion.

However, if we consider the more precise and somehow related question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then at least in Turing’s eyes we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

Turing’s Imitation Game

Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’. He does not know which of the other person and the machine is ‘X’, and at the end of the game he says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:

I believe that in about fifty years’ time it will be possible to program computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

There are two kinds of questions that can be raised about Turing’s Imitation Game.

  • First, there are empirical questions, e.g., Is it true that we have now made, or will soon make, computers that can play the imitation game so well that an average interrogator has no more than a 70 percent chance of making the right identification after five minutes of questioning?
  • Second, there are conceptual questions, e.g., Is it true that, if an average interrogator had no more than a 70 percent chance of making the right identification after five minutes of questioning, we should conclude that the machine exhibits some level of thought, or intelligence, or mentality?
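For readers who find the protocol easier to follow in code, here is a toy Python sketch of the game’s structure only: random assignment of the labels ‘X’ and ‘Y’, questions addressed by label, and a final identification. The canned respondents and the naive guessing heuristic are placeholders, not serious contestants or judges.

    # Toy structure of the Imitation Game; nothing here is a real test of thinking.
    import random

    def human_answer(question):
        return "Yes, I play chess, though not very well."

    def machine_answer(question):
        return "Yes, I play chess."          # trying to pass as the human

    def run_imitation_game(interrogate):
        # Hide the players behind the labels 'X' and 'Y' at random.
        players = {"human": human_answer, "machine": machine_answer}
        labels = ["X", "Y"]
        random.shuffle(labels)
        by_label = dict(zip(labels, players.values()))
        identity = dict(zip(labels, players.keys()))
        # The interrogator may only address questions by label, never by identity.
        guess = interrogate(lambda label, q: by_label[label](q))
        return identity[guess] == "machine"   # True if the machine was correctly identified

    def simple_interrogator(ask):
        answers = {label: ask(label, "Will you please tell me whether you play chess?")
                   for label in ("X", "Y")}
        # Naive heuristic: guess that the terser answer came from the machine.
        return min(answers, key=lambda label: len(answers[label]))

    trials = 1000
    correct = sum(run_imitation_game(simple_interrogator) for _ in range(trials))
    print(f"right identification in {correct / trials:.0%} of games")
    # With these canned answers the judge always spots the machine; Turing's
    # criterion asks whether a machine can drive that success rate to 70% or below.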