Archive

Posts Tagged ‘Knowledge representation and reasoning’

Scripts

September 4th, 2010

Scripts were developed in early AI work by Roger Schank, Robert P. Abelson, and their research group as a method of representing procedural knowledge. They are very much like frames, except that the values filling the slots must be ordered.

The classic example of a script involves the typical sequence of events that occurs when a person dines in a restaurant: finding a seat, reading the menu, ordering drinks from the wait staff, and so on. In script form, these events are decomposed into conceptual primitives such as MTRANS and PTRANS, which stand for mental transfers of information and physical transfers of location, respectively.
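
As a rough sketch (not Schank and Abelson's own notation), a restaurant script can be modeled as an ordered list of events, each tagged with a conceptual primitive; the Event class and the particular steps below are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Event:
        """One step of a script, tagged with a conceptual primitive
        (e.g. PTRANS = physical transfer, MTRANS = mental transfer)."""
        primitive: str
        actor: str
        description: str

    # A toy restaurant script: unlike a frame, the slots are ordered.
    RESTAURANT_SCRIPT = [
        Event("PTRANS", "customer", "customer moves to a table"),
        Event("MTRANS", "customer", "customer reads the menu"),
        Event("MTRANS", "customer", "customer tells the waiter the order"),
        Event("PTRANS", "waiter",   "waiter brings the drinks"),
    ]

    def expected_next(script, completed):
        """Given the number of steps observed so far, predict the next step."""
        return script[completed] if completed < len(script) else None

    print(expected_next(RESTAURANT_SCRIPT, 2).description)
    # -> customer tells the waiter the order

The ordering is what lets a script-based system fill in unstated events and predict what should happen next in a story.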

Schank, Abelson, and their colleagues tackled some of the most difficult problems in artificial intelligence (e.g., story understanding), but ultimately their line of work ended without tangible success. This type of work received little attention after the 1980s, but it was very influential on later knowledge representation techniques, such as case-based reasoning.

Scripts can be inflexible. To deal with this inflexibility, smaller modules called memory organization packets (MOPs) can be combined in whatever way is appropriate for the situation at hand.
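
A minimal sketch of the MOP idea, with invented packet names: small reusable event sequences are recombined into situation-specific scripts.

    # Hypothetical memory organization packets (MOPs): small reusable
    # event sequences that can be recombined as the situation requires.
    M_ENTER   = ["enter", "find a seat"]
    M_ORDER   = ["read menu", "order food"]
    M_PAY     = ["receive bill", "pay", "leave"]
    M_COUNTER = ["queue at counter", "order food", "pay"]

    def compose(*mops):
        """Combine MOPs, in order, into a single script."""
        return [step for mop in mops for step in mop]

    sit_down_restaurant = compose(M_ENTER, M_ORDER, M_PAY)
    fast_food           = compose(M_ENTER, M_COUNTER, ["collect food", "leave"])

Because the packets are shared, a new situation only needs a new combination rather than a whole new script.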

Different knowledge representation techniques

September 4th, 2010

Representation techniques such as frames, rules, tagging, and semantic networks originated in theories of human information processing. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to represent knowledge in a manner that facilitates inferencing (i.e., drawing conclusions) from that knowledge.

Some issues that arise in knowledge representation from an AI perspective are:

  • How do people represent knowledge?
  • What is the nature of knowledge?
  • Should a representation scheme deal with a particular domain or should it be general purpose?
  • How expressive is a representation scheme or formal language?
  • Should the scheme be declarative or procedural?

There has been very little top-down discussion of knowledge representation (KR) issues, and research in this area has largely been a long-standing patchwork of separate efforts. There are well-known problems such as “spreading activation” (the problem of navigating a network of nodes), “subsumption” (concerned with selective inheritance; e.g., an ATV can be thought of as a specialization of a car, yet it inherits only particular characteristics), and “classification.” For example, a tomato can be classified both as a fruit and as a vegetable.
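
As an informal sketch of spreading activation (the network, decay factor, and threshold below are arbitrary choices, not a standard specification), activation starts at a source concept and propagates to neighbouring nodes with diminishing strength:

    from collections import defaultdict

    # A toy semantic network: each node maps to its neighbours.
    network = {
        "tomato":    ["fruit", "vegetable", "red"],
        "fruit":     ["plant", "sweet"],
        "vegetable": ["plant", "savoury"],
        "red":       [], "plant": [], "sweet": [], "savoury": [],
    }

    def spread_activation(source, decay=0.5, threshold=0.1):
        """Propagate activation outward from `source`, halving it at each hop
        and stopping once it falls below `threshold`."""
        activation = defaultdict(float)
        frontier = [(source, 1.0)]
        while frontier:
            node, energy = frontier.pop()
            if energy < threshold or energy <= activation[node]:
                continue
            activation[node] = energy
            for neighbour in network.get(node, []):
                frontier.append((neighbour, energy * decay))
        return dict(activation)

    print(spread_activation("tomato"))
    # tomato 1.0; fruit, vegetable, red 0.5; plant, sweet, savoury 0.25

The decay is what keeps the navigation problem bounded: distant concepts receive less activation and eventually drop below the threshold.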

In the field of artificial intelligence, problem solving can be simplified by an appropriate choice of knowledge representation. Representing knowledge in some ways makes certain problems easier to solve. For example, it is easier to divide numbers represented in Hindu-Arabic numerals than numbers represented as Roman numerals.
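
To make that point concrete, dividing two Roman numerals in practice means translating them into positional notation first; the simplified parser below (which skips validation) is only an illustration:

    ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(numeral):
        """Convert a Roman numeral to an integer using the subtractive rule
        (a smaller value before a larger one is subtracted, e.g. IX = 9)."""
        total = 0
        for symbol, next_symbol in zip(numeral, numeral[1:] + " "):
            value = ROMAN_VALUES[symbol]
            total += -value if ROMAN_VALUES.get(next_symbol, 0) > value else value
        return total

    # Dividing MCMXC by XV symbol-by-symbol is awkward; in positional
    # notation it is just integer division.
    print(roman_to_int("MCMXC") // roman_to_int("XV"))   # 1990 // 15 = 132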

History of Knowledge Representation

September 4th, 2010

In computer science, particularly artificial intelligence, a number of representations have been devised to structure information.

KR is most commonly used to refer to representations intended for processing by modern computers, and in particular to representations consisting of explicit objects (the class of all elephants, or Clyde, a certain individual) and of assertions or claims about them (‘Clyde is an elephant’, or ‘all elephants are grey’). Representing knowledge in such explicit form enables computers to draw conclusions from knowledge already stored (‘Clyde is grey’).
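
A minimal sketch of this kind of explicit representation, using invented fact and rule structures rather than any particular KR system: a fact base plus one if-then rule lets a simple forward-chaining loop derive ‘Clyde is grey’.

    # Explicit facts: (individual, class-or-property) assertions.
    facts = {("Clyde", "elephant")}

    # One rule: anything that is an elephant is grey.
    rules = [("elephant", "grey")]

    def forward_chain(facts, rules):
        """Repeatedly apply rules to the fact base until nothing new is derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for individual, category in list(derived):
                    if category == premise and (individual, conclusion) not in derived:
                        derived.add((individual, conclusion))
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # {('Clyde', 'elephant'), ('Clyde', 'grey')}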

Many KR methods were tried in the 1970s and early 1980s, such as heuristic question-answering, neural networks, theorem proving, and expert systems, with varying success. Medical diagnosis (e.g., Mycin) was a major application area, as were games such as chess.

In the 1980s, formal computer knowledge representation languages and systems arose. Major projects attempted to encode wide bodies of general knowledge; for example the “Cyc” project (still ongoing) went through a large encyclopedia, encoding not the information itself, but the information a reader would need in order to understand the encyclopedia: naive physics; notions of time, causality, motivation; commonplace objects and classes of objects.

Through such work, the difficulty of KR came to be better appreciated. In computational linguistics, meanwhile, much larger databases of language information were being built, and these, along with great increases in computer speed and capacity, made deeper KR more feasible.

Several programming languages oriented to KR have been developed. Prolog, developed in 1972 but popularized much later, represents propositions and basic logic and can derive conclusions from known premises. KL-ONE (1980s) is aimed more specifically at knowledge representation itself. In 1995, the Dublin Core metadata standard was conceived.

In the electronic document world, languages were being developed to represent the structure of documents, such as SGML (from which HTML descended) and later XML. These facilitated information retrieval and data mining efforts, which have in recent years begun to relate to knowledge representation.

Knowledge Representation & Reasoning

September 4th, 2010

Knowledge representation and reasoning is an area of artificial intelligence whose fundamental goal is to represent knowledge in a manner that facilitates inferencing (i.e., drawing conclusions) from that knowledge. It analyzes how to think formally: how to use a symbol system to represent a domain of discourse (that which can be talked about), together with functions that allow inference (formalized reasoning) about its objects. Some kind of logic is used both to supply a formal semantics of how the reasoning functions apply to the symbols of the domain of discourse, and to supply operators such as quantifiers and modal operators that, along with an interpretation theory, give meaning to the sentences of the logic.
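
As a hedged illustration of these ingredients (the domain and predicate symbols are chosen only for the example), the elephant statements from the previous post can be written as a tiny first-order theory:

    \text{Domain of discourse: } D = \{\textit{clyde}, \ldots\}
    \text{Axioms: } \forall x\,(\mathit{Elephant}(x) \rightarrow \mathit{Grey}(x)), \qquad \mathit{Elephant}(\mathit{clyde})
    \text{Entailed conclusion: } \mathit{Grey}(\mathit{clyde})

The semantics of the quantifier and the conditional, together with an interpretation assigning clyde to an object in D, is what licenses the conclusion.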

When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic in order to derive inferences from them), we have to make choices across a number of design spaces. The single most important decision is the expressivity of the KR. The more expressive it is, the easier and more compact it is to “say something”. However, more expressive languages are harder to derive inferences from automatically. An example of a less expressive KR is propositional logic; an example of a more expressive KR is autoepistemic temporal modal logic. Less expressive KRs may be both complete and consistent (formally, less expressive than set theory); more expressive KRs may be neither complete nor consistent.
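
A rough illustration of the expressivity trade-off, with invented predicate symbols: to state that each of n named individuals with property P also has property Q, propositional logic needs a separate conjunct per individual, whereas the more expressive first-order logic says it in one sentence that also covers individuals not yet named.

    \underbrace{(P_{a_1} \rightarrow Q_{a_1}) \wedge (P_{a_2} \rightarrow Q_{a_2}) \wedge \dots \wedge (P_{a_n} \rightarrow Q_{a_n})}_{\text{propositional logic: one formula per named individual}}
    \qquad \text{vs.} \qquad
    \underbrace{\forall x\,(P(x) \rightarrow Q(x))}_{\text{first-order logic: one sentence}}

The compactness comes at a price: inference in propositional logic is decidable, while richer languages give up that guarantee.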