Archive

Posts Tagged ‘Natural Language’

Statistical NLP

September 4th, 2010 No comments

Statistical natural-language processing uses stochastic, probabilistic, and statistical methods to resolve some of the difficulties discussed in the posts below, especially those that arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. Statistical NLP comprises all quantitative approaches to automated language processing, including probabilistic modeling, information theory, and linear algebra. The technology for statistical NLP comes mainly from machine learning and data mining, both of which are fields of artificial intelligence that involve learning from data.
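
As a concrete illustration of Markov-model disambiguation, the sketch below implements a toy hidden Markov model tagger that picks the most likely tag sequence with the Viterbi algorithm. The tags, words, and probabilities are invented for this example; a real system would estimate them from a tagged corpus.

```python
# Toy hidden Markov model: disambiguate tags with the Viterbi algorithm.
# All probabilities are illustrative, not estimated from real data.

TAGS = ['NOUN', 'VERB']
START = {'NOUN': 0.7, 'VERB': 0.3}                              # P(tag at start)
TRANS = {'NOUN': {'NOUN': 0.3, 'VERB': 0.7},                    # P(next tag | tag)
         'VERB': {'NOUN': 0.8, 'VERB': 0.2}}
EMIT = {'NOUN': {'they': 0.40, 'fish': 0.40, 'rivers': 0.20},   # P(word | tag)
        'VERB': {'they': 0.01, 'fish': 0.60, 'rivers': 0.39}}

def viterbi(words):
    # best[tag] = (probability of the best path ending in tag, that path)
    best = {t: (START[t] * EMIT[t].get(words[0], 1e-6), [t]) for t in TAGS}
    for word in words[1:]:
        new_best = {}
        for t in TAGS:
            prob, path = max(
                ((p * TRANS[prev][t] * EMIT[t].get(word, 1e-6), path + [t])
                 for prev, (p, path) in best.items()),
                key=lambda x: x[0])
            new_best[t] = (prob, path)
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(['they', 'fish']))   # -> ['NOUN', 'VERB']
```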

Ambiguity and Disambiguation in NLP

September 4th, 2010 1 comment

The biggest problem in natural language processing is that most utterances are ambiguous. The following sections describe the main types of ambiguity.

Lexical ambiguity

The lexical ambiguity of a word or phrase consists in its having more than one meaning in the language to which the word belongs. “Meaning” here refers to whatever should be captured by a good dictionary. For instance, the word “bank” has several distinct lexical definitions, including “financial institution” and “edge of a river”. Another example is “apothecary”: if you say, “I bought herbs from the apothecary”, you could mean either that you spoke to the apothecary (the pharmacist) or that you went to the apothecary (the drug store).
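
One way to see how many senses a single word carries is to list them from a machine-readable dictionary. The snippet below, a small illustration not taken from the original post, does this with WordNet through the NLTK library (both assumed to be installed).

```python
# List the dictionary senses of the ambiguous word "bank".
# Assumes: pip install nltk  and  nltk.download('wordnet')
from nltk.corpus import wordnet as wn

for synset in wn.synsets('bank'):
    print(synset.name(), '-', synset.definition())
# The output includes, among other senses, a "financial institution"
# sense and a "sloping land beside a body of water" sense.
```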

Syntactic ambiguity

Syntactic ambiguity is a property of sentences that may reasonably be interpreted in more than one way, or reasonably interpreted to mean more than one thing. The ambiguity may or may not involve one word having two parts of speech, or homonyms.

Syntactic ambiguity arises not from the range of meanings of single words, but from the relationship between the words and clauses of a sentence, and the sentence structure implied thereby. When a reader can reasonably interpret the same sentence as having more than one possible structure, the text is equivocal and meets the definition of syntactic ambiguity.
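
A classic example is “I saw the man with the telescope”, which can mean either that the telescope was used for the seeing or that the man had the telescope. The sketch below enumerates both structures with a toy grammar; the grammar is invented for illustration, and the NLTK library is assumed to be available.

```python
# Enumerate the parse trees of a syntactically ambiguous sentence.
import nltk

grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> 'I' | Det N | Det N PP
VP  -> V NP | VP PP
PP  -> P NP
Det -> 'the'
N   -> 'man' | 'telescope'
V   -> 'saw'
P   -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse(['I', 'saw', 'the', 'man', 'with', 'the', 'telescope']):
    print(tree)   # two trees: the PP attaches either to the verb or to 'man'
```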

Semantic ambiguity

Semantic ambiguity arises when a word or concept has an inherently diffuse meaning based on widespread or informal usage. This is often the case, for example, with idiomatic expressions that are rarely, if ever, well defined and that are presented in the context of a larger argument inviting a conclusion.

For example, “You could do with a new automobile. How about a test drive?” The clause “you could do with” presents a statement with such a wide range of possible interpretations as to be essentially meaningless. Lexical ambiguity is contrasted with semantic ambiguity: the former is a choice among a finite number of known, meaningful, context-dependent interpretations, while the latter is a choice among any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.

Referential ambiguity

If it is unclear what a referring expression refers to, then the expression is referentially ambiguous. Pronouns such as “it”, “he”, and “they” are referring expressions. You might point to a famous basketball player and say, “he is rich”, where “he” refers to the player. However, if it is not clear whom you are pointing to, then we might not know to whom the pronoun refers, and so might not be able to determine whether you are saying something true. Similarly, without further information, a statement such as “Ally hit Georgia and then she started bleeding” is referentially ambiguous, because it is not clear whether it is Ally, Georgia, or some third person who started to bleed.

Referential ambiguity can also arise when a group is referred to with an expression such as “every”. People are fond of making generalizations such as “everyone thinks that democracy is a good thing.” But is it true that absolutely everyone in the world thinks so? Of course not. So whom are we talking about? There is no ambiguity if the context makes it clear which group of people is meant; otherwise, clarification is needed.

Sometimes the context makes it clear which group of people a speaker is referring to. A teacher taking attendance might say, “Everyone is here.” Of course, the teacher is not saying that every human being in the whole world is here. He or she is likely to be talking about the students in the class.
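
The “Ally hit Georgia” example can be made concrete with a deliberately naive heuristic: treat every earlier mention whose gender matches the pronoun as a possible antecedent. When more than one candidate matches, the pronoun is referentially ambiguous. The names, gender labels, and heuristic below are illustrative assumptions, not a real coreference resolver.

```python
# Naive antecedent finder, used only to illustrate referential ambiguity;
# real coreference resolution is far harder than this.
CANDIDATES = {'Ally': 'female', 'Georgia': 'female', 'Bob': 'male'}
PRONOUNS = {'she': 'female', 'he': 'male'}

def possible_antecedents(tokens, pronoun_index):
    """Return every earlier mention whose gender matches the pronoun."""
    gender = PRONOUNS[tokens[pronoun_index].lower()]
    return [t for t in tokens[:pronoun_index] if CANDIDATES.get(t) == gender]

tokens = "Ally hit Georgia and then she started bleeding".split()
print(possible_antecedents(tokens, tokens.index('she')))
# -> ['Ally', 'Georgia'], i.e. the pronoun is ambiguous without more context
```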

Pragmatic ambiguity

All languages depend on words and sentences to construct meaning, and one of the fundamental facts about words and sentences is that many of them have more than one meaning. Ambiguity therefore occurs whenever an utterance can be understood in two or more distinct senses. Kess and Hoppe go so far as to say in Ambiguity in Psycholinguistics, “Upon careful consideration, one cannot but be amazed at the ubiquity of ambiguity in language.” English is no exception. Ambiguity is not a new topic, and a great deal of research has been devoted to it; in the West it can be traced back to the sophisms of ancient Greek philosophy. Previous research, however, has mainly been concerned with phonological, lexical, and grammatical ambiguity. The term “pragmatics” was first put forward in the 1930s by Charles Morris, and the category of pragmatic ambiguity was not explored until the 1970s, so research on pragmatic ambiguity is still far from thorough: its definition, characteristics, categories, functions, and interpretation all need further study.

Sub Problems of NLP

September 4th, 2010 1 comment
  • Speech segmentation

In most spoken languages, the sounds representing successive letters blend into each other, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, in natural speech there are hardly any pauses between successive words; the location of those boundaries usually must take into account grammatical and semantic constraints, as well as the context.

  • Text segmentation

Some written languages, such as Chinese, Japanese, and Thai, do not mark word boundaries, so any significant text parsing usually requires identifying word boundaries first, which is often a non-trivial task (a simple maximum-matching sketch follows this list).

  • Part-of-speech tagging
  • Word sense disambiguation

Many words have more than one meaning; we have to select the meaning which makes the most sense in context.

  • Syntactic ambiguity

The grammar for natural languages is ambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information. Specific problem components of syntactic ambiguity include sentence boundary disambiguation.

  • Imperfect or irregular input

Foreign or regional accents and vocal impediments in speech; typing or grammatical errors; OCR errors in texts.

  • Speech acts and plans

A sentence can often be considered an action by the speaker. The sentence structure alone may not contain enough information to define this action. For instance, a question is sometimes the speaker requesting some sort of response from the listener. The desired response may be verbal, physical, or some combination. For example, “Can you pass the class?” is a request for a simple yes-or-no answer, while “Can you pass the salt?” is requesting a physical action to be performed. It is not appropriate to respond with “Yes, I can pass the salt,” without the accompanying action (although “No” or “I can’t reach the salt” would explain a lack of action).
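
For the text-segmentation problem mentioned above, a classic baseline is greedy “maximum matching” against a word list: at each position, take the longest dictionary word that fits. The sketch below uses a tiny invented dictionary; its output on 北京大学生 (“Beijing college student”) also shows why this is only a baseline, since the greedy reading 北京大学 + 生 need not be the intended reading 北京 + 大学生.

```python
# Greedy maximum-matching word segmentation, a simple baseline for
# unsegmented scripts. The dictionary below is invented for illustration.
DICTIONARY = {'北京', '大学', '大学生', '北京大学', '生'}

def max_match(text, dictionary, max_word_len=4):
    """At each position, take the longest dictionary word that fits;
    fall back to a single character so the loop always advances."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_word_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(max_match('北京大学生', DICTIONARY))   # -> ['北京大学', '生']
```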

Natural Language Processing

September 4th, 2010 No comments

Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. In theory, natural language processing is a very attractive method of human-computer interaction. Natural-language understanding is sometimes referred to as an AI-complete problem, because it seems to require extensive knowledge about the outside world and the ability to manipulate that knowledge.

NLP has significant overlap with the field of computational linguistics, and is often considered a sub-field of artificial intelligence.

Semantic Nets

September 4th, 2010 No comments
An example of a semantic network (image via Wikipedia)

A semantic network or net is a graphic notation for representing knowledge in patterns of interconnected nodes and arcs. Computer implementations of semantic networks were first developed for artificial intelligence and machine translation, but earlier versions have long been used in philosophy, psychology, and linguistics.

What is common to all semantic networks is a declarative graphic representation that can be used either to represent knowledge or to support automated systems for reasoning about knowledge. Some versions are highly informal, while others are formally defined systems of logic. The following are six of the most common kinds of semantic networks.

  • A definitional network emphasizes the subtype, or is-a, relation between a concept type and a newly defined subtype. The resulting network, also called a generalization or subsumption hierarchy, supports the rule of inheritance for copying properties defined for a supertype to all of its subtypes (a minimal sketch of such a hierarchy follows this list). Since definitions are true by definition, the information in these networks is often assumed to be necessarily true.
  • Assertional networks are designed to assert propositions. Unlike definitional networks, the information in an assertional network is assumed contingently true, unless it is explicitly marked with a modal operator. Some assertional networks have been proposed as models of the conceptual structures underlying natural language semantics.
  • Implicational networks use implication as the primary relation for connecting nodes. They may be used to represent patterns of beliefs, causality, or inferences.
  • Executable networks include some mechanism, such as marker passing or attached procedures, which can perform inferences, pass messages, or search for patterns and associations.
  • Learning networks build or extend their representations by acquiring knowledge from examples. The new knowledge may change the old network by adding and deleting nodes and arcs or by modifying numerical values, called weights, associated with the nodes and arcs.
  • Hybrid networks combine two or more of the previous techniques, either in a single network or in separate, but closely interacting networks.
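
The inheritance rule mentioned for definitional networks amounts to a property lookup that walks up the is-a links until a value is found. The minimal sketch below illustrates this; the node names and properties are invented, and real semantic-network systems are considerably richer.

```python
# A minimal definitional network: nodes linked by is-a, with properties
# inherited from supertypes. Names and properties are illustrative only.
class Node:
    def __init__(self, name, parent=None, **properties):
        self.name = name
        self.parent = parent          # the is-a (supertype) link
        self.properties = properties  # locally asserted properties

    def lookup(self, key):
        """Walk up the is-a chain until the property is found."""
        node = self
        while node is not None:
            if key in node.properties:
                return node.properties[key]
            node = node.parent
        return None

animal = Node('animal', can_move=True)
bird = Node('bird', parent=animal, can_fly=True)
robin = Node('robin', parent=bird, colour='red-breasted')

print(robin.lookup('can_fly'))    # True, inherited from 'bird'
print(robin.lookup('can_move'))   # True, inherited from 'animal'
```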

Some of the networks have been explicitly designed to implement hypotheses about human cognitive mechanisms, while others have been designed primarily for computer efficiency. Sometimes, computational reasons may lead to the same conclusions as psychological evidence. The distinction between definitional and assertional networks, for example, has a close parallel to Tulving’s (1972) distinction between semantic memory and episodic memory.

The Chinese Room argument

September 4th, 2010 No comments

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese.

The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle’s argument is a direct challenge to proponents of Artificial Intelligence, and the argument has broad implications for functionalist and computational theories of meaning and of mind. As a result, there have been many critical replies to the argument.

The Turing Test

September 3rd, 2010 No comments

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion.

However, if we consider the more precise, and somewhat related, question of whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then at least in Turing’s eyes we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

Turing’s Imitation Game

Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’, but does not know which is which; at the end of the game he says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine (a toy sketch of this protocol appears at the end of this post). About this game, Turing (1950) says:

I believe that in about fifty years’ time it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

There are two kinds of questions that can be raised about Turing’s Imitation Game.

  • First, there are empirical questions, e.g., is it true that we now have, or will soon have, computers that can play the imitation game so well that an average interrogator has no more than a 70 percent chance of making the right identification after five minutes of questioning?
  • Second, there are conceptual questions, e.g., is it true that, if an average interrogator had no more than a 70 percent chance of making the right identification after five minutes of questioning, we should conclude that the machine exhibits some level of thought, or intelligence, or mentality?
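
The structure of the game (hidden labels, a round of questioning, and a final identification) can be sketched as a toy simulation. Everything below, the canned contestants and the interrogator who guesses at random, is invented purely to show the protocol; a guessing interrogator is right about 50 percent of the time, comfortably below the 70 percent figure Turing mentions.

```python
# A toy simulation of the Imitation Game protocol. The contestants and the
# random-guessing interrogator are invented stand-ins, not a serious test.
import random

def machine_contestant(question):
    return "I would rather not say."

def human_contestant(question):
    return "Yes, I play chess now and then."

def imitation_game(rounds=5):
    # Hide the contestants behind the labels X and Y at random.
    contestants = {'X': machine_contestant, 'Y': human_contestant}
    if random.random() < 0.5:
        contestants = {'X': human_contestant, 'Y': machine_contestant}

    for _ in range(rounds):
        question = "Will X please tell me whether X plays chess?"
        answers = {label: c(question) for label, c in contestants.items()}
        # A real interrogator would weigh these answers; this one ignores them.

    guess = random.choice(['X', 'Y'])      # "<label> is the machine"
    truth = 'X' if contestants['X'] is machine_contestant else 'Y'
    return guess == truth                  # a right identification?

trials = 1000
print(sum(imitation_game() for _ in range(trials)) / trials)   # about 0.5
```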