Archive

Author Archive

Computer Vision Systems

September 4th, 2010 1 comment

The organization of a computer vision system is highly application dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. There are, however, typical functions found in many computer vision systems; a minimal sketch of such a pipeline follows the list below.

Image acquisition

Pre-processing

Feature extraction

Detection/segmentation

High-level processing
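
As a rough illustration, here is a minimal sketch of how these stages might chain together, using Python with OpenCV. The specific operations (Gaussian blur, Canny edges, contour grouping) and the input file name are illustrative choices, not a prescribed design.

```python
import cv2

# 1. Image acquisition: read a frame from disk ("frame.png" is a
#    placeholder; a live system would read from a camera or sensor).
image = cv2.imread("frame.png")

# 2. Pre-processing: convert to grayscale and suppress sensor noise.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smooth = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Feature extraction: edges as a simple low-level feature.
edges = cv2.Canny(smooth, 50, 150)

# 4. Detection/segmentation: group edge pixels into candidate regions.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# 5. High-level processing: an application-specific decision rule,
#    here a trivial size filter standing in for real interpretation.
candidates = [c for c in contours if cv2.contourArea(c) > 100.0]
print(f"{len(candidates)} candidate regions detected")
```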

Typical Tasks of Computer Vision

September 4th, 2010 No comments

Each of the application areas described above employs a range of computer vision tasks: more or less well-defined measurement or processing problems that can be solved using a variety of methods.

Recognition

The classical problem in computer vision, image processing, and machine vision is that of determining whether the image data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations. The existing methods for dealing with this problem can at best solve it only for specific objects, such as simple geometric objects (e.g., polyhedra), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera.

Different varieties of the recognition problem are described in the literature:

Object recognition: one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene.

Identification: an individual instance of an object is recognized. Examples: identification of a specific person’s face or fingerprint, or identification of a specific vehicle.

Detection: the image data is scanned for a specific condition. Examples: detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data, which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.
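
In a sufficiently constrained setting, detection can be as simple as template matching. Here is a minimal sketch with OpenCV; the image files and the 0.8 threshold are invented for illustration, not a recommended configuration.

```python
import cv2

# Hypothetical inputs: a scene image and a template of the target object.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

# A fixed threshold turns the best match into a detection decision.
if best_score > 0.8:
    print(f"object detected at {best_loc} (score {best_score:.2f})")
else:
    print("no detection")
```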

Several specialized tasks based on recognition exist, such as:

Content-based image retrieval: finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, were taken during winter, and have no cars in them). A minimal retrieval sketch follows this list.

Pose estimation: estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation.

Optical character recognition (OCR): identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII).
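
As promised above, here is a sketch of similarity-based retrieval that compares color histograms with OpenCV. The file names and the ranking criterion are illustrative assumptions, not a canonical CBIR method.

```python
import cv2

def color_histogram(path):
    """Normalized 3D color histogram of an image (8 bins per channel)."""
    image = cv2.imread(path)
    hist = cv2.calcHist([image], [0, 1, 2], None,
                        [8, 8, 8], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist)

# Hypothetical target image and candidate collection.
target = color_histogram("target.png")
collection = ["a.png", "b.png", "c.png"]

# Rank the collection by histogram correlation with the target.
ranked = sorted(collection,
                key=lambda p: cv2.compareHist(
                    color_histogram(p), target, cv2.HISTCMP_CORREL),
                reverse=True)
print("most similar first:", ranked)
```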

Motion analysis

Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are:

Egomotion: determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera.

Tracking: following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles or humans) in the image sequence.

Optical flow: to determine, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and of how the camera is moving relative to the scene.
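
Here is a minimal sketch of dense optical flow estimation between two consecutive frames, using the Farnebäck method bundled with OpenCV; the frame files are placeholders.

```python
import cv2

# Two consecutive frames from a hypothetical image sequence.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) displacement vector per pixel,
# i.e. the apparent motion of each image point between the frames.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print("flow field shape:", flow.shape)  # (height, width, 2)
```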

Scene reconstruction

Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model.
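
In the simplest two-view case, 3D points can be recovered by triangulating corresponding image points given the camera projection matrices. A minimal sketch with OpenCV and NumPy follows; the matrices and point correspondences are assumed known here (in practice they come from calibration and feature matching).

```python
import cv2
import numpy as np

# Assumed known: 3x4 projection matrices for two calibrated cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted along x

# Assumed known: matching 2D points, one column per point.
pts1 = np.array([[0.10, 0.25],    # x coordinates in image 1
                 [0.20, 0.40]])   # y coordinates in image 1
pts2 = np.array([[0.05, 0.15],    # x coordinates in image 2
                 [0.20, 0.40]])   # y coordinates in image 2

# Triangulate to homogeneous 4xN coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T          # N x 3 array of reconstructed 3D points
print(X)
```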

Image restoration

The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest approaches to noise removal apply various filters, such as low-pass filters or median filters. More sophisticated methods assume a model of what the local image structures look like, a model which distinguishes them from the noise. By first analyzing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.
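
A minimal sketch of the two simple filter types mentioned above, using OpenCV; “noisy.png” is a placeholder input.

```python
import cv2

noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)

# Low-pass (Gaussian) filter: averages away high-frequency noise,
# at the cost of also blurring edges and fine structure.
lowpass = cv2.GaussianBlur(noisy, (5, 5), 0)

# Median filter: replaces each pixel with the median of its 5x5
# neighborhood; effective against salt-and-pepper noise and better
# at preserving edges than plain averaging.
median = cv2.medianBlur(noisy, 5)

cv2.imwrite("lowpass.png", lowpass)
cv2.imwrite("median.png", median)
```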

Computer Vision & Related Fields

September 4th, 2010 2 comments

The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques used and developed in these fields are largely identical, which can be interpreted as meaning there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields, and hence various characterizations that distinguish each field from the others have been presented.

The following characterizations appear relevant but should not be taken as universally accepted:

Image processing and image analysis tend to focus on 2D images: how to transform one image into another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions about nor produces interpretations of the image content.

Computer vision tends to focus on the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image.

Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based autonomous robots and systems for vision-based inspection or measurement. This implies that image sensor technologies and control theory are often integrated with the processing of image data to control a robot, and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be, and often are, more controlled in machine vision than in general computer vision, which can enable the use of different algorithms.

There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications.

Finally, pattern recognition is a field that uses various methods, mainly based on statistical approaches, to extract information from signals in general. A significant part of this field is devoted to applying these methods to image data.

Computer Vision

September 4th, 2010 2 comments

Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.

As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Computer vision is, in some ways, the inverse of computer graphics. While computer graphics produces image data from 3D models, computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines.

An intelligent computer system can go a long way in reducing human labor. However, if such a system can be provided with a method of actually interacting with the physical world, its usefulness is greatly increased. Robotics gives AI the means to exhibit real-world intelligence by directly manipulating its environment. That is, robotics gives the artificial mind a body.

An essential component of robotics has to do with artificial sensory systems in general and artificial vision in particular. While it is true that robotics systems exist (including many successful industrial robots) that have no sensory equipment, or very limited sensors, they tend to be very brittle systems. They need to have their work area perfectly lit, with no shadows or mess. They must have the parts needed in precisely the right position and orientation, and if they are moved to a new location, they may require hours of recalibration. If a system could be developed that could make sense of a visual scene, it would greatly enhance the potential for robotics applications. It is therefore not surprising that the study of artificial vision and robotics go hand-in-hand.

Statistical NLP

September 4th, 2010 No comments

Statistical natural-language processing uses stochastic, probabilistic and statistical methods to resolve some of the difficulties discussed above, especially those which arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. Statistical NLP comprises all quantitative approaches to automated language processing, including probabilistic modeling, information theory, and linear algebra. The technology for statistical NLP comes mainly from machine learning and data mining, both of which are fields of artificial intelligence that involve learning from data.
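
As a toy illustration of the Markov-model idea, the following estimates bigram probabilities from a tiny invented corpus and uses them to score two competing word sequences; everything here is simplified for exposition.

```python
from collections import Counter

# Toy training corpus (invented for illustration).
corpus = "the dog saw the cat . the cat saw the dog . the dog ran .".split()

# Estimate bigram probabilities P(w2 | w1) from counts.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def prob(w1, w2):
    # Add-one smoothing so unseen bigrams get a small nonzero probability.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(unigrams))

def score(words):
    # Probability of a word sequence under the first-order Markov model.
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= prob(w1, w2)
    return p

# Two competing analyses of an utterance; the model prefers the
# sequence whose transitions are better attested in the corpus.
a = "the dog saw the cat".split()
b = "the dog saw the ran".split()
print(score(a) > score(b))  # True: "the ran" is an unlikely transition
```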

Ambiguity and Disambiguation in NLP

September 4th, 2010 1 comment

The biggest problem in natural language processing is that most utterances are ambiguous. The following sections describe the different types of ambiguity.

Lexical ambiguity

The lexical ambiguity of a word or phrase consists in its having more than one meaning in the language to which the word belongs. “Meaning” here refers to whatever should be captured by a good dictionary. For instance, the word “bank” has several distinct lexical definitions, including “financial institution” and “edge of a river”. Another example is the word “apothecary”. You could say, “I bought herbs from the apothecary.” This could mean you actually spoke to the apothecary (pharmacist) or went to the apothecary (drug store).
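
The distinct dictionary senses of a word can be listed programmatically. A minimal sketch using NLTK’s WordNet interface follows (it assumes the wordnet corpus has been downloaded).

```python
from nltk.corpus import wordnet as wn
# One-time setup: import nltk; nltk.download("wordnet")

# Each synset is one lexical sense of "bank".
for synset in wn.synsets("bank"):
    print(synset.name(), "-", synset.definition())
# The output includes both the "financial institution" and the
# "sloping land beside a body of water" senses, among others.
```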

Syntactic ambiguity

Syntactic ambiguity is a property of sentences that may reasonably be interpreted in more than one way, or reasonably interpreted to mean more than one thing. The ambiguity may or may not involve one word having two parts of speech, or homonyms.

Syntactic ambiguity arises not from the range of meanings of single words, but from the relationship between the words and clauses of a sentence, and the sentence structure implied thereby. When a reader can reasonably interpret the same sentence as having more than one possible structure, the text is equivocal and meets the definition of syntactic ambiguity.
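
The classic prepositional-phrase attachment case can be reproduced with a small grammar. A minimal sketch using NLTK’s chart parser on “I saw the man with the telescope” follows; the toy grammar is invented for the example.

```python
import nltk

# A deliberately ambiguous toy grammar: the PP can attach to the
# verb phrase (seeing with a telescope) or to the noun phrase
# (the man who has a telescope).
grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    VP -> V NP | VP PP
    NP -> 'I' | Det N | NP PP
    PP -> P NP
    Det -> 'the'
    N  -> 'man' | 'telescope'
    V  -> 'saw'
    P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with the telescope".split()
for tree in parser.parse(sentence):
    print(tree)   # two distinct parse trees are printed
```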

Semantic ambiguity

Semantic ambiguity arises when a word or concept has an inherently diffuse meaning based on widespread or informal usage. This is often the case, for example, with idiomatic expressions whose definitions are rarely or never well defined, and are presented in the context of a larger argument that invites a conclusion.

For example, “You could do with a new automobile. How about a test drive?” The clause “You could do with” presents a statement with such wide possible interpretation as to be essentially meaningless. Lexical ambiguity is contrasted with semantic ambiguity: the former represents a choice between a finite number of known and meaningful context-dependent interpretations, while the latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.

Referential ambiguity

If it is unclear what a referring expression is referring to, then the expression is referentially ambiguous. For example, a pronoun is a referring expression such as “it”, “he”, or “they”. You might point to a famous basketball player and say, “he is rich”, and here “he” refers to the player. However, if it is not clear whom you are pointing to, then we might not know to whom the pronoun refers, and so might not be able to determine whether you are saying something true. Similarly, without further information, a statement such as “Ally hit Georgia and then she started bleeding” is also referentially ambiguous, because it is not clear whether it is Ally, Georgia, or some third person who started to bleed.

Referential ambiguity can also arise if you are talking about a group using an expression such as “every”. People are often fond of making generalizations, such as “everyone thinks that democracy is a good thing.” But is it true that absolutely everyone in the world thinks so? Of course not. So who are we talking about here? There is no ambiguity if the context makes it clear which group of people we are talking about; otherwise, there is a need to clarify.

Sometimes the context makes it clear which group of people a speaker is referring to. A teacher taking attendance might say, “Everyone is here.” Of course, the teacher is not saying that every human being in the whole world is here. He or she is likely to be talking about the students in the class.

Pragmatic ambiguity

All languages depend on words and sentences to construct meaning, but a fundamental fact about words and sentences is that many of them have more than one meaning. Ambiguity may therefore occur whenever an utterance can be understood in two or more distinct senses. Kess and Hoppe even say in Ambiguity in Psycholinguistics, “Upon careful consideration, one cannot but be amazed at the ubiquity of ambiguity in language.” English, as a language, is no exception. Since ambiguity is not a new topic, much research has been done in this field; in the West, it can be traced back to the sophism of ancient Greek philosophy. However, previous research was mainly concerned with phonological, lexical and grammatical ambiguity. The word “pragmatics” was first put forward in the 1930s by Charles Morris, and the category of pragmatic ambiguity was not explored until the 1970s, so research on pragmatic ambiguity is still insufficiently thorough; for example, its definition, characteristics, category, functions and understanding still need further study.

Sub Problems of NLP

September 4th, 2010 1 comment
  • Speech segmentation

In most spoken languages, the sounds representing successive letters blend into each other, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, in natural speech there are hardly any pauses between successive words; the location of those boundaries usually must take into account grammatical and semantic constraints, as well as the context.

  • Text segmentation

Some written languages, such as Chinese, Japanese and Thai, do not mark word boundaries either, so any significant text parsing usually requires the identification of word boundaries, which is often a non-trivial task.
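
A common baseline for such segmentation is greedy longest-match (“maximum matching”) against a dictionary. A minimal sketch with an invented toy lexicon follows; real systems use large dictionaries and statistical models.

```python
def max_match(text, lexicon):
    """Greedy longest-match segmentation: at each position, take the
    longest dictionary word that matches, else a single character."""
    words = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):       # try the longest span first
            if text[i:j] in lexicon or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# Invented toy lexicon and input, with spaces removed to mimic an
# unsegmented writing system.
lexicon = {"machine", "learning", "is", "fun"}
print(max_match("machinelearningisfun", lexicon))
# ['machine', 'learning', 'is', 'fun']
```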

  • Part-of-speech tagging

Given a sentence, determine the part of speech for each word. Many words can serve as multiple parts of speech; for example, “book” can be a noun (“the book on the table”) or a verb (“to book a flight”).

  • Word sense disambiguation

Many words have more than one meaning; we have to select the meaning which makes the most sense in context.
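
One classic knowledge-based approach is the Lesk algorithm, which picks the sense whose dictionary gloss overlaps most with the surrounding words. NLTK ships an implementation; this sketch assumes the wordnet corpus has been downloaded.

```python
from nltk.wsd import lesk

# The surrounding words push the choice toward different senses of "bank".
sent1 = "I went to the bank to deposit my money".split()
sent2 = "We sat on the bank of the river fishing".split()

# Prints the gloss of whichever sense Lesk selects for each context;
# the heuristic is simple and can err on hard cases.
print(lesk(sent1, "bank").definition())
print(lesk(sent2, "bank").definition())
```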

  • Syntactic ambiguity

The grammars of natural languages are ambiguous, i.e., there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information. Specific problem components of syntactic ambiguity include sentence boundary disambiguation.

  • Imperfect or irregular input

Foreign or regional accents and vocal impediments in speech; typing or grammatical errors; OCR errors in texts.

  • Speech acts and plans

A sentence can often be considered an action by the speaker. The sentence structure alone may not contain enough information to define this action. For instance, a question is sometimes the speaker requesting some sort of response from the listener. The desired response may be verbal, physical, or some combination. For example, “Can you pass the class?” is a request for a simple yes-or-no answer, while “Can you pass the salt?” is requesting a physical action to be performed. It is not appropriate to respond with “Yes, I can pass the salt,” without the accompanying action (although “No” or “I can’t reach the salt” would explain a lack of action).

Natural Language Processing

September 4th, 2010 No comments

Natural language processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. In theory, natural language processing is a very attractive method of human-computer interaction. Natural-language understanding is sometimes referred to as an AI-complete problem, because natural-language recognition seems to require extensive knowledge about the outside world and the ability to manipulate it.

NLP has significant overlap with the field of computational linguistics, and is often considered a sub-field of artificial intelligence.

Scripts

September 4th, 2010 No comments

Scripts were developed in the early AI work by Roger Schank, Robert P. Abelson and their research group, and are a method of representing procedural knowledge. They are very much like frames, except the values that fill the slots must be ordered.

The classic example of a script involves the typical sequence of events that occur when a person dines in a restaurant: finding a seat, reading the menu, ordering drinks from the wait staff… In the script form, these would be decomposed into conceptual transitions, such as MTRANS and PTRANS, which refer to mental transitions and physical transitions.
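
As a minimal sketch, a script can be encoded as an ordered sequence of expected events with roles and props. The field names below are invented for illustration; the MTRANS/PTRANS labels follow Schank’s conceptual-dependency vocabulary.

```python
# A minimal, invented encoding of the restaurant script. The ordering
# of events is what distinguishes a script from a plain frame.
restaurant_script = {
    "roles": ["customer", "waiter", "cashier"],
    "props": ["table", "menu", "food", "bill"],
    "events": [                      # order matters
        ("PTRANS", "customer moves to a table"),
        ("MTRANS", "customer reads the menu"),
        ("MTRANS", "customer orders from the waiter"),
        ("PTRANS", "waiter brings the food"),
        ("PTRANS", "customer pays and leaves"),
    ],
}

def expect_next(script, last_event_index):
    """Predict the next event: the inferential use of a script."""
    events = script["events"]
    if last_event_index + 1 < len(events):
        return events[last_event_index + 1]
    return None

# After observing the customer ordering (index 2), the script
# predicts the food being brought.
print(expect_next(restaurant_script, 2))
```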

Schank, Abelson and their colleagues tackled some of the most difficult problems in artificial intelligence (i.e., story understanding), but ultimately their line of work ended without tangible success. This type of work received little attention after the 1980s, but it was very influential on later knowledge representation techniques, such as case-based reasoning.

Scripts can be inflexible. To deal with this inflexibility, smaller modules called memory organization packets (MOPs) can be combined in a way that is appropriate for the situation.

Frames

September 4th, 2010 No comments

A frame is a data structure for representing a stereotyped situation. The frame contains information on how to use the frame, what to expect next, and what to do when these expectations are not met. Some information in the frame is generally unchanged, while other information, stored in “terminals,” usually changes. Different frames may share the same terminals.

A frame’s terminals are already filled with default values, which are based on how the human mind works. For example, when a person is told “a boy kicks a ball,” most people will be able to visualize a particular ball (such as a familiar soccer ball) rather than imagining some abstract ball with no attributes.
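
Here is a minimal sketch of the slots-with-defaults idea: a frame whose terminals start with default values that later observations can overwrite. The class and slot names are invented for illustration.

```python
# Invented illustration of a frame: named terminals (slots) that are
# pre-filled with defaults and overwritten when specifics are observed.
class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.terminals = dict(defaults)   # start from default values

    def fill(self, terminal, value):
        # An observation replaces the default in that terminal.
        self.terminals[terminal] = value

# "A boy kicks a ball": the ball frame starts with stereotyped defaults.
ball = Frame("ball", {"shape": "round", "size": "soccer-ball-sized",
                      "color": "black-and-white"})
print(ball.terminals["color"])    # default assumption

# New information ("a red beach ball") overrides the defaults.
ball.fill("color", "red")
ball.fill("size", "beach-ball-sized")
print(ball.terminals)
```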
