A Look at Statistical Pattern Recognition Through the Visions of Vision

“If Nearest Neighbor tires you, what would SVM do?” This sentence paraphrases Jeremiah 12:5; or it alludes to a 1971 Christexploitation movie by Ron Ormond. Either way, if you’re stuck at NN, how are you going to deal with Support Vector Machines?!

Never fret. I’m here to paraphrase and quote from E. R. Davies’ Computer Vision: Principles, Algorithms, Applications, Learning. I’ve sucked up to E. R. Davies before; I think he’s a wonderful person, and his contributions to the field of computer and machine vision and pattern recognition shan’t be let adrift from our minds. His books are perfect because he’s too… industrial. He’s not about theory, he’s all about application!

This post looks at classification through vision. So you’ve been warned.

When the objects that appear in an image have simple shapes, just one stage of processing may be required – as in the case of circular washers on a conveyor belt. For more complex objects such as flat brackets and hinges, locating them requires at least two stages – as when graph matching methods are used. For situations where the full complexity of three dimensions occurs, more subtle procedures are usually required. Indeed, the very ambiguity involved in interpreting 2-D images of a set of 3-D objects generally requires cues to be sought and hypotheses to be proposed before any serious attempt can be made at the task. Thus, cues are vital to keying into the complex data structures of many images. However, for simple situations, concentration on small features is valuable in permitting image interpretation to be carried out efficiently and rapidly. Neither must it be forgotten that in many applications of computer vision, the task is made simpler by the fact that the main interest lies in specific types of object.

In practical situations, measurements of prominent features allow most objects to be classified straightforwardly. In fact, this is most commonly achieved with varying degrees of certainty, but by comparing object features with those of a great many other known objects, we arrive at classifications that are statistically the most likely ones. Hence, this sort of classification procedure is called statistical pattern recognition, or SPR.

A good number of early pattern recognition techniques and algorithms followed the SPR approach without proceeding to the next stage – that of determining the solution that is mathematically the most probable one. Thus, an important goal has been to move the subject from SPR to PPR → Probabilistic Pattern Recognition. Indeed, the key aim of ML is to progress towards probabilistic pattern recognition. In this blog post, we’ll study SPR. We’ve already discussed one aspect of PPR in the last post – the EM algorithm – and we’ll unravel more aspects of PPR later.

Let’s talk about NN: the Nearest Neighbor algorithm. The principle of the NN algorithm is that of comparing input image patterns against a number of paradigms and then classifying them according to the class of the paradigm that gives the closest match. Below you see a principal-component-analyzed dataset wherein the data in blue are closer in paradigm to the green class than to the red one.

 

An instructive but rather trivial example is shown below:

Here, a number of binary patterns are presented to the computer in the training phase of the algorithm; then the test patterns are presented one at a time and compared bit by bit against each of the training patterns. It is clear that this gives a generally reasonable result; the main problems arise when 1) training patterns of different classes are close together in Hamming distance – the Hamming distance being the number of bit positions in which two binary patterns differ – and 2) minor translations, rotations, or noise cause variations that inhibit accurate recognition. More generally, problem 2 means that the training patterns are insufficiently representative of what will appear during the test phase. The latter statement encapsulates an exceptionally important principle, and it implies that there must be sufficient patterns in the training set for the algorithm to be able to generalize over all possible patterns of each class. However, problem 1 implies that patterns of two different classes may in some cases be so similar as to be indistinguishable by any algorithm; and then it is inevitable that erroneous classifications will be made. It is seen below that this is because the underlying distributions in feature space overlap.
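
To make this concrete, here’s a minimal sketch of NN classification of binary patterns by Hamming distance, in Python with NumPy. The toy 3×3 “images” and class names are made up purely for illustration:

    import numpy as np

    # Nearest-neighbor classification of binary patterns by Hamming distance,
    # i.e. the number of bit positions in which two patterns differ.

    def hamming_distance(a, b):
        """Count the bit positions where two binary patterns differ."""
        return int(np.sum(a != b))

    def nn_classify(test_pattern, training_patterns, training_labels):
        """Assign the class of the training pattern at the smallest Hamming distance."""
        distances = [hamming_distance(test_pattern, p) for p in training_patterns]
        return training_labels[int(np.argmin(distances))]

    # Toy 3x3 binary "images": a vertical bar and a horizontal bar.
    training_patterns = [
        np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),  # class "vertical"
        np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),  # class "horizontal"
    ]
    training_labels = ["vertical", "horizontal"]

    # A slightly noisy vertical bar: one corrupted bit.
    test = np.array([[0, 1, 0], [0, 1, 1], [0, 1, 0]])
    print(nn_classify(test, training_patterns, training_labels))  # -> "vertical"

Notice how one flipped bit doesn’t change the decision, but translate or rotate the bar and the Hamming distances quickly become unreliable – exactly problem 2 above.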

The basis of Bayes’ decision theory will now be examined. If we are trying to get a computer to classify objects, a sound approach is to get it to measure some prominent feature of each object, such as its length, and to use this feature as an aid to classification. Sometimes, such a feature may give very little indication of the pattern class – perhaps because of the effects of manufacturing variation. For example, a hand-written character may be so ill formed that its features are of little help in interpreting it; it then becomes much more reliable to make use of the known relative frequencies of letters, or to invoke context. In fact, either of these strategies can give a greatly increased probability of correct interpretation. In other words, when feature measurements are found to give an error rate above a certain threshold, it is more reliable to employ the a priori probability of a given pattern appearing.

The next step in improving recognition performance is to combine the information from feature measurements and from a priori probabilities; this is achieved by applying Bayes’ rule. For a single feature x, this takes the form:

    \[ P(C_i|x) = \frac{p(x|C_i)P(C_i)}{p(x)} \]

Where:

    \[ p(x) = \sum_{j}p(x|C_j)P(C_j) \]

The vertical line | denotes conditional probability.

Mathematically, the variables here are 1) the a priori probability of class Ci, P(Ci); 2) the probability density for feature x, p(x); 3) the class-conditional probability density for feature x in class Ci, p(x|Ci), and 4) the a posteriori probability of class Ci when x is observed, P(Ci|x).
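
To see Bayes’ rule in action, here’s a minimal sketch for a single feature x. It assumes Gaussian class-conditional densities; the classes (“washer” and “bracket”) and all the numbers are invented for illustration:

    import numpy as np

    # Bayes' rule for a single feature x: P(Ci|x) = p(x|Ci) P(Ci) / p(x),
    # with p(x) = sum_j p(x|Cj) P(Cj). The class-conditional densities p(x|Ci)
    # are modeled as Gaussians with assumed means and standard deviations.

    def gaussian_pdf(x, mean, std):
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

    # a priori probabilities P(Ci) and per-class (mean, std) of the feature x (say, length)
    priors = {"washer": 0.7, "bracket": 0.3}
    params = {"washer": (2.0, 0.3), "bracket": (3.5, 0.5)}

    def posterior(x):
        """Return the a posteriori probability P(Ci|x) for every class."""
        numerators = {c: gaussian_pdf(x, *params[c]) * priors[c] for c in priors}
        p_x = sum(numerators.values())  # p(x) = sum_j p(x|Cj) P(Cj)
        return {c: v / p_x for c, v in numerators.items()}

    print(posterior(2.2))  # overwhelmingly "washer"
    print(posterior(3.6))  # overwhelmingly "bracket"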

But what is a priori? What is a posteriori? Let this image I’ve created explain it all!

 

The notation P(Ci|x) is a standard one, being defined as the probability that the class is Ci when the feature is known to have the value x. Bayes’ rule says that to find the class of an object, we need to know two sets of information about the objects that might be viewed: the first is the basic probability P(Ci) that a particular class might arise; the second is the distribution of values of the feature x for each class.

Many common image analysis techniques give features that may be used to help identify or classify objects, such as the area of the object, its perimeter, the number of holes it possesses, and so on. Generally, increasing the number of features helps to resolve the classification problem more quickly. In the figure below, the first image has a much lower error rate than the second image. Keep in mind the cardinal rule of machine learning: you can’t have an error rate of 0!
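
As a rough illustration of such features, here’s a small sketch using NumPy and SciPy’s ndimage.label that measures the area, an approximate perimeter, and the number of holes of a binary object. The exact definitions used here are my own simplifications, not Davies’:

    import numpy as np
    from scipy import ndimage

    def shape_features(binary_image):
        """Measure area, an approximate perimeter, and the hole count of a binary object."""
        obj = binary_image.astype(bool)
        area = int(obj.sum())

        # Pad with background so border handling and hole counting stay simple.
        padded = np.pad(obj, 1, constant_values=False)

        # Perimeter approximation: object pixels with at least one background 4-neighbour.
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = int((obj & ~interior).sum())

        # Holes: connected background regions other than the single outer background.
        _, n_background = ndimage.label(~padded)
        holes = n_background - 1

        return {"area": area, "perimeter": perimeter, "holes": holes}

    # A 5x5 "washer": a filled square with a hole in the middle.
    washer = np.array([[0, 0, 0, 0, 0],
                       [0, 1, 1, 1, 0],
                       [0, 1, 0, 1, 0],
                       [0, 1, 1, 1, 0],
                       [0, 0, 0, 0, 0]])
    print(shape_features(washer))  # {'area': 8, 'perimeter': 8, 'holes': 1}

A feature vector like (area, perimeter, holes) is exactly the sort of thing you’d feed into the classifiers discussed in this post.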

Many classification methods, including NN and the Bayes’ classifier, can involve substantial amounts of storage and computation if the amount of training is to be sufficient to achieve low error rates. Hence, there is considerable value in employing methods that minimize computational cost. This is the goal of our lovely and stout Naïve Bayes’ classifier.

In the Naïve Bayes’ algorithm, we assume the features are independent within each class, multiply their individual class-conditional probabilities together with the a priori probability, and thus obtain a quantity proportional to the a posteriori probability at a far lower computational cost:

    \[ p(x|C_i)P(C_i) = p(x_1|C_i)\,p(x_2|C_i)\ldots p(x_N|C_i)\cdot P(C_i) = \prod_{j}p(x_j|C_i)\cdot P(C_i) \]
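
Here’s a minimal sketch of that product rule, again with invented Gaussian class-conditional densities and two hypothetical features (say, length and area):

    import numpy as np

    # Naive Bayes: assume the features are independent within each class,
    # multiply their class-conditional probabilities p(x_j|Ci), and scale by
    # the a priori probability P(Ci). All classes and numbers are invented.

    def gaussian_pdf(x, mean, std):
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

    priors = {"washer": 0.7, "bracket": 0.3}
    # Per-class (mean, std) for each feature: here, length and area.
    params = {
        "washer":  [(2.0, 0.3), (4.0, 0.8)],
        "bracket": [(3.5, 0.5), (9.0, 1.5)],
    }

    def naive_bayes_classify(x):
        """Pick the class maximizing prod_j p(x_j|Ci) * P(Ci)."""
        scores = {}
        for c in priors:
            likelihood = np.prod([gaussian_pdf(xj, m, s)
                                  for xj, (m, s) in zip(x, params[c])])
            scores[c] = likelihood * priors[c]
        return max(scores, key=scores.get), scores

    print(naive_bayes_classify([2.1, 4.5]))  # -> ("washer", ...)
    print(naive_bayes_classify([3.4, 8.0]))  # -> ("bracket", ...)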


Let’s talk about the optimum number of features in an image. An important factor to consider is that the optimum number of features depends on the amount of training a classifier receives. If the number of training set patterns is increased, more evidence is available to support the determination of a greater number of features, and hence to provide more accurate classification of test patterns.

Alright. Probability is nice and all, but we shan’t go around all willy-nilly – we need to calculate the cost of our mistakes! This is where the cost function comes into play. First, let’s take a look at a function known as the conditional risk function:

    \[ R(C_i|x) = \sum_j L(C_i|C_j)P(C_j|x) \]

This function expresses the expected cost of deciding on class Ci when x is observed. As it is wished to minimize this function, we decide on class Ci only if:

    \[ R(C_i|x) < R(C_j|x) \quad \text{for all } j \neq i \]
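
Here’s a minimal sketch of this minimum-risk rule with an invented two-class loss matrix; note how a lopsided cost can overrule the raw a posteriori probabilities:

    import numpy as np

    # Conditional risk: R(Ci|x) = sum_j L(Ci|Cj) * P(Cj|x); decide the class
    # with the smallest risk. The loss values below are invented: letting a
    # defective part through costs ten times more than rejecting a good one.

    def conditional_risks(posteriors, loss):
        """Return R(Ci|x) for every candidate decision Ci (rows of the loss matrix)."""
        return loss @ posteriors

    loss = np.array([[0.0, 10.0],   # decide C0 = "good":      L(C0|C0), L(C0|C1)
                     [1.0,  0.0]])  # decide C1 = "defective": L(C1|C0), L(C1|C1)

    posteriors = np.array([0.8, 0.2])        # P(good|x), P(defective|x)
    risks = conditional_risks(posteriors, loss)
    print(risks)             # [2.0, 0.8]
    print(np.argmin(risks))  # 1 -> decide "defective", despite P(good|x) = 0.8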

If we were to choose a particularly simple cost function, of the form:

 

    \[ L(C_i|C_j) = \left\{\begin{array}{lr} 0, & \text{for } i = j \\ 1, & \text{for } i \neq j \end{array}\right. \]

then the conditional risk reduces to R(Ci|x) = 1 − P(Ci|x), and minimizing the risk becomes the same thing as choosing the class with the largest a posteriori probability.

Cost functions permit classifications to be biased in favor of a safe decision in a rigorous, predetermined, and controlled manner. One way of minimizing costs is for the classifier to recognize when it is “doubtful” about a particular classification, because two or more classes are almost equally likely. Then, one solution is to make a safe decision, the decision plane in feature space being biased away from its position for maximum probability classification. An alternative is to reject the pattern, i.e. place it into an “unknown” category; in that case, some other means can be employed for making an appropriate classification.
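
A minimal sketch of that reject option, using an invented margin threshold on the two largest a posteriori probabilities:

    import numpy as np

    # Reject option: if the top two a posteriori probabilities are nearly equal,
    # the classifier is "doubtful", so place the pattern in an "unknown" category
    # and let some other means make the classification. The margin is an example value.

    def classify_with_reject(posteriors, class_names, margin=0.2):
        order = np.argsort(posteriors)[::-1]    # classes sorted by posterior, descending
        best, runner_up = posteriors[order[0]], posteriors[order[1]]
        if best - runner_up < margin:
            return "unknown"                    # reject: too doubtful to decide
        return class_names[order[0]]

    print(classify_with_reject(np.array([0.52, 0.48]), ["good", "defective"]))  # "unknown"
    print(classify_with_reject(np.array([0.90, 0.10]), ["good", "defective"]))  # "good"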

Alright buckaroos! That is it for today! Of course one topic remains, and that’s SVMs: Support Vector Machines. But they, along with clustering, require a post of their own.

I’m kinda tired today… And I’m working on this massive project, which is to be revealed soon! Very, very, very soon! So buckle up, because when Chubak rides the bus, you can bet your butt it’s gonna be a bumpy ride!
