Oh boy, I’m on fire yo! Two posts in one day… To be fair, I wrote the last post yesterday. Anyhow, I don’t think anyone cares. But you reap what you sow. By the time I’m finished with this post my knowledge will be richer, and more robust. So what am I complaining about? As I said before *I write to learn*. So be it!

But first, let’s talk about one of my heroes, E. Roy Davies. He’s one of the forerunners of computer and machine vision, a subset of pattern recognition – one of the three things my profession is about. This post is based on a few chapters of his opus, *Computer Vision: Principles, Algorithms, Applications, Learning.*

Alright. Enough sucking up to the masters of the field. Let’s talk about *Artificial Neural Networks*.

ANNs were launched in the 1950s, work on them continued well into the 60s, and research into them still continues to this day. Bledsoe and Browning developed the *n-tuple* type of classifier, which involved bitwise recording and lookup of binary feature data, leading to the *weightless* or *logical* type of ANN. Rosenblatt’s *perceptron*, though, was more important than this algorithm. Let’s talk about this “perceptron”.

The simple perceptron is a linear classifier that classifies patterns into two classes. It takes a feature vector, x = (x_{1}, x_{2}, …, x_{N}), as its input and produces a single scalar output Σ_{i=1}^{N} w_{i} x_{i}, the classification process being completed by applying a threshold function at θ. The mathematics is simplified by writing −θ as w_{0}, and taking it to correspond to an input x_{0} which is maintained at a constant value of unity. The output of the linear part of the classifier is then written in the form:

d = Σ_{i=1}^{N} w_{i} x_{i} − θ = Σ_{i=0}^{N} w_{i} x_{i}

and the final output of the classifier is given by:

y = f(d), where f is the Heaviside step function: f(d) = 1 for d ≥ 0, and 0 otherwise.
This type of *neuron* – which, as we said before, is called a perceptron – can be trained using a variety of procedures, such as the *fixed increment rule*. The basic concept of this algorithm is to improve the overall error rate by moving the linear discriminant plane a fixed distance toward a position where no misclassification would occur – but only doing this when a classification error has occurred.
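The fixed increment rule can be sketched in a few lines of Python. This is a minimal toy of my own, not Davies’s code – the AND-style dataset, the ±1 labels, and the learning rate of 1 (the “fixed increment”) are all assumptions for illustration:

```python
# Minimal sketch of a perceptron trained with the fixed increment rule.
# The bias trick from above: prepend a constant x_0 = 1 to each sample
# so the threshold -theta becomes just another weight w_0.

def predict(w, x):
    """Linear sum followed by a hard (Heaviside-style) threshold."""
    d = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if d >= 0 else -1

def train_fixed_increment(samples, labels, epochs=20):
    """Nudge the hyperplane by a fixed step, only on misclassification."""
    n = len(samples[0]) + 1            # +1 for the bias input x_0
    w = [0.0] * n
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, labels):
            x = [1.0] + list(x)        # x_0 = 1 handles the threshold
            if predict(w, x) != t:
                # Move the weights a fixed distance toward correctness
                w = [wi + t * xi for wi, xi in zip(w, x)]
                errors += 1
        if errors == 0:                # converged: the data was separable
            break
    return w

# Linearly separable toy data (an AND-like labeling of two binary inputs)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w = train_fixed_increment(X, y)
print([predict(w, [1.0] + list(x)) for x in X])  # [-1, -1, -1, 1]
```

Because the toy data is linearly separable, the loop eventually makes a full pass with zero errors and stops – that’s exactly the convergence guarantee the fixed increment rule carries.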

Let me interpret all this mumbo jumbo and explain what it basically means. First, take a look at this image, courtesy of Towards Data Science:

So you can clearly see what happens. A bunch of inputs are given, each one is multiplied by its weight – the weights define the *hyperplane*, if you will – and then they are summed up. If this sum is less than the threshold value, it’s one binary choice; if it’s greater, it’s the other binary choice. This thresholding is the *Heaviside* function.

The image below shows the difference between *separable* and *non-separable* data. The hyperplane can do so much more if the data are separable!

So far, our perceptrons have been *single-layered*, meaning that only a single layer of hyperplanes can be realized. The other concept is the *multilayer perceptron*, or MLP. Rosenblatt himself suggested such networks, but was unable to work out his magic and train them. In 1969, Minsky and Papert published their famous monograph in which they discussed MLPs. It wasn’t until 1986 that Rumelhart *et al.* succeeded in proposing a systematic approach to training MLPs. Their solution is known as the back-propagation algorithm.

Let’s talk about it.

The problem of training an MLP can be simply stated: a general layer of an MLP obtains its feature data from the layers below and receives its class data from the layers above. Hence, if all the weights in the MLP are potentially changeable, *the information reaching a particular layer cannot be relied upon*. There is no reason why training a layer in isolation would lead to overall convergence of the MLP toward an ideal classifier. Although it might be thought that this is a rather minor difficulty, in fact this is not so; indeed, this is but one example of the so-called credit assignment problem. What is this problem? Well, it’s correctly determining the local origins of global properties and making the right assignment of rewards, punishments, corrections, and so on.

The key to solving these problems was to modify the perceptrons composing the MLP by giving them a less hard activation function than the Heaviside thresholding function. The Heaviside step was replaced with a function of *sigmoid* shape, such as the *tanh* function.

Once these softer activation functions were used, it became possible for each layer of the MLP to feel the data more precisely, and thus training procedures could be set up on a systematic basis. In particular, the rate of change of the data at each individual neuron could be communicated to other layers that could then be trained appropriately – though only on an incremental basis. I’m not going to bore you with the mathematical details, just some points regarding the algorithm:
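To make the “softer activation” idea concrete, here’s a quick sketch (my own illustration, nothing from the book) comparing the hard Heaviside step with the sigmoid. The property that matters for back-propagation is that the sigmoid has a well-defined, cheap-to-compute derivative everywhere:

```python
import math

def heaviside(u):
    """Hard threshold: flat almost everywhere, so no useful gradient."""
    return 1.0 if u >= 0 else 0.0

def sigmoid(u):
    """Smooth 'soft' threshold with outputs in the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

def sigmoid_deriv(u):
    """The closed form s(1 - s) that back-propagation leans on."""
    s = sigmoid(u)
    return s * (1.0 - s)

# tanh is the other common choice; note its range is (-1, 1), not (0, 1)
print(round(sigmoid(0.0), 3), round(math.tanh(0.0), 3))  # 0.5 0.0
```

The derivative `s * (1 - s)` is why the y(1 − y) factor keeps showing up in the back-propagation formulas below the fold.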

**1)** The outputs of one node are the inputs of the next, and an arbitrary choice is made to label all variables as output (y) parameters rather than input (x) variables; all output parameters are in the range 0 to 1 (because of the sigmoid, duh!)

**2)** The class parameter has been generalized as the target value *t* of the output variable y.

**3)** For all except the final outputs, the quantity δ_{j} has to be calculated using the formula δ_{j} = y_{j}(1 − y_{j}) Σ_{m}(δ_{m} w_{jm}), the summation having to be taken over all the nodes m in the layer *above* node j.

**4)** The sequence for computing the node weights involves starting with the output nodes and then proceeding downward one layer at a time.
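The four points above can be wired together into a tiny two-layer network. This is a hypothetical minimal sketch, not the book’s algorithm: the network shape (two sigmoid hidden units, one sigmoid output), the learning rate, and the zero initialization are all my own assumptions for illustration. It shows the deltas being computed top-down (point 4), with the hidden layer using δ_{j} = y_{j}(1 − y_{j}) Σ δ_{m} w_{jm} (point 3):

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def forward(x, w_hidden, w_out):
    """Two-layer forward pass; each unit gets a constant bias input of 1."""
    h = [sigmoid(wj[0] + sum(wi * xi for wi, xi in zip(wj[1:], x)))
         for wj in w_hidden]
    y = sigmoid(w_out[0] + sum(wi * hi for wi, hi in zip(w_out[1:], h)))
    return h, y

def backprop_step(x, t, w_hidden, w_out, lr=0.5):
    """One incremental back-propagation update toward target t."""
    h, y = forward(x, w_hidden, w_out)
    # Output layer first (point 4): delta = y(1 - y)(t - y)
    delta_out = y * (1 - y) * (t - y)
    # Then one layer down (point 3): sum over the layer *above* node j
    deltas_h = [hj * (1 - hj) * delta_out * w_out[1 + j]
                for j, hj in enumerate(h)]
    # Incremental weight updates, proportional to each node's input
    w_out[0] += lr * delta_out
    for j, hj in enumerate(h):
        w_out[1 + j] += lr * delta_out * hj
    for j, wj in enumerate(w_hidden):
        wj[0] += lr * deltas_h[j]
        for i, xi in enumerate(x):
            wj[1 + i] += lr * deltas_h[j] * xi

# With all-zero weights every sigmoid outputs 0.5; one step nudges the
# output toward the target t = 1.
w_hidden = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
w_out = [0.0, 0.0, 0.0]
x, t = (1, 0), 1
_, y_before = forward(x, w_hidden, w_out)
backprop_step(x, t, w_hidden, w_out)
_, y_after = forward(x, w_hidden, w_out)
print(y_before, round(y_after, 3))  # 0.5 0.523
```

Note how the update order matches point 4: the output delta is computed first, and only then can the hidden-layer deltas be formed, because they depend on it.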

In the figure below you can see the difference between the Heaviside activator, a linear activator, and the sigmoid activator.

I have prepared a short video for people who are visual learners. Let’s hope it helps them:

Ok. That’s it! I’m not sure what the next post is going to be about. But a little birdie tells me it’s going to be about Deep Learning! So if you really, really read my blog, look out for it!

What am I going to do? Well, first I’m going to a casino and use my X-Ray Auto Blackjack Aviator Specs to win some hands. Then I’m gonna read chapter 14 of Davies’s book. It’s a refresher on basic machine learning concepts. I hope I don’t fall asleep, I’ve been awake for 14 hours!