I know I promised a post on regression, but then I realized I only have a shallow understanding of Boosting and AdaBoost. So I biked to the nearest public library, went to the index cards, searched for ‘Boost’ and, after perusing through hundreds of self-help books, I found the greatest resource on AdaBoost: “How to Boost Your Spirit by Ada MacNally”. Nah, kidding. Such a book doesn’t exist. The following, as always, is my study notes, taken from Foundations of Machine Learning by MIT Press and Machine Learning in Action by Peter Harrington, published by Manning. The first book is heavy on math. The second book is more of a fluff book and is much simpler than the first one.

It is often difficult, for a non-trivial learning task, to directly devise an accurate algorithm satisfying the strong PAC-learning requirements that we saw in the PAC-learnable algorithm post I wrote before (link), but there can be more hope for finding simple predictors guaranteed only to perform slightly better than random. The following gives a formal definition of such weak learners. As in the PAC-learning post, we let $n$ be a number such that the computational cost of representing any element $x \in \mathcal{X}$ is at most $O(n)$ and denote by size(c) the maximal cost of the computational representation of $c \in \mathcal{C}$.

Let’s define weak learning. A concept class $\mathcal{C}$ is said to be weakly PAC-learnable if there exists an algorithm $\mathcal{A}$, a constant $\gamma > 0$, and a polynomial function $poly(\cdot, \cdot, \cdot)$ such that for any $\delta > 0$, for all distributions $\mathcal{D}$ on $\mathcal{X}$ and for any target concept $c \in \mathcal{C}$, the following holds for any sample size $m \geq poly(1/\delta,n,size(c))$:

\[ \underset{\mathcal{S} \sim \mathcal{D}^m}{\mathbb{P}}\left[R(h_s) \leq \frac{1}{2} - \gamma \right] \geq 1 - \delta, \]

where $h_s$ is the hypothesis returned by algorithm $\mathcal{A}$ when trained on sample $S$. When such an algorithm $\mathcal{A}$ exists, it is called a weak learning algorithm for $\mathcal{C}$ or a weak learner. The hypothesis returned by a weak learning algorithm is called a base classifier.

The key idea behind boosting techniques is to use a weak learning algorithm to build a strong learner, that is, an accurate PAC-learning algorithm. To do so, boosting techniques use an ensemble method: they combine different base classifiers returned by a weak learner to create more accurate predictors. But which base classifiers should be used and how should they be combined?

Another visualization is the book’s AdaBoost pseudocode, which I’ll describe rather than reproduce here (the line numbers below refer to it):

The algorithm takes as input a labeled sample $S=((x_1, y_1),\ldots,(x_m, y_m))$, with $(x_i, y_i) \in \mathcal{X} \times \{-1, +1\}$ for all $i \in [m]$, and maintains a distribution over the indices $\{1, \ldots, m\}$. Initially (lines 1-2), the distribution is uniform ($\mathcal{D}_1$). At each round of boosting, that is, each iteration $t \in [T]$ of the loop in lines 3-8, a new base classifier $ h_t \in \mathcal{H} $ is selected that minimizes the error on the training sample weighted by the distribution $ \mathcal{D}_t$:

\[ h_t \in \underset{h \in \mathcal{H}}{argmin}\underset{i \sim \mathcal{D}_t}{\mathbb{P}} [h(x_i) \neq y_i] = \underset{h \in \mathcal{H}}{argmin}\sum_{i=1}^{m}\mathcal{D}_t(i)1_{h(x_i) \neq y_i} .\]
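Since I’m not reproducing the book’s pseudocode figure here, these are the standard quantities it defines: the weighted error $\epsilon_t$ of $h_t$, the coefficient $\alpha_t$, and the reweighting with normalizer $Z_t$:

\[ \epsilon_t = \underset{i \sim \mathcal{D}_t}{\mathbb{P}}[h_t(x_i) \neq y_i], \qquad \alpha_t = \frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}, \qquad \mathcal{D}_{t+1}(i) = \frac{\mathcal{D}_t(i)\, e^{-\alpha_t y_i h_t(x_i)}}{Z_t}, \]

and the final classifier is $h = sign(f)$ with $f = \sum_{t=1}^{T}\alpha_t h_t$.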

$Z_t$ is simply a normalization factor to ensure that the weights $D_{t+1}(i)$ sum to one. The precise reason for the definition of the coefficient $\alpha_t$ will become clear later. For now, observe that if $\epsilon_t$, the error of the base classifier, is less than $\frac{1}{2}$, then $\frac{1-\epsilon_t}{\epsilon_t} > 1$ and $\alpha_t$ is positive. Thus, the new distribution $\mathcal{D}_{t+1}$ is defined from $\mathcal{D}_t$ by substantially increasing the weight on $i$ if point $x_i$ is incorrectly classified, and, on the contrary, decreasing it if $x_i$ is correctly classified. This has the effect of focusing more, at the next round of boosting, on the points incorrectly classified by $h_t$, and less on those correctly classified.

After $T$ rounds of boosting, the classifier returned by AdaBoost is based on the sign of the function $f$, which is a non-negative linear combination of the base classifiers $h_t$. The weight $\alpha_t$ assigned to $h_t$ in that sum is a logarithmic function of the ratio of the accuracy $1-\epsilon_t$ and error $\epsilon_t$ of $h_t$. Thus, more accurate base classifiers are assigned a larger weight in that sum.

For any $t \in [T]$, we will denote by $f_t$ the linear combination of the base classifiers after $t$ rounds of boosting: $f_t = \sum_{s=1}^{t}\alpha_sh_s$. In particular, we have $f_T = f$. The distribution $\mathcal{D}_{t+1}$ can be expressed in terms of $f_t$ and the normalization factors $Z_s$, $s \in [t]$ as follows:

\[ \forall i \in [m], \mathcal{D}_{t+1}(i)=\frac{e^{-y_if_t(x_i)}}{ m \prod_{s=1}^{t}Z_s} .\]

The AdaBoost algorithm can be generalized in several ways:

– Instead of a hypothesis with minimal weighted error, $h_t$ can more generally be the base classifier returned by a weak learning algorithm trained on $\mathcal{D}_t$;

– The range of the base classifiers could be $[-1, +1]$, or more generally a bounded subset of $\mathbb{R}$. The coefficients $\alpha_t$ can then be different and may not even admit a closed form. In general, they are chosen to minimize an upper bound on the empirical error, as discussed in the next section. Of course, in that general case the hypotheses $h_t$ are not binary classifiers, but their sign could define the label and their magnitude could be interpreted as a measure of confidence.

AdaBoost was originally designed to address the theoretical question of whether a weak learning algorithm could be used to derive a strong learning one. Here, we will show that it coincides in fact with a very simple algorithm, which consists of applying a general coordinate descent technique to a convex and differentiable objective function.

For simplicity, in this section, we assume that the base classifier set $\mathcal{H}$ is finite, with cardinality $N$: $\mathcal{H} = \{h_1, \ldots, h_N\}$. An ensemble function $f$ such as the one returned by AdaBoost can then be written as $f = \sum_{j=1}^N\bar{\alpha}_j h_j$. Given a labeled sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$, let $F$ be the objective function defined for all $\boldsymbol{\bar{\alpha}} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_N) \in \mathbb{R}^N$ by:

\[ F(\boldsymbol{\bar{\alpha}}) = \frac{1}{m}\sum_{i=1}^{m}e^{-y_i f(x_i)} = \frac{1}{m}\sum_{i=1}^m e^{-y_i \sum_{j=1}^{N}\bar{\alpha}_j h_j(x_i)}. \]

$F$ is a convex function of $\boldsymbol{\bar{\alpha}}$ since it is a sum of convex functions, each obtained by composition of the (convex) exponential function with an affine function of $\boldsymbol{\bar{\alpha}}$.
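I’ll only state the punchline of that argument rather than derive it: applying coordinate descent with exact line search to $F$ picks, at each step, the coordinate (i.e., the base classifier) with the smallest weighted error, and the optimal step size along that coordinate works out to

\[ \eta = \frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t} = \alpha_t, \]

so the procedure reproduces AdaBoost round by round.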

The AdaBoost algorithm takes the input dataset, the class labels, and the number of iterations. The number of iterations is the only parameter you need to specify in most ML libraries.

Here, we briefly describe the standard practical use of AdaBoost. An important requirement for the algorithm is the choice of the base classifiers or that of the weak learner. The family of base classifiers typically used with AdaBoost in practice is that of decision trees, which are equivalent to hierarchical partitions of the space. Among decision trees, those of depth one, also known as stumps, are by far the most frequently used base classifiers.

Boosting stumps are threshold functions associated to a single feature. Thus, a stump corresponds to a single axis-aligned partition of the space. If the data is in $\mathbb{R}^N$, we can associate a stump to each of the N components. Thus, to determine the stump with the minimal weighted error at each round of boosting, the best component and the best threshold for each component must be computed.

Now, let’s create a weak learner with a decision stump, and then implement the full AdaBoost algorithm using it. If you’re familiar with decision trees, you’ll understand this part easily. If you’re not, either learn about them, or wait until I cover them tomorrow. A decision tree with only one split is a decision stump.

 

The pseudocode to generate a simple decision stump looks like this:

Set the minError to +∞
For every feature in the dataset:
    For every step (candidate threshold):
        For each inequality:
            Build a decision stump and test it with the weighted dataset
            If the error is less than minError:
                Set this stump as the best stump
Return the best stump
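Here is a rough Python/NumPy sketch of that pseudocode. The names (build_stump, stump_classify, num_steps, and so on) are my own and only loosely follow the Manning book’s listing, so treat this as a reconstruction rather than the book’s exact code:

    import numpy as np

    def stump_classify(X, dim, thresh, ineq):
        """Classify all samples with a single threshold on one feature."""
        pred = np.ones(X.shape[0])
        if ineq == 'lt':
            pred[X[:, dim] <= thresh] = -1.0
        else:
            pred[X[:, dim] > thresh] = -1.0
        return pred

    def build_stump(X, y, D, num_steps=10):
        """Find the decision stump with the lowest weighted error under distribution D."""
        m, n = X.shape
        best_stump, best_pred = {}, None
        min_error = np.inf                     # "set minError to +infinity"
        for dim in range(n):                   # for every feature
            lo, hi = X[:, dim].min(), X[:, dim].max()
            step = (hi - lo) / num_steps
            for j in range(-1, num_steps + 1): # for every step (threshold candidate)
                for ineq in ('lt', 'gt'):      # for each inequality
                    thresh = lo + j * step
                    pred = stump_classify(X, dim, thresh, ineq)
                    err = np.sum(D * (pred != y))   # weighted error
                    if err < min_error:
                        min_error = err
                        best_pred = pred.copy()
                        best_stump = {'dim': dim, 'thresh': thresh, 'ineq': ineq}
        return best_stump, min_error, best_pred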

Now that we have generated a decision stump, let’s train it:

For each iteration:
    Find the best stump using buildStump()
    Add the best stump to the stump array
    Calculate α
    Calculate the new weight vector D
    Update the aggregate class estimate
    If the error rate is 0:
        Break out of the loop
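And a sketch of that training loop, reusing build_stump from the previous snippet (again my own reconstruction, not the book’s listing):

    def adaboost_train(X, y, num_iters=40):
        """Train AdaBoost with decision stumps as the weak learners."""
        m = X.shape[0]
        D = np.full(m, 1.0 / m)          # start with a uniform distribution
        stumps = []
        agg_pred = np.zeros(m)           # aggregate class estimate f_t(x_i)
        for _ in range(num_iters):
            stump, error, pred = build_stump(X, y, D)
            alpha = 0.5 * np.log((1.0 - error) / max(error, 1e-16))  # avoid division by zero
            stump['alpha'] = alpha
            stumps.append(stump)
            # Reweight: increase weight on misclassified points, decrease on correct ones
            D = D * np.exp(-alpha * y * pred)
            D = D / D.sum()              # the Z_t normalization
            agg_pred += alpha * pred
            if np.mean(np.sign(agg_pred) != y) == 0.0:   # perfect fit on the training set
                break
        return stumps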

Well, that is it for today! I know I keep promising one thing, and deliver another, but this time I’m really going to talk about decision trees!

Semper Fudge!

 

I’m extremely agitated today. I dunno why. Maybe because there was some convulsion in the peaceful tidings of the house I live in, or the fact that I’m kinda hungry at the moment. Anyways, I don’t have time for chitchat. Let’s get to the studying.

The following is taken from Foundations of Machine Learning by Mohri, Rostamizadeh, and Talwalkar.

Support Vector Machines are among the most theoretically well motivated and practically effective classification algorithms in modern machine learning.

Consider an input space $\mathcal{X}$ that is a subset of $\mathbb{R}^N$ with $N \geq 1$, and the output or target space $\mathcal{Y}=\{-1, +1\}$, and let $f : \mathcal{X} \rightarrow \mathcal{Y} $ be the target function. Given a hypothesis set $\mathcal{H}$ of functions mapping $\mathcal{X}$ to $\mathcal{Y}$, the binary classification task is formulated as follows:

The learner receives a training sample $S$ of size $m$ drawn i.i.d. from $\mathcal{X}$ according to some unknown distribution $\mathcal{D}$, $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X}\times\mathcal{Y})^m$, with $y_i = f(x_i) $ for all $i \in [m]$. The problem consists of determining a hypothesis $ h \in \mathcal{H}$, a binary classifier, with small generalization error: the probability that the hypothesis disagrees with the target function is our error rate.

\[ R_{\mathcal{D}}(h) = \underset{x\sim\mathcal{D}}{\mathbb{P}} [h(x) \neq f(x)]. \]

Different hypothesis sets $\mathcal{H}$ can be selected for this task. Hypothesis sets with smaller complexity provide better learning guarantees, everything else being equal. A natural hypothesis set with relatively small complexity is that of linear classifiers, or hyperplanes, which can be defined as follows:

\[ \mathcal{H}= \{x \rightarrow sign(w.x+b) : w \in \mathbb{R}^N, b \in \mathbb{R}\} \]

The learning problem is then referred to as a linear classification problem. The general equation of a hyperplane in $\mathbb{R}^N$ is $w.x+b=0$, where $w\in\mathbb{R}^N$ is a non-zero vector normal to the hyperplane and $b\in\mathbb{R}$ is a scalar. A hypothesis of the form $x\rightarrow sign(w.x+b)$ thus labels positively all points falling on one side of the hyperplane $w.x+b=0$ and negatively all others.

From now until we say so, we’ll assume that the training sample $S$ can be linearly separated, that is, we assume the existence of a hyperplane that perfectly separates the training sample into two populations of positively and negatively labeled points, as illustrated by the left panel of the figure below. This is equivalent to the existence of $ (\boldsymbol{w}, b) \in (\mathbb{R}^N \setminus \{\boldsymbol{0}\}) \times \mathbb{R}$ such that:

\[ \forall i \in [m], \quad y_i(\boldsymbol{w}.x_i + b) \geq 0 \]

 

But, as you can see above, there are then infinitely many such separating hyperplanes. Which hyperplane should a learning algorithm select? The definition of the SVM solution is based on the notion of geometric margin.

Let’s define what we just came up with: The geometric margin $\rho_h(x)$ of a linear classifier $h: x \rightarrow \boldsymbol{w}.x + b $ at a point $x$ is its Euclidean distance to the hyperplane $\boldsymbol{w}.x+b=0$:

\[ \rho_h(x) = \frac{|w.x+b|}{||w||_2} \]

The geometric margin $\rho_h$ of a linear classifier $h$ for a sample $S = (x_1, \ldots, x_m) $ is the minimum geometric margin over the points in the sample, $\rho_h = min_{i\in[m]} \rho_h(x_i)$, that is, the distance from the hyperplane defining $h$ to the closest sample points.
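Just to make the definition concrete, here’s a tiny NumPy helper of my own that computes both quantities:

    import numpy as np

    def geometric_margin(w, b, x):
        """Distance of a single point x to the hyperplane w.x + b = 0."""
        return abs(np.dot(w, x) + b) / np.linalg.norm(w)

    def sample_margin(w, b, X):
        """Minimum geometric margin over a sample X (one point per row)."""
        return np.min(np.abs(X @ w + b) / np.linalg.norm(w))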

So what is the solution? The separating hyperplane with the maximum geometric margin is known as the maximum-margin hyperplane. The right panel of the figure above illustrates the maximum-margin hyperplane returned by the SVM algorithm in the separable case. We will present later a theory that provides a strong justification for this solution. We can observe already, however, that the SVM solution can also be viewed as the safest choice in the following sense: a test point is classified correctly by a separating hyperplane with geometric margin $\rho$ even when it falls within a distance $\rho$ of the training samples sharing the same label; for the SVM solution, $\rho$ is the maximum geometric margin and thus the safest value.

We now derive the equations and optimization problem that define the SVM solution. By definition of the geometric margin, the maximum margin $\rho$ of a separating hyperplane is given by:

\[ \rho = \underset{w,b : y_i(w.x_i+b) \geq 0}{max}\,\underset{i\in[m]}{min}\frac{|w.x_i+b|}{||w||} = \underset{w,b}{max}\,\underset{i\in[m]}{min}\frac{y_i(w.x_i+b)}{||w||} \]

The second equality follows from the fact that, since the sample is linearly separable, for the maximizing pair $(w, b)$, $y_i(w.x_i+b)$ must be non-negative for all $i\in[m]$. Now, observe that the last expression is invariant to multiplication of $(w, b)$ by a positive scalar. Thus, we can restrict ourselves to pairs $(\boldsymbol{w},b)$ scaled such that $min_{i\in[m]}y_i(\boldsymbol{w}.x_i+b) = 1$:

\[ \rho = \underset{min_{i\in[m]}y_i(w.x_i+b)=1}{max}\frac{1}{||w||} = \underset{\forall i \in[m],\, y_i(w.x_i+b) \geq 1}{max}\frac{1}{||w||} \]

Figure below illustrates the solution $(w, b)$ of the maximization we just formalized. In addition to the maximum-margin hyperplane, it also shows the marginal hyperplanes, which are the hyperplanes parallel to the separating hyperplane and passing through the closest points on the negative or positive sides.

 

 

Since maximizing $1/||w||$ is equivalent to minimizing $\frac{1}{2}||w||^2$, in view of the equation above, the pair $(\boldsymbol{w}, b)$ returned by the SVM in the separable case is the solution of the following convex optimization problem:

\[ \underset{w, b}{min}\frac{1}{2}||w||^2 \]\[ \text{subject to}: y_i(\boldsymbol{w}.x_i+b) \geq 1, \forall i \in[m] \]

Since the objective function is quadratic and the constraints are affine, the optimization problem above is in fact a specific instance of quadratic programming (QP), a family of problems extensively studied in optimization. A variety of commercial and open-source solvers are available for solving convex QP problems. Additionally, motivated by the empirical success of SVMs along with their rich theoretical underpinnings, specialized methods have been developed to solve this particular convex QP problem more efficiently, notably block coordinate descent algorithms with blocks of just two coordinates.
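For instance, scikit-learn’s SVC wraps libsvm, which uses exactly that kind of two-coordinate (SMO-style) solver; using a very large C approximates the hard-margin problem above. This is just a usage sketch with made-up toy data, not anything from the book:

    import numpy as np
    from sklearn.svm import SVC

    # Toy linearly separable data: two clouds of points in R^2.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)

    clf = SVC(kernel='linear', C=1e6)   # huge C ~ hard margin
    clf.fit(X, y)

    w, b = clf.coef_[0], clf.intercept_[0]
    print("w =", w, "b =", b)
    print("support vectors:", clf.support_vectors_)   # points on the marginal hyperplanes
    print("margin =", 1.0 / np.linalg.norm(w))        # geometric margin 1/||w|| for the canonical scaling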

So what are support vectors? Looking at the formulation above, we note that the constraints are affine and thus qualified. The objective function as well as the affine constraints are convex and differentiable.

We introduce Lagrange variables $\alpha_i \geq 0, i\in[m]$, associated to the $m$ constraints, and denote by $\boldsymbol{\alpha}$ the vector $(\alpha_1, \ldots, \alpha_m)^T$. The Lagrangian can then be defined for all $\boldsymbol{w}\in\mathbb{R}^N,b\in\mathbb{R}$, and $\boldsymbol{\alpha}\in\mathbb{R}_+^m$ by:

\[ \mathcal{L}(\boldsymbol{w},b,\boldsymbol{\alpha}) = \frac{1}{2}||w||^2 - \sum_{i = 1}^{m}\alpha_i[y_i(w.x_i+b) -1] \]
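Setting the gradients of the Lagrangian with respect to $w$ and $b$ to zero, together with the complementarity conditions (the standard KKT conditions for this problem), gives:

\[ \nabla_w \mathcal{L} = w - \sum_{i=1}^{m}\alpha_i y_i x_i = 0 \;\Rightarrow\; w = \sum_{i=1}^{m}\alpha_i y_i x_i, \qquad \nabla_b \mathcal{L} = -\sum_{i=1}^{m}\alpha_i y_i = 0, \]

\[ \forall i,\; \alpha_i\left[y_i(w.x_i + b) - 1\right] = 0 \;\Rightarrow\; \alpha_i = 0 \;\vee\; y_i(w.x_i+b)=1. \]

So the weight vector is a linear combination of only those training points with $\alpha_i \neq 0$, which are the points lying on the marginal hyperplanes: these are the support vectors.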

Support vectors fully define the maximum-margin hyperplane or SVM solution, which justifies the name of the algorithm. By definition, vectors not lying on the marginal hyperplanes do not affect the definition of these hyperplanes – in their absence, the solution to the SVM problem would be unchanged. Note that while the solution to the SVM problem is unique, the support vectors are not. In dimension $N$, $N+1$ points are sufficient to define a hyperplane. Thus, when more than $N+1$ points lie on the marginal hyperplanes, different choices are possible for the $N+1$ support vectors.

But the points in the space are not always separable. In most practical settings, the training data is not linearly separable, which implies that for any hyperplane $\boldsymbol{w.x}+b=0$, there exists $x_i \in S$ such that:

\[ y_i[\boldsymbol{w.x_i}+b] \ngeq 1 \]

Thus, the constraints imposed in the linearly separable case cannot all hold simultaneously. However, a relaxed version of these constraints can indeed hold, that is, for each $i\in[m]$, there exists $\xi_i \geq 0$ such that:

\[ y_i[\boldsymbol{w.x_i}+b] \geq 1-\xi_i \]

The variables $\xi_i$ are known as slack variables and are commonly used in optimization to define relaxed versions of constraints. Here, a slack variable $\xi_i$ measures the distance by which vector $x_i$ violates the desired inequality, $y_i(\boldsymbol{w}.x_i + b) \geq 1$. This figure illustrates the situation:

 

For the hyperplane $y_i(w.x_i+b) = 1 $, a vector $x_i$ with $\xi_i > 0$ can be viewed as an outlier. Each $x_i$ must be positioned on the correct side of the appropriate marginal hyperplane to not be considered an outlier. Here’s the formula we use to optimize the non-separable case:

\[ \underset{w, b, \xi}{min} \frac{1}{2}||w||^2 + C\sum_{i=1}^{m}\xi_i^p \]\[ \text{subject to} \quad y_i(w.x_i+b) \geq 1-\xi_i \wedge \xi_i \geq 0, i\in[m] \]

Okay! Alright! I think I understand it now. That’s enough classification for today. I’m going to study something FUN next. Although I’m a bit drowsy… No matter! I have some energy drinks at home. Plus I have some methamphetamine which I have acquired to boost my energy… Nah, kidding. I’m a cocaine man!

I get like, ~100 visitors daily, and I know it’s a lot for a person who’s just started, and I’m not even blogging for visitors, because this blog is basically my study notes, but I think, if you look hard enough, if you slant your eyes, you may find something of value in it. So why not introduce it to your friends? Buddies? Comrades? Homies? Compadres? Dusts? Rafighs? Ais? Do it!

I wanna be honest with you. One thing that I never understood is Principal Component Analysis. It’s very hard. At least to me. Because my college doesn’t offer any courses in the way of statistics (yes, I’m an Associates student!) I thought to myself, “Shieeet, I’m gonna read every resource there is. From Youtube, which I kinda feel like is the worst resource for learning – At least, sometimes – to a book about Computer Vision!” and ta-dah! Here we are. A book about Computer Vision is about to clear up everything about PCA. Let us commence learning! Put your learning hat and goggles on, and wear your diaphragm, because we’re gonna get screwed!

The following is based on Computer Vision, Principles, Algorithms, Applications, Learning by E. R. Davies.

One powerful way of approaching the task of data representation is Principal Component Analysis, or PCA. This involves finding the mean of a cluster of points in feature space, and then finding the principal axes of the cluster in the following way: First, an axis is found which passes through the mean position and which gives the maximum variance when the data are projected onto it. Then, a second such axis is found which maximizes variance in a direction normal to the first. This process is carried out until a total of N principal axes have been found for an N-dimensional feature space. The process is illustrated in this figure:

 In fact, the process is entirely mathematical and need not be undertaken in the strict sequence indicated above; it merely involves finding a set of orthogonal axes which diagonalize the covariance matrix. A matrix is diagonalizable if it has n linearly independent eigenvectors (having n distinct eigenvalues is sufficient for this). Eigenvalues are defined as (I know what eigenvalues are, this is just for people who are reading this post in the future and don’t wanna google):

    \[ A\vec{v} = \lambda \vec{v} \]

Where $A$ is the matrix, in this case the covariance matrix, $\vec{v}$ is the eigenvector, and $\lambda$ is the eigenvalue.

The covariance matrix for the input population is defined as:

    \[ \Sigma = \mathbb{E}((x_{(p)} - \mu)(x_{(p)}-\mu)^T) \]

where $x_{(p)}$ is the location of the pth data point, $\mu$ is the mean of the P data points, and $\mathbb{E}(\cdot)$ indicates the expectation value for the underlying population. We can estimate $\Sigma$ from the equations:

    \[ \Sigma = \frac{1}{P}\sum_{p=1}^{P}x_{(p)}x_{(p)}^T - \mu\mu^T \]

Where:

    \[ \mu = \frac{1}{P}\sum_{p=1}^{P}x_{(p)} \]

As $\Sigma$ is real and symmetric, it is possible to diagonalize it using a suitable orthogonal transformation matrix $A$, obtaining a set of N orthonormal eigenvectors $u_i$ with eigenvalues $\lambda_i$ given by:

    \[ \Sigma u_i = \lambda_i u_i \quad (i = 1, 2, \ldots, N) \]

The vectors u_i are derived from the original vectors x_i by:

    \[ u_i = A(x_i - \mu) \]

And the inverse transformation needed to recover the original data vectors is:

    \[ x_i = \mu + A^T u_i \]

Here, we have recalled that for an orthogonal matrix

    \[ A^{-1} = A^T \]

In fact, it may be shown that A is the matrix whose rows are formed from the eigenvectors of \Sigma, and that the diagonalized covariance matrix \Sigma^\prime is given by:

    \[ \Sigma^\prime = A\Sigma A^T \]

So that:

    \[ \Sigma^\prime =\left[\begin{array}{cccc}\lambda_1 & 0 & \cdots & 0 \\0 & \lambda_2 && 0 \\\vdots && \ddots & \vdots \\0 & 0 & \cdots & \lambda_N\end{array}\right] \]

In what follows, we shall assume that the eigenvalues have been placed in an ordered sequence, starting with the largest. In that case, $\lambda_1$ represents the most significant characteristic of the set of data points, with later eigenvalues representing successively less significant characteristics. We could even go so far as to say that, in some sense, $\lambda_1$ represents the most interesting characteristic of the data, whereas $\lambda_N$ would be devoid of interest. More practically, if we ignored $\lambda_N$ we might not lose much information. And indeed the last few eigenvalues would frequently represent characteristics which are not statistically significant and are essentially noise. For these reasons, PCA is commonly used to reduce the dimensionality of the feature space from N to some lower value $N^\prime$.

Well, this was a short blog post. I have understood the concept behind PCA, I just haven’t understood how I can calculate it. No worries! There’s always NumPy… Unless I’m in C++, then I’m screwed!
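Since I brought up NumPy, here’s a minimal sketch of my own for the computation described above – estimate the covariance matrix, diagonalize it, and keep the first N′ axes:

    import numpy as np

    def pca(X, n_components):
        """PCA via eigendecomposition of the covariance matrix.

        X has one data point per row; returns (projected data, eigenvector matrix A, eigenvalues).
        """
        mu = X.mean(axis=0)                       # mean of the P data points
        Xc = X - mu                               # center the cloud on its mean
        Sigma = (Xc.T @ Xc) / X.shape[0]          # covariance matrix estimate
        eigvals, eigvecs = np.linalg.eigh(Sigma)  # symmetric => orthonormal eigenvectors
        order = np.argsort(eigvals)[::-1]         # sort eigenvalues, largest first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        A = eigvecs[:, :n_components].T           # rows of A are the leading eigenvectors
        U = Xc @ A.T                              # u = A(x - mu), one row per point
        return U, A, eigvals[:n_components]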

“If Nearest Neighbor tires you, what would SVM do?” This sentence paraphrases Jeremiah 12:5; or it alludes to the 1971 Christexploitation movie by Ron Ormond. Either way, if you’re stuck at NN, how are you going to deal with Support Vector Machines?!

Never fret. I’m here to paraphrase and quote from E. R. Davies’ Computer Vision: Principles, Algorithms, Applications, Learning. I’ve sucked up to E. R. Davies before, I think he’s a wonderful person, and his contributions to the field of computer and machine vision and pattern recognition shan’t be let adrift from our minds. His books are perfect because he’s too… industrial. He’s not about theory, he’s all about application!

This post looks at classification through vision. So you’ve been warned.

When the objects that appear in an image have simple shapes, just one stage of processing may be required – as in the case of circular washers on a conveyor belt. For more complex objects such as flat brackets and hinges, locating them requires at least two stages – as when graph matching methods are used. For situations where the full complexity of three dimensions occurs, more subtle procedures are usually required. Indeed, the very ambiguity involved in interpreting 2-D images from a set of 3-D objects generally requires cues to be sought and hypotheses to be proposed before any serious attempt can be made at the task. Thus, cues are vital to keying into the complex data structures of many images. However, for simple situations, concentration on small features is valuable in permitting image interpretation to be carried out efficiently and rapidly. Neither must it be forgotten that in many applications of computer vision, the task is made simpler by the fact that the main interest lies in specific types of object.

In practical situations, measurements of prominent features allow most objects to be classified straightforwardly. In fact, this is most commonly achieved with varying degrees of certainty, but by comparing object features with those of a great many other known objects, we arrive at classifications that are statistically the most likely ones. Hence, this sort of classification procedure is called SPR: statistical pattern recognition.

A good number of early pattern recognition techniques and algorithms followed the SPR approach without proceeding to the next stage – that of determining the solution that is mathematically the most probable one. Thus, an important goal has been to move the subject from SPR to PPR → Probabilistic Pattern Recognition. Indeed, the key aim of ML is to progress towards probabilistic pattern recognition. In this blog post, we’ll study SPR. We’ve already discussed one aspect of PPR in the last post, at least – the EM algorithm. We’ll unravel more aspects of PPR later.

Let’s talk about NN: the Nearest Neighbor algorithm. The principle of the NN algorithm is that of comparing input image patterns against a number of paradigms and then classifying them according to the class of the paradigm that gives the closest match. Below you see a principal-component-analyzed dataset wherein the point in blue is closer in paradigm to green than to red.

 

An instructive but rather trivial example is shown below:

Here, a number of binary patterns are presented to the computer in the training phase of the algorithm: then the test patterns are presented one at a time and compared bit by bit against each of the training patterns. It is clear that this gives a generally reasonable result; the main problems arise when 1) training patterns of different classes are close together in Hamming distance – the Hamming distance being the number of bits in which two binary patterns differ – and 2) minor translations, rotations, or noise cause variations that inhibit accurate recognition. More generally, problem 2 means that the training patterns are insufficiently representative of what will appear during the test phase. The latter statement encapsulates an exceptionally important principle, and it implies that there must be sufficient patterns in the training set for the algorithm to be able to generalize over all possible patterns of each class. However, problem 1 implies that patterns of two different classes may in some cases be so similar as to be indistinguishable by any algorithm; and then it is inevitable that erroneous classifications will be made. It is seen below that this is because the underlying distributions in feature space overlap.

The basis of Bayes’ decision theory will now be examined. If we are trying to get a computer to classify objects, a sound approach is to get it to measure some prominent feature of each object, such as its length, and to use this feature as an aid to classification. Sometimes, such a feature may give very little indication of the pattern class – perhaps because of the effects of manufacturing variation. For example, a hand-written character may be so ill formed that its features are of little help in interpreting it; it then becomes much more reliable to make use of the known relative frequencies of letters, or to invoke context. In fact, either of these strategies can give a greatly increased probability of correct interpretation. In other words, when feature measurements are found to give an error rate above a certain threshold, it is more reliable to employ the a priori probability of a given pattern appearing.

The next step in improving recognition performance is to combine the information from feature measurements and from a priori probabilities; this is achieved by applying Bayes’ rule. For a single feature x, this takes the form:

    \[ P(C_i|x) = \frac{p(x|C_i)P(C_i)}{p(x)} \]

Where:

    \[ p(x) = \sum_{j}p(x|C_j)P(C_j) \]

The vertical line | denotes conditional probability.

Mathematically, the variables here are 1) the a priori probability of class Ci, P(Ci); 2) the probability density for feature x, p(x); 3) the class-conditional probability density for feature x in class Ci, p(x|Ci), and 4) the a posteriori probability of class Ci when x is observed, P(Ci|x).

But what is a priori? What is a posteriori? Let this image I’ve created explain it all!

 

The notation P(Ci|x) is a standard one, being defined as the probability that the class is Ci when the feature is known to have the value x. Bayes’ rule says that to find the class of an object, we need to know two sets of information about the objects that might be viewed: the first is the basic probability P(Ci) that a particular class might arise; the second is the distribution of values of the feature x for each class.

Many common image analysis techniques give features that may be used to help identify or classify objects, such as the area of the object, its perimeter, the number of holes it possesses, and so on. Generally, increasing the number of features helps to resolve the classification problem more quickly. In the figure below, the first image has a much lower error rate than the second image. Keep in mind the cardinal rule of machine learning: you can’t have an error rate of 0!

Many classification methods, including NN and Bayes’ classifier, can involve substantial amounts of storage and computation if the amount of training is to be sufficient to achieve low error rates. Hence, there is considerable value in employing methods that minimize computation cost. This is the goal of our lovely and stout Naïve Bayes’ classifier.

In the Naïve Bayes’ algorithm, we assume the features are independent given the class, multiply their class-conditional probabilities together with the a priori class probability, and thus get an a posteriori estimate at low computational cost:

    \[ p(x|C_i)P(C_i) = p(x_1|C_i)p(x_2|C_i)\ldots p(x_N|C_i).P(C_i) = \prod_{j}p(x_j|C_i).P(C_i) \]
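A tiny worked example of that product rule, with made-up numbers (two classes, two binary features), just to show the arithmetic:

    # Priors P(C_i) and per-feature class-conditional probabilities p(x_j | C_i)
    priors = {'A': 0.6, 'B': 0.4}
    likelihoods = {
        'A': {'x1': 0.8, 'x2': 0.3},   # p(x1|A), p(x2|A) for the observed feature values
        'B': {'x1': 0.2, 'x2': 0.9},
    }

    # Naive Bayes score: prod_j p(x_j|C_i) * P(C_i); the class with the largest score wins.
    scores = {c: priors[c] * likelihoods[c]['x1'] * likelihoods[c]['x2'] for c in priors}
    total = sum(scores.values())
    posteriors = {c: s / total for c, s in scores.items()}   # normalize by p(x)
    print(posteriors)   # roughly {'A': 0.67, 'B': 0.33}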


Let’s talk about the optimum number of features in an image. An important factor to consider is that the optimum number of features depends on the amount of training a classifier receives. If the number of training set patterns is increased, more evidence is available to support the determination of a greater number of features, and hence to provide more accurate classification of test patterns.

Alright. Probability is nice and all but we shan’t go around all willy nilly – we need to calculate the cost of our mistakes! This is where the cost function comes into play. First, let’s take a look at a function known as the conditional risk function:

    \[ R(C_i|x) = \sum_j L(C_i|C_j)P(C_j|x) \]

This function expresses the expected cost of deciding on class Ci when x is observed. As it is wished to minimize this function, we decide on class Ci only if:

    \[ R(C_i|x) < R(C_j|x) \quad \text{for all } j \neq i \]

If we were to choose a particularly simple cost function, of the form:

 

    \[L(C_i|C_j) = \left\{\begin{array}{lr}0, & \text{for } i=j\\1, & \text{for } i\neq j\end{array}\right. \]

then the conditional risk reduces to $1 - P(C_i|x)$, and minimizing it simply amounts to choosing the class with the largest a posteriori probability. Cost functions permit classifications to be biased in favor of a safe decision in a rigorous, predetermined, and controlled manner. One way of minimizing costs is for the classifier to recognize when it is “doubtful” about a particular classification, because two or more classes are almost equally likely. Then, one solution is to make a safe decision, the decision plane in feature space being biased away from its position for maximum probability classification. An alternative is to reject the pattern, i.e. place it into an “unknown” category; in that case, some other means can be employed for making an appropriate classification.

Alright buckaroos! That is it for today! Of course one topic remains, and that’s SVMs: Support Vector Machines. But they, along with clustering, require a post of their own.

I’m kinda tired today… And I’m working on this massive project, which is to be revealed soon! Very, very, very soon! So buckle up, because when Chubak rides the bus, you can bet your butt it’s gonna be a bumpy ride!

Sherry is one of those beverages that only stuck-up yuppies drink. I’ve personally never tasted it, for obvious reasons, but I’d imagine it’ll taste like any other fermented grape drink – stingy and bitter. I think people drink these beverages mostly because they’re used to them! Aren’t I right? I think machine learning can be used in the alcoholic beverage industry… Like, how long should this cask sit in the cellar? I dunno.

I wanted to make another post based on Foundations of Machine Learning by MIT Press… But I decided to go the other way and be more succinct: make a general post about machine learning. In the last post I talked about chapter 14 of E. R. Davies’ book. This chapter is titled “Probabilistic Machine Learning”. I’m gonna make a post about this chapter, and just hope to Lord Almighty that I’ll learn something cool… Of course I’ll learn something cool! What am I saying? Let’s start talking about probabilistic machine learning, starting off with the EM algorithm, EM standing for Expectation Maximization.

Perhaps the main point about probabilistic optimization is that we are always in a situation where we have an absolute mathematical goal – to ensure that the solutions we are seeking are subject to ever-increasing probability. This is important because, when analyzing data involving a large component of randomness, we can never be sure whether any real improvement is being made. But if we can prove mathematically that the process of change can only increase the probability of correct interpretation, we have a crucially important tool at our fingertips!

This is all good and fine, but how exactly will we formulate probabilistic arguments in such a way as to achieve our aims? The answer to this question lies in the fact that by the 2010s many tools had been developed to let this happen, and at this very moment in time progress in this area is accelerating.

There are concrete methodologies; the most powerful of them is Bayes’ theory, the sine qua non in the area of applied statistics. There’s then Jensen’s inequality, and the Kullback-Leibler divergence formula, which gives a distance measure showing how different two probability distributions are. Then there is Newton’s method of approximation, which is fundamental, but which can be bettered in relevant cases by the Expectation Maximization algorithm. Among all this theory and methodology, we must not forget such basic probability ideas as the vertical bar notation, which allows probabilities to be reexpressed using the product rule, p(A, B) = p(A|B)p(B). These are all fine and dandy, but is there an algorithm that is based on the normal distribution, you may inquire? And I answer, yes, there is! And we’re talking about it.

Much of what we shall do in the probabilistic formalism is to make models of the input data – this being particularly true of the EM algorithm, which, being the subject of our post, is designed for generating accurate statistical models of data. But what types of models are to be used? The Gaussian distribution is key, because it accurately models the inaccuracies of measurement due to random noise.

Before proceeding to describe the Expectation Maximization algorithm, and its justification, it will be useful to look at the sort of problems that we will want to apply to it. In particular, suppose we have a 1-D distribution of data points which we wish to fit: Perhaps, the most obvious way to model it is by using a set of individual Gaussian distributions, each of which will correspond to one of the peaks of the input distribution. Mathematically, we can model this as a mixture of Gaussians in which each Gaussian has its own mixture coefficient m. Furthermore, if we are to follow our probabilistic strategy, we will need to express both the input distribution and the result as probability distributions.

The first thing to do is to represent the Gaussian distribution as a probability distribution integrating to unity:

    \[ \mathcal{N}(x|\mu, \sigma) = \frac{1}{(2\pi\sigma^2)^{1/2}}exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right] \]


$\mu$ and $\sigma$, respectively, are the mean and the standard deviation of the distribution. In addition, we follow standard usage in denoting the Gaussian by its alternate name, the normal distribution, using the symbol $\mathcal{N}$ to represent it. The mixture probability is then written as:

    \[ p(x) = \sum_{k = 1}^{K}m_k\mathcal{N}(x|\mu_k, \sigma_k) \]

If we take the integral of both sides we get:

    \[ \sum_{k = 1}^{K} m_k = 1 \]


So what does it all mean? It basically means that the mixture coefficients $m_k$ must sum to 1, since each Gaussian already integrates to 1 and $p(x)$ has to as well.

If we introduce a latent vector z = {z1, …, zK}, with exactly one component equal to 1 and p(zk = 1) = mk, then, according to Bayes’ theorem, the marginal of x and the posterior responsibility of each component are:

    \[ p(x) = \sum_{z}p(x|z)p(z) = \sum_{k = 1}^{K}m_k\mathcal{N}(x|\mu_k, \sigma_k) \]


    \[ \rho(z_k) = p(z_k = 1 | x) = \frac{p(x|z_k = 1)p(z_k = 1)}{\sum_{j = 1}^{K}p(x|z_j=1)p(z_j = 1)} = \frac{p(x|z)p(z)}{\sum_{z}p(x|z)p(z)} \]

Thus we’re done with the Expectation part of the EM algorithm. But don’t forget about the Probability Density Function! When fitting data to a single Gaussian distribution, it is necessary to take the product of the PDFs of all the individual data points:

    \[ p(x_1, \ldots , x_N|\mu,\sigma) = \prod_{i = 1}^{N}p(x_i|\mu,\sigma) = \prod_{i = 1}^{N} \mathcal{N}(x_i|\mu, \sigma) \]

In the Maximization step of the EM algorithm we take the mixture coefficients to be fixed and solve for the Gaussian parameters $\mu_k, \sigma_k$. This step, along with the former step, is repeated as many times as necessary to proceed from an initial approximation to a final, much more accurate one.

We can generalize the EM algorithm to n dimensions like so:

    \[ \mathcal{N}(x | \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}exp\left[-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)\right] \]

Let’s put EM algorithm to test, shall we? Look at this figure:

 

Look at them contours! That’s what I want in a woman!

The mean positions of these distributions are (1, 1.5), (2, 5), (-2, 5) and their covariance matrices are:

    \[ \left( \begin{array}{ccc}2 & 0 \\0 & 0.4 \\\end{array} \right)\]

    \[ \left( \begin{array}{ccc}0.5 & 0 \\0 & 1.5 \\\end{array} \right)\]

    \[ \left( \begin{array}{ccc}1 & -0.5 \\-0.5 & 1 \\\end{array} \right)\]

The 200 points randomly extracted from each of these Gaussians overlap in nontrivial ways, thereby providing a reasonably complex task for the EM algorithm. Next, we move on to a more immediately useful situation: we segment the samples into a number of subareas. It takes a lot of iterations to get a low error rate, but a powerful computer can do it – depending on the data structure that’s holding the Gaussians and many other factors. In fact, training an EM algorithm takes a long time – to test all the contours of the classifiers… And overall, the separability of the data plays a large role which shan’t be ignored.
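Here’s a small sketch of that experiment using scikit-learn’s GaussianMixture, which runs EM under the hood; the means and covariances are the ones quoted above, the rest of the setup is my own:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)
    means = [(1, 1.5), (2, 5), (-2, 5)]
    covs = [np.array([[2.0, 0.0], [0.0, 0.4]]),
            np.array([[0.5, 0.0], [0.0, 1.5]]),
            np.array([[1.0, -0.5], [-0.5, 1.0]])]

    # 200 points from each Gaussian, as in the experiment described above.
    X = np.vstack([rng.multivariate_normal(m, c, 200) for m, c in zip(means, covs)])

    gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
    gmm.fit(X)                      # E and M steps are iterated internally until convergence
    print(gmm.means_)               # should land close to the three true means
    print(gmm.covariances_)
    labels = gmm.predict(X)         # hard assignment of each sample to a component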

Threshold… Sounds like a Thrash Metal band!

Above you see the effect of the EM algorithm in multilevel thresholding. The intensity histogram of image A is shown as a green trace in image C. The EM algorithm is used to obtain a GMM, as shown in red by the six Gaussians in C. All pixels contributing to the green trace between adjacent Gaussian crossings are assigned to the mean intensity of the intervening Gaussian and reinserted into the image, as in B. The fit to the cloud intensities is naturally relatively poor, but the other intensities are reasonably matched.

That is it for this short blog post! I hope you’ve enjoyed it. I care neither about quality, nor quantity: I just wanna learn. Perhaps 50% of the post is mine, and the rest is credited to the author, so keep that in mind.

Whilst you’re sipping your sherries, I will be smoking Kent Red and reading chapter 13 of Davies’ book: Classification algorithms. I will make a post about it, I promise!

Enjoy life, as if it’s the last day of it – Drinking this much sherry, it might as well be!

Oh boy, I’m on fire yo! Two posts in one day… To be fair, I wrote the last post yesterday. Anyhow, I don’t think anyone cares. But you reap what you sow. By the time I’m finished with this post my knowledge will be richer, and more robust. So what am I complaining about? As I said before I write to learn. So be it!

But first, let’s talk about one of my heroes, E. Roy Davies. He’s one of the forerunners of computer and machine vision, a subset of pattern recognition – one of the three things my profession is about. This post is based on a few chapters of his opus, Computer Vision: Principles, Algorithms, Applications, Learning.

Handsome Devil Behind It All

Alright. Enough sucking up to the masters of the field. Let’s talk about Artificial Neural Networks.

ANNs were launched in the 1950s, work on them continued well into the 60s, and research into them still continues to this day. Bledsoe and Browning developed the n-tuple type of classifier that involved bitwise recording and lookup of binary feature data, leading to the weightless or logical type of ANN. Rosenblatt’s perceptron, however, was more important than this algorithm. Let’s talk about this “perceptron”.

The simple perceptron is a linear classifier that classifies patterns into two classes. It takes a feature vector, $x = (x_1, x_2, \ldots, x_N)$, as its input and produces a single scalar output $\sum_{i = 1}^{N} w_i x_i$, the classification process being completed by applying a threshold function at $\theta$. The mathematics is simplified by writing $-\theta$ as $w_0$, and taking it to correspond to an input $x_0$ which is maintained at a constant value of unity. The output of the linear part of the classifier is then written in the form:

\[ d = \sum_{i = 1}^{N} w_i x_i - \theta = \sum_{i = 1}^{N} w_i x_i + w_0 = \sum_{i = 0}^{N} w_i x_i \]

and the final output of the classifier is given by:

\[ y = f(d) = f \left( \sum_{i = 0}^{N} w_i x_i \right) \]

This type of neuron – which, as we said before, is called a perceptron – can be trained using a variety of procedures, such as the fixed increment rule. The basic concept of this algorithm is to try to improve the overall error rate by moving the linear discriminant plane a fixed distance toward a position where no misclassification would occur – but only doing this when a classification error has occurred:

\[ w_i(k + 1) = w_i(k) \qquad \text{if } y(k) = \omega(k) \]

\[ w_i(k + 1) = w_i(k) + \eta[\omega(k) - y(k)]x_i(k) \qquad \text{if } y(k) \neq \omega(k) \]
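Here’s a small NumPy sketch of that fixed increment rule; the code and variable names are my own, they just mirror the formulas above:

    import numpy as np

    def train_perceptron(X, omega, eta=1.0, epochs=100):
        """Fixed increment perceptron training.

        X: samples, one per row; omega: desired outputs in {0, 1}.
        Returns the weight vector w, with w[0] playing the role of w_0 = -theta.
        """
        X = np.hstack([np.ones((X.shape[0], 1)), X])   # x_0 = 1 for the threshold term
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            errors = 0
            for x_k, omega_k in zip(X, omega):
                y_k = 1.0 if np.dot(w, x_k) >= 0 else 0.0   # Heaviside activation
                if y_k != omega_k:                           # only update on a mistake
                    w += eta * (omega_k - y_k) * x_k
                    errors += 1
            if errors == 0:              # linearly separable data: we can stop
                break
        return w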

Let me interpret all this mumbo jumbo and what it basically means. First, take a look at this image, courtesy of Towards Data Science:

 

 

So you can clearly see what happens. A bunch of inputs are given, each is multiplied by its weight, and then they are summed up. If this sum is less than a threshold value, it’s one binary choice; if it’s greater than the value, it’s the other. This thresholding is a Heaviside function.

The photo below shows the difference between separable and non-separable data. The hyperplane could do so much more if the data are separable!

 

 

So far, our perceptrons have been single-layered, meaning that only a single layer of weights – one hyperplane – can be learned. The other concept is the multilayer perceptron, or MLP. Rosenblatt himself suggested such networks, but was unable to work out how to train them. Ten years later, in 1969, Minsky and Papert published their famous monograph in which they discussed MLPs. It wasn’t until 1986 that Rumelhart et al. were successful in proposing a systematic approach to training MLPs. Their solution is known as the back-propagation algorithm.

Let’s talk about it.

The problem of training an MLP can be simply stated: a general layer of an MLP obtains its feature data from the lower layers and receives its class data from the higher layers. Hence, if all the weights in the MLP are potentially changeable, the information reaching a particular layer cannot be relied upon. There is no reason why training a layer in isolation would lead to overall convergence of the MLP toward an ideal classifier. Although it might be thought that this is a rather minor difficulty, in fact this is not so; indeed, this is but one example of the so-called credit assignment problem. What is this problem? Well, it’s correctly determining the local origins of global properties and making the right assignment of rewards, punishments, corrections, and so on.

The key to solving these problems was to modify the perceptrons composing the MLP by giving them a less hard activation function than the Heaviside thresholding function – that is, changing the Heaviside function to a sigmoid shape, such as the tanh function.

Once these softer activation functions were used, it became possible for each layer of the MLP to feel the data more precisely, and thus training procedures could be set up on a systematic basis. In particular, the rate of change of the data at each individual neuron could be communicated to other layers that could then be trained appropriately – though only on an incremental basis. I’m not going to bore you with the mathematical details, just some points regarding the algorithm:

1) The outputs of one node are the inputs of the next, and an arbitrary choice is made to label all variables as output (y) parameters rather than input (x) variables; all output parameters are in the range 0 to 1 (because of the sigmoid, duh!)

2) The class parameter \omega has been generalized as the target value t of the output variable y.

3) For all except the final outputs, the quantity $\delta_j$ has to be calculated using the formula $\delta_j = y_j (1 - y_j)\sum_{m}\delta_m w_{jm}$, the summation having to be taken over all the nodes in the layer above node j.

4) The sequence for computing the node weights involves starting with the output nodes and then proceeding downwards one layer at a time (see the sketch just after this list).
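Here’s the sketch I promised: a toy two-layer sigmoid MLP trained with back-propagation on XOR, written from the points above (my own code, not Davies’):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)      # target values (point 2)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)      # hidden layer, 4 nodes
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)      # output layer
    eta = 0.5

    for _ in range(20000):
        # Forward pass: every node's output y_j is a sigmoid of its weighted inputs (point 1).
        y1 = sigmoid(X @ W1 + b1)
        y2 = sigmoid(y1 @ W2 + b2)
        # Backward pass: output deltas first, then hidden deltas via
        # delta_j = y_j(1 - y_j) * sum_m delta_m w_jm (points 3 and 4).
        d2 = y2 * (1 - y2) * (t - y2)
        d1 = y1 * (1 - y1) * (d2 @ W2.T)
        # Weight updates, output layer first, then the layer below.
        W2 += eta * y1.T @ d2; b2 += eta * d2.sum(axis=0)
        W1 += eta * X.T @ d1;  b1 += eta * d1.sum(axis=0)

    print(np.round(y2.ravel(), 2))   # should approach [0, 1, 1, 0]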

In the figure below you can see the difference between Heaviside activator, linear activator, and Sigmoid activator.

I have prepared a short video for people who are visual learners. Let’s hope it helps them:

 

Ok. That’s it! I’m not sure what the next post is going to be about. But a little birdie tells me it’s going to be about Deep Learning! So if you really, really read my blog, look out for it!

What am I going to do? Well, first I’m going to a casino and use my X-Ray Auto Blackjack Aviator Specs to win some hands. Then I’m gonna read chapter 14 of Davies’ book. It’s a refresher on basic machine learning concepts. I hope I don’t fall asleep, I’ve been awake for 14 hours!

Hiyaaaa! It’s been almost a week. I know if I’m going to post my mundane mendacity, which all bring me much satiety, in form of a blog post in an arbitrary time pattern, it’ll take a machine learning algorithm to predict whenever I’m going to make one next, but wait! Let’s drop every mundane hackneyed thing I was gonna talk about and stick with machine learning!

I fell in love with machine learning a few years back, and I just recently bought one of the most intricate books in this subject, MIT Press’s Foundations of Machine Learning, written by two Persians and one Indian. Well, whilst they are waiting at the TSA checkpoints, let us not fret and make a series of posts on the damn book! It would be my privilege, nay, my honor! First episode: the PAC framework. I don’t mean to take credit for what I’ve not written myself, so you should know that almost everything is taken from this book. If you enjoy this post, please purchase this book. Of course it’s not the entire book, because that would be illegal, it’s just tidbits from the book. I did so because I can’t learn without a purpose, I can’t do passive learning in my brain, learning’s gotta be active – meaning I shan’t be content with learning something without doing something along with it, so I make blog posts about it! But I have made a visualization of PAC, in Cinema 4D, which you can see when we get to it! Well, at least I’m not a content pirate! (Yeah, Chubak, keep telling yourself that…).

Let the explanations commence!

When we are designing a machine learning algorithm, several fundamental questions shall occupy a healthy man’s mind, such as the efficiency of what can be learned, the fact that some things are inherently hard or easy to learn, how many examples are needed, whether there’s a general model for learning, and so on and so forth. In this episode, we begin by formalizing an answer to these questions, in the form of the PAC framework.

PAC stands for Probably Approximately Correct. The PAC framework helps define the class of learnable concepts in terms of number of sample points needed to achieve an approximate solution. Meaning that it introduces a sample complexity and the time and space complexity of the learning algorithm, all in one package. Imagine the normal attributes for a machine learning algorithm as some sort of a Chinese food, all in separate packages, and this so-called PAC as a nice Turkey sandwich you can take to work with.

Let’s first introduce several definitions and the notation needed to represent the PAC model, which we’ll also use in later posts as well.

We denote the set of all possible examples or instances as 𝒳, which is also sometimes referred to as the input space. The set of all possible labels or target values is denoted by ყ. In this episode ყ is limited to 0 and 1, which corresponds to the so-called binary classification.

𝒳 maps to ყ; x \rightarrow y. Let’s call this mapping a concept, or Շ. You can see how we denote it in equation 1-1. Since ყ = {0, 1}, we can identify Շ with the subset of 𝒳 over which it takes the value 1. A concept may be a triangle, or a rectangle, or a hexagon. You can see it more clearly in figure 1-1, but we’re not going to talk about figure 1-1 yet – I just said this to give you an idea.

Let’s assume that examples are i.i.d: independently and identically distributed according to some fixed but unknown distribution, 𝔇. The learning problem is then formulated as follows: The learner considers a fixed set of possible concepts, 𝓗, called a hypothesis set, which may or may not coincide with the concept Շ. It receives a sample S = (x1, …, xm) drawn i.i.d according to 𝔇, as well as the labels (Շ(x1), …, Շ(xm)), which are based on the specific target concept Շ. The task is then to use the labeled sample S to select a hypothesis hs that is a member of 𝓗 and has a small generalization error with respect to the concept Շ. The generalization error of a hypothesis h that is a member of 𝓗, also referred to as the risk or true error or simply the error of h, is denoted by R(h) and summed up in the equation below.

\[ R(h) = \underset{x \sim D}{P}[h(x) \neq c(x)] = \underset{x \sim D}{E}[1_{h(x) \neq c(x)}] \]

Let’s have a headcount. h(x) is the hypothesis of the input, Շ(x) is the concept of the input, 𝔇 is the distribution, and 1w is the indicator function of the event w. An indicator function is 1 when the event occurs, and 0 otherwise. P is probability, and E is the expected value. In the upcoming visualization, the marbles which fall into the container are 1 and the marbles which fall out are 0.

The generalization error of a hypothesis is not directly accessible to the learner since both distribution 𝔇 and the target concept Շ are unknown. However, the learner can measure the empirical error of a hypothesis on the labeled sample S. You can see this empirical risk, for sample S = (x1, … , xm), formalized in equation:

\[ \hat{R}_S(h) = \frac{1}{m} \sum_{i = 1}^{m}1_{h(x_i) \neq c(x_i)} . \]

Thus, the empirical error of h ∈ 𝓗 is its average error over the sample S, while the generalization error is its expected error based on the distribution 𝔇. We can already note that for a fixed h ∈ 𝓗, the expectation of the empirical error based on an i.i.d sample S is equal to the generalization error, as you can see it notarized in equation below. Remember that m is the maximum index of the sample – hence, size of the sample.

\[ \underset{S \sim D^m}{E} [\hat{R}_S(h) ] = R(h) \]

But what is this distribution? Distribution for a discretely-valued function like this is basically the list of all the probabilities for each outcome of 𝒳.

So let’s introduce PAC, or in other words, Probably Approximately Correct: Let n be a number such that the computational cost of representing any element, x ∈ 𝒳, is at most O(n), and denote by size(c) the maximal cost of the computational representation of c ∈ Շ. For example, 𝒳 may be a vector in Rn, for which the cost of an array-based representation would be in O(n). In addition, let hs denote the hypothesis returned by algorithm 𝒜 after receiving a labeled sample S. To keep notation simple, the dependency of hs on 𝒜 is not explicitly indicated.

So, a concept class Շ is said to be PAC-learnable if there exists an algorithm 𝒜 and a polynomial function poly(.,.,.,.) such that for any ε > 0 and 𝛿 > 0, for all distributions 𝔇 on 𝒳 and for any target concept c ∈ Շ, the following holds true for any sample size m ≥ poly(1/ε, 1/𝛿, n, size(c)):

\[ \underset{S \sim D^m} {P} [R(h_S) \leq \epsilon] \geq 1 - \delta \]

When such an algorithm 𝒜 exists, it’s said that Շ has a PAC-learning algorithm.

A concept class Շ is thus PAC-learnable if the hypothesis returned by the algorithm 𝒜 after observing a number of points polynomial in 1/ε and 1/𝛿 is approximately correct – meaning the error is at most ε – with high probability (at least 1 – 𝛿), which justifies the PAC terminology. The parameter 𝛿 > 0 is used to define the confidence 1 – 𝛿 and ε > 0 the accuracy 1 – ε. So the error must be at most ε, and this must hold with probability at least 1 – 𝛿. Note that if the running time of the algorithm is polynomial in 1/ε and 1/𝛿, then the sample size m must also be polynomial if the full sample is received by the algorithm.

As you might have noticed, there are two things in PAC definition which correspond with its name. The probability, and the error rate. The first one is for the P in PAC, the second one is for the AC in PAC.

Several key points of the PAC definition are worth emphasizing. First, the PAC framework is a distribution-free model: no particular assumption is made about the distribution 𝔇 from which examples are drawn. Second, the training sample and the test examples used to define the error are drawn according to the same distribution 𝔇. This is a natural and necessary assumption for generalization to be possible in general. It can be relaxed to include favorable domain adaptation problems. Finally, the PAC framework deals with the question of learnability for a concept class Շ and not a particular concept. Note that the concept class Շ is known to the algorithm 𝒜, but of course the target concept c ∈ Շ is unknown.

Now let’s take a look at this visualization I have cooked up in Cinema 4D:

 

 

Figure 1-1

Let me explain how it works. The marbles are the samples. We strain the samples, and the ones that land inside the shared container between 𝓗 and Շ are the expectation of PAC, the correct part. The ones that fall out are the error, the probably part. And the ones that fall in the containers that aren’t shared are the approximate part.

Sometimes, the hypothesis hS returned by the algorithm is consistent, that is, it admits no error on the training sample S. In this part of the blog post, we present a general sample complexity bound, or equivalently, a generalization bound, for consistent hypotheses, in the case where the cardinality |𝓗| of the hypothesis set is finite. Since we consider consistent hypotheses, we will assume that the target concept c is in 𝓗.

So, let 𝓗 be a finite set of functions mapping from 𝒳 to ყ, and let 𝒜 be an algorithm that for any target concept c ∈ 𝓗 and i.i.d. sample S returns a consistent hypothesis hS, i.e., one with R̂S(hS) = 0. Then for any ε, 𝛿 > 0, the inequality:

\[ \underset{S \sim D^m} {P} [R(h_S) \leq \epsilon] \geq 1 - \delta \]

holds if:

\[ m \geq \frac{1}{\epsilon} \left(log|H| + log \frac{1}{\delta}\right) \]

The price to pay for coming up with a consistent algorithm is the use of a larger hypothesis set 𝓗 containing the target concepts. Of course, the upper bound increases with |𝓗|, the cardinal rule of PAC. However, that dependency is only logarithmic. Note that the term log|𝓗|, or the related term log2|𝓗| from which it differs by a constant factor, can be interpreted as the number of bits needed to represent 𝓗. Thus the generalization guarantee of the theorem is controlled by the ratio of this number of bits, log2|𝓗|, and the sample size m.
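To make the bound concrete, a quick back-of-the-envelope computation with made-up numbers:

    import math

    def sample_bound(H_size, eps, delta):
        """Sample size sufficient for a consistent learner: m >= (1/eps)(ln|H| + ln(1/delta))."""
        return math.ceil((math.log(H_size) + math.log(1.0 / delta)) / eps)

    # |H| = 2^20 hypotheses, accuracy eps = 0.05, confidence delta = 0.01
    print(sample_bound(2 ** 20, 0.05, 0.01))   # around 370 examples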

Along with consistent hypotheses, we also have inconsistent hypotheses. Some argue that they aren’t useful, but dare I say that they may be? I’m not sure.

Let’s talk about the Bayes error. Given a distribution 𝔇 over 𝒳 × ყ, the Bayes error R* is defined as:

\[ R^\star = \underset{h\ \text{measurable}}{\inf}\, R(h) \]

Finally, let’s talk about noise. The noise at a point x is the minimum of the conditional probabilities of the two labels of ყ given x.

Well, that’s it for today! My next blog post is going to be about something completely unrelated to machine learning, yes, another signal processing post! Keep in mind that I write to learn, but if I appease you, so be it!

If you have any questions, ask, and I will try my best to answer. If you see any errors in this post, tell me and I’ll add it to the addendum just below this very line. Thank you for reading my blog post!