When I started this blog, it was about graphics, but now I mostly post ML and DSP stuff and I’m really sad about it. I haven’t touched graphics programming in 2 months, and that’s something which I would like to classify as bad (who says that?). But wait! It can still be about graphics programming, I just have to do graphics stuff that has ML in it, and not the other way around, because I want ML to be my de jure specialty.

So what do you wanna learn today, Chubak? “Something graphics-related.” But what? “Um… I wanna learn about something graphics-related that’s also about probability, because it’ll help me ease into my life as an ML student.” So Chubak, learn about Monte Carlo Integration! “I can’t play blackjack! I mean, I once coded a blackjack game, but I’ve forgotten all about the rules!” Oh Chubakabra, you’re so unfunny I could kill you! Seriously!

So I have this book, Computer Graphics: Principles and Practice. I love it. It’s written by a slew of well-to-do programmers who would eat me just by looking at me (if I were a girl, that would mean something different). I will quote and paraphrase this book to learn. Let’s learn together.

 

We have all learned about numerical methods in our high school math class. Methods such as integration, interpolation, sequences, and so on. There are two types of numerical methods: Deterministic, and randomized.

A typical deterministic method for integrating a function f over an interval [a, b] is to take n + 1 equally spaced points in the interval, t_0 = a, t_1 = a + \frac{b - a }{n}, \ldots, t_n = b, evaluate f at the midpoint \frac{t_i + t_{i + 1}}{2} of each of the intervals these points define, sum up the results, and multiply by \frac{b - a}{n}, the width of each interval. For sufficiently continuous functions f, as n increases this will converge to the correct value.
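Here is a minimal C++ sketch of that midpoint rule, under naming of my own (midpoint_integrate is not from the book); f can be any callable:

#include <functional>

// Midpoint rule: split [a, b] into n equal intervals, evaluate f at the
// midpoint of each, sum the results, and multiply by the interval width.
double midpoint_integrate(const std::function<double(double)>& f,
                          double a, double b, int n)
{
    const double width = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * width);   // midpoint of the i-th interval
    return sum * width;
}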

A probabilistic, or Monte Carlo, method for computing, or more precisely, for estimating the integral of f over the interval [a, b] is this: let X_1, \ldots, X_n be randomly chosen points in the interval [a, b]. Then Y = (b - a) \frac{1}{n}\sum_{i = 1}^{n}f(X_i) is a random value whose average is the integral \int_{[a,b]}f. To implement this, we use a random number generator to generate n points in the interval [a, b], evaluate f at each, average the results, and multiply by b - a. This gives an estimate of the integral; as shown in the figure below, estimating \int_{-1}^{1}\sqrt{1 - x^2} dx with 20 samples approximates the correct result, which is \frac{\pi}{2}.
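A small C++ sketch of that estimator, using the same integrand as the figure (the sample count and the seed are arbitrary choices of mine):

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double a = -1.0, b = 1.0;
    const int n = 20;                        // sample count, as in the figure

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> dist(a, b);

    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double x = dist(rng);          // X_i, uniform on [a, b]
        sum += std::sqrt(1.0 - x * x);       // f(X_i)
    }

    // Y = (b - a) * (1/n) * sum of f(X_i): an estimate of the integral.
    const double estimate = (b - a) * sum / n;
    std::printf("estimate = %f, exact = %f\n", estimate, 1.5707963267948966);
    return 0;
}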

 

Of course, each time we compute such an estimate we’ll get a different value (duh!). The variance of these values depends on the shape of the function f. If we generate the random points x_i nonuniformly, we must slightly alter the formula. But in using a nonuniform distribution of points we gain an enormous advantage: by making our nonuniform distribution favor points x_i where f(x) is large, we can drastically reduce the variance of our estimate. This nonuniform sampling approach is called importance sampling, as you see before you.
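To make that concrete, here is a toy importance-sampling sketch of my own (not from the book): it estimates \int_0^1 x^2 dx = \frac{1}{3} by drawing samples from the density p(x) = 2x, which favors the region where the integrand is large, and averaging f(X)/p(X):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform01(0.0, 1.0);

    const int n = 10000;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double u = std::max(uniform01(rng), 1e-12); // avoid u == 0
        const double x = std::sqrt(u);   // X ~ p(x) = 2x, via the inverse CDF
        const double f = x * x;          // integrand
        const double p = 2.0 * x;        // density the sample was drawn from
        sum += f / p;                    // weighted sample
    }
    std::printf("estimate = %f (exact = 1/3)\n", sum / n);
    return 0;
}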

 

Because the major shift in rendering methods over the past several decades was from deterministic to randomized, we’ll be discussing randomized approaches to solving the rendering equation. To do so means using random variables, expectation, and variance. We start with discrete values, because computers are inherently discrete; continuous values and probability density functions come later. For now, we discuss the probability mass function. A PMF has two properties:

1- For every s \in S we have p(s) \geq 0.

2- \sum_{s \in S}p(s) = 1

The first property is called non-negativity. The second one is called normality. The intuition is that S represents a set of outcomes of some experiment, and p(s) is the probability of outcome s, a member of S. An event is a subset of a probability space. The probability of an event is the sum of the PMF over the elements of that event:

    \[ Pr\{E\} = \sum_{s \in E} p(s) \]

A random variable is a function, usually denoted by a capital letter, from a probability space to the real numbers:

    \[ X: S \rightarrow \mathbb{R}. \]

Keep in mind that the function X is not a variable; it’s a real-valued function. It’s not random either: X(s) is a single real number for any outcome s \in S.

A random variable is used to define events. For example, take the set of outcomes s for which X(s) = 1, where ht and th are strings representing the outcomes of two coin flips (heads then tails, and tails then heads). Then

    \[ E = \{s \in S : X(s) = 1\} \]

and

    \[ = \{ht, th\} \]

is an event with probability \frac{1}{2}. We write this as Pr\{X=1\} = \frac{1}{2}; we use the predicate X=1 as a shorthand for the event defined by that predicate.

Let’s take a look at a piece of code for simulating the experiment represented by the formalities above:

 

headcount = 0
if (ranb()): // first coin flip
    headcount++
if (ranb()): // second coin flip
    headcount++
return headcount

Here we assume that ranb() is a Boolean function that returns true half the time. How is it related to our abstraction? Well, imagine the set S of all possible executions of the program, declaring two executions to be the same if the values returned by ranb are pairwise identical. This means that there are four possible program executions, in which the two ranb() invocations return TT, TF, FT, and FF. Our belief and experience is that these four executions are equally likely, that is, each occurs just about one quarter of the time.

Now the analogy becomes clearer. The set of possible program executions, with associated probabilities, is a probability space. The variables in the program that depend on the ranb invocations are random variables. I hope it’s all clear now.

Let’s talk about the expected value, also called the mean. It’s basically the sum, over all outcomes, of the PMF times the value of the random variable:

    \[ E[X] = \sum_{s\in S} p(s)X(s) \]

Imagine h is heads and t is tails. We saw ht and th. There’s also hh and tt. So the expected value of it would be:

    \[ E[X] = p(hh)X(hh) + p(ht)X(ht) + p(th)X(th) + p(tt)X(tt) \]

    \[ = \frac{1}{4}\cdot 2 + \frac{1}{4}\cdot 1 + \frac{1}{4}\cdot 1 + \frac{1}{4}\cdot 0 \]

    \[ = 1 \quad \text{QED} \]

You may wonder where we got the Xs from. Well, something I meant to tell you is that X is ours to define: we assign its values ourselves. Here, we have assigned 1 to h and 0 to t, so X counts heads. X(hh) is 2 because it has two hs.

Let’s talk about distribution. Probability distribution is a function that provides the probabilities of occurrences of different possible outcomes of an event (this is the Wikipedia wording because it’s 7AM and I’m toast and I can’t word it myself).

When we say a random variable X has the distribution f, we denote this X \sim f.

The dispersion of values around the mean of X is measured by its variance, which is defined as:

    \[ \boldsymbol{Var}[X] = E[(X - \bar{X})^2] \]

Where \bar{X} is the mean, or average of X. I won’t insult your intelligence by defining the average. Geez!

\sqrt{\boldsymbol{Var}[X]} is called the standard deviation. The random variables X and Y are called independent if:

    \[ Pr\{X = x \text{ and } Y = y\} = Pr\{X = x\}\cdot Pr\{Y = y\} \]

The important properties of independent random variables are:

1)

    \[ E[XY] = E[X]E[Y] \]

2)

    \[ \boldsymbol{Var}[X + Y] = \boldsymbol{Var}[X] + \boldsymbol{Var}[Y] \]

When I started talking about probability, I mentioned continuous probability vs. discrete probability. We covered discrete probability. Now let’s cover how continuous probability differs from discrete probability:

1) Values are continuous, meaning there are infinitely many possible outcomes.

2) Certain aspects of the analysis involve mathematical subtleties like measurability.

3) Our probability space will be infinite, so we must use a probability density function (PDF) instead of a PMF.

PDF properties are:

1- For every s \in S we have p(s) \geq 0.

2- \int_{s \in S}p(s)\,ds = 1

But if the distribution on S = [a, b] is uniform, the PDF is defined as:

    \[ p(x) = \left\{ \begin{array}{lr}\frac{1}{b-a} & \text{ for } a \leq x \leq b \\ 0 & \text{ for } x < a \text{ or } x > b \end{array}\right. \]

In continuous probability, E[X] is defined as:

    \[ E[X] := \int_{s \in S} p(s)X(s)\,ds \]

Now let’s compare the definitions of PMF and PDF:

    \[ \text{PMF} \rightarrow p_Y(t) = Pr\{Y = t\} \text{ for } t \in T \]

    \[ \text{PDF} \rightarrow Pr\{a\leq X \leq b\} = \int_a^b p(r)\,dr \]

In continuous probability, random variables are better called random points. If S is a probability space and Y : S \rightarrow T is a mapping to a space other than \mathbb{R}, then we call Y a random point rather than a random variable. The notion of probability density still applies: for any U \subset T we have:

    \[ Pr\{Y\in U\} = \int_{u \in U} p_Y(u)\,du \]

Now let’s apply what we’ve learned to the sphere. A point on a sphere can be described by its longitude and its colatitude (latitude measured from the pole). We map the unit square [0, 1] \times [0, 1] of 2D Cartesian coordinates onto the unit sphere S^2, with the parametrization:

    \[ Y : [0, 1] \times [0, 1] \rightarrow S^2 : (u, v) \rightarrow (\cos(2\pi u)\sin(\pi v), \cos(\pi v), \sin(2\pi u)\sin(\pi v)) \]

We’ll start with the uniform probability density p on [0, 1] \times [0, 1], that is, p(u, v) = 1. Scroll up to see the formula for the uniform probability density. For notational convenience, we’ll write (x, y, z) = Y(u, v).

We have the intuitive sense that if we pick points uniformly at random in the unit square and use Y to convert them to points on the unit sphere, they’ll be clustered near the poles. This means that the induced probability density on T will not be uniform. The image below shows this.
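Here is a quick numerical sketch of my own that shows the clustering: push uniform (u, v) samples through Y and count how many land within 30 degrees of a pole. The two polar caps cover only about 13.4% of the sphere’s area, yet they receive roughly a third of the samples:

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double PI = 3.14159265358979323846;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform01(0.0, 1.0);

    const int n = 1000000;
    int near_poles = 0;
    for (int i = 0; i < n; ++i) {
        const double u = uniform01(rng);
        const double v = uniform01(rng);
        // The map Y from the unit square to the unit sphere:
        const double x = std::cos(2.0 * PI * u) * std::sin(PI * v);
        const double y = std::cos(PI * v);
        const double z = std::sin(2.0 * PI * u) * std::sin(PI * v);
        (void)x; (void)z;                       // only |y| matters below
        if (std::fabs(y) > std::cos(PI / 6.0))  // within 30 degrees of a pole
            ++near_poles;
    }
    std::printf("fraction near the poles: %f (uniform area would give ~0.134)\n",
                (double)near_poles / n);
    return 0;
}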

We’ll now talk about ways to estimate the expected value of a continuous random variable, and use these to evaluate integrals. This is important because in rendering, we’re going to want to evaluate the reflectance integral:

    \[ L^{ref}(P, \omega_o) = \int_{\omega_i \in S_{+}^{2}}L(P, - \omega_i)\,f_s(P,\omega_i,\omega_o)\,\omega_i \cdot \boldsymbol{n}\,d\omega_i, \]

for various values of P and \omega_o, where \omega_i is the direction of incoming light. Code that generates a random number uniformly distributed on [0, 1] and takes its square root produces a value between 0 and 1, and its expected value is \frac{2}{3}. This also happens to be the average value of f(x) = \sqrt{x} on that interval. What does it all mean?

Let’s take a look at Theorem 3.48 of the book. It states that if f : [a, b] \rightarrow \mathbb{R} is a real-valued function, and X \sim \boldsymbol{U}(a, b) is a uniform random variable on the interval [a, b], then (b-a)f(X) is a random variable whose expected value is:

    \[ E[(b-a)f(X)] = \int_a^b f(x)\,dx . \]

What does it say? It says that we can use a randomized algorithm to compute the value of an integral, at least if we run the code often enough and average the results.
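As a sanity check, here is a tiny sketch of my own that does exactly that for f(x) = \sqrt{x} on [0, 1]:

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double a = 0.0, b = 1.0;
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> dist(a, b);

    const int runs = 100000;
    double sum = 0.0;
    for (int i = 0; i < runs; ++i)
        sum += (b - a) * std::sqrt(dist(rng));  // (b - a) f(X), X ~ U(a, b)

    // The average approaches the integral of sqrt(x) over [0, 1], i.e. 2/3.
    std::printf("average = %f (expected about 0.6667)\n", sum / runs);
    return 0;
}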

In general, we’ve got some quantity C, like the integral we saw above, that we’d like to evaluate, and some randomized algorithm that produces an estimate of C. We call such a random variable an estimator for the quantity. The estimator is unbiased if its expected value is actually C. Generally, unbiased estimators are to be preferred over biased ones.

We talked about discrete and continuous probabilities. There’s a third kind, which we shall call mixed probabilities, that comes up in rendering. They arise from the impulses in bidirectional scattering distribution functions, which we shall talk about one day (perhaps, if I’m alive tomorrow), or from the impulses caused by point lights. These are probabilities that are defined on a continuum, like the interval [0, 1], but are not defined strictly by a PDF. Consider the following program:

if uniform(0, 1) < 0.6 :
    return 0.3
else :
    return uniform(0, 1)

Sixty percent of the time this program reports 0.3; the other 40% of the time it returns a value uniformly distributed on [0, 1]. The return value is a random variable that has a probability mass of 0.6 at 0.3, and a PDF given by d(x) = 0.4 at all other points. We shall define the PDF as:

    \[ d(x) = \left\{ \begin{array}{lr}0.4 & \text{ for } x \neq 0.3 \\ 0.6\times\infty & \text{ for } x = 0.3 \end{array}\right. \]

Overall, a random variable with mixed probability is one for which there is a finite set of points in the domain where probability mass is concentrated (and the PDF is undefined), while everywhere else the distribution is described by a PDF.

Wow that was a long post! But I’ve learned so much. This blog has four focus subjects, and that’s a lot. But it’s the summer, and until the universities open, I have a lot of free time on my hands. So I’ll try to post um, about 1~2 blog posts about graphics programming a week.

Did I mention that I have a Fiverr gig now? Well, I do! Here’s the link. Enjoy your day, while I ponder what I shall learn next!

Credits

Since giving credits is more important than anything else, I will do that first. This article is partially based on two papers: A Survey of Procedural Noise Functions by A. Lagae et al. and Efficient Computational Noise in GLSL by McEwan et al. Codes are sto-, ahem, taken from this gist, and the table at the beginning of the post is taken from this article. Quotes from the papers are in blue, and are attributed using footnotes. The cover photo is taken directly from A Survey et al.

What is Perlin Noise?

Perlin Noise was created by Ken Perlin (who won the Academy Award for his achievement) in 1983. See, back then, photorealism was something to be desired by everyone, but people always came up short. Ken Perlin came up with his noise algorithm to battle this wretched “computer-looking” appearance of 3D models. Noise is the random number generator of computer graphics. It is a random and unstructured pattern, and is useful wherever there is a need for a source of extensive detail that is nevertheless lacking in evident structure.1 Perlin noise is a multi-dimensional algorithm used in procedural generation, textures, terrain generation, map generation, surface generation, vertex generation, and so on and so forth. The following table 2 shows the extent of Perlin noise:

Dimension | Raw Noise (Grayscale) | Uses
1 | (image) | Things such as vectors which look hand-drawn
2 | (image) | Things such as procedural textures, like fire
3 | (image) | Minecraft terrain uses Perlin Noise

Perlin gave the following definition for noise: “…noise is an approximation to white noise band-limited to a single octave.” 3 The formal definition of the noise is:

f_N(y_1, y_2, \cdots, y_n; x_1, x_2, \cdots, x_n) = P(N(x_1), N(x_2), \cdots, N(x_n))

Where N(x) is the noise function. Perlin noise is a procedural noise. The adjective procedural is used in computer science to distinguish entities that are described by program code rather than by data structures. 4 Meaning, something that is generated, not something that is hard-coded. What’s good about using a procedural noise, rather than, say, creating the noise ourselves? Well, a procedural noise is compact, meaning it occupies less space. It’s continuous, and it’s non-periodic. It’s parametrized, it’s randomly accessible, and it has many other goodies that make the job of an artist easier… And isn’t that our end goal? To serve the artist? Yesterday, I made an After Effects plugin, which you can see below this post. I didn’t have the artist in mind, I had my own ego and id in mind, so nobody downloaded it. I learned my lesson: serve the artist, not yourself. Anyways…

 

Before Perlin came lattice gradient noises. They are generated by interpolating between random values; Perlin noise uses a cubic lattice, computing a pseudo-random gradient at each vertex and then doing a splined interpolation. The pseudo-random gradient is given by hashing the lattice point and using the result to choose a gradient. 5 These hashes are turned into 12 vectors, and they are interpolated from center to edge using a quintic polynomial. A little difficult to imagine, right? No worries. We’ll provide pictures 6 and pseudo-code 7.

“… they are interpolated from center to edge using a quintic polynomial. “
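For reference, that quintic is commonly written as 6t^5 - 15t^4 + 10t^3 (the fade curve from Perlin’s 2002 improved noise); a minimal C++ sketch of it, since the pseudo-code below simply lerps and the GLSL snippets further down use a cosine blend instead:

// Quintic "fade" curve used to ease interpolation from the center of a cell
// toward its edges; it has zero first and second derivatives at t = 0 and t = 1.
float fade(float t) {
    return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f);  // 6t^5 - 15t^4 + 10t^3
}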

And a pseudo-code for Classic Perlin, sans the hash function:

 

// Function to linearly interpolate between a0 and a1
// Weight w should be in the range [0.0, 1.0]
float lerp(float a0, float a1, float w) {
    return (1.0 - w)*a0 + w*a1;

    // as an alternative, this slightly faster equivalent formula can be used:
    // return a0 + w*(a1 - a0);
}

// Computes the dot product of the distance and gradient vectors.
float dotGridGradient(int ix, int iy, float x, float y) {

    // Precomputed (or otherwise) gradient vectors at each grid node
    extern float Gradient[IYMAX][IXMAX][2];

    // Compute the distance vector
    float dx = x - (float)ix;
    float dy = y - (float)iy;

    // Compute the dot-product
    return (dx*Gradient[iy][ix][0] + dy*Gradient[iy][ix][1]);
}

// Compute Perlin noise at coordinates x, y
float perlin(float x, float y) {

    // Determine grid cell coordinates
    int x0 = int(x);
    int x1 = x0 + 1;
    int y0 = int(y);
    int y1 = y0 + 1;

    // Determine interpolation weights
    // Could also use higher order polynomial/s-curve here
    float sx = x - (float)x0;
    float sy = y - (float)y0;

    // Interpolate between grid point gradients
    float n0, n1, ix0, ix1, value;
    n0 = dotGridGradient(x0, y0, x, y);
    n1 = dotGridGradient(x1, y0, x, y);
    ix0 = lerp(n0, n1, sx);
    n0 = dotGridGradient(x0, y1, x, y);
    n1 = dotGridGradient(x1, y1, x, y);
    ix1 = lerp(n0, n1, sx);
    value = lerp(ix0, ix1, sy);

    return value;
}

It is worth knowing that “… All noise functions except Perlin noise and sparse convolution noise are approximately band-pass. Perlin noise is only weakly band-pass, which might lead to problems with aliasing and detail loss.” 8 Plus, Perlin noise does not have a Gaussian amplitude distribution, meaning the speckles of noise don’t scatter based on a Gaussian function, which is outside the scope of this article. There are many things that Perlin fares well at, but there are things which it performs weakly at. In the following table 9 you can witness this for yourself.

 



Perlin Noise in Practice: GLSL Implementations

Well, I said I will talk about Perlin noise in GLSL, and now I’m going to do it.

You can use Perlin noise as a wave, as a diffuse color, as a diffuse material, as a flickering light, and as some speckles on your texture. I personally used it in this instance as a color fizzler.

 

As I’m writing this, I’m contemplating an After Effects plugin which adds Perlin Noise functionality.

The simplest Perlin noise can be constructed 10 as:

float rand(vec2 c){
	return fract(sin(dot(c.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

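// Note: this snippet assumes PI and screenWidth are defined elsewhere in the
// shader, e.g. "#define PI 3.14159265358979" and "uniform float screenWidth;".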
float noise(vec2 p, float freq ){
	float unit = screenWidth/freq;
	vec2 ij = floor(p/unit);
	vec2 xy = mod(p,unit)/unit;
	//xy = 3.*xy*xy-2.*xy*xy*xy;
	xy = .5*(1.-cos(PI*xy));
	float a = rand((ij+vec2(0.,0.)));
	float b = rand((ij+vec2(1.,0.)));
	float c = rand((ij+vec2(0.,1.)));
	float d = rand((ij+vec2(1.,1.)));
	float x1 = mix(a, b, xy.x);
	float x2 = mix(c, d, xy.x);
	return mix(x1, x2, xy.y);
}

float pNoise(vec2 p, int res){
	float persistance = .5;
	float n = 0.;
	float normK = 0.;
	float f = 4.;
	float amp = 1.;
	int iCount = 0;
	for (int i = 0; i<50; i++){
		n+=amp*noise(p, f);
		f*=2.;
		normK+=amp;
		amp*=persistance;
		if (iCount == res) break;
		iCount++;
	}
	float nf = n/normK;
	return nf*nf*nf*nf;
}
#define M_PI 3.14159265358979323846

float rand(vec2 co){return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);}
float rand (vec2 co, float l) {return rand(vec2(rand(co), l));}
float rand (vec2 co, float l, float t) {return rand(vec2(rand(co, l), t));}

float perlin(vec2 p, float dim, float time) {
	vec2 pos = floor(p * dim);
	vec2 posx = pos + vec2(1.0, 0.0);
	vec2 posy = pos + vec2(0.0, 1.0);
	vec2 posxy = pos + vec2(1.0);
	
	float c = rand(pos, dim, time);
	float cx = rand(posx, dim, time);
	float cy = rand(posy, dim, time);
	float cxy = rand(posxy, dim, time);
	
	vec2 d = fract(p * dim);
	d = -0.5 * cos(d * M_PI) + 0.5;
	
	float ccx = mix(c, cx, d.x);
	float cycxy = mix(cy, cxy, d.x);
	float center = mix(ccx, cycxy, d.y);
	
	return center * 2.0 - 1.0;
}

// p must be normalized!
float perlin(vec2 p, float dim) {
	
	/*vec2 pos = floor(p * dim);
	vec2 posx = pos + vec2(1.0, 0.0);
	vec2 posy = pos + vec2(0.0, 1.0);
	vec2 posxy = pos + vec2(1.0);
	
	// For exclusively black/white noise
	/*float c = step(rand(pos, dim), 0.5);
	float cx = step(rand(posx, dim), 0.5);
	float cy = step(rand(posy, dim), 0.5);
	float cxy = step(rand(posxy, dim), 0.5);*/
	
	/*float c = rand(pos, dim);
	float cx = rand(posx, dim);
	float cy = rand(posy, dim);
	float cxy = rand(posxy, dim);
	
	vec2 d = fract(p * dim);
	d = -0.5 * cos(d * M_PI) + 0.5;
	
	float ccx = mix(c, cx, d.x);
	float cycxy = mix(cy, cxy, d.x);
	float center = mix(ccx, cycxy, d.y);
	
	return center * 2.0 - 1.0;*/
	return perlin(p, dim, 0.0);
}

That is, however, a version of Perlin which was reconstructed in 2002. Visit the Gist to see how to apply Classic Perlin Noise.

Well, that is it for today! Short post, I know, and it was lacking in original content, but I am running out of ideas as of now, because I haven’t read Real-Time Rendering yet. It’s a book that’s chock-full of concepts and ideas for learning, and teaching. I love this damn book!

 

Another book I’m currently reading is Fundamentals of Computer Graphics. I got a little stuck reading about implicit curves, but I’m going to ask my cousin, who holds a PhD in mathematics, to help me with it. I hope he does, because he has a tendency to forgo my requests for intellectual assistance. Dunno, really. Aren’t cousins supposed to help each other? I will write him a visualization app in a minute if he asks… By the way.

 

Let me tell you something. Please introduce my blog to your friends, because I want more visitors, and I want to reach more people. If they won’t visit my blog, tell them it’s as splendid as a queen who is virgo intacta and desires their company!

Dammit, I suck at French (I originally wrote that paragraph in French); at least I didn’t use Google Translate. Chubak Out.

Reference

  • A. Lagae et al., “A Survey of Procedural Noise Functions,” Computer Graphics Forum, Volume 29 (2010), Number 8, pp. 2379-2600.
  • McEwan et al., “Efficient Computational Noise in GLSL,” Journal of Graphics Tools, Volume 16, No. 2, pp. 85-94.

Good Evening everyone! I hope you’re all doing A-Ok. Yesterday I received a PM:

Hey. Do you know anything about game rendering? I’ve seen your blog you must know something about game rendering.

I think this person is referring to the graphics pipeline, which different APIs dub the “rendering pipeline”. “The main function of the pipeline is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, and more”. 1 Now, SIMD architecture with its many cores makes parallel calculations possible, and it is de facto the main reason we have GPUs today. The graphics pipeline is made up of shaders. Shaders are parallel programs that live in the GPU. OpenGL has five shader stages, and about seven application stages in its pipeline 2. But what are the stages of this pipeline? What does pipeline even mean? Read this article and find out.

The History of the Word “Pipeline”

The word pipeline became popular when gas lamps were invented. “The first Canadian transmission pipeline was built in 1853. A 25 kilometre cast-iron pipe moving natural gas to Trois Rivières, QC. It was the longest pipeline in the world at the time.” 3

 

Source in Footnotes

What does pipeline mean? Simple. Imagine you wish to transmit something from one stage, to another stage, then another stage, until the output is unrecognizable from the input. For example, in OpenGL, this can be the input:

 

float vertices[] = {
    // first triangle
     0.5f,  0.5f, 0.0f,  // top right
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f,  0.5f, 0.0f,  // top left 
    // second triangle
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f, -0.5f, 0.0f,  // bottom left
    -0.5f,  0.5f, 0.0f   // top left
}; 

And this can be the output:

 

Photo and code courtesy of Joey De Vries at Learnopengl.com

Some stages of the OpenGL are mandatory, some stages are optional. You can see this pipeline in the following picture.

 

By Yours Truly

Based on two books, the OpenGL Red Book 9th Edition and Real-Time Rendering 4th Edition, I wish to explain to you this so-called pipeline. Turn off your phones, and pay attention!

1. Vertex Data

We saw an example of vertex attributes in the first code block in this very post. In OpenGL, vertices are recorded in the homogeneous coordinate system, a four-component coordinate system. I’ll talk about this coordinate system in another post, very soon. Keep in mind that vertices can be hardcoded in the shader, but most of the time they are passed to the vertex shader as a Vertex Buffer Object.
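As a rough sketch (the variable names are mine), uploading the vertices[] array from the top of this post as a VBO and describing its layout looks something like this:

// Upload the vertex data into a buffer and describe its layout (attribute 0,
// three tightly packed floats per vertex) for the vertex shader.
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);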

2. Vertex Shader

Pièce de résistance, the first shader we deal with, and perhaps the most important shader. In the vertex shader, we access the data given to us through Vertex Buffer Objects in the main program using attributes, and uniforms passed to the shader are indexed using binding indices. What are uniforms, you may ask? Well, each vertex gets its own invocation of the vertex shader, and each invocation sees quite different attribute values; a uniform, however, is constant across all of them. Each shader stage declares its own uniforms, and you can’t pass a uniform from one shader to another. However, other variables can be passed from one shader to the next. Variables qualified by the keyword in are inputs to the shader, and variables qualified by out are passed to the next shader.
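For example, this is roughly how the application side might upload a matrix uniform (shader_program and mvp are placeholder names of mine):

// Look up the uniform by name and upload 16 floats as a 4x4 matrix.
GLint loc = glGetUniformLocation(shader_program, "mv_matrix");
glUseProgram(shader_program);
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);  // mvp points to 16 floats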

In the vertex shader, the outgoing gl_Position is typically computed by multiplying the vertex position by three matrices. One is the model matrix, the other is the view matrix, and at the end we have the projection matrix. You can clearly see them in the following picture:

Courtesy of Joey De Vries at Learnopengl.com

The model matrix places the object in the world, the view matrix transforms the world into the camera’s view, and the projection matrix projects that view onto the screen. One day, I’ll talk in detail about various camera projections.
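Putting that picture into a single line (column-vector convention, matrices applied right to left to the local-space position v):

gl_Position = M_{projection} \cdot M_{view} \cdot M_{model} \cdot v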

 

3. Tessellation Control Shader

“Imagine you have a bouncing ball object. If you represent it with a single set of triangles, you can run into problems with quality or performance. Your ball may look good from 5 meters away, but up close the individual triangles, especially along the silhouette, become visible. If you make the ball with more triangles to improve quality, you may waste considerable processing time and memory when the ball is far away and covers only a few pixels on the screen. With tessellation, a curved surface can be generated with an appropriate number of triangles.” 4

Tessellation is basically the process of breaking higher-order primitives into many small sub-primitives in order to create more detailed geometry. In OpenGL, these higher-order primitives are called patches. The TCS is the first of the two tessellation shaders, the other one being…

 

4. Tessellation Evaluation Shader

Once the tessellation engine does its job, it produces a number of new vertices which add detail to the given patch. The Tessellation Evaluation Shader is invoked for each generated vertex, so it can add a lot of overhead. That’s why programmers should be careful with the tessellation levels they request in the TCS.
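On the application side, the main tessellation-specific setup is telling OpenGL how many vertices form one patch before drawing; a minimal sketch (vertex_count is a placeholder of mine):

glPatchParameteri(GL_PATCH_VERTICES, 3);     // three control points per patch
glDrawArrays(GL_PATCHES, 0, vertex_count);   // draw patches, not triangles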

 

5. Geometry Shader

Geometry Shader looks like Vertex Shader, and uses gl_Position, but it’s responsible for creating multiple instances of a primitive through EmitVertex() and EndPrimitive(). “Each time we call EmitVertex the vector currently set to gl_Position is added to the primitive. Whenever EndPrimitive is called, all emitted vertices for this primitive are combined into the specified output render primitive. By repeatedly calling EndPrimitive after one or more EmitVertex calls multiple primitives can be generated. This particular case emits two vertices that were translated by a small offset from the original vertex position and then calls EndPrimitive, combining these two vertices into a single line strip of 2 vertices.” 5

 

6. Primitive Assembly

Primitive assembly is a short stage. It is basically the grouping of primitives into lines and triangles. You may ask: well, what about points? Can’t they be assembled? The answer to your question is that yes, it does happen for points, but it’s trivial.

7. Clipping

After gl_Position is converted to Cartesian coordinates and normalized, meaning its components are mapped into the range -1 to 1, it is clipped for the screen. Meaning, the primitives which fall outside the view volume defined by the projection matrix are discarded. This stage is very important for overall performance reasons.
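In symbols, that normalization is the perspective divide of the clip-space position by its own w component, with everything inside the view volume ending up in the -1 to 1 range:

x_{ndc} = x_{clip} / w_{clip}, \quad y_{ndc} = y_{clip} / w_{clip}, \quad z_{ndc} = z_{clip} / w_{clip}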

8. Culling

This is another step taken for performance. Each 3D primitive has a face toward the camera, while its back is out of the camera’s view. So it’s entirely useless to render the back of the primitive as well. Culling is the process of discarding the back in favour of the face of each primitive.
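In OpenGL this is a couple of state calls; a minimal sketch:

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    // discard back-facing primitives
glFrontFace(GL_CCW);    // counter-clockwise winding counts as the front face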

9. Rasterization

Rasterization is the process of converting 3D objects in the computer’s memory to 2D objects to be displayed on the monitor, or rendered to a file. In other words, rasterization is the process of determining which fragments might be covered by a triangle or a line. Rasterization has two stages: triangle setup, and triangle traversal. In the former, “… edge equations, and other data for the triangle are computed. These data may be used for triangle traversal, as well as for interpolation of the various shading data produced by the geometry stage. Fixed function hardware is used for this task.” 6 In the latter, each pixel, or sample, covered by the triangle is found, and a fragment is generated for each such pixel or sample. If the number of samples is low, aliasing happens. Triangle attributes are interpolated, and fragments are sent to the next stage, which is the most important stage: the fragment shader.

10. Fragment Shader

Crème de la crème, it is the last programmable stage in the pipeline, and does operations such as coloring, texturing, shadowing, antialiasing, ray tracing (if possible), etc. Perhaps the most important thing done in this stage is texturing.

 

Texturing, picture courtesy of Real-Time Rendering 4th Edition

If you are interested in learning more about fragment shaders and what you can do in them, you can read The Book of Shaders, a free internet book that, although outdated, teaches a lot of tricks in the trade.

11. Z-Buffer, Stencil Buffer, and Color Buffer

The Z-buffer is the depth buffer of the program, and it can be backed by an FBO attachment. An FBO can also hold a stencil buffer, which, as the name suggests, creates a mask around the screen. An FBO can also hold a color buffer, which stores the colors that end up on screen.
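A minimal sketch of using the default depth buffer from the application side:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                                  // keep the closest fragment
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // clear color and depth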

12. Compute Shaders, and Beyond!

Disney don’t sue please.

Compute shaders are a part of GPGPU, or General-Purpose GPU computing. GPUs, with their SIMD architecture, are the bee’s knees for high-performance computing. So in modern OpenGL, Khronos has provided us with a way to harness the power of the GPU using shaders. Compute shaders use GLSL, and are good for things like image processing. However, compute shaders are not a part of the pipeline, and in fact, a shader program cannot contain both types of shaders. However, an application can contain both.
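Dispatching one is a single call once the compute program is built; a sketch (compute_program, image_width and image_height are placeholders of mine, assuming 16x16 work groups declared in the shader):

glUseProgram(compute_program);
glDispatchCompute(image_width / 16, image_height / 16, 1);  // launch the work groups
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);        // wait for image writes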

There are many things to do after Fragment Shader. Such as collision detection, animation, physics, music, etc. But they are not a part of OpenGL.

 

So that’s it! I hope you’ve enjoyed this post. Since there was a brouhaha last night about me not citing the sources, this time I made sure to cite everyone. People were right, I should have cited. Thanks people!

Please share my post with your interested friends. It goes a long way. Thank you, Chubak Out.

The following article is based on a paper by Jorge Jimenez, Jose J. Echevarria, Tiago Sousa and Diego Gutierrez.

You can view their SMAA demo here. It runs fine on my GTX 960 2GB.

Old Methods of Antialiasing

For years, MSAA (Multisample Antialiasing) and SSAA (Supersample Antialiasing) have been the de facto methods of antialiasing. In fact, these two still retain the highest quality amongst the various modern antialiasing methods. As we know, aliasing is caused by a lack of samples, at the spatial level (jagged lines) and at the temporal level (flickering), usually around the edges and high/low contrast regions of the picture. To battle these, we have two main ways that were once the only way around: supersampling and multisampling. In supersampling, we render the picture at a higher resolution, then downsample it to the final resolution. It works fine because, as my uncle puts it, it’s a pincer attack: it covers every basis of the problem and surrounds it. Multisampling is similarly pincer-ish; in this method, each pixel is sampled multiple times, based on the given coefficient. At today’s high resolutions, this would require a rather fiendish graphics card. Therefore, we need new methods of antialiasing, at both the spatial and the temporal level. All these methods rely on one algorithm to do their job: an edge detection algorithm. But they rely on other things as well.

 

Modern Ways of Antialiasing

There are many modern filter-based methods for antialiasing which all, although inferior in quality to MSAA and SSAA, do their job: FXAA, DEAA, GPAA, GBAA, CSAA, EQAA, DLAA… In this article, we’ll talk about SMAA, and its predecessor, MLAA. These modern filter-based methods have their own problems:

  • Most edge detection methods, which are the basis of these methods, only take into account numerical differences between pixels, ignoring how they appear to the viewer.
  • The original shape of the object is not always preserved; an overall rounding of corners is most of the time clearly visible in text, sharp corners, and subpixel features.
  • Most approaches are designed to handle horizontal or vertical patterns only, ignoring diagonals.
  • Real subpixel features and subpixel motion are not properly handled. Specular and shading aliasing is not completely removed.

You’ve guessed right: we raise these issues because we aim to decimate them.

Morphological Antialiasing (MLAA)

MLAA tries to estimate the coverage of the original geometry. To accurately rasterize an antialiased triangle, the coverage area for each pixel inside the triangle must be calculated, to blend it properly with the background. MLAA begins with an image without antialiasing, and it reverses the process by re-vectorizing the silhouettes, in order to estimate such coverage areas. Then, since the background cannot be known after rasterization, MLAA blends with a neighbor, assuming that its value is similar to the original background. In other words, the algorithm detects borders (either using color or depth information) and then finds specific patterns in these. Antialiasing is achieved by blending pixels in the borders intelligently. MLAA has implementations in DirectX 10 and MonoGame (XNA). Games such as Fable II use it faithfully. From the creators of MLAA comes SMAA, or Enhanced Subpixel Morphological Antialiasing, which is the main point of our post.

MLAA in action

Enhanced Subpixel Morphological Antialiasing (SMAA)

 

Comparison between SMAA and other methods in Crysis 2

SMAA offers reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. Besides, SMAA doesn’t change the shape of the geometry, as many other methods do.

 

Top) No AA

Middle) MLAA

Bottom) SMAA

SMAA builds on the MLAA pipeline, improving or redefining it at every step. In particular, the edge detection is improved by using color information with local contrast adaptation for cleaner edges. It extends the number of patterns handled, for preservation of sharp geometric features and diagonals. And lastly, it shows how morphological antialiasing can be accurately combined with multisampling or supersampling and temporal reprojection.

Edge Detection

Edge detection is vital, because undetected edges remain aliased. On the other hand, too many filtered edges can reduce the quality of the antialiased image. Different information can be used to detect the edges: chroma, luma, depth, surface normals, or a combination of them. SMAA uses luma, for four reasons:

  1. Fewer artifacts.
  2. Luma is always visible.
  3. It can handle shading aliasing.
  4. And finally, it’s faster than chroma.

 

 

Left and Center: Other Methods of Edge Detection, causing red crossing and artifacts

Right: SMAA edges, clean as a whistle

Have this image in mind. Here’s how edge detection works: the final calculated value is a boolean indicating a left edge boundary. The boolean for the top edge is calculated similarly. The formula is:

c_{max} = \max\left(c_l, c_r, c_t, c_b, c_{2l}\right)

e_l^\prime = e_l \wedge c_l > 0.5 \cdot c_{max}

All the c values are called contrast deltas.

Pattern Handling

SMAA pattern detection allows preserving sharp geometric features like corners, deals with diagonals, and enables accurate distance searches.

Sharp Geometric Features: The re-vectorization of silhouettes in MLAA tends to round corners. To avoid this, SMAA makes the observation that crossing edges in contour lines have a maximum size of one pixel, whereas for sharp corners this length will most likely be longer. Thusly, SMAA fetches two-pixel-long crossing edges instead; this allows less aggressive corner processing.

Diagonal Patterns: SMAA introduces a novel diagonal pattern detection. It consists of the following steps:

  1. Search for the diagonal distance d_l and d_r to the left and the right of the diagonal lines.
  2. Fetch the crossing edges e_1 and e_2.
  3. Use this input information, defining the specific diagonal pattern, to access the precomputed area texture, yielding the areas a_t and a_b.

If the diagonal pattern detection fails, then the orthogonal detection is triggered.

Accurate Distance Search: Key to pattern detection and classification is obtaining accurate edge distances (the lengths to both ends of the line). MLAA makes extensive use of hardware interpolation to speed up this process. Hardware bilinear filtering can be used as a way of fetching and encoding up to four different values with a single memory access. This linear interpolation of two binary values producing a single floating point value is shown as:

f_x(b_1, b_2, x) = x \cdot b_1 + (1 - x) \cdot b_2

Where b_1 and b_2 are two binary values (either 0 or 1) and x is the interpolation value.

Results

MLAA works with a single sample per pixel. This translates to subsampling, which makes it impossible to recover real subpixel features.

 

MLAA vs SMAA vs No AA

SMAA, however, works in the subpixel level. This results in:

  • Local contrast
  • Diagonal pattern detection
  • Sharp geometric features
  • Accurate searches

You can view all these in the following image, with these features compared to other methods. In fact, SMAA can produce results close to SSAA 16x.

 

The overhead produced by each of the solutions is negligible. In particular, local contrast adaptation is only 0.08 ms, the sharp geometric features detection and accurate distance search take 0.01 ms, and diagonals processing produces an overhead of 0.12 ms. In short, SMAA is rather fast: it doesn’t quite match SSAA and MSAA at high sample counts in quality, but it is more fruitful for its cost and far less resource-intensive.

Well, thanks for reading the article! And thanks for the writers of the journal which I used for the majority of the article. I hope you guys have a good day, and also, go on reading scientific articles on your own. It’s simple, just head to libgen.io and search for what you like — not necessarily graphics programming. Read, read and read! Don’t watch too many Youtube tutorials, it kills your senses. I can’t stress that enough. I don’t intend to tell you what to do, these are all just suggestions. I’m currently studying Structured Computer Organization and enjoying it very, very much. I recommend it for everyone, even as bedtime reading.

Please, please, please tweet my blog, introduce it to your friends, and share it to people whom you want to enjoy life, and learn about graphics programming. Thank you, thanks a lot. Chubak out.

Radiosity: Hauntingly Beautiful

I saw the “Radiosity Maps” option in Cinema 4D many years ago, but I didn’t understand what it was. Now I understand, and I want you, faithful reader, to understand it as well. I’m basing this post on an old article I found on Libgen, an article from 1986 by the trio of Donald P. Greenberg, Michael F. Cohen, and Kenneth E. Torrance. I hope they don’t mind me basing my blog post on their article from 34 years ago!

We know that there are three categories of light: ambient, diffuse, and specular. In Phong shading, which we talked about before, a simple formula is used to simulate lighting in the given scene. This, however, is not sufficient. It is enough for real-time rendering, where performance is the main concern, but when time is not of concern, we can use more complex formulae to calculate the lighting in our scene.

In 1979, Whitted published a paper in which he explained and talked about ray tracing, a method for producing images of excellent quality.

 

Raytracing, a noble way for noble men.

However, the raytracing procedure is limited and can only model intra-environment reflections in the specular direction. Additionally, shadows always exhibit sharp boundaries, and since raytracing is view-dependent, every view requires a new pass.

This, however, is not true of the radiosity method. This method renders Global Illumination independent of views.

At the SIGGRAPH convention of 1984, Lady Cindy Goral exhibited a method that sparked a lot of interest and turned quite a few heads: derived from the field of heat transfer in thermal engineering, she and her colleagues revealed a new global illumination method based on energy equilibrium, and called it the Radiosity Method.

 

The difference between direct illumination, i.e. Phong lighting, and radiosity is evident in the given picture: brighter surfaces, color bleeding caused by diffuse inter-reflections, and softer shadows.

Two other papers were published in 1985 which introduced concepts such as hemi-cube which we’ll discuss later. For now, let’s talk about Radiosity, and what it exactly really is.

Radiosity Formulae

The radiosity method describes energy equilibrium inside an enclosure. The light leaving a surface (its radiosity) consists of self-emitted light and reflected or transmitted incident light. The amount of light arriving at a surface requires a complete specification of the geometric relationships among all reflecting and transmitting surfaces, along with the light leaving every other surface. The formula for this relationship is:

Formula 1: Radiosity essence

B_i = E_i + \rho_i \sum_{j=1}^{N} B_j F_{ij}, \quad i = 1, \ldots, N

Factors and coefficients are as follows:

Radiosity (B): The total rate of energy leaving a surface.

Emission (E): The rate of energy (light) emitted from a surface

Reflectivity (\rho): The fraction of incident light which is reflected back into the environment

Form-factor (F): The fraction of the energy leaving one surface which lands on another surface.

N: the number of discrete surfaces, or patches.

What does this equation state, you may ask? Well, from this equation we understand that the amount of energy leaving a particular surface is equal to the self-emitted light plus the reflected light. The reflected light is equal to the light leaving every other surface, multiplied by both the fraction of that light which reaches the surface in question, and the reflectivity of the receiving surface.
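Viewed this way, the radiosity equation is just a linear system, and one way to solve it is to iterate it until the energy settles. Here is a sketch of my own in C++ (the form factors are assumed to be precomputed; the hemi-cube below is what actually computes them):

#include <vector>

// Iteratively solve B_i = E_i + rho_i * sum_j F[i][j] * B_j ("gathering").
std::vector<double> solve_radiosity(const std::vector<double>& E,
                                    const std::vector<double>& rho,
                                    const std::vector<std::vector<double>>& F,
                                    int iterations)
{
    const std::size_t N = E.size();
    std::vector<double> B = E;                  // start with self-emitted light
    for (int it = 0; it < iterations; ++it) {
        std::vector<double> next(N);
        for (std::size_t i = 0; i < N; ++i) {
            double gathered = 0.0;
            for (std::size_t j = 0; j < N; ++j)
                gathered += F[i][j] * B[j];     // light arriving from patch j
            next[i] = E[i] + rho[i] * gathered;
        }
        B = next;
    }
    return B;
}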

We said that the form-factor is the fraction of the energy leaving one surface which lands on another surface. The formula for this is:

Formula 2: Form-factor essence

F_{ij} = \frac{1}{A_i} \int_{A_i} \int_{A_j} \frac{\cos\theta_i\cos\theta_j}{\pi r^2} \mathrm{HID}_{ij}\,dA_j\,dA_i

Factors and coefficients can be seen in this figure:

 

Also, HID_{ij} is a visibility term: it determines whether dA_j can be seen from dA_i.

The Hemi-Cube Method

We talked about the hemi-cube, but what is it? The hemi-cube algorithm provides a numerical integration technique (you know, integrals) for evaluating Formula 2. Instead of projecting onto a hemisphere, which is the classical construction for form factors, an imaginary cube is constructed around the center of the receiving patch. The environment is transformed to set the patch’s center at the origin, with the patch’s normal coinciding with the positive Z-axis (in other words, the tangent space!).

 

Then, the hemi-cube is divided into an orthogonal mesh of pixels (the article puts “pixels” in quotes, as it was a novel word at the time!) at any desired resolution.

 

The Hemi-Cube divided into these so-called pixels… Witchcraft!

Ipso facto, the total value of the form-factor from the patch at the center of the hemi-cube to any patch j can be determined by summing the contributions of the pixels that patch j projects onto.
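As a sketch (the data structures are mine, and the projection/item-buffer step is omitted), the summation itself is as simple as this:

#include <vector>

// Each hemi-cube "pixel" stores which patch is visible through it and its
// precomputed delta form-factor; F_ij is the sum over the pixels j covers.
struct HemicubePixel {
    int    visible_patch;      // index of the patch seen through this pixel, -1 if none
    double delta_form_factor;  // precomputed contribution of this pixel
};

std::vector<double> form_factors_from_hemicube(
    const std::vector<HemicubePixel>& pixels, int patch_count)
{
    std::vector<double> F(patch_count, 0.0);
    for (const HemicubePixel& p : pixels)
        if (p.visible_patch >= 0)
            F[p.visible_patch] += p.delta_form_factor;
    return F;
}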

Radiosity… GI?

As we’ve learned, radiosity is a subset of GI, but it’s different from raytracing. Today, most 3D applications use Monte Carlo methods for their GI engines, which retain some aspects of radiosity. An example is Cinema 4D: it used to have a Radiosity option, but renamed it to GI, so people, mostly C4D users, think GI and radiosity are different things.

Well, that is it! Dear reader, I appreciate your keen interest in my blog. Please leave a comment. If I’ve made an error, please alert me. I’m not perfect, I make mistakes.

Another thing. Please, if you’ve liked my blog, tweet it to your friends so they can enjoy it as well. Even if it marks you as the nerd of the group, please do it, chicks dig nerds these days. If you have a girlfriend, just take some MDMA and read my blog to her out loud, she’ll return the favor, I’d rather not say how. Besides tweeting, you can reddit my blog. I post my blog in all the relevant subreddits, but I may miss one or two, or a million. So please, help me grow my blog’s readership. Thanks, Chubak.

What is a shadow, even?

Well, let me start off by saying that I love shadows more than light, and I have chosen the one room in the house where every being of the room occludes light. Occludes light… Hmm… How do we know what’s behind the light frustum, and what’s in front of the light frustum, or frusta?

 

Let us scrutinize shadows first. What are shadows? When you shine a light on an object with a wall behind it, what happens?

Look at this picture:

 

Umbra: Full, hard-edged shadow.

Penumbra: Half, soft-edged shadow.

What happens if we take the light source further and further away, until the light frustum becomes so wide that an umbra does not seem feasible anymore? Then the umbra turns into an antumbra, where different penumbras meet.

Shadow Mapping

The concept was popularized by Lance Williams in 1978. Lance Williams is also the person behind mip-mapping, which we’ll discuss in another post, moon-god-willingly. So what is shadow mapping? It’s by far the most feasible real-time shadowing solution, as raytracing is rather resource-intensive and shadow volumes are not suitable for all real-time rendering.

 

Such a happy man!

Native API support is also another reason to choose shadow mapping. GLSL, for example, has a native sampler type for shadow maps, received by the shader as a uniform, which we’ll see.

But what is shadow mapping?

Imagine the angle of the light is less than 180 degrees. If we choose the light’s position as the camera position and the light’s direction as the camera direction, and render the scene from that point of view, then the resulting depth buffer is our shadow map.

 

Left: Light frustum shone on the scene | Right: The resulting depth buffer becomes our shadow map
Depth as seen from the light

The moment of truth

We said all those things, and we showed you the result, but how exactly do we know that an object occludes light at a certain position? Let’s see what happens when it doesn’t and then we’ll see what happens when it does so.

Imagine this spot on our Utah Teapot:

 

Imagine Z_{sm} is the depth value stored in the shadow map for this point on the teapot, and Z_{ls} is the depth of the point as seen from the light source, that is, the distance from the light to the lit point.

 

There will not be a shadow in this case, because Z_{ls} = Z_{sm}. However, imagine if the point were behind the teapot:

 

In this case, when Z_{ls} > Z_{sm}, this point is definitely in the shadow, and since each point is a fragment, this fragment should be shaded In an Octopus’ Garden in the Shade!

Again, I’m burying the lede, and I’m sorry for that. Octopus’ Garden in the Shade has nothing to do with shaders! Anyways, we must factor in a bias when we compare the depths. Because if we don’t, as you see in the photo, there will be surface acne.

In a scene with dueling frusta, i.e. more than one light source and shadow map, the sizes of the shadow maps shouldn’t differ from each other; otherwise, the resulting shadows will be pixelated.

 

Shadow Map: Pros vs. Cons

Shadow Map Pros are:

  1. No matter how heavy they may look, they are still better for real-time rendering than the alternatives (e.g. RayTracing).
  2. You can adjust the size of the texture which holds the depth data, ultimately, add it as an option to your game, or demo.

 And the Cons are:

  1. Aliasing occurs in low-res textures.
  2. Textures are heavy and occupy a lot of memory.
  3. Effects of self-occlusion may be visible in the output as sparkling. This can be fixed by polygon offset, as sketched below.
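A minimal sketch of that polygon-offset fix, applied while rendering into the shadow map (the factor and units are values you would tune per scene):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 4.0f);   // push stored depths slightly away from the light
// ... render the scene into the shadow framebuffer here ...
glDisable(GL_POLYGON_OFFSET_FILL);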

 Finally, let’s implement it in the code using OpenGL.

OpenGL Implementation

With all that said, how exactly is this implemented in OpenGL? Here’s how. The following code is taken from OpenGL Superbible 7th Edition.

Listing 1: First, we create the shadow depth buffer.

 

GLuint shadow_buffer;
GLuint shadow_tex;

glGenFramebuffers(1, &shadow_buffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadow_buffer);

glGenTextures(1, &shadow_tex);
glBindTexture(GL_TEXTURE_2D, shadow_tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32,
               DEPTH_TEX_WIDTH, DEPTH_TEX_HEIGHT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
                GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                     shadow_tex, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

Listing 2: Then, we create model-view-projection matrices, but this time, instead of the camera, we use the light!

 

vmath::mat4 model_matrix = vmath::rotate(currentTime, 0.0f, 1.0f, 0.0f);
vmath::mat4 light_view_matrix =
    vmath::lookat(light_pos,
                  vmath::vec3(0.0f),
                  vmath::vec3(0.0f, 1.0f, 0.0f));
vmath::mat4 light_proj_matrix =
   vmath::frustum(-1.0f, 1.0f, -1.0f, 1.0f,
                  1.0f, 1000.0f);
vmath::mat4 light_mvp_matrix = light_proj_matrix *
                               light_view_matrix * model_matrix;

Listing 3: Thusly, we generate a shadow matrix.

const vmath::mat4 scale_bias_matrix =
     vmath::mat4(vmath::vec4(0.5f, 0.0f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.5f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.0f, 0.5f, 0.0f),
                 vmath::vec4(0.5f, 0.5f, 0.5f, 1.0f));
vmath::mat4 shadow_matrix = scale_bias_matrix *
                            light_proj_matrix *
                            light_view_matrix *
                            model_matrix;

Listing 4: Nothing can be done without shaders, so we implement the vertex shader for this shadow pass. Not much different from any other vertex shader.

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
uniform mat4 shadow_matrix;
layout (location = 0) in vec4 position;

out VS_OUT
{
    vec4 shadow_coord;
} vs_out;

void main(void)
{
    gl_Position = proj_matrix * mv_matrix * position;
    vs_out.shadow_coord = shadow_matrix * position;
}

As you can see, we’ve got two model-view-projection matrices: one we use for ourselves, and one (the shadow matrix) we pass to the fragment shader to see if each fragment is in the shadow, or in the light.

Listing 5: the fragment shader of it all.

#version 420 core

layout (location = 0) out vec4 color;

layout (binding = 0) uniform sampler2DShadow shadow_tex;

in VS_OUT
{
    vec4 shadow_coord;
} fs_in;

void main(void)
{
    color = textureProj(shadow_tex, fs_in.shadow_coord) * vec4(1.0);
}

That’s it! Notice something that we’ve never used before on this blog: sampler2DShadow. This is why I say the Superbible is better than LearnOpenGL.com. It’s up to you: do you want something outdated written by a fan, or something up-to-date written by the ARB? Your choice!

 

Well, that is it for today! I might make another post, but it would be for the Europeans, because by the time I post my second entry of today, Americans will be asleep. A lot of people have visited my blog, and this blog is not far from generating profit. So thank you! Don’t forget to leave a comment. I love each and every one of you, if that’s not creepy. Really. I love everyone who reads the drivel I write. This gives me a feeling of self-importance. Anyways, goodbye for now!

As the inquisitive reader may have guessed, or simply googled my name, I hail from Persia, where the Prince of Persia (as most people here put it, pe-rance of per-sh-ee-a) games and the movie are set. Some people complained that Persians are brown, and that the movie having cast a Nordic actor to play the role of the Prince was rather insulting and racist; however, what these warriors choose to ignore is that Persians, and most other Iranic tribes at the time, such as Parthians (Parthia, where I live) and Medians, were in fact white. Let’s not beat around the bush here. For nationalistic reasons, I never played the reboot, or the trilogy; I just played the 2D platformer on a Sega Genesis emulator. Anyways, I thought the reboot was the first instance of cel shading in computer games; however, I was wrong. Ten years before, a Dreamcast game by the name of Jet Set Radio (pictured above) had set the trend. I’m too young for the Dreamcast and I’ve never even seen one. But it must have been a helluva game, judging by the videos I’ve seen of it.

 

Legend of Zelda: Skyward Sword, one of the better games using cel shading

But what is cel shading, or as Graham Sellers puts it, cell shading [sic]? It’s basically the process of using tricks in the render pipeline to make everything appear as if it were drawn by hand. The alternative name for this technique is toon shading.

 

Cinema 4D’s “Toon” Effect

To find out which games use this effect, I headed to my trusty hangout, TVTropes.org, a wiki created using PMWiki and serving stray, bored media lovers for years… and its cel shading page said:

Cel Shading applies first and foremost to the way the lighting is rendered.

This layman explanation is exactly how I would describe it. Indeed, cel shading is about how the diffuse light channel is rendered and textured. I know that light is a component of the material, and not something to be rendered, but using LUTs, we can achieve exactly this.

Anyways, what is a LUT?

 

 

A shot from Tintin’s Assets… I really don’t know what a plastic shader is. You could fill a data center with the things I don’t know!

Color LUT

If you have played around in DaVinci Resolve, you’ll know what a LUT is. It’s short for Look-Up Table. Imagine you wish to change a series of colors to another series of colors, where each color must correspond to another. This is where a color LUT comes into play.

 

A LUT

In OpenGL, a LUT can be a 1D texture indexed by a number between 0.0 and 1.0. Here, this number is the intensity of the diffuse component of the Phong lighting system (remember the last post? Phong lighting is different from the Phong material). Imagine this, if you will.

 

\vec{N} is the normal vector and \vec{L} is the light vector. The LUT maps each intensity to a color using the formula; \alpha depends on your code.
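Reading it off the fragment shader later in this post (where \alpha = 5, and ignoring the small brightness factor the listing multiplies in at the end), the mapping is:

tc = \max(0, \vec{N} \cdot \vec{L})^{\alpha}

color = \text{LUT}(tc)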

 Let’s put it all together and see what happens.

Note: These are taken from OpenGL Superbible 7ed, by Graham Sellers.

 

Cel Shading in OpenGL

First, our front-end, at least, a part of it:

Listing 1: OpenGL Cel Shading Front End

 

//declare our LUT
static const GLubyte toon_tex_data[] =
{
    0x44, 0x00, 0x00, 0x00,
    0x88, 0x00, 0x00, 0x00,
    0xCC, 0x00, 0x00, 0x00,
    0xFF, 0x00, 0x00, 0x00
};

GLuint tex_toon;                        // Handle for the LUT texture
glGenTextures(1, &tex_toon);            // Generate the texture object
glBindTexture(GL_TEXTURE_1D, tex_toon); //Bind texture, a 1D texture
glTexStorage1D(GL_TEXTURE_1D, 1, GL_RGB8, sizeof(toon_tex_data) / 4);
glTexSubImage1D(GL_TEXTURE_1D, 0,
                0, sizeof(toon_tex_data) / 4,
                GL_RGBA, GL_UNSIGNED_BYTE,
                toon_tex_data); //Pass the data
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // Clamp lookups at the ends of the LUT

Note that the code is explained in the comments, and this is an introductory article, not a tutorial, so I don’t see fit to explain it in detail. You can buy OpenGL Superbible and see for yourself.

Listing 2: Cel Shading Vertex Shader (partial)

 

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;

layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

out VS_OUT
{
    vec3 normal;
    vec3 view;
} vs_out;

void main(void)
{
    vec4 pos_vs = mv_matrix * position;

    // Calculate eye-space normal and position
    vs_out.normal = mat3(mv_matrix) * normal;
    vs_out.view = pos_vs.xyz;
    // Send clip-space position to primitive assembly
    gl_Position = proj_matrix * pos_vs;
}

And finally, our fragment shader.

Listing 3: Cel shading fragment shader.

 

#version 420 core

layout (binding = 0) uniform sampler1D tex_toon;

uniform vec3 light_pos = vec3(30.0, 30.0, 100.0);

in VS_OUT
{
    vec3 normal;
    vec3 view;
} fs_in;

out vec4 color;

void main(void)
{
    // Calculate per-pixel normal and light vector
    vec3 N = normalize(fs_in.normal);
    vec3 L = normalize(light_pos - fs_in.view);
    // Simple N dot L diffuse lighting
    float tc = pow(max(0.0, dot(N, L)), 5.0);

    // Sample from the cel shading LUT
    color = texture(tex_toon, tc) * (tc * 0.8 + 0.2);
}

If we load a donut model into the program, this is what we’ll get:

 

Well, hope you enjoyed it! I’m currently learning After Effects. And I may write a book about it. What do you think?



We do not expect to be able to display the object exactly as it would appear in reality, with texture, overcast shadows, etc. We hope only to display an image that approximates the real object closely enough to provide a certain degree of realism.


Bui Tuong Phong

Bui Tuong Phong was born in 1942 and became a computer scientist amid the upheaval of the Vietnam War. It must have been pretty difficult for him to complete his education in the toxic environment of the 60s, let alone under the threat of being drafted to fight in the country of his forebears! He managed to stay alive until 1975, when leukemia took his life, just two short years after he gave us the basis of lighting and shading as we know it today: the Phong shader. Vietnamese names are composed of three parts: the family (paternal) name, the middle name, and the given name. So whenever people say “Phong shader”, they are referring to Bui Tuong’s given name. Read more about Vietnamese given names on Wikipedia.

Not sure if it’s Phong, but if Google is to be believed, it’s him.


“Softly let the balmy sunshine
Play around my dying bed,
E’er the dimly lighted valley
I with lonely feet must tread. “


Let The Light Enter – Poem by Frances Ellen Watkins Harper

There’s some extremely concise math behind the Phong shader, which you need not know, really, unless you aspire to be a graphics programmer like yours truly. However, knowing it, or even glancing at it, will be beneficial in the long run. The following is extracted from OpenGL Superbible, 7th Edition, by Graham Sellers and the Khronos Group ARB.

 

A) Some Concepts

First, let me get some concepts out of the way. If you have been 3D modelling or developing games for more than a minute, you will certainly have run into them already, but a reminder won’t hurt.

 

A-1) Ambient Light

Most books, especially rather shoddy ones, compare this sort of light to sunlight; however, this is simply wrong. Ambient light is not sunlight. It comes from every direction, meaning it’s omnipresent, but in the calculations it’s just a vector with three components. In Phong shading, it gets added in at the end and is not modified by the geometry.


A-2) Diffuse Light

Diffuse light has a direction. In fact, it’s the directional component of a light source [sic]. In cinema, light is diffused with a softbox; in computer graphics, light is diffused with the formula we’ll put forward later. The magnitude of the diffuse contribution depends on the surface: a matte surface, which scatters light rather than mirroring it, gets a higher diffuse magnitude than a glossier one.

Reflection/Absorption of Diffuse Light from a Matte Display
A-3) Specular Highlight

Like diffuse light, specular light is directional, but depending on the glossiness of the surface it leaves a bright highlight, and how tight that highlight is gets controlled by a parameter called shininess. In real life, shininess is not an inherent part of the material; in fact, a coat of film or a speck of wax will add more to it than anything else ever could. The specular shininess exponent is usually set between 0 and 128, because beyond 128 it won’t change the result much (there’s a tiny numeric sketch after the picture below to show what the exponent does).

“Coated Paper”, construction paper with a film of gloss over them. Childhood Paradise.
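As promised, here’s a throwaway C sketch (all numbers made up) that raises the same \vec{R}\cdot\vec{V} term to a few different shininess values. The bigger the exponent, the faster the highlight falls off away from the perfect reflection direction.

#include <stdio.h>
#include <math.h>

// Made-up numbers: the same R.V term raised to increasing shininess exponents.
// Higher exponents shrink and tighten the highlight.
int main(void)
{
    const double r_dot_v = 0.95;               // fairly close to the mirror direction
    const int exponents[] = { 1, 8, 32, 128 };

    for (int i = 0; i < 4; i++)
        printf("shininess %3d -> specular factor %.4f\n",
               exponents[i], pow(r_dot_v, exponents[i]));

    return 0; // prints roughly 0.95, 0.66, 0.19, 0.0014
}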
A-4) Albedo

It’s the proportion of the incident light that is reflected by the surface.

A-5) Phong Formula

The formula for calculating a Phong material is as follows:

I_p = k_a i_a + k_d\left(\vec{L}\cdot\vec{N}\right)i_d + k_s\left(\vec{R}\cdot\vec{V}\right)^\alpha i_s

Where:

k_a: Ambient material

k_d: Diffuse material

k_s: Specular material

\alpha: Shininess factor

i_a: Ambient light

i_d: Diffuse light

i_s: Specular light

You may ask, what about the vectors? Well, no worries, we’ll cover them now:

\vec{N}: Surface normal

\vec{L}: Unit vector from the point being shaded into the light (in other words, the light vector)

\vec{R}: The reflection of the negative of the light vector

\vec{V}: Vector to the viewer
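To make the formula less abstract, here’s a quick worked example with made-up numbers (mine, not the Superbible’s). Take k_a = 0.1, k_d = 0.5, k_s = 0.7, \alpha = 32, set all light intensities i_a = i_d = i_s = 1, and suppose \vec{L}\cdot\vec{N} = 0.8 and \vec{R}\cdot\vec{V} = 0.9. Then:

I_p = 0.1(1) + 0.5(0.8)(1) + 0.7(0.9)^{32}(1) \approx 0.1 + 0.4 + 0.7 \times 0.034 \approx 0.52

Most of that comes from the diffuse term; the specular term only matters when \vec{R}\cdot\vec{V} is very close to 1, that is, when you’re looking almost straight down the reflected ray.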

 

 

B) Gouraud Shading

Before stampeding towards the Phong shader, let’s show how you can achieve Gouraud shading in GLSL. Note that I’m using GLSL 4.1, as per the Superbible’s instructions; however, you may be a fan of www.learnopengl.com and use 3.3. Doesn’t matter, same thing. So let’s see what Gouraud shading is. Is it edible, is it a suppository, or is it some sort of a sex toy?

This method of shading was invented by Henri Gouraud in 1971. It’s not superior to Phong shading by any means; these days it’s mostly used as a less GPU-intensive preview mode in software such as Cinema 4D. The problem is that the specular glint it generates looks like a spark, or a starburst, as shown in the picture below:

 

This is caused by computing the color at the vertices and interpolating it linearly across each triangle, which creates discontinuities between triangles; only Phong shading really fixes it. Let’s see how we can do Gouraud shading in GLSL 4.1.
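But first, here’s a quick back-of-the-envelope illustration in C (all numbers made up) of why per-vertex specular misses the highlight: if the specular peak falls between two vertices, Gouraud interpolates two near-zero vertex results, while per-pixel (Phong) shading evaluates the peak itself.

#include <stdio.h>
#include <math.h>

// Made-up scenario: the specular highlight peaks halfway between two vertices.
int main(void)
{
    const double shininess = 128.0;
    const double rv_vertex = 0.70;  // R.V at both vertices
    const double rv_pixel  = 0.99;  // R.V at the pixel halfway between them

    double spec_vertex = pow(rv_vertex, shininess);             // essentially zero
    double gouraud_mid = 0.5 * spec_vertex + 0.5 * spec_vertex; // linear interpolation of results
    double phong_mid   = pow(rv_pixel, shininess);              // roughly 0.28

    printf("Gouraud midpoint specular: %g\n", gouraud_mid);
    printf("Phong   midpoint specular: %g\n", phong_mid);
    return 0;
}

Now, on to the actual shaders.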

Listing 1: Per-vertex Gouraud Shading in GLSL 4.1

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Light and material properties
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

// Outputs to the fragment shader
out VS_OUT
{
    vec3 color;
} vs_out;

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view space
    vec3 N = mat3(mv_matrix) * normal;
    // Calculate view-space light vector
    vec3 L = light_pos - P.xyz;
    // Calculate view vector (simply the negative of the view-space position)
    vec3 V = -P.xyz;

    // Normalize all three vectors
    N = normalize(N);
    L = normalize(L);
    V = normalize(V);
    // Calculate R by reflecting -L around the plane defined by N
    vec3 R = reflect(-L, N);

    // Calculate the diffuse and specular contributions
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Send the color output to the fragment shader
    vs_out.color = ambient + diffuse + specular;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}

And now, the fragment shader.

Listing 2: Fragment shader of the same concept.

 

#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 color;
} fs_in;

void main(void)
{
    // Write incoming color to the framebuffer
    color = vec4(fs_in.color, 1.0);
}

C) Phong Shading

Before going forward, keep in mind that Phong shading and Phong lighting are two different concepts. You can get rid of Gouraud’s “starburst” by using more vertices, but why do that when Phong shading is around? In Phong shading, instead of interpolating the color between vertices (as seen in Listings 1 and 2), we interpolate the surface normals between the vertices, and then use the interpolated normal to perform the entire lighting calculation for each pixel, not each vertex. However, this means more work in the fragment shader, as we’ll see in Listing 4. But first, let’s see the vertex shader.

Listing 3: Phong shader’s vertex shader in GLSL 4.1.

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Outputs to the fragment shader
out VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} vs_out;

// Position of light
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view-space
    vs_out.N = mat3(mv_matrix) * normal;

    // Calculate light vector
    vs_out.L = light_pos - P.xyz;

    // Calculate view vector
    vs_out.V = -P.xyz;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}

Nothing much has changed. But the same isn’t true for the fragment shader.

Listing 4: Phong shading’s fragment shader.

 

#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} fs_in;

// Material properties
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

void main(void)
{
    // Normalize the incoming N, L and V vectors
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);

    // Calculate R locally
    vec3 R = reflect(-L, N);

    // Compute the diffuse and specular components for each fragment
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Write final color to the framebuffer
    color = vec4(ambient + diffuse + specular, 1.0);
}

Well, that is it for this lesson. I hope you enjoyed it. If this has sparked your interest in OpenGL, you can buy OpenGL Superbible from Amazon, or head to learnopengl.com. If you can’t make heads or tails of the shaders, I recommend The Book of Shaders. Please leave a comment. Thank you.