Credits

Since giving credit is more important than anything else, I will do that first. This article is partially based on two papers: A Survey of Procedural Noise Functions by A. Lagae et al. and Efficient Computational Noise in GLSL by McEwan et al. The code is sto-, ahem, taken from this gist, and the table at the beginning of the post is taken from this article. Quotes from the papers are in blue, and are attributed using footnotes. The cover photo is taken directly from the survey paper.

What is Perlin Noise?

Perlin noise was created by Ken Perlin in 1983 (he later won an Academy Award for the achievement). See, back then, photorealism was something everyone desired, but people always came up short. Ken Perlin came up with his noise algorithm to battle this wretched “computer-looking” appearance of 3D models. Noise is the random number generator of computer graphics. It is a random and unstructured pattern, and is useful wherever there is a need for a source of extensive detail that is nevertheless lacking in evident structure.1 Perlin noise is a multi-dimensional algorithm used in procedural generation: textures, terrain generation, map generation, surface generation, vertex generation, and so on and so forth. The following table2 shows the extent of Perlin noise:

Dimension | Raw Noise (Grayscale) | Uses
1 | (noise image) | Things such as vectors which look hand-drawn
2 | (noise image) | Things such as procedural textures, like fire
3 | (noise image) | Terrain, as in Minecraft's Perlin-noise terrain

Perlin gave the following definition for noise: “…noise is an approximation to white noise band-limited to a single octave.”3 The formal definition of the noise is:

f_N(y_1, y_2, \cdots, y_n; x_1, x_2, \cdots, x_n) = P(N(x_1), N(x_2), \cdots, N(x_n))

Where N(x) is the noise function. Perlin noise is a procedural noise. The adjective procedural is used in computer science to distinguish entities that are described by program code rather than by data structures.4 Meaning, something that is generated, not something that is hard-coded. What’s good about using a procedural noise, rather than, say, creating the noise ourselves? Well, a procedural noise is compact, meaning it occupies less space. It’s continuous, it’s non-periodic, it’s parametrized, it’s randomly accessible, and it has many other goodies that make the job of an artist easier… And isn’t that our end goal? To serve the artist? Yesterday, I made an After Effects plugin, which you can see below this post. I didn’t have the artist in mind, I had my own ego and id in mind, so nobody downloaded it. I learned my lesson: serve the artist, not yourself. Anyways…

 

Before Perlin came lattice gradient noises, which are generated by interpolating between random values at lattice points. Perlin noise uses a cubic lattice: the pseudo-random gradient is given by hashing the lattice point and using the result to choose a gradient, and the values are then blended with a splined interpolation.5 In the improved version, these hashes pick from 12 gradient vectors, and the results are interpolated from center to edge using a quintic polynomial. A little difficult to imagine, right? No worries. We’ll provide pictures6 and pseudo-code7.

“… they are interpolated from center to edge using a quintic polynomial. “

And here is pseudo-code for Classic Perlin, sans the hash function:

 

// Function to linearly interpolate between a0 and a1
// Weight w should be in the range [0.0, 1.0]
float lerp(float a0, float a1, float w) {
    return (1.0 - w)*a0 + w*a1;

    // as an alternative, this slightly faster equivalent formula can be used:
    // return a0 + w*(a1 - a0);
}

// Computes the dot product of the distance and gradient vectors.
float dotGridGradient(int ix, int iy, float x, float y) {

    // Precomputed (or otherwise) gradient vectors at each grid node
    extern float Gradient[IYMAX][IXMAX][2];

    // Compute the distance vector
    float dx = x - (float)ix;
    float dy = y - (float)iy;

    // Compute the dot-product
    return (dx*Gradient[iy][ix][0] + dy*Gradient[iy][ix][1]);
}

// Compute Perlin noise at coordinates x, y
float perlin(float x, float y) {

    // Determine grid cell coordinates
    // (floor, so that negative coordinates are handled correctly)
    int x0 = (int)floor(x);
    int x1 = x0 + 1;
    int y0 = (int)floor(y);
    int y1 = y0 + 1;

    // Determine interpolation weights
    // Could also use higher order polynomial/s-curve here
    float sx = x - (float)x0;
    float sy = y - (float)y0;

    // Interpolate between grid point gradients
    float n0, n1, ix0, ix1, value;
    n0 = dotGridGradient(x0, y0, x, y);
    n1 = dotGridGradient(x1, y0, x, y);
    ix0 = lerp(n0, n1, sx);
    n0 = dotGridGradient(x0, y1, x, y);
    n1 = dotGridGradient(x1, y1, x, y);
    ix1 = lerp(n0, n1, sx);
    value = lerp(ix0, ix1, sy);

    return value;
}
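The comments in perlin() mention that a higher-order polynomial or s-curve could replace the linear weights sx and sy. Improved Perlin Noise uses the quintic fade curve 6t^5 - 15t^4 + 10t^3, which has zero first and second derivatives at t = 0 and t = 1, so the lattice seams disappear. A minimal sketch (in GLSL, to match the code later in this post); you would apply it to sx and sy before the lerp calls:

float fade(float t) {
    // 6t^5 - 15t^4 + 10t^3: zero 1st and 2nd derivatives at t = 0 and t = 1
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}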

It is worth knowing that “… All noise functions except Perlin noise and sparse convolution noise are approximately band-pass. Perlin noise is only weakly band-pass, which might lead to problems with aliasing and detail loss.”8 Plus, Perlin noise does not have a Gaussian amplitude distribution, meaning the speckles of noise are not scattered according to a Gaussian function (the details of which are outside the scope of this article). There are many things that Perlin noise fares well at, but there are things at which it performs weakly. In the following table9 you can witness it for yourself:

 


There are many things that Perlin noise fares well at, but there are things at which it performs weakly.

Perlin Noise in Practice: GLSL Implementations

Well, I said I would talk about Perlin noise in GLSL, and now I’m going to do it.

You can use Perlin noise as a wave, as a diffuse color, as a diffuse material, as a flickering light, and as some speckles on your texture. I personally used it in this instance as a color fizzler.

 

As I’m writing this, I’m contemplating an After Effects plugin which adds Perlin Noise functionality.

The simplest Perlin-style noise can be constructed10 as follows (strictly speaking, it interpolates random values at the lattice points rather than gradients, i.e. value noise, but it is the version most commonly passed around):

#define PI 3.14159265358979323846
// screenWidth is assumed to be supplied by the application
uniform float screenWidth;

float rand(vec2 c){
	return fract(sin(dot(c.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

float noise(vec2 p, float freq ){
	float unit = screenWidth/freq;
	vec2 ij = floor(p/unit);
	vec2 xy = mod(p,unit)/unit;
	//xy = 3.*xy*xy-2.*xy*xy*xy;
	xy = .5*(1.-cos(PI*xy));
	float a = rand((ij+vec2(0.,0.)));
	float b = rand((ij+vec2(1.,0.)));
	float c = rand((ij+vec2(0.,1.)));
	float d = rand((ij+vec2(1.,1.)));
	float x1 = mix(a, b, xy.x);
	float x2 = mix(c, d, xy.x);
	return mix(x1, x2, xy.y);
}

// Fractal (fBm) noise: sum `res` octaves, each with twice the
// frequency and half the amplitude of the previous one
float pNoise(vec2 p, int res){
	float persistence = .5;
	float n = 0.;
	float normK = 0.;
	float f = 4.;
	float amp = 1.;
	int iCount = 0;
	// GLSL ES requires constant loop bounds, hence the fixed cap of 50 octaves
	for (int i = 0; i<50; i++){
		n+=amp*noise(p, f);
		f*=2.;
		normK+=amp;
		amp*=persistence;
		if (iCount == res) break;
		iCount++;
	}
	// normalize, then raise to the 4th power to sharpen the contrast
	float nf = n/normK;
	return nf*nf*nf*nf;
}
Here is another variant from the same gist, this time with a time parameter for animated noise:

#define M_PI 3.14159265358979323846

float rand(vec2 co){return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);}
float rand (vec2 co, float l) {return rand(vec2(rand(co), l));}
float rand (vec2 co, float l, float t) {return rand(vec2(rand(co, l), t));}

float perlin(vec2 p, float dim, float time) {
	vec2 pos = floor(p * dim);
	vec2 posx = pos + vec2(1.0, 0.0);
	vec2 posy = pos + vec2(0.0, 1.0);
	vec2 posxy = pos + vec2(1.0);
	
	float c = rand(pos, dim, time);
	float cx = rand(posx, dim, time);
	float cy = rand(posy, dim, time);
	float cxy = rand(posxy, dim, time);
	
	vec2 d = fract(p * dim);
	// cosine interpolation weights between the four corner values
	d = -0.5 * cos(d * M_PI) + 0.5;
	
	float ccx = mix(c, cx, d.x);
	float cycxy = mix(cy, cxy, d.x);
	float center = mix(ccx, cycxy, d.y);
	
	return center * 2.0 - 1.0;
}

// p must be normalized!
float perlin(vec2 p, float dim) {
	// The gist also contains an exclusively black/white variant that
	// thresholds each corner value, e.g. float c = step(rand(pos, dim), 0.5);
	// otherwise this overload is identical to the time-varying version above,
	// with time fixed at zero:
	return perlin(p, dim, 0.0);
}

That is, however, a version of Perlin noise as it was reworked in 2002. Visit the gist to see how to apply Classic Perlin noise.

Well, that is it for today! A short post, I know, and it was lacking in original content, but I am running out of ideas as of now, because I haven’t read Real-Time Rendering yet. It’s a book that’s chock-full of concepts and ideas for learning, and for teaching. I love this damn book!

 

Another book I’m currently reading is Fundamentals of Computer Graphics. I got a little stuck reading about implicit curves, but I’m going to ask my cousin, who holds a PhD in mathematics, to help me with it. I hope he does, because he has a tendency to turn down my requests for intellectual assistance. Dunno, really. Aren’t cousins supposed to help each other? I would write him a visualization app in a minute if he asked… By the way.

 

Let me tell you something. Please introduce my blog to your friends, because I want more visitors, and I want to reach more people. If they don’t want to visit my blog, tell them: “It’s as great as a queen who is virgo intacta and wants your whore!”

Dammit, I suck at French (that plea was originally written in my own broken French); at least I didn’t use Google Translate. Chubak out.

References

  • Lagae, A., et al., “A Survey of Procedural Noise Functions”, Computer Graphics Forum, Volume 29 (2010), Number 8, pp. 2579-2600
  • McEwan, I., et al., “Efficient Computational Noise in GLSL”, Journal of Graphics Tools, Volume 16 (2012), Number 2, pp. 85-94

Good Evening everyone! I hope you’re all doing A-Ok. Yesterday I received a PM:

Hey. Do you know anything about game rendering? I’ve seen your blog you must know something about game rendering.

I think this person is referring to the graphics pipeline, which different APIs call by different names. “The main function of the pipeline is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, and more”.1 Now, SIMD architecture, with its many cores, makes parallel calculation possible, and it is de facto the main reason we have GPUs today. The graphics pipeline is made up of shaders. Shaders are parallel programs that live in the GPU. OpenGL has five shader stages, and about seven application stages, in its pipeline.2 But what are the stages of this pipeline? What does pipeline even mean? Read this article and find out.

The History of the Word “Pipeline”

The word pipeline became popular when gas lamps were invented. “The first Canadian transmission pipeline was built in 1853. A 25 kilometre cast-iron pipe moving natural gas to Trois Rivières, QC. It was the longest pipeline in the world at the time.” 3

 

Source in Footnotes

What does pipeline mean? Simple. Imagine you wish to transmit something from one stage to another stage, then another stage, and so on, until the output is unrecognizable compared to the input. For example, in OpenGL, this can be the input:

 

float vertices[] = {
    // first triangle
     0.5f,  0.5f, 0.0f,  // top right
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f,  0.5f, 0.0f,  // top left 
    // second triangle
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f, -0.5f, 0.0f,  // bottom left
    -0.5f,  0.5f, 0.0f   // top left
}; 

And this can be the output:

 

Photo and code courtesy of Joey De Vries at Learnopengl.com

Some stages of the OpenGL pipeline are mandatory; some stages are optional. You can see this pipeline in the following picture.

 

By Yours Truly

Based on two books, The OpenGL Red Book, 9th Edition, and Real-Time Rendering, 4th Edition, I wish to explain this so-called pipeline to you. Turn off your phones, and pay attention!

1. Vertex Data

We saw an example of vertex attributes in the first code in this very post. In OpenGL, vertices are recorded in the homogeneous coordinate system, a four-component coordinate system (x, y, z, w). I’ll talk about this coordinate system in another post, very soon. Keep in mind that vertices can be hardcoded in the shader, but most of the time, they are passed to the vertex shader as a Vertex Buffer Object (VBO).

2. Vertex Shader

Pièce de résistance, the first shader we deal with, and perhaps the most important shader. In the vertex shader, we access the data given to us through Vertex Buffer Objects in the main program using attributes, and uniforms passed to the shader are indexed using binding indices. What are uniforms, you may ask? Well, the vertex shader runs once for each vertex, and each invocation sees quite different values; a uniform, however, is constant across all of them. Each shader has its own uniforms, and you can’t pass a uniform from one shader to another. However, other variables can be passed from one shader to another: variables qualified by the keyword in are received from the previous stage, and variables qualified by out are passed to the next one.

In the vertex shader, three matrices are multiplied into the outgoing gl_Position. One is the model matrix, the other is the view matrix, and at the end, we have the projection matrix. You can clearly see them in the following picture:

Courtesy of Joey De Vries at Learnopengl.com

The model matrix places the object in the world, the view matrix places the world in front of the camera, and the projection matrix projects the scene onto the screen. One day, I’ll talk in detail about various camera projections.
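To make this concrete, here is a minimal vertex shader sketch in the style of the Learnopengl.com examples. The attribute, out, and uniform names (aPos, aTexCoord, TexCoord, model, view, projection) are placeholders of my choosing, not names fixed by OpenGL:

#version 330 core
layout (location = 0) in vec3 aPos;       // attribute fed from a VBO
layout (location = 1) in vec2 aTexCoord;  // attribute fed from a VBO

out vec2 TexCoord;  // passed on to the next shader stage

// uniforms: constant across all invocations, set by the application
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main() {
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}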

 

3. Tessellation Control Shader

“Imagine you have a bouncing ball object. If you represent it with a single set of triangles, you can run into problems with quality or performance. Your ball may look good from 5 meters away, but up close the individual triangles, especially along the silhouette, become visible. If you make the ball with more triangles to improve quality, you may waste considerable processing time and memory when the ball is far away and covers only a few pixels on the screen. With tessellation, a curved surface can be generated with an appropriate number of triangles.”4

Tessellation is basically the process of breaking higher-order primitives into many small sub-primitives in order to create more detailed geometry. In OpenGL, these higher-order primitives are called patches. The TCS is the first of the two tessellation shaders; a minimal sketch of one follows, assuming quad patches and a hard-coded tessellation level (a real application would, say, scale the levels with the distance to the camera).
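#version 410 core
layout (vertices = 4) out;  // size of the output patch

void main() {
    // pass the patch's control points through unchanged
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // set the tessellation levels once per patch
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelOuter[3] = 4.0;
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelInner[1] = 4.0;
    }
}

The other tessellation shader is…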

 

4. Tessellation Evaluation Shader

Once the tessellation engine does its job, it produces a number of generated vertices that add detail to the given patch. The Tessellation Evaluation Shader is invoked once for each of these generated vertices, which can add a lot of overhead. That’s why programmers shouldn’t be too generous with the tessellation levels set in the TCS.
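A matching TES sketch, again assuming quad patches. It runs once per generated vertex, and bilinearly interpolates the four control points at the position the tessellation engine hands it in gl_TessCoord (the corner ordering is an assumption; it depends on how the application submits the patch):

#version 410 core
layout (quads, equal_spacing, ccw) in;

void main() {
    // bilinear interpolation of the patch corners at this generated vertex
    vec4 p0 = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);
    vec4 p1 = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);
    gl_Position = mix(p0, p1, gl_TessCoord.y);
}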

 

5. Geometry Shader

The Geometry Shader looks like the Vertex Shader, and uses gl_Position, but it’s responsible for creating multiple instances of a primitive through EmitVertex() and EndPrimitive(). “Each time we call EmitVertex the vector currently set to gl_Position is added to the primitive. Whenever EndPrimitive is called, all emitted vertices for this primitive are combined into the specified output render primitive. By repeatedly calling EndPrimitive after one or more EmitVertex calls multiple primitives can be generated. This particular case emits two vertices that were translated by a small offset from the original vertex position and then calls EndPrimitive, combining these two vertices into a single line strip of 2 vertices.”5
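Here is the geometry shader that quote describes, adapted from Joey De Vries’ example at Learnopengl.com:

#version 330 core
layout (points) in;
layout (line_strip, max_vertices = 2) out;

void main() {
    // emit two vertices, each offset slightly from the input point
    gl_Position = gl_in[0].gl_Position + vec4(-0.1, 0.0, 0.0, 0.0);
    EmitVertex();

    gl_Position = gl_in[0].gl_Position + vec4(0.1, 0.0, 0.0, 0.0);
    EmitVertex();

    // combine the two emitted vertices into a single line strip
    EndPrimitive();
}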

 

6. Primitive Assembly

Primitive Assembly is a short stage. It is basically the grouping of primitives into lines and triangles. You may ask, well, what about points? Can’t they be assembled? The answer to your question is: yes, it does happen for points too, but for them it’s redundant.

7. Clipping

After gl_Position is converted to Cartesian coordinates and gets normalized, meaning every coordinate is brought into the range between -1 and 1, it is clipped for the screen. Meaning, the primitives which are not inside the view volume produced by the projection matrix are discarded. This stage is very important for overall performance.

8. Culling

This is another step taken for performance. Each 3D primitive has a face turned toward the camera, while its back is out of the camera’s view, so it’s entirely useless to render the back of the primitive as well. Culling is the process of discarding the back in favour of the face of each primitive.

9. Rasterization

Rasterization is the process of converting the 3D objects in the computer’s memory into 2D objects to be displayed on the monitor, or rendered to a file. In other words, rasterization is the process of determining which fragments might be covered by a triangle or a line. Rasterization has two stages: triangle setup and triangle traversal. In the former, “… edge equations, and other data for the triangle are computed. These data may be used for triangle traversal, as well as for interpolation of the various shading data produced by the geometry stage. Fixed function hardware is used for this task.”6 In the latter, each pixel, or sample, covered by a triangle is found, and a fragment is generated for each such pixel or sample. If the number of samples is low, aliasing happens. The shading data are interpolated across the triangles, and the fragments are sent to the next stage, which is the most important stage: the fragment shader.
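Triangle setup and traversal happen in fixed-function hardware, but the edge-equation idea behind them is easy to sketch. A sample is inside a triangle when all three edge functions agree in sign; here is an illustration in GLSL-style code (assuming counter-clockwise winding), not something you would actually write in a shader:

// 2D cross product (b - a) x (p - a); > 0 if p lies to the left of edge a -> b
float edgeFunction(vec2 a, vec2 b, vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// a point is covered by a CCW triangle if it is left of (or on) all three edges
bool insideTriangle(vec2 v0, vec2 v1, vec2 v2, vec2 p) {
    return edgeFunction(v0, v1, p) >= 0.0 &&
           edgeFunction(v1, v2, p) >= 0.0 &&
           edgeFunction(v2, v0, p) >= 0.0;
}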

10. Fragment Shader

Crème de la crème, it is the last programmable stage in the pipeline, and does operations such as coloring, texturing, shadowing, antialiasing, raytracing (if possible), etc. Perhaps the most important thing done in this stage is texturing.

 

Texturing, picture courtesy of Real-Time Rendering, 4th Edition

If you are interested in learning more about fragment shaders and what you can do in them, you can read The Book of Shaders, a free internet book that, although outdated, teaches a lot of tricks of the trade.
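A minimal fragment shader sketch that just does texturing, matching the vertex shader sketched earlier; the sampler name ourTexture is a placeholder, and the texture itself is bound by the application:

#version 330 core
in vec2 TexCoord;   // interpolated across the triangle by the rasterizer
out vec4 FragColor; // the color written to the color buffer

uniform sampler2D ourTexture;  // placeholder name; bound by the application

void main() {
    FragColor = texture(ourTexture, TexCoord);
}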

11. Z-Buffer, Stencil Buffer, and Color Buffer

The Z-Buffer is the depth buffer of the program, and it is typically an attachment of an FBO (framebuffer object). An FBO can also hold a stencil buffer, which, as the name suggests, masks out parts of the screen. And an FBO can also hold a color buffer, which contains the rendered colors, including the background color of the program.

12. Compute Shaders, and Beyond!

Disney don’t sue please.

Compute Shaders are a part of GPGPU, or General-Purpose computing on GPUs. GPUs, with their SIMD architecture, are the bee’s knees for high-performance computing. So in modern OpenGL, Khronos has provided us with a way to harness the power of the GPU using shaders. Compute Shaders use GLSL, and are good for things like image processing. However, Compute Shaders are not a part of the pipeline, and in fact, a shader program cannot contain both compute and graphics shaders. A regular application, however, can contain both.
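A minimal compute shader sketch for the kind of image processing mentioned above; it inverts the colors of an image. The binding point 0 and the 16×16 work-group size are arbitrary choices, and the image is bound by the application:

#version 430 core
layout (local_size_x = 16, local_size_y = 16) in;   // threads per work group
layout (rgba32f, binding = 0) uniform image2D img;  // image bound by the application

void main() {
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
    vec4 color = imageLoad(img, pixel);
    imageStore(img, pixel, vec4(1.0 - color.rgb, color.a));  // invert RGB, keep alpha
}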

There are many things to do after Fragment Shader. Such as collision detection, animation, physics, music, etc. But they are not a part of OpenGL.

 

So that’s it! I hope you’ve enjoyed this post. Since there was a brouhaha last night about me not citing my sources, this time I made sure to cite everyone. People were right, I should have cited them. Thanks, people!

Please share my post with your interested friends. It goes a long way. Thank you, Chubak Out.