The following article is based on a paper by Jorge Jimenez, Jose I. Echevarria, Tiago Sousa and Diego Gutierrez.

You can view their SMAA demo here. It runs fine on my GTX 960 2GB.

Old Methods of Antialiasing

For years, MSAA (Multisample Antialiasing) and SSAA (Supersampling Antialiasing) have been the de facto methods of antialiasing. In fact, these two still retain the highest quality amongst the various modern antialiasing methods. As we know, aliasing is caused by a lack of samples, at the spatial level (jagged lines) and at the temporal level (flickering), usually around the edges and high/low-contrast regions of the picture. To battle these, we have two main ways that were once the only ways around: Supersampling and Multisampling. In Supersampling, we blow up the picture, then downsample it for the final resolution. It works fine because, as my uncle puts it, it’s a pincer attack: it covers all the bases of the problem and surrounds it. Multisampling is similarly pincer-ish: in this method, each pixel is sampled multiple times according to the given coefficient (2x, 4x, 8x, and so on). At today’s high resolutions, either would require a rather fiendish graphics card. Therefore, we need new methods of antialiasing, at both the spatial and the temporal level. All these methods rely on one algorithm to do their job, an edge detection algorithm, but they rely on other things as well.
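
To make the supersampling idea concrete, here’s a minimal sketch of a 2x resolve pass in GLSL. This is my own illustration, not from the paper: it assumes the scene was already rendered into a hypothetical hires_tex at twice the target resolution, and each output pixel simply averages the four samples that cover it.

#version 410 core

// Minimal sketch of a 2x2 supersampling resolve (hypothetical names)
uniform sampler2D hires_tex;   // assumed: color buffer at 2x resolution

layout (location = 0) out vec4 color;

void main(void)
{
    // Map this output pixel to its 2x2 block in the high-res image
    ivec2 base = ivec2(gl_FragCoord.xy) * 2;

    vec4 sum = texelFetch(hires_tex, base,               0)
             + texelFetch(hires_tex, base + ivec2(1, 0), 0)
             + texelFetch(hires_tex, base + ivec2(0, 1), 0)
             + texelFetch(hires_tex, base + ivec2(1, 1), 0);

    // Box-filter downsample: the average of the four covering samples
    color = sum * 0.25;
}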


Modern Ways of Antialiasing

There are many modern filter-based methods of antialiasing which all, although inferior in quality to MSAA and SSAA, do their job: FXAA, DEAA, GPAA, GBAA, CSAA, EQAA, DLAA… In this article, we’ll talk about SMAA and its predecessor, MLAA. These modern filter-based methods have their own problems:

  • Most edge detection methods, which are the basis of these techniques, only take into account numerical differences between pixels, ignoring how they appear to the viewer.
  • The original shape of the object is not always preserved; an overall rounding of corners is often clearly visible in text, sharp corners, and subpixel features.
  • Most approaches are designed to handle horizontal or vertical patterns only, ignoring diagonals.
  • Real subpixel features and subpixel motion are not properly handled, and specular and shading aliasing is not completely removed.

You’ve guessed right: we raise these issues because we aim to decimate them.

Morphological Antialiasing (MLAA)

MLAA tries to estimate the coverage of the original geometry. To accurately rasterize an antialiased triangle, the coverage area for each pixel inside the triangle must be calculated so it can be blended properly with the background. MLAA begins with the aliased image and reverses the process by re-vectorizing the silhouettes, in order to estimate those coverage areas. Then, since the background cannot be known after rasterization, MLAA blends with a neighbor, assuming that its value is similar to the original background. In other words, the algorithm detects borders (using either color or depth information) and then finds specific patterns in them; antialiasing is achieved by blending the border pixels intelligently. MLAA has implementations in DirectX 10 and MonoGame (XNA). Games such as Fable II use it faithfully. From the creators of MLAA comes SMAA, or Enhanced Subpixel Morphological Antialiasing, which is the main subject of this post.
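
To make that last blending step concrete, here’s a minimal GLSL sketch with hypothetical names (color_tex, offset, a); in the real pipeline, the coverage area a comes from a precomputed area texture rather than being passed in.

// Sketch of the MLAA blending step (hypothetical names, not the real code).
// `a` is the estimated coverage area for this pixel, and `offset` points to
// the neighbor on the other side of the detected edge.
vec4 blend_edge(sampler2D color_tex, vec2 uv, vec2 offset, float a)
{
    vec4 c_cur = texture(color_tex, uv);           // current pixel
    vec4 c_opp = texture(color_tex, uv + offset);  // neighbor across the edge
    return mix(c_cur, c_opp, a);                   // blend by coverage area
}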

MLAA in action

Enhanced Subpixel Morphological Antialiasing (SMAA)


Comparison between SMAA and other methods in Crysis 2

SMAA offers reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. Besides, SMAA doesn’t change the shape of the geometry, as many other methods do.


Top) No AA

Middle) MLAA

Bottom) SMAA

SMAA builds on the MLAA pipeline, improving or redefining it at every step. In particular, edge detection is improved by using color information with local contrast adaptation for cleaner edges. It extends the number of patterns handled, to preserve sharp geometric features and diagonals. And lastly, it shows how morphological antialiasing can be accurately combined with multisampling or supersampling and temporal reprojection.

Edge Detection

Edge detection is vital, because undetected edges remain aliased. On the other hand, too many filtered edges can reduce the quality of the antialiased image. Different information can be used to detect the edge: chroma, luma, depth, surface normal, or a combination of them. SMAA uses luma, for four reasons:

  1. Fewer artifacts.
  2. Luma is always visible.
  3. It can handle shading aliasing.
  4. And finally, it’s faster than chroma.


Left and Center: other methods of edge detection, causing crossing artifacts (shown in red)

Right: SMAA edges, clean as a whistle

Have the image above in mind. Here’s how edge detection works: the final calculated value is a boolean, e_l^\prime, called the left edge boundary. The boolean value for the top edge is calculated similarly. The formula is:

c_{max} = max\left(c_l, c_r, c_b, c_t, c_{2l}\right)

e_l^\prime = e_l \wedge \left(c_l > 0.5 \cdot c_{max}\right)

All the c values are called contrast deltas: the absolute luma differences between neighboring pixels (left, right, bottom, top, and two pixels to the left).
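
Putting the formula into practice, here’s a minimal GLSL sketch of luma edge detection with local contrast adaptation, for the left edge only. The names (color_tex, texel) and the 0.1 threshold are my assumptions; the actual SMAA reference implementation handles left and top edges together and with far fewer fetches.

// Sketch of SMAA-style luma edge detection for the left edge (assumed names)
float luma(vec3 c)
{
    return dot(c, vec3(0.2126, 0.7152, 0.0722));   // standard luma weights
}

bool left_edge(sampler2D color_tex, vec2 uv, vec2 texel)
{
    float l    = luma(texture(color_tex, uv).rgb);
    float l_l  = luma(texture(color_tex, uv + vec2(-1.0,  0.0) * texel).rgb);
    float l_r  = luma(texture(color_tex, uv + vec2( 1.0,  0.0) * texel).rgb);
    float l_b  = luma(texture(color_tex, uv + vec2( 0.0, -1.0) * texel).rgb);
    float l_t  = luma(texture(color_tex, uv + vec2( 0.0,  1.0) * texel).rgb);
    float l_2l = luma(texture(color_tex, uv + vec2(-2.0,  0.0) * texel).rgb);

    // Contrast deltas: absolute luma differences with the neighbors
    float c_l  = abs(l - l_l);
    if (c_l < 0.1)                      // e_l: the plain threshold test
        return false;

    float c_r  = abs(l - l_r);
    float c_b  = abs(l - l_b);
    float c_t  = abs(l - l_t);
    float c_2l = abs(l_l - l_2l);

    // Local contrast adaptation: keep the edge only if it dominates
    float c_max = max(c_l, max(max(c_r, c_b), max(c_t, c_2l)));
    return c_l > 0.5 * c_max;           // e'_l
}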

Pattern Handling

SMAA pattern detection preserves sharp geometric features like corners, deals with diagonals, and enables accurate distance searches.

Sharp Geometric Features: The re-vectorization of silhouettes in MLAA tends to round corners. To avoid this, SMAA makes the observation that crossing edges in contour lines have a maximum size of one pixel, whereas for sharp corners this length will most likely be longer. Thus, SMAA fetches two-pixel-long crossing edges instead, which allows less aggressive corner processing.

Diagonal Patterns: The paper introduces a novel diagonal pattern detection. It consists of the following steps:

  1. Search for the diagonal distances d_l and d_r to the left and the right of the diagonal line.
  2. Fetch the crossing edges e_1 and e_2.
  3. Use this input information, which defines the specific diagonal pattern, to access the precomputed area texture, yielding the areas a_t and a_b.

If the diagonal pattern detection fails, then the orthogonal detection is triggered.
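
Here’s a loose GLSL sketch of the distance search in step 1. It’s my own illustration, not the reference implementation: edges_tex (storing the left/top edge booleans in its red and green channels) and the 16-pixel cap are assumptions.

// Loose sketch of the diagonal distance search (step 1 above).
// Walks along a diagonal direction `dir`, e.g. vec2(-1.0, 1.0), for as long
// as the edge flags stay set.
float search_diag(sampler2D edges_tex, vec2 uv, vec2 dir, vec2 texel)
{
    float d = 0.0;
    while (d < 16.0)   // assumed cap on the search distance
    {
        vec2 e = texture(edges_tex, uv + (d + 1.0) * dir * texel).rg;
        if (e.r < 0.9 || e.g < 0.9)
            break;     // the diagonal line ends here
        d += 1.0;
    }
    return d;
}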

Accurate Distance Search: Key to pattern detection and classification is obtaining accurate edge distances (the lengths to both ends of the line). MLAA makes extensive use of hardware interpolation to speed up this process: hardware bilinear filtering can be used as a way of fetching and encoding up to four different values with a single memory access. The linear interpolation of two binary values, producing a single floating-point value, is:

f_x(b_1, b_2, x) = x \cdot b_1 + (1 - x) \cdot b_2

Where b_1 and b_2 are two binary values (either 0 or 1) and x is the interpolation value.
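
To see why one linear-filtered fetch can carry two booleans, consider sampling a quarter of the way between two texels. The following sketch (again my own, with assumed names) decodes both binary values from the single returned float.

// Sketch of the bilinear-fetch trick (assumed names, not the SMAA code).
// `edges_tex` must use LINEAR filtering and store binary (0 or 1) values.
// Sampling 0.25 texels toward the right neighbor returns
//     v = 0.75 * b1 + 0.25 * b2,
// so the four (b1, b2) cases map to four distinct values:
//     (0,0) -> 0.0, (0,1) -> 0.25, (1,0) -> 0.75, (1,1) -> 1.0
vec2 decode_two_edges(sampler2D edges_tex, vec2 uv, vec2 texel)
{
    float v = texture(edges_tex, uv + vec2(0.25, 0.0) * texel).r;

    float b1 = step(0.5, v);                // 1.0 when v is 0.75 or 1.0
    float b2 = step(0.125, v - 0.75 * b1);  // 1.0 when v is 0.25 or 1.0
    return vec2(b1, b2);
}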

Results

MLAA works with a single sample per pixel. This translates to subsampling, which makes it impossible to recover real subpixel features.


MLAA vs SMAA vs No AA

SMAA, however, works at the subpixel level. It improves on MLAA with:

  • Local contrast
  • Diagonal pattern detection
  • Sharp geometric features
  • Accurate searches

You can see all of these in the following image, where these features are compared with other methods. In fact, SMAA can produce results close to SSAA 16x.


The overhead produced by each of the solutions is negligible: local contrast adaptation takes only 0.08 ms, the sharp geometric features detection and accurate distance search take 0.01 ms, and diagonals processing produces an overhead of 0.12 ms. In short, SMAA is rather fast; it doesn’t quite reach the quality of SSAA and MSAA, but it comes close while being far less resource-intensive.

Well, thanks for reading the article! And thanks to the writers of the paper on which I based the majority of this article. I hope you guys have a good day, and also, go on reading scientific articles on your own. It’s simple: just head to libgen.io and search for what you like — not necessarily graphics programming. Read, read and read! Don’t watch too many YouTube tutorials; it kills your senses. I can’t stress that enough. I don’t intend to tell you what to do; these are all just suggestions. I’m currently studying Structured Computer Organization and enjoying it very, very much. I recommend it to everyone, even as bedtime reading.

Please, please, please tweet my blog, introduce it to your friends, and share it with people whom you want to enjoy life and learn about graphics programming. Thank you, thanks a lot. Chubak out.



We do not expect to be able to display the object exactly as it would appear in reality, with texture, overcast shadows, etc. We hope only to display an image that approximates the real object closely enough to provide a certain degree of realism.


Bui Tuong Phong

Bui Tuong was born in 1941 and became a computer scientist amid the upheaval of the Vietnam War. It must have been pretty difficult for him to complete his education in the toxic environment of the ’60s, let alone be drafted to fight in the country of his forebears! But he managed to stay alive until 1975, when leukemia took his life, just two short years after he gave us the basis of light and shading today: the Phong shader. Vietnamese names are composed of three parts: the paternal name, the middle name, and the given name. All along, when people say “Phong shader”, they have been referring to Bui Tuong’s given name. Read more about given names on Wikipedia.

Not sure if it’s Phong, but if Google is to be believed, it’s him.


“Softly let the balmy sunshine
Play around my dying bed,
E’er the dimly lighted valley
I with lonely feet must tread. “


Let The Light Enter – Poem by Frances Ellen Watkins Harper

There’s some extremely concise math behind the Phong shader, which you need not know, really, unless you aspire to be a graphics programmer like yours truly. However, knowing it, or even glancing at it, will be beneficial in the long run. The following is extracted from OpenGL Superbible, 7th Edition by Graham Sellers and the Khronos Group ARB Foundation.


A) Some Concepts

First, let me get some concepts out of the way. If you have been 3D modelling or game developing for more than a minute, you will certainly have been in their line of fire, but a reminder won’t hurt.


A-1) Ambient Light

Most books, especially rather shoddy ones, compare this sort of light with sunlight; however, this is simply wrong. Ambient light is not sunlight. It comes from every direction, meaning it’s omnipresent, but in calculations it’s just a vector with 3 components. In Phong shading, it gets added at the end, but is not modified.


A-2) Diffuse Light

Diffuse light has a direction. In fact, it’s the directional component of a light source [sic]. In cinema, light is diffused with a softbox, and in computer graphics, light is diffused with the formula we’ll put forward later. The magnitude, that is, the size of the diffuse light depends on the surface: for example, if our surface is a matte one that absorbs more light than it reflects, the magnitude must be higher than when our surface is glossier.

Reflection/Absorption of Diffuse Light from a Matte Display
A-3) Specular Highlight

Like diffuse light, specular light is directional, but based on the glossiness of the surface, it leaves a highlight; the factor controlling it is called shininess. In real life, shininess is not an inherent part of the material; in fact, a coat of film, or a speck of wax, will add more to its shininess than anything else ever could. Specular shininess is a factor set between 0 and 128, because beyond 128 it won’t affect the shader much.

“Coated Paper”, construction paper with a film of gloss over them. Childhood Paradise.
A-4) Albedo

It’s the proportion of the incident light which is reflected away by the surface.

A-5) Phong Formula

The formula for calculating a Phong material is as follows:

I_p = k_a i_a + k_d\left(\vec{L}\cdot\vec{N}\right)i_d + k_s\left(\vec{R}\cdot\vec{V}\right)^\alpha i_s

Where:

k_a: Ambient material

k_d: Diffuse material

k_s: Specular material

\alpha: Shininess factor

i_a: Ambient light

i_d: Diffuse light

i_s: Specular light

You may ask, what about the vectors? Well, no worries, we’ll cover them now:

\vec{N}: Surface normal

\vec{L}: Unit vector from the point being shaded into the light (in other words, the light vector)

\vec{R}: The reflection of the negative of the light vector

\vec{V}: Vector to the viewer
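
Before we get to full shaders, here’s how the formula’s terms map to GLSL, as a small sketch of my own. The light intensities i_a, i_d and i_s are folded into the k colors here, which is also what the listings below do.

// The Phong formula, term by term (vectors assumed already normalized)
vec3 phong(vec3 k_a, vec3 k_d, vec3 k_s, float alpha, vec3 N, vec3 L, vec3 V)
{
    vec3 R = reflect(-L, N);                        // reflection of -L around N
    return k_a                                      // k_a * i_a  (ambient)
         + k_d * max(dot(N, L), 0.0)                // k_d * (L.N) * i_d
         + k_s * pow(max(dot(R, V), 0.0), alpha);   // k_s * (R.V)^alpha * i_s
}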


B) Gouraud Shading

Before stampeding towards the Phong shader, let’s show how you can achieve Gouraud shading in GLSL. Note that I’m using version 4.1 of GLSL, as per the Superbible’s instructions; however, you may be a fan of www.learnopengl.com and use 3.3. Doesn’t matter, same thing. So let’s see what Gouraud shading is. Is it edible, is it a suppository, or is it some sort of a sex toy?

This method of shading was invented by Henri Gouraud in 1971. It’s not superior to Phong shading by any means, and these days it’s mostly used as a less GPU-intensive preview method in software such as Cinema 4D. The problem is that the glint it generates looks like a spark, as shown in the picture below:


This is caused by interpolating the colors between the vertices: because the colors are interpolated linearly, there are discontinuities between triangles, which only Phong shading solves. Let’s see how we can do Gouraud shading in GLSL 4.1.

Listing 1: Per-vertex Gouraud Shading in GLSL 4.1

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Light and material properties
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

// Outputs to the fragment shader
out VS_OUT
{
    vec3 color;
} vs_out;

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view space
    vec3 N = mat3(mv_matrix) * normal;
    // Calculate view-space light vector
    vec3 L = light_pos - P.xyz;
    // Calculate view vector (simply the negative of the view-space position)
    vec3 V = -P.xyz;

    // Normalize all three vectors
    N = normalize(N);
    L = normalize(L);
    V = normalize(V);
    // Calculate R by reflecting -L around the plane defined by N
    vec3 R = reflect(-L, N);

    // Calculate the diffuse and specular contributions
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Send the color output to the fragment shader
    vs_out.color = ambient + diffuse + specular;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}

And now, the fragment shader.

Listing 2: Fragment shader of the same concept.


#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 color;
} fs_in;

void main(void)
{
    // Write incoming color to the framebuffer
    color = vec4(fs_in.color, 1.0);
}

C) Phong Shading

Before going forward, keep in mind that Phong shading and Phong lighting are two different concepts. You can get rid of Gouraud’s “starburst” by using more vertices, but why do that when Phong shading is around? In Phong shading, instead of interpolating the color between vertices (as seen in Listings 1 and 2), we interpolate the surface normals between the vertices, and then use the generated normal to perform the entire lighting calculation for each pixel, not each vertex. However, this means more work in the fragment shader, as we’ll see in Listing 4. But first, let’s see the vertex shader.

Listing 3: Phong shader’s vertex shader in GLSL 4.1.

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Outputs to the fragment shader
out VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} vs_out;

// Position of light
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view-space
    vs_out.N = mat3(mv_matrix) * normal;

    // Calculate light vector
    vs_out.L = light_pos - P.xyz;

    // Calculate view vector
    vs_out.V = -P.xyz;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}

Nothing much has changed. But the same isn’t true for the fragment shader.

Listing 4: Phong shading’s fragment shader.


#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} fs_in;

// Material properties
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

void main(void)
{
    // Normalize the incoming N, L and V vectors
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);

    // Calculate R locally
    vec3 R = reflect(-L, N);

    // Compute the diffuse and specular components for each fragment
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Write final color to the framebuffer
    color = vec4(ambient + diffuse + specular, 1.0);
}

Well, that is it for this lesson. I hope you enjoyed it. If this has sparked your interest in OpenGL, you can buy OpenGL Superbible from Amazon, or head to learnopengl.com. If you can’t make heads or tails of the shaders, I recommend The Book of Shaders. Please leave a comment. Thank you.