The following article is based on a paper by Jorge Jimenez, Jose J. Echevarria, Tiago Sousa and Diego Gutierrez.

You can view their SMAA demo here. It runs fine on my GTX 960 2GB.

Old Methods of Antialiasing

For years, MSAA (Multisample Antialiasing) and SSAA (Supersample Antialiasing) have been the de facto methods of antialiasing. In fact, these two still retain the highest quality amongst the various modern antialiasing methods. As we know, aliasing is caused by a lack of samples, at the spatial level (jagged lines) and at the temporal level (flickering), usually around the edges and high-contrast regions of the picture. To battle these, we have the two methods that were once the only way around the problem, Supersampling and Multisampling. In Supersampling, we render the picture at a higher resolution, then downsample it to the final resolution. It works fine because, as my uncle puts it, it's a pincer attack: it covers every basis of the problem and surrounds it. Multisampling is similarly pincer-ish. In this method, each pixel is sampled multiple times, based on the given coefficient. At today's high resolutions, this requires a rather fiendish graphics card. Therefore, we need new methods of antialiasing, at both the spatial and the temporal level. All the modern methods rely on one algorithm to do their job, an edge detection algorithm, but they rely on other things as well.
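To make the supersampling idea concrete, here's a minimal sketch of the resolve (downsampling) step as a GLSL fragment shader. This is my own illustration, not from the paper; it assumes the scene was rendered at twice the target resolution into a texture I call hi_res_tex.

#version 330 core

// SSAA resolve sketch: average each 2x2 block of the high-resolution
// render down to a single output pixel (a simple box filter).
uniform sampler2D hi_res_tex;

in vec2 uv;
out vec4 color;

void main(void)
{
    // One texel step in the high-resolution source.
    vec2 texel = 1.0 / vec2(textureSize(hi_res_tex, 0));

    vec4 sum = texture(hi_res_tex, uv)
             + texture(hi_res_tex, uv + vec2(texel.x, 0.0))
             + texture(hi_res_tex, uv + vec2(0.0, texel.y))
             + texture(hi_res_tex, uv + texel);

    color = sum * 0.25;
}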

 

Modern Ways of Antialiasing

There are many modern filter-based methods of antialiasing which all, although inferior in quality to MSAA and SSAA, do their job: FXAA, DEAA, GPAA, GBAA, CSAA, EQAA, DLAA… In this article, we'll talk about SMAA, and its predecessor, MLAA. These modern filter-based methods have their own problems:

  • Most edge detection methods, which are the basis of these techniques, only take into account numerical differences between pixels, ignoring how they appear to the viewer.
  • The original shape of the object is not always preserved; an overall rounding of corners is often clearly visible in text, sharp corners, and subpixel features.
  • Most approaches are designed to handle horizontal or vertical patterns only, ignoring diagonals.
  • Real subpixel features and subpixel motion are not properly handled. Specular and shading aliasing is not completely removed.

You've guessed right: we raise these issues because we aim to eliminate them.

Morphological Antialiasing (MLAA)

MLAA tries to estimate the coverage of the original geometry. To accurately rasterize an antialiased triangle, the coverage area for each pixel inside the triangle must be calculated, so it can be blended properly with the background. MLAA begins with an image without antialiasing and reverses the process by re-vectorizing the silhouettes, in order to estimate such coverage areas. Then, since the background cannot be known after rasterization, MLAA blends with a neighbor, assuming that its value is similar to the original background. In other words, the algorithm detects borders (using either color or depth information) and then finds specific patterns in these. Antialiasing is achieved by blending pixels in the borders intelligently. MLAA has implementations in DirectX 10 and MonoGame (XNA), and games such as Fable II use it faithfully. From the creators of MLAA comes SMAA, or Enhanced Subpixel Morphological Antialiasing, which is the main point of our post.

MLAA in action

Enhanced Subpixel Morphological Antialiasing (SMAA)

 

Comparison between SMAA and other methods in Crysis 2

SMAA offers reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. Besides, SMAA doesn’t change the shape of the geometry, as many other methods do.

 

Top) No AA

Middle) MLAA

Bottom) SMAA

SMAA builds on the MLAA pipeline, improving or redefining it at every step. In particular, edge detection is improved by using color information with local contrast adaptation for cleaner edges. It extends the number of patterns handled, to preserve sharp geometric features and diagonals. And lastly, it shows how morphological antialiasing can be accurately combined with multisampling or supersampling and temporal reprojection.

Edge Detection

Edge detection is vital, because undetected edges remain aliased. On the other hand, too many filtered edges can reduce the quality of the antialiased image. Different information can be used to detect the edges: chroma, luma, depth, surface normal, or a combination of them. SMAA uses luma, for four reasons (a luma formula follows the list):

  1. Fewer artifacts.
  2. Luma is always visible.
  3. It can handle shading aliasing.
  4. And finally, it's faster than chroma.
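For reference, luma is usually derived from RGB as a weighted sum; a common choice is the Rec. 709 weights (the paper's exact weights may differ):

L = 0.2126 \cdot R + 0.7152 \cdot G + 0.0722 \cdot B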

 

 

Left and Center: Other methods of edge detection, causing crossings and artifacts (in red)

Right: SMAA edges, clean as a whistle

Keep the image above in mind. Here's how edge detection works: the final calculated value is a boolean, e_l^\prime, which marks a left edge boundary. The boolean for the top edge is calculated similarly. The formulas are:

c_{max} = max\left(c_l, c_r, c_t, c_b, c_{2l}\right)

e_l^\prime = e_l \wedge \left(c_l > 0.5 \cdot c_{max}\right)

All the c values are called contrast deltas.
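To ground these formulas, here's a rough GLSL sketch of luma edge detection with local contrast adaptation. It's my own simplification, not the production SMAA shader: the threshold value and the exact neighborhood are assumptions.

#version 330 core

// Rough sketch of SMAA-style luma edge detection with local contrast
// adaptation, following the formulas above.
uniform sampler2D color_tex;

in vec2 uv;
out vec4 edges; // red channel flags a left edge

float luma(vec2 coord)
{
    // Rec. 709 luma weights.
    return dot(texture(color_tex, coord).rgb, vec3(0.2126, 0.7152, 0.0722));
}

void main(void)
{
    vec2 texel = 1.0 / vec2(textureSize(color_tex, 0));
    const float threshold = 0.1;

    float L    = luma(uv);
    float L_l  = luma(uv - vec2(texel.x, 0.0));        // left
    float L_r  = luma(uv + vec2(texel.x, 0.0));        // right
    float L_t  = luma(uv - vec2(0.0, texel.y));        // top
    float L_b  = luma(uv + vec2(0.0, texel.y));        // bottom
    float L_2l = luma(uv - vec2(2.0 * texel.x, 0.0));  // two pixels left

    // Contrast deltas, as in the formulas above.
    float c_l  = abs(L - L_l);
    float c_r  = abs(L - L_r);
    float c_t  = abs(L - L_t);
    float c_b  = abs(L - L_b);
    float c_2l = abs(L_l - L_2l);

    bool e_l = c_l > threshold; // candidate left edge

    // Keep the edge only if it is strong relative to its neighborhood.
    float c_max = max(c_2l, max(max(c_l, c_r), max(c_t, c_b)));
    bool left_edge = e_l && (c_l > 0.5 * c_max);

    edges = vec4(left_edge ? 1.0 : 0.0, 0.0, 0.0, 1.0);
}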

Pattern Handling

SMAA pattern detection preserves sharp geometric features like corners, deals with diagonals, and enables accurate distance searches.

Sharp Geometric Features: The re-vectorization of silhouettes in MLAA tends to round corners. To avoid this, SMAA makes the observation that crossing edges in contour lines have a maximum size of one pixel, whereas for sharp corners this length will most likely be longer. Thus, SMAA fetches two-pixel-long crossing edges instead, which allows less aggressive corner processing.

Diagonal Patterns: The paper introduces a novel diagonal pattern detection, consisting of the following steps:

  1. Search for the diagonal distances d_l and d_r to the left and the right of the diagonal line.
  2. Fetch the crossing edges e_1 and e_2.
  3. Use this input information, which defines the specific diagonal pattern, to access the precomputed area texture, yielding the areas a_t and a_b.

If the diagonal pattern detection fails, then the orthogonal detection is triggered.

Accurate Distance Search: Key to pattern detection and classification is obtaining accurate edge distances (the lengths to both ends of the line). MLAA makes extensive use of hardware interpolation to speed up this process: hardware bilinear filtering can be used as a way of fetching and encoding up to four different values with a single memory access. This linear interpolation of two binary values, producing a single floating-point value, is shown as:

f_x(b_1, b_2, x) = x \cdot b_1 + (1 - x) \cdot b_2

Where b_1 and b_2 are two binary values (either 0 or 1) and x is the interpolation value.
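To see how a single fetch can encode several values, take x = 0.25 as an example; the four possible pairs of binary edge values decode to four distinct results:

f_x(0, 0, 0.25) = 0.0
f_x(1, 0, 0.25) = 0.25
f_x(0, 1, 0.25) = 0.75
f_x(1, 1, 0.25) = 1.0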

Results

MLAA works with a single sample per pixel. This amounts to subsampling, which makes it impossible to recover real subpixel features.

 

MLAA vs SMAA vs No AA

SMAA, however, works at the subpixel level. Its improvements are:

  • Local contrast adaptation
  • Diagonal pattern detection
  • Sharp geometric feature preservation
  • Accurate distance searches

You can view all of these in the following image, compared with other methods. In fact, SMAA can produce results close to 16x SSAA.

 

The overhead produced by each of the solutions is negligible. In particular, local contrast adaptation is only 0.08 ms, sharp geometric feature detection and accurate distance search take 0.01 ms, and diagonal processing produces an overhead of 0.12 ms. In short, SMAA is rather fast: not quite the quality of SSAA or MSAA, but far less resource-intensive.

Well, thanks for reading the article! And thanks to the writers of the paper which I used for the majority of this post. I hope you guys have a good day, and also, go on reading scientific articles on your own. It's simple: just head to libgen.io and search for what you like, not necessarily graphics programming. Read, read and read! Don't watch too many YouTube tutorials; it kills your senses. I can't stress that enough. I don't intend to tell you what to do, these are all just suggestions. I'm currently studying Structured Computer Organization and enjoying it very, very much. I recommend it to everyone, even as bedtime reading.

Please, please, please tweet my blog, introduce it to your friends, and share it with people whom you want to enjoy life and learn about graphics programming. Thank you, thanks a lot. Chubak out.

What is a shadow, even?

Well, let me start off by saying that I love shadows more than light, and I have chosen the one room in the house where every being in the room occludes light. Occludes light… Hmm… How do we know what's behind the light frustum, and what's in front of the light frustum (or frusta)?

 

Let us scrutinize shadows first. What are shadows? When you shine a light on an object with a wall behind it, what happens?

Look at this picture:

 

Umbra: Full, hard-edged shadow.

Penumbra: Half, soft-edged shadow.

What happens if we take the light source further and further away, until the light frustum becomes so wide that an umbra does not seem feasible anymore? Then the umbra turns into an antumbra, the region where different penumbras meet.

Shadow Mapping

The concept was popularized by Lance Williams in 1978. Lance Williams is also the person behind mip-mapping, which we'll discuss in another post, moon-god-willingly. So what is Shadow Mapping? It's pretty much the only feasible real-time shadowing solution, as raytracing is rather resource-intensive and shadow volumes are not suitable for real-time rendering.

 

Such a happy man!

Native API support is another reason to choose shadow mapping. GLSL, for example, has a native texture type for receiving the shadow map, the sampler2DShadow uniform, which we'll see.

But what is shadow mapping?

Imagine the angle of the light is less than 180 degrees. If we choose the light position as the camera position and the light direction as the camera direction, and render the scene from the light frustum, then the resulting depth buffer is our shadow map.

 

Left: Light frustum shone on the scene | Right: The resulting depth buffer becomes our shadow map
Depth as seen from a light

The moment of truth

We said all those things, and we showed you the result, but how exactly do we know whether an object occludes light at a certain position? Let's see what happens when it doesn't, and then we'll see what happens when it does.

Imagine this spot on our Utah Teapot:

 

Imagine that Z_{sm} is the depth value stored in the shadow map for this point on the teapot, and that Z_{ls} is the depth of this point as measured from the light source.

 

There will not be a shadow in this case, because Z_{ls} = Z_{sm}. However, imagine if the point were behind the teapot:

 

In this case, Z_{ls} > Z_{sm}, so this point is definitely in the shadow, and since each point is a fragment, this fragment should be shaded. In an Octopus' Garden in the Shade!

Again, I'm burying the lede, and I'm sorry for that. Octopus' Garden in the Shade has nothing to do with shaders! Anyways, we must factor in a bias when we compare the depths. If we don't, as you see in the photo, there will be shadow acne on the surface.
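Written out by hand, the biased comparison would look something like this in a fragment shader. This is a sketch with a made-up bias value, using a plain sampler2D I'm calling shadow_map (the listings below use a sampler2DShadow instead, which does the comparison in hardware):

float bias = 0.005;                             // tuned per scene
vec3 coord = shadow_coord.xyz / shadow_coord.w; // perspective divide
float z_sm = texture(shadow_map, coord.xy).r;   // depth stored in the shadow map
float z_ls = coord.z;                           // this fragment's depth from the light
float lit  = (z_ls - bias > z_sm) ? 0.0 : 1.0;  // deeper than the map means in shadow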

In a scene with dueling frusta, i.e. more than one light source and shadow map, the sizes of the shadow maps shouldn't differ from each other; otherwise, the shadow will be pixelated.

 

Shadow Map: Pros vs. Cons

Shadow Map Pros are:

  1. No matter how heavy they may look, they are still better for real-time rendering than the alternatives (e.g. raytracing).
  2. You can adjust the size of the texture which holds the depth data, and ultimately add it as an option to your game or demo.

 And the Cons are:

  1. Aliasing occurs in low-res shadow textures.
  2. The textures are heavy and occupy a lot of memory.
  3. Effects of self-occlusion may be visible in the output as sparkling. This can be fixed by polygon offset, as the snippet below shows.
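As for the polygon offset fix in the third con, it's just a couple of calls wrapped around the depth pass. A minimal sketch; the factor and units values are placeholders you'd tune per scene:

// Nudge depth values during the shadow (depth) pass so a surface
// doesn't shadow itself due to precision limits.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 4.0f); // factor, units: tune per scene
// ... render the scene into the shadow framebuffer here ...
glDisable(GL_POLYGON_OFFSET_FILL);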

 Finally, let's implement it in code using OpenGL.

OpenGL Implementation

With all that said, how exactly is this implemented in OpenGL? Here’s how. The following code is taken from OpenGL Superbible 7th Edition.

Listing 1: First, we create the shadow depth buffer.

 

GLuint shadow_buffer;
GLuint shadow_tex;

glGenFramebuffers(1, &shadow_buffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadow_buffer);

glGenTextures(1, &shadow_tex);
glBindTexture(GL_TEXTURE_2D, shadow_tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32,
               DEPTH_TEX_WIDTH, DEPTH_TEX_HEIGHT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
                GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                     shadow_tex, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

Listing 2: Then, we create model-view-projection matrices, but this time, instead of the camera, we use the light!

 

vmath::mat4 model_matrix = vmath::rotate(currentTime, 0.0f, 1.0f, 0.0f);
vmath::mat4 light_view_matrix =
    vmath::lookat(light_pos,
                  vmath::vec3(0.0f),
                  vmath::vec3(0.0f, 1.0f, 0.0f));
vmath::mat4 light_proj_matrix =
    vmath::frustum(-1.0f, 1.0f, -1.0f, 1.0f,
                   1.0f, 1000.0f);
vmath::mat4 light_mvp_matrix = light_proj_matrix *
                               light_view_matrix * model_matrix;

Listing 3: Then, we generate a shadow matrix.

const vmath::mat4 scale_bias_matrix =
     vmath::mat4(vmath::vec4(0.5f, 0.0f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.5f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.0f, 0.5f, 0.0f),
                 vmath::vec4(0.5f, 0.5f, 0.5f, 1.0f));
vmath::mat4 shadow_matrix = scale_bias_matrix *
                            light_proj_matrix *
                            light_view_matrix *
                            model_matrix;
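Why the scale-bias matrix? After the perspective divide, clip-space coordinates land in [-1, 1], while texture lookups expect [0, 1], so each component is effectively remapped as:

t = 0.5 \cdot x + 0.5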

Listing 4: Nothing can be done without shaders, so we implement the vertex shader for this shadow. It's not much different from any other vertex shader.

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
uniform mat4 shadow_matrix;
layout (location = 0) in vec4 position;

out VS_OUT
{
    vec4 shadow_coord;
} vs_out;

void main(void)
{
    gl_Position = proj_matrix * mv_matrix * position;
    vs_out.shadow_coord = shadow_matrix * position;
}

As you can see, we've got two model-view-projection matrices: one we use for ourselves, and one (the shadow matrix) we pass to the fragment shader to see if each fragment is in the shadow or in the light.

Listing 5: the fragment shader of it all.

#version 420 core

layout (location = 0) out vec4 color;

layout (binding = 0) uniform sampler2DShadow shadow_tex;

in VS_OUT
{
    vec4 shadow_coord;
} fs_in;

void main(void)
{
    color = textureProj(shadow_tex, fs_in.shadow_coord) * vec4(1.0);
}
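The listings above leave out the glue that actually renders the two passes. A rough sketch of it might look like this; render_scene_depth_only() and render_scene() are hypothetical helpers of mine, not the Superbible's:

// Pass 1: render depth from the light's point of view.
glBindFramebuffer(GL_FRAMEBUFFER, shadow_buffer);
glViewport(0, 0, DEPTH_TEX_WIDTH, DEPTH_TEX_HEIGHT);
glClear(GL_DEPTH_BUFFER_BIT);
render_scene_depth_only(light_mvp_matrix); // hypothetical depth-only pass

// Pass 2: render normally, with the shadow map bound for the compare.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, window_width, window_height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, shadow_tex);
render_scene(shadow_matrix); // hypothetical; feeds the shaders above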

That's it! Notice something we've never seen before on this blog: sampler2DShadow. Together with textureProj, it performs the depth comparison for us in hardware. This is why I say the Superbible is better than LearnOpenGL.com. It's up to you: do you want something outdated written by a fan, or something up-to-date written by the ARB? Your choice!

 

Well, that is it for today! I might make another post, but it would be for the Europeans, because by the time I post my second entry of today, the Americans will be asleep. A lot of people have visited my blog; this blog is not far from generating profit. So thank you! Don't forget to leave a comment. I love each and every one of you, if that's not creepy. Really. I love everyone who reads the drivel I write. It gives me a feeling of self-importance. Anyways, before we've engaged in coitus, goodbye for now!

I started developing games at the young age of sixteen. It's been ten years now, and the landscape of indie and hobbyist game development has changed since then. Back then, Unity was still young, and not as popular as it is today. Do you know how popular Unity is today? The magnitude of Unity's popularity can be seen in Global Game Jam 2019's technology stats, which I will plot here:

 

Global Game Jam 2019 Technology Stats. A whopping 74% used Unity!

What can be concluded from this chart, but the fact that Unity is rather popular? And its popularity is well-deserved. What astounds me is that 28 teams used Processing. If you don't know what Processing is, it's a Java library/stand-alone package which is mainly used for creative programming, something cooked up by people who had too much time on their hands. Funnily enough, there's one feature in Processing which I admire, and that's filtering an image with a given fragment shader. But it is in no way suitable for making games! For God's sake, if you want something simple, Why Don't You Ask Evans?

 

I know I am burying the lede here, and people have told me not to before, but I want to enjoy writing these blog posts, and nothing's more enjoyable than talking about an author you admire. Why Didn't They Ask Evans? is a work of detective fiction by Dame Agatha Christie. I won't spoil the book for you, but it's about Bobby Jones and his friend Frankie. Bobby, whilst golfing, stumbles upon a near-dead man on the rocks. The man mumbles "Why didn't they ask Evans?" This book is about Occam's Razor: usually, the simplest answer is the best answer. So if you want to make a game and don't want to use Unity, what is the simplest answer? Grist to our mill: the Godot Engine (pronounced Gow-Dow)!

 Only a couple of people used this engine in this year's Game Jam, and it was a point of pride for them. They called themselves "hipsters of the game developing world". But let's not fret: Godot is becoming more and more popular, and by this time next year, a lot more people will be using it. I wish to do my part in introducing this wonderful engine to the game-making public. It's not great for 3D games, but it's awesome for 2D games.

Godot is only 18 megabytes, and can be downloaded from here. Along with the engine, download the Export Templates, and also, if you're a Blender man, the Better Collada Exporter, which improves the exporting of .dae files from Blender. You know what the best thing about Godot is? It's free, or as Richard Stallman puts it, free as in freedom. There are many free engines around, but certainly, Godot is head and shoulders above the rest.

 Extract Godot somewhere where you keep your game development files. Start a new project, or download a template. Here, I have downloaded a template called “Platformer 2D”.

 

On the top-left you see the FileSystem file browser. I don’t use this function, I drag and drop my files onto the…

On the bottom-left you see your project files. icon.png is what Godot displays in the project manager. It's the identity of your project, so use a good picture!

The bar on the top navigates between your 2D, 3D, and code views. There's also AssetLib, which is Godot's version of the Asset Store.

Below that is your level editor.

On the top-right you'll see the Scene nodes. Godot's node system is very intuitive, and we'll introduce some of the nodes later on. Tabbed next to the Scene dock is the Import settings.

On the bottom-right you'll see the Inspector. Here you can, for example, add a texture to a sprite, set music to loop, or create a particle system. Everything done here can also be done in code. Tabbed next to the Inspector is the Node dock, which comprises Signals and Groups.

Okay, now let's see what goes into the making of the Godot Engine. Hit Help->About and look at the third-party licenses. Here we see that Godot uses zlib, curl, TinyEXR, NanoSVG, GLAD, and SDL, amongst so many other things. Truly a pinnacle of FOSS development. Just 10 years ago, this would have been an impossible feat. But thanks to many OSS projects, and GitHub, today we have Godot. Thanks, RMS! You are truly the man who eats foot cheese, but your efforts have also given us so many wonderful things.

Godot uses OpenGL, with GLAD as the OpenGL function loader (I know it's not important, but for me, these things are exciting). Currently, it doesn't have an official .gitignore, but there's an unofficial one. If you wish to create a repository for your game, make sure you save your scenes and resources in the text-based formats (.tscn and .tres, note the "t" in the extension); otherwise, they will be binary, and completely unsuitable for a version control system like Git.

Let’s take a look at some of Godot’s nodes:

 

  • Every node has a parent. In a 2D game, most of the nodes you use inherit from Node2D.
  • Every node can have as many children as it wants. Usually, an Area2D node has a Sprite node and a script attached to it.
  • Particles2D generates a 2D particle system, probably using textured OpenGL points. I must do a tutorial on them one day.
  • Path2D gives a path to the parent node it's attached to.
  • RayCast2D casts a ray in 2D space, and you can ask it whether it hit something (see the sketch below).
  • Polygon2D, a 2D polygon.
  • Sprite, one of the most used of the Node2D nodes. It's usually attached to a KinematicBody2D, StaticBody2D, or Area2D object.
  • TileMap, a set of tiles.

As I said, nodes are very intuitive.
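To show how the node-plus-script pattern feels in practice, here's a hedged sketch of polling a RayCast2D from its parent's script. The node path and the Enabled flag are assumptions on my part:

extends Node2D

# Assumes a RayCast2D child named "RayCast2D" with its Enabled property on.
func _physics_process(delta):
	if $RayCast2D.is_colliding():
		var hit = $RayCast2D.get_collider()
		print("The ray hit: ", hit.name)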

Now, let's take a look at Godot's scripting language, called GDScript, which is very similar to Python. You can also use C# if you have downloaded the Mono version:

extends Node

var lives = 4
var coins = 0
var punto

func _ready():
	# Keep this node processing even while the scene tree is paused.
	self.pause_mode = PAUSE_MODE_PROCESS

func _process(delta):
	# Toggle pause with the "ui_cancel" action (Esc by default).
	if Input.is_action_just_pressed("ui_cancel"):
		get_tree().paused = not get_tree().paused

	if lives == 0 and punto == null:
		print("You lost") # was "Perdiste" in the original Spanish
		get_tree().quit()

The _ready() function is akin to Unity's Start(), and Godot has two functions which can be equated to Unity's Update(). The first one is _process(), which is the normal per-frame update; the other is _physics_process(), which runs in sync with the physics engine. Also, as you can see, delta time is passed to the function as a parameter, something all engines should do!
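To see why passing delta matters, here's a tiny sketch (the numbers are made up): multiplying by delta keeps the speed in units per second, regardless of frame rate.

extends Node2D

var speed = 200 # pixels per second

func _process(delta):
	# Moves 200 pixels every second whether we run at 30 or 144 FPS.
	position.x += speed * delta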

So why do I say Godot is grist to our mill? Because for far too long, before and after Unity became popular, we relied on tools that simply weren't up to the task. Tools that were buggy, run-down, or simply wrong (looking at you, Processing!). Godot is free, Godot is ever-changing, and Godot is ever-wonderful. It's still in development, but you can always rely on it to make a good game, free of charge, with all the features intact.

Back in 2016, when I first started out with Godot, there were not that many tutorials around. But these days, the best Godot tutorials are just a Google search away. And if you like books, you can always buy Godot Engine Game Development in 24 Hours, which is how I learned Godot. You can always ask /r/Godot, and the official Q&A is also around to answer your questions.

 

 

Well, that is it for today's post! You see, I, too, can post about game engines, and my posts are not always about weird Python scripts that I've written, or OpenGL. Thanks, and have a nice day!

As the inquisitive reader may have guessed, or simply googled my name, I hail from Persia, where the Prince of Persia games (as most people here put it, pe-rance of per-sh-ee-a) and the movie are set. Some people complained that Persians are brown, and that casting a Nordic actor to play the role of the Prince was rather insulting and racist; however, what these warriors choose to ignore is that Persians, and most other Iranic tribes at the time, such as the Parthians (Parthia, where I live) and the Medians, were in fact white. Let's not beat around the bush here. For nationalistic reasons, I never played the reboot or the trilogy; I just played the 2D platformer on a Sega Genesis emulator. Anyways, I thought the reboot was the first instance of cel shading in computer games; however, I was wrong. Years before, a Dreamcast game by the name of Jet Set Radio (pictured above) had set the trend. I'm too young for the Dreamcast, and I've never even seen one. But it must have been a helluva game, judging by the videos I've seen of it.

 

Legend of Zelda: Skyward Sword, one of the better games using cel shading

But what is cel shading, or as Graham Sellers puts it, cell shading [sic]? It's basically the process of using tricks in the render pipeline to make everything appear as if it were drawn by hand. The alternative name for this technique is toon shading.

 

Cinema 4D’s “Toon” Effect

To find out which games use this effect, I headed to my trusty hangout, TVTropes.org, a wiki created using PmWiki and serving stray, bored media lovers for years… and on its cel shading page, it said:

Cel Shading applies first and foremost to the way the lighting is rendered.

This layman's explanation is exactly how I would describe it. Indeed, cel shading is first and foremost about how the diffuse light channel is rendered. I know that light is a component of the material, and not something to be rendered by itself, but using LUTs, we can achieve exactly this.

Anyways, what is a LUT?

 

 

A shot from Tintin’s Assets… I really don’t know what a plastic shader is. You could fill a data center with the things I don’t know!

Color LUT

If you have played around in DaVinci Resolve, you'll know what a LUT is. It's short for Look-Up Table. Imagine you wish to change a series of colors to another series of colors; each color must correspond to another. This is where a color LUT comes into play.

 

A LUT

In OpenGL, each LUT is a 1D texture indexed by a number between 0.0 and 1.0. This number is the intensity of the diffuse component of the Phong lighting system (remember the last post? Phong lighting is different from the Phong material). Imagine this, if you will.

 

\vec{N} is the normal vector, \vec{L} is the light vector. The LUT maps each intensity to a color using the formula below. \alpha depends on your code.
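Putting the caption into symbols (a sketch; the listing below uses \alpha = 5 plus a small brightness ramp):

i = max\left(0, \vec{N} \cdot \vec{L}\right)^\alpha

color = LUT(i) \cdot (0.8 \cdot i + 0.2)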

 Let’s put it all together and see what happens.

Note: These are taken from OpenGL Superbible 7ed, by Graham Sellers.

 

Cel Shading in OpenGL

First, our front-end, at least, a part of it:

Listing 1: OpenGL Cel Shading Front End

 

//declare our LUT
static const GLubyte toon_tex_data[] =
{
    0x44, 0x00, 0x00, 0x00,
    0x88, 0x00, 0x00, 0x00,
    0xCC, 0x00, 0x00, 0x00,
    0xFF, 0x00, 0x00, 0x00
};

glGenTextures(1, &tex_toon); //Generate texture
glBindTexture(GL_TEXTURE_1D, tex_toon); //Bind texture, a 1D texture
glTexStorage1D(GL_TEXTURE_1D, 1, GL_RGB8, sizeof(toon_tex_data) / 4);
glTexSubImage1D(GL_TEXTURE_1D, 0,
                0, sizeof(toon_tex_data) / 4,
                GL_RGBA, GL_UNSIGNED_BYTE,
                toon_tex_data); //Pass the data
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); //Conditions

Note that all the code is explained in the comments, and this is an introductory article, not a tutorial, so I don't see fit to explain the code in detail. You can buy the OpenGL Superbible and see for yourself.

Listing 2: Part of the Cel Shading Vertex Shader

 

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;

layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

out VS_OUT
{
    vec3 normal;
    vec3 view;
} vs_out;

void main(void)
{
    vec4 pos_vs = mv_matrix * position;

    // Calculate eye-space normal and position
    vs_out.normal = mat3(mv_matrix) * normal;
    vs_out.view = pos_vs.xyz;
    // Send clip-space position to primitive assembly
    gl_Position = proj_matrix * pos_vs;
}

And finally, our fragment shader.

Listing 3: Cel shading fragment shader.

 

#version 420 core

layout (binding = 0) uniform sampler1D tex_toon;

uniform vec3 light_pos = vec3(30.0, 30.0, 100.0);

in VS_OUT
{
    vec3 normal;
    vec3 view;
} fs_in;

out vec4 color;

void main(void)
{
    // Calculate per-pixel normal and light vector
    vec3 N = normalize(fs_in.normal);
    vec3 L = normalize(light_pos - fs_in.view);
    // Simple N dot L diffuse lighting
    float tc = pow(max(0.0, dot(N, L)), 5.0);

    // Sample from cell shading texture
    color = texture(tex_toon, tc) * (tc * 0.8 + 0.2);
}

If we load a donut model into the program, this is what we’ll get:

 

Well, hope you enjoyed it! I’m currently learning After Effects. And I may write a book about it. What do you think?

Gizeh is a small library for Python which you can install using pip. It doesn't have any official documentation, so you'll have to make do with the small documentation on its GitHub page. Before starting this tutorial, read the README file carefully, or you may get confused later on. Now, let's see what we want to create: a procedural texture of a textile. Maybe denim, maybe another pattern. I don't know. Right now we can't afford to be picky. The output is entirely randomized, and if you aren't happy with the texture, you can just run the code again.


Our texture

Not happy? Let's run the code again (I'm using PyCharm to write and run this code; teaching you to set up Python and a good IDE is beyond the scope of this tutorial):


But how? Simple:

1- A blue background.

2- 200 different groups of 50 circles that increase in alpha as they decrease in radius. Colors range from rgb(200, 200, 200) to rgb(250, 250, 250).

3- Vertical and horizontal lines spaced based on a coefficient of 0.01 and tilted by a random number between 1 and 8.

That’s it!

Let's start. First, let's set up a window and a surface:


import gizeh as gz
import random

w = 640
h = 480

surface = gz.Surface(width=w, height=h, bg_color=(21/255, 96/255, 189/255))

You can change w and h to change the height and the width. Now, let’s generate our radial gradient:


def circle_generator(x, y, r, c):
    i = r
    a = 0
    while i > 0:
        circle = gz.circle(r = i, xy=[x, y], fill=(c/255, c/255, c/255, a*0.70))
        i -= 1
        a += 1/2000

        circle.draw(surface)

Besides the location of the circles, the function accepts two other parameters: the radius, which decreases with each iteration of the while loop, and the color, which we divide by 255, since in Gizeh RGB values are between 0 and 1. We increase the alpha (a) by adding 1/2000 to it in each iteration. At the end, we draw the circle.

Now, let’s make a line generator:


def generate_lines(x1, y1, x2, y2, s, a):
    line = gz.polyline(points=[(x1, y1), (x1, y1), (x2, y2), (x2, y2)], stroke_width=s,
                         stroke=(0, 0, 0, a/255), fill=(0, 0, 0, a/255))

    line.draw(surface)

This one just generates one line; we're actually drawing a polygon whose origin is (x1, y1) and which ends at (x2, y2). If we add a randomized number to the second coordinates, we can get a mangled shape, which will come in handy if we want to generate some other pattern. s is the stroke width, and a is the alpha. We set the fill to black, but it's useless, as we don't have a fill.

Now, let’s draw 200 radial gradients on the screen:


for i in range(200):
    x = random.randint(0, w)
    y = random.randint(0, h)
    r = random.randint(20, 100)
    c = random.randint(200, 255)
    circle_generator(x, y, r, c)

We choose entirely random locations, a radius between 20 and 100, and a color between very light gray and white. Then we call the function.

Now, the lines!

s = random.randint(1, 3)
a = random.randint(50, 200)
coeff = 0.01

for i in range(w):
    tilt = random.randint(1, 8)
    generate_lines(w * (i * coeff), 0, w * (i * coeff) + tilt, h, s, a)



for i in range(h):
    tilt = random.randint(1, 8)
    generate_lines(0, h * (i * coeff), w, h  * (i * coeff) + tilt, s, a)

First, we choose a random opacity and a random stroke. Then we set the coefficient to 0.01. You can experiment with other coefficient values, but be warned that if you randomize it, it may sometimes take you 10 runs to get the right pattern. We then create a new random number in each iteration to use as a tilt. We generate the vertical lines w times and the horizontal lines h times.

At the end, we save the file as a PNG. This is why you should use PyCharm: it shows the picture and updates it in a separate window. Other IDEs don't do that.


surface.write_to_png("only_bg.png")

Well, that's it! You can experiment with it: use diagonal lines, create woven textiles, and so on. Hope you enjoyed it.

Have you ever wondered how it's possible to create a pixel-perfect crosshair? Well, wonder no more. Processing is here.


Of course, you'll have to remove the background, but that's the easiest thing to do!

Well, no time to lose. Let us start off by writing the setup() function.

void setup(){
  size(500, 500);  
  noLoop();
}

It simply creates a 500x500 image, and calls noLoop(), so the draw() function won't run indefinitely. Now, the draw function. Before that, save the sketch, so the image will be saved in the same folder as your code. Alright, let's have a looksie.

void draw(){
 noFill();
 strokeWeight(10);
 arc(width / 2, height / 2, 300, 300, QUARTER_PI, PI);
 strokeWeight(8);
 arc(width / 2, height / 2, 250, 250, QUARTER_PI, PI);
 strokeWeight(6);
 arc(width / 2, height / 2, 200, 200, 0, QUARTER_PI);
 strokeWeight(4);
 arc(width / 2, height / 2, 200, 200, 0, 3);
 strokeWeight(6);
 arc(width / 2, height / 2, 250, 250, PI, TWO_PI);
 fill(0);
 ellipse(width / 2, height / 2, 30, 30);
 strokeWeight(12);
 line(width * 0.25, height * 0.76, width / 2, height / 2);
 line(width * 0.75, height * 0.75, width / 2, height / 2);
 line(width * 0.25, height * 0.26, width / 2, height / 2);
 line(width * 0.75, height * 0.25, width / 2, height / 2);
  
  saveFrame("crosshair.png");
}

noFill() disables the color fill, so we'll only have strokes. strokeWeight(), which we use religiously, sets the size of the stroke in pixels. And arc() creates an arc. This is the main part of this short tutorial; the arc() function is defined as:


arc(position_x, position_y, radius_x, radius_y, start_angle, end_angle);

start_angle and end_angle are in radians. You can use QUARTER_PI (45°), HALF_PI (90°, perpendicular), PI (180°), and TWO_PI (a full circle), like so:


PIs

You can add these together to create other angles; for example, QUARTER_PI + HALF_PI gives 3*PI/4 (135°).
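If you'd rather think in degrees, Processing's radians() helper converts them for you. A quick sketch, drawing one of the arcs above from 45° to 135°:

strokeWeight(4);
arc(width / 2, height / 2, 150, 150, radians(45), radians(135));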

At the end, we add four lines like so:


line(x1, y1, x2, y2);

We use math to position the line endpoints at quarters and three-quarters of the screen. All the x2 and y2 values are the center of the screen.

At the end, we save the frame.


Coming up next…

I have prepared a Processing and Illustrator tutorial for creating two types of planets. Stay tuned!