## My Foray into the World of After Effects Plugin Development

This will be a very short entry, as I’ve already made a post, so forgive me for the brevity of this particular parcel.

The zest that drives me in the world of graphics programming is being able to create an application like Adobe’s After Effects. I have, in the past, had a very fun time with this software, of which I am fond. But in order to achieve such a feat, one must first release a plugin or two for the software. Me, I plan on writing a book on the subject. But that’s far in the distance.

But there are many blockades on the road which must be dealt with. The first is the fact that my Visual Studio can’t debug the damn plugins. I haven’t written any yet, but I can’t get the Visual Studio debugger to work: IntelliTrace won’t show the value of any of the variables, even if I engage the plugin’s PiPL entry point and set many breakpoints, even in the wretched About() function. Therefore, I couldn’t have written any plugins even if I wanted to, as blindly compiling the .aex files back and forth is simply maddening.

I have many other problems as well, but they can be solved by reading the manual. And I’m reading it thoroughly, and carefully. Let me explain how this API works:

1. Parameters
2. Flags
3. Suites

Parameters are the UI elements which you deal with in a plugin. They’re few, but applicable. Flags are orders which the plugin gives the application. And suites are collections of functions that comprise the API’s functional libraries.

For example, here’s the render function of the Skeleton example:

static PF_Err
Render(
    PF_InData       *in_data,
    PF_OutData      *out_data,
    PF_ParamDef     *params[],
    PF_LayerDef     *output)
{
    PF_Err              err = PF_Err_NONE;
    AEGP_SuiteHandler   suites(in_data->pica_basicP);

    /* Put interesting code here. */
    GainInfo            giP;
    AEFX_CLR_STRUCT(giP);
    A_long              linesL = 0;

    linesL    = output->extent_hint.bottom - output->extent_hint.top;
    giP.gainF = params[SKELETON_GAIN]->u.fs_d.value;

    if (PF_WORLD_IS_DEEP(output)) {
        ERR(suites.Iterate16Suite1()->iterate(in_data,
            0,                              // progress base
            linesL,                         // progress final
            &params[SKELETON_INPUT]->u.ld,  // src
            NULL,                           // area - null for all pixels
            (void*)&giP,                    // refcon - your custom data pointer
            MySimpleGainFunc16,             // pixel function pointer
            output));
    } else {
        ERR(suites.Iterate8Suite1()->iterate(in_data,
            0,                              // progress base
            linesL,                         // progress final
            &params[SKELETON_INPUT]->u.ld,  // src
            NULL,                           // area - null for all pixels
            (void*)&giP,                    // refcon - your custom data pointer
            MySimpleGainFunc8,              // pixel function pointer
            output));
    }

    return err;
}


The function takes in_data and out_data, which are the input data and the output data; the params array, which holds the UI parameters; and finally a PF_LayerDef, which is the layer the effect has been applied to.

First, it creates a GainInfo variable which serves as our reference construct, fills it in with the gain parameter, and then passes it as the refcon to our 8-bit and 16-bit pixel functions. They, in turn, cast the refcon back into our data type, do some calculations with it, and carry out the operation.
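The refcon round-trip is easier to see outside C. Here is a minimal Python sketch of the idea; the function name echoes the Skeleton listing, but the gain formula and all the data shapes are invented for illustration, not the actual AE SDK:

```python
def my_simple_gain_func8(refcon, in_pixel):
    # The refcon arrives as an opaque pointer; here we "cast" it back
    # to our own data (a dict standing in for the GainInfo struct).
    gain = refcon["gainF"]
    # Made-up gain formula: scale each channel, clamped to the 8-bit range.
    return tuple(min(255, int(c * gain)) for c in in_pixel)

# Simulating iterate(): the host would call the function once per pixel.
gi = {"gainF": 1.5}                          # our "GainInfo" refcon
row = [(100, 50, 25, 255), (200, 10, 0, 255)]
out = [my_simple_gain_func8(gi, px) for px in row]
```

The important part is not the arithmetic but the pattern: the host owns the iteration, and your data rides along as an untyped pointer.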

This is the basis of how AE plugins do what they do. Of course, there are other plugin types, such as AEGP (which Element 3D falls under). Element 3D, for example, uses the Assimp model loader to load models, then dumps the framebuffer into an image and passes it to After Effects.

That is it for the moment. I hope you’ve enjoyed this post. I will keep you updated on my exploits. And my first plugin will definitely be free.

## Godot Engine, Grist to Our Mill

I started developing games at the young age of sixteen. It’s been ten years now, and the landscape of indie and hobbyist game development has changed since then. Back then, Unity was still young, and not as popular as it is today. Do you know how popular Unity is today? The magnitude of Unity’s popularity can be seen in Global Game Jam 2019’s technology stats, which I will plot here:

What can be concluded from this chart, but the fact that Unity is rather popular? And its popularity is well-deserved. What astounds me is that 28 teams used Processing. If you don’t know what Processing is, it’s a Java library/stand-alone package mainly used for creative coding, something cooked up by people who had too much time on their hands. But funnily enough, there’s one feature in Processing which I admire, and that’s filtering an image with a given fragment shader. But it is in no way suitable for making games! For God’s sake, if you want something simple, why don’t you ask Evans?

I know I am burying the lede here, and people have told me not to before, but I want to enjoy writing these blog posts, and nothing’s more enjoyable than talking about an author you admire. Why Didn’t They Ask Evans? is a work of detective fiction by Dame Agatha Christie. I won’t spoil the book for you, but it’s about Bobby Jones and his gal pal Frankie. Bobby, whilst golfing, stumbles upon a near-dead man on the rocks. The man mumbles “Why didn’t they ask Evans?” This book is about Occam’s Razor: usually, the simplest answer is the best answer. So if you want to make a game, and don’t want to use Unity, what is the simplest answer? Grist to our mill, Godot Engine!

Only a couple of people used this engine in this year’s Game Jam, and it was a point of pride for them. They called themselves “hipsters of the game development world”. But let’s not fret: Godot is becoming more and more popular, and by this time next year, a lot more people will be using it. I wish to do my part in introducing this wonderful engine to the game-making public. It’s not great for 3D games, but it’s awesome for 2D games.

Godot is only 18 megabytes, and can be downloaded from here. Along with the engine, download the export templates, and also, if you’re a Blender man, the Better Collada Exporter, which improves the export of .dae files from Blender. You know what the best thing about Godot is? It’s free, or as Richard Stallman puts it, free as in freedom. There are many free engines around, but Godot is certainly head and shoulders above the rest.

Extract Godot somewhere where you keep your game development files. Start a new project, or download a template. Here, I have downloaded a template called “Platformer 2D”.

On the top-left you see the FileSystem file browser. I don’t use this function, I drag and drop my files onto the…

On bottom-left you see your project files. Icon.png is what Godot displays in the project manager. It’s the identity of your project! So use a good picture.

The bar on the top navigates between your 2D, 3D, and code views. There’s also AssetLib, which is Godot’s version of the Asset Store.

Below that is your level editor.

On the top-right you’ll see the Scene nodes. Godot’s node system is very intuitive, and we’ll introduce some of the nodes later on. Tabbed next to the Scene tree is the Import settings.

On the bottom-right you’ll see the Inspector. Here you can, for example, add a texture to a sprite, set a music stream to loop, or configure a particle system. Everything done here can also be done in code. Tabbed next to the Inspector is the Node panel, which comprises Signals and Groups.

Okay, now let’s see what goes into the making of Godot Engine. Hit Help->About and look at the third party licenses. Here we see that Godot uses zlib, curl, TinyEXR, NanoSVG, GLAD, and even SDL’s game controller database, amongst so many other things. Truly, a pinnacle of FOSS development. Just 10 years ago, this would have been an impossible feat. But thanks to many OSS projects, and GitHub, today we have Godot. Thanks, RMS! You are truly the man who eats foot cheese, but your efforts have also given us so many wonderful things.

Godot talks to OpenGL directly, with GLAD as its OpenGL function loader (I know it’s not important, but for me, these things are exciting). Currently, it doesn’t have an official .gitignore, but there’s an unofficial one. If you wish to create a repository for your game, make sure you save your scenes and resources in the text-based formats (.tscn and .tres rather than .scn and .res), otherwise they will be binary, and completely unsuitable for a version control system like Git.

Let’s take a look at some of Godot’s nodes:

• Every node has a parent. In a 2D game, most of the nodes you use inherit from Node2D.
• Every node can have as many children as it wants. Usually, an Area2D node has a Sprite child, and a script attached to it.
• Particles2D generates a 2D particle system, probably using textured OpenGL points. I must do a tutorial on them one day.
• Path2D gives a path to the parent node it’s attached to.
• RayCast2D casts a ray in 2D space, and if it hits something, it alerts the parent node.
• Polygon2D, a 2D polygon.
• Sprite, one of the most used of the Node2D nodes. It’s usually attached to a KinematicBody2D, a StaticBody2D, or an Area2D.
• TileMap, a set of tiles.

As I said, nodes are very intuitive.

Now, let’s take a look at Godot’s scripting language, called GDScript, which is very similar to Python. You can also use C# if you have downloaded the Mono version:

extends Node

var lives = 4
var coins = 0
var punto

func _ready():
    pause_mode = PAUSE_MODE_PROCESS

func _process(delta):
    if Input.is_action_just_pressed("ui_cancel"):
        if get_tree().paused == false:
            get_tree().paused = true
        else:
            get_tree().paused = false

    if lives == 0 and punto == null:
        print("Perdiste")
        get_tree().quit()


The _ready() function is kin to Unity’s Start(), and Godot has two functions which can be equated to Unity’s Update(). The first is _process(), the normal update; the second is _physics_process(), which runs in sync with the physics engine. Also, as you can see, delta time is passed to the function as a parameter, something which all engines should do!
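That delta parameter is what makes movement frame-rate independent. Here is a plain-Python sketch of the idea (not GDScript, and the speed and frame rates are made up): scaling movement by the frame time means a slow machine takes bigger steps, but covers the same distance per second.

```python
def process(position, speed, delta):
    # Same idea as moving a node inside _process(delta):
    # advance by speed (units per second) scaled by the frame time.
    return position + speed * delta

# One second of game time at 60 FPS: sixty small steps...
pos_60 = 0.0
for _ in range(60):
    pos_60 = process(pos_60, speed=100.0, delta=1 / 60)

# ...and one second at 30 FPS: thirty bigger steps.
pos_30 = 0.0
for _ in range(30):
    pos_30 = process(pos_30, speed=100.0, delta=1 / 30)

# Both end up about 100 units along, regardless of frame rate.
```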

So why do I say Godot is Grist to Our Mill? Because for far too long, before or after Unity became popular, we relied on tools that simply weren’t up to it. Tools that were buggy, run-down, or simply wrong (looking at you, Processing!). Godot is free, Godot is ever-changing and Godot is ever-wonderful. It’s still in development, but you can always rely on it to make you a good game, free of charge, with all the features intact.

Back in 2016, when I first started out with Godot, there were not that many tutorials around. But these days, the best Godot tutorials are just a Google search away. And if you like books, you can always buy Godot Engine Game Development in 24 Hours, which is how I learned Godot. You can always ask /r/godot, and the official Q&A is also around to answer your questions.

Well, that is it for today’s post! You see, I, too, can post about game engines, and my posts are not always either about weird Python scripts that I’ve written, or OpenGL. Thanks, and have a nice day!

## The Programming Behind Cel Shading

As the inquisitive reader may have guessed, or simply googled my name, I hail from Persia, where the Prince of Persia games (as most people here put it, pe-rance of per-sh-ee-a) and the movie are set. Some people complained that Persians are brown, and that the movie had cast a Nordic actor to play the role of the Prince, which was rather insulting and racist; however, what these warriors choose to ignore is that Persians, and most other Iranic tribes at the time, such as the Parthians (Parthia, where I live) and the Medians, were, in fact, white. Let’s not beat around the bush here. For nationalistic reasons, I never played the reboot or the trilogy; I just played the 2D platformer on a Sega Genesis emulator. Anyways, I thought the reboot was the first instance of cel shading in computer games; however, I was wrong. Years before, a Dreamcast game by the name of Jet Set Radio (pictured above) had set the trend. I’m too young for the Dreamcast, and I’ve never even seen one. But it must have been a helluva game, judging by the videos I’ve seen of it.

But what is cel shading, or as Graham Sellers puts it, cell shading [sic]? It’s basically the process of using tricks in the render pipeline to make everything appear as if it were drawn by hand. Another name for this technique is toon shading.

To find out which games use this effect, I headed to my trusty hangout, TVTropes.org, a wiki created using PmWiki and serving stray, bored media lovers for years… and its cel shading page says:

Cel Shading applies first and foremost to the way the lighting is rendered.

This layman’s explanation is exactly how I would describe it. Indeed, cel shading is a rendering of the diffuse light channel. I know that light is a component of the material, and not something to be rendered, but using LUTs, we can achieve exactly this.

Anyways, what is a LUT?

#### Color LUT

If you have played around in DaVinci Resolve, you’ll know what a LUT is. It’s short for Look-Up Table. Imagine you wish to change a series of colors into another series of colors, where each color must correspond to another. This is where a color LUT comes into play.

In OpenGL, each LUT is a 1D texture indexed by a number between 0.0f and 1.0f. This number is the intensity of the diffuse component of the Phong lighting system (remember the last post? Phong lighting is different from Phong material). N is the surface normal and L is the light vector: the LUT maps each intensity, computed from max(0, N · L), to a color. Exactly how that intensity is shaped before the lookup is dependent on your code.
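That mapping is easy to sketch in plain Python. This hypothetical snippet quantizes a diffuse intensity into four flat bands, using the red channel of the same four byte values as the LUT declared in the OpenGL listing further down:

```python
toon_lut = [0x44, 0x88, 0xCC, 0xFF]   # red channel of a 4-entry toon LUT

def sample_lut(intensity):
    # Roughly what sampling a GL_NEAREST-filtered 1D texture does:
    # snap the 0.0-1.0 intensity to one of four discrete bands.
    index = min(len(toon_lut) - 1, int(intensity * len(toon_lut)))
    return toon_lut[index]
```

Instead of a smooth ramp, every intensity collapses into one of four brightness levels, which is exactly what gives the hand-drawn look.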

Let’s put it all together and see what happens.

Note: These are taken from OpenGL Superbible 7ed, by Graham Sellers.

#### Cel Shading in OpenGL

First, our front-end, at least, a part of it:

Listing 1: OpenGL Cel Shading Front End

// Declare our LUT
static const GLubyte toon_tex_data[] =
{
    0x44, 0x00, 0x00, 0x00,
    0x88, 0x00, 0x00, 0x00,
    0xCC, 0x00, 0x00, 0x00,
    0xFF, 0x00, 0x00, 0x00
};

glGenTextures(1, &tex_toon);            // Generate texture
glBindTexture(GL_TEXTURE_1D, tex_toon); // Bind it as a 1D texture
glTexStorage1D(GL_TEXTURE_1D, 1, GL_RGB8, sizeof(toon_tex_data) / 4);
glTexSubImage1D(GL_TEXTURE_1D, 0,
                0, sizeof(toon_tex_data) / 4,
                GL_RGBA, GL_UNSIGNED_BYTE,
                toon_tex_data);         // Pass the data
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // Sampling conditions


Note that the code is explained in the comments, and this is an introductory article, not a tutorial, so I don’t see fit to explain the code in detail. You can buy OpenGL Superbible and see for yourself.

Listing 2: Partial of Cel Shading Vertex Shader

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;

layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

out VS_OUT
{
    vec3 normal;
    vec3 view;
} vs_out;

void main(void)
{
    vec4 pos_vs = mv_matrix * position;

    // Calculate eye-space normal and position
    vs_out.normal = mat3(mv_matrix) * normal;
    vs_out.view = pos_vs.xyz;

    // Send clip-space position to primitive assembly
    gl_Position = proj_matrix * pos_vs;
}


And finally, our fragment shader.

#version 420 core

layout (binding = 0) uniform sampler1D tex_toon;

uniform vec3 light_pos = vec3(30.0, 30.0, 100.0);

in VS_OUT
{
    vec3 normal;
    vec3 view;
} fs_in;

out vec4 color;

void main(void)
{
    // Calculate per-pixel normal and light vector
    vec3 N = normalize(fs_in.normal);
    vec3 L = normalize(light_pos - fs_in.view);

    // Simple N dot L diffuse lighting
    float tc = pow(max(0.0, dot(N, L)), 5.0);

    // Sample from cell shading texture
    color = texture(tex_toon, tc) * (tc * 0.8 + 0.2);
}


If we load a donut model into the program, this is what we’ll get:

Well, hope you enjoyed it! I’m currently learning After Effects. And I may write a book about it. What do you think?

We do not expect to be able to display the object exactly as it would appear in reality, with texture, overcast shadows, etc. We hope only to display an image that approximates the real object closely enough to provide a certain degree of realism.

Bui Tuong Phong

Bui Tuong was born in 1941, and became a computer scientist during the turmoil of the Vietnam War. It must have been pretty difficult for him to complete his education in the toxic environment of the 60s, let alone avoid being drafted to fight in the country of his forebears! But he managed to stay alive until 1975, when leukemia took his life, just two short years after he gave us the basis of light and shading today: Phong shading. Vietnamese names are composed of three parts: the paternal name, the middle name, and the given name. All along, when people say “Phong shading”, they have been referring to Bui Tuong’s given name. Read more about given names on Wikipedia.

“Softly let the balmy sunshine
Play around my dying bed,
E’er the dimly lighted valley
I with lonely feet must tread. “

Let The Light Enter – Poem by Frances Ellen Watkins Harper

There’s some extremely concise math behind the Phong shader, which you need not know, really, unless you aspire to be a graphics programmer like yours truly. However, knowing it, or even glancing at it, will be beneficial in the long run. The following is extracted from OpenGL Superbible, 7th Edition, by Graham Sellers and the Khronos Group ARB.

## A) Some Concepts

First, let me get some concepts out of the way. If you have been 3D modelling or game developing for more than a minute, you will certainly have crossed paths with these, but a reminder won’t hurt.

##### A-1) Ambient Light

Most books, especially rather shoddy ones, compare this sort of light with sunlight; however, this is simply wrong. Ambient light is not sunlight. It comes from every direction, meaning it’s omnipresent, but in calculations it’s just a vector with three members. In Phong shading, it gets added at the end, but is not modified.

##### A-2) Diffuse Light

Diffuse light has a direction. In fact, it’s the directional component of a light source [sic]. In cinema, light is diffused with a softbox; in computer graphics, light is diffused with the formula we’ll put forward later. The magnitude, that is, the size of the diffuse light depends on the surface. For example, if our surface is a matte surface that absorbs more light than it reflects, the magnitude must be higher than when our surface is glossier.

##### A-3) Specular Highlight

Like diffuse light, specular light is directional, but based on the glossiness of the surface, it leaves a highlight whose tightness is controlled by a factor called shininess. In real life, shininess is not an inherent part of the material; in fact, a coat of film, or a speck of wax, will add more to a surface’s shininess than anything else ever could. The shininess factor is usually set between 0 and 128, because beyond 128 it won’t affect the shader much.

##### A-4) Albedo

It’s the proportion of the incident light which is reflected away by the surface.

##### A-5) Phong Formula

The formula for calculating a Phong material is as follows:

color = k_a · i_a + k_d · i_d · max(0, N · L) + k_s · i_s · max(0, R · V)^α

Where:

k_a: Ambient material

k_d: Diffuse material

k_s: Specular material and α: shininess factor

i_a: Ambient light

i_d: Diffuse light

i_s: Specular light

You may ask, what about the vectors? Well, no worries, we’ll cover them now:

N: Surface normal

L: Unit vector from the point being shaded towards the light (in other words, the light vector)

R: The reflection of the negative of the light vector

V: Vector to the viewer
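To make the formula concrete, here is a single-channel Python sketch that evaluates it at one point. All the constants are made up; reflect() follows the GLSL convention used in the shaders below:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # GLSL-style reflect(): I - 2 * dot(N, I) * N
    d = dot(n, i)
    return tuple(ic - 2 * d * nc for ic, nc in zip(i, n))

def phong(N, L, V, k_a, k_d, k_s, shininess):
    N, L, V = normalize(N), normalize(L), normalize(V)
    R = reflect(tuple(-c for c in L), N)       # reflect the negated light vector
    diffuse = max(dot(N, L), 0.0)              # N dot L term
    specular = max(dot(R, V), 0.0) ** shininess
    return k_a + k_d * diffuse + k_s * specular

# Light and viewer both straight along the normal: full diffuse, full specular.
shaded = phong(N=(0, 0, 1), L=(0, 0, 1), V=(0, 0, 1),
               k_a=0.1, k_d=0.5, k_s=0.7, shininess=128)
```

With everything lined up, the result is simply k_a + k_d + k_s; tilt any vector and the diffuse and specular terms fall off.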

## B) Gouraud Shading

Before stampeding towards the Phong shader, let’s show how you can achieve Gouraud shading in GLSL. Note that I’m using version 4.1 of GLSL, as per the Superbible’s instructions; however, you may be a fan of www.learnopengl.com and use 3.3. Doesn’t matter, same thing. So let’s see what Gouraud shading is. Is it edible, is it a suppository, or is it some sort of a sex toy?

This method of shading was invented by Henri Gouraud in 1971. It’s not superior to Phong shading by any means, and these days, it’s mostly used as a less GPU-intensive preview method in software such as Cinema 4D. The problem is, that the glint it generates looks like a spark, as shown in the picture below:

This is caused by interpolating the color between the vertices: because the colors are interpolated linearly, there are discontinuities between triangles, and this is only solved by Phong shading. Let’s see how we can do Gouraud shading in GLSL 4.1.
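A tiny numeric sketch (all numbers made up) shows why interpolating vertex colors misses the highlight: a tight specular peak that falls between two vertices contributes almost nothing at either vertex, so no blend of the vertex colors can reproduce it.

```python
def specular(n_dot_r, power=128):
    # The usual specular term: a tight peak raised to a high power.
    return max(n_dot_r, 0.0) ** power

# Suppose the highlight peaks exactly between two vertices.
vertex_a = specular(0.9)                 # nearly zero at vertex A
vertex_b = specular(0.9)                 # nearly zero at vertex B
true_mid = specular(1.0)                 # per-pixel evaluation at the midpoint: 1.0
gouraud_mid = (vertex_a + vertex_b) / 2  # interpolated vertex colors: ~0
```

Per-pixel (Phong) shading evaluates the formula at the midpoint and recovers the full highlight; Gouraud can only average the two dim vertices.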

Listing 1: Per-vertex Gouraud Shading in GLSL 4.1

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Light and material properties
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

// Outputs to the fragment shader
out VS_OUT
{
    vec3 color;
} vs_out;

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view space
    vec3 N = mat3(mv_matrix) * normal;
    // Calculate view-space light vector
    vec3 L = light_pos - P.xyz;
    // Calculate view vector (simply the negative of the view-space position)
    vec3 V = -P.xyz;

    // Normalize all three vectors
    N = normalize(N);
    L = normalize(L);
    V = normalize(V);
    // Calculate R by reflecting -L around the plane defined by N
    vec3 R = reflect(-L, N);

    // Calculate the diffuse and specular contributions
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Send the color output to the fragment shader
    vs_out.color = ambient + diffuse + specular;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}



And now, the fragment shader.

Listing 2: Fragment shader of the same concept.

#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 color;
} fs_in;

void main(void)
{
    // Write incoming color to the framebuffer
    color = vec4(fs_in.color, 1.0);
}



## C) Phong Shading

Before going forward, keep in mind that Phong shading and Phong lighting are two different concepts. You can get rid of Gouraud’s “starburst” by using more vertices, but why do that when Phong shading is around? In Phong shading, instead of interpolating the color between vertices (as seen in Listings 1 and 2), we interpolate the surface normals between the vertices, and then use the interpolated normal to perform the entire lighting calculation for each pixel, not each vertex. However, this means more work in the fragment shader, as we’ll see in Listing 4. But first, let’s see the vertex shader.
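One subtlety worth a quick sketch: linearly interpolated unit normals shrink, which is why the fragment shader below renormalizes them before lighting. A made-up two-normal example in Python:

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

n0 = (1.0, 0.0, 0.0)   # unit vertex normal A
n1 = (0.0, 1.0, 0.0)   # unit vertex normal B, 90 degrees away

# Halfway across the triangle, the rasterizer hands the fragment shader
# the linear blend of the two, which is no longer unit length:
blended = tuple((a + b) / 2 for a, b in zip(n0, n1))   # length ~0.707

# Hence the per-fragment normalize() before any dot products:
renorm = tuple(c / length(blended) for c in blended)
```

Skipping the renormalization would silently dim the diffuse and specular terms toward the middle of each triangle.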

Listing 3: Phong shader’s vertex shader in GLSL 4.1.

#version 410 core

// Per-vertex inputs
layout (location = 0) in vec4 position;
layout (location = 1) in vec3 normal;

// Matrices we'll need
layout (std140) uniform constants
{
    mat4 mv_matrix;
    mat4 view_matrix;
    mat4 proj_matrix;
};

// Outputs to the fragment shader
out VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} vs_out;

// Position of light
uniform vec3 light_pos = vec3(100.0, 100.0, 100.0);

void main(void)
{
    // Calculate view-space coordinate
    vec4 P = mv_matrix * position;

    // Calculate normal in view-space
    vs_out.N = mat3(mv_matrix) * normal;

    // Calculate light vector
    vs_out.L = light_pos - P.xyz;

    // Calculate view vector
    vs_out.V = -P.xyz;

    // Calculate the clip-space position of each vertex
    gl_Position = proj_matrix * P;
}



Nothing much has changed. But the same isn’t true for the fragment shader.

#version 410 core

// Output
layout (location = 0) out vec4 color;

// Input from vertex shader
in VS_OUT
{
    vec3 N;
    vec3 L;
    vec3 V;
} fs_in;

// Material properties
uniform vec3 diffuse_albedo = vec3(0.5, 0.2, 0.7);
uniform vec3 specular_albedo = vec3(0.7);
uniform float specular_power = 128.0;
uniform vec3 ambient = vec3(0.1, 0.1, 0.1);

void main(void)
{
    // Normalize the incoming N, L and V vectors
    vec3 N = normalize(fs_in.N);
    vec3 L = normalize(fs_in.L);
    vec3 V = normalize(fs_in.V);

    // Calculate R locally
    vec3 R = reflect(-L, N);

    // Compute the diffuse and specular components for each fragment
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo;

    // Write final color to the framebuffer
    color = vec4(ambient + diffuse + specular, 1.0);
}



Well, that is it for this lesson. I hope you enjoyed it. If this has sparked your interest in OpenGL, you can buy OpenGL Superbible from Amazon, or head to learnopengl.com. If you can’t make heads or tails of the shaders, I recommend The Book of Shaders. Please leave a comment. Thank you.

## Ranginak: Python Tri-Shade Color Generator

Color be a harsh mistress. Aristotle developed the first color theory. He believed that colors were celestial rays sent by the gods for humans to perceive, based on the four elements of fire, earth, air, and water.

Two hundred years before Newton cracked the color spectrum, artist and heartthrob Leonardo of the planet Vinci theorized that whilst philosophers perceived white as the “receiver” and black as the “absence” of color, both are important. He later developed his color theory based on the six colors that you see in the image above. Imagine colors as Boolean values: white is True, black is False. If True is 1 and False is 0, blue is 0.5. That’s very close to how we perceive colors today.

Meanwhile, Persian painters who came before Da Vinci and Newton gave color theory an existentialist twist. Green was the color of nobility. Blue was the color of paradise. Red was the color of passion. And so on.

In 1666, Sir Isaac Newton, using two prisms, generated what we today know as the White Light Color Spectrum.

Light is not a particle. It’s not a wave either. Light is made up of photons. Photons carry energy, and a prism cracks this energy and splits it into seven colors. That’s just white light. If we heat up hydrogen until it emits light and then split that light, it will create a four-line spectrum called an emission spectrum.

### Shades of Color

In real life, all the colors in existence are made up of three colors: red, green, yellow. Imagine we have a base color, and we want to increase the red-green-yellow values incrementally until we get a shade. If we increase the value of all three, we’ll get a random shade of color. But if we increase them based on a state, treating this as a 3! state machine, we get six conditions. If the condition calls for a red-green shade, we increase the red and green values without touching yellow. And so on.
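That six-state idea can be sketched in a few lines of Python. This is an illustrative sketch of the scheme, not the final script (the channel names and step size are made up for the example):

```python
from itertools import permutations

# The 3! orderings of the three channels: six possible shade "states".
channels = ("red", "green", "yellow")
states = list(permutations(channels))

def shade_step(color, state, step=0.1):
    # Bump the first two channels of the state; the third stays untouched.
    untouched = state[2]
    return {ch: min(1.0, v + (0.0 if ch == untouched else step))
            for ch, v in color.items()}

base = {"red": 0.2, "green": 0.4, "yellow": 0.6}
red_green = shade_step(base, ("red", "green", "yellow"))   # a red-green shade step
```

Repeating shade_step with the same state walks the base color toward a consistent two-channel shade instead of a random one.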

In computers, yellow is blue. And RGB is just one of the color spaces we use in computers: we have CMYK for print, HSL, HSV, YIQ, and so on. In this program, we’ll use HSV and RGB. You’ll see.

### Complementary Colors

Newton also discovered that if we create a circular spectrum, the opposite colors complement each other. Boutet created the following color wheel based on this discovery:

I talked about RGB complementary colors in this post, so I’ll cut it short. Just know that in this program, once we generate the base color, we invert it and generate a shade of six colors based on the opposite color. You’ll see. Let’s continue.

### Ranginak Color Generator

Ranginak (meaning “small color shade” in Persian) is a Python script that generates three six-colored shades of color, plus the three original colors.

The source can be found here; you can also learn more about it in this post.

The first shade is used for the background. The second shade is used for the mid-ground. The third shade, which is the most saturated, is used for the foreground.

How does it work? Let’s start. This code requires Gizeh, which you can install by:

pip install gizeh


It also uses colorsys, which is a built-in Python library. Colorsys converts between color systems. We’ll only use it once in this code. We also make use of random and time.

import gizeh as gz
import colorsys
import random
import time


We then write our first function.

def generate_color(s_min, s_max):
    random.seed(time.time())
    h = random.randint(0, 100) / 100
    random.seed(time.time())
    v = random.randint(0, 100) / 100
    random.seed(time.time())
    s = random.randint(s_min, s_max) / 100
    color = colorsys.hsv_to_rgb(h, s, v)

    return color


s_min and s_max are the minimum and maximum saturation, respectively. We seed the generator with the current time before each randint() call; you’ll see why seed() is important in the next function. The function then converts the HSV values to RGB using colorsys and returns them as a tuple.
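If you haven’t used colorsys before, here is what that conversion looks like on its own (the values are chosen purely for illustration):

```python
import colorsys

# colorsys works in the 0.0-1.0 range on both sides of the conversion.
pure_red = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)     # hue 0, full saturation/value
washed_red = colorsys.hsv_to_rgb(0.0, 0.5, 1.0)   # same hue, half saturation
```

Lowering the saturation keeps the hue but pulls the color toward white, which is exactly why the background shade gets the low s_min/s_max range.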

Next, the function that assembles the three base colors:

def generate_color_master():
    color_master = []

    color_master.append(generate_color(1, 33))
    color_master.append(generate_color(33, 66))
    color_master.append(generate_color(66, 100))

    return color_master


color_master[] is a list that contains three color tuples. One for background, one for mid-ground, one for foreground. We change s_min and s_max based on our desire to create a less saturated color for the background, and a more saturated color for the mid-ground and foreground.

def invert():
    inverted = []
    colors = generate_color_master()

    for color_tuple in colors:
        r = 1 - color_tuple[0]
        g = 1 - color_tuple[1]
        b = 1 - color_tuple[2]

        inverted.append((r, g, b))

    return inverted


Our next function creates the complementary colors based on the main colors, and returns them in a list accordingly. Now, our main function.

def generate_shade_color(r, g, b, color_tuple):
    new_color = 0

    addition_r = (random.randint(1, random.randint(5, 9)) / 10) * r
    addition_g = (random.randint(1, random.randint(5, 9)) / 10) * g
    addition_b = (random.randint(1, random.randint(5, 9)) / 10) * b

    new_r = 0
    new_g = 0
    new_b = 0

    if r == 0:
        new_r = color_tuple[0] * 255
        new_g = color_tuple[1] + addition_g * 255
        new_b = color_tuple[2] + addition_b * 255
    elif g == 0:
        new_g = color_tuple[1] * 255
        new_r = color_tuple[0] + addition_r * 255
        new_b = color_tuple[2] + addition_b * 255
    elif b == 0:
        new_b = color_tuple[2] * 255
        new_g = color_tuple[1] + addition_g * 255
        new_r = color_tuple[0] + addition_r * 255

    if int(new_r) <= 255 and int(new_g) <= 255 and int(new_b) <= 255:
        new_color = (new_r / 255, new_g / 255, new_b / 255)
    elif int(new_r) > 255:
        new_color = (1.00, new_g / 255, new_b / 255)
    elif int(new_g) > 255:
        new_color = (new_r / 255, 1.00, new_b / 255)
    elif int(new_b) > 255:
        new_color = (new_r / 255, new_g / 255, 1.00)

    return new_color


Because it might get complicated, let me explain what it does in list form:

1- r, g, and b are binary coefficients. If one of them is 0, its channel won’t change in our shade. If we want to disable, let’s say, r in our shade, we pass r as 0 and the other two as 1. And so on. color_tuple is the main color we wish to create a shade from.

2- addition_[channel] are random numbers between 0.1 and 0.9 that we add to the main color’s respective channel in order to create an increasing shade. Note that we multiply it by the coefficient so it’ll be 0 if the coefficient is 0.

3- We multiply the color by 255 so we can have an easier time checking if the color is out of bounds.

4- We check if the color is out of bounds. If it is, we make it 1, the maximum color.

5- We divide the color by 255 again and return the new color.
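Steps 3 through 5 boil down to a saturating clamp; here is a minimal single-channel sketch of that round trip (the sample values are made up):

```python
def clamp_channel(value_255):
    # Work in the 0-255 range, saturate any overflow, then scale back to 0.0-1.0.
    return min(value_255, 255) / 255

in_range = clamp_channel(204)    # stays proportional: 204/255
overflow = clamp_channel(300)    # an overflowing shade saturates at the maximum
```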

def generate_shade(r, g, b):
    colors = invert()
    bg = []
    mg = []
    fg = []

    for i in range(6):
        bg.append(generate_shade_color(r, g, b, colors[0]))
        mg.append(generate_shade_color(r, g, b, colors[1]))
        fg.append(generate_shade_color(r, g, b, colors[2]))

    return [bg, mg, fg]


In this function, we create six colors for each layer.

Now, we get to the drawing part using Gizeh.

rect_w = 500
rect_h = 500

def generate_surface():
    surface = gz.Surface(width=int(rect_w * 7), height=int(rect_h * 3))
    return surface



We create a 3500*1500 surface: seven columns and three rows of 500-pixel squares.

def draw_sqr(color, x, y):
    sqr = gz.square(l=500, fill=color, xy=(x, y))

    r = int(color[0] * 255)
    g = int(color[1] * 255)
    b = int(color[2] * 255)
    string = "(" + str(r) + ", " + str(g) + ", " + str(b) + ")"
    text2 = gz.text(string, fontfamily="Tahoma", fontsize=24, fill=(0, 0, 0), xy=(x + 20, y + 20))
    text3 = gz.text(string, fontfamily="Tahoma", fontsize=23, fill=(1, 1, 1), xy=(x + 20, y + 20))

    return gz.Group([sqr, text2, text3])


This function generates a square of the given color, plus a text label showing the color’s RGB value (the slightly larger black text behind the white text creates a subtle outline).
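The float-to-label conversion inside draw_sqr can be pulled out into a small helper (a sketch; color_label is a hypothetical name not used in the original code):

```python
def color_label(color):
    # Convert an (r, g, b) tuple in the 0-1 range into a "(R, G, B)"
    # string in the 0-255 range, just like draw_sqr does internally.
    r = int(color[0] * 255)
    g = int(color[1] * 255)
    b = int(color[2] * 255)
    return "(" + str(r) + ", " + str(g) + ", " + str(b) + ")"
```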

Now, the main function.

def main_func(r, g, b):
    colors = generate_shade(r, g, b)
    original_color = generate_color_master()
    items = []

    bg = colors[0]
    mg = colors[1]
    fg = colors[2]

    items.append(draw_sqr(bg[0], 250, 250))
    items.append(draw_sqr(bg[1], 750, 250))
    items.append(draw_sqr(bg[2], 750 + 500, 250))
    items.append(draw_sqr(bg[3], 750 + 1000, 250))
    items.append(draw_sqr(bg[4], 750 + 1500, 250))
    items.append(draw_sqr(bg[5], 750 + 2000, 250))
    items.append(draw_sqr(original_color[0], 750 + 2500, 250))

    items.append(draw_sqr(mg[0], 250, 250 + 500))
    items.append(draw_sqr(mg[1], 750, 250 + 500))
    items.append(draw_sqr(mg[2], 750 + 500, 250 + 500))
    items.append(draw_sqr(mg[3], 750 + 1000, 250 + 500))
    items.append(draw_sqr(mg[4], 750 + 1500, 250 + 500))
    items.append(draw_sqr(mg[5], 750 + 2000, 250 + 500))
    items.append(draw_sqr(original_color[1], 750 + 2500, 250 + 500))

    items.append(draw_sqr(fg[0], 250, 250 + 1000))
    items.append(draw_sqr(fg[1], 750, 250 + 1000))
    items.append(draw_sqr(fg[2], 750 + 500, 250 + 1000))
    items.append(draw_sqr(fg[3], 750 + 1000, 250 + 1000))
    items.append(draw_sqr(fg[4], 750 + 1500, 250 + 1000))
    items.append(draw_sqr(fg[5], 750 + 2000, 250 + 1000))
    items.append(draw_sqr(original_color[2], 750 + 2500, 250 + 1000))

    return gz.Group(items)


First, we prepare our colors. Then we draw 21 squares of different colors. Don’t ask why I didn’t use a loop; it brings back bad, bad memories. We return everything as a Gizeh group. Now, near the end, we don’t need another function, just some top-level code:

if __name__ == "__main__":
    for i in range(12):
        group = main_func(0, 1, 1)
        surface = generate_surface()
        group.draw(surface)
        surface.write_to_png("shade_" + str(i) + ".png")


This code will create 12 images with a green-blue shade. Change g or b to 0 and r to 1 to experiment with it. If you want more images, change the output filename rather than generating lots of them in one run.

Well, that’s it. I hope you enjoyed it. Remember, the code can be found here. For now, hope you’ll have colorful dreams!

## Creating a Procedural Texture Using Gizeh

Gizeh is a small library for Python which you can install using pip. It doesn’t have any official documentation, so you’ll have to make do with the short README on its GitHub page. Before starting this tutorial, read that README carefully, or you may get confused later on. Now, let’s see what we want to create: a procedural texture of a textile. Maybe denim, maybe another pattern. I don’t know. Right now we can’t afford to be picky. The output is entirely randomized, and if you aren’t happy with the texture, you can just run the code again.

Not happy? Let’s run the code again. (I’m using PyCharm to write and run this code; setting up Python and a good IDE is beyond the scope of this tutorial.)

But how? Simple:

1- A blue background.

2- 200 groups of concentric circles that increase in alpha as they decrease in radius. Colors range from rgb(200, 200, 200) to rgb(250, 250, 250).

3- Vertical and horizontal lines spaced according to a coefficient of 0.01 and tilted by a random number between 1 and 8.

That’s it!
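Step 3 can be previewed without any drawing code. This sketch (vertical_line_endpoints is a hypothetical name) computes the endpoints that the vertical-line loop later in this post produces, with the random tilt seeded so the output is reproducible:

```python
import random


def vertical_line_endpoints(w, h, count, coeff, rng):
    # Each line starts at x = w * (i * coeff) on the top edge and ends
    # tilt pixels to the right on the bottom edge.
    endpoints = []
    for i in range(count):
        tilt = rng.randint(1, 8)
        x = w * (i * coeff)
        endpoints.append((x, 0, x + tilt, h))
    return endpoints


lines = vertical_line_endpoints(640, 480, 3, 0.01, random.Random(0))
```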

Let’s start. First, let’s setup a window, and a surface:

import gizeh as gz
import random

w = 640
h = 480

surface = gz.Surface(width=w, height=h, bg_color=(21/255, 96/255, 189/255))


You can change w and h to change the height and the width. Now, let’s generate our radial gradient:

def circle_generator(x, y, r, c):
    i = r
    a = 0
    while i > 0:
        circle = gz.circle(r=i, xy=[x, y], fill=(c/255, c/255, c/255, a * 0.70))
        i -= 1
        a += 1/2000

        circle.draw(surface)


Besides the location of the circles, the function accepts two other parameters: the radius, which decreases on each iteration of the while loop, and the color, which we divide by 255 since in Gizeh, RGB values are between 0 and 1. We increase the alpha (a) by adding 1/2000 to it in each iteration. At the end of each iteration, we draw the circle.
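The radius and alpha ramp from circle_generator can be inspected on its own with a pure-Python sketch (alpha_ramp is a hypothetical name):

```python
def alpha_ramp(r):
    # Reproduce the ramp from circle_generator: the radius counts down
    # from r to 1, while the alpha grows by 1/2000 per step and is
    # scaled by 0.70 when the circle is drawn.
    steps = []
    i = r
    a = 0
    while i > 0:
        steps.append((i, a * 0.70))
        i -= 1
        a += 1 / 2000
    return steps
```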

Now, let’s make a line generator:

def generate_lines(x1, y1, x2, y2, s, a):
    line = gz.polyline(points=[(x1, y1), (x1, y1), (x2, y2), (x2, y2)], stroke_width=s,
                       stroke=(0, 0, 0, a/255), fill=(0, 0, 0, a/255))

    line.draw(surface)


This one just generates one line. We’re actually drawing a polygon whose origin is (x1, y1) and which ends at (x2, y2). If we add a randomized number to the second pair of coordinates, we can get a mangled shape, which will come in handy if we want to generate some other pattern. s is the stroke width, and a is the alpha. We set the fill to black, but it’s unused, as the shape has no interior to fill.

Now, let’s draw 200 radial gradients on the screen:

for i in range(200):
    x = random.randint(0, w)
    y = random.randint(0, h)
    r = random.randint(20, 100)
    c = random.randint(200, 255)
    circle_generator(x, y, r, c)



We choose entirely random locations, a radius between 20 and 100, and a color between very light gray and white. Then we call the function.

Now, the lines!

s = random.randint(1, 3)
a = random.randint(50, 200)
coeff = 0.01

for i in range(w):
    tilt = random.randint(1, 8)
    generate_lines(w * (i * coeff), 0, w * (i * coeff) + tilt, h, s, a)

for i in range(h):
    tilt = random.randint(1, 8)
    generate_lines(0, h * (i * coeff), w, h * (i * coeff) + tilt, s, a)


First, we choose a random opacity and a random stroke width. Then we set the coefficient to 0.01. You can experiment with other coefficient values, but be warned that if you randomize it, it may take you ten runs to get a good pattern. We create a new random number in each iteration to use as a tilt, then generate the vertical lines w times and the horizontal lines h times.

At the end, we save the file as a PNG. This is another reason to use PyCharm: it shows the picture and updates it in a separate window, which many other IDEs don’t do.

surface.write_to_png("only_bg.png")


Well, that’s it! You can experiment with it: use diagonal lines, create woven textiles, and so on. Hope you enjoyed it.

## Generating Tons of Complementary Colors with Processing

Well, this blog should be about Processing! But no, I’m just practicing game asset creation, as I wish to become a master of all trades (and jack of none). Therefore, I’m relaxing my programming muscles a bit with Processing while I’m doing graphics. And maybe, someday, music? I dunno.

So we’ve all heard of the RGB colorspace. Back in the 90s, HP and Microsoft created the sRGB standard for it: displays were becoming a lot more sophisticated, and less and less happened in monochrome and on teletypes. So sRGB was born.

RGB has three to four channels: Red, Green, Blue, and optionally Alpha. Each has a value between 0 and 255; the higher we go, the lighter we get. There are two common ways of writing an RGB color, HEX and decimal. For HEX, each channel is converted to its two-digit hexadecimal notation and the digits are put together in succession, such as (255, 255, 255) -> #FFFFFF.
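The decimal-to-HEX conversion described above, written in Python for illustration (rgb_to_hex is a hypothetical helper, not part of any library used here):

```python
def rgb_to_hex(r, g, b):
    # Each channel becomes two uppercase hexadecimal digits,
    # concatenated in succession after a leading "#".
    return "#{:02X}{:02X}{:02X}".format(r, g, b)
```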

On the sRGB color wheel, opposite colors go together like America and apple pie. Let me show you what I mean:

In fact, this is true of all colorspaces, such as HSL, HSV, and CMYK, and it isn’t a new concept: by 1810, painters were already using complementary colors with the traditional RYB color wheel (in painting, the primaries are traditionally red, yellow, and blue, not red, green, and blue).

So, is there an algorithm to generate a complementary color? Of course. Imagine each color is an angle on the color wheel, and the color we want sits at PI/4. The complementary color is the point directly opposite, so we add PI (180 degrees) to the angle. In RGB terms, that works out to something even simpler: to generate the opposing color, we subtract each channel from 255.
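Here is that channel subtraction written out in Python, for illustration (the Processing sketch in this post does the same thing with floats):

```python
def complement(r, g, b):
    # The complementary color: each channel subtracted from 255.
    return (255 - r, 255 - g, 255 - b)
```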

So how can Processing help us choose the appropriate complementary color? By generating HUNDREDS and THOUSAND of complementary colors in a jiff. Let’s see how it works:

void setup() {
    size(500, 500);
}

void draw() {
    float r = random(255);
    float g = random(255);
    float b = random(255);
    color bgc = color(r, g, b);

    background(bgc);
    textSize(16);
    text("Background: ", 20, height * 0.10);
    text(r, width * 0.5, 20);
    text(g, width * 0.5, 50);
    text(b, width * 0.5, 80);

    float comp_r = 255 - r;
    float comp_g = 255 - g;
    float comp_b = 255 - b;
    color rc = color(comp_r, comp_g, comp_b);

    fill(rc);
    rect(width * 0.25, height * 0.25, 200, 200);

    fill(255);
    textSize(16);
    text("Complement: ", 20, height * 0.90);
    text(comp_r, width * 0.5, 420);
    text(comp_g, width * 0.5, 450);
    text(comp_b, width * 0.5, 480);

    saveFrame("complementary_colors#####.png");
}


In each draw() loop, we generate r, g, and b, and display the result as the background. Then we display the channel values for reference. Then we generate the complementary color and display it in a rectangle. You could do it with a circle too; depends on your taste.

Then, we save it with saveFrame() in the folder where you’ve saved the .pde file. That’s it!

Well, hope you’ve enjoyed this tutorial. Next up: pixel art with Photoshop.

## Pixel Perfect Crosshair with Processing

Have you ever wondered how it’s possible to create a pixel-perfect crosshair? Well, wonder no more. Processing is here.

Well, no time to lose. Let us start off by writing the setup() function.

void setup() {
    size(500, 500);
    noLoop();
}


It simply creates a 500*500 canvas and calls noLoop(), so the draw() function won’t run indefinitely. Now, the draw function. Before that, save the sketch, so the image will be saved in the same folder as your code. Alright, let’s have a looksie.

void draw() {
    noFill();
    strokeWeight(10);
    arc(width / 2, height / 2, 300, 300, QUARTER_PI, PI);
    strokeWeight(8);
    arc(width / 2, height / 2, 250, 250, QUARTER_PI, PI);
    strokeWeight(6);
    arc(width / 2, height / 2, 200, 200, 0, QUARTER_PI);
    strokeWeight(4);
    arc(width / 2, height / 2, 200, 200, 0, 3);
    strokeWeight(6);
    arc(width / 2, height / 2, 250, 250, PI, TWO_PI);
    fill(0);
    ellipse(width / 2, height / 2, 30, 30);
    strokeWeight(12);
    line(width * 0.25, height * 0.75, width / 2, height / 2);
    line(width * 0.75, height * 0.75, width / 2, height / 2);
    line(width * 0.25, height * 0.25, width / 2, height / 2);
    line(width * 0.75, height * 0.25, width / 2, height / 2);

    saveFrame("crosshair.png");
}


noFill() disables the color fill, so we’ll only have strokes. strokeWeight(), which we use religiously, sets the width of the stroke in pixels. And arc() creates an arc. This is the main part of this short tutorial; the arc function is defined as:

arc(position_x, position_y, width, height, start_angle, end_angle);


start_angle and end_angle are in radians. You can use QUARTER_PI (45 degrees), HALF_PI (90 degrees, perpendicular), PI (180 degrees), and TWO_PI (a full circle) like:

You can add these constants together to get other angles; for example, QUARTER_PI + HALF_PI gives 3*PI/4 (135 degrees).
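To double-check those angles, here is a quick Python sketch using the standard library’s math.degrees (the constant names mirror Processing’s):

```python
import math

# Processing's angle constants, written out in plain radians:
QUARTER_PI = math.pi / 4  # 45 degrees
HALF_PI = math.pi / 2     # 90 degrees
PI = math.pi              # 180 degrees
TWO_PI = 2 * math.pi      # 360 degrees, a full circle

# Adding constants composes angles:
combined = QUARTER_PI + HALF_PI  # 3*PI/4, i.e. 135 degrees
```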

At the end, we add four lines like so:

line(x1, y1, x2, y2);


We use simple math to place the outer endpoints at a quarter and three-quarters of the screen. All the x2s and y2s are the center of the screen.

At the end, we save the frame.

### Coming up next…

I have prepared a Processing and Illustrator tutorial for creating two types of planets. Stay tuned!

## Generating Color Scheme with Processing (Part 1)

Let me christen this blog through a Processing tutorial.

I hereby promise to fill this blog with game assets that I myself have created, graphics-related tutorials such as After Effects, Cinema 4D and Blender plugin development tutorials, Processing tutorials and scripts, and of course, game development tutorials! I don’t know much, but I’ll share what I know. And that’s how you get into heaven.

Chubak Bidpaa

And now, break the champagne bottle!

Anyways, in this tutorial, I want to teach you how to find a color that matches the theme of your game wonderfully, using Processing. Processing is a Java-based language and environment that aids artists and programmers in creating artwork. You can download it from here.

### The Raw Power of Processing

Processing is a very powerful language, and it’s very useful. Processing is built on Java, so it’s very similar to its host language. It has OOP support, and each sketch has a setup() and a draw() function, which cater to initialization and the main loop, respectively.

Let’s say your level has a red theme, and by red theme I mean the overall feel of the level, and you want to know which colors go well with your main color. So let’s start by setting up a window with a red background:

void setup() {
    size(400, 400);
    color c = color(255, 0, 0);
    background(c);
    noLoop();
}


This will create a 400*400 window with a red background. noLoop() comes into play later: once we write a draw() function, it makes sure draw() does not loop.

Now, to write the draw function. It’s a bit more complicated.

void draw() {
    float red = random(255);
    float green = random(255);
    float blue = random(255);
    color random_color = color(red, green, blue);

    textSize(16);
    text(red, 10, 350);
    text(green, 10, 370);
    text(blue, 10, 390);

    fill(random_color);
    rect(100, 100, 200, 200);
}


First, we create three floating-point variables that hold random numbers between 0 and 255. Then we create a color object from them. Then we print those numbers for later reference, as there must be a way to see which RGB values make up our new color. We then set the fill to the new color and draw a rectangle in the middle of the screen (the origin point is the top-left).
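The same random-channel idea in Python, for illustration (random.uniform stands in for Processing’s random(255); random_color is a hypothetical name):

```python
import random


def random_color(rng=random):
    # Three independent channels, each a float in the range [0, 255].
    r = rng.uniform(0, 255)
    g = rng.uniform(0, 255)
    b = rng.uniform(0, 255)
    return (r, g, b)
```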

As we can see, the green rectangle does not flatter the red background. So we discard it and generate a new color.

As we can clearly see, purple does not go with red either. We’re not lucky today, are we?

Well, there’s no need to beat around the bush. Let’s make a hundred million colors!

In order to do so, add this line:

saveFrame("color_swatch########.png"); // or jpg


at the end of the draw() function. Also, remove the noLoop() line from the setup() function. But before running the code, save the file in a separate folder. Once you run the code, it’ll save images in the folder where you’ve saved the file. Peruse them, and maybe you’ll find your swatch!
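The run of # characters in saveFrame is a zero-padded frame counter. In Python, equivalent filenames could be built like this (a sketch; frame_name is a hypothetical helper):

```python
def frame_name(i, digits=8):
    # Mirrors saveFrame("color_swatch########.png"): the row of #s
    # becomes a zero-padded frame number.
    return "color_swatch{:0{d}d}.png".format(i, d=digits)
```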

### In Part 2…

In Part 2 of this tutorial, I’ll teach you a bit about color theory and how to generate opposing colors.

For now, Semper Fi!