This is not something that you’d find in a journal, or a book: this is something I myself have cooked up. But what do I mean by “Parametric Approach to OpenGL”, and by “Using After Effects”?

 

Browse below the latest posts to find a download link

The above video shows an After Effects plugin by yours truly. Everything is based upon one fragment shader (which I got from a pastebin, but it looks like it’s from this repo), which is as follows:

 

#version 330
uniform sampler2D videoTexture;
uniform float sliderVal;
uniform float multiplier16bit;
in vec4 out_pos;
in vec2 out_uvs;
out vec4 colourOut;

#define PI 3.14159265358979323846


float random (in vec2 _st) {
    return fract(sin(dot(_st.xy,
                         vec2(12.9898,78.233)))*
        43758.5453123);
}

vec2 truchetPattern(in vec2 _st, in float _index){
    _index = fract(((_index-0.5)*2.0));
    if (_index > 0.75) {
        _st = vec2(1.0) - _st;
    } else if (_index > 0.5) {
        _st = vec2(1.0-_st.x,_st.y);
    } else if (_index > 0.25) {
        _st = 1.0-vec2(1.0-_st.x,_st.y);
    }
    return _st;
}

void main() {
    vec2 st = gl_FragCoord.xy/sliderVal;
    st /= 10.0;

    vec2 ipos = floor(st);  // integer part: which tile we are in
    vec2 fpos = fract(st);  // fractional part: position inside the tile

    vec2 tile = truchetPattern(fpos, random( ipos ));

    float color = step(tile.x, tile.y);

    // write to the declared output instead of the deprecated gl_FragColor
    colourOut = vec4(vec3(color), 1.0);
}

But you may say “so what, what’s so parametric about it?” Before doing anything or saying anything, let’s scrutinize the basis of an After Effects OpenGL plugin.

All AE OpenGL plugins are based on one example in the SDK: GLator. This example was removed from the SDK for a long time, maybe 4-5 years, and returned in 2017. Without it, image manipulation has to be done through Adobe’s own mangled suites, something I wouldn’t wish on my worst enemies. The After Effects SDK is based on a few command selectors:

 

switch (cmd) {
			case PF_Cmd_ABOUT:
				err = About(in_data,
							out_data,
							params,
							output);
				break;
				
			case PF_Cmd_GLOBAL_SETUP:
				err = GlobalSetup(	in_data,
									out_data,
									params,
									output);
				break;
				
			case PF_Cmd_PARAMS_SETUP:
				err = ParamsSetup(	in_data,
									out_data,
									params,
									output);
				break;
				
			case PF_Cmd_GLOBAL_SETDOWN:
				err = GlobalSetdown(	in_data,
										out_data,
										params,
										output);
				break;

			case  PF_Cmd_SMART_PRE_RENDER:
				err = PreRender(in_data, out_data, reinterpret_cast<PF_PreRenderExtra*>(extra));
				break;

			case  PF_Cmd_SMART_RENDER:
				err = SmartRender(in_data, out_data, reinterpret_cast<PF_SmartRenderExtra*>(extra));
				break;
		}

These are wrapped in a try...catch statement. You may have noticed some function calls below each command selector. Those are the bread and mead of an After Effects plugin. Our concern is with PreRender() and SmartRender(). PreRender() is not that important:

static PF_Err
PreRender(
	PF_InData				*in_data,
	PF_OutData				*out_data,
	PF_PreRenderExtra		*extra)
{
	PF_Err	err = PF_Err_NONE,
			err2 = PF_Err_NONE;

	PF_ParamDef slider_param;

	PF_RenderRequest req = extra->input->output_request;
	PF_CheckoutResult in_result;

	AEFX_CLR_STRUCT(slider_param);

	ERR(PF_CHECKOUT_PARAM(in_data,
		GLATOR_SLIDER,
		in_data->current_time,
		in_data->time_step,
		in_data->time_scale,
		&slider_param));

	ERR(extra->cb->checkout_layer(in_data->effect_ref,
		GLATOR_INPUT,
		GLATOR_INPUT,
		&req,
		in_data->current_time,
		in_data->time_step,
		in_data->time_scale,
		&in_result));

	if (!err){
		UnionLRect(&in_result.result_rect, &extra->output->result_rect);
		UnionLRect(&in_result.max_result_rect, &extra->output->max_result_rect);
	}
	ERR2(PF_CHECKIN_PARAM(in_data, &slider_param));
	return err;
}

You can see a call to PF_CHECKOUT_PARAM in the code. “Obtains parameter values, or the source video layer, at a specified time. After Effects makes caching decisions based on the checkout state of parameters.”1 This PARAM has been defined earlier, in ParamsSetup():

static PF_Err 
ParamsSetup (	
	PF_InData		*in_data,
	PF_OutData		*out_data,
	PF_ParamDef		*params[],
	PF_LayerDef		*output )
{
	PF_Err		err		= PF_Err_NONE;
	PF_ParamDef	def;	

	AEFX_CLR_STRUCT(def);

	PF_ADD_SLIDER(	STR(StrID_Name), 
					GLATOR_SLIDER_MIN, 
					GLATOR_SLIDER_MAX, 
					GLATOR_SLIDER_MIN, 
					GLATOR_SLIDER_MAX, 
					GLATOR_SLIDER_DFLT,
					SLIDER_DISK_ID);

	out_data->num_params = GLATOR_NUM_PARAMS;

	return err;
}

You can add as many PARAMs to your program as you wish and later pass them to your shaders as uniforms. But how? Well, for that, we must see what goes on in SmartRender():

static PF_Err
SmartRender(
	PF_InData				*in_data,
	PF_OutData				*out_data,
	PF_SmartRenderExtra		*extra)
{
	PF_Err				err = PF_Err_NONE,
						err2 = PF_Err_NONE;

	PF_EffectWorld		*input_worldP = NULL,
						*output_worldP = NULL;
	PF_WorldSuite2		*wsP = NULL;
	PF_PixelFormat		format = PF_PixelFormat_INVALID;
	PF_FpLong			sliderVal = 0;

	AEGP_SuiteHandler suites(in_data->pica_basicP);

	PF_ParamDef slider_param;
	AEFX_CLR_STRUCT(slider_param);

	ERR(PF_CHECKOUT_PARAM(in_data,
		GLATOR_SLIDER,
		in_data->current_time,
		in_data->time_step,
		in_data->time_scale,
		&slider_param));

	if (!err){
		sliderVal = slider_param.u.fd.value / 100.0f;
	}

	ERR((extra->cb->checkout_layer_pixels(in_data->effect_ref, GLATOR_INPUT, &input_worldP)));

	ERR(extra->cb->checkout_output(in_data->effect_ref, &output_worldP));

	ERR(AEFX_AcquireSuite(in_data,
		out_data,
		kPFWorldSuite,
		kPFWorldSuiteVersion2,
		"Couldn't load suite.",
		(void**)&wsP));

	if (!err){
		try
		{
			// always restore back AE's own OGL context
			SaveRestoreOGLContext oSavedContext;

			// our render specific context (one per thread)
			AESDK_OpenGL::AESDK_OpenGL_EffectRenderDataPtr renderContext = GetCurrentRenderContext();

			if (!renderContext->mInitialized) {
				//Now comes the OpenGL part - OS specific loading to start with
				AESDK_OpenGL_Startup(*renderContext.get(), S_GLator_EffectCommonData.get());

				renderContext->mInitialized = true;
			}

			renderContext->SetPluginContext();
			
			// - Gremedy OpenGL debugger
			// - Example of using a OpenGL extension
			bool hasGremedy = renderContext->mExtensions.find(gl::GLextension::GL_GREMEDY_frame_terminator) != renderContext->mExtensions.end();

			A_long				widthL = input_worldP->width;
			A_long				heightL = input_worldP->height;

			//loading OpenGL resources
			AESDK_OpenGL_InitResources(*renderContext.get(), widthL, heightL, S_ResourcePath);

			CHECK(wsP->PF_GetPixelFormat(input_worldP, &format));

			// upload the input world to a texture
			size_t pixSize;
			gl::GLenum glFmt;
			float multiplier16bit;
			gl::GLuint inputFrameTexture = UploadTexture(suites, format, input_worldP, output_worldP, in_data, pixSize, glFmt, multiplier16bit);
			
			// Set up the frame-buffer object just like a window.
			AESDK_OpenGL_MakeReadyToRender(*renderContext.get(), renderContext->mOutputFrameTexture);
			ReportIfErrorFramebuffer(in_data, out_data);

			glViewport(0, 0, widthL, heightL);
			glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
			glClear(GL_COLOR_BUFFER_BIT);
			
			// - simply blend the texture inside the frame buffer
			// - TODO: hack your own shader there
			RenderGL(renderContext, widthL, heightL, inputFrameTexture, sliderVal, multiplier16bit);

			// - we toggle PBO textures (we use the PBO we just created as an input)
			AESDK_OpenGL_MakeReadyToRender(*renderContext.get(), inputFrameTexture);
			ReportIfErrorFramebuffer(in_data, out_data);

			glClear(GL_COLOR_BUFFER_BIT);

			// swizzle using the previous output
			SwizzleGL(renderContext, widthL, heightL, renderContext->mOutputFrameTexture, multiplier16bit);

			if (hasGremedy) {
				gl::glFrameTerminatorGREMEDY();
			}

			// - get back to CPU the result, and inside the output world
			DownloadTexture(renderContext, suites, input_worldP, output_worldP, in_data,
				format, pixSize, glFmt);

			glBindFramebuffer(GL_FRAMEBUFFER, 0);
			glBindTexture(GL_TEXTURE_2D, 0);
			glDeleteTextures(1, &inputFrameTexture);
		}
		catch (PF_Err& thrown_err)
		{
			err = thrown_err;
		}
		catch (...)
		{
			err = PF_Err_OUT_OF_MEMORY;
		}
	}

	// If you have PF_ABORT or PF_PROG higher up, you must set
	// the AE context back before calling them, and then take it back again
	// if you want to call some more OpenGL.		
	ERR(PF_ABORT(in_data));

	ERR2(AEFX_ReleaseSuite(in_data,
		out_data,
		kPFWorldSuite,
		kPFWorldSuiteVersion2,
		"Couldn't release suite."));
	ERR2(PF_CHECKIN_PARAM(in_data, &slider_param));
	ERR2(extra->cb->checkin_layer_pixels(in_data->effect_ref, GLATOR_INPUT));

	return err;
}

“But Chubak, where do we pass the uniforms?” Patience, Constance dear Patience. We pass them in RenderGL():

void RenderGL(const AESDK_OpenGL::AESDK_OpenGL_EffectRenderDataPtr& renderContext,
				  A_long widthL, A_long heightL,
				  gl::GLuint		inputFrameTexture,
				  PF_FpLong			sliderVal,
				  float				multiplier16bit)
	{
		// - make sure we blend correctly inside the framebuffer
		// - even though we just cleared it, another effect may want to first
		// draw some kind of background to blend with
		glEnable(GL_BLEND);
		glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
		glBlendEquation(GL_FUNC_ADD);

		// view matrix, mimic windows coordinates
		vmath::Matrix4 ModelviewProjection = vmath::Matrix4::translation(vmath::Vector3(-1.0f, -1.0f, 0.0f)) *
			vmath::Matrix4::scale(vmath::Vector3(2.0 / float(widthL), 2.0 / float(heightL), 1.0f));

		glBindTexture(GL_TEXTURE_2D, inputFrameTexture);

		glUseProgram(renderContext->mProgramObjSu);

		// program uniforms
		GLint location = glGetUniformLocation(renderContext->mProgramObjSu, "ModelviewProjection");
		glUniformMatrix4fv(location, 1, GL_FALSE, (GLfloat*)&ModelviewProjection);
		location = glGetUniformLocation(renderContext->mProgramObjSu, "sliderVal");
		glUniform1f(location, sliderVal);
		location = glGetUniformLocation(renderContext->mProgramObjSu, "multiplier16bit");
		glUniform1f(location, multiplier16bit);

		// Identify the texture to use and bind it to texture unit 0
		AESDK_OpenGL_BindTextureToTarget(renderContext->mProgramObjSu, inputFrameTexture, std::string("videoTexture"));

		// render
		glBindVertexArray(renderContext->vao);
		RenderQuad(renderContext->quad);
		glBindVertexArray(0);

		glUseProgram(0);
		glDisable(GL_BLEND);
	}

RenderGL(), like SwizzleGL(), is from a series of functions defined at the top of the file.

So what does it all amount to? Go back to the top, to the very first listing, and you’ll see a sliderVal amongst the uniforms. That’s what I mean by Parametric OpenGL. Technically, every OpenGL shader is parametric; however, this gives us a slider, or a point, or a value (depending on the type of the PARAM; I recommend reading the SDK manual). Ipso facto, parametric here means “something to mess around with”.

There’s a considerable amount of money in After Effects plugin development, and this is, perhaps, the very first blog post about this SDK; I’m not sure. When I was a kid, I spent my fair share of money on After Effects plugins. For a brief period in my life, they were my awe, my life and my livelihood. Make an After Effects Plugin, Make a Kid Happy!

I hope you enjoyed this very short post. I had a brouhaha with /r/EnglishLearning. They believed that my prose is, as my friend Tanami puts it, brash. If you believe so, please tell me so I can do something about it. Thank you.

 

I’ve found another book to mack on while you mack with your paramour. It’s called Fractal Worlds: Grown, Built and Imagined. I’m going to write a fractal generating software based on it, if I don’t die tomorrow. Chubak Out.

 

Further Reading

  • After Effects SDK Manual

Good Evening everyone! I hope you’re all doing A-Ok. Yesterday I received a PM:

Hey. Do you know anything about game rendering? I’ve seen your blog you must know something about game rendering.

I think this person is referring to the graphics pipeline, which different APIs also dub the “rendering pipeline”. “The main function of the pipeline is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, and more”. 1 Now, SIMD architecture, with its many cores, makes parallel calculation possible, and it is de facto the main reason we have GPUs today. The graphics pipeline is made up of shaders. Shaders are parallel programs that live on the GPU. OpenGL has five shader stages, and about seven application stages in its pipeline 2. But what are the stages of this pipeline? What does pipeline even mean? Read this article and find out.

The History of the Word “Pipeline”

The word pipeline became popular when gas lamps were invented. “The first Canadian transmission pipeline was built in 1853. A 25 kilometre cast-iron pipe moving natural gas to Trois Rivières, QC. It was the longest pipeline in the world at the time.” 3

 

Source in Footnotes

What does pipeline mean? Simple. Imagine you wish to transmit something from one stage to another stage, then another stage, until the output is unrecognizable compared to the input. For example, in OpenGL, this can be the input:

 

float vertices[] = {
    // first triangle
     0.5f,  0.5f, 0.0f,  // top right
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f,  0.5f, 0.0f,  // top left 
    // second triangle
     0.5f, -0.5f, 0.0f,  // bottom right
    -0.5f, -0.5f, 0.0f,  // bottom left
    -0.5f,  0.5f, 0.0f   // top left
}; 

And this can be the output:

 

Photo and code courtesy of Joey De Vries at Learnopengl.com

Some stages of the OpenGL pipeline are mandatory, some stages are optional. You can see this pipeline in the following picture.

 

By Yours Truly

Based on two books, the OpenGL Red Book 9th Edition and Real-Time Rendering 4th Edition, I wish to explain to you this so-called pipeline. Turn off your phones, and pay attention!

1. Vertex Data

We saw an example of vertex attributes in the first code listing in this very post. In OpenGL, vertices are recorded in the homogeneous coordinate system, a four-component coordinate system. I’ll talk about this coordinate system in another post, very soon. Keep in mind that vertices can be hardcoded in the shader, but most of the time they are passed to the vertex shader as a Vertex Buffer Object.
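As a minimal sketch of that hand-off (assuming a valid OpenGL context and the vertices array from the listing above), uploading the data into a VBO and describing its layout looks roughly like this:

// Sketch only: attribute location 0 must match the vertex shader's position input.
GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// each vertex is three tightly packed floats (x, y, z)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);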

2. Vertex Shader

Pièce de résistance, the first shader we deal with, and perhaps the most important shader. In the vertex shader, we access the data handed over through Vertex Buffer Objects in the main program using attributes, and uniforms passed to the shader are looked up by their locations or binding indices. What are uniforms, you may ask? Well, each vertex gets its own invocation of the shader, and each invocation receives different attribute values; a uniform, however, is constant across all invocations of a draw call. Each shader program has its own uniforms, and you can’t pass a uniform from one shader to another. Other variables, however, can be passed from one shader stage to the next: variables qualified by the keyword in are received from the previous stage, and variables qualified by out are passed to the next stage.

In the vertex shader, the vertex position is multiplied by three matrices on its way to the outgoing gl_Position. One is the model matrix, the other is the view matrix, and at the end we have the projection matrix. You can clearly see them in the following picture:

Courtesy of Joey De Vries at Learnopengl.com

The model matrix places the object in the world. The view matrix transforms the world into the camera’s frame of reference. The projection matrix projects that view onto the screen. One day, I’ll talk in detail about various camera projections.
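To make that concrete, here is a hedged sketch of building and uploading those three matrices on the CPU with GLM (my choice of math library; the Superbible listings later in this post use vmath instead). The program handle and the uniform name "mvp" are assumptions:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// "program" is an already-linked shader program with an "mvp" uniform (assumed).
void uploadMVP(GLuint program, float angleRadians, float width, float height)
{
    // model: spin the object around the Y axis
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), angleRadians,
                                  glm::vec3(0.0f, 1.0f, 0.0f));
    // view: a camera sitting at (0, 2, 5), looking at the origin
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    // projection: a 45-degree perspective "lens"
    glm::mat4 proj = glm::perspective(glm::radians(45.0f),
                                      width / height, 0.1f, 100.0f);

    glm::mat4 mvp = proj * view * model;   // applied right-to-left to each vertex
    glUniformMatrix4fv(glGetUniformLocation(program, "mvp"),
                       1, GL_FALSE, glm::value_ptr(mvp));
}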

 

3. Tessellation Control Shader

“Imagine you have a bouncing ball object. If you represent it with a single set of triangles, you can run into problems with quality or performance. Your ball may look good from 5 meters away, but up close the individual triangles, especially along the silhouette, become visible. If you make the ball with more triangles to improve quality, you may waste considerable processing time and memory when the ball is far away and covers only a few pixels on the screen. With tessellation, a curved surface can be generated with an appropriate number of triangles.” 4

Tessellation is basically the process of breaking higher-order primitives into many small sub-primitives in order to create more detailed geometry. In OpenGL, these higher-order primitives are called patches. TCS is the first of the two tessellation shaders, the other one being…
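On the API side this changes the draw call itself; a minimal sketch, assuming a program with tessellation stages is already bound and patchVertexCount is the number of control points in the buffer:

// Tell OpenGL how many control points make up one patch
glPatchParameteri(GL_PATCH_VERTICES, 4);   // e.g. a quad patch

// Draw with GL_PATCHES instead of GL_TRIANGLES; the tessellation
// stages turn each patch into many small triangles on the GPU.
glDrawArrays(GL_PATCHES, 0, patchVertexCount);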

 

4. Tessellation Evaluation Shader

Once the tessellation engine does its job, it produces a number of new vertices which add detail to the given patch. The Tessellation Evaluation Shader is invoked for each generated vertex, which can add a lot of overhead. That’s why programmers shouldn’t go overboard with the tessellation levels they set in the TCS.

 

5. Geometry Shader

Geometry Shader looks like Vertex Shader, and uses gl_Position, but it’s responsible for creating multiple instances of a primitive through EmitVertex() and EndPrimitive(). “Each time we call EmitVertex the vector currently set to gl_Position is added to the primitive. Whenever EndPrimitive is called, all emitted vertices for this primitive are combined into the specified output render primitive. By repeatedly calling EndPrimitive after one or more EmitVertex calls multiple primitives can be generated. This particular case emits two vertices that were translated by a small offset from the original vertex position and then calls EndPrimitive, combining these two vertices into a single line strip of 2 vertices.” 5

 

6. Primitive Assembly

Primitive assembly is a short stage. It is basically the grouping of primitives into lines and triangles. You may ask, well, what about points? Can’t they be assembled? The answer to your question is that yes, it does happen for points, but it’s redundant.

7. Clipping

After gl_Position is converted to Cartesian coordinates by the perspective divide and normalized, meaning each component is brought into the -1 to 1 range, primitives are clipped for the screen. Meaning, the primitives which are not inside the view volume propagated by the projection matrix are discarded. This stage is very important for overall performance reasons.
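To spell out that “converted to Cartesian coordinates” step (my wording, not the books’): the vertex shader outputs clip coordinates (x_c, y_c, z_c, w_c), and the perspective divide brings them into normalized device coordinates:

(x_{ndc}, y_{ndc}, z_{ndc}) = \left(\frac{x_c}{w_c}, \frac{y_c}{w_c}, \frac{z_c}{w_c}\right)

Anything whose NDC components fall outside the -1 to 1 range is clipped away.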

8. Culling

This is another step taken for performance. Each 3D primitive has a front face, toward the camera, and a back face, out of the camera’s view. For closed objects it’s entirely useless to render the back faces as well. Culling is the process of discarding back-facing primitives in favour of the front-facing ones.
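In OpenGL, enabling back-face culling is just a few state calls; a minimal sketch (counter-clockwise front faces are the default, stated here only for clarity):

glEnable(GL_CULL_FACE);   // turn culling on (it is off by default)
glCullFace(GL_BACK);      // discard back faces
glFrontFace(GL_CCW);      // counter-clockwise winding marks a front face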

9. Rasterization

Rasterization is the process of converting 3D objects in the computer’s memory into 2D objects to be displayed on the monitor, or rendered to a file. In other words, rasterization is the process of determining which fragments might be covered by a triangle or a line. Rasterization has two stages: triangle setup and triangle traversal. In the former, “… edge equations, and other data for the triangle are computed. These data may be used for triangle traversal, as well as for interpolation of the various shading data produced by the geometry stage. Fixed function hardware is used for this task.” 6 In the latter, each pixel, or sample, covered by the triangle is found, and a fragment is generated for each such pixel or sample. If the number of samples is low, aliasing happens. Attributes are interpolated across the triangle, and the fragments are sent to the next stage, which is the most important stage: the fragment shader.

10. Fragment Shader

Crème de la crème, it is the last programmable stage in the pipeline, and it performs operations such as coloring, texturing, shadowing, antialiasing, raytracing (if possible), etc. Perhaps the most important thing done in this stage is texturing.

 

Texturing, picture courtesy of Real-Time Rendering 4th Edition

If you are interested in learning more about fragment shaders and what you can do in them, you can read The Book of Shaders, a free internet book that, although outdated, teaches a lot of tricks of the trade.

11. Z-Buffer, Stencil Buffer, and Color Buffer

The Z-buffer is the depth buffer of the program, and it can be backed by an FBO attachment. An FBO can also carry a stencil buffer, which, as the name suggests, acts as a mask that limits drawing to part of the screen, and a color buffer, which holds the actual rendered colors, including the background color of the program.
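As a tiny sketch of how these buffers show up in day-to-day code (the clear color is just a placeholder), a frame typically enables the depth test and clears all three before drawing:

glEnable(GL_DEPTH_TEST);            // use the Z-buffer for hidden-surface removal
glEnable(GL_STENCIL_TEST);          // only needed if you actually use the stencil mask

glClearColor(0.1f, 0.1f, 0.1f, 1.0f);  // the "background color" lives in the color buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);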

12. Compute Shaders, and Beyond!

Disney don’t sue please.

Compute shaders are a part of GPGPU, or general-purpose GPU computing. GPUs, with their SIMD architecture, are the bee’s knees for high-performance computing. So in modern OpenGL, Khronos has provided us with a way to harness the power of the GPU using shaders. Compute shaders use GLSL, and are good for things like image processing. However, compute shaders are not a part of the pipeline, and in fact a shader program cannot contain both a compute shader and graphics shaders. A single application, however, can contain both kinds of programs.
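As a small illustration (names like computeProgram, imageWidth and imageHeight are placeholders of mine), dispatching a compute shader looks like this rather than like a draw call:

glUseProgram(computeProgram);

// launch enough 16x16 work groups to cover the image
GLuint groupsX = (imageWidth  + 15) / 16;
GLuint groupsY = (imageHeight + 15) / 16;
glDispatchCompute(groupsX, groupsY, 1);

// make the image writes visible to later texture fetches
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);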

There are many things to do after the fragment shader, such as collision detection, animation, physics, music, etc. But they are not a part of OpenGL.

 

So that’s it! I hope you’ve enjoyed this post. Since there was a brouhaha last night about me not citing the sources, this time I made sure to cite everyone. People were right, I should have cited. Thanks people!

Please share my post with your interested friends. It goes a long way. Thank you, Chubak Out.

The following article is based on a paper by Jorge Jimenez, Jose J. Echevarria, Tiago Sousa and Diego Gutierrez.

You can view their SMAA demo here. It runs fine on my GTX 960 2GB.

Old Methods of Antialiasing

For years, MSAA (Multisample Antialiasing) and SSAA (Supersample Antialiasing) have been the de facto methods of antialiasing. In fact, these two still retain the highest quality amongst the various modern antialiasing methods. As we know, aliasing is caused by a lack of samples, at the spatial level (jagged lines) and at the temporal level (flickering), usually around the edges and high/low contrast regions of the picture. To battle this, we had two main ways that were once the only way around: supersampling and multisampling. In supersampling, we render the picture at a higher resolution, then downsample it to the final resolution. It works fine because, as my uncle puts it, it’s a pincer attack: it covers every basis of the problem and surrounds it. Multisampling is similarly pincer-ish. In this method, each pixel stores several coverage and depth samples, based on the chosen sample count. At today’s high resolutions, it would require a rather fiendish graphics card to achieve this. Therefore, we need new methods of antialiasing, at both the spatial and the temporal level. All these methods rely on one algorithm to do their job: an edge detection algorithm. But they rely on other things as well.
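So the “old” methods aren’t left abstract, here is roughly what requesting MSAA looks like in a typical OpenGL application; the GLFW calls are my assumption about the windowing library, not something from the paper:

// Ask for a 4x multisampled default framebuffer (GLFW assumed).
glfwWindowHint(GLFW_SAMPLES, 4);
GLFWwindow* window = glfwCreateWindow(1280, 720, "MSAA demo", nullptr, nullptr);
glfwMakeContextCurrent(window);

// ...load GL functions, then enable multisampled rasterization:
glEnable(GL_MULTISAMPLE);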

 

Modern Ways of Antialiasing

There are many modern filter-based methods for antialiasing which all, although inferior to MSAA and SSAA in quality, do their job: FXAA, DEAA, GPAA, GBAA, CSAA, EQAA, DLAA… In this article, we’ll talk about SMAA, and its predecessor, MLAA. These modern filter-based methods have their own problems:

  • Most edge detection methods, which are the basis of these methods, only take into account numerical differences between pixels, ignoring how they appear to the viewer.
  • The original shape of the object is not always preserved; an overall rounding of corners is most of the time clearly visible in text, sharp corners, and subpixel features.
  • Most approaches are designed to handle horizontal or vertical patterns only, ignoring diagonals.
  • Real subpixel features and subpixel motion are not properly handled. Specular and shading aliasing is not completely removed.

You’ve guessed right: we raise these issues because we aim to decimate them.

Morphological Antialiasing (MLAA)

MLAA tries to estimate the coverage of the original geometry. To accurately rasterize an antialiased triangle, the coverage area for each pixel inside the triangle must be calculated, to blend it properly with the background. MLAA begins with the image without antialiasing, and it reverses the process by re-vectorizing the silhouettes, in order to estimate such coverage areas. Then, since the background cannot be known after rasterization, MLAA blends with a neighbor, assuming that its value is similar to the original background. In other words, the algorithm detects borders (either using color or depth information) and then finds specific patterns in these. Antialiasing is achieved by blending pixels on the borders intelligently. MLAA has implementations in DirectX 10 and MonoGame (XNA). Games such as Fable II use it faithfully. From the creators of MLAA comes SMAA, or Enhanced Subpixel Morphological Antialiasing, which is the main point of our post.

MLAA in action

Enhanced Subpixel Morphological Antialiasing (SMAA)

 

Comparison between SMAA and other methods in Crysis 2

SMAA offers reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. Besides, SMAA doesn’t change the shape of the geometry, as many other methods do.

 

Top) No AA

Middle) MLAA

Bottom) SMAA

SMAA builds on the MLAA pipeline, improving or redefining it at every step. In particular, edge detection is improved by using color information with local contrast adaptation for cleaner edges. It extends the number of patterns handled, for preservation of sharp geometric features and diagonals. And lastly, it shows how morphological antialiasing can be accurately combined with multisampling or supersampling and temporal reprojection.

Edge Detection

Edge detection is vital, because undetected edges remain aliased. On the other hand, too many filtered edges can reduce the quality of the antialiased image. Different information can be used to detect the edges: chroma, luma, depth, surface normals, or a combination of them. For four reasons, SMAA uses luma:

  1. Fewer artifacts.
  2. Luma is always visible.
  3. It can handle shading aliasing.
  4. And finally, it’s faster than chroma.
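The post doesn’t spell out how luma is obtained from RGB; a common choice (my assumption, not a quote from the paper) is the Rec. 709 weighting:

L = 0.2126R + 0.7152G + 0.0722B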

 

 

Left and Center: Other Methods of Edge Detection, causing red crossing and artifacts

Right: SMAA edges, clean as a whistle

Have this image in mind. Here’s how edge detection works: the final calculated value is a boolean, e_l^\prime, which marks a left edge boundary. The boolean value for the top edge is calculated similarly. The formulas are:

c_{max} = \max\left(c_l, c_r, c_b, c_t, c_{2l}\right)

e_l^\prime = e_l \wedge c_l > 0.5 \cdot c_{max}

All the c values are called contrast deltas.

Pattern Handling

SMAA pattern detection allows preserving sharp geometric features like corners, deals with diagonal patterns, and enables accurate distance searches.

Sharp Geometric Features: The re-vectorization of silhouettes in MLAA tends to round corners. To avoid this, SMAA makes the observation that crossing edges in contour lines have a maximum size of one pixel, whereas for sharp corners this length will most likely be longer. Thusly, SMAA fetches two-pixel-long crossing edges instead; this allows less aggressive processing of corners.

Diagonal Patterns: We introduce a novel diagonal pattern detection. It consists of the following steps:

  1. Search for the diagonal distances d_l and d_r to the left and the right of the diagonal line.
  2. Fetch the crossing edges e_1 and e_2.
  3. Use this input information, defining the specific diagonal pattern, to access the precomputed area texture, yielding the areas a_t and a_b.

If the diagonal pattern detection fails, then the orthogonal detection is triggered.

Accurate Distance Search: Key to pattern detection and classification is obtaining accurate edge distances (the lengths to both ends of the line). MLAA makes extensive use of hardware interpolation to speed up this process. Hardware bilinear filtering can be used as a way of fetching and encoding up to four different values with a single memory access. This linear interpolation of two binary values (that is, bilinear filtering along one axis), producing a single floating point value, is shown as:

f_x(b_1, b_2, x) = x \cdot b_1 + (1 - x) \cdot b_2

Where b_1 and b_2 are two binary values (either 0 or 1) and x is the interpolation value.
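To see why a single bilinear fetch can encode two edge values at once, take x = 0.25 as a worked example (the offset is mine, purely illustrative): f_x(b_1, b_2, 0.25) = 0.25 b_1 + 0.75 b_2, which evaluates to 0, 0.25, 0.75 or 1 for the four possible combinations of b_1 and b_2, so both binary values can be recovered from one filtered sample.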

Results

MLAA works with a single sample per pixel. This translates to subsampling, which makes it impossible to recover real subpixel features.

 

MLAA vs SMAA vs No AA

SMAA, however, works in the subpixel level. This results in:

  • Local contrast
  • Diagonal pattern detection
  • Sharp geometric features
  • Accurate searches

You can view all these in the following image, with these features compared to other methods. In fact, SMAA can produce results close to SSAA 16x.

 

The overhead produced by each of the solutions is negligible. In particular, local contrast adaptation is only 0.08ms, the sharp geometric features detection and accurate distance search take 0.01ms, and diagonals processing produces an overhead of 0.12ms. In short, SMAA is rather fast; not free, but far less resource-intensive than SSAA and MSAA, and more fruitful for its cost.

Well, thanks for reading the article! And thanks for the writers of the journal which I used for the majority of the article. I hope you guys have a good day, and also, go on reading scientific articles on your own. It’s simple, just head to libgen.io and search for what you like — not necessarily graphics programming. Read, read and read! Don’t watch too many Youtube tutorials, it kills your senses. I can’t stress that enough. I don’t intend to tell you what to do, these are all just suggestions. I’m currently studying Structured Computer Organization and enjoying it very, very much. I recommend it for everyone, even as bedtime reading.

Please, please, please tweet my blog, introduce it to your friends, and share it to people whom you want to enjoy life, and learn about graphics programming. Thank you, thanks a lot. Chubak out.

Radiosity: Hauntingly Beautiful

I saw the “Radiosity Maps” option in Cinema 4D many years ago, but I didn’t understand what it was. Now I understand, and I want you, faithful reader, to understand it as well. I’m basing this post on an old article I found on Libgen, a 1986 article by the trio of Donald P. Greenberg, Michael F. Cohen, and Kenneth E. Torrance. I hope they don’t mind me basing my blog post on their article from more than three decades ago!

We know that there are three components of light in shading: ambient, diffuse, and specular. In Phong shading, which we talked about before, a simple formula is used to simulate lighting in the given scene. This, however, is not sufficient. It is enough for real-time rendering, where performance is the main concern, but when time is not a concern, we can use more complex formulae to calculate the lighting in our scene.

In 1979, Whitted published a paper in which he explained ray tracing, a method for producing images of excellent quality.

 

Raytracing, a noble way for noble men.

However, the raytracing procedure is limited and can only model intra-environment reflections in the specular direction. Additionally, shadows always exhibit sharp boundaries, and, since raytracing is view-dependent, every new view requires a new pass.

This, however, is not true of the radiosity method. This method computes global illumination independently of the view.

At the 1984 SIGGRAPH convention, Cindy Goral and her colleagues exhibited a method that sparked a lot of interest and turned quite a few heads: derived from the field of heat transfer in thermal engineering, it was a new global illumination method based on energy equilibrium, and they called it the Radiosity Method.

 

The difference between direct illumination, i.e. Phong lighting, and radiosity is evident in the given picture: brighter, more natural light, color bleeding caused by diffuse interreflections, and softer shadows.

Two other papers were published in 1985 which introduced concepts such as hemi-cube which we’ll discuss later. For now, let’s talk about Radiosity, and what it exactly really is.

Radiosity Formulae

The radiosity method describes energy equilibrium inside an enclosure. The light leaving a surface (its radiosity) consists of self-emitted light and reflected or transmitted incident light. The amount of light arriving at a surface requires a complete specification of the geometric relationships among all reflecting and transmitting surfaces, along with the light leaving every other surface. The formula for this relationship is:

Formula 1: Radiosity essence

B_i = E_i + \rho_i \sum_{j=1}^{N} B_j F_{ij}, \quad i = 1, \dots, N

Factors and coefficients are as follows:

Radiosity (B): The total rate of energy leaving a surface.

Emission (E): The rate of energy (light) emitted from a surface

Reflectivity (\rho): The fraction of incident light which is reflected back into the environment

Form-factor (F): The fraction of the energy leaving one surface which lands on another surface.

N: The number of discrete surfaces or patches.

What does this equation state, you may ask? Well, from this equation we understand that the amount of energy leaving a particular surface is equal to the self-emitted light plus the reflected light. The reflected light is equal to the light leaving every other surface, multiplied by both the fraction of that light which reaches the surface in question and the reflectivity of the receiving surface.
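The equation above is a linear system, so once the form factors are known it can be relaxed iteratively. This is only a sketch of that idea in code, a plain Jacobi-style iteration of my own, not the solution method from the 1986 article:

#include <cstddef>
#include <vector>

// One gathering pass per iteration: B_i = E_i + rho_i * sum_j B_j * F_ij.
// E, rho and F are assumed to be filled in already (F is N*N, row-major).
std::vector<double> solveRadiosity(const std::vector<double>& E,
                                   const std::vector<double>& rho,
                                   const std::vector<double>& F,
                                   std::size_t N, int iterations)
{
    std::vector<double> B = E;          // start with the self-emitted light
    std::vector<double> next(N, 0.0);

    for (int it = 0; it < iterations; ++it) {
        for (std::size_t i = 0; i < N; ++i) {
            double gathered = 0.0;
            for (std::size_t j = 0; j < N; ++j)
                gathered += B[j] * F[i * N + j];
            next[i] = E[i] + rho[i] * gathered;
        }
        B.swap(next);
    }
    return B;                            // radiosity of each patch
}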

We said that the form-factor is the fraction of the energy leaving one surface which lands on another surface. The formula for this is:

Formula 2: Form-factor essence

F_{ij} = \frac{1}{A_i} \int_{A_i} \int_{A_j} \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, HID_{ij} \, dA_j \, dA_i

Factors and coefficients can be seen in this figure:

 

Also, HID_{ij} is a visibility term that accounts for whether dA_j can be seen from dA_i; hidden surfaces contribute nothing.

The Hemi-Cube Method

We talked about the hemi-cube, but what is it? The hemi-cube algorithm provides a numerical integration technique (you know, integrals) for evaluating Formula 2. Instead of projecting onto a hemisphere, which is the classical construction, an imaginary half-cube is constructed around the center of the receiving patch. The environment is transformed to set the patch’s center at the origin, with the patch’s normal coinciding with the positive Z-axis (in other words, the tangent space!).

 

Then, the hemi-cube is divided into an orthogonal mesh of pixels (the article puts “pixels” in quotes, as it was a novel word at the time!) at any desired resolution.

 

The Hemi-Cube divided into these so-called pixels… Witchcraft!

Ipso facto, the total value of the form-factor from the patch at the center of the hemi-cube to any patch j can be determined by summing over the pixels that j projects onto.
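Written out (my notation, not the article’s), each hemi-cube pixel q carries a precomputed delta form-factor \Delta F_q, and the patch-to-patch form factor is approximated by summing the deltas of the pixels patch j covers:

F_{ij} \approx \sum_{q \, \in \, \text{pixels covered by } j} \Delta F_q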

Radiosity… GI?

As we’ve learned, radiosity is a subset of GI, but it’s different from raytracing. Today, most 3D applications use Monte Carlo methods for their GI engines, which retain some aspects of radiosity. Cinema 4D is an example: it used to call its engine Radiosity, then renamed it to GI, so people, mostly C4D users, think GI and radiosity are different things.

Well, that is it! Dear reader, I appreciate your keen interest in my blog. Please leave a comment. If I’ve made an error, please alert me. I’m not perfect, I make mistakes.

Another thing. Please, if you’ve liked my blog, tweet it to your friends so they can enjoy it as well. Even if it marks you as the nerd of the group, please do it, chicks dig nerds these days. If you have a girlfriend, just take some MDMA and read my blog to her out loud, she’ll return the favor, I’d rather not say how. Besides tweeting, you can reddit my blog. I post my blog in all the relevant subreddits, but I may miss one or two, or a million. So please, help me grow my blog’s readership. Thanks, Chubak.

What is a shadow, even?

Well, let me start off by saying that I love shadows more than light, and I have chosen the one room in the house where every being of the room occludes light. Occludes light… Hmm… How do we know what’s behind the light frustum, and what’s in front of the light frustum, or frusta?

 

Let us scrutinize shadows first. What are shadows? When you shine a light on an object with a wall behind it, what happens?

Look at this picture:

 

Umbra: Full, hard-edged shadow.

Penumbra: Half, soft-edged shadow.

What happens if we take the light source further and further away, until the light frustum becomes so wide that an umbra does not seem feasible anymore? Then the umbra turns into an antumbra, where different penumbras meet.

Shadow Mapping

The concept was popularized by Lance Williams in 1978. Lance Williams is also the person behind mip-mapping, which we’ll discuss in another post, moon-god-willingly. So what is shadow mapping? It’s by far the most feasible real-time shadowing solution, as raytracing is rather resource-intensive and shadow volumes are not suitable for real-time rendering.

 

Such a happy man!

Native API support is also another reason to choose shadow mapping. GLSL, for example, has a native sampler type for receiving the shadow map as a uniform, which we’ll see.

But what is shadow mapping?

Imagine the angle of the light is less than 180 degrees. If we choose the light’s position as the camera position, the light’s direction as the camera direction, and render the scene from that point of view, then the resulting depth buffer is our shadow map.

 

Left: Light Frustum Shone on the Scene | Right: The Resulting Framebuffer becomes our shadow map
Depth as seen from a light

The moment of truth

We said all those things, and we showed you the result, but how exactly do we know that an object occludes light at a certain position? Let’s see what happens when it doesn’t and then we’ll see what happens when it does so.

Imagine this spot on our Utah Teapot:

 

Imagine Z_{sm} is the depth value stored in the shadow map for this direction, i.e. the depth of the nearest point on the teapot as seen from the light. And imagine Z_{ls} is the depth of the point we are currently lighting, measured from the light source.

 

There will not be a shadow in this case, because Z_{ls} = Z_{sm}: the point being lit is the very point the light sees. However, imagine the point being lit lies behind the teapot:

 

In this case, Z_{ls} > Z_{sm}, so this point is definitely in the shadow, and since each point is a fragment, this fragment should be shaded In an Octopus’ Garden in the Shade!

Again, I’m burying the lede, and I’m sorry for that. Octopus’ Garden in the Shade has nothing to do with shaders! Anyways, we must factor in a bias when we compare the depths. Because if we don’t, as you see in the photo, there will be surface acne.
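One common way to apply that bias is OpenGL’s polygon offset, enabled while rendering the depth pass into the shadow map; the factor and units below are typical starting values, not numbers from the Superbible:

// While rendering the scene from the light's point of view:
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 4.0f);   // push depths slightly away from the light

// ...draw the shadow casters into the depth texture...

glDisable(GL_POLYGON_OFFSET_FILL);  // don't offset the normal render pass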

In a scene with dueling frusta, i.e. more than one light source and shadow map, the shadow maps’ sizes shouldn’t differ from each other; otherwise the lower-resolution shadows will end up pixelated by comparison.

 

Shadow Map: Pros vs. Cons

Shadow Map Pros are:

  1. No matter how heavy they may look, they are still better for real-time rendering than the alternatives (e.g. RayTracing).
  2. You can adjust the size of the texture which holds the depth data, ultimately, add it as an option to your game, or demo.

 And the Cons are:

  1. Aliasing occurs in low-res textures.
  2. Textures are heavy and occupy a lot of memory.
  3. Effects of self-occlusion may be visible in the output as sparkling. This can be fixed by polygon offset.

 Finally, let’s implement it in the code using OpenGL.

OpenGL Implementation

With all that said, how exactly is this implemented in OpenGL? Here’s how. The following code is taken from OpenGL Superbible 7th Edition.

Listing 1: First, we create the shadow depth buffer.

 

GLuint shadow_buffer;
GLuint shadow_tex;

glGenFramebuffers(1, &shadow_buffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadow_buffer);

glGenTextures(1, &shadow_tex);
glBindTexture(GL_TEXTURE_2D, shadow_tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32,
               DEPTH_TEX_WIDTH, DEPTH_TEX_HEIGHT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
                GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                     shadow_tex, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

Listing 2: Then, we create model-view-projection matrices, but this time, instead of the camera, we use the light!

 

vmath::mat4 model_matrix = vmath::rotate(currentTime, 0.0f, 1.0f, 0.0f);
vmath::mat4 light_view_matrix =
    vmath::lookat(light_pos,
                  vmath::vec3(0.0f),
                  vmath::vec3(0.0f, 1.0f, 0.0f));
vmath::mat4 light_proj_matrix =
   vmath::frustum(-1.0f, 1.0f, -1.0f, 1.0f,
                  1.0f, 1000.0f);
vmath::mat4 light_mvp_matrix = light_proj_matrix *
                               light_view_matrix * model_matrix;

Listing 3: Thusly, we generate a shadow matrix.

const vmath::mat4 scale_bias_matrix =
     vmath::mat4(vmath::vec4(0.5f, 0.0f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.5f, 0.0f, 0.0f),
                 vmath::vec4(0.0f, 0.0f, 0.5f, 0.0f),
                 vmath::vec4(0.5f, 0.5f, 0.5f, 1.0f));
vmath::mat4 shadow_matrix = scale_bias_matrix *
                            light_proj_matrix *
                            light_view_matrix *
                            model_matrix;

Listing 4: Nothing can be done without shaders, so we implement the vertex shader for this shadow. Not much different from any other vertex shader.

#version 420 core

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
uniform mat4 shadow_matrix;
layout (location = 0) in vec4 position;

out VS_OUT
{
    vec4 shadow_coord;
} vs_out;

void main(void)
{
    gl_Position = proj_matrix * mv_matrix * position;
    vs_out.shadow_coord = shadow_matrix * position;
}

As you can see, we’ve got two model-view-projection matrices: one we use for ourselves, and one (the shadow matrix) we pass to the fragment shader to see whether each fragment is in the shadow or in the light.

Listing 5: the fragment shader of it all.

#version 420 core

layout (location = 0) out vec4 color;

layout (binding = 0) uniform sampler2DShadow shadow_tex;

in VS_OUT
{
    vec4 shadow_coord;
} fs_in;

void main(void)
{
    color = textureProj(shadow_tex, fs_in.shadow_coord) * vec4(1.0);
}

That’s it! Notice something that we’ve never used before on this blog: sampler2DShadow, a sampler type made for hardware depth comparisons, which is exactly what textureProj is doing for us here. This is why I say the Superbible is better than LearnOpenGL.com. It’s up to you: do you want something outdated written by a fan, or something up-to-date written by the ARB? Your choice!

 

Well, that is it for today! I might make another post, but it would be for the Europeans, because by the time I post my second entry of today, Americans will be asleep. A lot of people have visited my blog; this blog is not far from generating profit. So thank you! Don’t forget to leave a comment. I love each and every one of you, if that’s not creepy. Really. I love everyone who reads the drivel I write. This gives me a feeling of self-importance. Anyways, before we have engaged in coitus, goodbye for now!

This will be a very short entry, as I’ve already made a post, so forgive me for the brevity of this particular parcel.

The zest that drives me in the world of graphics programming is being able to create an application like Adobe’s After Effects. I have, in the past, had a very fun time with this software, of which I am fond. But in order to achieve such a feat, one must first release a plugin or two for the software. Me, I plan on writing a book on the subject. But that’s far into the distance.

But there are many blockades on the road which must be dealt with. The first one is the fact that my Visual Studio can’t debug the damn plugins. I haven’t written any yet, but I can’t get the Visual Studio debugger to work: IntelliTrace won’t show the value of any of the variables, even if I engage the plugin’s PiPL entry point and set many breakpoints, even in the wretched About() function. Therefore, I couldn’t have written any plugins even if I wanted to, as blindly compiling the .aex files back and forth is simply maddening.

I have many other problems as well, but they can be solved by reading the manual. And I’m reading it thoroughly, and carefully. Let me explain how this API works:

  1. Parameters
  2. Flags
  3. Suites

Parameters are the UI elements which you deal with in a plugin. They’re few, but applicable. Flags are orders which the plugin gives to the application. And suites are collections of functions that make up the API’s functional libraries.

For example, here’s the render function of the Skeleton example:

 

 

static PF_Err 
Render (
	PF_InData		*in_data,
	PF_OutData		*out_data,
	PF_ParamDef		*params[],
	PF_LayerDef		*output )
{
	PF_Err				err		= PF_Err_NONE;
	AEGP_SuiteHandler	suites(in_data->pica_basicP);

	/*	Put interesting code here. */
	GainInfo			giP;
	AEFX_CLR_STRUCT(giP);
	A_long				linesL	= 0;

	linesL 		= output->extent_hint.bottom - output->extent_hint.top;
	giP.gainF 	= params[SKELETON_GAIN]->u.fs_d.value;
	
	if (PF_WORLD_IS_DEEP(output)){
		ERR(suites.Iterate16Suite1()->iterate(	in_data,
												0,								// progress base
												linesL,							// progress final
												&params[SKELETON_INPUT]->u.ld,	// src 
												NULL,							// area - null for all pixels
												(void*)&giP,					// refcon - your custom data pointer
												MySimpleGainFunc16,				// pixel function pointer
												output));
	} else {
		ERR(suites.Iterate8Suite1()->iterate(	in_data,
												0,								// progress base
												linesL,							// progress final
												&params[SKELETON_INPUT]->u.ld,	// src 
												NULL,							// area - null for all pixels
												(void*)&giP,					// refcon - your custom data pointer
												MySimpleGainFunc8,				// pixel function pointer
												output));	
	}

	return err;
}

The function takes in_data and out_data, which are the incoming data and the data we generate, the params array, which holds the UI parameters, and finally a PF_LayerDef, which is the layer the effect has been applied to.

First, it creates a GainInfo variable which will serve as our reference construct, does some calculations on it, and then passes it as the refcon to our two functions, one for 8-bit and one for 16-bit pixels. They, in turn, cast the refcon back to its concrete type, do some calculations with it, and carry out the operation.

This is the basis of how AE plugins do what they do. Of course, there are other plugin types, such as AEGP (which Element 3D falls under). Element 3D, for example, uses the Assimp model-loading library to load models, then dumps the framebuffer into an image and passes it into After Effects.

That is it for the moment. I hope you’ve enjoyed this post. I will keep you updated on my exploits. And my first plugin will definitely be free.

I started developing games at the young age of sixteen. It’s been ten years now, and the landscape of indie and hobbyist game development has changed since then. Back then, Unity was still young, and not as popular as it is today. Do you know how popular Unity is today? The magnitude of Unity’s popularity can be seen in Global Game Jam 2019’s technology stats, which I will plot here:

 

Global Game Jam 2019 Technology Stats. A whopping 74% used Unity!

What can be concluded from this chart, but the fact that Unity is rather popular? And its popularity is well-deserving. What astounds me is that 28 teams have used Processing. If you don’t know what Processing is, it’s a Java library/stand-alone package which is mainly used for Creative Programming, something cooked up by people who had too much time on their hands. But funnily enough, there’s one feature in Processing which I admire, and that’s filtering the image based on a given fragment shader. But it is, in no way, suitable for making games! For God’s Sake, if you want something simple, Why Don’t You Ask Evans?

 

I know I am burying the lede here, and people have told me not to before, but I want to enjoy writing these blog posts, and nothing’s more enjoyable than talking about an author you admire. Why Didn’t They Ask Evans is a work of detective fiction by Dame Agatha Christie. I won’t spoil the book for you, but it’s about Bobby Jones and his girl toy Frankie. Bobby, whilst golfing, stumbles upon a near-dead man on the rocks. The man mumbles “Why didn’t they ask Evans?” This book is about Occam’s Razor: usually, the simplest answer to something is the best answer. So if you want to make a game, and don’t want to use Unity, what is the simplest answer? Grist to our mill, Godot Engine (pronounced Gow-Dow)!

Only a couple of people had used this engine in this year’s Game Jam, and it was a point of pride for them. They called themselves “hipsters of the game developing world”. But let’s not fret, Godot is becoming more and more popular, and by this time next year, a lot more people will be using it. I wish to do my part in introducing this wonderful engine to the game-making public. It’s not great for 3D games, but it’s awesome for 2D games.

Godot is only 18 megabytes, and can be downloaded from here. Along with the engine, download the Export Templates, and also, if you’re a Blender man, the Better Collada Exporter, which improves the exporting of .dae files from Blender. You know what the best thing about Godot is? It’s free, or as Richard Stallman puts it, free as in freedom. There are many free engines around, but certainly, Godot is head and shoulders above the rest.

 Extract Godot somewhere where you keep your game development files. Start a new project, or download a template. Here, I have downloaded a template called “Platformer 2D”.

 

On the top-left you see the FileSystem file browser. I don’t use this function, I drag and drop my files onto the…

On bottom-left you see your project files. Icon.png is what Godot displays in the project manager. It’s the identity of your project! So use a good picture.

The bar on the top navigates between your 2D, 3D, and code views. Also AssetLib, which is Godot’s version of the Asset Store.

Below that is your level editor.

On the top-right you’ll see the scene nodes. Godot’s node system is very intuitive, and we’ll introduce some of the nodes later on. Tabbed next to the node system is the Import settings.

On the bottom-right you’ll see the Inspector. Here you, for example, assign a texture to a sprite, set a music track to loop, or create a particle system. Everything done here can also be done in the code section. Tabbed next to the Inspector is the Node tab, which comprises Signals and Groups.

Okay, now let’s see what goes into the making of Godot Engine. Hit Help->About and look at the third party licenses. Here we see that Godot uses zlib, curl, TinyEXR, NanoSVG, GLAD, and most importantly, SDL, amongst so many other things. Truly, a pinnacle of FOSS development. Just 10 years ago, this would have been an impossible feat to achieve. But thanks to many OSS projects, and GitHub, today we have Godot. Thanks, RMS! You are truly the man who eats foot cheese, but your efforts have also given us so many wonderful things.

Godot uses OpenGL, with GLAD as its OpenGL function loader (I know it’s not important, but for me, these things are exciting). Currently, it doesn’t have an official .gitignore, but there’s an unofficial one. If you wish to create a repository for your game, make sure you save your scenes and resources with the text extensions (.tscn and .tres, note the “t”); otherwise they will be binary, and completely unsuitable for a version control system like GitHub.

Let’s take a look at some of Godot’s nodes:

 

  • Every node has a parent. In a 2D game, most of the nodes you use inherit from Node2D.
  • Every node can have as many children as it wants. Usually, an Area 2D node has a Sprite 2D node, and a script attached to it.
  • Particles 2D generates a 2D particle system, probably using textured OpenGL points. I must do a tutorial on them one day.
  • Path 2D, gives a path to the parent node it’s attached to.
  • RayCast 2D, it casts a ray in the 2D space and if it hits somewhere, it alerts the parent node.
  • Polygon 2D, a 2D polygon.
  • Sprite, one of the most-used of the Node2D nodes. It’s usually attached to a Kinematic, Static, or an Area 2D object.
  • TileMap, a set of tiles.

As I said, nodes are very intuitive.

Now, let’s take a look at Godot’s scripting language, called GDScript, which is very similar to Python. You can also use C# if you have downloaded the Mono version:

extends Node

var lives = 4
var coins = 0
var punto

func _ready():
	self.pause_mode = PAUSE_MODE_PROCESS

func _process(delta):
	# toggle pausing with the Escape/cancel action
	if Input.is_action_just_pressed("ui_cancel"):
		if get_tree().paused == false:
			get_tree().paused = true
		else:
			get_tree().paused = false
	
	# out of lives: "Perdiste" means "You lost"
	if lives == 0 and punto == null:
		print("Perdiste")
		get_tree().quit()

The _ready() function is akin to Unity’s Start(), and Godot has two functions which can be equated to Unity’s Update(). The first one is _process(), which is the normal update; the next one is _physics_process(), which is used for synchronization with the physics engine. Also, as you can see, the delta time is passed to the function as a parameter, something which all engines should do!

So why do I say Godot is Grist to Our Mill? Because for far too long, before or after Unity became popular, we relied on tools that simply weren’t up to it. Tools that were buggy, run-down, or simply wrong (looking at you, Processing!). Godot is free, Godot is ever-changing and Godot is ever-wonderful. It’s still in development, but you can always rely on it to make you a good game, free of charge, with all the features intact.

Back in 2016, when I first started out with Godot, there were not that many tutorials around. But these days it’s just a matter of Google search to access the best of Godot tutorials. And if you like books, you can always buy Godot Engine Game Development in 24 Hours which is how I learned Godot. And you can always ask /r/Godot. Q&A is also always around to answer your questions.

 

 

Well, that is it for today’s post! You see, I, too, can post about game engines, and my posts are not always either about weird Python scripts that I’ve written, or OpenGL. Thanks, and have a nice day!

Color be a harsh mistress. Aristotle developed the first color theory. He believed that colors were celestial rays sent by the gods for humans to perceive, based on the four elements of fire, earth, air and water.


Aristotle/Da Vinci Color Theory

Two hundred years before Newton cracked the color spectrum, artist and heartthrob Leonardo of the planet Vinci theorized that whilst philosophers perceive white as the “receiver” and black as the “absence of color”, both are important. He later developed his color theory based on the six colors that you see in the image above. Imagine colors are Boolean values: white is True, black is False. If True is 1 and False is 0, blue is 0.5. That’s very close to how we perceive colors today.

Meanwhile, Persian painters who came before Da Vinci and Newton gave color theory an existentialist twist. Green was the color of nobility. Blue was the color of paradise. Red was the color of passion. And so on.

 

A Persian miniature. Notice how Joseph, son of Israel, is green, whilst Potiphar’s wife is red.

In 1666, Sir Isaac Newton, using two prisms, generated what we today know as the White Light Color Spectrum.


White Light Color Spectrum

Light is not quite a particle, and not quite a wave either: light is made up of photons. Photons carry energy, and a prism splits that energy into seven colors. That’s just white light. If we heat up hydrogen until it emits light and split that light, we get a four-line spectrum called an emission spectrum.


Shades of Color

In real life, all the colors in existence are made up of three colors: red, green, and yellow. Imagine we have a base color, and we want to increase the red-green-yellow values incrementally until we get a shade. If we increase the value of all three, we’ll get a random shade of color. But if we increase them based on a state, if we treat this as a 3! state machine, we’ll get six conditions. If the condition calls for a red-green shade, we increase the red and green values without touching yellow. And so on.

In computers, yellow is blue. But that’s just one of the colorspaces we use in computers. We have CMYK for print. HSL, HSV, YIQ and so on. In this program, we’ll use HSV and RGB. You’ll see.


Complementary Colors

Newton also discovered that if we create a circular spectrum, the opposite colors complement each other. Boutet created the following color wheel based on this discovery:


Boutet’s Color Wheel

I talked about RGB complementary colors in this post, so I’ll cut it short. Just know that in this program, once we generate the base color, we invert it and generate a shade of six colors based on the opposite color. You’ll see. Let’s continue.


Ranginak Color Generator

Ranginak (meaning “small color shade” in Persian) is a Python script that generates three six-color shades and three original colors.

The source can be found here, also, you can learn about it in this post.

The first shade is used for the background. The second shade is used for the mid-ground. The third shade, which is the most saturated, is used for the foreground.


Zoom in to read the RGB values

How does it work? Let’s start. This code requires Gizeh, which you can install by:


pip install gizeh

It also uses colorsys, which is a built-in Python library. Colorsys converts between color systems. We’ll only use it once in this code. We also make use of random and time.


import gizeh as gz
import colorsys
import random
import time

We then write our first function.

def generate_color(s_min, s_max):
    random.seed(time.time())
    h = random.randint(0, 100) / 100
    random.seed(time.time())
    v = random.randint(0, 100) / 100
    random.seed(time.time())
    s = random.randint(s_min, s_max) / 100
    color = colorsys.hsv_to_rgb(h, s, v)

    return color

s_min and s_max are the minimum and maximum saturation, respectively. We fix the seed so that each time we call the function, randint() generates a predictable number; otherwise it would go haywire and produce completely unrelated colors. You’ll see why seed() is important in the next function. The function then converts the HSV values to RGB using colorsys, packs them into a tuple, and returns it.

Which we’ll write now:

def generate_color_master():
    color_master = []


    color_master.append(generate_color(1, 33))
    color_master.append(generate_color(33, 66))
    color_master.append(generate_color(66, 100))

    return color_master

color_master is a list that contains three color tuples: one for the background, one for the mid-ground, and one for the foreground. We vary s_min and s_max to get a less saturated color for the background and more saturated colors for the mid-ground and foreground.
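If you’re curious what this returns, a quick, purely illustrative check could look like this (the labels are mine):

# Hypothetical sanity check: print the three generated RGB tuples.
colors = generate_color_master()
for name, c in zip(["background", "mid-ground", "foreground"], colors):
    print(name, tuple(round(channel, 3) for channel in c))

Next up, the invert() function: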

def invert():
    inverted = []
    colors = generate_color_master()

    for color_tuple in colors:
        r = 1 - color_tuple[0]
        g = 1  - color_tuple[1]
        b = 1 - color_tuple[2]

        inverted.append((r, g, b))

    return inverted

The invert() function above creates the complementary color of each main color and returns them in a list. Now, the function that does the heavy lifting:

def generate_shade_color(r, g, b, color_tuple):
    new_color = 0

    addition_r = (random.randint(1, random.randint(5, 9)) / 10) * r
    addition_g = (random.randint(1, random.randint(5, 9)) / 10) * g
    addition_b = (random.randint(1, random.randint(5, 9)) / 10) * b

    new_r = 0
    new_g = 0
    new_b = 0

    if r == 0:
        new_r = color_tuple[0] * 255
        new_g = color_tuple[1] + addition_g * 255
        new_b = color_tuple[2] + addition_b * 255
    elif g == 0:
        new_g = color_tuple[1] * 255
        new_r = color_tuple[0] + addition_r * 255
        new_b = color_tuple[2] + addition_b * 255
    elif b == 0:
        new_b = color_tuple[2] * 255
        new_g = color_tuple[1] + addition_g * 255
        new_r = color_tuple[0] + addition_r * 255


    # Clamp any channel that went past 255 back down to 1.0.
    if int(new_r) <= 255 and int(new_g) <= 255 and int(new_b) <= 255:
        new_color = (new_r / 255, new_g / 255, new_b / 255)
    elif int(new_r) > 255:
        new_color = (1.00, new_g / 255, new_b / 255)
    elif int(new_g) > 255:
        new_color = (new_r / 255, 1.00, new_b / 255)
    elif int(new_b) > 255:
        new_color = (new_r / 255, new_g / 255, 1.00)



    return new_color

Because it might get complicated, let me explain what it does in list form:

1- r, g, and b are binary coefficients. If one of them is 0, that channel won’t change in our shade. If we want to disable, say, r in our shade, we pass r as 0 and the other two as 1, and so on. color_tuple is the main color we wish to create a shade from (see the quick example after this list).

2- addition_[channel] are random numbers between 0.1 and 0.9 that we add to the main color’s respective channel in order to create an increasing shade. Note that we multiply each by its coefficient, so it becomes 0 when the coefficient is 0.

3- We multiply the color values by 255 so we have an easier time checking whether they’re out of bounds.

4- We check whether the color is out of bounds. If it is, we clamp that channel to 1, the maximum value.

5- We divide the color by 255 again and return the new color.
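Here’s the quick example I promised: a hypothetical call, with a base tuple made up purely for illustration.

# r = 0 disables the red channel; g = b = 1 let green and blue grow.
base = (0.2, 0.4, 0.6)                       # example color, not from the script
shade = generate_shade_color(0, 1, 1, base)
print(shade)                                 # a new (r, g, b) tuple in the 0..1 range

Next, we use this function to build the three layers: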

def generate_shade(r, g, b):
    colors = invert()
    bg = []
    mg = []
    fg = []

    for i in range(6):
        bg.append(generate_shade_color(r, g, b, colors[0]))
        mg.append(generate_shade_color(r, g, b, colors[1]))
        fg.append(generate_shade_color(r, g, b, colors[2]))


    return [bg, mg, fg]

In this function, we create six colors for each layer.

Now, we get to the drawing part using Gizeh.

rect_w = 500
rect_h = 500

def generate_surface():
    surface = gz.Surface(width=int(rect_w * 7), height=int(rect_h * 3))
    return surface

We create a 3500*1500 surface and return it, since the top-level code needs it to draw on.

def draw_sqr(color, x, y):
    sqr = gz.square(l=500, fill=color, xy=(x, y))


    r = int(color[0] * 255)
    g = int(color[1] * 255)
    b = int(color[2] * 255)
    string = "(" + str(r) + ", " + str(g) + ", " + str(b) + ")"
    text2 = gz.text(string, fontfamily="Tahoma", fontsize=24, fill=(0, 0, 0), xy=(x + 20, y + 20))
    text3 = gz.text(string, fontfamily="Tahoma", fontsize=23, fill=(1, 1, 1), xy=(x + 20, y + 20))

    return gz.Group([sqr, text2, text3])

This function generates a square filled with color, plus a text label showing the color’s RGB value.
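As a quick aside, you could test it on its own with something like this; the color and the file name are made up for illustration:

# Draw a single labeled swatch on a fresh surface.
surface = generate_surface()
swatch = draw_sqr((0.8, 0.3, 0.1), 250, 250)
swatch.draw(surface)
surface.write_to_png("single_swatch.png")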

Now, the main function.

def main_func(r, g, b):
    colors = generate_shade(r, g, b)
    original_color = generate_color_master()
    items = []

    bg = colors[0]
    mg = colors[1]
    fg = colors[2]

    items.append(draw_sqr(bg[0], 250, 250))
    items.append(draw_sqr(bg[1], 750, 250))
    items.append(draw_sqr(bg[2], 750 + 500, 250))
    items.append(draw_sqr(bg[3], 750 + 1000, 250))
    items.append(draw_sqr(bg[4], 750 + 1500, 250))
    items.append(draw_sqr(bg[5], 750 + 2000, 250))
    items.append(draw_sqr(original_color[0], 750 + 2500, 250))

    items.append(draw_sqr(mg[0], 250, 250 + 500))
    items.append(draw_sqr(mg[1], 750, 250 + 500))
    items.append(draw_sqr(mg[2], 750 + 500, 250 + 500))
    items.append(draw_sqr(mg[3], 750 + 1000, 250 + 500))
    items.append(draw_sqr(mg[4], 750 + 1500, 250 + 500))
    items.append(draw_sqr(mg[5], 750 + 2000, 250 + 500))
    items.append(draw_sqr(original_color[1], 750 + 2500, 250 + 500))

    items.append(draw_sqr(fg[0], 250, 250 + 1000))
    items.append(draw_sqr(fg[1], 750, 250 + 1000))
    items.append(draw_sqr(fg[2], 750 + 500, 250 + 1000))
    items.append(draw_sqr(fg[3], 750 + 1000, 250 + 1000))
    items.append(draw_sqr(fg[4], 750 + 1500, 250 + 1000))
    items.append(draw_sqr(fg[5], 750 + 2000, 250 + 1000))
    items.append(draw_sqr(original_color[2], 750 + 2500, 250 + 1000))


    return gz.Group(items)

First, we prepare our colors. Then we draw 21 squares of different colors. Don’t ask why I didn’t use a loop; it brings back bad, bad memories. We return everything as a Gizeh group. Now, near the end, we don’t need another function, just some top-level code:

if __name__ == "__main__":
    for i in range(12):
        group = main_func(0, 1, 1)
        surface = generate_surface()
        group.draw(surface)
        surface.write_to_png("shade_" + str(i) + ".png")

This code creates 12 images with a green-blue shade. To experiment, change g or b to 0 and change r to 1. Don’t generate heaps of images in one go; just change the output file name if you want more.
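For example, to get a red-blue shade instead, the call inside the loop would become (just a variation to try):

group = main_func(1, 0, 1)   # disable green, let red and blue grow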

Well, that’s it. I hope you enjoyed it. Remember, the code can be found here. For now, hope you’ll have colorful dreams!

Gizeh is a small Python library which you can install using pip. It doesn’t have any official documentation, so you’ll have to make do with the short documentation on its GitHub page. Before starting this tutorial, read the README file carefully, or you may get confused later on. Now, let’s see what we want to create: a procedural texture of a textile. Maybe denim, maybe another pattern, I don’t know; right now we can’t afford to be picky. The output is entirely randomized, and if you aren’t happy with the texture, you can just run the code again.


Our texture

Not happy? Let’s run the code again (I’m using PyCharm to write and run this code; setting up Python and a good IDE is beyond the scope of this tutorial):


But how? Simple:

1- A blue background.

2- 200 different groups of 20 to 100 circles each that increase in alpha as they decrease in radius. Colors range from rgb(200, 200, 200) to rgb(255, 255, 255).

3- Vertical and horizontal lines spaced together based on a coefficient of 0.01 and tilted by a random number between 1 and 8.

That’s it!

Let’s start. First, let’s setup a window, and a surface:


import gizeh as gz
import random

w = 640
h = 480

surface = gz.Surface(width=w, height=h, bg_color=(21/255, 96/255,189/255))

You can change w and h to change the width and the height. Now, let’s generate our radial gradient:


def circle_generator(x, y, r, c):
    i = r
    a = 0
    while i > 0:
        circle = gz.circle(r = i, xy=[x, y], fill=(c/255, c/255, c/255, a*0.70))
        i -= 1
        a += 1/2000

        circle.draw(surface)

Besides the location of the circles, the function accepts two other parameters: the radius, which decreases on each pass of the while loop, and the color, which we divide by 255 since in Gizeh RGB values are between 0 and 1. We increase the alpha (a) by adding 1/2000 to it in each iteration. At the end, we draw the circle.

Now, let’s make a line generator:


def generate_lines(x1, y1, x2, y2, s, a):
    line = gz.polyline(points=[(x1, y1), (x1, y1), (x2, y2), (x2, y2)], stroke_width=s,
                         stroke=(0, 0, 0, a/255), fill=(0, 0, 0, a/255))

    line.draw(surface)

This one just generates a single line; we’re actually drawing a polyline whose origin is (x1, y1) and which ends at (x2, y2). If we add a randomized number to the second pair of coordinates, we get a mangled shape, which will come in handy if we want to generate some other pattern. s is the stroke width, and a is the alpha. We set the fill to black, but it’s irrelevant, since the shape has no area to fill.
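As a small, purely hypothetical variation on that idea, jittering the end point gives you a rougher, hand-drawn weave. It reuses the same gz, random and surface from this script; the function name is my own:

def generate_jittered_line(x1, y1, x2, y2, s, a, jitter=10):
    # Push the end point around a little for a "mangled" look.
    dx = random.uniform(-jitter, jitter)
    dy = random.uniform(-jitter, jitter)
    line = gz.polyline(points=[(x1, y1), (x2 + dx, y2 + dy)], stroke_width=s,
                       stroke=(0, 0, 0, a / 255))
    line.draw(surface)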

Now, let’s draw 200 radial gradients on the screen:


for i in range(200):
    x = random.randint(0, w)
    y = random.randint(0, h)
    r = random.randint(20, 100)
    c = random.randint(200, 255)
    circle_generator(x, y, r, c)

We choose entirely random locations, a radius between 20 and 100, and a color between very light gray and white. Then we call the function.

Now, the lines!

s = random.randint(1, 3)
a = random.randint(50, 200)
coeff = 0.01

for i in range(w):
    tilt = random.randint(1, 8)
    generate_lines(w * (i * coeff), 0, w * (i * coeff) + tilt, h, s, a)



for i in range(h):
    tilt = random.randint(1, 8)
    generate_lines(0, h * (i * coeff), w, h  * (i * coeff) + tilt, s, a)

First, we choose a random opacity and a random stroke width, then set the coefficient to 0.01. You can experiment with other coefficient values, but be warned that if you randomize it, it may take you ten runs to get a decent pattern. In each iteration we create a new random number to use as a tilt. We then generate the vertical lines w times and the horizontal lines h times.

At the end, we save the file as a PNG. This is another reason to use PyCharm: it shows the picture and updates it in a separate window, which most other IDEs won’t do.


surface.write_to_png("only_bg.png")

Well, that’s it! You can experiment with it: use diagonal lines, create woven textiles, and so on. Hope you enjoyed it.

Let me christen this blog with a Processing tutorial.

I hereby promise to fill this blog with game assets that I myself have created, graphics-related tutorials (After Effects, Cinema 4D and Blender plugin development), Processing tutorials and scripts, and of course, game development tutorials! I don’t know much, but I’ll share what I know. And that’s how you get into heaven.

Chubak Bidpaa


And now, break the champagne bottle!

Anyways, in this tutorial I want to teach you how to find a color that matches the theme of your game, using the Processing scripting language. Processing is a Java-based library that aids artists and programmers in creating artwork. You can download it from here.


The Raw Power of Processing

Processing is a very powerful and useful language. It’s built on Java, so it’s very similar to its host language. It has OOP support, and each sketch has a setup() and a draw() function, which handle initialization and the main loop, respectively.

Let’s say your level has a red theme, and by red theme I mean the overall feel of the level, and you want to know which colors sit well beside our main color. So let’s start by setting up a window with a red background:


void setup() {
  size(400, 400);
  color c = color(255, 0, 0);
  background(c);
  noLoop();
}

This creates a 400*400 window with a red background. noLoop() comes into play later: once we write a draw() function, it makes sure draw() runs only once instead of looping.


Processing renders a 400*400 window with a red background

Now, to write the draw() function. It’ll be more complicated, but only by a small margin.


void draw(){
  
  float red = random(255);
  float green = random(255);
  float blue = random(255);
  color random_color = color(red, green, blue);
  
  textSize(16);
  text(red, 10, 350);
  text(green, 10, 370);
  text(blue, 10, 390);
  
  fill(random_color);
  rect(100, 100, 200, 200);  
  
}

First, we create three floating-point variables that each hold a random number between 0 and 255. Then we create a color object from them. We also print those numbers for later reference, so there’s a way to tell which RGB values make up our new color. We then set the fill to the new color and draw a rectangle in the middle of the screen (the origin is the top-left corner).


It has generated a green rectangle…

As we can see, the green rectangle doesn’t flatter the red background. So we discard it and generate a new color.


It has generated a purple color

As we can clearly see, purple does not go with red either. We’re not lucky today, are we?

Well, there’s no need to beat around the bush. Let’s make a hundred million colors!

In order to do so, add this line:


saveFrame("color_swatch########.png"); // or .jpg

At the end of the draw() function. Also, remove the noLoop() line from setup(). Before running the code, save the sketch in a separate folder; once you run it, the images will be saved in the folder where you saved the sketch. Browse through them, and maybe you’ll find your swatch!


In Part 2…

In Part 2 of this tutorial, I’ll teach you a bit about color theory and how to generate opposing colors.

For now, Semper Fi!