## Pseudocolor Image Processing in a Nutshell

First, let’s take care of the pleasantries. I’m ok. Are you ok? That’s fine. How’s the missus? Oh, she is thinking about suicide? No worries, just chuck a handful of Zoloft at her! How’s the kid? You don’t have a kid? Why? Because you take Risperidone? No comment!

I haven’t talked about DIP, for like, ever. I talked about DSP in one of my earlier posts, but that was mostly about DSP rather than DIP. So it’s time to dive into Gonzalez and Woods 4th edition, chapter 6, and learn about Pseudocolor Image Processing – something I always wanted to learn about. Let’s hope this is going to be a short post because I wanna watch muh Veronica Mars.

Pseudocolor image processing consists of assigning colors to gray values based on a specified criterion. The term pseudo or false color is used to differentiate the process of assigning colors to achromatic images from the process associated with true color images, a topic which I’m going to learn, and “teach” in the near future.

The principal use of pseudocolor is for human visualization and interpretation of grayscale events in a single image, or a sequence of images. But why do we want to convert a grayscale image to color? Are we all affected by the disease aptly called Ted Turneritus? No, we do this because humans can discern thousands of color shades, as compared to less than two dozen shades of gray.

The techniques of intensity slicing and color coding are the simplest and earliest examples of pseudocolor processing of digital images. If an image is interpreted as a 3D function, like this:

the method can be viewed as one of placing planes parallel to the coordinate plane of the image; each plane then “slices” the function in the area of intersection. In the image below you see an example of using a plane to slice the image intensity function into two levels.

If a different color is assigned to each side of the plane, any pixel whose intensity level is above the plane will be coded with one color, and any pixel below the plane will be coded with another. Levels that lie on the plane itself may be arbitrarily assigned one of the two colors, or they could be given a third color to highlight all the pixels at that level. The result is between two to three color images whose relative appearance can be controlled by moving the slice plane up and down in the intensity axis.
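The slicing rule above is simple enough to sketch in code. Here is a minimal, hypothetical Java version (the color choices and packed-int representation are my own assumptions, not the book’s):

```java
// Intensity slicing with a single plane: a hypothetical sketch of the
// rule described above, not code from Gonzalez and Woods.
public class IntensitySlicing {
    // Pack 8-bit channels into a single RGB int (0xRRGGBB).
    static int rgb(int r, int g, int b) { return (r << 16) | (g << 8) | b; }

    // Assign one of two colors depending on which side of the slicing
    // plane (at level `plane`) the gray value falls; levels exactly on
    // the plane get a third highlight color.
    static int slice(int gray, int plane, int below, int above, int onPlane) {
        if (gray < plane) return below;
        if (gray > plane) return above;
        return onPlane;
    }

    public static void main(String[] args) {
        int blue = rgb(0, 0, 255), red = rgb(255, 0, 0), yellow = rgb(255, 255, 0);
        System.out.println(slice(200, 128, blue, red, yellow)); // above the plane -> red
        System.out.println(slice(50, 128, blue, red, yellow));  // below the plane -> blue
    }
}
```

Moving the `plane` level up and down changes which pixels land in each color, exactly as described above.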

A simple but practical use of intensity slicing is shown below, with the grayscale image on the left and the pseudocolor image on the right:

This is an image of the Picker Thyroid Phantom, a radiation test pattern. Regions that appear to be of constant intensity in the grayscale image are actually quite variable, as shown by the various colors in the pseudocolor image. For instance, the left lobe is a dull gray in the grayscale image, but shows various shades of red, yellow and green in the pseudocolor image. Below you see the representation of one of the grayscale intervals and its slicing:

In the preceding simple example the grayscale was divided into intervals, and a different color was assigned to each, with no regard for the meaning of the gray levels in the image. Interest in that case was simply to view the different gray levels constituting the image. Intensity slicing assumes a much more meaningful and useful role when subdivision of the grayscale is based on the physical characteristics of the image. For instance, in this image:

We see an X-ray image of a weld, containing several cracks and porosities. When there is a porosity or crack in the weld, the full strength of the X-rays going through the object saturates the imaging sensor on the other side. Thus, intensity values of 255 (white, in an 8-bit image) coming from such a system automatically imply a problem with the weld. If human visual analysis is used to inspect welds, a simple color coding assigns one color to level 255 and another color to all other levels, helping the inspectors assess the problem better.

One of the things that interests meteorologists is measuring rainfall levels, especially in tropical regions of the Earth. The Tropical Rainfall Measuring Mission (TRMM) satellite utilizes, among other instruments, three sensors especially designed to detect rain: a precipitation radar, a microwave imager, and a visible and infrared scanner. The results from the various rain sensors are processed, resulting in estimates of average rainfall over a given time period in the area monitored by the sensors. From these estimates, it is not difficult to generate grayscale images whose intensity values correspond directly to the rainfall, with each pixel representing a physical land area whose size depends on the resolution of the sensors. Such an intensity image is shown below:

Visual examination of such a picture is prone to error. However, suppose that we code intensity levels from 0 to 255 using the colors shown here:

Which results in:

Thus making it easier for the scientists to gauge the rainfall levels.
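A color coding like this is essentially a lookup table over the 0–255 range. A hypothetical sketch in Java, interpolating between a few color “stops” (the stop colors here are my own choice for illustration, not the ones used for the rainfall image):

```java
// A hypothetical color lookup: linearly interpolate between a few color
// "stops" to map intensities 0..255 to a pseudocolor triple.
public class RainfallColormap {
    // Color stops (an assumption, not the book's palette):
    // blue -> green -> yellow -> red.
    static final int[][] STOPS = {
        {0, 0, 255}, {0, 255, 0}, {255, 255, 0}, {255, 0, 0}
    };

    static int[] map(int gray) {
        double pos = gray / 255.0 * (STOPS.length - 1); // position along the stops
        int i = Math.min((int) pos, STOPS.length - 2);  // index of the left stop
        double t = pos - i;                             // blend factor in [0, 1]
        int[] c = new int[3];
        for (int k = 0; k < 3; k++)
            c[k] = (int) Math.round(STOPS[i][k] * (1 - t) + STOPS[i + 1][k] * t);
        return c;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(map(0)));   // lowest rainfall -> blue
        System.out.println(java.util.Arrays.toString(map(255))); // highest rainfall -> red
    }
}
```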

There are other methods that are more general, and thus capable of achieving a wider range of pseudocolor enhancement results than the simple slicing techniques discussed in the preceding section. Below you see an approach that is particularly attractive. Basically, the idea underlying this approach is to perform three independent transformations on the intensity of each input level. The three results are then fed separately into the red, green, and blue channels of a color monitor. This method produces a composite image whose color content is modulated by the nature of the transformation functions.

Now, look at this image:

It shows two monochrome images of luggage obtained from an airport X-ray scanning system. The image on the left contains ordinary articles. The image on the right contains the same articles, as well as a block of simulated plastic explosives. The purpose of this example is to illustrate the use of intensity to color transformations to facilitate detection of explosives.

Look at this image:

It shows the transformation functions used. These sinusoidal functions contain regions of relatively constant value around the peaks, as well as regions that change rapidly in the valleys. Changing the phase and frequency of each sinusoid can emphasize ranges in the grayscale. For instance, if all three transformations have the same phase and frequency, the output is a grayscale image. A small change in the phase between the three transformations produces little change in pixels whose intensities correspond to peaks in the sinusoids, especially if the sinusoids have broad profiles (low frequencies). Pixels with intensity values in the steep sections of the sinusoids are assigned a much stronger color content as a result of significant differences between the amplitudes of the three sinusoids caused by the phase displacement between them.
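The sinusoidal transformation idea can be sketched in a few lines of Java. The frequency and phase values below are illustrative assumptions, not the book’s exact parameters:

```java
// Sinusoidal intensity-to-color transformations: one sinusoid per RGB
// channel, with independent phases. Parameters are my own illustrative
// choices, not the ones from the luggage example.
public class SinusoidalPseudocolor {
    // Map gray in [0, 255] through a sinusoid with the given frequency
    // (cycles over the grayscale) and phase, yielding a channel in [0, 255].
    static int channel(int gray, double freq, double phase) {
        double x = 2 * Math.PI * freq * gray / 255.0 + phase;
        return (int) Math.round(127.5 * (1 + Math.sin(x)));
    }

    static int[] transform(int gray, double freq,
                           double phaseR, double phaseG, double phaseB) {
        return new int[] {
            channel(gray, freq, phaseR),
            channel(gray, freq, phaseG),
            channel(gray, freq, phaseB)
        };
    }

    public static void main(String[] args) {
        // Identical phases: r == g == b, so the output stays grayscale.
        System.out.println(java.util.Arrays.toString(transform(100, 1.0, 0, 0, 0)));
        // Shifted phases: the channels diverge, producing color.
        System.out.println(java.util.Arrays.toString(transform(100, 1.0, 0, 0.5, 1.0)));
    }
}
```

With equal phases the three channels coincide and the result is gray; displacing the phases separates the channels most where the sinusoids are steep, matching the behavior described above.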

The pseudocolored image of the luggage was obtained using the transformations we just saw above on the left, which shows the gray-level bands corresponding to the explosive, garment bag, and background, respectively. Note that the explosive and background have quite different intensity levels, but they were both coded with approximately the same color as a result of the periodicity of the sine waves. The second pseudocolored image of the luggage was obtained with the transformation on the right. In this case, the explosives and garment bag intensity bands were mapped by similar transformations, and thus received essentially the same color assignment. Note that this mapping allows an observer to “see” through the explosives.

It is often in our interest to combine several grayscale images, and then transform them to pseudocolor. This approach is frequently used in multispectral image processing, where different sensors produce individual grayscale images, each in a different spectral band.

Ok, that shall be it for tonight! I will be back tomorrow with a post about machine learning… Deep learning, perhaps? I’m not sure. There’s a lot to learn. There’s a lot to process.

My brother broke my hookah, and beat up my mom. I think, just like me, he’s got bipolarity. But well, he’s a proud 5th semester psychology student and there’s no way in hell we could get him to take medication for his mental handicap. We get bipolarity from our father’s side, and he gets it from his mother’s side. Both my uncles are bipolar, my youngest brother has Lithium deficiency with a wee bit of bipolarity, but my middle brother is… Kinda fucked up. The symptoms are showing. I don’t know why he’s so proud. Our second cousin once removed, my father’s cousin, has bipolarity. So do both my uncles, and they’re being medicated for it. Not many women get bipolarity. It’s just not in the evolution of our species. We’re bipolar because nature wanted us to hibernate during the winter, but it fucked up, so based on seasonal changes, bipolar people like me and many people in my family get depressed, it gets harder and harder for them to step out of their dwelling, and after the depression passes, they become manic. It’s HELL.

Anyways, if you’re bipolar, I hope you’re getting the medication and the help you deserve. Many people in developing countries struggle to afford even the cheapest of psych meds, even Lithium. This is bad.

Well, to the books!

## Generating Tons of Complementary Colors with Processing

Well, this blog should be about Processing! But no, I’m just practicing game asset creation, as I wish to become a master of all trades (and jack of none). Therefore, I’m relaxing my programming muscles a bit with Processing while I’m doing graphics. And maybe, someday, music? I dunno.

So we’ve all heard of the RGB colorspace. Back in 1996, HP and Microsoft created the sRGB standard to make color reproduction consistent across monitors, printers, and the early web. Displays were becoming a lot more sophisticated, and less and less happened in monochrome and on teletypes. So sRGB was born.

RGB has three to four channels: Red, Green, Blue, and Alpha. Each has a value between 0 and 255. The higher we go, the lighter we get. There are two common ways of writing an RGB color, hex and decimal. For hex notation, each channel is converted to its hexadecimal representation and the results are put together in succession, such as (255, 255, 255) -> #FFFFFF.
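That decimal-to-hex conversion is a one-liner in Java (the class and method names here are just my own illustration):

```java
// Convert an RGB triple to hex notation, e.g. (255, 255, 255) -> #FFFFFF.
public class HexColor {
    static String rgbToHex(int r, int g, int b) {
        // %02X pads each channel to two uppercase hexadecimal digits.
        return String.format("#%02X%02X%02X", r, g, b);
    }

    public static void main(String[] args) {
        System.out.println(rgbToHex(255, 255, 255)); // #FFFFFF
        System.out.println(rgbToHex(255, 0, 128));   // #FF0080
    }
}
```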

On the sRGB color wheel, colors opposite each other go together like America and apple pie. Let me show you what I mean:

In fact, this is true in other colorspaces such as HSL, HSV and CMYK, and it’s not a new concept: by the 1810s, painters were already using complementary colors with the traditional RYB color model (in traditional painting, the primaries are red, yellow and blue, not red, green and blue!).

So, is there an algorithm to generate a complementary color? Of course. Imagine each color is an angle on the unit circle of the color wheel. The complementary color sits directly opposite, so we rotate the angle by PI (180 degrees). In 8-bit RGB terms, this works out to a simple rule: in order to generate the opposing color, we subtract each channel from 255.

So how can Processing help us choose the appropriate complementary color? By generating HUNDREDS and THOUSANDS of complementary colors in a jiffy. Let’s see how it works:

void setup() {
  size(500, 500);
}

void draw() {
  float r = random(255);
  float g = random(255);
  float b = random(255);
  color bgc = color(r, g, b);

  background(bgc);
  fill(255);
  textSize(16);
  text("Background: ", 20, height * 0.10);
  text(r, width * 0.5, 20);
  text(g, width * 0.5, 50);
  text(b, width * 0.5, 80);

  // Complement: subtract each channel from 255.
  float comp_r = 255 - r;
  float comp_g = 255 - g;
  float comp_b = 255 - b;
  color rc = color(comp_r, comp_g, comp_b);

  fill(rc);
  rect(width * 0.25, height * 0.25, 200, 200);

  fill(255);
  textSize(16);
  text("Complement: ", 20, height * 0.90);
  text(comp_r, width * 0.5, 420); // display the complement's channels,
  text(comp_g, width * 0.5, 450); // not the background's, for reference
  text(comp_b, width * 0.5, 480);

  saveFrame("complementary_colors#####.png");
}


In each draw() loop, we generate r, g, and b, and display that color as the background. Then we display the channel values for reference. Then we generate the complementary color and display it in a rectangle. You could do it with a circle too; depends on your taste.

Then, we save each frame in the folder where you’ve saved the .pde file, using saveFrame(). That’s it!

Well, hope you’ve enjoyed this tutorial. Next up: pixel art with Photoshop.

## Pixel Perfect Crosshair with Processing

Have you ever wondered how it is possible to create a pixel-perfect crosshair? Well, wonder no more. Processing is here.

Well, no time to lose. Let us start off by writing the setup() function.

void setup() {
  size(500, 500);
  noLoop();
}


It simply creates a 500*500 window, and calls noLoop() so the draw() function runs only once instead of looping indefinitely. Now, the draw function. Before that, save the sketch, so the image will be saved in the same folder as your code. Alright, let’s have a looksie.

void draw() {
  noFill();
  strokeWeight(10);
  arc(width / 2, height / 2, 300, 300, QUARTER_PI, PI);
  strokeWeight(8);
  arc(width / 2, height / 2, 250, 250, QUARTER_PI, PI);
  strokeWeight(6);
  arc(width / 2, height / 2, 200, 200, 0, QUARTER_PI);
  strokeWeight(4);
  arc(width / 2, height / 2, 200, 200, 0, 3);
  strokeWeight(6);
  arc(width / 2, height / 2, 250, 250, PI, TWO_PI);
  fill(0);
  ellipse(width / 2, height / 2, 30, 30);
  strokeWeight(12);
  // All four lines run from a quarter/three-quarter point to the center.
  line(width * 0.25, height * 0.75, width / 2, height / 2);
  line(width * 0.75, height * 0.75, width / 2, height / 2);
  line(width * 0.25, height * 0.25, width / 2, height / 2);
  line(width * 0.75, height * 0.25, width / 2, height / 2);

  saveFrame("crosshair.png");
}


noFill() disables the color fill, so we’ll only have strokes. strokeWeight(), which we use religiously, sets the thickness of the stroke in pixels. And arc() creates an arc. This is the main part of this short tutorial; the arc() function is defined as:

arc(position_x, position_y, width, height, start_angle, end_angle);


start_angle and end_angle are in radians. You can use QUARTER_PI (45 degrees), HALF_PI (90 degrees, perpendicular), PI (180 degrees), and TWO_PI (a full circle) like:

You can add these together to create other angles, such as 3*PI/4 (135 degrees, QUARTER_PI + HALF_PI) or 3*PI/2 (270 degrees, PI + HALF_PI).
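Since Processing’s angle constants are just fractions of PI, we can sanity-check the degree equivalents in plain Java (the constant definitions below mirror Processing’s, redeclared here so the snippet stands alone):

```java
// Processing's angle constants are fractions of PI; redeclared here so
// this plain-Java check is self-contained.
public class AngleConstants {
    static final double QUARTER_PI = Math.PI / 4; // 45 degrees
    static final double HALF_PI    = Math.PI / 2; // 90 degrees
    static final double TWO_PI     = Math.PI * 2; // 360 degrees

    public static void main(String[] args) {
        // Print the degree equivalents, including a summed angle.
        System.out.println(Math.toDegrees(QUARTER_PI));
        System.out.println(Math.toDegrees(QUARTER_PI + HALF_PI));
        System.out.println(Math.toDegrees(TWO_PI));
    }
}
```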

At the end, we add four lines like so:

line(x1, y1, x2, y2);


We use simple math to place the endpoints at the quarter and three-quarter marks of the screen. All x2s and y2s are the center of the screen.

At the end, we save the frame.

### Coming up next…

I have prepared a Processing and Illustrator tutorial for creating two types of planets. Stay tuned!

## Generating Color Scheme with Processing (Part 1)

Let me christen this blog through a Processing tutorial.

I hereby promise to fill this blog with game assets that I myself have created, graphics-related tutorials such as After Effects, Cinema 4D and Blender plugin development tutorials, Processing tutorials and scripts, and of course, game development tutorials! I don’t know much, but I’ll share what I know. And that’s how you get into heaven.

Chubak Bidpaa

And now, break the champagne bottle!

Anyways, in this tutorial, I want to teach you how to find a color that matches the theme of your game wonderfully, using the Processing language. Processing is a Java-based language and environment that aids artists and programmers in creating artwork. You can download it from here.

### The Raw Power of Processing

Processing is a very powerful language, and it’s very useful. Processing is built on Java, so it’s very similar to its parent language. It has OOP support, and every sketch has a setup() and a draw() function, which cater to initialization and the main loop, respectively.

Let’s say your level has a red theme, and by red theme I mean the overall feel of the level, and you want to know which colors go with your main color. So let’s start by setting up a window with a red background:

void setup() {
  size(400, 400);
  color c = color(255, 0, 0);
  background(c);
  noLoop();
}


This will create a 400*400 window with a red background. noLoop() comes into play later: when we write a draw() function, it’ll make sure that draw() does not loop.

Now, to write the draw function. It’ll be more complicated, but only by a small margin.

void draw() {
  float red = random(255);
  float green = random(255);
  float blue = random(255);
  color random_color = color(red, green, blue);

  textSize(16);
  text(red, 10, 350);
  text(green, 10, 370);
  text(blue, 10, 390);

  fill(random_color);
  rect(100, 100, 200, 200);
}


First, we create three floating-point variables that hold random numbers between 0 and 255. Then we create a color object from them. Then we print those numbers for later reference, as there must be a way to tell which RGB values make up our new color. We then set the fill to the new color and draw a rectangle in the middle of the screen (the origin point is top-left).

As we see, the green rectangle does not complement the red background. So we discard it and generate a new color.

As we can clearly see, purple does not go with red either. We’re not lucky today, are we?

Well, there’s no need to beat around the bush. Let’s make a hundred million colors!

In order to do so, add this line:

saveFrame("color_swatch########.png"); // or .jpg


at the end of the draw() function. Also, remove the noLoop() line from the setup() function. But before running the code, save the file in a separate folder. Once you run the code, it’ll save images in the folder where you’ve saved the file. Peruse them, and maybe you’ll find your swatch!

### In Part 2…

In Part 2 of this tutorial, I’ll teach you a bit about color theory and how to generate opposing colors.

For now, Semper Fi!