Friday, April 12, 2013

Graphics Blogs To Date - Links

Week 1 - http://thatgamedevahmedcg.blogspot.com/2013/01/week-1-polish-plans-for-gdw.html
Week 2 - http://thatgamedevahmedcg.blogspot.com/2013/02/week-2-hatching.html
Week 3 - http://thatgamedevahmedcg.blogspot.com/2013/03/week-3-bloom-shader.html
Week 4 - http://thatgamedevahmedcg.blogspot.com/2013/03/week-5-intensity-profiles.html
Week 5 - http://thatgamedevahmedcg.blogspot.com/2013/03/week-5-brightnesscontrast-curves.html
Week 6 - http://thatgamedevahmedcg.blogspot.com/2013/04/week-6-per-fragment-lighting.html
Week 7 - http://thatgamedevahmedcg.blogspot.com/2013/04/week-7-npr-shader.html
Week 8 - http://thatgamedevahmedcg.blogspot.com/2013/04/week-8-scrolling-texture.html
Week 9 - http://thatgamedevahmedcg.blogspot.com/2013/04/week-9-level-up-showcase.html
Week 10 - http://thatgamedevahmedcg.blogspot.com/2013/04/week-10-intermediate-graphics.html

Week 10 - Intermediate Graphics

This is going to be my last blog for this class, and I just wanted to highlight my experiences learning about shader based OpenGL and special effects in games.

I have to say that when I started this course I expected it to be hard. I had looked at GLSL over the holidays and knew that what I had learned in the Introduction to Graphics course wasn't enough to get up to the level that we'd be working at.

During the first week we went through the basics of the old, deprecated OpenGL and how it works. This was a good summary of the Introduction to Graphics course and a good refresher after the holidays. During the second week we went over the shader pipeline and learned what vertex and fragment shaders actually are. This was covered towards the end of the Intro to Graphics course, but not very clearly, so this lecture made it very simple to understand what exactly the vertex and fragment shaders do.

All the lectures after that were easy to follow, even if I didn't understand the concepts at first.

My one real problem was that in our Introduction to Graphics course I didn't fully grasp how to use VAOs and VBOs. This was an issue because I really wanted to try some advanced shaders that required core GLSL, but I couldn't, since my framework had been fully set up in the old fixed-function OpenGL. Hopefully over the summer I will be able to implement every shader we learned in class; I understand the material at a high level, but I need to implement it to get real experience with the shader effects and how they work.

The only thing I wish we could have had in this course was a lesson on color spaces and the real math behind color. We had one tutorial that mentioned color spaces, and we talked about, at a very high level, what HSL (hue, saturation, luminance) is and how it can be used for certain post-processing, but I wish we had covered it a little more in depth or been given some good resources to read on our own time. I did do my own research to figure out how to replicate some Photoshop post-processing in shaders, but I felt like I wasn't even scratching the surface.

Which leads me to the aspect of this course that I love, which is the fact that we have to teach ourselves. I hate information being spoon-fed to me. I always end up zoning out and not caring because I know in the back of my head that somewhere in the lecture slides that day lies the answer to my question. It makes the course boring when information is just thrown at you off a slide and there's no discussion or real application. This course is the opposite. The lectures were engaging, and important. It wasn't as though we were left alone; Dr. Hogue and our TAs Mina and Dan are approachable and will help steer you in the right direction any time you ask, but the course requires students to put actual thought and effort into their work. There's no BS in this course, and you can't BS your way out of it.

All in all I think the course is well designed, and I can surely say that I learned more in this course than I did in all my other classes this semester put together.

Week 9 - Level-Up Showcase

Last week my team, Phoenix Development Studios, went to Level-Up to show off our game The Next Dimension, a bullet-hell shooter inspired by Geometry Wars.

We decided we'd be getting our game ready for Level-Up around 3 weeks before we got approval from our Game Development Workshop Professor, Ken Finney. We buckled down and started getting our game ready to show off.

Most of the 3 weeks were spent cleaning up code, implementing keymapping and controller support, optimizing the game, and implementing some redesigns. But one of the most significant changes we made, at the last minute with just one line of code (yes, just one line of code, it is true...sort of), was the addition of a special background in our game's level select.

This addition really made our game more eye-catching. It also reinforced the feeling that this is a sci-fi fantasy themed shooter.

After our game was approved by Professor Finney, we made a second change that completely changed the aesthetic of our game: the inclusion of a new particle system that made it look as though explosion particles were warping together and flowing towards the player.

When we got to Level-Up, our main concern was that we wouldn't have a TV monitor to show off our game. Playing on a PC screen is fine, but when you want to grab people's attention, especially from a distance and within a sea of other people's awesome games, a small monitor doesn't exactly help. Luckily we were guaranteed one, and once we set up all was fine...except for some errors in the release build of our game related to optimization (since fixed), which meant we had to run the debug executable. It's only slightly slower than the current release build, so it wasn't exactly a big deal.

Before the event started we had some students coming around asking us how we did the psychedelic background, and they were surprised to find out it was just one line of code (sort of). Once the event actually started, our visuals really caught the attention of those attending; we got more attention than we expected, and many people enjoyed our game. We got a lot of feedback and some criticisms, some of which we had already planned for since we knew the limitations of our game, but mostly people just loved the aesthetic of the game and the smooth controls.

By the end of the day my legs felt like they were going to fall off, but it was definitely worth it. Getting the opportunity to see other students' games from other schools was interesting, and I hope my team and I get the opportunity to go next year.

Week 8 - Scrolling Texture

The Next Dimension’s story revolves around travelling between different dimensions. In the story, the level select is explained as an inter-dimensional zone. To give the feeling that this is a surreal setting we decided to use a scrolling texture. The texture below is what was used to create the inter-dimensional space aesthetic: 

Image



The texture tiles perfectly along its left and right edges. This is because the texture is mapped to a sphere, so the edges must be seamless or the seam would break immersion. The texture is also mapped to the walls in each level, another reason it must be seamless. Because the texture is mapped to a sphere, it feels as though the world around the player is emitting cosmic rays and bursting, making it truly feel as though you are in a surreal fantasy world. 

We also use another unique scrolling texture for the space level: 

Image
This texture, used with the same shader, gives a different feel due to its construction. In this case, while playing the game, players tend to tilt their heads to the side and the effect pulls them in. It truly gives the feeling that one is travelling through space.
Above is a video preview of the shader in action.
As seen above, the scrolling texture truly adds to the effect of being in a surreal world.
Below is the shader code used to scroll the texture. It is a simple algorithm; we pass in the total time elapsed since the beginning of the application and offset the UV coordinates by it (divided by a speed factor) to scroll the texture. The shader also samples the texture twice, once with the UV coordinates swapped, so the image scrolls along both axes, and the two samples are added together.

uniform sampler2D tex;
uniform float time;   //total elapsed time
uniform float speed;  //scroll speed divisor

void main()
{
    //first sample scrolls along s; second sample swaps s and t so it scrolls along the other axis
    vec2 uv1 = vec2(gl_TexCoord[0].s + (time / -speed), gl_TexCoord[0].t);
    vec2 uv2 = vec2(gl_TexCoord[0].t + (time / -speed), gl_TexCoord[0].s);
    gl_FragColor = texture2D(tex, uv1) + texture2D(tex, uv2);
}

We also use another scrolling texture in our Amazon-inspired level. The level includes clouds that obscure the player's view of enemy spawners. The effect is meant to be one of wispy clouds flowing through the air as you travel through it, and I believe the effect was achieved perfectly.

Wispy clouds flowing through the air!
The shader algorithm used here is slightly different: the texture is still sampled twice, but only one of the samples is scrolled, and the two are averaged together.

uniform sampler2D tex;
uniform float time;   //total elapsed time
uniform float speed;  //scroll speed divisor

void main()
{
    //one scrolled sample and one static sample, averaged together
    vec4 scrolled = texture2D(tex, vec2(gl_TexCoord[0].s + (time / -speed), gl_TexCoord[0].t));
    vec4 base = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = (scrolled + base) / 2.0;
}


Week 7 - NPR Shader

This week I completed my toon shader in GLSL, which was a lot easier than I thought. Below is a short clip of the toon shader in action.


Cel-shading

This was the part I perceived to be the most difficult before I began writing the shader. It actually turned out to be quite simple. We compute our lighting as we normally would, then use the diffuse intensity at that point as a texture coordinate to sample from a ramp texture, which quantizes the diffuse into the discrete set of intensities in the map:

We use the sampled black-white intensity here and multiply it by the current geometry's color like we would with a regular diffuse. This achieves the cel-shading effect.
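
To make that concrete, here is a minimal sketch of the ramp lookup in the same legacy GLSL style. The names toonRamp, lightDir, and the varyings are illustrative assumptions, not the actual uniforms from my shader:

uniform sampler2D toonRamp;   //black-to-white ramp with a few discrete steps (assumed name)
uniform vec3 lightDir;        //normalized light direction (assumed uniform)
varying vec3 normal;          //surface normal from the vertex shader
varying vec4 baseColor;       //geometry color from the vertex shader

void main()
{
    //standard diffuse intensity
    float intensity = max(dot(normalize(normal), normalize(lightDir)), 0.0);

    //use the intensity as a texture coordinate into the ramp,
    //which snaps it to one of a few discrete bands
    float toon = texture2D(toonRamp, vec2(intensity, 0.5)).r;

    //multiply by the geometry's color like a regular diffuse term
    gl_FragColor = vec4(baseColor.rgb * toon, baseColor.a);
}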

Edge Detection

To complete the toon shading effect there is an edge pass. To do this we use a Sobel filter.

A Sobel filter takes an image and, using a convolution kernel, detects differences in color between neighbouring pixels; in other words, it detects edges. To do this, before we render our geometry we bind an FBO that has two color render targets and one depth target. We render our scene to the FBO, passing the color to the first color target and the normals to the second color target. Then we use our edge detection program and pass all three render targets to our shader. We run the Sobel filter on the normals render target and the depth render target, and add the edges found in both together to get clean outlines around all of our geometry.
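
My actual edge pass isn't shown here, but a rough sketch of a Sobel pass over the normals target could look like the following; normalsTex and texelSize are assumed names, and the depth target would be filtered the same way and added in:

uniform sampler2D normalsTex; //normals render target (assumed name)
uniform vec2 texelSize;       //1.0 / render target resolution (assumed uniform)

//Sobel filter over a 3x3 neighbourhood of the given texture
float sobelEdge(sampler2D tex, vec2 uv)
{
    vec3 tl = texture2D(tex, uv + texelSize * vec2(-1.0,  1.0)).rgb;
    vec3 t  = texture2D(tex, uv + texelSize * vec2( 0.0,  1.0)).rgb;
    vec3 tr = texture2D(tex, uv + texelSize * vec2( 1.0,  1.0)).rgb;
    vec3 l  = texture2D(tex, uv + texelSize * vec2(-1.0,  0.0)).rgb;
    vec3 r  = texture2D(tex, uv + texelSize * vec2( 1.0,  0.0)).rgb;
    vec3 bl = texture2D(tex, uv + texelSize * vec2(-1.0, -1.0)).rgb;
    vec3 b  = texture2D(tex, uv + texelSize * vec2( 0.0, -1.0)).rgb;
    vec3 br = texture2D(tex, uv + texelSize * vec2( 1.0, -1.0)).rgb;

    //horizontal and vertical Sobel kernels
    vec3 gx = -tl - 2.0*l - bl + tr + 2.0*r + br;
    vec3 gy = -tl - 2.0*t - tr + bl + 2.0*b + br;

    //gradient magnitude; large values mean a strong edge
    return length(gx) + length(gy);
}

void main()
{
    float edge = sobelEdge(normalsTex, gl_TexCoord[0].st);

    //darken the pixel where an edge is found (1 = no edge, 0 = edge)
    gl_FragColor = vec4(vec3(1.0 - clamp(edge, 0.0, 1.0)), 1.0);
}

The same sobelEdge call would be repeated on the depth target and the two results added before compositing with the color target.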

A good example of toon shading in games, and one of my favourites, is the Borderlands franchise. Borderlands is cel-shaded and uses edge detection, but the team at Gearbox took it a step further by authoring their textures so the game really does look like a comic book. Their characters are also modeled in a non-realistic way, which makes the edge detection less obvious while not detracting from the toon effect. There are also some post-processing effects to bring out the richness of the colors, and when playing, the game really does feel like you are in a semi-realistic cartoon world.


Week 6 - Per-fragment Lighting

This week I completed a shader that uses the Blinn-Phong lighting model to light the scene. Below is a clip of the shader in action with five animated, differently colored lights.

                                        

Diffuse:


The diffuse component is essentially the intensity of light at a point on a surface. This is known as Lambertian reflection. Lambert's cosine law states that the intensity of light is proportional to the cosine of the angle between the light's direction and the surface normal.
Specular:

Using specular reflection in our model allows us to add specular highlights to our lighting model. Specular highlights are the bright spots on shiny objects when they are lit. In my lighting calculations I have chosen to go with the Blinn-Phong lighting model, which bases the intensity of the specular component on the cosine of the angle between the half-vector and the normal.

Below is my GLSL shader function used to compute the total intensity and final color for one light.

vec3 returnTotalLight(Light light)
{
    vec3 P = pos.xyz;                         //vertex position
    vec3 N = normalize(normal);               //surface normal
    vec3 V = normalize(eye.xyz - P);          //view direction
    vec3 L = normalize(light.position - P);   //light direction
    vec3 H = normalize(L + V);                //half vector for the specular term

    float specularLight = pow(max(dot(N, H), 0.0), shininess);
    float diffuseLight = max(dot(N, L), 0.0);
    vec3 diffuse = vec3(light.color * diffuseLight);

    //no specular highlight on surfaces facing away from the light
    if (diffuseLight <= 0.0)
    {
        specularLight = 0.0;
    }

    vec3 specular = vec3(light.color * specularLight);

    return diffuse + specular;
}
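
The main() that calls this isn't shown above, but accumulating the five lights could look roughly like the sketch below, assuming the same Light struct and a uniform array of lights (the array name and the ambient term are illustrative):

//not my actual main(), just a sketch of how the per-light function above
//could be accumulated over the five lights
const int NUM_LIGHTS = 5;
uniform Light lights[NUM_LIGHTS];
uniform vec3 ambient;   //assumed ambient term

void main()
{
    vec3 total = ambient;
    for (int i = 0; i < NUM_LIGHTS; i++)
    {
        total += returnTotalLight(lights[i]);
    }
    gl_FragColor = vec4(total, 1.0);
}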





Saturday, March 30, 2013

Week 5 - Brightness/Contrast & Curves Adjustments

Brightness & Contrast Control

Last week I completed 3 post-processing shaders. I completed a shader for controlling intensity profiles, a shader for adjusting the brightness and contrast of an image, and a shader for curves adjustment. This blog will focus on the brightness and contrast shader as well as the curves adjustment shader.

When changing the brightness of an image, a constant is added or subtracted from the luminance of every pixel in the scene.

Changing the contrast of an image changes the range of luminance values present. It essentially expands or compresses the color of each pixel around a midpoint (0.5 in the shader below).

Below is the shader code for brightness and contrast control:


uniform float brightness;
uniform float contrast;
uniform sampler2D scene;

void main()
{
    //sample scene color
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //expand or compress around the 0.5 midpoint
    vec3 colorContrasted = (color - 0.5) * contrast + 0.5;

    //add the brightness constant
    vec3 bright = colorContrasted + vec3(brightness, brightness, brightness);

    gl_FragColor.rgb = bright;
}

Below are screenshots of the shader: 
Brightness = 0, Contrast = 1

Contrast Below 1

Contrast Above 1


Brightness Increased ( > 0)

Brightness Decreased ( < 0)
The curves adjustment shader uses a color map/ramp to remap the colors to a new set of colors based on the map. Below is the shader code:


uniform sampler2D scene;
uniform sampler2D ramp;

void main()
{
    //sample scene color
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //use each channel's value as the lookup position in the ramp
    vec3 outColor;
    outColor.r = texture2D(ramp, vec2(color.x , 0.0)).r;
    outColor.g = texture2D(ramp, vec2(color.y , 0.0)).g;
    outColor.b = texture2D(ramp, vec2(color.z , 0.0)).b;

    gl_FragColor.rgb = outColor;
}


To remap the color we use the RGB values of the color to sample from the color ramp. This is done here:


outColor.r = texture2D(ramp, vec2(color.x , 0.0)).r;
outColor.g = texture2D(ramp, vec2(color.y , 0.0)).g;
outColor.b = texture2D(ramp, vec2(color.z , 0.0)).b;


We then simply output the color, and the result appears as shown below:


Curves Adjustment
And the color ramp used to remap the color:

Week 4 - Intensity Profiles

Post-processing for 3D applications involves rendering a scene to a texture (through a frame buffer object) and then manipulating the data contained in that texture so that the final image looks different. 

This week I completed 3 post-processing shaders. I completed a shader for controlling the brightness and contrast of an image, a shader for intensity profiles, and a curves adjustment shader. This blog will focus on the shader for intensity profiles.

Intensity Profiles/Levels Control

Levels control is a more precise way of controlling brightness and contrast in an image. To control levels the shader requires a minimum input (sometimes referred to as the input black-level), a maximum input (input white-level), a minimum output (output black-level), a maximum output (output white-level) and a value for gamma (middle grey).

Below is the shader code for levels:


uniform sampler2D scene;

uniform float minInput;
uniform float maxInput;
uniform float gamma;
uniform float minOutput;
uniform float maxOutput;

void main()
{
    //sample scene texture
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //levels input range
    color = min(max(color - vec3(minInput), vec3(0.0)) / (vec3(maxInput) - vec3(minInput)), vec3(1.0));

    //gamma correction
    color = pow(color, vec3(1.0 / gamma));

    //levels output range
    color = mix(vec3(minOutput), vec3(maxOutput), color);

    gl_FragColor.rgb = color;
}


The levels input range line essentially rejects RGB values outside of the range of the minimum and maximum input, exchanging them for either black or white (black if the color is less than the minimum input specified, and white if it's greater than the maximum input specified). Below is an example substituting the following values:

minInput = 0.2
maxInput = 0.9
gamma =  0.6;
minOutput = 0.2
maxOutput = 0.9
color = (0.1,0.1,0.1)

color = min( max(color - vec3(minInput), vec3(0.0)) / (vec3(maxInput) - vec3(minInput)), vec3(1.0));

color = min( max((0.1,0.1,0.1) - vec3(0.2), vec3(0.0)) / (vec3(0.9) - vec3(0.2)), vec3(1.0));
         = min( max( (-0.1,-0.1,-0.1), (0.0,0.0,0.0) ) / (0.7,0.7,0.7), (1,1,1))
         = min( (0.0,0.0,0.0)/(0.7,0.7,0.7), (1,1,1) )
         = min(  (0.0,0.0,0.0), (1,1,1) )
         = (0,0,0)

As seen above, the value was below the minimum input and so it was rejected and exchanged for black. Had the color been (1.0,0.9,0.9) the color would have been rejected and replaced with white since the color was greater than the maximum input. If the color was within the range the final result of the above equation would simply be the original color with an adjustment. For example an original pixel color of (0.3,0.3,0.3) would return a color of (0.1428,0.1428,0.1428), and a color of (0.8,0.3,0.5) would result in a color of (0.8571, 0.1428,0.4285).

Next is a simple gamma correction. Gamma correction is used to adjust the luminance of pixels such that they are either brighter or darker. In the case of my shader increasing the gamma value will make the image brighter and decreasing the gamma value will make the image darker. Usually the equation is the same except we do not set our exponent to 1.0/gamma, it's usually just gamma. My small issue with this is that increasing the gamma value results in a darker image and decreasing it results in a lighter image, and that just didn't feel natural to me so I reversed it.
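
As a quick sanity check of that reversed convention, take a mid-grey value of 0.5:

pow(0.5, 1.0 / 2.0) ≈ 0.707   (gamma = 2.0, brighter)
pow(0.5, 1.0 / 0.5) = 0.25    (gamma = 0.5, darker)

So with the 1.0/gamma exponent, a gamma above 1 brightens the image and a gamma below 1 darkens it, exactly as described.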

The levels output range uses the color value, adjusted from the levels input range and gamma, to linearly interpolate between the minimum and maximum outputs. The GLSL mix function does this for us:

color = mix(vec3(minOutput), vec3(maxOutput), color);

The mix function itself computes a component-wise linear interpolation:

mix(x, y, a) = x*(1 - a) + y*a

If we substitute a value for color of (0.1428,0.1428,0.1428) and use the minimum and maximum outputs stated above:

color = (0.2,0.2,0.2)*(1 - 0.1428) + (0.9,0.9,0.9)*(0.1428)
         = (0.17144,0.17144,0.17144) + (0.12852,0.12852,0.12852)
         = (0.29996,0.29996,0.29996)

Below are screenshots of an image with my shader effect being applied, and one without any effects.




Sunday, March 3, 2013

Week 3 - Bloom Shader

This week we talked about the bloom effect.

Bloom is used to reproduce the real-world effect of bright light causing a sort of over-exposure.

Looked out the window and God's graphics guy applied a bloom.
The idea is very simple; it is to make everything appear more vibrant and pretty. Luckily the implementation is just as simple as the concept. It only involves 3 steps:

1. Extract highlights from image
2. Blur the extracted highlights
3. Composite the blurred highlights back onto the original image

The first step, before any of the above in terms of programming, is to render your scene to a frame buffer object (FBO). We render the scene to an FBO because bloom is a post-processing effect; it happens after the scene has been rendered rather than while the geometry is being drawn. Once the scene has been rendered to an FBO we can apply our effects.

The first step is a simple pass that keeps only the brightest colors in the image (the highlights) and discards the darker ones. This bright pass is also rendered to an FBO for further post-processing. The second step is to blur the highlights. This step requires creating a convolution matrix, and there are two common ways to do it: a box filter or a Gaussian filter. In a box filter each source pixel is weighted equally, meaning all the values in the convolution matrix are the same. In a Gaussian filter the highest weight is in the center of the matrix. Generally the Gaussian filter produces better results due to its smooth distribution.
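
My actual bright-pass shader isn't included here, but a minimal sketch of the highlight extraction could look like this; the threshold uniform and the luminance weights are assumptions for illustration:

uniform sampler2D scene;      //full scene FBO texture
uniform float threshold;      //luminance cutoff, e.g. 0.7 (assumed uniform)

void main()
{
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //approximate perceived luminance
    float lum = dot(color, vec3(0.2126, 0.7152, 0.0722));

    //keep only the pixels brighter than the threshold
    gl_FragColor.rgb = lum > threshold ? color : vec3(0.0);
    gl_FragColor.a = 1.0;
}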

The final step is to combine the original image with the blurred bright-pass image.
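
A minimal sketch of that composite pass might look like the following, assuming the original scene and the blurred bright-pass are bound as two samplers (bloomBlur and bloomStrength are illustrative names):

uniform sampler2D scene;       //original scene render
uniform sampler2D bloomBlur;   //blurred bright-pass (assumed name)
uniform float bloomStrength;   //how much bloom to add, e.g. 1.0 (assumed uniform)

void main()
{
    vec3 base  = texture2D(scene, gl_TexCoord[0].st).rgb;
    vec3 bloom = texture2D(bloomBlur, gl_TexCoord[0].st).rgb;

    //additive composite: the blurred highlights bleed over the original image
    gl_FragColor.rgb = base + bloom * bloomStrength;
    gl_FragColor.a = 1.0;
}

Below is the effect I achieved in the shader itself.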



Wednesday, February 6, 2013

Week 2 - Hatching

This week we talked about lighting and we touched on the concept of toon shading.

When I went through some of the shader assignments I saw that one of the toon shader upgrades was to add hatching support. I looked up screenshots from a hatching shader and found several images similar to the one below.


The first step is to create multiple textures of differing hatch density, ranging from one with very low density up to one with high density. Next we calculate the diffuse intensity of the surface based on the direction of the light. Based on the intensity of the light at that point, we assign one scalar, acting as a weight, to each texture. In our fragment shader we sample all the textures at the same texture coordinate, and each time we sample we multiply by the assigned weight:

vec4 color1 = texture2D(HatchTexture1,texCoord) * weight1;

Once we have sampled all the textures we add all the colors together per fragment. This should give us the line drawing effect we are looking for.
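
Putting it together, a rough sketch of the whole hatching fragment shader might look like this; only HatchTexture1 appears in my snippet above, so the second and third textures, the light direction uniform, and the particular band weights are assumptions for illustration:

uniform sampler2D HatchTexture1;  //lightest hatching (matches the snippet above)
uniform sampler2D HatchTexture2;  //medium hatching (assumed name)
uniform sampler2D HatchTexture3;  //densest hatching (assumed name)
uniform vec3 lightDir;            //normalized light direction (assumed uniform)
varying vec3 normal;
varying vec2 texCoord;

void main()
{
    //diffuse intensity decides which hatch density shows through
    float intensity = max(dot(normalize(normal), normalize(lightDir)), 0.0);

    //three weights that always sum to 1: bright areas get the light hatching,
    //dark areas the dense hatching, and mid tones the medium texture
    float weight1 = smoothstep(0.5, 1.0, intensity);
    float weight3 = 1.0 - smoothstep(0.0, 0.5, intensity);
    float weight2 = 1.0 - weight1 - weight3;

    vec4 color1 = texture2D(HatchTexture1, texCoord) * weight1;
    vec4 color2 = texture2D(HatchTexture2, texCoord) * weight2;
    vec4 color3 = texture2D(HatchTexture3, texCoord) * weight3;

    //add the weighted samples together for the final line-drawing look
    gl_FragColor = color1 + color2 + color3;
}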


Saturday, January 26, 2013

Week 1 - Polish Plans for GDW

Our GDW game in its current state doesn't look as good as it should; aesthetically, it needs improving. Given the cartoon art style our artists already took inspiration from, our game seems perfectly suited for a toon shader.

In our research we looked at different games that apply toon shaders to see the variety of toon shaders currently being used. We looked at games such as Champions Online (screenshots), the Borderlands series (screenshots), the Dragon Ball Z Budokai series (screenshots), and then we stumbled upon the game Killer is Dead, set to release in summer 2013. The aesthetic of this game is highly stylized, and if I had to identify the shaders being used, I'd say a toon shader and a bloom shader. We hope to achieve a similar effect in our game.

Killer is Dead Debut Gameplay Trailer - Looks awesome.



I took some screenshots of models in maya and tried to replicate the effect in Photoshop. I wasn't able to replicate the bloom as I imagined it in my head, but I was able to get a bit of a toon effect just by applying a simple find edges algorithm. I tried applying bloom by using a soft low opacity brush and going over the brighter areas but it appears to glow rather than bloom. The textures that were applied to the models don't have much detail, and the models have a low-poly count, so when I applied the simple find edges effect in Photoshop and layered it on top of the original screenshot, it already looked toon, the only thing that would need to be applied properly would be a cel shading effect.