Friday, April 12, 2013

Week 9 - Level-Up Showcase

Last week my team, Phoenix Development Studios, went to Level-Up to show off our game, The Next Dimension, a bullet-hell shooter inspired by Geometry Wars.

We decided we'd get our game ready for Level-Up around three weeks before we got approval from our Game Development Workshop professor, Ken Finney. We buckled down and started getting our game ready to show off.

Most of those three weeks were spent cleaning up code, implementing key mapping and controller support, optimizing the game, and implementing some redesigns. But one of the most significant changes we made, at the last minute and with just one line of code (yes, just one line of code, it is true... sort of), was the addition of a special background in our game's level select.

This addition really increased the game's eye-catching appeal. It also added to the effect that this was a sci-fi fantasy themed shooter.

After our game was approved by Professor Finney, we made a second change that completely transformed the aesthetic of our game: a new particle system that made it look as though explosion particles were warping together and flowing towards the player.

When we got to Level-Up, our main concern was that we wouldn't have a TV monitor to show off our game. Playing on a PC screen is fine, but when you want to grab people's attention, especially from a distance and within a sea of other people's awesome games, a small monitor doesn't exactly help. Luckily we were guaranteed one, and we set up and all was fine... except for some errors in the release build of our game concerning optimization, which have since been fixed. That meant we had to run the debug executable, which is only slightly slower than the current release build, so it wasn't a big deal.

Before the event started, some students came around asking us how we did the psychedelic background, and they were surprised to find out it was just one line of code (sort of). Once the event actually started, our visuals really caught the attention of those attending, and we got more attention than we expected; many people enjoyed our game. We got a lot of feedback and some criticisms, some of which we had already planned for since we knew the limitations of our game, but mostly people just loved the aesthetic of the game and the smooth controls.

By the end of the day my legs felt like they were going to fall off, but it was definitely worth it. Getting to see games from students at other schools was interesting, and I hope my team and I get the opportunity to go next year.

Week 8 - Scrolling Texture

The Next Dimension’s story revolves around travelling between different dimensions. In the story, the level select is explained as an inter-dimensional zone. To give the feeling that this is a surreal setting we decided to use a scrolling texture. The texture below is what was used to create the inter-dimensional space aesthetic: 

Image



The texture tiles seamlessly from its left edge to its right edge. This matters because the texture is mapped to a sphere, so any visible seam would break immersion. The texture is also mapped to the walls in each level, another reason it must be seamless. Because the texture is mapped to a sphere, it feels as though the world around the player is emitting cosmic rays and bursting, making it truly feel as though you are in a surreal fantasy world.

We also use another unique scrolling texture for the space level: 

Image
This texture, used with the same shader, gives a different feel due to its construction. In this case, while playing the game, players tend to tilt their heads to the side and the effect pulls them in. It truly gives the feeling that one is travelling through space.
Above is a video preview of the shader in action.
As seen above, the scrolling texture truly adds to the effect of being in a surreal world.
Below is the shader code used to scroll the texture. It is a simple algorithm: we pass in the total time elapsed since the beginning of the application and use it to offset the UV coordinates, scrolling the texture. The shader also samples the texture twice, with the coordinate axes swapped in the second sample, so the scrolling appears both vertical and horizontal.

uniform sampler2D tex;
uniform float time;
uniform float speed;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float offset = time / -speed;

    //sample twice: once scrolled along s, once with axes swapped and scrolled along t
    gl_FragColor = texture2D(tex, vec2(uv.s + offset, uv.t))
                 + texture2D(tex, vec2(uv.t + offset, uv.s));
}
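To see what the shader is doing numerically, here is a small CPU-side sketch in Python (not from the project; the 4x4 grayscale texture, nearest-neighbour sampling, and all values are hypothetical):

```python
import math

def sample(tex, u, v):
    """Nearest-neighbour lookup with repeat wrapping, like GL_REPEAT."""
    size = len(tex)
    x = math.floor(u * size) % size
    y = math.floor(v * size) % size
    return tex[y][x]

def scrolled_color(tex, s, t, time, speed):
    """Two samples: one scrolled along s, one with swapped axes scrolled along t."""
    offset = time / -speed
    first = sample(tex, s + offset, t)
    second = sample(tex, t + offset, s)  # swapped coordinates, as in the shader
    return first + second

# Hypothetical 4x4 grayscale texture with a simple gradient.
texture = [[(x + y) / 8.0 for x in range(4)] for y in range(4)]
print(scrolled_color(texture, 0.25, 0.5, 2.0, 4.0))
```

As `time` grows, the effective UV offset slides, which is all the scrolling effect is.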

We also use another scrolling texture in our Amazon-inspired level. The level includes clouds that obscure the player's view of enemy spawners. The effect is meant to be one of wispy clouds flowing through the air as you travel through it, and I believe the effect was achieved perfectly.

Wispy clouds flowing through the air!
The shader algorithm used here is slightly different: the texture is again sampled twice, but only one of the samples is scrolled, and the two are averaged.

uniform sampler2D tex;
uniform float time;
uniform float speed;

void main()
{
    vec2 uv = gl_TexCoord[0].st;

    //average one scrolled sample with one static sample
    gl_FragColor = (texture2D(tex, vec2(uv.s + (time / -speed), uv.t))
                  + texture2D(tex, uv)) / 2.0;
}
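The cloud variant reduces to averaging a scrolled sample with a static one; a quick Python sketch of just that blend (again with a hypothetical nearest-neighbour sampler and texture):

```python
import math

def sample(tex, u, v):
    """Nearest-neighbour lookup with repeat wrapping."""
    size = len(tex)
    return tex[math.floor(v * size) % size][math.floor(u * size) % size]

def cloud_color(tex, s, t, time, speed):
    """Average of one horizontally scrolled sample and one static sample."""
    scrolled = sample(tex, s + time / -speed, t)
    static = sample(tex, s, t)
    return (scrolled + static) / 2.0

texture = [[(x + y) / 8.0 for x in range(4)] for y in range(4)]
print(cloud_color(texture, 0.25, 0.5, 2.0, 4.0))
```

Because half of the result never moves, the drifting half reads as translucent wisps over a stable base.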


Week 7 - NPR Shader

This week I completed my toon shader in GLSL, which was a lot easier than I thought. Below is a short clip of the toon shader in action.


Cel-shading

This was the part I perceived to be most difficult before I began writing the shader. It actually turned out to be quite simple. We compute our lighting as we normally would, then use the resulting intensity to sample from a ramp texture, quantizing the diffuse term to the small set of intensities in the map:

We take the sampled black-to-white intensity and multiply it by the current geometry's color, just as we would with a regular diffuse term. This achieves the cel-shading effect.
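The ramp lookup can be sketched on the CPU like this (Python, not shader code; the four-band ramp values are hypothetical stand-ins for the 1D ramp texture):

```python
def cel_shade(intensity, ramp):
    """Quantize a [0,1] lighting intensity by sampling a 1D ramp (a list of bands)."""
    index = min(int(intensity * len(ramp)), len(ramp) - 1)
    return ramp[index]

# Hypothetical four-band ramp: shadow, dark, mid, highlight.
ramp = [0.2, 0.45, 0.7, 1.0]

# Multiply the sampled band by the geometry's color, as described above.
shaded = cel_shade(0.55, ramp) * 0.8
print(shaded)
```

Every intensity inside a band collapses to the same value, which is what produces the flat, banded cartoon look.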

Edge Detection

To complete the toon shading effect there is an edge pass. To do this we use a Sobel filter.

A Sobel filter takes an image and, using a convolution kernel, detects differences in color. Simply put, it detects edges. Before we render our geometry we bind an FBO that has two color render targets and one depth target. We render our scene to the FBO, passing the color to the first color target and the normals to the second. Then we switch to our edge-detection program and pass all three render targets to our shader. We run the Sobel filter on both the normals target and the depth target, and add the edges found in each together to get perfect outlines around all of our geometry.
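A minimal Python sketch of the Sobel convolution itself, run on a tiny hypothetical grayscale image (the real shader does the same per-fragment on the normals and depth targets):

```python
# Standard Sobel kernels for horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(img, x, y):
    """Edge magnitude at an interior pixel (x, y) of a grayscale image."""
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * p
            gy += GY[j][i] * p
    return (gx * gx + gy * gy) ** 0.5

# A vertical edge: left half dark, right half bright.
image = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
print(sobel(image, 1, 2))  # flat region: no edge
print(sobel(image, 3, 2))  # straddling the edge: strong response
```

Pixels in flat regions produce zero, and pixels straddling a discontinuity produce a large magnitude, which is what gets drawn as the outline.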

A good example of toon shading in games, and also happens to be one of my favourites, is the Borderlands franchise. The toon shading in Borderlands is cel-shaded and uses edge detection, however the team at Gearbox took it a step further by creating their textures in such a way that it made it look like it was indeed a comic book. Their characters are also modeled in a non-realistic way which makes the use of the edge detection less obvious while not detracting from the toon effect. There are also some post-processing effects to bring out the richness of the colors, and when playing it the game really does feel like you are in a semi-realistic cartoon world.


Week 6 - Per-fragment Lighting

This week I completed a shader that uses the Blinn-Phong lighting model to light the scene. Below is a clip of the shader in action with 5 animated, differently colored lights.

                                        

Diffuse:


The diffuse component is essentially the intensity of light at a point on a surface. This is known as Lambertian reflection. Lambert's cosine law states that the intensity of light is proportional to the cosine of the angle formed between the light's direction and the surface normal.
Specular:

Using specular reflection in our model allows us to add specular highlights to our lighting model. Specular highlights are the bright spots on shiny objects when they are lit. In my lighting calculations I have chosen to go with the Blinn-Phong lighting model, which bases the intensity of the specular component on the cosine of the angle between the half-vector and the surface normal.

Below is my GLSL shader function used to compute the total intensity and final color for one light.

vec3 returnTotalLight(Light light)
{
    vec3 P = pos.xyz;                       //vertex position
    vec3 N = normalize(normal);             //surface normal
    vec3 V = normalize(eye.xyz - P);        //view direction
    vec3 L = normalize(light.position - P); //light direction
    vec3 H = normalize(L + V);              //half vector for specular term

    float diffuseLight = max(dot(N, L), 0.0);
    float specularLight = pow(max(dot(N, H), 0.0), shininess);

    //no specular highlight on surfaces facing away from the light
    if (diffuseLight <= 0.0)
    {
        specularLight = 0.0;
    }

    vec3 diffuse = vec3(light.color * diffuseLight);
    vec3 specular = vec3(light.color * specularLight);

    return diffuse + specular;
}
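The same per-light math can be checked CPU-side; here is a Python transcription (positions, light color, and the shininess value are hypothetical, chosen so the expected result is easy to verify by hand):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(light_pos, light_color, P, N, eye, shininess):
    """Diffuse + specular contribution for one light, mirroring the shader."""
    N = normalize(N)
    V = normalize(tuple(e - p for e, p in zip(eye, P)))        # view direction
    L = normalize(tuple(l - p for l, p in zip(light_pos, P)))  # light direction
    H = normalize(tuple(l + v for l, v in zip(L, V)))          # half vector
    diffuse = max(dot(N, L), 0.0)
    specular = pow(max(dot(N, H), 0.0), shininess) if diffuse > 0.0 else 0.0
    return tuple(c * (diffuse + specular) for c in light_color)

# Light directly above a surface facing up: full diffuse and full specular.
print(blinn_phong((0, 5, 0), (1.0, 1.0, 1.0), (0, 0, 0), (0, 1, 0), (0, 5, 0), 32.0))
```

With the light and eye both straight overhead, N, L, V, and H all coincide, so both cosines are 1 and the result is simply twice the light color.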





Saturday, March 30, 2013

Week 5 - Brightness/Contrast & Curves Adjustments

Brightness & Contrast Control

Last week I completed 3 post-processing shaders. I completed a shader for controlling intensity profiles, a shader for adjusting the brightness and contrast of an image, and a shader for curves adjustment. This blog will focus on the brightness and contrast shader as well as the curves adjustment shader.

When changing the brightness of an image, a constant is added or subtracted from the luminance of every pixel in the scene.

Changing the contrast of an image changes the range of luminance values present. It essentially expands or compresses the color of each pixel around a constant mid-point.

Below is the shader code for brightness and contrast control:


uniform float brightness;
uniform float contrast;
uniform sampler2D scene;

void main()
{
    //sample scene color
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //scale contrast around mid-grey
    vec3 colorContrasted = (color - 0.5) * contrast + 0.5;

    //add brightness constant
    vec3 bright = colorContrasted + vec3(brightness, brightness, brightness);

    gl_FragColor.rgb = bright;
}
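The same arithmetic can be verified CPU-side; a quick Python transcription (the sample colors are hypothetical, and note a real framebuffer would clamp the result to [0, 1]):

```python
def brightness_contrast(color, brightness, contrast):
    """Mirror of the fragment shader: scale around mid-grey, then add brightness."""
    return tuple((c - 0.5) * contrast + 0.5 + brightness for c in color)

# Identity settings leave the color unchanged.
print(brightness_contrast((0.25, 0.5, 0.75), brightness=0.0, contrast=1.0))

# Doubling contrast pushes values away from mid-grey; mid-grey itself is fixed.
print(brightness_contrast((0.25, 0.5, 0.75), brightness=0.0, contrast=2.0))
```

Brightness of 0 and contrast of 1 reproduce the input exactly, which matches the "Brightness = 0, Contrast = 1" reference screenshot below.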

Below are screenshots of the shader: 
Brightness = 0, Contrast = 1

Contrast Below 1

Contrast Above 1


Brightness Increased ( > 0)

Brightness Decreased ( < 0)

The curves adjustment shader uses a color map/ramp to remap the colors to a new set of colors based on the map. Below is the shader code:


uniform sampler2D scene;
uniform sampler2D ramp;

void main()
{
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;
    vec3 outColor;
    outColor.r = texture2D(ramp, vec2(color.x, 0.0)).r;
    outColor.g = texture2D(ramp, vec2(color.y, 0.0)).g;
    outColor.b = texture2D(ramp, vec2(color.z, 0.0)).b;
    gl_FragColor.rgb = outColor;
}


To remap the color we use the RGB values of the original color as texture coordinates to sample from the color ramp. This is done here:


outColor.r = texture2D(ramp, vec2(color.x , 0.0)).r;
outColor.g = texture2D(ramp, vec2(color.y , 0.0)).g;
outColor.b = texture2D(ramp, vec2(color.z , 0.0)).b;


We then simply output the color, and it appears as shown below:
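The per-channel remap can be sketched in Python, with the ramp texture replaced by a small hypothetical list of RGB tuples (here an inversion ramp, just for illustration):

```python
def curves(color, ramp):
    """Remap each channel by sampling a 1D ramp (a list of (r, g, b) tuples),
    using the channel's own value as the texture coordinate, as in the shader."""
    def lookup(value):
        return ramp[min(int(value * len(ramp)), len(ramp) - 1)]
    return (lookup(color[0])[0], lookup(color[1])[1], lookup(color[2])[2])

# Hypothetical 4-entry inversion ramp: dark inputs map to bright outputs.
ramp = [(1.0, 1.0, 1.0), (0.66, 0.66, 0.66), (0.33, 0.33, 0.33), (0.0, 0.0, 0.0)]
print(curves((0.1, 0.5, 0.9), ramp))
```

Since each channel indexes the ramp independently, a single ramp texture can reshape the red, green, and blue response curves separately.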


Curves Adjustment
And the color ramp used to remap the color:

Week 4 - Intensity Profiles

Post-processing for 3D applications involves rendering a scene to a texture, attached to a frame buffer object, and manipulating the data contained in that texture so that it looks different.

This week I completed 3 post-processing shaders: a shader for controlling the brightness and contrast of an image, a shader for intensity profiles, and a curves adjustment shader. This blog will focus on the shader for intensity profiles.

Intensity Profiles/Levels Control

Levels control is a more precise way of controlling brightness and contrast in an image. To control levels the shader requires a minimum input (sometimes referred to as the input black-level), a maximum input (input white-level), a minimum output (output black-level), a maximum output (output white-level) and a value for gamma (middle grey).

Below is the shader code for levels


uniform sampler2D scene;

uniform float minInput;
uniform float maxInput;
uniform float gamma;
uniform float minOutput;
uniform float maxOutput;

void main()
{
    //sample scene texture
    vec3 color = texture2D(scene, gl_TexCoord[0].st).rgb;

    //levels input range
    color = min(max(color - vec3(minInput), vec3(0.0)) / (vec3(maxInput) - vec3(minInput)), vec3(1.0));

    //gamma correction
    color = pow(color, vec3(1.0 / gamma));

    //levels output range
    color = mix(vec3(minOutput), vec3(maxOutput), color);

    gl_FragColor.rgb = color;
}


The levels input range line essentially rejects RGB values outside of the range of the minimum and maximum input exchanging them for either black or white (black if the color is less than the minimum input specified, and white if it's greater than the maximum input specified). Below is an example substituting the following values:

minInput = 0.2
maxInput = 0.9
gamma = 0.6
minOutput = 0.2
maxOutput = 0.9
color = (0.1,0.1,0.1)

color = min( max(color - vec3(minInput), vec3(0.0)) / (vec3(maxInput) - vec3(minInput)), vec3(1.0));

color = min( max((0.1,0.1,0.1) - vec3(0.2), vec3(0.0)) / (vec3(0.9) - vec3(0.2)), vec3(1.0));
         = min( max( (-0.1,-0.1,-0.1), (0.0,0.0,0.0) ) / (0.7,0.7,0.7), (1,1,1))
         = min( (0.0,0.0,0.0)/(0.7,0.7,0.7), (1,1,1) )
         = min(  (0.0,0.0,0.0), (1,1,1) )
         = (0,0,0)

As seen above, the value was below the minimum input and so it was rejected and exchanged for black. Had the color been (1.0,0.9,0.9) the color would have been rejected and replaced with white since the color was greater than the maximum input. If the color was within the range the final result of the above equation would simply be the original color with an adjustment. For example an original pixel color of (0.3,0.3,0.3) would return a color of (0.1428,0.1428,0.1428), and a color of (0.8,0.3,0.5) would result in a color of (0.8571, 0.1428,0.4285).
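The worked examples above can be checked with a direct Python transcription of the input-range line:

```python
def levels_input(color, min_input, max_input):
    """The levels input-range line, transcribed per channel:
    clamp below min_input to 0, normalize, clamp above max_input to 1."""
    return tuple(
        min(max(c - min_input, 0.0) / (max_input - min_input), 1.0)
        for c in color
    )

print(levels_input((0.1, 0.1, 0.1), 0.2, 0.9))  # below minInput: rejected to black
print(levels_input((0.3, 0.3, 0.3), 0.2, 0.9))  # in range: rescaled
print(levels_input((0.8, 0.3, 0.5), 0.2, 0.9))  # mixed channels, each rescaled
```

The three calls reproduce the (0, 0, 0), (0.1428, ...), and (0.8571, 0.1428, 0.4285) results derived in the text.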

Next is a simple gamma correction. Gamma correction adjusts the luminance of pixels so that they are either brighter or darker. In the case of my shader, increasing the gamma value makes the image brighter and decreasing it makes the image darker. Usually the exponent is gamma itself rather than 1.0/gamma, but with that convention increasing gamma darkens the image and decreasing it lightens it, which just didn't feel natural to me, so I reversed it.

The levels output range uses the color value, adjusted from the levels input range and gamma, to linearly interpolate between the minimum and maximum outputs. The GLSL mix function does this for us:

color = mix(vec3(minOutput), vec3(maxOutput), color);

The mix function itself performs a componentwise linear interpolation:

mix(x, y, a) = x * (1.0 - a) + y * a

If we substitute a value for color of (0.1428, 0.1428, 0.1428) and use the minimum and maximum outputs stated above:

color = (0.2, 0.2, 0.2) * (1.0 - 0.1428) + (0.9, 0.9, 0.9) * 0.1428
         = (0.2, 0.2, 0.2) * 0.8572 + (0.12852, 0.12852, 0.12852)
         = (0.17144, 0.17144, 0.17144) + (0.12852, 0.12852, 0.12852)
         = vec3(0.29996)

Below are screenshots of an image with my shader effect being applied, and one without any effects.




Sunday, March 3, 2013

Week 3 - Bloom Shader

This week we talked about the bloom effect.

Bloom is used to reproduce the real-world effect of bright light causing a sort of over-exposure.

Looked out the window and God's graphics guy applied a bloom.
The idea is very simple: make everything appear more vibrant and pretty. Luckily the implementation is just as simple as the concept. It only involves 3 steps:

1. Extract highlights from image
2. Blur the extracted highlights
3. Composite the blurred highlights back onto the original image

The first step before any of the above, in terms of programming, is to render your scene to a frame buffer object (FBO). The reason we render the scene to an FBO is because bloom is a post-processing effect; it is applied after the scene geometry has been rendered each frame. Once the scene has been rendered to an FBO we can apply our effects.

The first step is a simple pass that keeps only the brightest colors in the image, mainly the highlights, leaving the darker colors out of it. This pass is also rendered to an FBO for further post-processing. The second step is to blur the highlights. This step requires creating a convolution kernel. There are two ways we can do this: using a box filter or a Gaussian filter. In a box filter each source pixel is weighted equally, meaning all the values in the convolution matrix are the same. In a Gaussian filter the highest weight is in the center of the matrix. Generally the Gaussian filter produces better results due to its smooth distribution.
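The three passes can be sketched end-to-end in Python on a tiny grayscale image (a box filter is used for the blur step; the threshold, image, and additive composite are all hypothetical choices):

```python
def extract_highlights(img, threshold=0.7):
    """Pass 1: keep bright pixels, zero the rest."""
    return [[p if p >= threshold else 0.0 for p in row] for row in img]

def box_blur(img):
    """Pass 2: 3x3 box filter with equal weights, clamped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for j in (-1, 0, 1):
                for i in (-1, 0, 1):
                    total += img[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
            out[y][x] = total / 9.0
    return out

def bloom(img):
    """Pass 3: composite the blurred highlights onto the original image."""
    blurred = box_blur(extract_highlights(img))
    return [[min(p + b, 1.0) for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

# A single bright pixel in a dim scene.
scene = [[0.2, 0.2, 0.2], [0.2, 1.0, 0.2], [0.2, 0.2, 0.2]]
result = bloom(scene)
```

The bright center pixel bleeds into its dim neighbours after the blur, which is exactly the glow that bloom is meant to produce.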

The final step is to combine the original image with the bright blurred image. Below is the effect I achieved in the shader itself.