GPU Programming for Video Games
Homework #2: Now You Are Thinking with Shaders
Due: Friday, June 20 at 23:59:59 (via T-square)
The homework will be graded out of 80 points. If you are unable to
finish all the problems in time, just turn in what you have done for
partial credit. This
homework will guide you through a series of tasks that explore various
creative possibilities of shader programming. Each task will involve modifying
code from the “GPUXXIntroShaders”
demo on the class website (be sure you get the June 14 version).
For each problem, you will want to create
a duplicate of the shader you are modifying so you still have the original available.
The problems are not meant to be cumulative; i.e. if two problems ask you to
add some effect to some shader, start with a fresh copy of
the original shader for each problem so you can study each effect individually.
Use a different free model downloaded from the Unity Asset Store for
each problem. Be sure you choose models that have “base” (aka “diffuse”)
textures (this will probably be most of the available models).
As you work through the problems, paste your code and screenshots into
a single Microsoft Word or PDF document.
You only need to provide
the parts of code that you modify; i.e. you don’t necessarily have to paste
in an entire function. Include enough information so
that I can tell what part of the program the lines
of code you show are
included in, perhaps including a few lines of code
after to provide context.
You may use something that isn’t
Microsoft Word as long as it can output PDF files.
1) Modify “GPUXXSpecmapVertexLit.shader”
and “GPUXXSpecmapPixelLit.shader” to
employ a simple form of Gooch
shading. Gooch shading attempts to mimic some
techniques that artists use,
particularly in technical illustrations, where
the goal is to convey information, and not necessarily be “photorealistic.”
Instead of the usual diffuse lighting,
select the vertex color
(in the case of “VertexLit”) or pixel color (in the case of “PixelLit”)
by linearly interpolating between blue (0,0,1)
and yellow (1,1,0) according to the formula (1+L.N)/2, where L is the light
direction, N is the surface normal, and the period indicates a dot product.
Implement the interpolation using the lerp
function. Note this formula varies between 0 and 1; 0 should give blue,
1 should give yellow. Note that unlike with usual diffuse shading, surfaces
will be visible (as blue) even if no light impacts them. Please deactivate
(however you wish) the specular and ambient components of the lighting.
Note we will also ignore the textures in this problem;
you may delete or comment out the
tex2D lines, however you see fit.
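As a rough sketch, the pixel-lit version might look something like the following. The identifier names (lightDir, input.normal, and the v2f member layout) are assumptions; match them to the demo's actual variables.

```hlsl
// Sketch only -- lightDir and input.normal are assumed names,
// not necessarily what the demo uses.
float4 frag(v2f input) : COLOR
{
    float3 N = normalize(input.normal);   // interpolated surface normal
    float3 L = normalize(lightDir);       // direction toward the light
    float t = (1 + dot(L, N)) / 2;        // maps [-1, 1] into [0, 1]
    float3 cool = float3(0, 0, 1);        // blue when t = 0
    float3 warm = float3(1, 1, 0);        // yellow when t = 1
    // No ambient, specular, or texture terms -- just the Gooch ramp.
    return float4(lerp(cool, warm, t), 1);
}
```

The vertex-lit version computes the same lerp in the vertex shader and lets the hardware interpolate the resulting colors.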
(If you dig into
Gooch’s paper, you will see that what I have described and asked you to
implement is the simplest and most jarring form of Gooch shading; the paper
discusses combining this with the usual diffuse shading term and other
interesting tricks. You need not
do any of that here, but you may find it interesting reading).
Take a screenshot
that shows two instances of your object at the same orientation, one using
your vertex-lit Gooch shader and one using your pixel-lit Gooch shader.
Ungraded, optional experiment: Check out Section 8.4.4 of the course textbook,
which covers half-Lambertian shading; this is used extensively on face textures
in games like
Half-Life 2, Portal, and Left 4 Dead. It’s a simple way to avoid less-lit
parts of models looking too dark without having to crank up the ambient
light. Modify the “GPUXXSpecmapVertexLit.shader”
and “GPUXXSpecmapPixelLit.shader” programs to use half-Lambert diffuse shading
instead of the standard Lambert diffuse shading. Leave the specular component as-is.
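As a sketch, the change to the diffuse term (using N and L as in the earlier formula) is simply:

```hlsl
// Standard Lambert diffuse:
//     float diffuse = max(dot(N, L), 0);
// Half-Lambert: scale and bias the dot product into [0, 1], then
// square it, so back-facing surfaces fade gently instead of
// clipping to black.
float halfLambert = dot(N, L) * 0.5 + 0.5;
float diffuse = halfLambert * halfLambert;
```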
2) Modify “GPUXXTexturedTileCorrectly.shader” to implement a simple form
of “depth cueing.”
Like Gooch shading, depth cueing is a nonphotorealistic technique that
may help CAD designers when, for instance, viewing complex wireframe models
(although here we’re not using wireframe models). Here, we will make
parts of the cube that are further away appear darker than those that
are close. Instead of just
painting the texture color found from the tex2D lookup,
multiply it by a depthFactor originally
calculated in the vertex shader by a line like:
output.depthFactor = max(0,min(1,(e-output.sv.z)/(e-s)));
where s and e
represent the starting and ending coordinates of
the depth cueing effect (i.e. anything beyond e is not seen, anything
closer than s looks the same).
Note my use of the word “originally” above — we want to do this computation
in the vertex shader, and then have the hardware automatically interpolate
these depth factors for us and pass these interpolated values on to the pixel
shader. So you’ll need
to add another line to the v2f
structure that “repurposes” a register such as
TEXCOORD1 to pass this information along.
Normally e and s would be passed in from
the main application through variables, but to keep things simple,
you can “hard code” these into your shader code.
Experiment with values of e and s for your particular scene to get
a good illustration of the depth cueing effect.
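Putting the pieces together, a sketch might look like this. The hard-coded s and e values and the _MainTex sampler name are assumptions; tune them for your own scene and match them to the demo's identifiers.

```hlsl
// In the v2f structure, repurpose TEXCOORD1 to carry the factor:
//     float depthFactor : TEXCOORD1;

// Vertex shader, after output.sv has been computed
// (s and e hard-coded; values are guesses to experiment with):
float s = 2.0;    // anything closer than s is unchanged
float e = 10.0;   // anything beyond e is fully darkened
output.depthFactor = max(0, min(1, (e - output.sv.z) / (e - s)));

// Pixel shader: darken the sampled texture color by the
// hardware-interpolated factor.
float4 texColor = tex2D(_MainTex, input.tc);
return texColor * input.depthFactor;
```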
Take a screenshot that clearly shows the depth cueing effect.
3) Modify one of the GPUXX shaders that uses a “base” (aka “diffuse”)
texture (I don’t care which,
as long as it uses the v2f structure)
to make the base texture wave in a strange
way. Instead of just using input.tc
in the texture lookup in the pixel shader,
you should first add a time-varying function of the form
A*sin(B*input.tc.y)*sin(C*_Time.x) to the x coordinate of input.tc, and a
function of the form D*sin(E*input.tc.x)*sin(F*_Time.x) to the
y coordinate of input.tc. Be sure to use += and not =, so that you add
and do not overwrite.
Unity sets the uniform _Time variable for you; the different components
hold _Time with different scale factors, but here, we will multiply by
our own scale factor.
The deformation in x is determined by the y coordinate, and the deformation
in y is determined by the x coordinate.
A and D control the amount of deformation,
B and E sort of set the distance
between “deformation valleys”, and
C and F control the speed. Pick A to be
different than D, B to be different than E, and C to be different than F.
(These parameters can be “hard coded” into your shader code.)
You will need to experiment a lot with those settings
to get something interesting and useful.
When you run it, you should see the picture morph in strange oscillatory
ways. Experiment with various values of the parameters.
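A sketch of the pixel-shader change follows. The parameter values are arbitrary starting points, and _MainTex is an assumed sampler name; adapt both to the shader you chose.

```hlsl
// Hard-coded parameters; A, B, C deform x and D, E, F deform y.
// Values are guesses -- experiment until the effect looks good.
float A = 0.05, B = 20.0, C = 30.0;
float D = 0.08, E = 15.0, F = 25.0;
float2 tc = input.tc;
tc.x += A * sin(B * input.tc.y) * sin(C * _Time.x);  // x offset driven by y
tc.y += D * sin(E * input.tc.x) * sin(F * _Time.x);  // y offset driven by x
float4 color = tex2D(_MainTex, tc);
```

Offsetting a copy of input.tc with += (rather than assigning fresh values) preserves the original coordinates so the deformation is added on top of the normal texture mapping.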
Take a screenshot that illustrates the wavy effect.
Ungraded, optional experiment: Modify
“GPUXXSpecNormMap.shader” to deform the normal map texture as well as
the base texture. See what it looks like when you deform one but not the
other.
4) Modify one of the shader programs that uses vertex normals
so that the object pulses like a
creepy Stay Puft marshmallow.
OK, what in the world do I mean by that? Here,
instead of varying the texture coordinates with time,
we will vary the vertex positions.
Before the line that performs
the UNITY_MATRIX_MVP transformation,
put in a line like this:
input.v.xyz += A * input.n * (1 + sin(B * _Time.x));
This will make each vertex bulge out along the direction of its normal in a
creepy pulsating Stay Puft marshmallow sort of way. Experiment with
A and B (you can “hard code” these) until you get something that you think looks
awesome. (As an aside, it’s also fun to drop the “1+” in the equation above,
which will cause the model to kind of implode in on itself during half of the sine cycle.)
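A sketch of the vertex-shader change, where input.v, input.n, and output.sv are assumed member names and the A and B values are starting guesses:

```hlsl
// Hard-coded parameters -- tune until the pulsing looks right.
float A = 0.1;    // bulge amount
float B = 40.0;   // pulse speed
// Push each vertex outward along its normal before transforming:
input.v.xyz += A * input.n * (1 + sin(B * _Time.x));
output.sv = mul(UNITY_MATRIX_MVP, input.v);
```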
Take a screenshot of the awesome marshmallowiness.
Deliverables: Assemble your shader code and screenshots in an orderly
fashion into a single Microsoft
Word document or PDF file (using whatever tools you wish to create a PDF).
For each problem above, give the code, then give the screenshot. You only
need to show code for shaders that you modified; i.e. if you had to modify
the vertex shader but not the fragment shader, you need only show the vertex
shader (and vice-versa).
Include “HW2” and as much
as possible of your full name in the
filename, e.g., HW2_Aaron_Lanterman.doc. (The upload
procedure should be reasonably self explanatory once
you log in to T-square.)
Be sure to finish sufficiently in
advance of the deadline that you will
be able to work around any troubles
T-square gives you to successfully
submit before the deadline. If you have
trouble getting T-square to work,
please e-mail your compressed
file to Prof. Lanterman at email@example.com,
with “GPU HW #2” and your full
name in the header line;
use this e-mail submission as a last resort if T-square isn’t working.
You are welcome to discuss
high-level implementation issues
with your fellow students,
but you should avoid actually looking at
another student’s
code as a whole, and
under no circumstances should you be copying
any portion of another student’s code.
However, asking another student to
focus on a few lines of your code to discuss
why you are getting a particular
kind of error is reasonable.
Basically, these “ground rules” are
intended to prevent a student from
“freeloading” off another student,
even accidentally, since they
won’t get the full yummy nutritional
educational goodness out of the assignment if they do.
Using code from homeworks done in previous years is strictly prohibited.