Lighting Without Normals
Blinn-Phong lighting requires normals. Could we shade a model that doesn't have normals? In a sense, we can. We do need normals, but we can compute them on the fly instead of having them pre-defined and loaded as vertex attributes. Why should we bother shading a model without normals? Maybe VRAM is full. Maybe we are working up a quick and dirty prototype. Maybe we are intellectually curious.
Recall that we calculated normals ourselves on the CPU by taking the cross product of two vectors tangent to a triangle. That same math can be performed in the fragment shader. We know the current fragment's eye space position. If we could figure out the eye space position of the fragment to the right, we could subtract to get a tangent vector:
vec3 right = neighbor.position - position;
The right vector tells us how the fragments' x-, y-, and z-coordinates change as we move rightward on the screen. Sadly, we can't just reach over to a neighbor fragment and read its variables. However, there is a GLSL function that can: dFdx. That name smells of calculus. Indeed, the function computes the derivative of an arbitrary expression with respect to x. In other words, it tells us how the expression changes as x changes.
The expression whose derivative we care about is the eye space position. We compute both tangent vectors and cross them with this code:
vec3 right = dFdx(position);
vec3 up = dFdy(position);
vec3 normal = normalize(cross(right, up));
The function dFdy gives us the derivative with respect to y. Crossing two vectors that skim a surface produces the normal. From there, the rest of the lighting is performed just as with pre-defined normals.
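To see the whole technique in one place, here is a minimal fragment shader sketch that derives the normal and applies diffuse lighting. The names position, lightEyePosition, and albedo are placeholder assumptions, not a fixed API; adapt them to your own renderer's vertex shader outputs and uniforms.

#version 330

// Eye space position, interpolated from the vertex shader.
in vec3 position;

// Placeholder uniforms for a point light and a matte surface.
uniform vec3 lightEyePosition;
uniform vec3 albedo;

out vec4 fragmentColor;

void main() {
  // Tangent vectors along the screen's x- and y-axes.
  vec3 right = dFdx(position);
  vec3 up = dFdy(position);

  // Two vectors that skim the surface cross into its normal.
  vec3 normal = normalize(cross(right, up));

  // Diffuse lighting, exactly as with pre-defined normals.
  vec3 lightDirection = normalize(lightEyePosition - position);
  float litness = max(0.0, dot(normal, lightDirection));

  fragmentColor = vec4(litness * albedo, 1.0);
}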
This renderer uses dFdx and dFdy to compute the normals on a torus:
Observe that the faces appear discrete. Normals are not smoothed with this method. Because the eye space position varies linearly across each triangle, its derivatives are constant across that triangle, so every fragment of a face receives the same normal and the shading is flat.
Summary
Humans rely on lighting cues to interpret shape, and we must illuminate the surfaces in our renderers. We decompose a surface's illumination into three components: a diffuse term representing how much light the surface scatters broadly, a specular term representing how much light the surface reflects in a shiny highlight, and an ambient term representing how much indirect light a surface reflects. The amount of color reflected depends on several properties of the light—including color and position—but also several properties of the surface—including the albedo, shininess, and normal. Light sources can be point lights, directional lights, or spotlights. If a scene has multiple light sources, their illumination adds together to make a surface brighter. If backfaces are visible, their normals must be flipped. Should a surface not have normals, we may measure its orientation using derivative functions in the fragment shader.
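As a compact recap of that decomposition, the following sketch sums the three terms for a single point light in eye space. The names normal, position, lightEyePosition, lightColor, albedo, shininess, and ambientFactor are placeholders standing in for whatever your shader defines; this illustrates the sum rather than prescribing an implementation.

// Directions from the fragment to the light and to the eye
// (in eye space, the eye sits at the origin).
vec3 lightDirection = normalize(lightEyePosition - position);
vec3 eyeDirection = normalize(-position);

// Blinn-Phong's half vector, used for the specular highlight.
vec3 halfDirection = normalize(lightDirection + eyeDirection);

// Diffuse: light scattered broadly, scaled by the surface's orientation.
float diffuseIntensity = max(0.0, dot(normal, lightDirection));

// Specular: a shiny highlight that tightens as shininess grows.
float specularIntensity = pow(max(0.0, dot(normal, halfDirection)), shininess);

// Ambient: a rough stand-in for indirect light.
vec3 rgb = ambientFactor * albedo
         + diffuseIntensity * albedo * lightColor
         + specularIntensity * lightColor;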