Diffuse Term

How to 3D

Chapter 6: Lighting

To shade a fragment, we must know not only the fragment's normal, but also the direction to the light source:

The angle \(a\) between these two vectors gives us a measure of the “litness” of the surface, a value in [0, 1]. When the vectors form an angle of 0 degrees, they point in the same direction, and the fragment is fully illuminated. When the angle between these two vectors is 90 degrees or more, the fragment receives no light.

These facts establish two endpoints of the litness map:

We must also figure out the litness between these two endpoints. Perhaps a straight line would do. Or we could turn to the work of physicist Johann Heinrich Lambert, who found that the cosine function gives a reasonable measure of illumination for a certain class of surfaces:

When the angle exceeds 90 degrees, the cosine goes negative. We don't want litness to be negative in our renderers. When a surface faces away from a light source, its litness must be clamped to 0:

Hence we have the following definition of litness:

$$\mathrm{litness} = \mathrm{max}(0, \cos a)$$
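To see this formula in action, here's a minimal Python sketch (not part of the renderer) that evaluates litness at a few angles:

```python
import math

def litness(angle_degrees):
    # Clamp the cosine so back-facing surfaces receive no light.
    return max(0.0, math.cos(math.radians(angle_degrees)))

print(litness(0))    # 1.0, fully lit
print(litness(60))   # ≈ 0.5, partially lit
print(litness(90))   # ≈ 0.0 (floating-point noise near zero)
print(litness(135))  # 0.0, clamped because the surface faces away
```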

Compared to arithmetic, trigonometric functions like cos are expensive to calculate. Since the GPU will be processing a lot of fragments, we'd like to find a way to compute the cosine very quickly. Good news: when two vectors have unit length, the cosine of the angle between them is just their dot product.
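We can check this claim numerically. This Python sketch builds two unit vectors at known angles and confirms that their dot product equals the cosine of the angle between them:

```python
import math

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

# Two unit vectors at known angles from the x-axis.
a1, a2 = math.radians(75), math.radians(20)
p = (math.cos(a1), math.sin(a1))
q = (math.cos(a2), math.sin(a2))

# The dot product matches the cosine of the 55-degree angle between them.
print(dot(p, q))
print(math.cos(a1 - a2))
```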

You may want to know why the dot product gives the cosine. Consider the dot product between two 2-vectors \(\mathbf{p}\) and \(\mathbf{q}\):

$$\mathbf{p} \cdot \mathbf{q} = p_x \times q_x + p_y \times q_y$$

Recall that vectors can be represented in polar coordinates by their radius and angle:

$$\begin{aligned} \mathbf{p} &= (p_r, p_a) \\ \mathbf{q} &= (q_r, q_a) \end{aligned}$$

The polar coordinates are turned into Cartesian coordinates using sine and cosine:

$$\begin{aligned} p_x &= p_r \cos p_a \\ p_y &= p_r \sin p_a \\ q_x &= q_r \cos q_a \\ q_y &= q_r \sin q_a \\ \end{aligned}$$

The angle between \(\mathbf{p}\) and \(\mathbf{q}\) is the difference between their angle components: \(p_a - q_a\). We want the cosine of this angle to determine the fragment's shading. Starting from the dot product, we work our way back to the cosine by substituting in the values above and applying a trigonometric identity:

$$\begin{aligned} \mathbf{p} \cdot \mathbf{q} &= p_x \times q_x + p_y \times q_y \\ &= p_r \cos p_a \times q_r \cos q_a + p_r \sin p_a \times q_r \sin q_a \\ &= p_r q_r (\cos p_a \times \cos q_a + \sin p_a \times \sin q_a) \\ &= p_r q_r \cos\left(p_a - q_a\right) \\ \end{aligned}$$
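This derivation can be spot-checked numerically. The sketch below (arbitrary radii and angles) builds two non-unit vectors from polar coordinates and confirms that the dot product agrees with \(p_r q_r \cos(p_a - q_a)\):

```python
import math

# Build two 2-vectors from polar coordinates (radius, angle).
p_r, p_a = 3.0, math.radians(100)
q_r, q_a = 0.5, math.radians(40)
p = (p_r * math.cos(p_a), p_r * math.sin(p_a))
q = (q_r * math.cos(q_a), q_r * math.sin(q_a))

dot_pq = p[0] * q[0] + p[1] * q[1]
identity = p_r * q_r * math.cos(p_a - q_a)
print(dot_pq, identity)  # the two values agree
```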

When a vector has unit length, its radius is 1, which allows us to simplify even further:

$$\begin{aligned} \mathbf{p} \cdot \mathbf{q} &= p_r q_r \cos\left(p_a - q_a\right) \\ &= 1 \times 1 \times \cos\left(p_a - q_a\right) \\ &= \cos\left(p_a - q_a\right) \\ \end{aligned}$$

This simplification is why normals and other vectors used for shading are expected to have unit length. If that's the case, litness may be expressed as a fast dot product:

$$\mathrm{litness} = \mathrm{max}(0, \mathrm{normal} \cdot \mathrm{lightDirection})$$

The light direction is a unit vector leading from the fragment to the light source. To compute this direction, we need to know the positions of the fragment and the light source. The light source position is something we send in as a uniform or hardcode in the shader. The fragment position comes from the vertex shader.
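The subtract-and-normalize step looks like this in a Python sketch (the positions are made-up values, not from the renderer):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Hypothetical positions: a light overhead and a fragment off to the side.
light_position = (0.0, 10.0, 0.0)
fragment_position = (3.0, 2.0, 6.0)

# Unit vector pointing from the fragment toward the light.
light_direction = normalize(tuple(l - f for l, f in zip(light_position, fragment_position)))
print(light_direction)
```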

To hardcode the light 10 units up from the origin and to receive the interpolated fragment position, we write these lines in the fragment shader:

const vec3 lightPosition = vec3(0.0, 10.0, 0.0);
in vec3 mixPosition;

In main, we compute the light direction by subtracting the fragment's position from the light's position and normalizing the result:

void main() {
  vec3 lightDirection = normalize(lightPosition - mixPosition);
  // ...
}

Next we throw in the normal from the vertex shader and compute the litness:

const vec3 lightPosition = vec3(0.0, 10.0, 0.0);
in vec3 mixPosition;
in vec3 mixNormal;
out vec4 fragmentColor;

void main() {
  vec3 lightDirection = normalize(lightPosition - mixPosition);
  // Interpolation may have shrunk the normal, so renormalize it.
  vec3 normal = normalize(mixNormal);
  float litness = max(0.0, dot(normal, lightDirection));
  fragmentColor = vec4(vec3(litness), 1.0);
}
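For reference, here is the same computation mirrored in Python (a sketch with hypothetical fragment values, not shader code):

```python
import math

lightPosition = (0.0, 10.0, 0.0)

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def shade(mixPosition, mixNormal):
    # Mirror the fragment shader: form the light direction,
    # renormalize the normal, and clamp the cosine to [0, 1].
    lightDirection = normalize(tuple(l - p for l, p in zip(lightPosition, mixPosition)))
    normal = normalize(mixNormal)
    return max(0.0, dot(normal, lightDirection))

# A fragment at the origin with an upward normal faces the light head-on.
print(shade((0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))   # 1.0
# Flip the normal and the fragment faces away: litness clamps to 0.
print(shade((0.0, 0.0, 0.0), (0.0, -1.0, 0.0)))  # 0.0
```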

The result is a grayscale shading of your surface, as seen in this torus renderer:

Lambert's model provides a reasonable approximation of how light behaves when it reflects off a surface with a rough or matte finish. You see no glossy highlights like you'd encounter on a very smooth surface. Instead, light bounces off in all directions, and light that does so is called diffuse light. To get other kinds of reflective behavior, we will soon add additional terms onto this diffuse foundation.

If you rotate just the torus in this renderer, you should observe a problem. The shading sticks to the surface instead of adapting to the new orientation. The problem is that this renderer isn't taking the spaces of the graphics pipeline into account.
