Eye Space Lighting


When this renderer first loads, the shading indicates that the light source is positioned above the torus:

However, when you rotate just the torus so that it flips over, the shading suggests that the light source is below the torus, even though the light source's yellow orb hasn't moved.

The problem is these lines in the fragment shader:

const vec3 lightPosition = vec3(0.0, 10.0, 0.0);
in vec3 mixPosition;

What space are these in? Clip space? Eye space? World space? Model space? Are they even in the same space? When we don't consider the spaces of our coordinates, we end up with strange behaviors.

The vertex shader in this renderer implicitly determines the space:

in vec3 position;
in vec3 normal;
out vec3 mixPosition;
out vec3 mixNormal;

void main() {
  // ...
  mixPosition = position;
  mixNormal = normal;
}

Note that there are no transformations in the assignments to mixPosition and mixNormal. Since position and normal are model space values, the interpolated values are also in model space.

The fragment shader has this line:

vec3 lightDirection = normalize(lightPosition - mixPosition);

The subtraction implies that lightPosition is also in model space. Since lighting is being performed in the untransformed model space, rotating the torus has no effect on the shading. The model space shading sticks to the torus as it is transformed into its world space orientation.
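
To see the consequence concretely, suppose the torus starts with a fragment near its top whose model space position is close to the origin and whose model space normal is \(\begin{bmatrix}0&1&0\end{bmatrix}\). With lightPosition at \(\begin{bmatrix}0&10&0\end{bmatrix}\), the model space light direction is approximately \(\begin{bmatrix}0&1&0\end{bmatrix}\), the dot product is 1, and the fragment is fully lit. Flip the torus over with worldFromModel and that fragment now faces away from the light in world space, yet the shader still evaluates the same dot product on the same untransformed values, so it remains fully lit.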

Rarely do we want to perform lighting in model space. Light sources are usually defined in either world space or eye space. Lamp posts, wall-mounted torches, and other fixtures are typically defined in world space because they are anchored to some position in the world. Flashlights in the viewer's hands are typically defined in eye space because they move around with the viewer.

Whether our lights are defined in world space or eye space, shading tends to be performed in eye space. This is because some of the lighting terms we're about to discuss involve the position of the eye. In eye space, the position of the eye is predictably at \(\begin{bmatrix}0&0&0\end{bmatrix}\).
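
For example, any lighting term that needs the direction from a surface point back toward the eye can compute it from the eye space position \(p\) with nothing more than a negation and a normalization, because the eye is the origin: \(\operatorname{normalize}(\mathbf{0} - p) = \operatorname{normalize}(-p)\).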

Therefore, the fragment position and normal must both be in eye space. This is done in the vertex shader:

uniform mat4 eyeFromWorld;
uniform mat4 worldFromModel;
in vec3 position;
in vec3 normal;
out vec3 mixPositionEye;
out vec3 mixNormalEye;

void main() {
  // ...
  mixPositionEye = (eyeFromWorld * worldFromModel * vec4(position, 1.0)).xyz;
  mixNormalEye = (eyeFromWorld * worldFromModel * vec4(normal, 0.0)).xyz;
}

The suffix Eye has been appended to the identifiers to explicitly indicate their space. Note that the homogeneous coordinate for the normal is 0 instead of 1. The homogeneous coordinate was added to make translation work. Vectors are mere directions; they do not translate.
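
You can verify this by multiplying a translation matrix against both kinds of coordinates. The 1 in a position's homogeneous coordinate picks up the translation column, while the 0 in a vector's homogeneous coordinate discards it:

\[
\begin{bmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{bmatrix}\begin{bmatrix}p_x\\p_y\\p_z\\1\end{bmatrix} = \begin{bmatrix}p_x+t_x\\p_y+t_y\\p_z+t_z\\1\end{bmatrix}
\qquad
\begin{bmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{bmatrix}\begin{bmatrix}n_x\\n_y\\n_z\\0\end{bmatrix} = \begin{bmatrix}n_x\\n_y\\n_z\\0\end{bmatrix}
\]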

The light position must also be in eye space. This fragment shader is almost identical to the one that performed lighting in model space, except lightPosition has been renamed lightPositionEye and turned into a uniform:

uniform vec3 lightPositionEye;
in vec3 mixPositionEye;
in vec3 mixNormalEye;
out vec4 fragmentColor;

void main() {
  vec3 lightDirection = normalize(lightPositionEye - mixPositionEye);
  vec3 normal = normalize(mixNormalEye); 
  float litness = max(0.0, dot(normal, lightDirection));
  fragmentColor = vec4(vec3(litness), 1.0);
}

In this renderer, the light source is fixed at \(\begin{bmatrix}2&2&8\end{bmatrix}\) in world space:

Its world space position is converted to eye space just once by the CPU and uploaded as a uniform with this TypeScript code:

const lightPositionWorld = new Vector3(2, 2, 8);
// The light is defined in world space, so only eyeFromWorld is needed.
const lightPositionEye = eyeFromWorld
  .multiplyVector(lightPositionWorld.toVector4(1));
shader.setUniform3f('lightPositionEye', lightPositionEye.x, lightPositionEye.y, lightPositionEye.z);
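
The conversion could instead be left to the GPU by uploading the world space position along with eyeFromWorld and converting in the fragment shader, roughly as sketched below. The lightPositionWorld uniform here is hypothetical; this renderer uploads the eye space position instead, which avoids repeating the same matrix-vector product for every fragment.

uniform mat4 eyeFromWorld;
uniform vec3 lightPositionWorld; // hypothetical uniform, not part of this renderer
in vec3 mixPositionEye;

void main() {
  // ...
  vec3 lightPositionEye = (eyeFromWorld * vec4(lightPositionWorld, 1.0)).xyz;
  vec3 lightDirection = normalize(lightPositionEye - mixPositionEye);
  // ...
}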

As you rotate the torus, only the faces pointing toward the light source are illuminated. This is the behavior we want.
