Projective Texturing
Imagine a bat signal has been turned on somewhere in Gotham City. Light bursts forth from a lamp. Some of the light hits a filter in the shape of a bat and proceeds no further. Other light escapes into the cityscape, landing on nearby buildings, clouds, trees, and so on.
Use your mouse to broadcast the signal in different directions in this renderer:
How do you think the renderer is doing this?
The bat signal is a texture that is being projected onto the scene. Pretend you are holding a flashlight whose lens is covered by a shaped filter. When the light lands on a nearby surface, the image is small. As the light lands on surfaces farther away, the image gets bigger, just as it does with a digital projector. When the projected texture lands on a fragment in the scene, it contributes its color to that fragment.
Given the way graphics cards work, we don't actively project the texture. Rather, we figure out how the vertices and fragments receive it. Each vertex must be assigned texture coordinates that locate it within the texture. Since the texture moves around, the texture coordinates cannot possibly be computed statically and stored in a VBO. Instead, the texture coordinates are determined dynamically in the vertex shader.
Somehow we must find where a vertex lands on the projected image. Good news. We've done this before. We performed a very similar operation when trying to figure out where a vertex lands on the image plane in a perspective projection.
Back then, we moved the vertex from model space into the larger world, and then from the world into a space where the eye was at the origin, and then from eye space into the normalized unit cube that WebGL expects. The end result was a set of coordinates that positioned the vertex on the image plane. This is the matrix gauntlet that carried us through these spaces:
clipPosition = clipFromEye * eyeFromWorld * worldFromModel *
vec4(position, 1.0);
In projective texturing, we treat the light source exactly like an eye. But instead of going into eye space where the eye is at the origin, we go into light space where the light is at the origin. The modified gauntlet looks like this:
texPosition = clipFromLight * lightFromWorld * worldFromModel *
vec4(position, 1.0);
The lightFromWorld matrix is constructed with the aid of a FirstPersonCamera instance. The clipFromLight matrix is a perspective matrix that shapes the aperture of the spotlight.
This gauntlet lands us in the [-1, 1] interval of the unit cube, but texture coordinates live in the [0, 1] interval. So, we need to prepend a couple of extra matrices that do the range-mapping. Scaling by 0.5 squeezes [-1, 1] down to [-0.5, 0.5], and translating by 0.5 shifts that up to [0, 1]:
texPosition = Matrix4.translate(0.5, 0.5, 0) *
Matrix4.scale(0.5, 0.5, 1) *
clipFromLight * lightFromWorld * worldFromModel *
vec4(position, 1.0);
That's a lot of matrices to multiply for every vertex. We can avoid this per-vertex cost by multiplying the matrices together just once in TypeScript. Since TypeScript doesn't allow us to overload builtin operators like *, multiplying five matrices together is ugly. However, if we put all the matrices in an array, we can use the reduce function to iterate through them and accumulate their product:
const lightCamera = FirstPersonCamera.lookAt(
  lightPosition,
  lightTarget,
  new Vector3(0, 1, 0)
);

const matrices = [
  // These two matrices remap [-1, 1] clip coordinates to [0, 1] texture coordinates.
  Matrix4.translate(0.5, 0.5, 0),
  Matrix4.scale(0.5, 0.5, 1),
  // clipFromLight: the perspective matrix that shapes the spotlight's aperture
  Matrix4.perspective(45, 1, 0.1, 1000),
  // lightFromWorld: moves vertices from world space into the light's space
  lightCamera.matrix,
  worldFromModel,
];

let textureFromModel = matrices.reduce((accum, matrix) =>
  accum.multiplyMatrix(matrix)
);
Like our other matrices, this combined matrix must be uploaded as a uniform:
shader.setUniformMatrix4('textureFromModel', textureFromModel);
The vertex shader receives the matrix and transforms the vertex position into the texture coordinates that locate the vertex on the projected texture:
uniform mat4 textureFromModel;
in vec3 position;
out vec4 mixTexPosition;

void main() {
  // ...
  mixTexPosition = textureFromModel * vec4(position, 1.0);
}
Note that the texture coordinates are a vec4. The coordinates are in clip space, which means they haven't yet been divided by their w-component. We perform the perspective divide in the fragment shader and then look up the projected color like we look up a color from any texture:
uniform sampler2D signal;
in vec4 mixTexPosition;

out vec4 fragmentColor;

void main() {
  vec2 texPosition = mixTexPosition.xy / mixTexPosition.w;
  vec3 signalColor = texture(signal, texPosition).rgb;
  fragmentColor = vec4(signalColor, 1.0);
}
Alternatively, GLSL provides a lookup function that will perform the perspective divide for us. It's called textureProj:
vec3 signalColor = textureProj(signal, mixTexPosition).rgb;
This example code computes the color using only the projected texture and doesn't perform any other lighting. In the bat signal renderer, the projected color is added onto a darkened diffuse term.
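To give a sense of how that combination might look, here is a minimal sketch of a fragment shader that adds the projected color onto a darkened diffuse term. The mixNormal and mixLightDirection varyings, the white albedo, and the 0.2 darkening factor are illustrative assumptions, not the renderer's actual code:

uniform sampler2D signal;
in vec4 mixTexPosition;
in vec3 mixNormal;          // assumed: surface normal interpolated from the vertex shader
in vec3 mixLightDirection;  // assumed: direction from the fragment toward the light

out vec4 fragmentColor;

void main() {
  // A dim diffuse term keeps the unlit cityscape visible.
  float litness = max(dot(normalize(mixNormal), normalize(mixLightDirection)), 0.0);
  vec3 diffuse = 0.2 * litness * vec3(1.0);  // white albedo, scaled down so the signal stands out

  // The projected signal is added on top of the darkened base.
  vec3 signalColor = textureProj(signal, mixTexPosition).rgb;

  fragmentColor = vec4(diffuse + signalColor, 1.0);
}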
We can't see it in the bat signal renderer, but if we were to look behind the light source, we would find a second instance of the projected texture. The projection works in both directions, and in the backward projection, the w-component is negative. We cancel out the unwanted second instance with a conditional expression:
vec3 signalColor = mixTexPosition.w > 0.0
  ? textureProj(signal, mixTexPosition).rgb
  : vec3(0.0);
We may not have many occasions where we need to project an image onto a scene. However, projective texturing is commonly used to add shadows, as we'll soon learn.