Projective Texturing
Imagine a bat signal has been turned on somewhere in Gotham City. Light bursts forth from a lamp. Some of the light hits a filter in the shape of a bat and proceeds no further. Other light escapes into the cityscape, landing on nearby buildings, clouds, trees, and so on.
Use your mouse to broadcast the signal in different directions in this renderer:
How do you think the renderer is doing this?
The bat signal is a texture that is being projected onto the scene. Pretend you are holding a flashlight whose lens is covered by a shaped filter. When the light lands on a nearby surface, the image is small. As the light lands on surfaces farther away, the image gets bigger, just as it does with a digital projector. When the projected texture lands on a fragment in the scene, it contributes its color to that fragment.
Given the way graphics cards work, we don't actively project the texture. Rather, we figure out how the vertices and fragments receive it. Each vertex must be assigned texture coordinates that locate it within the texture. Since the texture moves around, the texture coordinates cannot possibly be computed statically and stored in a VBO. Instead, the texture coordinates are determined dynamically in the vertex shader.
Somehow we must find where a vertex lands on the projected image. Good news. We've done this before. We performed a very similar operation when trying to figure out where a vertex lands on the image plane in a perspective projection.
Back then, we moved the vertex from model space into the larger world, and then from the world into a space where the eye was at the origin, and then from eye space into the normalized unit cube that WebGL expects. The end result was a set of coordinates that positioned the vertex on the image plane. This is the matrix gauntlet that carried us through these spaces:
clipPosition = clipFromEye *
  eyeFromWorld *
  worldFromModel *
  vec4(position, 1.0);
In projective texturing, we treat the light source exactly like an eye. But instead of going into eye space where the eye is at the origin, we go into light space where the light is at the origin. The modified gauntlet looks like this:
texPosition = clipFromLight *
  lightFromWorld *
  worldFromModel *
  vec4(position, 1.0);
The lightFromWorld matrix is constructed with the aid of a FirstPersonCamera instance. The clipFromLight matrix is a perspective matrix that shapes the aperture of the projecting light.
This gauntlet lands us in the [-1, 1] interval of the unit cube, but we want to be in the [0, 1] interval of texture coordinates. So, we need to prepend a couple of extra matrices that do some range-mapping:
texPosition = Matrix4.translate(0.5, 0.5, 0) *
  Matrix4.scale(0.5, 0.5, 1) *
  clipFromLight *
  lightFromWorld *
  worldFromModel *
  vec4(position, 1.0);
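To see why these two extra matrices do the job, it helps to write out their effect per component. The line below is only a sketch of that arithmetic in GLSL, not code from the renderer; ndcPosition stands for a hypothetical coordinate that has already been divided by its w-component:
// Scale [-1, 1] by 0.5 to get [-0.5, 0.5], then translate by 0.5 to get [0, 1].
vec2 texCoords = 0.5 * ndcPosition.xy + 0.5;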
That's a lot of matrices to be multiplying for every vertex. We should avoid this cost by multiplying all these matrices together in TypeScript. Since TypeScript doesn't allow us to overload builtin operators like *, multiplying five matrices together is ugly. However, if we put all the matrices in an array, we may use the reduce function to iterate through them and accumulate their product:
const lightCamera = new FirstPersonCamera(
  lightPosition,
  lightTarget,
  new Vector3(0, 1, 0)
);
const matrices = [
  Matrix4.translate(0.5, 0.5, 0),
  Matrix4.scale(0.5, 0.5, 1),
  Matrix4.perspective(45, 1, 0.1, 1000),
  lightCamera.matrix,
  worldFromModel,
];
let textureFromModel = matrices.reduce((accum, matrix) =>
  accum.multiplyMatrix(matrix)
);
Like our other matrices, the matrix must be uploaded as a uniform:
shader.setUniformMatrix4fv('textureFromModel', textureFromModel.buffer());
The vertex shader receives the matrix and transforms the vertex position into the texture coordinates that locate the vertex on the projected texture:
uniform mat4 textureFromModel;
in vec3 position;
out vec4 mixTexPosition;
void main() {
  // ...
  mixTexPosition = textureFromModel * vec4(position, 1.0);
}
Note that the texture coordinates are a vec4. The coordinates are in clip space, which means they haven't yet been divided by their w-component. We perform the perspective divide in the fragment shader and then look up the projected color just as we look up a color from any other texture:
uniform sampler2D signal;
in vec4 mixTexPosition;
void main() {
  vec2 texPosition = mixTexPosition.xy / mixTexPosition.w;
  vec3 signalColor = texture(signal, texPosition).rgb;
  fragmentColor = vec4(signalColor, 1.0);
}
Alternatively, GLSL provides a lookup function that performs the perspective divide for us. It's called textureProj:
vec3 signalColor = textureProj(signal, mixTexPosition).rgb;
This example code computes the color using only the projected texture and doesn't perform any other lighting. In the bat signal renderer, the projected color is added onto a darkened diffuse term.
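Here's a sketch of a fragment shader that follows this recipe. The renderer's lighting code isn't shown in this chapter, so the normal varying, the light direction, the albedo, and the 0.2 darkening factor below are illustrative placeholders rather than the renderer's actual values:
uniform sampler2D signal;
in vec4 mixTexPosition;
in vec3 mixNormal;
out vec4 fragmentColor;

void main() {
  // Darkened diffuse term; the light direction, albedo, and 0.2 factor are made up.
  vec3 lightDirection = normalize(vec3(1.0, 1.0, 1.0));
  vec3 albedo = vec3(0.5);
  float litness = max(dot(normalize(mixNormal), lightDirection), 0.0);
  vec3 diffuse = 0.2 * litness * albedo;

  // Projected color from the bat signal texture.
  vec3 signalColor = textureProj(signal, mixTexPosition).rgb;

  fragmentColor = vec4(diffuse + signalColor, 1.0);
}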
Projective texturing comes with some surprises. If we look behind a projective light source, we find a second instance of the projected texture. Move the mouse around this scene to find two bat signals.
The projection goes both forward and backward. In the backward projection, everything is flipped, including the w-component. We cancel out the unwanted second instance by zeroing out the color when the w-component is negative:
vec3 signalColor = mixTexPosition.w > 0.0
  ? textureProj(signal, mixTexPosition).rgb
  : vec3(0.0);
If we aim the light at the nearest buildings, we see far more of the bat signal than we should. It's not getting blocked by the buildings closer to the light. That's because the fragment shader naively assumes nothing is in the scene besides it and the light source. To address this, we'll need to look into another use of projective texturing: shadow mapping. But that's the topic of another chapter.
Summary
Textures are more than just images, and they provide more than just surface color. They are general-purpose files that can store any kind of numeric data. They may contain alpha information that we use to mask out fragments. Some objects are too costly to render as a trimesh, like vegetation. We instead paste a flattened picture of the object onto a billboard, a simple quadrilateral that always faces the camera. The surrounding environment is also too complex to render as full geometry. We capture it in six textures and paste them on a skybox that follows the viewer around but is drawn behind all other geometry. Mirror-like objects treat the skybox texture as an environment map. The reflected eye vector acts as a set of 3D texture coordinates that point at the color that reaches the eye. Textures can also provide lighting data like normals, shininesses, discrete levels of litness in a toon shader, or a degree of self-shadowing known as ambient occlusion. In fact, the texture itself might be an image projected onto the scene by a light source.