Texture Setup

How to 3D

Chapter 9: Textures

Once an image has been read or generated, it must be shuttled off to the graphics card. It needs to be in VRAM, just like the vertex attributes, so that the shaders can read from it quickly.

The WebGL API for handling textures is a mishmash of ideas that have developed over several decades. Graphics technology has changed significantly in that time, and the WebGL API that we're about to explore has become a little disjointed as it has evolved.

The graphics card has special hardware called a texture unit that performs texture lookups. To comply with the WebGL standard, a card must have at least eight texture units. That means we can use up to eight textures on a single draw call. Your card may support more. Issue this query to find how many units your card has:

const unitCount = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
console.log(unitCount);

The units are named gl.TEXTURE0, gl.TEXTURE1, and so on.
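Because these enums are consecutive integers, a unit can also be chosen by index rather than by name, which is handy when looping over several textures. A minimal sketch, using the fact that gl.TEXTURE0 has the value 0x84C0 in the WebGL specification:

```typescript
// gl.TEXTURE0 is the enum 0x84C0; the units that follow are consecutive
// integers, so unit i can be selected arithmetically rather than by name.
const TEXTURE0 = 0x84c0; // value of gl.TEXTURE0 per the WebGL specification

function textureUnitEnum(i: number): number {
  return TEXTURE0 + i; // gl.TEXTURE1 is 0x84C1, gl.TEXTURE2 is 0x84C2, ...
}

// In real code, this arithmetic appears as: gl.activeTexture(gl.TEXTURE0 + i);
console.log(textureUnitEnum(2).toString(16)); // → "84c2"
```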

Each unit may be in one of several different modes. A mode is called a texture target in the WebGL specification. Some modes correspond to different dimensionalities of the data. If the texture is a plain 2D image, then the target is gl.TEXTURE_2D. If the texture is volumetric data, such as that produced by a scientific simulation or medical equipment like a CT scanner, then the target is gl.TEXTURE_3D. The full OpenGL standard allows one-dimensional textures via gl.TEXTURE_1D, but WebGL does not. We will encounter some additional targets later on.

The pixel data is uploaded to a texture object, which is a data structure on the graphics card that holds the pixel data and other settings that influence how the texture is read.

The following function creates a texture object, uploads an image's pixel data into the object, and associates the object with a given texture unit's gl.TEXTURE_2D target:

function createRgbaTexture2d(width: number, height: number, image: HTMLImageElement | Uint8ClampedArray, textureUnit: GLenum = gl.TEXTURE0) {
  gl.activeTexture(textureUnit);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.generateMipmap(gl.TEXTURE_2D);
  return texture;
}

Several functions are at work here: activeTexture selects the texture unit, createTexture creates a new texture object, bindTexture attaches that object to the active unit's gl.TEXTURE_2D target, and texImage2D allocates storage in VRAM for the pixels and transfers the image into it. We'll discuss mipmaps soon. If the image is grayscale instead of RGBA, we pass slightly different parameters to texImage2D:

gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, image);

Generally, we associate each texture object with a different texture unit. For example, if we have three different images for terrain textures, we might put them on texture units 0, 1, and 2 as follows:

createRgbaTexture2d(64, 64, grassImage, gl.TEXTURE0);
createRgbaTexture2d(128, 256, sandImage, gl.TEXTURE1);
createRgbaTexture2d(dirtImage.width, dirtImage.height, dirtImage, gl.TEXTURE2);

If the image is an HTMLImageElement, it tracks its own width and height. A Uint8ClampedArray does not.
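For instance, a raw pixel buffer built in code, such as a procedural checkerboard, carries no dimensions of its own, so the width and height must travel alongside it to texImage2D. A sketch of building such a buffer; the final upload call is shown as a comment since it assumes a live WebGL context:

```typescript
// Build an RGBA checkerboard as raw bytes. Unlike an HTMLImageElement,
// a Uint8ClampedArray carries no dimensions, so width and height must be
// passed explicitly when the pixels are uploaded.
function makeCheckerboard(width: number, height: number): Uint8ClampedArray {
  const pixels = new Uint8ClampedArray(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const shade = (x + y) % 2 === 0 ? 255 : 0; // alternate black and white
      const i = (y * width + x) * 4;
      pixels[i] = shade;     // red
      pixels[i + 1] = shade; // green
      pixels[i + 2] = shade; // blue
      pixels[i + 3] = 255;   // alpha: fully opaque
    }
  }
  return pixels;
}

const board = makeCheckerboard(64, 64);
// createRgbaTexture2d(64, 64, board, gl.TEXTURE0);
console.log(board.length); // → 16384, i.e. 64 × 64 pixels × 4 bytes each
```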

Once the textures are on the graphics card, the next step is to paste them on a model. For that to happen, we must establish a correspondence between the 3D geometry and the 2D image.
