Reading the Framebuffer

The framebuffer is normally the end of the graphics pipeline. What if we want to take the rendered image and process it in an image editor or collect a series of stills and produce an animated GIF? We could use our operating system's or browser's screenshot feature, but sometimes we'd like a more programmatic solution. Let's implement one.

First we need a way to make the browser store a blob of binary data when we ask it to. Browsers don't generally touch the filesystem—except when we ask them to download something. So we define this function that creates and clicks on an artificial download link:

export function downloadBlob(name: string, blob: Blob) {
  // Inject a link element into the page. Clicking on
  // it makes the browser download the binary data.
  let link = document.createElement('a');
  link.download = name;
  link.href = URL.createObjectURL(blob);
  document.body.appendChild(link);
  link.click();

  // Remove the link after a slight pause. Browsers need a moment
  // to start the download before the object URL can be revoked.
  setTimeout(() => {
    URL.revokeObjectURL(link.href);
    document.body.removeChild(link);
  });
}

Drop this function into your lib/web-utilities.ts.
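
If you want to try the helper on its own before wiring it into a renderer, you can hand it any Blob. Here's a minimal sketch that packages a string of text and downloads it as a .txt file; the filename, contents, and import path are placeholders, not part of the renderer.

import { downloadBlob } from './lib/web-utilities';

// Wrap some text in a Blob and hand it to downloadBlob. The browser
// saves it just as it will the PNG screenshot later on.
const greeting = new Blob(['hello, framebuffer'], { type: 'text/plain' });
downloadBlob('greeting.txt', greeting);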

Next we need a function that asks the drawing canvas to package up its pixels as an in-memory PNG image. Put this takeScreenshot function in your utilities file also:

export async function takeScreenshot(canvas: HTMLCanvasElement) {
  const png: Blob = await new Promise(resolve => {
    canvas.toBlob(blob => resolve(blob!), 'image/png');
  });
  downloadBlob('screenshot.png', png);
}

Now when a renderer needs a screenshot, it calls this function. For example, the renderer below has this code in its initialize function:

const takeScreenshotButton = document.getElementById('take-screenshot-button')!;
takeScreenshotButton.addEventListener('click', () => {
  render();
  takeScreenshot(canvas);
});

When the button is clicked, we take a screenshot. By default, WebGL clears the drawing buffer once it has been composited to the page, so a screenshot taken on its own may come back blank. To ensure that the framebuffer has just been drawn into, we call render immediately beforehand.
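
An alternative, if you'd rather not re-render on demand, is to ask WebGL to keep the drawing buffer around after compositing. The sketch below shows the standard context option that does so, assuming a WebGL 2 context like the one the renderer already creates. It trades a bit of memory and copying for the convenience, which is why we stick with the render-first approach.

// Creating the context with preserveDrawingBuffer keeps the pixels
// readable after the page is composited, so toBlob can grab them at
// any time, at the cost of some extra memory and copying.
const gl = canvas.getContext('webgl2', {
  preserveDrawingBuffer: true,
});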

Summary

The pipeline between our models and a rendered image has many stages. In the vertex shader, the position data passes through six canonical spaces: model, world, eye, clip, normalized, and pixel. Model space is the coordinate system used by the artist. World space is the coordinate system in which all our models are assembled. Eye space is the coordinate system that puts the viewer at the center. Normalized space is the unit cube whose contents WebGL renders into the viewport. Clip space is very nearly normalized space, but the perspective divide hasn't yet been applied. Pixel space is the coordinate system of the viewport. Most of these spaces are a convenient fiction; WebGL itself doesn't have a notion of a viewer or a world. It only rasterizes whatever it finds in the unit cube. To get our models into that cube, we use orthographic and perspective projections. If a transformed model happens to land in the unit cube, its fragments must then pass the depth and scissor tests before reaching the framebuffer, where they may be blended with other geometry. If our model is animated, we loop over the pipeline with continuous rendering.
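
As a refresher, here's a rough sketch of how the first part of that chain might look in a vertex shader. The uniform names are illustrative rather than the ones used earlier in the book, and the perspective divide and viewport transform that yield normalized and pixel space happen after the shader runs.

const vertexSource = `#version 300 es
uniform mat4 worldFromModel;
uniform mat4 eyeFromWorld;
uniform mat4 clipFromEye;
in vec3 position;

void main() {
  // model space -> world space -> eye space -> clip space
  gl_Position = clipFromEye * eyeFromWorld * worldFromModel * vec4(position, 1.0);
}
`;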
