The Fourth Wall
Rendering by itself produces aquariums—worlds we can see but can't interact with. They teem with colorful creatures living their own lives, but we users remain on the other side of the glass, passively spectating. We don't just want to see a virtual world; we want to be part of it. Cameras add some interactivity, allowing us to move around inside the aquarium. But for a renderer to be truly interactive, it must allow us to touch the objects within, and they must respond.
In our discussion of cameras, we learned how a renderer can respond to taps on the glass by registering pointer event listeners, with code like this:
window.addEventListener('pointerdown', event => {
  // handle down events
});
window.addEventListener('pointerup', event => {
  // handle up events
});
In this chapter, we'll examine what needs to go inside these event listeners so that the user's 2D gestures influence the 3D world behind the glass. By the chapter's end, you'll be able to answer the following questions:
- How do we smooth out digital inputs so interaction doesn't feel abrupt? (See the first sketch after this list.)
- How do we map 2D mouse coordinates, which are in pixel space, to the 3D spaces in which the scene and models are defined? (See the second sketch.)
- What transformation machinery powers renderers that let users spin models around? (See the third sketch.)
- How can we identify what object a user is clicking on?
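To preview the first question: one common technique is exponential smoothing, in which, rather than jumping straight to the latest input value, we close a fraction of the remaining gap every frame. Here's a minimal sketch; the variable names and the smoothing factor are illustrative choices, not fixed conventions:

let targetX = 0;        // raw input, updated abruptly by events
let smoothX = 0;        // smoothed value, used for rendering
const SMOOTHING = 0.1;  // fraction of the gap closed per frame (illustrative)

window.addEventListener('pointermove', event => {
  targetX = event.clientX;
});

function animate() {
  // Each frame, move 10% of the way toward the target. The smoothed
  // value converges on targetX without the raw input's jarring jumps.
  smoothX += (targetX - smoothX) * SMOOTHING;
  requestAnimationFrame(animate);
}
animate();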
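For the second question, a standard first step is converting pixel coordinates, which run from (0, 0) at the top-left corner to (width, height) at the bottom-right, into normalized device coordinates, which run from -1 to 1 on both axes with y pointing up. A minimal sketch, assuming the canvas fills the browser window (the helper name toNDC is our own):

// Convert a pointer event's pixel coordinates to normalized device
// coordinates (NDC), assuming the canvas spans the whole window.
function toNDC(event) {
  const x = (event.clientX / window.innerWidth) * 2 - 1;     // [0, width] -> [-1, 1]
  const y = -((event.clientY / window.innerHeight) * 2 - 1); // flip: pixel y grows down, NDC y grows up
  return {x, y};
}

window.addEventListener('pointerdown', event => {
  const ndc = toNDC(event);
  // From here, ndc can be unprojected into the 3D scene.
});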
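And for the third question, here's a taste of what goes inside those listeners: we track whether a drag is in progress and accumulate horizontal pointer movement into a rotation angle. The sensitivity factor is an illustrative choice:

let dragging = false;
let lastX = 0;
let rotationY = 0;  // radians, applied to the model each frame

window.addEventListener('pointerdown', event => {
  dragging = true;
  lastX = event.clientX;
});
window.addEventListener('pointermove', event => {
  if (!dragging) return;
  // Accumulate the horizontal movement since the last event.
  const dx = event.clientX - lastX;
  lastX = event.clientX;
  rotationY += dx * 0.01;  // sensitivity factor (illustrative)
});
window.addEventListener('pointerup', () => {
  dragging = false;
});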
Let's tear down the wall between our physical and virtual worlds.