The Fourth Wall
Rendering by itself produces aquariums. These aquariums teem with colorful creatures darting back and forth, but we users remain on the other side of the glass, passively spectating. We don't just want to see a virtual world; we want to interact with it. Cameras add some interactivity, allowing us to move around inside the aquarium. But for a renderer to be truly interactive, it must allow us to touch and manipulate objects.
In our discussion of cameras, you learned how a renderer can respond to taps on the glass with pointer event listeners:
window.addEventListener('pointerdown', event => {
  // handle down events
});
window.addEventListener('pointerup', event => {
  // handle up events
});
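As a preview of where we're headed, here's a minimal sketch of how down, move, and up listeners can cooperate to track a drag gesture. The helper name `attachDragTracking` and the `onDrag` callback are illustrative, not part of any standard API; the listeners are attached to a caller-supplied target so the helper is easy to test:

```javascript
// Sketch: track whether the pointer is held down and report how far it
// moves between events. Names here are illustrative.
function attachDragTracking(target, onDrag) {
  let dragging = false;
  let lastX = 0;
  let lastY = 0;

  target.addEventListener('pointerdown', event => {
    dragging = true;
    lastX = event.clientX;
    lastY = event.clientY;
  });

  target.addEventListener('pointermove', event => {
    if (!dragging) return;
    // Report the pixel displacement since the previous event; a renderer
    // would use it to drive a camera or push an object around.
    onDrag(event.clientX - lastX, event.clientY - lastY);
    lastX = event.clientX;
    lastY = event.clientY;
  });

  target.addEventListener('pointerup', () => {
    dragging = false;
  });
}
```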
In this chapter, we'll examine what needs to go inside these event listeners so that the user's 2D gestures influence the 3D world behind the glass. By the chapter's end, you'll be able to answer the following questions:
- How do we map the 2D mouse coordinates, which are in pixel space, to the 3D spaces in which the scene is defined?
- What is going on in the 3D tools that allow the user to spin models around?
- How do we identify which object the user is clicking on?
- How can we employ a physics system so that objects fall, bounce, collide, and get pushed around by external forces?
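To give the first question some concrete shape, here's a sketch of one common convention: mapping a pointer event's pixel coordinates onto normalized device coordinates, where both axes run from -1 to 1 and y points up. The function name `pixelsToNdc` and the `canvas` parameter are illustrative:

```javascript
// Sketch: convert a pointer event's pixel coordinates into normalized
// device coordinates (NDC) relative to a rendering surface.
function pixelsToNdc(event, canvas) {
  const rect = canvas.getBoundingClientRect();
  // Map x from [0, width] onto [-1, 1].
  const x = 2 * (event.clientX - rect.left) / rect.width - 1;
  // Map y from [0, height] onto [1, -1]; pixel y grows downward,
  // but NDC y grows upward, so we flip it.
  const y = 1 - 2 * (event.clientY - rect.top) / rect.height;
  return [x, y];
}
```

From NDC, the coordinates can be carried further back through the projection and view transforms into the spaces where the scene itself is defined.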
Let's tear down a wall.