Third-person Camera
A first-person camera gives a first-person perspective of the virtual world. The player feels like the active agent in the scene. Sometimes, however, we want to interact indirectly through an avatar. This is especially true in games where we engage in melee combat or perform athletic maneuvers instead of firing projectiles. On these occasions we need a third-person camera like the one in this renderer, which can be moved with the mouse and WASD keys:
Let's examine what a third-person camera abstraction might look like. In the following discussion, we assume that the third-person camera is associated with an avatar model that is placed at the origin, stands upright along the y-axis, and looks along the negative z-axis—all in model space.
State
Our ThirdPersonCamera abstraction maintains the following state in order to orient itself and the avatar to which it is attached:
- The avatar's world space position, which we'll call the anchor.
- The viewer's offset from the avatar in model space. A viewer one unit directly behind and one unit up from the avatar will have a position of \(\begin{bmatrix}0&1&1\end{bmatrix}\).
- The world's up vector.
- The avatar's forward vector, which is the world space direction in which the avatar is looking.
- The avatar's focal distance, a scalar that measures how far ahead the avatar is looking along its forward vector. This wasn't needed with the first-person camera, but it is here because we need to have the camera look at the same thing as the avatar.
- The avatar's right vector in world space.
- The eyeFromWorld matrix.
- The avatar's worldFromModel matrix.
Place this and all subsequent code in lib/camera.ts. Three of the instance variables will be assigned in a helper method, so we annotate them with ! to reassure the TypeScript compiler that they will be initialized.
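A minimal sketch of that state might look like the following, assuming Vector3 and Matrix4 classes like the ones used by the first-person camera. The import path is a placeholder, and the constructor in the next section finishes initializing the first five fields.

```typescript
import { Vector3, Matrix4 } from './matrix';  // adjust to wherever your math classes live

export class ThirdPersonCamera {
  anchor: Vector3;         // the avatar's world space position
  offset: Vector3;         // the viewer's offset from the avatar in model space
  worldUp: Vector3;        // the world's up vector
  forward: Vector3;        // the world space direction the avatar is looking
  focalDistance: number;   // how far ahead the avatar is looking along forward

  // These three are assigned in the reorient helper, so the ! reassures
  // the TypeScript compiler that they will be initialized.
  right!: Vector3;         // the avatar's world space right vector
  eyeFromWorld!: Matrix4;
  worldFromModel!: Matrix4;

  // The constructor and methods are developed in the sections below.
}
```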
A first-person camera positions and orients just a camera. A third-person camera, in contrast, positions and orients the avatar, which is visible, and situates the camera behind it at some distance. When the avatar moves or turns, the camera tags along. The abstraction therefore maintains two matrices. The worldFromModel matrix is used to transform the avatar model. This matrix applies only to the avatar; other models in the scene have their own worldFromModel matrices, which are not maintained by the camera. The eyeFromWorld matrix puts the viewer looking over the avatar's shoulder.
Behaviors
The ThirdPersonCamera class provides several behaviors for initializing the camera, moving and turning it, and building its matrices.
Constructor
The constructor receives the avatar's position, the position at which it's looking, and the viewer's position in the avatar's model space. From the two positions, it computes the avatar's forward vector and focal distance.
As before, some of the state must be recalculated whenever the avatar's position or orientation changes, and this logic is factored out to reorient.
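A sketch of the constructor under these assumptions, where subtract, magnitude, and normalize stand in for whatever operations your Vector3 class provides, and the worldUp parameter is an extra that the description above doesn't name:

```typescript
constructor(anchor: Vector3, focus: Vector3, offset: Vector3, worldUp: Vector3) {
  this.anchor = anchor;
  this.offset = offset;
  this.worldUp = worldUp;

  // The avatar looks from its anchor toward the focus position.
  const gaze = focus.subtract(anchor);
  this.focalDistance = gaze.magnitude();
  this.forward = gaze.normalize();

  this.reorient();
}
```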
Reorient
The reorient method is responsible for assembling the eyeFromWorld and worldFromModel matrices and the right vector whenever the avatar is moved or turned. Let's work through this in stages.
We want to rotate the avatar so that it is looking in the desired forward direction, so we must build a rotation matrix out of the avatar's three world space axes. We have the avatar's forward vector from the camera state, and we compute the other two vectors using cross products, just as with the first-person camera.
The right vector is part of the state because we need it for strafing. The up vector isn't needed elsewhere.
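Inside reorient, the cross products might look like this, with cross and normalize standing in for your vector library's operations:

```typescript
// The right vector is perpendicular to both the forward vector and the world's up vector.
this.right = this.forward.cross(this.worldUp).normalize();

// The avatar's own up vector is perpendicular to its right and forward vectors.
const up = this.right.cross(this.forward).normalize();
```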
Earlier we formed the rotation matrix of a first-person camera by dropping into its rows the axes of the incoming world space that were to become the x-, y-, and z-axes of the outgoing eye space. Can we use that same trick here? No. We have the opposite situation. The right, up, and forward vectors that we have are the outgoing world space vectors that we want the model's x-, y-, and z-axes to become.
There's a related law of rotation matrices that can help us out in this inverted situation: the columns of a rotation matrix represent the outgoing space's vectors that the incoming space's x-, y-, and z-axes become. For example, this rotation matrix makes the avatar's right arm, which points along the x-axis in model space, point along the world space vector \(\mathrm{right}\):
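\[
\begin{bmatrix}
\mathrm{right}_x & \cdot & \cdot \\
\mathrm{right}_y & \cdot & \cdot \\
\mathrm{right}_z & \cdot & \cdot
\end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
=
\begin{bmatrix} \mathrm{right}_x \\ \mathrm{right}_y \\ \mathrm{right}_z \end{bmatrix}
\]

The matrix is written here as just its 3-by-3 rotation part, and the dots mark the columns we haven't filled in yet. Each of them is determined the same way.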
Altogether, this matrix rotates our avatar into the desired orientation:
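\[
\begin{bmatrix}
\mathrm{right}_x & \mathrm{up}_x & -\mathrm{forward}_x \\
\mathrm{right}_y & \mathrm{up}_y & -\mathrm{forward}_y \\
\mathrm{right}_z & \mathrm{up}_z & -\mathrm{forward}_z
\end{bmatrix}
\]

The right and up columns are the world space vectors that the model space x- and y-axes become. The third column is negated because the avatar looks down the negative z-axis in model space, so it is the model's negative z-axis that must become \(\mathrm{forward}\).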
We also need to translate the avatar from its origin in model space to its position in world space. Together the translation and rotation matrices form the avatar's worldFromModel matrix. With a camera's matrix, our goal is to first translate the camera to the origin and then rotate its line of sight. The avatar is different. It's already at the origin; we want to move it away to the anchor. If we translate first, the rotation will swing it away from the anchor. So, we rotate first and translate second.
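In code, this stage of reorient might be sketched like this. Matrix4.fromColumns, Matrix4.translate, negate, and multiplyMatrix are assumptions about your math library; if there's no column-wise constructor, fill in the sixteen entries directly.

```typescript
// A rotation whose columns are the world space vectors the model axes become,
// using the up vector computed above.
const rotation = Matrix4.fromColumns(this.right, up, this.forward.negate());

// Rotate about the model's origin first, then translate out to the anchor.
const translation = Matrix4.translate(this.anchor.x, this.anchor.y, this.anchor.z);
this.worldFromModel = translation.multiplyMatrix(rotation);
```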
Next up is the eyeFromWorld matrix. It is assembled in much the same way as the first-person camera's matrix, but this time the camera's position is derived from the avatar. In model space, the camera is at this.offset. We need its world space position. To find its forward vector, we must identify what the avatar is looking at and then have the camera look at it too. Once we have the camera's position and forward vector, we build the matrix with Matrix4.look.
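A sketch of this last stage, where multiplyPosition, add, scalarMultiply, subtract, and normalize are stand-ins for your library's operations, and Matrix4.look is assumed to take the camera's position, its forward vector, and the world's up vector:

```typescript
// Carry the camera's model space offset into world space with the avatar's matrix.
const eyePosition = this.worldFromModel.multiplyPosition(this.offset);

// The avatar's focal point lies focalDistance units along its forward vector.
const focus = this.anchor.add(this.forward.scalarMultiply(this.focalDistance));

// The camera looks at the same focal point as the avatar.
const eyeForward = focus.subtract(eyePosition).normalize();
this.eyeFromWorld = Matrix4.look(eyePosition, eyeForward, this.worldUp);
```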
Strafe, Advance, Yaw
When the viewer strafes or advances, the anchor must be updated. When the viewer yaws, the avatar's forward vector must be rotated. These changes mean a new worldFromModel matrix must be formed. Since the camera is positioned behind the avatar, the eyeFromWorld matrix must also be updated. The methods we've written for FirstPersonCamera update the state and call reorient, so they are just as useful here.
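In case they're not fresh in mind, the three methods might be sketched like this, with scalarMultiply, Matrix4.rotateAround, and multiplyDirection being assumptions about your math library:

```typescript
strafe(distance: number) {
  // Slide the anchor along the avatar's right vector.
  this.anchor = this.anchor.add(this.right.scalarMultiply(distance));
  this.reorient();
}

advance(distance: number) {
  // Slide the anchor along the avatar's forward vector.
  this.anchor = this.anchor.add(this.forward.scalarMultiply(distance));
  this.reorient();
}

yaw(degrees: number) {
  // Spin the avatar's forward vector around the world's up vector.
  const rotation = Matrix4.rotateAround(this.worldUp, degrees);
  this.forward = rotation.multiplyDirection(this.forward).normalize();
  this.reorient();
}
```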