# Computer-graphics recipes

This section describes how to perform some operations common in Computer
Graphics (CG). Note that while the 3D Computer Graphics community is used to
working almost exclusively with 4×4 matrices, **nalgebra** defines a wider
variety of transformation types that the user is strongly encouraged to use
instead.

##### Info

You are encouraged to look at the **nalgebra-glm** crate as well. It provides
a simpler interface for manipulating vectors and matrices using homogeneous
coordinates, and is inspired by the popular C++ GLM library. You will find
more details on a dedicated page of this user guide.

## Transformations using Matrix4

In the field of CG, a 4×4 matrix usually has a specific interpretation: it is
thought of as a transformation matrix that mixes scaling (including shearing),
rotation, and translation. Though using 4×4 matrices is convenient because most
3D transformations (including projections) can be represented using those
so-called *homogeneous coordinates*, they do not provide strong guarantees
regarding their properties. For example, a method that takes a `Matrix4` as
argument has no guarantee that the matrix is a pure rotation, an isometry, or
even an arbitrary but invertible transformation! That's why the dedicated
transformation types are recommended instead of raw matrices.

However, it is sometimes convenient to work directly with 4×4 matrices,
especially for small applications where one wants to avoid the complexity of
selecting the right transformation type for the task at hand. Therefore,
**nalgebra** has limited but useful support for transforming 3×3 matrices
(for 2D transformations) and 4×4 matrices (for 3D transformations).

### Homogeneous raw transformation matrix creation

The following methods may be used on a `Matrix4` to build a 4×4 homogeneous
raw transformation matrix.

| Method | Description |
|---|---|
| `::new_scaling(s)` | A uniform scaling matrix with scaling factor `s`. |
| `::new_nonuniform_scaling(vs)` | A non-uniform scaling matrix with the scaling factor along each coordinate given by the corresponding component of the vector `vs`. |
| `::new_translation(t)` | A pure translation matrix specified by the displacement vector `t`. |
| `::new_rotation_wrt_point(axang, pt)` | A composition of rotation and translation such that the point `pt` is left invariant. The rotational part is specified as a rotation axis multiplied by the rotation angle. |
| `::from_scaled_axis(axang)` | A pure rotation matrix specified by a rotation axis multiplied by the rotation angle. |
| `::from_euler_angles(r, p, y)` | A pure rotation matrix from Euler angles applied in the order roll, pitch, yaw. |
| `::new_orthographic(...)` | An orthographic projection matrix. |
| `::new_perspective(...)` | A perspective projection matrix. |
| `::new_observer_frame(eye, target, up)` | A composition of rotation and translation corresponding to the local frame of a viewer standing at the point `eye` and looking toward `target`. The `up` direction is the vertical direction. |
| `::look_at_rh(...)` | A right-handed look-at view matrix. |
| `::look_at_lh(...)` | A left-handed look-at view matrix. |

Note that a few of those functions are also defined for `Matrix3`, which can
hold the homogeneous coordinates of 2D transformations.

### Homogeneous raw transformation matrix modification

Once created, a `Matrix4` (or `Matrix3` for 2D transformations) can be modified
by appending or prepending transformations. The function signatures follow the
same pattern as the transformation matrix creation functions listed above.
In-place appending and prepending are supported and have a name with a `_mut`
suffix, e.g., `.append_scaling_mut(...)` instead of `.append_scaling(...)`.

| Method | Description |
|---|---|
| `.append_scaling(s)` | Appends to `self` a uniform scaling with scaling factor `s`. |
| `.prepend_scaling(s)` | Prepends to `self` a uniform scaling with scaling factor `s`. |
| `.append_nonuniform_scaling(vs)` | Appends to `self` a non-uniform scaling with the scaling factor along each coordinate given by the corresponding component of `vs`. |
| `.prepend_nonuniform_scaling(vs)` | Prepends to `self` a non-uniform scaling with the scaling factor along each coordinate given by the corresponding component of `vs`. |
| `.append_translation(t)` | Appends to `self` a translation specified by the vector `t`. |
| `.prepend_translation(t)` | Prepends to `self` a translation specified by the vector `t`. |

Note that there isn't any method to append or prepend a rotation, because such
a specific method would not provide any performance benefit. Instead, you may
explicitly construct a homogeneous rotation matrix using, e.g.,
`::from_scaled_axis`, and then multiply the result with the matrix you want it
appended or prepended to.

### Using raw transformation matrices on points and vectors

Homogeneous raw transformation matrices do not have dimensions compatible with
multiplication by a 3D vector or point. For example, a 3D transformation
represented by a `Matrix4` cannot multiply a 3D vector represented by a
`Vector3` (because the matrix has 4 columns while the vector has only 3 rows).
There are two main ways to deal with this issue:

- Use the `.transform_vector(...)` and `.transform_point(...)` methods that directly take a `Vector3` and a `Point3` as argument.
- Use homogeneous coordinates for vectors and points as well. In that case, a 3D vector `Vector3::new(x, y, z)` should be given a fourth coordinate set to zero, i.e., `Vector4::new(x, y, z, 0.0)`, while a 3D point `Point3::new(x, y, z)` should be represented as a vector with its fourth coordinate equal to one, i.e., `Vector4::new(x, y, z, 1.0)`. The `Matrix4` can then multiply the augmented vector directly.

## Build an MVP matrix

The Model-View-Projection matrix is the common denomination of the composition of three transformations:

- The __model transformation__ gives its orientation and position to an object in the 3D scene. It is different for every object.
- The __view transformation__ moves any point of the scene into the local coordinate system of the camera. It is the same for every object in the scene, but different for every camera.
- The __projection__ translates and stretches the displayable part of the scene so that it fits into the double unit cube (aka Normalized Device Coordinates). We already discussed it in the section dedicated to projections. There is usually only one projection per display.

Note that it is also common to construct only a View-Projection matrix and let the graphics card combine it with the model transformation in shaders. For completeness, our example will deal with the model transformation as well.

The model and view transformations are direct isometries. Thus, we can simply
use the dedicated `Isometry3` type. The projection is not an isometry and
requires the use of a raw `Matrix4` or a dedicated projection type like
`Perspective3`.

In practice, the intermediate `let` bindings composing the model, view, and
projection transformations are usually collapsed into a single expression.

## Screen-space to view-space

It is the projection matrix's task to stretch and translate the displayable 3D objects into the double unit cube, i.e., it transforms points from view-space (the camera's local coordinate system) to Normalized Device Coordinates (aka clip-space). The screen then displays everything that can be seen from the cube's face located on the plane $z = -1$. Therefore, a whole line (in clip-space) parallel to the $\mathbf{z}$ axis will be mapped to a single point in screen-space (the display device's 2D coordinate system). The following shows one such line $\mathcal{L}$ in view-space, Normalized Device Coordinates, and screen-space. More details about the different coordinate systems used in computer graphics can be found there.

Now observe that there is a bijective relationship between each point in
screen-space and each line parallel to the $\mathbf{z}$ axis in clip-space.
Moreover, both the perspective and orthographic projections are bijective and
map lines to lines. It is thus possible to perform a so-called *unprojection*,
i.e., from a 2D point in screen-space compute the corresponding 3D line in
view-space. Typically, this line can then be used for picking using ray
casting. The next
example takes a point on a screen of size $800 \times 600$ and retrieves the
corresponding line in view-space. It follows three steps:

- Convert the point from screen-space to two points in clip-space. One will lie on the near-plane with $z = -1$ and the other on the far-plane with $z = 1$.
- Apply the inverse projection to both points.
- Compute the parameters of the line that passes through those two points.

The resulting 3D line will be in the local space of the camera. Thus, it might be useful to multiply its origin and direction by the inverse view matrix in order to obtain coordinates in world-space. The same procedure works with any other line-preserving projection as well, e.g., the orthographic projection.

## Conversions for shaders

Shaders don't understand the high-level types defined by **nalgebra**.
Therefore, you will usually need to convert your data into pointers to a
contiguous array of floating-point numbers with components arranged in a
specific way in memory. Using the `.as_slice()` method of raw matrices and
vectors, one can retrieve a reference to a contiguous array containing all
components in column-major order. Note that this method does not exist for
matrices that do not own their data, e.g., matrix slices.

Higher-level transformation types like `Rotation3` or `Similarity3` cannot be
converted into arrays directly, so you will have to convert them to raw
matrices first. The underlying 2×2 or 3×3 matrix of `Rotation2` and `Rotation3`
can be retrieved directly by the `.matrix()` method. All the other
transformations must first be converted into their homogeneous-coordinates
representation using the method `.to_homogeneous()`. This will return a
`Matrix3` (for 2D transformations) or a `Matrix4` (for 3D transformations)
that may then be reinterpreted as an array.