Lecture Notes

5. Interpretations of Linear Maps

Part of the Series on Linear Algebra.

By Akshay Agrawal. Last updated Sept. 19, 2017.

Previous entry: Linear Maps and Matrices; Next entry: Null Spaces and Ranges

Linear maps and matrices are linear algebra’s principal actors, so it is imperative for the reader to be comfortable with them. In this section, we provide a variety of intuitions for and interpretations of these two actors.

5.1 Linear Maps and Matrices are Two Sides of the Same Coin

This was stated in the previous section, but it bears repeating:

Every linear map can be encoded with a matrix, and every matrix-vector multiplication can be interpreted as applying a linear map to a vector.

The reduction from linear maps to matrices is made possible by the fact that linear maps are completely determined by their action on bases. The number of columns in the matrix of a linear map is equal to the dimension of the domain, which in turn is equal to the number of vectors in any basis for the domain.

As an example, let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined such that $T(x, y) = (x + y, 2y)$. Then the matrix $A$ for $T$ with respect to the standard basis is

$$A = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix},$$

because $T(1, 0) = (1, 0)$ and $T(0, 1) = (1, 2)$. So in particular the product

$$A \begin{bmatrix} x \\ y \end{bmatrix} = x \begin{bmatrix} 1 \\ 0 \end{bmatrix} + y \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

can be interpreted as $x \, T(1, 0) + y \, T(0, 1)$, i.e., as the sum of $T$ applied to the standard basis vectors $e_1$ and $e_2$, each of which is scaled by its corresponding coordinate.
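The following sketch, using NumPy, makes this reduction concrete; the particular map is the illustrative $T(x, y) = (x + y, 2y)$ from above, but the same recipe works for any linear map whose action on a basis is known.

```python
# A minimal sketch: build the matrix of a linear map from its action on
# the standard basis, then check that matrix-vector multiplication agrees
# with applying the map directly.
import numpy as np

def T(v):
    """The illustrative linear map T(x, y) = (x + y, 2y)."""
    x, y = v
    return np.array([x + y, 2 * y])

# The i-th column of the matrix is T applied to the i-th standard basis vector.
A = np.column_stack([T(np.array([1.0, 0.0])), T(np.array([0.0, 1.0]))])

v = np.array([3.0, -2.0])
assert np.allclose(A @ v, T(v))
```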

5.2 Linear Maps Preserve Lines

Every linear map preserves lines; this provides the intuition for why linear maps are named as such. In particular, consider the line $L = \{ v + t d : t \in \mathbb{R} \}$, where $t$ is a scalar, $d$ is a direction vector, and $v$ is a vector in a vector space $V$; let $T : V \to W$ be a linear map. Applying $T$ to $L$ results in another line precisely because linear maps are additive and homogeneous:

$$T(v + t d) = T(v) + T(t d) = T(v) + t \, T(d).$$
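A quick numerical check of this fact; the matrix, base point, and direction below are arbitrary choices, nothing canonical.

```python
# Check that a linear map sends points on the line {v + t d} to points on
# the line through A v with direction A d.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # an arbitrary linear map on R^2
v = np.array([1.0, 1.0])     # a point on the original line
d = np.array([1.0, -1.0])    # the line's direction vector

for t in np.linspace(-2.0, 2.0, 5):
    # Additivity and homogeneity: A (v + t d) = A v + t (A d).
    assert np.allclose(A @ (v + t * d), A @ v + t * (A @ d))
```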

5.3 Linear Maps as Geometric Transformations

In this subsection, we present a few important examples of geometric transformations — dilations, rotations, and reflections — that are linear maps. Because compositions of linear maps are themselves linear maps, the transformations presented here may be thought of as building blocks for assembling more complicated maps. In fact,

every linear map can be decomposed into the composition of dilations, rotations, and reflections.

We will encounter this deep result later, when studying the singular value decomposition.

(Unless otherwise stated, all matrices in this section are with respect to the standard bases.)

Dilations

Any function that operates on vectors by stretching or shrinking them (that is, by dilating them) is a linear map whose matrix is given by

$$D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}$$

for some positive scalars $d_1, d_2, \ldots, d_n$.

Rotations

Any function that rotates vectors by some angle $\theta$ counter-clockwise about the origin is a linear map. In two dimensions, a rotation by $\theta$ is given by the matrix

$$R = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}.$$

This rotation matrix is derived by considering where the standard basis vectors land on the unit circle after passing through a rotation of $\theta$ radians.

Reflections

Any function that reflects vectors across a line passing through the origin is a linear map. For example, below is the matrix for a reflection across the $x$-axis, which sends $(x, y)$ to $(x, -y)$:

$$F = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$

The simplest way to represent reflections across arbitrary lines through the origin is to change coordinates to a basis that makes the reflection natural; we’ll return to this idea in a subsequent section.
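The sketch below collects these building blocks in two dimensions, with arbitrary parameter choices, and checks that composing the maps is the same as multiplying their matrices.

```python
# Dilation, rotation, and reflection in R^2, and a composition of the three.
import numpy as np

theta = np.pi / 4                                   # an arbitrary angle

D = np.diag([2.0, 0.5])                             # dilation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation by theta
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])                         # reflection across the x-axis

# Compositions of linear maps are linear maps, and the matrix of the
# composition is the product of the matrices.
M = R @ D @ F
v = np.array([1.0, 2.0])
assert np.allclose(M @ v, R @ (D @ (F @ v)))
```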

5.4 Special Instances of Matrix Multiplication

Here we present some special instances of matrix multiplication. In order to represent these products, it will be convenient to introduce the transpose of a matrix. The transpose of a matrix $A$ is denoted $A^T$ and is defined such that $(A^T)_{ij} = A_{ji}$. It is a useful fact that $(AB)^T = B^T A^T$.
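Both facts are easy to check numerically; the shapes below are arbitrary.

```python
# Check the definition of the transpose and the rule (AB)^T = B^T A^T.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

assert A.T[1, 2] == A[2, 1]                 # (A^T)_{ij} = A_{ji}
assert np.allclose((A @ B).T, B.T @ A.T)    # transpose reverses products
```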

Dot product

The matrix product of a row vector by a column vector is called a dot product. If $x$ and $y$ are vectors from the same $n$-dimensional vector space, then

$$x^T y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n.$$
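A small concrete check in NumPy:

```python
# The dot product x^T y as a sum of coordinate-wise products.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# For 1-D NumPy arrays, x @ y computes the dot product x^T y.
assert np.isclose(x @ y, 1 * 4 + 2 * 5 + 3 * 6)  # = 32
```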

Outer product

The matrix product of a column vector by a row vector is called an outer product. If $x$ and $y$ are vectors from the same $n$-dimensional vector space, then

$$x y^T = \begin{bmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\ \vdots & \vdots & \ddots & \vdots \\ x_n y_1 & x_n y_2 & \cdots & x_n y_n \end{bmatrix},$$

as you can verify yourself using the definition of matrix multiplication.
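One way to carry out that verification, using NumPy's `np.outer`:

```python
# The outer product x y^T is the matrix whose (i, j) entry is x_i * y_j.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

P = np.outer(x, y)   # equivalently, x[:, None] @ y[None, :]
for i in range(3):
    for j in range(3):
        assert P[i, j] == x[i] * y[j]
```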

Example: Gram matrix

The Gram matrix of a set of vectors $x_1, \ldots, x_k \in \mathbb{R}^n$ is the $k \times k$ matrix $G$ such that $G_{ij} = x_i^T x_j$. You can check for yourself that if $X$ is the matrix whose $i$-th column is $x_i$, then $G = X^T X$. Moreover, letting $A = X^T$, it is also true that

$$G = A A^T = \sum_{j=1}^{n} a_j a_j^T,$$

where $a_1, \ldots, a_n$ are the columns of $A$ (and hence the rows of $X$, represented as column vectors).
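The sketch below checks all three descriptions of the Gram matrix on a small random example; the choice of $k = 3$ vectors in $\mathbb{R}^4$ is arbitrary.

```python
# The Gram matrix three ways: entrywise, as X^T X, and as a sum of outer products.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))      # columns x_1, x_2, x_3 live in R^4

G = X.T @ X
# Entrywise: G_{ij} = x_i^T x_j.
for i in range(3):
    for j in range(3):
        assert np.isclose(G[i, j], X[:, i] @ X[:, j])

# As a sum of outer products of the columns of A = X^T (the rows of X).
A = X.T
assert np.allclose(G, sum(np.outer(A[:, k], A[:, k]) for k in range(4)))
```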

5.5 Different Views on Matrix Multiplication

There are at least two useful ways to interpret matrix-vector and matrix-matrix multiplication: from the perspective of the columns of the matrix on the left-hand side and from the perspective of its rows. In the following, let $A$ be an $m \times n$ matrix and $B$ an $n \times p$ matrix, and let $a_1, \ldots, a_n$ be the columns of $A$, $\tilde{a}_1, \ldots, \tilde{a}_m$ the rows of $A$ (represented as column vectors), $b_1, \ldots, b_p$ the columns of $B$, and $\tilde{b}_1, \ldots, \tilde{b}_n$ the rows of $B$ (also represented as column vectors).

Linear combinations of column vectors

As defined in the previous section, the matrix-vector product $Ax$, $x \in \mathbb{R}^n$, is given by forming a linear combination of the columns of $A$ with the coordinates of $x$ as the coefficients:

$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n.$$

An engineering interpretation of this view is that the columns of $A$ are actuators: each $a_i$ represents the movement produced by the $i$-th actuator, and $Ax$ represents the result of applying the actuators to the inputs $x_1, \ldots, x_n$.

The matrix product $AB$ can be calculated by horizontally concatenating the vectors $Ab_1, \ldots, Ab_p$:

$$AB = \begin{bmatrix} Ab_1 & Ab_2 & \cdots & Ab_p \end{bmatrix}.$$
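Both of these claims can be checked numerically; the shapes below are arbitrary.

```python
# Column-oriented views: Ax as a linear combination of A's columns, and
# AB as the horizontal concatenation of the products A b_j.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
x = rng.standard_normal(4)

# Ax = x_1 a_1 + ... + x_n a_n.
assert np.allclose(A @ x, sum(x[i] * A[:, i] for i in range(4)))

# AB = [ A b_1  A b_2 ].
assert np.allclose(A @ B, np.column_stack([A @ B[:, j] for j in range(2)]))
```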

Linear combinations of row vectors

The vector-matrix product $x^T A$, where $x^T$ is a row vector (so that $x \in \mathbb{R}^m$), can be cast as a linear combination of the rows of $A$ with the coordinates of $x$ as coefficients:

$$x^T A = x_1 \tilde{a}_1^T + x_2 \tilde{a}_2^T + \cdots + x_m \tilde{a}_m^T.$$

The matrix product $AB$ can be calculated by vertically concatenating the row vectors $\tilde{a}_1^T B, \ldots, \tilde{a}_m^T B$:

$$AB = \begin{bmatrix} \tilde{a}_1^T B \\ \tilde{a}_2^T B \\ \vdots \\ \tilde{a}_m^T B \end{bmatrix}.$$
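The row-oriented analogues of the previous checks:

```python
# Row-oriented views: x^T A as a linear combination of A's rows, and AB as
# the vertical stack of the products a~_i^T B.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
x = rng.standard_normal(3)

# x^T A = x_1 a~_1^T + ... + x_m a~_m^T.
assert np.allclose(x @ A, sum(x[i] * A[i, :] for i in range(3)))

# AB stacks the rows a~_i^T B on top of one another.
assert np.allclose(A @ B, np.vstack([A[i, :] @ B for i in range(3)]))
```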

Dot products with row vectors

The matrix-vector product $Ax$ can also be cast in terms of the rows of $A$: the $i$-th entry of $Ax$ is the dot product of $\tilde{a}_i$ with $x$, so that

$$Ax = \begin{bmatrix} \tilde{a}_1^T x \\ \tilde{a}_2^T x \\ \vdots \\ \tilde{a}_m^T x \end{bmatrix}.$$

An engineering interpretation is that the rows of $A$ are sensors whose measurements of an input signal or data point $x$ are recorded in the rows of $Ax$.
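In code, this view amounts to one dot product per row:

```python
# Each entry of Ax is the dot product of the corresponding row of A with x.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

assert np.allclose(A @ x, np.array([A[i, :] @ x for i in range(3)]))
```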

Sum of outer products

The matrix product $AB$ can be expressed as a sum of outer products of the columns of $A$ with the rows of $B$,

$$AB = \sum_{i=1}^{n} a_i \tilde{b}_i^T,$$

as you should verify.
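One way to do so numerically, with arbitrary shapes:

```python
# AB as a sum of n outer products, one per column of A and row of B.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

assert np.allclose(A @ B, sum(np.outer(A[:, i], B[i, :]) for i in range(4)))
```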

5.6 Exercises

  1. Prove that if two lines in a vector space $V$ are parallel, then applying a linear map $T : V \to W$ to each of them yields two lines that are parallel in $W$.