Linear Operations and A Bit of Matrix

Sanjiv Gautam
5 min read · May 31, 2020


Extracted from https://www.youtube.com/watch?v=yjBerM5jWsc&list=PL49CF3715CB9EF31D&index=9

Basis Vector of a Span

A set of linearly independent vectors whose linear combinations span the whole space!

Span of Vectors

If we take two vectors a and b and form their linear combinations (meaning scaling a, b, or both, and adding them), then the set of all vectors we can reach from those combinations is called the SPAN of the vectors.

Span of Vectors is like asking: “What is the set of vectors you can reach with only these two operations of adding and scaling (linear combination)?”

If I have the i and j vectors (two basis vectors; I could have chosen non-basis vectors as well), then their span is the whole 2D SPACE.
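As a quick numerical sanity check (a sketch in NumPy; the target point is an arbitrary choice of mine), any 2D point can be reached by scaling and adding i and j:

```python
import numpy as np

i_hat = np.array([1.0, 0.0])    # basis vector i
j_hat = np.array([0.0, 1.0])    # basis vector j

target = np.array([3.5, -2.0])  # an arbitrary 2D point

# Solve c1*i + c2*j = target for the scalars c1, c2
c1, c2 = np.linalg.solve(np.column_stack([i_hat, j_hat]), target)

print(c1 * i_hat + c2 * j_hat)  # lands exactly on the target point
```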

If we have three vectors i, j and k, and k contributes nothing to the span, meaning i and j give the same span as i, j and k (k is useless, just like you), then the set is called linearly dependent. It means k can be expressed in terms of i and j. It means k falls in the span of i and j, so it is linearly dependent on them.

What does it mean for a bunch of vectors to span a space?

Vectors v1, v2, v3 … vn span a space means that the space contains all linear combinations of those vectors. The set may include both linearly independent and linearly dependent vectors.

Instead of saying “take all linear combinations of the vectors and put them into a space”, we just say SPAN.

We are interested in a set of vectors that spans the space and is also independent. This brings us to BASIS!

Independency of vectors:

Let x1, x2, x3 … xn be n vectors. They are said to be independent if the only way to get c1.x1 + c2.x2 + … + cn.xn = 0 is with all coefficients c1 = c2 = … = cn = 0; otherwise they are dependent.

How to picture them? If one vector (d) can be formed as a linear combination of two other vectors (a and b), then the set is dependent. In other words:

c1*a + c2*b = -d (d is formed as a linear combination of a and b).

Equivalently, c1*a + c2*b + c3*d = 0 with c3 != 0, so d can be expressed as a linear combination of a and b (we could just as well solve for a or b instead), making it a dependent vector. Note that a and b themselves may also be linearly dependent.
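One way to check dependence numerically (the example vectors here are made up): stack the vectors as columns and compare the matrix rank with the number of vectors.

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 1.0])
d = 2 * a + 3 * b  # d is a linear combination of a and b by construction

vectors = np.column_stack([a, b, d])

# Rank smaller than the number of columns means some column
# depends on the others, i.e. the set is linearly dependent
rank = np.linalg.matrix_rank(vectors)
print(rank, vectors.shape[1])  # 2 3 -> dependent
```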

Basis Vectors of Span

They hold two properties:

  1. Linearly independent
  2. Span the space

We do not look for a single vector to be a basis. We check whether a set of vectors is linearly independent and spans the space.

If we take R3 as our space and pick 2 linearly independent vectors (v1, v2), are they a basis?

NO! Because they do not span the space. How do we know they do not span the space? Because their linear combinations c1v1 + c2v2 fill at most a 2-dimensional plane, not the whole 3-dimensional space. Think of it this way: put [v1, v2] into a matrix. Each column has 3 rows, but there are only 2 columns, so the column space has at most 2 dimensions. Two vectors can never reach all of R3, so they don’t span it.
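The same point in code (v1 and v2 are example vectors I picked): two vectors in R3 give a rank of at most 2, i.e. a plane, not the whole space.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])  # example vectors in R3
v2 = np.array([0.0, 1.0, 1.0])

A = np.column_stack([v1, v2])   # a 3*2 matrix

# The span's dimension equals the rank: 2, a plane inside R3
print(np.linalg.matrix_rank(A))  # 2
```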

Column Space

The set of all possible outputs A.v is called the column space. The column space is the span of the columns of the matrix.
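A small sketch of this claim (the matrix entries are arbitrary): A.v is exactly a linear combination of the columns of A, weighted by the entries of v.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # arbitrary 3*2 matrix
v = np.array([10.0, -1.0])   # arbitrary input vector

# A @ v is v[0] * (column 0) + v[1] * (column 1),
# so every output lies in the span of the columns
by_columns = v[0] * A[:, 0] + v[1] * A[:, 1]
print(np.allclose(A @ v, by_columns))  # True
```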

Rank

When a matrix transformation A.v lands every vector on a line, the transformation is said to have rank 1. If all vectors land on a 2D plane, we say it has rank 2.

RANK => the number of dimensions of the output after the transformation.

When we have a 2*2 matrix, what is the highest rank it can be?

RANK 2. The output can have 2 dimensions at max.

The max rank a matrix can have is min(rows, columns).

If a row or column of the matrix is dependent on the others (check whether it can be obtained from the others by linear operations; if it can, it is linearly dependent), it does not count toward the rank.

Rank is the maximum number of linearly independent rows or columns in a matrix.

[Figure: a matrix with three columns, where column 2 is a scalar multiple of column 1 (rank 2)]

Why does this have rank 2?

Because column 2 can be obtained from column 1 by a linear operation. So only columns 1 and 3 are independent.
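The original figure isn’t reproduced here, but the same situation can be rebuilt with a made-up matrix whose second column is a multiple of the first:

```python
import numpy as np

# Hypothetical stand-in for the matrix in the figure:
# column 2 is 2x column 1, so only columns 1 and 3 are independent
M = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0],
              [3.0, 6.0, 5.0]])

print(np.linalg.matrix_rank(M))  # 2
```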

Null Space

The null space is the set of vectors that the matrix transformation squishes into the origin, i.e. all v with A.v = 0.
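A sketch of finding such a vector (the matrix is a made-up example; the SVD trick below is one common way to do this, not the only one):

```python
import numpy as np

# A squishes the plane onto a line (row 2 is 2x row 1)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Rows of Vt that match (numerically) zero singular values
# span the null space
_, s, Vt = np.linalg.svd(A)
null_vec = Vt[-1]  # direction for the smallest singular value

print(np.allclose(A @ null_vec, 0))  # True: squished to the origin
```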

Matrix Shape meaning

The figure above shows a 2*3 matrix: 2 rows and 3 columns. The 2 rows mean the output will have 2 dimensions, and the 3 columns mean the input is 3-dimensional, so a 3D input is mapped to a 2D output.
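A shape check in NumPy (the entries are arbitrary) makes the row/column roles concrete:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])   # 2 rows, 3 columns (entries arbitrary)

v = np.array([1.0, 1.0, 1.0])    # 3-dimensional input
out = A @ v                      # 2-dimensional output

print(A.shape, out.shape)  # (2, 3) (2,)
```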

Matrix Multiplication

Multiplying two matrices is like doing 2 transformations! If A is a rotation and B is a shear matrix, then A.B gives us the combined transformation where B is applied first and then A (read right to left)!

How we see matrix ?

If we are to multiply matrices A, B and C, then it would be something like ABC. What we are doing is first applying the C transformation, then B and then A. Remember this order.
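This order can be verified numerically (the rotation and shear below are standard textbook matrices, with a made-up input vector):

```python
import numpy as np

theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # 90-degree rotation
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                          # horizontal shear

v = np.array([1.0, 0.0])  # arbitrary test vector

# (rotation @ shear) applies the shear FIRST, then the rotation
combined = rotation @ shear
same = np.allclose(combined @ v, rotation @ (shear @ v))

# ...and the order matters: swapping gives a different transformation
different = not np.allclose(rotation @ shear, shear @ rotation)
print(same, different)  # True True
```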

A^-1.A = I. It means that if we apply transformation A and then apply the inverse of what we have applied, the result equals the identity.
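A minimal sketch with an invertible matrix I picked at random:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # invertible: det = 1

A_inv = np.linalg.inv(A)

# Undoing A right after applying it lands back on the identity
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```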

Why is there no inverse of a matrix whose determinant is 0?

Remember what a determinant of 0 means: the transformation squishes space onto a line or a point (considering a 2D matrix). To invert it, we would have to go from that squished line back to the full plane, which would require mapping a single point to many outputs, and that is not something a function can do.
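NumPy makes the same point concrete: inverting a determinant-0 matrix raises an error (the matrix below is a made-up singular example):

```python
import numpy as np

# Determinant 0: this matrix squishes the plane onto a line
singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])

print(np.isclose(np.linalg.det(singular), 0.0))  # True

try:
    np.linalg.inv(singular)
except np.linalg.LinAlgError:
    print("no inverse: the squish cannot be undone")
```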

Dot Product of Vectors

Do you know the geometrical meaning of the dot product of two vectors v and w?

When w is projected on v, the projection has a certain length, right? The dot product of v and w is (length of the projection of w on v) * (length of v). It gives a measure of how similar the two vectors are!

What happens when they are perpendicular? The projection of v on w is a single point, whose length is 0 (because it’s a point!), so multiplying the two gives us 0 (FUCKING BRILLIANT!)

Now comes the relation between the arithmetic and the geometric interpretation. Why is multiplying v and w element-wise and adding the results the same as multiplying the length of v by the length of the projection of w onto v? If v is the vector [1 2 3] and w = [4 5 6], then v.w means 1*4 + 2*5 + 3*6.
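The agreement of the two computations can be checked directly (using the same example vectors as above):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

elementwise = np.sum(v * w)  # 1*4 + 2*5 + 3*6 = 32

# Geometric version: (length of w's projection onto v) * (length of v)
proj_len = np.dot(w, v / np.linalg.norm(v))  # scalar projection of w on v
geometric = proj_len * np.linalg.norm(v)

print(np.isclose(elementwise, geometric))  # True: the two views agree
```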

When we take the dot product of v and w, imagine that v is projected onto w, so the new coordinates of v lie in the span of w (the same line).

