A vector v ∈ ℝⁿ is an ordered list of n numbers. Operations:
u + v = (u₁+v₁, ..., uₙ+vₙ)
c·u = (cu₁, ..., cuₙ) (scalar multiplication)
u·v = u₁v₁ + ... + uₙvₙ (dot product, returns scalar)
Dot product gives geometry:
u·v = |u|·|v|·cos θ → for nonzero u, v: θ = 90° iff u·v = 0 (orthogonal)
Span(v₁, ..., vₖ) = all linear combinations c₁v₁ + ... + cₖvₖ. Linearly independent iff the only way to combine them to 0 is all cᵢ = 0.
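A quick NumPy check of the geometry above; the two vectors are just an assumed example chosen to be orthogonal.

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, -1.0])

dot = u @ v                               # u·v = 1*2 + 2*0 + 2*(-1) = 0
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
theta_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(dot)        # 0.0  -> orthogonal
print(theta_deg)  # 90.0
```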
| Concept | Test |
|---|---|
| Span ℝⁿ | RREF (vectors as columns) has a pivot in every row |
| Linearly indep. | RREF has a pivot in every column |
| Basis of ℝⁿ | Both: exactly n vectors with a pivot in every row and column |
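One way to run these tests numerically is a rank check in NumPy (a sketch; the three vectors are an assumed example with a dependent third column):

```python
import numpy as np

# Candidate vectors become the columns of V.
V = np.column_stack([[1, 0, 1],
                     [0, 1, 1],
                     [1, 1, 2]])      # third column = first + second

n, k = V.shape                        # ambient dimension n, number of vectors k
r = np.linalg.matrix_rank(V)

print("linearly independent:", r == k)   # False: rank 2 < 3 columns
print("spans R^n:", r == n)              # False: rank 2 < 3 rows
print("basis of R^n:", r == n == k)      # False
```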
A vector is a direction + magnitude (no fixed location). A point is a position. They look the same in coordinates but live in different geometric worlds. Vector spaces always include 0; affine point sets don't.
Orthogonal: u·v = 0. Orthonormal: orthogonal AND each |v| = 1.
If {q₁, ..., qₖ} orthonormal:
x = (x·q₁)q₁ + (x·q₂)q₂ + ... + (x·qₖ)qₖ (for any x in span{q₁, ..., qₖ})
Coefficients come straight from dot products; no system to solve. This is why orthonormal bases are gold.
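A minimal NumPy sketch of reading coefficients off dot products; q₁, q₂ are an assumed orthonormal pair.

```python
import numpy as np

q1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # orthonormal pair in R^3
q2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

x = 3 * q1 - 2 * q2                   # x lies in span{q1, q2}

c1, c2 = x @ q1, x @ q2               # coefficients = dot products, nothing to solve
print(c1, c2)                                  # 3.0 -2.0
print(np.allclose(x, c1 * q1 + c2 * q2))       # True
```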
proj_v(u) = (u·v / v·v) · v (project u onto line spanned by v)
proj_W(x) = Σ (x·qᵢ)·qᵢ (project x onto W with orthonormal basis q₁, ..., qₖ)
A = QR, where Q has orthonormal columns and R is upper triangular. Comes from Gram-Schmidt on the columns of A. Used to solve least squares numerically.
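A sketch tying projection, QR, and least squares together via numpy.linalg.qr; A and b are assumed example data.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

Q, R = np.linalg.qr(A)               # Q: orthonormal columns spanning Col(A), R: upper triangular

proj = Q @ (Q.T @ b)                 # proj_W(b) = Σ (b·qᵢ) qᵢ  =  Q Qᵀ b

x_hat = np.linalg.solve(R, Q.T @ b)  # least squares: R x̂ = Qᵀ b
print(np.allclose(A @ x_hat, proj))  # True: A x̂ is exactly the projection of b onto Col(A)
```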
Nonzero orthogonal vectors are automatically linearly independent — but the converse fails. Independent vectors can have any angle between them; orthogonality is a stronger condition.
(AB)ᵢⱼ = Σ_k Aᵢₖ · Bₖⱼ (row i of A · column j of B)
Need # cols of A = # rows of B. If A is m×n and B is n×p, the product AB is m×p.
| Property | Form |
|---|---|
| Associative | A(BC) = (AB)C |
| Distributive | A(B+C) = AB + AC |
| NOT commutative | AB ≠ BA in general |
| Transpose | (AB)ᵀ = BᵀAᵀ (order flips!) |
| Identity | AI = IA = A |
An m×n matrix A maps ℝⁿ → ℝᵐ. Each column of A is the image of a standard basis vector.
A·eᵢ = i-th column of A
(AB)ᵀ = BᵀAᵀ, not AᵀBᵀ. The order flips. Same rule for inverses: (AB)⁻¹ = B⁻¹A⁻¹. Memorize this; it shows up everywhere.
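A small NumPy sanity check of both facts; the matrices are arbitrary assumed examples.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])                # 3x2: maps R^2 -> R^3
B = np.array([[1, 0, 2],
              [0, 1, 3]])             # 2x3

e1 = np.array([1, 0])
print(np.array_equal(A @ e1, A[:, 0]))       # True: A·e1 is the first column of A

print(np.array_equal((A @ B).T, B.T @ A.T))  # True: (AB)ᵀ = BᵀAᵀ
# A.T @ B.T has shape (2, 2), not (3, 3): the order must flip even to make shapes match.
```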
A·v = λ·v (v ≠ 0)
v is an eigenvector, λ is its eigenvalue. A acts on v as pure scaling: no rotation, no shearing, just a stretch by λ.
1. Solve det(A − λI) = 0 → characteristic polynomial → λ values
2. For each λ, solve (A − λI)v = 0 → null space gives eigenvectors
3. Stack the eigenvectors as columns of P: A is diagonalizable iff the total # of independent eigenvectors = n
A = PDP⁻¹ where P = [v₁ | v₂ | ... | vₙ], D = diag(λ₁, ..., λₙ)
Lets you compute Aᵏ easily: Aᵏ = PDᵏP⁻¹. Powers of a diagonal D are trivial.
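A NumPy sketch of the three-step recipe and the PDP⁻¹ reconstruction, on an assumed 2×2 example with distinct eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2

eigvals, P = np.linalg.eig(A)         # columns of P are eigenvectors
D = np.diag(eigvals)
P_inv = np.linalg.inv(P)

print(np.allclose(A, P @ D @ P_inv))                       # A = P D P⁻¹
print(np.allclose(np.linalg.matrix_power(A, 10),
                  P @ np.diag(eigvals**10) @ P_inv))       # A¹⁰ = P D¹⁰ P⁻¹
```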
If det(A) = 0, then λ = 0 is an eigenvalue (with eigenvectors = null space). Don't dismiss it because it 'doesn't stretch'. Zero eigenvalues correspond to non-invertibility.
A is diagonalizable iff it has n linearly independent eigenvectors.
| Sufficient condition | Why |
|---|---|
| n distinct eigenvalues | distinct λ → independent eigenvectors |
| A symmetric | spectral theorem: orthonormal eigenbasis |
| A normal (AAᵀ = AᵀA) | diagonalizable by unitary matrix |
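For the symmetric row of the table, numpy.linalg.eigh hands you the orthonormal eigenbasis directly; a sketch on an assumed symmetric S.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric

w, Q = np.linalg.eigh(S)              # eigh is for symmetric/Hermitian matrices

print(np.allclose(Q.T @ Q, np.eye(2)))       # True: eigenvectors are orthonormal
print(np.allclose(S, Q @ np.diag(w) @ Q.T))  # True: S = Q D Qᵀ (spectral theorem)
```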
Aᵏ = P · Dᵏ · P⁻¹ where Dᵏ = diag(λ₁ᵏ, ..., λₙᵏ)
Massive speed-up: computing A¹⁰⁰ by repeated multiplication is slow; via diagonalization it is nearly instant.
PCA in 1 sentence: diagonalize the covariance matrix; eigenvectors point along axes of max variance.
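A minimal sketch of that sentence on synthetic data; the random mixing matrix and shapes are assumptions, not a full PCA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                          [1.0, 0.5]])   # correlated 2-D data

C = np.cov(X, rowvar=False)           # 2x2 covariance matrix (rows = observations)
w, V = np.linalg.eigh(C)              # eigenvalues ascending, columns of V = directions

pc1 = V[:, -1]                        # eigenvector of the largest eigenvalue
print(pc1)                            # first principal component: axis of max variance
```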
Some matrices have repeated eigenvalues but too few independent eigenvectors, e.g. [[2,1],[0,2]] has λ=2 (twice) but only a 1-dimensional eigenspace. NOT diagonalizable. You need the Jordan form.
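You can confirm the deficiency with a rank count (a sketch, assuming the 2×2 example above): geometric multiplicity is n minus the rank of A − λI.

```python
import numpy as np

J = np.array([[2.0, 1.0],
              [0.0, 2.0]])            # λ = 2 with algebraic multiplicity 2

geom_mult = 2 - np.linalg.matrix_rank(J - 2 * np.eye(2))
print(geom_mult)                      # 1 < 2 -> only one independent eigenvector, not diagonalizable
```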
1. Form augmented matrix [A | b]
2. Use 3 row operations: swap, scale, add multiple of row
3. Reduce until RREF: pivots = 1, columns above/below pivots = 0
4. Read solutions off RREF
| RREF shape | Solutions |
|---|---|
| Consistent, pivot in every variable column | unique solution |
| Consistent, some variable column has no pivot | infinitely many solutions (free vars) |
| Row [0 0 … 0 \| k] with k ≠ 0 | NO solution (inconsistent) |
REF (echelon) is just stair-step shape. RREF (reduced) requires pivots = 1 AND columns above pivots = 0. RREF is unique; many REFs exist for the same A. Most problems want RREF.
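SymPy's Matrix.rref() runs steps 1–3 for you; a sketch on an assumed consistent system with one free variable.

```python
from sympy import Matrix

aug = Matrix([[1, 2, -1, 1],
              [2, 4,  0, 4],
              [1, 2,  1, 3]])          # augmented matrix [A | b]

rref, pivot_cols = aug.rref()          # returns (RREF matrix, pivot column indices)

print(rref)                            # [[1, 2, 0, 2], [0, 0, 1, 1], [0, 0, 0, 0]]
print(pivot_cols)                      # (0, 2): no pivot in column 1, so x₂ is a free variable
```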
det(A) ≠ 0 ⟺ A invertible ⟺ Ax = 0 has only the trivial solution
2×2: |a b; c d| = ad − bc
3×3: cofactor expansion along any row/col
n×n: row-reduce and track sign flips + scalings
| Property | Effect on det |
|---|---|
| Swap rows | det negates |
| Scale row by c | det multiplies by c |
| Add k·row to another | det unchanged |
| Transpose | det(Aᵀ) = det(A) |
| Product | det(AB) = det(A)·det(B) |
| Inverse | det(A⁻¹) = 1/det(A) |
|det(A)| = volume scaling factor. det(A) = 2 means A doubles areas (2D) or volumes (3D). det < 0 means orientation flipped.
The determinant is multilinear, not additive. There's no nice rule for det of a sum. Don't make this up.
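A quick NumPy check of a few rows of the table, plus the non-additivity trap; the matrices are assumed examples.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # det = 5
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])            # det = -4

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))         # True: multiplicative
print(np.isclose(np.linalg.det(A[::-1]), -np.linalg.det(A)))   # True: row swap negates det
print(np.isclose(np.linalg.det(A + B),
                 np.linalg.det(A) + np.linalg.det(B)))         # False: det is NOT additive
```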
| If you see… | Use § |
|---|---|
| 'span', 'linear combination' | §1 vector spaces |
| 'linearly independent' | §1 + §3 (RREF check) |
| 'basis', 'dimension' | §1 + §3 |
| 'compute AB' | §2 multiplication |
| 'rotation', 'reflection' | §2 transformation matrix |
| 'solve Ax = b' | §3 RREF |
| 'rank', 'pivot', 'free var' | §3 |
| 'invertible', 'det', 'singular' | §4 |
| 'volume', 'area scaling' | §4, \|det\| as scaling factor |
| 'eigenvalue', 'eigenvector' | §5 char polynomial |
| 'diagonalize', 'Aᵏ', 'PDP⁻¹' | §5 + §7 |
| 'orthogonal', 'projection' | §6 |
| 'Gram-Schmidt' | §6 |
| 'least squares' | §6 normal equations |
| 'Markov chain', 'steady state' | §7 (eigenvector for λ=1) |
| 'PCA', 'principal component' | §7 SVD / spectral |
Question about Ax=b? RREF the augmented matrix.
Question about A alone? Eigenvalues, det, rank.
Geometry word ('rotation', 'project')? Build the matrix, then apply.
'Find a basis for…'? Find spanning set, RREF, keep pivot columns.
To solve Ax = b, you almost never compute A⁻¹ explicitly. Just RREF [A | b]. Computing A⁻¹ is wasteful and numerically unstable. Inverses matter for theory, not for numerics.
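A minimal comparison on an assumed 2×2 system: same answer, but solve factorizes A instead of inverting it.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)             # preferred: LU-factorizes A, never forms A⁻¹
x_via_inv = np.linalg.inv(A) @ b      # works, but slower and less numerically stable

print(x)                              # [2. 3.]
print(np.allclose(x, x_via_inv))      # True
```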
Always sanity-check dimensions before computing. A 4×3 matrix can't be inverted. A·B with mismatched inner dims is undefined. Half the lost points come from skipping this check.