Linear Algebra Notes (NetEase Open Course)
Linear Algebra Handnote (1)
If L is lower triangular with 1's on the diagonal, so is L^{-1}.

Elimination = Factorization: A = LU

A^T is the matrix that makes these two inner products equal for every x and y: (Ax)^T y = x^T (A^T y)
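A quick numeric check of this identity, using a small made-up matrix and plain-Python helpers (no libraries):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2, 0],
     [3, 1, 4]]
x = [1, -1, 2]   # any x in R^3
y = [2, 5]       # any y in R^2

lhs = dot(matvec(A, x), y)             # (Ax)^T y
rhs = dot(x, matvec(transpose(A), y))  # x^T (A^T y)
print(lhs, rhs)  # 48 48
```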
Inner product of Ax with y = inner product of x with A^T y.

DEFINITION: The space R^n consists of all column vectors v with n components.

DEFINITION: A subspace of a vector space is a set of vectors (including 0) that satisfies two requirements: (1) v + w is in the subspace, (2) cv is in the subspace.

The column space consists of all linear combinations of the columns. The combinations are all possible vectors Ax. They fill the column space C(A).

The system Ax = b is solvable if and only if b is in the column space of A.

The nullspace of A consists of all solutions to Ax = 0. These vectors x are in R^n. The nullspace containing all solutions of Ax = 0 is denoted by N(A).
- the nullspace is a subspace of R^n; the column space is a subspace of R^m
- the nullspace consists of all combinations of the special solutions
Nullspace (plane) perpendicular to row space (line)
Ax = 0 has r pivots and n − r free variables: n columns minus r pivot columns. The nullspace matrix N (containing all special solutions) has the n − r special solutions as its columns; then AN = 0. Ax = 0 has r independent equations, so it has n − r independent solutions.

x_particular: the particular solution solves A x_p = b
x_nullspace: the n − r special solutions solve A x_n = 0
Complete solution: one x_p, many x_n: x = x_p + x_n

The four possibilities for linear equations depend on the rank r:
- r = m and r = n: square and invertible, Ax = b has 1 solution
- r = m and r < n: short and wide, Ax = b has ∞ solutions
- r < m and r = n: tall and thin, Ax = b has 0 or 1 solution
- r < m and r < n: not full rank, Ax = b has 0 or ∞ solutions
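A concrete sketch of the complete solution x = x_p + c x_n, on an assumed 2×3 matrix of rank 2 (so there is n − r = 1 special solution). The particular and special solutions are worked out by hand; the code only verifies them:

```python
def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

A = [[1, 2, 1],
     [0, 0, 2]]
b = [3, 4]

xp = [1, 0, 2]    # particular solution: solves A xp = b (free variable x2 = 0)
s  = [-2, 1, 0]   # special solution:    solves A s  = 0 (free variable x2 = 1)

print(matvec(A, xp))  # [3, 4] = b
print(matvec(A, s))   # [0, 0]

# every x = xp + c*s is a solution, e.g. c = 7:
x = [p + 7 * si for p, si in zip(xp, s)]
print(matvec(A, x))   # still [3, 4]
```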
- Independent vectors (no extra vectors)
- Spanning a space (enough vectors to produce the rest)
- Basis for a space (not too many or too few)
- Dimension of a space (the number of vectors in a basis)
Any set of n vectors in R^m must be linearly dependent if n > m. The columns span the column space; the rows span the row space.
- The column space / row space of a matrix is the subspace of R^m / R^n spanned by the columns/rows.
A basis for a vector space is a sequence of vectors with two properties: linearly independent, and spanning the space.
- The basis is not unique. But the combination that produces the vector is unique.
- The columns of an n×n invertible matrix are a basis for R^n.
- The pivot columns of A are a basis for its column space.
DEFINITION: The dimension of a space is the number of vectors in every basis.
The space Z contains only the zero vector. The dimension of this space is zero. The empty set (containing no vectors) is a basis for Z. We can never allow the zero vector into a basis, because then linear independence is lost.
Four Fundamental Subspaces
1. The row space is C(A^T), a subspace of R^n
2. The column space is C(A), a subspace of R^m
3. The nullspace is N(A), a subspace of R^n
4. The left nullspace is N(A^T), a subspace of R^m
- A has the same row space as its echelon form R: same dimension r and same basis.
- The column space of A has dimension r. The number of independent columns equals the number of independent rows.
- A has the same nullspace as R: same dimension n − r and same basis.
- The left nullspace of A (the nullspace of A^T) has dimension m − r.
Fundamental Theorem of Linear Algebra, Part 1
- The column space and row space both have dimension r. The nullspaces have dimensions n − r and m − r.

Every rank one matrix has the special form A = u v^T = column × row.

The nullspace N(A) and the row space C(A^T) are orthogonal subspaces of R^n.

DEFINITION: The orthogonal complement of a subspace V contains every vector that is perpendicular to V.
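A tiny check of this orthogonality on an assumed rank-1 matrix: a nullspace vector has zero dot product with every row.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2],
     [3, 6]]          # rank 1: the second row is 3 times the first
x_null = [2, -1]      # solves Ax = 0

for row in A:
    print(dot(row, x_null))  # 0 for every row: x_null is perpendicular to the row space
```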
Fundamental Theorem of Linear Algebra, Part 2
- The nullspace N(A) is the orthogonal complement of the row space C(A^T) in R^n
- The left nullspace N(A^T) is the orthogonal complement of the column space C(A) in R^m
Projection Onto a Line
- The projection of b onto the line through a is p = x̄ a = (a^T b / a^T a) a
- The projection matrix is P = a a^T / (a^T a)
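A minimal sketch of projection onto a line, with illustrative vectors; exact fractions keep the arithmetic clean:

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = [1, 1, 1]
b = [Fraction(1), Fraction(2), Fraction(6)]

xhat = dot(a, b) / dot(a, a)           # a^T b / a^T a = 9/3 = 3
p = [xhat * ai for ai in a]            # projection p = (3, 3, 3)
e = [bi - pi for bi, pi in zip(b, p)]  # error e = b - p

assert p == [3, 3, 3]
assert dot(a, e) == 0   # the error b - p is perpendicular to a
```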
Projection Onto a Subspace
Problem: find the combination x̄_1 a_1 + ⋯ + x̄_n a_n closest to a given vector b. The normal equations are A^T (b − A x̄) = 0, or A^T A x̄ = A^T b.

- The symmetric matrix A^T A is n×n. It is invertible if the a's are independent.
- The solution is x̄ = (A^T A)^{-1} A^T b
- The projection of b onto the subspace is p = A x̄ = A (A^T A)^{-1} A^T b
- The projection matrix is P = A (A^T A)^{-1} A^T
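These formulas, exercised on a small worked example (the 3×2 matrix and b are chosen for easy arithmetic); the 2×2 matrix A^T A is inverted with the adjugate formula:

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

A = [[1, 0], [1, 1], [1, 2]]
b = [6, 0, 0]
AT = [list(r) for r in zip(*A)]

(e11, e12), (e21, e22) = matmul(AT, A)        # A^T A = [[3, 3], [3, 5]]
det = e11 * e22 - e12 * e21                   # = 6
inv = [[Fraction(e22, det), Fraction(-e12, det)],
       [Fraction(-e21, det), Fraction(e11, det)]]

xhat = matvec(inv, matvec(AT, b))   # (A^T A)^{-1} A^T b
p = matvec(A, xhat)                 # projection of b onto the column space
e = [bi - pi for bi, pi in zip(b, p)]

assert xhat == [5, -3]
assert p == [5, 2, -1]
assert matvec(AT, e) == [0, 0]      # the error is perpendicular to every column
```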
Least Squares Approximations
When Ax = b has no solution, multiply by A^T and solve A^T A x̄ = A^T b.

- The least squares solution x̄ minimizes E = ||Ax − b||^2. This is the sum of the squares of the errors in the m equations (m > n).
- The best x̄ comes from the normal equations A^T A x̄ = A^T b.
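A least squares sketch: fitting a line C + Dt to three made-up data points by forming the normal equations and solving the resulting 2×2 system exactly:

```python
from fractions import Fraction

ts = [1, 2, 3]
bs = [1, 2, 2]
# A has columns (1,1,1) and (t1,t2,t3); the normal equations A^T A xhat = A^T b
# reduce to a 2x2 system in the unknowns C and D.
m = len(ts)
s1, st, stt = m, sum(ts), sum(t * t for t in ts)
sb, stb = sum(bs), sum(t * b for t, b in zip(ts, bs))

det = s1 * stt - st * st                 # det(A^T A) = 6
C = Fraction(sb * stt - st * stb, det)   # = 2/3
D = Fraction(s1 * stb - st * sb, det)    # = 1/2
print(C, D)   # 2/3 1/2
```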
Orthogonal Bases and Gram-Schmidt
Orthonormal vectors:

- A matrix with orthonormal columns is assigned the special letter Q. The matrix Q is easy to work with because Q^T Q = I.
- When Q is square, Q^T Q = I means that Q^T = Q^{-1}: transpose = inverse.
- If the columns are only orthogonal (not unit vectors), the dot products give a diagonal matrix (not the identity matrix).

Every permutation matrix is an orthogonal matrix.

If Q has orthonormal columns (Q^T Q = I), it leaves lengths unchanged: ||Qx|| = ||x||. Orthogonal is good.
Use Gram-Schmidt for the Factorization A = QR
(Gram-Schmidt) From independent vectors a_1, …, a_n, Gram-Schmidt constructs orthonormal vectors q_1, …, q_n. The matrices with these columns satisfy A = QR. Then R = Q^T A is upper triangular because later q's are orthogonal to earlier a's.

Least squares: R^T R x̄ = R^T Q^T b, or R x̄ = Q^T b, or x̄ = R^{-1} Q^T b
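A compact Gram-Schmidt sketch in plain floating-point Python (the two input vectors are made up); the orthogonality claims above are asserted at the end:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(cols):
    qs = []
    for a in cols:
        # subtract the components along the earlier q's, then normalize
        for q in qs:
            c = dot(q, a)
            a = [ai - c * qi for ai, qi in zip(a, q)]
        norm = math.sqrt(dot(a, a))
        qs.append([ai / norm for ai in a])
    return qs

a1, a2 = [1.0, 1.0, 0.0], [1.0, 0.0, 1.0]
q1, q2 = gram_schmidt([a1, a2])

assert abs(dot(q1, q2)) < 1e-12        # q1 is perpendicular to q2
assert abs(dot(q1, q1) - 1) < 1e-12    # unit length
assert abs(dot(q2, a1)) < 1e-12        # later q's are orthogonal to earlier a's
```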
Determinants
- The determinant is zero when the matrix has no inverse
- The product of the pivots is the determinant
- The determinant changes sign when two rows (or two columns) are exchanged
- Determinants give A^{-1} and A^{-1} b (this formula is called Cramer's Rule)
- When the edges of a box are the rows of A, the volume is |det A|
- For n special numbers λ, called eigenvalues, the determinant of A − λI is zero
The properties of the determinant
- The determinant of the n×n identity matrix is 1.
- The determinant changes sign when two rows are exchanged.
- The determinant is a linear function of each row separately (all other rows stay fixed!).
- If two rows of A are equal, then det A = 0.
- Subtracting a multiple l of one row from another row leaves det A unchanged: |a b; c − la d − lb| = |a b; c d|.
- A matrix with a row of zeros has det A = 0.
- If A is triangular then det A = a_11 a_22 ⋯ a_nn = product of diagonal entries.
- If A is singular then det A = 0. If A is invertible then det A ≠ 0.
- Elimination goes from A to U: det A = ±det U = ±(product of the pivots).
- The determinant of AB is (det A)(det B).
- The transpose A^T has the same determinant as A.
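The last two rules, checked on small assumed 2×2 matrices:

```python
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def matmul2(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[2, 1], [5, 3]]      # det = 1
B = [[4, 7], [1, 2]]      # det = 1
AB = matmul2(A, B)
print(det2(AB), det2(A) * det2(B))   # 1 1

AT = [list(r) for r in zip(*A)]
print(det2(AT), det2(A))             # 1 1
```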
Every rule for the rows can also be applied to the columns.
Cramer’s Rule
- If det A is not zero, Ax = b is solved by determinants: x_1 = det B_1 / det A, x_2 = det B_2 / det A, …, x_n = det B_n / det A
- The matrix B_j has the j-th column of A replaced by the vector b
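Cramer's Rule on an assumed 2×2 system, with exact fractions:

```python
from fractions import Fraction

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

A = [[2, 1], [1, 3]]
b = [5, 10]

B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # column 1 of A replaced by b
B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # column 2 of A replaced by b

x1 = Fraction(det2(B1), det2(A))
x2 = Fraction(det2(B2), det2(A))
print(x1, x2)   # 1 3
assert [2 * x1 + 1 * x2, 1 * x1 + 3 * x2] == b   # check: Ax = b
```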
Cross Product
||u × v|| = ||u|| ||v|| |sin θ| and |u ⋅ v| = ||u|| ||v|| |cos θ|

The length of u × v equals the area of the parallelogram with sides u and v. It points by the right-hand rule (along your right thumb when the fingers curl from u to v).
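A small check of the area formula, with made-up vectors in the xy-plane so the cross product points along the z-axis:

```python
import math

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u = [1.0, 2.0, 0.0]
v = [3.0, 1.0, 0.0]
w = cross(u, v)                     # [0, 0, -5]

area = norm(w)                      # parallelogram area = 5
cos_t = dot(u, v) / (norm(u) * norm(v))
sin_t = math.sqrt(1 - cos_t ** 2)
assert abs(area - norm(u) * norm(v) * sin_t) < 1e-12
assert dot(w, u) == 0 and dot(w, v) == 0   # u x v is perpendicular to both
```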
Eigenvalues and Eigenvectors
The basic equation is Ax = λx. The number λ is an eigenvalue of A.

- When A is squared, the eigenvectors stay the same. The eigenvalues are squared.

The projection matrix has eigenvalues λ = 1 and λ = 0:
- P is singular, so λ = 0 is an eigenvalue
- Each column of P adds to 1, so λ = 1 is an eigenvalue
- P is symmetric, so its eigenvectors are perpendicular

Permutations have all |λ| = 1. The reflection matrix has eigenvalues 1 and −1.
Solve the eigenvalue problem for an n×n matrix:
- Compute the determinant of A − λI. It is a polynomial in λ of degree n.
- Find the roots of this polynomial: the n eigenvalues.
- For each eigenvalue λ, solve (A − λI)x = 0 to find an eigenvector x.
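The three steps carried out for an assumed 2×2 matrix, where det(A − λI) = λ² − (trace)λ + det can be solved directly:

```python
import math

A = [[4, 1],
     [2, 3]]
trace = A[0][0] + A[1][1]                      # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 10

# roots of the characteristic polynomial λ² - 7λ + 10 = 0
disc = math.sqrt(trace ** 2 - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
print(lam1, lam2)       # 5.0 2.0

# for λ = 5, A - 5I = [[-1, 1], [2, -2]], so x = (1, 1) solves (A - λI)x = 0
x = [1, 1]
Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
assert Ax == [5 * x[0], 5 * x[1]]   # Ax = λx

# the facts below: product of λ's = determinant, sum of λ's = trace
assert lam1 * lam2 == det
assert lam1 + lam2 == trace
```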
Bad news: elimination does not preserve the λ's.
Good news: the product of the eigenvalues equals the determinant; the sum of the eigenvalues equals the sum of the diagonal entries (the trace).
Diagonalizing a Matrix
Suppose the n×n matrix A has n linearly independent eigenvectors x_1, …, x_n. Put them into the columns of an eigenvector matrix S. Then S^{-1} A S is the eigenvalue matrix Λ:

S^{-1} A S = Λ = diag(λ_1, …, λ_n)
There is no connection between invertibility and diagonalizability:
- Invertibility is concerned with the eigenvalues (λ = 0 or λ ≠ 0).
- Diagonalizability is concerned with the eigenvectors (too few or enough for S).
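A worked check of S^{-1} A S = Λ; the assumed matrix A = [[1,2],[0,3]] has eigenvectors (1,0) for λ=1 and (1,1) for λ=3, precomputed by hand, and the code only multiplies everything out:

```python
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 2], [0, 3]]
S = [[1, 1], [0, 1]]       # eigenvectors in the columns
S_inv = [[1, -1], [0, 1]]  # inverse of this triangular S

Lam = matmul(S_inv, matmul(A, S))
print(Lam)    # [[1, 0], [0, 3]]: the eigenvalues land on the diagonal
```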
Applications to differential equations
One equation du/dt = λu has the solution u(t) = C e^{λt}.

For n equations du/dt = Au starting from the vector u(0) at t = 0: solve linear constant-coefficient equations by exponentials u = e^{λt} x, when Ax = λx.
Symmetric Matrices
- A symmetric matrix has only real eigenvalues.
- The eigenvectors can be chosen orthonormal.
(Spectral Theorem) Every symmetric matrix has the factorization A = QΛQ^T with real eigenvalues in Λ and orthonormal eigenvectors in S = Q:
- Symmetric diagonalization: A = QΛQ^{-1} = QΛQ^T with Q^{-1} = Q^T

(Orthogonal Eigenvectors) Eigenvectors of a real symmetric matrix (when they correspond to different λ's) are always perpendicular.

product of pivots = determinant = product of eigenvalues
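A check on an assumed 2×2 symmetric matrix: A = [[2,1],[1,2]] has real eigenvalues 3 and 1 with perpendicular eigenvectors (1,1) and (1,−1):

```python
def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

A = [[2, 1], [1, 2]]
x1, lam1 = [1, 1], 3
x2, lam2 = [1, -1], 1

assert matvec(A, x1) == [lam1 * c for c in x1]   # A x1 = 3 x1
assert matvec(A, x2) == [lam2 * c for c in x2]   # A x2 = 1 x2
assert x1[0] * x2[0] + x1[1] * x2[1] == 0        # x1 is perpendicular to x2

# determinant = product of eigenvalues
assert lam1 * lam2 == A[0][0] * A[1][1] - A[0][1] * A[1][0]
```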
Eigenvalues vs. Pivots
For symmetric matrices the pivots and the eigenvalues have the same signs:
- The number of positive eigenvalues of A = A^T equals the number of positive pivots.

All symmetric matrices are diagonalizable.
Positive Definite Matrices
Positive definite matrices are the symmetric matrices that have positive eigenvalues.
2×2 matrices, A = [a b; b c]:
- The eigenvalues of A are positive if and only if a > 0 and ac − b^2 > 0.
- x^T A x is positive for all nonzero vectors x
- If A and B are symmetric positive definite, so is A + B
When a symmetric matrix has one of these five properties, it has them all:
- All n pivots are positive
- All n upper left determinants are positive
- All n eigenvalues are positive
- x^T A x is positive except at x = 0 (this is the energy-based definition)
- A equals R^T R for a matrix R with independent columns
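Several of these tests, checked on a standard 2×2 positive definite example A = [[2,−1],[−1,2]]:

```python
from fractions import Fraction

a, b, c = 2, -1, 2           # A = [[a, b], [b, c]]

# upper left determinants
d1, d2 = a, a * c - b * b
assert d1 > 0 and d2 > 0     # 2 > 0 and 3 > 0

# pivots from elimination: a, then c - b^2/a
p1, p2 = Fraction(a), Fraction(c) - Fraction(b * b, a)
assert p1 > 0 and p2 > 0     # pivots 2 and 3/2
assert p1 * p2 == d2         # product of pivots = determinant

# energy x^T A x = a x1^2 + 2b x1 x2 + c x2^2 at a sample nonzero x
x1, x2 = 3, 5
energy = a * x1 * x1 + 2 * b * x1 * x2 + c * x2 * x2
assert energy > 0            # 18 - 30 + 50 = 38
```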
Positive Semidefinite Matrices
Similar Matrices
DEFINITION: Let M be any invertible matrix. Then B = M^{-1} A M is similar to A.

(No change in λ's) Similar matrices A and M^{-1} A M have the same eigenvalues. If x is an eigenvector of A, then M^{-1} x is an eigenvector of B. But two matrices can have the same repeated λ and still fail to be similar.
Jordan Form
- What is “Jordan Form”?
- For every A, we want to choose M so that M^{-1} A M is as nearly diagonal as possible.

J^T is similar to J; the matrix M that produces the similarity happens to be the reverse identity.

(Jordan form) If A has s independent eigenvectors, it is similar to a matrix J that has s Jordan blocks on its diagonal. Some matrix M puts A into Jordan form.
- Jordan block: the eigenvalue is on the diagonal with 1's just above it. Each block in J has one eigenvalue λ_i and one eigenvector.
A is similar to B if they share the same Jordan form J, and not otherwise.
Singular Value Decomposition (SVD)
Two sets of singular vectors, u's and v's. The u's are eigenvectors of A A^T and the v's are eigenvectors of A^T A.

The singular vectors v_1, …, v_r are in the row space of A. The outputs u_1, …, u_r are in the column space of A. The singular values σ_1, …, σ_r are all positive numbers. The equations A v_i = σ_i u_i tell us, column by column, that AV = UΣ.

- We need n − r more v's and m − r more u's, from the nullspace N(A) and the left nullspace N(A^T). They can be orthonormal bases for those two nullspaces. Include all the v's and u's in V and U, so these matrices become square.

The orthonormal columns of U and V then give the Singular Value Decomposition A = U Σ V^T.
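A by-hand sketch of the first step of this recipe for an assumed 2×2 matrix: form A^T A and take the square roots of its eigenvalues to get the singular values.

```python
import math

A = [[3, 0], [4, 5]]
AT = [list(r) for r in zip(*A)]
ATA = [[sum(x * y for x, y in zip(row, col)) for col in zip(*A)] for row in AT]
# ATA = [[25, 20], [20, 25]]: symmetric, so its eigenvalues are real

tr = ATA[0][0] + ATA[1][1]                           # 50
det = ATA[0][0] * ATA[1][1] - ATA[0][1] * ATA[1][0]  # 225
disc = math.sqrt(tr ** 2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2        # 45.0 and 5.0

sigma1, sigma2 = math.sqrt(lam1), math.sqrt(lam2)
assert abs(sigma1 * sigma2 - 15) < 1e-9   # product of σ's = |det A| = 15
print(sigma1, sigma2)
```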