Operator Norms for Weighted Sequence Spaces #
This file establishes the connection between explicit matrix norm formulas and abstract operator norms on weighted ℓ¹ spaces.
Mathematical Background #
Finite-Dimensional Case (Exercise 2.7.2) #
For matrices A : ℝ^{N+1} → ℝ^{N+1} acting on the weighted ℓ¹_ν space, the induced operator norm equals the weighted column-sum formula:
‖A‖_{op} = max_{0≤j≤N} (1/νʲ) Σᵢ |Aᵢⱼ| νⁱ
This is the content of finWeightedMatrixNorm_eq_opNorm.
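For reference, the column-sum formula can be written down as a standalone Lean definition. This is only an illustrative sketch (the name `colSumNorm` is hypothetical; the file's actual definition is `finWeightedMatrixNorm`):

```lean
import Mathlib

/-- Illustrative sketch of the weighted column-sum norm from Exercise 2.7.2;
the name `colSumNorm` is hypothetical. -/
noncomputable def colSumNorm (ν : ℝ) (N : ℕ)
    (A : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ) : ℝ :=
  -- max over columns j of (1/νʲ) · Σᵢ |Aᵢⱼ| νⁱ
  Finset.univ.sup' Finset.univ_nonempty fun j : Fin (N + 1) =>
    (ν ^ (j : ℕ))⁻¹ * ∑ i : Fin (N + 1), |A i j| * ν ^ (i : ℕ)
```

The `sup'` (rather than `sup`) avoids needing a bottom element on ℝ; `Fin (N + 1)` is nonempty, so `Finset.univ_nonempty` applies.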
Infinite-Dimensional Case (Propositions 7.3.13-7.3.14) #
For bounded linear operators A : ℓ¹_ω → ℓ¹_ω:
Proposition 7.3.13: ‖A‖ ≤ sup_n (1/ωₙ) Σₘ |Aₘₙ| ωₘ
Proposition 7.3.14: For block-diagonal operators of the form
A = [ A_N  0   ]
    [ 0    c·I ]
where A_N is an (N+1)×(N+1) matrix and c is a scalar on the tail, we have: ‖A‖ ≤ max(K, |c|)
where K = finWeightedMatrixNorm(A_N).
Main Definitions #
- WeightedPiLp: Finite-dimensional weighted ℓ¹ space on Fin (N+1) → ℝ
- FinWeightedMatrix.toWeightedCLM: Matrix as continuous linear map on the weighted space
- BlockDiagOp: Block-diagonal operator structure for Chapter 7.7
Main Results #
- finWeightedMatrixNorm_eq_opNorm: Matrix norm = operator norm (Exercise 2.7.2)
- Proposition_7_3_14: Operator norm bound for block-diagonal operators
References #
- Exercise 2.7.2: Finite-dimensional weighted norms
- Proposition 7.3.13: General column-norm bound
- Proposition 7.3.14: Block-diagonal operator bound
- Theorem 7.7.1: Application to Taylor series ODE
Finite-Dimensional Weighted ℓ¹ Space #
We construct the finite-dimensional weighted ℓ¹_ν space on Fin (N+1) → ℝ.
The norm is: ‖x‖_{1,ν} = Σₙ |xₙ| νⁿ
This can be realized as PiLp 1 (fun n : Fin (N+1) => ScaledReal ν n),
but we also need direct definitions for computing with explicit formulas.
The finite-dimensional weighted ℓ¹ space as PiLp.
Elements are functions Fin (N+1) → ℝ with norm Σₙ |xₙ| νⁿ.
We use ScaledReal ν n as the fiber at index n, which has norm |x| · νⁿ.
Then PiLp 1 computes the ℓ¹ sum automatically.
Equations
- FinWeighted.Space ν N = PiLp 1 fun (n : Fin (N + 1)) => ScaledReal ν ↑n
Instances For
Matrix as Continuous Linear Map on Weighted Space #
We define how a matrix A : Matrix (Fin (N+1)) (Fin (N+1)) ℝ acts as a continuous linear map on the weighted ℓ¹ space.
Bound on ‖Ax‖ in terms of column norms and ‖x‖. This is the key estimate:

‖Ax‖ ≤ (max_j colNorm_j) · ‖x‖

Proof outline (following Proposition 7.3.13):

‖Ax‖ = Σᵢ |(Ax)ᵢ| νⁱ
     = Σᵢ |Σⱼ Aᵢⱼ xⱼ| νⁱ
     ≤ Σᵢ Σⱼ |Aᵢⱼ| |xⱼ| νⁱ              [triangle inequality]
     = Σⱼ (Σᵢ |Aᵢⱼ| νⁱ) |xⱼ|            [swap sums]
     = Σⱼ (colNorm_j · νʲ) |xⱼ|          [definition of colNorm]
     ≤ (max_j colNorm_j) · Σⱼ |xⱼ| νʲ    [factor out the max]
     = (max_j colNorm_j) · ‖x‖
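Stated in Lean, the estimate has roughly the following shape. This is a statement-only sketch with the column norm inlined and hypothetical variable names (the file's actual lemma is `mulVecWeightedLinear_norm_le`):

```lean
import Mathlib

/-- Sketch of the key estimate; statement only, proof elided. -/
example (ν : ℝ) (hν : 0 < ν) (N : ℕ)
    (A : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ) (x : Fin (N + 1) → ℝ) :
    -- ‖Ax‖_{1,ν} ≤ (max_j colNorm_j) · ‖x‖_{1,ν}
    ∑ i, |∑ j, A i j * x j| * ν ^ (i : ℕ) ≤
      (Finset.univ.sup' Finset.univ_nonempty fun j : Fin (N + 1) =>
        (ν ^ (j : ℕ))⁻¹ * ∑ i, |A i j| * ν ^ (i : ℕ)) *
      ∑ j, |x j| * ν ^ (j : ℕ) := by
  sorry
```

Note the positivity hypothesis `hν`: it is needed to factor `νʲ · (νʲ)⁻¹ = 1` when the column norm is pulled out.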
Lemma 7.3.11 application: The column norm is achieved on basis vectors. For the j-th basis vector eⱼ with ‖eⱼ‖ = νʲ: ‖A eⱼ‖ / ‖eⱼ‖ = colNorm_j This shows the matrix norm is achieved, not just bounded.
Exercise 2.7.2: The weighted matrix norm equals the operator norm.

‖A‖_{1,ν} = max_j (1/νʲ) Σᵢ |Aᵢⱼ| νⁱ = ‖A‖_{op}

Proof:
- (≥): The sup over column norms is achieved on basis vectors (opNorm_achieved_on_basis)
- (≤): Every vector gives at most the max column norm (mulVecWeightedLinear_norm_le)
toWeightedCLM preserves matrix multiplication.
This shows that the embedding of matrices into weighted CLMs is a ring homomorphism. The proof swaps the order of summation and uses associativity of multiplication.
toWeightedCLM preserves subtraction from identity.
This shows that applying the matrix (I - M) via toWeightedCLM equals the identity CLM minus toWeightedCLM of M. The proof reduces to showing that the Kronecker delta sum ∑ₖ δᵢₖ · xₖ = xᵢ.
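The Kronecker-delta computation at the heart of that reduction is the following one-liner (a sketch; `simp` is expected to discharge it via the `Finset.sum_ite_eq` family):

```lean
import Mathlib

-- ∑ₖ δᵢₖ · xₖ = xᵢ: summing against the Kronecker delta picks out entry i.
example (N : ℕ) (x : Fin (N + 1) → ℝ) (i : Fin (N + 1)) :
    ∑ k, (if i = k then (1 : ℝ) else 0) * x k = x i := by
  simp
```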
If the CLM corresponding to a matrix is a unit, so is the matrix.
The proof uses the fact that toWeightedCLM M is essentially M.mulVec transported to the weighted space via an isomorphism. If the CLM is invertible (a unit), then M.mulVec must be injective, which for square matrices over a field means det M ≠ 0.
Key insight: FinWeighted.Space ν N ≅ Fin (N+1) → ℝ as vector spaces (the norms differ but the underlying spaces are the same). So IsUnit (toWeightedCLM M) ↔ IsUnit (toLin' M) ↔ IsUnit M.
If ‖I - M‖ < 1 for a square matrix M, then M is invertible.
This is the matrix version of the Neumann series invertibility criterion.
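In mathlib, the underlying Neumann-series fact is available for any complete normed ring; modulo the exact lemma name, the invocation looks like this sketch:

```lean
import Mathlib

-- If ‖t‖ < 1 in a complete normed ring, then 1 - t is a unit:
-- its inverse is the Neumann series Σₙ tⁿ.
example {R : Type*} [NormedRing R] [CompleteSpace R] (t : R) (h : ‖t‖ < 1) :
    IsUnit (1 - t) :=
  (Units.oneSub t h).isUnit
```

The matrix version then follows by transporting along `toWeightedCLM` (using that the CLM algebra on the weighted space is complete) and applying the unit-transfer lemma above.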
Block-Diagonal Operators #
For Chapter 7.7, operators have block-diagonal structure:
A = [ A_N  0   ]
    [ 0    c·I ]
where A_N is an (N+1)×(N+1) finite matrix and c is a scalar acting diagonally on the infinite tail.
Proposition 7.3.14 gives: ‖A‖ ≤ max(‖A_N‖_{1,ν}, |c|)
A block-diagonal operator on ℓ¹_ν.
The operator acts as:
- A finite matrix finBlock on coordinates 0, 1, ..., N
- A scalar tailScalar times the identity on coordinates N+1, N+2, ...
See equations (7.47) and (7.48) for the operators A† and A in Chapter 7.7.
- finBlock : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ
  The (N+1)×(N+1) matrix acting on the finite part
- tailScalar : ℝ
  The scalar multiplying the identity on the tail
Instances For
The action of a block-diagonal operator on a sequence.
(A·a)ₙ = (finBlock · a_{[0,N]})ₙ  for n ≤ N
(A·a)ₙ = tailScalar · aₙ          for n > N
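Concretely, the two cases can be sketched as a single dependent `if` (a hypothetical standalone version of `BlockDiagOp.action`, with the matrix and scalar passed explicitly):

```lean
import Mathlib

/-- Sketch of the pointwise action of a block-diagonal operator. -/
noncomputable def blockAction (N : ℕ)
    (finBlock : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ) (c : ℝ)
    (a : ℕ → ℝ) : ℕ → ℝ := fun n =>
  if h : n ≤ N then
    -- finite block: matrix-vector product on coordinates 0..N
    ∑ j : Fin (N + 1), finBlock ⟨n, Nat.lt_succ_of_le h⟩ j * a j
  else
    -- tail: scalar multiplication
    c * a n
```

The dependent `if` is what lets the proof `h : n ≤ N` be used to coerce `n` into `Fin (N + 1)`.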
Equations
Instances For
The projection of a sequence onto its first N+1 coordinates
Equations
- BlockDiag.projFin a n = lpWeighted.toSeq a ↑n
Instances For
The tail projection: coordinates N+1, N+2, ...
Equations
- BlockDiag.projTail a n = if n ≤ N then 0 else lpWeighted.toSeq a n
Instances For
Direct sum decomposition: a = embedFin(projFin a) + projTail a
Bound on the finite block contribution.
For the finite part, we use the weighted matrix norm.
Equality version: factoring out the scalar from tail
The action of a block-diagonal operator preserves membership in ℓ¹_ν.
Motivation: This lemma is a prerequisite for Proposition 7.3.14, which is used in Theorem 7.7.1 (Taylor series ODE). The operators appearing in Theorem 7.7.1 (such as I - A·DF(ā)) have block-diagonal structure. Before we can apply Proposition 7.3.14's bound ‖A‖_{op} ≤ max(K, |c|), we must first establish that the block-diagonal operator maps ℓ¹_ν → ℓ¹_ν.
Structure: For A = [A_N, 0; 0, c·I], the output (Aa)_n is:
- n ≤ N: Σⱼ A_{nj} aⱼ (finite matrix-vector product)
- n > N: c · aₙ (scalar multiplication on tail)
Summability proof: We show Σₙ |(Aa)ₙ| νⁿ < ∞ using comparison.
Define bounding function g(n) := (𝟙_{n≤N} · M) + |c| · |aₙ| · νⁿ
where M := max_{0≤n≤N} |Σⱼ A_{nj} aⱼ| · νⁿ (finite maximum).
Then:
- For n ≤ N: |(Aa)ₙ| νⁿ ≤ M ≤ g(n)
- For n > N: |(Aa)ₙ| νⁿ = |c| |aₙ| νⁿ ≤ g(n)
The function g is summable since:
- 𝟙_{n≤N} · M has finite support
- |c| · |aₙ| · νⁿ is summable (a ∈ ℓ¹_ν)
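The summability of the bounding function g is exactly this pattern; as a sketch (with f standing for n ↦ |aₙ| νⁿ, and assuming `summable_of_ne_finset_zero` for the finitely-supported part):

```lean
import Mathlib

-- g n = (indicator of n ≤ N) · M + |c| · f n is summable when f is:
-- the first summand has finite support, the second is a scalar multiple.
example (N : ℕ) (M c : ℝ) (f : ℕ → ℝ) (hf : Summable f) :
    Summable fun n => (if n ≤ N then M else 0) + |c| * f n := by
  refine Summable.add ?_ (hf.mul_left |c|)
  refine summable_of_ne_finset_zero (s := Finset.range (N + 1)) fun n hn => ?_
  rw [Finset.mem_range, Nat.lt_succ_iff] at hn
  simp [hn]
```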
The action as a linear map
Equations
- A.toLinearMap = { toFun := fun (a : ↥(l1Weighted ν)) => lpWeighted.mk (A.action (lpWeighted.toSeq a)) ⋯, map_add' := ⋯, map_smul' := ⋯ }
Instances For
The block-diagonal operator as a continuous linear map
Equations
- A.toCLM = A.toLinearMap.mkContinuous (max (l1Weighted.finWeightedMatrixNorm ν A.finBlock) |A.tailScalar|) ⋯
Instances For
The CLM action matches the BlockDiagOp action
Operator norm of the CLM is bounded by max of finite block norm and tail scalar
Block-diagonal operator is injective if finite block matrix is invertible and tail scalar ≠ 0.
The proof shows that if A x = A y, then both the finite and tail parts match, which forces x = y when the finite block is invertible and the tail scalar is nonzero.
Proposition 7.3.14: Block-Diagonal Operator Norm Bound #
For a block-diagonal operator A with finite block A_N and tail scalar c, we have: ‖A‖ ≤ max(‖A_N‖_{1,ν}, |c|)
This is the key result connecting the computable matrix norm to the abstract operator norm for the Chapter 7.7 formalization.
Proposition 7.3.14: Operator norm bound for block-diagonal operators.
Given a block-diagonal operator A = [A_N, 0; 0, c·I] on ℓ¹_ν, we have:
‖A‖ ≤ max(K, |c|)
where K = ‖A_N‖_{1,ν} is the weighted matrix norm of the finite block.
Proof outline (following the textbook):
- Split ‖Aa‖ into finite and tail contributions
- Bound finite part using matrix norm: ≤ K · ‖a_{fin}‖
- Bound tail part using scalar: ≤ |c| · ‖a_{tail}‖
- Combine using ‖a‖ = ‖a_{fin}‖ + ‖a_{tail}‖ and max bound
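The final combining step is elementary real arithmetic; as a self-contained Lean sketch (with x, y standing for ‖a_{fin}‖ and ‖a_{tail}‖):

```lean
import Mathlib

-- K·x + |c|·y ≤ max(K, |c|)·(x + y) for nonnegative x, y:
-- the combining step at the end of the proof of Proposition 7.3.14.
example (K c x y : ℝ) (hx : 0 ≤ x) (hy : 0 ≤ y) :
    K * x + |c| * y ≤ max K |c| * (x + y) := by
  have h1 : K * x ≤ max K |c| * x :=
    mul_le_mul_of_nonneg_right (le_max_left _ _) hx
  have h2 : |c| * y ≤ max K |c| * y :=
    mul_le_mul_of_nonneg_right (le_max_right _ _) hy
  calc K * x + |c| * y ≤ max K |c| * x + max K |c| * y := add_le_add h1 h2
    _ = max K |c| * (x + y) := by ring
```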