
RadiiPolynomial.TaylorODE.OperatorNorm

Operator Norms for Weighted Sequence Spaces #

This file establishes the connection between explicit matrix norm formulas and abstract operator norms on weighted ℓ¹ spaces.

Mathematical Background #

Finite-Dimensional Case (Exercise 2.7.2) #

For matrices A : ℝ^{N+1} → ℝ^{N+1} acting on the weighted ℓ¹_ν space, the induced operator norm equals the weighted column-sum formula:

‖A‖_{op} = max_{0≤j≤N} (1/νʲ) Σᵢ |Aᵢⱼ| νⁱ

This is the content of finWeightedMatrixNorm_eq_opNorm.
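As a quick sanity check of the formula (a worked instance, not taken from the file), take N = 1, ν = 2, and the matrix A with rows (1, 2) and (3, 4), indices running over {0, 1}:

column j = 0:  (1/2⁰)·(|1|·2⁰ + |3|·2¹) = 1 + 6 = 7
column j = 1:  (1/2¹)·(|2|·2⁰ + |4|·2¹) = (2 + 8)/2 = 5

so ‖A‖_{op} = max(7, 5) = 7. For ν > 1 the weight νⁱ makes high-index rows count more, while the factor 1/νʲ discounts high-index columns.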

Infinite-Dimensional Case (Propositions 7.3.13-7.3.14) #

For bounded linear operators A : ℓ¹_ω → ℓ¹_ω:

Proposition 7.3.13: ‖A‖ ≤ sup_n (1/ωₙ) Σₘ |Aₘₙ| ωₘ

Proposition 7.3.14: For block-diagonal operators of the form

A = [ A_N   0  ]
    [  0   c·I ]

where A_N is an (N+1)×(N+1) matrix and c is a scalar on the tail, we have: ‖A‖ ≤ max(K, |c|)

where K = finWeightedMatrixNorm(A_N).
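A small illustration (numbers chosen here for illustration, not from the file): if the finite block has weighted norm K = 0.3 and the tail scalar is c = 0.9, the proposition gives

‖A‖ ≤ max(0.3, 0.9) = 0.9 < 1,

the kind of below-1 bound used with the Neumann-series invertibility criterion (‖I − M‖ < 1) appearing later in this file.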

Main Definitions #

• FinWeighted.Space: the finite-dimensional weighted ℓ¹_ν space on Fin (N+1) → ℝ
• FinWeightedMatrix.toWeightedCLM: a matrix acting as a continuous linear map on that space
• BlockDiag.BlockDiagOp: block-diagonal operators on ℓ¹_ν (finite block plus scalar tail)

Main Results #

• finWeightedMatrixNorm_eq_opNorm: the weighted column-sum formula equals the operator norm (Exercise 2.7.2)
• Proposition_7_3_14.opNorm_blockDiag_le: the operator norm of a block-diagonal operator is bounded by max(K, |c|)

References #

Finite-Dimensional Weighted ℓ¹ Space #

We construct the finite-dimensional weighted ℓ¹_ν space on Fin (N+1) → ℝ. The norm is: ‖x‖_{1,ν} = Σₙ |xₙ| νⁿ

This can be realized as PiLp 1 (fun n : Fin (N+1) => ScaledReal ν n), but we also need direct definitions for computing with explicit formulas.
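For instance (an illustrative computation, not part of the file), with N = 2, ν = 1/2 and x = (1, −2, 3):

‖x‖_{1,ν} = |1|·(1/2)⁰ + |−2|·(1/2)¹ + |3|·(1/2)² = 1 + 1 + 3/4 = 11/4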

@[reducible, inline]
abbrev FinWeighted.Space (ν : PosReal) (N : ℕ) : Type

The finite-dimensional weighted ℓ¹ space as PiLp.

Elements are functions Fin (N+1) → ℝ with norm Σₙ |xₙ| νⁿ.

We use ScaledReal ν n as the fiber at index n, which has norm |x| · νⁿ. Then PiLp 1 computes the ℓ¹ sum automatically.

Equations
Instances For
    theorem FinWeighted.norm_eq_sum {ν : PosReal} {N : ℕ} (x : Space ν N) :
    ‖x‖ = ∑ n : Fin (N + 1), |(x.ofLp n)| * ν ^ n
    def FinWeighted.stdBasis {ν : PosReal} {N : ℕ} (j : Fin (N + 1)) :
    Space ν N

    Standard basis vector eⱼ with 1 at position j and 0 elsewhere

    Equations
    Instances For
      @[simp]
      theorem FinWeighted.stdBasis_apply_self {ν : PosReal} {N : ℕ} (j : Fin (N + 1)) :
      (stdBasis j).ofLp j = 1
      @[simp]
      theorem FinWeighted.stdBasis_apply_ne {ν : PosReal} {N : ℕ} (i j : Fin (N + 1)) (h : i ≠ j) :
      (stdBasis j).ofLp i = 0
      theorem FinWeighted.norm_stdBasis {ν : PosReal} {N : ℕ} (j : Fin (N + 1)) :
      ‖stdBasis j‖ = ν ^ j

      The norm of a basis vector: ‖eⱼ‖ = νʲ
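      This is the norm_eq_sum formula specialized to a basis vector: only the j-th term of the sum survives,

      ‖eⱼ‖ = Σₙ |(eⱼ)ₙ| νⁿ = 1 · νʲ = νʲ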

      Matrix as Continuous Linear Map on Weighted Space #

      We define how a matrix A : Matrix (Fin (N+1)) (Fin (N+1)) ℝ acts as a continuous linear map on the weighted ℓ¹ space.

      Matrix action as a linear map on the weighted space. This is the standard matrix action (Ax)ᵢ = Σⱼ Aᵢⱼ xⱼ

      Equations
      • One or more equations did not get rendered due to their size.
      Instances For

        Bound on ‖Ax‖ in terms of column norms and ‖x‖. This is the key estimate:

        ‖Ax‖ ≤ (max_j colNorm_j) · ‖x‖

        Proof outline (following Proposition 7.3.13):

        ‖Ax‖ = Σᵢ |(Ax)ᵢ| νⁱ
             = Σᵢ |Σⱼ Aᵢⱼ xⱼ| νⁱ
             ≤ Σᵢ Σⱼ |Aᵢⱼ| |xⱼ| νⁱ              [triangle inequality]
             = Σⱼ (Σᵢ |Aᵢⱼ| νⁱ) |xⱼ|             [swap sums]
             = Σⱼ (colNorm_j · νʲ) |xⱼ|          [definition of colNorm]
             ≤ (max_j colNorm_j) · Σⱼ |xⱼ| νʲ    [factor out the max]
             = (max_j colNorm_j) · ‖x‖

        The matrix as a continuous linear map on the weighted space

        Equations
        Instances For

          Operator norm is bounded by the weighted matrix norm

           theorem FinWeightedMatrix.opNorm_achieved_on_basis {ν : PosReal} {N : ℕ} (A : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ) (j : Fin (N + 1)) :

           Lemma 7.3.11 application: The column norm is achieved on basis vectors. For the j-th basis vector eⱼ with ‖eⱼ‖ = νʲ:

           ‖A eⱼ‖ / ‖eⱼ‖ = colNorm_j

           This shows the matrix norm is achieved, not just bounded.
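           Spelled out, the computation behind this (using norm_eq_sum, norm_stdBasis, and the definition of the matrix action) is

           ‖A eⱼ‖ = Σᵢ |(A eⱼ)ᵢ| νⁱ = Σᵢ |Aᵢⱼ| νⁱ = colNorm_j · νʲ = colNorm_j · ‖eⱼ‖,

           so dividing by ‖eⱼ‖ = νʲ recovers exactly the j-th weighted column sum.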

           Exercise 2.7.2: The weighted matrix norm equals the operator norm.

           ‖A‖_{1,ν} = max_j (1/νʲ) Σᵢ |Aᵢⱼ| νⁱ = ‖A‖_{op}

           Proof:

          • (≥): The sup over column norms is achieved on basis vectors (opNorm_achieved_on_basis)
          • (≤): Every vector gives at most the max column norm (mulVecWeightedLinear_norm_le)
           theorem FinWeightedMatrix.toWeightedCLM_mul {ν : PosReal} {N : ℕ} (A B : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ) :

          toWeightedCLM preserves matrix multiplication.

          This shows that the embedding of matrices into weighted CLMs is a ring homomorphism. The proof swaps the order of summation and uses associativity of multiplication.
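           In coordinates, the swap is the familiar computation (a sketch in the notation above, not the Lean proof term):

           ((A·B) x)ᵢ = Σⱼ (Σₖ Aᵢₖ Bₖⱼ) xⱼ = Σₖ Aᵢₖ (Σⱼ Bₖⱼ xⱼ) = (A (B x))ᵢ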

          toWeightedCLM preserves subtraction from identity.

          This shows that applying the matrix (I - M) via toWeightedCLM equals the identity CLM minus toWeightedCLM of M. The proof reduces to showing that the Kronecker delta sum ∑ₖ δᵢₖ · xₖ = xᵢ.
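           In coordinates (with δᵢₖ the Kronecker delta):

           ((I − M) x)ᵢ = Σₖ (δᵢₖ − Mᵢₖ) xₖ = Σₖ δᵢₖ xₖ − (M x)ᵢ = xᵢ − (M x)ᵢ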

          If the CLM corresponding to a matrix is a unit, so is the matrix.

          The proof uses the fact that toWeightedCLM M is essentially M.mulVec transported to the weighted space via an isomorphism. If the CLM is invertible (a unit), then M.mulVec must be injective, which for square matrices over a field means det M ≠ 0.

          Key insight: FinWeighted.Space ν N ≅ Fin (N+1) → ℝ as vector spaces (the norms differ but the underlying spaces are the same). So IsUnit (toWeightedCLM M) ↔ IsUnit (toLin' M) ↔ IsUnit M.

          If ‖I - M‖ < 1 for a square matrix M, then M is invertible.

          This is the matrix version of the Neumann series invertibility criterion.
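           The underlying estimate is the standard Neumann series (stated here for orientation, not as the Lean proof term): writing T = I − M with ‖T‖ < 1,

           M⁻¹ = Σ_{k≥0} Tᵏ,   ‖M⁻¹‖ ≤ Σ_{k≥0} ‖T‖ᵏ = 1 / (1 − ‖T‖),

           the series converging because ‖Tᵏ‖ ≤ ‖T‖ᵏ.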

          Block-Diagonal Operators #

          For Chapter 7.7, operators have block-diagonal structure:

           A = [ A_N   0  ]
               [  0   c·I ]

          where A_N is an (N+1)×(N+1) finite matrix and c is a scalar acting diagonally on the infinite tail.

          Proposition 7.3.14 gives: ‖A‖ ≤ max(‖A_N‖_{1,ν}, |c|)

           structure BlockDiag.BlockDiagOp (ν : PosReal) (N : ℕ) : Type

          A block-diagonal operator on ℓ¹_ν.

          The operator acts as:

          • A finite matrix finBlock on coordinates 0, 1, ..., N
          • A scalar tailScalar times identity on coordinates N+1, N+2, ...

          See equations (7.47) and (7.48) for the operators A† and A in Chapter 7.7.

           • finBlock : Matrix (Fin (N + 1)) (Fin (N + 1)) ℝ

            The (N+1)×(N+1) matrix acting on the finite part

           • tailScalar : ℝ

            The scalar multiplying the identity on the tail

          Instances For
            def BlockDiag.BlockDiagOp.action {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (a : ℕ → ℝ) : ℕ → ℝ

            The action of a block-diagonal operator on a sequence.

             (A·a)ₙ = (finBlock · a_{[0,N]})ₙ   for n ≤ N
             (A·a)ₙ = tailScalar · aₙ           for n > N
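             For example (illustrative entries, not from the file), with N = 1, finBlock = [2 0; 1 3] and tailScalar = 5:

             (A·a)₀ = 2a₀,   (A·a)₁ = a₀ + 3a₁,   (A·a)ₙ = 5aₙ for n ≥ 2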

            Equations
            Instances For
              def BlockDiag.projFin {ν : PosReal} {N : ℕ} (a : (l1Weighted ν)) :
              Fin (N + 1) → ℝ

              The projection of a sequence onto its first N+1 coordinates

              Equations
              Instances For
                def BlockDiag.embedFin {N : ℕ} (x : Fin (N + 1) → ℝ) : ℕ → ℝ

                Embedding a finite vector as a sequence with zero tail

                Equations
                Instances For
                  def BlockDiag.projTail {ν : PosReal} {N : ℕ} (a : (l1Weighted ν)) :

                  The tail projection: coordinates N+1, N+2, ...

                  Equations
                  Instances For
                    theorem BlockDiag.direct_sum_decomp {ν : PosReal} {N : ℕ} (a : (l1Weighted ν)) (n : ℕ) :

                    Direct sum decomposition: a = embedFin(projFin a) + projTail a

                    theorem BlockDiag.finBlock_norm_bound {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (a : (l1Weighted ν)) :
                    ∑ n : Fin (N + 1), |∑ j : Fin (N + 1), A.finBlock n j * lpWeighted.toSeq a j| * ν ^ n ≤ l1Weighted.finWeightedMatrixNorm ν A.finBlock * ∑ j : Fin (N + 1), |lpWeighted.toSeq a j| * ν ^ j

                    Bound on the finite block contribution.

                    For the finite part, we use the weighted matrix norm.

                    theorem BlockDiag.tailScalar_norm_eq {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (a : (l1Weighted ν)) :
                    ∑' (n : { n : ℕ // N < n }), |A.tailScalar * lpWeighted.toSeq a n| * ν ^ n = |A.tailScalar| * ∑' (n : { n : ℕ // N < n }), |lpWeighted.toSeq a n| * ν ^ n

                    Equality version: factoring out the scalar from tail

                    theorem BlockDiag.norm_split {ν : PosReal} {N : ℕ} (a : (l1Weighted ν)) :
                    ‖a‖ = ∑ n : Fin (N + 1), |lpWeighted.toSeq a n| * ν ^ n + ∑' (n : { n : ℕ // N < n }), |lpWeighted.toSeq a n| * ν ^ n

                    Split the ℓ¹ norm into finite and tail parts

                    theorem BlockDiag.BlockDiagOp.action_mem {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (a : (l1Weighted ν)) :

                    The action of a block-diagonal operator preserves membership in ℓ¹_ν.

                    Motivation: This lemma is a prerequisite for Proposition 7.3.14, which is used in Theorem 7.7.1 (Taylor series ODE). The operators appearing in Theorem 7.7.1 (such as I - A·DF(ā)) have block-diagonal structure. Before we can apply Proposition 7.3.14's bound ‖A‖_{op} ≤ max(K, |c|), we must first establish that the block-diagonal operator maps ℓ¹_ν → ℓ¹_ν.

                    Structure: For A = [A_N, 0; 0, c·I], the output (Aa)_n is:

                    • n ≤ N: Σⱼ A_{nj} aⱼ (finite matrix-vector product)
                    • n > N: c · aₙ (scalar multiplication on tail)

                    Summability proof: We show Σₙ |(Aa)ₙ| νⁿ < ∞ using comparison.

                    Define bounding function g(n) := (𝟙_{n≤N} · M) + |c| · |aₙ| · νⁿ

                    where M := max_{0≤n≤N} |Σⱼ A_{nj} aⱼ| · νⁿ (finite maximum).

                    Then:

                    • For n ≤ N: |(Aa)ₙ| νⁿ ≤ M ≤ g(n)
                    • For n > N: |(Aa)ₙ| νⁿ = |c| |aₙ| νⁿ ≤ g(n)

                    The function g is summable since:

                    • 𝟙_{n≤N} · M has finite support
                    • |c| · |aₙ| · νⁿ is summable (a ∈ ℓ¹_ν)
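                    Summing the bound (a short check using only the two points above):

                    Σₙ g(n) = (N+1)·M + |c| · Σₙ |aₙ| νⁿ = (N+1)·M + |c| · ‖a‖_{1,ν} < ∞,

                    so the comparison test gives Σₙ |(Aa)ₙ| νⁿ < ∞.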

                    The action as a linear map

                    Equations
                    Instances For

                      The block-diagonal operator as a continuous linear map

                      Equations
                      Instances For
                        @[simp]
                        theorem BlockDiag.BlockDiagOp.toCLM_apply {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (a : (l1Weighted ν)) (n : ℕ) :

                        The CLM action matches the BlockDiagOp action

                        Operator norm of the CLM is bounded by max of finite block norm and tail scalar

                        theorem BlockDiag.BlockDiagOp.injective_of_parts {ν : PosReal} {N : ℕ} (A : BlockDiagOp ν N) (h_fin : IsUnit A.finBlock) (h_tail : A.tailScalar ≠ 0) :

                        Block-diagonal operator is injective if finite block matrix is invertible and tail scalar ≠ 0.

                        The proof shows that if A x = A y, then both the finite and tail parts match, which forces x = y when the finite block is invertible and the tail scalar is nonzero.
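                        In coordinates, the two cases are: for n > N, c·xₙ = c·yₙ forces xₙ = yₙ since c ≠ 0; on the finite block, A_N x_{[0,N]} = A_N y_{[0,N]} forces x_{[0,N]} = y_{[0,N]} since A_N is invertible.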

                        Proposition 7.3.14: Block-Diagonal Operator Norm Bound #

                        For a block-diagonal operator A with finite block A_N and tail scalar c, we have: ‖A‖ ≤ max(‖A_N‖_{1,ν}, |c|)

                        This is the key result connecting the computable matrix norm to the abstract operator norm for the Chapter 7.7 formalization.

                        theorem Proposition_7_3_14.opNorm_blockDiag_le {ν : PosReal} {N : ℕ} (A : BlockDiag.BlockDiagOp ν N) (A_CLM : (l1Weighted ν) →L[ℝ] (l1Weighted ν)) (h_action : ∀ (a : (l1Weighted ν)) (n : ℕ), lpWeighted.toSeq (A_CLM a) n = A.action (lpWeighted.toSeq a) n) :

                        Proposition 7.3.14: Operator norm bound for block-diagonal operators.

                        Given a block-diagonal operator A = [A_N, 0; 0, c·I] on ℓ¹_ν, we have:

                        ‖A‖ ≤ max(K, |c|)

                        where K = ‖A_N‖_{1,ν} is the weighted matrix norm of the finite block.

                        Proof outline (following the textbook):

                        1. Split ‖Aa‖ into finite and tail contributions
                        2. Bound finite part using matrix norm: ≤ K · ‖a_{fin}‖
                        3. Bound tail part using scalar: ≤ |c| · ‖a_{tail}‖
                        4. Combine using ‖a‖ = ‖a_{fin}‖ + ‖a_{tail}‖ and max bound
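                        Spelling out step 4, with K = ‖A_N‖_{1,ν} and c the tail scalar:

                        ‖Aa‖ ≤ K · ‖a_{fin}‖ + |c| · ‖a_{tail}‖ ≤ max(K, |c|) · (‖a_{fin}‖ + ‖a_{tail}‖) = max(K, |c|) · ‖a‖,

                        and taking the supremum over ‖a‖ ≤ 1 yields ‖A‖ ≤ max(K, |c|).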