
Dual space

In mathematics, any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on V, together with the vector space structure of pointwise addition and scalar multiplication by constants.

The dual space as defined above exists for every vector space, and to avoid ambiguity it may also be called the algebraic dual space. When defined for a topological vector space, there is a subspace of the dual space, consisting of the continuous linear functionals, called the continuous dual space.

Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis.

Early terms for dual include polarer Raum [Hahn 1927], espace conjugué, adjoint space [Alaoglu 1940], and transponierter Raum [Schauder 1930; Banach 1932]. The term dual is due to Bourbaki 1938.[1]

Algebraic dual space

Given any vector space V over a field F, the (algebraic) dual space V∗[2] (alternatively denoted by V∨[3] or V′[4][5])[nb 1] is defined as the set of all linear maps φ : V → F (linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted hom(V, F).[6] The dual space V∗ itself becomes a vector space over F when equipped with an addition and scalar multiplication satisfying:

{\displaystyle {\begin{aligned}(\varphi +\psi )(x)&=\varphi (x)+\psi (x)\\(a\varphi )(x)&=a\left(\varphi (x)\right)\end{aligned}}}

for all φ, ψ ∈ V∗, x ∈ V, and a ∈ F.

Elements of the algebraic dual space V∗ are sometimes called covectors or one-forms.

The pairing of a functional φ in the dual space V∗ and an element x of V is sometimes denoted by a bracket: φ(x) = [x, φ][7] or φ(x) = ⟨x, φ⟩.[8] This pairing defines a nondegenerate bilinear mapping[nb 2] ⟨·,·⟩ : V × V∗ → F called the natural pairing.

Finite-dimensional case

If V is finite-dimensional, then V∗ has the same dimension as V. Given a basis {e_1, ..., e_n} in V, it is possible to construct a specific basis in V∗, called the dual basis. This dual basis is a set {e^1, ..., e^n} of linear functionals on V, defined by the relation

{\displaystyle \mathbf {e} ^{i}(c^{1}\mathbf {e} _{1}+\cdots +c^{n}\mathbf {e} _{n})=c^{i},\quad i=1,\ldots ,n}

for any choice of coefficients c^i ∈ F. In particular, letting each one of those coefficients in turn be equal to one and the other coefficients zero gives the system of equations

{\displaystyle \mathbf {e} ^{i}(\mathbf {e} _{j})=\delta _{j}^{i}}

where δ^i_j is the Kronecker delta symbol. This property is referred to as the bi-orthogonality property.

Proof

Consider {e_1, ..., e_n}, the basis of V. Let {e^1, ..., e^n} be defined as follows:

{\displaystyle \mathbf {e} ^{i}(c^{1}\mathbf {e} _{1}+\cdots +c^{n}\mathbf {e} _{n})=c^{i},\quad i=1,\ldots ,n}.

We have:

  1. The e^i, i = 1, 2, ..., n, are linear functionals. Indeed, take x, y ∈ V with x = α_1 e_1 + ⋯ + α_n e_n and y = β_1 e_1 + ⋯ + β_n e_n (so that e^i(x) = α_i and e^i(y) = β_i). Then x + λy = (α_1 + λβ_1) e_1 + ⋯ + (α_n + λβ_n) e_n and e^i(x + λy) = α_i + λβ_i = e^i(x) + λ e^i(y). Therefore, e^i ∈ V∗ for i = 1, 2, ..., n.
  2. Suppose λ_1 e^1 + ⋯ + λ_n e^n = 0 in V∗. Applying this functional to the basis vectors of V successively leads to λ_1 = λ_2 = ⋯ = λ_n = 0 (the functional applied to e_i yields λ_i). Therefore, {e^1, ..., e^n} is linearly independent in V∗.
  3. Lastly, consider g ∈ V∗. Then for any x = α_1 e_1 + ⋯ + α_n e_n,

g(x) = g(α_1 e_1 + ⋯ + α_n e_n) = α_1 g(e_1) + ⋯ + α_n g(e_n) = e^1(x) g(e_1) + ⋯ + e^n(x) g(e_n),

so {e^1, ..., e^n} generates V∗. Hence, it is a basis of V∗.

For example, if V is R^2, let its basis be chosen as {e_1 = (1/2, 1/2), e_2 = (0, 1)}. The basis vectors are not orthogonal to each other. Then e^1 and e^2 are one-forms (functions that map a vector to a scalar) such that e^1(e_1) = 1, e^1(e_2) = 0, e^2(e_1) = 0, and e^2(e_2) = 1. (Note: the superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as

{\displaystyle {\begin{bmatrix}e_{11}&e_{12}\\e_{21}&e_{22}\end{bmatrix}}{\begin{bmatrix}e^{11}&e^{21}\\e^{12}&e^{22}\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}.}

Solving this equation shows the dual basis to be {e^1 = (2, 0), e^2 = (−1, 1)}. Because e^1 and e^2 are functionals, they can be rewritten as e^1(x, y) = 2x and e^2(x, y) = −x + y. In general, when V is R^n, if E = (e_1, ..., e_n) is a matrix whose columns are the basis vectors and Ê = (e^1, ..., e^n) is a matrix whose columns are the dual basis vectors, then

{\displaystyle E^{T}{\hat {E}}=I_{n},}

where I_n is the identity matrix of order n. The biorthogonality property of these two basis sets allows any point x ∈ V to be represented as

{\displaystyle \mathbf {x} =\sum _{i}\langle \mathbf {x} ,\mathbf {e} ^{i}\rangle \mathbf {e} _{i}=\sum _{i}\langle \mathbf {x} ,\mathbf {e} _{i}\rangle \mathbf {e} ^{i},}

even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product ⟨·,·⟩ and the corresponding duality pairing are introduced, as described below in § Bilinear products and dual spaces.
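The dual-basis computation in the worked example can be carried out numerically. The following sketch (Python with NumPy, an assumption of this illustration rather than part of the article) recovers the dual basis from the relation E^T Ê = I_n and checks the biorthogonal expansion:

```python
import numpy as np

# Columns of E are the basis vectors e_1 = (1/2, 1/2) and e_2 = (0, 1).
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# E^T @ E_hat = I, so the dual basis vectors are the columns of (E^T)^{-1}.
E_hat = np.linalg.inv(E.T)
print(E_hat[:, 0])   # e^1 = (2, 0)
print(E_hat[:, 1])   # e^2 = (-1, 1)

# Biorthogonality: e^i(e_j) = delta^i_j.
assert np.allclose(E_hat.T @ E, np.eye(2))

# Expansion x = sum_i <x, e^i> e_i holds for any x, even though
# the basis vectors are not orthogonal.
x = np.array([3.0, -1.0])
coeffs = E_hat.T @ x          # the coordinates e^i(x) = <x, e^i>
assert np.allclose(E @ coeffs, x)
```

This reproduces the dual basis {e^1 = (2, 0), e^2 = (−1, 1)} found by hand above.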

In particular, if R^n is interpreted as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers. Such a row acts on R^n as a linear functional by ordinary matrix multiplication. This is because a functional maps every n-vector x into a real number y. Then, seeing this functional as a matrix M, and x and y as an n×1 matrix and a 1×1 matrix (trivially, a real number) respectively, if Mx = y then, by dimension reasons, M must be a 1×n matrix; that is, M must be a row vector.

If V consists of the space of geometrical vectors in the plane, then the level curves of an element of V∗ form a family of parallel lines in V, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of V∗ can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses. More generally, if V is a vector space of any dimension, then the level sets of a linear functional in V∗ are parallel hyperplanes in V, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.[9]

Infinite-dimensional case

If V is not finite-dimensional but has a basis[nb 3] e_α indexed by an infinite set A, then the same construction as in the finite-dimensional case yields linearly independent elements e^α (α ∈ A) of the dual space, but they will not form a basis.

For instance, consider the space R^∞, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers N: for i ∈ N, e_i is the sequence consisting of all zeroes except in the i-th position, which is 1. The dual space of R^∞ is (isomorphic to) R^N, the space of all sequences of real numbers: each real sequence (a_n) defines a function where the element (x_n) of R^∞ is sent to the number

{\displaystyle \sum _{n}a_{n}x_{n},}

which is a finite sum because there are only finitely many nonzero x_n. The dimension of R^∞ is countably infinite, whereas R^N does not have a countable basis.
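The pairing just described, an arbitrary real sequence acting on a finitely supported one, can be sketched directly. The sparse dictionary representation and the names here are illustrative assumptions, not part of the article:

```python
# An element of R^infty: finitely many nonzero entries, stored as {index: value}.
x = {0: 2.0, 5: -1.0}          # the sequence (2, 0, 0, 0, 0, -1, 0, ...)

# An element of the dual R^N: an arbitrary rule n -> a_n (here a_n = n + 1,
# a sequence that is NOT itself in R^infty).
a = lambda n: n + 1

def pair(a, x_sparse):
    """Apply the functional given by (a_n) to a finitely supported sequence.

    The sum is finite because x has only finitely many nonzero entries.
    """
    return sum(a(n) * x_n for n, x_n in x_sparse.items())

print(pair(a, x))              # 1*2 + 6*(-1) = -4.0
```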

This observation generalizes to any[nb 3] infinite-dimensional vector space V over any field F: a choice of basis {e_α : α ∈ A} identifies V with the space (F^A)_0 of functions f : A → F such that f_α = f(α) is nonzero for only finitely many α ∈ A, where such a function f is identified with the vector

{\displaystyle \sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }}

in V (the sum is finite by the assumption on f, and any v ∈ V may be written in this way by the definition of the basis).

The dual space of V may then be identified with the space F^A of all functions from A to F: a linear functional T on V is uniquely determined by the values θ_α = T(e_α) it takes on the basis of V, and any function θ : A → F (with θ(α) = θ_α) defines a linear functional T on V by

{\displaystyle T\left(\sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }\right)=\sum _{\alpha \in A}f_{\alpha }T(e_{\alpha })=\sum _{\alpha \in A}f_{\alpha }\theta _{\alpha }.}

Again, the sum is finite because f_α is nonzero for only finitely many α.

The set (F^A)_0 may be identified (essentially by definition) with the direct sum of infinitely many copies of F (viewed as a 1-dimensional vector space over itself) indexed by A, i.e. there are linear isomorphisms

{\displaystyle V\cong (F^{A})_{0}\cong \bigoplus _{\alpha \in A}F.}

On the other hand, F^A is (again by definition) the direct product of infinitely many copies of F indexed by A, and so the identification

{\displaystyle V^{*}\cong \left(\bigoplus _{\alpha \in A}F\right)^{*}\cong \prod _{\alpha \in A}F^{*}\cong \prod _{\alpha \in A}F\cong F^{A}}

is a special case of a general result relating direct sums (of modules) to direct products.

Considering cardinal numbers, denoted here as absolute values, one thus has, for an F-vector space V that has an infinite basis A,

{\displaystyle |V|=\max(|F|,|A|)<|V^{\ast }|=|F|^{|A|}.}

It follows that, if a vector space is not finite-dimensional, then the axiom of choice implies that the algebraic dual space is always of larger dimension (as a cardinal number) than the original vector space (since, if two bases have the same cardinality, the spanned vector spaces have the same cardinality). This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.

Bilinear products and dual spaces

If V is finite-dimensional, then V is isomorphic to V∗. But there is in general no natural isomorphism between these two spaces.[10] Any bilinear form ⟨·,·⟩ on V gives a mapping of V into its dual space via

{\displaystyle v\mapsto \langle v,\cdot \rangle }

where the right hand side is defined as the functional on V taking each w ∈ V to ⟨v, w⟩. In other words, the bilinear form determines a linear mapping

{\displaystyle \Phi _{\langle \cdot ,\cdot \rangle }:V\to V^{*}}

defined by

{\displaystyle \left[\Phi _{\langle \cdot ,\cdot \rangle }(v),w\right]=\langle v,w\rangle .}

If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of V∗. If V is finite-dimensional, then this is an isomorphism onto all of V∗. Conversely, any isomorphism Φ from V to a subspace of V∗ (resp., all of V∗ if V is finite-dimensional) defines a unique nondegenerate bilinear form ⟨·,·⟩_Φ on V by

{\displaystyle \langle v,w\rangle _{\Phi }=(\Phi (v))(w)=[\Phi (v),w].}

Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V∗ and nondegenerate bilinear forms on V.

If the vector space V is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms. In that case, a given sesquilinear form ⟨·,·⟩ determines an isomorphism of V with the complex conjugate of the dual space

{\displaystyle \Phi _{\langle \cdot ,\cdot \rangle }:V\to {\overline {V^{*}}}.}

The conjugate of the dual space {\displaystyle {\overline {V^{*}}}} can be identified with the set of all additive complex-valued functionals f : V → C such that

{\displaystyle f(\alpha v)={\overline {\alpha }}f(v).}

Injection into the double-dual

There is a natural homomorphism Ψ from V into the double dual V∗∗ = {Φ : V∗ → F : Φ linear}, defined by (Ψ(v))(φ) = φ(v) for all v ∈ V, φ ∈ V∗. In other words, if ev_v : V∗ → F is the evaluation map defined by φ ↦ φ(v), then Ψ : V → V∗∗ is defined as the map v ↦ ev_v. This map Ψ is always injective;[nb 3] it is an isomorphism if and only if V is finite-dimensional.[11] Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism. Infinite-dimensional Hilbert spaces are not a counterexample to this, as they are isomorphic to their continuous double duals, not to their algebraic double duals.
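The evaluation map v ↦ ev_v can be written down directly. In this sketch (Python; the particular functionals chosen are arbitrary examples, not part of the article), Ψ(v) is a functional on functionals, and its linearity in both arguments is checked:

```python
import numpy as np

def Psi(v):
    """The evaluation functional ev_v : phi -> phi(v) in the double dual."""
    return lambda phi: phi(v)

v, w = np.array([1.0, 2.0]), np.array([0.0, -3.0])
phi = lambda u: u @ np.array([4.0, -1.0])    # a sample linear functional
psi = lambda u: u @ np.array([2.0, 5.0])     # another one

# Psi(v) is linear in phi: ev_v(phi + psi) = ev_v(phi) + ev_v(psi) ...
assert np.isclose(Psi(v)(lambda u: phi(u) + psi(u)), Psi(v)(phi) + Psi(v)(psi))
# ... and Psi itself is linear in v: Psi(v + w)(phi) = Psi(v)(phi) + Psi(w)(phi),
# because phi is linear.
assert np.isclose(Psi(v + w)(phi), Psi(v)(phi) + Psi(w)(phi))
```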

Transpose of a linear map

If f : VW is a linear map, then the transpose (or dual) f  : W V is defined by

{\displaystyle f^{*}(\varphi )=\varphi \circ f}

for every φ ∈ W∗. The resulting functional f∗(φ) in V∗ is called the pullback of φ along f.

The following identity holds for all φ ∈ W∗ and v ∈ V:

{\displaystyle [f^{*}(\varphi ),\,v]=[\varphi ,\,f(v)],}

where the bracket [·,·] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose,[12] and is formally similar to the definition of the adjoint.

The assignment f ↦ f∗ produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W∗ to V∗; this homomorphism is an isomorphism if and only if W is finite-dimensional. If V = W, then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that (fg)∗ = g∗f∗. In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself. It is possible to identify (f∗)∗ with f using the natural injection into the double dual.

If the linear map f is represented by the matrix A with respect to two bases of V and W, then f∗ is represented by the transpose matrix A^T with respect to the dual bases of W∗ and V∗, hence the name. Alternatively, as f is represented by A acting on the left on column vectors, f∗ is represented by the same matrix acting on the right on row vectors. These points of view are related by the canonical inner product on R^n, which identifies the space of column vectors with the dual space of row vectors.
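In coordinates, the characterizing identity [f∗(φ), v] = [φ, f(v)] is just associativity of matrix multiplication. A small numerical check (NumPy; the matrix and vectors are arbitrary sample data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))    # f : R^2 -> R^3, represented by A
phi = rng.standard_normal(3)       # functional on W = R^3, as a row vector
v = rng.standard_normal(2)

# Pullback: f*(phi) = phi o f is represented by A^T acting on phi,
# i.e. by the row vector phi @ A.
pullback = A.T @ phi

# [f*(phi), v] = [phi, f(v)] -- both sides equal (phi^T A) v = phi^T (A v).
assert np.isclose(pullback @ v, phi @ (A @ v))
```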

Quotient spaces and annihilators

Let S be a subset of V. The annihilator of S in V∗, denoted here S^0, is the collection of linear functionals f ∈ V∗ such that [f, s] = 0 for all s ∈ S. That is, S^0 consists of all linear functionals f : V → F such that the restriction to S vanishes: f|_S = 0. Within finite-dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement.

The annihilator of a subset is itself a vector space. The annihilator of the zero vector is the whole dual space: {0}^0 = V∗, and the annihilator of the whole space is just the zero covector: V^0 = {0} ⊆ V∗. Furthermore, the assignment of an annihilator to a subset of V reverses inclusions, so that if S ⊆ T ⊆ V, then

{\displaystyle 0\subseteq T^{0}\subseteq S^{0}\subseteq V^{*}.}

If A and B are two subsets of V then

{\displaystyle A^{0}+B^{0}\subseteq (A\cap B)^{0},}

and equality holds provided V is finite-dimensional. If Ai is any family of subsets of V indexed by i belonging to some index set I, then

{\displaystyle \left(\bigcup _{i\in I}A_{i}\right)^{0}=\bigcap _{i\in I}A_{i}^{0}.}

In particular if A and B are subspaces of V then

{\displaystyle (A+B)^{0}=A^{0}\cap B^{0}.}

If V is finite-dimensional and W is a vector subspace, then

{\displaystyle W^{00}=W}

after identifying W with its image in the second dual space under the double duality isomorphism V ≈ V∗∗. In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space.
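Concretely, writing functionals on R^n as row vectors via the dot product, the annihilator of W = span{w_1, ..., w_k} is the null space of the matrix whose rows are the w_i. The sketch below (NumPy only; the SVD-based `null_space` helper is an assumption of this illustration, not part of the article) also checks W^{00} = W in a finite-dimensional example:

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of A, via the SVD."""
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

# W = span{(1, 1, 0), (0, 1, 1)}, a 2-dimensional subspace of R^3.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

W0 = null_space(W)                 # columns span W^0; dim W^0 = 3 - 2 = 1
assert W0.shape[1] == 1
assert np.allclose(W @ W0, 0.0)    # every functional in W^0 vanishes on W

# W^{00}: vectors killed by every functional in W^0; this recovers W.
W00 = null_space(W0.T)
# W00 and the row space of W span the same 2-dimensional subspace.
assert np.linalg.matrix_rank(np.vstack([W, W00.T])) == 2
```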

If W is a subspace of V, then the quotient space V/W is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional f : V → F factors through V/W if and only if W is contained in the kernel of f. There is thus an isomorphism

{\displaystyle (V/W)^{*}\cong W^{0}.}

As a particular consequence, if V is a direct sum of two subspaces A and B, then V∗ is a direct sum of A^0 and B^0.

Continuous dual space

When dealing with topological vector spaces, the continuous linear functionals from the space into the base field F = C (or R) are particularly important. This gives rise to the notion of the "continuous dual space" or "topological dual", a linear subspace of the algebraic dual space V∗, denoted by V′. For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps. Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space".

For a topological vector space V, its continuous dual space,[13] or topological dual space,[14] or just dual space[13][14][15][16] (in the sense of the theory of topological vector spaces), V′, is defined as the space of all continuous linear functionals φ : V → F.

Important examples of continuous dual spaces, from the theory of generalized functions, are the space D of compactly supported test functions and its dual D′, the space of arbitrary distributions (generalized functions); the space E of arbitrary test functions and its dual E′, the space of compactly supported distributions; and the Schwartz space S of rapidly decreasing test functions and its dual S′, the space of tempered distributions (slowly growing distributions).

Properties

If X is a Hausdorff topological vector space (TVS), then the continuous dual space of X is identical to the continuous dual space of the completion of X.[1]

Topologies on the dual

There is a standard construction for introducing a topology on the continuous dual V′ of a topological vector space V. Fix a collection 𝒜 of bounded subsets of V. This gives the topology on V′ of uniform convergence on sets from 𝒜, or, what is the same thing, the topology generated by seminorms of the form

{\displaystyle \|\varphi \|_{A}=\sup _{x\in A}|\varphi (x)|,}

where φ is a continuous linear functional on V, and A runs over the class 𝒜.

This means that a net of functionals φ_i tends to a functional φ in V′ if and only if

{\displaystyle {\text{ for all }}A\in {\mathcal {A}}\qquad \|\varphi _{i}-\varphi \|_{A}=\sup _{x\in A}|\varphi _{i}(x)-\varphi (x)|{\underset {i\to \infty }{\longrightarrow }}0.}

Usually (but not necessarily) the class 𝒜 is supposed to satisfy the following conditions:

  • 𝒜 covers V: for every x ∈ V there exists some A ∈ 𝒜 such that x ∈ A.
  • 𝒜 is directed by inclusion: for all A, B ∈ 𝒜 there exists some C ∈ 𝒜 such that A ∪ B ⊆ C.
  • 𝒜 is closed under the operation of multiplication by scalars: for all A ∈ 𝒜 and all λ ∈ F, λ·A ∈ 𝒜.

If these requirements are fulfilled then the corresponding topology on V′ is Hausdorff and the sets

{\displaystyle U_{A}~=~\left\{\varphi \in V'~:~\quad \|\varphi \|_{A}<1\right\},\qquad {\text{ for }}A\in {\mathcal {A}}}

form its local base.

Here are the three most important special cases: the weak-∗ topology σ(V′, V) of pointwise convergence on V (obtained by taking for 𝒜 the finite subsets of V); the topology of uniform convergence on compact subsets of V (𝒜 the compact subsets); and the strong topology β(V′, V) of uniform convergence on bounded subsets of V (𝒜 the bounded subsets).

If V is a normed vector space (for example, a Banach space or a Hilbert space), then the strong topology on V′ is normed (in fact a Banach space if the field of scalars is complete), with the norm

{\displaystyle \|\varphi \|=\sup _{\|x\|\leq 1}|\varphi (x)|.}

Each of these three choices of topology on V′ leads to a variant of the reflexivity property for topological vector spaces.
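For a functional φ(x) = a·x on R^n with the Euclidean norm, the dual norm sup over the unit ball equals ‖a‖ by the Cauchy–Schwarz inequality. A quick Monte-Carlo check of this (illustrative values only; the sampling approach is an assumption of this sketch):

```python
import numpy as np

a = np.array([3.0, -4.0, 12.0])    # phi(x) = a . x; dual norm should be ||a|| = 13
rng = np.random.default_rng(2)

# Sample points uniformly on the unit sphere and evaluate |phi| there.
x = rng.standard_normal((100_000, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
sup_estimate = np.max(np.abs(x @ a))

exact = np.linalg.norm(a)                    # 13.0
assert sup_estimate <= exact + 1e-9          # sampling never exceeds the sup
assert sup_estimate > 0.99 * exact           # and gets close to it
```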

Examples

Let 1 < p < ∞ be a real number and consider the Banach space ℓ^p of all sequences a = (a_n) for which

{\displaystyle \|\mathbf {a} \|_{p}=\left(\sum _{n=0}^{\infty }|a_{n}|^{p}\right)^{\frac {1}{p}}<\infty .}

Define the number q by 1/p + 1/q = 1. Then the continuous dual of ℓ^p is naturally identified with ℓ^q: given an element φ ∈ (ℓ^p)′, the corresponding element of ℓ^q is the sequence (φ(e_n)), where e_n denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = (a_n) ∈ ℓ^q, the corresponding continuous linear functional φ on ℓ^p is defined by

{\displaystyle \varphi (\mathbf {b} )=\sum _{n}a_{n}b_{n}}

for all b = (b_n) ∈ ℓ^p (see Hölder's inequality).
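The pairing between ℓ^p and ℓ^q, together with the Hölder bound |φ_a(b)| ≤ ‖a‖_q ‖b‖_p that makes φ_a continuous, can be checked on finitely supported sequences (the sample data below are arbitrary assumptions of this sketch):

```python
import numpy as np

p = 3.0
q = p / (p - 1)                   # conjugate exponent: 1/p + 1/q = 1, so q = 1.5

rng = np.random.default_rng(1)
a = rng.standard_normal(50)       # an element of l^q (finitely supported)
b = rng.standard_normal(50)       # an element of l^p

pairing = np.sum(a * b)           # phi_a(b) = sum_n a_n b_n

norm_a_q = np.sum(np.abs(a) ** q) ** (1 / q)
norm_b_p = np.sum(np.abs(b) ** p) ** (1 / p)

# Hoelder's inequality: phi_a is bounded, with operator norm ||a||_q.
assert abs(pairing) <= norm_a_q * norm_b_p + 1e-12
```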

In a similar manner, the continuous dual of ℓ^1 is naturally identified with ℓ^∞ (the space of bounded sequences). Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c_0 (the sequences converging to zero) are both naturally identified with ℓ^1.

By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space. This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics.

By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.

Transpose of a continuous linear map

If T : V → W is a continuous linear map between two topological vector spaces, then the (continuous) transpose T′ : W′ → V′ is defined by the same formula as before:

{\displaystyle T'(\varphi )=\varphi \circ T,\quad \varphi \in W'.}

The resulting functional T′(φ) is in V′. The assignment T ↦ T′ produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from W′ to V′. When T and U are composable continuous linear maps, then

{\displaystyle (U\circ T)'=T'\circ U'.}

When V and W are normed spaces, the norm of the transpose in L(W′, V′) is equal to that of T in L(V, W). Several properties of transposition depend upon the Hahn–Banach theorem. For example, the bounded linear map T has dense range if and only if the transpose T′ is injective.

When T is a compact linear map between two Banach spaces V and W, then the transpose T′ is compact. This can be proved using the Arzelà–Ascoli theorem.

When V is a Hilbert space, there is an antilinear isomorphism iV from V onto its continuous dual V′ . For every bounded linear map T on V, the transpose and the adjoint operators are linked by

{\displaystyle i_{V}\circ T^{*}=T'\circ i_{V}.}

When T is a continuous linear map between two topological vector spaces V and W, then the transpose T′ is continuous when W′ and V′ are equipped with "compatible" topologies: for example, when for X = V and X = W, both duals X′ have the strong topology β(X′, X) of uniform convergence on bounded sets of X, or both have the weak-∗ topology σ(X′, X) of pointwise convergence on X. The transpose T′ is continuous from β(W′, W) to β(V′, V), or from σ(W′, W) to σ(V′, V).

Annihilators

Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in V′,

{\displaystyle W^{\perp }=\{\varphi \in V':W\subseteq \ker \varphi \}.}

Then the dual of the quotient V/W can be identified with W^⊥, and the dual of W can be identified with the quotient V′/W^⊥.[20] Indeed, let P denote the canonical surjection from V onto the quotient V/W; then the transpose P′ is an isometric isomorphism from (V/W)′ into V′, with range equal to W^⊥. If j denotes the injection map from W into V, then the kernel of the transpose j′ is the annihilator of W:

{\displaystyle \ker(j')=W^{\perp }}

and it follows from the Hahn–Banach theorem that j′ induces an isometric isomorphism V′/W^⊥ → W′.

Further properties

If the dual of a normed space V is separable, then so is the space V itself. The converse is not true: for example, the space ℓ^1 is separable, but its dual ℓ^∞ is not.

Double dual

(Figure caption) A natural transformation of vector addition from a vector space to its double dual: ⟨x_1, x_2⟩ denotes the ordered pair of two vectors. The addition + sends x_1 and x_2 to x_1 + x_2. The addition +′ induced by the transformation can be defined as [Ψ(x_1) +′ Ψ(x_2)](φ) = φ(x_1 + x_2) = [Ψ(x_1 + x_2)](φ) for any φ in the dual space.

In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator Ψ : VV′′ from a normed space V into its continuous double dual V′′ , defined by

{\displaystyle \Psi (x)(\varphi )=\varphi (x),\quad x\in V,\ \varphi \in V'.}

As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning ‖Ψ(x)‖ = ‖x‖ for all x ∈ V. Normed spaces for which the map Ψ is a bijection are called reflexive.

When V is a topological vector space, then Ψ(x) can still be defined by the same formula, for every x ∈ V; however, several difficulties arise. First, when V is not locally convex, the continuous dual may be equal to {0} and the map Ψ trivial. However, if V is Hausdorff and locally convex, the map Ψ is injective from V to the algebraic dual (V′)∗ of the continuous dual, again as a consequence of the Hahn–Banach theorem.[nb 4]

Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual V′, so that the continuous double dual V′′ is not uniquely defined as a set. Saying that Ψ maps from V to V′′, or in other words, that Ψ(x) is continuous on V′ for every x ∈ V, is a reasonable minimal requirement on the topology of V′, namely that the evaluation mappings

{\displaystyle \varphi \in V'\mapsto \varphi (x),\quad x\in V,}

be continuous for the chosen topology on V′ . Further, there is still a choice of a topology on V′′ , and continuity of Ψ depends upon this choice. As a consequence, defining reflexivity in this framework is more involved than in the normed case.

See also

  • Covariance and contravariance of vectors
  • Dual module
  • Dual norm
  • Duality (mathematics)
  • Duality (projective geometry)
  • Pontryagin duality
  • Reciprocal lattice – dual space basis, in crystallography

Notes

  1. ^ For V∨ used in this way, see An Introduction to Manifolds (Tu 2011, p. 19). This notation is sometimes used when (·)∗ is reserved for some other meaning. For instance, in the above text, F∗ is frequently used to denote the codifferential of F, so that F∗ω represents the pullback of the form ω. Halmos (1974, p. 20) uses V′ to denote the algebraic dual of V. However, other authors use V′ for the continuous dual, while reserving V∗ for the algebraic dual (Trèves 2006, p. 35).
  2. ^ In many areas, such as quantum mechanics, ⟨·,·⟩ is reserved for a sesquilinear form defined on V × V∗.
  3. ^ a b c Several assertions in this article require the axiom of choice for their justification. The axiom of choice is needed to show that an arbitrary vector space has a basis: in particular it is needed to show that RN has a basis. It is also needed to show that the dual of an infinite-dimensional vector space V is nonzero, and hence that the natural map from V to its double dual is injective.
  4. ^ If V is locally convex but not Hausdorff, the kernel of Ψ is the smallest closed subspace containing {0}.

References

  1. ^ a b Narici & Beckenstein 2011, pp. 225–273.
  2. ^ Katznelson & Katznelson (2008) p. 37, §2.1.3
  3. ^ Tu (2011) p. 19, §3.1
  4. ^ Axler (2015) p. 101, §3.94
  5. ^ Halmos (1974) p. 20, §13
  6. ^ Tu (2011) p. 19, §3.1
  7. ^ Halmos (1974) p. 21, §14
  8. ^ Misner, Thorne & Wheeler 1973
  9. ^ Misner, Thorne & Wheeler 1973, §2.5
  10. ^ Mac Lane & Birkhoff 1999, §VI.4
  11. ^ Halmos (1974) pp. 25, 28
  12. ^ Halmos (1974) §44
  13. ^ a b Robertson & Robertson 1964, II.2
  14. ^ a b Schaefer 1966, II.4
  15. ^ Rudin 1973, 3.1
  16. ^ Bourbaki 2003, II.42
  17. ^ Schaefer 1966, IV.5.5
  18. ^ Schaefer 1966, IV.1
  19. ^ Schaefer 1966, IV.1.2
  20. ^ Rudin 1991, chapter 4

Bibliography

  • Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
  • Bourbaki, Nicolas (1989). Elements of mathematics, Algebra I. Springer-Verlag. ISBN 3-540-64243-9.
  • Bourbaki, Nicolas (2003). Elements of mathematics, Topological vector spaces. Springer-Verlag.
  • Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4.
  • Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
  • Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Vol. 211 (Revised 3rd ed.). New York: Springer-Verlag. ISBN 978-0-387-95385-4. MR 1878556. Zbl 0984.00001.
  • Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. ISBN 978-1-4419-7400-6.
  • Mac Lane, Saunders; Birkhoff, Garrett (1999). Algebra (3rd ed.). AMS Chelsea Publishing. ISBN 0-8218-1646-2.
  • Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973). Gravitation. W. H. Freeman. ISBN 0-7167-0344-0.
  • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and Applied Mathematics (2nd ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
  • Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (1st ed.). New York, NY: McGraw-Hill. ISBN 9780070542259.
  • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (2nd ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-054236-5. OCLC 21163277.
  • Robertson, A. P.; Robertson, W. (1964). Topological Vector Spaces. Cambridge University Press.
  • Schaefer, Helmut H. (1966). Topological Vector Spaces. New York: The Macmillan Company.
  • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (2nd ed.). New York, NY: Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
  • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.

External links

  • Weisstein, Eric W. "Dual space". MathWorld.


Source: https://en.wikipedia.org/wiki/Dual_space