Notions from Linear Algebra


The mathematical structure of quantum mechanics resembles linear algebra in many respects, and many notions from linear algebra are very useful in the investigation of quantum systems. Bra-ket notation makes the linear algebra aspects of quantum mechanics particularly visible and easy to use. Therefore we will first introduce a few notions of linear algebra in standard notation, and then rewrite everything in bra-ket notation.

Tensor Products

Suppose $V$ is an $N$-dimensional real vector space with a Cartesian basis¹ $\hat{e}_a$, $1 \le a \le N$, $\hat{e}_a^T \cdot \hat{e}_b = \delta_{ab}$. Furthermore, assume that $u_a$, $v_a$ are Cartesian components² of the two vectors $u$ and $v$,

$$u = \sum_{a=1}^{N} \hat{e}_a u_a \equiv \hat{e}_a u_a, \qquad v = \hat{e}_a v_a. \quad (4.2)$$

Here we use summation convention: Whenever an index appears twice in a multiplicative term, it is automatically summed over its full range of values. We will continue to use this convention throughout the remainder of the book.

¹We write scalar products of vectors initially as $u^T \cdot v$ to be consistent with the proper tensor product notation used in (4.5), but we will switch soon to the shorter notations $u \cdot v$, $u \otimes v$ for scalar products and tensor products.

²It does not matter whether we write the expansion coefficients of a vector $v$ on the right or left of the basis vectors, $\hat{e}_a v_a = v_a \hat{e}_a$. However, writing them on the right agrees with the conventions for bra-ket notation introduced further below, and also has the advantage that the coefficients $v_a$ naturally transform with matrices $R^{-1}$ from the left if the basis vectors $\hat{e}_a$ transform with matrix coefficients $R_{ai}$ from the right; see Eqs. (4.10) and (4.14) below and also Problem 4.1.

You are very familiar with the scalar product of vectors,

$$u^T \cdot v = u_a \hat{e}_a^T \cdot \hat{e}_b v_b = u_a v_a. \quad (4.3)$$

You can also think of the scalar product in terms of Cartesian components as the product of a $(1 \times N)$-matrix with an $(N \times 1)$-matrix, which yields a $(1 \times 1)$-matrix,

$$\left( u_1, \ldots, u_N \right) \begin{pmatrix} v_1 \\ \vdots \\ v_N \end{pmatrix} = u_a v_a. \quad (4.4)$$

The tensor product $M = u \otimes v^T$ of the two vectors yields an $N \times N$ matrix with components $M_{ab} = u_a v_b$ in the Cartesian basis:

$$M = u \otimes v^T = \hat{e}_a \otimes \hat{e}_b^T\, u_a v_b. \quad (4.5)$$

At the most basic level, you can think of the tensor product in terms of the components of the vectors as a matrix product of an $(N \times 1)$-matrix with a $(1 \times N)$-matrix, which yields an $(N \times N)$-matrix,

$$\begin{pmatrix} u_1 \\ \vdots \\ u_N \end{pmatrix} \left( v_1, \ldots, v_N \right) = \begin{pmatrix} u_1 v_1 & \ldots & u_1 v_N \\ \vdots & \ddots & \vdots \\ u_N v_1 & \ldots & u_N v_N \end{pmatrix}. \quad (4.6)$$

In a slightly more abstract setting, you can think of it as a map of two copies of the $N$-dimensional vector space $V$ into an $N^2$-dimensional vector space $V \otimes V$ with basis vectors $\hat{e}_a \otimes \hat{e}_b^T$.
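As a quick numerical illustration (a NumPy sketch with vectors chosen for this example, not part of the original text), the scalar product (4.3)/(4.4) and the tensor product (4.5)/(4.6) are just the two ways of multiplying a row and a column vector:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Scalar product (4.4): (1 x N)-matrix times (N x 1)-matrix gives a number
s = u @ v

# Tensor product (4.6): (N x 1)-matrix times (1 x N)-matrix gives the
# N x N matrix M with components M_ab = u_a v_b
M = np.outer(u, v)
```

`np.outer` builds exactly the matrix displayed in (4.6), so `M[a, b]` equals `u[a] * v[b]` for all index pairs.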

Tensor products appear naturally in basic linear algebra, e.g. in the following simple problem: Suppose $u = \hat{e}_a u_a$ and $w = \hat{e}_a w_a$ are two vectors in an $N$-dimensional vector space, and we would like to calculate the part $w_\parallel$ of the vector $w$ that is parallel to $u$. The unit vector in the direction of $u$ is $\hat{u} = u/|u|$, and we have

$$w_\parallel = \hat{u}\,|w|\cos(u,w), \quad (4.7)$$

where $\cos(u,w) = \hat{u}^T \cdot \hat{w}$ is the cosine of the angle between $u$ and $w$. Substituting the expression for $\cos(u,w)$ into (4.7) yields

$$w_\parallel = \hat{u}\,(\hat{u}^T \cdot w) = \hat{u}_a \hat{u}_b w_c\, \hat{e}_a (\hat{e}_b^T \cdot \hat{e}_c) = \hat{u}_a \hat{u}_b w_c\, (\hat{e}_a \otimes \hat{e}_b^T) \cdot \hat{e}_c = (\hat{u} \otimes \hat{u}^T) \cdot w, \quad (4.8)$$

i.e. the tensor product $P = \hat{u} \otimes \hat{u}^T$ is the projector onto the direction of the vector $u$. If you are still uncomfortable with the tensor product notation, note that Eq. (4.8) in Cartesian coordinates is

$$w_{\parallel\, a} = \hat{u}_a \hat{u}_b w_b = (\hat{u} \otimes \hat{u}^T)_{ab}\, w_b. \quad (4.9)$$

The matrix $M$ is called a second rank tensor due to its transformation properties under linear transformations of the vectors appearing in the product.
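The projector of Eqs. (4.8) and (4.9) is easy to check numerically; the following NumPy sketch (with vectors chosen purely for illustration) builds $P = \hat{u} \otimes \hat{u}^T$ and applies it:

```python
import numpy as np

u = np.array([3.0, 4.0])        # direction onto which we project
w = np.array([2.0, 1.0])        # vector to be decomposed

u_hat = u / np.linalg.norm(u)   # unit vector u/|u|
P = np.outer(u_hat, u_hat)      # projector P = u_hat (x) u_hat^T, Eq. (4.8)

w_par = P @ w                   # part of w parallel to u, Eq. (4.9)
```

As a projector must be, $P$ is idempotent, $P \cdot P = P$, and the result $w_\parallel$ is indeed parallel to $u$.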

Suppose we perform a transformation of the Cartesian basis vectors $\hat{e}_a$ to a new set $\hat{e}_i$ of basis vectors,

$$\hat{e}_a \to \hat{e}_i = \hat{e}_a R_{ai}, \quad (4.10)$$

subject to the constraint that the new basis vectors also provide a Cartesian basis,

$$\hat{e}_i \cdot \hat{e}_j = \delta_{ab} R_{ai} R_{bj} = R^a{}_i R_{aj} = \delta_{ij}. \quad (4.11)$$

Linear transformations which map Cartesian bases into Cartesian bases are denoted as rotations.

We defined $R^a{}_j \equiv \delta^{ab} R_{bj}$ in Eq. (4.11), i.e. numerically $R^a{}_j = R_{aj}$. Equation (4.11) is in matrix notation

$$R^T \cdot R = \mathbf{1}, \quad (4.12)$$

i.e. $R^T = R^{-1}$.

However, a change of basis in our vector space does nothing to the vector $v$, except that the vector will have different components with respect to the new basis vectors,

$$v = \hat{e}_a v_a = \hat{e}_i v_i = \hat{e}_a R_{ai} v_i. \quad (4.13)$$

Equations (4.13) and (4.11) and the uniqueness of the decomposition of a vector with respect to a set of basis vectors imply

$$v_a = R_{ai} v_i, \qquad v_i = (R^{-1})_{ia} v_a = (R^T)_{ia} v_a = v_a R_{ai}. \quad (4.14)$$

This is the passive interpretation of transformations: The transformation changes the reference frame, but not the physical objects (here: vectors). Note that the particular equation $(R^{-1})_{ia} v_a = (R^T)_{ia} v_a$ in (4.14) used the property $R^{-1} = R^T$ of a rotation matrix. If we had not required the new basis to also be a Cartesian basis, we could not assume that $R$ is a rotation matrix, and the second equation in (4.14) would only read $v_i = (R^{-1})_{ia} v_a$. This is the general case for a basis transformation in a vector space, as outlined in Problem 4.1.

Therefore, in general, the expansion coefficients of the vectors change inversely (or contravariantly), namely with $R^{-1}$ from the left, relative to the covariant transformation of the reference frame, which transforms with $R$ from the right. We will often use the passive interpretation for symmetry transformations of quantum systems.
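A short NumPy sketch (with an arbitrary rotation angle chosen for illustration) makes the passive interpretation explicit: the basis transforms with $R$ from the right, the components with $R^{-1} = R^T$ from the left, and the vector itself is unchanged:

```python
import numpy as np

theta = 0.3                     # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e = np.eye(2)                   # columns are the Cartesian basis vectors e_a
v_comp = np.array([1.0, 2.0])   # components v_a in the old basis

e_new = e @ R                   # new basis e_i = e_a R_ai, Eq. (4.10)
v_new = R.T @ v_comp            # components v_i = (R^T)_ia v_a, Eq. (4.14)

# The vector itself is frame independent: e_a v_a = e_i v_i
v_old_frame = e @ v_comp
v_new_frame = e_new @ v_new
```

The check `v_old_frame == v_new_frame` is exactly Eq. (4.13): the contravariant change of the components cancels the covariant change of the basis.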

The transformation laws (4.10) and (4.14) define first rank tensors, because the transformation laws are linear (or first order) in the transformation matrices $R$ or $R^{-1}$.

The tensor product $M = u \otimes v^T = \hat{e}_a \otimes \hat{e}_b^T\, u_a v_b$ then defines a second rank tensor, because the components and the basis transform quadratically (or in second order) with the transformation matrices $R$ or $R^{-1}$,

$$M_{ij} = u_i v_j = (R^{-1})_{ia} (R^{-1})_{jb}\, u_a v_b = (R^{-1})_{ia} (R^{-1})_{jb}\, M_{ab}, \quad (4.15)$$

$$\hat{e}_i \otimes \hat{e}_j^T = \hat{e}_a \otimes \hat{e}_b^T\, R_{ai} R_{bj}. \quad (4.16)$$

The concept immediately generalizes to $n$-th order tensors.

Writing the tensor product explicitly as $u \otimes v^T$ reminds us that the $a$-th row of $M$ is just the row vector $u_a v^T$, while the $b$-th column is just the column vector $u\, v_b$. However, usually one simply writes $u \otimes v$ for the tensor product, just as one writes $u \cdot v$ instead of $u^T \cdot v$ for the scalar product.

Dual Bases

We will now complicate things a little further by generalizing to more general sets of basis vectors which may not be orthonormal. Strictly speaking this is overkill for the purposes of quantum mechanics, because the infinite-dimensional basis vectors which we will use in quantum mechanics are still mutually orthogonal, just like Euclidean basis vectors in finite-dimensional vector spaces. However, sometimes it is useful to learn things in a more general setting to acquire a proper understanding, and besides, non-orthonormal basis vectors are useful in solid state physics (as explained in an example below) and unavoidable in curved spaces.

Let $a_i$, $1 \le i \le N$, be another basis of the vector space $V$. Generically this basis will not be orthonormal: $a_i \cdot a_j \ne \delta_{ij}$. The corresponding dual basis with basis vectors $a^i$ is defined through the requirements

$$a^i \cdot a_j = \delta^i{}_j. \quad (4.17)$$

Apparently a basis is self-dual ($a^i = a_i$) if and only if it is orthonormal (i.e. Cartesian).

For the explicit construction of the dual basis, we observe that the scalar products of the $N$ vectors $a_i$ define a symmetric $N \times N$ matrix $g$ with components

$$g_{ij} = a_i \cdot a_j. \quad (4.18)$$

This matrix is not degenerate, because otherwise it would have at least one vanishing eigenvalue, i.e. there would exist $N$ numbers $X^i$ (not all vanishing) such that $g_{ij} X^j = 0$. This would imply the existence of a non-vanishing vector $X = a_i X^i$ with vanishing length,

$$X^2 = a_i \cdot a_j\, X^i X^j = X^i g_{ij} X^j = 0. \quad (4.19)$$

The matrix $g$ is therefore invertible, and we denote the components of the inverse matrix $g^{-1}$ as $g^{ij}$,

$$g^{ij} g_{jk} = \delta^i{}_k. \quad (4.20)$$

The inverse matrix can be used to construct the dual basis vectors as

$$a^i = g^{ij} a_j. \quad (4.21)$$

The condition for dual basis vectors is readily verified,

$$a^i \cdot a_k = g^{ij} a_j \cdot a_k = g^{ij} g_{jk} = \delta^i{}_k. \quad (4.22)$$

For an example of the construction of a dual basis, consider Fig. 4.1.

The vectors $a_1$ and $a_2$ provide a basis. The angle between $a_1$ and $a_2$ is $\pi/4$ radian, and their lengths are $|a_1| = 2$ and $|a_2| = \sqrt{2}$. The matrix $g_{ij}$ therefore has the following components in this basis,

$$g = \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} = \begin{pmatrix} a_1 \cdot a_1 & a_1 \cdot a_2 \\ a_2 \cdot a_1 & a_2 \cdot a_2 \end{pmatrix} = \begin{pmatrix} 4 & 2 \\ 2 & 2 \end{pmatrix}. \quad (4.23)$$

The inverse matrix is then

$$g^{-1} = \begin{pmatrix} g^{11} & g^{12} \\ g^{21} & g^{22} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}. \quad (4.24)$$

Fig. 4.1 The blue vectors are the basis vectors $a_i$. The red vectors are the dual basis vectors $a^i$

This yields with (4.21) the dual basis vectors

$$a^1 = \frac{1}{2} a_1 - \frac{1}{2} a_2, \qquad a^2 = -\frac{1}{2} a_1 + a_2. \quad (4.25)$$

These equations determine the vectors $a^i$ in Fig. 4.1.
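The construction (4.21)–(4.25) is easy to verify numerically. The sketch below realizes the basis of Fig. 4.1 with concrete coordinates (an assumption for illustration: $a_1 = (2,0)$, $a_2 = (1,1)$, which reproduces the stated lengths and the angle $\pi/4$):

```python
import numpy as np

A = np.array([[2.0, 0.0],       # a_1, length 2
              [1.0, 1.0]])      # a_2, length sqrt(2), at angle pi/4 to a_1

g = A @ A.T                     # metric g_ij = a_i . a_j, Eq. (4.18)
A_dual = np.linalg.inv(g) @ A   # dual basis a^i = g^ij a_j, Eq. (4.21)
```

The rows of `A_dual` come out as $a^1 = (1/2, -1/2)$ and $a^2 = (0, 1)$, i.e. $a^1 = a_1/2 - a_2/2$ and $a^2 = -a_1/2 + a_2$, in agreement with (4.25), and they satisfy the duality condition (4.17).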

Decomposition of the Identity

Equation (4.17) implies that the decomposition of a vector $v \in V$ with respect to the basis $a_i$ can be written as (note summation convention)

$$v = a_i (a^i \cdot v), \quad (4.26)$$

i.e. the projection of $v$ onto the $i$-th basis vector $a_i$ (the component $v^i$ in standard notation) is given through scalar multiplication with the dual basis vector $a^i$:

$$v^i = a^i \cdot v. \quad (4.27)$$

The right-hand side of Eq. (4.26) contains three vectors in each summand, and brackets have been employed to emphasize that the scalar product is between the two rightmost vectors in each term. Another way to make that clear is to write the combination of the two leftmost vectors in each term as a tensor product:

$$v = a_i \otimes a^i \cdot v. \quad (4.28)$$

If we first evaluate all the tensor products and sum over $i$, we have for every vector $v \in V$

$$v = (a_i \otimes a^i) \cdot v, \quad (4.29)$$

which makes it clear that the sum of tensor products in this equation adds up to the identity matrix,

$$a_i \otimes a^i = \mathbf{1}. \quad (4.30)$$

This is the statement that every vector can be uniquely decomposed in terms of the basis $a_i$, and therefore this is a basic example of a completeness relation.

Note that we can just as well expand $v$ with respect to the dual basis vectors $a^i$:

$$v = a^i v_i = a^i (a_i \cdot v) = (a^i \otimes a_i) \cdot v. \quad (4.31)$$

Therefore we also have the dual completeness relation

$$a^i \otimes a_i = \mathbf{1}. \quad (4.32)$$

We could also have inferred this from transposition of Eq. (4.30).

Linear transformations of vectors can be written in terms of matrices,

$$v' = A \cdot v. \quad (4.33)$$

If we insert the decompositions with respect to the basis $a_i$,

$$v' = a_i \otimes a^i \cdot v' = a_i \otimes a^i \cdot A \cdot a_j \otimes a^j \cdot v, \quad (4.34)$$

we find the equation in components $v'^i = A^i{}_j v^j$, with the matrix elements of the operator $A$,

$$A^i{}_j = a^i \cdot A \cdot a_j. \quad (4.35)$$

Using (4.30), we can also infer that

$$A = a_i \otimes a^i \cdot A \cdot a_j \otimes a^j = A^i{}_j\, a_i \otimes a^j. \quad (4.36)$$
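The completeness relation (4.30) and the operator decomposition (4.36) can be verified with a toy basis (the two-dimensional coordinates below are assumed purely for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],               # basis vectors a_i as rows
              [1.0, 1.0]])
A_dual = np.linalg.inv(A @ A.T) @ A     # dual basis a^i as rows, Eq. (4.21)

# Completeness relation (4.30): sum_i a_i (x) a^i = 1
one = sum(np.outer(A[i], A_dual[i]) for i in range(2))

# Matrix elements (4.35) of an arbitrary operator, A^i_j = a^i . Op . a_j
Op = np.array([[0.0, 1.0],
               [1.0, 0.0]])
Op_ij = A_dual @ Op @ A.T

# Reconstruction (4.36): Op = A^i_j a_i (x) a^j
Op_back = np.einsum('ij,ia,jb->ab', Op_ij, A, A_dual)
```

The reconstruction uses both completeness relations (4.30) and (4.32): contracting $A^i{}_j\, a_i \otimes a^j$ over $i$ and $j$ gives back the original operator.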

An Application of Dual Bases in Solid State Physics: The Laue Conditions for Elastic Scattering off a Crystal

Non-orthonormal bases and the corresponding dual bases play an important role in solid state physics. Assume e.g. that $a_i$, $1 \le i \le 3$, are the three fundamental translation vectors of a three-dimensional lattice $L$. They generate the lattice according to

$$\ell = a_i m^i, \qquad m^i \in \mathbb{Z}. \quad (4.37)$$

In three dimensions one can easily construct the dual basis vectors using cross products:

$$a^i = \epsilon^{ijk}\, \frac{a_j \times a_k}{2\, a_1 \cdot (a_2 \times a_3)} = \frac{1}{2V}\, \epsilon^{ijk}\, a_j \times a_k, \quad (4.38)$$

where $V = a_1 \cdot (a_2 \times a_3)$ is the volume of the lattice cell spanned by the basis vectors $a_i$.

The vectors $a^i$, $1 \le i \le 3$, generate the dual lattice or reciprocal lattice $\tilde{L}$ according to

$$\tilde{\ell} = n_i a^i, \qquad n_i \in \mathbb{Z}, \quad (4.39)$$

and the volume of a cell in the dual lattice is

$$\tilde{V} = a^1 \cdot (a^2 \times a^3) = \frac{1}{V}. \quad (4.40)$$

Fig. 4.2 The Laue equation (4.41) is the condition for constructive interference between scattering centers along the line generated by the primitive basis vector $a_i$
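In three dimensions, Eqs. (4.38)–(4.40) amount to the familiar cross-product construction of the reciprocal basis. A NumPy sketch with hypothetical primitive vectors (chosen only for illustration):

```python
import numpy as np

a1 = np.array([1.0, 0.0, 0.0])          # hypothetical primitive vectors
a2 = np.array([0.5, 1.0, 0.0])
a3 = np.array([0.0, 0.3, 1.2])

V = a1 @ np.cross(a2, a3)               # cell volume V = a_1 . (a_2 x a_3)

# Dual basis via Eq. (4.38); the epsilon sum reduces to one cross product per vector
b1 = np.cross(a2, a3) / V
b2 = np.cross(a3, a1) / V
b3 = np.cross(a1, a2) / V

V_dual = b1 @ np.cross(b2, b3)          # dual cell volume, Eq. (4.40)
```

The duality condition $a^i \cdot a_j = \delta^i{}_j$ and the reciprocal volume $\tilde{V} = 1/V$ both follow directly.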

Max von Laue derived in 1912 the conditions for constructive interference in the coherent elastic scattering off a regular array of scattering centers. If the directions of the incident and scattered waves of wavelength $\lambda$ are $\hat{e}_k$ and $\hat{e}_{k'}$, as shown in Fig. 4.2, the condition for constructive interference from all scattering centers along a line generated by $a_i$ is

$$|a_i|\left(\cos\alpha' - \cos\alpha\right) = \left(\hat{e}_{k'} - \hat{e}_k\right) \cdot a_i = n_i \lambda, \quad (4.41)$$

with integer numbers $n_i$.

We can use the wavevector shift,

$$\Delta k = k' - k = \frac{2\pi}{\lambda}\left(\hat{e}_{k'} - \hat{e}_k\right), \quad (4.42)$$

to write equation (4.41) more neatly as

$$\Delta k \cdot a_i = 2\pi n_i. \quad (4.43)$$

If we want to have constructive interference from all scattering centers in the crystal, this condition must hold for all three values of $i$. On the other hand, for surface scattering, Eq. (4.43) must only hold for the two vectors $a_1$ and $a_2$ which generate the lattice structure of the scattering centers on the surface.

In 1913 W.L. Bragg observed that for scattering from a bulk crystal, equations (4.43) are equivalent to constructive interference from specular reflection off sets of equidistant parallel planes in the crystal, and that the Laue conditions can be reduced to the Bragg equation in this case. However, for scattering from one- or two-dimensional crystals³ and for the Ewald construction one still has to use the Laue conditions.

³For scattering off two-dimensional crystals the Laue conditions can be recast in simpler forms in special cases. E.g. for orthogonal incidence a plane grating equation can be derived from the Laue conditions, or if the momentum transfer $\Delta k$ is in the plane of the crystal a two-dimensional Bragg equation can be derived.

If we study scattering off a three-dimensional crystal, we know that the three dual basis vectors $a^i$ span the whole three-dimensional space. Like any three-dimensional vector, the wavevector shift can then be expanded in terms of the dual basis vectors according to

$$\Delta k = a^i (a_i \cdot \Delta k), \quad (4.44)$$

and substitution of Eq. (4.43) yields

$$\Delta k = 2\pi n_i a^i, \quad (4.45)$$

i.e. the condition for constructive interference from coherent elastic scattering off a three-dimensional crystal is equivalent to the statement that $\Delta k/(2\pi)$ is a vector in the dual lattice $\tilde{L}$. Furthermore, energy conservation in the elastic scattering implies $|p'| = |p|$, i.e.

$$\Delta k^2 + 2 k \cdot \Delta k = 0. \quad (4.46)$$
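The content of (4.43)–(4.45) is that any allowed wavevector shift lies in the reciprocal lattice rescaled by $2\pi$. The following sketch (hypothetical primitive vectors and integers, for illustration only) builds such a $\Delta k$ and recovers the Laue integers:

```python
import numpy as np

a = np.array([[1.0, 0.0, 0.0],          # hypothetical primitive vectors as rows
              [0.5, 1.0, 0.0],
              [0.0, 0.3, 1.2]])
V = a[0] @ np.cross(a[1], a[2])
b = np.array([np.cross(a[1], a[2]),     # dual basis a^i as rows, Eq. (4.38)
              np.cross(a[2], a[0]),
              np.cross(a[0], a[1])]) / V

n = np.array([1, -2, 3])                # arbitrary integers n_i
dk = 2 * np.pi * (n @ b)                # Delta k = 2 pi n_i a^i, Eq. (4.45)

laue = a @ dk / (2 * np.pi)             # Laue conditions (4.43) recover the n_i
```

Projecting $\Delta k$ onto each primitive vector returns $2\pi n_i$ exactly because the $a^i$ are dual to the $a_i$.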

Equations (4.43) and (4.46) together lead to the Ewald construction for the momenta of elastically scattered beams (see Fig. 4.3): Draw the dual lattice and multiply all distances by a factor $2\pi$. Then draw the vector $-k$ from one (arbitrary) point of this rescaled dual lattice. Draw a sphere of radius $|k|$ around the endpoint of $-k$. Any point in the rescaled dual lattice which lies on this sphere corresponds to the $k'$ vector of an elastically scattered beam; $k'$ points from the endpoint of $-k$ (the center of the sphere) to the rescaled dual lattice point on the sphere.

Fig. 4.3 The Ewald construction of the wave vectors of elastically scattered beams. The points correspond to the reciprocal lattice stretched with the factor $2\pi$

We have already noticed that for scattering off a planar array of scattering centers, Eq. (4.43) must only hold for the two vectors $a_1$ and $a_2$ which generate the lattice structure of the scattering centers on the surface. And if we have only a linear array of scattering centers, Eq. (4.43) must only hold for the vector $a_1$ which generates the linear array. In those two cases the wavevector shift can be decomposed into components orthogonal and parallel to the scattering surface or line, and the Laue conditions then imply that the parallel component is a vector in the rescaled dual lattice,

$$\Delta k = \Delta k_\perp + \Delta k_\parallel = \Delta k_\perp + a^i (a_i \cdot \Delta k) = \Delta k_\perp + 2\pi n_i a^i. \quad (4.47)$$

The rescaled dual lattice is also important in the umklapp processes in phonon-phonon or electron-phonon scattering in crystals. Lattices can only support oscillations with wavelengths larger than certain minimal wavelengths, which are determined by the crystal structure. As a result, momentum conservation in phonon-phonon or electron-phonon scattering involves the rescaled dual lattice,

$$k_{\mathrm{in}} - k_{\mathrm{out}} \in 2\pi \times \tilde{L}, \quad (4.48)$$

see textbooks on solid state physics.

Bra-ket Notation in Linear Algebra

The translation of the previous notions of linear algebra into bra-ket notation starts with the notion of a ket vector for a vector, $v \equiv |v\rangle$, and a bra vector for a transposed vector,⁴ $v^T \equiv \langle v|$. The tensor product is

$$u \otimes v^T = |u\rangle\langle v|, \quad (4.49)$$

and the scalar product is

$$u^T \cdot v = \langle u|v\rangle. \quad (4.50)$$

The appearance of the brackets on the right-hand side motivated the designation "bra vector" for a transposed vector and "ket vector" for a vector.

The decomposition of a vector in the basis $|a_i\rangle$, using the dual basis $|a^i\rangle$, is

$$|v\rangle = |a_i\rangle\langle a^i|v\rangle, \quad (4.51)$$

⁴In the case of a complex finite-dimensional vector space, the "bra vector" would actually be the transposed complex conjugate vector, $\langle v| = v^+ = v^{*T}$.

and corresponds to the decomposition of unity

$$|a_i\rangle\langle a^i| = \mathbf{1}. \quad (4.52)$$

A linear operator maps vectors $|v\rangle$ into vectors $|v'\rangle$, $|v'\rangle = A|v\rangle$. This reads in components

$$\langle a^i|v'\rangle = \langle a^i|A|v\rangle = \langle a^i|A|a_j\rangle\langle a^j|v\rangle, \quad (4.53)$$

where

$$A^i{}_j \equiv \langle a^i|A|a_j\rangle \quad (4.54)$$

are the matrix elements of the linear operator $A$. There is no real advantage in using bra-ket notation in the linear algebra of finite-dimensional vector spaces, but it turns out to be very useful in quantum mechanics.
