1.1 Notion of the Vector Space
We start with the definition of the vector space over the field of real numbers \(\mathbb{R}\).
Definition 1.1 A vector space is a set \(V\) of elements called vectors satisfying the following axioms.

A. To every pair \(x\) and \(y\) of vectors in \(V\) there corresponds a vector \(x + y\), called the sum of \(x\) and \(y\), such that
(A.1) \(x + y = y + x\) (addition is commutative),
(A.2) \((x + y) + z = x + (y + z)\) (addition is associative),
(A.3) there exists in \(V\) a unique vector zero \(\mathbf{0}\), such that \(\mathbf{0} + x = x, \ \forall x \in V\),
(A.4) to every vector \(x\) in \(V\) there corresponds a unique vector \(-x\) such that \(x + (-x) = \mathbf{0}\).

B. To every pair \(\alpha\) and \(x\), where \(\alpha\) is a scalar real number and \(x\) is a vector in \(V\), there corresponds a vector \(\alpha x\), called the product of \(\alpha\) and \(x\), such that
(B.1) \(\alpha (\beta x) = (\alpha \beta) x\) (multiplication by scalars is associative),
(B.2) \(1 x = x\),
(B.3) \(\alpha (x + y) = \alpha x + \alpha y\) (multiplication by scalars is distributive with respect to vector addition),
(B.4) \((\alpha + \beta) x = \alpha x + \beta x\) (multiplication by scalars is distributive with respect to scalar addition), \(\forall \alpha, \beta \in \mathbb{R}\), \(\forall x, y \in V\).
Examples of vector spaces.
(1) The set of all real numbers \(\mathbb{R}\).
(2) The set of all directional arrows in two or three dimensions. With the usual rules for vector addition, multiplication by a scalar, and the negative and zero vectors (Fig. 1.1), one easily checks that the above axioms hold for directional arrows.
Fig 1.1 Geometric illustration of vector axioms in two dimensions
(3) The set of all \(n\)-tuples of real numbers \(\mathbb{R}\): \(a = (a_1, a_2, \ldots, a_n)\).
Indeed, the axioms (A) and (B) apply to the \(n\)-tuples if one defines addition, multiplication by a scalar and finally the zero tuple, respectively, by
\[ a + b = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n), \quad \alpha a = (\alpha a_1, \alpha a_2, \ldots, \alpha a_n), \quad \mathbf{0} = (0, 0, \ldots, 0). \]
(4) The set of all real-valued functions defined on a real line.
1.2 Basis and Dimension of the Vector Space
Definition 1.2 A set of vectors \(x_1, x_2, \ldots, x_n\) is called linearly dependent if there exists a set of corresponding scalars \(\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}\), not all zero, such that
\[ \sum_{i=1}^{n} \alpha_i x_i = \mathbf{0}. \quad (1.1) \]
Otherwise, the vectors \(x_1, x_2, \ldots, x_n\) are called linearly independent. In this case, none of the vectors \(x_i\) is the zero vector (Exercise 1.2).
Definition 1.3 The expression
\[ \sum_{i=1}^{n} \alpha_i x_i \quad (1.2) \]
is called a linear combination of the vectors \(x_1, x_2, \ldots, x_n\), where \(\alpha_i \in \mathbb{R} \ (i = 1, 2, \ldots, n)\).
Theorem 1.1 The set of \(n\) non-zero vectors \(x_1, x_2, \ldots, x_n\) is linearly dependent if and only if some vector \(x_k \ (2 \le k \le n)\) is a linear combination of the preceding ones \(x_i \ (i = 1, \ldots, k-1)\).
Proof If the vectors \(x_1, x_2, \ldots, x_n\) are linearly dependent, then
\[ \sum_{i=1}^{n} \alpha_i x_i = \mathbf{0}, \]
where not all \(\alpha_i\) are zero. Let \(\alpha_k \ (2 \le k \le n)\) be the last non-zero number, so that \(\alpha_i = 0 \ (i = k+1, \ldots, n)\). Then,
\[ \sum_{i=1}^{k} \alpha_i x_i = \mathbf{0} \quad \Longrightarrow \quad x_k = \sum_{i=1}^{k-1} \frac{-\alpha_i}{\alpha_k} x_i. \]
Thereby, the case \(k = 1\) is avoided because \(\alpha_1 x_1 = \mathbf{0}\) implies that \(x_1 = \mathbf{0}\) (Exercise 1.1). Thus, the sufficiency is proved. The necessity is evident.
Definition 1.4 A basis in a vector space \(V\) is a set \(\mathcal{G} \subset V\) of linearly independent vectors such that every vector in \(V\) is a linear combination of elements of \(\mathcal{G}\). A vector space \(V\) is finite-dimensional if it has a finite basis.
Within this book we deal only with finite-dimensional vector spaces. Although one can find infinitely many bases of such a vector space, they all contain the same number of vectors, as established by the following theorem.
Theorem 1.2 All the bases of a finite-dimensional vector space \(V\) contain the same number of vectors.
Proof Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) and \(\mathcal{F} = \{f_1, f_2, \ldots, f_m\}\) be two arbitrary bases of \(V\) with different numbers of elements, say \(m > n\). Then, every vector in \(V\) is a linear combination of the following vectors:
\[ f_1, g_1, g_2, \ldots, g_n. \quad (1.3) \]
These vectors are non-zero and linearly dependent. Thus, according to Theorem 1.1, we can find a vector \(g_k\) which is a linear combination of the preceding ones. Excluding \(g_k\) we obtain the set \(\mathcal{G}'\) consisting of \(f_1, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n\), such that every vector in \(V\) is still a linear combination of the elements of \(\mathcal{G}'\). Now, we consider the vectors \(f_1, f_2, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n\) and apply the above exclusion procedure again. None of the vectors \(f_i\) can be eliminated, since they are linearly independent. As soon as all the vectors \(g_i \ (i = 1, 2, \ldots, n)\) are exhausted, we conclude that the vectors \(f_1, f_2, \ldots, f_{n+1}\) are linearly dependent. This contradicts, however, the assumption that they belong to the basis \(\mathcal{F}\).
Definition 1.5 The dimension of a finite-dimensional vector space \(V\) is the number of elements in a basis of \(V\).
Theorem 1.3 Every set \(\mathcal{F} = \{f_1, f_2, \ldots, f_n\}\) of linearly independent vectors in an \(n\)-dimensional vector space \(V\) forms a basis of \(V\). Every set of more than \(n\) vectors is linearly dependent.
Proof The proof of this theorem is similar to the preceding one. Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) be a basis of \(V\). Then, the vectors \(f_1, g_1, g_2, \ldots, g_n\) are non-zero and linearly dependent. Excluding a vector \(g_k\) we obtain a set of vectors, say \(\mathcal{G}'\), with the property that every vector in \(V\) is a linear combination of its elements. Repeating this procedure we finally end up with the set \(\mathcal{F}\) with the same property. Since the vectors \(f_i \ (i = 1, 2, \ldots, n)\) are linearly independent, they form a basis of \(V\). Any further vectors in \(V\), say \(f_{n+1}, f_{n+2}, \ldots\), are thus linear combinations of \(\mathcal{F}\). Hence, any set of more than \(n\) vectors is linearly dependent.
Theorem 1.4 Every set \(\mathcal{F} = \{f_1, f_2, \ldots, f_m\}\) of linearly independent vectors in an \(n\)-dimensional vector space \(V\) can be extended to a basis.
Proof If \(m = n\), then \(\mathcal{F}\) is already a basis according to Theorem 1.3. If \(m < n\), then we try to find \(n - m\) vectors \(f_{m+1}, f_{m+2}, \ldots, f_n\), such that all the vectors \(f_i\), that is, \(f_1, f_2, \ldots, f_m, f_{m+1}, \ldots, f_n\), are linearly independent and consequently form a basis. If this were not possible, then at most \(k < n - m\) such vectors could be found. In this case, for every vector \(x \in V\) there would exist scalars \(\alpha, \alpha_1, \alpha_2, \ldots, \alpha_{m+k}\), not all zero, such that
\[ \alpha x + \alpha_1 f_1 + \alpha_2 f_2 + \ldots + \alpha_{m+k} f_{m+k} = \mathbf{0}. \]
Since the vectors \(f_i \ (i = 1, 2, \ldots, m+k)\) are linearly independent, we would have \(\alpha \ne 0\), so that every vector \(x \in V\) would be a linear combination of \(f_1, f_2, \ldots, f_{m+k}\). The dimension of \(V\) would then be \(m + k < n\), which contradicts the assumption of the theorem.
1.3 Components of a Vector, Summation Convention
Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) be a basis of an \(n\)-dimensional vector space \(V\). Then, for every \(x \in V\),
\[ x = \sum_{i=1}^{n} x^i g_i. \quad (1.4) \]
Theorem 1.5 The representation (1.4) with respect to a given basis \(\mathcal{G}\) is unique.
Proof Let
\[ x = \sum_{i=1}^{n} x^i g_i \quad \text{and} \quad x = \sum_{i=1}^{n} y^i g_i \]
be two different representations of a vector \(x\), where not all scalar coefficients \(x^i\) and \(y^i \ (i = 1, 2, \ldots, n)\) are pairwise identical. Then,
\[ \mathbf{0} = x + (-x) = x + (-1) x = \sum_{i=1}^{n} x^i g_i + \sum_{i=1}^{n} \left( -y^i \right) g_i = \sum_{i=1}^{n} \left( x^i - y^i \right) g_i, \]
where we use the identity \(-x = (-1) x\) (Exercise 1.1). Thus, either the numbers \(x^i\) and \(y^i\) are pairwise equal, \(x^i = y^i \ (i = 1, 2, \ldots, n)\), or the vectors \(g_i\) are linearly dependent. The latter case is, however, impossible, since these vectors form a basis of \(V\).
The scalar numbers \(x^i \ (i = 1, 2, \ldots, n)\) in the representation (1.4) are called components of the vector \(x\) with respect to the basis \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\).
The summation of the form (1.4) is often used in tensor algebra. For this reason it is usually represented without the summation symbol:
\[ x = \sum_{i=1}^{n} x^i g_i = x^i g_i \quad (1.5) \]
(Einstein's summation convention). Accordingly, the summation is implied if an index appears twice in a multiplicative term, once as a superscript and once as a subscript. Such a repeated index (called dummy index) takes the values from 1 to \(n\), where \(n\) denotes the dimension of the vector space in consideration. The sense of the index changes (from superscript to subscript or vice versa) if it appears under the fraction bar.
1.4 Scalar Product, Euclidean Space, Orthonormal Basis
The scalar product plays an important role in vector and tensor algebra. The properties of the vector space essentially depend on whether and how the scalar product is defined in this space.
Definition 1.6 The scalar (inner) product is a real-valued function \(x \cdot y\) of two vectors \(x\) and \(y\) in a vector space \(V\), satisfying the following conditions.
(C.1) \(x \cdot y = y \cdot x\) (commutative rule),
(C.2) \(x \cdot (y + z) = x \cdot y + x \cdot z\) (distributive rule),
(C.3) \(\alpha (x \cdot y) = (\alpha x) \cdot y = x \cdot (\alpha y)\) (associative rule for the multiplication by a scalar), \(\forall \alpha \in \mathbb{R}\), \(\forall x, y, z \in V\),
(C.4) \(x \cdot x \ge 0 \ \forall x \in V\), and \(x \cdot x = 0\) if and only if \(x = \mathbf{0}\).
An \(n\)-dimensional vector space furnished with the scalar product satisfying these conditions is referred to as Euclidean space \(\mathbb{E}^n\). On the basis of the scalar product one defines the Euclidean length (also called norm) of a vector \(x\) by
\[ \| x \| = \sqrt{x \cdot x}. \quad (1.6) \]
A vector whose length is equal to 1 is referred to as a unit vector.
Definition 1.7 Two vectors \(x\) and \(y\) are called orthogonal (perpendicular), denoted by \(x \perp y\), if
\[ x \cdot y = 0. \quad (1.7) \]
Of special interest is the so-called orthonormal basis of the Euclidean space.
Definition 1.8 A basis \(\mathcal{E} = \{e_1, e_2, \ldots, e_n\}\) of an \(n\)-dimensional Euclidean space \(\mathbb{E}^n\) is called orthonormal if
\[ e_i \cdot e_j = \delta_{ij}, \quad i, j = 1, 2, \ldots, n, \quad (1.8) \]
where
\[ \delta_{ij} = \delta^{ij} = \delta_i^j = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \ne j \end{cases} \quad (1.9) \]
denotes the Kronecker delta.
Thus, an orthonormal basis consists of pairwise orthogonal unit vectors. The important question now is whether such a basis exists. In the following we show that every set of \(m \le n\) linearly independent vectors in \(\mathbb{E}^n\) can be orthogonalized and normalized, so that an orthonormal basis can always be constructed.
This can be achieved by the so-called Gram-Schmidt procedure, which constructs from \(m\) linearly independent vectors \(x_1, x_2, \ldots, x_m\) new vectors \(e_1, e_2, \ldots, e_m\) such that \(e_i \cdot e_j = \delta_{ij} \ (i, j = 1, 2, \ldots, m)\). Since the vectors \(x_i\) are linearly independent, they are all non-zero (Exercise 1.2). Thus, we can define the first unit vector by
\[ e_1 = \frac{x_1}{\| x_1 \|}. \quad (1.10) \]
Next, we consider the vector
\[ e_2' = x_2 - (x_2 \cdot e_1) e_1, \quad (1.11) \]
which is orthogonal to \(e_1\). The same holds for the unit vector \(e_2 = e_2' / \| e_2' \|\). Note that \(e_2' \ne \mathbf{0}\), because otherwise \(x_2 = (x_2 \cdot e_1) e_1 = (x_2 \cdot e_1) x_1 / \| x_1 \|\), which would contradict the linear independence of the vectors \(x_1\) and \(x_2\).
Further, we construct the vectors
\[ e_3' = x_3 - (x_3 \cdot e_2) e_2 - (x_3 \cdot e_1) e_1, \quad e_3 = \frac{e_3'}{\| e_3' \|}, \]
orthogonal to \(e_1\) and \(e_2\), and so on. Ultimately we obtain a set of orthonormal vectors \(e_1, e_2, \ldots, e_m\). Since these vectors are non-zero and mutually orthogonal, they are linearly independent (Exercise 1.6) and represent, in the case \(m = n\), an orthonormal basis of \(\mathbb{E}^n\).
With respect to an orthonormal basis the scalar product of two vectors \(x = x^i e_i\) and \(y = y^i e_i\) in \(\mathbb{E}^n\) takes, by virtue of (1.8), the simple form
\[ x \cdot y = x^1 y^1 + x^2 y^2 + \ldots + x^n y^n. \]
The length of the vector \(x\) can then be computed by the Pythagorean formula
\[ \| x \| = \sqrt{\left( x^1 \right)^2 + \left( x^2 \right)^2 + \ldots + \left( x^n \right)^2}. \]
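The Gram-Schmidt procedure translates directly into a short numerical routine. The following is a minimal NumPy sketch (not part of the original text); the function name and the sample vectors are chosen freely for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (rows), as in (1.10)-(1.11)."""
    basis = []
    for x in vectors:
        # subtract the projections onto the already constructed unit vectors
        e = x - sum((x @ e_j) * e_j for e_j in basis)
        norm = np.linalg.norm(e)
        if norm < 1e-12:
            raise ValueError("vectors are linearly dependent")
        basis.append(e / norm)
    return np.array(basis)

x = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
E = gram_schmidt(x)
print(np.allclose(E @ E.T, np.eye(3)))  # True: e_i . e_j = delta_ij (1.8)
```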
1.5 Dual Bases

Definition 1.9 Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) be a basis in the \(n\)-dimensional Euclidean space \(\mathbb{E}^n\). Then, a basis \(\mathcal{G}' = \{g^1, g^2, \ldots, g^n\}\) of \(\mathbb{E}^n\) is called dual to \(\mathcal{G}\) if
\[ g_i \cdot g^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \quad (1.15) \]
In the following we show that a set of vectors \(\mathcal{G}' = \{g^1, g^2, \ldots, g^n\}\) satisfying the conditions (1.15) always exists, is unique and forms a basis in \(\mathbb{E}^n\).
Let \(\mathcal{E} = \{e_1, e_2, \ldots, e_n\}\) be an orthonormal basis in \(\mathbb{E}^n\). Since \(\mathcal{G}\) also represents a basis, we can write
\[ e_i = \alpha_i^j g_j, \quad g_i = \beta_i^j e_j, \quad i = 1, 2, \ldots, n, \quad (1.16) \]
with some coefficients \(\alpha_i^j\) and \(\beta_i^j\). Inserting the first relation (1.16) into the second one yields \(g_i = \beta_i^j \alpha_j^k g_k\) and consequently
\[ \mathbf{0} = \left( \beta_i^j \alpha_j^k - \delta_i^k \right) g_k, \quad i = 1, 2, \ldots, n. \]
Since the vectors \(g_i\) are linearly independent, we obtain
\[ \beta_i^j \alpha_j^k = \delta_i^k, \quad i, k = 1, 2, \ldots, n. \quad (1.17) \]
Now, we set
\[ g^i = \alpha_j^i e_j, \quad i = 1, 2, \ldots, n. \quad (1.18) \]
With the aid of (1.16), (1.17) and (1.8) we then find
\[ g_i \cdot g^j = \left( \beta_i^k e_k \right) \cdot \left( \alpha_l^j e_l \right) = \beta_i^k \alpha_l^j \delta_{kl} = \beta_i^k \alpha_k^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n, \quad (1.19) \]
so that the vectors (1.18) satisfy the conditions (1.15).
Next, we show that the vectors \(g^i \ (i = 1, 2, \ldots, n)\) are linearly independent and therefore form a basis of \(\mathbb{E}^n\). Assume on the contrary that \(a_i g^i = \mathbf{0}\), where not all scalars \(a_i \ (i = 1, 2, \ldots, n)\) are zero. Multiplying both sides of this equation scalarly by the vectors \(g_j \ (j = 1, 2, \ldots, n)\) leads, with the aid of (1.167) (Exercise 1.5), to the contradiction \(a_j = 0 \ (j = 1, 2, \ldots, n)\).
The next important question is whether the dual basis is unique. Let \(\mathcal{G}' = \{g^1, g^2, \ldots, g^n\}\) and \(\mathcal{H}' = \{h^1, h^2, \ldots, h^n\}\) be two arbitrary non-coinciding bases in \(\mathbb{E}^n\), both dual to \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\). Then,
\[ h^i = h^i_j g^j, \quad i = 1, 2, \ldots, n. \]
Forming the scalar product with the vectors \(g_j \ (j = 1, 2, \ldots, n)\) we can conclude that the bases \(\mathcal{G}'\) and \(\mathcal{H}'\) coincide:
\[ \delta^i_j = h^i \cdot g_j = \left( h^i_k g^k \right) \cdot g_j = h^i_k \delta^k_j = h^i_j \quad \Longrightarrow \quad h^i = g^i, \quad i = 1, 2, \ldots, n. \]
Thus, we have proved the following theorem.
Theorem 1.6 To every basis in a Euclidean space \(\mathbb{E}^n\) there exists a unique dual basis.
Relation (1.19) enables one to determine the dual basis. However, it can also be obtained without giving an orthonormal basis. Indeed, let \(g^i\) be a basis dual to \(g_i \ (i = 1, 2, \ldots, n)\). Then, we can write
\[ g^i = g^{ij} g_j, \quad g_i = g_{ij} g^j, \quad i = 1, 2, \ldots, n, \quad (1.21) \]
with some coefficients \(g^{ij}\) and \(g_{ij}\). Inserting the first relation (1.21) into the second one yields \(g_i = g_{ij} g^{jk} g_k\). Multiplying scalarly with the vectors \(g^l\) and using (1.15) we obtain
\[ \delta_i^l = g_{ij} g^{jk} \delta_k^l = g_{ij} g^{jl}, \quad i, l = 1, 2, \ldots, n. \]
Thus, we see that the matrices \(\left[ g_{kj} \right]\) and \(\left[ g^{kj} \right]\) are inverse to each other, such that
\[ \left[ g^{kj} \right] = \left[ g_{kj} \right]^{-1}. \quad (1.24) \]
By multiplying scalarly the first relation (1.21) by the vectors \(g^j\) and the second one by \(g_j \ (j = 1, 2, \ldots, n)\) we obtain, with the aid of (1.15), the useful identities
\[ g^{ij} = g^{ji} = g^i \cdot g^j, \quad g_{ij} = g_{ji} = g_i \cdot g_j, \quad i, j = 1, 2, \ldots, n. \quad (1.25) \]
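Relations (1.21), (1.24) and (1.25) give a direct numerical recipe for the dual basis: compute the metric coefficients \(g_{ij} = g_i \cdot g_j\), invert the matrix, and form \(g^i = g^{ij} g_j\). A small NumPy sketch (illustrative only; the sample basis is made up):

```python
import numpy as np

g = np.array([[1.0, 0.0, 0.0],   # g_1 (rows are basis vectors)
              [1.0, 1.0, 0.0],   # g_2
              [1.0, 1.0, 1.0]])  # g_3

G = g @ g.T                      # metric coefficients g_ij = g_i . g_j  (1.25)
G_inv = np.linalg.inv(G)         # contravariant metric g^ij            (1.24)
g_dual = G_inv @ g               # dual basis g^i = g^ij g_j            (1.21)

print(np.allclose(g @ g_dual.T, np.eye(3)))  # True: g_i . g^j = delta_i^j (1.15)
```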
Note that an orthonormal basis in \(\mathbb{E}^n\) is self-dual, so that \(e_i = e^i\) and \(e_i \cdot e^j = \delta_i^j \ (i, j = 1, 2, \ldots, n)\). With the aid of the dual bases one can represent an arbitrary vector \(x\) in \(\mathbb{E}^n\) by
\[ x = x^i g_i = x_i g^i, \quad (1.27) \]
where
\[ x^i = x \cdot g^i, \quad x_i = x \cdot g_i, \quad i = 1, 2, \ldots, n. \quad (1.28) \]
Indeed, using (1.15) we can write
\[ x \cdot g^i = \left( x^j g_j \right) \cdot g^i = x^j \delta_j^i = x^i, \quad x \cdot g_i = \left( x_j g^j \right) \cdot g_i = x_j \delta_i^j = x_i, \quad i = 1, 2, \ldots, n. \]
The components of a vector with respect to the dual bases are suitable for calculating the scalar product. For two arbitrary vectors \(x = x^i g_i = x_i g^i\) and \(y = y^i g_i = y_i g^i\) we obtain
\[ x \cdot y = x^i y^j g_{ij} = x_i y_j g^{ij} = x^i y_i = x_i y^i. \]
In particular, the length of the vector \(x\) can be represented by
\[ \| x \| = \sqrt{x \cdot x} = \sqrt{x_i x_j g^{ij}} = \sqrt{x^i x^j g_{ij}} = \sqrt{x_i x^i}. \]
In the following we consider the three-dimensional Euclidean space. Let \(\mathcal{G} = \{g_1, g_2, g_3\}\) be a basis of \(\mathbb{E}^3\) and set
\[ g = [g_1 g_2 g_3], \]
where \([abc] = (a \times b) \cdot c = (b \times c) \cdot a = (c \times a) \cdot b\) denotes the mixed product of vectors and "\(\times\)" the vector (or cross) product. Consider the following set of vectors:
\[ g^1 = g^{-1} \, g_2 \times g_3, \quad g^2 = g^{-1} \, g_3 \times g_1, \quad g^3 = g^{-1} \, g_1 \times g_2. \quad (1.33) \]
The vectors (1.33) satisfy the conditions (1.15), are linearly independent (Exercise 1.11) and consequently form the basis dual to \(g_i \ (i = 1, 2, 3)\). Further, it can be shown that
\[ g^2 = \left| g_{ij} \right|, \quad (1.34) \]
where \(\left| \cdot \right|\) denotes the determinant of the matrix \(\left[ g_{ij} \right]\). Indeed, with the aid of (1.16)₂ we obtain
\[ g = [g_1 g_2 g_3] = \left[ \beta_1^i e_i \ \beta_2^j e_j \ \beta_3^k e_k \right] = \beta_1^i \beta_2^j \beta_3^k \left[ e_i e_j e_k \right] = \beta_1^i \beta_2^j \beta_3^k e_{ijk} = \left| \beta_j^i \right|, \quad (1.35) \]
where \(e_{ijk}\) denotes the permutation symbol (also called Levi-Civita symbol). It is defined by
\[ e_{ijk} = e^{ijk} = \left[ e_i e_j e_k \right] = \begin{cases} 1 & \text{if } ijk \text{ is an even permutation of 123,} \\ -1 & \text{if } ijk \text{ is an odd permutation of 123,} \\ 0 & \text{otherwise,} \end{cases} \quad (1.36) \]
where the orthonormal vectors \(e_1\), \(e_2\) and \(e_3\) are numerated in such a way that they form a right-handed system. In this case, \([e_1 e_2 e_3] = 1\).
On the other hand, we can write again using (1.16)₂
\[ g_{ij} = g_i \cdot g_j = \beta_i^k \beta_j^l \, e_k \cdot e_l = \sum_{k=1}^{3} \beta_i^k \beta_j^k. \]
The latter sum can be represented as a product of two matrices so that
\[ \left[ g_{ij} \right] = \left[ \beta_i^j \right] \left[ \beta_i^j \right]^{\mathrm{T}}. \quad (1.37) \]
Since the determinant of the matrix product is equal to the product of the matrix determinants we finally have
\[ \left| g_{ij} \right| = \left| \beta_i^j \right|^2 = g^2. \quad (1.38) \]
With the aid of the permutation symbol (1.36) one can write
\[ \left[ g_i g_j g_k \right] = e_{ijk} \, g, \quad i, j, k = 1, 2, 3, \quad (1.39) \]
which by (1.28)₂ yields an alternative representation of the identities (1.33) as
\[ g_i \times g_j = e_{ijk} \, g \, g^k, \quad i, j = 1, 2, 3. \quad (1.40) \]
Similarly to (1.35) one can also show that (see Exercise 1.12)
\[ \left[ g^1 g^2 g^3 \right] = g^{-1} \quad (1.41) \]
and
\[ \left| g^{ij} \right| = g^{-2}. \quad (1.42) \]
Thus,
\[ \left[ g^i g^j g^k \right] = e^{ijk} \, g^{-1}, \quad i, j, k = 1, 2, 3, \quad (1.43) \]
which yields by analogy with (1.40)
\[ g^i \times g^j = e^{ijk} \, g^{-1} \, g_k, \quad i, j = 1, 2, 3. \quad (1.44) \]
Relations (1.40) and (1.44) permit a useful representation of the vector product. Indeed, let \(a = a^i g_i = a_i g^i\) and \(b = b^j g_j = b_j g^j\) be two arbitrary vectors in \(\mathbb{E}^3\). Then, in view of (1.32),
\[ a \times b = \left( a^i g_i \right) \times \left( b^j g_j \right) = a^i b^j e_{ijk} \, g \, g^k = g \begin{vmatrix} a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \\ g^1 & g^2 & g^3 \end{vmatrix}, \quad (1.45) \]
\[ a \times b = \left( a_i g^i \right) \times \left( b_j g^j \right) = a_i b_j e^{ijk} \, g^{-1} \, g_k = \frac{1}{g} \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ g_1 & g_2 & g_3 \end{vmatrix}. \quad (1.46) \]
For an orthonormal basis in \(\mathbb{E}^3\) relations (1.40) and (1.44) reduce to
\[ e_i \times e_j = e_{ijk} \, e_k = e^{ijk} \, e^k, \quad i, j = 1, 2, 3, \]
so that the vector product (1.45) can be written by
\[ a \times b = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ e_1 & e_2 & e_3 \end{vmatrix}, \]
where \(a = a_i e_i\) and \(b = b_j e_j\).
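For \(\mathbb{E}^3\), relations (1.33) and (1.34) can also be checked numerically via cross products, independently of the metric-based construction above. A brief NumPy sketch (illustrative; the basis is the same made-up example as before):

```python
import numpy as np

g1, g2, g3 = np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([1.0, 1, 1])

g = np.cross(g1, g2) @ g3            # mixed product g = [g1 g2 g3]
dual = np.array([np.cross(g2, g3),   # g^1 = (g2 x g3)/g   (1.33)
                 np.cross(g3, g1),   # g^2 = (g3 x g1)/g
                 np.cross(g1, g2)]) / g

G = np.array([g1, g2, g3])
print(np.allclose(G @ dual.T, np.eye(3)))          # (1.15) holds
print(np.isclose(g**2, np.linalg.det(G @ G.T)))    # (1.34): g^2 = |g_ij|
```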
1.6 Second-Order Tensor as a Linear Mapping
Let us consider the set \(\mathrm{Lin}^n\) of all linear mappings of one vector into another one within \(\mathbb{E}^n\). Such a mapping can be written as
\[ y = \mathbf{A} x, \quad y \in \mathbb{E}^n, \quad \forall x \in \mathbb{E}^n, \quad \forall \mathbf{A} \in \mathrm{Lin}^n. \quad (1.48) \]
Elements of the set \(\mathrm{Lin}^n\) are called second-order tensors or simply tensors. Linearity of the mapping (1.48) is expressed by the following relations:
\[ \mathbf{A} (x + y) = \mathbf{A} x + \mathbf{A} y, \quad \forall x, y \in \mathbb{E}^n, \quad (1.49) \]
\[ \mathbf{A} (\alpha x) = \alpha (\mathbf{A} x), \quad \forall x \in \mathbb{E}^n, \ \forall \alpha \in \mathbb{R}, \ \forall \mathbf{A} \in \mathrm{Lin}^n. \quad (1.50) \]
Further, we define the product of a tensor by a scalar number \(\alpha \in \mathbb{R}\) as
\[ (\alpha \mathbf{A}) x = \alpha (\mathbf{A} x) = \mathbf{A} (\alpha x), \quad \forall x \in \mathbb{E}^n, \quad (1.51) \]
and the sum of two tensors \(\mathbf{A}\) and \(\mathbf{B}\) as
\[ (\mathbf{A} + \mathbf{B}) x = \mathbf{A} x + \mathbf{B} x, \quad \forall x \in \mathbb{E}^n. \quad (1.52) \]
Thus, properties (A.1), (A.2) and (B.1)–(B.4) apply to the set \(\mathrm{Lin}^n\). Setting in (1.51) \(\alpha = -1\) we obtain the negative tensor by
\[ -\mathbf{A} = (-1) \mathbf{A}. \quad (1.53) \]
Further, we define a zero tensor \(\mathbf{0}\) in the following manner:
\[ \mathbf{0} x = \mathbf{0}, \quad \forall x \in \mathbb{E}^n, \quad (1.54) \]
so that the elements of the set \(\mathrm{Lin}^n\) also fulfill conditions (A.3) and (A.4) and accordingly form a vector space.
The properties of second-order tensors can thus be summarized by
\[ \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \quad \text{(addition is commutative)}, \quad (1.55) \]
\[ \mathbf{A} + (\mathbf{B} + \mathbf{C}) = (\mathbf{A} + \mathbf{B}) + \mathbf{C} \quad \text{(addition is associative)}, \quad (1.56) \]
\[ \mathbf{0} + \mathbf{A} = \mathbf{A}, \quad (1.57) \]
\[ \mathbf{A} + (-\mathbf{A}) = \mathbf{0}, \quad (1.58) \]
\[ \alpha (\beta \mathbf{A}) = (\alpha \beta) \mathbf{A} \quad \text{(multiplication by scalars is associative)}, \quad (1.59) \]
\[ 1 \mathbf{A} = \mathbf{A}, \quad (1.60) \]
\[ \alpha (\mathbf{A} + \mathbf{B}) = \alpha \mathbf{A} + \alpha \mathbf{B} \quad \text{(multiplication by scalars is distributive with respect to tensor addition)}, \quad (1.61) \]
\[ (\alpha + \beta) \mathbf{A} = \alpha \mathbf{A} + \beta \mathbf{A} \quad \text{(multiplication by scalars is distributive with respect to scalar addition)}, \quad (1.62) \]
\(\forall \mathbf{A}, \mathbf{B}, \mathbf{C} \in \mathrm{Lin}^n\), \(\forall \alpha, \beta \in \mathbb{R}\).
Example 1.2 Vector product in \(\mathbb{E}^3\). The vector product of two vectors in \(\mathbb{E}^3\) represents again a vector in \(\mathbb{E}^3\):
\[ z = w \times x, \quad z \in \mathbb{E}^3, \quad \forall w, x \in \mathbb{E}^3. \quad (1.63) \]
According to (1.45) the mapping \(x \to z = w \times x\) is linear (see Exercise 1.16):
\[ w \times (\alpha x) = \alpha (w \times x), \quad w \times (x + y) = w \times x + w \times y, \quad \forall w, x, y \in \mathbb{E}^3, \ \forall \alpha \in \mathbb{R}. \quad (1.64) \]
Thus, it can be described by means of a second-order tensor as
\[ w \times x = \mathbf{W} x, \quad \mathbf{W} \in \mathrm{Lin}^3, \quad \forall x \in \mathbb{E}^3. \quad (1.65) \]
The tensor which forms the vector product by a vector \(w\) according to (1.65) will be denoted in the following by \(\hat{w}\). Thus, we write
\[ w \times x = \hat{w} x. \quad (1.66) \]
Example 1.3 Rotation tensor. Consider the rotation of a vector \(a\) in \(\mathbb{E}^3\) about an axis by an angle \(\omega\) (Fig. 1.2). The rotated vector can be defined by a mapping \(a \to r(a)\), which is linear, so that
\[ r(\alpha a) = \alpha \, r(a), \quad r(a + b) = r(a) + r(b), \quad \forall \alpha \in \mathbb{R}, \ \forall a, b \in \mathbb{E}^3. \]
Hence, this rotation can again be described by a second-order tensor \(\mathbf{R} \in \mathrm{Lin}^3\) such that \(r(a) = \mathbf{R} a, \ \forall a \in \mathbb{E}^3\).
This tensor \(\mathbf{R}\) is referred to as rotation tensor.
In order to construct the rotation tensor which rotates an arbitrary vector \(a \in \mathbb{E}^3\) about an axis specified by a unit vector \(e \in \mathbb{E}^3\) (Fig. 1.2) by the angle \(\omega\), we decompose \(a\) into the parts along and perpendicular to the rotation axis:
\[ a = (a \cdot e) e + x, \quad \text{where } x = a - (a \cdot e) e. \]
With \(y\) denoting the vector obtained from \(x\) by a rotation of \(90^\circ\) about \(e\), the rotated vector reads
\[ r(a) = (a \cdot e) e + x \cos\omega + y \sin\omega. \quad (1.70) \]
Since
\[ y = e \times a = \hat{e} a \quad (1.71) \]
and \((a \cdot e) e = (e \otimes e) a\), where "\(\otimes\)" denotes the so-called tensor product (1.80) (see Sect. 1.7), we obtain
\[ r(a) = \cos\omega \, a + \sin\omega \, \hat{e} a + (1 - \cos\omega) (e \otimes e) \, a. \quad (1.72) \]
Fig 1.2 Finite rotation of a vector in E 3
Thus the rotation tensor can be given by
\[ \mathbf{R} = \cos\omega \, \mathbf{I} + \sin\omega \, \hat{e} + (1 - \cos\omega) \, e \otimes e, \quad (1.73) \]
where \(\mathbf{I}\) denotes the so-called identity tensor (1.89) (see Sect. 1.7).
Another useful representation of the rotation tensor results from the relation \(x = y \times e = -e \times y\). Indeed, rewriting (1.70) as
\[ r(a) = a + x (\cos\omega - 1) + y \sin\omega \quad (1.74) \]
and keeping (1.71) in mind, we find \(x = -e \times (e \times a) = -\hat{e}^2 a\), so that
\[ r(a) = a + \sin\omega \, \hat{e} a + (1 - \cos\omega) \, \hat{e}^2 a. \]
This delivers the expression for the rotation tensor
\[ \mathbf{R} = \mathbf{I} + \sin\omega \, \hat{e} + (1 - \cos\omega) \, \hat{e}^2, \quad (1.76) \]
known as the Euler-Rodrigues formula (see, e.g., [9]).
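The Euler-Rodrigues formula (1.76) is easy to verify numerically: build \(\hat{e}\) as the skew matrix of \(e\) and check that \(\mathbf{R}\) is orthogonal and leaves the axis fixed. A minimal NumPy sketch (function names and test values are illustrative, not from the text):

```python
import numpy as np

def hat(w):
    """Skew matrix of w, so that hat(w) @ x == np.cross(w, x), cf. (1.66)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def rotation_tensor(e, omega):
    """Euler-Rodrigues formula R = I + sin(w) e^ + (1 - cos(w)) e^2  (1.76)."""
    E = hat(e)
    return np.eye(3) + np.sin(omega) * E + (1 - np.cos(omega)) * (E @ E)

e = np.array([0.0, 0.0, 1.0])
R = rotation_tensor(e, np.pi / 3)
print(np.allclose(R @ R.T, np.eye(3)), np.allclose(R @ e, e))  # True True
```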
Example 1.4 The Cauchy stress tensor as a linear mapping of the unit surface normal into the Cauchy stress vector. Let us consider a body \(\mathcal{B}\) in its current configuration at a time \(t\). In order to define the stress in some point \(P\), we imagine a smooth surface going through \(P\) and separating \(\mathcal{B}\) into two parts. Then, one can define a force \(\Delta p\) and a couple \(\Delta m\) resulting from the forces exerted by the material on one side of the surface element \(\Delta A\) upon the material on the other side. Letting the area \(\Delta A\) tend to zero such that \(P\) remains an inner point, one obtains, in accordance with a fundamental postulate of continuum mechanics, the Cauchy stress vector
\[ t = \lim_{\Delta A \to 0} \frac{\Delta p}{\Delta A}, \]
while the couple per unit area vanishes in this limit. According to Cauchy's postulate, the vector \(t\) depends on the surface only through the outward unit normal \(n\); in other words, the Cauchy stress vector is the same for all surfaces through \(P\) with the same normal \(n\). Further, Cauchy's theorem states that the mapping \(n \to t\) is linear, provided \(t\) is a continuous function of the position vector \(x\) at \(P\). Hence, this mapping can be described by a second-order tensor \(\boldsymbol{\sigma}\), called the Cauchy stress tensor, such that
\[ t = \boldsymbol{\sigma} n. \quad (1.77) \]
On the basis of the "right" mapping (1.48) we can also define the "left" one by the following condition:
\[ (y \mathbf{A}) \cdot x = y \cdot (\mathbf{A} x), \quad \forall x \in \mathbb{E}^n. \quad (1.78) \]
First, it should be shown that for every \(y \in \mathbb{E}^n\) there exists a unique vector \(y \mathbf{A} \in \mathbb{E}^n\) satisfying this condition for all \(x \in \mathbb{E}^n\). Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) and \(\mathcal{G}' = \{g^1, g^2, \ldots, g^n\}\) be dual bases in \(\mathbb{E}^n\). Then, two arbitrary vectors \(x, y \in \mathbb{E}^n\) can be represented by \(x = x_i g^i\) and \(y = y_i g^i\). Now, consider the vector
\[ a = y_i \left( g^i \cdot \mathbf{A} g^j \right) g_j. \]
It holds \(a \cdot x = y_i \left( g^i \cdot \mathbf{A} g^j \right) x_j\). On the other hand, we obtain the same result also by
\[ y \cdot (\mathbf{A} x) = y \cdot \left( x_j \mathbf{A} g^j \right) = y_i \left( g^i \cdot \mathbf{A} g^j \right) x_j, \]
so that the vector \(a\) satisfies condition (1.78) and we may set \(y \mathbf{A} = a\). Further, we show that the vector \(y \mathbf{A}\) satisfying condition (1.78) for all \(x \in \mathbb{E}^n\) is unique. Conversely, let \(a, b \in \mathbb{E}^n\) be two such vectors. Then, \(a \cdot x = b \cdot x\) and consequently \((a - b) \cdot x = 0\) hold for all \(x \in \mathbb{E}^n\). Setting \(x = a - b\) yields \((a - b) \cdot (a - b) = 0\), which by axiom (C.4) implies \(a = b\).
Since the order of mappings in (1.78) is irrelevant, we can write them without brackets and dots as follows:
\[ y \cdot (\mathbf{A} x) = (y \mathbf{A}) \cdot x = y \mathbf{A} x. \quad (1.79) \]
1.7 Tensor Product, Representation of a Tensor with Respect to a Basis
The tensor product plays an important role since it enables one to construct a second-order tensor from two vectors. In order to define the tensor product we consider two vectors \(a, b \in \mathbb{E}^n\). An arbitrary vector \(x \in \mathbb{E}^n\) can be mapped into another vector by means of the tensor denoted by "\(a \otimes b\)":
\[ (a \otimes b) \, x = a \, (b \cdot x), \quad \forall x \in \mathbb{E}^n. \quad (1.80) \]
It can be shown that the mapping (1.80) fulfills the conditions (1.49)–(1.51) and for this reason is linear. Indeed, by virtue of (B.1), (B.4), (C.2) and (C.3) we can write
\[ (a \otimes b)(x + y) = a \left[ b \cdot (x + y) \right] = a \left( b \cdot x + b \cdot y \right) = (a \otimes b) \, x + (a \otimes b) \, y, \quad (1.81) \]
\[ (a \otimes b)(\alpha x) = a \left[ b \cdot (\alpha x) \right] = \alpha (b \cdot x) \, a = \alpha (a \otimes b) \, x. \quad (1.82) \]
Thus, the tensor product of two vectors represents a second-order tensor. Further, it holds
\[ c \otimes (a + b) = c \otimes a + c \otimes b, \quad (a + b) \otimes c = a \otimes c + b \otimes c. \quad (1.83) \]
Indeed, mapping an arbitrary vector \(x \in \mathbb{E}^n\) by both sides of these relations and using (1.52) and (1.80) we obtain
\[ \left[ c \otimes (a + b) \right] x = c \left( a \cdot x + b \cdot x \right) = c (a \cdot x) + c (b \cdot x) = (c \otimes a) \, x + (c \otimes b) \, x = (c \otimes a + c \otimes b) \, x, \]
\[ \left[ (a + b) \otimes c \right] x = (a + b)(c \cdot x) = a (c \cdot x) + b (c \cdot x) = (a \otimes c) \, x + (b \otimes c) \, x = (a \otimes c + b \otimes c) \, x. \]
For the "left" mapping by the tensor \(a \otimes b\) we obtain from (1.78) (see Exercise 1.21)
\[ y (a \otimes b) = (y \cdot a) \, b, \quad \forall y \in \mathbb{E}^n. \quad (1.85) \]
It has already been shown that the collection of all second-order tensors \(\mathrm{Lin}^n\) forms a vector space. In the following we demonstrate that a basis of \(\mathrm{Lin}^n\) can be constructed with the aid of the tensor product (1.80).
Theorem 1.7 Let \(\mathcal{F} = \{f_1, f_2, \ldots, f_n\}\) and \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) be two arbitrary bases of \(\mathbb{E}^n\). Then, the tensors \(f_i \otimes g_j \ (i, j = 1, 2, \ldots, n)\) represent a basis of \(\mathrm{Lin}^n\). The dimension of the vector space \(\mathrm{Lin}^n\) is thus \(n^2\).
Proof First, we prove that every tensor in \(\mathrm{Lin}^n\) represents a linear combination of the tensors \(f_i \otimes g_j \ (i, j = 1, 2, \ldots, n)\). Indeed, let \(\mathbf{A} \in \mathrm{Lin}^n\) be an arbitrary second-order tensor. Consider the following linear combination:
\[ \mathbf{A}' = \left( f^i \cdot \mathbf{A} g^j \right) f_i \otimes g_j, \]
where the vectors \(f^i\) and \(g^i \ (i = 1, 2, \ldots, n)\) form the bases dual to \(\mathcal{F}\) and \(\mathcal{G}\), respectively. The tensors \(\mathbf{A}\) and \(\mathbf{A}'\) coincide if and only if
\[ \mathbf{A}' x = \mathbf{A} x, \quad \forall x \in \mathbb{E}^n. \quad (1.86) \]
Let \(x = x_j g^j\). Then
\[ \mathbf{A}' x = \left( f^i \cdot \mathbf{A} g^j \right) f_i \left( g_j \cdot x_k g^k \right) = \left( f^i \cdot \mathbf{A} g^j \right) f_i \, x_k \delta_j^k = x_j \left( f^i \cdot \mathbf{A} g^j \right) f_i. \]
On the other hand, \(\mathbf{A} x = x_j \mathbf{A} g^j\). By virtue of (1.27) and (1.28) we can represent the vectors \(\mathbf{A} g^j \ (j = 1, 2, \ldots, n)\) with respect to the basis \(\mathcal{F}\) by \(\mathbf{A} g^j = \left( f^i \cdot \mathbf{A} g^j \right) f_i \ (j = 1, 2, \ldots, n)\). Hence,
\[ \mathbf{A} x = x_j \left( f^i \cdot \mathbf{A} g^j \right) f_i, \]
so that the condition (1.86) is fulfilled for all \(x \in \mathbb{E}^n\). Finally, we show that the tensors \(f_i \otimes g_j \ (i, j = 1, 2, \ldots, n)\) are linearly independent. Otherwise, there would exist scalars \(\alpha^{ij} \ (i, j = 1, 2, \ldots, n)\), not all zero, such that
\[ \alpha^{ij} f_i \otimes g_j = \mathbf{0}. \]
The right mapping of \(g^k \ (k = 1, 2, \ldots, n)\) by this tensor equality then yields \(\alpha^{ik} f_i = \mathbf{0} \ (k = 1, 2, \ldots, n)\). This contradicts, however, the fact that the vectors \(f_k \ (k = 1, 2, \ldots, n)\) form a basis and are therefore linearly independent.
For the representation of second-order tensors we will in the following use primarily the bases \(g_i \otimes g_j\), \(g^i \otimes g^j\), \(g^i \otimes g_j\) or \(g_i \otimes g^j \ (i, j = 1, 2, \ldots, n)\). With respect to these bases a tensor \(\mathbf{A} \in \mathrm{Lin}^n\) is written as
\[ \mathbf{A} = \mathrm{A}^{ij} \, g_i \otimes g_j = \mathrm{A}_{ij} \, g^i \otimes g^j = \mathrm{A}^i_{\cdot j} \, g_i \otimes g^j = \mathrm{A}_i^{\cdot j} \, g^i \otimes g_j \quad (1.87) \]
with the components (see Exercise 1.22)
\[ \mathrm{A}^{ij} = g^i \cdot \mathbf{A} g^j, \quad \mathrm{A}_{ij} = g_i \cdot \mathbf{A} g_j, \quad \mathrm{A}^i_{\cdot j} = g^i \cdot \mathbf{A} g_j, \quad \mathrm{A}_i^{\cdot j} = g_i \cdot \mathbf{A} g^j, \quad i, j = 1, 2, \ldots, n. \quad (1.88) \]
Note that the subscript dot indicates the position of the above index. For example, for the components \(\mathrm{A}^i_{\cdot j}\), \(i\) is the first index, while for the components \(\mathrm{A}_j^{\cdot i}\), \(i\) is the second index.
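The component formulas (1.88) can be checked numerically: given a basis (as rows of a matrix), the components of a linear map follow from scalar products with the dual basis. A short NumPy sketch (basis and map are made-up examples):

```python
import numpy as np

g = np.array([[1.0, 0, 0], [1.0, 1, 0], [1.0, 1, 1]])  # rows: g_1, g_2, g_3
g_dual = np.linalg.inv(g @ g.T) @ g                     # rows: g^1, g^2, g^3
A = np.array([[2.0, 1, 0], [0, 3, 1], [1, 0, 1]])       # the mapping x -> A x

A_upup  = g_dual @ A @ g_dual.T   # A^ij   = g^i . A g^j   (1.88)
A_dndn  = g @ A @ g.T             # A_ij   = g_i . A g_j
A_mixed = g_dual @ A @ g.T        # A^i_.j = g^i . A g_j

# reassembling A from (1.87): A = A^i_.j  g_i (x) g^j
A_back = sum(A_mixed[i, j] * np.outer(g[i], g_dual[j])
             for i in range(3) for j in range(3))
print(np.allclose(A_back, A))  # True
```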
Of special importance is the so-called identity tensor \(\mathbf{I}\). It is defined by
\[ \mathbf{I} x = x, \quad \forall x \in \mathbb{E}^n. \quad (1.89) \]
With the aid of (1.25), (1.87) and (1.88) the components of the identity tensor can be expressed by
\[ \mathrm{I}^{ij} = g^i \cdot \mathbf{I} g^j = g^i \cdot g^j = g^{ij}, \quad \mathrm{I}_{ij} = g_i \cdot \mathbf{I} g_j = g_i \cdot g_j = g_{ij}, \]
\[ \mathrm{I}^i_{\cdot j} = \mathrm{I}_j^{\cdot i} = g^i \cdot \mathbf{I} g_j = g_j \cdot \mathbf{I} g^i = g^i \cdot g_j = \delta^i_j, \quad (1.90) \]
where \(i, j = 1, 2, \ldots, n\). Thus,
\[ \mathbf{I} = g_{ij} \, g^i \otimes g^j = g^{ij} \, g_i \otimes g_j = g^i \otimes g_i = g_i \otimes g^i. \quad (1.91) \]
It is seen that the components (1.90)₁,₂ of the identity tensor are given by relation (1.25). They characterize the metric properties of the Euclidean space and are referred to as metric coefficients. For this reason, the identity tensor is frequently called metric tensor. With respect to an orthonormal basis relation (1.91) reduces to
\[ \mathbf{I} = \sum_{i=1}^{n} e_i \otimes e_i. \]
1.8 Change of the Basis, Transformation Rules
Now, we are going to clarify how the vector and tensor components transform with a change of the basis. Let \(x\) be a vector and \(\mathbf{A}\) a second-order tensor. According to (1.27) and (1.87),
\[ x = x^i g_i = x_i g^i, \quad (1.93) \]
\[ \mathbf{A} = \mathrm{A}^{ij} \, g_i \otimes g_j = \mathrm{A}_{ij} \, g^i \otimes g^j = \mathrm{A}^i_{\cdot j} \, g_i \otimes g^j = \mathrm{A}_i^{\cdot j} \, g^i \otimes g_j. \quad (1.94) \]
With the aid of (1.21) and (1.28) we can write
\[ x^i = x \cdot g^i = x \cdot \left( g^{ij} g_j \right) = x_j g^{ji}, \quad x_i = x \cdot g_i = x \cdot \left( g_{ij} g^j \right) = x^j g_{ji}, \quad (1.95) \]
where \(i = 1, 2, \ldots, n\). Similarly we obtain by virtue of (1.88)
\[ \mathrm{A}^{ij} = g^i \cdot \mathbf{A} g^j = g^i \cdot \mathbf{A} \left( g^{jk} g_k \right) = \mathrm{A}^i_{\cdot k} \, g^{kj} = g^{il} \mathrm{A}_{lk} \, g^{kj}, \quad (1.96) \]
\[ \mathrm{A}_{ij} = g_i \cdot \mathbf{A} g_j = g_i \cdot \mathbf{A} \left( g_{jk} g^k \right) = \mathrm{A}_i^{\cdot k} \, g_{kj} = g_{il} \mathrm{A}^{lk} \, g_{kj}, \quad (1.97) \]
where \(i, j = 1, 2, \ldots, n\).
The transformation rules (1.95)–(1.97) hold not only for dual bases but also for arbitrary bases in \(\mathbb{E}^n\). Indeed, let \(g_i\) and \(\bar{g}_i \ (i = 1, 2, \ldots, n)\) be two such bases, so that
\[ x = x^i g_i = \bar{x}^i \bar{g}_i, \quad (1.98) \]
\[ \mathbf{A} = \mathrm{A}^{ij} \, g_i \otimes g_j = \bar{\mathrm{A}}^{ij} \, \bar{g}_i \otimes \bar{g}_j. \quad (1.99) \]
By means of the relations
\[ g_i = a_i^j \bar{g}_j, \quad i = 1, 2, \ldots, n, \quad (1.100) \]
one thus obtains
\[ x = x^i g_i = x^i a_i^j \bar{g}_j \quad \Longrightarrow \quad \bar{x}^j = x^i a_i^j, \quad j = 1, 2, \ldots, n, \quad (1.101) \]
\[ \mathbf{A} = \mathrm{A}^{ij} \, g_i \otimes g_j = \mathrm{A}^{ij} \left( a_i^k \bar{g}_k \right) \otimes \left( a_j^l \bar{g}_l \right) = \mathrm{A}^{ij} a_i^k a_j^l \, \bar{g}_k \otimes \bar{g}_l \quad \Longrightarrow \quad \bar{\mathrm{A}}^{kl} = \mathrm{A}^{ij} a_i^k a_j^l, \quad k, l = 1, 2, \ldots, n. \quad (1.102) \]
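The rules (1.100)–(1.102) amount to a matrix congruence with the coefficient matrix \(\left[ a_i^j \right]\). A compact NumPy check (all matrices invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
g_bar = rng.random((3, 3))              # rows: new basis vectors
a = rng.random((3, 3))                  # coefficients in g_i = a_i^j gbar_j (1.100)
g = a @ g_bar                           # rows: old basis vectors

A_comp = rng.random((3, 3))             # contravariant components A^ij w.r.t. g
x_comp = rng.random(3)                  # components x^i w.r.t. g

x_bar = a.T @ x_comp                    # xbar^j = x^i a_i^j            (1.101)
A_bar = a.T @ A_comp @ a                # Abar^kl = A^ij a_i^k a_j^l    (1.102)

# the vector and the tensor themselves are unchanged:
print(np.allclose(x_comp @ g, x_bar @ g_bar))                       # x = x^i g_i
print(np.allclose(np.einsum('ij,ik,jl->kl', A_comp, g, g),
                  np.einsum('ij,ik,jl->kl', A_bar, g_bar, g_bar)))  # A = A^ij g_i (x) g_j
```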
1.9 Special Operations with Second-Order Tensors
In Sect. 1.6 we have seen that the set \(\mathrm{Lin}^n\) represents a finite-dimensional vector space. Its elements are second-order tensors that can be treated as vectors in \(\mathbb{E}^{n^2}\) with all the operations specific for vectors, such as summation, multiplication by a scalar or a scalar product (the latter one will be defined for second-order tensors in Sect. 1.10). However, in contrast to conventional vectors in the Euclidean space, for second-order tensors one can additionally define some special operations such as composition, transposition or inversion.
Composition (simple contraction). Let \(\mathbf{A}, \mathbf{B} \in \mathrm{Lin}^n\) be two second-order tensors. The tensor \(\mathbf{C} = \mathbf{A}\mathbf{B}\) is called composition of \(\mathbf{A}\) and \(\mathbf{B}\) if
\[ \mathbf{C} x = \mathbf{A} (\mathbf{B} x), \quad \forall x \in \mathbb{E}^n. \quad (1.103) \]
For the left mapping (1.78) one can write
\[ y (\mathbf{A}\mathbf{B}) = (y \mathbf{A}) \mathbf{B}, \quad \forall y \in \mathbb{E}^n. \quad (1.104) \]
In order to prove the last relation we use again (1.78) and (1.103):
\[ \left[ y (\mathbf{A}\mathbf{B}) \right] \cdot x = y \cdot \left[ (\mathbf{A}\mathbf{B}) x \right] = y \cdot \left[ \mathbf{A} (\mathbf{B} x) \right] = (y \mathbf{A}) \cdot (\mathbf{B} x) = \left[ (y \mathbf{A}) \mathbf{B} \right] \cdot x, \quad \forall x \in \mathbb{E}^n. \]
The composition of tensors (1.103) is generally not commutative, so that \(\mathbf{A}\mathbf{B} \ne \mathbf{B}\mathbf{A}\). Two tensors \(\mathbf{A}\) and \(\mathbf{B}\) are called commutative if, on the contrary, \(\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}\). Besides, the composition of tensors is characterized by the following properties (see Exercise 1.26):
\[ \mathbf{A} (\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B}) \mathbf{C}, \quad \mathbf{A} \mathbf{0} = \mathbf{0} \mathbf{A} = \mathbf{0}, \quad \mathbf{A} \mathbf{I} = \mathbf{I} \mathbf{A} = \mathbf{A}, \quad (1.105) \]
\[ \mathbf{A} (\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}, \quad (\mathbf{B} + \mathbf{C}) \mathbf{A} = \mathbf{B}\mathbf{A} + \mathbf{C}\mathbf{A}. \quad (1.106) \]
For example, the distributive rule (1.106)₁ can be proved as follows:
\[ \left[ \mathbf{A} (\mathbf{B} + \mathbf{C}) \right] x = \mathbf{A} \left[ (\mathbf{B} + \mathbf{C}) x \right] = \mathbf{A} (\mathbf{B} x + \mathbf{C} x) = \mathbf{A} (\mathbf{B} x) + \mathbf{A} (\mathbf{C} x) = (\mathbf{A}\mathbf{B}) x + (\mathbf{A}\mathbf{C}) x = (\mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}) x, \quad \forall x \in \mathbb{E}^n. \]
For the tensor product (1.80) the composition (1.103) yields
\[ (a \otimes b)(c \otimes d) = (b \cdot c) \, a \otimes d, \quad a, b, c, d \in \mathbb{E}^n. \quad (1.108) \]
Indeed, by virtue of (1.80), (1.82) and (1.103),
\[ (a \otimes b)(c \otimes d) \, x = (a \otimes b) \left[ c \, (d \cdot x) \right] = (d \cdot x)(b \cdot c) \, a = (b \cdot c)(a \otimes d) \, x, \quad \forall x \in \mathbb{E}^n. \]
Thus, we can write
\[ \mathbf{A}\mathbf{B} = \mathrm{A}^i_{\cdot k} \mathrm{B}^k_{\cdot j} \, g_i \otimes g^j = \mathrm{A}^{ik} \mathrm{B}_{kj} \, g_i \otimes g^j, \quad (1.109) \]
where \(\mathbf{A}\) and \(\mathbf{B}\) are given in the form (1.87).
Powers, polynomials and functions of second-order tensors. On the basis of the composition (1.103) one defines by
\[ \mathbf{A}^m = \underbrace{\mathbf{A}\mathbf{A}\cdots\mathbf{A}}_{m \text{ times}}, \quad m = 1, 2, 3, \ldots, \qquad \mathbf{A}^0 = \mathbf{I}, \quad (1.110) \]
powers (monomials) of second-order tensors characterized by the following evident properties:
\[ \mathbf{A}^k \mathbf{A}^l = \mathbf{A}^{k+l}, \quad \left( \mathbf{A}^k \right)^l = \mathbf{A}^{kl}, \]
\[ (\alpha \mathbf{A})^k = \alpha^k \mathbf{A}^k, \quad k, l = 0, 1, 2, \ldots \quad (1.112) \]
With the aid of the tensor powers a polynomial of \(\mathbf{A}\) can be defined by
\[ g(\mathbf{A}) = a_0 \mathbf{I} + a_1 \mathbf{A} + a_2 \mathbf{A}^2 + \ldots + a_m \mathbf{A}^m = \sum_{k=0}^{m} a_k \mathbf{A}^k. \quad (1.113) \]
\(g(\mathbf{A}): \mathrm{Lin}^n \mapsto \mathrm{Lin}^n\) represents a tensor function mapping one second-order tensor into another one within \(\mathrm{Lin}^n\). By this means one can define various tensor functions. Of special interest is the exponential one,
\[ \exp(\mathbf{A}) = \sum_{k=0}^{\infty} \frac{\mathbf{A}^k}{k!}, \quad (1.114) \]
given by the infinite power series.
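The series (1.114) can be evaluated numerically by accumulating terms until they become negligible. A minimal sketch in NumPy (the truncation tolerance is an arbitrary choice for illustration):

```python
import numpy as np

def tensor_exp(A, tol=1e-15, max_terms=100):
    """exp(A) by the power series (1.114), truncated when terms are tiny."""
    term = np.eye(A.shape[0])   # k = 0 term: A^0 / 0! = I
    result = term.copy()
    for k in range(1, max_terms):
        term = term @ A / k     # A^k / k! from the previous term
        result += term
        if np.linalg.norm(term) < tol:
            break
    return result

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # skew-symmetric: exp gives a rotation
print(np.round(tensor_exp(A * np.pi / 2), 6))  # ~ [[0, -1], [1, 0]]
```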
Transposition. The transposed tensor \(\mathbf{A}^{\mathrm{T}}\) is defined by
\[ \mathbf{A}^{\mathrm{T}} x = x \mathbf{A}, \quad \forall x \in \mathbb{E}^n, \quad (1.115) \]
so that one can also write
\[ x \cdot (\mathbf{A} y) = y \cdot \left( \mathbf{A}^{\mathrm{T}} x \right), \quad x \mathbf{A} y = y \mathbf{A}^{\mathrm{T}} x, \quad \forall x, y \in \mathbb{E}^n. \quad (1.116) \]
Indeed,
\[ x \cdot (\mathbf{A} y) = (x \mathbf{A}) \cdot y = y \cdot \left( \mathbf{A}^{\mathrm{T}} x \right). \]
Transposition represents a linear operation over a second-order tensor, since
\[ (\alpha \mathbf{A} + \beta \mathbf{B})^{\mathrm{T}} = \alpha \mathbf{A}^{\mathrm{T}} + \beta \mathbf{B}^{\mathrm{T}}, \quad \forall \mathbf{A}, \mathbf{B} \in \mathrm{Lin}^n, \ \forall \alpha, \beta \in \mathbb{R}. \]
The composition of second-order tensors is transposed by
\[ (\mathbf{A}\mathbf{B})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}} \mathbf{A}^{\mathrm{T}}. \quad (1.120) \]
Indeed, in view of (1.104) and (1.115),
\[ (\mathbf{A}\mathbf{B})^{\mathrm{T}} x = x (\mathbf{A}\mathbf{B}) = (x \mathbf{A}) \mathbf{B} = \mathbf{B}^{\mathrm{T}} (x \mathbf{A}) = \mathbf{B}^{\mathrm{T}} \mathbf{A}^{\mathrm{T}} x, \quad \forall x \in \mathbb{E}^n. \]
For the tensor product of two vectors \(a, b \in \mathbb{E}^n\) we further obtain by use of (1.80) and (1.85)
\[ (a \otimes b)^{\mathrm{T}} = b \otimes a. \quad (1.121) \]
This ensures the existence and uniqueness of the transposed tensor. Indeed, every tensor \(\mathbf{A}\) in \(\mathrm{Lin}^n\) can be represented with respect to the tensor product of the basis vectors in \(\mathbb{E}^n\) in the form (1.87). Hence, considering (1.121) we have
\[ \mathbf{A}^{\mathrm{T}} = \mathrm{A}^{ij} \, g_j \otimes g_i = \mathrm{A}_{ij} \, g^j \otimes g^i = \mathrm{A}^i_{\cdot j} \, g^j \otimes g_i = \mathrm{A}_i^{\cdot j} \, g_j \otimes g^i, \quad (1.122) \]
or
\[ \mathbf{A}^{\mathrm{T}} = \mathrm{A}^{ji} \, g_i \otimes g_j = \mathrm{A}_{ji} \, g^i \otimes g^j = \mathrm{A}_j^{\cdot i} \, g_i \otimes g^j = \mathrm{A}^j_{\cdot i} \, g^i \otimes g_j. \quad (1.123) \]
Comparing the latter result with the original representation (1.87) one observes that the components of the transposed tensor can be expressed by
\[ \left( \mathrm{A}^{\mathrm{T}} \right)^{ij} = \mathrm{A}^{ji}, \quad \left( \mathrm{A}^{\mathrm{T}} \right)_{ij} = \mathrm{A}_{ji}, \quad (1.124) \]
\[ \left( \mathrm{A}^{\mathrm{T}} \right)^i_{\cdot j} = \mathrm{A}_j^{\cdot i} = g_{jk} \mathrm{A}^k_{\cdot l} \, g^{li}, \quad \left( \mathrm{A}^{\mathrm{T}} \right)_i^{\cdot j} = \mathrm{A}^j_{\cdot i} = g^{jk} \mathrm{A}_k^{\cdot l} \, g_{li}. \quad (1.125) \]
For example, the first relation (1.125) results from (1.88) and (1.116) within the following steps:
\[ \left( \mathrm{A}^{\mathrm{T}} \right)^i_{\cdot j} = g^i \cdot \mathbf{A}^{\mathrm{T}} g_j = g_j \cdot \mathbf{A} g^i = g_j \cdot \left( \mathrm{A}^k_{\cdot l} \, g_k \otimes g^l \right) g^i = g_{jk} \mathrm{A}^k_{\cdot l} \, g^{li}. \]
According to (1.124) the homogeneous (covariant or contravariant) components of the transposed tensor can simply be obtained by reflecting the matrix of the original components from the main diagonal. It does not, however, hold for the mixed components (1.125).
The transposition operation (1.115) gives rise to the definition of symmetric \(\mathbf{M}^{\mathrm{T}} = \mathbf{M}\) and skew-symmetric second-order tensors \(\mathbf{W}^{\mathrm{T}} = -\mathbf{W}\).
Obviously, the identity tensor is symmetric:
\[ \mathbf{I}^{\mathrm{T}} = \mathbf{I}. \quad (1.126) \]
One can easily show that the tensor \(\hat{w}\) (1.66) is skew-symmetric, so that
\[ \hat{w}^{\mathrm{T}} = -\hat{w}. \quad (1.127) \]
Indeed, by virtue of (1.32) and (1.116) one can write
\[ x \cdot \hat{w}^{\mathrm{T}} y = y \cdot \hat{w} x = y \cdot (w \times x) = [y \, w \, x] = -[x \, w \, y] = -x \cdot (w \times y) = x \cdot \left( -\hat{w} \right) y, \quad \forall x, y \in \mathbb{E}^3. \]
Inversion. Let
\[ y = \mathbf{A} x. \quad (1.128) \]
A tensor \(\mathbf{A} \in \mathrm{Lin}^n\) is referred to as invertible if there exists a tensor \(\mathbf{A}^{-1} \in \mathrm{Lin}^n\) satisfying the condition
\[ x = \mathbf{A}^{-1} y, \quad \forall x \in \mathbb{E}^n. \quad (1.129) \]
The tensor \(\mathbf{A}^{-1}\) is called inverse of \(\mathbf{A}\). The set of all invertible tensors \(\mathrm{Inv}^n = \left\{ \mathbf{A} \in \mathrm{Lin}^n : \exists \mathbf{A}^{-1} \right\}\) forms a subset of all second-order tensors \(\mathrm{Lin}^n\). Inserting (1.128) into (1.129) yields
\[ x = \mathbf{A}^{-1} y = \mathbf{A}^{-1} (\mathbf{A} x) = \left( \mathbf{A}^{-1} \mathbf{A} \right) x, \quad \forall x \in \mathbb{E}^n, \]
and consequently
\[ \mathbf{A}^{-1} \mathbf{A} = \mathbf{I}. \quad (1.130) \]
Theorem 1.8 A tensor \(\mathbf{A}\) is invertible if and only if \(\mathbf{A} x = \mathbf{0}\) implies that \(x = \mathbf{0}\).
Proof First we prove the sufficiency. To this end, we map the vector equation \(\mathbf{A} x = \mathbf{0}\) by \(\mathbf{A}^{-1}\). According to (1.130) it delivers \(x = \mathbf{0}\). To prove the necessity we consider a basis \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) in \(\mathbb{E}^n\). It can be shown that the vectors \(h_i = \mathbf{A} g_i \ (i = 1, 2, \ldots, n)\) also form a basis of \(\mathbb{E}^n\). Conversely, if these vectors were linearly dependent, there would exist scalars \(a^i \ (i = 1, 2, \ldots, n)\), not all zero, such that \(a^i h_i = \mathbf{0}\). This would imply \(\mathbf{0} = a^i h_i = a^i \mathbf{A} g_i = \mathbf{A} \left( a^i g_i \right) = \mathbf{A} a\), where \(a = a^i g_i \ne \mathbf{0}\), which contradicts the assumption of the theorem. Now, consider the tensor \(\mathbf{A}' = g_i \otimes h^i\), where the vectors \(h^i\) are dual to \(h_i \ (i = 1, 2, \ldots, n)\). One can show that this tensor is inverse to \(\mathbf{A}\), such that \(\mathbf{A}' = \mathbf{A}^{-1}\). Indeed, let \(x = x^i g_i\) be an arbitrary vector in \(\mathbb{E}^n\). Then, \(y = \mathbf{A} x = x^i \mathbf{A} g_i = x^i h_i\) and therefore
\[ \mathbf{A}' y = \left( g_i \otimes h^i \right) \left( x^j h_j \right) = x^j g_i \delta^i_j = x^i g_i = x. \]
Conversely, it can be shown that an invertible tensor \(\mathbf{A}\) is inverse to \(\mathbf{A}^{-1}\) and consequently
\[ \left( \mathbf{A}^{-1} \right)^{-1} = \mathbf{A}. \quad (1.131) \]
For the proof we again consider the bases \(g_i\) and \(h_i = \mathbf{A} g_i \ (i = 1, 2, \ldots, n)\). Let \(y = y^i h_i\) be an arbitrary vector in \(\mathbb{E}^n\). Let further \(x = \mathbf{A}^{-1} y = y^i g_i\) in view of (1.130). Then, \(\mathbf{A} x = y^i \mathbf{A} g_i = y^i h_i = y\), which implies that the tensor \(\mathbf{A}\) is inverse to \(\mathbf{A}^{-1}\).
Relation (1.131) implies the uniqueness of the inverse. Indeed, if \(\mathbf{A}^{-1}\) and \(\widetilde{\mathbf{A}}^{-1}\) are two distinct tensors both inverse to \(\mathbf{A}\), then there exists at least one vector \(y \in \mathbb{E}^n\) such that \(\mathbf{A}^{-1} y \ne \widetilde{\mathbf{A}}^{-1} y\). Mapping both sides of this inequality by \(\mathbf{A}\) and taking (1.131) into account, we immediately come to the contradiction \(y \ne y\), which confirms the uniqueness of the inverse.
By means of (1.120), (1.126) and (1.131) we can write (see Exercise 1.39)
\[ \left( \mathbf{A}^{-1} \right)^{\mathrm{T}} = \left( \mathbf{A}^{\mathrm{T}} \right)^{-1} = \mathbf{A}^{-\mathrm{T}}. \quad (1.132) \]
The composition of two arbitrary invertible tensors \(\mathbf{A}\) and \(\mathbf{B}\) is inverted by
\[ (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1}. \quad (1.133) \]
Indeed, let \(y = \mathbf{A}\mathbf{B} x\). Mapping both sides of this vector identity by \(\mathbf{A}^{-1}\) and then by \(\mathbf{B}^{-1}\), we obtain with the aid of (1.130)
\[ x = \mathbf{B}^{-1} \mathbf{A}^{-1} y, \quad \forall x \in \mathbb{E}^n. \]
On the basis of transposition and inversion one defines the so-called orthogonal tensors. They do not change after consecutive transposition and inversion and form the following subset of \(\mathrm{Lin}^n\):
\[ \mathrm{Orth}^n = \left\{ \mathbf{Q} \in \mathrm{Lin}^n : \mathbf{Q} = \mathbf{Q}^{-\mathrm{T}} \right\}. \quad (1.134) \]
For orthogonal tensors we can write in view of (1.130) and (1.131)
\[ \mathbf{Q}\mathbf{Q}^{\mathrm{T}} = \mathbf{Q}^{\mathrm{T}}\mathbf{Q} = \mathbf{I}, \quad \forall \mathbf{Q} \in \mathrm{Orth}^n. \quad (1.135) \]
In order to demonstrate that the rotation tensor (1.73) is orthogonal, we complete the unit vector \(e\) defining the rotation axis to an orthonormal basis \(\{p, q, e\}\), chosen such that, by the vector identity (1.169) (Exercise 1.15),
\[ \hat{e} = p \otimes q - q \otimes p, \quad p \otimes p + q \otimes q + e \otimes e = \mathbf{I}. \]
The rotation tensor (1.73) takes thus the form
\[ \mathbf{R} = \cos\omega \, \mathbf{I} + \sin\omega \, (p \otimes q - q \otimes p) + (1 - \cos\omega) \, e \otimes e. \quad (1.138) \]
Hence,
\[ \mathbf{R}\mathbf{R}^{\mathrm{T}} = \left[ \cos\omega \, \mathbf{I} + \sin\omega \, (p \otimes q - q \otimes p) + (1 - \cos\omega) \, e \otimes e \right] \left[ \cos\omega \, \mathbf{I} - \sin\omega \, (p \otimes q - q \otimes p) + (1 - \cos\omega) \, e \otimes e \right] \]
\[ = \cos^2\omega \, \mathbf{I} + \sin^2\omega \, (e \otimes e) + \sin^2\omega \, (p \otimes p + q \otimes q) = \mathbf{I}. \]
Alternatively one can express the transposed rotation tensor (1.73) by
\[ \mathbf{R}^{\mathrm{T}} = \cos\omega \, \mathbf{I} - \sin\omega \, \hat{e} + (1 - \cos\omega) \, e \otimes e = \cos(-\omega) \, \mathbf{I} + \sin(-\omega) \, \hat{e} + \left[ 1 - \cos(-\omega) \right] e \otimes e, \quad (1.139) \]
taking (1.121), (1.126) and (1.127) into account. Thus, \(\mathbf{R}^{\mathrm{T}}\) (1.139) describes the rotation about the same axis \(e\) by the angle \(-\omega\), which likewise implies that \(\mathbf{R}^{\mathrm{T}}\mathbf{R} = \mathbf{R}\mathbf{R}^{\mathrm{T}} = \mathbf{I}\).
It is interesting that the exponential function (1.114) of a skew-symmetric tensor represents an orthogonal tensor. Indeed, a skew-symmetric tensor \(\mathbf{W}\) commutes with its transposed counterpart \(\mathbf{W}^{\mathrm{T}} = -\mathbf{W}\). Using the identity \(\exp(\mathbf{A} + \mathbf{B}) = \exp(\mathbf{A}) \exp(\mathbf{B})\) for commutative tensors (Exercise 1.29) and the relation \(\left[ \exp(\mathbf{A}) \right]^{\mathrm{T}} = \exp\left( \mathbf{A}^{\mathrm{T}} \right)\), which follows from \(\left( \mathbf{A}^{\mathrm{T}} \right)^k = \left( \mathbf{A}^k \right)^{\mathrm{T}}\) for integer \(k\) (Exercise 1.37), we can write
\[ \mathbf{I} = \exp(\mathbf{0}) = \exp\left( \mathbf{W} - \mathbf{W} \right) = \exp\left( \mathbf{W} + \mathbf{W}^{\mathrm{T}} \right) = \exp(\mathbf{W}) \exp\left( \mathbf{W}^{\mathrm{T}} \right) = \exp(\mathbf{W}) \left[ \exp(\mathbf{W}) \right]^{\mathrm{T}}, \quad (1.140) \]
where \(\mathbf{W}\) denotes an arbitrary skew-symmetric tensor.
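This is immediate to confirm numerically, e.g. with SciPy's matrix exponential (or the series routine sketched after (1.114)); the example tensor is invented:

```python
import numpy as np
from scipy.linalg import expm  # reference implementation of the matrix exponential

W = np.array([[0.0, 2.0, -1.0],
              [-2.0, 0.0, 3.0],
              [1.0, -3.0, 0.0]])       # W^T = -W

Q = expm(W)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: exp(W) is orthogonal, cf. (1.140)
```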
1.10 Scalar Product of Second-Order Tensors
Consider two second-order tensors \(a \otimes b\) and \(c \otimes d\) given in terms of the tensor product (1.80). Their scalar product can be defined in the following manner:
\[ (a \otimes b) : (c \otimes d) = (a \cdot c)(b \cdot d), \quad a, b, c, d \in \mathbb{E}^n. \quad (1.141) \]
It leads to the following identity (Exercise 1.41):
\[ (c \otimes d) : \mathbf{A} = c \cdot \mathbf{A} d = d \cdot \mathbf{A}^{\mathrm{T}} c. \quad (1.142) \]
For two arbitrary tensors \(\mathbf{A}\) and \(\mathbf{B}\) given in the form (1.87) we thus obtain
\[ \mathbf{A} : \mathbf{B} = \mathrm{A}_{ij} \mathrm{B}^{ij} = \mathrm{A}^{ij} \mathrm{B}_{ij} = \mathrm{A}^i_{\cdot j} \mathrm{B}_i^{\cdot j} = \mathrm{A}_i^{\cdot j} \mathrm{B}^i_{\cdot j}. \quad (1.143) \]
Similarly to vectors, the scalar product of tensors is a real function characterized by the following properties (see Exercise 1.42):
(D.1) \(\mathbf{A} : \mathbf{B} = \mathbf{B} : \mathbf{A}\) (commutative rule),
(D.2) \(\mathbf{A} : (\mathbf{B} + \mathbf{C}) = \mathbf{A} : \mathbf{B} + \mathbf{A} : \mathbf{C}\) (distributive rule),
(D.3) \(\alpha (\mathbf{A} : \mathbf{B}) = (\alpha \mathbf{A}) : \mathbf{B} = \mathbf{A} : (\alpha \mathbf{B})\) (associative rule for multiplication by a scalar), \(\forall \mathbf{A}, \mathbf{B} \in \mathrm{Lin}^n\), \(\forall \alpha \in \mathbb{R}\),
(D.4) \(\mathbf{A} : \mathbf{A} \ge 0 \ \forall \mathbf{A} \in \mathrm{Lin}^n\), and \(\mathbf{A} : \mathbf{A} = 0\) if and only if \(\mathbf{A} = \mathbf{0}\).
We prove for example the property (D.4). To this end, we represent an arbitrary tensor \(\mathbf{A}\) with respect to an orthonormal basis of \(\mathrm{Lin}^n\) as \(\mathbf{A} = \mathrm{A}^{ij} e_i \otimes e_j = \mathrm{A}_{ij} e^i \otimes e^j\), where \(\mathrm{A}^{ij} = \mathrm{A}_{ij} \ (i, j = 1, 2, \ldots, n)\), since \(e^i = e_i \ (i = 1, 2, \ldots, n)\) form an orthonormal basis of \(\mathbb{E}^n\) (1.8). Keeping (1.143) in mind we then obtain
\[ \mathbf{A} : \mathbf{A} = \mathrm{A}^{ij} \mathrm{A}_{ij} = \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \mathrm{A}^{ij} \right)^2 \ge 0, \]
with equality if and only if all the components \(\mathrm{A}^{ij}\) vanish, i.e. \(\mathbf{A} = \mathbf{0}\). Using this important property one can define the norm of a second-order tensor by
\[ \| \mathbf{A} \| = (\mathbf{A} : \mathbf{A})^{1/2}, \quad \mathbf{A} \in \mathrm{Lin}^n. \quad (1.144) \]
For the scalar product of tensors one of which is given by a composition we can write
\[ \mathbf{A} : (\mathbf{B}\mathbf{C}) = \left( \mathbf{B}^{\mathrm{T}} \mathbf{A} \right) : \mathbf{C} = \left( \mathbf{A} \mathbf{C}^{\mathrm{T}} \right) : \mathbf{B}. \quad (1.145) \]
We prove this identity first for the tensor products \(\mathbf{A} = a \otimes b\), \(\mathbf{B} = c \otimes d\) and \(\mathbf{C} = e \otimes f\). Using (1.108), (1.121) and (1.141) we obtain
\[ (a \otimes b) : \left[ (c \otimes d)(e \otimes f) \right] = (d \cdot e) \, (a \otimes b) : (c \otimes f) = (d \cdot e)(a \cdot c)(b \cdot f), \]
\[ \left[ (c \otimes d)^{\mathrm{T}} (a \otimes b) \right] : (e \otimes f) = (c \cdot a) \, (d \otimes b) : (e \otimes f) = (a \cdot c)(d \cdot e)(b \cdot f). \]
For three arbitrary tensors \(\mathbf{A}\), \(\mathbf{B}\) and \(\mathbf{C}\) given in the form (1.87) we can write in view of (1.109), (1.125) and (1.143)
\[ \mathbf{A} : (\mathbf{B}\mathbf{C}) = \mathrm{A}^i_{\cdot j} \left( \mathbf{B}\mathbf{C} \right)_i^{\cdot j} = \mathrm{A}^i_{\cdot j} \mathrm{B}_i^{\cdot k} \mathrm{C}_k^{\cdot j} = \left( \mathbf{B}^{\mathrm{T}} \mathbf{A} \right) : \mathbf{C}. \quad (1.146) \]
Similarly we can prove that
\[ \mathbf{A} : (\mathbf{B}\mathbf{C}) = \left( \mathbf{A} \mathbf{C}^{\mathrm{T}} \right) : \mathbf{B}. \quad (1.147) \]
On the basis of the scalar product one defines the trace of second-order tensors by
\[ \mathrm{tr}\,\mathbf{A} = \mathbf{A} : \mathbf{I}. \quad (1.148) \]
For the tensor product (1.80) the trace (1.148) yields, in view of (1.142),
\[ \mathrm{tr}\,(a \otimes b) = a \cdot b. \quad (1.149) \]
With the aid of the relation (1.145) we further write
\[ \mathrm{tr}\,(\mathbf{A}\mathbf{B}) = \mathbf{A} : \mathbf{B}^{\mathrm{T}} = \mathbf{A}^{\mathrm{T}} : \mathbf{B}. \quad (1.150) \]
In view of (D.1) this also implies that
\[ \mathrm{tr}\,(\mathbf{A}\mathbf{B}) = \mathrm{tr}\,(\mathbf{B}\mathbf{A}). \quad (1.151) \]
1.11 Decompositions of Second-Order Tensors
Additive decomposition into a symmetric and a skew-symmetric part. Every second-order tensor can be decomposed additively into a symmetric and a skew-symmetric part by
\[ \mathbf{A} = \mathrm{sym}\,\mathbf{A} + \mathrm{skew}\,\mathbf{A}, \quad (1.152) \]
where
\[ \mathrm{sym}\,\mathbf{A} = \frac{1}{2} \left( \mathbf{A} + \mathbf{A}^{\mathrm{T}} \right), \quad \mathrm{skew}\,\mathbf{A} = \frac{1}{2} \left( \mathbf{A} - \mathbf{A}^{\mathrm{T}} \right). \quad (1.153) \]
Symmetric and skew-symmetric tensors form subsets of \(\mathrm{Lin}^n\) defined respectively by
\[ \mathrm{Sym}^n = \left\{ \mathbf{M} \in \mathrm{Lin}^n : \mathbf{M} = \mathbf{M}^{\mathrm{T}} \right\}, \quad \mathrm{Skew}^n = \left\{ \mathbf{W} \in \mathrm{Lin}^n : \mathbf{W} = -\mathbf{W}^{\mathrm{T}} \right\}. \]
One can easily show that these subsets represent vector spaces and can be referred to as subspaces of \(\mathrm{Lin}^n\). Indeed, the axioms (A.1)–(A.4) and (B.1)–(B.4), including operations with the zero tensor, hold both for symmetric and skew-symmetric tensors. Note that the zero tensor is the only linear mapping that is both symmetric and skew-symmetric, so that \(\mathrm{Sym}^n \cap \mathrm{Skew}^n = \mathbf{0}\).
For every symmetric tensor \(\mathbf{M} = \mathrm{M}^{ij} g_i \otimes g_j\) it follows from (1.124) that \(\mathrm{M}^{ij} = \mathrm{M}^{ji} \ (i \ne j, \ i, j = 1, 2, \ldots, n)\). Thus, we can write
\[ \mathbf{M} = \sum_{i=1}^{n} \mathrm{M}^{ii} \, g_i \otimes g_i + \sum_{\substack{i, j = 1 \\ i > j}}^{n} \mathrm{M}^{ij} \left( g_i \otimes g_j + g_j \otimes g_i \right). \]
Similarly we can write for a skew-symmetric tensor
\[ \mathbf{W} = \sum_{\substack{i, j = 1 \\ i > j}}^{n} \mathrm{W}^{ij} \left( g_i \otimes g_j - g_j \otimes g_i \right), \quad (1.157) \]
taking into account that \(\mathrm{W}^{ii} = 0\) and \(\mathrm{W}^{ij} = -\mathrm{W}^{ji} \ (i \ne j, \ i, j = 1, 2, \ldots, n)\). Therefore, the basis of \(\mathrm{Sym}^n\) is formed by \(n\) tensors \(g_i \otimes g_i\) and \(\frac{1}{2} n (n-1)\) tensors \(g_i \otimes g_j + g_j \otimes g_i\), while the basis of \(\mathrm{Skew}^n\) consists of \(\frac{1}{2} n (n-1)\) tensors \(g_i \otimes g_j - g_j \otimes g_i\), where \(i > j\). Thus, the dimensions of \(\mathrm{Sym}^n\) and \(\mathrm{Skew}^n\) are \(\frac{1}{2} n (n+1)\) and \(\frac{1}{2} n (n-1)\), respectively. Accordingly, any basis of \(\mathrm{Skew}^n\) complements any basis of \(\mathrm{Sym}^n\) to a basis of \(\mathrm{Lin}^n\).
Taking (1.40) and (1.169) into account, a skew-symmetric tensor (1.157) can be represented in three-dimensional space by
\[ \mathbf{W} = \frac{1}{2} \mathrm{W}^{ij} \left( g_i \otimes g_j - g_j \otimes g_i \right) = \hat{w}, \]
where
\[ w = \frac{1}{2} \mathrm{W}^{ij} \, g_j \times g_i. \quad (1.159) \]
Thus, every skew-symmetric tensor in three-dimensional space describes a cross product by a vector \(w\) (1.159) called axial vector. One immediately observes that
\[ \mathbf{W} w = w \times w = \mathbf{0}. \]
Obviously, symmetric and skew-symmetric tensors are mutually orthogonal such that (see Exercise 1.46)
\[ \mathbf{M} : \mathbf{W} = 0, \quad \forall \mathbf{M} \in \mathrm{Sym}^n, \ \forall \mathbf{W} \in \mathrm{Skew}^n. \]
Spaces characterized by this property are called orthogonal.
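The decomposition (1.152)–(1.153) and the axial vector of the skew part are one-liners in components. A NumPy sketch with respect to an orthonormal basis (the example matrix is invented):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

symA  = 0.5 * (A + A.T)                    # (1.153)
skewA = 0.5 * (A - A.T)

# axial vector w of the skew part: skewA @ x == np.cross(w, x)
w = np.array([skewA[2, 1], skewA[0, 2], skewA[1, 0]])

x = np.array([1.0, -2.0, 0.5])
print(np.allclose(skewA @ x, np.cross(w, x)))      # True
print(np.isclose(np.tensordot(symA, skewA), 0.0))  # True: Sym and Skew orthogonal
```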
Additive decomposition into a spherical and a deviatoric part. For every second-order tensor \(\mathbf{A}\) we can write
\[ \mathbf{A} = \mathrm{sph}\,\mathbf{A} + \mathrm{dev}\,\mathbf{A}, \]
where
\[ \mathrm{sph}\,\mathbf{A} = \frac{1}{n} \mathrm{tr}\,(\mathbf{A}) \, \mathbf{I}, \quad \mathrm{dev}\,\mathbf{A} = \mathbf{A} - \frac{1}{n} \mathrm{tr}\,(\mathbf{A}) \, \mathbf{I} \]
denote its spherical and deviatoric part, respectively. A tensor of the form \(\alpha \mathbf{I}\) (\(\alpha \in \mathbb{R}\)) is called a spherical tensor, while a tensor \(\mathbf{D}\) with \(\mathrm{tr}\,\mathbf{D} = 0\) is called a deviator. Like symmetric and skew-symmetric tensors, spherical and deviatoric tensors form orthogonal subspaces of \(\mathrm{Lin}^n\).
1.12 Tensors of Higher Orders
Similarly to second-order tensors one can define tensors of higher orders. For example, a third-order tensor can be defined as a linear mapping from \(\mathbb{E}^n\) to \(\mathrm{Lin}^n\). Thus, we can write
\[ \mathbf{Y} = \boldsymbol{\mathcal{A}} x, \quad \mathbf{Y} \in \mathrm{Lin}^n, \quad \forall x \in \mathbb{E}^n, \quad \forall \boldsymbol{\mathcal{A}} \in \mathcal{L}\mathrm{in}^n, \]
where \(\mathcal{L}\mathrm{in}^n\) denotes the set of all linear mappings of vectors in \(\mathbb{E}^n\) into second-order tensors in \(\mathrm{Lin}^n\). Third-order tensors can likewise be represented with respect to a basis in \(\mathrm{Lin}^n\), e.g.
\[ \boldsymbol{\mathcal{A}} = \mathcal{A}^{ijk} \, g_i \otimes g_j \otimes g_k = \mathcal{A}_{ijk} \, g^i \otimes g^j \otimes g^k = \mathcal{A}^{i}_{\cdot jk} \, g_i \otimes g^j \otimes g^k = \mathcal{A}_{i \cdot k}^{\cdot j \cdot} \, g^i \otimes g_j \otimes g^k. \quad (1.165) \]
For the components of the tensor \(\boldsymbol{\mathcal{A}}\) (1.165) we can thus write, by analogy with (1.96) and (1.97),
\[ \mathcal{A}^{ijk} = \mathcal{A}^{ij}_{\cdot \cdot s} \, g^{sk} = \mathcal{A}^{i}_{\cdot st} \, g^{sj} g^{tk} = \mathcal{A}_{rst} \, g^{ri} g^{sj} g^{tk}, \]
\[ \mathcal{A}_{ijk} = \mathcal{A}^{r}_{\cdot jk} \, g_{ri} = \mathcal{A}^{rs}_{\cdot \cdot k} \, g_{ri} g_{sj} = \mathcal{A}^{rst} \, g_{ri} g_{sj} g_{tk}. \quad (1.166) \]
Exercises

1.1 Prove that if \(x \in V\) is a vector and \(\alpha \in \mathbb{R}\) is a scalar, then the following identities hold:
(a) \(-\mathbf{0} = \mathbf{0}\), (b) \(\alpha \mathbf{0} = \mathbf{0}\), (c) \(0 x = \mathbf{0}\), (d) \(-x = (-1) x\), (e) if \(\alpha x = \mathbf{0}\), then either \(\alpha = 0\) or \(x = \mathbf{0}\) or both.
1.2 Prove that \(x_i \ne \mathbf{0} \ (i = 1, 2, \ldots, n)\) for linearly independent vectors \(x_1, x_2, \ldots, x_n\). In other words, linearly independent vectors are all non-zero.
1.3 Prove that any non-empty subset of linearly independent vectors \(x_1, x_2, \ldots, x_n\) is also linearly independent.
1.4 Write out in full the following expressions for \(n = 3\): (a) \(\delta^i_j a^j\), (b) \(\delta_{ij} x^i x^j\), (c) \(\delta^i_i\), (d) \(\partial f_i / \partial x^i\).
1.6 Prove that a set of mutually orthogonal non-zero vectors is always linearly independent.
1.7 Prove the so-called parallelogram law: \(\| x + y \|^2 = \| x \|^2 + 2 x \cdot y + \| y \|^2\).
1.8 Let \(\mathcal{G} = \{g_1, g_2, \ldots, g_n\}\) be a basis in \(\mathbb{E}^n\) and \(a \in \mathbb{E}^n\) be a vector. Prove that \(a \cdot g_i = 0 \ (i = 1, 2, \ldots, n)\) if and only if \(a = \mathbf{0}\).
1.9 Prove that \(a = b\) if and only if \(a \cdot x = b \cdot x, \ \forall x \in \mathbb{E}^n\).
1.10 (a) Construct an orthonormal set of vectors orthogonalizing and normalizing (with the aid of the procedure described in Sect. 1.4) the following linearly independent vectors \(g_1\), \(g_2\), \(g_3\), where the components are given with respect to an orthonormal basis.
(b) Construct a basis in \(\mathbb{E}^3\) dual to the given above utilizing relations (1.16)₂, (1.18) and (1.19).
(c) As an alternative, construct a basis in \(\mathbb{E}^3\) dual to the given above by means of (1.21)₁, (1.24) and (1.25)₂.
(d) Calculate again the vectors \(g^i\) dual to \(g_i \ (i = 1, 2, 3)\) by using relations (1.33) and (1.35). Compare the result with the solution of problem (b).
1.11 Verify that the vectors (1.33) are linearly independent.
1.12 Prove identities (1.41) and (1.42) by means of (1.18), (1.19) and (1.24), respectively.
1.13 Prove relations (1.40) and (1.44) by using (1.39) and (1.43), respectively.
1.14 Verify the following identities involving the permutation symbol (1.36) for \(n = 3\): (a) \(\delta^{ij} e_{ijk} = 0\), (b) \(e^{ikm} e_{jkm} = 2 \delta^i_j\), (c) \(e^{ijk} e_{ijk} = 6\), (d) \(e^{ijm} e_{klm} = \delta^i_k \delta^j_l - \delta^i_l \delta^j_k\).
1.15 Prove the identity
\[ (a \times b) \times c = (b \otimes a - a \otimes b) \, c, \quad \forall a, b, c \in \mathbb{E}^3. \quad (1.169) \]
1.16 Prove relations (1.64) using (1.45).
1.19 Prove formula (1.58), where the negative tensor \(-\mathbf{A}\) is defined by (1.53).
1.20 Prove that not every second-order tensor in \(\mathrm{Lin}^n\) can be represented as a tensor product of two vectors \(a, b \in \mathbb{E}^n\) as \(a \otimes b\).
1.23 Evaluate the tensor \(\mathbf{W} = \hat{w}\), where \(w = w^i g_i\).
1.24 Evaluate the components of the tensor describing a rotation about the axis \(e_3\) by the angle \(\alpha\).
1.25 Let \(\mathbf{A} = \mathrm{A}^{ij} \, g_i \otimes g_j\), where the components \(\mathrm{A}^{ij}\) are given and the vectors \(g_i \ (i = 1, 2, 3)\) are defined in Exercise 1.10. Evaluate the components \(\mathrm{A}_{ij}\), \(\mathrm{A}^i_{\cdot j}\) and \(\mathrm{A}_i^{\cdot j}\).
1.27 Let \(\mathbf{A} = \mathrm{A}^i_{\cdot j} \, g_i \otimes g^j\), \(\mathbf{B} = \mathrm{B}^i_{\cdot j} \, g_i \otimes g^j\), \(\mathbf{C} = \mathrm{C}^i_{\cdot j} \, g_i \otimes g^j\) and \(\mathbf{D} = \mathrm{D}^i_{\cdot j} \, g_i \otimes g^j\), where the component matrices are given. Find commutative pairs of tensors.
1.28 Let \(\mathbf{A}\) and \(\mathbf{B}\) be two commutative tensors. Write out in full \((\mathbf{A} + \mathbf{B})^k\), where \(k = 2, 3, \ldots\)
1.29 Prove that
\[ \exp(\mathbf{A} + \mathbf{B}) = \exp(\mathbf{A}) \exp(\mathbf{B}), \quad (1.170) \]
where \(\mathbf{A}\) and \(\mathbf{B}\) commute.
1.31 Prove that \(\exp(\mathbf{A}) \exp(-\mathbf{A}) = \exp(-\mathbf{A}) \exp(\mathbf{A}) = \mathbf{I}\).
1.32 Prove that \(\exp(k \mathbf{A}) = \left[ \exp(\mathbf{A}) \right]^k\) for all integer \(k\).
1.33 Prove that \(\exp(\mathbf{A} + \mathbf{B}) = \exp(\mathbf{A}) + \exp(\mathbf{B}) - \mathbf{I}\) if \(\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{0}\).
1.35 Compute the exponential of the tensors \(\mathbf{D} = \mathrm{D}^i_{\cdot j} \, g_i \otimes g^j\), \(\mathbf{E} = \mathrm{E}^i_{\cdot j} \, g_i \otimes g^j\) and \(\mathbf{F} = \mathrm{F}^i_{\cdot j} \, g_i \otimes g^j\), where the component matrices are given.
1.38 Evaluate the components \(\mathrm{B}^{ij}\), \(\mathrm{B}_{ij}\), \(\mathrm{B}^i_{\cdot j}\) and \(\mathrm{B}_i^{\cdot j}\) of the tensor \(\mathbf{B} = \mathbf{A}^{\mathrm{T}}\), where \(\mathbf{A}\) is defined in Exercise 1.25.
1.40 Prove that \(\left( \mathbf{A}^k \right)^{-1} = \left( \mathbf{A}^{-1} \right)^k = \mathbf{A}^{-k}\), where \(k = 1, 2, 3, \ldots\)
1.41 Prove identity (1.142) using (1.87) and (1.141).
1.42 Prove by means of (1.141)–(1.143) the properties of the scalar product (D.1)– (D.3).
1.43 Verify that \(\left[ (a \otimes b)(c \otimes d) \right] : \mathbf{I} = (a \cdot d)(b \cdot c)\).
1.44 Express \(\mathrm{tr}\,\mathbf{A}\) in terms of the components \(\mathrm{A}^i_{\cdot j}\), \(\mathrm{A}_{ij}\), \(\mathrm{A}^{ij}\).
1.45 Let \(\mathbf{W} = \mathrm{W}^{ij} \, g_i \otimes g_j\), where the components \(\mathrm{W}^{ij}\) are given and the vectors \(g_i \ (i = 1, 2, 3)\) are defined in Exercise 1.10. Calculate the axial vector of \(\mathbf{W}\).
1.46 Prove that \(\mathbf{M} : \mathbf{W} = 0\), where \(\mathbf{M}\) is a symmetric tensor and \(\mathbf{W}\) a skew-symmetric tensor.
1.47 Evaluate \(\mathrm{tr}\,\mathbf{W}^k\), where \(\mathbf{W}\) is a skew-symmetric tensor and \(k = 1, 3, 5, \ldots\)
1.48 Verify that \(\mathrm{sym}\,(\mathrm{skew}\,\mathbf{A}) = \mathrm{skew}\,(\mathrm{sym}\,\mathbf{A}) = \mathbf{0}, \ \forall \mathbf{A} \in \mathrm{Lin}^n\).
1.49 Prove that \(\mathrm{sph}\,(\mathrm{dev}\,\mathbf{A}) = \mathrm{dev}\,(\mathrm{sph}\,\mathbf{A}) = \mathbf{0}, \ \forall \mathbf{A} \in \mathrm{Lin}^n\).
Vector and Tensor Analysis in Euclidean Space
2.1 Vector- and Tensor-Valued Functions, Differential Calculus
In the following we consider a vector-valued function \(x(t)\) and a tensor-valued function \(\mathbf{A}(t)\) of a real variable \(t\). Henceforth, we assume that these functions are continuous, such that
\[ \lim_{t \to t_0} \left[ x(t) - x(t_0) \right] = \mathbf{0}, \quad \lim_{t \to t_0} \left[ \mathbf{A}(t) - \mathbf{A}(t_0) \right] = \mathbf{0} \quad (2.1) \]
for all \(t_0\) within the definition domain. The functions \(x(t)\) and \(\mathbf{A}(t)\) are called differentiable if the limits
\[ \frac{\mathrm{d}x}{\mathrm{d}t} = \lim_{s \to 0} \frac{x(t+s) - x(t)}{s}, \quad \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} = \lim_{s \to 0} \frac{\mathbf{A}(t+s) - \mathbf{A}(t)}{s} \quad (2.2) \]
exist and are finite. They are referred to as the derivatives of the vector- and tensor-valued functions \(x(t)\) and \(\mathbf{A}(t)\), respectively.
For differentiable vector- and tensor-valued functions the usual rules of differentiation hold.
1. Product of a scalar function with a vector- or tensor-valued function:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ u(t) \, x(t) \right] = \frac{\mathrm{d}u}{\mathrm{d}t} x(t) + u(t) \frac{\mathrm{d}x}{\mathrm{d}t}, \quad (2.3) \]
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ u(t) \, \mathbf{A}(t) \right] = \frac{\mathrm{d}u}{\mathrm{d}t} \mathbf{A}(t) + u(t) \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}. \quad (2.4) \]
2. Mapping of a vector-valued function by a tensor-valued function:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ \mathbf{A}(t) \, x(t) \right] = \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} x(t) + \mathbf{A}(t) \frac{\mathrm{d}x}{\mathrm{d}t}. \quad (2.5) \]
3. Scalar product of two vector- or tensor-valued functions:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ x(t) \cdot y(t) \right] = \frac{\mathrm{d}x}{\mathrm{d}t} \cdot y(t) + x(t) \cdot \frac{\mathrm{d}y}{\mathrm{d}t}, \quad (2.6) \]
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ \mathbf{A}(t) : \mathbf{B}(t) \right] = \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} : \mathbf{B}(t) + \mathbf{A}(t) : \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}. \quad (2.7) \]
4. Tensor product of two vector-valued functions:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ x(t) \otimes y(t) \right] = \frac{\mathrm{d}x}{\mathrm{d}t} \otimes y(t) + x(t) \otimes \frac{\mathrm{d}y}{\mathrm{d}t}. \quad (2.8) \]
5. Composition of two tensor-valued functions:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ \mathbf{A}(t) \, \mathbf{B}(t) \right] = \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{B}(t) + \mathbf{A}(t) \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}. \quad (2.9) \]
6. Chain rule:
\[ \frac{\mathrm{d}}{\mathrm{d}t} x \left[ u(t) \right] = \frac{\mathrm{d}x}{\mathrm{d}u} \frac{\mathrm{d}u}{\mathrm{d}t}, \quad \frac{\mathrm{d}}{\mathrm{d}t} \mathbf{A} \left[ u(t) \right] = \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}u} \frac{\mathrm{d}u}{\mathrm{d}t}. \quad (2.10) \]
7. Chain rule for functions of several arguments:
\[ \frac{\mathrm{d}}{\mathrm{d}t} x \left[ u(t), v(t) \right] = \frac{\partial x}{\partial u} \frac{\mathrm{d}u}{\mathrm{d}t} + \frac{\partial x}{\partial v} \frac{\mathrm{d}v}{\mathrm{d}t}, \quad (2.11) \]
\[ \frac{\mathrm{d}}{\mathrm{d}t} \mathbf{A} \left[ u(t), v(t) \right] = \frac{\partial \mathbf{A}}{\partial u} \frac{\mathrm{d}u}{\mathrm{d}t} + \frac{\partial \mathbf{A}}{\partial v} \frac{\mathrm{d}v}{\mathrm{d}t}, \quad (2.12) \]
where \(\partial / \partial u\) denotes the partial derivative. It is defined for vector- and tensor-valued functions in the standard manner by
\[ \frac{\partial x (u, v)}{\partial u} = \lim_{s \to 0} \frac{x(u+s, v) - x(u, v)}{s}, \quad \frac{\partial \mathbf{A} (u, v)}{\partial u} = \lim_{s \to 0} \frac{\mathbf{A}(u+s, v) - \mathbf{A}(u, v)}{s}. \]
The above differentiation rules can be verified with the aid of elementary differential calculus. For example, for the derivative of the composition of two tensor-valued functions (2.9) we proceed as follows. Let us define two tensor-valued functions by
\[ \mathbf{O}_1(s) = \frac{\mathbf{A}(t+s) - \mathbf{A}(t)}{s} - \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}, \quad \mathbf{O}_2(s) = \frac{\mathbf{B}(t+s) - \mathbf{B}(t)}{s} - \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}. \quad (2.15) \]
Bearing the definition of the derivative (2.2) in mind we have
\[ \lim_{s \to 0} \mathbf{O}_1(s) = \mathbf{0}, \quad \lim_{s \to 0} \mathbf{O}_2(s) = \mathbf{0}. \]
Then,
\[ \frac{\mathrm{d}}{\mathrm{d}t} \left[ \mathbf{A}(t) \mathbf{B}(t) \right] = \lim_{s \to 0} \frac{1}{s} \left\{ \left[ \mathbf{A}(t) + s \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} + s \mathbf{O}_1(s) \right] \left[ \mathbf{B}(t) + s \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t} + s \mathbf{O}_2(s) \right] - \mathbf{A}(t) \mathbf{B}(t) \right\} \]
\[ = \lim_{s \to 0} \left\{ \left[ \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} + \mathbf{O}_1(s) \right] \mathbf{B}(t) + \mathbf{A}(t) \left[ \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t} + \mathbf{O}_2(s) \right] + s \left[ \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} + \mathbf{O}_1(s) \right] \left[ \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t} + \mathbf{O}_2(s) \right] \right\} \]
\[ = \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{B}(t) + \mathbf{A}(t) \frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}. \]
2.2 Coordinates in Euclidean Space, Tangent Vectors
In the space \(\mathbb{E}^n\) a coordinate system establishes a one-to-one correspondence between the vectors and a set of \(n\) real numbers \(x^1, x^2, \ldots, x^n\), called coordinates of the corresponding vectors. Thus, we can write
\[ x^i = x^i(r) \quad \Longleftrightarrow \quad r = r \left( x^1, x^2, \ldots, x^n \right), \quad (2.16) \]
where \(r \in \mathbb{E}^n\) and \(x^i \in \mathbb{R} \ (i = 1, 2, \ldots, n)\). Henceforth, we assume that the functions \(x^i = x^i(r)\) and \(r = r \left( x^1, x^2, \ldots, x^n \right)\) are sufficiently differentiable.
Example 2.1 Cylindrical coordinates in \(\mathbb{E}^3\). The cylindrical coordinates (Fig. 2.1) are defined by
\[ r = r(\varphi, z, r) = r \cos\varphi \, e_1 + r \sin\varphi \, e_2 + z \, e_3 \quad (2.17) \]
and
\[ r = \sqrt{(r \cdot e_1)^2 + (r \cdot e_2)^2}, \quad z = r \cdot e_3, \quad \varphi = \begin{cases} \arccos \dfrac{r \cdot e_1}{r} & \text{if } r \cdot e_2 \ge 0, \\[1ex] 2\pi - \arccos \dfrac{r \cdot e_1}{r} & \text{if } r \cdot e_2 < 0, \end{cases} \quad (2.18) \]
where \(e_i \ (i = 1, 2, 3)\) form an orthonormal basis in \(\mathbb{E}^3\).
Fig. 2.1 Cylindrical coordinates in three-dimensional space
As an example of the one-to-one correspondence (2.16) one can consider the representation of a vector with respect to a fixed basis \(\mathcal{H} = \{h_1, h_2, \ldots, h_n\}\). Indeed, according to Theorem 1.5 of the previous chapter, every vector \(r \in \mathbb{E}^n\) corresponds uniquely to its components with respect to \(\mathcal{H}\):
\[ r = x^i h_i \quad \Longleftrightarrow \quad x^i = r \cdot h^i, \quad i = 1, 2, \ldots, n, \quad (2.19) \]
where \(\mathcal{H}' = \{h^1, h^2, \ldots, h^n\}\) is the basis dual to \(\mathcal{H}\). The components \(x^i\) are referred to as the linear coordinates of the vector \(r\).
The Cartesian coordinates result as a special case of the linear coordinates (2.19) where \(h_i = e_i \ (i = 1, 2, \ldots, n)\), so that
\[ r = x^i e_i \quad \Longleftrightarrow \quad x^i = r \cdot e_i, \quad i = 1, 2, \ldots, n. \quad (2.20) \]
Let \(x^i = x^i(r)\) and \(y^i = y^i(r) \ (i = 1, 2, \ldots, n)\) be two arbitrary coordinate systems in \(\mathbb{E}^n\). Since their correspondences are one to one, the functions
\[ x^i = \hat{x}^i \left( y^1, y^2, \ldots, y^n \right), \quad y^i = \hat{y}^i \left( x^1, x^2, \ldots, x^n \right), \quad i = 1, 2, \ldots, n, \quad (2.21) \]
are invertible. These functions describe the transformation of the coordinate systems. Inserting one relation (2.21) into another one yields
\[ y^i = \hat{y}^i \left( \hat{x}^1 \left( y^1, y^2, \ldots, y^n \right), \hat{x}^2 \left( y^1, y^2, \ldots, y^n \right), \ldots, \hat{x}^n \left( y^1, y^2, \ldots, y^n \right) \right). \quad (2.22) \]
Further differentiation with respect to \(y^j\) delivers with the aid of the chain rule
\[ \frac{\partial y^i}{\partial y^j} = \delta^i_j = \frac{\partial y^i}{\partial x^k} \frac{\partial x^k}{\partial y^j}, \quad i, j = 1, 2, \ldots, n. \quad (2.23) \]
The determinant of the matrix (2.23) takes the form
\[ \left| \delta^i_j \right| = 1 = \left| \frac{\partial y^i}{\partial x^k} \frac{\partial x^k}{\partial y^j} \right| = \left| \frac{\partial y^i}{\partial x^k} \right| \left| \frac{\partial x^k}{\partial y^j} \right|. \quad (2.24) \]
The determinant \(\left| \partial y^i / \partial x^k \right|\) on the right hand side of (2.24) is referred to as Jacobian determinant of the coordinate transformation \(y^i = \hat{y}^i \left( x^1, x^2, \ldots, x^n \right) \ (i = 1, 2, \ldots, n)\). Thus, we have proved the following theorem.
Theorem 2.1 If the transformation of the coordinates \(y^i = \hat{y}^i \left( x^1, x^2, \ldots, x^n \right)\) admits an inverse form \(x^i = \hat{x}^i \left( y^1, y^2, \ldots, y^n \right) \ (i = 1, 2, \ldots, n)\) and if \(J\) and \(K\) are the Jacobians of these transformations, then \(JK = 1\).
One of the important consequences of this theorem is that
\[ J = \left| \frac{\partial y^i}{\partial x^k} \right| \ne 0. \quad (2.25) \]
Now, we consider an arbitrary curvilinear coordinate system
\[ \theta^i = \theta^i(r) \quad \Longleftrightarrow \quad r = r \left( \theta^1, \theta^2, \ldots, \theta^n \right), \quad (2.26) \]
where \(r \in \mathbb{E}^n\) and \(\theta^i \in \mathbb{R} \ (i = 1, 2, \ldots, n)\). The equations
\[ \theta^i = \mathrm{const}, \quad i = 1, 2, \ldots, k-1, k+1, \ldots, n, \quad (2.27) \]
define a curve in \(\mathbb{E}^n\) called \(\theta^k\)-coordinate line. The vectors (see Fig. 2.2)
\[ g_k = \frac{\partial r}{\partial \theta^k}, \quad k = 1, 2, \ldots, n, \quad (2.28) \]
are called the tangent vectors to the corresponding \(\theta^k\)-coordinate lines (2.27). One can show that the tangent vectors are linearly independent and thus form a basis of \(\mathbb{E}^n\). Conversely, assume that the vectors (2.28) are linearly dependent, so that \(\alpha^i g_i = \mathbf{0}\), where not all scalars \(\alpha^i \in \mathbb{R} \ (i = 1, 2, \ldots, n)\) are zero. Let further \(x^i = x^i(r) \ (i = 1, 2, \ldots, n)\) be linear coordinates in \(\mathbb{E}^n\) with respect to a basis \(\mathcal{H} = \{h_1, h_2, \ldots, h_n\}\). Then,
\[ \mathbf{0} = \alpha^i g_i = \alpha^i \frac{\partial r}{\partial \theta^i} = \alpha^i \frac{\partial x^j}{\partial \theta^i} h_j. \]
Fig. 2.2 Illustration of the tangent vectors
Since the basis vectors \(h_j \ (j = 1, 2, \ldots, n)\) are linearly independent,
\[ \alpha^i \frac{\partial x^j}{\partial \theta^i} = 0, \quad j = 1, 2, \ldots, n. \]
This is a homogeneous linear equation system with a non-trivial solution \(\alpha^i \ (i = 1, 2, \ldots, n)\). Hence, \(\left| \partial x^j / \partial \theta^i \right| = 0\), which obviously contradicts relation (2.25).
Example 2.2 Tangent vectors and metric coefficients of cylindrical coordinates in \(\mathbb{E}^3\). By means of (2.17) and (2.28) we obtain
\[ g_1 = \frac{\partial r}{\partial \varphi} = -r \sin\varphi \, e_1 + r \cos\varphi \, e_2, \quad g_2 = \frac{\partial r}{\partial z} = e_3, \quad g_3 = \frac{\partial r}{\partial r} = \cos\varphi \, e_1 + \sin\varphi \, e_2. \quad (2.29) \]
The metric coefficients take by virtue of (1.24) and (1.25)₂ the form
\[ \left[ g_{ij} \right] = \begin{bmatrix} r^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \left[ g^{ij} \right] = \left[ g_{ij} \right]^{-1} = \begin{bmatrix} r^{-2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \quad (2.30) \]
The dual basis results from (1.21)₁ by
\[ g^1 = \frac{1}{r^2} g_1 = -\frac{1}{r} \sin\varphi \, e_1 + \frac{1}{r} \cos\varphi \, e_2, \quad g^2 = g_2 = e_3, \quad g^3 = g_3 = \cos\varphi \, e_1 + \sin\varphi \, e_2. \quad (2.31) \]
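The tangent vectors (2.29) can be cross-checked by numerical differentiation of (2.17). A small NumPy sketch (the step size is an arbitrary illustrative choice):

```python
import numpy as np

def r_vec(phi, z, r):
    """Position vector of cylindrical coordinates (2.17)."""
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def tangent_vectors(phi, z, r, h=1e-6):
    """g_k = dr/dtheta^k (2.28) by central differences, theta = (phi, z, r)."""
    theta = np.array([phi, z, r])
    g = []
    for k in range(3):
        dp, dm = theta.copy(), theta.copy()
        dp[k] += h; dm[k] -= h
        g.append((r_vec(*dp) - r_vec(*dm)) / (2 * h))
    return np.array(g)

g = tangent_vectors(np.pi / 4, 1.0, 2.0)
print(np.round(g @ g.T, 6))  # metric g_ij = diag(r^2, 1, 1) = diag(4, 1, 1), cf. (2.30)
```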
2.3 Coordinate Transformation. Co-, Contra- and Mixed Variant Components
Let \(\theta^i = \theta^i(r)\) and \(\bar{\theta}^i = \bar{\theta}^i(r) \ (i = 1, 2, \ldots, n)\) be two arbitrary coordinate systems in \(\mathbb{E}^n\). It holds
\[ \bar{g}_i = \frac{\partial r}{\partial \bar{\theta}^i} = \frac{\partial r}{\partial \theta^j} \frac{\partial \theta^j}{\partial \bar{\theta}^i} = g_j \frac{\partial \theta^j}{\partial \bar{\theta}^i}, \quad i = 1, 2, \ldots, n. \quad (2.32) \]
If \(g^i\) is the basis dual to \(g_i \ (i = 1, 2, \ldots, n)\), then we can write
\[ \bar{g}^i = g^j \frac{\partial \bar{\theta}^i}{\partial \theta^j}, \quad i = 1, 2, \ldots, n. \quad (2.33) \]
One observes that the dual vectors transform with the change of the coordinate system differently from the tangent vectors: the transformation rule (2.32) is referred to as covariant, the rule (2.33) as contravariant. Covariant and contravariant variables are denoted by lower and upper indices, respectively.
The co- and contravariant rules also hold for the components of vectors and tensors if they are related to tangent vectors. Indeed, let
\[ x = x_i g^i = x^i g_i = \bar{x}_i \bar{g}^i = \bar{x}^i \bar{g}_i, \quad (2.35) \]
\[ \mathbf{A} = \mathrm{A}_{ij} \, g^i \otimes g^j = \mathrm{A}^{ij} \, g_i \otimes g_j = \mathrm{A}^i_{\cdot j} \, g_i \otimes g^j = \bar{\mathrm{A}}_{ij} \, \bar{g}^i \otimes \bar{g}^j = \bar{\mathrm{A}}^{ij} \, \bar{g}_i \otimes \bar{g}_j = \bar{\mathrm{A}}^i_{\cdot j} \, \bar{g}_i \otimes \bar{g}^j. \quad (2.36) \]
Then, by means of (1.28), (1.88), (2.32) and (2.33) we obtain
\[ \bar{x}_i = x \cdot \bar{g}_i = x_j \frac{\partial \theta^j}{\partial \bar{\theta}^i}, \quad \bar{x}^i = x \cdot \bar{g}^i = x^j \frac{\partial \bar{\theta}^i}{\partial \theta^j}, \]
\[ \bar{\mathrm{A}}_{ij} = \bar{g}_i \cdot \mathbf{A} \bar{g}_j = \mathrm{A}_{kl} \frac{\partial \theta^k}{\partial \bar{\theta}^i} \frac{\partial \theta^l}{\partial \bar{\theta}^j}, \quad \bar{\mathrm{A}}^{ij} = \mathrm{A}^{kl} \frac{\partial \bar{\theta}^i}{\partial \theta^k} \frac{\partial \bar{\theta}^j}{\partial \theta^l}, \quad \bar{\mathrm{A}}^i_{\cdot j} = \mathrm{A}^k_{\cdot l} \frac{\partial \bar{\theta}^i}{\partial \theta^k} \frac{\partial \theta^l}{\partial \bar{\theta}^j}. \]
Accordingly, the vector and tensor components \(x_i\), \(\mathrm{A}_{ij}\) and \(x^i\), \(\mathrm{A}^{ij}\) are called covariant and contravariant, respectively. The tensor components \(\mathrm{A}^i_{\cdot j}\) are referred to as mixed variant. The above transformation rules can similarly be written for higher-order tensors, e.g. for third-order tensors.
From the very beginning we have supplied coordinates with upper indices, which imply the contravariant transformation rule. Indeed, let us consider the transformation of a coordinate system \(\bar{\theta}^i = \bar{\theta}^i \left( \theta^1, \theta^2, \ldots, \theta^n \right)\). It holds
\[ \mathrm{d}\bar{\theta}^i = \frac{\partial \bar{\theta}^i}{\partial \theta^j} \mathrm{d}\theta^j, \quad i = 1, 2, \ldots, n. \]
Thus, the differentials of the coordinates really transform according to the contravariant law (2.33).
Example 2.3 Transformation of linear coordinates into cylindrical ones (2.17). Let \(x^i = x^i(r)\) be linear coordinates with respect to an orthonormal basis \(e_i \ (i = 1, 2, 3)\) in \(\mathbb{E}^3\):
\[ x^i = r \cdot e_i \quad \Longleftrightarrow \quad r = x^i e_i. \quad (2.44) \]
By means of (2.17) one can write
\[ x^1 = r \cos\varphi, \quad x^2 = r \sin\varphi, \quad x^3 = z, \quad (2.45) \]
and consequently
\[ \left[ \frac{\partial x^i}{\partial \theta^j} \right] = \begin{bmatrix} -r \sin\varphi & 0 & \cos\varphi \\ r \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \end{bmatrix}, \quad (2.46) \]
where \(\theta^1 = \varphi\), \(\theta^2 = z\), \(\theta^3 = r\).
The reciprocal derivatives can easily be obtained from (2.23) by inverting the matrix $\left[\partial x^i/\partial \theta^j\right]$ (2.46). This yields
$$\left[\frac{\partial \theta^i}{\partial x^j}\right] = \begin{bmatrix} -\dfrac{1}{r}\sin\varphi & \dfrac{1}{r}\cos\varphi & 0 \\ 0 & 0 & 1 \\ \cos\varphi & \sin\varphi & 0 \end{bmatrix}. \tag{2.47}$$
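Since the matrices (2.46) and (2.47) are mutually inverse, the product of their determinants must equal one, in agreement with the theorem quoted at the beginning of this section ($JK = 1$). A minimal sympy check under the same coordinate ordering (names are ours):

```python
import sympy as sp

phi, z, r = sp.symbols('phi z r', real=True, positive=True)

# x^i as functions of the cylindrical coordinates, Eq. (2.45)
x = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi), z])
theta = sp.Matrix([phi, z, r])

J_mat = x.jacobian(theta)          # [dx^i/dtheta^j], Eq. (2.46)
K_mat = sp.simplify(J_mat.inv())   # [dtheta^i/dx^j],  Eq. (2.47)

print(sp.simplify(J_mat.det()))              # J = r
print(sp.simplify(J_mat.det()*K_mat.det()))  # JK = 1
print(sp.simplify(J_mat*K_mat))              # identity matrix
```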
2.4 Gradient, Covariant and Contravariant Derivatives
Let us now consider differentiable scalar-valued, vector-valued and tensor-valued functions of the coordinates, briefly referred to as scalar, vector and tensor fields:
$$\Phi = \Phi\left(\theta^1, \theta^2, \ldots, \theta^n\right), \quad \mathbf{x} = \mathbf{x}\left(\theta^1, \theta^2, \ldots, \theta^n\right), \quad \mathbf{A} = \mathbf{A}\left(\theta^1, \theta^2, \ldots, \theta^n\right). \tag{2.48}$$
Due to the one-to-one correspondence (2.26) between the position vector and the coordinates, these fields can alternatively be represented by $\Phi = \Phi(\mathbf{r})$, $\mathbf{x} = \mathbf{x}(\mathbf{r})$ and $\mathbf{A} = \mathbf{A}(\mathbf{r})$.
In the following we assume that the so-called directional derivatives of the functions (2.48),
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \lim_{s \to 0}\frac{\Phi(\mathbf{r} + s\mathbf{a}) - \Phi(\mathbf{r})}{s},$$
$$\frac{\mathrm{d}}{\mathrm{d}s}\mathbf{x}(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \lim_{s \to 0}\frac{\mathbf{x}(\mathbf{r} + s\mathbf{a}) - \mathbf{x}(\mathbf{r})}{s},$$
$$\frac{\mathrm{d}}{\mathrm{d}s}\mathbf{A}(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \lim_{s \to 0}\frac{\mathbf{A}(\mathbf{r} + s\mathbf{a}) - \mathbf{A}(\mathbf{r})}{s} \tag{2.49}$$
exist for all $\mathbf{a} \in \mathbb{E}^n$. Further, one can show that the mappings $\mathbf{a} \mapsto \frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\big|_{s=0}$, $\mathbf{a} \mapsto \frac{\mathrm{d}}{\mathrm{d}s}\mathbf{x}(\mathbf{r} + s\mathbf{a})\big|_{s=0}$ and $\mathbf{a} \mapsto \frac{\mathrm{d}}{\mathrm{d}s}\mathbf{A}(\mathbf{r} + s\mathbf{a})\big|_{s=0}$ are linear with respect to the vector $\mathbf{a}$. For the directional derivative of the scalar function $\Phi = \Phi(\mathbf{r})$ we can write, for example,
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi\left[\mathbf{r} + s(\mathbf{a} + \mathbf{b})\right]\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a} + s\mathbf{b})\bigg|_{s=0}.$$
Considering the right-hand side as a function of two variables $s_1 = s_2 = s$ and applying the chain rule, we obtain
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a} + s\mathbf{b})\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s_1}\Phi(\mathbf{r} + s_1\mathbf{a} + s_2\mathbf{b})\bigg|_{s_1 = s_2 = 0} + \frac{\mathrm{d}}{\mathrm{d}s_2}\Phi(\mathbf{r} + s_1\mathbf{a} + s_2\mathbf{b})\bigg|_{s_1 = s_2 = 0}$$
and finally
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi\left[\mathbf{r} + s(\mathbf{a} + \mathbf{b})\right]\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} + \frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{b})\bigg|_{s=0} \tag{2.51}$$
for all $\mathbf{a}, \mathbf{b} \in \mathbb{E}^n$. In a similar fashion we can write
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\alpha\mathbf{a})\bigg|_{s=0} = \alpha\,\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0}, \quad \forall \alpha \in \mathbb{R}. \tag{2.52}$$
Representing $\mathbf{a}$ with respect to a basis as $\mathbf{a} = a^i\,\mathbf{g}_i$ we thus obtain
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = a^i\,\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{g}_i)\bigg|_{s=0} = \left[\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{g}_i)\bigg|_{s=0}\,\mathbf{g}^i\right] \cdot \mathbf{a}, \tag{2.53}$$
where $\mathbf{g}^i$ form the basis dual to $\mathbf{g}_i$ $(i = 1, 2, \ldots, n)$. This result can finally be expressed by
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \operatorname{grad}\Phi \cdot \mathbf{a}, \quad \forall \mathbf{a} \in \mathbb{E}^n, \tag{2.54}$$
where the vector denoted by $\operatorname{grad}\Phi \in \mathbb{E}^n$ is referred to as the gradient of the function $\Phi = \Phi(\mathbf{r})$. According to (2.53) and (2.54) it can be represented by
$$\operatorname{grad}\Phi = \frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{g}_i)\bigg|_{s=0}\,\mathbf{g}^i. \tag{2.55}$$
Example 2.4. Gradient of the scalar function $\|\mathbf{r}\|$. Using the definition of the directional derivative (2.49) we can write
$$\frac{\mathrm{d}}{\mathrm{d}s}\|\mathbf{r} + s\mathbf{a}\|\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\sqrt{(\mathbf{r} + s\mathbf{a}) \cdot (\mathbf{r} + s\mathbf{a})}\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\sqrt{\mathbf{r} \cdot \mathbf{r} + 2s\,(\mathbf{r} \cdot \mathbf{a}) + s^2\,(\mathbf{a} \cdot \mathbf{a})}\bigg|_{s=0}$$
$$= \frac{2\,(\mathbf{r} \cdot \mathbf{a}) + 2s\,(\mathbf{a} \cdot \mathbf{a})}{2\sqrt{\mathbf{r} \cdot \mathbf{r} + 2s\,(\mathbf{r} \cdot \mathbf{a}) + s^2\,(\mathbf{a} \cdot \mathbf{a})}}\Bigg|_{s=0} = \frac{\mathbf{r} \cdot \mathbf{a}}{\|\mathbf{r}\|} = \frac{\mathbf{r}}{\|\mathbf{r}\|} \cdot \mathbf{a}.$$
Comparing this result with (2.54) delivers
$$\operatorname{grad}\|\mathbf{r}\| = \frac{\mathbf{r}}{\|\mathbf{r}\|}. \tag{2.56}$$
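Relation (2.56) is easily spot-checked in Cartesian coordinates, where the gradient reduces to the vector of partial derivatives. A small sympy sketch (the sample symbols are our own):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

norm_r = sp.sqrt(x1**2 + x2**2 + x3**2)

# Cartesian gradient of ||r||: should equal r/||r||, Eq. (2.56)
grad = sp.Matrix([norm_r.diff(v) for v in (x1, x2, x3)])
expected = sp.Matrix([x1, x2, x3]) / norm_r
print(sp.simplify(grad - expected))  # zero vector
```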
Similarly to (2.54) one defines the gradient of the vector function $\mathbf{x} = \mathbf{x}(\mathbf{r})$ and the gradient of the tensor function $\mathbf{A} = \mathbf{A}(\mathbf{r})$:
$$\frac{\mathrm{d}}{\mathrm{d}s}\mathbf{x}(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \left(\operatorname{grad}\mathbf{x}\right)\mathbf{a}, \quad \forall \mathbf{a} \in \mathbb{E}^n, \tag{2.57}$$
$$\frac{\mathrm{d}}{\mathrm{d}s}\mathbf{A}(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \left(\operatorname{grad}\mathbf{A}\right)\mathbf{a}, \quad \forall \mathbf{a} \in \mathbb{E}^n. \tag{2.58}$$
Herein, $\operatorname{grad}\mathbf{x}$ and $\operatorname{grad}\mathbf{A}$ represent tensors of second and third order, respectively.
In order to evaluate the above gradients (2.54), (2.57) and (2.58) we represent the vectors $\mathbf{r}$ and $\mathbf{a}$ with respect to the linear coordinates (2.19) as
$$\mathbf{r} = x^i\,\mathbf{h}_i, \quad \mathbf{a} = a^i\,\mathbf{h}_i. \tag{2.59}$$
With the aid of the chain rule we can further write for the directional derivative of the function $\Phi = \Phi(\mathbf{r})$:
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{a})\bigg|_{s=0} = \frac{\mathrm{d}}{\mathrm{d}s}\Phi\left[\left(x^i + sa^i\right)\mathbf{h}_i\right]\bigg|_{s=0} = \frac{\partial \Phi}{\partial x^i}\,a^i.$$
Comparing this result with (2.54) and bearing in mind that it holds for all vectors $\mathbf{a}$, we obtain
$$\operatorname{grad}\Phi = \frac{\partial \Phi}{\partial x^i}\,\mathbf{h}^i. \tag{2.60}$$
The representation (2.60) can be rewritten in terms of arbitrary curvilinear coordinates $\mathbf{r} = \mathbf{r}\left(\theta^1, \theta^2, \ldots, \theta^n\right)$ and the corresponding tangent vectors (2.28). Indeed, in view of (2.33) and (2.60),
$$\operatorname{grad}\Phi = \frac{\partial \Phi}{\partial x^i}\,\mathbf{h}^i = \frac{\partial \Phi}{\partial \theta^k}\,\frac{\partial \theta^k}{\partial x^i}\,\mathbf{h}^i = \frac{\partial \Phi}{\partial \theta^i}\,\mathbf{g}^i. \tag{2.61}$$
Comparison of the last result with (2.55) yields
$$\frac{\mathrm{d}}{\mathrm{d}s}\Phi(\mathbf{r} + s\mathbf{g}_i)\bigg|_{s=0} = \frac{\partial \Phi}{\partial \theta^i}, \quad i = 1, 2, \ldots, n. \tag{2.62}$$
According to its definition (2.54), the gradient is independent of the choice of the coordinate system. This can also be seen from relation (2.61), which, in view of (2.33), holds for an arbitrary coordinate system.
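Representation (2.61) can be verified against the Cartesian gradient (2.60): forming $\Phi,_i\,\mathbf{g}^i$ with the dual basis (2.31) of the cylindrical coordinates must reproduce $\partial\Phi/\partial x^i\,\mathbf{e}_i$ after the change of variables (2.45). A sympy sketch with an arbitrarily chosen sample field (all names are ours):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
phi, z, r = sp.symbols('phi z r', real=True, positive=True)

# Sample scalar field, once in Cartesian and once in cylindrical coordinates
Phi_cart = x1*sp.sqrt(x1**2 + x2**2) + x3   # = r**2*cos(phi) + z
Phi_cyl = r**2*sp.cos(phi) + z

# grad(Phi) = Phi,_i g^i with the dual basis (2.31)
g1 = sp.Matrix([-sp.sin(phi)/r, sp.cos(phi)/r, 0])
g2 = sp.Matrix([0, 0, 1])
g3 = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
grad_cyl = Phi_cyl.diff(phi)*g1 + Phi_cyl.diff(z)*g2 + Phi_cyl.diff(r)*g3

# Cartesian gradient, then substitution of x^i(phi, z, r) via (2.45)
grad_cart = sp.Matrix([Phi_cart.diff(v) for v in (x1, x2, x3)])
subs = {x1: r*sp.cos(phi), x2: r*sp.sin(phi), x3: z}
print(sp.simplify(grad_cart.subs(subs) - grad_cyl))  # zero vector
```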
Similarly to relation (2.61) one can express the gradients of the vector-valued function $\mathbf{x} = \mathbf{x}(\mathbf{r})$ and the tensor-valued function $\mathbf{A} = \mathbf{A}(\mathbf{r})$ by
$$\operatorname{grad}\mathbf{x} = \frac{\partial \mathbf{x}}{\partial \theta^i} \otimes \mathbf{g}^i, \quad \operatorname{grad}\mathbf{A} = \frac{\partial \mathbf{A}}{\partial \theta^i} \otimes \mathbf{g}^i. \tag{2.64}$$
Example 2.5. Deformation gradient in the case of simple shear. The deformation gradient $\mathbf{F} \in \mathbf{Lin}^3$ is defined as the gradient of the function $\mathbf{x} = \mathbf{x}(\mathbf{X})$, which maps the position vector $\mathbf{X}$ of a material point in the reference configuration to its position vector $\mathbf{x}$ in the current configuration. For the Cartesian coordinates in $\mathbb{E}^3$, where $\mathbf{x} = x^i\,\mathbf{e}_i$ and $\mathbf{X} = X^i\,\mathbf{e}_i$, we can write by using (2.64)$_1$
$$\mathbf{F} = \operatorname{grad}\mathbf{x} = \frac{\partial \mathbf{x}}{\partial X^j} \otimes \mathbf{e}^j = \frac{\partial x^i}{\partial X^j}\,\mathbf{e}_i \otimes \mathbf{e}^j,$$
Fig. 2.3 Simple shear of a rectangular sheet

where the matrix $\left[F^i_{\ j}\right]$ of the components of $\mathbf{F}$ is given by
$$\left[F^i_{\ j}\right] = \left[\frac{\partial x^i}{\partial X^j}\right]. \tag{2.67}$$
In the case of simple shear it holds (see Fig. 2.3)
$$x^1 = X^1 + \gamma X^2, \quad x^2 = X^2, \quad x^3 = X^3, \tag{2.68}$$
where $\gamma$ denotes the amount of shear. Insertion into (2.67) yields
$$\left[F^i_{\ j}\right] = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{2.69}$$
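The matrix (2.69) is simply the Jacobian of the mapping (2.68), which a short symbolic computation confirms ($\gamma$ denotes the amount of shear, as above):

```python
import sympy as sp

gamma, X1, X2, X3 = sp.symbols('gamma X1 X2 X3', real=True)

# Simple shear (2.68): x1 = X1 + gamma*X2, x2 = X2, x3 = X3
x = sp.Matrix([X1 + gamma*X2, X2, X3])
F = x.jacobian(sp.Matrix([X1, X2, X3]))  # deformation gradient, Eq. (2.69)
print(F)  # [[1, gamma, 0], [0, 1, 0], [0, 0, 1]]
```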
Henceforth, the derivatives of the functions $\Phi = \Phi\left(\theta^1, \theta^2, \ldots, \theta^n\right)$, $\mathbf{x} = \mathbf{x}\left(\theta^1, \theta^2, \ldots, \theta^n\right)$ and $\mathbf{A} = \mathbf{A}\left(\theta^1, \theta^2, \ldots, \theta^n\right)$ with respect to the curvilinear coordinates $\theta^i$ will be denoted shortly by
$$\Phi,_i = \frac{\partial \Phi}{\partial \theta^i}, \quad \mathbf{x},_i = \frac{\partial \mathbf{x}}{\partial \theta^i}, \quad \mathbf{A},_i = \frac{\partial \mathbf{A}}{\partial \theta^i}. \tag{2.70}$$
They obey the covariant transformation rule (2.32) with respect to the index $i$, since
$$\frac{\partial \Phi}{\partial \theta^i} = \frac{\partial \Phi}{\partial \bar{\theta}^k}\,\frac{\partial \bar{\theta}^k}{\partial \theta^i}, \quad \frac{\partial \mathbf{x}}{\partial \theta^i} = \frac{\partial \mathbf{x}}{\partial \bar{\theta}^k}\,\frac{\partial \bar{\theta}^k}{\partial \theta^i}, \quad \frac{\partial \mathbf{A}}{\partial \theta^i} = \frac{\partial \mathbf{A}}{\partial \bar{\theta}^k}\,\frac{\partial \bar{\theta}^k}{\partial \theta^i}, \tag{2.71}$$
and are therefore called covariant derivatives.
The partial derivatives of the vector and tensor fields (2.70)$_{2,3}$ are again vector- and tensor-valued and can in turn be represented with respect to a basis as
$$\mathbf{x},_i = x^j|_i\,\mathbf{g}_j = x_j|_i\,\mathbf{g}^j, \quad \mathbf{A},_i = A^{kl}|_i\,\mathbf{g}_k \otimes \mathbf{g}_l = A_{kl}|_i\,\mathbf{g}^k \otimes \mathbf{g}^l = A^k_{\ l}|_i\,\mathbf{g}_k \otimes \mathbf{g}^l, \tag{2.72}$$
where $(\bullet)|_i$ denotes the covariant derivative acting on the components of the vector $\mathbf{x}$ or the tensor $\mathbf{A}$. In accordance with the transformation rules (2.71), this operator obeys the covariant rule with respect to the index $i$, which is indicated by the lower position of the coordinate index.
On the basis of the covariant derivative we can also define the contravariant one.
To this end, we formally apply the rule of component transformation (1.95)$_1$ as $(\bullet)|^i = g^{ij}\,(\bullet)|_j$. Accordingly,
$$x^j|^i = g^{ik}\,x^j|_k, \quad x_j|^i = g^{ik}\,x_j|_k,$$
$$A^{kl}|^i = g^{im}\,A^{kl}|_m, \quad A_{kl}|^i = g^{im}\,A_{kl}|_m, \quad A^k_{\ l}|^i = g^{im}\,A^k_{\ l}|_m. \tag{2.73}$$
For scalar functions the covariant derivative coincides with the partial one, while the contravariant derivative results from it by raising the index, so that
$$\Phi|_i = \Phi,_i, \quad \Phi|^i = g^{ij}\,\Phi,_j. \tag{2.74}$$
In view of (2.63)–(2.70), (2.72) and (2.74) the gradients of the functions $\Phi = \Phi\left(\theta^1, \ldots, \theta^n\right)$, $\mathbf{x} = \mathbf{x}\left(\theta^1, \ldots, \theta^n\right)$ and $\mathbf{A} = \mathbf{A}\left(\theta^1, \ldots, \theta^n\right)$ take the form
$$\operatorname{grad}\Phi = \Phi|_i\,\mathbf{g}^i = \Phi|^i\,\mathbf{g}_i,$$
$$\operatorname{grad}\mathbf{x} = x^j|_i\,\mathbf{g}_j \otimes \mathbf{g}^i = x_j|_i\,\mathbf{g}^j \otimes \mathbf{g}^i = x^j|^i\,\mathbf{g}_j \otimes \mathbf{g}_i = x_j|^i\,\mathbf{g}^j \otimes \mathbf{g}_i,$$
$$\operatorname{grad}\mathbf{A} = A^{kl}|_i\,\mathbf{g}_k \otimes \mathbf{g}_l \otimes \mathbf{g}^i = A_{kl}|_i\,\mathbf{g}^k \otimes \mathbf{g}^l \otimes \mathbf{g}^i = A^k_{\ l}|_i\,\mathbf{g}_k \otimes \mathbf{g}^l \otimes \mathbf{g}^i. \tag{2.75}$$
2.5 Christoffel Symbols, Representation of the Covariant Derivative
In this section we express the covariant derivative introduced above explicitly in terms of the vector and tensor components. To this end, we first consider the partial derivatives of the tangent vectors with respect to the coordinates. Since these derivatives are again vectors in $\mathbb{E}^n$, they can be represented with respect to the tangent vectors $\mathbf{g}_i$ or the dual vectors $\mathbf{g}^i$ $(i = 1, 2, \ldots, n)$, both forming bases of $\mathbb{E}^n$:
$$\mathbf{g}_i,_j = \Gamma_{ijk}\,\mathbf{g}^k = \Gamma_{ij}^{\ \ k}\,\mathbf{g}_k, \quad i, j = 1, 2, \ldots, n, \tag{2.76}$$
where the components $\Gamma_{ijk}$ and $\Gamma_{ij}^{\ \ k}$ $(i, j, k = 1, 2, \ldots, n)$ are referred to as the Christoffel symbols of the first and second kind, respectively. In view of the relation $\mathbf{g}^k = g^{kl}\,\mathbf{g}_l$ (1.21) they are connected by
$$\Gamma_{ij}^{\ \ k} = g^{kl}\,\Gamma_{ijl}, \quad i, j, k = 1, 2, \ldots, n. \tag{2.77}$$
Keeping the definition of the tangent vectors (2.28) in mind, we further obtain
$$\mathbf{g}_i,_j = \mathbf{r},_{ij} = \mathbf{r},_{ji} = \mathbf{g}_j,_i, \quad i, j = 1, 2, \ldots, n, \tag{2.78}$$
and consequently
$$\Gamma_{ijk} = \Gamma_{jik} = \mathbf{g}_i,_j \cdot \mathbf{g}_k, \quad \Gamma_{ij}^{\ \ k} = \Gamma_{ji}^{\ \ k} = \mathbf{g}_i,_j \cdot \mathbf{g}^k, \quad i, j, k = 1, 2, \ldots, n. \tag{2.79, 2.80}$$
For the dual basis $\mathbf{g}^i$ $(i = 1, 2, \ldots, n)$ one further gets by differentiating the identities $\mathbf{g}^i \cdot \mathbf{g}_j = \delta^i_j$ (1.15):
$$0 = \left(\delta^i_j\right),_k = \left(\mathbf{g}^i \cdot \mathbf{g}_j\right),_k = \mathbf{g}^i,_k \cdot \mathbf{g}_j + \mathbf{g}^i \cdot \mathbf{g}_j,_k = \mathbf{g}^i,_k \cdot \mathbf{g}_j + \Gamma_{jk}^{\ \ i}, \quad i, j, k = 1, 2, \ldots, n.$$
Hence,
$$\Gamma_{jk}^{\ \ i} = \Gamma_{kj}^{\ \ i} = -\mathbf{g}^i,_k \cdot \mathbf{g}_j = -\mathbf{g}^i,_j \cdot \mathbf{g}_k, \quad i, j, k = 1, 2, \ldots, n \tag{2.81}$$
and consequently
$$\mathbf{g}^i,_k = -\Gamma_{jk}^{\ \ i}\,\mathbf{g}^j = -\Gamma_{kj}^{\ \ i}\,\mathbf{g}^j, \quad i, k = 1, 2, \ldots, n. \tag{2.82}$$
By means of the identities following from (2.79),
$$g_{ij},_k = \left(\mathbf{g}_i \cdot \mathbf{g}_j\right),_k = \mathbf{g}_i,_k \cdot \mathbf{g}_j + \mathbf{g}_i \cdot \mathbf{g}_j,_k = \Gamma_{ikj} + \Gamma_{jki}, \tag{2.83}$$
where $i, j, k = 1, 2, \ldots, n$, and in view of (2.77) we finally obtain
$$\Gamma_{ijk} = \frac{1}{2}\left(g_{ki},_j + g_{kj},_i - g_{ij},_k\right), \tag{2.84}$$
$$\Gamma_{ij}^{\ \ k} = \frac{1}{2}\,g^{kl}\left(g_{li},_j + g_{lj},_i - g_{ij},_l\right), \quad i, j, k = 1, 2, \ldots, n. \tag{2.85}$$
It is seen from (2.84) and (2.85) that all Christoffel symbols identically vanish in the Cartesian coordinates (2.20). Indeed, in this case
$$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}, \quad i, j = 1, 2, \ldots, n \tag{2.86}$$
and hence
$$\Gamma_{ijk} = \Gamma_{ij}^{\ \ k} = 0, \quad i, j, k = 1, 2, \ldots, n. \tag{2.87}$$
Example 2.6. Christoffel symbols for cylindrical coordinates in $\mathbb{E}^3$ (2.17). By virtue of relation (2.30)$_1$ we realize that $g_{11},_3 = 2r$, while all other derivatives $g_{ik},_j$ $(i, j, k = 1, 2, 3)$ (2.83) are zero. Thus, Eq. (2.84) delivers
$$\Gamma_{131} = \Gamma_{311} = r, \quad \Gamma_{113} = -r, \tag{2.88}$$
while all other Christoffel symbols of the first kind vanish. With the aid of (2.77) and (2.30)$_2$ we further obtain
$$\Gamma_{ij}^{\ \ 1} = g^{11}\,\Gamma_{ij1} = \frac{1}{r^2}\,\Gamma_{ij1}, \quad \Gamma_{ij}^{\ \ 2} = g^{22}\,\Gamma_{ij2} = \Gamma_{ij2}, \quad \Gamma_{ij}^{\ \ 3} = g^{33}\,\Gamma_{ij3} = \Gamma_{ij3}, \quad i, j = 1, 2, 3. \tag{2.89}$$
By virtue of (2.88) we can further write
$$\Gamma_{13}^{\ \ 1} = \Gamma_{31}^{\ \ 1} = \frac{1}{r}, \quad \Gamma_{11}^{\ \ 3} = -r, \tag{2.90}$$
while all remaining Christoffel symbols of the second kind $\Gamma_{ij}^{\ \ k}$ $(i, j, k = 1, 2, 3)$ (2.85) vanish.
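The symbols (2.88)–(2.90) can be generated mechanically from the metric (2.30) via (2.84) and (2.85). The sketch below does exactly this in sympy; indices are zero-based in the code, so for instance Gamma2[0][2][0] corresponds to $\Gamma_{13}^{\ \ 1}$:

```python
import sympy as sp

phi, z, r = sp.symbols('phi z r', real=True, positive=True)
theta = [phi, z, r]
n = 3

# Covariant metric of cylindrical coordinates, Eq. (2.30)_1
G = sp.diag(r**2, 1, 1)
Ginv = G.inv()

# Christoffel symbols of the first kind (2.84): Gamma1[i][j][k] = Gamma_ijk
Gamma1 = [[[sp.Rational(1, 2)*(G[k, i].diff(theta[j])
                               + G[k, j].diff(theta[i])
                               - G[i, j].diff(theta[k]))
            for k in range(n)] for j in range(n)] for i in range(n)]

# Christoffel symbols of the second kind (2.85): Gamma_ij^k = g^kl Gamma_ijl
Gamma2 = [[[sp.simplify(sum(Ginv[k, l]*Gamma1[i][j][l] for l in range(n)))
            for k in range(n)] for j in range(n)] for i in range(n)]

print(Gamma2[0][2][0], Gamma2[2][0][0], Gamma2[0][0][2])  # 1/r, 1/r, -r
```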
Now, we are in a position to express the covariant derivative in terms of the vector or tensor components by means of the Christoffel symbols. For the vector-valued function $\mathbf{x} = \mathbf{x}\left(\theta^1, \theta^2, \ldots, \theta^n\right)$ we write using (2.76)
$$\mathbf{x},_j = \left(x^i\,\mathbf{g}_i\right),_j = x^i,_j\,\mathbf{g}_i + x^i\,\mathbf{g}_i,_j = x^i,_j\,\mathbf{g}_i + x^i\,\Gamma_{ij}^{\ \ k}\,\mathbf{g}_k = \left(x^i,_j + x^k\,\Gamma_{kj}^{\ \ i}\right)\mathbf{g}_i, \tag{2.91}$$
or alternatively using (2.82)
$$\mathbf{x},_j = \left(x_i\,\mathbf{g}^i\right),_j = x_i,_j\,\mathbf{g}^i - x_i\,\Gamma_{kj}^{\ \ i}\,\mathbf{g}^k = \left(x_i,_j - x_k\,\Gamma_{ij}^{\ \ k}\right)\mathbf{g}^i. \tag{2.92}$$
Comparing these results with (2.72) yields
$$x^i|_j = x^i,_j + x^k\,\Gamma_{kj}^{\ \ i}, \quad x_i|_j = x_i,_j - x_k\,\Gamma_{ij}^{\ \ k}, \quad i, j = 1, 2, \ldots, n. \tag{2.93}$$
Similarly, we treat the tensor-valued function $\mathbf{A} = \mathbf{A}\left(\theta^1, \theta^2, \ldots, \theta^n\right)$:
$$\mathbf{A},_k = \left(A^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j\right),_k = A^{ij},_k\,\mathbf{g}_i \otimes \mathbf{g}_j + A^{ij}\,\mathbf{g}_i,_k \otimes \mathbf{g}_j + A^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j,_k$$
$$= A^{ij},_k\,\mathbf{g}_i \otimes \mathbf{g}_j + A^{ij}\,\Gamma_{ik}^{\ \ l}\,\mathbf{g}_l \otimes \mathbf{g}_j + A^{ij}\,\Gamma_{jk}^{\ \ l}\,\mathbf{g}_i \otimes \mathbf{g}_l$$
$$= \left(A^{ij},_k + A^{lj}\,\Gamma_{lk}^{\ \ i} + A^{il}\,\Gamma_{lk}^{\ \ j}\right)\mathbf{g}_i \otimes \mathbf{g}_j. \tag{2.94}$$
Thus,
$$A^{ij}|_k = A^{ij},_k + A^{lj}\,\Gamma_{lk}^{\ \ i} + A^{il}\,\Gamma_{lk}^{\ \ j}, \quad i, j, k = 1, 2, \ldots, n. \tag{2.95}$$
By analogy, we further obtain
$$A_{ij}|_k = A_{ij},_k - A_{lj}\,\Gamma_{ik}^{\ \ l} - A_{il}\,\Gamma_{jk}^{\ \ l},$$
$$A^i_{\ j}|_k = A^i_{\ j},_k + A^l_{\ j}\,\Gamma_{lk}^{\ \ i} - A^i_{\ l}\,\Gamma_{jk}^{\ \ l}, \quad i, j, k = 1, 2, \ldots, n. \tag{2.96}$$
Similar expressions for the covariant derivative can also be formulated for tensors of higher orders.
From (2.87), (2.93), (2.95) and (2.96) it is seen that the covariant derivative taken in Cartesian coordinates (2.20) coincides with the partial derivative:
$$x^i|_j = x^i,_j, \quad x_i|_j = x_i,_j, \quad A^{ij}|_k = A^{ij},_k, \quad A_{ij}|_k = A_{ij},_k, \quad A^i_{\ j}|_k = A^i_{\ j},_k, \quad i, j, k = 1, 2, \ldots, n. \tag{2.97}$$
Formal application of the covariant derivative (2.93), (2.95) and (2.96) to the tangent vectors and the metric coefficients leads to interesting identities. For the tangent vectors,
$$\mathbf{g}_i|_j = \mathbf{g}_i,_j - \Gamma_{ij}^{\ \ l}\,\mathbf{g}_l = \mathbf{0}, \quad \mathbf{g}^i|_j = \mathbf{g}^i,_j + \Gamma_{lj}^{\ \ i}\,\mathbf{g}^l = \mathbf{0}, \quad i, j = 1, 2, \ldots, n, \tag{2.98}$$
which follow immediately from (2.76) and (2.82).
For the metric coefficients one obtains in the same manner
$$g_{ij}|_k = g_{ij},_k - g_{lj}\,\Gamma_{ik}^{\ \ l} - g_{il}\,\Gamma_{jk}^{\ \ l} = 0, \quad g^{ij}|_k = g^{ij},_k + g^{lj}\,\Gamma_{lk}^{\ \ i} + g^{il}\,\Gamma_{lk}^{\ \ j} = 0, \quad i, j, k = 1, 2, \ldots, n. \tag{2.99, 2.100}$$
The latter identities are known as Ricci's Theorem. They can also be derived by considering (1.25)$_2$ and applying the following product rules of differentiation for the covariant derivative (Exercise 2.7):
$$A_{ij}|_k = a_i|_k\,b_j + a_i\,b_j|_k \quad \text{for } A_{ij} = a_i b_j, \tag{2.101}$$
$$A^{ij}|_k = a^i|_k\,b^j + a^i\,b^j|_k \quad \text{for } A^{ij} = a^i b^j, \tag{2.102}$$
$$A^i_{\ j}|_k = a^i|_k\,b_j + a^i\,b_j|_k \quad \text{for } A^i_{\ j} = a^i b_j, \quad i, j, k = 1, 2, \ldots, n. \tag{2.103}$$
2.6 Applications in Three-Dimensional Space: Divergence and Curl
Divergence of a tensor field. One defines the divergence of a tensor field $\mathbf{S}(\mathbf{r})$ by
$$\operatorname{div}\mathbf{S} = \lim_{V \to 0}\frac{1}{V}\int_A \mathbf{S}\,\mathbf{n}\,\mathrm{d}A, \tag{2.104}$$
where the integration is carried out over a closed surface with area $A$ enclosing the volume $V$, and $\mathbf{n}$ denotes the outer unit normal vector (see Fig. 2.4).
For the evaluation of this integral we consider a curvilinear parallelepiped whose edges are formed by the coordinate lines (see Fig. 2.5). The infinitesimal surface elements of the parallelepiped can be defined in vector form by
$$\mathrm{d}\mathbf{A}^{(i)} = \pm\left(\mathrm{d}\theta^j\,\mathbf{g}_j\right) \times \left(\mathrm{d}\theta^k\,\mathbf{g}_k\right) = \pm g\,\mathbf{g}^i\,\mathrm{d}\theta^j\,\mathrm{d}\theta^k, \tag{2.105}$$
where $g = \left[\mathbf{g}_1\,\mathbf{g}_2\,\mathbf{g}_3\right]$ and $(i, j, k)$ is an even permutation of $(1, 2, 3)$. The corresponding infinitesimal volume element reads (no summation over $i$)
$$\mathrm{d}V = \mathrm{d}\mathbf{A}^{(i)} \cdot \left(\mathrm{d}\theta^i\,\mathbf{g}_i\right) = g\,\mathrm{d}\theta^1\,\mathrm{d}\theta^2\,\mathrm{d}\theta^3. \tag{2.106}$$
We also need the identities
Fig. 2.4 Definition of the divergence: closed surface with the area A, volume V and the outer unit normal vector n
Fig. 2.5 Derivation of the divergence in three-dimensional space

$$g,_k = \left[\mathbf{g}_1\,\mathbf{g}_2\,\mathbf{g}_3\right],_k = \Gamma_{1k}^{\ \ l}\left[\mathbf{g}_l\,\mathbf{g}_2\,\mathbf{g}_3\right] + \Gamma_{2k}^{\ \ l}\left[\mathbf{g}_1\,\mathbf{g}_l\,\mathbf{g}_3\right] + \Gamma_{3k}^{\ \ l}\left[\mathbf{g}_1\,\mathbf{g}_2\,\mathbf{g}_l\right] = \Gamma_{lk}^{\ \ l}\,g, \quad k = 1, 2, 3, \tag{2.107}$$
$$\left(g\,\mathbf{g}^i\right),_i = g,_i\,\mathbf{g}^i + g\,\mathbf{g}^i,_i = \Gamma_{li}^{\ \ l}\,g\,\mathbf{g}^i - \Gamma_{li}^{\ \ i}\,g\,\mathbf{g}^l = \mathbf{0}, \tag{2.108}$$
following from (1.39), (2.76) and (2.82). With these results in hand, one can express the divergence (2.104) as follows.
Keeping (2.105) and (2.106) in mind and using the abbreviation
$$\mathbf{s}^i = g\,\mathbf{S}\,\mathbf{g}^i, \quad i = 1, 2, 3, \tag{2.109}$$
we can thus write
$$\operatorname{div}\mathbf{S} = \lim_{V \to 0}\frac{1}{V}\int_V \frac{\mathbf{s}^i,_i}{g}\,\mathrm{d}V, \tag{2.110}$$
where $(i, j, k)$ is again an even permutation of $(1, 2, 3)$. Assuming continuity of the integrand in (2.110) and applying (2.108) and (2.109), we obtain
$$\operatorname{div}\mathbf{S} = \frac{1}{g}\,\mathbf{s}^i,_i = \frac{1}{g}\left(\mathbf{S}\,g\,\mathbf{g}^i\right),_i = \frac{1}{g}\,\mathbf{S},_i\,g\,\mathbf{g}^i + \frac{1}{g}\,\mathbf{S}\left(g\,\mathbf{g}^i\right),_i = \mathbf{S},_i\,\mathbf{g}^i, \tag{2.111}$$
which finally yields by virtue of (2.72)$_2$
$$\operatorname{div}\mathbf{S} = \mathbf{S},_i\,\mathbf{g}^i = S^{ji}|_i\,\mathbf{g}_j = S_j^{\ i}|_i\,\mathbf{g}^j. \tag{2.112}$$
Example 2.7. The momentum balance in Cartesian and cylindrical coordinates. Let us consider a material body or a part of it with mass $m$, volume $V$ and outer surface $A$. According to the Euler law of motion, the vector sum of external volume forces and surface tractions equals the vector sum of inertia forces:
$$\int_A \mathbf{t}\,\mathrm{d}A + \int_V \mathbf{f}\,\mathrm{d}V = \int_m \ddot{\mathbf{x}}\,\mathrm{d}m, \tag{2.113}$$
where $\mathbf{x}$ stands for the position vector of a material element $\mathrm{d}m$, $\mathbf{t}$ for the surface traction, $\mathbf{f}$ for the volume force density, and the superposed dot denotes the material time derivative.
Applying the Cauchy theorem (1.77) $\mathbf{t} = \boldsymbol{\sigma}\mathbf{n}$ to the surface integral and using the identity $\mathrm{d}m = \rho\,\mathrm{d}V$, it further delivers
$$\int_V \rho\,\ddot{\mathbf{x}}\,\mathrm{d}V = \int_A \boldsymbol{\sigma}\,\mathbf{n}\,\mathrm{d}A + \int_V \mathbf{f}\,\mathrm{d}V, \tag{2.114}$$
where $\rho$ denotes the density of the material. Dividing this equation by $V$ and considering the limit case $V \to 0$, we obtain by virtue of (2.104)
$$\rho\,\ddot{\mathbf{x}} = \operatorname{div}\boldsymbol{\sigma} + \mathbf{f}. \tag{2.115}$$
This vector equation is referred to as the momentum balance.
Representing the vector and tensor variables with respect to the tangent vectors $\mathbf{g}_i$ $(i = 1, 2, 3)$ of an arbitrary curvilinear coordinate system as
$$\ddot{\mathbf{x}} = a^i\,\mathbf{g}_i, \quad \boldsymbol{\sigma} = \sigma^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j, \quad \mathbf{f} = f^i\,\mathbf{g}_i,$$
and expressing the divergence of the Cauchy stress tensor by (2.112), we obtain the component form of the momentum balance (2.115) as
$$\rho\,a^i = \sigma^{ij}|_j + f^i, \quad i = 1, 2, 3. \tag{2.116}$$
With the aid of (2.95), the covariant derivative of the Cauchy stress tensor takes the form
$$\sigma^{ij}|_k = \sigma^{ij},_k + \sigma^{lj}\,\Gamma_{lk}^{\ \ i} + \sigma^{il}\,\Gamma_{lk}^{\ \ j}, \quad i, j, k = 1, 2, 3, \tag{2.117}$$
so that, in particular,
$$\sigma^{ij}|_j = \sigma^{ij},_j + \sigma^{lj}\,\Gamma_{lj}^{\ \ i} + \sigma^{il}\,\Gamma_{lj}^{\ \ j}, \quad i = 1, 2, 3. \tag{2.118}$$
By virtue of the expressions for the Christoffel symbols (2.90) and keeping in mind the symmetry of the Cauchy stress tensor $\left(\sigma^{ij} = \sigma^{ji},\ i \neq j = 1, 2, 3\right)$, we thus obtain for cylindrical coordinates
$$\sigma^{1j}|_j = \sigma^{11},_\varphi + \sigma^{12},_z + \sigma^{13},_r + \frac{3\sigma^{31}}{r}, \quad \sigma^{2j}|_j = \sigma^{21},_\varphi + \sigma^{22},_z + \sigma^{23},_r + \frac{\sigma^{32}}{r}, \quad \sigma^{3j}|_j = \sigma^{31},_\varphi + \sigma^{32},_z + \sigma^{33},_r - r\,\sigma^{11} + \frac{\sigma^{33}}{r}. \tag{2.119}$$
The balance equations finally take the form
$$\rho\,a^1 = \sigma^{11},_\varphi + \sigma^{12},_z + \sigma^{13},_r + \frac{3\sigma^{31}}{r} + f^1,$$
$$\rho\,a^2 = \sigma^{21},_\varphi + \sigma^{22},_z + \sigma^{23},_r + \frac{\sigma^{32}}{r} + f^2,$$
$$\rho\,a^3 = \sigma^{31},_\varphi + \sigma^{32},_z + \sigma^{33},_r - r\,\sigma^{11} + \frac{\sigma^{33}}{r} + f^3. \tag{2.120}$$
In Cartesian coordinates, where $\mathbf{g}_i = \mathbf{e}_i$ $(i = 1, 2, 3)$, the covariant derivative coincides with the partial one, so that
$$\sigma^{ij}|_j = \sigma^{ij},_j = \frac{\partial \sigma^{ij}}{\partial x^j}. \tag{2.121}$$
Thus, the balance equations reduce to
$$\rho\,\ddot{x}^1 = \sigma^{11},_1 + \sigma^{12},_2 + \sigma^{13},_3 + f^1,$$
$$\rho\,\ddot{x}^2 = \sigma^{21},_1 + \sigma^{22},_2 + \sigma^{23},_3 + f^2,$$
$$\rho\,\ddot{x}^3 = \sigma^{31},_1 + \sigma^{32},_2 + \sigma^{33},_3 + f^3, \tag{2.122}$$
where $\ddot{x}^i = a^i$ $(i = 1, 2, 3)$.
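The cylindrical form (2.120) can be reproduced mechanically from (2.118) with the Christoffel symbols (2.90). The following sympy sketch treats the contravariant stress components as generic functions of $(\varphi, z, r)$, enforces the symmetry of $\boldsymbol{\sigma}$ through the naming, and prints $\sigma^{ij}|_j$ for $i = 1, 2, 3$ (function and variable names are our own):

```python
import sympy as sp

phi, z, r = sp.symbols('phi z r', real=True, positive=True)
theta = [phi, z, r]
n = 3

# Symmetric contravariant stress components as generic functions of (phi, z, r)
s = [[sp.Function(f'sigma{min(i, j) + 1}{max(i, j) + 1}')(phi, z, r)
      for j in range(n)] for i in range(n)]

# Non-zero Christoffel symbols of the second kind, Eq. (2.90)
Gamma = [[[sp.S.Zero]*n for _ in range(n)] for _ in range(n)]
Gamma[0][2][0] = Gamma[2][0][0] = 1/r   # Gamma_13^1 = Gamma_31^1 = 1/r
Gamma[0][0][2] = -r                     # Gamma_11^3 = -r

# sigma^ij|_j per (2.118): sigma^ij,_j + sigma^lj Gamma_lj^i + sigma^il Gamma_lj^j
for i in range(n):
    div_i = sum(s[i][j].diff(theta[j]) for j in range(n)) \
          + sum(s[l][j]*Gamma[l][j][i] for l in range(n) for j in range(n)) \
          + sum(s[i][l]*Gamma[l][j][j] for l in range(n) for j in range(n))
    print(f'rho*a^{i + 1} - f^{i + 1} =', sp.simplify(div_i))
```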
Divergence and curl of a vector field. Now, we consider a differentiable vector field $\mathbf{t}\left(\theta^1, \theta^2, \theta^3\right)$. One defines the divergence and the curl of $\mathbf{t}$ respectively by
$$\operatorname{div}\mathbf{t} = \lim_{V \to 0}\frac{1}{V}\int_A \mathbf{t} \cdot \mathbf{n}\,\mathrm{d}A, \tag{2.123}$$
$$\operatorname{curl}\mathbf{t} = \lim_{V \to 0}\frac{1}{V}\int_A \left(\mathbf{n} \times \mathbf{t}\right)\mathrm{d}A, \tag{2.124}$$
where the integration is again carried out over a closed surface with area $A$, volume $V$ and outer unit normal vector $\mathbf{n}$. Considering (1.66) and the definition of the divergence (2.104), the curl can also be represented by
$$\operatorname{curl}\mathbf{t} = -\lim_{V \to 0}\frac{1}{V}\int_A \hat{\mathbf{t}}\,\mathbf{n}\,\mathrm{d}A = -\operatorname{div}\hat{\mathbf{t}}, \tag{2.125}$$
where $\hat{\mathbf{t}}$ denotes the skew-symmetric tensor with the axial vector $\mathbf{t}$.
Treating the vector field in the same manner as the tensor field we can write
$$\operatorname{div}\mathbf{t} = \mathbf{t},_i \cdot \mathbf{g}^i = t^i|_i \tag{2.126}$$
and in view of (2.75)$_2$ (see also Exercise 1.44)
$$\operatorname{div}\mathbf{t} = \operatorname{tr}\left(\operatorname{grad}\mathbf{t}\right). \tag{2.127}$$
The same procedure applied to the curl (2.124) leads to
$$\operatorname{curl}\mathbf{t} = \mathbf{g}^i \times \mathbf{t},_i. \tag{2.128}$$
By virtue of (2.72)$_1$ and (1.44) we further obtain (see also Exercise 2.8)
$$\operatorname{curl}\mathbf{t} = t_i|_j\,\mathbf{g}^j \times \mathbf{g}^i = e^{jik}\,\frac{1}{g}\,t_i|_j\,\mathbf{g}_k. \tag{2.129}$$
With respect to the Cartesian coordinates (2.20), where $\mathbf{g}_i = \mathbf{e}_i$ $(i = 1, 2, 3)$, the divergence (2.126) and the curl (2.129) simplify to
$$\operatorname{div}\mathbf{t} = t^i,_i = t^1,_1 + t^2,_2 + t^3,_3 = t_1,_1 + t_2,_2 + t_3,_3, \tag{2.130}$$
$$\operatorname{curl}\mathbf{t} = e^{jik}\,t_i,_j\,\mathbf{e}_k = \left(t_3,_2 - t_2,_3\right)\mathbf{e}_1 + \left(t_1,_3 - t_3,_1\right)\mathbf{e}_2 + \left(t_2,_1 - t_1,_2\right)\mathbf{e}_3. \tag{2.131}$$
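Formulas (2.130) and (2.131) agree with the built-in operators of the sympy.vector module, which offers a convenient sanity check for hand computations; the sample field below is arbitrary:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

C = CoordSys3D('C')  # Cartesian frame with base vectors C.i, C.j, C.k

# Arbitrary sample vector field t(x1, x2, x3)
t = C.x*C.y*C.i + C.y*C.z**2*C.j + sp.sin(C.x)*C.k

print(divergence(t))  # t^1,_1 + t^2,_2 + t^3,_3 = C.y + C.z**2
print(curl(t))        # components (t3,2 - t2,3, t1,3 - t3,1, t2,1 - t1,2)
```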
Now, we are going to discuss some combined operations involving the gradient, divergence, curl, tensor mappings and products of various types (see also Exercise 2.12).
1. Curl of a gradient:
$$\operatorname{curl}\operatorname{grad}\Phi = \mathbf{0}. \tag{2.132}$$
2. Divergence of a curl:
$$\operatorname{div}\operatorname{curl}\mathbf{t} = 0. \tag{2.133}$$
3. Divergence of a vector product:
$$\operatorname{div}\left(\mathbf{u} \times \mathbf{v}\right) = \mathbf{v} \cdot \operatorname{curl}\mathbf{u} - \mathbf{u} \cdot \operatorname{curl}\mathbf{v}. \tag{2.134}$$
4. Gradient of a divergence:
$$\operatorname{grad}\operatorname{div}\mathbf{t} = \operatorname{div}\left(\operatorname{grad}\mathbf{t}\right)^{\mathrm{T}}, \tag{2.135}$$
$$\operatorname{grad}\operatorname{div}\mathbf{t} = \operatorname{curl}\operatorname{curl}\mathbf{t} + \operatorname{div}\operatorname{grad}\mathbf{t} = \operatorname{curl}\operatorname{curl}\mathbf{t} + \Delta\mathbf{t}, \tag{2.136}$$
where the combined operator $\Delta\mathbf{t} = \operatorname{div}\operatorname{grad}\mathbf{t}$ is known as the Laplacian.
5. Skew-symmetric part of a gradient:
$$\operatorname{skew}\left(\operatorname{grad}\mathbf{t}\right) = \frac{1}{2}\,\widehat{\operatorname{curl}\mathbf{t}}. \tag{2.137}$$
6. Divergence of a (left) mapping:
$$\operatorname{div}\left(\mathbf{t}\mathbf{A}\right) = \mathbf{A} : \operatorname{grad}\mathbf{t} + \mathbf{t} \cdot \operatorname{div}\mathbf{A}. \tag{2.138}$$
7. Divergence of a product of a scalar-valued function and a vector-valued function:
$$\operatorname{div}\left(\Phi\,\mathbf{t}\right) = \mathbf{t} \cdot \operatorname{grad}\Phi + \Phi\operatorname{div}\mathbf{t}. \tag{2.139}$$
8. Divergence of a product of a scalar-valued function and a tensor-valued function:
$$\operatorname{div}\left(\Phi\,\mathbf{A}\right) = \mathbf{A}\operatorname{grad}\Phi + \Phi\operatorname{div}\mathbf{A}. \tag{2.140}$$
We prove, for example, identity (2.132). To this end, we apply (2.75)$_1$, (2.82) and (2.128). Thus, we write
$$\operatorname{curl}\operatorname{grad}\Phi = \mathbf{g}^j \times \left(\Phi|_i\,\mathbf{g}^i\right),_j = \Phi,_{ij}\,\mathbf{g}^j \times \mathbf{g}^i - \Phi,_i\,\Gamma_{kj}^{\ \ i}\,\mathbf{g}^j \times \mathbf{g}^k = \mathbf{0}, \tag{2.141}$$
taking into account that $\Phi,_{ij} = \Phi,_{ji}$, $\Gamma_{ij}^{\ \ l} = \Gamma_{ji}^{\ \ l}$ and $\mathbf{g}^i \times \mathbf{g}^j = -\mathbf{g}^j \times \mathbf{g}^i$ $(i \neq j;\ i, j = 1, 2, 3)$.
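Identities (2.132) and (2.133) can likewise be spot-checked symbolically in Cartesian coordinates; any sufficiently smooth sample fields will do:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

C = CoordSys3D('C')

Phi = C.x**2*sp.sin(C.y) + C.z*sp.exp(C.x)        # sample scalar field
t = C.y*C.z*C.i + C.x**3*C.j + sp.cos(C.y)*C.k    # sample vector field

print(curl(gradient(Phi)))   # 0 vector, identity (2.132)
print(divergence(curl(t)))   # 0, identity (2.133)
```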
Example 2.8. The balance of mechanical energy as a consequence of the momentum balance. Scalar multiplication of the momentum balance (2.115) by the velocity vector $\mathbf{v} = \dot{\mathbf{x}}$ yields
$$\rho\,\ddot{\mathbf{x}} \cdot \mathbf{v} = \mathbf{v} \cdot \operatorname{div}\boldsymbol{\sigma} + \mathbf{v} \cdot \mathbf{f}.$$
Using identity (2.138) we can rewrite this relation as
$$\rho\,\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{2}\,\mathbf{v} \cdot \mathbf{v}\right) + \boldsymbol{\sigma} : \operatorname{grad}\mathbf{v} = \operatorname{div}\left(\mathbf{v}\boldsymbol{\sigma}\right) + \mathbf{v} \cdot \mathbf{f}.$$
Integrating over the volume of the body $V$, applying the definition of the divergence (2.104) and the Cauchy theorem $\mathbf{t} = \boldsymbol{\sigma}\mathbf{n}$ to the divergence term, we finally obtain
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_V \frac{1}{2}\,\rho\,\mathbf{v} \cdot \mathbf{v}\,\mathrm{d}V + \int_V \boldsymbol{\sigma} : \operatorname{grad}\mathbf{v}\,\mathrm{d}V = \int_A \mathbf{t} \cdot \mathbf{v}\,\mathrm{d}A + \int_V \mathbf{v} \cdot \mathbf{f}\,\mathrm{d}V. \tag{2.142}$$
Relation (2.142) expresses the balance of mechanical energy: the two integrals on the left-hand side represent the time rate of the kinetic energy and the stress power, while the right-hand side describes the power of the external forces, namely of the traction acting on the boundary of the body and of the volume forces acting inside it.
Example 2.9. Axial vector of the spin tensor. The spin tensor is defined as the skew-symmetric part of the velocity gradient by
$$\mathbf{w} = \operatorname{skew}\left(\operatorname{grad}\mathbf{v}\right). \tag{2.143}$$
By virtue of (1.158) we can represent it in terms of its axial vector $\bar{\mathbf{w}}$ as
$$\mathbf{w} = \hat{\bar{\mathbf{w}}}, \tag{2.144}$$
which in view of (2.137) takes the form
$$\bar{\mathbf{w}} = \frac{1}{2}\operatorname{curl}\mathbf{v}. \tag{2.145}$$
Example 2.10. Navier-Stokes equations for a linear-viscous fluid in Cartesian and cylindrical coordinates. A linear-viscous fluid (also called Newtonian fluid or Navier-Poisson fluid) is defined by the constitutive equation
$$\boldsymbol{\sigma} = -p\,\mathbf{I} + 2\eta\,\mathbf{d} + \lambda\left(\operatorname{tr}\mathbf{d}\right)\mathbf{I}, \tag{2.146}$$
where
$$\mathbf{d} = \operatorname{sym}\left(\operatorname{grad}\mathbf{v}\right) = \frac{1}{2}\left[\operatorname{grad}\mathbf{v} + \left(\operatorname{grad}\mathbf{v}\right)^{\mathrm{T}}\right] \tag{2.147}$$
denotes the rate of deformation tensor, $p$ the hydrostatic pressure, and $\eta$ and $\lambda$ material constants referred to as the shear viscosity and the second viscosity coefficient, respectively. Inserting (2.147) into (2.146) and taking (2.127) into account delivers
$$\boldsymbol{\sigma} = -p\,\mathbf{I} + \eta\left[\operatorname{grad}\mathbf{v} + \left(\operatorname{grad}\mathbf{v}\right)^{\mathrm{T}}\right] + \lambda\left(\operatorname{div}\mathbf{v}\right)\mathbf{I}. \tag{2.148}$$
Substituting this expression into the momentum balance (2.115) and using (2.135) and (2.140), we obtain the relation
$$\rho\,\dot{\mathbf{v}} = -\operatorname{grad}p + \eta\operatorname{div}\operatorname{grad}\mathbf{v} + \left(\eta + \lambda\right)\operatorname{grad}\operatorname{div}\mathbf{v} + \mathbf{f}, \tag{2.149}$$
referred to as the Navier-Stokes equation. By means of (2.136) it can be rewritten as
$$\rho\,\dot{\mathbf{v}} = -\operatorname{grad}p + \left(2\eta + \lambda\right)\operatorname{grad}\operatorname{div}\mathbf{v} - \eta\operatorname{curl}\operatorname{curl}\mathbf{v} + \mathbf{f}. \tag{2.150}$$
For an incompressible fluid characterized by the kinematic condition $\operatorname{tr}\mathbf{d} = \operatorname{div}\mathbf{v} = 0$, the latter two equations simplify to
$$\rho\,\dot{\mathbf{v}} = -\operatorname{grad}p + \eta\,\Delta\mathbf{v} + \mathbf{f}, \tag{2.151}$$
$$\rho\,\dot{\mathbf{v}} = -\operatorname{grad}p - \eta\operatorname{curl}\operatorname{curl}\mathbf{v} + \mathbf{f}. \tag{2.152}$$
With the aid of the identity $\Delta\mathbf{v} = \mathbf{v},_i|^i$ (see Exercise 2.14) we thus can write
$$\rho\,\dot{\mathbf{v}} = -\operatorname{grad}p + \eta\,\mathbf{v},_i|^i + \mathbf{f}. \tag{2.153}$$
In Cartesian coordinates this relation is thus written out as
$$\rho\,\dot{v}^i = -p,_i + \eta\left(v^i,_{11} + v^i,_{22} + v^i,_{33}\right) + f^i, \quad i = 1, 2, 3. \tag{2.154}$$
For arbitrary curvilinear coordinates the vector Laplacian $\Delta\mathbf{v}$ takes the component form derived in Exercise 2.16. For cylindrical coordinates this representation specializes to an expression (2.155) containing, besides the second derivatives of the velocity components with respect to $\varphi$, $z$ and $r$, additional lower-order terms that account for the curvilinearity of the coordinate system.
Inserting this result into (2.151) and using the representations $\dot{\mathbf{v}} = \dot{v}^i\,\mathbf{g}_i$ and $\mathbf{f} = f^i\,\mathbf{g}_i$, we finally obtain the component form of the Navier-Stokes equation in cylindrical coordinates (2.156).
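The identity (2.136), which underlies the passage from (2.149) to (2.150), can be verified componentwise in Cartesian coordinates; in the sketch below the Laplacian is applied to each component of an arbitrary sample field:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

C = CoordSys3D('C')
v = C.x**2*C.y*C.i + C.y*C.z*C.j + sp.sin(C.x*C.z)*C.k  # sample velocity field

# Componentwise Laplacian: Delta v = div grad v
comps = v.to_matrix(C)
lap_comps = [sum(c.diff(q, 2) for q in (C.x, C.y, C.z)) for c in comps]
lap = lap_comps[0]*C.i + lap_comps[1]*C.j + lap_comps[2]*C.k

# grad div v = curl curl v + Delta v, Eq. (2.136)
lhs = gradient(divergence(v))
rhs = curl(curl(v)) + lap
print(sp.simplify((lhs - rhs).to_matrix(C)))  # zero vector
```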
Exercises

2.1 Evaluate tangent vectors, metric coefficients and the dual basis of spherical coordinates in $\mathbb{E}^3$ defined by (Fig. 2.6)
$$\mathbf{r}(\varphi, \phi, r) = r\sin\varphi\sin\phi\,\mathbf{e}_1 + r\cos\phi\,\mathbf{e}_2 + r\cos\varphi\sin\phi\,\mathbf{e}_3. \tag{2.157}$$
2.2 Evaluate the coefficients $\dfrac{\partial \bar{\theta}^i}{\partial \theta^k}$ (2.43) for the transformation of the linear coordinates into the spherical ones and vice versa.
Fig. 2.6 Spherical coordinates in three-dimensional space
2.3 Evaluate the gradients of the following functions of $\mathbf{r}$:
(a) $\dfrac{1}{\|\mathbf{r}\|}$, (b) $\mathbf{r} \cdot \mathbf{w}$, (c) $\mathbf{r}\mathbf{A}\mathbf{r}$, (d) $\mathbf{A}\mathbf{r}$, (e) $\mathbf{w} \times \mathbf{r}$,
where $\mathbf{w}$ and $\mathbf{A}$ are some constant vector and tensor, respectively.
2.4 Evaluate the Christoffel symbols of the first and second kind for spherical coordinates (2.157).
2.6 Prove identities (2.99) and (2.100) by using (1.91).
2.7 Prove the product rules of differentiation for the covariant derivative (2.101)–(2.103).
2.8 Verify relation (2.129) applying (2.112), (2.125) and using the results of Exercise 1.23.
2.9 Write out the balance equations (2.116) in spherical coordinates (2.157).
2.10 Evaluate tangent vectors, metric coefficients, the dual basis and Christoffel symbols for cylindrical surface coordinates defined by
$$\mathbf{r}(r, s, z) = r\cos\frac{s}{r}\,\mathbf{e}_1 + r\sin\frac{s}{r}\,\mathbf{e}_2 + z\,\mathbf{e}_3. \tag{2.158}$$
2.11 Write out the balance equations (2.116) in cylindrical surface coordi- nates (2.158).
2.13 Write out the gradient, divergence and curl of a vector fieldt.r/in cylindrical and spherical coordinates (2.17) and (2.157), respectively.
2.14 Prove that the Laplacian of a vector-valued function $\mathbf{t}(\mathbf{r})$ can be given by $\Delta\mathbf{t} = \mathbf{t},_i|^i$. Specify this identity for Cartesian coordinates.
2.15 Write out the Laplacian $\Delta\Phi$ of a scalar field $\Phi(\mathbf{r})$ in cylindrical and spherical coordinates (2.17) and (2.157), respectively.
2.16 Write out the Laplacian of a vector field $\mathbf{t}(\mathbf{r})$ in component form in an arbitrary curvilinear coordinate system. Specify the result for spherical coordinates (2.157).
Curves and Surfaces in Three-Dimensional Euclidean Space