
Tensor Algebra and Tensor Analysis for Engineers with Applications to Continuum Mechanics, Second Edition (PDF)




DOCUMENT INFORMATION

Basic information

Title: Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics
Author: Mikhail Itskov
Supervisor: Prof. Dr.-Ing. Mikhail Itskov
Institution: RWTH Aachen University
Field: Continuum Mechanics
Document type: book
Year of publication: 2007
City: Aachen
Format: PDF
Pages: 263
File size: 1.96 MB

Structure

  • Cover

  • Preface to the Second Edition

  • Preface to the First Edition

  • Contents

  • 1 Vectors and Tensors in a Finite-Dimensional Space

    • 1.1 Notion of the Vector Space

    • 1.2 Basis and Dimension of the Vector Space

    • 1.3 Components of a Vector, Summation Convention

    • 1.4 Scalar Product, Euclidean Space, Orthonormal Basis

    • 1.5 Dual Bases

    • 1.6 Second-Order Tensor as a Linear Mapping

    • 1.7 Tensor Product, Representation of a Tensor with Respect to a Basis

    • 1.8 Change of the Basis, Transformation Rules

    • 1.9 Special Operations with Second-Order Tensors

    • 1.10 Scalar Product of Second-Order Tensors

    • 1.11 Decompositions of Second-Order Tensors

    • 1.12 Tensors of Higher Orders

    • Exercises

  • 2 Vector and Tensor Analysis in Euclidean Space

    • 2.1 Vector- and Tensor-Valued Functions, Differential Calculus

    • 2.2 Coordinates in Euclidean Space, Tangent Vectors

    • 2.3 Coordinate Transformation. Co-, Contra- and Mixed Variant Components

    • 2.4 Gradient, Covariant and Contravariant Derivatives

    • 2.5 Christoffel Symbols, Representation of the Covariant Derivative

    • 2.6 Applications in Three-Dimensional Space: Divergence and Curl

    • Exercises

  • 3 Curves and Surfaces in Three-Dimensional Euclidean Space

    • 3.1 Curves in Three-Dimensional Euclidean Space

    • 3.2 Surfaces in Three-Dimensional Euclidean Space

    • 3.3 Application to Shell Theory

    • Exercises

  • 4 Eigenvalue Problem and Spectral Decomposition of Second-Order Tensors

    • 4.1 Complexification

    • 4.2 Eigenvalue Problem, Eigenvalues and Eigenvectors

    • 4.3 Characteristic Polynomial

    • 4.4 Spectral Decomposition and Eigenprojections

    • 4.5 Spectral Decomposition of Symmetric Second-Order Tensors

    • 4.6 Spectral Decomposition of Orthogonal and Skew-Symmetric Second-Order Tensors

    • 4.7 Cayley-Hamilton Theorem

    • Exercises

  • 5 Fourth-Order Tensors

    • 5.1 Fourth-Order Tensors as a Linear Mapping

    • 5.2 Tensor Products, Representation of Fourth-Order Tensors with Respect to a Basis

    • 5.3 Special Operations with Fourth-Order Tensors

    • 5.4 Super-Symmetric Fourth-Order Tensors

    • 5.5 Special Fourth-Order Tensors

    • Exercises

  • 6 Analysis of Tensor Functions

    • 6.1 Scalar-Valued Isotropic Tensor Functions

    • 6.2 Scalar-Valued Anisotropic Tensor Functions

    • 6.3 Derivatives of Scalar-Valued Tensor Functions

    • 6.4 Tensor-Valued Isotropic and Anisotropic Tensor Functions

    • 6.5 Derivatives of Tensor-Valued Tensor Functions

    • 6.6 Generalized Rivlin’s Identities

    • Exercises

  • 7 Analytic Tensor Functions

    • 7.1 Introduction

    • 7.2 Closed-Form Representation for Analytic Tensor Functions and Their Derivatives

    • 7.3 Special Case: Diagonalizable Tensor Functions

    • 7.4 Special Case: Three-Dimensional Space

    • 7.5 Recurrent Calculation of Tensor Power Series and Their Derivatives

    • Exercises

  • 8 Applications to Continuum Mechanics

    • 8.1 Polar Decomposition of the Deformation Gradient

    • 8.2 Basis-Free Representations for the Stretch and Rotation Tensor

    • 8.3 The Derivative of the Stretch and Rotation Tensor with Respect to the Deformation Gradient

    • 8.4 Time Rate of Generalized Strains

    • 8.5 Stress Conjugate to a Generalized Strain

    • 8.6 Finite Plasticity Based on the Additive Decomposition of Generalized Strains

    • Exercises

  • Solutions

  • References

  • Index

Content

1.1 Notion of the Vector Space

We start with the definition of the vector space over the field of real numbers.

Definition 1.1. A vector space is a set V of elements called vectors satisfying the following axioms.

A. To every pair x and y of vectors in V there corresponds a vector x + y, called the sum of x and y, such that

(A.1) \(x + y = y + x\) (addition is commutative),

(A.2) \(x + (y + z) = (x + y) + z\) (addition is associative),

(A.3) there exists in V a unique zero vector 0, such that \(0 + x = x\), \(\forall x \in V\),

(A.4) to every vector x in V there corresponds a unique vector \(-x\) such that \(x + (-x) = 0\).

B. To every pair \(\alpha\) and x, where \(\alpha\) is a scalar real number and x is a vector in V, there corresponds a vector \(\alpha x\), called the product of \(\alpha\) and x, such that

(B.1) \(\alpha(\beta x) = (\alpha\beta) x\) (multiplication by scalars is associative),

(B.2) \(1 x = x\),

(B.3) \(\alpha(x + y) = \alpha x + \alpha y\) (multiplication by scalars is distributive with respect to vector addition),

(B.4) \((\alpha + \beta) x = \alpha x + \beta x\) (multiplication by scalars is distributive with respect to scalar addition), \(\forall \alpha, \beta \in \mathbb{R}\), \(\forall x, y \in V\).

Examples of vector spaces.

1) The set of all real numbers \(\mathbb{R}\).

Fig. 1.1. Geometric illustration of vector axioms in two dimensions (zero vector, vector addition).

2) The set of all directional arrows in two or three dimensions. Together with the usual rules of summation and multiplication by a scalar, and with the concepts of the negative and zero vectors, one easily verifies that the directional arrows satisfy the above axioms.

3) The set of all n-tuples of real numbers \(\mathbb{R}^n\): \(a = (a_1, a_2, \ldots, a_n)\). Indeed, the axioms (A) and (B) apply to the n-tuples if one defines addition, multiplication by a scalar and finally the zero tuple, respectively, by

\(a + b = (a_1 + b_1, a_2 + b_2, \ldots, a_n + b_n), \quad \alpha a = (\alpha a_1, \alpha a_2, \ldots, \alpha a_n), \quad 0 = (0, 0, \ldots, 0).\)

4) The set of all real-valued functions defined on a real line.
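As a quick illustration of example 3, the following sketch (not part of the book; the variable names are my own) spot-checks several of the axioms (A) and (B) numerically for 4-tuples, with random samples standing in for arbitrary elements of \(\mathbb{R}^n\):

```python
import numpy as np

# Numerical spot-check of the vector space axioms for n-tuples (example 3).
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 4))   # three arbitrary 4-tuples
alpha, beta = 2.0, -3.0

assert np.allclose(x + y, y + x)                              # (A.1)
assert np.allclose((x + y) + z, x + (y + z))                  # (A.2)
assert np.allclose(x + np.zeros(4), x)                        # (A.3) zero tuple
assert np.allclose(x + (-x), np.zeros(4))                     # (A.4) negative element
assert np.allclose(alpha * (beta * x), (alpha * beta) * x)    # (B.1)
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)    # (B.3)
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)  # (B.4)
print("all sampled axioms hold")
```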


1.2 Basis and Dimension of the Vector Space

Definition 1.2. A set of vectors \(x_1, x_2, \ldots, x_n\) is called linearly dependent if there exists a set of corresponding scalars \(\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}\), not all zero, such that

\(\sum_{i=1}^{n} \alpha_i x_i = 0.\)  (1.1)

Otherwise, the vectors \(x_1, x_2, \ldots, x_n\) are called linearly independent. In this case, none of the vectors \(x_i\) is the zero vector (Exercise 1.2).

Definition 1.3. The vector

\(x = \sum_{i=1}^{n} \alpha_i x_i\)  (1.2)

is called a linear combination of the vectors \(x_1, x_2, \ldots, x_n\), where \(\alpha_i \in \mathbb{R}\) \((i = 1, 2, \ldots, n)\).

Theorem 1.1. The set of n non-zero vectors \(x_1, x_2, \ldots, x_n\) is linearly dependent if and only if some vector \(x_k\) \((2 \le k \le n)\) is a linear combination of the preceding ones \(x_i\) \((i = 1, \ldots, k-1)\).

Proof. If the vectors \(x_1, x_2, \ldots, x_n\) are linearly dependent, then \(\sum_{i=1}^{n} \alpha_i x_i = 0\), where not all \(\alpha_i\) are zero. Let \(\alpha_k\) \((2 \le k \le n)\) be the last non-zero number, so that \(\alpha_i = 0\) \((i = k+1, \ldots, n)\). Then,

\(\sum_{i=1}^{k} \alpha_i x_i = 0 \quad\Rightarrow\quad x_k = \sum_{i=1}^{k-1} \left(-\frac{\alpha_i}{\alpha_k}\right) x_i.\)

Thereby, the case \(k = 1\) is avoided because \(\alpha_1 x_1 = 0\) implies that \(x_1 = 0\) (Exercise 1.1). Thus, the sufficiency is proved. The necessity is evident.

Definition 1.4. A basis of a vector space V is a set G of linearly independent vectors such that every vector in V is a linear combination of elements of G. A vector space V is finite-dimensional if it has a finite basis.

In the following we deal only with finite-dimensional vector spaces. Although such a space admits infinitely many bases, each of them contains the same number of vectors, as stated by the following theorem.


Theorem 1.2 All the bases of a finite-dimensional vector space V contain the same number of vectors.

Proof. Let \(G = \{g_1, g_2, \ldots, g_n\}\) and \(F = \{f_1, f_2, \ldots, f_m\}\) be two arbitrary bases of V with different numbers of elements, say \(m > n\). Then, every vector in V is a linear combination of the following vectors:

\(f_1, g_1, g_2, \ldots, g_n.\)  (1.3)

The vectors (1.3) are non-zero and linearly dependent, so that some vector \(g_k\) among them is a linear combination of the preceding ones. Excluding \(g_k\), we obtain a new set of vectors \(f_1, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n\) with the same property: every vector in V is a linear combination of its elements. Next, we consider the vectors \(f_1, f_2, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n\) and apply the exclusion procedure again. Note that none of the vectors \(f_i\) can be excluded in this way, since they are linearly independent.

As soon as all \(g_i\) \((i = 1, 2, \ldots, n)\) are exhausted we conclude that the vectors \(f_1, f_2, \ldots, f_{n+1}\) are linearly dependent. This contradicts, however, the previous assumption that they belong to the basis F.

Definition 1.5. The dimension of a finite-dimensional vector space V is the number of elements in a basis of V.

Theorem 1.3. Every set \(F = \{f_1, f_2, \ldots, f_n\}\) of linearly independent vectors in an n-dimensional vector space V forms a basis of V. Every set of more than n vectors is linearly dependent.

The proof of this theorem closely resembles the previous one. Consider a basis \(G = \{g_1, g_2, \ldots, g_n\}\) of V. The vectors (1.3) are then linearly dependent and non-zero. Excluding a vector \(g_k\), we obtain a new set of vectors with the property that every vector in V is a linear combination of its elements. Repeating this procedure until all \(g_i\) are exhausted, we arrive at the set F with the same property. Since the vectors \(f_i\) \((i = 1, 2, \ldots, n)\) are linearly independent, they form a basis of V. Consequently, any additional vectors in V, say \(f_{n+1}, f_{n+2}, \ldots\), are linear combinations of the elements of F. Therefore, any set containing more than n vectors must be linearly dependent.

Theorem 1.4. Every set \(F = \{f_1, f_2, \ldots, f_m\}\) of linearly independent vectors in an n-dimensional vector space V can be extended to a basis.


Proof. If \(m = n\), then according to Theorem 1.3, F is already a basis. In the case \(m < n\), we try to find \(n - m\) additional vectors \(f_{m+1}, f_{m+2}, \ldots, f_n\) such that the whole set of vectors \(f_1, f_2, \ldots, f_m, f_{m+1}, \ldots, f_n\) is linearly independent and thus forms a basis. Assume, on the contrary, that only \(k < n - m\) such vectors can be found. Then, for any vector \(x \in V\) there exist scalars \(\alpha, \alpha_1, \alpha_2, \ldots, \alpha_{m+k}\), not all zero, such that \(\alpha x + \alpha_1 f_1 + \alpha_2 f_2 + \ldots + \alpha_{m+k} f_{m+k} = 0\). Since \(\alpha\) cannot be zero (this would imply the linear dependence of the vectors \(f_i\)), every vector x in V is a linear combination of the vectors \(f_i\). Hence, the dimension of V is \(m + k < n\), which contradicts the assumption of the theorem.

1.3 Components of a Vector, Summation Convention

Let \(G = \{g_1, g_2, \ldots, g_n\}\) be a basis of an n-dimensional vector space V. Then,

\(x = \sum_{i=1}^{n} x^i g_i, \quad \forall x \in V.\)  (1.4)

Theorem 1.5. The representation (1.4) with respect to a given basis G is unique.

Proof. Let \(x = \sum_{i=1}^{n} x^i g_i\) and \(x = \sum_{i=1}^{n} y^i g_i\) be two different representations of a vector x, where not all scalar coefficients \(x^i\) and \(y^i\) \((i = 1, 2, \ldots, n)\) are pairwise identical. Then,

\(0 = x - x = \sum_{i=1}^{n} x^i g_i - \sum_{i=1}^{n} y^i g_i = \sum_{i=1}^{n} \left(x^i - y^i\right) g_i\)

implies that either the numbers \(x^i\) and \(y^i\) are equal for all i \((i = 1, 2, \ldots, n)\) or the vectors \(g_i\) are linearly dependent. The latter case is, however, not possible, since the vectors \(g_i\) form a basis of the vector space V.

The scalars \(x^i\) \((i = 1, 2, \ldots, n)\) in (1.4) are called components of the vector x with respect to the basis \(G = \{g_1, g_2, \ldots, g_n\}\). In tensor analysis this representation is commonly written in a concise form which omits the summation symbol.

Namely,

\(x = \sum_{i=1}^{n} x^i g_i = x^i g_i\)  (1.5)

exemplifies Einstein's summation convention: summation is implied over any index that appears twice, once as a superscript and once as a subscript. Such a repeated index, called a dummy index, takes the values from 1 to n, the dimension of the vector space. Note further that an index changes its role when it appears under a fraction bar: a superscript in the denominator counts as a subscript, and vice versa.
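The summation convention maps directly onto numerical tools. As a hedged sketch (numpy assumed; the basis and component values are arbitrary choices of mine), np.einsum evaluates an expression like \(x^i g_i\) by summing over the repeated index:

```python
import numpy as np

# Representation x = x^i g_i with an implied sum over the dummy index i.
G = np.array([[1.0, 0.0, 0.0],    # rows: basis vectors g_1, g_2, g_3
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
x_components = np.array([2.0, -1.0, 3.0])   # x^1, x^2, x^3

# 'i,ij->j' repeats the index i, so it is summed over, exactly as in x^i g_i.
x = np.einsum('i,ij->j', x_components, G)
print(x)   # same result as x_components @ G
```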

1.4 Scalar Product, Euclidean Space, Orthonormal Basis

The scalar product plays a fundamental role in vector and tensor algebra. Its definition largely determines the properties and the structure of the vector space.

Definition 1.6. The scalar (inner) product is a real-valued function \(x \cdot y\) of two vectors x and y in a vector space V, satisfying the following conditions.

(C.1) \(x \cdot y = y \cdot x\) (commutative rule),

(C.2) \(x \cdot (y + z) = x \cdot y + x \cdot z\) (distributive rule),

(C.3) \(\alpha(x \cdot y) = (\alpha x) \cdot y = x \cdot (\alpha y)\) (associative rule for the multiplication by a scalar), \(\forall \alpha \in \mathbb{R}\), \(\forall x, y, z \in V\),

(C.4) \(x \cdot x \ge 0\) \(\forall x \in V\), \(x \cdot x = 0\) if and only if \(x = 0\).

An n-dimensional vector space furnished by the scalar product with properties (C.1-C.4) is called Euclidean space \(\mathbb{E}^n\). On the basis of this scalar product one defines the Euclidean length (also called norm) of a vector x by

\(\|x\| = \sqrt{x \cdot x}.\)  (1.6)

A vector whose length is equal to 1 is referred to as a unit vector.

Definition 1.7. Two vectors x and y are called orthogonal (perpendicular), denoted by \(x \perp y\), if

\(x \cdot y = 0.\)  (1.7)

Of special interest is the so-called orthonormal basis of the Euclidean space.

Definition 1.8. A basis \(E = \{e_1, e_2, \ldots, e_n\}\) of an n-dimensional Euclidean space \(\mathbb{E}^n\) is called orthonormal if

\(e_i \cdot e_j = \delta_{ij}, \quad i, j = 1, 2, \ldots, n,\)  (1.8)

where

\(\delta_{ij} = \delta^{ij} = \delta^i_j = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \ne j \end{cases}\)  (1.9)

denotes the Kronecker delta.

An orthonormal basis consists of pairwise orthogonal unit vectors. This raises the question of whether such a basis always exists. In the following we show that any set of \(m \le n\) linearly independent vectors in \(\mathbb{E}^n\) can be orthogonalized and normalized by means of a linear transformation, the so-called Gram-Schmidt procedure, and thus delivers an orthonormal set.

Starting with a set of linearly independent vectors \(x_1, x_2, \ldots, x_m\), one constructs linear combinations \(e_1, e_2, \ldots, e_m\) of them such that \(e_i \cdot e_j = \delta_{ij}\) \((i, j = 1, 2, \ldots, m)\). Since these vectors are linearly independent, they are all non-zero (see Exercise 1.2), so that the first unit vector can be defined by

\(e_1 = \frac{x_1}{\|x_1\|}.\)  (1.10)

Next, we consider the vector

\(e_2' = x_2 - (x_2 \cdot e_1)\, e_1,\)  (1.11)

which is orthogonal to \(e_1\). The same holds for the unit vector \(e_2 = e_2' / \|e_2'\|\). Note that \(e_2' = 0\) would imply \(x_2 = (x_2 \cdot e_1)\, e_1 = (x_2 \cdot e_1)\, \|x_1\|^{-1} x_1\), which contradicts the linear independence of the vectors \(x_1\) and \(x_2\).

Further, we construct \(e_3' = x_3 - (x_3 \cdot e_2)\, e_2 - (x_3 \cdot e_1)\, e_1\) and the unit vector \(e_3 = e_3' / \|e_3'\|\), which is orthogonal to both \(e_1\) and \(e_2\). Repeating this procedure we finally obtain a set of mutually orthogonal non-zero unit vectors \(e_1, e_2, \ldots, e_m\), which are linearly independent (Exercise 1.6). If \(m = n\), this collection forms an orthonormal basis in \(\mathbb{E}^n\) by virtue of Theorem 1.3.
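The orthogonalization procedure just described translates directly into code. The following is a minimal sketch (function name and test vectors are my own; numpy assumed):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent rows, as in Sect. 1.4:
    e_k' = x_k - sum_j (x_k . e_j) e_j, then e_k = e_k' / |e_k'|."""
    basis = []
    for x in vectors:
        e = x - sum((x @ ej) * ej for ej in basis)   # subtract projections
        norm = np.linalg.norm(e)
        if np.isclose(norm, 0.0):
            raise ValueError("vectors are linearly dependent")
        basis.append(e / norm)
    return np.array(basis)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(X)
print(np.round(E @ E.T, 10))   # identity matrix: e_i . e_j = delta_ij
```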

With respect to an orthonormal basis the scalar product of two vectors \(x = x^i e_i\) and \(y = y^i e_i\) in \(\mathbb{E}^n\) takes the form

\(x \cdot y = x^1 y^1 + x^2 y^2 + \ldots + x^n y^n.\)  (1.13)

For the length of the vector x (1.6) we thus obtain the Pythagoras formula

\(\|x\| = \sqrt{x^1 x^1 + x^2 x^2 + \ldots + x^n x^n}, \quad x \in \mathbb{E}^n.\)  (1.14)


1.5 Dual Bases

Definition 1.9. Let \(G = \{g_1, g_2, \ldots, g_n\}\) be a basis in the n-dimensional Euclidean space \(\mathbb{E}^n\). Then, a basis \(G' = \{g^1, g^2, \ldots, g^n\}\) of \(\mathbb{E}^n\) is called dual to G if

\(g_i \cdot g^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n.\)  (1.15)

A set of vectors \(G' = \{g^1, g^2, \ldots, g^n\}\) satisfying the conditions (1.15) always exists and forms a basis in \(\mathbb{E}^n\). Indeed, let \(E = \{e_1, e_2, \ldots, e_n\}\) be an orthonormal basis in \(\mathbb{E}^n\). Since G also represents a basis, we may write

\(e_i = \alpha_i^j\, g_j, \quad g_i = \beta_i^j\, e_j, \quad i = 1, 2, \ldots, n,\)  (1.16)

using Einstein's summation convention. Inserting the first relation into the second one yields \(g_i = \beta_i^j \alpha_j^k\, g_k\), and consequently \(0 = \left(\beta_i^j \alpha_j^k - \delta_i^k\right) g_k\). Since the vectors \(g_i\) are linearly independent, it follows that \(\beta_i^j \alpha_j^k = \delta_i^k\). Now, set

\(g^i = \alpha_j^i\, e_j, \quad i = 1, 2, \ldots, n.\)  (1.18)

With the aid of (1.8) and (1.16) one verifies that these vectors satisfy (1.15): \(g_i \cdot g^j = \left(\beta_i^k e_k\right) \cdot \left(\alpha_l^j e_l\right) = \beta_i^k \alpha_l^j \delta_{kl} = \beta_i^k \alpha_k^j = \delta_i^j\).

Next, we show that the vectors \(g^i\) \((i = 1, 2, \ldots, n)\) are linearly independent and thus form a basis of \(\mathbb{E}^n\). Assume, on the contrary, that \(a_i g^i = 0\), where not all scalars \(a_i\) are zero. Multiplying both sides of this equation scalarly by the vectors \(g_j\) \((j = 1, 2, \ldots, n)\) leads, by (1.15), to the contradiction \(0 = a_i\, g^i \cdot g_j = a_i \delta_j^i = a_j\) (see also Exercise 1.5).

The next important question is whether the dual basis is unique. Let \(G' = \{g^1, g^2, \ldots, g^n\}\) and \(H' = \{h^1, h^2, \ldots, h^n\}\) be two arbitrary non-coinciding bases in \(\mathbb{E}^n\), both dual to \(G = \{g_1, g_2, \ldots, g_n\}\). Then, \(h^i = h^i_k\, g^k\) \((i = 1, 2, \ldots, n)\). Forming the scalar product with the vectors \(g_j\) \((j = 1, 2, \ldots, n)\) we can conclude that the bases G' and H' coincide:

\(\delta_j^i = h^i \cdot g_j = h^i_k\, g^k \cdot g_j = h^i_k \delta_j^k = h^i_j \quad\Rightarrow\quad h^i = g^i, \quad i = 1, 2, \ldots, n.\)

Thus, we have proved the following theorem.

Theorem 1.6. To every basis in a Euclidean space \(\mathbb{E}^n\) there exists a unique dual basis.

Relation (1.19) enables one to determine the dual basis. However, it can also be obtained without any orthonormal basis. Indeed, let \(g^i\) be a basis dual to \(g_i\) \((i = 1, 2, \ldots, n)\). Then, we can write

\(g^i = g^{ij}\, g_j, \quad g_i = g_{ij}\, g^j, \quad i = 1, 2, \ldots, n.\)  (1.21)

Inserting the second relation (1.21) into the first one yields \(g^i = g^{ij} g_{jk}\, g^k\). Multiplying this equation scalarly by the vectors \(g_l\) and using (1.15), we obtain \(\delta_l^i = g^{ij} g_{jk} \delta_l^k = g^{ij} g_{jl}\) \((i, l = 1, 2, \ldots, n)\). Thus, we see that the matrices \([g_{kj}]\) and \([g^{kj}]\) are inverse to each other, such that

\(\left[g^{kj}\right] = \left[g_{kj}\right]^{-1}.\)  (1.24)

Now, multiplying scalarly the first and second relation (1.21) by the vectors \(g^j\) and \(g_j\) \((j = 1, 2, \ldots, n)\), respectively, we obtain with the aid of (1.15) the following important identities:

\(g^{ij} = g^{ji} = g^i \cdot g^j, \quad g_{ij} = g_{ji} = g_i \cdot g_j, \quad i, j = 1, 2, \ldots, n.\)  (1.25)
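A numerical spot-check of the inverse relation (1.24) between \([g_{ij}]\) and \([g^{ij}]\) (a sketch; the basis is an arbitrary choice of mine, numpy assumed):

```python
import numpy as np

# Basis vectors g_i as rows; any linearly independent choice will do.
G = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

g_cov = G @ G.T                    # covariant metric: g_ij = g_i . g_j
G_dual = np.linalg.inv(G).T        # dual basis g^i as rows (g_i . g^j = delta_i^j)
g_con = G_dual @ G_dual.T          # contravariant metric: g^ij = g^i . g^j

print(np.round(g_cov @ g_con, 10)) # identity: [g_ij][g^jl] = delta_i^l
```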

By definition (1.15), an orthonormal basis in \(\mathbb{E}^n\) is self-dual, i.e. \(e_i = e^i\), \(e_i \cdot e^j = \delta_i^j\) \((i, j = 1, 2, \ldots, n)\). With the aid of the dual bases an arbitrary vector in \(\mathbb{E}^n\) can be represented by

\(x = x^i g_i = x_i g^i, \quad \forall x \in \mathbb{E}^n,\)  (1.27)

where \(x^i = x \cdot g^i\) and \(x_i = x \cdot g_i\) \((i = 1, 2, \ldots, n)\).  (1.28)

Indeed,

\(x \cdot g^i = x^j g_j \cdot g^i = x^j \delta_j^i = x^i, \quad x \cdot g_i = x_j\, g^j \cdot g_i = x_j \delta^j_i = x_i, \quad i = 1, 2, \ldots, n.\)

The components of a vector with respect to the dual bases are of particular importance for the calculation of the scalar product. For two arbitrary vectors \(x = x^i g_i = x_i g^i\) and \(y = y^i g_i = y_i g^i\) we obtain

\(x \cdot y = x^i y^j g_{ij} = x_i y_j\, g^{ij} = x^i y_i = x_i y^i.\)

The length of the vector x can thus be written as

\(\|x\| = \sqrt{x^i x^j g_{ij}} = \sqrt{x_i x_j\, g^{ij}} = \sqrt{x_i x^i}.\)

Example. Dual basis in \(\mathbb{E}^3\). Let \(G = \{g_1, g_2, g_3\}\) be a basis of the three-dimensional Euclidean space and

\(g = [g_1\, g_2\, g_3],\)  (1.31)

where \([\bullet\,\bullet\,\bullet]\) denotes the mixed product of vectors. It is defined by

\([a\, b\, c] = (a \times b) \cdot c = (b \times c) \cdot a = (c \times a) \cdot b,\)  (1.32)

where "×" denotes the vector (also called cross or outer) product of vectors. Consider the following set of vectors:

\(g^1 = g^{-1}\, g_2 \times g_3, \quad g^2 = g^{-1}\, g_3 \times g_1, \quad g^3 = g^{-1}\, g_1 \times g_2.\)  (1.33)

The vectors (1.33) satisfy conditions (1.15) and are linearly independent (Exercise 1.11); they thus form the basis dual to \(g_i\) \((i = 1, 2, 3)\). Further, one can show that

\(g^2 = \left|\left[g_{ij}\right]\right|,\)  (1.34)

where \(|\bullet|\) denotes the determinant of the matrix. Indeed, with the aid of (1.16)₂ we obtain

\(g = [g_1\, g_2\, g_3] = \left[\beta_1^i e_i\ \beta_2^j e_j\ \beta_3^k e_k\right] = \beta_1^i \beta_2^j \beta_3^k\, [e_i\, e_j\, e_k] = \beta_1^i \beta_2^j \beta_3^k\, e_{ijk} = \left|\left[\beta_i^j\right]\right|,\)  (1.35)

where \(e_{ijk}\) denotes the permutation symbol (also called Levi-Civita symbol).

It is defined by

\(e_{ijk} = e^{ijk} = [e_i\, e_j\, e_k] = \begin{cases} 1 & \text{if } ijk \text{ is an even permutation of } 123, \\ -1 & \text{if } ijk \text{ is an odd permutation of } 123, \\ 0 & \text{otherwise}, \end{cases}\)  (1.36)

where the orthonormal vectors \(e_1\), \(e_2\) and \(e_3\) are numerated in such a way that they form a right-handed system. In this case, \([e_1\, e_2\, e_3] = 1\).

On the other hand, we can write again using (1.16)₂

\(g_{ij} = g_i \cdot g_j = \sum_{k=1}^{3} \beta_i^k \beta_j^k, \quad i, j = 1, 2, 3.\)

The latter sum can be represented as a product of two matrices so that

\(\left[g_{ij}\right] = \left[\beta_i^j\right]\left[\beta_i^j\right]^{\mathrm{T}}.\)  (1.37)

Since the determinant of the matrix product is equal to the product of the matrix determinants we finally have

\(\left|\left[g_{ij}\right]\right| = \left|\left[\beta_i^j\right]\right|^2 = g^2.\)  (1.38)

With the aid of the permutation symbol (1.36) one can write

\([g_i\, g_j\, g_k] = e_{ijk}\, g, \quad i, j, k = 1, 2, 3,\)  (1.39)

which by (1.28)₂ yields an alternative representation of the identities (1.33) as

\(g_i \times g_j = e_{ijk}\, g\, g^k, \quad i, j = 1, 2, 3.\)  (1.40)

Similarly to (1.35) one can also show that (see Exercise 1.12)

\(\left[g^1\, g^2\, g^3\right] = g^{-1}\)  (1.42)

and

\(\left[g^i\, g^j\, g^k\right] = e^{ijk}\, g^{-1}, \quad i, j, k = 1, 2, 3,\)  (1.43)

which yields by analogy with (1.40)

\(g^i \times g^j = e^{ijk}\, g^{-1}\, g_k, \quad i, j = 1, 2, 3.\)  (1.44)

Relations (1.40) and (1.44) permit a useful representation of the vector product. Indeed, let \(a = a^i g_i = a_i g^i\) and \(b = b^j g_j = b_j g^j\) be two arbitrary vectors in \(\mathbb{E}^3\). Then, in view of (1.40),

\(a \times b = \left(a^i g_i\right) \times \left(b^j g_j\right) = a^i b^j e_{ijk}\, g\, g^k = g \begin{vmatrix} a^1 & a^2 & a^3 \\ b^1 & b^2 & b^3 \\ g^1 & g^2 & g^3 \end{vmatrix}.\)  (1.45)

For the orthonormal basis in \(\mathbb{E}^3\) relations (1.40) and (1.44) reduce to

\(e_i \times e_j = e_{ijk}\, e_k = e^{ijk}\, e^k, \quad i, j = 1, 2, 3,\)  (1.46)

so that the vector product (1.45) can be written by

\(a \times b = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ e_1 & e_2 & e_3 \end{vmatrix}.\)  (1.47)
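The example above can be checked numerically. A sketch (numpy assumed; the basis is an arbitrary choice of mine) computing the dual basis from (1.33) and verifying (1.15) and (1.34):

```python
import numpy as np

g1 = np.array([1.0, 0.0, 0.0])
g2 = np.array([1.0, 1.0, 0.0])
g3 = np.array([1.0, 1.0, 1.0])

g = np.dot(np.cross(g1, g2), g3)          # mixed product g = [g1 g2 g3]

# Dual basis according to (1.33)
gd1 = np.cross(g2, g3) / g
gd2 = np.cross(g3, g1) / g
gd3 = np.cross(g1, g2) / g

G, Gd = np.array([g1, g2, g3]), np.array([gd1, gd2, gd3])
print(np.round(G @ Gd.T, 10))                      # Kronecker delta, condition (1.15)
print(np.isclose(g**2, np.linalg.det(G @ G.T)))    # checks (1.34): g^2 = |[g_ij]|
```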

1.6 Second-Order Tensor as a Linear Mapping

Let us consider a set \(\mathrm{Lin}^n\) of all linear mappings of one vector into another one within \(\mathbb{E}^n\). Such a mapping can be written as

\(y = A x, \quad y \in \mathbb{E}^n, \quad \forall x \in \mathbb{E}^n, \quad \forall A \in \mathrm{Lin}^n.\)  (1.48)

Elements of the set \(\mathrm{Lin}^n\) are called second-order tensors or simply tensors. Linearity of the mapping (1.48) is expressed by the following relations:

\(A (x + y) = A x + A y, \quad \forall x, y \in \mathbb{E}^n,\)  (1.49)

\(A (\alpha x) = \alpha (A x), \quad \forall x \in \mathbb{E}^n, \ \forall \alpha \in \mathbb{R}, \ \forall A \in \mathrm{Lin}^n.\)  (1.50)

Further, we define the product of a tensor by a scalar number \(\alpha \in \mathbb{R}\) as

\((\alpha A) x = \alpha (A x) = A (\alpha x), \quad \forall x \in \mathbb{E}^n,\)  (1.51)

and the sum of two tensors A and B as

\((A + B) x = A x + B x, \quad \forall x \in \mathbb{E}^n.\)  (1.52)

Thus, properties (A.1), (A.2) and (B.1-B.4) apply to the set \(\mathrm{Lin}^n\). Setting in (1.51) \(\alpha = -1\) we obtain the negative tensor by

\(-A = (-1) A.\)  (1.53)

Further, we define a zero tensor 0 in the following manner:


\(0 x = 0, \quad \forall x \in \mathbb{E}^n,\)  (1.54)

so that the elements of the set \(\mathrm{Lin}^n\) also fulfill conditions (A.3) and (A.4) and accordingly form a vector space.

The properties of second-order tensors can thus be summarized by

\(A + B = B + A\) (addition is commutative),  (1.55)

\(A + (B + C) = (A + B) + C\) (addition is associative),  (1.56)

\(0 + A = A,\)  (1.57)

\(A + (-A) = 0,\)  (1.58)

\(\alpha(\beta A) = (\alpha\beta) A\) (multiplication by scalars is associative),  (1.59)

\(1 A = A,\)  (1.60)

\(\alpha(A + B) = \alpha A + \alpha B\) (multiplication by scalars is distributive with respect to tensor addition),  (1.61)

\((\alpha + \beta) A = \alpha A + \beta A\) (multiplication by scalars is distributive with respect to scalar addition), \(\forall A, B, C \in \mathrm{Lin}^n\), \(\forall \alpha, \beta \in \mathbb{R}\).  (1.62)

Example. Vector product in \(\mathbb{E}^3\). The vector product of two vectors in \(\mathbb{E}^3\) represents again a vector in \(\mathbb{E}^3\):

\(z = w \times x, \quad z \in \mathbb{E}^3, \quad \forall w, x \in \mathbb{E}^3.\)  (1.63)

According to (1.45) the mapping \(x \to z\) is linear, so that

\(w \times (\alpha x) = \alpha (w \times x), \quad w \times (x + y) = w \times x + w \times y, \quad \forall w, x, y \in \mathbb{E}^3, \ \forall \alpha \in \mathbb{R}.\)  (1.64)

Thus, it can be described by means of a tensor of the second order by

\(w \times x = W x, \quad W \in \mathrm{Lin}^3, \quad \forall x \in \mathbb{E}^3.\)  (1.65)

The tensor which forms the vector product by a vector w according to (1.65) will be denoted in the following by \(\hat{w}\). Thus, we write

\(w \times x = \hat{w} x.\)  (1.66)
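With respect to a right-handed orthonormal basis the tensor \(\hat{w}\) has the familiar skew-symmetric component matrix. A sketch (function name and test values are mine):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of w such that hat(w) @ x == np.cross(w, x),
    with components relative to a right-handed orthonormal basis."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([1.0, 2.0, 3.0])
x = np.array([-1.0, 0.5, 2.0])
print(np.allclose(hat(w) @ x, np.cross(w, x)))   # True: w x x = ŵ x
```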

Example. Representation of a rotation by a second-order tensor.

A rotation of a vector a in \(\mathbb{E}^3\) about an axis yields another vector r in \(\mathbb{E}^3\). It can be shown that the mapping \(a \to r(a)\) is linear such that

Fig. 1.2. Finite rotation of a vector in \(\mathbb{E}^3\).

\(r(\alpha a) = \alpha\, r(a), \quad r(a + b) = r(a) + r(b), \quad \forall \alpha \in \mathbb{R}, \ \forall a, b \in \mathbb{E}^3.\)  (1.68)

Thus, it can again be described by a second-order tensor as

\(r(a) = R a, \quad \forall a \in \mathbb{E}^3, \quad R \in \mathrm{Lin}^3.\)  (1.69)

This tensor R is referred to as rotation tensor.

Let us construct the rotation tensor which rotates an arbitrary vector \(a \in \mathbb{E}^3\) about an axis specified by a unit vector e (Fig. 1.2). Decomposing a into a component \(a^*\) parallel to the rotation axis and a component x perpendicular to it, the rotated vector can be represented by

\(\tilde{a} = a^* + x \cos\omega + y \sin\omega,\)  (1.70)

where \(\omega\) denotes the rotation angle. By geometric identities, \(a^* = (a \cdot e)\, e = (e \otimes e)\, a\), \(x = a - a^*\) and \(y = \hat{e}\, a\).  (1.71)

Hence, the rotated vector can be represented as

\(\tilde{a} = \cos\omega\, a + \sin\omega\, \hat{e}\, a + (1 - \cos\omega)\, (e \otimes e)\, a,\)  (1.72)

which leads to the following expression for the rotation tensor:

\(R = \cos\omega\, I + \sin\omega\, \hat{e} + (1 - \cos\omega)\, e \otimes e,\)  (1.73)

where I denotes the so-called identity tensor (1.89) (see Sect. 1.7).


Another useful representation of the rotation tensor can be obtained by observing that \(x = y \times e = -e \times y\), so that \(x = -\hat{e}\, y = -\hat{e}\, \hat{e}\, a = -(\hat{e})^2 a\). Rewriting (1.70) as \(\tilde{a} = a + x (\cos\omega - 1) + y \sin\omega\) and taking (1.71) into account, we can express \(\tilde{a}\) as

\(\tilde{a} = a + \sin\omega\, \hat{e}\, a + (1 - \cos\omega)\, (\hat{e})^2 a.\)  (1.75)

This yields the expression for the rotation tensor

\(R = I + \sin\omega\, \hat{e} + (1 - \cos\omega)\, (\hat{e})^2,\)  (1.76)

known as the Euler-Rodrigues formula (see, e.g., [9]).
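A numerical sketch of the Euler-Rodrigues formula (1.76); the test axis and angle are arbitrary choices of mine:

```python
import numpy as np

def rotation_tensor(e, omega):
    """Euler-Rodrigues formula (1.76): R = I + sin(w) ê + (1 - cos(w)) ê²."""
    ehat = np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])
    return np.eye(3) + np.sin(omega) * ehat + (1.0 - np.cos(omega)) * ehat @ ehat

e = np.array([0.0, 0.0, 1.0])        # rotation about the e3-axis
R = rotation_tensor(e, np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 10))  # -> [0, 1, 0]
print(np.allclose(R.T @ R, np.eye(3)))              # R is orthogonal
```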

Example. The Cauchy stress tensor as a linear mapping of the unit surface normal into the Cauchy stress vector. Consider a body B in the current configuration at a time t. In order to define the stress at some point P, imagine a smooth surface going through P and separating B into two parts. The material forces acting across the surface element \(\Delta A\) reduce to a resultant force \(\Delta p\) and a resultant couple \(\Delta m\). Letting \(\Delta A\) tend to zero while keeping P as an inner point, one postulates in continuum mechanics that the limit \(t = \lim_{\Delta A \to 0} \Delta p / \Delta A\) exists and is final; it defines the Cauchy stress vector t. According to Cauchy's fundamental postulate, this vector depends on the surface only through its outward unit normal n; in other words, the Cauchy stress vector is the same for all surfaces through P having n as the normal. Moreover, Cauchy's theorem states that the mapping \(n \to t\) is linear.

This linear mapping is represented by the Cauchy stress tensor \(\sigma\), a continuous function of the position vector x at the point P:

\(t = \sigma n,\)  (1.77)

where t denotes the Cauchy stress vector and n the outward unit normal.
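As a sketch of relation (1.77) in components (the stress values are arbitrary sample numbers of mine, in MPa):

```python
import numpy as np

# Sample Cauchy stress tensor components w.r.t. an orthonormal basis.
sigma = np.array([[100.0,  30.0,   0.0],
                  [ 30.0, -50.0,  20.0],
                  [  0.0,  20.0,  10.0]])   # symmetric, as usual for the Cauchy stress

n = np.array([1.0, 1.0, 0.0])
n /= np.linalg.norm(n)                      # outward unit normal

t = sigma @ n                               # Cauchy stress vector t = sigma n (1.77)
print(t)
print(t @ n)                                # normal stress component t . n
```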

On the basis of the “right” mapping (1.48) we can also define the “left” one by the following condition:

\((y A) \cdot x = y \cdot (A x), \quad \forall x \in \mathbb{E}^n, \quad A \in \mathrm{Lin}^n.\)  (1.78)

First, it should be shown that for all \(y \in \mathbb{E}^n\) there exists a unique vector \(y A \in \mathbb{E}^n\) satisfying the condition (1.78) for all \(x \in \mathbb{E}^n\). Let \(G = \{g_1, g_2, \ldots, g_n\}\) and \(G' = \{g^1, g^2, \ldots, g^n\}\) be dual bases in \(\mathbb{E}^n\). Then, we can represent two arbitrary vectors \(x, y \in \mathbb{E}^n\) by \(x = x^i g_i\) and \(y = y_i g^i\). Now, consider the vector

\(y A = \left[y \cdot \left(A g_j\right)\right] g^j.\)

Its scalar product with x gives \((y A) \cdot x = x^j \left[y \cdot \left(A g_j\right)\right]\). On the other hand, we obtain the same result also by

\(y \cdot (A x) = y \cdot \left(x^j A g_j\right) = x^j \left[y \cdot \left(A g_j\right)\right].\)

Further, we show that the vector y A, satisfying condition (1.78) for all \(x \in \mathbb{E}^n\), is unique. Conversely, let \(a, b \in \mathbb{E}^n\) be two such vectors. Then, we have

\(a \cdot x = b \cdot x \ \Rightarrow\ (a - b) \cdot x = 0, \ \forall x \in \mathbb{E}^n \ \Rightarrow\ (a - b) \cdot (a - b) = 0,\)

which by axiom (C.4) implies that \(a = b\).

Since the order of mappings in (1.78) is irrelevant we can write them without brackets and dots as follows:

\(y \cdot (A x) = (y A) \cdot x = y A x.\)  (1.79)

1.7 Tensor Product, Representation of a Tensor with Respect to a Basis

The tensor product plays an important role in the construction of second-order tensors from two vectors. Let \(a, b \in \mathbb{E}^n\). The tensor product \(a \otimes b\) maps an arbitrary vector \(x \in \mathbb{E}^n\) into another vector according to

\((a \otimes b)\, x = (b \cdot x)\, a, \quad \forall x \in \mathbb{E}^n.\)  (1.80)


It can be shown that the mapping (1.80) fulfills the conditions (1.49-1.51) and for this reason is linear. Indeed, by virtue of (B.1), (B.4), (C.2) and (C.3) we can write

\((a \otimes b)(x + y) = a\, [b \cdot (x + y)] = (a \otimes b)\, x + (a \otimes b)\, y, \quad (a \otimes b)(\alpha x) = a\, [b \cdot (\alpha x)] = \alpha\, (a \otimes b)\, x.\)

Thus, the tensor product of two vectors represents a second-order tensor. Further, it holds

\(c \otimes (a + b) = c \otimes a + c \otimes b, \quad (a + b) \otimes c = a \otimes c + b \otimes c.\)  (1.83)

Indeed, mapping an arbitrary vector \(x \in \mathbb{E}^n\) by both sides of these relations and using (1.52) and (1.80) we obtain

\([c \otimes (a + b)]\, x = c\, [(a + b) \cdot x] = c\,(a \cdot x) + c\,(b \cdot x) = (c \otimes a)\, x + (c \otimes b)\, x.\)

For the “left” mapping by the tensor \(a \otimes b\) we obtain from (1.78) (see Exercise 1.20)

\(y\, (a \otimes b) = (y \cdot a)\, b, \quad \forall y \in \mathbb{E}^n.\)  (1.85)

It has already been shown that the collection of all second-order tensors \(\mathrm{Lin}^n\) forms a vector space. In the following we show how a basis of \(\mathrm{Lin}^n\) can be constructed with the aid of the tensor product.

Theorem 1.7. Let \(F = \{f_1, f_2, \ldots, f_n\}\) and \(G = \{g_1, g_2, \ldots, g_n\}\) be two arbitrary bases of \(\mathbb{E}^n\). Then, the tensors \(f_i \otimes g_j\) \((i, j = 1, 2, \ldots, n)\) represent a basis of \(\mathrm{Lin}^n\). The dimension of the vector space \(\mathrm{Lin}^n\) is thus \(n^2\).

Proof. First, we prove that every tensor in \(\mathrm{Lin}^n\) can be expressed as a linear combination of the tensors \(f_i \otimes g_j\) \((i, j = 1, 2, \ldots, n)\). To this end, let \(A \in \mathrm{Lin}^n\) be an arbitrary second-order tensor and consider the following linear combination:


\(A' = \left(f^i A g^j\right) f_i \otimes g_j,\)

where the vectors \(f^i\) and \(g^i\) \((i = 1, 2, \ldots, n)\) form the bases dual to F and G, respectively. The tensors A and A' coincide if and only if

\(A' x = A x, \quad \forall x \in \mathbb{E}^n.\)  (1.86)

Mapping an arbitrary vector \(x = x^j g_j\) by both tensors we obtain, on the one hand, \(A x = x^j A g_j\). By virtue of (1.27-1.28) we can represent the vectors \(A g_j\) \((j = 1, 2, \ldots, n)\) with respect to the basis F by \(A g_j = \left(f^i \cdot A g_j\right) f_i\), so that condition (1.86) is satisfied for all \(x \in \mathbb{E}^n\).

Further, we show that the tensors \(f_i \otimes g_j\) \((i, j = 1, 2, \ldots, n)\) are linearly independent. Otherwise, there would exist scalars \(\alpha^{ij}\) \((i, j = 1, 2, \ldots, n)\), not all zero, such that

\(\alpha^{ij}\, f_i \otimes g_j = 0.\)

The right mapping of \(g^k\) \((k = 1, 2, \ldots, n)\) by this tensor equality then yields \(\alpha^{ik} f_i = 0\) \((k = 1, 2, \ldots, n)\). This contradicts, however, the fact that the vectors \(f_k\) \((k = 1, 2, \ldots, n)\) form a basis and are therefore linearly independent.

For the representation of second-order tensors we will in the following use primarily the bases \(g_i \otimes g_j\), \(g^i \otimes g^j\), \(g^i \otimes g_j\) or \(g_i \otimes g^j\) \((i, j = 1, 2, \ldots, n)\). With respect to these bases a tensor \(A \in \mathrm{Lin}^n\) is written as

\(A = A^{ij}\, g_i \otimes g_j = A_{ij}\, g^i \otimes g^j = A^i_{\cdot j}\, g_i \otimes g^j = A_i^{\cdot j}\, g^i \otimes g_j\)  (1.87)

with the components (see Exercise 1.21)

\(A^{ij} = g^i A g^j, \quad A_{ij} = g_i A g_j, \quad A^i_{\cdot j} = g^i A g_j, \quad A_i^{\cdot j} = g_i A g^j, \quad i, j = 1, 2, \ldots, n.\)  (1.88)

Note that the subscript dot indicates the position of the above index. For example, for the components \(A^i_{\cdot j}\), i is the first index while for the components \(A_j^{\cdot i}\), i is the second index.
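The four component families in (1.88) can be computed directly. A sketch (the basis, the tensor values and the variable names are my own choices; numpy assumed):

```python
import numpy as np

# Basis g_i (rows) and dual basis g^i (rows) w.r.t. an orthonormal background basis.
G = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
Gd = np.linalg.inv(G).T

A = np.arange(9, dtype=float).reshape(3, 3)   # arbitrary tensor in orthonormal components

A_con   = Gd @ A @ Gd.T   # A^{ij}   = g^i A g^j
A_cov   = G  @ A @ G.T    # A_{ij}   = g_i A g_j
A_mixed = Gd @ A @ G.T    # A^i_{.j} = g^i A g_j
print(np.round(A_con, 6))
```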

Of special importance is the so-called identity tensor I. It is defined by

\(I x = x, \quad \forall x \in \mathbb{E}^n.\)  (1.89)

With the aid of (1.25), (1.87) and (1.88) the components of the identity tensor can be expressed by


\(I^{ij} = g^i I g^j = g^i \cdot g^j = g^{ij}, \quad I_{ij} = g_i I g_j = g_i \cdot g_j = g_{ij},\)

\(I^i_{\cdot j} = I_j^{\cdot i} = I^i_j = g^i I g_j = g_i I g^j = g^i \cdot g_j = g_i \cdot g^j = \delta^i_j,\)  (1.90)

where \(i, j = 1, 2, \ldots, n\). Thus,

\(I = g_{ij}\, g^i \otimes g^j = g^{ij}\, g_i \otimes g_j = g_i \otimes g^i = g^i \otimes g_i.\)  (1.91)

The components of the identity tensor (1.90)₁,₂ are given by relation (1.25). They characterize the metric properties of the Euclidean space and are referred to as metric coefficients. For this reason, the identity tensor is frequently called the metric tensor. With respect to an orthonormal basis relation (1.91) reduces to

\(I = \sum_{i=1}^{n} e_i \otimes e_i.\)  (1.92)

1.8 Change of the Basis, Transformation Rules

Now, we are going to clarify how the vector and tensor components transform with the change of the basis. Let x be a vector and A a second-order tensor. According to (1.27) and (1.87),

\(x = x^i g_i = x_i g^i,\)  (1.93)

\(A = A^{ij}\, g_i \otimes g_j = A_{ij}\, g^i \otimes g^j = A^i_{\cdot j}\, g_i \otimes g^j = A_i^{\cdot j}\, g^i \otimes g_j.\)  (1.94)

With the aid of (1.21) and (1.28) we can write

\(x^i = x \cdot g^i = x \cdot \left(g^{ij} g_j\right) = x_j\, g^{ji}, \quad x_i = x \cdot g_i = x \cdot \left(g_{ij} g^j\right) = x^j g_{ji},\)  (1.95)

where \(i = 1, 2, \ldots, n\). Similarly we obtain by virtue of (1.88)

\(A^{ij} = A^i_{\cdot k}\, g^{kj} = g^{il} A_{lk}\, g^{kj},\)  (1.96)

\(A_{ij} = A_i^{\cdot k}\, g_{kj} = g_{il} A^{lk}\, g_{kj},\)  (1.97)

where \(i, j = 1, 2, \ldots, n\). The transformation rules (1.95-1.97) hold not only for dual bases. Indeed, let \(g_i\) and \(\bar{g}_i\) \((i = 1, 2, \ldots, n)\) be two arbitrary bases in \(\mathbb{E}^n\), so that \(x = x^i g_i = \bar{x}^i \bar{g}_i\) and \(A = A^{ij}\, g_i \otimes g_j = \bar{A}^{ij}\, \bar{g}_i \otimes \bar{g}_j\).


By means of the relations

\(g_i = a_i^j\, \bar{g}_j, \quad i = 1, 2, \ldots, n,\)  (1.100)

one thus obtains

\(x = x^i g_i = x^i a_i^j\, \bar{g}_j \quad\Rightarrow\quad \bar{x}^j = x^i a_i^j, \quad j = 1, 2, \ldots, n,\)  (1.101)

and similarly for the tensor components, \(\bar{A}^{kl} = A^{ij} a_i^k a_j^l\) \((k, l = 1, 2, \ldots, n)\).

1.9 Special Operations with Second-Order Tensors

In Section 1.6 we have seen that the set \(\mathrm{Lin}^n\) represents a finite-dimensional vector space. Its elements, second-order tensors, can therefore be treated as vectors in \(\mathbb{E}^{n^2}\), with all the operations defined for vectors, such as summation, multiplication by a scalar or the scalar product (the latter is defined for second-order tensors in Section 1.10). However, in contrast to conventional vectors in the Euclidean space, one can additionally define for second-order tensors some special operations, such as composition, transposition and inversion.

Composition (simple contraction). Let \(A, B \in \mathrm{Lin}^n\) be two second-order tensors. The tensor \(C = AB\) is called the composition of A and B if

\(C x = A (B x), \quad \forall x \in \mathbb{E}^n.\)  (1.103)

For the left mapping (1.78) one can write

\(y\, (AB) = (y A)\, B, \quad \forall y \in \mathbb{E}^n.\)  (1.104)

In order to prove the last relation we use again (1.78) and (1.103):

\(y\, (AB)\, x = y \cdot [(AB)\, x] = y \cdot [A (B x)] = (y A) \cdot (B x) = [(y A) B] \cdot x, \quad \forall x \in \mathbb{E}^n.\)

The composition of tensors is generally not commutative, so that \(AB \ne BA\). Two tensors A and B are called commutative if \(AB = BA\). Besides, the composition of tensors is characterized by the properties (see Exercise 1.25)

\(A (B + C) = AB + AC, \quad (B + C)\, A = BA + CA,\)  (1.106)

\(A (BC) = (AB)\, C.\)  (1.107)


For example, the distributive rule (1.106)₁ can be proved as follows:

\([A (B + C)]\, x = A [(B + C)\, x] = A (B x + C x) = A (B x) + A (C x) = (AB)\, x + (AC)\, x, \quad \forall x \in \mathbb{E}^n.\)

For the tensor product (1.80) the composition (1.103) yields

\((a \otimes b)\, (c \otimes d) = (b \cdot c)\, a \otimes d, \quad a, b, c, d \in \mathbb{E}^n.\)  (1.108)

Indeed, by virtue of (1.80), (1.82) and (1.103),

\((a \otimes b)\, (c \otimes d)\, x = (d \cdot x)\, (a \otimes b)\, c = (d \cdot x)\, (b \cdot c)\, a = (b \cdot c)\, (a \otimes d)\, x, \quad \forall x \in \mathbb{E}^n.\)

For the components of the composition we thus obtain

\(AB = A^{ik} B_k^{\cdot j}\, g_i \otimes g_j = A_{ik} B^{kj}\, g^i \otimes g^j = A^i_{\cdot k} B^k_{\cdot j}\, g_i \otimes g^j = A_i^{\cdot k} B_k^{\cdot j}\, g^i \otimes g_j,\)  (1.109)

where A and B are given in the form (1.87).

Powers, polynomials and functions of second-order tensors. On the basis of the composition (1.103) one defines by

\(A^m = \underbrace{A A \cdots A}_{m \text{ times}}, \quad m = 1, 2, 3, \ldots, \qquad A^0 = I,\)  (1.110)

powers (monomials) of second-order tensors, characterized by the evident properties \(A^k A^l = A^{k+l}\) and \(\left(A^k\right)^l = A^{kl}\).

With the aid of tensor powers a polynomial of A can be defined by

\(g(A) = a_0 I + a_1 A + a_2 A^2 + \ldots + a_m A^m = \sum_{k=0}^{m} a_k A^k.\)  (1.113)

Here, \(g(A): \mathrm{Lin}^n \to \mathrm{Lin}^n\) represents a tensor function, mapping one second-order tensor into another one within \(\mathrm{Lin}^n\). By this means one can define various tensor functions; of special interest among them is the exponential one,

\(\exp(A) = \sum_{k=0}^{\infty} \frac{A^k}{k!},\)  (1.114)

given by the infinite power series.
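A truncated-series sketch of the exponential (1.114); in practice scipy.linalg.expm does the same job, and the truncation order here is my own choice:

```python
import numpy as np

def tensor_exp(A, terms=30):
    """exp(A) = sum_{k=0}^inf A^k / k!, truncated after `terms` terms (1.114)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # builds A^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # sample tensor
print(np.round(tensor_exp(A), 6))          # -> [[cos 1, sin 1], [-sin 1, cos 1]]
```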


Transposition. The transposed tensor \(A^{\mathrm{T}}\) is defined by

\(A^{\mathrm{T}} x = x A, \quad \forall x \in \mathbb{E}^n,\)  (1.115)

so that one can also write

\(y \cdot (A x) = x \cdot \left(A^{\mathrm{T}} y\right), \quad \forall x, y \in \mathbb{E}^n.\)  (1.116)

Transposition represents a linear operation over a second-order tensor, since \((A + B)^{\mathrm{T}} = A^{\mathrm{T}} + B^{\mathrm{T}}\) and \((\alpha A)^{\mathrm{T}} = \alpha A^{\mathrm{T}}\), \(\forall \alpha \in \mathbb{R}\).

The composition of second-order tensors is transposed by

\((AB)^{\mathrm{T}} = B^{\mathrm{T}} A^{\mathrm{T}}.\)  (1.119)

Indeed, in view of (1.104) and (1.115),

\((AB)^{\mathrm{T}} x = x\, (AB) = (x A)\, B = B^{\mathrm{T}} (x A) = B^{\mathrm{T}} A^{\mathrm{T}} x, \quad \forall x \in \mathbb{E}^n.\)

For the tensor product of two vectors \(a, b \in \mathbb{E}^n\) we further obtain by use of (1.80) and (1.85)

\((a \otimes b)^{\mathrm{T}} = b \otimes a.\)  (1.120)

The transposed tensor exists and is unique, since every tensor \(A \in \mathrm{Lin}^n\) can be represented with respect to the tensor products of the basis vectors in \(\mathbb{E}^n\) in the form (1.87). Applying (1.120) and (1.121) to this representation yields

\(A^{\mathrm{T}} = A^{ij}\, g_j \otimes g_i = A_{ij}\, g^j \otimes g^i = A^i_{\cdot j}\, g^j \otimes g_i = A_i^{\cdot j}\, g_j \otimes g^i,\)  (1.122)

or

\(A^{\mathrm{T}} = A^{ji}\, g_i \otimes g_j = A_{ji}\, g^i \otimes g^j = A_j^{\cdot i}\, g^i \otimes g_j = A^j_{\cdot i}\, g_i \otimes g^j.\)  (1.123)

Comparing the latter result with the original representation (1.87) one observes that the covariant and contravariant components of the transposed tensor can be expressed by

\(\left(A^{\mathrm{T}}\right)^{ij} = A^{ji}, \quad \left(A^{\mathrm{T}}\right)_{ij} = A_{ji},\)  (1.124)

while for the mixed components


\(\left(A^{\mathrm{T}}\right)^i_{\cdot j} = A_j^{\cdot i} = g_{jk}\, A^k_{\cdot l}\, g^{li}, \quad \left(A^{\mathrm{T}}\right)_i^{\cdot j} = A^j_{\cdot i} = g^{jk}\, A_k^{\cdot l}\, g_{li}.\)  (1.125)

For example, the last relation results from (1.88) and (1.116) within the following steps:

\(\left(A^{\mathrm{T}}\right)_i^{\cdot j} = g_i A^{\mathrm{T}} g^j = g^j A g_i = g^j \left(A_k^{\cdot l}\, g^k \otimes g_l\right) g_i = g^{jk}\, A_k^{\cdot l}\, g_{li}.\)

Hence, the homogeneous (covariant or contravariant) components of the transposed tensor result from the original component matrix by reflection with respect to the main diagonal, i.e. by matrix transposition. This does not, however, hold for the mixed components.
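A quick numerical illustration of this caveat (the basis and tensor values are arbitrary choices of mine): for a non-orthonormal basis, the mixed components of \(A^{\mathrm{T}}\) differ from the transposed matrix of mixed components of A, while the contravariant ones do transpose:

```python
import numpy as np

G = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])           # basis g_i (rows)
Gd = np.linalg.inv(G).T                   # dual basis g^i (rows)

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 1.0]])           # orthonormal components of A

mixed   = Gd @ A   @ G.T                  # A^i_{.j}     = g^i A g_j
mixed_T = Gd @ A.T @ G.T                  # (A^T)^i_{.j} = g^i A^T g_j
print(np.allclose(mixed_T, mixed.T))      # False: plain matrix transposition fails
print(np.allclose(Gd @ A.T @ Gd.T, (Gd @ A @ Gd.T).T))  # True for contravariant ones
```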

The transposition operation (1.115) gives rise to the definition of symmetric \(\left(M^{\mathrm{T}} = M\right)\) and skew-symmetric second-order tensors \(\left(W^{\mathrm{T}} = -W\right)\).

Obviously, the identity tensor is symmetric:

\(I^{\mathrm{T}} = I.\)  (1.126)

One can easily show that the tensor \(\hat{w}\) (1.66) is skew-symmetric, so that

\(\hat{w}^{\mathrm{T}} = -\hat{w}.\)  (1.127)

Indeed, by virtue of (1.32) and (1.116) one can write

\(x\, \hat{w}^{\mathrm{T}} y = y\, \hat{w}\, x = y \cdot (w \times x) = [y\, w\, x] = -[x\, w\, y] = -x \cdot (w \times y) = x\, (-\hat{w})\, y, \quad \forall x, y \in \mathbb{E}^3.\)

Inversion. A tensor \(A \in \mathrm{Lin}^n\) is referred to as invertible if there exists a tensor \(A^{-1} \in \mathrm{Lin}^n\) uniquely recovering the vector x from

\(y = A x, \quad \forall x \in \mathbb{E}^n,\)  (1.128)

by means of

\(x = A^{-1} y.\)  (1.129)

The tensor \(A^{-1}\) is called the inverse of A. The set of all invertible tensors \(\mathrm{Inv}^n = \left\{A \in \mathrm{Lin}^n : \exists A^{-1}\right\}\) forms a subset of all second-order tensors \(\mathrm{Lin}^n\). Inserting (1.128) into (1.129) yields

\(x = A^{-1} y = A^{-1} (A x) = \left(A^{-1} A\right) x, \quad \forall x \in \mathbb{E}^n,\)

and consequently

\(A^{-1} A = I.\)  (1.130)


Theorem 1.8. A tensor A is invertible if and only if \(A x = 0\) implies that \(x = 0\).

Proof. First we prove the sufficiency. To this end, we map the vector equation \(A x = 0\) by \(A^{-1}\); according to (1.130) this yields \(0 = A^{-1}(A x) = I x = x\). To prove the necessity we consider a basis \(G = \{g_1, g_2, \ldots, g_n\}\) in \(\mathbb{E}^n\) and show that the vectors \(h_i = A g_i\) \((i = 1, 2, \ldots, n)\) also form a basis of \(\mathbb{E}^n\). If these vectors were linearly dependent, then \(a^i h_i = 0\) with not all scalars \(a^i\) zero, so that \(A a = 0\) for the non-zero vector \(a = a^i g_i\), which contradicts the assumption. Now, let \(h^i\) be the basis dual to \(h_i\) \((i = 1, 2, \ldots, n)\) and consider the tensor \(A' = g_i \otimes h^i\). We show that it is the inverse of A. Indeed, mapping an arbitrary vector \(x = x^i g_i \in \mathbb{E}^n\) by A yields \(y = A x = x^i h_i\), and consequently \(A' y = \left(g_i \otimes h^i\right)\left(x^j h_j\right) = x^j \delta^i_j\, g_i = x^i g_i = x\).

Conversely, it can be shown that an invertible tensor A is the inverse of \(A^{-1}\), and consequently

\(A A^{-1} = I.\)  (1.131)

For the proof we again consider the bases \(g_i\) and \(A g_i\) \((i = 1, 2, \ldots, n)\). Let \(y = y^i A g_i\) be an arbitrary vector in \(\mathbb{E}^n\), and let further \(x = A^{-1} y = y^i g_i\) in view of (1.130). Then, \(A x = y^i A g_i = y\), which implies that the tensor A is the inverse of \(A^{-1}\).

Relation (1.131) implies the uniqueness of the inverse. Indeed, if \(A^{-1}\) and \(\tilde{A}^{-1}\) are two distinct tensors both inverse to A, then there exists at least one vector \(y \in \mathbb{E}^n\) such that \(A^{-1} y \ne \tilde{A}^{-1} y\). Mapping both sides of this inequality by A and taking (1.131) into account, we immediately come to the contradiction \(y \ne y\).

By means of (1.120), (1.126) and (1.131) we can write (see Exercise 1.38)

\(\left(A^{-1}\right)^{\mathrm{T}} = \left(A^{\mathrm{T}}\right)^{-1} = A^{-\mathrm{T}}.\)

The composition of two arbitrary invertible tensors A and B is inverted by

\((AB)^{-1} = B^{-1} A^{-1}.\)  (1.133)

Indeed, let \(y = (AB)\, x\). Mapping both sides of this vector identity by \(A^{-1}\) and then by \(B^{-1}\), we obtain with the aid of (1.130)

\(x = B^{-1} A^{-1} y, \quad \forall x \in \mathbb{E}^n.\)

On the basis of transposition and inversion one defines the so-called orthogonal tensors. They do not change after consecutive transposition and inversion and form the following subset of \(\mathrm{Lin}^n\):

\(\mathrm{Orth}^n = \left\{Q \in \mathrm{Lin}^n : Q = Q^{-\mathrm{T}}\right\}.\)


For orthogonal tensors we can write in view of (1.130) and (1.131)

\(Q Q^{\mathrm{T}} = Q^{\mathrm{T}} Q = I, \quad \forall Q \in \mathrm{Orth}^n.\)

It can be shown that the rotation tensor (1.73) is orthogonal. To this end, we complete the unit vector e defining the rotation axis to an orthonormal basis \(\{e, q, p\}\), where \(e = q \times p\). Using the vector identity \(p\,(q \cdot x) - q\,(p \cdot x) = (q \times p) \times x\), \(\forall x \in \mathbb{E}^3\), the tensor \(\hat{e}\) can be expressed as

\(\hat{e} = p \otimes q - q \otimes p.\)

The rotation tensor (1.73) thus takes the form

\(R = \cos\omega\, I + \sin\omega\, (p \otimes q - q \otimes p) + (1 - \cos\omega)\, e \otimes e.\)

Alternatively, one can express the transposed rotation tensor (1.73) by

\(R^{\mathrm{T}} = \cos\omega\, I - \sin\omega\, \hat{e} + (1 - \cos\omega)\, e \otimes e = \cos(-\omega)\, I + \sin(-\omega)\, \hat{e} + [1 - \cos(-\omega)]\, e \otimes e,\)  (1.139)

taking (1.121), (1.126) and (1.127) into account. Thus, \(R^{\mathrm{T}}\) (1.139) describes the rotation about the same axis e by the angle \(-\omega\), which likewise implies that \(R^{\mathrm{T}} R x = x\), \(\forall x \in \mathbb{E}^3\).

It is of interest that the exponential function (1.114) of a skew-symmetric tensor yields an orthogonal tensor. Indeed, a skew-symmetric tensor commutes with its transposed counterpart, so that the identity \(\exp(A + B) = \exp(A)\, \exp(B)\), valid for commutative tensors (Exercise 1.28), applies. Keeping in mind that \(\left(A^{\mathrm{T}}\right)^k = \left(A^k\right)^{\mathrm{T}}\) for integer k (Exercise 1.36), we can write

\(I = \exp(0) = \exp(W - W) = \exp(W)\, \exp\left(W^{\mathrm{T}}\right) = \exp(W)\, [\exp(W)]^{\mathrm{T}},\)  (1.140)

where W denotes an arbitrary skew-symmetric tensor.
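A numerical confirmation of (1.140) (scipy assumed for the matrix exponential; the skew-symmetric tensor is a random sample of mine):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
W = 0.5 * (B - B.T)                  # arbitrary skew-symmetric tensor, W^T = -W

Q = expm(W)                          # exponential of a skew-symmetric tensor
print(np.allclose(Q @ Q.T, np.eye(3)))      # True: Q is orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))    # and in fact a rotation
```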


1.10 Scalar Product of Second-Order Tensors

Consider two second-order tensors \(a \otimes b\) and \(c \otimes d\) given in terms of the tensor product (1.80). Their scalar product can be defined in the following manner:

\((a \otimes b) : (c \otimes d) = (a \cdot c)\, (b \cdot d), \quad a, b, c, d \in \mathbb{E}^n.\)  (1.141)

It leads to the following identity (Exercise 1.40):

\((c \otimes d) : A = c\, A\, d = d\, A^{\mathrm{T}} c.\)  (1.142)

For two arbitrary tensors A and B given in the form (1.87) we thus obtain

\(A : B = A^{ij} B_{ij} = A_{ij} B^{ij} = A^i_{\cdot j}\, B_i^{\cdot j} = A_i^{\cdot j}\, B^i_{\cdot j}.\)  (1.143)

Similarly to vectors, the scalar product of tensors is a real function characterized by the following properties (see Exercise 1.41):

(D.1) \(A : B = B : A\) (commutative rule),

(D.2) \(A : (B + C) = A : B + A : C\) (distributive rule),

(D.3) \(\alpha (A : B) = (\alpha A) : B = A : (\alpha B)\) (associative rule for multiplication by a scalar), \(\forall A, B \in \mathrm{Lin}^n\), \(\forall \alpha \in \mathbb{R}\),

(D.4) \(A : A \ge 0\) \(\forall A \in \mathrm{Lin}^n\), \(A : A = 0\) if and only if \(A = 0\).

We prove, for example, the property (D.4). To this end, we represent an arbitrary tensor A with respect to an orthonormal basis of \(\mathrm{Lin}^n\) as \(A = A^{ij}\, e_i \otimes e_j\), where \(A^{ij} = A_{ij}\) \((i, j = 1, 2, \ldots, n)\), since the vectors \(e_i = e^i\) \((i = 1, 2, \ldots, n)\) form an orthonormal basis of \(\mathbb{E}^n\). Keeping (1.143) in mind, we then obtain

\(A : A = A^{ij} A_{ij} = \sum_{i=1}^{n}\sum_{j=1}^{n} \left(A^{ij}\right)^2 \ge 0.\)

Using this important property one can define the norm of a second-order tensor by

\(\|A\| = \sqrt{A : A}, \quad A \in \mathrm{Lin}^n.\)  (1.144)

For the scalar product of tensors one of which is given by a composition we can write

\(A : (BC) = \left(B^{\mathrm{T}} A\right) : C = \left(A C^{\mathrm{T}}\right) : B.\)  (1.145)

We prove this identity first for the tensor products:


For three arbitrary tensors A, B and C given in the form (1.87) the identity can then be verified in view of (1.109), (1.125) and (1.143); the second relation in (1.145) is proved similarly.

On the basis of the scalar product one defines the trace of second-order tensors by

\(\mathrm{tr}\,A = A : I.\)  (1.148)

For the tensor product (1.80) the trace (1.148) yields in view of (1.142)

\(\mathrm{tr}\,(a \otimes b) = a \cdot b.\)  (1.149)

With the aid of the relation (1.145) we further write

\(\mathrm{tr}\,(AB) = A : B^{\mathrm{T}} = A^{\mathrm{T}} : B.\)  (1.150)

In view of (D.1) this also implies that

\(\mathrm{tr}\,(AB) = \mathrm{tr}\,(BA).\)  (1.151)
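A spot-check of (1.148-1.151) with random samples (numpy assumed; the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(2, 3, 3))

double_dot = lambda X, Y: np.sum(X * Y)     # A : B in orthonormal components (1.143)

print(np.isclose(double_dot(A, np.eye(3)), np.trace(A)))   # tr A = A : I    (1.148)
print(np.isclose(np.trace(A @ B), double_dot(A, B.T)))     # tr(AB) = A : B^T (1.150)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))        # tr(AB) = tr(BA)  (1.151)
```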

1.11 Decompositions of Second-Order Tensors

Additive decomposition into a symmetric and a skew-symmetric part. Every second-order tensor can be decomposed additively into a symmetric and a skew-symmetric part by

\(A = \mathrm{sym}\,A + \mathrm{skew}\,A,\)  (1.152)

where

\(\mathrm{sym}\,A = \frac{1}{2}\left(A + A^{\mathrm{T}}\right), \quad \mathrm{skew}\,A = \frac{1}{2}\left(A - A^{\mathrm{T}}\right).\)  (1.153)

Symmetric and skew-symmetric tensors form subsets of \(\mathrm{Lin}^n\) defined respectively by

\(\mathrm{Sym}^n = \left\{M \in \mathrm{Lin}^n : M = M^{\mathrm{T}}\right\}, \quad \mathrm{Skew}^n = \left\{W \in \mathrm{Lin}^n : W = -W^{\mathrm{T}}\right\}.\)

One can easily show that these subsets of \(\mathrm{Lin}^n\) are vector spaces, called subspaces. Indeed, the axioms (A.1-A.4) and (B.1-B.4), including operations with the zero tensor, hold for symmetric as well as for skew-symmetric tensors. The zero tensor is the only linear mapping that is both symmetric and skew-symmetric, so that \(\mathrm{Sym}^n \cap \mathrm{Skew}^n = \{0\}\).
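The decomposition (1.152-1.153) in code, as a minimal sketch (function names mine):

```python
import numpy as np

def sym(A):  return 0.5 * (A + A.T)     # symmetric part (1.153)
def skew(A): return 0.5 * (A - A.T)     # skew-symmetric part (1.153)

A = np.arange(9, dtype=float).reshape(3, 3)
print(np.allclose(A, sym(A) + skew(A)))        # additive decomposition (1.152)
print(np.isclose(np.sum(sym(A) * skew(A)), 0)) # orthogonality M : W = 0 (1.161)
```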

For every symmetric tensor \(M = M^{ij}\, g_i \otimes g_j\) it follows from (1.124) that \(M^{ij} = M^{ji}\) \((i \ne j;\ i, j = 1, 2, \ldots, n)\). Thus, we can write

\(M = \sum_{i=1}^{n} M^{ii}\, g_i \otimes g_i + \sum_{\substack{i,j=1\\ i>j}}^{n} M^{ij} \left(g_i \otimes g_j + g_j \otimes g_i\right), \quad M \in \mathrm{Sym}^n.\)  (1.156)

Similarly we can write for a skew-symmetric tensor

\(W = \sum_{\substack{i,j=1\\ i>j}}^{n} W^{ij} \left(g_i \otimes g_j - g_j \otimes g_i\right), \quad W \in \mathrm{Skew}^n,\)  (1.157)

taking into account that \(W^{ii} = 0\) and \(W^{ij} = -W^{ji}\) \((i \ne j;\ i, j = 1, 2, \ldots, n)\). Therefore, the basis of \(\mathrm{Sym}^n\) is formed by the n tensors \(g_i \otimes g_i\) and the \(\frac{1}{2}n(n-1)\) tensors \(g_i \otimes g_j + g_j \otimes g_i\), while the basis of \(\mathrm{Skew}^n\) consists of the \(\frac{1}{2}n(n-1)\) tensors \(g_i \otimes g_j - g_j \otimes g_i\) \((i > j;\ i, j = 1, 2, \ldots, n)\). The dimensions of \(\mathrm{Sym}^n\) and \(\mathrm{Skew}^n\) are thus \(\frac{1}{2}n(n+1)\) and \(\frac{1}{2}n(n-1)\), respectively. According to (1.152), any basis of \(\mathrm{Skew}^n\) complements any basis of \(\mathrm{Sym}^n\) to a basis of \(\mathrm{Lin}^n\). Taking (1.40) into account, a skew-symmetric tensor in three-dimensional space can be represented by means of a single vector w such that \(W = \hat{w}\).


Thus, every skew-symmetric tensor in three-dimensional space describes a cross product by a vector w (1.159), called the axial vector of W, so that \(W x = w \times x\), \(\forall x \in \mathbb{E}^3\).
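In orthonormal components the axial vector can be read off directly from the skew-symmetric matrix. A sketch (function name and test values are mine):

```python
import numpy as np

def axial_vector(W):
    """Axial vector w of a skew-symmetric W so that W @ x == np.cross(w, x)."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

W = np.array([[0.0, -3.0, 2.0],
              [3.0, 0.0, -1.0],
              [-2.0, 1.0, 0.0]])
w = axial_vector(W)                         # -> [1, 2, 3]
x = np.array([0.5, -1.0, 2.0])
print(np.allclose(W @ x, np.cross(w, x)))   # True
```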

Obviously, symmetric and skew-symmetric tensors are mutually orthogonal, such that (see Exercise 1.45)

\(M : W = 0, \quad \forall M \in \mathrm{Sym}^n, \ \forall W \in \mathrm{Skew}^n.\)  (1.161)

Spaces characterized by this property are called orthogonal.

Additive decomposition into a spherical and a deviatoric part. For every second-order tensor A we can write

\(A = \mathrm{sph}\,A + \mathrm{dev}\,A,\)  (1.162)

where the spherical and the deviatoric part are defined by

\(\mathrm{sph}\,A = \frac{1}{n}\,\mathrm{tr}\,(A)\, I, \quad \mathrm{dev}\,A = A - \frac{1}{n}\,\mathrm{tr}\,(A)\, I.\)  (1.163)

Spherical tensors are characterized by the form \(S = \alpha I\), with \(\alpha\) a scalar, while deviatoric tensors are defined by the condition \(\mathrm{tr}\,D = 0\). Just as symmetric and skew-symmetric tensors, spherical and deviatoric tensors form orthogonal subspaces of \(\mathrm{Lin}^n\).
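And the spherical/deviatoric split (1.162-1.163) as a sketch (function names mine):

```python
import numpy as np

def sph(A): return (np.trace(A) / A.shape[0]) * np.eye(A.shape[0])  # (1.163)
def dev(A): return A - sph(A)                                        # (1.163)

A = np.diag([3.0, 6.0, 9.0])
print(sph(A))                                  # 6 * I here
print(np.isclose(np.trace(dev(A)), 0.0))       # deviator is trace-free
print(np.isclose(np.sum(sph(A) * dev(A)), 0))  # orthogonal subspaces
```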

1.12 Tensors of Higher Orders

Similarly to second-order tensors we can define tensors of higher orders. For example, a third-order tensor can be defined as a linear mapping from \(\mathbb{E}^n\) to \(\mathrm{Lin}^n\). Thus, we can write

\(Y = \mathbf{A}\, x, \quad Y \in \mathrm{Lin}^n, \quad \forall x \in \mathbb{E}^n,\)  (1.164)

where \(\mathbf{A}\) denotes a third-order tensor. By analogy with (1.87), third-order tensors can be represented with respect to a basis in \(\mathrm{Lin}^n\) as


\(\mathbf{A} = \mathrm{A}^{ijk}\, g_i \otimes g_j \otimes g_k = \mathrm{A}_{ijk}\, g^i \otimes g^j \otimes g^k = \mathrm{A}^i_{\cdot jk}\, g_i \otimes g^j \otimes g^k = \mathrm{A}^{i\cdot k}_{\cdot j}\, g_i \otimes g^j \otimes g_k.\)  (1.165)

For the components of the tensor \(\mathbf{A}\) (1.165) we can thus write by analogy with (1.146)

\(\mathrm{A}^{ijk} = \mathrm{A}^{ij}_{\cdot\cdot s}\, g^{sk} = \mathrm{A}^i_{\cdot st}\, g^{sj} g^{tk} = \mathrm{A}_{rst}\, g^{ri} g^{sj} g^{tk},\)

\(\mathrm{A}_{ijk} = \mathrm{A}^r_{\cdot jk}\, g_{ri} = \mathrm{A}^{rs}_{\cdot\cdot k}\, g_{ri} g_{sj} = \mathrm{A}^{rst}\, g_{ri} g_{sj} g_{tk}.\)  (1.166)

Exercises

1.1. Prove that if \(x \in V\) is a vector and \(\alpha \in \mathbb{R}\) is a scalar, then the following identities hold: (a) \(-0 = 0\), (b) \(\alpha 0 = 0\), (c) \(0 x = 0\), (d) \(-x = (-1)\, x\), (e) if \(\alpha x = 0\), then either \(\alpha = 0\) or \(x = 0\) or both.

1.2. Prove that \(x_i \ne 0\) \((i = 1, 2, \ldots, n)\) for linearly independent vectors \(x_1, x_2, \ldots, x_n\). In other words, linearly independent vectors are all non-zero.

1.3. Prove that any non-empty subset of linearly independent vectors \(x_1, x_2, \ldots, x_n\) is also linearly independent.

1.4. Write out in full the following expressions for \(n = 3\): (a) \(\delta^i_j a^j\), (b) \(\delta_{ij} x^i x^j\), (c) \(\delta^i_i\), (d) \(\dfrac{\partial f_i}{\partial x_j}\, dx_j\).

1.6. Prove that a set of mutually orthogonal non-zero vectors is always linearly independent.

1.7. Prove the so-called parallelogram law: \(\|x + y\|^2 = \|x\|^2 + 2\, x \cdot y + \|y\|^2\).

1.8. Let \(G = \{g_1, g_2, \ldots, g_n\}\) be a basis in \(\mathbb{E}^n\) and \(a \in \mathbb{E}^n\) be a vector. Prove that \(a \cdot g_i = 0\) \((i = 1, 2, \ldots, n)\) if and only if \(a = 0\).

1.9. Prove that \(a = b\) if and only if \(a \cdot x = b \cdot x\), \(\forall x \in \mathbb{E}^n\).

1.10. (a) Construct an orthonormal set of vectors by orthogonalizing and normalizing (with the aid of the procedure described in Sect. 1.4) the linearly independent vectors \(g_1, g_2, g_3\), whose components are given with respect to an orthonormal basis.

(b) Construct a basis in \(\mathbb{E}^3\) dual to the given above utilizing relations (1.16)₂, (1.18) and (1.19).

(c) As an alternative, construct a basis in \(\mathbb{E}^3\) dual to the given above by means of (1.21)₁, (1.24) and (1.25)₂.

(d) Calculate again the vectors \(g^i\) dual to \(g_i\) \((i = 1, 2, 3)\) by using relations (1.33) and (1.35). Compare the result with the solution of problem (b).

1.11. Verify that the vectors (1.33) are linearly independent.

1.12. Prove identities (1.41) and (1.42) by means of (1.18), (1.19) and (1.24), respectively.

1.13. Prove relations (1.40) and (1.44) by using (1.39) and (1.43), respectively.

1.14. Verify the following identities involving the permutation symbol (1.36) for \(n = 3\): (a) \(\delta^{ij} e_{ijk} = 0\), (b) \(e^{ikm} e_{jkm} = 2\delta^i_j\), (c) \(e^{ijk} e_{ijk} = 6\), (d) \(e^{ijm} e_{klm} = \delta^i_k \delta^j_l - \delta^i_l \delta^j_k\).

1.18. Prove formula (1.58), where the negative tensor \(-A\) is defined by (1.53).

1.19. Prove that not every second-order tensor in \(\mathrm{Lin}^n\) can be represented as a tensor product of two vectors \(a, b \in \mathbb{E}^n\) as \(a \otimes b\).

1.23. Evaluate the components of the tensor describing a rotation about the axis \(e_3\) by the angle \(\alpha\).

1.24. Let \(A = A^{ij}\, g_i \otimes g_j\), where the vectors \(g_i\) \((i = 1, 2, 3)\) are given in Exercise 1.10. Evaluate the components \(A_{ij}\), \(A^i_{\cdot j}\) and \(A_i^{\cdot j}\).


1.26. Let \(A = A^i_{\cdot j}\, g_i \otimes g^j\), \(B = B^i_{\cdot j}\, g_i \otimes g^j\), \(C = C^i_{\cdot j}\, g_i \otimes g^j\) and \(D = D^i_{\cdot j}\, g_i \otimes g^j\) be given in components. Find commutative pairs of tensors.

1.27. Let A and B be two commutative tensors. Write out in full \((A + B)^k\), where \(k = 2, 3, \ldots\)

1.28. Prove that

\(\exp(A + B) = \exp(A)\, \exp(B),\)  (1.170)

where A and B commute.

1.29. Prove that \(\exp(kA) = [\exp(A)]^k\), where \(k = 2, 3, \ldots\)

1.31. Prove that \(\exp(-A)\, \exp(A) = \exp(A)\, \exp(-A) = I\).

1.32. Prove that \(\exp(A + B) = \exp(A) + \exp(B) - I\) if \(AB = BA = 0\).

1.33. Prove that \(\exp\left(Q A Q^{\mathrm{T}}\right) = Q\, \exp(A)\, Q^{\mathrm{T}}\), \(\forall Q \in \mathrm{Orth}^n\).

1.34. Compute the exponentials of the tensors \(D = D^i_{\cdot j}\, g_i \otimes g^j\), \(E = E^i_{\cdot j}\, g_i \otimes g^j\) and \(F = F^i_{\cdot j}\, g_i \otimes g^j\) given in components.

1.37. Evaluate the components \(B^{ij}\), \(B_{ij}\), \(B^i_{\cdot j}\) and \(B_i^{\cdot j}\) of the tensor \(B = A^{\mathrm{T}}\), where A is defined in Exercise 1.24.

1.41. Prove by means of (1.141-1.143) the properties of the scalar product (D.1-D.3).

1.43. Express \(\mathrm{tr}\,A\) in terms of the components \(A^i_{\cdot j}\), \(A_{ij}\), \(A^{ij}\).

1.44. Let \(W = W^{ij}\, g_i \otimes g_j\), where the vectors \(g_i\) \((i = 1, 2, 3)\) are given in Exercise 1.10. Calculate the axial vector of W.

1.45. Prove that \(M : W = 0\), where M is a symmetric tensor and W a skew-symmetric tensor.

1.46. Evaluate \(\mathrm{tr}\,W^k\), where W is a skew-symmetric tensor and \(k = 1, 3, 5, \ldots\)

1.47. Verify that \(\mathrm{sym}\,(\mathrm{skew}\,A) = \mathrm{skew}\,(\mathrm{sym}\,A) = 0\), \(\forall A \in \mathrm{Lin}^n\).

1.48. Prove that \(\mathrm{sph}\,(\mathrm{dev}\,A) = \mathrm{dev}\,(\mathrm{sph}\,A) = 0\), \(\forall A \in \mathrm{Lin}^n\).

2 Vector and Tensor Analysis in Euclidean Space

2.1 Vector- and Tensor-Valued Functions, Differential Calculus

In the following we consider a vector-valued function \(x(t)\) and a tensor-valued function \(A(t)\) of a real variable t. Henceforth, we assume that these functions are continuous, such that

\(\lim_{t \to t_0} \left[x(t) - x(t_0)\right] = 0, \quad \lim_{t \to t_0} \left[A(t) - A(t_0)\right] = 0\)  (2.1)

for all \(t_0\) within the definition domain. The functions \(x(t)\) and \(A(t)\) are called differentiable if the following limits

\(\frac{dx}{dt} = \lim_{s \to 0} \frac{x(t+s) - x(t)}{s}, \quad \frac{dA}{dt} = \lim_{s \to 0} \frac{A(t+s) - A(t)}{s}\)  (2.2)

exist and are finite. They are referred to as the derivatives of the vector- and tensor-valued functions \(x(t)\) and \(A(t)\), respectively.

For differentiable vector- and tensor-valued functions the usual rules of differentiation hold.

1) Product of a scalar function with a vector- or tensor-valued function:

\(\frac{d}{dt}\left[u(t)\, x(t)\right] = \frac{du}{dt}\, x(t) + u(t)\, \frac{dx}{dt},\)  (2.3)

\(\frac{d}{dt}\left[u(t)\, A(t)\right] = \frac{du}{dt}\, A(t) + u(t)\, \frac{dA}{dt}.\)  (2.4)

2) Mapping of a vector-valued function by a tensor-valued function:

\(\frac{d}{dt}\left[A(t)\, x(t)\right] = \frac{dA}{dt}\, x(t) + A(t)\, \frac{dx}{dt}.\)  (2.5)


3) Scalar product of two vector- or tensor-valued functions:

\(\frac{d}{dt}\left[x(t) \cdot y(t)\right] = \frac{dx}{dt} \cdot y(t) + x(t) \cdot \frac{dy}{dt},\)  (2.6)

\(\frac{d}{dt}\left[A(t) : B(t)\right] = \frac{dA}{dt} : B(t) + A(t) : \frac{dB}{dt}.\)  (2.7)

4) Tensor product of two vector-valued functions:

\(\frac{d}{dt}\left[x(t) \otimes y(t)\right] = \frac{dx}{dt} \otimes y(t) + x(t) \otimes \frac{dy}{dt}.\)  (2.8)

5) Composition of two tensor-valued functions:

\(\frac{d}{dt}\left[A(t)\, B(t)\right] = \frac{dA}{dt}\, B(t) + A(t)\, \frac{dB}{dt}.\)  (2.9)

6) Chain rule:

\(\frac{d}{dt}\, x[u(t)] = \frac{dx}{du}\, \frac{du}{dt}, \quad \frac{d}{dt}\, A[u(t)] = \frac{dA}{du}\, \frac{du}{dt}.\)  (2.10)

7) Chain rule for functions of several arguments:

\(\frac{d}{dt}\, x[u(t), v(t)] = \frac{\partial x}{\partial u}\, \frac{du}{dt} + \frac{\partial x}{\partial v}\, \frac{dv}{dt},\)  (2.11)

\(\frac{d}{dt}\, A[u(t), v(t)] = \frac{\partial A}{\partial u}\, \frac{du}{dt} + \frac{\partial A}{\partial v}\, \frac{dv}{dt},\)  (2.12)

where \(\partial/\partial u\) denotes the partial derivative. It is defined for vector- and tensor-valued functions in the standard manner by

\(\frac{\partial x(u, v)}{\partial u} = \lim_{s \to 0} \frac{x(u+s, v) - x(u, v)}{s}, \quad \frac{\partial A(u, v)}{\partial u} = \lim_{s \to 0} \frac{A(u+s, v) - A(u, v)}{s}.\)

The differentiation rules can be verified with the aid of elementary differential calculus. For example, for the derivative of the composition of two second-order tensors (2.9) we proceed as follows. Define two tensor-valued functions by

\(O_1(s) = \frac{A(t+s) - A(t)}{s} - \frac{dA}{dt}, \quad O_2(s) = \frac{B(t+s) - B(t)}{s} - \frac{dB}{dt}.\)  (2.15)

Bearing the definition of the derivative (2.2) in mind, we have

\(\lim_{s \to 0} O_1(s) = 0, \quad \lim_{s \to 0} O_2(s) = 0.\)
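A finite-difference spot-check of the composition rule (2.9); the sample functions and step size are my own choices:

```python
import numpy as np

def A(t): return np.array([[np.cos(t), t], [0.0, np.exp(t)]])
def B(t): return np.array([[t**2, 1.0], [np.sin(t), 2.0]])

def dnum(F, t, h=1e-6):
    return (F(t + h) - F(t - h)) / (2 * h)       # central difference

t = 0.7
lhs = dnum(lambda s: A(s) @ B(s), t)             # d/dt [A(t) B(t)]
rhs = dnum(A, t) @ B(t) + A(t) @ dnum(B, t)      # dA/dt B + A dB/dt  (2.9)
print(np.allclose(lhs, rhs, atol=1e-6))          # True up to discretization error
```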


2.2 Coordinates in Euclidean Space, Tangent Vectors

Definition 2.1. A coordinate system is a one-to-one correspondence between vectors in the n-dimensional Euclidean space \(\mathbb{E}^n\) and a set of n real numbers \((x^1, x^2, \ldots, x^n)\). These numbers are called coordinates of the corresponding vectors. Thus, we can write

\(x^i = x^i(r) \ \Leftrightarrow\ r = r\left(x^1, x^2, \ldots, x^n\right),\)  (2.16)

where \(r \in \mathbb{E}^n\) and \(x^i \in \mathbb{R}\) \((i = 1, 2, \ldots, n)\). Henceforth, we assume that the functions \(x^i = x^i(r)\) and \(r = r\left(x^1, x^2, \ldots, x^n\right)\) are sufficiently differentiable.

Example. Cylindrical coordinates in \(\mathbb{E}^3\). The cylindrical coordinates (Fig. 2.1) are defined by

\(r = r(\varphi, z, r) = r \cos\varphi\, e_1 + r \sin\varphi\, e_2 + z\, e_3\)  (2.17)

and

\(\varphi = \begin{cases} \arccos \dfrac{r \cdot e_1}{r} & \text{if } r \cdot e_2 \ge 0, \\[2mm] 2\pi - \arccos \dfrac{r \cdot e_1}{r} & \text{if } r \cdot e_2 < 0. \end{cases}\)  (2.18)
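A round-trip sketch of (2.17-2.18) (function names and test values mine; numpy assumed):

```python
import numpy as np

def to_cartesian(phi, z, r):
    # r(phi, z, r) = r cos(phi) e1 + r sin(phi) e2 + z e3   (2.17)
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def to_cylindrical(p):
    r = np.hypot(p[0], p[1])
    phi = np.arctan2(p[1], p[0]) % (2 * np.pi)   # same branch choice as (2.18)
    return phi, p[2], r

phi, z, r = 2.3, -1.0, 4.0
print(to_cylindrical(to_cartesian(phi, z, r)))   # recovers (2.3, -1.0, 4.0)
```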

