Kalman-Yakubovich and Normal Sylvester Matrix Equations

Part of the document Complex Conjugate Matrix Equations for Systems and Control (pages 25-29)

1.2 Univariate Linear Matrix Equations

1.2.2 Kalman-Yakubovich and Normal Sylvester Matrix Equations

A general form of the continuous-time Lyapunov matrix equation is the so-called normal Sylvester matrix equation

AX − XB = C. (1.8)

A general form of the discrete-time Lyapunov matrix equation is the so-called Kalman-Yakubovich matrix equation

X − AXB = C. (1.9)

In the matrix equations (1.8) and (1.9), A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p} are the known matrices, and X ∈ R^{n×p} is the matrix to be determined. On the solvability of the normal Sylvester matrix equation (1.8), there exists the following result, which is well known as Roth's removal rule.
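Both (1.8) and (1.9) are linear in the unknown X, so for small sizes they can be solved directly by vectorization: with the column-stacking operator vec(·), one has vec(AXB) = (Bᵀ ⊗ A) vec(X), so (1.8) becomes (I_p ⊗ A − Bᵀ ⊗ I_n) vec(X) = vec(C), and (1.9) becomes (I_{np} − Bᵀ ⊗ A) vec(X) = vec(C). A minimal NumPy sketch of this linear structure (the matrices are arbitrary test data, and this is an illustration, not an efficient solver):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
C = rng.standard_normal((n, p))

# Column-stacking vec and its inverse; vec(AXB) = kron(B.T, A) @ vec(X).
vec = lambda M: M.reshape(-1, order="F")
unvec = lambda v: v.reshape(n, p, order="F")

# Normal Sylvester equation (1.8): AX - XB = C.
K_syl = np.kron(np.eye(p), A) - np.kron(B.T, np.eye(n))
X_syl = unvec(np.linalg.solve(K_syl, vec(C)))
assert np.allclose(A @ X_syl - X_syl @ B, C)

# Kalman-Yakubovich equation (1.9): X - AXB = C.
K_ky = np.eye(n * p) - np.kron(B.T, A)
X_ky = unvec(np.linalg.solve(K_ky, vec(C)))
assert np.allclose(X_ky - A @ X_ky @ B, C)
```

Forming the Kronecker matrices costs O(n³p³) work, which is why the numerical methods surveyed below avoid them.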

Theorem 1.5 ([210]) Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the normal Sylvester matrix equation (1.8) has a solution if and only if the following two partitioned matrices are similar:

$$\begin{bmatrix} A & C \\ 0 & B \end{bmatrix}, \qquad \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix};$$

or equivalently, the following two polynomial matrices are equivalent:

$$\begin{bmatrix} sI - A & C \\ 0 & sI - B \end{bmatrix}, \qquad \begin{bmatrix} sI - A & 0 \\ 0 & sI - B \end{bmatrix}.$$

This matrix equation has a unique solution if and only if λ(A) ∩ λ(B) = ∅.
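The uniqueness condition of Theorem 1.5 is easy to check numerically, and SciPy's `solve_sylvester` (an implementation of the Bartels-Stewart algorithm discussed below) then produces the unique solution. Note that SciPy's convention is AX + XB = Q, so −B must be passed to solve (1.8); the matrices here are arbitrary test data:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 4))

# Theorem 1.5 uniqueness test: the spectra of A and B must be disjoint.
gap = np.abs(np.linalg.eigvals(A)[:, None] - np.linalg.eigvals(B)[None, :]).min()
assert gap > 1e-8  # disjoint spectra, so (1.8) has a unique solution

# SciPy solves AX + XB = Q, so pass -B to solve AX - XB = C.
X = solve_sylvester(A, -B, C)
assert np.allclose(A @ X - X @ B, C)
```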

The result in the preceding theorem was generalized to the Kalman-Yakubovich matrix equation (1.9) in [238], as stated in the following theorem.

Theorem 1.6 Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C ∈ R^{n×p}, the Kalman-Yakubovich matrix equation (1.9) has a solution if and only if there exist nonsingular real matrices S and R such that

$$S \begin{bmatrix} -sI + A & C \\ 0 & -sB + I \end{bmatrix} R = \begin{bmatrix} -sI + A & 0 \\ 0 & -sB + I \end{bmatrix}.$$

On the numerical solutions to the normal Sylvester matrix equations, a number of results have been reported in the literature over the past 30 years. The Bartels-Stewart method [13] was perhaps the first numerically stable approach to systematically solving small-to-medium scale Lyapunov and normal Sylvester matrix equations.

The basic idea of this method is to apply the Schur decomposition to transform the equation into a triangular system which can be solved efficiently by forward or backward substitutions. To save computation time, the Bartels-Stewart method was extended in [167] to treat the adjoint equations. In [131, 140], the backward stability analysis and backward error analysis of the Bartels-Stewart algorithm were given. In [220], three columnwise direct solver schemes were proposed by modifying the Bartels-Stewart algorithm. In [133], the so-called Hessenberg-Schur algorithm was proposed for the normal Sylvester matrix equation (1.8). This algorithm requires the transformation of the larger of the two matrices A and B, say A, into the upper Hessenberg form and the other, B, into the real Schur form. Like the Bartels-Stewart and Hammarling algorithms, the Hessenberg-Schur method is also an example of the transformation method. Different from the Bartels-Stewart algorithm, the Hessenberg-Schur algorithm only requires the matrix A to be reduced to Hessenberg form. In [205], a numerical algorithm was proposed to solve the normal Sylvester matrix equation (1.8) by orthogonal reduction of the matrix B to a block upper Hessenberg form. In [17], a factored ADI (alternating direction implicit) method was presented. In [139], projection methods that use the extended block Arnoldi process were proposed to solve low-rank normal Sylvester matrix equations.
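A rough sketch of the Schur-reduction idea behind the Bartels-Stewart method might look as follows. For simplicity this sketch uses the complex Schur form, so the reduced equation R Y − Y S = F is strictly triangular and can be solved one column of Y at a time; the original algorithm works with the real Schur form and quasi-triangular blocks:

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def sylvester_schur(A, B, C):
    """Solve AX - XB = C via Schur reduction (Bartels-Stewart idea, sketch)."""
    R, U = schur(A, output="complex")          # A = U R U^H, R upper triangular
    S, V = schur(B, output="complex")          # B = V S V^H, S upper triangular
    F = U.conj().T @ C @ V                     # transformed right-hand side
    Y = np.zeros_like(F)
    I = np.eye(A.shape[0])
    for k in range(B.shape[0]):
        # Column k of R Y - Y S = F: (R - S[k,k] I) y_k = f_k + sum_{j<k} S[j,k] y_j
        rhs = F[:, k] + Y[:, :k] @ S[:k, k]
        Y[:, k] = solve_triangular(R - S[k, k] * I, rhs)
    X = U @ Y @ V.conj().T                     # transform back
    return X.real if np.isrealobj(A) and np.isrealobj(B) else X

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal(s) for s in [(5, 5), (3, 3), (5, 3)])
X = sylvester_schur(A, B, C)
assert np.allclose(A @ X - X @ B, C)
```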

For explicit solutions to the normal Sylvester matrix equation (1.8), perhaps the earliest result is the one in the form of a finite double matrix series for the case where both A and B are in Jordan form [183]. In [157], two explicit solutions were established in terms of principal idempotents and nilpotents of the coefficient matrices. In [37], explicit general solutions were given by using eigenprojections.

In [171], an infinite series representation of the unique solution was established by converting the normal Sylvester matrix equation (1.8) into a Kalman-Yakubovich matrix equation of the form (1.9). When B is in Jordan form, a finite iterative approach was proposed in [92] to solve the equation (1.8). When A is a normalized lower Hessenberg matrix, the normal Sylvester matrix equation (1.8) was investigated in [46]. It was pointed out in [46] that the solution is uniquely determined by its first row. When the right-hand side of the normal Sylvester equation (1.8) is a matrix of rank 1, a simple iterative method was proposed in [233] based on an extension of the Astrom-Jury-Agniel algorithm.

When the coefficient matrices are not in any canonical forms, explicit solutions in the form of X = MN^{-1} were established for the normal Sylvester matrix equation (1.8) in the literature, for example, [152, 182, 194, 258]. In [152, 182, 258], the matrix N is the value of the eigenpolynomial of A at B, while in [194] it is the value of an annihilating polynomial of A at B. In [152], the solution was obtained by applying the Cayley-Hamilton theorem, and M was expressed as the sum of a group of matrices which can be iteratively derived in terms of the coefficient matrices. In [182], the solution was derived based on a Bezout identity related to the coprimeness of two polynomials, and M was given in terms of the coefficient matrices of adj(sI − A).

In [258], the solution was established with the help of Kronecker maps, and M was represented by the controllability matrix and observability matrix. In [194], the solution was constructed based on the similarity of two partitioned matrices, and M was provided in a finite double series form associated with the coefficient matrices. In addition, by applying spectral theory, in [171] the unique solution was expressed by a contour integration on resolvents of A and B. By applying the Faddeev iterative sequence, a finite double series solution was also derived in [137]. In [47], a closed-form finite series representation of the unique solution was developed. In this solution, some coefficients are closely related to the companion matrices of the characteristic polynomials of the matrices A and B. The result in [47] is very elegant, and thus it is provided in the following theorem. Before proceeding, for a polynomial

$$\alpha(s) = \alpha_0 + \alpha_1 s + \cdots + \alpha_{t-1} s^{t-1} + s^t,$$

the corresponding companion matrix is defined as

$$K_\alpha = \begin{bmatrix} 0 & 1 & & \\ & & \ddots & \\ & & & 1 \\ -\alpha_0 & -\alpha_1 & \cdots & -\alpha_{t-1} \end{bmatrix},$$

and the corresponding upper Hankel matrix as

$$H_\alpha = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_{t-1} & 1 \\ \alpha_2 & \alpha_3 & \cdots & 1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ \alpha_{t-1} & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{bmatrix}.$$
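The two matrices just defined are easy to build from the coefficient vector (α₀, …, α_{t−1}). The sketch below (with hypothetical helper names) also checks the classical fact that α is the characteristic polynomial of its companion matrix K_α, so α(K_α) = 0 by the Cayley-Hamilton theorem:

```python
import numpy as np

def companion(alpha):
    """K_alpha for the monic polynomial a0 + a1*s + ... + a_{t-1}*s^{t-1} + s^t,
    given alpha = [a0, ..., a_{t-1}]."""
    t = len(alpha)
    K = np.eye(t, k=1)                 # ones on the superdiagonal
    K[-1, :] = -np.asarray(alpha)      # last row: -a0, -a1, ..., -a_{t-1}
    return K

def upper_hankel(alpha):
    """H_alpha: anti-triangular Hankel matrix with first row (a1, ..., a_{t-1}, 1)."""
    t = len(alpha)
    row = np.r_[np.asarray(alpha, dtype=float)[1:], 1.0]
    H = np.zeros((t, t))
    for i in range(t):
        H[i, : t - i] = row[i:]
    return H

def poly_at(alpha, M):
    """Evaluate the monic polynomial with coefficient vector alpha at the matrix M."""
    return sum(c * np.linalg.matrix_power(M, i)
               for i, c in enumerate(np.r_[np.asarray(alpha, dtype=float), 1.0]))

# alpha(s) = 2 - 3s + s^2 + s^3; by Cayley-Hamilton, alpha(K_alpha) = 0.
alpha = [2.0, -3.0, 1.0]
assert np.allclose(poly_at(alpha, companion(alpha)), 0)
```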

Theorem 1.7 ([47]) Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, and C = C_A C_B^T ∈ R^{n×p} with C_A ∈ R^{n×q} and C_B ∈ R^{p×q}, let the normal Sylvester matrix equation (1.8) have a unique solution, and let α(s) and β(s), with respective degrees μ and ν, be coprime monic polynomials such that

$$\alpha(A)C = 0, \qquad C\beta(B) = 0.$$

Then the unique solution of the normal Sylvester matrix equation (1.8) has the representation

$$X = \sum_{j=1}^{\nu} \sum_{i=1}^{\mu} \gamma_{ij} A^{i-1} C B^{j-1}
= \begin{bmatrix} C_A & AC_A & \cdots & A^{\mu-1}C_A \end{bmatrix} \left(\Gamma \otimes I_q\right) \begin{bmatrix} C_B^T \\ C_B^T B \\ \vdots \\ C_B^T B^{\nu-1} \end{bmatrix},$$

where the matrix Γ = [γ_{ij}]_{μ×ν} is the unique solution of

$$K_\alpha^T \Gamma - \Gamma K_\beta = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}. \tag{1.10}$$

In the preceding theorem, the symbol "⊗" represents the Kronecker product of two matrices. The detailed definition can be found in Sect. 2.1 of this monograph.

On the solution to the matrix equation (1.10), the following result has been given in [47].

Lemma 1.1 ([47]) The unique solution of the matrix equation (1.10), when it exists, is given by

(1) when ν ≥ μ,

$$\Gamma = -\begin{bmatrix} H_\alpha & 0_{\mu\times(\nu-\mu)} \end{bmatrix} \left[\alpha(K_\beta)\right]^{-1};$$

(2) when ν ≤ μ,

$$\Gamma = \left[\beta(K_\alpha^T)\right]^{-1} \begin{bmatrix} H_\beta \\ 0_{(\mu-\nu)\times\nu} \end{bmatrix}.$$
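Theorem 1.7 and Lemma 1.1 can be exercised numerically on a small example. Taking α and β as the characteristic polynomials of A and B (so α(A) = 0 and Cβ(B) = 0 hold for any C, and μ = ν = n, which is the boundary case of the lemma), the sketch below builds Γ from the lemma, verifies (1.10), and confirms that the double-sum formula of Theorem 1.7 solves (1.8); the helper names are hypothetical:

```python
import numpy as np

def companion(alpha):
    # Companion matrix of the monic polynomial with coefficients alpha = [a0, ..., a_{t-1}].
    K = np.eye(len(alpha), k=1)
    K[-1, :] = -np.asarray(alpha, dtype=float)
    return K

def upper_hankel(alpha):
    # Anti-triangular Hankel matrix with first row (a1, ..., a_{t-1}, 1).
    t = len(alpha)
    row = np.r_[np.asarray(alpha, dtype=float)[1:], 1.0]
    H = np.zeros((t, t))
    for i in range(t):
        H[i, : t - i] = row[i:]
    return H

def poly_at(alpha, M):
    # Evaluate the monic polynomial with coefficient vector alpha at the matrix M.
    return sum(c * np.linalg.matrix_power(M, i)
               for i, c in enumerate(np.r_[np.asarray(alpha, dtype=float), 1.0]))

rng = np.random.default_rng(3)
n = p = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p)) + 5 * np.eye(p)   # shift B so the spectra are disjoint
C = rng.standard_normal((n, p))

# Characteristic polynomial of A (monic, constant coefficient first): alpha(A) = 0.
alpha = np.poly(A)[::-1][:-1]
K_alpha, K_beta = companion(alpha), companion(np.poly(B)[::-1][:-1])

# Lemma 1.1 with mu = nu = n: Gamma = -H_alpha [alpha(K_beta)]^{-1}.
Gamma = -upper_hankel(alpha) @ np.linalg.inv(poly_at(alpha, K_beta))
e1 = np.eye(n)[:, :1]
assert np.allclose(K_alpha.T @ Gamma - Gamma @ K_beta, e1 @ e1.T)   # equation (1.10)

# Theorem 1.7: X = sum_{i,j} gamma_ij A^{i-1} C B^{j-1} solves AX - XB = C.
X = sum(Gamma[i, j] * np.linalg.matrix_power(A, i) @ C @ np.linalg.matrix_power(B, j)
        for i in range(n) for j in range(p))
assert np.allclose(A @ X - X @ B, C)
```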

On the solution of the Kalman-Yakubovich matrix equation (1.9), many results have been reported in the literature. In [23], the unique solution was given in terms of a Toeplitz matrix when A and B are in companion forms. In [137], a finite iterative method for solving the Kalman-Yakubovich matrix equation was given based on the Faddeev sequence. The solution can be quickly obtained if a solution is known for a Kalman-Yakubovich matrix equation with a right-hand side of rank 1. In [305], the unique solutions to the Kalman-Yakubovich and Stein equations were given in terms of controllability matrices, observability matrices, and Hankel matrices. In addition, explicit solutions in the form of X = MN^{-1} to the Kalman-Yakubovich matrix equation were established in the literature, for example, [155, 280]. In [155], the solution was obtained by applying a linear operator approach, and M was expressed as a finite double sum in terms of the coefficients of the characteristic polynomial of the matrix A. In [280], the solution was established with the aid of Kronecker maps, and M was expressed in terms of the controllability and observability matrices.

On numerical approaches for solving the Kalman-Yakubovich matrix equation, the typical methods are the Smith-type iterations. In [18], the Smith iteration for the Kalman-Yakubovich matrix equation (1.9) was presented in the form X(k+1) = AX(k)B + C with X(0) = C. A quadratically convergent version of this iteration was given in [218]. This iteration can be written as

$$X(k+1) = X(k) + A(k)X(k)B(k), \quad A(k+1) = A^2(k), \quad B(k+1) = B^2(k),$$

with initial values X(0) = C, A(0) = A, and B(0) = B. Obviously, the preceding iteration only works for square A and B. Moreover, the Smith(l) iteration was proposed in [218]. It was shown in [200] that a moderate increase in the number of shifts l can accelerate the convergence significantly. However, it was also observed that the speed of convergence is hardly improved by a further increase of l [134, 200]. To improve the speed of convergence, one can adopt the so-called Smith accelerative iteration [217]. In addition, a new Smith-type iteration, named the r-Smith iteration, was proposed in [310].
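Under the sufficient condition ρ(A)ρ(B) < 1, the unique solution of (1.9) is X = Σ_{k≥0} AᵏCBᵏ, which both Smith iterations approximate: the basic iteration adds one term of this series per step, while the squared version doubles the number of accumulated terms per step. A small sketch with arbitrary test matrices, scaled so that convergence is guaranteed:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
A *= 0.5 / np.abs(np.linalg.eigvals(A)).max()   # now rho(A) = 0.5
B *= 0.5 / np.abs(np.linalg.eigvals(B)).max()   # now rho(B) = 0.5, so rho(A)rho(B) < 1
C = rng.standard_normal((n, p))

# Basic Smith iteration [18]: X(k+1) = A X(k) B + C, X(0) = C.
X = C.copy()
for _ in range(100):
    X = A @ X @ B + C
assert np.allclose(X - A @ X @ B, C)

# Squared, quadratically convergent variant [218]: after k steps the iterate
# equals the partial sum of X = sum_{i>=0} A^i C B^i with 2^k terms.
Xs, Ak, Bk = C.copy(), A.copy(), B.copy()
for _ in range(7):
    Xs = Xs + Ak @ Xs @ Bk
    Ak, Bk = Ak @ Ak, Bk @ Bk
assert np.allclose(Xs - A @ Xs @ B, C)
```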

In fact, the normal Sylvester matrix equation (1.8) can be transformed into a Kalman-Yakubovich matrix equation [171]. For a nonzero real constant a, it follows from (1.8) that

$$(aI + A)X(aI - B) - (aI - A)X(aI + B) = 2aC.$$

If a is chosen so that (aI − A)^{-1} and (aI + B)^{-1} exist, then pre- and post-multiplying by these matrices, respectively, gives

$$(aI - A)^{-1}(aI + A)X(aI - B)(aI + B)^{-1} - X = 2a(aI - A)^{-1}C(aI + B)^{-1}.$$

Denote

$$U = (aI - A)^{-1}(aI + A), \qquad V = (aI - B)(aI + B)^{-1}.$$

It is easily verified that

$$(aI - A)^{-1} = \frac{1}{2a}(U + I), \qquad (aI + B)^{-1} = \frac{1}{2a}(V + I).$$

With the previous expressions, the normal Sylvester matrix equation (1.8) has now been transformed into the following Kalman-Yakubovich matrix equation:

$$UXV - X = \frac{1}{2a}(U + I)C(V + I).$$

Thus, some numerical approaches for Kalman-Yakubovich matrix equations can be applied to normal Sylvester matrix equations. A more general transformation approach from normal Sylvester matrix equations to Kalman-Yakubovich matrix equations was presented in [188].
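The transformation above can be exercised numerically: pick a value of a for which aI − A and aI + B are invertible, form U, V, and the transformed right-hand side, solve the resulting Kalman-Yakubovich-type equation (here naively by vectorization), and check that the same X solves the original equation (1.8). The matrices and the choice a = 2 are arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
C = rng.standard_normal((n, p))

a = 2.0  # any real a keeping aI - A and aI + B invertible
I_n, I_p = np.eye(n), np.eye(p)
U = np.linalg.solve(a * I_n - A, a * I_n + A)        # U = (aI - A)^{-1} (aI + A)
V = (a * I_p - B) @ np.linalg.inv(a * I_p + B)       # V = (aI - B) (aI + B)^{-1}
D = (U + I_n) @ C @ (V + I_p) / (2 * a)              # transformed right-hand side

# Solve U X V - X = D naively by vectorization, then check that the same X
# also solves the original normal Sylvester equation AX - XB = C.
X = np.linalg.solve(np.kron(V.T, U) - np.eye(n * p),
                    D.reshape(-1, order="F")).reshape(n, p, order="F")
assert np.allclose(U @ X @ V - X, D)
assert np.allclose(A @ X - X @ B, C)
```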
