This section begins with the invertibility of a complex polynomial matrix in the framework of conjugate products.
Definition 9.13 A polynomial matrix U(s) ∈ C^{n×n}[s] is said to be invertible in the framework of conjugate products if there exists a polynomial matrix V(s) ∈ C^{n×n}[s] such that U(s)V(s) = I and V(s)U(s) = I. In this case, V(s) is called the inverse of U(s) in the framework of conjugate products, and is denoted by U−(s) = V(s).
Lemma 9.10 Given a square polynomial matrix U(s), if there exist polynomial matrices V_R(s) and V_L(s) such that U(s)V_R(s) = I and V_L(s)U(s) = I, then V_R(s) = V_L(s).
Proof By applying Lemma 9.6, one has
V_L(s) = V_L(s)I = V_L(s)(U(s)V_R(s)) = (V_L(s)U(s))V_R(s) = I V_R(s) = V_R(s).
The proof is thus completed.
According to this lemma, for a square polynomial matrix U(s), if U(s)V(s) = I, then U−(s) = V(s) and V−(s) = U(s). In addition, by applying Item (3) of Lemma 9.6, the following result is easily obtained.
Lemma 9.11 If U(s), V(s) ∈ C^{n×n}[s] are both invertible in the framework of conjugate products, then U(s)V(s) is also invertible in the framework of conjugate products. Moreover,
(U(s)V(s))− = V−(s)U−(s).
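Since all products above are conjugate products, a small computational sketch may make the rules concrete. The sketch below assumes the conjugate-product rule introduced earlier in the chapter, s·a = ā·s for a ∈ C (so, coefficientwise, (f ⊛ g) gains a_i σ^i(b_j) at degree i+j, where σ^i conjugates i times); the coefficient-list representation and the function name are mine, not the book's:

```python
# A sketch of the scalar conjugate product, assuming the rule s * a = conj(a) * s.
# Polynomials are lists of complex coefficients, lowest degree first.

def conjprod(f, g):
    """Conjugate product of f(s) and g(s): out[i+j] += f[i] * sigma^i(g[j]),
    where sigma^i conjugates i times (i.e. conjugates exactly when i is odd)."""
    out = [0j] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * (b.conjugate() if i % 2 else b)
    return out

# Non-commutativity: s * i gives -i*s, while i * s gives i*s.
assert conjprod([0, 1], [1j]) == [0j, -1j]
assert conjprod([1j], [0, 1]) == [0j, 1j]

# Associativity (the property from Lemma 9.6 used in the proof of Lemma 9.10),
# checked on a sample triple:
f, g, h = [1, 2j], [3 - 1j, 0, 1], [1j, 1]
lhs = conjprod(conjprod(f, g), h)
rhs = conjprod(f, conjprod(g, h))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

The first two assertions illustrate why the order of factors matters throughout this section; the last one spot-checks the associativity that the proof of Lemma 9.10 relies on.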
For a polynomial matrix A(s) ∈ C^{n×m}[s], the elementary row transformations have the following three forms:
(1) Row switching transformations. This transformation switches all matrix elements on row i with their counterparts on row j, and can be implemented by pre-multiplying A(s) by the following matrix
$$
E_n(i,j)=
\begin{bmatrix}
1 & & & & & & & & \\
& \ddots & & & & & & & \\
& & 1 & & & & & & \\
& & & 0 & \cdots & 1 & & & \\
& & & \vdots & \ddots & \vdots & & & \\
& & & 1 & \cdots & 0 & & & \\
& & & & & & 1 & & \\
& & & & & & & \ddots & \\
& & & & & & & & 1
\end{bmatrix}
\tag{9.10}
$$
where the two 0 entries occupy the (i, i) and (j, j) positions (the i-th and the j-th rows), and the off-diagonal 1 entries occupy the (i, j) and (j, i) positions.
(2) Row addition transformations. This transformation adds row i pre-multiplied by f(s) to row j, and can be implemented by pre-multiplying A(s) by the following matrix
$$
E_n(i(f(s)),j)=
\begin{bmatrix}
1 & & & & & & \\
& \ddots & & & & & \\
& & 1 & & & & \\
& & \vdots & \ddots & & & \\
& & f(s) & \cdots & 1 & & \\
& & & & & \ddots & \\
& & & & & & 1
\end{bmatrix}
\tag{9.11}
$$
where the entry f(s) lies in the (j, i) position, i.e., in the j-th row and the i-th column.
(3) Row multiplication transformations. This transformation pre-multiplies all elements on row i by a nonzero a ∈ C, and can be implemented by pre-multiplying A(s) by the following matrix
$$
E_n(i(a))=
\begin{bmatrix}
1 & & & & & & \\
& \ddots & & & & & \\
& & 1 & & & & \\
& & & a & & & \\
& & & & 1 & & \\
& & & & & \ddots & \\
& & & & & & 1
\end{bmatrix}
\tag{9.12}
$$
where the entry a lies in the (i, i) position (the i-th row).
Similarly, there are three types of elementary column transformations for A(s) ∈ C^{n×m}[s].
(1) Column switching transformations. This transformation switches all matrix elements on column i with their counterparts on column j, and can be implemented by post-multiplying A(s) by E_m(i, j).
(2) Column addition transformations. This transformation adds column i post-multiplied by f(s) to column j, and can be implemented by post-multiplying A(s) by E_m(j(f(s)), i).
(3) Column multiplication transformations. This transformation post-multiplies all elements on column i by a nonzero a ∈ C, and can be implemented by post-multiplying A(s) by E_m(i(a)).
Remark 9.4 In the preceding definitions of elementary transformations, multiplication means the conjugate product.
The matrices of the forms (9.10)–(9.12) are called elementary matrices. By applying Lemma 9.1, it is easily checked that all the elementary matrices are invertible. Moreover, the following relations hold:
E_n−(i, j) = E_n(i, j),
E_n−(i(a)) = E_n(i(a^{-1})),  (9.13)
E_n−(i(f(s)), j) = E_n(i(−f(s)), j).
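The relations (9.13) can be checked mechanically. The following sketch (assuming, as before, the scalar conjugate-product rule s·a = ā·s; the nested-list representation of polynomial matrices and the function names are mine) verifies E_2−(1(f(s)), 2) = E_2(1(−f(s)), 2) on a 2×2 example:

```python
# Verifying E_n^-(i(f(s)), j) = E_n(i(-f(s)), j) on a 2x2 example, assuming the
# scalar conjugate-product rule s * a = conj(a) * s. A polynomial matrix is a
# nested list whose entries are coefficient lists (lowest degree first).

def conjprod(f, g):
    out = [0j] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * (b.conjugate() if i % 2 else b)
    return out

def polyadd(f, g):
    n = max(len(f), len(g))
    f = f + [0j] * (n - len(f))
    g = g + [0j] * (n - len(g))
    return [x + y for x, y in zip(f, g)]

def matconjprod(A, B):
    """Conjugate product of polynomial matrices: entrywise conjprod, summed."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[[0j] for _ in range(n)] for _ in range(m)]
    for i in range(m):
        for jj in range(n):
            for t in range(k):
                C[i][jj] = polyadd(C[i][jj], conjprod(A[i][t], B[t][jj]))
    return C

def trim(p):
    """Drop trailing (near-)zero coefficients."""
    while len(p) > 1 and abs(p[-1]) < 1e-12:
        p = p[:-1]
    return p

f = [1 - 2j, 0, 3j]                              # a sample f(s) = (1-2i) + 3i s^2
E_plus = [[[1], [0]], [f, [1]]]                  # E_2(1(f(s)), 2): f at (2, 1)
E_minus = [[[1], [0]], [[-c for c in f], [1]]]   # E_2(1(-f(s)), 2)
P = matconjprod(E_plus, E_minus)
assert [[trim(e) for e in row] for row in P] == [[[1 + 0j], [0j]], [[0j], [1 + 0j]]]
```

The product collapses to the identity because the (2, 1) entry of the product is f(s) − f(s) = 0, regardless of f(s).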
Definition 9.14 A polynomial matrix P(s) is said to be unimodular if there exist elementary matrices E_i(s), i ∈ I[1, n], such that
P(s) = E_1(s)E_2(s)···E_n(s).
Remark 9.5 From now on, unless otherwise stated, when the terms “inversion” and “unimodularity” are mentioned, they refer to those in the framework of conjugate products.
The following simple properties of unimodular matrices are easily derived.
Proposition 9.2 Let P(s), Q(s) ∈ C^{n×n}[s] be two unimodular matrices. Then,
(1) P(s)Q(s) is unimodular;
(2) P(s) is invertible, and P−(s) is also unimodular.
Definition 9.15 If A(s) ∈ C^{m×n}[s] can be transformed into B(s) via a finite sequence of elementary transformations in the framework of conjugate products, then A(s) is said to be conequivalent to B(s), denoted by
A(s) ⇐⇒ B(s).
By the definition of unimodular matrices, one can easily obtain the following result.
Theorem 9.12 For two polynomial matrices A(s), B(s) ∈ C^{m×n}[s], A(s) ⇐⇒ B(s) if and only if there exist two unimodular matrices P(s) and Q(s) such that
B(s) = P(s)A(s)Q(s).
It is easily verified that conequivalence is an equivalence relation.
Proposition 9.3 For three polynomial matrices A(s), B(s), C(s)∈Cm×n[s], the following statements hold:
(1) Reflexivity: A(s)⇐⇒ A(s);
(2) Transitivity: if A(s)⇐⇒ B(s), B(s)⇐⇒ C(s), then A(s)⇐⇒ C(s);
(3) Symmetry: if A(s) ⇐⇒ B(s), then B(s) ⇐⇒ A(s).
The following theorem provides a normal form for a polynomial matrix under elementary row and column transformations in the framework of conjugate products. To obtain such a normal form, Lemma 9.1 and the Euclidean division algorithm in the framework of conjugate products are applied.
Using the above notation, one can manipulate polynomial matrices in the framework of conjugate products in ways that mirror the manipulation of polynomial matrices in the framework of ordinary products. The following result describes a diagonal normal form of polynomial matrices in the framework of conjugate products.
Theorem 9.13 A polynomial matrix A(s) ∈ C^{m×n}[s] is conequivalent to a diagonal polynomial matrix. In detail, one can find unimodular matrices U(s) and V(s) such that
U(s)A(s)V(s) = Δ(s),
where
$$
\Delta(s)=
\begin{bmatrix}
\lambda_1(s) & & & & \\
& \lambda_2(s) & & & \\
& & \ddots & & \\
& & & \lambda_r(s) & \\
& & & & 0_{(m-r)\times(n-r)}
\end{bmatrix}
\tag{9.14}
$$
and λ_i(s), i ∈ I[1, r], are monic polynomials obeying the division property λ_i(s) | λ_{i+1}(s), i ∈ I[1, r−1].
Proof A constructive method is adopted to prove the conclusion.
Step 1: If A(s) = 0, one can choose U(s) = I_m, V(s) = I_n, and then Δ(s) = A(s). The conclusion holds. If A(s) ≠ 0, go to Step 2.
Step 2: By performing row and column switching operations on A(s), bring to the (1, 1) position the least-degree polynomial entry in A(s). The obtained polynomial matrix is denoted by [a_{ij}(s)].
Step 3: By applying the Euclidean division algorithm, one can obtain
a_{i1}(s) = q_{i1}(s)a_{11}(s) + r_{i1}(s), deg r_{i1}(s) < deg a_{11}(s), i ∈ I[2, m],
a_{1j}(s) = a_{11}(s)q_{1j}(s) + r_{1j}(s), deg r_{1j}(s) < deg a_{11}(s), j ∈ I[2, n].
If the remainders r_{1j}(s), j ∈ I[2, n], and r_{i1}(s), i ∈ I[2, m], are all zero, then go to Step 4; otherwise, find a remainder of least degree. If r_{1j_0}(s) is such a remainder, add the first column post-multiplied by −q_{1j_0}(s) to column j_0, then go to Step 2; if r_{i_0 1}(s) is such a remainder, add the first row pre-multiplied by −q_{i_0 1}(s) to row i_0, then go to Step 2.
Since the degrees of the remainders r_{1j}(s), j ∈ I[2, n], and r_{i1}(s), i ∈ I[2, m], are lower than the degree of a_{11}(s), the degree of the (1, 1) entry decreases in each cycle of Steps 2 and 3. After finitely many cycles, one can obtain a matrix
$$
\begin{bmatrix}
\tilde{a}_{11}(s) & 0 & 0 & \cdots & 0 \\
0 & a_{22}(s) & a_{23}(s) & \cdots & a_{2n}(s) \\
0 & a_{32}(s) & a_{33}(s) & \cdots & a_{3n}(s) \\
\vdots & \vdots & \vdots & & \vdots \\
0 & a_{m2}(s) & a_{m3}(s) & \cdots & a_{mn}(s)
\end{bmatrix}
\tag{9.15}
$$
where the degree of ã_{11}(s) is lower than those of the other nonzero entries. If ã_{11}(s) | a_{ij}(s), i ∈ I[2, m], j ∈ I[2, n], then go to Step 4. If there is an element that is not right or left divisible by ã_{11}(s), then add the column or row containing this element to the first column or row, and go to Step 3.
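Step 3 rests on Euclidean division in the conjugate-product framework. Because degrees add under the conjugate product, the usual peel-off-the-leading-term algorithm carries over. The sketch below (assuming the same scalar rule s·a = ā·s as before; the function names and representation are mine) computes q(s) and r(s) with a(s) = q(s) ⊛ b(s) + r(s) and deg r(s) < deg b(s), matching the first family of divisions in Step 3:

```python
# Euclidean right division in the conjugate-product framework (a sketch,
# assuming the rule s * a = conj(a) * s): given a(s) and b(s) with b(s) != 0,
# find q(s), r(s) with a = q (*) b + r and deg r < deg b.

def conjprod(f, g):
    out = [0j] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * (b.conjugate() if i % 2 else b)
    return out

def cdivmod(a, b):
    """Right division a = conjprod(q, b) + r, deg r < deg b (requires b[-1] != 0)."""
    rem = list(a)
    db = len(b) - 1
    if len(rem) - 1 < db:
        return [0j], rem
    q = [0j] * (len(rem) - db)
    for k in range(len(rem) - 1 - db, -1, -1):
        lead = rem[k + db]
        # (c s^k) (*) b has leading coefficient c * sigma^k(b[db]).
        c = lead / (b[db].conjugate() if k % 2 else b[db])
        q[k] = c
        for j, bj in enumerate(b):
            rem[k + j] -= c * (bj.conjugate() if k % 2 else bj)
    return q, (rem[:db] if db > 0 else [0j])

a = [1, 2j, 3 - 1j, 1 + 1j]   # a sample dividend of degree 3
b = [1j, 2]                   # a sample divisor of degree 1
q, r = cdivmod(a, b)
# Verify the reconstruction a = q (*) b + r.
prod = conjprod(q, b)
recon = [x + (r[i] if i < len(r) else 0j) for i, x in enumerate(prod)]
assert all(abs(x - y) < 1e-9 for x, y in zip(recon, a))
assert len(r) < len(b)
```

The divisor's leading coefficient must be conjugated k times before dividing, since in the product q(s) ⊛ b(s) the term c s^k passes over the coefficients of b(s).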
Step 4: Repeat the procedure of Steps 1 through 3 on the bottom-right submatrix of (9.15); the original matrix A(s) can then be transformed into
$$
\begin{bmatrix}
\tilde{a}_{11}(s) & 0 & 0 & \cdots & 0 \\
0 & \tilde{a}_{22}(s) & 0 & \cdots & 0 \\
0 & 0 & a_{33}(s) & \cdots & a_{3n}(s) \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & a_{m3}(s) & \cdots & a_{mn}(s)
\end{bmatrix}
$$
where ã_{11}(s) | ã_{22}(s), ã_{11}(s) | a_{ij}(s), and ã_{22}(s) | a_{ij}(s), i ∈ I[3, m], j ∈ I[3, n].
Step 5: Repeat the procedure of Steps 1 through 4; after a finite sequence of operations one can obtain
$$
\begin{bmatrix}
\hat{a}_{11}(s) & & & & \\
& \hat{a}_{22}(s) & & & \\
& & \ddots & & \\
& & & \hat{a}_{rr}(s) & \\
& & & & 0_{(m-r)\times(n-r)}
\end{bmatrix}
$$
where â_{ii}(s) | â_{(i+1)(i+1)}(s), i ∈ I[1, r−1].
Step 6: Pre-multiplying â_{ii}(s) by some constant c_i, i ∈ I[1, r], one can obtain the monic λ_i(s), i ∈ I[1, r]. The form (9.14) is thus derived.
The matrix Δ(s) is called the Smith normal form of A(s).
At the end of this section, the equivalence between a unimodular matrix and an invertible matrix will be proven by using the Smith normal form in Theorem 9.13.
Theorem 9.14 A square complex polynomial matrix is unimodular if and only if it is invertible.
Proof Necessity can be easily proven by Definition 9.14 and the simple fact that any elementary matrix is invertible.
Now, let us show the sufficiency. Let A(s) ∈ C^{n×n}[s] be an invertible matrix. By applying Theorem 9.13, there exist two unimodular matrices U(s), V(s) ∈ C^{n×n}[s] such that
U(s)A(s)V(s) = D(s), (9.16)
where D(s) = diag(d_1(s), d_2(s), ..., d_n(s)), d_i(s) ∈ C[s], i ∈ I[1, n]. It follows from Proposition 9.2 that U(s) and V(s) are invertible. Combining this with the fact that A(s) is invertible, by Lemma 9.11 one knows that D(s) is invertible. According to Definition 9.13, D(s) is invertible if and only if there exists a polynomial matrix E(s) = [e_{ij}(s)] such that D(s)E(s) = I. This relation implies that
d_i(s)e_{ii}(s) = 1, i ∈ I[1, n], (9.17)
e_{ij}(s) = 0, i ≠ j.
It is easily obtained from (9.17) that d_i(s), i ∈ I[1, n], are all nonzero constants.
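The step from (9.17) to this conclusion can be spelled out via degree additivity. Writing ⊛ for the conjugate product (the symbol is assumed from the chapter's earlier notation): for nonzero f(s) of degree m and g(s) of degree n, the leading coefficient of f(s) ⊛ g(s) is f_m σ^m(g_n) ≠ 0, where σ denotes complex conjugation, so degrees add. A sketch:

```latex
% Degree additivity under the conjugate product:
\deg\bigl(f(s) \circledast g(s)\bigr) = \deg f(s) + \deg g(s),
\qquad f(s) \neq 0,\ g(s) \neq 0.
% Applied to (9.17):
0 = \deg 1 = \deg d_i(s) + \deg e_{ii}(s)
\quad\Longrightarrow\quad \deg d_i(s) = 0, \qquad i \in I[1, n].
```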
Hence, D(s) is also unimodular. In addition, since U(s) and V(s) are invertible, it follows from (9.16) that
A(s) = U−(s)D(s)V−(s). (9.18)
According to Item (1) of Proposition 9.2, the relation (9.18), together with the fact that U−(s), V−(s) and D(s) are unimodular, implies that A(s) is unimodular.
Combining the above two aspects, the conclusion is thus true.
Remark 9.6 In the framework of ordinary products, a unimodular matrix can be defined as a polynomial matrix whose determinant is a nonzero constant. However, such a definition is not suitable in the framework of conjugate products, since the conjugate product of two complex polynomials is not commutative, and thus the determinant cannot be conventionally defined.