Mathematics for Enzyme Reaction Kinetics and Reactor Performance, by F. Xavier Malcata

4.5.1 Full Matrix


The inverse A⁻¹ of a given (n × n) matrix A is the (n × n) matrix that satisfies, by definition,

$A\,A^{-1} = A^{-1}\,A = I_n$   (4.124)

therefore, if A⁻¹ is described by

$A^{-1} = \begin{bmatrix} \alpha_{1,1} & \alpha_{1,2} & \cdots & \alpha_{1,n} \\ \alpha_{2,1} & \alpha_{2,2} & \cdots & \alpha_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n,1} & \alpha_{n,2} & \cdots & \alpha_{n,n} \end{bmatrix}$   (4.125)

then one may insert Eq. (4.125) and Eq. (4.1) with m = n to transform Eq. (4.124) to

$\begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{bmatrix} \begin{bmatrix} \alpha_{1,1} & \alpha_{1,2} & \cdots & \alpha_{1,n} \\ \alpha_{2,1} & \alpha_{2,2} & \cdots & \alpha_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n,1} & \alpha_{n,2} & \cdots & \alpha_{n,n} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$   (4.126)

– corresponding to A A⁻¹ being equal to Iₙ. Recalling the algorithm of multiplication of matrices as per Eq. (4.47), one finds that Eq. (4.126) is equivalent to

$\sum_{k=1}^{n} a_{i,k}\,\alpha_{k,j} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases} \qquad i, j = 1, 2, \ldots, n$   (4.127)

hence, a system of n² linear algebraic equations in the n² unknowns α1,1, α1,2, …, α1,n, α2,1, α2,2, …, α2,n, …, αn,1, αn,2, …, αn,n arises; its solution is postponed for now.
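By way of numeric illustration (the sketch below is not part of the original text; plain Python, with a hypothetical 2 × 2 example), the n² equations of Eq. (4.127) decouple into n linear systems: column j of A⁻¹ solves A x = eⱼ, where eⱼ denotes the j-th column of Iₙ.

```python
def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [bi] for row, bi in zip(M, b)]  # augmented matrix [M | b]
    for col in range(n):
        # pick the largest pivot in the current column, then eliminate below it
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    # back-substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

A = [[2.0, 1.0], [5.0, 3.0]]  # hypothetical regular matrix (det A = 1)
n = len(A)
# column j of A^{-1} solves A x = e_j, e_j being the j-th column of I_n
cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
A_inv = [[cols[j][i] for j in range(n)] for i in range(n)]
print(A_inv)  # close to [[3.0, -1.0], [-5.0, 2.0]]
```

Each of the n right-hand sides eⱼ yields one column of A⁻¹, so the n² unknowns αᵢ,ⱼ are obtained from n systems of n equations each, rather than one system of n² equations.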

If one labels

$B \equiv A^{-1}$   (4.128)

then Eq. (4.124) will read

$A\,B = I_n$   (4.129)

in terms of left‐ and right‐hand sides; this is equivalent to Eq. (4.127), as seen above. Assume now that another matrix C exists, such that

$C\,A = I_n$   (4.130)

thus mimicking the intermediate and right‐hand sides of Eq. (4.124); in view of Eq. (4.64), one has it that

$B = I_n\,B$   (4.131)

where insertion of Eq. (4.130) unfolds

$B = (C\,A)\,B$   (4.132)

After applying the associative property as conveyed by Eq. (4.57), one gets

$B = C\,(A\,B)$   (4.133)

where combination with Eq. (4.129) gives rise to

$B = C\,I_n$   (4.134)

and finally to

$C = A^{-1}$   (4.135)

on account of Eqs. (4.61) and (4.128). In other words, the only matrix C that satisfies Eq. (4.130) is indeed A⁻¹, to be calculated via Eq. (4.127); hence, the second equality in Eq. (4.124) is fully proven once the first equality holds. This result is also consistent with the requirement that the number of columns of A match the number of rows of A⁻¹ (thus guaranteeing existence of A A⁻¹), and vice versa (so as to assure existence of A⁻¹ A); this obviously implies that A and A⁻¹ are square matrices of the same order.
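The uniqueness just proven can be checked numerically; in the sketch below (a hypothetical 2 × 2 example, not from the original text), one and the same matrix C plays the role of right inverse and of left inverse of A.

```python
def matmul(X, Y):
    # 2 x 2 instance of the multiplication algorithm of Eq. (4.47)
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 7.0]]    # hypothetical example; det A = 1
C = [[7.0, -2.0], [-3.0, 1.0]]  # candidate inverse (adjugate of A over det A)
print(matmul(A, C))  # [[1.0, 0.0], [0.0, 1.0]] -> C is a right inverse of A
print(matmul(C, A))  # [[1.0, 0.0], [0.0, 1.0]] -> and also its left inverse
```

Since all entries are integer-valued and det A = 1, the arithmetic is exact and both products return the identity matrix exactly.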

In view of the definition of inverse, one realizes that

$A^{-1}\,(A^{-1})^{-1} = (A^{-1})^{-1}\,A^{-1} = I_n$   (4.136)

based on Eq. (4.124) after replacing A by A⁻¹ – so ordered subtraction of Eq. (4.136) from Eq. (4.124) gives rise to

$A\,A^{-1} - (A^{-1})^{-1}\,A^{-1} = A^{-1}\,A - A^{-1}\,(A^{-1})^{-1} = \mathbf{0}_{n \times n}$   (4.137)

once the left‐ and middle‐hand sides of the former have been previously swapped; postmultiplication of the first equality in Eq. (4.137) by A produces

$\left\{ A\,A^{-1} - (A^{-1})^{-1}\,A^{-1} \right\} A = \mathbf{0}_{n \times n}\,A$   (4.138)

where Eqs. (4.70) and (4.76) permit conversion to

$A\,A^{-1}\,A - (A^{-1})^{-1}\,A^{-1}\,A = \mathbf{0}_{n \times n}$   (4.139)

The second equality in Eq. (4.124) then supports transformation of Eq. (4.139) to

$A\,I_n - (A^{-1})^{-1}\,I_n = \mathbf{0}_{n \times n}$   (4.140)

after having applied the associative property as per Eq. (4.57) – whereas Eq. (4.61) accounts for simplification to

$A - (A^{-1})^{-1} = \mathbf{0}_{n \times n}$   (4.141)

one thus concludes that

$A = (A^{-1})^{-1}$   (4.142)

after adding (A⁻¹)⁻¹ to both sides, and recalling Eqs. (4.19) and (4.45). Therefore, composition of the inversion operation with itself cancels it out – in much the same way already found for transposal. A similar reasoning can be developed involving premultiplication of the second equality of Eq. (4.137) by A, viz.

$A \left\{ A^{-1}\,A - A^{-1}\,(A^{-1})^{-1} \right\} = A\,\mathbf{0}_{n \times n}$   (4.143)

where the distributive property as per Eq. (4.82) and the associative property as per Eq. (4.57), coupled with Eq. (4.67) yield

$(A\,A^{-1})\,A - (A\,A^{-1})\,(A^{-1})^{-1} = \mathbf{0}_{n \times n}$   (4.144)

the definition of inverse labeled as Eq. (4.124) may again be invoked to write

$I_n\,A - I_n\,(A^{-1})^{-1} = \mathbf{0}_{n \times n}$   (4.145)

or else

$A - (A^{-1})^{-1} = \mathbf{0}_{n \times n}$   (4.146)

in view of Eq. (4.64) – which retrieves Eq. (4.141), and consequently leads also to Eq. (4.142).
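The involution conveyed by Eq. (4.142) lends itself to a direct numeric check; the Python sketch below (a hypothetical 2 × 2 example with unit determinant, so the arithmetic stays exact) inverts a matrix twice and recovers the original.

```python
def inv2(X):
    # inverse of a 2 x 2 matrix via the adjugate formula
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

A = [[2.0, 3.0], [1.0, 2.0]]  # hypothetical example; det A = 1
print(inv2(A))        # [[2.0, -3.0], [-1.0, 2.0]]
print(inv2(inv2(A)))  # [[2.0, 3.0], [1.0, 2.0]], i.e. A itself
```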

On the other hand, one finds that

$(A\,B)^{-1} = B^{-1}\,A^{-1}$   (4.147)

i.e. the inverse of a product of matrices is given by the product of their inverses, in reverse order; to prove so, one should realize that

$(A\,B)(B^{-1}\,A^{-1}) = A\,(B\,B^{-1})\,A^{-1}$   (4.148)

can be obtained after postmultiplying A B by B⁻¹ A⁻¹, followed by application of Eq. (4.56) – where both A and B are (n × n) matrices. In view of Eq. (4.124), one may replace Eq. (4.148) by

$(A\,B)(B^{-1}\,A^{-1}) = A\,I_n\,A^{-1}$   (4.149)

where Eqs. (4.57), (4.61), and (4.124) allow further simplification to

$(A\,B)(B^{-1}\,A^{-1}) = I_n$   (4.150)

one may similarly show that

$(B^{-1}\,A^{-1})(A\,B) = B^{-1}\,(A^{-1}\,A)\,B$   (4.151)

involving premultiplication of A B by B⁻¹ A⁻¹ – again on the basis of the associative property of multiplication of matrices as per Eq. (4.56), which degenerates to

$(B^{-1}\,A^{-1})(A\,B) = B^{-1}\,I_n\,B$   (4.152)

due again to Eq. (4.124). In view of the features of In as neutral element as conveyed by Eq. (4.64), one may redo Eq. (4.152) to

$(B^{-1}\,A^{-1})(A\,B) = B^{-1}\,B = I_n$   (4.153)

– again with the aid of Eq. (4.124); the set of Eqs. (4.150) and (4.153) guarantees full validity of Eq. (4.147), in view of the definition of inverse labeled as Eq. (4.124).
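Equation (4.147) lends itself to a quick numeric check; in the sketch below (hypothetical 2 × 2 examples with unit determinant, chosen so that all floating-point arithmetic stays exact), (A B)⁻¹ coincides with B⁻¹ A⁻¹.

```python
def matmul(X, Y):
    # 2 x 2 matrix product, as in Eq. (4.47)
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # inverse of a 2 x 2 matrix via the adjugate formula
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

A = [[1.0, 1.0], [1.0, 2.0]]    # hypothetical examples, each with det = 1
B = [[2.0, 1.0], [3.0, 2.0]]
lhs = inv2(matmul(A, B))        # (A B)^{-1}
rhs = matmul(inv2(B), inv2(A))  # B^{-1} A^{-1}, i.e. the inverses in reverse order
print(lhs == rhs)  # True
```

Note that matching A⁻¹ B⁻¹ instead (same order) would fail here, since A and B do not commute.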

The result conveyed by Eq. (4.147) can obviously be extended to any number of factors – by sequentially applying it pairwise, i.e. the inverse of a product of matrices is but the product of their inverses, again in reverse order. When the matrices of interest are identical, this rule leads to

$(\underbrace{A\,A \cdots A}_{k\ \text{factors}})^{-1} = \underbrace{A^{-1}\,A^{-1} \cdots A^{-1}}_{k\ \text{factors}}$   (4.154)

where the right‐hand side may be rewritten as

$(A^{k})^{-1} = (A^{-1})^{k}$   (4.155)

owing to the definition of power; hence, the power and inverse signs are interchangeable.
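The interchangeability of the power and inverse signs, as per Eq. (4.155), can likewise be verified numerically; the sketch below (a hypothetical shear matrix, with k = 3) compares (A³)⁻¹ with (A⁻¹)³.

```python
def matmul(X, Y):
    # 2 x 2 matrix product, as in Eq. (4.47)
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # inverse of a 2 x 2 matrix via the adjugate formula
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

A = [[1.0, 1.0], [0.0, 1.0]]  # hypothetical shear matrix; det A = 1
A3 = matmul(A, matmul(A, A))                     # A^3
lhs = inv2(A3)                                   # (A^3)^{-1}
rhs = matmul(inv2(A), matmul(inv2(A), inv2(A)))  # (A^{-1})^3
print(lhs == rhs)  # True
```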

In the particular case of matrix A degenerating to scalar matrix αIₙ, Eq. (4.147) prompts

$(\alpha\,I_n\,B)^{-1} = B^{-1}\,(\alpha\,I_n)^{-1}$   (4.156)

the inverse of scalar matrix αIₙ is merely α⁻¹Iₙ, since Iₙ is the neutral element of multiplication, so Eq. (4.156) is equivalent to

$(\alpha\,B)^{-1} = \alpha^{-1}\,B^{-1}\,I_n$   (4.157)

also with the aid of Eq. (4.24) – where Eq. (4.61) supports final transformation to

$(\alpha\,B)^{-1} = \alpha^{-1}\,B^{-1}$   (4.158)

Equation (4.158) consequently indicates that the inverse of the product of a scalar by a matrix is simply the product of the reciprocal of the said scalar by the inverse of the matrix proper.
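Equation (4.158) can be illustrated numerically as follows (a hypothetical example with α = 4; the scalar is a power of two, so the floating-point divisions stay exact).

```python
def inv2(X):
    # inverse of a 2 x 2 matrix via the adjugate formula
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

alpha = 4.0                   # hypothetical scalar
A = [[3.0, 2.0], [4.0, 3.0]]  # hypothetical example; det A = 1
scaled = [[alpha * a for a in row] for row in A]
lhs = inv2(scaled)                                   # (alpha A)^{-1}
rhs = [[a / alpha for a in row] for row in inv2(A)]  # alpha^{-1} A^{-1}
print(lhs == rhs)  # True
```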

One may finally investigate what the combination of the transpose and inverse operators looks like, by first setting the product Aᵀ(A⁻¹)ᵀ and then realizing that

$A^{T}\,(A^{-1})^{T} = (A^{-1}\,A)^{T}$   (4.159)

based on Eq. (4.120); however, Eq. (4.124) has it that

$A^{T}\,(A^{-1})^{T} = I_n^{T}$   (4.160)

where Eq. (4.108) allows further simplification to

$A^{T}\,(A^{-1})^{T} = I_n$   (4.161)

One may similarly write

$(A^{-1})^{T}\,A^{T} = (A\,A^{-1})^{T}$   (4.162)

at the expense again of the rule of transposition of a product of matrices, see Eq. (4.120); the definition of inverse as conveyed by Eq. (4.124) permits simplification of Eq. (4.162) to

$(A^{-1})^{T}\,A^{T} = I_n^{T}$   (4.163)

whereas Eq. (4.108) may again be invoked to attain

$(A^{-1})^{T}\,A^{T} = I_n$   (4.164)

Inspection of Eqs. (4.161) and (4.164) confirms compatibility with the form of Eq. (4.124), so one concludes that

$(A^{T})^{-1} = (A^{-1})^{T}$   (4.165)

– meaning that the inverse of Aᵀ is merely the transpose of A⁻¹; therefore, the transpose and inverse operators can also be exchanged without affecting the final result.
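Equation (4.165) admits the following numeric check (a hypothetical 2 × 2 example with unit determinant, not part of the original text).

```python
def inv2(X):
    # inverse of a 2 x 2 matrix via the adjugate formula
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

def transpose(X):
    # transpose of a 2 x 2 matrix
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[1.0, 2.0], [1.0, 3.0]]  # hypothetical example; det A = 1
print(inv2(transpose(A)) == transpose(inv2(A)))  # True
```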

Although being square is a necessary condition for invertibility of a matrix, it is not a sufficient condition; in fact, the rank of an (n × n) matrix A must coincide with its order n to guarantee existence of A⁻¹ (to be discussed later). Under such conditions, the said square matrix is termed regular; otherwise, it is termed singular. As will be seen, the associated determinant is a convenient tool to effect this distinction.
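As a preview of that distinction (hypothetical 2 × 2 examples; the determinant itself is formally introduced later), a square matrix with linearly dependent rows has rank below its order and null determinant, so no inverse exists.

```python
def det2(X):
    # determinant of a 2 x 2 matrix
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

regular  = [[2.0, 1.0], [5.0, 3.0]]  # rank 2: nonzero determinant, invertible
singular = [[1.0, 2.0], [2.0, 4.0]]  # second row is twice the first: rank 1
print(det2(regular))   # 1.0
print(det2(singular))  # 0.0 -> the adjugate formula would divide by zero
```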
