
4.3 Multiplication of Matrices


If (m × n) matrix A, or [a_{i,j}] as per Eq. (4.2), and (n × p) matrix B, defined as

$B \equiv [b_{j,l}], \quad j = 1, 2, \dots, n; \; l = 1, 2, \dots, p$   (4.46)

are considered – with number of columns of A equal to number of rows of B, then the said matrices can be multiplied via

$AB \equiv [a_{i,j}][b_{j,l}] \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,l} \right] \equiv [d_{i,l}]$   (4.47)

the product is an (m × p) matrix, with generic element d_{i,l}.
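
By way of illustration (a minimal numerical sketch added here, not part of the original text, assuming the NumPy library is available and using arbitrarily chosen entries), the algorithm conveyed by Eq. (4.47) may be reproduced element by element and checked against a library implementation:

```python
import numpy as np

# Arbitrary (2 x 3) matrix A and (3 x 2) matrix B, chosen only for illustration
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])

# Generic element d_{i,l} = sum over j of a_{i,j} * b_{j,l}, as in Eq. (4.47)
D = np.array([[sum(A[i, j] * B[j, l] for j in range(A.shape[1]))
               for l in range(B.shape[1])]
              for i in range(A.shape[0])])

print(D)                          # the (2 x 2) product AB
print(np.array_equal(D, A @ B))   # True -- matches NumPy's built-in product
```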

Note that multiplication in reverse order will not be possible unless p = m, due to the matching between number of columns of B and number of rows of A that would then be required; this example suffices to prove that multiplication of matrices is not commutative. However, even in the case of (n × n) matrices A and B that can be multiplied in either order, one gets

$AB \equiv [a_{i,j}][b_{j,l}] \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,l} \right] \equiv [d_{i,l}], \quad i, l = 1, 2, \dots, n$   (4.48)

in lieu of Eq. (4.47), as well as

$BA \equiv [b_{k,i}][a_{i,j}] \equiv \left[ \sum_{i=1}^{n} b_{k,i} a_{i,j} \right] \equiv \left[ \sum_{i=1}^{n} a_{i,j} b_{k,i} \right] \equiv [e_{k,j}]$   (4.49)

with generic element e_{k,j} – written with the aid of the commutative property of multiplication of scalars. The equality of d_{i,l} to e_{k,j} cannot be guaranteed, because the elements chosen for the partial two‐factor products are not the same; for instance, the element positioned in the first row and column of AB looks like a_{1,1}b_{1,1} + a_{1,2}b_{2,1} + ⋯ + a_{1,n}b_{n,1}, whereas the corresponding element of BA reads b_{1,1}a_{1,1} + b_{1,2}a_{2,1} + ⋯ + b_{1,n}a_{n,1} – which is obviously distinct from the former, despite coincidence of (only) the first term.
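
A small numerical example (an illustrative sketch added here, assuming NumPy, with arbitrarily chosen entries) makes the lack of commutativity concrete:

```python
import numpy as np

# Two arbitrary (2 x 2) matrices
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)                          # [[2 1], [4 3]]
print(B @ A)                          # [[3 4], [1 2]]
print(np.array_equal(A @ B, B @ A))   # False -- AB and BA differ
```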

Consider now three matrices A, B, and C, of the (m × n), (n × p), and (p × q) types, respectively – so product AB is an (m × p) rectangular matrix, whereas product of (m × p) AB by (p × q) C will be an (m × q) matrix, ABC. Recalling Eqs. (4.2) and (4.46), and complementing them with

$C \equiv [c_{l,k}], \quad l = 1, 2, \dots, p; \; k = 1, 2, \dots, q$   (4.50)

one gets

$ABC \equiv (AB)C \equiv \bigl\{ [a_{i,j}][b_{j,l}] \bigr\} [c_{l,k}]$   (4.51)

– where application of the algorithm conveyed by Eq. (4.47) leads to

$ABC \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,l} \right] [c_{l,k}]$   (4.52)

A second application of the said algorithm to Eq. (4.52) gives rise to

$ABC \equiv \left[ \sum_{l=1}^{p} \left( \sum_{j=1}^{n} a_{i,j} b_{j,l} \right) c_{l,k} \right]$   (4.53)

which may be algebraically rearranged as

$ABC \equiv \left[ \sum_{j=1}^{n} \sum_{l=1}^{p} a_{i,j} b_{j,l} c_{l,k} \right]$   (4.54)

as per the distributive property of multiplication of plain scalars – where exchange of summations is possible, as no constraint is imposed upon their limits (i.e. n is independent of p) besides commutativity of addition of scalars; further manipulation yields

$ABC \equiv \left[ \sum_{j=1}^{n} a_{i,j} \left( \sum_{l=1}^{p} b_{j,l} c_{l,k} \right) \right] \equiv [a_{i,j}] \left[ \sum_{l=1}^{p} b_{j,l} c_{l,k} \right] \equiv [a_{i,j}] \bigl\{ [b_{j,l}][c_{l,k}] \bigr\}$   (4.55)

at the expense of the algorithm conveyed by Eq. (4.47) applied twice reversewise – thus prompting the conclusion

$ABC = A(BC)$   (4.56)

based on Eqs. (4.2), (4.46), and (4.50). Therefore, multiplication of matrices is associative, provided that the relative order of multiplication of the original factors is kept; Eq. (4.56) is often coined as

$(AB)C = A(BC)$   (4.57)

in view of the common (intermediate) form in Eq. (4.54).
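
Associativity is easily verified numerically; the sketch below (an added illustration with randomly generated matrices, assuming NumPy is available) checks Eq. (4.57) up to floating-point round-off:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 3))   # (m x n)
B = rng.random((3, 4))   # (n x p)
C = rng.random((4, 5))   # (p x q)

# (AB)C and A(BC) coincide, apart from round-off, as per Eq. (4.57)
print(np.allclose((A @ B) @ C, A @ (B @ C)))   # True
```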

If (m × n) matrix A is multiplied by the (n × n) identity matrix, I_n, then Eq. (4.47) can be revisited as

$A I_{n} \equiv \left[ \sum_{j=1}^{r-1} a_{i,j} \cdot 0 \; + \; a_{i,r} \cdot 1 \; + \; \sum_{j=r+1}^{n} a_{i,j} \cdot 0 \right]$   (4.58)

where the identity matrix is defined as

$I_{n} \equiv \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$   (4.59)

– with a main diagonal of 1's, and 0's elsewhere; since the summations are both nil for carrying a nil factor, Eq. (4.58) breaks down to

$A I_{n} = [a_{i,r}]$   (4.60)

– so Eq. (4.2) will finally support

$A I_{n} = A$   (4.61)

since 1 ≤ r ≤ n. In other words, multiplication of a matrix by the (compatible) identity matrix leaves the former unchanged – so I_n plays the role of neutral element for the multiplication of matrices. This very same conclusion can be attained if the order of multiplication is reversed, i.e.

$I_{m} A \equiv \left[ \sum_{i=1}^{r-1} 0 \cdot a_{i,j} \; + \; 1 \cdot a_{r,j} \; + \; \sum_{i=r+1}^{m} 0 \cdot a_{i,j} \right]$   (4.62)

as per Eq. (4.47), provided that the matrices are still compatible with regard to multiplication – so I_n has been swapped with I_m; since the right‐hand side reduces to its middle term, Eq. (4.62) simplifies again to

$I_{m} A = [a_{r,j}]$   (4.63)

or else

$I_{m} A = A$   (4.64)

in view of Eq. (4.2) – i.e. the order of multiplication of the identity matrix by another matrix (when feasible) does not affect the final result.
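
The neutral role of the identity matrix can likewise be confirmed numerically (an added sketch, assuming NumPy; the matrix A below is an arbitrary example):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # an arbitrary (2 x 3) matrix, i.e. m = 2 and n = 3

I3 = np.eye(3)   # I_n, for postmultiplication as in Eq. (4.61)
I2 = np.eye(2)   # I_m, for premultiplication as in Eq. (4.64)

print(np.array_equal(A @ I3, A))   # True -- A I_n = A
print(np.array_equal(I2 @ A, A))   # True -- I_m A = A
```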

When an (m × n) matrix A is postmultiplied by a (compatible) null (n × p) matrix, 0_{n×p}, one gets

$A \, 0_{n \times p} \equiv [a_{i,j}] [0]$   (4.65)

as per Eq. (4.2) – where Eq. (4.47) can be employed to get

$A \, 0_{n \times p} \equiv \left[ \sum_{j=1}^{n} a_{i,j} \cdot 0 \right] \equiv [0]$   (4.66)

together with the trivial rules of multiplication of a plain scalar by zero and summation of any number of resulting zeros; Eq. (4.66) is thus equivalent to

$A \, 0_{n \times p} = 0_{m \times p}$   (4.67)

meaning that postmultiplication by the null matrix always degenerates to a null matrix (with the same number of columns). By the same token, premultiplication of A by the (compatible) null (p × m) matrix 0_{p×m}, i.e.

$0_{p \times m} A \equiv [0] [a_{i,j}]$   (4.68)

on the basis of Eq. (4.2), gives rise to

$0_{p \times m} A \equiv \left[ \sum_{i=1}^{m} 0 \cdot a_{i,j} \right] \equiv [0]$   (4.69)

by virtue of Eq. (4.47) – complemented by the nil product of zero by any scalar and the nil sum of resulting zeros; Eq. (4.69) is an alias of

$0_{p \times m} A = 0_{p \times n}$   (4.70)

meaning that premultiplication by a null matrix leads necessarily to the corresponding null matrix as product – where the latter has number of columns not necessarily coincident with that of the factor null matrix.
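
Both situations can be confirmed with a short numerical sketch (added here for illustration, assuming NumPy; the matrix A is an arbitrary example):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # an arbitrary (m x n) matrix with m = 2 and n = 3

print(A @ np.zeros((3, 4)))       # (2 x 4) null matrix, as anticipated by Eq. (4.67)
print(np.zeros((5, 2)) @ A)       # (5 x 3) null matrix, as anticipated by Eq. (4.70)
```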

A final property of interest pertains to simultaneous performance of addition and multiplication of matrices, viz.

$(A + B) C \equiv \bigl( [a_{i,j}] + [b_{i,j}] \bigr) [c_{j,k}]$   (4.71)

upon retrieval of Eqs. (4.2), (4.3), and (4.50) with p ≡ n; in view of Eq. (4.4), it becomes possible to rewrite Eq. (4.71) as

$(A + B) C \equiv [a_{i,j} + b_{i,j}] [c_{j,k}]$   (4.72)

whereas application of Eq. (4.47) allows further transformation to

$(A + B) C \equiv \left[ \sum_{j=1}^{n} (a_{i,j} + b_{i,j}) c_{j,k} \right]$   (4.73)

In view of the distributive property of multiplication of scalars, Eq. (4.73) turns to

$(A + B) C \equiv \left[ \sum_{j=1}^{n} a_{i,j} c_{j,k} + \sum_{j=1}^{n} b_{i,j} c_{j,k} \right]$   (4.74)

where splitting of the summation meanwhile took place – with the aid of the associative property of addition of scalars; the algorithm labeled as Eq. (4.47) may again be retrieved, viz.

$(A + B) C \equiv \left[ \sum_{j=1}^{n} a_{i,j} c_{j,k} \right] + \left[ \sum_{j=1}^{n} b_{i,j} c_{j,k} \right]$   (4.75)

along with Eq. (4.4) – which is to say

$(A + B) C = AC + BC$   (4.76)

again at the expense of Eqs. (4.2), (4.3), and (4.50). By the same token,

$A (B + C) \equiv [a_{i,j}] \bigl( [b_{j,l}] + [c_{j,l}] \bigr)$   (4.77)

– in view of the definitions of A, B, and C, via their generic elements conveyed by Eqs. (4.2), (4.46), and (4.50), with dimensions such that B and C are both (n × p) matrices; Eq. (4.4) supports transformation of Eq. (4.77) to

$A (B + C) \equiv [a_{i,j}] [b_{j,l} + c_{j,l}]$   (4.78)

while Eq. (4.47) permits conversion to

$A (B + C) \equiv \left[ \sum_{j=1}^{n} a_{i,j} (b_{j,l} + c_{j,l}) \right]$   (4.79)

Based on the distributive property of multiplication of scalars, Eq. (4.79) will give rise to

$A (B + C) \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,l} + \sum_{j=1}^{n} a_{i,j} c_{j,l} \right]$   (4.80)

together with replacement of the summation of the sum by the sum of corresponding summations; Eqs. (4.4) and (4.47) may finally be called upon to write

$A (B + C) \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,l} \right] + \left[ \sum_{j=1}^{n} a_{i,j} c_{j,l} \right]$   (4.81)

or else

$A (B + C) = AB + AC$   (4.82)

again due to Eqs. (4.2), (4.46), and (4.50). Therefore, both the pre‐ and the postmultiplication of matrices are distributive.
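
Both distributive laws may be checked numerically as below (an added sketch with randomly generated matrices, assuming NumPy; the symbols are illustrative and do not carry over from the equations above):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((2, 3))    # (m x n)
B = rng.random((3, 4))    # (n x p)
C = rng.random((3, 4))    # another (n x p) matrix
A2 = rng.random((2, 3))   # another (m x n) matrix

print(np.allclose(A @ (B + C), A @ B + A @ C))     # True -- premultiplication distributes
print(np.allclose((A + A2) @ B, A @ B + A2 @ B))   # True -- postmultiplication distributes
```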

In the case of an (n × n) matrix, one may proceed to sequential multiplication k times by itself – usually labeled as

$A^{k} \equiv \underbrace{A \, A \cdots A}_{k \ \text{factors}}$   (4.83)

the outcome is still an (n × n) square matrix – while

$A^{0} \equiv I_{n}$   (4.84)

is usually set by convention. In view of the unique features of an identity matrix outlined in Eqs. (4.61) and (4.64), one realizes that

$I_{n}^{k} = I_{n}$   (4.85)

– because multiplying the (n × n) identity matrix by itself, any number of times, leaves it unchanged.
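
Powers of a square matrix, and of the identity matrix in particular, may be illustrated as follows (an added sketch, assuming NumPy; the matrix chosen is arbitrary):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])   # an arbitrary (2 x 2) matrix

print(np.linalg.matrix_power(A, 3))          # A^3, i.e. A A A
print(np.linalg.matrix_power(A, 0))          # A^0 = I_2, by the convention of Eq. (4.84)
print(np.linalg.matrix_power(np.eye(2), 5))  # I_2^5 = I_2, in line with Eq. (4.85)
```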

If two matrices are partitioned in blocks, multiplication is still to follow the algorithm conveyed by Eq. (4.47) – as long as elements are replaced by germane submatrices; however, the underlying rules of compatibility between columns and rows of factor blocks are to be satisfied by all products. Consider, in this regard, the most common (and simplest) case of a (2 × 2) block matrix, viz.

$A \equiv \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}$   (4.86)

– where (m × n) matrix A was partitioned into (m_1 × n_1) matrix A_{1,1}, (m_1 × n_2) matrix A_{1,2}, (m_2 × n_1) matrix A_{2,1}, and (m_2 × n_2) matrix A_{2,2} as constitutive blocks, with m_1 + m_2 = m and n_1 + n_2 = n; coupled with

$B \equiv \begin{bmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{bmatrix}$   (4.87)

– with (p × q) matrix B partitioned as (p_1 × q_1) matrix B_{1,1}, (p_1 × q_2) matrix B_{1,2}, (p_2 × q_1) matrix B_{2,1}, and (p_2 × q_2) matrix B_{2,2} as constitutive blocks – as well as p_1 + p_2 = p and q_1 + q_2 = q. The product AB is possible if n = p, besides the number of columns of the blocks of A coinciding with the number of rows of the corresponding blocks of B. In fact, the said product will look like

$AB = \begin{bmatrix} A_{1,1} B_{1,1} + A_{1,2} B_{2,1} & A_{1,1} B_{1,2} + A_{1,2} B_{2,2} \\ A_{2,1} B_{1,1} + A_{2,2} B_{2,1} & A_{2,1} B_{1,2} + A_{2,2} B_{2,2} \end{bmatrix}$   (4.88)

following the regular algorithm of multiplication of matrices – as long as n_1 = p_1 to allow existence of A_{1,1}B_{1,1}, A_{1,1}B_{1,2}, A_{2,1}B_{1,1}, and A_{2,1}B_{1,2}, as well as n_2 = p_2 to allow calculation of A_{1,2}B_{2,1}, A_{1,2}B_{2,2}, A_{2,2}B_{2,1}, and A_{2,2}B_{2,2}. This approach is particularly advantageous when one (or more) of the foregoing blocks is either an identity or a null matrix – since the matrix elements in Eq. (4.88) would yield much simpler expressions, involving only a portion of A or B rather than their whole set of elements. Such a concept may logically be extended to any partition of the factor matrices – provided that it is mathematically feasible per se, and the associated block matrices are compatible with regard to multiplication.
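
The blockwise algorithm can be verified numerically, as in the sketch below (added for illustration, assuming NumPy; block sizes were chosen arbitrarily, with n_1 = p_1 and n_2 = p_2 as required):

```python
import numpy as np

rng = np.random.default_rng(2)
# Blocks of A, with m1 = 1, m2 = 2, n1 = 2, n2 = 1
A11, A12 = rng.random((1, 2)), rng.random((1, 1))
A21, A22 = rng.random((2, 2)), rng.random((2, 1))
# Blocks of B, with p1 = n1 = 2, p2 = n2 = 1, q1 = 3, q2 = 2
B11, B12 = rng.random((2, 3)), rng.random((2, 2))
B21, B22 = rng.random((1, 3)), rng.random((1, 2))

A = np.block([[A11, A12], [A21, A22]])   # the full (3 x 3) matrix
B = np.block([[B11, B12], [B21, B22]])   # the full (3 x 5) matrix

# Blockwise product, as in Eq. (4.88), versus the ordinary product
AB_blocks = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(AB_blocks, A @ B))     # True
```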

Consider now (m × n) matrix A, (n × p) matrix B, and (p × m) matrix C; (m × p) matrix AB exists, and its product by (p × m) matrix C will eventually lead to (m × m) matrix ABC – so there will be a true main diagonal of ABC for it being square, and its trace can accordingly be calculated. Recall the associative property of multiplication of matrices, i.e.

$ABC = (AB)C = A(BC)$   (4.89)

as per Eq. (4.57); assuming matrices A and B are defined by their generic elements a_{i,j} and b_{j,k}, as per Eq. (4.1), and

$B \equiv [b_{j,k}], \quad j = 1, 2, \dots, n; \; k = 1, 2, \dots, p$   (4.90)

in lieu of Eq. (4.46), respectively, one may multiply A by B to get

$AB \equiv [a_{i,j}][b_{j,k}] \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,k} \right]$   (4.91)

A similar reasoning may be applied to multiplication of matrix AB by matrix C, with generic element c_{k,l}, viz.

$C \equiv [c_{k,l}], \quad k = 1, 2, \dots, p; \; l = 1, 2, \dots, m$   (4.92)

ordered multiplication of Eqs. (4.91) and (4.92) leads indeed to

$(AB)C \equiv \left[ \sum_{j=1}^{n} a_{i,j} b_{j,k} \right] [c_{k,l}] \equiv \left[ \sum_{k=1}^{p} \sum_{j=1}^{n} a_{i,j} b_{j,k} c_{k,l} \right]$   (4.93)

where the associative property of multiplication of scalars was taken on board. The trace will pick up the sum of only the elements in the main diagonal, i.e. those abiding to l = i, according to

$\mathrm{tr}\{(AB)C\} = \sum_{i=1}^{m} \sum_{k=1}^{p} \sum_{j=1}^{n} a_{i,j} b_{j,k} c_{k,i}$   (4.94)

where advantage was implicitly taken of the commutativity of addition of scalars; however, the order of factors in each term and corresponding summations is arbitrary – because both addition and multiplication of scalars are commutative, while the limits of the said summations are independent of each other. Consequently, Eq. (4.94) may be rewritten as

$\mathrm{tr}\{(AB)C\} = \sum_{j=1}^{n} \sum_{i=1}^{m} \sum_{k=1}^{p} b_{j,k} c_{k,i} a_{i,j}$   (4.95)

where the definition of trace of a matrix was recalled once more; on the other hand,

$BC \equiv [b_{j,k}][c_{k,i}] \equiv \left[ \sum_{k=1}^{p} b_{j,k} c_{k,i} \right]$   (4.96)

as per Eqs. (4.90)–(4.92) – following convenient relabeling of (dummy) subscript l to i, since C ≡ [c_{k,i}] ≡ [c_{k,l}] in terms of generic element, with j holding no relationship to i (or l, for that matter). Hence, the product of BC by A looks like

$(BC)A \equiv \left[ \sum_{k=1}^{p} b_{j,k} c_{k,i} \right] [a_{i,l}] \equiv \left[ \sum_{i=1}^{m} \sum_{k=1}^{p} b_{j,k} c_{k,i} a_{i,l} \right]$   (4.97)

where A ≡ [a_{i,j}] ≡ [a_{i,l}] for absence of constraints encompassing j and l; Eq. (4.2) was again followed, coupled with the distributive property of multiplication of scalars. The trace of (BC)A abides to

$\mathrm{tr}\{(BC)A\} = \sum_{j=1}^{n} \sum_{i=1}^{m} \sum_{k=1}^{p} b_{j,k} c_{k,i} a_{i,j}$   (4.98)

based on its definition; the right‐hand side of Eq. (4.98) is identical to the right‐hand side of Eq. (4.95), so one readily concludes that

$\mathrm{tr}\{(AB)C\} = \mathrm{tr}\{(BC)A\}$   (4.99)

– whereas combination with Eq. (4.57) leads finally to

$\mathrm{tr}\{ABC\} = \mathrm{tr}\{BCA\}$   (4.100)

Remember that BC is an (n × m) matrix and A is an (m × n) matrix, so (BC)A = BCA is an (n × n) matrix – and thus distinct from the (m × m) matrix (AB)C = ABC as outlined above, due to the product of matrices not being commutative; nevertheless, the traces of BCA and ABC are the same. A similar derivation would prove that

$\mathrm{tr}\{BCA\} = \mathrm{tr}\{CAB\}$   (4.101)

where (p × n) matrix CA is compatible with (n × p) matrix B, thus yielding a (p × p) matrix CAB that possesses a trace for being square – and equal to that of BCA, despite CAB being distinct from (n × n) matrix BCA. Note that the next similar move of swapping the first factor (i.e. C) to the last position without modifying the sequence of the other two (i.e. AB) would transform tr{CAB} to tr{ABC} again – so tr{CAB} = tr{ABC} would close the cycle with Eqs. (4.100) and (4.101).
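
The cyclic invariance of the trace is readily checked numerically (an added sketch with randomly generated matrices, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((2, 3))   # (m x n)
B = rng.random((3, 4))   # (n x p)
C = rng.random((4, 2))   # (p x m)

t_abc = np.trace(A @ B @ C)   # trace of the (m x m) matrix ABC
t_bca = np.trace(B @ C @ A)   # trace of the (n x n) matrix BCA
t_cab = np.trace(C @ A @ B)   # trace of the (p x p) matrix CAB

print(np.isclose(t_abc, t_bca), np.isclose(t_bca, t_cab))   # True True
```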

A particular situation covered by Eq. (4.100) pertains to an (n × m) matrix A and an (m × n) matrix C, together with I_m playing the role of matrix B; the products ABC and BCA in Eq. (4.100) look like

$\mathrm{tr}\{A I_{m} C\} = \mathrm{tr}\{I_{m} C A\}$   (4.102)

which degenerates to

$\mathrm{tr}\{AC\} = \mathrm{tr}\{CA\}$   (4.103)

at the expense of Eqs. (4.61) and (4.64). Therefore, the trace of the product of two matrices remains unchanged when the said matrices switch positions (should that be compatible with multiplication).
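
This last result may also be confirmed numerically (an added sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((3, 2))   # (n x m)
C = rng.random((2, 3))   # (m x n)

# AC is (3 x 3) and CA is (2 x 2), yet their traces coincide, as in Eq. (4.103)
print(np.isclose(np.trace(A @ C), np.trace(C @ A)))   # True
```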

