Preface
This book is an introduction to linear algebra. Its goal is to develop the standard first topics of the subject. Although there are many computations in the sections, which is expected, the focus is on proving the results and learning how to do this. For this reason, the book starts with a chapter dedicated to basic logic, set theory, and proof‐writing. Although linear algebra has many important applications ranging from electrical circuitry and quantum mechanics to cryptography and computer gaming, these topics will need to wait for another day. The goal here is to master the mathematics so that one is ready for a second course in the subject, either abstract or applied. This may go against current trends in mathematics education, but if any mathematical subject can stand on its own and be learned for its own sake, it is the amazing and beautiful linear algebra.
In addition to the focus on proofs, linear transformations play a central role. For this reason, functions are introduced early, and once the important sets of ℝn are defined in the second chapter, linear transformations are described in the third chapter and motivate the introduction of matrices and their operations. From there, invertible linear transformations and invertible matrices are encountered in the fourth chapter followed by a complete generalization of all previous topics in the fifth with the definition of abstract vector spaces. Geometries are added to the abstractions in the sixth chapter, and the book concludes with nice matrix representations. Therefore, the book’s structure is as follows.
Logic and Set Theory Statements and truth tables are introduced. This includes logical equivalence so that the reader becomes familiar with the logic of statements. This is particularly important when dealing with implications and reasoning that involves De Morgan’s laws. Sets and their operations follow with an introduction to quantification including how to negate both universal and existential sentences. Proof methods are next, including direct and indirect proof, and these are applied to proofs involving subsets. Mathematical induction is also presented. The chapter closes with an introduction to functions, including the concepts of one‐to‐one, onto, and binary operation.
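The logical equivalences in this chapter, such as De Morgan's laws, can be verified mechanically by exhausting a truth table. The following sketch is not from the book; it is a minimal Python illustration of what checking all truth assignments for two statement variables looks like.

```python
from itertools import product

# Verify De Morgan's law  not(p and q) == (not p) or (not q)
# by checking every row of the truth table for p and q.
def de_morgan_holds():
    return all(
        (not (p and q)) == ((not p) or (not q))
        for p, q in product([True, False], repeat=2)
    )
```

A proof, of course, argues the equivalence once and for all; the exhaustive check simply mirrors the truth-table method presented in the chapter.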
Euclidean Space The definition of ℝn is the focus of the second chapter with the main interpretation being that of arrows originating at the origin. Euclidean distance and length are defined, and these are followed by the dot and cross products. Applications include planes and lines, areas and volumes, and the orthogonal projection.
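To give a flavor of the chapter's computations, here is a small Python sketch (not taken from the book) of the dot product, Euclidean length, and the orthogonal projection of u onto v, with plain lists standing in for vectors in ℝn.

```python
import math

# Dot product of two vectors in R^n, represented as Python lists.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Euclidean length: the square root of the dot product of u with itself.
def length(u):
    return math.sqrt(dot(u, u))

# Orthogonal projection of u onto v:  ((u . v) / (v . v)) v.
def proj(u, v):
    c = dot(u, v) / dot(v, v)
    return [c * x for x in v]
```

For example, the vector [3, 4] has length 5, and its projection onto [1, 0] keeps only the first coordinate.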
Transformations and Matrices Now that functions have been defined and interesting sets to serve as their domains and codomains have been given, linear transformations are introduced. After some basic properties, it is shown that these functions have nice representations as matrices. The matrix operations come next, their definitions being motivated by the definitions of the function operations. Linear operators on ℝ2 and ℝ3 serve as important examples of linear transformations. These include the reflections, rotations, contractions, dilations, and shears. The introduction of the kernel and the range is next. Issues with finding these sets motivate the need for easier techniques. Thus, Gauss–Jordan elimination and Gaussian elimination finally make their appearance.
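The rotations mentioned above make a good first example of a linear operator and its matrix representation. The sketch below is illustrative only, not the book's notation: a rotation of ℝ2 by an angle t, applied to a vector by matrix–vector multiplication.

```python
import math

# Matrix of the linear operator that rotates R^2 counterclockwise by t radians.
def rotation_matrix(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

# Apply a matrix A to a vector v by the usual row-times-column rule.
def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
```

Rotating the standard basis vector [1, 0] by a right angle lands it on [0, 1], up to floating-point error.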
Invertibility The fourth chapter introduces the idea of an invertible matrix and ties it to the invertible linear operator. The standard procedure of how to find an inverse is given using elementary matrices, and inverses are then used to solve certain systems of linear equations. The determinant with its basic properties is next. How the elementary row operations affect the determinant is explained and carefully proved using mathematical induction. The next section combines the inverse and the determinant, and important results concerning both are proved. The chapter concludes with some mathematical applications including orthogonal matrices, Cramer’s Rule, and how the determinant can be used to compute the area or volume of the image of a polygon or a solid under a linear transformation.
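In the 2 × 2 case, the determinant, the inverse, and Cramer's Rule can all be written out by hand, which the following Python sketch does. It is an illustration of the chapter's topics under that small-case assumption, not the book's general procedure.

```python
# Determinant of a 2x2 matrix A = [[a, b], [c, d]]:  ad - bc.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Inverse of a 2x2 matrix, assuming det2(A) is nonzero.
def inv2(A):
    d = det2(A)
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

# Cramer's Rule for Ax = b: each x_i is a ratio of determinants,
# the numerator having column i of A replaced by b.
def cramer2(A, b):
    d = det2(A)
    x0 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x1 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return [x0, x1]
```

Solving 2x + y = 5, x + 3y = 10 this way gives x = 1, y = 3, which can be checked by substitution.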
Abstract Vectors Now that the concrete work has been done, it is time to generalize. Vector spaces lead the way as the generalization of ℝn, and these are quickly followed by linear transformations between these abstract vector spaces. The important topics of subspace, linear dependence and linear independence, and basis and dimension soon follow. The proof that every vector space has a basis is given for the sake of completeness, but, other than for the result, the techniques are not pursued very far because this book is, after all, an introduction to the subject. Rank and nullity are defined, both in terms of linear transformations and in terms of matrices. The chapter then concludes with probably the most important topic of the book, isomorphism. Along with isomorphism, coordinates, coordinate maps, and change of basis matrices are presented. The section and chapter conclude with the discovery of the standard matrix of a linear transformation. Although there is more to come, a standing ovation for the standard matrix and its diagram would not be inappropriate.
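The coordinate maps mentioned here can be illustrated concretely in ℝ2: finding the coordinates of a vector relative to a basis means solving a small linear system. The sketch below is a hypothetical illustration, not the book's development, and it solves the 2 × 2 system directly.

```python
# Coordinates of v relative to the basis {b1, b2} of R^2:
# solve c1*b1 + c2*b2 = v for the coordinate vector [c1, c2].
def coordinates(b1, b2, v):
    d = b1[0] * b2[1] - b2[0] * b1[1]   # determinant of the basis matrix
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / d
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / d
    return [c1, c2]
```

Relative to the basis {[1, 1], [1, −1]}, the vector [3, 1] has coordinates [2, 1], since 2·[1, 1] + 1·[1, −1] = [3, 1].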
Inner Product Spaces Although ℝn is usually viewed as Cartesian space, it is technically just a set of n × 1 matrices. Any geometry that it has was given to it in the second chapter, even though its geometry is a copy of the geometry of Cartesian space. A close examination reveals that the geometry of ℝn is based on the dot product. Mimicking this, an abstract vector space is given its geometry with an inner product, which is a function defined so that it has the same basic properties as the dot product. The vector space then becomes an inner product space so that distances, lengths, and angles can be found using objects like matrices, polynomials, and functions. Other topics related to the inner product include a generalization of the orthogonal projection, orthonormal bases, direct sums, and the Gram–Schmidt process.
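The Gram–Schmidt process mentioned at the end of this paragraph can be sketched in a few lines when the inner product is the dot product. This Python version is illustrative only: each vector has its projections onto the previously produced vectors subtracted off and is then normalized.

```python
import math

# Gram-Schmidt with the dot product as the inner product: turn a list of
# linearly independent vectors into an orthonormal list spanning the same space.
def gram_schmidt(vectors):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:                      # subtract the projection onto each
            c = dot(w, e)                    # earlier orthonormal vector e
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = math.sqrt(dot(w, w))             # normalize what remains
        basis.append([wi / n for wi in w])
    return basis
```

Running it on [1, 1] and [1, 0] produces two unit vectors with dot product zero, as the theorem promises.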
Matrix Theory The book concludes with an introduction to the powerful concepts of eigenvalues and eigenvectors. Both the characteristic polynomial and the minimal polynomial are defined and used throughout the chapter. Generalized eigenvectors are presented and used to write ℝn as a direct sum of subspaces. The concept of similar matrices is given, and if a matrix does not have enough eigenvectors, it is proved that such matrices are similar to matrices with a nice form. This is where Schur’s Lemma makes its appearance. However, if a matrix does have enough eigenvectors, the matrix is similar to a very nice diagonal matrix. This is the last section of the book, which includes orthogonal diagonalization, simultaneous diagonalization, and a quick introduction to quadratic forms and how to use eigenvalues to find an equation for a conic section without a middle term.
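For a 2 × 2 matrix, the characteristic polynomial is t² − (trace)t + det, so its roots can be computed directly. The sketch below is a small illustration of that fact, not the book's method, and it assumes the eigenvalues are real.

```python
import math

# Eigenvalues of a 2x2 matrix as the roots of its characteristic polynomial
# t^2 - (trace)t + det, assuming the discriminant is nonnegative.
def eigenvalues2(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])
```

A diagonal matrix returns its diagonal entries, and the matrix that swaps the two coordinates has eigenvalues −1 and 1.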
As with any textbook, where the course is taught influences how the book is used. Many universities and colleges have an introduction to proof course. Because such courses serve as a prerequisite for any proof‐intensive mathematics course, the first chapter of this book can be passed over at these institutions and used only as a reference. If there is no such prerequisite, the first chapter serves as a detailed introduction to proof‐writing that is short enough not to infringe too much on the time spent on purely linear algebra topics. Wherever the book finds itself, the course outline can easily be adjusted with any excluded topics serving as bonus reading for the eager student.
Now for some technical comments. Theorems, definitions, and examples are numbered sequentially as a group in the now common chapter.section.number format. Although some proofs find their way into the text, most start with Proof, end with ■, and are indented. Examples, on the other hand, are simply indented. Some equations are numbered as (chapter.number) and are referred to simply using (chapter.number). Most if not all of the mathematical notation should be clear. It was decided to represent vectors as columns. This leads to some interesting type‐setting, but the clarity and consistency probably more than makes up for any formatting issues. Vectors are boldface, such as u and v, and scalars are not. Most sums are written like u1 + u2 + ⋯ + uk. There is a similar notation for products. However, there are times when summation and product notation must be used. Therefore, if u1, u2, …, uk are vectors,

u1 + u2 + ⋯ + uk = ∑_{i=1}^{k} ui,

and if r1, r2, …, rk are real numbers,

r1 r2 ⋯ rk = ∏_{i=1}^{k} ri.
Each section ends with a list of exercises. Some are computations, some are verifications where the job is to make a computation that illustrates a theorem from the section, and some involve proving results where remembering one’s logic and set theory and how to prove sentences will go a long way.
Solution manuals, one for students and one for instructors, are available. See the book’s page at wiley.com.
Lastly, this book was typeset using LaTeX from the free software distribution of TeX Live running in Arch Linux with the KDE Plasma desktop. The diagrams were created using LibreOffice Draw.
Michael L. O’Leary
Glen Ellyn, Illinois
September, 2020