\ifx\wholebook\relax\else
\documentclass[twoside]{book}
\usepackage[active]{srcltx}
\usepackage[LY1]{fontenc}
\input{DhbDefinitions}
\begin{document}
\fi
\chapter{Linear algebra}
\label{ch:linearalgebra}
\begin{flushright}
{\sl On ne trouve pas l'espace, il faut toujours le
construire.}\footnote{Space is not to be found; it must always be
constructed.}\\ Gaston Bachelard
\end{flushright}
\vspace{1 ex} Linear algebra concerns itself with the manipulation
of vectors and matrices. The concepts of linear algebra are not
difficult, and linear algebra is usually taught in the first year
of university; solving systems of linear equations is even taught
in high school. Of course, one must get used to the bookkeeping
of the indices. The concise notation introduced in linear algebra
for vector and matrix operations allows difficult problems to be
expressed in a few short equations. This notation can be adapted
directly to object-oriented programming.
Figure \ref{fig:linearalgebraclasses} shows the classes described
in this chapter.
\begin{figure}
\centering\includegraphics[width=11cm]{Figures/LinearAlgebraClasses}
\caption{Linear algebra classes}\label{fig:linearalgebraclasses}
\end{figure}
Like chapter \ref{ch:function}, this chapter discusses some
fundamental concepts and operations that shall be used throughout
the rest of the book. It might appear austere to many readers
because, unlike the preceding chapters, it does not contain
concrete examples. However, the reader will find examples of the
use of linear algebra in nearly all remaining chapters of this book.
The chapter begins with a reminder of the operations defined on
vectors and matrices. Then, two methods for solving systems of
linear equations are discussed. This leads to the important
concept of matrix inversion. Finally, the chapter closes with the
problem of finding eigenvalues and eigenvectors.
\section{Vectors and matrices}
\label{sec:linearalgebra} Linear algebra concerns itself with
vectors in multidimensional spaces and the properties of
operations on these vectors. It is a remarkable fact that such
properties can be studied without explicit specification of the
space dimension\footnote{In fact, most mathematical properties
discussed in this chapter are valid for spaces with an infinite
number of dimensions (Hilbert spaces).}.
A vector is an object in a multidimensional space. It is
represented by its components measured on a reference system. A
reference system is a series of vectors from which the entire
space can be generated. A commonly used mathematical notation for
a vector is a lower case bold letter, ${\bf v}$ for example. If
the set of vectors ${\bf u}_1,\ldots,{\bf u}_n$ is a reference
system for a space with $n$ dimensions, then any vector of the
space can be written as:
\begin{equation}
\label{eq:vectordef}
{\bf v} = v_1{\bf u}_1+\cdots+v_n{\bf u}_n,
\end{equation}
where $v_1,\ldots,v_n$ are real numbers in the case of a real
space or complex numbers in a complex space. The numbers
$v_1,\ldots,v_n$ are called the components of the vector.
A matrix is a linear operator over vectors from one space to
vectors in another space not necessarily of the same dimension.
This means that the application of a matrix on a vector is another
vector. To explain what {\sl linear} means, we must quickly
introduce some notation.
A matrix is commonly represented with an upper case bold letter,
${\bf M}$ for example. The application of the matrix ${\bf M}$ on
the vector ${\bf v}$ is denoted by ${\bf M}\cdot{\bf v}$. The fact
that a matrix is a linear operator means that
\begin{equation}
{\bf M}\cdot\left(\alpha{\bf u}+\beta{\bf v}\right)=\alpha{\bf
M}\cdot{\bf u}+\beta{\bf M}\cdot{\bf v},
\end{equation}
for any matrix ${\bf M}$, any vectors ${\bf u}$ and ${\bf v}$, and
any numbers $\alpha$ and $\beta$.
Matrices are usually represented using a table of numbers. In
general, the number of rows and the number of columns are not the
same. A square matrix is a matrix having the same number of rows
and columns. A square matrix maps a vector onto another vector of
the same space.
Vectors and matrices have an infinite number of representations
depending on the choice of reference system. Some properties of
matrices are independent of the reference system. Very often the
reference system is not specified explicitly. For example, the
vector ${\bf v}$ of equation \ref{eq:vectordef} is represented by
the array of numbers $\left( v_1, v_2,\cdots, v_n\right)$ where $n$
is the dimension of the vector. Writing the components of a vector
within parentheses is customary. Similarly a matrix is represented
with a table of numbers called the components of the matrix; the
table is also enclosed within parentheses. For example, the $n$ by
$m$ matrix ${\bf A}$ is represented by:
\begin{equation}
{\bf A}=\pmatrix{a_{11}& a_{12}&\ldots& a_{1m}\cr
a_{21}& a_{22}&\ldots& a_{2m}\cr
\vdots&\vdots&\ddots&\vdots\cr
a_{n1}& a_{n2}&\ldots& a_{nm}\cr}.
\end{equation}
The components can be real or complex numbers. In this book we
shall deal only with vectors and matrices having real components.
For simplicity a matrix can also be written with matrix
components. That is, the $n$ by $m$ matrix ${\bf A}$ can be
written in the following form:
\begin{equation}
\label{eq:matrixcomp}
{\bf A}=\pmatrix{{\bf B}&{\bf C}\cr{\bf D}&{\bf E}\cr},
\end{equation}
where ${\bf B}$, ${\bf C}$, ${\bf D}$ and ${\bf E}$ are all
matrices of lower dimensions. Let ${\bf B}$ be a $p$ by $q$
matrix. Then, ${\bf C}$ is a $p$ by $m-q$ matrix, ${\bf D}$ is a
$n-p$ by $q$ matrix and ${\bf E}$ is a $n-p$ by $m-q$ matrix.
Using this notation one can carry all conventional matrix
operations using formulas similar to those written with the
ordinary number components. There is, however, one notable
exception: the order of the products must be preserved since
matrix multiplication is not commutative. For example, the product
between two matrices expressed as matrix components can be carried
out as:
\begin{equation}
\label{eq:matrixcompmult}
\pmatrix{{\bf B_1}&{\bf C_1}\cr{\bf D_1}&{\bf E_1}\cr}\cdot
\pmatrix{{\bf B_2}&{\bf C_2}\cr{\bf D_2}&{\bf E_2}\cr}=
\pmatrix{{\bf B_1}\cdot{\bf B_2}+{\bf C_1}\cdot{\bf D_2}&
{\bf B_1}\cdot{\bf C_2}+{\bf C_1}\cdot{\bf E_2}\cr
{\bf D_1}\cdot{\bf B_2}+{\bf E_1}\cdot{\bf D_2}&
{\bf D_1}\cdot{\bf C_2}+{\bf E_1}\cdot{\bf E_2}\cr}.
\end{equation}
In equation \ref{eq:matrixcompmult} the dimensions of the
respective matrix components must permit the corresponding
products. For example, the number of columns of the matrices
${\bf B_1}$ and ${\bf C_1}$ must be equal to the number of rows of
the matrices ${\bf B_2}$ and ${\bf D_2}$ respectively.
Common operations defined on vectors and matrices are summarized
below. In each equation, the left-hand side shows the vector
notation and the right-hand side shows the expression for
coordinates and components.
\vskip 2ex\noindent The sum of two vectors of dimension $n$ is a
vector of dimension $n$:
\begin{equation}
\label{eq:vectorsum}
\begin{tabular}{ccl}
\relboxc{0.27}{${\bf w}={\bf u}+{\bf
v}$}&\relboxc{0.27}{$w_i=u_i+v_i$}&\relboxl{0.27}{for
$i=1,\ldots,n$}
\end{tabular}
\end{equation}
The product of a vector of dimension $n$ by a number $\alpha$ is a
vector of dimension $n$:
\begin{equation}
\begin{tabular}{ccl}
\relboxc{0.27}{${\bf w}=\alpha{\bf v}$}&\relboxc{0.27}{$w_i=\alpha
v_i$}&\relboxl{0.27}{for $i=1,\ldots,n$}
\end{tabular}
\end{equation}
The scalar product of two vectors of dimension $n$ is a number:
\begin{equation}
\begin{tabular}{ccl} \relboxc{0.27}{$s={\bf u}\cdot{\bf
v}$}&\relboxc{0.27}{$\displaystyle s=\sum_{i=1}^n u_i
v_i$}&\relboxc{0.27}{ }
\end{tabular}
\end{equation}
The norm of a vector is denoted $\left|{\bf v}\right|$. The norm
is the square root of the scalar product with itself.
\begin{equation}
\begin{tabular}{ccl} \relboxc{0.27}{$\left|{\bf v}\right|=\sqrt{{\bf v}\cdot{\bf
v}}$}&\relboxc{0.27}{$\left|{\bf
v}\right|=\sqrt{\displaystyle\sum_{i=1}^n v_i
v_i}$}&\relboxc{0.27}{ }
\end{tabular}
\end{equation}
The tensor product of two vectors of respective dimensions $n$ and
$m$ is an $n$ by $m$ matrix:
\begin{equation}
\begin{tabular}{ccl}
\relboxc{0.27}{${\bf T}={\bf u}\otimes{\bf v}$}&\relboxc{0.27}{$T_{ij}=u_i v_j$}&\relboxl{0.27}{for
$i=1,\ldots,n$}\\&&\relboxl{0.27}{and $j=1,\ldots,m$}
\end{tabular}
\end{equation}
The sum of two matrices of same dimensions is a matrix of same
dimensions:
\begin{equation}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf C}={\bf A}+{\bf B}$}&\relboxc{0.27}{$c_{ij}=a_{ij}+b_{ij}$}&\relboxl{0.27}{for $i=1,\ldots,n$}\\
&&\relboxl{0.27}{and $j=1,\ldots,m$}\\
\end{tabular}
\end{equation}
The product of a matrix by a number $\alpha$ is a matrix of same
dimensions:
\begin{equation}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf B}=\alpha{\bf A}$}&\relboxc{0.27}{$b_{ij}=\alpha a_{ij}$}&\relboxl{0.27}{for $i=1,\ldots,n$}\\
&&\relboxl{0.27}{and $j=1,\ldots,m$}\\
\end{tabular}
\end{equation}
The transpose of a $n$ by $m$ matrix is a $m$ by $n$ matrix:
\begin{equation}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf B}={\bf A}^{\mathop{\rm T}}$}&\relboxc{0.27}{$b_{ij}=a_{ji}$}&\relboxl{0.27}{for $i=1,\ldots,m$}\\
&&\relboxl{0.27}{and $j=1,\ldots,n$}\\
\end{tabular}
\end{equation}
The product of a $n$ by $m$ matrix with a vector of dimension $m$
is a vector of dimension $n$:
\begin{equation}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf u}={\bf A}\cdot{\bf v}$}&\relboxc{0.27}{$u_i=\displaystyle\sum_{j=1}^m a_{ij}v_j$}&\relboxl{0.27}{for $i=1,\ldots,n$}\\
\end{tabular}
\end{equation}
The transposed product of a vector of dimension $n$ by a $n$ by
$m$ matrix is a vector of dimension $m$:
\begin{equation}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf u}={\bf v}\cdot{\bf A}$}&\relboxc{0.27}{$u_j=\displaystyle\sum_{i=1}^n a_{ij}v_i$}&\relboxl{0.27}{for $j=1,\ldots,m$}\\
\end{tabular}
\end{equation}
The product of a $n$ by $p$ matrix with a $p$ by $m$ matrix is a
$n$ by $m$ matrix:
\begin{equation}
\label{eq:matrixproduct}
\begin{tabular}{ccc}
\relboxc{0.27}{${\bf C}={\bf A}\cdot{\bf B}$}&\relboxc{0.27}{$c_{ij}=\displaystyle\sum_{k=1}^p a_{ik}b_{kj}$}&\relboxl{0.27}{for $i=1,\ldots,n$}\\
&&\relboxl{0.27}{and $j=1,\ldots,m$}\\
\end{tabular}
\end{equation}
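As a concrete check of these definitions, take ${\bf u}=\left(1\;2\right)$
and ${\bf v}=\left(3\;4\right)$. The scalar product, the norm and the
tensor product then evaluate to:
\begin{displaymath}
{\bf u}\cdot{\bf v}=1\cdot 3+2\cdot 4=11,\qquad
\left|{\bf u}\right|=\sqrt{1^2+2^2}=\sqrt{5},\qquad
{\bf u}\otimes{\bf v}=\pmatrix{3&4\cr 6&8\cr}.
\end{displaymath}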
There are of course other operations (the vector cross product,
for example) but they will not be used in this book.
To conclude this quick introduction, let us mention matrices with
special properties.
A square matrix is a matrix which has the same number of rows and
columns. To shorten sentences, we shall speak of a square matrix
of dimension $n$ instead of a $n$ by $n$ matrix.
An identity matrix ${\bf I}$ is a matrix such that
\begin{equation}
{\bf I}\cdot{\bf v}={\bf v}
\end{equation}
for any vector ${\bf v}$. This implies that the identity matrix is
a square matrix. The representation of the identity matrix
contains 1 on the diagonal and 0 off the diagonal in any system of
reference. For any square matrix ${\bf A}$ we have:
\begin{equation}
{\bf I}\cdot{\bf A}={\bf A}\cdot{\bf I}={\bf A}
\end{equation}
One important property for the algorithms discussed in this book
is symmetry. A symmetric matrix is a matrix such that ${\bf
A}^{\mathop{\rm T}}={\bf A}$. In any system of reference the
components of a symmetric matrix have the following property:
\begin{equation}
a_{ij}=a_{ji}, \mbox{\quad for all $i$ and $j$.}
\end{equation}
The sum of two symmetric matrices is a symmetric matrix; the
product of two symmetric matrices, however, is symmetric only if
the two matrices commute. The matrix ${\bf A}^{\mathop{\rm
T}}\cdot{\bf A}$ is a symmetric matrix for any matrix ${\bf A}$.
If the matrix ${\bf A}$ represented in equation
\ref{eq:matrixcomp} is symmetric, we have ${\bf D}={\bf
C}^{\mathop{\rm T}}$.
\subsection{Vector and matrix implementation}
\label{sec:slinearalgebra} \marginpar{Figure
\ref{fig:linearalgebraclasses} with the box {\bf Vector} and {\bf
Matrix} grayed.} Listings \ref{ls:vector} and \ref{ls:matrix} show
respectively the implementation of vectors and matrices.
A special implementation for symmetric matrices
is shown in listing \ref{ls:symmatrix}.
The public interface is designed to map as closely as possible
onto the mathematical definitions. Here are some examples of code
using operations between vectors and matrices:
\begin{codeExample}
\begin{verbatim}
| u v w a b c |
u := #(1 2 3) asPMVector.
v := #(3 4 5) asPMVector.
a := PMMatrix rows: #((1 0 1) (-1 -2 3)).
b := PMMatrix rows: #((1 2 3) (-2 1 7) (5 6 7)).
w := 4 * u + (3 * v).
c := a * b.
v := a * u.
w := c transpose * v.
w := v * c
\end{verbatim}
\end{codeExample}
In the first two lines after the declarative statement, the
vectors ${\bf u}$ and ${\bf v}$ are defined from their component
array using the creator method {\tt asPMVector}. They are
3-dimensional vectors. The matrices ${\bf a}$ and ${\bf b}$ are
created by supplying the components to the class creation method
{\tt rows:}. The matrix ${\bf a}$ is a 2 by 3 matrix, whereas the
matrix ${\bf b}$ is a square matrix of dimension 3. In all cases
the variable {\tt w} is assigned a vector and the variable {\tt
c} is assigned a matrix. First, the vector ${\bf w}$ is
assigned to a linear combination of the vectors ${\bf u}$ and
${\bf v}$. Apart from the parentheses required for the second
product, the expression is identical to what one would write in
mathematics (compare this expression with equation
\ref{eq:vectordef}).
Next the matrix ${\bf c}$ is defined as the product of the
matrices ${\bf a}$ and ${\bf b}$, in this order. It is a direct
transcription of the left part of equation \ref{eq:matrixproduct},
up to the lower case of the operands.
The next assignment redefines the vector ${\bf v}$ as the product
of the matrix ${\bf A}$ with the vector ${\bf u}$. It is now a
2-dimensional vector. Here again the correspondence between the
Pharo and the mathematical expression is direct.
The last two lines compute the vector ${\bf w}$ as the transpose
product with the matrix ${\bf c}$. The result of both lines is the
same\footnote{\label{ft:covariant}There is a subtle difference
between regular vectors and transposed vectors, which is
overlooked by our choice of implementation, however. Transposed
vectors, or covariant vectors as they are called in differential
geometry, should be implemented in a proper class. This extension
is left as an exercise to the reader.}. The first line makes the
transposition of the matrix ${\bf c}$ explicit, whereas the second
line uses the implicit definition of the transpose product. The
second line is faster than the first since no memory
allocation is required for the temporary storage of the transposed
matrix.
The use of the other methods corresponding to the operations
defined in equations \ref{eq:vectorsum} to \ref{eq:matrixproduct}
is left as an exercise to the reader.
\rubrique{Implementation} A vector is akin to an instance of the
Pharo class {\tt Array}, for which mathematical operations
have been defined. Thus, a vector in Pharo can be implemented
directly as a subclass of the class {\tt Array}. A matrix is an
object whose instance variable is an array of vectors.
The operations described in the preceding section can be assigned
to the corresponding natural operators. The multiplication,
however, can involve several types of operands. It can be applied
between
\begin{enumerate}
\item a vector and a number,
\item a matrix and a number or
\item a vector and a matrix.
\end{enumerate}
Thus, the multiplication will be implemented using double
dispatching as explained in section \ref{sec:polymath} for
operations between polynomials. Double dispatching is described in
the appendix (\cf section \ref{sec:doubledisp}).
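To illustrate the idea, a double-dispatched multiplication could be
sketched as follows. This is only a sketch: the selectors {\tt
productWithVector:} and {\tt scalarProduct:} are illustrative names
and do not necessarily match the selectors of the actual
implementation.
\begin{codeExample}
\begin{verbatim}
PMVector >> * aFactor
    "Let the argument decide which kind of product
     applies (double dispatching)."
    ^ aFactor productWithVector: self

PMMatrix >> productWithVector: aVector
    "Transposed product: each component is the scalar
     product of aVector with a column of the receiver.
     The selector scalarProduct: is hypothetical."
    ^ self columnsCollect: [ :each | each scalarProduct: aVector ]
\end{verbatim}
\end{codeExample}
The point of the pattern is that the receiver of {\tt *} never
inspects the class of its argument; the argument selects the
appropriate product by the message it responds to.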
The method {\tt asPMVector} is defined for compatibility with a
similar method defined in the class {\tt Collection} to construct a
vector out of any collection object.
The method {\tt tensorProduct} returns an instance of a symmetric
matrix. This class is defined in listing \ref{ls:symmatrix}.
The method {\tt accumulate} is meant to be used when there is a
need to add several vectors. Indeed the following code
\begin{codeExample}
\begin{verbatim}
| a b c d e |
a := #(1 2 3 4 5) asPMVector.
b := #(2 3 4 5 6) asPMVector.
c := #(3 4 5 6 7) asPMVector.
d := #(4 5 6 7 8) asPMVector.
e := a+b+c+d.
\end{verbatim}
\end{codeExample}
creates a lot of short-lived vectors, namely one
for each addition. Using the method {\tt accumulate:} reduces the
memory allocation:
\begin{codeExample}
\begin{verbatim}
| a b c d e |
a := #(1 2 3 4 5) asPMVector.
b := #(2 3 4 5 6) asPMVector.
c := #(3 4 5 6 7) asPMVector.
d := #(4 5 6 7 8) asPMVector.
e := a copy.
e accumulate: b; accumulate: c; accumulate: d.
\end{verbatim}
\end{codeExample}
If vectors of large dimension are used, using accumulation instead
of addition can make a big difference in performance since many
large short-lived objects put a heavy toll on the garbage
collector.
\begin{listing} Vector class in Pharo \label{ls:vector}
\input{Smalltalk/LinearAlgebra/DhbVector}
\end{listing}
\noindent The class {\tt PMMatrix} has two instance variables:
\begin{description}
\item[\tt rows] an array of vectors, each representing a
row of the matrix and
\item[\tt lupDecomposition ] a pointer to an object of the class
{\tt PMLUPDecomposition} containing the LUP decomposition of the
matrix if already computed. LUP decomposition is discussed in
section \ref{sec:lup}.
\end{description}
This implementation reuses the implementation of the vector
scalar product to make the code as compact as possible. The
iterator methods {\tt columnsCollect:}, {\tt columnsDo:}, {\tt
rowsCollect:} and {\tt rowsDo:} are designed to limit the need for
index management to these methods only.
An attentive reader will have noticed that the iterator methods
{\tt rowsDo:} and {\tt rowsCollect:} present a potential breach of
encapsulation. Indeed, the following expression
\begin{quote}
\begin{verbatim}
aMatrix rowsDo:[ :each | each at: 1 put: 0]
\end{verbatim}
\end{quote}
modifies the matrix's internal representation, bypassing the
public interface. Similarly, the expression
\begin{quote}
\begin{verbatim}
aMatrix rowsCollect:[ :each | each]
\end{verbatim}
\end{quote}
gives direct access to the matrix's internal representation.
The method {\tt square} implements the product of the transpose of
a matrix with itself. This construct is used in several algorithms
presented in this book.
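As an illustration, the symmetric matrix ${\bf a}^{\mathop{\rm
T}}\cdot{\bf a}$ of the 2 by 3 matrix used in the earlier code
example can be obtained as follows:
\begin{codeExample}
\begin{verbatim}
| a s |
a := PMMatrix rows: #((1 0 1) (-1 -2 3)).
s := a square.
\end{verbatim}
\end{codeExample}
Working out the product by hand, the variable {\tt s} holds the
symmetric matrix with rows $\left(2\;2\;-2\right)$,
$\left(2\;4\;-6\right)$ and $\left(-2\;-6\;10\right)$.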
\begin{quotation}
\noindent {\bf Note:} The presented matrix implementation is
straightforward. Depending on the problem to solve, however, it is
not the most efficient one. Each multiplication allocates a lot of
memory. If the problem is such that one can allocate memory once
and for all, more efficient methods can be designed.
\end{quotation}
The implementation of matrix operations --- addition, subtraction,
product --- uses double or multiple dispatching to determine
whether or not the result is a symmetric matrix. Double and
multiple dispatching are explained in sections
\ref{sec:doubledisp} and \ref{sec:multipledisp}. The reader who is
not familiar with multiple dispatching should trace down a few
examples between simple matrices using the debugger.
\begin{listing} Matrix classes \label{ls:matrix}
\input{Smalltalk/LinearAlgebra/DhbMatrix}
\end{listing}
Listing \ref{ls:symmatrix} shows the implementation of the class
{\tt PMSymmetricMatrix} representing symmetric matrices. A few
algorithms are implemented differently for symmetric matrices.
The reader should pay attention to the methods implementing
addition, subtraction and products. Triple dispatching is used to
ensure that the addition or subtraction of two symmetric matrices
yields a symmetric matrix whereas the same operations between a
symmetric matrix and a normal matrix yield a normal matrix.
Product requires quadruple dispatching.
\begin{listing} Symmetric matrix classes \label{ls:symmatrix}
\input{Smalltalk/LinearAlgebra/DhbSymmetricMatrix}
\end{listing}
\section{Linear equations}
\label{sec:lineqs} A linear equation is an equation in which the
unknowns appear to the first order and are combined with the other
unknowns only with addition or subtraction. For example, the
following equation:
\begin{equation}
3x_1-2x_2+4x_3=0,
\end{equation}
is a linear equation for the unknowns $x_1$, $x_2$ and $x_3$. The
following equation
\begin{equation}
\label{eq:nonlineq}
3x_1^2-2x_2+4x_3 - 2 x_2 x_3=0,
\end{equation}
is not linear because $x_1$ appears in a second order term and a
term containing the product of the unknowns $x_2$ and $x_3$ is
present. However, equation \ref{eq:nonlineq} is linear in the
unknown $x_2$ (or $x_3$) alone. A system of linear equations has
the same number of equations as there are unknowns. For example
\begin{equation}
\label{eq:lineqex}
\left\{
\begin{array}{lcr}
3x_1+2x_2+4x_3&=&16\\
2x_1-5x_2-x_3&=&6\\
x_1-2x_2+2x_3&=&10\\
\end{array}\right.
\end{equation}
is a system of linear equations which can be solved for the 3
unknowns $x_1$, $x_2$ and $x_3$. Its solution is $x_1=2$, $x_2=-1$
and $x_3=3$, as can be verified by substitution into each equation.
A system of linear equations can be written in
vector notation as follows:
\begin{equation}
\label{eq:lineqs}
{\bf A}\cdot{\bf x}={\bf y}.
\end{equation}
The matrix ${\bf A}$ and the vector ${\bf y}$ are given. Solving
the system of equations means looking for a vector ${\bf x}$ such
that equation \ref{eq:lineqs} holds. The vector ${\bf x}$ is the
solution of the system. A necessary condition for a unique
solution to exist is that the matrix ${\bf A}$ be a square matrix.
Thus, we shall concentrate on square matrices only\footnote{It is
possible to solve systems of linear equations defined with a
non-square matrix using a technique known as singular value
decomposition (SVD). In this case, however, the solution of the
system is not a unique vector, but a subspace of $n$-dimensional
space where $n$ is the number of columns of the system's matrix.
The SVD technique is similar to the techniques used to find
eigenvalues and eigenvectors.}. A sufficient condition for the
existence of a unique solution is that the rank of the matrix
--- that is the number of linearly independent rows
--- is equal to the dimension of the matrix.
If the conditions are all fulfilled, the solution to equation
\ref{eq:lineqs} can be written in vector notation:
\begin{equation}
{\bf x}={\bf A}^{-1}\cdot{\bf y},
\end{equation}
where ${\bf A}^{-1}$ denotes the inverse of the matrix ${\bf A}$
(\cf section \ref{sec:matrixinversion}).
Computing the inverse of a matrix in the general case is
numerically quite demanding (\cf section
\ref{sec:matrixinversion}). Fortunately, there is no need to
explicitly compute the inverse of a matrix to solve a system of
equations. Let us first assume that the matrix of the system is a
triangular matrix, that is, we have:
\begin{equation}
\label{eq:trianglesystem}
{\bf T}\cdot{\bf x}={\bf y}^{\prime},
\end{equation}
where ${\bf T}$ is a matrix such that:
\begin{equation}
T_{ij}=0 \mbox{\quad for $i>j$}.
\end{equation}
\rubrique{Backward substitution} The solution of the system of
equation \ref{eq:trianglesystem} can be obtained using backward
substitution. The name {\sl backward} comes from the fact that the
solution begins by calculating the component with the highest
index; then it works its way backward on the index, calculating
each component using all components with higher indices.
\begin{mainEquation}
\left\{
\begin{array}{lcl}
x_n & = & {\displaystyle y_n^{\prime} \over\displaystyle t_{nn}},
\\*[2 ex]
x_i & = & {\displaystyle y_i^{\prime} - \sum_{j=i+1}^n t_{ij}x_j
\over\displaystyle t_{ii}} \mbox{\quad for
$i=n-1,\ldots,1$}.
\end{array}
\right.
\end{mainEquation}
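As a small worked example, consider the triangular system
\begin{displaymath}
\pmatrix{2&1&1\cr 0&3&-1\cr 0&0&2\cr}\cdot{\bf x}=
\pmatrix{7\cr 1\cr 4\cr}.
\end{displaymath}
Backward substitution gives $x_3=4/2=2$, then
$x_2=\left(1+x_3\right)/3=1$ and finally
$x_1=\left(7-x_2-x_3\right)/2=2$.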
\rubrique{Gaussian elimination} Any system as described by
equation \ref{eq:lineqs} can be transformed into a system based on
a triangular matrix. This can be achieved through a series of
transformations leaving the solution of the system of linear
equations invariant. Let us first rewrite the system in the
form of a single matrix ${\bf S}$ defined as follows:
\begin{equation}
{\bf S}=\pmatrix{a_{11}& a_{12}&\ldots& a_{1n}&y_1\cr
a_{21}& a_{22}&\ldots& a_{2n}&y_2\cr
\vdots&\vdots&\ddots&\vdots&\vdots\cr
a_{n1}& a_{n2}&\ldots& a_{nn}&y_n\cr}.
\end{equation}
Among all transformations leaving the solution of the system
represented by the matrix ${\bf S}$ invariant, two are especially
helpful in obtaining a system corresponding to a triangular
matrix. First, any two rows of the matrix ${\bf S}$ can be
exchanged. Second, a row of the matrix ${\bf S}$ can be replaced
by a linear combination\footnote{In such a linear combination, the
coefficient of the replaced row must not be zero.} of that row
with another one. The trick is to replace all rows of the matrix
${\bf S}$, except for the first one, with rows having a vanishing
first coefficient.
If $a_{11}=0$, we permute the first row with a row $i$ such that
$a_{i1}\ne 0$. Then, we replace each row $j$, where $j>1$, by
itself minus the first row multiplied by $a_{j1}/a_{11}$.
This yields a new system matrix ${\bf S}^{\prime}$ of the form:
\begin{equation}
{\bf S}^{\prime}=\pmatrix{a_{11}^{\prime}& a_{12}^{\prime}&\ldots& a_{1n}^{\prime}&y_1^{\prime}\cr
0& a_{22}^{\prime}&\ldots& a_{2n}^{\prime}&y_2^{\prime}\cr
\vdots&\vdots&\ddots&\vdots&\vdots\cr
0& a_{n2}^{\prime}&\ldots& a_{nn}^{\prime}&y_n^{\prime}\cr}.
\end{equation}
This step is called {\sl pivoting} the system on row 1 and the
element $a_{11}$ after the permutation is called the pivot. By
pivoting the system on the subsequent rows, we shall obtain a
system built on a triangular matrix as follows:
\begin{equation}
{\bf S}^{\left( n \right)}=\pmatrix{a_{11}^{\left( n \right)}& a_{12}^{\left( n \right)}& a_{13}^{\left( n \right)}&\ldots& a_{1n}^{\left( n \right)}&y_1^{\left( n \right)}\cr
0& a_{22}^{\left( n \right)}& a_{23}^{\left( n \right)}&\ldots& a_{2n}^{\left( n \right)}&y_2^{\left( n \right)}\cr
0& 0& a_{33}^{\left( n \right)}&\ldots& a_{3n}^{\left( n \right)}&y_3^{\left( n \right)}\cr
\vdots&\vdots& &\ddots&\vdots&\vdots\cr
0& 0&\ldots&0& a_{nn}^{\left( n \right)}&y_n^{\left( n \right)}\cr}.
\end{equation}
This algorithm is called Gaussian elimination. Gaussian
elimination fails if we are unable to find a row with a
non-zero pivot at one of the steps. In that case the system does
not have a unique solution.
The first $n$ columns of the matrix ${\bf S}^{\left( n \right)}$
can be identified with the matrix ${\bf T}$ of equation
\ref{eq:trianglesystem} and the last column of the matrix ${\bf
S}^{\left( n \right)}$ corresponds to the vector ${\bf
y}^{\prime}$ of equation \ref{eq:trianglesystem}. Thus, the final
system can be solved with backward substitution.
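To make the procedure concrete, here is Gaussian elimination
carried out on a small illustrative system:
\begin{displaymath}
\pmatrix{2&1&1&7\cr 4&1&0&6\cr -2&2&1&5\cr}
\longrightarrow
\pmatrix{2&1&1&7\cr 0&-1&-2&-8\cr 0&3&2&12\cr}
\longrightarrow
\pmatrix{2&1&1&7\cr 0&-1&-2&-8\cr 0&0&-4&-12\cr}.
\end{displaymath}
The first step subtracts twice the first row from the second and
adds the first row to the third; the second step adds three times
the second row to the third. Backward substitution then yields
$x_3=-12/(-4)=3$, $x_2=\left(-8+2x_3\right)/(-1)=2$ and
$x_1=\left(7-x_2-x_3\right)/2=1$.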
\begin{quote}
{\bf Note:} The reader can easily understand that one could have
made a transformation to obtain a triangular matrix with the
elements above the diagonal all zero. In this case, the final step
is called forward substitution since the first component of the
solution vector is computed first. The two approaches are fully
equivalent.
\end{quote}
\rubrique{Fine points} An efficient way to avoid a null pivot is to
systematically look for the row having the largest pivot at each
step. To be precise, before pivoting on row $i$, it is first
exchanged with the row $j$ such that $\left| a_{ji}^{\left( i-1
\right)}\right|\ge\left| a_{ki}^{\left( i-1 \right)}\right|$ for all
$k=i,\ldots,n$. The systematic search for the largest pivot
ensures optimal numerical precision \cite{PhiTay}.
The reader will notice that all operations can be performed in
place since the original matrix ${\bf S}$ is not needed to compute
the final result.
Finally it should be noted that the pivoting step can be performed
on several vectors ${\bf y}$ at the same time. If one must solve
the same system of equations for several sets of constants,
pivoting can be applied to all constant vectors by extending the
matrix ${\bf S}$ with as many columns as there are additional
constant vectors as follows:
\begin{equation}
{\bf S}=\pmatrix{a_{11}& a_{12}&\ldots& a_{1n}&y_{11}&\ldots&y_{m1}\cr
a_{21}& a_{22}&\ldots& a_{2n}&y_{12}&\ldots&y_{m2}\cr
\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\cr
a_{n1}& a_{n2}&\ldots& a_{nn}&y_{1n}&\ldots&y_{mn}\cr}.
\end{equation}
Backward substitution must, of course, be evaluated separately for
each constant vector.
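The whole procedure described above, building the extended matrix, searching for the largest pivot at each step, then running backward substitution once per constant vector, can be sketched in Python as follows; names and data layout are illustrative, the book's Smalltalk implementation being the reference:

```python
def gaussian_elimination(a, ys):
    """Solve A x = y for each constant vector y in ys.

    a  -- n x n matrix as a list of rows; ys -- list of constant vectors.
    Sketch only: assumes the matrix is non-singular."""
    n = len(a)
    # Build the extended matrix S = (A | y_1 ... y_m).
    s = [list(a[i]) + [y[i] for y in ys] for i in range(n)]
    for i in range(n):
        # Partial pivoting: bring the row with the largest pivot to row i.
        j = max(range(i, n), key=lambda k: abs(s[k][i]))
        s[i], s[j] = s[j], s[i]
        for k in range(i + 1, n):
            factor = s[k][i] / s[i][i]
            for c in range(i, len(s[k])):
                s[k][c] -= factor * s[i][c]
    # Backward substitution, evaluated once per constant vector.
    solutions = []
    for m in range(len(ys)):
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            tail = sum(s[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (s[i][n + m] - tail) / s[i][i]
        solutions.append(x)
    return solutions
```

Note that the elimination loop runs only once, however many constant vectors are supplied.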
Gaussian elimination is solely dedicated to solving systems of
linear equations. The algorithm is somewhat slower than LUP
decomposition described in section \ref{sec:lup}. When applied to
systems with several constant vectors, however, Gaussian
elimination is faster since the elimination steps are made only
once. In the case of LUP decomposition, obtaining the solution for
each vector requires more operations than those needed by backward
substitution.
\subsection{Linear equations --- General implementation}
\marginpar{Figure \ref{fig:linearalgebraclasses} with the box {\bf
LinearEquations} grayed.} Although using matrix and vector
notation greatly simplifies the discussion of Gaussian
elimination, there is little to gain from an implementation that
uses matrix and vector objects explicitly.
The class creation methods or constructors will take as arguments
either a matrix and a vector or an array of arrays and an array.
The class implementing Gaussian elimination has the following
instance variables:
\begin{description}
\item[\texttt{rows}] an array or a vector whose elements contain the rows of the
matrix ${\bf S}$.
\item[\tt solutions] an array whose elements contain the solutions of the
system corresponding to each constant vector.
\end{description}
Solving the system is entirely triggered by retrieving the
solutions. The instance variable {\tt solutions} is used to record
whether or not Gaussian elimination has already taken place: if
this variable is {\tt nil}, Gaussian elimination has not yet been
performed. Gaussian elimination is performed by the method {\tt
solve}. At the end of the algorithm, the vector of solutions is
stored in the instance variable {\tt solutions}. Similarly,
backward substitution is triggered by retrieving a solution
vector. If the solution for the specified index has not yet been
computed, backward substitution is performed and the result is
stored in the solution array.
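The lazy-evaluation scheme just described can be sketched in Python as follows; the elimination and substitution routines are passed in as stand-ins, since only the triggering logic is illustrated here:

```python
class LazySolutions:
    """Sketch of the lazy scheme described in the text: elimination runs
    on first access, and backward substitution runs once per requested
    index.  `solve` and `back_substitute` are illustrative stand-ins."""

    def __init__(self, rows, n_vectors, solve, back_substitute):
        self.rows = rows
        self.solutions = None          # None plays the role of nil
        self._solve = solve
        self._back_substitute = back_substitute
        self._n = n_vectors

    def solution_at(self, index):
        if self.solutions is None:
            self._solve(self.rows)     # triangularize once, on demand
            self.solutions = [None] * self._n
        if self.solutions[index] is None:
            # Substitute only for the requested constant vector.
            self.solutions[index] = self._back_substitute(self.rows, index)
        return self.solutions[index]
```

Repeated retrievals of the same solution return the cached result without redoing any work.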
\subsection{Linear equation implementation}
Listing \ref{ls:lineqs} shows the class {\tt PMLinearEquationSystem} implementing Gaussian elimination.
To solve the system of equations \ref{eq:lineqex} using Gaussian elimination, one needs to write the following expression:
\begin{codeExample}
\begin{verbatim}
(PMLinearEquationSystem equations: #((3 2 4)
(2 -5 -1)
(1 -2 2))
constant: #(16 6 10)
) solution.
\end{verbatim}
\end{codeExample}
This expression has been written on three lines to delineate the
various steps. The first two lines create an instance of the class
{\tt PMLinearEquationSystem} by supplying the coefficients of the
equations, row by row, on the first line and the constant
vector on the second line. The last line is a call to the method
{\tt solution}, retrieving the solution of the system.
Solving the same system with an additional constant vector
requires a little more code, but not much:
\begin{codeExample}
\begin{verbatim}
| s sol1 sol2 |
s := PMLinearEquationSystem equations: #((3 2 4) (2 -5 -1) (1 -2 2))
constants: #((16 6 10)
(7 10 9)).
sol1 := s solutionAt: 1.
sol2 := s solutionAt: 2.
\end{verbatim}
\end{codeExample}
In this case, the creation method differs in that the two constant
vectors are supplied in an array of arrays, one constant vector
per sub-array. Similarly, the two solutions must be fetched one
after the other.
The class method {\tt equations:constants:} of the class {\tt
PMLinearEquationSystem} allows one to create a new instance for a
given matrix and a series of constant vectors.
The method {\tt solutionAt:} returns the solution for a given
constant vector. The index specified as argument to that method
corresponds to that of the desired constant vector.
The method {\tt solve} performs all required pivoting steps using
a {\tt do:} iterator. The method {\tt pivotStepAt:} first swaps
rows to bring a pivot having the largest absolute value in place
and then invokes the method {\tt pivotAt:} to perform the actual
pivoting.
Convenience methods {\tt equations:constant:} and {\tt solution}
are supplied to treat the most frequent case where there is only
one single constant vector. However, the reader should be reminded
that LUP decomposition is more effective in this case.
If the system does not have a solution --- that is, if the
system's matrix is singular --- an arithmetic error occurs in the
method {\tt pivotAt:} when the division with the zero pivot is
performed. The method {\tt solutionAt:} traps this error within an
exception handling structure and sets the solution vector to a
special value --- the integer 0 --- as a flag to prevent
attempting Gaussian elimination a second time. Then, the value
{\tt nil} is returned to represent the non-existent solution.
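The error-trapping scheme can be sketched in Python, with {\tt ZeroDivisionError} standing in for the Smalltalk arithmetic error; the class and method names are illustrative:

```python
class SolutionCache:
    """Sketch of the failure handling described for solutionAt:.

    The sentinel 0 marks a singular system so that elimination is not
    attempted a second time; None stands for Smalltalk's nil."""

    def __init__(self, solve):
        self._solve = solve      # callable performing Gaussian elimination
        self.solutions = None

    def solution_at(self, index):
        if self.solutions is None:
            try:
                self.solutions = self._solve()
            except ZeroDivisionError:    # zero pivot: singular matrix
                self.solutions = 0       # flag value, prevents a retry
        if self.solutions == 0:
            return None                  # no solution exists
        return self.solutions[index]
```

The division by a zero pivot is attempted exactly once; afterwards the flag value short-circuits directly to {\tt None}.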
\begin{listing} Implementation of a system of linear equations
\label{ls:lineqs}
\input{Smalltalk/LinearAlgebra/DhbLinearEquationSystem}
\end{listing}
\section{LUP decomposition}
\label{sec:lup} LUP decomposition is another technique to solve a
system of linear equations. It is an alternative to the Gaussian
elimination \cite{CorLeiRiv}. Gaussian elimination can solve a
system with several constant vectors, but all constant vectors
must be known before starting the algorithm.
LUP decomposition is done once for the matrix of a given system.
Thus, the system can be solved for any constant vector obtained
after the LUP decomposition. In addition, LUP decomposition gives
a way to calculate the determinant of a matrix and it can be used
to compute the inverse of a matrix.
LUP stands for Lower, Upper and Permutation. It comes from the
observation that the rows of any non-singular square matrix ${\bf
A}$ can be permuted so that the permuted matrix decomposes into a
product of 2 square matrices of the same dimension as follows:
\begin{equation}
\label{eq:lup}
{\bf P}\cdot{\bf A}={\bf L}\cdot{\bf U},
\end{equation}
where ${\bf L}$ is a matrix whose components located above the
diagonal are zero (lower triangular matrix), ${\bf U}$ is a matrix
whose components located below the diagonal are zero (upper
triangular matrix) and ${\bf P}$ is a permutation matrix. The
decomposition of equation \ref{eq:lup} is non-unique. One can
select a unique decomposition by requiring that all diagonal
elements of the matrix ${\bf L}$ be equal to 1.
The proof that such a decomposition exists is the algorithm
itself; it is a proof by induction. We shall first construct an
LU decomposition, that is, an LUP decomposition with an identity
permutation matrix. Let us write the matrix as follows:
\begin{equation}
{\bf A}=\pmatrix{a_{11}& a_{12}&\ldots& a_{1n}\cr
a_{21}& a_{22}&\ldots& a_{2n}\cr
\vdots&\vdots&\ddots&\vdots\cr
a_{n1}& a_{n2}&\ldots& a_{nn}\cr}=
\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr{\bf v}&{\bf
A}^{\prime}},
\end{equation}
where ${\bf v}$ and ${\bf w}$ are two vectors of dimension $n-1$
and ${\bf A}^{\prime}$ is a square matrix of dimension $n-1$.
Written in this form, one can write an LU decomposition of the
matrix ${\bf A}$ as follows:
\begin{mainEquation}
\label{eq:lupfirst}
{\bf A}=\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr{\bf v}&{\bf
A}^{\prime}}=\pmatrix{1&0\cr{\displaystyle{\bf v} \over\displaystyle a_{11}}&{\bf
I}_{n-1}}\cdot\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr0&{\bf
A}^{\prime}-{\displaystyle{\bf v}\otimes{\bf w}\over\displaystyle a_{11}}},
\end{mainEquation}
where ${\bf I}_{n-1}$ is an identity matrix of dimension $n-1$.
The validity of equation \ref{eq:lupfirst} can be verified by
carrying out the product of the two matrices on the right-hand side
using matrix components as discussed in section
\ref{sec:linearalgebra}. We are now left with the problem of
finding an LU decomposition for the matrix ${\bf A}^{\prime}-{{\bf
v}\otimes{\bf w}\over a_{11}}$. This matrix is called the Schur
complement of the matrix ${\bf A}$ with respect to the pivot element
$a_{11}$. Let us assume that we have found such a decomposition
for that matrix, that is, that we have:
\begin{mainEquation}
\label{eq:lupshur}
{\bf A}^{\prime}-{\displaystyle{\bf v}\otimes{\bf w}\over\displaystyle
a_{11}}={\bf L}^{\prime}\cdot{\bf U}^{\prime}.
\end{mainEquation}
Then we have:
\begin{eqnarray}
\label{eq:luplast}
\pmatrix{1&0\cr{\displaystyle{\bf v} \over\displaystyle a_{11}}&{\bf
I}_{n-1}}\cdot\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr0&{\bf L}^{\prime}\cdot{\bf U}^{\prime}}
&=&\pmatrix{1&0\cr{\displaystyle{\bf v} \over\displaystyle a_{11}}&{\bf I}_{n-1}}\cdot
\pmatrix{1&0\cr0&{\bf L}^{\prime}}\cdot
\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr0&{\bf
U}^{\prime}}\nonumber\\*[3 ex]
&=&\pmatrix{1&0\cr{\displaystyle{\bf v} \over\displaystyle a_{11}}&{\bf L}^{\prime}}\cdot
\pmatrix{a_{11}&{\bf w}^{\mathop{\rm T}}\cr0&{\bf U}^{\prime}}.
\end{eqnarray}
The above equality can be verified by carrying out the
multiplication with matrix components. The second line of equation
\ref{eq:luplast} is the LU decomposition of the matrix ${\bf A}$
which we were looking for. The algorithm is constructed
recursively on the successive Schur complements. For a practical
implementation, however, it is best to use a loop.
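Such a loop over successive Schur complements can be sketched in Python as follows; this sketch omits the permutation step, so it assumes a non-zero pivot at every stage:

```python
def lu_decompose(a):
    """LU decomposition by successive Schur complements.

    Returns (l, u) with unit diagonal on l.  Sketch only: no pivoting,
    so every pivot element is assumed non-zero."""
    n = len(a)
    u = [list(row) for row in a]          # overwritten step by step
    l = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for k in range(i + 1, n):
            l[k][i] = u[k][i] / u[i][i]   # the column v / a_11 of L
            for c in range(i, n):         # Schur complement A' - v (x) w / a_11
                u[k][c] -= l[k][i] * u[i][c]
    return l, u
```

Each pass of the outer loop reduces the problem to the Schur complement of one lower dimension, exactly as in the inductive argument above.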
Building the Schur complement involves a division by the pivot
element. As for Gaussian elimination, the algorithm runs into
trouble if this element is zero. The expression for the Schur
complement (equation \ref{eq:lupshur}) shows that, if the pivot
element is small, rounding errors occur when computing the
elements of the Schur complement. The strategy to avoid this
problem is the same as for Gaussian elimination: one must find the
row having the component with the largest absolute value and use
this component as the pivot. This is where the permutation matrix
comes into play: it keeps track of each row permutation
required to bring the row selected for pivoting into the first
position. If a non-zero pivot element cannot be found at one step,
then the matrix ${\bf A}$ is singular and no solution to the
system of equations can be found (\cf the similar discussion in
section \ref{sec:lineqs}).
Now that we have proved that an LUP decomposition exists for any
non-singular matrix, let us see how it is used to solve a system
of linear equations. Consider the system described by equation
\ref{eq:lineqs}; using the LUP decomposition of the matrix ${\bf
A}$, it can be rewritten as:
\begin{equation}
\label{eq:lupsys}
{\bf L}{\bf U}\cdot{\bf x}={\bf P}\cdot{\bf y},
\end{equation}
which is obtained by multiplying both sides of equation
\ref{eq:lineqs} by the permutation matrix ${\bf P}$. Equation
\ref{eq:lupsys} can be solved in two steps. First one solves the
system
\begin{equation}
\label{eq:lupforward}
{\bf L}\cdot\tilde{{\bf x}}={\bf P}\cdot{\bf y}
\end{equation}
using forward substitution. Then, one solves the system
\begin{equation}
\label{eq:lupbackward}
{\bf U}\cdot{\bf x}=\tilde{{\bf x}}
\end{equation}
using backward substitution. One can see that the LUP
decomposition can be used several times to solve a linear system
of equations with the same matrix and different constant vectors.
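Given the three factors, the two-step solution of equations \ref{eq:lupforward} and \ref{eq:lupbackward} can be sketched in Python; representing the permutation as an index array is an assumption of this sketch:

```python
def lup_solve(l, u, perm, y):
    """Solve A x = y given the factorization P . A = L . U.

    perm is the permutation as an index array: row i of the permuted
    system corresponds to row perm[i] of the original one.  Forward
    substitution on L, then backward substitution on U."""
    n = len(l)
    py = [y[perm[i]] for i in range(n)]          # the vector P . y
    xt = [0.0] * n                               # x-tilde, forward pass
    for i in range(n):
        xt[i] = py[i] - sum(l[i][j] * xt[j] for j in range(i))
        # the diagonal of L is 1, so no division is needed here
    x = [0.0] * n                                # backward pass on U
    for i in range(n - 1, -1, -1):
        tail = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (xt[i] - tail) / u[i][i]
    return x
```

Only these two substitution passes are repeated for each new constant vector; the factorization itself is reused.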
Performing LUP decomposition is faster than performing Gaussian
elimination because Gaussian elimination must also transform the
constant vector. To compute the solution vector, however, Gaussian
elimination only needs backward substitution whereas LUP requires
both forward and backward substitution. The end result is that
solving a system of linear equations for a single constant vector
is slightly faster using LUP decomposition. If one needs to solve
the same system of equations for several constant vectors known in
advance, Gaussian elimination is faster. If the constant vectors
are not known in advance --- or cannot be all stored with the
original matrix --- LUP decomposition is the algorithm of choice.
\subsection{LUP decomposition --- General implementation}
\marginpar{Figure \ref{fig:linearalgebraclasses} with the box {\bf
LUPDecomposition} grayed.} To implement the LUP algorithm, let us
first note that we do not need much storage for the three matrices
${\bf L}$, ${\bf U}$ and ${\bf P}$. Indeed, the permutation matrix
${\bf P}$ can be represented with a vector of integers of the same
dimension as the matrix rows. Since the diagonal elements of the
matrix ${\bf L}$ are set in advance, we only need to store
elements located below the diagonal. These elements can be stored
in the lower part of the matrix ${\bf U}$. Looking at the
definition of the Schur complement (equation \ref{eq:lupshur})
and at equation \ref{eq:luplast}, we can see that all operations
can be performed within a matrix of the same size as the original
matrix\footnote{If the matrix ${\bf A}$ is no longer needed after
solving the system of equations, the LUP decomposition can even be
performed inside the matrix ${\bf A}$ itself.}.
The implementation of the LUP algorithm must create storage to
place the components of the matrix whose LUP decomposition is to
be computed. A method implements the solving of the system of
equations for a given constant vector. Within that method, the LUP
decomposition itself is performed, using lazy initialization, if
it has not yet been made. During the computation of the LUP
decomposition the parity of the permutation is tracked. This
information is used to compute the determinant of the matrix (\cf
section \ref{sec:determinant}). Thus, the class implementing LUP
decomposition has the following instance variables.
\begin{description}
\item[\tt rows]contains a copy of the rows of the matrix
representing the system of linear equations, \ie the matrix ${\bf
A}$; copying the matrix is necessary since LUP decomposition
destroys the components; at the end of the LUP decomposition, it
will contain the components of the matrices ${\bf L}$ and ${\bf
U}$,
\item[\tt permutation]contains an array of integers describing the permutation,
\ie the matrix ${\bf P}$,
\item[\tt parity]contains the parity of the permutation, kept for
efficiency\footnote{The parity is needed to compute the
determinant. It could be computed from the permutation itself.
However, the overhead of keeping track of the parity is negligible
compared to the LUP steps, and it is much faster than recomputing
the parity afterwards.}.
\end{description}
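The storage scheme described above, with ${\bf L}$ and ${\bf U}$ sharing one matrix, the permutation kept as an index array and the parity flipped at each row exchange, can be sketched in Python as follows; the function name and the exact singular-matrix behavior are illustrative:

```python
def lup_decompose(a):
    """In-place LUP decomposition sketch, as described in the text.

    L (unit diagonal, stored below the diagonal) and U share one matrix;
    the permutation is an index array; parity is tracked on each swap.
    A zero pivot column surfaces as a ZeroDivisionError."""
    n = len(a)
    rows = [list(r) for r in a]               # work on a copy of A
    perm = list(range(n))
    parity = 1
    for i in range(n):
        # Partial pivoting: largest absolute value in column i.
        j = max(range(i, n), key=lambda k: abs(rows[k][i]))
        if j != i:
            rows[i], rows[j] = rows[j], rows[i]
            perm[i], perm[j] = perm[j], perm[i]
            parity = -parity                  # each swap flips the parity
        for k in range(i + 1, n):
            rows[k][i] /= rows[i][i]          # sub-diagonal element of L
            for c in range(i + 1, n):         # Schur complement update
                rows[k][c] -= rows[k][i] * rows[i][c]
    return rows, perm, parity
```

A single $n\times n$ array thus holds the whole factorization, alongside the $n$-element permutation array and one parity flag.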
The instance variable {\tt permutation} is set to undefined ({\tt
nil} in Smalltalk) at initialization time by
default. It is used to check whether the decomposition has already
been made or not.
The method {\tt solve} implements the solving of the equation
system for a given constant vector. It first checks whether the
LUP decomposition has been performed. If not, LUP decomposition is
attempted. Then, the methods implementing the forward and backward
substitution algorithms are called in succession.
\subsection{LUP decomposition}
Listing \ref{ls:lup} shows the methods of the class {\tt
PMLUPDecomposition} implementing LUP decomposition.
To solve the system of equations \ref{eq:lineqex} using LUP
decomposition, one needs to evaluate the following expression:
\begin{codeExample}
\begin{verbatim}
(PMLUPDecomposition equations: #( (3 2 4) (2 -5 -1) (1 -2 2)) )
solve: #(16 6 10).
\end{verbatim}
\end{codeExample}
This expression has been written on two lines to delineate the
various steps. The first line creates an instance of the class
{\tt PMLUPDecomposition} by giving the coefficients of the
equations, row by row. The second line is a call to the method
{\tt solve:}, retrieving the solution of the system for the
supplied constant vector.
Solving the same system for several constant vectors requires
storing the LUP decomposition in a variable:
\begin{codeExample}
\begin{verbatim}
| s sol1 sol2 |
s := PMLUPDecomposition equations: #( (3 2 4) (2 -5 -1) (1 -2 2)).
sol1 := s solve: #(16 6 10).
sol2 := s solve: #(7 10 9).