Gram-Schmidt Orthonormalization Method: 
Since orthonormal vectors are easier to work with, we need an 
algorithm that turns vectors $a_1, \ldots, a_k$ into an orthonormal set by 
linear combinations. The Gram-Schmidt method produces 
orthonormal vectors $q_1, \ldots, q_k$ such that the span of $q_1, \ldots, q_j$ is the 
same as the span of $a_1, \ldots, a_j$ for $1 \le j \le k$. We do this by 
iteratively subtracting off projections onto previous subspaces: 
$a'_j = a_j - (q_1^T a_j)\,q_1 - \cdots - (q_{j-1}^T a_j)\,q_{j-1}$, 
where $a'_j$ is subsequently discarded if it is 0, or normalized if it is not: 
$q_j = a'_j / \|a'_j\|$. 
If $a_1, \ldots, a_k$ are linearly independent, we do not need to worry 
about discarding any vectors: the residuals $a'_j$ will be linearly independent 
and thus, a fortiori, nonzero. 
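As a concrete illustration, the following is a minimal C++ sketch of the classical Gram-Schmidt procedure just described, written with plain std::vector arithmetic; the function names (dot, gramSchmidt) and the tolerance are our own choices and not taken from any library discussed later.

#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& x, const Vec& y) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) s += x[i] * y[i];
    return s;
}

// Classical Gram-Schmidt: returns orthonormal q_1,...,q_k spanning the same
// subspaces as a_1,...,a_k; vectors whose residual a'_j is (numerically) zero
// are discarded, exactly as in the text.
static std::vector<Vec> gramSchmidt(const std::vector<Vec>& a, double tol = 1e-12) {
    std::vector<Vec> q;
    for (const Vec& aj : a) {
        Vec r = aj;                        // a'_j = a_j - sum_i (q_i^T a_j) q_i
        for (const Vec& qi : q) {
            double c = dot(qi, aj);
            for (std::size_t t = 0; t < r.size(); ++t) r[t] -= c * qi[t];
        }
        double nrm = std::sqrt(dot(r, r));
        if (nrm <= tol) continue;          // discard if a'_j == 0
        for (double& v : r) v /= nrm;      // q_j = a'_j / ||a'_j||
        q.push_back(r);
    }
    return q;
}

int main() {
    std::vector<Vec> a = {{1, 1, 0}, {1, 0, 1}, {0, 1, 1}};
    for (const Vec& qj : gramSchmidt(a))
        std::printf("q = (%.4f, %.4f, %.4f)\n", qj[0], qj[1], qj[2]);
}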
QR factorisations are usually used to find the best 
linear least-squares solution to some data. Thus, in order to 
solve the (approximate) equations $Ax \approx b$ for $x$, where $A$ 
is an $m \times n$ matrix ($m > n$), we really need to solve the 
optimisation problem $\min_x \|Ax - b\|_2$. 
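For concreteness, here is a short sketch of solving $\min_x \|Ax - b\|_2$ through a QR factorisation. It uses the Eigen C++ library, which is not one of the libraries surveyed below; it is chosen here only to keep the example compact.

#include <iostream>
#include <Eigen/Dense>

int main() {
    // Overdetermined system: m = 4 observations, n = 2 unknowns.
    Eigen::MatrixXd A(4, 2);
    A << 1, 1,
         1, 2,
         1, 3,
         1, 4;
    Eigen::VectorXd b(4);
    b << 6, 5, 7, 10;

    // Householder QR with column pivoting gives the least-squares solution
    // of the (approximate) equations A x ~= b.
    Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);

    std::cout << "least-squares x =\n" << x << "\n";
    std::cout << "residual norm   = " << (A * x - b).norm() << "\n";
}

In practice a Householder-based QR, as used above, is preferred over plain Gram-Schmidt for least-squares work because it is numerically more robust.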
2.3 Singular value decomposition (SVD) 
Singular value decomposition takes a rectangular matrix of 
gene expression data (defined as $A$, where $A$ is an $n \times p$ matrix) 
in which the $n$ rows represent the genes and the $p$ columns 
represent the experimental conditions. The SVD theorem 
states: 
$A_{n \times p} = U_{n \times n} \, S_{n \times p} \, V^T_{p \times p}$, 
where $U^T U = I_{n \times n}$ and $V^T V = I_{p \times p}$ (i.e. $U$ and $V$ are orthogonal). 
The columns of $U$ are the left singular vectors (gene 
coefficient vectors); $S$ (the same dimensions as $A$) is diagonal and 
holds the singular values (mode amplitudes); and $V^T$ has rows that 
are the right singular vectors (expression level vectors). The 
SVD represents an expansion of the original data in a 
coordinate system where the covariance matrix is diagonal. 
Every matrix has a singular value decomposition. The SVD is 
quite reliable, but may cost up to ten times more time than a QR 
decomposition. Calculating the SVD consists of finding the 
eigenvalues and eigenvectors of $AA^T$ and $A^T A$: the eigenvectors 
of $A^T A$ make up the columns of $V$, and the eigenvectors of $AA^T$ make 
up the columns of $U$. Also, the singular values in $S$ are the square 
roots of the eigenvalues of $AA^T$ or $A^T A$. The singular values are the 
diagonal entries of the $S$ matrix and are arranged in descending 
order. The singular values are always real numbers, and if the matrix 
$A$ is a real matrix, then $U$ and $V$ are also real. 
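The relationship between the singular values of $A$ and the eigenvalues of $A^T A$ can be checked numerically in a few lines; the sketch below again uses Eigen purely for illustration.

#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A(3, 2);
    A << 3, 1,
         1, 3,
         1, 1;

    // Singular values of A, returned in descending order.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
    std::cout << "singular values of A:\n" << svd.singularValues() << "\n";

    // Eigenvalues of A^T A (ascending order); their square roots reproduce
    // the singular values, up to ordering.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(A.transpose() * A);
    std::cout << "sqrt of eigenvalues of A^T A:\n"
              << eig.eigenvalues().cwiseSqrt() << "\n";
}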
SVD factorizations are also commonly used to find the best linear least-squares 
solution to some data and to compress data: a digital image 
is transformed by singular value decomposition (SVD) into a singular value 
matrix whose significant information is carried by a small number of non-zero 
singular values, and by keeping only those the image is compressed. 
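The sketch below illustrates that idea with Eigen: a stand-in matrix plays the role of a grey-scale image, and only the $k$ largest singular values are kept, giving a rank-$k$ approximation that needs roughly $k(n + p + 1)$ numbers instead of $np$. (For a real image the singular values decay quickly, so the approximation error is much smaller than it is for the random data used here.)

#include <iostream>
#include <Eigen/Dense>

int main() {
    // Stand-in for an image: a random 100 x 80 matrix of "pixel" values.
    const int n = 100, p = 80, k = 10;   // keep the k largest singular values
    Eigen::MatrixXd image = Eigen::MatrixXd::Random(n, p);

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(image,
        Eigen::ComputeThinU | Eigen::ComputeThinV);

    // Rank-k reconstruction: U_k * S_k * V_k^T.
    Eigen::MatrixXd approx =
        svd.matrixU().leftCols(k) *
        svd.singularValues().head(k).asDiagonal() *
        svd.matrixV().leftCols(k).transpose();

    // Storage drops from n*p values to k*(n + p + 1).
    std::cout << "relative error of rank-" << k << " approximation: "
              << (image - approx).norm() / image.norm() << "\n";
}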
3. MATRIX LIBRARY 
A matrix library is used for performing matrix algebra calculations 
in programs in an easy and efficient manner for engineering and 
scientific work. Such libraries often support most of the matrix algebra 
operations and enable programmers to use matrix objects just 
like other built-in data types in their programs. They support 
arithmetic operations, sub-matrix operations, inversion, various 
matrix decompositions, solution of simultaneous linear 
equations, eigenvalue and eigenvector problems, and much 
more. 
3.1 Meschach: a library for performing operations on matrices and vectors 
Meschach is a library of routines written in C for matrix 
computations. These include operations for basic numerical 
linear algebra; routines for matrix factorisations; solving 
systems of equations; solving least squares problems; 
computing eigenvalues, eigenvectors and singular 
values; sparse matrix computations including both direct and 
iterative methods. This package makes use of the features of 
the C programming language: data structures, dynamic 
memory allocation and deallocation, pointers, functions as 
parameters and objects. Meschach has a number of self- 
contained data structures for matrices, vectors and other 
mathematical objects. 
Meschach has the virtue of compiling under Linux and most 
other operating systems, and is openly available under 
copyright, provided the customary acknowledgment is 
observed and errors are reported. Meschach was designed to 
solve systems of dense or sparse linear equations, compute 
eigenvalues and eigenvectors, and solve least squares problems, 
among other things. It provides nearly 400 functions for 
doubles and complex numbers. Matrices can easily be sent to 
files or to the standard output. Meschach computes Fast Fourier 
Transforms, extracts columns and rows, and computes 
eigenvalues of symmetric matrices. You can fill a matrix with 
random integers and complex numbers. Meschach code is easily extensible; 
if you happen to be doing computational work in C that needs 
matrices, this is a highly useful library. 
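A minimal usage sketch in the spirit of the Meschach data structures described above follows. The include path, build flags and exact behaviour depend on the installation, so treat this as an assumption of typical usage rather than code taken from the Meschach distribution.

/* Minimal Meschach-style sketch: allocate, fill, multiply, print, free.
   Assumes the Meschach header is available as "matrix.h" on the include
   path; the code is written to compile as C or C++.                      */
#include "matrix.h"

int main(void)
{
    MAT *A = m_get(2, 2);          /* 2 x 2 matrix, zero-initialised */
    VEC *x = v_get(2);             /* length-2 vectors               */
    VEC *y = v_get(2);

    A->me[0][0] = 2.0; A->me[0][1] = 1.0;   /* fill entries directly */
    A->me[1][0] = 1.0; A->me[1][1] = 3.0;
    x->ve[0] = 1.0;    x->ve[1] = -1.0;

    mv_mlt(A, x, y);               /* y = A * x                        */
    m_output(A);                   /* print matrix and vector to stdout */
    v_output(y);

    m_free(A); v_free(x); v_free(y);
    return 0;
}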
3.2 The CwMtx library for matrix, vector and quaternion 
math 
CwMtx is a library written in C++ that provides the matrix and 
vector operations that are used extensively in engineering and 
science problems. A special feature of this library is the 
quaternion class which implements quaternion math. 
Quaternions are very useful for attitude determination in 3D 
space because they do not suffer from singularities. 
Furthermore, successive rotations and transformations of 
vectors can be accomplished by simple quaternion 
multiplication. Attitude dynamics can be expressed in a very 
compact form using quaternions. 
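To make the quaternion point concrete, here is a small self-contained C++ sketch of the Hamilton product and of rotating a vector as $q\,v\,q^*$; it is illustrative only and does not reproduce the actual CwMtx quaternion class interface.

#include <cmath>
#include <cstdio>

// A minimal unit-quaternion type: q = w + xi + yj + zk.
struct Quat { double w, x, y, z; };

// Hamilton product: composing two rotations is just one multiplication.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

Quat conj(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Rotate vector v by unit quaternion q:  (0, v') = q (0, v) q*.
void rotate(const Quat& q, const double v[3], double out[3]) {
    Quat p = { 0.0, v[0], v[1], v[2] };
    Quat r = mul(mul(q, p), conj(q));
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}

int main() {
    // 90-degree rotation about the z axis: q = (cos 45, 0, 0, sin 45).
    double h = std::sqrt(0.5);
    Quat q = { h, 0.0, 0.0, h };
    double v[3] = { 1.0, 0.0, 0.0 }, out[3];
    rotate(q, v, out);                       // expect roughly (0, 1, 0)
    std::printf("(%.3f, %.3f, %.3f)\n", out[0], out[1], out[2]);
}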
3.3 Blitz 
Blitz++ is a C++ class library for scientific computing 
which provides performance on par with Fortran 77/90. It uses 
template techniques to achieve high performance. The current 
versions provide dense arrays and vectors, random number 
generators, and small vectors and matrices. Blitz++ is 
distributed freely under an open source license, the GNU GPL, 
with which you can freely create objects, and contributions to 
the library are welcomed. 
It supports the KAI, Intel, gcc, Metrowerks, and Cray 3.0.0.0 
C++ compilers and provides an n-dimensional array class that 
can contain integral, floating-point, complex, and well-behaved user-defined 
types. Its constructor is more complex than the CwMtx 
constructor. 
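A short sketch of the n-dimensional array class follows, assuming the Blitz++ headers are installed; firstIndex and secondIndex are the index placeholder objects the library provides for writing whole-array expressions.

#include <iostream>
#include <blitz/array.h>

int main() {
    blitz::Array<double, 2> A(3, 3);   // dense 3 x 3 array of doubles

    blitz::firstIndex  i;              // index placeholders used in
    blitz::secondIndex j;              // array-valued expressions
    A = 1.0 / (i + j + 1);             // fill A with a Hilbert-like pattern

    blitz::Array<double, 2> B(3, 3);
    B = 2.0 * A;                       // whole-array expression, no loops

    std::cout << "A = " << A << std::endl;
    std::cout << "sum of B = " << blitz::sum(B) << std::endl;
}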
4. DIRECT SOLVERS FOR LARGE SPARSE MATRICES 
Techniques for solving sparse linear systems include direct 
factorization and iterative methods. UMFPACK, SuperLU, Intel 
PARDISO and GSS are all well-known direct solvers for sparse 
matrices.
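As an illustration of the direct approach, the sketch below follows the well-known UMFPACK quick-start example: a 5 x 5 sparse matrix is stored in compressed sparse column form (Ap, Ai, Ax), factorised once, and then used to solve A x = b; the same factorisation can be reused for many right-hand sides.

#include <stdio.h>
#include "umfpack.h"

/* 5 x 5 sparse matrix from the UMFPACK quick-start example,
   stored in compressed sparse column (CSC) form.             */
int    n    = 5;
int    Ap[] = { 0, 2, 5, 9, 10, 12 };
int    Ai[] = { 0, 1, 0, 2, 4, 1, 2, 3, 4, 2, 1, 4 };
double Ax[] = { 2., 3., 3., -1., 4., 4., -3., 1., 2., 2., 6., 1. };
double b[]  = { 8., 45., -3., 3., 19. };
double x[5];

int main(void)
{
    void *Symbolic, *Numeric;

    /* Symbolic analysis (fill-reducing ordering), then numeric LU factorisation. */
    umfpack_di_symbolic(n, n, Ap, Ai, Ax, &Symbolic, NULL, NULL);
    umfpack_di_numeric(Ap, Ai, Ax, Symbolic, &Numeric, NULL, NULL);
    umfpack_di_free_symbolic(&Symbolic);

    /* Solve A x = b using the stored factors. */
    umfpack_di_solve(UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, NULL, NULL);
    umfpack_di_free_numeric(&Numeric);

    for (int i = 0; i < n; i++)
        printf("x[%d] = %g\n", i, x[i]);   /* expected solution: 1 2 3 4 5 */
    return 0;
}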
	        