#include <iostream>
#include <complex>
#include <boost/numeric/mtl/mtl.hpp>

int main(int argc, char* argv[])
{
    using namespace mtl;
    using namespace mtl::matrix;

    const unsigned xd= 2, yd= 5, n= xd * yd;
    dense2D<double>      A(n, n);
    compressed2D<double> B(n, n);

    // Initialize the dense and the sparse matrix with the library's setup functions
    hessian_setup(A, 3.0);
    laplacian_setup(B, xd, yd);

    typedef std::complex<double> cdouble;
    dense_vector<cdouble> v(n), w(n);

    for (int i= 0; i < size(v); i++)
        v[i]= cdouble(i+1, n-i), w[i]= cdouble(i+n);

    v+= A * w;   // dense matrix-vector product, added to v
    w= B * v;    // sparse matrix-vector product

    std::cout << "v is " << v << "\n";
    std::cout << "w is " << w << "\n";

    return 0;
}
The example shows that sparse and dense matrices can be multiplied with vectors. For the sake of performance, the products are implemented with different algorithms. The multiplication of Morton-ordered matrices with vectors is supported but currently not efficient.
As with all products, the result of a matrix-vector multiplication can be assigned to a vector variable.
Warning: the vector argument and the result must be different! Expressions like v= A*v will throw an exception. More subtle aliasing, e.g., partial overlap of the two vectors, might not be detected and results in undefined mathematical behavior.
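If the result really has to overwrite the argument, one simple way around the restriction is to compute the product into an explicit temporary first and copy it back. The following is a minimal sketch of this idea; the temporary tmp is an illustration, not a library requirement:

#include <iostream>
#include <boost/numeric/mtl/mtl.hpp>

int main(int argc, char* argv[])
{
    using namespace mtl;
    using namespace mtl::matrix;

    const unsigned n= 10;
    dense2D<double> A(n, n);
    hessian_setup(A, 3.0);

    dense_vector<double> v(n, 1.0);

    // v= A * v would throw: compute into a vector that does not alias v
    dense_vector<double> tmp(n);
    tmp= A * v;
    v= tmp;      // copy the result back afterward

    std::cout << "v is " << v << "\n";
    return 0;
}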
Matrix-vector products (MVP) can be combined with other vector operations. The library now supports expressions like
r= b - A*x.
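For example, the residual of a linear system can be computed in a single expression. The following sketch reuses the Laplacian setup from above; the right-hand side and the start vector are chosen arbitrarily for illustration:

#include <iostream>
#include <boost/numeric/mtl/mtl.hpp>

int main(int argc, char* argv[])
{
    using namespace mtl;
    using namespace mtl::matrix;

    const unsigned xd= 2, yd= 5, n= xd * yd;
    compressed2D<double> A(n, n);
    laplacian_setup(A, xd, yd);

    dense_vector<double> x(n, 1.0), b(n, 4.0), r(n);

    // Residual of the linear system in one expression
    r= b - A * x;
    std::cout << "r is " << r << "\n";

    return 0;
}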
Scaling of the arguments is also supported, for the matrix as well as for the vector:
#include <iostream>
#include <boost/numeric/mtl/mtl.hpp>

int main(int argc, char* argv[])
{
    using namespace mtl;
    using namespace mtl::matrix;

    const unsigned xd= 2, yd= 5, n= xd * yd;
    dense2D<double> A(n, n);
    laplacian_setup(A, xd, yd);

    dense_vector<double> v(n), w(n, 7.0);

    // Scale A with 4 and multiply the scaled view with w
    v= 4 * A * w;
    std::cout << "v is " << v << "\n";

    // Scale w with 4 and multiply the scaled view with A
    v= A * (4 * w);
    std::cout << "v is " << v << "\n";

    // Scale both with 2 before multiplying
    v= 2 * A * (2 * w);
    std::cout << "v is " << v << "\n";

    // Scale v after the MVP
    v= A * w;
    v*= 4;
    std::cout << "v is " << v << "\n";

    return 0;
}
All three scaled expressions and the final variant compute the same result. The first two versions are equivalent in cost: in the first, every matrix element is scaled but used only once in the product; in the second, the fewer vector elements are scaled but each of them is accessed several times. In both cases nnz additional multiplications are performed, where nnz is the number of non-zeros in A. The third expression scales both arguments and therefore adds 2·nnz operations, which is clearly less efficient.
Under the assumption that n is smaller than nnz, clearly fewer operations are required when the matrix-vector product is performed without scaling and the result is scaled afterward. However, on most computers the MVP is limited by memory bandwidth, and the additional sweep over the result vector most likely costs more time than it saves. Given this strong bandwidth limitation, the scaling in the three expressions above will hardly be noticeable for large vectors: it is performed while waiting for data from memory anyway.
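This can be checked with a simple timing experiment. The sketch below is only an illustration: the grid dimensions and repetition count are arbitrary, timing with std::clock is a crude choice, and it assumes that the scaled view is also available for compressed2D, as it is for dense2D in the example above.

#include <iostream>
#include <ctime>
#include <boost/numeric/mtl/mtl.hpp>

int main(int argc, char* argv[])
{
    using namespace mtl;
    using namespace mtl::matrix;

    const unsigned xd= 1000, yd= 1000, n= xd * yd;
    compressed2D<double> A(n, n);
    laplacian_setup(A, xd, yd);

    dense_vector<double> v(n), w(n, 1.0);
    const int reps= 100;

    // Scaling fused into the product: no extra pass over v
    std::clock_t t0= std::clock();
    for (int i= 0; i < reps; i++)
        v= 4 * A * w;
    std::clock_t t1= std::clock();

    // Unscaled product followed by a separate scaling sweep over v
    for (int i= 0; i < reps; i++) {
        v= A * w;
        v*= 4;
    }
    std::clock_t t2= std::clock();

    std::cout << "fused scaling:    " << double(t1 - t0) / CLOCKS_PER_SEC << "s\n";
    std::cout << "separate scaling: " << double(t2 - t1) / CLOCKS_PER_SEC << "s\n";

    return 0;
}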
Matrix-Vector Expressions -- MTL 4 -- Peter Gottschling and Andrew Lumsdaine