For class I have to write my own linear equation solver for sparse matrices. I am free to use any data structure for the sparse matrices, and I have to implement several solvers, including conjugate gradient.

I was wondering if there is a well-known way to store sparse matrices such that multiplication with a vector is relatively fast.

Right now my sparse matrices are basically implemented as a wrapped std::map<std::pair<int, int>, double> which stores the data, if any. This turns matrix-vector multiplication from O(n^2) into O(n^2 log(n)), as I have to perform a lookup for each matrix element. I've looked into the Yale sparse matrix format, and it seems that retrieval of an element is also O(log(n)), so I'm not sure it would be much faster.
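To show where those lookups come from, here is a stripped-down sketch of that kind of wrapper and the naive multiply (the names are illustrative, not my exact code):

#include <map>
#include <utility>
#include <vector>

// Sketch of a std::map-backed sparse matrix: each stored entry maps
// (row, col) to its value; absent keys are implicit zeros.
struct MapSparseMatrix {
    int n = 0;                                   // matrix is n x n
    std::map<std::pair<int, int>, double> data;  // non-zero entries only

    // Element lookup is O(log nnz) because std::map is a balanced tree.
    double at(int i, int j) const {
        auto it = data.find({i, j});
        return it != data.end() ? it->second : 0.0;
    }
};

// Naive y = A*x: n*n lookups, each logarithmic, hence O(n^2 log(n)) overall.
std::vector<double> multiply_naive(const MapSparseMatrix& A,
                                   const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (int i = 0; i < A.n; ++i)
        for (int j = 0; j < A.n; ++j)
            y[i] += A.at(i, j) * x[j];
    return y;
}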

For reference, I have an 800x800 matrix populated with 5000 entries. It takes roughly 450 seconds to solve such a system with the conjugate gradient method.

Do you think it's possible to do it much quicker with another data structure?

thanks!


1 Answer

The most common choices are CSC or CSR storage. These are both efficient for matrix-vector multiplication. It's also very easy to code those multiplication routines, if you have to do it yourself.

That said, Yale storage also yields a very efficient matrix-vector multiply. If you are performing matrix element lookups, then you have misunderstood how to use the format. I suggest you study some of the standard sparse libraries to learn how matrix-vector multiplication is implemented.

Even with your current storage you can perform matrix-vector multiplication in time proportional to the number of non-zero entries. All sparse matrix-vector multiplication algorithms that I have ever seen boil down to the same steps. For example, consider y = Ax; a sketch of these steps for your std::map storage follows the list.

  1. Zeroise the result vector, y.
  2. Initialise an iterator for the non-zero elements of the matrix, A.
  3. Get the next non-zero element of the matrix, say A[i,j]. Note that i,j need not follow any regular pattern; it simply reflects the order in which the non-zero elements of A are stored.
  4. y[i] += A[i,j]*x[j]
  5. If there are more elements of A, goto 3.
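
As a concrete illustration, those steps applied to your std::map storage amount to a single pass over the map's entries; a minimal sketch, assuming the map type from your question and an x of matching size:

#include <map>
#include <utility>
#include <vector>

// y = A*x by iterating only over the stored non-zero entries of A.
// No element lookups are performed during the multiply.
std::vector<double> multiply(const std::map<std::pair<int, int>, double>& A,
                             const std::vector<double>& x) {
    std::vector<double> y(x.size(), 0.0);  // step 1: zeroise the result
    for (const auto& entry : A) {          // steps 2-3: walk the non-zeros
        const int i = entry.first.first;   // row index of this entry
        const int j = entry.first.second;  // column index of this entry
        y[i] += entry.second * x[j];       // step 4: accumulate
    }
    return y;
}

With only 5000 stored entries, each matrix-vector product is 5000 multiply-adds instead of 800*800 tree lookups.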

I suspect you are writing the classic double for loop dense multiplication code:

for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        y[i] += A[i][j] * x[j];

and that's what is leading you to perform lookups.

But I'm not suggesting that you stick with your std::map storage. That's not going to be super efficient. I'd recommend CSC mainly because it is the most widely used.
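
If you do switch, the CSC multiply really is only a few lines. A minimal sketch of CSC storage and y = A*x, with illustrative field names:

#include <vector>

// Minimal CSC (compressed sparse column) storage: the non-zeros of
// column j sit at positions col_ptr[j] .. col_ptr[j+1]-1 of
// row_idx/values, so col_ptr has n_cols + 1 entries.
struct CscMatrix {
    int n_rows = 0, n_cols = 0;
    std::vector<int> col_ptr;    // size n_cols + 1
    std::vector<int> row_idx;    // size nnz
    std::vector<double> values;  // size nnz
};

// y = A*x: one pass over the non-zeros, no element lookups anywhere.
std::vector<double> multiply(const CscMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.n_rows, 0.0);
    for (int j = 0; j < A.n_cols; ++j)
        for (int k = A.col_ptr[j]; k < A.col_ptr[j + 1]; ++k)
            y[A.row_idx[k]] += A.values[k] * x[j];
    return y;
}

CSR is the mirror image (row_ptr and col_idx, accumulating along rows), and for y = A*x it lets you finish each y[i] in one contiguous pass; either format will be far faster than per-element lookups.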

