int MatCreateMPIAIJ(MPI_Comm comm,int m,int n,int M,int N,int d_nz,int *d_nnz,int o_nz,int *o_nnz,Mat *A)

Collective on MPI_Comm

Input Parameters:
comm  - MPI communicator
m     - number of local rows (or PETSC_DECIDE to have it calculated if M is given). This value should be the same as the local size used in creating the y vector for the matrix-vector product y = Ax.
n     - number of local columns (or PETSC_DECIDE to have it calculated if N is given). This value should be the same as the local size used in creating the x vector for the matrix-vector product y = Ax. For square matrices n is almost always m.
M     - number of global rows (or PETSC_DETERMINE to have it calculated if m is given)
N     - number of global columns (or PETSC_DETERMINE to have it calculated if n is given)
d_nz  - number of nonzeros per row in the DIAGONAL portion of the local submatrix (the same value is used for all local rows)
d_nnz - array containing the number of nonzeros in the various rows of the DIAGONAL portion of the local submatrix (possibly different for each row), or PETSC_NULL if d_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e., 'm'. You must leave room for the diagonal entry even if it is zero.
o_nz  - number of nonzeros per row in the OFF-DIAGONAL portion of the local submatrix (the same value is used for all local rows)
o_nnz - array containing the number of nonzeros in the various rows of the OFF-DIAGONAL portion of the local submatrix (possibly different for each row), or PETSC_NULL if o_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e., 'm'.

Output Parameter:
A     - the matrix
If PETSC_DECIDE or PETSC_DETERMINE is used for a particular argument on one processor, then it must be used on all processors that share the object for that argument.
The AIJ format (also called the Yale sparse matrix format or compressed row storage) is fully compatible with standard Fortran 77 storage. That is, the stored row and column indices can begin at either one (as in Fortran) or zero. See the users manual for details.
The user MUST specify either the local or global matrix dimensions (possibly both).
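For example, a minimal creation sketch; the global sizes M = N = 100 and the preallocation guesses of 5 diagonal and 2 off-diagonal nonzeros per row are illustrative assumptions only, and error checking is abbreviated:

   Mat A;
   int ierr, M = 100, N = 100;

   /* let PETSc choose the local sizes; the preallocation values are rough guesses */
   ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,M,N,
                          5,PETSC_NULL,2,PETSC_NULL,&A); CHKERRQ(ierr);
   /* ... insert entries with MatSetValues(), then assemble ... */
   ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
   ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);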
The parallel matrix is partitioned such that the first m0 rows belong to process 0, the next m1 rows belong to process 1, the next m2 rows belong to process 2, and so on, where m0, m1, m2, ... are the values of the input parameter 'm' on each process.
The DIAGONAL portion of the local submatrix of a processor can be defined as the submatrix which is obtained by extracting the part corresponding to the rows r1-r2 and columns r1-r2 of the global matrix, where r1 is the first row that belongs to the processor and r2 is the last row belonging to this processor. This is a square mxm matrix. The remaining portion of the local submatrix (m rows by the remaining N-n columns) constitutes the OFF-DIAGONAL portion.
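As a small illustrative sketch (assuming a matrix A already created with this routine), the row ownership range that defines r1 and r2 can be queried with MatGetOwnershipRange(); note that the routine returns one past the last local row:

   int ierr, rstart, rend;

   ierr = MatGetOwnershipRange(A,&rstart,&rend); CHKERRQ(ierr);
   /* here r1 = rstart and r2 = rend-1: rows rstart..rend-1 (and the matching
      columns) form this process's DIAGONAL block; the remaining columns of
      those rows form the OFF-DIAGONAL block */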
If o_nnz and d_nnz are specified, then o_nz and d_nz are ignored.
By default, this format uses inodes (identical nodes) when possible. We search for consecutive rows with the same nonzero structure, thereby reusing matrix information to achieve increased efficiency.
Options Database Keys:
-mat_aij_no_inode            - Do not use inodes
-mat_aij_inode_limit <limit> - Sets inode limit (max limit=5)
-mat_aij_oneindex            - Internally use indexing starting at 1 rather than 0. Note that when calling MatSetValues(), the user still MUST index entries starting at 0!
Consider the following 8x8 matrix with 34 nonzero values, which is assembled across 3 processors. Let's assume that proc0 owns 3 rows, proc1 owns 3 rows, and proc2 owns 2 rows. This division can be shown as follows:
            1  2  0  |  0  3  0  |  0  4
   Proc0    0  5  6  |  7  0  0  |  8  0
            9  0 10  | 11  0  0  | 12  0
   -------------------------------------
           13  0 14  | 15 16 17  |  0  0
   Proc1    0 18  0  | 19 20 21  |  0  0
            0  0  0  | 22 23  0  | 24  0
   -------------------------------------
   Proc2   25 26 27  |  0  0 28  | 29  0
           30  0  0  | 31 32 33  |  0 34
This can be represented as a collection of submatrices:

   A  B  C
   D  E  F
   G  H  I
Where the submatrices A,B,C are owned by proc0, D,E,F are owned by proc1, G,H,I are owned by proc2.
The 'm' parameters for proc0,proc1,proc2 are 3,3,2 respectively. The 'n' parameters for proc0,proc1,proc2 are 3,3,2 respectively. The 'M','N' parameters are 8,8, and have the same values on all procs.
The DIAGONAL submatrices corresponding to proc0,proc1,proc2 are submatrices [A], [E], [I] respectively. The OFF-DIAGONAL submatrices corresponding to proc0,proc1,proc2 are [BC], [DF], [GH] respectively. Internally, each processor stores the DIAGONAL part and the OFF-DIAGONAL part as SeqAIJ matrices. For example, proc1 stores [E] as one SeqAIJ matrix and [DF] as another SeqAIJ matrix.
When the d_nz and o_nz parameters are specified, d_nz storage elements are allocated for every row of the local DIAGONAL submatrix, and o_nz storage locations are allocated for every row of the OFF-DIAGONAL submatrix. One way to choose d_nz and o_nz is to use the maximum number of nonzeros per row of the local DIAGONAL and OFF-DIAGONAL submatrices, respectively. In our case:
   proc0: d_nz = 2, o_nz = 2
   proc1: d_nz = 3, o_nz = 2
   proc2: d_nz = 1, o_nz = 4

We are allocating m*(d_nz+o_nz) storage locations on every process. This translates to 3*(2+2)=12 for proc0, 3*(3+2)=15 for proc1, and 2*(1+4)=10 for proc2, i.e., we are using 12+15+10=37 storage locations to store 34 values.
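A hedged sketch of the corresponding call for this example (each process passes its own local sizes and its own maxima; error checking abbreviated):

   Mat A;
   int ierr, rank, m, d_nz, o_nz;

   MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
   if (rank == 0)      { m = 3; d_nz = 2; o_nz = 2; }
   else if (rank == 1) { m = 3; d_nz = 3; o_nz = 2; }
   else                { m = 2; d_nz = 1; o_nz = 4; }

   /* local column size n equals m here; the global size is 8x8 */
   ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,m,8,8,
                          d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A); CHKERRQ(ierr);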
When the d_nnz and o_nnz parameters are specified, the storage is specified for every row, corresponding to both the DIAGONAL and OFF-DIAGONAL submatrices. In our case:
   proc0: d_nnz = [2,2,2] and o_nnz = [2,2,2]
   proc1: d_nnz = [3,3,2] and o_nnz = [2,1,1]
   proc2: d_nnz = [1,1]   and o_nnz = [4,4]

Here the space allocated is the sum of all the above values, i.e., 34, and hence the preallocation is perfect.
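And a hedged sketch of the same call with exact per-row preallocation (the d_nz and o_nz arguments are ignored when the arrays are given, so 0 is passed for them):

   Mat A;
   int ierr, rank, m;
   int d_nnz[3], o_nnz[3];   /* at most 3 local rows in this example */

   MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
   if (rank == 0) {
     m = 3;
     d_nnz[0] = 2; d_nnz[1] = 2; d_nnz[2] = 2;
     o_nnz[0] = 2; o_nnz[1] = 2; o_nnz[2] = 2;
   } else if (rank == 1) {
     m = 3;
     d_nnz[0] = 3; d_nnz[1] = 3; d_nnz[2] = 2;
     o_nnz[0] = 2; o_nnz[1] = 1; o_nnz[2] = 1;
   } else {
     m = 2;
     d_nnz[0] = 1; d_nnz[1] = 1;
     o_nnz[0] = 4; o_nnz[1] = 4;
   }

   ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,m,8,8,
                          0,d_nnz,0,o_nnz,&A); CHKERRQ(ierr);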
Level: intermediate
Location: src/mat/impls/aij/mpi/mpiaij.c