int MatCreateMPIBDiag(MPI_Comm comm,int m,int M,int N,int nd,int bs,int *diag,PetscScalar **diagv,Mat *A)

Collective on MPI_Comm
comm  - MPI communicator
m     - number of local rows (or PETSC_DECIDE to have it calculated if M is given)
M     - number of global rows (or PETSC_DETERMINE to have it calculated if m is given)
N     - number of columns (local and global)
nd    - number of block diagonals (global) (optional)
bs    - each element of a diagonal is a bs x bs dense matrix
diag  - optional array of block diagonal numbers (length nd). For a matrix element A[i,j], where i=row and j=column, the diagonal number is diag = i/bs - j/bs (integer division)
diagv - pointer to the actual diagonals (in the same order as the diag array), if allocated by the user. Otherwise, set diagv=PETSC_NULL on input for PETSc to control memory allocation.
The parallel matrix is partitioned across the processors by rows, where each local rectangular matrix is stored in the uniprocessor block diagonal format. See the users manual for further details.
The user MUST specify either the local or global numbers of rows (possibly both).
The case bs=1 (conventional diagonal storage) is implemented as a special case.
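
For illustration, a minimal sketch of a call that lets PETSc determine the local row count and allocate the diagonal storage; the matrix size, block size, and diagonal numbers below are hypothetical, and error checking follows the usual ierr/CHKERRQ convention:

  #include "petscmat.h"

  int main(int argc,char **argv)
  {
    Mat A;
    int ierr,N = 100,nd = 3,bs = 1;
    int diag[3] = {-1,0,1};    /* the main diagonal and its two neighbors */

    ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);

    /* Let PETSc compute the local row count (PETSC_DECIDE) and
       allocate the diagonal storage (diagv = PETSC_NULL) */
    ierr = MatCreateMPIBDiag(PETSC_COMM_WORLD,PETSC_DECIDE,N,N,nd,bs,
                             diag,PETSC_NULL,&A);CHKERRQ(ierr);

    /* ... insert values with MatSetValues(), then assemble ... */
    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatDestroy(A);CHKERRQ(ierr);
    ierr = PetscFinalize();CHKERRQ(ierr);
    return 0;
  }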
Level: intermediate

Location: src/mat/impls/bdiag/mpi/mpibdiag.c