Docs:  FAQ

General

  • Where should I send PETSc bug reports and questions?
  • How can I subscribe to the PETSc users mailing list?
  • Why is PETSc programmed in C, instead of Fortran or C++?
  • Does all the PETSc error checking and logging reduce PETSc's efficiency?
  • How do such a small group of people manage to write and maintain such a large and marvelous package as PETSc?
  • What happened to the very cool "domain" directory that was in previous versions of PETSc and allowed me to easily set up and solve elliptic PDEs on all kinds of grids? I can't find it in PETSc.
  • How do I collect all the values from a parallel PETSc vector into a sequential vector on each processor?
  • How do I print out all the PETSc manual pages to put into a binder?

    Installation

  • How do I begin using PETSc if the software has already been completely built and installed by someone else?
  • The PETSc distribution is SO large. How can I reduce my disk space usage?
  • I want to use PETSc only for uniprocessor programs. Must I still install and use a version of MPI?
  • Can I install PETSc to not use X windows (either under Unix or Windows with gcc, the gnu compiler)?
  • Why do you use MPI?
  • How do I install PETSc using BlockSolve, and use it in my code?
    Usage

  • How do I use PETSc for Domain Decomposition?

    Execution

  • PETSc executables are SO big and take SO long to link.
  • PETSc has so many options for my program that it is hard to keep them straight.
  • PETSc automatically handles many of the details in parallel PDE solvers. How can I understand what is really happening within my program?
  • Assembling large sparse matrices takes a long time. What can I do to make this process faster?
  • How can I generate performance summaries with PETSc?

    Debugging

  • How do I debug on the Cray T3D/T3E?
  • How do I debug if -start_in_debugger does not work on my machine?

    Shared Libraries

  • Can I install PETSc libraries as shared libraries?
  • Why should I use shared libraries?
  • How do I delete the shared libraries?
  • How do I link to the PETSc shared libraries?
  • When running my program, I encounter an error saying "petsc shared libraries not found".
  • What is the purpose of the DYLIBPATH variable in the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/packages?
  • What if I want to link to the regular .a library files?
  • What do I do if I want to move my executable to a different machine?
  • What is the deal with dynamic libraries (and how do they differ from shared libraries)?

    General

    Where should I send PETSc bug reports and questions?

    Send all maintenance requests to the PETSc developers via the email address petsc-maint@mcs.anl.gov. Also, see the file bugreporting.html.

    How can I subscribe to the PETSc users mailing list?

    You can join the PETSc users mailing list by sending email to majordomo@mcs.anl.gov with the message, "subscribe petsc-users". We will update users regarding new releases, changes, etc. through this mailing list.

    Why is PETSc programmed in C, instead of Fortran or C++?

    C enables us to build data structures for storing sparse matrices, solver information, etc. in ways that Fortran simply does not allow. ANSI C is a complete standard that all modern C compilers support. The language is identical on all machines. C++ is still evolving and compilers on different machines are not identical. Using C function pointers to provide data encapsulation and polymorphism allows us to get many of the advantages of C++ without using such a large and more complicated language. It would be natural and reasonable to have coded PETSc in C++; we opted to use C instead.

    Does all the PETSc error checking and logging reduce PETSc's efficiency?

    Actually the impact is quite small. But if squeezing out the absolute fastest rate really concerns you, then edit the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/base.O and remove -DPETSC_DEBUG and -DPETSC_LOG. Then recompile the package. We do not recommend this unless you have a complete, well-tested running code that you do not plan to alter. Our measurements never indicate more than a 3 to 5% difference in performance with all error checking and profiling compiled out of PETSc.

    How do such a small group of people manage to write and maintain such a large and marvelous package as PETSc?

    a) We work very efficiently.

    1. We use Emacs for all editing; the etags feature makes navigating and changing our source code very easy.
    2. Our manual pages are generated automatically from formatted comments in the code, thus alleviating the need for creating and maintaining manual pages.
    3. We employ automatic nightly tests of PETSc on several different machine architectures. This process helps us to discover problems the day after we have introduced them rather than weeks or months later.

    b) We are very careful in our design (and are constantly revising our design) to make the package easy to use, write, and maintain.

    c) We are willing to do the grunt work of going through all the code regularly to make sure that all code conforms to our interface design. We will never keep in a bad design decision simply because changing it will require a lot of editing; we do a lot of editing.

    d) We constantly seek out and experiment with new design ideas; we retain the useful ones and discard the rest. All of these decisions are based on practicality.

    e) Function and variable names are chosen to be very consistent throughout the software. Even the rules about capitalization are designed to make it easy to figure out the name of a particular object or routine. Our memories are terrible, so careful consistent naming puts less stress on our limited human RAM.

    f) The PETSc directory tree is carefully designed to make it easy to move throughout the entire package.

    g) Our bug reporting system, based on email to petsc-maint@mcs.anl.gov, makes it very simple to keep track of what bugs have been found and fixed. In addition, the bug report system retains an archive of all reported problems and fixes, so it is easy to refind fixes to previously discovered problems.

    h) We contain the complexity of PETSc by using object-oriented programming techniques including data encapsulation (this is why your program cannot, for example, look directly at what is inside the object Mat) and polymorphism (you call MatMult() regardless of whether your matrix is dense, sparse, parallel or sequential; you don't call a different routine for each format).

    i) We try to provide the functionality requested by our users.

    j) We never sleep.

    What happened to the very cool "domain" directory that was in previous versions of PETSc and allowed me to easily set up and solve elliptic PDEs on all kinds of grids? I can't find it in PETSc.

    That code was all written only for sequential machines. We hope to redo it for parallel machines using PETSc someday. Domain is no longer available or supported.

    How do I collect all the values from a parallel PETSc vector into a sequential vector on each processor?

    You can do this by first creating a SEQ vector on each processor with as many entries as the global vector. Say mpivec is your parallel vector and seqvec a sequential vector where you want to store all the values from mpivec, but on a single node.
    int N;
    Vec seqvec;
    ierr = VecGetSize(mpivec,&N);CHKERRA(ierr);
    ierr = VecCreateSeq(PETSC_COMM_SELF,N,&seqvec);CHKERRA(ierr);
    /* or: ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,N,array,&seqvec);CHKERRA(ierr); */

    then create a vector scatter that gathers together the values from all processors into the large sequential vector on each processor.
    IS is;
    ierr = ISCreateStride(PETSC_COMM_SELF,N,0,1,&is);CHKERRA(ierr);
    VecScatter ctx;
    ierr = VecScatterCreate(mpivec,is,seqvec,is,&ctx);CHKERRA(ierr);

    Now to get the values into the seq vector from the parallel vector use
    ierr = VecScatterBegin(mpivec,seqvec,INSERT_VALUES,SCATTER_FORWARD,ctx);CHKERRA(ierr);
    ierr = VecScatterEnd(mpivec,seqvec,INSERT_VALUES,SCATTER_FORWARD,ctx);CHKERRA(ierr);

    To get the values from the seq vector into the parallel vector use
    ierr = VecScatterBegin(seqvec,mpivec,INSERT_VALUES,SCATTER_REVERSE,ctx);CHKERRA(ierr);
    ierr = VecScatterEnd(seqvec,mpivec,INSERT_VALUES,SCATTER_REVERSE,ctx);CHKERRA(ierr);

    How do I print out all of the PETSc manual pages to put into a binder?

    Obtain the software tool html2ps and write a script that runs through all the manualpages and prints them
    to a postscript printer. Something like (for Unix csh/tcsh)

    foreach i (~/petsc/docs/manualpages/*/*.html)
    html2ps $i | lpr -Plw3
    end

     


    Installation

    How do I begin using PETSc if the software has already been completely built and installed by someone else?

    Assuming that the PETSc libraries have been successfully built for a particular architecture and level of optimization, a new user must merely:

    a) Set the environmental variable PETSC_DIR to the full path of the PETSc home directory (for example, /home/username/petsc).

    b) Set the environmental variable PETSC_ARCH, which indicates the architecture on which PETSc will be used. For example, use "setenv PETSC_ARCH sun4". More generally, the command "setenv PETSC_ARCH `$PETSC_DIR/bin/petscarch`" can be placed in a .cshrc file if using the csh or tcsh shell. Thus, even if several machines of different types share the same filesystem, PETSC_ARCH will be set correctly when logging into any of them.

    c) Begin by copying one of the many PETSc examples (in, for example, petsc/src/sles/examples/tutorials) and its corresponding makefile.

    d) See the introductory section of the PETSc users manual for tips on documentation.

    The PETSc distribution is SO large. How can I reduce my disk space usage?

    a) The directory ${PETSC_DIR}/docs contains a set of HTML manual pages for use with a browser. You can delete these pages to save about 0.8 Mbytes of space.

    b) The PETSc users manual is provided in PostScript and HTML formats in ${PETSC_DIR}/docs/manual.ps and ${PETSC_DIR}/docs/manual.html, respectively. Each requires several hundred kilobytes of space. You can delete either version that you do not need.

    c) The PETSc test suite contains sample output for many of the examples. These are contained in the PETSc directories ${PETSC_DIR}/src/*/examples/tutorials/output and ${PETSC_DIR}/src/*/examples/tests/output. Once you have run the test examples, you may remove all of these directories to save about 300 Kbytes of disk space.

    d) The debugging versions (BOPT=g) of the libraries are larger than the optimized versions (BOPT=O). In a pinch you can work with BOPT=O, although we do not recommend it generally because finding bugs is much easier with the BOPT=g version.

    e) You can delete the directories bin/demos and bin/bitmaps.

    I want to use PETSc only for uniprocessor programs. Must I still install and use a version of MPI?

    For those using PETSc as a sequential library, the software can be compiled and run without using an implementation of MPI. To do this, edit the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/packages and change the lines that define the location of MPI to

    MPI_LIB = ${PETSC_DIR}/lib/lib${BOPT}/${PETSC_ARCH}/libmpiuni.a
    MPI_INCLUDE = -I${PETSC_DIR}/src/sys/src/mpiuni
    MPIRUN = ${PETSC_DIR}/src/sys/src/mpiuni/mpirun

    If you compile PETSc as such, you will be able to run PETSc ONLY on one processor. Also, you will be able to run the program directly, without using the mpirun command.

    Can I install PETSc to not use X windows (either under Unix or Windows with gcc, the gnu compiler)?

    Yes. Edit the file bmake/${PETSC_ARCH}/petscconf.h and remove the line
    #define HAVE_X11
    then edit bmake/${PETSC_ARCH}/packages and remove the lines starting with
    X11_

    Why do you use MPI?

    MPI is the message-passing standard. Because it is a standard, it will not change over time; thus, we do not have to change PETSc every time the provider of the message-passing system decides to make an interface change. MPI was carefully designed by experts from industry, academia, and government labs to provide the highest quality performance and capability. For example, the careful design of communicators in MPI allows the easy nesting of different libraries; no other message-passing system provides this support. All of the major parallel computer vendors were involved in the design of MPI and have committed to providing quality implementations. In addition, since MPI is a standard, several different groups have already provided complete free implementations. Thus, one does not have to rely on the technical skills of one particular group to provide the message-passing libraries. Today, MPI is the only practical, portable approach to writing efficient parallel numerical software.

    How do I install PETSc using BlockSolve, and use it in my code?

    First, you must install the BlockSolve package. Then edit the bmake/${PETSC_ARCH}/packages file, and specify the following variables with the correct paths:

    BLOCKSOLVE_INCLUDE = -I/home/petsc/software/BlockSolve95/include
    BLOCKSOLVE_LIB = -L/home/petsc/software/BlockSolve95/lib/libO/${PETSC_ARCH} -lBS95
    PETSC_HAVE_BLOCKSOLVE = -DPETSC_HAVE_BLOCKSOLVE

    Now to use BlockSolve, one can use the MatType MATMPIROWBS (with MatCreate()) or call MatCreateMPIRowbs(). The preconditioners that work with BlockSolve are PCILU and PCICC.

     


    Usage

     How do I use PETSc for Domain Decomposition?

    PETSc includes Additive Schwarz methods in the suite of preconditioners. These may be activated with the runtime option
    -pc_type asm
    Various other options may be set, including the degree of overlap
    -pc_asm_overlap <number>
    and the type of restriction/extension
    -pc_asm_type [basic,restrict,interpolate,none]
    You may see all the available ASM options by using
    -pc_type asm -help
    Also, see the procedural interfaces in the manual pages, with names PCASMxxx(),
    and check the index of the users manual for PCASMxxx().

    Note that Paulo Goldfeld contributed a preconditioner "nn", a version of the balancing Neumann-Neumann preconditioner; this may be activated via
    -pc_type nn
    The program petsc/src/contrib/oberman/laplacian_ql contains an example of its use.


    Execution

    PETSc executables are SO big and take SO long to link.

    We find this annoying as well. On most machines PETSc now uses shared libraries by default, so executables should be much smaller. Also, if you have room, compiling and linking PETSc on your machine's /tmp disk or similar local disk, rather than over the network will be much faster.

    PETSc has so many options for my program that it is hard to keep them straight.

    Running the PETSc program with the option -help will print many of the options. To see the options that have been specified within a program, employ -optionsleft, which prints all options used as well as any options that the user specified but that were not actually used by the program; this is helpful for detecting typos.

    PETSc automatically handles many of the details in parallel PDE solvers. How can I understand what is really happening within my program?

    You can use the option -log_info to get more details about the solution process. The option -log_summary provides details about the distribution of time spent in the various phases of the solution process. You can use ${PETSC_DIR}/bin/petscview, which is a Tk/Tcl utility that provides high-level visualization of the computations within a PETSc program. This tool illustrates the changing relationships among objects during program execution in the form of a dynamic icon tree.

    Assembling large sparse matrices takes a long time. What can I do to make this process faster?

    See the Performance chapter of the users manual for many tips on this.

    a) Preallocate enough space for the sparse matrix. For example, rather than calling MatCreateSeqAIJ(comm,n,n,0,PETSC_NULL,&mat); call MatCreateSeqAIJ(comm,n,n,rowmax,PETSC_NULL,&mat); where rowmax is the maximum number of nonzeros expected per row. Or if you know the number of nonzeros per row, you can pass this information in instead of the PETSC_NULL argument. See the  manual pages for each of the MatCreateXXX() routines.

    b) Insert blocks of values into the matrix, rather than individual components. 

    How can I generate performance summaries with PETSc?

    Firstly, to generate PETSc timing and flop logging, the compiler flag -DPETSC_LOG (which is the default) must be specified in the file petsc/bmake/${PETSC_ARCH}/base.${BOPT}. Then use these options at runtime: -log_summary -optionsleft. See the Performance chapter of the users manual for information on interpreting the summary data. If using the PETSc (non)linear solvers, one can also specify -snes_view or -sles_view for a printout of solver info. Only the highest-level PETSc object used needs to specify the view option.


    Debugging

    How do I debug on the Cray T3D/T3E?

    Use TotalView. First, link your program with the additional option -Xn, where n is the number of processors to use when debugging. Then run "totalview programname -a <your arguments>"; the -a is used to distinguish between TotalView's arguments and yours.

    How do I debug if -start_in_debugger does not work on my machine?

    For a uniprocessor job, ex1, with MPICH using ch_p4 as the underlying communication layer, the procedure is:

    - Create a file named dummy containing the text "local 0"

    - Start the debugger directly: gdb ex1

    - Run with a command such as: run -p4pg dummy

    With MPICH using shmem as the underlying communication layer, the procedure is:

    - Start the debugger directly: dbx ex1

    - Run with a command such as: run -np 3 (other PETSc options)


    Shared Libraries

    Can I install PETSc libraries as shared libraries?

    Yes. The PETSc installation process installs the regular libraries and builds the shared libraries from these regular libraries. The shared libraries are placed in the same location as the regular libraries.

    If you wish to rebuild/update the shared libraries, you can invoke the following command from any directory in the PETSc source:
        make BOPT=O shared

    Why should I use shared libraries?

    When you link to shared libraries, the function symbols from the shared libraries are not copied into the executable. This way the size of the executable is considerably smaller than when using regular libraries. This helps in a couple of ways:
        1) it saves disk space when more than one executable is created, and
        2) it improves the link time immensely, because the linker has to write a much smaller file (executable) to disk.

    How do I delete the shared libraries?

    You can delete the shared libraries by invoking the following command from any directory in the PETSc source:
         make BOPT=O deleteshared

    How do I link to the PETSc shared libraries?

    By default, the compiler should pick up the shared libraries instead of the regular ones. Nothing special should be done for this.

    When running my program, I encounter an error saying "petsc shared libraries not found".

    By default, PETSc adds the path to the shared libraries into the executable using options supported by the linker. This problem can occur if the linker flag does not work properly, or if the path to the shared libraries is different when running the executable (for example, if the executable is run on a different machine where the file system is mounted differently, so that the path to the shared libraries is different). One way to fix this problem is to add the new path to the DYLIBPATH variable in the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/packages. Another fix is to add this path to the LD_LIBRARY_PATH environment variable.

    What is the purpose of the DYLIBPATH variable in the file ${PETSC_DIR}/bmake/${PETSC_ARCH}/packages?

    This makefile variable is used to specify paths to any other shared libraries used by PETSc (or the application) that are NOT present in the default paths searched by the dynamic linker. These paths are added into the executable and are available to the dynamic linker at runtime. An example where this is useful is when the compiler is installed in a non-standard location and some of the compiler libraries are installed as shared libraries. Multiple paths can be specified in the C_DYLIBPATH variable as follows:
         C_DYLIBPATH  = ${CLINKER_SLFLAG}:path1 ${CLINKER_SLFLAG}:path2

    What if I want to link to the regular .a library files?

    The simplest way to do this is first to delete the PETSc shared libraries and then to rebuild your executable. Some compilers do provide a flag indicating that the linker should not look for shared libraries. For example, gcc has the flag -static to indicate that only static libraries should be used. But this may not work on all machines, since some of the usual system/compiler/other libraries are distributed only as shared libraries, and with the -static flag the linker cannot find them, so it will fail to create the executable.

    What do I do if I want to move my executable to a different machine?

    You would also need to have access to the shared libraries on the new machine. The alternative is to build the executable without shared libraries by first deleting the shared libraries and then creating the executable.

    What is the deal with dynamic libraries (and how do they differ from shared libraries)?

    PETSc libraries are installed as dynamic libraries when the flag PETSC_USE_DYNAMIC_LIBRARIES is defined in bmake/${PETSC_ARCH}/petscconf.h. The difference from shared libraries is in the way the libraries are used: the program loads the library with dlopen(), and functions are looked up with dlsym(). This moves the resolution of function names from link time to run time, i.e., to when dlopen()/dlsym() are called.

    When using dynamic libraries, PETSc libraries cannot be moved to a different location after they are built.