objref pc
pc = new ParallelContext()
pc = new ParallelContext(nhost)
When using the Bulletin board with Python, the methods submit , context , pack , and post have been augmented, and pyret and upkpyobj have been introduced, to allow a more Pythonic style. That is, the executable string for submit and context may be replaced by a Python callable that returns a Python object (retrieved with pyret), the args to submit, context, pack, and post may be Python objects, and a bulletin board message value which is a Python object may be retrieved with upkpyobj. At the end of the following hoc parallelization and discussion the same example is repeated as a Python parallelization. The only restriction is that any Python object arguments or return values must be pickleable (see http://docs.python.org/library/pickle.html; as of this writing, hoc objects are not pickleable).
The simplest form of parallelization of a loop, from the user's point of view, is
func f() { // a function with no context that *CHANGES*
	return $1*$1 // except its argument
}
objref pc
pc = new ParallelContext()
pc.runworker() // master returns immediately, workers in
	// infinite loop running any jobs from bulletin board
s = 0
if (pc.nhost == 1) { // use the serial form
	for i=1, 20 {
		s += f(i)
	}
}else{ // use the bulletin board form
	for i=1, 20 { // scatter processes
		pc.submit("f", i) // any context needed by f had better be
	} // the same on all hosts
	while (pc.working) { // gather results
		s += pc.retval // the return value for the executed function
	}
}
print s
pc.done // tell workers to quit
Several things need to be highlighted:
If a given task submits other tasks, only those child tasks will be gathered by the working loop for that given task. At this time the system groups tasks according to the parent task and the pc instance is not used. See submit for further discussion of this limitation. The safe strategy is always to use the idiom:
for i = 1,n { pc.submit(...) } // scatter a set of tasks
while (pc.working) { ... } // gather them all
Earlier submitted tasks tend to complete before later submitted tasks, even if they submit tasks themselves. I.e., a submitted task has the same general priority as the parent task, and the specific priority of tasks with the same parent is in submission order. A free cpu always works on the unexecuted task with the highest priority.
Each task manages a separate group of submissions whose results are returned only to that task. Therefore you can submit tasks which themselves submit tasks.
The pc.working call checks to see if a result is ready. If so it returns the unique system generated task id (a positive integer) and the return value of the task function is accessed via the pc.retval function. The arguments to the function executed by the submit call are also available. If all submissions have been computed and all results have been returned, pc.working returns 0. If results are pending, working executes tasks from ANY ParallelContext until a result is ready. This last feature keeps cpus busy but places stringent requirements on how the user changes global context without introducing bugs. See the discussion in working .
ParallelContext.working may not return results in the order of submission.
Hoc code subsequent to pc.runworker() is executed only by the master since that call returns immediately if the process is the master and otherwise starts an infinite loop on each worker which requests and executes submit tasks from ANY ParallelContext instance. This is the standard way to seed the bulletin board with submissions. Note that workers may also execute tasks that themselves cause submissions. If subsidiary tasks call pc.runworker, the call returns immediately. Otherwise the task it is working on would never complete! The pc.runworker() function is also called for each worker after all hoc files are read in and executed.
The basic organization of a simulation is:

// setup which is exactly the same on every machine,
// i.e. declaration of all functions, procedures, setup of neurons
pc.runworker() // start the execute loop if this machine is a worker
// the master scatters tasks onto the bulletin board and gathers results
pc.done()

Issues having to do with context can become quite complex. Context transfer from one machine to another should be as small as possible. Don't fall into the trap of a context transfer which takes longer than the computation itself. Remember, you can do thousands of c statements in the time it takes to transfer a few doubles. Also, with a single cpu, it is often the case that statements can be moved out of an innermost loop, but can't be in a parallel computation. E.g.

// pretend g is a Vector assigned earlier to conductances to test
for i = 1, 20 {
	forall gnabar_hh = g.x[i]
	for j = 1, 5 {
		stim.amp = s[j]
		run()
	}
}

i.e. we only need to set gnabar_hh 20 times. But the first pass at parallelization would look like:

for i = 1, 20 {
	for j = 1, 5 {
		sprint(cmd, "{forall gnabar_hh = g[%d]} stim.amp = s[%d] run()\n", i, j)
		pc.submit(cmd)
	}
}
while (pc.working) { }

and not only do we take the hit of repeated evaluation of gnabar_hh but the statement must be interpreted each time. A run must be quite lengthy to amortize this overhead.
Python version
Here we re-implement the first example above as a Python program.
Note the replacement of the string "f" in the submit method by a Python callable object and the retrieval of the result by the pyret() method instead of retval().

from neuron import h

def f(arg): # a function with no context that *CHANGES*
    return arg*arg # except its argument

pc = h.ParallelContext()
pc.runworker() # master returns immediately, workers in
               # infinite loop

s = 0
if pc.nhost() == 1: # use the serial form
    for i in range(1, 21):
        s += f(i)
else: # use the bulletin board form
    for i in range(1, 21): # scatter processes
        pc.submit(f, i) # any context needed by f had better be the same on all hosts
    while pc.working(): # gather results
        s += pc.pyret() # the return value for the executed function

print(s)
pc.done() # wait for workers to finish printing
The PVM (parallel virtual machine) should be set up so that it allows execution on all hosts of the csh script $NEURONHOME/bin/bbsworker.sh . (Simulations may also be run under MPI but the launch mechanisms are quite different.) The simulation hoc files should be available on each machine with the same relative path with respect to the user's $HOME directory. For example, I start my 3 machine pvm with the command

pvm hineshf

where hineshf is a text file with the contents:

hines ep=$HOME/nrn/bin
spine ep=$HOME/nrn/bin
pinky ep=$HOME/nrn/bin

Again, the purpose of the ep=$HOME/nrn/bin tokens is to specify the path to find bbsworker.sh
A simulation is started by moving to the proper working directory (should be a descendant of your $HOME directory) and launching neuron as in

special init.hoc

The exact same hoc files should exist in the same relative locations on all host machines.
ParallelContext
n = pc.nhost()
if (pc.nhost == 1) {
	for i=1, 20 {
		print i, sin(i)
	}
}else{
	for i=1,20 {
		pc.submit(i, "sin", i)
	}
	while (pc.working) {
		print pc.userid, pc.retval
	}
}
ParallelContext
myid = pc.id()
ParallelContext
pc.submit("statement\n")
pc.submit("function_name", arg1, ...)
pc.submit(object, "function_name", arg1, ...)
pc.submit(userid, ..as above..)
pc.submit(python_callable, arg1, ...)
If the first argument to submit is a non-negative integer, then args are not saved and, when the id for this task is returned by working , that non-negative integer can be retrieved with userid .
If there is no explicit userid, then the args (after the function name) are saved locally and can be unpacked when the corresponding working call returns. A local userid (unique only for this ParallelContext) is generated and returned by the submit call and is also retrieved with userid when the corresponding working call returns. This is very useful in associating a particular parameter vector with its return value and avoids the necessity of explicitly saving them or posting them. If they are not needed and you do not wish to pay the overhead of storage, supply an explicit userid. Unpacking args must be done in the same order and have the same type as the args of the "function_name". They do not have to be unpacked. Saving args is time efficient since it does not imply extra communication with the server.
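For instance, a minimal Python sketch of this bookkeeping (the task function f and the params dict are invented for illustration): the userid returned by submit is used to associate each result with the parameter that produced it.

from neuron import h

def f(x): # hypothetical task function
    return x * x

pc = h.ParallelContext()
pc.runworker()
params = {}
for i in range(1, 6):
    userid = int(pc.submit(f, i)) # no explicit userid, so one is generated and returned
    params[userid] = i            # remember which parameter went with this task
while pc.working():
    userid = int(pc.userid())     # userid of the task whose result just arrived
    print(params[userid], pc.pyret())
pc.done()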
The argument form causes function_name(copyofarg1, ...) to execute on some indeterminate host in the PVM. Args must be scalars, strings, or Vectors. Note that they are *COPIES* so that even string and Vector arguments are call by value and not call by reference. (This is different from the normal semantics of a direct function call). In this case efficiency was chosen at the expense of pedantic consistency since it is expected that in most cases the user does not need the return copy. In the event more than a single scalar return value is required use post within the function_name body with a key equal to the id of the task. For example:
func function_name() { local id
	id = hoc_ac_
	$o1.reverse()
	pc.post(id, $o1)
	return 0
}
...
while ((id = pc.working) != 0) {
	pc.take(id)
	pc.upkvec.printf
}

The object form executes function_name(copyofarg1, ...) in the context of the object. IT MUST BE THE CASE that the string result of

print object

identifies the "same" object on the host executing the function as on the host that submitted the task. This is guaranteed only if all hosts, when they start up, execute the same code that creates these objects. If you start creating these objects after the worker code diverges from the master (after pc.runworker) you really have to know what you are doing and the bugs will be VERY subtle.
The python_callable form allows args to be any Python objects as well as numbers, strings, or hoc Vectors. The return is a Python object and can only be retrieved with pyret . The Python objects must be pickleable (hoc objects are not presently pickleable). Python object arguments may be retrieved with upkpyobj .
A task should gather the results of all the tasks it submits before scattering other tasks even if scattering with different ParallelContext instances. This is because results are grouped by parent task id's instead of (parent task id, pc instance). Thus the following idiom needs extra user defined info to distinguish between pc1 and pc2 task results.
for i=1,10 pc1.submit(...)
for i=1,10 pc2.submit(...)
for i=1,10 { pc1.working() ... }
for i=1,10 { pc2.working() ... }

since pc1.working may get a result from a pc2 submission. If this behavior is at all inconvenient, I will change the semantics so that pc1 results only are gathered by pc1.working calls and by no others.
Searching for the proper object context (pc.submit(object, ...) on the host executing the submitted task is linear in the number of objects of that type.
ParallelContext
id = pc.working()
While there are pending submissions and results are not ready, pending submissions from any ParallelContext from any host are calculated. Note that returns of completed submissions are not necessarily in the order that they were made by pc.submit.
while ((id = pc.working) > 0) {
	// gather results of previous pc.submit calls
	print id, pc.retval
}

Note that if the submitted task was specified as a Python callable, then pyret would have to be used in place of retval .
Note that if the submission did not have an explicit userid then all the arguments of the executed function may be unpacked.
It is essential to emphasize that when a task calls pc.working, while it is waiting for a result, it may execute any number of other tasks and unless care is taken to understand the meaning of "task context" and guarantee that context after the working call is the same as the context before the working call, SUBTLE ERRORS WILL HAPPEN more or less frequently and indeterminately. For example consider the following:
function f() {
	... write some values to some global variables ...
	pc.submit("g", ...)
	// when g is executed on another host it will not in general
	// see the same global variable values you set above.
	pc.working() // get back result of execution of g(...)
	// now the global variables may be different than what you
	// set above. And not because g changes them but perhaps
	// because the host executing this task started executing
	// another task that called f which then wrote DIFFERENT values
	// to these global variables.
}

I only know one way around this problem. Perhaps there are other and better ways.
function f() { local id
	id = hoc_ac_
	... write some values to some global variables ...
	pc.post(id, the, global, variables)
	pc.submit("g", ...)
	pc.working()
	pc.take(id)
	// unpack the info back into the global variables
	...
}
ParallelContext
scalar = pc.retval()
ParallelContext
python_object = pc.pyret()
ParallelContext
scalar = pc.userid()
See submit with regard to retrieving the original arguments of the submit call corresponding to the working return.
Can be useful in organizing results according to an index defined during submission.
ParallelContext
pc.runworker()
The basic style is that the master and each host execute the same code up to the pc.runworker call, and that code sets up all the context that is required to be identical on all hosts so that any host can run any task whenever the host requests something to do. The latter takes place in the runworker loop and when a task is waiting for a result in a working call. Many parallel processing bugs are due to inconsistent context among hosts and those bugs can be VERY subtle. Tasks should not change the context required by other tasks without extreme caution. The only way I know how to do this safely is to store and retrieve a copy of the authoritative context on the bulletin board. See working for further discussion in this regard.
The runworker method is called automatically for each worker after all files have been read in and executed, i.e. if the user never calls it explicitly from hoc. Otherwise the workers would exit, since the standard input is at the end of file for workers. This is useful in those cases where the only distinction between master and workers is the code executed from the gui or console.
ParallelContext
pc.done()
ParallelContext
pc.context("statement\n")
pc.context("function_name", arg1, ...])
pc.context(object, "function_name", arg1, ...)
pc.context(userid, ..as above..)
pc.context(python_callable, arg1, ...)
There is no return in the sense that working does not return when one of these tasks completes.
This method was introduced with the following protocol in mind:
proc save_context() { // executed on master
	sprint(tstr, "%s", this)
	pc.look_take(tstr) // remove previous context if it exists
	// master packs a possibly complicated context from within
	// an object whose counterpart exists on all workers
	pc.post(tstr)
	pc.context(this, "restore_context", tstr) // all workers do this
}
proc restore_context() {
	pc.look($s1) // don't remove! Others need it as well.
	// worker unpacks possibly complicated context
}
This method was introduced in an attempt to get a parallel multiple run fitter which worked in an interactive gui setting. As such it increases safety but is not bulletproof since there is no guarantee that the user doesn't change a global variable that is not part of the fitter. It is also difficult to write safe code that invariably makes all the relevant worker context identical to the master. An example of a common bug is to remove a parameter from the parameter list and then call save_context(). Sure enough, the multiple run fitters on all the workers will no longer use that parameter, but the global variables that depend on the parameter may be different on different hosts and they will now stay different! One fix is to call save_context() before the removal of the parameter from the list and save_context() after its removal. But the inefficiency is upsetting. We need a better automatic mirroring method.
ParallelContext
pc.post(key)
pc.post(key, ...)
Later unpacking of the message body must be done in the same order as this posting sequence.
ParallelContext
pc.take(key)
pc.take(key, ...)
Scalars are unpacked into variables passed by reference, e.g. &x . Unpacked Vectors will be resized to the correct size of the vector item of the message.
To unpack Python objects, upkpyobj must be used.
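As a sketch of the post/take round trip from Python (the key name and values are illustrative); note that the unpack calls mirror the order of the posted items:

from neuron import h

pc = h.ParallelContext()
# post a message body of (scalar, Vector) under a string key
vec = h.Vector(3).indgen() # 0, 1, 2
pc.post("mykey", 42.0, vec)
# take blocks until a message with this key exists, removes it, and
# makes it the current unpack buffer
pc.take("mykey")
x = pc.upkscalar() # 42.0
w = pc.upkvec()    # a new Vector holding the posted contents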
ParallelContext
boolean = pc.look(key)
boolean = pc.look(key, ...)
ParallelContext
boolean = pc.look_take(key, ...)
Note that a look followed by a take is *NOT* equivalent to look_take. It can easily occur that another task might take the message between the look and take and the latter will then block until some other process posts a message with the same key.
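For example, a hedged Python sketch of atomically updating a shared value (the key "best" and update_best are invented for illustration; a real program would have the master seed the key before submitting tasks, to avoid a startup race):

from neuron import h

pc = h.ParallelContext()

def update_best(err):
    # look_take removes the message, so between here and the re-post no
    # other task can read or take "best"; a look followed by a take would race
    if pc.look_take("best"):
        best = min(pc.upkscalar(), err)
    else:
        best = err # nothing posted yet (first task to reach this point)
    pc.post("best", best)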
ParallelContext
pc.pack(...)
ParallelContext
pc.unpack(...)
Scalars may be unpacked into any variable reference, e.g. &soma.gnabar_hh(.3) .
To unpack Python objects, upkpyobj must be used.
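A short Python sketch of building a message with pack and retrieving it (key and values illustrative); the same ordering rule applies as for post :

from neuron import h

pc = h.ParallelContext()
# pack appends items to the send buffer; a subsequent post(key) sends
# the whole buffer as one message body
pc.pack(3.14)
pc.pack(h.Vector(4).fill(1))
pc.post("bundle") # key only: the body is the previously packed buffer

pc.take("bundle")
x = pc.upkscalar() # 3.14
v = pc.upkvec()    # the packed Vector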
ParallelContext
x = pc.upkscalar()
ParallelContext
str = pc.upkstr(str)
ParallelContext
vec = pc.upkvec()
vec = pc.upkvec(vecsrc)
ParallelContext
python_object = pc.upkpyobj()
ParallelContext
st = pc.time()
st = pc.time
...
print pc.time - st
ParallelContext
total = pc.wait_time()
To determine the time spent exchanging spikes during a simulation, use the idiom:
wait = pc.wait_time()
pc.psolve(tstop)
wait = pc.wait_time() - wait
ParallelContext
total = pc.step_time()
ParallelContext
total = pc.send_time()
ParallelContext
total = pc.event_time()
ParallelContext
total = pc.integ_time()
ParallelContext
transfer_exchange_time = pc.vtransfer_time()
splitcell_exchange_time = pc.vtransfer_time(1)
reducedtree_computation_time = pc.vtransfer_time(2)
reducedtree_computation_time refers to the extra time used by the multisplit backbone_style 1 and 2 methods between the send and receive of matrix information; splitcell_exchange_time includes this reducedtree_computation_time.
ParallelContext
pc.mech_time()
mechanism_time = pc.mech_time(i)
ParallelContext
With the following information you may be encouraged to provide a more efficient implementation. You may also see enough information here to decide that this implementation is about as good as can be expected in the context of your problem.
The master NEURON process contains the server for the bulletin board system. Communication between normal hoc code executing on the master NEURON process and the server is direct with no overhead except packing and unpacking messages and manipulating the send and receive buffers with pvm commands. The reason I put the server into the master process is twofold. 1) While the master is number crunching, client messages are still promptly dealt with. I noticed that when neuron was cpu bound, a separate server process did not respond to requests for about a tenth of a second. 2) No context switching between master process and server. If pvm is not running, a local implementation of the server is used which has even less overhead than pvm packing and unpacking.
Clients (worker processes) communicate with the bulletin board server (in the master machine) with pvm commands pvm_send and pvm_recv. The master process is notified of asynchronous events via the SIGPOLL signal. Unfortunately this is often early since a pvm message often consists of several of these asynchronous events and my experience so far is that (pvm_probe(-1,-1) > 0) is not always true even after the last of this burst of signals. Also SIGPOLL is not available except under UNIX. However SIGPOLL is only useful on the master process and should not affect performance with regard to whether a client is working under Win95, NT, or Linux. So even with SIGPOLL there must be software polling on the server and this takes place on the next execute() call in the interpreter. (an execute call takes place when the body of every for loop, if statement, or function/procedure call is executed.) In the absence of a SIGPOLL signal this software polling takes place every POLLDELAY=20 executions. Of course this is too seldom in the case of fadvance calls with a very large model, and too often in the case of for i=1,100000 x+=i. Things are generally ok if the message at the end of a run says that the amount of time spent waiting for something to do is small compared to the amount of time spent doing things. Perhaps a timer would help.
The bulletin board server consists of several lists implemented with the STL (Standard Template Library), which makes for reasonably fast lookup of keys; i.e. searching is proportional to the log of the list size rather than to the size itself.
Posts go into the message list ordered by key (string order). They stay there until taken with look_take or take. Submissions go into a work list ordered by id and a todo list of id's by priority. When a host requests something to do, the highest priority (first in the list) id is taken off the todo list. When done, the id goes onto a results list ordered by parent id. When working is called and a results list has an id with the right parent id, the id is removed from the results list and the (id, message) pair is removed from the work list.
If args are saved (no explicit userid in the submit call), they are stored locally and become the active buffer on the corresponding working return. The saving is in an STL map associated with userid. The data itself is not copied but neither is it released until the next usage of the receive buffer after the working call returns.
ParallelContext allgather alltoall broadcast allreduce barrier
Here is how I got it going on a 24 cpu beowulf cluster and a dual processor Mac OSX G5. The cluster consisted of 12 dual processor nodes named node0 to node11 and a master. From the outside world you could only login to the master using ssh and from there to any of the nodes you also had to use ssh. For a second opinion see Bill Lytton's notes on installing MPI.
1) Figure out how to login to a worker without typing a password. I.e. do not go on unless you can

ssh node1

or

rsh node1

If the former works then you must

export RSHCOMMAND=ssh

before building the MPICH version of MPI since that information is compiled into one of the files. It's too late to set it after MPICH has been built.
On the Beowulf cluster master I did:
ssh-keygen -t rsa
and just hit return three times (once to use the default file location
and twice to specify and confirm an empty password).
Then I did a
cd $HOME/.ssh
and copied the id_rsa.pub file to authorized_keys.
Now I could login to any node without using a password.
On the OSX machine I did the same thing but had to also check the SystemPreferences/Internet&Network Sharing/Services/RemoteLogin box.
2) install MPI
I use http://www-unix.mcs.anl.gov/mpi/mpich/downloads/mpich.tar.gz which on extraction ended up in $HOME/mpich-1.2.7. I built on osx with

export RSHCOMMAND=ssh
./configure --prefix=`pwd`/powerpc --with-device=ch_p4
make
make install

and the same way on the beowulf cluster but with i686 instead of powerpc. I then added $HOME/mpich-1.2.7/powerpc/bin to my PATH because the NEURON configuration process will need to find mpicc and mpicxx and we will eventually be using mpirun.
Note: some systems may have a different implementation of MPI already installed, and in that implementation the c++ compiler may be called mpic++. If that is in your path, then you will need to go to $HOME/mpich-1.2.7/powerpc/bin and

ln -s mpicxx mpic++

This will prevent NEURON's configure from becoming confused and deciding to use mpicc from one MPI version and mpic++ from another! I.e. configure looks first for mpic++ and only if it does not find it does it try mpicxx.
You can gain some confidence if you go to mpich-1.2.7/examples/basic and test with

make hello++
mpirun -np 2 hello++

If this fails on the mac, you may need a machine file with the proper name that is indicated at the end of the $HOME/.ssh/authorized_keys file. In my case, since ssh-keygen called my machine Michael-Hines-Computer-2.local I have to use

mpirun -machinefile $HOME/mpifile -np 2 hello++

where $HOME/mpifile has the single line

Michael-Hines-Computer-2.local
3) build NEURON using the --with-paranrn argument.
On the beowulf my neuron sources were in $HOME/neuron/nrn and interviews was installed in $HOME/neuron/iv and I decided to build in a separate object directory called $HOME/neuron/mpi-gcc2.96, so I created the latter directory, cd'd to it, and used

../nrn/configure --prefix=`pwd` --srcdir=../nrn --with-paranrn

On the mac, I created a $HOME/neuron/withmpi directory and configured with

../nrn/configure --prefix=`pwd` --srcdir=../nrn --with-paranrn \
	--enable-carbon --with-iv=/Applications/NEURON-5.8/iv
4) test by going to $HOME/neuron/nrn/src/parallel and trying

mpirun -np 2 ~/neuron/withmpi/i686/bin/nrniv -mpi test0.hoc

You should get an output similar to

nrnmpi_init(): numprocs=2 myid=0
NEURON -- Version 5.8 2005-8-22 19:58:19 Main (52)
by John W. Moore, Michael Hines, and Ted Carnevale
Duke and Yale University -- Copyright 1984-2005

loading membrane mechanisms from i686/.libs/libnrnmech.so
Additional mechanisms from files
hello from id 0 on NeuronDev
0 bbs_msg_cnt_=0 bbs_poll_cnt_=6667 bbs_poll_=93 0
hello from id 1 on NeuronDev
[hines@NeuronDev parallel]$
5) If your machine is a cluster, list the machine names in a file (on the beowulf cluster $HOME/mpi32 has the contents

node0
...
node11

) and I use the mpirun command

mpirun -machinefile $HOME/mpi32 -np 24 \
	/home/hines/neuron/mpi*6/i686/bin/nrniv -mpi test0.hoc

On my mac, for some bizarre reason known only to the tiger creators, the mpirun requires a machinefile with the line

Michael-Hines-Computer-2.local
MPI
waittime = pc.barrier()
MPI
result = pc.allreduce(value, type)
pc.allreduce(src_dest_vector, type)
If the first arg is a Vector the reduce is done element-wise, i.e. the min (for example) of each rank's v.x[0] is returned in each rank's v.x[0], etc. Note that each rank's vector must have the same size.
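A minimal Python sketch, run under MPI with several ranks (type values 1, 2, 3 are assumed to select sum, max, min, as in the NEURON documentation):

from neuron import h

pc = h.ParallelContext()
total = pc.allreduce(pc.id() + 1, 1) # sum of (rank+1) over all ranks

v = h.Vector(3).fill(pc.id())
pc.allreduce(v, 3) # element-wise min across ranks, in place
if pc.id() == 0:
    print(total, v.to_python())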
MPI
pc.allgather(value, result_vector)
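A brief usage sketch: each rank contributes one scalar and every rank receives the full set.

from neuron import h

pc = h.ParallelContext()
result = h.Vector() # resized to nhost by allgather
pc.allgather(pc.id() * 10, result)
# result.x[i] now holds the value contributed by rank i, on every rank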
MPI
pc.alltoall(vsrc, vcnts, vdest)
pc.alltoall(vcnts, one, vdest)
where one is a vector filled with 1.
// assume vsrc is a sorted Vector with elements ranging from 0 to tstop
// then the following is a parallel sort such that vdest is sorted on
// host i and for i < j, all the elements of vdest on host i are less
// than all the elements on host j.
vsrc.sort()
cnts = new Vector(pc.nhost)
tvl = tstop/pc.nhost // assumed: the width of each host's value interval
j = 0
for i=0, pc.nhost-1 {
	x = (i+1)*tvl
	k = 0
	while (j < vsrc.size()) {
		if (vsrc.x[j] < x) {
			j += 1
			k += 1
		}else{
			break
		}
	}
	cnts.x[i] = k
}
pc.alltoall(vsrc, cnts, vdest)
MPI
pc.broadcast(strdef, root)
pc.broadcast(vector, root)
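A usage sketch: only the root's contents matter; every other rank's Vector is overwritten.

from neuron import h

pc = h.ParallelContext()
v = h.Vector()
if pc.id() == 0:
    v.indgen(0, 9, 1) # only the root fills the Vector
pc.broadcast(v, 0)    # afterwards every rank's v holds rank 0's contents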
ParallelContext id_bbs nhost_bbs subworlds id_world nhost_world
The subworlds method divides the world of processors into subworlds, each of which can execute a task that independently and asynchronously creates and simulates (and destroys if the task networks are different) a separate network described using the ParallelNetwork and ParallelTransfer methods. The task, executing in the subworld, can also make use of the MPI collectives. Different subworlds can use the same global identifiers without interference and the spike communication, transfers, and MPI collectives are localized to within a subworld. I.e. in MPI terms, each subworld utilizes a distinct MPI communicator. In a subworld, the id and nhost refer to the rank and number of processors in the subworld. (Note that every subworld has an id == 0 rank processor.)
Only the rank id == 0 subworld processors communicate with the bulletin board. Of these processors, one ( id_world == 0) is the master processor and the others are the workers. The master submits tasks to the bulletin board (and executes a task if no results are available) and the workers execute tasks and post the results to the bulletin board. Remember, all the workers also have id == 0 but different id_world and id_bbs ranks. The subworld id ranks greater than 0 are not called workers --- their global rank is id_world but their bulletin board rank, id_bbs is -1. When a worker (or the master) receives a task to execute, the exact same function with arguments that define the task will be executed on all the processes of the subworld. A subworld is exactly analogous to the old world of a network simulation in which processes distinguish themselves by means of id which is unique among the nhost processes in the subworld.
A runtime error will result if an id_bbs == -1 rank processor tries to communicate with the bulletin board, thus the general idiom for a task posting or taking information from the bulletin board should be either

if (pc.id == 0) { ... }

or

if (pc.id_bbs != -1) { ... }

The latter is more general since the former would not be correct if subworlds has NOT been called, since in that case

pc.id == pc.id_world == pc.id_bbs

and

pc.nhost == pc.nhost_world == pc.nhost_bbs
SubWorld
pc.subworlds(subworld_size)
Each subworld has its own unique MPI communicator for the MPI functions such as barrier and so those collectives do not affect other subworlds. All the ParallelNetwork notions are local to a subworld. I.e. independent networks using the same gids can be simulated simultaneously in different subworlds. Only rank 0 of a subworld ( id == 0) can use the bulletin board and has a non-negative nhost_bbs and id_bbs .
Thus the bulletin board interacts with nhost_bbs processes each with id == 0. And each of those rank 0 processes interacts with nhost processes using MPI commands isolated within each subworld.
Probably the most useful values of subworld_size are 1 and nhost_world . The former uses the bulletin board to communicate between all processes but allows the use of gid specified networks within each process. ie. one master and nhost_world - 1 workers. The latter uses all processes to simulate a parallel network and there is only one process, the master, ( id_world == 0 ) interacting with the bulletin board.
objref pc
pc = new ParallelContext()
{pc.subworlds(3)}
func f() { local ret
	ret = pc.id_world*100 + pc.id_bbs*10 + pc.id
	printf( \
	"userid=%d arg=%d ret=%03d world %d of %d bbs %d of %d net %d of %d\n", \
	hoc_ac_, $1, ret, \
	pc.id_world, pc.nhost_world, pc.id_bbs, pc.nhost_bbs, pc.id, pc.nhost)
	system("sleep 1")
	return ret
}
hoc_ac_ = -1
if (pc.id_world == 0) { printf("before runworker\n") }
{f(1)}
{pc.runworker()}
{printf("\nafter runworker\n") f(2) }
{printf("\nbefore submit\n")}
for i=3, 6 {
	pc.submit("f", i)
}
{printf("after submit\n")}
while((userid = pc.working()) != 0) {
	arg = pc.upkscalar()
	printf("result userid=%d arg=%d return=%03d\n", \
		userid, arg, pc.retval)
}
{printf("\nafter working\n") f(7) }
{pc.done()}
quit()
If the above code is saved in temp.hoc and executed with 6 processes using
mpiexec -n 6 nrniv -mpi temp.hoc
then the output will look like
(some lines may be out of order)
$ mpiexec -n 6 nrniv -mpi temp.hoc
numprocs=6
NEURON -- VERSION 7.2 (454:bb5c4f755f59) 2010-07-30
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2008
See http://www.neuron.yale.edu/credits.html

before runworker
userid=-1 arg=1 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3
userid=-1 arg=1 ret=192 world 2 of 6 bbs -1 of -1 net 2 of 3
userid=-1 arg=1 ret=492 world 5 of 6 bbs -1 of -1 net 2 of 3
userid=-1 arg=1 ret=391 world 4 of 6 bbs -1 of -1 net 1 of 3
userid=-1 arg=1 ret=091 world 1 of 6 bbs -1 of -1 net 1 of 3
userid=-1 arg=1 ret=310 world 3 of 6 bbs 1 of 2 net 0 of 3

after runworker
userid=-1 arg=2 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3

before submit
after submit
userid=21 arg=4 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3
userid=20 arg=3 ret=310 world 3 of 6 bbs 1 of 2 net 0 of 3
userid=20 arg=3 ret=391 world 4 of 6 bbs -1 of -1 net 1 of 3
userid=21 arg=4 ret=091 world 1 of 6 bbs -1 of -1 net 1 of 3
userid=21 arg=4 ret=192 world 2 of 6 bbs -1 of -1 net 2 of 3
userid=20 arg=3 ret=492 world 5 of 6 bbs -1 of -1 net 2 of 3
result userid=21 arg=4 return=000
userid=22 arg=5 ret=091 world 1 of 6 bbs -1 of -1 net 1 of 3
userid=22 arg=5 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3
userid=22 arg=5 ret=192 world 2 of 6 bbs -1 of -1 net 2 of 3
result userid=22 arg=5 return=000
userid=23 arg=6 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3
userid=23 arg=6 ret=192 world 2 of 6 bbs -1 of -1 net 2 of 3
userid=23 arg=6 ret=091 world 1 of 6 bbs -1 of -1 net 1 of 3
result userid=23 arg=6 return=000
result userid=20 arg=3 return=310

after working
userid=0 arg=7 ret=000 world 0 of 6 bbs 0 of 2 net 0 of 3
$

One can see from the output that before the runworker call, all the processes called f. After runworker, only the master returned, so there is only one call to f. All tasks were submitted to the bulletin board before any task generated print output. In this case, during the while loop, the master started on the task with arg=4 and the two associates within that subworld also executed f(4). Only the master returned the result of f(4) to the bulletin board (the return values of the two subworld associates were discarded). The master and its network associates also executed f(5) and f(6). f(3) was executed by the world rank 3 process (bbs rank 1, net rank 0) and that subworld's two net associates.
SubWorld
numprocs = pc.nhost_world()
SubWorld
rank = pc.id_world()
SubWorld
numprocs = pc.nhost_bbs()
SubWorld
rank = pc.id_bbs()
ParallelContext cell gid_connect psolve spike_record checkpoint gid_exists set_gid2node spike_statistics gid2cell max_histogram set_maxstep threshold gid2obj outputcell spike_compress timeout
The methods described in this section work for intra-machine connections regardless of how NEURON is configured (Thus all parallel network models can be executed on any serial machine). However machine spanning connections can only be made if NEURON has been configured with the --with-mpi option (or other options that automatically set it such as --with-paranrn). (See MPI for installation hints).
The fundamental requirement is that each
cell be associated with a unique integer global id (gid). The
ParallelNetManager in nrn/share/lib/hoc/netparmpi.hoc is a sample
implementation that makes use of these facilities. That implementation
assumes that all conductance based cells contain a public
connect2target(targetsynapse, netcon)
which connects the target synapse
object to a specific range variable (e.g. soma.v(.5)) and returns the
new NetCon in the second object argument. Artificial cells may either be
bare or wrapped in a class and made public as a Point Process object field. That is,
cells built as NetworkReadyCells are compatible with the
ParallelNetManager and that manager follows as closely as possible
the style of network construction used by the NetGUI builder.
Notes:
Gid, sid, and pieces.
The typical network simulation sets up a one to one correspondence between gid and cell. This most common usage is suggested by the method name, cell , that makes the correspondence, as well as the accessor method, gid2cell . That's because, almost always, a cell has one spike detection site and the entire cell is on a single cpu. But either or both of those assertions can break down and then one must be aware that, rigorously, a gid is associated with a spike detection site (defined by a NetCon source). For example, many spike detection sites per cell are useful for reciprocal synapses. Each side of each reciprocal synapse will require its own distinct gid. When load balance is a problem, or when you have more cpus than cells, it is useful to split cells into pieces and put the pieces on different cpus ( splitcell and multisplit ). But now, some pieces will not have a spike detection site and therefore don't have to have a gid. In either case, it can be useful to adopt an administrative policy for gid values that encodes whole cell identification. For a cell piece that has no spike output, one can still give it a gid associated with an arbitrary spike detection site that is effectively turned off because it is not the source for any existing NetCon and it was never specified as an outputcell . In the same way, it is also useful to encode a multisplit sid (split id) with whole cell identification.
ParallelNetwork
pc.set_gid2node(gid, id)
Commonly, a cell has only one spike detector location and hence we normally identify a gid with a cell. However, a cell can have several distinct spike detection locations or spike detector point processes, and each must be associated with a distinct gid (e.g. dendro-dendritic synapses).
ParallelNetwork
integer = pc.gid_exists(gid)
Return 2 if the gid is owned by this machine and has been associated with a NetCon source location via the cell method.
Return 1 if the gid is owned by this machine but has not been associated with a NetCon source location.
Return 0 if the gid is NOT owned by this machine.
ParallelNetwork
th = pc.threshold(gid)
th = pc.threshold(gid, th)
ParallelNetwork
pc.cell(gid, netcon)
pc.cell(gid, netcon, 0)
Use a

netcon = new NetCon(cell, nil)

statement to get a temporary netcon which can be destroyed after its use in the pc.cell call. The weight and delay of this temporary netcon are not relevant; they come into the picture with gid_connect .
Note that cells which do not send spikes to other machines are not required to call this and in fact do not need a gid. However the administrative detail would be significantly more complicated due to the multiplication of cases in regard to whether the source and target exist AND the source is an outputcell.
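A Python sketch of the registration idiom just described (the gid value and the round-robin ownership rule are illustrative assumptions):

from neuron import h

pc = h.ParallelContext()
gid = 7
if pc.id() == gid % int(pc.nhost()): # this rank owns the gid (assumed policy)
    pc.set_gid2node(gid, pc.id())
    soma = h.Section(name="soma")
    soma.insert("hh")
    # temporary netcon with a nil target, used only to mark the spike source
    nc = h.NetCon(soma(0.5)._ref_v, None, sec=soma)
    pc.cell(gid, nc) # a third arg of 0 would suppress output to other machines
    del nc           # safe to discard after the pc.cell call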
ParallelNetwork
pc.outputcell(gid)
ParallelNetwork
pc.spike_record(gid, spiketimevector, gidvector)
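A usage sketch (a gid of -1 is assumed to record every spike source owned by this rank):

from neuron import h

pc = h.ParallelContext()
tvec = h.Vector()  # spike times are appended here during the run
idvec = h.Vector() # matching source gids
pc.spike_record(-1, tvec, idvec)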
ParallelNetwork
netcon = pc.gid_connect(srcgid, target)
netcon = pc.gid_connect(srcgid, target, netcon)
Note that if the srcgid is owned by this machine then cell must be called earlier to make sure that the srcgid is associated with a NetCon source location.
Note that if the srcgid is not owned by this machine, then this machines target will only get spikes from the srcgid if the source gid's machine had called outputcell or the third arg of cell was 1.
If the third arg exists, it must be a NetCon object with target the same as the second arg. The src of that NetCon will be replaced by srcgid and that NetCon returned. The purpose is to re-establish a connection to the original srcgid after a gid_clear .
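A target-side sketch in Python (srcgid 1 is assumed to have been registered elsewhere via set_gid2node and cell ):

from neuron import h

pc = h.ParallelContext()
soma = h.Section(name="soma")
syn = h.ExpSyn(soma(0.5))    # the target synapse lives on this rank
nc = pc.gid_connect(1, syn)  # source may be on any rank
nc.weight[0] = 0.01          # weight and delay are set on this netcon
nc.delay = 1.0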
ParallelNetwork
pc.psolve(tstop)
ParallelNetwork
oldtimeout = pc.timeout(seconds)
ParallelNetwork
local_minimum_delay = pc.set_maxstep(default_max_step)
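The two calls combine in the standard parallel run idiom; a sketch, assuming the standard run system has been loaded:

from neuron import h

pc = h.ParallelContext()
h.load_file("stdrun.hoc") # provides h.stdinit()
# set_maxstep must be called on every rank after all NetCons exist, since
# the global minimum NetCon delay bounds the spike exchange interval
pc.set_maxstep(10) # 10 ms cap in case delays are large
h.stdinit()
pc.psolve(100)     # integrate to t = 100 ms with spike exchange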
ParallelNetwork
nspike = pc.spike_compress(nspike, gid_compress)
This method should only be called after the entire network has been set up since the gid compression mapping requires a knowledge of which cells are sending interprocessor spikes.
If nspike = 0 , compression is turned off.
If nspike < 0, the current value of nspike is returned.
If gid_compress = 0, or if some cpu has more than 256 cells that send interprocessor spikes, the real 4 byte integer gids are used in the (spiketime, gid) pairs and only the spiketime is compressed to 1 byte. i.e. instead of 2 bytes the pair consists of 5 bytes.
ParallelNetwork
object = pc.gid2obj(gid)
ParallelNetwork
object = pc.gid2cell(gid)
ParallelNetwork
nsendmax = pc.spike_statistics(&nsend, &nrecv, &nrecv_useful)
nsendmax is the maximum number of spikes sent from this machine to all other machines due to a single maximum step interval.
nsend is the total number of spikes sent from this machine to all other machines.
nrecv is the total number of spikes received by this machine. This number is the same for all machines.
nrecv_useful is the total number of spikes received from other machines that are sent to cells on this machine. (note: this does not include any nsend spikes from this machine)
ParallelNetwork
pc.max_histogram(vec)
Note that the current implementation of the spike exchange mechanism uses MPI_Allgather with a fixed buffer size that allows up to nrn_spikebuf_size spikes per cpu to be sent to all other machines. The default value of this is 0. If some cpu needs to send more than this number of spikes, then a second MPI_Allgatherv is used to send the overflow.
ParallelNetwork
i = pc.checkpoint()
ParallelContext multisplit source_var target_var setup_transfer splitcell
Gap junctions are assumed to couple cells relatively weakly so that the modified euler method is acceptable for accuracy and stability. For purposes of load balance, and regardless of coupling strength, a cell may be split into two subtrees with each on a different processor. See splitcell . Splitting a cell into more than two pieces can be done with multisplit .
Except for "splitcell" and "multisplit, the methods described in this section work for intra-machine connections regardless of how NEURON is configured. However machine spanning connections can only be made if NEURON has been configured with the --with-paranrn option. (This automatically sets the --with-mpi option).
ParallelTransfer
pc.source_var(&source_variable, source_global_index)
ParallelTransfer
pc.target_var(&target_variable, source_global_index)
pc.target_var(targetPointProcess, &target_variable, source_global_index)
Transfer occurs during finitialize just prior to BEFORE BREAKPOINT blocks of mod files and calls to type 0 FInitializeHandler statements. For the fixed step method, transfer occurs just before calling the SOLVE blocks. For the variable step methods transfer occurs just after states are scattered. Though any source variable can be transferred to any number of target variables, it generally only makes sense to transfer voltage values.
ParallelTransfer
pc.setup_transfer()
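A half-gap-junction sketch in Python; h.Gap is a hypothetical POINT_PROCESS with an assigned vgap (as in the gap.mod example distributed with NEURON), and the sgid values are illustrative. The mirror half, with sgids 1 and 2 swapped, would be set up on the rank owning the other cell.

from neuron import h

pc = h.ParallelContext()
soma = h.Section(name="soma")
gap = h.Gap(soma(0.5))                       # hypothetical gap junction point process
pc.source_var(soma(0.5)._ref_v, 1, sec=soma) # expose v here under sgid 1
pc.target_var(gap, gap._ref_vgap, 2)         # receive sgid 2's voltage into vgap
pc.setup_transfer()                          # call after all source/target declarations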
ParallelTransfer
rootsection pc.splitcell_connect(host_with_other_subtree)
This method is not normally called by the user but is wrapped by the ParallelNetManager method, splitcell which provides a simple interface to support load balanced network simulations.
See multisplit for less restrictive parallel simulation of individual cells.
ParallelTransfer
section pc.multisplit(x, sid)
section pc.multisplit(x, sid, backbone_style)
pc.multisplit()
The default backbone_style (no third arg) is 2. With this style, we allow multiple pieces of the same cell to be on the same cpu. This means that one can split a cell into more pieces than available cpus in order to more effectively load balance.
For backbone_style 2, the entire cell is solved exactly via gaussian elimination regardless of the number of backbones or their size. So the stability-accuracy properties are the same as if the cell were entirely on one cpu. In this case all calls to multisplit for that entire single cell must have no third arg or a third arg of 2. For best performance, split a cell so that it has as few backbones as possible consistent with load balance, since the reduced tree matrix must be solved between the MPI matrix send phase and the MPI matrix receive phase, and that is a computation interval in which, in many situations, nothing else can be accomplished.
The no arg call signals that no further multisplit calls will be forthcoming and the system can determine the communication pattern needed to carry out the multisplit computations. All hosts, even those that have no multisplit cells, must participate in this determination. (If anyone calls multisplit(...), everyone must call multisplit().)
For backbone_style 0 or 1, if nodes have the same split id, sid, they must be on different hosts but that is not a serious restriction since in that case the subtrees would normally be connected together using the standard connect statement.
If all the trees connected into a single cell have only one sid, the simulation is numerically identical to splitcell which is numerically identical to all the trees connected together on a single cpu to form one cell. If one or more of the trees has two sids, then numerical accuracy, stability, and performance are a bit more ambiguous and depend on the electrical distance between the two sids. The rule of thumb is that voltage at one sid point should not significantly affect voltage at the other sid point within a single time step. Note that this electrical distance has nothing to do with nseg. The stability criterion is not proportional to dt/dx^2 but the much more favorable dt/L^2, where dx is the size of the shortest segment and L is the distance between the sid nodes. In principle the subtrees of the whole cell can be the individual sections. However the matrix solution of the nodes on the path between the two sids takes twice as many divisions and 4 times as many multiplications and subtractions as normally occurs on that path. Hence there is an accuracy/performance optimum with respect to the distance between sids on the same tree. This also complicates load balance considerations.
If the third arg exists and is 1, for one or both of the sids forming a backbone, the backbone is declared to be short which means that it is solved exactly by gaussian elimination without discarding any off diagonal elements. Two short backbones cannot be connected together but they may alternate with long backbones. If the entire cell consists of single sid subtrees connected to a short backbone then the numerical accuracy is the same as if the entire tree was gaussian eliminated on a single cpu. It does not matter if a one sid subtree is declared short or not; it is solved exactly in any case.
Note: using multisplit automatically sets CVode.cache_efficient(1) .
ParallelContext
pc.gid_clear()
pc.gid_clear(type)
With type = 2, clears any information set up by splitcell or multisplit . With type = 3, clears any information set up by setup_transfer .
With a type arg of 0 or no arg, clears all the above information.
ParallelContext nthread sec_in_thread thread_ctime thread_stat partition thread_busywait thread_how_many_proc
Multiple threads cannot be used with extracellular or LinearMechanism , and work only with the fixed step and global variable time step integration methods.
Mechanisms that are not thread safe can only be used by thread 0.
Mod files that use VERBATIM blocks are not considered thread safe. The mod file author can use the THREADSAFE keyword in the NEURON block to force the thread enabled translation.
Mod files that assign values to GLOBAL variables are not considered thread safe. If the mod file is using the GLOBAL as a counter, prefix the offending assignment statements with the PROTECT keyword so that multiple threads do not attempt to update the value at the same time (race condition). If the mod file is using the GLOBAL essentially as a file scope LOCAL along with the possibility of passing values back to hoc in response to calling a PROCEDURE, use the THREADSAFE keyword in the NEURON block to automatically treat those GLOBAL variables as thread specific variables. Hoc assigns and evaluates only the thread 0 version and if FUNCTIONs and PROCEDUREs are called from Hoc, the thread 0 version of these globals are used.
Threads
n = pc.nthread(n)
n = pc.nthread(n, 0)
n = pc.nthread()
Threads
pc.partition(i, seclist)
pc.partition()
Threads
pc.thread_stat()
Threads
previous = pc.thread_busywait(next)
Threads
n = pc.thread_how_many_proc()
Threads
sec i = pc.sec_in_thread()
Threads
ct = pc.thread_ctime(i)
pc.thread_ctime()