This Wiki page is outdated, please go to: https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:mpi4py
The gather operations collect data from all tasks and deliver this collection to the root task or to all tasks. The syntax of the gather methods is
comm.Gather(sendbuf, recvbuf, int root=0)
comm.Gatherv(sendbuf, recvbuf, int root=0)
comm.Allgather(sendbuf, recvbuf)
comm.Allgatherv(sendbuf, recvbuf)
comm.gather(sendobj=None, recvobj=None, int root=0)
comm.allgather(sendobj=None, recvobj=None)
Compared to the scatter methods, the roles of sendbuf and recvbuf are exchanged. For Gather, sendbuf must contain the same number of elements in all tasks, and recvbuf on root must contain size times that number of elements, where size is the total number of tasks. For Gatherv, the integer tuples counts and displacements characterize the layout of recvbuf, as described in the scatter section. Here sendbuf must contain exactly counts[rank] elements in task rank. The following code demonstrates this.
# comm, rank and size are the communicator, task rank and number of tasks
# as defined in the previous sections; the Gatherv part uses three sections,
# so this example is meant to be run with three tasks
a_size = 4
recvdata = None
senddata = (rank+1)*numpy.arange(a_size, dtype=numpy.float64)
if rank == 0:
    recvdata = numpy.arange(size*a_size, dtype=numpy.float64)
comm.Gather(senddata, recvdata, root=0)
print('on task', rank, 'after Gather: data =', recvdata)

counts = (2, 3, 4)
dspls = (0, 3, 8)
if rank == 0:
    recvdata = numpy.empty(12, dtype=numpy.float64)
sendbuf = [senddata, counts[rank]]
recvbuf = [recvdata, counts, dspls, MPI.DOUBLE]
comm.Gatherv(sendbuf, recvbuf, root=0)
print('on task', rank, 'after Gatherv: data =', recvdata)
The lower case function gather communicates generic Python data objects analogous to the scatter function. An example is
senddata = ['rank', rank]
rootdata = comm.gather(senddata, root=0)
print('on task', rank, 'after gather: data =', rootdata)
The “all” variants of the gather methods deliver the collected data not only to the root task, but to all tasks in the communicator. These methods therefore lack the root parameter.
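As an illustration, a minimal Allgather sketch (not part of the original examples, assuming comm, rank and size as above) could look like this:

# illustrative sketch: every task contributes a_size elements and
# receives the complete collection; no root argument is needed
a_size = 2
senddata = (rank+1)*numpy.arange(a_size, dtype=numpy.float64)
recvdata = numpy.empty(size*a_size, dtype=numpy.float64)
comm.Allgather(senddata, recvdata)
print('on task', rank, 'after Allgather: data =', recvdata)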
The alltoall operations combine gather and scatter. In a communicator with size tasks, every task has a sendbuf and a recvbuf with exactly size data objects. The alltoall operation takes the i-th object from the sendbuf of task j and copies it into the j-th object of the recvbuf of task i. The operation can be thought of as a transpose of the matrix with tasks as columns and data objects as rows. The syntax of the alltoall methods is
comm.Alltoall(sendbuf, recvbuf)
comm.Alltoallv(sendbuf, recvbuf)
comm.alltoall(sendobj=None, recvobj=None)
The data objects in sendbuf and recvbuf have types depending on the method. In Alltoall the data objects are equally sized consecutive sections of buffer-like objects. In Alltoallv they are sections of varying sizes and varying displacements of buffer-like objects. The layout must be described in the counts and displacements tuples, which have to be defined for sendbuf and recvbuf in every task in a way consistent with the intended transposition in the (task, object) matrix. Finally, in the lower case alltoall function the data objects can be of any allowed Python type, provided that the data objects in sendbuf conform to those in recvbuf, which they will replace.
An example for Alltoall is
a_size = 1
senddata = (rank+1)*numpy.arange(size*a_size, dtype=numpy.float64)
recvdata = numpy.empty(size*a_size, dtype=numpy.float64)
comm.Alltoall(senddata, recvdata)
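An Alltoallv example is not part of the original code; the following sketch illustrates one possible layout, in which task j sends i+1 elements to task i, so that counts and displacements stay consistent for any number of tasks:

# illustrative sketch only: counts and displacements chosen so that
# task j sends i+1 elements to task i
sendcounts = numpy.array([i+1 for i in range(size)])
senddispls = numpy.insert(numpy.cumsum(sendcounts), 0, 0)[:-1]
recvcounts = numpy.full(size, rank+1)
recvdispls = numpy.insert(numpy.cumsum(recvcounts), 0, 0)[:-1]
senddata = (rank+1)*numpy.arange(sendcounts.sum(), dtype=numpy.float64)
recvdata = numpy.empty(recvcounts.sum(), dtype=numpy.float64)
comm.Alltoallv([senddata, sendcounts, senddispls, MPI.DOUBLE],
               [recvdata, recvcounts, recvdispls, MPI.DOUBLE])
print('on task', rank, 'after Alltoallv: data =', recvdata)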
Next is an example for alltoall, to be started with two tasks.
data0 = ['rank', rank]
data1 = [1, 'rank', rank]
senddata = (data0, data1)
print('on task', rank, 'senddata =', senddata)
recvdata = comm.alltoall(senddata)
print('on task', rank, 'recvdata =', recvdata)
Global Data Reduction
The data reduction functions of MPI combine data from all tasks with one of a set of predefined operations and deliver the result of this “reduction” to the root task or to all tasks. mpi4py provides the following operations for reduction:
MPI.MIN      minimum
MPI.MAX      maximum
MPI.SUM      sum
MPI.PROD     product
MPI.LAND     logical and
MPI.BAND     bitwise and
MPI.LOR      logical or
MPI.BOR      bitwise or
MPI.LXOR     logical xor
MPI.BXOR     bitwise xor
MPI.MAXLOC   max value and location
MPI.MINLOC   min value and location
The reduction operations need data of the appropriate type.
The syntax for the reduction methods is
comm.Reduce(sendbuf, recvbuf, op=MPI.SUM, root=0)
comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
comm.reduce(sendobj=None, recvobj=None, op=MPI.SUM, root=0)
comm.allreduce(sendobj=None, recvobj=None, op=MPI.SUM)
The following example shows the use of Reduce and Allreduce. sendbuf and recvbuf must be buffer-like data objects with the same number of elements of the same type in all tasks. The reduction operation is performed elementwise using the corresponding elements of sendbuf in all tasks. The result is stored into the corresponding element of recvbuf in the root task by Reduce and in all tasks by Allreduce. If the op parameter is omitted, the default operation MPI.SUM is used.
a_size = 3
recvdata = numpy.zeros(a_size, dtype=numpy.int64)
senddata = (rank+1)*numpy.arange(a_size, dtype=numpy.int64)
comm.Reduce(senddata, recvdata, root=0, op=MPI.PROD)
print('on task', rank, 'after Reduce: data =', recvdata)
comm.Allreduce(senddata, recvdata)
print('on task', rank, 'after Allreduce: data =', recvdata)
The use of the lower case methods reduce and allreduce operating on generic Python data objects is limited, because the reduction operations are undefined for most data objects (such as lists or tuples).
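For single numbers, however, the lower case variants work as expected; a small sketch (not part of the original page, assuming comm and rank as above):

# illustrative sketch: lower case allreduce on a plain Python number
senddata = rank + 1
total = comm.allreduce(senddata, op=MPI.SUM)
print('on task', rank, 'allreduce sum =', total)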
The reduce_scatter functions operate elementwise on size sections of the buffer-like data object sendbuf. Each section must contain the same number of elements in all tasks. The result of the reduction in section i is copied to recvbuf in task i, which must have an appropriate length.
The syntax for the reduce_scatter methods is
comm.Reduce_scatter_block(sendbuf, recvbuf, op=MPI.SUM)
comm.Reduce_scatter(sendbuf, recvbuf, recvcounts=None, op=MPI.SUM)
In Reduce_scatter_block the number of elements in all sections must be equal, and the number of elements in sendbuf must be size times that number. An example code is the following.
a_size = 3
recvdata = numpy.zeros(a_size, dtype=numpy.int64)
senddata = (rank+1)*numpy.arange(size*a_size, dtype=numpy.int64)
print('on task', rank, 'senddata =', senddata)
comm.Reduce_scatter_block(senddata, recvdata, op=MPI.SUM)
print('on task', rank, 'recvdata =', recvdata)
In Reduce_scatter the numbers of elements in the sections can be different. They must be given in the integer tuple recvcounts. The number of elements in sendbuf must be the sum of the numbers of elements in the sections. On task i, recvbuf must have the length of section i of sendbuf. The following code gives an example for this.
recv_size = list(range(1, size+1))
recvdata = numpy.zeros(recv_size[rank], dtype=numpy.int64)
send_size = 0
for i in range(0, size):
    send_size = send_size + recv_size[i]
senddata = (rank+1)*numpy.arange(send_size, dtype=numpy.int64)
print('on task', rank, 'senddata =', senddata)
comm.Reduce_scatter(senddata, recvdata, recv_size, op=MPI.SUM)
print('on task', rank, 'recvdata =', recvdata)
Reduction with MINLOC and MAXLOC
The reduction operations MINLOC and MAXLOC differ from all others: they return two results, the minimum or maximum of the values in the different tasks and the rank of a task holding the extreme value. mpi4py provides these two operations only for the lower case reduce and allreduce methods, for comparing a single numerical data object in every task. An example is given in the following code.
inp = numpy.random.rand(size)
senddata = inp[rank]
recvdata = comm.reduce(senddata, root=0, op=MPI.MINLOC)
print('on task', rank, 'reduce: ', senddata, recvdata)
recvdata = comm.allreduce(senddata, op=MPI.MINLOC)
print('on task', rank, 'allreduce: ', senddata, recvdata)