
# Outdated

This wiki page is outdated; please go to: https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:mpi4py

### Global Data Reduction

The data reduction functions of MPI combine data from all tasks with one of a set of predefined operations and deliver the result of this “reduction” to the root task or to all tasks. mpi4py provides the following operations for reduction:

| Operation | Result |
| --- | --- |
| MPI.MIN | minimum |
| MPI.MAX | maximum |
| MPI.SUM | sum |
| MPI.PROD | product |
| MPI.LAND | logical and |
| MPI.BAND | bitwise and |
| MPI.LOR | logical or |
| MPI.BOR | bitwise or |
| MPI.LXOR | logical xor |
| MPI.BXOR | bitwise xor |
| MPI.MAXLOC | max value and location |
| MPI.MINLOC | min value and location |

The reduction operations need data of the appropriate type.
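The elementwise semantics of these operations can be sketched without MPI: for buffer-like data, a reduction combines the *i*-th element of every task's send buffer. A minimal numpy illustration (the rows of `sendbufs` simulate the per-task send buffers; all names here are ours, not part of mpi4py):

```python
import numpy as np

# Simulate the send buffers of 3 tasks, 4 elements each
# (row r plays the role of task r's sendbuf).
sendbufs = np.array([[1,  2,  3,  4],
                     [5,  6,  7,  8],
                     [9, 10, 11, 12]])

# MPI.SUM combines corresponding elements across tasks ...
result_sum = sendbufs.sum(axis=0)   # what Reduce(..., op=MPI.SUM) delivers
# ... and MPI.MAX picks the elementwise maximum.
result_max = sendbufs.max(axis=0)   # what Reduce(..., op=MPI.MAX) delivers

print(result_sum)   # [15 18 21 24]
print(result_max)   # [ 9 10 11 12]
```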

#### reduce

The syntax for the reduction methods is

```python
comm.Reduce(sendbuf, recvbuf, op=MPI.SUM, root=0)
comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
comm.reduce(sendobj=None, recvobj=None, op=MPI.SUM, root=0)
comm.allreduce(sendobj=None, recvobj=None, op=MPI.SUM)
```

The following example shows the use of `Reduce` and `Allreduce`. sendbuf and recvbuf must be buffer-like data objects with the same number of elements of the same type in all tasks. The reduction operation is performed elementwise using the corresponding elements of sendbuf in all tasks. With `Reduce` the result is stored into the corresponding element of recvbuf in the root task; with `Allreduce` it is stored in all tasks. If the op parameter is omitted, the default operation `MPI.SUM` is used.

**reduce_array.py:**

```python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

a_size = 3
recvdata = numpy.zeros(a_size, dtype=int)
senddata = (rank+1)*numpy.arange(a_size, dtype=int)
comm.Reduce(senddata, recvdata, root=0, op=MPI.PROD)
print('on task', rank, 'after Reduce:    data =', recvdata)
comm.Allreduce(senddata, recvdata)
print('on task', rank, 'after Allreduce: data =', recvdata)
```

The use of the lower-case methods `reduce` and `allreduce`, which operate on generic Python data objects, is limited, because the reduction operations are undefined for most of these objects (like lists, tuples, etc.).
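The limitation can be seen in plain Python already: the `+` operation that a sum-like reduction would apply is defined very differently for lists, so "summing" list objects concatenates them instead of adding numbers elementwise. A plain-Python sketch (not an mpi4py call):

```python
# For numbers, repeated '+' gives the arithmetic sum ...
values = [1, 2, 3]
print(sum(values))           # 6

# ... but for lists '+' means concatenation, so a generic "sum"
# reduction of list objects does not add their elements.
lists = [[1, 2], [3, 4], [5, 6]]
combined = sum(lists, [])    # [1, 2, 3, 4, 5, 6], not [9, 12]
print(combined)
```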

#### reduce_scatter

The reduce_scatter functions operate elementwise on `size` sections of the buffer-like data object sendbuf. Each section must have the same number of elements in all tasks. The result of the reduction in section *i* is copied to recvbuf in task *i*, which must have an appropriate length.
The syntax for the reduction methods is
The syntax for the reduction methods is

```python
comm.Reduce_scatter_block(sendbuf, recvbuf, op=MPI.SUM)
comm.Reduce_scatter(sendbuf, recvbuf, recvcounts=None, op=MPI.SUM)
```

In `Reduce_scatter_block` the number of elements must be the same in all sections, and the number of elements in sendbuf must be `size` times that number. An example code is the following:

**reduce_scatter_block:**

```python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

a_size = 3
recvdata = numpy.zeros(a_size, dtype=int)
senddata = (rank+1)*numpy.arange(size*a_size, dtype=int)
print('on task', rank, 'senddata =', senddata)
comm.Reduce_scatter_block(senddata, recvdata, op=MPI.SUM)
print('on task', rank, 'recvdata =', recvdata)
```
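The result of this example can be verified without running MPI: sum the send buffers of all tasks elementwise, then cut the total into `size` equal sections of a_size elements; task *i* receives section *i*. A numpy sketch for size = 3 (variable names are ours):

```python
import numpy as np

size, a_size = 3, 3
# sendbuf of task r is (r+1)*arange(size*a_size), as in the example above
sendbufs = np.array([(r + 1) * np.arange(size * a_size) for r in range(size)])
total = sendbufs.sum(axis=0)          # elementwise MPI.SUM over all tasks

# Reduce_scatter_block gives task i the i-th section of a_size elements
sections = total.reshape(size, a_size)
for r in range(size):
    print('task', r, 'receives', sections[r])
```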

In `Reduce_scatter` the numbers of elements in the sections can differ. They must be given in the integer tuple recvcounts. The number of elements in sendbuf must be the sum of the numbers of elements in the sections. On task *i* recvbuf must have the length of section *i* of sendbuf. The following code gives an example for this.

**reduce_scatter:**

```python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

recv_size = list(range(1, size+1))    # section i has i+1 elements
recvdata = numpy.zeros(recv_size[rank], dtype=int)
send_size = sum(recv_size)
senddata = (rank+1)*numpy.arange(send_size, dtype=int)
print('on task', rank, 'senddata =', senddata)
comm.Reduce_scatter(senddata, recvdata, recv_size, op=MPI.SUM)
print('on task', rank, 'recvdata =', recvdata)
```
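Again the delivered data can be checked without MPI: sum the send buffers elementwise, then split the total into sections of length recvcounts[i]; task *i* receives section *i*. A numpy sketch for size = 3 (variable names are ours):

```python
import numpy as np

size = 3
recv_size = list(range(1, size + 1))    # section lengths 1, 2, 3
send_size = sum(recv_size)              # 6 elements per sendbuf

# sendbuf of task r is (r+1)*arange(send_size), as in the example above
sendbufs = np.array([(r + 1) * np.arange(send_size) for r in range(size)])
total = sendbufs.sum(axis=0)            # elementwise MPI.SUM over all tasks

# split the reduced buffer at the cumulative section boundaries;
# task i receives section i
sections = np.split(total, np.cumsum(recv_size)[:-1])
for r in range(size):
    print('task', r, 'receives', sections[r])
```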

#### Reduction with MINLOC and MAXLOC

The reduction operations MINLOC and MAXLOC differ from all others: they return two results, the minimum or maximum of the values in the different tasks and the rank of a task which holds the extreme value. mpi4py provides the two operations only for the lower-case `reduce` and `allreduce` methods, for comparing a single numerical data object in every task. An example is given in

**reduce_minloc.py:**

```python
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

inp = numpy.random.rand(size)
senddata = inp[rank]
recvdata = comm.reduce(senddata, op=MPI.MINLOC, root=0)
print('on task', rank, 'reduce:   ', senddata, recvdata)
recvdata = comm.allreduce(senddata, op=MPI.MINLOC)
print('on task', rank, 'allreduce:', senddata, recvdata)
```
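The value/location pair that MINLOC delivers can be mimicked in plain Python: take the minimum of one value per task together with the rank holding it. A sketch of the semantics, not an mpi4py call (the names and sample values are ours):

```python
# One scalar value per task (list index = rank).
values = [0.42, 0.17, 0.93, 0.58]

# MPI.MINLOC-style result: (minimum value, rank that holds it).
# Comparing (value, rank) pairs finds the smallest value and,
# on ties, the lowest rank.
minval, minrank = min((v, r) for r, v in enumerate(values))
print('minimum', minval, 'found on rank', minrank)
```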

## Code Examples

The python codes for all examples described in this tutorial are available from http://wwwuser.gwdg.de/~ohaan/mpi4py_examples/