===  reduce_scatter  ===
  
The reduce_scatter functions operate elementwise on ''size'' sections of the buffer-like data object sendbuf. The sections must have equal numbers of elements in all tasks. The result of the reduction of section //i// is copied to recvbuf in task //i//, which must have an appropriate length.
The syntax for the reduction methods is
  
<code>
comm.Reduce_scatter_block(sendbuf, recvbuf, op=MPI.SUM)
comm.Reduce_scatter(sendbuf, recvbuf, recvcounts=None, op=MPI.SUM)</code>
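\\
The snippets below are fragments of a larger script; as a reminder, a minimal sketch of the setup they rely on (the standard mpi4py initialization that defines ''comm'', ''size'' and ''rank'') could look like this:

<code>
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD     # communicator containing all started tasks
size = comm.Get_size()    # total number of tasks
rank = comm.Get_rank()    # rank of the task running this script
</code>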
\\
In ''Reduce_scatter_block'' the number of elements in all sections must be equal, and the number of elements in sendbuf must be ''size'' times that number. An example code is the following:
  
**reduce_scatter_block:**
<code>
a_size = 3
recvdata = numpy.zeros(a_size, dtype=int)
senddata = (rank+1)*numpy.arange(size*a_size, dtype=int)
print('on task', rank, 'senddata  = ', senddata)
comm.Reduce_scatter_block(senddata, recvdata, op=MPI.SUM)
print('on task', rank, 'recvdata = ', recvdata)</code>
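\\
What ''Reduce_scatter_block'' delivers here can be checked locally: the elementwise sum over all tasks of ''(rank+1)*numpy.arange(size*a_size)'' equals ''size*(size+1)/2'' times ''numpy.arange(size*a_size)'', and task //i// receives the //i//-th block of ''a_size'' elements of this sum. A small verification sketch (not part of the original example) is:

<code>
# recompute the section this task should have received
factor = size*(size+1)//2                          # sum of the factors (rank+1) over all tasks
full_sum = factor*numpy.arange(size*a_size, dtype=int)
expected = full_sum[rank*a_size:(rank+1)*a_size]   # section i belongs to task i
assert numpy.array_equal(recvdata, expected)</code>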
\\
In ''Reduce_scatter'' the number of elements in the sections can be different. They must be given in the integer tuple (or list) recvcounts. The number of elements in sendbuf must be the sum of the numbers of elements in the sections. On task //i//, recvbuf must have the length of section //i// of sendbuf. The following code gives an example for this.

**reduce_scatter:**
<code>
recv_size = list(range(1, size+1))
recvdata = numpy.zeros(recv_size[rank], dtype=int)
send_size = 0
for i in range(0, size):
    send_size = send_size + recv_size[i]
senddata = (rank+1)*numpy.arange(send_size, dtype=int)
print('on task', rank, 'senddata  = ', senddata)
comm.Reduce_scatter(senddata, recvdata, recv_size, op=MPI.SUM)
print('on task', rank, 'recvdata = ', recvdata)</code>
\\
===  Reduction with MINLOC and MAXLOC  ===

The reduction operations MINLOC and MAXLOC differ from all others: they return two results, the minimum or maximum, respectively, of the values in the different tasks and the rank of a task which holds the extreme value. mpi4py provides these two operations only for the lower case ''reduce'' and ''allreduce'' methods, which compare a single numerical data object in every task. An example is given in

**reduce_minloc.py:**
<code>
inp = numpy.random.rand(size)
senddata = inp[rank]
recvdata = comm.reduce(senddata, op=MPI.MINLOC, root=0)
print('on task', rank, 'reduce:    ', senddata, recvdata)

recvdata = comm.allreduce(senddata, op=MPI.MINLOC)
print('on task', rank, 'allreduce: ', senddata, recvdata)</code>
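\\
Since ''allreduce'' makes the reduced object available in every task, each task can unpack it directly. A short sketch, assuming (as described above) that the result is a pair of the minimum value and the rank holding it:

<code>
minval, minrank = recvdata    # (minimum value, rank of the task that holds it)
print('task', rank, 'sees the global minimum', minval, 'from task', minrank)</code>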
\\
=====  Code Examples  =====

The Python codes for all examples described in this tutorial are available from [[http://wwwuser.gwdg.de/~ohaan/mpi4py_examples/]]

[[Kategorie: Scientific Computing]]