[gmx-developers] rvec structure
Nathan Moore
nmoore at physics.umn.edu
Sat Jul 9 00:11:39 CEST 2005
Dear Berk, David,
Well, the implementation with collectives is marginally better. From the
trace files, though, it looks like there is still considerable time spent
in MPI_Wait (often more time in Wait than in the collective itself).
Was an alternative version of gmx_sumf using MPI collectives (the function
defined in network.c) ever written? Any insight on where else the
MPI_Wait calls might be coming from? So far I've only looked at the dppc
and villin benchmarks.
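For reference, the sort of collective-based gmx_sumf I have in mind is
sketched below. This is only a rough sketch: the stand-alone signature,
the function name, and the scratch-buffer handling are placeholders, not
the real network.c interface. The scratch buffer is there because plain
MPI_Allreduce does not allow the send and receive buffers to overlap.

  #include <stdlib.h>
  #include <mpi.h>

  /* Sketch only: sum nr floats in place across all nodes with a single
   * collective instead of a ring of point-to-point sends and waits. */
  static void gmx_sumf_collective(int nr, float r[])
  {
    static float *buf    = NULL;
    static int    nalloc = 0;
    int i;

    if (nr > nalloc) {
      nalloc = nr;
      buf    = realloc(buf, nalloc*sizeof(*buf));
    }
    MPI_Allreduce(r, buf, nr, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    for (i = 0; i < nr; i++) {
      r[i] = buf[i];
    }
  }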
regards,
Nathan Moore
> Hi,
>
> You don't need a temp array.
>
> My test code is in CVS revision 1.8 of mvxvf.c:
>
> > static int *recvcounts=NULL,*displs;
> > int i;
> >
> > if (recvcounts == NULL) {
> >   snew(recvcounts,nsb->nprocs);
> >   snew(displs,nsb->nprocs);
> >   for(i=0; i<nsb->nprocs; i++) {
> >     recvcounts[i] = nsb->homenr[i]*sizeof(x[0]);
> >     displs[i]     = nsb->index[i]*sizeof(x[0]);
> >   }
> > }
> > MPI_Allgatherv(arrayp(x[nsb->index[nsb->pid]],nsb->homenr[nsb->pid]),
> >                MPI_BYTE,x,recvcounts,displs,MPI_BYTE,MPI_COMM_WORLD);
>
> It also contains the force summing; just one line is enough for this:
> MPI_Allreduce(f,f,nsb->natoms*DIM,mpi_type,MPI_SUM,MPI_COMM_WORLD);
>
> Berk.
>
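For anyone else reading along, here is a stand-alone sketch of the gather
pattern Berk describes. The homenr/index arrays and the gather_coords
name are hypothetical stand-ins for the nsb fields in the snippet above,
and I've used MPI_IN_PLACE (MPI-2) so the node's own block of x can serve
as both source and destination; the quoted code instead passes that block
directly as the send buffer.

  #include <stdlib.h>
  #include <mpi.h>

  typedef float rvec[3];   /* GROMACS-style 3-vector; real may be double
                            * in a double-precision build */

  /* Sketch only: every node owns homenr[pid] atoms starting at
   * index[pid]; the full coordinate array x is reassembled on all
   * nodes.  Counts and displacements are in bytes to match the
   * MPI_BYTE datatype, as in the snippet above. */
  static void gather_coords(rvec x[], int nprocs, int pid,
                            const int homenr[], const int index[])
  {
    int *recvcounts, *displs, i;

    recvcounts = malloc(nprocs*sizeof(int));
    displs     = malloc(nprocs*sizeof(int));
    for (i = 0; i < nprocs; i++) {
      recvcounts[i] = homenr[i]*sizeof(x[0]);
      displs[i]     = index[i]*sizeof(x[0]);
    }
    MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_BYTE,
                   x, recvcounts, displs, MPI_BYTE, MPI_COMM_WORLD);
    free(recvcounts);
    free(displs);
  }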