[gmx-users] large scale simulation?
Peter C. Lai
pcl at uab.edu
Fri Mar 30 23:29:24 CEST 2012
Yeah, I usually average 21 ns/day on an OpenMPI-over-QDR-InfiniBand cluster
using fewer than 300 nodes, with a 100K-atom system containing 60K TIPS3P
waters (hydrogen LJ terms) — single-precision GROMACS, fftw3, and OpenMPI
compiled with icc. It's also going to depend on your force field and how
you've optimized your system (PME:PP ratio, etc.).
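For the PME:PP ratio mentioned above, GROMACS ships a benchmarking helper that scans candidate numbers of dedicated PME ranks. A minimal sketch, assuming GROMACS 4.x with `g_tune_pme` on the path; the run input `topol.tpr`, the rank count, and the output name are placeholders, not values from this thread:

```shell
# Scan different PME-rank counts to find the best PME:PP split
# for this .tpr at 128 MPI ranks (128 is a hypothetical count);
# -steps limits each trial run so the scan finishes quickly.
g_tune_pme -np 128 -s topol.tpr -steps 1000

# Then launch production with the split it recommends, e.g.:
# mpirun -np 128 mdrun -npme 32 -s topol.tpr -deffnm prod
```

The tool writes its timing comparison to `perf.out` by default, so you can see how much throughput each split costs or gains before committing a long production run.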
On 2012-03-30 07:07:25PM +0200, David van der Spoel wrote:
> On 30 Mar 2012, at 19:02, Albert <mailmd2011 at gmail.com> wrote:
> > Hello:
> > I am wondering whether anybody has experience with GROMACS for large-scale simulation? I've heard a lot of people say it would be difficult for GROMACS. E.g., I've got a 60,000-atom system; is it possible for GROMACS to produce 100 ns/day or even more, supposing I can use as many CPUs as possible? In my recent experience with such a system, GROMACS can only produce up to 20 ns/day, so to produce 1000 ns I would have to wait 50 days...
> We are about to publish a paper where we have 1.2 million atoms and get 30 ns/day, on 2000 cores.
> > thank you very much
> > best
> > A.
> > --
> > gmx-users mailing list gmx-users at gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-request at gromacs.org.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
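The 50-day figure in Albert's question is just throughput arithmetic (target trajectory length divided by ns/day); a quick shell check with the numbers from the thread:

```shell
# days needed = target trajectory length (ns) / throughput (ns/day)
target_ns=1000
rate_ns_per_day=20
echo "$((target_ns / rate_ns_per_day)) days"   # → 50 days
```

At the 30 ns/day David reports, the same 1000 ns would take about 34 days, which is why squeezing out a better PME:PP split matters for long trajectories.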