[gmx-users] Angle group
Cyrus Djahedi
cyrusdja at kth.se
Fri Jul 25 13:35:20 CEST 2014
There is something else. I also tried using mk_angndx to generate an index file prior to using g_angle, with "mk_angndx -s C0.tpr -type angle".
It generates a lot of different angle groups that I cannot identify. Each line in the index file has 9 atoms on it; are the atoms divided three by three to form triplets? Is there any way of knowing which group represents which triplet?
g_angle:
Group 0 (Theta=110.0_502.42) has 7680 elements
Group 1 (Theta=111.0_376.81) has 10560 elements
Group 2 (Theta=112.0_837.36) has 960 elements
Group 3 (Theta=107.5_586.15) has 4944 elements
Group 4 (Theta=108.5_586.15) has 5616 elements
Group 5 (Theta=109.5_460.55) has 2976 elements
Group 6 (Theta=113.5_376.81) has 3840 elements
Group 7 (Theta=111.6_418.68) has 1872 elements
Group 8 (Theta=109.5_376.81) has 960 elements
Select a group: 2
Selected 2: 'Theta=112.0_837.36'
Last frame 10000 time 10000.000
Found points in the range from 93 to 124 (max 180)
< angle > = 108.363
< angle^2 > = 11742.5
Std. Dev. = 0.214073
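(For reference: g_angle appears to read whichever index group you select three atoms at a time, so a line of 9 atom numbers is simply three consecutive triplets, each defining one angle with its middle atom as the vertex. The mk_angndx group names seem to encode the angle parameters taken from the .tpr, e.g. Theta=110.0_502.42 for angles with a 110.0-degree equilibrium value and a force constant of 502.42. A minimal sketch of such a group, with made-up atom numbers:)

  [ Theta=110.0_502.42 ]
  ; read three at a time: (15 16 17), (33 34 35), (51 52 53) are three separate angles
     15    16    17    33    34    35    51    52    53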
________________________________________
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se [gromacs.org_gmx-users-bounces at maillist.sys.kth.se] on behalf of gromacs.org_gmx-users-request at maillist.sys.kth.se [gromacs.org_gmx-users-request at maillist.sys.kth.se]
Sent: 24 July 2014 22:09
To: gromacs.org_gmx-users at maillist.sys.kth.se
Subject: gromacs.org_gmx-users Digest, Vol 123, Issue 133
Send gromacs.org_gmx-users mailing list submissions to
gromacs.org_gmx-users at maillist.sys.kth.se
To subscribe or unsubscribe via the World Wide Web, visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
or, via email, send a message with subject or body 'help' to
gromacs.org_gmx-users-request at maillist.sys.kth.se
You can reach the person managing the list at
gromacs.org_gmx-users-owner at maillist.sys.kth.se
When replying, please edit your Subject line so it is more specific
than "Re: Contents of gromacs.org_gmx-users digest..."
Today's Topics:
1. Fw:Normal Mode Analysis (xy21hb)
2. Re: Angle group (Justin Lemkul)
3. continuation run segmentation fault (David de Sancho)
4. Re: Error in system_inflate.gro coordinates does not match
(RINU KHATTRI)
5. Re: gromacs.org_gmx-users Digest, Vol 123, Issue 124: reply
to message 1 (Guilherme Duarte Ramos Matos)
----------------------------------------------------------------------
Message: 1
Date: Thu, 24 Jul 2014 20:58:15 +0800 (CST)
From: xy21hb <xy21hb at 163.com>
To: "gromacs.org_gmx-users"
<gromacs.org_gmx-users at maillist.sys.kth.se>
Subject: [gmx-users] Fw:Normal Mode Analysis
Message-ID: <3c94d655.1bf0d.1476873575d.Coremail.xy21hb at 163.com>
Content-Type: text/plain; charset=GBK
Dear all,
I wonder if there is anywhere I can find the details of the mdp files used for normal mode analysis.
I understand from the manual that it needs the steepest descent, conjugate gradient, l-bfgs and nm "md" options run consecutively,
but I am not sure about the other parameters to set in these different stages.
Many thanks,
OAY
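(A minimal sketch of the kind of .mdp settings involved, assuming a GROMACS 4.x setup; the tolerances and step counts below are only placeholders, and normal-mode analysis generally wants a double-precision build and a very tightly minimized structure:)

  ; stages 1-3: energy minimization, each stage restarting from the previous output
  integrator  = steep      ; then cg, then l-bfgs
  emtol       = 0.001      ; tighten progressively; NM needs a very low residual force
  emstep      = 0.01
  nsteps      = 50000

  ; stage 4: the normal-mode calculation itself
  integrator  = nm
  nstlist     = 1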
------------------------------
Message: 2
Date: Thu, 24 Jul 2014 08:59:51 -0400
From: Justin Lemkul <jalemkul at vt.edu>
To: gmx-users at gromacs.org
Subject: Re: [gmx-users] Angle group
Message-ID: <53D10347.80008 at vt.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 7/24/14, 8:55 AM, Cyrus Djahedi wrote:
> I tried using g_angle with an index file defining the three atoms that make up the bond as "Group 12 ( O1_C1_C4) has 960 elements". I get:
>
> Group 0 ( System) has 27396 elements
> Group 1 ( Other) has 6768 elements
> Group 2 ( GL4b) has 352 elements
> Group 3 ( G14b) has 6048 elements
> Group 4 ( GL1b) has 368 elements
> Group 5 ( Water) has 20628 elements
> Group 6 ( SOL) has 20628 elements
> Group 7 ( non-Water) has 6768 elements
> Group 8 ( O1) has 320 elements
> Group 9 ( O4) has 16 elements
> Group 10 ( C1) has 320 elements
> Group 11 ( C4) has 320 elements
> Group 12 ( O1_C1_C4) has 960 elements
> Group 13 ( C1_O1_C4) has 960 elements
> Group 14 ( C4_C1_O1) has 960 elements
> Group 15 ( C1_C4_O1) has 960 elements
> Group 16 ( O1_C1_C4) has 960 elements
> Select a group: 12
> Selected 12: 'O1_C1_C4'
> Last frame 10000 time 10000.000
> Found points in the range from 5 to 43 (max 180)
> < angle > = 23.1856
> < angle^2 > = 537.601
> Std. Dev. = 0.170041
>
> I don't know exactly which angle it is referring to. The angle I'm looking for is formed at the O1 atom, flanked by the two carbon atoms, and should be around 118-120 degrees. As you can see from the index options, I tried defining the triplets in different orders; this made no difference, however. Any suggestions?
>
The values printed are an average over all triplets in the chosen index group.
I would think that order would absolutely matter here; check carefully how you
have created the groups. The angle formed by C1-O1-C4 must be different than
the angle of O1-C1-C4.
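To illustrate with made-up atom numbers: the vertex atom has to be the second
entry of each consecutive triplet in the group, so a group meant to measure the
angle at O1 would look something like

  [ C1_O1_C4 ]
  ; each triplet of numbers is one angle; the middle atom (O1) is the vertex
    25    27    43   121   123   139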
-Justin
--
==================================================
Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow
Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201
jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul
==================================================
------------------------------
Message: 3
Date: Thu, 24 Jul 2014 16:29:53 +0100
From: David de Sancho <daviddesancho at gmail.com>
To: gmx-users at gromacs.org
Subject: [gmx-users] continuation run segmentation fault
Message-ID:
<CALUsGp1wsySkZXLCK6jwRaavPPJB0K7YJyTn33P9HPo67f17bQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Dear all
I am having some trouble continuing some runs with Gromacs 4.5.5 on our
local cluster. Surprisingly, the same system ran smoothly before on the
same number of nodes and cores. Even more surprisingly, if I reduce the
number of nodes to 1, with its 12 processors, it runs again.
The script I am using to run the simulations looks something like this:
> # Set some Torque options: class name and max time for the job. Torque
> # developed from a program called OpenPBS, hence all the PBS references
> # in this file
> #PBS -l nodes=4:ppn=12,walltime=24:00:00
> source /home/dd363/src/gromacs-4.5.5/bin/GMXRC.bash
> application="/home/user/src/gromacs-4.5.5/bin/mdrun_openmpi_intel"
> options="-s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename"
>
> #! change the working directory (default is home directory)
> cd $PBS_O_WORKDIR
> echo Running on host `hostname`
> echo Time is `date`
> echo Directory is `pwd`
> echo PBS job ID is $PBS_JOBID
> echo This jobs runs on the following machines:
> echo `cat $PBS_NODEFILE | uniq`
> #! Run the parallel MPI executable
> #!export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/lib64:/usr/lib64"
> echo "Running mpiexec $application $options"
> mpiexec $application $options
The error messages I am getting look something like this:
> [compute-0-11:09645] *** Process received signal ***
> [compute-0-11:09645] Signal: Segmentation fault (11)
> [compute-0-11:09645] Signal code: Address not mapped (1)
> [compute-0-11:09645] Failing at address: 0x10
> [compute-0-11:09643] *** Process received signal ***
> [compute-0-11:09643] Signal: Segmentation fault (11)
> [compute-0-11:09643] Signal code: Address not mapped (1)
> [compute-0-11:09643] Failing at address: 0xd0
> [compute-0-11:09645] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
> [compute-0-11:09645] [ 1]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
> [0x2af2091443f9]
> [compute-0-11:09645] [ 2]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
> [0x2af209142963]
> [compute-0-11:09645] [ 3]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
> [0x2af20996e33c]
> [compute-0-11:09645] [ 4]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
> [0x2af20572cfa7]
> [compute-0-11:09645] [ 5]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
> [0x2af205219636]
> [compute-0-11:09645] [ 6]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2af20aa2259b]
> [compute-0-11:09645] [ 7]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2af20aa2a04b]
> [compute-0-11:09645] [ 8]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2af20aa22da9]
> [compute-0-11:09645] [ 9]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
> [0x2af205204dcc]
> [compute-0-11:09645] [10]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
> [0x2af205236f0c]
> [compute-0-11:09645] [11]
> /home/dd363/src/gromacs-4.5.5/lib/libgmx_mpi.so.6(gmx_setup_nodecomm+0x14b)
> [0x2af204b8ba6b]
> [compute-0-11:09645] [12]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(mdrunner+0x86c)
> [0x415aac]
> [compute-0-11:09645] [13]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(main+0x1928)
> [0x41d968]
> [compute-0-11:09645] [14] /lib64/libc.so.6(__libc_start_main+0xf4)
> [0x38d281d994]
> [compute-0-11:09643] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
> [compute-0-11:09643] [ 1]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
> [0x2b56aca403f9]
> [compute-0-11:09643] [ 2]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
> [0x2b56aca3e963]
> [compute-0-11:09643] [ 3]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
> [0x2b56ad26a33c]
> [compute-0-11:09643] [ 4]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
> [0x2b56a9028fa7]
> [compute-0-11:09643] [ 5]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
> [0x2b56a8b15636]
> [compute-0-11:09643] [ 6]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2b56ae31e59b]
> [compute-0-11:09643] [ 7]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2b56ae32604b]
> [compute-0-11:09643] [ 8]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
> [0x2b56ae31eda9]
> [compute-0-11:09643] [ 9]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
> [0x2b56a8b00dcc]
> [compute-0-11:09643] [10]
> /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
> [0x2b56a8b32f0c]
> [compute-0-11:09643] [11]
> /home/dd363/src/gromacs-4.5.5/lib/libgmx_mpi.so.6(gmx_setup_nodecomm+0x14b)
> [0x2b56a8487a6b]
> [compute-0-11:09643] [12]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(mdrunner+0x86c)
> [0x415aac]
> [compute-0-11:09643] [13]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(main+0x1928)
> [0x41d968]
> [compute-0-11:09643] [14] /lib64/libc.so.6(__libc_start_main+0xf4)
> [0x38d281d994]
> [compute-0-11:09643] [15]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(do_cg+0x189)
> [0x407449]
> [compute-0-11:09643] *** End of error message ***
> [compute-0-11:09645] [15]
> /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(do_cg+0x189)
> [0x407449]
> [compute-0-11:09645] *** End of error message ***
> [compute-0-13.local][[30524,1],19][btl_tcp_endpoint.c:456:mca_btl_tcp_endpoint_recv_blocking]
> recv(15) failed: Connection reset by peer (104)
> [compute-0-13.local][[30524,1],17][btl_tcp_endpoint.c:456:mca_btl_tcp_endpoint_recv_blocking]
> recv(15) failed: Connection reset by peer (104)
> [compute-0-12.local][[30524,1],29][btl_tcp_endpoint.c:456:mca_btl_tcp_endpoint_recv_blocking]
> recv(15) failed: Connection reset by peer (104)
A number of checks have been carried out. The continuation runs crash right
away. The segfaults have occurred on two different nodes, so bad compute
nodes are probably ruled out. The MPI library works fine on a number of
test programs, and there are no signs of system problems. On the other hand,
signal 11 means the process tried to access memory it should not have
access to.
Any ideas on what may be going wrong?
Thanks
David
------------------------------
Message: 4
Date: Thu, 24 Jul 2014 21:27:50 +0530
From: RINU KHATTRI <nickname.mittu at gmail.com>
To: gmx-users at gromacs.org
Subject: Re: [gmx-users] Error in system_inflate.gro coordinates does
not match
Message-ID:
<CAOEfx3+MSQyQL0p6TZQUpiiAf15Qwm5ZdF0Vkj-Ny3drnECTbQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hello everyone,
Thank you, Justin. I did the same. Up to minimization without the ligand, the
protein sits in the lipid and is centered, but I edited the box size
arbitrarily: I kept the x and y vectors as present in the POPC file and used
10.00000 for the z axis, so the protein and lipid overlap. I think this can
create a problem.
Any help would be appreciated.
On Wed, Jul 23, 2014 at 10:48 PM, Justin Lemkul <jalemkul at vt.edu> wrote:
>
>
> On 7/23/14, 12:12 PM, RINU KHATTRI wrote:
>
>> hello everyone
>>
>> thank you justin, but how can i increase the box size? i am using the box
>> vectors which are present in popc_whole.gro
>> how can i edit them?
>>
>
> editconf
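> For example, something like this (the box lengths below are just placeholders
> for whatever dimensions your membrane actually needs):
>
>   editconf -f popc_whole.gro -o popc_newbox.gro -box 6.5 6.5 10.0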
>
>
>> and one more problem: when i view it in vmd, my ligand is outside the
>> protein
>>
>
> Position the protein-ligand complex like you want before packing the
> lipids around the protein, remove the ligand, then assemble the membrane
> protein system. With strong restraints, the protein should not move, so
> you can just paste in the ligand coordinates afterwards. Then adjust the
> box and solvate.
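> As a rough sketch of the final solvation step (file names are placeholders;
> after pasting the ligand coordinates back into the .gro, update the atom count
> in its header by hand, adjust the box with editconf as above, then):
>
>   genbox -cp system_box.gro -cs spc216.gro -o system_solv.gro -p topol.top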
>
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==================================================
------------------------------
Message: 5
Date: Thu, 24 Jul 2014 13:09:21 -0700
From: Guilherme Duarte Ramos Matos <gduarter at uci.edu>
To: gromacs.org_gmx-users at maillist.sys.kth.se
Subject: Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 123, Issue
124: reply to message 1
Message-ID:
<CACmdn50O-A0JSYUqzvBh6VDCtpktO0YYBOp+=kJzj+zZiJFkaQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Thanks for the reply!
I actually managed to solve this issue. I was building the supercell
with Mercury, the Cambridge Crystallographic Database software, but I
was not aware of the connectivity issues that appeared when I built the
crystal from fragments of molecules. It was easily solved with a
different option in the packing/slicing utility.
Thanks!
~ Guilherme
*****************************************************
Guilherme D. R. Matos
Graduate Student at UC Irvine
Mobley Group
*****************************************************
On Wed, Jul 23, 2014 at 2:48 AM,
<gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
>
> Today's Topics:
>
> 1. Re: Molecular Solid PBC problem (Justin Lemkul)
> 2. about cos-accelation (Hyunjin Kim)
> 3. g_energy questions (Andy Chao)
> 4. Re: Error in system_inflate.gro coordinates does not match
> (RINU KHATTRI)
> 5. Angle group (Cyrus Djahedi)
> 6. Re: about cos-accelation (Dr. Vitaly Chaban)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 22 Jul 2014 20:15:23 -0400
> From: Justin Lemkul <jalemkul at vt.edu>
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Molecular Solid PBC problem
> Message-ID: <53CEFE9B.4010801 at vt.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>
>
> On 7/22/14, 7:53 PM, Guilherme Duarte Ramos Matos wrote:
> > Dear GROMACS user community,
> >
> > I'm working on molecular dynamics of molecular solids and I am having
> > trouble setting up the calculations.
> >
> > I got the crystal structure's pdb file from the Cambridge Database and used
> > editconf to generate the coordinate file. The topology file is really
> > simple: it just carries the hamiltonian of an Einstein crystal, that is,
> > harmonic potentials binding each atom of the molecule to its lattice
> > position. The relevant part of the mdp file is:
> >
> > ; NEIGHBORSEARCHING PARAMETERS
> > ; nblist update frequency
> > nstlist = 1
> > ; ns algorithm (simple or grid)
> > ns_type = grid
> > ; Periodic boundary conditions: xyz (default), no (vacuum)
> > ; or full (infinite systems only)
> > pbc = xyz
> > ; nblist cut-off
> > rlist = 1.0
> >
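> > (For reference, such harmonic tethers to lattice positions are commonly
> > expressed as position restraints in the topology; the force constants below
> > are arbitrary placeholders:)
> >
> >   [ position_restraints ]
> >   ; atom  funct    fcx      fcy      fcz   (kJ mol^-1 nm^-2)
> >       1      1    5000     5000     5000
> >       2      1    5000     5000     5000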
> > Unfortunately, after running grompp, I get the following warning:
> >
> > WARNING 1 [file molecule_ideal.top, line 351]:
> > 10116 non-matching atom names
> > atom names from molecule_ideal.top will be used
> > atom names from input.gro will be ignored
> >
> > The funny and worrying part of this problem is that all the atom types were
> > changed in the output of mdrun. The simulation just didn't crash because of
>
> As it should; grompp warned you that a huge number of atoms were out of order
> with respect to the topology, so the topology is used, and the identity and/or
> types of the atoms are changed accordingly.
>
> > the Hamiltonian used. I investigated a little bit and it seemed that
> > GROMACS was not able to connect the fragments in the wall to their
> > neighboring periodic copies. That happened because fragments were numbered
> > as distinct molecules. Check this small portion of the coordinate file:
> >
>
> How did you generate the original topology? The mismatch between coordinates
> and topology could also be causing issues with bonded geometry, because
> everything is likely to get scrambled.
>
> > 35RES C1 211 0.017 5.561 4.241
> > 35RES N1 212 0.033 5.362 4.363
> > 35RES O1 213 0.145 5.367 4.163
> > 35RES C2 214 0.074 5.421 4.245
> > 35RES H1 215 0.057 5.283 4.386
> > 35RES H3 216 0.087 5.628 4.238
> > 36RES C1 217 0.017 5.561 5.526
> > 36RES N1 218 0.033 5.362 5.648
> > 36RES O1 219 0.145 5.367 5.448
> > 36RES C2 220 0.074 5.421 5.530
> > 36RES H1 221 0.057 5.283 5.671
> > 36RES H3 222 0.087 5.628 5.523
> > 37RES C1 223 0.017 5.561 6.811
> > 37RES N1 224 0.033 5.362 6.933
> > 37RES O1 225 0.145 5.367 6.733
> > 37RES C2 226 0.074 5.421 6.815
> > 37RES H1 227 0.057 5.283 6.956
> > 37RES H3 228 0.087 5.628 6.808
> > 38RES C1 229 0.753 0.786 1.671
> > 38RES N1 230 0.770 0.587 1.793
> > 38RES O1 231 0.882 0.592 1.593
> > 38RES C2 232 0.811 0.646 1.675
> > 38RES O2 233 0.636 0.631 1.973
> > 38RES C3 234 0.687 0.665 1.868
> > 38RES C4 235 0.672 0.798 1.799
> > 38RES H1 236 0.794 0.508 1.816
> > 38RES H2 237 0.696 0.797 1.593
> > 38RES H3 238 0.824 0.852 1.668
> > 38RES H4 239 0.707 0.870 1.855
> > 38RES H5 240 0.579 0.817 1.779
> >
> > Molecule number 38 has 12 atoms and lies inside the walls, while molecules 35, 36
> > and 37 have 6 atoms each; they represent similar fragments along the wall but
> > are counted as isolated molecules.
> >
> > Does anyone have a suggestion to help me with this problem?
>
> If you can provide the exact details of what these molecules are and how you
> generated the topology, probably, but without that information it's a bit hard
> to suggest anything.
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==================================================
>
>
>
------------------------------
--
Gromacs Users mailing list
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.
End of gromacs.org_gmx-users Digest, Vol 123, Issue 133
*******************************************************