
NAMD

NAMD on GPU

1. SSH into the GPU cluster using the following command: ssh username@<GPU cluster address>.

2. By default, NAMD uses rsh to distribute tasks to the compute nodes, but rsh is disabled on the GPU cluster. Switch your environment from rsh to ssh by running the following two commands: export CONV_RSH=/usr/bin/ssh (for NAMD authentication) and export PVM_RSH=/usr/bin/ssh (for Torque authentication).
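These exports apply only to the current session. A minimal way to make them permanent, assuming a bash login shell:
-----------------------------------------------------------------------
# Append the ssh overrides to your shell startup file so they
# persist across logins:
echo 'export CONV_RSH=/usr/bin/ssh' >> ~/.bashrc
echo 'export PVM_RSH=/usr/bin/ssh' >> ~/.bashrc

# Apply them to the current session:
source ~/.bashrc
-----------------------------------------------------------------------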

3. Next, create a directory named namdtest in your home directory, /GPU/home/username, for example:
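-----------------------------------------------------------------------
mkdir -p /GPU/home/username/namdtest
-----------------------------------------------------------------------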

4. To run NAMD jobs over the Ethernet network, do the following:

4.1 cd to /GPU/home/username/namdtest and create the following example script file, named namd.moab:
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4 partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "++++++++++"
echo "host files is:"
echo " "
cat $PBS_NODEFILE
cp $PBS_NODEFILE $PBS_STEP_OUT.hostfile
echo " "
echo "++++++++++"

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT

-----------------------------------------------------------------------
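For reference, $PBS_NODEFILE lists each allocated node once per core, so with nodes=2:ppn=16 it contains 32 lines and nproc is set to 32. A sketch of its contents, assuming hypothetical node names gpu01 and gpu02:
-----------------------------------------------------------------------
$ cat $PBS_NODEFILE
gpu01
gpu01
...            # 16 lines per node, one line per core
gpu02
$ wc -l < $PBS_NODEFILE
32
-----------------------------------------------------------------------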
5. To run NAMD jobs over the InfiniBand network, do the following:

5.1 cd to /GPU/home/username/namdtest and create the following example script file, named namd.moab:
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4 partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "original machine file is:"
echo "++++++++++"
cat $PBS_NODEFILE
echo "++++++++++"
cat $PBS_NODEFILE | sed -e 's/.*/&-ib/' > $PBS_STEP_OUT.hostfile
echo "modified machine file is:"
echo "++++++++++"
cat $PBS_STEP_OUT.hostfile

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------
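The sed command appends -ib to every hostname, producing a machine file that addresses each node's InfiniBand interface (this assumes the interfaces are named hostname-ib). A sketch of the transformation, with hypothetical node names and per-core duplicate lines omitted:
-----------------------------------------------------------------------
$ cat $PBS_NODEFILE
gpu01
gpu02
$ sed -e 's/.*/&-ib/' $PBS_NODEFILE
gpu01-ib
gpu02-ib
-----------------------------------------------------------------------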
6. Save the file and run the command: msub namd.moab. NOTE: In the script file, users can specify either partition=c2070 or partition=c1060.
7. To check the status of the job, type the command: showq.
8. To check the status of the nodes, type the command: pbsnodes. Note: this command displays the available nodes and the GPUs within each node.


EMBOSS with GCC

EMBOSS on GPU

1. ssh into the GPU cluster using the following: ssh -X username@<GPU cluster address>.
2. In your home directory, create a file named .embossrc.
3. Insert the following line in the .embossrc file: INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc.
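The file can be created directly from the shell, for example:
-----------------------------------------------------------------------
cat > ~/.embossrc <<'EOF'
INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc
EOF
-----------------------------------------------------------------------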
4. Then export the following paths in your user environment (.bashrc or .profile file):

export LD_LIBRARY_PATH=/GPU/opt/emboss-gcc-6.3/lib:$LD_LIBRARY_PATH
export PATH=/GPU/opt/emboss-gcc-6.3/bin:$PATH
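To verify that the GCC build is the one found first in your PATH, a quick check (embossversion is a standard EMBOSS utility that prints the installed version):
-----------------------------------------------------------------------
which wossname     # should print /GPU/opt/emboss-gcc-6.3/bin/wossname
embossversion      # prints the installed EMBOSS version
-----------------------------------------------------------------------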

5. To list the EMBOSS and Embassy sub-packages, type the command: wossname -auto -alpha. The command will display more than 200 programs, as follows:
----------------------------------------------------------------------------------
[username@login ~]$ wossname -auto -alpha
ALPHABETIC LIST OF PROGRAMS
aaindexextract Extract amino acid property data from AAINDEX
abiview Display the trace in an ABI sequencer file
:
:
:
yank Add a sequence reference (a full USA) to a list file
[username@login ~]$
-----------------------------------------------------------------------------------
6. To search for information about a specific program, type the command: tfm -program programname.
7. To list the configured database names, type the command: showdb. It will display the following:
----------------------------------------------------------------------------
Display information on configured databases
Name       Type        ID   Qry  All   Comment
qapblast   Protein     OK   OK   OK    Blast swissnew
:
:
tgenbank   Nucleotide  OK   OK   OK    GenBank in native...
[username@login ~]$
----------------------------------------------------------------------------
8. Create the example script below and name the file emboss.moab:
---------------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=1:ppn=8:gpus=4 partition=c1060
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/embosstest/out
#MSUB -e /GPU/home/username/embosstest/err
#MSUB -d /GPU/home/username/embosstest
#MSUB -mb
##### Running commands
needle tembl:z11115 tembl:z11115 -out all.needle -auto
---------------------------------------------------------------------------
Note: the above script submits a job that runs a needle alignment of the database entry tembl:z11115 against itself and writes the result to all.needle.
9. Submit the job using the command: msub emboss.moab.
10. An output file named all.needle is then generated in the directory /GPU/home/username/embosstest.
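Once the job completes, the output can be inspected with standard shell tools, for example:
-----------------------------------------------------------------------
head /GPU/home/username/embosstest/all.needle
-----------------------------------------------------------------------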
11. To check the status of the job, type: showq.
12. To check status of the nodes, type: pbsnodes.
Caution: users should not attempt to read databases on the login node, as the resulting processes can hang the login node and prevent users from logging in to the cluster.
For more information about EMBOSS, visit http://emboss.sourceforge.net/.


EMBOSS with Intel

EMBOSS on GPU

1. ssh into the GPU cluster using the following: ssh -X username@<GPU cluster address>.
2. In your home directory, create a file named .embossrc.
3. Insert the following line in the .embossrc file: INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc.
4. Then export the following paths in your user environment (.bashrc or .profile file):

export LD_LIBRARY_PATH=/GPU/opt/emboss-intel-new/lib:$LD_LIBRARY_PATH
export PATH=/GPU/opt/emboss-intel-new/bin:$PATH
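To verify that the Intel build is the one found first in your PATH, a quick check (embossversion is a standard EMBOSS utility that prints the installed version):
-----------------------------------------------------------------------
which wossname     # should print /GPU/opt/emboss-intel-new/bin/wossname
embossversion      # prints the installed EMBOSS version
-----------------------------------------------------------------------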

5. To list the EMBOSS and Embassy sub-packages, type the command: wossname -auto -alpha. The command will display more than 200 programs, as follows:
----------------------------------------------------------------------------------
[username@login ~]$ wossname -auto -alpha
ALPHABETIC LIST OF PROGRAMS
aaindexextract Extract amino acid property data from AAINDEX
abiview Display the trace in an ABI sequencer file
:
:
:
yank Add a sequence reference (a full USA) to a list file
[username@login ~]$
-----------------------------------------------------------------------------------
6. To search for information about a specific program, type the command: tfm -program programname.
7. To list the configured database names, type the command: showdb. It will display the following:
----------------------------------------------------------------------------
Display information on configured databases
Name       Type        ID   Qry  All   Comment
qapblast   Protein     OK   OK   OK    Blast swissnew
:
:
tgenbank   Nucleotide  OK   OK   OK    GenBank in native...
[username@login ~]$
----------------------------------------------------------------------------
8. Create the example script below and name the file emboss.moab:
---------------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=1:ppn=8:gpus=4 partition=c1060
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/embosstest/out
#MSUB -e /GPU/home/username/embosstest/err
#MSUB -d /GPU/home/username/embosstest
#MSUB -mb
##### Running commands
needle tembl:z11115 tembl:z11115 -out all.needle -auto
---------------------------------------------------------------------------
Note: the above script submits a job that runs a needle alignment of the database entry tembl:z11115 against itself and writes the result to all.needle.
9. Submit the job using the command: msub emboss.moab.
10. An output file named all.needle is then generated in the directory /GPU/home/username/embosstest.
11. To check the status of the job, type: showq.
12. To check status of the nodes, type: pbsnodes.
Caution: users should not attempt to read databases on the login node, as the resulting processes can hang the login node and prevent users from logging in to the cluster.
For more information about EMBOSS, visit http://emboss.sourceforge.net/.


Gaussian at CHPC

  • NOTE: You should always run your jobs from scratch5.

We have two versions of Gaussian 09 installed at CHPC. Here is an example of how to access them:

username@login01:~/scratch5 $ module avail                   ### list available modules
username@login01:~/scratch5 $ module add gaussian/g09.A01    ### load g09 version A01 (older) or
username@login01:~/scratch5 $ module add gaussian/g09.D01    ### load g09 version D01 (new)
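To confirm which version is currently loaded, list the loaded modules:

username@login01:~/scratch5 $ module list                    ### show currently loaded modules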

Example Moab job script for g09.A01

#!/bin/bash
#MSUB -l nodes=1:ppn=12
#MSUB -l feature=dell
#MSUB -l walltime=2:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /export/home/username/scratch5/log.out
#MSUB -e /export/home/username/scratch5/log.err
#MSUB -d /export/home/username/scratch5
 
source /opt/gridware/applications/gaussian/old/g09/g09setup 
source /etc/profile.d/modules.sh
module add gaussian/g09.A01
g09 < input.com > output.log

Example Moab job script for g09.D01

#!/bin/bash
#MSUB -l nodes=1:ppn=12
#MSUB -l feature=dell
#MSUB -l walltime=2:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /export/home/username/scratch5/log.out
#MSUB -e /export/home/username/scratch5/log.err
#MSUB -d /export/home/username/scratch5
 
   
source /opt/gridware/applications/gaussian/g09/g09setup
source /etc/profile.d/modules.sh
module add gaussian/g09.D01
g09 < input.com > output.log

Both examples above use the following example of a Gaussian input file (input.com):

%nprocshared=12
%nproclinda=1
#P HF/6-31G*  IOP(6/33=2,6/41=10,6/42=17) SCF=Tight Pop=MK

Title Card Required

0 1
 C                 -3.19550100    0.11344600   -0.18511100
 O                 -3.05859100    0.83554400   -1.15941100
 N                 -2.14200500   -0.25446800    0.60848900
 H                 -2.32841200   -0.78506500    1.44398900
 C                 -0.75900100    0.12341400    0.36048900
 C                  0.14379200   -0.42259800    1.48678900
 C                  0.59047400   -1.84630300    1.03238900
 C                  1.59689900    0.10938400    1.28248900
 C                  0.08697100   -2.05959700   -0.42431100
 C                  2.05278100   -1.31402200    0.82788900
 C                 -0.14011000   -0.59319400   -0.87441100
 C                  1.35866600   -2.48501300   -1.17731100
 C                  1.32989700   -0.05621300   -1.07501100
 C                  1.66721100    0.99508300    0.02128900
 C                  2.20428100   -1.28752400   -0.71551100
 C                  0.60162500    2.08159700    0.02358900
 N                 -0.63668200    1.58111300    0.28218900
 O                  0.86964000    3.24909300   -0.21751100
 H                 -1.43757500    2.10622300   -0.04521100
 O                  2.93691800    1.56466600   -0.15981100
 H                  2.76333000    2.51016900   -0.30861100
 H                 -0.28050600   -0.28619200    2.48278900
 H                  0.43696300   -2.68340100    1.71298900
 H                  2.07960500    0.56667700    2.14378900
 H                 -0.78403700   -2.70728500   -0.52611100
 H                  2.86017500   -1.79913300    1.37408900
 H                 -0.75990800   -0.47138600   -1.76101100
 H                  1.22276500   -2.52601100   -2.26181100
 H                  1.75375300   -3.44771800   -0.83891100
 H                  1.50280200    0.34398500   -2.07391100
 H                  3.22968200   -1.22473700   -1.07621100
 C                 -4.55170800   -0.42883700    0.22928900
 H                 -5.23089700    0.41157200    0.38338900
 H                 -4.52881600   -1.04383700    1.13088900
 H                 -4.94951500   -1.02253200   -0.59631100
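
Note that Gaussian requires a blank line after the coordinates to terminate the molecule specification. Either job script can then be submitted and monitored as in the sections above (the script name g09.moab is a hypothetical choice):

msub g09.moab                ### submit the job; Moab prints the job ID
showq                        ### check the status of the job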

 

