
EMBOSS with Intel

EMBOSS on GPU

1. ssh into the GPU cluster with X forwarding enabled: ssh -X username@<GPU cluster login address>.
2. In your home directory, create a file named .embossrc.
3. Insert the following line in .embossrc:
INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc
4. Then export the following paths in your user environment (.bashrc or .profile file); a command-line sketch of steps 2-4 follows the export lines:

export LD_LIBRARY_PATH=/GPU/opt/emboss-intel-new/lib:$LD_LIBRARY_PATH
export PATH=/GPU/opt/emboss-intel-new/bin:$PATH
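
For example, steps 2 to 4 can be done in one go from the command line; this is only a sketch and assumes a bash login shell:
---------------------------------------------------------------------------
# point EMBOSS at the cluster-wide configuration (step 3)
echo "INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc" > ~/.embossrc
# make the EMBOSS binaries and libraries visible in future sessions (step 4)
echo 'export LD_LIBRARY_PATH=/GPU/opt/emboss-intel-new/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export PATH=/GPU/opt/emboss-intel-new/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
---------------------------------------------------------------------------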

5. To list the EMBOSS and EMBASSY programs, type the command: wossname -auto -alpha.
The command will display more than 200 programs, as follows:
----------------------------------------------------------------------------------
$ wossname -auto -alpha
ALPHABETIC LIST OF PROGRAMS
aaindexextract Extract amino acid property data from AAINDEX
abiview Display the trace in an ABI sequencer file
:
:
:
yank Add a sequence reference (a full USA) to a list file
$
-----------------------------------------------------------------------------------
6. To display the full documentation for a specific program, type the command: tfm -program
programname, for example:
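For instance, to read the full documentation for the needle program used later in this guide, or to search the one-line program descriptions by keyword:
---------------------------------------------------------------------------
tfm -program needle        # show the full manual page for needle
wossname alignment         # list programs whose one-line description mentions "alignment"
---------------------------------------------------------------------------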
7. To list the configured databases, type the command: showdb; it will display the following (a quick retrieval check is sketched after the listing):
----------------------------------------------------------------------------
Display information on configured databases
Name         Type        ID   Qry  All  Comment
qapblast     Protein     OK   OK   OK   Blast swissnew
:
:
tgenbank     Nucleotide  OK   OK   OK   GenBank in native...
$
-----------------------------------------------------------------------------------
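As a quick check that the configured databases are usable, a single entry can be retrieved with seqret (a minimal sketch reusing the tembl entry from step 8; run it inside a job rather than on the login node, per the caution below):
---------------------------------------------------------------------------
seqret tembl:z11115 -outseq z11115.fasta -auto    # fetch one entry from the tembl test database
---------------------------------------------------------------------------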
8. Create the example script below and save it as emboss.moab:
---------------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=1:ppn=8:gpus=4 partition=c1060
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/embosstest/out
#MSUB -e /GPU/home/username/embosstest/err
#MSUB -d /GPU/home/username/embosstest
#MSUB -mb
##### Running commands
needle tembl:z11115 tembl:z11115 -out all.needle -auto
---------------------------------------------------------------------------
Note: the script above submits a job that reads the entry z11115 from the tembl test database and aligns it against itself with needle.
9. Submit the job using the command: msub emboss.moab.
10. An output file named all.needle is then generated in the directory
/GPU/home/username/embosstest.
11. To check the status of the job, type: showq.
12. To check the status of the nodes, type: pbsnodes.
Caution: do not read databases on the login node; such processes hang the login node
and prevent other users from logging in to the cluster. If you need to test commands
interactively, request an interactive job instead (see the sketch below).
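One way to test EMBOSS commands without using the login node is to request an interactive job through Moab; the resource request below is only an example:
---------------------------------------------------------------------------
msub -I -l nodes=1:ppn=8,walltime=01:00:00    # interactive session on a compute node
---------------------------------------------------------------------------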
For more information about EMBOSS, visit http://emboss.sourceforge.net/.


Gaussian at CHPC

  • NOTE: You should always run your jobs from scratch5.

We have two versions of Gaussian 09 installed at CHPC. Here is an example of how to access them:

username@login01:~/scratch5 $ module avail                   ### list available modules
username@login01:~/scratch5 $ module add gaussian/g09.A01    ### load g09 version A01 (older) or
username@login01:~/scratch5 $ module add gaussian/g09.D01    ### load g09 version D01 (new)
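
To confirm which version is active in your session, the usual module and shell checks apply:

username@login01:~/scratch5 $ module list                    ### show currently loaded modules
username@login01:~/scratch5 $ which g09                      ### path of the g09 executable now on PATH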

Example Moab job script for g09.A01

#!/bin/csh
#MSUB -l nodes=1:ppn=12
#MSUB -l feature=dell
#MSUB -l walltime=2:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /export/home/username/scratch5/log.out
#MSUB -e /export/home/username/scratch5/log.err
#MSUB -d /export/home/username/scratch5
 
source /opt/gridware/applications/gaussian/old/g09/g09setup 
source /etc/profile.d/modules.sh
module add gaussian/g09.A01
g09 < input.com > output.log

Example Moab job script for g09.D01

#!/bin/csh
#MSUB -l nodes=1:ppn=12
#MSUB -l feature=dell
#MSUB -l walltime=2:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /export/home/username/scratch5/log.out
#MSUB -e /export/home/username/scratch5/log.err
#MSUB -d /export/home/username/scratch5
 
   
source /opt/gridware/applications/gaussian/g09/g09setup
source /etc/profile.d/modules.sh
module add gaussian/g09.D01
g09 < input.com > output.log
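
Either script is submitted through Moab in the usual way; the file name g09.moab below is only an example:

username@login01:~/scratch5 $ msub g09.moab                  ### submit the job script
username@login01:~/scratch5 $ showq -u username              ### check the status of your jobs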

Both job scripts above use the following Gaussian input file (note that %nprocshared=12 matches the ppn=12 requested in the job scripts):

%nprocshared=12
%nprocl=1
#P HF/6-31G*  IOP(6/33=2,6/41=10,6/42=17) SCF=Tight Pop=MK       
Title Card Required
0 1
 C                 -3.19550100    0.11344600   -0.18511100
 O                 -3.05859100    0.83554400   -1.15941100
 N                 -2.14200500   -0.25446800    0.60848900
 H                 -2.32841200   -0.78506500    1.44398900
 C                 -0.75900100    0.12341400    0.36048900
 C                  0.14379200   -0.42259800    1.48678900
 C                  0.59047400   -1.84630300    1.03238900
 C                  1.59689900    0.10938400    1.28248900
 C                  0.08697100   -2.05959700   -0.42431100
 C                  2.05278100   -1.31402200    0.82788900
 C                 -0.14011000   -0.59319400   -0.87441100
 C                  1.35866600   -2.48501300   -1.17731100
 C                  1.32989700   -0.05621300   -1.07501100
 C                  1.66721100    0.99508300    0.02128900
 C                  2.20428100   -1.28752400   -0.71551100
 C                  0.60162500    2.08159700    0.02358900
 N                 -0.63668200    1.58111300    0.28218900
 O                  0.86964000    3.24909300   -0.21751100
 H                 -1.43757500    2.10622300   -0.04521100
 O                  2.93691800    1.56466600   -0.15981100
 H                  2.76333000    2.51016900   -0.30861100
 H                 -0.28050600   -0.28619200    2.48278900
 H                  0.43696300   -2.68340100    1.71298900
 H                  2.07960500    0.56667700    2.14378900
 H                 -0.78403700   -2.70728500   -0.52611100
 H                  2.86017500   -1.79913300    1.37408900
 H                 -0.75990800   -0.47138600   -1.76101100
 H                  1.22276500   -2.52601100   -2.26181100
 H                  1.75375300   -3.44771800   -0.83891100
 H                  1.50280200    0.34398500   -2.07391100
 H                  3.22968200   -1.22473700   -1.07621100
 C                 -4.55170800   -0.42883700    0.22928900
 H                 -5.23089700    0.41157200    0.38338900
 H                 -4.52881600   -1.04383700    1.13088900
 H                 -4.94951500   -1.02253200   -0.59631100

 


DL_poly

DL_Poly_2.18 up to 3.10  (Sun Microsystems)

###These lines are for Moab
#MSUB -l nodes=5:ppn=12
#MSUB -l partition=dell|westmere
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /lustre/SCRATCH5/users/username/work/stdout.out
#MSUB -e /lustre/SCRATCH5/users/username/work/stderr.err
#MSUB -d /lustre/SCRATCH5/users/username/work
#MSUB -mb
#MSUB -M <your email address>
##### Running commands
exe=/opt/gridware/applications/dlpoly/DLPOLY_3.09.Y
nproc=`cat $PBS_NODEFILE | wc -l`      # number of cores allocated to the job
mpirun -np $nproc $exe                 # run DL_POLY across all allocated cores
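
A minimal sketch of running this script, assuming it is saved as dlpoly.moab and that the DL_POLY input files are already in the working directory:

cd /lustre/SCRATCH5/users/username/work
ls CONTROL CONFIG FIELD          # DL_POLY reads its input from these files in the working directory
msub dlpoly.moab                 # submit the job
showq -u username                # check the job status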


Sun Compilers

1. Sun Compilers

To use the Sun compilers, ClusterTools, and the Intel compilers, run the following on the command line:

user@login02:~> module add sunstudio

user@login02:~> module add clustertools

user@login02:~> module add intel-XE/11.1    # or intel-XE/12.0, or intel-XE/13.0

These commands add the Sun compilers and ClusterTools (including an MPI library built to run over InfiniBand and the MPI compiler wrappers) to your path. A short compile-and-run sketch follows.
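
As a sketch, compiling and test-running a small MPI program with the Sun wrappers might look like this (hello.c and hello.f90 are placeholders for your own source files; production runs should be submitted through Moab):

user@login02:~> module add sunstudio clustertools
user@login02:~> mpicc -O2 hello.c -o hello_c             # Sun C MPI wrapper
user@login02:~> mpif90 -O2 hello.f90 -o hello_f          # Sun Fortran 90 MPI wrapper
user@login02:~> mpirun -np 4 ./hello_c                   # small local test run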

Path to Sun MPI compilers:

Code Name   Directory                                            Notes
mpicc       /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpicc     MPI Sun C compiler
mpicxx      /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpicxx    MPI Sun C++ compiler
mpif77      /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpif77    MPI Sun Fortran 77 compiler
mpif90      /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpif90    MPI Sun Fortran 90 compiler

 

2. GNU Compilers

Path to GNU compilers:

gcc: /usr/bin/gcc

gfortran: /usr/bin/gfortran

Path to GNU MPI compilers:

Code Name   Directory                                            Notes
mpicc       /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpicc     MPI gcc C compiler
mpicxx      /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpicxx    MPI g++ C++ compiler
mpif77      /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpif77    MPI gfortran Fortran 77 compiler
mpif90      /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpif90    MPI gfortran Fortran 90 compiler
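
The GNU wrappers can also be called by their full path without loading the sunstudio module, for example (mycode.f90 is a placeholder for your own source file):

user@login02:~> /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpif90 -O2 mycode.f90 -o mycode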

 

3. Intel Compilers

Path to Intel MPI compilers:

Code Name   Directory                                              Notes
mpicc       /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpicc     MPI icc C compiler
mpicxx      /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpicxx    MPI icpc C++ compiler
mpif77      /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpif77    MPI ifort Fortran 77 compiler
mpif90      /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpif90    MPI ifort Fortran 90 compiler

 

