
DL_POLY

DL_POLY 2.18 up to 3.10 (SUN cluster)

###These lines are for Moab
#MSUB -l nodes=5:ppn=12
#MSUB -l partition=dell|westmere
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /lustre/SCRATCH5/users/username/work/stdout.out
#MSUB -e /lustre/SCRATCH5/users/username/work/stderr.err
#MSUB -d /lustre/SCRATCH5/users/username/work
#MSUB -M username@example.com
##### Running commands
exe=/opt/gridware/applications/dlpoly/DLPOLY_3.09.Y
nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -np $nproc $exe
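
The script above can then be handed to the scheduler with msub (a minimal sketch; the filename dlpoly.msub is only an illustration):

msub dlpoly.msub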


Sun Compilers

1. Sun Compilers

To use the Sun compilers, ClusterTools and the Intel compilers, run the following on the command line:

user@login02:~> module add sunstudio

user@login02:~> module add clustertools

user@login02:~> module add intel-XE/11.1   (or intel-XE/12.0, or intel-XE/13.0)

These commands add the Sun compilers and ClusterTools (including an MPI library built to run over the InfiniBand interconnect, and the MPI compiler wrappers) to your path.
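
To confirm that the modules have taken effect, the loaded modules and the compiler wrappers found on your path can be checked with standard commands (a quick sketch):

user@login02:~> module list

user@login02:~> which mpicc mpif90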

Path to SUN MPI compilers:

Code Name    Directory                                             Notes
mpicc        /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpicc      MPI Sun C compiler
mpicxx       /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpicxx     MPI Sun C++ compiler
mpif77       /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpif77     MPI Sun Fortran 77 compiler
mpif90       /opt/gridware/sun-hpc-ct-8.2-Linux-sun/bin/mpif90     MPI Sun Fortran 90 compiler
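
As an illustration, a simple MPI program could be compiled with these wrappers roughly as follows (a sketch; hello.c and hello.f90 are hypothetical source files):

mpicc -o hello_c hello.c          # MPI C program built with the Sun C compiler
mpif90 -o hello_f90 hello.f90     # MPI Fortran 90 program built with the Sun Fortran compiler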

 

2. GNU Compilers

Path to GNU compilers:

gcc: /usr/bin/gcc

gfortran: /usr/bin/gfortran

Path to GNU MPI compilers:

Code Name    Directory                                             Notes
mpicc        /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpicc      MPI gcc C compiler
mpicxx       /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpicxx     MPI g++ C++ compiler
mpif77       /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpif77     MPI gfortran Fortran 77 compiler
mpif90       /opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpif90     MPI gfortran Fortran 90 compiler
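
Because the Sun and GNU wrappers share the same names, the GNU build can be selected explicitly by its full path (a sketch; hello.c is a hypothetical source file):

/opt/gridware/sun-hpc-ct-8.2-Linux-gnu/bin/mpicc -o hello_gnu hello.c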

 

3. Intel Compilers

Path to Intel MPI compilers:

Code Name    Directory                                             Notes
mpicc        /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpicc    MPI icc C compiler
mpicxx       /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpicxx   MPI icpc C++ compiler
mpif77       /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpif77   MPI ifort Fortran 77 compiler
mpif90       /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpif90   MPI ifort Fortran 90 compiler
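
Likewise, after loading one of the intel-XE modules listed above, the Intel build of a wrapper can be invoked by its full path (a sketch; hello.f90 is a hypothetical source file):

user@login02:~> module add intel-XE/13.0

user@login02:~> /opt/gridware/sun-hpc-ct-8.2-Linux-intel/bin/mpif90 -o hello_intel hello.f90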

 


FAQ

SUN Cluster

Q: How do I log in to the SUN cluster?
A: Logging in - via Secure Shell

Q: How do I change my password?
A: Logging in - Changing your password

Q: What are the compilers on SUN?
A: Compilers on SUN

Q: How do I submit/run jobs?
A: Submit/Run on Sun


Running Jobs

PBS Workload Manager

All jobs on the GPU and SUN clusters are scheduled by PBS Pro.

How to submit on the GPU cluster:

1. Compile your code.

2. Run your submit script (example scripts are available for customization).

Please note that you need an MPI implementation in order to run an MPI program. The system-installed MPI is under /GPU/opt/open-mpi-new/.

Export the MPI paths using the following commands, or add these lines to your .profile:

 

1. export PATH=/GPU/opt/open-mpi-new/bin:$PATH

2. export LD_LIBRARY_PATH=/GPU/opt/open-mpi-new/lib:$LD_LIBRARY_PATH
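
After exporting these paths, it is worth confirming that the intended installation is picked up (a quick sketch; the expected output assumes the Open MPI installation noted above):

which mpirun       # should point to /GPU/opt/open-mpi-new/bin/mpirun
mpirun --version   # prints the version of the installed Open MPI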

 

Partitions available on GPU:

1. C2070

2. C1060

 

Moab Job Submit:

msub scriptname -l feature=feature-name

This allows users to submit jobs directly to Moab.
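
For example, to request one of the GPU partitions listed above (the script name gpu_job.sh is hypothetical):

msub gpu_job.sh -l feature=C2070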


 


Moab job submit on SUN:

1. Compile your code (see Compilers on SUN above).

2. Run your submit script (see the example DL_POLY script above, which can be customized).

Partitions available on SUN:

1. nehalem

2. westmere

3. dell

4. sparc

5. test

6. viz

Moab Job Submit:

msub scriptname -l feature=feature-name

This allows users to submit jobs directly to Moab.
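
For example, to target the westmere partition with the DL_POLY script shown earlier (the filename dlpoly.msub is hypothetical):

msub dlpoly.msub -l feature=westmere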



How to cancel jobs

To cancel jobs on the GPU and SUN clusters:

mjobctl -c jobid

This selectively cancels the specified job(s) (active, idle, or non-queued) from the queue.
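
The job id can be found by listing your jobs in the Moab queue first (a sketch; the job id 12345 is hypothetical):

showq -u $USER        # list your jobs and their job ids
mjobctl -c 12345      # cancel the job with that id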


Debugging

For debugging MPI programs, the following wrapper scripts are available:

mpirun_dbg.dbx, mpirun_dbg.ddd, mpirun_dbg.gdb


Monitoring

For monitoring on the nodes, use one of:

  • nmon
  • vmstat
  • top
  • xloadl (X11)

and, of course, ps and free.
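
For example, on a compute node (a sketch using the standard options of these tools):

vmstat 5 3    # three samples of CPU, memory and I/O activity, five seconds apart
free -m       # memory usage in megabytes
top           # interactive view of the busiest processes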



