
HPC School Application

The CHPC invites applications from suitably qualified candidates to attend the 2013 HPC School, whose purpose is to introduce South African students to the fundamentals of high performance computing.

The HPC School is aimed at recent B.Sc. (Hons) or B.Eng. graduates, and new M.Sc. or Ph.D. students in the fields of computational chemistry, applied mathematics, physics, computational biology, bioinformatics, computer science, engineering or related subjects with a strong computing content.

The course will cover the concepts and theory of parallel computers, and programming for parallel systems with MPI, OpenMP and CUDA, using the C, Fortran or Python programming languages.

Students need to have second-year mathematics or applied mathematics (or equivalent) and programming experience in a high-level language: C, Fortran or Python.

Students who have a strong background in numerical methods and GNU Octave or Matlab (or Maple or Mathematica), or experience with another programming language (e.g. Java), may also qualify.

The HPC School will run from Monday 1 July to Saturday 6 July.

There are no fees for successful applicants. The CHPC will cover the costs of accommodation during the HPC School as well as local return air travel for students from outside the Gauteng province as needed.

Eligible applicants must be registered at a South African university in 2013, or be accepted for graduate study at a South African university in 2014. Proof of registration or acceptance must be provided, together with a letter of recommendation from your supervisor.

Course content will assume a reasonable background in mathematics (at least multivariate calculus and linear algebra), programming ability in a high-level language (C, Fortran, Python, or similar), and second-year study in at least one of Physics, Applied Mathematics, Mathematics, Computer Science, Statistics, or Engineering. A full academic transcript must be attached to your application.

No prior background in HPC will be assumed. Interactive lectures and computer tutorials will introduce the students to a range of key aspects of HPC and further illustrate how these tools are currently being applied to address research problems.

Transport, accommodation, and full board will be provided. Owing to budgetary constraints, only limited places are available for suitably qualified students.

CLOSING DATE FOR APPLICATIONS:

09h00 Friday 7 June 2013

Should you wish to participate, please complete the application form and email it to the CHPC application address (obscured by spam protection) before the closing date. Successful candidates will be notified by Tuesday 11 June 2013.

The CHPC invites all talented students to submit their applications. As a publicly funded institution, the CHPC supports the transformation of South Africa, and the workshop organisers therefore strongly encourage applications from students from previously disadvantaged backgrounds. For general enquiries, please email the CHPC enquiries address (obscured by spam protection).

Download the application form in OpenOffice ODT format here.

Download the application form in Word DOC format here.


HPC School 2013

HPC Winter School - July 2013

More info coming soon.


NAMD

NAMD on GPU

1. SSH into the GPU cluster using the following command: ssh username@<GPU cluster address> (the address is obscured by spam protection).

2. By default, NAMD is configured to use rsh to distribute tasks to the compute nodes. On the GPU cluster, rsh is disabled, so users need to switch their environment from rsh to ssh by typing the following commands: export CONV_RSH=/usr/bin/ssh (for NAMD authentication) and export PVM_RSH=/usr/bin/ssh (for Torque authentication).

3. Next, create a directory named namdtest under /GPU/home/username; a consolidated example of steps 1 to 3 is shown below.
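As a rough sketch, steps 1 to 3 amount to the following shell session (the cluster address is a placeholder, since it is obscured above):
-----------------------------------------------------------------------
ssh username@<GPU cluster address>      # step 1: log in to the GPU cluster
export CONV_RSH=/usr/bin/ssh            # step 2: make NAMD/charmrun use ssh
export PVM_RSH=/usr/bin/ssh             # step 2: make Torque use ssh
mkdir -p /GPU/home/username/namdtest    # step 3: working directory for the test
-----------------------------------------------------------------------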

4. To run NAMD jobs over the Ethernet network, do the following:

4.1 cd to /GPU/home/username/namdtest and create the following example script file, named namd.moab:
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "++++++++++"
echo "host file is:"
echo " "
cat $PBS_NODEFILE
cp $PBS_NODEFILE $PBS_STEP_OUT.hostfile
echo " "
echo "++++++++++"

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
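# launch namd2 via charmrun with one process per allocated core ($nproc is
# the line count of the node file) on the ApoA1 benchmark input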
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT

-----------------------------------------------------------------------
5. To run NAMD jobs over the Infiniband network, do the following:

5.1 cd to /GPU/home/username/namdtest and create the following example script file, named namd.moab:
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "original machine file is:"
echo "++++++++++"
cat $PBS_NODEFILE
echo "++++++++++"
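# append "-ib" to every hostname so that the modified machine file refers to
# the nodes' Infiniband interfaces rather than their Ethernet ones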
cat $PBS_NODEFILE | sed -e 's/.*/&-ib/' > $PBS_STEP_OUT.hostfile
echo "modified machine file is:"
echo "++++++++++"
cat $PBS_STEP_OUT.hostfile

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------
6. Save the file and run the command: msub namd.moab. NOTE: In the script file, you can use either partition=c2070 or partition=c1060.
7. To check the status of the job, type the command: showq.
8. To check the status of the nodes, type the command: pbsnodes. Note: this command displays the available nodes and the GPUs within each node.
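Putting steps 6 to 8 together, a typical submit-and-monitor session might look like this (output not shown):
-----------------------------------------------------------------------
cd /GPU/home/username/namdtest
msub namd.moab      # step 6: submit the job; msub prints the job ID
showq               # step 7: check the status of the job in the queue
pbsnodes            # step 8: list the nodes and the GPUs available on each
-----------------------------------------------------------------------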


EMBOSS with GCC

EMBOSS on GPU

1. SSH into the GPU cluster using the following command: ssh -X username@<GPU cluster address> (the address is obscured by spam protection).
2. In your user home directory, create a file named .embossrc.
3. Insert the following line into .embossrc: INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc
4. Then export the following paths in your user environment (.bashrc or .profile file):

export LD_LIBRARY_PATH=/GPU/opt/emboss-gcc-6.3/lib:$LD_LIBRARY_PATH
export PATH=/GPU/opt/emboss-gcc-6.3/bin:$PATH
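
As a sketch, steps 2 to 4 can be carried out in one shell session as follows (appending the exports to ~/.bashrc is just one way of setting the environment):
----------------------------------------------------------------------------
# steps 2 and 3: create .embossrc in the home directory, pointing at the test configuration
echo 'INCLUDE /GPU/opt/emboss-intel/EMBOSS-6.3.1/test/.embossrc' > ~/.embossrc
# step 4: make the EMBOSS binaries and libraries visible in the environment
echo 'export LD_LIBRARY_PATH=/GPU/opt/emboss-gcc-6.3/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export PATH=/GPU/opt/emboss-gcc-6.3/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
----------------------------------------------------------------------------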

5. To list the EMBOSS and Embassy sub-packages, type the command: wossname -auto -alpha. The command will display more than 200 programs, as follows:
----------------------------------------------------------------------------------
[username@<gpu-cluster> ~]$ wossname -auto -alpha
ALPHABETIC LIST OF PROGRAMS
aaindexextract Extract amino acid property data from AAINDEX
abiview Display the trace in an ABI sequencer file
:
:
:
yank Add a sequence reference (a full USA) to a list file
[username@<gpu-cluster> ~]$
-----------------------------------------------------------------------------------
6. To look up information about a specific program, type the command: tfm -program programname.
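For example, to read the manual page for the needle program used in the job script later in this section:
-----------------------------------------------------------------------------------
tfm -program needle
-----------------------------------------------------------------------------------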
7. To list the configured databases, type the command: showdb, which will display the following:
----------------------------------------------------------------------------
Display information on configured databases
Name        Type        ID   Qry  All  Comment
qapblast    Protein     OK   OK   OK   Blast swissnew
:
:
tgenbank    Nucleotide  OK   OK   OK   GenBank in native...
[username@<gpu-cluster> ~]$
----------------------------------------------------------------------------
8. Create the example script below and name the file emboss.moab:
---------------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=1:ppn=8:gpus=4,partition=c1060
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/embosstest/out
#MSUB -e /GPU/home/username/embosstest/err
#MSUB -d /GPU/home/username/embosstest
#MSUB -mb
##### Running commands
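# global (Needleman-Wunsch) alignment of the test EMBL entry z11115 against itself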
needle tembl:z11115 tembl:z11115 -out all.needle -auto
---------------------------------------------------------------------------
Note: the above script submits a job that runs needle on the database entry tembl:z11115, aligning the sequence against itself.
9. Submit the job using the command: msub emboss.moab.
10. An output file named all.needle is then generated in the directory /GPU/home/username/embosstest.
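Once the job has finished, the result can be inspected directly from the shell, for example:
---------------------------------------------------------------------------
less /GPU/home/username/embosstest/all.needle
---------------------------------------------------------------------------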
11. To check the status of the job, type: showq.
12. To check status of the nodes, type: pbsnodes.
Caution: users should not attempt to read databases on the login node, as such processes hang the login node and prevent users from logging in to the cluster.
For more information about EMBOSS, visit http://emboss.sourceforge.net/.


