
HPC School Programme

Outline of courses:

  1. Introduction to HPC Architectures, Parallel Programming Overview and Models. UJ Cluster and CHPC Environment Set-up; Compilers, Shell Scripting and Job Submission.

  2. Introduction to Programming Shared Memory Multicore & SMP Systems with OpenMP.

  3. Parallel Programming Distributed Memory Systems with MPI; Introduction and Fundamentals.

  4. Advanced Parallel Programming with MPI; Topologies and Parallel I/O.

  5. GPGPU: Introduction to GPU Programming with CUDA, OpenCL or rapid prototyping tools.

  6. HPC Visualization Tools and Applications: ParaView/VTK, OpenFOAM and others.

 

Last Updated on Friday, 24 May 2013 18:19


HPC School Application

The CHPC invites applications from suitably qualified candidates to attend the 2013 HPC School. The school introduces South African students to the fundamentals of high-performance computing techniques.

The HPC School is aimed at recent B.Sc. (Hons) or B.Eng. graduates, and at new M.Sc. or Ph.D. students in the fields of computational chemistry, applied mathematics, physics, computational biology, bioinformatics, computer science, engineering or related subjects with a strong computing content.

The course will cover the concepts and theory of parallel computers, and programming for parallel systems with MPI, OpenMP and CUDA, using the C, Fortran or Python programming languages.

Students need second-year mathematics or applied mathematics (or equivalent) and programming experience in a high-level language: C, Fortran or Python.

Students who have a strong background in numerical methods and GNU Octave, MATLAB, Maple or Mathematica, or other programming experience (e.g. Java), may also qualify.

The HPC School will run from Monday 1 July to Saturday 6 July 2013.

There are no fees for successful applicants. The CHPC will cover the costs of accommodation during the HPC School as well as local return air travel for students from outside the Gauteng province as needed.

Eligible applicants should be registered at a South African university in 2013, or be accepted for graduate study at a South African university in 2014. Proof of registration or acceptance must be provided, together with a letter of recommendation from your supervisor.

Course content will assume a reasonable background in mathematics (at least multivariate calculus and linear algebra), programming ability in a high-level language (C, Fortran, Python, or similar), and second-year courses in at least one of physics, applied mathematics, mathematics, computer science, statistics, or engineering. A full academic transcript must be attached to your application.

No prior background in HPC will be assumed. Interactive lectures and computer tutorials will introduce the students to a range of key aspects of HPC and further illustrate how these tools are currently being applied to address research problems.

Transport, accommodation, and full board will be provided. Owing to budgetary constraints, only limited places are available for suitably qualified students.

CLOSING DATE FOR APPLICATIONS:

09h00 Friday 7 June 2013

Should you wish to participate, please complete the application form and email the completed document to [email address] before the closing date. Successful candidates will be notified by Tuesday 11 June 2013.

The CHPC calls on all talented students to submit their applications. As a publicly funded institution, the CHPC supports the transformation of South Africa, and the workshop organisers therefore strongly encourage students from previously disadvantaged backgrounds to apply. For general enquiries, please email: [email address]

Download the application form in OpenOffice ODT format here.

Download the application form in Word DOC format here.

Last Updated on Thursday, 19 June 2014 15:23


HPC School 2013

HPC Winter School - July 2013

More information coming soon.

Last Updated on Tuesday, 19 February 2013 14:50


NAMD

NAMD on GPU

1. SSH into the GPU cluster using the following command: ssh username@[GPU cluster login address].

2. By default, NAMD is configured to use rsh to distribute tasks to the compute nodes. On the GPU cluster, rsh is disabled, so you need to switch your environment from rsh to ssh by running the following commands: export CONV_RSH=/usr/bin/ssh (for NAMD authentication) and export PVM_RSH=/usr/bin/ssh (for Torque authentication), as shown below.
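
For example, the following commands, run in your login shell on the GPU cluster, set both variables for the current session (a minimal sketch; adding them to your shell start-up file, e.g. ~/.bashrc, would make them permanent, but check this against your site's set-up first):
-----------------------------------------------------------------------
# Use ssh instead of rsh when NAMD/Charm++ launches tasks
export CONV_RSH=/usr/bin/ssh
# Use ssh instead of rsh for Torque authentication
export PVM_RSH=/usr/bin/ssh
-----------------------------------------------------------------------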

3. Next, create a directory named namdtest under your home directory, /GPU/home/username, as shown below.
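
For example (replace username with your own login name):
-----------------------------------------------------------------------
mkdir -p /GPU/home/username/namdtest
-----------------------------------------------------------------------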

4. To run NAMD jobs over the Ethernet network, do the following:

4.1 Change to /GPU/home/username/namdtest and create the following example script file, named namd.moab.
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "++++++++++"
echo "host file is:"
echo " "
cat $PBS_NODEFILE
cp $PBS_NODEFILE $PBS_STEP_OUT.hostfile
echo " "
echo "++++++++++"

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT

-----------------------------------------------------------------------
5. To run NAMD jobs over the InfiniBand network, do the following:

5.1 Change to /GPU/home/username/namdtest and create the following example script file, named namd.moab.
-----------------------------------------------------------------------
###These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "original machine file is:"
echo "++++++++++"
cat $PBS_NODEFILE
echo "++++++++++"
cat $PBS_NODEFILE | sed -e 's/.*/&-ib/' > $PBS_STEP_OUT.hostfile
echo "modified machine file is:"
echo "++++++++++"
cat $PBS_STEP_OUT.hostfile

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------
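Note that this script creates the InfiniBand hostfile ($PBS_STEP_OUT.hostfile) but does not explicitly pass it to charmrun. One possible way to make charmrun launch on the -ib host names is to convert the hostfile into a Charm++ nodelist and pass it with ++nodelist. The following is only a sketch, assuming a net- build of charmrun that supports ++nodelist and a hypothetical nodelist file in the job directory; verify it against your NAMD installation before use:
-----------------------------------------------------------------------
# Convert the -ib hostfile into a Charm++ nodelist (hypothetical file name)
NODELIST=/GPU/home/username/namdtest/nodelist.ib
echo "group main ++shell ssh" > $NODELIST
sed -e 's/^/host /' $PBS_STEP_OUT.hostfile >> $NODELIST

# Launch NAMD using the InfiniBand host names
charmrun ++nodelist $NODELIST +p$nproc namd2 \
    /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------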
6. Save the file and submit the job with the command: msub namd.moab. NOTE: In the script file, you can use either partition=c2070 or partition=c1060.
7. To check the status of the job, type the command: showq.
8. To check the status of the nodes, type the command: pbsnodes. Note: This command displays the available nodes and the GPUs within each node.
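
Putting steps 6 to 8 together, a typical session from the job directory might look like the following (a sketch; output will differ on your system):
-----------------------------------------------------------------------
cd /GPU/home/username/namdtest
msub namd.moab      # submit the job to Moab; the job ID is printed
showq               # check the status of the job in the queue
pbsnodes            # list the nodes and the GPUs available on each node
-----------------------------------------------------------------------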

Last Updated on Tuesday, 03 June 2014 14:35


