
HPC School Programme

Outline of courses:

  1. Introduction to HPC Architectures, Parallel Programming Overview and Models. Cluster and CHPC Environment Set-up; Compilers, Shell Scripting and Job Submission.

  2. Introduction to Programming Shared Memory Multicore & SMP Systems with OpenMP.

  3. Parallel Programming of Distributed Memory Systems with MPI; Introduction and Fundamentals.

  4. Advanced Parallel Programming with MPI; Topologies and Parallel I/O.

  5. Introduction to GPU Programming with CUDA (Note: may not be offered in 2016).

 

Last Updated on Friday, 13 May 2016 16:01


HPC School Application

The CHPC invites applications from suitably qualified candidates to attend the 2017 HPC School, which introduces South African students to the fundamentals of high performance computing techniques.

 

APPLICATIONS ARE NOW CLOSED

 

Should you wish to participate, please complete the application and registration form at events.chpc.ac.za/winterschool before the closing date. Successful candidates will be notified from 6 June 2017.

The CHPC calls on all talented students to submit their applications. As a publicly funded institution, the CHPC supports the transformation of South Africa, and the workshop organisers therefore strongly encourage applications from students from previously disadvantaged backgrounds.

 

For general enquiries please email: [email address obfuscated on the website for spam protection].

 

Educators

The CHPC Winter School is also open to educators from South African tertiary education institutions who wish to include HPC and parallel programming in their teaching.

Download the Educator Application Form in OpenOffice ODT format here.

Download the Educator Application Form in Word DOC format here.

Please complete the application form as well as the online registration at events.chpc.ac.za/winterschool and upload the completed Educator Application Form at that site.

 

Last Updated on Monday, 05 June 2017 17:34


HPC School 2013

HPC Winter School - July 2013

More info coming soon 

Last Updated on Tuesday, 19 February 2013 14:50


NAMD

NAMD on GPU

1. SSH into the GPU cluster using the following command: ssh username@<GPU cluster login address>.

2. By default, NAMD is configured to use rsh to distribute tasks to the compute nodes, but rsh is disabled on the GPU cluster. Switch your environment from rsh to ssh by running the following commands: export CONV_RSH=/usr/bin/ssh (for NAMD authentication) and export PVM_RSH=/usr/bin/ssh (for Torque authentication).

3. Next, create a directory named namdtest under your home directory, /GPU/home/username. A combined sketch of steps 2 and 3 is shown below.
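A minimal sketch of steps 2 and 3, assuming a bash login shell; username stands in for your own account name, and persisting the exports in ~/.bashrc is an optional convenience rather than part of the original instructions:

-----------------------------------------------------------------------
# Step 2: use ssh instead of rsh for task distribution
export CONV_RSH=/usr/bin/ssh   # NAMD (charmrun) remote shell
export PVM_RSH=/usr/bin/ssh    # Torque remote shell

# Optional (assumption): persist the settings for future logins
echo 'export CONV_RSH=/usr/bin/ssh' >> ~/.bashrc
echo 'export PVM_RSH=/usr/bin/ssh' >> ~/.bashrc

# Step 3: create the working directory
mkdir -p /GPU/home/username/namdtest
-----------------------------------------------------------------------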

4. To run NAMD jobs over the Ethernet network, do the following:

4.1 cd to /GPU/home/username/namdtest and create an example script file named namd.moab with the following contents:
-----------------------------------------------------------------------
### These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4 partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "++++++++++"
echo "host files is:"
echo " "
cat $PBS_NODEFILE
cp $PBS_NODEFILE $PBS_STEP_OUT.hostfile
echo " "
echo "++++++++++"

# count the processor slots allocated to the job
nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
# launch NAMD on the bundled apoa1 example via charmrun
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT

-----------------------------------------------------------------------
5. To run NAMD jobs over the InfiniBand network, do the following:

5.1 cd to /GPU/home/username/namdtest and create an example script file named namd.moab with the following contents:
-----------------------------------------------------------------------
### These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4 partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "original machine file is:"
echo "++++++++++"
cat $PBS_NODEFILE
echo "++++++++++"
# append "-ib" to each hostname so the InfiniBand interfaces are used
cat $PBS_NODEFILE|sed -e 's/.*/&-ib/'>$PBS_STEP_OUT.hostfile
echo "modified machine file is:"
echo "++++++++++"
cat $PBS_STEP_OUT.hostfile

# count the processor slots allocated to the job
nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
# launch NAMD on the bundled apoa1 example via charmrun
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------
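In either script, the charmrun line runs the bundled apoa1 benchmark. To run your own simulation, point it at your own NAMD configuration file instead; myjob.namd below is a hypothetical name used only for illustration:

-----------------------------------------------------------------------
# Hypothetical adaptation: run your own configuration from namdtest
charmrun +p$nproc namd2 /GPU/home/username/namdtest/myjob.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------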
6. Save the file and submit it with the command: msub namd.moab. NOTE: in the script file, you can use either partition=c2070 or partition=c1060.
7. To check the status of the job, type the command: showq.
8. To check the status of the nodes, type the command: pbsnodes. Note: this command displays the available nodes and the GPUs within them. The commands from steps 6 to 8 are collected in the sketch below.
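A minimal sketch collecting the submission and monitoring commands from steps 6 to 8, assumed to be run from the namdtest directory that holds namd.moab:

-----------------------------------------------------------------------
# Step 6: submit the job script
msub namd.moab

# Step 7: check the status of queued and running jobs
showq

# Step 8: list the nodes and the GPUs available on them
pbsnodes
-----------------------------------------------------------------------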

Last Updated on Tuesday, 03 June 2014 14:35


