
Compiling, Linking & Scalable Codes

Sun compilers

Please see this page for information on the Sun compilers, ClusterTools and the Intel compilers.

The primary compiler on the cluster system is the PathScale compiler suite, which provides compilers for C, C++ and various Fortran standards.

IBM e1350 Cluster

  • 'C' Compiler: pathcc version 2.9.99
  • 'C++' Compiler: pathCC version 2.9.99
  • 'C' and 'C++' Compiler: gcc (GCC) 3.3.3 (SuSE Linux)
  • Fortran90/77 Compiler: pathf90 version 2.9.99


Compilers and Libraries

[The original table of installed codes and their build configurations was lost in extraction. The surviving entries indicate builds "with gcc", "with intel 11 / 11.1 / 2012", "with gcc-4.5.1, intel 11 and openmpi-1.4.2-intel", and "with intel 2012, using the mvapich2 (r5668) MPI library".]

In addition, in my home directory (/CHPC/home/nallsopp) you will find Examples.tar.gz and a few example LoadLeveler scripts. For an example of a LoadLeveler script that runs over InfiniBand with MVAPICH, take a look at hello_mpi.ll; to run over IP with MPICH, take a look at run_hello_ib.ll. In either case, please remember to build and run with the appropriate mpif90, mpicc, mpirun, etc.
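For reference, a minimal LoadLeveler script for an MPI job has the general shape sketched below. This is not the content of the actual example files; the directive values, file names and executable name are illustrative assumptions only:

```shell
#!/bin/bash
# Hypothetical LoadLeveler job script -- all directive values are examples only
#@ job_name       = hello_mpi
#@ job_type       = parallel
#@ node           = 2
#@ tasks_per_node = 4
#@ output         = $(job_name).$(jobid).out
#@ error          = $(job_name).$(jobid).err
#@ queue

# Launch with the MPI wrappers that match the library you linked against
mpirun -np 8 ./hello_mpi
```

Submit the script with llsubmit, monitor it with llq, and cancel it with llcancel.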

Also, you will notice that various new classes have been created. You can see what is available by typing: llclass




IBM Blue Gene/P

  • Based on GCC 4.1.2: gcc, g++, gfortran
  • IBM XL C/C++ Advanced Edition for Blue Gene/P, V9.0.
  • IBM Fortran: mpixlf77, mpixlf2003, mpixlf90, mpixlf95

Common to the cluster environment is the Message Passing Interface (MPI), and the preferred implementation is MVAPICH [1], of which the most recent release is 0.9.9. MVAPICH is built specifically to make use of the InfiniBand interconnect, and it is essential that your codes link with these libraries to get the best performance. The alternative to MVAPICH is MPICH, which is also built with the PathScale compiler but does not use the InfiniBand interconnect; it uses the 1 Gb Ethernet TCP/IP stack instead.


Interaction matrix

[The interaction matrix table was lost in extraction; its surviving header refers to the InfiniBand 10Gb interconnect.]

In addition to the MPI libraries, some maths libraries are also available, namely GSL, FFTW, GotoBLAS/LAPACK and ACML.




All compilers are cross compilers.

They run on the front-end node:

  • SLES 10
  • Power 5

They target the Blue Gene/P compute nodes:

  • CNK
  • PowerPC 440d


C/C++ Compilers for Blue Gene/P

IBM xlc compilers

  • C compiler: bgxlc
  • Thread-safe C compiler: bgxlc_r
  • C++ compiler: bgxlC
  • Thread-safe C++ compiler: bgxlC_r

GNU gcc compilers

  • GNU C compiler: powerpc-bgp-linux-gcc
  • GNU C++ compiler: powerpc-bgp-linux-g++


Fortran Compilers for Blue Gene/P

IBM xlf [77 | 90 | 95 | 2003]

  • Fortran 77 compiler: bgxlf
  • Thread-safe Fortran 77 compiler: bgxlf_r
  • Thread-safe Fortran 90 compiler: bgxlf90_r
  • Thread-safe Fortran 95 compiler: bgxlf95_r
  • Thread-safe Fortran 2003 compiler: bgxlf2003_r

GNU gfortran [77 | 90 | 95]

  • GNU Fortran 77 / 90 / 95 compiler: powerpc-bgp-linux-gfortran


MPI C Compilers for Blue Gene/P




  • MPI gcc C compiler: mpicc
  • MPI g++ C++ compiler: mpicxx
  • MPI xlc thread-safe C compiler: mpixlc_r
  • MPI xlC thread-safe C++ compiler: mpixlcxx_r



MPI Fortran Compilers for Blue Gene/P




  • MPI gfortran Fortran 77 compiler: mpif77
  • MPI gfortran Fortran 90 compiler: mpif90
  • MPI xlf thread-safe Fortran 77 compiler: mpixlf77_r
  • MPI xlf thread-safe Fortran 90 compiler: mpixlf90_r
  • MPI xlf thread-safe Fortran 95 compiler: mpixlf95_r
  • MPI xlf thread-safe Fortran 2003 compiler: mpixlf2003_r



IBM ESSL Library

[The lists of ESSL include files and library files were lost in extraction; consult the ESSL installation on the front-end node for the header and library paths.]

Name Mangling

Name mangling is a mechanism by which names of functions, procedures, and common blocks from Fortran source files are converted into an internal representation when compiled into object files. For example, a Fortran subroutine called foo gets turned into the name "foo_" when placed in the object file.

We do this to avoid name collisions with similar functions in other libraries, which makes mixing code from C, C++, and Fortran easier. Name mangling ensures that function, subroutine, and common-block names from a Fortran program or library do not clash with names in libraries from other programming languages. For example, the Fortran library contains a function named "access", which performs the same task as the function access in the standard C library. However, the Fortran library access function takes four arguments, making it incompatible with the standard C library access function, which takes only two. If your program links with the standard C library, this would cause a symbol name clash; mangling the Fortran symbols prevents this from happening.

By default, we follow the same name mangling conventions as the GNU g77 compiler and libf2c library when generating mangled names: names without an underscore have a single underscore appended to them, and names containing an underscore have two underscores appended to them. The following examples should help make this clear:

  • molecule -> molecule_
  • run_check -> run_check__
  • energy_ -> energy___

This behavior can be modified by using the -fno-second-underscore and -fno-underscoring options to the pathf95 compiler. PGI Fortran and Intel Fortran's default policies correspond to our -fno-second-underscore option. Common block names are also mangled: our name for the blank common block is the same as g77's (_BLNK__). PGI's compiler uses the same name for the blank common block, while Intel's compiler uses _BLANK__.
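The default appending rule can be sketched as a small shell function. This is illustration only; the compiler applies the rule itself when it writes symbol names into the object file:

```shell
# g77-style name mangling rule, as described above (illustration only)
mangle() {
    case "$1" in
        *_*) printf '%s__\n' "$1" ;;  # name contains an underscore: append two
        *)   printf '%s_\n'  "$1" ;;  # plain name: append one underscore
    esac
}

mangle molecule     # -> molecule_
mangle run_check    # -> run_check__
mangle energy_      # -> energy___
```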

In general, best practice on the cluster system is to set the following:

  • % export FC=pathf90
  • % export CC=pathcc
  • % export FFLAGS="-fno-underscoring"

If you run into a problem with double underscoring then use the mvapich*_nsu (no second underscore) library.


Scalable Codes On IBM Blue Gene/P and Sun Microsystems

Download scalable codes on IBM Blue Gene/P here.

Download scalable codes on Sun Microsystems Cluster here.


Last Updated on Tuesday, 20 November 2012 11:53


Logging In

Guidelines on the following topics can be found below:


CHPC Use Policies

Please make sure you have read and signed the CHPC Use Policy and returned it. Chances are you have already done so to get to this point.


Logging in via Secure Shell

CHPC systems use the UNIX operating system. Click here to download the readme file for all our clusters.

Most systems have an SSH client that may be used to log in to the CHPC. Linux and MacOS systems have this as standard, while PuTTY is a free downloadable client for MS-Windows.

Log in to the system (GPU or Sun) using your SSH client, optionally with the -X command-line argument to enable X-Windows display forwarding back to your local host. For example:

To log in to the GPU cluster (from anywhere on the internet):

ssh username@gpu.chpc.ac.za

Sun cluster logins using Linux

1. Login from anywhere on the internet:

ssh username@sun.chpc.ac.za

2. Login from within the CSIR network: use ssh with the cluster's CSIR-internal address in the same way.

Login via PuTTY

1. Open Putty.exe
2. Category: Session
3. Under Host Name or IP address:
   • sun.chpc.ac.za (from anywhere on the internet)
   • or: gpu.chpc.ac.za
4. Port: 22
5. Connection Type: SSH
6. Saved Session: e.g. CHPC-SUN or CHPC-GPU
7. Close window on exit: Only on clean exit
8. Click Open
9. Your username [press Enter]
10. Your password [press Enter]

This will connect you to a shell on the login node of the cluster. From here you will be able to conduct almost all of your activities.



The root directory in unix, / (forward slash), is the base of the file system. Other disk systems may be mounted on mount points under the root directory. These directories are normally on separate disk subsystems from the system directories containing the libraries and programs.

The directory in which a user's login session starts is the home directory.

In commands, it may also be referred to by a short form, using the tilde symbol, ~.

The tilde is expanded by the shell to the full directory path of the home directory, typically /GPU/home/username (GPU) or /export/home/username (Sun). This directory is owned by the user and contains files enabling correct startup of the user's session, such as those setting shell variables.

The current working directory may be referred to by its full pathname or . (dot), while the parent directory, one level up, is referred to by .. (double dot).

You may change to your home directory by typing cd on its own. Alternatively, you may refer to files in your home directory from a different working directory by using the tilde shortcut, e.g.:

cat ~/myfile.text

to display the contents of the file in /GPU/home/username/myfile.text (GPU) OR /export/home/username/myfile.text (Sun) on the console.

Tip: to change your working directory to the previous directory, type cd -
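The behaviour described above can be seen in a short session; any starting directory will do:

```shell
# Demonstrate tilde expansion, plain `cd`, and `cd -`
start_dir=$PWD
cd ~                      # same as plain `cd`: jump to the home directory
echo "home directory is $PWD"
cd - > /dev/null          # return to the previous working directory
[ "$PWD" = "$start_dir" ] && echo "back in the starting directory"
```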


File permissions

In unix, file permissions for reading, writing and executing may be specified for the classes owner, group and world; in this way access may be controlled. The chmod command changes a file or directory's permissions, while chown changes its ownership.
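As a quick sketch of chmod at work, using a throwaway file created with mktemp:

```shell
# Restrict a file to its owner, then grant the owner execute permission
f=$(mktemp)
chmod 600 "$f"               # owner: read+write; group and world: no access
ls -l "$f" | cut -c1-10      # prints: -rw-------
chmod u+x "$f"               # add execute for the owner only
ls -l "$f" | cut -c1-10      # prints: -rwx------
rm -f "$f"
```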


Disk space

The unix disk free command df shows the filesystem free space and mount points. The '-h' command line switch formats the output so that it is more easily read by a human.
For example, to show all free space on the GPU cluster:

% df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               49G   18G   29G  38% /
tmpfs                  12G     0   12G   0% /dev/shm
/dev/gpfs              14T  942G   13T   7% /GPU

For example, to show all free space on the Sun cluster:
% df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             119G   45G   68G  40% /
udev                  7.9G  188K  7.9G   1% /dev
/dev/sda1             130M   25M   98M  21% /boot
                      1.6T  680G  865G  45% /opt/gridware
                      1.9T  717G  1.1T  41% /export/home
                      3.6T  1.8T  1.9T  50% /scratch/work
                      2.0T  210G  1.8T  11% /scratch/home
                       72T   15G   68T   1% /lustre/SCRATCH1
                       72T   13T   56T  19% /lustre/SCRATCH2
                       72T   38T   31T  55% /lustre/SCRATCH3
                       72T  2.9T   66T   5% /lustre/SCRATCH4

To show disk usage of the current directory, use the unix command du:

du -sh .

To show the summarised usage of a specified directory:

du -sh directoryname


Changing your password on GPU OR Sun cluster

To change your password, login to gpu.chpc.ac.za (GPU) OR sun.chpc.ac.za (Sun).

To change your password type passwd and follow the prompts, first to enter your existing password, and then the new password. You will be prompted twice for the new password to ensure correctness.

For example (GPU):

Please enter old (i.e. current) password:
Please enter new password:
Please re-enter new password:

For example (SUN cluster):

NB: You'll be requested to enter a new password and to confirm it.

username@login02:~>passwd username
Changing password for username.
Old Password:
New Password:
Reenter New Password:
Changing NIS password for username on batch01.
Password changed.


Choose a "strong" password, with mixed case alphabetic characters and digits. As per the CHPC agreements, please keep your password private and change it immediately if you suspect it has become known to anyone else.

If you have forgotten the password or otherwise cannot log in, you will have to request that your password be reset by the CHPC system admin.


Changing your login shell

To change your shell, type chsh and follow the prompts. A list of valid shells is available in the text file /etc/shells, although the bash shell will suffice for most operations. Current shells include bash, csh, ksh, tcsh and zsh.

For example, to change your shell to bash (chsh expects the full path as listed in /etc/shells):

chsh -s /bin/bash


Changing bash command line editing mode

By default, the bash shell is set up to use vi-style editing keys. To change this, run the following command, or add it to your .bashrc file so that it executes on login:

set -o emacs

Useful keys in this mode:

  • ^a Start of line
  • ^e End of line
  • ^w Delete previous word
  • Up/Down arrows step through the command history

To revert to vi mode:

set -o vi


Command-line completion

Pressing Tab performs automatic completion, both for commands available on the shell's PATH (a variable that determines which directories are searched for programs) and for file and directory names.
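The PATH search can be inspected directly; the exact output varies from system to system:

```shell
# Show the first few directories the shell searches, in order
echo "$PATH" | tr ':' '\n' | head -n 3

# Show where a command was actually found on the PATH
command -v ls
```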


Last Updated on Monday, 19 November 2012 13:31


CHPC Newsletter

First Edition


A note from the Director


The CHPC endeavours to communicate with its user base through multiple platforms, including the website and our new newsletter. As an indication of this, the centre's website will be changing soon! Do not be alarmed: users can expect the same online support facility, as well as more interactive media in the form of a blog facility for sharing research information and a social media interface.

The newsletter will be emailed on a quarterly basis and will carry the latest developments at the CHPC, profiles of researchers and the nature of the work they are doing, among other things. We hope our users will use this newsletter as a tool to stay abreast of the latest developments at the CHPC.

Happy Sithole

CHPC National Meeting

It is that time again when we are hard at work to bring together our CHPC community, leaders of industry in HPC and technology vendors. The meeting aims to gauge international trends in HPC applications, look at what South African researchers are doing and determine a way of keeping the country competitive (Industrial Advisory Council) in this industry. The national meeting takes place from 3 to 7 December 2012 at the Durban International Convention Centre. The 3rd and 4th of December will cover tutorials and forums on HPC, and 5 to 7 December will constitute the main conference days. The theme of this year's conference is "HPC and Data Applications for Increased Impact on Research" and our intention is to highlight successful applications of HPC.

I am excited to announce that the conference will host the finale of the first South African Student Cluster Build Competition. Four teams of five will compete against each other, and the winning team will represent South Africa at the 2013 International Supercomputing Conference in Germany. Thanks to a generous R150 000.00 sponsorship from Dell, the winning team will also visit Dell's HPC development team at the company's headquarters in Austin, Texas, to learn from them.

The Hotseat Industrial Session will take place on Friday, 7 December 2012; it proved a favourite with last year's conference delegates. Vendors have booked their places and are gearing up to face the scrutiny of our inquisitors.

I urge you to register for this conference by visiting www.chpcconf.co.za. The call for contributions closes on 26 October 2012.

NAG/CHPC Partnership

The centre has partnered with NAG (the Numerical Algorithms Group), a United Kingdom-based company, to assist CHPC users with their codes. The aim of this partnership is to help users tune and scale their codes on the CHPC infrastructure.

The NAG High Performance Computing services include, among others: focused computer science and engineering (CSE) projects, mentoring and training of local CSE personnel, and advice and support in procurement processes. As part of this partnership, NAG visited the CHPC and hosted a workshop for the centre's infrastructure users in September.

The aim of the workshop was to allow users to share their experiences with the codes they are running and to see how these could be optimised. As an example of the kind of service CHPC users can now expect, NAG took a user's personally developed Particle-In-Cell simulation code and optimised and parallelised it. The user employs the code, written in C++, to simulate waves in an electron-beam plasma. Initially, the code was compiled on the CHPC cluster with OpenMP and was limited to running on 8 processors (one node). The aim was to run on multiple nodes by introducing MPI. After MPI was introduced, the code scaled successfully from one to eight nodes (12 to 96 cores), with a run time of 9.1 seconds, a startling achievement for the research.

First South African Cyber Infrastructure Committee Meeting

In September, the first meeting of the committee and sector working groups for the development of a national integrated cyber-infrastructure system was held at the CHPC offices in Cape Town.

The committee has been established to investigate international cyber-infrastructure best practice which is optimally applicable to South Africa and appropriately advise the Minister of Science and Technology on a model which will maximise the impact, sustainability and effective governance and management of the SA National Cyber-Infrastructure System. The expected outcome is that the Minister will be informed as to how this important initiative should be optimally institutionalised.

Currently, the main components of the core South African national cyber-infrastructure arrangement are: the Centre for High Performance Computing (CHPC), the South African National Research Network (SANReN), the Data Intensive Research Infrastructure of South Africa (DIRISA) formerly known as VLDB and the SAGrid Initiative. Outside of these, other parties own and manage diverse other components of the broader SA cyber-infrastructure ecosystem.

Researcher's Corner

A Shining Star Arises Through CHPC Facilitated Research

Dr Regina Maphanga is a senior researcher at the Materials Modelling Centre of the University of Limpopo. She has won several awards in recognition of her work and is the 2010 recipient of the National Science and Technology Forum (NSTF) Award for the category Distinguished Black Female Researcher over 2-5 years. This was for her contribution to computational modelling of materials, in particular electrolytic manganese dioxide.

Regina is from a rural village called Ngwanallela in GaMatlala, about 70km west of Polokwane. She has always been an academic achiever, being exempted from doing grade 6 during her primary schooling and finishing matric at the age of 16. Her very first use of a computer came during her honours degree, which she passed with distinction, going on to complete a Master's and a Doctorate in Physics, specialising in computational modelling of materials.

She describes computational modelling as a relatively new research method that combines theory and experimental research to calculate the properties of materials. Instead of the laboratory equipment and samples used in traditional experiments, computational modelling makes use of computers and mathematical models to solve problems. The various methods, based on theory, can be used to bridge the gap between fundamental science and industrial application. They can be applied to a variety of different materials and used to understand the properties of complex materials. This makes it an attractive approach for the many fields where it is hard or impossible to obtain experimental data.

Her research work is based on computer simulations and EXAFS experiments on electrolytic manganese dioxide, a positive cathode material used in alkaline batteries. Ab initio and atomistic simulations (energy minimisation and molecular dynamics techniques) are used to simulate the materials. She uses a state-of-the-art and rarely applied technique called the amorphisation and re-crystallisation (A and R) method. During the simulation, the material is allowed to become amorphous, and the calculation is prolonged until the material re-crystallises. Prolonged dynamical simulations result in re-crystallisation of the structure together with the evolution of the structural features observed experimentally; hence the technique was found to be appropriate for the simulation of complex materials.

Regina’s research findings have been presented at national and international conferences and published in journals and conference proceedings. She currently supervises postgraduate students.

Other achievements and awards:

  • Selected by IAP (InterAcademy Panel for International Issues) as a Young Scientist to represent South Africa during the World Economic Forum's Annual Meeting of the New Champions in Dalian, China (2011)
  • Selected as a member of the Global Young Academy: the voice of young scientists around the world (2011)
  • Finalist of the LOREAL/UNESCO Fellowship of the "For Women in Science" South African Programme (2006)
  • Recipient of a Special Mention Award of the LOREAL/UNESCO Fellowship "For Women in Science" (2006)

Regina is a long time user of the CHPC, due to the computationally intensive nature of her research. “The CHPC became very handy when we were starting with the projects on Large Scale Simulations, and it provided us with the computing power and resources we required to carry out our simulations. It is still making a huge difference and making it possible for us to progress with our work,” she says.


Last Updated on Tuesday, 16 October 2012 16:36

