
Scientific codes running on the Sun cluster (3)

Sun Microsystems cluster

GROMACS (version 4.0.5), installed in /opt/gridware/gromacs
A molecular dynamics package primarily designed for biomolecular systems. For more information, click HERE.

DL_POLY (versions 3.07 and 2.18), installed in /opt/gridware/dlpoly
A general-purpose serial and parallel molecular dynamics simulation package. This version of DL_POLY has a wider range of structure optimisation features to help with setting up the starting configuration. For more information about this code, click HERE; to see the script for running DL_POLY 3.07 and 2.18 on the Sun, e1350 and BG/P systems, click HERE.

EMBOSS (version 6.2.0), installed in /opt/gridware/EMBOSS
EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information about this package, click HERE.

ATLAS (version 3.9), installed in /opt/gridware/atlas3.9
ATLAS (Automatically Tuned Linear Algebra Software) is an automatically optimised linear algebra library. For more information about this software, click HERE.

GAUSSIAN (version g09), installed in /opt/gridware/gaussian
Gaussian is an electronic structure calculation package. For more information about this software, click HERE; to see the script for running GAUSSIAN on the Sun, e1350 and BG/P systems, click HERE.

SEADAS (version 6.1), installed in /opt/gridware/SeaDas
SeaDAS is a comprehensive image analysis package for satellite ocean colour data. For more information about this package, click HERE.


 

Graphical Processing Unit (GPU)

EMBOSS (version 6.3.1, Intel compilation), installed in /GPU/opt/emboss-intel-new
EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information on how to run EMBOSS on the GPU cluster, click HERE.

EMBOSS (version 6.3.1, GCC compilation), installed in /GPU/opt/emboss-gcc-6.3
EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information on how to run EMBOSS on the GPU cluster, click HERE.

NAMD (version 2.8), installed in /GPU/opt/namd/NAMD_2.8_Source/
NAMD is a free-of-charge molecular dynamics simulation package written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). For more information, click HERE.
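As a general illustration only (not a site-specific recipe: the input file name, thread count and GPU numbers below are placeholders, and the location of the namd2 binary under /GPU/opt/namd/NAMD_2.8_Source/ depends on the build), a CUDA-enabled NAMD binary is typically launched as:

namd2 +p8 +devices 0,1 input.namd > output.log

where +p sets the number of worker threads and +devices selects which GPUs to use. See the linked documentation for the recommended way to run NAMD on the GPU cluster.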

 


Scaling of codes on CHPC clusters

The performance of the following codes, namely NAMD, WRF, DL_POLY_2 and DL_POLY_3, was tested on both the Sun and GPU clusters. The scalability of these codes was calculated using the following formula:

S(P) = T(1) / T(P)

The speed-up on P processors, S(P), is the ratio of the execution time on one processor, T(1), to the execution time on P processors, T(P). Some of the benchmark results were measured per node or per GPU rather than per processor; in those cases, replace processors with nodes or GPUs in the formula above. Below is the scalability of NAMD tested on processors of the CHPC GPU cluster:
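As a worked example with purely illustrative timings: if a job takes T(1) = 1000 s on one processor and T(16) = 80 s on 16 processors, the speed-up is S(16) = 1000 / 80 = 12.5, compared with an ideal speed-up of 16.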


Figure 1: Scalability of NAMD on GPU cluster (Processors)

Figure 1 depicts the scalability of NAMD when running over the Infiniband and Ethernet networks of the GPU cluster. The model performs best when simulating on 80 processors on both networks. Based on these results, users are advised to use at least 32 processors, which gives reasonable performance and leaves capacity for other users on the system. The graph below presents the scaling results of NAMD simulated on GPUs (NVIDIA cards):

 


Figure 2: Scalability of NAMD on GPU cluster (GPUs)

Figure 2 illustrates the scaling of NAMD when simulating on different numbers of GPUs (NVIDIA cards) in the GPU cluster. The scalability results show that the model does not scale as expected from 1 up to 4 GPUs on either the Infiniband or the Ethernet network. Thereafter, performance increases from about 8 GPUs up to 20 GPUs on both networks. It is therefore recommended that this kind of model be run on as many GPUs as system availability allows. Another molecular dynamics code, DL_POLY 2.18, was also tested.

 


Figure 3: Scalability of DL_POLY 2.18 on Sun cluster

Figure 3 shows the scaling results of DL_POLY 2.18 when simulating on two different architectures of the Sun cluster, namely Nehalem and Harpertown. In summary, the model performed well on the Nehalem system, while on Harpertown it also scales reasonably, with performance continuing to improve as the number of nodes increases. To allow proper sharing of resources, DL_POLY 2 users are recommended to run on at least 4 Nehalem compute nodes, or to use Harpertown if the system is busy. Another version of this molecular dynamics code, DL_POLY 3.09, is presented in the graph below:

 


Figure 4: Scalability of DL_POLY 3.09 on Sun Microsystems cluster

Figure 4 outlines the scalability of DL_POLY 3.09 when executing on the Nehalem and Harpertown architectures of the Sun system. The performance of the model was comparable from 1 to 2 Nehalem nodes and increases slightly as the number of nodes grows. The Harpertown system follows the Nehalem trend and also scales properly as the number of nodes increases. Depending on the available system (either Nehalem or Harpertown), users of this model may run on at least 8 nodes for simulations of more than 60,000 atoms. The graph below presents the performance of WRF simulated on the Sun system:

 

Figure 5: Scalability of WRF on Sun: Nehalem and Harpertown cluster

Figure 5 describes the scaling of WRF tested on the Sun Microsystems cluster (Nehalem and Harpertown systems). The scaling results show that the speed-up of WRF was almost as expected from 1 to 2 nodes and thereafter decreased rapidly from 4 to 16 nodes; however, Harpertown achieved better performance than the Nehalem system. Based on these scaling results, it is appropriate for WRF users to use at least 16 nodes when running configurations covering a period of one month or more. The scaling results below show the performance of WRF on Sun's Dell cluster.

 

 

Figure 6: The performance of WRF on the Sun Dell system

Figure 6 shows the scaling of the weather model (WRF) tested on different computational resources, from 1 to 16 compute nodes connected via Infiniband and Ethernet networks respectively. The performance of the weather simulation is comparable from 1 to 2 nodes, decreases slightly from 2 to 4 nodes on both the Infiniband and Ethernet network interfaces, and then increases again from 8 up to 16 nodes. The scalability of WRF on the Sun Dell system is optimal compared to the tests performed on the Sun Nehalem and Harpertown systems discussed in Figure 5. It is therefore recommended that WRF users use 16 nodes when running simulations on the Infiniband network of the cluster.

For more information about the configuration of all the codes, please click here



Compiling and Linking Codes

Sun compilers

Please see this page for information on Sun compilers, ClusterTools and Intel compilers.

Compilers and Libraries

Code Name          Version   Directory                         Notes
gcc                4.5.1     /opt/gridware/compilers           with-gmp
zlib               1.2.7     /opt/gridware/compilers           with gcc
ImageMagick        6.7.9     /opt/gridware/compilers           with intel 2012
NCO                4.2.1     /opt/gridware/compilers           with gcc-4.5.1, intel 11 and openmpi-1.4.2-intel
netcdf-gnu         4.1.2     /opt/gridware/libraries           with gcc
netcdf-intel       4.1.2     /opt/gridware/libraries           with intel 2012
mvapich2 (r5668)   1.8       /opt/gridware/libraries           with intel 2012
mvapich            2.1.8     /opt/gridware/libraries           with gcc
HDF5               1.8.9     /opt/gridware/compilers           with intel 11.1
OpenMPI            1.6.1     /opt/gridware/compilers/OpenMPI   with intel 2012
OpenMPI            1.6.1     /opt/gridware/compilers/OpenMPI   with gcc
FFTW               3.3.2     /opt/gridware/libraries           with intel 2012, using the mvapich2 (r5668) MPI library
FFTW               2.1.5     /opt/gridware/libraries           with intel 2012, using the mvapich2 (r5668) MPI library
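As an illustrative sketch only (the bin, include and lib subdirectories assumed under these prefixes may differ on the actual system, and hello.c / fft_test.c are placeholder source files), an MPI program could be compiled against the gcc build of OpenMPI and linked against FFTW roughly as follows:

export PATH=/opt/gridware/compilers/OpenMPI/bin:$PATH   # assumed location of the mpicc/mpirun wrappers
mpicc -o hello hello.c                                  # compile a simple MPI program
mpicc -o fft_test fft_test.c -I/opt/gridware/libraries/include -L/opt/gridware/libraries/lib -lfftw3   # assumed FFTW include/lib layout
mpirun -np 4 ./hello                                    # run a quick 4-process test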

 


 



Logging In

Find guidelines on the following topics below:

 

CHPC Use Policies

Please make sure you have read and signed the CHPC Use Policy and returned it. Chances are you have already done so to get to this point.


Logging in via Secure Shell

CHPC systems use the UNIX operating system. Click here to download the readme file for all our clusters.

Most systems have an SSH client that may be used to log in to the CHPC. Linux and MacOS systems have this as standard, while PuTTY is a free downloadable client for MS-Windows.

Log in to the system (GPU or Sun cluster) using your SSH client. You may optionally add the command line argument -X to enable X-windows display forwarding back to your local host; an example is given after the login commands below.

To log in to the GPU cluster (from anywhere on the internet):

ssh username@gpu.chpc.ac.za


Sun cluster logins using Linux

1. Login from anywhere on the internet:

ssh username@sun.chpc.ac.za

2. Login from the CSIR network:

ssh username@sun.chpc.ac.za
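To enable X-windows display forwarding for graphical applications, add the -X argument to any of the above, for example:

ssh -X username@sun.chpc.ac.za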

Logging in via PuTTY

1. Open putty.exe

2. Category: Session

3. Under Host Name (or IP address), enter:

   • sun.chpc.ac.za (from anywhere on the internet)

   • or: gpu.chpc.ac.za

4. Port: 22

5. Connection type: SSH

6. Saved Sessions: e.g. CHPC-SUN or CHPC-GPU

7. Close window on exit: Only on clean exit

8. Click Open

9. Enter your username [press Enter]

10. Enter your password [press Enter]

This will connect you to a shell on the login node of the cluster. From here you will be able to conduct almost all of your activities.


Directories

The root directory in unix, / (forward slash), is the base of the file system. Other disk systems may be mounted on mount points under the root directory; these are normally on separate disk subsystems from the system directories containing the libraries and programs.

The directory in which a user's login session starts is the home directory.

In commands, it may also be referred to by a short form, using the tilde symbol, ~.

The tilde is expanded by the shell to refer to the full directory path of the home directory, typically /GPU/home/username (GPU) or /export/home/username (Sun). This directory is owned by the user and contains files enabling correct startup of the user's session, such as files that set shell variables.

The current working directory may be referred to by its full pathname or by . (dot), while the parent directory, which is one level up, is referred to by .. (double dot).

You may change to your home directory by typing cd on its own. Alternatively, you may refer to files in your home directory by using the tilde shortcut symbol when in a different working directory, e.g.:

cat ~/myfile.text

to display the contents of the file in /GPU/home/username/myfile.text (GPU) OR /export/home/username/myfile.text (Sun) on the console.

Tip: to change your working directory to the previous directory, type cd -
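A short illustrative session (the paths shown are examples; your home directory will differ):

cd                  # change to your home directory
pwd                 # print the current working directory, e.g. /export/home/username
cd /opt/gridware    # change to an absolute path
cd ..               # move one level up, to /opt
cd -                # return to the previous directory, /opt/gridware
ls ~                # list the contents of your home directory from anywhere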


File permissions

In unix, file permissions for reading, writing and executing may be specified for the classes owner, group and world. In this way access may be controlled. The chmod command is used to change a file or directory's permissions, while chown changes its ownership.
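For example (the file and directory names are placeholders):

ls -l results.txt                # show the current owner, group and permissions
chmod u+x myscript.sh            # allow the owner to execute the script
chmod g+r,o-rwx results.txt      # let the group read the file; remove all access for world
chmod -R 750 mydirectory         # owner: full access; group: read and enter; world: none
chown username:groupname results.txt   # change the owner and group (requires appropriate rights)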


Disk space

The unix disk free command df shows the filesystem free space and mount points. The '-h' command line switch causes the output to be in a format more easily read by a human.
For example, to show all free space on the GPU cluster:

% df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               49G   18G   29G  38% /
tmpfs                  12G     0   12G   0% /dev/shm
/dev/gpfs              14T  942G   13T   7% /GPU

For example, to show all free space on the Sun cluster:
% df -h

Filesystem                                       Size  Used  Avail  Use%  Mounted on
/dev/sda3                                        119G   45G    68G   40%  /
udev                                             7.9G  188K   7.9G    1%  /dev
/dev/sda1                                        130M   25M    98M   21%  /boot
172.17.203.15:/mnt/gridware                      1.6T  680G   865G   45%  /opt/gridware
172.17.203.15:/mnt/home                          1.9T  717G   1.1T   41%  /export/home
172.17.203.50:/scratch/work                      3.6T  1.8T   1.9T   50%  /scratch/work
172.17.203.50:/scratch/home                      2.0T  210G   1.8T   11%  /scratch/home
172.17.195.20@o2ib0:172.17.195.21@o2ib0:/lfs01    72T   15G    68T    1%  /lustre/SCRATCH1
172.17.195.21@o2ib0:172.17.195.20@o2ib0:/lfs02    72T   13T    56T   19%  /lustre/SCRATCH2
172.17.195.20@o2ib0:172.17.195.21@o2ib0:/lfs03    72T   38T    31T   55%  /lustre/SCRATCH3
172.17.195.21@o2ib0:172.17.195.20@o2ib0:/lfs04    72T  2.9T    66T    5%  /lustre/SCRATCH4

To show disk usage, use the unix command du. To show the total usage of the current directory:

du -sh .

To show the total usage of a specified directory (including all of its subdirectories):

du -sh directoryname


Changing your password on the GPU or Sun cluster

To change your password, login to gpu.chpc.ac.za (GPU) OR sun.chpc.ac.za (Sun).

To change your password, type passwd (or yppasswd on the GPU cluster) and follow the prompts, first to enter your existing password and then the new password. You will be prompted twice for the new password to ensure correctness.

For example (GPU):

yppasswd
Please enter old (i.e. current) password:
Please enter new password:
Please re-enter new password:

For example (SUN cluster):

NB: You'll be requested to enter your old password, then the new password twice to confirm it.

username@login02:~>passwd username
Changing password for username.
Old Password:
New Password:
Reenter New Password:
Changing NIS password for username on batch01.
Password changed.

 

Choose a "strong" password, with mixed case alphabetic characters and digits. As per the CHPC agreements, please keep your password private and change it immediately if you suspect it has become known to anyone else.

If you have forgotten the password or otherwise cannot log in, you will have to request that your password be reset by the CHPC system admin.


Changing your login shell

To change your shell, type 'chsh' and follow the prompts. A list of valid shells is available in the text file /etc/shells, although the bash shell will suffice for most operations. Available shells include csh, ksh, tcsh and zsh.

For example, to change your shell to bash:

chsh -s /bin/bash


Changing bash command line editing mode

By default, the bash shell is set up to use VI-style editing keys. To change, run the following, or add the following to your .bashrc file to execute on login:

set -o emacs

Useful keys in this mode:

  • ^a Start of line
  • ^e End of line
  • ^w Delete previous word
  • Up/Down arrow access command history

To revert to VI mode

set -o vi


Command-line completion

Pressing Tab performs automatic completion, both for commands available on the shell's PATH (a variable that determines which directories are searched for programs) and for file and directory names.
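For example, assuming the file ~/myfile.text from the earlier example exists, typing

cat ~/myf

and then pressing Tab completes the command line to

cat ~/myfile.text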

