
Scientific codes running in the e1350, Blue Gene/P & Sun cluster

Sun Microsystems cluster

GROMACS (version 4.0.5)
Directory: /opt/gridware/gromacs
Notes: A molecular dynamics package primarily designed for biomolecular systems. For more information, click HERE.

DL_POLY (versions 3.07, 2.18)
Directory: /opt/gridware/dlpoly
Notes: A general-purpose serial and parallel molecular dynamics simulation package. This version of DL_POLY has a wider range of structure optimisation features to help with setting up the starting configuration. For more information about this code, click HERE; to see the script for running DL_POLY 3.07 and 2.18 on the Sun, e1350 and BG/P systems, click HERE.

EMBOSS (version 6.2.0)
Directory: /opt/gridware/EMBOSS
Notes: EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information about this package, click HERE.

ATLAS (version 3.9)
Directory: /opt/gridware/atlas3.9
Notes: ATLAS (Automatically Tuned Linear Algebra Software) provides automatically tuned, optimised linear algebra routines. For more information about this software, click HERE.

GAUSSIAN (version g09)
Directory: /opt/gridware/gaussian
Notes: Gaussian is an electronic structure calculation package. For more information about this software, click HERE; to see the script for running GAUSSIAN on the Sun, e1350 and BG/P systems, click HERE.

SEADAS (version 6.1)
Directory: /opt/gridware/SeaDas
Notes: SeaDAS is a comprehensive image analysis package for satellite ocean-colour data. For more information about this package, click HERE.


 

Graphical Processing Unit (GPU) cluster

EMBOSS (version 6.3.1)
Directory: /GPU/opt/emboss-intel-new
Notes: Intel compilation. EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information on how to run EMBOSS on the GPU cluster, click HERE.

EMBOSS (version 6.3.1)
Directory: /GPU/opt/emboss-gcc-6.3
Notes: GCC compilation. EMBOSS is an open-source software package developed to meet the needs of the molecular biology community. For more information on how to run EMBOSS on the GPU cluster, click HERE.

NAMD (version 2.8)
Directory: /GPU/opt/namd/NAMD_2.8_Source/
Notes: NAMD is a free-of-charge molecular dynamics simulation package written using the Charm++ parallel programming model, noted for its parallel efficiency and often used to simulate large systems (millions of atoms). For more information, click HERE.

 


Scaling of codes on CHPC clusters

The performance of the following codes, namely NAMD, WRF, DL_POLY_2 and DL_POLY_3, was tested on both the Sun and GPU clusters. The scalability of these codes was calculated using the following formula:

S(P) = T(1) / T(P)

In other words, the speed-up on P processors, S(P), is the ratio of the execution time on one processor, T(1), to the execution time on P processors, T(P). Some of the benchmark results were measured using nodes or GPUs instead of processors; in those cases, processors are simply replaced by nodes or GPUs in the formula above. A small worked example of the calculation is given below, followed by the scalability of NAMD tested on processors of the CHPC GPU cluster (Figure 1).
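As a worked illustration of the formula, the short Python sketch below computes the speed-up S(P) and the corresponding parallel efficiency from a set of wall-clock timings; the timing values are made up for illustration and are not CHPC benchmark results.

    # Hypothetical wall-clock timings (seconds) for a fixed benchmark run on
    # different processor counts; values are illustrative only.
    timings = {1: 1200.0, 8: 165.0, 16: 90.0, 32: 52.0, 64: 31.0, 80: 27.0}

    t1 = timings[1]  # execution time on one processor, T(1)
    for p in sorted(timings):
        tp = timings[p]            # execution time on P processors, T(P)
        speedup = t1 / tp          # S(P) = T(1) / T(P)
        efficiency = speedup / p   # fraction of ideal (linear) speed-up
        print(f"P={p:3d}  T(P)={tp:7.1f} s  S(P)={speedup:5.1f}  efficiency={efficiency:.2f}")

The same calculation applies unchanged when the unit of scaling is nodes or GPUs rather than processors.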


Figure 1: Scalability of NAMD on GPU cluster (Processors)

Figure 1 depicts the scalability of NAMD when running over the Infiniband and Ethernet networks of the GPU cluster. It shows that the code performs best when simulating on 80 processors on both networks. Based on these results, users are advised to use at least 32 processors, which is a more reasonable choice and leaves room for other users of the system. The graph below presents the scaling results of NAMD run on GPUs (NVIDIA cards):

 


Figure 2: Scalability of NAMD on GPU cluster (GPUs)

Figure 2 illustrates the scaling of NAMD when simulating on different numbers of GPUs (NVIDIA cards) in the GPU cluster. The results show that the code does not scale as expected from 1 GPU up to 4 GPUs on either the Infiniband or the Ethernet network. Thereafter, performance starts to improve from about 8 GPUs up to 20 GPUs on both networks. For this kind of workload, it is therefore recommended that the code be run on a larger number of GPUs, depending on the availability of the system. Another molecular dynamics code, DL_POLY 2.18, was also tested:

 


Figure 3: Scalability of DL_POLY 2.18 on Sun cluster

Figure 3 shows the scaling results of DL_POLY 2.18 when simulating on two different architectures of the Sun cluster, namely Nehalem and Harpertown. In summary, the code performed well on the Nehalem system, while on Harpertown it also scaled reasonably, with performance continuing to improve as the number of nodes increases. To allow proper sharing of resources, it is recommended that DL_POLY 2 users run on at least 4 Nehalem compute nodes, or use Harpertown if the system is busy. The scaling of another version of this code (DL_POLY 3.09) is presented in the graph below:

 


Figure 4: Scalability of DL_POLY 3.09 on Sun Microsystems cluster

Figure 4 outlines the scalability of DL_POLY 3.09 on the Nehalem and Harpertown architectures of the Sun system. The results show that the performance of the code was comparable from 1 to 2 Nehalem nodes and increases slightly as the number of nodes grows. The Harpertown system follows the same trend as Nehalem and also responds well as the number of nodes increases. Depending on which system is available (Nehalem or Harpertown), users of this code may run on at least 8 nodes for simulations of more than 60,000 atoms. The graph below presents the performance of WRF on the Sun system:

 

Figure 5: Scalability of WRF on Sun cluster

Figure 5 describes the scaling of WRF tested on the Sun Microsystems cluster. On both the Nehalem and Harpertown systems, the results show that the speed-up of WRF was close to expectations from 1 to 2 nodes and then started to decrease rapidly from 4 to 16 nodes; Harpertown, however, delivered noticeably better performance than the Nehalem system. Based on these scaling results, it is appropriate for WRF users to use at least 16 nodes when running configurations covering a period of one month or more.
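For users who wish to produce scaling plots of this kind from their own benchmark runs, the Python sketch below (using matplotlib) plots speed-up against node count for two interconnects; the node counts and timing values are placeholders for illustration only, not the measurements behind Figures 1 to 5.

    import matplotlib.pyplot as plt

    # Illustrative wall-clock timings (seconds) per node count for two networks;
    # replace these placeholder values with your own measurements.
    nodes = [1, 2, 4, 8, 16]
    timings = {
        "Infiniband": [3600.0, 1850.0, 960.0, 520.0, 300.0],
        "Ethernet": [3600.0, 1900.0, 1050.0, 640.0, 430.0],
    }

    for network, t in timings.items():
        speedup = [t[0] / tp for tp in t]  # S(P) = T(1) / T(P) at each node count
        plt.plot(nodes, speedup, marker="o", label=network)

    plt.plot(nodes, nodes, linestyle="--", color="black", label="Ideal (linear)")
    plt.xlabel("Number of nodes")
    plt.ylabel("Speed-up S(P)")
    plt.title("Speed-up versus node count (illustrative data)")
    plt.legend()
    plt.savefig("scaling.png", dpi=150)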

For more information about the configuration of all the codes, please click here.

