Compiling, Linking & Scalable Codes

Sun compilers

Please see this page for information on Sun compilers, ClusterTools and Intel compilers.

The primary compiler suite on the cluster system is PathScale, which provides compilers for C, C++ and the various Fortran standards.

IBM e1350 Cluster

  • C compiler: pathcc version 2.9.99
  • C++ compiler: pathCC version 2.9.99
  • C and C++ compiler: gcc (GCC) 3.3.3 (SuSE Linux)
  • Fortran 90/77 compiler: pathf90 version 2.9.99
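
As a quick illustration, a serial code can be built directly with these compilers; the source file names below are only placeholders:

  % pathcc -O2 -o hello hello.c
  % pathCC -O2 -o hello hello.cpp
  % pathf90 -O2 -o hello hello.f90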

 

Compilers and Libraries

Code Name          Version   Directory                         Notes
gcc                4.5.1     /opt/gridware/compilers           with GMP
zlib               1.2.7     /opt/gridware/compilers           with gcc
ImageMagick        6.7.9     /opt/gridware/compilers           with Intel 2012
NCO                4.2.1     /opt/gridware/compilers           with gcc-4.5.1, Intel 11 and openmpi-1.4.2-intel
netcdf-gnu         4.1.2     /opt/gridware/libraries           with gcc
netcdf-intel       4.1.2     /opt/gridware/libraries           with Intel 2012
mvapich2 (r5668)   1.8       /opt/gridware/libraries           with Intel 2012
mvapich            2.1.8     /opt/gridware/libraries           with gcc
HDF5               1.8.9     /opt/gridware/compilers           with Intel 11.1
OpenMPI            1.6.1     /opt/gridware/compilers/OpenMPI   with Intel 2012
OpenMPI            1.6.1     /opt/gridware/compilers/OpenMPI   with gcc
FFTW               3.3.2     /opt/gridware/libraries           with Intel 2012, using the mvapich2 (r5668) MPI library
FFTW               2.1.5     /opt/gridware/libraries           with Intel 2012, using the mvapich2 (r5668) MPI library
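
To compile and link against one of these packages, add its include and library directories to the compile line. The exact layout beneath the directories listed above varies per package, so check it first (for example with ls); the netCDF example below uses placeholder paths:

  % ls /opt/gridware/libraries
  % icc -I<netcdf-install>/include -L<netcdf-install>/lib -o read_nc read_nc.c -lnetcdf

Since the netcdf-intel build was made with the Intel 2012 compiler, the matching Intel compiler should be used when linking against it.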

In addition, in my home directory (/CHPC/home/nallsopp) you will find Examples.tar.gz and a few example LoadLeveler scripts. For an example of a LoadLeveler script that runs over InfiniBand with MVAPICH, take a look at hello_mpi.ll; to go over IP with MPICH, take a look at run_hello_ib.ll. In either case, please remember to build and run with the matching mpif90, mpicc, mpirun, etc.

Also, you will notice that various new classes have been created. You can see what is available by typing: llclass
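
As a rough sketch of what such a LoadLeveler script looks like (the class name, node and task counts, and executable name below are placeholders; hello_mpi.ll remains the authoritative site example):

  #!/bin/bash
  # @ job_name       = hello_mpi
  # @ job_type       = parallel
  # @ class          = <class name from llclass>
  # @ node           = 2
  # @ tasks_per_node = 4
  # @ output         = hello_mpi.$(jobid).out
  # @ error          = hello_mpi.$(jobid).err
  # @ queue
  mpirun -np 8 ./hello_mpi

Submit the script with llsubmit and check its progress with llq.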

 


 


IBM Blue Gene/P

  • Based on GCC 4.1.2: gcc, g++, gfortran
  • IBM XL C/C++ Advanced Edition for Blue Gene/P, V9.0.
  • IBM XL Fortran: mpixlf77, mpixlf90, mpixlf95, mpixlf2003

Common to the cluster environment is the Message Passing Interface (MPI). The preferred implementation is MVAPICH [1], of which the most recent release is 0.9.9. MVAPICH is built specifically to make use of the InfiniBand interconnect, and it is essential that your codes link against these libraries to get the best performance. The alternative to MVAPICH is MPICH, which is also built with the PathScale compiler but does not use the InfiniBand interconnect; instead it communicates over the 1 Gb Ethernet TCP/IP stack.
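
In practice this means building and launching through the MPI wrapper scripts rather than the bare compilers; the source and executable names below are placeholders, and you should confirm that the wrappers on your PATH belong to the MVAPICH installation rather than MPICH:

  % which mpicc mpif90         # confirm which MPI stack the wrappers come from
  % mpicc -O2 -o hello_mpi hello_mpi.c
  % mpif90 -O2 -o hello_mpi hello_mpi.f90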


 

Interaction matrix

 

Interconnect   InfiniBand 10Gb   TCP/IP 1Gb
Compiler       Pathscale         GNU GCC
MPI            MVAPICH           MPICH or MVAPICH

In addition to the MPI libraries, several maths libraries are also available, namely GSL, FFTW, GotoBLAS/LAPACK and ACML.
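
Linking against these follows the usual -I/-L/-l pattern; the install prefix below is a placeholder, and the library names should be checked against the installation (for FFTW 3 the library is typically -lfftw3):

  % pathcc -O2 -I<fftw-install>/include -L<fftw-install>/lib -o transform transform.c -lfftw3 -lm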


 

Compilers

Note

All compilers are cross-compilers.

They run on the front-end node:

  • SLES 10
  • POWER5

They target Blue Gene/P:

  • CNK
  • PowerPC 450d

 

C/C++ Compilers for Blue Gene/P

IBM xlc compilers (/opt/ibmcmp/vacpp/bg/9.0/bin/):

  bgxlc      C compiler
  bgxlc_r    thread-safe C compiler
  bgxlC      C++ compiler
  bgxlC_r    thread-safe C++ compiler

GNU gcc compilers (/bgsys/drivers/ppcfloor/gnu-linux/powerpc-bgp-linux/bin/):

  gcc        GNU C compiler
  g++        GNU C++ compiler



Fortran Compilers for Blue Gene/P

IBM xlf [77 | 90 | 95 | 2003] (/opt/ibmcmp/xlf/bg/11.1/bin):

  bgxlf          Fortran 77 compiler
  bgxlf_r        thread-safe Fortran 77 compiler
  bgxlf90_r      thread-safe Fortran 90 compiler
  bgxlf95_r      thread-safe Fortran 95 compiler
  bgxlf2003_r    thread-safe Fortran 2003 compiler

GNU gfortran [77 | 90 | 95] (/bgsys/drivers/ppcfloor/gnu-linux/powerpc-bgp-linux/bin/):

  gfortran       GNU Fortran 77/90/95 compiler


 

MPI C/C++ Compilers for Blue Gene/P

C/C++ wrappers (/bgsys/drivers/ppcfloor/ppc/comm/bin/):

  mpicc         MPI gcc C compiler
  mpicxx        MPI g++ C++ compiler
  mpixlc_r      thread-safe MPI xlc C compiler
  mpixlcxx_r    thread-safe MPI xlc C++ compiler

 


MPI Fortran Compilers for Blue Gene/P

Fortran wrappers (/bgsys/drivers/ppcfloor/ppc/comm/bin/):

  mpif77         MPI gfortran Fortran 77 compiler
  mpif90         MPI gfortran Fortran 90 compiler
  mpixlf77_r     thread-safe MPI xlf Fortran 77 compiler
  mpixlf90_r     thread-safe MPI xlf Fortran 90 compiler
  mpixlf95_r     thread-safe MPI xlf Fortran 95 compiler
  mpixlf2003_r   thread-safe MPI xlf Fortran 2003 compiler
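
As an illustration, a Blue Gene/P MPI code is built on the front-end node with these wrappers and then launched through the scheduler rather than run directly on the front end; the source and executable names below are placeholders:

  % /bgsys/drivers/ppcfloor/ppc/comm/bin/mpixlc_r -O3 -o hello_mpi hello_mpi.c
  % /bgsys/drivers/ppcfloor/ppc/comm/bin/mpixlf90_r -O3 -o hello_mpi hello_mpi.f90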

 


 

IBM ESSL Library

/bgsys/ibm_essl/sles10/prod/opt/ibmmath/

Include files (include/):

  essl.h

Library files (lib/):

  libesslbg.a
  libesslbg.so.1.3
  libesslsmpbg.a
  libesslsmpbg.so.1.3
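
A typical link line against the serial ESSL library therefore looks like the sketch below; the source file name is a placeholder, and depending on the code the XL Fortran run-time libraries may also have to be added:

  % mpixlf90_r -O3 -o solver solver.f90 -L/bgsys/ibm_essl/sles10/prod/opt/ibmmath/lib -lesslbg

For threaded code, link the SMP version with -lesslsmpbg instead.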


 

Name Mangling

Name mangling is a mechanism by which the names of functions, procedures, and common blocks from Fortran source files are converted into an internal representation when compiled into object files. For example, a Fortran subroutine called foo is given the name "foo_" in the object file. This is done to avoid name collisions with similar functions in other libraries, and it makes mixing code from C, C++, and Fortran easier.

For example, the Fortran library contains a function named "access", which performs the same task as the function access in the standard C library. However, the Fortran library's access function takes four arguments, making it incompatible with the standard C library's access function, which takes only two. If your program links with the standard C library, this would cause a symbol name clash; mangling the Fortran symbols prevents this.

By default, the PathScale compiler follows the same name mangling conventions as the GNU g77 compiler and the libf2c library: names without an underscore have a single underscore appended, and names containing an underscore have two underscores appended. The following examples should make this clear:

  • molecule -> molecule_
  • run_check -> run_check__
  • energy_ -> energy___

This behavior can be modified with the -fno-second-underscore and -fno-underscoring options to the pathf95 compiler. The default policies of PGI Fortran and Intel Fortran correspond to the -fno-second-underscore option. Common block names are also mangled: the name used for the blank common block is the same as g77's (_BLNK__); PGI's compiler uses the same name, while Intel's compiler uses _BLANK__.
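
To check the mangling that a given set of flags produces, compile a small Fortran file and inspect the object file with nm; the file and subroutine names below are only illustrative:

  % cat sub.f90
  subroutine run_check()
  end subroutine run_check
  % pathf90 -c sub.f90
  % nm sub.o | grep -i run_check      # default mangling: run_check__
  % pathf90 -c -fno-second-underscore sub.f90
  % nm sub.o | grep -i run_check      # now: run_check_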

In general, best practice on the cluster system is to set the following:

  • % export FC=pathf90
  • % export CC=pathcc
  • % export FFLAGS="-fno-underscoring"

If you run into problems with double underscoring, use the mvapich*_nsu (no second underscore) libraries.


 

Scalable Codes On IBM Blue Gene/P and Sun Microsystems

Download scalable codes on IBM Blue Gene/P here.

Download scalable codes on Sun Microsystems Cluster here.
