The success and effectiveness of the CHPC: R&HCD Division will be measured by the quantity and quality of its high end computing outputs in the domains of:
which support national and continental strategic initiatives and push research and innovation boundaries.
The focus of CHPC: R&HCD (and indeed the entire CHPC) should be on achieving these goals, and the CHPC's technical and operational strategy and acquisitions should be aligned accordingly.
The focus of the CHPC is on High End Computational (HEC) Science; this includes research into novel high end architectures, together with the development and implementation of mathematical and programming algorithms.
These outcomes are closely correlated:
motivated, highly skilled people make things happen.
The following groups will be mobilised to achieve this:
Inputs to achieve this:
Engage in national High End Computing (HEC) advocacy and discussions with:
Employ appropriate research staff in a range of positions: full-time, seconded, part-time, associate; dual appointments; short and longer term.
Meaningful research collaborations are to be established with other research institutes (nationally and internationally) in the HEC space. Promote partnerships between HEC researchers and experimentalists.
Promote the development of a national cyberinfrastructure ecosystem embracing networked, distributed HPC/HEC platforms and grids.
Formulate and implement a high end software strategy which embraces both the strategic acquisition and the development of HEC software. Expand the range of quality open-source and proprietary software on CHPC platforms, targeting in particular fundamental underpinning software; in this, the needs and advice of the SIGs should be ascertained, while R&HCD must at the same time be proactive. The strategy should be to develop and acquire open-source solutions wherever possible, and to upskill potential and active users.
The CHPC seeks to advance scientific boundaries and foster innovation through effective partnership and through the training of a new generation of computationally skilled researchers in areas underpinned by high end computing, particularly those of national and continental strategic importance, to the benefit of basic and applied research in the public and private sectors.
| System Name | Blue Gene/P |
|---|---|
| Manufacturer / Model | IBM Blue Gene/P |
| CPU Clock | 850 MHz |
| Peak Performance | 14 TFlops |
| Linpack Performance | 11.5 TFlops |
| Interconnect | Blue Gene Tree/Torus |
| Storage (formatted capacity) | 50 TB (multi-cluster) |
| Launch date | October 2008 |
As part of IBM's Global Innovation Outlook and the IBM-CHPC partnership, the CHPC hosts a rack of Blue Gene (BG4A) donated by IBM (Blue Gene for Africa project).
The Blue Gene®/P system is capable of 14 trillion individual calculations per second, making it five times more powerful than the next-fastest research computer on the African continent (located in Egypt). The Blue Gene®/P provides 1024 compute nodes, each with four fully cache-coherent cores and 2 GB of RAM. The cores run at 850 MHz.
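As a rough sanity check on the quoted peak figure, the sketch below multiplies the node count, cores per node and clock rate by four floating-point operations per cycle; the 4 flops/cycle figure is the usual value quoted for the PowerPC 450 core's dual FPU and is an assumption, not a number stated above.

```python
# Rough peak-performance estimate for the CHPC Blue Gene/P rack.
# Assumption: each core retires 4 double-precision flops per cycle
# (fused multiply-add on the dual FPU).
nodes = 1024
cores_per_node = 4
clock_hz = 850e6
flops_per_cycle = 4  # assumed dual-FPU FMA rate

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFlops")  # ~13.9 TFlops, i.e. ~14 TFlops
```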
These nodes provide three modes of operation for jobs: SMP mode (one MPI task per node, with up to four threads), Dual mode (two MPI tasks per node, with up to two threads each) and Virtual Node (VN) mode (four single-threaded MPI tasks per node).
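As an illustrative sketch (not CHPC documentation), the snippet below shows how the number of MPI ranks available on a partition follows from the chosen execution mode; the partition size used here is hypothetical.

```python
# Illustrative only: MPI ranks and threads per node for the three
# Blue Gene/P execution modes, applied to a hypothetical partition size.
MODES = {
    "SMP":  {"ranks_per_node": 1, "threads_per_rank": 4},
    "DUAL": {"ranks_per_node": 2, "threads_per_rank": 2},
    "VN":   {"ranks_per_node": 4, "threads_per_rank": 1},
}

partition_nodes = 512  # hypothetical partition size
for mode, cfg in MODES.items():
    total_ranks = partition_nodes * cfg["ranks_per_node"]
    print(f"{mode:>4}: {total_ranks} MPI ranks, "
          f"{cfg['threads_per_rank']} thread(s) per rank")
```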
| Manufacturer / Model | IBM e1350 Cluster |
|---|---|
| CPU Clock | 2.6 GHz (each node) |
| Memory | 16 GB (each node) |
| Peak Performance | 2.5 TFlops |
| Linpack Performance | 2.5 TFlops |
| Interconnect | Ethernet and InfiniBand |
| Storage (formatted capacity) | 44 TB (multi-cluster) |
The cluster platform is aptly named "iQudu" (isiXhosa for kudu), a name that symbolises the cluster's agility, speed and size.
Each node is equipped with two dual-core AMD Opteron 2.6 GHz Rev. F processors (640 cores in total, giving approximately 2.5 TFlops peak performance) and 16 GB of memory.
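For orientation, the short calculation below back-derives the node count and aggregate memory from the figures quoted above (two dual-core sockets per node, 640 cores in total, 16 GB per node); the resulting node count is derived rather than stated in the text.

```python
# Derived figures for iQudu from the quoted per-node specification.
total_cores = 640
cores_per_node = 2 * 2          # two dual-core Opteron sockets per node
memory_per_node_gb = 16

nodes = total_cores // cores_per_node
aggregate_memory_tb = nodes * memory_per_node_gb / 1024
print(f"Nodes: {nodes}")                                   # 160
print(f"Aggregate memory: {aggregate_memory_tb:.1f} TB")   # 2.5 TB
```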
The nodes are interconnected over an InfiniBand 4X SDR (10 Gb/s) fabric, using HTX adapters from Voltaire and PathScale, which also gives them access to the shared file system on the SAN.
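The 10 Gb/s figure for InfiniBand 4X SDR is the raw signalling rate; the sketch below estimates the usable data rate, assuming the standard 2.5 Gb/s per lane and 8b/10b line encoding.

```python
# InfiniBand 4X SDR bandwidth, assuming the standard SDR lane rate
# and 8b/10b encoding overhead.
lanes = 4
lane_rate_gbps = 2.5         # SDR signalling rate per lane
encoding_efficiency = 8 / 10 # 8b/10b line encoding

raw_gbps = lanes * lane_rate_gbps           # 10 Gb/s signalling
data_gbps = raw_gbps * encoding_efficiency  # 8 Gb/s usable
print(f"Raw: {raw_gbps:.0f} Gb/s, usable: {data_gbps:.0f} Gb/s "
      f"(~{data_gbps / 8:.0f} GB/s) per direction")
```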
Eight of the cluster nodes are equipped with ClearSpeed accelerator cards.
In addition to local hard disks, all nodes have access to a shared storage system with a capacity of 44TB via a General Parallel File System (GPFS).