The success and effectiveness of the CHPC: R&HCD Division will be measured by the quantity and quality of its high-end computing outputs in domains which support national and continental strategic initiatives and push research and innovation boundaries.
The focus of CHPC: R&HCD (and indeed the entire CHPC) should be on achieving these goals, and the Centre's technical and operational strategy and acquisitions should be aligned accordingly.
The focus of CHPC is on High End Computational (HEC) science; this includes research into novel high-end architectures and the development and implementation of mathematical and programming algorithms.
These outcomes are closely correlated: motivated, highly skilled people make things happen.
The following groups will be mobilised to achieve this:
Inputs to achieve this:
Engage in national High End Computing (HEC) advocacy and discussions with:
Employ appropriate research staff in a range of positions: full-time, seconded, part-time, and associate; dual appointments; short- and longer-term.
Establish meaningful research collaborations with other research institutes, nationally and internationally, in the HEC space, and promote partnerships between HEC researchers and experimentalists.
Promote the development of a national cyberinfrastructure ecosystem embracing networked, distributed HPC/HEC platforms and grids.
Formulate and implement a high-end software strategy which embraces both the strategic acquisition and the development of HEC software. Expand the range of quality open-source and proprietary software on CHPC platforms, targeting in particular fundamental underpinning software; here the needs and advice of the special interest groups (SIGs) should be ascertained, while R&HCD must simultaneously be proactive. The strategy should preferably be to develop and acquire open-source solutions and to upskill potential and active users.
The CHPC seeks to advance scientific boundaries and foster innovation through effective partnership and through the training of a new generation of computationally skilled researchers in areas underpinned by high end computing, particularly those of national and continental strategic importance, to the benefit of basic and applied research in the public and private sectors.
| System Name | GPU Cluster |
| --- | --- |
| CPU Clock | 2.4 GHz |
| Peak Performance | 12 TFlops |
| Linpack Performance | 16 TFlops |
| Storage (formatted capacity) | 14 TB |
The GPU cluster consists of 5 compute nodes and 9 storage nodes, achieving a Linpack performance of 16 TFlops. Each node contains 4 Nvidia Tesla GPUs, 2 Intel Nehalem X5550 CPUs, 6 × 4 GB Supermicro DDR3-1066 ECC registered DIMMs, and 1 × 250 GB SATA enterprise-level hard drive. Two types of Nvidia Tesla GPU are used: the C1060 and the C2070.
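As a quick sanity check, the per-node specifications above can be rolled up into cluster totals. The Python sketch below does this arithmetic; the per-GPU double-precision peak figures (roughly 78 GFlops for the C1060 and 515 GFlops for the C2070) are assumed vendor values used for illustration, since the exact mix of C1060 and C2070 cards is not stated here.

```python
# Rough roll-up of the GPU cluster figures quoted above.
# Per-GPU double-precision peaks are assumed vendor values,
# not figures taken from the CHPC specification itself.

COMPUTE_NODES = 5
GPUS_PER_NODE = 4
DIMMS_PER_NODE = 6
DIMM_SIZE_GB = 4

DP_PEAK_GFLOPS = {"C1060": 78, "C2070": 515}  # assumed vendor figures

total_gpus = COMPUTE_NODES * GPUS_PER_NODE        # 20 GPUs
ram_per_node_gb = DIMMS_PER_NODE * DIMM_SIZE_GB   # 24 GB per node

print(f"Total GPUs:        {total_gpus}")
print(f"RAM per node:      {ram_per_node_gb} GB")

# Bounds on the aggregate GPU double-precision peak, depending on
# the (unstated) mix of C1060 and C2070 cards:
low = total_gpus * DP_PEAK_GFLOPS["C1060"] / 1000   # all C1060
high = total_gpus * DP_PEAK_GFLOPS["C2070"] / 1000  # all C2070
print(f"GPU DP peak range: {low:.1f} - {high:.1f} TFlops")
```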
| System Name | Sun Harpertown | Sun Nehalem | Sun Westmere | Dell Westmere | SMP (M9000-64) |
| --- | --- | --- | --- | --- | --- |
| CPU | Intel Xeon Processor | Intel Nehalem Processor | Intel Westmere Processor | Intel Westmere Processor | SPARC Processor |
| CPU Clock | 3.0 GHz | 2.93 GHz | 2.93 GHz | 2.93 GHz | 1.9 GHz |
| CPU Cores | 384 | 2304 | 1152 | 2880 | 256 (512 threads) |
| Memory | 768 GB | 3456 GB | 2304 GB | 8640 GB | 2048 GB |
| Peak Performance | 3 TFlops | 24 TFlops | 13.5 TFlops | 37.1 TFlops | 2 TFlops |
| Linpack Performance | - | 61.5 TFlops | 61.5 TFlops | 61.5 TFlops | - |
| Interconnect | Jupiter Connector | QDR InfiniBand | QDR InfiniBand | QDR InfiniBand | QDR InfiniBand |
| Sun Shared Storage | 480 TB (multi-cluster) | 480 TB (multi-cluster) | 480 TB (multi-cluster) | 480 TB (multi-cluster) | 5.3 TB XFS |
| Launch Date | September 2009 | September 2009 | September 2011 | October 2011 | September 2009 |
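The peak-performance figures in the table follow from cores × clock × FLOPs per cycle. The minimal sketch below illustrates this for the Sun Westmere column, assuming 4 double-precision FLOPs per core per cycle (the usual figure for SSE on Nehalem/Westmere cores); this assumption reproduces the quoted 13.5 TFlops, though some of the other columns deviate slightly from it.

```python
# Theoretical peak = cores * clock * FLOPs per cycle.
# 4 double-precision FLOPs per core per cycle is an assumption
# (typical for SSE on Nehalem/Westmere), not a figure from the table.

def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 4) -> float:
    """Theoretical double-precision peak in TFlops."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

# Sun Westmere column from the table above:
print(f"Sun Westmere peak: {peak_tflops(1152, 2.93):.1f} TFlops")  # 13.5 TFlops
```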