
Cliffton M Masedi: Computational Modelling of Materials Researcher

Cliffton holds a BSc in Physics and Chemistry and a BSc Hons in Physics. His MSc was upgraded to a PhD in 2013 at the University of Limpopo (Turfloop Campus), where he is currently studying towards the degree. During his postgraduate studies he served as a Research Assistant at the Materials Modelling Centre, a dedicated research centre at the University of Limpopo. His thesis covers the computational modelling of advanced materials used in energy storage technologies. Part of his research focuses on the discharge products of rechargeable lithium batteries, which could play a key role in the advancement of next-generation batteries.

In July 2011 he joined the Centre for High Performance Computing (CHPC) as part of his MSc studentship, and he has coordinated several outreach and public awareness programmes through the CHPC. In March 2014 he became the first person to represent Limpopo and the CHPC at the International FameLab South Africa competition, where he presented on the benefits of using high performance computing in the development of energy storage technologies. Cliffton has presented his research findings at local, national and international conferences, and his academic work has earned him several awards, including best MSc research paper awards in 2011 and 2013, both from the Faculty of Science and Agriculture Research Day at the University of Limpopo.


HPC Administrators Training

The registration form asks for the following information (required fields are marked with *):

  1. Name *
  2. ID/Passport no
  3. Email *
  4. Contact Number
  5. Organisation
  6. Job Title
  7. Job Description
  8. CAPTCHA *


Crossing fingers for team SA

The South African student cluster competition team is awaiting the results of months of preparation. The team put its training to the test from 16 June 2013 on the exhibition floor of the International Student Cluster Competition in Leipzig, Germany.

Results will be released later today. To read about the competition, click here.


NAMD

NAMD on GPU

1. SSH into the GPU cluster using the following command: ssh username@<GPU cluster login address>, substituting your own username and the cluster's login address.

2. By default NAMD is set to use rsh to distribute tasks to the compute nodes, but rsh is disabled on the GPU cluster. Change your environment from rsh to ssh by running export CONV_RSH=/usr/bin/ssh (for NAMD) and export PVM_RSH=/usr/bin/ssh (for Torque), as in the sketch below.
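
A minimal sketch of step 2, assuming a bash shell; appending the exports to ~/.bashrc is an addition not mentioned above, included only so the setting persists across logins:
-----------------------------------------------------------------------
# Use ssh instead of rsh when NAMD/charmrun and Torque launch tasks
export CONV_RSH=/usr/bin/ssh
export PVM_RSH=/usr/bin/ssh

# Optional (assumption): persist the setting for future logins
echo 'export CONV_RSH=/usr/bin/ssh' >> ~/.bashrc
echo 'export PVM_RSH=/usr/bin/ssh' >> ~/.bashrc
-----------------------------------------------------------------------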

3. Next, create a directory named namdtest in /GPU/home/username, for example:
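
A one-line example, replacing username with your own login name:
-----------------------------------------------------------------------
# Create the working directory for the NAMD test job
mkdir -p /GPU/home/username/namdtest
-----------------------------------------------------------------------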

4. To run NAMD jobs over the Ethernet network, do the following:

4.1 cd to /GPU/home/username/namdtest and create an example script file named namd.moab with the following contents:
-----------------------------------------------------------------------
#!/bin/bash
### These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "++++++++++"
echo "host files is:"
echo " "
cat $PBS_NODEFILE
cp $PBS_NODEFILE $PBS_STEP_OUT.hostfile
echo " "
echo "++++++++++"

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT

-----------------------------------------------------------------------
5. To run NAMD jobs over the Infiniband network, do the following. The only difference from the Ethernet script is the sed command, which appends -ib to each hostname in the machine file so that communication goes over the nodes' Infiniband interfaces.

5.1 cd to /GPU/home/username/namdtest and create an example script file named namd.moab with the following contents:
-----------------------------------------------------------------------
#!/bin/bash
### These lines are for Moab
#MSUB -l nodes=2:ppn=16:gpus=4,partition=c2070
#MSUB -l walltime=168:00:00
#MSUB -m be
#MSUB -V
#MSUB -o /GPU/home/username/namdtest/out
#MSUB -e /GPU/home/username/namdtest/err
#MSUB -d /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64
#MSUB -mb

##### Running commands
echo "original machine file is:"
echo "++++++++++"
cat $PBS_NODEFILE
echo "++++++++++"
cat $PBS_NODEFILE|sed -e 's/.*/&-ib/'>$PBS_STEP_OUT.hostfile
echo "modified machine file is:"
echo "++++++++++"
cat $PBS_STEP_OUT.hostfile

nproc=`cat $PBS_NODEFILE | wc -l`
cd /GPU/opt/namd/NAMD_2.8_Source/Linux-x86_64/
charmrun +p$nproc namd2 /GPU/opt/namd/NAMD_2.8_Source/apoa1/apoa1.namd > /GPU/home/username/namdtest/OUTPUT
-----------------------------------------------------------------------
6. Save the file and submit it with the command: msub namd.moab. NOTE: in the script file you can use either partition=c2070 or partition=c1060. Example commands for steps 6 to 8 follow after this list.
7. To check the status of the job, type the command: showq.
8. To check the status of the nodes, type the command: pbsnodes. Note: this command displays the available nodes and the GPUs within them.
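
A minimal sketch of steps 6 to 8, assuming the script was saved as namd.moab in /GPU/home/username/namdtest:
-----------------------------------------------------------------------
cd /GPU/home/username/namdtest
msub namd.moab   # submit the job to Moab (step 6)
showq            # check the status of the job (step 7)
pbsnodes         # list available nodes and the GPUs within them (step 8)
-----------------------------------------------------------------------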

