This activity provides extensive scientific support for EPSRC's users of local and national parallel supercomputing facilities. 4.5 staff years of DL effort are devoted to developing new very high performance scientific applications (often in association with the CCPs), assisting in porting and optimising users' and other external codes, evaluating and exploiting parallel programming tools, and developing new high performance numerical algorithms.
The HPC Development (high-end) support programme aims to keep major EPSRC-funded computational groups at the forefront of world research. Much of the work is focused on developing scalable parallel algorithms, new scientific functionality and more effective computational methodologies. This is aimed particularly at enabling efficient exploitation of the national facilities, but also at enhancing the cost-effective exploitation of departmental systems funded, for example, via the JREI and JIF initiatives. Developments are implemented in core computational science and engineering application packages that are subsequently exploited by the HPCI and CCP communities on a wide range of modestly and massively parallel platforms. The code development activities are complemented by technical reports with coding examples, technical workshops, demonstration codes incorporating new programming paradigms, and numerical algorithm libraries (e.g. HSL and CLIPS). The workplans present activities over the next year in the following categories:
Support will be provided for the development of EPSRC's business case for the HPCx system. This activity will identify application areas across a range of EPSRC's activities that could make exciting scientific progress through the exploitation of a next-generation high performance computing system (of order 10x CSAR). The single-page case studies should include an international comparison. Activities will include:
HPCx case studies covering a number of scientific and engineering application areas were provided to EPSRC at the beginning of November. These case studies, included in Annex 2, were prepared in consultation with scientists in the CCPs and the HPCI Consortia.
This section covers activities to be undertaken over the next year aimed at:
Programming paradigms on ASCI-class systems include message passing, shared-memory/vector compilation and threads programming, in various possible combinations. Over the next year CLRC will extend its work on evaluating the emerging hardware and software systems by porting a base of application exemplars (ANGUS, DL_POLY, CASTEP, CRYSTAL, GAMESS-UK and FELISA) to the IBM ASCI system at Daresbury Laboratory and the complementary COMPAQ-DEC ASCI system at Rutherford Appleton Laboratory. Tasks over the next year will include:
The later-than-expected delivery of the upgrade to the IBM system has delayed work in this area. Effort was instead put into an initial exploration of performance on an IBM SP2 system with a Sphinx node containing two processors. This enabled an early exploration of programming issues, such as coupling OpenMP and MPI-2 programming in a real application. Early performance results on the new system have been reported at a number of workshops, in particular the AWE workshop in Oxford on 3 April 2000.
For further details on this work see the article in Annex 9:
Mixed OpenMP and MPI for Parallel Fortran Applications.
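To illustrate the mixed-mode model explored in this work, the following is a minimal sketch of coupling OpenMP threading with MPI message passing in a Fortran application. The program name, array size and workload are illustrative assumptions, not taken from the application codes above; MPI_Init_thread is the MPI-2 entry point for requesting thread support.

    program mixed_sketch
      use mpi
      implicit none
      integer :: ierr, rank, nprocs, provided, i
      integer, parameter :: n = 1000000
      real(8) :: local_sum, global_sum
      real(8), allocatable :: x(:)

      ! MPI-2: request FUNNELED support, i.e. only the master
      ! thread will make MPI calls.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      allocate(x(n))
      x = 1.0d0 / real(rank + 1, 8)   ! illustrative node-local data

      ! Shared-memory level: OpenMP threads divide the node-local loop.
      local_sum = 0.0d0
      !$omp parallel do reduction(+:local_sum)
      do i = 1, n
         local_sum = local_sum + x(i)
      end do
      !$omp end parallel do

      ! Message-passing level: MPI combines results across nodes
      ! (called from the master thread only, matching FUNNELED).
      call MPI_Reduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                      MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      if (rank == 0) print *, 'global sum =', global_sum

      deallocate(x)
      call MPI_Finalize(ierr)
    end program mixed_sketch

The FUNNELED thread level matches the common mixed-mode pattern in which OpenMP parallelises the computation within a node while a single thread per process handles inter-node communication.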
The performance of the new IBM system using a flat MPI model, with an MPI process running on every processor of every node, has been evaluated for a range of applications; a minimal sketch of this model is given after the list below. Performance is compared with that of the CSAR T3E system and a number of Beowulf systems. For further details see the articles in Annex 10:
Applications Performance: NWChem
Applications Performance: GAMESS-UK DFT, MP2 and 2nd Derivatives
Applications Performance: Developments to the Fitted Coulomb Module
Applications Performance: DL_POLY
Applications Performance: CHARMM
Applications Performance: CRYSTAL, CASTEP and CPMD
Applications Performance: ANGUS
Applications Performance: FLITE3D
Applications Performance: SUMMARY
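As a point of contrast with the mixed-mode sketch above, the following is a minimal sketch of the flat MPI model used in these evaluations: every processor on every node runs its own MPI process and no threading is used. The program name and workload are again illustrative assumptions.

    program flat_sketch
      use mpi
      implicit none
      integer :: ierr, rank, nprocs, i
      integer, parameter :: n = 1000000
      real(8) :: local_sum, global_sum

      ! Flat MPI: one process per processor; intra-node and inter-node
      ! communication are both handled by the MPI library.
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      ! Cyclic distribution of an illustrative global sum over all ranks.
      local_sum = 0.0d0
      do i = rank + 1, n, nprocs
         local_sum = local_sum + 1.0d0 / real(i, 8)
      end do

      call MPI_Reduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                      MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      if (rank == 0) print *, 'global sum =', global_sum

      call MPI_Finalize(ierr)
    end program flat_sketch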