Last week I gave a talk in a workshop called "Challenges & Pitfalls of Performance Assurance; examples from the financial industry", associated with the Central Europe Computer Measurement Group annual meeting. I was there to present what the academic world does about performance, and how it relates to what is done in the financial world. I was invited by AdHoc International, a company specializing in performance profiling, mainly (but not only) for J2EE applications.

My goal was to introduce what high performance applications are and why they exist. The short answer is that they enable computational science. Then, I presented the challenges in achieving high performance. The first limit is the memory wall: the difficulty of feeding the CPU with data fast enough. The current constraint is power consumption, which pushes designs toward intra-host parallelism. Finally, I presented how to study the performance of these applications. My idea is that this is becoming a science, so we have the three classical ways of doing science nowadays: write equations, run experiments on dedicated platforms (such as Grid5000), or use a simulator (such as SimGrid, of course).
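To make the memory wall concrete, here is a minimal C sketch (not from the talk; the array size and stride are arbitrary choices) that performs the same number of additions twice: once sequentially, once jumping a full cache line per access. On typical hardware the strided version is several times slower, because the CPU spends its time waiting on memory rather than computing.

    /* Minimal memory-wall illustration (hypothetical example, not from
       the talk): same arithmetic work, very different memory behavior. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 26)   /* 64M ints (256 MB), larger than any cache */
    #define STRIDE 16     /* 16 ints = 64 bytes: one access per cache line */

    static double sum_with_stride(const int *a, size_t stride) {
        long long s = 0;
        /* Visit every element exactly once, in stride-sized hops. */
        for (size_t start = 0; start < stride; start++)
            for (size_t i = start; i < N; i += stride)
                s += a[i];
        return (double)s;
    }

    int main(void) {
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = 1;

        clock_t t0 = clock();
        double s1 = sum_with_stride(a, 1);      /* sequential, prefetch-friendly */
        clock_t t1 = clock();
        double s2 = sum_with_stride(a, STRIDE); /* strided, memory-bound */
        clock_t t2 = clock();

        printf("sequential: %.2fs  strided: %.2fs  (sums: %g == %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
        free(a);
        return 0;
    }

Both loops execute the same number of additions; only the access pattern changes, so the gap between the two timings is pure memory cost.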

The talk went very well, and I think I managed to get my ideas across to the audience. Yuhu.

Agenda
  1. What are Scientific Applications?
    • What they are, why we need them
    • Typical uses of Computational Science
    • Typical hardware environment of these applications
  2. Challenges to Scientific Applications Quality
    • Usual quality metrics in High Performance Computing (HPC)
    • What are the major performance bottlenecks
    • Hardware evolution impacting HPC applications design
  3. Techniques to evaluate HPC code in practice
    • High Performance Computing as a Science
    • Experimental Approach to HPC
    • Simulation and Emulation
  4. Conclusions

Download PDF