Introduction to High Performance Computing

by Jan Thorbecke

Duration: Two days

Intended Audience: Entry and Intermediate levels

Prerequisites (Knowledge/Experience/Education Required): Familiarity with numerical methods in scientific computing and linear algebra. Some experience in running scientific calculations on a computer (e.g. MATLAB, Seismic Unix). No Unix/Linux skills are required. Aimed at final-year MSc students and PhD candidates.

Summary:
For research or application purposes, most people start testing ideas and concepts in high-level scripting packages such as MATLAB and Python. Once the testing becomes more complicated and the models grow larger and more realistic, such packages are no longer suited to the calculations; for these more compute-demanding problems, programming in C or Fortran is much more appropriate. An efficient implementation of a scientific problem requires knowledge of the computational hardware, compilers, the operating system, file I/O, parallelization, and CPU optimization.

This course is set up to teach participants the basic principles of high-performance computing under a Linux operating system. The hardware architecture of a computer, and how this hardware can restrict the performance of an application, is explained in detail. Style rules for developing readable code, the use of compilers and makefiles, and the writing of efficient code are illustrated with examples and general rules. The role of the operating system is explained, and some useful commands for the bash shell are shown during the course. The latest hardware for solving scientific problems and future trends in computer architecture are discussed. The concept of parallel programming is explained, and common pitfalls and guidelines are given. Software optimization and parallelization strategies (OpenMP and MPI) for standard CPUs are explained. Hands-on exercises (as homework or in-class exercises) are used to clarify problems and concepts.
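
To give a flavour of the material, the sketch below shows the style of OpenMP programming covered in the course. It is an illustrative example only, not taken from the course notes; the array size N and the file name scale.c are arbitrary choices.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double x[N];

    int main(void)
    {
        /* Fill the vector with some data. */
        for (int i = 0; i < N; i++)
            x[i] = (double)i;

        /* The iterations are independent, so a single directive lets
           the compiler divide the loop over all available cores. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            x[i] = 2.0 * x[i];

        printf("threads available: %d, x[N-1] = %f\n",
               omp_get_max_threads(), x[N - 1]);
        return 0;
    }

Compiled with, for example, gcc -fopenmp scale.c, the loop runs in parallel without any further changes to the serial code.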

Course Outline:

  • Hardware
    • Memory Hierarchy
    • Processor techniques
    • Future trends
  • Operating System
    • Functionality
    • IO management
• Compiling and linking
  • Programming
    • Languages
    • Optimization techniques
  • Parallelization
    • Hardware
    • OpenMP
    • MPI

The course starts with a discussion of hardware solutions for compute problems. From this hardware basis we move to the operating system, advance to the programming environment, and end with the system's users. During this bottom-up approach we touch on all aspects that are important for writing efficient (parallel) programs.

Learner Outcomes:

At the end of the course the student will be able to:

  • Understand the architecture of modern CPUs and how this architecture influences the way programs should be written
  • Write numerical software that exploits the memory hierarchy of a CPU to obtain close-to-optimal performance (see the sketch after this list)
  • Analyze an existing program for OpenMP and MPI parallelization possibilities
  • Evaluate the possibilities of accelerators to speed up computational work
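
As an illustration of the memory-hierarchy outcome above, the following sketch (illustrative only, with an arbitrary matrix size N) sums a matrix in C using two loop orders. C stores matrices row-major, so the row-wise version walks through memory with stride 1 and fully uses every fetched cache line, while the column-wise version jumps a full row per access and typically runs several times slower.

    #include <stdio.h>

    #define N 1024

    static double a[N][N];

    /* Stride-1 inner loop: consecutive accesses hit the same cache line. */
    double sum_rowwise(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Stride-N inner loop: almost every access touches a new cache line. */
    double sum_columnwise(void)
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0;
        printf("%f %f\n", sum_rowwise(), sum_columnwise());
        return 0;
    }

Timing the two functions (e.g. with clock() or omp_get_wtime()) makes the effect of cache-friendly access patterns directly visible.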

Teaching methods: Lecture, discussion and in-class exercises.

Instructor Biography:
Jan Thorbecke