Exercise 9: OpenMPI and InfiniBand
- Authors:
Richard Berger, Fernando Posada
Warning
Avoid rebooting all of your systems. In all cases, make sure you keep copies of your files and configurations before making changes.
One of the defining features of an HPC cluster is its high-bandwidth, low-latency interconnect, which allows massively parallel applications to run across multiple nodes and work together. The dominant technology in this space is currently InfiniBand.
The training clusters used in this course all have InfiniBand HCAs (host channel adapters) and are connected to a 108-port InfiniBand switch using copper cables.
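Before configuring anything on top of the fabric, it can help to confirm that the HCAs are actually visible to the operating system. The short C sketch below queries them through the libibverbs API; it assumes the libibverbs development headers are installed (for example from the rdma-core packages), and the file name and compile command are illustrative.

```c
/* list_hcas.c - minimal sketch: enumerate InfiniBand devices via libibverbs.
 * Build (assumes the libibverbs development package is installed):
 *   gcc list_hcas.c -o list_hcas -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (devices == NULL) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d InfiniBand device(s)\n", num_devices);
    for (int i = 0; i < num_devices; ++i) {
        /* Print the kernel device name, e.g. mlx5_0 */
        printf("  %s\n", ibv_get_device_name(devices[i]));
    }

    ibv_free_device_list(devices);
    return 0;
}
```

On a node with a working HCA this should report at least one device; command-line tools such as ibstat from the infiniband-diags package provide similar information.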
This exercise shows how to configure basic InfiniBand support in a cluster and how to use it for applications built with OpenMPI.
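As a concrete test target for the exercise, a minimal MPI program like the sketch below (file and hostfile names are hypothetical) can be compiled with mpicc from the OpenMPI installation and launched across nodes with mpirun, letting OpenMPI select the InfiniBand transport once it is configured.

```c
/* mpi_hello.c - minimal sketch of an MPI test program for the cluster.
 * Build: mpicc mpi_hello.c -o mpi_hello
 * Run across nodes, e.g.: mpirun -np 4 --hostfile hosts ./mpi_hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char hostname[MPI_MAX_PROCESSOR_NAME];
    int name_len = 0;
    MPI_Get_processor_name(hostname, &name_len);

    /* Each rank reports where it runs; ranks landing on different nodes
     * confirm that multi-node communication over the interconnect works. */
    printf("Hello from rank %d of %d on %s\n", rank, size, hostname);

    MPI_Finalize();
    return 0;
}
```

Depending on the OpenMPI version, the InfiniBand path is typically provided through UCX, which can be requested explicitly at launch time (for example with mpirun --mca pml ucx) to make sure the job is not silently falling back to TCP.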