Scalability of Seismic Codes on Computational Clusters

by Rodal, Morten

Abstract (Summary)
Building HPC systems out of commercial off-the-shelf (COTS) hardware is becoming more and more common. The nodes in these systems are connected via some form of network, and each node has its own memory and processor(s). In traditional supercomputer hardware, by contrast, all memory is provided in a single shared address space.

In this master's thesis, we look at a seismic modelling application and the problem of porting it from a shared memory system to a distributed memory system. The application was provided by Statoil and uses a high-order finite difference method to solve wave propagation numerically.

The seismic modelling application may be partitioned into several independent tasks. We focus on how to distribute these tasks to a set of heterogeneous nodes, and on how performance is affected by having more than one processor per node. Parallelism can also be achieved within the application by having the compiler auto-parallelise the code using OpenMP directives.

The porting problem is solved by implementing a scheduler that sits atop the seismic modelling application. This scheduler distributes work to slave nodes in a master-slave fashion; in addition to servicing the other nodes, the master can also function as a slave.

We show that the idle time introduced by the scheduler is insignificant compared to the time spent on computation. Both the scheduler and the seismic modelling program have been benchmarked on several different architectures. Numerical comparisons of the program's output across these architectures show that the differences are negligible.
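The master-slave scheme described in the abstract can be sketched as follows. This is a minimal illustration only, not the thesis's actual implementation: threads and a shared queue stand in for cluster nodes and network messaging, and `process_shot` is a hypothetical placeholder for one independent seismic modelling task.

```python
import queue
import threading

def process_shot(task_id):
    # Hypothetical stand-in for one independent seismic modelling task.
    return task_id, task_id * task_id

def slave(tasks, results):
    # Slave loop: pull work until a sentinel (None) arrives.
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put(process_shot(task))

def run_master(n_tasks, n_slaves):
    tasks, results = queue.Queue(), queue.Queue()
    slaves = [threading.Thread(target=slave, args=(tasks, results))
              for _ in range(n_slaves)]
    for s in slaves:
        s.start()
    for t in range(n_tasks):          # master hands out the independent tasks
        tasks.put(t)
    for _ in range(n_slaves + 1):     # one sentinel per consumer, master included
        tasks.put(None)
    slave(tasks, results)             # the master also functions as a slave
    for s in slaves:
        s.join()
    return dict(results.get() for _ in range(n_tasks))
```

Because the tasks are independent, the results can be collected in any order; the sentinel-per-consumer pattern lets every node, master included, shut down cleanly once the work queue drains.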
Bibliographical Information:


School: Norges teknisk-naturvitenskapelige universitet

School Location: Norway

Source Type: Master's Thesis

Keywords: high performance computing (HPC); seismic modelling application; task distribution


Date of Publication: 01/01/2004
