Abstract

In the early days of computing, scientific calculations were done by specialized hardware. More recently, increasingly powerful CPUs took over and were dominant for a long time. Now, though, scientific computation is no longer confined to the general CPU environment. GPUs are specialized processors with their own memory hierarchy that require more effort to program, but for suitable algorithms they may significantly outperform serially optimized CPUs. In recent years GPUs have become much easier to program, where in the past they had to be programmed through the abstraction of a graphics pipeline.

EMGS in Trondheim is an oil-finding service that analyses seismic readings of the ocean floor to provide information about possible oil reservoirs. Data centers composed of CPU nodes do all the work today; however, GPU installations could be more cost effective and faster.

In this thesis we look at implementing the main part of one of their data analysis algorithms. For this we use the FDTD method as implemented in Yee bench by Ulf Andersson. We look at how to adapt it for the GPU using CUDA, how to parallelize the CPU implementation, and how to run the two together efficiently in a heterogeneous setup.

It is shown that this method has great potential for use on GPUs: speedups just short of 19x over a single CPU thread are achieved in this work. The FDTD method we use does, however, have some erratic memory operations that limit our performance compared to the best current GPU implementations, which can reach speedups of over 100x (though many of those also compare against single-CPU performance). The order in which we address memory is therefore even more important; we show that optimizing memory writes still improves performance considerably even when half of the memory reads will not coalesce. We show that care is needed when scheduling jobs on both the CPU and the GPU of the same node to avoid total performance going down: using all available resources on the host may not be beneficial.
Utilizing several parallel CUDA streams proves effective in hiding much of the overhead and delay caused by a busy CPU and main memory.

This work is not a final solution for EMGS' needs for this tool; other considerations and options than those discussed are also of interest. These topics are included in the future work section.