This research introduces a parallel approach to visualizing a three-dimensional (3D) heart model on a cluster of homogeneous computers. A 3D heart model is a virtual heart that attempts to simulate the functionality of a real human heart. The model was first developed by Charles Peskin and David McQueen using the Immersed Boundary Method (IBM). The IBM is an approach for modeling and simulating elastic structures immersed in an incompressible fluid; it is commonly used in biological fluid dynamics to describe fluid-structure interaction.
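To make the IBM coupling step concrete, here is a minimal one-dimensional sketch (the function names and values are illustrative, not taken from this work): a Lagrangian fiber force is spread onto an Eulerian fluid grid through Peskin's smoothed delta kernel, which is the mechanism by which the elastic structure acts on the surrounding fluid.

```python
import math

def peskin_delta(r):
    # One common choice of smoothed delta kernel (cosine form);
    # it is nonzero only within two grid cells of the fiber point.
    return (1.0 + math.cos(math.pi * r / 2.0)) / 4.0 if abs(r) <= 2.0 else 0.0

def spread_force(grid_x, fiber_x, fiber_f, h):
    """Spread Lagrangian fiber forces onto an Eulerian grid of spacing h."""
    body_force = [0.0] * len(grid_x)
    for X, F in zip(fiber_x, fiber_f):
        for i, x in enumerate(grid_x):
            body_force[i] += F * peskin_delta((x - X) / h) / h
    return body_force
```

Because the kernel integrates to one, the total force on the grid matches the force carried by the fiber point, which is what makes the coupling physically consistent.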
The heart model dataset consists of cardiac fiber orientation, pressure, velocity, stress, and other value markers. This enables us to manipulate the heart model to study and identify defects in the heart. However, processing and visualizing such a large dataset requires a very high-end computer, which is not economical for most researchers, especially those with limited funding and resources. This motivates the need for a cost-effective method to visualize the heart model using off-the-shelf computers. The computers used in this research are identical to one another, with the same processor speed, storage, memory, and so on. These individual computers are connected in a Local Area Network (LAN) to form a homogeneous cluster. Communication among the computer nodes is made possible using the Message Passing Interface (MPI). A visualization pipeline is constructed on each node, and the sort-last rendering algorithm is used to process the heart dataset. In this initial stage of heart model visualization, our scope focuses on the fiber orientation data. This paper also describes the techniques used to pre-process and convert the simulated datasets for visualization.
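As a rough illustration of the sort-last stage (a hypothetical serial sketch; in the actual framework the partial images would be gathered across nodes over MPI), each node renders its own subset of the points into a full-resolution image with a depth buffer, and the compositor keeps the nearest fragment at every pixel:

```python
def composite_sort_last(partial_frames):
    """Depth-composite full-resolution partial renderings (sort-last).

    Each frame is a list of (depth, color) pairs, one per pixel; the
    fragment with the smallest depth wins at each pixel, exactly as in
    a distributed z-buffer test.
    """
    n_pixels = len(partial_frames[0])
    final = []
    for p in range(n_pixels):
        depth, color = min(frame[p] for frame in partial_frames)
        final.append(color)
    return final
```

The appeal of sort-last here is that each node can render its data partition independently of screen location, at the cost of one image-sized merge at the end.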
The homogeneous cluster, also referred to as the server cluster, consists of 8 nodes, one of which acts as the master node. There are 16 processors in total, as each node has a dual-core processor. The client computer connects to the master node, specifying the total number of processors to be used when rendering the heart model. Results showed that the more processors used, the shorter the time taken to render the heart model. Apart from that, we also analyzed the rendering performance of our visualization framework by loading it with datasets of different sizes, ranging from 1 million to 20 million points. The results showed good scalability: as the dataset size grows, the rendering time increases only gradually. This experiment also provides useful information on the optimal number of processors for rendering the heart model at different point counts. Finally, we compared our findings with those of our teammates, who used a heterogeneous cluster architecture to visualize the heart model, in order to discover which computer architecture performs better when rendering the heart model with the proposed methodology. The results reveal that both architectures scale well; however, the homogeneous cluster yields a faster rendering speed than the heterogeneous cluster.
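The existence of an optimal processor count can be sketched with a toy timing model (the constants and function names below are illustrative assumptions, not measured values from this work): per-point rendering work shrinks as processors are added, while sort-last compositing cost grows with the number of partial images to merge.

```python
def predicted_render_time(n_points, n_procs, t_point=1e-6, t_comp=0.05):
    # Rendering work is divided evenly across processors, while
    # compositing cost scales with the number of partial images.
    render = n_points * t_point / n_procs
    compositing = t_comp * n_procs
    return render + compositing

def best_proc_count(n_points, max_procs=16):
    # Choose the processor count (up to the cluster's 16 cores) that
    # minimises the modelled total time for this dataset size.
    return min(range(1, max_procs + 1),
               key=lambda p: predicted_render_time(n_points, p))
```

Under such a model, small datasets saturate at modest processor counts, while the largest datasets benefit from all 16 processors, matching the qualitative trend reported above.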
This research presents a low-cost, scalable data distribution strategy for parallel point-based rendering on a homogeneous cluster computer. It is unnecessary to purchase an expensive high-end supercomputer, as comparable performance can be achieved using interconnected off-the-shelf computers that are commonly available in a research laboratory. The results will also assist researchers in building an efficient visualization system by choosing the optimal number of processors for a dataset with a given number of points. The methodology paves the way for visualizing larger heart model datasets that may contain more value markers, providing more useful and detailed information about the heart.