Analysis of TCP/IP Overhead on Overlapping Message Transfer and Computation in a Distributed Memory System Architecture


Mohamed Faidz Mohamed Said, Mohd Nasir Taib, Saadiah Yahya

Abstract

High Performance Computing (HPC) systems are now commonly built on open-source software and clustering technology. The growth of clustering is also driven by the demand for parallel programming on either shared memory systems (SMS) or distributed memory systems (DMS). A DMS hardware platform using the Message Passing Interface (MPI) programming model is easier to build and scale than an SMS platform, because each node accesses its own local memory directly and communication is performed mainly through explicit send/receive message primitives. These primitives come in non-blocking and blocking variants. With non-blocking communication, a call can return immediately, without waiting for the communication operation to complete, which allows message transfer to overlap with computation. By empirically measuring transfer times and rates and capturing the packets, vital information about the communication can be extracted. The objective of this research is to investigate the TCP/IP protocol statistics of non-blocking and blocking communications applied over various message and overlap sizes. Understanding the communication overhead of these distinct MPI communication primitives helps the programmer write efficient parallel software. In this research, a four-node PC cluster is built on a private dedicated LAN using the message-passing library MPICH as its parallel software. It is demonstrated conclusively that, for long message sizes, the large difference in the average Mbit per second of the captured packets shows that non-blocking overlapped messages provide more efficient communication than blocking messages, and will therefore eventually contribute to improved performance of parallel applications.

Keywords: MPICH, cluster computing, overlapping.

