IMPROVING PERFORMANCE IN HPC SYSTEMS UNDER POWER CONSUMPTION LIMITATIONS

Muhammad Usman Ashraf
Amna Arshad
Rabia Aslam

Abstract

Today's High-Performance Computing (HPC) systems rely on supercomputers and extensive parallel processing approaches to solve complicated computational tasks at the Petascale level of performance (10^15 calculations per second). The next breakthrough in the computing revolution is the Exascale level of performance, that is, 10^18 calculations per second: a remarkable achievement in computing that will have a profound influence on everyday life. Current supercomputers cannot reach such a level of performance within acceptable power dissipation limits. Although Exascale performance could in principle be obtained by simply multiplying the number of cores, the challenge of power consumption would remain. The primary focus of this study is therefore to analyse how performance can be enhanced under power consumption limitations in emerging technologies. To this end, the study presents a comprehensive analysis of existing strategies for enhancing performance and reducing power consumption in emerging Exascale computing systems. Finally, we suggest a massively parallel programming mechanism that is promising for achieving HPC Exascale system goals.
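
As a purely illustrative sketch of the kind of massively parallel, hybrid programming mechanism the abstract alludes to, the following C program combines MPI for inter-node work distribution with OpenMP for intra-node threading in a simple reduction. The problem size N and the summation kernel are hypothetical examples, not the model proposed in the article.

/* Hypothetical sketch of a hybrid MPI + OpenMP pattern: MPI ranks
 * partition the work across nodes, OpenMP threads exploit the cores
 * within each node. N and the kernel are illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* illustrative problem size (assume divisible by ranks) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI rank owns a contiguous slice of the global array. */
    int chunk = N / size;
    double *a = malloc(chunk * sizeof(double));
    double local_sum = 0.0, global_sum = 0.0;

    /* Intra-node parallelism: OpenMP threads fill and reduce the slice. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < chunk; i++) {
        a[i] = (double)(rank * chunk + i);
        local_sum += a[i];
    }

    /* Inter-node parallelism: MPI combines the per-node partial sums. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    free(a);
    MPI_Finalize();
    return 0;
}

Compiled with, for example, mpicc -fopenmp and launched with mpirun, each rank spawns OpenMP threads, so the available parallelism scales with nodes times cores per node, the property that Exascale-class machines must exploit while keeping per-core power in check.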

Author Biographies

Muhammad Usman Ashraf, GC Women University, Sialkot, Pakistan

Department of Computer Science and Information Technology

Amna Arshad, GC Women University, Sialkot, Pakistan

Department of Computer Science and Information Technology

Rabia Aslam, GC Women University, Sialkot, Pakistan

Department of Computer Science and Information Technology
