PANEL TITLE
Execution and Programming Models – Extreme-Scale and Beyond
PROGRAM
Time and date: | Friday, July 19, 4:15 pm – 5:15 pm |
Location: | Marquette University, Milwaukee, Wisconsin. Room AMU 163 |
Session Chair: | Stéphane Zuckerman, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS |
Session Chair: | Guang R. Gao, University of Delaware, Endowed Distinguished Professor |
Erik Altman | IBM | slides |
Hironori Kasahara | Waseda University | slides |
Karthikeyan Sankaralingam | University of Wisconsin | |
Jean-Luc Gaudiot | University of California | slides |
CJ Newburn | NVIDIA | slides |
ABSTRACT
Computing systems have undergone a fundamental transformation: exploiting parallelism has become the only viable means of meeting ever-increasing performance demands. However, achieving scalable parallelism requires confronting the challenges of speed, energy efficiency, and reliability in this modern era of parallel computing — especially as the expected scale and complexity of future high-performance computing (HPC) systems are unprecedented. We have witnessed the community's effort to chart a viable path forward for the design of future HPC systems over the next 10–15 years, particularly in light of the limits of current semiconductor technology (the post-Moore's Law era).
This panel provides a forum to discuss and debate the challenges and solutions in the area of parallel programming models and program execution models, with a particular focus on extreme-scale parallel computing and systems.
QUESTIONS TO PANELISTS
Question 1: Program Execution Model (PXM) vs. Programming Model (PM)
What is the main distinction (as well as the relation) between the concepts of PXMs and PMs? Please answer briefly, in your own words and from your own intuition.
Question 2: System-level API and Fine-Grain Parallelism
There is heated discussion and debate over the following vision: "in order to effectively and efficiently exploit the vast parallelism (at both coarse-grain and fine-grain levels) at extreme scale, we need to break some traditional abstractions at both the PXM and PM levels. This is essential to the design of a system-level API for future extreme-scale parallel computing systems". What is your opinion of this vision?
Question 3: On the Programmability of Dataflow Models
There have been significant concerns that "the dataflow/codelet community has always claimed that their model is more productive; however, more recent work on task parallelism, as well as the recent OCR project, has experimented with these types of models, and the scientific application community actually found them less productive". What is your observation/opinion?