Challenges and opportunities in the era of exascale super-computing for the development of Earth System Model
Shian-Jiann Lin
LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences
Shian-Jiann Lin and Zhi Liang

Abstract

The creation of ENIAC (Electronic Numerical Integrator And Computer, constructed from a vast number of vacuum tubes) after WW2 made the development of the “General Circulation Model” (GCM) feasible. Since the establishment of the Geophysical Fluid Dynamics Laboratory at Princeton in the late 1960s, the continued evolution of GCMs has, in fact, not been guided entirely by scientific curiosity or requirements. Rather, it has effectively been driven by the size and power of supercomputing facilities. For example, the vertical and horizontal resolutions of climate and weather models are severely limited by the available memory and the speed of the “CPU”. Various physical parameterizations (“physics”) were subsequently developed to make up for the unresolvable “sub-grid-scale motions”, which were as large as 1000 km, a totally unthinkable task from a Computational Fluid Dynamics (CFD) standpoint. Even in today’s climate models, the so-called “sub-grid” is still in the range of 100 km. It is not entirely political or “anti-science” that the credibility of some climate simulations has been called into question. Meanwhile, it is generally accepted that, to reduce the uncertainties in climate simulations, we must rely less on sub-grid parameterization, which requires increasing resolution and adapting the sub-grid parameterizations to the ever-changing resolutions.

In addition to various scientific assumptions (as required by the sub-grid-scale “physics”), the advancement of computer hardware has also been forcing changes in the programming style of the “GCM”. These changes are, at times, dramatic. In the 1980s, vector-processing supercomputers (based on a single gigantic CPU) were the norm. We saw a gradual change to shared-memory multi-threading systems in the 1990s, and then a dramatic shift to the so-called MPP (Massively Parallel Platform) from the late 1990s until this date. With the recent introduction of exascale computing systems, perhaps the most dramatic shift will happen (in the next 5 years) to weather and climate modeling: the widespread utilization of “accelerators” within already massively parallel systems, with one million cores or more, and with multi-threading capability. To utilize these systems efficiently, a minimum requirement is that the model must be able to scale well (to at least ~10,000 cores). The “accelerators” (such as GPUs) are the key to the increased throughput. However, as opposed to weather models, traditional climate models are facing more difficult challenges, which at the same time are also opportunities for scientific advancement. I will discuss the challenges and opportunities in more detail in the talk.
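
The scaling requirement mentioned above can be made concrete with a small sketch. The fragment below is not taken from any actual GCM; it is a minimal, hypothetical illustration of the distributed-memory pattern (1-D domain decomposition with MPI halo exchange around a local stencil update) that underlies scaling a model across thousands of cores, with the local loop being the part an accelerator or threads would typically take over. The grid size, time-step count, diffusion coefficient, and file name are assumptions chosen only for illustration.

/*
 * halo_sketch.c -- hypothetical illustration only, not code from any real GCM.
 * 1-D domain decomposition of a diffusion step with MPI halo exchange.
 *
 * Build (assuming an MPI installation):  mpicc -O2 halo_sketch.c -o halo_sketch
 * Run:                                   mpirun -np 4 ./halo_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOCAL 1000   /* grid points owned by each rank (assumed size) */
#define NSTEPS 100    /* number of time steps (assumed) */

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Local field with one halo (ghost) point on each side. */
    double *u    = malloc((NLOCAL + 2) * sizeof(double));
    double *unew = malloc((NLOCAL + 2) * sizeof(double));
    for (int i = 0; i <= NLOCAL + 1; i++)
        u[i] = (double)rank;                    /* arbitrary initial condition */

    int left  = (rank - 1 + nprocs) % nprocs;   /* periodic neighbours */
    int right = (rank + 1) % nprocs;
    double nu = 0.1;                            /* diffusion coefficient (assumed) */

    for (int step = 0; step < NSTEPS; step++) {
        /* Halo exchange: send edge points, receive into the ghost points. */
        MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                     &u[0],      1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  1,
                     &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Purely local stencil update: the loop that GPU "accelerators"
         * or shared-memory threads would typically offload or parallelize. */
        for (int i = 1; i <= NLOCAL; i++)
            unew[i] = u[i] + nu * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

        double *tmp = u; u = unew; unew = tmp;  /* swap time levels */
    }

    if (rank == 0)
        printf("done: %d ranks, %d points per rank\n", nprocs, NLOCAL);

    free(u);
    free(unew);
    MPI_Finalize();
    return 0;
}

The design point the sketch is meant to convey is that only the thin halo is communicated while the bulk of the work stays local; keeping that communication-to-computation ratio small is what allows this pattern, in principle, to scale to the ~10,000 cores and beyond discussed in the abstract.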