Continuum Computing - on a New Performance Trajectory beyond Exascale

The end of Moore's Law is a cliché that is nonetheless a hard barrier to future scaling of high performance computing systems. A factor of about 4x in device density is all that remains of this source of improved throughput, while a gain of 5x is required just to reach the milestone of exascale. The remaining sources of performance improvement are better delivered efficiency, of more than 10x, and alternative architectures that make better use of chip real estate. This paper discusses the set of principles guiding a potential future of non-von Neumann architectures as adopted by the experimental class of Continuum Computer Architecture (CCA), which is being explored by the Semantic Memory Architecture Research Team (SMART) at Indiana University. CCA comprises a homogeneous aggregation of cellular components (function cells) that are orders of magnitude smaller than lightweight cores; individually each is unable to accomplish a computation, but in combination they can do so with extreme cost efficiency and unprecedented scalability. It will be seen that a path exists, based on such unconventional methods as neuromorphic computing and dataflow, that will not only meet the likely exascale milestone in the same timeframe with much better power, cost, and size, but also set a new performance trajectory leading to Zettaflops capability before 2030.


Publication Date:
2018
Date Submitted:
Jul 01 2019
Pagination:
43609
Citation:
Supercomputing Frontiers and Innovations, 5
 Record created 2019-07-01, last modified 2019-07-09