I have been reading some fascinating articles centered around what is next in supercomputing. It’s all new to me because I haven’t really been paying much attention beyond the common workstation.
And it seems that with the advent of 3D Flash memory, computing is shifting away from the old paradigm of the CPU and its instruction set architecture toward what is being called memory-centric computing.
Now that scores of multicore processors can easily be cobbled together, developers have come to see that the real system I/O bottleneck is memory access. So the new trick has been to get these large, fast Flash memory systems as close to the processors as possible.
And the memory densities of the 3D flavors of Flash lend themselves to insanely powerful systems, which are expected to move into the exascale range somewhere between 2018 and 2020.
For those of you who have forgotten your metric unit prefixes: kilo = 10^3, mega = 10^6, giga = 10^9, tera = 10^12, peta = 10^15 and exa = 10^18, so 1 exaFLOPS is a billion billion (10^18) floating-point calculations per second.
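If you like to see the arithmetic spelled out, here is a throwaway Python sketch (just my own illustration, nothing from any real benchmark) confirming that exa really is a billion billion:

```python
# Metric prefixes as powers of ten
PREFIXES = {
    "kilo": 10**3,
    "mega": 10**6,
    "giga": 10**9,
    "tera": 10**12,
    "peta": 10**15,
    "exa":  10**18,
}

# 1 exaFLOPS = 10^18 floating-point operations per second,
# i.e. a billion (10^9) times a billion (10^9)
exaflops = PREFIXES["exa"]
print(exaflops == 10**9 * 10**9)   # prints True

# Each prefix is a factor of 1000 above the previous one
print(PREFIXES["peta"] * 1000 == PREFIXES["exa"])   # prints True
```

Put another way: an exascale machine does in one second what a petascale machine needs about seventeen minutes for.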
According to Wikipedia, “Exascale computing would be considered as a significant achievement in computer engineering for it is believed to be the order of processing power of the human brain at neural level. It is, for instance, the target power of the Human Brain Project.”
Many companies are not waiting for that future; they are actively building hybrid systems out of existing technologies, and those systems are rapidly advancing machine learning.
As William Gibson wryly noted, “The future is here now. It’s just not evenly distributed.”