Divide et impera? https://www.nextplatform.com/2019/02/05/the-era-of-general-purpose-computers...

Moore’s Law has underwritten a remarkable period of growth and stability for the computer industry. The doubling of transistor density at a predictable cadence has fueled not only five decades of increased processor performance, but also the rise of the general-purpose computing model. However, according to a pair of researchers at MIT and Aachen University, that’s all coming to an end. [...]

As they point out, general-purpose computing was not always the norm. In the early days of supercomputing, custom-built vector-based architectures from companies like Cray dominated the HPC industry. A version of this still exists today in the vector systems built by NEC. But thanks to the speed at which Moore’s Law has improved the price-performance of transistors over the last few decades, economic forces have greatly favored general-purpose processors. That’s mainly because the cost of developing and manufacturing a custom chip runs between $30 million and $80 million. [...]

But the computational economics enabled by Moore’s Law are now changing. In recent years, shrinking transistors has become much more expensive as the physical limitations of the underlying semiconductor material begin to assert themselves. The authors point out that over the past 25 years, the cost of building a leading-edge fab has risen 11 percent per year. In 2017, the Semiconductor Industry Association estimated that it costs about $7 billion to construct a new fab. Not only does that drive up fixed costs for chipmakers, it has reduced the number of semiconductor manufacturers from 25 in 2002 to just four today: Intel, Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and GlobalFoundries. [...]

It’s not just a deteriorating Moore’s Law. [...]
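A quick back-of-the-envelope check of that 11-percent figure, assuming simple compound growth (the arithmetic is mine, not the article's):

```python
# Fab construction cost growing 11% per year, compounded over 25 years.
annual_growth = 0.11
years = 25
multiplier = (1 + annual_growth) ** years
print(f"Cost multiplier over {years} years: {multiplier:.1f}x")

# Working backward from the SIA's ~$7B estimate for a new fab in 2017:
implied_earlier_cost = 7e9 / multiplier
print(f"Implied cost ~25 years earlier: ${implied_earlier_cost / 1e9:.2f}B")
```

At 11 percent per year, costs multiply roughly 13.6x over 25 years, which would put the mid-1990s price of a leading-edge fab in the neighborhood of half a billion dollars.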
For starters, you have platforms like mobile devices and the internet of things (IoT) that are so demanding with regard to energy efficiency and cost, and are deployed in such large volumes, that they necessitated customized chips even with a relatively robust Moore’s Law in place. [...]

Deep learning and its preferred hardware platform, GPUs, represent the most visible example of how computing may travel down the path from general-purpose to specialized processors. [...] But for deep learning, GPUs may only be the gateway drug. [...] Google’s own Tensor Processing Unit (TPU), which was purpose-built to train and use neural networks, is now in its third iteration. [...]

The authors anticipate that cloud computing will, to some extent, blunt the effect of these disparities by offering a variety of infrastructure to smaller and less catered-for communities. The growing availability of more specialized cloud resources like GPUs, FPGAs, and, in the case of Google, TPUs, suggests that the haves and have-nots may be able to operate on a more even playing field.

None of this means CPUs or even GPUs are doomed. Although the authors didn’t delve into this aspect, it’s quite possible that specialized, semi-specialized, and general-purpose compute engines will be integrated on the same chip or processor package. Some chipmakers are already pursuing this path.