I'd hardly call this a "Golden Age", precisely because of the shortcomings mentioned in the article. Regardless of how you re-architect a processor, you will still end up hitting CMOS limitations in time.
By contrast, my fantasy "Golden Age" would be one with multiple viable replacements for CMOS actively being used in a variety of processors.
Modern architectures have various inefficiencies that could be addressed if we shifted focus to more parallelism and simpler processors (maybe something like Amorphous Computing). That requires a different programming model from the sequential, imperative paradigm, though.
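Rough sketch of the kind of shift I mean, in Python (work() here is just a placeholder for any independent unit of computation):

    from concurrent.futures import ProcessPoolExecutor

    def work(x):
        # Placeholder for any independent unit of computation.
        return x * x

    if __name__ == "__main__":
        data = range(1000)

        # Sequential, imperative version: one core, one instruction stream.
        seq = [work(x) for x in data]

        # Data-parallel version: same result, but no loop-carried state,
        # so the runtime can spread it across many simple cores.
        with ProcessPoolExecutor() as pool:
            par = list(pool.map(work, data))

        assert seq == par

The point being that once there's no shared mutable state, the runtime is free to farm the work out to however many simple cores you have.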
I'm somewhat comforted by the fact that there are fundamental physical limits to computational efficiency[1] which we aren't at yet. MOSFETs look to be just about played out, but there are tons of other potential computational substrates: carbon nanotube transistors, photonics, nano-rod logic, magnetic coupling, DNA computing, etc. We're in for a big interregnum before we get a new paradigm working better than our existing one, but there's no reason to believe that progress in computation is ending permanently.
Not sure why the downvotes, whether or not you take this outcome as a given. CMOS process scaling has been a uniquely powerful driver of the tech industry since the 1990s. It’s not obvious how you lose that and don’t have an impact.
We’ve gotten a bit of a reprieve because things like GPUs turn out to be good architectures for many compute intensive workloads.
How will reaching the end of Moore’s law cause a crash like this?
Do you mean that future requirements will outstrip capacity? I don’t think this has been true for some time, we run a lot more systems and tend to scale horizontally; doubling compute power every n period isn’t a hard requirement imo.
If we ever have an industrial revolution or massive productivity gains again, it will most likely involve autonomous machines for things like farming, construction, delivery, transport, etc. All of these depend on increases in power efficiency or performance. It is possible that we are one or two "doublings" away from achieving this in a mobile package, and then Moore's law suddenly stops.
For example, Tesla is betting on conventional cameras and trying to overcome their shortcomings with computationally intensive machine vision. Tesla's custom chips allow them to have more processing power, which means they can install more high-resolution cameras and other sensors.
Without higher-throughput systems, including but not limited to CPUs, you can to some degree just throw more servers at the problem. But now you’ve doubled (maybe more like tripled) the cost to get 2x the performance. And some applications, like mobile and sensors, depend on miniaturization for a given capability.
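To put rough numbers on that, here's a toy model using Amdahl's law; the 0.9 parallel fraction is just an assumption for illustration:

    # Amdahl's law: speedup from n servers when a fraction p of the
    # work parallelizes. Cost grows linearly with n; speedup doesn't.
    def speedup(n, p=0.9):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 4, 8):
        print(f"{n}x cost -> {speedup(n):.2f}x speedup")
    # 2x cost -> 1.82x, 4x cost -> 3.08x, 8x cost -> 4.71x

So paying 2x for ~1.8x is the good case; the gap only widens as you add machines.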
I was expecting an answer more along the lines of gallium arsenide or SiC process technology, but OK!
It was always going to end at some point, because we're down to gate features just over 100 atoms thick. What we're seeing is not an "end" but a soft landing that really started some years ago with massively multicore processors. The industry has started adapting to horizontal scaling rather than "free" process improvements. There is still an awful lot of low-hanging fruit on the software side for performance improvements, but that's harder because it involves removing or adapting abstraction layers.
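A contrived Python example of that software-side fruit (the pattern generalizes to much heavier frameworks):

    import timeit

    data = list(range(1_000_000))

    # A tiny abstraction around addition, paid for once per element.
    def add(a, b):
        return a + b

    def layered_sum(xs):
        total = 0
        for x in xs:
            total = add(total, x)
        return total

    def flat_sum(xs):
        return sum(xs)  # same result, abstraction layer removed

    print(timeit.timeit(lambda: layered_sum(data), number=10))
    print(timeit.timeit(lambda: flat_sum(data), number=10))
    # Expect the layered version to be many times slower.

Multiply that per-call overhead across a dozen layers of frameworks and you can see how much performance is still sitting on the table.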
The end of Moore's Law is a certainty regardless of technology. Something other than CMOS might yield smaller and more power efficient features up to a point, and beyond that point we'd be back where we are today with CMOS. We might as well start dealing with the problem more seriously, because each attempt to kick the can down the road gets more and more expensive.
The valuation of all the tech companies is based on future growth, meaning ever-increasing sales.
Well, what happens when next year's smartphone really isn't any better or cheaper than last year's smartphone? Sure, there will always be some business because people drop their phones all the time, but sales won't be as high without upgrades driving it.
Lots of people (investors) will be unhappy with the tech sector at that point.
Next year's tech will still be cheaper. If they don't have to retool for the next gen because they literally can't, then we can enjoy the economies of scale in producing the same tech over a longer period of time.
I also dispute the scope you assign to this "upgrade" cycle. It's a huge overstatement to claim that the whole tech sector would "collapse". The tech sector is far larger than phones and personal computers.
> Next year's tech will still be cheaper. If they don't have to retool for the next gen because they literally can't, then we can enjoy the economies of scale in producing the same tech over a longer period of time.
Slightly cheaper, maybe, as the fab equipment depreciates.
> I also dispute the scope you assign to this "upgrade" cycle. It's a huge overstatement to claim that the whole tech sector would "collapse". The tech sector is far larger than phones and personal computers.
It used to be that new hardware would drive new software sales, and vice versa. You'd buy a new PC, because Windows XP ran too slow on your old one. And a couple years later, some awesome game comes out, and you'd need to buy a new PC to play it.
Expansion, not replacement (of broken systems), is what drove the PC segment. It was the foundation of the growth that affected everything that used or could benefit from PCs. And we've seen that with the mobile segment.
And we may yet see that with the VR/AR segment, but that's going to be a tough hill to climb if we can't count on continuing IC improvements.
> Slightly cheaper, maybe, as the fab equipment depreciates.
Sure, but the same equipment would also be cheaper to replace than the capital required for a next-generation fab, for the same reasons.
> Expansion, not replacement (of broken systems), is what drove the PC segment. It was the foundation of the growth that affected everything that used or could benefit from PCs. And we've seen that with the mobile segment.
Expansion into new sectors, not expansion into the same sectors. The same will still happen. Solutions will become more customized rather than remain general purpose. There remain considerable gains in parallel computation (GPUs, for instance), and ASICs will become the new hotness (again).
Furthermore, if clients cease to expand, then more computation will happen on server infrastructure. Fat clients will become thin clients again, repeating a cycle that has already happened multiple times.
You're also neglecting the investment in infrastructure (data centers, networks) that will continue to grow, if not accelerate. Current hardware is more than sufficient for most of this.
Reaching "the end of Moore's Law" has so far been ended up being more of a problem for Moore's company than anyone else.
Between GPUs, more power-efficient designs (due to heavy mainstream interest in mobile technology), more work put into algorithmic efficiency, and promising early developments in quantum computing, it appears the focus has shifted away from relying on more transistors per square inch to ensure the industry's future growth.