Nvidia might find itself in a similar position soon if its Tegra line-up doesn't become significantly more popular and successful. Intel is squeezing them out of the laptop market with its integrated graphics, and I think that's quite a shame, because I don't think Intel played fair there: they tried to force manufacturers to include the integrated GPU even when the laptop shipped with discrete graphics. Now that Intel's integrated graphics are becoming "good enough" for most people, it's going to be a very smooth transition to push Nvidia out of the laptop market entirely.
So Intel is killing both AMD and Nvidia. Hopefully Nvidia has the last say with its ARM-based chips, though. If ARM ends up disrupting Intel in all its markets, and if Nvidia remains one of the best ARM chip makers, then Nvidia will ultimately win.
That can't be said about AMD, though. They're stuck in an x86 world where they're falling further and further behind Intel, and they haven't even begun to enter the ARM world; even if they do, it might be too late now, because they lack the ARM engineering experience of their competitors. Buying the OMAP division from TI might help, but OMAP isn't doing that great now either, and they keep using overclocked last-gen GPUs to compete with the new mobile GPUs, which I find a bit annoying. Plus, I don't know if AMD can even afford the acquisition, or the turnaround that would come with it. Another interesting acquisition target could be MIPS, which is for sale, but then AMD would also have to promote the benefits of that architecture on their own, which seems even harder.
NVIDIA still owns the high-end scientific computing market. That, combined with their ARM products, could save them in the future. AMD... their only advantage has been x86, and that is a dying market; kudos to the board at least for dumping the CEO who wanted to focus on the high-end PC market. AMD still has really good GPU tech, enough to compete with NVIDIA in the high-end non-GPGPU market, but that is very niche.
Actually, I fear for NVIDIA's HPC market as well. Kepler 2, in my eyes, is not well suited for GPGPU; they've optimized it for the gaming market in order to go after AMD. That is, they introduced many more cores, but each core now has less cache and fewer registers available to it. Fewer registers, especially, is deadly - register pressure has already been a limiting factor for many HPC applications.
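To put a rough number on the register point, here's a minimal sketch assuming a CUDA toolchain (the kernel is hypothetical and the figures in the comments are illustrative, not measurements): each SM has a fixed-size register file, so the more registers a thread needs, the fewer threads can be resident at once, and the less memory latency the SM can hide.

    // Hedged sketch (hypothetical kernel, not from any real codebase): why per-thread
    // register pressure hurts GPGPU codes.
    //
    // Compile with e.g. `nvcc -arch=sm_35 --ptxas-options=-v pressure.cu` to see the
    // per-thread register count the compiler actually picked.

    #define N (1 << 20)

    // Keeps a lot of per-thread state live across the loops, the kind of pattern you
    // see in stencil or small-matrix HPC kernels.
    __global__ void heavy_state(const float *in, float *out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= N) return;

        float acc[32];                      // many live accumulators -> high register use
        for (int k = 0; k < 32; ++k)
            acc[k] = in[i] * (k + 1);

        float sum = 0.0f;
        for (int k = 0; k < 32; ++k)
            sum += acc[k] * acc[k];
        out[i] = sum;
    }

    int main()
    {
        float *in, *out;
        cudaMalloc(&in,  N * sizeof(float));
        cudaMalloc(&out, N * sizeof(float));

        // Back-of-the-envelope: if ptxas reports ~64 registers/thread, a 64K-register
        // SM register file only fits ~1024 resident threads - half the 2048-thread
        // hardware limit - so occupancy drops no matter how many cores the SM has.
        heavy_state<<<(N + 255) / 256, 256>>>(in, out);
        cudaDeviceSynchronize();

        cudaFree(in);
        cudaFree(out);
        return 0;
    }

And if you cap register usage (e.g. with -maxrregcount) to win occupancy back, the compiler spills to local memory instead, which is often just as bad - that's the corner HPC kernels get painted into.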
Meanwhile, Intel is pushing out its Xeon Phi boards with huge memory bandwidth and OpenMP support. If there isn't some kind of performance surprise on either Intel's or NVIDIA's side, I see dark clouds coming for NVIDIA.
Well, Intel recently introduced its Xeon Phi co-processor, which according to them offers about the same performance as Nvidia's chips without requiring you to code in CUDA. But I never trust Intel's marketing anyway because they always seem to exaggerate in some way or be misleading on purpose, so we'll see how that goes. Plus, I'm interested to see what comes out of Nvidia's Project Denver, or whatever they are calling it now (Nvidia's 64-bit custom ARMv8 SoC for HPC and servers, which should arrive in 2014).
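For context on the "without CUDA" pitch, here's a rough sketch (a hypothetical toy example, nothing more) of what even a trivial SAXPY picks up once you port it to CUDA - the one-line loop, which an OpenMP-style compiler takes more or less as-is, turns into a kernel plus explicit memory management:

    // The plain C/OpenMP version is a single loop:
    //   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
    // The CUDA port of the same thing:

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;

        // Device allocation and data movement are part of the port.
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));   // stand-in for real host->device copies
        cudaMemset(y, 0, n * sizeof(float));

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        cudaFree(x);
        cudaFree(y);
        return 0;
    }

None of that is hard, but multiply it across a large legacy codebase and you can see why "just recompile with pragmas" is an attractive sales pitch - whether the resulting performance actually matches a hand-tuned CUDA kernel is exactly the part I'd want benchmarked independently.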
>But I never trust Intel's marketing anyway because they always seem to exaggerate in some way or be misleading on purpose, so we'll see how that goes.
I think you can safely remove "Intel's" from that sentence and it remains just as accurate, or perhaps more so. I really don't mean to hate on marketing, but their job is to sell a story. Good marketers walk that fine line between outright lying and carefully sticking to the "happy path" that makes whatever they're selling the cure for what ails you, while failing to mention the problems. In general, once you dig into things, you almost always start finding the trade-offs/drawbacks that marketing "forgot" to mention. Thomas Sowell was right when he said: "There are no solutions ... only trade offs".
1. Intel is safe for the foreseeable future; they have a unique position as far as manufacturing and process technology go.
2. I suspect the engineering barriers to developing an OMAP-class processor are much lower than what AMD has to deal with in the x86 and graphics markets. There is a large amount of off-the-shelf IP for mobile application processors, and AMD has plenty of internal knowledge they could use to customize the ICs (graphics, core architecture).
The problem there is that I'm not sure who the buyers are - the two giants (Apple and Samsung) are making their own processors, so unless you're making cut-price ICs (a very crowded market) you are somewhat limited. I don't think Tegra is bad from a technical standpoint; it's just that the number of really high-volume customers is not infinite.
Nevertheless, I'd love to see AMD stepping away from the x86 game into a broad ARM portfolio. I'm sure ARM would play ball - it'd be a major win for them - and with ATI's graphics/HPC expertise we could see some killer solutions.
That might not last as long as you think. Apparently GlobalFoundries thinks it can reach parity with 14nm FinFET chips in 2014, which is Intel's (new) timeline as well. I think this is happening for two reasons: Intel has experienced delays of a few months with each recent generation, and GloFo has also found a shortcut to the 14nm process:
"By moving to 14nm finfet while keeping 20nm interconnect, GF has brought forward the introduction of its 14nm process by one year."
When 3dfx started flailing, they sued NVidia for IP infringement until NVidia purchased them. I could see the board of AMD doing the same thing, patent trolling everyone in sight until somebody buys them out.
Right now, Intel's and ARM's businesses are very much segregated. Each of the two is making inroads into the other's territory, but with relatively meager success.
To clarify: ARM's business is the embedded and booming smartphone market, while Intel has the never-going-to-cease-to-exist high-performance server and workstation market. The fate of everything in between - consumer and "office" PCs - is very much unknown.
Cloud-based data centers are beginning to adopt ARM chips for power-consumption reasons. If that trend picks up along with everyone moving to cloud providers, then I think even the high-end PC server business could start dying. Incidentally, this is a market where NVIDIA could begin to really cream Intel; a lot of scientific computing has already moved over to CUDA/Tesla (granted, there is a huge difference between scientific and enterprise applications). I don't see the same thing happening to workstations, however - it's difficult to move that work into the cloud.