For years now, Intel's only way to get more performance out of the process nodes they actually have working has been to push power and heat higher.
Now, the Tiger Lake performance per watt story just isn't that impressive compared to the competition.
>Here we present the 15W vs 28W configuration figures for the single-threaded workloads, which do see a jump in performance by going to the higher TDP configuration, meaning [Tiger Lake] is thermally constrained at 15W even in [single threaded] workloads.
Comparing it against Apple’s A13, things aren’t looking so rosy as the Intel CPU barely outmatches it even though it uses several times more power, which doesn’t bode well for Intel once Apple releases its “Apple Silicon” MacBooks.
https://www.anandtech.com/show/16084/intel-tiger-lake-review...
Same thing happened to Nvidia, and I'm very perplexed why this is not widely reported.
I wonder how the 3070 and 3060 are going to compare, in terms of performance per watt, when compared to the 2080/2070 series. Based on the current numbers, they'll possibly show very little improvement.
What “current numbers” do you refer to? Of course it may depend on workload, but the 3080 has, at least in one benchmark[1], better performance per watt than all the compared cards (7% better than the 2080 Ti, 21% than the 2070, 32% than the 2080, 67% than the 1080 Ti). Total power consumption is up quite a bit (25% over the 2080 Ti), but performance is up by even more than power.
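To make that concrete, relative perf/watt is just the ratio of each card's throughput-per-watt figure. A minimal sketch with made-up placeholder numbers, chosen only to illustrate the method (they roughly reproduce the +7% efficiency and +25% power figures above, but are not the benchmark's real data):

```python
# Illustrative throughput (fps) and board power (W); placeholder values,
# not actual review data.
cards = {
    "RTX 3080":    {"fps": 134.0, "watts": 313.0},
    "RTX 2080 Ti": {"fps": 100.0, "watts": 250.0},
}

def perf_per_watt(card):
    """Throughput divided by power draw: higher is more efficient."""
    return card["fps"] / card["watts"]

baseline = perf_per_watt(cards["RTX 2080 Ti"])
for name, card in cards.items():
    delta = (perf_per_watt(card) / baseline - 1) * 100
    print(f"{name}: {perf_per_watt(card):.3f} fps/W ({delta:+.0f}% vs 2080 Ti)")
```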
NVIDIA pushed the 3080's stock performance a little too high up the perf/watt curve. If you limit it to the TDP of the 2080 Ti, you lose 4% performance but you get much better efficiency: https://www.computerbase.de/2020-09/geforce-rtx-3080-test/6/
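For anyone wanting to reproduce that kind of test on their own card: a TDP cap like that can be set with `nvidia-smi -pl <watts>`, or programmatically via NVML. A minimal sketch using the pynvml bindings, assuming GPU index 0, a hypothetical 250 W target (roughly 2080 Ti class), and root privileges:

```python
# Sketch: cap the GPU's board power via NVML (pip install nvidia-ml-py).
# Assumes GPU index 0 and an illustrative 250 W target; needs root.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# NVML reports and accepts power limits in milliwatts.
current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"current limit {current_mw // 1000} W, allowed {min_mw // 1000}-{max_mw // 1000} W")

target_mw = 250_000  # clamp the request to what the board actually permits
pynvml.nvmlDeviceSetPowerManagementLimit(handle, max(min_mw, min(target_mw, max_mw)))

pynvml.nvmlShutdown()
```

The idea being that the last few percent of performance cost a disproportionate amount of power, so pulling the limit back down the curve recovers efficiency cheaply.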
It’s not clear yet, since we’ve no idea if there will be a 3080 Ti. The 3090 throws out the naming convention from the past few generations, leaving it a bit of a mystery. Nvidia may do as they did with the 1080 Ti: not release till nearly a year after the 1080 (whereas 2080 & 2080 Ti were launched just a week apart).
Given the 3090 is not too much faster than 3080, it seems there may not be room there. But then again, the 1080 Ti was as fast as the Titan X. So…yea.
This is actually what I find extremely dishonest (which is, in other words, extremely good marketing) - a lot of websites are comparing the 3080 with the 2080, and reporting massive improvements... which doesn't make much sense.
I think 3080 vs 2080 Ti is the only possible comparison, but websites should make it very clear that it's an unfair comparison.
Probably the only comparison that actually makes sense is the (future) 3070 vs 2080 Ti. I'm not surprised that Nvidia pushed back its release (it was supposed to come out earlier).
The end of Moore’s law has been broadly reported for the last decade and is widely understood. Nobody is likely to get huge efficiency improvements in general processors ever again, not like what we saw in the past, and it isn’t news anymore.
Algorithmic improvements, custom domain specific ASICs, and maybe quantum computing or other physical processes in the future are where large efficiency deltas might come, but for now small improvements are here to stay for all chip makers.
Moore's law may be considered dead at Intel, but TSMC does not agree.
>Wong, who is vice president of corporate research at Taiwan Semiconductor Manufacturing Corp, gave a presentation at the recent Hot Chips conference where he claimed that not only is Moore’s Law alive and well, but with the right bag of technology tricks it will remain viable for the next three decades.
>“It’s not dead,” he told the Hot Chips attendees. “It’s not slowing down. It’s not even sick.”
I'm not sure chip fabs are ever going to say Moore's law is dead -- I interned at Intel last summer, and Moore's law was pretty much all they could talk about. (In fact, they made very similar claims at the exact same conference [0]).
In fact, multiple gates can be created in the same transistor, in an effect SFN calls “multi-tunnel.” Multiple NOR and OR gates can thus be created from a single Bizen transistor, allowing creation of logic circuits with many fewer devices. This can result in a three-fold increase in gate density with a corresponding reduction in die size for integrated circuits based on the transistors. Summerland said that SFN is also creating a reduced device count processor architecture to enable analogue computing with Bizen transistors.
There's a lot more to the tapering of perf improvements than changes in Moore's law (which is about circuit complexity increasing at a given cost): namely, the difficulty of translating the increasing transistor budget into IPC improvements or, failing that, of solving parallel programming. And the end of clock speed improvements.
FWIW, I don’t think perf improvements are slowing down, I just think efficiency improvements in ICs are. Flops per watt of general compute isn’t moving quickly, and can’t anymore. But we can still make bigger parallel machines, design better algorithms, solve new problems, etc.
Outside HPC/ML I think our programs are now trading off useful ops per watt to take some advantage of the elusive beast called thread level parallelism. A web browser is happy to get a speedup of N by throwing 2N or 4N spinning threads at the problem if correctness and stability can be retained.
What makes sense to accelerate, how to integrate it and balance accelerators vs. general cpus, and how to expose it all to the programmer all seem like fun and interesting problems.
It is a cool time! Yeah I totally agree, and I think it’s awesome that you’re looking at it as an opportunity to learn and have fun doing it. Some people worry, and others embrace the change and make good things happen. I think I can attest to your vision since I work for a chip maker and I’m involved in the hardware & software design of some domain specific computing - it has been a blast, and we are learning all kinds of fun things.
The press follows, rather than identifies, the trend. It doesn't help that in most 'news' organizations, the news is considered entertainment and isn't terribly rigorous.
The 2xxx series wasn't terribly impressive to many, so they've just started to plateau the way Intel did at least 5 years ago. We're heading into AMD's 4th iteration of processors that basically mop the floor with Intel from a price/performance and performance/watt standpoint, and it's only just starting to become accepted in the mainstream that Intel is in trouble. It will take another generation or two of products before the press catches on to the fact that what nVidia is telling everyone isn't true. Of course, it would help if there were some competition to point to that helps make the case.
Coincidentally there was a Twitter fight about that topic last night. From a consumer perspective there's nothing that can really be done about it. Chip designers have known for years that "Moore's Law" is slowing so it's not news to them.
Was there an issue with any of the math in the video? As far as I could tell, it was just straightforward ratios of performance-per-watt figures.
Software and platform lockdown are a lot weaker than they were in the WinTel heyday. It’s a lot easier to go from macOS to Windows to Linux these days. There are many exceptions of course, but I’d wager that market is not as big as you think.
Most of the world uses web apps. Huge, performance-sensitive applications like Adobe's suite, Maya, etc. mostly already have their own UI rendering engines anyway.
It’s a lot different from the Win32/x86 duopoly situation.
Yes, most tools are cross-platform (not necessarily cross-arch, but that's a matter of time) and performance is important - I don't care if it's Apple Silicon or Intel or AMD, or what OS GUI I'm using - I just want faster builds and iteration.
Also note that the only benchmark where the A13 was included was SPEC2006 - so unless SPEC2006 is representative of the workloads most users care about, it's not really telling the whole story.
There will be an ARM version of Windows... but there’s no reason to believe that Microsoft will or can support all of the coprocessors that Apple is going to add - including the GPU, T2, Neural Engine, etc. It isn’t like Apple plays nice with 3rd parties. I seriously doubt they’ll release the details necessary to develop independent drivers for those... whether it’s Microsoft or Linux.
So there’s good reason to believe it may be macOS only.
Personally I’ll believe it when I see it... because right now there are no 3rd party iPad or iPhone operating systems. So a better question is probably: what makes you think there will be 3rd party OS support?
Yes. There is the fact that macOS on ARM does not appear to have a standardized boot method (Apple says the boot sequence is "based on iPadOS"), and there is the fact that Apple also uses their own GPU architecture for which they're only motivated to make mac/iOS drivers.
Getting Windows to run on this is a pretty tall order, and without significant investment it won't happen. And significant investment from whom? Not Apple surely. I'm guessing Microsoft has more stake if anyone. At best we're going to see Windows on ARM virtual machines.
Apple has enough incentive to do it now because Windows on x86 is something people actually want/need to use. It doesn't cost much to support and probably sells a bunch of Macs. Windows on ARM is not really something people would want to use, not to the enormous extent people want to use Windows on x86.
Speaking from what people expect from a "Windows PC" - Windows on a current MacBook makes a decent Windows PC. Windows on a future MacBook makes for a very poor Windows PC that will make customers regret their purchase.
Apple currently uses standard PC architecture, with identical CPUs, GPUs and WiFi chips to standard PCs, and a (sort of) standard EFI booting mechanism. With ARM Macs, it won't. Clearly Apple's hardware design already shows it isn't invested enough to support Windows - they could use standard EFI on ARM, for example. So why would they spend a lot of money to do it in software, so people can run a "crippled" version of a competitor's OS?
Note I don't actually believe Apple will actively lock this down. I do believe we will get native Linux on these things to some extent. But I do believe Apple will not lift a single finger to make alternative OSes happen on ARM Macs. Apple only cares about macOS and virtualization. I also think Windows on ARM is a great product, but I also know that common people that just want to use a Windows computer won't agree with me.
Yes. You can't install anything on iPads and iPhones. And those are the closest hardware we have to the soon-to-be-released ARM Macs. I would actually be incredibly surprised if they could run anything but macOS.
Counterargument: the closest hardware we currently have to ARM Macs is probably... Intel Macs.
The "Apple is going to lock down the Mac just like the iPhone" narrative has been with us since, well, the iPhone. But despite the alarm at tighter security measures in more recent versions of macOS, that hasn't happened yet -- and if a Secret Nefarious Lockdown Plan (tm) was going to come to fruition, the year that they shifted CPU hardware and radically redesigned the operating system's UX would sure as heck seem to be The Perfect Moment. And it still hasn't happened.
Past performance is not a guarantee of future returns and all that, but I don't think there's any reason to think Apple is going to make it any harder to run different operating systems on Apple Silicon hardware than they do on Intel hardware. (Of course, I don't think there's any reason to think they'll make it easier, either.)
On Intel Macs the default was that they would run Windows as long as Apple did not make substantial changes to the UEFI, etc.
But ARM Macs are really an Apple A-series processor plus a number of coprocessors like the T2, Neural Engine, etc... and the default is that Windows will not work on this specialized hardware.
So unless there’s evidence that Apple is actively going to help 3rd parties develop operating systems for the Mac, the best we can hope for is a fairly acceptable OS with reverse engineered drivers for those chips. We’re either going to get an unstable OS or a severely crippled OS.
With Intel Macs, Apple just had to stand back and let 3rd parties do their work. But they would have to actively assist with ARM Macs... and there's no indication they've ever done that, much less intend to do so in the future.
That's false. The ARM chips are tightly integrated SoCs. They aren't even close to a drop-in replacement the way "just" a new CPU would be. The internals of the ARM Macs will be much closer to the iPad than to the Intel Macs.
And I don't think it's a coincidence that Apple is slowly but surely moving macOS towards a locked-down platform. It's already been made pretty hard to run normal software.
Apple has already confirmed that they have no plans on restricting the ability of third party operating systems to run on Apple Silicon based notebooks.
That’s not really a helpful statement. There are so many coprocessors that we need them to actively release the details for those... not just say they won’t oppose it.
The former requires their support. The latter just says they won’t oppose reverse engineering. The latter is far more difficult and will likely mean any OS will be buggy and flaky. Unless Apple comes out and says they will actively support 3rd parties, it’s just going to be a shit show with no good alternative OS options.
Is there a reference for this? Running Windows, particularly with the 64-bit x86 emulation announcement, would go a long way to making purchases more palatable.
Beyond what others have mentioned, we also mustn’t discount the possibility that Apple Silicon could be a strategic move to lower device MSRP, thus increasing market share and Apple services revenue.
People will instinctively write off this idea, but this is the new pricing strategy Apple has been employing with the $329 entry level iPad, $399 iPhone SE, and the new $279 Apple Watch SE. The Macintosh now remains Apple’s only major consumer product line that hasn’t seen aggressive price reductions to make Apple services accessible to a broader range of consumers. The move to Apple Silicon, which could save Apple potentially hundreds of dollars per device, is the perfect time to move the Mac to this pricing strategy. If this happens, it would absolutely eat into Intel’s consumer market share.
I know you're joking, but looking at some Geekbench scores there doesn't seem to be a single Android phone on the market that can outperform the iPhone SE in single-core performance. The OnePlus 8 is the closest, but still not really that close.
iPhone SE: 1321
OnePlus 8: 898
On multi-core it's a little better since there seem to be seven Android phones that can outperform the iPhone SE:
iPhone SE: 2737
OnePlus 8: 3281
OnePlus 8 Pro: 3216
Samsung Galaxy S20 Ultra 5G: 3107
Samsung Galaxy S20+ 5G: 3102
Samsung Galaxy S20 5G: 3078
Huawei Mate 30 Pro 5G: 2918
Huawei Mate 30 Pro: 2835
However, all of those Android phones cost quite a bit more than the SE.
Because Apple is going to prove TSMC is capable of building x86-destroying processors. Not in theory. Not academic or institutional one-offs... no, they’ll be mass-produced and in the hands of consumers.
Intel’s failure to get their fabs running has put them into a death spiral unless they pull off a miracle.
Also, if Apple’s ARM chips are really that fast, they will be purchased by the shipload to be sent to performance-critical operations like HFT. They will be put into servers or turned into them to squeeze and eke out every possible advantage... which will shit on Intel’s most profitable lines (Xeons for single-threaded workloads).
Why not? Apple used to sell servers. It could easily sell its chips to Azure or GCP to compete with Amazon’s Graviton chips.
It’ll help Apple cement their dev tools as industry standard, it’ll further amortize overall dev costs by increasing volumes, people can develop better algorithms with their custom hardware accelerators for ML, etc., and it’s a great way to fight the current anti-trust cases against Apple.
There are some pretty good reasons for Apple to sell their chips to 3rd parties.
They could probably make money there, but it would take time and energy to do, doesn’t seem like it helps their brand, and seems generally a bit afield for them.
Then again, they started their own TV studio, so what do I know.
The situation is a bit different though. If Apple's plan was to produce processors for the likes of HP, Dell, etc, then maybe that would be an issue. But I highly doubt they will move into those markets. It's AMD that's poised to take over that space.
“Haven’t ever” is simply not true. Apple has had plenty of historical server products, including dedicated rack servers with the Xserve line. That’s not to say it’s likely but it’s not unprecedented ;P
I don't think they were really interested in the market, honestly. They did it because they felt they needed to, to support use cases like CI for macOS and iOS apps. It was all about supporting the Apple developer community. They cut it as soon as it wasn't necessary, IMO.
> They did it because they felt they needed to, to support use cases like CI for macOS and iOS apps
Actually, the Xserve platform really had nothing to do with that. Xserves were partly designed and built for the high-end video production industry (as an extension of the Mac Pro hardware) and partly as a general-purpose small-to-medium-size business file server / web server (and the Mac Mini has subsumed what’s left of that space).
And they scrapped them. That's exactly my point. They were not committed: they made an uncompetitive product for a few years and bailed instead of trying harder. Compare to the Mac.
Yes, yet the cloud changes this: the one naturally non-consumer thing - cloud datacenters - is needed by Apple itself to serve its consumer-oriented, cloud-based functionality. So it may turn out that the major money savings from their own chips come not on consumer devices but on Apple's datacenter/cloud costs, density, efficiency, etc., and improvement on those metrics could also enable and push Apple further into the cloud business.
Apple is a trendsetter, though: I think, as a rule, Apple’s decisions are copied by a large part of the market so its market share isn’t an adequate measurement of its influence. This will be especially true if Apple Silicon MacBooks are impressive in some way, like extra-long battery life.
Absolutely - better performance per watt means longer battery life. Windows already supports ARM, and Microsoft doesn't really have a choice here but to improve that support. Office on Mac seems like it's going to be ready for Apple Silicon; if that effort also helps Office on Windows-on-ARM, then they're halfway there.