Hacker News

I find it interesting you use this kind of disparaging tone when discussing Apple Silicon. I also find it interesting that you consider having a wide decoder not as a technical trick but as "throwing hardware at the problem."

However you try to spin it, what it comes down to is this: Apple is somehow designing more performant processors than every other company in the world, and we should acknowledge they are beating the "traditional" chip-design companies handily while being new to the game.

If it's as easy as "throwing hardware" at the problem, then Intel, AMD, Samsung, etc. should have no problem beating Apple, right?



I interpreted that differently: they used good engineering to make a speed demon of a chip, not some magic trick that's only good in benchmarks and not real world usage. I don't think it was disparaging at all.


It’s never considered “magical” after the feat has been accomplished. But a year ago if you claimed this is where Apple would be today, a lot of people would say that would require waving a magic wand.


Yeah, it's like saying "well all Apple is doing is throwing good engineering at the problem".

All they've really done is taken an advanced process, coupled it with a powerful architecture they've been iteratively improving on for years, and thrown it at software that has been codesigned to work really well on those chips.

Yeesh!


They didn't just throw hardware at the problem, but also talent, something that is far scarcer.


Apple is throwing money at the problem. An IC's cost is roughly proportional to its die size.

They couldn't sell a chip like this at a cost-competitive price on the open market against AMD/Intel products.


Hmm. The M1 is about 120 sq mm, substantially smaller than many Intel designs. The current estimate for 5 nm wafers is about $17K (almost certainly high). A single 300 mm wafer is about 70,700 sq mm. If we get die from 75% of that area, that gives us about a $38 raw die cost. Even with packaging and additional testing, I suspect they would be competitive.
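That back-of-the-envelope math can be sketched out directly. All the inputs below are the assumptions from the comment above (assumed wafer cost, die size, and a pessimistic utilization factor), not official figures:

```python
import math

# Raw die cost estimate using the figures from the comment above.
wafer_cost = 17_000          # assumed cost of a 5 nm wafer in USD (likely high)
wafer_diameter_mm = 300
die_area_mm2 = 120           # approximate M1 die size
usable_fraction = 0.75       # pessimistic factor for edge loss and utilization

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,700 mm^2
good_dies = wafer_area_mm2 * usable_fraction / die_area_mm2
cost_per_die = wafer_cost / good_dies
print(f"~{good_dies:.0f} dies per wafer, ~${cost_per_die:.0f} per raw die")
```

With these assumptions it comes out to roughly 440 dies per wafer and a raw die cost in the high $30s, matching the ~$38 figure above.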


Nice back of the envelope calculation. I think I'd add yield to it though.

TSMC had a 90% yield in 2019 for an 18mm² test chip [1]. A 120mm² chip would catch more defects per die, but assuming process improvements since 2019, maybe 80% would be an accurate-ish estimate.

Found an even better number: [2] lists the defect rate as 0.11 defects per 100mm² (i.e., 0.11/cm²), which works out to ~87% yield for a 120mm² die.

$38 / 0.87 ≈ $43.70

[1] https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chi...

[2] https://www.anandtech.com/show/16028/better-yield-on-5nm-tha...
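The yield figure here is consistent with the standard Poisson defect model, yield = exp(-A·D0). A minimal sketch, using the 0.11 defects/cm² density cited from [2] and the ~$38 raw die cost from the earlier comment:

```python
import math

# Poisson yield model: yield = exp(-die_area * defect_density).
defect_density_per_cm2 = 0.11   # figure cited from [2]
die_area_cm2 = 120 / 100        # 120 mm^2 = 1.2 cm^2

yield_fraction = math.exp(-die_area_cm2 * defect_density_per_cm2)
raw_die_cost = 38.0             # raw die cost from the earlier estimate
yielded_cost = raw_die_cost / yield_fraction
print(f"yield ~{yield_fraction:.0%}, yielded die cost ~${yielded_cost:.2f}")
```

This gives a yield just under 88% and a per-good-die cost of roughly $43, in line with the $43.70 figure above.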


That's a fair point, yield has to be included. I lumped yield in with the pessimistic 75% factor for area utilization of the wafer. I should have been more clear. The area loss for square die on a round wafer should be much less than 25% of the total wafer area.

If you look at process-tech cost trends, the $17K is also very pessimistic, and a customer the size of Apple is probably getting a much better rate than that. Remember, they sell well over 200 million TSMC-fabbed chips a year. Hard to know for sure, of course, but I imagine these chips ultimately cost Apple well under $40.


The big skew in availability between 8GB and 16GB models implies to me that yields of perfect chips are lower than Apple expected, with too many ending up in the 8GB bin.


The DRAM is on separate chips from the M1 processor. The availability skew is probably just a production forecasting error.


I came to the opposite conclusion: I think more users than expected paid for the 16GB models, leaving extra inventory of the 8GB models. During Black Friday/Cyber Monday I saw several discounts on 8GB M1 systems, but none on the 16GB systems.

Hopefully that sends a message to Apple to build systems with more memory. It seems insane to invest in an expensive M1 system (once you add storage, a 3-year warranty, etc.) and get only 8GB. Even if it works well today, with a useful life of 3-6 years the extra 8GB seems likely to have significant value over the life of the system... even if it's just to reduce wear on the storage.


That would imply that the M1 has the DRAM rather than just the controllers on the chip but all of the coverage I've seen says that they are separate chips in the same package.


This is interesting. What sources do you visit to learn about CPU manufacturing trends?


Well, there are a number of industry sites, but here are a few good starting points:

https://en.wikichip.org/wiki/WikiChip

https://semiwiki.com

https://www.tomshardware.com

https://semiaccurate.com (paywall for some articles, very opinionated...)


As the sister comments have noted they almost certainly could given the size of the die.

But in another sense you are right - Apple is throwing money at the problem: their scale and buying power means that they have forced their way to the front of the TSMC queue and bought all the 5nm capacity.


Is that true? Intel chips are known to be overpriced.


Intel probably isn't the best example here; a better comparison would be AMD. Their R&D budget was 20-25% of Intel's, yet they were able to produce a better-performing part with Zen 3.


Intel fabs their own chips. AMD outsources that. Intel is still on 14nm vs AMD's 7nm. It has been a really long time since AMD has even come close to Intel. The question is whether Intel can recover from their slump before AMD can get enough chips out.


AMD's on a big upswing, with design wins in multiple markets from servers down to laptops. The PS5 and Xbox Series S/X will also help with volumes for the next few years.


>> Intel is still on 14nm

Why do people keep saying that? There's a variety of 10nm processors from Intel on the market.


Are there any chips like this being sold at all by AMD or Intel? Can you get this performance and power consumption anywhere else?


The AMD 4900U has similar power consumption and higher multi-threaded performance, but lower single-threaded performance.

I expect the AMD part to have quite a bit lower GPU performance, since it uses about a third of the transistors of the M1: 4.9 billion vs. 16 billion.

https://wccftech.com/intel-and-amd-x86-mobility-cpus-destroy...


AMD CPUs are very hard to fully utilize with a single thread, and Intel has always held the single-thread perf crown. Most high-performance use cases are multi-threaded now, so the single-threaded performance delta isn't that significant. The Apple chip is really built for running a snappy GUI: in most of those cases, you need to run ~1M instructions super fast on one thread for a short time. Intel has historically had the crown on this metric, but not any more with all of their problems.


AMD Zen3 outperforms everything Intel has in single threaded performance.

https://wccftech.com/amd-ryzen-9-5950x-16-core-zen-3-cpu-obl...

Or are you saying that some operations like FMAs for example are hard to keep a high utilization for?


Historically yes, but not so with the Zen3.


The closest in the next few months is the new Zen 3 based APUs that AMD is announcing at CES in mid-January. Zen 3 is reasonably competitive with the M1 on a per-core basis: not quite the single-thread perf, but pretty competitive on multicore throughput.

As a rough estimate I'd expect the AMD chips to be within 10-20%, and you'll be able to run Windows or Linux on them.



