Hacker News | tristor's comments

> And the loudest pronatalists in American life, the ones who claim declining birth rates are civilization’s gravest threat, are the same people who just spent two years dismantling it: Elon Musk, who has fathered at least fourteen children and called declining birth rates “a much bigger risk to civilization than global warming,” told tech workers on CNBC to “get off the goddamn moral high horse with the work-from-home bulls**.” Marc Andreessen, whose Techno-Optimist Manifesto declares “our planet is dramatically underpopulated,” testified before his local town council that he was “immensely against multifamily housing development.” The network around them (Thiel, Altman, Armstrong, Buterin) has poured some $800 million into fertility technology while the companies in their orbit dismantle the workplace flexibility that actually raises fertility.

This article frames the behavior of Musk, Thiel, Andreessen, and others as hypocritical or misguided, as if their aims were not aligned with their actions. Either the author is completely missing the point, or they're crafting a narrative that provides plausible deniability for billionaires acting fully in accordance with philosophies they have publicly espoused many times. Far from being "pronatalist", Musk, Thiel, Andreessen, and others are only interested in rising birthrates among a particular portion of the population. Like many SV elites, they have a cozy relationship with the HBD wing of the rationalist movement, including Thiel's close association with Curtis Yarvin (Mencius Moldbug). It's /very/ obvious to anyone who has spent any time paying attention that these billionaires are invested in increasing birth rates among the people they consider worthy of having children, particularly elite whites, and decreasing birth rates among those they don't, particularly anyone who is not white.

Not to put too fine a point on it: Musk, Thiel, and Andreessen do NOT care if their policies prevent their workers from having children. They don't want their workers having children; they only want children from the families of elite whites. They can't say it too loudly, but these people are eugenicists.


All of that, and the funny thing is /that is the easy part/. Moving payloads to space is just incredibly expensive, but not fundamentally hard in the same way that post-launch coordination of satellite constellations and RF tuning to support things like mobile connectivity are (I can connect to Starlink satellites from my iPhone through T-Mobile).

Connecting to a cell phone and/or selling a phased array antenna that can track an object travelling 17,000 mph for $300 is crazy hard.
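
To put a rough number on the tracking problem (back-of-the-envelope only, assuming a Starlink-like shell around 550 km altitude; 17,000 mph is roughly 7.6 km/s, and the exact figures vary by orbital shell):

    import math

    altitude_km = 550.0    # assumed Starlink-like shell altitude
    velocity_km_s = 7.6    # ~17,000 mph expressed in km/s

    # Worst case is a satellite passing directly overhead: the angular rate
    # seen from the ground is roughly v / h (small-angle approximation).
    angular_rate_deg_s = math.degrees(velocity_km_s / altitude_km)
    print(f"Beam must slew about {angular_rate_deg_s:.2f} deg/s at zenith")  # ~0.8 deg/s

    # Sustained for several minutes per pass, with a handoff to the next
    # satellite at the end of every pass.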

But a military is going to be fine with an antenna that costs $3000.


This article was definitely written by ChatGPT. It tells a story that anyone who has ever shipped a website they expected the public to use has experienced, but it's also light on the details of how they moved past relying on Google organic search traffic.

I had the same thought

The fact that it was written by ChatGPT does not mean that a human did not decide what to put in it. For non-native speakers it’s a huge advantage.

This does not seem accurate based on my recently received M5 Max 128GB MBP. I think there's some estimation/guesswork involved, and it also discounts that you can move the memory divider on unified-memory devices like Apple Silicon and the AMD Ryzen AI Max+ 395.
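
For what it's worth, on Apple Silicon that divider isn't fixed: recent macOS releases expose a sysctl that raises how much of the unified memory the GPU is allowed to wire. A rough sketch of bumping it from Python; the `iogpu.wired_limit_mb` name and the 114688 MB value are assumptions based on what I've seen people use, and the knob has changed names across macOS versions, so check your own machine before relying on it:

    import subprocess

    # Hypothetical example: allow the GPU to wire up to ~112 GB of a 128 GB
    # machine's unified memory (the value is in MB). Needs sudo, and the
    # setting does not survive a reboot.
    subprocess.run(
        ["sudo", "sysctl", "iogpu.wired_limit_mb=114688"],
        check=True,
    )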

> What would you build if on-device AI were genuinely as fast as cloud?

I think this has to be the future for AI tools to be truly useful. The really powerful things are not general-purpose models that have to run in the cloud, but specialized models that can run locally on constrained hardware, so they can be embedded.

I'd love to see this offered as an in-path audio passthrough device, so you could add on-device, native transcription to any application that handles audio, such as video conferencing apps.


This is a great idea. A virtual audio device that sits in the path of any audio stream and provides live transcription would be huge for video conferencing, lectures, and podcasts.

MetalRT's STT numbers make this feasible: 70 seconds of audio transcribed in 101ms means you could process audio chunks in real-time with massive headroom. The latency would be imperceptible.
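
To put numbers on that headroom (simple arithmetic, taking the 70 s / 101 ms figure at face value):

    audio_seconds = 70.0
    compute_seconds = 0.101

    # Real-time factor: seconds of audio transcribed per second of compute.
    rtf = audio_seconds / compute_seconds
    print(f"Real-time factor: ~{rtf:.0f}x")  # ~693x

    # Live captioning with 2-second chunks would spend only a few
    # milliseconds of compute per chunk.
    chunk_seconds = 2.0
    print(f"Compute per chunk: ~{chunk_seconds / rtf * 1000:.1f} ms")  # ~2.9 ms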

We haven't built this yet, but it's a compelling use case. CoreAudio supports virtual audio devices (aggregate devices) that could pipe audio through the pipeline. If anyone in this thread has experience building macOS audio HAL plugins and wants to collaborate, we're very open to contributions; RCLI is MIT-licensed.


Something that could work is exposing the model as a virtual audio device; then you can use existing macOS tools like Rogue Amoeba's Loopback to split audio between that virtual device and your normal output (you'd configure the Loopback device as the output in your system audio settings).

I have never written audio drivers on macOS, but it may be worth exploring to see if I can make this happen. I really appreciate high-quality AI transcripts in my meetings, but right now only Webex has good transcription, and a lot of meetings use other services like MS Teams, Zoom, Meet, et al.


I would say the M5 Max MBP, the Mac Studio, and the acceptance of Apple hardware as the pinnacle of hardware for personal local LLMs are good signs that Apple is not going to unify iOS and macOS.

> The reality is that most of them are so badly managed that competing against them is easy if you're actually competent.

The world is a graveyard littered with startups that thought this way. One of the consequences of wealth concentration and monopolies is that it is insufficient to be better than your competitors because your customers are also incompetent. To find product-market fit you not only have to be better, you have to be noticed by someone who cares that you're better and who, upon reflection, confirms you solve a valuable problem.

By way of analogy, it's not enough to realize that MouseCorp makes shitty mousetraps and the local village spends $1M/yr on them. You can build a better mousetrap thinking it's worth $1M/yr, or you can look deeper and realize the village doesn't actually have a mouse problem but a feral cat problem, has no interest in buying better mousetraps, and once its attention is drawn to the issue simply stops buying mousetraps altogether. Both parties lacked competence, but that didn't mean there was a market.


> The world is a graveyard littered with startups that thought this way. One of the consequences of wealth concentration and monopolies is that it is insufficient to be better than your competitors because your customers are also incompetent.

It's less that and more that governments and bureaucrats are corrupted into creating barriers to the market and turning a blind eye to anti-competitive behavior and outright illegal practices. For example huge banking corporations have been caught laundering money for drug cartels and got away with fines -- if your fintech startup tried that on, you would never see the outside of a prison cell.


> For example huge banking corporations have been caught laundering money for drug cartels and got away with fines -- if your fintech startup tried that on, you would never see the outside of a prison cell.

German bank N26 was embroiled in money laundering, scams and other issues stemming from bad KYC for years, and all they got was a slap on the wrist from regulators.


Right, so don't waste time trying to sell low-margin products to local governments. As the saying goes, it's like trying to shear a pig: too much squealing and not enough wool.

I've been using an LG UltraFine 27MD5KL-B for years, and it works pretty flawlessly once I set it up with BetterDisplay. This is my primary work setup, and I think I paid around a grand for it at MicroCenter some time back. It has worked great.


I am very excited by this, but my enthusiasm is a bit dampened by the maximum memory being 128GB. I was really hoping for 256GB, which would allow me to run frontier models locally. With 128GB it's still feasible to use this with something like Qwen3-Coder-Next or MiniMax-M2.5, but things like Kimi-K2.5 will require significant quantization to fit, and model performance will really suffer.
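
As a rough way to sanity-check what fits (back-of-the-envelope only: real usage also depends on KV cache, context length, and runtime overhead, I'm assuming roughly 96 GB of the 128 GB can be wired for the GPU, and the parameter counts below are illustrative placeholders rather than the actual sizes of those models):

    def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate GB needed just for the weights at a given quantization."""
        return params_billion * 1e9 * (bits_per_weight / 8) / 1024**3

    # Hypothetical model sizes, purely illustrative.
    for name, params_b in [("100B-class", 100), ("250B-class", 250), ("1T-class", 1000)]:
        for bits in (8, 4):
            gb = weight_footprint_gb(params_b, bits)
            verdict = "fits" if gb < 96 else "does not fit"
            print(f"{name} @ {bits}-bit: ~{gb:.0f} GB of weights ({verdict} in ~96 GB GPU-wired)")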

I really want to build proper local-first AI workflows at home, and I think Apple has an opportunity to make that possible in a way other companies aren't really focused on. But we need significantly larger memory capacities to do it, which I know is tough in the current memory market, though it should be available for a price.


Tell me about it. I checked the page wondering whether I should go for the 256 GB or 512 GB RAM model.

128 GB maximum.

Sigh.


I suspect that they're going to move to an "Ultra every third generation" cadence, so we will see an M6 Ultra.


I spent the last day deep-diving into what I can do with MLX and local models. I still feel limited, because you have to use quantized models, but I think it's enough to do /something/, so I bit the bullet and pre-ordered just now. I am driven a little by concern about memory market pressures over the next 1-3 years, and by a feeling that it's a bit now-or-never.
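
For anyone else poking at this, the basic loop I've been testing is just mlx-lm's Python API. A minimal sketch: it assumes `pip install mlx-lm` and that the load/generate API hasn't shifted, and the model repo below is only an example, so swap in whatever quantized checkpoint you actually want:

    from mlx_lm import load, generate

    # Any 4-bit community conversion works here; this repo name is just an example.
    model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")

    prompt = "Write a Python function that parses an ISO 8601 timestamp."
    text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
    print(text)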

Sigh. Maybe you are right.

Let me know when I can buy an M5 Max MacBook Pro that can run local open-weight LLMs. Until then, nothing else is particularly interesting; everything I already own gets the job done.


