
Doubt it; a year ago, useful local LLMs on a Mac (via something like ollama) were barely taking off.

If what you say is true, you were among the first 100 people on the planet doing this; which, btw, further supports my argument about how extremely rare that use case is for Mac users.



No, I got a MacBook Pro 14” with an M2 Max and 64GB for LLMs, and that was two generations back.


People were running llama.cpp on Mac laptops in March 2023 and Llama2 was released in July 2023. People were buying Macs to run LLMs months before M3 machines became available in November 2023.



