We're getting into a debate between particulars and universals. Calling Apple's unified memory "VRAM" is quite a generalization. In any case, stock prices suggest that whatever this VRAM is, it's nothing compared to NVIDIA's.
Anyway, we tried running a 70B model on a MacBook (I can't remember which M-series chip) at a Fortune 20 company, and it never became practical. We were comparing strings of roughly 200 characters each, so about 400 characters total plus a pre-prompt.
I can't imagine this being reasonable on a 1T model, let alone the ~400B-class models from DeepSeek and Llama.
I don't think Awni should be dismissed as a "marketing account" - they're an engineer at Apple who has been driving the MLX project for a couple of years now, and they've earned a lot of respect from me.