My super uninformed theory is that local LLMs will trail foundation models by about 2 years for practical use.
For example, right now a lot of work is being done on improving tool calling and agentic workflows, and tool calling first started popping up in local LLMs around the end of 2023.
This is putting aside the standard benchmarks, which local LLMs get "benchmaxxed" on and post impressive numbers for, but which rarely translate into meeting expectations when used with OpenCode. In theory Qwen3.5-397B-A17B should be nearly a Sonnet 4.6-level model, but it is not.
You can run Qwen3.5-35B-A3B on 32GB of RAM, sure, but to get "Claude Code" performance (which I assume he means Sonnet- or Opus-level models in 2026), it will likely be a few years before that's runnable locally on reasonable hardware.
the OP probably is not telling the whole story and must have some kind of drug addiction going on that sucks up all of his wealth, because how do you even end up in a van when you split rent with a girlfriend/roommates?
also the part where he refuses non-vegan food. yikes.
and skateboarding as a hobby sounds great when you are uninsured
That's a tempting answer. I see why you proffer it. But I have to say
no.
Complexity is neither an immanent feature nor an inevitability. Behind
unruly complexity is our failure to manage it. And indeed, a love of
complexity, a fetish for it, seduces us into ever more of it.
To defeat complexity we have to embrace and engage with it. We have
to see which parts of the technology that got us to where we are must
now be justifiably rejected.
All I see right now, especially with regard to "AI" and the new wave
of techno-populism, is a retreat from complexity and a greater embrace
of "magic".