> FWIW I think LLMs are a dead end for software development, and that the people who think otherwise are exceptionally gullible.
By this do you mean there isn't much more room for future improvement, or that you feel it is not useful in its current form for software development? I think the latter is a hard position to defend, speaking as a user of it. I am definitely more productive with it now, although I'm not sure I enjoy software development as much anymore (but that is a different topic).
> By this do you mean there isn't much more room for future improvement
I don't expect that LLM technology will improve in a way that makes it significantly better. I think the training pool is poisoned, and I suspect that the large AI labs have been cooking the benchmark data for years to make it look like their models are improving more quickly than they actually are.
That being said, I'm sure some company will figure out new strategies for deploying LLMs that will cause a significant improvement.
But I don't expect that improvements are going to come from increased training.
> [Do] you feel it is not useful in its current form for software development?
IME using LLMs for software development corrodes my intuitive understanding of an enterprise codebase.
Since the advent of LLMs, I've been asked to review many sloppy 500+/1000+ line spam PRs written by arrogant, Kool-Aid-drinking coworkers. If someone is convinced that Claude Code is AGI, they won't hesitate to drop a slop bomb on you.
Basically I feel that coding using LLMs degrades my understanding of what I'm working on and enables coworkers to dominate my day with spam code review requests.
> IME using LLMs for software development corrodes my intuitive understanding of an enterprise codebase.
I feel you there, I definitely notice that. I find I can output high-quality software with it (if I control the design and planning, and iterate), but I lack that intuitive feel for how it all works in practice. It's especially noticeable when debugging; I have fewer "Oh! I bet I know what is going on!" eureka moments.
We were worried about that as well, but we have found that most people are not doing well on our take-home. If we get to the point where most people are crushing it, then we may need to think more about AI and take-homes (maybe tweak it with the explicit expectation that candidates may use AI, etc.).
They also need to be able to reason well about why they made the choices they did. Something useful when talking to them can be asking questions like "If X changed, how would that impact your design?". If they were reliant on AI for vibing (rather than just using it as a tool), then those can be more difficult questions to answer well.
It's a rough heuristic, but it doesn't always hold. I've worked at micromanaged startups where the CEO wanted to review every change, and at giant companies where it's me shipping a massive feature.
I don't, sadly. The only coverage of this that was accessible from first principles was the course taught by Prof. Stergios Roumeliotis where I went to grad school.
You could do fine by reading some old books by Bar-Shalom. Any practical textbook like his will include all the "other stuff" about the EKF that helps you understand how poorly it often performs.
But the actual derivation of the EKF probably takes only one or two pages in such a textbook, so it's a damn shame that almost nobody includes it.
The background required is simply:
* Know the form of the exponential family of PDFs (like the Gaussian/Normal)
* Know Bayes' rule
* Recognize that to maximize f ≈ exp(-a), you have to minimize a
* Know how to take the derivative of a matrix equation (the a above)
* Solve it
* Use the matrix inversion lemma to transform the solution into the form the KF/EKF uses
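To make that concrete, here's a minimal NumPy sketch (all dimensions and values invented for illustration) of the linear measurement update those steps produce: Bayes' rule multiplies a Gaussian prior by a Gaussian likelihood, maximizing the product means minimizing the quadratic form a(x), and the matrix inversion lemma rewrites that solution into the familiar Kalman-gain form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: prior x ~ N(x0, P); measurement z = H x + noise, noise ~ N(0, R)
n, m = 4, 2
x0 = rng.standard_normal(n)
P = 2.0 * np.eye(n)
H = rng.standard_normal((m, n))
R = 0.5 * np.eye(m)
z = H @ x0 + 0.1 * rng.standard_normal(m)

# Bayes' rule: posterior ∝ exp(-a(x)/2), with
#   a(x) = (x - x0)ᵀ P⁻¹ (x - x0) + (z - H x)ᵀ R⁻¹ (z - H x)
# Setting the matrix derivative da/dx = 0 and solving gives the
# "information form" of the posterior mean:
Pinv, Rinv = np.linalg.inv(P), np.linalg.inv(R)
x_info = np.linalg.solve(Pinv + H.T @ Rinv @ H,
                         Pinv @ x0 + H.T @ Rinv @ z)

# The matrix inversion lemma transforms that same solution into the
# form the KF/EKF uses, with gain K:
S = H @ P @ H.T + R              # innovation covariance
K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
x_kf = x0 + K @ (z - H @ x0)

assert np.allclose(x_info, x_kf)  # the two forms agree
```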
Probabilistic Robotics covers the Kalman filter from a first-principles probabilistic viewpoint, along with its extension, the EKF. It's quite readable for someone with a basic understanding of linear algebra, probability, and calculus. I believe it also has a refresher on these basics in the introduction.
> Everything just assumes that without Sam they’re worse off.
>
> But what if, my gosh, they aren’t? What if innovation accelerates?
It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely that innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the people who wanted to move faster.
Very good point. And with very long lifespans (thousands of years), all of those low-probability events that may cause accidental death (airplane crash, getting hit by a car crossing the street, violence, etc.) may really start to add up to a not-so-low probability of at least one of them happening within your extended lifespan.
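Back-of-the-envelope version, with a purely hypothetical per-year risk: if your chance of dying in some accident is about 1 in 2,000 per year, the probability of at least one fatal accident over N years is 1 - (1 - p)^N, which grows from a few percent over a normal lifespan to near certainty over thousands of years:

```python
# Hypothetical per-year accidental-death risk of 1 in 2000.
p = 1 / 2000
for years in (80, 500, 1000, 5000):
    print(years, round(1 - (1 - p) ** years, 3))
# 80 -> 0.039, 500 -> 0.221, 1000 -> 0.394, 5000 -> 0.918
```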