The AI and Tooling support point is really just an extension of the Community and Ecosystem point. Even before LLMs, React had an advantage: every question you had was probably already answered on StackOverflow, and there are mature React libraries for almost everything. Now some people use an LLM to answer the question they would previously have taken to StackOverflow, but the outcome is the same: there are advantages to using what other people are using.
Yes. And no matter how good LLMs get at coding there will always be a crowd intentionally doing it themselves, especially in the open source arena, if only just to keep the joy alive.
I fully expect to keep writing code for decades, just like I still use a hand plane and enjoy growing plants to eat.
I know other methods are more efficient. But I’m here to experience life. I want to do the hard things. I want to be uncertain, confused, to make mistakes, and to learn.
AI is a neat tool in limited scopes. Every time I use it though, I feel like I didn’t get the full experience. I’m acutely aware of all the rabbit holes I missed, or the tiny details I’d notice along the way and helpfully remember 10 years later. For everything it adds, it takes at least as much away from where I’m looking. And I like those parts.
The LLMs will need to get a lot better, though. Every day on reddit/X I see people saying they downloaded Cursor and built a SaaS without ever having coded anything in their lives; the resulting code is always terrible, full of really obvious bugs and buckets of trivial security issues. All of these things are preventable in theory, but current LLMs simply aren't capable of preventing them.
All of these SaaS products are also clones of things people have already made. If it was helping newcomers create novel things from great imaginations and curiosity, I’d be really excited about this. Instead I saw a reminder app, a todo app, a combination of these, and a voice transcription app.
All of these have existed in huge numbers for a decade or close to it. This isn’t interesting. It wouldn’t have taken much to do this without AI if these people found motivation through some other means. Nothing notable is happening yet.
> All of these have existed in huge numbers for a decade or close to it
That doesn't always matter, though; I saw someone on reddit selling thousands of subscriptions to a server uptime page they generated with an LLM (Cursor, I think it was). Most do it for the money, and that works IF you already have a following somewhere.
> the resulting code is always terrible, full with really obvious bugs and buckets of trivial security issues
That's pretty much the same quality you got from old-school Stack Exchange. The LLMs are trained on poor example data (e.g. from Stack Exchange), and I presume it's a hard problem to filter for just the "good" training data.
Stack Exchange answers make up a tiny fraction of LLM training data. They've dumped the source of every open-source project on the internet into those things. They've been trained on plenty of high-quality code; the trouble is that all that can teach them is imitation.
The why behind a design decision is something an LLM can't understand simply by being fed the decisions themselves. Hence, though they often stumble into the right answer, they only ever do so because it "seems right based on context," and so they can easily apply a principle correctly one moment and then misuse it the next.
Compilers actually output the binary that you ask them to, though.
If all a compiler did was spit out a slurry of buggy assembly that misunderstands its context and has to be carefully scrutinized for errors, I would still be writing assembly the old fashioned way.
I don't think so. No matter the code complexity, copilot gives me good enough suggestions that it saves me time. It often simply writes the exact line I had in mind.
I was considering this the other day. AI tools are stuck at a particular point in time, and even when training them on newer material, there's only so much information to train on. I've been exploring whether this might be a _good_ thing. In software we spend so much time chasing the latest tooling, language features, frameworks, etc. Maybe it'll be a positive if it all stagnates a bit and we just use the tools we have to get work done instead of creating new hammers every 6 months.
It would be nice if some AI tools could be developed to actually evaluate new libraries and frameworks. For instance, if there are already 10 libraries to do something and I develop a new one that's objectively faster than all of them (true story), could some AI do the work of installing and benchmarking it and incorporate the results in its knowledge base? And periodically update it? Is there any way to leverage AI for discovery of solid code? I suspect this is beyond current capabilities, but one can dream.
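The "installing and benchmarking" step the comment imagines is mechanical enough to sketch. Here's a minimal harness using Ruby's stdlib Benchmark module; the two competing implementations are invented stand-ins for the "10 existing libraries vs. my faster one" scenario, not real packages:

```ruby
require "benchmark"

# Hypothetical: two interchangeable implementations of the same task.
# A discovery tool could run a harness like this against each candidate
# library, verify the outputs match, and record the timings.
slow_reverse = ->(s) { s.chars.reduce("") { |acc, c| c + acc } }
fast_reverse = ->(s) { s.reverse }

input = "benchmark me " * 1_000

# Sanity check first: a speed comparison is meaningless if the
# candidates don't produce identical results.
raise "outputs differ" unless slow_reverse.call(input) == fast_reverse.call(input)

Benchmark.bm(6) do |bm|
  bm.report("slow:") { 100.times { slow_reverse.call(input) } }
  bm.report("fast:") { 100.times { fast_reverse.call(input) } }
end
```

The hard part isn't the harness; it's getting the timings (and a correctness check like the one above) fed back into something the model can actually consult.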
It's not about "being able"; it's about being efficient. There are many cases where current AI can provide boilerplate and good examples for doing something specific, which eases things a lot.
Of course there's a lot one can't take 1:1 into the final product, but it helps to find the right libraries, patterns, and parts/functions to use, where verifying against the applicable documentation or source is a lot simpler than finding it in the docs to begin with.
Using it as a tool, rather than as a source of truth, can be good.
And don't get me started on writing the boilerplate that is sometimes needed: too complex for a simple editor shortcut, but too tedious for me as a human. I can review and fix that a lot faster than I can create it by hand.
> If your devs can't work without something writing their code for them, why are you hiring them?
I am currently in the process of hiring a backend engineer. Anybody who does not use AI to aid development work gets an automatic disqualification. In my experience, a good engineer using AI tools will run circles around a good engineer not using AI tools.
This is very much like saying "good engineer using StackOverflow will run circles around a good engineer who isn't".
AI does help an engineer who embarks on a new voyage through unfamiliar APIs to guide them with usage patterns, but some people become much more efficient by going through the library docs.
Typing out the code is the smallest part of a "good engineer's" job (and even so, having to adapt most AI-generated code is slower than typing it out yourself once you understand the APIs).
I do think it might work well for MVP-style quick prototyping, but using this as an applicant qualification criteria seems so weird (even when building an MVP, you want some tension between building it quickly and building it the right way).
"A good cyclist using training wheels will run circles around a good cyclist who doesn't." No, training wheels only help bad cyclists. You can't generalize that and assume they will make a good cyclist even better.
AIs generate deeply mediocre code. This is better than anything a person who can't code on their own would produce, but an experienced developer will have to spend all their time babysitting the AI to get it to behave properly.
Not really. Today's mediocre code is tomorrow's technical debt. LLMs often inject subtle bugs or misunderstand the context they're being used in and have to be badgered into respecting the parameters of requests.
Would you rather write code yourself, or ask a first-year student to write it for you while you watch over their shoulder and tell them to go back and try again every time you notice a mistake? Which of these do you think is faster and better in the long run for the quality of your codebase?
I'm not too worried about this, and I think Gumroad's concern is likely overblown. I can't tell from their comment whether they actually experienced AI being bad at HTMX, or if they transitioned to talking about other resources.
LLMs are often wildly good at being universal translators. So if they pick up general patterns and concepts in popular frameworks, and enough syntax of more niche frameworks, IME, they do a pretty great job of generating good niche framework code.
Similar to how they can talk like a pirate about things pirates never said in their training data.
I had written a comment addressing this as well but you beat me to it. In a way it is similar to the effect StackOverflow had on popular libraries, but amplified. Even without StackOverflow, a library can do well if it has good documentation. I'm not sure if the same holds true with LLMs.
My prediction is that it'll be like this for a while, but as soon as tooling becomes better and the context of current APIs + local files gets better taken into consideration, this "advantage" will disappear.
This will not be true for future frameworks, though it is likely true for current ones.
Future frameworks will be designed for AI enablement. There will be a reversal of convention-over-configuration: explicit referencing and configuration allow models to make fewer assumptions with less training.
All current models are trained on good and bad examples of existing frameworks. This is why asking an LLM to "code like John Carmack" produces better code. Future frameworks can quickly build out example documentation and provide it within the framework for AI tools to reference directly.
Because there’s enough Rails code in the training data to determine the proper conventions :) If you’re making something new without this glut of data, it’s going to be much more difficult for a coding assistant to match a convention it’s never seen before.
The thing is, with some elbow grease you can write a great plugin for your preferred editor. No need for dubious LLM results, especially when the difficult part, code intellisense, is already solved with LSP. If you're a shop that has invested in a framework, that would be cheaper and more productive.
True, but the conventions it has seen are the same across all similar domains, not just the same framework/language; copilot "picks up" the similarity.
What I mean is: if you name your modules consistently, say Operation::Object::Verb or Action::ObjectVerb or ObjectManager.doSomething, it's really easy for the LLM to guess the next one, just as it is for a human.
Add a new file actions/users/update.rb and start typing "Act", and it may guess "class Actions::Users::Update" and start to fill in the code based on nearby modules; switch to the corresponding unit test and it'll fill that in too.
Source: we have our own in-house conventions and copilot seems to get them right most of the time, ymmv.
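To make the convention above concrete, here's a minimal sketch of what `actions/users/update.rb` might contain. The `User` struct and the attribute-hash interface are invented for illustration; the actual in-house convention isn't specified in the thread:

```ruby
# Hypothetical convention: file path actions/users/update.rb maps to
# the class Actions::Users::Update. A consistent mapping like this is
# what lets a completion model (or a human) guess the next class name
# and body from the path and the nearby modules alone.

User = Struct.new(:name, :email)

module Actions
  module Users
    class Update
      def initialize(user, attributes)
        @user = user
        @attributes = attributes
      end

      # Apply each attribute to the user and return the updated record.
      def call
        @attributes.each { |key, value| @user[key] = value }
        @user
      end
    end
  end
end

# Usage: every action in the codebase is invoked the same way.
user = User.new("Ada", "old@example.com")
Actions::Users::Update.new(user, email: "new@example.com").call
```

Once a handful of `Actions::Object::Verb` classes with an identical shape exist, the next one is almost pure pattern completion.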
I saw a tool that had a page dedicated for AI to read. Basically you would point your LLM at that page as the initial prompt and could start asking questions from there. I thought it was an interesting idea, but apparently not interesting enough for me to remember who did it or even check what that page looked like.
It's actually a big thing I am waiting for: both websites and AI tools agreeing on a way to facilitate this.
I'm doing some game development in Godot as a hobby, and the current LLMs are really bad at it: very often I get code suggestions that use ancient versions of GDScript or the engine. I'd love to have a big enough context window and the tooling needed to say "look at these Godot docs for the current version: (insert link)" and then ask my questions; I think it would fix 99% of these issues. Same with other less well-known tools and languages.
Respectfully disagree. I think the value of LLM suggestions is driving us toward a kind of standardization that is really good. We'll all be Java programmers soon!
This is stated as a very matter-of-fact downside, but this is a pretty crazy portent for the future of dev tools / libraries / frameworks / languages.
Predictions:
- LLMs will further amplify the existing winner-take-all, first-mover nature of dev tools
- LLMs will encourage usage of open-source tools because they will be so much more useful with more/better training data