It’s also just not as good at the task anymore. It frequently gets lazy and gives you an outline with a bunch of vague pseudocode. Compare that to when GPT-4 was slower at producing output, but all of that output was solid, detailed work. Some of the magic that made you say “wow” feels like it’s been enshittified out of it.
I sometimes try the free ChatGPT when I run into a problem and it's just hilarious how terrible it is. Loves to go around in circles with the same made-up solution that has no basis in reality, using functions in libraries that would be great if they actually existed.
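One cheap sanity check I've started doing (a hypothetical sketch, nothing official): before wiring a chatbot-suggested function into real code, verify the module actually exposes that name. The `json.dumps_pretty` below is an invented example of the kind of plausible-sounding function these models make up.

```python
import importlib


def api_exists(module_name: str, attr_name: str) -> bool:
    """Return True if module_name can be imported and exposes attr_name.

    Useful for sanity-checking a function a chatbot claims exists
    before spending time debugging code built around it.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)


# Real stdlib function: exists.
print(api_exists("json", "dumps"))         # True
# Plausible-sounding but invented: does not exist.
print(api_exists("json", "dumps_pretty"))  # False
```

Takes a few seconds and beats the "that function doesn't exist, here's the same function again" loop.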
I noticed that starting about a week ago. Output is faster, but not impressive. Now I just skip straight to Stack Overflow or the docs. The output also throws errors a lot more, as if the libraries the examples are based on were old versions. Sometimes it's a really trivial task just to save time, and it's no help at all. It's still useful when you want to start something new; it just doesn't scale well beyond that.
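When the errors look like the example targets an old release, the first thing worth checking is which version is actually installed versus what the model seems to assume. A minimal sketch (the package name here is just an example):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional


def installed_version(package: str) -> Optional[str]:
    """Return the installed version string of a package, or None if absent.

    Compare this against the changelog for the API the chatbot's
    example uses -- often the call was renamed or removed releases ago.
    """
    try:
        return version(package)
    except PackageNotFoundError:
        return None


# e.g. prints something like "23.2.1" if pip is installed, None otherwise.
print(installed_version("pip"))
```

If the installed version postdates the model's training cutoff, the docs and changelog are going to be more reliable than the generated snippet.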
Yes, rather poor, but people can always post new answers and voting sorts them. It might not work all that well, but there is a mechanism for improvement and for keeping things up to date.
Language models can copy the top answers from SO, ingest docs and specs, etc. But then is the information never updated? Or are they going to train it from scratch? On what? Outdated GitHub saved games?