Like many others, I can vouch for “AI is good at programming now,” at least for the generic web programming that I do. But that doesn’t imply that it generalizes to other fields, and this article, at least, doesn’t show that it does.
I would like to read more from people who have other jobs about what they see when they use AI. Did they see a similar change?
It translates PDFs for me and gives me a good enough text dump in the console to understand what I’m being told to do, provided the PDF is simple enough (a letter, for example). It doesn’t give me a structured English recreation of the PDF.
I’ll give it credit that it’s probably underpinning the improved translation in e.g. Google Translate when I dump in a paragraph of English and copy the Chinese into an email. But that’s not really in the same ballpark.
The only other professional interaction I’ve had with it was when a colleague saw an industry-slang term and asked AI what it meant. The answer, predictably, was incredibly wrong, but to his completely naive eyes it seemed plausible enough to put in an email. The term related to a metallurgical phenomenon observed at a fault; the AI found an unrelated industry widget that contained the same term and suggested the fault was due to the use of said widget.
I don’t even really see the telltale signs of AI writing from people using it to summarise documents or whatnot, nor can I think of how I could use it to do what I do faster or more efficiently. So I don’t think it’s being used to ingest and summarise material either.