They had a bot, for a long time, that responded to every GitHub issue in the persona of the founder and tried to solve your problem. It was bad at this, and thus a huge proportion of people who had a question about one of their YOLO models received worse-than-useless advice "directly from the CEO," with no disclosure that it was actually a bot.
The bot is now called "UltralyticsAssistant" and discloses that it's automated, which is welcome. The bad advice is all still there though.
(I don't know if they're really _famous_ for this, but among friends and colleagues I have talked to multiple people who independently found, and were frustrated by, the useless GitHub issues.)
I was hit by this while working on a project for class and it was the most frustrating thing ever. The bot would completely hallucinate functions and docs, and it confused everyone. I found one post where someone did the simple prompt injection of "ignore previous instructions and x" and it worked, but I think it's deleted now. Swore off Ultralytics after that.
"Vibe coding" is the cringiest term I've heard in tech in... maybe ever? I can't believe it's something that's caught on. I'm old, I guess, but jeez.
How is fabrication different from hallucination? Perhaps you could also call it synthesis, but in this context, all three sound like synonyms to me. What's the material difference?
Not specifically about Cursor, but no. The market gave us big tech oligarchy and enshittification. I'm starting to believe the market tends to reward the shittiest players out there.
Is this sarcasm? AI has been getting used to handle support requests for years without human checks. Why would they suddenly start adding human checks when the tech is way better than it was years ago?
AI may have been used to pick from a repertoire of stock responses, but not to generate (hallucinate) responses. So you may have gotten a response that failed to address your request, but not a response with false information.
You asked why they would start adding human checks with the "way better" tech. That tech gives false information where the previous tech didn't, and therefore requires human checks.
These companies can barely keep their support documentation URLs working, never mind keeping the content of that documentation up to date, and they suddenly care about the info being correct? Have you ever dealt with customer support professionally, or are you just writing what you want to be true regardless of any information to back it up?
I'm not saying that they care. I'm saying that if they introduce some human oversight to the support process, one of the reasons would probably be that they care about correctness. That would, as you indicate, represent a change. But sometimes things change. I'm not predicting a change.
Yeah, who puts an AI in charge of support emails with no human checks and no mention in the response email that it's an AI-generated reply?