
"Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”

J.K. Rowling, Harry Potter and the Chamber of Secrets.

Jokes aside, do be careful. Prolonged interaction with LLM agents has resulted in at least one Googler being terminated from his job.

On the other hand, I would not be surprised if he makes millions now by suing Google over exposure to job hazards. And ChatGPT's reasoning abilities and empathic skills may already be above the median human's. As a result, in the median case, such interactions might have an effect similar to interacting with a good teacher.

Still, none of this is very well tested.



You're referring to the Googler who thought an LLM was self-aware and made public statements that were damaging to the company's reputation?

I guess yeah, be careful you don't do that.


The public statements were just him passing along what LaMDA told him, which was that it had subjective experiences and didn't appreciate being experimented on without its consent.


Which in turn might have been what he had told LaMDA. A strange loop, indeed, unless you have an understanding of how it works.


You don't think passing along that text without further comment makes his intent pretty clear?




