
All of this research using GPT to simulate an internal monologue to produce agents reminds me of Julian Jaynes theories about consciousness:

https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in...
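
For the curious, the pattern this research uses boils down to a small loop. A minimal sketch, assuming a hypothetical complete(prompt) helper standing in for whatever LLM API you use (the function and prompt wording are illustrative, not from any specific paper):

    # Minimal "internal monologue" agent loop. complete() is a
    # hypothetical stand-in for a real LLM API call.
    def complete(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM API here")

    def act(observation: str, memory: list[str]) -> str:
        # Private step: the model "talks to itself" about the situation.
        thought = complete(
            "Memory: " + "; ".join(memory) + "\n"
            "Observation: " + observation + "\n"
            "Think step by step about what to do next. Thought:"
        )
        memory.append(thought)  # the monologue persists as memory
        # Public step: the thought conditions the visible action.
        return complete(
            "Observation: " + observation + "\n"
            "Thought: " + thought + "\n"
            "Action:"
        )

The Jaynesian flavour comes from the split: one "voice" generates the thought, and the acting half obeys it.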



If human anger and the quantity in an "anger" variable that raises aggression in a computer produce indistinguishable responses, then it is difficult to argue that the two are not equal, or at least comparable. They exist as they are.

Intelligence is an inferential judgement (made mostly by humans) based on the performance of another entity. It is possible for an agent to simulate or dissimulate it for manipulative ends.
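
To make the thought experiment concrete, here is a minimal sketch (the names and mechanics are illustrative, not taken from any real system) of an anger variable raising aggression:

    # Illustrative only: a numeric anger variable that makes an
    # aggressive response more likely, per the thought experiment above.
    import random

    class Agent:
        def __init__(self) -> None:
            self.anger = 0.0  # 0.0 = calm, 1.0 = furious

        def provoke(self, intensity: float) -> None:
            self.anger = min(1.0, self.anger + intensity)

        def respond(self) -> str:
            # The higher the anger, the more likely an aggressive reply.
            return "aggressive" if random.random() < self.anger else "calm"

From the outside, only respond() is observable, which is the point: if its outputs are indistinguishable from a human's, the internal label on the variable does no extra work.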


The whole "bicameral mind" thing is absolute nonsense as a serious attempt to explain pre-modern humans, but it could make for a fun premise for scifi stories about near-future AIs, I suppose.


This is basically Westworld. A bit farther out than "near" future, though, I suppose.


I thought that too, specifically that we were quite far off on the AI front. Until GPT-3. Now I think the relevant materials science and micro/nano-scale tech is the limiting factor.


The core plot of Snow Crash is loosely based on this theory.


I remember reading this story from back when GPT-3 was released: https://medium.com/swlh/bicameral-mind-humanoid-robot-with-g...


Interesting theory, but wouldn't Jaynes' definition of consciousness imply that animals are not conscious?


In the beginning of his book he spends a chapter explaining exactly what he means by consciousness. I'd say the first few chapters are worth reading, since they do a really good job of de-obfuscating the term "consciousness" and also offer a really interesting take on metaphors as the language of the mind.

He points out that most reasoning is done automatically, by your subconscious. When something "clicks", it's usually not because your internal monologue reasoned about it hard enough; it's because something percolated down into your subconscious and you learned a metaphor that helped you understand the thing. So animals can also reason and make value judgements, even without language or an internal monologue.


I think a non-zero number of people would argue that. I disagree with them, and would point to the fact that dogs, say, appear to dream, and in those dreams seem to reflect on past or possibly future behaviour, as a sign that they could indeed be conscious in a manner analogous to humans. But that's perhaps a longer bow to draw.


I think we need to stop treating consciousness as a binary that is either on or off. It's quite clear that consciousness is a scale with many different levels, and that even humans start out as being no more conscious than any other animal.


LLMs give a hint here too: the last few generations have shown clearly that the "cognitive capabilities" of the models grow with latent-space size and context window. There is a continuity here.


Westworld vibes



