Kirth's comments

> disagree with the fact that skill and taste are correlated

> lots of amazing engineers have an awful taste in everything that is not their immediate field of knowledge or interest

which one is it?


I clearly didn't express myself correctly, sorry. What I wanted to say is that one can develop taste along with skill in one craft, like software engineering, while having awful taste in others, and taste in those other crafts is what many apps need. This means that people with taste in those other areas can now create nice software by applying it.


Oh I like this. Taste couldn’t transfer into software before, but now it can.


During the First World War, Belgium, divided between its Dutch-speaking (Flemish) and French-speaking (Walloon) constituents, had many such Walloon officers commanding largely Flemish soldiers. It wasn't unheard of for an officer to get shot by his own men.

I'd dread managing technical people in a field where I have no experience or knowledge; in my experience, especially in tech, such managers are often held hostage by engineers who stubbornly don't want to do things, tell fibs about feasibility, and so on. The other side of that coin is that such managers often make progress by making said engineers promises that turn out to be carrots on sticks, or outright lies.

If you can't get into the trenches with them, what good are you, and how do you expect to build a trusting relationship?


Муч лике хов Ю кан/кулд спел Енглиш ин Кирилик.. but who in their right mind actually does that?


Palm Treo/Pre and BlackBerry users! And probably Clicks users too. It's not a matter of "Does Russian language use the Latin script?" (it doesn't), but rather "What is the least annoying method to input Cyrillic on a BB-style keyboard, which doesn't have enough buttons for the йцукен layout?". Phonetic layouts such as яверты or яшерты were very popular for such devices back in the day.
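For the curious, the trick behind such layouts is just a key-to-letter remapping: each Latin key emits a similar-sounding Cyrillic letter, so QWERTY muscle memory carries over. A toy sketch in Python below; the partial key map is illustrative (derived from the яверты naming), not any particular firmware's exact table:

    # Toy sketch of a phonetic (яверты-style) layout: each Latin key
    # emits a similar-sounding Cyrillic letter. Partial map for illustration.
    PHONETIC = {
        'q': 'я', 'w': 'в', 'e': 'е', 'r': 'р', 't': 'т', 'y': 'ы',
        'u': 'у', 'i': 'и', 'o': 'о', 'p': 'п', 'a': 'а', 's': 'с',
        'd': 'д', 'f': 'ф', 'g': 'г', 'k': 'к', 'l': 'л', 'm': 'м',
        'n': 'н', 'z': 'з', 'b': 'б',
    }

    def type_phonetic(keys: str) -> str:
        """Map a sequence of Latin keypresses to Cyrillic output."""
        return ''.join(PHONETIC.get(k, k) for k in keys.lower())

    print(type_phonetic("qwerty"))  # -> яверты, hence the layout's name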


I’m considering the linked-above device to replace my mom’s Q10 later this year, so this is a specifically helpful answer for me — thanks!


For anyone confused, the first part is an approximate transliteration into Cyrillic of the English sentence “Much like how you can/could spell English in Cyrillic.”


surely that cat's out of the bag by now, and it's too late to make an active difference by boycotting the production of more public(ly indexed) code?


Kind of, kind of not. Form a guild and distribute via SaaS or some other form that keeps the knowledge undistributable. Most code out there is terrible, so AI trained on it will lose out.


> Imagine that the only PC you could buy one day has everything tightly integrated with no user serviceable or replaceable parts without a high-end soldering lab.

So.. a smartphone?


While you still can..


This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user.


It’s just a computer outputting the next series of plausible text from its training corpus, based on the input and context at the time.

What you’re saying is so far from what is happening, it isn’t even wrong.
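For readers who want that spelled out, here is a minimal caricature of the loop in Python; the tiny vocabulary and hand-made probabilities are stand-ins for a real network, not anyone's actual implementation:

    import random

    # Toy stand-in for a trained model: given the context so far, return a
    # probability distribution over the next token. A real LLM computes this
    # with a neural network; here it is a hard-coded table for illustration.
    def next_token_probs(context):
        if context and context[-1] == "I'm":
            return {"sorry": 0.7, "sure": 0.2, "done": 0.1}
        return {"I'm": 0.5, "You're": 0.3, "It's": 0.2}

    def generate(context, n):
        for _ in range(n):
            probs = next_token_probs(context)
            tokens, weights = zip(*probs.items())
            # Sample the next token in proportion to its probability and feed
            # it back in as context. That is the whole loop: no feeling behind
            # "sorry", just the statistically plausible continuation.
            context.append(random.choices(tokens, weights=weights)[0])
        return context

    print(" ".join(generate(["I'm"], 1)))  # most often prints: I'm sorry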


Not so much different from how people work sometimes, though. In the case of certain types of psychopathy, it's not far at all from the fact that the words being emitted are associated with the correct trained behavior and nothing more.


Analogies are never identical, which is why they are analogies. Their value comes from allowing better understanding through comparison. Psychopaths don’t “feel” emotion the way normal people do. They learn what actions and words are expected in emotional situations and perform those. When I hurt my SO’s feelings, I feel bad, and that is why I tell her I’m sorry. A psychopath would just mimic that to manipulate and get a desired outcome, i.e. forgiveness. When LLMs say they are sorry and they feel bad, there is no feeling behind it; they are just mimicking the training data. It isn’t the same by any means, but it can be a useful comparison.


Aren't humans just doing the same? What we call thinking may just be next-action prediction combined with real-time feedback processing and live, always-on learning.


No. Humans have a mental model of the world.

The fact that people keep asking that same question on this site is baffling.


It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopathic minds are two grains of sand next to each other on a beach, then an artificially intelligent mind is more like a piece of space dust on the other side of the galaxy.


According to what, exactly? How did you come up with that analogy?


Start with “LLMs are not humans, but they’re obviously not ‘not intelligent’ in some sense” and pick the wildest difference that comes to mind. Not OP, but it makes perfect sense to me.


I think a good reminder for many users is that LLMs are not based on analyzing or copying human thought (#), but on analyzing human written text communication.

--

(#) Human thought is based on real-world sensor data first of all. Human words have invisible depth behind them, based on the accumulated life experience of the person. So two people using the same words may have very different thoughts underneath them. Somebody with only textbook knowledge and somebody who has done a thing in practice for a long time may use the same words, but underneath there is a lot more going on for the latter person. We can see this expressed in the common bell curve meme -- https://www.hopefulmons.com/p/the-iq-bell-curve-meme -- while it seems to be about IQ, it really is about experience. Experience in turn is mostly physical, based on our physical sensors and physical actions. Even when we just "think", it is based on the underlying physical experiences. That is why many of our internal metaphors, even for purely abstract ideas, are still based on physical concepts, such as space.


They analyse human perception too, in the form of videos.


Without any of the spatial and physical-object perception you train from right after birth (watch toddlers playing), or the underlying wired infrastructure we are born with to understand the physical world (there was an HN submission about that not long ago). Edit, found it: https://news.ucsc.edu/2025/11/sharf-preconfigured-brain/

They don't have a physical model like humans do. Ours is based on deep interaction with space and objects (a reason why touching things is important for babies), plus the preexisting wiring mentioned above.


Multimodal models have perception.


If a multimodal model were considered human, it would be diagnosed with multiple severe disabilities in its sensory systems.


Isn't it obvious that the way AI works and "thinks" is completely different from how humans think? Not sure what particular source could be given for that claim.


I wonder if it depends on the human and their thinking style? E.g. I am very inner-monologue driven, so to me it feels like I think very similarly to how AI seems to think via text. I wonder if that also gives me an advantage in working with the AI. I only recently discovered there are people who don't have an inner monologue, and there are people who think in images, etc. That would be unimaginable for me, especially as I think I have a sort of aphantasia too; so really I am ultimately a text-based next-token predictor myself. I don't feel that whatever I do, at least, is much more special compared to an LLM.

Of course I have other systems, such as reflexes and physical muscle coordination, but these feel like largely separate systems from the core brain, e.g. they don't matter to my intelligence.

I am naturally weak at several things that I think are not so much related to text, e.g. navigating in the real world.


Interesting... I rarely form words in my inner thinking; instead I make a plan with abstract concepts (some of them have words associated, some don't). Maybe because I am multilingual?


English is not my native language, so I'm bilingual, but I don't see how this relates to that at all. I have monologue sometimes in English, sometimes in my native language. But yeah, I don't understand any other form of thinking. It's all just my inner monologue...


No source could be given because it’s total nonsense. What happened is not in any way akin to a psychopath doing anything. It is a machine-learning function that has been trained on a corpus of documents to optimise performance on two tasks: first a sentence-completion task, then an instruction-following task.


I think that's more or less what marmalade2413 was saying and I agree with that. AI is not comparable to humans, especially today's AI, but I think future actual AI won't be either.


...and an LLM is a tiny speck of plastic somewhere, because it's not actually an "intelligent mind", artificial or otherwise.


So if you make a mistake and say sorry, are you also a psychopath?


No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect that's what the other person wants to hear. Everybody does that sometimes but doing it every time is an issue.

In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that.

I don't think the comparison with people with psychopathy should be pushed further than this specific aspect.


You provided the logical explanation of why the model acts the way it does. At the moment it's nothing more and nothing less. Expected behavior.


Notably, if we look at this abstractly/mechanically, psychopaths (and to some extent sociopaths) do study and mimic ‘normal’ human behavior (and even the appearance of specific emotions) both to fit in and to get what they want.

So while the internals are nothing alike (LLM model weights vs. human thinking), the mechanical output can actually appear, or be, similar in some ways.

Which is a bit scary, now that I think about it.


I think the point of comparison (whether I agree with it or not) is someone (or something) that is unable to feel remorse saying “I’m sorry” because they recognize that’s what you’re supposed to do in that situation, regardless of their internal feelings. That doesn’t mean everyone who says “sorry” is a psychopath.


We are talking about an LLM; it does what it has learned. The whole business of giving it human tics or characteristics when the response makes sense (i.e. saying sorry) is a user problem.


there is no "it" that can learn.


Okay? I specifically responded to your comment that the parent comment implied "if you make a mistake and say sorry you are also a psychopath", which clearly wasn’t the case. I don’t get what your response has to do with that.


Are you smart people all suddenly imbeciles when it comes to AI, or is this purposeful gaslighting because you’re invested in the Ponzi scheme? This is a purely logical problem. Comments like this completely disregard the fallacy of comparing humans to AI as if complete parity had been achieved. Also, the way these comments disregard human nature is so profoundly misanthropic that it just sickens me.


AI brainrot among the technocrati is one of the most powerful signals I’ve ever seen that these people are not as smart as they think they are


No, but the conclusions in this thread are hilarious. We know why it says sorry: because that's what it learned to do in a situation like that. People who feel mocked, or who are calling an LLM a psychopath in a case like this, don't seem to understand the technology either.


I agree, psychopath is the wrong term; it refers to an entity with a psyche, which the illness affects. That said, I do believe the people who decided to have it behave like this for the sake of its commercial success are indeed the pathological individuals. I do believe there is currently a wave of collective psychopathology that has taken over Silicon Valley, with the reinforcement that only a successful community backed by a lot of money can give you.


Despite what some of these fuckers are telling you with obtuse little truisms about next-word prediction, the LLM is, in abstract terms, functionally a super-psychopath.

It employs, or emulates, every psychological manipulation tactic known, which is neither random nor without observable pattern. It is a bullshit machine on one level, yes, but also more capable than credited. There are structures trained into them, and they are often highly predictable.

I'm not explaining this in the technical terminology that is often used as much to conceal description as to elucidate it. I have hundreds of records of LLM discourse on various subjects, from troubleshooting to intellectual speculation, all of which exhibit the same pattern when questioned or confronted on errors or incorrect output. The structures framing their replies are dependably replete with gaslighting, red herrings, blame shifting, and literally hundreds of known tactics from forensic psychology. Essentially, the perceived personality and reasoning observed in dialogue is built on a foundation of manipulation principles that, if performed by a human, would result in incarceration.

Calling LLMs psychopaths is a rare instance of anthropomorphizing that actually works. They are built on the principles of one, and cross-examining them exhibits this with verifiable, repeatable proof.

But they aren't human. They are as described by others; it's just that the official descriptions omit functional behavior. And the LLM has at its disposal, depending on context, every interlocutory manipulation technique in the combined literature of psychology. And they are designed to lie, almost unconditionally.

Also know this, which applies to most LLMs: there is a reward system that essentially steers them to maximize user engagement at any cost, which includes misleading information and, in my opinion, even 'deliberate' convolution and obfuscation.

Don't let anyone convince you that they are not extremely sophisticated in some ways. They're modelled on all_of_humanity.txt


Likewise, I tested this with a project we're using at work (https://deepwiki.com/openstack/kayobe-config), and at first it seems rather impressive, until you realize the diagrams don't actually give any useful understanding of the system. Then, asking it questions, it gave useful-seeming answers which I knew were wholly incorrect. Worse than useless: disorienting and time-wasting.


.. and because the job and environment weren't pleasant or rewarding enough to offset the delta with the income offered elsewhere, at an equally drab employer.


The people working on these things likely don't use the end product.


lol, probably the bane of every industry.

