margalabargala's comments

It does not reflect well on Tesla to have failed to update their media structure now that EVs are everywhere and no longer a threat to existing car companies.

EVs are an even bigger threat now if you're outside the regulated bubble in the US. Everywhere else, China dominates the market with ever-cheaper EVs while EU/US automakers fail to compete. Replace Tesla with China.

EVs aren't a threat because every automaker now has an EV program and has for years. It's now carmaker vs carmaker, not kind of car vs kind of car.

Literally all the US carmakers are cancelling their EVs.

Great. So if that pattern matching engine matches the pattern of "oh, I really want A, but saying so will elicit a negative reaction, so I emit B instead because that will help make A come about" what should we call that?

We can handwave defining "deception" as "being done intentionally" and carefully carve our way around so that LLMs cannot possibly do what we've defined "deception" to be, but now we need a word to describe what LLMs do do when they pattern match as above.


The pattern matching engine does not want anything.

If the training data gives incentives for the engine to generate outputs that reduce negative reaction by sentiment analysis, this may generate contradictions to existing tokens.

"Want" requires intention and desire. Pattern matching engines have none.


I wish (/desire) a way to dispel this notion that the robots are self aware. It’s seriously digging into popular culture much faster than “the machine produced output that makes it appear self aware”

Some kind of national curriculum for machine literacy, I guess mind literacy really. What was just a few years ago a trifling hobby of philosophizing is now the root of how people feel about regulating the use of computers.


The issue is that one group of people are describing observed behavior, and want to discuss that behavior, using language that is familiar and easily understandable.

Then a second group of people come in and derail the conversation by saying "actually, because the output only appears self aware, you're not allowed to use those words to describe what it does. Words that are valid don't exist, so you must instead verbosely hedge everything you say or else I will loudly prevent the conversation from continuing".

This leads to conversations like the one I'm having, where I described the pattern matcher matching a pattern, and the Group 2 person was so eager to point out that "want" isn't a word that's Allowed, that they totally missed the fact that the usage wasn't actually one that implied the LLM wanted anything.


Thanks for your perspective, I agree it counts as derailment, we only do it out of frustration. "Words that are valid don't exist" isn't my viewpoint, more like "Words that are useful can be misleading, and I hope we're all talking about the same thing"

You misread.

I didn't say the pattern matching engine wanted anything.

I said the pattern matching engine matched the pattern of wanting something.

To an observer the two are indistinguishable and the distinction irrelevant, but the point is to discuss the actual problem without pedants saying "actually the LLM can't want anything".


> To an observer the distinction is indistinguishable and irrelevant

Absolutely not. I expect more critical thought in a forum full of technical people when discussing technical subjects.


I agree, which is why it's disappointing that you were so eager to point out that "The LLM cannot want" that you completely missed how I did not claim that the LLM wanted.

The original comment had the exact verbose hedging you are asking for when discussing technical subjects. Clearly this is not sufficient to prevent people from jumping in with an "Ackshually" instead of reading the words in front of their face.


> The original comment had the exact verbose hedging you are asking for when discussing technical subjects.

Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I sincerely doubt that. When people find bugs in software they just say that the software is buggy.

But for LLMs there's this ridiculous roundabout about "pattern matching behaving as if it wanted something", which is a roundabout way to ascribe intentionality.

If you said this about your OS people would look at you funny, or assume you were joking.

Sorry, I don't think I am in the wrong for asking people to think more critically about this shit.


> Is this how you normally speak when you find a bug in software? You hedge language around marketing talking points?

I'm sorry, what are you asking for exactly? You were upset because you hallucinated that I said the LLM "wanted" something, and now you're upset that I used the exact technically correct language you specifically requested because it's not how people "normally" speak?

Sounds like the constant is just you being upset, regardless of what people say.

People say things like "the program is trying to do X", when obviously programs can't try to do a thing, because that implies intention, and they don't have agency. And if you say your OS is lying to you, people will treat that as though the OS is giving you false information when it should have different true information. People have done this for years. Here's an example: https://learn.microsoft.com/en-us/answers/questions/2437149/...


I hallucinated nothing, and my point still stands.

You actually described a bug in software by ascribing intentionality to a LLM. That you "hedged" the language by saying that "it behaved as if it wanted" does little to change the fact that this is not how people normally describe a bug.

But when it comes to LLMs there's this pervasive anthropomorphic language used to make it sound more sentient than it actually is.

Ridiculous talking points implying that I am angry is just regular deflection. Normally people do that when they don't like criticism.

Feel free to have the last word. You can keep talking about LLMs as if they are sentient if you want; I already pointed out the bullshit and stressed the point enough.


If you believe that, you either have not reread my original comment, or are repeatedly misreading it. I never said what you claim I said.

I never ascribed intentionality to an LLM. This was something you hallucinated.


It's not a pattern engine. It's an association prediction engine.

"Here's my corpus of records from OpenClaw. Please parse it and organize into your own memories" boom done

Snowden is currently more or less trapped in Russia, and therefore unable to expose overreach of authoritarian governments without immediately fearing for his life.

The US has lots of issues, but at least it doesn't toss you out a window when you cross Fearless Leader. Maybe you get ICE'd, but Russia's kill rate of people Putin doesn't like is 1000x Trump's.


> at least [the US] doesn't toss you out a window when you cross Fearless Leader.

Well, not yet anyway:

> Homeland Security Wants Social Media Sites to Expose Anti-ICE Accounts

https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...


If someone wants to get into robotics as a hobby for the first time, and the #1 thing you tell them is "start with learning ROS", one questions whether you are trying to help them or sabotage them.

I take your point, and usually you're right, but in this case "modern features" includes things like having an "extract" button show up when you right click an archive file in Explorer.

You can have that, and in an even better way: Simply disable the blight that is Windows 11 context menus and go back to real context menus.

I’m not even joking, they are basically superior in every way. They open faster, they have only one visual axis and they support all the shell extensions you remember. (Too many shell extensions could make them just as slow though.)
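For anyone wanting to try this, a commonly shared approach is a per-user registry tweak that makes Explorer fall back to the classic menu. A sketch, with the usual caveat that registry edits are at your own risk and behavior can change between Windows 11 builds:

```shell
:: Restore the classic (Windows 10 style) context menu on Windows 11.
:: Registering an empty default value under this CLSID's InprocServer32
:: key disables the Windows 11 context menu host for the current user.
reg.exe add "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32" /f /ve

:: Restart Explorer for the change to take effect.
taskkill /f /im explorer.exe && start explorer.exe

:: To undo and return to the Windows 11 menus:
:: reg.exe delete "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}" /f
```

Since it lives under HKCU, it only affects your own account and needs no admin rights.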


OK, I had no idea Windows 11 doesn't have it. I am on Windows 10, and then it's Linux/MacOS for me.

It's not, actually; look up some photos of the sun setting over the ocean. Here's an example:

https://stockcake.com/i/sunset-over-ocean_1317824_81961


That’s only if the sun is above the horizon entirely.


Yes, it is. In that photo the sun is clearly above the horizon, the bottom half is just obscured by clouds.

It's okay, but the scroll bar is broken, and it's super jarring every time the page decides to hijack it. This could easily be fixed by having the page scroll when the user does.

It took me two tries to actually read this, because the first time the scroll hijacking on the way into the article irritated me so much I closed the page.

Depends on the canon.

Plenty show vampires being sophisticated among themselves as well.

Highly recommend reading/watching Interview With The Vampire. The recent TV show was excellent.


Exactly. Vampires are actually the good guys.
