
It's helped me to adapt to a more traditional schedule. I worked 6-7 day weeks, 10+ hours a day on average for about a decade. Thanks to LLMs, I just took off the weekend to build garden beds with my wife and rest. I shut down around 6pm now when I used to frequently go from ~10a to ~10p most days.

And to be clear: this isn't because of some magical property of LLMs, but rather, because it satiates my nervous system's bar for "that'll do, pig." I view the LLM exhaustion as a gift/tell that I've done enough for the day/week.


Here you are describing any job, nothing related to LLMs.

Garden variety malignant narcissism (my armchair psych opinion but grew up in this dynamic). It's acting out in response to their deep shame (the root thing that all of the narcissistic behavior is desperate to hide). They can't admit they're wrong, otherwise their entire psychological world collapses.

Coincidentally, that's also why it's so terrifying to see so many of these types in power. While most narcissists are mostly hot air and talk, occasionally, you get a legitimate wildcard that's destructive in difficult to repair ways (sometimes leaving nothing but smoldering rubble).


It is very interesting when you explore the neurological mechanics of this. A narcissist is rigid thinking dialed up to 11. It is essentially a special and pathological “skill” their brains have learned. They do not have to update their priors or spend metabolic energy on almost anything in their life. Their brain figured out that the best way to survive and conserve energy was to avoid costly updates to their beliefs. Repeated over years, that system becomes deeply myelinated, a core identity. Unwinding that is a feat. Some people just have a narrower set of rigid beliefs (e.g. religion, work skills, etc.).

Agreed on your neuro take. It would seem that the rigidness is somewhat reinforced by the pervasive mechanism of digital feedback. Now that we can see clips of stupid behavior propagated online as easily as opening our eyes and tapping a screen, the rigid behavior of an overt narcissist is on display as a model for less-equipped minds to absorb. The narcissist acquires a visually recognizable position of power through their actions, and this makes them highly desirable to those lacking control in their own lives. The audience is global... And where the terrain is fertile, that audience also votes for their model.

Social media is a toxic stew of identity-based narrative reinforcement, custom-tailored to your specific, and I mean really specific, narratives. Does your identity revolve around religion A, hobbies B and C, political views D? The algorithm will feed you exactly that pro-narrative, pro-identity content. Did you react to the rage-bait things we tried out on you? Awesome, now you are getting even more toxic nonsense streamed to your brain. It is genuinely scary. It creates rigid identity and strongly reinforces it. It is like we created a way to chunk a memetic hazard into a series of small, unidentifiable pieces. The net result? No one would open a door labeled "If you open this door you become an extremist and will have really rigid identity beliefs" — who would? But clicking thumbs up on a "funny" political meme? Sure, why not.

So... I guess if technology is meant to trigger our impulses, then the world is slowly moving in the direction dictated by the impulses that form the largest cluster, pulling the whole environment their way. Just like a carriage with many horses that cannot be controlled if a group of the horses decides to pull right and go into the ditch. So we will have to endure the fall of everything just for the impulsive, unevolved people to learn their lesson. Kind of a grim view... but it seems like it right now.

Yes, and here's an interesting (and clear) example that shows that narcissism is a complex delusion that puts one's own fault squarely into a blind spot that cannot be perceived. I watched this and, for the first time in my life, felt a huge pang of compassion and sadness for those that suffer from it, even though they make life more difficult for everyone else. They are broken.

https://www.youtube.com/watch?v=bqRIw5FICAs

A Kent State professor calls 911 because she can't get into her building to pee; she is clearly drunk; they give her every opportunity to get a ride home; she refuses and is eventually detained. Later she goes to the police department to get an apology from the officers involved. It was, to me, a shocking example of the narcissistic delusion, with stakes low enough that one could focus on that and not the side-effects.


Trump’s shame, I wonder, might be rarer than the garden variety. Nothing seemed to endear him to Manhattan elites. Not the pro wrestling, not bragging like an '80s rapper. I wonder how that important internalized shame could change.

There seems to be something about Pres. Obama mocking him during the Correspondents' Dinner. A venue for mockery, sure, but a black man mocked a son of Fred Trump.


Built my first PC (for basement LAN parties) using the old family Packard Bell case. Cut my thumb on the poorly machined aluminum inside...I'll cherish that scar forever.

Ah, the good ol' days.


I had a similar opinion until I started to see a flood of outages, data leaks, money being pulled, etc starting to crop up right as co's started hailing AI as the second coming.

Now? I'm waiting for the inevitable reality check. AI doesn't go away (I personally don't want it to; it's a power tool for an experienced dev), but imo, the market is not too far from correcting the hype. Reality can only shoulder so much bs before the rubber has to hit the road.

The promises being made over the last few years are not being fulfilled and big money likes results, not talk. So, unless we get another (significant) rabbit coming out of the hat in 2026, the momentum (again, imo) won't be there to sustain the necessary long-term growth (i.e., in terms of mass-adoption, this era of AI is closer to AOL than Facebook).


> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.

Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."

Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").


It's only recently that some have come to terms with the fact that DNA evidence sometimes returns false positives. Society, and law enforcement, assumed that DNA was infallible. No one apparently wondered whether millions of people could really be reduced to a tiny number of genetic markers with no overlap.

Danish police had to redo 20,000 DNA tests with a larger set of markers being tested, because they jailed someone based solely on a DNA test and didn't consider that they might have gotten the wrong person, despite the DNA match. It's essentially a human hash collision.
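The "human hash collision" framing can be made concrete with back-of-the-envelope arithmetic. A minimal Python sketch, using illustrative numbers I'm assuming for the example (not the actual Danish figures or real marker statistics):

```python
# Sketch: a DNA profile acts like a short hash of a person.
# With enough people in a database, chance matches are expected.
def expected_false_matches(db_size: int, random_match_prob: float) -> float:
    """Expected number of innocent people whose profile matches a
    crime-scene sample purely by chance (linearity of expectation)."""
    return db_size * random_match_prob

# Assumption: a small marker panel with a 1-in-a-million random
# match probability, searched against 5 million stored profiles.
print(expected_false_matches(5_000_000, 1e-6))  # 5.0 chance hits expected
```

Adding markers shrinks the random match probability, which is exactly the point of retesting with a larger marker set.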

Identification by AI is going to be the same, except worse, because it's frankly less scientific. Law enforcement, the judicial system, and especially the public are simply too uninterested in learning the limitations of these types of systems. Even in the more civilized parts of the world, police would love to just have the computer tell them who to pick up and where.


There was a man arrested in Santa Clara County because his DNA was tracked to a murder scene by the paramedics who treated him before they were called to the scene of the murder. He only got away with it because the public defender realized that he was in the hospital's detox at the time of the murder.

Typically, the “it” in the phrase “got away with it” refers to an action that broke the rules.

“Got off” would be more appropriate

"got off" implies he was guilty but got away with it. I'd say "vindicated" or "absolved" fit the bill here.


Closed source DNA testing software and hardware is a travesty imo

not the first instance.

This was 2023 https://www.youtube.com/watch?v=lPUBXN2Fd_E&t=19s

A dude in the USA was arrested in a casino because the casino's facial recognition software said he had been trespassed before. He hadn't. I think there were height and eye colour differences. The police still arrested him and booked him. I think the prosecutors took it to trial.


I'm sorry, but this is a piss-poor excuse. When I use Claude to code and it ships broken features, I'm 100% responsible.

Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

If the point is "cops can't be trusted". Why do they have GUNS?! AI is the least of your problems.

I feel like I'm going crazy with this narrative.


> I feel like I'm going crazy with this narrative.

We're only getting warmed up. There are programmers on HN who will take the output of their favorite AI, paste it, and run it. And we're supposed to be the ones who know better.

What do you think an ordinary person is going to do in the presence of something they cannot relate to anything else except an oracle, assuming they know the term? You put anything in there and out pops this extremely polished-looking document, something that looks better than whatever you would put together yourself, with a bunch of information on it containing all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those who are not.

They're going to fall for it, without a second thought.

And they're going to draw consequences from it that you thought could use a little skepticism. Too late now.


When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought.

You might even argue that's the purpose of the inscrutable black box.


AI is the new "it's policy."

The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?

AI facial recognition is smarter than what they are capable of. That's not the issue. It is much faster than a human, and state-of-the-art models make fewer errors than a human (though the types of errors are not the same).

The issue is that facial recognition is just not very reliable. Not for humans and not for machines. If you look at millions of people, some of them just look incredibly similar. Yet police apparently thought that was all the evidence they would ever need. A case so watertight there's no point in even talking to the suspect.
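The numbers behind "some of them just look incredibly similar" are a textbook base-rate problem. A minimal sketch, assuming illustrative accuracy figures (not any vendor's real specs):

```python
# Sketch: even an accurate matcher searching a huge gallery yields
# far more false matches than true ones.
def false_positives(gallery_size: int, fpr: float) -> float:
    # Non-matching faces the system still flags (one true match assumed).
    return (gallery_size - 1) * fpr

def match_precision(gallery_size: int, tpr: float, fpr: float) -> float:
    # Probability that a flagged match is actually the right person.
    return tpr / (tpr + false_positives(gallery_size, fpr))

# Assumption: 99.9% true-positive rate, 0.01% false-positive rate,
# searched against a gallery of 10 million faces.
print(round(match_precision(10_000_000, 0.999, 0.0001), 5))  # ~0.001
```

Under those assumptions, there are roughly a thousand false hits for every true one, which is why a match alone shouldn't be treated as evidence.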


So the sane solution here is just leaving unreliable stuff to humans and reliable stuff to machines. Especially so when human wellbeing and freedom are at stake.

To define the line between the two, calculate the percentage of cases in which mainstream CPUs return anything but integer 4 after adding integer 2 and integer 2, and use that as the threshold for "reliable".


> The “I” in “AI” stands for “intelligence”

By that logic the “I” in Siri is 2x more intelligent.


Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says.

Also, their whole job is dealing with people who constantly lie to them.


There are two things occurring here.

Police get raises and recognition for closing cases. In general, they don't care if you're guilty or not; that's someone else's problem. Same with the detective, same with the DA. The more cases they close, the 'tougher they are on crime'.

The next thing occurring is

https://en.wikipedia.org/wiki/Computer_says_no



If you have a broken system whose injustice is checked only by the limitations of the human elements, and you start replacing those human elements and powerscaling them, you have an unlimited downside.


Some police departments seem to actively reject candidates who score higher on IQ tests. Not that I think IQ test scores and actual intelligence are strongly related, but it clearly shows their intended target candidate group.

https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...


This came up a few weeks ago. I don't think it's true. This lawsuit from 26 years ago is the only example anybody has come up with. Among the problems with this claim:

* Nobody can find a police department that administers any kind of general cognitive test.

* There are large states with statewide written police aptitude tests that are imperfect but correlated to general cognitive ability, and maximizing scores on that test is the universal correct strategy.

* It's a luridly stupid policy and most municipalities aren't luridly stupid.

I think this happened like, once or twice, in one or two of the 20,000 police departments across the United States, many of which are like one goober and his sidekick (no offense to them; just, you live in gooberville, you're a goober), and now it's an Internet meme that police departments specifically hire for midwittery. Nah.


In different states, police use cognitive aptitude tests such as the Wonderlic -- https://jobdescriptionandresumeexamples.com/10-important-fac... -- https://www.practice4me.com/lst-police-exam/ -- these are not strictly 'IQ' tests, but they're very similar.

The Wonderlic might as well be an IQ test (I'm using the term "general cognitive test").

The LST isn't; it's a domain-specific occupational exam.

If you find a place that (1) uses the Wonderlic and (2) has recently (like, not all the way back in 2000) claimed there was a high-end cut-off for applicants, you'll have disproven my claim. I don't think giving general cognitive tests to prospective police officers is common; this is why there are things like the LST, the PELLETB, and the POST.


You're over-selling the minimum level of intelligence in homo sapiens.

What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work".

The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing.


As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.

Not all accountability need be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone claiming that the Chief of Police saying "We are not going to talk about that today" is not the biggest scandal, but the AI is.

  "Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruined a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right".

The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second guessing whatever it spits out.


> someone saying the Chief of Police saying "We are not going to talk about that today?" is not the biggest scandal, but the AI is.

Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.


Yea, but this feels like when a Waymo ran over a cat and a human driver ran over a toddler, and both got the same level of coverage in the media (actually, the cat got more follow-up coverage). And I'm supposed to believe both issues are equally important.

No. That's gaslighting, and totally misplaced political activation.


What do you propose we do in the latter situation? The news isn't the value of the life that was (presumably) lost. The news is the circumstances that made that loss possible. The human driver was maybe careless, or maybe didn't look. The child-safety classes I took emphasized over and over again to look around your car and yard before backing out. This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it.

Waymo hitting a cat is obviously less tragic, but if it can hit a cat, what else can it hit? A toddler? A human? The wall of your kitchen? This is a problem that has no known solution; furthermore, it's a problem that the engineers at Waymo don't seem overly keen on solving quickly.


"This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it."

Great, let's just apply that logic to Waymo as well and call it a day (see how silly that sounds?). Waymo has engineers..so does the Department of transportation.


I'm really not sure how to respond to this because it seems like you're insinuating that the Dept of Transportation has the same level of control over ALL cars in the country as Waymo has over their cars.

You are right IMO to question why North Dakota police were able to obtain custody of this Tennessean woman in the first place; you'd think something like that should require far stronger evidence than facial recognition.

But, then what good is facial recognition for? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard, if a computer thinks you look like a suspect you can be harassed by police in a state you’ve never even been in?

I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.


Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.

It is absolutely not reasonable to use low-quality photos to decide someone halfway across the country with no history of even leaving their local area is 'a suspect'.

You wouldn't know they had no history of leaving their local area unless you interviewed them.

Why doesn't the investigator have to supply some sort of evidence that she has a history of leaving her local area, rather than putting the onus on the accused? This line of argument is halfway to "guilty until proven otherwise".

You and the GP that replied to me are way overstating what it means to be a "suspect". It just means the police are investigating you and consider it a possibility you've committed the crime. On its own, it is not a sufficient status to search your home, subpoena your ISP, or arrest you; all of those things require a much higher burden of evidence, and often a third party's (judge's) approval. People routinely become "suspects" on much flimsier evidence than an unreliable software match; if I call in an anonymous tip that I saw you acting suspiciously near the crime scene, you will probably become a suspect.

If you'd like, you can replace the term "suspect" in my post with "person of interest", which colloquially implies a lot less suspicion but isn't practically any different in terms of how the police interacts with you.


> Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.

It's absolutely absurd. The argument that AI is the problem is literally the people arguing against AI shedding responsibility to the machines. The people arguing that AI is the problem are essentially (philosophically) the same people who will say it was the AI's fault.

The thing it most reminds me of is people trying to stop the deaths and injuries that come as a result of "swatting" by being really angry at people who "swat" and proposing the harshest punishment for it they can come up with (or that outdoes anyone else's in the thread).

The problem with swatting is that police were showing up to the houses of harmless people based on anonymous phone tips and murdering them. You guarantee swatting will work indefinitely when you indemnify the cops.

You don't need AI for injustice in the US justice system. There is literally no part of the US justice system that makes sense at all, and even in the best case scenario when the guilty are caught, tried, and punished, it is tremendously wasteful, cruel, and ass-backwards. Juries are basically the AI of the US justice system, allowing the prosecutorial and enforcement apparatus to be infinitely cruel, illogical, self-serving and incompetent. 12xFull AGI. AI couldn't do any worse.

> I feel like I'm going crazy with this narrative.

You're not alone.


You are exactly correct. Cops cannot be trusted. We spent a lot of time pointing that out in 2020. AI is the least of our problems with policing.

Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.


Are we just gonna pretend the wide implementation of bodycams hasn't shown that, the overwhelming majority of the time, the cops weren't in the wrong, to the point that the same people who demanded them now want them gone?

Citation needed. Who are these people who wanted improved police oversight who are supposedly now fighting for the removal of bodycams?

I feel like I'm going crazy that anyone tries to suggest the AI and the producers and promulgators and apologists of AI played no part and bear none of the responsibility in this narrative.

Because the responsibility lies on the part of the criminal justice system who used the flimsy AI facial recognition evidence to arrest and hold her for months. If AI didn't exist, and this same incident happened because a human looked at a photograph of the woman and said "I think this might be the same person who committed the crime in the video", it would be insane to blame the people who invented photographs or video recording for her arrest.

The problem is in how these tools are sold to them. Not everybody can be an expert in every topic. Like in every other application area, these AI systems are promoted as being able to do about a thousand times more, and a million times more reliably, than they actually can. Of course the departments can be expected to do some due diligence and instruct their officers, but the lies told by AI system suppliers are where a large part of the blame belongs. Manufacturers of cameras or CCTV systems never told the police department that the system would do their job for them.

But it's not totally irrelevant in this story.

Cops are already susceptible to confirmation bias, and for "efficiency" they are delegating part of their job to apparently magical tools that will only increase that bias. And because it is for efficiency, you can bet they won't be given extra time to validate the results.

What or who is at fault isn't either/or, it's a bunch of compounding factors.


You’re on the right track here but I don’t think it should be hand-waved away as “the least of your problems” - it’s yet another weapon that police in the USA can use against the population with impunity. They’re going to have to reckon with all of this in the coming years - cops having guns and armored cars, “qualified immunity”, the “stop resisting” workaround for brutality and now this AI

You’re going crazy because up until this exact moment you’ve never had to confront the reality that these tools, placed into the hands of the common man, are viewed as authoritative and lack any accountability or consequence for misuse.

For anyone who has been victimized by law enforcement or governments before, we’ve been warning about this shit for decades. About the lack of consequence for police brutality. The lack of consequence for LPR abuse. The lack of consequence for facial recognition failures and AI mismatches.

You need to understand that by using these systems correctly and holding yourself accountable, you are in the minority. Most people do not think that critically, and are all too happy to finger the computer when things go badly.

And until you accept that, and work to actually hold folks accountable instead of deflecting blame away from the tool, then this won’t actually change.


Your answer presumes we cannot hold people accountable. I think that is incorrect.

Do you mean hypothetically could society hold law enforcement personnel accountable for mistakes, bad judgement, flagrant criminal conduct, horrendous abuse of any and everyone? Certainly, a large scale and comprehensive restructuring of America’s law enforcement and prosecutorial system is legally possible.

However, I hold to the opinion that if you are discussing actual reality, based on decades (if not the entire period post civil war, for near certainty) of historical examples and the current “majority” position of the US electorate: there is a nearly unqualified NO. We cannot, or will not, hold law enforcement accountable for even intentional, planned, and malicious conduct in a vast majority of cases. There is practically no accountability at all, and that’s just for thoroughly proven intentional conduct. Bad judgement, alleged mistakes, etc are even less able to result in any action.

The reality of the legislation and precedent ensure it. It’s not a bug, it’s a feature.


The AI is the authority with so much knowledge that we hear a reassuring "Please continue" [0].

https://en.wikipedia.org/wiki/Milgram_experiment


It's called qualified immunity. Many support its repeal. I hope you join them, and convey the same to your local representatives and candidates. Until it is reformed few if any officers or administrators of criminal justice in the United States will ever feel any type of accountability.

Short of video evidence of blatant gun to the back of the head style homicide qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice. Criminal charges against officers are exceedingly rare. She should be able to sue this detective directly. Of course she can sue the government too, and should. But without any personal consequences for the people carrying out these acts, taxpayers will continue to bail out these practices without ever noticing. Your own government should not be a shield for a police officer who has violated you or your neighbors.


> Many support its repeal.

There's nothing to repeal. Qualified immunity is a doctrine that the judicial branch made up out of thin air, with no legislative backing.

But agreed, we need legislatures to write laws that expressly hold police accountable, and declare that they are not shielded from liability when things go wrong due to their own failures and negligence.


Not that it changes your point, but, um actually:

While the origins of qualified immunity are judicial, some states loved the idea so much they went and made it statutory too. Louisiana's 2024 bill explicitly removes negligence as an exception (which is a valid method of circumventing qualified immunity under jurisprudence at the federal and most state levels). Louisiana requires intentional violations or criminal actions even to be able to bring a claim.


> Short of video evidence of blatant gun to the back of the head style homicide qualified immunity means most law enforcement officials are never held accountable for their miscarriages of justice.

And frequently not even then.


You can hold someone responsible only after they've actually fucked up. And with the way things move in the criminal justice system, that can take months to discover. Holding them responsible doesn't really fix anything, it's purely reactive.

Dude, not sure which team you're working on, but across many, many domains (corporate, business, and political) people are already delegating full decision-making and responsibility to AI. Unless national governments and standards institutions create and enforce ironclad AI governance laws, situations resembling what this poor granny went through are going to occur again and again.

There’s money to be made selling AI plausible deniability machines that allow end users to enact unethical policies while evading accountability, but only if all moral responsibility ostensibly falls on the end user and none on the dealer.

When are cops ever treated the same way as the rest of us?

Well, in most cases I would prefer a cop's word to outweigh the word of an average joe.

You should tell that to Angela Lipps; I'm sure she told every cop she came in contact with that she had never been to Fargo. Cops have a responsibility to do their job, and part of that job is listening and relying on proof. All those cops were either too lazy or afraid of their superiors. This is unacceptable given the amount of power and information they have access to. We should either defund the police system or reform the hell out of it. BTW, where was her state representative during this fiasco?

The belief by a juror that law enforcement personnel, especially phrased as a belief that applies to law enforcement personnel as a generic group, is a well established basis for a challenge for cause leading to exclusion of that person from being a juror. The US jury system is build explicitly on excluding these types of belief in juries in order to ensure fairness, impartiality, and individual and case/witness specificity of “triers-of-fact”.

I could understand someone who disagrees with it, but your position would be antithetical to current and historical thought on what defines a fair jury.


>The belief by a juror that law enforcement personnel,

>excluding these types of belief

You have not stated what belief you are talking about.


True; apparently I got backspace-happy when I posted the reply on mobile. I was talking about the belief by a prospective juror that law enforcement personnel are more credible or trustworthy than others due to their status as law enforcement personnel.

This is not quite true. The rules of evidence state that law enforcement (official) testimony is more credible than civilian testimony. Officials have a wide exemption from hearsay objections if the official was performing an official task at the time.

Do you think police are inherently more honest than everybody else? Why would you think that?

Why should having that particular job give you that privilege? All should be equal before the law.

I mean, this is the USA we're talking about. Cops are given huge authority over everyone else, with poor accountability. AI just lets them pretend to be even less accountable. And by "pretend" I of course mean "get away with it".

See, AI was used to accelerate arrest and jailing, but not to follow through. It was not used to ensure her well-being. Clearly this demonstrates that AI contributes to treating humans inhumanely, and demonstrably AI is not used to improve anyone's quality of life. Stop making excuses for "AI not at fault here".

It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.

It's also why people start parroting the phrase "the purpose of a system is what it does". Look at where we are right now: a precipice before this becomes widely used in all forms of policing. We still have a chance to police the police's use of the AI.

The purpose of using AI to identify suspects in criminal cases is to ease the burden of manual searching for a suspect (or insert whatever the purpose of statement you want). Ok, but we're getting false positives that are damaging people's lives already in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse to not regulate it.

At a minimum, we should enshrine the right to appeal AI and have limits on how it can be used for probable cause.

This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure 4 hours isn't the same as 5 months, but still this guy wanted to show multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.

Like if the police officer can't read numbers, they can't do breathalyzer tests on people. If the AI can't be used responsibly, then it can't be used at all.

[0]: https://www.youtube.com/watch?v=lPUBXN2Fd_E


If you're skeptical, watch this - https://www.youtube.com/watch?v=lPUBXN2Fd_E

So what? There were false arrests and convictions made by misuse of line-ups, DNA, eye-witnesses, photos, bloodstains, fingerprints, etc. since forever. You must also blame all those other technologies, so what do you think the police should use to find suspects? In your view, the more help police have, the worse a job they'll do. Is that actually the trend?

With all other proof you mentioned, there was always a human putting his signature.

Now that they can blame "AI" no specific officer(s) will take the blame, ever. If no one is responsible there will be many more false positives.

And false positives destroy lives


> With all other proof you mentioned, there was always a human putting his signature.

There was a human doing that in this case; AI doesn’t initiate charges. “In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.”


The article could have - but didn't - mention this specific Fargo police detective's name.

The article not mentioning the name does not change that the detective did sign the police charging documents.

(Nor does the omission in the article of other names and procedural details change the fact that for there to be actual criminal charges, an arrest warrant, extradition, and incarceration, a number of other people had to sign their names to official acts, including, among others, at least one public prosecutor, and more than one judge.)


So what???

This woman lost most of her material possessions, was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash" etcetera know well. Another opportunity, presented BY AI models for more routine police oppression

As the wise singer said: "Fuck the police!"


Exactly, it's the police's fault, as well as the wider system they operate in that enables that kind of abuse, and they do it anyway even with out AI.

AI is, in this case, a tool enabling it, because trawling large databases with AI allows finding people with a degree of similarity to a suspect that would reasonably have constituted probable cause in the context of what was until fairly recently the norm for police work, because that work relies on proximity and connections to the crime. The understanding of probable cause, and what is necessary for it, needs to adapt given the actual investigative process in the case, including the use of large databases unconnected with the events and locality of the crime.

You're right that they often do a lot of harm.

The point that you're missing is that, in a system where such abuses are possible, many of us really don't want one more tool in their box for them to fuck us with.

Like, they already prove themselves incompetent- giving the power to track anyone in the US via a distributed ALPR system just makes them more dangerous. Giving them all these "AI" based tools does the same.


Misleading title*

> The default rate among U.S. corporate borrowers of private credit rose to a record 9.2% in 2025

Emphasis added. Headline makes it sound like retail credit, not corporate specifically.

*Edit: Not misleading, just an unfamiliar term/usage from my perspective. I'm not a finance guy so didn't know the difference and assumed others wouldn't either. Mea culpa.


TBH "private credit" (meaning exactly what this article is talking about) is such a big thing in the finance industry that probably most finance industry people can't even fathom that the title is misleading to non-finance-industry people.

I'm not saying they are right. But it's like if you posted an article called "Python Is Eating the World" on a non-tech site and people got mad because they thought the article was about a wildlife emergency. Fair for them to be confused, but maybe not fair to accuse the title of being misleading (at least not intentionally).


Ha, yes I didn't even consider it meant anything other than corporate private credit. Otherwise we'd be talking about presumably mortgages or "consumer debt". Right?

It's some sort of Gell-Mann-Amnesia-like effect. I am accustomed to seeing thoughtful, informed discussion about technical topics on HN, so then it's jarring when something like this hits the front page and nobody seems to have any idea what they're talking about.

It's the opposite of Gell-Mann Amnesia: I am a SWE and I come here because I find it one of the best places to keep abreast of the broader software world, not just the little corner of it that I'm currently working in. So in the things that I know well, I trust it. My wife is a medical professional, and so I know just enough to see that most medical conversations here are complete and utter nonsense.

So the mental model I have of the average HN contributor is basically that they are all SWEs: they know software engineering extremely well, and the farther you get from that the less valuable the conversation will be, and the more likely it will be someone trying to reason from first principles for 30 seconds about something that intelligent, hard-working people devote their careers to.


Probably mostly accurate. Though a few of us do know lots of topics. Can outscore med students on USMLE prep, know what private credit is, etc., etc.

> Headline makes it sound like retail credit

I’m coming at this loaded with jargon, so excuse my blind spot, but why would the term private credit bring to mind anything to do with retail specifically?

(The term private credit in American—and, I believe, European—finance refers to “debt financing provided by non-bank lenders directly to companies or projects through privately negotiated agreements” [1].)

[1] https://corporatefinanceinstitute.com/resources/capital_mark...


> but why would the term private credit bring to mind anything to do with retail specifically?

If a layman is unfamiliar that "private credit" is about business debts, and therefore only has intuition via previous exposure to "private X" to guess what it might mean, it's not unreasonable to assume it's about consumer loans.

"private insurance" can be about retail consumer purchased health insurance outside of employer-sponsored group health plans

"private banking" is retail banking (for UHNW individuals)

But "private credit" ... doesn't fit the pattern above because "private" is an overloaded word.


> But "private credit" ... doesn't fit the pattern above because "private" is an overloaded word

Makes sense. Thanks. Private here is as in private versus public companies.


No, 'private' here means that the transaction is between the lender and the borrower without a public rating agency involved (Moody's, etc...). This used to be for niche things like a data center, where a rating agency might have trouble figuring out a reasonable rating. The data center company would instead go to somebody like Apollo who could do custom analysis on the risk.

But now those private loans are being syndicated to affluent investors who probably don't understand that while some of this debt is solid, a lot of it is not. And without a rating agency involved, nobody knows how much risk is in there.


In other words, "private credit" is private the way "private equity" is private, not how "private insurance" is private.

> and, I believe, European

Yes.

It surprises me that most people would read "private credit" to mean "retail credit" by default, but I also come to this loaded with jargon so I guess would defer to others on this. But to be clear, the title is not misleading to anyone who has any familiarity with the financial markets.


Outside of finance, people associate "private" with "individual"

With the caveat that banks can originate private credit as long as it is separate from their reserve system credit (and consequently does not increase the money supply when originated)

That's not the likely definition most will reach for here automatically (especially amidst the constant financial blackpilling).

I think you’re mistaken. We’ve been in a private credit bubble for a couple years at least; it’s in the finance/economic news every week and I’ve even started to hear regular NPR doing primers on it for normies. The term for “retail credit” is consumer credit or consumer debt. We don’t call it retail debt because the retailer is not actually a counterparty.

Out of curiosity where do you primarily get your news?


> not the likely definition most will reach for here

A lot of the datacenter buildout has been financed with private credit [1].

> financial blackpilling

?

[1] https://www.bloomberg.com/news/articles/2026-02-02/the-3-tri...


"Blackpilling" is apparently an incel term for fatalism/nihilism. Sounds like they're trying to read financial news through that lens.

> "Blackpilling" is apparently an incel term for fatalism/nihilism

Any idea as to the etymology? What was the black pill? Is it a Matrix reference?

Meta: why are incel neologisms so catchy?


I think (but I don't move in such circles) that originally there was "redpilled" to refer to people playing "The Game" (pickup artists). Original reference is to The Matrix, of course.

what on earth is "financial blackpilling"?

That's exactly where my mind went as soon as I read the title. HN rules say to "use the original title, unless it is misleading". I think the original title meets the misleading bar but I can't speak for other readers.

"Private credit" is a finance term of art. It could be misleading if you don't have context for the correct definition, but that's true of many posts on this site.

We just need to socialism harder.

why was finance post even allowed here?

it's not programming and it's not tech


it is correct, though.

someone not knowing the definition != misleading title


FWIW when I read "private credit" I think of private issuers, not retail.

Private as in private (i.e. non-public) corporation, not as in individual/retail/natural person borrowers.

That’s not what it means though. It’s done through a partnership. Or not, if we count Business Development Companies as “private credit” - but then they are not usually private corporations either.

Has the title been changed already? It currently says 'private credit', I don't see how that misleadingly sounds like 'retail credit'?

Thanks, I completely misread it, thinking that it was about retail credit. facepalm. Time for coffee.

This just reminded me of a friend who explicitly requested to be placed inside of a glass cube and then blown up, his body becoming an instant art project/memorial.

I can't get over how cool the UI is for playing back the examples. The more complex ones are wild to watch loop.


I'm here for it in the short-term. As the market continues to saturate, most of the people building this stuff will flame out. Eventually, I suspect we hit a tipping point where the ROI is too low (not enough real human engagement, just other bots) and the flood dials back.
