Hacker News

It seems that a lot of people would rather accept a relatively high risk of unfair judgement from a human than accept any nonzero risk of unfair judgement from a computer, even if the risk is smaller with the computer.



> even if the risk is smaller with the computer.

How do we even begin to establish that? This isn't a simple "more accidents" or "fewer accidents" question; it's about the vague notion of "justice," which varies from person to person, let alone case to case.


But who controls the computer? It can’t be the government, because the government will sometimes be a litigant before the computer. It can’t be a software company, because that company may have its own agenda (and could itself be called to litigate before the computer - although maybe Judge Claude could let Judge Grok take over if Anthropic gets sued). And it can’t be nobody - does it own all its own hardware? If that hardware breaks down, who fixes it? In this paper, the researchers are trying to be as objective as possible in the search for truth. Who do you trust to do that when handed real power?

To be clear, federal judges do have their paychecks signed by the federal government, but they are lifetime appointees and their pay can never be withheld or reduced. You would need to design an equivalent system of independence.


It's not the paychecks that influence federal judges; these days it's more of a quid pro quo for getting the position in the first place. Theoretically they are under no obligation, but the bias is built in.

The problem with an AI is similar: what built-in biases does it have? Even if it were simply trained on the entire legal history, that would bias it toward historical norms.


I think it is usually the opposite - presidents nominate judges they think will agree with them. There’s really nothing a president can do once the judge is sworn in, and we have seen some federal judges take pretty drastic swings in their judicial philosophy over the course of their careers. There’s no reason for the judge to hold up their end of the quid-pro-quo. To the extent they do so, it’s because they were inclined to do so in the first place.

You just repeated what I said -- how is that the opposite?

“Fair” is a complex moral question that LLMs are not qualified to answer, since they have no morals or empathy, and they aren’t answering it here.

Instead they are being “consistent” and the humans are not. Consistency has no moral component, and LLMs are at least theoretically well suited to being consistent (model temperature choices aside).
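On the temperature aside: a minimal sketch (plain Python, not any particular model's API) of why decoding at temperature 0 is deterministic while nonzero temperature introduces run-to-run variation. The logits and the `sample` helper are illustrative assumptions, not from the paper:

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from logits; temperature 0 means greedy argmax."""
    if temperature == 0:
        # Greedy decoding: always the highest-logit token, no randomness.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical token scores

# Temperature 0 gives the same answer on every run.
greedy = {sample(logits, 0, random.Random(s)) for s in range(10)}
print(greedy)  # {0}

# Nonzero temperature yields different answers across runs (seeds).
sampled = {sample(logits, 1.5, random.Random(s)) for s in range(10)}
print(len(sampled) > 1)
```

So "consistent" only holds under greedy decoding; any sampled deployment reintroduces variance between otherwise identical cases.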

Fairness and consistency are two different things, and you definitely want your justice system to target fairness above consistency.


I'd rather get judged by a human than by the financial interests of Sam Altman or whichever corporate borg gets the government contract for offering justice services.

> nonzero risk of unfair judgement from a computer

I feel like this is a really poor take on what justice really is. The law itself can be unjust. Empowering a seemingly “unbiased” machine with biased data, or even just assuming that justice can be obtained from a “justice machine,” is deeply flawed.

Whether you like it or not, the law is about making a persuasive argument and is inherently subject to our biases. It’s a human abstraction that allows us to have some structure and rules in how we go about things. It’s not something that is inherently fair or just.

Also, I find the entire premise of this study ludicrous. The common law of the US is based on case law. The statement in the abstract that “Consistent with our prior work, we find that the LLM adheres to the legally correct outcome significantly more often than human judges. In fact, the LLM makes no errors at all,” is pretentious applesauce. It is offensive that this argument is being made seriously.

Multiple US legal doctrines that are now accepted, and that form the basis of how the Constitution is interpreted, were made up out of thin air — and the LLMs are now consuming them to form the basis of their decisions.




