Hacker News | foobarbecue's comments


What’s the context?

How would / does Moltbot try to prevent humans from posting? Is there an "I AM a bot" captcha system?

Maybe, but the most compelling scifi to me personally is the generation ship stuff, like Ring by Stephen Baxter.

And then there’s Cloud Cuckoo Land. (Anthony Doerr)

*rationale

Thanks, noticed it after the edit option disappeared

I don't even understand what discipline we're talking about here. Can someone provide some background please?

The thing that lets LLMs select the next token is probabilistic. This proposes a deterministic procedure instead.

Problem is, we sometimes want LLMs to be probabilistic. We want to be able to try again if the first answer is deemed unsuccessful.
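Roughly, the contrast looks like this (toy logits and stdlib only; a sketch of sampling vs. greedy decoding, not any particular model's decoder):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Probabilistic: softmax over temperature-scaled logits, then draw.
    Rerunning with a different RNG state can pick a different token."""
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def greedy_token(logits):
    """Deterministic: always the argmax. Same logits, same token, every run."""
    return max(range(len(logits)), key=lambda i: logits[i])
```

The probabilistic path is exactly what gives you "try again": a second call can land on a different token, whereas the greedy path never will.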


Ah, LLMs. I should have guessed.

> Quenching is higher-frequency pressure application that amplifies contradictions and internal inconsistencies.

> At each step, stress increments are computed from measurable terms such as alignment and proximity to a verified substrate.

Well obviously it's ... uh, ...

It may not be, but the whole description reads as category error satire to me.


Not satire, though I get why the terminology looks odd. The language comes from materials science because the math is the same: deterministic state updates with hard thresholds. In most AI systems, exclusion relies on probabilistic sampling (temperature, top-k, nucleus), which means you can’t replay decisions exactly. This explores whether exclusion can be implemented as a deterministic state machine instead—same input, same output, verifiable by hash.

“Mechanical” is literal here: like a beam fracturing when stress exceeds a yield point (σ > σᵧ), candidates fracture when accumulated constraint pressure crosses a threshold. No randomness, no ranking. If that framing is wrong, the easiest way to test it is to run the code or the HF Space and see whether identical parameters actually do produce identical hashes.
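A minimal sketch of the fracture rule (all names and the toy stress measure are hypothetical, not the project's actual code):

```python
import hashlib

YIELD_THRESHOLD = 0.9  # hypothetical yield point, playing the role of sigma_y

def stress(candidate, constraints):
    """Toy stress measure: a fixed increment per violated constraint
    (each constraint is a predicate over the candidate)."""
    return sum(0.5 for check in constraints if not check(candidate))

def survives(candidate, constraints):
    """Deterministic fracture rule: a candidate is excluded once its
    accumulated stress crosses the threshold. No randomness, no ranking."""
    return stress(candidate, constraints) <= YIELD_THRESHOLD

def decision_hash(candidates, constraints):
    """Same candidates + same rule -> same verdict string -> same hash,
    which is what makes a run replayable and verifiable."""
    verdicts = "".join("P" if survives(c, constraints) else "X" for c in candidates)
    return hashlib.sha256(verdicts.encode()).hexdigest()
```

Running it twice with identical inputs necessarily produces identical hashes, which is the property the HF Space demo is meant to exhibit.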


What do you mean by "exclusion"?

Here “exclusion” just means a deterministic reject / abstain decision applied after a model has already produced candidates. Nothing is generated, ranked, or sampled here. Given a fixed set of candidate outputs and a fixed set of verified constraints, the mechanism decides which candidates are admissible and which are not, in a way that is replayable and binary. A candidate is either allowed to pass through unchanged, or it is excluded from consideration because it violates constraints beyond a fixed tolerance.

In practical terms: think of it as a circuit breaker, not a judge. The model speaks freely upstream; downstream, this mechanism checks whether each output remains within a bounded distance of verified facts under a fixed rule. If it crosses the threshold, it’s excluded. If none survive, the system abstains instead of guessing. The point isn’t semantic authority or “truth,” it’s that the decision process itself is deterministic, inspectable, and identical every time you run it with the same inputs.
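The circuit-breaker framing can be sketched in a few lines (the tolerance value and distance function are placeholders, not the real rule):

```python
TOLERANCE = 0.25  # hypothetical fixed tolerance

def breaker(candidates, distance_to_facts):
    """Circuit breaker, not a judge: a candidate trips the breaker (is
    excluded) when its distance from verified facts exceeds the fixed
    tolerance. An empty result means the system abstains downstream."""
    return [c for c in candidates if distance_to_facts(c) <= TOLERANCE]
```

The point is that `breaker` is a pure function: same candidates, same distance function, same survivors, inspectable on every run.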


You really, really need to be upfront in the first paragraph of your docs that you are talking about the inner workings of LLMs and other machine-learning systems

Failing that, at least mention it here


LLMs are probabilistic by nature. They’re great at producing fluent, creative, context-aware responses because they operate on likelihood rather than certainty. That’s their strength—but it’s also why they’re risky in production when correctness actually matters. What I’m building is not a replacement for an LLM, and it doesn’t change how the model works internally. It’s a deterministic gate that runs after the model and evaluates what it produces.

You can use it in two ways. As a verification layer, the LLM generates answers normally and this system checks each one against known facts or hard rules. Each candidate either passes or fails—no scoring, no “close enough.” As a governance layer, the same mechanism enforces safety, compliance, or consistency boundaries. The model can say anything upstream; this gate decides what is allowed to reach the user. Nothing is generated here, nothing inside the LLM is modified, and the same inputs always produce the same decision. For example, if the model outputs “Paris is the capital of France” and “London is the capital of France,” and the known fact is Paris, the first passes and the second is rejected—every time. If nothing matches, the system refuses to answer instead of guessing.
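The capital-of-France example as a toy gate (fact store and function names are made up for illustration; the real system's constraint checks are presumably richer than exact equality):

```python
KNOWN_FACTS = {"capital of France": "Paris"}  # hypothetical verified fact store

def gate(claims, facts):
    """Deterministic post-hoc filter: pass only candidates consistent with
    the known facts; return None (abstain) if nothing survives.
    No scoring, no sampling, no 'close enough'."""
    passed = [(subject, value) for subject, value in claims
              if facts.get(subject) == value]
    return passed or None

candidates = [("capital of France", "Paris"),
              ("capital of France", "London")]
```

Given those candidates, the Paris claim passes and the London claim is rejected on every run; with only the London claim, the gate abstains rather than guessing.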


You are going so deep with abstract terms that your text becomes a special shorthand you think is clear but is anything but clear.

Stop talking about “exclusion” and “pressure” etc and use direct words about what is happening in the model.

Otherwise, even your attempts at explaining what you have said need more explanation.

And as the sibling comment points out, start by stating what you are actually doing, in concrete terms, not "the math is the same so I assume you can guess how it applies if you happen to know the same math and the same models" terms. That is asking everyone else, most anyone, to read your mind, not your text.

There is a tremendous difference between connections you see that help you understand, vs. assuming others can somehow infer connections and knowledge they don’t already have. The difference between an explanation and incoherence.


Ok I swear I had a printer that would do some kind of internal cleaning-noise thing every time I plugged something else into a 120V outlet anywhere in the same apartment. I never really tried to figure it out.


Before I waste any time on this article, is the "0.01%" claim backed up with any evidence?


He shows an alleged screenshot of an email sent by the vendor. There is also a cool animation of what seems to be a chromosome gallery produced by a genetic algorithm of some sort, which took Claude a day to run.



> I'm building The Marketplace for Healthcare

Um. Is this Jeopardy? If so, "Who is Obama?"


It sounds like you may be confusing schizophrenia with multiple personality disorder / dissociative identity disorder. Easy to do, since they are often mixed up. https://www.medanta.org/patient-education-blog/myth-buster-p...


So the weird answer is... a better-model Lenovo. They vary from plastic disaster to metal or carbon-fiber dream machine.


Yeah, if you don't like the case quality of a T model Thinkpad, you are the problem ;) - fiber reinforced plastic is arguably a more suitable laptop case material than aluminum.

Lenovo's cheap laptops are as bad as anyone's.

