roywiggins's comments | Hacker News

Right, but they didn't actually test that, did they?

Maybe that's true, but they didn't actually show that that's true, since they didn't try scaffolding smaller models in a similar way at all.


I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

Similarly, if a frontier model kills merely 99 people, they aren't covered by this. So go big or go home I guess?


> because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all

Oof. If you're an Illinois resident, please call your elected representatives and at least ensure they understand this loophole exists. In all likelihood, nobody other than OpenAI's lobbyists has noticed it.


> unless you specifically want to make it illegal to not be OpenAI [...]

If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition while keeping any potentially profit-risking regulation at bay has been a clear throughline in OpenAI's lobbying efforts.


    > "Frontier model" means an artificial intelligence model that:

    > (1) is trained using greater than 10^26 computational operations, such as integer or floating-point operations; or

    > (2) has a compute cost that exceeds $100,000,000
Such a strange regulation. Usually large thresholds like this are designed so that burdensome regulation only applies to the very big players (if you're spending $100 million on training, you can afford a dedicated team to handle compliance).

But here it seems to be an anti-competitive move against market entrants who haven't made it into the big leagues yet...

Sounds like the saga of certain players pushing for Biden's EO 14110, but this time at the state level?
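The either/or structure of the quoted definition is what creates the loophole discussed upthread: a model is "frontier" only if it crosses one of the two thresholds, so anything just under both escapes the definition entirely. A minimal sketch (function and variable names are illustrative, not from the bill):

```python
# Hypothetical encoding of the bill's "frontier model" thresholds as quoted above.
FLOP_THRESHOLD = 10**26          # "greater than 10^26 computational operations"
COST_THRESHOLD = 100_000_000     # "compute cost that exceeds $100,000,000"

def is_frontier_model(training_ops: float, compute_cost_usd: float) -> bool:
    """A model is 'frontier' if it exceeds EITHER threshold."""
    return training_ops > FLOP_THRESHOLD or compute_cost_usd > COST_THRESHOLD

# A model just under both thresholds is not covered at all:
print(is_frontier_model(9e25, 99_000_000))   # False: outside the definition
print(is_frontier_model(2e26, 50_000_000))   # True: covered by the flop clause
```

Note that both clauses are strict inequalities ("greater than", "exceeds"), so a model at exactly 10^26 operations and exactly $100M would, read literally, also fall outside the definition.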


ye olde camera filter

It seems pretty clear when you follow the link?

https://juxt.github.io/allium/


The only way these sorts of contracts can be enforced is if private parties have recourse to government power (the civil courts) to enforce them.

Governments could just not help them do that.


right. that's not what people are doing here though, at all

It makes you wonder how smart their ancestors, the dinosaurs, were.

Yes, I've experienced the sense that there's a person on the "other end" even when I have been perfectly aware that it's a bag of matrices. Brains just have people-detectors that operate below conscious awareness. We've been anthropomorphizing stuff as impersonal as the ocean for as long as there have been people, probably.


Exactly. You only have to look at animistic religions that are all about anthropomorphizing stuff.

I once found myself reporting back the results of something to Claude the way I would to a human (“hey thanks your tip worked”) before catching myself and realizing that it doesn’t care and won’t learn from it (which would otherwise be a good reason to say it).

Hah, ya exactly. I say sorry to objects if I drop them or accidentally whack them, and feel remorse. What hope do I have with an LLM who talks to me?

(jk f those clankers ofc)

