Hacker News

The comparison between an ML model and a human comes up often, but it is not very useful.

The problem in most of these cases is that a powerful entity profits off of other people's work without their consent and gives nothing to the exploited in return. Sure, individual humans learn from copyrighted works and reincorporate them into something slightly new all the time, and they too may profit without giving anything back to those who came before.

The problem here is the scale, and the power that enables that scale. This is industrial-scale mining of non-consenting humans, exploiting their life's work in many cases.



A powerful entity such as...an open source company like Stability AI, the maker of Stable Diffusion?


They do have $100 million in VC funding. They obviously intend to profit somehow.


Such as Microsoft with Copilot.


The very nature of AI is that anyone can train their own model, given enough data and compute. There are already open alternatives to Copilot, such as FauxPilot, and GPT-Neo for GPT-3, so I have never understood the claim that "only the rich and powerful will have AI." Sure, some will, but just as with programming, the effort will eventually be distributed across open source versions.


As an OSS contributor, you can nicely ask the people building some OSS equivalent of Copilot not to use your work as training data, and they would comply, because that is the culture. Good luck doing the same with a megacorp.



