Not true. Back then, limitations on "Spannungsfall" and "Verteidigungsfall" were in place, which were removed last year. The real news, though, is that the public (media, opposition parties) didn't notice until a couple of days ago.
Citation needed. I've heard this quite often, but so far, I haven't seen proof of the stated causality.
PS: This doesn't mean that better public transportation couldn't deliver more bang for the buck than the n-th additional car lane. But never ever have I heard anybody say that they chose to buy a car, or to use an existing car more often, because an additional lane had been built.
You've never heard anyone choose to take side streets instead of the highway because of traffic jams? No one ever goes out of their way to avoid heavily trafficked areas?
I don't understand the point you're trying to make. When people at t0 take detours because of traffic jams on the direct route, and then at t1 there are fewer traffic jams on the direct route due to additional lanes, so they decide to take the direct route, then total traffic is down, because they no longer take a detour. Even if they are still part of a newly induced traffic jam.
I don't quite understand how CRDTs should help with merges. The difficult thing about merges is not that two changes touch the same part of the code; the difficult thing is that two changes can touch different parts of the code and still break each other - right?
Eh. It's a matter of visible pain vs invisible pain.
Developers are quite familiar with Merge Conflicts and the confusing UI that git (and SVN before it, in my experience) gives you about them. The "ours vs theirs" nomenclature, which doesn't help, etc. This is something that VCSs can improve on, hence this post.
Vs the scenario you're describing (what I call Logical Conflicts), where two changes touch different parts of the code (so it doesn't surface as a Merge Conflict) but still break each other. Like one change adding a function call in one file while another change alters the API in a different file.
These are painful in a different way, and not something that simple text-based version control (which is what all the big ones are) can even see.
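To make that concrete, here's a minimal sketch (with hypothetical function names): imagine branch A makes `greeting` a required parameter, while branch B, in a different file, adds a call site written against the old one-argument API. git merges both cleanly, since no lines overlap, but the merged result is broken:

```python
def greet(name, greeting):
    """Branch A's change: 'greeting' is now a required parameter."""
    return f"{greeting}, {name}!"

def call_site():
    """Branch B's change, written against the old one-argument API."""
    return greet("Ada")

# Simulate running the cleanly merged result:
try:
    call_site()
    print("merge OK")
except TypeError:
    print("logical conflict: no textual conflict, but the code is broken")
```

No text-based merge tool flags this; you only find out when the code runs (or when a type checker or test suite does).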
Thank you for the clarification. I agree that the current state of the art to show conflicts _in the same part of the code_ is not sufficient, so any improvement with regard to that is welcome. Still, I'm more looking for solutions with the Logical Conflicts.
"Teaching" in OPs context probably doesn't mean lecturing, but 1:1 sessions with Junior Researchers from Master's Thesis upwards to PhD Candidates and Postdocs.
I beg to differ. So far, AI output has not entitled the person creating the prompt to IP protections – but my objection is not directed at the "so far", it is directed at your omission of "the person creating the prompt": if an AI outputs copyrighted material from its training data, that material is still copyrighted. AI is not a magical copyright removal machine.
What this means in practice is that (currently) all output of an LLM is legally considered not copyrightable, insofar as it is an original work. If it happens to regurgitate an existing copyrighted work, though, is that infringement? I'm not sure we have legal precedent on that question yet.
The Thaler case here is something different than "AI-generated = uncopyrightable" though. Thaler was not trying to copyright work in the way humans who make work with tools normally copyright their work ("Copyright 2026 by Me"), he was specifically trying to give AI the copyright ("Copyright 2026 by My-AI-Tool"). The court rejected this because only humans can own copyright.
I believe there are other cases where AI-generated works were found uncopyrightable, but Thaler is not a good example of them.
There are several large settlements that suggest Anthropic/OAI didn't want legal precedent set. In general, if it's not outright regurgitated, it would be derivative.
The out of court settlements that avoid precedent don't mean anything in a broader legal context. Legally speaking, right now in the USA, output of LLMs is not copyrighted and cannot be copyrighted (without substantial transformation by a human).
I don't think this means the same thing as whether or not LLM output can infringe on someone else's copyright though (that does pose an interesting question -- can something non-copyrightable in general infringe on something copyrighted?).
I don't believe you need to do much to claim copyright over the output of an LLM.
The input prompt is under copyright; a simple modification to the source code will grant you copyright.
I'm afraid as of last week this is now as settled as it gets in US law: the output of LLMs is not per se copyrightable, though arrangements and modifications of it can be. It's like a producer who made a song entirely with public domain audio samples: he can't then demand the compulsory license when someone resamples that song.
They actually wouldn't, since they'd be sampling the new arrangement. They could reconstruct a new, similar-sounding arrangement based on the original samples, but it'd have to be different enough from that new arrangement so as not to be considered derivative of it.
That also applies to generative AI: pure output may not be copyrightable, but as soon as you do something beyond typing some words and pressing a button, like doing area-specific infills and paintovers, which involve direct and deliberate choices by a human, the copyrighted human-driven arrangement becomes so deeply intertwined with the generative work that it's effectively inseparable.
There's no technical need to do that, because someone who can always deliver electricity would be able to strike contracts with those who always need it, i.e. heavy industry, esp. aluminum and chemicals. The reason why downregulation was necessary in the 2000s and 2010s was regulation ("Einspeisevorrang", the feed-in priority for renewables), not technology.
That only works if someone takes the power near the plant and all the infrastructure for it is already there. You don't build a (large) nuclear power plant just for these customers, though.
Generally, with a high amount of renewable but fluctuating supply, we have to get away from the base load model, towards a residual load model.
This is not true, since you still need to pay for capex and depreciation. The reason it appears to be free is not that its production doesn't cost anything, but that at times of a glut there's just no one willing to pay much for it. Please make some good-faith effort to acknowledge the difference between cost and price.
Now, about your question why people should buy "expensive" nuclear power: for the same reason people buy health insurance: volatility increases risk, and you're willing to pay an ongoing premium to reduce systemic risks. Over- and undersupply of electricity are risks for a lot of businesses, and many of them spend a lot of money on capex to avoid them, e.g. hospitals that have diesel generators. Generators are for a different failure mode (rare, longer-duration outages), but for the high-frequency, short interruptions and/or price spikes caused by unbalanced generation volatility, contracts with a nuclear power company are similar: the capex is just shifted to the power company, and the customer might pay a premium during those times when other sources would deliver energy "for free".
That said, this is not a black-and-white scenario. Of course we can benefit a lot from solar and wind. I’m not very positive about large scale batteries and lean more towards having flexible consumers, e.g. H2 production for the chemical industry. But right now, we don’t have the choice of nuclear vs. renewables, it’s (renewables + nuclear) vs. (renewables + turbines run with Russian gas or LNG from the US and Qatar). My choice here is clear, and it should not be muddied by the Russian propaganda of nuclear power clogging our electricity grids.
What about the code that wasn't even GPL, but "all rights reserved", i.e., without any license? That's even stronger than GPL and based on your reasoning, this would mean that any code created by an LLM is not licensed to be used for anything.
You've got it wrong. Copyright excludes you from using something; a license allows you to use something. So "no license" does NOT mean "free to use", but "not allowed to use".
If you do not hold copyright, you cannot prevent someone from copying a thing. If you cannot prevent someone from copying the thing, then "licensing" it is somewhere between pretty weird and pretty stupid, no?
No, because OP implied that the AI-generated content inherits the license: in their view, if the input was GPL, the output must be GPL. So if the input wasn't licensed at all, the output cannot be licensed. The inheritance of "no license" is not "no copyright", but "no license". The question of whether copyright applies hasn't been definitively answered yet, but just because the person PROMPTING the AI likely doesn't gain copyright doesn't mean that an output derived 1:1 from copyrighted material loses its copyrighted status. That would be truly ridiculous.
As you note, this is a legal question that has not yet been answered. I think that speculating on the outcome in the current legal climate is fruitless.
I did not refer to privacy rights. If you post a photo of yourself online, you're giving up a tiny part of your privacy rights. So my question still stands: would running photos that you have taken of yourself through a diffusion model strip you of your copyright in those photos?
So we have two positions here:
1) LLMs are trained on non-licensed information, so anything coming out of them comes without a license, so no one should be allowed to use it.
2) LLMs are trained on public information, so everything coming out of them must be public domain.
These two positions are mutually exclusive, and I feel that neither is entirely false, but neither is fully correct either.
Is this true once you use a fancy filter in the photo app of your choice? Is this true once your phone applies such a filter without asking you? Should this be true for Theseus' Ship?