There’s been this weird idea lately, even among people who used to recognize that copyright only empowers the largest gatekeepers, that in the AI world we have to magically flip the script on copyright.
Making only big companies able to “rip off your work” (not an accurate representation, but whatever) is not the solution you think it is.
The only solution is to require that all models trained on public data are not covered by copyright by default. Any output from those models should also be in the commons by default. The solution is to avoid copyright cartels, not strengthen them.
IMO, we need to ask: What benefits the people? What is in the public interest?
That should be the only thing of importance. That’s probably controversial. Some will call it socialism. It is pretty much how the US Constitution sees it, though.
Maybe you agree with this. But when you talk about “models trained on public data” you are basically thinking in terms of property rights, and not in terms of the public benefit.