New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD’s licensing goals) and cannot be committed to NetBSD.
Lots of stupid people asking “how would they know?”
That’s not the fucking point. The point is that if they catch you they can block future commits and review your past commits for poor quality code. They’re setting a quality standard, and establishing consequences for violating it.
If your AI-generated code isn’t setting off red flags, you’re probably fine. But if something stupid slips through and the maintainers believe it to be the result of generative AI, they will remove your code from the codebase and you from the project.
It’s like laws against weapons. If you have a concealed gun on your person and enter a public school, chances are that nobody will know and you’ll get away with it over and over again. But if anyone ever notices, you’re going to jail, you’re getting permanently trespassed from school grounds, and you’re probably not going to be allowed to own guns for a while.
And, it’s a message to everyone else quietly breaking the rules that they have something to lose if they don’t stop.
Okay, easy there, Chief. We were just trying to figure out how it worked. Sorry.
It was a fair question, but this is just going to turn out like universities failing or expelling people for alleged AI content in papers.
They can’t prove it. They try to use AI-detection tools to prove it, but those same tools will flag a thesis paper from a decade ago as AI generated. Pretty sure I saw a story about a professor who accused a student based on one of those tools, and then the professor’s own past paper failed the same tool.
Short of an admission of guilt, it’s a witch hunt.