With this logic, any output of any pic gen AI is abuse… I mean, we can be pretty much 100% sure that there is CP in the training data (it would be a very big surprise if not), and every output is a result of all the training data, as far as I understand the statistical behaviour of photo gen AI.
Yes?
We could be sure of it if AI curated its inputs, which really isn’t too much to ask.
Well, AI is by design not able to curate its own training data, but the companies training the models would in theory be able to. It is just not feasible to fully sanitise such a huge stack of data.