I disagree. Machines aren’t “learning”. You are anthropomorphising them. They are storing the original works, just in a very convoluted way that makes it hard to know which works were used when generating a new one.
I tend to see it as them having used “all the works” they were trained on.
For the sake of argument, assume I made an “AI” that meshes images together, but trained it on only two famous works of art. It would spit out a split screen: the left half of the first work next to the right half of the second. That would clearly be recognized as copying the originals, yet it would be a “new piece of art”, right?
What if we add more images? At some point the output would just be a jumbled mess, but it would still consist wholly of copies of original art. The copying would just be harder to demonstrate.
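To make the two-work hypothetical concrete, here is a minimal sketch of that trivial “generator”. The file names and the half-and-half rule are made up purely for illustration; the point is that the output is nothing but pieces of the training works.

```python
# Hypothetical two-work "AI" from the thought experiment: it "generates"
# new art by pasting the left half of one training image next to the
# right half of the other. File names below are invented for the example.
from PIL import Image

def mesh(path_a, path_b):
    a, b = Image.open(path_a), Image.open(path_b)
    b = b.resize(a.size)                               # align sizes
    w, h = a.size
    out = Image.new("RGB", (w, h))
    out.paste(a.crop((0, 0, w // 2, h)), (0, 0))       # left half of work A
    out.paste(b.crop((w // 2, 0, w, h)), (w // 2, 0))  # right half of work B
    return out

# mesh("mona_lisa.jpg", "starry_night.jpg").save("new_art.jpg")
```

With two inputs the copying is obvious; adding thousands more inputs only obscures which pixels came from where, it doesn’t change where they came from.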
Morally - not practically - is the sophistication of the AI in jumbling the images together really what should determine fair use?
Mimic, perhaps, or loosely inspired by, but neural nets in machine learning don’t work at all like real neural nets. They are just variables in a huge matrix multiplication.
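To illustrate what I mean by “just variables in a huge matrix multiplication”, here is a minimal sketch of a two-layer network’s forward pass. The layer sizes are arbitrary and the weights are random stand-ins for “learned” values:

```python
# A forward pass is nothing but matrices of stored numbers (weights)
# multiplied together, with a simple nonlinearity in between.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(784, 128))   # "learned" weights, layer 1
W2 = rng.normal(size=(128, 10))    # "learned" weights, layer 2

def forward(x):
    h = np.maximum(x @ W1, 0.0)    # matrix multiply + ReLU
    return h @ W2                  # another matrix multiply

print(forward(rng.normal(size=(1, 784))).shape)  # (1, 10)
```

There is no biological machinery in there, just stored numbers derived from the training data being multiplied and added.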
FYI, I do have a Master’s degree in Machine Learning.