It started, I think, with Deep Dream[a], an AI tool that would infuse images with a psychedelic amount of pareidolia. Then came Prisma[b] and other neural style transfer[c] tools that let you take a photo and restyle it to look like some other image - for example, "this photo in the style of van Gogh".
Lately there has been a flood of art-generating AI built on the same idea: give the network a textual description of an image, and it will generate an image matching that description. For example, give it
"a red apple on a table" and it will produce just that image - or why not have a look at
"a cute corgi [that] lives in a house made out of sushi"[d].
The floodgates were open. The web was scraped for millions of images, which were used to train the neural nets. The results can be seen at DALL-E Outpainting[e], Midjourney[f], and Stable Diffusion[g]. It was now possible for anyone to create images just by prompting the generator - and what's more, you could add "in the style of X" to get output that looked as if that artist had painted it. The artists whose paintings had been used for training were, unsurprisingly, not all that excited about their work being used to train something that would compete with them commercially, and so at least one lawsuit is now working its way through the legal system.[h]
My belief is twofold:
AI-generated art is inevitable, lawsuit or not, but
it will still be bad art.
For the first assertion: there is a technique called transfer learning[i], where you take a neural net trained on one dataset and re-train it on another - think "learning to drive a go-kart when you already know how to drive a car". Now imagine that an AI art company commissions artists to create art for the transfer learning training set. The company would then hold 100% of the IP rights to those paintings, so even a court ruling against training on scraped images wouldn't stop the technology.
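The core idea of transfer learning can be sketched in a few lines. The toy example below (pure NumPy, entirely made up for illustration - it has nothing to do with any real art model) treats a random frozen layer as if it were "pretrained" features, and retrains only the output layer on a small new task:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Pretend these weights came from pretraining on a big dataset
# ("knowing how to drive a car"). They stay frozen below.
W_pretrained = rng.normal(size=(4, 8))

# A new, smaller task ("learning to drive a go-kart"): toy inputs
# and labels, here just "is the first input positive?".
X_new = rng.normal(size=(64, 4))
y_new = (X_new[:, 0] > 0).astype(float)

# Reuse the frozen feature extractor; only w_out gets trained.
features = relu(X_new @ W_pretrained)
w_out = np.zeros(8)

# A few steps of gradient descent on the output layer alone.
for _ in range(500):
    pred = features @ w_out
    grad = features.T @ (pred - y_new) / len(y_new)
    w_out -= 0.05 * grad

mse = np.mean((features @ w_out - y_new) ** 2)
print(f"fit on the new task with frozen features: mse={mse:.3f}")
```

The point is only the shape of the procedure: the expensive part (the pretrained weights) is reused as-is, and the cheap part (the last layer) is fit to whatever new data you happen to own the rights to.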
So... game over, artists? I don't think so.
Art isn't just putting color on a canvas. It's also about context and story, and AI art falls short here - much of it is bad, and not bad in a technical sense, but bad in a lazy, derivative, bland, mass-produced way. Let's take Simon Stålenhag[j] as an example. Replicating the look of his art is quite simple: put some kids dressed like 80's Swedish kids in the foreground, place some gigantic futuristic object on the horizon, and render it all in subdued, cold colors.
Glanced at from a distance, it's Stålenhag. Up close it isn't. The greatness of Simon's art isn't the technique or the subject matter, but the skillful integration of both into a larger story universe that borrows enough from reality to be a believable (and uncanny) alternate world. Take Den ryska nallen (The Russian Teddy Bear)[m]. It's nice art. But what sets it apart is that I know that exact location, and I know enough of Swedish social engineering to find the story in which it takes place believable and utterly frightening. Without that story, and without the meticulous attention to detail, the art would be just like the examples above - lazy, derivative, paint-by-numbers art.
To conclude then: I think AI (as the term is understood today when talking about art and specifically image processing) will end up being a way for good artists to produce good art faster. It'll be a tool for creativity, not a replacement.
I also think that the AI model resulting from the training is a derivative work of the training data, and that a clear case can be made that it deprives the copyright owners of income and undermines a new or potential market for the copyrighted works - but the courts will figure all that out.