Thoughts on AI Art

DALL-E 2 became publicly available this September, so now you can sign up for an account and start asking robots to draw you a picture. I started to sign up, but they require a phone number right off the bat, so I declined. I don’t want robots calling me on the weekend to sell me robot subscriptions.

The fun and quirky podcast Cortex recently had a long episode where they talked about the alarmingly fast progress of these AI platforms as well as the broader implications. One of the hosts, Grey, made the very good point that while image generation is one thing, the natural language interpretation is another. If the robots can interpret natural language and use that interpretation to generate human-understandable imagery, well then what else can it do? (It could be fed your company’s employee handbook and become an HRBot, for example.)

I work in technology, but nowhere near artificial intelligence or machine learning. I’m a web guy. I make web things. My understanding of machine learning is pretty rudimentary. I think I know enough to be appropriately skeptical of wild claims but also enough to be generally alarmed.

There is a Moore’s Law-like inevitability to AI getting smarter, better, faster, and stronger over time. So whenever an AI skeptic says something like “well sure it can make a picture, but it will never do video” my first question is “what’s stopping it?”

I have a number of Big Questions floating in the air around AI art and I thought it would be interesting to interrogate them here.

Will illustrators, photographers, and designers lose their jobs?

Yes.

In the professional world of the graphic arts there are sort of two channels of work: conceptual and production. Production is what it sounds like. “We need this ad reformatted to 12 different sizes for use on the internet and social media.” A designer uses their artistic eye and technical skill to reformat an ad for all those different sizes so that it fits the space, but “feels” the same. That’s a job for AI.

The other branch - “conceptual” (or creative, or whatever) - that’s when someone is coming up with the concept or the big creative overarching idea.

I believe this world of work will shrink. There will be fewer creatives doing this kind of work because there will be less call for it.

Only premium brands will pay for expensive original creative, and everyone else will use the LogoBot 4000 to generate their brand identity. The LogoBot will, of course, steal from the premium brands just like real creatives do today.

What about UX designers? They should be worried too. Their replacement will probably come even sooner. Most user experience design serves a pretty well understood purpose. Think “sign up or login” or “click here to get started”. Well-used patterns. Revolutions in that space are rare. Think back: what was the last big user interface revolution you can remember? Pinch to zoom? Pull to refresh? Those innovations date back to 2007 or so.

Most UX is about arranging well known types of elements (inputs, buttons, text, pictures, sliders, etc) into meaningful hierarchies onto a screen. If an AI can make you a picture of Christopher Walken wearing a Santa Costume in a helicopter then it can certainly move some text boxes around based upon interpreted hierarchies.

Additionally, since UX designs are implemented with pretty standardized code libraries, it’s a short hop to just generating the user interface code itself. Sorry, UX developers.

Is AI art unethical?

Yes.

DALL-E was trained on hundreds of millions of labeled images. Did OpenAI get permission from the authors or license those images? Of course not. The training data was pilfered from the internet without permission. This is copyright infringement on a massive scale, though that claim may not withstand a legal challenge. OpenAI could probably argue in court that the resultant images are derivative works and therefore copyright does not apply.

I would argue that since OpenAI is literally using the pixel data itself to create neural networks, then it’s not derivative at all. The algorithm is quite literally made from others’ works. But I am not a lawyer.

Recently Shutterstock announced an arrangement with OpenAI to integrate DALL-E with their stock photo service, promising to pay photographers whose works are used by DALL-E to generate images. But before all of this, Shutterstock sold stock photography to OpenAI to help create DALL-E in the first place. It is unclear whether photographers on the platform gave consent for this, or whether they signed away their consent when hosting their work on Shutterstock’s platform.

Basically, photographers selling stock photos to Shutterstock had their work handed over to OpenAI to create a tool designed to put those same photographers out of business.

This is a quagmire.

Is AI Art actually art?

No. Well, sometimes.

I guess first, you have to define art. The philosopher Arthur Danto defines art as embedded meaning. What does that mean?

Are you familiar with Andy Warhol’s Brillo boxes? Warhol created stacks and stacks of simulated Brillo boxes and put them in a gallery. He recreated the boxes in plywood with custom silk-screens to reproduce the commercial printing. At first glance you’d be forgiven for thinking “what’s with all the Brillo?”

Danto muses on these boxes as embedding ideas about commerce, mass consumption, capitalism, and pop culture into a representation of a Brillo box. An idea embedded in plywood and paint.

I like this definition. It skirts past problems of media (paint or pixels? who cares?) and gets to the heart of what art does for us. It contains meaning.

So does AI art embed any meaning? I would argue that the individual images generated by a tool like DALL-E are not art, nor do they contain meaning. They are entirely derivative. The prompt provided to DALL-E doesn’t “inspire” anything within the algorithm. The text prompt is input to a 100-dimensional Galton board meat grinder for images and text. What comes out is the most statistically likely output for that input.

[Image: A simple AI]
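To make the Galton board analogy concrete, here’s a toy simulation (this is just an illustration of the analogy, not anything resembling DALL-E’s actual architecture): each ball bounces randomly left or right at every peg, yet the aggregate result is entirely predictable — the balls pile up in the statistically likely middle bins. Individual outputs look varied, but the system as a whole just reproduces a distribution.

```python
import random
from collections import Counter

def galton_board(rows: int, balls: int, seed: int = 42) -> Counter:
    """Drop `balls` through `rows` of pegs. Each peg deflects the ball
    left or right with equal probability; the final bin index is the
    number of rightward bounces."""
    rng = random.Random(seed)
    bins = Counter()
    for _ in range(balls):
        position = sum(rng.random() < 0.5 for _ in range(rows))
        bins[position] += 1
    return bins

counts = galton_board(rows=10, balls=10_000)
# Every individual path is random, but the aggregate clusters
# around the middle bin (rows / 2) — the statistically likely output.
print(counts.most_common(3))
```

Swap “balls” for text prompts and “bins” for images and you have the gist of the author’s point: no single drop is meaningful, and the machine as a whole only ever lands where the statistics say it will.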

AI image generation could certainly be used by an artist to make art. I imagine an artist training an AI on their own work, or perhaps tagging images with concepts like moods or political implications, then forming an automated but pre-curated narrative.

But DALL-E and other tools like it are robotic remixers of existing creative work. They can only ever be derivative. At least until they learn to think.