A sad thing happened over the weekend. It’s not the end of the world, but definitely a question-provoking event. And I don’t know the answer. Maybe you will.
A talented artist on Twitter, with whom I’ve been in friendly contact for a couple of years — and bought two pieces from — came into my DMs, accused me of spitting on her profession, and then blocked me. All this happened while I was eating take-out Chinese, so I had no chance to engage before it was already over.
The cause of this was me idly reposting one of the Twitter versions of my Secret History of Santa Cruz. She asked if the image used was AI, and I cheerfully said it was. Her take, as then revealed in my DMs, was that the prompt used to generate the image was feeding off her and other artists’ work, and pictures of children, and people’s medical records (not sure I understand that part), and therefore inexcusable.
It’s a point of view, and makes me realize I need to re-engage with the subject.
The 3 levels of AI
The first thing to be clear on about the term “AI” — currently one of the most annoyingly over-used pairs of letters in the Western world — is that it’s confusingly used to mean more than one thing — while never actually referring to “intelligence”.
There are basically three levels, it seems to me. The first is simply more sophisticated methods of information aggregation and data manipulation, wrapped in “humanized” form. The search engine Perplexity, for example (which I discovered recently and now use by default when researching) takes a question you type into its field, scurries off and finds links and references (note how I’ve unconsciously anthropomorphized it) and hands them to you with a brief summary and suggestions for further enquiries. It’s pretty cool (beating ChatGPT’s version hands down, by revealing and citing its sources), and can feel genuinely like having a not-super-smart but quite enthusiastic research assistant on hand. This is not “intelligence”, but more of a Google on steroids with added delivery finesse. I don’t see any real dangers in it beyond the fact it might prevent people developing or maintaining those abilities themselves.
The second level is the core of the market the tech bros are monetizing, and involves the user providing small but frequent needs — a snippet of marketing copy, an agenda or schedule, a summary of a pdf or piece of code or transcript of a Zoom — which a piece of software meets through leveraging its ability to interpret and manipulate information, gained through averaging out a vast amount of previous input. The issue here is that if you can’t write your own copy, then maybe you shouldn’t be a copywriter, so why not let another person do the job and earn the cash. It’s often a case of lazy or untalented or avaricious people reaping the benefits of other people’s work, but that’s hardly a new phenomenon, especially in the corporate world.
But this bleeds murkily into the third level, which is the truly generative arena of AIs that can produce large pieces of text or “artistic” images to order. And this is the level that creators like my erstwhile pal on Twitter tend to lose their shit over — and with good reason — because how is the software able to perform these functions? It’s been fed mind-numbingly huge amounts of text or images — all originally made by actual humans — and trained to replicate them.
The first problem with this is that none of the producers of the original materials are credited, or paid. The second is that if this becomes the norm, they never will be again. So let’s look at these issues.
Credit vs Compensation
Should writers or artists receive credit for what they have inspired? I believe so, but think also that it’s not as black and white a question as might first appear.
Example. Is there something “unique” about my prose? I hope so. But would that style exist without my having encountered and devoured writers from Kingsley and Martin Amis to Stephen King and Ray Bradbury to James Lee Burke and James Ellroy to Richard Ford and Umberto Eco to Enid Blyton and Philip K. Dick and Joan Didion, not to mention a bunch of non-fiction writers? No, it would not. I absorbed those writers’ works over many years and developed something that’s certainly not merely an amalgam of their work, but nonetheless bears their influence. And would my imagination be exactly as it is, without the further storytelling influence of countless movies and television shows, along with visual inspiration from artworks? Also no.
So am I just an AI? No, because I bring the humanity of my real-world experience and personality, and a myriad other factors like a fractional shift in my style of creativity when a deadline is approaching, none of which are brought to bear when a piece of software churns something out. But that’s a separate question, about value. I’m talking here about whether having absorbed the work of others and used it as the basis for the creation of new art is, in and of itself, problematic.
Yes, I bought those books and rented the movies, so the creators were financially recompensed for their contribution to “my” style — unlike the situation with generative AI large language models and image creators. But if I unthinkingly sketch out a character in the way King might, or try to pull off a concisely elegant sentence like an Amis, or unconsciously adopt the deceptively affectless tone of a Didion when describing a scene, am I “ripping them off”? I don’t think so. And I don’t list every author I’ve ever read in the acknowledgements of my books.
So while credit is undoubtedly an issue, it’s hard to know where the lines are.
Maybe it’s payment that we really come down to, then. The owners of the AI get paid, and the users may do too — but the producers of the raw material they’re manipulating… don’t. But I fear — for better or worse — that the pay-to-use model broke a while back, again at the hands of technology we know, love, and use every day.
Years back I remember ranting to my wise friend Adam Simon about how terrible it was that music copyright was being thrown out the window by people uploading songs to YouTube. There’s a lot of people out there who want (and even demand) that things should be free, and bang — here it all was. So why would anybody pay the creators ever again? Adam’s view (and he’s turned out to be right) was that I was missing a sea change in the way creativity would be monetized. Yes, a song uploaded to YouTube could be a copyright infringement, and the creator wouldn’t get paid per listen. It would however become a form of marketing for the artist’s brand, with positive repercussions for their celebrity, along with merch and concert sales.
Visibility has become the main engine of nascent creative success, with half the new stars in the pantheon starting out on YouTube or TikTok. Ask any teen how they think they’re going to become famous and it sure as hell won’t involve slogging heavy amps around sketchy bars to play to a handful of drunks. It’s going viral on Insta.
Things don’t work like they used to any more.
The problem with this comparison, of course, is that we who provide the raw material for generative AIs don’t receive any of these perks. I know — having seen the leaked data dump of whose works were scraped — that most of my novels and many of my shorts went into the soup. But if someone uses an AI to produce a horror story or thriller novel (or even a chunk of ad copy or meeting summary, because they all come out of the same stew) I receive zero knock-on benefits.
Pragmatism
So what do I, as a creator, do about this?
Yes, I could declare or join a jihad on all generative AI. What will that achieve? Will all the happy new billionaires think “Whoops, that grumpy Brit writer doesn’t like it, we’d better stop”? LOL no. GRR freakin’ Martin and a bunch of other huge names tried that approach, and nobody cares. There’s too much money in it and this genie is already out of the bottle. Sure, there’s maybe a world in which the tech bros get pressurized into some form of royalty system under which people whose work got fed into the hopper are pro rata recompensed for how often their words (or images) are influential in a particular piece of paid-for output, but that sounds like a lot of hard math which it’s not in the AI industry’s interest to do, so they won’t.
And the sad fact is there wouldn’t be a groundswell of popular support for such a system. Lots of consumers think all creators are paid too much anyway, for doing something that anybody would do if they only had the time. You should see the abuse authors get any time they put up their hand and meekly ask if people would please stop uploading their novels to file-sharing sites. Everything should be free, man — anybody who says otherwise is just an entitled bourgeois elite.
Consumers want AI, even if they’re not yet aware of what embracing it means — because it’s not as obvious what they lose in the process. The issues don’t just affect creatives or coders. AI is going to bring existential challenges to all of us.
There’s something of an art to using it, for a start, along with the frustrations of not actually owning the ability yourself. Not merely through the crafting of the prompts, but the fact, for example, that versions 5.0, 5.1, 5.2 and 6.0 of Midjourney (which are all available at once) produce quite different results, with higher numbers not necessarily being “better” for your purpose. It’s a moving target, too: effective practice for prompting changes with each upgrade, and the results from even one particular version can change overnight as they tweak the AI — what the developers regard as an improvement may not strike you as one, but once they’ve done it, there’s no going back. This reminds you that you are buying this ability, not acquiring real skills. If you stop being able to pay the monthly nut, or the AI changes in a way you don’t like, or the system gets bought out and dumped… suddenly you can’t paint any more.
The thing about using AI is it means you will not learn. Every time you use it, you’re declining a challenge to do it yourself. You won’t learn to write, or paint, or read French. Yes, it may be a convenient quick fix but it means you as an individual won’t grow or improve. Maybe that doesn’t matter to us. But perhaps it should. Otherwise we’re all going to end up living in a world where all we do is press buttons, and software we don’t understand (but have to pay for) will do everything else.
And so?
Years ago my family — along with my visiting father — spent the night in Monterey. The next morning we were driving down Cannery Row past a bunch of beer-bellied middle-aged motorcycle warriors, who’d clogged the street with their parked hogs. One of them took exception to how close we came to his beloved Harley, yanked open my car door, and vocalized his ire to me in a vigorous fashion. I extricated us from the situation and we drove on. Both my twelve-year-old son and septuagenarian father seemed to believe that, as I had been in the right, I should have escalated the confrontation instead. I felt otherwise, and took the hit of looking like a pussy.
I don’t see much good coming of me getting out of the car and trying to punch the AI industry in the face either, not least as it too has a crowd of buddies around it (all of the tech big guns, from Microsoft to Google to Musk to Zuck, have their fingers deep in AI). I don’t think this is evidence of a lack of courage or fortitude. It’s realism. This is a situation where you can bang your head against the wall as long as you want, but it won’t break, and you’ll just wind up with a headache.
I don’t create because I think I’m some great artist who should be celebrated down the ages. I do it because I enjoy it, am compelled to keep doing it, and it’s the only way I have of supporting my family. So I lean toward a viewpoint similar to one espoused by my other smart friend Julian Simpson: Okay, it’s here, it’s not going away, how do we adapt? What’s the new game? How do we win it, at least enough to keep putting food on the table? How do we continue to prove that human creativity is better than a software solution? What’s the balancing upside?
I’m never going to use AI to generate text — or my life truly would be pointless — but I’ll certainly use it to provide imagery for pitch decks or as writing prompts like the Santa Cruz history stuff. Maybe artists and photographers can strike back at me for doing that by using text AI to generate descriptions or copy for collections of their work, something they might not be able to do themselves. In both cases, it’s not like money is being stolen from people’s pockets — because neither of us could have afforded commissioning this “creative” work in the first place. I use Midjourney all the time to produce images for inspiration or fun (everything on this page is AI-generated, of course) — how guilty am I supposed to feel about that? I’ve also built a dedicated ChatGPT bot to try to improve my conversational French. There are no French tutors in Santa Cruz (that I’m aware of), so nobody’s losing out. Are they?
Just because I and other creators are taking a loss through this technology, am I supposed to turn my back on it, and reality, and the future? I don’t see how that serves me, or them. Am I happy about all this? No. But I’m not sure I see any other way forward that preserves my sanity and keeps me non-furious enough to keep creating.
What do you think?
As someone who has a career based on technology and education, I see the fear that AI use brings. Can a student who has used AI to generate an essay really understand the subject? That kind of depends. AI will only give so much; the prompts require some understanding, and the essay, as a starting point, will need further work to make it fit the brief. Questioning the student if you are not sure will clarify whether they really understand, but it is a sea change in how educators work and we need to adapt to it.
Loved this. As always, I agree with me.
Weirdly, I was going to write the exact same piece this morning (minus the angry artist, and the "I've built a HAL-9000 to teach me French" bits). I too am bumping up against the idea that AI scraping a gajillion pictures and novels is not really very different to how I arrived at what we might laughingly call my "style". And I don't need anyone presenting me with an invoice, thank you. Even when the world greets its first AI artist, or novelist, or screenwriter, that entity won't necessarily be better than anyone else, or have a more interesting viewpoint or style.
I'm not sure I agree with you on the learning side of things - since you introduced me to Perplexity, I have gone down some fascinating research rabbit holes in which I have certainly learned things, in much the same way I would have done using Google/Wikipedia (and I don't think that Googling is a skill anyway). It's true that I am not navigating a complex library or historical archive IRL, but I wasn't doing that before.