32 Comments
Feb 13 · Liked by Michael Marshall Smith

As someone who has a career based on technology and education I see the fear that AI use brings. Can a student who has used AI to generate an essay really understand the subject? That kind of depends. AI will only give so much, the prompts require some understanding, but the essay as a start point will need further work to make it fit the brief. Questioning the student if you are not sure will clarify if they really understand, but it is a sea change in how educators work and we need to adapt to it.

Feb 13 · Liked by Michael Marshall Smith

Loved this. As always, I agree with me.

Weirdly, I was going to write the exact same piece this morning (minus the angry artist, and the "I've built a HAL-9000 to teach me French" bits). I too am bumping up against the idea that AI scraping a gajillion pictures and novels is not really very different to how I arrived at what we might laughingly call my "style". And I don't need anyone presenting me with an invoice, thank you. Even when the world greets its first AI artist, or novelist, or screenwriter, that entity won't necessarily be better than anyone else, or have a more interesting viewpoint or style.

I'm not sure I agree with you on the learning side of things - since you introduced me to Perplexity, I have gone down some fascinating research rabbit holes in which I have certainly learned things, in much the same way I would have done using Google/Wikipedia (and I don't think that Googling is a skill anyway). It's true that I am not navigating a complex library or historical archive IRL, but I wasn't doing that before.

Feb 13 · Liked by Michael Marshall Smith

“The thing about using AI is it means you will not learn.” I first noticed this with Google Maps: I used to remember routes and directions better when I was using an A-to-Z. Now I just let Google or Apple tell me when to turn.


This is really good, and really thought-provoking.

Feb 13 · Liked by Michael Marshall Smith

Michael, I'm sorry the artist took a hit-and-run approach, since you are always so willing to explore and ponder and discuss all sides of a contentious issue. The time and care you spend on this sort of thing has been a source of mild awe on my part for all the years I've known you on twitter. You honestly seem to enjoy the respectful give and take, while what I generally do these days is think how ancient I am and how precious time is, and then move on. Anyway. I'm sorry you were blindsided. And thank you for taking the time to explore and ponder and present multiple sides of a contentious issue in this piece right here. Very well done and thought-provoking as always.

Feb 13 · Liked by Michael Marshall Smith

When this was New Stuff, I saw pals who were Real Artists playing with it and having lots of fun seeing what came out of their prompts. The results were everything from hilarious to terrifying. I still look for extra fingers in any "artwork". I have never used Midjourney, I am intimidated by all digital art, and I am very sad that art is being cheapened by the idea that it can just be produced with the touch of a button, like a microwave burrito.


Oh, how I love waking up to a good AI rant! Thank you, Michael. I'd like to back up a bit to the "credit vs compensation" part and focus on copyright law. For better or worse, I actually went to The Eye before Shawn Presser took it down, and I saw (even downloaded one of the tar files) the many thousands of pirated books used to develop the technology. I want to focus on the piracy because for me, that's the thing that makes me physically ill every time someone at work or on LinkedIn for my profession mentions generative AI, knowing that the copyrights of tens of thousands of writers for upwards of 183K books were violated. As I stared in disbelief at the massive amount of theft in The Pile (as it's been called) aka the Books3 cache (as the Meta research paper referred to it and it was labelled on The Eye), I felt inexpressible levels of rage. To Serve Man, indeed. We are the meat.

Here, I think, is the crux of all things: the copyright violation. Because they had to "copy" the work in its entirety into the system without permission. Over and fucking over. Presser admits this. Everyone shamelessly admits this, as if copyright were nothing. And as the NYT lawyers go after OpenAI right now in the courts for the copyright violation (after a protracted and unsuccessful negotiation for content licensing), I hope the Times ultimately takes that overdue pound of flesh, and very much then some, in financial damages -- forcing OpenAI to raise the cost of licensing its technology so high that no one will be able to afford it.

Now, OpenAI is not the only game in town. Adobe is developing "ethical" generative AI. They're going to explore content licensing (if they haven't already). I hear Google is doing the right thing, too, but I'm wary.

The vast majority of people in tech aren't authors, nor do they know any personally. So, they don't know anything about copyright law. They read conscience-soothing, half-baked articles about generative AI being "transformative" without understanding that's only ONE of several qualifications for fair use. (God, sometimes it's really something being married to and living with a recovering attorney, but these days I appreciate him more than ever.) My colleagues are already racing down the generative AI rabbit hole with little thought or conscience. We've been conditioned not to think about the technology we use -- its origins, its consequences. The same way we devour chocolate without thinking whether child labor went into it (a good lot of it did). We are so removed from the wounds that we don't, can't smell the antiseptic. We just have the product and it's our right.

Well, now, I have to get ready for work. But as you can tell, it's something I've thought about quite a lot and I'm very passionate about certain aspects. Not to say the rest isn't important -- it's VERY important. But as a grassroots activist, I like to focus on one issue and make progress there. It's easier to raise consciousness about a single legal aspect. Therein lies the way forward.

xo

Maria

Feb 13 · Liked by Michael Marshall Smith

I had this same conversation with my partner yesterday - we were discussing a new AI text-to-speech tool for adding voiceover narration to a video editing project I was working on, a simple social media video post for a client. We talked about whether this was another job that AI might fill in the future and whether it was a tool I should even be using (a couple of friends are professional voiceover artists and audiobook narrators).

As somebody with a background in app and game development, I often pay composers, writers, and voice actors. But for this small project, for a client, I decided to give this tool a try. And it was good. It couldn't "act" (at least not yet), but I could ask it to be more cheerful, enthusiastic, or serious. It did a good job.

But the decision on my part was never to "use this tool" or "pay a voiceover artist". The decision was to "use this tool" or "not have a voiceover at all", as - for a social media post with a short lifespan - I would never have convinced my client to pay a voiceover artist. In the same way that it wouldn't make economic sense for you to pay an artist to illustrate (or recreate images based on the images you'd generated for) your Secret History of Santa Cruz (which would never have existed without AI in the first place).

I see these tools, I think, the same way that you do. They allow us to achieve things beyond our skillset. And there *is* something creative about prompt engineering and figuring out how to get the best results. That's a skill in its own right (and one that organisations will value moving forwards).

Is using these tools as creative (or impressive) as learning how to paint watercolours or mastering black-and-white photography? No, but it is doing what technology has always done: making things that were previously impossible or expensive achievable and accessible.

Many of the services we offer, such as programming apps and games, graphic design, video editing, social media services, etc. are ripe for AI to disrupt (it's already happening), but we're leaning into what AI can do, and using it as part of our workflow to free us humans up for the stuff that only we can do.

Sort of a side note, but I quite like Microsoft's branding of their AI tools as "copilots" and I think that's a great way to think about how they can and should be used.

Feb 13 · Liked by Michael Marshall Smith

I use ChatGPT for various purposes, such as composing cover letters for job applications. Given the high volume of daily applications, it's realistic to acknowledge that most may not be reviewed by a human. Even if they are, the reviewers often sift through dozens a day, making it challenging for any single application to stand out. Writing a bespoke cover letter for each application is a time-consuming task, and given the competitive nature of the process, it's not really feasible.

Additionally, I use ChatGPT to review my code, identifying errors or locating issues when it appears correct but fails to run. This helps me save time that would otherwise be spent on extensive debugging or seeking external assistance. Furthermore, I turn to ChatGPT to find solutions for coding specific functions or overcoming challenges beyond my immediate understanding and I’ve learned quite a bit this way without having to bother others.

I don't view these uses as inherently wrong because for me, it's about utilizing a tool to efficiently accomplish tasks or streamline processes. I don't entirely agree with the notion that using AI to generate text hinders learning. Personally, I use it as a tool to refine my writing. I carefully review each suggestion, selectively incorporating what aligns with my intent and my voice, allowing me to learn and improve. This parallels my approach to spell check, which not only corrects my spelling but teaches me HOW to spell the word; that one thing has contributed more to my spelling than anything else.

Feb 13 · Liked by Michael Marshall Smith

Dear Michael -- I loved this piece so much that I started writing a massive reply. And roughly five paragraphs in, I realized I should probably just post my OWN fucking essay in response.

So here it is! And I hope you enjoy! THANK YOU!!!

https://johnskipp.substack.com/p/ai-yi-yi

Feb 21 · Liked by Michael Marshall Smith

I took a foundation course in AI in, like, 1989, and I thought that was going to be the Next Big Thing. I was wrong by ≈ 30 years.

According to John Searle (1980), there are two kinds of AI: weak AI, which attempts to mimic (human) cognition (which is what ChatGPT is), and strong AI, which is an attempt at understanding (human) cognition by building a machine that does what brains do.

What follows is somewhat ranty. The seconds you spend reading this, you won't get back. I think I'm trying to elaborate a bit on ChatGPT's originality.

I quit pursuing AI because the staff at U of Oslo were all in awe of Chomsky. I inadvertently stumbled across B.F. Skinner in the original, and realized Chomsky's critique of Skinner is misguided. I suggested that it would make more sense to build AI on Skinner's ideas than Chomsky's. Apparently, I had moved ludicrousness into unexplored dimensions. I eventually gave up and became a clinical psychologist instead.

A friend messaged me right after ChatGPT was released, and said his daughter studies AI on the same track that I left. He told me the AI department staff at the U of Oslo is a bit down in the dumps these days because ChatGPT is a) based on Skinner's ideas and b) proves that Chomsky is wrong.

Chomsky is in full rant mode these days, claiming that ChatGPT is just copying and is not producing anything original. (His criticism of ChatGPT is in the same lane as his criticism of Skinner, i.e. misguided: his main gripe against Skinner is that Skinner's theory can't explain original behavior, which it's not supposed to.)

But Chomsky is arguing against his own better judgment, because ChatGPT is producing original material. Sure, ChatGPT is not a blank slate. But ChatGPT already does things that its creators could not foresee. Chomsky will probably move the goalposts of what constitutes originality until his dying breath, but he's basically committing the no-true-Scotsman fallacy.

Feb 22 · Liked by Michael Marshall Smith

I don't know if I've misunderstood anything.

Politically, I lean about as far to the right as common courtesy allows. I'm in favor of free markets and competition and stuff.

But do I lose any sleep over ChatGPT having ingested my books? We're a) talking about a loss of, like, 60 bucks, and b) what ChatGPT has paid me back just by existing is worth much more than that.

Tell you what, ChatGPT: I'll give you a complimentary copy of my next book. You're welcome.

Mar 18 · Liked by Michael Marshall Smith

I got the crap kicked out of me by a watercolourist the other day for something similar.
