As someone whose career is based on technology and education, I see the fear that AI use brings. Can a student who has used AI to generate an essay really understand the subject? That kind of depends. AI will only give so much: the prompts require some understanding, and the essay, as a starting point, will need further work to make it fit the brief. Questioning the student if you are not sure will clarify whether they really understand. But it is a sea change in how educators work, and we need to adapt to it.
The impact on education is really striking, and one I'm aware of as my son's just started college. I cannot see how someone can have really absorbed a subject if they've used an AI to produce an essay — even if they then have to finesse the result. And in addition to acquiring the "knowledge", there's also the question of them failing to acquire the skills of research, critical thinking, assimilation, and writing...
Loved this. As always, I agree with me.
Weirdly, I was going to write the exact same piece this morning (minus the angry artist, and the "I've built a HAL-9000 to teach me French" bits). I too am bumping up against the idea that AI scraping a gajillion pictures and novels is not really very different to how I arrived at what we might laughingly call my "style". And I don't need anyone presenting me with an invoice, thank you. Even when the world greets its first AI artist, or novelist, or screenwriter, that entity won't necessarily be better than anyone else, or have a more interesting viewpoint or style.
I'm not sure I agree with you on the learning side of things - since you introduced me to Perplexity, I have gone down some fascinating research rabbit holes in which I have certainly learned things, in much the same way I would have done using Google/Wikipedia (and I don't think that Googling is a skill anyway). It's true that I am not navigating a complex library or historical archive IRL, but I wasn't doing that before.
I was hoping you'd disagree with yourself just for kicks ;-)
And yes, there's no question that if you've already got an inquiring mind and some critical thinking skills, there are huge bonuses to be had from some of these tools. I've also wound up down fascinating holes into new areas via Perplexity, and additionally had some bang-on useful answers to specific questions that I never got from Google. There's good shit happening as well as the bad... and throwing babies out with the bath water won't help any of us.
I read the other day that someone, for shits and giggles, designed an AI bot that was 100% ethical: it will not violate copyright, scrape anything, present questionable info, etc. It literally will not return a response to a single question.
“The thing about using AI is it means you will not learn”: I first noticed this using Google Maps. I used to remember routes/directions better using an A-to-Z; now I just let Google or Apple tell me when to turn.
Exactly! I used to love paper maps, and also just feeling my way through a town and letting the geography seep into me. My son, who's never driven without GPS, simply doesn't have the same knowledge of a place...
Re-folding paper maps, lost art form
This is really good, and really thought-provoking.
Thank you, Helen! Delighted that you enjoyed it :)
Michael, I'm sorry the artist took a hit-and-run approach, since you are always so willing to explore and ponder and discuss all sides of a contentious issue. The time and care you spend on this sort of thing has been a source of mild awe on my part for all the years I've known you on twitter. You honestly seem to enjoy the respectful give and take, while what I generally do these days is think how ancient I am and how precious time is, and then move on. Anyway. I'm sorry you were blindsided. And thank you for taking the time to explore and ponder and present multiple sides of a contentious issue in this piece right here. Very well done and thought-provoking as always.
Hey there :-) Thank you — really glad you enjoyed it. Yes, the inspiration for the piece is a bit of a shame, but I guess it reminded me what a knotty problem it is and will remain... and one that (like so many of the semi-terrifying issues facing us in the world right now) can't be solved by just shouting at it from a single entrenched position.
When this was New Stuff, I saw pals who were Real Artists playing with it and having lots of fun seeing what came out of their prompts. The results were everything from hilarious to terrifying. I still look for extra fingers in any "artwork". I have never used Midjourney, I am intimidated by all digital art, and I am very sad that art is being cheapened by the idea that it can just be produced with the touch of a button, like a microwave burrito.
I share that sadness, but on the other hand think/hope: just as you've just made a distinction between a microwave burrito and a "real" one, maybe it'll be the same with the arts. The microwave version may be cheap and easy, but personally I'll always choose to have fewer, more expensive and less convenient — but BETTER — ones made by a human.
Oh, how I love waking up to a good AI rant! Thank you, Michael. I'd like to back up a bit to the "credit vs compensation" part and focus on copyright law. For better or worse, I actually went to The Eye before Shawn Presser took it down, and I saw (even downloaded one of the tar files) the many thousands of pirated books used to develop the technology. I want to focus on the piracy because for me, that's the thing that makes me physically ill every time someone at work or on LinkedIn for my profession mentions generative AI, knowing that the copyrights of tens of thousands of writers for upwards of 183K books were violated. As I stared in disbelief at the massive amount of theft in The Pile (as it's been called) aka the Books3 cache (as the Meta research paper referred to it and it was labelled on The Eye), I felt inexpressible levels of rage. To Serve Man, indeed. We are the meat.
Here, I think, is the crux of all things: the copyright violation. Because they had to "copy" the work in its entirety into the system without permission. Over and fucking over. Presser admits this. Everyone shamelessly admits this, as if copyright were nothing. And as the NYT lawyers go after OpenAI right now in the courts for the copyright violation (after a protracted and unsuccessful negotiation for content licensing), I hope that case ultimately takes its overdue pound of flesh, and then some, in financial damages, forcing OpenAI to raise the cost of licensing its technology so high that no one will be able to afford it.
Now, OpenAI is not the only game in town. Adobe is developing "ethical" generative AI. They're going to explore content licensing (if they haven't already). I hear Google is doing the right thing, too, but I'm wary.
The vast majority of people in tech aren't authors, nor do they know any personally. So, they don't know anything about copyright law. They read conscience-soothing, half-baked articles about generative AI being "transformative" without understanding that's only ONE of several qualifications for fair use. (God, sometimes it's really something being married to and living with a recovering attorney, but these days I appreciate him more than ever.) My colleagues are already racing down the generative AI rabbit hole with little thought or conscience. We've been inured to not thinking about the technology we use -- its origins, its consequences. The same way we devour chocolate without thinking about whether child labor went into it (a good lot of it did). We are so removed from the wounds that we don't, can't smell the antiseptic. We just have the product and it's our right.
Well, now, I have to get ready for work. But as you can tell, it's something I've thought about quite a lot and I'm very passionate about certain aspects. Not to say the rest isn't important -- it's VERY important. But as a grassroots activist, I like to focus on one issue and make progress there. It's easier to raise consciousness about a single legal aspect. Therein lies the way forward.
xo
Maria
Hey Maria! Thanks so much for that response. And you're right: however much one wanders around the subject and tries to see ameliorating aspects and bright sides, the bottom line is a bunch of creators had their work RIPPED OFF by people who neither understand nor in the least care about copyright or the effects their software will have on writers and artists.
And this, I think, is another of the great issues of our age — as struck me recently when I was in San Francisco, seeing tons of self-driving cars around. At what point did we agree to this? An experimental, unproven and potentially lethal new tech, right there on our streets? We never did — just as we never agreed to having all our work scraped so someone else could charge money for it. It's this wanton trampling of other people's rights that may be one of the bigger issues: the people who think it's cool to "move fast and break things" are never the ones finding their own lives broken.
I had this same conversation with my partner yesterday - we were discussing a new AI text-to-speech tool for adding voiceover narration to a video editing project I was working on, a simple social media video post for a client. We talked about whether this was another job that AI might fill in the future and whether it was a tool I should even be using (a couple of friends are professional voiceover artists and audiobook narrators).
As somebody with a background in app and game development, I often pay composers, writers, and voice actors. But for this small project, for a client, I decided to give this tool a try. And it was good. It couldn't "act" (at least not yet), but I could ask it to be more cheerful, enthusiastic, or serious. It did a good job.
But the decision on my part was never to "use this tool" or "pay a voiceover artist". The decision was to "use this tool" or "not have a voiceover at all", as - for a social media post with a short lifespan - I would never have convinced my client to pay a voiceover artist. In the same way that it wouldn't make economic sense for you to pay an artist to illustrate (or recreate images based on the images you'd generated for) your Secret History of Santa Cruz (which would never have existed without AI in the first place).
I see these tools, I think, the same way that you do. They allow us to achieve things beyond our skillset. And there *is* something creative about prompt engineering and figuring out how to get the best results. That's a skill in its own right (and one that organisations will value moving forwards).
Is using these tools as creative (or impressive) as learning how to paint watercolours or mastering black-and-white photography? No, but it is doing what technology has always done: making things that were previously impossible or expensive achievable and accessible.
Many of the services we offer, such as programming apps and games, graphic design, video editing, social media services, etc. are ripe for AI to disrupt (it's already happening), but we're leaning into what AI can do, and using it as part of our workflow to free us humans up for the stuff that only we can do.
Sort of a side note, but I quite like Microsoft's branding of their AI tools as "copilots" and I think that's a great way to think about how they can and should be used.
Yes, I like the "co-pilot" framing too — Perplexity offers something similar.
Sounds like we're on the same page. I got into Midjourney purely for generating images for pitch decks. It saves a TON of time (instead of Googling and finding a bunch of stuff that isn't quite right) and, like you on that kind of task, there's no way in hell I'd ever actually commission someone for any of it. There just isn't the budget. And as you rightly point out, my Santa Cruz things didn't just rely upon free imagery like that; they wouldn't even have existed in the first place without it.
Maria makes some great points about copyright in another comment, but in addition/to the side of that, I do think there's an onus upon us as working creatives to try to adapt and transcend, possibly in a slightly similar way to artists not just rolling over and giving up once photography arrived. The latter didn't kill the former, but turned into a parallel artform in its own right. Maybe AI-generated material will follow a similar path...
I use ChatGPT for various purposes, such as composing cover letters for job applications. Given the high volume of daily applications, it's realistic to acknowledge that most may not be reviewed by a human. Even if they are, the reviewers often sift through dozens a day, making it challenging for any single application to stand out. Writing a bespoke cover letter for each application is a time-consuming task, and given the competitive nature of the process, it's not really feasible.
Additionally, I use ChatGPT to review my code, identifying errors or locating issues when it appears correct but fails to run. This helps me save time that would otherwise be spent on extensive debugging or seeking external assistance. Furthermore, I turn to ChatGPT to find solutions for coding specific functions or overcoming challenges beyond my immediate understanding, and I've learned quite a bit this way without having to bother others.
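To make that concrete (just a minimal sketch, assuming the official OpenAI Python client; the model name and the buggy snippet are made-up placeholders, not anything specific), a typical review request of mine looks roughly like this:

```python
# Minimal sketch: asking a chat model to review a snippet that "looks right but won't run".
# Assumes the official OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# set in the environment; the model name and code snippet below are placeholders.
from openai import OpenAI

client = OpenAI()

snippet = '''
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(vals)   # bug: 'vals' is not defined
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"This function fails at runtime. Find the bug and explain the fix:\n{snippet}"},
    ],
)

# Print the model's review so I can read the explanation, not just the patched code.
print(response.choices[0].message.content)
```

The point is less the specific API than the habit: paste in the failing code, ask a pointed question, and actually read the explanation rather than just pasting the fix back in; that's where the learning comes from.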
I don't view these uses as inherently wrong because, for me, it's about utilizing a tool to efficiently accomplish tasks or streamline processes. I don't entirely agree with the notion that using AI to generate text hinders learning. Personally, I use it as a tool to refine my writing. I carefully review each suggestion, selectively incorporating what aligns with my intent and my voice, allowing me to learn and improve. This parallels my approach to spell check, which not only corrects my spelling but teaches me HOW to spell the word; that one thing has contributed more to my learning how to spell than anything else.
Ha — that's a very good point about spellcheck! I actually had a digression into the issue of how much of this spanking-new AI is *actually* new... and/or threatening because of that, and how much of it is just a better way of leveraging computing ability (I cut it because the thing was getting too long already).
I'm really interested (and cheered) to see a coder embracing the benefits for correction and learning, as it's another field where AI could have major effects. The ability to make little personal bots to act as helpers seems like a genuine boon, for example. I guess like a lot of subjects at the moment there's a lot of excitable chatter and shouting, and maybe a degree of sitting back and assessing it as it develops is the best way forward...
Like any technology, there's the possibility for both positive and negative uses and outcomes. Humans are notoriously prone to finding negative ways to use . . . well, everything.
Also, I don't have a job where I can really leverage other people's work in the same way writers and artists might, and never having been confronted with it, I don't really know what I'd do in that situation. If I'm being honest, I suspect I'd leverage whatever tools I had at my disposal. I almost wrote "do the right thing" rather than "tools at my disposal" and then I realized there is no obvious right/wrong here. Like most things, it's murky. What I'm convinced of, though, is that rich dudes are definitely going to do whatever they want that makes them more $ unless forced to do otherwise. We need to put guardrails in place for this, and the longer it's left in the wild, the less likely we are to be able to rein it back in. As you said, Pandora is out of the box and there's no putting her back.
Dear Michael -- I loved this piece so much that I started writing a massive reply. And roughly five paragraphs in, I realized I should probably just post my OWN fucking essay in response.
So here it is! And I hope you enjoy! THANK YOU!!!
https://johnskipp.substack.com/p/ai-yi-yi
Hey John — excellent! Glad it landed for you, and I'll go read yours!
YAAAAAAAY!!!
I took a foundation course in AI in, like, 1989, and I thought that was going to be the Next Big Thing. I was wrong by ≈ 30 years.
According to John Searle (1981), there are two kinds of AI: weak AI, which attempts to mimic (human) cognition (this is what ChatGPT is), and strong AI, which is an attempt at understanding (human) cognition by building a machine that does what brains do.
What follows is somewhat ranty. The seconds you spend reading this, you won't get back. I think I'm trying to elaborate a bit on ChatGPT's originality.
I quit pursuing AI because the staff at U of Oslo were all in awe of Chomsky. I inadvertently stumbled across B.F. Skinner in the original, and realized Chomsky's critique of Skinner is misguided. I suggested that it would make more sense to build AI on Skinner's ideas than Chomsky's. Apparently, I had moved ludicrousness into unexplored dimensions. I eventually gave up and became a clinical psychologist instead.
A friend messaged me right after ChatGPT was released and said his daughter studies AI on the same track that I left. He told me the AI department staff at the U of Oslo is a bit down in the dumps these days, because ChatGPT a) is based on Skinner's ideas and b) proves that Chomsky is wrong.
Chomsky is in full rant mode these days, claiming that ChatGPT is just copying and is not producing anything original. (His criticism of ChatGPT is in the same lane as his criticism of Skinner, i.e. based on a misunderstanding: his main gripe against Skinner is that Skinner's theory can't explain original behavior, which it's not supposed to.)
But Chomsky is talking against his better judgement, because ChatGPT is producing original material. Sure, ChatGPT is not a blank slate. But ChatGPT already does things that its creators could not foresee. Chomsky will probably move the goalposts of what constitutes originality until his dying breath, but he's basically committing the no-true-Scotsman fallacy.
What a very interesting set of thoughts. I'd never thought about it that way! Especially the point about not being able to explain original behavior, which I suspect is going to become more and more of an issue with AI (cf. ChatGPT seemingly going a little wonky last night!).
I don't know if I've misunderstood anything.
Politically, I lean about as far to the right as common courtesy allows. I'm in favor of free markets and competition and stuff.
But do I lose any sleep over my books having been uploaded into ChatGPT? I'm a) talking about a loss of, like, 60 bucks, and b) what ChatGPT has paid me back just by existing is much more than that.
Tell you what, ChatGPT? I'll give you a complimentary copy of my next book. You're welcome.
That's an EXTREMELY unusual opinion... but one I can't actually wholly disagree with, certainly in practical terms.
I've used ChatGPT and Midjourney heavily for illustrations. No artists have lost any commissions. The alternative is not paying an artist. The alternative is no illustrations.
I feel much the same about the deck imagery I make. It simply wouldn't happen otherwise. I guess that doesn't speak to the underlying issue of how the AI gains the ability, but the practical approach — okay, maybe they "stole" my creativity, so how do I claw some advantage back? — while it plays into their hands, is the only one that makes sense right now.
I got the crap kicked out of me by a watercolourist the other day for something similar.
Oof.