Couple of observations…
We’re not done yet
Okay, I lied last week. I am going to talk about AI again briefly. Sleepily killing time in a hotel room yesterday evening before going out to find beer — I’ve recently arrived in Scotland to work for two weeks — I remembered the idea of being able to make your own ChatGPT agents. Hmm, I thought — could be useful if you’re ever called upon to adapt a book or series for TV. Let’s see how it works.
So I uploaded the three Straw Men novels (my series about a conspiracy of serial killers) to a custom private ChatGPT agent. Then asked the bot to tell me about the main characters. It quickly did so, with acceptable little summaries. Fine. I asked about minor characters, and it did a decent round-up of those too. Good. So then I asked it to tell me in more detail about a main character — one called Bobby.
It perkily gave me an analysis of his role in the series which was absolutely and completely wrong in two essential regards.
I corrected it on the first. It quickly came back thanking me for doing so and acknowledging its error, and provided me with two alternative revisions so I could choose which I preferred, both of which corrected the egregious error… but perpetuated the other, equally wrong piece of wrongness (seriously, neither mistake was minor: they were story-significant gaffes you couldn’t possibly make if you’d read even just one of the books). So I corrected it on that, and again it came back thanking me for doing so — and provided another revision, which was basically correct.
But… this is no bloody use, is it?
Like the example about guitar tablature I mentioned in the previous post, the only way I could get the right answer was by knowing it beforehand. If I’d been relying upon it to break down a series I’d never read, I’d look a complete fool. My cat could have done a better job and I’m pretty sure he only skip-read the second book.
I then asked Google’s much-vaunted Gemini model the same question. To be fair, I hadn’t directly uploaded the books to it, as I had with my bot. But it snapped back quickly with a full, confident, and slightly precocious-sounding answer, clearly having access to the material… but getting one of the book titles wrong and indirectly making one of the same errors as my bot. Bullshitting, in other words.
My points being (a) I’m not sure we have to worry about AI taking over the universe just yet — despite the ravings of people on the Singularity subreddit, who are all about worshipping our new software masters — and (b) it’s kinda worrying, given how much is being merrily handed over to AI. I would not want my custom bot flying a plane, for example, in case it misunderstood that we’re at war with Belgium and decided to perform an heroic kamikaze crash-landing into Bruges.
AI can already do remarkable things, no question — and of course it will get better and better. But right now it’s like one of those flashy fellows who talks a good game in the bar but, when pressed with a couple of informed questions, is quickly revealed to have only a sketchy understanding of the subject, probably acquired through half-listening to a podcast or overhearing someone else’s conversation.
You can lead software to facts, but you can’t make it think.
The power of losing
Came across a fascinating quote in the notebooks of Michael Oakeshott, a British philosopher:
Lost causes — Jacobites; the South in the American Civil War; Cavaliers & Roundheads.
Often the great moments of history; often, because they are fixed & finished, more permanent than that which succeeded & therefore tends to be lost in further achievements. The value of loss itself. The moments of success tend to lose their individuality in a general development; the moments of loss retain their individuality and power. The same true of decay – it is freed from the dissolving power of success.
Which yes, explains Southern pride in the US, but also the eternal allure of the Nazis, and in fact all the historical and cultural beacons of the hard right. It stems from an absolute inability to deal with the present and the future, and a deep insecurity over their unknowability. A profound unwillingness to cope with the flux of life, a need for something fixed and inviolable, even if it’s a bad thing.
By losing, a cause becomes preserved in a kind of purity, never having had to graduate from being an idea into actuality, to fight for its place in the ever-onward march of history, to be tested over time and in reality… and proved wrong.
There will always be people attracted by this awful simplicity.
Regarding AI and machine learning, have you come across the software that was trained to pick up cancer tumours? It’s standard practice for doctors to put a ruler alongside a tumour to show how big it is, so what the software actually learned to detect wasn’t the tumour at all — it was the sight of a ruler in the photo 🤣🤣 so rulers cause cancer
My main problem with the question of AI as you’ve detailed it is that your point is 100% correct, very well documented and unlikely to change anytime soon, and yet none of this is stopping these absolute fucktangles from presenting said glorified spreadsheets as alternatives to search engines or, you know, actual people with expertise. There are teachers tearing out what little hair they have because a whole generation of kids doesn’t know the difference between an AI chatbot and a search engine (which, in fairness, are also not particularly reliable these days, but that’s another head of the hydra of late-stage capitalism).
My day job is writing clinical and technical content for the internal and external knowledge bases of a large UK insurer. At present we are training two AI chatbots (one for internal, agent-facing content, one for external customer-facing content) to deliver correct responses on complex queries. We have already determined that the core issues these bots are facing in delivering said correct responses could not have been predicted by us prior to testing. In other words, ONLY testing can flag these core issues so that we can fix them, either by adjusting the content so that the bot is happy, or retraining the bot.
However, since a knowledge base is an organic, fluid construction which is constantly being updated and built upon, there will always be content that the AI needs training on. Since we can’t be sure that new or updated content will not spark a juddering fit and a wave of incorrect answers without testing, it follows that all new and updated content will need running past the chatbot and testing before we publish it.
I’m not sure how to get people in authority to understand this, because at present everyone I’m speaking to seems to be under the impression that these AIs are supposed to *know* the answers after having access to properly calibrated content. Explaining that AI chatbots are only supposed to *sound like* they know the answers is meeting with uncomfortable glances and/or baffled stares…