Discussion about this post

Julie Taylor

Regarding AI and machine learning, have you come across the software that was trained to pick up cancerous tumours? It’s standard practice for doctors to put a ruler alongside a tumour to show how big it is, so the training photos of malignant tumours almost always included a ruler. The software’s “ability to detect tumours” turned out to be the ability to spot a ruler in the photo 🤣🤣 so rulers cause cancer
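
(A toy sketch of that failure mode, for the curious. The data below is invented and assumes scikit-learn is installed; it shows how a model latches onto a spurious cue, here a hypothetical “ruler present” feature, whenever that cue correlates perfectly with the label in the training set.)

# Toy illustration of shortcut learning: the "ruler" feature correlates
# perfectly with the label in training, so the model keys on it instead
# of the weak tumour signal. All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

labels = rng.integers(0, 2, n)
tumour_signal = labels + rng.normal(0, 2.0, n)  # weak, noisy real signal
ruler_present = labels.astype(float)            # perfect spurious cue
X = np.column_stack([tumour_signal, ruler_present])

model = LogisticRegression().fit(X, labels)
print("learned weights:", model.coef_)  # ruler weight dwarfs tumour weight

# At deployment, rulers appear at random, so accuracy collapses.
test_labels = rng.integers(0, 2, n)
test_X = np.column_stack([
    test_labels + rng.normal(0, 2.0, n),
    rng.integers(0, 2, n).astype(float),  # ruler no longer informative
])
print("deployment accuracy:", model.score(test_X, test_labels))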

Jack The Bodiless

My main problem with the question of AI as you’ve detailed it is that your point is 100% correct, very well documented and unlikely to change anytime soon, and yet none of this is stopping these absolute fucktangles from presenting said glorified spreadsheets as alternatives to search engines or, you know, actual people with expertise. There are teachers tearing out what little hair they have because a whole generation of kids doesn’t know the difference between an AI chatbot and a search engine (which, in fairness, are also not particularly reliable these days, but that’s another head of the hydra of late-stage capitalism).

My day job is writing clinical and technical content for the internal and external knowledge bases of a large UK insurer. At present we are training two AI chatbots (one for internal, agent-facing content, one for external, customer-facing content) to deliver correct responses to complex queries. We have already determined that the core issues these bots face in delivering correct responses could not have been predicted before testing. In other words, ONLY testing can flag these core issues so that we can fix them, either by adjusting the content until the bot is happy, or by retraining the bot.

However, since a knowledge base is an organic, fluid construction which is constantly being updated and built upon, there will always be content the AI needs training on. Since we can’t be sure, without testing, that new or updated content won’t spark a juddering fit and a wave of incorrect answers, it follows that all new and updated content will need to be run past the chatbot and tested before we publish it.
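
In practice that looks something like the sketch below: a regression suite of known-tricky queries replayed against the bot whenever content changes, blocking publication on any wrong answer. The ask_chatbot function and the test phrases are hypothetical stand-ins; every real bot exposes its own API.

# Sketch of a pre-publish regression check for chatbot answers.
# The queries and required phrases are invented examples; ask_chatbot()
# is a placeholder for whatever API the bot under test actually exposes.
from dataclasses import dataclass

@dataclass
class TestCase:
    query: str
    must_contain: list[str]  # phrases a correct answer must include

TEST_SUITE = [
    TestCase("How do I cancel my policy?",
             must_contain=["14 days", "cooling-off"]),
    TestCase("Is accidental damage covered as standard?",
             must_contain=["optional extra"]),
]

def ask_chatbot(query: str) -> str:
    """Hypothetical stand-in; wire this to the real chatbot before use."""
    return "Placeholder answer from the bot under test."

def run_regression(suite: list[TestCase]) -> list[str]:
    """Return failure descriptions; an empty list means safe to publish."""
    failures = []
    for case in suite:
        answer = ask_chatbot(case.query).lower()
        missing = [p for p in case.must_contain if p.lower() not in answer]
        if missing:
            failures.append(f"{case.query!r}: answer missing {missing}")
    return failures

if __name__ == "__main__":
    failures = run_regression(TEST_SUITE)
    if failures:
        print("DO NOT PUBLISH, bot regressions found:")
        for f in failures:
            print(" -", f)
    else:
        print("All checks passed; content safe to publish.")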

I’m not sure how to get people in authority to understand this, because at present everyone I’m speaking to seems to be under the impression that these AIs are supposed to *know* the answers once they have access to properly calibrated content. Explaining that AI chatbots are only supposed to *sound like* they know the answers is met with uncomfortable glances and/or baffled stares…

