New study claims up to 20% of GPs in the UK could be using generative AI tools as part of their practice
AI has been a bit weird of late. It has been used to generate naff images, play Doom, and build the loneliest social media app out there. Now a potentially great, but also potentially worrying, use of generative AI has been spotted among General Practitioners (GPs) across the UK, with a new survey suggesting up to 20% could be using it as part of their practice.
As spotted by ScienceDaily, the researchers behind a paper titled "Generative artificial intelligence in primary care: an online survey of UK general practitioners" recently polled GPs via Doctors.net.uk—an online forum and database for GPs across the UK. Of the 1,006 polled, 205 (20%) reported "using generative AI tools in clinical practice".
When asked "Have you ever used any of the following to assist you in any aspect of clinical practice?", 16% reported using ChatGPT at some point, with Bing AI taking 5%, Google Bard getting 4% and others getting a single percentage. In the introduction to the paper, only ChatGPT is cited, as it is the most popular choice of those mentioned.
The paper doesn't appear to explain why these figures add up to more than the headline 20%: the quoted shares come to roughly 26%, and the results table suggests as many as 26% of those polled have used some form of chatbot. The respondent total at the bottom of the table indicates this isn't down to GPs ticking multiple boxes, as each response counts a single choice. It's also worth noting that these GPs say they have used AI tools at some point, not that they use them every day or even regularly, so the headline figure is a potential one, not an absolute one.
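For the sake of transparency, here's the back-of-the-envelope sum behind that 26%. This is a quick sketch using the per-tool figures as quoted above; the paper's own results table may break the "other" tools down differently.

```python
# Tool shares as quoted in the article, as % of the 1,006 GPs polled.
tool_shares = {
    "ChatGPT": 16,
    "Bing AI": 5,
    "Google Bard": 4,
    "Other": 1,  # the article lumps remaining tools in at around 1%
}

# If each GP picked only one tool, the shares simply add up.
total = sum(tool_shares.values())
print(f"Combined share: {total}%")  # -> 26%, above the 20% headline figure
```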
Of those who reported using the tools, 29% said they used them to generate documentation after appointments, 28% to suggest a differential diagnosis, 25% to generate treatment options, 20% to summarise and build timelines from prior documentation, 8% to write letters, and 33% for other purposes. These figures total well over 100%, as respondents could report more than one use.
However, because chatbots are trained on vast amounts of data scraped from the internet, the concern is that their outputs can be skewed by misinformation and bias. As the paper points out:
"They (chatbots) are prone to creating erroneous information. Outputs of these models also risk perpetuating or potentially worsening racial, gender and disability inequities in healthcare (‘algorithmic discrimination’). As consumer-based applications, these tools can also risk patient privacy."
The paper's authors also praise AI's potential when it comes to writing documentation, transcribing conversations, and even assisting with diagnosis, though it should never be a diagnostic tool in and of itself. Patients can have peculiar symptoms or describe them in ways that are hard for a bot to understand or properly interpret. Language is a great tool, but it's also complex and nuanced.
As the paper puts it: "The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risk."