Nearly a year into parenting, I’ve relied on advice and tricks to keep my baby alive and entertained. For the most part, he’s been agile and vivacious, and I’m beginning to see an inquisitive character develop from the lump of coal that would suckle from my breast. Now that he’s started nursery (or what Germans refer to as Kita), other parents in Berlin, where we live, have warned me that an avalanche of illnesses is on its way. So during this particular stage of uncertainty, I did what many parents do: I consulted the internet.
This time, I turned to ChatGPT, a source I had vowed never to use. I asked a straightforward but fundamental question: “How do I keep my baby healthy?” The answers were practical: avoid added sugar, monitor for signs of fever and talk to your baby often. But the part that left me wary was the closing offer: “If you tell me your baby’s age, I can tailor this more precisely.” Of course I should be informed about my child’s health, but given my growing scepticism towards AI, I decided to log off.
Earlier this year, an episode in the US echoed my little experiment. With a burgeoning measles outbreak, children’s health has become a significant political battleground, and the Department of Health and Human Services, under the leadership of Robert F Kennedy Jr, has established the Make America Healthy Again (Maha) commission, aimed at combating childhood chronic disease. The commission’s report claimed to address the principal threats to children’s health: pesticides, prescription drugs and vaccines. Yet its most striking feature was a pattern of citation errors and unsubstantiated conclusions, which external researchers and journalists said pointed to the use of ChatGPT in compiling it.
What made this more alarming was that the Maha report allegedly cited studies that do not exist. That is consistent with what we already know about AI, which has been found not only to produce false citations but also to “hallucinate”, that is, to invent nonexistent material. The epidemiologist Katherine Keyes, who was listed in the Maha report as the first author of a study on anxiety and adolescents, said: “The paper cited is not a real paper that I or my colleagues were involved with.”
The threat of AI may feel new, but its role in spreading medical myths fits an old mould: that of the charlatan peddling false cures. During the 17th and 18th centuries, there was no shortage of quacks selling remedies for everything from intestinal ruptures to eye pustules. Although not medically trained, some, such as Buonafede Vitali and Giovanni Greci, were able to obtain licences to sell their serums. A platform as grand as the public square meant they could draw crowds and entertain bystanders, encouraging them to purchase their products, which included balsamo simpatico (sympathetic balm) to treat venereal diseases.
RFK Jr presents himself as an arbiter of science, even as the Maha report appears to have cited false information. What complicates charlatanry today is that we’re in an era of far more expansive tools, such as AI, which ultimately wield more power than the swindlers of the past ever did. This disinformation may appear on platforms we believe to be reliable, such as search engines, or masquerade as scientific papers, which we’re used to treating as the most reliable sources of all.
Ironically, Kennedy has claimed that leading peer-reviewed scientific journals such as the Lancet and the New England Journal of Medicine are corrupt. His stance is especially troubling given the influence he wields in shaping public health discourse, funding and official panels. Moreover, his efforts to implement the Maha programme undermine the very concept of a health programme grounded in evidence. Unlike science, which strives to uncover the truth, AI has no interest in whether something is true or false.
AI is convenient, and people increasingly turn to it for medical advice, but its use carries significant risks. It is worrying enough when an individual relies on it; when a government leans on AI for medical reports, the result can be misleading conclusions about public health. A world saturated with AI platforms creates an environment in which fact and fiction meld into each other, leaving little foundation for scientific objectivity.
The technology journalist Karen Hao astutely reflected in the Atlantic: “How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: how do we ensure that we’ll make our future better, not worse?” Answering it means establishing ways to govern AI’s use, rather than letting governments adopt it heedlessly.
Individual solutions can help assuage our fears, but we need robust, adaptable policies to hold big tech and governments accountable for the misuse of AI. Otherwise, we risk creating an environment where charlatanism becomes the norm.
-
Edna Bonhomme is a historian of science