Italian researchers used ChatGPT to create a fake dataset for a medical study, aiming to see whether ChatGPT could generate something convincing. The result, not surprisingly: it can, as long as the output is not shown to a real expert.
We already knew that "generative AI" is capable of writing compelling research summaries. It is therefore not surprising that it can generate an entire dataset in less than a minute. The tool's big weakness: it is unable to apply critical thinking to what it produces.
A domain expert can quickly see that the data lack authenticity, note the authors of this experiment, which concerns two alleged eye surgeries and was published in the journal JAMA Ophthalmology.
But an ordinary citizen who knows nothing about clinical studies or statistics, and who mainly wants to find in these data "evidence" for a preferred belief, would be none the wiser.
The journal Nature asked British biostatistician Jack Wilkinson, of the University of Manchester, who specializes in detecting questionable data, to review the document. He explains that it contains several errors revealing that the chatbot does not really understand what it is doing: the genders assigned to many participants do not match their first names, there is no correlation between the vision measurements taken before and after the alleged surgery, and an abnormally large number of patients have ages ending in 7 or 8. In short, "clear signs" that these data are "fabricated."
But not everyone is an expert, and even among experts, not everyone will take the time to look closely at research data.
"Data manipulation by an unscrupulous researcher has always been a problem in research, but it could soon become a bigger one," points out Italian ophthalmologist Giuseppe Giannaccare, of the University of Cagliari, lead author of the "experiment."