ChatGPT Generates Convincing Fake Medical Report

Italian researchers recently showed that data created by artificial intelligence can be misleading. The main conclusion of the study, led by Giuseppe Giannaccare, an ophthalmologist at the University of Cagliari, is that ChatGPT can generate convincing but false data.

Giannaccare notes that ChatGPT produced a fake dataset covering hundreds of patients in a matter of minutes. This finding is worrying, especially given the growing threat of falsified data in medical research.

The Cambridge Dictionary even named "hallucinate" (the production of false information by large language models) its word of the year. Misuse of ChatGPT has already led to sanctions: two lawyers who used it to prepare case materials were fined $5,000 for citing fictitious information.

The researchers used ChatGPT paired with its Advanced Data Analysis tool, which executes Python code, to generate a dataset for a clinical trial of keratoconus treatment. The generated data, however convincing they appeared, were entirely fictitious.
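To illustrate how little effort such fabrication takes, here is a minimal sketch of generating a superficially plausible two-arm trial table with NumPy and pandas. Every column name, sample size, and effect size below is an assumption for illustration only, not the study's actual prompt or output:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 150  # hypothetical number of fabricated "patients" per arm

def make_arm(label: str, postop_mean: float) -> pd.DataFrame:
    """Fabricate one treatment arm with plausible-looking columns."""
    return pd.DataFrame({
        "patient_id": [f"{label}-{i:03d}" for i in range(n)],
        "treatment": label,
        "age": rng.integers(18, 75, n),
        # Outcomes drawn from normal distributions; the means are
        # chosen so one arm looks artificially better than the other.
        "preop_score": rng.normal(0.8, 0.15, n).round(2),
        "postop_score": rng.normal(postop_mean, 0.15, n).round(2),
    })

# A few lines suffice to produce hundreds of fictitious records.
fake_trial = pd.concat([make_arm("A", 0.4), make_arm("B", 0.6)],
                       ignore_index=True)
print(fake_trial.head())
```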

Giannaccare emphasizes the importance of awareness of the "dark side of AI" and the need to develop more effective fraud-detection methods. He also notes that, used appropriately, AI can significantly benefit scientific research.

The article, published in the journal JAMA Ophthalmology, indicates that closer scrutiny of the data can reveal signs of falsification, such as an implausible number of subjects whose ages end in 7 or 8.
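A check of this kind is easy to automate: in genuine data, the final digit of patients' ages should be roughly uniform, so a chi-square goodness-of-fit test can flag clusters like the 7s and 8s described above. A minimal sketch, with hypothetical ages standing in for a suspect dataset:

```python
import numpy as np
from scipy import stats

# Hypothetical ages column from a suspect dataset.
ages = np.array([37, 48, 57, 28, 47, 38, 67, 58, 27, 47,
                 57, 38, 48, 68, 37, 58, 47, 28, 57, 38])

# Count how often each final digit (0-9) appears.
observed = np.bincount(ages % 10, minlength=10)

# In genuine data each last digit is expected in ~10% of records.
expected = np.full(10, len(ages) / 10)

# Chi-square goodness-of-fit test against the uniform distribution.
chi2, p_value = stats.chisquare(observed, expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A very small p-value flags an implausible digit distribution,
# e.g. ages clustering on 7 and 8 as in the fabricated dataset.
```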
