Scientific abstracts written by ChatGPT managed to deceive academics

A group of researchers led by Northwestern University used ChatGPT to write 50 abstracts in the style of five different scientific journals.

Four academics, divided into two groups, judged whether each abstract had been written by a person or by artificial intelligence. If one reviewer received a real abstract, the other received the generated version, and vice versa. Each reviewer examined 25 abstracts.

The reviewers correctly identified 68% of the generated abstracts and 86% of the original ones. In other words, they were deceived into believing that 32% of the generated abstracts were real, and that 14% of the genuine abstracts were fake.

Notably, the reviewers knew in advance that some of the abstracts had been written by artificial intelligence. They also reported that it was very difficult to distinguish the real abstracts from the fake ones. In some abstracts, ChatGPT fabricated facts about the studies it cited as evidence.

According to the scientists, the experiment shows that material created by ChatGPT can be highly believable. The fact that the reviewers accepted AI-generated abstracts as genuine in 32% of cases suggests that these texts are convincing enough to deceive an unprepared reader.

ChatGPT has repeatedly been subject to bans and restrictions. For example, scientists were recently forbidden to use ChatGPT to write papers: the organizers of the International Conference on Machine Learning (ICML), citing plagiarism concerns, banned the submission of articles created using AI.

Similarly, over fears about its negative impact on student performance, as well as concerns about safety and content accuracy, access to ChatGPT was blocked on the networks and devices of New York public schools.

Source: media reports cited above.