Artificial Intelligence: a challenge for authors, editors and readers

On September 2, 2019, the first issue of European Gynecology & Obstetrics, the official journal of the European Society of Gynecology, was published. As Editor-in-Chief of this journal, nothing matters more to me than our readers' confidence in the quality of our published manuscripts, and by quality I mean above all veracity, accuracy and authenticity, but also objectivity and transparency.

The use of generative artificial intelligence (AI) is a controversial topic that has recently emerged among medical journals. It is therefore not surprising that the first manuscript published in “The opinion corner”, a new section of the EGO journal, focuses on this subject.

AI is the most innovative technology since the widespread adoption of the internet. New applications, implications and controversies regarding its use appear day by day, and healthcare is no exception. A recent paper by Kung and colleagues shows that ChatGPT can pass medical licensing board examinations [1]. Moreover, AI chatbots can provide second opinions, and their responses have been rated as more empathetic than those of real physicians [2]. Generative AI models learn from huge amounts of published data, including the original papers previously published in medical journals as well as books, other publications and social media sites, in order to predict the most appropriate words for a given sentence.

This innovation is an increasingly important phenomenon not only in medical publishing but in many medicine-related areas. It has obvious and potentially far-reaching implications for physicians, researchers, authors and editors; it changes the way we approach our daily work and can help us analyze our studies. It has the potential to increase productivity and free up authors' time, enabling them to focus on generating new original content. Following this scientifically important trend, the editorial team and I are determined to make EGO a valuable source of information, in part by devoting space in upcoming issues to the analysis of AI-related topics.

However, while AI-generated manuscripts appear to be very articulate and plausible, AI models are in fact prediction engines: chatbots that draw on previously published data. They can produce seemingly original text and corroborate any affirmation with references and links. When deliberately prompted to do so, AI models can produce entirely false images and articles.

The use of AI tools to assist authors with tasks such as mining data, analyzing text and performing translations may be acceptable, but it must be clearly acknowledged by the authors. Similarly, we will accept the use of AI-augmented visuals (e.g. infographics, diagrams, photos) and will ask authors to make this clear to readers. The use of AI engines will not change the criteria that EGO's reviewers and editorial board members apply during the peer review and acceptance of original articles; however, readers must be made aware of AI's contribution to and impact on a paper. Like other medical journals, we are apprehensive about the inappropriate use of AI to generate content. Within the EGO journal we have always been, and will continue to be, transparent with our authors and readers, as we strongly believe that while every new technology opens new and stimulating horizons, it must also be conscientiously monitored.

References

  1. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198.
  2. Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589–596.

Citation: Castelo-Branco C. Artificial Intelligence: a challenge for authors, editors and readers. EGO European Gynecology and Obstetrics (2024); 2023/03:100. doi: 10.53260/EGO.235031

Published: January 23, 2024

ISSUE 2023/03