Blog EBHC | expert insight

The use of generative AI tools in scientific publishing

10/03/2026


David Tovey, Co-Editor, Journal of Clinical Epidemiology

Another week, another unverifiable allegation of wrongdoing, another awkward email to a valued contributor, another denial of any intention to harm. It would be wrong to say that the incursion of AI into the world of journal editing puts us in the position of having one arm tied behind our back. In reality, we are shackled from head to toe, stumbling in the dark. Did this author, peer reviewer, or letter writer use generative AI, and if so, should we be either surprised or judgemental? Increasingly, my answer to these questions seems to be a) I have no idea and b) probably not, on either count.

In June 2025, arguably a bit late in the day, I had a lightbulb moment. Well, two, actually. Both happened during a keynote presentation at a scientific conference: the speaker, at the European Conference on Ethics and Integrity in Academia, was Professor Mike Perkins of the British University Vietnam. One of the ‘lightbulbs’ was a prediction; the other, arguably, a statement of the blindingly obvious. First, the prediction: in 10 years’ time, the presently prevailing view amongst academic colleagues, that students who use LLMs to support their work are cheating, will be regarded as a quaint throwback to a bygone age. Perkins went on to note that if any part of the function of academic teaching in universities is to prepare graduates for their professional lives ahead, would-be employers would regard it as negligent for universities to fail to equip graduates with the skills and knowledge required to use generative artificial intelligence tools effectively. Which does seem fairly obvious when you consider it, but clearly, up until that point, I had not.

Subsequently, I read and was inspired by several commentaries by Howard Bauchner and Frederick Ravara, advocating for, and describing the seeming inevitability of, the use of AI tools in scientific publication and peer review.
For all of these reasons, over the last year or so, as co-editor of the Journal of Clinical Epidemiology, it has sometimes seemed that our industry is at risk of being left adrift as the world races onwards. As with many interventions, there have been benefits. Firstly, we had the positive experience of our publisher introducing a tool aimed at identifying duplicate, simultaneous submissions. Almost immediately, we were being alerted to numerous examples of such duplications: submissions to myriad unrelated journals, all within a few days of one another and usually identical in every detail. These generally included a provably untrue statement in the accompanying letter from the authors, stating that they had submitted to our journal only. So here was a tool working strongly in favour of good practice and research integrity, even if, as I write, it appears to be limited to a single publisher, meaning that authors who wish to duplicate their manuscript submissions, and who have the wit to avoid journals produced by the same publisher, could be continuing to get away with it.

But, as ever, there were also harmful and challenging effects. First came the unanticipated problem of ‘letters to the editor’, an area of our journal which, probably in common with many journals, we had tended to regard as a problem-free zone. Our first indication of trouble ahead came with an unusual reply from an author team whom we had invited to respond to a letter we had tentatively accepted for publication. Whilst offering to draft a reply, the authors shared their belief that the letter had likely been generated by a large language model (LLM). In support of this, they claimed that an AI detection tool had rated the likelihood at 94%. Furthermore, they noted that the authors of the letter were implausibly prolific in submitting and publishing letters, on a broad range of topics, to a wide spectrum of journals. This proved to be true: the two authors had published close to 500 letters over the last five years. Indeed, this appeared to be their only visible academic activity. This left the journal, and us as its editors, in a bind. It is not permissible in our journal to use AI within the peer review or editorial process, for fear of breaching the confidentiality of a manuscript that has not yet been accepted for publication. Thus, the authors should not have used the AI detection tool, and we could certainly not attempt to replicate their finding. Even if this had been permissible, current detection tools are unvalidated and reports suggest that their results are unreliable. Naturally, we approached the letter authors with our concerns, and predictably the claim was denied. Nonetheless, we decided not to proceed with publication.

More problems followed. Suddenly, a journal that had received few letters was receiving large numbers, some of which did not refer to articles published in the journal. Anecdotally, we hear that our experience is not unique: other specialist journals are also currently being submerged under the deluge (Emily Hodgson, personal communication). More letters followed, and checks revealed a cohort of would-be contributors whose careers seem dominated by sending uncommissioned letters to journals. Interestingly, these follow a consistent pattern: they are usually bland, inoffensive, respectful and broadly relevant to the scope of the journal – just interesting enough to be considered publishable, but not groundbreaking or controversial in a manner that would invite scrutiny. All of this contributes to a year-on-year increase in submissions and creates a demand on editorial time that is clearly disproportionate. As a consequence, we have now instituted a very high bar for publishing letters to the editor in our journal, and other journals appear to have done likewise. Letters need to substantially challenge or develop our science, and we explicitly favour correspondents with a verifiable publication history and academic experience in the methodological issue under discussion.

So, this is where we are: AI tools and their potential for benefit and harm. What is clear is that scientific publishing needs to catch up, or risk being left behind and wounded with respect to AI, and specifically LLMs. This will require identifying and rapidly deploying enterprise tools that deliver the benefits generative AI can bring, while providing a ring-fence around confidential manuscripts. In time, perhaps the idea that employing generative AI tools is a form of cheating will be consigned to history, as ridiculous as asking people to return to using a quill pen.

