Guest Post by Martijn Roelandse, Park 56

In 2017, I had the pleasure of attending the STM Tech Trends workshop, which invites around 30 colleagues from across the publishing world to come together and predict the future of scholarly publishing. That year, we tried to imagine the challenges we would face in 2022. Some of the questions in the area of Research Integrity were "Can AI help to find the flaws in science" and "Detect fraud and error", while under Smart Services we found "AI for Peer Review" and "Computer generated hypothesis". Despite not having a crystal ball, we picked a good year to predict, and now is a good time to see how those predictions have fared.

On November 30, 2022, OpenAI introduced ChatGPT to the world, and it caused quite a stir. We were suddenly confronted with questions like, "What are the ethical challenges of using ChatGPT in medical publishing?", "Can AI help with scientific writing?" and "Should we ban listing ChatGPT as a co-author on papers?"

The best starting point is to ask where we currently see AI in scholarly publishing. AI technologies already play a role, or have the potential to, at virtually every stage of the research cycle.

Research

Tools for research and analysis, such as the AI Search Engine for Scientific Experimentation by Bioz, contribute to enhanced exploration and understanding. Platforms like TetraScience facilitate easier and faster access to centralized cloud-based data, optimizing scientific data workflows. Tools like BIT.AI increase data integrity and traceability by automating manual processes. Systems like Iris.ai offer features such as smart search and filtering, reading-list analysis, and autonomous extraction and systematization of data. Finally, resources like Resolute.AI enable researchers to simultaneously search aggregated scientific, regulatory, and business databases.
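
To make "smart search" a little more concrete, here is a deliberately tiny sketch in Python: ranking a handful of abstracts against a query using TF-IDF and cosine similarity from scikit-learn. The platforms above rely on far richer models and data; this only illustrates the basic mechanism, and the abstracts and query are made up for the example.

```python
# Toy "smart search": rank abstracts by similarity to a query.
# Illustration only, not how Iris.ai or Resolute.AI work internally.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning models for protein structure prediction.",
    "A randomized trial of statins in chronic kidney disease.",
    "Transformer architectures for mining the scientific literature.",
]
query = "machine learning for literature mining"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)   # one row per abstract
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```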

Article Writing and Submission

Most of us are by now aware of the advantages AI offers in writing short texts in most styles, or in improving existing texts of any length. Resources like Writefull, Trinka, Paperpal and Scholarcy can aid in flagging missing citations, providing language feedback, generating titles and paraphrasing sentences.
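
As a rough sketch of the kind of language feedback such tools automate (and emphatically not how Writefull or Trinka work internally), the snippet below sends a flawed paragraph to a general-purpose model through the OpenAI Python client. The model name and the prompt are assumptions for illustration, and an OPENAI_API_KEY must be set in the environment.

```python
# Sketch of AI-assisted language feedback via the OpenAI Python client
# (openai>=1.0). Illustration only; the tools named above run their own
# proprietary pipelines.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = (
    "The datas was analysed with a t-test, which show a significant "
    "difference among the two groups."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat model would do
    messages=[
        {"role": "system",
         "content": "You are an academic copy editor. List the grammar "
                    "and style problems in the user's paragraph, then "
                    "provide a corrected version."},
        {"role": "user", "content": paragraph},
    ],
)
print(response.choices[0].message.content)
```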

Manuscript Screening

When a manuscript is prepared for submission, AI can screen it and help ensure the quality of the scholarly work. Penelope.ai, for example, is designed to screen manuscripts upon submission as an initial layer of evaluation. Supporting the Peer Review process, SciScore offers method checking to assess how research methodologies were applied. To uphold the integrity of scientific images, tools like ImageTwin and Proofig perform thorough image checks. Additionally, Statcheck helps verify the proper use of statistics in manuscripts.
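
To give a feel for what a statistics check involves, here is a minimal sketch of the recomputation at the core of Statcheck: take a reported t statistic and its degrees of freedom, recompute the two-sided p-value, and compare it against the p-value stated in the manuscript. Statcheck itself is an R package with a full text-extraction pipeline; the tolerance below is an assumption chosen purely for illustration.

```python
# Minimal Statcheck-style consistency check for a reported t-test.
# Illustration only; the real Statcheck also parses statistics out of
# manuscript text and handles many more test types.
from scipy import stats

def check_t_test(t: float, df: int, reported_p: float,
                 tol: float = 0.005) -> bool:
    """Return True if the reported p-value matches the recomputed one."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value
    consistent = abs(recomputed_p - reported_p) <= tol  # assumed tolerance
    print(f"t({df}) = {t}: reported p = {reported_p}, "
          f"recomputed p = {recomputed_p:.4f} -> "
          f"{'OK' if consistent else 'INCONSISTENT'}")
    return consistent

# "t(28) = 2.20, p = .04" is consistent; "p = .01" would be flagged.
check_t_test(t=2.20, df=28, reported_p=0.04)
check_t_test(t=2.20, df=28, reported_p=0.01)
```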

Peer Review

During Peer Review itself, several resources can help ensure academic integrity. Turnitin, a renowned plagiarism detector, and Copyleaks aid in maintaining the originality of submissions. At a larger scale, tools like the STM Integrity Hub and Clear Skies specialize in detecting paper-mill activity. Scite.ai serves as a valuable tool for identifying retracted references and highlighting editorial concerns such as corrections. Related to these tools, globalcampus.ai provides Peer Reviewer recommendations, facilitating the selection of qualified individuals for rigorous evaluation of manuscripts.
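
As an illustration of how retracted references can be flagged in principle (Scite.ai and the STM Integrity Hub use their own data sources and methods), the sketch below asks the public Crossref REST API for records that declare themselves updates, such as retractions or corrections, to a given DOI, assuming Crossref's `updates:` filter. The example DOI is the retracted 1998 Lancet article; in practice you would loop over a manuscript's full reference list.

```python
# Sketch: flag references that have retraction or correction notices
# recorded in Crossref. Illustration only; commercial integrity tools
# draw on richer sources (e.g. the Retraction Watch database).
import requests

def find_updates(doi: str) -> list:
    """Return Crossref records that declare themselves updates to `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Example: a famously retracted 1998 Lancet article.
for ref_doi in ["10.1016/S0140-6736(97)11096-0"]:
    for item in find_updates(ref_doi):
        for update in item.get("update-to", []):
            if update.get("DOI", "").lower() == ref_doi.lower():
                print(f"{ref_doi}: {update.get('type')} "
                      f"notice at {item.get('DOI')}")
```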

The general aims of using AI technologies in the scientific writing process ultimately come down to four points:

  • Help authors write their papers and reduce time to submission
  • Automate the screening process
  • Decrease manual checks
  • Streamline Peer Review by aiding peer reviewers with helpful information

The Ongoing Dynamics in the World of AI

The launch of ChatGPT clearly stirred the waters of scholarly publishing, and the dust hasn’t settled yet. While ChatGPT may well be used in some shape or form in writing, Turnitin has already caught up and will soon be able to detect its use in text. Other tools, such as Writefull, Scholarcy and Quillbot, generate less text than ChatGPT but offer a similar service, and their output isn’t detected (yet).

Will there be a red line determining how far you can go in using AI for writing? At the end of the day, we should be asking ourselves whether AI has a negative impact on the content, the writers and the readers. Consider that past research has shown that articles written in correct English attract more citations and downloads, which makes AI an obvious solution for researchers whose English needs improvement. Also consider that in Pharma, ghostwriters are often hired to produce high-quality text, a role bearing a striking resemblance to the one ChatGPT has played and continues to play.

Time will tell how our relationship with AI in the publishing world develops. It is crucial that we remain cautious in our application of these tools, but also stay open to their potential to transform the landscape of scientific information as we know it.

Comments

Catherine Richards 20.12.2023 at 17:09

Very interesting – nice to see considerations of the specific task and AI other than ChatGPT. There does seem to be a firmly-held belief that the quality of the text produced by AI can be as good as that produced by a good medical writer. I’m not sure this is the case. Where the text is formulaic, it is highly likely that AI can do what a medical writer can do. But where summaries of papers (PLS) are involved, many AI platforms currently struggle with lexical appropriacy as well as with the task of accurately summarising content: detail can be lacking or key aspects missing altogether. I also sometimes see text that at first glance seems OK: all the right words are there, the grammar is accurate. But after you read it, you realise you haven’t understood a thing. I suspect that’s related to aspects of the language we call formulaic language, multiword units, collocation etc. I expect there are already computational linguists and applied linguists publishing papers on the same.