Guest post by Krishna K. Chinnaiah, Molecular Connections

Introduction

The publication of research is a long-established tradition. For many years, scholarly publishing remained the activity of small scholarly societies, each publishing one or a small number of journals. However, the exponential growth in the number of scholarly articles and journals over the last fifty years has dramatically changed the academic research and publication workflow. During that period, publications have steadily become concentrated among a smaller number of publishers, so the typical publisher now handles many more journals than before. Clearly, the workflow for 50 journals needs to be very different from the manual processes that sufficed when a publisher had only a handful of titles. Today, there is a new, and welcome, emphasis across the industry on increasing throughput and reducing publication time, yet for many years the editorial and production processes around submissions remained virtually unchanged. Publishers' initial response to the increased number of submissions was to hire more staff to manage the growth in publications, but such a response is not scalable.

This article explores how the scholarly publishing workflow can be scaled to new levels by editorial services that combine human expertise and machine learning-based checks.

Challenges Faced by Journal Publishers

The increasing volume of submissions demands a more efficient workflow, for many reasons. First and foremost, researchers seek prompt feedback and rapid publication; they are no longer satisfied with a wait of several months between submission and publication. Additionally, maintaining the reputation of both publishers and authors is crucial, as even minor errors in the editorial process can lead to retractions or resubmissions (and hence further delays).

The growth in scholarly publishing has had two main consequences. The first is that, with the advent of digital publications, minor variations in metadata and formatting are no longer tolerated: digital publications require highly standardised metadata and formatting for bulk processing. Once the formatting is standard, many editorial checks can be carried out far more rapidly than by hand. For example, counting the words in the article, the abstract, or the title takes a fraction of a second. Each of these checks could be done manually, but much more slowly, and running several manual checks on the same document is particularly inefficient, because each check requires a separate pass through the text. The traditional manual process, in other words, does not scale: carried out at volume, it simply leads to bottlenecks and delays.
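
To make this concrete, here is a minimal sketch in Python of how several word-count checks can run in a single automated pass over a manuscript. The section names and word limits are hypothetical, chosen purely for illustration, and do not reflect any particular publisher's rules or tooling.

```python
# Minimal sketch: several word-count checks in one automated pass.
# The limits below are hypothetical examples, not real journal rules.

WORD_LIMITS = {"title": 20, "abstract": 250, "body": 8000}

def word_count_checks(manuscript: dict) -> list[str]:
    """Return a human-readable flag for every section that exceeds its limit."""
    flags = []
    for section, limit in WORD_LIMITS.items():
        count = len(manuscript.get(section, "").split())
        if count > limit:
            flags.append(f"{section}: {count} words (limit {limit})")
    return flags

# Example usage with a toy manuscript:
manuscript = {
    "title": "A short example title",
    "abstract": "An abstract " * 150,   # deliberately over the 250-word limit
    "body": "Body text here.",
}
for flag in word_count_checks(manuscript):
    print("FLAG:", flag)
```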

A great benefit from applying coding and formatting more consistently is that the resulting published articles become more discoverable. With the growth in publications, researchers no longer have the time to discover relevant research manually. They need to be able to find relevant content from published articles quickly and efficiently. This can only happen when the articles are well-coded, using standard tagging. Machine-based checks can ensure this is the case.

The second consequence of the growth in publications has been an increasing awareness of the need to run additional checks on every submitted manuscript before it is released for publication. Some of these checks are quite sophisticated and would take many minutes to carry out by hand. Far more attention is given today to checks for potential plagiarism, to confirming that authors have declared any potential conflicts of interest, and to ensuring that all references are actually cited in the manuscript. It is possible to check, for example, that all authors have included their ORCID iD, which makes it possible to disambiguate authors with identical names.
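
As an illustration of how the ORCID check might look, the sketch below verifies only that each author has supplied an iD in the standard format; the author-record structure is an assumption made for the example, not a real submission-system schema, and a complete check would also confirm each iD against the public ORCID registry.

```python
import re

# Format-only ORCID iD check (illustrative; does not query the ORCID registry).
ORCID_PATTERN = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

def check_orcids(authors: list[dict]) -> list[str]:
    """Flag authors with a missing or malformed ORCID iD."""
    flags = []
    for author in authors:
        orcid = (author.get("orcid") or "").strip()
        if not orcid:
            flags.append(f"{author['name']}: no ORCID iD supplied")
        elif not ORCID_PATTERN.match(orcid):
            flags.append(f"{author['name']}: malformed ORCID iD '{orcid}'")
    return flags

# Example usage with hypothetical author records:
authors = [
    {"name": "A. Researcher", "orcid": "0000-0002-1825-0097"},
    {"name": "B. Researcher", "orcid": ""},
]
print(check_orcids(authors))   # -> ["B. Researcher: no ORCID iD supplied"]
```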

As the number of publications increased, it became increasingly clear that simply adding more human resources would not solve the problem. A different solution had to be found.

Implementing a Solution

The remarkable rise in the capabilities and speed of machine-based checks means that it has become increasingly possible to use a machine to assist the workflow. An analysis of the existing human workflow reveals a surprising number of checks: some publishers carry out more than 75 separate checks on each manuscript as part of the article submission process. At this point, some careful judgement is required. It is not simply a matter of abandoning all human input and leaving all the work to the machine.

While machine-based checks are fast and very efficient, they are not necessarily perfect. The key to improving the publication workflow is to determine what requires human judgement. Counting the words in an article is straightforward and uncontentious; checking that an article reporting a medical trial contains the necessary ethics statement is more challenging. Checking that every reference at the end of the article is cited in the body of the text is simple, but highly time-consuming by hand. For some checks, AI tools can be deployed: for example, to determine the subject area of an article (it might be about Paediatrics without using the term at any point in the article).
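
The reference check is a good example of a task that is trivial for a machine yet tedious for a person. The sketch below assumes simple numeric citations of the form [1] or [1, 3] in plain text; a production tool would work on tagged XML, so this is illustrative only.

```python
import re

def uncited_references(body: str, num_references: int) -> list[int]:
    """Return reference numbers that never appear as [n] citations in the body."""
    cited = set()
    for match in re.finditer(r"\[(\d+(?:\s*,\s*\d+)*)\]", body):
        for part in re.split(r"\s*,\s*", match.group(1)):
            cited.add(int(part))
    return [n for n in range(1, num_references + 1) if n not in cited]

# Example usage: reference 2 is listed but never cited in the text.
body = "Earlier work [1] and a later study [3] are discussed."
print(uncited_references(body, num_references=3))   # -> [2]
```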

At this point, it made sense to bring in external guidance to determine how best to deploy the machine-based tools, where human decision-making was still needed and, in many cases, how to use the most effective combination of the two.

Molecular Connections has two decades of experience working with publishers to provide comprehensive editorial services and was an early adopter of machine-based checks. By combining autonomous AI/ML and semi-automated proprietary tools with human expertise, publishers are able to streamline the production cycle. Technical checks, such as plagiarism detection, article verification, and paper-mill identification, can be performed efficiently using proprietary AI/ML-assisted tools. For tasks requiring judgement and domain knowledge, the system routes the manuscript to subject-matter experts.
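
The general shape of such a combined workflow can be sketched as follows. This is not Molecular Connections' proprietary system; the check names, thresholds, and routing rule are invented solely to show how automated flags and a human-review queue can coexist in one pipeline.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    name: str
    passed: bool
    needs_human: bool = False   # route to a subject-matter expert if True

def run_checks(manuscript: dict, checks: list) -> tuple[list[str], list[str]]:
    """Run every check; collect automatic flags and items needing human review."""
    auto_flags, human_queue = [], []
    for check in checks:
        result = check(manuscript)
        if result.needs_human:
            human_queue.append(result.name)
        elif not result.passed:
            auto_flags.append(result.name)
    return auto_flags, human_queue

# Two toy checks: one fully automatic, one that defers to a human.
def word_count_ok(ms: dict) -> CheckResult:
    return CheckResult("abstract word count", len(ms["abstract"].split()) <= 250)

def ethics_statement(ms: dict) -> CheckResult:
    # If a trial is mentioned, a person should confirm the ethics approval.
    return CheckResult("ethics approval", True,
                       needs_human="trial" in ms["body"].lower())

flags, queue = run_checks(
    {"abstract": "Short abstract.", "body": "We report a randomised trial."},
    [word_count_ok, ethics_statement],
)
print("Automatic flags:", flags)    # []
print("For human review:", queue)   # ['ethics approval']
```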

The result, paradoxically, is that the submission process can be made faster while articles are checked and validated to a much higher standard than before. An article published today undergoes more extensive and detailed editorial checks than ever before, and yet the time to publication has been reduced and continues to contract. Authors benefit on both counts: the version of record of their article appears more quickly, and, thanks to all the checks carried out, they can be confident that it is less likely to contain errors.

Benefits of a combined machine and human process

  • Scalability: Once set up, the process can run equally well for one journal or for a thousand journals.
  • Tailored Solutions: the machine-based tools can be applied in a modular fashion, enabling the publisher to determine, for example, whether submissions need a check for the language competency of authors, or whether the editorial submission checks should be integrated with systems for monitoring journal inboxes. An effective system comprises a selection of relevant tools, so each publisher might have a different configuration, yet the overall process remains efficient.
  • Deploying Human Skills at Scale: With experience of deploying these tools across many different types of publisher and scales of operation, Molecular Connections is able to ensure the appropriate level of human evaluation and expertise for each publisher's requirements.
  • Metrics and Analytics: It is a commonplace that business process changes are only worth carrying out if they can be measured. Any efficient editorial process requires relevant metrics and analytics to monitor the performance of every aspect of the editorial operation, allowing publishers to make data-driven decisions.

Conclusion

Any scholarly publisher today, large or small, can use AI tools in the editorial workflow to address the challenges of scale and digital delivery, combining automated checks with human expertise. By streamlining editorial processes, publishers can achieve faster turnaround times, maintain and even enhance their reputation, and improve overall efficiency. Using combined machine and human editorial tools, publishers can bring measurable, significant improvements to the process of running a journal.
