AI

Scholarly publishing has undoubtedly become one of the most profitable sectors of publishing since the first scientific journals were established in 1665 (Philosophical Transactions, n.d.). The mechanics of scholarly publication, however, are still flawed. As Cameron Neylon outlined in his article, “PolEcon of OA Publishing I: What is it publishers do anyway?”, one of the key criticisms of the publication process is the lack of understanding, among both publishers and academics, of what it entails. Managing the peer review process is the one area where many agree publishers bring value (Neylon, 2015). Peer review has been an integral part of scholarly publishing since the early days of Philosophical Transactions at the Royal Society and “has evolved over hundreds of years” (“Publons’ Global State Of Peer Review 2018”, 2018). Peer review is the process by which other academics, typically those with expertise in the same field, judge the quality of a manuscript. In the traditional and simplest model, an author submits a manuscript to a journal; an editor screens it and sends it out to expert reviewers, who provide feedback on whether the article is acceptable for publication. Depending on that feedback, the article is either rejected, returned for revision and resubmission, re-evaluated, or accepted. The review model varies by journal and editorial staff and includes the following types (“What is Peer Review?”, n.d.):

  • Single-blind review – the author does not know the reviewer’s identity; considered the traditional and most common type of review
  • Double-blind review – both the reviewer and the author are anonymous
  • Open review – the reviewer and the author are known to each other throughout the peer review process

While working as a Manuscript Manager for the Journal of Applied Clinical Medical Physics in 2015–2016, I saw first-hand how lengthy this process can be for some articles. The review process is one of the stages of publication that both authors and editors find cumbersome. Authors may face a string of rejections before they can even find a journal willing to review their article. When they do, they might wait weeks for a decision on their manuscript. They may also get stuck in a cycle of being asked to revise and/or provide additional material for resubmission. This process can certainly be time-consuming and can easily “eat up months of their lives, interfere with job, grant and tenure applications and slow down the dissemination of results” (Powell, 2016). Editors, on the other hand, face the time-consuming task of finding the right reviewers and getting a response (Mrowinski, Fronczak, Fronczak, Ausloos, & Nedic, 2017).

Across scholarly publishing, journals have continued to receive more submissions over the years. PLOS ONE’s annual submissions grew from 200 in 2006 to 30,000 in 2016, and larger publishers face the same trend. The increased number of submitted manuscripts has resulted in longer review times across journals: within a decade, the median review time at Nature grew from 85 to 150 days, and at PLOS ONE from 37 to 125 days (Powell, 2016). Interestingly, journals with very high and very low impact factors have longer review times than those in the middle. Despite the increased review times, “researchers continue to regard peer review as a critical tool for ensuring the quality and integrity of the literature” (“Publons’ Global State Of Peer Review 2018”, 2018).

One contributing factor to publication delays is “reviewer fatigue” (ibid., 2018). Scholarly publishing functions on a model of “free” work on the part of both authors and reviewers: many reviewers take on reviews as part of their academic or practical responsibilities as researchers or medical professionals, on top of their other work (ibid., 2018). The 2018 Global State of Peer Review published by Clarivate found that “10% of reviewers are responsible for 50% of peer reviews” (ibid., 2018; Vesper, 2018). A disproportionate number of reviewers also come from established countries, with the United States leading in number of reviews, even though reviewers from emerging countries have been found to be more willing to accept review invitations. This may be partly a result of editors turning to reviewers they already know within their own communities, often unaware of what reviewers elsewhere in the world could contribute.

Recently, artificial intelligence software designed to reduce the strain on the peer review process has been adopted by some publishers. It comes as no surprise that some of the biggest scholarly publishing companies are the first to adopt it: Elsevier with EVISE (“Can Artificial Intelligence Fix Peer Review?”, 2018), ScholarOne with UNSILO (Heaven, 2018), and Springer Nature with StatReviewer (Stockton, 2017). Elsevier created EVISE to replace its outdated editorial system; the program’s features include (“Can Artificial Intelligence Fix Peer Review?”, 2018):

  • Suggests reviewers based on content;
  • Communicates with other programs to check things such as a reviewer’s profile, scientific performance, and conflicts of interest;
  • Automatically prepares correspondence among the parties involved.

Meanwhile, StatReviewer is a program that checks that submitted manuscripts contain complete and accurate statistical data (“Can Artificial Intelligence Fix Peer Review?”, 2018). UNSILO, for its part, can extract the key concepts from a manuscript and provide a summary (Heaven, 2018).
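The kind of consistency check a StatReviewer-like tool might run can be sketched in a few lines. This is a toy example of the general idea, not StatReviewer’s actual logic: it scans for statements of the form “n of N (x%)” and flags those where the stated percentage does not match the counts.

```python
import re

def check_percentages(text, tolerance=0.5):
    """Flag statements like '12 of 48 (30%)' where the percentage
    does not match the stated counts (a toy consistency check)."""
    issues = []
    for m in re.finditer(r"(\d+)\s+of\s+(\d+)\s+\((\d+(?:\.\d+)?)%\)", text):
        n, total, pct = int(m.group(1)), int(m.group(2)), float(m.group(3))
        actual = 100 * n / total
        if abs(actual - pct) > tolerance:
            issues.append(f"'{m.group(0)}': stated {pct}%, actual {actual:.1f}%")
    return issues

print(check_percentages("Adverse events occurred in 12 of 48 (25%) patients "
                        "and in 10 of 40 (30%) controls."))
```

Here the first statement passes (12/48 is indeed 25%) and the second is flagged, since 10/40 is 25%, not 30%.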

In 2017, PLOS ONE published an article entitled “Artificial intelligence in peer review: How can evolutionary computation support journal editors?”, in which researchers used Cartesian Genetic Programming to evolve editorial strategies that tell the “editor how many new review threads should be started at any given time” (Mrowinski et al., 2017). They found that, when effective, the evolved strategies shortened review time by 17 days compared with the strategies editors had employed previously. While the findings of this particular study cannot be applied in the same way to every journal, they demonstrate the potential of AI programs to improve the review process.
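The study’s actual Cartesian Genetic Programming setup is far more elaborate, but the underlying evolutionary idea, mutate a candidate editorial strategy and keep it only if it performs better in simulation, can be sketched simply. Everything below (the accept rate, the per-invitation cost, the one-parameter “strategy”) is an invented toy, not the authors’ model.

```python
import random

random.seed(42)

def weeks_to_secure(invites_per_week, accept_rate=0.3, needed=2, cap=20):
    """Toy simulation: each week send `invites_per_week` invitations;
    each invitee accepts with probability `accept_rate`. Return the
    number of weeks until `needed` reviewers are secured (capped)."""
    secured, week = 0, 0
    while secured < needed and week < cap:
        week += 1
        secured += sum(random.random() < accept_rate
                       for _ in range(invites_per_week))
    return week

def fitness(invites_per_week, trials=300):
    """Average completion time plus a small cost per invitation,
    so blasting every reviewer at once is penalised."""
    avg_weeks = sum(weeks_to_secure(invites_per_week)
                    for _ in range(trials)) / trials
    return avg_weeks + 0.5 * invites_per_week

# (1+1) evolution strategy: mutate the single strategy parameter
# and keep the mutant only if it scores better in simulation.
best = 1
for _ in range(40):
    candidate = max(1, best + random.choice([-1, 1]))
    if fitness(candidate) < fitness(best):
        best = candidate

print("evolved invitations per week:", best)
```

The evolved parameter settles at a middle ground: too few invitations per week drags out the search for reviewers, while too many incurs the invitation cost for little gain.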

AI reviewer programs have the potential to save time by performing time-consuming tasks such as finding, inviting, and uninviting reviewers, sending reminders and emails, and running quality checks. They could also help reduce reviewer fatigue, since they can draw on a far broader pool of candidates than the networks traditionally used by an editorial team. These programs do not aim to make decisions for editors but rather to perform an analysis highlighting the ideas that stand out against what has already been published, so that the editor can decide (Heaven, 2018). Some believe that removing the human element through AI could help ease the tension between authors, reviewers, and publishers (“Can Artificial Intelligence Fix Peer Review?”, 2018). However, AI in peer review faces many criticisms. Because AI is a “machine-learning tool”, it could reinforce the biases already ingrained in a journal’s past decisions. Additionally, because StatReviewer provides an overall score for a manuscript, editors may use that score alone to reject a paper (Heaven, 2018). In the extreme, some fear that AI could take over the entire review process and make humans obsolete within it.

As AI in the review process is still a fairly new concept, it is hard to determine the full range of capabilities it will have. Given how deeply peer review is embedded in scholarly publishing, it is hard to believe that humans would ever be absent from the process. AI does, however, have the potential to spare editorial staff the most time-consuming tasks. This would be extremely beneficial for smaller publishers, who have limited resources to perform all the tasks associated with the publication process, although it may be a long time before the technology even becomes available to them, given that it is the giant publishers with the resources who are now starting to implement it.

Bibliography

“Can Artificial Intelligence Fix Peer Review?” Enago Academy. Last modified May 23, 2018. https://www.enago.com/academy/can-artificial-intelligence-fix-peer-review/.

Heaven, Douglas. “AI peer reviewers unleashed to ease publishing grind.” Nature 563, no. 7733 (2018), 609-610. doi:10.1038/d41586-018-07245-9.

Mrowinski, Maciej J., Piotr Fronczak, Agata Fronczak, Marcel Ausloos, and Olgica Nedic. “Artificial intelligence in peer review: How can evolutionary computation support journal editors?” PLOS ONE 12, no. 9 (September 2017), e0184711. doi:10.1371/journal.pone.0184711.

Neylon, Cameron. “PolEcon of OA Publishing I: What Is It Publishers Do Anyway?” 2015. http://cameronneylon.net/blog/polecon-of-oa-publishing-i-what-is-it-publishers-do-anyway/.

Philosophical Transactions of the Royal Society of London. Accessed December 2, 2018. http://rstl.royalsocietypublishing.org.

Powell, Kendall. “Does it take too long to publish research?” Nature 530, no. 7589 (2016), 148-158. doi:10.1038/530148a.

“Publons’ Global State Of Peer Review 2018.” 2018. doi:10.14322/publons.gspr2018.

Stockton, Nick. “If AI Can Fix Peer Review in Science, AI Can Do Anything.” WIRED. Last modified February 21, 2017. https://www.wired.com/2017/02/ai-can-solve-peer-review-ai-can-solve-anything/.

Vesper, Inga. “Peer reviewers unmasked: largest global survey reveals trends.” Nature (2018). doi:10.1038/d41586-018-06602-y.

“What is Peer Review?” Elsevier. Accessed December 5, 2018. https://www.elsevier.com/reviewers/what-is-peer-review.