Today, peer review is considered an indispensable requirement for formal publication. As a result, journals need to establish appropriate peer review policies as well as approaches to recruit peer reviewers.
Peer review aims to assess the suitability of a submission for publication. It is delivered by experts in the field of the submission, who must be identified by editorial staff. Different journals and editors have different expectations around the role of reviewers, so clear guidance is essential. Notably, authors tend to prefer rapid peer review, as this means that their work is published quickly. However, time pressure on peer reviewers may compromise the quality of their input. Journals therefore have a responsibility to balance the speed and quality of peer review, to ensure that published articles are of a high standard.
Whether a submission is deemed to be suitable for publication depends on the journal’s article selection criteria as well as on the quality and integrity of the research being described. For example, peer reviewers may be asked to focus on methodological rigour, novelty, engagement with open scholarship practices or any other criteria set by editorial staff.
Peer review can be delivered in a broad range of ways, all of which have their respective merits. It is the editorial board’s responsibility to choose a suitable approach, considering the following dimensions:
- Number of reviewers
- Interaction between reviewers
- Identification of authors and reviewers
- Publication of peer review reports (also known as transparent or open peer review)
The use of automated review tools is also possible, although journals should note that these can only detect basic issues (e.g. potential plagiarism, missing data sharing statements or reagent identifiers). By using artificial intelligence, these tools can help reduce editors’ and reviewers’ workload, although ongoing research has highlighted potential concerns around computational bias. As a result, automated review tools should not be used to make decisions on submissions automatically, but only to inform the peer review process.
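As a purely illustrative sketch of the kind of basic check such tools perform (the function name and keyword list here are assumptions for the example, not any specific tool’s behaviour), a screening script might flag submissions that appear to lack a data availability statement:

```python
import re

# Phrases that often signal a data availability statement.
# This list is an assumption for the sketch, not an exhaustive standard.
DATA_STATEMENT_PATTERNS = [
    r"data availability",
    r"data (?:are|is) available",
    r"available (?:from|at|upon request)",
    r"deposited in",
]

def screen_for_data_statement(manuscript_text: str) -> bool:
    """Return True if the text appears to contain a data availability statement."""
    text = manuscript_text.lower()
    return any(re.search(pattern, text) for pattern in DATA_STATEMENT_PATTERNS)
```

Consistent with the point above, a flag like this can only surface a candidate issue for editorial attention; the decision on the submission remains with editors and reviewers.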
Identifying reviewers is a complex challenge, as researchers are typically very busy and the number of articles that need reviewing annually grows more quickly than the pool of available reviewers. The following strategies may help in identifying peer reviewers:
- Asking authors to provide suggestions
- Checking the references in the submission, or using automated tools, to identify researchers working in similar areas
- Using personal networks, including those of editors, editorial board members, previous authors and guest editors
- Inviting previous peer reviewers
- Asking peer reviewers who decline an invitation to suggest alternative candidates
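The second strategy above (mining the submission’s reference list) can be sketched as follows. This is a hypothetical illustration, assuming APA-style references; the function name and the exclusion logic are inventions for the example:

```python
import re
from collections import Counter

def candidate_reviewers(references: list[str], exclude: set[str]) -> list[str]:
    """Count surname occurrences in APA-style reference strings,
    skipping excluded names (e.g. the submission's own authors),
    and return candidates ordered by citation frequency."""
    surnames = Counter()
    for ref in references:
        # Assumes 'Surname, I.' patterns, as found in APA-style reference lists.
        for surname in re.findall(r"([A-Z][a-zA-Z'-]+), [A-Z]\.", ref):
            if surname not in exclude:
                surnames[surname] += 1
    # Most frequently cited researchers first.
    return [name for name, _ in surnames.most_common()]
```

Frequency of citation is, of course, only a rough proxy for relevance; any such list would still need editorial judgement to screen for conflicts of interest and genuine expertise.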
Today, there are imbalances in peer review, which contribute to difficulties in finding reviewers and to the overburdening of a small pool of reviewers (Kovanis et al., 2016). In addition, diversifying reviewers in terms of gender, region or other personal characteristics can have a positive impact (Murray et al., 2018).
Currently, there is no reliable evidence that peer review significantly contributes to the overall quality of scientific literature. This lack of evidence, however, does not indicate that peer review is harmful or should be avoided, but just points to the need for more research about established and innovative peer review systems.
Important concerns about peer review include the low agreement between reviewers, the fact that reviewers are subject to biases, and that peer review can be very time-consuming.
Finally, peer reviewers can only address the quality of a manuscript – not of the underlying research. Other methods of quality assurance, such as assessing reproducibility or replicability, are being considered by some journals alongside peer review (e.g. American Economic Association), although this remains a limited practice. The requirement for a data availability statement and data sharing as part of editorial policies is a useful way to enable activities such as reproducibility or replicability checks.
The Committee on Publication Ethics (COPE) defines ‘paper mills’ as individuals, groups or organisations that aim to manipulate the publication process for financial gain. These actors pursue the fraudulent submission, peer review and publication of articles that are, in most cases, incorrect and not arising from genuine research endeavours. Identifying paper mill activity is complex, as these articles are designed to deceive all stakeholders involved in the publication process as well as readers. Often, fraudulent articles are only spotted after publication, as concerns across several articles start to form a coherent pattern. In these cases, editors should collect information, documentation and data from authors to inform next steps.
Dealing with paper mills creates a significant administrative burden. Journals are therefore advised to have clear guidance and processes in place, as well as to operate transparently and share information with other publishers (and, potentially, the author’s institution) as appropriate. Any retraction notices applied to articles identified as arising from a paper mill should be transparent and clear, too. Throughout the process of investigating paper mill activity, journals and editors should respect confidentiality, as there is a high risk of unwillingly damaging an author’s reputation even when claims or concerns may eventually be resolved or found to be unfounded.
- Song, E., Ang, L., Park, J.-Y., Jun, E.-Y., Kim, K. H., Jun, J., Park, S., & Lee, M. S. (2021). A scoping review on biomedical journal peer review guides for reviewers. PLOS ONE, 16(5), e0251440.
- Bornmann, L., Mutz, R., & Daniel, H.-D. (2010). A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants. PLoS ONE, 5(12), e14331.
- Dance, A. (2023). Stop the peer-review treadmill. I want to get off. Nature, 614, 581–583.
- Huber, J., Inoua, S., Kerschbamer, R., König-Kersting, C., Palan, S., & Smith, V. L. (2022). Nobel and novice: Author prominence affects peer review. Proceedings of the National Academy of Sciences, 119(41), e2205779119.
- Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchimani, G. (2021). Can AI be used ethically to assist peer review?. Impact of Social Sciences Blog.
- Taylor & Francis Editor Resources. (n.d.). How to find peer reviewers – an editor’s guide.
- Kovanis, M., Porcher, R., Ravaud, P., & Trinquart, L. (2016). The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise. PLOS ONE, 11(11), e0166387.
- Murray, D., Siler, K., Larivière, V., Chan, W. M., Collings, A. M., Raymond, J., & Sugimoto, C. R. (2018). Author-Reviewer Homophily in Peer Review. bioRxiv, 400515.
- Office of the American Economic Association Data Editor. (n.d.).
- COPE Council. (2019). COPE Supplemental guidance — Addressing concerns about systematic manipulation of the publication process — English.
- COPE. (n.d.). Webinar 2022: Managing paper mills. Committee on Publication Ethics.
- Aczel, B., Szaszi, B., & Holcombe, A. O. (2021). A billion-dollar donation: estimating the cost of researchers’ time spent on peer review. Research Integrity and Peer Review, 6(1).
- Amaral, O. B. (2022). To fix peer review, break it into stages. Nature, 611, 637.
- Brainard, J. (2021). The $450 question: Should journals pay peer reviewers? Science.
- Carneiro, C. F. D., Queiroz, V. G. S., Moulin, T. C., Carvalho, C. A. M., Haas, C. B., Rayêe, D., Henshall, D. E., De-Souza, E. A., Amorim, F. E., Boos, F. Z., Guercio, G. D., Costa, I. R., Hajdu, K. L., van Egmond, L., Modrák, M., Tan, P. B., Abdill, R. J., Burgess, S. J., Guerra, S. F. S., … Amaral, O. B. (2020). Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Research Integrity and Peer Review, 5(1).
- Chubb, J., Cowling, P., & Reed, D. (2021). Speeding up to keep up: exploring the use of AI in the research process. AI & SOCIETY, 37(4), 1439–1457.
- Eisen, M. B., Akhmanova, A., Behrens, T. E., Diedrichsen, J., Harper, D. M., Iordanova, M. D., Weigel, D., & Zaidi, M. (2022). Peer review without gatekeeping. eLife, 11.
- Royal Society of Chemistry. (n.d.). Joint commitment for action on inclusion and diversity in publishing.
- Singh Chawla, D. (2022). Should AI have a role in assessing research quality? Nature.