Depending on a journal’s subject area, readers and authors may expect to find certain publication metrics published directly on journal websites. However, authors and readers often do not fully understand the limitations and exact meaning of these metrics, so journals should consider carefully which ones they display. In this section, we provide an overview of article- and journal-level metrics.
Article-level metrics
Article-level metrics are citation metrics that attempt to quantify how an article is being discussed, shared, referenced and used. Metrics that are often included on journal websites or available to publishers include the following:
- Citation counts: Counting the number of citations that an individual article has received is a traditional and easy-to-understand metric that some publishers may choose to display. For example, Crossref members can obtain and display this information by using the Cited-by API (a minimal retrieval sketch follows this list). Scopus, Dimensions and Web of Science can also provide this information, but they are paid-for solutions (although Dimensions makes article-level metrics viewable in its freemium version). Although this information is also available via Google Scholar, no open API is available to support easy integration into journal websites.
- Page views and downloads: The number of times an article’s page has been visited and the number of times the PDF or XML version of the article has been downloaded provide some insight into the scholarly visibility of the article. Both counts are strongly affected by how well the journal is indexed.
- Altmetrics: Altmetrics add a measure of “social visibility”, including shares and likes on various social media platforms, mentions in blogs and other platforms, and news articles and press releases about a published article. Examples include Altmetric, Plum Analytics and Crossref’s Event Data.
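To make the citation-count example concrete, the sketch below queries the public Crossref REST API, whose works records include an `is-referenced-by-count` field. This is a minimal sketch assuming only that public endpoint; it is not the member-only Cited-by service (which returns the citing articles themselves and requires authentication), and the DOI shown is a placeholder.

```python
# Minimal sketch: citation count for a DOI from the public Crossref REST API.
# Not the member-only Cited-by service, which returns the citing articles
# themselves and requires authentication.
import json
import urllib.request


def crossref_citation_count(doi: str) -> int:
    """Return the 'is-referenced-by-count' value Crossref holds for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url, timeout=30) as response:
        record = json.load(response)
    return record["message"]["is-referenced-by-count"]


if __name__ == "__main__":
    # Placeholder DOI for illustration; substitute an article from your own journal.
    print(crossref_citation_count("10.1371/journal.pone.0000308"))
```

Counts obtained this way reflect only citations registered with Crossref, so they will usually differ from the figures reported by Scopus, Web of Science or Google Scholar.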
Journal-level metrics
Journal-level metrics focus on the whole journal rather than on specific published articles. The most common options for journal-level metrics are outlined in the table below, in the row titled ‘Metrics available’. Notably, several data providers allow the same journal-level metric to be calculated, so the resulting values are likely to differ depending on the source.
| | Web of Science | Scopus | Google Scholar Metrics | Lens.org | Dimensions |
|---|---|---|---|---|---|
| Title curation | High | High | None | Medium | Medium |
| Source of data | Self-curated | Self-curated | Self-curated | Crossref, PMC, CORE, OpenAlex | Self-curated, Crossref, PMC, OpenCitations |
| Metrics available | In Journal Citation Reports: JIF, 5-year JIF, quartile ranking, Eigenfactor, JCI; other: h-index, citations/period | In SCImago Journal & Country Rank: SNIP, SJR, h5-index, quartile ranking; other: CiteScore, citations/period | h5-index | Citations/period | Citations/period |
| Interface access | Paywalled | Paywalled | Open | Open | Mostly paywalled |
| Metric access | Paywalled | Open | Open | Open | Open |
| Business strategy | Commercial | Commercial | Commercial | Non-profit social enterprise | Commercial |
| Access costs | High | High | Free | Free | High |
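To make concrete what one of these metrics measures, the sketch below computes an h-index from a list of per-article citation counts; applied to articles published in the last five complete calendar years, the result corresponds to the h5-index reported by Google Scholar Metrics. It is an illustrative calculation with invented numbers, not any provider’s implementation.

```python
# Illustrative only: an h-index calculated from per-article citation counts.
# With counts for articles from the last five complete years, this corresponds
# to the h5-index; it is not any provider's actual implementation.
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h articles each have at least h citations."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h


print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # -> 5
```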
Other ways to compare academic journals
Although not necessarily intended as a metric, journals can be compared on their openness and transparency using the Transparency and Openness Promotion (TOP) Guidelines. Furthermore, journal policies, procedures and practices can be summarised in a metric-like form, the TOP Factor.
Other journal-level information that is of interest to readers and authors, and that is often compared across journals, includes the following:
- Publication time and the time taken for various editorial steps (e.g., time to decision, peer review, acceptance)
- Number of submissions and publications
- Rate of desk rejection, overall rate of rejection
- Median number of reviews
- Price information
In these areas, however, there are no standardised metrics, and comparisons tend to be ad hoc and manual; a simple example of one such calculation is sketched below.
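As an illustration of the kind of ad-hoc calculation this usually involves, the sketch below computes a median time to first decision from a few invented submission records; the field names and dates are assumptions and would need to be adapted to the export format of a journal’s editorial management system.

```python
# Hypothetical sketch: median time to first decision from exported submission
# records. Field names ("submitted", "decided") and dates are invented; adapt
# them to your editorial management system's export format.
from datetime import date
from statistics import median

records = [
    {"submitted": date(2023, 1, 10), "decided": date(2023, 2, 21)},
    {"submitted": date(2023, 2, 3),  "decided": date(2023, 3, 1)},
    {"submitted": date(2023, 2, 28), "decided": date(2023, 5, 12)},
]

days_to_decision = [(r["decided"] - r["submitted"]).days for r in records]
print(f"Median time to first decision: {median(days_to_decision)} days")
```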
Limitations of publication metrics
The use of publication metrics has received increasing criticism (see Further reading), and the relationship between such metrics and the quality of the published content remains the subject of ongoing debate.
The San Francisco Declaration on Research Assessment (DORA) makes five recommendations for publishers, including that they should “greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics” and “make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published”. The Leiden Manifesto for research metrics encourages the development of “open, transparent and simple” data collection. Most recently, the Coalition for Advancing Research Assessment (CoARA) published the Agreement on Reforming Research Assessment, which recommends that journal- and publication-based metrics no longer be used in research assessment.
Further reading
- Crossref. (n.d.). Cited-by.
- Altmetric. (n.d.). Altmetric.
- Plum Analytics. (n.d.). Plum Analytics.
- Crossref. (n.d.). Event Data.
- OSF. (2023, March 02). Transparency and Openness Promotion (TOP): Guidelines.
- Mayo-Wilson, E., Grant, S., Supplee, L., Kianersi, S., Amin, A., DeHaven, A., & Mellor, D. (2021). Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices. Research Integrity and Peer Review, 6(1).
- San Francisco Declaration on Research Assessment. (n.d.). The declaration.
- Leiden Manifesto. (2015, April 23). Leiden Manifesto for research metrics.
- Coalition for Advancing Research Assessment (CoARA). (2022). Agreement on Reforming Research Assessment.
- Brembs, B. (2018). Prestigious Science Journals Struggle to Reach Even Average Reliability. Frontiers in Human Neuroscience, 12:37.
- Herb, U. (2016). Impactmessung, Transparenz & Open Science. Young information scientist, 1, 59–72.
- Larivière, V., & Sugimoto, C. R. (2019). The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects. In: Glänzel, W., Moed, H. F., Schmoch, U., Thelwall, M. (eds). Springer Handbook of Science and Technology Indicators. Springer Handbooks. Springer, Cham. (Green OA version)
- McKiernan, E. C., Schimanski, L. A., Muñoz Nieves, C., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. Elife, 8, e47338.
- Mech, E., Ahmed, M. M., Tamale, E., Holek, M., Li, G., & Thabane, L. (2020). Evaluating Journal Impact Factor: a systematic survey of the pros and cons, and overview of alternative measures. Journal of Venomous Animals and Toxins Including Tropical Diseases, 26, e20190082.
- Paulus, F. M., Cruz, N., & Krach, S. (2018). The Impact Factor Fallacy. Front. Psychol. 9:1487.
- SPARC open. (n.d.). Article level metrics.
- Triggle, C. R., MacDonald, R., Triggle, D. J., & Grierson, D. (2022). Requiem for impact factors and high publication charges. Accountability in Research, 29:3, 133-164.
This work is licensed under a Creative Commons Attribution 4.0 International License