Journal and article-level metrics

Topic lead: Wendy Patterson, Andy Byers
Last updated: 28/09/2023

The aim of this section is to provide an overview of popular article- and journal-level metrics and to advise on choosing which may or may not be relevant for your open access journal. Here, the various types of metrics are summarised and cautions for their responsible use are given.

Depending on a journal’s subject area, readers and authors may expect to find certain publication metrics published directly on journal websites. However, the limitations and exact meaning of these metrics are often not fully understood by authors and readers, so journals should consider carefully which metrics they display.

Article-level metrics

Article-level metrics attempt to quantify how an individual article is being discussed, shared, referenced and used. Metrics that are often included on journal websites or available to publishers include the following:

  • Citation counts: Counting the number of citations that an individual article has received is a traditional and easy-to-understand metric that some publishers may choose to display. For example, Crossref members can obtain and display this information by using the Cited By API. Scopus, Dimensions and Web of Science can also provide this information, but they are paid-for solutions (although Dimensions makes article-level metrics viewable in its freemium version). Although this information is also available via Google Scholar, no open API is available to support easy integration into journal websites.
  • Page views and downloads: The number of times a website has been visited and the number of times that a PDF or the XML version of the article has been downloaded provide some insight into the scholarly visibility of the article. The number of accesses and downloads will be greatly affected by how well the journal is indexed.
  • Altmetrics: Altmetrics add a measure of “social visibility”, including shares and likes on various social media platforms, mentions in blogs and other platforms, and news articles and press releases about a published article. Examples include Altmetric, Plum Analytics and Crossref’s Event Data.
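As an illustration of the first option above, Crossref’s public REST API returns a cited-by count for any registered DOI in the `is-referenced-by-count` field of a work record. The sketch below is an assumption-laden example, not Crossref’s member-only Cited By service: it parses a work record and, optionally, fetches one over HTTP.

```python
import json
from urllib.request import urlopen

# Public Crossref REST API endpoint for individual work records.
CROSSREF_WORKS = "https://api.crossref.org/works/"

def citation_count(work_message: dict) -> int:
    """Extract the cited-by count from a Crossref work record ("message" object)."""
    return int(work_message.get("is-referenced-by-count", 0))

def fetch_citation_count(doi: str) -> int:
    """Fetch a work record from the Crossref REST API and return its cited-by count."""
    with urlopen(CROSSREF_WORKS + doi) as response:
        payload = json.load(response)
    return citation_count(payload["message"])
```

In practice a journal website would cache these counts and refresh them periodically rather than calling the API on every page view, since Crossref rate-limits anonymous clients.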

Journal-level metrics

Journal-level metrics focus on the whole journal rather than on specific published articles. The most common options for journal-level metrics are outlined in the table below, in the row titled ‘Metrics available’. Notably, several data providers allow the calculation of the same journal-level metric from different underlying data, so results are likely to differ between providers.

|                   | Web of Science | Scopus | Google Metrics | Lens.org | Dimensions |
|-------------------|----------------|--------|----------------|----------|------------|
| Title curation    | High | High | None | Medium | Medium |
| Source of data    | Self-curated | Self-curated | Self-curated | Crossref, PMC, Core, OpenAlex | Self-curated, Crossref, PMC, OpenCitations |
| Metrics available | In Journal Citation Reports: JIF, 5-year JIF, quartile ranking, Eigenfactor, JCI; Other: h-index, citations/period | In SCImago Journal & Country Rank: SNIP, SJR, h5-index, quartile ranking; Other: CiteScore, citations/period | h5-index | Citations/period | Citations/period |
| Interface access  | Paywalled | Paywalled | Open | Open | Mostly paywalled |
| Metric access     | Paywalled | Open | Open | Open | Open |
| Business strategy | Commercial | Commercial | Commercial | Non-profit social enterprise | Commercial |
| Access costs      | High | High | Free | Free | High |
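Several of the metrics in the table, such as the h-index and h5-index, are simple functions of per-article citation counts. As an illustration (not any particular provider’s implementation), the h-index is the largest h such that h articles each have at least h citations:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h articles each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # With counts in descending order, the h-index is the last rank
    # at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

The h5-index is the same calculation restricted to articles published in the last five complete calendar years, which is why the same journal can receive different values from providers with different coverage.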

Other ways to compare academic journals

Although not necessarily intended as a metric, journals can be compared on their openness and transparency using the Transparency and Openness Promotion (TOP) Guidelines. Furthermore, journal policies, procedures and practices can be summarised in a metric-like form as the TOP Factor.

Other journal-level information that is of interest to readers and authors, and is often compared across journals, includes the following:

  • Publication time and time of various editorial steps (e.g., time to decision, peer review, acceptance)
  • Number of submissions and publications
  • Rate of desk rejection, overall rate of rejection
  • Median number of reviews
  • Price information

In these areas, however, there are no standardised metrics, and comparisons tend to be ad hoc and manual.
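When a journal does choose to report such figures, they can be derived directly from its editorial records. The sketch below is purely illustrative (the timestamps and field pairing are assumptions, not a standard): it computes a median interval in days between two editorial events, such as submission and first decision.

```python
from datetime import date
from statistics import median

def median_days(event_pairs: list[tuple[date, date]]) -> float:
    """Median number of days between two editorial events
    (e.g. submission date and first-decision date) across articles."""
    return median((end - start).days for start, end in event_pairs)
```

Reporting a median rather than a mean reduces the influence of a few unusually slow submissions, which is one reason ad hoc comparisons of "average time to decision" across journals can mislead.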

Limitations of publication metrics

The use of publication metrics has received increasing criticism (see Further reading), and the relationship between such metrics and the quality of the published content remains a matter of debate.

The San Francisco Declaration on Research Assessment (DORA) makes five recommendations for publishers, including that they should “greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics” and “make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published”. The Leiden Manifesto for research metrics encourages the development of “open, transparent and simple” data collection. Most recently, the Coalition for Advancing Research Assessment (CoARA) published the Agreement on Reforming Research Assessment, which recommends that journal- and publication-based metrics no longer be used in research assessment.
