Citation counts are the traditional key measure of scholarly work. Impact measurement, however, has expanded significantly in recent years to include influence beyond academia. Research impact now has a much broader meaning and can be wide-ranging, varied and project-specific.
For Hong Kong's upcoming Research Assessment Exercise (RAE 2020), the University Grants Committee (UGC) defines impact as demonstrable contributions, effects, changes or benefits that research qualitatively brings to six areas:
Economy | Society | Culture
Public Policy/Services | Health | Environment or Quality of Life
In preparation for RAE 2020, HKBU's Graduate School has compiled a list of impact resources with links to impact case studies drawn from the experience of the UK Research Excellence Framework (login required).
In summary, research impact is a multi-dimensional indicator, and it is important to be aware of the positive and negative aspects of each measure. This Guide aims to provide you with basic information about some of the most common measurement tools.
Research supported by public money is seen as needing justification. More and more, researchers are required to demonstrate their contribution to the economy and wider society, as well as to culture and the environment.
Funders around the world now expect researchers to demonstrate research impact. The HK RGC has assigned a 15% weighting to impact in RAE 2020, requiring researchers to demonstrate the influence of their work across the six areas listed above.
Impact measurement is a controversial topic as there are no agreed standards. Criticisms include metric maximising practices, diverting attention away from actual content, and ignoring the reasons for citations. Various initiatives such as the San Francisco Declaration on Research Assessment (DORA) and Leiden Manifesto aim to educate and address these pitfalls.
Nevertheless, metrics can be useful tools for research assessment when applied appropriately. The following explains some of the traditional ways of measuring impact:
Journal metrics rate the importance of a journal based on the number of articles published and the number of citations received. One aim is to help researchers decide which journal to publish in. Each system has its strengths and weaknesses, so judgment must be exercised when interpreting and applying the figures. Common journal metrics include:
Journal Impact Factor (JIF)
Published annually in Web of Science's InCites Journal Citation Reports, JIF measures how frequently an "average article" is cited over a defined period and is thus an indication of how often a journal is cited by other journals in a field. The calculation is simple and spans three years: citations received in one year to items published in the previous two years, divided by the number of citable items published in those two years.
A JIF of 2.575 means that, on average, articles published one or two years ago have been cited 2.575 times.
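As an illustration, the two-year JIF described above can be reproduced from raw counts. The figures below are hypothetical:

```python
def journal_impact_factor(citations_to_prior_items, citable_items):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the citable items published in those years."""
    return citations_to_prior_items / citable_items

# Hypothetical journal: 515 citations in 2019 to its 2017-2018 output
# of 200 citable items.
print(round(journal_impact_factor(515, 200), 3))  # → 2.575
```

Note that only "citable items" (typically articles and reviews) enter the denominator, which is one reason the figure can be gamed.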
SCImago Journal Rank (SJR)
Based on Scopus data, SJR was inspired by Google's PageRank algorithm and has visual presentation capabilities. It is a free online tool that ranks publications by weighted citations per document. A citation from a prestigious journal counts for more than one; a citation from a less prestigious journal carries a lower weight.
Source Normalized Impact per Paper (SNIP)
Scopus-based SNIP measures contextual citation impact, allowing direct comparisons between subject fields. Because citation rates are typically higher in STEM fields, titles in fields with lower citation potential, such as the Arts and Humanities, are weighted more heavily to balance out the difference.
CiteScore
Freely accessible via Elsevier's Journal Metrics, CiteScore is based on citations received in one year to articles published in the previous three years, divided by the number of Scopus-indexed articles published in those same three years.
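Under the definition above, CiteScore can be sketched from per-year counts. The numbers below are hypothetical:

```python
# Hypothetical per-year counts for one journal.
docs_by_year = {2015: 120, 2016: 130, 2017: 150}   # documents indexed in Scopus
cites_in_2018 = {2015: 300, 2016: 420, 2017: 480}  # 2018 citations to each year

window = [2015, 2016, 2017]  # the three years preceding the CiteScore year
citescore_2018 = (sum(cites_in_2018[y] for y in window)
                  / sum(docs_by_year[y] for y in window))
print(round(citescore_2018, 2))  # → 3.0
```

The main contrasts with JIF are the three-year (rather than two-year) window and the broader Scopus document base.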
You may find Professor Anne-Wil Harzing's Where to submit your paper? Compare journals by impact helpful when trying to decide where to publish.
Author impact metrics are based on the number of publications by an author and the number of times they have been cited. Key author impact measurements include:
h-index
The h-index attempts to balance scholarly productivity with citation impact. A scholar with an index of h has published h papers that have each been cited at least h times. Thus an author with an h-index of 50 has published 50 articles, each with 50 or more citations. It is sometimes seen as unfair to early-career researchers.
i10-index
Used only in Google Scholar's My Citations feature, the i10-index is a basic and straightforward measure of a scholar's productivity: it simply counts the number of publications with at least 10 citations.
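Both author metrics above can be computed directly from a list of per-paper citation counts. The counts below are hypothetical:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank       # the paper at this rank still has enough citations
        else:
            break
    return h

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

papers = [45, 33, 20, 12, 9, 4, 1]  # hypothetical citation counts
print(h_index(papers))    # → 5
print(i10_index(papers))  # → 4
```

The example shows why the h-index penalises early-career researchers: it can never exceed the total number of papers published, however well cited they are.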
Article metrics draw on a variety of data sources, such as traditional citation counts, Field-Weighted Citation Impact (FWCI) and the more recent altmetrics. Scopus, Web of Science and Google Scholar all provide easy ways for authors to find their article-level impact.
Citation Counts
The assumption behind citation counts is that the more important or influential the work, the more citations it receives. Critics argue, however, that citations do not measure quality, since the decision to cite is subjective. Citation counts also favour mainstream research and may reflect negative rather than positive attention.
Field-Weighted Citation Impact (FWCI)
Field-Weighted Citation Impact is a useful benchmark because it accounts for differences across disciplines. Sourced from SciVal, FWCI compares the number of citations a publication has received against the average number of citations received by all other publications of the same year, type and discipline, calculated as follows:
Field-Weighted Citation Impact = Number of citations received by an article ÷ Expected number of citations for similar articles
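The formula above is a simple ratio; in SciVal, an FWCI of 1.00 corresponds to the world average for similar publications. A minimal sketch with hypothetical counts:

```python
def fwci(article_citations, expected_citations):
    """FWCI per the formula above: the article's citation count divided by the
    average citations of publications of the same year, type and discipline."""
    return article_citations / expected_citations

# An article with 12 citations in a field where similar articles average 8:
print(round(fwci(12, 8), 2))  # → 1.5 (50% above the world average)
```

The expected (denominator) value comes from SciVal itself; it is not something an individual author can compute from a single paper.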
In addition to the above, altmetrics have become increasingly popular as measures of article level impact in recent years. See the next section for more details.
Short for alternative metrics, altmetrics complement traditional impact measures, but unlike citation counts, which may take years to accumulate, they are available almost immediately. The emergence of altmetrics in recent years has benefited many scholars, particularly those in the Humanities and Social Sciences, by providing an alternative way to assess their work beyond traditional journal metrics. Altmetrics have gained widespread attention, giving researchers entirely new perspectives on how their work influences not only scholarly communities but also the wider world.
Examples of altmetrics include: mainstream media coverage, citations on Wikipedia, discussions on research blogs, mentions on Facebook or Twitter, or exports to citation management systems like Mendeley. Main ways to access and make use of altmetrics data include:
scopus.com | Found on the right hand sidebar of the Scopus Document details page, the 5 categories of PlumX metrics track various altmetrics data. This has become the primary source of article level metrics on Scopus alongside citation count and field-weighted citation impact.
impactstory.org | Funded by the National Science Foundation and the Alfred P. Sloan Foundation, ImpactStory allows users to create profiles that gather usage data from various online research sharing platforms.
altmetric.com | Not to be confused with altmetrics.org, altmetric.com is a product offered by Digital Science that uses a doughnut symbol to visualise the reach and impact of articles. altmetric.com widgets can be found on many publisher article metrics pages or repositories such as Figshare.
Altmetrics are still in their infancy. Just like traditional metrics, they have limitations and can also be easily manipulated. To learn more, visit: altmetrics.org.
Watch this 2-min video on why these University of Leicester researchers care about impact.
Copyright © 2010-2019 Hong Kong Baptist University Library. All rights reserved.