The title of this post might sound straightforward, but discussions on measuring impact in research can be confounding.
Much of what is already written about research impact, and many of the tools developed to measure it, focus on the STEM and social science disciplines. These tools have been more widely developed and used for scientific research due to the significant pressure in the sciences to provide measures of impact in grant evaluations, hiring, tenure and promotion, and reputation. Measurements of impact can also be used at the institutional level for university rankings, to support program funding, and to build partnerships.
The challenge rests in providing a meaningful measure of impact that accurately reflects how research is making a difference across various disciplines, each with unique qualities in their scholarly publishing ecosystem.
Traditionally, citation metrics, such as the number of times a publication has been cited or an author's total citation count, have been used to measure impact. Popular metrics such as the Impact Factor and h-index use citations in their calculations and are still widely used. One known issue with these is that they tend to favour big publishers that put content behind paywalls. Another common issue is that they are sometimes the sole metric used to describe the complex relationship between research and its impact on the world, which can lead to misuse and negative feedback loops in the research landscape. In response, tools such as Altmetric and PlumX evolved to show a more holistic and transparent view by providing stats not only on citations, but also on “likes” on social media, blog mentions, the number of times your work has been bookmarked, and so on.
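To make the citation-based approach concrete, here is a minimal sketch of how an h-index is computed from a list of per-paper citation counts. The citation counts below are invented for illustration; real calculations depend on which database supplies the counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Hypothetical citation counts for six papers by one author
papers = [25, 8, 5, 3, 3, 1]
print(h_index(papers))  # 3 — three papers have at least 3 citations each
```

Notice how the single number flattens the picture: an author with one landmark paper cited hundreds of times can have the same h-index as one with a handful of modestly cited papers, which is part of why single-metric evaluation draws criticism.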
It can be a challenge to assess the many ways research makes an impact and distill them into a single number. Beyond impact, you can also look at engagement, influence, content quality over time, author productivity, and more. There are many programs and calculations to choose from, each with their own strengths and weaknesses.
But how helpful are they for legal researchers? Does law need its own metric?
What does “impact” in law mean? For a legal researcher whose paper is cited in a high-profile Supreme Court case, that one citation can be seen as a significant impact. Or perhaps it is the impact of a legal scholar's widely read blog or Twitter account. Or perhaps it is the impact of a law professor who actively participates in legal work for the wider community.
Without developing tools with an understanding of some of the unique aspects of legal publishing and what research impact means in law, legal researchers could be left to use measurements of impact that weren’t designed for them.
In an article titled “Exploring the Development of a Standard System of Citation Metrics for Legal Academics” (2018), Susan Barker describes her development of a metric for legal researchers called the b-index. The b-index combines quantitative and qualitative reporting of academic, judicial, and social impact: academic influence accounts for 50 percent of the total, judicial influence for 30 percent, and social impact for 20 percent. The author describes social impact as the hardest to measure, involving the use of an open source altmetrics tool called Impactstory and, ideally, a peer-review process. I highly recommend reading this article for a comprehensive exploration of metrics for legal research.
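The weighting just described can be sketched in a few lines. This only illustrates the arithmetic of combining the three weighted components; the actual b-index also involves qualitative reporting and peer review, and the component scores below are hypothetical:

```python
# Component weights as described for the b-index:
# academic 50%, judicial 30%, social 20%
WEIGHTS = {"academic": 0.50, "judicial": 0.30, "social": 0.20}

def combined_score(scores):
    """Combine component scores (assumed here to share a 0-100 scale)
    into one weighted total."""
    return sum(WEIGHTS[component] * scores[component] for component in WEIGHTS)

# Hypothetical component scores for one legal scholar
example = {"academic": 85, "judicial": 70, "social": 40}
print(combined_score(example))  # 0.5*85 + 0.3*70 + 0.2*40 = 71.5
```

A design question this raises is the same one the article grapples with: the hard part is not the weighted sum, but producing defensible component scores in the first place, especially for social impact.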
Though I see a benefit for a law-specific metric like this one, some questions come to mind.
As the scholarly publishing landscape evolves with increased open access content, different modes of communication, data sharing, and more, how can a metric with a specific formula and strict parameters adapt to these changes? Popular metrics like the h-index and Impact Factor are much easier to calculate through subscription-based databases such as Web of Science and Scopus; how can we make it easier for non-commercial legal databases to be included in the equation? Given evidence of a citation advantage for open access law journal articles (Beatty, 2019), including this content in citation metrics can benefit legal scholars. And given past concerns about the misuse of metrics in evaluations, how can we educate researchers and encourage responsible use of a law-discipline metric?