There’s no silver bullet for measuring societal impact

The effects of research are uncertain and disputed — and efforts to evaluate them must take this into account.

Which measurement tools are the best? Illustration: Fleur Treurniet, Unsplash.

Across Europe, policymakers are placing more emphasis on the contributions of research to society. These contributions are diverse — from improving well-being to spurring innovation and creating meaning — and their assessment is complex and subjective. This creates pressure to develop indicators that can justify policy choices while saving time and resources.

But using general indicators as a silver bullet to measure societal impact is analytically wrong, unfair to some types of research and harmful to science as a whole. The contributions of science to society are so varied, and mediated by so many different actors, that indicators used in impact assessment cannot be universal metrics. Instead, they need to be developed for given contexts and used alongside qualitative assessment.

First, remember that science, technology and innovation do not necessarily improve social well-being. They have also caused much harm — sometimes purposefully, as with nuclear weapons, sometimes accidentally, as with asbestos or thalidomide. Often, there is uncertainty and disagreement regarding what is desirable — some may think, for example, that developing renewable energy is more important than improving the combustion engine. 

Therefore, we cannot assume that more impact is necessarily better. It is crucial to assess the type of contribution made. Improving weapons is not the same as developing therapies. Impact is a vector, not a scalar — its direction matters. Unidimensional indicators, such as numbers of jobs created, cannot capture directions — the value of the impact depends on the type of jobs.
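To make the metaphor concrete, here is a minimal sketch in Python. The programmes, dimensions and numbers are invented for illustration only: two hypothetical programmes create the same total number of jobs, so a scalar indicator rates them as identical, even though their contributions point in entirely different directions.

```python
import numpy as np

# Hypothetical impact "vectors": each component is a contribution to one
# societal dimension. Dimensions here are (health-care jobs, weapons jobs);
# all values are invented for the example.
therapy_programme = np.array([100, 0])   # 100 jobs in health care
weapons_programme = np.array([0, 100])   # 100 jobs in arms manufacturing

# A scalar indicator (total jobs created) cannot tell the programmes apart...
print(therapy_programme.sum() == weapons_programme.sum())    # True

# ...whereas the vectors themselves point in different directions.
print(np.array_equal(therapy_programme, weapons_programme))  # False
```

The point is not the arithmetic but the representation: only the vector form preserves the direction of the impact, which is exactly what a single aggregate number throws away.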

Second, policy analysts such as Roger Pielke Jr have argued that, for uncertain and disputed questions, analysis cannot be separated from decision-making. This applies to societal impact: what is valued is tightly entangled with what is measured and how. Therefore, impact indicators must be developed as part of the decision-making process, and include diverse views and interests.

Developing indicators in this way would be a major departure from current practices. Conventional science indicators are mainly based on information from a few data sources, for example publications, tweets or patents. 

These indicators come with assumptions about the data, such as the meaning of a citation, and the effect of measurement, for example that assessment will foster ‘quality’. This type of research assessment analysis takes place in seclusion, away from the contexts and decisions about research and policy.

To shift the way indicators are developed, I would adopt two suggestions for pluralising science policy advice, made by Andy Stirling and his colleagues in the Science Policy Research Unit at the University of Sussex.

The first involves a broadening of inputs, from publication and patent databases to a wider set of data and expertise. This could include information from social media, as well as databases of news, healthcare, consumption, social welfare and so on. 

More data alone is not enough. Disparate forms of expertise will be needed to bring in qualitative insights to frame, interpret and contextualise these data. Such interpretation is crucial because indicators mean different things in different contexts.

The second move concerns how the outputs of analysis are presented and used in decision-making. Conventionally, indicators are presented to decision-makers as tables, providing what seems to be a unique and prescriptive ranking of the options or performers. 

In cases such as societal impact, where there is uncertainty and disagreement, evidence should instead be presented in formats such as spider graphs, maps or drawings, which allow different interpretations depending on priorities, thus providing plural and conditional advice. 
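As a rough illustration of this kind of plural presentation, the following Python sketch draws a spider (radar) graph comparing two hypothetical research options. The impact dimensions and scores are invented for the example, not drawn from any real assessment.

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented impact dimensions and scores (0-1) for two hypothetical options.
dimensions = ["Therapies", "Prevention", "Jobs", "Training", "Public debate"]
option_a = [0.8, 0.3, 0.6, 0.5, 0.2]
option_b = [0.3, 0.9, 0.4, 0.6, 0.7]

# Spread the dimensions around the circle and close each polygon
# by repeating its first value.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]
option_a += option_a[:1]
option_b += option_b[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, option_a, label="Option A")
ax.plot(angles, option_b, label="Option B")
ax.fill(angles, option_a, alpha=0.1)
ax.fill(angles, option_b, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_yticklabels([])  # hide radial ticks; only the profile shapes matter
ax.legend(loc="lower right")
plt.show()
```

Unlike a ranked table, the graph by itself prescribes nothing: which profile looks preferable depends on the weight each viewer gives to each axis.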

A science map, for example, can show the differences between research that contributes to therapies or to prevention. Different parties, each with their own values and interests, can then argue about the strategy that will have a more desirable form of impact. 

This way of presenting evidence acknowledges that societal impact assessment is inevitably value-laden. Rather than using indicators that hide these values — and their politics — the aim should be to reveal the assumptions behind quantitative evidence for impact.

In summary, because the effects of research are uncertain and disputed, indicators for assessing societal impact have to be developed bespoke and used in collaboration with research users. At present, indicators are tools to close down debate. They should instead become part of a pluralistic exploration of impacts — and in the process, foster wider participation in research assessment.


About the author

Ismael Ràfols is a science policy analyst at the Universitat Politècnica de València, Spain. This article is based on his keynote at the Science, Technology and Innovation Indicators conference, held in Paris on 6–8 September. The paper was published in Research Evaluation and is available at SSRN. This blog post also appeared on Research.

Tags: Indicators, Measuring, Keynote
By Ismael Ràfols
Published Oct. 26, 2017 1:57 PM - Last modified Apr. 3, 2024 4:26 PM


The OSIRIS blog

On the OSIRIS blog, the members of the project team write about the impact of research as our work on this topic progresses.

We aim for a collection of posts that represent preliminary and conceptual findings and ideas, discussions from meetings and seminars, shorter analyses of empirical data and brief summaries of the vast literature on impact. Some of the posts will be shared with the Impact Blog at the London School of Economics, the most comprehensive web page devoted to this subject and a great source of ideas on many topics within science policy and science in practice.

The blog is also open to contributions from people outside the OSIRIS team. Send us an email if you have a text that would fit the blog.