Judge open science by its outcomes, not its outputs

Counting publications does not build equity, integrity and value, say Ismael Rafols and Louise Bezuidenhout

Following a flurry of policies to foster open science, we are seeing a wave of efforts to monitor its development. There is even a monitor for monitoring: the French Open Science Monitoring Initiative. This gathers information from different countries and institutions, and proposes general principles for monitoring open science, which is also the subject of a recently launched Unesco consultation.

So far, monitoring has focused on tracking uptake of open-science policies and outputs, particularly publications and data. But this is like trying to understand fungi by counting mushrooms: it misses what’s going on beneath the surface, the network of activity and interaction that underpins the visible result.

Open science is not just about changing the practices of research, but also its motivations and values. It encompasses much besides publishing, from public engagement to evaluation. Focusing on what is easy to count means neglecting these activities that foster equity and share benefits with society.

Beyond missing important aspects of open science, narrowly focused monitoring can have damaging unintended consequences.

Perverse incentives

For example, more open access publications might seem like progress, as it means that more information is freely accessible. However, if openness depends on pay-to-publish models, this creates inequality, as researchers who cannot afford publication charges become less visible. It may also lead to unscrupulous publishers weakening reviewing processes to increase economic gains.

These effects weigh more heavily on researchers in marginalised scholarly communities and places, meaning that output-centred monitoring both undermines the values of open science and weakens poorer research systems. More generally, it reinforces a publish-or-perish system focused on quantity, instead of supporting slow and thoughtful knowledge creation for meaningful purposes—the deeper transformation needed to connect science and society. Lay readers should not have to sort through an ever-growing jungle of research outputs.

Similar problems afflict data sharing: there is little evidence of meaningful reuse of datasets on a large scale, outside of a few disciplines such as genomics; issues of quality control are poorly understood; and storage and long-term curation remain problematic. So, unless open data is intended solely as fodder for artificial intelligence models, counting datasets does not lead to meaningful insights.

Current monitoring efforts can be useful, but their narrowness may result in engagement activities and the uses and benefits of open science being overlooked. Deeper efforts that embrace context and values are needed for open science to live up to its promises of fostering research that makes the world a better place.

Learning from history

For this, first, indicators need to specify their particular route to open science. For example, each form of open access publishing needs its own indicator, since each has different normative implications. Gold, diamond and green open access should not be aggregated in a single measure.

Second, monitoring that stops at the point of creation or engagement says nothing about outcomes: it is important to track the effects of open science policies, asking who bears the costs and who reaps the benefits.

One way to conceive of open science is as a system of connections. This foregrounds its relational aspects, highlighting the centrality of collaboration and co-creation, and the aim of building responsive and responsible global research communities. The Unesco Open Science Outlook published in 2023 calls for “a focus on the people who are doing, engaging with and/or benefiting from science”.

History holds lessons for how to monitor such processes. The OECD Frascati Manual, devised in 1963, sets out measures of science, technology and innovation based on tracking inputs and outputs. Yet this approach failed to capture the processes, drivers and outcomes of innovation, so in 1992 the OECD launched a complementary approach, the Oslo Manual, based on surveying companies.

Mapping connections

Similarly, for open science, monitoring needs to show the connections between researchers and stakeholders, along with behavioural changes and underlying motivations. Surveys are the main method for capturing this type of information.

Open science monitoring, in other words, needs a pluralistic approach, possibly based on surveys and narratives, that links practices to value-driven outcomes. Such a shift would be in line with the movement in research assessment away from journal impact factors and citations towards narrative accounts of impact.

If open science is a systemic transformation of the research system, including its values, its monitoring needs strategies to match. If open science is not just about producing accessible and reproducible research but about effecting meaningful change in science, we need strategies to track its contribution to collective benefits, integrity and equity in science.

Ismael Rafols is Unesco chair on diversity and inclusion in global science at Leiden University, and Louise Bezuidenhout is senior researcher at the Centre for Science and Technology Studies, Leiden University. The centre is hosting webinars to discuss monitoring open science on 21 June (14:00 CET) and 28 June (15:00 CET).

A version of this article also appeared in Research Europe