MIXED METHODS AND CROSS-CUTTING APPROACHES

21 Realist Evaluation

Sarah Louart, Habibata Baldé, Émilie Robert, and Valéry Ridde

Abstract

Realist evaluation is based on a conception of public policies as interventions that produce their outcomes through mechanisms that are only triggered in specific contexts. The analysis of the links between contexts, mechanisms and outcomes is therefore at the heart of the approach. It can draw on a variety of methods, but in all cases uses qualitative methods to investigate the mechanisms involved. Belonging to the family of theory-based evaluations, realist evaluation aims to produce middle range theories that facilitate the transfer of the knowledge produced about the intervention under study to other contexts or to other interventions of the same type.

Keywords: Qualitative methods, theory-based evaluation, context-mechanism-outcome (CMO) configurations, middle range theory, critical realism

I. What does this approach consist of?

Critical realism, a school of thought led in particular by Roy Bhaskar (1975), paved the way for the development of the realist approach to evaluation. In his book A Realist Theory of Science, Bhaskar asks: what must the world be like for science to be possible? The idea is to question the nature of reality, since this determines how it can be explored and understood. Bhaskar argues that there are structures, powers and mechanisms (e.g. gravity) that exist and can produce effects even if we do not know them. A leaf can fall from a tree and reach the ground under the effect of gravity even if no one has observed it. The aim of researchers is then to produce theories that elucidate these mechanisms, powers and structures and the ways in which they act, including the conditions that favour their triggering and the effects they can produce. Such theories are ‘statements about the way things act in the world’ (Bhaskar 1975). They will always remain perfectible and evolving, as reality and knowledge evolve. For ‘realist’ researchers, there is a reality, but there are no general truths that are valid at all times and in all places.

The realist approach to policy evaluation is based on these principles and was introduced by Pawson and Tilley (1997) (see also Westhorp et al. 2011; Westhorp 2014; Robert and Ridde 2020). The presupposition is that policies have real effects, but do not cause them directly. They may or may not trigger mechanisms, which in turn produce outcomes. Mechanisms refer to the ways people react to the resources, sanctions or opportunities (depending on the type of intervention) made available to them within the policy framework (Lacouture et al. 2015). People are therefore the drivers of change; it is their reactions that produce outcomes, whether positive, negative, expected or unexpected. It is therefore no longer enough to ask whether the policy works or not; it becomes necessary to investigate the mechanisms that, triggered by the policy in specific contexts, produce outcomes. This helps answer the question: how does the policy work or not, why, for whom, and in what contexts? The objective of realist evaluation is therefore to establish links between the triggering of mechanisms and contextual factors, and between the action of these mechanisms and the occurrence of outcomes.

To answer all these evaluation questions, realist evaluation mobilises a theory-based evaluation approach (see separate chapter on theory-based evaluation). This involves starting from the intervention theory of the policy under study (the way in which the policy is expected to produce its effects) and developing it in the light of existing knowledge and field data. The aim is to arrive at what is known as a middle range theory (Merton 1968), a theory that lies between the intervention theory and a fully general theory. It is an explanatory theory of a regularity (an observed trend) that is also contextualised (linked to particular contexts). To achieve this, the evaluation is organised in several stages.

I.I. Reconstructing the intervention theory

The first step is to understand the policy being implemented. A set of beliefs or assumptions underpins the activities of the policy. Indeed, every policy is based on a theory, i.e. an idea that the activities implemented can produce change. It is about answering questions such as: what resources does the policy make available and why? What changes could the policy generate and how? What elements of the context might influence the policy? This intervention theory is often not explicit at the outset and needs to be investigated, which may involve discussions with the implementation team and a review of documents. Although often used interchangeably, the concepts of ‘intervention theory’ and ‘theory of change’ differ. Intervention theory is a detailed, intervention-centred theory describing all the components of an intervention and their logical relationship to their desired impact; from a realist perspective, it incorporates the concepts of mechanism and context. The theory of change, on the other hand, focuses on the objective of the intervention and the social change it aims to achieve: it describes the part of the logical reasoning that links the expected results of an intervention to its desired impacts.

I.II. Formulating theoretical propositions about how the policy can produce effects

At this stage, it is necessary to draw both on the intervention theory reconstructed in the previous stage and on scientific knowledge. The aim is to formulate theoretical propositions about how policy activities in specific contexts might trigger mechanisms, which in turn might produce outcomes, and about which outcomes these might be. In other words, the work focuses on the interactions between a particular context, the possibility of triggering a mechanism via the intervention, and the production of an outcome. The context is the set of factors external to the policy that influence the triggering of a mechanism (e.g. socio-economic characteristics of the participants, social norms, interpersonal relations, political environment, etc.). Outcomes are the results produced by the triggering of these mechanisms. These interactions therefore constitute theoretical propositions about how the policy is expected to work and why.

I.III. Testing the theoretical propositions empirically

On the basis of these a priori theoretical propositions, the objective is then to test them empirically in order to confirm, revise or clarify them. This allows the theoretical propositions, and therefore the intervention theory, to evolve. This empirical testing may involve both quantitative and qualitative data collection methods; indeed, realist evaluation does not rely on any one type of method or data. The aim is to use a range of methods according to their relevance for testing the theoretical propositions. Realist evaluation is therefore not really a research method but rather a ‘logic of inquiry’ (Pawson and Tilley 2004). Nevertheless, in order to investigate the mechanisms, and therefore the reasoning of the actors involved in the policy, qualitative methods will necessarily play a role at some point, since they make it possible to understand how actors perceive and react to the policy. The aim of this step is to identify links between contexts, mechanisms and outcomes that occur on a regular basis in the field.

I.IV. Specifying the middle range theory

Based on the empirical data, the theoretical propositions formulated are tested and refined. The propositions initially constructed evolve and are enriched with the new data. This results in a consolidated theory, which can be formalised as a set of context-mechanism-outcome (CMO) configurations, to explain how, why, for whom and in which contexts this type of policy may or may not work. Since mechanisms are triggered by the policy implemented (the intervention) and are necessarily linked to a person or group of people, it is also possible to formulate the theory as more complete configurations: ICAMO (intervention, context, actor, mechanism, outcome). This theory is a middle range theory because it has a broader validity than the intervention theory, which is specific to a given intervention. The middle range theory is more abstract and can be used as a basis for analysing and evaluating other policies of the same type.
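To make the structure of such configurations concrete, the short sketch below shows one possible way of recording ICAMO configurations as simple data records, for instance when an evaluation team needs to manage many configurations across cases. This is purely illustrative: the class and field names (ICAMOConfiguration, evidence, etc.) are our own conventions for this sketch and are not part of the realist evaluation literature.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ICAMOConfiguration:
    """Illustrative record of an intervention-context-actor-mechanism-outcome configuration.

    The class and field names are hypothetical conventions for this sketch,
    not a standard schema from the realist evaluation literature.
    """
    intervention: str                  # resources or activities offered by the policy (I)
    contexts: List[str]                # contextual conditions favouring the mechanism (C)
    actor: str                         # person or group whose reasoning is at stake (A)
    mechanism: str                     # reaction or reasoning triggered by the intervention (M)
    outcome: str                       # result produced when the mechanism is triggered (O)
    evidence: List[str] = field(default_factory=list)  # data sources supporting the configuration
```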

II. How is this approach useful for policy evaluation?

The purpose of policy evaluation is to guide public policy by identifying the most relevant activities to be undertaken in view of the desired objectives. To do this, it is necessary to draw on the lessons learned from the implementation of past or current policies. These lessons should not be limited to assessing whether or not the objectives have been achieved. They must also draw on a critical, empirically based assessment of the assumptions and preconceptions on which the policy was built, as well as of the action strategies of those responsible for implementing it. The realist approach thus distinguishes itself from more traditional approaches to evaluation, which often aim only to assess the effectiveness of a policy using quantitative indicators. These methods alone are often insufficient to draw relevant lessons from the implementation of complex policies. Most policies are complex because they operate at different levels, involve many people, change as they are implemented, and are influenced by context and a myriad of other factors. It is therefore useful to turn to other approaches, such as realist evaluation, that can accommodate the complexity of policies. Realist evaluation makes it possible to understand how a policy may, or may not, bring about change; it is an explanatory type of evaluative research.

In the critical literature on public policy, it is often pointed out that the same type of policy is disseminated without being adapted to different contexts (Olivier de Sardan 2021). This diffusion of standardised policies often fails to produce the same results elsewhere. The realist approach helps to explain this and can help to avoid such pitfalls: it helps to understand what does or does not trigger the mechanisms that produce positive effects, and how and why these triggers could potentially occur elsewhere. Understanding why and how public policies work, with which beneficiaries and in which contexts, provides guidance for decision-making. Asking for whom the policy works is also a key question in policy evaluation: it is necessary for taking into account the different impacts of the policy on different sub-groups, particularly the most marginalised, among whom differential, counter-intuitive or even undesirable effects may be observed. All of these questions, which are central to the realist approach, can provide guidance on whether a policy should be implemented in a different context, or on how it can be adapted or changed to maximise its potential to produce the desired effects. Realist evaluation is therefore particularly appropriate when a policy is intended to be scaled up and extended to other populations in other contexts.

A realist rationale, based on the results of previous evaluations or on a review of the scientific literature using this approach, can be used to guide policy formulation prior to implementation. However, a realist evaluation itself cannot be carried out purely ex ante, since outcome data are needed to develop CMO configurations.

III. An example of the use of this approach to evaluate the implementation of universal health coverage

Universal health coverage (UHC) promotes access to health services for all people who need them, without exposing them to financial hardship. To achieve this goal, the World Health Organization (WHO) has established a partnership to support UHC in several countries. This partnership aims to support collaborative policy dialogue as a governance tool in countries seeking to implement actions for UHC. The intervention consists of providing resources and expertise (e.g. technical assistance, training for ministry officials) according to the needs of the Ministries of Health (MoH) (Robert and Ridde 2020). This type of intervention is complex and takes place in very different contexts and in varying forms; it cannot be evaluated using only quantitative data and indicators. Drawing on realist evaluation has allowed for a better understanding of how this partnership can work, and of the potential differences in results depending on the context of implementation. The overall objective of the study was to understand how, in which contexts and through which mechanisms the partnership can support policy dialogue.

The objective was to investigate: 1) how and in which contexts the partnership can initiate and nurture policy dialogue; 2) how collaborative dynamics unfold within the policy dialogue supported by the partnership (Robert et al. 2022). A multiple case study was conducted in six countries. Theoretical propositions about how the policy could work were drawn from project documents as well as from existing theories in the scientific literature, for example theories on partnership relations and collaborative governance. An example of a theoretical proposition is that capacity building (through training, technical expertise and continued support from WHO) would empower the MoH (M) while triggering a shared understanding of governance and policy dialogue (M); this should lead the MoH to conduct inclusive and participatory policy dialogues (O). The triggering of these mechanisms could be facilitated by contextual factors, such as an enduring relationship between WHO and the MoH (C) or stable human resources in the two institutions involved in the partnership (C).

A collaborative approach was adopted, involving stakeholders at key stages of the evaluation: development of the protocol (Robert et al. 2019), development of the intervention theory, interpretation of the results, etc. By drawing on theories to increase abstraction, on the intervention theory, and on field data to consolidate or refute the initial theoretical propositions, several CMO configurations were formulated. For example: the partnership facilitates the initiation of policy dialogue (O) by generating stakeholder interest in multi-sectoral collaboration (M), provided that stakeholders recognise their interdependence and the uncertainty of managing critical health issues (C). One of the outcomes that WHO expects from the establishment of the partnership will thus only be realised in a particular context that allows a specific response mechanism to be triggered among stakeholders. This type of result could support the implementation of similar actions, help adapt them, or help identify the contexts in which this type of intervention is most likely to produce the expected outcomes.
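Purely as an illustration, and assuming the hypothetical ICAMOConfiguration sketch introduced in section I.IV, the example configuration described above could be recorded as follows (the field values simply paraphrase the text):

```python
# Hedged, illustrative encoding of the example configuration reported by
# Robert et al. (2022), using the hypothetical ICAMOConfiguration class
# sketched in section I.IV. Not an official representation of the study.
uhc_partnership_configuration = ICAMOConfiguration(
    intervention="WHO-MoH UHC partnership (expertise, training, continued support)",
    contexts=[
        "stakeholders recognise their interdependence",
        "stakeholders acknowledge the uncertainty of managing critical health issues",
    ],
    actor="stakeholders in national health policy dialogue",
    mechanism="interest generated in multi-sectoral collaboration",
    outcome="initiation of policy dialogue",
    evidence=["project documents", "multiple case study in six countries (Robert et al. 2022)"],
)

print(uhc_partnership_configuration.outcome)
```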

IV. What are the criteria for judging the quality of the mobilisation of this approach?

Realist evaluation is more an approach to evaluation than a method; it belongs to the category of ‘evaluative research’. To judge the quality of a realist evaluation, it is therefore essential to ensure that the evaluation meets certain basic criteria of the approach. For example, the evaluation should focus on uncovering the mechanisms at work, and the concept of mechanism should be properly understood and applied. The evaluation should identify configurations of contexts, mechanisms, outcomes and actors, and it must reach a higher level of abstraction than the intervention theory. Other elements can also favour the quality of a realist evaluation: carrying out a review of the scientific literature to investigate existing theories and support the formulation of the theoretical propositions to be tested; triangulating data sources (qualitative and quantitative); involving different stakeholders at different stages of the evaluation, etc. There are also guides, such as the RAMESES II quality and reporting standards for realist evaluations (Wong et al. 2016), that provide guidance at each stage of the evaluation in order to carry out a quality realist evaluation.

V. What are the strengths and limitations of this approach compared to others?

Realist evaluation has many advantages. It makes it possible to take into account the complexity of public policies, as well as that of the social, political and economic world in which they take place. It is based on a collaborative approach and encourages the involvement of all stakeholders (at the institutional, operational and policy-recipient levels). In particular, this approach places the ‘beneficiaries’ and front-line actors at the centre of the evaluation by considering them as experts: it is their reactions that the evaluation seeks to understand, so they are the ones best able to inform it.

It helps to explain multiple processes and outcomes, to highlight unexpected results of policies, and to answer evaluative questions that are often overlooked (understanding the how rather than just the outcome). Seeking to understand in depth how policies work produces knowledge that is more likely to be mobilised in other contexts. Its attention to context is fundamental, because context is too often forgotten in standard evaluation approaches. Moreover, its grounding in theory allows for the use and accumulation of available knowledge: it allows scientific knowledge to be mobilised in a concrete way, whereas such knowledge is often underused in the field. The fact that it is based on both the scientific literature and data from the field makes it possible to ensure a certain transferability of the results produced and to provide appropriate recommendations to political decision-makers (whether or not it is appropriate to implement this type of policy in certain contexts, how to adapt a policy to a specific context, etc.). Finally, it allows for collaboration between teams with different expertise and research areas, as well as the mobilisation of very different research methods.

Nevertheless, mobilising this approach involves some challenges. First, it is time-consuming and can be difficult to master: the concepts of context and mechanism can be hard to grasp and operationalise, and there is still a lack of dedicated courses in advanced evaluation practice and evaluative research in which realist evaluation is taught. Moreover, many evaluation stakeholders (donors, operational partners, ethics committees, etc.) are not familiar with this approach, which can cause problems in understanding what is and is not possible, and thus in meeting their expectations. Second, evaluation is still very much marked by the search for impact results measured by indicators, whereas realist evaluation offers commissioners and interested stakeholders a completely different format of results. Finally, this approach to evaluation is not straightforward and does not produce linear results: several mechanisms can act at the same time, with opposite influences on the outcomes, and an outcome in one configuration can become a context element in another. CMO configurations can therefore sometimes be difficult to construct.

Some bibliographical references for further reading

Bhaskar, Roy. 1975. A Realist Theory of Science.

Lacouture, Anthony, Eric Breton, Anne Guichard, and Valéry Ridde. 2015. “The concept of mechanism from a realist approach: a scoping review to facilitate its operationalization in public health program evaluation.” Implementation Science, 10(1): 153. https://doi.org/10.1186/s13012-015-0345-7.

Merton, Robert K. 1968. Social Theory and Social Structure. Simon and Schuster.

Olivier de Sardan, Jean-Pierre. 2021. La revanche des contextes: Des mésaventures de l’ingénierie sociale en Afrique et au-delà. KARTHALA Editions.

Pawson, Ray, and Nicholas Tilley. 1997. Realistic Evaluation. London: Sage.

Pawson, Ray, and Nicholas Tilley. 2004. Realist Evaluation. DPRN Thematic Meeting 2006 Report on Evaluation.

Robert, Emilie, and Valéry Ridde. 2020. Dealing With Complexity and Heterogeneity in a Collaborative Realist Multiple Case Study in Low- and Middle-Income Countries. SAGE Research Methods Cases. SAGE Publications Ltd. https://doi.org/10.4135/9781529732306.

Robert, Emilie, Valéry Ridde, Dheepa Rajan, Omar Sam, Mamadou Dravé, and Denis Porignon. 2019. “Realist Evaluation of the Role of the Universal Health Coverage partnership in Strengthening Policy Dialogue for Health Planning and Financing: A Protocol.” BMJ Open, 9(1): e022345. https://doi.org/10.1136/bmjopen-2018-022345.

Robert, Emilie, Sylvie Zongo, Dheepa Rajan, and Valéry Ridde. 2022. “Contributing to Collaborative Health Governance in Africa: A Realist Evaluation of the Universal Health Coverage partnership.” BMC Health Services Research, 22(1): 753. https://doi.org/10.1186/s12913-022-08120-0.

Westhorp, Gill. 2014. “Realist Impact Evaluation: An Introduction.” Methods Lab.

Westhorp, Gill, Ester Prins, Cecile Kusters, Mirte Hultink, Irene Guijt, and Jan Brouwers. 2011. “Realist Evaluation: An Overview.” Seminar report.

Wong, Geoff, Gill Westhorp, Ana Manzano, Joanne Greenhalgh, Justin Jagosh, and Trish Greenhalgh. 2016. “RAMESES II Reporting Standards for Realist Evaluations.” BMC Medicine, 14(1): 96. https://doi.org/10.1186/s12916-016-0643-1.

License


Policy Evaluation: Methods and Approaches Copyright © by Sarah Louart, Habibata Baldé, Émilie Robert, and Valéry Ridde is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.