by Eunice Sotelo & Victoria Pilbeam
Many evaluators are familiar with realist evaluation, and have come across the realist question "what works for whom, in what circumstances and how?" The book Doing Realist Research (2018) offers a deep dive into key concepts, with insights and examples from specialists in the field.
We caught up with Brad Astbury from ARTD Consultants about his book chapter. Before diving in, we quickly toured his industrial chic coworking office on Melbourne's Collins Street – brick walls, lounges and endless fresh coffee. As we sipped on our fruit water, he began his story with a language lesson.
Doing Realist Research (2018) was originally intended to be a Festschrift, German for 'celebration text', in honour of the recently retired Ray Pawson of Realistic Evaluation fame. Although the book is titled 'research', many of the essays in it, like Brad's, are in fact about evaluation.
The book's remit is the practice and how-to of realist evaluation and research. Our conversation went wide and deep, from the business of evaluation to the nature of reality.
His first take-home message was to be your own person when applying evaluation ideas.
You don't go about evaluation like you bought something from Ikea – with a set of rules saying screw here, screw there. I understand why people struggle because there's a deep philosophy that underpins the realist approach. Evaluators are often time poor, and they're looking for practical stuff. At least in the book there are some examples, and it's a good accompaniment to the realist book [by Pawson and Tilley, 1997].
Naturally, we segued into what makes realist evaluation realist.
The signature argument is about context-mechanism-outcome, the logic of inquiry, and the way of thinking informed by philosophy and the realist school of thought. That philosophy is an approach to causal explanation that pulls apart a program and goes beyond a simple description of how bits and pieces come together, which is what most logic models provide. [The realist lens] focuses on generative mechanisms that bring about the outcome, and looks beneath the empirical, observable realm, like pulling apart a watch. I like the approach because as a kid I used to like pulling things apart. Don't forget realist evaluation is only 25 years old; there's room for development and innovation. I get annoyed when people apply it in a prescriptive way – it's not what Ray or Nick would want. [They would probably say] here's a set of intellectual resources to support your evaluation and research; go forth and innovate as long as it adheres to principles.
Brad admits it's not appropriate in every evaluation to go that deep or use an explanatory lens. True to form (Brad previously taught an impact evaluation course at the Centre for Program Evaluation), he cheekily countered the argument that realist evaluation isn't evaluation but a form of social science research.
Some argue you don't need to understand how programs work. You just need to make a judgment about whether it's good or bad, or from an experimental perspective, whether it has produced effects, not how those effects are produced. Evaluation is a broad church; it's open for debate.
If it's how and why, it's realist. If it's 'whether', then that's less explicitly realist because it's not asking how effects were produced but whether there were effects and if you can safely attribute those to the program in a classic experimental way. Because of the approach's flexibility and broadness, you can apply it in different aspects of evaluation.
Brad mused on his book chapter title, 'Making claims using realist methods'. He preferred the original, 'Will it work elsewhere? Social programming in open systems'. So did we.
The chapter is about external validity, and realist evaluation is good at answering the question of whether you can get something that worked in one place with certain people to work elsewhere. Where realist approaches don't work well is estimating the magnitude of the effect of a program.
As well as a broad overview of where realist evaluation fits in evaluation practice, Brad provided us with the following snappy tips for doing realist research:
Don't get stuck on Context-Mechanism-Outcome (CMO)
When learning about realist evaluation, people can get stuck on having a context, mechanism and outcome. The danger with CMO is using it like a generic program logic template (activities, outputs and outcomes) and listing Cs, Ms and Os, which encourages linear thinking. We need to thoughtfully consider how they're overlaid to produce an explanation of how outcomes emerge.
A way to overcome this is through 'bracketing': set aside the CMO framework, build a program logic and elaborate on the model by introducing mechanisms and context.
Integrate prior research into program theory
Most program theory is built only on the understanding of stakeholders and the experience of the evaluator. This means we're not being critical of our own and stakeholders' assumptions about how something works.
A way to overcome this is through 'abstraction': drawing on prior research, we can bring in wider understandings of what family of interventions is involved and use this information to strengthen the program theory. We need to get away from 'this is a very special and unique program' to 'what's this a case of? Are we looking at incentives? Regulation? Learning?' As part of this work, realist evaluation requires evaluators to spend a bit more time in the library than other approaches.
Focus on key causal links
Brad looks to the causal links with the greatest uncertainty, or where there are the biggest opportunities to leverage what could help improve the program.
When you look at a realist program theory, you can't explore every causal link. It's important to focus your fire, and target evaluation and resources on things that matter most.
When asked for his advice to people interested in realist evaluation, Brad's response was classic:
Just read the book 'Realistic Evaluation' from front to back, multiple times.
As a parting tip, he reminded us to aspire to be theoretical agnostics. He feels labels can constrain how we do the work.
To a kid with a hammer, every problem can seem like a nail. Sometimes, people just go to the theory and methods that they know best. Rather than just sticking to one approach or looking for a neat theoretical label, just do a good evaluation that is informed by the theory that makes sense for the particular context.
--------------------------
Brad Astbury is a Director at ARTD Consultants. He specialises in evaluation design, methodology, mixed methods and impact evaluation.
Eunice Sotelo, research analyst, and Victoria Pilbeam, consultant, work at Clear Horizon Consulting. They also volunteer as mentors for the Asylum Seeker Resource Centre's Lived Experience Evaluators Project (LEEP).