Using evidence to prevent gender-based violence – An evaluator’s perspective

Evidence and progress go hand in hand. To prevent gender-based violence (GBV), we need to understand what works – and why – and use what we have learned to inform decision-making. But understanding what doesn’t work and why is just as important.

In recognition of this year’s 16 Days of Activism against Gender-Based Violence campaign, we are taking a closer look at our team’s work on the UK Foreign, Commonwealth and Development Office’s SAFE programme. As the lead on the SAFE Evaluation and Learning Unit (ELU), Tetra Tech provides our partners with rigorous evidence on the programme’s effectiveness and its impact on preventing violence against women in Zimbabwe.

Earlier this year, our Team Leader, Julienne Corboz, an expert in evaluation and GBV, presented at the SAFE & Spotlight GBV Symposium in Harare. She shared how evidence can inform policies, programmes and strategies for preventing GBV.

These are her five recommendations for evaluators:

1. We need to value diverse forms of evidence. In evaluating and generating evidence on interventions, we often prioritise research-based evidence, but practice-based knowledge is equally important. The Prevention Collaborative defines practice-based knowledge as “the cumulative knowledge and learning acquired by practitioners from designing and implementing diverse programmes in different contexts, including insights gained from observations, conversations, direct experiences and programme monitoring.” To make the most of this knowledge, we need more rigorous and systematic tools and processes for recording it. The Prevention Collaborative’s paper on the subject provides practical guidance, along with tools and resources.

2. Evidence should also be used to understand process and effectiveness. We often focus on using evidence to test the outcomes and impact of our programmes, strategies and policies. But evidence is equally critical to understanding how we implement interventions: what works and how, and what doesn’t.

3. To understand when an intervention doesn’t work, we need to distinguish between theory failure and implementation failure. Was the policy, strategy or programme carried out as intended but without achieving the desired impact, suggesting that the theory of change was flawed? Or was it not carried out as intended, in which case the theory may still hold?

4. Adaptation, replication, and scale-up rely on solid evidence. Evidence is key to informing how we adapt our programmes and, if they are found to be effective and impactful, how we replicate or scale them up. With the right evidence, we can adapt tactically and strategically, establish the costs of GBV prevention interventions, and determine which factors are essential to retain at scale and which are less important.

5. Strategic adaptation is more resource-intensive than tactical adaptation, and interventions need to plan for this. Making minor alterations in response to feedback or routine monitoring information (tactical adaptation) should be a routine part of most interventions. Strategic adaptation, by contrast, questions the appropriateness of an intervention itself, drawing on in-depth learning from real-time experience and continuous testing of assumptions and approaches. It therefore requires deeper and more comprehensive data collection, analysis and review processes.