A blog by Power to Change and Renaisi.

Stephen Miller, Director of Impact and Learning at Power to Change, Chloe Nelson, Head of Impact and Learning at Power to Change, and Mylene Pacot, Principal Consultant – Strategy & Impact at Renaisi, share thoughts about how power dynamics can play out in evaluations, and what can be done to challenge them.

When Renaisi and Power to Change reflected on some of the questions that evaluators and commissioners grappled with in 2022, a common thread emerged: evaluations are part of a wider system. In other words, it matters:

  • who commissions evaluations;
  • who conducts them and how;
  • who participates in the research; and
  • who the findings are shared with.

All of these can contribute to reinforcing or challenging existing power dynamics.

Being aware of the wider system we are placed in can help us challenge power dynamics for the better, and this indeed seems to be evolving. As Mylene reflected:

“Evaluations seem to be commissioned less often to hold funded organisations accountable to their funder, and more often to generate learning both ways; recognising that funders also have a lot to learn from the people they’re aiming to support”.

How can harmful power dynamics be challenged?

In this context, what can evaluators and evaluation commissioners do to challenge inequalities and potentially harmful power dynamics? Mylene, Chloe and Stephen have four suggestions.

1. Acknowledge the tension between ‘rigour’ and ‘real life’

Traditional powerholders such as central and local government and academia have often used evaluations to understand which interventions cause impact and how they can be replicated. These more traditional evaluations and methodologies are not always fit for purpose.

For instance, many Power to Change programmes empower local communities to promote economic development, and cannot be replicated like-for-like. In this context, it is far more important to understand complexity than replicability: for example, how a programme worked for a particular group of people at a given time.

It is important to acknowledge this tension when considering the future of evaluations. As Stephen summarised:

“In an ideal world, we would want policymakers to recognise complexity, but in the real world, people with power and money have a particular understanding of what constitutes evidence. We need to find a way to meet them while stretching the practice in the sector.”

2. Challenge what counts as a ‘robust’ evaluation

“What counts as a ‘robust evaluation’ comes with biases; it was decided by people in power, for people in power, decades ago”.

Chloe Nelson

For instance, while randomised controlled trials (RCTs) are often perceived as the ‘gold standard’ for evaluation in the social sector, they tend to reinforce existing power dynamics: they help commissioners decide what programme to fund, rather than focusing on lessons learnt by practitioners or giving participants a platform to share their voices.

Evaluators should question what methods are considered ‘valid’ and why.

“For me the ‘gold standard’ of evaluation means facilitating a meaningful dialogue between decision-makers, funders, practitioners and the people they serve; and often that cannot be achieved through quantitative methods alone. Observations from an evaluator who has built trust with programme stakeholders over time, or reflections from a peer learning workshop, can also generate high quality evidence”.

Mylene Pacot

3. Give evaluators and practitioners agency over their approach

Funders and commissioners can challenge traditional power dynamics by providing flexibility to the evaluators they commission and the organisations they fund. In Stephen’s experience, it is best to avoid prescriptive evaluation tenders, and instead to commission “evaluations that will evolve over time and will incorporate opportunities for constant reflection”.

Mylene agreed: “evaluators can add more value by having conversations and co-designing approaches with programme stakeholders, rather than by executing a pre-defined brief”.

Similarly, funders and commissioners can let go of some of their traditional power by being flexible with the design of the programmes they fund. As Chloe shared: “we don’t need to be controlling the detail of an intervention to a granular level; being too specific can hamper delivery organisations”.

For instance, Power to Change wanted the Trade Up evaluation to explore whether the grant had contributed to increasing community businesses’ trading income, yet did not specify in detail how the grant had to be used.

4. Make evaluation findings accessible

Sharing evaluation findings with research participants and programme stakeholders – and not solely commissioners or decision-makers – is a powerful way to challenge power dynamics. As Chloe noted:

“sometimes better quality and more accessible work isn’t as talked about as work that uses a lot of jargon. That’s a fundamental point of ethics – ensuring that our evaluation work isn’t exclusionary to the people we’re talking about”.

Making evaluations accessible can take several forms: writing reports in plain English, using visual tools, and creating spaces for people to talk about the findings.

While one individual or organisation cannot change an entire system alone, these four concrete solutions can help anyone who evaluates or commissions evaluations to challenge harmful power dynamics.

Renaisi’s commitment

We know that we all embody and uphold injustice in unconscious and overt ways.

Each of the points above could be expanded upon, and we will build on these suggestions in 2023. We are committed to continuously challenging ourselves to be more inclusive and equitable in the way we work, who we work with, how we build and test ideas, and by unlearning things that are antithetical to positive change.

Read Renaisi Consultancy Team’s commitment to continuously and proactively reviewing their processes to make them as equitable as possible.