On being asked the wrong question
We are often asked to help organisations set ‘indicators’ for their advocacy campaigns, with the expectation that these indicators will find a home in a monitoring and evaluation (M&E) framework. We know such frameworks well: they’re submitted with the project proposal to the funder and then supplied to us as external evaluators several years later with a pack of other project documentation. It’s not always clear what happens to them in the meantime, nor does it usually seem that M&E is put to the best service of campaigns.
How M&E goes wrong
M&E serves two purposes: accountability (to funders, managers, boards and, ideally but rarely, to those who stand to benefit from success) and learning. A third lens, particularly helpful when thinking about monitoring, is decision-making: M&E should help organisations to monitor whether a plan is being delivered as intended, and to inform thinking as to whether the plan was the right one and hence whether it should be revised to take into account changing internal and external circumstances.

M&E typically performs well at recording activities and their immediate effects. Things tend to get harder when monitoring the achievement of, and especially progress towards, campaign and advocacy objectives. Sometimes the achievement of an overall goal may be easy to assess in itself – where something clear-cut is at stake, such as the ratification of a treaty or the repeal of the death penalty. More often, though, objectives are only partially achieved: an agreement may be adopted which has strengths and weaknesses that need careful interrogation; the range of NGO opinion on the Paris Agreement is testament to this. Even more nuance is needed in monitoring the stages prior to the conclusion of a political process, since this involves gauging degrees of support for particular positions among political and corporate targets disinclined to admit to being under the influence of an NGO.
Faced with these challenges, organisations may struggle to define meaningful indicators or, more commonly, they may set indicators which are, on the face of it, perfectly coherent and logical, but difficult to measure against. Indicators for whether a campaign target is coming round to supporting an organisation’s objectives may refer to changes in public rhetoric, public statements of support or positions taken in relevant fora. But in our experience, it is rare for there to be systematic tracking against this sort of indicator; neither do advocates’ subjective impressions of targets’ likely willingness to move, drawn from lobbying meetings or other interactions with them, tend to be properly recorded and reflected upon in strategy review processes.
As well as not knowing how to record the right thing, organisations may disproportionately invest in measuring the ‘wrong’ things. They may capture lots of information relating to social media engagement, for example, which is useful to know in itself, but can be inappropriately adopted as a proxy for political movement. At the very least, there are several ‘links in the chain’ between an issue gathering traction on social media and political or corporate targets taking it more seriously.
In other cases, there seems to be little commitment to using an M&E framework as a ‘real time’ advocacy management tool. A framework developed as part of a fundraising effort may be ignored until the time comes to report to the donor, at which point an organisation has to scramble around for data to retrospectively assign to different objectives and indicators.
These problems may have any of several causes: M&E frameworks may be unwieldy, overcomplicated and potentially arbitrary in how they define indicators. There may be a disconnect between those responsible for reporting (M&E or fundraising staff) and those involved in project delivery. A vicious circle can develop in which M&E is ineffective and (perceived to be) burdensome, which leads campaigners and advocates to disengage from it, further reducing its quality. The setting of indicators and measurement against them becomes divorced from the basic question of knowing whether a campaign is moving in the right direction.
Intel not indicators
Martin, a member of the Hub, was commissioned by a coalition of organisations to coordinate their efforts in the twelve months before a Conference of the Parties (CoP) of an international treaty, at which states come together to revise the text, agree protocols and report on implementation.
The coalition, focussed on a number of ground-breaking objectives for the CoP, knew that the positions of bigger, more influential, countries were easy to predict, although in some instances difficult to influence. However, less information was available for many smaller or otherwise less well-known countries, each of which had a vote and was potentially easier to influence (something of course not lost on the governments and others ‘on the other side’). An effective strategy demanded country-specific background information and a way of capturing ‘intelligence’ to monitor the positions of the nearly 200 states.
The team worked with another consultant to develop a database that stored key data and allowed for regular updates and rapid analysis. It held information on each country: data relevant to the subject of the treaty; political affiliations and alliances; ranking on the Corruption Perceptions Index; a complete voting record from previous CoPs; and details of its CoP delegation. Expert analysis assigned each Party a score indicating its level of support for, or opposition to, the proposals, alongside a rating reflecting how confident we were in the underlying intel. In the run-up to the CoP, coalition members conferred on weekly conference calls to discuss updates on Parties’ positions and opportunities for advocacy, all of which was recorded.
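To make the idea concrete, here is a minimal sketch of the kind of record such a database might hold. The field names, the scoring scale and the naive forecast rule are all illustrative assumptions – the actual schema and analysis used by the coalition are not described here.

```python
from dataclasses import dataclass, field

@dataclass
class PartyRecord:
    """One treaty Party, as it might appear in a position-tracking database.

    Fields are hypothetical: the scale (-2 strong opposition .. +2 strong
    support) and the 0..1 confidence rating are assumptions for illustration.
    """
    country: str
    position: int            # assessed support for the proposals
    confidence: float        # how much we trust the underlying intel
    notes: list = field(default_factory=list)

    def log_intel(self, position, confidence, note):
        """Record a new piece of intel and update the current assessment."""
        self.position = position
        self.confidence = confidence
        self.notes.append(note)

def forecast(records, threshold=0):
    """Naive vote forecast: count Parties currently assessed as supportive."""
    supportive = sum(1 for r in records if r.position > threshold)
    return supportive, len(records)
```

The value of a structure like this is less the arithmetic than the discipline: every lobbying meeting or corridor conversation becomes a `log_intel` call, so the forecast always reflects the latest assessment rather than impressions left in campaigners’ heads.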
At the start of the CoP, the database was updated with further information including a photo of each Head of Delegation – helpful when looking for people in the corridors. Intel arising from discussions and observations was added on a constant basis, and twice-daily updates with voting forecasts were generated. The walls of the NGO’s ‘war room’ in the conference venue were covered with printouts of targets and positions.
All this data – the most comprehensive information held by any NGO on the ground – proved to be necessary but, on its own, not sufficient: it worked because it reflected and supported expert analysis and strategic thinking. Not only were forecasts used to coordinate coalition members’ lobbying, but the number-crunching and political analysis were also used by negotiators from some powerful ‘friendly’ countries, who sought advice in formulating their own daily ‘hit lists’ for bilateral conversations.
Not only did the CoP reach agreement in line with the coalition’s goals, but its vote predictions were within 2%, with the final Plenary vote prediction out by one vote only. The chances of success going into the project had seemed much more doubtful.
A database like this may be over-engineered for many campaigns. Others use ‘living documents’ (or, if unavoidable, spreadsheets) routinely updated with the latest intel on the positions of key decision-makers and influencers, with – importantly – associated next steps for influence. The point is to embed ‘political’ monitoring in campaign delivery, rather than treating it as a stand-alone exercise done for a separate purpose. This type of approach is not usually apparent in campaign documentation and M&E frameworks. It is information that exists in campaigners’ heads, but in this case was extracted, processed and employed in a systematic way.
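A ‘living document’ of this kind can be as lightweight as a shared spreadsheet. The sketch below, with entirely made-up column names and rows, shows the essential feature: every target’s assessed position is tied to a concrete next step for influence, so the tracker drives action rather than just recording data.

```python
import csv
import io

# Illustrative only: column names and entries are invented, not a prescribed
# format. The one non-negotiable column is 'next_step'.
TRACKER_CSV = """\
target,role,position,next_step
Minister X,decision-maker,leaning supportive,brief ahead of committee vote
Editor Y,influencer,neutral,pitch op-ed on the issue
Official Z,gatekeeper,opposed,request meeting via friendly MP
"""

def next_actions(tracker_csv):
    """Return (target, next_step) pairs - the tracker's to-do list."""
    rows = csv.DictReader(io.StringIO(tracker_csv))
    return [(row["target"], row["next_step"]) for row in rows]
```

Reviewing the output of something like `next_actions` at each strategy meeting is what turns monitoring into a management tool: a row with a stale position or an empty next step is itself a finding.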
What are the lessons?
We don’t want to replace one set of unhelpfully rigid rules with another. The issue is to reframe M&E as a function essential to campaign success and not something done on the side to satisfy other interests.
What tends to happen now is that the very interesting and engaging task of debating an issue, working out how to fix a problem, interpreting political shifts and so on loses its usefulness, as well as its fun, when it is converted into a language of tools and frameworks and reduced to a technical exercise. Consultants like us can try to make ourselves indispensable by inventing methodologies and mystifying the M&E function. Campaigners get taken out of their comfort zone and become paralysed by the task of setting indicators. Far better to keep your eye on the goal; use ‘adaptive management’; and focus on (researching and) recording the intel you need to adjust your strategy – usually the positions of key targets and influencers. All the rest – apart from some techie online engagement metrics, which may be useful for assessing particular tactics – is more likely to be a distraction than a help.
This suggests a different attitude to indicators themselves. While campaign goal(s) should be immovable, objectives should be bendable – if internal or external circumstances change dramatically – and indicators should be entirely fluid, with new ones added or the original ones dropped if they don’t prove useful. As Goodhart’s law suggests: “When a measure becomes a target, it ceases to be a good measure.”
Our colleague Jim Coe has pointed out that we should also be wary of claims that data can answer all our questions about strategic effectiveness. The answers lie less in the accumulation of data and more in careful reflection based on high-quality intelligence. Campaigners should remember too that if they are able to define a problem well enough to launch a campaign about it, then they must also be able to tell whether the problem has been reduced or resolved: it is a fallacy to demand a higher standard of proof for outcomes than for problems.
So, what’s the right question?
Whatever M&E experts suggest, the question that campaigners should be asking themselves is always “what information do we need to win?”
Some interesting responses elsewhere to this post. Our colleague Rhonda highlighted this as an example of ‘developmental evaluation’, which is not a term or frame I had thought of in this context – perhaps reflecting the sometimes unhelpful separation of M&E thinking from doing advocacy, to the detriment of both. An experienced campaign manager suggested that the example we used is perhaps not that uncommon. That’s fair, but the point is that such tracking, while done in good campaigns, is treated as separate from the ‘monitoring’ of M&E when it probably shouldn’t be. As Michaela O’Brien pointed out, the “use of the term ‘intel’ as distinct to ‘information / data’ frames this an essential campaign activity and not a bureaucratic add-on” – which was indeed deliberate and very much what we wanted to emphasise.
Interested in other views!