Author Archives: Rhonda Schlangen

  1. The Evalu-ization* of Advocacy


    By Rhonda Schlangen and Jim Coe

    We received dozens of questions in advance of a webinar organized by the Advocacy Accelerator on the ideas in our paper, No Royal Road: Finding and Following the Natural Pathways in Advocacy Evaluation. The sheer volume of questions points to an enormous pent-up demand for solutions to evaluating advocacy. But the nature of many of the questions – focusing on indicators, for example, or how to attribute results – also points to a persistent effort to make advocacy fit within the contours of traditional evaluation. It is precisely this entrenched focus on the “evalu-ization” of advocacy that we’re pushing against in the paper.

    Social and policy change is complex, and influencing it through advocacy is a dynamic, adaptive, complicated process. Like many complex change processes, it does not lend itself to the simple measurement promised by some more traditional evaluation approaches. The inherent uncertainty and unpredictability of advocacy means that rather than searching for this non-existent “royal road”, we should be following the natural pathways of advocacy.

    We discuss tactics to streamline and calibrate monitoring, evaluation, and learning (MEL) for advocacy by:

    1. Recognizing unpredictability, for example by using problem-centered strategy development.
    2. Embracing uncertainty about results, by getting the best information we can given the resources we have and making reasoned claims on the basis of it.
    3. Thinking about contribution differently, for example by exploring different advocacy roles and understanding how well they have been played when analyzing contribution.
    4. Resisting rogue indicators that drive activities and outputs rather than illuminate outcomes, and instead interrogating progress against multiple dimensions of change.
    5. Doing the basic things well, by ensuring advocates have the time and space to consider, deliberate on, and interpret information about their work.

    What’s practically useful to effective advocacy can be at odds with more traditional and formal evaluation practices. The orientation to formal, complicated, and resource-intensive processes risks privileging approaches that are out of reach to all but the best-resourced civil society organizations and campaigns.

    Everyone involved in MEL — funders, senior managers and boards, advocacy practitioners, and evaluators — has a role to play in ensuring it is oriented and supported in a way that is grounded in how advocacy influences change. All need to help build a culture of critical reflection and adaptation, which will ensure MEL efforts contribute to more effective advocacy.


    *h/t to Pierre Basimise Ngalishi Kanyegere for his From Poverty to Power blog post “The NGO-ization of Research: What are the Risks” and to others concerned about the effects of the professionalization and bureaucratization of social change and other civil society work.


    Blog posts represent individual author’s views


  2. Measuring advocacy capacity and knitting sweaters for chickens


    Considerations when designing approaches to assess and support advocacy capacity. 

    My sister spent nearly a decade trying to “build” my knitting capacity. She wanted to impart her passion and skill for knitting to me; I was mildly interested. She knits everything, including this darling chicken sweater. (She lives in a very cold place.) At this point, I knit a passable scarf at a leisurely pace. And that’s just fine with me.

    I keep this experience in mind when working with civil society organizations and their funders who are interested in learning about using advocacy to advance their goals. Roughly a quarter of international development assistance funding is spent on “capacity building”—training and technical assistance—but with mixed results.[1] So, it is wise to give attention to better understanding how these efforts are working, what benefits emerge as a result, and for whom.

    There are five issues evaluators and programmers should consider when designing ways to learn about the effectiveness of advocacy capacity support:

    1. Defining advocacy: What concept of advocacy is being supported?
    2. Identifying essential capacities: What skills, resources, and supporting conditions need to be in place to enable that advocacy, and are they in place for the organization or coalition engaging in the advocacy?
    3. Aligning needs with support: If the necessary skills, resources, and supporting conditions aren’t in place, can their development be supported by external sources?
    4. Assessing quality of support and engagement: Is the support provided in response to those needs high quality, appropriate, and relevant? Does the organization participate in a way that enables it to learn?
    5. Emphasizing durability: Does the organization have what it needs to apply that learning?

    Each of these issues has variables programmers and evaluators can consider, as outlined below.

    Questions and variables

    1. What concept of advocacy is being supported?
    • Short-term advocacy campaign or project
    • Advocacy as a core organizational strategy

    2. What skills, resources, and supporting conditions need to be in place to enable that advocacy? Are they in place for the organization or coalition engaging in the advocacy?
    • Organization’s experience
    • Advocacy context
    • What the advocacy is intended to change
    • Expectations for sustained and adaptive advocacy

    3. If they aren’t in place, can they be supported by external sources?
    • Barriers or reasons the capacity and practices haven’t been developed and/or sustained
    • External factors that support or limit the capacity
    • Internal organizational factors that help or limit capacity

    4. Is the support provided in response to those needs high quality, appropriate, and relevant? Does the organization participate in a way that enables it to learn?
    • Alignment of the supporting funder’s resources and skills with the organization’s needs
    • Structure and duration of the support
    • Involvement of the right people from the organization in a way that is sustained and consistent

    5. Does the organization have what it needs to apply that learning?
    • Funding
    • Staffing
    • Space to advocate
    • Space to act on what it learns
    • Supportive organizational conditions, like leadership and willingness to leverage organizational assets on behalf of advocacy


    When addressing these questions, evaluators need to give attention to whose perspectives are represented, how well aligned they are, and whether supporting conditions are in place to enable application and use of those capacities. We also need to tackle assumptions head-on, such as the assumptions that the organization being supported agrees with the needs identified, invests in solutions, and is committed to applying what it learns.

    We have a lot more to learn. My burning questions include:

    • What is the evidence connecting advocacy capacity support and effective advocacy?
    • What do we know about effective ways to support an organization to develop its advocacy capacity?
    • Advocacy capacity support often focuses on technical skills, like effective media tactics. Is it also possible to support less tangible factors, such as leadership and adaptation, that we know affect an organization’s ability to use advocacy as an ongoing strategy to advance its goals?

    What other questions will help us improve how resources for capacity support can best be applied to sustainable, durable change?


    Rhonda Schlangen


    [1] “The Challenge of Capacity Development: Working Towards Good Practice,” OECD Journal on Development 8, no. 3 (2008): 33,

    Photo credit: Kristi Schlangen-Lindquist

    Blog posts represent individual author’s views only

  3. Hubbing it Out: Why We’re Together in the Advocacy Hub


    While our website and online presence are new, Advocacy Hub members have been working together on advocacy- and campaigning-related efforts for years. In our inaugural blog post, we’re sharing why we’ve joined together in this consulting cooperative. Here, in our own words, is why we’re in the Hub:

    Jim: These are difficult times and we want campaigners to be as effective as they can in tackling the multiple challenges we face. I try to maximise the contribution I can make to that, and sharing ideas with Advocacy Hub members and others about campaigning, campaign strategy, and evaluation is a key part of that for me.

    Gabrielle: When I became a consultant I joined the Advocacy Hub because in the past, as an internal evaluation manager, I had hired Hub members. I valued the way they engaged me as a collaborator and co-thinker, while also delivering fresh insights and analysis that surprised and challenged me.

    Jeremy: I value being a member of the Advocacy Hub because our collective experience and complementary perspectives mean that as a group we can offer tailored, multi-faceted solutions to clients, while interaction with expert peers helps me to improve and expand my own knowledge and skills.

    Beverley: Although we are scattered in different places, we have established a strong community of practice, where we make time to share ideas, learning, and best practice. I enjoy working with Advocacy Hub colleagues on projects as we complement each other’s strengths and experience, which can lead to better quality outcomes.

    Martin C: As someone who also runs campaigns, I appreciate working with others with a focus on strategy and evaluation. I always have a voice in my head asking if I as a campaigner could deal with the recommendations I make as an advisor; and as a campaigner, another asking me what an evaluator would make of the decision I just took.

    Antonella: It’s great to be able to draw on and share experiences and thinking with a group of practitioners from across the globe who believe that evaluation and strategic planning processes should support, rather than hinder, social change processes. The opportunity to discuss ideas and emerging methodologies helps to sharpen my own thinking and practice.

    Steve: I am interested in how change happens and unpacking the complexity of that via the perspectives of colleagues across different contexts brings a certain richness.

    Elena: I like working with colleagues with different skills and expertise, based in different parts of the world. I find the Hub an enriching environment, where we can discuss and exchange ideas. There is also significant learning and innovation created through this process.

    Martin V: To me, the Advocacy Hub is an “international space of proximity”, where a small number of consultants from a wide diversity of countries seek to understand and challenge each other in order to collaborate better. It enables each of us to question one another’s tools and approaches, to innovate on those approaches, and to work together on common consultancy missions.

    Rhonda: I’m connected with colleagues who are equally convinced that we don’t have all the answers, and who are excited by the challenge of continuously pushing ourselves to figure out how to make sure evaluation and strategy help improve advocacy.

    Jean-Martial: Being a consultant can be solitary. The Hub gives us a family of peers. I’m in it because the connected world allows us to feel close even when we are far apart, and because diversity is a key attribute of a collective brain. We reflect how new organisations are: networked, non-hierarchical, practice over theory, interested in risk, and aware of complexity.

    So, that’s us: a group of geographically and methodologically diverse people, joined by a shared curiosity about how advocacy leads to change and a commitment to percolate innovations that help campaigners, advocates and others working to change the world.