Monitoring and Evaluation

Monitoring and evaluation (M&E, or more broadly MEAL: monitoring, evaluation, accountability and learning) has practical value for practitioners and also brings a strong ethical perspective to emergency response.

Humanitarian action is deeply rooted in humanitarian principles that promote human rights and human dignity and aim to put people at the centre of the response. Monitoring and evaluation examines whether our humanitarian efforts succeed in achieving that.

The core humanitarian principles themselves require a focus on evidence, most directly through the commitment to impartiality: without unbiased evidence of humanitarian needs, impartial decisions on the allocation of humanitarian assistance are nearly impossible to make.

‘If you believe in impartiality, you have to be evidence based. You can’t be impartial if you don’t know what the range of choices are’ (Peter Walker, personal communication, March 2013).[1]

While the humanitarian sector has improved significantly, in many cases supported by rigorous evaluation studies, it still does not fully meet the needs of affected populations. The most recent State of the Humanitarian System report (covering 2015-2017) highlights gaps in coverage and coherence, in the appropriateness of the response for particular vulnerable groups, and limits to participation.

On a practical level, systematic monitoring and evaluation supports adaptive management of projects and programmes, contributes to institutional learning, and provides a basis for accounting for organisational achievements to people and communities affected by conflict and disaster. These processes give managers and practitioners practical means to improve their response.

In recent years the number of evaluations, reviews and assessments has grown to support this. Donors require more robust performance monitoring systems, which increases data collection and reporting demands on projects and programmes. With growing numbers of people in need, there is also greater demand for cost-effective approaches, which in turn require performance analysis. These pressures call for investment in a more systematic approach to monitoring and evaluation, and the humanitarian sector has developed resources that can support such investment.

The Core Humanitarian Standard and M&E

The Core Humanitarian Standard (CHS), developed as a collective effort by the humanitarian community, establishes nine commitments for effective and accountable humanitarian action. These commitments are closely linked with, and require, tailored monitoring and evaluation capacity during the emergency response phase. Integrating monitoring and evaluation, together with accountability and learning, helps organisations plan and give an account of their efforts in terms of, for example, the appropriateness, relevance and effectiveness of the response (CHS Commitments 1 and 2) or continuous learning and improvement (CHS Commitment 7).


M&E in the Humanitarian Programme Cycle

The humanitarian programme cycle (HPC) is an improved humanitarian response model developed as part of the IASC's Transformative Agenda, and it integrates monitoring and evaluation throughout. Its monitoring framework, built on performance indicators, involves humanitarian actors, clusters and OCHA, and provides humanitarian actors with the evidence needed to take decisions and adapt short- and long-term strategies.

An inter-agency humanitarian evaluation (IAHE) is an independent assessment of the results of the collective humanitarian response by IASC partners to a specific crisis. It contributes to accountability and strategic learning for the humanitarian system, and seeks to promote human dignity and empower affected people.

M&E in the HPC aims to promote accountability, ensuring that organisations remain answerable to affected people, national authorities, donors and the general public.

The MEAL approach

ALNAP emphasises that “The processes of monitoring and evaluation are much easier if the foundations have been laid during the needs assessment phase and during programme design. Evaluation processes can be facilitated if they can be built on a solid monitoring basis and evaluations should be built into project design from the beginning so as to contribute effectively to learning and accountability.”

MELMOPs

DRC’s Monitoring, Evaluation and Learning Minimum Operational Procedures (MELMOP) and Evaluation Policy describe basic project and programme monitoring and evaluation, including tools and processes for developing project management instruments and for learning from evaluations.

The annual Compliance Self-Check monitors performance against these standards and provides recommendations. The HQ-based MEAL team maintains a registry of evaluations performed across DRC, provides technical support for evaluations and monitors compliance with the policy through the Compliance Self-Check.

Common definitions

Monitoring

A continuing function that uses systematic collection of data on specific indicators to provide management and the main stakeholders of an ongoing humanitarian intervention with indications of the extent of progress, achievement of objectives and progress in the use of allocated funds. (Based on OECD-DAC, 2002)
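
As a concrete illustration of this definition, the sketch below shows how progress against indicators and the use of allocated funds might be tracked. It is a minimal, hypothetical Python example; the indicator names, targets and figures are invented for illustration and do not represent DRC data.

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        target: float  # planned value for the reporting period
        actual: float  # value achieved so far

        def progress(self) -> float:
            # Achievement against target, as a percentage
            return 100 * self.actual / self.target

    # Hypothetical figures, for illustration only
    indicators = [
        Indicator("Households receiving cash assistance", target=2000, actual=1450),
        Indicator("Water points rehabilitated", target=40, actual=31),
    ]
    budget_allocated, budget_spent = 500_000, 310_000

    for ind in indicators:
        print(f"{ind.name}: {ind.progress():.0f}% of target")
    print(f"Budget utilisation: {100 * budget_spent / budget_allocated:.0f}%")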

Evaluation

A systematic and objective assessment of an ongoing or completed project, programme or policy, including its design, implementation and results. (OECD-DAC)

Evaluation of humanitarian action

A systematic and objective examination of humanitarian action, intended to draw lessons to improve policy and practice and enhance accountability. (ALNAP: Evaluation of Humanitarian Action)

Learning

The process through which experience and reflection lead to changes in behavior or the acquisition of new abilities. (ALNAP: Evaluation of Humanitarian Action)

Accountability

Accountability is the means through which power is used responsibly. It is a process of taking into account the views of, and being held accountable by, different stakeholders, and primarily the people affected by authority or power. (ALNAP: Evaluation of Humanitarian Action)


Common challenges

  • Learning or reporting on projects
  • Budgeting for M&E
  • Staff and structure
  • Quality of, and sampling in, evaluation (see the sampling sketch after this list)
  • Indicators
  • Short time span
  • Access and restrictions
  • Data protection and Information Management
  • Mobile data collection
  • Training of staff
  • Having a comprehensive system
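
For the sampling challenge above, a common starting point when designing an evaluation survey is a standard sample-size calculation for estimating a proportion. The sketch below applies the usual formula with a 95% confidence level and a 5% margin of error, plus a finite population correction; the population figure is hypothetical. In practice the result is usually inflated further to allow for non-response and, in cluster sampling, for design effects.

    import math

    def sample_size(population: int, p: float = 0.5,
                    z: float = 1.96, margin: float = 0.05) -> int:
        # p = 0.5 gives the most conservative (largest) sample size
        n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
        n = n0 / (1 + (n0 - 1) / population)       # finite population correction
        return math.ceil(n)

    # Hypothetical sampling frame of 3,000 assisted households
    print(sample_size(3000))  # -> 341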

Core guidance and standards


Guidance on Indicators


Auxiliary websites, resources

  • MEASURE Evaluation. An online resource that provides a host of materials for MEAL learning, including trainer resources, online courses, a webinar series, publications, tools and much more.
  • ALNAP. The Active Learning Network for Accountability and Performance in Humanitarian Action offers a variety of resources supporting evaluation in humanitarian contexts.
  • Better Evaluation. An international collaboration to improve evaluation practice and theory by sharing and generating information about options (methods or processes) and approaches.

[1] ALNAP, Insufficient Evidence? The Quality and Use of Evidence in Humanitarian Action, https://www.alnap.org/system/files/content/resource/files/main/alnap-study-evidence.pdf

Questions?

For questions related to Monitoring and Evaluation, please contact:

Name: Kordian Kochanowicz
Position: Monitoring and Evaluation Advisor
Email: [email protected]