I recently attended the first workshop of the CGIAR Evaluation Community of Practice (ECoP) in Rome, on 29-31 October. The workshop launched the ECoP, which is being instated to strengthen evaluation, promote evaluative thinking in the CGIAR and serve as a platform to connect those interested in following and participating in CGIAR systems and processes for evaluation. There were 45 of us participating: evaluation focal points and other staff with significant evaluation-related responsibilities from CRPs and Centres; the IEA head and staff; and representatives of the Consortium Office (CO), SPIA and ILAC. It was an interesting mix of people which I believe reflects well the current status of “evaluation intelligence” in the system: a strong and formal (if overworked) IEA and CO; a loosely articulated group of evaluation focal points; an even more informal fringe of people who move between Centres and CRPs, monitoring and evaluation, and several related topics such as data management, knowledge sharing and capacity building; and yet another group of “externals”, happy to contribute but a bit doubtful when trying to draw parallels between this System and any other they have ever encountered. I was one of the Community Stewards who, under the patient leadership of the facilitator, Julia Compton, supported workshop preparation, facilitation and review.
During these three days, key elements of CGIAR evaluation guidance were presented: the IEA’s Four-Year Rolling Evaluation Work Plan (REWP), CRP-Commissioned External Evaluations (CCEEs) and the Independent External Evaluations of CRPs. An outside evaluation expert provided capacity strengthening in an evaluation method, much in the way we would like the ECoP to operate in the future, given its role in strengthening members’ capacity, and there was the opportunity to discuss and agree on some activities and next steps for the ECoP. Rachel Bedouin (Head of IEA), Anne-Marie Izac (CO) and Tim Kelly (SPIA) shared informative material on evaluation in the reformed CGIAR: the roles of the central bodies (IEA, SPIA and the Consortium Office), the REWP, CRP-level evaluations and CCEEs.
The IEA is based at FAO in Rome and operates in close consultation with the Consortium. It collaborates with the ISPC, SPIA and the Evaluation and Impact Assessment Committee (EIAC), reports to the Fund Council (FC), and is in charge of conducting system-level evaluations, for accountability, decision-making support and institutional learning, of specific cross-CGIAR issues, of the CGIAR System as a whole, and of other CGIAR institutions such as the FC and the CO. The IEA is also in charge of the Independent Evaluations of the CRPs. Last, but not least, it is charged with coordinating and harmonizing evaluation across the whole system (as described in the REWP) and with facilitating this ECoP.
The Rolling Evaluation Work Plan (REWP)
The REWP document mostly covers the two main aspects of the IEA’s duties: evaluations (of CRPs and of cross-CGIAR issues) and strengthening evaluation capacity. Table 1 (below) gives an overview of the IEA’s evaluation plan for 2013-17. While CRPs are in charge of monitoring (collecting data on inputs, outputs and immediate outcomes) and, with the support of SPIA, of impact assessment (studying later outcomes and linkages with impact), the IEA is in charge of evaluations. On capacity building, we discussed the plans for maintaining and strengthening this ECoP, as well as the IEA’s role in coordinating evaluation plans with CRPs, in particular the CRP-commissioned evaluations (CCEEs).
CCEEs are evaluations commissioned by CRP management, and they can cover a broad scope: themes, subthemes or crosscutting topics within a CRP. A CCEE is carried out by external evaluators, with the inputs and advice of a “reference group”; an evaluation manager plans and designs the ToR and contracts and manages the external evaluators; and a CRP governance body commissions the evaluation, oversees it and follows up on its recommendations. Perhaps the least straightforward issue here is the scope of CCEEs, although the guidance clarifies that over a CRP’s lifetime all the major funding and outcome areas should have been “CCEE’d”. Also clear is that any CCEE must address all of the main evaluation criteria: relevance, effectiveness, efficiency, impact, sustainability and quality of science. The tricky part is striking the required balance between depth of analysis and coverage of the evaluation, and dealing with the current overlap in responsibility for evaluating outcomes (see Figure 1, with the overlap marked in red).
Figure 1: Monitoring and impact assessment flow into evaluation (IEA), and overlap in assessing outcomes
The IEA has done a thorough job of putting together the what and how of CCEEs and an articulated plan. There was some discussion, though I don’t think enough, about scope: what is worthy of evaluation focus? In programmes of such duration and with such objectives, where are we monitoring (activities and the road to outputs, but also outcomes) and where are we evaluating?
Some CRPs (the brave ones: GRiSP, WHEAT/MAIZE, RTB, A4NH, FTA and AAS) presented progress on their evaluation plans. There was a broad range of progress in these plans, and in the focus of the presentations. I was leading one of the groups, so I could only attend three of the six, but from the reports back in plenary the main conclusion I draw is that CRPs are still at a very incipient stage of formalizing what they will be doing for M&E, and that each CRP is “going at it” from a very different point of view. But there is hope: amid quite some confusion, and in the absence of rigidly directed M&E, a lot of innovation and interesting takes on the issues are emerging, adding diversity and potential.
Finally, we discussed follow-up areas for the ECoP: (1) providing resources (such as a roster of evaluators, a database of evaluative studies in coordination with SPIA’s similar endeavour, a website and a harmonized glossary of evaluation terms, among others); (2) capacity building and training on evaluation topics of interest to ECoP members and to all M&E-related personnel in CGIAR; and (3) forums and discussions. Some of us will continue to work with the IEA to advance these areas.
Some Challenges Going Ahead
CRPs are at a very incipient stage of evaluation thinking, planning and resourcing. Not all CRPs and Centres have dedicated staff for M&E, and M&E plans are still very general and not well disseminated. While there is clear guidance on conducting CCEEs (except for the scope and “aggregation” challenges mentioned above), guidance is missing to date in other key areas of monitoring and evaluation, such as Centre/CRP coordination, inter-CRP synergies, “immediate” outcomes (a new term: are these what were previously called research outcomes?) and longer-term outcomes (are these IDOs?). While it is understood that the IEA is not responsible for providing this guidance, it is hard to separate such a fluid process/cycle into the three neat windows of monitoring, impact assessment and evaluation.