As we near the closure phase of the SPHEIR programme, we highlight the role that the SPHEIR Fund Manager’s Monitoring, Evaluation and Learning (MEL) team played in guiding the partnerships through the process of conducting evaluations, and the different approaches the partnerships adopted.
Project evaluations provide an opportunity for teams to assess progress against initial objectives, generate knowledge about what has been achieved (including unintended outcomes) and reflect on lessons learned. Summative evaluations measure outcomes against pre-determined goals and frameworks, while formative evaluations support continuous improvement.
SPHEIR projects have been conducting their evaluations before the end of the programme in order to make the most of the potential for MEL activities to support good project management and results, and to promote accountability and learning within and beyond each partnership. In the long run, the evaluations will help the teams assess whether the higher education reform approaches they chose were successful. They also provide useful lessons for funders and practitioners designing future interventions.
The guiding principles
Early on, the MEL team encouraged partnerships to reflect on some guiding principles when planning their evaluations, including the use of their project’s theory of change and key learning questions, and the needs and interests of key stakeholder groups. The OECD’s Development Assistance Committee criteria (efficiency, effectiveness, impact, sustainability, coherence and relevance) were also used to guide the partnerships’ planning.
The SPHEIR Fund Manager’s MEL team offered ongoing support to all the partnerships: from determining the high-level questions that should be addressed, to identifying opportunities for collaboration and synergy with other partnerships and the SPHEIR External Evaluator, through to advising on the type of evaluation approach and on whether collection of further data was necessary. The team also recommended that projects form an evaluation advisory, steering or reference team to ensure that partnerships were committed to a collaborative, participatory approach to co-design and implementation.
Meeting the needs of stakeholders
In developing their evaluation plans, partnerships were encouraged to adopt a strong ‘utilisation’ focus, designing their evaluations to meet the needs and interests of key stakeholders rather than treating them as a compliance exercise. The key ideas for utilising results varied and included:
- a summary report outlining the key findings and learnings for external dissemination (LEAP);
- translation of reports into Arabic for sharing with community users (PADILEIA);
- possible learning publications produced by different project partners (TESCEA); and
- wider dissemination of findings among partnerships, partners, funders and other stakeholders (all the partnerships).
The great majority of the final evaluations were conducted by a single external evaluator or a team of experts. As an alternative, TESCEA adopted an internally driven process: the evaluation was co-designed and co-implemented with its primary users to embed learning in the process, while an external analyst oversaw the analysis of the data and the interpretation of the findings. This approach was effective in creating strong ownership of the evaluation findings among project partners, and several other projects have held workshops with partners to collaboratively validate and prioritise learning and results and to generate recommendations.
Some partnerships carried out a formative evaluation to generate learning that enabled improvements to their ongoing work, as well as a summative evaluation. For example, the PADILEIA team commissioned a rapid evaluation to capture learning from adapting its pathways to enable fully remote, online course delivery in response to Covid-19.
Find out more about the objectives of the various evaluations and the approaches adopted by the partnerships. Please note that links to project evaluations will be added below as and when they become available.
Partnership for Enhanced and Blended Learning (PEBL). Read the report.
- Objective: To focus on how far the model demonstrates ‘proof of concept’ with regard to the network of universities developing and sharing blended learning modules, and any emerging lessons which may inform proposals for scaling up.
- Approach: A two-stage approach to the summative evaluation: an evaluability assessment, completed in 2020, followed by a summative evaluation facilitated by an external team of consultants.
Transforming Employability for Social Change in East Africa (TESCEA). Read the report.
- Objective: To gain a better understanding of which aspects of the TESCEA project, if any, have contributed to transforming the way universities teach and learn; to understand how the different components of the project have affected teaching and learning within universities; and to generate a broad scope of learning beneficial to the wider sector.
- Approach: An internally driven evaluation combining elements of both formative and summative assessment, with the main criteria focused on effectiveness, sustainability, equity and learning.
Assuring Quality Higher Education in Sierra Leone (AQHEd-SL). Read the report.
- Objective: To summarise the progress that the project has made towards achieving the intended objectives; examine and analyse the processes of the project.
- Approach: A summative evaluation facilitated by a team of external evaluators; data was collected from partner institutions using a mix of methods, including document analysis, surveys and qualitative approaches.
Partnership for Digital Learning and Increased Access (PADILEIA). Read the report.
- Objective: To provide learning for partners to feed into institutional goals around connected learning, both for disadvantaged groups and more generally; and to provide original insights in a compelling way for broader stakeholders.
- Approach: A rapid evaluation of Covid-19 adaptations by the project followed by a summative evaluation to test PADILEIA’s theory of change and assumptions.
Pedagogical Leadership in Africa (PEDAL). Read the report.
- Objective: Intended for both accountability and learning, informing programme improvements and lesson learning, and highlighting best practices, challenges, barriers and successes.
- Approach: A theory-based summative evaluation using mixed methods, facilitated by an external team of consultants.
Prepared for Practice (PfP). Read the executive summary of the report. Read the full report.
- Objective: To provide a rigorous and independent evaluation of the project, assessing its progress in achieving its core outcomes; to test the assumptions underpinning the project’s theory of change; and to clearly articulate how and why change happens, and for whom.
- Approach: A single summative evaluation using a mixed-methods approach to properly address its lines of enquiry, conducted by an independent team of evaluation experts and researchers.
Transformation by Innovation in Distance Education (TIDE). Read the report and the management response.
- Objective: To address three overarching questions linked to the project’s theory of change: the changes that have occurred as a result of TIDE interventions; the most influential factors contributing to those changes; and the lessons learned from TIDE interventions and how to apply them.
- Approach: A formative evaluation in 2020 was followed by a summative evaluation in 2021.
The Lending for Education in Africa Partnership (LEAP). Read the report.
- Objective: To assess the LEAP programme’s progress to date against its theory of change, generating information about what has been achieved and what lessons to draw for improving the project as LEAP continues to scale up.
- Approach: A theory-based, non-experimental formative evaluation facilitated by an external team of evaluators, making an interim assessment of LEAP’s path to viability and sustainability based on its theory of change.