The evaluation plan for the SPHEIR programme includes questionnaire surveys of educators and students. Here, we describe the scope and nature of these surveys and the rationale behind the design. We think this will be of interest to several audiences.
For some of the SPHEIR partnerships, it may highlight the value of the questions and the resulting data for their own monitoring and evaluation purposes. And since international donors and governments are paying increased attention to higher education reform, there is wider interest in the development of approaches and instruments for evaluating education interventions.
But first, some context on the evaluation of the SPHEIR programme. The programme is being evaluated from 2017 to 2023, covering the period of its implementation plus one year, allowing data to be collected on the destinations of graduates. The evaluation is being carried out by a consortium of three organisations: IPE Tripleline, Technopolis Group and the University of Bedfordshire.
Our task is to gain a better understanding of the design aspects that make for successful interventions in higher education and to improve knowledge on the longer-term impacts of interventions which strengthen higher education.
Educator and student level assessments: just two of five levels
The ambitious scope of change anticipated by the SPHEIR programme, and the diverse portfolio of partnerships, requires an evaluation which captures change at five different levels: the system, the world of work/employer sector, the partnership, the educator and the student. In this blog we focus on the educator and student levels.
Why measure at the educator and student level?
Evaluating at educator and student levels is crucial for several reasons. We need the data to evaluate the extent to which SPHEIR achieves the programme’s intended overall impact: enabling ‘HEIs to contribute more effectively to economic development and growth, public institutions and civil society’. We also need it to evaluate programme outcomes on quality in the delivery of teaching and learning in higher education institutions, and the impact of the programme on students/graduates who are socio-economically disadvantaged or living with disabilities (including refugees).
And, more technically, measuring at these levels will enable evaluation of the extent to which the outputs of the SPHEIR programme are in line with its Theory of Change and corresponding indicators which are shown below.
**SPHEIR higher level change levels as set out in the SPHEIR Theory of Change**

| Level | Intended change |
| --- | --- |
| Impact level | Higher Education Institutions contribute more effectively to economic development and growth, and to strengthened public institutions and civil society |
Generic elements of survey questionnaire design
DFID set out the key areas for evaluation: teaching and learning; equality, inclusiveness and accessibility; and HE provision that supports students to be career-ready and able to contribute positively to society.
Survey questionnaires are our main tool to collect quantitative and qualitative data, complemented by focus groups. From the outset, practical realities and resourcing limitations ruled out extensive use of more in-depth qualitative approaches. Observations or reflective accounts may best capture the processes linked to pedagogical changes and learning development but could not be used considering the numbers and geographical spread of the partnerships.
As we are familiar with instruments used to evaluate HE provision and learning, we incorporated questions that have been developed and tested internationally for validity across geographical and socio-cultural contexts. So, the surveys reflect good sector practice.
Student questionnaire survey
This survey evaluates impacts of SPHEIR interventions at programme level. We recognised that the relatively short implementation period of partnership projects would mean that only minor changes in student learning and graduate outcomes would be evident. But evaluating students (and eventually graduates) was a requirement of DFID, since students are the main beneficiaries of SPHEIR and the key pathway to eventual economic and social impacts.
In response, we aimed to produce an instrument that could capture, and link, changes occurring in both teaching and learning. Additional resourcing from DFID enabled us to develop a survey which is pedagogically grounded and methodologically robust.
Scope of the student questionnaire
Various sections of the questionnaire obviously deal with primary areas of the Theory of Change. But it is the section dedicated to evaluating provision and the development of 21st Century competences, including for salaried work and entrepreneurship, which makes our survey innovative.
The imperative to evaluate 21st Century competences
We understood the significance of evaluating the relevance of education provision of SPHEIR in relation to the changes that are unfolding across the world. Such ‘global megatrends’ include technological innovations, climate chaos, ecosystem degradation, resource limitation, demographic shifts and unemployment, and rising geopolitical instability accompanied by unprecedented forced migration. In light of these challenges and disruptions, graduates require a set of competences to function effectively in their salaried and entrepreneurial work, and within their communities.
Defining generic 21st Century competences
In leading this task, the University of Bedfordshire opted to consider 'competences' rather than skills, because competence is the broader concept: it refers to sufficiency in, and the interaction of, a range of skills, knowledge, intrinsic characteristics and attitudes, together with underlying values and ethical principles.
An obvious place to start to identify key competences was to review relevant international frameworks for employability, entrepreneurship, global citizenship and sustainability. These revealed many commonalities such as problem solving, communication, teamwork and responsibility.
The challenge throughout the selection process was to limit the number of competences to avoid a lengthy survey. From our initial list of fifteen, we sought inputs from the SPHEIR partnerships themselves. At a SPHEIR workshop in Nairobi (October 2018), they devised their own list of the 21st Century competences. This process reduced the list to ten. It also confirmed consensus on competences including critical thinking, problem solving and collaboration and generated some corresponding indicators.
Predictably, given the diversity of partnerships, it also surfaced divergences that required us to adapt or eliminate some competences. For the PADILEIA partnership project working with refugees, the phrasing of 'active citizenship' proved inappropriate given the undefined legal status of many of its students. It offered alternative wording referring to 'responsibility in the community', and the indicator question it proposed, about 'volunteering in the community', was incorporated into the survey.
While all agreed that communication skills and information technology skills were key competences, these were excluded because there seemed to be no shared indicator questions that could accommodate the different levels to which they are developed within the various study programmes.
Involving stakeholders in survey design has many benefits. In this case, it meant the selection was tailored to fit the SPHEIR portfolio. Also, some of the partnerships reported that the process was useful in indicating where our survey could align with their own monitoring instruments.
Mixed methods of assessment of 21st Century competences
A parallel element of the survey design involved choosing from different types of assessment approaches that are appropriate for distinct cultural contexts. In the end we incorporated mixed methods, establishing another feature likely to generate broader interest.
We included many ‘student engagement’ questions (from the US National Survey of Student Engagement) which have already been used and tested in HEIs in several African countries. These measure provision and learning by asking what students do (the time and energy they devote to educationally purposive activities), and what institutions do (the extent to which they provide effective educational practices and learning opportunities). At a mixed workshop in Bedfordshire (UK), involving the evaluators, the UK’s AdvanceHE, the Fund Manager and representatives of some of the lead partners, the relevance of these questions was examined.
We also included other questions drawn from existing surveys and, in the case of provision for change and innovation, devised our own. A quantitative reasoning test was built in, which has the benefit of being a good indicator of basic numeracy, literacy and problem solving.
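To illustrate what scoring such a built-in test can involve, here is a minimal sketch. The item IDs and answer key are hypothetical, not the actual SPHEIR instrument; it simply computes the proportion of correct multiple-choice answers, treating unanswered items as incorrect.

```python
# Hypothetical answer key for a short multiple-choice quantitative
# reasoning test (illustrative only, not the SPHEIR questionnaire).
ANSWER_KEY = {"qr1": "b", "qr2": "d", "qr3": "a", "qr4": "c"}

def reasoning_score(responses: dict) -> float:
    """Return the proportion of items answered correctly (0.0-1.0).

    Unanswered items count as incorrect; answers are compared
    case-insensitively after stripping whitespace.
    """
    correct = sum(
        1
        for item, key in ANSWER_KEY.items()
        if responses.get(item, "").strip().lower() == key
    )
    return correct / len(ANSWER_KEY)

# Example: three of four items correct.
print(reasoning_score({"qr1": "b", "qr2": "d", "qr3": "c", "qr4": "c"}))  # 0.75
```

A proportion-correct score like this is easy to aggregate across cohorts and to compare between baseline and endline survey waves.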
Finally, psychometric questions on self-efficacy and self-regulated behaviours were included, because these qualities enhance academic performance and can be developed through good pedagogical approaches.
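Psychometric scales of this kind are typically scored by averaging Likert-scale items, with negatively worded items reverse-coded first. The sketch below shows that common practice with hypothetical item names; it is not the actual SPHEIR scoring procedure.

```python
# Illustrative scoring of a self-efficacy scale from 5-point Likert
# responses (1 = strongly disagree ... 5 = strongly agree).
# Item names and the reverse-coded set are hypothetical.
LIKERT_MAX = 5
REVERSE_CODED = {"se3"}  # e.g. a negatively worded item

def scale_score(responses: dict) -> float:
    """Mean item score after reverse-coding; assumes complete responses."""
    adjusted = [
        (LIKERT_MAX + 1 - value) if item in REVERSE_CODED else value
        for item, value in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example: se3 is reverse-coded (2 becomes 4), so the mean is (4+5+4)/3.
print(round(scale_score({"se1": 4, "se2": 5, "se3": 2}), 2))  # 4.33
```

Averaging rather than summing keeps scores comparable when different survey versions carry different numbers of items.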