By Dr Diana Pritchard, External Evaluator, University of Bedfordshire

04 July 2019 - 10:02

The evaluation plan for the SPHEIR programme includes questionnaire surveys of educators and students.  Here, we describe the scope and nature of these surveys and the rationale behind the design.  We think this will be of interest to several audiences.  

For some of the SPHEIR partnerships, it could highlight the value of the questions and the resultant data for their own monitoring and evaluation purposes.  And, since international donors and governments are paying increased attention to higher education reform, there is wider interest in the development of approaches and instruments that evaluate education interventions. 

But first, some context on the evaluation of the SPHEIR programme. It is being evaluated from 2017 to 2023 to cover the period of its implementation plus one year, allowing data to be collected on the destinations of graduates.  The evaluation is being carried out by a consortium comprising three organisations: IPE Tripleline, Technopolis Group and the University of Bedfordshire.  

Our task is to gain a better understanding of the design aspects that make for successful interventions in higher education and to improve knowledge on the longer-term impacts of interventions which strengthen higher education.  

Educator and student level assessments: just two of five levels

The ambitious scope of change anticipated by the SPHEIR programme, and the diverse portfolio of partnerships, requires an evaluation which captures changes at five different levels: the system, the world of work/employer sector, the partnership, the educator and the student.  In this blog we focus on the educator and student levels.

Why measure at the educator and student level? 

Evaluating at educator and student levels is crucial for several reasons.  We need the data to evaluate the extent to which SPHEIR achieves the programme’s intended overall impact: enabling ‘HEIs to contribute more effectively to economic development and growth, public institutions and civil society’.  We also need it to evaluate programme outcomes on quality in the delivery of teaching and learning in higher education institutions, and the impact of the programme on students/graduates who are socio-economically disadvantaged or living with disabilities (including refugees).   

And, more technically, measuring at these levels will enable evaluation of the extent to which the outputs of the SPHEIR programme are in line with its Theory of Change and corresponding indicators which are shown below.  

SPHEIR higher-level changes as set out in the SPHEIR Theory of Change

Impact level: Higher Education Institutions contribute more effectively to economic development and growth and to strengthened public institutions and civil society

Longer-term outcomes:
  1. Number and quality of graduates better meet needs and shortages in the labour market (public and private sectors and enterprise development)
  2. Improved graduate outcomes including graduate employability
  3. Improved quality and efficiency of the HE sector, including through a strengthened regulatory framework

Intermediate outcomes:
  1. Increased and more equitable access and retention
  2. Increased quality and relevance in the delivery of teaching and learning, and student experience in HE
  3. Strengthened governance, leadership and institutional management in partner HEIs and beyond, respecting diversity principles

Generic elements of survey questionnaire design  

DFID set out the key areas for evaluation: teaching and learning; issues of equality, inclusiveness and accessibility; and HE provision that supports students to be career-ready and able to contribute positively to society.    

Survey questionnaires are our main tool to collect quantitative and qualitative data, complemented by focus groups.  From the outset, practical realities and resourcing limitations ruled out extensive use of more in-depth qualitative approaches.  Observations or reflective accounts may best capture the processes linked to pedagogical changes and learning development, but could not be used given the number and geographical spread of the partnerships.  

As we are familiar with instruments used to evaluate HE provision and learning, we incorporated questions that have been developed and tested internationally for validity across geographical and socio-cultural contexts.  So, the surveys reflect good sector practice.

Student questionnaire survey

This survey evaluates impacts of SPHEIR interventions at programme level.  We recognised that the relatively short implementation period of partnership projects would mean that only minor changes in student learning and graduate outcomes would be evident.  But evaluating students (and eventually graduates) was a requirement of DFID, since students are the main beneficiaries of SPHEIR and the key pathway to eventual economic and social impacts.    

In response, we aimed to produce an instrument that could capture, and link, changes occurring in both teaching and learning. Additional resourcing from DFID enabled us to develop a survey which is pedagogically grounded and methodologically robust.  

Scope of the student questionnaire

Various sections of the questionnaire deal directly with primary areas of the Theory of Change.  But it is the section dedicated to evaluating provision and the development of 21st Century competences, including those for salaried work and entrepreneurship, that makes our survey innovative.  

The imperative to evaluate 21st Century competences 

We understood the significance of evaluating the relevance of SPHEIR's education provision in relation to the changes unfolding across the world.  Such 'global megatrends' include technological innovations, climate chaos, ecosystem degradation, resource limitation, demographic shifts and unemployment, and rising geopolitical instability accompanied by unprecedented forced migration.  In light of these challenges and disruptions, graduates require a set of competences to function effectively in their salaried and entrepreneurial work, and within their communities.

Defining generic 21st Century competences 

In leading this task, the University of Bedfordshire opted to consider 'competences' rather than skills, because competence is the broader concept.  Competence refers to sufficiency in, and the interaction of, a range of skills, knowledge, intrinsic characteristics and attitudes, as well as the underlying values and ethical principles. 

An obvious place to start to identify key competences was to review relevant international frameworks for employability, entrepreneurship, global citizenship and sustainability.  These revealed many commonalities such as problem solving, communication, teamwork and responsibility. 

The challenge throughout the selection process was to limit the number of competences to avoid a lengthy survey.  Starting from our initial list of fifteen, we sought input from the SPHEIR partnerships themselves.  At a SPHEIR workshop in Nairobi (October 2018), they devised their own list of 21st Century competences.  This process reduced the list to ten. It also confirmed consensus on competences including critical thinking, problem solving and collaboration, and generated some corresponding indicators.  

Predictably, given the diversity of partnerships, it also surfaced divergences which required us to adapt or eliminate some competences.  For the PADILEIA partnership project working with refugees, the phrasing 'active citizenship' proved inappropriate given the undefined legal status of many of its students.  The partnership offered alternative wording referring to 'responsibility in the community', and the indicator question it proposed, about 'volunteering in the community', was incorporated into the survey.  

While all agreed that communication skills and information technology skills were key competences, these were excluded because there seemed to be no obvious shared indicator questions that could accommodate the different levels to which they are developed within the various study programmes. 

Involving stakeholders in survey design has many benefits.  In this case, it meant the selection was tailored to fit the SPHEIR portfolio.  Also, feedback from some of the partnerships indicated that this process was useful in showing where our survey could align with their own monitoring instruments. 

Mixed methods of assessment of 21st Century competences

A parallel element of the survey design involved choosing from different types of assessment approaches that are appropriate for distinct cultural contexts.  In the end we incorporated mixed methods, establishing another feature likely to generate broader interest.    

We included many 'student engagement' questions (from the US National Survey of Student Engagement) which have already been used and tested in HEIs in several African countries.  These measure provision and learning by asking what students do (the time and energy they devote to educationally purposive activities), and what institutions do (the extent to which they provide effective educational practices and learning opportunities).  The relevance of these questions was examined at a mixed workshop in Bedfordshire (UK), involving the evaluators, the UK's AdvanceHE, the Fund Manager and representatives of some of the lead partners.

We also included other questions drawn from existing surveys and, in the case of provision for change and innovation, we devised our own.  A quantitative reasoning test was built in, which has the benefit of being a good indicator of basic numeracy, literacy and problem solving.  

Finally, psychometric questions on self-efficacy and self-regulated behaviours were included because these enhance academic performance and can be developed through good pedagogical approaches.
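To illustrate how psychometric items of this kind are typically scored (a minimal sketch, not the evaluators' actual procedure; the items, scale width and reverse-coding are hypothetical): responses on a Likert scale are averaged into a single scale score, with any negatively worded items reverse-coded first.

```python
# Illustrative scoring of a short self-efficacy scale.
# Likert responses run 1 ("strongly disagree") to 5 ("strongly agree").
# Items and coding are hypothetical, for illustration only.

LIKERT_MAX = 5

def score_scale(responses, reverse_items=()):
    """Average Likert responses into a scale score, reverse-coding
    negatively worded items (e.g. 'I give up easily') so that a
    higher score always means higher self-efficacy."""
    adjusted = [
        (LIKERT_MAX + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# One student's answers to four items; the item at index 3 is
# negatively worded, so its rating of 2 becomes 4 before averaging.
print(score_scale([4, 5, 3, 2], reverse_items={3}))  # -> 4.0
```

Scores computed this way at baseline and endline can then be compared to look for changes in self-efficacy over the life of a partnership project.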

Photo: At a workshop at the University of Bedfordshire (September 2018), SPHEIR participants explored the value of student engagement questions to evaluate the SPHEIR programme.
Photo: Kenyatta University (KU) hosted the implementation of the pilot survey.  Here, KU staff (from the PEBL and Kenya-Nottingham partnerships) and students with the blog author.
Photo: Students in Myanmar complete the student surveys that will serve for the baseline evaluation of TIDE.

Survey pre-testing and piloting

Pre-testing the survey at the University of Bedfordshire involved diverse student groups, including individuals from different countries of origin, ethnic backgrounds and academic levels.  Beyond highlighting where language needed to be modified, feedback reflected students’ curiosity about the nature of the questions which got them thinking in new ways about their different learning experiences. 

The survey was then piloted in Nairobi (Kenya) in November 2018 thanks to the collaboration of the Kenya-Nottingham and PEBL partnerships based at Kenyatta University.  The pilot reiterated the need to adapt surveys to distinct cultural perspectives and political contexts: changes were subsequently made to avoid sensitivities linked to questions about gender, ethnicity and the levels of education of family members.  For example, the adjective 'ethnic' was removed from the question, which now asks whether students identify as 'belonging to a minority group'. 

Creatively, students proposed and articulated an indicator question about IT, which asks about the extent to which study programmes provide opportunities to 'develop an online presence to promote [students'] skills and engage with the market'. This enabled us to include this competence after all.

Lecturer survey

The educator survey is designed to capture the levels and types of staff development, including changes in the approaches that lecturers adopt in their teaching and learning practices. It also serves to assess the extent to which educators develop student learning in relevant competences.   

Professional development opportunities 

Identifying opportunities for professional development and training delivered by HEIs to educators, and collecting relevant data, gives us an overview of the provision offered within the SPHEIR programme.  By asking about the factors which motivate or deter staff from taking these up, we can obtain wider information about the circumstances which facilitate reform and innovation. After all, our evaluation must contextualise findings in the realities of working environments for professionals within the sector across the SPHEIR countries. 

Ultimately, staff development and training activities represent just the 'input' side of teaching enhancement and curriculum reform.  Attending a training activity does not necessarily or immediately translate into an improvement in the quality of that lecturer's teaching and learning practice.  

Hence the need, anticipated in this survey, to obtain measurements which capture lecturers' current, and changing, 'relational' attitudes and approaches to teaching: how they understand their role in relation to students and the learning process.   

Measuring teaching approaches

Pedagogical research shows that teacher-focused approaches, so common throughout higher education teaching, limit the learning process of students.  Where teachers see their role primarily as transmitting knowledge, students' learning outcomes and development, such as critical thinking and problem-solving skills, suffer.  

By contrast, where educators adopt more student-focused approaches to teaching, students adopt deeper approaches to learning, resulting in better learning outcomes. These are reflected in cognitive development, which is central to HE provision.  

For the design of our lecturer survey, this means that instead of asking about any new teaching methods that lecturers may adopt, we rely on questions that access the understandings lecturers have of their role in teaching and learner development. Changes in approach from, say, transmission of knowledge to facilitation of learning, positively reflect on the impact of their professional development activities. 

The corresponding questions are psychometric, some drawn from the internationally validated Approaches to Teaching Inventory. They ask about understandings of the purpose of note taking, class debate and assessment, and the significance of feedback.  Pre-testing and piloting of these surveys amongst a diverse group of lecturers at the University of Bedfordshire confirmed the reliability of these questions. 
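By way of illustration only: inventories of this kind, including the Approaches to Teaching Inventory, group items into a teacher-focused (information transmission) subscale and a student-focused (conceptual change) subscale, with each lecturer receiving a mean score per subscale.  The sketch below assumes hypothetical item numbers and groupings; the inventory's actual items are not reproduced here.

```python
# Illustrative subscale scoring in the style of the Approaches to
# Teaching Inventory (ATI). Item numbers and groupings are hypothetical.
from statistics import mean

# Map each subscale to its (assumed) item numbers:
# ITTF = information transmission / teacher-focused
# CCSF = conceptual change / student-focused
SUBSCALES = {"ITTF": [1, 3, 5], "CCSF": [2, 4, 6]}

def subscale_scores(responses):
    """responses: {item_number: Likert rating 1-5} -> mean per subscale."""
    return {
        name: mean(responses[i] for i in items)
        for name, items in SUBSCALES.items()
    }

# One lecturer's (made-up) ratings across the six items.
lecturer = {1: 4, 2: 2, 3: 5, 4: 3, 5: 4, 6: 2}
print(subscale_scores(lecturer))
```

A fall in the teacher-focused score and a rise in the student-focused score between baseline and endline surveys would suggest a shift towards the facilitation of learning described above.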

Mirroring questions

An important aspect of our methodology is to ensure that the surveys mirror each other such that student and lecturer experiences of the curriculum and teaching practices can be captured and compared.  

By viewing a phenomenon from the perspectives of two of the major beneficiaries of SPHEIR interventions, our findings can be validated. Where differences or disconnects emerge, these can be explored in more detail through our subsequent focus groups.   
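One simple way such mirrored responses might be compared (an assumed analysis, with a hypothetical item, ratings and threshold, not the evaluation's actual procedure) is the gap between the mean student rating and the mean lecturer rating on each mirrored item:

```python
# Illustrative comparison of one mirrored survey item.
# Ratings share a 1-5 scale, e.g. how often classes involve group
# problem solving. Item and data are hypothetical.
from statistics import mean

students = [2, 3, 2, 1, 3, 2]
lecturers = [4, 4, 5]

# Positive gap: lecturers report the practice more than students
# experience it; negative gap: the reverse.
gap = mean(lecturers) - mean(students)

FLAG_THRESHOLD = 1.0  # assumed cut-off for a notable disconnect
needs_followup = abs(gap) >= FLAG_THRESHOLD
print(round(gap, 2), needs_followup)
```

Items flagged in this way are exactly the differences and disconnects that can then be explored in the subsequent focus groups.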

Finally

The design, testing and implementation of the surveys could not have been done without collaboration with the SPHEIR partnerships.  They worked with us to provide comments and insights, merged questions to better align with their own needs, and helped implement the surveys on the ground.  This highlights the importance of close cooperation and involvement and means that the survey results will be useful for the stakeholders involved.

Dr Diana Pritchard

External Evaluator, University of Bedfordshire

Dr Diana Pritchard is a core member of the External Evaluation team. She leads on employability, sustainability and diversity inclusion at the Centre for Learning Excellence at the University of Bedfordshire.