
ED:TALK – Evidence & Dialogue Toolkit


Step 3: Decide what evidence to generate

First: understanding evaluation

What are you evaluating against? What is your goal?

Remember that in your project you are trying to address a very specific goal. If you are unsure about your goal, go back to Step 1 and decide what this goal is. This is hugely important, because understanding if your project has been successful and effective relies on you specifying that goal accurately.

Revisit the theory of change you have already developed

In Step 2 you developed your project and thought of the mechanisms by which the project you have selected might bring about the change you are seeking. This theory of change is as important as specifying your goal, because it allows you to understand which aspects of your project you need to systematically focus on so that you can find out how and if it’s meeting your intended goal.

If you have not already done this, return to Step 2 and briefly outline the way you think your project might contribute to your goal.

What evaluation approach is most appropriate?

Deciding early on what your evaluation approach is will allow you to seamlessly integrate it into your project's implementation.

Evaluation is a systematic process that looks to explore if, and how, a specific activity or programme is effective. It usually looks at the processes you’ve identified in your theory of change above.

This section of the Toolkit introduces you to a very light-touch version of evaluation that is practically implementable in a school context while drawing on robust research methodologies and analytical approaches.

Do not worry if you are unfamiliar with the terminology or the approach in general. This section will provide enough background on both so that you can evaluate your project with confidence!

In deciding what evaluation approach is most appropriate, the guiding principle behind the Toolkit is that evaluation should be planned and undertaken alongside the implementation of your project.

What is evaluation?

Evaluation is a wide-ranging but systematic approach employing different types of methods to explore (1) the effectiveness or efficiency of a particular activity or programme for a defined outcome; and/or (2) the process by which any such effects on those outcomes may have come about.

Types of evaluation

When the focus is on the effectiveness of a given activity or project, we’re looking at an impact or outcome evaluation. When the focus is on the process of undertaking that project, we are looking at a process evaluation.

Impact and process evaluations are sometimes done separately, but in order to get the best understanding of your project, we suggest looking at both together.

There are many evaluation research designs out there. The Education Endowment Foundation, for instance, champions the use of randomised controlled trials in education. These can provide causal evidence as to the impact of a programme. Other approaches exist, for example realist evaluation, each with their own assumptions.

Types of evidence

Evaluation can generate different types of evidence. These range from causal evidence, which demonstrates that a specific programme has a causal impact on the outcome that you are focused on, to narrative evidence that does not use new sources of data, but builds upon existing evidence to make a reasoned case as to why the project you are implementing might be making a difference.

The type of evidence that you want to generate depends on the project you are undertaking, and what you consider will be most useful for you.

The type of evidence that this Toolkit will help you generate is empirical, based on new data that you collect as part of the evaluation in a simple and straightforward manner.

There is a lot of talk in education about whether evaluation leads to a reductive approach, whereby we are only ever interested in hard outcomes that can be measured. This is not the case for the approach the Toolkit takes. We think of evaluation as more than just the application of statistical analysis or the implementation of a randomised controlled trial (though the initial research evidence the Toolkit builds on is derived in that manner).

Evaluation, and self-evaluation in particular, can be a very useful reflective tool that generates good actionable evidence for you to use in your teaching practice.

The key challenge in evaluation — and how to address it

The key challenge in evaluation is to establish a counterfactual. In other words, to understand what would have happened had you not implemented your project. This is not possible (it would require time travel!) but there is a close second best: find a comparison or control group, that is, a group of students who are very similar to the students you will be working with, but who do not take part in your project.

Control groups are best found by randomly allocating students to take part in a programme (random allocation would also allow you to make causal claims about your project). We realise this is not always possible in the context of your school, so the next best thing is to find a similar comparison group.
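If you do have the freedom to allocate randomly, the allocation itself is simple. The sketch below is purely illustrative (the pupil names are made up, and you would use your own class list); it shuffles a list and splits it in half, which is all random allocation amounts to in practice:

```python
import random

# Hypothetical class list -- replace with your own pupils (anonymised if needed).
students = ["Amir", "Bella", "Chloe", "Dan", "Esme", "Femi", "Grace", "Hugo"]

random.seed(42)  # fixing the seed makes the allocation reproducible
shuffled = random.sample(students, k=len(students))

half = len(shuffled) // 2
project_group = sorted(shuffled[:half])   # takes part in the project now
control_group = sorted(shuffled[half:])   # takes part later, or not at all

print("Project group:", project_group)
print("Control group:", control_group)
```

A spreadsheet can do the same job (e.g. sorting the class list by a column of random numbers); the point is only that chance, not you, decides who is in which group.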

The Toolkit evaluation approach

The evaluation approach the Toolkit suggests you adopt focuses on the generation of empirical evidence in a practical and straightforward manner. We are aware of the time constraints teachers come up against in their work, so the approach is modular.

This modular approach allows you to choose which configuration of modules you want to adopt: you can do Module 1 on its own; Module 3 on its own; Modules 1 and 2 together; or all three Modules for a comprehensive account. Modules 1 and 2 are about the impact of your project; Module 3 is about the process.

Module 1: before-and-after

Module 1 focuses on a before-and-after comparison that allows you to understand the change that has occurred in your project group towards the goal that you have set when developing your project. It uses resources that you can take as given or adapt to suit your context or the needs of your pupils. This approach produces empirical but not causal evidence.

Rationale: use an existing measurement tool, like the questionnaires in our Resources section, or a test to capture the aspect you are focusing on in your project before and after you have implemented it.

Steps:

  1. Choose your measure in a way that reflects your overall goal for your project and the outcome you are trying to attain
  2. Administer the questionnaire or test to the students you will be working with, before you start your project, and enter your results into a simple spreadsheet
  3. Administer the questionnaire or test to the students you have worked with once you have implemented your project, and enter your results into the spreadsheet
  4. Explore the before-and-after change by using any spreadsheeting software with which you are familiar
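The "explore" step is just a subtraction per pupil and an average. The sketch below shows the same arithmetic your spreadsheet would do; the pupil names, scores, and 0-10 scale are all made up for illustration:

```python
# Illustrative before-and-after scores -- substitute your own questionnaire
# or test results; the scale here is an assumption.
before = {"Amir": 4, "Bella": 6, "Chloe": 5, "Dan": 3}
after  = {"Amir": 6, "Bella": 7, "Chloe": 6, "Dan": 5}

# Change per pupil: after-score minus before-score
changes = {name: after[name] - before[name] for name in before}

# Average change across the project group
mean_change = sum(changes.values()) / len(changes)

print(changes)      # per-pupil change
print(mean_change)  # -> 1.5 on these made-up numbers
```

In a spreadsheet this is one column of differences and an AVERAGE formula; looking at the per-pupil spread as well as the mean tells you whether the change was broad or driven by a few pupils.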

Module 2: comparison group

Module 2 focuses on addressing the key evaluation challenge outlined above, namely finding a counterfactual. The rationale is that, in the absence of a randomly chosen control group (which is usually not possible in day-to-day practice), the next-best solution is to choose a group of pupils who are very similar to the group you will be implementing your project with, but who do not take part in your project. (They can take part later on, of course; and since you will most likely be testing this with a small group to begin with, there are no ethical dilemmas to contend with here.) Usually the easiest comparison group is a different class in the same year group, which you also teach but with whom you are not doing your project.

Rationale: undertaking the before-and-after comparison with a group that is similar to your project group allows you to understand if the change you may have observed in your project participants is associated with taking part in the project as opposed to normal development.

Steps:

  1. Choose a comparison group that is as similar as possible to the group you will be working with
  2. Use the same measurement tool, and the same data collection approach as with your project group
  3. Undertake data collection by administering the questionnaire or test to your comparison group
  4. Enter your data in the same spreadsheet as your project group and explore it in the same way, by looking at the differences in change between the two groups
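The comparison in the final step is the difference between the two groups' changes. The sketch below uses made-up scores purely to show the logic; your spreadsheet would compute exactly the same quantities:

```python
# Illustrative made-up scores for both groups -- replace with your own data.
project_before,    project_after    = [4, 6, 5, 3], [6, 7, 6, 5]
comparison_before, comparison_after = [5, 4, 6, 5], [5, 5, 6, 6]

def mean(xs):
    return sum(xs) / len(xs)

# Change in each group: mean after-score minus mean before-score
project_change    = mean(project_after) - mean(project_before)        # 1.5
comparison_change = mean(comparison_after) - mean(comparison_before)  # 0.5

# Difference in change between the groups: the part of the project
# group's improvement not seen in the comparison group
difference_in_change = project_change - comparison_change             # 1.0

print(project_change, comparison_change, difference_in_change)
```

If the comparison group improved nearly as much as the project group, the change is more likely down to normal development than to your project; a clearly larger change in the project group is the signal you are looking for.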

Module 3: process focus

Module 3 is about the process of implementing your project.

Module 3 focuses on qualitatively exploring the experiences of your participants in your project, as well as your own reflections of implementing it in your context, to identify both enabling factors and barriers that you have faced.

Rationale: qualitatively exploring your participants’ and your own attitudes, perceptions, behaviours and practices allows you to engage with how your project is working.

Steps:

  1. Select the project participants you will be talking to
  2. Carry out systematic conversations with these participants by using a consistent set of questions or prompts for discussion; or carry out observations of students' learning in your project lessons
  3. Reflect on your experience undertaking the project
  4. Document all your findings and reflections in a brief narrative account