Improving Student Performance – Interventions

This long overdue blog post is about the hard part of formative assessment – using the information I’d gained about each student to intervene with them in order to improve their performance.

Before I delve into the details of the different strategies I employed, I want to outline a few points about my own opinions on intervention. An intervention should never be used, or thought of, as a substitute for quality teaching during the regular school day, and it’s important that pupils and parents never form this opinion (e.g. “it’s ok to take Johnny out of class to go on holiday because there’ll be a revision session later on”). That’s why I’ve always refused to do after-school revision sessions close to the exams: they only promote an exam/test ‘cramming’ culture rather than one of hard work over time (i.e. a growth mindset). That said, interventions can be a powerful tool, as long as they are used throughout the year and not just near exams. I also believe that it’s up to every individual teacher to intervene with their classes on a regular basis in order to help students improve their knowledge and understanding, and to become more reflective about their learning. This year my department has introduced the idea of dedicating one lesson every two weeks to do just that, and so far it’s been very positive.

OK, so on to the intervention strategies. I tried six:

1. Starters on different topics – while I was teaching the next unit, I used starters based on the topics that most of the class had performed badly in. However, it was hard to keep them to 10-15 minutes, and some took over the entire lesson (such as when I tried to go over standard index form in 10 minutes!). I think my mistake was trying to teach a concept in that time rather than practise one, so this approach was probably best for ‘amber’ concepts, where most students already had some understanding of the topic.

2. Homeworks on topics – generally using BBC Bitesize and online videos (e.g. my-gcsescience.com). I had some success with this, but it was a battle getting students to complete the work (I had to call a lot of parents), so it was quite time-intensive.

3. Some 1-2-1 after-school sessions for a couple of students with low confidence in the subject. This was very successful, but again time-intensive.

4. I’d recently bought ‘Talk-Less Teaching’ and used ‘Teach me, tell me (and then tell me more!)’ from p156. This strategy involves students questioning each other using cards with two different questions on them – one knowledge-based, the other application-based. I got the students to write the questions for a random topic (picked out of a ‘hat’). They did OK at writing their own questions, although some needed quite a lot of support. However, the students really enjoyed questioning and explaining the answers to each other.

Since then I’ve found that this is also a Kagan structure (Quiz-Quiz-Trade), and for Science teachers there are question cards already prepared on the excellent blog by Daria Kohls (listed as ‘Quiz, Quiz, Trade’ cards – these have one question per card, not two as in ‘Talk-Less Teaching’).

5. Multiple choice quiz for the whole topic – I used the questions from the BBC Bitesize website for the topic along with a few of my own. After completing it, students collected a mark scheme to mark it, and then filled in an analysis sheet to record their mistakes and the reasons for them (e.g. ‘didn’t understand the question’, ‘didn’t know the science’, etc.). They then created a revision sheet covering what they needed to revise.

What surprised me about this was how the students responded – they were completely focused for the whole lesson and really seemed to enjoy it. I think this may be because (a) what they were doing was completely personalised to them, and (b) they received instant feedback on the test by marking it themselves, and then used that to determine their next step (so, in a sense, it was gamified). Hopefully this also motivated them to use the revision sheet they created to revise at home for the second test.

6. For the second test, I told the students that instead of them getting their results individually, I would publish them colour-coded and in rank order on my classroom door window, so the whole class, and everybody else who passed by, could see them. This visibly got the attention of most of the students! Hopefully it gave them a little extra incentive to revise for the test. That said, I only did this because they are a higher ability group and want to do well. I’d think very carefully about using it with a lower ability class where there may be a certain amount of kudos in underperforming.

Out of all the strategies, I’ll definitely be using the ‘quiz-trade’ card activity (#4) as well as my multiple choice test and mark activity (#5). I’ll also continue to use the starters to revisit old topics (#1), but only those that students have some understanding of and require more practice.

This post is by no means comprehensive in its review of intervention/revision strategies. What do you do? What have you found particularly successful (or not!)? Leave a comment below.

Making Better Use of Assessments – Part 2

A few months ago I wrote here about how I’d analysed the results from a year 10 class test with a spreadsheet to give me a breakdown of every student’s performance against every specification objective/concept.

Since then I’ve been experimenting with how I could use the data to improve my students’ performance. In this blog post I’ll give details of the students’ performance after those interventions (I’ll write another blog post about the specific intervention approaches I tried).

Interventions complete, I tested the students again with the same test as before (so I could use the same spreadsheet). It had been approximately three months since they’d last seen the test, so it was highly unlikely that they’d remember much about the questions.

Below are two screenshots of the spreadsheet breakdown of the test results for every spec objective, before and after interventions (the student numbers differ from those in the previous blog post because students who were not present for the second test have been removed). The RAG thresholds used were: 0-39% red, 40-69% amber, 70-100% green.

Breakdown of test 1 against objectives (before interventions).

Breakdown of test 2 against objectives (after interventions).
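
(As an aside, if you ever wanted to reproduce the colour coding in a script rather than with spreadsheet conditional formatting, the banding rule boils down to the little Python sketch below – the function name and example scores are made up purely for illustration.)

    def rag_band(percent):
        # Classify a percentage score using the RAG thresholds above.
        if percent < 40:
            return "red"
        elif percent < 70:
            return "amber"
        return "green"

    # Example: band one student's per-objective scores (invented numbers)
    scores = {"use the wave equation": 21, "uses of EM waves": 55, "reflection": 80}
    bands = {objective: rag_band(score) for objective, score in scores.items()}
    print(bands)  # {'use the wave equation': 'red', 'uses of EM waves': 'amber', 'reflection': 'green'}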

It can clearly be seen that many improvements have been made for all students. The screenshot for test 2, as well as showing the individual strengths and weaknesses of the students, also shows that most students still struggle with certain areas, mainly to do with red-shift and the universe (although some improvements have been made in those areas too).

Previously I’d identified the ‘use of the wave equation’ objective as a priority for interventions, as the first test had shown that few students could use the equation, and those that could had trouble using it when given data in standard index form. This objective increased overall from 21% in test 1 to 51% in test 2. Interrogation of the marks showed that most students could now use the wave equation correctly, but they still struggled with numbers in standard index form, so this is something that requires ongoing work.

To quantify the improvement, the number of red, amber and green objectives was totalled for both tests and the percentage change calculated (shown below).

Percentage changes in number of red, amber and green objectives following interventions.

The rows highlighted in orange are for those students who joined my class from another just before I administered the first test but hadn’t been taught the Waves topic.
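
(If you wanted to reproduce these totals outside of a spreadsheet, something like the rough Python sketch below would do it. The student data is invented, and the change metric is simplified to ‘how many objectives moved into green, as a percentage of all objectives’ – it’s meant as an illustration of the idea rather than a copy of my spreadsheet formulas.)

    from collections import Counter

    def band_counts(bands):
        # Count how many of one student's objectives fall into each RAG band.
        counts = Counter(bands.values())
        return {colour: counts.get(colour, 0) for colour in ("red", "amber", "green")}

    # Invented example for one student: RAG band per objective, test 1 vs test 2
    test1 = {"use the wave equation": "red", "reflection": "amber", "the EM spectrum": "green"}
    test2 = {"use the wave equation": "amber", "reflection": "green", "the EM spectrum": "green"}

    before, after = band_counts(test1), band_counts(test2)
    total_objectives = len(test1)

    # Express the improvement as the change in green objectives,
    # as a percentage of all the objectives assessed.
    green_change = 100 * (after["green"] - before["green"]) / total_objectives
    print(before, after, f"{green_change:.0f}% of objectives moved into green")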

These numbers show that, on average, the entire class increased their proficiency in a third of the objectives. The students who had not been taught the topic previously (and so had only been exposed to the material during the interventions) unsurprisingly showed a bigger increase – 52% on average. The students who had been taught the topic previously increased by just under a third. However, some individual students made much bigger gains (83% and 50%). This gives me valuable evidence not only of what I have done with the students, but also of how individual students have responded to the interventions, and so how they are likely to respond to interventions in the future.

My spreadsheet also calculated the percentage scores for multiple choice and non-multiple choice questions, as well as higher and foundation tier questions, and the overall percentage for the entire test.  These numbers are shown below for both tests (again the students not taught the topic prior to test 1 are highlighted).

Overall figures for different question types and the whole test.

From these figures I calculated the percentage changes between the tests, which are shown below.

Percentage change in the overall figures for different question types and the whole test.

This shows that students improved in all areas of the test, but the increase was smaller for the higher tier questions, so this is an area in need of further work.

From this exercise I’ve learned not only the individual strengths and weaknesses of students, but also which parts of the topic students struggle with the most (and so require further work), and how individual students respond to interventions. It also gives me clear evidence of what I’ve done in the classroom and the results of my interventions. In my next blog post I’ll be writing about what intervention strategies I tried and how successful they were.

Making Better Use of Assessments

As a Science teacher, and like many other teachers, I’ve routinely given my students a test after we’ve finished work on a given topic. However, these short topic-by-topic tests have effectively been summative assessments – the result gets recorded in an Excel spreadsheet, students receive brief feedback (their percentage and grade), and we move on to the next topic.

For a long time I’ve been a firm believer that those completed test papers hold a wealth of information on each student’s individual performance. But without the time to go over the papers and analyse them in depth by question and subtopic, that information has remained well and truly locked away.

Inspired by Kev Lister’s blog post, I decided to produce a test and spreadsheet to analyse the results for the GCSE Physics Waves topic I’d recently taught to a year 10 class. I’d recently read Driven by Data, which stressed the importance of assessments being as rigorous as the final summative assessment – in this case the final GCSE exams. So I downloaded a few exam papers (both Foundation and Higher tier) and cut out all the Waves topic questions to compose a test.

Using the exam board specification, I copied all the statements about what students should be able to do for the topic.  I had to tweak them a little, and break a few large statements into several shorter ones, but the specification was a good place to start.  These would be the objectives which I’d assess my students against.  Each question in my test was then ‘tagged’ with the objective(s) it was testing, in some cases seven or more.

Now came the hard work – creating the spreadsheet and coding all the connections between the objectives and questions, so that each student would get a percentage score for how well they’d met each assessment objective. I also wanted to know how well each student answered multiple choice versus free-response questions, and their performance on foundation versus higher tier questions, so I coded these associations too.
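
(To give a feel for what that coding amounts to, here’s a simplified Python sketch of the per-objective calculation. The questions, tags and marks below are invented for illustration – they’re not taken from my actual test or spreadsheet.)

    from collections import defaultdict

    # Each question carries its available marks, the objectives it was tagged with,
    # and whether it was multiple choice / higher tier (all invented examples).
    questions = {
        "Q1":  {"max": 1, "objectives": ["recall the order of the EM spectrum"], "mcq": True,  "higher": False},
        "Q2a": {"max": 3, "objectives": ["use the wave equation"],               "mcq": False, "higher": False},
        "Q2b": {"max": 2, "objectives": ["use the wave equation",
                                         "use standard index form"],             "mcq": False, "higher": True},
    }

    # One student's marks per question (again invented).
    student_marks = {"Q1": 1, "Q2a": 2, "Q2b": 0}

    earned = defaultdict(int)
    available = defaultdict(int)
    for qid, q in questions.items():
        for objective in q["objectives"]:
            earned[objective] += student_marks.get(qid, 0)
            available[objective] += q["max"]

    objective_percent = {obj: 100 * earned[obj] / available[obj] for obj in available}
    print(objective_percent)
    # {'recall the order of the EM spectrum': 100.0, 'use the wave equation': 40.0, 'use standard index form': 0.0}

The multiple choice/free-response and foundation/higher splits work in exactly the same way, just grouping marks by those flags instead of by objective.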

This whole process was quite complex and took a considerable amount of time. However, by the end of it I was the most excited I’ve ever been about giving a test! The students took the test, and that same night I marked the papers and input the marks for all the individual question items into the spreadsheet, giving me the screen shown below.

Objective analysis screenshot

From this screen it’s easy to see that there are some objectives that were generally answered well, and a lot more that weren’t! These would need some attention from me to improve the students’ performance.

It’s also clear that seven students came out red for every objective. One student (student 8) was ill and so didn’t complete the test, whilst the others had recently joined my class from another (all from the same class). These students had arrived in my class after I had taught the topic, but this didn’t matter (so I thought) as each GCSE class should have been taught it. However, when I spoke to these students I found that they had not been taught the topic – something I wouldn’t have uncovered without doing this test. Now I can try to remedy it through interventions.

I was quite intrigued when looking at the ‘use of the wave equation’ objective. Most students performed badly on this; however, some students scored 44%, getting almost half the marks. There were four wave equation questions in the test. How could students answer some of these correctly and the others incorrectly? Looking at the marks for the individual questions, I noticed that these students answered the questions using simple numbers correctly, whereas they answered incorrectly when the questions used numbers in standard index form (i.e. 3×10^8). So this is something that also needs some intervention.

Second analysis screen

The second analysis screen, shown above, summarised how students performed in multiple choice and non-multiple choice questions, as well as higher and foundation tier questions. It also gave an overall percentage and estimated grade (omitted from the screenshot above). Most students performed better in multiple choice questions than in non-multiple choice ones, so they need more practice in how to answer exam-style questions. This screen also shows that my students performed much better in the foundation tier questions, so they need practice in answering higher tier questions too.

This analysis gave me a lot of information about my students.  However, did my students perform as I’d expected, better or worse?

Before the test I predicted every student’s performance in each area (whether I thought they would achieve a red, amber or green for each objective). When compared to the students’ actual performance, my predictions turned out to be 59% wrong!
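
(Checking the predictions is just a cell-by-cell comparison of predicted band against actual band. In Python it amounts to something like the sketch below – again with invented data.)

    # Predicted vs actual RAG band for each (student, objective) cell – invented data.
    predicted = {("student 1", "use the wave equation"): "green", ("student 1", "reflection"): "amber",
                 ("student 2", "use the wave equation"): "red",   ("student 2", "reflection"): "green"}
    actual    = {("student 1", "use the wave equation"): "amber", ("student 1", "reflection"): "amber",
                 ("student 2", "use the wave equation"): "red",   ("student 2", "reflection"): "amber"}

    wrong = sum(1 for cell in predicted if predicted[cell] != actual[cell])
    print(f"{100 * wrong / len(predicted):.0f}% of predictions were wrong")  # 50% for this toy data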

For whatever reason, my perceptions of my students’ performance were wrong; this test analysis tells me their real performance. Over the coming weeks I’ll be using the data to identify the sub-topics that require improvement for each student and perform interventions with them to improve their performance. The details of this will be in a follow-up blog post.

Would you like this kind of data analysis for every test that you did with your classes?  How would you use the data?  And what do you think the benefits would be?  Let me know by answering the brief questions below.