
Making Better Use of Assessments

As a Science teacher, like many other teachers, I've routinely given my students a test after we've finished work on a given topic. However, these short topic-by-topic tests have effectively been summative assessments: the result gets recorded in an Excel spreadsheet, students receive brief feedback (their percentage and grade), and we move on to the next topic.

For a long time I've been a firm believer that those completed test papers contain a wealth of information on each student's individual performance. But without the time to go over the test papers and analyse them in depth by question and subtopic, that information has remained well and truly locked away.

Inspired by Kev Lister's blog post, I decided to produce a test and a spreadsheet to analyse the results for the GCSE Physics Waves topic I'd recently taught to a Year 10 class.  I'd recently read Driven by Data, which stressed the importance of assessments being as rigorous as the final summative assessment, in this case the final GCSE exams.  So I downloaded a few exam papers (both Foundation and Higher tier) and cut out all the Waves topic questions to compose a test.

Using the exam board specification, I copied all the statements about what students should be able to do for the topic.  I had to tweak them a little, and break a few large statements into several shorter ones, but the specification was a good place to start.  These would be the objectives which I’d assess my students against.  Each question in my test was then ‘tagged’ with the objective(s) it was testing, in some cases seven or more.
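For anyone wanting to try something similar, the tagging step boils down to a lookup table from each question to the objective(s) it assesses and the marks available for it. Here's a minimal sketch in Python rather than a spreadsheet (the question IDs, objective wording and mark allocations below are invented purely for illustration):

```python
# Invented question IDs, objective statements and mark allocations, for illustration only.
# Each exam question is tagged with the specification objective(s) it assesses
# and the maximum marks available for it.
questions = {
    "Q1a": {"objectives": ["describe transverse and longitudinal waves"],
            "max_marks": 1},
    "Q1b": {"objectives": ["use the wave equation"],
            "max_marks": 2},
    "Q2":  {"objectives": ["recall the meaning of wavelength and frequency",
                           "use the wave equation"],
            "max_marks": 3},
}
```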

Now came the hard work: creating the spreadsheet and coding all the connections between the objectives and questions, so that each student would get a percentage score for how well they'd met each assessment objective.  I also wanted to know how well each student answered multiple choice versus free-response questions, and their performance on Foundation versus Higher tier questions, so I coded these associations too.
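In spreadsheet terms this was a grid of marks with lookup-style formulas behind it. As a rough Python illustration (with the tagging compressed to short objective codes and all marks invented), the per-objective percentages could be worked out like this:

```python
from collections import defaultdict

# Same shape as the tagging sketch above, compressed to short objective codes;
# all codes and marks here are invented for illustration.
questions = {
    "Q1a": {"objectives": ["W1"],       "max_marks": 1},
    "Q1b": {"objectives": ["W3"],       "max_marks": 2},
    "Q2":  {"objectives": ["W2", "W3"], "max_marks": 3},
}
marks = {  # marks[student][question] = marks awarded on the test
    "Student 1": {"Q1a": 1, "Q1b": 0, "Q2": 2},
    "Student 2": {"Q1a": 0, "Q1b": 2, "Q2": 3},
}

def objective_percentages(student_marks):
    """Score on each objective as a percentage of the marks available for it."""
    scored, available = defaultdict(int), defaultdict(int)
    for qid, info in questions.items():
        for obj in info["objectives"]:
            scored[obj] += student_marks.get(qid, 0)
            available[obj] += info["max_marks"]
    return {obj: round(100 * scored[obj] / available[obj]) for obj in available}

for student, student_marks in marks.items():
    print(student, objective_percentages(student_marks))
```

The key design choice is that a question tagged with several objectives contributes its marks to every one of them, which is how a single paper can still give a score against each individual objective.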

This whole process was quite complex and took a considerable amount of time.  However, by the end of it I was the most excited I've ever been about giving a test!  The students took the test, and that same night I marked the papers and entered the marks for every question item into the spreadsheet, giving me the screen shown below.

Objective analysis screenshot

From this screen it's easy to see that some objectives were generally answered well, and a lot more that weren't!  These will need some attention from me to improve the students' performance.

It's also clear that seven students came out red for every objective.  One student (student 8) was ill and so didn't complete the test, while the other six had all recently joined my class from the same other class.  These students arrived in my class after I had taught the topic, but this didn't matter (so I thought), as every GCSE class should have been taught it.  However, when I spoke to these students I found that they had not been taught the topic, something I wouldn't have uncovered without this test.  Now I can try to remedy it through interventions.

I was quite intrigued by the 'use of the wave equation' objective.  Most students performed badly on it, but some scored 44%, getting almost half the marks.  There were four wave equation questions in the test, so how could students answer some of them correctly and the others incorrectly?  Looking at the marks for the individual questions, I noticed that these students answered correctly when a question used simple numbers, but incorrectly when it used numbers in standard index form (e.g. 3×10^8).  So this is something that also needs some intervention.

Second analysis screen

The second analysis screen, shown above, summarised how students performed on multiple choice versus non-multiple choice questions, as well as on Higher and Foundation tier questions.  It also gave an overall percentage and estimated grade (omitted from the screenshot above).  Most students performed better on multiple choice questions than on non-multiple choice ones, so they need more practice answering exam-style questions.  This screen also shows that my students performed much better on Foundation tier questions, so they need practice answering Higher tier questions too.
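Under the hood, that second screen is just another way of grouping the same grid of marks. A rough sketch of the idea, with the question styles, tiers and marks all invented for illustration:

```python
# Invented metadata and marks, for illustration only: each question is labelled
# with its style (multiple choice or free response) and the tier it came from.
question_info = {
    "Q1a": {"style": "multiple_choice", "tier": "foundation", "max_marks": 1},
    "Q1b": {"style": "free_response",   "tier": "foundation", "max_marks": 2},
    "Q2":  {"style": "free_response",   "tier": "higher",     "max_marks": 3},
}
student_marks = {"Q1a": 1, "Q1b": 1, "Q2": 0}

def percentage_by(label):
    """Percentage scored within each group of questions sharing the given label."""
    totals = {}
    for qid, info in question_info.items():
        group = info[label]
        scored, available = totals.get(group, (0, 0))
        totals[group] = (scored + student_marks.get(qid, 0), available + info["max_marks"])
    return {group: round(100 * s / a) for group, (s, a) in totals.items()}

print(percentage_by("style"))  # {'multiple_choice': 100, 'free_response': 20}
print(percentage_by("tier"))   # {'foundation': 67, 'higher': 0}
```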

This analysis gave me a lot of information about my students.  But had they performed as I'd expected, or better, or worse?

Before the test I predicted every student's performance in each area (whether I thought they would achieve a red, yellow or green for each objective).  When compared to the students' actual performance, my predictions turned out to be 59% wrong!
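For anyone curious where a figure like that comes from, it's simply the proportion of (student, objective) predictions whose red/yellow/green band didn't match the band the actual test score fell into. A minimal sketch, with the band thresholds and all the data invented for illustration:

```python
# Invented thresholds and data, purely to show the comparison.
def band(percentage):
    """Convert an objective percentage into a red/yellow/green band (thresholds assumed)."""
    if percentage >= 70:
        return "green"
    if percentage >= 40:
        return "yellow"
    return "red"

# predicted[student][objective] = the band I expected;
# actual[student][objective] = the percentage scored on the test.
predicted = {"Student 1": {"W1": "green", "W3": "yellow"},
             "Student 2": {"W1": "red",   "W3": "green"}}
actual    = {"Student 1": {"W1": 80,      "W3": 10},
             "Student 2": {"W1": 50,      "W3": 75}}

checks = [predicted[s][o] == band(actual[s][o])
          for s in predicted for o in predicted[s]]
wrong = 100 * checks.count(False) / len(checks)
print(f"{wrong:.0f}% of predictions were wrong")  # 50% for this invented data
```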

For whatever reason, my perceptions of my students' performance were wrong; this test analysis tells me their real performance.  Over the coming weeks I'll be using the data to identify the sub-topics each student needs to improve, and carry out interventions with them to improve their performance.  The details will be in a follow-up blog post.

Would you like this kind of data analysis for every test you set your classes?  How would you use the data?  And what do you think the benefits would be?  Let me know by answering the brief questions below.