Performance tasks are designed to let most students experience success while giving each student the chance to demonstrate learning below, at, and above the expected standard.
We have asked teachers to design performance tasks with 25% of material below the expected level, 60% at the expected level and 15% above the expected level.
That is easy to write but in reality, it is difficult to do.
There are two models of assessment.
The Quality Model
The most obvious method of assessment is to ask the students to do something and see how well they do it.
By referring to a scaled set of criteria or descriptors, the examiner judges which number on the scale best indicates how well each student has performed.
We call this the quality model of measurement because central to it is the process of judging the quality of a performance.
Competitive ice dancing uses a pure quality model. The skater is expected to go out and perform in a way that impresses the judges as much as possible.
In practice, the method is limited to the sorts of activities in which we can observe students operating in reasonably standard circumstances, such as speaking in a foreign language, playing the piano, or painting in oils. It is not so appropriate for measuring understanding of theories in subjects such as mathematics, science or geography.
Marking an English essay uses the quality model.
The 25/60/15 breakdown is not really possible with quality-model tasks. Their nature is usually such that students can show whether they are below, at or above the expected level anyway.
The Difficulty Model
The usual approach consists of setting a series of tasks that vary in difficulty, often starting with easy ones and moving on to harder ones. What we ‘see’ is which tasks each student can do successfully.
We call this the difficulty model of measurement because central to it is the mapping of students’ ability to the difficulties of a graded set of tasks (it has also been called the counting strategy).
A high jump competition is a clear example of the difficulty model, consisting of a series of tasks of ever-increasing difficulty which continues until everybody has failed.
Assessing knowledge in Maths, Science and Humanities is often done using the difficulty model. Some grammar and writing-conventions questions in English can be assessed using the difficulty model too, but in my experience English staff are loath to venture into it.
The Quality model requires a lot of work after the assessment! Watch the English staff cross marking and trying to make consistent judgments against complicated rubrics or marking guides. It is time-consuming.
The Difficulty model requires a lot of work to do before the assessment. Devising questions of increasing difficulty is not an easy task.
Our maths team is doing a great job in this area.
Here is how they have structured their common Moodle tests.
It is a 30-question test comprising:
- 8 questions below standard
- 18 questions at standard
- 4 questions above standard
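It is worth checking how closely that split matches the 25/60/15 target. A quick calculation (the numbers come straight from the test structure above):

```python
# How the maths team's 8/18/4 split of a 30-question test maps
# onto the 25/60/15 target percentages.
counts = {"below": 8, "at": 18, "above": 4}
total = sum(counts.values())  # 30 questions in all

for band, n in counts.items():
    print(f"{band}: {100 * n / total:.1f}%")
# below: 26.7%
# at: 60.0%
# above: 13.3%
```

So the test sits at roughly 27/60/13, which is as close to 25/60/15 as a 30-question test allows.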
The Facility Index shows what percentage of students answered the questions correctly.
Here is a graphical representation of that.
Discrimination efficiency measures how well a question separates stronger students from weaker ones.
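For anyone curious about what sits behind these two statistics, here is a minimal sketch. It assumes a 0/1 response matrix (rows are students, columns are questions) and uses a simple upper/lower-group discrimination index to illustrate the idea; Moodle's own "discrimination efficiency" is computed with a more involved formula, so treat this as the concept rather than Moodle's exact calculation.

```python
def facility_index(responses, q):
    """Percentage of students who answered question q correctly."""
    scores = [row[q] for row in responses]
    return 100.0 * sum(scores) / len(scores)

def discrimination_index(responses, q):
    """Difference in success rate on question q between the top and
    bottom thirds of students, ranked by total test score.
    Near 1.0: the question separates strong and weak students well;
    near 0: it tells you nothing about overall ability."""
    ranked = sorted(responses, key=sum, reverse=True)
    third = max(1, len(ranked) // 3)
    top, bottom = ranked[:third], ranked[-third:]
    p_top = sum(row[q] for row in top) / third
    p_bottom = sum(row[q] for row in bottom) / third
    return p_top - p_bottom

# Hypothetical data: 6 students, 3 questions
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(facility_index(responses, 0))       # 4 of 6 correct, about 66.7
print(discrimination_index(responses, 0)) # 1.0: only stronger students got it
```

In this toy data, question 0 was answered correctly only by the higher-scoring students, so it has a high discrimination index even though its facility is middling.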
You can see that almost all students answered the below-level questions correctly (thankfully). Question 5 is a bit of a blip.
Let’s look at that.
It is a worry that only 74% of our year 7 cohort can do 7×8 without a calculator. Also, note that this question is a good discriminator. Students who know their tables do better on the whole test.
The other key point here is that the 25% of content below level does not have to be just below. It can be very basic content.
Q24 shows the power of using Moodle quizzes as you are able to see the misconceptions and easily reteach the skill.
Rounding to the nearest place value is an important skill in science so we’d really like the maths teachers to make sure that more than 37.65% of our students can do it.
A quick reteach of this part of the course is in order.
Some staff have questioned the idea of testing students on material “above level”. Is it fair to test students on material we have not explicitly taught them? My answer would be that just because we have not taught them something does not mean they don’t know it.
Q26 involves directed number. For non-maths teachers, this means positive and negative numbers, a concept usually not taught until Year 8.
Nine of our year 7 students knew how to do it. Someone must have shown them.
The question does not necessarily have to be something we have not taught. It may just be a very complex example where students need to combine a number of concepts to come up with the correct answer.
Once again, 9 students were able to answer a very challenging question for Year 7s.
The high jump analogy of the difficulty model provides a useful insight here. In the high jump, you don’t set the bar at the standard height for Year 7 and record everyone who can jump over. You start low and gradually raise the bar until everyone fails to clear it. This may be a useful analogy to share with your high flyers: students with perfectionist tendencies who are used to getting 100% on every test need to know that they too will eventually be stumped, because we will keep ramping up the difficulty until they can’t do it.
That is the only way we know how high they can actually jump.
Well done maths team.
There is a lot of work in setting these assessments up, but now that they are set, they should not need significant alteration from year to year.
Now you can really get into the analysis of what students can and can’t do yet. Mainframe 1, 3, 4 and 10.