
Exam Results

maggiebee | 13:53 Thu 13th Aug 2020 | News
97 Answers
Considering the debacle surrounding the Scottish exam results, you would have thought that England would have learned from it. Apparently not.

https://www.theguardian.com/education/2020/aug/11/pressure-grows-on-government-over-england-a-level-results-mess-coronavirus

Answers

81 to 97 of 97


Best Answer

No best answer has yet been selected by maggiebee.
Frankly, the students need to stop whingeing about "the system" that adjusted their grades and, instead, address their displeasure to their teachers who thought they were all wonderful and gave them inflated predicted grades that they neither deserved nor got.
Which students didn't deserve the predicted grades?
I’d agree with that were it the case that the grades were being re-assessed intelligently. Plainly, though, they were not.
Clearly then you have no idea about how the GCE grading system works in a normal year.
This isn’t a normal year
Another, more pertinent, point is that the model used was shown within the technical report to be accurate to within a grade, when compared to the 2019 results, for typically around 90%-95% of students. So that means there is an error inherent in the model that is comparable to, or greater than, the number of students who saw their predictions downgraded by two grades. In those circumstances, you simply cannot defend the accuracy of such extreme downgrading.
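To make the scale of that comparison concrete, here is a back-of-the-envelope sketch in Python. All figures (cohort size, accuracy rate, downgrade count) are hypothetical placeholders chosen only to mirror the shape of the argument, not Ofqual's actual numbers.

```python
# Illustrative comparison of the model's inherent error with the scale of
# two-grade downgrades. Every number below is a hypothetical assumption.

total_students = 700_000          # hypothetical cohort size
within_one_grade = 0.92           # model accurate to within one grade for ~92%

# Students whose modelled grade may be off by two or more grades:
model_error_2plus = total_students * (1 - within_one_grade)

# Hypothetical count of students actually downgraded by two grades:
downgraded_two = 20_000

print(f"Possible two-or-more-grade model errors: {model_error_2plus:,.0f}")
print(f"Students downgraded by two grades:       {downgraded_two:,}")
print(f"Inherent error exceeds downgrade count:  {model_error_2plus > downgraded_two}")
```

Under these assumed figures the model's own margin of error is larger than the population of two-grade downgrades, which is the point being made above.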

It's not that simple though, diddlydo. In around a third of cases the predictions were reduced by one grade, which is what I would expect given that teachers will generally take an optimistic view and put what the student could get on a good day. But the fact that some were downgraded 2 or 3 grades is a cause for concern and explanations need to be given as to why some were unchanged whereas a few were given such a big reduction.

I think the phrase over-inflated isn't the best way of describing what's happened. Teachers can't predict with certainty. If a student has a 50% chance of A and 50% chance of B the teacher is far more likely to put A rather than B. It's human nature. The overall effect is predicted grades are too high in maybe 50% of cases
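That 50% figure can be illustrated with a toy simulation. The assumptions here are mine, purely for illustration: every student is taken to sit exactly on a grade boundary, with a 50/50 chance of landing on either of two adjacent grades, while the teacher always predicts the higher one.

```python
import random

# Toy model of the "benefit of the doubt" effect: a borderline student has a
# 50/50 chance of the higher or lower of two adjacent grades, but the teacher
# always predicts the higher one. All parameters are illustrative assumptions.

random.seed(0)
N = 100_000
# The prediction is too high whenever the student lands on the lower grade.
over_predicted = sum(1 for _ in range(N) if random.random() >= 0.5)

print(f"Share of predictions that were too high: {over_predicted / N:.1%}")
```

With these assumptions the over-prediction rate comes out at roughly half of all cases, matching the estimate above.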
Ichoria - I'm well aware that this is not a normal year. That's my whole point. Few people realise that the statisticians use an algorithm every year to determine how many candidates are awarded each grade. It matters not what raw marks have been awarded by an examiner - as one such, if I thought every candidate I marked was worth an A grade and gave them that in the raw marks, the statisticians would apply an algorithm to bring my marks nearer the norm for that school, and so on.
What is radically different in that case, though, is that the students have completed a formal, standardised, assessment. That aspect is in their control. This year, their grades have been determined by their teachers' predictions, their teachers' attempts to rank them individually, historical performances of students at that school from the previous two or three years, historical performances of students nationally, the Government's desire to ensure results were comparable to last year, and a newly-developed algorithm that has had only limited testing and no apparent attempt to assign an error value to its output. The students' results have been determined, then, by everybody, in other words, apart from the students.
Can't those shocked by a 2 or 3 grade drop run with the mock exam result that suggested they would do better?

I still feel that some folk want to have their cake and eat it too. Had they received a good result they'd not complain; if they had a good mock result they'd not complain; but if neither supports an optimistic prediction, then they aren't prepared to accept any result but the one they anticipated.
Question Author
There was a call for the First Minister and the Scottish Education Secretary, John Swinney, to resign following the exam results debacle. Will there also be a call for the Prime Minister and Gavin Williamson to resign? I haven't read anything to date.
If there's a robust appeals system in place then hopefully the more serious examples of downgrading will be either resolved or justified, so in that sense I see OG's point. Still, it ought to be clear that there's a material difference between having your grade determined by matching to historical results, and having your grade determined by sitting a formal assessment that is your own work. That's where the nub of this lies -- that, and the extreme anomalies, and the sense that this rewards students who sit in smaller classes at historically better schools regardless of how well, or badly, they were going to do in practice.
It's all very sad - and unnecessary. I keep forgetting how long ago it is that I was teaching and that things have changed, but it is so obvious to me that the old system of moderation (which I was used to for years) should have been employed in this. Very simply, the teacher grades the pupils on their best submitted work, plus their 'mock' results (most schools choose the simple option of giving the year before last's paper). These grades go to the exam board. The board then asks for random papers to be sent in for moderation/agreement (very often 1 high grade, 1 low grade and a C-/D+ boundary - the system we used). The teacher's grades are approved and all is hunky-dory (over 90%). If there is a problem then all paperwork is submitted for moderation.
It was simple, it worked!
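The sampling step in that moderation scheme could be sketched roughly as follows. The grade scale, the class list, and the way the C/D boundary paper is chosen are all invented here for illustration; this is only a sketch of the idea, not any board's actual procedure.

```python
# Rough sketch of moderation sampling: from a class's submitted grades, pick
# one high paper, one low paper, and one near the C-/D+ boundary for the exam
# board to re-mark. Names and grades are invented.

GRADE_ORDER = ["A", "B", "C", "D", "E"]  # best to worst

def moderation_sample(submissions: dict[str, str]) -> list[str]:
    """Return the candidates whose papers the board would request."""
    ranked = sorted(submissions, key=lambda name: GRADE_ORDER.index(submissions[name]))
    highest, lowest = ranked[0], ranked[-1]
    # Paper closest to the C/D boundary (between indices 2 and 3):
    boundary = min(ranked, key=lambda name: abs(GRADE_ORDER.index(submissions[name]) - 2.5))
    return [highest, lowest, boundary]

class_grades = {"Ann": "A", "Ben": "B", "Cat": "C", "Dan": "D", "Eve": "E"}
print(moderation_sample(class_grades))  # ['Ann', 'Eve', 'Cat']
```

If the sampled papers agree with the teacher's grading, the whole class's grades are approved; only on disagreement does the full paperwork go in.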
It is only recently I've noticed that the re-grading software doesn't take the teacher's opinion as an input. One has to say that that's an obvious shortcoming, as one can no longer adjust on an individual basis. Who on Earth signed that software design off? Ideally it should take both the mock result and the teacher's prediction as inputs, and weight them accordingly. It's odd that one can't simply assume that to be the case without needing to check.
Even simpler is to set the exam, mark it and that's that. Leaving aside the Covid complication, why are "predicted grades" so important? Is it that Universities cannot gear themselves up to making offers only when actual grades are available?
One problem with mocks is that they are marked internally and not subject to the same checks as GCSEs. Also, some schools set their own grade boundaries or use cut-and-paste exam questions, so there is no totally reliable standardisation of grades.
I think the term Predicted Grade should be changed. Studies show teachers generally over-predict by giving the benefit of the doubt: a student who might get, say, B, C, B, D, B, C on a set of practice papers (sometimes with hints) would mostly be predicted a B. There should be two grades submitted with a probability score, such as 60% B, 40% C.
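That two-grade proposal might look something like this in code. The numeric points scale and the rounding rule are assumptions of mine for illustration, not anything from the thread.

```python
# Sketch of a probabilistic predicted grade: a small distribution over grades
# rather than a single letter. GRADE_POINTS is a hypothetical numeric scale.

GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
POINTS_GRADE = {v: k for k, v in GRADE_POINTS.items()}

def expected_grade(prediction: dict[str, float]) -> str:
    """Collapse a prediction like {'B': 0.6, 'C': 0.4} to the nearest
    single grade on the points scale."""
    assert abs(sum(prediction.values()) - 1.0) < 1e-9, "probabilities must sum to 1"
    points = sum(GRADE_POINTS[g] * p for g, p in prediction.items())
    return POINTS_GRADE[round(points)]

print(expected_grade({"B": 0.6, "C": 0.4}))  # 0.6*4 + 0.4*3 = 3.6 -> 'B'
```

The point of submitting the distribution rather than the collapsed letter is that downstream users (boards, universities) could weight the uncertainty themselves instead of inheriting the teacher's optimistic rounding.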
Yes, the refusal to be more flexible than a single letter in stating grades bothers me -- especially in cases where there was no formal assessment.

Actually, why is this true anyway? A single lettered/numbered grade merely assesses the student on a given day, based on a given exam (or set of exams), and is therefore nowhere near flexible enough or accurate enough to capture their aptitude. Like, the entire philosophy behind exams feels flawed. I mean, to some extent I thought this already -- I've always been distrustful of the usefulness of exams over longer-term graded assessment. But it could be said to be another flaw of the system exposed by a forced change.

