Exam Results
Considering the debacle surrounding the Scottish exam results, you would have thought that England would have learned from it. Apparently not.
https://www.theguardian.com/education/2020/aug/11/pressure-grows-on-government-over-england-a-level-results-mess-coronavirus
Answers
At the moment the only data I have is based on what's been reported in the press, so it's superficial data. At some point I'd like to see the full data set, for sure -- in particular, with such a large amount of downgrading but also a record number of top grades, I'm wondering if there is going to be a weird skew to the data (peaking high and low) -- which, if so, would surely defeat the object.
The results for this year were always going to be 'false', because exams weren't taken. That being the case they should have just said, ok, we'll let the teachers decide, as they're the ones that should know best.
But no, they couldn't do that. They had to say we'll take the teachers' grades and then apply some 'clever' algorithm that corrects the teachers' inevitable over-estimations.
Thereby arriving at a result that is equally as false, but decided by a computer instead of someone that had something to do with educating the person concerned.
For anyone who is interested, technical details of Ofqual's procedure are publicly available, which is a relief. Although, as it is over 300 pages long, I have not read it.
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/909368/6656-1_Awarding_GCSE__AS__A_level__advanced_extension_awards_and_extended_project_qualifications_in_summer_2020_-_interim_report.pdf
//35.6% of marks were adjusted down by one grade, 3.3% were brought down by two, and 0.2% came down by three.//
Downgrading around a third by one grade sounds about right given the clear over-optimism of teachers' predictions, but alarm bells should have rung for the 3.5% that were downgraded by two or three grades. A limit of one grade should have been imposed, or each such case followed up with the school.
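As a quick sanity check on those reported figures (a sketch only -- the percentages come from the press coverage quoted above, not from the full Ofqual data set):

```python
# Reported shares of entries downgraded, by number of grades lowered.
# These figures are from press reports, not Ofqual's own data set.
downgrades = {1: 35.6, 2: 3.3, 3: 0.2}  # grades lowered -> % of entries

total_downgraded = sum(downgrades.values())
two_or_more = downgrades[2] + downgrades[3]

print(f"Downgraded overall:   {total_downgraded:.1f}%")
print(f"Down by 2+ grades:    {two_or_more:.1f}%")
```

That puts the overall markdown at roughly 39%, consistent with the "40% markdown" figure mentioned elsewhere in the thread, with 3.5% moved by two grades or more.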
For those who think teachers' assessments should have been used without any adjustment, it's worth noting that teachers predicted a doubling in the number of A* grades.
I'm still unclear why the process is said to have discriminated particularly against BAME, disabled and poorer children- unless those schools were more likely to overestimate grades. I think people are using every argument possible to give the highest possible grades.
It seems that Ofqual did try to check for possible issues of systemic bias in their standardisation process. What they clearly can't control for is systemic bias in the education system itself.
Other thoughts, on reading it:
1. I don't think that Ofqual had enough time, in practice, to develop the system they'll have needed to. Not a criticism -- although the rigid deadline of Results Day can't have helped.
2. Amazing how much of this mess owes itself to 2020 being the first year in recent times when all students had only one set of exams to sit. In previous years there was either a modular system, or at least a half-year check-point for all. Those gone, there's no more recent formal assessment for these students to fall back on. No doubt there are more compelling arguments surrounding a single, final exam cycle than "what if there's a pandemic and all exams are cancelled?" -- but I'd say it's pretty clear from the subtext that Ofqual was frustrated at how little it has to go on, and at how difficult it is to control for the constant syllabus changes.
3. It may owe something to the time available, or it may be a theoretical upper limit, but it seemed to me that the ceiling for the predictive power of their method was about 75% -- ie, they'd call around a quarter of students wrongly anyway, and around a tenth would be wrong by two grades. That plays into what FF says: with a model that is only right to within one grade 95% of the time, I'm not sure you should be moving anyone by two grades at all.
4. The final flaw in the method, as far as I am concerned, is that any student's grade assessment is based on (a) historical data from their school, (b) historical data from the country, (c) a teacher's assessment of their absolute predicted grade, and (d) a teacher's assessment of their relative ranking in the class. In other words, almost everybody in the country apart from the actual student is involved in determining that student's final grade. Even if the system calls the grade correctly, it would end up leaving a bad taste in the mouth. An exam, whatever the issues in moderation etc, is at least about the student doing *their* work to get *their* result.
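To make point 4 concrete, here is a toy sketch of rank-based standardisation -- emphatically NOT Ofqual's actual model (which runs to 300+ pages), just an illustration of the general idea of fitting this year's cohort to the school's historical grade distribution via the teacher's rank ordering. All names and numbers are invented.

```python
# Toy illustration (not Ofqual's actual model): assign grades so the
# school's grade distribution matches its historical one, using only
# the teacher's rank ordering of the students.

def standardise(ranked_students, historical_distribution):
    """ranked_students: best first; historical_distribution: grade -> share,
    ordered best grade to worst."""
    n = len(ranked_students)
    results = {}
    i = 0
    cumulative = 0.0
    for grade, share in historical_distribution.items():
        cumulative += share
        # Fill grades down the rank order until this grade's share is met.
        while i < n and i < round(cumulative * n):
            results[ranked_students[i]] = grade
            i += 1
    # Any remainder left by rounding gets the lowest grade listed.
    lowest = list(historical_distribution)[-1]
    while i < n:
        results[ranked_students[i]] = lowest
        i += 1
    return results

cohort = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10"]
history = {"A": 0.2, "B": 0.3, "C": 0.4, "D": 0.1}

print(standardise(cohort, history))
```

The top 20% get an A, the next 30% a B, and so on: the individual's own work never enters the calculation, which is exactly the objection above.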
A lot of this also boils down to transparency, or lack of it. It's clear that Ofqual decided, or was instructed, to make being transparent about the process a lower priority than it could have been. I can understand that while running their standardisation process they would have wanted to do so without teachers staring over their shoulder, but I don't understand why, especially in this case, an extra week or so wasn't built into the system so that schools had a chance to react to the changes and maybe negotiate *before* pupils saw their results. I suspect that pressure from above made it clear that Results Day could not be changed.
Just to add that the Guardian predicted the 40% markdown a week ago:
https://www.theguardian.com/education/2020/aug/07/a-level-result-predictions-to-be-downgraded-england
Five months to design a system basically from scratch, test several other alternatives, gather the necessary data, process it, run quality checks on the 2019 data set, predict for 2020, check that there are no issues regarding accidental biases (beyond those already present)... Plus, collate this with thousands of schools and about four exam boards, across GCSE and A-Level.
I'm not sure I'd have expected it to be done in five months. A week wouldn't have made a difference to their work, except that it would have given the schools a chance to appeal before students saw anything.
//Amazing how much of this mess owes itself to 2020 being the first year in recent times when all students had only one set of exams to sit. In previous years there was either a modular system, or at least a half-year check-point for all//
Remind us all again which body of "professionals" spent decades lobbying against tests and exams. Tests and exams were deemed damaging to the mental health and emotional well-being of the pupils, and hated by their erstwhile tutors. Oh, how they celebrated when the mid-year tests and assessments were curtailed. The ultimate aim, of course, being to scrap them entirely. But don't worry too much. When it is deemed that foregoing the tests and exams disadvantages the BAME pupils, they will bring them back with a vengeance. The new and exciting system will most likely demand one a week.
I'm sure we always used to go by exam results completely? It changed when I was at school, so it was a mixture of coursework and exams. This is the first year that we have to go by coursework only and I think there are bound to be a lot of problems.
Most of my teachers hugely underestimated my grades, as I often wasn't there... only one teacher was accurate with both GCSEs and A Levels, so I certainly wouldn't go with teacher assessments alone. However good they are, they know students too well to be totally unbiased.
Togo: firstly, I'm not sure it's relevant who campaigned against constant modular exams, when replying to me, when it was (a) somebody else, and (b) not me. Besides, I think the plea was for a greater reliance on continuous assessment, rather than to consolidate all the exams into a single, final exam. That pressure came from above, based perhaps on some idea of schooling having had a golden age in the days of single final exams. Oh, yes, and I think the bigger pressure was on scrapping SATs, which are meaningless in terms of a student's future and were only used to assess the performance of the school. Scrapping *those* is not really the same as the A-Level reforms, or the removal of coursework, etc.
* * * *
No idea, jno, although I think probably most of it. Some time spent designing and consulting with technical experts in April and May, so the work goes back a while.
Hence, I suppose, why Ofqual decided to and was tasked to moderate. But, as jno says, in a way exams aren't accurate either. Especially a single exam. It reduces years of work to your ability to perform for a couple of hours or so.
I don't accept that the solution to this is to revert to predicted grades, as this will just create a whole new set of problems. Trying to avoid massive grade inflation is a reasonable aim: if almost double the number of students were predicted top grades then that devalues the meaning of a top grade, especially as compared to previous and future years.