Exam Results
Apparently there is to be an announcement at 4 o’clock concerning how this fiasco is to be sorted.
Answers
JIM - // Apparently Williamson never even saw the algorithm and results until the weekend. If that is so then it exposes his incompetence. He should have looked into this much sooner, and certainly shouldn't have called a system he didn't understand "robust". Shocking dereliction of duty.
He should go. //
I hope his absence - not having the guts to face up to his own bottomless incompetence by making the announcement personally - is explained by a pressing commitment he had to fulfil: writing his resignation.
I agree, jim. And maybe the Scottish one too?
Andyhughes's comments are interesting, mainly because they don't reflect reality. Studies have shown, and Ofqual are well aware, that schools/teachers do tend to over-predict overall; Thecorbeyloon, I think it was, gave us some data that showed the scale of it. It's not lying or cheating - it's human nature.
Jonny does 4 practice papers and gets CCBC. This can happen depending on the questions and how well he feels on the day. The teacher knows that with revision he will get at least a C and might even get a B if all goes really well, and he has shown he is capable. The teacher will almost certainly put B. Jenny and Jilly also got similar but slightly lower scores, still scraping BCDC, so the teacher put them down for a B too. But in reality the teacher knows that the chances of Bs for all 3 are slim.
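Just to put rough numbers on that - the points mapping, mock scores and the "benefit of the doubt" rule below are entirely made up for illustration, not taken from any real mark scheme:

```python
# Invented illustration: three borderline students, each given the benefit of
# the doubt individually, inflate the aggregate prediction.
# Grades mapped to points purely for averaging: B=3, C=2, D=1.

students = {
    "Jonny": [2, 2, 3, 2],   # mock results CCBC, as points
    "Jenny": [3, 2, 1, 2],   # BCDC
    "Jilly": [3, 2, 1, 2],   # BCDC
}

points_to_grade = {3: "B", 2: "C", 1: "D"}

for name, mocks in students.items():
    typical = sum(mocks) / len(mocks)      # what the evidence actually suggests
    optimistic = max(mocks)                # what a hopeful teacher may well submit
    print(f"{name}: mocks average ~{typical:.1f} ({points_to_grade[round(typical)]}), "
          f"predicted {points_to_grade[optimistic]}")

# Each prediction is defensible on its own, but across a cohort the predicted
# distribution ends up well above what the mock evidence supports.
```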
If teachers' predictions are reliable, the predictions for this year show we have a cohort that is far better than previous years' - a big increase in A*s, for example. Can this be explained? No - the improvement is too great.
AH says "The fact is, some upgrading does go on, largely in the private sector where pressure is on to maintain good results to maintain a private market in students,"
I don't think that is a fact and would be interested to see the evidence. Some teachers in state schools don't hold private schools in high regard, but there is no doubt results are far better overall in the latter. There was no need to downgrade the predictions for large private schools, as the predictions broadly matched the pattern of historically good results. It's more likely state sixth form colleges overstated their predictions, as they exceeded the historic patterns. The only exception is small sixth forms, maybe private, where it was decided the samples were too small to apply the algorithm.
All this assumes that assessments in the previous years are accurate, though, which is dubious -- eg, because as your example shows, student performance is variable.
It would be nice if this led to a rethink of the exam system altogether. I know that continuous assessment is vulnerable to cheating, but it is also fairer and, for that matter, much more reflective of how performance is assessed at work. We've allowed a collective delusion to develop whereby we've decided that the fairest way to assess people is how they perform over a two-hour spurt.
I agree, jim. I think it must be possible to have some form of formal assessment each year at secondary school, or at least in years 10 and 11 and then 12 and 13.
On a separate point, it seems odd to me that so much moderation/standardisation is needed. Why do exam pass marks and grade boundaries have to change so much from year to year? It makes it so difficult for teachers to assess actual grades from past papers students take for prediction purposes. It surely must be possible to come up with papers which are broadly similar.
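As far as I understand it, the boundaries move because they are only fixed after marking, so that a harder paper doesn't drag everyone's grades down - roughly this percentile idea (the target shares and the pretend raw marks below are invented purely to show the mechanism):

```python
# Rough sketch of percentile-based boundary setting: boundaries are chosen
# after marking so that roughly the intended share of candidates reaches each
# grade, which is why the marks drift when a paper turns out harder or easier.
# The target shares and the pretend raw marks are invented for illustration.

import random

random.seed(1)
raw_marks = sorted(random.randint(20, 95) for _ in range(1000))  # pretend cohort, ascending

targets = {"A": 0.20, "B": 0.45, "C": 0.70}   # cumulative share at or above each grade

boundaries = {}
for grade, share in targets.items():
    index = int((1 - share) * len(raw_marks))  # mark below which the rest of the cohort falls
    boundaries[grade] = raw_marks[index]

print(boundaries)   # a harder paper lowers every boundary, an easier one raises them
```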
This is the link to the report I quoted previously.
https://bit.ly/34bqmrw
Regarding the accuracy of different school types, it says,
"How does accuracy vary by student characteristics and school type?
In this section I examine how accurate grades are according to the characteristics of students and schools, examining whether certain types of students, or school types are particularly likely to under or over-predict. This analysis is presented in figures 3-4. As figure 3 shows, there is a good deal of variation in prediction accuracy according to school type. Independent schools appear to be the most accurate predictors – over 20% of applicants from independent schools' grades were accurately predicted. Meanwhile, academies, state schools and sixth form colleges are more prone to over-predicting their students' grades (as seen by the greater proportions from these school types with a difference between actual and predicted grades below zero). As is evident, there is a limited amount of under-prediction; however this will be explored in more detail in Section 3."
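For anyone curious how that sort of figure is tallied, it's essentially predicted grade matched against achieved grade per applicant, then bucketed by school type - a rough sketch with invented records (the real analysis uses matched UCAS predictions and results):

```python
# Invented records, purely to show how accuracy/over-prediction figures are
# tallied: predicted vs achieved grades per applicant, bucketed by school type.

from collections import defaultdict

# (school_type, predicted_points, achieved_points): totals over three subjects,
# with A*=6 ... E=1 per subject. All rows invented for illustration.
records = [
    ("independent", 14, 14), ("independent", 15, 13), ("academy", 13, 11),
    ("academy", 12, 12), ("sixth_form", 12, 10), ("sixth_form", 11, 8),
]

tally = defaultdict(lambda: {"exact": 0, "over": 0, "under": 0, "n": 0})
for school_type, predicted, achieved in records:
    diff = achieved - predicted
    bucket = "exact" if diff == 0 else ("under" if diff > 0 else "over")
    tally[school_type][bucket] += 1
    tally[school_type]["n"] += 1

for school_type, counts in tally.items():
    n = counts["n"]
    print(school_type, {k: f"{counts[k] / n:.0%}" for k in ("exact", "over", "under")})
```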
// Independent schools appear to be the most accurate predictors – over 20% of applicants from independent schools’ grades were accurately predicted. //
The fact that the "most accurate prediction" is only 20% or so suggests that an algorithm that broadly accepted independent school predictions is flawed anyway.
But moreover, this analysis is based on global matching, and says nothing about *which* predictions are flawed. The entire problem is not that the philosophy of the algorithm (i.e. correcting potentially optimistic teachers' grades) is flawed, so much as its execution. Features built in, such as the apparent requirement for at least one student to receive a U grade if there was even the slightest chance of it happening, or the fact that the algorithm wasn't applied to schools with small class sizes, etc. -- all of those are awful, and, while they were acknowledged in the technical report, it was clear they were regarded as acceptable given that the aim was to reproduce the global 2019 results to within 1%.
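To make the objection concrete, here is a toy version of the mechanism being criticised - emphatically not the actual Ofqual model, just the bare "fit the class to last year's distribution" logic, with invented pupils and history:

```python
# Toy version of the mechanism being criticised - NOT the actual Ofqual model.
# Take the centre's rank order and overwrite grades so the class exactly
# matches the historical distribution. If the history contains a U, someone
# this year is handed a U regardless of what their teacher assessed.

def standardise(ranked_students, historical_grades):
    """ranked_students: best first. historical_grades: (grade, count) pairs
    summing to the class size, e.g. taken from the centre's past results."""
    handed_out = []
    for grade, count in historical_grades:
        handed_out.extend([grade] * count)
    return dict(zip(ranked_students, handed_out))

ranked = ["pupil1", "pupil2", "pupil3", "pupil4", "pupil5", "pupil6"]   # invented rank order
history = [("A", 1), ("B", 2), ("C", 2), ("U", 1)]                      # invented history
print(standardise(ranked, history))
# pupil6 gets the U, even if the teacher had assessed them at a C
```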
All of this still falls on the Minister. Not a single part of the events of last week makes any sense. It was obvious from what happened in Scotland that England would have to follow suit, so to proceed anyway (with only a single panicked change to policy), to be apparently unaware before the results of what was coming, to defend them as robust only to abandon them -- all of that is pathetic and woefully incompetent, because it was so utterly predictable.
F-f at 19. 'Some form of graded assessment' - we used to have that. It was called end-of-term tests and end-of-year exams. I think that by the year 2000 I was the last person in our (very large comp.) using these. I set them myself (which took some time) but it gave a good indication of progression. It also kept pupils attentive until the last week of the summer term - the onslaught of marking was hard for me, but it worked.
Yes, I've worked in lots of schools and all have some sort of internal exams every year; in Maths these are often termly to help us ensure pupils are in the right set and to RAG-rate topics for us to focus on for homework etc. But whilst this gives lots of data on learning gaps and enables us to rank pupils, it doesn't give us a clear score or grade for external benchmarking. Year 9 SATs in Maths, English and Science were national papers with levels assigned to give us a standardised measure, but these were abolished a few years ago.
It's a sad reflection on the exam system, and all the masses of effort that go into tracking, 'flight paths' etc, that for A levels teachers predict the right grade only 20% of the time and over-predict grades around 75% of the time. I expect that for GCSEs it would be a little better, but not significantly so.
What a weak government. It gets moaned at, then capitulates. Clearly the algorithm needed to take in the teachers' opinions as a weighted input, but it also needed to check against previous years to ensure no grade inflation. It seems at no time was this the case, and now they're running with fantasy teacher optimism and a clear indication that, once again, this year is apparently more brilliant than previous years. That isn't sorting anything; that's caving in and making a bigger fiasco than ever. This lot seem incapable of organising a urine-up in a beer-making factory. We have no government worth the name at present. And no opposition worth the name either.
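Something like the check being described could be sketched as below - the 0.5 weight, the points scale and the 2% tolerance are all my own invented numbers, not anything Ofqual actually used:

```python
# Hypothetical sketch of the approach being argued for: blend the teacher's
# assessed grade with a statistical estimate, then check the resulting
# top-grade share against last year before publishing. Weights, points scale
# (A*=6 ... U=0) and tolerance are invented for illustration.

def blended_grade(teacher_points, statistical_points, teacher_weight=0.5):
    return round(teacher_weight * teacher_points + (1 - teacher_weight) * statistical_points)

def within_tolerance(grades_now, grades_last_year, tolerance=0.02):
    top_share = lambda grades: sum(g >= 5 for g in grades) / len(grades)  # share of A*/A
    return top_share(grades_now) - top_share(grades_last_year) <= tolerance

grades_2019 = [6, 5, 4, 4, 3, 3, 2]                                    # invented cohort
pairs_2020 = [(6, 5), (5, 4), (5, 5), (4, 3), (4, 4), (3, 3), (3, 2)]  # (teacher, statistical)
grades_2020 = [blended_grade(t, s) for t, s in pairs_2020]
print(grades_2020, "no inflation beyond tolerance:", within_tolerance(grades_2020, grades_2019))
```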
O-G, like others I have been slowly disillusioned and appalled by the lack of competence shown in so many areas by the govt. But there isn't an option. I can only hope that one or two people - Boris, Raab, Gove perhaps - wake up, knock heads together and get some form of efficiency and accountability organised. Otherwise the accountability will be huge in a few years' time, I fear.
Only those with the data for previous years can tell you that. Grades should be set such that the same percentage get each grade each year. One year isn't going to be more capable than the next.
Probably fair to look at it on an institution-to-institution basis, to ensure places that have shown they achieve well aren't unfairly dragged down by less successful institutions.
But maybe the best solution could have been not to award grades yet. The course ain't finished until the exam has tested understanding, so grades shouldn't be awarded until the exam has been taken. Let universities and employers make their own decisions in the meanwhile.
// Only those with the data for previous years can tell you that. Grades should be set such that the same percentage get each grade each year. One year isn't going to be more capable than the next. //
That would mean that if somebody got a U in 2019, then somebody has to get a U in 2020, despite never having sat anything.
The flaw in your approach is that you're focusing too much on the national picture and too little on the local. It may well be that the correct national result is comparable to last year's, but you wouldn't have a hope in hell of working out which 7% deserve an A* and which would have missed it. This is what the algorithm tried to do, and this is what it was certain to fail at.
This falls on Williamson all the same -- he was the one who gave the instruction to ensure that "results" in 2020 were the same (to within 2% or so) as in 2019, whatever that meant for individual students. I don't dispute the claim that the results this year are inflated, but that is not the fault of the students and they should not be punished for everybody else's failures. Only their own.
I quoted only your first paragraph but forgot to reply to the second, which I think would have been a sensible approach, the least worst option: accept that life was on hold for a few months and hope to find ways of "catching up" in the coming year(s) -- e.g. work with universities to organise a later term start, etc. It would have been hell to implement, mind.