# The New R-Score Adjustment: How Will It Affect You?

Despite the bad rap it occasionally gets for being difficult to understand, many students have always known our dear R-Score as a relatively simple formula: (Z-Score + IFG + 5) × 5, where IFG, or group strength index (*indice de force de groupe*), equals (High school average – 75) / 14. (If you haven’t read our R-Score breakdown, it will provide useful background before reading this article.)
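To make the moving parts concrete, here is a minimal Python sketch of that current formula (the function names are ours, purely illustrative):

```python
def ifg(hs_average):
    # Group strength index under the current formula; the constants
    # 75 and 14 were chosen empirically roughly 20 years ago.
    return (hs_average - 75) / 14

def r_score(z_score, hs_average):
    # R-Score = (Z-Score + IFG + 5) x 5
    return (z_score + ifg(hs_average) + 5) * 5

# A student one standard deviation above their group's mean,
# in a group whose high school average was 82:
print(round(r_score(1.0, 82), 2))  # 32.5
```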

As authors of a website dedicated to analyzing not only university admissions but also the R-Score itself, we were evidently intrigued when the media reported on Monday that the Ministère de l’Éducation et de l’Enseignement supérieur would implement a modified R-Score calculation, starting in the Fall semester of 2017. We track all news relating to the R-Score closely, so it was surprising to hear about this so close to the implementation. What changes will this adjustment bring, and is this the first we’re hearing of it? Let’s take a look.

### An Unreliable IFG

A report dating back to 2014 by the *Comité de gestion des bulletins d’études collégiales*, a committee responsible for analyzing matters related to the R-Score, made note of a major problem with the IFG in the R-Score calculation. The original R-Score formula was based on two constants, 75 and 14, which were chosen through empirical tests almost 20 years ago. Since then, many (including the RScology team) have speculated, though lacking data to prove it conclusively, that the system was most likely tougher on students in CEGEPs with strong students than those with weak ones. Although we were meant to believe that the chosen CEGEP had no impact on students’ R-Score due to the IFG, or group strength correction factor, we could only take the Ministry’s word for it (and refused to do so), lacking public data to verify it ourselves.

A first problem at the source of the issue, highlighted by the committee, is that high schools apply different grading standards, not only on regular tests, but also because there is no uniform exam for Secondary IV history, science and mathematics courses. Thus, students from different high schools are sometimes over- or under-graded: for the same performance, they would receive different grades at different high schools. This difference in grading leads to a different IFG, with an average impact of -0.52 on the R-Score, varying between -1.11 and +0.10.

If you’re a quick thinker, you might immediately notice that in fact, the R-Score system is distrustful of absolute grades, and doesn’t use them at all at the CEGEP level, rather using the student’s relative standing in the class. In that case, why does the formula rely on a similarly arbitrary numeric value awarded by high school teachers? The architects of the R-Score only know of one hammer with which to hit such a nail: the handy Z-Score, which can also be applied to high school grades to standardize them.
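For reference, the Z-Score is simply the distance from the group mean, measured in standard deviations; a minimal sketch (the sample grades and names are ours):

```python
from statistics import mean, pstdev

def z_score(grade, group_grades):
    # How many standard deviations a grade sits above
    # (or below) its group's average.
    return (grade - mean(group_grades)) / pstdev(group_grades)

group = [70, 75, 80, 85, 90]
print(round(z_score(90, group), 2))  # about 1.41
```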

**The Forgotten Factor of Group Dispersion**

It is not all that simple, however. There is a second issue with the existing R-Score formula, unrelated to the issue of unequal high school grading. Even when group strength is standardized through a Z-Score of high school grades, we must remember that the Z-Score isn’t a mere difference between the student’s grade and the average: it also takes the standard deviation into account. It follows that a student with a given level of performance could be advantaged or disadvantaged, under the old system, by attending a CEGEP where the standard deviation is higher or lower, which is another form of inequity. Put simply, being in a homogeneous group was a good thing for strong students, whose mark was compared using a smaller standard deviation than usual, while it hurt weak students in those groups. In heterogeneous groups (with a large standard deviation), the effect was the opposite. The new formula addresses a second common complaint from students, namely standard deviations that are too high (or too low), by accounting for group dispersion through an index similar to the existing Group Strength Index (IFG).
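The dispersion effect is easy to demonstrate numerically. In the hypothetical groups below (grades invented for the example), both students sit 5 points above an identical average of 80, yet their Z-Scores differ widely because of the spread:

```python
from statistics import mean, pstdev

def z(grade, grades):
    # Standard score relative to the group.
    return (grade - mean(grades)) / pstdev(grades)

homogeneous = [75, 78, 80, 82, 85]     # small spread, mean 80
heterogeneous = [60, 70, 80, 90, 100]  # large spread, mean 80

# The same 5-point lead over the average yields very different Z-Scores:
print(round(z(85, homogeneous), 2))    # about 1.47
print(round(z(85, heterogeneous), 2))  # about 0.35
```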

In order to adjust for those variances, four different formulas for the IFG were tested:

- IFG = (High school average – 75) / 14 (our current system)
- IFG = (High school average – 80) / 8 (80% being the average high school grade and 8% being the high school grades’ standard deviation)
- IFG = Z-Score of high school grades
- IFG based on both the mean and the standard deviation of high school Z-Scores

Each of those formulas was tested with two different sets of high school courses, namely the mandatory courses and the ministerial (uniform-exam) courses:

| Name and level of the subject | Mandatory subject | Ministerial subject |
|---|---|---|
| History, Secondary IV | Included | Included |
| Science, Secondary IV | Included | Included |
| Mathematics, Secondary IV | Included | Included |
| Mathematics, Secondary V | Included | – |
| First language, Secondary IV | Included | – |
| First language, Secondary V | Included | Included |
| Second language, Secondary IV | Included | – |
| Second language, Secondary V | Included | Included |
| Monde contemporain, Secondary V | Included | – |

An additional calculation with no IFG at all was also tested; combined with the two course sets above, this brought the total to nine candidate formulas.

In the end, a final R-Score calculation was decided upon, one which would help weak students in homogeneous groups and strong students in heterogeneous groups: **R-Score = { [Z-Score × (High school Z-Score standard deviation)] + (High school Z-Score mean) + 5 } × 5.**
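Under the same illustrative conventions as before, the new formula can be sketched as follows (the sample high school Z-Scores are invented for the example):

```python
from statistics import mean, pstdev

def new_r_score(z_cegep, hs_z_scores):
    # R-Score = {[Z x sigma(Z_hs)] + mean(Z_hs) + 5} x 5,
    # where Z_hs are the group's high school Z-Scores.
    return (z_cegep * pstdev(hs_z_scores) + mean(hs_z_scores) + 5) * 5

# A strong, homogeneous group: high school Z-Scores well above 0,
# with a small spread.
group_hs_z = [0.8, 0.9, 1.0, 1.1, 1.2]
print(round(new_r_score(1.0, group_hs_z), 2))  # about 30.71
```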

Due to the removal of arbitrary values and the reliance on the Z-Score, the committee affirms that this formula makes R-Scores more valid, and further renders any arbitrary bonus to the R-Score unnecessary (i.e. the 0.5 bonus given to IB and Arts & Science students). So far, we tend to agree with the Ministry’s logic. The change will most benefit those in strong homogeneous groups, such as Arts and Science, and, to a lesser extent, Science programs at strong CEGEPs.

### A Transitional Phase

Given R-Score brackets of under 27, 27-30, 30-33 and over 33 for science students, and under 24, 24-27, 27-30 and over 30 for non-science students, about 20% of students are expected to change brackets. Students in Sciences, IB and Arts & Science will generally see an upward trend in R-Score (despite the 0.5 bonus being abolished). Students in Social Science will generally see a downward trend; among those changing R-Score brackets, over 90% will be making a downward change. As another example, 26% of students with an R-Score between 27 and 30 in Science will now have an R-Score between 30 and 33. Further, students in Arts and Science will move to a higher bracket in 45% of cases, with none moving downwards! In short: Science students win, Social Science students lose, at least on average.

Due to those fairly sweeping changes, the new formula will be the only one used for all courses starting from Fall 2017. However, for courses taken from Fall 2014 to Summer 2017, if the newly calculated R-Score is higher, it will replace the old R-Score; otherwise, the old R-Score will be kept. Thus, students who are taking more than two years to finish CEGEP or who are currently starting their second year will have their existing R-Scores affected by the change.
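The transitional rule amounts to keeping the better of the two scores for courses taken during the transition window; a sketch, with a flag we made up for the example:

```python
def transitional_r_score(old_r, new_r, is_fall_2017_or_later):
    # From Fall 2017 on, only the new formula applies; for courses
    # from Fall 2014 to Summer 2017, the higher of the two is kept.
    if is_fall_2017_or_later:
        return new_r
    return max(old_r, new_r)

print(transitional_r_score(29.5, 28.9, False))  # 29.5 (old score kept)
print(transitional_r_score(29.5, 28.9, True))   # 28.9 (new formula only)
```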

**Socrates, and the Ministry’s Transparency**

The system used by the Ministry to compute students’ R-Scores is a little-known program called Socrates, which runs behind the scenes. We find this ironic, as Socrates was famously known for being deemed the wisest man in Athens because he was the only one who was aware of his own ignorance. This is the opposite attitude of that displayed by the Ministry and the *Bureau de coopération interuniversitaire* (BCI), which oversee the R-Score, and have once again showcased their omniscient approach by taking a top-down decision with no true public consultation.

We mentioned earlier that the report regarding this new formula dates back to 2014. Why, then, have we not heard about this until recently? Because the BCI’s website hosts a number of documents regarding the change, including several released in 2017, one could be led to believe that the BCI transparently published documents as they were produced. However, an Internet Archive search confirmed our suspicion that, as late as June 2017, the BCI had released exactly zero documents on this topic, including the report that had been available to them for *three years*.

Even though we aren’t displeased with the outcome, it is worrisome that these governmental and quasi-governmental agencies don’t feel any need to consult the main parties involved, the students, before making changes, which doesn’t inspire confidence in the future. Further, the report suggests that the BCI *consider* making the calculation more transparent by releasing information such as the grade average used to calculate the R-Score, or the high school average. This suggestion isn’t very original: it has been a demand of many private actors for years, including us, and there is no reason for the Ministry not to undertake it immediately. As we move from a formula based on the high school average to one based on the high school Z-Score mean and standard deviation, which are more opaque, we can only hope that the Ministry will send those data points to students so they can verify the calculation of their R-Scores.

### A Shift Towards a Better R-Score

Although the RScology team welcomes these changes to our beloved indicator of a student’s performance, we still have some reservations when it comes to how fair the R-Score truly is, and to how the Ministry approaches the issue. As mentioned above, the inclusion of Z-Scores in the calculation of the IFG is a step in the right direction, reducing inequity among high school students, but it still does not truly make up for the different grading schemes of different educational institutions.

The problem with the new formula is recursive: by using a Z-Score within each high school, it presupposes that the caliber of students at that level is the same at any high school. This is obviously not the case, for the same reason we have an R-Score in the first place. Will the next solution be to add a high school R-Score to the CEGEP R-Score, based on the group strength calculated from primary school grades? And when those are considered unreliable, will a Z-Score and then an R-Score be implemented at the primary school level based on kindergarten grades, and so on? The ridiculousness of the proposition shows why the system is imperfect.

However, all this is, in our humble opinion, quite easily overshadowed by the problem of evaluation groups, and how that specific variable creates much more unfairness and bias in the calculation of the R-Score. In fact, we have already published an article here, explaining everything you need to know about this particular variable in detail, as well as valuable information on how to use these groups for your own benefit. All in all, we remain consistent with our belief that the R-Score, in spite of being better and certainly more interesting than the traditional grading systems used across the world, is still far from perfect, and are hopeful that the institutions in charge will listen to those directly concerned in their efforts to improve it.

**Using R-Score Calculators**

Until we have solid data from the first iteration of this formula in December 2017 or January 2018, it is impossible to accurately predict the high school Z-Score mean or high school Z-Score standard deviation values. For all intents and purposes, this is a minor change: it will not get someone into a university program who never would have otherwise, nor get someone rejected who was previously a shoo-in. Although we discourage spending too much time using any type of calculator, the most pragmatic option is to use our classic R-Score calculator with reliable historical data, from which even the new formula won’t stray too far.