The Ultimate R-Score Breakdown: Everything Explained
If you are currently in CEGEP, or even in the last years of high school in Quebec, you’ve probably heard some cliché discussion of the R-Score. It often takes the following form: “The calculation of the R-Score is extremely complicated, and you need to fully understand it to use it to your advantage,” and continues with an explanation of the ‘lesser-known’ factors involved in calculating the R-Score, such as the standard deviation. However, this explanation often comes from students, teachers or advisors who do not understand, or are not aware of, some of the intricacies of the R-Score. Indeed, a major mistake in trying to understand the R-Score is to rely only on official primary sources, such as the Conference of Rectors and Principals of Quebec Universities (CREPUQ). In practice, not all information on this little-understood evaluation method has been publicly available—until this post.
Why is the R-Score used?
The CREPUQ, which sets the guidelines for the calculation of the R-Score, defines three criteria that must be met for there to be a common basis of evaluation:
- Same group of students. This does not hold across different classes and colleges, as students at different institutions or in different programs may be stronger or weaker for a variety of reasons.
- Same learning experiences. Each institution is free to develop its own curricula: for example, the English 101 course at College A may focus entirely on comic books, while that at College B is based around dystopian novels.
- Same grading methods. Each institution and each department sets its own grading methods, which has several effects. For instance, Student A, who achieved 80% in English 101 at College A, may have been able to get 83% at College B; this is not a flaw in either college, but simply a difference in methodology. Moreover, in this scenario, Student B, who achieved 82% at College A, may get 84% at College B. Not only does the same student achieve a higher mark at College B, but the 2% gap separating the two students at College A is only a 1% gap at College B! Evidently, grades among unequal groups vary in both average and spread, as the sketch below illustrates.
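As a minimal sketch of this scenario (the marks are the invented ones from the example above, not real data), the same two students end up with both a higher average and a smaller gap between them at College B:

```python
# Hypothetical marks mirroring the example above: the same two students,
# graded at two different colleges.
college_a = {"Student A": 80, "Student B": 82}
college_b = {"Student A": 83, "Student B": 84}

for name, marks in (("College A", college_a), ("College B", college_b)):
    values = list(marks.values())
    average = sum(values) / len(values)
    gap = max(values) - min(values)
    print(f"{name}: average = {average:.1f}, gap between the students = {gap}")

# College A: average = 81.0, gap between the students = 2
# College B: average = 83.5, gap between the students = 1
```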
Given that differences between colleges, courses and teachers mean there is no common basis of evaluation, universities could not possibly allocate admission to competitive programs based solely on students’ general average (e.g. 88.4%). Admission based directly on raw averages is roughly what happens in the United States with grade point averages (GPAs), and it usually leads to grade inflation, defined as “the tendency to award progressively higher academic grades for work that would have received lower grades in the past”. The R-Score remedies this to some extent in Quebec.
What is the R-Score?
The R-Score is best understood as a classification method employed by universities to assign an absolute score to the academic performance of any student in a college-level institution in Quebec, based on their relative performance with respect to their peers. While the R-Score is often criticized for being complex and difficult to understand, it is actually exceedingly simple.
There are four factors which affect the calculation of the R-Score. These are not to be confused with variables, since they are mostly unchangeable statistical characteristics of each group. While these factors of the R-Score formula are what many students and adults take pride in understanding, they are only of limited importance, and do not capture all the nuances of R-Score calculation. With that said, here are the values:
- Grade: This is the grade a student achieves in a course.
- Class average: This is the average of all grades in the evaluation group, which we will explain further later on. While a student may earn 80% at College A and 83% at College B for the same work, as seen earlier, this discrepancy is remedied by only considering the difference from the average rather than the absolute grade.
- Standard deviation: This is a statistical measure of how spread out the grades are from the average within an evaluation group. It is computed as the square root of the average of the squared deviations from the average. It is more often than not confused with the average deviation from the average, which sounds like the same thing but is not: averaging the absolute deviations is not the same as taking the square root of the average of the squared deviations (see the sketch after this list). It is useful to understand that, because the deviations are squared, large deviations raise the standard deviation more than small deviations lower it: therefore, evaluation groups with major outliers will have an unfavourable standard deviation.
- High school average: While the two previous values correct for variations in grading among similar groups, they do not account for group strength. In fact, before this last factor was introduced, strong students at colleges with the weakest students were advantaged over those at strong colleges. To correct this, the average Secondary IV and V grades in mandatory subjects of every student in the evaluation group are factored into the calculation to account for weaker or stronger groups. This absolutely does not mean that a student who did badly in high school will be disadvantaged in CEGEP: their own grades account for only a tiny portion of the group’s high school average.
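To make the difference between the standard deviation and the average deviation concrete, here is a minimal sketch with made-up grades (the numbers are purely illustrative). Note how adding a single low grade pulls the standard deviation up more sharply than the average absolute deviation, because the deviations are squared:

```python
import statistics

def avg_abs_deviation(grades):
    """Average of the absolute deviations from the mean (not the standard deviation)."""
    mean = statistics.mean(grades)
    return sum(abs(g - mean) for g in grades) / len(grades)

# Hypothetical evaluation group, then the same grades with one low outlier added.
group = [70, 75, 80, 85, 90]
group_with_outlier = group + [50]

for label, grades in (("no outlier", group), ("with outlier", group_with_outlier)):
    print(label,
          "| std dev =", round(statistics.pstdev(grades), 1),
          "| avg abs deviation =", round(avg_abs_deviation(grades), 1))
```

With these invented grades, the lone 50% raises the standard deviation from about 7 to about 13, while the average absolute deviation only moves from 6 to 10.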
These four factors are used in the following formula to calculate a course R-Score for each course, and their values vary according to the specific course and evaluation group:
( [(Grade – Average)/Standard deviation] + [(High school average – 75)/14] + [5] ) × 5
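The formula translates directly into code. As a minimal sketch (the function name and the numbers in the example call are invented for illustration):

```python
def course_r_score(grade, class_average, std_dev, hs_average):
    """Course R-Score per the formula above.

    grade, class_average and std_dev come from the evaluation group;
    hs_average is the group's average high school grade in mandatory subjects.
    """
    z = (grade - class_average) / std_dev   # distance from the group average, in standard deviations
    ifg = (hs_average - 75) / 14            # correction for the strength of the group
    return (z + ifg + 5) * 5

# Invented example: 85% in a course averaging 78% with a 9% standard deviation,
# in a group whose high school average was 82%.
print(round(course_r_score(85, 78, 9, 82), 2))   # 31.39
```

The constants recentre and rescale the result: a student sitting exactly at the average of a group whose high school average is 75 gets (0 + 0 + 5) × 5 = 25.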
A student’s global R-Score, which is used for university admissions, is a credit-weighted average of the course R-Scores of all courses in their current program. For this reason, some of the student’s semesters may be worth a greater share of their global R-Score than others, if they include more credits.
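A sketch of that credit-weighted average, again with invented course data (the credit values are arbitrary):

```python
def global_r_score(courses):
    """Credit-weighted average of (course_r_score, credits) pairs."""
    total_credits = sum(credits for _, credits in courses)
    return sum(r * credits for r, credits in courses) / total_credits

# Invented example: the 2.66-credit course pulls the global R-Score
# toward its own course R-Score more strongly than the 2.00-credit one.
courses = [(31.4, 2.66), (28.0, 2.00), (33.5, 2.33)]
print(round(global_r_score(courses), 2))   # 31.13
```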
What is the evaluation group?
The four factors used in calculating the R-Score, other than the student’s grade, are mostly out of their control, and this is why understanding the details of R-Score calculation is not overly useful for the average student, often creating more confusion than anything else. It is not uncommon in CEGEPs to hear students freaking out about a 22% standard deviation in one of their courses, or raving about the extremely low average in one of their classes, after seeing this information on Omnivox. This reflects a lack of knowledge of the one true variable pertaining to the R-Score: the evaluation group.
The evaluation group, sometimes called ‘group at evaluation’, consists of the group of students against which one is compared in a certain course, and from which the class average, standard deviation and high school average are calculated.
It is essential to remember that students with less than 50% in the course are always excluded from the calculation (hereafter referred to as “low-fails”, to distinguish them from grades between 50% and 60%, which are failing but still included). This has several implications: the standard deviation always remains reasonably low, given that major outliers like 0% are always excluded, and the average is almost always somewhat higher than expected, once those low-failing grades are removed. Another effect of this criterion is that the R-Score is even more opaque than it would otherwise be: at most colleges, only the overall group average is ever published, not that of the true evaluation group used to calculate the R-Score, which is higher.
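A minimal sketch of this exclusion rule, with made-up grades: dropping the grades below 50% raises the average and shrinks the standard deviation of the true evaluation group relative to the published, overall figures.

```python
import statistics

# Hypothetical grades for a whole class, including two students below 50%.
all_grades = [35, 48, 55, 62, 70, 75, 80, 85, 90]

# The evaluation group keeps only grades of 50% or more.
evaluation_group = [g for g in all_grades if g >= 50]

for label, grades in (("published (all grades)", all_grades),
                      ("evaluation group (>= 50%)", evaluation_group)):
    print(f"{label}: average = {statistics.mean(grades):.1f}, "
          f"std dev = {statistics.pstdev(grades):.1f}")
```

With these invented grades, removing the two marks below 50% raises the average from about 67% to 74% and lowers the standard deviation from about 17 to 12.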
Contrary to what one might assume, the class average is very rarely that of a student’s ‘class’, in the sense of a course taught at the same dates and times in the week. According to the CREPUQ, the evaluation group “should include the students of the same college who have taken the same course, the same year, the same semester and who have been evaluated in the same way.” This is very similar to the criteria mentioned earlier: the same group of students, the same learning experiences (“the same course”), the same grading methods (“evaluated in the same way”). Furthermore, the CREPUQ implies that evaluation groups should consist of every student who has taken a certain course with a certain teacher in the same semester.
This seems logical: regardless of the date and time at which this class meets, students in the English 101 course “20th Century Cinema” taught by Prof. John Doe will be evaluated exactly the same way. However, students in an English 101 course on the same subject taught by Prof. Jane Roe will probably be evaluated slightly differently, as every instructor’s judgement and requirements will vary. In the case of an English 101 course titled “19th Century Poetry” taught by Prof. Joe Bloggs in the same semester, the evaluation group is even more clearly separate: not only are the grading methods different, but so is the subject matter.
However, as the word ‘should’ in the definition of the evaluation group indicates, what seems like the only reasonable way to determine this group is only a strong suggestion. Indeed, the authority in charge of the calculation of the R-Score considers that “each CEGEP is responsible for the way it determines its evaluation groups.” While it is RScology’s opinion that there is only ever one correct way to determine this group for a given scenario, the methods for determining it vary from one college to another, introducing distortions which usually advantage clever students at the expense of the uninformed. Furthermore, the Ministry of Higher Education of Quebec (MEES) is similarly deferent to the “independence of colleges” in this matter, even in cases where a college’s method is obviously incorrect or unfair.
To know how evaluation groups are determined, and how to make the most of them, read our article Evaluation Groups, and How to Use Them to Your Advantage.