Off to college: How accurate is Pakistan’s equivalence formula?


Students seen taking an exam in a classroom. Photo: Geo.tv/file

Fall is admissions season for colleges and universities. It is also the time of year when the issue of equivalence between different board exams rears its head again.

Fair comparability of exam results across the 30 different local exam boards has always been a challenge; an 80 percent score in Islamabad means something very different from the same score from an exam board in an underdeveloped area. This puts students from more competitive exam boards at a disadvantage when competing against students from all across the country for admission to sought-after university programmes.

Students who take the Cambridge IGCSE / O/A-level, IB or another foreign exam route through high school are disadvantaged even further. Before they can apply to most universities, they must apply to the Inter Board Committee of Chairmen (IBCC) for an equivalent score. Under the present rules, students with the highest Cambridge grade, an A*, get an equivalent score of 90 percent. Subsequent grades are mapped to percentages at 10 percent intervals: 80 percent for an A, 70 percent for a B, and so on. That effectively caps Cambridge students with a perfect result (straight A*s in all subjects) at 90 percent.
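The mapping just described can be sketched in a few lines. This is an illustrative simplification, not the IBCC's actual formula; the A*/A/B values come from the rules described above, while the grades below B and the per-subject averaging are assumptions for illustration only.

```python
# Illustrative sketch of the IBCC grade-to-percentage mapping.
# A*/A/B values are as described; grades below B are an assumed
# continuation of the 10-point intervals.
GRADE_TO_PERCENT = {
    "A*": 90,
    "A": 80,
    "B": 70,
    "C": 60,  # assumed, not stated in the rules above
}

def equivalence_score(grades):
    """Average the mapped percentages across a student's subjects
    (averaging is an assumption made for this sketch)."""
    return sum(GRADE_TO_PERCENT[g] for g in grades) / len(grades)

# A student with straight A*s is capped at 90 percent:
print(equivalence_score(["A*", "A*", "A*"]))  # -> 90.0
```

The cap is visible immediately: no combination of grades can exceed 90 percent, while top FBISE students routinely score above that.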

For comparison, in 2019, the last pre-Covid year in which regular exams were held, the top-scoring HSSC students in the Federal Board of Intermediate and Secondary Education (FBISE) pre-medical and pre-engineering groups scored around 96 and 95 percent, respectively.

Given grade inflation in the local school system and the equivalence formula as it stands, it is effectively impossible for A-level students to secure admission to the most competitive university programmes, medical colleges foremost among them. This is a long-standing issue and, although the mapping of grades to percentages has been revised over the years, it has always lagged behind and has never been enough to give a student with perfect grades a real shot at getting into competitive programmes.

As a result, for decades, students entering high school who plan to apply to medical and (to some extent) engineering programmes have been compelled to switch to the local school system. Instead of fixing universities' admissions criteria, we force parents and children to trade a relatively better education for a lesser one in exchange for a shot at being admitted to a programme of their choice.

Meanwhile, university leaders continue to complain that high-school students arrive under-prepared. This year, with both Cambridge and the provinces devising their own ways to compensate for cancelled, disrupted and delayed classes and exams, comparing student exam scores will be even more of an apples-and-oranges exercise.

To navigate these uneven standards, some Pakistani universities, including LUMS, NUST and GIKI, began in the 1990s to base admissions decisions on their own entrance exams or on standardized tests (like the SAT). Over the years, more and more universities have followed that lead. Today, the Pakistan Medical Commission (PMC) conducts the national Medical and Dental Colleges Admissions Test (MDCAT), whose score accounts for 50 percent of an applicant’s admissions score; the remainder comes from FSc / HSSC (40 percent) and Matric / SSC (10 percent) results.

Facing the same difficulties evaluating diverse applicant qualifications, many engineering universities conduct their own entrance exams. The different ways HSSC exams were delayed, conducted and graded this year made comparison so difficult that NUST demoted the HSSC result to a pass/fail requirement with zero weight in the admission score. It now gives 75 percent weight to its entrance test and the remaining 25 percent to SSC / Matric results.
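Both weighting schemes are instances of the same weighted sum. A minimal sketch, assuming each component score is already normalized to a 0-100 scale; the example component scores and the `NET` label for NUST's entrance test are made up for illustration:

```python
def weighted_score(components, weights):
    """Combine component scores (each on a 0-100 scale) using
    fractional weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(components[name] * w for name, w in weights.items())

# PMC's MDCAT formula: 50% MDCAT, 40% FSc/HSSC, 10% Matric/SSC.
pmc = weighted_score(
    {"MDCAT": 85, "HSSC": 90, "SSC": 95},  # hypothetical scores
    {"MDCAT": 0.50, "HSSC": 0.40, "SSC": 0.10},
)

# NUST's revised formula: 75% entrance test, 25% SSC/Matric;
# HSSC becomes a pass/fail requirement with zero weight.
nust = weighted_score(
    {"NET": 80, "SSC": 95},  # hypothetical scores
    {"NET": 0.75, "SSC": 0.25},
)
print(round(pmc, 2), round(nust, 2))  # -> 88.0 83.75
```

The helper makes the policy choice explicit: changing a board exam's influence on admissions is just a matter of changing its weight, all the way down to zero.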

Meanwhile, LUMS has for years used a holistic admissions process, like most universities in the US, considering school grades, admissions tests and an applicant’s non-quantifiables, such as extracurriculars. Such an approach relies on the incorruptibility of admissions committees and on the trust of students and parents. Unfortunately, that makes it unworkable for most universities in our local context.

Every university wants the best talent, because talented input will likely produce talented output. If relevant government departments in collaboration with international examination systems cannot devise an adequate method for exam result equivalence, then universities will devise their own. Entrance tests serve as a uniform yardstick, letting colleges and universities test applicants on whatever material they deem relevant, without unduly handicapping students for graduating from the ‘wrong’ school system. 

In time, perhaps the admissions process can be made completely fair by basing decisions solely on admissions tests. High-school qualifications of some level could be retained as eligibility criteria for taking the admission test. Admission tests would then become purpose-specific tests like the CSS or GRE.

A uniform yardstick can also serve as a transparent measure of success for the government’s upcoming Single National Curriculum (SNC). If the SNC succeeds in achieving its declared objectives, that success should, in the long term, be reflected in admissions test results. Whether government departments are prepared for it or not, this is increasingly the direction things seem headed, and some in government realise it. That makes it all the more important to treat the next phases of developing the SNC (middle, secondary and higher secondary), and revising the primary SNC in light of feedback received, very seriously, so that the curriculum is as competitive as envisioned.

The FBISE is keen to delink its exams from textbooks altogether and develop assessments based on student learning outcomes (SLOs) of its syllabi. When exams test for learning outcomes, rather than the ability to reproduce content from a particular textbook, we can claim learning is happening. When the focus becomes achieving learning outcomes, we can have publishers compete to produce the best textbooks to achieve those outcomes.

If this sounds too radical, recall that we will only have gone in a circle and be back where we started in the 60s, before the establishment of textbook boards that monopolized the making of textbooks. We would only be reinventing the wheel and arriving at the same model Cambridge International Education and many other school systems around the world arrived at long ago. This also makes focusing on teacher training and school improvement more important than mere replacement of textbooks.

Making this transition to SLO-based examination across boards will not be easy. There are always parties with vested interests in the status quo: weaker public exam boards could see an exodus once fewer students are compelled to switch to the local system, and a level admissions playing field would separate the wheat from the chaff. The IBCC will have to continue playing its role in identifying and addressing stark disparities in the quality of education delivery and the distribution of resources. The roles of the IBCC and the Inter Provincial Education Ministers Conference (IPEMC) will be key to building consensus among the exam boards and provincial education departments.

Another development indicative of this trend is the HEC’s recent announcement of the Undergraduate Studies Admissions Test (USAT). Universities can use USAT scores to make admissions decisions; how many will include it in their admissions criteria remains to be seen. The USAT is patterned after its more famous international cousin, the SAT, and includes the same verbal reasoning, quantitative reasoning and essay-writing sections, but costs only Rs1,200 instead of the SAT’s $95. Like the SAT, the USAT is a general-purpose aptitude test, not geared towards any specific discipline or programme.

What is positive is that the push to resolve this long-standing issue of equivalence is coming from the Federal Ministry of Education and not from any international exam board, even though the boards’ students have the most to gain. Governments are obligated to look out for the interests of all children and ensure the provision of education as a public good, while the private sector only has to look out for its customers’ and its own business interests.

Ultimately, the best yardstick would be well-made, fairly administered tests that sort children by the knowledge, skills and aptitude needed to become doctors, engineers, lawyers, scientists and so on, without bias in favour of students from any one school system.

This will put the onus of teaching students a conceptual understanding of subjects (regardless of which admissions test they later sit) where it should be: on schools, both public and private. It will make equivalence between school qualifications irrelevant. Until then, no matter what equivalence mechanisms we develop, they will always involve some degree of subjectivity and produce winners and losers.

The writer is technical adviser to the MoFEPT. Opinions are her own.

This article originally appeared in the August 21, 2021 edition of daily The News.