How we can fix the problem of the new normal of 100% cut-offs in Delhi University

Once again, the cut-offs for undergraduate courses at the University of Delhi have shaken citizens’ trust in school boards’ results. Trust is the foundation upon which the legitimacy of public institutions is built.

In earlier times, the national and state boards of school education enjoyed a great deal of credibility. The marks they awarded were taken at face value because of their reliability and validity, and were used without hesitation to allocate seats in colleges and universities across the states. Even premier institutions of higher learning saw no need to conduct entrance examinations of their own.

The situation has changed considerably over the years. Boards of school education have lost a good deal of standing because of systemic failures: primarily the imperfection of their testing instruments and procedural flaws that produce, year after year, an implausibly steep rise in marks that is quite far from the reality of learning.

There is a general public perception that standards in schools are declining and that the school boards are failing to maintain standards of evaluation. Things came to such a sorry pass at one stage that some institutions began discounting the marks of certain boards by a fixed number of percentage points because of their inflated marking policies. Some boards slipped into such unhealthy competition that they began awarding a perfect hundred out of hundred even in disciplines where deductive and inductive reasoning play little part, with disastrous consequences. Consequently, the premier institutions were the first to abandon boards’ results for admissions and switch to their own entrance examinations, followed by numerous other institutions of higher learning.

Yet the University of Delhi continues to admit students to its undergraduate programmes on the basis of marks awarded by school boards. An increasing number of students compete each year for about 67,000 seats in its degree courses. Some courses have seen incredible cut-offs of a perfect 100 per cent for several years running, and this year has broken all previous records. This is the result of a perpetual systemic failure compounded by procedural fiascoes in the boards’ declaration of recent results; the evaluation policy the boards adopted this time has further exacerbated the problem, hence the new heights of irrationality in cut-offs.

It is understood that six to seven colleges announced a 100 per cent cut-off in the first list in as many as 13 courses. In some other courses, even the second cut-off list was sealed at 100 per cent, and in several the third list dropped by only half a percentage point to three percentage points. The cut-offs at Hindu College caused a particular storm: 139 admissions to Political Science (Hons) were made at a perfect 100 per cent, and as many as 138 of them were students of the Kerala board.

There have been several national and international reports citing enormous learning loss due to school closures during the COVID-19 pandemic, yet the boards’ results suggest quite the contrary: this time the learning curve appears to have shot up exponentially because of the flawed policy adopted by the board. Each school was asked to internally moderate the marks to be awarded to its students using a reference year, defined as the year in which the school’s students secured their best overall performance in the previous three years’ board examinations. The final marks were computed by weighting the marks obtained by a student in classes X, XI and XII: 30 per cent weightage to class X marks, 30 per cent to class XI and 40 per cent to class XII. It is true that this policy was evolved under an unprecedented situation, but the fact remains that it had no scientific basis, whether in the addition of marks across different subjects or the comparison of marks over the years.
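
To make the 30:30:40 arithmetic concrete, here is a minimal sketch of how such a weighted tabulation works out. The weights come from the policy described above; the marks in the example are invented for illustration and the function is our own, not any board’s actual procedure.

    # Minimal sketch of the 30:30:40 weighting described above (Python).
    # The example marks are invented, not real data.

    def tabulated_mark(class_x: float, class_xi: float, class_xii: float) -> int:
        """Combine component marks (each out of 100) using 30:30:40 weights."""
        return round(0.30 * class_x + 0.30 * class_xi + 0.40 * class_xii)

    # Example: 92 in class X, 88 in class XI components, 95 in class XII components
    print(tabulated_mark(92, 88, 95))  # prints 92

Note that the final mark depends more on classes X and XI combined (60 per cent) than on class XII alone (40 per cent), which is part of what the objections below take issue with.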

First, it is quite unusual in teaching and learning to give credit for the same evaluation on multiple occasions. Second, it is very strange to add and compare the marks of class X with those of classes XI and XII, given the inherent limitations of the interval scale used for awarding marks: the interval scale neither allows marks to be added across different subjects nor permits marks to be compared over the years, because the spread of marks varies widely from year to year as well as across subjects. Third, it is inconceivable to equate the outcomes of an undifferentiated curriculum (class X) with the distinctly different streams of classes XI and XII.

Fourth, it is illogical to take the best performance in the last three years’ board examinations as a reference point, since it gives an unfair advantage to poor performers and hurts competitive students, especially those who compete against themselves. Why should such students be penalised for the poor past performance of their schools? Some aggrieved students have already approached the Supreme Court for a review of the evaluation policy on this count. Fifth, the whole exercise was compromised because internal moderation of marks was largely left to the discretion of individual schools. The net result has been a remarkable upsurge in marks, in some cases wholly unrealistic. It is understood that some candidates who got 80 to 85 per cent marks in the class X board examination failed miserably in subsequent school tests, which is precisely why schools prefer to use their own pre-board class X examination results rather than the board’s results when allocating streams in class XI.
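
The second objection above, about the spread of marks, can be illustrated with a small numerical sketch. The subject names and marks below are invented; the point is only that the same raw mark can sit at very different positions in two differently spread distributions, which is why adding or comparing raw marks across subjects and years is statistically dubious.

    # Illustration (invented numbers) of why raw marks from differently spread
    # distributions are not directly comparable.
    import statistics

    maths   = [88, 90, 91, 89, 92, 90, 87, 93, 85, 90]   # tightly clustered marks
    history = [45, 55, 60, 65, 70, 85, 50, 75, 80, 65]   # widely spread marks

    for name, marks in (("Mathematics", maths), ("History", history)):
        mean = statistics.mean(marks)
        sd = statistics.pstdev(marks)
        z = (85 - mean) / sd   # where a raw mark of 85 sits relative to the group
        print(f"{name}: mean={mean:.1f}, sd={sd:.1f}, a raw 85 is {z:+.2f} sd from the mean")

    # Output: in Mathematics a raw 85 is about two standard deviations below the
    # mean, while in History the same raw 85 is well above the mean.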

The crux of the matter lies in our lack of expert knowledge, not only in designing testing instruments but also in interpreting and documenting their outcomes. Most testing instruments cannot discriminate between high achievers and low achievers because the majority of their questions demand cognitive operations at a very low level: recall, recognition, identification and reproduction. Questions measuring cognitive operations at the level of understanding are few, and fewer still require differentiation, relating, interpretation or extrapolation of concepts. Questions requiring the application of knowledge in novel situations barely find a place in most tests, let alone questions measuring analysis, synthesis and evaluation. These are the basic reasons why the reliability, validity and objectivity of these testing instruments are low, and why they act as great deterrents to the development of abilities such as problem solving, critical thinking and analytical reasoning.
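
One standard psychometric way to put a number on how well an item “discriminates” is the item discrimination index: the proportion of the top-scoring group answering the item correctly minus the proportion of the bottom-scoring group. The sketch below, with invented responses, only illustrates that idea; it is not drawn from any board’s procedure.

    # Item discrimination index: p(correct | top group) - p(correct | bottom group).
    # Values near zero mean the item fails to separate strong and weak candidates.
    # All responses and scores below are invented for illustration.

    def discrimination_index(item_correct, total_scores, fraction=0.27):
        n = len(total_scores)
        k = max(1, int(n * fraction))
        order = sorted(range(n), key=lambda i: total_scores[i])   # rank by overall score
        low, high = order[:k], order[-k:]
        p_high = sum(item_correct[i] for i in high) / k
        p_low = sum(item_correct[i] for i in low) / k
        return p_high - p_low

    item = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]               # nearly everyone answers correctly
    scores = [35, 42, 48, 50, 55, 61, 67, 72, 80, 91]   # overall test scores
    print(discrimination_index(item, scores))           # 0.0: the item discriminates nothing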

The solution, therefore, lies in building a pool of test developers with a proven track record of subject knowledge and the art of framing quality questions that can realistically discriminate between low and high achievers. This is going to be all the more necessary because, from this year, the CBSE is switching over to objective-type testing. It will conduct examinations in major subjects in two terms, once in November-December and again in March-April, to guard against any unprecedented COVID-19-like situation.

Objective-type testing is an altogether different ball game for paper setters. Paper setters and moderators will now also have to look minutely into the philosophical, sociological, psychological, scientific and pedagogical basis of paper setting to ensure a greater degree of objectivity, reliability and validity. They will have to align the content, the objective and the type of multiple-choice question. Experts will have to ensure, among other things, a clear specification of the task in the stem of the item, the absolute correctness of the key option, the plausibility of each distractor, and the elimination of scope for blind guessing without discouraging intelligent guessing. Only a well-worded question can elicit the intended answer. Each item will have to be relevant to the purpose of the test to ensure its validity, and emphasis will have to be laid on testing major concepts and principles rather than facts or events to improve the content validity of the tests. Although this looks simple, it requires expertise out of the ordinary.

The current practice of student evaluation across the boards is wholly unsatisfactory and requires urgent action. The boards should organise massive orientation programmes and workshops for paper setters and moderators to overcome these systemic problems, failing which objective-type testing will only worsen the situation of cut-offs. A whole series of corrective measures must be put in place urgently. If school boards do not fix this problem, universities will have no option but to adopt entrance examinations and admit only those who satisfy the criteria of their own tests.

[Source: Firstpost]