
The pandemic has confirmed that exams are flawed and outdated

While taking exams and writing my assessed essays during the COVID-19 pandemic, it became clear to me that some of the ways we are assessed are completely outdated. Sat in my room filling in the answers to open-book maths exams, I never felt like I was doing maths. It felt more like I was following a process that I had learned beforehand which I knew would get me the marks I needed.

I was never being assessed on my mathematical ability – I was being assessed on how quickly I could use the ctrl-f function on my keyboard to find the answer to most of the questions. Even when the answers were not written word-for-word in the lecture notes, they were usually exercises previously found in assignment sheets. The only thing being tested was my memory.

What did not help was the fact that the exams had not been rewritten to accommodate an open-book format. As a result, the majority of answers did not require any reasoning of my own, and the format exposed a glaring flaw in the way these exams have been run for years.

They are not exams of maths, philosophy, science, or languages, they are examinations of memory

The fact is that these exams have always been memory tests. STEM students need to get off their high horses while engaging in WarwickFessions arguments and realise that, especially in an open-book format, anyone could get a first in these exams as long as they have a good memory.

Of course, there are also issues in the ways that humanities subjects are examined. Humanities modules aren’t too far removed from maths exams in their focus upon assessing students’ memories. Every time I had a closed-book essay exam in philosophy last year, my method of revising was simply to memorise essay plans for questions that were bound to come up. That assessed my ability to memorise an essay plan, not my ability to reason through philosophical problems.

There is clearly a common issue which plagues these exams, which was also apparent at A-level and GCSE. They are not exams of maths, philosophy, science, or languages, they are examinations of memory.

While memory is of course helpful, it is not as important as the ability to reason and work through your problems

The issue is that no matter where you end up after university, whether you go on to further study involving research or into employment, you are going to have to rely on more than just your memory to solve the problems that you encounter. While memory is of course helpful, it is not as important as the ability to reason and work through your problems. This is especially true in research, which requires you to expand what is already known about a subject.

To solve this issue, I think it is clear that questions for STEM subjects need to be written so that students are not simply reproducing bookwork to the extent they are currently expected to. In the majority of my exams, it seemed that bookwork alone could earn a strong 2:1, or even a first. Studying any STEM subject requires the student to be adept at both proof and calculation, something the exams currently do not encourage.

Meanwhile, for humanities exams, coursework essays make more sense as a means of assessing students. You never write your best work in the space of two hours; you write your best work when you’ve had time to really think about your argument and how you’ll evaluate the question. This is a better way of assessing your ability to argue key points in a complex debate, instead of assessing how much of said debate you can regurgitate in a short time period.

Interviewing someone on a topic and assessing their ability to explain information via speech is a much better method of assessing someone’s knowledge overall than a written exam

Of course, the best way to truly assess someone’s knowledge of a topic (which is why it is used for assessing PhD theses) is a viva. Interviewing someone on a topic and assessing their ability to explain information via speech is a much better method of assessing someone’s knowledge overall than a written exam. Here, it is much more difficult for someone to fraudulently reproduce what they’ve seen elsewhere, as it is easier to spot when someone is blagging via speech than via writing.

Pragmatically, using vivas as a universal assessment method is currently impractical, yet I hope moves are made one day to make their implementation more widespread. It would be a wonderful way to improve skills in public speaking and in explaining complex ideas clearly and concisely, skills that go a long way in the world outside of exams. The only skill our exams currently test is our skill in sitting exams.
