In life, humans have always worked hard to seek the edge, the competitive advantage that would see us thrive against our competitors or the challenges of our environment. This is genetically hardwired into us through evolution. The downside, of course, is that there is a fine line between gaining an edge and taking an unacceptable risk. To minimise risk, we rely on rules to temper our competitive nature. A good example of how this manifests in today’s technologically advancing society is the world of Formula 1. In the 1950s, rules governing the construction of the cars were lax, to say the least. Cars had an engine capacity limit but no weight limit, and could use any fuel. Crash helmets were optional! Modern cars have access to a range of performance-enhancing technologies, including fuel injection, power recovery and aerodynamic modelling, but are subject to restricted engine and fuel capacity, a minimum weight limit and an impressive array of safety features.
When it comes to academic performance, students too feel the competitive urge to perform well in their assessments and, as in car racing, the modern student has access to a range of tools capable of boosting their performance. Pre-internet scholars gained their edge through investment in encyclopaedias, dictionaries, thesauruses (or thesauri?) and access to libraries. The richest students were able to purchase the services of a proofreader. Performance-enhancing technologies over the last 20 years or so include word processors, spellcheckers, grammar-checking algorithms, the internet, translation tools and, most recently, generative AI.

The response of the regulatory body (be that Formula 1 or higher education institutions) has generally been to ban or restrict the use of those technologies that pose a threat to fairness or safety (academic integrity). By ensuring that all competitors adopt the same rules and have access to the same technology, the competition becomes based on driving skills or on academic argument and critical thinking skills. But where should the lines be drawn in relation to academic writing tools? It would be hard to justify banning students from using spellchecking tools, yet marks are frequently assigned for correct spelling. It is a similar story with grammar checkers, although the most sophisticated of these could be accused of rewriting sections; again, marks are assigned for good writing style. If these tools are permitted, then why is such a fuss being made about generative AI? It could be argued that it is merely a tool that helps students translate their thoughts and critical skills into a written format so that they can be marked. Some critics, however, foresee generative AI being used as a replacement for thinking and as the herald of the end of written assessment altogether.

So what should guide our thinking with regard to the academic use of generative AI? Reaching again for the Formula 1 analogy, it is not unreasonable to foresee driverless cars racing each other with zero risk to human competitors. The technology is certainly available in commercial vehicles and would see an end to crashes and injuries. But is this a sport we would want to watch? Would losing the human element make the race meaningless? In academic terms, we absolutely need human involvement, as it is the human who is being assessed. The bigger question is: what aspects of human performance are we really interested in assessing? Should we care about writing, or is writing a dying art? Would we be better off assessing verbal argument and critical reasoning? Will our children still be putting pen to paper or finger to keyboard in 20 years’ time? Or will they be chatting to their personal AI assistants? Will our higher education institutions see a return to a verbal tradition, with our students watching in bemusement as we nail our thoughts down to paper or screen, slowly and carefully?