ChatGPT is today what Google was to millennials: a catch-all phrase covering all kinds of Artificial Intelligence products, from automated bots that help with food ordering to virtual tools that can write an entire essay in a minute.
There is no denying that AI tools have become part of our daily transactions. We use online banking applications, depend on AI to give us directions to a new place, and even let it recommend films, books and brands of products based on our previous purchases.
In barely two years, however, AI has taken on a whole new activity: generating content. It writes, makes videos and draws what might pass for original material. This generative content is now used by everyone, from students submitting assignments to content creators who are actively encouraged to fall back on such tools to save time and effort.
What exactly are the ethics behind this? In a recent social media post, a user asked whether they could sell posters created with an AI illustration tool. The user had taken a standard template and given it a personal touch. Did the product become theirs, they wondered.
This is becoming routine in education. Teachers are increasingly handed assignments that are clearly not written entirely by the students themselves. When confronted, the typical student answer is that the content, the ‘ideas’, is theirs, but that they put their text through a writing ‘corrector’. These tools clean up the grammar, substitute better-sounding or more appropriate words, and suggest alternative sentences.
Can such a text be considered original? The ethics behind this are not clear, either from an educator’s point of view or from a legal one.
To a large extent, it depends on which side you are on. Students and learners say that these tools exist for a reason and that they should be allowed to use them to correct their language. Teachers respond that this is not the student’s original work.
Short-term rules can easily be made to sidestep these ethical questions. Already, many instructors in higher education internationally have re-introduced class tests and supervised assignments as a stopgap. They have also drawn up rigid rules specifying the percentage of borrowed language that will be allowed.
But this does not effectively solve the problem of using Artificial Intelligence, if it is considered a problem at all.
The World Economic Forum, one of the many institutions that have weighed in on this debate, has outlined a range of strategies from which schools and colleges can choose in their use of AI: permissive, moderate and restrictive. This essentially means that students can be encouraged, even taught, to use AI tools in submitted assignments; allowed to use them selectively, only to understand certain concepts; or barred from using them at all.
Such rules are easy to make but difficult to implement. The online applications that once detected unoriginal content can no longer do so with any finality; the typical automated verdict is that a student essay ‘may or may not’ have been produced by AI.
If the purpose of education is knowledge production, then falling back on technology that supplies this knowledge ready-made seems counterproductive. Today, it is more important to teach students how to use and apply knowledge than to make them reproduce existing content. Strategies that encourage such application should be the focus of education.