A chatbot is an artificial intelligence-based software application that can engage in human-like conversation. Users can ask questions or request information, and the chatbot responds within seconds. ChatGPT accumulated a million users just five days after its initial release.

Although there has been extensive discussion of the impact of AI on education, only a small number of peer-reviewed articles have been published on the matter. A new article in the Journal of Applied Learning and Teaching points out that some educators have expressed concerns about ChatGPT's potential to replace traditional assessment methods, as well as its inability to evaluate the accuracy of the information it generates. However, the authors suggest that a pragmatic approach to managing the challenges presented by ChatGPT may be more productive than resistance.

One of the primary concerns raised about ChatGPT is its potential to undermine the value of essays as an assessment tool. Some educators fear that students may use ChatGPT to complete their written assignments, since it can generate readable text within seconds and bypass plagiarism-detection tools. These concerns, however, may partly reflect instructors' reluctance to adapt their assessment methods, since written assignments are often criticized for being unengaging and ineffective in gauging students' learning.

The ability to generate essays using AI has created challenges for educators, but some are embracing the opportunities for innovation in teaching and learning that it presents. Engaging students and instructors in shaping and using AI tools to enhance learning is proposed as a better approach than prohibiting their use. In the future, these tools may become a standard component of writing, much as calculators and computers are now integral to math and science. The potential benefits of language models such as ChatGPT include assisting with writing, improving search engines, and answering questions.

Currently, ChatGPT has a significant limitation: it does not include sources or citations in its responses. While it can recommend books and explain the reasons for its recommendations, it does not provide in-text references or a reference list. This is a major drawback for academic assignments that require a certain number of references. However, OpenAI has developed a prototype called WebGPT, which can browse the web, draw on verified sources and quotations, and incorporate up-to-date information. In the meantime, a GPT-3-based tool called Elicit markets itself as an AI research assistant that can reduce the time needed to write a literature review or research proposal. Elicit can respond to research questions, suggest academic articles, and provide summaries drawn from a repository of 175 million scholarly papers.

What measures can be taken in the short term? Are there strategies to counteract AI-generated texts? The article suggests physical closed-book exams, proctoring/surveillance software, and writing assignments that ChatGPT currently handles poorly, such as summarizing the results of student discussions or discussing recent news stories. Finally, the article proposes alternative assessment methods such as oral exams and video or audio submissions in which students discuss their essays or reflect metacognitively on their writing process.

Comment: There is no doubt that AI can now create convincing, well-organized text on a wide range of topics. For example, everything you have read so far on this page was written by ChatGPT. There has been much discussion about the negative consequences of ChatGPT for hand-in tasks and student assessment; however, I feel that for the most part these worries are unfounded, for two reasons.

AI doesn’t always get it right. Although ChatGPT can produce convincing text, that is all it is: text. There is no actual thinking or analysis at work, which means its texts can contradict themselves. A recent article examined how ChatGPT answered fairly basic physics questions on Newtonian mechanics, finding that its answers contradicted themselves and were often physically incorrect. What is more, when the researchers interacted with the chatbot to see whether it could learn to give correct answers, they were frustrated by the way it continued to give incorrect answers in an authoritative manner, even after agreeing that an answer was incorrect (see Gregorcic & Pendrill, 2023). The researchers concluded that the best use for the chatbot in physics teaching was perhaps in addressing student misconceptions, precisely because of its ability to produce convincing but incorrect answers. Such answers can either be discussed by students or used to create alternatives for multiple-choice questions.

AI doesn’t say what you want it to say. The second reason I am not overly worried by ChatGPT is that it is difficult to get it to say what you yourself want to say. For example, producing the text above was much more difficult than normal, both because of my unfamiliarity with ChatGPT and because of its limits on how much text can be entered and summarized in one go. This could, of course, be rectified with time. However, although correct, the themes in the above text are not the ones I would have chosen to highlight. I could have played around with the text, regenerating and refining, but I felt that would have taken much more time than simply doing the writing myself.

Focusing on what’s important. All of the above sounds like a damning indictment of the use of chatbots in education. So why am I cautiously optimistic about AI? Well, as an educational researcher I think language models such as ChatGPT help us focus on what is important. In many undergraduate courses we have traditionally spent a lot of time and energy helping students with the mechanics of producing texts. We discuss the layout of lab reports, the structure of introductions in essays, the importance of presenting counter-arguments in argumentative texts, and so on. All of this is what language models such as ChatGPT do extremely well. So, firstly, it seems to me that we can use AI to help us with these teaching tasks; for example, AI can rewrite student texts, modeling how the ideas could have been better presented. However, I think there is a more interesting educational question here. Do we still need to teach students how to write? Perhaps outsourcing that part of our work to AI would allow us to focus more on the ideas themselves?

Language and learning. Vygotsky famously suggested that there is a direct link between thought and language: he claimed that language structures our thoughts. As Halliday & Martin (1993:8) put it: “Language is not passively reflecting some pre-existing conceptual structure, on the contrary, it is actively engaged in bringing such structures into being.”

Going forward, it will be fascinating to see how the “language without thought” produced by AI can help us redefine what we think learning is.

Interested in learning more about the impact of ChatGPT on higher education? The Centre for the Advancement of University Teaching (CeUL) will be hosting an online workshop on 12th May 2023 for university teachers and directors of studies. The effects of AI in higher education were also recently discussed in the latest edition of Panorama (In Swedish).

Note
The first section of this online article was written using ChatGPT, following OpenAI’s terms of use.

References
Gregorcic, B., & Pendrill, A. M. (2023). ChatGPT and the frustrated Socrates. Physics Education, 58(3), 035021.
Halliday, M. A. K., & Martin, J. R. (1993). Writing science: Literacy and discursive power. London: The Falmer Press.

Text: John Airey, Department of Teaching and Learning and ChatGPT

The study
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).

Keywords: ChatGPT, OpenAI, Assessment, Undergraduate learning, Artificial intelligence, Language models