The rise of artificial intelligence (AI) technology has raised concerns among many about its potential use in academic dishonesty, but SUNY Oswego communication studies professor Ulises Mejias notes that limitations still exist, and that the moment provides an opportunity for reflection.
“We are living in an interesting moment,” said Mejias, whose research specialties include how technology impacts daily life. While he doesn’t currently see or expect widespread student cheating, he does ponder “what happens when you introduce a technology that makes it so easy to do so, in almost undetectable ways?”
“In my courses, we are continuously asking whether technology shapes society or whether society shapes technology,” Mejias said. “It's both, of course, but I do think that new technologies have the potential to disrupt or at least change social behavior.”
The most prominent example is ChatGPT, a conversational AI text-generation service from OpenAI, which has become a recent source of concern among educators.
Mejias asked ChatGPT to describe itself in two sentences, and it generated: “ChatGPT is a variant of the GPT-3 language model that has been fine-tuned for the task of generating human-like text in a conversational style. It is designed to be able to continue a conversation and generate appropriate responses based on the input it receives.”
Mejias, co-author of the book “The Costs of Connection,” which examines some of the intrusive side effects of modern technology, and an international Fulbright Specialist on the topic, noted that this is an example of chatbots still presenting only surface knowledge.
“If you tell ChatGPT to write a 5,000-word essay on a topic like ‘Moby Dick,’ photosynthesis, the cash-basis accounting method or the history of AI, it will probably do a decent job, but anyone with some knowledge of the topic would probably be able to detect that something is wrong,” he explained. Papers written mostly by humans but padded with ChatGPT-generated text to meet a length requirement might be more difficult to detect.
“Maybe that means that we, as teachers, should care less about our students' word counts and more about the originality and structure of their arguments,” Mejias suggested.
Detecting AI
“AI's lack of originality (or maybe we should say its unique kind of originality) is the reason why tools like ChatGPT are being used to write bland text that doesn't need a lot of human flair: weather reports, stock market reports, short summaries of sports games, etc.,” Mejias said.
This limitation, along with AI's lack of creativity and a distinctive voice, makes machine-generated text easier to detect, “although advances are being made in this area,” Mejias said. “You've probably seen examples of AI-generated artwork. It still looks like it was generated by a machine, not a human, but it's original!”
In addition, shortcomings in citing sources and in handling specifics can be telling.
“Other ways in which it might be obvious that something has been written by AI include the poor use of citations, which are an important part of academic writing,” Mejias explained. “And while AI is good at writing about general knowledge (e.g., what is ‘Moby Dick’ about?), it doesn't do very well with specific knowledge (e.g., what are the main themes in ‘Moby Dick,’ and which scholars have written about them?).”
In addition, OpenAI, the maker of ChatGPT, has released its own detection tool that faculty can use, and more help may be on the way.
“I just read that a student at Princeton has built an app that can detect with some degree of accuracy if ChatGPT was used to generate a paper,” Mejias said. “The problem is that, unlike tools that detect plagiarism, the system can't point to original text as evidence, so it's the student's word against the instructor's.”
Broader educational implications
But with the challenges of AI come opportunities, Mejias noted.
“Firstly, as an instructor I would encourage students to play around with it,” he said. “ChatGPT is an ingenious instrument, and instead of trying to scare students away from it, let them use it and experience what it can and can't do. Have a discussion with them about it.”
In addition, it can lead to teachers thinking more carefully about how they design assignments. Mejias points to recommendations by John Kane, director of SUNY Oswego's Center for Excellence in Learning and Teaching (CELT), including “carefully scaffolded assignments, narrowly scoped writing prompts that tie into specific course content, and authentic assessments.”
For faculty members everywhere, their institution's equivalent of CELT can be a tremendous resource, Mejias added.
Mejias also recommends academia dedicate more resources to teaching humanities, social sciences, and science and technology studies.
“These disciplines can guide our conversations with students about these issues,” Mejias said. “It is particularly important that students in STEM, who design AI tools like ChatGPT, be aware of these discussions. In today's society, opportunities to engage in critical thinking about technology need to be part of every curriculum and program, in my opinion.”
Beyond classrooms
But the implications of AI technology extend beyond higher education into such realms as propaganda and equity, Mejias noted.
“The reason this is important is that the most significant effects of AI on our society have little to do with automatically generated text,” he said.
“There is also fear that it will be used to generate massive amounts of fake news: instead of one false report about a supposedly stolen election, someone could quickly generate lots of unique reports and distribute them on social media to make it seem like different people are writing those reports,” he explained.
“AI is being used, as we speak, to deny the most vulnerable individuals in our communities opportunities and access to resources,” Mejias said. “When AI can deny a loan application, determine the amount of welfare that is received, identify people as more likely to commit crime, increase the insurance rates or rents of specific populations, or deny a job application — all with little human intervention or oversight — then we've got bigger problems than plagiarism.”