OpenAI, among other artificial intelligence companies, has created products like ChatGPT that allow anyone to write an essay, find sources and answer complex questions in under a minute. With this monumental shift in technological capability, teachers and students are questioning what it means for the future of the classroom as they know it.
David Williamson Shaffer, a professor of learning analytics in the Department of Education, has previously written about AI in the classroom, including where it should be integrated and where it should be avoided.
“There’s a long history of AI in the classroom,” Shaffer said. “Much of that has been in tutoring systems, like chemistry or algebra, where problems are well-formed. There are things that you can do and there’s a right order to them and if you do them in the right order, you’ll get the solution.”
Shaffer said the difference between past classroom AI and the large language models students use today, such as ChatGPT, is that LLMs can take on much more complex questions that do not have clear right or wrong answers.
LLMs gather information from the internet or from fixed data sets. They then use a prediction engine to predict which word the user wants to come next, and they repeat that process to build up the answer the user is looking for, Shaffer said.
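To make the "predict the next word, then repeat" idea concrete, here is a minimal, illustrative sketch in Python. The `predict_next_word` function is a hypothetical stand-in for the neural network inside a real LLM, which scores tens of thousands of candidate tokens at each step; only the loop structure is meant to mirror how generation works.

```python
# Illustrative sketch of autoregressive text generation (not a real LLM).
# `predict_next_word` is a hypothetical stand-in for a trained model that,
# given the text so far, picks the most likely next word.

TOY_VOCAB = ["the", "classroom", "students", "use", "AI", "to", "learn", "."]

def predict_next_word(text_so_far: str) -> str:
    """Hypothetical model call: a real LLM scores every token in a large
    vocabulary with a neural network; here a toy rule cycles through a
    tiny word list based on how long the text is."""
    return TOY_VOCAB[len(text_so_far.split()) % len(TOY_VOCAB)]

def generate(prompt: str, max_words: int = 10) -> str:
    """Repeat the prediction step, feeding each new word back in as context."""
    text = prompt
    for _ in range(max_words):
        next_word = predict_next_word(text)
        text += " " + next_word
        if next_word == ".":  # stop once the model emits an end marker
            break
    return text

print(generate("AI in"))
```

The key point of the sketch is that each predicted word is appended to the prompt and fed back in, so the model only ever answers the question "what comes next?" over and over.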
In the context of the classroom, Shaffer said, AI can be a good starting point, but it cannot complete a student's work the way a student putting forward their full effort would.
“I think of it sort of like an undergraduate who skims the reading and is kind of faking it through the class,” Shaffer said. “Sure, sometimes that’s good and frankly, I find it’s usually helpful. But I don’t think you would want it to write your essay.”
Shaffer said there are many different ways to deal with AI — he has seen institutions ignore it, ban it, outfox it, teach about it or teach with it.
Kangwook Lee, an assistant professor in the Department of Electrical and Computer Engineering, said ignoring AI is not the answer, and that in a few years those who are hesitant will fall behind those who have adapted.
“If you really adapt to this new technology and learn how to use it better than others, you will excel at every step in your learning,” Lee said. “As a student, it is the most important thing you can use to improve your learning in many factors, and I hope you’re working with it.”
Both Shaffer and Lee said they “outfox” AI in the classroom to make sure students’ work is their own, but they also teach with it to maximize student performance.
Lee said he no longer holds online exams at all to ensure there is no cheating, but he recommends that students, and anyone else, use AI in their daily workflow. He said he also often uses it himself to create diagrams and presentations.
Shaffer said he does not just design classwork that cannot be completed with AI; he also teaches with it.
“There’s plenty of evidence that suggests you do learn a lot by seeing something that is flawed and correcting it instead of just trying to produce and getting corrections on your own thing,” Shaffer said.
While ChatGPT and other LLMs perform well and will always produce an output at least somewhat similar to what the user asked for, there has been public doubt about the accuracy of that output. Lee and Shaffer have differing opinions on how accurate and reliable ChatGPT is for students to use on their schoolwork.
According to Lee, ChatGPT does basic research better than almost any student and can reliably produce enough information to answer most questions.
“[ChatGPT is reliable] for the undergrad curriculum,” Lee said. “You can basically assume the accuracy is above 90%. You can always get an A.”
Shaffer, on the other hand, takes a more skeptical approach to LLMs. He said he would advise putting almost no trust in ChatGPT’s sources, not because the model is especially inaccurate, but because it is hard to predict when and where it is wrong.
Shaffer said that because so much online content is now produced by ChatGPT, and because ChatGPT recently gained the ability to pull information from the internet, the model has started processing information it created itself.
Despite differing opinions on the sources ChatGPT pulls from, both experts said AI in the classroom will become mainstream.
Shaffer said similar debates happened when the calculator was introduced. Lee said that if people looked back at past debates over the internet and education, they would look very similar to current debates about AI and education.
Artificial intelligence in the classroom is becoming increasingly normal, and it is important that people understand how it will fit into their learning environment, Lee said.