In the last months of 2024, it has been difficult to think of Elon Musk as anything but a major player in the presidential election. Once widely beloved as an eccentric entrepreneur with answers to humanity’s biggest problems, Musk has undoubtedly complicated his reputation with his madcap charge into politics and the Trump campaign. But there was a time not too long ago when one of the most controversial things about him was his desire to merge organic and artificial intelligence.
Many criticized Musk’s plan to create a direct brain-computer interface as a violation of the organic sovereignty of the human brain. By becoming part computer, we would cease to be completely human. Musk’s response to this? We have already crossed that line.
“We are already a cyborg,” Musk said at Code Conference 2016. “You have more power than the president of the United States did 20 years ago. You can answer any question. You can video conference with anyone anywhere. You can send a message to millions of people instantly.”
Hyperbole aside, every student these days, myself included, has an extraordinary amount of power at their fingertips, so how can our professors reasonably expect us not to use that power to our advantage in their classes? Early in the semester, I took an open-book, take-home Canvas quiz. The rules: no collaboration and no resources beyond our notes and the textbook, which only exists online.
As you might expect, the class performed well on average, but there were four questions we collectively struggled with. Most likely unbeknownst to the professor, these were also the four questions ChatGPT got wrong. There was no feasible way to identify violations of academic honesty on this quiz, and even if there had been, it shouldn’t have been used.
In such a situation, students have every incentive to collaborate, whether with another human or with an artificial intelligence, even if just to double-check their answers. Really, they would be foolish not to. After all, the presumption is that everyone else is using their resources to their advantage. Of course, we are not actually cyborgs yet, but giving students an online assessment with no guardrails and then instructing them not to use AI is a bit like giving students an in-person exam and asking them to use only the left side of their brain.
Here’s another example that highlights the misalignment between instructors’ expectations for academic honesty and the realities of being a student in a post-pandemic, ChatGPT-fluent world. A while ago, my friend’s computer science professor confronted the class about the widespread similarity scores on their latest project. They had been given a week to complete the assignment and were allowed to use ChatGPT as long as they cited it, but group work was prohibited.
The professor said more projects had been flagged by the plagiarism detection system than on all previous assignments combined, and he gave the class an ultimatum: turn yourself in within three days and receive no credit for the assignment but face no further punitive action, or wait and risk being processed as an academic violation case. The next day, the professor told the class that more students had withdrawn their projects than had been flagged, and yet many of the flagged projects had not been withdrawn. Needless to say, cheating is much more widespread than most professors and administrators realize.
One obvious strategy to curtail the widespread abuse of web-based tools is simply to put students back in the classroom. “Traditional” methods of cheating require a much greater conscious effort to break the rules and are harder to pull off, and students will not go into the assessment believing that the only way to compete with their peers is to cheat. Once a student is given the opportunity to take an assessment online, the expectations for their conduct need to reflect the fact that they can turn to a powerful AI engine like ChatGPT with a single click.
Another tool instructors now have at their disposal is Turnitin, which links directly to Canvas and offers a robust mechanism for checking student work for plagiarism and AI-generated content. Instructors have the option to enable Turnitin on their submission portals, and they absolutely should when appropriate.
More and more, professors are recognizing the inevitability of AI in the classroom and workplace, and their syllabi reflect this. Increasingly, assignments incorporate generative AI or at least permit its use with proper citation. This approach is perhaps the most reasonable. ChatGPT does not just exist in university libraries and dorm rooms; it is already deeply embedded in the professional world, in law offices, hospitals (more than 60% of physicians report using large language models to check drug interactions) and HR departments, to name a few.
And this is only the beginning. Artificial intelligence will continue to expand rapidly, most likely continuing to outpace curricula and the conventions of authentic work and plagiarism.
For those university administrators and instructors who haven’t already realized it, their students really are part cyborg, tapping into more advanced technologies and a more intelligent internet every semester. Though it is often ill-advised, students can hardly be expected not to use these tools when the opportunity presents itself, and if professors do not want students to use large language models on their assessments, then they shouldn’t hold them online.
As difficult as it is to expect my generation of students not to use the latest advancements in technology, it is likewise unreasonable to expect an older professoriate to be equally well versed in and sympathetic to this technology. Still, we must continue to work toward a middle ground.