https://www.insidehighered.com/news/faculty-issues/learning-assessment/2024/12/13/ai-assisted-textbook-ucla-has-some-academics

"You students shouldn't use AI to write your papers for you!"
"Why not? You do it."
The trick to handling AI and the authorship of assignments is an old one: don't just read what students write; have them defend and discuss it. When I had my viva at Leicester, it took the external examiner less than 15 minutes to accomplish two things. First, he verified that I knew my work. Second, he asked one question that sent me off to do another year of research and writing. Instructors would do well NOT to compete with AI, but to allow it to be a tool. The ownership of intellect must remain with the student.
One of our children is a high school teacher, and he's noticed that some students are using AI to write their papers. When this happens, there are consequences. Depending on the situation, he may not automatically fail the student, but he will require them to resubmit or redo the paper. He also warns students that such actions can have serious long-term consequences, like getting into trouble at university or failing the class. Some parents have even argued with him about these policies, but he believes in maintaining academic integrity. The school is proactively addressing the issue with tools that can detect whether a paper, in whole or in part, was generated by an AI language model. These tools help uphold academic honesty and ensure that students are held accountable for their own work. This week, students will present multiple assignments in front of the class, and he is keeping a close eye on the quality of the work being submitted. I don't think the school is using AI-generated textbooks, though. Is he fighting a lost cause?
This isn't actionable, because enshittification is not the same thing as fraud. I don't think AI detection tools are all that great, but I also still don't think this is a lost cause, provided that schoolteachers and postsecondary instructors make clear what they expect, what the educational benefit to students is, and what the consequences will be for trying to pull a fast one.
The more schools can develop assessments that resemble real life, the less likely students are to use AI as a substitute for actual learning.
The tools aren't very accurate in either direction: they produce a lot of false positives and false negatives. The Declaration of Independence, for example, gets flagged by many of these tools as AI-written. Personally, I wouldn't use them. Some of my professors at Johns Hopkins do, though; I'm not sure whether there is any university policy on their use. I think you can safely say that if an entire paper, or most of it, gets flagged as AI, it probably is; if only a few sections get flagged, it's probably a false positive. Either way, you can't really know for sure.
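To put some rough numbers on why a flag alone proves little, here's a back-of-the-envelope Bayes calculation in Python. The rates are made up for illustration, not measurements of any real detector:

    # A minimal sketch with assumed, illustrative numbers: even a detector
    # with a 5% false-positive rate and a 90% detection rate gives weak
    # evidence when most submissions are human-written.

    def p_ai_given_flag(prior_ai, true_pos, false_pos):
        """Bayes' rule: P(paper is AI-written | detector flagged it)."""
        p_flag = true_pos * prior_ai + false_pos * (1 - prior_ai)
        return true_pos * prior_ai / p_flag

    # If 10% of papers are AI-written, a flag means only about a 67% chance
    # the paper really is AI; roughly one in three flagged students would
    # be accused wrongly.
    print(p_ai_given_flag(prior_ai=0.10, true_pos=0.90, false_pos=0.05))

So even a fairly good detector, pointed at a mostly honest class, wrongly implicates a substantial share of the students it flags.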
What's interesting is that the AI in these textbooks cuts two ways. If a professional wants to use AI to generate a textbook and then review it with their expert knowledge, I don't mind; in my mind it's no different from other ways of getting words on the page, though I personally don't use AI in my own writing. Letting students ask the AI questions and having it respond based on the content is much more concerning: there's no way to know whether the AI is hallucinating or providing false information.