NEW YORK, Sept 13 (The African Portal) – The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
Student use of artificial intelligence has become so prevalent, high school and college educators say, that assigning writing outside the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it is creating new confusion over what constitutes academic dishonesty.
“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”
Shifting Teaching Practices
Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.
“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in The Great Gatsby. Many students now say their first instinct is to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT produces essay ideas, examples and quotes, and even offers to draft paragraphs.
Students Struggle to Define Cheating
Students say they often turn to AI with good intentions for research, editing or help reading difficult texts. But AI presents unprecedented temptation, and the line between study aid and cheating is often unclear.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles with structure. It also helped her in a philosophy class where assigned readings felt incomprehensible until she read AI-generated summaries.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?” she says.
Her syllabi state: “Don’t use AI to write essays and to form thoughts,” but students argue that this leaves large grey areas. Many hesitate to ask teachers for clarity, fearing that admitting AI use could mark them as cheaters.
Policies vary widely even within the same school. Some teachers allow tools like Grammarly for grammar checks, while others ban it because it rewrites sentences.
“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her class AI study skills, like uploading study guides to ChatGPT and using it to quiz themselves. But this year, her teachers have strict “No AI” policies.
“It’s such a helpful tool. And if we’re not allowed to use it, that just doesn’t make sense,” Lahey says. “It feels outdated.”
Schools Introduce Guidelines, Slowly
When ChatGPT launched in late 2022, many schools banned AI outright. But attitudes are shifting. “AI literacy” has become a buzzword, with schools now focusing on balancing AI’s benefits with its risks.
Over the summer, several universities formed AI task forces and issued guidance to faculty.
The University of California, Berkeley, told faculty to “include a clear statement on their syllabus about course expectations” around AI. It offered sample language for three types of courses: those that require AI, those that ban it, and those that allow limited use.
“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the guidance said, noting that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”
Carnegie Mellon University has reported a rise in academic violations linked to AI, often because students did not realize they had done anything wrong.
One student wrote an assignment in his native language and used DeepL to translate it into English. He did not realize the tool altered his writing style, which was then flagged by an AI detector.
“Enforcing academic integrity policies has become more complicated,” says Rebekah Fitzsimmons, chair of Carnegie Mellon’s AI faculty advising committee. “AI use is hard to spot and even harder to prove.” Faculty are increasingly hesitant to accuse students, while students worry about being wrongly flagged.
Over the summer, Fitzsimmons helped draft new guidelines urging clarity. Faculty were told that blanket bans on AI “are not viable” unless teaching methods are redesigned. Some have returned to pen-and-paper tests or adopted flipped classrooms, where students complete work during class.
At Carnegie Mellon’s business school, communication instructor Emily DeJeu has eliminated homework essays altogether, replacing them with in-class laptop quizzes in a lockdown browser.
“To expect an 18-year-old to exercise great discipline is unreasonable,” she says. “That’s why it’s up to instructors to put up guardrails.”
Credit: Associated Press