During our first departmental meeting of the year, a teacher offhandedly bemoaned the new paywall for Draftback, a tool for investigating the revision histories of students suspected of using AI. Another teacher responded by bringing up a new tool she is using instead for tracking revision history, and it was as if a record had scratched to a halt. After a brief moment of silence, the room erupted with questions for the teacher about the new tracking tool, and then everyone was talking all at once about their worries over students using AI when they shouldn't in the year to come.
This scene brought to mind a disconnect that I've been thinking about a lot concerning AI: The education world is suddenly awash with AI companies, each offering to make teachers' lives easier and save them time with new AI tools. For example, my district is testing a platform right now that claims to save teachers 10+ hours, and its marketing materials underscored this by featuring a teacher closing her computer to do yoga instead.
This vision is compelling, and yet the majority of teachers I've talked to have said that Gen AI hasn't made their lives easier or saved them any time thus far. If anything, it has increased their workload, because they need to constantly consider whether student work was ghostwritten by AI and respond accordingly if AI is suspected. George Dillard captures the time and energy cost of this well in his piece "I Want to Be a Teacher, Not an AI Detective":
I was making my way through the year’s final batches of papers for my high-school history class, and it happened again. Reading a student’s work, I went from thinking, “this isn’t bad” to “this is a little more polished than I usually get” to “ugh, this bears the lifeless, robotic mark of the AI beast.”
…Now I have to go back through the student’s work with a fine-toothed comb to decide whether the prose that activated my radar is really evidence of AI usage. I might go back to the Google Doc in which they wrote the paper and click through the revision history to see if I can find any suspicious events (like a whole page of text getting pasted in all at once). If I find enough evidence, I’ll have to speak to the student, bring the case to the school’s dean, and perhaps participate in a disciplinary hearing…
Like Dillard, I spent far, far too much of my limited time last year playing Sherlock Holmes with pieces of writing I suspected of AI involvement and coming up with plans for responding to suspected AI misuse. Further, the nagging question of whether each piece might be AI served as a constant distraction that undoubtedly slowed me down and dulled my ability to respond to student work.
Put all of this together and you might get a sense for why my colleagues responded so strongly to a potentially effective tracking tool when only an hour earlier most had responded to the announcement that the district was piloting a couple of "time-saving" AI tools with little more than an indifferent shrug. For many teachers (this one included), the EdTech tool they want most isn't a co-writer for lessons and materials; it is a tool that can tell us beyond a shadow of a doubt whether the work our students turn in is actually theirs.
Unfortunately, such a tool doesn't exist, and given how the technology of obfuscation tends to outpace the technology of detection, it might never exist. So a big question going into this year is: short of such an invention, what can we do to limit how much of our valuable time and mental energy will be spent playing AI detective, prosecutor, and jury?
There are plenty of thoughts on this, and the two most common suggestions I’ve seen are to…
- Require students to share a revision history and engage in a writing process
- Have more in-class, pen-and-paper assignments and assessments
I think these are good suggestions and, in many situations, good teaching, but both can demand even more of our time (it takes roughly double the time to respond to handwritten work) and have their own downsides (not all assignments lend themselves to in-class pen-and-paper work, and student handwriting today is often wildly illegible). Schools can also set up firewalls, but those firewalls don't extend to homes and personal devices, and AI is increasingly interwoven into nearly all tech (even my PowerSchool gradebook has a button for an AI PowerBuddy this year).
Put it all together, and I've come to believe that, as Tony Frontier discusses in his Cult of Pedagogy piece "Catch Them Learning: A Pathway to Academic Integrity in the Age of AI," the real key is to focus less on detection and punishment and more on shutting down AI misuse at its source.
In his article, Frontier looks at the broader conditions that make cheating more likely (like unclear or unreasonable expectations or crunched time), and while that is part of the picture, in talking with students about AI, I've found that AI misuse often stems from the fact that many students simply haven't thought about it very deeply. Maybe they are struggling with a math concept, don't want another C+ on an English paper, got home at 10 p.m. from play practice, or just see others using it and decide to go for it without much consideration of the impact it will have on them or their learning.
With this in mind, last spring I beta-tested something that instantly and significantly shrank the number of issues I had with Gen AI misuse. It was a mini-unit where we took a few days to learn about and discuss the following:
- How Gen AI works and knows what to write (or more accurately writes without knowing what it is saying)
- Potential downsides of Gen AI usage, including its impact on the environment, issues with copyright infringement, regular hallucinations, and potential biases
- How AI affects us and our brains and could potentially inhibit learning and growth
- Potential positive uses for Gen AI in the wider world, including discussion of where (if anywhere) Gen AI usage might be appropriate and even accelerate learning in schools
- Examination of the messages AI companies are giving us all about Gen AI
- Debate around appropriate classroom usage
Time and again in my career, I've found that the best answer to a thorny question is often to take some time to directly teach students about the issue and/or to open up a conversation with them about it. And last year I found that to be true again with AI. Further, once we embarked on the conversations, it began to dawn on me how important those conversations are, not just for our classes, but for students' wider lives, given how much teens are using AI outside of school. It also dawned on me that there was no better place to be having that conversation than an English language arts classroom that is already examining the wider world.
This year, I’ve taken those lessons from last spring and polished them into a mini-unit that I’ll be presenting to students over the next week. The goal of this mini-unit isn’t to demonize or to glorify AI; it is to offer clear guidance and build context together in the hope that increased knowledge will stem most Gen AI issues before they start. And then I can ideally hang up my detective hat and magnifying glass and reinvest that time into the things I love to do: connecting with my classes, responding to actual student work, writing this newsletter, or, most importantly, putting it all away to be with my family.
If you are interested in using those lessons, I am so excited to announce that, for the first time ever (!) on the newsletter, they are available, along with the resources I'm using to think about and discuss Gen AI at the start of this year, for an introductory, less-than-a-latte-with-tip price of $5.99 (click here). This is an exciting new step for the newsletter, so please let me know how it goes, as I hope to offer new units each month this year, and good feedback ☺️ is never a bad thing!
Until then, I hope those starting today have a wonderful first day, and that everyone else had a good Monday!
Yours in teaching,
Matt