Maybe it's impossible to truly "AI proof" a course, but that's not gonna stop a girl from trying.
I don't like quizzes as accountability mechanisms, so in upper-level courses I use reading responses: questions, quote, commentary is my format. Questions related to the reading, a quote with a page number cited, and commentary capturing the initial thoughts in response to the reading. I told them I wanted it to pull back the veil on their brain-workings, and that's it. No polish. That their audience is not me, but their own selves, and ffs not to try to be impressive. I want this to be RAW. UGLY. BRUTAL. USEFUL. REAL.
So Day 1 of my upper-level elective Women in Philosophy course I showed the students an example of What Dr. Thweatt Does Not Want in a Reading Response.
What Dr. Thweatt Does Not Want in a Reading Response is a ChatGPT-generated document following my exact instructions for Reading Responses. ChatGPT flawlessly followed the structure of the assignment, provided multiple bullshit fake-performative non-questions, a quote with a fake citation, and a bullshit commentary written in the first person. (As a bonus, at the end of generating this for me, it offered to write a full essay. That's SO GROSS.)
Anyhow, when I put this on the screen as an example of What Dr. Thweatt Does Not Want, I asked my students what was wrong with it.
First comment: "it seems like these questions...aren't real questions. Like, they sound like they're based on the text but it's not like they are questions someone would actually be asking, if they were trying to understand the reading"
Second comment: "This feels performative"
Good. GOOD. Yes, yes, my chickens, you are perceiving correctly.
So then I said, that's right. These aren't real questions. When you try to answer them, you can tell that there's not an actual question there, because there's nothing you can actually say in reply. It's just words strung together in grammatically correct ways with a question mark at the end. Then I said, the other thing, of course, that is wrong with this, is that this is ChatGPT.
And 5 students immediately said in unison, "I knew it!" And I said, that's right. You did know it, because this stuff stinks. It stinks because it's bullshit, and we're all pretty good bullshit detectors. YOU KNOW IT. AND I KNOW IT. AND IT'S NOT HARD TO SEE IT. SO DON'T DO THIS.
Just don't do it. It's not what I'm asking for. ChatGPT can't do what I'm asking for. And I can spot the difference. So let's just establish this as baseline on Day 1, okay?
We'll see if this helps? But it's also true that these students are taking this as an elective--no one has to be in here--and it's mostly upper-level students who are pretty highly motivated. So this skews the results, probably. But I'm (temporarily) heartened by the attitudes observed here--serious students, the ones who are actually interested in learning new skills and material and critical thinking, grasp that the bullshit-machine shortcut leads nowhere, and they are not impressed.
So: some of the kids, at least, are all right.