This post is part 1 of a potential series.

--

In April, writing for the Chronicle of Higher Education, Beth McMurtrie asked, “Should College Graduates Be AI Literate?” and I can’t stop thinking about it. My thoughts center on the admirable chemistry professor we meet in the opening vignette, Jacqueline, who had an honors student frequent her office hours. He was trying (and failing) to learn the course material through an elaborate system of NotebookLM-generated study guides and podcasts. Jacqueline does everything she can to teach him more effective and active study habits.

She cautioned him that his approach to learning chemistry was quite passive. One of her assignments, for example, asked students to wrestle with some complex ideas and, through close reading and discussion with their classmates, prepare a presentation for a general audience. Now AI was doing that work for him. This student was dedicated and came to office hours regularly. Later, after he did poorly on an exam, she implored him to study the material on his own, walking him through strategies such as creating concept maps to better understand the connection among ideas.

She’s absolutely right, but to no avail: he isn’t able to turn his grades around. Jacqueline concludes that “if she had had the language in which to speak about AI, she might have been able to persuade the student to study differently.” (Or, at least, that’s how McMurtrie paraphrased her.)

I have been ruminating on this article so much that I started a blog. 

I don’t think Jacqueline meant that she could have helped the student discover an even better AI study hack. So why does the article spend so much time conflating proficient usage with literacy?

The question of what students should learn about AI, and what literacy means here, is important and deserves more critical attention than the response we keep landing on: pushing educators to become practiced users of generative LLM tools. Over the last few years, tech companies have worked hard to convince us that their products are the future of the workplace and, therefore, essential to a college education. A recent conference on Teaching and Learning with AI, hosted by UCF, made the case this way: “There are different ways to leverage the power of AI through generated text, images, and products. If you work with students, it’s important to understand and discuss how to use these tools to shape the future of teaching and learning across college campuses.” This is what many see as the answer to the question of AI literacy: faculty should know, and help students learn, “how to use these tools.”

Elle Woods asks: “What, like it’s hard?”

The latest example is the $23 million granted to the American Federation of Teachers for a National Academy for AI Instruction, where teachers can attend workshops on “how to use A.I. tools for tasks like generating lesson plans,” as Natasha Singer reports in the New York Times.

The rationale for this partnership comes from an understandable place. There are stories everywhere of students using AI in ways that educators and many in the general public find worrisome. At New York Magazine, James Walsh warns that everyone is cheating their way through college. His interviews paint an alarming picture of rampant cheating. I’m concerned this feeds a toxic paranoia, putting professors into an antagonistic role against their students.

As a result, students now feel just as paranoid that their professors suspect them of cheating. Florida International University student Kendall Moffett describes her “constant state of worry” that an instructor’s reliance on AI detectors will mistakenly cause her, or other students, to face academic misconduct charges. For Kendall, this means lost confidence and ease in writing: she anticipates an algorithmic, punitive reading of her paper rather than a supportive response from a teacher ready to help her develop as a writer. Kendall’s not alone: just a week after the New York Magazine story, the New York Times published “How Students Are Fending Off Accusations That They Used A.I. to Cheat.”

So, in a series of blog posts, I plan to look at the question McMurtrie posed: Should college graduates be AI literate? I’m not saying the answer is no. Rather, I think the question is ill-formed. It’s ill-formed in the obvious way, in that “AI” is an ill-defined category. Even narrowed to a question more relevant to my writing classrooms, whether college graduates should be chatbot literate, it still feels slippery.

What college graduates probably don’t need is a curriculum full of classes that teach them to “leverage the power of AI” by using LLM chatbots. A critical literacy of AI has to do better than that.

--

To be continued.

Written by a human, not AI (notbyai.fyi)