AI is Infantilizing
DE+SE '25 [#1]
Last week I taught my first session of Digital Ethics and Social Equity, an upper-level Philosophy & Religion course featuring graduate and undergraduate students.
In what a student dubbed an “icebreaker,” I submitted for debate a syllabus policy:
“All work students submit should be their own. I expect that you can discuss the contents of your work and the process of creating it. In this course, ‘generative AI’ tools such as the GPT-series are not appropriate at any stage of composition because they interfere with our learning expectations.”

The debate lasted a little over an hour. I gave a thesis and some reasons, with ample time throughout for students to raise comments. As you will see, my defense does not refer to the expectations [see first link]. I mostly wished for students to exercise critical expression. I’m partial to the view that argumentation is for understanding, not consensus; my woo suspicion is that the latter is a kind of colonization. I told them something like:
If you know why someone disagrees, you can take a bit of them with you. This room is a laboratory, a kitchen, a hotbox, where we get higher together, if and only if we (a) don’t leave the same; (b) gift each other a bit of ourselves for future recall.

Below’s the argument, with paraphrases of student objections.
These tools are infantilizing. I mean that descriptively, not prescriptively. Infantilizing in that they do your work for you and thereby make you helpless (if not anxious) when tested on your limits. In an educational context, that’s a problem for everyone involved.
“When we graduate, our work will expect us to use it. So why not now?”
“Why can’t you grade how we use the tools?”
“Nobody ever agrees with all AI outputs. We’re always editorializing.”
“This stuff is basically free. You might say it’s doing our work for us, but it’s also kind of a tutor, and students used to pay for tutors. So, isn’t AI leveling the playing field?”
“Is there a distinction between thinking-with and having AI think for us?”

First off, it’s a problem for educators. We rifle through papers that are passable but the same. It’s dehumanizing. We know each of you is a rich singularity, a uniqueness, but we encounter gray, time and again, devoid of your signature touch. And for what? Word quotas? Citation limits? Alignment to grammatical norms? I’m not teaching Chicago Style but ways to think about stuff.
“Dog, are you just saying that some won’t use it right, so nobody can?” [I kind of just nodded?]
“This is way too general. My other professors actually encourage AI use!” [“yeah, and they’re wrong for that maybe?”]
“Isn’t dehumanizing a bit extreme? Like, that word’s for specific kinds of abuse.”

Second, it’s a problem for society. I am, spoiler alert, a black man. I find the cognitive offloading at hand rather suspicious. It stinks of mastery, but ironically also of slavery. We should question whether we’re (a) losing recipes and (b) making ourselves, in truth, uncompetitive, interchangeable with anyone who uses the same tools. Further, we might wonder whether we’re so wedded to these machines that, were we to lose them, we would be impassioned, angry; we’d wanna lash out, call foul. What does that sound like?
“People use it knowing it’s not human though.” [I gestured to romantic, sexual, and therapeutic use of AI, and to reports of outrage when system updates reset relations, but advised them to remain suspicious about whether such use is pervasive, problematic, or all that irrational. Said something like, “idk, imagine if you lost somebody who seemed to care for you?”]
“But they inventing slurs for AI now!” [I responded “clanker?” to uproarious applause]
“Are you saying that using AI is more about the destination than the journey?”

Third, it’s, arguably, a you problem. You’re adolescent. Even if you’re retired, by taking this course, you’ve traded that status for childhood. Welcome: being a kid rocks, so much so that I decided to be one for the rest of my life.
Adolescence is risk and uncertainty. The adolescent, not the infant, is the child who explores curiosities and tests their limits. Babies are needy, dependent, little tyrants, and that can be, often is, not cute.
If you think you’re being mature, doing the smart thing by using this technology, think back to how, growing up, adults thought they knew what’s what but never actually thought about it. Once you’re all done here, there’s a nonzero chance this stuff jeopardizes your livelihood. Compared to adolescence, infancy and maturity are the same.
“Adolescence is painful though. Remember puberty?”

Some additional points:
I’m in the business of youth corruption. My job is to keep you adolescent, by enabling you to develop the capacities to decide whether you want to be dependent, and with whom, and I hope those others are independent enough to decide whether to depend on you.
Frustration is good, actually. It’s a signal to take it easier, or make it harder. Grades, also, aren’t that deep. Tests aren’t the end of the world. It mightn’t matter in the end. What does is your ability to figure things out.
So, can we not?
This Fall, I’ll post bi-weekly dispatches from this course. I’m convinced my students will produce cutting-edge insights that will seem prophetic next year. I also want to practice communicating my lectures in as accessible a key as possible. But I’m also kind of just curious what y’all think about this stuff.
