Investigations, vol. 8

Learning in Translation

What learning looks like when students work with AI

—By James Tian


At 2am in the UGLI, the only light comes from rows of glowing laptop screens. It’s a familiar scene: a student staring at a half-written essay, typing a quick prompt—“Can you suggest a stronger transition for this paragraph?”—into ChatGPT. Within seconds, sentences appear—not perfect, but useful. He tweaks a few lines, presses submit, and leans back.

Moments like this have become routine at the University of Michigan. Generative AI has slipped into everyday study routines, shaping how students read, write, and test ideas. Some use it to clarify complex theories or debug code; others to jump-start a thought they can’t quite articulate. The line between tool and thinker, shortcut and exploration, is increasingly hard to draw. Across campus, teachers and students are still figuring out what that means—experimenting more than regulating, questioning more than prohibiting. What’s emerging is not just a debate about cheating or convenience, but a redefinition of learning itself: less about the answers students produce, and more about how they get there.

Students: from users to collaborators

When Chunye Yang, a senior majoring in Honors Math, first encountered ChatGPT, he thought it could only handle everyday queries or debug small programming errors. “It was built by computer scientists,” he said. “So I figured it might help with coding, but not something like abstract algebra.” That initial perception was common among students—viewing generative AI as a convenience rather than a cognitive partner. “We thought it was like Siri,” Yang said, “useful for quick answers, but not for deep reasoning.”

He remembers the moment his view changed. One night in his dorm room, Yang was stuck on an algebraic geometry problem he’d been wrestling with for hours when his roommate, a CS student, suggested, “Why not ask GPT?” Yang typed in a prompt, “Walk me through why this morphism is not injective,” and watched as the model produced a partly incorrect but oddly illuminating explanation.

“It wasn’t right,” he said. “But it pointed me to an angle I hadn’t considered.”
Since then, he has used AI less to solve problems and more to test his reasoning: “If it disagrees, I figure out why.”

This transformation in how students use AI—from consumer to collaborator—mirrors broader patterns across higher education. A 2025 Educational Research Review study by Liu and Zhong systematically examined 71 empirical papers on how generative AI integrates into student learning. It found that when students engaged with AI interactively—questioning, verifying, and refining its responses—they demonstrated measurable gains in cognitive, affective, and behavioral outcomes. For instance, several of the studies described students who didn’t simply accept an AI-generated solution, but pressed it with follow-up questions, compared its reasoning with their own, and iteratively revised the output; those students showed stronger understanding and higher motivation. But when they relied on it for direct answers, the effect reversed: motivation and self-regulation declined.

The study also highlighted a shift in cognitive emphasis. For lower-order tasks such as recall and comprehension, students moved from “information gathering to information verification.” For higher-order tasks—analysis, synthesis, and evaluation—the effort shifted from “task execution to task stewardship,” where students guided and critiqued AI output rather than replacing their own reasoning.

Still, not everyone approaches AI that way. “It’s too tempting to stop thinking,” said a sophomore who asked not to be named. “When you’re tired, it feels easier to let the model finish your thought.” The tendency is especially common when students use AI outside their core areas of study. “For my distribution-requirement classes, I don’t really need to learn all the material in depth,” the student admitted. “So I let GPT handle a lot of it. It saves time for the courses that actually matter to my major.”

Across these cases, the pattern remains the same: the boundary between collaboration and dependence is drawn not by technology, but by choice—by whether students use AI to deepen their thinking or to escape it. In practice, deepened thinking shows up in small but deliberate moves: asking the model to surface missing steps in a proof, comparing its explanation with one’s own, or probing why two answers diverge. These moments shift the interaction from answer-getting to sense-making. By contrast, when students bypass these checks and simply accept AI output, the gains vanish.

Teaching in the age of iteration

Faculty are also facing their own learning curve. Professor Kayvan Najarian, who teaches data science and biomedical engineering, allows students to use ChatGPT in his class, but only if they include both their prompts and the model’s responses in their submissions. “If they’re going to use it,” he said, “I want to see how they used it. The conversation itself shows their reasoning.”

His focus on transparency echoes a 2025 study in Assessment & Evaluation in Higher Education, which found that both students and teachers are struggling to “draw the line” between acceptable and unacceptable AI use. The authors argue that instead of blanket bans, instructors should create clearer, task-specific guidelines and ask students to document how they used AI, promoting fairness and reflection.

Najarian said his goal isn’t to police AI use, but to make students conscious of their process. “Sometimes they realize the model misunderstood their intent,” he explained. “That reflection is where real learning happens.”

Across the department, Professor Austin Yarger, who teaches game development, takes a more pragmatic approach. For many students in his EECS 494 course, it’s their first experience with C# and the Unity engine. “On the first day,” he said, “I tell them to turn on their IDE’s auto-completion features.” (An IDE, or integrated development environment, is the software where programmers write and test their code.) “They help students get comfortable with syntax faster,” Yarger said. “There’s no need to waste hours memorizing punctuation rules.” For Yarger, teaching code isn’t about rote memorization but about helping students build intuition. He sees tools like auto-completion as training wheels that let beginners focus on logic and design before worrying about syntax.

He views generative AI the same way: a practical accelerator, not a shortcut. “When it comes to the logic of the code itself, I let them use AI if they want,” he said, “but they need to understand why the model writes it that way. Otherwise it’s just what students jokingly call ‘vibe coding’: copying and pasting code that merely feels right, without understanding how it works.”

Yarger’s distinction—between using AI to save time and using it to avoid thinking—echoes the caution raised by Geoffrey Currie in Seminars in Nuclear Medicine. Rather than banning AI, Currie argues, educators should redesign assignments so that students must interrogate the output: explain why an AI-generated answer makes sense, identify its flaws, or iterate on it until it aligns with disciplinary standards. In his view, AI supports real learning only when students remain accountable for the reasoning behind the answer, not just the answer itself.

GSIs in the grey zone

Graduate student instructors are finding themselves caught between enforcement and exploration. Maddy Heyer, a GSI in computer science, said AI has made grading more complicated—but also more revealing, because AI-written code tends to stand out: students who normally write messy, beginner-level scripts suddenly turn in assignments with polished structure, textbook-style comments, and advanced syntax they’ve never learned.

Recent research supports her observation: a 2025 study by Jiahui Luo and Phillip Dawson in Studies in Higher Education found that teachers now make implicit value judgments when assessing AI-assisted student work, weighing not just output quality but honesty, independence, and effort.

“You can usually tell when someone used ChatGPT,” Heyer said. “What really gives it away is when they can’t explain what they submitted.” Sometimes, students come to her office hours to debug their programming projects. “I’ll ask, ‘What does this line of code do?’ and they’ll have no idea,” she said. “That’s a bad sign.”

Heyer rarely penalizes students solely for using AI. Instead, she asks them to articulate their reasoning in person. “If they can defend their work, I don’t mind if they used AI—it means they actually learned something,” she said. “If they can’t, that’s when it becomes a problem.” She adds a note of caution: “AI can make things faster, but learning isn’t just about speed—it’s about building judgment.”

Her approach shows a quiet change happening in classrooms. Instead of banning AI completely, some instructors and GSIs now use it to help students think more deeply—not just about what they learned, but how they learned it.

Still, others draw a firmer line. Professor Brian Noble, who teaches the distributed systems course EECS 491, allows open-book and open-internet exams, but explicitly forbids generative AI or human help. His reasoning is simple: “You can look things up,” he tells students, “but you still have to do the thinking yourself.”

Between Heyer’s openness and Noble’s restraint is the uncertain middle ground where most teaching now unfolds, a place where AI functions as a mirror, pushing students and instructors to rethink what learning should be.

When AI outthinks its creators

The rise of generative AI, says University of Michigan computer science professor Lin Ma, is transforming not just the classroom but the very way researchers think.

In one of his reading group discussions, Ma shared a paper that unsettled many of his students. The study found that when large language models like GPT were prompted to iteratively refine and critique their own outputs, the resulting hypotheses often surpassed those proposed by human researchers. “It’s not that the model became creative,” Ma said afterward. “It’s that our definition of creativity might be narrower than we thought.”

That reflection resonates with Ma’s own research, which explores how machine learning models can represent and reason about complex data systems, including the very query plans that drive modern databases. In his view, generative AI isn’t merely a tool for automating experiments; it’s a new kind of collaborator that learns the hidden logic behind human-designed systems and, at times, exposes the blind spots in them. “When a model predicts execution behavior better than our handcrafted cost estimators,” he said, “it’s not just a performance gain—it’s a hint that the model has discovered structure we didn’t formally capture.”

A 2025 study in Thinking Skills and Creativity by a research team from the University of Eastern Finland echoed a similar shift at a different scale. It found that when people worked with generative AI, decision-making was no longer purely human—the model was influencing how options were framed, which lines of reasoning were pursued, and even what counted as a plausible answer. In research, Ma noted, the same thing may be happening: “We’re starting to share part of the reasoning process with the model. The question is no longer just what we can teach it, but what it’s already teaching us.”

Between efficiency and understanding

For some students, AI has helped them learn faster than they expected. For others, it has made genuine understanding feel elusive. “It’s efficient, but it also flattens things,” said Yang. “When every question has an instant answer, you stop wondering what makes an answer interesting.”

The paradox between speed and depth may define how people learn today. Generative AI can turn confusion into quick answers, but it can also make those answers feel shallow. Across campus, students, GSIs, and professors are finding that the hardest part of learning with AI isn’t getting things right but rather staying in control of how they learn.

Late at night in the UGLI, a different scene plays out: a student opening ChatGPT again—not to finish a paper, but to question it. He pastes a paragraph, asks for counterarguments, deletes the reply, and rewrites it in his own words. Around him, others do the same in quieter ways: students experimenting with prompts, GSIs reshaping assignments, professors rethinking what counts as understanding.

Whether this new rhythm leads to deeper curiosity or greater dependence remains uncertain. But Yang is already noticing its effect on himself. “It answers fast,” he said, “but the second I stop checking why the answer works, I stop learning.”


Feature photo: UM’s Bob and Betty Beyster (CSE) Building, by James Tian