The New Normal
How AI is reshaping the U-M college experience
—By Benjamin Cohen
On a Tuesday night in the Shapiro library, Alex Basner opens three windows: a starter notebook for his data science lab, the U-M GPT web app, and GitHub Copilot in VS Code. He’s not “cheating,” he insists. He’s resourcing: “I use U-M GPT to structure my approach, what to test first, where to pay the most attention. Then Copilot handles the technical glue code so I can focus on why the model is behaving the way it is.”
Across Ann Arbor, this scene is ordinary enough to be unremarkable. Once the University of Michigan released its own privacy-preserving AI services—U-M GPT for writing, brainstorming, and coding help, and Go Blue for day-to-day logistics—what felt transgressive in 2023 reads like basic infrastructure in 2025. All of these tools are free, accessible, and officially offered to the entire community. U-M even frames its offering in plain terms: “the first university in the world to provide a custom suite of generative AI tools,” with a “focus on equity, accessibility and privacy,” available to all students, faculty, and staff across Ann Arbor, Flint, Dearborn, and Michigan Medicine.
What changed when U-M made it official?
A 2024 case study explains why Michigan built a closed, campus-managed service: to govern data flows, meet accessibility obligations, and widen access so students wouldn’t need to pay for premium consumer products or risk prompts leaving university control. The community’s response has been positive: U-M’s AI tools now see over “15,000 users a day.”
The University’s official stance matters, says Basner, who has watched the norm shift across his classes. “Once instructors put allowed AI resources in the syllabus and linked the university page, people stopped whispering about it. It’s like when Canvas became standard. You don’t debate whether to use it; you just do.”
The availability of University-provided AI services further cements GenAI as routine infrastructure rather than novelty. In fact, many U-M English instructors have encouraged “students to use U-M GPT to create the first draft of an essay and then to revise the draft in class.”
Inside student workflows
For students, the novelty has given way to more efficient workflows. Zach Post, a senior studying Business, uses U-M GPT for “outline scans” on technical memos: “It’s a forcing function. I dump requirements and my quick take; it proposes a structure. I throw half of it away and keep the skeleton.”
In quantitative courses, the Copilot effect is more surgical. Mikey Schwartz, a senior studying Math, says the assistant shines “when you’re inside an unfamiliar codebase. It won’t write the full implementation for you, but it cuts the time to conform to the project’s style.” Research backs up Schwartz’s claim: in a 2025 experiment with undergraduates working on brownfield programming tasks—assignments where they had to modify and extend an existing codebase they hadn’t written themselves—students using GitHub Copilot finished significantly faster, made more progress on the tasks, and spent less time typing and searching the web.
Basner uses U-M GPT as a “harsh TA”: “I paste my own answer and ask it to attack the logic, not to write a new answer. The trick is framing my question.” Those habits map onto what the education literature has been surfacing: generative systems can strengthen feedback cycles and self-regulation, but risk shallow understanding if left unchecked. As one 2024 commentary puts it, GenAI has quickly become “the talk of every school…positioning GenAI in the epicenter of today’s research, policy, and practice,” with the challenge of ensuring it “enhance[s]…learning design, [and] regulation of learning” without eroding depth.
The faculty side: guidance, guardrails, and where the bite really is
If students are the engine of normalization, faculty are the brakes—and sometimes the steering. U-M has not left instructors to figure it out solo; the university urges explicit syllabus statements about acceptable use, open discussion of policies, and assessments aligned to the tasks AI actually changes. “Tools provided by the University of Michigan, such as U-M GPT, are private, secure, accessible, equitable, and free,” one student-facing guidance page reminds users, adding that “data you share…will not be used for training these models.”
That clarity has teeth. Post says the simplest norm is disclosure: “If I write down what I asked and why I kept or changed a suggestion, I’m less tempted to paste the response and hope I don’t get caught.” Colleen Conway, a professor of music education at U-M, worries that the real danger is not AI itself but that “we can’t move fast enough” to use it ethically, risking students “depending on the computer to do it for them” instead of learning to make hard decisions.
Equity and access
U-M GPT and Go Blue are free to all on the Ann Arbor, Dearborn, Flint, and Michigan Medicine campuses and were designed to work with screen readers.
Students notice. “I don’t have to pay for ChatGPT Plus to get decent performance or risk my data,” Post says. But course policy divergence creates a new equity gap: students in AI-friendly sections develop prompting and critique skills their peers may not, even within the same major. Schwartz frames it bluntly: “the gap now is fluency. Two grads with the same GPA, one can co-write with AI and one can’t.”
From an employer’s vantage point, the gap matters well beyond computer science. A field experiment with economics graduates in England introduces the idea of “AI capital”—AI-related knowledge, skills, and capabilities—and finds that applicants who completed an AI-intensive business module received more interview invitations and offers for higher-wage positions than otherwise similar graduates without that AI training.
The authorship question students actually worry about
For all the talk of plagiarism detectors, the anxiety students name most often here isn’t being caught; it’s whether the work is still theirs. Schwartz gives a concrete example: “When Copilot suggests a five-line function and I accept, is that my work?” The research echoes the unease. In exit interviews from the 2025 study, students reported concerns “about not understanding how or why Copilot suggestions work,” even as they moved faster. The study’s authors argue that this finding calls for new approaches that deliberately foster reflection on how and why AI-generated suggestions address a task, not just whether the final output runs. Assessment researchers are converging on the same response, calling for more “authentic” and process-focused assessments—oral explanations, in-class demonstrations—because product-only grading is too easy for AI to satisfy and too weak a check on whether students actually understand the underlying thinking.
Why this matters now
GenAI is now embedded in everyday educational practice, not just a future possibility. On Michigan’s campus, the central deployment—shared services, a focus on broader access, and privacy safeguards—embodies a particular stance: these tools are worth using, but only under structured oversight. GenAI can enhance feedback, automate routine assessment, and support learner self-regulation, but it introduces ethical risks and demands new roles for human experts.
Michigan’s bet on campus-owned, privacy-preserving AI as shared infrastructure has shifted student behavior from “hush-hush” experiments to routine use. The university says the goal is equity, accessibility, and privacy; students add a fourth outcome: fluency.
“In two years,” Schwartz predicts, “listing ‘AI-assisted development’ on a resume will be like listing language proficiency. The differentiator will be whether you can explain your choices.”
Feature photo, Diag; Photo Credit, Michael Barera
