This student story was published as part of the 2025 NASW Perlman Virtual Mentoring Program organized by the NASW Education Committee, providing science journalism experience for undergraduate and graduate students.
Story by Emma Yao
Mentored and edited by Mackenzie White
This fall, as students open laptops in lecture halls, tools like ChatGPT are reshaping education and prompting debate over regulation.
As artificial intelligence (AI) tools rapidly become more common in higher education, faculty and policymakers are grappling with how to manage their use. From essay writing to coding assistance, generative AI presents new opportunities and challenges for teaching, learning, and administration. Educator experience and research findings alike suggest that flexible, locally informed policies may be best suited to keep pace with a fast-changing technology.
Across the country, professors are now including policies on AI use in their syllabi. Some educators ban it outright, concerned about academic dishonesty or shortcuts. Others, like Robert Ghrist, associate dean of undergraduate education at the University of Pennsylvania’s School of Engineering and Applied Sciences, are taking a different approach.
“[AI] has potency,” Ghrist says. “Anything with power can be used for good or for ill.” As a role model, he feels a responsibility to teach his students how to use AI constructively. This fall, Ghrist plans to integrate tools such as Google’s AI Studio and NotebookLM directly into his curriculum.
Still, he’s cautious about prescribing his approach to others.
“Academic freedom is important and worth defending first and foremost,” he says. “No matter what issue we’re talking about.”
While educators have been navigating these choices independently, legislators are weighing whether to regulate AI use more broadly. Earlier this month, the United States Senate voted 99-1 to strike a provision of the One Big Beautiful Bill Act that would have barred state-level AI regulation. While other aspects of the bill were more divisive, the near-unanimous vote suggests bipartisan reluctance to restrict local policymaking around AI.
For AI experts like Ghrist, that approach makes sense.
“It's going to be very difficult to write laws that are time independent,” Ghrist says. “That creates a dangerous scenario where we legislate with the right intention, the right motivation — even the right law for today — and in a year or two, it's a totally different world.”
Recent research also suggests that AI systems need ongoing human input to preserve both academic integrity and technical reliability. In a 2024 paper published in Nature, a team of researchers in the United Kingdom and Canada studied how AI systems learn across successive generations of training. Their findings show that as generative AI models like ChatGPT fill the internet with synthetic content, future models trained indiscriminately on that content degrade through a process called “model collapse.”
Unlike human learners, AI models cannot reliably build knowledge from the output of their artificial predecessors. Without continued exposure to human-generated data, models degrade, accumulating irreversible defects that distort their outputs and their picture of the world.
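The feedback loop is easy to see in miniature. The sketch below is a hypothetical illustration, not the Nature team's experiment: here a "model" is just a table of word frequencies, and each generation is trained only on text sampled from the previous generation's model. Rare words drop out one by one, and once gone they can never come back.

```python
# A toy simulation of the feedback loop behind "model collapse" -- an
# illustration of the idea, not the Nature study's actual experiment.
# A "model" here is just a table of word frequencies; each generation is
# trained only on text sampled from the previous generation's model.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 30         # hypothetical vocabulary of 30 "words"
SAMPLE_SIZE = 200  # how much "text" each generation is trained on

# Generation 0: human-written text with a long tail of rare words (Zipf-like).
true_probs = 1.0 / np.arange(1, VOCAB + 1)
true_probs /= true_probs.sum()
sample = rng.choice(VOCAB, size=SAMPLE_SIZE, p=true_probs)

for generation in range(1, 11):
    # "Train" a model: estimate each word's frequency from the current sample.
    counts = np.bincount(sample, minlength=VOCAB)
    probs = counts / counts.sum()
    print(f"generation {generation:2d}: "
          f"{(probs > 0).sum()}/{VOCAB} words still in the model's vocabulary")
    # Generate the next training set purely from this model's own output.
    # Once a rare word's estimated frequency hits zero, no later generation
    # can ever resample it -- an irreversible defect.
    sample = rng.choice(VOCAB, size=SAMPLE_SIZE, p=probs)
```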
In education, these findings point to a continued role for human teachers and students in maintaining information quality and accuracy. Rather than replacing human judgment, AI may work best as a complement to it.
Some researchers are also exploring how AI could advance understanding of the human brain.
In a 2024 internal experiment at the AI company Anthropic, researchers amplified a specific pattern of artificial-neuron activations in their Claude model and observed how its behavior changed. One version, nicknamed “Golden Gate Claude,” began redirecting conversations toward the Golden Gate Bridge. Though anecdotal, the case offers insight into how concepts and associations are encoded in large language models and, potentially, how similar processes function in human cognition.
For now, experts support the idea that AI regulation should remain adaptable and informed by both educators and scientists.
As for students, “[They] are here to create [their] own futures and make it as good as possible,” Ghrist says. “That future is going to involve AI.”
Top image: Students attend a lecture at Aalto University during ongoing discussions on AI regulation in education. Credit: Dom Fou / Unsplash
Emma Yao is a rising junior at the University of Pennsylvania pursuing a Physics major and English minor. She is currently performing beta-NMR simulations and investigating their connection to quantum annealing with Dr. Syd Kreitzman at TRIUMF, Canada’s national particle accelerator center. As an aspiring physicist and writer, she is passionate about making complex scientific ideas accessible to the public through science communication. You can contact her at emmayao@sas.upenn.edu.
Mackenzie White is a science writer, geophysicist, and video producer based in Cambridge, Mass. Her work appears in outlets like Astronomy, Science Friday, Environmental Health News, and Eos. She loves Texas, Mars, and her dogs, Rocky and Maggie.

The NASW Perlman Virtual Mentoring program is named for longtime science writer and past NASW President David Perlman. Dave, who died in 2020 at the age of 101, only three years after his retirement from the San Francisco Chronicle, was a mentor to countless members of the science writing community and always made time for kind and supportive words, especially for early-career writers.
You can contact the NASW Education Committee at education@nasw.org. Thank you to the many NASW member volunteers who lead our #SciWriStudent programming year after year.
Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of ~2,600 professional journalists, authors, editors, producers, public information officers, students and people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org.