
NASW issues statement on generative A.I. tools

Accurate science news depends, as it always has, on people writing about science. New technologies emerge all the time, and now we have witnessed generative A.I. tools like ChatGPT and Bard that appear able to write with little human guidance. We may write about these tools and how they are used or misused, and they might one day help us do our jobs better. But it should be up to us as writers and journalists to write and report the stories.

A few outlets have experimented with using ChatGPT to replace writers, despite the fact that the tool is known to be prone to inaccuracies. These efforts have so far failed miserably. ChatGPT and other chatbots have also been known to “hallucinate,” inventing citations and claiming falsehoods, and they can amplify existing gender and racial stereotypes. To make matters worse, generative A.I. tools are often trained on copyrighted works without the writers being compensated.

There are already examples of publishers replacing writers with A.I. tools or planning to do so. Using generative A.I. instead of writers, even for brief news items or translations, threatens the reliability of journalism at a moment when public confidence in media in the United States is at a record low. A.I.-generated errors could undermine the accuracy of science writing, including crucial coverage of COVID-19 and climate change, at a time when trust in science and scientists has declined.

In light of these developments and concerns, we at the National Association of Science Writers (NASW) commit to not using generative A.I. tools like ChatGPT or Bard to replace work done by human writers and editors. We also will not use A.I.-generated images, such as those created with DALL-E, except under very particular conditions and with artists directly involved, while ensuring the result neither imitates existing work nor infringes copyrights. Individual journalists may choose to experiment cautiously with generative A.I. tools, for example when compiling research or summarizing information. But NASW does not support using ChatGPT and similar tools to replace journalists or to publish content generated entirely by A.I., without human input and oversight. We will continue to be transparent about our own use of A.I. systems, and we recommend that other publications and organizations do the same. NASW supports media unions demanding worker protections, just as their Writers Guild of America (WGA) counterparts did in September 2023, as well as demanding a voice in A.I.-related management decisions.

Ethical considerations and human agency must remain central to editorial decisions, as they always have been. For the integrity and accuracy of journalism, we must remain vigilant so that readers and writers alike can clearly distinguish between human- and algorithm-generated content. We recommend that NASW members follow the same principles and guidelines regarding generative A.I.

New technologies will become more powerful, and they’ll be accompanied by plenty of hype, inviting further scrutiny. As generative A.I. tools and the media landscape evolve, NASW may update these guidelines accordingly, and we will be transparent about any such changes.

Cassandra Willyard
NASW President
and the NASW Board


To reach the NASW Board, email president@nasw.org

We thank NASW board member Ramin Skibba for leading the original draft of this statement.

Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of roughly 2,800 professional journalists, authors, editors, producers, public information officers, students, and other people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org and follow NASW on LinkedIn.

February 1, 2024