Reporting on Tech and AI: Tips from Our Newsletter

Last Updated March 2024

The guidance below collects the thoughts, tips, and must-reads about reporting on technology and AI published in our weekly newsletter, Revisions. The information is presented in roughly chronological order, has been edited for clarity, and is updated where necessary.

Language: Thoughts & Resources

September 7, 2023

Shout out to Samantha Cole at 404 Media and her report on the AI-generated books flooding online markets (with some truly harmful effects) for pointing me toward ZeroGPT. You can use this free tool to detect whether content was generated by AI chatbots like ChatGPT or Bard. A paid subscription lets you upload content in batches and unlocks extra features. I can imagine many a journalist using such a tool to uncover similar strains of misinformation.


September 14, 2023

The Reynolds Journalism Institute recently released a detailed guide to creating a chatbot built on your newsroom’s content. The case study on Graham Media explains exactly how they chose a tool, how it works, and how they rolled it out. It’s a useful read.

Which reminds me: if you’re looking for chatbot resources, don’t forget Joe Amditis’ “Beginner’s prompt handbook: ChatGPT for local news publishers.”
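For readers curious what "a chatbot built on your newsroom's content" looks like under the hood, the common pattern is retrieval-augmented generation: index your archive, retrieve the stories most relevant to a reader's question, and hand those excerpts to a language model as context. Here is a minimal, hypothetical sketch of the retrieval half in Python. The article titles, corpus, and keyword-overlap scoring are invented for illustration; a real newsroom deployment (like the tools in the guides above) would use embeddings and a vector store instead.

```python
import re

# Minimal keyword-overlap retriever: the "R" in retrieval-augmented generation.
# A production newsroom chatbot would replace this scoring with embeddings,
# but the shape of the pipeline is the same.

def tokenize(text):
    """Lowercase and split text into a set of words, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question, articles, top_k=2):
    """Rank articles by word overlap with the question; return the best titles."""
    q_words = tokenize(question)
    scored = []
    for title, body in articles.items():
        overlap = len(q_words & tokenize(body))
        scored.append((overlap, title))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

def build_prompt(question, articles, top_k=2):
    """Assemble the context a language model would receive with the question."""
    hits = retrieve(question, articles, top_k)
    context = "\n\n".join(f"{title}:\n{articles[title]}" for title in hits)
    return f"Answer using only this reporting:\n\n{context}\n\nQuestion: {question}"

# Hypothetical mini-archive standing in for a newsroom CMS export.
archive = {
    "City council passes budget": "The city council approved a new budget with more transit funding.",
    "Local team wins title": "The local soccer team won the regional championship on Sunday.",
}

print(build_prompt("What did the council decide about the budget?", archive))
```

The design choice that matters for newsrooms is the instruction to answer "using only this reporting": grounding the model in your own archive is what keeps a newsroom chatbot from inventing coverage you never published.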

Got other AI or ChatGPT resources to share? Please send me a link so I can spread the word!


October 26, 2023

As we all struggle with the misinformation running rampant across social media, it might be worth seeing for yourself just how difficult moderation and policy decisions can be. Techdirt recently released a game called Trust & Safety Tycoon that puts you in the driver’s seat. Take it for a spin and let me know how you do!


February 29, 2024

Because unfortunately this is very necessary, Nieman Lab has published a guide on how to identify and investigate AI audio deepfakes ahead of the 2024 elections. Bookmark it.


March 14, 2024

Zach Seward, The New York Times’s new editorial director of AI initiatives, recently gave a talk at SXSW about how and when AI actually helps journalists. He points out some high-profile mistakes, but the patterns he’s seen in the successes could serve as great inspiration for newsrooms looking to take the leap.


March 21, 2024

When even photos from the office of the Princess of Wales can’t be trusted, it’s time we all learned how to spot a manipulated image. Luckily, the BBC just put out its own guide.


March 28, 2024

Does your newsroom need an AI ethics policy? The answer is “yes,” but if you don’t have one yet, it’s your lucky day. Poynter just released a template that you can fill out as a group, along with instructions on who to bring together for the conversation.

Reframing Headlines

August 31, 2023

Following Elon Musk’s lead, Big Tech is surrendering to disinformation

The headline above, from The Washington Post, leads a story about recent policy reversals at Twitter (I refuse to call it X, sorry), Facebook, and YouTube that have let disinformation run amok.

“Surrender” does mean to stop resisting, and in some ways that is applicable: some of these policy reversals, as the story explains, are said to be responses to floods of disinformation that existing tools can’t keep up with. But others rolled back common-sense policies that kept users from being manipulated and scammed.

“Surrender” is a word that implies agency, but not much of it; its denotation is almost inescapable defeat. Big Tech could very well choose to keep fighting the good fight against bad actors. So rather than accept the premise that it’s just too difficult, the question we must ask as journalists is: what incentive do these platforms have to stop fighting?

Elon Musk sues disinformation researchers, claiming they are driving away advertisers

A few weeks ago, an NPR story (headline above) took a more straightforward approach to Twitter owner Elon Musk’s recent decisions. This headline makes clear the connection between disinformation and the profit motive of these platforms. It isn’t necessarily in their business interests to invest time and money in eradicating disinformation, especially if some power users are heavily invested in spreading it. This news cycle should serve as a reminder to tech journalists to follow the money when discussing decisions that affect the information ecosystem.


Must-Reads

‘I log into a torture chamber each day’: the strain of moderating social media
Deepa Parent and Katie McQue, The Guardian
The public has known for years (thanks to reports from The Verge, the BBC, NPR, and others) that moderating social media posts is a traumatizing job. Recently, however, social platforms have begun outsourcing this moderation (and thus the trauma) to workers in countries like India and the Philippines. The Guardian has the full story.

The Tech That’s Radically Reimagining the Public Sphere
Jesse Barron, The Atlantic
If the proliferation of facial recognition technology doesn’t freak you out, it will by the end of this article. The Atlantic’s review of Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It by Kashmir Hill teases the book’s core story while probing its most important questions. What happens when law enforcement uses this tech without public knowledge? What happens if we can no longer exist in public anonymously?

Instagram’s Algorithm Delivers Toxic Video Mix to Adults Who Follow Children
Jeff Horwitz and Katherine Blunt, The Wall Street Journal
The headline of this Wall Street Journal investigation reveals plenty about what you’ll find inside. But it’s worth the read to learn how such patterns manifest in our social media networks and how those networks choose to tackle (or avoid) these issues. A key quote: “Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.”

The Perfect Webpage
Mia Sato, The Verge
The Verge just published an interactive exploration of how Google’s standards for search engine optimization have reshaped the internet for better and worse. It’ll change the way you surf the web. Seriously.

The Scariest Part About Artificial Intelligence
Liza Featherstone, The New Republic
Fans of AI make big claims about its future applications for the human race. But are its current capabilities — doing things humans can already do, but not always accurately — worth its damage to the planet? The New Republic writes, “Between its water use, energy use, e-waste, and need for critical minerals that could better be used on renewable energy, A.I. could trash our chances of a sustainable future.” Take a read before you decide.