Reporting on Tech and AI: Tips from Our Newsletter

Last Updated July 2024

The guidance below collects thoughts, tips, and must-reads about reporting on technology and AI that were published in our now-retired newsletter, Revisions. It has not been updated since July 2024. The items appear in roughly chronological order and have been edited for clarity.

Language: Thoughts & Resources

September 7, 2023

Shout-out to Samantha Cole at 404 Media, whose report on the AI-generated books flooding online markets (with some truly harmful effects) pointed me toward ZeroGPT. You can use this free tool to check whether text was generated by an AI model like ChatGPT or Bard. With a paid subscription, you can upload content in batches and get extra features. I can imagine many a journalist using such a tool to uncover similar strains of misinformation.


September 14, 2023

The Reynolds Journalism Institute recently released a detailed guide to creating a chatbot built on your newsroom’s content. The case study on Graham Media explains exactly how they chose a tool, how it works, and how they rolled it out. It’s a useful read.
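If you're curious what a chatbot "built on your newsroom's content" looks like under the hood, here is a minimal sketch of the general retrieval-augmented pattern most of these tools follow: pull the most relevant stories from your archive, then hand them to a language model as the only context it may answer from. To be clear, this is not the Graham Media setup or code from the Reynolds guide; the tiny archive, the keyword scoring, and the prompt wording are all made-up stand-ins, and real products replace the keyword overlap with embeddings and a vector index.

```python
from collections import Counter

# Hypothetical stand-in for a newsroom archive (illustration only).
ARCHIVE = [
    {"headline": "City council approves new transit budget",
     "body": "The council voted 7-2 to expand bus service starting this fall."},
    {"headline": "Local schools adopt AI ethics policy",
     "body": "The district's policy requires teachers to disclose classroom AI use."},
]

def score(question: str, doc: dict) -> int:
    # Crude relevance score: count overlapping words between question and article.
    q = Counter(question.lower().split())
    d = Counter((doc["headline"] + " " + doc["body"]).lower().split())
    return sum(min(q[w], d[w]) for w in q)

def build_prompt(question: str, top_k: int = 2) -> str:
    # Retrieve the most relevant stories and ask the model to answer only from them.
    top = sorted(ARCHIVE, key=lambda doc: score(question, doc), reverse=True)[:top_k]
    context = "\n\n".join(f"{doc['headline']}\n{doc['body']}" for doc in top)
    return ("Answer the reader's question using only the articles below. "
            "If the answer isn't in them, say so.\n\n"
            f"{context}\n\nQuestion: {question}")

print(build_prompt("What does the schools' AI policy require?"))
```

The resulting prompt is what you'd send to whichever model your newsroom has settled on; the grounding step is what keeps the bot answering from your reporting rather than from thin air.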

Which reminds me: if you’re looking for chatbot resources, don’t forget Joe Amditis’ “Beginner’s prompt handbook: ChatGPT for local news publishers.”

Got other AI or ChatGPT resources to share? Please send me a link so I can spread the word!


October 26, 2023

As we all struggle with the misinformation running rampant across social media, it might be worth seeing for yourself just how difficult moderation and policy decisions can be. TechDirt recently released a game called Trust & Safety Tycoon that puts you in the driver’s seat. Take it for a spin and let me know how you do!


February 29, 2024

Because unfortunately this is very necessary, Nieman Lab has published a guide on how to identify and investigate AI audio deepfakes ahead of the 2024 elections. Bookmark it.


March 14, 2024

Zach Seward, The New York Times’s new editorial director of AI initiatives, recently gave a talk at SXSW about how and when AI actually helps journalists. He points out some high-profile mistakes, but the patterns he’s seen in the successes could serve as great inspiration for newsrooms looking to take the leap.


March 21, 2024

When even photos from the office of the Princess of Wales can't be trusted, it's time we all learned how to spot an image that's been manipulated. Luckily, the BBC just put out its own guide.


March 28, 2024

Does your newsroom need an AI ethics policy? The answer is "yes," but if you don't have one yet, it's your lucky day. Poynter just released a template that you can fill out as a group, along with instructions on who to bring together for the conversation.

April 25, 2024

Why does so much reporting on AI use imagery of robots as illustration? The Reuters Institute explains why it happens, why it shouldn't, and what we should use instead.


May 23, 2024

Already sick of Google’s new AI-supplemented search results? Journalist, designer, and creator of Tedium, Ernie Smith, has created a work-around. Visit https://udm14.com/ and you can search Google without any “AI Overview” to annoy you.
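For the curious: as I understand it, the site works by adding Google's "udm=14" URL parameter to your search, which requests the plain "Web" results view with no AI Overview on top. Here's a minimal sketch of building such a link yourself (the example query is just a placeholder).

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 asks Google for its "Web" results view, which skips the AI Overview.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("newsroom AI ethics policy template"))
```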


June 20, 2024

Want to try your hand at sharing your journalism via vertical video? A TikTok star has created an AI tool and template to help journalists do just that.


July 4, 2024

The Center for Media Engagement recently released an in-depth report on the role of AI in the 2024 U.S. elections. Take a spin through the key insights from their interviews with political campaign consultants, AI tech vendors, and at least one candidate who is using generative AI on her campaign.


July 11, 2024

Want to find out what all the AI-in-news fuss is about? ONA rounded up a recent discussion among their AI Innovator Collaborative, and it comes complete with a list of the tools journalists are already using.

Reframing Headlines

August 31, 2023

The headline below, from The Washington Post, leads a story about recent policy reversals at Twitter (I refuse to call it X, sorry), Facebook, and YouTube that have let disinformation run amok.

Following Elon Musk's lead, Big Tech is surrendering to disinformation

"Surrender" does mean to stop resisting, and in some ways that is applicable: some of these policy reversals, as the story explains, are said to be in response to floods of disinformation that are difficult to keep up with using existing tools. But others are common-sense policies that keep users from being manipulated and scammed.

"Surrender" is a word with agency, but not much. Its denotation is that of almost inescapable defeat. But Big Tech could very well choose to keep fighting the good fight against bad actors. So, rather than accept their premise that it's just too difficult, the question we must ask as journalists is: what incentive do they have to stop?

Elon Musk sues disinformation researchers, claiming they are driving away advertisers

A few weeks ago, an NPR story (headline above) took a more straightforward approach to Twitter owner Elon Musk’s recent decisions. This headline makes clear the connection between disinformation and the profit motive of these platforms. It isn’t necessarily in their business interests to invest time and money in eradicating disinformation, especially if some power users are heavily invested in spreading it. This news cycle should serve as a reminder to tech journalists to follow the money when discussing decisions that affect the information ecosystem.

Must-Reads

‘I log into a torture chamber each day’: the strain of moderating social media
Deepa Parent and Katie McQue, The Guardian
The public has known for years (thanks to reports from The Verge, BBC, NPR, and others) that moderating social media posts is a traumatizing job. Recently, however, social platforms have begun outsourcing this moderation (and thus the trauma) to workers in countries like India and the Philippines. The Guardian has the full story.

The Tech That’s Radically Reimagining the Public Sphere
Jesse Barron, The Atlantic
If the proliferation of facial recognition technology doesn't freak you out, it will by the end of this article. The Atlantic's review of Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It by Kashmir Hill teases the book's core story while probing its most important questions. What happens when law enforcement uses this tech without public knowledge? What happens if we can no longer exist in public anonymously?

Instagram’s Algorithm Delivers Toxic Video Mix to Adults Who Follow Children
Jeff Horwitz and Katherine Blunt, The Wall Street Journal
The headline of this Wall Street Journal investigation reveals plenty about what you’ll find inside. But it’s worth the read to learn how such patterns manifest in our social media networks and how those networks choose to tackle (or avoid) these issues. A key quote: “Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.”

The Perfect Webpage
Mia Sato, The Verge
The Verge just published an interactive exploration of how Google’s standards for search engine optimization have reshaped the internet for better and worse. It’ll change the way you surf the web. Seriously.

The Scariest Part About Artificial Intelligence
Liza Featherstone, The New Republic
Fans of AI make big claims about its future applications for the human race. But are its current capabilities — doing things humans can already do, but not always accurately — worth its damage to the planet? The New Republic writes, “Between its water use, energy use, e-waste, and need for critical minerals that could better be used on renewable energy, A.I. could trash our chances of a sustainable future.” Take a read before you decide.

Elon Musk Tweeted a Thing
Jason Koebler, 404 Media
Don’t be fooled by the simple headline. This article from 404 Media traces the news cycle that crops up anytime someone like Musk (or Donald Trump) makes a statement, regardless of its import or accuracy. This type of SEO cottage industry is an all-too-familiar feature of modern media, and one that wastes valuable journalism resources.

Is Google S.E.O. Gaslighting the Internet?
Kyle Chayka, The New Yorker
For additional context on the state of search engine optimization, check out this New Yorker piece on what Google says it uses to determine search rankings, and what people actually experience.