Bad actors across the world are generating mis- and disinformation to influence voters within their own countries and in others. With more than 2.5 billion people in 81 countries casting ballots in 2024, what can journalists do to undercut the torrent of lies?
Rather than avoid AI, said ICFJ Knight Fellow Nikita Roy, start using it.
“AI is a very, very powerful tool that bad actors are using to their advantage, and we as journalists need to be able to use it to be able to combat that,” Roy said. “And that’s what I think is going to be different from this election onward.”
Roy, the CEO and founder of The NRI Nation, a news startup serving the Indian diaspora, spoke Tuesday at an ICFJ event at Bloomberg News’ headquarters in New York City. She was joined by Jonathan Lemire, POLITICO’s White House bureau chief and the host of MSNBC’s “Way Too Early,” and Laura Zommer, an ICFJ Knight Fellow and co-founder of Factchequeado. ICFJ President Sharon Moshavi moderated the discussion.
Newsrooms can use generative AI tools to create social media posts, SEO-friendly headlines, translated content and more, Roy said. Doing so frees up journalists to “be out there actually doing the reporting” and building relationships with the communities they serve. “It’s about these one-, two-person local newsrooms being able to do the work of a five-person newsroom,” she said.
Growing Threats
Finding ways to spend more time on reporting and interacting with communities is crucial, especially now, the panelists said.
“There’s going to be so many possible ways to influence those elections, and it comes at a time where so much in the news media, sadly, are having a real crisis of resources right now,” Lemire said. “We’re trying to do more with less at a time when we need to do a lot more. And I think it’s about training, and it’s about being nimble, and about being prepared and as careful as we can.”
Journalists have improved how they cover elections since 2016, he said, understanding that “you can’t just take things at face value, you also can’t allow certain speakers to be unfiltered, un-fact-checked.” But they still have a long way to go, and are up against even more challenges.
Lemire referenced a recent interview with Sen. Mark Warner (D-Va.), chairman of the Senate Select Committee on Intelligence: “What the Senator said is the United States is far less prepared for this election, to keep this election safe than we were four years ago, despite lessons learned, because the threats have grown in terms of outside actors – foreign, who wish us harm, who wish to interfere in elections – and just the spread of technology, both at home and abroad.”
Indian immigrants in the U.S., for example, are being targeted with content from India designed to influence the elections here, Roy said. Zommer said Spanish-speaking immigrant communities in the U.S. are also prime targets for mis- and disinformation – and the technology is not always that sophisticated.
"We are also dealing with what we call ‘cheap fakes,’” said Zommer, who started Factchequeado to stem disinformation flows in Spanish-language platforms in the U.S. “It’s not always needed, a deepfake. In some cases, it’s just photoshopped. It’s something that just confirms your bias, and then you share it.”
Trust and Credibility
One of the challenges journalists face in reaching immigrant communities with accurate, reliable information is that many consume their news on social media. Zommer cited a recent study that found 57 percent of Latinos get their news from social media, compared with 15 percent of non-Hispanic White people. That figure jumps to 74 percent among Latinos who prefer to consume their news in Spanish.
And she said many are receiving and sharing political information in WhatsApp groups. “If we are not creating a rich offer in WhatsApp for them, they’re consuming just disinformation or information with low quality,” Zommer said.
Zommer also emphasized the importance of journalists being modest, listening to what people need and being responsive when they want information. That is how to build trust, she said, and over time community members will begin fighting lies, too.
“What we need to do is to have that relationship with people, do more media literacy, find ways to help make them also be reporters on the ground in their WhatsApp groups asking, ‘Hey, why did you share this? What is your source?’” Zommer said. “But for sure, we needed to start yesterday.”
Roy said even those spreading disinformation understand the importance of direct interaction. In India, for example, political parties are recruiting people to join WhatsApp groups by sending campaign workers to talk with them in person: “They’re not doing it just by being behind their computers and sending them messages on WhatsApp.”
What’s Next with AI
Asked about tools journalists can use to detect deepfakes, Roy and Zommer said the detectors are unreliable. Factchequeado uses them and reports what they find, but it is transparent about their limitations and follows up with traditional reporting to verify, Zommer said.
And that gets back to Roy’s point about the role of AI.
“We [journalists] are in the business of creating knowledge, of going and being able to fact check and see, ‘Is that true or not?’” said Roy, who calls herself an AI optimist. “That’s something that AI is not able to do, and will not be able to do. [These tools] don’t work as Google. They work on top of material and information that has been produced by journalists, by news, by those who are creating knowledge.”