ChatGPT, an artificial intelligence (AI) language model created by OpenAI, has been making waves across the internet, leading to questions on how AI will change the way we work and write.
In the latest ICFJ Pamela Howard Forum on Global Crisis Reporting webinar, Jenna Burrell, director of research at Data & Society, dove into the pros of ChatGPT and how it can be a tool for journalists, as well as its limitations and what journalists should be cautious about.
Here are her tips on best uses of ChatGPT for journalists:
ChatGPT can simplify concepts
One of the most important tasks for journalists is simplifying complex topics for a general audience. ChatGPT makes this easier, Burrell said. Using the language model allows journalists to plug an abstract or part of an academic article into ChatGPT and ask the software to simplify it. A journalist can use this tool to better understand an article or idea before interviewing the piece's author.
ChatGPT is also useful for non-native English speakers. Its simplification feature allows those who may only understand basic English to “translate” any text into a more basic form. This is especially useful for topics that rely on complex or specialized language, like science or economics. For now, this only works in English.
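The simplification workflow described above happens in the chat interface, but newsrooms that want to repeat it at scale can script the same request through OpenAI's Python client. The sketch below is a minimal illustration, not part of Burrell's talk: the model name, prompt wording, and function names are all illustrative assumptions.

```python
# A minimal sketch of scripting the "simplify an abstract" workflow,
# using OpenAI's Python client. The model name and prompt wording are
# illustrative assumptions, not recommendations from the webinar.

def build_simplify_prompt(abstract: str) -> list:
    """Build the chat messages that ask the model to simplify a passage."""
    return [
        {"role": "system",
         "content": "You rewrite academic text in plain language for a general news audience."},
        {"role": "user",
         "content": "Simplify this abstract for a non-specialist reader:\n\n" + abstract},
    ]

def simplify(abstract: str) -> str:
    """Send the prompt to the API and return the simplified text."""
    from openai import OpenAI  # requires `pip install openai` and an OPENAI_API_KEY
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # an assumed model name; any chat model works
        messages=build_simplify_prompt(abstract),
    )
    return response.choices[0].message.content
```

As with the chat interface, anything the model returns still needs a human fact-check before it informs an interview or a story.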
It can assist you with questions for interviews
Journalists can use ChatGPT to prepare for interviews. You can list questions you have in mind for an interview subject, and the software will create more questions modeled after them. A journalist can also paste in a previous interview, or an article written by the interviewee, and ask the software to develop questions about that topic.
It can act as a sub-editor
ChatGPT can be used as a sub-editor. Journalists can input their articles for a last review before sending them to their editor, for example by asking ChatGPT to edit the article to a specific style, such as AP style. However, journalists should still review and fact-check the changes ChatGPT makes to ensure that no false information is introduced.
You cannot always trust its results
Journalists should be aware of ChatGPT’s major flaw: it cannot be trusted. ChatGPT was trained on vast amounts of text from across the internet, and it responds to prompts by predicting the most likely answer to a query. As a result, it sometimes generates answers that are not factually correct.
For example, when Burrell asked ChatGPT for experts in the field of data science, it produced a list of well-known academics who study the subject. However, when the question was modified to ask for Ghanaian experts in data science, none of the names it presented actually existed when checked.
Burrell warned journalists to be aware of ChatGPT’s inability to “fill the data void.” ChatGPT will not answer a question by saying it does not know the answer; instead, if the data it has doesn’t provide an answer, it will simply make one up. This can be especially problematic in regions where the internet has historically been devoid of data. “Because it takes a little bit from here and there, it often produces just absolutely incorrect results and it's hard to figure out when it’s incorrect,” Burrell said.
ChatGPT also has an issue of replicating the bias it was built on. The software was built using a large amount of data, but the tool cannot “learn” – it can only reproduce and regurgitate the data it already has.
Because ChatGPT was built by collecting massive amounts of information from the internet, the information it gives back will be as biased as the information it was trained on. When journalists use ChatGPT, they should not only double-check the content it presents, but also reach out to others who have different perspectives, including those who might counter ChatGPT’s built-in bias.
“ChatGPT sucks up everything on the internet; what you get out of it is a reflection of the skew of the internet as a whole,” Burrell said.
The future of ChatGPT in journalism
ChatGPT is still a new tool, and questions remain about how its parent company, OpenAI, will shape its business model. It is not yet clear how OpenAI will make money: whether it will partner with Microsoft and its search engine, Bing, or create its own advertising model like those Google and Bing currently use. While the tool is currently free and open to everyone, any change in monetization could raise further questions regarding copyright law. For example, although ChatGPT was trained on articles journalists have written, the authors are not compensated or recognized for their contribution.
Burrell noted that this might be a bigger problem with tools like DALL-E, another OpenAI tool, which creates images from text. Today, artists who make their living from the art they create are seeing their styles copied by DALL-E with no credit or compensation. “Everything you’ve written as a journalist that’s out there publicly is dumped into OpenAI’s tool," Burrell said. “You have this copyrighted work that you’ve published as a journalist that informs that model, and you don’t get compensated for that. I don’t think copyright law is really up to the task at this moment.”
In its current form, Burrell recommended that journalists use ChatGPT as a tool while recognizing its limitations. Although the model can help journalists write faster when they are on deadline, inspire them when they are struggling to be creative, and serve as an extra check that their work is well written and consistent in style, it should always be used with a human by its side. Everything it says must still be double-checked for accuracy and for sources.
For journalists worried that ChatGPT's writing will be passed off as journalism, Burrell noted that its writing lacks a level of journalistic quality and creativity — an editor can usually tell the difference. “Humans will continue to be much more inventive and creative, and able to produce really unusual ways of saying things,” she said.