How to Develop an Ethical AI Use Policy for a Nonprofit

By: Chris Kauffman and Maggie Farley | 04/09/2024
Image: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

Technology changes quickly, and as it does, it often leaves us wondering “What does this mean for us?” When ChatGPT ushered in a new era of accessible artificial intelligence (AI) tools in 2023, our staff here at the International Center for Journalists (ICFJ) were full of questions about what this meant for our work, our mission and journalism in general. 

To support our staff, we embarked on a project to develop a policy that provides guidance on how the organization will use AI tools. And because we know we aren’t alone in answering these big questions, we wanted to share the lessons we learned along the way to help other organizations that are in the midst of creating their own policy.

Why we decided to write an AI policy for our organization

Here at ICFJ, AI was a much-discussed issue. Some people were extremely apprehensive about using it; others just dove in.

The apprehensive ones had serious concerns about AI: How would it affect their jobs? Would it hurt the journalism community? Would it accelerate the rise of disinformation? Was it safe to use? Others saw the potential: How could AI help optimize our organization’s processes? Could it help us analyze ICFJ’s program impacts and tell the organization’s story? Could it empower journalists?

To address these questions, we convened a task force to create an AI Use Policy that establishes a standard for how ICFJ uses, adopts and engages with AI tools. The task force comprised staff from various levels and departments to ensure the new policy would be an effective tool for everyone in the organization.

How we wrote the policy

The first goal of the AI task force was to establish principles that would guide us through the process. Here’s what we decided on:

  • Do no harm
  • Protect others’ rights, privacy and original work
  • Use content and data with consent
  • Be transparent

After we identified these principles, we began our research. We started with the Partnership on AI’s guidelines. We read articles, listened to podcasts, talked to colleagues and attended webinars and in-person discussions. Some of the resources we used are linked below.

Once we felt confident in our knowledge, we began drafting the policy. And where better to start than ChatGPT? We prompted the tool to draft an AI use policy for a nonprofit organization. In the end, we didn’t use its draft, but it did help us lay the groundwork and decide which sections to include in our own policy.
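
If you would rather script this step than use the chat interface, the sketch below shows one way to make the same kind of request with OpenAI’s Python client. It is a minimal illustration, not what we ran: the model choice, system message and prompt wording are all assumptions for the example.

    # A minimal sketch of asking a model for a first-draft policy.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a policy consultant for nonprofit organizations."},
            {"role": "user",
             "content": "Draft an AI use policy for a nonprofit. Include Overview, "
                        "Purpose, Scope, Compliance and Review sections, and reflect "
                        "these principles: do no harm; protect others' rights, privacy "
                        "and original work; use content and data with consent; "
                        "be transparent."},
        ],
    )

    print(response.choices[0].message.content)

However you generate it, treat the output as scaffolding: as with our ChatGPT draft, every section should be rewritten and reviewed by people before adoption.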

Once we had a draft, we went through an intensive review process. We asked IT consultants, AI experts, editors, a lawyer and ICFJ’s senior leadership all to review the policy. 

After six months of work, we had a final version of the ICFJ AI Use Policy.

What we learned from this process
 

  1. Be prepared to learn. AI can be daunting to research, and it’s hard to know where to start. There are more resources now than when we started. (Check out the list below.)

    Our suggestion is to create an area (we used a Slack channel) where you and the rest of your organization can collaborate on the research. Also go to conferences and webinars. Reach out to your professional community to see if anybody is willing to help, and engage with your IT service provider if you have one.
     
  2. Just start. People who work for nonprofits generally wear multiple hats, and your organization might ask you to be its in-house AI expert. Start by answering why your organization is creating the policy and what its goals are.
     
  3. Don’t sweat drafting the policy. Most policies include some form of Overview, Purpose, Scope, Compliance and Review sections. Write these standard sections first; it will get your head in the right space to write the meat of the policy. Poynter has a handy template to get you started. (A minimal skeleton is sketched just after this list.)
     
  4. Keep everyone informed. This policy will affect your entire staff. Regularly communicate the why of the policy and invite people to help craft it.
     
  5. Optimize the review process. As mentioned earlier, this policy will affect everyone in your organization. Don’t limit the review to copyediting: create a review team that understands how the new policy could affect every department in your organization.
     
  6. AI is continuously changing. Your first policy will not cover all the AI tools out there, and that’s okay. Build into the policy a process to review and update it regularly as new AI technologies emerge.
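
To make point 3 concrete, here is a minimal skeleton of those standard sections. The wording is illustrative, not taken from the ICFJ policy:

    AI Use Policy (skeleton)
    1. Overview – what this policy is and who maintains it
    2. Purpose – why the organization is adopting it (your answer from point 2)
    3. Scope – which staff, contractors and AI tools the policy covers
    4. Guidelines – the meat: your principles, approved and prohibited uses, disclosure rules
    5. Compliance – what happens when the policy is not followed
    6. Review – how often, and by whom, the policy is revisited (see point 6)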


How ICFJ is helping journalists navigate AI

Journalists’ justified wariness about AI competes with their fear of being left behind. ICFJ is hosting programs that deal forthrightly with that tension, addressing the ethical and equity issues while exploring ways that newsrooms can safely and effectively use AI tools to improve their processes, research and workflows. Some news outlets are blocking AI scrapers from their sites, while others are licensing their work to AI companies to train their models.

ICFJ is working with journalists to explore the options and, in some cases, to use AI tools to counter the misuse of AI, such as the spread of disinformation. The tools and the frameworks around them are changing constantly.
 

  • ICFJ Knight Fellows as thought leaders: We have two Knight Fellows focused on AI, and one more who will soon come on board. Nikita Roy is creating courses to teach AI literacy and is working with newsrooms to carefully implement AI tools. Newsroom Robots, her weekly podcast of conversations with leaders in the space, has ranked in the Top Technology Podcasts category on Apple Podcasts in more than 30 countries. Mattia Peretti, one of the pioneers of AI in journalism, is taking a step back to see how this watershed moment allows us to reimagine how journalism can better serve communities, with AI as an aid to change, not the driver of it. He also organized a Directory of AI & Journalism Consultants and Trainers for newsrooms.
     
  • Leap Solutions Challenge: Leap, ICFJ’s innovation lab, challenged eight teams of journalists to explore how AI tools can disarm AI-powered disinformation. The solutions included AP’s Verify dashboard, which helps its reporters verify information; Rolli Information Tracer, which tracks the origin and spread of misinformation across platforms; several chatbots that respond immediately to queries about suspect claims, including ChatVE, which handles several African languages; and Serendipia’s Snap Audit, which helps Mexican journalists quickly analyze documents to expose corruption and disinformation.
     
  • Media Party: This gathering of technologists and journalists, started by an ICFJ Knight Fellow years ago and now supported by ICFJ, hosts workshops, discussions and a hackathon. Last year’s theme was “How can AI serve journalism?” and this year’s is “AI and elections.”
     
  • ICFJ’s Disarming Disinformation program investigates who is behind disinformation and how it spreads, including a focus on AI.
     
  • AI literacy program: Roy and Peretti are teaming up to develop an AI literacy program and learning experiences for journalists.
     
  • ICFJ’s International Journalists’ Network (IJNet) oversees the Pamela Howard Forum on Crisis Reporting, which has held many webinars covering topics including the ethics of AI use and building AI tools for journalists. You can view the webinar trainings here.


What ICFJ is doing next

AI will continue to be a disruptive force for the foreseeable future, and we need to be ready for the change it will bring. To stay up to date, we plan to revisit our AI Use Policy regularly, ensuring that our staff remain informed, protected and empowered as they engage with AI tools, now and in the future.
 

ICFJ resources on all things AI


Additional resources for creating your own policy
