Ethical AI starts with culture: build trust, protect data, stay mission-first. AI can expand capacity in philanthropy—but only if your people and your stakeholders trust how you use it. In this thought leadership session, Michael Reardon brings a change-management lens to the ethics of introducing AI in mission-driven organizations: how to set clear guardrails, protect privacy, and keep your values at the center while you build real capability. Expect a practical, product-agnostic conversation that helps you define what “responsible” means for your organization—and how to turn that definition into the everyday behaviors, communication, and governance your team can sustain.
What you’ll take away:
- A starter Responsible AI framework you can adapt—so you move from abstract principles to clear decisions, roles, and review points
- A privacy-first set of questions to guide AI evaluation (data access, permissions, retention, and “should we use this data at all?”)
- A simple approach to mission alignment—how to test AI use cases against your values, donor expectations, and constituent trust
- A change-ready plan to bring staff along with transparency and confidence, including how to address understandable concerns about job impact and accountability
- Language and talking points to explain AI clearly and without jargon (including the difference between “assistants” and “agents”)—so more people can participate in responsible decisions
Trust is one of your organization’s most valuable assets. When you introduce AI with clarity, care, and strong stewardship of data, you don’t just adopt new tools—you strengthen the culture that powers your mission.