ASK THE EXPERTS! Ask Blackbaud experts your responsible AI questions here!
Thanks for all the great questions! This Ask the Experts session is now closed but we hope you'll read through the great discussion below.
----
It's time for our "Ask the Experts" session about responsible AI with Carrie Cobb, chief data and AI officer at Blackbaud, and Cameron Stoll, Blackbaud's chief privacy officer.
Ask all your questions here, whether they are about AI governance frameworks, risk management, bridging the gap between AI adoption and responsible practice, or anything else related to ensuring that your organization is approaching AI responsibly.
Carrie and Cameron will be checking in and responding throughout the day until 4pm ET.
Note: you need to join the AI Explorers group to be able to participate in the discussion. Navigate to the AI Explorers homepage and click Join if you aren't yet a member.
Answers
-
Happy to be here!
0 -
Looking forward to it!
0 -
- How does Blackbaud define “Responsible AI” within the context of nonprofit fundraising and constituent engagement?
- What does Responsible AI compliance look like for a nonprofit organization in practical terms?
- From Blackbaud’s perspective, what concrete steps can a nonprofit take to ensure their AI initiatives are fair, inclusive, and mission-aligned, especially when modeling or segmenting donors?
- Given that bias in AI often originates from the data used to train AI, should nonprofits actively attempt to make their datasets more “representative” or “fair”?
(For the record, I want a fairer and more inclusive world. When dealing with data, the data is the data. A nonprofit's current donor base might not be inclusive, and the resulting use of that data might not be considered fair.)
1 -
As Blackbaud rolls out new AI tools, how are you making sure they’re environmentally responsible? For example, are you looking at things like the energy required to train and run these models, and how does that fit into your broader responsible AI approach? How should charities using Blackbaud respond to our supporters’ concerns about the environmental impact of these tools?
1 -
How did Blackbaud train its AI? Anthropic is one of Blackbaud's advertised AI partners. Anthropic is facing a lawsuit over pirating copyrighted books to train AI models, and its agentic technology was recently used in an attempted cyber-attack against the US government. How is Blackbaud guaranteeing responsible AI training and the safety of database/donor information?
0 -
Thanks for the question - I will start to answer these thematically in a few responses :)
Responsible AI within Nonprofit Fundraising and Constituent Engagement
Responsible AI means defining, developing, and/or deploying AI that is ethical, transparent, and mission-driven - always keeping the social impact community at the center. It’s not just a technical standard; it’s a commitment to fairness, reliability, and trust.
For more information, please visit:
For nonprofits, that means the use of AI should amplify human judgment, ensure fairness and inclusivity, operate transparently so decisions can be understood and trusted, prioritize human agency, honor data and donor relationships, and be mission-driven.
Some great resources for us all to leverage include:
- Responsible AI Institute:
- Open Data Institute:
- Ethics Guidelines for Trustworthy AI:
- FundraisingAI:
I would be interested in this community’s recommendation on additional resources for us all to leverage.
0 -
Getting Started with Responsible AI
Defining, developing, and deploying AI to drive value while mitigating risk requires actionable plans that move from commitments and frameworks to action on the ground. It’s a great opportunity to utilize Trustworthy AI building blocks: Governance, Policy, Empowerment, and Process. If we break these down:
- Governance: Ensure AI aligns with your organization’s goals and values, provide guidance and promote best practices, and ensure compliance with security, legal, and responsible AI standards.
An example can be establishing an AI Council at your organization.
- Policy: Harness the transformative potential of AI while mitigating risk and ensuring that AI use aligns with trustworthy AI principles.
An example can include creating AI policies, standards, and toolkits. But don’t let them sit on the shelf! Ensure that every employee sees themselves within these policies.
Note: Did you know that only 14% of nonprofits currently have an AI policy in place? (source: Blackbaud Institute Status of Fundraising Report - BBI - Status of Fundraising 2025)
- Empowerment: Invest in the confidence of your employees by developing analytic communities throughout the organization that are trained, coached, supported, and governed.
Examples include curated learning libraries, experimentation groups, lunch and learns, and leveraging some great training opportunities (many are free!).
Note: Together with the AI Coalition for Social Impact, we’re launching a free, sector-specific certification designed to help the social impact community use AI effectively and ethically in our daily work. For more information, please visit:
- Process: Democratize data skills through the use of common frameworks, tools and methodologies. Centralize access to improve controls, governance, and cost.
An example can be centralizing solutions, access, and frameworks to improve governance and reduce cost.
0 -
You noted that very few nonprofits currently have an AI policy in place. Do you have examples of AI policies for charitable groups?
0 -
How do AI governance policies/best practices compare or contrast with more broad data/analytics/business intelligence policies and best practices?
0 -
Sharing this Blackbaud Institute Spotlight too, as it references AI trends: What Will Giving Look Like in 2023: - thanks to Dr. Laurene Currie.
0 -
So glad you raised this. We agree it's absolutely right to highlight the environmental impact of AI, particularly in terms of energy consumption and water usage. At Blackbaud, we're committed to continuing the strategies and initiatives we have had in place to reduce our environmental impact - and we're learning more each day.
- Vendor Engagement: We are engaged with some of the top cloud providers in the world – all of whom have strong commitments to renewable energy and carbon neutrality. It is important for us to understand and learn about the aggressive plans they have in place to manage the increased load from AI globally. We connect with them to discuss our footprint and the ways they are advancing their own environmental efforts.
- Decarbonization: Where impacts are unavoidable, we're investing in credible carbon and water offset programs to mitigate our footprint. As we announced earlier this year, we achieved carbon neutrality last year across our business operations, including our Scope 3 emissions (which include our cloud provider and data center impact).
- Transparency and Measurement: While we can't currently isolate the impact of AI in our direct or indirect footprint, it is important for us to continue to measure and report our overall impact. We also plan to continue studying and learning more about our indirect Scope 3 emissions, especially in our data center and cloud environments.
Linking here to our Blackbaud Impact Report for a deep dive if you'd like:
1 -
And here is a great article from Carrie Goldberger - AI in Action: Deploying Responsible, Effective, and Trustworthy AI:
Carrie is an amazing Data & AI Product Manager and outlines a thoughtful intentional approach - another great resource for you all.
0 -
The lawyer in the room of course recommends not adopting any sample or third-party policy wholesale, since it's most effective when tailored to your organization's particular quirks, processes, tech stack, etc. However, there are some very useful resources in these that will give you substantial building blocks to work from:
1 -
What would you recommend to someone who has not used AI before as a way to dip their toe in to get started? I saw comments about steps on an organization level, but what about on a person to person level?
0 -
Will AI be able to assist us with answers that require fund accounting knowledge in addition to software knowledge?
0 -
For our current gen-AI offerings, Blackbaud doesn’t build or train generative AI models. We use enterprise-grade tools and techniques like prompt engineering, RAG (retrieval-augmented generation), and more to create helpful, secure features for our customers. This lets us improve the outputs over standard LLMs, so our customers get the benefit of emerging models with billions of parameters and improvements in areas like hallucinations and performance - while we apply our industry best practices.
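To make the RAG idea above concrete, here is a minimal, illustrative sketch of the pattern: retrieve the most relevant documents for a question, then assemble a grounded prompt for the LLM. All names are hypothetical, and the keyword-overlap scoring is a toy stand-in for the vector embeddings and enterprise LLM endpoints a production system (including Blackbaud's) would actually use.

```python
# Toy sketch of retrieval-augmented generation (RAG), for illustration only.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Ground the model's answer in retrieved context (the 'augmented' step)."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Donor retention improved 4% after monthly giving campaigns.",
    "Fund accounting tracks restricted and unrestricted funds separately.",
    "Event attendance peaked at the spring gala.",
]
prompt = build_prompt("How does fund accounting handle restricted funds?", docs)
# 'prompt' would then be sent to an LLM; grounding it in your own
# documents is what reduces hallucinations versus a bare model query.
```

The design point is that the organization's data stays in the retrieval layer, so the base model never needs to be retrained on it.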
0 -
Will we be able to use AI to create and manage Dashboards?
0 -
I look at them as layering.
AI governance policies and data/analytics/business intelligence (BI) policies share a lot of common foundations, such as data quality, security, and compliance. But AI governance introduces additional layers of complexity because of the nature of AI systems.
Let's start with the key similarities:
- Data Integrity & Privacy: Both require strong controls for data accuracy, security, and regulatory compliance
- Accountability: Clear ownership and auditability are essential in both domains.
- Transparency: BI and analytics policies emphasize clear reporting (note: AI governance extends this to explainability of models).
Now let's talk about the additional layers of AI governance:
- Ethical Considerations: AI governance explicitly addresses fairness, bias mitigation, and societal impact.
- Model Lifecycle Management: AI policies govern training, validation, monitoring, and retraining of models.
- Risk Management: AI introduces algorithmic risk (ex: unintended bias, drift), requiring continuous oversight beyond standard data governance.
- Human Oversight: AI governance includes human-in-the-loop for critical decisions.
Overall, BI policies ensure accurate, secure, and useful insights from data. AI governance builds on that foundation but adds ethical, technical, and operational safeguards to ensure AI systems are trustworthy, explainable, and aligned with your organizational values.
1 -
I appreciate the engagement/responses.
I didn't see a response to this:
Given that bias in AI often originates from the data used to train AI, should nonprofits actively attempt to make their datasets more “representative” or “fair”?
Thoughts?
Thank you in advance!
0 -
I would say – just be curious! We are all learning 😊
I’ll start with my favorite books – Prediction Machines and Power and Predictions ( ) and AI Factor ( ).
Winning with AI: The 90 Day Blueprint for Success ( ) looks great as well. It's coming soon – but the authors (Katia Walsh and Charlene Li) are incredible and I’ve learned so much from them throughout my career.
Charlene also has a great newsletter – Leading Disruption: (scroll down to the bottom to subscribe)
If you are looking for technical training, there are some great, free courses out there as well – check out DataCamp or LinkedIn Learning – but there are others. Some have hands-on skills labs for practice.
The FundraisingAI Global Summit has some great recordings shared as well: . I of course have to give a shout out to Sam Venable (Director, Data & AI Product Management) for his session on The Context Advantage: The Missing Piece in Nonprofit AI Strategy.
And then stay tuned for our AI Certification for Social Impact in 2026:
I am also interested in what this community has to share on favorites!
0 -
I'd also add that in a personal capacity, it's very tempting to use free versions of AI tools until you know what you need, like, and get value from. Keep in mind that free tools have very liberal ideas about using anything in your prompts (including your PII, maybe even biometrics, and intimate details) to train their models. Some LLMs let you opt out of having your prompts used for training, so peruse your account options before you put anything sensitive in there. It's (nearly) impossible to purge.
0 -
Hi Charles,
The risk posed by historical bias in training data largely depends on the likelihood of harm to individuals. We see laws and litigation focus on bias in training data primarily in scenarios where AI makes decisions on biased data with discriminatory and significant impact on people - affecting the cost or terms of one's insurance, provision of healthcare, education, employment opportunities, access to public services, etc.
If the risk of automated decision-making on an individual is low or nonexistent, such as categorizing donors or recommending ask amounts, we don't see organizations correcting for historical bias.
If your organization is thinking about using AI to make significant decisions about people (grants, financial aid, scholarships, compensating fundraisers, school applications), the answer would be different.
1