
Generative AI technologies have made significant inroads across industries, but the not-for-profit (NFP) sector is approaching this digital revolution with measured steps. Our 2025 not-for-profit survey findings indicate that while NFPs recognise generative AI’s potential to enhance operations and engagement, adoption remains strategic and focused on specific applications. This emerging pattern mirrors the sector’s approach to ESG implementation – cautious but deliberate, with clear recognition of both benefits and challenges.
Marketing leads adoption priorities
Marketing and communications emerge as the primary use case, with 44% of NFPs leveraging generative AI to streamline content creation and enhance outreach, and 19% using it in fundraising and donor engagement efforts. This focus reflects the sector’s strategic prioritisation of stakeholder engagement and donor relationships; generative AI could become an important tool for communicating value and impact as stakeholder expectations change.
Use in internal communications is also common, at 33%, as organisations look for ways to increase efficiency and augment skill sets.
The survey reveals a pattern of practical implementation, with 27% utilising AI for data analytics and 19% for customer service. This targeted approach suggests that NFPs are selectively implementing AI where immediate value is most evident, rather than pursuing comprehensive digital transformation.
Positive impacts emerging
Nearly half (48%) of respondents report that generative AI has positively impacted their operations, pointing to early returns on investment for organisations that have embraced these technologies. However, a significant portion remain uncertain about AI’s effects, suggesting that many NFPs are still in experimental phases or lack robust assessment frameworks to measure outcomes.
This cautious optimism mirrors the sector’s approach to other digital innovations – focused on practical applications that directly support mission fulfilment rather than technology adoption for its own sake.
Quality concerns top list of challenges
The top concerns surrounding generative AI reflect the sector’s commitment to building trust. Data inaccuracy and quality concerns lead the list at 64%, followed by data protection and privacy issues (56%) and intellectual property and legal concerns (44%).
These priorities underscore the uniquely sensitive position of NFPs, where trust is a fundamental currency and where organisational reputation can directly impact funding and community support. For a sector often working with vulnerable populations and sensitive information, these concerns represent significant barriers to wider AI adoption.
What this means for you
The NFP sector stands at a critical juncture with generative AI – balancing the operational efficiencies and enhanced engagement these technologies promise against legitimate concerns about quality, privacy, and legal compliance. Organisations that develop clear governance frameworks for AI implementation, focusing on transparency and quality control, may find themselves better positioned to leverage these tools while maintaining stakeholder trust. As adoption increases and best practices emerge, NFPs that thoughtfully incorporate AI into their operations may discover new ways to amplify their impact and extend their resources – particularly valuable in an environment where operational pressures continue to mount. For NFP leaders, the path forward likely involves strategic implementation in areas of clear return, coupled with robust governance frameworks that address the sector’s unique concerns and responsibilities.
- Identify key areas for AI implementation: Determine where AI can add value without introducing significant risk – marketing and communications are natural starting points.
- Develop an AI governance policy: Create clear guidelines for when and how AI can be used, who has authority to approve outputs, and what verification processes must be followed.
- Create data privacy safeguards: Develop explicit policies about what data can and cannot be input into AI systems, particularly regarding donor or client information.
- Engage stakeholders: Involve board members, staff, and volunteers in the planning and implementation process. Their insights can help identify valuable use cases and ensure buy-in across the organisation.
- Invest in training and skill development: Provide training for staff to understand and effectively use AI tools. This includes both technical skills and strategic understanding of AI’s potential impact.
- Pilot and iterate: Start with a small, focused pilot project to test AI applications. Use the insights gained to refine and expand AI initiatives gradually.
- Monitor and evaluate: Continuously assess the performance and impact of AI projects. Use data-driven insights to make informed decisions and adjustments.