Where AI Belongs in Fundraising — And Where It Absolutely Doesn't
The email that made me cringe
A friend of mine runs development for a mid-size housing nonprofit. Last year she told me about a meeting with a program officer she'd been cultivating for almost two years. They'd had coffee. They'd exchanged updates about each other's kids. The program officer had personally flagged an upcoming RFP because she thought my friend's org was a perfect fit.
Then, right before the meeting, my friend's new development associate sent the program officer a "personalized" follow-up email. AI-generated. You could feel it. The tone was off. The language was too polished, too generic, too obviously not written by a human being who knew this person. It opened with "I hope this message finds you well" and closed with "We deeply value our partnership and look forward to continued collaboration."
The program officer didn't say anything about it. She didn't have to. The warmth in the next meeting was just — a little less. The relationship didn't end. But something shifted. A machine had been inserted into a human connection, and everybody felt it.
That story has stayed with me because it captures the whole tension. AI is genuinely transformative for fundraising work. It can do things in minutes that used to take days. But there are places where it has no business being, and the penalty for getting that wrong isn't a bad draft. It's a broken relationship.
The line is simpler than you think
People overcomplicate this. They write twenty-page AI policies trying to enumerate every scenario. They debate edge cases in committee meetings. They get paralyzed.
Here's the line, and it's clean: AI prepares; humans communicate.
That's it. That's the whole framework. AI is your prep team, your research assistant, your first-draft engine. It sits behind the curtain. The moment it steps in front of the curtain — the moment it's the thing communicating with another human being on your behalf — you've crossed the line.
A grant narrative drafted by AI is fine. A relationship email sent by AI is not. A prospect research brief generated by AI is fine. An AI chatbot responding to a program officer's question is not. A budget justification structured by AI is fine. A thank-you note written by AI is — well, you already know.
Where AI absolutely belongs
Let's start with the good news. There's an enormous amount of fundraising work where AI doesn't just belong — it's borderline negligent not to use it. This is the mechanical work. The research. The synthesis. The reformatting. The first drafts that you're going to rewrite anyway.
Research and prospect qualification
This is maybe the single highest-value use of AI in fundraising right now. The old way: you spend hours on a foundation's website, read their 990s, skim their recent grants, try to piece together whether they'd fund your work. The new way: AI synthesizes all of that in minutes and gives you a fit assessment you can actually act on.
Assess Fit
Grantable's Assess Fit feature scores how well a grant opportunity aligns with your organization — analyzing funder priorities, past giving patterns, eligibility requirements, and program alignment. You get a clear, structured readout in minutes instead of a half-day research project. But here's the key: AI scores the alignment. You make the relationship decision. The score tells you whether to look closer. Your judgment tells you whether to pick up the phone.
This is AI at its best: crunching data, surfacing patterns, compressing hours of manual work into minutes. No relationship is at stake. No human connection is being mediated. It's pure preparation.
Drafting and structuring proposals
Grant proposals have a lot of mechanical structure. Needs statements follow patterns. Logic models have standard formats. Budget justifications require specific language. AI is exceptional at generating solid first drafts of this material — drafts that your team then rewrites, sharpens, and makes human.
AI Helper
Grantable's AI Helper follows a structured plan-review-execute workflow. It doesn't just spit out text — it builds a plan for the section, lets you review and adjust the approach, then executes the draft based on your direction. You stay in control of the strategy. The AI handles the production. Every draft is grounded in your organization's profile, style guide, and past writing — so it starts from your voice, not a blank slate.
The plan-review-execute cycle matters because it keeps the human in the decision seat. AI drafts, humans decide. The AI proposes a structure. You approve, modify, or reject it. The AI generates text. You rewrite what needs rewriting. At no point does the AI make a judgment call about what your organization should say to a funder. That's your job.
Data synthesis and reporting
Pulling outcomes data into a narrative. Reformatting a federal report into a foundation report. Summarizing a year's worth of program results into a two-page brief for the board. All of this is mechanical translation work where AI saves enormous time without touching a single relationship.
Internal knowledge management
How many times has your team rewritten the same program description? How often does someone dig through old proposals to find the paragraph about your evaluation methodology? AI is brilliant at organizing, retrieving, and repurposing your own content. This is librarian work, and AI is a very good librarian.
Where AI absolutely does not belong
Now the other side. These aren't gray areas. These are bright red lines, and crossing them costs more than time.
Donor and funder communications
Every email, every call, every handwritten note to a program officer, a major donor, or a foundation contact should come from a human being who actually knows the recipient. Not "knows about" them — knows them. Knows their priorities, their communication style, their quirks, the conversation you had at that conference last March.
AI cannot replicate relational knowledge. It can simulate warmth. It can mimic personalization. But simulation and mimicry are exactly what make people uncomfortable. Program officers read hundreds of emails. They can feel when something is off. And once they feel it, the trust recalibration happens silently and permanently.
Stewardship and relationship management
Stewardship is the art of making someone feel seen. Remembered. Valued — not as a funding source, but as a human partner in your work. This is where grant professionals earn their keep, and it requires the kind of judgment, emotional intelligence, and relational memory that AI fundamentally cannot provide.
Should you send a check-in email or wait until you have program results to share? Should you invite this program officer to your site visit or would that feel presumptuous? Is this the right moment to mention the capital campaign, or should you wait another quarter?
These are judgment calls that depend on years of relational context. AI has no access to that context, and pretending it does is how you get the "I hope this message finds you well" disaster.
Compliance decisions
Federal grants come with compliance requirements that have real legal consequences. Allowable costs, OMB circulars, single audit thresholds, lobbying restrictions — this is territory where a wrong answer isn't just embarrassing, it's potentially career-ending and organization-threatening.
AI can help you organize compliance information. It can surface relevant regulations. It can flag potential issues for your team to review. But the actual compliance decision — "yes, this cost is allowable under this grant" — has to come from a human being with the expertise and the authority to make that call. AI hallucinations in a compliance context aren't a quality problem. They're a legal liability.
The AI Placement Test
- Is a relationship at stake? If the output goes directly to a funder, donor, or partner — a human writes it.
- Is a judgment call required? If the decision has legal, financial, or strategic consequences — a human makes it.
- Is it preparation or performance? If it's preparation (research, drafting, organizing) — AI is perfect. If it's performance (the actual communication, the actual decision) — humans only.
- Would the recipient care? If the person receiving this output would feel differently knowing AI wrote it — a human writes it.
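For teams that want to bake this test into a triage checklist or intake form, the four questions collapse into a single rule: if any answer is yes, a human does the work. A minimal sketch — the function name and labels are illustrative, not part of any tool mentioned here:

```python
def ai_placement(relationship_at_stake: bool,
                 judgment_call: bool,
                 is_performance: bool,
                 recipient_would_care: bool) -> str:
    """Return 'human' if any red-line question is answered yes,
    otherwise 'ai-ok' (the task is preparation, not performance)."""
    if any([relationship_at_stake, judgment_call,
            is_performance, recipient_would_care]):
        return "human"
    return "ai-ok"

# A prospect research brief: no relationship at stake, pure preparation.
print(ai_placement(False, False, False, False))  # ai-ok

# A thank-you note to a donor: relationship at stake, performance,
# and the recipient would care.
print(ai_placement(True, False, True, True))     # human
```

The point of the `any()` is that the questions aren't weighted or averaged — one yes is enough to keep the task human-only.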
The gray areas (and how to navigate them)
Real life isn't always clean. A few scenarios that come up constantly:
"Can I use AI to draft an LOI that goes to a funder?" Yes — because an LOI is a document, not a relationship. But the cover email that accompanies it? That's you. Personally. Every time.
"Can I use AI to prepare talking points for a funder meeting?" Absolutely. This is AI behind you, helping you prepare. Just don't read the talking points verbatim like a teleprompter. Use them as a springboard for genuine conversation.
"Can I use AI to write a grant report?" Yes, with a caveat. The data and narrative should be AI-assisted, but any section that characterizes your relationship with the funder or makes promises about future engagement should be human-written. Reports are partly mechanical and partly relational. Treat each section accordingly.
"Can I use AI to write a thank-you note for a $50 donation?" I'd still say no. A short, genuine note takes ninety seconds. A donor who gave you fifty dollars deserves ninety seconds of your actual attention. If you're sending so many thank-you notes that you can't write them all, batch-personalize the template yourself. Don't outsource gratitude.
Why the line keeps moving (and why the principle doesn't)
AI tools will get better. They'll get more personalized. They'll get better at mimicking warmth and relational awareness. Some people will argue that once AI is "good enough" at relationship communication, the line disappears.
I don't think so. The line isn't about capability. It's about integrity. The reason you don't send an AI-written thank-you note isn't that AI writes bad thank-you notes. It's that the note represents your gratitude, and outsourcing it to a machine is a statement about how much you value the relationship — whether you intend it that way or not.
As AI gets better, the preparation side gets dramatically more powerful. Your research gets deeper. Your drafts get better. Your internal operations get faster. The behind-the-curtain work becomes extraordinary. But the curtain itself? That stays exactly where it is.
The practical upshot
If you're leading a fundraising team right now, here's what I'd do this week:
- Audit your AI touchpoints. Map every place your team currently uses AI. For each one, ask: is this preparation or performance? Is a relationship at stake?
- Draw the line explicitly. Write a one-paragraph policy: "We use AI for research, drafting, and internal operations. We do not use AI for direct communications with funders, donors, or partners. All AI-generated content is reviewed and approved by a human before it leaves the building."
- Invest in the preparation side. The organizations getting the most value from AI are the ones using it for deep research and structured drafting — the behind-the-curtain work. That's where the leverage is. That's where the time savings are. That's where AI earns its keep.
- Protect the relationship side. Make it explicit that relationship communications are human-only. Not because AI can't write them. Because your donors and funders deserve the real thing.
The nonprofits that get this balance right will be unstoppable. They'll move faster on proposals, go deeper on research, and submit more applications — while maintaining the authentic human relationships that actually win grants. The ones that get it wrong will save a few hours on emails and lose funders they'll never get back.
AI is the most powerful preparation tool the nonprofit sector has ever had. Use it like one. And when it's time to connect with another human being, put the tool down and show up yourself.