Getting Started with AI in Grants · Article 5 of 5 · 8 min read

Who Should Own AI Adoption at a 5-Person Nonprofit?

The meeting no one called

There's a conversation happening at your nonprofit right now. It's happening in the hallway after the staff meeting, in the group chat after hours, in the half-joking comment someone makes when they see a LinkedIn post about AI. The conversation is: "So... who's figuring out this AI thing for us?"

Nobody raises their hand. Not because nobody's interested — half your team has already tried ChatGPT on their personal laptops — but because nobody knows whose job this is. Your ED is busy keeping the lights on. Your grant writer has three proposals due this month. Your program director is drowning in reporting. Your admin person is covering for someone on leave. And your part-time development associate just started six weeks ago.

Five people. Zero bandwidth. And a nagging feeling that every other nonprofit in your space is figuring this out while you're still arguing about whether to renew your Canva subscription.

Here's the truth nobody wants to say out loud: at a five-person shop, there is no Chief Innovation Officer. There is no IT department. There is no digital transformation committee. There is just the person who cares enough to start.

Why "waiting for the right person" is a trap

I've talked to dozens of small nonprofit teams about AI adoption. The ones that are stuck almost always say some version of the same thing: "We're waiting until we have time to do this properly."

That time never comes. There's always another deadline, another report, another crisis. And while you're waiting, the gap between your organization and the ones that started experimenting six months ago gets wider. Not because AI is magic — it's not — but because the organizations that experiment learn faster. They build institutional muscle. They develop judgment about what works and what doesn't. And that judgment compounds.

The organizations that wait don't just start late. They start from a worse position, because now they're trying to learn in an environment where everyone else has already developed expectations about what AI can do. Their funders have seen AI-assisted proposals. Their peer organizations have published AI-augmented reports. The bar has moved, and they're still trying to find the bar.

Waiting for the "right" person, the "right" moment, or the "right" level of organizational readiness is not caution. It's a decision to fall behind dressed up as prudence.

The person who should own it is the person reading this

If you're reading an article titled "Who Should Own AI Adoption at a 5-Person Nonprofit?" — congratulations, it's you. Not because of your title or your technical skills or your budget authority. Because you're the one who noticed the gap and went looking for answers. That's the only qualification that matters at this stage.

At a small organization, AI adoption doesn't need an owner with a mandate. It needs a champion with curiosity. Someone willing to spend an hour exploring a tool, try it on a real piece of work, and then tell the rest of the team what happened. That's it. That's the whole job description for phase one.

This isn't about becoming the AI expert. It's about being the person who breaks the seal. Once one person at a five-person org tries something and shares the result — "Hey, I pasted our last grant into Claude and asked it to critique the needs statement, and here's what it said" — the whole dynamic shifts. Suddenly AI isn't an abstract threat or an abstract promise. It's a thing Sarah tried on Tuesday and it was pretty useful.

Permission is the bottleneck at most small nonprofits, and going first is usually how permission gets granted: once one person experiments in the open, everyone else has cover to follow.

What "starting" actually looks like this week

I want to get very specific here, because vague advice is the enemy of action at resource-strapped organizations. "Explore AI" is not a task. "Develop a digital strategy" is not a task. Here is a task:

This week, take your last submitted grant proposal — the whole thing, narrative and all — and paste it into an AI tool. Ask it: "What are the three weakest sections of this proposal, and how would you strengthen them?"

That's it. One action. Thirty minutes. No committee, no policy document, no board approval required. You're not publishing anything. You're not submitting anything. You're getting a second opinion on work you already completed.

Here's what will happen: the AI will give you feedback that's partly obvious, partly surprising, and partly wrong. And each of those categories is valuable.

The obvious feedback confirms your instincts. You already knew the logic model section was weak — now you have language for why. The surprising feedback reveals blind spots. Maybe the AI noticed that your outcomes section makes promises your methods section can't support. Maybe it flagged that you use passive voice every time you describe your organization's role, which makes you sound less confident than you are. And the wrong feedback teaches you something critical: AI doesn't know everything, and your judgment still matters. That last part is the most important lesson for any nonprofit professional starting with AI.

From solo experiment to team practice

Once you've done the grant critique exercise, you have something invaluable: a story to tell. Not a pitch, not a proposal — a story. "I tried this thing, here's what happened, here's what I learned." That story is your leverage.

Share it at your next staff meeting. Not as a formal presentation — just as a five-minute aside. Show the original proposal section and the AI's feedback side by side. Let people react. Some will be impressed. Some will be skeptical. Both reactions are productive.

The skeptic who says "That feedback is obvious, I already knew that" just told you they're confident in their craft. Great. Ask them: "What if the AI could handle the obvious stuff so you could focus on the hard stuff?" The person who says "Whoa, I never noticed that about our outcomes section" just had their first useful AI experience through your screen. Now they want to try it themselves.

At a five-person nonprofit, you don't need a rollout plan. You need one good story and the willingness to tell it. Adoption at small organizations spreads through demonstration, not documentation.

The gatekeeper problem (and how to kill it)

Here's where small nonprofits hit a wall that larger organizations don't. At a 50-person org, if the development team wants to try an AI tool, they can usually get budget approval and run a pilot within their department. At a five-person org, every dollar comes from the same pot, and every tool decision feels organization-wide.

This creates an accidental gatekeeper dynamic. Someone — usually the ED or the finance lead — ends up being the person who says yes or no to every new tool. And because they're overwhelmed, they default to no. Not because they're against AI, but because evaluating a new tool is one more thing on a list that's already too long.

The way to break this dynamic is to remove the cost question from the first conversation. Don't start with "Can we buy this tool?" Start with "Here's what I learned from a free experiment, and here's what I think we could do if we had the right platform." Lead with value, not with a purchase request.

And when you do get to the tool conversation, per-seat pricing is the enemy of small-team adoption. If a tool costs $30 per user per month and you have five people, that's $150/month before anyone's figured out whether it's useful. That's a real line item for an organization running on a $400K budget. The math forces someone to decide who gets access and who doesn't, which recreates the gatekeeper problem inside the tool itself.

Collaborative Editing

Grantable doesn't charge per seat, which means there's no gatekeeper deciding who gets to experiment and who doesn't. Your grant writer, your ED, your program director, your development associate — they can all be in the same workspace, trying things out, learning from each other's experiments. When you remove the budget question from adoption, you remove the single biggest friction point for small teams. The person who cares most can bring everyone else along without needing budget approval for each seat.

Making the invisible visible

One of the biggest challenges for the AI champion at a small nonprofit is proving value to leadership. You know AI is saving you time. You can feel it. But feelings don't show up in board reports.

This is where most grassroots adoption efforts stall. The person who started experimenting gets busy, stops evangelizing, and the rest of the team drifts back to old habits. Three months later, someone says "Whatever happened with that AI thing?" and nobody has an answer.

The fix is visibility. You need a way to show — not just tell — what AI is contributing to the organization's work. How many proposals are in the pipeline? How has throughput changed? Where is AI being used and where isn't it? This isn't about surveillance. It's about making the case for continued investment in something that's easy to dismiss as a fad.

Reporting

Grantable's pipeline reporting gives your leadership a clear picture of grant activity — what's in progress, what's submitted, what's in the queue. When your ED can see that proposal throughput increased by 40% after the team started using AI-assisted drafting, the conversation about whether AI is "worth it" answers itself. Visibility turns an individual experiment into an organizational capability that leadership can understand and support.

The moral case for starting now

We need the most moral people in society to be competent with these tools. Right now, every hedge fund, every ad agency, every lobbying firm is getting fluent with AI. If the people running homeless shelters and food banks and after-school programs sit this out, we don't preserve some kind of ethical purity. We just make sure that the most powerful technology of our generation is shaped entirely by people optimizing for profit. The nonprofit sector doesn't get to opt out and still expect to have influence.

This isn't a technology argument. It's a capacity argument. Every hour your grant writer spends reformatting the same program description for the fifteenth funder is an hour they're not spending on the narrative that actually wins the grant. Every afternoon your ED spends pulling data for a board report is an afternoon they're not spending on strategy. AI doesn't replace the human work. It clears the path so the human work can happen.

At a five-person nonprofit, you don't have the luxury of wasted capacity. You can't afford to have your best people doing mechanical work when a tool could handle it. Not because the mechanical work doesn't matter — it does — but because your best people are the only people you have, and their judgment and creativity are the things no tool can replace.

When your ED pushes back

You're going to hit resistance. At a small org, that resistance usually comes from the executive director, and it usually sounds like one of these:

"We don't have time to learn a new tool." Agree with them. Then say: "That's why I'm not asking for a training initiative. I'm asking for permission to try one thing this week and share what I learn. If it's useful, we keep going. If it's not, we stop." You're asking for thirty minutes of organizational attention, not a strategic commitment.

"We can't afford another subscription." Start with the free experiment — the grant critique exercise. Build the case with results, not projections. And when you do propose a tool, make sure it's one where the whole team can participate without per-seat multiplication.

"I don't trust AI with our data." Fair concern. Address it directly: "Neither do I, which is why I'm recommending a platform with SOC 2 compliance and clear data policies, not a free chatbot." Show them you've done the homework.

"Our funders won't like it." The honest answer: most funders care about the quality of the proposal, the clarity of the logic model, and the credibility of the organization. They're not running AI detection software on your LOI. And many of them are using AI themselves.

The Monday morning version

Here's your one thing to do this week. I mean it — one thing. Not five things. Not a plan. One concrete action.

Take your last submitted grant proposal. Open an AI tool — any AI tool. Paste the narrative section in and type: "You are an experienced grant reviewer. What are the three weakest parts of this proposal, and what specific changes would make them stronger?"

Read the response. Disagree with some of it. Learn from some of it. Notice how it makes you think about your own writing differently.

Then tell one person on your team what happened.

That's how AI adoption starts at a five-person nonprofit. Not with a strategy document or a board resolution or a digital transformation roadmap. With one person who cared enough to try, and one conversation about what they found.

You don't need permission to be that person. You just need thirty minutes and the willingness to go first.