Where Does Your Data Actually Go? A Grant Professional's Guide to AI Privacy
The question nobody wants to ask out loud
You pasted your draft proposal into ChatGPT last Thursday. The one with your executive director's salary, your program participant demographics, and three paragraphs describing vulnerable populations your organization serves. It generated a solid revision. You used it. You moved on.
And then, somewhere around Friday afternoon, the thought crept in: where did that data actually go?
You're not paranoid for wondering. You're responsible. Grant proposals contain some of the most sensitive information any organization produces — financial data, personally identifiable information, program details about at-risk communities, donor relationships, strategic plans. When you paste that into an AI tool, you're trusting that tool with material that, in any other context, you'd lock in a filing cabinet.
So let's answer the question. Not with vague reassurances, not with scare tactics. With facts.
What actually happens when you hit "send"
When you type a prompt into an AI tool — any AI tool — your text travels from your browser to the provider's servers. At that point, several things can happen depending on the provider, the plan you're on, and the settings you've configured.
Processing: Your input gets processed by the model to generate a response. This is the basic transaction. You send text, the model reads it, the model sends text back.
Storage: Most providers store your conversations for some period of time. Sometimes that's 30 days. Sometimes it's indefinitely. Sometimes it depends on your plan tier. The storage serves different purposes — abuse monitoring, debugging, product improvement, or simply letting you revisit past conversations.
Training: This is the one that keeps grant professionals up at night. Some providers use your inputs to train future versions of their models. That means the words you typed — your program descriptions, your budget numbers, your needs assessment — could become part of the dataset that teaches the next version of the AI. Your data doesn't get regurgitated verbatim (usually), but it becomes part of the model's learned patterns.
The critical distinction: not all providers do all three of these things, and the difference between plan tiers can be enormous.
The free tier problem
Let me be specific. As of this writing, the free tier of ChatGPT uses your conversations for model training by default. You can opt out in settings, but until you do, you're opted in. The free tier of Google's Gemini uses your data for training. Most free AI tools follow this pattern — they offer the product at no cost because your usage data has value.
Paid enterprise plans typically don't train on your data. OpenAI's Team and Enterprise plans, Anthropic's business offerings, and most professional-tier AI products explicitly exclude customer data from training. But "typically" isn't "always," and "by default" isn't "guaranteed." You need to read the actual terms.
This matters enormously for grant professionals. If you're on a free plan and you paste in a proposal that includes HIPAA-protected health information about program participants, you may have just created a compliance violation that has nothing to do with the quality of the AI's output. The output could be perfect. The data handling could still be a problem.
The compliance alphabet: HIPAA, FERPA, and PII
Grant work touches regulated data more often than most people realize. If your organization serves healthcare populations, you're dealing with HIPAA. If you work with educational institutions, FERPA applies. If you serve any human beings at all, you're handling PII — personally identifiable information.
Here's what each of these means for your AI tool choices:
HIPAA (health data): If your proposal includes protected health information — patient demographics, diagnoses, treatment outcomes, anything that could identify an individual's health status — your AI provider needs a Business Associate Agreement (BAA). A BAA is a legal contract that obligates the provider to handle health data according to HIPAA standards. No BAA, no legal basis for sharing that data with the tool. Period.
FERPA (education data): If your grant involves student records, educational assessments, or data from educational institutions, FERPA restricts how that data can be shared with third parties. Most AI providers are not FERPA-compliant by default. Check before you paste.
PII (personally identifiable information): Names, addresses, Social Security numbers, dates of birth, demographic data that could identify an individual. This is the broadest category and the one most commonly violated in AI usage. When you paste a case study that includes "Maria, age 34, single mother of three from the Eastside neighborhood" into ChatGPT, you've shared PII. Even if the name is changed, the combination of details might be enough to identify a real person.
The practical rule: before you paste anything into an AI tool, ask yourself — does this contain information about a real, identifiable person? If yes, either remove that information or make sure your tool is compliant with the relevant regulation.
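If someone on your team is comfortable with a little scripting, even a rough pre-paste check can make that rule concrete. The sketch below is a minimal, illustrative Python example, not a compliance tool: the patterns and the flag_pii helper are assumptions for demonstration, and pattern matching will never catch everything (a first name plus a neighborhood, for instance) that a careful human read would.

```python
import re

# Illustrative patterns only: regular expressions catch formatted identifiers
# (SSNs, phone numbers, emails, dates of birth) but will miss contextual PII
# such as a name combined with a neighborhood or a rare diagnosis.
PII_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/(?:19|20)\d{2}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return a warning for each PII pattern found in the text."""
    warnings = []
    for label, pattern in PII_PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            warnings.append(f"Found {len(hits)} possible {label}(s) -- review before pasting")
    return warnings

if __name__ == "__main__":
    draft = "Maria, DOB 3/14/1991, reached at 555-867-5309, maria@example.org."
    for warning in flag_pii(draft):
        print("WARNING:", warning)
```

A script like this belongs on your machine, before anything is sent anywhere. Treat it as a seatbelt, not a substitute for the two-second human check.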
What you should never put into an AI tool (unless you've verified compliance)
This list isn't exhaustive, but it covers the most common mistakes I see grant professionals make:
- Program participant names, demographics, or case narratives with identifying details
- Donor Social Security numbers or financial account information
- Employee salary details tied to named individuals
- Student records or educational assessment data
- Medical or health information about specific individuals
- Internal board communications about personnel matters
- Legal documents related to active disputes
Notice I said "unless you've verified compliance." This isn't a blanket ban on using AI with sensitive data. It's a requirement to verify that your specific tool, on your specific plan, with your specific configuration, handles that data appropriately. Some tools can handle sensitive data safely. Most free ones can't.
The vendor evaluation checklist
When you're evaluating any AI tool for grant work — whether it's a general-purpose chatbot, a grant-specific platform, or an internal tool your IT team is building — here are the five questions you need answered before you trust it with organizational data.
Five Questions to Ask Every AI Vendor
- Do you hold SOC 2 Type 2 certification (or equivalent)? SOC 2 Type 2 means an independent auditor has verified that the company's security controls actually work over time — not just that they exist on paper. Type 1 means they checked a snapshot. Type 2 means they monitored for months. If a vendor can't produce a SOC 2 report, ask why. FedRAMP is the federal government's counterpart for cloud services and matters if you're doing federal grant work.
- Is my data used for model training? You want a clear, unambiguous "no." Not "not by default." Not "you can opt out." A flat no. If the answer is anything other than "we never use customer data for training," dig deeper. Ask for the specific section of their terms of service. Read it yourself.
- What is your data retention policy? How long do they keep your inputs and outputs? Can you delete your data? Is deletion permanent and verifiable? Some providers retain data for 30 days for abuse monitoring even on enterprise plans. That might be acceptable. Indefinite retention probably isn't. Know the number.
- Can you provide a Business Associate Agreement (BAA) for HIPAA compliance? If your organization handles health data in any capacity — even tangentially through community health programs — you need this. A vendor that can't sign a BAA cannot legally process your HIPAA-covered data. End of conversation.
- Where is my data stored and processed? Which country? Which cloud provider? Is data encrypted at rest and in transit? For organizations working with federal grants, data sovereignty matters. Some funders require that data stay within the United States. Some international programs have their own jurisdiction requirements.
Print this list. Use it. Every vendor you talk to should be able to answer all five questions clearly and without hedging. If they get vague on any of them, that tells you something.
How data architecture changes the equation
There's a fundamental difference between AI tools that send your data out to a model and AI tools that bring the model to your data. The architecture matters.
With a general-purpose chatbot, you copy sensitive information from your files and paste it into a third-party interface. That data now lives on their servers, subject to their policies. You've exported your organizational knowledge into someone else's infrastructure.
With a purpose-built platform, the relationship is different. Your organizational data lives in your workspace. The AI accesses it within that controlled environment. Nothing gets exported. Nothing gets copied to a training pipeline. Your data stays in the same place it started — your workspace — and the AI comes to it rather than the other way around.
Organization Profile
Grantable's Organization Profile is designed as a secure container for exactly the kind of sensitive data grant professionals work with — mission statements, financials, program details, outcomes data. It lives in your workspace, protected by the same SOC 2 Type 2 infrastructure that governs the rest of the platform. When the AI draws on your organizational data, it's pulling from your workspace, not from an external paste. The data doesn't travel. The intelligence does.
This isn't a minor architectural detail. It's the difference between a workflow that creates compliance risk every time you use it and one that eliminates compliance risk by design.
SOC 2 Type 2: what it actually means
You've seen "SOC 2" on vendor websites. Most people nod and move on. Let me explain why it matters.
SOC 2 is a framework developed by the American Institute of CPAs that evaluates how a company manages data. There are two types. Type 1 is a point-in-time assessment — an auditor looks at your security controls on a single day and says "these exist." Type 2 is an ongoing evaluation — an auditor monitors your controls over a period of months and verifies they actually work consistently.
The difference is enormous. Type 1 is a photo. Type 2 is a documentary. You want the documentary.
Grantable holds SOC 2 Type 2 certification. That means our data handling, access controls, encryption, and infrastructure security have been independently verified over time by a third-party auditor. It's not a claim we make on a landing page. It's a fact verified by someone whose job is to find problems.
When you're evaluating vendors, SOC 2 Type 2 should be a baseline requirement, not a bonus feature. If a vendor serving the nonprofit sector can't demonstrate this level of security, you should ask hard questions about why.
The practical daily workflow
All of this policy and compliance talk means nothing if it doesn't change what you do on a Tuesday morning. So here's what a privacy-conscious AI workflow actually looks like in practice:
Before you paste anything: Spend two seconds asking — does this contain information about a real, identifiable person? Does it contain financials tied to specific individuals? Does it contain health or education data? If yes to any of those, either strip that information out or confirm your tool is compliant.
Use your organization's approved tool: Not your personal ChatGPT account. Not the free version of whatever your colleague recommended. The tool your organization has vetted, with the plan your organization is paying for, with the settings your organization has configured.
Keep sensitive data in controlled environments: If your platform has a centralized organizational profile or document library, use it. That's where sensitive data belongs — in a controlled workspace with proper security, not in a chat window you'll forget about in a week.
When in doubt, anonymize: If you need AI help with a case study or participant narrative, change the names, generalize the demographics, remove geographic specifics. You can add the real details back after the AI has helped you with the structure.
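If you want to make that swap-and-restore habit repeatable, a small script can do the bookkeeping for you. What follows is a minimal sketch, assuming you keep your own per-document list of sensitive terms; the names, the SENSITIVE_TERMS mapping, and the anonymize and restore helpers are hypothetical examples, not a feature of any particular tool.

```python
# A minimal sketch of reversible anonymization, assuming you maintain your own
# per-document list of sensitive terms. The names and terms below are
# hypothetical examples, not real participant data.
SENSITIVE_TERMS = {
    "Maria Alvarez": "[PARTICIPANT_1]",
    "Eastside neighborhood": "[NEIGHBORHOOD_1]",
    "Hope Community Clinic": "[PROGRAM_SITE_1]",
}

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive terms for placeholders and return a map to undo it later."""
    restore_map = {}
    for term, placeholder in SENSITIVE_TERMS.items():
        if term in text:
            text = text.replace(term, placeholder)
            restore_map[placeholder] = term
    return text, restore_map

def restore(text: str, restore_map: dict[str, str]) -> str:
    """Reinsert the real details into the AI-edited draft once it's back with you."""
    for placeholder, term in restore_map.items():
        text = text.replace(placeholder, term)
    return text

narrative = "Maria Alvarez, a client of Hope Community Clinic in the Eastside neighborhood, ..."
safe_text, mapping = anonymize(narrative)
# safe_text is what goes to the AI tool; run restore() on the revised draft
# once the structural help is done and the text is back on your machine.
```

The point of the placeholder approach is that the real details never leave your environment: the AI works on the structure, and the sensitive specifics come back in only after the draft returns to you.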
Your Monday morning move
Here's what I want you to do this week:
Step one: Check what plan you're on for every AI tool you use. Free? Paid? Enterprise? Look up the data training policy for each one. If any tool is training on your inputs, either upgrade, opt out, or stop using it for work.
Step two: Print the five-question vendor checklist from this article. The next time someone on your team suggests a new AI tool, run it through the checklist before anyone signs up.
Step three: Do a quick mental audit of the last five things you pasted into an AI tool. Did any of them contain PII? Protected health information? Donor data? If yes, that's not a crisis — it's a data point that tells you where your workflow needs a guardrail.
Step four: Talk to your team. Not a lecture. A five-minute conversation: "Hey, when we use AI tools, let's make sure we're not pasting in anything that identifies specific people unless we've confirmed the tool is compliant." That one sentence eliminates the majority of the risk.
Privacy in AI isn't about fear. It's about knowing what you're working with, making informed choices, and building habits that protect the people your organization serves. The data in your grant proposals represents real communities, real individuals, real trust. Handle it accordingly.