We Built an AI Data Security Policy. Here's What We Actually Decided (and Why).
A practical guide from a 15-person SaaS company that chose clarity over complexity.


Every SaaS company is using AI right now. Most of them don't have a policy for it.
We didn't either, until recently.
I'm Joel Varty, CTO at Agility CMS, and I've been at this company for over 20 years. In that time, I've watched us navigate cloud migration, headless architecture, composable DXP, and a dozen other industry shifts. AI is different, though - it touches everything: your code, your content, your customer data, your contracts. Well, it CAN touch all of those things - the trick is to know what it SHOULD be able to touch, and in what context. If you don't have a clear answer to "what are we allowed to put into these tools?", you're one careless paste away from a problem you can't undo.

So we built a policy that I think is appropriate for us. It's not a 40-page governance framework written by consultants and filled with legalese. It's a practical document that our teams can actually follow.
I'm going to share what we've learned so that it can maybe help you out, too.
Most Policy Templates Are Written for the Wrong Company
I started by looking at what was out there. There's no shortage of AI acceptable use policy templates. ISACA has one. SHRM has one. FRSecure, AIHR, FairNow, Lattice, Fisher Phillips. I looked at a lot of them.
They're fine. They're also mostly written for companies with 500+ employees, a dedicated InfoSec team, and a compliance department. They're as much (or more) about limiting corporate liability as they are about helping people make practical day-to-day decisions. That's not us.
We also looked at the formal frameworks: NIST AI Risk Management Framework and ISO 42001. Both are excellent references for understanding the landscape. NIST gives you the "what" and "why" of AI risk management. ISO 42001 gives you the "how" with a certifiable management system. But implementing either one wholesale for a 15-person company would be like buying a transport truck to move a couch.
So we used the templates as a starting point and the frameworks as a compass, then wrote something that actually reflects how we work.
The First Decision: Data Tiers
The most useful thing we did was classify our data into four tiers based on what's allowed to touch an AI tool.
Tier 1 (Public) is anything already out there. Blog posts, public docs, open-source code. Use whatever AI tool you want.
Tier 2 (Internal) is stuff like Slack messages, internal process docs, meeting notes, draft content. This can go into our approved Team-tier AI tools (Claude Team, ChatGPT Team, M365 Copilot), but only on company accounts. Never on free or personal plans.
Tier 3 (Sensitive) is the one that matters most. Customer data. Financial records. Legal contracts. Employee PII. Sales pipeline details. This tier has real restrictions, and the reasoning behind those restrictions is the most interesting part of the whole exercise.
Tier 4 (Prohibited) is credentials, API keys, tokens, database exports. These never go into any AI tool, period. No exceptions. No approvals.
The rule of thumb: if you're not sure whether something is Tier 2 or Tier 3, treat it as Tier 3.
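To show how compact this tier model really is, here's a sketch of it as data plus the "when unsure, escalate" rule. The tier and tool names mirror the policy above; the lookup functions are purely illustrative, not our actual tooling.

```python
# Sketch of the four-tier model as data. Tier names and approved tools
# come from the policy; the helper functions are illustrative only.

TIER_TOOLS = {
    1: {"any approved AI tool"},                         # Public
    2: {"Claude Team", "ChatGPT Team", "M365 Copilot"},  # Internal (company accounts only)
    3: {"M365 Copilot"},                                 # Sensitive
    4: set(),                                            # Prohibited: no AI tool, ever
}

def allowed_tools(tier: int) -> set:
    """Return the AI tools approved for a given data tier."""
    return TIER_TOOLS[tier]

def effective_tier(guess: int, certain: bool) -> int:
    """Rule of thumb: unsure whether it's Tier 2 or Tier 3? Treat it as Tier 3."""
    if not certain and guess in (2, 3):
        return 3
    return guess
```

The point of writing it this way is that the whole policy fits on a reference card: an empty set for Tier 4 makes "no exceptions" explicit, and the escalation rule is one line.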
The Tier 3 Problem: Your AI Plan Matters More Than You Think
We had SO MANY questions about this policy, not just from worried finance/legal folks, but from genuinely concerned employees who didn't want to make a mistake.
That warms my heart a little, not gonna lie.
In terms of what we're using, we have Claude Team and ChatGPT Team plans. Both are governed by commercial terms. Neither vendor trains on our data. That's good. But when we actually looked at what "commercial terms" means on a Team plan vs. an Enterprise plan, the gaps were significant.
Claude Team: No training on our data. 7-day retention on Anthropic's servers. But no Zero Data Retention option, no audit logs, no compliance API, and no way to prevent someone from switching to a personal account on the same machine.
ChatGPT Team: Similar story. No training. 30-day retention on OpenAI's servers. No ZDR, no audit trail.
So if someone on our team pastes a customer contract into Claude Team, that contract sits on Anthropic's servers for 7 days. We can't shorten that. We can't audit it. And if a customer asks "was our data processed by AI?", we can't prove what did or didn't happen.
For internal data, that tradeoff is fine. For customer PII and legal contracts? It's not good enough.
M365 Copilot Changed the Equation
Copilot is a completely different beast in terms of how it handles security. Too bad it just isn't really as good as Claude, but that's another story.
When you use Copilot, either standalone or in Word, Excel, or Outlook, your data never leaves your M365 tenant. It's not sent to a third-party server. It's processed where it already lives, under the same encryption, access controls, and DPA that already cover your email and documents.
Microsoft doesn't use prompts, responses, or Graph data to train models. Full stop. No toggle. No opt-out to accidentally miss.
Copilot interactions are logged and auditable through Microsoft Purview. If a customer asks what AI touched their data, we can actually answer that.
And because we already have external sharing disabled on most SharePoint portals and custom permissions locked down, the main risk (oversharing within the tenant) was already largely mitigated by our existing governance.
So we made a decision: M365 Copilot is approved for Tier 3 data. Claude and ChatGPT are not.
That's not a knock on Anthropic or OpenAI. Their Team plans are solid for internal work. It's an architectural difference. "Data stays in your environment" is a stronger assurance than "we promise to delete it after 7 days."
The Free Account Problem
This might be the single most important part of the policy.
Free and personal AI accounts are not approved for any Agility work. Period. Not even "quick" tasks.
Why? Because free-tier accounts on most AI platforms can use your inputs for model training. Since late 2025, Anthropic's consumer plans (Free, Pro, Max) default to training-on unless you opt out. If someone on our team has a personal Claude account and pastes a customer name into it while thinking "this is just a quick question," that data could end up in a training pipeline retained for up to 5 years.
Our Team plans explicitly exclude training. But there's no technical way on a Team plan to prevent someone from switching to their personal account. It's a policy control, not a technical one. So we made the rule clear, explained why it matters, and made it the first item on the quick reference card.
What About Claude Code?
Most of our engineering team uses Claude Code as their primary AI development tool. It's excellent. It runs locally in the terminal against your codebase using our Team API key, so it gets the same commercial data handling as Claude Team.
But it also has unique risks. Claude Code can read your entire project directory, execute shell commands, and modify files. If a developer happens to have customer data exports sitting in a subdirectory, Claude Code could read them.
Our rules: don't point Claude Code at directories containing customer data. Don't run it against customer production environments. Don't use unapproved coding assistants (Cursor, Cody, personal Copilot) for Agility work.
We're also looking at Claude Code hooks, which are deterministic shell scripts that run before tools execute. Unlike CLAUDE.md instructions (which are suggestions the model can override), hooks are enforced at the system level. There's a good open-source project called claude-guardrails that provides a starting configuration for this.
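To make the hooks idea concrete, here's a minimal sketch of a PreToolUse hook script. Claude Code passes the pending tool call to the hook as JSON on stdin, and an exit code of 2 blocks the call (with stderr fed back to the model). The directory names below are hypothetical placeholders, not our real layout - see the Claude Code hooks docs and claude-guardrails for the full event shapes.

```python
#!/usr/bin/env python3
# Minimal PreToolUse hook sketch: block file access under directories that
# might hold customer data. BLOCKED_DIRS is a hypothetical example.
import json
import sys

BLOCKED_DIRS = ("customer-exports", "prod-data")  # hypothetical paths

def should_block(tool_input: dict) -> bool:
    """Return True if the pending tool call targets a blocked directory."""
    path = tool_input.get("file_path", "")
    return any(d in path for d in BLOCKED_DIRS)

def main() -> int:
    event = json.load(sys.stdin)  # Claude Code sends the tool event as JSON
    if should_block(event.get("tool_input", {})):
        print("Blocked: path is under a customer-data directory", file=sys.stderr)
        return 2  # exit code 2 tells Claude Code to block the tool call
    return 0

# When installed as a hook, run with: sys.exit(main())
```

You'd wire this up in the hooks section of `.claude/settings.json` with a PreToolUse matcher on file tools like Read and Edit. The appeal is exactly what's noted above: unlike CLAUDE.md instructions, the script runs deterministically before the tool does, so the model can't talk its way past it.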
What We Didn't Do
We didn't buy an enterprise DLP platform. We didn't deploy a CASB. We didn't set up TLS inspection proxies. Those tools exist for a reason, but that reason is organizations with hundreds or thousands of employees. For a team our size, they'd add cost and complexity without proportionate benefit.
We also didn't pretend we have it all figured out. The policy has a quarterly review cycle. AI capabilities change fast. The Team plans we evaluated today might have audit logging next quarter. Claude Enterprise might make sense for us at some point. The policy is a living document, not a monument.
Some Takeaways
If you're running a small-to-mid SaaS company and you don't have an AI data security policy, you're not alone. Most don't. But the gap between "we use AI" and "we have a clear, documented position on how we use AI" is the gap that will matter when a customer, auditor, or prospect asks the question.
And they will ask the question.
Here's what I'd suggest:
- Start with data classification. Don't try to boil the ocean. Just answer: what types of data do we have, and which ones are we comfortable putting into a third-party AI tool? That single exercise will clarify 80% of your policy.
- Understand what your AI plan actually provides. "We have a paid plan" is not the same as "we have enterprise-grade data protection." Read the terms. Know the retention periods. Know whether there's an audit trail. Know the difference between your vendor's Team tier and Enterprise tier.
- Look at M365 Copilot differently. If you're already a Microsoft shop, Copilot's in-tenant architecture might be the fastest path to AI-assisted work on sensitive data without the third-party exposure risk.
- Ban free/personal accounts for work. This is the easiest, highest-impact rule you can set. The data handling difference between a free plan and a commercial plan is enormous, and one careless paste on a free account can undo everything else in your policy.
- Write it down. A policy that lives in someone's head isn't a policy. It doesn't have to be perfect. Version 0.1 that exists is better than version 3.0 that you'll get around to someday.
We'll be publishing our full internal AI data security policy as a downloadable resource - let us know if you're interested in that! If you want to use it as a starting point for your own, go for it.
Next up: we're building the customer-facing version. That one covers how customer data is handled in our products, including the Agility CMS MCP Server. That's a different post.

About the Author
Joel is CTO at Agility. His first job, though, is as a father to 2 amazing humans.
Joining Agility in 2005, he has over 20 years of experience in software development and product management. He embraced cloud technology as a groundbreaking concept over a decade ago, and he continues to help customers adopt new technology with hybrid frameworks and the Jamstack. He holds a degree from The University of Guelph in English and Computer Science. He's led Agility CMS to many awards and accolades during his tenure, such as being named Best Cloud CMS by CMS Critic, a leader on G2.com for Headless CMS, and a leader in Customer Experience on Gartner Peer Insights.
As CTO, Joel oversees the Product team, as well as working closely with the Growth and Customer Success teams. When he's not kicking butt with Agility, Joel coaches high-school football and directs musical theatre.