Most small business owners can tell you exactly which AI tools they use. ChatGPT for drafting emails. An AI scheduling assistant. Maybe a transcription tool for meetings or a chatbot on the website.
Ask those same owners about their AI policy — what employees can put into these tools, what needs human review before it goes out, who’s responsible when something goes wrong — and the conversation gets quieter.
“We haven’t gotten to that yet.” “Everyone kind of knows the deal.” “It’s just a tool, we don’t have a policy for using Google either.”
Here’s the thing: according to a 2025 QuickBooks survey of more than 2,200 small businesses, 68% now use AI regularly — up from 48% just a year earlier. That’s a massive adoption curve. But an EisnerAmper survey from mid-2025 found that only 36% of workers say their employer has any formal AI policy. The populations are different — one measures small businesses, the other desk workers broadly — but the pattern is consistent: adoption is outpacing governance across the board.
That gap — between usage and governance — is the AI equivalent of running your LLC on defaults. The absence of rules doesn’t mean the absence of consequences. It means nobody’s making the decisions deliberately.
Writing an AI policy is a form of cognitive offload — getting the rules out of your head so your team can operate without asking you every time. And just like governance debt in your legal structure, AI governance debt compounds every month you leave it unaddressed.
What “No Policy” Actually Looks Like
When there’s no AI policy, your team doesn’t stop using AI. They just make their own rules. Here’s what that commonly looks like — based on patterns widely reported across small businesses:
Data Goes Places You Don’t Expect
An employee pastes a client’s financial summary into ChatGPT to help write a proposal. Another uploads a spreadsheet of customer contacts to an AI tool that generates email campaigns. A third drops proprietary pricing into an AI assistant to build a competitive analysis.
None of this is malicious. It’s people being resourceful with the tools available to them. But without boundaries, sensitive data — client information, financials, trade secrets, employee records — ends up in systems you don’t control, subject to data retention and training policies you’ve never read.
Quality Becomes Inconsistent
Some people on your team treat AI output like a first draft and review everything carefully. Others copy-paste directly into client deliverables. Without a standard, you have no idea which is happening — until a client catches a hallucinated statistic, or a proposal names a competitor’s product because the AI lost track of whose business it was describing.
The problem isn’t that AI makes mistakes. The problem is that without a quality standard, there’s no consistent expectation for catching them.
You Build on Tools You Don’t Track
One person builds their entire client follow-up workflow around an AI tool. Another uses a different tool for the same purpose. A third is paying for a subscription out of pocket because they didn’t want to ask. Nobody has visibility into what the business actually depends on, what it costs, or what happens if a vendor changes their terms or pricing.
This is shadow IT, and it’s not new — but AI tools make it considerably easier to build meaningful workflows on unsanctioned platforms.
The Voice Drifts
AI-generated emails go out with one tone. AI-assisted proposals read differently from hand-written ones. Social media posts sound like a different company depending on who prompted them and which tool they used. Without guidelines on voice, disclosure, and review, your brand starts to sound like it has a split personality.
Nobody Knows What Anyone Else Is Doing
Perhaps the most fundamental problem: without a policy, you have no visibility. You don’t know which tools are being used, what data is going into them, what’s coming out, or how much of your operation now depends on them. You’re flying blind in a space that’s changing every few months.
The point isn’t that any one of these is a catastrophe. It’s that without a policy, you have no way of knowing which ones are happening right now.
AI Governance Debt
We’ve written before about governance debt — what accumulates when your legal structure is running on defaults instead of deliberate decisions. AI governance debt works the same way.
Every month you operate with AI tools and no written policy, more workflows get built on unexamined assumptions. More data crosses boundaries nobody defined. More quality standards diverge. More institutional knowledge gets embedded in tools and prompts that only one person understands.
Nothing seems wrong — until a triggering event exposes the gaps:
- A client asks “was this written by AI?” and nobody knows the answer
- An employee leaves and their AI-powered workflows vanish with them
- AI-generated content goes to a client with an error nobody caught
- A vendor changes their data retention or training policy overnight
- You realize three different people are paying for three different AI subscriptions that do the same thing
- You need to onboard a new hire and there’s nothing written about how AI fits into the job
Sound familiar? It should. It’s the same pattern as tribal knowledge, undocumented processes, and operating agreements that live in a drawer. The medium is different. The problem is identical: decisions are being made by default instead of by design.
And like governance debt in your legal structure, the cost of fixing it goes up the longer you wait, at least in our view. Retrofitting policy onto entrenched workflows is harder than setting expectations before the habits form.
What an AI Policy Actually Covers
A useful AI policy for a small business doesn’t need to be a 30-page compliance document. It needs to answer the questions your team is already answering on their own — just without your input.
1. Approved Tools and Access
What AI tools is the business sanctioned to use? Who has access? Who approves adding a new tool? This doesn’t mean banning experimentation — it means knowing what your operation depends on.
A simple inventory is the starting point: tool name, who uses it, what for, what plan you’re on, what data it touches.
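If a spreadsheet feels too easy to let rot, even a few lines of code can keep the inventory honest. Here’s a minimal sketch in Python; the tool names, owners, plans, and data categories are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field

# Minimal AI tool inventory. Every name and field here is illustrative;
# adapt the schema to whatever your business actually needs to track.
@dataclass
class AITool:
    name: str
    owner: str                      # who in the business answers for it
    purpose: str
    plan: str                       # free tier, personal card, company sub
    data_touched: list[str] = field(default_factory=list)

inventory = [
    AITool("ChatGPT", "Sam", "drafting proposals", "Team plan",
           data_touched=["internal drafts"]),
    AITool("TranscribeBot", "Alex", "meeting notes", "personal card",
           data_touched=["meeting audio", "client PII"]),  # hypothetical tool
]

# Flag anything touching categories you've marked off-limits.
OFF_LIMITS = {"client PII", "financial records", "trade secrets"}

for tool in inventory:
    risky = OFF_LIMITS.intersection(tool.data_touched)
    if risky:
        print(f"Review {tool.name}: touches {', '.join(sorted(risky))}")
```

The specifics don’t matter. What matters is that the inventory lives somewhere structured enough to query when a vendor changes terms or an employee leaves.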
2. Data Boundaries
This is the most important section. Define what can and can’t go into AI tools:
- Off-limits: Client PII, financial records, employee data, trade secrets, anything covered by an NDA
- Allowed with caution: Internal drafts, general business questions, publicly available information
- Freely allowed: General writing assistance, brainstorming, formatting, research on public topics
The line will be different for every business. The point is having one.
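One way to make the line concrete is to write the tiers down in a form a person, or an internal script, can check against. A minimal sketch follows; the category names are placeholders for whatever your business defines:

```python
# Three-tier data boundary, mirroring the policy above.
# Category names are examples; define your own in the written policy.
BOUNDARIES = {
    "off_limits":   {"client PII", "financial records", "employee data",
                     "trade secrets", "NDA-covered material"},
    "with_caution": {"internal drafts", "general business questions",
                     "public information"},
    "free":         {"writing assistance", "brainstorming", "formatting",
                     "public research"},
}

def check_data(category: str) -> str:
    """Return the policy tier for a data category, defaulting to the
    strictest tier when the category isn't listed."""
    for tier, categories in BOUNDARIES.items():
        if category in categories:
            return tier
    return "off_limits"  # unknown data defaults to the safe side

print(check_data("internal drafts"))    # with_caution
print(check_data("customer invoices"))  # off_limits (unlisted, so strict)
```

Note the default: anything not explicitly listed falls into the strictest tier. That’s the deliberate version of the decision your employees are currently making by instinct.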
3. Quality Standards
What requires human review before it goes out? Set a clear threshold:
- All external-facing content (client deliverables, proposals, emails) gets reviewed by a human
- Internal documents can use AI more freely but should be labeled if substantially AI-generated
- Any content with specific claims, numbers, or legal language gets verified against primary sources
In practice, this rarely slows things down meaningfully. It prevents the kind of error that costs you a client. These quality standards are closely related to what we call intent engineering — defining not just what AI can do, but what it should prioritize.
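If any of your content flows through automation, the same threshold can be enforced in code rather than memory. A sketch under assumed rules; your actual triggers may differ:

```python
# Route AI-assisted content to human review, per the standards above.
# The rules are illustrative; encode whatever thresholds you actually set.
def needs_human_review(*, external: bool, has_claims: bool,
                       ai_assisted: bool) -> bool:
    if not ai_assisted:
        return False
    if external:          # all client-facing content gets reviewed
        return True
    if has_claims:        # numbers, legal language, specific claims
        return True
    return False          # internal, claim-free drafts can flow freely

# A proposal with pricing numbers in it: review required.
assert needs_human_review(external=True, has_claims=True, ai_assisted=True)
# An internal brainstorm doc: no mandatory review.
assert not needs_human_review(external=False, has_claims=False,
                              ai_assisted=True)
```

The function is trivial by design. A review gate only works if it’s simple enough that nobody argues with it.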
4. Disclosure
When do you tell clients or customers that AI was involved? This is partly ethical, partly practical, and increasingly a question clients are asking directly.
Options range from full transparency (“we use AI tools as part of our process”) to output-based disclosure (“this analysis was AI-assisted and reviewed by our team”) to no disclosure for minor use (formatting, grammar). Pick a position and communicate it.
5. Ownership and IP
Who owns AI-assisted output? How does it interact with client contracts? If you’re producing deliverables for clients using AI tools, your contracts should address this. Some clients care deeply. Others don’t. Either way, you should know your position before they ask.
6. Vendor Management
Track your AI subscriptions centrally. Understand each tool’s data retention policy, training data practices, and terms of service. Know what happens to your data if you stop paying. Review this at least quarterly — these policies change often.
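The quarterly review itself is easy to mechanize. Extending the inventory sketch from earlier, a few lines can flag any tool whose terms haven’t been re-checked in 90 days; the dates and tool names below are made up for illustration:

```python
from datetime import date, timedelta

# Tool name -> date its terms/data policy was last reviewed (example data).
last_reviewed = {
    "ChatGPT": date(2025, 9, 1),
    "TranscribeBot": date(2025, 3, 15),  # hypothetical tool
}

REVIEW_INTERVAL = timedelta(days=90)  # quarterly cadence

for tool, reviewed in last_reviewed.items():
    if date.today() - reviewed > REVIEW_INTERVAL:
        print(f"{tool}: last reviewed {reviewed}, overdue for quarterly check")
```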
7. Training and Onboarding
How does a new employee learn what’s expected? If the answer is “they figure it out” or “they ask around,” your policy isn’t a policy — it’s folklore. Include AI usage expectations in onboarding the same way you’d cover any other business tool or process.
A Starter Framework You Can Use This Week
You don’t need to build a comprehensive AI governance program overnight. Here’s a three-week path to get out of default mode:
Week 1: See What You’ve Got
- Inventory your AI tools. Ask every person on the team: what AI tools do you use for work? Include free tiers, personal accounts, browser extensions — everything.
- Define your red lines. What data is categorically off-limits for AI tools? Client PII, financial records, and anything under NDA are the obvious starting points. Write it down in one paragraph. This is the same foundational work we describe in making your business AI-ready.
- Set one rule today: All external-facing AI-assisted content gets human review before it goes out. No exceptions.
Week 2: Write It Down
- Pick your sanctioned tools. Based on the inventory, decide what the business officially uses. Communicate the list.
- Write a one-page acceptable use guide. It should cover: approved tools, data boundaries, quality review expectations, and disclosure position. One page. Not ten.
- Review your client contracts. Do they address AI-assisted work? If not, consider adding language. If you’re not sure what to add, that’s a conversation worth having with your attorney.
Week 3: Make It Stick
- Add AI policy to onboarding. New hires should learn your AI expectations on day one, alongside everything else about how the business operates.
- Schedule a quarterly review. AI tools change constantly — new features, new pricing, new data policies. Your policy should keep pace. Put it on the calendar.
- Assign ownership. Someone needs to be responsible for AI governance. In a small business, that’s probably you. Name it explicitly so it doesn’t become another thing that “everybody” owns and nobody maintains.
This is minimum viable governance. It won’t cover every edge case. But it gets you from “running on defaults” to “making deliberate decisions” — and that’s where the compounding works in your favor instead of against you.
Why This Matters Now
AI tools are improving on a cycle measured in months, not years. With each cycle, the tools your team uses get more capable, which means more workflows, more data, and more decisions being made without guardrails.
The longer your team uses AI without a policy, the harder it becomes to introduce one. Habits calcify. Workflows get built around assumptions nobody questioned. The same compounding dynamic that rewards early AI adopters works against those who delay governance — the gap widens in both directions.
We’ve written about the capability overhang — the gap between what AI can do and what businesses are actually using it for. But there’s a governance overhang too: the gap between how much AI your business relies on and how much of that reliance is deliberate, documented, and governed.
Closing the capability overhang means adopting AI more aggressively. Closing the governance overhang means adopting it more deliberately. You need both.
And if state-level AI regulation continues at its current pace, having a governance foundation in place now means you’re adapting from a position of strength rather than scrambling from scratch. We’ll be writing more about the regulatory landscape in an upcoming post.
Where We Come In
At Moser Research, we treat AI governance the same way we treat business governance: as infrastructure. It’s not a nice-to-have. It’s the foundation that determines whether your AI adoption creates value or creates risk.
Our Operations Audit includes a review of how AI fits into your operations — not just which tools you’re using, but whether there’s governance underneath. We look at your data flows, your quality standards, your vendor dependencies, and your team’s actual usage patterns alongside your operational processes.
Because the same principle applies: you can have the most capable AI tools in the world, but if nobody wrote the rules for how they’re used, you’re building on defaults.
Let’s talk about getting your AI house in order.
The scenarios described in this post represent common patterns we see across small businesses. Specific risks and policy requirements depend on your industry, client base, and the tools you use. This post does not constitute legal advice — consult with a qualified attorney for guidance specific to your circumstances.