AI Strategy · 7 min read

AI governance for Australian SMBs: a one-page policy you'll actually follow

AI governance has a reputation problem in SMB-land. Most owners read the term and picture a 30-page enterprise document nobody will ever follow. The version that actually works for an Australian small or medium business fits on one page, answers six questions clearly, and gets read precisely because it's short.

Why the policy goes before the rollout, not after

The most common pattern we see when stepping into an SMB AI rollout gone sideways: licences were bought first, the policy was promised for “next quarter,” and three quarters later the team is using AI in undocumented ways that the owner is now uncomfortable with. Customer information has gone into prompts. AI-generated content has gone out under the company brand without review. Nobody’s sure who owns what.

None of those outcomes is catastrophic on its own - each is the kind of issue a competent governance policy prevents cheaply. They become problems because the policy was never treated as the prerequisite. The fix is a habit, not a document: write and sign off the policy before licences land in user inboxes.

The six questions a working policy answers

1. Which AI tools are approved for company use?

List them by name:

  • Claude.ai Teams
  • Claude Code (for engineering)
  • ChatGPT (if you've also paid for it)
  • Microsoft Copilot (if it's in your M365 tier)

Anything not on the list is not approved for work data without explicit sign-off. This is the single most underrated control - a large share of SMB AI risk comes from staff casually using consumer tools for work tasks.

2. What data can and can’t go into them?

Two columns:

  • Green-light data: general business questions, your own public marketing copy, internal SOPs, drafts.
  • Red-light data: customer personal information, financial information about identifiable individuals, anything covered by NDAs or client confidentiality, and source code if your engagement contracts restrict it.

Most policies don't need a yellow column - a binary list is easier to remember than a nuanced one.
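
If your IT lead wants to go a step further and encode the two lists as a lightweight pre-send check in an internal tool, a minimal sketch might look like the one below. Everything in it - the category names, the is_approved helper - is hypothetical and for illustration only; the policy itself stays prose, and the check itself belongs in IT documentation, not the one-pager.

```python
# A minimal sketch of the binary data classification as a default-deny
# lookup. Category names and this helper are hypothetical - adapt them
# to whatever your own policy's green/red lists actually say.

GREEN_LIGHT = {
    "general_business_question",
    "public_marketing_copy",
    "internal_sop",
    "draft_document",
}

RED_LIGHT = {
    "customer_personal_information",
    "identifiable_financial_information",
    "nda_covered_material",
    "client_confidential_material",
    "restricted_source_code",
}

def is_approved(category: str) -> bool:
    """Return True only for categories the policy explicitly green-lights.

    Anything unknown is treated as red-light: the rule is binary,
    'approved or not', never 'probably fine'.
    """
    if category in RED_LIGHT:
        return False
    return category in GREEN_LIGHT

# Example pre-send checks
assert is_approved("internal_sop")
assert not is_approved("customer_personal_information")
assert not is_approved("something_unlisted")  # default-deny
```

The design choice worth keeping is the default-deny: anything not explicitly green-lit is treated as red-light, which mirrors the binary rule staff are asked to remember.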

3. How is customer / client data handled, and does it cross borders?

For Australian SMBs operating under the Privacy Act, the relevant questions are: where is the data processed (Anthropic publishes this for Claude.ai Teams and Enterprise), is it used to train general-purpose models (it isn't for Teams/Enterprise), and how long is it retained. State the answers once in your policy so a staff member can read it and know, rather than asking IT every time.

4. Who owns the IP?

Default: the company owns AI-generated content created by employees in the course of their work, the same way it owns non-AI work. Contractors should sign equivalent terms before getting access. State this in one sentence. The separate - and still unsettled - question of whether AI-generated content qualifies for copyright protection at all in Australia is interesting, but it doesn't change your internal IP policy.

5. What training is required before access?

Mandatory pre-access training of 15-30 minutes covering this policy, the approved tools, and the data handling rules. Build this into onboarding for new staff, and run an annual refresher. The training is the single highest-leverage governance investment you can make - most AI incidents come from people not knowing the rules, not from people deliberately breaking them.

6. How is a suspected breach or misuse reported?

Name the person and the channel. “Email compliance@yourbusiness.au within 24 hours of becoming aware.” The policy needs to make reporting cheap and obvious. A breach the team handles internally and tells leadership about is a breach you can fix. A breach the team handles internally and hopes nobody notices is the kind that eventually becomes a problem.

What to leave out

Long policies fail. The temptation is to copy an enterprise template and end up with a 12-page document that covers every theoretical edge case. Don’t. Things to deliberately leave out of the one-pager:

  • Detailed legal analysis - put it in a referenced annex if you must, not in the policy itself
  • Tool-specific configuration steps - those go in IT documentation, not governance
  • Hypothetical scenarios that haven’t happened in your business - cover them as they come up
  • Aspirational language about responsible AI - the policy should describe the rules, not the philosophy

Who signs it off

Owner / CEO. IT lead. Legal where applicable (often outside counsel for an SMB). For larger SMBs, also the privacy officer if you have one. The signatures matter because they’re what makes the policy a real instrument rather than an aspiration. Distribute the signed version alongside the AI tool credentials, and require acknowledgement before access.

How the policy evolves

AI is moving faster than most policy update cycles. The pragmatic rhythm: review the policy quarterly for the first year, then annually once it’s settled. Update it when something genuinely changes - a new approved tool, a new data category, a new risk that surfaces. Don’t update it cosmetically; that erodes the authority of the document.

How XLev installs governance

Governance is one of the three layers we install on every Claude rollout - alongside Projects per function and custom Skills. We adapt the one-page template to your business and walk it through owner, IT and legal sign-off as part of the rollout - not as a separate workstream that drags. See Claude Implementation for the full service detail.

Frequently asked questions

What's the minimum AI policy an SMB needs?
A one-page document covering: which AI tools are approved, what data can and can't be put into them, how customer/client information is handled (and whether it crosses borders), who owns AI-generated IP, what training is required before access, and how a suspected breach or misuse is reported. Signed off by the owner, IT and legal where applicable. Distributed alongside Claude licences, not three months after.
Do we need to worry about Australian Privacy Principles when using Claude?
Yes - any time customer or staff personal information is involved. Anthropic publishes data handling commitments for the Claude.ai Teams and Enterprise tiers, but the obligation under the APPs sits with you, the business handling the personal information. The practical answer for most Australian SMBs is to pick the Claude tier with appropriate data handling (Teams or Enterprise rather than the consumer tier), restrict what categories of personal information can be entered into prompts, and document the decision so you can show a regulator if asked.
Who owns the IP of AI-generated content?
By default, content created by an employee using AI in the course of their work belongs to the company - the same way employment IP terms work for non-AI work. The policy should make this explicit, and contractors should sign equivalent terms before getting access to AI tools. The unsettled question is whether AI-generated content qualifies for copyright protection at all in Australia - a separate issue from internal IP ownership.
Should we ban consumer ChatGPT or Claude?
Most SMBs end up restricting consumer tiers for work use rather than banning them outright. The practical issue is that consumer tiers don't have the data handling commitments enterprise/teams tiers do, and they sit outside your governance perimeter. The cleanest policy: company-paid Teams/Enterprise tier is the approved tool; consumer tier is fine for personal use but not for company data.
Do we need an AI training requirement in the policy?
Yes - even if light. Mandatory pre-access training (15-30 minutes) covering the policy itself, the approved tools, and the data handling rules is the single highest-leverage governance investment. Most AI incidents come from people not knowing the rules, not from people deliberately breaking them. Putting training at the front of the rollout filters out the avoidable issues cheaply.

Where this fits

Claude Implementation

Install Claude properly across your team - Claude Code, Claude.ai projects and skills, custom Anthropic SDK builds.