Cybersecurity

Every business has an AI strategy. Almost none have an AI inventory.

John Zammit · 29 April 2026 · 5 min read
Image: abstract red and black digital matrix, representing the dense web of unmonitored AI agents and integrations running inside Australian businesses (Pachon in Motion via Pexels)

Walk into any 50-person firm in Melbourne and ask the leadership team about AI. You'll get a confident answer. Ask their IT manager — or their MSP, if they've outsourced — and the answer narrows. Ask the question that actually matters, and nobody can answer it.

The confident answer goes something like this: Copilot rollout in motion, somebody's running an experiment with ChatGPT Enterprise, a Claude licence on the marketing manager's card. The narrower answer covers the tools IT has approved and deployed. The question that nobody can answer is the one that matters most: how many AI tools and agents are running across the business right now, including the ones nobody approved?

That's the gap. And it's where the risk lives.

Shadow AI is a SaaS-era problem on fast-forward

This is the same pattern that played out with SaaS fifteen years ago. The marginal cost of trying a new tool fell to zero. The marginal cost of governing it stayed flat. Business adoption outran IT oversight, and by the time someone wrote a policy, the behaviour was already embedded in the workflow.

AI has compressed that timeline by an order of magnitude. Sign-up takes thirty seconds, the free tier is generous, the output is genuinely useful. The difference is that the tool isn't just storing your data anymore. It's acting on it. An agent inside Salesforce is sending emails. An automation in Make is summarising customer records through a third-party model. A finance person is dropping last quarter's P&L into a free chatbot to help with the board pack. None of this is wrong-headed. All of it is unaccounted for.

You cannot govern what you cannot see. The policy is a document. The inventory is a control.

The control gap, plainly

The standard SMB response is to write a policy. Don't use ChatGPT for confidential data. Get IT approval before signing up to AI tools. The policy goes into the staff handbook and nothing changes, because the policy doesn't see anything. It's a document, not a control.

Look at how shadow AI actually emerges and the mechanism is straightforward. An employee has a task. An AI tool can do that task in five minutes instead of an hour. The employee uses the tool. The tool retains the prompt, and the prompt contained a customer name, a dollar figure, or an internal process description. Whether that data ever leaves the model's training corpus is a question your policy cannot answer, because your policy cannot see the prompt.
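An inventory-backed control, by contrast, can actually look at the prompt. A minimal sketch of what that inspection step might look like, using purely illustrative regex patterns (a real DLP rule set would be far broader, and the pattern names here are assumptions, not any vendor's schema):

```python
import re

# Illustrative patterns only -- a real rule set would cover many more
# data categories and use proper entity detection, not bare regexes.
SENSITIVE_PATTERNS = {
    "dollar_figure": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "tax_file_number": re.compile(r"\b\d{3}\s\d{3}\s\d{3}\b"),  # AU TFN layout
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(flag_sensitive("Summarise Q3: revenue was $1,240,000 per the attached P&L"))
# prints ['dollar_figure']
```

The point isn't the regexes. It's that this check runs at the moment the prompt leaves the business, which is exactly where a handbook policy has no reach.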

Inventory first. Policy second. Enforcement third.

The right first step isn't a framework. It's an inventory: every AI tool, agent, and integration touching your data, regardless of who deployed it. You cannot govern what you cannot see.
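An inventory doesn't need to be sophisticated to be useful; it needs to exist and be queryable. A minimal sketch of one record, with field names that are illustrative assumptions rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row of an AI inventory. Fields are illustrative, not a standard."""
    name: str                  # e.g. "Copilot", "Make automation"
    kind: str                  # "chatbot", "agent", or "integration"
    owner: str                 # the team or person who uses it
    data_touched: list[str] = field(default_factory=list)  # data it sees
    it_approved: bool = False  # deployed with IT visibility?

inventory = [
    AIToolRecord("Copilot", "chatbot", "IT", ["email", "documents"], it_approved=True),
    AIToolRecord("Make automation", "integration", "Ops", ["customer records"]),
]

# The question leadership can't currently answer becomes a one-liner:
unapproved = [t.name for t in inventory if not t.it_approved]
```

Even a spreadsheet with these five columns turns "how many AI tools are running?" from an unanswerable question into a filter.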

From there you can layer enforcement: a gateway through which model requests flow, allowlisted providers, group-based access controls, runtime inspection of inputs and outputs. None of this is conceptually new. Microsoft, Cisco, and Palo Alto have been doing the equivalent for SaaS and web traffic for a decade. AI just makes it urgent.
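The gateway decision itself is simple once the inventory exists. A sketch of an allowlist check with group-based access, where the provider hostnames and group policy are hypothetical examples, not a recommendation of specific vendors:

```python
from urllib.parse import urlparse

# Hypothetical policy: which model providers each group may reach.
# In practice this would live in the gateway's policy store.
GROUP_ALLOWLISTS = {
    "engineering": {"api.openai.com", "api.anthropic.com"},
    "finance": {"api.anthropic.com"},
}

def allow_request(user_group: str, url: str) -> bool:
    """Gateway decision: permit the outbound model request only if the
    destination host is on the requesting user's group allowlist."""
    host = urlparse(url).hostname or ""
    return host in GROUP_ALLOWLISTS.get(user_group, set())
```

A request from finance to an unlisted provider simply fails, and the failure is logged, which is the audit trail the insurer and the regulator will eventually ask for.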

The numbers Australian SMBs should know

~50% of employees admit to using AI tools their employer hasn't approved (industry surveys, 2025)
98% of organisations report some unsanctioned AI use (Cisco AI Readiness Index)
21% of leaders say they have a mature AI agent governance model (industry surveys, 2025)
~25% have real-time visibility into AI running in production (industry surveys, 2025)
US$670k average added cost per breach involving shadow AI (IBM Cost of a Data Breach 2025)

Most of that exposure sits below the line where SMBs notice — until a customer asks where their data went, a cyber insurer asks for evidence of control, or a regulator asks for an audit trail.

The compliance net is already tightening around Australian SMBs

The Australian regulatory environment is moving in the same direction as the EU and US, even if it's not yet binding here. The Privacy Act reforms touch automated decision-making. ISO 42001 is showing up as a buyer's question in tenders. The EU AI Act's high-risk provisions become enforceable on 2 August 2026, and any Australian business selling into European customers, or supplying ones who do, gets pulled into that compliance net by contract.

What to do this quarter

The practical sequence for a Victorian SMB is unglamorous, but it works:

1. Build the inventory: interview each team, audit the SaaS bill, review audit logs, check browser extensions, inspect outbound traffic.
2. Write the policy against what the inventory actually shows, not against what you hope is happening.
3. Layer enforcement: allowlisted providers, group-based access controls, runtime inspection of inputs and outputs.

The window for getting ahead of it, before a regulator or a major customer forces the conversation, is narrowing every quarter.

Control doesn't slow down innovation. It's what lets the business keep moving once the questions start arriving.

Frequently asked questions

Why is an AI inventory more important than an AI policy?

A policy is a document. A control is something that sees and enforces. Until you know which AI tools, agents, and integrations are touching your data, an AI policy is a statement of intent without an enforcement layer. The inventory comes first because you cannot govern what you cannot see.

What is shadow AI?

Shadow AI is any AI tool, agent, or integration used inside the business without IT approval or visibility. That includes free chatbots used to summarise sensitive documents, AI extensions installed on managed devices, third-party model APIs called from automation platforms like Make or n8n, and AI features built into SaaS tools that staff have enabled themselves.

How do you actually build an AI inventory?

Start with discovery, not policy. Interview each team about the AI tools they use, audit the SaaS bill, review the Microsoft 365 audit log for AI sign-ins, check browser extensions on managed devices, and inspect outbound traffic for known model API endpoints. Once you have the inventory, layer enforcement: Conditional Access, Purview DLP, Defender for Cloud Apps, and an outbound web filter that classifies AI providers.
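The outbound-traffic step can be partly automated. A sketch that scans a log of "client host" pairs for known model API endpoints, where both the endpoint list and the log format are illustrative assumptions, not a complete catalogue or any product's export format:

```python
# Hypothetical endpoint list -- a real one would be maintained and much longer.
KNOWN_AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_traffic(log_lines: list[str]) -> list[tuple[str, str]]:
    """Return (client, host) pairs where the destination is a known AI endpoint."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in KNOWN_AI_ENDPOINTS:
            hits.append((parts[0], parts[1]))
    return hits

sample = [
    "10.0.0.14 api.openai.com",
    "10.0.0.22 example.com",
    "10.0.0.14 api.anthropic.com",
]
```

Run against a week of firewall or proxy logs, even this crude match usually surfaces tools that never appeared in the team interviews.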

Does the EU AI Act apply to Australian businesses?

Not directly — Australia has its own regulatory pathway through the Privacy Act reforms and the AI in Government framework. But the EU AI Act's high-risk provisions become enforceable on 2 August 2026, and Australian businesses selling into European customers, or supplying ones who do, are pulled into compliance through contract terms. Customers in financial services, healthcare, and government are already pushing AI control requirements down the supply chain.

What is ISO 42001 and why should an SMB care?

ISO 42001 is the international standard for AI management systems — the AI equivalent of ISO 27001 for information security. It is increasingly appearing in tender requirements and procurement questionnaires from larger Australian organisations who need evidence that their suppliers are governing AI use responsibly. SMBs answer for it whether they have certified or not.

Written by

John Zammit
Managing Director

Related Topics

AI inventory, shadow AI governance, AI governance SMB Australia, shadow AI risks, AI tool inventory, ISO 42001 Australia, EU AI Act Australian businesses, AI policy enforcement Melbourne
