Shadow AI: Your Employees Are Already Using It. Now What?

  • Writer: Frank Calvello
  • 12 hours ago
  • 4 min read

Here is something worth sitting with for a moment. Right now, while you are reading this, there is a good chance that someone on your municipal team is using AI to do their job. Not because you told them to. Not because your department has an AI policy. Because they found a tool that saves them an hour of work, and they started using it.


This is called Shadow AI, and it is happening in local governments across the country. Employees are turning to ChatGPT, Copilot, Gemini, and other AI tools on their own, without training, without guardrails, and often without any awareness of the risks involved.

The instinct for many leaders is to ban it. Lock it down. Issue a memo. But here is the hard truth: that approach rarely works, and it almost always makes things worse.


You cannot put this genie back in the bottle

A 2024 Work Trend Index study from Microsoft and LinkedIn found that 75% of knowledge workers are already using AI tools in their jobs. That number is only going up. In municipal government, where staff are stretched thin and the workload never shrinks, employees are not waiting for permission. They are solving problems with whatever tools they can find.

Think about the public works coordinator drafting a grant narrative at 11pm and deciding to paste the project summary into ChatGPT to get a head start. Or the HR manager asking an AI tool to help rewrite a job posting. Or the finance analyst uploading a spreadsheet to get help spotting trends.


These are not bad employees doing reckless things. They are resourceful people trying to do their jobs better. The problem is not their intention. The problem is that they have not been taught where the lines are.

What actually goes wrong with unsanctioned AI use

Shadow AI is not just a policy headache. It creates real operational and legal risk for municipalities. The biggest danger? Data. When an employee pastes constituent information, personnel records, legal documents, or internal financial data into a free AI tool, that data may be used to train future models. It may be stored on servers outside your control. In some cases, it could trigger public records obligations you never anticipated.


Then there is the accuracy problem. AI is a creative text generator, not a reliable source of truth. An employee using an AI tool without training may not know that the output needs to be fact-checked. They may submit a report, send a constituent response, or build a budget projection based on information the AI simply invented. This is not hypothetical. It is already happening.


And then there is consistency. When every employee is using different tools in different ways, your organization has no way to measure what is working, what is risky, or how AI is affecting your operations at all.

Why banning AI does not solve the problem

Issuing a blanket ban on AI tools sounds decisive. But enforcement is nearly impossible, and the message it sends to your team can do more harm than good. You are essentially telling resourceful, self-starting employees to stop being resourceful. The tools do not disappear from their phones and personal laptops. They just go further underground.


What you want is not compliance through prohibition. You want informed, intentional use. The goal is a team that understands what AI can do, what it cannot do, what data should never be shared with it, and how to verify what it produces.

That kind of behavior does not come from a memo. It comes from training.


What a smart AI policy actually looks like

The municipalities getting this right are not the ones that banned AI first and asked questions later. They are the ones that got ahead of it with a clear, practical framework. Here is what that looks like in practice.


First, acknowledge what is already happening. Talk openly with your team about AI use. You will likely be surprised by how many people are already experimenting. That conversation, handled well, shifts the dynamic from secrecy to collaboration.

Second, establish clear data guidelines. Your policy does not need to cover every possible AI tool. It needs to be clear about what information can and cannot be shared with any external tool. Define categories: what is sensitive, what is confidential, what is public. Make it simple enough that a busy department head can actually remember it.


Third, designate approved tools. You do not need to build a custom AI system. Pick one or two tools that meet your security standards and point your team toward those. This gives employees a sanctioned path forward, reduces fragmentation, and makes it much easier to provide consistent training.


Fourth, and most importantly, train your people. Not a one-hour overview. Practical, role-specific training that teaches employees how to use AI strategically, how to spot bad output, and how to get real value from the tools. Research is clear on this point: access to AI tools alone does not improve outcomes. Strategic training is what makes the difference.


The cost of doing nothing

Let us be direct about what inaction costs you. Every day your team uses AI without training, you are accepting risk you did not consciously take on. Data handled improperly. Facts not checked. Work product that your organization owns but cannot vouch for.


There is also an opportunity cost. A staff member spending five hours on a weekly report that AI could help them draft in two is not a neutral outcome. That is three hours of capacity that disappears every single week. At $35 per hour, across five staff members doing similar work, you are looking at over $27,000 in lost capacity every year. Just on that one task.
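The arithmetic behind that figure is simple enough to check yourself. This short sketch reproduces it, assuming the task recurs weekly; every input is an illustrative figure from the paragraph above, not measured data, so swap in your own numbers to estimate your organization's exposure.

```python
# Back-of-envelope capacity cost of an untrained-AI workflow.
# All inputs are illustrative assumptions; replace with your own figures.
hours_manual = 5        # hours the report takes today
hours_with_ai = 2       # hours with AI-assisted drafting
hourly_rate = 35        # assumed cost per staff hour, USD
staff_count = 5         # staff doing similar work
weeks_per_year = 52     # assumes the task recurs weekly

hours_saved = hours_manual - hours_with_ai
annual_lost_capacity = hours_saved * hourly_rate * staff_count * weeks_per_year
print(annual_lost_capacity)  # 27300
```

Three recovered hours, across five people, week after week, is how a modest per-task saving compounds into a five-figure annual number.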


Shadow AI is going to keep spreading whether you address it or not. The only question is whether your team will be using it wisely.


The bottom line

You do not need to prevent your team from using AI. You need to make sure they know how to use it well. That means policy, yes, but more than anything it means training. Practical, municipal-specific training that turns curious experimenters into confident, careful users.


The municipalities that thrive in this next era of public service will not be the ones that resisted AI the longest. They will be the ones that got ahead of it with intention, equipped their teams with real skills, and turned Shadow AI into strategic AI.

That starts with a conversation. And a plan. If you are ready to have both, we can help.
