The shadow pandemic: Why bans and training fall short

You’ve probably already tried to stop it. You’ve written policies, run training sessions, maybe even blocked some tools through the firewall. And yet your employees are still using unauthorized AI.

Here’s why that approach is failing, and what actually works.

The numbers that should concern you

Forrester calls it the “shadow pandemic”. Sixty percent of workers are using their own AI tools to do their jobs, deliberately going around their organization’s security policies.

Not because they’re reckless, but because they feel it’s the most efficient way to get their work done.

Meanwhile, 38% of employees acknowledge sharing sensitive work information with AI tools without their employer’s permission. And 69% of companies suspect or have seen employees using forbidden generative AI tools.

Why “don’t do that” doesn’t work

Here’s the uncomfortable truth. Training and policies alone won’t stop shadow AI.

Training is necessary but insufficient. You can tell employees not to paste confidential data into ChatGPT. They’ll nod, agree, and then do it anyway when they have a deadline and a tool that works.

Bans are even worse. They don’t eliminate the problem; they hide it. Employees go underground. They use personal devices, home networks, or tools you don’t know about. You lose visibility, which means you lose the ability to protect data.

The real issue isn’t employee intent. It’s misaligned incentives. Your employees are measured on output. The AI tool makes them faster. So they use it, regardless of what the policy says.

The missing piece: Active controls at the prompt level

Here’s what actually works. Don’t block the tool. Block the data that shouldn’t go through it.

Modern security solutions are doing something traditional approaches miss. They operate at the prompt level, the actual moment when an employee types information into an AI tool.

The capabilities being deployed look like this:

Real-time detection

Identify when sensitive data is about to be sent to an AI tool before it leaves your network. Personal information, proprietary code, API keys, customer records, financial data. Catch it in the moment.
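To make this concrete, here is a minimal sketch of prompt-level detection. The regex patterns and category names are purely illustrative assumptions; real products combine pattern matching with ML classifiers, validators, and customer-specific dictionaries.

```python
import re

# Illustrative patterns only; production detectors are far richer
# (ML classifiers, checksum validators, org-specific dictionaries).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_(?:live|test)_[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_sensitive(prompt: str) -> list[tuple[str, str]]:
    """Scan a prompt before it leaves the network; return (category, match) pairs."""
    hits = []
    for category, pattern in PATTERNS.items():
        for match in pattern.findall(prompt):
            hits.append((category, match))
    return hits
```

The point is where the check runs: on the prompt itself, at submission time, not on network traffic after the fact.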

Intelligent redaction

Instead of blocking the entire prompt, redact or mask the sensitive bits. Let employees use the tool, but strip out the data that shouldn’t go there. This maintains productivity while protecting information.
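A sketch of that redaction step, under the same illustrative patterns as above. The `[EMAIL]`-style placeholder format is a hypothetical choice; real products vary (opaque tokens, format-preserving masks, reversible vaults).

```python
import re

# Hypothetical placeholder scheme; real tools may use reversible tokens.
REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_(?:live|test)_[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive spans with labeled placeholders; keep the rest intact."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

The employee’s prompt still goes through and still gets an answer; only the spans that shouldn’t leave do not.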

Contextual policies

Different teams have different risk profiles. Marketing can share types of data Finance can’t. Modern solutions let you set granular policies per department, per role, per data type.
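A minimal sketch of how such a policy table might look, assuming the detection categories from earlier. The department names and allowed sets are invented for illustration; real policy engines are far more granular (per role, per tool, per data type).

```python
# Hypothetical policy table: which detected data categories each
# department may send to an AI tool.
POLICIES: dict[str, set[str]] = {
    "marketing": {"email"},   # may share contact emails
    "finance": set(),         # may share none of the tracked categories
}
DEFAULT_ALLOWED: set[str] = set()  # unknown departments get the strictest rule

def violations(department: str, detected: set[str]) -> set[str]:
    """Categories found in the prompt that this department may not send."""
    allowed = POLICIES.get(department, DEFAULT_ALLOWED)
    return detected - allowed
```

The same prompt can be fine from one team and a violation from another; the policy, not the tool, makes that call.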

Visibility without blame

Log what’s happening without creating a culture of surveillance. You’re not trying to catch people breaking rules. You’re trying to understand the risk and protect data.
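One way to sketch blame-free telemetry: record that a category of data headed to a tool, and nothing else. This aggregation scheme is an assumption for illustration; no prompt text and no usernames are stored.

```python
from collections import Counter

# Counts of (tool, data category) pairs; no content, no identities.
events: Counter[tuple[str, str]] = Counter()

def log_event(tool: str, category: str) -> None:
    """Aggregate by tool and data category only."""
    events[(tool, category)] += 1

def risk_report() -> list[tuple[tuple[str, str], int]]:
    """Most frequent tool/category pairs, for risk review rather than discipline."""
    return events.most_common()
```

That is enough to answer “where is our sensitive data flowing?” without turning the log into a disciplinary record.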

Active enforcement

Don’t just alert. Block risky prompts, reroute them, or require additional approval before sensitive data leaves. It’s like a data loss prevention (DLP) system, but built for the AI era.
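The escalation logic can be sketched as a simple decision function. The severity tiers below are invented assumptions; real deployments tune these per policy and per data type.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    REDACT = "redact"
    REQUIRE_APPROVAL = "require_approval"
    BLOCK = "block"

# Illustrative severity tiers, keyed by detected data category.
SEVERITY = {"email": 1, "ssn": 2, "api_key": 3}

def enforce(detected: set[str]) -> Action:
    """Escalate the response with the severity of what was detected."""
    if not detected:
        return Action.ALLOW
    worst = max(SEVERITY.get(cat, 1) for cat in detected)
    if worst == 1:
        return Action.REDACT
    if worst == 2:
        return Action.REQUIRE_APPROVAL
    return Action.BLOCK
```

Most prompts sail through untouched; only the risky ones hit friction, which is what keeps employees inside the sanctioned tools.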

Why this matters: You’re already using AI anyway

Here’s the thing your employees know that you might not. AI is already embedded in the tools you’ve approved. Slack, Microsoft 365, Salesforce, Google Workspace. They all have AI built in. Your team uses these features every day, often without realizing it.

So the question isn’t “will we use AI?” It’s “will we use it safely, or will we use it in the shadows?”

Active controls at the prompt level give you a third option. You don’t have to choose between banning AI and ignoring the risk. You can enable it and protect it simultaneously.

Worried about shadow AI in your organization?

Map your AI exposure and set up the right guardrails with the help of our specialists, from visibility all the way to active controls.

What comes next

This is where the real strategy starts. You need:

1. Visibility into what’s happening. What data is flowing where? Which tools are being used? Who’s doing it?

2. Clear policies. Show people how to use AI safely.

3. Active guardrails. Technology that enforces those policies at the moment of risk.

4. Culture shift. From “AI is forbidden” to “AI is enabled, but we’re protecting our data”.

None of this is theoretical. Organizations are deploying these capabilities right now. And the ones that move first will have a significant advantage over those still trying to ban their way to safety.

Sources:

  • Forrester. (2023). Predictions 2024: Generative AI Transitions From Hype To Intent.
  • Forrester. (2023). Predictions 2024: Cybersecurity, Risk, And Privacy.
  • IBM. (2024). What Is Shadow AI?
  • Gartner. (2024). Predicts 2025: Shadow AI Security Breaches Will Affect 40% of Enterprises by 2030.
  • McKinsey & Company. (2025). The State of AI in 2025: Agents, Innovation, and Transformation.
