The AI adoption blind spot: What the data is telling you

Gartner is direct about where this is heading. By 2026, 75% of organizations running GenAI initiatives will reprioritize their data security efforts, shifting spending from structured to unstructured data.

That’s not a prediction. That’s a warning.

We’re seeing it firsthand. Whether it’s a municipality, a healthcare clinic, or a manufacturing operation, the pattern keeps repeating. Teams discover new LLM tools their employees are using. IT blocks one through the firewall. Another appears two weeks later. It’s whack-a-mole, and it’s everywhere.

And here’s what makes it worse. According to Verizon’s 2025 Data Breach Investigations Report, nearly 50% of data breaches come from employees sending sensitive information to the wrong place, not from external attackers. Now imagine that happening through tools your IT team doesn’t know exist.

Why this is happening: The speed of adoption outpaced your governance

Here’s the structural problem. Your governance framework was built for a different world. IT rollouts? Quarterly. Board approvals? Months. Procurement? Annual cycles.

AI doesn’t play by those rules.

Employees sign up for ChatGPT in five minutes. They share access within the hour. By the time your approval process even gets scheduled, the tool is already embedded and data is moving through it.

Your framework wasn’t designed for this pace. And the speed keeps accelerating.

What this means: You can’t control what you can’t see

So here’s the realization you need to have. You can’t write policy for tools you don’t know exist. You can’t protect data flowing to systems you’re not monitoring. You can’t audit what’s invisible.

Stop trying to ban AI. That battle is already lost. The real battle is visibility.

The consequence: it’s not just IT anymore

And here’s where it gets real. When this breaks, and it will, the pressure won’t come from your IT team. It’ll come from:

  • Legal. Where are your controls? What data went where?
  • Compliance. Did we violate any regulations? What’s our exposure?
  • Finance. What’s the cost of a breach? What’s our liability?
  • The board. How did this happen on your watch?

When a regulator asks where your safeguards were, saying “we didn’t know employees were using it” isn’t a defense. It’s negligence.

What this means for your role

If you’re a CIO, you’re seeing operational risk. How do you govern something moving faster than your processes?

If you’re a CISO, you’re seeing compliance exposure. The liability is real.

If you’re an IT manager, you’re seeing a resource crunch. You’re already stretched thin, and now there’s another invisible layer to manage.

Same problem. Different pressures.

Start here: Three questions for your team

Before you can fix visibility, you need to know what you’re working with. Ask your team:

1. What AI tools are people actually using right now? Not what you’ve approved. What are they really using? Slack integrations? Browser extensions? Standalone apps? If you can’t answer this with confidence, you have a visibility gap.

2. What data is flowing into these tools? Customer information? Proprietary code? Internal strategies? Get specific. This tells you the real risk surface.

3. Which tools have access to your systems or data? Some AI tools connect to your Microsoft 365, Google Workspace, or CRM. They hold permissions that were granted once, often by an individual employee, and never reviewed. Find them.
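The first question doesn’t have to be answered by survey; it can be answered from your own traffic data. Below is a minimal sketch of the idea, assuming your secure web gateway or DNS resolver can export logs as CSV with a destination-host column. The column name (`dest_host`), the sample log, and the domain list are all illustrative assumptions here, not a vetted feed — a real deployment would plug in your gateway’s actual export format and a maintained list of GenAI endpoints.

```python
# Sketch: flag outbound traffic to known GenAI endpoints in a proxy/DNS
# log export. Domain list and CSV layout are illustrative assumptions.
import csv
import io

# Deliberately incomplete -- a real check needs a maintained domain feed.
GENAI_DOMAINS = {
    "chatgpt.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_genai_traffic(log_csv: str) -> list[dict]:
    """Return log rows whose destination matches, or is a subdomain of,
    a known GenAI domain."""
    hits = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        dest = row.get("dest_host", "").lower().strip()
        if any(dest == d or dest.endswith("." + d) for d in GENAI_DOMAINS):
            hits.append(row)
    return hits

# Hypothetical export -- substitute your gateway's real CSV.
sample = """timestamp,user,dest_host
2025-06-01T09:14:02,jsmith,chatgpt.com
2025-06-01T09:15:40,jsmith,intranet.example.local
2025-06-01T10:02:11,mdoe,api.openai.com
"""

for hit in flag_genai_traffic(sample):
    print(f"{hit['timestamp']}  {hit['user']}  ->  {hit['dest_host']}")
```

Even a rough pass like this turns "we think people are using AI tools" into a named list of tools and users — which is exactly the visibility gap the question is probing.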

These three questions won’t solve the problem. But they’ll tell you whether you have one, and how big it is.

Sources:

  • Gartner. (2024). Security Leaders’ Guide to Data Security in the Age of GenAI.
  • Verizon. (2025). 2025 Data Breach Investigations Report.
