
Shadow AI: What it is, why it's risky, and what it means for AI Act compliance

Shadow AI is the use of AI tools without company approval or oversight. Here's why it matters for security, compliance, and your AI Act readiness.

Hanna
Co-Founder & Tech Lawyer
October 10, 2025 · 7 min read

What is "shadow AI"?

Shadow AI is the use of AI tools, models, agents, or AI-powered SaaS by employees without company approval or governance, often via personal accounts, browser plugins, or free tiers.

Why it's a problem

Studies show that employees routinely paste non-public data into public GenAI tools (customer data, code, contracts, HR records). In Cisco's 2024 Global Privacy Study, 48% of organizations admitted that employees had entered non-public company information into GenAI tools, and 27% said they had banned GenAI at least temporarily over privacy and security concerns.

IBM's 2025 Cost of a Data Breach Report highlights an "AI oversight gap": 97% of organizations that reported an AI-related security incident lacked proper AI access controls, and 63% lacked AI governance policies.

Cybersecurity leader Netskope tracked more than 1,550 distinct GenAI SaaS apps in 2025 (up from 317 earlier in the year), with the typical organization using around 15 GenAI apps, most of them outside formal procurement.

How shadow AI collides with the AI Act

The AI Act is a risk-based safety and accountability framework that assigns duties to AI providers and deployers to ensure trustworthy and transparent use of AI systems. It assumes that organizations know which AI systems they use and can control them.

Shadow AI erases visibility into what's being used, by whom, and on what data. Without that visibility, you can't assign owners, classify risk, keep the required records, apply human oversight, or show compliance.

The AI Act puts the heaviest, pre-market obligations on providers (those who build an AI system or place it on the market). Deployers (those who use an AI system in a professional context) also have duties, because risk ultimately depends on how and where the AI is used.

Provider core duties include designing and testing for safety and data quality; maintaining full technical documentation and logs; managing risks; completing conformity assessment and CE marking where required; and ensuring pre-market transparency and post-market monitoring.
Deployer core duties include using the system as intended; maintaining human oversight; keeping usage and performance logs; completing required assessments for high-risk uses (such as hiring, credit, or education); informing affected people where required; and protecting data and cybersecurity.

Why deployers still have obligations: Even a compliant system can cause harm if misused or applied in a sensitive context such as HR. The Act therefore requires deployers to control their use, ensure oversight, document usage, and confirm that the tool is appropriate for the task and data.

An example (HR screening)

Say your company licenses a third-party résumé-screening tool.

The provider must ship a compliant system (safety-by-design, documentation, CE marking if high-risk, monitoring).

You, the deployer, must:

  • Confirm it's suitable for hiring and used within its stated purpose.
  • Ensure a human makes/oversees decisions (no fully automated reject lists).
  • Log how it's used and monitor outcomes (bias, error rates); a minimal monitoring sketch follows this list.
  • Do the required high-risk assessment (because hiring is high-risk).
  • Inform candidates that AI is used and how decisions are reviewed.
  • Protect inputs/outputs (avoid feeding sensitive data where not allowed; apply access controls/DLP).
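
One way to make "monitor outcomes" concrete is to compare selection rates across applicant groups from your own usage logs. The sketch below is illustrative, not an AI Act requirement: the log fields and the 80% rule-of-thumb threshold are assumptions for the example, and any real monitoring design should be agreed with HR and legal.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: dicts like {"group": "A", "advanced": True}.
    Returns the share of applicants advanced per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        advanced[d["group"]] += d["advanced"]
    return {g: advanced[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (a common rule of thumb, not a legal test)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best and r / best < threshold]

# Illustrative log export from the screening tool
logs = [
    {"group": "A", "advanced": True},
    {"group": "A", "advanced": True},
    {"group": "B", "advanced": True},
    {"group": "B", "advanced": False},
    {"group": "B", "advanced": False},
]
rates = selection_rates(logs)
print(rates, flag_disparities(rates))  # flagged groups go to human review
```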

Shadow AI creates blind spots that make it impossible to:

  • Classify use-cases against the Act's risk tiers (unacceptable / high / limited / minimal).
  • Produce or retain required technical documentation and logs.
  • Ensure human oversight, model change control, and security controls.
  • Demonstrate lawful bases and respect for fundamental rights in data use (e.g., HR, credit, or customer decisioning).

Bottom line: if you don't know an AI tool is in use, you can't risk-assess it, document it, or secure it, thereby putting your organization on the wrong side of both security and compliance.

What you should do

1. Get visibility

What you do: list the AI your teams actually use. Include SaaS apps, browser add-ons, LLM plugins, and where data flows.
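
If it helps to picture the output, here is a minimal sketch of what one row of such a register can capture; the fields and example entries are illustrative assumptions, not TrustFlo's actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolRecord:
    """One inventory row: what the tool is, who owns it,
    what data it touches, and whether it has been approved."""
    name: str
    vendor: str
    kind: str                   # e.g. "SaaS app", "browser add-on", "LLM plugin"
    owner: str                  # accountable team or person
    data_categories: list[str]  # e.g. ["customer data", "source code"]
    approved: bool

register = [
    AIToolRecord("ChatGPT (free tier)", "OpenAI", "SaaS app",
                 "unknown", ["source code", "contracts"], approved=False),
    AIToolRecord("GitHub Copilot", "GitHub", "IDE plugin",
                 "Engineering", ["source code"], approved=True),
]

# Export the register as evidence for an audit or an AI Act review
print(json.dumps([asdict(r) for r in register], indent=2))
```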

How TrustFlo can help: our scanner finds shadow AI automatically, maps data flows, and keeps a live register you can export for audits.

2. Set simple house rules

What you do: publish a short AI policy people will follow. Name the approved tools, the data that is off-limits, and who signs off on new use cases.
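
As a rough sketch of how such house rules can be made checkable, the snippet below encodes an approved-tools list, off-limits data categories, and a named approver; the specific tools, categories, and role are placeholders, not a recommended policy.

```python
# Illustrative "house rules" expressed as data that a script or gateway can check.
AI_POLICY = {
    "approved_tools": {"ChatGPT Enterprise", "GitHub Copilot"},
    "blocked_data": {"customer PII", "HR records", "contracts"},
    "new_use_case_approver": "AI governance lead",  # who signs off on new uses
}

def is_allowed(tool: str, data_categories: set[str]) -> bool:
    """Allowed only if the tool is approved and it touches none
    of the off-limits data categories."""
    return (tool in AI_POLICY["approved_tools"]
            and not data_categories & AI_POLICY["blocked_data"])

print(is_allowed("ChatGPT Enterprise", {"marketing copy"}))  # True
print(is_allowed("ChatGPT (free tier)", {"customer PII"}))   # False
```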

How TrustFlo can help: policy templates, built-in approvals, and in-product prompts that steer people to approved tools instead of risky workarounds.

3. Add basic guardrails

What you do: control who can use what, and protect sensitive data.
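
For the "protect sensitive data" part, here is a minimal redaction sketch that masks obvious identifiers before text leaves your environment; the patterns are deliberately crude assumptions, and real DLP tooling detects far more than emails and phone numbers.

```python
import re

# Rough patterns for the sketch; production DLP uses much richer detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches with placeholders before sending text to an external GenAI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```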

How TrustFlo can help: one-click connections to your existing identity and security stack, automatic redaction for sensitive fields, and activity logs you can actually use.

4. Connect it to the AI Act

What you do: show that you use AI responsibly, in everyday language.

  • If you use AI for decisions that affect people, such as hiring, credit checks, or education, treat it as high-risk. Keep a human in the loop, log how the tool is used, and check that input data is accurate and relevant.
  • If you use general-purpose AI tools, know which model and version you are using and tell users when content was AI-generated where required.
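
As a rough illustration of the first point, the snippet below maps a use-case description to the Act's risk tiers using hand-picked keywords; real classification follows the Act's annexes and legal review, the keywords are placeholders, and prohibited ("unacceptable") practices are left out of the sketch.

```python
# Illustrative only: keyword matching is not how legal classification works.
HIGH_RISK_HINTS = {"hiring", "recruitment", "credit", "education", "exam"}
LIMITED_RISK_HINTS = {"chatbot", "content generation"}

def rough_risk_tier(use_case: str) -> str:
    text = use_case.lower()
    if any(k in text for k in HIGH_RISK_HINTS):
        return "high"      # human oversight, logging, required assessments
    if any(k in text for k in LIMITED_RISK_HINTS):
        return "limited"   # e.g. tell users when they interact with AI
    return "minimal"

print(rough_risk_tier("Résumé screening for hiring"))       # high
print(rough_risk_tier("Marketing chatbot on the website"))  # limited
```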

How TrustFlo can help: we automatically categorize your tools against the AI Act criteria, generate simple action items, checklists, and records, and produce exportable reports aligned to provider and deployer duties. You get impact assessment templates, evidence capture, and a clear paper trail without heavy legal lift.

Outcome: you know who is using which AI, on what data, with what controls. Shadow AI goes down, security goes up, and you have the documentation to show AI Act compliance when asked.

Ready to get compliant?

See how TrustFlo helps you discover shadow AI and prepare for the AI Act.