AI Act

Is the hiring platform you're using considered a "high-risk" system under the AI Act?

If your organization uses software to screen CVs, rank candidates, or suggest who should advance to interview, you may be in high-risk territory - even if a human makes the final call. Here's what the Act actually says and how to determine your risk level.

Hanna
Co-Founder & Tech Lawyer
October 20, 2025 · 9 min read

Hiring teams have embraced "AI-powered" recruiting, and your organization likely already uses an AI-enabled hiring platform. Even if the vendor classifies the product as limited-risk, regulators will look at what it actually does in your company's hiring process. If that software screens CVs, ranks candidates, or even suggests who should advance to interview, you may be in high-risk territory - even if a human makes the final call.

Let's unpack what the Act actually says, who is responsible, and how to tell where your system falls.

Who counts as a "deployer" and why that matters

Under the AI Act, a deployer is the organization that uses an AI system in its operations. If your company uses Greenhouse, Workday, or another platform to help manage recruitment, you are the deployer - even though the software is built and maintained by someone else (the provider).

Being a deployer carries its own obligations, especially when the system is high-risk. You must ensure:

  • it's used according to the provider's instructions,
  • appropriate human oversight is in place,
  • logs and records are kept,
  • and individuals affected (such as candidates) are informed.

So even if the provider is the one responsible for the system's conformity assessment and CE marking, deployers cannot simply "trust the label." The way you configure and use the system can change its risk classification.
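
If it helps to make those duties concrete, the four obligations above can be tracked as a simple internal checklist. A minimal sketch in Python - the keys are our own shorthand for illustration, not AI Act terminology:

```python
# A hypothetical internal tracker mirroring the deployer obligations above;
# the keys are our own shorthand, not AI Act terminology.
DEPLOYER_CHECKLIST = {
    "used_per_provider_instructions": False,
    "human_oversight_assigned": False,
    "logs_and_records_kept": False,
    "affected_candidates_informed": False,
}

def open_items(checklist: dict) -> list:
    """Return the obligations that still lack evidence."""
    return [item for item, done in checklist.items() if not done]

print(open_items(DEPLOYER_CHECKLIST))  # all four, until each is evidenced
```

Each key would only flip to True once it points to actual evidence: a configuration document, a named oversight owner, a log retention policy, a candidate notice.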

What the AI Act says about "high-risk" systems

Annex III of the Act lists the categories of high-risk use, and employment is one of them. Specifically, hiring systems are high-risk when they are intended to be used for recruitment or selection - for example, to place targeted job ads, analyse and filter applications, or evaluate candidates. In practice, that captures any tool whose functionality influences who is considered or who advances in the process.

That means any system that scores, ranks, filters, or otherwise influences access to employment is, by default, considered high-risk. Such systems must go through a conformity assessment, carry CE marking, and appear in the EU's public database of high-risk AI systems.

How to determine whether a hiring platform is high-risk

To decide, look beyond the marketing language and focus on what the system actually does:

| Function | Likely Classification | Why |
| --- | --- | --- |
| Workflow or applicant tracking (no AI scoring) | Not high-risk | Administrative only, no automated evaluation |
| AI-based ranking, scoring, or fit prediction | High-risk | Directly influences candidate selection |
| Interview or video analysis | High-risk | Evaluates personal characteristics for employment |
| Scheduling or messaging automation | Not high-risk | No assessment or selection function |
| Recommendation engine (suggests top candidates) | Likely high-risk | Influences access to employment |
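
The same logic can be expressed as a rough triage rule: one high-risk function anywhere in the stack makes the deployment high-risk, however many administrative modules surround it. A minimal sketch, with module names that are our own placeholders rather than AI Act categories:

```python
# Hypothetical triage helper mirroring the table above. Module names are
# illustrative placeholders, not AI Act categories.
HIGH_RISK_FUNCTIONS = {
    "cv_ranking", "fit_scoring", "video_analysis", "candidate_recommendation",
}
ADMIN_ONLY_FUNCTIONS = {"applicant_tracking", "scheduling", "messaging"}

def likely_high_risk(active_modules: set) -> bool:
    """High-risk if any active module evaluates, ranks, or filters
    candidates, however many admin-only modules run alongside it."""
    return bool(active_modules & HIGH_RISK_FUNCTIONS)

print(likely_high_risk(ADMIN_ONLY_FUNCTIONS))                    # False
print(likely_high_risk(ADMIN_ONLY_FUNCTIONS | {"fit_scoring"}))  # True
```

Note the asymmetry: adding admin-only modules never rescues a platform that also ranks or scores.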

Function, not framing

Regulators have been clear: classification depends on function, not framing. In other words, calling a feature "decision support" or "AI-assisted insights" doesn't exempt it from being high-risk if it effectively shapes the hiring outcome.

If the model screens out 80% of candidates before a human ever looks at them, it's high-risk, even if someone later "approves" the shortlist.
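
One practical way to test this is to pull the platform's decision logs and measure how many candidates never reach a human. A minimal sketch, assuming hypothetical log fields like outcome and human_reviewed:

```python
# Hypothetical audit metric: what share of candidates does the system
# reject before any human looks at them? The log fields are assumptions.
def automation_rate(decisions: list) -> float:
    """Fraction of applicants screened out with no human review."""
    if not decisions:
        return 0.0
    auto_rejected = sum(
        1 for d in decisions
        if d["outcome"] == "rejected" and not d.get("human_reviewed", False)
    )
    return auto_rejected / len(decisions)

sample = [
    {"outcome": "rejected", "human_reviewed": False},
    {"outcome": "rejected", "human_reviewed": False},
    {"outcome": "advanced", "human_reviewed": True},
]
print(automation_rate(sample))  # 0.67 - two of three never seen by a human
```

If the result sits anywhere near the 80% in the example above, the "human in the loop" is mostly reviewing a shortlist the model already built - which is exactly the scenario the next section addresses.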

"Human-in-the-loop" ≠ automatic safe harbor

Vendors might argue that their tools are safe because humans remain "in the loop." But under the AI Act, human oversight must be meaningful, not just a checkbox. If human review is superficial, or simply rubber-stamps the AI's recommendations, regulators will still treat the system as high-risk.

Why many HR tech systems will be considered high-risk

Recruitment has been one of the earliest and most widespread applications of AI. Most modern platforms already include features such as:

  • CV parsing and automated fit scoring,
  • algorithmic shortlisting or ranking,
  • AI-driven recommendations,
  • or video-based candidate analysis.

These functionalities all fall within the employment provisions of Annex III. So, while vendors will understandably prefer to classify their systems as low-risk, the reality is that many current HR tech products perform high-risk functions in practice.

Some systems will legitimately fall outside the scope

Not every tool that touches hiring is high-risk. Platforms limited to logistics (e.g. scheduling, applicant tracking, documentation) that don't use AI to screen, rank, or evaluate candidates can legitimately fall outside Annex III, because they perform no recruitment or selection function.

Still, deployers should document that reasoning in an internal note, outlining:

  • which modules are used,
  • whether any AI-based assessment is active,
  • and why that use doesn't meet Annex III's definition.

That written record is your best defense if questions arise later.
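
What might that record look like? A minimal sketch as a structured note - the fields are our suggestion, not a format the Act prescribes:

```python
# Illustrative structure for the internal note described above; the fields
# are our suggestion, not a format prescribed by the AI Act.
from dataclasses import dataclass
from typing import List

@dataclass
class ClassificationNote:
    platform: str                 # e.g. the ATS or hiring tool in question
    modules_in_use: List[str]     # which modules are actually switched on
    ai_assessment_active: bool    # is any AI-based evaluation enabled?
    annex_iii_reasoning: str      # why this use does or doesn't meet Annex III
    reviewed_by: str
    review_date: str
```

However you store it - a dataclass, a spreadsheet row, or a signed memo - the point is that the reasoning is written down, dated, and attributable.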

The takeaway

The AI Act doesn't just regulate "AI companies." It also reaches every organization that uses AI, including in hiring. As a deployer, your role isn't to second-guess your vendors' compliance, but to understand your own use and document your reasoning.

  • If the system screens, scores, or ranks candidates, assume high-risk.
  • If it only handles admin or logistics, document why it's not.
  • And when in doubt, focus on what the system actually does, not what the marketing says.

The coming year will bring more guidance, but one principle is already clear: regulators will look at substance over marketing spin.

Ready to get compliant?

See how Trustflo helps you discover shadow AI and prepare for the AI Act.