The Alignment Advantage Report: How Recruiters and Job Seekers Use AI – and Where Perceptions and Evaluations Diverge
- Marcus

- Dec 7, 2025
- 4 min read

I recently read The Alignment Advantage Report by Checkr, which surveyed 3,000 HR professionals and 3,000 job candidates about how AI is used and perceived in recruiting.
The report offers direct comparisons of expectations and values from both groups, identifying where perceptions align and diverge. These differences reveal actionable insights for recruiting teams using AI now or in the future.
The Study Base: 6,000 Perspectives on AI in Recruiting
The report explores, among other things, the following questions:
Which AI tools are used by HR teams?
Which AI tools are used by candidates?
How are fairness, transparency, and the overall impact of AI perceived?
Where do perspectives align – and where do clear gaps emerge?
The focus is therefore not on strategic models or future visions, but on a systematic comparison of two perspectives on the same process: recruiting from the company side and job searching from the candidate side.
AI Use in Recruiting: Process Support on the Employer Side
Today, HR teams primarily use AI to make processes more efficient, faster, and more consistent. The most frequently mentioned use cases include:
resume screening
interview summaries
automated scheduling
background checks
Only about one-quarter of HR teams have clear goals, KPIs, and management support for their AI use. This shows that although many organizations have adopted AI, adoption is driven more by immediate operational needs than by a strategic framework.
AI Use on the Candidate Side: Optimizing Personal Positioning
Job seekers, too, are now using AI on a broad scale. Around 61% of applicants report using AI during the application process, primarily for:
optimizing their CV
writing cover letters
job matching
interview preparation
While organizations primarily use AI for process control, organization, and selection, candidates mainly rely on it to optimize how they present themselves and how well they match a specific role. Both sides are therefore using similar technologies – but with clearly different functional objectives.
Shared Values: Strong Alignment on Ethics and Priorities
The report shows that, despite their different roles, both sides share similar core values.
Both HR professionals and candidates identify the following as their most critical ethical priorities:
data protection
transparency in AI-driven decisions
fairness across all demographic groups
This alignment matters: both groups see AI as more than an efficiency tool, and ethical considerations carry real weight for both.
Where Perception and Evaluation Clearly Diverge
Despite shared values, the report points out significant gaps in perception and evaluation that matter for recruiting teams.
Transparency: Being Informed vs. Feeling Informed
One of the report’s key findings:
Around one-third of candidates say they were never informed that AI was used in the application process.
At the same time, many HR teams believe they are transparent about the tools they use and about the division of responsibility between humans and AI.
This isn't a factual contradiction. It's a gap between official transparency and how it's perceived:
HR teams often assess transparency based on whether information is available in principle.
Candidates, on the other hand, evaluate transparency based on whether they can clearly recognize and understand the use of AI in the actual process.
Fairness: Different Levels of Trust in AI
There is also a clear divergence in how fairness is evaluated:
More than half of HR managers believe that AI-supported decisions are at least as fair as human assessments.
On the candidate side, only about one-third agree with this view.
A similarly large share of candidates is generally skeptical of AI.
HR teams tend to evaluate AI primarily in terms of:
consistency
standardization
reduction of individual bias
Candidates, however, tend to associate fairness more strongly with:
transparency and comprehensibility
individual evaluation
and visible human responsibility
Impact on the Human Aspect of the Process
The most significant split comes with how people feel about the "human touch":
75% of applicants feel that AI-driven recruiting processes are more impersonal.
Only a comparatively small share of HR leaders perceives this effect with similar intensity.
For HR, the focus is often on process quality (speed, structure, documentation).
For candidates, the focus is on relationship quality (feedback, accessibility, individual communication).
Automation increases efficiency but often creates a sense of distance.
AI on Both Sides – and Emerging Operational Grey Areas
The report also clearly shows that the parallel use of AI on both sides creates new areas of tension:
Companies use AI for selection and evaluation.
Candidates use AI to optimize application documents, answers, and preparation.
This raises new operational questions:
What counts as legitimate support?
Where does optimization become deception?
How should organizations deal with AI-generated texts, tests, or even deepfake elements?
The report does not assess these developments normatively, but it makes one thing clear:
As AI use grows on both sides, clear rules, strong communication, and solid methods matter more.
What Recruiting Teams Can Do Now
The differences highlighted in the report translate into concrete areas of action for recruiting teams:
1. Make Transparency Visible in Day-to-Day Operations
Transparency does not originate in privacy policies – it emerges in the lived experience of the process:
clear signals in screening
understandable explanations in interview evaluations
visible responsibility in decision-making
2. Make Fairness Explainable
It is not enough to make decisions – they must also be communicated in a way candidates can understand:
clearly explain the criteria
structure feedback
make human responsibility visible
3. Clearly Separate Automation from Decision-Making
AI for organization, structure, and preparation
humans for evaluation, contextualization, and final decisions
4. Deliberately Design Human Touchpoints
individual feedback at key moments
accessible contact persons
personal communication during critical transitions
5. Actively Account for AI Use on the Candidate Side
sensitize recruiters to AI-optimized applications
develop clear internal guidelines
realistically recalibrate expectations regarding the informational value of application documents
6. Systematically Measure Perception
In addition to process KPIs, also track (a short measurement sketch follows this list):
perceived transparency
perceived fairness
experienced appreciation
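One lightweight way to make these perception metrics measurable – a minimal sketch, not taken from the report – is a short post-process candidate survey with 1–5 ratings for the three dimensions above, aggregated per pipeline stage alongside the usual process KPIs. The field names and sample data below are illustrative assumptions:

```python
# Minimal sketch (assumed survey layout, not from the Checkr report):
# each response records the pipeline stage and 1-5 ratings for the
# three perception dimensions listed above.
from collections import defaultdict
from statistics import mean

responses = [
    {"stage": "screening", "transparency": 3, "fairness": 4, "appreciation": 2},
    {"stage": "screening", "transparency": 2, "fairness": 3, "appreciation": 3},
    {"stage": "interview", "transparency": 4, "fairness": 4, "appreciation": 5},
]

def perception_scores(rows):
    """Average each perception rating per pipeline stage."""
    by_stage = defaultdict(list)
    for row in rows:
        by_stage[row["stage"]].append(row)
    return {
        stage: {
            metric: round(mean(r[metric] for r in group), 2)
            for metric in ("transparency", "fairness", "appreciation")
        }
        for stage, group in by_stage.items()
    }

print(perception_scores(responses))
# e.g. {'screening': {'transparency': 2.5, ...}, 'interview': {...}}
```

Tracked over time and per stage, even such simple averages show where perceived transparency or fairness drops, independently of how fast or consistent the process looks on paper.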
7. Sensitize the Team in a Targeted Way
The most significant leverage lies in day-to-day operations:
training formats
internal reflection sessions
targeted workshops to analyze processes from both perspectives
Conclusion
The Alignment Advantage Report by Checkr clearly shows that both recruiting teams and job seekers use AI, but with different goals, views, and standards.
Both sides share:
central ethical values
a strong desire for efficiency
and an expectation of fair processes
However, there are significant differences in how transparency is perceived, in how fairness is judged, and in how AI affects the human side of recruiting.
Recruiting teams gain the most leverage not from more technology, but from purposefully designing the interfaces between automation, communication, and human responsibility. Here, teams determine whether AI acts as professional support or as an impersonal filter.








