How AI has forever changed the hiring process
Artificial intelligence is no longer an emerging curiosity in the world of recruitment. It has become the default. In 2025, nearly every major company uses AI to some degree in hiring, whether for screening CVs, automating interview scheduling, or assessing candidates’ suitability for complex roles. A recent survey found that 99% of hiring managers now rely on AI tools in their recruitment processes, and 95% expect to increase their investment in these technologies in the coming year [1].
This enthusiasm is understandable. AI promises speed and scale, an antidote to the volume of applications that can overwhelm even the most seasoned hiring teams. But behind the scenes, AI is also having a more profound effect. It is reshaping the psychology of candidates, the fairness of assessments, and the very nature of how we define merit.
The impact on behaviour
For employers and candidates alike, the rise of AI in hiring has created a tension between efficiency and authenticity. As companies pursue the allure of automation, they risk sacrificing precisely what they claim to value most: diversity of thought, genuine human connection, and a sense of fairness.
It is a paradox that researchers have begun to explore in detail. Writing in Harvard Business Review, Jonas Goergen and colleagues at the University of St. Gallen, along with Anne-Kathrin Klesse at Erasmus University, have demonstrated that AI assessment tools don’t just change how candidates are evaluated but also how candidates themselves behave [2]. Their research, spanning more than 13,000 participants, reveals a powerful shift in self-presentation. Candidates who knew AI was assessing them became more likely to emphasise analytical traits like rule-following and data-centric thinking, and to downplay qualities like empathy, creativity, and intuition. In effect, AI nudges people toward a uniform version of competence.
This behavioural distortion is more than an academic curiosity. It has profound implications for the composition of talent pipelines. “When candidates systematically misrepresent themselves, organisations face critical challenges,” the researchers wrote. “Talent pool distortion, validity compromise, and unintended homogenisation” all threaten to undermine the very purpose of assessment [3].
The impact on bias
Companies frequently tout AI as a tool to combat bias. It is true that conventional hiring practices like unstructured interviews and gut-feel assessments are notoriously prone to human prejudice. Yet the idea that AI is inherently objective is wishful thinking. In a survey reported by Forbes, nearly all hiring managers agreed that AI can and does produce biased recommendations, whether because of skewed training data or opaque algorithms that replicate past discrimination [4].
These biases are not always obvious. Writing in Forbes, Tomas Chamorro-Premuzic describes a troubling dynamic in which AI systems trained on historical promotion and hiring data will often select for traits that have little to do with genuine job performance. Overconfidence, self-promotion, and low agreeableness emerge as reliable predictors of career progression not because these qualities drive better results, but because they are the behaviours that have historically been rewarded in corporate hierarchies [5].
“If we train AI to predict who will get promoted in a company, it will efficiently select politicians,” Chamorro-Premuzic observed. In other words, algorithms may not only fail to fix bias, but they can even automate and entrench it at scale [6].
The impact on process
This is not to say AI has delivered no improvements. It has undoubtedly made it easier for recruiters to process vast quantities of applications and to reach more diverse candidate pools. Unilever, for example, uses HireVue’s AI tools to screen early-career applicants, reporting savings of 50,000 hours and over $1 million in the process [7]. Many hiring managers argue that AI helps free their time for strategic work, cross-training, and genuine human connection [8].
But the candidate experience tells another story. A survey from the American Staffing Association found that nearly half of job seekers believe AI recruiting tools are more biased than human recruiters. Among those actively looking for work, that scepticism runs even deeper [9].
This mistrust is not an abstraction. It has practical consequences. Candidates who perceive AI as opaque and discriminatory are less likely to engage authentically. Instead, they game the system, crafting applications that match algorithmic preferences but reveal little about their real strengths.
The result is a new kind of arms race. Employers deploy AI to filter candidates more efficiently. Candidates, in turn, adopt their own AI tools. They use generative CV builders and ChatGPT-powered cover letters to optimise for keyword matching and pass automated screens. A Software Finder study found that 75% of candidates now use AI to assist with job applications [10].
Yet this widespread adoption brings its own contradictions. The same hiring managers who champion AI’s efficiency are often the quickest to penalise candidates for using it. One in four recruiters admitted they would reject candidates whose CVs were obviously AI-generated, even though 75% could not reliably tell the difference [11].
“It’s less about hypocrisy and more about two sides of the same coin — efficiency for organisations versus genuine skills from candidates,” Adnan Malik, CEO of Software Finder, explained [12].
The paradox is unmistakable. AI can help job seekers improve their materials and reach more opportunities. But the more candidates use it, the more they risk eroding trust and inviting suspicion. The hiring process becomes not a search for mutual fit, but a contest over who can best simulate authenticity.
The reaction
Companies now face an urgent question as to whether they can harness AI’s benefits without hollowing out the very humanity they claim to value.
Some organisations are beginning to grapple with this challenge. New regulations are forcing more transparency. In the European Union, the AI Act requires companies to disclose when AI is used in high-stakes decisions. In New York City, Local Law 144 mandates annual audits of AI hiring systems for bias [13].
Disclosure is an important step, but it is not sufficient. Simply learning that AI is involved prompts candidates to adjust their behaviour, further compromising the validity of assessments. The solution, researchers argue, is radical transparency. Candidates need clear, specific communication about what AI evaluates and why [14].
Most employers fall short of this standard. Career pages tend to mention AI vaguely, if at all. As a result, candidates fill the information void themselves, sharing blog posts and YouTube tutorials that may bear little resemblance to reality. In effect, companies have ceded control over how their processes are perceived.
Beyond transparency, some organisations are experimenting with hybrid approaches, combining AI assessment with human judgement. Salesforce, Nvidia and Philip Morris International all guarantee that human reviewers make final decisions [15]. Research suggests this does mitigate, but does not eliminate, candidates’ tendency to perform analytically.
Even in hybrid systems, the human element requires deliberate investment. Kathleen Duffy, writing in Forbes, argued that AI alone can never replace the nuanced work of recruiters, who at their best are able to uncover hidden potential, build relationships, and assess qualities like resilience and adaptability [16]. AI can accelerate certain parts of the process like identifying potential matches and collecting structured data, but it lacks the intuition to distinguish between a candidate who ticks the right boxes and one who will truly thrive.
This is why the most effective models integrate AI into a broader framework of human-centred recruitment. At Duffy Group, recruiters use AI for initial research such as sourcing candidates and gathering competitive intelligence, but reserve the core assessment and relationship-building for human professionals [17].
How to react?
For job seekers, this evolving landscape demands new strategies. While it is tempting to rely on AI-generated applications, success ultimately hinges on authenticity. Julia Arpag, CEO of Aligned Recruitment, warned that candidates who lean too heavily on generative tools risk alienating the very decision-makers they hope to impress [18]. “AI can help with structure and phrasing, but hiring managers still want authenticity,” she cautioned.
Arpag recommends a balanced approach of using AI to refine and streamline, but ensuring that applications reflect genuine experience and voice. Networking, she notes, remains as essential as ever, adding: “Word of mouth recommendations still matter.” It’s a refreshing reminder that the human dimension of hiring has not entirely disappeared, even in the era of automation [19].
The question is whether employers will prioritise this human dimension. So far, most companies have focused on efficiency gains, measuring success by cost reduction. Yet negative long-term consequences in the form of talent homogenisation, candidate disengagement and entrenched biases will be harder to quantify but potentially more damaging.
Going forward
As far as Chamorro-Premuzic sees it, while recruitment may look superficially transformed, much of the change has been incremental rather than revolutionary. “So far, it is a case of mostly running faster in the same direction,” he wrote. “But it isn’t clear whether this is the right direction to begin with” [20].
This, perhaps, is the heart of the matter. AI promises to solve the inefficiencies and biases of traditional hiring. But in the rush to embrace automation, organisations risk creating a system that is more efficient, but no more equitable, transparent, or humane.
To avoid this outcome, leaders must resist the temptation to delegate judgment to algorithms. They must insist on clarity about what AI measures, invest in human-centred processes that counteract bias, and remember that technology is a tool, not a philosophy.
As companies grapple with these choices, they would do well to remember that hiring is not merely a transaction. It is an exercise in trust, empathy, and imagination, qualities no machine can fully replicate. In the end, the organisations that thrive will be those that understand AI’s power but also its limits, and that remain committed to seeing people, not just patterns.
Sources
[2] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior
[3] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior
[7] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior
[13] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior
[14] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior
[15] https://hbr.org/2025/07/how-ai-assessment-tools-affect-job-candidates-behavior