AI assessments skew recruitment processes

Jul 02 2025 by Management-Issues

Job candidates who think they are being assessed for a job by AI tend to highlight their analytical capabilities and downplay more intuitive or emotional qualities because they believe they'll gain a better score, according to new research from Rotterdam School of Management, Erasmus University (RSM).

According to Dr Anne-Kathrin Klesse and co-researchers Jonas Görgen and Dr Emanuel de Bellis, organisations must take greater care when designing and communicating AI assessment tools to avoid unintentional biases.

"The finding that people strategically highlight certain capabilities or characteristics implies that candidates present a skewed picture of who they really are," said Dr Klesse.

By simulating a job selection process and documenting how applicants presented themselves either to AI or human assessors, the researchers were able to document how such behaviour can have consequences for who gets selected – or rejected – for a position.

"If your organisation uses AI in hiring, promotion, or evaluation, you should be aware that these tools do more than just change the process. They may also influence who gets the job," Dr Klesse added.

To avoid candidates feeling pressured to present an adjusted impression of themselves, Dr Klesse suggests:

  • Be transparent about your use of AI and the criteria it employs when assessing candidates.
  • Audit your AI assessment systems regularly for behavioural distortions. Have you noticed that the kind of people selected by your AI appear to have changed their behaviour? You may be creating a narrower talent pool and an unintended bias in your selection process.
  • Inform your recruitment, hiring, or admissions teams that candidates may change their behaviour when they know they are being assessed by AI. This awareness can help ensure you don't miss out on great candidates who present themselves differently under AI-based evaluation.

The researchers conducted 12 studies with over 13,000 participants, recording how people behaved (or said they would behave) when assessed by AI rather than by a human assessor, in both real and simulated assessment settings. They also collaborated with a Rotterdam-based start-up that offers competency-based, fair hiring software that doesn't use AI. The start-up surveyed applicants after they completed the application process, and the results showed that candidates reported changing their behaviour when they thought they were being assessed by AI.

The researchers' findings have broader implications as AI becomes more involved in decisions about people's lives and futures. It is vital, they say, to also understand the human side of the equation. Using AI assessment tools does not only improve efficiency or help organisations cut costs; it also changes people's behaviour. Understanding these subtle behavioural changes is essential to fully grasp the consequences of outsourcing assessment to AI.

"AI assessment changes human behavior" is published in Proceedings of the National Academy of Sciences (PNAS).