Imagine a future where AI assistants handle company hiring processes. Is this risky or beneficial? What if AI algorithms even negotiated salaries?
The Role of AI in Recruitment
AI can bring efficiency, consistency, and scale to recruitment, saving time and resources. However, it also raises critical questions about its limits in capturing the complexities of human interaction. To weigh this balance, we must examine both the advantages and the disadvantages of using AI in recruitment.
Ethical Considerations
Using AI to negotiate salaries or make critical hiring decisions raises ethical concerns. Transferring authority to an algorithm can create a risk of delegating responsibility without proper oversight. AI should not be seen as an absolute authority but as a support and information tool, meant to complement human judgment, not replace it entirely.
In a future where AI systems conduct interviews or salary negotiations, the benefits are easy to imagine: less favoritism and better use of time. However, there is also a risk of dehumanizing these interactions. It is therefore crucial that human oversight remains active and that final decisions are validated by people.
An AI assistant aiding recruitment for public sector positions or government institutions could potentially benefit meritocracy. Such technology could reduce nepotism and favoritism, allowing selection based strictly on competence. However, until this ideal is reached, we must be aware that AI should be used cautiously, as it is only one part of the decision-making equation, where the human remains essential.
The Amazon Case: Algorithmic Bias in Recruitment
In 2018, Amazon faced issues with an AI tool it had built to evaluate job candidates: the tool turned out to be biased against women. The system used a machine learning model trained on the company's historical data on candidates and employees. Because that data was dominated by men, especially in technical roles, the resulting model favored male candidates. It penalized resumes containing words associated with women, such as "women's" in "captain of the women's chess club".
AI algorithms are trained on historical datasets and can therefore perpetuate the biases embedded in them. Rather than correcting this distortion, Amazon's model reproduced it, penalizing resumes associated with women and reflecting a learned bias rather than any measure of candidate quality. To prevent such situations, algorithms should be trained on diverse datasets with representative examples from all demographic groups, and models should be continually adjusted and tested for fairness to keep the recruitment process equitable.
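To make the mechanism concrete, here is a deliberately tiny sketch, not Amazon's actual system, showing how a model trained on skewed historical outcomes can attach a negative weight to a gendered token:

```python
# Toy illustration (hypothetical data, not Amazon's real pipeline):
# a text classifier trained on skewed historical hiring outcomes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past decisions: 1 = hired, 0 = rejected. Hires skew male, so the
# token "women's" co-occurs only with rejections.
resumes = [
    "java developer chess club captain",          # hired
    "python engineer robotics club captain",      # hired
    "java developer women's chess club captain",  # rejected
    "python engineer women's robotics club",      # rejected
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for "women" is negative: the model has encoded
# the historical bias, not anything about candidate quality.
idx = vec.vocabulary_["women"]  # tokenizer drops the trailing "'s"
print("weight for 'women':", model.coef_[0][idx])
```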
After discovering the bias in its algorithm, Amazon discontinued its use in recruitment.
Other Instances of Algorithmic Bias
Besides Amazon, other companies have faced similar issues with their recruitment algorithms. For example, recruitment platforms using AI to analyze resumes and recommend candidates have been criticized for favoring candidates from certain groups, particularly based on name, age, or professional background, thus perpetuating discrimination.
The Amazon case is not isolated; several large companies have faced similar accusations of algorithmic discrimination. In 2019, the Apple Card credit limit algorithm, developed by Goldman Sachs, was accused of discriminating against women after widely shared reports of women receiving significantly lower credit limits than men with similar financial profiles. The incident highlights a common problem in AI: algorithms trained on historical data risk perpetuating pre-existing biases. To avoid such situations, algorithms must be audited and adjusted continuously, and automated decisions should be accompanied by human review to ensure fairness and gender equality.
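One practical form of such an audit is to compare outcomes across demographic groups. The sketch below uses invented data and the widely cited "four-fifths rule" threshold to flag disparate impact:

```python
# Minimal fairness-audit sketch (invented data): compare selection
# rates across groups and flag disparate impact.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: (group, approved) pairs; returns approval rate per group."""
    total, ok = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        total[group] += 1
        ok[group] += approved
    return {g: ok[g] / total[g] for g in total}

def disparate_impact(rates):
    """Lowest selection rate divided by the highest; the common
    'four-fifths rule' flags ratios below 0.8."""
    return min(rates.values()) / max(rates.values())

decisions = [("men", 1), ("men", 1), ("men", 0),
             ("women", 1), ("women", 0), ("women", 0)]
rates = selection_rates(decisions)
print(rates, "impact ratio:", round(disparate_impact(rates), 2))  # 0.5: flag
```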
AI Systems and Social Behavior Measurement
Under the AI Act, social scoring systems, which rate people based on their social behavior, are prohibited because they can lead to discrimination and limit individual rights. Such systems can evaluate a person's social behavior and condition their access to services or opportunities, creating a fundamentally unfair system.
Algorithms that claim to analyze a person's emotions or feelings are likewise considered problematic, because they can generate subjective and biased interpretations that undermine objective assessment. Emotions are difficult to quantify accurately, and algorithms can misread facial expressions, tone of voice, or other non-verbal and paraverbal cues, especially for people from different cultures or with unusual communication styles. This can lead to unintentional discrimination or wrong decisions in recruitment, education, and other sensitive areas where fairness and impartiality are essential; the AI Act itself prohibits emotion-recognition systems in workplaces and schools, with narrow exceptions.
AI in Recruitment: A High-Risk Application
Recruitment is classified as a high-risk application under the EU AI Act because decisions made by a selection algorithm directly affect people's lives. An algorithm that evaluates job candidates has the power to decide who gets an opportunity and who does not. If it is biased or makes mistakes, it can unfairly exclude qualified people, affecting their careers and livelihoods. Such a system can also amplify existing biases related to gender, race, or age, producing systematic, if unintentional, discrimination. For this reason, the AI Act imposes strict requirements, including transparency, data governance, and human oversight, to ensure that algorithms used in recruitment are fair and reliable.
AI Assistants for Recruiters
AI assistants for recruiters are AI-based tools that help human resources teams manage the recruitment process more efficiently. These assistants automate several repetitive and administrative tasks, allowing recruiters to focus on strategic aspects and quality interactions with candidates.
A major advantage of using AI in resume selection is speed and large-scale processing capacity: AI can process thousands of resumes a day, reportedly up to ten times more than manual filtering. Algorithms can evaluate thousands of applications in a short time, filtering out resumes that do not meet minimum requirements and handing recruiters a shortlist of the most suitable candidates. AI can also help standardize the selection process, reducing the errors and subjectivity that often creep into traditional human assessments.
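As a rough illustration of how such a filter-and-rank stage might work, here is a toy sketch; the requirements and keywords are invented, and a real applicant tracking system is far more elaborate:

```python
# Toy screening sketch: drop resumes missing hard requirements, then
# rank the rest by how many preferred keywords they mention.
def screen(resumes, required, preferred, shortlist_size=3):
    """resumes: dict of candidate name -> lowercase resume text."""
    passing = {name: text for name, text in resumes.items()
               if all(req in text for req in required)}  # hard filter
    ranked = sorted(passing,
                    key=lambda n: sum(kw in passing[n] for kw in preferred),
                    reverse=True)                        # best match first
    return ranked[:shortlist_size]

resumes = {
    "ana": "5 years python, sql, team lead experience",
    "bob": "java and sql developer",
    "eva": "python and sql, docker, kubernetes",
}
print(screen(resumes, required=["sql"], preferred=["python", "docker"]))
# ['eva', 'ana', 'bob']
```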
AI assistants can also send candidates automated updates on the status of their application, interview scheduling, or even personalized feedback, improving the candidate experience. Automated follow-up messages help maintain constant, professional communication throughout the recruitment process.
Another advantage is AI's ability to analyze labor market trends and provide recommendations related to open positions, competitive salary packages, and other relevant aspects of recruitment. These assistants can analyze data on market requirements and available candidates, helping recruiters make more informed decisions.
However, there are also significant risks in using AI for resume selection. If algorithms are trained on data reflecting past biases or social inequalities, they can perpetuate the same discrimination in evaluating candidates. For example, candidates from minority groups may be unfairly disadvantaged if the algorithm favors previous success patterns that may be influenced by factors such as gender or age. Thus, monitoring and adjusting these algorithms is essential to prevent discrimination and ensure a fair process.
Transparency and accountability must be integral to automated selection. To prevent incorrect or biased decisions, human intervention in the process (human in the loop) is essential: people must be able to review and correct the algorithm's results where necessary, and to continuously monitor how it works so that it does not automatically discriminate against or exclude valuable candidates. In an automated recruitment environment, maintaining this balance between efficiency and human intervention is what keeps the selection process fair.
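A minimal sketch of that human-in-the-loop routing might look like this; the threshold is an illustrative assumption, and the key point is that nothing is rejected without a person seeing it:

```python
# Human-in-the-loop routing sketch (illustrative threshold): only
# confident matches are fast-tracked; everything else, including
# would-be rejections, is queued for a recruiter to review.
def route(candidates, fast_track_at=0.9):
    """candidates: list of (name, fit_score) with scores in [0, 1]."""
    fast_track, review_queue = [], []
    for name, score in candidates:
        if score >= fast_track_at:
            fast_track.append(name)             # strong match: advance
        else:
            review_queue.append((name, score))  # a human decides
    return fast_track, review_queue

fast, queue = route([("ana", 0.95), ("bob", 0.40), ("eva", 0.75)])
print("fast-tracked:", fast)       # ['ana']
print("recruiter review:", queue)  # [('bob', 0.4), ('eva', 0.75)]
```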
AI Assistants for Candidate Interaction
AI assistants for interacting with candidates are increasingly used in recruitment, playing the role of automated communication with candidates at various stages of the hiring process. These automated systems can answer frequently asked questions, provide information about open positions, and guide candidates through the application process.
A major advantage of using AI assistants is continuous availability. They can interact with candidates 24/7, providing quick answers and eliminating waiting time for human assistance. This improves the candidate experience and maintains a constant flow of communication, which can lead to a higher application completion rate.
In addition, AI assistants can automate repetitive tasks, such as collecting basic candidate information, confirming interview availability, or even providing initial feedback on basic qualifications. This frees up recruiters' time, allowing them to focus on more detailed assessments and higher-level interactions with candidates.
It is important that AI assistants are well integrated with human intervention so that when complex questions or problems arise, a recruiter can intervene quickly. Human oversight and continuous system improvement are essential to ensure that AI assistants provide accurate and helpful information without becoming frustrating for human users.
AI-Generated Resumes
Let's turn things around and look at the opposite perspective. Resumes generated by candidates using AI are becoming increasingly common, as these tools can produce well-structured, optimized documents that use the right keywords for the targeted position. AI-based platforms help candidates generate personalized resumes quickly, identify their strengths, and improve how their experience and skills are presented. Seen this way, it may take an AI to write the resume that another AI will analyze.
The question arises whether AI can detect if a resume was also generated using AI. In principle, an algorithm may recognize certain patterns or styles specific to automatically generated resumes, especially if standardized templates and common phrases offered by AI tools are used. For example, very formal language, the similar structure of certain sections, or the repetition of specific keywords may signal that the document was created with the help of AI.
Currently, technologies that evaluate resumes are not designed to directly detect whether a resume was generated by AI but rather to assess the quality and relevance of the content. Screening algorithms can spot inconsistencies or redundancies in resumes, especially when documents are too general or do not authentically reflect a candidate's experiences. In the future, as the use of AI for creating resumes increases, more advanced solutions may emerge to identify and manage this phenomenon.
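For illustration, the sketch below computes two such weak signals, stock phrases and keyword repetition. The phrase list is hypothetical, and heuristics like these are unreliable on their own:

```python
# Heuristic sketch: weak, easily fooled signals sometimes associated
# with template-generated text. The phrase list is invented.
import re
from collections import Counter

STOCK_PHRASES = [
    "results-driven professional",
    "proven track record",
    "dynamic team player",
]

def template_signals(text):
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    top = Counter(words).most_common(1)[0][1] if words else 0
    return {
        "stock_phrases": sum(p in lowered for p in STOCK_PHRASES),
        "top_word_ratio": top / max(len(words), 1),  # repetition signal
    }

print(template_signals(
    "Results-driven professional with a proven track record in Python. "
    "Python projects, Python tooling, Python mentoring."
))
```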
AI for Employee Satisfaction Monitoring
AI can revolutionize employee satisfaction monitoring by continuously analyzing feedback and data related to their behavior in the workplace. AI can quickly process information from satisfaction surveys and daily feedback, providing an overview of the well-being of the staff. Through techniques such as sentiment analysis, AI can detect the general tone of feedback, classifying it into positive, neutral, or negative categories, offering human resources teams accurate and helpful data to improve the work environment.
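A minimal sketch of lexicon-based sentiment classification, using a toy word list far simpler than a production model, might look like this:

```python
# Toy sentiment classifier: count positive vs. negative words from a
# small, invented lexicon and label the feedback accordingly.
import re

POSITIVE = {"great", "supportive", "flexible", "motivated", "happy"}
NEGATIVE = {"overworked", "stressful", "unclear", "frustrated", "burnout"}

def classify(feedback):
    words = set(re.findall(r"[a-z]+", feedback.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

survey = [
    "great team, very supportive manager",
    "overworked and frustrated, priorities unclear",
    "the office moved to a new building",
]
for entry in survey:
    print(classify(entry), "-", entry)
```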
Another major advantage of AI is its ability to identify hidden problems or trends that might go unnoticed in a manual analysis. For example, AI can detect a subtle drop in employee morale or an increase in frustration, allowing HR teams to intervene before problems become more serious. Additionally, AI can provide personalized recommendations to management, suggesting solutions to improve satisfaction, such as adjusting workloads, initiating training programs, or improving work-life balance.
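One simple way to surface such a gradual decline is to compare recent average sentiment against a longer baseline. The window size and alert threshold below are illustrative assumptions:

```python
# Trend-alert sketch: flag a sustained drop in average weekly
# sentiment relative to the earlier baseline.
def morale_alert(weekly_scores, recent=4, drop=0.15):
    """weekly_scores: chronological average sentiment values in [0, 1]."""
    if len(weekly_scores) < 2 * recent:
        return False                      # not enough history yet
    baseline = sum(weekly_scores[:-recent]) / (len(weekly_scores) - recent)
    current = sum(weekly_scores[-recent:]) / recent
    return baseline - current >= drop     # alert on a sustained decline

scores = [0.72, 0.70, 0.71, 0.69, 0.68, 0.60, 0.55, 0.52, 0.50]
print(morale_alert(scores))  # True: recent weeks sit well below baseline
```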
Confidentiality and ethics are essential in using AI for employee monitoring. It is important that employees are informed about how their feedback is used and that there is human oversight to prevent misinterpretations of the collected data. However, used correctly, AI can help create a more attractive work environment based on objective data that promotes employee engagement and retention.
Developing AI Assistants for Recruitment in Luxembourg
At RMT Labs in Luxembourg, a platform is being developed to automate repetitive recruitment processes. For example, an AI assistant compares a candidate's resume with the job description to determine whether that candidate is suitable for the position. The assistant analyzes technical, communication, management, motivation, and leadership skills. It can then answer a recruiter's questions about concepts listed in the CV, such as what a database is or how Java compares to PHP, helping the recruiter better understand the candidate's profile and the technologies mentioned.
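As a simplified illustration of resume-to-job matching (RMT Labs' actual pipeline is not public), here is a sketch using plain TF-IDF cosine similarity:

```python
# Matching sketch: score vocabulary overlap between a job description
# and a resume. Real systems add skill taxonomies, embeddings, etc.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "senior java developer, spring, sql, team leadership"
resume = "java engineer with spring boot and sql, led a team of four"

vectors = TfidfVectorizer().fit_transform([job, resume])
score = cosine_similarity(vectors[0], vectors[1])[0][0]
print(f"fit score: {score:.2f}")  # higher = closer vocabulary overlap
```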
An analysis of recruitment workflows conducted by a team of researchers concluded that 75% of repetitive recruitment processes can be automated with AI. In the near future, then, it is quite possible that many recruitment processes will put you in front of an AI assistant instead of a person.
Ensuring Compliance with the AI Act
Cristiana Deca, CEO of Decalex, AI and Data Expert, comments on ensuring that AI solutions used in recruitment comply with the AI Act:
"Given that the use of AI solutions in recruitment and employee management is considered high risk for fundamental rights and the safety of people, the decision-making process regarding the introduction of such a solution must take into account the verification of the conformity of the solution by checking the technical documentation, traceability, the ease with which the logic behind the algorithm can be explained, clear and concrete information about the datasets used in the AI learning process. After implementation, it is important to set up a test environment where the solution can be checked for possible risk scenarios to see if it meets the standards imposed by the AI Act and the provisions of the GDPR regarding the automated decision-making process."
Mara Priceputu, PhD in Psychology and Specialist in Organizational Psychology, adds:
"The use of AI applications in personnel recruitment is useful in identifying relevant information that the employee makes available. However, how will it respond to the needs of candidates to obtain, in turn, a clear image of the organization? How will it be able to answer questions and requests of the candidate regarding the real work environment, such as:
Such benchmarks are important indicators that help a candidate choose their employer, so that a reciprocal, realistic choice can support efficient and lasting working relationships. Contact with colleagues who know the concrete realities of the job makes recruitment more realistic and helps avoid staff turnover.
Organizational psychology advocates for building relationships and a work environment based on mutual involvement. Human resources policies include care for the quality of professional and personal life."
The Final Word Should Remain with Humans
It is good to use AI in HR: automation and advanced analytics can make recruitment and resource management far more efficient. However, the final say must belong to the H in HR, the human factor. Artificial intelligence can filter candidates, analyze data, and offer recommendations, but the final decision must be made by a person, to preserve the human element, empathy, and the understanding of nuances that AI cannot fully capture.
Transferring authority to AI can create a problem of overconfidence. If we take everything AI says for granted without checking it, the risk is that the system makes mistakes or perpetuates biases present in the data it was trained on. AI must therefore be a support tool, not a supreme authority in the decision-making process.
It is crucial that humans remain in control of the final decision, using AI as an assistant to evaluate information but intervening actively where value judgments, ethics, or individual situations cannot be handled automatically. In this way, AI adds value without replacing the discernment and responsibility of human decision-making.
Alexandru Dan
CEO, TVL Tech