AI can help streamline recruitment tasks. Indeed, early adopters of AI screening software report a 75% reduction in their cost per screen. Used correctly, AI may help you enhance your recruitment techniques. However, the use of AI in recruiting and selection also carries risks that leave many recruiters fearful and distrustful, and this article covers both sides.
HOW IS AI CHANGING THE RECRUITMENT PROCESS?
Candidate screening is one of the most time-consuming jobs any recruiter will face. The minutiae of a candidate's CV, from attributes and talents to job history and career growth, frequently require a human eye. Advances in artificial intelligence could substantially simplify this process in the near future.
AI-enabled recruiting solutions make it feasible to increase recruitment KPIs while also speeding up the hiring process. An applicant tracking system (ATS) resume checker can efficiently filter all inbound candidate resumes for particular job openings. The process advances to the next step if the resumes include the appropriate skill keywords, education level, and other job-specific information.
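The keyword-filtering step described above can be sketched in a few lines. This is a minimal, illustrative toy, not how any particular ATS product works; the skill list, match threshold, and sample resume text are all hypothetical.

```python
# Minimal sketch of ATS-style keyword screening (illustrative only).
# REQUIRED_SKILLS and MIN_MATCH_RATIO are hypothetical assumptions.

REQUIRED_SKILLS = {"python", "sql", "data analysis"}
MIN_MATCH_RATIO = 0.66  # advance if at least ~2/3 of required skills appear


def screen_resume(resume_text, required_skills=REQUIRED_SKILLS):
    """Return True if the resume mentions enough required skill keywords."""
    text = resume_text.lower()
    matched = {skill for skill in required_skills if skill in text}
    return len(matched) / len(required_skills) >= MIN_MATCH_RATIO


resume = "Experienced analyst skilled in Python and SQL reporting."
print(screen_resume(resume))  # 2 of 3 skills matched -> True
```

Even this toy shows the core limitation the rest of the article explores: the screen only sees whatever signals it was told to look for.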
However, there are some significant advantages and possible legal risks to consider when automating some of the human recruitment tasks.
LEGAL ARTIFICIAL INTELLIGENCE RECRUITMENT RISKS
HR professionals must remember that no technology is flawless, and it should not completely replace the human touch.
AI-recruitment technologies, according to some, are meant to reduce unconscious bias in the recruitment process. However, more work is needed to achieve this. Algorithms use data from previous successful candidates and current staff to enhance the review and screening processes and look for connections between data points, regardless of whether they are meaningful, job-related, or legally permissible.
Consider a workforce that is mostly made up of young men. To mirror the personnel in the firm's current workforce, an algorithm may prefer young, male candidates. The algorithm's attempts to emulate the employer's previous hiring practices may unintentionally reinforce underlying prejudices or inequities.
Further, simply removing protected attributes like race, gender, and age does not alleviate the issue. Living closer to work, for example, may be associated with retention, but given historical neighborhood segregation, location and zip codes may serve as analogs for race. If an algorithm draws on an employee's socioeconomic position, bias may persist because some subgroups may be over- or underrepresented in specific categories. While case law on these risks is still relatively nascent, they remain a serious obstacle to safe adoption.
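The proxy problem above can be made concrete with a small example: if neighborhoods are segregated, then knowing an applicant's zip code largely tells you their demographic group, even though zip code looks facially neutral. The records below are entirely synthetic and hypothetical.

```python
# Illustrative sketch: a facially neutral feature (zip code) can act as a
# proxy for a protected attribute when neighborhoods are segregated.
# All applicant data here is synthetic and hypothetical.

from collections import Counter

# Synthetic applicant records: (zip_code, demographic_group)
applicants = [
    ("10001", "group_a"), ("10001", "group_a"), ("10001", "group_a"),
    ("10001", "group_b"),
    ("20002", "group_b"), ("20002", "group_b"), ("20002", "group_b"),
    ("20002", "group_a"),
]


def group_share_by_zip(records):
    """For each zip code, report the share of each group living there."""
    by_zip = {}
    for zip_code, group in records:
        by_zip.setdefault(zip_code, Counter())[group] += 1
    return {
        z: {g: n / sum(c.values()) for g, n in c.items()}
        for z, c in by_zip.items()
    }


shares = group_share_by_zip(applicants)
print(shares["10001"])  # {'group_a': 0.75, 'group_b': 0.25}
```

Because zip code predicts group membership so strongly in this synthetic data, a model that weights zip code is, in effect, weighting the protected attribute it was supposed to ignore.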
Despite the concerns, using AI-powered technologies will be beneficial if they are based on credible data sets and models. They must also be closely monitored and assessed.
Before implementing them, employers should design and test their algorithms using strong and varied data sets. These data sets should prioritize job-related attributes that are causally connected to past candidates' performance in the position while disregarding or deemphasizing those that are unrelated or might lead to unlawful selections, such as age, race, or gender. Employers should continue to monitor and verify their algorithms for bias and even seek legal counsel where necessary.
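One common way to operationalize the monitoring step above is a disparate-impact check. The sketch below uses the "four-fifths rule" heuristic: if any group's selection rate falls below 80% of the highest group's rate, that is a common red flag warranting closer review. The threshold, group names, and outcome counts are hypothetical, and a flag is a prompt for investigation, not a legal conclusion.

```python
# Hedged sketch of a disparate-impact check using the four-fifths rule
# heuristic. Group labels and outcome counts below are hypothetical.


def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}


def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (a common adverse-impact heuristic)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}


outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

Running a check like this on every model revision, and escalating flags to legal counsel, is one practical form the ongoing monitoring described above can take.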
ONE FINAL NOTE
Artificial intelligence carries enormous potential for the future of recruitment. However, we need to consider the attendant legal risks and design mitigations where necessary. If we get this right, we will achieve genuinely better hiring processes.