nTech’s VP and General Manager Jimmy Iannuzzi and labor law attorney Marko MrKonich from Littler joined host Bryan Pena for a dynamic and informative webinar discussing Artificial Intelligence in recruiting and the risks involved.
Whether due to a lack of available information or a lack of understanding, fear surrounds the risks that Artificial Intelligence brings; hiring bias and outright discrimination are among the top concerns.
The applications of Artificial Intelligence in contingent work are endless. While 84% of executives recognize that Artificial Intelligence, including automated technology, is essential to the survival of their business, 75% of executives also say that utilizing these technologies within their organizations is difficult.
What does it take to properly integrate and encourage responsible artificial intelligence in recruiting?
AI in recruiting is one of the many frontiers being explored in the future of work; balancing candidate experience, best practices for efficiency, and the legality of AI in recruiting is an incredible opportunity but also an evolving, real-time challenge. Even the most prominent and advanced solutions must be continually tested against changing dynamics.
AI and automation speed up and condense the sourcing process, which provides cost savings for all parties. Recruiters using AI tools have reported a 75% decrease in cost per candidate screening.
Using AI and automation to source talent creates time and opportunities for deeper human engagement within the talent community. Using AI to speed up the hiring process isn't just a way for companies and recruiting agencies to save money; job candidates prefer a faster hiring process, which leads to an increase in candidate satisfaction. This is especially important in today's competitive market, which often favors job seekers. However, like so many 'good things', there are risks.
This article provides an overview of the April 27th, 2022 webinar, in which nTech’s VP and General Manager Jimmy Iannuzzi and labor law attorney Marko MrKonich from Littler provided an in-depth look at these risks.
How will the AI and robotics revolution shape the employment and labor law landscape?
AI adds complications. Disparate treatment laws are based on intent, which makes them difficult to apply to machines without overlooking the humans using the technology.
According to Marko MrKonich:
What we need to do is be aware that there's a risk. We shouldn’t overtly tell our computer system to give us results that are 50% white, 30% Latinx, 20% or 10% Asian American, and 20% African American. That would be disparate treatment discrimination, even though it's being done using machines.
When machines are used to process patterns, are they capable of reinforcing the same unconscious bias we are seeking to eliminate?
“There is no ‘apples-to-apples’ comparison that will give that answer in this context. We have to recognize the limitations of these technologies: they are only a tool used to increase efficiency, deliver cost benefits, and bring speed to the hiring process,” says Jimmy Iannuzzi.
What are some of the challenges and misconceptions of using AI in the workplace and in hiring practices?
Marko states that:
Many employers hear ‘AI’ and immediately jump to a conclusion without stopping to weigh the benefits, much less properly define AI within the framework of the solution their organization is seeking. Discrimination laws and algorithmic bias are two of the big issues. If you're thinking about using AI to select candidates, you have to properly articulate what you're trying to accomplish. Do I want to hire people who stay longer? Do I want to hire people who perform more steadily? Do I want people with higher-end skills? Can you tailor your AI use to match your goals? Or will you pretend that, even without trying, they will just automatically somehow work? Remember, AI and machine learning were advertised as ways to reduce, if not eliminate, human bias in the selection process. But in fact, now we're finding there's a backlash; some people are saying, most notably the New York City Council, that we would rather have people able to opt out of machine learning-based automated decision-making and go back to human decision-making. So, the irony is that the tool created to reduce human bias now has an option to bypass it by going back to human bias, and it just doesn't seem to solve the problem.
According to Jimmy, “Questioning the risk is something we really are focusing on. We can't just rely on technology.”
In discrimination cases, is the burden of proof on plaintiffs, and is it common for plaintiffs to gain access to algorithms?
According to Marko:
The burden of proof is on the plaintiffs. But with disparate impact, that burden is to show the existence of the disparate impact; then the burden shifts to the employer or vendor to show that the impact was tied to something job related and consistent with business necessity. If you're using AI and machine learning for decisions, I would advise notifying applicants and employees upfront that it's being used. Before you use it or commit to it, evaluate the public data concerning its reliability.
Finding a balanced solution
When it comes to finding a balanced solution, Jimmy believes “…there needs to be an integration of human and technology. A one-size-fits-all solution will simply not work.”
Another important factor in using AI is making sure that your team has clearly defined what it is looking for. Don't allow machine learning to make final decisions right now; the world isn't quite ready for that. If you do, you're taking a much greater risk than if you have a combined human-and-machine system. As you implement AI, continuously audit your results to determine whether you're getting the outcomes you're looking for and whether your system is producing bias.
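One common starting point for the kind of audit described above is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, that is a widely used flag for potential adverse impact. A minimal sketch of that check follows; the group labels and counts are hypothetical illustration data, not figures from the webinar.

```python
# Minimal adverse-impact check based on the four-fifths (80%) rule.
# Group labels and (hired, total applicants) counts are hypothetical.

def selection_rates(outcomes):
    """Map each group to its selection rate: hired / total applicants."""
    return {g: hired / total for g, (hired, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times the
    highest group's rate -- a common flag for potential adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < threshold}

# Hypothetical screening outcomes per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}

print(selection_rates(outcomes))   # group_c has the highest rate (0.5)
print(four_fifths_flags(outcomes)) # group_b is flagged: 0.30 / 0.50 < 0.8
```

A flag from a check like this is not proof of discrimination; it is a signal to investigate, which is exactly the continuous monitoring the speakers recommend.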
Marko recommends that businesses:
[Be] deliberate and intentional about what you do. Don't let the perfect be the enemy of the good. Many of these tools make things better: they are more efficient, more effective, less discriminatory, and can be used while respecting privacy rights. Just because a tool doesn't eliminate all discrimination, or brings a modest amount of legal risk, doesn't mean you can't use it; prepare for that, plan for it in your agreements, plan for it in your processes. Then measure your results. If it's working, continue to use it; if it's not working, tweak it, and make sure you're monitoring the discrimination side so you're not creating unmonitored legal risks that you're not prepared to defend.
While AI tools aren't perfect, they work effectively in combination with human decision-making.
At nTech Workforce, we use AI and automation cautiously and never lose sight of candidates. We understand the importance of efficiency while preserving the human touch and interaction.
Contact us today to learn more about our services and the many ways we are more than a staffing company.