
Posted April 3rd, 2019 in Top Stories, Legal Insights
Are AI Innovation and Background Check Regulation Putting Employers on a Collision Course?
The use of artificial intelligence (AI) in hiring is growing at a furious pace. While AI can increase efficiencies, some business applications present significant legal risk. Using algorithms rather than people to score background checks and other data about job applicants, for example, has become commonplace.

“But for criminal background checks, a growing number of state and municipal ‘fair chance’ laws require employers to avoid making blanket decisions about applicants and, instead, call for ‘individualized assessments’ that consider rehabilitation and other mitigating factors that might positively impact someone’s suitability in the workplace,” said Nilan Johnson Lewis attorney Mark Girouard, who regularly advises and defends companies on their use of background screens. AI solutions may not appropriately account for these individualized considerations.

Veena Iyer, also with Nilan Johnson Lewis, added: “Beyond fair chance ordinances, the strict liability framework of the federal Fair Credit Reporting Act (FCRA) creates additional peril for employers. The FCRA has strict rules for obtaining consent from applicants, disclosing information back to them, and resolving their disputes. If the data science firms now moving into the background check space put the capabilities of their technology ahead of those legal boundaries, employers could be on the hook for their errors.”

Both lawyers say that while AI may collect and analyze data from a range of sources more efficiently, employers ultimately need to prove that the information used to make hiring decisions is actually pertinent to the requirements of the job.

For more information about the possible pitfalls of AI-based background check solutions, contact Mark Girouard at 612.305.7579 or mgirouard@nilanjohnson.com, or Veena Iyer at 612.305.7695 or viyer@nilanjohnson.com.