New York City is considering proposed legislation that would regulate the use of artificial intelligence in hiring.
If passed, the law (which would take effect January 1, 2022) would require that, in order to sell an “automated decision tool” in New York City, the tool’s developer must:
- Be able to show that the tool was the subject of a “bias audit” conducted in the past year;
- Offer, at no additional cost, an annual bias audit and provide the results of the audit to the purchaser; and
- Include a notice (aimed at the purchaser) stating that the tool is subject to the provisions of this law.
One potential issue here is that “automated decision tool” is defined in a way that may sweep in a broad range of selection tools:
“Any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests, and other learning algorithms, which automatically filters candidates or prospective candidates for hire or for any term, condition or privilege of employment in a way that establishes a preferred candidate or candidates.”
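To see how broadly that definition could reach, consider the following minimal sketch. The weights and candidate data are invented for illustration, but a hand-tuned linear score of this kind is exactly the sort of “linear regression” tool the definition names, and it “establishes a preferred candidate” by ranking applicants automatically.

```python
# Hypothetical illustration: even a simple hand-weighted linear score
# used to rank candidates would arguably fall within the bill's
# definition, since linear regression is named explicitly.

def score_candidate(years_experience: float, assessment_score: float) -> float:
    """Linear scoring model with illustrative (made-up) weights."""
    return 0.6 * assessment_score + 0.4 * years_experience

# Invented candidate data for the example.
candidates = {
    "A": score_candidate(3, 80),
    "B": score_candidate(10, 55),
    "C": score_candidate(1, 95),
}

# Automatically establishing "a preferred candidate or candidates"
# by filtering to the top scorer:
preferred = max(candidates, key=candidates.get)
```

No machine learning or training step is involved here, yet the tool’s function is still “governed by statistical theory” in the bill’s sense — which is the breadth concern noted above.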
Notice to Candidates
In addition to imposing requirements on a tool’s developer, the law would require any employer who uses such a tool to notify candidates, within 30 days of their being screened, of:
- The fact that an automated decision tool that is required by the law to be audited for bias was used in connection with their candidacy; and
- The job qualifications or characteristics that such tool was used to assess in the candidate.
The law doesn’t address the degree of specificity with which “qualifications or characteristics” must be described in the notice. We believe that general descriptions of broad competencies (e.g., “customer orientation” or “problem-solving”) should suffice.
The required “bias audit” is defined as “an impartial evaluation” of the tool “to assess its predicted compliance with the provisions of section 8-107” of the New York City Code. Section 8-107 is the anti-discrimination section of the City Code. It includes, among other provisions, a prohibition against employment practices that have a disparate impact. In general, this provision tracks the definition of disparate impact under federal law, with one very significant difference. While federal law focuses on disparate impact on the basis of race or gender (and expressly requires the collection of race and gender data to assess adverse impact), the City Code encompasses disparate impact based on any protected characteristic, meaning “age, race, creed, color, national origin, gender, disability, marital status, partnership status, caregiver status, sexual and reproductive health decisions, sexual orientation, uniformed service or alienage or citizenship status.”
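The bill does not prescribe an audit methodology, but one common disparate-impact check under federal practice is the EEOC “four-fifths rule”: a group whose selection rate falls below 80% of the highest group’s rate is typically treated as showing evidence of adverse impact. The sketch below uses invented candidate counts purely to illustrate the arithmetic.

```python
# A minimal sketch of the EEOC "four-fifths rule" adverse-impact check.
# The candidate counts are invented for illustration; the NYC bill
# itself does not prescribe this (or any) audit methodology.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical audit data: (selected, applicants) per demographic group.
groups = {
    "group_1": (48, 100),
    "group_2": (30, 100),
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
highest = max(rates.values())

# Impact ratio of each group relative to the most-selected group;
# a ratio under 0.8 is flagged under the four-fifths rule.
impact_ratios = {g: r / highest for g, r in rates.items()}
flagged = [g for g, ratio in impact_ratios.items() if ratio < 0.8]
```

Note that a check of this kind presupposes demographic data for every group being tested, which is precisely the practical problem discussed next.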
That last aspect could create a significant compliance hurdle for employers, since they generally do not (and, without running afoul of other laws, cannot) collect information about applicants’ protected characteristics other than race and gender, making it difficult to see how a tool developer could test whether its tool is likely to have a disparate impact on those other groups.
While the bill continues to work its way through committee, businesses that are based in New York City, or that hire for positions there, should review their pre-employment selection tools and confer with their vendors to determine whether those tools use learning algorithms and could therefore be subject to the new law.