Will Using AI Impact Your Liability as an Employer?

Artificial Intelligence (AI) is revolutionizing the workplace across dozens of industries by promising to streamline processes, enhance decision-making, and boost productivity. The Equal Employment Opportunity Commission (EEOC), however, has raised concerns about potential bias, especially when AI is deployed during the hiring process. Every business leader contemplating the use of AI in their company’s employment procedures should be aware of the potential pitfalls and emerging legal challenges facing this new technology.

AI and Inherent Bias

The EEOC recently published a technical assistance document highlighting the potential negative impact of algorithms and automated systems on classes protected under Title VII of the Civil Rights Act of 1964. The document suggests that AI may perpetuate biases unintentionally embedded in certain algorithms through past hiring decisions, promotions, or performance evaluations. And because machine learning often depends on human input during development, it may absorb bias from those sources as well.

If such bias is found, it may result in legal liability for the employer under anti-discrimination laws. The EEOC has indicated that employers can be held liable if AI tools used in selection procedures have an adverse impact on protected groups based on race, gender, disability, or sexual orientation.

Managing AI in the Workplace

So how can companies use AI responsibly? Employers will need to vet the technology products they purchase carefully to make sure they don’t wind up liable for shortcomings unintentionally introduced by the vendor. They will also need to be proactive in double-checking the results of their own algorithms through statistical analysis. As AI continues to develop, labor laws can be expected to follow suit, so it will also be incumbent on employers to stay informed as the exact expectations of due diligence evolve over time.

Data Compliance

Using AI generally involves collecting and processing vast amounts of data, raising unresolved legal questions about data ownership and the appropriate use of that information. Companies that use AI may find themselves running afoul of the California Privacy Rights Act (CPRA), making it essential that employers comply with its data privacy and security regulations to protect sensitive data from unauthorized use. Staying compliant may also require a deep understanding of how AI accesses and retains data.

It’s understandable for employers to want to maintain their competitive edge by harnessing top-of-the-line technology, but it’s equally important to understand how to mitigate the inherent risks. Our firm works diligently to ensure that California businesses operate on a solid legal foundation. If you have questions about data compliance or ensuring fair standards under the EEOC, call (925) 399-1529 to schedule a consultation today.

Integrated General Counsel