Why Your Business Needs a New Employee Handbook Because of AI

By Christopher S. Avcollie

Heraclitus understood something that most modern employers have not yet grasped: the world does not pause to wait for us. The ancient Greek philosopher observed that the river you step into is never the same river twice—it is constantly flowing, changing, renewing itself moment by moment. The same is now profoundly true of the American workplace. Artificial intelligence has become the river, and it is moving faster than any current in human history. Yet millions of American employers are standing on the bank with an employee handbook written for a world that no longer exists.

In the past five years alone, AI has migrated from science fiction to the center of everyday employment decisions. Algorithms now screen résumés, schedule interviews, monitor productivity, evaluate performance, and recommend terminations—all before a human manager so much as glances at an employee’s file. The technology has arrived at breathtaking speed, and the legal frameworks governing its use in the workplace are scrambling to keep pace. State legislatures are passing new AI employment laws. Federal agencies are issuing—and then retracting—guidance on algorithmic discrimination. Courts are beginning to confront, for the first time, questions that no one imagined asking a decade ago.

The thesis of this article is simple and urgent: the rapid changes wrought by the advent of AI in the workplace require employers to develop specific AI policies to address them. A company operating today with even a two-year-old employee handbook is not merely behind the times—it is exposed. It is exposed to discrimination claims, to privacy litigation, to regulatory penalties, and to the kind of reputational damage that no business can afford in an era when its workforce and its customers are paying close attention.

What follows is a roadmap of the key areas where AI demands that your employee handbook be fundamentally reconsidered and revised. If your company’s policies do not address these issues yet, the time to act is now—before your next lawsuit, your next regulatory audit, or your next employee complaint forces the conversation on terms that are far less favorable to you.

I. AI in Employee Screening and the Urgent Need to Revise Your EEO Policies

If you are an employer using AI tools to screen job applicants—and statistically speaking, there is a very good chance you are—you are making employment decisions that may expose you to significant civil liability under federal and state anti-discrimination laws. And if your Equal Employment Opportunity (“EEO”) policy does not explicitly address the use of AI in hiring and screening, you are likely operating without a legal net beneath you.

Approximately 65% of companies now use some form of AI or automation in their hiring process.1 These tools promise speed, consistency, and objectivity. What they often deliver, however, is historical bias at scale. AI systems are trained on data—and data reflects the past. If your prior hiring decisions favored one demographic group over another (whether intentionally or not), an AI trained on that data will replicate and amplify those patterns going forward.

The most famous cautionary tale is Amazon’s abandoned AI hiring tool, which the company’s own engineers discovered was systematically penalizing résumés that included the word “women”—as in, “women’s chess club”—and downgrading graduates of all-women’s colleges.2 Amazon, with all of its resources and engineering talent, could not make the tool fair. It was quietly shelved. But most employers using third-party AI hiring tools do not have Amazon’s resources to audit those systems, and many have never even considered the possibility that their vendor’s product might be discriminating on their behalf.

The law is not impressed by ignorance. Under Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (“ADA”), the Age Discrimination in Employment Act (“ADEA”), and their state law equivalents, employers are liable for discriminatory employment practices regardless of intent. The “disparate impact” theory of liability holds that a facially neutral practice—such as a keyword-scanning algorithm—can violate civil rights law if it produces discriminatory results with respect to a protected class.3 “We were just following the algorithm” is not a recognized legal defense.

The Equal Employment Opportunity Commission settled its first AI hiring discrimination case in 2023, recovering $365,000 for a group of job-seekers whose applications had been filtered out by an employer’s automated screening tool.4 That settlement was a shot across the bow. More will follow.

Several states have already moved to regulate AI in hiring. New York City’s Local Law 144, which took effect on January 1, 2023, requires employers using automated employment decision tools to conduct independent bias audits and provide specific notice to job candidates.5 Illinois amended the Illinois Human Rights Act to prohibit employers from using AI in ways that produce discriminatory outcomes based on protected characteristics.6 California has introduced its “No Robo Bosses Act,” which would require 30 days’ notice before any automated decision system is deployed in employment and mandate human oversight of AI-driven employment decisions.7 Over 25 states introduced similar legislation in 2025 alone.8

What must your EEO policy say to address these realities? At a minimum, your revised EEO policy should: (1) explicitly acknowledge the company’s use of AI in hiring and screening decisions; (2) commit to regular bias audits of all AI hiring tools; (3) identify a human decision-maker who reviews AI outputs before final employment decisions are made; (4) establish a process for applicants or employees to challenge AI-generated determinations; and (5) ensure compliance with all applicable state and local AI employment laws, which are proliferating rapidly. If your current EEO policy was written before 2024, it almost certainly says none of these things, and you are operating in a legal blind spot.
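The bias-audit commitment in item (2) has a quantitative core. Under the EEOC's long-standing "four-fifths" rule of thumb (and the impact-ratio methodology that NYC Local Law 144's bias audits adopt), a screening tool warrants scrutiny when any group's selection rate falls below 80% of the highest group's rate. The following is a minimal sketch of that arithmetic; the group labels and pass rates are hypothetical, and a real audit would be performed by an independent auditor on actual applicant data:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool). Returns pass rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Impact ratio = each group's rate divided by the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

def flags_four_fifths(ratios, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths benchmark."""
    return [g for g, r in ratios.items() if r < threshold]

# Hypothetical screening outcomes: (demographic group, passed the AI screen?)
outcomes = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40 +   # 60% pass
    [("group_b", True)] * 30 + [("group_b", False)] * 70     # 30% pass
)
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
print(flags_four_fifths(ratios))  # group_b's ratio is 0.5, below 0.8
```

A sub-0.8 ratio does not by itself establish liability, but it is precisely the kind of documented, recurring check a revised EEO policy should require before an AI screening tool stays in service.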

II. AI in the Workplace and the Critical Need to Revise Your Data Privacy and Confidentiality Policies

Here is a scenario worth considering: Your company deploys an AI productivity monitoring system. The system tracks every keystroke, every email, every Slack message, and every minute of screen activity for your entire workforce. The vendor assures you that the data is used only for productivity analytics. What you may not know is that this data is also being fed back into the vendor’s training models, shared with third-party analytics firms, or stored on servers with security practices that would make any reasonable person queasy. Your employees certainly don’t know it. And your employee handbook says nothing about any of it.

This is not hypothetical. This is Tuesday.

AI systems in the workplace are voracious data consumers. They require access to employee communications, performance records, biometric data, health information, personal schedules, and in some cases even off-duty social media activity in order to function. The scope of data collection enabled by modern AI workplace tools is extraordinary—and the legal obligations that accompany that collection are equally extraordinary, though far less frequently observed.

Under both federal and state law, employers have obligations to protect employee data and to notify workers about monitoring and data collection practices.9 Many jurisdictions now require explicit employee consent before collecting or processing personal data for AI training purposes.10 Simply updating a boilerplate confidentiality clause in your handbook is not sufficient—specific agreements addressing AI data use may be legally required, and failure to obtain them can expose employers to class action liability.

The Illinois Biometric Information Privacy Act (“BIPA”) is the most prominent example of the legal exposure created when AI tools intersect with employee data. BIPA requires informed written consent before an employer collects biometric identifiers such as fingerprints, facial geometry, or voice patterns. Companies have faced settlements in the tens of millions of dollars for BIPA violations involving AI systems used in the workplace—often because they deployed tools with biometric capabilities without even realizing that BIPA applied.11

Confidentiality policies face a distinct but equally serious problem. Many companies encourage or permit employees to use AI tools—ChatGPT, Microsoft Copilot, Google Gemini, and their successors—to assist with work tasks. What those companies often fail to appreciate is that when an employee inputs confidential business information into a third-party AI platform, that information may be retained by the platform, used to train future AI models, and potentially exposed to competitors or the public. Several high-profile incidents have already occurred in which employees inadvertently disclosed trade secrets, client data, or proprietary business strategies by inputting them into AI chat tools.12

Your revised data privacy and confidentiality policies must address all of these realities. They should specify: (1) what employee data the company collects through AI tools, and for what purposes; (2) how that data is stored, protected, and shared; (3) what employee consent is required and how it will be obtained; (4) what restrictions apply to employee use of third-party AI tools with respect to confidential company information; (5) what biometric data, if any, is collected and how it is governed; and (6) how the company complies with applicable state data privacy laws—which now include comprehensive statutes in Connecticut, California, Virginia, Colorado, and a growing number of other states. A confidentiality policy that says nothing about AI is a policy that creates more risk than it manages.

III. AI in Performance Reviews and the Need to Revise Your Performance Improvement Policies

Performance management has always been one of the most legally sensitive areas of employment law. Pretextual performance reviews have been the vehicle of choice for discriminatory and retaliatory employer conduct for as long as employment discrimination law has existed. Now, AI has entered the performance management arena—and it has brought with it a new set of problems that your current performance improvement policies are wholly unequipped to address.

AI performance monitoring tools now track employee productivity with granular precision. Some systems count keystrokes per minute, monitor mouse movement patterns, log application usage, time bathroom breaks, and assign algorithmic “productivity scores” to individual workers in real time. Other tools analyze written communications to assess employee engagement, mood, or likelihood of leaving the company. Still others use facial recognition technology to monitor employee attentiveness during video calls.13

The legal problems here are layered and compounding. First, as discussed above, AI performance monitoring tools may collect biometric or sensitive personal data without adequate consent, violating state privacy laws. Second, algorithmic productivity scores may incorporate proxy variables—such as the time of day an employee is most active, or the speed at which they type—that correlate with disability status, medical conditions, pregnancy, age, or other protected characteristics. Third, and most insidiously, because machine learning systems become more entrenched in their biases over time, discriminatory patterns in performance evaluation can become a self-reinforcing cycle: biased AI evaluations become training data for the next generation of AI evaluations, perpetuating and deepening the discrimination with each iteration.14

Performance Improvement Plans (“PIPs”) raise particularly acute concerns in the AI era. The PIP has always been one of the most abused tools in the employer’s arsenal—frequently deployed as a pretext for discrimination or retaliation rather than as a genuine attempt to improve performance. When a PIP is generated or recommended by an AI system, the employee faces a new and disturbing reality: she is being evaluated and potentially condemned by a process she cannot see, cannot challenge, and cannot meaningfully respond to. The “black box” nature of AI recommendations makes it nearly impossible for an employee—or even her attorney—to determine whether the performance concerns driving the PIP are legitimate or the product of algorithmic bias.

Your performance improvement policies must be revised to grapple with these realities. At a minimum, they should: (1) require human review and approval of any AI-generated performance assessment before it is used as the basis for any adverse employment action; (2) establish transparency standards so that employees understand what is being measured and how performance scores are calculated; (3) create a formal process for employees to challenge AI-generated performance data; (4) prohibit the use of AI-generated performance data that has not been audited for bias; and (5) require that any PIP be accompanied by a written explanation of the performance deficiencies that is specific, documented, and not solely derived from algorithmic output. The alternative is a policy framework that makes pretextual adverse actions easier to accomplish and harder to detect—an outcome that is good for neither employees nor employers facing litigation.

IV. Additional AI Policy Revisions Every Employer Needs to Consider

A. Acceptable Use Policies for AI Tools

Your employee handbook almost certainly has an acceptable use policy governing the use of company computers, internet access, and email. That policy was written for a world in which the greatest workplace technology concerns were employees wasting time on social media and accidentally clicking phishing links. Those concerns seem quaint compared to the AI issues employers confront today.

Employers now need comprehensive AI acceptable use policies that address: (1) which AI tools employees are authorized to use for work purposes; (2) what categories of information may never be input into any AI system (trade secrets, client data, personnel information, attorney-client privileged communications); (3) rules governing employee disclosure of AI use in work product—particularly for roles where authenticity and originality matter; (4) protocols for verifying AI-generated content before it is used in business communications, legal filings, or client deliverables; and (5) consequences for policy violations. The embarrassment of a law firm submitting AI-hallucinated case citations to a federal court—a scenario that has already occurred multiple times—is instructive. AI generates false information with confident fluency. Your policies need to account for that.15
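The "never input" list in item (2) is a control that can be partially automated. The sketch below is a hypothetical pre-submission filter for draft AI prompts; the category names and patterns are illustrative assumptions only, not a substitute for a real data-loss-prevention product or for employee training:

```python
import re

# Hypothetical patterns a company might treat as "never paste into a
# third-party AI tool." Real deployments would use a dedicated DLP product;
# this only illustrates the policy logic.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(
        r"(?i)\b(confidential|attorney[- ]client|trade secret)\b"
    ),
}

def check_prompt(text):
    """Return the policy categories a draft AI prompt appears to violate."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarize this CONFIDENTIAL memo about client 123-45-6789."
print(check_prompt(draft))  # flags both 'ssn' and 'internal_marker'
```

Even a coarse filter like this, run before text leaves the company's environment, turns the handbook's prohibition from an aspiration into an enforced control and creates a record of attempted violations.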

B. AI and the Americans with Disabilities Act: Reasonable Accommodations in the Age of the Algorithm

The ADA requires employers to provide reasonable accommodations to qualified employees with disabilities. That obligation does not pause simply because the employer has delegated key workplace decisions to an algorithm.

AI tools can create new barriers for employees with disabilities in ways that are both obvious and subtle. A video interview AI that scores candidates on eye contact and speech fluency may systematically disadvantage candidates with autism spectrum disorders, Tourette syndrome, stuttering, or vision impairments. An AI productivity monitor that scores employees based on continuous computer activity may penalize employees who require more frequent breaks as a medical accommodation. A facial recognition attendance system may fail to accurately identify employees with certain skin tones or facial differences.16

Your handbook’s ADA reasonable accommodation policy must explicitly address how accommodation requests will be handled when they conflict with AI monitoring requirements. Will an employee who requires rest breaks be exempt from keystroke-monitoring productivity scores during those breaks? What happens when an employee’s medical needs result in AI-generated performance metrics that do not accurately reflect their actual work quality? Who is responsible for making these determinations—the algorithm, or a human being? Your policy needs to answer these questions before your first ADA claim does.

C. AI and Wage and Hour Law: When the Algorithm Sets the Schedule

AI scheduling tools are now widely used in retail, healthcare, hospitality, and other industries to optimize employee scheduling. These systems minimize labor costs by predicting customer demand and adjusting staffing accordingly—often with little human oversight and with minimal notice to the employees whose lives are rearranged as a result.

Several states and localities have enacted “predictive scheduling” laws that require employers to provide advance notice of work schedules and compensation for last-minute schedule changes.17 AI scheduling tools can easily run afoul of these laws if the system is not configured to account for applicable local requirements. Your handbook’s wage and hour policies need to address how AI scheduling is used and ensure compliance with predictive scheduling requirements in every jurisdiction where you operate.
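The advance-notice logic these predictive-scheduling laws impose can be checked mechanically before a schedule change is pushed to employees. The sketch below assumes a hypothetical 14-day notice window; the actual window, covered industries, and premium-pay rules vary by jurisdiction, so this illustrates the compliance logic rather than any particular statute:

```python
from datetime import datetime, timedelta

# Hypothetical notice window; real predictive-scheduling laws set their own.
NOTICE_WINDOW = timedelta(days=14)

def changes_owing_premium(shifts):
    """shifts: list of (employee, shift_start, change_posted_at).
    Returns changes posted inside the notice window, which may owe
    predictability pay under an applicable scheduling law."""
    return [
        (emp, start)
        for emp, start, posted in shifts
        if start - posted < NOTICE_WINDOW
    ]

shifts = [
    ("Ana", datetime(2026, 3, 20, 9, 0), datetime(2026, 3, 1, 12, 0)),   # 19 days out
    ("Ben", datetime(2026, 3, 20, 9, 0), datetime(2026, 3, 18, 17, 0)),  # 2 days out
]
print(changes_owing_premium(shifts))  # only Ben's change falls inside the window
```

The point for handbook purposes is that an AI scheduler should never post a change without a check like this running first, configured per jurisdiction, with the results logged.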

AI is also beginning to appear in wage-setting decisions. Some companies use algorithmic tools to determine employee compensation based on market data, performance metrics, and internal equity factors. These tools carry the same risk of discriminatory outcomes as AI hiring tools—and they implicate equal pay laws in every state. If your AI compensation tool produces systematic pay disparities between men and women, or between employees of different racial groups, you are violating the law whether or not you are aware of it.18

D. Non-Compete and Trade Secret Policies in the Age of AI

If your company uses non-compete agreements, non-solicitation clauses, or trade secret protections, you need to think carefully about how AI intersects with these provisions. Employees who depart from your company and go to work for a competitor may bring with them AI tools trained on your proprietary data, work processes, or client information. The question of whether an AI model trained on confidential company data constitutes a misappropriated trade secret is one that courts are only beginning to confront.19

Conversely, your company’s use of AI may itself create trade secret risks if your proprietary processes or strategic information is inadvertently incorporated into the training data of a third-party AI vendor. Your confidentiality and trade secret policies need to address these scenarios explicitly, including provisions governing what happens to AI models or AI-generated work product when an employee departs.

E. Harassment and Discrimination Policies: When the Harasser is a Bot

Workplace harassment does not require a human harasser. AI systems can generate or facilitate harassing content. Deepfake technology—which uses AI to generate realistic fake images or videos of real people—has already been weaponized in workplace harassment. Employees have been subjected to AI-generated fake pornographic images of co-workers circulated in workplace communications. AI chatbots have been manipulated to produce harassing messages targeted at specific employees.20

Your harassment and discrimination policies must explicitly address AI-facilitated harassment, including: (1) a prohibition on the creation or distribution of any AI-generated content that disparages, demeans, or sexualizes co-workers; (2) clear reporting mechanisms for AI-facilitated harassment; (3) training for managers and HR personnel on how to identify and respond to AI-facilitated harassment; and (4) appropriate disciplinary consequences for violations. Policies drafted before the era of generative AI say none of these things. They need to.

F. Termination and Layoff Policies When AI Recommends the Ax

Perhaps the most consequential area in which AI is reshaping employment practice is in the recommendation of terminations and layoffs. Some employers now use AI tools that analyze workforce data—performance metrics, tenure, compensation, skills gaps, and more—and generate recommendations about which employees should be laid off in a workforce reduction. These tools can be efficient. They can also be deeply discriminatory.

If an AI layoff tool disproportionately recommends the termination of employees over 40, or employees with disabilities, or employees who recently took FMLA leave, the resulting termination decisions violate federal law—even if the employer never consciously intended to discriminate.21 Your termination and layoff policies must require human review of all AI-generated termination recommendations, disparate impact analysis before any AI-driven reduction in force is executed, and documentation demonstrating that each termination decision was made by a human being on the basis of legitimate, non-discriminatory criteria.

G. AI Transparency and Employee Notification Policies

Transparency is not merely an ethical aspiration in the AI employment context—it is increasingly a legal requirement. New York City’s Local Law 144 requires employers to notify job candidates that automated employment decision tools are being used in the hiring process.22 Proposed and enacted legislation in multiple states would extend similar notification requirements throughout the employment relationship. The direction of travel in employment law is unmistakable: employees have a right to know when AI is making decisions about their jobs.

Your handbook should include a standalone AI transparency policy that: (1) informs employees of all AI tools used to make or inform employment decisions; (2) describes what data is collected and how it is used; (3) identifies who employees should contact with questions or concerns about AI-driven decisions; and (4) establishes a process for employees to request human review of any AI-generated employment decision that affects their employment status or compensation. Getting ahead of these requirements now is far less costly than retrofitting compliance after a regulatory investigation or a class action lawsuit.

V. The River Is Not Slowing Down

Heraclitus was right about the river. He would be astonished by the velocity of this particular current. The pace of AI development in the workplace is not going to slow down. If anything, the tools will become more powerful, more pervasive, and more consequential—and the legal frameworks governing them will continue to evolve in ways that no employer can afford to ignore.

The federal regulatory landscape has shifted dramatically in the past two years. President Biden’s executive order on AI, which had directed federal agencies to address AI-related risks in the workplace, was rescinded on President Trump’s first day in office.23 The EEOC removed key guidance documents explaining how Title VII and the ADA apply to AI tools.24 The result is a federal vacuum that state legislatures are rushing to fill. For employers operating across multiple states, this patchwork of state AI employment laws creates a compliance challenge of considerable complexity.

Connecticut employers are not immune from these developments. While Connecticut has not yet enacted comprehensive AI employment legislation, it has enacted significant data privacy law in the Connecticut Data Privacy Act (“CTDPA”), which imposes obligations on businesses that collect and process personal data—including employee data.25 Connecticut employees who are subject to AI-driven employment decisions retain their rights under federal anti-discrimination law and the Connecticut Fair Employment Practices Act. Connecticut employers who assume that the absence of a specific state AI law means they have no AI-related legal obligations are making a costly mistake.

The message for employers is not to be overwhelmed by the pace of change, but to be deliberate about responding to it. An employee handbook that was adequate in 2024 is a liability in 2026. The specific areas identified in this article—EEO, data privacy, performance management, acceptable use, disability accommodations, wage and hour compliance, trade secrets, harassment, termination practices, and transparency—are not an exhaustive list. They are the areas where the legal risks are clearest and most immediate. Every business that uses AI tools in any aspect of its workforce management—and by now, that means most businesses—needs to confront these issues directly and systematically.

What Should Your Business Do Right Now?

The answer is straightforward, even if the work is not: get your employee handbook and your workplace policies reviewed and revised by qualified employment counsel who understands both the AI technology and the evolving law in this area. This is not a task for a template downloaded from the internet. The intersection of AI and employment law is evolving at extraordinary speed, the legal risks are real and substantial, and the cost of getting it wrong—in litigation, regulatory penalties, and reputational damage—far exceeds the cost of getting it right.

At Carey & Associates, P.C., we represent clients in employment matters throughout Connecticut, New York, and across the United States. We have deep experience in both plaintiff-side employment litigation and employer-side compliance counseling. We understand what AI in the workplace looks like from the inside of a discrimination case, and we understand what it takes to build employment policies that are legally sound, practically effective, and prepared for the technology landscape your business is already navigating.

The river is not going to stop flowing. But you can choose to navigate it skillfully, with the right guidance, rather than be swept away by it.

For more information about this topic, or if you would like to speak to one of our employment attorneys, contact Carey & Associates, P.C. at info@capclaw.com or call (203) 255-4150.

Endnotes

1. Wellhub Editorial Team, “AI in the Hiring Process: What Employers and Employees Should Know,” Wellhub (2024), available at https://wellhub.com/en-us/blog/organizational-development/ai-in-hiring/.

2. Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters (Oct. 10, 2018), available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

3. Griggs v. Duke Power Co., 401 U.S. 424 (1971) (establishing the disparate impact theory of liability under Title VII). See also 42 U.S.C. § 2000e-2(k).

4. EEOC, “EEOC Settles First AI Hiring Discrimination Case,” (2023). See also Klie Law Offices, “AI and Employment Discrimination: The Legal Landscape,” available at https://klielawoffices.com/ai-employment-discrimination/.

5. New York City Local Law 144 of 2021, codified at N.Y.C. Admin. Code § 20-871 et seq. (requiring bias audits and candidate notification for automated employment decision tools, effective January 1, 2023).

6. Illinois Human Rights Act, 775 ILCS 5/2-102(A-5) (as amended, prohibiting use of AI that results in discriminatory effects based on protected characteristics in employment decisions).

7. California S.B. 7, the “No Robo Bosses Act” (introduced 2025), which would require 30 days’ notice prior to deployment of automated decision systems in employment and mandate meaningful human oversight.

8. Dina Saad, “AI Employment Laws: State-by-State Guide,” Thomson Reuters Practical Law (2025), available at https://legal.thomsonreuters.com/en/insights/articles/ai-employment-laws-state-guide.

9. See, e.g., Cal. Labor Code § 1198.5 (employee access to personnel files); Conn. Gen. Stat. § 31-128a et seq. (Connecticut Personnel Files Act); 29 C.F.R. Part 825 (FMLA medical records confidentiality requirements).

10. Ryan Witkov, “AI in the Workplace: Navigating the Legal Minefield for Employers,” (2024), available at https://witkovlaw.com/ai-workplace-legal-issues-employers/. See also Conn. Gen. Stat. § 42-515 et seq. (Connecticut Data Privacy Act, effective July 1, 2023).

11. Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14/1 et seq. For examples of major BIPA settlements in employment contexts, see, e.g., Snap Inc. BIPA settlement (2022); McDonald’s Corporation BIPA litigation (ongoing).

12. See, e.g., reports of Samsung engineers inadvertently sharing proprietary source code via ChatGPT (2023), and Levi Strauss & Co.’s disclosure of employee data through AI platforms. See also Jeanne Sahadi, “Samsung Bans Employees from Using AI Tools Like ChatGPT,” CNN Business (May 2, 2023).

13. See generally, Ifeoma Ajunwa, “The Paradox of Automation as Anti-Bias Intervention,” 41 Cardozo L. Rev. 1671 (2020); see also Witkov, supra note 10.

14. Klie Law Offices, supra note 4 (“because machine learning systems become more entrenched in their biases over time, discriminatory patterns can become a vicious cycle”).

15. See, e.g., Mata v. Avianca, Inc., No. 22-cv-1461, 2023 WL 4114965 (S.D.N.Y. June 22, 2023) (sanctioning attorneys for submitting AI-generated brief containing fabricated case citations); Park v. Kim, 91 F.4th 610 (2d Cir. 2024) (attorney sanctioned for citing nonexistent cases generated by AI).

16. See EEOC, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” (May 12, 2022) (subsequently removed from EEOC website in 2025 but preserved at https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines).

17. See, e.g., San Francisco Formula Retail Employee Rights Ordinances (2014); New York City Fair Workweek Law, N.Y.C. Admin. Code § 20-1201 et seq.; Oregon Predictive Scheduling Law, Or. Rev. Stat. § 653.480 et seq.

18. See, e.g., Equal Pay Act of 1963, 29 U.S.C. § 206(d); Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2; Conn. Gen. Stat. § 31-75 (Connecticut Equal Pay Act).

19. See Defend Trade Secrets Act of 2016, 18 U.S.C. §§ 1836-1839; Uniform Trade Secrets Act (adopted in Connecticut as Conn. Gen. Stat. §§ 35-51 to 35-58). The question of whether AI models trained on proprietary data constitute trade secrets is an emerging area of litigation. See also FTC v. Intellivision Technologies Corp. (FTC enforcement action targeting AI claims, 2024).

20. See, e.g., CBS News, “Deepfake Porn Is a Growing Problem at Work” (2024); see also 47 U.S.C. § 230 (Section 230 of the Communications Decency Act and its limits in the context of AI-generated content).

21. See Age Discrimination in Employment Act, 29 U.S.C. §§ 621-634; Americans with Disabilities Act, 42 U.S.C. §§ 12101-12213; Family and Medical Leave Act, 29 U.S.C. §§ 2601-2654. See also EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y. 2022) (EEOC challenge to AI hiring tool that rejected older applicants).

22. N.Y.C. Local Law 144, supra note 5 (requiring notice at least 10 business days before using an automated employment decision tool in employment decisions affecting any candidate or employee).

23. Exec. Order No. 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” 88 Fed. Reg. 75191 (Nov. 1, 2023), revoked by Exec. Order No. 14148, 90 Fed. Reg. 8237 (Jan. 20, 2025).

24. Saad, supra note 8 (noting EEOC removal of AI guidance documents and Department of Labor’s signal that prior AI best practices guidance may no longer reflect current policy).

25. Connecticut Data Privacy Act (“CTDPA”), Conn. Gen. Stat. § 42-515 et seq. (effective July 1, 2023). Note that the CTDPA contains a partial exemption for employee data, but employers processing employee personal data for AI purposes should consult with counsel regarding the scope of this exemption and its planned phase-out under pending amendments.