
When Your Boss is a Robot: Understanding AI in the Workplace and Your Rights


By Chris Avcollie

Sigmund Freud lived from 1856 to 1939 and was therefore witness to the surge of technology that resulted from the Industrial Revolution. While he acknowledged the usefulness of the technical innovations of his day, he was also somewhat skeptical of them. Freud famously commented: “Man has, as it were, become a kind of prosthetic God.” He argued that humans, through technology, have created “artificial limbs” and tools that amplify their abilities, making them godlike but also creating new troubles. Freud had no idea what was coming!

The science fiction future that was unimaginable in Freud’s day has arrived, and it’s reviewing your job application! Artificial intelligence is no longer just something we see in movies—it’s making real decisions about real people’s livelihoods every single day. And while AI promises efficiency and objectivity, it’s bringing some very human problems into America’s workplaces: discrimination, privacy violations, and a fundamental shift in the balance of power between workers and employers.

If you’ve applied for a job recently, there’s a good chance an algorithm screened your resume before any human eyes saw it. In fact, about 65% of companies now use some form of AI or automation in their hiring process. That’s not necessarily a bad thing—except when the algorithm is making biased decisions that would be illegal if a human manager made them.

The Algorithm Doesn’t Discriminate…Or Does It?

Here’s a comforting thought: computers can’t be racist, sexist, or ageist. They’re just following their programming, right? Unfortunately, it’s not that simple.

AI tools learn from data—and if that data reflects historical discrimination, the AI will perpetuate that discrimination into the future. When Amazon deployed an AI hiring tool, the tech giant discovered their algorithm was discriminating against women. The system had learned from the company’s past hiring patterns (which favored men) and was essentially programmed to continue that bias.

Think about that. One of the world’s most sophisticated technology companies, with virtually unlimited resources, couldn’t create an AI hiring system that didn’t discriminate. If Amazon struggled with this, what are the odds that the automated system reviewing your application is fair?

The resume scanner that dings you for not having the “right” keywords might be eliminating qualified women because men’s resumes historically used different terminology. The video interview AI that analyzes your facial expressions and speech patterns could be filtering out candidates based on race or ethnicity. The chatbot that asks pre-screening questions might create barriers for older workers who are less comfortable with the technology—even when tech proficiency isn’t required for the actual job.

Your Employer Can’t Hide Behind the Algorithm

Here’s what every worker needs to understand: “We were just following the algorithm” is not a legal defense.

Under federal anti-discrimination laws, you don’t need to prove your employer intended to discriminate against you based on sex, race, religion, disability, age, or another protected characteristic. You only need to prove that their policies had a discriminatory effect on your employment or, as the Supreme Court recently held, that you experienced some harm in the terms and conditions of your job. This principle applies whether the decision was made by a biased manager or a biased algorithm.

In 2023, the Equal Employment Opportunity Commission (EEOC) settled its first-ever AI hiring discrimination case, recovering $365,000 for a group of job-seekers. That settlement sent a clear message: employers remain liable for discriminatory outcomes, even when those outcomes are produced by automated systems they purchased from third-party vendors.

The Federal Vacuum and the State Response

The legal landscape for AI in employment has become dramatically uncertain—and that should concern every working person in America.

On his first day in office, President Trump rescinded Executive Order 14110, which had directed federal agencies to address AI-related risks including bias, privacy violations, and safety concerns (Saad). The EEOC removed key guidance documents explaining how Title VII and the Americans with Disabilities Act apply to AI tools. The Department of Labor has signaled that its prior guidance on AI best practices may no longer reflect current policy.

In other words, the federal government has largely stepped back from regulating AI in the workplace—leaving workers with far less protection than they had just months ago.

Fortunately, several states have stepped into this vacuum. New York City’s Local Law 144, which took effect on January 1, 2023, requires employers using automated employment decision tools to conduct independent bias audits and provide notice to job candidates. Illinois recently amended the Illinois Human Rights Act to prohibit employers from using AI in ways that lead to discriminatory outcomes based on protected characteristics.

California has introduced several bills aimed at regulating AI in employment, including the “No Robo Bosses Act” (SB 7), which would require employers to provide 30 days’ notice before using any automated decision system and mandate human oversight in employment decisions. Over 25 states introduced similar legislation in 2025.

For workers in Connecticut and New York, the current situation is particularly frustrating. In Connecticut, a bill that would have protected employees and limited electronic monitoring by employers failed to pass. And while New York City has protections, New York State has yet to pass comprehensive AI employment protections beyond those affecting state agencies.

Beyond Hiring: AI Throughout the Employment Lifecycle

While much attention focuses on AI in hiring, the technology is being used throughout the employment relationship—often without workers’ knowledge or consent.

Performance Monitoring and Evaluation

AI systems are increasingly used to monitor employee productivity, track keystrokes, analyze work patterns, and even predict which employees are likely to quit. These tools raise profound privacy concerns. AI systems often require access to employee communications, performance records, and personal information—and companies may unknowingly cross legal boundaries, exposing themselves to lawsuits for privacy violations or breach of employment agreements.

Illinois’s Biometric Information Privacy Act (BIPA) has been particularly impactful. Companies have faced million-dollar settlements for BIPA violations related to AI systems that analyze employees’ facial features, voice patterns, or other biometric identifiers without proper consent.

Workplace Surveillance

Some proposed legislation would address AI-driven workplace surveillance. California’s AB 1221 and AB 1331 would require transparency and limit monitoring during off-duty hours or in private spaces. But in most states, employers have broad latitude to monitor workers using AI tools—often without their knowledge.

The Stop Spying Bosses Act, introduced in Congress, would prohibit electronic surveillance for certain purposes, including monitoring employees’ health, keeping tabs on off-duty workers, and interfering with union organizing. However, this legislation has not been enacted into law.

Promotion, Discipline, and Termination

AI tools aren’t just screening job applicants—they’re making recommendations about who should be promoted, who should be disciplined, and who should be laid off. And because machine learning systems become more entrenched in their biases over time, discriminatory patterns can become a vicious cycle: the more AI makes biased decisions, the more that bias becomes embedded in the training data for the next generation of AI tools.

Privacy in the Age of AI

Employee privacy rights don’t disappear simply because an employer is using AI technology. Under both federal and state employment laws, employers have obligations to protect employee information and notify workers about monitoring or data collection practices.

Many jurisdictions require explicit employee consent before collecting or processing personal data for AI training purposes. Simply updating the employee handbook may not be sufficient—specific agreements addressing AI data use may be required. However, workers often face a coercive choice: consent to extensive AI monitoring and data collection, or lose the job opportunity.

The practical reality is stark: AI systems learn from the data they’re fed. If that data includes your communications, performance records, or personal information, your employer may be using your private information in ways you never imagined—and potentially in violation of your privacy rights.

What Workers Should Know and Do

If you’re concerned about AI affecting your employment, here’s what you need to understand:

Know Your Rights: Discrimination based on race, sex, religion, national origin, age, disability, or genetic information is illegal—whether the discriminatory decision was made by a person or an algorithm. Retaliation for complaining about discrimination is also illegal.

Ask Questions: You have the right to know if AI tools are being used to make employment decisions about you. While not all states require disclosure, asking the question puts employers on notice that you’re paying attention. New York City employers, for example, must provide notice at least 10 business days before using an automated employment decision tool.

Document Everything: If you suspect AI discrimination, document the circumstances. Save job postings, application materials, and any communications about the hiring or employment decision. Note dates, times, and who was involved in the process.

Request Reasonable Accommodations: If you have a disability that affects how you interact with AI tools (such as video interview software), you have the right to request reasonable accommodations under the Americans with Disabilities Act. Employers should provide alternatives to AI tools when necessary.

Be Aware of Data Privacy: Understand what employee data your employer collects and how it’s used. In some states, you have rights regarding your personal information. Illinois workers, in particular, have strong protections under BIPA for biometric data.

Don’t Assume the Decision is Final: Just because an AI rejected your application or recommended disciplinary action doesn’t mean that decision was correct—or legal. Automated tools make mistakes, and they can be challenged.

The Bottom Line

The future of work is here, and it’s increasingly automated. But workers still have rights. The fact that an employer is using sophisticated technology doesn’t give them permission to discriminate, violate privacy, or ignore employment laws that have protected workers for decades.

As legislators continue to grapple with how to regulate AI in employment, the fundamental legal principles remain unchanged: employers cannot discriminate based on protected characteristics, they cannot retaliate against workers who assert their rights, and they must respect employee privacy within the bounds of applicable law.

The AFL-CIO put it well in supporting proposed federal legislation: “Working people MUST have a voice in the creation, implementation and regulation of technology.” That voice includes understanding when your rights are being violated—and taking action when they are.

The paradox Freud identified in describing humans as “prosthetic gods” is nowhere more evident than in the realm of AI. While this technology indeed gives humans godlike powers, Freud also noted that these technologies are “prostheses” that were not naturally grown and can therefore cause problems for the human condition. He questioned whether these tools truly lead to happiness, even as they increase human power. In the American workplace, we will all have to grapple with that paradox as AI becomes increasingly common.

Have You Been Impacted by AI at Work?

If you believe you’ve been discriminated against by an AI hiring tool, unfairly monitored by automated surveillance systems, or subjected to biased AI-driven employment decisions, you don’t have to accept it. The law is evolving rapidly, but your fundamental rights as a worker remain protected.

The employment attorneys at Carey & Associates, P.C. understand both the technology and the law. We’ve been following these issues closely and are prepared to help workers navigate this new frontier of employment law. Whether you’re facing discrimination in hiring, unfair AI-driven performance evaluations, or privacy violations through workplace surveillance, we can evaluate your situation and advise you on your legal options.

Don’t let an algorithm decide your future. Contact Carey & Associates, P.C. at info@capclaw.com or (203) 255-4150 today for a consultation to discuss how AI may be affecting your workplace rights—and what you can do about it.