
AI Hallucinations and the Employment At Will Rule Were Misstatements Until They Became Reality


AI hallucinations are misstatements, much like the employment-at-will rule: neither was ever intended to become doctrine, yet both became reality. In this episode of the Employee Survival Guide®, Mark Carey dives deep into the intersection of employment law and AI hallucinations. These hallucinations pose risks that echo historical inaccuracies in legal doctrine, potentially reshaping the landscape of employee rights and workplace culture.

Carey begins by unraveling the at-will employment rule, a cornerstone of employment law that has persisted despite its shaky origins. He draws a stark parallel between the historical evolution of employment law and the current challenges posed by AI hallucinations, emphasizing the critical need for verification and scrutiny of AI outputs in legal contexts. As AI continues to permeate our workplaces, the dangers of unverified information become increasingly apparent, creating a precarious environment for employees navigating issues such as discrimination, retaliation, and hostile work environments.

Throughout the episode, listeners will gain valuable insights into the implications of AI hallucinations for employment law, including how AI hiring bias can affect job opportunities and the potential for discrimination in the workplace. Carey advocates for a transformative shift from at-will employment to a more accountable system that mandates stated reasons for termination, ensuring transparency and fairness in employee relations.

Join us as we explore how understanding employment contracts, negotiating severance packages, and advocating for employee rights can empower you in the face of evolving workplace dynamics. Whether you’re dealing with performance reviews, workplace harassment, or navigating remote work challenges, this episode is packed with essential tips and strategies to enhance your job survival skills. 

Don’t miss this opportunity to equip yourself with the knowledge needed to thrive in today’s complex work environment. Tune in to the Employee Survival Guide® and discover how to navigate the intricacies of employment law, safeguard your rights, and advocate for a healthier, more equitable workplace culture. Your career deserves it! 

If you enjoyed this episode of the Employee Survival Guide, please like us on Facebook, Twitter, and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player, such as Apple Podcasts or Spotify. Leaving a review lets other listeners know that you found this podcast's content on U.S. employment law valuable.

For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.

Disclaimer: For educational use only, not intended to be legal advice.

Full transcript:

Speaker #0 Hey, it’s Mark here and welcome to the next edition of the Employee Survival Guide, where I tell you, as always, what your employer definitely does not want you to know about, and a lot more. Hey, it’s Mark and welcome back to the next edition of the Employee Survival Guide. Today’s topic of choice is how employment law demonstrates the dangers of AI hallucination. Bear with me. I’m going to get into a good one. Employment law has long served as a proving ground for how legal issues harden into doctrine because it sits at the intersection of economics, institutional power, and social policy. Employment law often absorbs contested assumptions early and then carries them forward long after their origins have faded from view. The most familiar example is the at-will employment rule. You know what that means. You’ve heard about it. Today, it is treated as a background principle, fundamental and entrenched, that is rarely interrogated or examined. Except I do. I call it into question. Yet legal historians widely agree that the at-will employment rule rests on historical misstatements that hardened into doctrine through repetition, convenience, and institutional inertia. That history matters now more than ever because it closely parallels a growing risk associated with artificial intelligence: hallucinated assertions, once relied upon and repeated, can quietly become law. The danger is not just that AI gets things wrong. The danger is what happens when no one checks it. Do you check your AI research? This is especially true in a field that already knows how easily unverified assumptions can become binding rules. The modern at-will doctrine can be traced with unusual precision to a single source. There’s a fellow, an attorney up in New York named Horace Gay Wood, who back in 1877, yes, I’m going that far back, published a treatise titled Master and Servant. That’s how they used to write stuff back then. It’s a real thing.
I actually learned it in law school. Writing with confidence rather than caution, Wood asserted that the common law, which he borrowed from England, had long recognized employment relationships as terminable at will by either party unless a fixed term was expressly stated, like, you know, one or two years. He presented the proposition not as a contested view, but as settled law. Now, look what happened. Courts proved receptive. Trial courts cited Wood. Appellate courts cited those trial courts. Soon, courts were no longer citing Wood at all. They were citing one another. This is called stare decisis in the law: you cite the prior case that came before. Within a remarkably short period of time, Wood’s formulation stopped looking like an argument and began to look like a description of reality. By the early 20th century, the at-will rule no longer required explanation. It was simply the law, and employers loved it. The difficulty, as later historians and scholars painstakingly demonstrated, was that Wood’s account did not accurately reflect the common law he claimed to summarize. Many of the English and American cases he cited did not stand for at-will termination at all. Earlier courts often presumed year-to-year employment or required cause for dismissal, particularly in skilled trades and long-term service relationships. I know you’re getting bored of me talking about this stuff, but I’m going to keep going. The historical record was uneven, contextual, and fact-dependent. Wood’s rule was not. By the time those inconsistencies were exposed, it was too late. Employment law, perhaps more than any other field, rewards rules that are easy for courts to administer. Once courts began relying on the at-will doctrine to resolve cases efficiently, they had little incentive to reopen its foundations.
I wrote an article a long time ago about this same topic and where the words at-will came from, and when you try to argue it in front of a judge, it’s nearly impossible to change their mind, or change an employer’s mind. Each citation became another layer of insulation. The doctrine’s authority no longer depended on whether it was historically correct, but on the simple fact that it was already being treated as correct. Sound familiar? This is the critical mechanism by which legal fiction becomes legal reality, not through conspiracy or bad faith, but through repetition, convenience, institutional trust, and employers. Employers, don’t forget that. They love this rule. It allows them to run their private governments without any question and to hide discrimination behind the at-will rule. If you haven’t gotten that from me yet, please accept that fact. It’s a reality. Large language models are not malicious. They do not lie. They predict text based on patterns. When source material is ambiguous, incomplete, or conflicting, AI fills the gaps in a way that sounds authoritative. The risk emerges when those outputs are relied upon without verification, repeated by others, embedded in briefs, policies, pleadings, or articles, and later cited as if they reflect settled fact. That statement is a scary reality that’s coming true. At that point, the hallucination no longer looks like an error. It looks like consensus. This is precisely the same dynamic that allowed the at-will employment doctrine to take hold. Confidence substituted for accuracy. Repetition substituted for proof. Until recently, concerns about AI hallucinations in the legal field were largely hypothetical: warnings about what might happen if fabricated authority slipped through the cracks. That line has now been crossed. Let me pause and set something up.
When we’re litigating in court, arguing motions before a judge, and the judge is a lawyer, we’re arguing points of authority, meaning case decisions that came before, or statutes, or cases interpreting statutes, precedent. And we have to make sure that our citations to cases and the arguments we’re using are real. I mean, ethically, we cannot lie or misrepresent those case law decisions or statutes to the court, because the whole legal fabric falls apart. So let’s go further. In Shahid v. Esaam, a court of appeals case decided June 30, 2025, the Georgia Court of Appeals vacated a trial court order after concluding that the order relied on non-existent case law that had been cited by counsel. The appellate court expressly noted that the trial court’s written order had incorporated bogus authorities, cases that did not exist in any reporter or database, and imposed sanctions in connection with their use. The error was ultimately corrected. The order was vacated. The fictitious cases did not become binding precedent, but with anything less than that oversight, they would have. The significance lies in what happened before the correction: hallucinated case law crossed the institutional threshold and entered an operative judicial order. That’s huge, folks. That is the precise point at which legal fiction stops being theoretical. Employment law teaches us that not every error is caught immediately, that some survive long enough to shape doctrine before anyone realizes what happened. That’s what happened to the employment-at-will rule. Again, it’s just a hallucination. And it involves all of your jobs. You all are at-will employees. Now, do you see the significance of this? Employment law developed in response to industrial efficiency, labor mobility, economic pressure. Courts favored simple, repeatable rules. At-will employment fit that need. Modern institutions face similar pressures with AI: speed over verification, cost savings over primary research, plausibility over provenance.
AI text is especially dangerous because it does not announce its uncertainty. A hallucinated case citation can look indistinguishable from a real one. A fabricated historical fact can sound identical to a well-sourced one. And once that material circulates, particularly in employment policies, handbooks, or litigation templates, it acquires legitimacy simply by being written down. It isn’t at its core a technology problem, but rather a process problem. Employment law is uniquely vulnerable to this phenomenon. It is precedent-driven, policy-sensitive, and often shaped by broad generalizations rather than narrow holdings. Doctrinal shortcuts like employment at will tend to persist precisely because they are useful. If AI-generated errors enter employment law practice through briefs, internal guidance, training materials, or template policies, they may not be challenged immediately. They may simply be repeated. Over time, they can reshape how rules are understood, even if no single case openly endorses them. That is how employment law has always evolved, for better or worse. The lesson of at-will employment is not that courts are careless or that technology is dangerous. It is that systems reward convenience, and once a convenient rule is accepted, it rarely gets revisited. The at-will employment rule survived not because it was correct, but because it was useful. AI hallucinations will survive for the same reason, unless institutions impose discipline. AI can be an extraordinary tool for employment lawyers: drafting, research, pattern recognition. We just went through a review of the AI closed-end sandbox system. We’re doing another one next week or so to see how it makes our business and the work we do as lawyers more efficient. But it must be treated the way lawyers treat junior associates: helpful, fast, and never authoritative without review. We have to check things. Primary source verification is not optional.
In this new paradigm, institutional norms must treat AI output as a starting point, not an endpoint, especially in a field where assumptions can harden into doctrine for generations. And you folks live and work under the doctrine of at-will employment, and have for generations. That’s how long it has existed. Employment law shows us how easily legal fictions can become foundational rules. It also shows us how difficult those rules are to unwind once they take hold. The Georgia case demonstrates that AI hallucinations are no longer hypothetical. They have already entered court orders, if only briefly. History suggests the next error may persist longer. Further, a comprehensive survey has never been done to see where such errors may have already worked their way into orders unnoticed. AI does not introduce a new risk. It accelerates old ones. The question is not whether AI will hallucinate. It will. The question is whether lawyers who already live with the consequences of historical misstatements will stop, verify, and ask, because that’s their job, their licensure requires them to stop, look, verify, and ask, whether the thing everyone is repeating is actually true. I guess I’m doing that with employment at will. I’ve been questioning it all along. Is it really true? And I keep on questioning, because I don’t think it’s true. I think there’s a better way. It’s called employment for cause, meaning stated reasons for termination instead of just ipse dixit: we’re going to fire you for no reason, without any notice. So with all that said, thank you for listening, and let me be of service. Hey, it’s Mark, and thank you for listening to this episode of the Employee Survival Guide. If you’d like to be interviewed for our podcast and share your story about what you’re going through at work, and do so anonymously, please send me an email at mcarey at C-A-P-C-Law.com. And also, if you like this podcast episode and others like it, please leave us a review.
It really does help others find this podcast. So leave a review on Apple or Spotify or wherever you listen to podcasts. Thank you very much and glad to be of service to you.

AI Hallucinations and the Employment At Will Rule FAQs

What are AI hallucinations in the workplace?

A. AI hallucinations happen when AI generates false or misleading information. In a workplace, this can lead to inaccurate reports, performance metrics, or misstatements that affect decisions about employees.

How does employment-at-will law interact with AI misstatements?

A. Employment-at-will lets employers terminate employees without cause, but AI-generated errors that affect employment decisions could raise concerns about fairness, documentation, and potential liability under employment laws.

How can employees protect themselves from AI errors affecting their jobs?

A. Employees should document communications, verify AI-generated reports, and address any discrepancies with management or HR. Keeping clear records helps reduce the risk of wrongful termination or other negative consequences from AI misstatements.