Are you prepared for the seismic shifts that artificial intelligence (AI) is bringing to the workplace? In this eye-opening episode of Employee Survival Guide®, Mark Carey dives deep into the implications of AI for employment, including AI hiring bias, revealing how it's not just changing hiring practices but also reshaping the very fabric of employee rights and workplace culture. As AI technologies become more prevalent, understanding their impact on discrimination, privacy, and mental health is crucial for every worker navigating this new landscape.
Mark highlights the critical areas where AI is influencing employment, from hiring algorithms that can perpetuate biases—leading to discrimination based on race, gender, and age—to the alarming rise of employee surveillance practices that threaten privacy and trust. With the potential for AI to amplify harassment in the workplace, it's essential for employees to be aware of their rights and advocate for a fairer work environment. The risks associated with algorithmic decision-making, including the notorious black box problem, are not just theoretical; they can have real-world consequences for job security and employee well-being.
As we explore the challenges of performance monitoring and the erosion of the employer-employee relationship, Mark emphasizes the importance of remaining vigilant. Workers must understand their legal rights in the face of employment law issues, including the potential for wrongful termination, hostile work environments, and retaliation claims. This episode serves as a vital resource for anyone looking to empower themselves in a rapidly changing work landscape, offering insider tips on negotiating severance packages, understanding employment contracts, and navigating workplace policies.
Join us as we dissect the complexities of AI in the workplace and equip you with the essential survival skills needed to thrive amidst these changes. Whether you’re dealing with workplace discrimination, seeking career development tips, or simply trying to maintain a healthy work-life balance, this episode of Employee Survival Guide® is your go-to podcast for navigating the challenges of modern employment. Don’t let AI dictate your career path—take charge of your future and advocate for your rights!
Remember, knowledge is power, and in this episode, we arm you with the insights you need to not only survive but thrive in an AI-driven world. Tune in and transform your understanding of employment dynamics today!
If you enjoyed this episode of the Employee Survival Guide, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player, such as Apple Podcasts or Spotify. Leaving a review will let other listeners know that you found the content of this podcast important in the area of employment law in the United States.
For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
Transcript:
Speaker #0 Hey, it's Mark here, and welcome to the next edition of the Employee Survival Guide, where I tell you, as always, what your employer definitely does not want you to know about, and a lot more. It's Mark, and welcome back. Today we're going to touch upon a topic that I've been thinking about for some time and that you've been confronted with day in, day out: artificial intelligence and working. It's a pretty complex area, and I'm wading in because I don't think many other people are. I think we're really early in the process. You know, I'm using Google Gemini, and I'm sure you're using ChatGPT and other tools to play around, draw pictures, make art, et cetera. But my focus is really on what is going to happen with AI in the workplace as it affects you as employees and executives. And I may be a little all over the place today, but I did try to organize my thoughts. For example, we're going to talk about hiring and discrimination, and about my favorite, performance monitoring and surveillance. Other topics include AI decision-making, and how about mental health? And the weird one I came up with as well is AI as a harassment amplifier — people could use a bot to go after you. And then employee trust was really a good one too, because that dawned on me: people have low employee engagement these days, so they have low trust in employers. So let's just dig into it. As a preface, I see a lot of cases, a lot of fact patterns, and I'm just a very curious person. So I went out and looked — and yes, I used AI to do the research for this podcast, because I wanted to see what it was doing. Because if you want to understand what AI is: I was watching a podcast with Elon Musk the other day, hosted by a fellow in, I believe, Norway.
And Musk said that we're running out of data — meaning, the implication was that AI is so fast at picking up all available data that it's going to run out of it. He even said that all the books in the world that were ever written have already been analyzed in terms of machine learning, and then photos, TV, podcasts, you name it. That's pretty alarming if you think about it. So, getting back to what AI produced about its own relationship to employment law: when I typed the question into Google Gemini to come up with issues, I looked at the topics it returned and asked whether they were accurate or not based on my experience. And I wanted to share with you what I discovered, and also give you a reality check on what these different topics are, what's going to happen to us as employees, how employers are going to react to this issue, and how they're going to handle it. So let's just dive in. Sorry, segue. The first one is hiring and discrimination. AI-powered recruiting tools can inadvertently perpetuate bias if the data they are trained on contains historical patterns of discrimination. That's probably the biggest issue most people think about: the human input into it. How do the coders prevent the AI's machine learning from replicating bias? This could lead to cases involving employment discrimination based on race, gender, age, disability, et cetera. Employers will need to be very careful about how they use AI in the hiring process to avoid legal trouble. Good luck with that. I see it as just fraught with issues. If anybody has a recent college graduate, they know that their son or daughter has been interviewed by a computer. The computer is doing all the work. It's the screening process. That's more common these days than not.
It's not something I experienced when I was ever interviewed for a job — though I never really interviewed for a job, because I've been doing this all my life. But it's happening at a very quick pace. So one question is what the AI interview process is like and what it's looking at: is it doing facial recognition, is it watching how you twitch in the video, is it looking at your nervousness, et cetera? The problem with discrimination in hiring is that the AI systems used in recruiting are often trained on historical data. If the data reflects past biases — for example, fewer women or minorities hired in certain roles — the AI algorithm may learn and replicate these patterns, as I indicated. This can lead to qualified candidates being overlooked simply because they don't match the biased historical model the AI is using. We could see discrimination cases where a rejected job applicant sues the employer alleging disparate treatment — for example, that they were intentionally discriminated against due to their membership in a protected class. Disparate impact is another form, where a seemingly neutral AI hiring process disproportionately screens out individuals in a protected category. So the defense challenge, meaning for the employer, will be proving the AI system is fair. This may involve demonstrating that it doesn't have an adverse impact on protected groups. That's complex, especially with less transparent AI models. You have to understand that the developers don't really know what's happening in the black box, and I'll get to that in a second. This is happening so quickly. And we all have — at least I have — the Schwarzenegger-film, machines-take-over-the-world reaction: the concern that AI becomes too smart and begins to eliminate the humans. But I digress again. So discrimination in the hiring process is a possible issue we might have. It may be happening now.
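To make the disparate impact idea concrete, regulators often apply the EEOC's "four-fifths" guideline: compare each group's selection rate to the most-favored group's rate, and a ratio below 0.8 is a red flag. Here's a minimal sketch of that arithmetic; all numbers are hypothetical, and this is an illustration, not legal advice or an actual audit tool.

```python
# Illustrative sketch of the EEOC "four-fifths rule" screen for possible
# disparate impact in a hiring funnel. All numbers are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def adverse_impact_ratio(group_rate: float, top_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's
    rate; values below 0.8 suggest possible disparate impact."""
    return group_rate / top_rate

# Hypothetical outcomes from an AI resume screener:
rate_a = selection_rate(hired=60, applicants=100)  # 0.60
rate_b = selection_rate(hired=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths threshold: possible disparate impact")
```

In this hypothetical, group B's candidates are selected at half the rate of group A's, well under the four-fifths line — exactly the kind of pattern an employer would have to explain.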
How would you know if you're being discriminated against in the hiring process by the AI bot? You really wouldn't, honestly. The law is very slow to pick up on current events, so you wouldn't find out unless somebody wrote about it — like I would write about something where I found a little tweak here and there in the discovery of data in actual legal cases, and we came up with something. You would learn it that way, but it's a very slow process from the legal standpoint to bring these issues to the forefront. So I think it's a wild west of discrimination in the hiring process. You just pray they're going to do it correctly in terms of coding what they're looking for. But again, I have really mixed feelings about that: humans put data into computer code, and that can generate a replication of the bias. Onward. Number two, performance monitoring and surveillance. My favorite. This was a really big topic when we all went into our own homes during the pandemic, and we discovered — and reporting confirmed — that a lot of employee monitoring took place. It had been going on for a while, but that's when it came out. So AI can be used to monitor employee productivity, communication, and even physical movements. This raises significant privacy concerns and can lead to cases focused on unlawful surveillance, unreasonable expectations, and the creation of a hostile work environment. AI-augmented monitoring tools go far beyond traditional performance tracking. They may analyze employee email and communications for — get this — sentiment and potential disloyalty. Think about that. You're writing an email; everything you do at work, everything you touch, can be analyzed in a millisecond by a computer to determine whether you're loyal, or whether something is maybe happening with your mental health. So employee monitoring is a huge concern here in terms of the AI issue.
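To picture what a sentiment or "disloyalty" scan over workplace messages might look like, here is a deliberately crude sketch. The word list and function names are invented for illustration; real monitoring vendors use far more sophisticated (and far more opaque) models, which is part of the problem.

```python
# Deliberately crude illustration of "sentiment"/"disloyalty" scanning
# over workplace messages. The flag-term list is invented; real
# monitoring products use much more complex and opaque models.

FLAG_TERMS = {"quit", "unfair", "burned out", "lawsuit", "union"}

def flag_message(text: str) -> list[str]:
    """Return any flagged terms found in a message, sorted."""
    lowered = text.lower()
    return sorted(term for term in FLAG_TERMS if term in lowered)

hits = flag_message("I'm burned out and honestly thinking about whether to quit.")
print(hits)  # ['burned out', 'quit']
```

Even this toy version shows the danger: a perfectly ordinary vent to a coworker gets flagged, stored, and potentially fed into decisions about you.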
We don't know whether the government is regulating how companies use this technology to monitor employees. We know there is keystroke monitoring software. I always laughed at the one where you can buy a little device that moves your mouse, to trick the computer out of detecting low productivity. But facial expressions and body language on Zoom calls and things of that nature — all of that data, emails, Slack, texts, video from Zoom — gets dumped in, because the AI bots are so hungry for more information. And you can just think about the insidious nature of what it's looking for, even your physical location, where you are. Some people want to work remotely from wherever these days, and maybe that comes into play. The types of cases we're looking at in terms of performance monitoring and surveillance obviously include the invasion of privacy issue. Employees may argue that unreasonably intrusive surveillance violates the right to privacy in the workplace. I agree, because we had this issue come up when we all went to Zoom during the pandemic. There were other things happening in the household around us that could be seen. Forget the cat, the pet, et cetera — it was conversations between spouses, family matters, severe and serious issues. How far is the reach, and the breach, that the employer can go to monitor people? That's really a serious question. I always argued that the issue with remote working and employee monitoring is a violation of trust. You know, we have these little covers on our computers and laptops; we can block the screen. But can you really turn off the microphone? Not likely. And here's a warning: if you have a computer or device from your work, shut it down if you want some privacy. Because leaving it open is like letting Alexa listen to your conversations with your spouse.
Just remember it's there, and remember they can listen in on everything you're doing. I don't mean to scare you, but it's reality. And by the way, if you got an ad targeting you after talking in front of Alexa, it's no joke. It's actually doing that. It's listening to you and spinning advertising back at you. It's not a coincidence. Next issue: data protection violations. The collection and storage of vast quantities of employee data by AI systems will raise issues under data protection laws like the GDPR, the European Union's General Data Protection Regulation. And here I thought about a health concerns issue. What if the AI is working through the laptop and listening to you have a conversation about a cancer diagnosis you didn't disclose? Where does HIPAA come into play there? How does the computer know to shut off when it hears a medical issue like that? We're not being told anything. Is your employer telling you what the constraints are going to be when it hears a medical issue happening across its devices? I mean, certainly they're not going to have a screen come on saying, sorry, Susan, but you can't talk about that in front of this computer because we have to observe your HIPAA rights. That's not going to happen. Employers are their own private governments. They do what they want in their own ways, and it's all secretive. And we, as employment lawyers, try to enforce the rights of people against employers, because employers are terrible. They don't want to observe the ethics or morals of these issues. They claim that they do, but in reality they're sitting there listening to you right now, listening to this podcast. How about that? On your own time, at home. Third issue: gig work and independent contractor status. This one came about from the Gemini-related feedback I got. AI platforms often manage gig workers and freelancers.
And so the AI bot said in response: legal battles are likely to emerge around whether these workers are truly independent contractors or whether they should be classified as employees, with the rights and benefits that come with that status. Now, I saw that feedback from the AI — this is Google Gemini — and I thought, you know what? What I just read to you didn't really make much sense as an issue. We understand what gig workers are. We understand what independent contractors are. But the "legal battles" part — the AI is not smart enough yet (it will be) to tell us why this is really a big issue. It's not being more specific, so you're really seeing, in this case with Gemini, an inability to analyze particular areas. It'll catch up; it'll probably listen to this podcast and reform itself to give a more explanatory analysis of how AI platforms may be interfering with independent contractor status. So I found that this particular topic, the gig work and independent contractor feedback from the AI, didn't actually tell us anything — I guess that's the point — and I'm struggling to help you interpret it, because the AI is designed to learn but it's not producing yet. It will. The fourth item is automation and job displacement. Here we get something really sound and concrete. I read in the Wall Street Journal — and I think it was picked up in the New York Times as well — that on Wall Street, certain levels of workers in the investment community, at the very low end, are getting eliminated. I think it was Goldman Sachs that eliminated a range of individuals at the low end. So if you're an investment banker trying to start out, AI is doing your job for you. There's no more preparing decks and spreadsheets and the like; that's all being done by AI. So automation is happening, and job displacement with it. It's a good example, and it just recently occurred. And the AI, Gemini, said to us: as AI automates more tasks, job losses will occur. This could lead to cases related to severance packages, retraining obligations, and the overall responsibilities companies have toward displaced workers. Again, another example of Gemini not producing a response that adequately explains the topic of automation and job displacement — severance package cases, retraining obligations. Well, retraining, yes: maybe you should retrain your workers and have them do something that AI devices can't do, of course. And then "overall responsibilities" — it didn't really explain that. So another example where AI gets it wrong, or at least doesn't provide enough explanation. I included these because they just stuck out as being nonsensical sometimes in their explanations. The next one is algorithmic decision-making. Here we're talking about the coders, the human people who write the code. And I'll tell you the response I got from Gemini: when AI systems make decisions about promotions, terminations, and disciplinary actions, there is the potential for bias and unfairness. Lawsuits may challenge the lack of transparency in these AI systems, demanding explanations for decisions that heavily affect employees. I included this one because we all know it's data in, data out — you can have biased data put in by human individuals. And then, think about this: the AI device is learning from a wide range of things. It's going to learn everything.
It's going to learn what the Ku Klux Klan was. I think Google tried to change its AI to do certain things as a way not to offend people. But the machine learning device is going to pick up and learn about biases in historical American history, world history, et cetera, and it's going to bring that into its algorithmic equations, and it's going to make decisions about your job. And are they going to get it right? Is the employer going to be transparent about it? Say the employer tells us, we were convinced by Accenture or whomever to bring AI into our workplace — and by the way, Accenture has a large team of people promoting AI in the workplace; that's what they do as consultants. So what about the actual box itself, in terms of putting the code in? What is it learning, and how do you control for bias? The fear is that bias is being pumped into the algorithmic equation, and thus it's going to impact you adversely. Again, next one, number six: AI-driven decision-making. Here we get into examples like performance evaluations, promotions, disciplinary actions, and terminations. And think about this in all honesty — it's very important. Companies love to automate things, so you're probably experiencing an automated performance evaluation. Your manager is maybe creating some input, and then you're getting questions back from an AI device and having to feed into it. And it's going to analyze the time it takes for you to do it. It's going to analyze your pattern of communication, what you say, the language you use, and what it can potentially mean. It'll interpret that. And then it'll assess other aspects. Performance reviews can now include your emails and your work on various products.
And if you've received a performance improvement plan or a negative performance review, where it goes into various line items with accusations of poor performance because you "need improvement," it puts various facts in. Well, now we're going to see the AI device grab from your available work product — you, the employee — and from your 360 review by your coworkers. Remember those? They still happen too. And that all gets fed into the performance evaluation. The AI device is going to help the manager rate you, but maybe it'll just rate you itself. That's pretty scary. And it's going to be pretty smart about it, too, because it's looking at every piece of data you've ever created about your performance. And who's smarter right now, your memory or the computer? The computer is going to memorize every single email you wrote and every single deck you made, et cetera, and maybe have more data than you. That's scary. It means you have to get on top of your game now, because it's starting now. I'm talking to you about it on a podcast, and my concern is that performance evaluations are going to get uglier. We know that they don't work, but we know why they're used: they're used to basically get rid of people. And maybe it's going to get more intense. The next thing is promotions. What about an AI system identifying candidates for advancement based on a wide range of factors, potentially including social media activity and personality assessments? I included this one because we have different generations of people putting out social media.
We have people who post on LinkedIn; we have people who post on, let's say, YouTube or TikTok, at various ages. I think of a classic example — two things, actually. I heard a story yesterday while I was driving about YouTubers posting videos about vaping. There was apparently a large phenomenon of people, teenagers generally, video recording themselves vaping. And that's stuck on YouTube; they can't take it off. So it's there, and the machine learns it. And as you grow older, it's potentially factored into the computer's profile of who you are as an employee, because the computer has a wide reach to understand you. So, first of all, never put that stuff on YouTube, or anywhere else for that matter. But that's a different population of workers, who live their lives in the iPhone generation. I have three kids like that. I don't know if they're putting a lot of data out there, but they're on Instagram or whatever it is. And the concern is that all the data they put out there is potentially going to be used to evaluate their performance, because the machine learning is learning from everywhere. It's very scary. So anybody of any age putting out data in the social media sphere, of any sort — it's going to be used to qualify you for performance evaluations or promotions. And how about disciplinary action? Maybe you fit a pattern of having engaged in some form of insidious bullying of somebody on the internet, or shaming, or whatever it was, and it gets brought into the workspace because the computers were told to go out there and see what you do. And let me speak very clearly to you: when someone does a background check on an individual, do you know that background check companies will search your social media profile? I only know this from personal experience, because I've asked for them in terms of background checks.
I want to know who I'm dealing with sometimes. I didn't ask the background check folks to do that, but they went ahead and did it. I mention that as an example, because now you have the AI devices doing what? Much more quantitatively bringing in data all about you. We have this reckoning between your prior activities and your future activities when it comes to putting information out there. At one point you want to put information out there because it's job-relevant — maybe it's LinkedIn and you want to post something. Or you're a younger employee, you've just been fired, you go viral on TikTok, and you want to share it because you're a twenty-something and the company just got rid of you in such a way. These are recent examples where young employees have done this and it's gone viral, and in one example the CEO of the company had to apologize to the individual for the way the termination was handled. So, real caution in terms of AI-driven decision-making and the data you're putting out there, individually, at work and even outside of work. As you can hear, what's dawning on us is quite crazy. So let's move into section number seven: the challenge of the black box AI. Not to be redundant here, but one significant challenge is that many complex AI systems, especially those using deep learning, are considered black boxes. This means that even the developers may not fully understand how the AI arrives at specific decisions — hence the potential for bias and the accountability problem. If an employee is terminated or denied a promotion due to an AI assessment, it may be impossible to provide a satisfactory explanation of why that decision was made. The lack of transparency goes against notions of fairness and due process.
I pause here because employers are always going to try to create a defense for why someone was let go, and they can't build that case if they can't understand why an AI device decided to terminate an employee, et cetera. That's a problem for employers, and they almost want to sit there and wait and pause and watch the landscape develop, because they want to use this product. They've got to control the product, and they've got to know what it's going to do. And you and I can sit here and watch as they screw up, because they're going to make these screw-ups, and I'm going to bring you these examples once they happen. So transparency is not something employers are going to want, because they're not transparent with you now, are they? No. And that's all I ever talk about, to raise your awareness. I'm being transparent because that's how I see it. Employers don't want to be transparent, because that's not the way it works. They want things hidden from you. The very essence of this podcast is telling you what your employer does not want you to know — including this podcast. So transparency and AI are in conflict, because employers have to justify their actions to a court, explaining why they made a decision. They can't just tell a judge, Your Honor, the AI bot did it. No — Mr. Employer, you've got to explain it, because that's the law. So there's a conundrum: they want it, they want to use it, but it's going to get out of control very quickly. Transparency? I doubt it. Okay, the next one is very similar: potential for bias. Even if the AI developers themselves have no discriminatory intent, hidden biases in the training data, or even in the very features the AI selects for analysis — historical data or current data, news, the New York Times, the Wall Street Journal, you name it — may lead to unfair outcomes. Detecting the bias within a black box system can be extremely difficult. Yeah, I'm waiting for this one too.
You're going to need some type of audit trail, some type of accountability, to ensure there's no bias there. I think it's a black box for a good reason, because it's probably going to get a lot of employers in trouble. You'll have a lot of class actions. And I'll be looking for it, because who else is going to look for it? The federal government? Not likely. So it's up to you and me to police the situation. And why not? This is kind of the dawn of the employee era, where an employee actually matters, and employers realize they need employees instead of the other way around, where they can just sit there and abuse them. That's changing — very slowly. So potential for bias is extremely important: hidden bias, even though employers and companies say they don't have it and claim there's no bias there. They'll say they're an EEO employer — an equal employment opportunity employer — et cetera, but it's going to be the wild west watching this develop. Now let's talk about something really important. Another topic: AI, workplace mental health, and well-being. This one popped up, and I said I've got to pause and ruminate on this with you. AI is poised to influence employees' mental health in positive and negative ways. The potential benefits are these. Personalized support: think about it, AI-powered chatbots or apps could provide tailored advice on stress management to you. Early symptom recognition: saying, you know, Susan, I think you might be exhibiting patterns consistent with major depression or anxiety — something like that happening to you — and then offering resources that would be helpful to you. So that's positive. That sounds likely to happen, I think, right?
But it's monitoring how I'm talking with you right now and assessing: is Mark showing something out of the DSM-5 — the Diagnostic and Statistical Manual — is there something about his tone and intonation suggesting he might be feeling kind of blue today? Something of that nature; think about it. The laptop's listening to you constantly. Proactive intervention: the AI could analyze communication patterns, as I was just describing, and behavioral data to identify employees at risk of burnout or mental health decline, allowing for early intervention. Again, that makes sense. I personally would never use that on my employees at work — that's a matter of their own personal privacy — but some employers may decide this is a relevant area and create apps to help employees, like under an employee assistance umbrella. Pretty strange and unusual, but maybe we can recognize it as a positive. The risk aspects are self-evident. Increased work pressure: AI systems setting productivity goals and monitoring employees' activity constantly could exacerbate stress and anxiety. I mean, who are you working for? You're not working for the man any longer, or the woman. You're working for the bot. And the bot doesn't have a conscience about you — oh, I think Mark's a little overworked right now, his task timing is actually slowing down, maybe we should give him a break — nothing like that. Think about the Amazon warehouse; I'm sure it exists there. The package fillers are working constantly, and there are lawsuits about this issue; people are burning out and describing the warehouse as a very difficult place to work. But you and I both need our packages at our doorstep, and it's amazing that it happens every day. Yet someone in that logistics pipeline is probably experiencing some level of stress while being monitored for it. Why wouldn't Amazon do that? Of course they would.
So, increased work pressure. How about emotional surveillance? Employers may deploy AI tools for sentiment analysis of emails — I've been discussing that — facial expression monitoring, and tracking workers' emotional status, raising serious ethical and privacy concerns. I mean, folks, when you step into the workplace, you have some privacy related to HIPAA, and when you go to the bathroom; other than that, you've got no privacy, and employers basically run the place like a private government, like I tell you. Emotional surveillance is basically stepping over the line, and your employer is going to do this too. I have examples in performance reviews where the manager doesn't really identify factual issues or examples, but goes after the subtleties of how you reacted, spoke, or interacted with your team. You know what I'm talking about — you've seen and heard about this. That's an emotional surveillance scenario happening now. But think about it happening at a magnitude of a hundred, using computers to do the work that humans do and putting that information in the hands of managers to make decisions. I personally know of a case at MetLife — I did a podcast on it years ago. The woman had been working remotely, I think during the pandemic, and they gave her a negative performance review. It was a race case, but they said it was because they had assessed, basically, her emotional intelligence. This person had a PhD and worked at a very high level. They basically shit-canned her for emotional intelligence issues after surveilling her, and they actually told her — in documents we produced later on in discovery — about five different items she had failed, all based upon how she reacted or said something, et cetera. So it's quite a serious issue, it does happen, and it's going to get worse. How about algorithmic bias in mental health assessment?
AI systems used to predict mental health concerns could be built on biased data, leading to misdiagnosis or unfair treatment of employees. I love this one, because a learning machine is going to consume whatever data it needs to learn: all of the DSM-5, all of the articles and research reports, everything it can pull off the internet. And then you get unfair treatment: the system "regarded" you as having a disability, whether or not you actually have one. That gets into the Americans with Disabilities Act, because that's the statute that governs mental health and well-being at the workplace. So you could have a situation where too much data goes in, the system feeds this junk back to the manager, the manager reacts to it, and you get terminated. That does happen. People do get fired when their mental health status is disclosed, because some employers don't understand it and they react to it. And it can get even worse than that. Privacy concerns and violations? Yes. The collection of sensitive biometric and emotional data by an AI system will raise alarms about employee privacy and potential misuse of that information. Yes, all day long. Run. Mental health and well-being at work is probably going to be the largest issue to come out of this, because of all the surveillance that's happening. If you, or your co-workers, have mental illness of any sort, from nominal to severe, you're kind of on notice right now, because you have to manage yourself with your employer. Which is like, wait a minute, I have to think about how I write something or how I talk about work in general, because I'm being assessed? Folks, that's what's happening now. That's what they're going to do to you, and it already does happen.
In fact, Bridgewater Associates, the largest hedge fund in the world, is down the street from my office here, and they actually instituted these Principles that Ray Dalio created. They monitored the way people spoke and rated them on it. I don't think they do it to the extent they used to, but they still operate under the Principles over there. They used to run business discussions with two iPads sitting there, rating each other on tone, intonation, whatever it was: being effective, being transparent. It's been done already. And now you're going to speed that process up with AI doing it in a way that is not apparent to you, probably through laptops, because it's easy to program. The device in front of me has a microphone on it, a camera on it, even a sensor to pick up my blood pressure if I push the little fingerprint icon on the dashboard here. So enormous privacy issues can come up from whatever the AI device produces as it monitors you. This leads to the next issue: discrimination based on mental health. If the AI system flags individuals with potential mental health struggles, could this lead to unfair treatment? Yes. Missed promotions? Yes. Termination? Yes. That sounds insidious, but it's what currently happens today; we just don't have an AI system doing it. Humans do this to people. Humans do this to people. That's why you have mental health cases under the Americans with Disabilities Act and state law in the courts, and there have been thousands of them, because humans do this to one another. If you become unhealthy, you're going to get fired or mistreated. Not to say that all employers do this, but humans are not nice folks at work. If you're not a healthy person at work, you're a lesser-than person. It's called dehumanization. If you don't know what that means, learn it.
It happens all the time. Think about the person in, again, the Amazon warehouse who's experiencing physical problems and isn't able to move the packages along. They're being less productive, and they're being monitored for the number of packages they can move in a minute or an hour. And it's not about them as a human with emotions; it's about them as a machine. Okay, so it happens. You need to be aware of this. I won't get into the newer responsibilities for employers here, the duty to monitor AI's impact and the duty to provide reasonable accommodations; I'll leave that for another day. Here's another topic: AI as a harassment amplifier. The news story today is that teenagers, who can be downright mean, are using AI for this right now. I think there's some ban on producing sexually explicit content if you ask a mainstream AI to do it, but deepfakes exist. Currently, and this is really sad, teenagers, mostly girls, are being deepfaked into fake nudity, and it's being put out there. That's the story. Now bring that into the workplace. Number one, a deepfake that's out there is going to remain out there for that individual. That's an identity issue: somebody has basically stolen their identity, and it's going to follow them into their future employment. But in the workplace, there's the potential that AI could excel at pattern recognition and data analysis in a darker way. Say you're an employee and you like someone. You step over the line and start analyzing their social media. You essentially write an algorithm, create your own learning machine, and go out and gather everything about the person you adore, the one you want a relationship with. You look at all of their sensitive personal information in any way possible. You're mining it, even your own communications with them.
So you can get into a scenario of harassment of any nature, not just sexual, where a coworker is using an AI tool to develop a campaign to harass you online and potentially at the office. That's a deep well of potential misbehavior by coworkers, and also by companies, who are responsible for managing this issue because it's their work environment. You've now taken the work environment, stuck AI into it, and given the entire world free range to use AI. And of course they're using it. The usage statistics from the first week the first big chatbot came out were insane, that's how many people went on it. I did too, and I'm sure you did, just to find out what the heck was going on. But now employers are responsible for their work environment when AI from the outside is brought into the workplace to harass employees. I don't think we have early examples of that just yet, but you can imagine someone's social media presence being used in some way to extract vengeance or carry out some level of harassment. Again, humans can be mean individuals. So that's an issue. You could have the deepfake problem happen to a coworker at work. It's possible, because you can tell a tool to do that, and it will. Next issue: employee trust in AI. It's a hot topic, okay? If you're not already, you should at this juncture of the episode feel quite uneasy about this new topic of AI in the workplace. There was a trust issue beforehand, and now it's getting more intense. Employers have to manage this issue, and they're not going to tell you how they're managing you. They may bring it up at work. If you're an outside consultant working on AI for corporations, you know a lot more than the average employee. But employers are not telling employees what they're using AI for at work. If yours does, let me know.
Send me an email. I want to know the earliest signs of what's happening. But employee trust at this juncture is poor to nonexistent. Look at employee engagement: I think it's in the low 20s to 30s percent, depending on age. That's pretty low. I mean, that's really low. It means a lot of people are unhappy at work. Then you throw this AI issue on top of it, plus the lack of trust from the black box problem, because even employers don't know what the heck is going to happen, and they're implementing these technologies into your workspace anyway. So how do you gain trust in the process? I thought about this before I included this topic: think about your own role in this process, because it's always going to come back to your role. What are you doing for yourself to protect yourself against any misuse of AI? Think about basic examples. I can close my laptop, cover the camera, turn the machine off, and have some privacy at home. I can build habits to make sure I'm in a safe, private space away from my employer. You can do that. The next thing to think about is your professionalism when you're at work. Obviously, you don't want to be baited into an argument by anybody or raise your voice, and when you're having conversations, don't drop the F-bomb on the individual. We may get back to a political correctness aspect we had since moved away from; maybe that comes back. So think about things you can do for yourself, because your employer is not going to do this for you. They may say they will, but they're going to implement these tools because they can't resist, and because consultants are selling them this technology. Okay.
It's at a scale you don't understand yet, and it's going to overtake the workplace. To have any control over how AI interacts with you, make choices, and be aware and observant of what's happening around you. You're just trying to do your job, but now you have to take on this next level of observation and self-protection. Well, yeah, you've got to do that, because your employer is not going to do it for you. If you were ever going to be vigilant about gathering information about your workplace, now's the time, because the exponential effect this is going to have on you is going to get insane. Meaning: AI is going to gather all the data about you it can, in any way, shape, or form. We're talking current, active data on projects you're working on, whatever is feeding into their system. It's watching you. And that's crazy. You have to be vigilant about what your boundaries are, what you're saying and how, and turn work-related devices off when you can. People have phones, and a work phone is a work-related device. What do you do with it? Do you put it somewhere it can't hear you? Do you turn it off? We don't turn it off, but we should. I've just prompted you to think about all these things coming to mind as I talk with you, things that are quite serious and, frankly, insane. And I don't want to trust employers to do the right thing. So what's eroding employee trust in AI? Topics like the black box problem I just talked about: these systems are opaque. You don't understand them. Employers don't understand them. Big problem. Fear of job loss, because AI is going to eradicate entry-level jobs, like the investment banking analyst roles people get right out of college.
And the fear of job loss stacks. Say you're in your 50s: you already fear aging out during your prime earning years, and now the AI tool may take some of your job responsibilities away. You should see that coming, on the march, before it happens to you. Likewise, watch whether some of your job responsibilities are being handed to younger workers; that's usually a telltale sign you're being led out to pasture. Privacy concerns and extensive workplace monitoring, as I discussed: big issue. Perceived bias, which we talked about. You already have a lack of trust and low employee engagement, and employers are still trying to figure the stupid thing out. Just type the phrase "employee engagement" into a Google search and you'll see what I mean. You can't get past maybe ten pages of results, if there even are pages anymore, without hitting consultants pitching this thing called employee engagement. It's everywhere. You'll find nothing about employee engagement actually helping employees, meaning the way this podcast helps you. You won't find that. You'll find employers and consultants pushing their information out there, doing a lot of SEO around employee engagement like it meant something. But with employee engagement at 30 percent, they're obviously not doing their job. They're making a lot of money, but they're not doing their job. So how do you build trust in this AI process that's now upon you? Making things transparent and explainable to you? Well, that's not going to happen. Why would employers want to do that? The next piece is involving employees. I've been talking for a while about involving employees in the performance review process, but employers don't want to do that either. No input from employees, other than making a widget better. That's fine, but there's no involvement in the AI implementation process.
Why would employers want to do that? Ethical and responsible AI? That's so new, because AI itself is so new. It's a topic that maybe policymakers will eventually think about addressing. But try typing into an AI like Gemini and asking it for anything of a sexual nature, a picture or an image; it won't do it. So there's some level of concern and holdback, maybe implemented by the companies themselves. Look up the policymaking side; maybe there are new federal statutes coming out, but I don't know of any yet. So on trust in AI: I don't think there's going to be any employee trust for a long time. It's going to be more the opposite, like, run for the hills, folks. This shit's happening to you in real time. And the problem, in my sincerest thoughts about this, is that no one is thinking about it. No employee is actively thinking about what's happening around them. That's the purpose of this podcast episode, because it's happening now. And I've only touched on a small part of what's coming at you. I gave you a broad overview, but it's already here. So what are you going to do to protect yourself? I've given you some common-sense steps, starting with data privacy. Don't put your data out there. I know you want to be social on social media, but it's going to come around and kick your ass hard, and it's going to affect your job.
Employers don't care; they're just going to want this machine to learn everything it possibly can, because it's doing that now. All right, I won't talk any further. I think you got a hard taste of what we're seeing, and I'll talk about more as I research further into this. But it's here, it's going to come at you in sideways manners, and it's going to affect you and your job. I'll try to bring these things to light so you're aware of them, and I'm on the front lines because I'm concerned about it. Who else is on the front line with me? The federal government and the courts. Somebody else on the front line with us? You. We don't usually talk about you, the employee, by name, but you're here because you're listening. And if you're pissed off, freaked out, whatever, because I'm bringing this to your attention, then good. Now you have more information to protect yourself. So with that, have a good week. I'll talk to you soon. Thank you. Hey, it's Mark, and thank you for listening to this episode of the Employee Survival Guide. If you'd like to be interviewed for our podcast and share your story about what you're going through at work, and do so anonymously, please send me an email at mcarey at capclaw.com. And also, if you liked this podcast episode and others like it, please leave us a review. It really does help others find this podcast. So leave a review on Apple or Spotify or wherever you listen. Glad to be of service to you.