FIR #449: Employees’ Use of Shadow AI Surges
Employees everywhere are using AI to save time and be more productive. The thing is, many of them are using tools their employers have not approved, and they're not telling anyone. Companies are benefiting from this stealth approach to using generative AI, but there are plenty of risks, too. Neville and Shel look at the data and discuss approaches companies can take that will benefit both them and their employees.
Links from this episode:
- Why employees smuggle AI into work
- FIR #419: Is Shadow AI an Evil Lurking in the Heart of Your Company?
- The Rise of Shadow AI is a Double-Edged Sword for Corporate Innovation – NevilleHobson.com
- Chasing Shadows: Getting ahead of Shadow AI
- Half of all employees are Shadow AI users, new study finds
The next monthly, long-form episode of FIR will drop on Monday, February 24.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw transcript:
Hi everybody, and welcome to episode number 449 of For Immediate Release. I'm Shel Holtz. And I'm Neville Hobson. In episode 419 last July, we explored the concept of shadow AI, questioning its potential risks within organizations. Shadow AI refers to the use of unsanctioned artificial intelligence tools and technologies by employees without the knowledge or approval of their company's IT governance.
This practice can lead to security vulnerabilities, data breaches, and compliance issues, as these tools operate outside established protocols. A recent survey by Software AG highlights the prevalence of this phenomenon, revealing that over half of all employees admit to using AI tools at work without official approval.
The survey underscores the need for organizations to address this growing trend by implementing clear policies and providing sanctioned AI resources to ensure security and compliance. So what does this latest survey add to the topic we discussed last July? We'll discuss the full picture next.
This week the BBC reported on this latest survey, noting that many employees resort to unauthorized AI applications to enhance productivity, especially when official tools are lacking or inadequate. This unauthorized usage, shadow AI, mirrors the earlier concept of shadow IT, where employees used unsanctioned software or devices for work purposes.
The report suggests that organizations should proactively engage with employees to understand their needs and provide appropriate AI tools, thereby reducing the reliance on unsanctioned applications. And we have three good examples of how different organizations are addressing shadow AI. In response to the growing use of unsanctioned AI tools, banking giant JPMorgan Chase developed an internal generative AI assistant called IndexGPT for over 60,000 employees.
This tool assists with tasks such as document summarization and problem solving, ensuring that employees have access to approved AI resources within a secure environment. The Australian telecommunications company Telstra has implemented a rigorous process to assess all AI tools and capabilities within its business.
Telstra maintains a list of approved generative AI tools and provides guidelines on their usage. For instance, while the company has not officially banned the use of the Chinese AI model DeepSeek, it discourages its use and prefers employees to utilize Microsoft's Copilot, for which it is rolling out 21,000 licenses.
And US retail giant Walmart monitors AI use within the company and prefers in-house tools, but does not strictly prohibit external platforms. This approach allows Walmart to balance innovation with security by providing employees with approved AI resources while maintaining oversight of external tool usage.
So there are a number of takeaways here, Shel. The first one being the prevalence of shadow AI: over half of employees use unsanctioned AI tools at work, posing potential risks to organizations. Those risks from using unauthorized AI include security vulnerabilities, data breaches, and compliance issues.
Proactive organizational responses play an essential part in addressing this. Companies like JPMorgan Chase, Telstra, and Walmart are proactively engaging with employees to understand their needs and provide appropriate sanctioned AI tools, thereby reducing reliance on unsanctioned applications.
Your thoughts? I think that, first of all, organizations that are restricting the use of AI are leading employees to practice shadow AI so they can enjoy the benefits that AI is going to deliver to them. These organizations need to understand what's happening with these tools and what employees are using them for.
I don't know if it was the same survey you were looking at, because what I was reading didn't cite the source, but it was a recent survey that found that knowledge workers are using shadow AI: 83% of them say to save time, 81% to simplify tasks, and 71% to increase productivity. Those are just horrible things. What business would want that?
And I think there are misconceptions at leadership levels, among people who really aren't paying attention to what's happening, that are preventing the organization from implementing tools that employees could use with permission. And that's driving a lot of this.
I think there's got to be more understanding at the most senior levels of the organization of what these tools bring to the table. Then it's a question of education of employees. Employees really need to understand, first of all, what the risks are of using unapproved tools. They need to understand the organization's process.
Where I work, we have gone to great pains to explain the process of identifying, not just AI, but any technology that might be beneficial: how it is evaluated, how it moves into testing, and then how it moves into common use across the organization, so people understand that there's a process, and if there's a tool that you like, recommend it and it will be put into that process.
And finally, I think employees need to understand what tools are available to them. For example, today I will be communicating to employees in our organization that the tool they already have access to, Copilot baked into Office 365, now offers access to ChatGPT's full reasoning model.
You have to pay extra for this if you just have access to ChatGPT, but Microsoft decided to bake it right into Copilot at no additional cost. So I'm gonna talk to our employees about what a reasoning model is, when you might want to use it, and hey, you can use it at no extra cost, no hassle, right there within Copilot.
So there are steps that organizations can take to minimize the use of shadow IT and shadow AI. I think what's concerning is that they're not taking these steps. They're just worried about it. Yeah, I think that's the key point, Shel. What I took mostly from those examples is that what those three companies, in different industries and even different countries, are doing is the key to this.
What particularly struck me was Telstra in Australia, who are right on the ball with the latest thing going, which is DeepSeek, the Chinese AI model that they haven't banned at all, but they're discouraging its use. So what does that tell me? That little bit of information, not the detail of anything, is that they're communicating something that employees might be wondering about.
They've read about DeepSeek, and some might be thinking, oh, I'd love to try that, but can I? Should I? Now they've got some clarity. They'd rather you didn't, and presumably they're giving some reasons why not and so forth. It's not quite clear from that example, but it illustrates the point. JPMorgan Chase has taken it to a whole new level.
They're rolling out an internal generative AI tool for 60,000 employees. At that level, that's far more advanced than just simply explaining what AI tools are and whether you can use them or not; they've embraced it and they're rolling it out. Then you've got the other extreme, which is Walmart, who prefer in-house tools but don't strictly prohibit external platforms. I'm assuming, therefore, that they've explained the reasoning behind all of that. But the point is, they're proactively communicating, which is exactly what organizations should be doing.
They need to be proactive for a lot of reasons, not the least of which is the speed with which all of this is advancing. I was listening to a marketing AI podcast just yesterday, an excellent podcast, by the way, called The Artificial Intelligence Show. Yeah. And they were talking about the fact that there is this benchmark that has been introduced recently called Humanity's Last Exam. You've probably heard of it. I think it's 3,000 questions that are asked of a new large language model, and it evaluates how well the model answered the questions.
The first time it was used, and I can't remember which models were used in which sequence, but one of them scored something like 8%. Then a new model was released by a different AI company and it scored something like 16%. And then a new model was released by another AI company and it scored 24%. These are not trick questions, but they're not questions with simple answers where the model could go into its training set. It has to reason to get to the answer.
And what was frightening was that the time that elapsed between that first test that scored 6 or 8% and the last one that scored 24% was about two weeks. So that's how fast all of this is advancing, which leads a lot of people to think that we're probably closer to artificial general intelligence than we had thought we were before.
And companies that are sitting on their hands with this are just opening the door to more and more problems. You talked about DeepSeek. If you use DeepSeek on the web, through one of the many interfaces that allow you to play with it, you're exchanging data with servers in China, and that could be proprietary company data.
There is some statistical evidence about the amount of proprietary information that has been shared using shadow AI. I don't have the number in front of me, but it's not inconsequential. Sharing that with OpenAI, sharing that with Google or Anthropic, is one thing. Sharing it with the People's Republic of China is another.
Now, I'm playing with DeepSeek. I like it. I love R1, watching it think, the process it goes through; it's almost like watching a human think, with the text displaying on the screen. But I've installed it on my computer. It'll run even if I disconnect myself from the internet completely.
That's the whole idea of these open source models: they don't call out to any servers anywhere, any data centers. It's all contained right there on your hard drive. If you want to be able to let your employees use this thing, consider an implementation behind your firewall that's completely protected.
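To make that concrete, here is a minimal sketch of what querying a locally hosted DeepSeek R1 model can look like, assuming Ollama as the local runner with its default localhost endpoint; the model tag and the prompt are illustrative assumptions, not details from the episode. Because the request goes only to your own machine, no data leaves it.

```python
# Minimal sketch: querying a locally hosted DeepSeek R1 model through
# Ollama's local HTTP API. Assumes Ollama is installed and the model was
# pulled beforehand (e.g., with "ollama pull deepseek-r1:7b" -- the 7b tag
# is an illustrative assumption). No data leaves the machine: the request
# goes to localhost only.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_model(prompt: str, model: str = "deepseek-r1:7b") -> str:
    """Send a prompt to the local model and return its complete response."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["response"]


if __name__ == "__main__":
    # R1 emits its chain of thought between <think> tags before the answer,
    # which is the "watching it think" behavior described above.
    print(ask_local_model("Summarize the risks of shadow AI in one sentence."))
```

A firewalled internal deployment would work the same way; the endpoint would simply point at an approved internal host instead of localhost.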
But we have to start thinking about how we give these tools to employees so they can be more productive and efficient, the company can do better, and we don't run the risks associated with employees bringing tools into the organization behind the backs of IT. So, bottom line, essentially it comes down to communication, doesn't it?
Because everything you've outlined requires that proactive engagement with employees to help them understand why you're doing it, what it is, how they can take advantage of it, and the pros and cons of it all. The point, though, is clarity, so that there is no question in the employee's mind: can I use this? Should I? Or had I better bring my own?
And I think one other thing that you need is not just communication to employees, but engagement with them around this. I will share one other thing that we're doing where I work: we're establishing an AI committee, and we have an open call for membership on this committee. I'm not responsible for this; it's out of the IT department. But what they're looking for is a cross section of the company. They don't care how well you understand AI, how much you've used it, or what level of the organization you're at. They're looking to get people from all across the organization.
So there is representation from all corners of the company, yeah, in the decisions that are made about this. And that will be thoroughly communicated, so people don't think this is executives in the ivory tower, or the software police in IT. These are representatives from all parts of the organization who have looked at the issues, looked at the risks, and made these decisions. And I think that makes it a lot more understandable and a lot more acceptable. So engage employees in the process. Yeah, I would agree with that, Shel.
So we'll have a link in the show notes to this survey and to all the references we've made here, so you can take a look yourself and get up to speed. There's some useful stuff to understand here, and for employers, yeah: build up that proactive engagement with employees on this very topic and you will be, I'm certain, pleased with the outcome, I would say. And that'll be a 30 for this episode of For Immediate Release.