More and more employees are secretly using AI tools such as ChatGPT to work more productively and efficiently. But this shadow AI poses risks to data protection, compliance, and reputation. Companies need to establish clear guidelines and provide safe alternatives.
Shadow IT is a well-known issue that nearly all companies encounter. It refers to the use of devices, applications, or cloud services that are not officially approved and managed by an organization's IT department. Shadow IT is present in almost every organization, sometimes without and sometimes with the implicit approval of the IT department. It is so widespread because many employees find they can perform their tasks more productively and efficiently with these tools.
What shadow AI means in everyday work
Today, IT departments are witnessing a similar phenomenon with Large Language Models (LLMs). In addition to private use, LLMs have been integrated into various organizations through existing tools and APIs, often without centralized oversight. This creates new risks, particularly when employees use freely accessible chatbots such as ChatGPT, Claude or Gemini in an uncontrolled manner and upload confidential business data in the process. This emerging issue has been termed “Shadow AI,” which refers to the use of AI tools and applications without the knowledge or approval of the IT department.
The growth of shadow AI is significant. In its latest CX Trends study, cloud platform provider Zendesk predicts that its usage will increase by around 250% this year compared with the previous year, with even greater growth expected in the years ahead. Zendesk warns that this rapid increase exposes companies to specific security risks and threatens data protection, compliance, and business ethics.
A study conducted by Harmonic Security in autumn 2024 comes to similar conclusions. According to it, one in two employees now secretly uses unapproved AI tools, and most continue to do so even when their company officially bans them.
Why employees use shadow AI
According to Zendesk, there are several reasons why people are increasingly turning to shadow AI instead of using officially approved solutions. The most frequently cited motives include time savings, easier access, and inadequate alternatives. Additionally, factors such as a lack of support, insufficient budget, and limited knowledge often hinder the use or development of their own AI solutions, as reported by Zendesk on its website.
What risks shadow AI poses
Without proper monitoring and control, shadow AI poses significant threats to companies, with far-reaching consequences. The risks range from data and compliance breaches to outright security threats, according to Zendesk. Unverified AI tools often produce false or contradictory results, which can harm customer relationships and damage employees' reputations.
According to Golem.de, "People who use AI tools are classified by colleagues and managers as less competent, less industrious and even lazier than those who use traditional tools". The online magazine refers to a study by Duke University, which has sparked considerable controversy.
How security policies govern AI usage
AI is here to stay, regardless of what colleagues think of it. Zendesk therefore recommends developing a clear strategy for deploying AI solutions company-wide; this is the only way to prevent the misuse of shadow AI. The cloud platform provider has outlined best practices for implementing and managing AI projects. The first step is the targeted release of AI tools that are secured through company licenses and agreements with their providers. By offering clear alternatives, companies can prevent the long-term use of unauthorized applications. Zendesk also emphasizes the importance of clear guidelines for AI usage, precise specifications, and a governance framework that keeps solutions fair, transparent, and sustainable throughout the organization. Further recommendations include training and continuing education on AI, which supports daily usage and clarifies the associated risks and consequences. Larger companies benefit from an AI competence center that coordinates all AI initiatives. Ultimately, companies need a "culture of secure AI use".
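One way to make such guidelines enforceable rather than purely organizational is to encode them in a machine-readable form that tooling can check. The following is a minimal sketch of that idea; the tool names, data classes, and policy structure are hypothetical examples, not a schema recommended by Zendesk.

```python
# Sketch of a machine-readable AI usage policy. All names here are
# hypothetical; a real policy would come from governance decisions.
AI_USAGE_POLICY = {
    # Tools covered by company licenses and provider agreements
    "approved_tools": {"internal-copilot", "enterprise-chatgpt"},
    # Freely accessible chatbots that must not receive business data
    "blocked_tools": {"public-chatgpt", "public-gemini"},
    # Data classifications that may never leave the company perimeter
    "restricted_data": {"customer_pii", "financials", "source_code"},
}


def is_use_allowed(tool: str, data_class: str) -> bool:
    """Check a planned AI interaction against the policy."""
    if tool in AI_USAGE_POLICY["blocked_tools"]:
        return False
    if tool not in AI_USAGE_POLICY["approved_tools"]:
        return False  # unknown tools are denied by default
    return data_class not in AI_USAGE_POLICY["restricted_data"]


print(is_use_allowed("enterprise-chatgpt", "marketing_copy"))  # True
print(is_use_allowed("public-chatgpt", "customer_pii"))        # False
```

Denying unknown tools by default mirrors the article's point: approved alternatives are released deliberately, and everything else counts as shadow AI.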
When AI governance protects corporate data
The increasing use of shadow AI makes company-wide security policies for dealing with AI tools and services necessary. Companies must state clearly that employees may not upload sensitive data to publicly hosted AI models such as ChatGPT. Transparency applies to every use of AI: IT departments must document, review, and approve all AI systems in use.
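A prohibition on uploading sensitive data can be backed by a technical pre-upload check. The sketch below screens outbound prompts for sensitive patterns before they reach a public model; the patterns are illustrative assumptions, and a real deployment would rely on a proper DLP engine and company-specific classifiers.

```python
import re

# Hypothetical patterns for sensitive data; illustrative only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


findings = screen_prompt(
    "Summarize this CONFIDENTIAL report for IBAN DE89370400440532013000"
)
if findings:
    print(f"Blocked upload to public AI model: {findings}")
```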
Harmonic Security recommends continuously monitoring for unauthorized AI tools to prevent data leaks. At the same time, employees need access to approved alternatives that effectively support their work. Security-conscious companies implement checks tailored to the context, the user's role, the sensitivity of the data, and the AI tool in question. Technical measures can block data leaks via private AI accounts.
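In practice, such monitoring often starts with web proxy or firewall logs. The following sketch flags traffic to known AI services that are not on the approved list; the domain list, log format, and names are illustrative assumptions, not Harmonic Security's method.

```python
# Sketch of spotting shadow-AI traffic in web proxy logs.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_AI_DOMAINS = {"internal-llm.example.com"}  # hypothetical gateway

# Simplified (user, destination) records; real logs carry far more detail.
proxy_log = [
    ("alice", "internal-llm.example.com"),
    ("bob", "chat.openai.com"),
]

for user, domain in proxy_log:
    if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
        print(f"Unapproved AI tool in use: {user} -> {domain}")
```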
The growing shadow AI problem calls for a well-thought-out security concept. The Zero Trust approach has proven itself in practice as an effective safeguard against unauthorized data access: every connection is verified, whether it originates from the office or a remote location. An optimized security architecture protects business data without overwhelming employees with excessive restrictions.
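The core of that per-connection verification can be expressed as a policy decision evaluated on every request. This is a minimal sketch under assumed signals (managed device, MFA, location); real Zero Trust products evaluate many more factors, and the field names here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user: str
    device_managed: bool  # device enrolled in company management
    mfa_passed: bool      # multi-factor authentication completed
    location: str         # "office" or "remote"
    target: str           # service being accessed


def evaluate(request: AccessRequest) -> bool:
    """Zero Trust: verify every connection, regardless of network location."""
    # No implicit trust for office networks: the same checks apply everywhere.
    if not request.device_managed or not request.mfa_passed:
        return False
    # Further checks (role, data sensitivity, target risk) would go here.
    return True


print(evaluate(AccessRequest("alice", True, True, "remote", "internal-llm")))  # True
print(evaluate(AccessRequest("bob", False, True, "office", "internal-llm")))   # False
```

Note that the office request is denied just like a remote one would be when the device is unmanaged: location grants no trust, which is exactly what distinguishes Zero Trust from perimeter-based security.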