Shadow AI refers to employees' use of artificial intelligence tools or applications without the formal approval or oversight of their organization's IT department. It typically involves unsanctioned use of generative AI tools such as ChatGPT to automate tasks like text editing, data analysis, and report generation in order to boost productivity and speed up processes.
A study by Software AG in October 2024 revealed that half of all employees use unapproved AI tools, often without malicious intent. The ease of access to these tools and the push for efficiency in the workplace drive employees to seek their own AI solutions. Michael Marriott, VP of marketing at Harmonic Security, notes that using AI at work has become second nature for many knowledge workers.
Harmonic Security analyzed 176,460 AI prompts from 8,000 users and found that ChatGPT is the dominant AI tool in use, with 45% of prompts submitted through personal accounts. The analysis highlights the risks introduced by shadow AI, including the use of Chinese AI models such as DeepSeek and Baidu Chat, which pose potential security threats.
The study found a slight reduction in the frequency of sensitive prompts from Q4 2024 to Q1 2025, but an increase in the exposure of legal and financial data. Harmonic argues that enterprises must move from passive observation to proactive control and intelligent enforcement of AI use, with an emphasis on targeted training and coaching.
Please contact us for a free AI assessment.
Click here for the full article:
https://www.securityweek.com/the-shadow-ai-surge-study-finds-50-of-workers-use-unapproved-ai-tools/
Source: Kevin Townsend, SecurityWeek