
Employees Enter Sensitive Data Into GenAI Prompts Far Too Often

Mohammed Muneef
4 min read · Jan 20, 2025

Employees are sharing a wide spectrum of sensitive data through generative AI (GenAI) tools, researchers have found, validating many organizations' hesitancy to fully adopt AI practices.

Every time a user enters data into a prompt for ChatGPT or a similar tool, that information can be ingested into the service's training data set as source material for the next generation of the model. The concern is that the information could be retrieved at a later date via savvy prompts, a vulnerability, or a hack, if proper data security isn't in place for the service.

That’s according to researchers at Harmonic, who analyzed thousands of prompts submitted by users to GenAI platforms such as Microsoft Copilot, OpenAI’s ChatGPT, Google Gemini, Anthropic’s Claude, and Perplexity. They found that while most employee use of these tools was straightforward, such as summarizing a piece of text or editing a blog post, a subset of requests was far more compromising: in all, 8.5% of the analyzed GenAI prompts included sensitive data.
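Harmonic hasn’t published its classification pipeline, but the kind of screening that flags sensitive data in prompts can be sketched in a few lines. The patterns and function names below are purely illustrative assumptions, not Harmonic’s actual method; a production data-loss-prevention classifier would use far broader pattern sets and trained models.

```python
import re

# Illustrative patterns only -- a real DLP classifier would cover far more
# (customer records, legal text, source code, credentials, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# Toy corpus: one benign request, one that leaks customer contact data.
prompts = [
    "Summarize this blog draft for me.",
    "Draft an apology email to jane.doe@example.com about invoice 4412.",
]

flagged = [p for p in prompts if flag_sensitive(p)]
print(f"{len(flagged) / len(prompts):.1%} of prompts contained sensitive data")
```

Run against a real prompt log, a scanner like this yields exactly the kind of headline figure the researchers report, a percentage of prompts containing at least one category of sensitive data.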

Customer Data Most Often Leaked to GenAI
