You’ve just developed a new product, and it’s a smash hit in focus groups. Buyers love it. You’ve secured listings with some of the UK’s leading retailers, as well as niche online stores.
Everything seems to be going well. You send the specification to your admin team, asking them to tidy up your messy spreadsheet. Ten minutes later, a clean, perfectly formatted spreadsheet lands in your inbox – just what you asked for, and in record time.
The admin team has been delivering high-quality work at lightning speed lately. You know how they’re doing it – but you don’t bring it up. Why not? Because opening that door means dealing with a much trickier question.
They’re using ChatGPT, and you know full well that company documents shouldn’t be going into a public AI tool. But more importantly, you know what comes next:
“So… what’s our actual policy on AI usage?”
You don’t have an answer.
So, it’s easier to stay silent. After all, who wants to go back to waiting days for admin work when it’s now done in minutes – and done well?
Fast-forward a few months. A plucky start-up is trying to develop a similar product. Their biggest challenge? Figuring out what makes your version sing. Is it the ingredients? A unique production method?
After weeks of trying and failing, their product team decides to ask ChatGPT for ideas.
That specification your admin team uploaded months ago was used to train a new AI model. Asked how to replicate a product like yours, the AI spits out your full spec – almost word for word. Cracked it.
The second sheet of that spreadsheet? It held your early sales data and retailer contact details. After generating your product spec, ChatGPT helpfully adds:
“Would you like the contact details of buyers at retailers that might be interested in a product like this?”
This isn’t far-fetched – real-world examples are happening every day. In 2023, Samsung employees leaked confidential source code by pasting it into ChatGPT while troubleshooting bugs.
Once your data is uploaded to a public AI model, you can’t take it back. It may be retained and used to train future models.
You can remove this risk and still let your team use AI to do exceptional work – safely and securely.
The first step? Provide your employees with a company-managed AI account.
Just like you wouldn’t allow staff to store sensitive files on their personal computers, you shouldn’t let them use personal ChatGPT accounts for company tasks.
There are plenty of secure AI options available for businesses: ChatGPT Team from OpenAI, Claude’s Team plan from Anthropic, and Gemini for Google Workspace.
All cost around £20 per user per month, and crucially, none of them use your data to train their models. Your information stays private and secure.
ChatGPT, Claude, and Gemini are currently the best-in-class AI tools. You’ll do well with any of them.
Avoid Microsoft Copilot – users report frequent inaccuracies, clunky UX, and poor reasoning compared to the “Big Three.”
By officially adopting and encouraging responsible AI use at the company level, you unlock innovation and collaboration. No more secrecy – employees are empowered to share best practices, improve workflows, and move faster together.