
Why You Should Be Worried About How ChatGPT Is Being Used at Work
Best Practice Considerations for AI Provision and Use in the Workplace
When employees do not have access to approved or monitored AI tools, they do not stop using AI. They simply use it elsewhere. What changes is not behaviour, but visibility.
This is where some of the most serious AI risks now emerge.
The hidden risk of uneven AI access
In many organisations, access to internal or approved AI tools is limited. Sometimes this is due to information security concerns. Sometimes it reflects licensing, role-based access, or employment status. Permanent staff may have access to internal AI tools. Contractors, temporary staff, or third parties may not.
On paper, this looks cautious, but in reality it creates a gap.
Where people need AI to do their work and cannot access approved tools, they find alternatives. They use personal devices. They use public AI tools. They take screenshots. They copy and paste information manually. In some cases, they photograph screens or documents to work around restrictions.
At that point, AI use has not been prevented. It has been pushed into places the organisation cannot see, monitor, or govern.
Why this creates greater exposure, not less
Unmonitored AI use is more dangerous than monitored use.
When AI activity happens outside approved systems, the organisation loses any ability to control what data is being shared, where it is processed, or how outputs are reused. Security controls no longer apply. Logging disappears. Oversight is lost.
From a governance perspective, this is the worst possible outcome. The organisation believes it has restricted AI use, while in practice it has driven it into shadow channels that are entirely unmanaged. This is how well-intentioned controls create blind spots.
Why this problem is often invisible to leadership
Leadership teams are often unaware this is happening. Official policy may say AI use is restricted or limited. Internal tools may exist. But access is uneven, and work pressures remain the same.
Employees and contractors still need to deliver. They still need summaries, drafts, analysis, and speed. Without a sanctioned path, they create their own.
Because this behaviour sits outside formal systems, it rarely shows up in reports or dashboards. It only becomes visible when something goes wrong, or when someone asks a question the organisation cannot answer.
Custom GPTs do not solve this on their own
Some organisations respond by building internal or custom GPT solutions. These can be useful, but they do not solve the problem if access is incomplete.
If only some employees can use the approved tool, while others are excluded, the same pattern repeats. Those without access route around it. The organisation ends up with two realities: a monitored environment and an unmonitored one.
From a risk perspective, the unmonitored environment is the one that matters.
The real question organisations need to ask
The issue is not whether AI tools exist internally. The issue is whether the organisation has designed its AI controls around how people actually work.
That means asking uncomfortable but necessary questions. Who has access to approved AI tools, and who does not? Why? What happens when someone without access still needs AI to do their job? Where does that activity go? And can the organisation honestly say it knows how AI is being used across its workforce?
Until those questions are answered, confidence in AI controls is largely illusory.
What effective organisations do differently
Organisations that manage this well focus on visibility before restriction. They recognise that AI use is already embedded in work patterns and design controls that reflect reality rather than policy alone.
This involves understanding where AI is being used, by whom, and for what purpose. It means aligning access, guidance, and governance so that approved paths are usable by everyone who needs them. And it requires documenting decisions so that risk is acknowledged rather than denied.
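As a purely illustrative sketch, the visibility described above can start with something as simple as a structured usage record: who used an approved tool, in what role, for what purpose, and with what category of data. The field names and categories below are assumptions chosen for illustration, not a reference to any particular product, policy framework, or logging standard.

# Illustrative only: a minimal record of approved AI tool usage.
# Field names and categories are assumptions for this sketch.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    user_id: str          # who used the approved tool
    role: str             # e.g. "permanent", "contractor", "third party"
    tool: str             # which approved AI tool was used
    purpose: str          # what the output was used for
    data_category: str    # e.g. "public", "internal", "personal data"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: a contractor drafting a summary with an approved internal tool.
record = AIUsageRecord(
    user_id="c-1042",
    role="contractor",
    tool="internal-gpt",
    purpose="summarise meeting notes",
    data_category="internal",
)
print(record)

Even a lightweight register of this kind makes uneven access visible: if contractors or third parties never appear in it, that absence is itself a finding.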
The goal is not to remove risk entirely. It is to prevent it from becoming invisible.
When this needs urgent attention
This issue should be addressed immediately if contractors or third parties are involved in day-to-day work, if access to internal AI tools is uneven, or if leadership assumes AI use is restricted without evidence to support that belief.
In these situations, the organisation is likely already exposed. The absence of visibility does not mean the absence of risk.
Next steps
If your organisation is relying on access restrictions to control AI use, the first step is understanding what is actually happening outside those controls.
An AI risk assessment provides that visibility and allows organisations to redesign governance around real behaviour, not assumptions.
FAQs
Why worry about AI?
Because hidden AI use can create unseen risks at work.
Does blocking AI help?
No, employees often use unapproved AI tools elsewhere, increasing risk.
What is uneven AI access?
It means some staff have approved AI tools while others don’t, causing hidden and unmanaged AI use.
Why limit AI access?
Often due to security or licensing concerns within the company.
What risks arise?
Risks include data leaks and a lack of oversight of AI-generated content.
How can organisations manage AI risks?
By providing monitored AI tools and promoting transparent, secure AI use across teams.