
AI governance will be more important than AI tooling

Ben Buckingham, CEO

Many CFOs will remember 2023 as the year every boardroom started talking about AI risk. The rapid rise of LLMs made governance feel like foreign territory, and boards were quickly forced to set their own tone for adoption and oversight.

One of the companies that became a case study for how to think about AI policy was Samsung.

In 2023, a handful of engineers at Samsung used ChatGPT to troubleshoot proprietary semiconductor code, and in the process, confidential chip designs ended up inside a public model. Samsung's response was an immediate company-wide ban on external AI tools.

Samsung was one company that set a blanket culture of ‘no adoption’.

It was a defensible decision in the moment (again, this was foreign territory for every company), but it became an example of how a no-adoption culture, or reactive governance more broadly, tends to create the exact risk a company is trying to prevent: the misuse of company data.

The era of shadow AI amid confused AI policy

Amid reactive enterprise-wide bans of ChatGPT and other LLMs, employees simply shifted to personal devices and free accounts to keep using the tools they had already become accustomed to using for work. The phenomenon, known as ‘Shadow AI’, is employees using personal plans to carry out company work, and it’s widespread across most organisations. Some research suggests that while 40% of companies have purchased enterprise AI licenses, over 90% of employees are using personal AI accounts to get their work done.

A strict ‘no AI’ policy therefore doesn’t eliminate usage because employees are already conditioned to use tools, particularly LLMs, as a part of their daily workflow.

"If your company has a strict no-AI policy and your employees are still hitting their numbers, they are almost certainly working around the policy rather than following it."

Glenn Hopper, Deep Finance Dispatch

The shadow AI problem becomes more acute because it creates a visibility gap. Without sanctioned tools in place, there is no proper training guidance, no audit logging, no data retention agreements, and no centralised access controls. Organisations lose the ability to monitor or manage what is happening to their data, creating a significant security black hole.

Risking your numbers

For CFOs in particular, shadow AI across finance teams creates a significant issue. Financial data, typically among the most proprietary information a privately held company holds, is being pasted into consumer-grade tools on personal accounts, none of which are protected by enterprise agreements.

Consider a practical example:

An employee is using Claude for financial analysis on a forecast, but without realising it, their plan is set to allow the model to train on the data being provided. This means the margin analysis conducted by the team's financial analyst could end up informing responses to other queries about margin analysis, or could be surfaced directly if a competitor were to ask about that company's financial position.

But for the analyst, the productivity gain is real: answers within an hour instead of a day or more. If employees are unable to access enterprise-approved AI, they are highly incentivised to use their own.

What practical governance looks like

The most practical way to reduce shadow AI is a lightweight governance framework that the CFO can own. It needs to be layered with broader organisational commitments such as training, widespread policy, and proper implementation. But if you are looking for first steps, the following three are a starting point:

Step one: an honest audit of what is already happening.

Ask your team what tools they are using and what they are using them for. The goal is visibility, and querying employees can be framed in a way that informs future tooling decisions if certain platforms are already creating value for employees.

The other goal is to understand if any data is currently at risk.

Step two: Select a tool (or tools) with the right enterprise protections. Some key questions to ask:

  • Does the platform support MFA?
  • Does the vendor offer zero data retention agreements?
  • Is it SOC 2 Type II certified?
  • Are the data handling terms clear and well documented?

Step three: Write a one-page usage policy that specifies what financial data can enter the tool, what cannot, who is responsible for each workflow, and who reviews the output. The goal is a document clear enough that every member of the team can read it once and know exactly where the boundaries are.
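For teams that want to back the written policy with a lightweight technical control, the policy's boundaries can also be expressed as a simple pre-prompt check. The sketch below is illustrative only: the category names and patterns are hypothetical placeholders, not part of any real product, and an actual deployment would define categories matching its own one-page policy.

```python
import re

# Hypothetical policy categories, named for illustration only.
# A real policy would define its own categories and patterns.
BLOCKED_PATTERNS = {
    "account_number": re.compile(r"\b\d{8,17}\b"),  # bare bank-style numbers
    "revenue_figure": re.compile(
        r"(?i)\b(revenue|margin|ebitda)\b[^.\n]{0,40}[$£€]\s?\d"
    ),  # a financial term followed closely by a currency amount
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any policy categories the prompt would violate."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

# A prompt pairing a revenue term with a currency amount is flagged;
# a generic question passes through.
print(check_prompt("Q3 revenue came in at $4.2M against forecast"))
print(check_prompt("What is a good way to sanity-check a forecast?"))
```

A check like this will never catch everything, which is why it complements rather than replaces the written policy: the document sets the boundary, the filter catches the obvious accidental crossings.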

The bottom line

Samsung is now fundamentally an AI company, adopting AI across its processes and products. The transition wouldn't have been possible without proper governance. But the detour through a blanket ban came at the cost of invisible risk, unrealised productivity, and the trust of employees who wanted to be at the forefront of innovation.

But the lesson ultimately is that strong governance allows companies to focus on innovation, not compliance issues.

Ready to get started?

See how Primary can help your growing business achieve enterprise-grade cash management with startup efficiency.
