AI at Work: The Governance Gap Is Still Winning

Back in June I wrote about the Gallup data showing AI use at work had nearly doubled in two years. I wanted to revisit that conversation because the numbers have continued to move and the governance situation has not improved at the pace it should have. As of mid-2025, about 40% of US employees report using AI at least a few times per year, with frequent use concentrated in technology, finance, and professional services. Daily use among white-collar workers is rising steadily. The tool adoption curve is not leveling off. It is still climbing.

What concerns me more than the adoption numbers is where the capability development is happening and where it is not. Gallup's research shows that leaders are using AI at roughly twice the rate of individual contributors. That gap matters because the people making strategic decisions are developing AI-informed intuitions that the people executing the work do not yet share. That is a communication and training problem more than a technology problem. You cannot build an AI-enabled organization from the top down while the front line is still figuring out what the tools are.

The governance picture is still the most pressing issue from a security and risk perspective. A significant share of employees are using AI tools that are not sanctioned by their organizations. Some of those tools are connected to personal accounts. Some are processing organizational data in ways that may violate data handling agreements or compliance requirements. None of this is happening because employees are malicious. It is happening because the tools are useful, the official alternatives are often slow or nonexistent, and the guidance on what is and is not acceptable is unclear in most organizations.

I have seen this pattern in every major technology transition I have lived through in 15+ years of IT work. The tools move faster than the governance. The governance catches up eventually, usually after an incident that makes clear what the risk exposure was. The organizations that avoid the incident are the ones that invested in governance before the incident, when everything still looked fine. That is a hard sell sometimes because the cost of proactive governance is visible and the cost of not having it is invisible right up until it is not.

My recommendation has not changed since June. Build your AI acceptable use policy now. Identify which tools your employees are already using and create a path for the legitimate ones to be sanctioned and secured. Invest in training that is not just compliance-checkbox training but actually helps people understand what to use AI for and what to keep far away from it. And communicate your AI strategy clearly and repeatedly until people stop asking what it is. The technology is not going to wait for the policy. The policy needs to run alongside it or the gap will keep costing you.
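For the "identify which tools your employees are already using" step, you do not need a fancy discovery platform to get a first pass. Here is a minimal sketch in Python that counts hits on known AI tool domains from a web proxy or DNS log export. The file name, column names, and domain lists are all illustrative assumptions, not a complete or authoritative inventory; adapt them to whatever your proxy actually emits.

```python
# Minimal sketch: first-pass inventory of AI tool usage from proxy logs.
# Assumes a CSV export with a "domain" column; the paths and domain
# lists below are illustrative placeholders, not an exhaustive catalog.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}
SANCTIONED = {"copilot.microsoft.com"}  # replace with your approved list

hits = Counter()
with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["domain"].lower()
        if domain in AI_DOMAINS:
            hits[domain] += 1

for domain, count in hits.most_common():
    status = "sanctioned" if domain in SANCTIONED else "UNSANCTIONED"
    print(f"{count:6d}  {domain}  [{status}]")
```

Even a crude count like this turns the shadow-AI conversation from hypothetical to concrete: you walk into the policy discussion knowing which tools are actually in use and how heavily.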

https://www.newsnationnow.com/business/ai-use-employees-doubles-report/
