AI Use at Work Has Doubled and Most Employers Still Do Not Have a Plan
Gallup dropped a report this month that made me put down my coffee and stare at the wall for a moment. The share of US employees who use AI at least a few times per year has gone from 21% to 40% in just two years. Weekly use has nearly doubled. Daily use has doubled in just the past year. The technology adoption curve on this is steep and it is not slowing down.

What caught my attention most was not the adoption numbers but what is sitting underneath them. Only about 30% of employees say their organization has laid out any formal guidelines or policies around AI use. Almost half say their employer has started implementing AI in some form, but only 22% report a clear strategy. That is a significant gap between what people are doing and what organizations are actually governing.
From where I sit, this is a familiar pattern. Organizations adopt new tools faster than they adopt the policies, governance frameworks, and security guardrails around those tools. I have seen it happen with cloud migration, with SaaS proliferation, with mobile device management, and now with AI. The technology spreads because people find it useful. The official response follows three to six months later if you are lucky, longer if you are not. In the meantime, you have employees using AI in ways that may or may not be consistent with data privacy requirements, compliance obligations, or organizational risk tolerance. Nobody intended that outcome. It just happened because the tools moved faster than the policies.
The Gallup data breaks down nicely by industry. Technology leads with 50% of workers reporting frequent AI use. Professional services and finance follow. That tracks with what you would expect. The data also shows that leaders, defined as managers of managers, use AI at twice the rate of individual contributors. That is an interesting dynamic. The people setting strategy are adopting AI faster than the people executing it, which could mean leaders are developing AI-informed thinking that is not yet translating to frontline workflows. Or it could mean individual contributors are waiting to see if their employer is serious about AI before committing to learning new tools. Both interpretations have implications for how organizations should approach this.
The stat that should motivate every IT and security team in the country is the governance gap. Employees are using AI tools, many of them personal or unsanctioned tools brought in from outside, because they find value in them and their employer has not offered a formal alternative. That is a classic shadow IT problem wearing a very modern hat. I have spent a significant portion of my career eliminating shadow IT, and the playbook is the same regardless of the tool category. If employees are using something outside your governance structure, it is because you have not met the underlying need inside it. The answer is not to ban the tool. The answer is to understand why people are using it and build a sanctioned path that actually works.
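If you want to run that playbook, step one is visibility: you cannot build a sanctioned path until you know which tools people are actually reaching for. Here is a minimal sketch of one way to get that picture from web proxy or DNS logs. Everything specific in it is an assumption for illustration, not a prescription: the CSV export format, the `department` and `host` column names, and the short list of AI tool domains. A real deployment would pull domains from a maintained category feed and read whatever your gateway actually exports.

```python
import csv
from collections import Counter

# Illustrative only: a handful of well-known AI tool domains.
# In practice, source this list from a maintained URL-category feed.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI tool domains, grouped by department.

    Assumes a CSV proxy log with 'department' and 'host' columns;
    adjust the field names to match your environment's export.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].strip().lower()
            if host in AI_DOMAINS:
                usage[row["department"]] += 1
    return usage

if __name__ == "__main__":
    # Print departments in descending order of AI tool traffic.
    for dept, hits in summarize_ai_usage("proxy_log.csv").most_common():
        print(f"{dept}: {hits} requests to AI tools")
```

The point is not this particular script. The point is that the conversation about a sanctioned alternative goes much better when you show up with actual usage data, because it tells you which teams have an unmet need and how big that need is.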
The organizations that will come out of this moment well are the ones building clear AI governance frameworks now, not after an incident. That means acceptable use policies, security reviews of AI tools before adoption, training programs that help employees understand both the value and the risks, and leadership that communicates a coherent strategy rather than letting adoption happen organically without guardrails. It is not glamorous work. It is exactly the kind of work that prevents expensive problems down the road. Which, if you know anything about me, is precisely my favorite kind of work.
https://www.newsnationnow.com/business/ai-use-employees-doubles-report/