Your company’s AI policy is probably a Word document that nobody’s read, a Slack message from eight months ago, or nothing at all. And you’re the one sitting in the middle of it, wondering if using ChatGPT to draft that report last Tuesday was fine or a firing offense.
Here’s where things stand with company AI policy in 2026: most organizations are genuinely winging it. A 2024 SHRM survey found that fewer than half of organizations have a formal AI use policy in place. The rest have vibes. And those vibes are inconsistent, unwritten, and occasionally reversed without notice.
That’s the mess. And if you’re anxious about it, you’re not being paranoid. You’re being accurate. Let’s figure out what you actually do about it.
The company AI policy situation: nobody knows what the rules are
The number that matters most here isn’t about AI capabilities or job losses. It’s this: 68% of employees using AI at work don’t tell their employer. Sixty-eight percent. That’s not a handful of rogue workers sneaking tools past IT. That’s most of the people using AI, staying quiet, because the rules are either absent or confusing enough that disclosure feels risky.
This isn’t an employee problem. It’s a policy problem. When people can’t figure out whether something is allowed, they default to not asking. That’s rational. If you ask and the answer is “no,” you’ve lost something. If you don’t ask and it’s technically fine, you’re ahead. So people keep their heads down and their Claude tabs minimized.
The result is a workplace where AI use is widespread, invisible, and completely undocumented. Which is genuinely bad for everyone.
What the White House framework actually says (and what it doesn’t)
On March 20, 2026, the White House released its National AI Policy Framework. It sounds important. And in some ways it is. But here’s what it actually does for your workplace situation: not much, immediately.
The framework is a set of recommendations, not binding rules. It doesn’t tell your employer what to do. It doesn’t create employee rights around AI use. What it does is signal a federal direction, one that pushes responsibility for AI governance down to the company level rather than up to regulators. The federal government is also actively trying to preempt state-level AI rules, which means the patchwork of state laws some workers were counting on might not survive contact with Congress.
The practical translation: if your company was waiting for clear federal guidance before writing an AI policy, that guidance isn’t coming. The government has essentially said “figure it out yourselves.” Which is the policy version of your company’s existing non-policy.
HR departments right now are either scrambling to write something or continuing to ignore the problem while insisting it’s on the roadmap. SHRM has been pushing for “practical, flexible” AI policies since 2024. That advice sounds reasonable. What it means in practice is that the people closest to the work, people like you, are still operating in a gray zone.
The three types of company AI stances (and why all of them are a problem)
Here’s the honest taxonomy from Don’t Replace Me, and it holds up: your company’s AI position is one of three things.
Type 1: No policy at all. Anything goes until it doesn’t. You’re fine right up until someone in leadership has a bad day, sees a news story about an AI data breach, and suddenly there’s an emergency meeting and a ban on “unauthorized AI tools.” Everything you built your workflow around is now a liability.
Type 2: A “no AI” policy that everyone ignores. This is the most dangerous one. You have a written prohibition, but the culture treats it as performative. Your manager uses ChatGPT for their weekly updates. Your colleague runs client emails through Claude. You do the same. Then something goes wrong: a bad output, a data incident, a lawsuit. Suddenly the written policy matters very much, and all of you are exposed.
Type 3: A reasonable policy that nobody’s read. This one is almost worse because of the false comfort. You think you’re covered. You’re not sure what “covered” means. You definitely haven’t checked whether the tools you’re using are on the approved list.
All three of these are genuinely bad situations. They just have different failure modes.
What this means for you as an employee right now
The policy chaos at the top doesn’t change what you’re doing Monday morning. You still have work. AI tools still exist. The question is how to use them without creating a problem for yourself.
First: find out what your company actually has. Search the intranet. Ask HR. Check the employee handbook. If there’s a policy, read it. Not a summary of it. The actual thing. Pay attention to whether it specifies which tools are allowed, whether there are data handling restrictions (there almost certainly should be for client data, financials, anything confidential), and whether there’s an approval process for new tools.
If there’s no policy, that’s important information too. It means you need to make your own decisions about disclosure. The 68% who stay quiet aren’t all making the wrong call, but there are scenarios where transparency is the smarter play, especially if you’re in a client-facing role, a regulated industry, or working for someone who’d feel ambushed finding out later.
Second: stop treating AI as invisible infrastructure. If you’re using it regularly, that usage is part of your work. Treat it that way. If you’d mention that you used a particular research database or software tool to get something done, apply the same logic to AI.
Third: be thoughtful about what goes in. If your company doesn’t have a clear data classification policy for AI, assume that anything client-specific, proprietary, or confidential stays out of external AI tools. This isn’t paranoia. It’s the kind of judgment that keeps you out of trouble when policy finally catches up to practice.
And if the anxiety about all this is genuinely getting to you, that’s worth addressing separately. AI anxiety at work is real, and it doesn’t go away by pretending the uncertainty doesn’t exist. But it also doesn’t go away by swearing off the tools. They aren’t going anywhere.
The personal AI policy: document, verify, demonstrate value
Here’s the most practical thing you can do right now, and almost nobody does it. Write your own AI policy.
Not a manifesto. Not a memo. Just a simple record of how you use AI in your work. Three things worth tracking:
What you used it for. Not in detail, just enough to show the pattern. “Used Claude to draft first version of the Q2 marketing report. Used ChatGPT to summarize competitor research. Used AI to generate initial formatting for client proposal.”
What you verified. This is the important one. For every output you acted on, note that you checked it. Confirmed the statistics. Validated the figures. Read the draft before sending. This is your receipt. If someone ever asks whether you just blindly published AI output, you have an answer.
What value it created. This one’s optional but useful. “Reduced time on first draft from 3 hours to 45 minutes.” “Generated five concept directions instead of two.” “Caught formatting errors in the report before the deadline.” This turns your AI use from a liability story into a productivity story.
Dee Kargaev calls this the CYA Protocol in Part V of Don’t Replace Me, and it’s the kind of thing that sounds tedious until the moment it matters. When policy uncertainty turns into policy enforcement, you want receipts. This is how you get them.
The documentation doesn’t have to be formal. A running note in a personal doc is fine. A brief Notion page. Anything you could pull up in ten seconds if asked. The point is having it.
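For concreteness, here’s a minimal sketch of what one week’s entry might look like. The tasks, tools, and numbers are invented for illustration; what matters is the three-part shape, not the specifics.

```
Week of June 2

- Used: Claude, first draft of Q2 marketing report
  Verified: checked every statistic against the source spreadsheet, rewrote two sections
  Value: first draft in 45 minutes instead of ~3 hours

- Used: ChatGPT, summary of competitor research
  Verified: spot-checked claims against the original reports before circulating
  Value: summary ready same day instead of end of week
```

Three lines per item: what, verified, value. Ten seconds to pull up if anyone asks.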
How to actually use AI at work while all this gets sorted out
The policy situation isn’t going to resolve itself next quarter. Companies are slow, regulation is slower, and the tools keep moving faster than either. So the practical question isn’t how long to wait for clear rules. It’s how to operate sensibly in the meantime.
Using AI at work effectively right now means starting with the low-stakes stuff. Internal documents, research summaries, first drafts that humans review before anyone external sees them. This is both good AI practice and sensible risk management. You build fluency with the tools while staying in territory where an error is correctable.
It also means building the skills that hold up regardless of what the policy says. AI skills for non-technical people aren’t about knowing which tool is best. They’re about understanding what AI is actually good at, what it consistently gets wrong, and how to use it in a way that adds value instead of just adding steps. That knowledge doesn’t expire when your company updates its policy.
The people who will be in the worst position when policy eventually clarifies are the ones who either refused to use AI at all (now behind on skills and output speed) or used it carelessly without thinking about documentation, data, or disclosure. The people who will be fine are the ones who treated it seriously, documented what they did, and can show they were thoughtful about it.
That’s not a heroic stance. It’s just basic professionalism applied to a new kind of tool.
Frequently asked questions
Does my company need an AI policy in 2026?
Yes, and most don’t have a good one. The White House’s March 2026 framework pushed AI governance responsibility to organizations rather than creating binding federal rules, which means companies are largely on their own. If yours doesn’t have a policy, that’s a real gap, and in the meantime, employees are operating on guesswork.
What should I do if my company has no AI policy?
Find out whether one exists first by checking your employee handbook or asking HR directly. If there’s nothing, use your judgment: avoid putting confidential or client data into external AI tools, document what you’re using AI for and how you verify outputs, and consider having a low-stakes conversation with your manager about how they view AI tool use. See our guide on whether to tell your boss you use AI for a fuller breakdown.
Is it against the rules to use ChatGPT at work?
It depends entirely on your employer. Some companies explicitly ban external AI tools. Some have approved lists. Many have nothing written down either way. The 68% stat from Microsoft’s Work Trend Index suggests most employees using AI at work aren’t disclosing it, often because the rules are unclear. Read whatever policy exists, and if it’s silent on AI, ask.
What did the White House AI policy framework say in 2026?
The National AI Policy Framework, released March 20, 2026, laid out guidelines for responsible AI use but didn’t create binding regulations for private employers. It emphasized pushing governance responsibilities to organizations and signaled federal interest in preempting state-level AI laws. For most workers, it changed nothing immediately. Companies still have to write their own rules.
How do I protect myself if my company's AI policy is vague?
Document your AI use. Keep a running note of what tools you used, what you used them for, what you verified before acting on outputs, and what the result was. This gives you a clear record if questions ever arise. Avoid putting sensitive, confidential, or client-identifiable data into external AI tools regardless of what the policy says.
What's the difference between a company "no AI" policy and how people actually behave?
Often, a significant one. Research consistently shows that employees use AI even when policies technically prohibit it, partly because enforcement is unclear and partly because managers use it too. The risk is that when something goes wrong, the written policy is what gets enforced, not the informal norm. If your company has a written “no AI” policy, take it seriously even if the culture seems to ignore it.