68% of employees who use AI at work don't tell their boss. That's not a fringe number. That's most people.

If you're using ChatGPT to draft emails, Claude to summarize reports, or any other tool to get through your week faster, there's a decent chance you're doing it quietly and wondering if that's okay. Whether you should tell your boss you use AI is one of the most common questions people are asking right now, and almost nobody is giving a straight answer.

Here's one.

Why so many people are hiding it

The secrecy makes sense if you think about it. AI has a weird reputation problem. Half the world thinks it's cheating. The other half thinks anyone who doesn't use it is already obsolete. Neither camp is easy to work with.

So you end up with a huge chunk of the workforce doing something genuinely useful, quietly, because announcing it feels like walking into a meeting and saying "I've been Googling things this whole time."

The fear is usually one of three things. You think your boss will assume you're cutting corners. You think your company has a policy you've been ignoring. Or you think your coworkers will resent you for being faster than them.

All three of those fears are real. None of them automatically mean you should stop using the tools.

Should you tell your boss you use AI? The honest answer

It depends on three things: your industry, your company culture, and how you're using it.

If you work in law, finance, healthcare, or anywhere with strict data handling rules, this isn't really a philosophical question. It's a compliance question. Feeding client data into a public AI tool without understanding the terms of service is a problem your boss would absolutely want to know about, before someone else finds out.

If you work somewhere more relaxed (a marketing agency, a startup, a mid-size company without a formal AI policy yet), the calculus is different. Most companies are still figuring out what they even think about AI use. A 2024 survey from SHRM found that 28% of employed U.S. adults are already using ChatGPT for work tasks. The policies haven't caught up to the reality.

The practical answer: you don't need to announce it unprompted. But you shouldn't lie about it if someone asks.

The three types of AI users at work

Most people land in one of these three categories. One of them is getting it right.

The Hider

Uses AI for everything. Terrified of anyone finding out. Has deleted browser history twice this week. The anxiety about being "caught" is now taking up more mental space than the actual work.

The risk here isn't moral. It's strategic. If your outputs suddenly look dramatically better than they used to, people will notice. If you can't explain your process when asked, that's a credibility problem. And if your company does have a policy and you've been ignoring it, you want to find that out before HR does.

The Evangelist

Won't stop talking about AI. Sends colleagues links to new tools every Monday. Mentioned prompt engineering in a meeting about budget planning. Has a newsletter.

This person isn't necessarily wrong about any of it. They're just exhausting. And paradoxically, the constant AI content often masks the fact that they're not actually doing better work. They're just doing louder work.

The Quiet User

This is the one. Uses AI to get things done faster and better. Doesn't hide it, doesn't announce it. If someone asks, they say "yeah, I used AI to help draft that, then cleaned it up." Nobody dies. Work is good. Life continues.

The quiet user treats AI the same way a good cook treats a sharp knife. It's a tool. It does part of the job. You still have to know how to cook.

This framing comes from a book: Don't Replace Me. 200+ pages, 24 chapters, the honest version of what AI means for your career, written by someone who actually builds this stuff.

How to read your specific situation

There's no universal rule here, but there's a framework. Ask yourself four questions.

Does your company have an explicit AI policy? If yes, read it. Seriously. Most people haven't. If your workplace prohibits using certain tools or requires disclosure, that's not optional. The number of companies publishing formal AI policies jumped significantly in 2023 and 2024. Yours might have one buried in an intranet nobody checks.

How technically comfortable is your boss? A manager who's already using AI tools themselves is going to react very differently than one who thinks ChatGPT is some kind of chatbot for teenagers. Calibrate accordingly.

Are you handling sensitive data? Customer information, legal documents, financial records. If anything you're working with falls into this category, you need to know whether the tools you're using are appropriate for it. Many aren't, by default.

Is your output actually yours? If AI is generating your deliverables and you're not reviewing or improving them, that's a different problem. Not because AI use is cheating, but because you've stopped being useful. The quality of your judgment is what you're being paid for.

The "quiet weapon" framing

This is how it should work: you use AI to make your work better. Not to produce more, faster, and sloppier. Better.

Rule #17 in Don't Replace Me calls this the quiet weapon approach. Use the tools to be genuinely better at your job, without making it your whole personality. The goal isn't to flood your workplace with AI-generated output. It's to use AI for the parts of your process that were eating time, so you can spend more of yourself on the parts that actually require you.

The people who get this right look very good at their jobs. The people who get it wrong look like they've outsourced their judgment, which is exactly the fear your boss probably has about AI in the first place.

Should you tell your boss you use AI if there's no policy yet?

Most companies don't have a clear AI policy. That might feel like a green light, but it's more of a yellow.

No policy means no protection either way. If something goes wrong, you can't point to a rule that allowed it. If something goes right, you can't take formal credit for a smarter workflow. You're operating in a gray zone, and gray zones eventually get resolved, usually by someone above you drawing a line.

The smarter play is to treat the absence of a policy as an opportunity, not a free pass. You're using AI productively and thoughtfully. That's exactly the kind of firsthand knowledge that shapes good policy when your company eventually writes one. Being the person who says "I've been using these tools for six months, here's what works and what doesn't" is a much better position than being the person who gets caught in a policy rollout.

Some industries are moving faster than others on this. The World Economic Forum's 2025 Future of Jobs report found that AI and information processing technologies are expected to transform 86% of employers' businesses by 2030. Companies that haven't written AI policies yet are not companies where AI isn't happening. They're companies that are behind on paperwork.

The absence of a policy isn't permission. It's an opportunity to be the person who knows what they're doing when everyone else is still figuring it out.

What to actually say if someone asks

You don't need a speech. Most people are asking out of curiosity, not as a trap. Keep it simple.

"Yeah, I've been using AI to help with the first draft. I still go through it and make sure it actually makes sense."

"I use it to pull together research faster. Saves a lot of time on the early stages."

"I ran this through Claude to check the structure before I finalized it."

None of those statements are admissions of wrongdoing. They're descriptions of a workflow. The more casually you say it, the less it sounds like a confession.

What you want to avoid is either extreme: the defensive "well technically the policy doesn't specifically say..." response, or the evangelical "AI is the future and here's why you should be using it too" response. Both of those make people weird about something that doesn't need to be weird.

If you're wrestling with the anxiety side of this, the fear that AI is making you look like you're not doing your job properly, it might be worth reading about how AI anxiety plays out at work. The fear is common. It's also usually about something other than what it seems to be about.

When you should tell your boss you use AI proactively

There are a few situations where volunteering the information is actually the right move.

If your company is having a conversation about AI strategy and your boss doesn't know you've been using it, that's a strange omission. You could be the person who has practical knowledge to contribute, instead of sitting quietly while someone else shapes policy.

If the quality of your work has noticeably improved, your boss might wonder what changed. Being able to say "I've been using some AI tools to improve my process" is better than having no explanation.

If you're being asked to produce more than you realistically can, and AI is how you're managing that, saying so protects you. "I've been able to keep up by using AI assistance on the research phase" is both honest and slightly bulletproof.

And if you're about to start using AI in a way that changes something meaningful about your deliverables, a proactive conversation is better than a retroactive one.

It's also worth thinking about the career upside here. Being known as someone who uses tools well, gets things done faster, and still produces quality work is a reputation that compounds. The person who quietly got 30% better at their job this year is going to have a very different performance review than the person who spent the same year anxious about whether to hit send on a ChatGPT summary.

For a practical look at how to actually set this up well, the guide to using AI at work covers the mechanics without the hype.

How different industries actually handle this

The answer to "should you tell your boss you use AI" looks different depending on where you work. Here's a rough breakdown.

| Industry | Typical situation | What to do |
| --- | --- | --- |
| Law / finance / healthcare | Strict data rules, client confidentiality | Check policy first. Disclose proactively if uncertain. Never use public AI tools with sensitive data. |
| Marketing / PR / creative | Usually permissive, policies still forming | Quiet use is fine. Mention it casually if it comes up. |
| Tech / product | Often already using AI formally | Just use it. It's probably encouraged. |
| Education | Mixed and rapidly evolving | Policies vary wildly. Check your institution's guidelines. |
| Government / public sector | Conservative, policy-driven | Assume you need approval until you know otherwise. |
| Small business / startup | No policy yet, informal culture | Yellow light. Use it, don't hide it, be ready to discuss it. |

The industries that feel most fraught are the ones where clients or regulators are involved. In those environments, the question of whether you should tell your boss you use AI is secondary to whether you're allowed to use it at all for that specific work.

The data is already out there

Fishbowl's workplace survey found that 68% of people using AI at work aren't telling their employers. That's a lot of quiet keyboards.

The companies responding to this are mostly not trying to ban AI. They're trying to figure out how to manage it. Which means the window right now, where you can use these tools without much scrutiny, is exactly when you should be getting comfortable with them.

The people who treat AI use as a shameful secret tend to use it badly: rushed, unreviewed, hoping nobody notices. The people who treat it like any other part of their workflow tend to use it well.

If you're curious what prompts actually work for the kinds of tasks most office workers deal with, the ChatGPT at work guide is a decent place to start.

Frequently asked questions

Is using AI at work considered cheating?

No, not inherently. Using AI to help draft, research, or structure your work is similar to using any other productivity tool. The question is whether you're reviewing and taking responsibility for the output. If AI is generating things you're passing off without checking, that's a quality problem, not an AI problem.

What happens if my company finds out I've been using AI secretly?

In most cases, not much, as long as you haven't violated data policies or client confidentiality agreements. The more serious risks come from using AI tools with sensitive or proprietary information when your company's policies don't allow it. Check your policy if one exists.

Should I put AI skills on my resume?

Yes, if you've been using AI tools consistently and productively, that's worth listing. "Proficient in AI-assisted research and drafting" is more useful than vague claims. Be specific about what you actually use and how.

How do I know if my company has an AI policy?

Check your employee handbook, intranet, or ask HR directly. Policies published in 2023 or 2024 might not have been communicated well. It's worth knowing before you need to know.

What if my boss is anti-AI?

Go quieter, not louder. Use the tools to do better work. Don't evangelize. If your boss directly asks whether you've been using AI, be honest but brief. You don't need to justify it. "I've been using it to help with research and drafts, and then reviewing everything myself" is a complete answer.

Can my employer monitor my AI tool usage?

Potentially, depending on your device and network. If you're on company equipment or a company network, assume your activity is visible. Use personal devices for personal AI use if you're concerned, and use company-approved tools for work tasks where possible.