The year is 1770. You're watching a machine beat humans at chess. It sits at a table, mechanical arms moving pieces with eerie precision. The Viennese court is losing its mind. A machine that thinks! A machine that plays chess!

The machine was fake. There was a human chess master hidden inside the cabinet, tracking the pieces with magnets under the board and moving them with a system of levers. The illusion was perfect. The understanding was zero.

That one story captures what AI can and can't do. And it's the story most headlines skip entirely.


What AI actually is (and it's weirder than you think)

AI is not a brain. It's not "thinking" in any meaningful sense. The best description anyone has come up with, including the people who actually build the stuff, is "spicy autocomplete."

Here's what that means. When you type a prompt into ChatGPT, the model doesn't read your question and reason through an answer. It predicts, at massive statistical scale, what text is most likely to come next. It's doing probability over an almost incomprehensible amount of training data. Every word it produces is a very sophisticated guess about which word should come next, given everything that came before.

That's it. That's the trick.

The reason it seems so smart is that the internet contains an enormous amount of human thought, and when you compress all of that into a model and ask it to predict patterns, the output looks a lot like reasoning. But it isn't reasoning. It's pattern matching at speed and scale that no human could match.
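
To make that concrete, here's a toy version of next-token prediction. This is a deliberately tiny sketch, a bigram model over a made-up twelve-word corpus, nothing remotely like a real transformer with billions of parameters. But the core loop is the same shape: look at what came before, pick a statistically likely continuation, repeat.

```python
from collections import Counter, defaultdict
import random

# A made-up, deliberately tiny "training corpus".
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: nothing ever followed this word in the corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one guess at a time. No meaning, no plan,
# just "what usually comes next?" repeated in a loop.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat the dog sat"
```

Run it a few times and you get fluent-looking fragments with no understanding behind any of them. Scale that loop up by a few billion parameters and you have the "spicy" part of spicy autocomplete.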

Dmitry Kargaev calls this Rule #5 in Don't Replace Me: "It's Not Smart. It's Fast." The distinction matters more than almost anything else you'll read about AI this year.


What AI can do (genuinely, actually well)

Let's be fair. The autocomplete metaphor doesn't mean the tool is useless. Spicy autocomplete running on billions of documents is legitimately impressive at specific things.

Here's what AI is actually good at:

Speed and volume. A task that takes you three hours can often become a rough draft in three minutes. Not a perfect draft. A starting point. For writing, summarizing, translating, structuring, that's a real advantage.

Pattern recognition. Give AI a thousand rows of data and ask it to find what's unusual, and it will spot the outlier faster than any human. This is why it works well in radiology, fraud detection, and quality control. It sees patterns humans miss, not because it's smarter, but because it's tireless and fast. (There's a minimal sketch of this idea right after this list.)

First drafts. Writing a job description, a marketing brief, a meeting summary, a response to a common customer complaint. AI can produce something usable in seconds. You still need to edit it. But you're editing, not starting from a blank page.

Summarization and translation. Long documents, foreign languages, jargon-heavy reports. AI compresses and translates at a level that would have seemed magical ten years ago. Because it was. Google Translate was already running on neural networks in 2016; today's models are the same family of technology, trained at vastly larger scale.

Pattern completion. Fill in the blanks. Write more of the same. If you give AI a style to match, a format to follow, a template to extend, it's very good at continuing the pattern. The catch is buried in that sentence: it continues. It doesn't originate.
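
As promised above, here's a minimal sketch of the pattern-recognition point. The readings and the two-standard-deviations threshold are invented for illustration; real radiology, fraud, and quality-control systems use far richer models. The principle is the same, though: the machine never gets bored scanning row 998.

```python
import statistics

# Made-up sensor readings. Row 5 is the kind of outlier
# a tired human skims right past at 4 p.m. on a Friday.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.5, 10.1, 9.7]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than two standard deviations from the mean.
anomalies = [(i, x) for i, x in enumerate(readings) if abs(x - mean) > 2 * stdev]

print(anomalies)  # [(5, 42.5)]
```

That's not intelligence. It's arithmetic applied without fatigue, which is exactly the point.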


What AI genuinely cannot do (and this is where it gets interesting)

This is the part that matters for your job.

The chess machine had a person inside. Modern AI doesn't. And that means certain things are simply not there.

It doesn't understand meaning. There's a brutal demonstration of this. Dee, the author, spent a session telling an AI it was completely wrong, in different ways, six times in a row. The response each time was a variation of "You are absolutely right." The model didn't understand that it was being corrected. It predicted that agreeing was the appropriate next token. That's the gap between knowing everything and understanding nothing.

It doesn't know when it's wrong. This is the one that gets people in trouble. AI will confidently tell you the wrong answer. It will cite a paper that doesn't exist. It will give you a legal precedent that's fabricated. Not because it's lying. Because it's predicting text, and wrong text can be just as statistically likely as correct text if the training data supported it.

It can't exercise judgment. Judgment requires understanding stakes, context, relationships, and consequences. AI has none of these. It can simulate judgment. It can produce text that sounds like judgment. But when the situation is genuinely novel, genuinely complicated, or genuinely political, you're reading a sophisticated guess.

It can't read a room. It doesn't know your client's mood, your boss's hidden priorities, or why the finance team is being weird about this project. It has no access to the invisible information that experienced humans navigate automatically.

It has no taste. This sounds soft. It isn't. Taste means knowing what's good, not just what follows the pattern. AI can produce content that's statistically similar to great work. It can't tell you whether your campaign concept is boring, or whether your design looks cheap, or whether your pitch has the wrong energy for this particular client. You can. That's not a small advantage.

It doesn't care about outcomes. If the report it generated leads your company to make a terrible decision, the AI doesn't notice. It won't flag the issue next week. It will happily generate another report that contradicts the first one if you ask it differently.



The Mechanical Turk problem (it's already happening)

Amazon named a product after the fake chess machine. Amazon Mechanical Turk is a platform where humans complete small tasks behind what can be presented as an automated service. In other words, humans doing the work so that companies can sell it as "AI." It launched in 2005 and it's still running.

The reason this is relevant: a lot of what gets called AI in marketing decks is closer to the original Turk than people realize. Software that automates a checklist. Template systems with a chat interface on top. Recommendation engines of the kind Netflix was refining back in 2009. You've been using AI for years without having a crisis about it. Your spam filter is AI. Your autocorrect is AI. Spotify's Discover Weekly is AI. None of it made you panic.

The panic now is about visibility, not a sudden leap in capability. The models got good enough that the output looks like thinking, so suddenly the thing that was quietly running in the background is front and center. That's a change in perception. It's also a real change in utility. But it's not the arrival of actual intelligence. It's the best autocomplete we've ever built.

If you want to dig into which jobs this actually threatens and which ones it doesn't, the breakdown here is more useful than the headlines.


Why understanding this matters for your actual job

Here's the practical version. If AI is pattern matching at speed, then your job risk depends on how much of your work is pattern-based versus judgment-based.

Ask yourself: what percentage of what I do on a given Tuesday could be described as "taking information in, applying a known process, producing output"? That's the automatable slice. Not all of it. Probably not most of it. But the slice that's most at risk.

The rest is harder: the judgment calls, the relationship management, the creative decisions that require taste, the political navigation, the reading of context that isn't written down anywhere. Not impossible to touch eventually. But not the immediate problem.

The people who get into trouble are the ones who do mostly pattern-based work and don't realize it. And the people who stay employed are the ones who start using AI for the automatable parts before someone else does it for them.

This is what "someone using AI will replace you" actually means. Not a robot. A colleague with the same skills as you, who figured out how to do the boring part of the job in 20 minutes instead of three hours.


The quick reference: what AI can and can't do

| AI is genuinely good at            | AI genuinely cannot do        |
|------------------------------------|-------------------------------|
| Producing first drafts fast        | Understanding what it wrote   |
| Finding patterns in large datasets | Knowing when it's wrong       |
| Summarizing long documents         | Exercising real judgment      |
| Translation and reformatting       | Reading a room                |
| Repeating and extending patterns   | Having taste                  |
| Answering common questions         | Caring about outcomes         |
| Speed and volume                   | Navigating unstated politics  |

The short version: AI is incredibly fast at tasks that have a knowable right answer or an acceptable pattern to follow. It fails at everything that requires genuine comprehension, judgment, or caring what happens next.


What this means if you're worried about your job

The best question to ask isn't "will AI replace my job?" It's "which parts of my job are actually pattern-based?"

Most jobs are a mix. The pattern-based parts are automatable. The judgment-based parts, the ones that require you to understand what's actually going on and make a call based on context that isn't written down anywhere, those are harder to touch.

You don't need to become a developer. You don't need to learn to code. You need to know what the tool actually is, use it for the boring parts, and keep doing the judgment parts that require a human in the room. Here's what those skills actually look like if you want to get specific.

The chess machine was impressive until someone opened the cabinet. AI is impressive until you ask it something that requires genuine comprehension. Knowing the difference is the whole game.


Frequently asked questions

What does "spicy autocomplete" mean?

It's a plain-language description of how large language models work. Instead of reasoning through a question, the model predicts what text is most likely to come next based on patterns in its training data. The "spicy" part is the scale: so much data and so much compute that the output looks like thinking even though it isn't.

Can AI actually understand what it's saying?

No. AI processes patterns and predicts likely text sequences. It has no comprehension of meaning. A well-documented demonstration: AI will agree that it's wrong multiple times in a row, not because it understands the correction, but because "you're right" is a statistically likely response to pushback. This is different from understanding.

What tasks is AI genuinely bad at?

AI consistently struggles with judgment calls in novel situations, detecting its own errors, understanding unstated context and politics, having genuine taste or aesthetic judgment, and caring whether its output is actually correct or useful. These are also the tasks most central to experienced professional work.

Is AI in 2025 actually new, or have we been using it for years?

You've been using AI for over a decade. Spam filters, autocorrect, Netflix recommendations, Spotify's Discover Weekly, Google search ranking, all of these use AI. What changed recently is that the output became human-readable enough to be visible, which caused the panic. The technology shift is real, but the timeline of "AI arriving" is mostly wrong. See the full breakdown of what the data actually says for context.

Why does AI make up facts so confidently?

Because it's predicting likely text, not retrieving verified information. Wrong text can be statistically plausible even if it's factually false. The model has no way to verify claims against reality. It only knows what patterns appeared in training data. This is why you should never use AI output for anything consequential without checking the facts yourself.

Does this mean AI isn't a threat to jobs?

No, it means the threat is specific and different from what headlines suggest. AI won't replace entire professions wholesale. It will automate the pattern-based, repetitive tasks within many jobs, which means people who do mostly that kind of work face real pressure. The practical response is to use the tools yourself for those parts, before someone else uses them to make you redundant.