When I was a kid, I wanted sea monkeys. Bad. The ads in the back of comic books were spectacular — happy little humanoid creatures swimming around a tiny castle, waving at their proud owner. I mailed in my allowance and waited.
What arrived was a packet of brine shrimp eggs.
If you were lucky, a few would hatch. They were microscopic. They did not wave. They were not happy. There was no castle.
But here’s the thing — I learned something important from that experience. I learned that compelling presentation has absolutely nothing to do with truthfulness. I learned to ask “is this too good to be true?” before reaching for my allowance. I developed, in other words, a bullshit meter.
That meter has served me every single day since. And right now, in the age of AI, it may be the most important professional skill any of us can have.
AI is incredibly powerful. It’s also confidently wrong.
Let me be direct: AI tools are genuinely remarkable. They accelerate work, surface connections across vast amounts of information, and can make a skilled person dramatically more productive. I use them constantly.
They also hallucinate. They fabricate. They confabulate with the calm authority of someone who has never once been wrong about anything in their entire life.
> MIT research found that AI models are 34% more likely to use words like “definitely” and “certainly” when generating incorrect information. The more wrong it is, the more sure it sounds.
Lawyers have submitted fabricated case citations to courts. Air Canada was held legally liable when its chatbot gave a grieving customer the wrong bereavement fare policy. Medical AI tools have given dangerous advice to people in crisis. In 2024, nearly half of enterprise AI users admitted to making at least one major business decision based on information that was simply made up.
This isn’t a fringe problem. It’s happening at scale, every day, across every industry.
The internet generation never ordered the X-ray glasses
Here’s where I think the real risk lies, and I say this without judgment — just observation.
Those of us who grew up before the internet developed a certain immunity through failure. We fell for the sea monkeys. We ordered the X-ray glasses. We got burned by something that looked real but wasn’t, and we recalibrated. We built instincts.
The internet accelerated the delivery of information far beyond our ability to evaluate it. Social media made sharing feel more natural than verifying. An entire generation learned to consume and trust at a pace that made skepticism feel like friction — like you were being slow, or difficult, or out of touch.
AI doesn’t change that dynamic. It supercharges it.
Because AI output *looks* credible. It’s well-formatted. It cites things (sometimes real things). It answers confidently and completely. It doesn’t say “um” or trail off. There are no tells: none of the usual signals that cue your brain to slow down and check.
> The sea monkey ad looked compelling. An AI-generated answer looks authoritative. The fundamental test is the same: does the evidence actually support the claim?
The expertise gap is where the real damage happens
Complete novices are actually pretty safe. They know they don’t know, so they check.
Complete experts are also pretty safe. They have enough deep knowledge to catch errors that contradict what they know to be true.
The danger zone is the middle — the partially informed professional who knows just enough to feel confident evaluating AI output, but not enough to catch the subtle wrong turn. They’re the ones who pass the fabricated case citation to the senior partner without blinking. They’re the ones who ship the AI-generated analysis because it sounded right.
This is not a failure of intelligence. It’s a failure of calibration. And it’s completely understandable — AI is new, it moves fast, and the pressure to use it quickly is real.
What responsible AI use actually looks like
The answer isn’t to avoid AI. That ship has sailed, and frankly, the tools are too useful to abandon. The answer is to use it the way you’d use a brilliant, fast, occasionally unreliable research assistant.
You wouldn’t let a first-year analyst publish a client report without review. You wouldn’t take a junior engineer’s architectural recommendation without a second set of expert eyes. The same principle applies here.
Real AI governance — the kind that actually works — isn’t compliance theater. It’s subject matter experts in the loop who can genuinely recognize when the output has gone sideways. Not rubber stampers. Not people who trust it because it sounds confident. People who know enough to say “that’s wrong, here’s why.”
The organizations getting this right are the ones where domain expertise is treated as the check on AI output, not a relic to be replaced by it.
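For teams that build with AI, here’s what that check can look like in its simplest form — a minimal sketch in Python, not anyone’s production system. The names (`Draft`, `review_gate`, `expert_check`) are hypothetical, and in a real workflow `expert_check` would be a domain expert’s sign-off or a lookup against a trusted source rather than a function. The point is structural: the default path is rejection, and nothing ships until every claim passes a check grounded in actual domain knowledge.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    """An AI-generated draft plus the specific claims it makes."""
    text: str
    claims: list[str]


def review_gate(draft: Draft, expert_check: Callable[[str], bool]) -> Draft:
    """Refuse to release a draft until every claim passes an expert check.

    `expert_check` stands in for whatever real verification you have:
    a subject matter expert's review, a citation lookup against an
    authoritative database, a comparison against the actual policy text.
    """
    unverified = [claim for claim in draft.claims if not expert_check(claim)]
    if unverified:
        # Default-deny: confident-sounding output is held, not shipped.
        raise ValueError(f"Held for review, {len(unverified)} unverified claim(s): {unverified}")
    return draft


if __name__ == "__main__":
    draft = Draft(
        text="Per Smith v. Jones (2019), the fare is refundable.",
        claims=["Smith v. Jones (2019) exists", "the fare is refundable"],
    )
    try:
        # Nothing has been verified yet, so the gate rejects the draft.
        review_gate(draft, expert_check=lambda claim: False)
    except ValueError as err:
        print(err)
```

The design choice that matters is the default. A rubber-stamp process approves unless someone objects; a working gate rejects unless someone who knows the domain affirms. That inversion is the whole difference between governance and compliance theater.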
The bullshit meter isn’t optional anymore
We are in a transitional moment. AI is improving rapidly: hallucination rates in 2025 are dramatically lower than they were in 2023, and the trajectory is real. Systems are getting more accurate, more grounded, more aware of their own uncertainty.
But we’re not there yet. And in the meantime, the gap between “this sounds convincing” and “this is true” is wide enough to drive serious consequences through.
The sea monkey generation learned something that didn’t make it into the curriculum: the medium of delivery tells you nothing about the truth of the content. A glossy comic book ad. A confident AI response. A viral social media post. They all share the same fundamental property — something is trying to persuade you, and persuasiveness is not the same as accuracy.
The bullshit meter isn’t cynicism. It’s not technophobia. It’s a professional discipline — the habit of pausing before you trust, verifying before you act, and knowing enough about a domain to recognize when something has gone quietly wrong.
Right now, in 2026, that discipline is the difference between AI being your greatest professional advantage and AI being your most expensive mistake.
Check the output. Know your domain. Trust but verify.
The sea monkeys aren’t going to wave at you.

