AI is making you dumber (and you can't tell)

I think AI is making you dumber and you can't tell. At least that's what happened to me. Read this ~15-minute guide and by the end you'll have a simple system to use AI without losing the judgment that makes you, you.

What Are You Doing to Become Wiser?

The first thing AI gives you is speed. The second thing it gives you is confidence. The third thing it quietly takes away is your ability to tell when you're wrong.

Not all at once. Just a little, each time it works. At least, that's how it tends to go.

You start small. You use AI to draft something: a message, a block of code, a landing page. It's better than you expected. You tweak a few things. It works. Then you use it again. And again.

Very quickly, you're producing things that would have taken you hours or days, in a fraction of the time. And somewhere along the way, something subtle shifts.

You stop asking, "Do I understand this?" And start asking, "Does this look right?"

Eventually, those feel like the same question. They are not the same question.

Most AI mistakes don't look like mistakes

This is really the core problem. AI rarely fails in obvious ways anymore. It doesn't hand you garbage. It hands you things that are:

- Plausible
- Coherent
- Often quite good

That's what makes it dangerous (yet amazing).

You can ship code that runs but is fragile the moment new conditions are introduced. You can publish writing that reads well but misses the thing that makes your voice yours, the je ne sais quoi that turns readers into followers. You can build automations that technically work but don't actually produce results, and you can't figure out why.

The outputs pass a surface check. The problems are upstream: in assumptions you made without examining them, in logic you didn't fully follow, in edge cases you didn't consider because everything looked finished.
The word for this is plausible deniability against yourself. The output is good enough that you never have to confront whether you actually understood it.

The competence illusion

This is the uncomfortable part. You can now produce things that look like expert work without having expert understanding. You can:

- Build an app you can't debug
- Write content you can't defend
- Automate decisions you can't fully explain
- Design strategies you can't stress-test

And because the outputs are good, it's easy to believe your capability has grown more than it actually has. But there's a difference between:

- Producing something good, and
- Understanding why it's good (or when it will fail)

AI collapses the barriers that used to hold you back from creating things. But it doesn't help you think about whether what you're creating is useful. Whether it will be accepted and bought into by others. Whether it will, on balance, help your business, your life, or the world move closer to how you'd like them to be.

You are borrowing intelligence. You're still the one making the decisions. The question is whether you're making them consciously and deliberately, or acquiescing to something because it's fast and looks polished.

Decisions depend on judgment. And for that, you need to be upgrading your wisdom as fast as you're supercharging your intelligence.

Where wisdom actually shows up

Realistically, it isn't in the prompt. It's not in knowing the right tools. It's not in your tech stack or your AI workflow. It shows up in quieter moments:

- When something feels slightly off, and you pause instead of shipping
- When an output looks right, but you check anyway
- When you decide not to automate something, even though you could
- When you slow down at the exact moment everyone else is speeding up

Wisdom lives in the before and after of using AI:

- Before: Should I be doing this at all?
- After: Do I trust this, and why?

Everything in between is execution. Execution is the part AI is already better at than you.
The before and after is where you earn your keep.

The three ways you start losing it

Anyone who's ever moved from great individual contributor to first-time manager has had to figure out how to keep their skills sharp while spending less and less time being hands-on. The struggle to build judgment of what "good" means when you aren't directly in the work is real. That exact dynamic is now playing out between every knowledge worker and AI.

Here's how it happens:

1. You stop verifying

At first, you check everything. Then you check most things. Then you check when something feels off. Eventually, you stop checking because it "usually works."

That's the moment you become dependent on something you're no longer supervising. And you won't notice it happened, because nothing will break. Not immediately.

The tell: you start saying "looks good" more than "let me understand this."

2. You stop noticing

AI outputs are clean. Polished. Structured. Which makes it harder to see:

- Subtle logical gaps
- Slight tone mismatches
- Small but important inaccuracies
- Assumptions baked in that you didn't choose

You're seeing less because everything looks finished, not because there's less to see. Finished things don't trigger scrutiny. Your brain's quality filter starts letting things through because the packaging is professional.

This is the difference between an editor and a reader. Readers accept polished text. Editors interrogate it. AI is turning more of us into readers of our own work.

3. You stop deciding

AI suggests the next step. Then the next. Then the next.

At some point, you're no longer:

- Choosing direction
- Setting constraints
- Making tradeoffs
- Killing ideas that look good but aren't right

You're following a very competent stream of suggestions. And it feels like progress. It feels like you're moving fast.

But there's a difference between velocity and direction. AI gives you velocity. Only you can give yourself direction.
The shift: from using AI to supervising it

There's a point where the nature of your role changes, and most people don't notice when it happens.

Early on, you're using AI to help you do things. It's a tool. You're the operator.

Later (and this transition can happen in days or weeks, not years), you're deciding:

- What should be done
- What should be automated
- What should remain human
- What should exist at all

You're governing a system that does work on your behalf, no longer just doing the work. And the failure modes change completely.

It's no longer: "Did I write this correctly?"

It becomes:

- "Should this have been written this way at all?"
- "What happens if this scales?"
- "What am I no longer seeing?"
- "What did I used to catch that I'm now missing?"

This is the shift from operator to executive. And it's happening to everyone who uses AI seriously, whether they have that title or not.

What the work actually becomes

As AI takes on more of the doing, something else changes. The work itself shifts. It becomes less about producing things and more about making sense of what should be produced in the first place.

- What problem are we actually solving?
- What matters here, and what doesn't?
- What tradeoffs are we making, and are they the right ones?
- What does good even mean in this context?
- Who are we building this for, and do they actually want it?

These questions were always part of the work. But now they are the work. Because execution is no longer the bottleneck.

Which means your leverage is no longer in how much you can produce. It's in how well you can decide (and this is why I built Ideabrowser to help give people clarity). But deciding is a very different skill than the one most of us spent our careers building.

The wisdom advantage

There's another side to this that most people miss. AI does not level everyone up equally. It amplifies differences.
If you have strong judgment:

- You ask better questions
- You spot weak reasoning faster
- You choose better directions
- You know what matters and what doesn't
- You build things that hold up under pressure

And AI makes you dramatically more effective.

If your judgment is weak:

- You accept plausible answers without interrogation
- You follow suggestions without questioning them
- You build things that look right but don't hold up
- You mistake speed for progress
- You confuse output volume with actual results

And AI makes you faster at going in the wrong direction.

The gap widens. Not because of technical skill. Because of wisdom. Experience, taste, context, and judgment become the deciding factors.

The question is not "Can you use AI?" It's "Do you know what to do with what it gives you?"

You don't rise to the level of your tools. You fall to the level of what you practice.

There's a simple dynamic here that's easy to miss. Every time you use AI, you're making a trade: you save effort now, but you give up a rep. And reps are how capability forms. Not outputs. Reps.

If you consistently:

- Let AI structure your thinking
- Let AI resolve ambiguity
- Let AI decide what's "good enough"

you're practicing not doing those things yourself. And over time, that compounds.

You may still be able to:

- Recognize good outputs
- Assemble systems
- Ship work quickly

But your ability to reason from first principles, sit with uncertainty, and work through something messy? I think that starts to degrade. Not necessarily because you got worse, but because you stopped practicing.

Think of it like GPS. You can navigate anywhere instantly. But after years of using it, can you find your way across a city without it? Most people can't. Not because they lost the ability. Because they never exercised it enough to keep it.

The same thing is happening with thinking.

The dangerous version of this

You don't notice when you're not thinking. Everything still feels smooth. Outputs still look good.
Progress still appears real. Stakeholders are impressed. Clients are happy.

But you've quietly shifted from generating judgment to selecting from suggestions. And those are not the same skill.

Generating judgment means: I looked at this situation, considered the tradeoffs, weighed what I know, and made a call.

Selecting from suggestions means: I picked the option that looked best from a set someone (or something) else created.

One builds muscle. The other atrophies it. And both feel like decision-making while you're doing them.

A quick way to tell if this is happening to you

I'd pay attention to a few signals. As you work with AI, ask yourself:

- Am I accepting this because I understand it, or because it looks right?
- Could I explain this decision clearly to someone without referencing the AI output?
- Did I actually choose this direction, or did I follow the suggestion?
- What assumptions am I not examining right now?
- If this turns out to be wrong, would I know why?
- Is this output actually good, or just polished?
- Does this sound like me, or like something anyone could have written?
- When was the last time I disagreed with what AI gave me?

And maybe the most important one: Am I still thinking, or am I just reacting?

You don't need perfect answers. But if you stop asking these questions, you stop noticing. And if you stop noticing, you sacrifice the chance to truly expand your skillset toward wisdom rather than just efficiency.

What good practitioners actually do

If you watch people who are very good at working with AI (the ones producing genuinely differentiated results), you'll notice a few patterns in how they think.

They move fast, but slow down at the right moments. They'll generate 10 drafts quickly. They use speed for exploration. But when something matters (a key decision, a positioning statement, a product direction), they slow down. They stop and ask: what are the boundaries of what I'm exploring here? What am I not exploring?
Is this producing outcomes, or just outputs? They slow down to speed up.

Most people treat AI like an assembly line. Good practitioners treat it like a sparring partner. They push back. They disagree. They delete the first three suggestions and ask for something else entirely.

They don't just ask "Is this good?" They ask "What could be wrong here?" They treat AI outputs as proposals, not answers. They ask:

- What assumptions is this making?
- What would break this?
- What's the failure mode?
- What's the version of this that looks right today and falls apart in three months?

This is the difference between acceptance and interrogation. And interrogation is where the leverage lives.

They occasionally rebuild things manually. Not because they have to. Because they don't fully trust abstraction. They'll rewrite something from scratch (a piece of copy, a strategy doc, a piece of logic) or write code by hand, just to reconnect with what's actually happening underneath the polished output. It's the equivalent of a CEO who occasionally sits in on customer support calls. You do it because it keeps you honest, not because it's the most efficient use of your time.

They notice subtle degradation. They pay attention to things like:

- Does this really sound like me, or like a well-trained language model pretending to be me?
- Could a competitor produce this exact same thing? Would anyone notice the difference?
- Is this consistent with how I want to be known by peers, customers, and partners?

Most systems don't fail suddenly. They degrade quietly. The people who stay sharp are the ones who notice the drift before it becomes a problem.

A quick example

You build an automated email funnel. AI writes the emails. It adjusts messaging based on engagement data. It iterates based on performance.

At first, it works well. Open rates are solid. Conversions are decent. It's saving you 10+ hours a week. So you step back. You focus on other things. Weeks pass.

Then performance drops.
Not dramatically. Just enough to notice if you're paying attention. You dig in. What happened?

- The tone drifted. It got slightly more generic with each iteration. Optimized for open rates, not for sounding like a human who actually cares.
- The audience shifted. New subscribers had different expectations. The messaging didn't evolve to meet them.
- Small inaccuracies crept in. Nothing dramatic. Just enough to erode trust over a few weeks.
- Nobody was reviewing outputs anymore. The system was running. It just wasn't being governed.

Nothing broke. It degraded. And the only reason you noticed was because you happened to look at a specific email and thought: this doesn't sound like us anymore.

That instinct, that noticing, is wisdom. And it's the thing you lose first when you stop paying attention.

3 things you can do this week to become wiser

If you're using AI regularly, you need a few constraints that keep you in the game.

1. Pick one thing you refuse to outsource

Choose something important. Something that requires judgment. And do it yourself. Not because AI can't do it. Because you shouldn't give it up.

It could be:

- Structuring your thinking on a key decision before involving AI
- Writing the first draft of something that matters: a fundraise memo, a product strategy, a hard conversation
- Working through a problem fully before asking for help
- Deciding what NOT to build, even when AI makes building easy

The point is simple: keep practicing the kind of thinking you don't want to lose.

I do this every week. Before I involve AI in anything strategic, I sit with a blank page and write out my own thinking first. Messy. Unstructured. Full of contradictions. That's the point. The mess is where the insight lives. AI cleans things up too fast.
How you'll know it's working:

- You feel more clarity before you involve AI, not less
- You can explain your reasoning without referencing the output
- You notice when AI is wrong faster than you used to
- You start catching things you would have missed a month ago

If you're doing this as a ritual and it doesn't sharpen your thinking, you're going through the motions. If it's uncomfortable, you're probably doing it right.

2. Work with someone whose job is to make you think

If you're serious about this, don't go it alone. Find a great executive coach. Someone who:

- Asks where you're deferring instead of deciding
- Pushes on your reasoning, not just your results
- Helps you articulate where you want to be different, not just good
- Builds a program for using AI to become that, not just to get faster

The hardest thing to notice is how your own thinking is changing. Psychologists call this metacognition, which basically means thinking about your own thinking. It's hard to do alone, the same way it's hard to read the label from inside the jar.

Here's why a coach is different from AI: a good coach is concerned with helping you build your own ability to think more deeply and clearly. AI tells you what to do. A coach teaches you how to decide. Those are fundamentally different relationships.

A good coach helps you see:

- Where you're becoming more generic without realizing it
- Where you're avoiding hard decisions by delegating them to AI
- Where you're outsourcing more of your judgment than you think
- When and why you should care about something you've been ignoring

AI makes you more efficient. A coach makes sure you're still becoming more you.

Side note, because I know people will ask who my coach is: shoutout to my coach John Zapolski in LA, who I've been working with since 2019 and who helped me navigate everything from raising VC to selling companies to building the holdco of my dreams.
I don't get anything for saying this, but for those looking for a coach who gets the AI age, how to scale companies, and how to live a life well lived: hit up John (jzapolski@gmail.com).

How you'll know it's working:

- You catch yourself deferring when you wouldn't have before
- Your decisions feel more deliberate, not just faster
- You can articulate how your thinking is changing, not just what you're producing
- Other people start asking how you're thinking about things, not just what tools you're using

If your outputs improve but your self-awareness doesn't, something is off.

3. Write down one decision a week

Something where:

- AI was involved
- Judgment mattered
- There were tradeoffs, prioritizing one set of goals over another
- You weren't sure you were right

Write down:

- What you decided
- What role AI played
- What you chose not to do (and why)
- What you expect to happen

Then come back to it in 30 days.

This does one thing extremely well: it forces you to see whether you're actually getting better at deciding, or just faster at producing. It creates a feedback loop for your own judgment, the one thing AI can't build for you.

I keep mine in a simple doc. 5 minutes a week. It's probably the highest-leverage 5 minutes I spend.

How you'll know it's working:

- Your predictions about outcomes get more accurate over time
- You start noticing patterns in when you're right and wrong
- You change how you use AI based on what you learn about yourself
- You develop a clearer sense of what's worth your attention and what isn't

If nothing about your behavior changes after a month, you're documenting, not learning. Revisit how you're doing it.

The real divide that's coming

Let me be direct about what I think is going to happen. Over the next 2 to 3 years, we're going to see a split. Not between people who use AI and people who don't. That divide is already settled. Everyone will use AI.
The split will be between:

- People who use AI to become wiser: who develop better judgment, ask sharper questions, see further ahead, and make decisions that compound over time.
- People who use AI to become faster: who produce more, ship more, automate more, and slowly lose the ability to evaluate whether any of it is actually good.

Both groups will look productive. Both will ship impressive work. The difference won't be visible for a while.

But over time, the first group will build things that last. Companies with real judgment embedded in them. Content that sounds like a human wrote it because a human actually thought it through. Products that work because someone understood why, not just how.

The second group will build things that look right and feel hollow. That work until they don't. That scale until someone notices nobody's steering.

One last thing

Because this post wasn't long enough, I know, but I think it's worth saying: AI amplifies whatever you bring to it.

If you bring:

- Care → you get leverage
- Clarity → you get scale
- Judgment → you get power

If you bring:

- Carelessness → you get bigger mistakes, faster
- Shallow thinking → you get polished outputs that achieve nothing
- Avoidance → you get automation without capability

The goal is to use AI as leverage for becoming a better version of yourself. Because the easiest thing in the world right now is to become more capable and less wise at the same time. And the hardest thing, the thing that will separate you from everyone else in your space, is to refuse to let that happen.

The tools will keep getting better. The question is whether you will too.

My team/co-founders are running 2 free workshops on AI next week:

- "Build a startup in 60 minutes using AI" on April 22nd. Join to learn new tools, tactics, and tips to build your ideas out.
- "How AI-Native Product & Design Teams Work" on April 23rd. Join to learn workflows and tools, with live demos of how to bring your product/design team into the AI age.

Happy building.
I'm rooting for you.

Your friend,
- Greg Isenberg

This post was originally sent on my newsletter, "Greg's Letter." You can sign up here to get my posts first, plus special subscriber-only posts.