Podcast

Critical Ignoring: A Conversation With Christopher Mims on the AI Tech Bubble

Written by Intevity | May 6, 2026 3:35:23 PM

George Jagodzinski (00:00):

Today, we learned that the best BS detector starts with neuroscience and emotions, and that people who admit they were wrong are the best to stay close to. I'm joined by Chris Mims, technology columnist at the Wall Street Journal and author of the book How to AI. Chris has spent more than a decade and 500-plus weekly columns exploring tech and trends. He's one of the very few people I've seen hold himself accountable publicly on predictions. I love it. We dig into why disruption is rarer than we think, what AI is actually doing to the way software gets built, and how to think clearly when everyone around you is distracting you with hype and nonsense. Please welcome Chris.

(00:39):

Welcome to Evolving Industry, a no-BS podcast about business leaders who are successfully weaving technology into their company's DNA to forge a better path forward. If you're looking to actually move the ball forward rather than spinning around in a tornado of buzzwords, you're in the right place. I'm your host, George Jagodzinski. Chris, thanks so much for being here.

Chris Mims (01:15):

Yeah. Thank you for having me.

George Jagodzinski (01:16):

I was really looking forward to this conversation because I feel like you, more than almost anyone, have built up such a fantastic pattern recognition skill and a BS detector. And now more than ever, I think that's important. How many columns is it that you've written now?

Chris Mims (01:30):

I had to count for my 10-year anniversary, but I think it's over 500 because I'm past year 11 now.

George Jagodzinski (01:36):

Wow, that's impressive. So, tell me a little bit about the skills or frameworks you use for BS detection, to figure out what's hype versus what's real.

Chris Mims (01:45):

It's funny, I actually think that where I start, it's going to sound very woo-woo, but it's actually looking inside. I have a background in neuroscience. I had some training in cognitive psychology. I've talked to a lot of researchers over the years in these areas about when we're easiest to fool. There's an author named Maria Konnikova who wrote a really great book about con men and con artists, and I highly recommend it to everyone. She talks a lot about the conditions under which we are most easily fooled.

(02:16):

I've learned to be careful when I'm excited. I've learned to be careful when everyone else is shouting that this is the latest, greatest thing, and you've got to get on the hype train or you're going to get left behind. Obviously, we get a lot of that with AI, but we had it not so long ago with cryptocurrency. I don't know how far down Bitcoin is now.

(02:38):

I really start with how am I going to remain calm, cool, and collected when ... That's hard. I mean, look, I'm a writer. I'm a storyteller. Obviously, I'm passionate about what I do, so I wouldn't be able to do what I do if I wasn't excited about the things that are going on. I think that's true of most people.

(02:54):

Once I've gotten past that, there are some formal tools you can use, but again, these are tools that you don't end up being able to use until you can engage what Kahneman called System 2 thinking. I mean, I think most people are familiar with this framework by now. System 1 is your impulsive thinking. System 2 is slower, more logical. You're using more of your forebrain, your neocortex, the thing that really differentiates us from our closest relatives.

(03:23):

And that's the point at which there are all kinds of great frameworks. So, when I'm reading the news, there's a technique called lateral reading you can use to verify what you're looking at. There's also a way to filter out information in advance, called critical ignoring, because of course the sheer amount of information you're overwhelmed with contributes to all these other problems.

(03:48):

And I wrote a column about that. I was like, "This is the most important skill you need for 2026." So, there are a lot of things that you can use, but step one is don't get sucked in by the hype. And the important asterisk on that assertion is all of us are going to get sucked in by somebody's hype at some point. We have to be aware that we're all going to do that.

George Jagodzinski (04:11):

Yeah. Totally. And not having too much of an ego to think that you can avoid it. I know you said it's woo-woo, but because I'm really into mindfulness, it reminds me of that practice where you have an emotion and the first thing you need to do is look at the emotion and ask, "Why am I having that?" Or maybe it's even just a physical sensation: "Why am I having this physical sensation?" And then you can start to explore what's behind that and how it's going to impact your decision making. On the lateral reading, can you elaborate on that a little bit? What's a good example?

Chris Mims (04:44):

Yeah. So, lateral reading, it's a really critical skill now because our information ecosystem is so polluted, and I don't need to go into why. I think most people just nod when I say that. But lateral reading is: you see something, even if it's from a reputable source, but especially if it's something an AI is telling you, and you ask, "Okay, what are other sources I can use to verify this that are independent, that aren't just citing the original source?"

(05:10):

Funny enough, AI can help you with this. There are certain prompts that people will use. I mean, you can just use it to test your assumptions. I will often go in when I'm researching a topic and starting to form an idea in my head about why a particular critical mineral is doing something that day in the markets, and I'm going to test that assumption.

(05:29):

So, I'm going to be asking the AI, "Okay, what's wrong with this hypothesis? What's wrong with this assertion?" Especially things like Gemini, but ChatGPT will do this as well because it's scraping the Google search index. The dark side of lateral reading is folks saying, "I'm going to do my own research." But that's more of a media literacy and credibility issue. We can all get down our own rabbit holes.
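[Editor's note: for readers who want to try the assumption-testing Chris describes, here is a minimal sketch in Python using the OpenAI client library. The model name and the example hypothesis are illustrative assumptions, not anything from the conversation; any chat-style model and client would work the same way.]

    # A minimal sketch of using an LLM to attack, rather than confirm, a
    # hypothesis -- the "what's wrong with this assertion?" move Chris describes.
    # The model name and the hypothesis are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    hypothesis = "Lithium prices fell today because a major new mine came online."

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a skeptical research assistant. Do not agree by "
                    "default. List the strongest objections to the user's "
                    "hypothesis, what evidence would falsify it, and which "
                    "independent sources could confirm or refute it."
                ),
            },
            {"role": "user", "content": f"What's wrong with this hypothesis? {hypothesis}"},
        ],
    )

    print(response.choices[0].message.content)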

George Jagodzinski (05:57):

Yeah. And I saw in one of your most recent columns you were talking about how shocking it is, the number of educated, intelligent people who think that AI is actually just thinking like a human behind the scenes.

Chris Mims (06:09):

Yeah. They treat it as an oracle, which knows everything or is infallible when it's really not.

George Jagodzinski (06:16):

Yeah. I've been surprised. I have a computer science background, we're in the software business, and I'm finding that even people with CS backgrounds are falling into that trap a little bit.

Chris Mims (06:26):

Yeah. I mean, I think the challenge with AI is that you have to think of it as, like, the world's least neurotypical intern, in a way. When you talk about people who are not neurotypical, they have spiky intelligence. So, you think about a person like Elon Musk, who clearly has a great deal of intelligence in some areas and not a lot in others.

(06:46):

The way that people get tricked is sometimes you'll give an AI a hard problem, a hard research problem. Increasingly they're able to do more analytics. They used to not be able to do math, but now they're actually writing Python code in the wings so they can do calculations for you and stuff. I mean, even Google's basic search will do this now. It's incredible.
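[Editor's note: mechanically, "writing Python code in the wings" looks roughly like the sketch below: the model returns a program instead of an answer, and a harness executes it and reports the output. The ask_model helper is a hypothetical stand-in for a real model call, and production systems run generated code in a sandbox, never a bare exec.]

    # Rough sketch of the "code interpreter" pattern: the model writes Python,
    # the harness runs it, and the arithmetic is done by the interpreter rather
    # than by the model. ask_model is a hypothetical stand-in for an LLM call.
    import contextlib
    import io

    def ask_model(prompt: str) -> str:
        """Hypothetical LLM call; pretend the model returned this program."""
        return (
            "principal = 10_000\n"
            "rate = 0.07\n"
            "years = 30\n"
            "print(round(principal * (1 + rate) ** years, 2))"
        )

    def run_generated_code(code: str) -> str:
        """Execute model-written code and capture stdout (sandbox this in real systems)."""
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # never run untrusted generated code like this outside a sandbox
        return buffer.getvalue().strip()

    question = "What does $10,000 grow to at 7% compounded annually for 30 years?"
    generated = ask_model(f"Write Python that prints the answer to: {question}")
    print(run_generated_code(generated))  # the interpreter, not the model, did the math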

(07:05):

So, occasionally you'll have this wow moment where you're like, "Whoa, it actually used formal logic to reason through this problem that I gave it and it gave me the right answer and I checked it and it was correct." But the thing you got to remember is that it's just as fallible as a human. You have to test all of its base assumptions. And you also have to assume that every once in a while, it's going to go completely off the rails in a way that a human never would.

(07:32):

And so, fundamentally, it's software. Fundamentally, it is a fuzzier way to do what we have always done with software. And as we all know with software, it's garbage in, garbage out. If one assumption, one factor, one data point is wrong, the whole thing is incorrect. So, it can't do more than what a person does. It's not superhuman in that way. It's only superhumanly fast, really.

George Jagodzinski (08:00):

Yeah. And we're all human, we're all fallible, we all get it wrong. What I love about you is the accountability that you held yourself to. You published an article, I forget the title, but it was essentially all the things that you got wrong. I don't want to rehash the whole thing, but I'd love to hear: what motivated you to write that? What did you learn about yourself in writing it?

Chris Mims (08:20):

Yeah. I mean, I learned one unsurprising fact, which is that if you're going to be a tech columnist for a decade at the Wall Street Journal, you're often going to have an irrational belief in your own point of view. But that's true of all of us. We're all confidently making assertions based on strongly held feelings, which we're rationalizing after the fact.

(08:46):

The most insightful thing I think I've ever heard about predictions that people make is that predictions tend to be what we wish would happen rather than a calm calculus about what we think is going to happen. Now, there are people who predict for a living, they run hedge funds and stuff, but that's also why you have hedge funds that do great for 10 years and then lose half their value in one year. So, that fundamental irrationality is something that I learned a great deal about.

(09:19):

And then there's something I learned from befriending and interviewing futurists, people who literally predict the future for a living. The first thing any good futurist will tell you is you cannot predict the future. You can do scenario planning, where you imagine what's called the cone of possibility: here are all the things that could happen. Then you can start to prepare yourself for all those eventualities, or you can weigh them by probability and say, "These are the ones we're going to prioritize."

(09:46):

And that is not, I think, an intuitive or native way for people to think. People don't talk about that in everyday life. If you ask your friend, "Hey, what do you think's going to happen?" they're not going to say, "Well, 20% odds of this, 20% odds of the complete opposite." Very few people operate that way. It would just be hard to go through life. You would have decision paralysis all the time.

George Jagodzinski (10:07):

That would be an annoying friend.

Chris Mims (10:07):

That's the way the world actually exists.

George Jagodzinski (10:08):

That would not be my friend for very long.

Chris Mims (10:12):

Yeah. So, there are these formal frameworks that work, and big companies use them, and they do allow them to ... Sometimes you'll say, "Oh, it's so incredible how this leader is skating to where the puck is going." And it's like, yes, but what that leader has is the talent under them and the resources to prepare for all of the possibilities, or many possibilities, and 80% of that effort ends up getting wasted because only 20% of it is relevant to what actually transpires. But that's the way you have to do it.

George Jagodzinski (10:44):

Yeah. That makes a lot of sense. In writing that article, is there one thing that you got wrong that really stuck out to you?

Chris Mims (10:51):

I've been too busy obsessing about the more recent things that I've gotten wrong, which might be more relevant to your audience anyway. So, I'll tell you one of them. One thing that I think I got really wrong in my book, well, let's say half wrong, because a lot of what I predicted in the book is happening, and so it's still relevant. But one thing I very confidently predicted in the book is that the main way people are going to be using and encountering AI day to day is as a feature rather than a product. In other words, AI will be in the background, empowering the devices and services that we already use every day.

(11:29):

You see Google doing this a lot: AI just embedded in Gmail, doing auto responses for you, or automatically populating your calendar from emails you get, things like that. What I underestimated was the degree to which interacting with chatbots was becoming a new default way for people to interact with computers and software.

(11:53):

To their credit, a lot of the AI companies, especially Anthropic, OpenAI, and Google (Microsoft's trying; they'll get there eventually), are cramming so much functionality that you used to access through conventional software, through tooling, through APIs, into the chatbots, making it accessible through this plain-language interface. That's building a habit in consumers and people at work which is analogous to when Google happened, when search engines happened. That was a profoundly new way to search for and access information.

(12:28):

And it turns out this is not so far removed. I mean, talking to a chatbot is adjacent to just typing into a search engine. And so, that is becoming a dominant way that people get stuff done and access information. I really thought that because early chatbots required so much expertise, because people always talked about prompt engineering and everything else, I was like, "I'm not so convinced this is going to take over, because it's going to be easier for people to access AI through more familiar means." But I think I was wrong about that. And I think people are very rapidly learning the skill of getting a lot of utility out of these tools. We're all becoming our own prompt engineers.

George Jagodzinski (13:11):

Yeah. And product designers. I'm seeing the skillset of a product designer become something that everyone has to lean into if you really want to move things forward, right?

Chris Mims (13:21):

Right. It's funny, it's like Nietzsche's eternal recurrence. Every 10 years we get a new way for people to make their own custom one-off software, and we're all going to become our own software developers. Again, it's a thing I'm skeptical of, but right now, I mean, with Claude Code, if you want to go vibe code that thing you've always wanted to build, you can absolutely do it.

George Jagodzinski (13:44):

Yeah. I'll tell you just from our experiences, I've spent decades going into companies and they've built this one-off custom software internally. And we would always do this test where we'd say, "Hey guys, is this your secret sauce? Because if it's not, you shouldn't be building this in-house. You should be outsourcing this. Use best of breed tools."

(14:02):

And I can definitively say that that is turning 180 degrees right now, because you can build it custom for your company with far fewer resources and really make it work exactly how you need it to work. So, I'm definitely seeing things change there. I'm curious, where do you think we're at on the trough of disillusionment with AI?

Chris Mims (14:23):

I was thinking about that this morning. The challenge is that because AI is a general-purpose technology, because it's useful for so many different things, there are many different hype cycles running at once. It's like when you hit a bunch of tuning forks: some of them are resonant, and some of those waves are exactly out of phase, so they're destroying one another. And so, it depends on the industry, it depends on the company, it depends on the individual.

(14:48):

One hype cycle is software. In terms of that one, I think we're still approaching the peak, because AI is hugely, hugely disruptive to the way that we code and build software. Andrej Karpathy said the hot new programming language is English. And I think what that captures is that we are moving the programmer up many layers of abstraction away from the code. A good programmer now, in some cases, is a planner who is writing an extremely detailed specification, which is being passed off to a boss agent.

(15:25):

It's like we're not even talking to agents anymore. We're literally talking to agents who are talking to agents, and we're trying to get that entire hierarchy, which we have built for ourselves, to one-shot this little software project or big software project based on a sufficiently detailed specification. And then you have other coders who are more human-in-the-loop, blah, blah, blah. That's all just coder talk.
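[Editor's note: a toy sketch of the hierarchy Chris is gesturing at. A human-written spec goes to a "boss" agent that decomposes it into tasks for worker agents. The call_llm helper and the decomposition prompts are hypothetical placeholders, not any particular product's API; real harnesses wrap this skeleton in tool use, code execution, review, and retries.]

    # Toy sketch of "agents talking to agents": spec -> boss agent -> workers.
    # call_llm is a hypothetical stand-in for whatever model provider you use.
    from dataclasses import dataclass

    def call_llm(system: str, user: str) -> str:
        """Hypothetical model call; wire up a real provider here."""
        raise NotImplementedError

    @dataclass
    class Task:
        title: str
        instructions: str

    def boss_agent(spec: str) -> list[Task]:
        """The boss decomposes the human-written spec into worker-sized tasks."""
        plan = call_llm(
            system="You are a tech lead. Split this spec into small, independent tasks, one per line.",
            user=spec,
        )
        return [
            Task(title=line.strip(), instructions=f"{spec}\n\nYour task: {line.strip()}")
            for line in plan.splitlines()
            if line.strip()
        ]

    def worker_agent(task: Task) -> str:
        """Each worker tries to one-shot its slice of the project."""
        return call_llm(
            system="You are a careful software engineer. Return only code.",
            user=task.instructions,
        )

    def one_shot_project(spec: str) -> dict[str, str]:
        """Run the whole hierarchy: one detailed spec in, code artifacts out."""
        return {task.title: worker_agent(task) for task in boss_agent(spec)}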

(15:47):

When you're talking about using AI and agentic AI for everyday tasks, and those two are separate, in the minds of a lot of people and in their functionality, it depends on the industry. So, let's take an industry: medicine. We're still going up toward the peak there because adoption of AI note-taking tools is uneven. Some people, frankly, are using the wrong tools. I mean, there are literally two dozen startups in this space.

(16:17):

And when I have talked to people who've really thoroughly tested a lot of these, they're like, "Most of these are garbage, but this is the one I really like." And in the law, we're still approaching that peak. So, I think where we are hitting a trough of disillusionment, strangely enough, is actually in the financial markets. And that's even though four-year-old H100 chips from NVIDIA, which power all this AI, are actually more valuable now than they were a couple of years ago, because there's so much demand for inference.

(16:48):

Folks are really skeptical about whether or not OpenAI, for example, is going to be able to build as much infrastructure as they have said, or how competitive they're going to be with Google and Anthropic. There's a lot of concern about whether we have the physical resources to build out the amount of infrastructure that Oracle has promised. I mean, most of Oracle's future unrealized revenues are supposed to come from the build out for OpenAI. That's a question mark.

(17:20):

Half of Microsoft's future Azure cloud build out: again, OpenAI, big question mark. Where do we get all of the electricity for this when gas prices keep rising and these new data centers are being fueled by fracked natural gas, when the electrical grid is strained, when we have an administration that is trying to knock out as many renewable energy projects as possible, even though wind plus battery storage in the Plains states is now cheaper, on a per kilowatt-hour basis, than maintaining existing coal-fired power plants in those states?

(18:04):

So, we're supposed to have this all-in energy strategy, but our energy strategy is our AI strategy, and the energy markets here and around the world are pretty deranged right now. So, how does that affect our build out of AI? That's why the Magnificent Seven is down so much from its peaks right now. There's just a lot of uncertainty, also around dollar devaluation and inflation, and we're not even talking about tariffs yet. AI has become so big that macroeconomic factors that, in the old days, you only worried about affecting energy markets and banks, well, now they're going to affect Microsoft.

George Jagodzinski (18:49):

Yeah. There are so many more questions than there are answers. And I know in my circles, because I'm in software, it does feel like we're coming out of the trough of disillusionment. People who were eye-rolling this time last year are now really leaning into the real ROI of what they're implementing, but at the same time they have way more questions than answers as far as how this is all going to play out. And I find it really interesting that in this time of so many questions, you wrote a book about the laws of AI. It's bold. I like it. I'm curious, talk a little bit about what prompted you to write it and what you learned as you were writing the book.

Chris Mims (19:30):

Yeah. I mean, I was inspired by others who had written books about topics that were both fast moving and not so fast moving, where they said, "Let's get to the root of how this technology, this phenomenon works, and let's extrapolate rules that will be durable, that people can return to for years to come."

(19:51):

It's important to remember that modern AI, modern generative AI, is all built on the transformer architecture. That's how LLMs were built, and that's how everything that represents today's boom is built. And that generative AI, that transformer architecture, is of course distinct from what I call classic AI, which just means things more like predictive analytics and different kinds of models.

(20:16):

If tomorrow there's a totally, radically new architecture that somebody comes up with and transformers get thrown in the bin, then my book's irrelevant. But because we are continuing to build on this fundamental technology, which has certain advantages, certain quirks, certain characteristics, that gave me the confidence to say, "Here's how it works across a bunch of different fields, not just large language models. And if we take that all the way up the levels of abstraction to what it means for you, the everyday individual, here are the things that are going to be tricky about how it operates." Those things have held true.

(20:59):

Are engineers working around problems like hallucination? Yes. But is hallucination a fundamental characteristic of how these models work? Also, yes. So, it hasn't been eradicated. So, everything that I wrote about in the book about how to be careful about it while using AI is still relevant.

(21:17):

Also, a lot of what I wrote about in the book is the intersection of management, human nature, big systems, and the adoption of new technologies. And if there's one thing I've learned from more than a decade of writing about tech for The Wall Street Journal, it's that no matter how amazing and new and useful a technology is, adoption is rate-limited by humans' ability to learn it, to integrate it into what they do, to change all the systems around it.

(21:48):

And that's obviously why big companies are slower about this than startups and individuals and entrepreneurs. When I want to talk to people who are really on the cutting edge, I talk to people whose company is just them. They're really early adopters, and they're just like, "I move at whatever pace I want."

George Jagodzinski (22:04):

Yeah. That makes a lot of sense. One of the laws in there, I believe, is that experts will benefit from this more than novices, but what's also interesting is that AI can teach novices new skills. How do you balance that, and how do you think it's going to play out between experts and novices?

Chris Mims (22:24):

So, I think that there's a huge opportunity for teaching AI. And I don't mean AI in ed tech, because ed tech is a mess. And frankly, I don't want AI anywhere near my grade-school-aged children. No.

George Jagodzinski (22:39):

Agreed.

Chris Mims (22:40):

But if you're talking about somebody at a tech company who's being onboarded and is supposed to quickly get up to speed so they can start making commits and contributing, two things are happening. One is senior developers are being made more productive, sometimes only marginally more productive. Let's not get too excited. Meanwhile, a lot of companies have hiring freezes, or they just overhired throughout the pandemic, so they're laying people off.

(23:09):

But eventually we're going to return to a place where people are going to need to hire young coders again. This is just the way these cycles work. How do you onboard that person so they have enough knowledge that they can be directing the agentic AI that's writing a lot of the code for them? There is, I think, a huge opportunity for AIs that gently help bring up a person's level of knowledge during that onboarding process. And it's eventually going to be true not just for software companies; it's going to be true for every company.

(23:38):

I was talking to a construction company yesterday. They were outlining all of the things that they do internally just in terms of their sales pipeline, and I was just like, "Whoa, whoa, whoa. I need to start recording this conversation because there's so much going on here." And then I started thinking to myself, what a nightmare for the CEO of this 12-person company every time they've got to onboard somebody new who has to integrate themselves into their process. And I just thought, wow, they need a knowledge base. They need it to be accessible through AI. They need the AI to be integral to their onboarding process. Probably those companies exist. I haven't heard from them yet, but there's a huge opportunity for folks to do that.

(24:19):

I think that the opportunity has been missed so far because the ideology in Silicon Valley right now is AI is going to replace all of these humans. And one of my laws of AI is, "Sorry, it's just not going to happen." Yes, it will make some people more productive. So, you might have leaner companies and we're seeing that with leaner startups, but ultimately the AI is not intelligent or flexible in the way a human is. And if you want to succeed, you've got to make sure that your AI is augmenting your best people and eventually your best entry-level people rather than just trying to replace them.

George Jagodzinski (24:54):

Yeah. I mean, each industrial revolution has come with this promise that people are going to get replaced, and last I checked, there are more people doing work today than there have ever been.

Chris Mims (25:04):

Yes. That's called the lump of labor fallacy, this idea that there's only so much work out there. And it's like, no, farmers become web developers. This is the way it works.

George Jagodzinski (25:13):

Yeah, yeah. I think you also talk about how disruption doesn't happen as often as we think. Can you elaborate on that?

Chris Mims (25:20):

A lot of this is just that people are excitable, especially people in tech and investors. That whole news cycle is driven by investors and CEOs, and I don't blame them, but their every incentive is to say, "This is the greatest thing since sliced bread, please give me more money." Fundamentally, most developments are incremental, and big disruption is pretty infrequent.

(25:47):

So, if I'm going to borrow from another field, from evolutionary biology: there was a famous evolutionary biologist named Stephen Jay Gould, and he wrote about what's called punctuated equilibrium. If you look at the history of life on earth, or at any given species, what you have are these very rapid periods of adaptation and species radiation because something happens, an ice age or whatever. It's like, "Okay, adapt or die."

(26:13):

But then you have, for a long time, relatively little change in the genetics or the body plan of an animal. The same thing applies to technology. And so, people will say, "Oh, my god, Claude's new agentic framework, it's the biggest thing since ChatGPT." And it's like, "Is it?" Or are we still iterating on what was the true disruption, which was the application of the transformer architecture to large language models? That whole thing happened at Google and was exploited by OpenAI. And then you got the ChatGPT moment. That was a disruption.

(26:47):

The iPhone was a disruption; the internet was a disruption. In between, there's less turnover than you think. Also, if you go all the way back to Christensen's original thesis about disruption, part of his assertion was that these are the times when startups and upstarts can displace big companies. And what we've seen, frankly, in the past 20 years is that the big companies are able to acquire and copy fast enough that they aren't getting disrupted. I mean, there was a moment a few years ago when we thought, oh, OpenAI and their backer Microsoft are going to be hugely disruptive to Google. Who's going to be the Google of the AI age? Increasingly, I think it's Google.

George Jagodzinski (27:33):

Bill Gates told you that startups are silly, but the good ideas persevere. What do you see out there that you think is silly right now?

Chris Mims (27:41):

I think that there are quite a few copycat startups. Andrej Karpathy said on his last appearance on the Dwarkesh Podcast, "Right now there are more companies than ideas in Silicon Valley." So, do we need two dozen medical transcription AI startups? No. Do we need 25 different agentic harnesses for folks who are trying to make agents be less random and behave better? No. I also think that there are certain things in physical AI that are very silly.

(28:24):

I mean, here's a big one. I think humanoid robotics is one of the biggest bubbles, not just in tech right now but, I would say, in tech history, because these companies require so much capital. And you have Jensen Huang saying, "Oh, physical intelligence is the next big thing in AI." I agree. I happen to be somebody whose first book was about robotics.

(28:49):

And I say with a pretty high degree of confidence that this idea that Optimus is going to send Tesla's stock price to the moon, or that a lot of these other companies will do the same, is overblown. Let me be clear: I know a lot of their CEOs. I think they're very smart. I think a lot of them have really noble goals. One of them told me that his ultimate goal is to deal with America's aging population, because he witnessed his own grandfather's decline and he wanted a home companion to just help him live more independently. Great. Very stirring.

(29:22):

Are we going to get the kind of AI that's required to make humanoid robots accomplish these kinds of tasks and be truly relevant and cost-effective in factories anytime soon? Absolutely not. Are we going to get more robots? Yes. But humanoids, I think it's the biggest bubble in tech right now.

George Jagodzinski (29:39):

Interesting. You might need to talk with Oliver Mitchell. He was on an episode with me not that long ago. He's an investor in the humanoid space, and he did make some compelling arguments for use cases such as welding, an industry that has lost a lot of talent. But me being from Massachusetts, I'm partial to dog robots. I think dog robots are going to beat humanoid robots.

Chris Mims (30:02):

I mean, maybe we're splitting hairs, but it could be human torsos on dog bodies. Maybe centaur robots are what's actually going to be big.

George Jagodzinski (30:11):

Oh, I'm looking forward to centaur robots. That sounds good. Chris, you hold yourself accountable, and you have a nice, balanced framework for how you look at things. It's really refreshing, and I love it. I always like to finish these with a fun question: in your life, in your career, what's the best advice you've ever received?

Chris Mims (30:28):

Oh, it's the most basic advice that I got from my college mentor at the very, very beginning of my career. And she said, "What you should do in life is whatever's at the intersection of what you're good at and what you enjoy." And it was just so practical. It wasn't like, "Follow your dreams." Her underlying message was, "Be of service and you will always, A, be employed and B, feel a sense of purpose." So, definitely the best advice I ever got.

George Jagodzinski (30:57):

I love it. I think simple advice is the best advice. Chris, thanks so much for being here.

Chris Mims (31:02):

Yeah. Thank you so much for having me. It's been a pleasure.

George Jagodzinski (31:06):

Thanks for listening to Evolving Industry. For more, subscribe and follow us on your favorite podcast platform, and pretty please, drop us a review. We'd really appreciate it. If you're watching or listening on YouTube, hit that subscribe button and smash the bell button for notifications. If you know someone who's pushing the limits to evolve their business, reach out to the show at evolvingindustry@intevity.com, or reach out to me, George Jagodzinski, on LinkedIn. I love speaking with people who are getting the hard work done. The business environment's always changing, and you're either keeping up or going extinct. We'll catch you next time, and until then, keep evolving.