Gen AI and the Illusion of Knowledge. Part 2

Critical thinking is slow. It takes time, effort, and a good deal of discomfort. It requires reading beyond the headline, questioning assumptions, and sometimes admitting we were wrong. Beneath the excitement of a tool that writes like us, speaks like us, and even seems to think like us, there lies a quieter, more dangerous shift: not just in how we access knowledge, but in how we relate to it.

Generative AI offers the illusion of understanding. It hands us neat answers but trains us to be less intellectually curious. It provides answers without friction, but it is precisely friction that builds critical thinking. Remove it, and we risk becoming a generation that confuses having access to information with having thought through it.

It’s worth asking why, unlike in Gutenberg’s time, today’s educated elite are not resisting this technological upheaval. Perhaps because this shift isn’t merely incidental; it’s advantageous to those in power.

Think about it!

A population that accepts answers without scrutiny is easier to influence, easier to manage, and easier to keep distracted. A tool like this teaches us that sounding smart is the same as being informed. That a quick answer is as good as deep knowledge. That efficiency is better than insight. Once we accept that, we lose something essential. The danger is not that AI will replace our intelligence, but that it will reshape our understanding of what intelligence is.

Throughout history, those in power have always looked for ways to shape how the public thinks. Control the printing press, and you control what gets read. Control the curriculum, and you control how generations are taught to see the world. Control the algorithm, and you control what shows up before someone even knows what to look for. Now, with generative AI, we have entered a new phase. The answers appear neutral. The language sounds objective, but every output is shaped by hidden decisions: what to prioritize, what to omit, how to phrase things, whose perspective to center. The user sees none of this. They see only a seamless answer that sounds smart and confident.

This is what makes it dangerous. It doesn’t silence dissent; it drowns it in easy explanations. It doesn’t demand obedience; it gives people just enough information to feel like they know what they’re talking about. And then it sends them back into the world, confident but uninformed. There is a strategic comfort in a world where surface-level knowledge dominates. A population that feels smart but lacks depth is easier to distract and harder to organize. It can be led to believe it understands complex systems without ever probing their mechanics. It accepts the narrative without inspecting the source.

Critical thinking has always been a threat to power. Deep reading, nuanced debate, and intellectual struggle are dangerous because they produce people who ask hard questions: about inequality, exploitation, injustice, and truth. But what happens when those people are instead trained to feel informed without actually being informed? When they rely on AI summaries instead of primary sources?

In this sense, AI becomes a kind of digital opiate, super fast and ever flattering. It tells you you’re smart. It gives you the answers. It saves you the effort of wrestling with uncertainty or contradiction. And all the while, the real machinery of power, the structures that shape your education, your economy, and your reality, keeps humming in the background, largely unquestioned.

The student who relies on AI to finish assignments may get better grades, but not a better education. The entrepreneur who uses AI to pitch investors may sound sharper, but won’t build better companies. The difference between real understanding and artificial confidence is subtle, but it matters.

Generative AI, under the guise of democratizing knowledge, could actually be narrowing the scope of thought, keeping people intellectually passive and pacified by the illusion of learning. We need to normalize not knowing. We need to teach people that a good question is more valuable than a quick answer. We need to celebrate slow thinkers. We need to pause before reposting, rewording, or regurgitating. And above all, we need to stay in the habit of thinking for ourselves.

Gen AI is a remarkable tool. But if we don’t take care, it will become a substitute rather than a shortcut: not for intelligence, but for the effort it takes to cultivate it. And when that happens, the danger is not just that we will forget how to think. It’s that we won’t even notice.

After all, why would you question the system if it talks back to you in perfect prose?

Sonam Pelden
