Are We Losing Ourselves to AI?

Whenever I talk to people about AI, the first thing they ask me is whether it's plausible that humans will one day be outsmarted and dominated by a superintelligence. There's often a fear that AI will replace human capabilities. Or worse, that something like the Terminator will, on purpose or by accident, annihilate everyone on the planet.

These pop culture touchstones have become an almost religious mythology. It's only natural that some of us long to experience an AI like this, to realise a long-held dream. However, making a myth of technology can make it more likely that we fail to operate it well, and this kind of thinking limits our imagination, tying it to yesterday's dreams. The danger with the Terminator story isn't that it will happen but that it distracts us from the real risk presented by AI: the slow and subtle erosion of our identity.

In a world increasingly shaped by algorithms, the question we should be asking isn’t just what AI can do, but who is building it, and what beliefs and ideologies do they have?

Today, AI is dominated by a small group of predominantly white, male technical experts working at a handful of elite institutions. If they're the ones teaching AI how to think and understand the world, the technology they create is likely to reflect their values and biases in everything from search engines to hiring decisions, from medical diagnoses to the art we consume.

We often say that history is written by the victors. But in this new digital era, history, and reality itself, is increasingly being written by AI. And because AI learns from past data, it risks perpetuating the inequalities and prejudices of the past rather than challenging them. The more we rely on these tools, the more our worldview is shaped by an increasingly narrow perspective, subtly erasing cultural nuances and identities that do not fit within the dominant narrative.

If small and remote communities are ignored in this process, what happens to their voices? What happens to people who are less influential or who don't have access to AI development? The consequences of this oversight are profound. If AI continues to evolve without diverse voices contributing to its development, we risk a homogenized future, one where local languages and cultures disappear, and where power becomes even more centralized in the hands of a few tech elites. This isn't just about representation; it's about autonomy. Who gets to decide what knowledge is prioritized? Who determines what is considered true?

So, how do we overcome this? The solution isn’t simply to demand better AI. It’s to demand more inclusive AI. We need to invest in AI development that includes voices from different cultures, geographies, and socioeconomic backgrounds. Governments and organizations must push for transparency and accountability in AI systems, ensuring that biases are identified and addressed. Small and remote communities must be given the tools and resources to build their own AI solutions, rather than relying on technology created thousands of miles away by people who do not understand their realities.

If we fail to address this now, we risk losing more than just control over AI. We risk losing ourselves. The future of AI should not be dictated by a select few; it should be a collective effort, shaped by the full spectrum of human experience. Only then can we ensure that AI reflects the diverse, complex, and beautiful world we live in, rather than a narrow, algorithmically determined version of it.

Sonam Pelden
