How We're Really Using AI (And What That Says About Us)

When OpenAI first launched ChatGPT, the hype was massive. This was going to change everything: writing, thinking, work, creativity, design, and life.

Then something very human happened.

We turned the most powerful thinking machine ever built into a slightly more helpful Clippy. That’s not an insult. It’s just reality. A new research paper reveals how people are actually using generative AI, and the results are surprisingly mundane, refreshingly creative, and quietly telling.

In How People Are Really Using Generative AI Now, Marc Zao-Sanders and his team analysed thousands of posts across Reddit, Quora, and AI forums to map out what people are actually doing with these tools. The results aren’t dramatic or dystopian. They’re mostly useful, often humble, and occasionally weird. But taken together, they say a lot about us, not just how we use AI, but how we think, work, and make sense of the world in 2025.

The top use case, by far, is productivity, but not in some big "revolutionize my entire workflow" kind of way. It's much more incremental. People are asking AI to help them write emails, summarize documents, format spreadsheets, clean up grammar, and understand unfamiliar concepts. Nothing surprising here. Think "Rewrite this in a more professional tone" or "Explain this like I'm five."

So in a way, AI isn’t replacing workers. It’s propping them up. It’s acting like the intern, the proofreader, the coach, the thought partner. This isn’t flashy or dramatic but it’s quietly becoming how knowledge work gets done.

Another major use case is learning. AI has become the tutor we always wished we had: patient, fast, and always available. People are using it to understand complex topics before exams and job interviews, decode technical jargon, and get quick breakdowns of political history, science, or economics.

It’s education by osmosis, where you ask a few questions, build a foundation, and move on. But this isn't deep work. Are we really learning, or just collecting enough context to sound smart? There’s a difference between understanding something and sounding like you do. And only one will hold up in the long run.

People are also using AI for creative support. Creative work with AI isn't always about producing finished products. It's about getting unstuck, about pushing past the blank page: writing stories, brainstorming startup names, or turning rough thoughts into smooth prose. I'm guilty of this too. I often dump my thoughts in and it helps me make sense of them. Sometimes it feels like talking to a really fast and very polite co-writer.

This is where AI shines, not in replacing creativity, but in removing friction. Of course, there's a line. If the AI writes your novel for you, is it still your novel? But if it helps you write your novel, that's a very different relationship. It's less about replacing imagination and more about scaffolding it. But then where does the AI end and the human begin?

Beyond the mainstream use cases, the paper surfaces some surprisingly strange examples of how people are using generative AI. One of them is as a therapist. I don't think people are claiming ChatGPT should replace licensed professionals, but thousands of posts show people turning to generative AI to process emotions, get help making decisions, and even reframe inner narratives. People are turning to AI in a very human way: needing someone (or something) to talk to.

Reddit threads are full of stories like “I journal with ChatGPT every night. It doesn’t judge.” Because when you strip away judgment, status, and social pressure, what people want most is a safe space to think out loud. Generative AI gives them that. While the implications are still being debated, the takeaway is clear: AI can be comforting, even if it's not conscious.

Of course, this isn’t a substitute for human connection. But in a world where loneliness is a growing public health crisis, and therapy is expensive or inaccessible for many, this might be one of the most quietly revolutionary use cases of all. AI won't cure your trauma. But it might help you name it. And that’s not nothing.

Other edge (but notable) cases include roleplay and fanfiction, communication training, and simulated job interviews and negotiations. These edge cases matter. They remind us that technology isn't just about efficiency. It's also about exploration, play, and meaning.

So what does this really tell us? If you zoom out, this research isn’t just about what we do with AI. It’s about how we think with AI.

At this point in time, we are not using AI to replace our minds but to scaffold them: to boost confidence, reduce cognitive load, and turn overwhelm into progress.

But, as with every article on this blog, here's a word of caution, because there's a catch. If we're not careful, we start to delegate too much of the thinking itself. We get good at asking for answers and forget how to sit with the question. The tools are getting better, but the risk remains that we stop building the mental muscles we used to rely on.

So the challenge is clear: Use AI. Lean on it. Stretch it. But don’t lose your own curiosity in the process.

AI is only as powerful as the questions we ask of it and the purpose we bring to it. AI is a tool, not a compass. It can do amazing things. But it won’t tell you what matters. That’s still your job.

And maybe, just maybe, we should still leave that to the humans.

Sonam Pelden
