
Over-reliance on AI: How it may undermine our ability to think

Nick Smallman

Posted on: 10/11/2025

Reading time: 10 mins



We are living in extraordinary times. The promise of artificial intelligence, in particular your own AI assistant, heralds a future where tasks are faster, smarter and seemingly simpler. But as someone who has spent the last 27 years helping humans connect with each other, I find myself increasingly concerned. Because behind the shiny surface of AI lies a more subtle cost: the gradual erosion of our capacity to think deliberately, deeply and independently. The consequences for our relationships, and for our world, are dystopian.

The promise of AI: assistance, efficiency and creativity
I’m a fan of helpful technology used well. AI can help us automate the mundane, free up time for higher-order thinking, spark new ideas and generally speed things up. It is part of the toolbox we can use to improve certain things. But that is only a good idea on paper. Humans are unlikely to follow best practice; rather, they will embrace shortcuts and look for frictionless living if they can. That kind of behaviour is a road to disaster.

The hidden costs of over-reliance on AI
What happens when we begin to treat AI as a substitute rather than a partner? I want to walk you through some of the cognitive, social and psychological consequences.

Cognitive laziness and skill atrophy
One of the more alarming findings is the phenomenon of cognitive off-loading: we delegate mental tasks to AI, and our own mental muscles begin to weaken. A recent study found that students relying heavily on AI dialogue systems exhibited diminished decision-making and analytical-thinking skills (SpringerOpen). Another reported that heavy use of AI tools correlated with lower critical-thinking scores (Phys.org).
It is inevitable, then, that when the machine does the thinking, we stop thinking as hard. Over time, our abilities atrophy.
Good communicators are great thinkers: nothing replaces the muscle of active thought, of mental effort. If that muscle isn’t exercised, it loses tone. So there are serious consequences ahead if we don’t get a grip.

Shallow understanding
When we lean on AI for quick answers, the risk is that we miss the depth. We may get the result, but we skip the struggle. That struggle, that friction, is what builds comprehension, retention and transferable skill. Research using EEG found that users of a large language model (LLM) had lower neural engagement and under-performed the “brain alone” group (TIME).
We may feel we are being efficient; in reality, we may simply be surfing the surface of understanding rather than diving in. And when complexity arrives, when new challenges that haven’t been pre-programmed emerge, that shallow understanding falls apart. Remember: LLMs are not creative; they are good at processing what is already known.

Homogenisation of thought
Another problem: when everyone uses the same AI templates, prompts and suggestions, we risk ending up with the same answers. A study of AI in decision-making found that individuals supported by AI produced “a less diverse set of outcomes for the same task” (Slashdot).
In other words, when we outsource creativity and reasoning, we may end up with the same reasoning, the same voices, the same predictable patterns. In a world that needs new thinking, fresh perspectives and genuine human insight, this is a serious drawback.

Reduced attention span and reflection
Our ability to sit with a question, reflect, debate and question assumptions is shrinking. The speed of AI, plus the temptation to use it for everything, short-circuits slow thinking. The Institute for Health & Well-being blog at IE University warns that reliance on AI may diminish critical-thinking skills and alter our cognitive processes (IE University).
It’s as though the brain says, “Hey, someone else can do this,” and we stop engaging. Reflection diminishes; attention fragments. There is already a boatload of evidence for this in our dealings with other technologies that hijack our attention.

Manipulation and dependence
When we rely on AI, we surrender a degree of control. The tools we use may embed bias, present partial truths, or shape our perspective in subtle ways. AI systems themselves are not free from human-style cognitive biases; they may replicate or amplify them (Live Science).
And studies show that when humans work with AI suggestions, they may accept them even when they are erroneous. This phenomenon, labelled ‘over-reliance’, is becoming recognised as a serious risk (Causality in Cognition Lab).
When we hand over our decision-making to AI or algorithms, in work, life or learning, we become dependent and vulnerable. Remember: we will be lazy if we can get away with it.

Erosion of memory retention
Memory is one of the casualties of outsourcing. When we rely on tools to fetch information rather than anchoring it in our own minds, our retention weakens. One commentary refers to “AI-induced cognitive atrophy” (AICICA), warning of long-term stunting of memory and comprehension when we habitually lean on AI (PMC).
I’ve seen this: when participants don’t engage deeply, the learning sticks less. It’s the difference between remembering how to ride a bike and only knowing, conceptually, how to balance.

Diminished problem-solving skills
Problem-solving is the heart of human thinking and functioning: defining the issue, exploring possibilities, testing hypotheses, iterating. If we let AI present the “answer” rather than working through the process ourselves, our problem-solving muscles shrink. An MDPI publication summarised that long-term reliance on AI decision support reduced cognitive resilience and independent thought (MDPI).

Impaired decision-making abilities
When the human element is diminished, decision-making quality can drop. One of the challenges in human–AI collaboration is that the conditions under which the human plus AI outperform the human alone are still not fully understood (PMC).
In other words: throwing AI at decisions doesn’t guarantee better decisions. In fact, if the human doesn’t engage critically, decisions may become weaker. Worse still, in critical contexts the human is less able to step in when the AI fails or misjudges.

Increased vulnerability to misinformation
With reduced critical-thinking skills, shallow comprehension and less engagement, we are exposed. AI-generated content is prolific, often plausible, but sometimes incorrect or misleading. The Guardian recently observed that while many students use AI regularly, a significant percentage believe it has had a negative impact on their skills (The Guardian).
We may trust the machine instead of the person. We may accept what looks like an answer without interrogating its basis. That’s dangerous in a media-rich era in which misinformation proliferates. We also have a specific bias called the illusory truth effect: the tendency to believe something the more often we see it. In an information environment saturated with content, this bias alone is incredibly damaging to our understanding of the world.

Psychological and social consequences
Beyond cognition, there are deeper human effects. If we outsource our thinking, do we also outsource our sense of agency? If we become passive recipients of AI suggestions, our confidence, our self-efficacy and our ability to connect with other humans on a thoughtful level may suffer. One article in the Financial Times observed that over-reliance on AI tools in the workplace also carries risks for mental health: loneliness, reduced growth opportunities and less meaningful interaction (Financial Times).
In training sessions, I’ve seen how building human connection, emotional intelligence and honest conversation are the antidotes to digital isolation. It’s not just what we think; it’s how we relate, how we form teams, how we adapt. AI alone cannot nourish that.

Reclaiming our minds: How to use AI wisely
So what is the solution? It isn’t to reject AI; that would be naïve. It’s to partner with it intelligently. At The Sustainable Human we’ll help people do exactly that: engage with technology critically, deliberately and human-first. Here are some guiding principles.

Treat AI as collaborator, not authority
Think of AI as a smart assistant, not the decision-maker. Use it to generate ideas, organise information and speed up some tasks, but always filter the output through your human judgement. Ask: Who asked the question? What assumptions lie behind it? What is missing?
Craft the prompts. Don’t let the prompts craft you.
Human oversight matters. 
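
To make that oversight concrete, here is a minimal sketch in Python of what a human review gate might look like. Everything in it is illustrative: ask_model is a hypothetical stand-in for whichever AI tool you use, and the point is the deliberate pause before anything is accepted, not the particular code.

# A minimal, illustrative sketch of "AI as collaborator, not authority":
# the model drafts, a human reviews, and nothing is accepted without an
# explicit human decision. ask_model is a hypothetical placeholder.

def ask_model(prompt: str) -> str:
    """Hypothetical call to an AI assistant; wire this to your own tool."""
    raise NotImplementedError("Connect this to the AI assistant you use.")

def human_review(draft: str) -> str:
    """Force a deliberate pause: the human accepts, edits or discards."""
    print("--- AI draft ---")
    print(draft)
    choice = input("Accept (a), edit (e) or discard (d)? ").strip().lower()
    if choice == "a":
        return draft
    if choice == "e":
        return input("Enter your revised version:\n")
    return ""  # discarded: start again from your own thinking

def collaborate(task: str) -> str:
    """Generate a first draft with AI, then pass it through human judgement."""
    draft = ask_model(f"Suggest a first draft for: {task}")
    return human_review(draft)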

Slow thinking, critical questioning and deliberate practice
Echoing the work of Nobel laureate Daniel Kahneman, we need to cultivate “System 2” thinking: slow, reflective and analytical, not just the fast, intuitive mode. Encourage yourself to pause, reflect and ask questions such as:

  • What is this AI assuming?
  • How do I know this suggestion is valid?
  • What might be missing?
  • Could I solve this without the tool?

Practical habits to strengthen cognitive skills
Here are some actionable habits I’ll invite you to adopt. The key is to get into the habit of thinking independently of AI:

  • Delay the use of AI: Before you open the tool, try framing a solution yourself. Write your thoughts down, then ask the AI to compare or challenge.
  • Reflect on the difference: Compare your version of an answer with the AI’s. What changed? Did you lose something? Did you gain something?
  • Use prompting as teaching: Ask the AI to suggest alternatives, challenge assumptions, play devil’s advocate. Make it generate questions, not just answers.
  • Keep a thinking journal: Record moments when you relied on the machine. Note how you felt, what you missed, what you learned. Over time you’ll map your dependency and growth (a small illustrative sketch of one way to do this follows this list).
  • Engage in human-to-human deliberation: Because the mind-muscle thrives in interaction, not isolation.
  • Work without AI on purpose: Sometimes work without AI for a predefined period (e.g., write an email or solve a problem without it). This strengthens your independent capability.
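
For the thinking journal in particular, here is a minimal sketch, assuming nothing beyond Python’s standard library: a dated log of when you reached for AI, whether you attempted the task yourself first, and what you noticed. The file name and fields are illustrative; a paper notebook works just as well.

# A minimal, illustrative "thinking journal": a dated CSV log of moments
# when you reached for AI. File name and fields are examples, not a standard.

import csv
from datetime import date
from pathlib import Path

JOURNAL = Path("thinking_journal.csv")
FIELDS = ["date", "task", "used_ai", "tried_it_myself_first", "what_i_noticed"]

def log_entry(task: str, used_ai: bool, tried_it_myself_first: bool, what_i_noticed: str) -> None:
    """Append one reflection to the journal, creating the file if needed."""
    is_new = not JOURNAL.exists()
    with JOURNAL.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "task": task,
            "used_ai": used_ai,
            "tried_it_myself_first": tried_it_myself_first,
            "what_i_noticed": what_i_noticed,
        })

# Example:
# log_entry("weekly report", used_ai=True, tried_it_myself_first=False,
#           what_i_noticed="Accepted the AI's structure without questioning it.")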

Conclusion
As someone who has seen the L&D world from the inside out, I believe deeply in the power of human learning, human relationships and human skill. Technology, especially AI, will influence everything we do. But that influence must not erode what makes us human: our capacity to think, to question, to connect, to feel and to decide.
With The Sustainable Human I’m excited to launch live events that foster exactly those skills: interpersonal communication, critical thinking, media literacy, emotional intelligence and human connection in an AI-dominated reality. Because yes, our future involves machines. But our value lies in humans.
So let’s use AI. And let’s not let AI use us. 

References

  1. Hua et al., “The effects of over-reliance on AI dialogue systems on students,” SLE Journal (SpringerOpen), 2024.
  2. Zhai et al.; Krullaars et al., “AI Tools in Society: Impacts on Cognitive Offloading and the Future…,” MDPI, 2024.
  3. “Increased AI use linked to eroding critical thinking skills,” Phys.org, 2025.
  4. “AI tools may weaken critical thinking skills by encouraging cognitive off-loading,” PsyPost, 2024.
  5. “AI: The good, the bad, and the scary,” Engineering Magazine, Virginia Tech, 2023.
  6. “Thinking with AI: Pros and Cons,” NYU SPS Metaverse Blog, 2024.
  7. “Artificial Intelligence and Cognitive Biases: A Viewpoint,” Journal of Innovation Economics, 2024.
  8. “AI in the hands of imperfect users,” PMC, 2023.
  9. Nicholas Carr, “Is Google Making Us Stupid?,” The Atlantic, 2008.
  10. “Revisiting the Consequences of Relying Too Much on AI,” BuckleyPlanet, 2025.
