Why thinking clearly is getting harder in the algorithm age

Human behaviour, cognitive load, and the rise of AI reasoning tools

Summary

Humans evolved to make fast decisions in complex environments. However, modern digital systems — particularly social media algorithms and AI tools — interact with those instincts in ways that amplify both our strengths and our weaknesses.

Over time this interaction has produced a subtle but powerful shift: people increasingly offload thinking to external systems without understanding how those systems shape outcomes.

The Better Thinking project emerged from recognising this pattern and asking a simple question:

How can structured reasoning tools help people stay in control of their own thinking?

This article explains the behavioural dynamics involved, the risks of unmanaged cognitive offloading, and why public-facing reasoning tools such as C-it and RESOLVE were developed.


Anchor thesis

The modern internet does not optimise for reasoning. It optimises for reaction. Algorithms reward content that triggers fast emotional responses, while careful analysis spreads more slowly.

Over time this creates what can be called the reaction–reasoning gap: an environment where reaction dominates reasoning.

The Better Thinking project explores whether simple reasoning tools can restore the missing step between reaction and analysis.

Importantly, the goal is not to turn everyone into analysts. Human societies appear to operate with a fairly stable distribution of thinking behaviour: most people react quickly using heuristics, a smaller group performs occasional analysis, and a very small minority engage in deeper system-level reasoning.

Better Thinking does not try to change that distribution. Instead it introduces a pause layer that prevents instant reaction and encourages clarification before claims spread.


1. Human cognition was never designed for modern information systems

Human thinking operates through two broad modes.

Fast thinking

  • rapid pattern recognition
  • low effort
  • emotionally driven
  • useful for survival and everyday decisions

Slow thinking

  • deliberate reasoning
  • structured analysis
  • higher cognitive cost
  • required for complex decisions

Slow thinking is cognitively expensive. Humans therefore naturally try to reduce cognitive load whenever possible. This behaviour is not a flaw; it is an efficiency strategy.

However, it creates a vulnerability. When the environment is engineered to capture attention or shape behaviour, people may rely on fast heuristics in situations that actually require deeper reasoning.


2. Why humans rely on mental shortcuts

Human brains are extremely powerful, but deliberate reasoning is costly. Analysing a claim properly requires effort: holding several facts in mind, comparing explanations, checking evidence, and imagining consequences.

Because this type of thinking consumes mental energy, humans naturally rely on shortcuts most of the time.

These shortcuts are known as heuristics.

A heuristic is a quick rule of thumb the brain uses to make a judgement without analysing the full problem.

Examples include:

  • trusting a familiar source
  • agreeing with people who share our values
  • rejecting information that conflicts with existing beliefs
  • assuming something popular must contain some truth

Heuristics are not a flaw. They are a survival feature that allows humans to make rapid decisions without exhausting cognitive resources.

However, heuristics evolved for small social environments. The modern digital information environment is vastly larger and more complex.


3. Why algorithms amplify shortcut thinking

Digital platforms did not intentionally design systems to weaken reasoning. Their goal was simply to maximise engagement.

However, engagement algorithms quickly discovered something important about human behaviour: content that triggers fast emotional reactions spreads far more quickly than content that requires careful thought.

As a result, many algorithmic systems now reward content that:

  • provokes outrage
  • confirms identity or group beliefs
  • produces instant agreement
  • creates surprise or shock

These responses rely heavily on heuristics rather than slow reasoning.

This produces a reinforcing loop:

  1. Humans prefer low mental effort.
  2. Algorithms reward fast emotional reactions.
  3. Heuristic responses spread faster than careful analysis.
  4. Reaction becomes more visible than reasoning.

Over time the information environment shifts toward rapid reaction rather than structured analysis.


4. The claim-sharing problem

One of the most visible effects of this dynamic is the rapid spread of claims online.

A typical pattern looks like this:

  1. A claim appears in a post or video.
  2. The claim triggers an emotional response.
  3. The content is shared before it is examined.
  4. The claim spreads through networks faster than verification can occur.

This is not primarily a misinformation problem.

It is a thinking workflow problem.

Most people simply do not have a structured process for evaluating claims.


5. The cognitive load paradox

The digital environment both increases information volume and reduces the time people spend thinking about it.

This produces a paradox:

the more complex the information environment becomes, the more humans rely on shortcuts.

These shortcuts include:

  • trusting familiar sources
  • following social signals
  • deferring judgement to algorithms
  • accepting AI-generated summaries without examining reasoning

Historically these shortcuts came from social cues, group identity, or trusted personalities.

In the AI era a new shortcut is emerging:

“The AI said so.”

AI outputs often appear coherent, neutral, and authoritative. As a result people may treat AI responses as reliable conclusions even when the underlying reasoning is incomplete, shaped by prompt framing, or missing important perspectives.

Without a pause layer between claims and reactions, cognitive offloading may shift from social heuristics toward AI-as-heuristic, potentially amplifying the same reaction loops digital systems already encourage.


6. AI introduces a new level of cognitive offloading

Large language models dramatically expand the ability to outsource thinking tasks.

AI systems can now:

  • summarise complex topics
  • produce structured arguments
  • generate research reports
  • explore possible solutions

Used carefully, this can greatly expand human capability.

However, it also introduces new risks.

AI systems can:

  • produce confident but incorrect reasoning
  • reflect biases present in training data
  • respond differently depending on prompt structure
  • carry context and framing from earlier conversation steps

Many users assume AI outputs represent objective analysis.

In practice they are influenced by statistical patterns, context, and prompting.

This creates a new structural layer in the digital thinking ecosystem:

Algorithms optimise engagement
AI systems optimise generation
Better Thinking tools aim to optimise reasoning


7. Context contamination in long AI conversations

Another important observation emerges when working with AI tools over extended discussions.

In long conversations:

  • earlier prompts influence later responses
  • framing assumptions accumulate
  • the model may converge toward earlier ideas

Users often assume each question starts from a clean analytical state.

In reality the surrounding conversation context can subtly shape results.

For important reasoning tasks it is sometimes better to run analysis in fresh prompts or structured frameworks.
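The difference between a long-running conversation and a fresh prompt can be made concrete with a minimal sketch. It assumes a generic chat-style API in which each call receives an explicit message history; `ask` here is a hypothetical stand-in that simply records the context a model would see, not a real client.

```python
# Hypothetical sketch: reusing a long chat history versus starting fresh.
# `ask` is a stand-in for any chat-style model call; it returns the
# context the model would receive rather than a generated answer.

def ask(history, question):
    """Stand-in for a chat-model call: returns the full context it would see."""
    return history + [{"role": "user", "content": question}]

# Long-running conversation: earlier framing travels with every new question.
history = [
    {"role": "user", "content": "Assume remote work is always better."},
    {"role": "assistant", "content": "Working from that assumption..."},
]
contaminated = ask(history, "Should our team go fully remote?")

# Fresh prompt: the same question with no inherited framing.
fresh = ask([], "Should our team go fully remote?")

print(len(contaminated))  # 3 messages: the question plus inherited framing
print(len(fresh))         # 1 message: the question only
```

The point is structural: nothing in the question changed, but the model in the first case answers inside a framing the user may have forgotten they introduced.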


8. Native AI vs structured AI interaction

AI behaviour changes significantly depending on how it is used.

Three common interaction modes appear.

Native interaction

Users ask open questions and accept responses.

Advantages:

  • fast
  • convenient

Risks:

  • shallow framing
  • hidden assumptions

Structured interaction

Users guide AI through reasoning frameworks such as RESOLVE.

Advantages:

  • clearer analysis
  • explicit reasoning stages

Costs:

  • higher effort

Ensemble reasoning

Multiple prompts or approaches are used to test conclusions.

Advantages:

  • exposes inconsistencies
  • improves robustness

Costs:

  • greater time investment

Understanding these modes helps users decide when deeper reasoning is needed.
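The ensemble mode above can be sketched as running one claim through several framings and checking whether the conclusions agree. `ask_model` is a hypothetical stand-in returning canned answers for illustration; a real implementation would call an actual model API.

```python
# Minimal sketch of ensemble reasoning: one claim, several framings,
# then a consistency check. `ask_model` is illustrative only.

def ask_model(prompt):
    # Stand-in: canned answers keyed on the framing word in the prompt.
    canned = {"steelman": "yes", "counter": "no", "neutral": "yes"}
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "unclear"

framings = [
    "neutral: is claim X well supported?",
    "steelman: make the strongest case for claim X, then judge it",
    "counter: make the strongest case against claim X, then judge it",
]

answers = [ask_model(f) for f in framings]
consistent = len(set(answers)) == 1

print(answers)      # ['yes', 'yes', 'no']
print(consistent)   # False: the framings disagree, so more analysis is needed
```

Disagreement between framings is the useful signal here: it exposes that the first answer depended on how the question was asked.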


9. The governance problem

When people rely on external systems to assist thinking, an important question emerges:

who governs the reasoning process?

Without explicit structure:

  • algorithms influence what information we see
  • AI influences how issues are interpreted
  • humans may become passive recipients of conclusions

This is not necessarily intentional. It is simply the consequence of unstructured cognitive offloading.

The solution is not to reject algorithms or AI.

Instead we need transparent reasoning structures that guide how analysis occurs.


10. The role of structured reasoning tools

Structured reasoning tools act as scaffolding for human thinking. They do not replace judgement. Instead they guide thinking through defined stages.

Within the wider reasoning space around Better Thinking, two complementary public-facing routes have emerged.

C-it

C-it is a lightweight tool for clarifying a claim at the moment it is encountered.

It encourages users to step back briefly and clarify:

  • what the claim actually says
  • what assumptions are embedded
  • what evidence is implied
  • what questions need asking

The goal is not to reach a verdict.

The goal is to clarify the claim structure and create a moment of pause before sharing or reacting.
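The four clarification steps above can be sketched as a simple data structure. The field names and the `ready_to_share` gate are illustrative assumptions, not C-it's actual schema.

```python
# Illustrative sketch of a C-it-style clarification record.
# Field names are assumptions made for this example.

from dataclasses import dataclass, field

@dataclass
class ClaimCheck:
    claim: str                                        # what the claim actually says
    assumptions: list = field(default_factory=list)   # embedded assumptions
    implied_evidence: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)

    def ready_to_share(self):
        # The pause: sharing is discouraged while questions remain open.
        return not self.open_questions

check = ClaimCheck(
    claim="Remote work doubles productivity",
    assumptions=["productivity is measured the same way everywhere"],
    implied_evidence=["a comparison of remote and office teams"],
    open_questions=["which study?", "productivity of what, measured how?"],
)
print(check.ready_to_share())  # False: open questions remain
```

Note that the record never stores a verdict; it only makes the claim's structure explicit, which is exactly the pause the tool aims to create.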

RESOLVE

RESOLVE is a deeper reasoning framework designed for complex problems.

It moves through seven stages:

Reality
End state
System
Options
Logic
Value delivery
Evolve

The framework separates diagnosis from solution. It encourages users to analyse system constraints, explore multiple options, examine trade-offs, and refine thinking iteratively rather than assuming the first framing is complete.
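The seven stages can be sketched as an ordered checklist. The stage names come from RESOLVE itself; the guiding questions and the `next_stage` helper are illustrative assumptions.

```python
# Sketch of the RESOLVE stages as an ordered checklist. Stage names are
# from the framework; the guiding questions are illustrative paraphrases.

RESOLVE_STAGES = [
    ("Reality", "What is actually happening?"),           # diagnosis
    ("End state", "What outcome are we aiming for?"),
    ("System", "What constraints shape the problem?"),
    ("Options", "What courses of action exist?"),         # solution space
    ("Logic", "What trade-offs does each option carry?"),
    ("Value delivery", "How is value actually delivered?"),
    ("Evolve", "What did we learn; how should the framing change?"),
]

def next_stage(completed):
    """Return the first stage not yet completed, enforcing the order."""
    for name, question in RESOLVE_STAGES:
        if name not in completed:
            return name, question
    return None  # all stages done: one full iteration complete

print(next_stage({"Reality"}))  # ('End state', 'What outcome are we aiming for?')
```

The ordering is the substance: diagnosis stages cannot be skipped on the way to options, which enforces the separation of diagnosis from solution described above.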


11. Thinking frameworks as cognitive infrastructure

One emerging idea from this work is that reasoning frameworks may function as a form of cognitive infrastructure.

Just as roads organise transport and accounting standards organise finance, reasoning frameworks can organise how complex problems are analysed.

Structured reasoning frameworks behave similarly to other forms of infrastructure, including:

  • scientific methods
  • engineering standards
  • accounting rules

When widely used, such structures reduce ambiguity and improve communication.

The Better Thinking project explores whether simple reasoning frameworks could play a similar role for public reasoning in an AI-rich information environment.


12. Human–AI collaboration requires role clarity

Effective collaboration between humans and AI requires clear roles.

Humans should remain responsible for:

  • defining the problem
  • setting evaluation criteria
  • judging value trade-offs

AI can assist with:

  • information synthesis
  • scenario exploration
  • structured analysis

Without this division of responsibility there is a risk that judgement itself gradually shifts from humans to automated systems.


13. The real goal of Better Thinking

Better Thinking is not about controlling what people think.

It is about improving how thinking happens.

The project explores how simple reasoning structures can:

  • reduce impulsive information sharing
  • improve decision quality
  • support responsible AI use
  • strengthen public reasoning

In a world of accelerating information and increasingly powerful AI systems, thinking itself becomes a form of infrastructure.

The question is not whether humans will offload cognitive work to machines. That process is already underway.

The real question is whether we design systems that help humans think better — or systems that gradually replace thinking altogether.

Better Thinking is an attempt to explore the first path.


Deep dive: reaction, reflection, and reasoning

This section strengthens the behavioural model by clarifying what is well-established, what is inferred, and where Better Thinking adds value.

1. Three modes of thinking (descriptive, not fixed)

Human thinking online can be understood as three functional modes:

Reaction
  • What it is (evidence-aligned): fast, heuristic-based judgement (widely supported in cognitive science)
  • What it looks like today: immediate emotional responses, rapid sharing
  • Risk if unmanaged: misinterpretation, amplification of weak claims

Reflection
  • What it is: intermediate processing (less formally defined, but observable)
  • What it looks like today: occasional hesitation, informal questioning
  • Risk if unmanaged: inconsistent, often skipped

Reasoning
  • What it is: deliberate, structured analysis (System 2 thinking)
  • What it looks like today: used by analysts, experts, and in high-stakes contexts
  • Risk if unmanaged: underused due to effort cost

Importantly, these are not rigid categories or fixed population segments. Individuals move between them depending on context, incentives, and cognitive load.


2. Why reaction dominates in digital environments

The dominance of reaction is not accidental. It emerges from the interaction between:

  • human energy efficiency (preference for low-effort thinking)
  • algorithmic amplification (rewarding engagement signals)
  • information overload (too many inputs to analyse deeply)

Together, these create a structural bias toward:

fast → emotional → shareable content

This does not mean people are irrational. It means the system rewards speed over depth.


3. Cognitive offloading: what actually happens

Cognitive offloading is well established in psychology: humans use tools and environments to reduce mental effort.

Historically:

  • writing reduced memory load
  • calculators reduced arithmetic effort
  • software reduced operational complexity

In the AI era, offloading extends further into:

  • interpretation (“summarise this”)
  • reasoning (“what should I think?”)
  • decision support (“what should I do?”)

This creates a key shift:

from offloading tasks → toward offloading judgement

However, what people do with saved cognitive effort is not uniform.

Possible outcomes include:

  • productive redeployment (work, learning, problem solving)
  • passive consumption (scrolling, entertainment)
  • decision avoidance (deferring judgement to systems)

The distribution of these behaviours is still evolving and remains context-dependent.


4. The missing layer: structured reflection

Most digital systems operate on a compressed loop:

Claim → Reaction → Share

What is largely missing is a lightweight, repeatable step that introduces:

  • clarification
  • assumption awareness
  • basic structural questioning

This is where Better Thinking positions its intervention.


5. Better Thinking intervention model

Better Thinking does not attempt to change human nature or eliminate heuristics.

Instead, it introduces structure between existing behaviours:

Reaction → Pause → Reasoning

  • Pause (C-it) creates structured reflection
  • Reasoning (RESOLVE) supports deeper analysis when needed

This aligns with how cognitive systems already operate, but adds:

  • visibility (making thinking steps explicit)
  • repeatability (consistent structure)
  • transferability (usable across domains)
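As a rough sketch, the Reaction → Pause → Reasoning flow can be expressed as a routing function. The step names and the `high_stakes` switch are illustrative assumptions about how such an intervention might be wired, not a description of an existing system.

```python
# Illustrative sketch of a "pause layer" routed between reaction and sharing.

def handle_claim(claim, high_stakes=False):
    """Route a claim through pause (always) and reasoning (when warranted)."""
    steps = ["react"]            # the fast response still happens first
    steps.append("pause")        # C-it-style clarification before sharing
    if high_stakes:
        steps.append("reason")   # RESOLVE-style analysis for complex issues
    steps.append("share")
    return steps

print(handle_claim("X causes Y"))                    # ['react', 'pause', 'share']
print(handle_claim("X causes Y", high_stakes=True))  # ['react', 'pause', 'reason', 'share']
```

The design choice matches the text: reaction is not suppressed, and deep reasoning is invoked selectively rather than universally.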

6. Role differentiation in modern contexts

Across different historical contexts, the balance between these modes shifts:

Hunter–gatherer
  • Dominant constraint: energy scarcity
  • Typical behaviour pattern: efficient heuristics with occasional innovation

Industrial
  • Dominant constraint: productivity and scale
  • Typical behaviour pattern: procedural thinking and system optimisation

AI era
  • Dominant constraint: information overload and cognitive delegation
  • Typical behaviour pattern: increased offloading, variable engagement with reasoning

The key change in the AI era is not that humans think less, but that:

thinking is increasingly mediated by external systems


7. What this means in practice

For most people:

  • heuristics will remain the default
  • deep reasoning will remain selective

For complex or high-consequence issues:

  • unstructured offloading becomes risky
  • structured reasoning becomes more valuable

Better Thinking is therefore not a mass behavioural change programme.

It is a targeted cognitive scaffold for situations where:

  • the issue is complex
  • the stakes are meaningful
  • the framing is uncertain

8. Boundary conditions (what this model does NOT claim)

To avoid overreach, this model does not assume:

  • fixed population percentages (e.g. 90/9/1 as a hard rule)
  • that AI universally degrades thinking
  • that all users should adopt structured reasoning
  • that reasoning frameworks replace expertise

Instead, it proposes:

  • cognitive offloading is real and increasing
  • digital systems amplify fast thinking
  • lightweight structure can improve outcomes in specific contexts

Summary of the deep dive

Human cognition has not fundamentally changed. However, the environment in which it operates has.

As a result:

  • reaction is amplified
  • reflection is compressed
  • reasoning is under-supported

Better Thinking focuses on a narrow but important intervention:

making reflection visible and reasoning usable when it matters most.