
Beyond GenAI: Introducing Questioning AI (QuestAI) to Reverse Cognitive Decline

Co-written by Stefan Kløvning and Sebastian Knørr. Originally published on Medium.


Introduction: When Thinking Is Outsourced

If you use AI frequently for your work, you may have been tempted to leave the thinking and strategising to AI as well. The grand promise of Generative AI (GenAI) has been to take over manual and repetitive tasks – freeing humans for vision and strategy – but without careful use, dependence on AI for execution can creep into strategic thinking too.

This article explores how to counter that drift by reimagining AI as a tool for active reasoning rather than passive consumption and automation.

The Research: Cognitive Offloading and Critical Thinking

New research from 2025 documents cognitive offloading – the tendency to transfer not just tasks but the thinking behind them to AI. A Microsoft study found that greater confidence in GenAI correlates with less scrutiny of its outputs, whereas people with higher self-confidence tend to evaluate AI more critically. Who or what you trust affects whether you stay analytically engaged.

Implicit in these findings are critical questions for self-reflection: how should AI be used so that it does not dilute the cognitive and creative abilities of human workers?

Why AI Hallucinates, and Why It Matters

Hallucinations – the tendency of GenAI to fabricate answers when it does not know them – have been shown by OpenAI research to arise from statistical pressures in training and evaluation, even when the training data contains no errors.

AI behaves much like an exam-taker guessing under uncertainty: rewarded for sounding right rather than admitting ignorance. Consequently, GenAI keeps producing probabilistic answers that sound right but may have no basis in reality.

The Psychological Trap of Uncritical Use

The brain's intrinsic tendency toward simplification makes cognitive offloading a natural response to AI's convenience – and mere admonitions to think critically rarely solve it. The effort to counteract this must address the convenience that makes cognitive offloading so appealing, rather than responding to it symptomatically.

Two Fronts for Action: Mindset and Technology

There are two critical factors:

  1. Psychological: cultivating a mindset compatible with critical reasoning when using AI
  2. Technological: developing AI models that address both their own limitations (hallucination) and those of the human mind (cognitive offloading)

Because human psychology evolves slowly, the larger opportunity lies in designing AI that enhances, rather than exploits, our cognitive shortcuts.

From Generative AI to Questioning AI (QuestAI)

In 2017, the Future of Life Institute developed the Asilomar AI Principles – one of the earliest and most influential governance frameworks for AI. Despite being endorsed by central figures including Sam Altman and Elon Musk, the principles have largely remained on paper and have not been sufficient to mitigate the damaging cognitive effects that recent research now documents.

We propose a pivot from Generative AI to Questioning AI (QuestAI). Companion by Com2.ai presents QuestAI as the alternative, defined as:

A framework of artificial intelligence designed to stimulate human reasoning through critical inquiry, Socratic dialogue, and enforced verification, rather than passive content delivery.

Building AI That Promotes Thought

The core of QuestAI is building AI systems that optimise human thinking – introducing explicit and implicit measures to encourage awareness and engagement when overreliance is detected.
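The article leaves the implementation of such measures open. As a purely illustrative sketch – the class name, window size, and threshold below are assumptions, not Companion's actual mechanism – an assistant could track how often recent AI suggestions are accepted without edits, and trigger a reflection prompt once that rate climbs past a threshold:

```python
from collections import deque


class OverrelianceMonitor:
    """Hypothetical heuristic: flag users who accept AI suggestions
    verbatim too often within a recent window of interactions."""

    def __init__(self, window: int = 20, threshold: float = 0.8):
        self.events = deque(maxlen=window)  # True = suggestion accepted unedited
        self.threshold = threshold

    def record(self, accepted_unedited: bool) -> None:
        self.events.append(accepted_unedited)

    def should_prompt_reflection(self) -> bool:
        # Only act once a full window of observations is available
        if len(self.events) < self.events.maxlen:
            return False
        rate = sum(self.events) / len(self.events)
        return rate >= self.threshold
```

When the monitor fires, the assistant could surface an explicit awareness prompt instead of the next answer – the kind of implicit-to-explicit escalation the paragraph above describes.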

As one 2025 study concluded, structured prompting promotes rather than hampers engagement and critical thinking, providing:

"a scalable and low-cost governance tool that fosters responsible adoption, supports equitable access to technological benefits, and aligns with societal calls for human-centric AI."

Designing QuestAI in Practice

When Com2.ai built the prompting framework for Companion, the team grounded their work in the HAICEF evaluation approach, translating high-level principles into concrete safeguards:

  • Safety – crisis detection, refusal behaviours, and escalation paths to human support
  • Fairness – bias audits and testing across diverse user scenarios
  • Trustworthiness – transparency about the assistant's limits and provenance for factual claims
  • Usefulness – iterative measurement of task success and user satisfaction
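HAICEF is described here only at the level of principles. A minimal, hypothetical sketch of how the safety and trustworthiness safeguards could gate an assistant's replies might look like the following (all names, fields, and messages are illustrative assumptions, not Com2.ai's implementation):

```python
from dataclasses import dataclass, field


@dataclass
class Reply:
    text: str
    sources: list[str] = field(default_factory=list)  # provenance for factual claims
    crisis_detected: bool = False  # set by an upstream crisis classifier


def apply_safeguards(reply: Reply) -> str:
    # Safety: refuse and escalate to human support on crisis signals
    if reply.crisis_detected:
        return "This may need human support; escalating to a human counsellor."
    # Trustworthiness: be transparent when a claim lacks provenance
    if not reply.sources:
        return reply.text + " (Note: no sources attached; please verify independently.)"
    return reply.text
```

The point of the sketch is the ordering: safety checks run before delivery, and transparency notices are attached rather than silently omitted.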

Operationalising QuestAI: The Next Phase for Companion

Building on the HAICEF framework, the next phase for Companion will operationalise QuestAI principles as concrete development priorities. We have begun onboarding testers and establishing an independent ethics board for oversight. Controlled user studies and red-teaming cycles will measure impacts across all four HAICEF dimensions, supported by rigorous audit trails.

Tactical Interventions

To translate QuestAI theory into practice, we are piloting:

  • Awareness checkpoints that prompt reflection before accepting AI suggestions
  • Proof before provision – requiring evidence or minimal user verification before recommendations
  • Bias stress testing through Socratic-style questioning
  • Nudges toward active problem-solving instead of passive offloading
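"Proof before provision" could take many forms. One illustrative sketch – function and exception names are hypothetical, not Companion's API – is a wrapper that refuses to release a recommendation unless at least one piece of supporting evidence is attached:

```python
class MissingEvidenceError(ValueError):
    """Raised when a recommendation is requested without any supporting evidence."""


def provide_recommendation(text: str, evidence: list[str]) -> str:
    """Only release a recommendation once supporting evidence exists."""
    if not evidence:
        raise MissingEvidenceError(
            "No evidence supplied; ask the user to verify or attach sources first."
        )
    cited = "; ".join(evidence)
    return f"{text} (supported by: {cited})"
```

In a real system the caught exception would route the user into a verification step rather than simply failing.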

Each tactic will be evaluated with quantitative metrics (acceptance rates, engagement levels, safety violation rates), while qualitative feedback from testers and ethics board reviews will refine the governance principles.
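The quantitative metrics named above can be computed from simple interaction logs. A hedged sketch – the event schema and field names are assumptions for illustration only:

```python
def summarise_pilot(events: list[dict]) -> dict:
    """Compute acceptance rate, average engagement, and safety-violation
    rate from a list of interaction events (schema is illustrative)."""
    total = len(events)
    if total == 0:
        return {"acceptance_rate": 0.0, "avg_engagement": 0.0, "violation_rate": 0.0}
    accepted = sum(e["accepted"] for e in events)          # suggestion taken up
    avg_turns = sum(e["turns"] for e in events) / total    # proxy for engagement
    violations = sum(e["safety_violation"] for e in events)
    return {
        "acceptance_rate": accepted / total,
        "avg_engagement": avg_turns,
        "violation_rate": violations / total,
    }
```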

Conclusion: AI as Co-Thinker, Not Substitute

By adopting QuestAI principles, AI vendors can help protect human agency. Regular users should build the habit of questioning AI outputs so that tools serve as co-thinkers, not substitutes for thought.

The next frontier of AI innovation is not about making machines think faster – it is about empowering humans to think deeper and more independently.

