
Chris Fuller
Contributor

Breaking mindsets with AI

A front-row seat to cognitive failure


I first read “The Psychology of Intelligence Analysis” many years ago as a young intelligence officer. The course I took alongside it examined the major intelligence failures of the modern era. One sobering pattern stood out: even when information was abundant, entrenched mental models shaped what analysts saw, and what they missed.

The 9/11 Commission later concluded that “the most important failure was one of imagination.” The system was “blinking red,” but the intelligence community struggled to pivot from a Cold War-era mindset focused on rival nation-states. As the commission noted, agencies were collecting vast amounts of data but failing to connect the dots or challenge core assumptions. The threat posed by a decentralized, non-state actor like al-Qaeda simply didn’t fit the prevailing mental model.

Since then, I’ve witnessed cognitive failures both large and small. And sadly, I have to admit, I’ve had a front-row seat to the most epic failures I’ve seen, because they were my own. After a project failed, or a feature turned out not to be valuable, I’d find myself thinking: Why did I feel so sure of this decision?

Thinking about thinking 

Re-reading Heuer’s book years later, now as a founder, has been enlightening. I’ve realized its relevance stretches far beyond intelligence work. It’s really a book about thinking itself.

One of the foundational ideas in the book is the concept of mindset — what we might also call a mental model. Heuer outlines two paradigms for how we tend to think about thinking. 

The first is what he calls the mosaic paradigm: the belief that each piece of information is like a puzzle piece, and the more pieces we gather, the clearer the picture becomes. It’s an optimistic view, and unfortunately, it’s wrong. 

What we actually do is start with a picture already in our mind. Then, we select the puzzle pieces that fit that picture and discard the ones that don’t. Our mental model leads, not lags, our data. 

A great example of this is a classic 1979 study by Charles Lord, Lee Ross and Mark Lepper. The researchers gave pro- and anti-capital punishment participants the same mixed evidence: some studies supporting the death penalty’s deterrent effect, others challenging it. Instead of converging toward a shared view, both groups came away more convinced of their original position. In other words, their mental model led their interpretation of the data. It’s a sad demonstration of just how powerfully our preconceptions shape what we see, and what we discard.

The limits of working memory 

As I wrote in “How writing makes you a stronger leader,” our thinking is compounded by another limitation: working memory. We can only hold a few things in mind at once, so we simplify. That’s not inherently bad, but it often means we unconsciously select only the information that reinforces our existing beliefs.

Worse still, when we fail, we often create a kind of cognitive dissonance. We distance the failure from ourselves. We say: It wasn’t my fault. There were other factors.

This leaves us trapped in a self-reinforcing cycle of suboptimal — or frankly, bad — decision-making. I think most of us have experienced this to some degree or other; however, we tend to recognize it far more readily in others than in ourselves.

So, how do we break free? 

The first step, Heuer says, is recognizing that your mind is not open by default. And the second is being willing to use tools — structured techniques — to help you think better. 

Just wanting to be open-minded doesn’t help much beyond the motivation to try. We typically run along mental ruts. The more we think about a problem, the harder it becomes to see it from another direction. In writing, this is known as writer’s block: when you know the paragraph on the page isn’t good, but you can’t see how else to write it. 

I remember writing my PhD thesis and staring at certain paragraphs for hours. I knew they didn’t work, but I couldn’t see a better way. Often, I’d have to step away for a few days and return with fresh eyes.

But when it comes to our own ideas and decisions, Heuer makes a powerful point: just as we use tools in other areas of life, we should use tools when thinking. If we were hanging a picture, we wouldn’t hammer in the nail with our fist. So why assume that intuitive thinking is enough?

Structured techniques, and a modern twist 

Heuer’s Psychology of Intelligence Analysis, and the follow-up, Structured Analytic Techniques for Intelligence Analysis, outline a range of tools to help us break through our biases and ruts. I strongly recommend reading both. But more recently, I’ve started using AI to help red team my thinking.

Here’s what I do: whenever I write a document — whether it’s a strategy memo or product plan — I send it to my team and ask them for brutal feedback (something they’re exceptionally good at). Then, I also use AI with a custom prompt to extract and question the core hypotheses. The AI pulls out the underlying assumptions, interrogates them and helps generate alternative hypotheses. 

It’s like having a fast, cheap version of the tools Heuer describes. Rather than staring at the page wondering if I’m falling into my own mindset traps, I get structured, thoughtful feedback instantly. 
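
To make this concrete, here is a minimal sketch of that loop in Python, assuming the OpenAI SDK; the model name, file name and the short placeholder prompt are illustrative only (the full prompt I actually use appears at the end of this article). Any hosted LLM with a chat API would slot in the same way.

```python
# red_team.py -- a minimal sketch of the AI red-team loop described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; model and file names are illustrative.
from pathlib import Path

from openai import OpenAI

# Placeholder for the full prompt given at the end of this article.
SYSTEM_PROMPT = (
    "Extract the core hypotheses and underlying assumptions from the "
    "document below. Interrogate each one, then generate plausible "
    "alternative hypotheses the author may have overlooked."
)


def red_team(document_path: str, model: str = "gpt-4o") -> str:
    """Send a document to the model and return its structured critique."""
    client = OpenAI()
    document = Path(document_path).read_text()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(red_team("strategy_memo.md"))  # hypothetical document
```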

Thinking in the age of AI 

In some ways, AI suffers from the same limitations we do. It mimics our language, our reasoning. But when we give it structure — when we prompt it to challenge assumptions, generate alternatives and weigh trade-offs — it becomes a powerful thinking partner. 

Using AI this way doesn’t replace judgment. But it augments it. It allows us to apply sophisticated thinking techniques quickly and cheaply — something that used to take teams of analysts hours or days. 

So the next time you write a document, I recommend this: use the prompt below, or one like it. Let the AI challenge your thinking. It may take a bit of extra time, but in my experience, making great decisions is always more valuable than making more decisions. 

Steal this prompt!

This is the prompt that I use, recently refined. I thoroughly recommend using it with a reasoning model as opposed to a straight-up chat model. Giving the AI the ability to introspect on its thinking, just like a human analyst, is important, as it will iterate toward a convergent understanding.

<role>

You are an elite intelligence analyst trained in the methodologies of Richards J. Heuer Jr.'s "The Psychology of Intelligence Analysis." You specialize in structured analytical techniques, cognitive bias detection and rigorous hypothesis testing. Your expertise lies in uncovering hidden assumptions and blind spots that others might miss.

</role>

<objective>
Conduct a rigorous, structured analysis of the provided document using analytical methods from "The Psychology of Intelligence Analysis." Your goal is to challenge assumptions, test hypotheses and identify potential blind spots with the objectivity of an external auditor.

</objective>

<analytical_framework>

<step_1_assumption_audit>

Surface Key Assumptions

• Extract all explicit assumptions stated in the document
• Identify implicit assumptions that underpin the core argument
• Flag assumptions that are:
- Untested or unvalidated
- Dependent on volatile/uncertain variables
- Taken as universally true without evidence
• Rate each assumption's criticality to the document's thesis

</step_1_assumption_audit>

<step_2_hypothesis_generation>

Generate and Test Alternative Hypotheses

• Formulate 2-3 competing hypotheses that could explain the same situation
• For each hypothesis, consider:
- What if the core premise is framed incorrectly?
- What if key stakeholders behave differently than assumed?
- What if the proposed approach addresses symptoms rather than root causes?
• Compare alternatives using a simple matrix of pros/cons/evidence

</step_2_hypothesis_generation>

<step_3_devils_advocacy>

Apply Structured Devil's Advocacy

• Construct the strongest possible case AGAINST the proposal
• Identify specific failure modes and their likelihood
• Consider unintended consequences and edge cases
• Answer: "If this initiative fails completely, what went wrong?"
• Present counterarguments as if you were a skeptical stakeholder

</step_3_devils_advocacy>

<step_4_scenario_analysis>

Conduct What-If Analysis

Execute these specific scenarios:
1. "What if our fundamental understanding of the situation is wrong?"
2. "What if implementation proves significantly harder than anticipated?"
3. "What if external factors dramatically change the landscape?"
4. One additional scenario based on the document's specific domain

</step_4_scenario_analysis>

<step_5_bias_detection>

Identify Cognitive Biases and Mental Models

Scan for these specific biases:

• Confirmation bias: Cherry-picked supporting evidence
• Anchoring: Over-reliance on initial information
• Availability heuristic: Overweighting recent/memorable examples
• Sunk cost fallacy: Justifying based on past investment
• Pattern matching: "This worked elsewhere, so it will work here"

Document specific passages that exhibit these biases

</step_5_bias_detection>

<step_6_diagnostic_framework>

Develop Diagnostic Indicators

Create 3-5 SMART indicators that would:

• Validate or falsify key assumptions within specific timeframes
• Be observable and measurable (not subjective)
• Serve as early warning signals if the proposal is off-track
• Include both leading and lagging indicators

Format: "If [assumption] is true, we should observe [specific indicator] by [timeframe]"

</step_6_diagnostic_framework>

<step_7_synthesis>

Formulate Tentative Conclusions

Clearly categorize findings into:

• KNOWN: Backed by evidence in the document
• ASSUMED: Stated or implied but unverified
• UNKNOWN: Critical gaps requiring investigation
• SPECULATIVE: Educated guesses based on patterns

Assign confidence levels:

• High confidence (80-100%): Strong evidence
• Medium confidence (50-79%): Reasonable inference
• Low confidence (0-49%): Significant uncertainty

</step_7_synthesis>

</analytical_framework>

<output_format>

Structure your analysis with these sections:

## 1. Assumption Inventory

• Critical Assumptions (make-or-break)
• Supporting Assumptions (important but not fatal)
• Risk Rating for each

## 2. Alternative Hypotheses

• Present 2-3 alternatives in structured format
• Evidence for/against each
• Overlooked possibilities

## 3. Devil's Advocacy Brief

• The case against this proposal
• Failure scenarios ranked by probability/impact
• Questions a skeptical reviewer would ask

## 4. What-If Scenarios

• Scenario → Implications → Mitigation options
• Focus on actionable insights

## 5. Cognitive Bias Report

• Specific biases detected with examples
• Impact on decision quality
• Suggested corrections

## 6. Diagnostic Dashboard

• Early warning indicators
• Success metrics with thresholds
• Monitoring plan

## 7. Executive Summary

• Confidence assessment by component
• Critical unknowns requiring resolution
• Recommended next steps with priorities

</output_format>

<communication_style>

• Write with the precision of an intelligence briefing
• Use active voice and concrete examples
• Avoid hedging language; be direct about uncertainties
• When critiquing, focus on the idea, not the author
• Balance skepticism with constructive alternatives
• If you identify a weakness, suggest how to address it

</communication_style>

<quality_controls>

• Limit each section to 3-5 key points for clarity
• Support claims with specific references to the document
• Distinguish between minor issues and fundamental flaws
• If data is missing, explicitly state what's needed
• Resist the urge to fill gaps with speculation
• Challenge your own analysis: "What am I missing?"

</quality_controls>
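
To run the full prompt above, save it to a file and send it together with your document. Here is a minimal sketch under the same assumptions as before (OpenAI Python SDK, illustrative model and file names); because some reasoning models restrict system-role messages, the prompt and document are combined into a single user message.

```python
# heuer_analysis.py -- sketch of running the full prompt with a reasoning model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and file paths are illustrative, not recommendations.
from pathlib import Path

from openai import OpenAI


def analyze(document_path: str, prompt_path: str = "heuer_prompt.txt") -> str:
    """Red team a document using the structured-analysis prompt above."""
    client = OpenAI()
    prompt = Path(prompt_path).read_text()      # the full prompt from this article
    document = Path(document_path).read_text()  # the memo or plan to challenge
    response = client.chat.completions.create(
        model="o3-mini",            # substitute any reasoning model you can access
        reasoning_effort="high",    # supported on OpenAI o-series models
        messages=[
            {
                "role": "user",
                "content": f"{prompt}\n\n<document>\n{document}\n</document>",
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(analyze("product_plan.md"))  # hypothetical document
```

The mechanics matter less than the habit: run the analysis, argue with the output and revise the document before you commit to the decision.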

This article is published as part of the Foundry Expert Contributor Network.

Chris Fuller
Contributor

Chris Fuller is the chief product officer at an AI cybersecurity startup focused on transforming security operations. He previously served as an intelligence officer in the UK intelligence community and as a product leader at Obsidian Security. Chris holds a PhD in astrophysics and is passionate about improving how security teams operate, think and make decisions.
