
The Gottfredson-Cavalier
Human-AI Task Allocation Matrix

A decision framework for L&D professionals to systematically allocate human vs. AI involvement across every task.
By Conrad Gottfredson & Josh Cavalier
Co-creator, 5 Moments of Need  ·  Author, Applying AI in Learning & Development (ATD Press)


The Human-AI Task Scale

[Interactive matrix: Level of Difficulty (1-7) plotted against Critical Impact of Failure (1-7). Cells range from "Very Limited Response" through "Fairly Limited Response" and "Fairly Significant Response" to "Most Significant Response." Sliders adjust AI Capability Maturity and Organizational Risk Exposure (1 = Low, 7 = High), shifting the allocation between more human control and more AI autonomy.]

How the mapping works

The Human-AI Task Scale level for each cell is derived from the combined risk signal of the two Gottfredson axes. Impact of failure is weighted slightly heavier than difficulty, because the consequences of real-world application, not the ease of the task, should drive the investment in human oversight.
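One way to read that mapping as code. This is a minimal sketch, not the authors' published matrix: the 60/40 weighting and the level direction (1 = full human control, 7 = full AI autonomy) are assumptions for illustration.

```python
def task_scale_level(difficulty: int, impact: int) -> int:
    """Map a (difficulty, impact) cell, each scored 1-7, to a
    Human-AI Task Scale level.

    Assumed direction: 1 = full human control, 7 = full AI autonomy.
    """
    if not (1 <= difficulty <= 7 and 1 <= impact <= 7):
        raise ValueError("both axes are scored 1-7")
    # Impact of failure weighs slightly heavier than difficulty
    # (the 60/40 split is an assumption, not a published weight).
    combined_risk = 0.6 * impact + 0.4 * difficulty
    # Higher combined risk -> less AI autonomy.
    return 8 - round(combined_risk)
```

Under these assumptions, a trivial low-stakes task (1,1) maps to maximum autonomy, a complex catastrophic one (7,7) to full human control, and an easy but high-impact task lands closer to human control than a hard, low-impact one.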

The Four Decision Principles

1. Impact dominates. A simple task with catastrophic consequences still requires human-guided execution. The response must match the cost of failure, not the ease of the task.

2. Difficulty modulates autonomy downward. Complex tasks require expert discernment to evaluate AI outputs. A Level 5 allocation only works when the reviewer can actually assess correctness.

3. The diagonal is the decision boundary. Tasks along the diagonal (1,1 → 7,7) map clearly. Off-diagonal tasks require the most nuanced discernment about which axis matters more.

4. Quadrant alignment reveals misjudgments. The biggest mistake: automating high-impact "easy" tasks because they feel simple. Easy does not mean safe to automate.
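Principle 1 ("impact dominates") can be sketched as a guard-rail that overrides any weighted average. The thresholds and the level direction (1 = full human control, 7 = full AI autonomy) are assumptions for illustration, not part of the published framework.

```python
def apply_impact_dominance(level: int, impact: int) -> int:
    """Clamp an allocation level by impact of failure.

    Assumed direction: 1 = full human control, 7 = full AI autonomy.
    Thresholds below are hypothetical.
    """
    if impact >= 6:           # catastrophic consequences:
        return min(level, 2)  # human-guided execution at most
    if impact >= 4:           # serious consequences:
        return min(level, 4)  # human review required
    return level
```

The clamp is what makes an "easy" high-impact task safe: no matter how generous the base allocation, a catastrophic impact score forces it back toward human-guided execution.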

Using this in practice

Score each task on both axes. Look up the level. Then adjust for organizational risk tolerance, AI maturity, and regulatory exposure. When in doubt, shift one level toward more human control.
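The steps above can be sketched as a single function. The lookup weighting, the adjustment rules, and the level direction (1 = full human control, 7 = full AI autonomy) are all assumptions; the slider names mirror those on the interactive matrix (1 = Low, 7 = High).

```python
def allocate(difficulty: int, impact: int,
             ai_maturity: int, org_risk: int,
             uncertain: bool = False) -> int:
    """Return an allocation level (assumed: 1 = full human control,
    7 = full AI autonomy), scoring every input 1-7."""
    # 1. Look up the base level from both Gottfredson axes
    #    (impact assumed weighted slightly heavier, 60/40).
    level = 8 - round(0.6 * impact + 0.4 * difficulty)
    # 2. Adjust: immature AI capability or high organizational
    #    risk exposure each shift one level toward human control.
    if ai_maturity <= 3:
        level -= 1
    if org_risk >= 5:
        level -= 1
    # 3. When in doubt, shift one more level toward human control.
    if uncertain:
        level -= 1
    return max(1, min(7, level))
```

For example, an easy low-stakes task with mature AI and low organizational risk keeps a high-autonomy allocation, while the same lookup under immature AI and high risk exposure drops two levels toward human control.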

AI Intensive Workshop

A hands-on program where your team scores their actual tasks on the matrix and builds an AI implementation roadmap grounded in Gottfredson's risk methodology.

Register Now

The Human-AI Task Scale © Josh Cavalier, licensed under CC BY-SA 4.0
The Change Response Matrix © Conrad Gottfredson