The Gottfredson-Cavalier
Human-AI Task Allocation Matrix
Co-creator, 5 Moments of Need · Author, Applying AI in Learning & Development (ATD Press)
Explore the Matrix
Click any cell. Use the tabs to score tasks, model scenarios, or review decision logic.
How the mapping works
Each cell is assigned a Human-AI Task Scale level based on the combined risk signal from the two Gottfredson axes. Impact of failure is weighted slightly more heavily than difficulty, because the real-world cost of failure, not the ease of the task, should drive the investment in human oversight.
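As a minimal sketch, the mapping can be modeled as a weighted combination of the two axis scores. The function name and the specific weights below are illustrative assumptions, not the published methodology; the only properties taken from the text are that impact weighs slightly heavier than difficulty and that the diagonal maps 1,1 to level 1 and 7,7 to level 7.

```python
def allocation_level(difficulty: int, impact: int,
                     w_impact: float = 0.6, w_difficulty: float = 0.4) -> int:
    """Combine two 1-7 Gottfredson axis scores into a 1-7 scale level.

    Weights are hypothetical; impact of failure is weighted slightly
    heavier than difficulty, per the mapping rule above. The diagonal
    is preserved: (1,1) -> level 1 and (7,7) -> level 7.
    """
    combined = w_impact * impact + w_difficulty * difficulty
    # Round to the nearest level and clamp to the 1-7 range.
    return max(1, min(7, round(combined)))
```

With these weights, a low-difficulty/high-impact task lands at a higher (more risk-laden) level than its high-difficulty/low-impact mirror, which is exactly the asymmetry the rule describes.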
The Four Decision Principles
1. Impact dominates. A simple task with catastrophic consequences still requires human-guided execution. The response must match the cost of failure, not the ease of the task.
2. Difficulty modulates autonomy downward. Complex tasks require expert discernment to evaluate AI outputs. A Level 5 allocation only works when the reviewer can actually assess correctness.
3. The diagonal is the decision boundary. Tasks along the diagonal (1,1 → 7,7) map clearly. Off-diagonal tasks require the most nuanced discernment about which axis matters more.
4. Quadrant alignment reveals misjudgments. The biggest mistake: automating high-impact "easy" tasks because they feel simple. Easy does not mean safe to automate.
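Principle 1 can be sketched as an override on top of any base lookup: past a high-impact threshold, the allocation is forced toward human-guided execution regardless of how easy the task is. The threshold and floor values are hypothetical, and the sketch assumes (consistent with the 1,1 → 7,7 diagonal) that higher level numbers correspond to more human oversight.

```python
def apply_impact_dominance(base_level: int, impact: int,
                           high_impact: int = 6, floor_level: int = 6) -> int:
    """Principle 1 sketch: impact dominates.

    When impact of failure is at or above `high_impact`, raise the
    allocation to at least `floor_level` (more human control), no
    matter how simple the task is. Both thresholds are illustrative.
    """
    if impact >= high_impact:
        return max(base_level, floor_level)
    return base_level
```

This is how an "easy" task with catastrophic consequences avoids being quietly automated: its ease may produce a low base level, but the impact override pins it back.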
Using this in practice
Score each task on both axes. Look up the level. Then adjust for organizational risk tolerance, AI maturity, and regulatory exposure. When in doubt, shift one level toward more human control.
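The adjustment step above can be sketched as a simple shift applied after the lookup. The parameter names are hypothetical, and the sketch assumes that "more human control" means shifting the level upward, consistent with the diagonal mapping.

```python
def adjust_for_context(base_level: int, *,
                       low_risk_tolerance: bool = False,
                       low_ai_maturity: bool = False,
                       regulated: bool = False,
                       uncertain: bool = False) -> int:
    """Shift a looked-up level toward more human control.

    Each organizational risk factor (all illustrative) adds one level,
    including the "when in doubt" shift; the result is capped at 7.
    """
    shift = sum([low_risk_tolerance, low_ai_maturity, regulated, uncertain])
    return min(7, base_level + shift)
```

For example, a level-3 task in a regulated environment where the team is unsure would be adjusted to level 5 before any allocation decision is made.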
AI Intensive Workshop
A hands-on program where your team scores their actual tasks on the matrix and builds an AI implementation roadmap grounded in Gottfredson's risk methodology.
Register Now