Debugging broken AI output

The Tuesday-afternoon investigation
The lessons in this chapter are still being authored; the orientation step you're reading is a placeholder until the interactive content ships.
When the chapter ships you'll work through:
- A real broken-output trace from a customer-support AI feature. You'll spot the moment of failure and classify it.
- The four breakage classes (retrieval, prompt, true hallucination, downstream mangling) with a real example of each.
- The 20-minute debugging checklist you'll run every time an AI feature surprises a user.
- The four-question post-mortem template, and how to fold the failure back into your eval suite.
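The four breakage classes lend themselves to a triage order: check the inputs and the pipeline before blaming the model. Here is a minimal sketch of that idea; the `trace` dict and its keys are hypothetical stand-ins for whatever your logging actually captures, not an API from this course.

```python
from enum import Enum


class BreakageClass(Enum):
    """The four breakage classes named in the chapter outline."""
    RETRIEVAL = "retrieval"                # wrong or missing context was fetched
    PROMPT = "prompt"                      # the instructions or template were at fault
    HALLUCINATION = "true_hallucination"   # the model invented content despite good inputs
    DOWNSTREAM = "downstream_mangling"     # post-processing corrupted a good answer


def classify(trace: dict) -> BreakageClass:
    """Hypothetical triage: rule out pipeline causes before concluding hallucination."""
    if not trace.get("retrieved_docs"):
        return BreakageClass.RETRIEVAL
    if trace.get("prompt_template_error"):
        return BreakageClass.PROMPT
    if trace.get("model_output_ok") and not trace.get("final_output_ok"):
        return BreakageClass.DOWNSTREAM
    return BreakageClass.HALLUCINATION
```

The ordering matters: a retrieval failure often *looks* like a hallucination from the outside, so a classifier (human or scripted) should eliminate the upstream causes first.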
For now, read the chapter overview above and come back when the lesson plan goes live.