eval-driven ai development (7/9)
assertions on ai output, not vibes
Cursor wrote an eval that checks whether the model mentioned Paris
somewhere in its answer. It used exact equality, which fails because
the model returned a full sentence rather than the bare word. Switch
the check to a contains comparison so it passes.
Expected output:
pass
The break is on line 4 — but read the whole snippet first.
⌘↵ runs the editor.