When stuck, screenshot.
- 01 The idea
- 02 The example
- 03 The drill
- 04 The artifact
“The tools are changing too quickly for any fixed tutorial to stay current.”
The pattern repeats. People evaluate AI tools the way they evaluate SaaS — by feature lists and demo videos. That works for software where the value is consistent. It does not work for tools where the value depends on whether the model can do your work.
Here’s a 10-minute test you can run on any new tool, before you commit calendar time to it. Pull up one piece of work you actually have to ship this week — not a contrived prompt — and run the four-step check below.
- Ask it to do the boring part of your work, not the showcase part.
- Read the output as if your boss wrote it. Would you ship?
- Time your second attempt — does it make you faster, or just produce more?
- Imagine using it every day for two weeks. Friction or relief?
