I want to give you a diagnostic tool. One test that will tell you something true about both your AI setup and your own thinking, in under 10 minutes, every time.
Here it is: describe what you want in plain language and see what comes back.
If what comes back in the first response is worth testing — not perfect, not finished, but genuinely worth engaging with — your setup is working and your thinking is clear. If what comes back is generic, misaligned, or requires significant correction before it’s useful, one of two things is true: either your system doesn’t know you well enough, or you don’t know what you want well enough.
Most people assume it’s the first problem. In my experience, it’s usually the second.
What Vague Inputs Actually Reveal
There is a specific kind of frustration that AI tools produce in people who are new to using them seriously. The output isn’t what they wanted. The system went somewhere they didn’t intend. The response is technically reasonable but misses the point entirely.
The instinct is to blame the tool. To go looking for better prompts, better phrasing, the magic syntax that unlocks the right output. And sometimes that’s the right instinct — sometimes the tool genuinely lacks context or capability.
But more often, when I look at what someone typed and what came back and understand why they’re frustrated, the real problem is visible in the input. The ask was underspecified not because they were lazy but because they hadn’t finished thinking yet. They went to the tool at the moment of the question, not the moment of the answer.
The AI reflected that back at them. It gave them something shaped like an answer because they gave it something shaped like a question. Neither of them knew what they actually wanted.
This is the 10-minute test working as a diagnostic. When the output is vague, follow the thread backward. What were you trying to accomplish? What would a good outcome have looked like? Can you describe it in a sentence? If the answer to that last question is no, you have found the real problem — and it isn’t the AI.
The Clarity Mirror
The most underrated thing AI can do for a knowledge worker is show them where their thinking has holes.
Not by telling them — AI is generally too agreeable to tell you your thinking has holes unless you ask directly. But by producing output that reflects the shape of your input. Sharp input, sharp output. Vague input, vague output. The mirror doesn’t lie even when the tool is trying to be helpful.
I use this deliberately now. When I’m working through a problem and I’m not sure I have it right yet, I describe it to Claude before I’m ready — not to get an answer, but to see what the description looks like when it comes back. If the system can’t find a sharp handle on what I’m asking, there probably isn’t one yet. If it takes the description and runs with it confidently in a direction I didn’t intend, I learn something about where my framing was ambiguous.
The 10-minute test becomes a thinking tool, not just a productivity tool. You’re not using it to get output. You’re using it to pressure-test your own model of the problem before you commit to working on it.
This is a genuinely different relationship with the technology than most people have. Most people treat AI as an answer machine. Feed it a question, receive an answer, evaluate the answer. What I’m describing is treating it as a thinking partner — something you talk to while you’re still figuring out the question, not after.
When the Test Passes on the First Try
Let me describe what it feels like when both conditions are met — when your setup is right and your thinking is clear — because I think it is worth naming so you know what you’re building toward.
You sit down with a problem. You have a clear enough picture of the output that you can describe it in one sentence, maybe two. You type it — not a spec, not a brief, not a bulleted list of requirements — just the idea, stated plainly.
And what comes back is not just useful. It is better than what you were holding in your head. Not because the system is smarter than you, but because it took your clear idea and applied the full context of your setup — your sites, your voice, your architecture, your history — and produced something that fit your world in ways you didn’t have to specify because they were already known.
That experience is addictive in the best way. Not because it’s magic. Because it’s the feeling of your own thinking working at full capacity, with execution handled, with noise gone, with the gap between idea and output collapsed almost to zero.
When I said “make every table cell a link to a real page” and got back a verified, published directory with 97 anchor links twelve minutes later, that was the 10-minute test passing on the first try. The input was seven words. The output was weeks of manual work. The distance between them was covered by setup I’d done in advance and clarity I’d built over time.
That’s what the test is measuring. Not whether the AI is good. Whether you’re ready.
The Two Sides of Ready
Ready has two components, and they are not the same thing, though people often conflate them.
The first is system readiness — whether your environment has enough context, access, and learned behavior to produce something worth testing on a short input. This is a function of setup. You can improve it deliberately. Load context, build skills, establish integrations, run sessions where the only goal is teaching the system how you work. It compounds over time.
The second is thinking readiness — whether you have done enough internal work on the problem to describe what you want in a way that is specific enough to be actionable. This is a function of time and honest reflection. You can’t shortcut it with better prompts. You have to actually think the problem through to the point where you know what good looks like, even if only approximately.
The 10-minute test fails when either is missing. And the failure mode looks different depending on which one it is.
When your system isn’t ready, the output is competent but generic. It answers a reasonable version of your question, just not your specific version. There’s nothing wrong with it except that it doesn’t fit your world.
When your thinking isn’t ready, the output is a reflection of your own ambiguity. It’s vague because you were vague. It went somewhere you didn’t intend because you didn’t make your intention clear enough to constrain it.
The test tells you which problem you have. And once you know which problem you have, you know exactly what to work on next.
A Note on Patience
The 10-minute test is not a guarantee of a 10-minute output. The name is about the input, not the timeline. It means: if you can’t describe it clearly in under 10 minutes of explanation, something needs work before you proceed.
Some problems need a lot more than 10 minutes of execution time. Complex systems, long articles, multi-step workflows — these take as long as they take. The test isn’t about speed. It’s about entry conditions.
A problem clearly described in one sentence can justify an hour of execution. A problem that requires a paragraph of caveats and qualifications to describe probably needs more thinking before it needs more doing.
The discipline is holding that distinction. Knowing the difference between a problem that’s ready and a problem that only feels ready because you’re eager to start. The test is the check. The 10 minutes of clear description, or the discovery that you can’t write it — that’s the data you need before you spend any gas.
And when the input is clean and the setup is right and the output lands on the first try — that 10-minute check pays back in ways that compound for as long as you keep building on it.
Frequently Asked Questions
What is the 10-minute test for AI productivity?
The 10-minute test is a diagnostic: describe what you want in plain language and observe what comes back in the first response. If the output is worth engaging with — not perfect, but genuinely on target — both your AI setup and your thinking are working. If the output is generic or misaligned, either the system lacks context about you, or you have not finished thinking through what you actually want. The test reveals which problem is present.
How do you use AI as a clarity mirror?
You use AI as a clarity mirror by describing a problem before you are fully ready — not to get an answer, but to see what shape your description takes when reflected back. If the system cannot find a sharp handle on your ask, there may not be one yet. If it runs confidently in a direction you did not intend, your framing was ambiguous in ways you did not notice. Either response is useful data about the state of your thinking.
What does vague AI output tell you about your thinking?
Vague AI output almost always reflects vague input. The system produces something shaped like an answer because you gave it something shaped like a question. The output is a mirror of your current model of the problem, including the parts of that model that are incomplete or unresolved. Before blaming the tool, follow the thread backward: can you describe the desired outcome in a sentence? If not, the thinking is not done yet.
What is the difference between system readiness and thinking readiness?
System readiness is whether your AI environment has enough context, access, and learned behavior to produce a calibrated response to a short input — a function of setup that improves over time. Thinking readiness is whether you have done enough internal work on the problem to describe what you want in a way that is specific enough to act on — a function of reflection that cannot be shortcut. The 10-minute test fails when either is missing, and the failure mode looks different in each case.
How short should a good AI prompt be?
A well-prepared prompt for a well-calibrated system can be very short — sometimes a single sentence, sometimes a few words. The length is not the variable that matters. What matters is whether the prompt is specific enough about the desired outcome that the system can aim at it without guessing. A seven-word dare to a trusted system outperforms a three-hundred-word spec to a system that does not know you.