This week, I learned what makes a good #prompt, and it applies to any #AI. Suppose we're given a task. After reading the details, we raise questions about certain points. For other points, we make assumptions, because there's too much information to work through and by now we're too tired.

What do we notice?
1. Questions arise. They stop us from starting the work immediately and force us to wait until we have all the answers.
2. Assumptions are made. These often lead to unexpected outcomes.
3. Unclear details lead to more questions and assumptions. This suggests the task is probably too large or too vague.

What can we learn from this? AI often underperforms not because of the model itself, but because of how we describe the task: the prompt. In some ways, AI can even outperform humans, since it isn't limited by our biological makeup; it doesn't tire. Still, the problem lies with both us and the AI: if a task isn't described in enough detail, a wrong outcome is likely.

Example: If the prompt is "Draw a car" and we expect a red car but the AI produces a blue one, the issue isn't with the AI; it's that we didn't specify the color. AI can't read our minds, so we should have stated the expected color in the prompt.
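To make this concrete, here's a minimal sketch in Python. The `build_prompt` helper is hypothetical, not any particular AI API; it just shows how spelling out details removes guesswork before the prompt is ever sent.

```python
# Hypothetical helper: turn a subject plus explicit details into one prompt string.
def build_prompt(subject: str, **details: str) -> str:
    spec = ", ".join(f"{key}: {value}" for key, value in details.items())
    return f"{subject} ({spec})" if spec else subject

# Vague prompt: the AI has to guess every unstated detail, e.g. the color.
print(build_prompt("Draw a car"))
# -> Draw a car

# Detailed prompt: our expectations are stated, so there is nothing to guess.
print(build_prompt("Draw a car", color="red", view="side", style="photorealistic"))
# -> Draw a car (color: red, view: side, style: photorealistic)
```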

The takeaway: Next time we write a prompt, let's take a moment to check whether it's detailed enough. If it seems too large, break it into smaller parts, as in the sketch below.
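As a sketch of that last point (the task and sub-steps here are made up purely for illustration), decomposing one oversized prompt into a sequence of smaller ones might look like this:

```python
# Illustrative only: one oversized prompt, likely to produce a vague or partial result.
big_prompt = "Design a landing page: write the copy, pick colors, and lay out the sections."

# The same task, split into smaller prompts that can be sent one at a time,
# with each answer reviewed before moving on to the next step.
steps = [
    "List the sections a product landing page needs, in order.",
    "Write a one-sentence headline and a short subheading for the hero section.",
    "Suggest a three-color palette (primary, accent, background) with hex codes.",
    "Describe the layout of each section in one line, top to bottom.",
]

for i, prompt in enumerate(steps, start=1):
    print(f"Step {i}: {prompt}")
```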