I’ve been experimenting with ways to increase AI coders’ autonomy and enable them to tackle more complex tasks.

While researching papers for my PhD thesis on how instruction order affects outcomes, I came across one that confirmed something intuitive: LLMs perform better when they reason before answering, rather than following the “Answer → Justify” pattern. The data showed that eliciting the reasoning trace before the final answer consistently improves results.

That insight led me to a practical coding workflow:

The Workflow

1. Prompt generation

Ask Cursor to create a prompt from your codebase:

You are a skilled software architect. Analyze the given task 
and current repository, then write a clear, effective prompt 
for an AI Coder to implement the task. 
{INSERT_TASK_DESC_HERE}
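If you reuse this meta-prompt often, it helps to template it rather than paste it by hand. A minimal sketch in Python; the function and template names here are my own, not part of Cursor or any other tool:

```python
# Architect meta-prompt with a slot for the concrete task description.
ARCHITECT_TEMPLATE = (
    "You are a skilled software architect. Analyze the given task "
    "and current repository, then write a clear, effective prompt "
    "for an AI coder to implement the task.\n"
    "{task}"
)

def build_architect_prompt(task_description: str) -> str:
    """Fill the architect meta-prompt with a concrete task description."""
    return ARCHITECT_TEMPLATE.format(task=task_description)
```

You would then paste the result of, say, `build_architect_prompt("Add retry logic to the payment client")` into Cursor.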

2. Analyze the prompt — This lets you preview how the AI plans to approach the task.

3. Refine if needed — Adjust the prompt until it looks solid.

4. Run the agent — Launch DevinAI, Cursor Background Agent, or Codex.

5. Profit.
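Wired together, the five steps above look roughly like this. This is a sketch, not a real integration: `llm_call`, `approve`, `revise`, and `run_agent` are hypothetical placeholders for whatever model API, review step, and coding agent you actually use:

```python
def generate_coder_prompt(task, llm_call):
    """Step 1: ask an 'architect' model to write the coder's prompt."""
    meta_prompt = (
        "You are a skilled software architect. Analyze the given task "
        "and current repository, then write a clear, effective prompt "
        "for an AI coder to implement the task.\n" + task
    )
    return llm_call(meta_prompt)

def review_loop(task, llm_call, approve, revise):
    """Steps 2-3: inspect the generated prompt; refine until it looks solid."""
    prompt = generate_coder_prompt(task, llm_call)
    while not approve(prompt):
        prompt = revise(prompt)
    return prompt

def run_workflow(task, llm_call, approve, revise, run_agent):
    """Step 4: hand the validated prompt to the coding agent."""
    prompt = review_loop(task, llm_call, approve, revise)
    return run_agent(prompt)
```

In practice `approve` and `revise` are you, reading the generated prompt and editing it until it looks solid; only `run_agent` kicks off the actual coder.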

Why This Works

Compared with simply rewriting the prompt yourself, this workflow is stronger because it frontloads reasoning and validation, so you can catch flaws before execution. The AI architect thinks through the approach first, you validate its reasoning, and only then does the coder agent execute.

It’s the difference between “go build this” and “here’s exactly how to build this, I’ve verified the approach.” The second version fails less.