Prompt Techniques from a Real Case of Claude Code Auto-Programming

This article walks through a real use case of Claude Code.
Original requirement: a valued paying user wanted me to show each article's last-modified time.
At first glance, this seemed a bit hard to implement: the articles on my website are not stored in a database; they are all statically generated with Next.js SSG, so they simply have no update time.
One technique here: when solving problems, we should not feed the original requirement directly to Claude Code, for two reasons:
1. The original requirement is vague and could be misunderstood. If Claude Code misunderstands it, it might still add a time in the end, but that time might not be reliable.
2. Claude Code's token consumption is genuinely expensive, so a vague requirement can waste a lot of tokens on meaningless work.
So we need to break the requirement down first. I consulted DeepSeek, which gave me two candidate solutions:
1. File build time: capture each file's build time on every build. However, Turbopack's bundling strategy complicates this; file hashes change with each build, so the build time might not be reliable.
2. Git commit time: this seemed more trustworthy to me.
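The Git approach can be sketched as a small build-time helper. This is my own illustrative sketch, not the article's actual code; the function name is hypothetical:

```typescript
import { execFileSync } from 'node:child_process';

// Last commit date of a file as an ISO-8601 string, or null if the file
// is untracked, the directory is not a git repo, or git is unavailable.
export function getLastCommitDate(filePath: string): string | null {
  try {
    const out = execFileSync(
      'git',
      // %cI = committer date in strict ISO 8601; `--` guards against
      // the path being mistaken for a revision.
      ['log', '-1', '--format=%cI', '--', filePath],
      { encoding: 'utf8' },
    ).trim();
    return out || null; // empty output means git has no commits for this path
  } catch {
    return null;
  }
}
```

One caveat with this approach: the committer date changes on rebase or amend, which is usually acceptable for an "article updated" timestamp.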
With a general direction settled, my prompt was simple: "Compile the git commit time into the header of each .mdx article."
Claude Code is quite reliable: given a precise prompt, it generally just gets to work without issue.
After consuming $7 of my credit and taking about 20 minutes, it finally succeeded.
Sure enough, something unexpected happened: it skipped changes to 171 files.
The tricky part: the skipped files differed only in one extra `pass` parameter passed to the layout, like `<PostLayout pass>`; everything else was identical. But Claude Code was inflexible. It treated the component with the extra parameter as a completely different custom component, and simply skipped processing those files.
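To see why those files really were "exactly the same", recall that in JSX a bare attribute is just boolean shorthand. The snippet below (prop and type names mirror the article's example, not real code from the project) shows the only difference between a processed file and a skipped one:

```typescript
// `<PostLayout pass>` is shorthand for `<PostLayout pass={true} />`:
// the same component, with one extra boolean prop — not a different component.
type PostLayoutProps = { pass?: boolean; children?: unknown };

const processedFile: PostLayoutProps = {};           // <PostLayout> ...
const skippedFile: PostLayoutProps = { pass: true }; // <PostLayout pass> ...
```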
Claude Code's suggested solution looked like this:

```tsx
import Layout from 'components/post-layout';
import { getGitFileInfo } from '@/utils/git-info';

export default function Article({ children }: any) {
  const gitInfo = getGitFileInfo('src/app/your-path/page.mdx');
  return (
    <Layout gitInfo={gitInfo}>{children}</Layout>
  );
}
```

But what I actually needed was this, which renders a completely identical result:
```tsx
import MdxLayout from 'components/mdx-layout';

export default function Article({ children }: any) {
  return (
    <MdxLayout>{children}</MdxLayout>
  );
}
```

At this point, I stepped into a pitfall with my prompt.
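The reason the minimal wrapper suffices is that the timestamp logic can live once inside the shared layout instead of being repeated in every article file. A rough sketch of that idea, with a hypothetical helper name (the article doesn't show the layout's internals):

```typescript
// Hypothetical helper a shared layout might call: build the article header
// line from a title plus the git commit time (ISO 8601, possibly null).
export function renderHeader(title: string, commitIso: string | null): string {
  if (!commitIso) return title; // untracked file: show no timestamp
  const day = new Date(commitIso).toISOString().slice(0, 10); // YYYY-MM-DD
  return `${title} (updated ${day})`;
}
```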
I entered another prompt: "Refactor the 171 skipped files using the same method as above."
On reflection, my phrasing was ambiguous. Claude Code had already proposed its own solution, which I didn't agree with. By "the same method as above" I meant the solution that had already been applied to hundreds of files; during execution, however, it interpreted the phrase as the suggested solution it had just given me.
That ambiguity sent it executing the unwanted solution for another 20 minutes. It even hit two errors and self-corrected along the way, devouring my tokens as the two conflicting interpretations collided.
In the end, I had to abandon that run as well and restate my intent from scratch.
Summary
1. Prompts should ideally contain a relatively stable, accurate solution. The less the AI has to infer on its own, the lower the hallucination rate.
2. Requirement prompts must not contain ambiguity. Ambiguity easily leads to errors, and although Claude Code can eventually fix them, doing so burns a lot of tokens. Worse, because LLMs generate output through a predictive mechanism, an early misinterpretation pushes every subsequent step further in the wrong direction. The model will also try to stay logically self-consistent, inventing things that don't exist; the more it writes, the bigger the problems become, and the harder the developer's review gets. If you are fooled by its hallucinations, the consequences can be serious.
3. Natural-language constraints are less precise than code. Including file names, code variables, code-specific terms, and professional terminology in prompts greatly reduces Claude Code's hallucinations.




