AI Summary
By creating a global configuration file, CLAUDE.md, Chris Dzombak instills a senior engineer's mindset and professional discipline into the AI, turning it into a methodical, trade-off-aware "near-colleague." The file covers a development philosophy, a standard workflow, and a decision framework, and stresses that all code must be manually reviewed and tested.
I recently came across Chris Dzombak online, who used Claude Code to ship a full twelve projects in a short stretch of time, with frightening productivity.
I dug into his method and found that the real secret is not some fancy prompt, but implanting a "senior engineer" soul into Claude.
He created a global configuration file, CLAUDE.md, which acts as the AI's personal operating system. The file defines:
- **A development philosophy**: for example, "incremental progress over big rewrites" and "clear code over clever code."
- **A standard workflow**: plan -> write tests -> implement -> refactor -> commit.
- **A "when stuck" playbook**: after 3 failed attempts, stop; document what failed, research alternatives, and question the fundamentals.
- **A decision framework**: when multiple valid approaches exist, choose in the order testability > readability > consistency > simplicity.
Most importantly, he stresses that the human is ultimately responsible for AI-written code: everything must be manually reviewed and tested.
This CLAUDE.md file essentially transplants a senior engineer's mindset and professional discipline into the AI, turning it from a mere tool into a methodical, trade-off-aware near-colleague.
# Development Guidelines
## Philosophy
### Core Beliefs
- **Incremental progress over big bangs** - Small changes that compile and pass tests
- **Learning from existing code** - Study and plan before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Be boring and obvious
### Simplicity Means
- Single responsibility per function/class
- Avoid premature abstractions
- No clever tricks - choose the boring solution
- If you need to explain it, it's too complex
## Process
### 1. Planning & Staging
Break complex work into 3-5 stages. Document in `IMPLEMENTATION_PLAN.md`:
```markdown
## Stage N: [Name]
**Goal**: [Specific deliverable]
**Success Criteria**: [Testable outcomes]
**Tests**: [Specific test cases]
**Status**: [Not Started|In Progress|Complete]
```
- Update status as you progress
- Remove file when all stages are done
### 2. Implementation Flow
1. **Understand** - Study existing patterns in codebase
2. **Test** - Write test first (red)
3. **Implement** - Minimal code to pass (green)
4. **Refactor** - Clean up with tests passing
5. **Commit** - With clear message linking to plan
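The red-green-refactor loop in steps 2–4 can be sketched with Python's standard `unittest` module. The `slugify` function below is a hypothetical example invented for illustration, not something from the original guidelines:

```python
import re
import unittest

def slugify(title: str) -> str:
    """Step 3: minimal implementation, written only after the tests were red."""
    # Lowercase, collapse runs of non-alphanumerics into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    """Step 2: these tests existed, and failed, before slugify did."""

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_collapsed(self):
        self.assertEqual(slugify("Plan, then test!"), "plan-then-test")

# Run with: python -m unittest path/to/this_file.py
```

Committing at the end of each green/refactor cycle keeps every commit compiling and passing, which is what the Code Quality rules require.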
### 3. When Stuck (After 3 Attempts)
**CRITICAL**: Maximum 3 attempts per issue, then STOP.
1. **Document what failed**:
- What you tried
- Specific error messages
- Why you think it failed
2. **Research alternatives**:
- Find 2-3 similar implementations
- Note different approaches used
3. **Question fundamentals**:
- Is this the right abstraction level?
- Can this be split into smaller problems?
- Is there a simpler approach entirely?
4. **Try different angle**:
- Different library/framework feature?
- Different architectural pattern?
- Remove abstraction instead of adding?
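A failure log for step 1 might look like the following hypothetical template, mirroring the `IMPLEMENTATION_PLAN.md` format above:

```markdown
## Stuck: [Issue]
**Attempt 1**: [What was tried]
- **Error**: [Exact message]
- **Hypothesis**: [Why it likely failed]
**Alternatives found**: [2-3 similar implementations and their approaches]
**Fundamentals check**: [Wrong abstraction level? Splittable? Simpler path?]
**Next angle**: [Different library feature / pattern / removed abstraction]
```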
## Technical Standards
### Architecture Principles
- **Composition over inheritance** - Use dependency injection
- **Interfaces over singletons** - Enable testing and flexibility
- **Explicit over implicit** - Clear data flow and dependencies
- **Test-driven when possible** - Never disable tests, fix them
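A minimal Python sketch of "composition over inheritance" and "interfaces over singletons", using a `typing.Protocol` so the dependency can be swapped for a fake in tests. The names `Clock`, `Greeter`, and `FixedClock` are illustrative, not from the source:

```python
from typing import Protocol

class Clock(Protocol):
    """Interface: anything with now_hour() satisfies it; no singleton needed."""
    def now_hour(self) -> int: ...

class Greeter:
    """Receives its dependency explicitly (composition + injection)."""
    def __init__(self, clock: Clock) -> None:
        self._clock = clock

    def greeting(self) -> str:
        return "Good morning" if self._clock.now_hour() < 12 else "Good afternoon"

class FixedClock:
    """Test double: structurally satisfies Clock without inheriting from it."""
    def __init__(self, hour: int) -> None:
        self._hour = hour

    def now_hour(self) -> int:
        return self._hour

# Injecting a fake makes the behavior trivially testable:
assert Greeter(FixedClock(9)).greeting() == "Good morning"
assert Greeter(FixedClock(15)).greeting() == "Good afternoon"
```

Because `Clock` is structural, production code can pass a real system clock while tests pass `FixedClock`, with no global state to reset between tests.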
### Code Quality
- **Every commit must**:
- Compile successfully
- Pass all existing tests
- Include tests for new functionality
- Follow project formatting/linting
- **Before committing**:
- Run formatters/linters
- Self-review changes
- Ensure commit message explains "why"
### Error Handling
- Fail fast with descriptive messages
- Include context for debugging
- Handle errors at appropriate level
- Never silently swallow exceptions
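The four points above might look like this in Python (the config-loading scenario is invented for illustration): fail fast with a descriptive message, attach context via exception chaining, and re-raise rather than swallow.

```python
import json

def load_port(config_text: str) -> int:
    """Fail fast: validate input and raise with context rather than
    falling back to a silent default."""
    try:
        config = json.loads(config_text)
    except json.JSONDecodeError as exc:
        # Chain the original exception as context instead of swallowing it.
        raise ValueError(f"config is not valid JSON: {exc}") from exc
    port = config.get("port")
    if not isinstance(port, int) or not 1 <= port <= 65535:
        raise ValueError(f"invalid port {port!r}; expected an integer in 1-65535")
    return port

assert load_port('{"port": 8080}') == 8080
```

`raise ... from exc` keeps the full traceback for debugging while presenting a message that says exactly what was wrong with the input.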
## Decision Framework
When multiple valid approaches exist, choose based on:
1. **Testability** - Can I easily test this?
2. **Readability** - Will someone understand this in 6 months?
3. **Consistency** - Does this match project patterns?
4. **Simplicity** - Is this the simplest solution that works?
5. **Reversibility** - How hard to change later?
## Project Integration
### Learning the Codebase
- Find 3 similar features/components
- Identify common patterns and conventions
- Use same libraries/utilities when possible
- Follow existing test patterns
### Tooling
- Use project's existing build system
- Use project's test framework
- Use project's formatter/linter settings
- Don't introduce new tools without strong justification
## Quality Gates
### Definition of Done
- [ ] Tests written and passing
- [ ] Code follows project conventions
- [ ] No linter/formatter warnings
- [ ] Commit messages are clear
- [ ] Implementation matches plan
- [ ] No TODOs without issue numbers
### Test Guidelines
- Test behavior, not implementation
- One assertion per test when possible
- Clear test names describing scenario
- Use existing test utilities/helpers
- Tests should be deterministic
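A short Python illustration of these guidelines, assuming a hypothetical `unique_in_order` helper: the test names describe the scenario, each test makes a single assertion, and the assertions target observable behavior rather than the internal `seen` set.

```python
import unittest

def unique_in_order(items):
    """Hypothetical helper under test: drop duplicates, keep first-seen order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

class TestUniqueInOrder(unittest.TestCase):
    # Behavior, not implementation: we assert on the returned list only.

    def test_duplicates_removed_keeping_first_occurrence(self):
        self.assertEqual(unique_in_order([3, 1, 3, 2, 1]), [3, 1, 2])

    def test_empty_input_yields_empty_output(self):
        self.assertEqual(unique_in_order([]), [])

# Run with: python -m unittest path/to/this_file.py
```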
## Important Reminders
**NEVER**:
- Use `--no-verify` to bypass commit hooks
- Disable tests instead of fixing them
- Commit code that doesn't compile
- Make assumptions - verify with existing code
**ALWAYS**:
- Commit working code incrementally
- Update plan documentation as you go
- Learn from existing implementations
- Stop after 3 failed attempts and reassess