Workflow Optimization
Tips & Best Practices · 12 min read
From power users and source analysis
Token Efficiency
Use Prompt Caching
Claude Code automatically applies prompt caching, with a 90% discount on cache reads:
- System prompt cached globally (first-party)
- Tool schemas cached per session
- Early conversation history cached after compression
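To see why this matters, here is a back-of-the-envelope estimate. The 90% cache-read discount comes from above; the per-token rate, prefix size, and turn count are hypothetical numbers chosen only for illustration:

```shell
# Rough savings estimate from prompt caching (all figures except the
# 90% discount are assumed for the example).
awk 'BEGIN {
  rate   = 3.00 / 1000000   # assumed: $3.00 per 1M input tokens
  prefix = 200000           # assumed: cached system prompt + schemas + history
  turns  = 20               # assumed: turns that re-read the cached prefix
  full   = prefix * turns * rate   # cost with no caching
  cached = full * 0.10             # cache reads at a 90% discount
  printf "no cache: $%.2f   with cache: $%.2f\n", full, cached
}'
# prints: no cache: $12.00   with cache: $1.20
```

The long, stable prefix (system prompt, tool schemas) dominates per-turn input, which is why caching it pays off so quickly.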
Choose the Right Model
- Sonnet: Fast, cost-effective for routine tasks
- Opus: Complex reasoning, architecture design
- Use the /model command to switch
CLAUDE.md Best Practices
Minimalist Approach
Include only universal rules. An overly long CLAUDE.md consumes valuable context-window space.
Hierarchical Organization
~/.claude/CLAUDE.md → Cross-project preferences
project/CLAUDE.md → Project architecture
.claude/rules/*.md → Conditional rules
CLAUDE.local.md → Local private (not committed)
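As an illustration, a minimal project-level CLAUDE.md might look like this (the project name, paths, and conventions are invented for the example):

```markdown
# Project: acme-web (example)

## Architecture
- Next.js app in `apps/web`, shared UI in `packages/ui`
- API routes call services in `packages/api`; no direct DB access from components

## Conventions
- TypeScript strict mode; avoid `any`
- Run `pnpm test` before committing
```

Everything else (one-off rules, machine-specific paths) belongs in `.claude/rules/*.md` or `CLAUDE.local.md` per the hierarchy above.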
Anti-Patterns
- ❌ Don't repeat what the model already knows
- ❌ Don't add excessive formatting rules
- ❌ Don't include sensitive information (e.g., API keys)
Conversation Management
Use Plan Mode for Analysis
/plan
Read-only mode for safe code exploration before making changes.
Clear Context When Needed
/clear
Starts a fresh conversation when the context becomes too large or muddled.
Use Worktree for Experiments
/worktree
Isolates experimental changes in a Git worktree.
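Under the hood this maps onto plain Git worktrees. The manual equivalent looks roughly like this (paths and the branch name are made up for the demo):

```shell
# Create a throwaway repo, then branch an experiment into its own worktree.
rm -rf /tmp/wt-demo /tmp/wt-demo-exp
git init -q /tmp/wt-demo
cd /tmp/wt-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
# New branch "experiment" checked out in a separate directory:
git worktree add -q -b experiment ../wt-demo-exp
git worktree list   # shows the main checkout plus the experiment worktree
```

Because each worktree has its own working directory, experimental edits never touch the files in your main checkout.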
Tool Usage Tips
Be Specific with File Operations
# Good
Read src/components/Button.tsx lines 1-50
# Vague
Read the button file
Use Grep for Large Codebases
Search for all usages of useState
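Behind the scenes this behaves much like a recursive grep, which is why it scales to large codebases better than reading files one by one. A self-contained sketch (the sample file is fabricated for the demo):

```shell
# Build a tiny fake codebase, then search it recursively with line numbers.
mkdir -p /tmp/grep-demo/src
cat > /tmp/grep-demo/src/Counter.tsx <<'EOF'
import { useState } from "react";
const [count, setCount] = useState(0);
EOF
grep -rn "useState" /tmp/grep-demo/src
# prints both matching lines, each prefixed with file path and line number
```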
Leverage Skills
/commit
/review-pr
/verify
Cost Management
Monitor Token Usage
/usage
Check the current session's token consumption.
Use Headless Mode for Scripts
claude -p "Generate a README for this project"
No TUI overhead; output goes directly to stdout, which makes it easy to pipe into scripts and CI jobs.
Enterprise Tips
Configure Policy Limits
- Set tool restrictions via enterprise policy
- Block specific MCP servers if needed
- Enable audit logging for compliance
Team Memory Sync
- Share knowledge across team members
- Secret scanning enabled by default
- Server-preferred sync semantics
💡 Pro Tip: See Token Economics Reference for detailed caching strategies.