AI Code Review Prompts

Prompts for LLM-powered code review. Catch bugs, security issues, and style problems with structured review prompts.

Overview

LLM-powered code review can catch bugs, security issues, and inconsistencies faster than manual review. The key is using structured prompts that direct the AI to check specific categories of issues rather than just asking "review this code."

Review Categories

  • Correctness — logic bugs, off-by-one errors, race conditions
  • Security — injection vulnerabilities, auth issues, data exposure
  • Performance — unnecessary re-renders, N+1 queries, memory leaks
  • Maintainability — naming, complexity, duplication
  • Completeness — missing error handling, edge cases, validation
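For concreteness, here is a short, deliberately flawed Python snippet (hypothetical, written for this article) containing issues from two of the categories above — exactly the kind of thing a structured prompt should surface:

```python
# Deliberately flawed snippet illustrating two review categories.

def last_n_prices(prices, n):
    # Correctness: off-by-one — this slice drops the final element.
    return prices[-n:-1]

def average_price(prices):
    # Completeness: no guard for an empty list, so this raises
    # ZeroDivisionError instead of handling the edge case.
    return sum(prices) / len(prices)
```

A category-directed review should flag both: the slice should be `prices[-n:]`, and `average_price` needs an empty-list check.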

Code Review Prompt Template

Prompt
Review this code for a [framework] application.

Focus on:
1. BUGS: Logic errors, race conditions, off-by-one errors
2. SECURITY: Injection, auth bypass, data exposure
3. PERFORMANCE: Unnecessary work, N+1 queries, memory leaks
4. EDGE CASES: Null handling, empty arrays, boundary values

For each issue found:
- Severity: critical / warning / suggestion
- Line number(s)
- Description of the issue
- Suggested fix

Code:
```
[paste code here]
```
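If you run reviews repeatedly, it helps to fill the template programmatically rather than by hand. A minimal sketch — the function name `build_review_prompt` is illustrative, not from any library:

```python
# Fill the review-prompt template above with a framework name and the
# code to review. FENCE builds the literal ``` markers programmatically
# so this example stays nestable inside documentation.

FENCE = "`" * 3

def build_review_prompt(framework: str, code: str) -> str:
    return (
        f"Review this code for a {framework} application.\n\n"
        "Focus on:\n"
        "1. BUGS: Logic errors, race conditions, off-by-one errors\n"
        "2. SECURITY: Injection, auth bypass, data exposure\n"
        "3. PERFORMANCE: Unnecessary work, N+1 queries, memory leaks\n"
        "4. EDGE CASES: Null handling, empty arrays, boundary values\n\n"
        "For each issue found:\n"
        "- Severity: critical / warning / suggestion\n"
        "- Line number(s)\n"
        "- Description of the issue\n"
        "- Suggested fix\n\n"
        f"Code:\n{FENCE}\n{code}\n{FENCE}\n"
    )
```

The resulting string can be pasted into any chat interface or passed to whichever LLM API you use.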

Example Workflow

Paste a git diff, ask the LLM to review for bugs and security issues, iterate on flagged items, then ask for a final summary of remaining concerns. For best results, review one file or module at a time rather than entire PRs.
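The per-file part of this workflow can be scripted. A sketch of the git plumbing only — listing changed files and pulling one file's diff at a time (sending to an LLM is intentionally left out; function names are illustrative):

```python
# List the files an uncommitted diff touches, then pull each file's diff
# so it can be reviewed one file at a time rather than as a whole PR.
import subprocess

def changed_files(repo: str, base: str = "HEAD") -> list[str]:
    # Names of files with uncommitted changes relative to `base`.
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def file_diff(repo: str, path: str, base: str = "HEAD") -> str:
    # The diff for a single file — small enough for one review prompt.
    out = subprocess.run(
        ["git", "-C", repo, "diff", base, "--", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout
```

Each returned diff can then be dropped into the prompt template above, keeping every review request focused on a single file.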

Frequently Asked Questions

Can AI replace human code review?

LLM code review is excellent for catching common issues — bugs, security vulnerabilities, style inconsistencies, and missing edge cases. It complements but does not replace human review for architecture and design decisions.

Which LLM is best for code review?

Claude and GPT-4 both excel at code review. Claude handles larger diffs better due to its larger context window. Use structured prompts for either model.