Overview
LLM-powered code review can catch bugs, security issues, and inconsistencies faster than manual review. The key is using structured prompts that direct the AI to check specific categories of issues rather than just asking "review this code."
Review Categories
- Correctness — logic bugs, off-by-one errors, race conditions
- Security — injection vulnerabilities, auth issues, data exposure
- Performance — unnecessary re-renders, N+1 queries, memory leaks
- Maintainability — naming, complexity, duplication
- Completeness — missing error handling, edge cases, validation
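If you assemble review prompts programmatically, the categories can live in one place. A minimal Python sketch — all names here are illustrative, not from any library:

```python
# Hypothetical helper: render review categories as the numbered
# "Focus on" section of a prompt. Names are illustrative only.
REVIEW_CATEGORIES = {
    "BUGS": "Logic errors, race conditions, off-by-one errors",
    "SECURITY": "Injection, auth bypass, data exposure",
    "PERFORMANCE": "Unnecessary work, N+1 queries, memory leaks",
    "EDGE CASES": "Null handling, empty arrays, boundary values",
}

def build_focus_section(categories: dict[str, str]) -> str:
    """Render the categories as a numbered 'Focus on' list."""
    lines = ["Focus on:"]
    for i, (name, detail) in enumerate(categories.items(), start=1):
        lines.append(f"{i}. {name}: {detail}")
    return "\n".join(lines)

print(build_focus_section(REVIEW_CATEGORIES))
```

Keeping the categories as data makes it easy to drop or add a focus area per review without editing prompt text by hand.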
Code Review Prompt Template
Review this code for a [framework] application.
Focus on:
1. BUGS: Logic errors, race conditions, off-by-one errors
2. SECURITY: Injection, auth bypass, data exposure
3. PERFORMANCE: Unnecessary work, N+1 queries, memory leaks
4. EDGE CASES: Null handling, empty arrays, boundary values
For each issue found:
- Severity: critical / warning / suggestion
- Line number(s)
- Description of the issue
- Suggested fix
Code:
```
[paste code here]
```
Example Workflow
Paste a git diff, ask the LLM to review it for bugs and security issues, iterate on the flagged items, then request a final summary of any remaining concerns. For best results, review one file or module at a time rather than an entire PR.
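Splitting a diff into per-file chunks before review can be scripted. A minimal sketch that parses the `diff --git` headers of a unified git diff — the helper name is hypothetical:

```python
import re

def split_diff_by_file(diff_text: str) -> dict[str, str]:
    """Split a unified git diff into per-file chunks, keyed by the new path."""
    chunks: dict[str, str] = {}
    current_path = None
    current_lines: list[str] = []
    for line in diff_text.splitlines():
        # Each file's section starts with a "diff --git a/... b/..." header.
        header = re.match(r"^diff --git a/(\S+) b/(\S+)$", line)
        if header:
            if current_path is not None:
                chunks[current_path] = "\n".join(current_lines)
            current_path = header.group(2)
            current_lines = []
        current_lines.append(line)
    if current_path is not None:
        chunks[current_path] = "\n".join(current_lines)
    return chunks

sample = (
    "diff --git a/app.py b/app.py\n"
    "@@ -1 +1 @@\n"
    "-x = 1\n"
    "+x = 2\n"
    "diff --git a/util.py b/util.py\n"
    "@@ -0,0 +1 @@\n"
    "+def f(): pass\n"
)
chunks = split_diff_by_file(sample)
print(sorted(chunks))  # file paths found in the diff
```

Each chunk can then be pasted into the template above as its own review request, keeping the model focused on one file at a time.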