
AI Test Generation Prompts

Prompts for generating tests with LLMs: unit tests, integration tests, and edge-case coverage derived from code context.

Overview

LLMs can generate comprehensive test suites from source code. Provide the implementation, specify the test framework, and describe the expected behavior to get well-structured tests with good edge case coverage.

Test Generation Prompt Template

Prompt
Write tests for this function using [Vitest/Jest/pytest].

```typescript
[paste function here]
```

Cover:
1. Happy path with typical inputs
2. Edge cases: empty input, null/undefined, boundary values
3. Error cases: invalid input, network failures
4. Type safety: ensure TypeScript types are correct

Use describe/it blocks. One assertion per test.
Mock external dependencies with vi.mock().

Test Categories

  • Happy path — typical inputs produce expected outputs
  • Edge cases — empty arrays, zero values, max integers, Unicode strings
  • Error handling — invalid inputs, missing data, network failures
  • Boundary values — off-by-one, min/max, pagination limits
  • Integration — API endpoints, database queries, external services

Coverage Strategy

Ask the LLM to identify untested code paths, generate tests for each branch, and suggest property-based tests for functions with large input spaces. For React components, ask for both unit tests (logic) and rendering tests (user interactions).
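A property-based test can be sketched by hand to show the idea: generate many random inputs and assert an invariant, instead of hand-picking examples. The function and invariants below are illustrative; in practice a library such as fast-check handles input generation and shrinking.

```typescript
// Hypothetical function under test: numeric sort.
function sortNumbers(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// Naive random input generator (a property-testing library does this better).
function randomArray(maxLen: number): number[] {
  const len = Math.floor(Math.random() * maxLen);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 2000) - 1000);
}

for (let i = 0; i < 500; i++) {
  const input = randomArray(50);
  const sorted = sortNumbers(input);

  // Invariant 1: output is ordered.
  for (let j = 1; j < sorted.length; j++) {
    if (sorted[j - 1] > sorted[j]) throw new Error("not ordered");
  }
  // Invariant 2: length is preserved.
  if (sorted.length !== input.length) throw new Error("length changed");
  // Invariant 3: sorting is idempotent.
  if (JSON.stringify(sortNumbers(sorted)) !== JSON.stringify(sorted)) {
    throw new Error("not idempotent");
  }
}
```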

Frequently Asked Questions

Can LLMs write good tests?

Yes, especially with structured prompts. Provide the implementation, specify the test framework, and list known edge cases; the generated suite is usually a strong starting point, though you should still review the assertions for correctness before merging.

What test framework should I use with AI?

Use whatever your project already uses. LLMs work well with Jest, Vitest, Mocha, pytest, and most popular frameworks. Specify the framework in your prompt so the generated syntax matches.