Project Rhea
A lean, DSL-based end-to-end API testing framework powered by TypeScript, Zod, and LLM-friendly JSON schemas
Overview
What is Project Rhea?
Project Rhea is an API testing framework designed for developers who value simplicity without sacrificing power. Built on TypeScript and Zod, it provides a clean JSON DSL that makes writing tests intuitive while offering enterprise-grade validation capabilities.
At its core, Rhea transforms API testing from a chore into a streamlined process. Whether you’re testing REST endpoints, validating complex response structures, or orchestrating multi-step workflows, Rhea handles it all with minimal configuration.
Simple JSON DSL
Define tests in clean, readable JSON. No complex setup or boilerplate required.
LLM-Powered Generation
Generate complete test suites from API documentation using OpenAI, Groq, and other providers.
Rich Validation
Status codes, headers, body matching, regex patterns, response times, array assertions, and JSON Schema validation.
Variable System
Extract values from responses and interpolate them across test steps with JSONPath.
Retry & Polling
Built-in retry logic for async operations and flaky endpoints with configurable policies.
Rate Limiting
Token bucket rate limiter for respecting API limits and preventing throttling.
Negative Testing
Test error conditions with expectFailure and continueOnFailure flags.
Conditional Execution
Run steps conditionally based on variable state with flexible operators.
Teardown Support
Cleanup operations that always run, like finally blocks, ensuring test hygiene.
Why Project Rhea?
Zero Config
Works out of the box with TypeScript. No complex setup, no configuration files, no boilerplate required.
AI-First Design
LLM-friendly schemas enable automated test generation from API documentation. Generate comprehensive test suites with a single command.
Type Safety
Full TypeScript support with Zod schema validation. Compile-time and runtime type safety ensures correctness.
Beautiful Reporting
Detailed console output with timing metrics, retry counts, and validation results. Clear, actionable feedback for every test run.
Developer Experience
Functional, concise codebase with no unnecessary abstractions. Clean API design that prioritizes simplicity and usability.
Simplicity First
Define tests in clean JSON without verbose setup, sidestepping the boilerplate that traditional frameworks require.
Quick Start
Installation
pnpm install
Run a test suite
pnpm execute -f examples/simple-test.json
Generate tests with AI
pnpm llm:generate api-docs.json -m gpt-5-nano
Handbook
Architecture
System Architecture
Project Rhea follows a modular, functional architecture with clear separation of concerns. The codebase is organized into distinct layers that handle different aspects of test execution and generation.
Directory Structure
src/
├── schema/
│ ├── dsl.schema.ts # Zod schemas with descriptions
│ ├── types.ts # TypeScript types
│ └── llm-schema-generator.ts # JSON Schema generator for LLMs
├── engine/
│ ├── http-client.ts # HTTP request handling
│ ├── auth.ts # Authentication
│ ├── variable-resolver.ts # {{variable}} interpolation
│ ├── jsonpath.ts # JSONPath extraction
│ ├── expectation-validator.ts # Response validation
│ ├── schema-validator.ts # JSON Schema validation
│ ├── condition-evaluator.ts # Conditional execution logic
│ ├── rate-limiter.ts # Token bucket rate limiter
│ ├── step-executor.ts # Execute individual steps
│ ├── test-executor.ts # Orchestrate test runs
│ └── metrics/ # Test metrics and reporting
├── reporting/
│ └── console-reporter.ts # Format and display results
├── llm/
│ ├── client.ts # LLM client with retry logic
│ ├── config.ts # Configuration management
│ ├── registry.ts # Provider factory
│ ├── providers/ # Provider implementations
│ ├── prompts/ # Prompt generation
│ ├── tasks/ # Task definitions
│ └── metrics/ # LLM metrics tracking
└── cli/
├── execute.ts # CLI entry point
└── llm-generate.ts # LLM-powered generation
Core Components
Test Execution Engine
executeTestSuite(suite: TestSuite, verbose?: boolean): Promise
Orchestrates the execution of a complete test suite, including setup steps, all tests, and teardown operations.
Parameters
| Name | Type | Description |
|---|---|---|
| suite | TestSuite | The test suite definition conforming to the DSL schema |
| verbose | boolean | Enable verbose logging |
Returns
Promise - Complete execution results with timing and metrics
Examples
const result = await executeTestSuite(testSuite);
console.log(`Passed: ${result.success}`);
console.log(`Duration: ${result.duration}ms`);
Validation System
validateExpectations(response: HttpResponse, expectations: Expectations, schemaRegistry?: Record): ValidationResult
Validates an HTTP response against expectations, including status codes, headers, body content, response times, and JSON Schema validation.
Parameters
| Name | Type | Description |
|---|---|---|
| response | HttpResponse | The HTTP response to validate |
| expectations | Expectations | Validation expectations (status, headers, body, responseTime) |
| schemaRegistry | Record | Optional registry of JSON schemas for schema validation |
Returns
ValidationResult - Validation result with pass/fail status and error details
Variable System
extractVariables(response: HttpResponse, extract: ExtractDefinition[]): Record
Extracts values from an HTTP response using JSONPath expressions and stores them in the variable context.
Parameters
| Name | Type | Description |
|---|---|---|
| response | HttpResponse | HTTP response containing data to extract |
| extract | ExtractDefinition[] | Array of extraction definitions with JSONPath expressions |
Returns
Record - Extracted variables keyed by name
DSL Schema
Test Suite Schema
The DSL is defined using Zod schemas with extensive descriptions that make it LLM-friendly. The schema ensures type safety and provides clear validation rules.
Core Test Suite Structure
const TestSuiteSchema = z.object({
name: z.string(),
description: z.string().optional(),
baseUrl: z.string().url(),
variables: z.array(VariableSchema).optional(),
setup: z.array(StepSchema).optional(),
tests: z.array(TestSchema),
teardown: z.array(StepSchema).optional(),
auth: AuthSchema.optional(),
rateLimit: RateLimitSchema.optional(),
schemas: z.array(SchemaDefinitionSchema).optional(),
});
Step Definition
Step Fields
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Descriptive name for the step |
| method | HttpMethod | Yes | HTTP method (GET, POST, PUT, PATCH, DELETE, etc.) |
| endpoint | string | Yes | Request endpoint path (supports {{variable}} interpolation) |
| headers | Header[] | No | Custom headers array |
| body | unknown | No | Request body (automatically JSON stringified) |
| expect | Expectations | No | Response validation expectations |
| extract | ExtractDefinition[] | No | Variable extraction definitions |
| delay | number | No | Delay in milliseconds before execution |
| retryPolicy | RetryPolicy | No | Retry configuration |
| condition | Condition | No | Conditional execution rule |
| expectFailure | boolean | No | Set to true for negative tests |
| continueOnFailure | boolean | No | Continue execution if step fails |
Validation Options
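Since the options are not enumerated here, a sketch of an `expect` block combining the validation kinds described above may help. The field names (status, headers, body, responseTime) follow the Expectations description earlier; the exact header entry shape is an assumption, not Rhea's verified DSL.

```typescript
// Hypothetical `expect` block on a step. Field names follow the Expectations
// description (status, headers, body, responseTime); the header entry shape
// is an assumption, not Rhea's exact schema.
const step = {
  name: "Get user",
  method: "GET",
  endpoint: "/users/{{userId}}",
  expect: {
    status: 200,                                                     // expected HTTP status
    headers: [{ name: "content-type", value: "application/json" }],  // expected headers
    body: { active: true },                                          // partial body match
    responseTime: 500,                                               // max latency in ms
  },
};
```

Run `pnpm execute --generate-schema` (see the CLI Reference) to get the authoritative field names from the Zod definitions.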
LLM Integration
AI-Powered Test Generation
Project Rhea includes a sophisticated LLM integration system that can generate complete test suites from API documentation. The system supports multiple providers and includes comprehensive metrics tracking.
Multiple Providers
Support for OpenAI, Groq, and extensible architecture for additional providers
Structured Output
Uses zodResponseFormat for direct Zod schema validation of LLM outputs
Retry Logic
Automatic retry with validation error feedback (up to 3 attempts)
Metrics Tracking
Comprehensive logging of executions, token usage, costs, errors, and retries
Configuration Hierarchy
Global defaults → task-specific config → runtime overrides
Reasoning Levels
Configurable reasoning depth (low, medium, high) for different complexity levels
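The configuration hierarchy can be sketched as a simple object-spread merge in which later layers take precedence; the type and variable names below are illustrative, not Rhea's actual config API.

```typescript
// Sketch of the configuration hierarchy: defaults → task config → runtime
// overrides, with later layers winning. Names are illustrative only.
type LlmConfig = {
  model: string;
  provider: string;
  reasoning: "low" | "medium" | "high";
};

const globalDefaults: LlmConfig = {
  model: "gpt-4.1-mini",
  provider: "openai",
  reasoning: "low",
};

const taskConfig: Partial<LlmConfig> = { reasoning: "medium" };
const runtimeOverrides: Partial<LlmConfig> = { model: "gpt-5-nano" };

// Object spread applies each layer in precedence order.
const resolved: LlmConfig = { ...globalDefaults, ...taskConfig, ...runtimeOverrides };
// resolved: { model: "gpt-5-nano", provider: "openai", reasoning: "medium" }
```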
Usage
CLI Command
pnpm llm:generate api-docs.json \
-m gpt-5-nano \
-p openai \
-r medium \
-o generated-tests.json \
--prompt "Focus on authentication flows"
Programmatic Usage
import { createTestSuiteTask } from './src/llm/tasks/create-test-suite';
import { runTask } from './src/llm/client';
const task = createTestSuiteTask(
{
documentation: '... API docs ...',
userPrompt: 'Generate comprehensive test suite',
},
{
model: 'gpt-5-nano',
provider: 'openai',
reasoning: 'medium',
}
);
const testSuite = await runTask(task);
Supported Models
OpenAI
- gpt-4.1-mini: Balanced performance and cost
- gpt-5-nano: Ultra-low cost
- text-embedding-3-small: Embeddings
Groq
- openai/gpt-oss-20b: Fast inference
- openai/gpt-oss-120b: Higher capability
- meta-llama/llama-4-scout-17b-16e-instruct: Efficient
- meta-llama/llama-4-maverick-17b-128e-instruct: Advanced
Metrics System
All LLM executions are tracked with detailed metrics saved to /out/llm-metrics/ as JSONL files:
Metrics Files
| File | Description |
|---|---|
| executions.jsonl | Execution timestamps, duration, success/failure |
| token-usage.jsonl | Prompt tokens, cached tokens, completion tokens |
| costs.jsonl | Cost calculations based on pricing table |
| errors.jsonl | Full error messages and stack traces |
| validation-failures.jsonl | Zod validation error details |
| retries.jsonl | Retry attempt logs with reasons |
| summary.json | Aggregated statistics by task and model |
Variable System
Variable Extraction and Interpolation
The variable system allows you to extract values from responses using JSONPath and use them in subsequent steps with {{variable}} syntax.
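A minimal sketch of the interpolation step, assuming simple alphanumeric variable names; Rhea's actual variable-resolver.ts may handle additional cases (nested values, escaping, and so on).

```typescript
// Minimal {{variable}} interpolation sketch. Unknown variables are left
// untouched rather than replaced with an empty string.
const interpolate = (template: string, vars: Record<string, unknown>): string =>
  template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );

interpolate("/users/{{userId}}", { userId: "42" }); // → "/users/42"
```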
Extract Variables
{
"steps": [
{
"name": "Create user",
"method": "POST",
"endpoint": "/users",
"body": { "name": "John Doe" },
"extract": [
{ "name": "userId", "path": "$.id" },
{ "name": "email", "path": "$.email" }
]
},
{
"name": "Get user details",
"method": "GET",
"endpoint": "/users/{{userId}}"
}
]
}
Initial Variables
{
"variables": [
{ "name": "username", "value": "test@example.com" },
{ "name": "password", "value": "password123" }
],
"tests": [...]
}
JSONPath Expressions
JSONPath is used for extracting values from JSON responses. Common patterns:
JSONPath Examples
| Path | Description |
|---|---|
| $.id | Root-level id field |
| $.user.email | Nested email field |
| $.items[0].name | First item's name |
| $.items[*].id | All item IDs |
| $.items[?(@.status == 'active')] | Filtered items |
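As a sketch, a toy extractor for the simple dot/index subset of the paths above; wildcard (`[*]`) and filter (`[?(...)]`) paths require a full JSONPath implementation, as in Rhea's jsonpath.ts engine.

```typescript
// Toy extractor covering only the `$.a.b` / `$.items[0].name` subset of
// JSONPath; not a substitute for a real JSONPath library.
const extract = (data: unknown, path: string): unknown =>
  path
    .replace(/^\$\.?/, "")            // drop the leading "$."
    .split(/\.|\[|\]\.?/)             // split on dots and bracket indices
    .filter(Boolean)
    .reduce<any>((node, key) => node?.[key], data);

const response = { user: { email: "a@b.com" }, items: [{ name: "first" }] };
extract(response, "$.user.email");    // → "a@b.com"
extract(response, "$.items[0].name"); // → "first"
```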
Retry & Polling
Retry Policies
Configure retry policies for handling flaky endpoints or async operations that require polling.
Basic Retry
{
"name": "Poll until ready",
"method": "GET",
"endpoint": "/status/{{jobId}}",
"retryPolicy": {
"maxAttempts": 5,
"interval": 1000,
"until": {
"$.status": "completed"
}
}
}
Retry Policy Fields
| Field | Type | Description |
|---|---|---|
| maxAttempts | number | Maximum number of retry attempts |
| interval | number | Delay between attempts in milliseconds |
| until | object | Condition object with JSONPath → value mapping |
| onStatus | number[] | Optional array of status codes to retry on |
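The fields above imply a retry loop roughly like the following sketch; the names and the exact success criteria are assumptions, not Rhea's internals. An `until` condition would be checked the same way, comparing JSONPath extractions against the expected values after each attempt.

```typescript
// Sketch of a retry loop driven by maxAttempts / interval / onStatus.
type RetryPolicy = { maxAttempts: number; interval: number; onStatus?: number[] };

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

const withRetry = async (
  request: () => Promise<{ status: number }>,
  policy: RetryPolicy
): Promise<{ status: number }> => {
  let last: { status: number } = { status: 0 };
  for (let attempt = 1; attempt <= policy.maxAttempts; attempt++) {
    last = await request();
    // Succeed unless the status is in the retryable list.
    if (!policy.onStatus?.includes(last.status)) return last;
    if (attempt < policy.maxAttempts) await sleep(policy.interval);
  }
  return last; // attempts exhausted; caller validates the final response
};
```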
Conditional Execution
Conditional Steps
Execute steps conditionally based on variable state using flexible operators.
{
"variables": [
{ "name": "userType", "value": "admin" },
{ "name": "isProduction", "value": false }
],
"steps": [
{
"name": "Admin-only endpoint",
"method": "GET",
"endpoint": "/admin/stats",
"condition": {
"variable": "userType",
"operator": "equals",
"value": "admin"
}
}
]
}
Supported Operators
| Operator | Description |
|---|---|
| equals | Variable equals value |
| notEquals | Variable does not equal value |
| exists | Variable exists (not null/undefined) |
| notExists | Variable does not exist |
| greaterThan | Variable > value (numbers) |
| lessThan | Variable < value (numbers) |
| contains | Variable contains value (strings/arrays) |
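A condition evaluator covering the operators above could be sketched as follows; the actual condition-evaluator.ts may differ in naming and type coercion.

```typescript
// Sketch of a condition evaluator for the operators in the table above.
type Condition = { variable: string; operator: string; value?: unknown };

const evaluate = (cond: Condition, vars: Record<string, unknown>): boolean => {
  const actual = vars[cond.variable];
  switch (cond.operator) {
    case "equals": return actual === cond.value;
    case "notEquals": return actual !== cond.value;
    case "exists": return actual !== null && actual !== undefined;
    case "notExists": return actual === null || actual === undefined;
    case "greaterThan": return Number(actual) > Number(cond.value);
    case "lessThan": return Number(actual) < Number(cond.value);
    case "contains":
      return Array.isArray(actual)
        ? actual.includes(cond.value)
        : String(actual).includes(String(cond.value));
    default: return false;
  }
};

evaluate({ variable: "userType", operator: "equals", value: "admin" }, { userType: "admin" }); // → true
```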
Rate Limiting
Token Bucket Rate Limiter
Prevent API throttling by configuring global rate limits using a token bucket algorithm.
{
"rateLimit": {
"maxRequests": 10,
"perMilliseconds": 1000
},
"tests": [...]
}
This ensures no more than 10 requests per second across all steps in the test suite.
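A minimal token bucket honoring this configuration might look like the sketch below; the names are illustrative, and Rhea's rate-limiter.ts may differ in detail.

```typescript
// Minimal token bucket: a bucket of maxRequests tokens refills continuously
// over perMilliseconds; each request consumes one token.
const createRateLimiter = (maxRequests: number, perMilliseconds: number) => {
  let tokens = maxRequests;
  let lastRefill = Date.now();
  return {
    tryAcquire(): boolean {
      const now = Date.now();
      // Refill proportionally to elapsed time, capped at the bucket size.
      tokens = Math.min(
        maxRequests,
        tokens + ((now - lastRefill) / perMilliseconds) * maxRequests
      );
      lastRefill = now;
      if (tokens >= 1) {
        tokens -= 1;
        return true;
      }
      return false; // caller should wait and retry
    },
  };
};

const limiter = createRateLimiter(10, 1000);
limiter.tryAcquire(); // → true (bucket starts full)
```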
Authentication
Bearer Token Authentication
Configure authentication at the suite level. Tokens can be static or extracted from setup steps.
Static Token
{
"auth": {
"type": "bearer",
"token": "your-token-here"
},
"tests": [...]
}
Dynamic Token from Setup
{
"setup": [
{
"name": "Login",
"method": "POST",
"endpoint": "/auth/login",
"body": {
"username": "{{username}}",
"password": "{{password}}"
},
"extract": [
{ "name": "AUTH_TOKEN", "path": "$.token" }
]
}
],
"auth": {
"type": "bearer",
"token": "{{AUTH_TOKEN}}"
},
"tests": [...]
}
Teardown Operations
Cleanup Steps
Teardown steps execute after all tests complete, regardless of test success or failure. They’re like finally blocks, ensuring cleanup always happens.
{
"setup": [
{
"name": "Create test data",
"method": "POST",
"endpoint": "/test-data",
"extract": [
{ "name": "testId", "path": "$.id" }
]
}
],
"tests": [...],
"teardown": [
{
"name": "Delete test data",
"method": "DELETE",
"endpoint": "/test-data/{{testId}}"
}
]
}
Reporting & Metrics
Console Reporting
Project Rhea provides beautiful console output with detailed metrics including timing, retry counts, and validation results.
Output Formats
# Standard console output
pnpm execute -f tests.json
# JSON output
pnpm execute -f tests.json --json
# Save to file
pnpm execute -f tests.json --output results.json
# Dry run (validate without executing)
pnpm execute -f tests.json --dry-run
Metrics Collected
Test Metrics
| Metric | Description |
|---|---|
| Step Duration | Time taken for each step execution |
| Total Duration | Complete test suite execution time |
| Retry Counts | Number of retries per step |
| Success Rate | Percentage of passed tests |
| Validation Errors | Detailed validation failure messages |
| HTTP Status | Response status codes |
| Response Times | Actual vs expected response times |
Programmatic Usage
Using Rhea Programmatically
Execute Test Suite
import { executeTestSuite, reportToConsole } from './src/index';
import testSuite from './my-test-suite.json';
const result = await executeTestSuite(testSuite);
reportToConsole(result);
process.exit(result.success ? 0 : 1);
Validate Schema
import { TestSuiteSchema } from './src/schema/dsl.schema';
const result = TestSuiteSchema.safeParse(yourTestData);
if (!result.success) {
console.error('Validation errors:', result.error);
}
Design Principles
Core Principles
Functional & Concise
Arrow functions, no classes, minimal boilerplate. Code follows functional programming principles.
Type-Safe
Full TypeScript with Zod schema validation. Compile-time and runtime type safety.
Lean
No unnecessary abstractions or over-engineering. Every piece of code serves a purpose.
LLM-Friendly
Extensive schema descriptions for AI consumption. Zero-drift schema generation.
Developer-Friendly
Beautiful console output, clear error messages, intuitive API design.
Examples
Example Test Suites
The examples/ directory contains comprehensive examples demonstrating various features:
Basic Examples
| File | Description |
|---|---|
| simple-test.json | Basic GET requests |
| variable-test.json | Variable extraction and interpolation |
| crud-test.json | Complete CRUD lifecycle |
| auth-setup-example.json | Authentication with setup steps |
Advanced Examples
| File | Description |
|---|---|
| negative-testing-example.json | Error handling and expectFailure |
| async-polling-example.json | Retry logic and polling |
| array-validation-example.json | Array assertions |
| conditional-execution-example.json | Conditional step execution |
| schema-validation-example.json | JSON Schema validation |
| teardown-example.json | Cleanup with teardown steps |
| rate-limit-example.json | Rate limiting configuration |
| real-world.json | Comprehensive real-world scenario |
CLI Reference
Command Line Interface
Test Execution
pnpm execute -f <file> [options]
Options:
--dry-run Validate without executing
--json Output results as JSON
--output <file> Save results to file
--verbose Enable verbose logging
Schema Generation
pnpm execute --generate-schema [-o output.json]
Generates LLM-friendly JSON Schema from Zod definitions.
Ensures zero drift between implementation and schema.
LLM Test Generation
pnpm llm:generate <api-docs-file> [options]
Options:
-o, --output Output file path
-m, --model Override model
-p, --provider Override provider
-r, --reasoning Reasoning level (low/medium/high)
--prompt Additional user specifications
Testing
Test Suite
Project Rhea includes comprehensive test coverage with unit tests and end-to-end tests.
Run Tests
# Run all tests
pnpm test:all
# Run unit tests only
pnpm test:unit
# Run e2e tests
pnpm test:e2e
# Watch mode
pnpm test:watch
# With coverage
pnpm test:coverage
Test Philosophy
Tests follow a philosophy of minimal mocking, focusing on testable units without complex setup. Real implementations are tested where possible, with actual file I/O and schema validation.
Web UI
Web Interface
Project Rhea includes a Vue.js web interface for test suite management, execution, and LLM-powered generation.
Development
# Start development server
pnpm dev
# Build for production
pnpm build
# Start production server
pnpm start
The web UI provides:
- Test suite editor with Monaco editor
- Real-time test execution
- LLM-powered test generation with streaming
- Metrics visualization
- Test suite management
Future Enhancements
Roadmap
Potential future additions to Project Rhea:
Multiple Auth Methods
OAuth, API Key, Basic Auth support
Parallel Execution
Run tests in parallel for faster execution
Custom Validators
User-defined validation functions
Interceptors
Request/response interceptors for middleware
HTML Reports
Generate beautiful HTML test reports
CI/CD Integration
Helpers for GitHub Actions, GitLab CI, etc.
Data-Driven Tests
Parametric tests with data sets
Test Dependencies
Define test execution order and dependencies