Project Rhea

A lean, DSL-based end-to-end API testing framework powered by TypeScript, Zod, and LLM-friendly JSON schemas

Overview

What is Project Rhea?

Project Rhea is a revolutionary API testing framework designed for developers who value simplicity without sacrificing power. Built on TypeScript and Zod, it provides a clean JSON DSL that makes writing tests intuitive while offering enterprise-grade validation capabilities.

At its core, Rhea transforms API testing from a chore into a streamlined process. Whether you’re testing REST endpoints, validating complex response structures, or orchestrating multi-step workflows, Rhea handles it all with minimal configuration.

Simple JSON DSL

Define tests in clean, readable JSON. No complex setup or boilerplate required.

LLM-Powered Generation

Generate complete test suites from API documentation using OpenAI, Groq, and other providers.

Rich Validation

Status codes, headers, body matching, regex patterns, response times, array assertions, and JSON Schema validation.

Variable System

Extract values from responses and interpolate them across test steps with JSONPath.

Retry & Polling

Built-in retry logic for async operations and flaky endpoints with configurable policies.

Rate Limiting

Token bucket rate limiter for respecting API limits and preventing throttling.

Negative Testing

Test error conditions with expectFailure and continueOnFailure flags.

Conditional Execution

Run steps conditionally based on variable state with flexible operators.

Teardown Support

Cleanup operations that always run, like finally blocks, ensuring test hygiene.

Why Project Rhea?

Zero Config

Works out of the box with TypeScript. No complex setup, no configuration files, no boilerplate required.

AI-First Design

LLM-friendly schemas enable automated test generation from API documentation. Generate comprehensive test suites with a single command.

Type Safety

Full TypeScript support with Zod schema validation. Compile-time and runtime type safety ensures correctness.

Beautiful Reporting

Detailed console output with timing metrics, retry counts, and validation results. Clear, actionable feedback for every test run.

Developer Experience

Functional, concise codebase with no unnecessary abstractions. Clean API design that prioritizes simplicity and usability.

Simplicity First

Define tests in clean JSON without verbose setup, skipping the boilerplate that traditional frameworks require.

Quick Start

Installation

pnpm install

Run a test suite

pnpm execute -f examples/simple-test.json

Generate tests with AI

pnpm llm:generate api-docs.json -m gpt-5-nano

Handbook

Architecture

System Architecture

Project Rhea follows a modular, functional architecture with clear separation of concerns. The codebase is organized into distinct layers that handle different aspects of test execution and generation.

Directory Structure

src/
├── schema/
│   ├── dsl.schema.ts              # Zod schemas with descriptions
│   ├── types.ts                   # TypeScript types
│   └── llm-schema-generator.ts    # JSON Schema generator for LLMs
├── engine/
│   ├── http-client.ts             # HTTP request handling
│   ├── auth.ts                    # Authentication
│   ├── variable-resolver.ts       # {{variable}} interpolation
│   ├── jsonpath.ts                # JSONPath extraction
│   ├── expectation-validator.ts   # Response validation
│   ├── schema-validator.ts        # JSON Schema validation
│   ├── condition-evaluator.ts     # Conditional execution logic
│   ├── rate-limiter.ts            # Token bucket rate limiter
│   ├── step-executor.ts           # Execute individual steps
│   ├── test-executor.ts           # Orchestrate test runs
│   └── metrics/                   # Test metrics and reporting
├── reporting/
│   └── console-reporter.ts        # Format and display results
├── llm/
│   ├── client.ts                  # LLM client with retry logic
│   ├── config.ts                  # Configuration management
│   ├── registry.ts                # Provider factory
│   ├── providers/                 # Provider implementations
│   ├── prompts/                   # Prompt generation
│   ├── tasks/                     # Task definitions
│   └── metrics/                   # LLM metrics tracking
└── cli/
    ├── execute.ts                 # CLI entry point
    └── llm-generate.ts            # LLM-powered generation

Core Components

Test Execution Engine

executeTestSuite(suite: TestSuite, verbose?: boolean): Promise

Orchestrates the execution of a complete test suite, including setup steps, all tests, and teardown operations.

Parameters
Name | Type | Description
suite | TestSuite | The test suite definition conforming to the DSL schema
verbose | boolean (optional) | Enable verbose logging
Returns
Promise - Complete execution results with timing and metrics
Examples
const result = await executeTestSuite(testSuite);
console.log(`Passed: ${result.success}`);
console.log(`Duration: ${result.duration}ms`);

Validation System

validateExpectations(response: HttpResponse, expectations: Expectations, schemaRegistry?: Record): ValidationResult

Validates HTTP response against expectations including status codes, headers, body content, response times, and JSON Schema validation.

Parameters
Name | Type | Description
response | HttpResponse | The HTTP response to validate
expectations | Expectations | Validation expectations (status, headers, body, responseTime)
schemaRegistry | Record (optional) | Optional registry of JSON schemas for schema validation
Returns
ValidationResult - Validation result with pass/fail status and error details
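To make the checks concrete, here is a self-contained sketch of the validation loop. The types and helper below are simplified illustrations, not Rhea's actual Expectations or ValidationResult shapes (see src/engine/expectation-validator.ts for the real implementation):

```typescript
// Simplified sketch of expectation validation; the real Expectations
// type also supports regex patterns, array assertions, and JSON Schema.
type SimpleExpectations = {
  status?: number;
  headers?: Record<string, string>;
  responseTimeMs?: number; // maximum allowed response time
};

type SimpleResponse = {
  status: number;
  headers: Record<string, string>;
  durationMs: number;
};

const validate = (res: SimpleResponse, expect: SimpleExpectations): string[] => {
  const errors: string[] = [];
  if (expect.status !== undefined && res.status !== expect.status) {
    errors.push(`status: expected ${expect.status}, got ${res.status}`);
  }
  for (const [key, value] of Object.entries(expect.headers ?? {})) {
    if (res.headers[key.toLowerCase()] !== value) {
      errors.push(`header ${key}: expected ${value}`);
    }
  }
  if (expect.responseTimeMs !== undefined && res.durationMs > expect.responseTimeMs) {
    errors.push(`responseTime: ${res.durationMs}ms exceeds ${expect.responseTimeMs}ms`);
  }
  return errors;
};

const errors = validate(
  { status: 404, headers: { "content-type": "text/html" }, durationMs: 350 },
  { status: 200, headers: { "Content-Type": "application/json" }, responseTimeMs: 200 }
);
console.log(errors.length); // 3 (one per failed check)
```

Each failed check appends a message rather than throwing, which is what lets the reporter show every validation error for a step at once.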

Variable System

extractVariables(response: HttpResponse, extract: ExtractDefinition[]): Record

Extracts values from HTTP response using JSONPath expressions and stores them in variable context.

Parameters
Name | Type | Description
response | HttpResponse | HTTP response containing data to extract
extract | ExtractDefinition[] | Array of extraction definitions with JSONPath expressions
Returns
Record - Extracted variables keyed by name
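The extraction flow can be sketched with a dot-path subset of JSONPath; the real engine (src/engine/jsonpath.ts) supports full JSONPath expressions, including array indexing and filters:

```typescript
// Minimal sketch of JSONPath-style extraction over a response body.
type Extraction = { name: string; path: string };

// Resolves simple "$.a.b" paths only, as an illustrative subset of JSONPath.
const resolvePath = (data: unknown, path: string): unknown =>
  path
    .replace(/^\$\.?/, "")
    .split(".")
    .filter(Boolean)
    .reduce<unknown>((acc, key) => (acc as Record<string, unknown>)?.[key], data);

const extractVariables = (body: unknown, extract: Extraction[]): Record<string, unknown> =>
  Object.fromEntries(extract.map(({ name, path }) => [name, resolvePath(body, path)]));

const vars = extractVariables(
  { id: 42, user: { email: "jane@example.com" } },
  [
    { name: "userId", path: "$.id" },
    { name: "email", path: "$.user.email" },
  ]
);
console.log(vars.userId, vars.email); // 42 jane@example.com
```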

DSL Schema

Test Suite Schema

The DSL is defined using Zod schemas with extensive descriptions that make it LLM-friendly. The schema ensures type safety and provides clear validation rules.

Core Test Suite Structure

const TestSuiteSchema = z.object({
  name: z.string(),
  description: z.string().optional(),
  baseUrl: z.string().url(),
  variables: z.array(VariableSchema).optional(),
  setup: z.array(StepSchema).optional(),
  tests: z.array(TestSchema),
  teardown: z.array(StepSchema).optional(),
  auth: AuthSchema.optional(),
  rateLimit: RateLimitSchema.optional(),
  schemas: z.array(SchemaDefinitionSchema).optional(),
});

Step Definition

Step Fields

Field | Type | Required | Description
name | string | Yes | Descriptive name for the step
method | HttpMethod | Yes | HTTP method (GET, POST, PUT, PATCH, DELETE, etc.)
endpoint | string | Yes | Request endpoint path (supports {{variable}} interpolation)
headers | Header[] | No | Custom headers array
body | unknown | No | Request body (automatically JSON stringified)
expect | Expectations | No | Response validation expectations
extract | ExtractDefinition[] | No | Variable extraction definitions
delay | number | No | Delay in milliseconds before execution
retryPolicy | RetryPolicy | No | Retry configuration
condition | Condition | No | Conditional execution rule
expectFailure | boolean | No | Set to true for negative tests
continueOnFailure | boolean | No | Continue execution if step fails
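Putting several of these fields together, a step might look like the following. The endpoint and the exact shapes of headers and expect are illustrative assumptions, not copied from the canonical schema:

```json
{
  "name": "Create order",
  "method": "POST",
  "endpoint": "/orders",
  "headers": [{ "name": "X-Request-Id", "value": "{{requestId}}" }],
  "body": { "productId": "{{productId}}", "quantity": 2 },
  "delay": 500,
  "expect": { "status": 201 },
  "extract": [{ "name": "orderId", "path": "$.id" }],
  "continueOnFailure": false
}
```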

Validation Options
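The expect block supports the checks listed in the feature overview: status codes, headers, body matching, regex patterns, response times, array assertions, and JSON Schema validation. The field names in this sketch are illustrative assumptions rather than the canonical schema; consult the generated JSON Schema (pnpm execute --generate-schema) for the authoritative shape:

```json
{
  "expect": {
    "status": 200,
    "headers": { "content-type": "application/json" },
    "body": { "status": "active" },
    "responseTime": 500,
    "schema": "UserSchema"
  }
}
```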

LLM Integration

AI-Powered Test Generation

Project Rhea includes a sophisticated LLM integration system that can generate complete test suites from API documentation. The system supports multiple providers and includes comprehensive metrics tracking.

Multiple Providers

Support for OpenAI, Groq, and extensible architecture for additional providers

Structured Output

Uses zodResponseFormat for direct Zod schema validation of LLM outputs

Retry Logic

Automatic retry with validation error feedback (up to 3 attempts)

Metrics Tracking

Comprehensive logging of executions, token usage, costs, errors, and retries

Configuration Hierarchy

Global defaults → task-specific config → runtime overrides

Reasoning Levels

Configurable reasoning depth (low, medium, high) for different complexity levels

Usage

CLI Command

pnpm llm:generate api-docs.json \
  -m gpt-5-nano \
  -p openai \
  -r medium \
  -o generated-tests.json \
  --prompt "Focus on authentication flows"

Programmatic Usage

import { createTestSuiteTask } from './src/llm/tasks/create-test-suite';
import { runTask } from './src/llm/client';

const task = createTestSuiteTask(
  {
    documentation: '... API docs ...',
    userPrompt: 'Generate comprehensive test suite',
  },
  {
    model: 'gpt-5-nano',
    provider: 'openai',
    reasoning: 'medium',
  }
);

const testSuite = await runTask(task);

Supported Models

OpenAI

  • gpt-4.1-mini - Balanced performance and cost
  • gpt-5-nano - Ultra-low cost
  • text-embedding-3-small - Embeddings

Groq

  • openai/gpt-oss-20b - Fast inference
  • openai/gpt-oss-120b - Higher capability
  • meta-llama/llama-4-scout-17b-16e-instruct - Efficient
  • meta-llama/llama-4-maverick-17b-128e-instruct - Advanced

Metrics System

All LLM executions are tracked with detailed metrics saved to /out/llm-metrics/ as JSONL files:

Metrics Files

File | Description
executions.jsonl | Execution timestamps, duration, success/failure
token-usage.jsonl | Prompt tokens, cached tokens, completion tokens
costs.jsonl | Cost calculations based on pricing table
errors.jsonl | Full error messages and stack traces
validation-failures.jsonl | Zod validation error details
retries.jsonl | Retry attempt logs with reasons
summary.json | Aggregated statistics by task and model

Variable System

Variable Extraction and Interpolation

The variable system allows you to extract values from responses using JSONPath and use them in subsequent steps with {{variable}} syntax.

Extract Variables

{
  "steps": [
    {
      "name": "Create user",
      "method": "POST",
      "endpoint": "/users",
      "body": { "name": "John Doe" },
      "extract": [
        { "name": "userId", "path": "$.id" },
        { "name": "email", "path": "$.email" }
      ]
    },
    {
      "name": "Get user details",
      "method": "GET",
      "endpoint": "/users/{{userId}}"
    }
  ]
}

Initial Variables

{
  "variables": [
    { "name": "username", "value": "test@example.com" },
    { "name": "password", "value": "password123" }
  ],
  "tests": [...]
}
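Interpolation itself is plain string substitution over the variable context. A minimal sketch of the {{variable}} replacement (the real resolver lives in src/engine/variable-resolver.ts):

```typescript
// Minimal sketch of {{variable}} interpolation over a variable context.
const interpolate = (template: string, vars: Record<string, unknown>): string =>
  template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match // leave unknown variables untouched
  );

console.log(interpolate("/users/{{userId}}/posts", { userId: 42 }));
// /users/42/posts
```

Leaving unknown variables untouched (rather than substituting an empty string) makes missing extractions visible in the request log.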

JSONPath Expressions

JSONPath is used for extracting values from JSON responses. Common patterns:

JSONPath Examples

Path | Description
$.id | Root-level id field
$.user.email | Nested email field
$.items[0].name | First item's name
$.items[*].id | All item IDs
$.items[?(@.status == 'active')] | Filtered items

Retry & Polling

Retry Policies

Configure retry policies for handling flaky endpoints or async operations that require polling.

Basic Retry

{
  "name": "Poll until ready",
  "method": "GET",
  "endpoint": "/status/{{jobId}}",
  "retryPolicy": {
    "maxAttempts": 5,
    "interval": 1000,
    "until": { "$.status": "completed" }
  }
}

Retry Policy Fields

Field | Type | Description
maxAttempts | number | Maximum number of retry attempts
interval | number | Delay between attempts in milliseconds
until | object | Condition object with JSONPath → value mapping
onStatus | number[] | Optional array of status codes to retry on
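The polling semantics reduce to a bounded retry loop: call the endpoint, test the until condition, and sleep for interval between attempts. A self-contained sketch (simplified; the real loop in src/engine/step-executor.ts also honors onStatus):

```typescript
// Sketch of bounded polling: retry until the predicate matches or
// maxAttempts is exhausted, sleeping `interval` ms between attempts.
type RetryPolicy = { maxAttempts: number; interval: number };

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

const pollUntil = async <T>(
  fn: () => Promise<T>,
  done: (result: T) => boolean,
  policy: RetryPolicy
): Promise<T> => {
  let last!: T;
  for (let attempt = 1; attempt <= policy.maxAttempts; attempt++) {
    last = await fn();
    if (done(last)) return last;
    if (attempt < policy.maxAttempts) await sleep(policy.interval);
  }
  return last; // caller decides whether the final result counts as failure
};

// Simulated async job that completes on the third check.
let checks = 0;
const checkJob = async () => ({ status: ++checks >= 3 ? "completed" : "pending" });

pollUntil(checkJob, (r) => r.status === "completed", { maxAttempts: 5, interval: 10 })
  .then((r) => console.log(r.status)); // completed
```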

Conditional Execution

Conditional Steps

Execute steps conditionally based on variable state using flexible operators.

{
  "variables": [
    { "name": "userType", "value": "admin" },
    { "name": "isProduction", "value": false }
  ],
  "steps": [
    {
      "name": "Admin-only endpoint",
      "method": "GET",
      "endpoint": "/admin/stats",
      "condition": {
        "variable": "userType",
        "operator": "equals",
        "value": "admin"
      }
    }
  ]
}

Supported Operators

Operator | Description
equals | Variable equals value
notEquals | Variable does not equal value
exists | Variable exists (not null/undefined)
notExists | Variable does not exist
greaterThan | Variable > value (numbers)
lessThan | Variable < value (numbers)
contains | Variable contains value (strings/arrays)
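Condition evaluation is a straightforward dispatch on the operator. A self-contained sketch of the semantics above (the real logic lives in src/engine/condition-evaluator.ts):

```typescript
// Sketch of condition evaluation against a variable context.
type Condition = { variable: string; operator: string; value?: unknown };

const evaluate = (cond: Condition, vars: Record<string, unknown>): boolean => {
  const actual = vars[cond.variable];
  switch (cond.operator) {
    case "equals":      return actual === cond.value;
    case "notEquals":   return actual !== cond.value;
    case "exists":      return actual !== null && actual !== undefined;
    case "notExists":   return actual === null || actual === undefined;
    case "greaterThan": return Number(actual) > Number(cond.value);
    case "lessThan":    return Number(actual) < Number(cond.value);
    case "contains":
      return Array.isArray(actual)
        ? actual.includes(cond.value)
        : String(actual).includes(String(cond.value));
    default: return false; // unknown operator: skip the step
  }
};

const demo = { userType: "admin", retries: 2 };
console.log(evaluate({ variable: "userType", operator: "equals", value: "admin" }, demo)); // true
console.log(evaluate({ variable: "retries", operator: "greaterThan", value: 5 }, demo));   // false
```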

Rate Limiting

Token Bucket Rate Limiter

Prevent API throttling by configuring global rate limits using a token bucket algorithm.

{
  "rateLimit": {
    "maxRequests": 10,
    "perMilliseconds": 1000
  },
  "tests": [...]
}

This ensures no more than 10 requests per second across all steps in the test suite.
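A token bucket holds up to maxRequests tokens and refills them continuously over perMilliseconds; each request consumes one token and must wait when the bucket is empty. This sketch shows only the refill-and-consume arithmetic; the real limiter in src/engine/rate-limiter.ts presumably delays requests rather than rejecting them:

```typescript
// Sketch of a token bucket: maxRequests tokens refill over perMilliseconds.
const createBucket = (maxRequests: number, perMilliseconds: number) => {
  let tokens = maxRequests;
  let last = Date.now();
  return {
    // Returns true if a request may proceed now, consuming one token.
    tryAcquire(now: number = Date.now()): boolean {
      // Refill proportionally to elapsed time, capped at the bucket size.
      tokens = Math.min(maxRequests, tokens + ((now - last) * maxRequests) / perMilliseconds);
      last = now;
      if (tokens >= 1) {
        tokens -= 1;
        return true;
      }
      return false;
    },
  };
};

const bucket = createBucket(10, 1000); // 10 requests per second
const t0 = Date.now();
const granted = Array.from({ length: 12 }, () => bucket.tryAcquire(t0));
console.log(granted.filter(Boolean).length); // 10
```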

Authentication

Bearer Token Authentication

Configure authentication at the suite level. Tokens can be static or extracted from setup steps.

Static Token

{
  "auth": { "type": "bearer", "token": "your-token-here" },
  "tests": [...]
}

Dynamic Token from Setup

{
  "setup": [
    {
      "name": "Login",
      "method": "POST",
      "endpoint": "/auth/login",
      "body": { "username": "{{username}}", "password": "{{password}}" },
      "extract": [{ "name": "AUTH_TOKEN", "path": "$.token" }]
    }
  ],
  "auth": { "type": "bearer", "token": "{{AUTH_TOKEN}}" },
  "tests": [...]
}

Teardown Operations

Cleanup Steps

Teardown steps execute after all tests complete, regardless of test success or failure. They’re like finally blocks, ensuring cleanup always happens.

{
  "setup": [
    {
      "name": "Create test data",
      "method": "POST",
      "endpoint": "/test-data",
      "extract": [{ "name": "testId", "path": "$.id" }]
    }
  ],
  "tests": [...],
  "teardown": [
    {
      "name": "Delete test data",
      "method": "DELETE",
      "endpoint": "/test-data/{{testId}}"
    }
  ]
}

Reporting & Metrics

Console Reporting

Project Rhea provides beautiful console output with detailed metrics including timing, retry counts, and validation results.

Output Formats

# Standard console output
pnpm execute -f tests.json

# JSON output
pnpm execute -f tests.json --json

# Save to file
pnpm execute -f tests.json --output results.json

# Dry run (validate without executing)
pnpm execute -f tests.json --dry-run

Metrics Collected

Test Metrics

Metric | Description
Step Duration | Time taken for each step execution
Total Duration | Complete test suite execution time
Retry Counts | Number of retries per step
Success Rate | Percentage of passed tests
Validation Errors | Detailed validation failure messages
HTTP Status | Response status codes
Response Times | Actual vs expected response times

Programmatic Usage

Using Rhea Programmatically

Execute Test Suite

import { executeTestSuite, reportToConsole } from './src/index';
import testSuite from './my-test-suite.json';

const result = await executeTestSuite(testSuite);
reportToConsole(result);
process.exit(result.success ? 0 : 1);

Validate Schema

import { TestSuiteSchema } from './src/schema/dsl.schema';

const result = TestSuiteSchema.safeParse(yourTestData);
if (!result.success) {
  console.error('Validation errors:', result.error);
}

Design Principles

Core Principles

Functional & Concise

Arrow functions, no classes, minimal boilerplate. Code follows functional programming principles.

Type-Safe

Full TypeScript with Zod schema validation. Compile-time and runtime type safety.

Lean

No unnecessary abstractions or over-engineering. Every piece of code serves a purpose.

LLM-Friendly

Extensive schema descriptions for AI consumption. Zero-drift schema generation.

Developer-Friendly

Beautiful console output, clear error messages, intuitive API design.

Examples

Example Test Suites

The examples/ directory contains comprehensive examples demonstrating various features:

Basic Examples

File | Description
simple-test.json | Basic GET requests
variable-test.json | Variable extraction and interpolation
crud-test.json | Complete CRUD lifecycle
auth-setup-example.json | Authentication with setup steps

Advanced Examples

File | Description
negative-testing-example.json | Error handling and expectFailure
async-polling-example.json | Retry logic and polling
array-validation-example.json | Array assertions
conditional-execution-example.json | Conditional step execution
schema-validation-example.json | JSON Schema validation
teardown-example.json | Cleanup with teardown steps
rate-limit-example.json | Rate limiting configuration
real-world.json | Comprehensive real-world scenario

CLI Reference

Command Line Interface

Test Execution

pnpm execute -f <file> [options]

Options:
  --dry-run    Validate without executing
  --json       Output results as JSON
  --output     Save results to file
  --verbose    Enable verbose logging

Schema Generation

pnpm execute --generate-schema [-o output.json]

Generates an LLM-friendly JSON Schema from the Zod definitions, ensuring zero drift between implementation and schema.

LLM Test Generation

pnpm llm:generate <api-docs> [options]

Options:
  -o, --output     Output file path
  -m, --model      Override model
  -p, --provider   Override provider
  -r, --reasoning  Reasoning level (low/medium/high)
  --prompt         Additional user specifications

Testing

Test Suite

Project Rhea includes comprehensive test coverage with unit tests and end-to-end tests.

Run Tests

# Run all tests
pnpm test:all

# Run unit tests only
pnpm test:unit

# Run e2e tests
pnpm test:e2e

# Watch mode
pnpm test:watch

# With coverage
pnpm test:coverage

Test Philosophy

Tests follow a philosophy of minimal mocking, focusing on testable units without complex setup. Real implementations are tested where possible, with actual file I/O and schema validation.

Web UI

Web Interface

Project Rhea includes a Vue.js web interface for test suite management, execution, and LLM-powered generation.

Development

# Start development server
pnpm dev

# Build for production
pnpm build

# Start production server
pnpm start

The web UI provides:

  • Test suite editor with Monaco editor
  • Real-time test execution
  • LLM-powered test generation with streaming
  • Metrics visualization
  • Test suite management

Future Enhancements

Roadmap

Potential future additions to Project Rhea:

Multiple Auth Methods

OAuth, API Key, Basic Auth support

Parallel Execution

Run tests in parallel for faster execution

Custom Validators

User-defined validation functions

Interceptors

Request/response interceptors for middleware

HTML Reports

Generate beautiful HTML test reports

CI/CD Integration

Helpers for GitHub Actions, GitLab CI, etc.

Data-Driven Tests

Parametric tests with data sets

Test Dependencies

Define test execution order and dependencies