Project Iris
Pure JSON Data Transformation

Overview
What is Project Iris?
At its core, Project Iris solves a common problem: transforming data from one shape to another. Whether you’re integrating APIs, reshaping database records, or preparing data for LLMs, Project Iris provides a safe, declarative, and AI-friendly way to express transformations.
The Problem It Solves
Imagine you receive customer data from one system and need to transform it for another:
Data Transformation Example
Given this source data:
{
"user": {
"firstName": "Ada",
"lastName": "Lovelace",
"email": "ada@example.com"
},
"orders": [
{ "id": "o1", "totalCents": 1200 },
{ "id": "o2", "totalCents": 2599 }
]
}
Applying this transformation config:
{
"fullName": {
"$join": {
"parts": [{ "$ref": "$.user.firstName" }, { "$ref": "$.user.lastName" }],
"sep": " "
}
},
"orderIds": {
"$pluck": {
"over": { "$ref": "$.orders" },
"path": "id"
}
},
"totalCents": {
"$sum": {
"over": {
"$pluck": {
"over": { "$ref": "$.orders" },
"path": "totalCents"
}
}
}
}
}
Produces this output:
{
"fullName": "Ada Lovelace",
"orderIds": ["o1", "o2"],
"totalCents": 3799
}
Why Project Iris?
Pure JSON
Transformations are just JSON—no code strings, no eval(), no security risks
LLM-Friendly
Perfect for AI-generated transformations—LLMs understand JSON structure
Type-Safe
Built with Zod schemas for validation before execution
Declarative
Express what you want, not how to compute it
Composable
Build complex transformations from simple operations
Functional
Immutable operations, no side effects
Playground
Coming Soon
An interactive playground where you can experiment with Project Iris transformations in real-time.
Handbook
Getting Started
Installation
pnpm add @chatlyncom/project-iris
Basic Usage
Basic Example
import { evaluate } from "@chatlyncom/project-iris";
const source = {
guest: { firstName: "Ada", lastName: "Lovelace" },
orders: [
{ id: "o1", totalCents: 1200 },
{ id: "o2", totalCents: 2599 },
],
};
const config = {
fullName: {
$join: {
parts: [{ $ref: "$.guest.firstName" }, { $ref: "$.guest.lastName" }],
sep: " ",
},
},
orderIds: { $pluck: { over: { $ref: "$.orders" }, path: "id" } },
totalCents: {
$sum: {
over: {
$pluck: {
over: { $ref: "$.orders" },
path: "totalCents",
},
},
},
},
};
const output = evaluate(config, source);
// {
// fullName: "Ada Lovelace",
// orderIds: ["o1", "o2"],
// totalCents: 3799
// }
Validation
Always validate your configs before evaluation:
import { validateConfig } from "@chatlyncom/project-iris";
const result = validateConfig(config);
if (!result.ok) {
console.error("Validation errors:", result.issues);
// Handle validation errors
}
Concepts
Key Features
Architecture
Architecture Overview
Project Iris follows a layered architecture:
Schema Layer
Zod schemas define valid node shapes and validate inputs
Sugar Layer
Converts convenient sugar ops ($sum, $pluck) to core ops
Engine Layer
Evaluates nodes recursively against source data
Operation Layer
Individual operation implementations
Evaluation Flow
Input Config (JSON)
↓
[Validate with Zod]
↓
[Desugar sugar ops → core ops]
↓
[Create evaluation context]
↓
[Evaluate nodes recursively]
↓
Output (JSON)
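In terms of the public API, the validation step maps to validateConfig and the remaining steps happen inside evaluate. A minimal sketch of running the flow end to end (the config and source data here are illustrative):

import { validateConfig, evaluate } from "@chatlyncom/project-iris";

const config = {
  fullName: {
    $join: {
      parts: [{ $ref: "$.user.firstName" }, { $ref: "$.user.lastName" }],
      sep: " ",
    },
  },
};

// Schema Layer: validate the config once up front
const check = validateConfig(config);
if (!check.ok) {
  throw new Error(`Invalid config: ${JSON.stringify(check.issues)}`);
}

// The Sugar, Engine, and Operation layers run inside evaluate():
// desugar sugar ops, create the evaluation context, evaluate nodes recursively
const output = evaluate(config, {
  user: { firstName: "Ada", lastName: "Lovelace" },
});
// => { fullName: "Ada Lovelace" }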
Compilation & IR: Production Performance
Important: For production code, always use compiled configs. Compilation provides significant performance improvements by pre-processing configs once and avoiding repeated validation and desugaring overhead.
What Compilation Does
The compileConfig function transforms a raw config into an optimized Intermediate Representation (IR):
- Pre-validation: Validates the config once with Zod schemas
- Pre-desugaring: Converts all sugar ops ($sum, $pluck, etc.) to core ops
- Metadata extraction: Collects statistics and extracts all JSONPaths used in $ref nodes
- Fingerprinting: Generates a SHA-256 hash for change detection and caching
- Cache preparation: Pre-extracts JSONPaths to warm the parsing cache
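A brief sketch of compiling a config and reading the fingerprint from the resulting IR (meta.fingerprintSha256 is referenced under Best Practices below; any other details of the IR's shape are not assumed here):

import { compileConfig } from "@chatlyncom/project-iris";

const rawConfig = {
  orderIds: { $pluck: { over: { $ref: "$.orders" }, path: "id" } },
};

// Validation, desugaring, metadata extraction, and fingerprinting happen here, once
const compiled = compileConfig(rawConfig, {
  name: "order-ids",
  compilerVersion: "1.0.0",
});

// The fingerprint identifies this exact config, e.g. for change detection
console.log(compiled.meta.fingerprintSha256);

// The IR is plain JSON, so it can be persisted as-is and executed later
const serialized = JSON.stringify(compiled);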
Performance Benefits
Compilation provides substantial performance improvements:
- No runtime validation: Zod validation happens once at compile time
- No runtime desugaring: Sugar ops are converted to core ops once
- Pre-warmed JSONPath cache: All $ref paths are parsed and cached upfront
- Smaller payload: Desugared configs are typically more compact
Benchmark results show compiled configs execute 2-5x faster than raw configs, depending on complexity.
Best Practices
- Always compile before saving: Compile configs when users create or update them, not when executing
- Save compiled IRs to database: Store the entire IrisIr object in your database
- Use execute() for production: Use execute() instead of evaluate() when working with compiled configs
- Fingerprint for caching: Use meta.fingerprintSha256 to detect config changes and invalidate caches (see the sketch after this list)
- Version tracking: Include compilerVersion to ensure compatibility across updates
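For example, the fingerprint makes it cheap to skip redundant writes and to invalidate caches only when a config actually changed. A hedged sketch of that pattern (the db helpers are placeholders for your own storage layer, not part of Project Iris):

import { compileConfig } from "@chatlyncom/project-iris";

// Placeholder storage API; substitute your own database layer.
declare const db: {
  mappings: {
    findById(id: string): Promise<{ compiledConfig: { meta: { fingerprintSha256: string } } } | null>;
    save(record: { id: string; compiledConfig: unknown }): Promise<void>;
  };
};

async function saveMapping(id: string, rawConfig: Parameters<typeof compileConfig>[0]) {
  const compiled = compileConfig(rawConfig, { name: id, compilerVersion: "1.0.0" });
  const existing = await db.mappings.findById(id);

  // Unchanged fingerprint: nothing to write, downstream caches stay valid
  if (existing?.compiledConfig.meta.fingerprintSha256 === compiled.meta.fingerprintSha256) {
    return existing.compiledConfig;
  }

  await db.mappings.save({ id, compiledConfig: compiled });
  return compiled;
}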
Examples
API Reference
Core Functions
evaluate(config: IrisConfig, source: JsonObject, options?: Partial): JsonValue
Evaluates a Project Iris config against source data.
Parameters
| Name | Type | Description |
|---|---|---|
| config | IrisConfig | The transformation config |
| source | JsonObject | Input JSON data |
| options | Partial | Engine options (currently unused) |
Returns
JsonValue - Transformed output
Throws
EvaluationError on validation or runtime errors
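Example
A short usage sketch; error handling is shown with a plain try/catch, since whether EvaluationError is exported for instanceof checks is not specified here.

import { evaluate } from "@chatlyncom/project-iris";

const config = {
  totalCents: {
    $sum: {
      over: { $pluck: { over: { $ref: "$.orders" }, path: "totalCents" } },
    },
  },
};

try {
  const output = evaluate(config, {
    orders: [
      { id: "o1", totalCents: 1200 },
      { id: "o2", totalCents: 2599 },
    ],
  });
  // => { totalCents: 3799 }
} catch (err) {
  // evaluate throws EvaluationError on validation or runtime errors
  console.error("Evaluation failed:", err);
}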
Compilation Functions
compileConfig(config: IrisConfig, options?: CompileOptions): IrisIr
Compiles a raw config into an optimized Intermediate Representation (IR). Use this for production code: compile once when saving configs, then use execute() for fast evaluation.
Parameters
| Name | Type | Description |
|---|---|---|
| config | IrisConfig | The transformation config to compile |
| options | CompileOptions | Optional compilation options: `name?: string` (human-readable name for the config), `compilerVersion?: string` (compiler version, default: 'dev') |
Returns
IrisIr - Compiled IR artifact ready for execution
Throws
Error if config validation fails
Examples
Example
const compiled = compileConfig(rawConfig, {
name: "customer-mapping",
compilerVersion: "1.0.0",
});
// Save compiled to database for later use
Production Guide
Compilation & IR
For production code, always use compiled configs. Compilation provides significant performance improvements.
Step 1: Compile once (when saving to database)
import { compileConfig } from "@chatlyncom/project-iris";
const compiled = compileConfig(rawConfig, {
name: "customer-mapping",
compilerVersion: "1.0.0",
});
// Save the compiled IR to your database
await db.mappings.save({
id: "mapping-123",
compiledConfig: compiled,
});
Step 2: Execute compiled configs (when processing data)
import { execute } from "@chatlyncom/project-iris";
// Load compiled config from database
const mapping = await db.mappings.findById("mapping-123");
const compiledConfig = mapping.compiledConfig;
// Execute against source data - much faster!
const output = execute(compiledConfig, sourceData);
Performance Benefits
Compiled configs execute 2-5x faster than raw configs, depending on complexity.
Best Practices
- Always compile before saving
- Save compiled IRs to database
- Use execute() for production
- Fingerprint for caching
- Version tracking
Development
Setup & Development
Prerequisites
- Node.js >= 18.18
- pnpm >= 9.1.0
Installation
# Clone the repository
git clone https://github.com/ChatlynCom/project-iris.git
cd project-iris
# Install dependencies
pnpm install
# Build the project
pnpm build
# Run tests
pnpm test
# Run tests in watch mode
pnpm test:watch
Development Workflow
# Development build with watch mode
pnpm dev
# Type checking
pnpm typecheck
# Linting
pnpm lint
# Formatting
pnpm format
# Link locally (for testing in other projects)
pnpm link
# Watch and sync changes
pnpm watch
Contributing
Contributions are welcome! Please:
- Follow the existing code style
- Add tests for new features
- Update documentation
- Ensure pnpm build and pnpm test pass
- Use meaningful commit messages