Engineering

NestJS Architecture in Practice: Code Patterns That Actually Work

Concrete Patterns for Sustainable Architecture

30 min read · Max
#NestJs #Typescript #Architecture #Zod #Software-Design

This article is part of a series


The NestJS Architecture Series

2/2

How to use NestJS as infrastructure rather than architecture. This series explores why service-heavy codebases inevitably collapse, and demonstrates the patterns that keep business logic testable, boundaries explicit, and changes predictable.

This is a follow-up to The Architecture Trap: Why NestJS Makes It Too Easy To Stop Thinking. If you haven’t read that one yet, please do. This piece assumes you already understand why treating NestJS services as the architecture is a problem, why TypeScript’s nature matters, and why domain logic doesn’t belong mixed with framework code.

What follows is concrete: the patterns, the code, the decisions, and the boundaries that make NestJS projects maintainable rather than archaeological. We’ll start with the simplest approach—the one you should reach for by default—and then explore more sophisticated patterns for when simple isn’t sufficient.


Quick Recap: Services as Orchestrators

The foundation from the previous article bears repeating, briefly.

Services in NestJS are not the domain. They are coordination points. They sit between three layers:

  • The HTTP boundary above them (controllers, DTOs, validation)
  • The domain logic beside them (pure functions, business rules)
  • The infrastructure below them (repositories, email, external APIs)

Services load data, call domain functions, save results, and route side effects to the appropriate places. They do not contain the business logic. They invoke it.

Everything else follows from this.


The Default Pattern: Load, Calculate, Save

For most CRUD operations and straightforward workflows, the simplest pattern is also the correct one: the service loads everything the domain needs upfront, passes it to a pure function, and then handles the results.

Here’s what this looks like in practice.

Domain Function: Pure Calculation

The domain function lives in a plain TypeScript file. No decorators, no NestJS imports, no framework awareness at all.

// libs/billing/billing.logic.ts
import { Account } from '@/accounts/account.domain';
import { Billing } from './billing.domain';
import { TaxRate } from './tax-rate.domain';

export interface BillingCalculationInput {
  account: Account;
  billings: Billing[];
  taxRates: TaxRate[];
}

export interface BillingResult {
  subtotal: number;
  tax: number;
  total: number;
  shouldNotifyCustomer: boolean;
}

export const calculateBilling = (input: BillingCalculationInput): BillingResult => {
  const subtotal = input.billings.reduce((sum, b) => sum + b.amount, 0);
  const taxRate = input.taxRates.find(t => t.country === input.account.country);
  const tax = subtotal * (taxRate?.rate ?? 0);
  const total = subtotal + tax;
  const shouldNotifyCustomer = total > 1000;

  return { subtotal, tax, total, shouldNotifyCustomer };
};

This function can be tested without spinning up NestJS, without mocking repositories, without DI containers. You pass data in, you get data out. That’s the entire contract.

Service: Orchestration Layer

The service’s job is to gather the inputs, invoke the domain, and handle what comes back.

// billing/billing.service.ts
import { Injectable } from '@nestjs/common';
import { calculateBilling } from '@/libs/billing/billing.logic';
import { AccountRepository } from '@/accounts/account.repository';
import { BillingRepository } from './billing.repository';
import { TaxRateRepository } from './tax-rate.repository';
import { EmailService } from '@/infrastructure/email/email.service';
import { BillingResponseDto } from './dto/billing.response.dto';
import { toDto } from './billing.mapper';

@Injectable()
export class BillingService {
  constructor(
    private readonly accountRepo: AccountRepository,
    private readonly billingRepo: BillingRepository,
    private readonly taxRateRepo: TaxRateRepository,
    private readonly emailService: EmailService,
  ) {}

  async getBilling(accountId: string): Promise<BillingResponseDto> {
    // Load everything the domain needs
    const [account, billings, taxRates] = await Promise.all([
      this.accountRepo.getAccount(accountId),
      this.billingRepo.getBillings(accountId),
      this.taxRateRepo.getTaxRates(),
    ]);

    // Domain calculates
    const result = calculateBilling({ account, billings, taxRates });

    // Service handles consequences
    if (result.shouldNotifyCustomer) {
      await this.emailService.sendBillingNotification(account.email, result);
    }

    return toDto(result);
  }
}

Notice what the service does not do:

  • It does not contain any logic about tax calculation
  • It does not decide when to send email based on business rules
  • It does not transform or validate data beyond loading and saving

All of that lives in the domain or at the boundaries (validation, mapping). The service just wires things together.

Controller: HTTP Boundary

The controller remains boring, which is exactly what we want.

// billing/billing.controller.ts
import { Controller, Get, Param } from '@nestjs/common';
import { BillingService } from './billing.service';
import { BillingResponseDto } from './dto/billing.response.dto';

@Controller('billing')
export class BillingController {
  constructor(private readonly billingService: BillingService) {}

  @Get(':accountId')
  async getBilling(@Param('accountId') accountId: string): Promise<BillingResponseDto> {
    return this.billingService.getBilling(accountId);
  }
}

No business logic. No transformation. No decisions. Just routing.

Why This Works

This pattern scales remarkably well because each piece has a single, clear responsibility:

  • Domain functions express business rules in pure TypeScript
  • Services orchestrate loading and saving
  • Controllers route HTTP requests to services
  • Repositories handle data access and mapping

When you need to change how tax is calculated, you touch billing.logic.ts. When you need to add a new API endpoint, you touch the controller. When you need to optimize a query, you touch the repository. The boundaries force the changes into the right places.

And critically: testing domain logic requires no framework at all.

// libs/billing/billing.logic.test.ts
import { describe, it, expect } from 'vitest';
import { calculateBilling } from './billing.logic';

describe('calculateBilling', () => {
  it('applies correct tax rate based on country', () => {
    const result = calculateBilling({
      account: { country: 'US', /* ... */ },
      billings: [{ amount: 100 }],
      taxRates: [{ country: 'US', rate: 0.1 }],
    });

    expect(result.tax).toBe(10);
    expect(result.total).toBe(110);
  });

  it('notifies customer when total exceeds threshold', () => {
    const result = calculateBilling({
      account: { country: 'US', /* ... */ },
      billings: [{ amount: 1500 }],
      taxRates: [{ country: 'US', rate: 0.1 }],
    });

    expect(result.shouldNotifyCustomer).toBe(true);
  });
});

No mocking. No setup. No teardown. Just inputs and assertions. This is the testing win that justifies the entire approach.

Contrast this with testing a service that contains the business logic:

// The alternative: testing a service (painful)
describe('BillingService', () => {
  let service: BillingService;
  let accountRepo: jest.Mocked<AccountRepository>;
  let billingRepo: jest.Mocked<BillingRepository>;
  let taxRateRepo: jest.Mocked<TaxRateRepository>;
  let emailService: jest.Mocked<EmailService>;

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [
        BillingService,
        { provide: AccountRepository, useValue: createMock<AccountRepository>() },
        { provide: BillingRepository, useValue: createMock<BillingRepository>() },
        { provide: TaxRateRepository, useValue: createMock<TaxRateRepository>() },
        { provide: EmailService, useValue: createMock<EmailService>() },
      ],
    }).compile();

    service = module.get(BillingService);
    accountRepo = module.get(AccountRepository);
    // ... repeat for each dependency
  });

  it('applies correct tax rate', async () => {
    accountRepo.getAccount.mockResolvedValue({ country: 'US' });
    billingRepo.getBillings.mockResolvedValue([{ amount: 100 }]);
    taxRateRepo.getTaxRates.mockResolvedValue([{ country: 'US', rate: 0.1 }]);

    const result = await service.getBilling('account-123');

    expect(result.tax).toBe(10);
  });
});

This is dramatically more ceremony for the same assertion. And when you have 50 domain functions, the difference compounds.


The N+1 Problem: Why Repository Method Names Matter

One of the most damaging patterns in projects where developers don’t write SQL directly is the hidden N+1 query. It looks innocent in code but murders performance in production.

The Anti-Pattern

Consider generating a billing report for all accounts:

@Injectable()
export class BadBillingService {
  constructor(
    private readonly accountService: AccountService,
    private readonly billingService: BillingService,
    private readonly exemptionService: ExemptionService,
  ) {}

  async generateReport(): Promise<BillingReportEntry[]> {
    const accounts = await this.accountService.getAllAccounts();

    const report: BillingReportEntry[] = [];
    for (const account of accounts) {
      const billings = await this.billingService.getBillingsForAccount(account.id);
      const overdue = await this.billingService.getOverdueBillingsForAccount(account.id);
      const exemptions = await this.exemptionService.getExemptionsForAccount(account.id);

      report.push({
        account,
        billings,
        overdueBillings: overdue,
        exemptions,
      });
    }

    return report;
  }
}

This looks completely reasonable. You’re calling service methods with clear names. The logic reads naturally. There’s nothing obviously wrong.

But if you have 100 accounts, this executes:

  • 1 database query to load accounts
  • 100 database queries to load billings
  • 100 database queries to load overdue billings
  • 100 database queries to load exemptions

That’s 301 database round trips. It’s catastrophically slow, it exhausts connection pools, and it scales terribly.

The critical problem is that service methods completely hide whether they touch the database. When you call accountService.getAllAccounts(), you have no idea if that’s:

  • A database query
  • Cached in Redis
  • Calling an external API
  • A pure computation

And when you call it in a loop, you have no idea you’re creating an N+1 problem until production melts.

This is why service-heavy architectures are so dangerous. Every method looks equally cheap. The cost is invisible until you profile it.

The Solution: Explicit Data Shapes

The fix is to make repository methods explicit about what they load and how they join it.

First, define the domain object that represents the aggregate:

// billing/billing.domain.ts
export type AccountWithBillingDetails = {
  account: Account;
  billings: Billing[];
  overdueBillings: Billing[];
  exemptions: Exemption[];
};

Then create a repository method that returns this shape in a single query:

// accounts/account.repository.ts
@Injectable()
export class AccountRepository {
  async getAccountsWithBillingDetails(): Promise<AccountWithBillingDetails[]> {
    const results = await this.dataSource
      .createQueryBuilder(AccountEntity, 'account')
      .leftJoinAndSelect('account.billings', 'billing')
      .leftJoinAndSelect('account.exemptions', 'exemption')
      .getMany();

    return results.map(entity => ({
      account: accountToDomain(entity),
      billings: entity.billings.map(billingToDomain),
      overdueBillings: entity.billings
        .filter(b => b.isOverdue)
        .map(billingToDomain),
      exemptions: entity.exemptions.map(exemptionToDomain),
    }));
  }
}

Now the service becomes simple and fast:

@Injectable()
export class GoodBillingService {
  async generateReport(): Promise<BillingReportEntry[]> {
    const accountsWithDetails = await this.accountRepo.getAccountsWithBillingDetails();

    return accountsWithDetails.map(data =>
      generateBillingReportEntry(data)
    );
  }
}

One database query. One method call. The cost is explicit in the method name: getAccountsWithBillingDetails() tells you immediately that this is loading more than just accounts.

The Rule: Name Methods For What They Return

Repository methods should be named for the data shape they return, not just the entity they start from.

// ❌ Bad: hides what's loaded — relations may or may not be populated
getAccount(id: string): Promise<Account>

// ✅ Good: explicit about what's included
getAccount(id: string): Promise<Account>
getAccountWithBillings(id: string): Promise<AccountWithBillings>
getAccountWithBillingsAndExemptions(id: string): Promise<AccountWithBillingsAndExemptions>

Yes, the method names get longer. That’s intentional. The length signals the cost. If you find yourself writing:

getAccountWithBillingsAndExemptionsAndTaxRatesAndPaymentHistoryAndAuditLogs()

…you know you’re loading too much, and the name itself forces you to confront that.

This also solves another subtle problem: TypeORM relations can auto-load everything even when you don’t need it. If your entity has:

@Entity()
class AccountEntity {
  @OneToMany(() => BillingEntity, billing => billing.account)
  billings: BillingEntity[];

  @OneToMany(() => ExemptionEntity, exemption => exemption.account)
  exemptions: ExemptionEntity[];

  @OneToMany(() => PaymentEntity, payment => payment.account)
  payments: PaymentEntity[];
}

Then calling accountRepository.find() might load all of those relations—or none of them—depending on configuration. You have no idea what you’re getting. The return type Account[] tells you nothing about what’s actually loaded.

But if your repository method is called getAccountsWithBillings(), it’s explicit: this loads accounts and billings, nothing more, nothing less. If you don’t need billings, call getAccounts() instead. The method name becomes the contract.

The general principle: preventing direct database access forces developers to be explicit about data shapes. Repository methods become the vocabulary for describing what the application actually needs, rather than hiding individual queries behind service methods that could be doing anything.


Beyond Simple: The Intent Pattern

The load-calculate-save pattern works well when domain logic is purely computational and the side effects are straightforward. But some workflows need more flexibility:

  • Business rules that determine which side effects should happen
  • Operations that must be authorized or rate-limited
  • Integration with external systems like LLMs or third-party APIs
  • Conditional data loading based on domain decisions

One way to handle this cleanly—not the only way, but an interesting one—is to have domain functions return intents rather than executing side effects directly.

What Are Intents?

An intent is a structured declaration of something that should happen. Rather than the domain calling emailService.send() directly (which it can’t, since it doesn’t have access to infrastructure), it returns a value that says “an email should be sent.”

// libs/billing/billing.intents.ts
export type Intent =
  | { type: 'SEND_EMAIL'; to: string; template: string; data: unknown }
  | { type: 'UPDATE_ACCOUNT_STATUS'; accountId: string; status: AccountStatus }
  | { type: 'CREATE_AUDIT_LOG'; action: string; metadata: unknown }
  | { type: 'LOAD_EXEMPTIONS'; accountId: string };

Intents are discriminated unions, which makes them easy to handle with TypeScript’s exhaustive checking.
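A quick sketch of what that exhaustive checking looks like, using a standard TypeScript idiom (describeIntent is a hypothetical helper, not part of the dispatcher introduced below):

const describeIntent = (intent: Intent): string => {
  switch (intent.type) {
    case 'SEND_EMAIL':
      return `email to ${intent.to}`; // narrowed to the email variant here
    case 'UPDATE_ACCOUNT_STATUS':
      return `set account ${intent.accountId} to ${intent.status}`;
    case 'CREATE_AUDIT_LOG':
      return `audit log: ${intent.action}`;
    case 'LOAD_EXEMPTIONS':
      return `load exemptions for account ${intent.accountId}`;
    default: {
      // If a new variant is added to Intent and not handled above,
      // this assignment no longer compiles.
      const exhaustive: never = intent;
      return exhaustive;
    }
  }
};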

Domain Functions Return Intents

Now the domain can express conditional behavior without breaking purity:

// libs/billing/billing.logic.ts
export const calculateBillingWithIntents = (input: {
  account: Account;
  billings: Billing[];
}): { result: BillingResult; intents: Intent[] } => {
  const intents: Intent[] = [];

  // Domain logic determines what should happen
  const hasOverdue = input.billings.some(b => b.status === 'OVERDUE');

  if (hasOverdue) {
    intents.push({
      type: 'UPDATE_ACCOUNT_STATUS',
      accountId: input.account.id,
      status: 'OVERDUE',
    });
    intents.push({
      type: 'SEND_EMAIL',
      to: input.account.email,
      template: 'overdue-billing',
      data: { billings: input.billings.filter(b => b.status === 'OVERDUE') },
    });
    intents.push({
      type: 'CREATE_AUDIT_LOG',
      action: 'BILLING_OVERDUE_DETECTED',
      metadata: { accountId: input.account.id, count: input.billings.length },
    });
  }

  const result = { /* calculate billing totals */ };

  return { result, intents };
};

The domain is still pure—it doesn’t send emails or write to the database. It just returns structured data that describes what should happen based on business rules.

Handling Intents: The Dispatcher Pattern

Rather than having a single god-object IntentService that knows about everything, use a registry of intent handlers.

Each handler is responsible for one type of intent:

// infrastructure/intents/intent-handler.interface.ts
export interface IntentHandler<T extends Intent = Intent> {
  readonly intentType: string;
  canHandle(intent: Intent): intent is T;
  execute(intent: T, context: ExecutionContext): Promise<void>;
}

Implementations are narrow and testable:

// infrastructure/intents/handlers/email-intent.handler.ts
@Injectable()
export class EmailIntentHandler implements IntentHandler<EmailIntent> {
  readonly intentType = 'SEND_EMAIL';

  constructor(private readonly emailService: EmailService) {}

  canHandle(intent: Intent): intent is EmailIntent {
    return intent.type === 'SEND_EMAIL';
  }

  async execute(intent: EmailIntent, context: ExecutionContext): Promise<void> {
    // Policy checks
    if (!this.canSendEmail(context)) {
      throw new ForbiddenException('Email sending not allowed');
    }

    // Rate limiting
    await this.checkRateLimit(intent.to);

    // Actual execution
    await this.emailService.send(intent.to, intent.template, intent.data);
  }

  private canSendEmail(context: ExecutionContext): boolean {
    // Authorization logic
    return true;
  }

  private async checkRateLimit(recipient: string): Promise<void> {
    // Rate limiting logic
  }
}

The dispatcher finds the right handler and invokes it:

// infrastructure/intents/intent-dispatcher.service.ts
@Injectable()
export class IntentDispatcher {
  constructor(
    @Inject(INTENT_HANDLERS) private readonly handlers: IntentHandler[],
  ) {}

  async dispatch(intent: Intent, context: ExecutionContext): Promise<void> {
    const handler = this.handlers.find(h => h.canHandle(intent));

    if (!handler) {
      throw new Error(`No handler registered for intent type: ${intent.type}`);
    }

    await handler.execute(intent, context);
  }

  async dispatchMany(intents: Intent[], context: ExecutionContext): Promise<void> {
    for (const intent of intents) {
      await this.dispatch(intent, context);
    }
  }
}

Handlers are registered in the module:

// infrastructure/intents/intent.module.ts
@Module({
  providers: [
    IntentDispatcher,
    {
      provide: INTENT_HANDLERS,
      useFactory: (
        emailService: EmailService,
        accountRepo: AccountRepository,
        auditService: AuditService,
      ) => [
        new EmailIntentHandler(emailService),
        new AccountStatusIntentHandler(accountRepo),
        new AuditIntentHandler(auditService),
      ],
      inject: [EmailService, AccountRepository, AuditService],
    },
  ],
  exports: [IntentDispatcher],
})
export class IntentModule {}

Now services use the dispatcher to handle intents:

// billing/billing.service.ts
@Injectable()
export class BillingService {
  constructor(
    private readonly accountRepo: AccountRepository,
    private readonly billingRepo: BillingRepository,
    private readonly intentDispatcher: IntentDispatcher,
  ) {}

  async processBilling(
    accountId: string,
    context: ExecutionContext,
  ): Promise<BillingResponseDto> {
    const account = await this.accountRepo.getAccount(accountId);
    const billings = await this.billingRepo.getBillings(accountId);

    const { result, intents } = calculateBillingWithIntents({ account, billings });

    // Execute all intents through the dispatcher
    await this.intentDispatcher.dispatchMany(intents, context);

    return toDto(result);
  }
}

Why This Pattern Exists

The intent pattern solves a few specific problems:

  1. Centralized policy enforcement: Authorization and rate limiting happen in one place (the handlers) rather than scattered across services.

  2. Domain expressiveness: Business logic can declare side effects without knowing how they’re implemented.

  3. Plugin architecture: New intent types can be added without changing existing domain functions or services.

  4. LLM integration: When an LLM generates a plan, it can return intents that you validate and execute through the same policy layer (more on this shortly).

But it also adds ceremony. You have handlers, dispatchers, registries, and extra indirection. For simple CRUD apps, this is overkill.

The decision comes down to whether you value the flexibility and centralization enough to justify the complexity. For many projects, the answer is no. For some—particularly those with complex authorization, external integrations, or LLM-driven workflows—it’s worth it.


Transactional Intents

One particularly useful intent type is for transactional operations:

export type Intent =
  | { type: 'TRANSACTION'; operations: TransactionalOperation[] }
  | { type: 'SEND_EMAIL'; /* ... */ }; // ... other intents

export type TransactionalOperation =
  | { op: 'UPDATE'; entity: string; id: string; data: unknown }
  | { op: 'INSERT'; entity: string; data: unknown }
  | { op: 'DELETE'; entity: string; id: string };

Domain logic can now express atomic changes:

export const prepareBillingUpdate = (input: {
  account: Account;
  billings: Billing[];
}): Intent[] => {
  const { account, billings } = input;

  return [
    {
      type: 'TRANSACTION',
      operations: [
        ...billings.map((billing): TransactionalOperation => ({
          op: 'UPDATE',
          entity: 'Billing',
          id: billing.id,
          data: { /* ... */ },
        })),
        { op: 'UPDATE', entity: 'Account', id: account.id, data: { status: 'OVERDUE' } },
        { op: 'INSERT', entity: 'AuditEntry', data: { /* ... */ } },
      ],
    },
    {
      type: 'SEND_EMAIL',
      to: account.email,
      template: 'overdue',
      data: {},
    },
  ];
};

The transaction handler executes everything atomically:

@Injectable()
export class TransactionIntentHandler implements IntentHandler<TransactionIntent> {
  readonly intentType = 'TRANSACTION';

  constructor(private readonly dataSource: DataSource) {}

  canHandle(intent: Intent): intent is TransactionIntent {
    return intent.type === 'TRANSACTION';
  }

  async execute(intent: TransactionIntent): Promise<void> {
    await this.dataSource.transaction(async (em) => {
      for (const operation of intent.operations) {
        switch (operation.op) {
          case 'UPDATE':
            await em.update(operation.entity, operation.id, operation.data);
            break;
          case 'INSERT':
            await em.insert(operation.entity, operation.data);
            break;
          case 'DELETE':
            await em.delete(operation.entity, operation.id);
            break;
        }
      }
    });
  }
}

The email sends only if the transaction succeeds: dispatchMany executes intents in order, so a failure inside the transaction handler stops the sequence before the email intent ever runs. The sequencing is explicit and correct.


LLM Integration: Intents as a Natural Boundary

One of the more interesting applications of the intent pattern is integrating large language models into workflows. LLMs work by generating plans and calling functions. Rather than letting them execute directly—which raises serious authorization and policy concerns—you can have them generate intents that flow through the same validation and execution layer as everything else.

The Problem with Direct Execution

The naive approach is to give an LLM access to functions and let it call them:

// ❌ Dangerous: LLM calls functions directly
const llmResponse = await llm.complete({
  prompt: userQuery,
  functions: ['updateAccountStatus', 'sendEmail', 'suspendAccount'],
});

// LLM decides what to execute
await executeFunction(llmResponse.functionName, llmResponse.parameters);

This is a security nightmare. The LLM might try to access accounts it shouldn’t, perform operations the user isn’t authorized for, or cause unintended side effects.

Intents as the Policy Boundary

With intents, the LLM generates structured requests, not actions:

// LLM generates intents (requests)
const llmResponse = await llm.generateIntents({
  prompt: userQuery,
  allowedIntents: ['LOAD_EXEMPTIONS', 'CHECK_PAYMENT_STATUS'],
});

// Validate and authorize each intent
for (const intent of llmResponse.intents) {
  await this.policyService.enforce(intent, user, context);
  await this.intentDispatcher.dispatch(intent, context);
}

The key insight is that intents give you a uniform surface to apply policy. Whether an intent came from a controller, a scheduled job, or an LLM, it goes through the same authorization, rate limiting, and validation.
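The policy layer itself isn't spelled out in this article. A minimal sketch, assuming a hypothetical PolicyService with per-intent-type rules and a User object carrying roles, might look like this:

// Hypothetical policy layer: every intent passes through one enforcement point.
type PolicyRule = (intent: Intent, user: User, context: ExecutionContext) => boolean;

@Injectable()
export class PolicyService {
  // Rules keyed by intent type; anything unlisted is denied by default.
  private readonly rules: Record<string, PolicyRule> = {
    LOAD_EXEMPTIONS: (_intent, user) => user.roles.includes('billing:read'),
    CHECK_PAYMENT_STATUS: (_intent, user) => user.roles.includes('billing:read'),
  };

  async enforce(intent: Intent, user: User, context: ExecutionContext): Promise<void> {
    const rule = this.rules[intent.type];
    if (!rule || !rule(intent, user, context)) {
      throw new ForbiddenException(`Intent not allowed: ${intent.type}`);
    }
  }
}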

You’re not building a special path for AI features—you’re using the same architecture that already exists. When LLM capabilities evolve or you add new AI-driven workflows, you’re not refactoring the application. You’re just adding new intent types and handlers.

This is the architectural value of intents: they create a pluggable system where new sources of behavior (including LLMs) integrate cleanly without requiring special cases throughout the codebase.


Queue Processors and Async Work: Entry Points, Not Logic

Queue processors—whether using Bull, BullMQ, or similar libraries—are another common integration point that tempts developers to put logic in the wrong place. Like controllers, processors are entry points to your system. The fact that work arrives from a queue instead of HTTP doesn’t change the architectural role.

Processors as Thin Entry Points

A processor’s job is simple: deserialize the job data, delegate to a service, and handle queue-level concerns like logging and error propagation.

@Processor('billing')
export class BillingProcessor {
  constructor(
    private readonly billingService: BillingService,
    private readonly logger: Logger,
  ) {}

  @Process('calculate-overdue')
  async handleOverdueBilling(job: Job<{ accountId: string }>) {
    this.logger.log(`Processing overdue billing for account ${job.data.accountId}`);

    try {
      // Delegate to service, just like a controller would
      await this.billingService.processOverdueBilling(job.data.accountId);
    } catch (error) {
      this.logger.error(`Failed to process overdue billing`, error);
      throw error; // Bull handles retries
    }
  }
}

The processor doesn’t load data. It doesn’t make business decisions. It doesn’t know about repositories or domain logic. It routes work to the service layer and lets Bull handle queue-specific concerns like retries and dead letter queues.

This is the same pattern as controllers: thin translation layer between external input and internal orchestration.

The Queue Coupling Problem

The more subtle issue is how jobs get created. The naive approach creates tight coupling:

// ❌ Bad: Service directly depends on queues
@Injectable()
export class BillingService {
  constructor(
    @InjectQueue('billing') private readonly billingQueue: Queue,
  ) {}

  async createBilling(accountId: string) {
    // ... create billing

    // Direct coupling to Bull
    await this.billingQueue.add('calculate-overdue', { accountId });
  }
}

Now your service can’t be tested without mocking Bull. If you switch queue systems, you refactor services. If you want to handle the same work synchronously in development, you need conditionals everywhere.

The service shouldn’t know queues exist.

Intents for Queue Operations

With the intent pattern, queueing becomes another side effect that flows through the dispatcher:

// Domain or service returns intent
export type Intent =
  | { type: 'QUEUE_JOB'; queue: string; job: string; data: unknown }
  | { type: 'SEND_EMAIL'; /* ... */ }; // ... other intents

// Service doesn't know about Bull
@Injectable()
export class BillingService {
  async createBilling(accountId: string, context: ExecutionContext): Promise<BillingResponseDto> {
    const account = await this.accountRepo.getAccount(accountId);
    const billing = await this.billingRepo.create(account);

    const intents: Intent[] = [
      {
        type: 'QUEUE_JOB',
        queue: 'billing',
        job: 'calculate-overdue',
        data: { accountId },
      },
    ];

    await this.intentDispatcher.dispatchMany(intents, context);

    return toDto(billing);
  }
}

The intent handler knows about Bull, but nothing else does:

@Injectable()
export class QueueIntentHandler implements IntentHandler<QueueJobIntent> {
  readonly intentType = 'QUEUE_JOB';

  constructor(
    private readonly moduleRef: ModuleRef, // To get queues dynamically
  ) {}

  canHandle(intent: Intent): intent is QueueJobIntent {
    return intent.type === 'QUEUE_JOB';
  }

  async execute(intent: QueueJobIntent): Promise<void> {
    const queue = this.moduleRef.get(`BullQueue_${intent.queue}`, { strict: false });

    if (!queue) {
      throw new Error(`Queue not found: ${intent.queue}`);
    }

    await queue.add(intent.job, intent.data);
  }
}

Now the queue system is swappable. Tests don’t need Bull. The service logic works the same whether jobs run async or sync.
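To make the testing claim concrete, here is a sketch of a unit test where a recording stub stands in for the dispatcher, so no queue infrastructure is involved (the repository fakes and fakeContext are assumed to exist; the constructor is the intent-based one from above):

it('queues the overdue calculation after creating a billing', async () => {
  const dispatched: Intent[] = [];
  const stubDispatcher = {
    dispatchMany: async (intents: Intent[]) => {
      dispatched.push(...intents); // record instead of executing
    },
  } as unknown as IntentDispatcher;

  const service = new BillingService(fakeAccountRepo, fakeBillingRepo, stubDispatcher);

  await service.createBilling('account-123', fakeContext);

  expect(dispatched).toContainEqual(
    expect.objectContaining({ type: 'QUEUE_JOB', job: 'calculate-overdue' }),
  );
});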

Alternative: Event-Driven Job Creation

Another clean approach is to emit domain events and let a separate listener handle queueing:

// Service emits event
@Injectable()
export class BillingService {
  constructor(private readonly eventEmitter: EventEmitter2) {}

  async createBilling(accountId: string) {
    const billing = await this.billingRepo.create(/* ... */);

    // Emit event, don't queue directly
    this.eventEmitter.emit('billing.created', { billingId: billing.id, accountId });

    return toDto(billing);
  }
}

// Separate listener creates queue jobs
@Injectable()
export class BillingQueueListener {
  constructor(
    @InjectQueue('billing') private readonly billingQueue: Queue,
  ) {}

  @OnEvent('billing.created')
  async handleBillingCreated(payload: { accountId: string }) {
    await this.billingQueue.add('calculate-overdue', payload);
  }
}

This is even more decoupled. The service doesn’t know about queues or intents—it just emits events. The queueing logic lives in a dedicated listener that can be enabled, disabled, or replaced without touching the service.

The tradeoff is more indirection. Whether it’s worth it depends on your system’s complexity.

When Framework Integration Is Acceptable

Processors being tightly integrated with NestJS decorators is fine because they’re infrastructure, not domain. The @Processor() and @Process() decorators mark entry points, just like @Controller() and @Get() do.

The pattern works when:

  • Processors delegate to services immediately
  • Services don’t inject @InjectQueue() directly
  • Job creation happens via intents, events, or a dedicated queueing service
  • Queue-specific concerns (retries, backoff, dead letters) stay in processor error handling

It breaks when:

  • Processors contain business logic
  • Services become coupled to Bull’s API
  • Queue operations leak into domain code

The same principle applies to GraphQL resolvers, WebSocket gateways, and cron jobs: tight framework integration is acceptable at entry points, but those entry points must stay thin.
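A cron job, for instance, follows the same shape as the processor above. A sketch using @nestjs/schedule, assuming a report-generating service method like the one from earlier:

// Cron jobs are entry points too: translate the trigger, then delegate.
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

@Injectable()
export class BillingCronJobs {
  constructor(private readonly billingService: BillingService) {}

  @Cron(CronExpression.EVERY_DAY_AT_MIDNIGHT)
  async nightlyBillingReport(): Promise<void> {
    // No business logic here, exactly like the queue processor
    await this.billingService.generateReport();
  }
}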


DTOs and Validation: Explicit Boundaries

Before we close, let’s address the details of how data crosses the HTTP boundary, because this is where many NestJS projects create unnecessary coupling.

The Problem DTOs Solve

TypeScript types are compile-time constructs. They don’t exist at runtime. When a request arrives at your API, you have no guarantees about the shape or validity of the data. You need runtime validation.

NestJS encourages using classes with decorators (class-validator), which works but creates tight coupling between your validation rules and your class structure. It also makes validation invisible—decorators are metadata, not executable code.

A cleaner approach is to use Zod for schemas and classes purely as carriers for those schemas.

Schemas as Ground Truth

Define the validation schema first:

// billing/dto/billing.create.schema.ts
import { z } from 'zod';

export const billingCreateSchema = z.object({
  accountId: z.string().uuid(),
  amount: z.number().positive(),
  currency: z.enum(['USD', 'EUR', 'GBP']),
  dueDate: z.string().datetime(),
});

export type BillingCreateInput = z.infer<typeof billingCreateSchema>;

The schema is executable. It can validate data. It can generate TypeScript types. It’s the single source of truth.
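A few lines show both roles (safeParse is Zod's non-throwing API):

// Runtime: the schema rejects malformed data...
const parsed = billingCreateSchema.safeParse({
  accountId: 'not-a-uuid',
  amount: -5,
  currency: 'USD',
  dueDate: '2024-01-01T00:00:00Z',
});
// parsed.success === false; parsed.error lists the accountId and amount issues

// Compile time: the same schema drives the static type.
const input: BillingCreateInput = {
  accountId: '1b4e28ba-2fa1-11d2-883f-0016d3cca427', // any valid UUID
  amount: 120,
  currency: 'USD',
  dueDate: '2024-01-01T00:00:00Z',
};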

DTOs as Schema Carriers

The DTO class exists solely to give NestJS something to attach metadata to:

// billing/dto/billing.create.dto.ts
import { billingCreateSchema } from './billing.create.schema';

export class BillingCreateDto {
  static readonly schema = billingCreateSchema;

  accountId!: string;
  amount!: number;
  currency!: 'USD' | 'EUR' | 'GBP';
  dueDate!: string;
}

The class doesn’t do anything. It’s a runtime artifact that lets the validation pipe read the schema. The reason it exists is simple: the inferred type is compile-time only and gives the validation pipe nothing to read. This is TypeScript’s nature showing through—types vanish, values remain.

You could eliminate the DTO class entirely by using a custom decorator, but the extra indirection isn’t worth it for most projects. The class is minimal and its purpose is clear.

Validation Pipeline

A custom pipe reads the schema property and validates incoming data:

// common/pipes/zod-validation.pipe.ts
import { ArgumentMetadata, BadRequestException, Injectable, PipeTransform } from '@nestjs/common';
import { ZodSchema } from 'zod';

@Injectable()
export class ZodValidationPipe implements PipeTransform {
  transform(value: unknown, metadata: ArgumentMetadata) {
    const schema: ZodSchema | undefined = (metadata.metatype as any)?.schema;

    if (!schema) {
      return value;
    }

    const result = schema.safeParse(value);

    if (!result.success) {
      throw new BadRequestException(result.error.format());
    }

    return result.data;
  }
}

Register it globally or per-controller:

@Controller('billing')
@UsePipes(new ZodValidationPipe())
export class BillingController {
  @Post()
  async create(@Body() dto: BillingCreateDto): Promise<BillingResponseDto> {
    // dto is validated before this method runs
    return this.service.create(dto);
  }
}

The validation is visible. The schema is explicit. Malformed data dies at the boundary.
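What the client receives for a bad payload is the formatted Zod error. The exact messages depend on the Zod version, but the shape is roughly:

// Approximate shape of result.error.format() for an invalid accountId and amount
const exampleFormattedError = {
  _errors: [],
  accountId: { _errors: ['Invalid uuid'] },
  amount: { _errors: ['Number must be greater than 0'] },
};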


Closing: Architecture Is Constraints You Choose

The patterns in this article—pure domain functions, explicit repository methods, intent dispatchers, Zod schemas at boundaries—are not laws. They are choices. Choices that impose constraints.

Those constraints have a cost. More files. More indirection. More explicit mapping between layers. Code that could be ten lines in a single service method becomes thirty lines spread across domain logic, repositories, and orchestration.

But constraints are what create architecture. Without them, every piece of the system can reach into every other piece. Dependencies flow in all directions. Changes ripple unpredictably. The codebase becomes a graph where everything touches everything, and no one can reason about it anymore.

The patterns here work because they limit what each piece can do:

  • Domain logic cannot access infrastructure, so it stays pure and testable
  • Repository methods must be named for what they return, so data loading is explicit
  • Services orchestrate but don’t decide, so business rules stay consolidated
  • Intents go through policy enforcement, so authorization is centralized

These limits are not restrictions—they are directions. They tell you where things belong. They make the cost of bad decisions visible before those decisions compound.

NestJS gives you none of this by default. It gives you dependency injection, decorators, and modules, which look like architecture but provide no meaningful constraints. You can inject anything into anything. You can put logic anywhere. The framework is permissive, and that permissiveness lets bad patterns spread invisibly until refactoring becomes impossible.

Good architecture is deliberate. It requires you to decide what can talk to what, where logic lives, how data flows, and which boundaries matter. The moment you stop treating the framework as the architecture and start imposing your own rules, clarity emerges.

Nothing about your tooling changes. You still use NestJS. You still have controllers and services. But the meaning of the system no longer lives inside the framework’s conventions. It lives in the constraints you enforce: the boundaries you draw, the dependencies you allow, the patterns you repeat.

And once those constraints are in place, the system stops being a collection of services that happen to coexist. It becomes software with a shape—software you can reason about, test in isolation, and change without fear.

That is what architecture is. Not folder structure. Not decorators. Not the appearance of order.

Architecture is the rules you choose to follow, even when the framework makes it easy not to.