Engineering

JavaScript, TypeScript, and the Trouble With Pretending

or: TypeScript is just JavaScript with Intents

20 min read · Max
#typescript #javascript #architecture #software-engineering #insights

TypeScript brought long‑needed structure to JavaScript, but it also brought a misunderstanding: the idea that it turns JavaScript into a statically typed, object‑oriented language. It doesn’t. And when developers ignore this, their systems become fragile—not because TypeScript is weak, but because they design software for a fantasy runtime instead of the one actually executing their code.

The goal isn’t to abandon TypeScript. The goal is to use it correctly: not as a way to simulate Java, but as a descriptive layer on top of JavaScript’s real behavior. That requires understanding how JavaScript actually models objects, functions, structure, and data. Once you do that, TypeScript becomes both safer and more expressive—not because it enforces correctness, but because it documents intent in a language that otherwise hides it.

What follows is a deeper look at the real architecture of JavaScript systems, why OOP breaks down, why composition works, how factories outperform classes, how runtime validation completes the picture, and how TypeScript can support all of this if used with awareness rather than assumptions.


JavaScript’s Runtime Rules Everything

Every TypeScript project ultimately compiles to JavaScript, and JavaScript is the only thing the runtime knows. TypeScript’s correctness doesn’t exist at runtime: its types aren’t carried forward, its checks aren’t enforced, and its interfaces evaporate entirely.

This means your architecture must make sense in JavaScript, not in TypeScript’s imagined static world. If you forget this, you’ll design systems that work in the type system but collapse in production. The compiler can’t save you from API drift, malformed data, wrong assumptions, or incorrect usage. The runtime wins every disagreement.

I’ve seen this exact scenario play out: a team designs an elegant type hierarchy for handling payment methods. Beautiful inheritance tree, perfect TypeScript. Then a payment processor changes their webhook payload structure. The types still compile. The application crashes in production because no one ever validated the actual shape of the incoming data. The type system was describing a contract that the external world never agreed to.

This isn’t a limitation—it’s the design. JavaScript values flexibility, late binding, and function‑centric composition. It’s a dynamic language with functional roots and prototype‑based object extension. When you lean into that model, TypeScript becomes far more effective.


Why JavaScript Was Never an OOP Language

JavaScript’s class keyword looks like something from Java or C#, so a lot of people unconsciously import that mental model. They start thinking in terms of hierarchies, base classes, casts, and runtime type identity.

None of that really exists in JavaScript.

Under the hood, a class is just a function with a prototype. Instances don’t carry a strong notion of “this is a User” in the way Java objects do. The runtime only cares about the properties that happen to be present.

Take this:

```typescript
class User {
  constructor(public name: string, public age: number) {}
}

const u = new User("Ada", 34);
```

At runtime this is just an object with name and age hanging off a prototype. There is nothing magical or enforced about it being a User.
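You can verify this directly. A small sketch, runnable in any modern JavaScript runtime:

```typescript
class User {
  constructor(public name: string, public age: number) {}
}

const u = new User("Ada", 34);

// The instance is just a plain object with two own properties...
console.log(Object.keys(u)); // ["name", "age"]

// ...and the "class" is just a function whose prototype the object links to.
console.log(typeof User); // "function"
console.log(Object.getPrototypeOf(u) === User.prototype); // true
```

There is no tag on `u` saying "User"; the prototype link and the two properties are all the runtime keeps.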

In Java:

```java
User u = (User) something;
```

that cast is part of the runtime semantics. If something is not a User, the VM throws. The type hierarchy is real; it’s enforced while the program runs.

Now compare that with TypeScript:

```typescript
const u = something as User;
```

This is not a cast. It is not checked. It does not fail. It is purely a compile‑time hint that disappears completely in the emitted JavaScript. If something has the wrong shape, the runtime won’t say anything until you try to use a missing property. It’s the TypeScript version of “trust me bro.”

That disconnect is why deep, inheritance‑heavy designs feel so brittle in JavaScript and TypeScript. The type system lets you model a beautiful tree of abstractions, but the runtime doesn’t enforce any of it. You end up maintaining a mental model the engine simply doesn’t share.

Once you accept that, object‑oriented design in JavaScript has to become much more modest: objects as data containers, maybe with attached behavior, but not as the backbone of a rigid class hierarchy.

There are exceptions. Some frameworks force your hand—NestJS leans heavily on classes and decorators, and fighting that is worse than accepting it. Subclassing Error is one of the few places where the prototype chain actually matters. But these are pragmatic concessions, not architectural ideals. Even when you must use classes, keep them thin. Treat them as lightweight wrappers around a functional core, not as the organizing principle of your system.
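What "thin" means here can be sketched in a few lines. The names are illustrative, not from any particular framework: the class exists only to satisfy external expectations, while the logic lives in plain functions.

```typescript
type Greeting = { message: string };

// The functional core: a plain, easily testable function.
const buildGreeting = (name: string): Greeting => ({
  message: `Hello, ${name}`,
});

// The thin class wrapper: exists only because something external wants a class.
class GreetingController {
  greet(name: string): Greeting {
    return buildGreeting(name); // delegate; no logic lives here
  }
}

const controller = new GreetingController();
console.log(controller.greet("Ada").message); // "Hello, Ada"
```

The class can be swapped, decorated, or deleted without touching the behavior, because the behavior never lived in it.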


Composition Works Because It Matches JavaScript’s Nature

Where inheritance leans on a hierarchy the runtime doesn’t really have, composition leans on something JavaScript is actually good at: combining small pieces into larger ones.

You don’t need a tower of base classes to describe a creature that can fly and shoot lasers. You just need plain objects and functions:

```typescript
const canFly = (state: { velocity: number }) => ({
  fly() { state.velocity++; }
});

const hasLaserEyes = (state: { energy: number }) => ({
  shoot() { state.energy -= 10; }
});

type CreatureState = {
  velocity: number;
  energy: number;
};

const createCreature = (state: CreatureState) => {
  return {
    ...state,
    ...canFly(state),
    ...hasLaserEyes(state)
  };
};
```

Nothing here relies on a fragile hierarchy. There is no base class to track, no casting, no abstract supertype to reason about. The object’s behavior is literally the composition of the pieces you spread into it.

TypeScript handles this pattern well, because the types are just a direct description of what the runtime is already doing.

This is the core idea: the closer your architecture is to how JavaScript actually behaves, the less work the type system has to do to keep everything consistent.


The Inheritance Trap: A Real Example

Here’s how inheritance typically goes wrong in TypeScript. It starts reasonable:

```typescript
abstract class BaseRepository<T> {
  constructor(protected tableName: string) {}

  protected abstract validate(data: unknown): T;

  async findById(id: string): Promise<T | null> {
    const row = await db.query(
      `SELECT * FROM ${this.tableName} WHERE id = ?`,
      [id]
    );
    return row ? this.validate(row) : null;
  }
}

class UserRepository extends BaseRepository<User> {
  constructor() {
    super('users');
  }

  protected validate(data: unknown): User {
    return data as User; // trust me bro
  }
}
```

This looks clean. It feels like good architecture. The type system is happy.

Then reality intrudes. You need to add caching to some repositories but not others. You need to switch one table to a different database that doesn’t support the same query format. You need to add audit logging, but only for certain models. You realize validate should return T | null for some repositories but throw for others.

Each requirement fractures the abstraction. You start adding flags, overriding methods, checking instanceof to special-case behavior. The base class becomes a junk drawer of compromises. The hierarchy that promised reuse now delivers rigidity.

Compare that to a composed approach:

```typescript
type Repository<T> = {
  findById: (id: string) => Promise<T | null>;
  create: (data: T) => Promise<T>;
};

const createRepository = <T>(config: {
  tableName: string;
  validate: (data: unknown) => T | null;
  db: Database;
}): Repository<T> => ({
  findById: async (id) => {
    const row = await config.db.query(
      `SELECT * FROM ${config.tableName} WHERE id = ?`,
      [id]
    );
    return row ? config.validate(row) : null;
  },
  create: async (data) => {
    await config.db.insert(config.tableName, data);
    return data;
  }
});

const userRepo = createRepository({
  tableName: 'users',
  validate: (data) => UserSchema.parse(data),
  db: mainDatabase
});
```

Now adding caching is trivial—wrap the repository:

```typescript
const withCache = <T>(
  repo: Repository<T>,
  cache: Cache<T>
): Repository<T> => ({
  findById: async (id) => {
    const cached = await cache.get(id);
    if (cached) return cached;
    const result = await repo.findById(id);
    if (result) await cache.set(id, result);
    return result;
  },
  create: repo.create
});

const cachedUserRepo = withCache(userRepo, redisCache);
```

Different database? Pass a different db instance. Different validation strategy? Pass a different function. Audit logging? Another wrapper. Each concern is isolated, testable, and composable. The pieces stay small and honest.
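The audit-logging wrapper follows the same shape. A hedged, self-contained sketch (the `Repository` type mirrors the one above; `withAudit` and the in-memory store are illustrative):

```typescript
type Repository<T> = {
  findById: (id: string) => Promise<T | null>;
  create: (data: T) => Promise<T>;
};

// Audit logging as a decorator over any Repository<T>.
const withAudit = <T>(
  repo: Repository<T>,
  log: (event: string) => void
): Repository<T> => ({
  findById: async (id) => {
    log(`findById:${id}`);
    return repo.findById(id);
  },
  create: async (data) => {
    log('create');
    return repo.create(data);
  },
});

// Usage with a trivial in-memory repository:
type User = { id: string; name: string };
const store = new Map<string, User>();
const baseRepo: Repository<User> = {
  findById: async (id) => store.get(id) ?? null,
  create: async (u) => { store.set(u.id, u); return u; },
};

const events: string[] = [];
const auditedRepo = withAudit(baseRepo, (e) => events.push(e));

await auditedRepo.create({ id: '1', name: 'Ada' });
await auditedRepo.findById('1');
console.log(events); // ['create', 'findById:1']
```

The wrapper never needs to know which repository it decorates, and the repository never learns it is being audited.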

This is what composition means in practice: building systems from explicit dependencies and small, interchangeable parts rather than rigid hierarchies that resist change.


Factory Functions Beat Classes Because They Tell the Truth

A factory function is often the most precise and transparent way to create structured objects in JavaScript. It avoids the ceremony of constructors, avoids prototype magic, and avoids implying guarantees the runtime cannot provide.

A clear factory looks like this:

```typescript
interface WidgetParams {
  id: string;
  label: string;
}

interface Widget {
  id: string;
  label: string;
}

const createWidget = (params: WidgetParams): Widget => {
  return { ...params };
};
```

The return type is explicit. The intent is clear. Nothing hidden. Nothing invented.

Adding behavior is just as direct:

```typescript
interface StatefulWidget extends Widget {
  focus(): void;
}

const createStatefulWidget = (params: WidgetParams): StatefulWidget => {
  const base = createWidget(params);
  return {
    ...base,
    focus() {
      console.log(`Focused ${base.id}`);
    }
  };
};
```

Everything is concrete. Visible. Narrow in scope. TypeScript supports it naturally because the runtime supports it naturally.

Factories also solve one of the most annoying problems with classes: the confusion around this. In JavaScript, this is dynamically bound based on how a function is called, not where it’s defined. This leads to constant mistakes:

```typescript
class Counter {
  count = 0;
  increment() { this.count++; }
}

const counter = new Counter();
const handler = counter.increment;
handler(); // Error: Cannot read property 'count' of undefined
```

The moment you detach the method from the instance, this is lost. Developers work around this with .bind(), arrow functions in constructors, or class field syntax—all patches for a fundamentally leaky abstraction.

Factories eliminate this entirely:

```typescript
const createCounter = () => {
  let count = 0;
  return {
    increment: () => { count++; },
    getCount: () => count
  };
};

const counter = createCounter();
const handler = counter.increment;
handler(); // works perfectly
```

The closure captures count, and arrow functions don’t rebind this. There’s nothing to get wrong. The code says what it means and does what it says.

This alone eliminates a huge amount of accidental complexity found in TypeScript codebases that rely excessively on classes.


Casting vs Validation: Making Types Real

Every TypeScript developer has seen this pattern:

```typescript
const user = payload as unknown as User;
```

It feels structured. It looks like a decision. In reality it is nothing more than a request to the compiler to stop complaining. At runtime it is equivalent to:

```typescript
const user = payload;
```

No checks. No guarantees. No validation. The only thing that changed is that future you—and everyone reading the code—will assume user really is a User.

This is where a lot of TypeScript systems quietly go off the rails. Developers start trusting types that have never been earned. They treat responses from an API as if they were verified. They treat configuration files and environment variables as if the compiler had seen them. They move data across module boundaries and assume the shape stayed intact.

TypeScript never saw any of this. It cannot check what it cannot observe. The only moment safety becomes real is the moment you validate.

A schema library like Zod lets you define what your program considers valid, and then check it at runtime:

```typescript
const UserSchema = z.object({
  id: z.string(),
  age: z.number().int().positive()
});

type User = z.infer<typeof UserSchema>;
```

On its own this is just a description. The important part is where you apply it.

All untrusted data should be validated at the boundary where it first enters your code: HTTP handlers, message consumers, file readers, queue workers, cron jobs. That boundary is the only place where you can honestly say “from this point on, a User is actually a User”.

At the HTTP boundary, validation failures should produce clear, specific errors:

```typescript
app.post('/users', async (req, res) => {
  const result = UserSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({
      error: 'Invalid user data',
      details: result.error.flatten()
    });
  }
  const user: User = result.data; // Now you can trust it
});
```

For internal boundaries where failures represent programming errors rather than bad input, fail fast:

```typescript
const processUser = (data: unknown): User => {
  return UserSchema.parse(data); // throws if invalid
};
```

This makes violations loud and obvious during development. If an internal module is passing malformed data, you want to know immediately.

The same principle applies inside your own system. If data crosses a boundary you don’t fully control—a different module, a plugin, a layer that might evolve on its own—you should treat it as suspect again. TypeScript always allows as unknown as Something. There is no guarantee that the object you passed out is the object you get back, even when the type looks identical.

Validation at the edges and at key internal boundaries is what makes your types meaningful. Without it, they describe a world that may never have existed.


Type Guards: Still Just Hints

TypeScript’s type guards look safer than raw as casts, and many developers treat them as runtime validation. They’re not.

```typescript
const isUser = (obj: unknown): obj is User => {
  return typeof obj === 'object' && obj !== null &&
    'name' in obj && 'age' in obj;
};

if (isUser(data)) {
  // TypeScript believes data is a User
  console.log(data.name);
}
```

This tells TypeScript to narrow the type, but look at what it actually checks: the value is a non-null object, and two properties exist. It doesn’t verify that name is a string. It doesn’t check that age is a positive integer. It doesn’t ensure there aren’t extra properties that violate your assumptions.

If data is { name: null, age: "thirty" }, the guard passes. TypeScript is satisfied. Your code crashes.
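You can watch that failure happen. A runnable sketch: feed the naive guard exactly that malformed object and it narrows anyway.

```typescript
type User = { name: string; age: number };

// The naive guard from above: only checks that the properties exist.
const isUser = (obj: unknown): obj is User =>
  typeof obj === 'object' && obj !== null && 'name' in obj && 'age' in obj;

const data: unknown = { name: null, age: "thirty" };

console.log(isUser(data)); // true — TypeScript now "knows" this is a User

if (isUser(data)) {
  // This compiles cleanly, then explodes at runtime:
  // data.name.toUpperCase(); // TypeError: name is null, not a string
}
```

The compiler is doing exactly what it was told; the guard simply told it too little.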

Type guards are useful for internal boundaries where you control the data flow and just need to help the type checker understand what you already know. But for external data, they’re not enough:

```typescript
// Hand-written guard: tedious and incomplete
const isUser = (obj: unknown): obj is User => {
  return typeof obj === 'object' &&
    obj !== null &&
    typeof (obj as any).name === 'string' &&
    typeof (obj as any).age === 'number' &&
    (obj as any).age > 0 &&
    Number.isInteger((obj as any).age);
};

// Schema: comprehensive and declarative
const UserSchema = z.object({
  name: z.string().min(1),
  age: z.number().positive().int()
});
```

The schema version is clearer, more thorough, and produces better error messages. More importantly, it actually validates deeply—nested objects, arrays, complex transformations—without you writing hundreds of lines of manual checks.

Use type guards when you’re refining types the compiler can’t infer on its own. Use schemas when you’re validating data that came from outside your control.


Discriminated Unions: When TypeScript Actually Helps

There’s one pattern where TypeScript’s static analysis provides something that feels genuinely safe: discriminated unions combined with exhaustive checking.

```typescript
type Result<T, E = Error> =
  | { success: true; data: T }
  | { success: false; error: E };

const handleResult = <T, E>(result: Result<T, E>) => {
  if (result.success) {
    // TypeScript knows result.data exists
    return result.data;
  } else {
    // TypeScript knows result.error exists
    throw result.error;
  }
};
```

This works because the pattern aligns with JavaScript’s actual behavior. A Result is just an object with different shapes based on a discriminant property. TypeScript can track that property through control flow and narrow the type accordingly. When you check success, the compiler understands which branch you’re in and adjusts the available properties.

The power comes when you extend this to state machines:

```typescript
type RequestState =
  | { status: 'idle' }
  | { status: 'loading'; startedAt: number }
  | { status: 'success'; data: User; completedAt: number }
  | { status: 'error'; error: string; failedAt: number };

const renderRequest = (state: RequestState) => {
  switch (state.status) {
    case 'idle':
      return 'Not started';
    case 'loading':
      return `Loading since ${state.startedAt}`;
    case 'success':
      return `User: ${state.data.name}`;
    case 'error':
      return `Failed: ${state.error}`;
  }
};
```

If you add a new state and forget to handle it, TypeScript will complain. If you try to access data in the loading branch, TypeScript stops you. This isn’t magical—it’s just the type system understanding JavaScript’s control flow—but it’s effective.

The key is that discriminated unions don’t fight JavaScript’s runtime. They describe a pattern the language already supports: objects with different shapes, differentiated by a common property. TypeScript just makes that pattern explicit and checkable.
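A common way to make the "forgot a case" error loud rather than implicit is to funnel the impossible leftover into `never`. A sketch (the `assertNever` helper is a widely used convention, not part of the language):

```typescript
type RequestState =
  | { status: 'idle' }
  | { status: 'loading'; startedAt: number }
  | { status: 'success'; data: { name: string }; completedAt: number }
  | { status: 'error'; error: string; failedAt: number };

// If a new variant is added and unhandled, the call below stops compiling,
// because `state` will no longer narrow to `never` in the default branch.
const assertNever = (value: never): never => {
  throw new Error(`Unhandled state: ${JSON.stringify(value)}`);
};

const renderRequest = (state: RequestState): string => {
  switch (state.status) {
    case 'idle': return 'Not started';
    case 'loading': return `Loading since ${state.startedAt}`;
    case 'success': return `User: ${state.data.name}`;
    case 'error': return `Failed: ${state.error}`;
    default: return assertNever(state);
  }
};

console.log(renderRequest({ status: 'idle' })); // "Not started"
```

The runtime throw is a backstop; the real value is the compile error the moment the union grows.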


Arrow Functions and the Truth About Values

JavaScript becomes much easier to reason about once you stop thinking of functions as special constructs and instead treat them as what they really are: values. A function in JavaScript is simply another kind of object—first‑class, assignable, passable, storable.

The arrow function syntax makes this explicit:

```typescript
const makeSomething = () => ({ /* ... */ });
```

This is the same mental model as:

```typescript
const something = { /* ... */ };
```

Both are values. Both can be stored, passed around, transformed, and composed. The difference is simply that one happens to be executable.

Arrow functions also eliminate the implicit binding confusion of this. There’s no ceremony, no hidden context, no gotchas. What you see is what you get.

But while functional principles offer clarity, functional purity does not scale in real systems. Too many TypeScript projects try to emulate academic FP: endlessly curried functions, deeply nested compositions, point‑free style, combinators wrapped around combinators. The result is code that might look elegant in a REPL but becomes opaque in a real system.

The goal isn’t to turn your codebase into a mathematics paper. It’s to make the flow of data and intent obvious. Pragmatic FP—passing functions as values, leaning on small pure utilities, composing behavior rather than inheriting it—fits JavaScript perfectly. It respects the runtime and stays readable.

This is where TypeScript genuinely shines. Arrow functions paired with clear parameter and return types create signatures that tell the story of your system:

```typescript
const loadUser = (id: string): Promise<User> => { /* ... */ };
const transformUser = (user: User): PublicProfile => { /* ... */ };
const saveProfile = (profile: PublicProfile): Promise<void> => { /* ... */ };
```

You can read the chain of transformations without looking at a single implementation. The types document the shape of each step. That’s the real power: not enforcing correctness, but making intent unmistakable.
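Composing such a chain stays just as readable. A self-contained sketch with stub implementations (the types and stubs are illustrative; the shapes are what matter):

```typescript
type User = { id: string; name: string };
type PublicProfile = { displayName: string };

// Stubs standing in for real I/O, so the chain is runnable.
const loadUser = async (id: string): Promise<User> => ({ id, name: 'Ada' });
const transformUser = (user: User): PublicProfile => ({ displayName: user.name });

const saved: PublicProfile[] = [];
const saveProfile = async (profile: PublicProfile): Promise<void> => {
  saved.push(profile);
};

// The whole story in one line, each step's type flowing into the next:
const publishProfile = async (id: string): Promise<void> =>
  saveProfile(transformUser(await loadUser(id)));

await publishProfile('u-1');
console.log(saved); // [{ displayName: 'Ada' }]
```

No combinators, no point-free gymnastics: plain function application, typed end to end.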


Testing Without Mocks

Composition and factories naturally produce code that’s easier to test, because the dependencies are explicit and replaceable.

Consider a typical class-based service:

```typescript
class UserService {
  constructor(private db: Database) {}

  async create(name: string) {
    const id = generateId();
    return this.db.users.insert({ id, name });
  }
}
```

Testing this requires mocking the database, which means pulling in a mocking library, setting up expectations, and maintaining brittle stubs that break whenever the interface changes.

Now consider the factory version:

```typescript
const createUserService = (deps: {
  insert: (user: User) => Promise<User>;
  generateId: () => string;
}) => ({
  create: async (name: string) => {
    const id = deps.generateId();
    return deps.insert({ id, name });
  }
});
```

Testing is trivial:

```typescript
const testInsert = async (user: User) => user;
const testGenerateId = () => 'test-id';

const service = createUserService({
  insert: testInsert,
  generateId: testGenerateId
});

const result = await service.create('Ada');
// result is { id: 'test-id', name: 'Ada' }
```

No mocking library. No magic. Just plain functions. The test is honest about what it’s checking: given these inputs, does the service produce the expected output?

When your architecture is built from small, explicit pieces, testing becomes documentation of how those pieces fit together, not a fight with mock frameworks.


TypeScript as Documentation: Code That Reads Like Language

One of the most persistent myths in programming is that “good code is documented code.” People treat comments like proof of craftsmanship, as if adding an explanation on top of unclear logic somehow redeems it.

But the best code doesn’t need comments, because the code itself is the explanation.

One of the sharpest developers I’ve worked with said it perfectly:

It’s called a programming language, so it’s meant to be read.

Most comments exist because the code failed to communicate. And many are outright pointless:

```typescript
// returns the user by id
const getUserById = (id: string): User | null => { /* ... */ };
```

The comment adds nothing. It just mirrors the signature, line‑for‑line. It isn’t documentation; it’s noise.

More subtle—and far more common—are comments that signal a deeper problem:

```typescript
// This part is complicated... don't touch
const process = (input: RawThing): ProcessedThing => { /* ... */ };
```

That comment is an admission: I couldn’t express this clearly, so I’m warning you instead.

There are legitimate reasons to comment. Explaining domain constraints that aren’t visible in code. Documenting a non‑obvious algorithm. Highlighting an external contract that’s imposed from outside. But these cases are the exception, not the rule.

Compare documentation-heavy code to well-typed code:

```javascript
/**
 * Loads a user from the database by their unique identifier.
 *
 * @param id - The user's ID (must be a valid UUID)
 * @returns A Promise that resolves to the User object if found,
 *          or null if no user exists with that ID
 * @throws DatabaseError if the database connection fails
 * @throws ValidationError if the ID format is invalid
 */
function loadUser(id) {
  // ...
}
```

versus:

```typescript
const loadUser = (id: string): Promise<User | null> => {
  // ...
};
```

The second version tells you nearly everything the first one does, except it’s enforced by the compiler and can’t drift out of sync with reality. The parameter type is explicit. The return type is explicit. The signature documents the contract without a single line of prose.

This principle extends to entire module boundaries:

```typescript
export interface UserService {
  find: (id: string) => Promise<User | null>;
  create: (params: CreateUserParams) => Promise<User>;
  update: (id: string, changes: Partial<User>) => Promise<User>;
  delete: (id: string) => Promise<void>;
}
```

Anyone importing this module can see exactly what it offers. The types describe the shape, the names describe the intent, and the structure reveals the relationships. The interface is the documentation.

This is TypeScript’s real strength: it turns design decisions into machine-checkable declarations that humans can read. When your types accurately describe your runtime behavior, and your names accurately describe your intent, the system becomes self‑explanatory.

Most of the time, the best “documentation” you can write is code that doesn’t require commentary at all—clear names, clear structures, clear data shapes, and TypeScript signatures that make intent unmistakable.


API Drift and the Validation Layer

One of the most common sources of production bugs in TypeScript systems is API drift: the external service changes its contract, but your code still expects the old shape.

Consider a third-party payment API:

```typescript
interface PaymentWebhook {
  id: string;
  amount: number;
  status: 'pending' | 'completed' | 'failed';
}

app.post('/webhook', (req, res) => {
  const payment = req.body as PaymentWebhook;
  processPayment(payment);
});
```

This works fine until the payment provider adds a new status: 'refunded'. Your code doesn’t know about it. TypeScript doesn’t know about it. But the webhook starts sending it.

Now payment.status is 'refunded', which isn’t in your union type. If you have a switch statement that handles the three known statuses, it silently falls through to a default case—or worse, throws an error because you assumed exhaustiveness.

With validation, you catch this immediately:

```typescript
const PaymentWebhookSchema = z.object({
  id: z.string(),
  amount: z.number(),
  status: z.enum(['pending', 'completed', 'failed'])
});

app.post('/webhook', (req, res) => {
  const result = PaymentWebhookSchema.safeParse(req.body);
  if (!result.success) {
    logger.error('Invalid webhook received', result.error);
    return res.status(400).json({ error: 'Invalid payload' });
  }
  processPayment(result.data);
});
```

The first time a webhook arrives with status: 'refunded', validation fails. You get an error log. You get a 400 response. Most importantly, you get visibility into the contract violation before it corrupts your system’s state.

This is the difference between TypeScript-as-wishful-thinking and TypeScript-as-documentation. The type says what you expect. The schema says what you require. Only the schema protects you when reality disagrees.


Bringing It All Together

A TypeScript codebase becomes durable when its structure reflects the language it ultimately runs on. JavaScript is dynamic, expressive, function‑first, and deeply flexible. The systems built on top of it should embrace those traits rather than fight them.

Factories instead of classes keep object creation honest and explicit. Composition instead of inheritance keeps behavior modular, testable, and free of rigid hierarchies the runtime cannot enforce. Arrow functions reinforce the truth that functions are values, not special constructs with hidden rules. Discriminated unions align TypeScript’s analysis with JavaScript’s actual control flow. And runtime schemas turn types from wishful thinking into something concrete—something you can trust, something you can build on.

When you validate early at boundaries, the surface of your system becomes reliable. When you design modules whose signatures reflect real, validated shapes, intent becomes obvious. When you remove the need for magic casts and implicit assumptions, the complexity of your architecture drops dramatically.

Here’s what that looks like in practice:

Validate all external data at entry points. APIs, webhooks, file uploads, message queues—if it comes from outside your process, validate it before you trust it.

Use factories instead of classes for object creation. Keep construction explicit and avoid the ceremony and confusion of prototypes and this.

Prefer composition over inheritance. Build systems from small, replaceable pieces rather than rigid hierarchies that resist change.

Use arrow functions by default. They’re clearer, safer, and more honest about how JavaScript actually works.

Never cast without validation. Treat as as a code smell. If you can’t validate, at least acknowledge the risk explicitly.

Make function signatures document intent. A good type signature tells the reader what the function expects and what it promises to return. That’s often better than paragraphs of prose.

Keep types close to runtime reality. The further your types drift from what actually happens at runtime, the less useful they become.

Test that validation actually runs. Don’t just test happy paths. Test that your schemas reject malformed data and that your error handling works.

Treat type guards as hints, not guarantees. Use them to help the compiler understand internal invariants, but don’t mistake them for validation.

TypeScript will never replace the dynamic nature of JavaScript, and it shouldn’t try to. Its power lies in documenting what you meant to build—the contracts, the expectations, the flow of data—while the runtime enforces the reality underneath.

If the types describe your intent, and your validation enforces the truth, everything between those two layers becomes easier to understand and maintain. New contributors can see how the system is shaped. Future you can understand what past you was thinking. And the code behaves like the architecture you drew, instead of the one you wished into existence.

That is the strength of using TypeScript properly: it doesn’t give you safety for free. It gives you clarity, structure, and a language for expressing how your system should behave. The safety comes from respecting the runtime, validating what enters it, and building abstractions that match how JavaScript actually works.

TypeScript can’t save you.

But used honestly, it makes it much easier to save yourself.