
Write Rust-like code in TypeScript

Recently, we've been working on a new Rust application, which meant we got to set up a fresh repo from scratch. It's a simple mono-repo with a Rust backend and a TypeScript React frontend, deployed together.

While our journey with Rust hasn't always been smooth, Rust is such an enjoyable language to write. Even if we ignore the memory-safety and performance benefits, it's just a really expressive language.

There's a certain joy that comes with writing code like this:

pub type DocumentUploadResult = Result<DocumentUploadSuccess, DocumentUploadError>;

#[derive(Serialize)]
#[serde(tag = "type")]
pub enum DocumentUploadSuccess {
    Created { document_id: DocumentId },
    Updated { document_id: DocumentId, document_version: DocumentVersion },
}

#[derive(thiserror::Error, Debug)]
pub enum DocumentUploadError {
    #[error("File format error: {0}")]
    FormatError(String),

    #[error("Permission denied: {action} on document {document_id}")]
    PermissionDenied {
        document_id: String,
        action: DocumentAction,
    },

    #[error("Storage error")]
    Storage(#[from] std::io::Error),
}

It's easy to read. It's clear what it does. All the possible outcomes of an operation, success and failure alike, are laid out, and each case can carry its own rich type.

Getting the same experience in TypeScript

Given how much I enjoy this, I don't see why the Result type has to stop at our API boundary. I want to write TypeScript code in the browser with the same level of nuance:

const result = await api.document.upload(...)
if (result.ok) {
    if (result.data.type === "Created") {
        notifyDocumentCreated(result.data.documentId)
    } else if (result.data.type === "Updated") {
        notifyDocumentUpdated(result.data.documentId, result.data.documentVersion)
    }
} else {
    if (result.error.type === "FormatError") {
        notifyError("Invalid file format")
    } else if (result.error.type === "PermissionDenied") {
        // Note: the FE should NOT get access to documentId
        //       that might leak information
        notifyPermissionDenied()
    } else if (result.error.type === "UnexpectedError") {
        // Storage failures should be converted to an unexpected error
        notifyUnexpectedError()
    }
}

One small nitpick: since you don't get exhaustive matching by default, the actual code often looks like:

if (result.data.type === "Created") {
    notifyDocumentCreated(result.data.documentId);
} else if (result.data.type === "Updated") {
    notifyDocumentUpdated(result.data.documentId, result.data.documentVersion);
} else {
    // Fails to typecheck if we missed any cases
    assertNever(result.data);
}
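
For reference, assertNever is the standard exhaustiveness-check helper. A minimal definition looks something like this:

function assertNever(value: never): never {
    // Only reachable if a case was missed; the `never` parameter type turns that into a compile error
    throw new Error(`Unexpected value: ${JSON.stringify(value)}`);
}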

If you want to get fancy, it's not too hard to support:

match(result, {
    ok: {
        Created: data => notifyDocumentCreated(data.documentId),
        Updated: data => notifyDocumentUpdated(data.documentId, data.documentVersion),
    },
    err: {
        FormatError: error => notifyError("Invalid file format"),
        PermissionDenied: error => notifyPermissionDenied(),
        UnexpectedError: error => notifyUnexpectedError(),
    },
});
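
As a sketch of how a helper like that could be typed (assuming, as in our API, that every success and error payload is a tagged union on a "type" field), it might look something like this:

type Tagged = { type: string };

// The { ok, data } | { ok, error } shape used above
type Result<T, E> = { ok: true; data: T } | { ok: false; error: E };

// One handler per variant; the compiler complains if any variant is missing
type Handlers<T extends Tagged, R> = {
    [K in T["type"]]: (value: Extract<T, { type: K }>) => R;
};

function match<T extends Tagged, E extends Tagged, R>(
    result: Result<T, E>,
    handlers: { ok: Handlers<T, R>; err: Handlers<E, R> }
): R {
    // TypeScript can't correlate a variant's key with its payload here, so we
    // go through an index-signature view of the handler table internally.
    if (result.ok) {
        const okHandlers = handlers.ok as unknown as { [key: string]: (value: T) => R };
        return okHandlers[result.data.type](result.data);
    }
    const errHandlers = handlers.err as unknown as { [key: string]: (value: E) => R };
    return errHandlers[result.error.type](result.error);
}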

You get the idea, though: I want complete type safety from my backend all the way to my React frontend, while being careful not to leak internal errors.

Let's say I add a new error variant FileTooLarge. I want every single API call in my frontend to fail to build until I update it to handle this new error type.

With that level of safety, our team can move faster because we aren't as worried about the possible effects of our changes.

Matching our Rust and TypeScript types

The first thing we need is to have the same type defined in both TypeScript and Rust. There are a number of ways to do this (see the "Why not..." section below), but many of them rely on writing your types in some other format first, and then generating both Rust and TypeScript from that.

Writing those intermediate formats is, for me, just a rough experience. With OpenAPI, for example, writing YAML like this:

DocumentUpdated:
    type: object
    properties:
        type:
            const: Updated
        document_id:
            $ref: "#/components/schemas/DocumentId"
        document_version:
            $ref: "#/components/schemas/DocumentVersion"
    required: [type, document_id, document_version]
# ... and a lot more

feels worse than writing the Rust type. Some backend frameworks will generate this spec for you, but Rust's options here are fairly limited.

We wanted something that worked for our existing workflow - where we could continue to write our expressive Rust types and have a TypeScript client that automatically matches it.

Generating TypeScript types from Rust

ts_rs is a crate that generates TypeScript types from your Rust structs and enums. You write:

#[derive(TS)]
#[ts(export)]
struct User {
    user_id: i32,
    first_name: String,
    last_name: String,
}

and it generates:

export type User = { user_id: number; first_name: string; last_name: string };

It's perfect for our use case! We can use ts_rs to generate TypeScript types for our request, response, and error types. It even respects serde, so if we added #[serde(rename_all = "camelCase")] to our User struct, the exported type would have camelCase fields to match its serialization format.
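
For instance, with that attribute applied, the exported declaration would look something like:

export type User = { userId: number; firstName: string; lastName: string };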

We're missing something...

ts_rs gives us types, but we don't have an actual client library. We need something that makes the fetch call and formats the response. Something like this:

class Client {
    // ... constructor, client options, etc.

    async routeName(
        inputType: InputType
    ): Promise<Result<SuccessfulResponse, ErrorResponse>> {
        // call a helper method w/ the path, method
        // take the result and turn into either an:
        // { ok: true, data } | { ok: false, error }
        return this.apiCall('/path', 'POST', inputType)
    }
}
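
For completeness, here's a rough sketch of what the Result type and that shared helper could look like, assuming a fetch-based implementation with JSON bodies (the details are illustrative, not our exact code):

// The { ok, data } | { ok, error } shape our client methods return
export type Result<T, E> = { ok: true; data: T } | { ok: false; error: E };

async function apiCall<TInput, TSuccess, TError>(
    path: string,
    method: string,
    body: TInput
): Promise<Result<TSuccess, TError>> {
    const response = await fetch(path, {
        method,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
    });
    const json = await response.json();
    return response.ok
        ? { ok: true, data: json as TSuccess }
        : { ok: false, error: json as TError };
}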

Notably, this step was hard to automate. You have to examine each route and find its path, method, and the names of its input, output, and error types.

None of those are particularly hard, but common approaches like regexes here felt fragile. We initially did this manually, which was fine, but it's actually the perfect candidate for something to offload to an LLM.

This task is simple enough that a prompt like "{Basic description of the client library}. Could you check that we aren't missing any routes and if so add them?" works.

More importantly, the output is simple enough that we can actually review the code and decide if it's right or not.

Handling external vs internal errors

The request and response types are pretty straightforward, but there's one subtle issue with our errors: we need to distinguish between internal errors and external-facing errors.

The internal error allows us to leverage thiserror a bit more effectively by writing:

#[derive(thiserror::Error, Debug)]
pub enum InternalExampleError {
    #[error("Validation error")]
    ValidationError(#[from] validator::ValidationError),

    #[error("Database error")]
    DatabaseError(#[from] sqlx::Error),
}

which will automatically implement From for those underlying errors.

The evolution of error handling here is probably worth a whole separate post, so I'll just say this approach has been really great for "sprawling" codebases where you have something like this:

// Can return a ValidationError
validate_args(&data)?;

// Can return an EnrichDataError
let enriched_data = enrich(&data)?;

// Can return a sqlx::Error
update_the_db(enriched_data, db)?;

It keeps the code very focused on the happy paths, and you end up with a detailed list of all the ways it could fail.

We then just need to implement a JSON-serializable external error:

#[derive(Serialize, Debug, TS)]
#[serde(tag = "type")]
#[ts(export)]
pub enum ExternalExampleError {
    // How you structure the validation error is up to you.
    // This is an example that gives type safety on the fields
    //    that could have errors.
    ValidationError {
        name: Option<String>,
        email: Option<String>,
        // ...
    },

    // Database errors are converted to unexpected errors
    UnexpectedError,
}

Lastly, you just need to implement From<InternalExampleError> for ExternalExampleError and you are good to go.

One small note for completeness: you can use status codes here to account for "global" errors (e.g. unauthenticated requests, or a user who doesn't have permission to do that). Each route's error returns a 400 or 503 for itself, while middleware handles the 401/403/etc. cases.
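
To illustrate that split, a client-side helper could map those middleware status codes to a shared error type before parsing the route-specific error (the codes and names below are assumptions for illustration):

type GlobalError = { type: "Unauthenticated" } | { type: "PermissionDenied" };

function checkGlobalError(response: Response): GlobalError | null {
    // Middleware-level failures are identified purely by status code
    if (response.status === 401) return { type: "Unauthenticated" };
    if (response.status === 403) return { type: "PermissionDenied" };
    // Anything else is left to the route's own error type
    return null;
}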

The tradeoff here? Well, it's really verbose.

More good news, though: LLMs are also pretty good at writing fairly trivial transformations of data. Even Copilot can do this correctly.

Putting it all together - The developer experience

If we put this all together, my experience when writing a new route is:

  • I create the input and output structs. I don't bother writing the error yet.
  • I write the route itself. This is where the real work happens: validating data, making SQL calls. You know, normal programmer things.
  • I write an internal error enum which is just the union of all the errors propagated by the route.
  • I write an external error enum which includes all the information I want the FE to know about.
  • Claude writes the conversion logic from internal error => external error.
  • ts_rs generates the TypeScript types.
  • Claude updates the FE client library to add the new route.

You can get Claude (or most LLMs) to take over more of those steps; that just happens to be the workflow I prefer.

How does it feel?

So far, we've really enjoyed it! The end result is an RPC-like framework that's very Rust-centric.

We get to think about our API with all the expressiveness that Rust allows. The frontend library is pretty much autogenerated from this backend, and it's just as rich and expressive.

When I write frontend code I get a lot of the same nice, safe feelings I get when I'm writing backend code.

Isn't this really verbose?

A fair criticism of this approach is that it is certainly verbose. It'd be hard to write even trivial routes in fewer than 50 lines of code.

That being said, the verbose code here has some nice properties:

  • It's not "magical." It's not a nested hierarchy of abstract classes overriding behavior from the parent class. It's largely just converting between two types that were almost identical anyway.
  • It's easy to review. It'll add a little time to PRs if you are a slow reader, but not much.
  • It's easy to navigate. If a user writes in with an error they saw, even if you were new to the codebase, you can follow the code from the frontend all the way through the backend pretty fast.

So, while it totally is verbose, the benefits we get from it outweigh the downsides. Also, we can mitigate some of those downsides (primarily, the toll on my ulnar nerve) with LLMs.

Why not...? (alternative approaches)

I'll be the first to say that there are many other setups that'll have similar guarantees. However, we had some constraints that limited our choices:

The backend must be in Rust. This eliminates options like using FastAPI, generating an OpenAPI spec, and then generating a client from that spec, and it also rules out full-stack TypeScript options like tRPC.

I don't want to manage an OpenAPI spec. This is less a constraint and more personal preference, but I've never liked manually writing an OpenAPI spec or any tools to generate it. I just want to write my backend code. We did discuss some options like Poem, but we ultimately opted against changing frameworks just for an OpenAPI spec (part of that decision was that we liked the ts_rs client more than the OpenAPI client).

Protobufs. You can get a pretty nice looking client on top of protobufs/gRPC generators, but we had similar hesitation to that of OpenAPI - we'd rather just write the backend route than write a .proto file.

Macros. Macros would definitely reduce the amount of code we'd have to write, but often come at the cost of making the IDE experience worse. If I can't easily find all usages of an error variant because it's being generated by some macro code, it's doing more harm than good.

Generate the whole thing with an LLM. Believe me, we tried. LLMs are noisy enough that it was hard to get this remotely consistent. Oftentimes we'd find comments like

// in a production version, you'd want to include the full object

for our more complex types, and that's just obviously worse than ts_rs.