
FastAPI stays clean until service boundaries start leaking

I read the FastAPI 0.135.x release notes last week while waiting for a deployment to finish. The changelog itself is mundane—a pydantic lower bound bump, some test fixes, a typo correction. But that one line about increasing the pydantic requirement to >=2.9.0 in 0.135.2 is where the illusion cracks. It looks like a routine dependency update until you realize it forces a coordinated dance across every service that touches your API contracts.

The problem isn't the upgrade itself. The problem is what the upgrade reveals: your clean FastAPI abstractions have leaked. What you thought were internal implementation details—pydantic models, field validators, response shapes—have become de facto public contracts. Now a transitive dependency in one service becomes a delivery coordination problem for dozens. I've been on both sides of this: the team pushing the upgrade and the team receiving the breaking change. Neither side is fun.

The Real Problem

FastAPI makes it trivially easy to define an endpoint. You write a pydantic model, you annotate a function, you get validation and docs for free. This is fantastic locally. It feels productive. It stays clean until you ship it and another service needs to consume it. The local developer experience is so smooth that you forget you're building a distributed system.

The moment you share a pydantic model—whether through a shared library, a copied schema, or even just by matching the exact field structure—you've coupled your service's internals to your consumer's expectations. The release notes show this pattern in action. A core dependency update in FastAPI, triggered by changes in pydantic, propagates pressure downstream. It's not a bug; it's a structural consequence. The PR by @svlandeg that fixed the test suite is a good example: it's infrastructure work that only matters because the dependency boundary is porous.

I care more about migration pressure than local elegance. A clean service that's painful to evolve in a distributed system isn't clean—it's brittle. The pydantic upgrade is just the canary. It tells you that your boundaries are permeable. When I see a single line in a changelog trigger a multi-service migration, I know we've traded short-term velocity for long-term coordination cost.

The friction shows up in subtle ways. Your consumer tests start failing because a validation rule tightened in pydantic 2.9.0. Your generated client SDKs won't compile because the model signatures changed. Your contract tests pass, but your integration tests fail because the JSON serialization has a different edge case. These are all symptoms of the same disease: you're depending on behavior, not just shape.
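The serialization symptom is easy to reproduce with nothing but the standard library. This sketch shows two spellings of the same timestamp that share a shape but not a wire format; the values are illustrative:

```python
from datetime import datetime, timezone

# Two serializers can agree on shape (an ISO-8601 string) and still
# disagree on the exact bytes for the same value.
dt = datetime(2024, 1, 1, tzinfo=timezone.utc)

stdlib_form = dt.isoformat()                       # ends in '+00:00'
zulu_form = dt.isoformat().replace("+00:00", "Z")  # ends in 'Z'

# A consumer parsing with a strict format string accepts one and
# rejects the other, even though both are "the same field".
print(stdlib_form)
print(zulu_form)
```

Both strings are valid ISO 8601, but a consumer that hardcoded one spelling depends on behavior, not shape.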

Where Teams Usually Get It Wrong

The most common mistake is treating service contracts as an implementation detail. You define a UserResponse model in Service A because it's convenient. Service B, written by a different team, needs user data. They either import the model directly from a shared library or meticulously recreate it based on the OpenAPI spec. Both paths leak, but in different ways.

The shared library seems efficient until you need to change the model. Now you're versioning a library, cutting releases, and coordinating updates. You're backporting fixes to old versions because some service can't upgrade yet. The recreation path seems decoupled until a field changes and you're debugging why Service B rejects valid data. Either way, the pydantic dependency—and any future changes to it—become a system-wide concern.

Semantic versioning optimism dies at service boundaries. You can follow semver perfectly in your shared library, but if the shape of your data changes, you still need coordinated deployments. The 0.135.2 release didn't break the API contract for most users, but it forced a dependency update on anyone using the library that contained those pydantic models. That's friction masquerading as stability: everything "works on my machine" right up until the dependency graph says otherwise.

Another anti-pattern is the "just add a field" mentality. You add an optional field to a pydantic model because Service C needs it. Service D, which doesn't care, starts failing because it's strict about unknown fields. Or worse, Service D silently ignores the new field and you have a data loss bug that only shows up in production. The local change was additive and safe. The distributed change was a cascade of unknowns.
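A stdlib-only sketch of that failure mode (the field names and the strict_parse helper are hypothetical; a pydantic model with extra="forbid" behaves the same way):

```python
EXPECTED_FIELDS = {"id", "email", "created_at"}

def strict_parse(payload: dict) -> dict:
    # Service D's stand-in: reject anything it does not recognize,
    # the way a strict consumer model would.
    unknown = set(payload) - EXPECTED_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return payload

old = {"id": 1, "email": "a@example.com", "created_at": "2024-01-01T00:00:00"}
new = {**old, "preferred_name": "Al"}  # Service C's "safe, additive" field

strict_parse(old)       # fine
try:
    strict_parse(new)   # blows up in Service D
except ValueError as exc:
    print(exc)
```

The change was additive from Service A's point of view and breaking from Service D's. No single codebase could have told you that.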

I've also seen teams generate OpenAPI specs from their FastAPI apps and then generate client SDKs from those specs. This seems decoupled until you realize the generated models are just pydantic models in disguise. You've moved the coupling from source code to generated code, but it's still coupling. The pydantic version in the generator affects the shape of the generated models. You're just hiding the dependency behind a code generation step.

The Mechanics of Leakage

Let me get specific about how this actually happens in code. You start with something like this in Service A:

from datetime import datetime

from fastapi import FastAPI
from pydantic import BaseModel, ConfigDict

app = FastAPI()

class UserResponse(BaseModel):
    # pydantic v2 style; the class-based Config is deprecated in v2
    model_config = ConfigDict(from_attributes=True)

    id: int
    email: str
    created_at: datetime

@app.get("/users/{user_id}")
async def get_user(user_id: int) -> UserResponse:
    user = await fetch_user(user_id)
    return UserResponse.model_validate(user)

This is clean. It's type-safe. It documents itself. Now Service B needs to call this endpoint. The easy path is to copy this model into Service B's codebase. Maybe you even put it in a shared package:

# shared-models/users.py
from datetime import datetime

from pydantic import BaseModel

class UserResponse(BaseModel):
    id: int
    email: str
    created_at: datetime

Now both services depend on this shared package. When FastAPI 0.135.2 bumps pydantic to >=2.9.0, you need to update the shared package. But Service B isn't ready to upgrade pydantic yet because it's using an old version of another library that's incompatible. So you're stuck. You either upgrade Service B's dependencies (risky) or you pin FastAPI in Service A (defeating the purpose of the upgrade).

The leakage is even more subtle when you don't share code. Service B might use a generated client:

# Generated from OpenAPI spec
from datetime import datetime

from pydantic import BaseModel, ConfigDict

class UserResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: int
    email: str
    created_at: datetime

This looks decoupled because there's no direct import. But the generated model still depends on pydantic, and the generator itself might depend on a specific pydantic version. When the OpenAPI spec changes (because you added a field or changed a validation rule), the generated model changes. Service B's tests break. You're still coordinating.

The root issue is that you're using the same tool for two different jobs: internal data modeling and external contract definition. Pydantic is excellent at both, but mixing them creates a false sense of separation. The tools don't enforce the boundary; they encourage you to ignore it.

A Better Working Shape

The practical fix is to build harder boundaries. Your service should own its internal representation completely. Your public contract—the JSON that crosses the wire—should be a deliberate translation, not a direct export of your pydantic models. This feels like extra work locally. You're maintaining two similar models. But it pays dividends when you need to upgrade a dependency or evolve a field without a system-wide lockstep.

Here's what this looks like in practice. Service A defines its internal model:

# service_a/models/user.py
from datetime import datetime

from pydantic import BaseModel

class User(BaseModel):
    id: int
    email: str
    created_at: datetime
    internal_flag: bool  # Only Service A knows about this

And separately, it defines its public contract:

# service_a/contracts/user.py
from datetime import datetime

class UserResponse:
    def __init__(self, id: int, email: str, created_at: datetime):
        self.id = id
        self.email = email
        self.created_at = created_at
    
    def to_dict(self) -> dict:
        return {
            "id": self.id,
            "email": self.email,
            "created_at": self.created_at.isoformat(),
        }

The endpoint does the translation:

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    user = await fetch_user(user_id)  # Returns internal User model
    response = UserResponse(
        id=user.id,
        email=user.email,
        created_at=user.created_at,
    )
    return response.to_dict()

This is more code. It's less "magical." But when pydantic 3.0 comes out with breaking changes, you only update the translation layer in Service A. Service B continues to consume the same JSON shape. The boundary holds.

I think of this like network protocols. You don't expose your internal memory layout over the wire; you define a wire format. Your API contracts deserve the same discipline. The translation layer is where you absorb change. It's where you can upgrade pydantic in Service A without forcing Service B to do anything until it's ready.

This connects directly to my post Async Python Is a Delivery Decision Before It Is a Performance Decision. Both are about managing coordination cost. A hard boundary reduces the blast radius of any change, whether it's a new async pattern or a core dependency upgrade.

The key insight is that the translation layer is not overhead—it's isolation. It's the place where you can evolve internal models for performance, add debugging fields, or change validation rules without touching the public contract. It's also where you can handle versioning: you can support multiple versions of your contract by routing to different translators.
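A minimal sketch of that versioning idea, assuming a dict-shaped internal model and hypothetical translate_v1/translate_v2 names:

```python
from datetime import datetime

def translate_v1(user: dict) -> dict:
    # The frozen v1 contract: two fields, nothing else.
    return {"id": user["id"], "email": user["email"]}

def translate_v2(user: dict) -> dict:
    # v2 adds created_at without disturbing v1 consumers.
    return {**translate_v1(user), "created_at": user["created_at"].isoformat()}

TRANSLATORS = {"v1": translate_v1, "v2": translate_v2}

def to_contract(user: dict, version: str) -> dict:
    # Route the same internal representation to whichever
    # contract version the caller asked for.
    return TRANSLATORS[version](user)

user = {"id": 1, "email": "a@example.com",
        "created_at": datetime(2024, 1, 1), "internal_flag": True}
print(to_contract(user, "v1"))
print(to_contract(user, "v2"))
```

The internal model can grow fields freely; each translator decides what crosses the wire for its version.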

Migration Strategies for Already-Leaky Systems

If you're already in the mess, the path out is incremental. You can't rewrite everything at once. Start by identifying the highest-leverage boundary—the one that causes the most coordination pain—and add a translation layer there.

First, freeze the public contract. Document the exact JSON shape that consumers expect. Don't change it. Then, introduce a translation layer between your internal pydantic models and that JSON shape. This layer can be as simple as a function that converts one dict to another. Once that's in place, you can upgrade pydantic internally without touching the public contract.
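Freezing the contract can be enforced with a golden test. Here is a stdlib-only sketch with a hypothetical to_public_user translator:

```python
import json

# The documented, frozen shape that consumers rely on.
GOLDEN = {"id": 1, "email": "a@example.com",
          "created_at": "2024-01-01T00:00:00"}

def to_public_user(internal: dict) -> dict:
    # Translation layer: internal dict in, public contract dict out.
    # internal_flag never crosses the boundary.
    return {"id": internal["id"], "email": internal["email"],
            "created_at": internal["created_at"]}

internal = {"id": 1, "email": "a@example.com",
            "created_at": "2024-01-01T00:00:00", "internal_flag": True}

# The golden check: byte-for-byte equality of the serialized contract.
assert json.dumps(to_public_user(internal), sort_keys=True) == \
       json.dumps(GOLDEN, sort_keys=True)
print("contract holds")
```

Once a test like this is in CI, an internal pydantic upgrade that changes serialization behavior fails loudly in your pipeline instead of silently in a consumer's.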

Next, version your shared libraries. If you must have a shared models package, version it explicitly. Use semantic versioning and support multiple major versions simultaneously. This is painful, but less painful than forcing every service to upgrade at once. My post Working With Legacy Code Without Pretending It Is Simple covers this pattern in more detail: treat the shared library like a product, not an implementation detail.

For teams that can't afford the translation layer yet, the least-bad option is to pin dependencies aggressively. Pin FastAPI, pin pydantic, pin everything. Then upgrade on a schedule, not on a per-service basis. This is still coordination, but it's planned coordination. You know when the pain is coming.
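A sketch of what aggressive pinning looks like (the version numbers are illustrative; use whatever your lockfile resolves today):

```
# requirements.txt -- pin exact versions, upgrade the whole stack
# together on a schedule, not ad hoc per service
fastapi==0.135.2
pydantic==2.9.2
```

Exact pins trade freshness for predictability: nothing moves until you decide it moves.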

The worst approach is to pretend the problem doesn't exist. I've seen teams delay upgrades for months because the coordination cost is too high, then face a security vulnerability that forces a massive, rushed migration. The pydantic upgrade in 0.135.2 is a gift: it's a low-stakes way to test your boundaries before a high-stakes upgrade hits.

What to Watch in Practice

In day-to-day work, I watch for the import statement. If you're importing a pydantic model from another service's library into your service code, that's a boundary leak. If your consumer tests are validating against your exact pydantic schema, that's a boundary leak. If your deployment pipeline has a step that "regenerates models from the upstream service," that's a boundary leak.
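That watch-the-imports habit can be automated. Here is a small sketch using the stdlib ast module; the forbidden prefixes are hypothetical, whatever shared model packages your org publishes:

```python
import ast

# Hypothetical: module prefixes that should never be imported
# into this service's code.
FORBIDDEN_PREFIXES = ("shared_models", "service_b")

def find_leaks(source: str) -> list:
    """Return modules imported from another service's model packages."""
    leaks = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            if node.module.startswith(FORBIDDEN_PREFIXES):
                leaks.append(node.module)
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.startswith(FORBIDDEN_PREFIXES):
                    leaks.append(alias.name)
    return leaks

print(find_leaks("from shared_models.users import UserResponse"))
# run this over every source file in CI and fail the build on any hit
```

It's a blunt instrument, but a failing build is a much cheaper place to discover a boundary leak than a production incident.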

The useful part of the pydantic upgrade is that it acts as a stress test. It forces you to confront whether your contracts are truly decoupled. When the next dependency update comes—and it will—you'll know if you're set up for a coordinated migration or independent deployments.

What I do not trust yet is any tooling that promises zero-cost schema sharing. There's always a cost; it's just deferred to the next breaking change. I prefer explicit translation, even if it feels verbose. Verbose is better than blocked. I've seen teams spend weeks debugging generated code issues that would have taken hours to fix with manual translation.

The part I would watch in your own services is the diff when you change a field. If changing a field name in a model requires changes in more than one codebase, you've coupled your internals to your externals. That's the signal. That's where the real work begins. Make this diff a required check in your code review process. If the diff spans services, require a migration plan.

I also watch the dependency graph. Tools like pipdeptree or poetry show --tree can show you how deep the coupling goes. If you see pydantic appearing in multiple layers of your dependency tree with different version constraints, you have a conflict waiting to happen. The FastAPI 0.135.2 release is a good time to run these tools and see what you're really depending on.

Finally, I watch the team's reaction to upgrades. If the response is "we need to schedule a maintenance window across all services," you have a boundary problem. If the response is "I'll upgrade my service when I have time," you have good boundaries. The difference is not technical; it's organizational. Good boundaries let teams move at different speeds.

Closing Heuristics

This isn't about avoiding FastAPI or pydantic. It's about recognizing that the convenience of their integration is a local optimization. The global optimization—keeping a system shippable—requires deliberate, sometimes awkward, boundaries. The release notes show the pressure. Your job is to build the walls that keep that pressure contained.

When I start a new service now, I ask myself: "What would break if I upgraded pydantic tomorrow?" If the answer includes any other service, I need a better boundary. The answer should be "nothing outside this service." That's the heuristic.

The tradeoff is clear: a little more code locally for a lot less coordination globally. In a small codebase, the extra code feels wasteful. In a distributed system, it feels essential. I'll take the waste over the waiting any day.

The FastAPI 0.135.x releases are just the trigger. The real issue is how we think about service boundaries. Clean code is not about eliminating duplication; it's about eliminating coupling. Sometimes that means writing similar code twice so you can change it independently later. That's not duplication; that's isolation.

I read the release notes while waiting for a deployment. The deployment finished. The upgrade was fine. But I spent the rest of the day checking my own services for import statements that shouldn't be there. That's the real cost of clean code: it stays tidy until your service boundaries start leaking. And they always do, eventually.
