The coupling concern is valid but I think it's the right trade-off for most CRUD apps. In practice, 80% of your endpoints are thin wrappers around your models anyway. The remaining 20% where you need separate request/response schemas — you just write those by hand.
The auto-generated admin panel is a nice touch. That alone saves days on internal tooling.
Why would one want to couple these two? Doesn't that couple, say, your API interface with your database schema? In reality these are separate concepts, even if, yes, sometimes you return a 'user' from an API that looks the same as the 'user' in the database. Honest question: I only recently got into FastAPI, and at first I was confused by what seemed like a lot of duplication, but after a bit of experience I've found they really are different things that aren't always the same. So what am I missing?
The ORM doesn't force you to use the DB model as your API schema. It's a regular Pydantic BaseModel, so you can make separate request/response schemas whenever you need to. For simple CRUD, using the model directly saves boilerplate. For complex cases, you decouple them as usual. The goal is not one model for everything, it's one contract. Everything speaks Pydantic, whether it's your API layer or your database layer.
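A minimal sketch of that split with plain Pydantic (the model and schema names here are hypothetical, not Oxyde's API):

```python
from pydantic import BaseModel

# Stand-in for a DB model; in the design described above it would already
# be a Pydantic BaseModel subclass, so the same pattern applies.
class User(BaseModel):
    id: int
    email: str
    password_hash: str

# Separate response schema for the cases where the contract diverges:
# here we deliberately omit password_hash from what the API returns.
class UserResponse(BaseModel):
    id: int
    email: str

user = User(id=1, email="a@b.co", password_hash="x")
# Because both layers speak Pydantic, converting is one call:
resp = UserResponse.model_validate(user.model_dump())
print(resp.model_dump())  # {'id': 1, 'email': 'a@b.co'}
```

For simple CRUD you skip `UserResponse` entirely and return the model; the point is that decoupling remains a one-liner when you need it.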
With proper DRY, you shouldn't need to repeat the types, docstrings and validation rules. You should be able to say "these fields from user go on its default presenter, there are two other presenters with extra fields" without having to repeat all that info. I have a vague memory that Django Rest Framework does something of the sort.
This feels hard to express in modern type systems though. Perhaps the craziness that is Zod and Typescript could do it, but probably not Python.
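Pydantic can actually get part of the way there at runtime with `create_model`, reusing field types from a source model so they are declared exactly once; the catch, as suspected, is that static type checkers cannot see the derived fields:

```python
from pydantic import BaseModel, create_model

class User(BaseModel):
    id: int
    email: str
    bio: str

def presenter(model, name, fields):
    # Reuse the annotation (and thus validation) of each chosen field,
    # so types are written once, on the source model only.
    return create_model(
        name,
        **{f: (model.model_fields[f].annotation, ...) for f in fields},
    )

UserDefault = presenter(User, "UserDefault", ["id", "email"])
print(UserDefault(id=1, email="a@b.co").model_dump())  # {'id': 1, 'email': 'a@b.co'}
```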
You might be interested in a library that flips around the concept of an ORM, like sqlc [1] or aiosql [2]. You specify just what you want from the database, without needing to couple so much API with database tables.
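The core idea, sketched with nothing but the standard library (aiosql itself parses named queries out of `.sql` files; here a plain dict stands in for that):

```python
import sqlite3

# SQL is the source of truth; Python only binds names to queries.
QUERIES = {
    "get_user": "SELECT id, email FROM users WHERE id = ?",
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@b.co')")

row = conn.execute(QUERIES["get_user"], (1,)).fetchone()
print(row)  # (1, 'a@b.co')
```

sqlc takes the same input but goes further, generating typed query functions at build time.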
I also find this ORM fascination really strange. Beyond the generic "ORMs are the Vietnam of computer science" feeling, I find that the average database-to-ORM/REST setup ends up with at least one of:
a) you actually do have a "system of record", so modelling things in a very CRUD way makes sense, but, on the other hand, who the hell ends up building so many systems of record in the first place to need these kinds of tools and frameworks?
b) you pretend your system is a system of record when it isn't, and modelling everything as CRUD turns it into a uniform ball of mud. I don't understand what is so important about being able to uniformly "CRUD" a bunch of objects. The three most important qualities of an API, making it easy to use right, hard to use wrong, and easy to figure out the intent behind how the system is meant to be used, are all lost that way.
c) you leak your API into your database, or your database into your API, compromising both, so both suck.
"The Vietnam of Computer Science" was written 20 years ago (2006, even), and it didn't kill off ORMs then. We've had 20 years of ORM improvements since. We long ago accepted Vietnam (the country) as what it is and what it will be for the foreseeable future. We should do the same with ORMs.
I for one don't want to write in a low-level assembly language, and shouldn't have to in 2026. Yet SQL still feels like one.
I've written a lot of one-off products using an ORM, and I don't regret any of the time saved by doing so. When and if I make $5-50M a year on a shipped product, okay, maybe I'll think about optimizing. And then I'll hire an expert while I gallivant around Europe.
A lot of people have the misconception that Pydantic models are only useful for returning API responses, a notion perpetuated by all-in-one frameworks like FastAPI.
They are useful in and of themselves as internal business/domain objects, though. In many cases I prefer lighter-weight dataclasses, which have less ceremony and none of the runtime cost of the validation layer, but we use Pydantic models all over the place to establish and guarantee contracts for objects flowing through our system, irrespective of the external-facing API layer.
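The trade-off in one example (plain Pydantic, nothing framework-specific): a dataclass annotates, a Pydantic model enforces.

```python
from dataclasses import dataclass
from pydantic import BaseModel, ValidationError

@dataclass
class EventDC:             # lightweight: annotations only, no runtime checks
    user_id: int

class EventPM(BaseModel):  # contract: bad data is rejected at the boundary
    user_id: int

EventDC(user_id="oops")    # silently accepted despite the int annotation
try:
    EventPM(user_id="oops")
    rejected = False
except ValidationError:
    rejected = True
print(rejected)  # True
```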
With this you don't need two sources of truth on the backend. Previously you would end up with one set of validation at the DB/ORM level and a second set on the data transfer object, and you had to make sure the types lined up. Now the data transfer object automatically inherits the correct types.
The two sources of truth are for two disparate adapters. Neither the API nor the DB define the domain, but both must adapt the domain to their respective implementation concerns.
The ontology of your persistence layer should be entirely uncoupled from that of your API, which must accommodate an external client.
Lol, I never knew the Django ORM was faster than SQLAlchemy. But having used both, that makes sense.
> Why Rust? ... Rust handles the database plumbing. Queries are built as an IR in Python, serialized via MessagePack, sent to Rust which generates dialect-specific SQL, executes it, and streams results back. Speed is a side effect of this split, not the goal.
Nice.
So what does it take to deploy this, dependency wise?
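A toy sketch of the split described in the quote above: the client side builds a declarative IR, and a separate engine (Rust with sqlx, in Oxyde's case) renders dialect-specific SQL. Everything here, field names included, is hypothetical:

```python
# Python side: build the query as data, not as SQL text.
ir = {"op": "select", "table": "users", "where": [["id", "=", 1]], "limit": 1}

def render_sql(ir):
    # Stand-in for the engine side: turn the IR into parameterized SQL.
    conds = " AND ".join(f"{field} {op} ?" for field, op, _ in ir["where"])
    return f"SELECT * FROM {ir['table']} WHERE {conds} LIMIT {ir['limit']}"

print(render_sql(ir))  # SELECT * FROM users WHERE id = ? LIMIT 1
```

In the real system the IR crosses the language boundary as MessagePack rather than staying an in-process dict.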
> Lol I never knew django orm is faster than SQLAlchemy.
I don’t believe that for a second. Both are wonderful projects, but raw performance was never one of Django ORM’s selling points.
I think its real advantage is making it easy to model new projects and make efficient CRUD calls on those tables. Alchemy’s strong point is “here’s an existing database; let users who grok DB theory query it as efficiently and ergonomically as possible.”
I was surprised too when I saw the results. The benchmarks test standard ORM usage patterns, not the full power of any ORM. SQLAlchemy is more flexible, but that flexibility comes with some overhead.
That said, the ORM layer is rarely the bottleneck when working with a database. The benchmarks were more about making sure that all the Pydantic validation I added comes for free, not about winning a speed race.
Just pip install oxyde, that's it. The Rust core (oxyde-core) ships as pre-built wheels for Linux, macOS, and Windows, so no Rust toolchain needed. Python-side dependencies are just pydantic, msgpack, and typer for the CLI. Database drivers are bundled in the Rust core (uses sqlx under the hood), so you don't need to install asyncpg/aiosqlite/etc separately either.
This looks great, and like it could address a real need in the ecosystem. Also, the admin dashboard is such a great feature of Django; nice job building one from the get-go.
Oh, this is super cool. Coming from Java, I've long missed JDBI and Rosetta, which make writing SQL queries in Java a dream. I've toyed around with a similar style of interface for Python, and looking at this gives me hope I can achieve it.
wow, big if true. our core codebase is still on psycopg2 and we do this mapping ourselves. type introspection would be very handy for reducing that boilerplate.
I think Prisma does type-safe ORM really well on the TypeScript side, and I was sad it doesn't seem to be well supported in Python. This feels similar in spirit and makes a lot of sense!
Thanks! Yeah, that SQLModel issue is actually one of the things that pushed me to build this. In Oxyde the models are just Pydantic BaseModel subclasses, so validation always works, both on the way in and on the way out via model_validate().
> Type safety was a big motivation.
> https://oxyde.fatalyst.dev/latest/guide/expressions/#basic-u...
> F("views") + 1
If your typed query sub-language can't avoid stringly-typed references to field names already defined by the schema objects, the battle is already lost.
The Rust core is not just about speed. It bundles native database drivers (sqlx), connection pooling, streaming serialization. It's more about the full IO stack than just making Python faster.
On F("views"), fair point. It's a conscious trade-off for now. The .pyi stubs cover filter(), create(), and other query methods, but F() is still stringly-typed. Room for improvement there.
> It's more about the full IO stack than just making Python faster.
Does it mean that your db adapter isn't necessarily DBAPI (PEP-249) compliant? That is, it could be that the DBAPI exception hierarchy isn't respected, so middlewares that expect to work across the stack and catch all DB-related issues may not work if the underlying DB access stack is using your drivers?
> but F() is still stringly-typed. Room for improvement there.
Yeah, I'm pretty sure F() isn't needed. You can look at how SQLAlchemy implements a combinator-style API around field attributes:
import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

class Stats(Base):
    __tablename__ = "stats"
    id = sa.Column(sa.Integer, primary_key=True)
    views = sa.Column(sa.Integer, nullable=False)

print(Stats.views + 1)  # renders as "stats.views + :views_1"
The expression `Stats.views + 1` is self-contained, and can be re-used across queries. `Stats.views` is a smart object (https://docs.sqlalchemy.org/en/21/orm/internals.html#sqlalch...) with overloaded operators that make it re-usable in different contexts, such as result getters and query builders.
Right, it's not DBAPI compliant. The whole IO stack goes through Rust/sqlx, so PEP-249 doesn't apply. Oxyde has its own exception hierarchy (OxydeError, IntegrityError, NotFoundError, etc.). In practice most people catch ORM-level exceptions rather than DBAPI ones, but fair to call out.
On F(), good point. The descriptor approach is something I've been thinking about. Definitely on the radar.
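A tiny sketch of what that descriptor approach could look like in plain Python (hypothetical, not Oxyde's actual design): columns as objects with overloaded operators, so field names never appear as strings.

```python
class Expr:
    def __init__(self, sql):
        self.sql = sql

class Col:
    def __set_name__(self, owner, attr):
        # Learn table and column names from where the descriptor is assigned.
        self.table, self.name = owner.__name__.lower(), attr
    def __add__(self, other):
        return Expr(f"{self.table}.{self.name} + {other!r}")

class Stats:
    views = Col()

expr = Stats.views + 1        # no F("views"): the field is a real attribute
print(expr.sql)  # stats.views + 1
```

Renaming the field is then a refactor your tooling can follow, instead of a string search.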
Must have missed that meeting. ORMs are not for everything, but for CRUD-heavy apps with validation they save a lot of boilerplate. And there's always execute_raw() for when you need to go off-script.
If there's an Open Source application that you can show me as a good example of ORM usage, I'd be interested in seeing it as a steelman argument for them.
For Oxyde specifically, it's still a young project, so the best public examples I have are the FastAPI tutorial (https://github.com/mr-fatalyst/fastapi-oxyde-example) and the admin panel examples (https://github.com/mr-fatalyst/oxyde-admin). Bigger real-world showcases will come with time. But in general, ORMs pay for themselves when you have lots of models, relations, and standard CRUD with validation. The moment you hand-write the same INSERT/UPDATE/SELECT with joins for every endpoint, it adds up fast.
Where ORMs are clearly weak is in generating suboptimal queries and making it too easy to create N+1 issues. My first introduction to ORMs was Ruby on Rails. You would rely on New Relic to identify performance issues and then fix them.
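The N+1 shape, reduced to plain sqlite3 so it is visible outside any ORM; this is what lazy loading does implicitly, and what eager loading (or a hand-written JOIN) fixes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'x'), (2, 2, 'y'), (3, 1, 'z');
""")

# N+1: one query for the posts, then one more query per post for its author.
posts = conn.execute("SELECT author_id, title FROM posts ORDER BY id").fetchall()
n_plus_1 = [
    (title, conn.execute("SELECT name FROM authors WHERE id = ?", (aid,)).fetchone()[0])
    for aid, title in posts
]  # 1 + len(posts) round-trips

# Same result in a single query, the shape eager loading generates:
joined = conn.execute(
    "SELECT p.title, a.name FROM posts p JOIN authors a ON a.id = p.author_id "
    "ORDER BY p.id"
).fetchall()

print(n_plus_1 == joined)  # True
```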
With solid AGENTS.md / CLAUDE.md, I do not think this would happen as much anymore in new code. So then it is just a matter of style preference (ORM vs whatever else).
[1] https://sqlc.dev/ [2] https://nackjicholson.github.io/aiosql/
Big fan of FastAPI, but I think SQLModel leads to the wrong mental model: that the DB model and the API schema are somehow the same thing.
Therefore I insist on using SQLAlchemy for DB models and Pydantic for API schemas, as a mental boundary.
> The ontology of your persistence layer should be entirely uncoupled from that of your API, which must accommodate an external client.
In theory, anyway.
https://www.psycopg.org/psycopg3/docs/advanced/typing.html#e...
Django + async is a nightmare so this is a very welcome project.
There's already Oxide computers https://oxide.computer/ and Oxc the JS linter/formatter https://oxc.rs/.
Not really; what makes sense is being JIT-able and friendly to PyPy.