> In March 2026, I migrated to self-hosted object storage powered by Versity S3 Gateway.
Thanks for sharing this, I wasn't even aware of Versity S3 from my searches and discussions here. I recently migrated my projects from MinIO to Garage, but this seems like another viable option to consider.
First time hearing about Versity for me too. I thought "S3 Gateways" were an Amazon-only service rather than something mere mortals could set up.
I've been trying to give some containers (LXC/LXD and OCI) unprivileged access to a network-accessible ZFS filesystem, and this might be what I need. Managing UID/GID mapping through bind-mounts from the host to the container (i.e., NFS on the host) has been trickier than I was expecting.
I don't get it: if it's running on the same machine (the post mentions "local"), why does it even need the S3 API? It could just be plain IO on the local drive(s).
The app was already built against the S3 API when it used cloud storage. Keeping that interface means the code doesn't change - you just point it at a local S3-compatible gateway instead of AWS/DO. Makes it trivial to switch back or move providers if needed.
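With boto3, for instance, that switch is typically just pointing the client at a different endpoint. A sketch; the localhost:7070 address and credentials below are placeholders, not the article's actual config:

```python
import boto3

# Identical application code as with AWS/DO; only the endpoint and
# credentials change. All values here are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:7070",  # local S3-compatible gateway
    aws_access_key_id="local-access-key",
    aws_secret_access_key="local-secret-key",
    region_name="us-east-1",
)

s3.put_object(Bucket="app-data", Key="uploads/report.csv", Body=b"a,b,c\n")
body = s3.get_object(Bucket="app-data", Key="uploads/report.csv")["Body"].read()
```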
If the app was written using the S3 API, it would be much faster/cheaper to migrate to a local system that provides the same API. Switching to local IO would (probably) mean rewriting a lot of code.
Surely "read object" and "write object" are not hard to migrate to local file system. You can also use Apache OpenDAL which provide the same interface to both.
Yeah, unless you have the raw S3 API throughout your codebase you should be able to write a couple dozen lines of code (maximum) to introduce a shim that's trivial to replace with local file access. In fact, I've done this in most projects that work with S3 or similar APIs so I can test them locally without needing real S3!
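Something like this, say (illustrative names; a real shim would also cover delete/list and error handling):

```python
from pathlib import Path

class LocalStore:
    """Maps object keys onto files under a local directory."""
    def __init__(self, root: str):
        self.root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3Store:
    """Same interface, backed by a boto3 S3 client."""
    def __init__(self, client, bucket: str):
        self.client, self.bucket = client, bucket

    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```

Swap one for the other in tests or at deploy time and the rest of the codebase never knows the difference.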
Separate machine, I think, given the quoted point at the end:
> The costs have increased: renting an additional dedicated server costs more than storing ~100GB at a managed object storage service. But the improved performance and reliability are worth it.
Apart from all these other products that implement S3? MinIO, Ceph (RGW), Garage, SeaweedFS, Zenko CloudServer, OpenIO, LakeFS, Versity, Storj, Riak CS, JuiceFS, Rustfs, s3proxy.
Riak CS has been dead for over a decade, which makes me question the rest of that list. Some of these also don't have the same behaviors when it comes to paths (MinIO is one of those, IIRC).
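If the path behavior in question is path-style vs. virtual-hosted bucket addressing, most self-hosted servers need the client forced into path-style. With boto3 that's a config flag (a sketch; the endpoint is a placeholder):

```python
import boto3
from botocore.config import Config

# Virtual-hosted style puts the bucket in the hostname (bucket.host),
# which self-hosted endpoints often can't resolve; force path-style.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # placeholder self-hosted endpoint
    config=Config(s3={"addressing_style": "path"}),
)
```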
Also, none of them implement the full S3 API and feature set.
There's a difference between the S3 API spec and what Amazon does with S3 - for instance, the new CAS (compare-and-swap) capabilities in Amazon S3 are not part of the spec.
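For concreteness, those conditional writes ride on HTTP preconditions like If-None-Match. A sketch with boto3 against AWS; whether a given S3-compatible server honors it varies:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # Create-only-if-absent: the request fails with 412 if the key exists.
    s3.put_object(Bucket="my-bucket", Key="locks/leader",
                  Body=b"node-a", IfNoneMatch="*")
except ClientError as e:
    if e.response["Error"]["Code"] == "PreconditionFailed":
        print("key already exists - someone else won the race")
    else:
        raise
```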
Ceph certainly implements the full API spec, though it may lag behind some changes.
It's mostly a question of engineering time available to the projects to keep up with changes.
What kind of vendor lock-in are you even talking about? Their API is public knowledge, AWS publishes the spec, there are multiple open source reference client implementations available on GitHub, there are multiple alternatives supporting the protocol, and you can find writings about the internals from AWS people as high in the hierarchy as Werner Vogels. Maybe you could say that some S3 features with no implementation in alternative products are a lock-in. I would consider that a "competitive advantage". YMMV.
> part of it is just to lock people into AWS once they start working with it.
This is some next-level conspiracy theory stuff. What exactly would the alternative have been in 2006? S3 is one of the most commonly implemented object storage APIs around, so if the goal is lock-in, they're really bad at it.
> What exactly would the alternative have been in 2006?
Well, WebDAV (Web Distributed Authoring and Versioning) had been around for 8 years when AWS decided they needed a custom API. And what service provider wasn't trying to lock you into a service by providing a custom API (especially pre-GPT) when one existed already? Assuming they made the choice for a business benefit doesn't require anything close to a conspiracy theory.
And it worked as a moat until other companies and open source projects started cloning the API. See also: Microsoft.
When I was in school, we had a SkunkDAV setup that department secretaries were supposed to use to update websites... supporting that was no fun at all. I'm not sure why it was so painful (was 25 years ago) but it left a bad taste in my mouth.
WebDAV is kinda bad, and back then it was a big deal that corporate proxies wouldn't forward custom HTTP methods. You could barely trust PUT to work, let alone PROPFIND.
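For anyone who never met it: PROPFIND is a WebDAV-specific method layered on top of HTTP, exactly the kind of thing old proxies dropped. A minimal request, sketched with Python's stdlib; host and path are placeholders:

```python
import http.client

# A PROPFIND with an empty body means "all properties" per RFC 4918.
conn = http.client.HTTPConnection("dav.example.com")
conn.request("PROPFIND", "/docs/", headers={"Depth": "1"})
resp = conn.getresponse()
print(resp.status)  # 207 Multi-Status on a compliant server
```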
Yeah, sure, those 5-10 different API calls would surely be a huge chore to refactor... I'd rather run an additional service that reimplements the S3 API on top of my local drive /s
For this project, where you have 120GB of customer data and thirty requests a second for ~8 KB objects (0.25 MB/s of object reads), you'd seem to be able to 100x the throughput by vertically scaling on one machine with a file system and an SSD, and never think about object storage. Would love to see why the complexity is worth it.
(Author here) that's more or less what I have right now – one machine with a file system and an SSD. S3 API on top is there to give multiple web servers shared access to the same storage. I could have used something else instead of S3 – say, NFS – but there was a feature request for S3 [1] and S3 has a big ecosystem around it already.

[1] https://github.com/healthchecks/healthchecks/issues/609
Same here. Had a production node running btrfs under heavy write load (lots of small files, frequent creates) and spent two days debugging what turned out to be filesystem-level corruption. Switched to ext4 and never looked back. The article doesn't mention what filesystem sits under Versitygw here, which seems like a pretty relevant omission for anyone thinking of replicating the setup.
I'd worry about file create, write, then fsync performance with btrfs, but not about reliability or data-loss.
But a quick grep across versitygw tells me they don't use Sync()/fsync, so not a problem... Any data loss occurring from that is obviously not btrfs fault.
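For reference, a crash-durable object write on any filesystem needs roughly this dance (a sketch in Python, not versitygw's actual code):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Atomic, durable write: temp file + fsync + rename + directory fsync."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # push file contents to stable storage
    os.rename(tmp, path)          # atomic replace on POSIX filesystems
    dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
    try:
        os.fsync(dfd)             # make the rename itself durable
    finally:
        os.close(dfd)
```

Skip any of those syncs and a power loss can leave a zero-length or missing file, on ext4 just as much as on btrfs.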
As someone who has dealt with wacky storage issues/designs, a lot of this "felt" strange to me. Btrfs? Rsync? Then I got to the bottom and saw that they were only handling about 100 GB of data! At that scale, nearly anything will work great and TFA was right to just pick the thing with the fewest knobs.
At a previous job years ago, we had a service that was essentially a file server for something like 50TB of tiny files. We only backed it up once a week because just _walking_ the whole filesystem with something like `du` took more than a day. Yes, we should have simply thrown money at the problem and just bought the right solution from an enterprise storage vendor or dumped them all into S3. Unfortunately, these were not options. Blame management.
A close second would have been to rearchitect dependent services to speak S3 instead of a bespoke REST-ish API, deploy something like SeaweedFS, and call it a day. SeaweedFS handles lots of small files gracefully because it doesn't just naively store one object per file on the filesystem like most locally-hosted S3 solutions (including Versity) do. And we'd get replication/redundancy on top of it. Unfortunately, I didn't get buy-in from the other teams maintaining the dependent services ("sorry, we don't have time to refactor our code, guess that makes it a 'you' problem").
What I did instead was descend into madness. Instead of writing each file to disk, all new files were written to a "cache" directory which matched the original filesystem layout of the server. And then every hour, that directory was tarred up and archived. When a read was required, the code would check the cache first. If the file wasn't there, it would figure out which tarball was needed and extract the file from there instead. This only worked because all files had a timestamp embedded in the path. Read performance sucked, but that didn't matter because reads were very rare. But the data absolutely had to be there when needed.
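The read path looked roughly like this (a reconstruction from the description above, not the original code; it assumes paths start with a year/month/day/hour prefix):

```python
import tarfile
from pathlib import Path

CACHE = Path("/srv/cache")        # current hour's files, original layout
ARCHIVES = Path("/srv/archives")  # one tarball per hour

def read_file(relpath: str) -> bytes:
    # 1. Recent files are still sitting in the cache directory.
    cached = CACHE / relpath
    if cached.exists():
        return cached.read_bytes()

    # 2. Otherwise, the timestamp embedded in the path tells us which
    #    hourly tarball to open: 2019/07/14/09/foo.dat -> 2019-07-14-09.tar
    y, m, d, h = relpath.split("/")[:4]
    with tarfile.open(ARCHIVES / f"{y}-{m}-{d}-{h}.tar") as tar:
        member = tar.extractfile(relpath)
        if member is None:
            raise FileNotFoundError(relpath)
        return member.read()
```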
Most importantly, backups took less than an hour for the first time in years.
> The costs have increased: renting an additional dedicated server costs more than storing ~100GB at a managed object storage service. But the improved performance and reliability are worth it.
Were your users complaining about reliability and performance? If it costs more, adds more work (backup/restore management), and the users aren't happier, then why make the change in the first place?
Not the OP, but I have some… similar experience. When you run a high-availability service without a full ops team, reliable infrastructure is non-negotiable. Burnout has to be managed.
Moved object storage from AWS to Cloudflare and have been pretty happy. No problems with performance so far. Bills were 90% cheaper too (free bandwidth).
Part of it is that it follows the object storage model, and part of it is just to lock people into AWS once they start working with it.
I've worked at a few places where single-node K8s "clusters" were frequently used just because they wanted the same API everywhere.
And you'd still need a redundant backend serving it through that API.
On a separate note, what tool is the final benchmark screenshot from?