I have been working on SeaweedFS since 2011, and full time since 2025.
SeaweedFS started as a learning project and has evolved along the way, taking ideas from the papers on Facebook's Haystack, Google's Colossus, and Facebook's Tectonic. With its distributed append-only storage, it is a natural fit for an object store. Sorry to see MinIO go away; SeaweedFS learned a lot from it, and some S3 interface code was copied from MinIO while it was still under the Apache 2.0 license. The AWS S3 APIs are fairly complicated, and I am trying to replicate as much of them as possible.
Some recent developments:
* Run "weed mini -dir=xxx" and it will just work. Nothing else to set up.
I can confirm this: I used SeaweedFS to serve 1M daily users with 56 million images (~100TB) on just 2 servers with HDDs only, while MinIO couldn't handle this. SeaweedFS's performance is much better than MinIO's.
The only problem is that SeaweedFS documentation is hard to understand.
This is Chris, and I am the creator of SeaweedFS. I am starting to work full time on SeaweedFS now. Just create issues on the SeaweedFS repo if you run into anything.
Recently SeaweedFS has been moving fast and has added a lot more features, such as:
* Server Side Encryption: SSE-S3, SSE-KMS, SSE-C
* Object Versioning
* Object Lock & Retention
* IAM integration
* a lot of integration tests
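To illustrate what SSE-C involves on the client side: unlike SSE-S3 or SSE-KMS, the client supplies its own 256-bit key on every request, sent as three standard S3 headers (the key base64-encoded, plus an MD5 checksum of the raw key). A minimal sketch of building those headers, assuming a raw HTTP client; the function name is illustrative, but the header names are from the S3 SSE-C specification:

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    # SSE-C requires a 256-bit (32-byte) customer key on every request.
    # S3 (and S3-compatible servers) expect the key base64-encoded,
    # plus a base64-encoded MD5 digest of the raw key as an integrity check.
    assert len(key) == 32, "SSE-C keys must be exactly 32 bytes"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

# Generate a random key and build the headers for a PUT/GET request.
key = os.urandom(32)
headers = sse_c_headers(key)
```

The same headers must be re-sent on every GET of the object, since the server does not store the key.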
Also, SeaweedFS's performance came out best in almost all categories in a user's benchmark: https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com...
On top of that, a recent architectural change increases performance even more, reducing write latency by 30%.
Thank you for your work. I was in a position where I had to choose between MinIO and SeaweedFS, and though SeaweedFS was better in every way, the lack of an included dashboard or UI was a huge factor for me back then. I don't expect or even want you to make any roadmap changes, but I just wanted to let you know of a possible pain point.
One similar use case used Cassandra as the SeaweedFS filer store and created thousands of files per second in a temp folder, then moved the files to a final folder. The frequent updates caused a lot of tombstones in Cassandra.
Later, they switched to Redis for the temp folder while keeping Cassandra for the other folders. Everything has been very smooth since then.
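The split described above can be expressed in the filer's configuration: SeaweedFS's filer.toml supports path-specific stores, where a store section name with a suffix plus a `location` key routes entries under that path to a different backend. A sketch under that assumption; the addresses, keyspace, and paths here are illustrative, not from the original comment:

```toml
# filer.toml (sketch): Cassandra as the default filer store...
[cassandra]
enabled = true
keyspace = "seaweedfs"
hosts = ["localhost:9042"]

# ...with a path-specific override: entries under /tmp go to Redis,
# avoiding Cassandra tombstones from high-churn create/move workloads.
[redis2.tmp]
enabled = true
address = "localhost:6379"
location = "/tmp"
```

This keeps durable metadata in Cassandra while the short-lived temp-folder entries live in Redis, where deletes are cheap.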
Just download the single binary for your platform and run "weed mini -dir=your_data_directory", with all the configuration already optimized.