Hacker News | chrislusf's comments

I work on SeaweedFS.

Just download the single binary (available for most platforms) and run "weed mini -dir=your_data_directory". All the configuration is already optimized.


I work on SeaweedFS.

I am trying to support the AWS S3 APIs as completely as possible.

Recently added support for Table Buckets, along with myriad details such as policies, STS, IAM, OIDC, WORM, object lock and versioning, governance mode, etc.


(I work on SeaweedFS.)

Haha, you used Claude to find the Claude code.

I used Claude to generate a lot of the admin UI pages, which saved a lot of time. For the core storage engine part, I dare not use AI, same as you.


I have worked on SeaweedFS since 2011, and full time since 2025.

SeaweedFS started as a learning project and has evolved along the way, taking ideas from the papers on Facebook Haystack, Google Colossus, and Facebook Tectonic. With its distributed append-only storage, it naturally fits an object store. Sorry to see MinIO go away; SeaweedFS learned a lot from it. Some S3 interface code was copied from MinIO when it was still under the Apache 2.0 license. The AWS S3 APIs are fairly complicated, and I am trying to replicate as much as possible.

Some recent developments:

* Run "weed mini -dir=xxx" and it will just work. Nothing else to set up.

* Added Table Buckets and an Iceberg catalog.

* Added an admin UI.


I work on SeaweedFS. It is not backed by any greedy VCs, so there is no urgency to make a large profit from the open source community.


Disclaimer: I work on SeaweedFS.

Why skip SeaweedFS? It ranks #1 on all the benchmarks and has a lot of features.


I can confirm this. I used SeaweedFS to serve 1M daily users with 56 million images (~100TB) on just 2 servers with HDDs only, which MinIO couldn't do. SeaweedFS performance is much better than MinIO's. The only problem is that the SeaweedFS documentation is hard to understand.


SeaweedFS is also so optimized for small objects that it can't store very large objects (max 32GiB [1]).

Not a concern for many use-cases, just something to be aware of as it's not a universal solution.

[1]: https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#st...


Not correct. Files are chunked into smaller pieces and spread across all the volume servers.
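The chunk-and-spread idea can be sketched roughly like this. This is an illustration only, not SeaweedFS source code; the 4 MiB chunk size and the round-robin placement are assumptions for the example:

```python
# Illustrative sketch: split a large upload into fixed-size chunks
# and spread the chunks across volume servers.

CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB chunk size


def split_into_chunks(total_size, chunk_size=CHUNK_SIZE):
    """Return (offset, length) pairs covering the whole file."""
    chunks = []
    offset = 0
    while offset < total_size:
        length = min(chunk_size, total_size - offset)
        chunks.append((offset, length))
        offset += length
    return chunks


def assign_to_volumes(chunks, volume_servers):
    """Round-robin each chunk to a volume server (simplified placement)."""
    return {
        (offset, length): volume_servers[i % len(volume_servers)]
        for i, (offset, length) in enumerate(chunks)
    }


# A ~10 MiB file becomes three chunks spread over three servers.
chunks = split_into_chunks(10 * 1024 * 1024 + 1)
placement = assign_to_volumes(chunks, ["vol1:8080", "vol2:8080", "vol3:8080"])
```

So the 32GiB figure would apply to a single chunk-less blob, not to a chunked file, which is why large files still work.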


Well, then I suggest updating the incorrect readme. It's why I've ignored SeaweedFS.


SeaweedFS is very nice and takes quite an effort to lose data.


Can you link the benchmarks?


It is in the parent comment.


I work on SeaweedFS. So very biased. :)

Just run "weed server -s3 -dir=..." to have an object store.
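For example, a minimal session might look like the following. The S3 port 8333 and the aws-cli usage are assumptions based on common defaults; check the SeaweedFS wiki for your version:

```shell
# Start SeaweedFS with the S3 gateway enabled (assumed default S3 port: 8333).
weed server -s3 -dir=/data/seaweedfs &

# Point any S3-compatible client at it, e.g. the AWS CLI:
aws --endpoint-url http://localhost:8333 s3 mb s3://my-bucket
aws --endpoint-url http://localhost:8333 s3 cp ./photo.jpg s3://my-bucket/
aws --endpoint-url http://localhost:8333 s3 ls s3://my-bucket/
```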


I'll try it!


I work on SeaweedFS. It has support for these if conditions, and a lot more.


This is Chris, the creator of SeaweedFS. I am starting to work full time on SeaweedFS now. Just create issues on the SeaweedFS repo if you run into any.

Recently SeaweedFS has been moving fast and has added a lot more features, such as:

* Server-Side Encryption: SSE-S3, SSE-KMS, SSE-C

* Object Versioning

* Object Lock & Retention

* IAM integration

* A lot of integration tests

Also, SeaweedFS performance is the best in almost all categories in a user's test: https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... After that, a recent architectural change increased performance even more, reducing write latency by 30%.


Congratulations on earning that opportunity!

Thank you for your work. I was in a position where I had to choose between MinIO and SeaweedFS, and though SeaweedFS was better in every way, the lack of an included dashboard or UI was a huge factor for me back then. I don't expect or even want you to make any roadmap changes, but I just wanted to let you know of a possible pain point.


Thanks! There is an admin UI already. AI coding made this fairly easy.


I'm sorry, I probably missed it then. This was like 4 years ago, so I could be wrong.


Should not be a problem.

One similar use case used Cassandra as the SeaweedFS filer store and created thousands of files per second in a temp folder, then moved the files to a final folder. The updates caused a lot of tombstones in Cassandra.

Later, they switched to Redis for the temp folder and kept Cassandra for the other folders. Everything has been very smooth since then.
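A split like that can be expressed in filer.toml with a path-specific store. This is a hedged sketch only; the exact section names (e.g. "redis2", "cassandra") and options depend on your SeaweedFS version, so check the filer.toml reference:

```toml
# Default filer store: Cassandra for all paths.
[cassandra]
enabled = true
keyspace = "seaweedfs"
hosts = ["cassandra1:9042"]

# Path-specific override: the churn-heavy temp folder goes to Redis,
# avoiding Cassandra tombstones from rapid create/delete cycles.
[redis2.tmp]
enabled = true
address = "redis:6379"
location = "/tmp"
```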

