
This is indeed at the very top of the list :)


We don’t support static website hosting just yet - might happen in the future :)


Just a small elaboration:

You can serve HTML content if you have a custom domain enabled. We don't plan to do anything beyond that - there are already some great platforms for website hosting, and we have enough problems to solve with Postgres.


That would be cool, at least for someone building a small app with just one tiny SaaS bill to worry about. Even better: using the free tier for an app serving a very, very small niche.


Thanks for asking this. We do not support signed URLs just yet, but they will be added in the next iteration.


Presigned URLs are useful because the client app can upload/download directly to/from S3, saving the app server from that traffic. Does Row-Level Security achieve the same benefit?


Yes, I agree! Although I should have specified that we do support signed URLs (https://supabase.com/docs/reference/javascript/storage-from-...), just not in the S3 protocol yet :)


Thanks for confirming! Could you maybe update https://supabase.com/docs/guides/storage/s3/compatibility to include s3-esque presigning as a feature to track?


Can you give an indication of what "next iteration" means in terms of timeline (even just ballpark)?


You can think of the Storage product as an upload server that sits in front of S3.

Generally, you would want to place an upload server in front to accept uploads from your customers, because you typically want to do some sort of file validation, access control, or other processing once the file is uploaded. The nice thing is that we run Storage within the same AWS network, so the upload latency is as small as it can be.

In terms of serving files, we provide a CDN out of the box for any files that you upload to Storage, minimising latencies geographically.


> Generally, you would want to place an upload server to accept uploads from your customers

A common pattern on AWS is to not handle the upload on your own servers. Checks are made ahead of time, conditions baked into the signed URL, and processing is handled after the fact via bucket events.
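The "conditions baked into the signed URL" part can be sketched with plain HMAC signing. This is a simplified stand-in for AWS SigV4 presigning, not the real thing: the secret, URL, and parameter names here are all hypothetical.

```python
import hashlib
import hmac
import time
from urllib.parse import parse_qsl, urlencode, urlparse

SECRET = b"server-side-secret"  # hypothetical signing key; never shipped to clients

def presign_upload(bucket: str, key: str, max_bytes: int, expires_in: int = 300) -> str:
    """Bake upload conditions (target key, size cap, expiry) into a signed URL."""
    params = {
        "bucket": bucket,
        "key": key,
        "max_bytes": max_bytes,
        "expires": int(time.time()) + expires_in,
    }
    payload = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://storage.example.com/upload?{payload}&sig={sig}"

def verify_upload(query: dict) -> bool:
    """Storage side: recompute the signature over the conditions and check expiry."""
    sig = query.pop("sig", "")
    payload = urlencode(sorted(query.items()))
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(query["expires"]) > time.time()
```

Because the conditions are covered by the signature, a client that tampers with, say, `max_bytes` invalidates the URL; the app server never has to proxy the upload bytes to enforce its rules.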


That is also a common pattern, I agree! Both ways are fine if the upload server is optimised accordingly :)


Yes, absolutely! You can download files as streams and make use of Range requests too.

The good news is that the Standard API also supports streams!
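On the Range side, the server-side logic is roughly the following. This is a toy parser for the single-range `bytes=start-end` form only; real servers also handle suffix ranges, multiple ranges, and 416 responses.

```python
def read_range(data: bytes, range_header: str) -> tuple:
    """Serve a simple "bytes=start-end" Range request over a stored object.

    Returns the requested slice and the matching Content-Range header value.
    """
    unit, _, spec = range_header.partition("=")
    if unit != "bytes":
        raise ValueError("only byte ranges are supported in this sketch")
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    # An open-ended range like "bytes=3-" means "from 3 to the end".
    end = int(end_s) if end_s else len(data) - 1
    chunk = data[start:end + 1]
    return chunk, f"bytes {start}-{start + len(chunk) - 1}/{len(data)}"
```

Combined with streaming, this is what lets a video player seek into the middle of a large object without downloading the whole file first.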


We do not store the files in Postgres; the files are stored in a managed S3 bucket.

We store the metadata of the objects and buckets in Postgres so that you can easily query it with SQL. You can also implement access control with RLS to allow access to certain resources.

It is not currently possible to guarantee atomicity across two different file uploads, since each file is uploaded in a single request; this seems like higher-level functionality that could be implemented at the application level.
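If you do need "both or neither" semantics for a pair of uploads, an application-level sketch uses a compensating delete. Here `upload` and `remove` are hypothetical stand-ins for your storage client calls, backed by a dict so the pattern is self-contained.

```python
# In-memory stand-in for the storage backend; in practice these would be
# calls to your storage client (e.g. upload/remove over HTTP).
store = {}

def upload(path: str, data: bytes) -> None:
    if path == "fail":  # simulate a failed request for demonstration
        raise IOError(f"upload of {path} failed")
    store[path] = data

def remove(path: str) -> None:
    store.pop(path, None)

def upload_pair(a: tuple, b: tuple) -> bool:
    """Upload two files "atomically" at the application level.

    If the second upload fails, compensate by deleting the first,
    so the pair is all-or-nothing from the application's point of view.
    """
    upload(*a)
    try:
        upload(*b)
        return True
    except IOError:
        remove(a[0])  # roll back the first upload
        return False
```

Note this is only best-effort: the compensating delete can itself fail, which is why true cross-object atomicity needs server-side support.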


Oh.

So this is like, S3 on top of S3? That's interesting.


Yes indeed! I would call it S3 on steroids!

Currently, it happens to be S3 on top of S3, but you could write an adapter for, say, Google Cloud Storage and it would become S3 -> GoogleCloudStorage, or any other type of underlying storage.

Additionally, we provide a special way of authenticating to Supabase S3 using the SessionToken, which allows you to scope S3 operations to your user's specific access control:

https://supabase.com/docs/guides/storage/s3/authentication#s...


What about second-tier cloud providers like Linode, Vultr or UpCloud? They all offer S3-compatible object storage. Will I need to write an adapter for these, or will it work just fine given their S3 compatibility?


Our S3 driver is compatible with any S3-compatible object storage, so you don't have to write one :)


Gentle reminder here that S3 compatibility is a sliding scale, and without further couching of the term it's more of a marketing term than anything for vendors. What do I mean by that? You can go to cloud vendor Foo and they can tell you they offer S3-compatible APIs or clients, but then you find out they only support the most basic operations, like 30% of the API. Vendor Bar might support 50% of the API and Baz 80%.

In a lot of cases, if your use case is simple, 30% is enough, if you're doing the most common GET and PUT operations etc. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until such time as that API is supported. My main beef with this is that there's usually no easy way to tell unless the vendor provides a support matrix that you can map to the operations you need, like this: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix is provided on both the client side and the server side, you have no easy way to tell whether it will even work without wiring things in and attempting to actually execute the code.

One thing to note is that it's quite unrealistic for vendors to strive for 100% compatibility - there's some AWS-specific stuff in the API that will basically never be relevant to anyone other than AWS. But the current Wild West situation could stand some significant improvement.
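That mapping exercise (the operations your workflow needs vs. a vendor's support matrix) is mechanical enough to script. The support sets below are made up for illustration, not real vendor data:

```python
# Hypothetical vendor support matrices, keyed by S3 operation name.
# In practice you would transcribe these from each vendor's published matrix.
VENDORS = {
    "Foo": {"GetObject", "PutObject", "DeleteObject"},
    "Bar": {"GetObject", "PutObject", "DeleteObject", "ListObjectsV2",
            "HeadObject"},
    "Baz": {"GetObject", "PutObject", "DeleteObject", "ListObjectsV2",
            "HeadObject", "CreateMultipartUpload", "UploadPart",
            "CompleteMultipartUpload"},
}

# The operations your workflow actually issues.
WORKFLOW = {"GetObject", "PutObject", "ListObjectsV2",
            "CreateMultipartUpload", "UploadPart", "CompleteMultipartUpload"}

def missing_ops(vendor: str) -> set:
    """Operations the workflow needs that the vendor does not support."""
    return WORKFLOW - VENDORS[vendor]

for name in sorted(VENDORS):
    gaps = missing_ops(name)
    print(name, "OK" if not gaps else f"missing: {sorted(gaps)}")
```

One unsupported call (here, multipart upload for Foo and Bar) is enough to rule the vendor out, exactly as the comment above describes.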


I agree that S3 compatibility is a bit of a moving target, and we would not implement any of the AWS-specific actions.

We are transparent about the level of compatibility - https://supabase.com/docs/guides/storage/s3/compatibility

The most often used APIs are covered, but if something is missing, let me know!


I’m confused about which direction this goes.

The announcement is that Supabase now supports (user) —s3 protocol—> (Supabase)

Above you say that (Supabase) —Supabase S3 Driver—> (AWS S3)

Are you further saying that (Supabase) —Supabase S3 Driver—> (any S3 compatible storage provider)? If so, how does the user configure that?

It seems more likely that you mean that for any application with the architecture (user) —s3 protocol—> (any S3 compatible storage provider), Supabase can now be swapped in as that storage target.


As I understand it, Supabase is open source. So the hosted Supabase is

(user) -> s3 protocol -> (Supabase) -> (AWS S3)

You could fork (or contribute) a storage driver for any S3-compatible backend of your choice:

(user) -> s3 protocol -> (pbronez-base) -> (GCP Cloud Storage)


In case it's not clear why this is required: some of the things the storage engine handles are

image transformations, caching, automatic cache-busting, multiple protocols, metadata management, Postgres compatibility, multipart uploads, compatibility across storage backends, etc.


Yes, both can be fine :) After all, a protocol can be interpreted as a standardised API through which the client and server interact; it can be low-level or high-level.

I hope you like the addition! The implementation is all open source in the Supabase Storage server.


Fantastic! I've never had a chance to launch the app on Windows; I'm glad it worked!

I think you are running in dev mode, that's why the Chrome dev tools appeared :)

The import paths feature will be released soon, so stay tuned! In the meantime, you can temporarily place the imported protos close to the one you want to use.

Ty


Sorry I didn't call this out more clearly, but I think the server-hot vs hot-server thing is a typo in the readme that's wrong on any platform.

Looking forward to include path support. I tried flattening my protos into one directory, updated all their `import` directives and gave it a go, and it works great, thanks! Both client and server running on Windows. My protos use a well known type (google.protobuf.Duration) and I was particularly impressed that it worked without me having to copy in the proto file for that.

I'm sure you already know this, but just in case: Include paths don't just help with proto files scattered across different directories, they also allow part of the path to be part of the canonical name of the proto. [1, 2] For example, if you have these files:

    /
        src/
            xyz/
                foo.proto   # Contains: import "xyz/bar.proto"
                bar.proto
Then you need -I/src (or similar), because if you run protoc from within /src/xyz/ without -I, it's as if you had specified -I/src/xyz, and then foo.proto can't find bar.proto (it looks for /src/xyz/xyz/bar.proto). Using relative paths like this is useful because you can have proto files with the same file name (like "config.proto") as long as their full canonical names differ, and because generated files are output into corresponding subdirectories (especially useful in Python, where the subdirectory indicates the package name).

[1] https://stackoverflow.com/questions/18735609/the-path-in-pro...

[2] https://github.com/protocolbuffers/protobuf/issues/723#issue...
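The resolution rule above can be modelled in a few lines: protoc joins the canonical import path onto each -I root, so the root you pass determines whether "xyz/bar.proto" is found. This is a toy model of the lookup, not protoc itself:

```python
from pathlib import PurePosixPath
from typing import List, Optional

# The file layout from the example above.
FILES = {"/src/xyz/foo.proto", "/src/xyz/bar.proto"}

def resolve(import_path: str, include_roots: List[str]) -> Optional[str]:
    """Join the canonical import path onto each -I root; first hit wins."""
    for root in include_roots:
        candidate = str(PurePosixPath(root) / import_path)
        if candidate in FILES:
            return candidate
    return None

# With -I/src, the import "xyz/bar.proto" in foo.proto resolves:
assert resolve("xyz/bar.proto", ["/src"]) == "/src/xyz/bar.proto"
# Run from /src/xyz (implicitly -I/src/xyz) it does not: the lookup is for
# /src/xyz/xyz/bar.proto, which doesn't exist.
assert resolve("xyz/bar.proto", ["/src/xyz"]) is None
```

The import string, not the file's location on disk, is the proto's canonical name, which is why two "config.proto" files under different roots can coexist.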


Wow! Those two features will certainly be a good addition to the client! I’ll be adding them to the list.


Hi! Creator of BloomRPC here. Thanks for the warm welcome for the client! I’m glad to see lots of positive feedback.

The client aims to give the best developer experience possible in your day-to-day usage of gRPC services.

We are using it heavily internally and improving it every day.

I hope you’ll enjoy working with BloomRPC and that it will soon be your gRPC companion.


PM for gRPC here - this looks great! Thank you for creating it. I'll definitely be pointing folks to it. I saw that there is support for reflection and TLS planned already.

Are there plans to support record/replay of a sequence?


Thanks, I’m glad you like it!

Yes, TLS will arrive very soon. Reflection too, but it will take slightly longer.

Request chaining will follow along!

I haven’t thought about recording and replay.

That would be a great addition to the editor once we support request chaining.

