You can serve HTML content if you have a custom domain enabled. We don't plan to do anything beyond that - there are already some great platforms for website hosting, and we have enough problems to solve with Postgres.
That would be cool, at least for someone building a small app with just one tiny SaaS bill to worry about. Even better: using the free tier for an app that serves a very, very small niche.
Presigned URLs are useful because the client app can upload/download directly to/from S3, saving the app server from handling this traffic. Does Row-Level Security achieve the same benefit?
You can think of the Storage product as an upload server that sits in front of S3.
Generally, you would want to place an upload server in front of S3 to accept uploads from your customers, because you typically want to do some sort of file validation, access control, or other processing once the file is uploaded. The nice thing is that we run Storage within the same AWS network, so the upload latency is as low as it can be.
In terms of serving files, we provide a CDN out of the box for any files that you upload to Storage, minimising latency geographically.
> Generally, you would want to place an upload server to accept uploads from your customers
A common pattern on AWS is to not handle the upload on your own servers. Checks are made ahead of time, conditions baked into the signed URL, and processing is handled after the fact via bucket events.
We do not store the files in Postgres; the files are stored in a managed S3 bucket.
We store the metadata of the objects and buckets in Postgres so that you can easily query it with SQL. You can also implement access control with RLS to allow access to certain resources.
It is not currently possible to guarantee atomicity across two different file uploads, since each file is uploaded in a single request. This seems like higher-level functionality that could be implemented at the application level.
Currently, it happens to be S3 to S3, but you could write an adapter, say GoogleCloudStorage, and it would become S3 -> GoogleCloudStorage, or any other type of underlying storage.
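As a rough illustration of the adapter idea (the names here are made up for the sketch, not the actual interface of the Storage server, which is written in TypeScript):

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical adapter interface: the S3 protocol frontend talks to
    this, and each backend (S3, GCS, ...) implements it."""

    @abstractmethod
    def upload(self, bucket: str, key: str, data: bytes) -> None: ...

    @abstractmethod
    def download(self, bucket: str, key: str) -> bytes: ...

class InMemoryBackend(StorageBackend):
    """Toy backend standing in for S3, GoogleCloudStorage, etc."""

    def __init__(self):
        self._objects = {}

    def upload(self, bucket, key, data):
        self._objects[(bucket, key)] = data

    def download(self, bucket, key):
        return self._objects[(bucket, key)]
```

The point is that the client always speaks the S3 protocol to the frontend; swapping the backend is invisible to it.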
Additionally, we provide a special way of authenticating to Supabase S3 using the SessionToken, which allows you to scope S3 operations to your user's specific access control.
What about second-tier cloud providers like Linode, Vultr, or UpCloud? They all offer S3-compatible object storage. Will I need to write an adapter for these, or will it work just fine given their S3 compatibility?
Gentle reminder here that S3 compatibility is a sliding scale, and without further qualification it's more of a marketing term than anything for vendors. What do I mean by this? You can go to cloud vendor Foo and they can tell you they offer S3-compatible APIs or clients, but then you find out they only support the most basic operations, like 30% of the API. Vendor Bar might support 50% of the API, and Baz 80%.
In a lot of cases, if your use case is simple, 30% is enough if you're doing the most common GET and PUT operations, etc. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until such time as said API is supported. My main beef with this is that there's usually no easy way to tell unless the vendor provides a support matrix that you can map to the operations you need, like this: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix is provided on both the client side and the server side, you have no easy way to tell whether it will even work without wiring things up and attempting to actually execute the code.
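That "map the matrix to the operations you need" step can be sketched mechanically. The vendor data below is entirely made up; the point is just the shape of the check:

```python
# Operations a hypothetical workflow depends on (illustrative, not exhaustive).
REQUIRED_OPS = {"GetObject", "PutObject", "CreateMultipartUpload", "UploadPartCopy"}

# Hypothetical support matrices for the made-up vendors from the comment above.
VENDOR_SUPPORT = {
    "Foo": {"GetObject", "PutObject"},
    "Bar": {"GetObject", "PutObject", "CreateMultipartUpload"},
    "Baz": {"GetObject", "PutObject", "CreateMultipartUpload", "UploadPartCopy"},
}

def missing_ops(vendor: str) -> set:
    """Operations the workflow needs that the vendor's matrix doesn't list."""
    return REQUIRED_OPS - VENDOR_SUPPORT.get(vendor, set())

for name in sorted(VENDOR_SUPPORT):
    gaps = missing_ops(name)
    print(name, "OK" if not gaps else f"missing: {sorted(gaps)}")
```

One unsupported operation in `missing_ops` is enough to rule a vendor out, exactly as the comment describes.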
One thing to note is that it's quite unrealistic for vendors to strive for 100% compat - there's some AWS-specific stuff in the API that will basically never be relevant to anyone other than AWS. But the current Wild West situation could stand some significant improvement.
The announcement is that Supabase now supports (user) —s3 protocol—> (Supabase)
Above you say that (Supabase) —Supabase S3 Driver—> (AWS S3)
Are you further saying that (Supabase) —Supabase S3 Driver—> (any S3 compatible storage provider)? If so, how does the user configure that?
It seems more likely that you mean that for any application with the architecture (user) —s3 protocol—> (any S3 compatible storage provider), Supabase can now be swapped in as that storage target.
Yes, both can be fine :) After all, a protocol can be interpreted as a standardised API through which the client and server interact; it can be low-level or high-level.
I hope you like the addition. The implementation is all open source in the Supabase Storage server.
Fantastic! I've never had a chance to try launching the app on Windows, so I'm glad it worked!
I think you are running the dev mode, that's why the Chrome dev tools appeared :)
The import paths feature will be released soon so stay tuned!
In the meantime, you can temporarily place the imported protos next to the one that imports them
Sorry I didn't call this out more clearly, but I think the server-hot vs hot-server thing is a typo in the readme that's wrong on any platform.
Looking forward to include path support. I tried flattening my protos into one directory, updated all their `import` directives and gave it a go, and it works great, thanks! Both client and server running on Windows. My protos use a well known type (google.protobuf.Duration) and I was particularly impressed that it worked without me having to copy in the proto file for that.
I'm sure you already know this, but just in case: include paths don't just help with proto files scattered across different directories, they also allow part of the path to be part of the canonical name of the proto. [1, 2] For example, if you have /src/xyz/foo.proto (containing `import "xyz/bar.proto";`) and /src/xyz/bar.proto:
Then you need -I/src (or similar) because if you run protoc from within /src/xyz/ without -I then it's like you specified -I/src/xyz, and then foo.proto can't find bar.proto (because it looks for /src/xyz/xyz/bar.proto). Using relative paths like this is useful because you can have proto files with the same file name (like "config.proto") so long as their full canonical name is different, and because generated files are output into corresponding subdirectories (which is especially useful in Python where the subdirectory indicates the package name).
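The resolution behaviour described above can be modelled as a toy function: protoc tries each `-I` include directory in order and takes the first directory under which the imported path exists (simplified; real protoc also handles well-known types and more):

```python
def resolve_import(import_path, include_dirs, existing_files):
    """Toy model of protoc import resolution: return the first
    include_dir/import_path combination that exists, else None."""
    for inc in include_dirs:
        candidate = f"{inc.rstrip('/')}/{import_path}"
        if candidate in existing_files:
            return candidate
    return None

# In-memory stand-in for the filesystem from the example above.
files = {"/src/xyz/foo.proto", "/src/xyz/bar.proto"}

# With -I/src, `import "xyz/bar.proto"` in foo.proto resolves fine:
assert resolve_import("xyz/bar.proto", ["/src"], files) == "/src/xyz/bar.proto"

# Running protoc from /src/xyz without -I is like -I/src/xyz: it looks
# for /src/xyz/xyz/bar.proto, which doesn't exist, so resolution fails.
assert resolve_import("xyz/bar.proto", ["/src/xyz"], files) is None
```

This also shows why two files can both be named "config.proto" without clashing: their canonical names (paths relative to an include dir) differ.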
PM for gRPC here - this looks great! Thank you for creating it. I'll definitely be pointing folks to it. I saw that there is support for reflection and TLS planned already.
Are there plans to support record/replay of a sequence?