Hacker News | dabinat’s comments

I’m curious if they remove the displays. Not every laptop works with the display closed and it might cause heat issues that throttle the CPU or reduce the life of the machine to run it like that long-term.

It’s easy enough to use your own domain as a CNAME.

It can be difficult to reason about seemingly innocuous things at scale. I have definitely fallen into the trap of increasing file size from 8 KB to 10 KB and having it cause massive problems when multiplied across all customers at once.

The problem with using S3 as a filesystem is that it’s immutable, and that hasn’t changed with S3 Files. So if I have a large file and change 1 byte of it, or even just rename it, it needs to upload the entire file all over again. This seems most useful for read-heavy workflows of files that are small enough to fit in the cache.

That’s not that different from CoW filesystems: there is no rule that files must map 1:1 to objects; you can transparently divide a file into smaller chunks to enable finer-grained edits.
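A minimal sketch of that chunking idea, where a plain dict stands in for S3 and the chunk size and hashing scheme are purely illustrative. Editing one byte only writes the one chunk that changed:

```python
# Hypothetical sketch of chunked copy-on-write storage on an object
# store. A dict stands in for S3; each chunk would really be an
# object keyed by its content hash.
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use KB-MB chunks

store = {}  # content hash -> bytes (the "bucket")

def put_file(data: bytes) -> list[str]:
    """Split data into chunks, upload new ones, return the manifest."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)  # dedup: only new content is written
        manifest.append(key)
    return manifest

def get_file(manifest: list[str]) -> bytes:
    return b"".join(store[k] for k in manifest)

m1 = put_file(b"hello world!")
objects_before = len(store)
m2 = put_file(b"hello World!")   # one byte changed
# Only the chunk containing the edit is re-uploaded:
print(len(store) - objects_before)  # 1 new object, not 3
```

The manifest plays the role of mutable metadata: a rename or a small edit rewrites the manifest, not the data.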

The most obvious approach seems to be to implement device blocks as S3 objects and run any existing filesystem on top.

S3 is notoriously miserable with small objects.

The unit of granularity for a CoW filesystem is a block, which is typically 4 KB or smaller. The unit of granularity for S3 is the entire object or 5 MB (the minimum multipart upload part size), whichever is smaller. The difference can be immense.
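Rough arithmetic on what that granularity gap means for a one-byte edit, assuming a full 5 MB part has to be rewritten on the S3 side versus one 4 KB block on a CoW filesystem:

```python
# Back-of-the-envelope write amplification for a 1-byte edit, using
# the figures from the comment above (4 KB CoW block vs. 5 MB
# minimum S3 multipart part).
COW_BLOCK = 4 * 1024           # 4 KB
S3_MIN_PART = 5 * 1024 * 1024  # 5 MB

amplification = S3_MIN_PART // COW_BLOCK
print(amplification)  # 1280x more data rewritten for the same edit
```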

But this doesn't

Files can be immutable if you have mutable metadata - but S3 does not have mutable metadata, so you can't rename a directory without a full copy of all its contents.
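A toy illustration of why that is: with immutable keys, "renaming" a prefix means one copy plus one delete per object, so the cost scales with the directory's contents. A dict stands in for the bucket here; the keys and data are made up.

```python
# Sketch of why renaming a "directory" on S3 is a full copy: object
# keys are immutable, so moving prefix photos/ to archive/ touches
# every object under it.
bucket = {
    "photos/2023/a.jpg": b"...",
    "photos/2023/b.jpg": b"...",
}

def rename_prefix(bucket: dict, old: str, new: str) -> int:
    """Server-side 'rename': one copy + one delete per object."""
    moved = 0
    for key in [k for k in bucket if k.startswith(old)]:
        bucket[new + key[len(old):]] = bucket.pop(key)  # copy, then delete
        moved += 1
    return moved  # cost scales with the number (and size) of objects

n = rename_prefix(bucket, "photos/", "archive/")
print(n, sorted(bucket))
```

With mutable metadata (as in a real filesystem), the same rename would be a single pointer update regardless of directory size.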

Immutable files can be worked around by chunking them, allowing files to be opened and appended to - we do this in HopsFS. However, random writes are typically not supported in scale-out metadata filesystems - but thankfully they are rarely used by POSIX clients.


Depends how you implement the fs layer on top of S3. As a quick example, I’ve done a couple of implementations of exactly that, where a file is chunked into multiple S3 objects; this allows for CoW semantics if required, and for parallel uploads/downloads. In the end it heavily depends on your use case.

The big problem is that Apple disables AV1 support in Safari on devices that do not have hardware decoders, even if those devices are powerful enough to decode in software. I can understand it for a phone, but it seems unreasonable for an M2 Ultra Mac Studio.

I am surprised that vehicle manufacturers would support a weight-based fee, given that their most profitable vehicles like SUVs and pickup trucks tend to be the heaviest.

Intel’s doing interesting things with their Arc GPUs. They’re offering GPUs that aren’t super fast for gaming but are relatively low power and have a boatload of VRAM. The new B70 is half the retail price of a 5090 (probably more like a third or a quarter of actual 5090 selling prices) but has the same amount of memory and half the TDP. So for the same price as a 5090 you could get several and use them together.

Is it feasible to run LLM inference comparably without CUDA or ROCm? How much of the cost-performance advantage goes away?

I have heard people claim the opposite: that Chrome is the memory hog and Firefox is much leaner. I think it’s probably dependent upon usage patterns, OS and extensions.

But I think the biggest problem with Firefox is Mozilla itself. I’d love to see a group with some actual backing behind them fork Firefox and make a proper competitor unaffected by Mozilla’s poor decision-making.


You could provide code to enterprise customers for a fee, with contractual restrictions on how they can use it.

You could also have trusted third parties see the code and vouch for it.

Or you may decide that the 5% asking for this feature aren’t worth it. You don’t have to capture every customer.


To me, the correct solution to the problem of being tied to one ecosystem crate for utility features like serialization or logging is reflection / comptime. The problem is not the orphan rule, it’s that Rust needs reflection a lot more than a dynamically-typed language does, and it should have been added a long time ago. (It’s in development now, but it will most likely be years before it ships in a stable version.)

