Brains are great at pattern recognition (lots of studies). This includes ratios. Your shade of color is not a good example, because it's just a single value, not relative to anything on its own. But if you have multiple colors, there will be various relationships/ratios between physical properties of the colors (wavelength, intensity, etc.). Similarly in music, a 1:2 frequency ratio is recognized as an octave. The strongest ratios (i.e. the strongest patterns) are usually the simple ones like 1:2, 1:3, 2:3, etc. However, science hasn't been able to determine whether we recognize the golden ratio because of the Fibonacci sequence pattern that is often found in nature, or whether to us it's just a ratio that is close to a simpler one like 5:3.
It will probably get there one day, since all of the BCL is annotated. Perhaps this isn't done yet because one part of a library can be completely safe to use with AOT while another part isn't.
Oh, I despise this tactic so much. It means the company has known from the start that they can't offer it for free in the long term, but decided to subsidize it in order to gain a dominant position and get rid of competition. This breaks the conditions needed for free-market dynamics to work. In other words, they win market share for reasons other than efficiency, quality, or innovation. That's why some forms of government subsidies are prohibited under certain agreements, for example. Some multinational corporations have annual revenues larger than the GDP of many countries and can easily subsidize negative pricing for years to undercut competitors, consolidate market share, and ultimately gain monopoly power.
Also, the company has made implicit false promises to the customer, as it signals that it has developed a business model where it can offer something for free. For example, a two-sided marketplace where one side gets something for free to attract users and the other side pays (as it profits from these users). Users can't know something isn't sustainable unless the company explicitly states it in some way (e.g. "this is a limited-time offer").
So from the user's perspective, this is a bait-and-switch tactic, where the company has used a free offer in order to manipulate the market.
Well, you claim to combine several interesting features: type safety, small binary size, high performance, and predictable performance (no GC). So I'm curious how this will turn out.
For the web, small binary size is really important. Frameworks like Flutter and Blazor WASM produce big binaries, which limits their usability there.
JS/TS complicates runtime type safety, and its performance characteristics make it unsuitable for everything (multithreading, control over memory management, GC, etc.).
I wonder how much (if at all) the lack of a GC hurts productivity.
It looks like Coi has potential to be used for web, server and cross-platform desktop.
Since the intermediate step is C++, I'm wondering what this means for hot reload (does it make it impossible to implement)?
> the pages within a CBZ are going to be encoded (JPEG/PNG/etc) rather than just being bitmaps. They need to be decoded first, the GPU isn't going to let you create a texture directly from JPEG data.
It seems that JPEG can be decoded on the GPU [1] [2]
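For illustration, here's a minimal sketch of what GPU-side decoding could look like from Python, assuming a torchvision build with nvJPEG/CUDA support (the file name is made up):

```python
# Sketch only: assumes torchvision was built with nvJPEG/CUDA support.
import torch
from torchvision.io import read_file, decode_jpeg

raw = read_file("page_001.jpg")             # raw JPEG bytes as a uint8 tensor on the CPU
if torch.cuda.is_available():
    page = decode_jpeg(raw, device="cuda")  # decoded on the GPU via nvJPEG
else:
    page = decode_jpeg(raw)                 # CPU fallback
print(page.shape)                           # C x H x W tensor, ready to use as a texture
```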
> CRC32 is limited by memory bandwidth if you're using a normal (i.e. SIMD) implementation.
According to smhasher tests [3] CRC32 is not limited by memory bandwidth. Even if we multiply CRC32 scores x4 (to estimate 512 bit wide SIMD from 128 bit wide results), we still don't get close to memory bandwidth.
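A rough back-of-the-envelope version of that comparison (the DDR4-3200 figure is a theoretical dual-channel peak I'm assuming for a typical desktop, not something measured on the smhasher machine):

```python
# Back-of-the-envelope: scale the 128-bit-wide smhasher CRC32 score up 4x and
# compare it to a typical desktop memory-bandwidth ceiling (assumed, not measured).
crc32_mib_s = 7963.20                   # smhasher CRC32 score in MiB/s (from the linked results)
scaled_gib_s = crc32_mib_s * 4 / 1024   # pretend 512-bit SIMD scales linearly: ~31 GiB/s
ddr4_3200_dual_gb_s = 51.2              # theoretical dual-channel DDR4-3200 peak in GB/s
print(f"~{scaled_gib_s:.1f} GiB/s estimated CRC32 vs ~{ddr4_3200_dual_gb_s} GB/s memory")
```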
The 32 bit hash of CRC32 is too low for file checksums. xxhash is definitely an improvement over CRC32.
> to actually check the integrity of archived files you want to use something like sha256, not CRC32 or xxhash
Why would you need to use a cryptographic hash function to check integrity of archived files? A quality non-cryptographic hash function will detect corruption due to things like bit-rot, bad RAM, etc. just the same.
And why is 256 bits needed here? Kopia developers, for example, think 128 bit hashes are big enough for backup archives [4].
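To make the idea concrete, a minimal sketch of a 128-bit non-cryptographic file check, assuming the third-party `xxhash` Python package (the path is made up):

```python
# Sketch: integrity check of an archived file with a 128-bit non-cryptographic hash.
# Assumes the third-party `xxhash` package; the file path is hypothetical.
import xxhash

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    h = xxhash.xxh3_128()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

stored = file_digest("archive/page_001.png")        # recorded at archive time
# ... later, after a transfer or during a scrub ...
if file_digest("archive/page_001.png") != stored:
    print("corruption detected")
```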
Maybe the CRC32 implementations in the smhasher suite just aren't that fast?
[1] claims 15 GB/s for the slowest implementation (Chromium) they compared (all vectorized).
> The 32 bit hash of CRC32 is too low for file checksums. xxhash is definitely an improvement over CRC32.
Why? What kind of error rate do you expect, and what kind of reliability do you want to achieve? Assumptions that would lead to a >32bit checksum requirement seem outlandish to me.
From the SMHasher test results, the quality of xxhash seems higher. It has less bias / higher uniformity than CRC.
What bothers me about these probability calculations is that they always assume perfect uniformity. I've never seen any estimates of how bias affects collision probability, or how to modify the probability formula to account for the imperfect uniformity of a hash function.
It doesn't matter, though. xxhash is better than crc32 for hashing keys in a hash table, but both of them are inappropriate for file checksums -- especially as part of a data archival/durability strategy.
It's not obvious to me that per-page checksums in an archive format for comic books are useful at all, but if you really wanted them for some reason then crc32 (fast, common, should detect bad RAM or a decoder bug) or sha256 (slower, common, should detect any change to the bitstream) seem like reasonable choices and xxhash/xxh3 seems like LARPing.
> both of them are inappropriate for file checksums
CRCs like CRC32 were born for this kind of work. CRCs detect corruption when transmitting/storing data. What do you mean when you say that it's inappropriate for file checksums? It's ideal for file checksums.
Uniformity isn’t directly important for error detection. CRC-32 has the nice property that it’s guaranteed to detect all burst errors up to 32 bits in size, while a b-bit hash misses such an error with probability about 2^−b, of course. (But it’s valid to care about detecting larger errors with higher probability, yes.)
There’s a whole field’s worth of really cool stuff about error correction that I wish I knew a fraction of enough to give reading recommendations about, but my comment wasn’t that deep – it’s just that in hashes, you obviously care about distribution because that’s almost the entire point of non-cryptographic hashes, and in error correction you only care that x ≠ y implies f(x) ≠ f(y) with high probability, which is only directly related in the obvious way of making use of the output space (even though it’s probably indirectly related in some interesting subtler ways).
E.g. f(x) = concat(xxhash32(x), 0xf00) is just as good at error detection as xxhash32 but is a terrible hash, and, as mentioned, CRC-32 is infinitely better at detecting certain types of errors than any universal hash family.
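A tiny demonstration of that burst-error guarantee with Python's zlib.crc32 (random data, purely for the demo):

```python
# Demo: CRC-32 detects every burst error spanning 32 bits or fewer.
import os
import random
import zlib

data = bytearray(os.urandom(4096))
original_crc = zlib.crc32(data)

for _ in range(10_000):
    corrupted = bytearray(data)
    burst_len = random.randint(1, 32)                     # burst spanning <= 32 bits
    start = random.randrange(len(data) * 8 - burst_len)   # random bit offset
    for bit in range(start, start + burst_len):
        corrupted[bit // 8] ^= 1 << (bit % 8)             # flip the bits in the burst
    assert zlib.crc32(corrupted) != original_crc          # guaranteed to differ
print("no burst of <= 32 bits went undetected")
```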
This seems to make sense, but I need to read more about error correction to fully understand it. I was considering the possibility that data could also contain patterns where error detection performs poorly due to bias, and I haven't seen how to include such estimates in the probability calculations.
> The 32 bit hash of CRC32 is too low for file checksums.
What makes you say this? I agree that there are better algorithms than CRC32 for this usecase, but if I was implementing something I'd most likely still truncate the hash to somewhere in the same ballpark (likely either 32, 48, or 64 bits).
Note that the purpose of the hash is important. These aren't being used for deduplication, where you need a globally unique value across all independently queried pieces of data, but rather just to detect file corruption. At 32 bits you have only a roughly 1 in 2^32 chance of a false negative. That should be more than enough. By the time you make it to 64 bits, if you encounter a corrupted file once _every nanosecond_ for the next 500 years or so you would expect to miss only a single event. That is a rather absurd level of reliability in my view.
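Quick sanity check of that 64-bit figure (just arithmetic, assuming a uniformly distributed 64-bit hash):

```python
# One corrupted file every nanosecond for ~500 years, each miss having
# probability 2**-64 under a uniform 64-bit hash.
seconds_per_year = 365.25 * 24 * 3600
events = 500 * seconds_per_year * 1e9      # ~1.6e19 corruption events
expected_misses = events * 2**-64          # ~0.9, i.e. roughly one missed event
print(expected_misses)
```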
I've seen a few arguments that with the amount of data we have today that 1-in-2^32 chance can actually happen, but I can't vouch that their calculations were done correctly.
The readme in the SMHasher test suite also seems to indicate that 32 bits might be too few for file checksums:
"Hash functions for symbol tables or hash tables typically use 32 bit hashes, for databases, file systems and file checksums typically 64 or 128bit, for crypto now starting with 256 bit."
That's vaguely describing common practices, not what's actually necessary or why. It also doesn't address my note that the purpose of the hash is important. Are "file systems" and "file checksums" referring to globally unique handles, content addressed tables, detection of bitrot, or something else?
For detecting file corruption the amount of data alone isn't the issue. Rather what matters is the rate at which corruption events occur. If I have 20 TiB of data and experience corruption at a rate of only 1 event per TiB per year (for simplicity assume each event occurs in a separate file) that's only 20 events per year. I don't know about you but I'm not worried about the false negative rate on that at 32 bits. And from personal experience that hypothetical is a gross overestimation of real world corruption rates.
It depends on how you calculate the statistics. If you are designing a file format that hundreds of millions of users will use over its lifetime (storing billions of files), what are the chances that a 32-bit checksum fails to catch at least one corruption? Think of transfers over unstable wireless connections, storage on cheap flash drives, HDDs with higher error rates, unstable RAM, etc. We want to avoid data corruption if we can, even in less than ideal conditions. The cost of going from 32-bit to 64-bit hashes is very small.
No, it doesn't "depend on how you calculate statistics". Or rather you are not asking the right question. We do not care if a different person suffers a false negative. The question is if you, personally, are likely to suffer a false negative. In other words, will any given real world deployment of the solution be expected to suffer from an unacceptably high rate of false negatives?
Answering that requires figuring out two things: the sort of real-world deployment you're designing for, and what the acceptable false negative rate is. For an extremely conservative lower bound, suppose 1 error per TiB per year and 1000 TiB of storage. That gives a 99.99998% success rate for any given year, which translates to expecting 1 false negative every 4 million years.
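Spelling that arithmetic out (again assuming a uniform 32-bit hash and one file per corruption event):

```python
# 1000 TiB at 1 corruption event per TiB per year, each miss having
# probability 2**-32 under a uniform 32-bit hash.
events_per_year = 1000 * 1
p_miss = 2 ** -32
success_rate = (1 - p_miss) ** events_per_year      # ~0.9999998 ("99.99998%")
years_per_expected_miss = 1 / (events_per_year * p_miss)
print(success_rate, years_per_expected_miss)        # ~4.3 million years
```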
I don't know about you but I certainly don't have anywhere near a petabyte of data, I don't suffer corruption at anywhere near a rate of 1 event per TiB per year, and I'm not in the business of archiving digital data on a geological timeframe.
I can't say I agree with your logic here. We are not talking about any specific backup or anything like that. We are talking about the design of a file format that is going to be used globally.
A business running a lottery has to calculate the odds of anyone winning, not just the odds of a single person winning. Likewise, the designer of a file format has to consider the chances for all users: what percentage of users will be affected by any given design decision.
For example, what if you offered a guarantee that a 32-bit hash will protect you from corruption, and generously compensated anyone who hit this type of miss; how would you calculate the probability then?
If you offer compensation then of course you need to consider your risk exposure, i.e. total users. That's similar to a lottery where the central authority is concerned with all payouts while an individual is only concerned with their own payout.
Outside of brand reputation issues that is not how real world products are designed. You design a tool for the specific task it will be used for. You don't run your statistics in aggregate based on the expected number of customers.
Users are independent from one another. If the population doubles my filesystem doesn't suddenly become less reliable. If more people purchase the same laptop that I have the chance of mine failing doesn't suddenly go up. If more people deep fry things in their kitchen my own personal risk of a kitchen fire isn't increased regardless of how busy the fire department might become.
> It seems that JPEG can be decoded on the GPU [1] [2]
Sure, but you wouldn't want to. Many algorithms can be executed on a GPU via CUDA/ROCm, but the use cases for on-GPU JPEG/PNG decoding (mostly AI model training? maybe some sort of giant megapixel texture?) are unrelated to anything you'd use CBZ for.
For a comic book the performance-sensitive part is loading the current and adjoining pages, which can be done fast enough to appear instant on the CPU. If the program does bulk loading then it's for thumbnail generation which would also be on the CPU.
Loading compressed comic pages directly to the GPU would be if you needed to ... I dunno, have some sort of VR library browser? It's difficult to think of a use case.
> According to smhasher tests [3] CRC32 is not limited by memory bandwidth.
> Even if we multiply CRC32 scores x4 (to estimate 512 bit wide SIMD from 128
> bit wide results), we still don't get close to memory bandwidth.
Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s) which indicates it's either very old or isn't measuring pure CRC32 throughput (I see stuff about the C++ STL in the logs).
Look at https://github.com/corsix/fast-crc32 for example, which measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems. Obviously if you solder a Raspberry Pi to some GDDR then the ratio differs.
> The 32 bit hash of CRC32 is too low for file checksums. xxhash is definitely
> an improvement over CRC32.
You don't want to use xxhash (or crc32, or cityhash, ...) for checksums of archived files, that's not what they're designed for. Use them as the key function for hash tables. That's why their output is 32- or 64-bits, they're designed to fit into a machine integer.
File checksums don't have the same size limit so it's fine to use 256- or 512-bit checksum algorithms, which means you're not limited to xxhash.
> Why would you need to use a cryptographic hash function to check integrity
> of archived files? A quality non-cryptographic hash function will detect
> corruption due to things like bit-rot, bad RAM, etc. just the same.
I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.
> And why is 256 bits needed here? Kopia developers, for example, think 128
> bit hashes are big enough for backup archives [4].
The checksum algorithm doesn't need to be cryptographically strong, but if you're using software written in the past decade then SHA256 is supported everywhere by everything so might as well use it by default unless there's a compelling reason not to.
For archival you only need to compute the checksums on file transfer and/or periodic archive scrubbing, so the overhead of SHA256 vs SHA1/MD5 doesn't really matter.
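For what it's worth, a minimal sketch of that kind of scrub with hashlib; the paths and manifest format here are made up for the example:

```python
# Sketch: recompute SHA-256 digests during a periodic archive scrub and compare
# them against a previously stored manifest. Paths and manifest are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

manifest = json.loads(Path("archive/manifest.json").read_text())   # {filename: digest}
for name, expected in manifest.items():
    if sha256_of(Path("archive") / name) != expected:
        print(f"checksum mismatch: {name}")
```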
I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.
> which can be done fast enough to appear instant on the CPU
Big scanned PDFs can be problematic and could benefit from more efficient processing (if there were HW support for such a technique).
> Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s) which indicates it's either very old or isn't measuring pure CRC32 throughput
It may not be the fastest implementation of CRC32, but it's also measured on an old Ryzen 5 3350G at 3.6 GHz. Below the table there are results from different HW; on an Intel i7-6820HQ, CRC32 achieves 27.6 GB/s.
> measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems.
That looks incredibly suspicious, since the Apple M1 has a maximum memory bandwidth of 68.25 GB/s [1].
> I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.
Your argument is meaningless without more details. xxhash supports 128 bits, and I doubt it would have missed the errors in your case.
SHA256 is an order of magnitude or more slower than non-cryptographic hashes. In my experience the archival process usually has a big enough effect on performance to care about that.
I'm beginning to suspect your primary reason for disliking xxhash is that it's not a de facto standard like CRC or SHA. I agree that's a big one, but you keep implying there's more to why xxhash is bad. Maybe my knowledge is lacking; care to explain? Why wouldn't 128-bit xxhash be more than enough for file checksums? AFAIK the only thing it doesn't do is protect you against tampering.
> I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.
Kopia uses hashes for block-level deduplication. What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (assuming we don't need protection from tampering)?
> What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (assuming we don't need protection from tampering)?
Malicious block hash collisions, where the colliding block was introduced in some way other than tampering (e.g. storing a file created by someone else).