Holy thank god. These things deserve to be written in Rust (see also matrix-rust-sdk and the entire ecosystem rebasing on it). As for go-ipfs and the existing ipfs ecosystem, I'm just so disappointed. They've had so much money floating around, so many filecoin-adjacent projects with ipfs-adjacent insiders, and yet so much low-hanging fruit that seems unrealized or locked up in proof-of-concept nodejs projects.
This is probably the single best thing to happen for ipfs in 3+ years. Bravo, I actually have hope in ipfs again.
The messaging on the site makes me think I'm not alone in feeling this, and that excites me even more.
What do you mean "IPFS runs on Golang"? IPFS is a protocol that had official implementations in Golang and JavaScript.
I'm glad to see another implementation. After trying to run go-ipfs (now Kubo) for years, I gave up. The implementation was incredibly low quality. It had tons of bugs, a very quirky CLI and API, and performed horribly. With any decent amount of data it would thrash like crazy for no clear reason.
I'm really glad to see another implementation. It may bring me back to running my own node.
I happen to like Rust and despise Golang, but that isn't a big deal; it just raises the chance that I can contribute. The go-ipfs code that I read was low quality even for Go.
Hey folks, I am one of the founders behind this project, happy to answer any questions you might have about it.
We put out an initial release earlier today: https://github.com/n0-computer/iroh/releases/tag/v0.1.0. We are still very early, though, so be gentle :)
Is this project also paid for by Protocol Labs, like the initial IPFS implementation? Or is it a completely separate organisation? Is n0 somehow receiving grants from Protocol Labs to fund this effort? If not, how is the organisation supposed to survive long-term?
Project looks great. I have been following IPFS from a distance for a while now, but it still seems like it's not really ready to use for much. I was investigating tech that would let you build a decentralised, censorship-resistant forum, but it seemed extremely difficult to run a real website on IPFS.
What use cases do you think work well today and are there any prominent projects doing anything real with the tech?
Kubo/go-ipfs is notoriously finicky to load-balance/proxy, and running anonymously over Tor/I2P is just short of an explicit non-goal, to the point of open PRs being neglected for years. Can we expect a different approach from Iroh?
It is nice to see another IPFS implementation coming up, especially in Rust - but will there be any good client libraries in Rust any time soon? Does your project involve implementing client libraries?
Yes. I experimented with the go IPFS client but stopped using it after noticing that even a fairly small share was burning up 75% of a CPU core all of the time.
How is Rust or any other language supposed to help with this? Unless there is a serious design problem in the current Go codebase, removing GC overhead will not by itself solve the problem of some software having to run all the time.
Hey, I work on the project. This isn't a "rewrite it in Rust" thing; we intend to iterate on the protocol itself to drive performance improvements. Nearly all of our team are veterans of the IPFS ecosystem who want to see IPFS evolve to be more performant. We obviously can't change the overall performance of the network overnight, but we do control interop between nodes that run iroh, where we plan to ship fast paths that we can propose as spec-level changes to the protocol once proven in the wild.
Our initial research indicates the thing that needs the most attention isn't the DHT (a commonly-cited source of slowness), but the data transfer protocol: bitswap. We plan to tackle that in the coming months.
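To make that concrete, here's a minimal Rust sketch of the latency problem (purely illustrative: these types are not iroh or bitswap APIs, and the real protocol has want-have/want-block messages this toy omits). Fetching a DAG one block per request costs one round trip per block; batching a wantlist amortises that.

    use std::collections::HashMap;

    type Cid = u64; // stand-in for a real content identifier

    struct Peer {
        store: HashMap<Cid, Vec<u8>>,
    }

    impl Peer {
        // One request/response exchange: send a wantlist, receive
        // whichever of those blocks the peer has.
        fn exchange(&self, wantlist: &[Cid]) -> Vec<(Cid, Vec<u8>)> {
            wantlist
                .iter()
                .filter_map(|c| self.store.get(c).map(|b| (*c, b.clone())))
                .collect()
        }
    }

    fn main() {
        let peer = Peer {
            store: (0..1000u64).map(|c| (c, vec![0u8; 1024])).collect(),
        };
        let wanted: Vec<Cid> = (0..1000).collect();

        // Naive: one block per round trip -> 1000 exchanges.
        let naive_round_trips = wanted.len();

        // Batched: 64 wants per message -> ~16 exchanges.
        let batched_round_trips = wanted.chunks(64).count();
        for chunk in wanted.chunks(64) {
            let _blocks = peer.exchange(chunk);
        }
        println!("naive: {naive_round_trips} RTTs, batched: {batched_round_trips} RTTs");
    }

On a 50ms link that's the difference between roughly 50 seconds and under a second of pure waiting, before any bytes move.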
Yea... I'm a huge fan of Rust and it's my primary language these days, but despite my many complaints about Go, it's not bad. In fact it's pretty solid. I can't imagine the primary slowness in go-ipfs will be solved by a Rust implementation on language choice alone. Some of it, sure, but not the primary cause, I would think.
Do I think Rust could yield a better, more generic library with code far more concise than Go's? Yes, but that's not likely the cause of go-ipfs's issues.
I can definitely see a chance of a rewrite producing better code, though. As much as I'd love to give Rust that victory, I just can't imagine the language is _that_ big of a contributor in this specific case.
We are going to first try to reach feature parity with the Go implementation in the areas we care about. But it is quite clear that some drastic improvements are needed at the protocol level to get the desired performance.
We are going to experiment with such improvements and try to work with Protocol Labs to standardise them if/when we come up with a good solution.
I'm not sure people are claiming that the choice of programming language is the deciding factor? All the website and the comments are saying is "the current client (which happens to be written in go) is slow, maybe this rewrite will be faster". I'm not sure it's fair to read it as an attack on go, or an endorsement of rust.
Based on my experience trying to use the Kubo client and reading the code to understand bugs, the codebase is just not of great quality. Any high-quality rewrite should be able to do a much better job. I'm sure the language choice helps, but I don't think it is the biggest factor at play here.
What I think IPFS lacks, and what would be ideal, is a distributed file system. Say Wikipedia shares some data and I only have 100GB of space but want to help them out: being able to dedicate that 100GB of storage to Wikipedia or whoever else would be awesome.
Also, if I already have a complete file or backup on multiple computers, I'd like to be able to start sharing it immediately, without having to use roughly double the storage just to also share it over IPFS.
You can kind of handle this use case today. You can download a part of Wikipedia, and then people can retrieve content from your node in addition to the official mirrors.
The problem is that the performance of content discovery and content sync in ipfs leaves something to be desired. Probably due to this, there is also a lack of tooling for such use cases.
The "shared wikipedia" use case is one I specifically would like to solve. Compared to some other things out there, it is a relatively small data set. And lots of people would be willing to help share wikipedia. So it makes for a great test case.
Well, some of that runs up against the way IPFS CID hashing works: every edit results in a new CID for (e.g. https://en.wikipedia.org/wiki/InterPlanetary_File_System), so your local pin of /ipfs/68106339799a6170efed62807e5c245d becomes orphaned when the next edit makes /ipfs/1ba72a5504f558ade3e784d9d349543c the most recent revision. I'm aware of /ipns, but unless there were going to be an /ipns entry for every single page, it doesn't become as much of a distributed filesystem as you might think. Or, put another way: it's really great as a distributed filesystem for content that doesn't change very often. I would expect archive.org to get more benefit from having IPFS seeds than anything on the Internet that serves dynamically generated content.
I was actually sad that they don't already generate torrents for all their collections (or at least it didn't appear that they do).
Er, it's not that hard. Think about git repos: they have refs that point to a tip that you traverse. This would work the same way.
And in fact CIDs and chunking are what make this possible...
Ipfs is in fact ripe for exactly these sorts of things, and I highly suspect we're going to see more innovation in this space, in applications that actually push the envelope, now that there's a solid Rust impl.
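Here's the whole idea as a self-contained Rust toy: immutable, content-addressed blocks plus one mutable ref that names the current tip, which is the role /ipns (or a git branch) plays. DefaultHasher stands in for real CID hashing.

    use std::collections::HashMap;
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Toy content address: hash of the block's bytes.
    fn cid(data: &str) -> u64 {
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        h.finish()
    }

    fn main() {
        let mut blocks: HashMap<u64, String> = HashMap::new(); // immutable store
        let mut refs: HashMap<&str, u64> = HashMap::new();     // mutable names

        // Publish revision 1 and point the ref at it.
        let v1 = "wiki root: [Article_A@rev1, Article_B@rev1]".to_string();
        let v1_cid = cid(&v1);
        blocks.insert(v1_cid, v1);
        refs.insert("wikipedia", v1_cid);

        // Edit one article: publish a new root, move the ref.
        let v2 = "wiki root: [Article_A@rev2, Article_B@rev1]".to_string();
        let v2_cid = cid(&v2);
        blocks.insert(v2_cid, v2);
        refs.insert("wikipedia", v2_cid);

        // Readers resolve the ref once, then traverse immutable content.
        let tip = refs["wikipedia"];
        println!("tip {:x} -> {}", tip, blocks[&tip]);
    }

Note only one mutable name is needed for the whole tree, not one per page; the old root stays fetchable by its CID for anyone still pinning it.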
You must be my boss; he loves using that expression in Zoom meetings where he's not responsible for the implementation.
> refs that point to a tip that you traverse
Oh, ok, I guess you'll go to /ipns/wikipedia.org and just keep clicking links until you arrive at /wiki/InterPlanetary_File_System then. I think someone tried to prove the "6 degrees of Wikipedia," so yeah, how hard can it be?
I'm ashamed I got baited into even replying to this
The discussion was about a way of replicating or distributing a filesystem structure; there's absolutely no requirement anywhere that individual files in that FS are directly addressable via IPNS. And yes, it's simple, because there are countless implementations of chunking/deduping CID systems, basically none of which have that artificially created requirement. MANY of which ARE ALREADY BUILT ON IPFS AND IPNS and work exactly the way I describe.
Also, it's like I mentioned it for a reason: see git, where there isn't a nameable identity per file per commit, and yet there's a revisioned filesystem abstraction built on top! That works perfectly fine with pointers to CID content that points to other CID content.
It's almost exactly what is being discussed at hand.
Shame away, buddy. Btw, I've contributed code to more than one content-addressable chunking/deduping filesystem project, because they're the future of file sync, and frankly, now that Iroh is here, we're likely to start realizing these things in a "final"-ish form.
Oh, and the other commenter is saying the exact same thing as me ;).
In theory, you would create an IPNS entry for the root of Wikipedia and update it on each change. Wikipedia is a relatively static data set; on any given day probably much less than 1% of it changes.
So the new Wikipedia tree reuses the vast majority of the previous version.
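A toy Merkle tree makes the reuse concrete: editing one leaf only changes the hashes on the path from that leaf up to the root, so every other block of the previous snapshot keeps its CID and its pins. (DefaultHasher below is a stand-in for real IPFS chunking and hashing.)

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn leaf_cid(data: &str) -> u64 {
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        h.finish()
    }

    fn parent_cid(children: &[u64]) -> u64 {
        let mut h = DefaultHasher::new();
        children.hash(&mut h);
        h.finish()
    }

    fn main() {
        // Revision 1: four pages, two directories, one root.
        let p = ["page0 v1", "page1 v1", "page2 v1", "page3 v1"].map(leaf_cid);
        let d0 = parent_cid(&p[0..2]);
        let d1 = parent_cid(&p[2..4]);
        let root1 = parent_cid(&[d0, d1]);

        // Revision 2: edit only page3. Only d1 and the root get new CIDs.
        let p3_v2 = leaf_cid("page3 v2");
        let d1_v2 = parent_cid(&[p[2], p3_v2]);
        let root2 = parent_cid(&[d0, d1_v2]);

        assert_ne!(root1, root2); // a new root CID per revision, as noted above,
        assert_ne!(d1, d1_v2);    // plus new CIDs along the changed path only:
        // p[0], p[1], p[2] and d0 are byte-identical and reused as-is.
        println!("3 of 7 blocks changed; 4 of 7 reused");
    }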
Not to be facetious, but what are the serious uses of IPFS in the wild? I usually use it for downloading books from Libgen… I'd be curious to hear from people who use it for less illicit stuff. I think it's a wonderful project and it's good to have more than one implementation.
I've used it in the past to distribute updates to a cluster of servers sharing the same egress/ingress. At one point, the number of servers meant that pushing software updates saturated the ingress and bottlenecked on bandwidth.
We used IPFS to distribute the updates in a P2P fashion instead, so all the servers could share the data among themselves. Rolling out updates went from taking minutes to seconds.
Could have used torrents for basically the same effect, but I felt like trying something novel, and it worked out fine in the end.
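The shape of it is small. Here's a hedged Rust sketch of the fetch side, going through a local Kubo daemon's HTTP RPC API (/api/v0/cat is a real Kubo endpoint, but the CID and filename are placeholders, and the reqwest crate with its "blocking" feature is assumed):

    use std::fs::File;
    use std::io::copy;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let cid = "bafy-placeholder-release-cid"; // CID of the new build, announced out-of-band
        let url = format!("http://127.0.0.1:5001/api/v0/cat?arg={cid}");

        // Kubo's RPC API expects POST, not GET.
        let mut resp = reqwest::blocking::Client::new().post(&url).send()?;

        // Stream the artifact to disk; blocks can arrive from whichever
        // sibling servers already have them, not just the origin.
        let mut out = File::create("update.tar.gz")?;
        copy(&mut resp, &mut out)?;
        println!("update fetched through the local ipfs daemon");
        Ok(())
    }

Each server runs a daemon, so once one box has a block, its neighbours pull it locally instead of from the origin, which is where the minutes-to-seconds win came from.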
I normally disagree with people criticizing websites for focusing on style instead of having an ugly 2000s interface, but I have to agree with you on this. This website is extremely difficult to navigate.
Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting.