It would be great to see IPFS used in more places and to be able to use it more in my own projects. Unfortunately it’s often hard from a business perspective to make the full jump into IPFS.

For this reason I think the following approach could be taken to convince companies to adopt IPFS:

Convince them to expose multihash links to their public content, such as cas.example.com/QmXo....

Convince them to upload this content automatically to IPFS
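The multihash link in step 1 could be sketched like this. This is a simplified illustration, not a real CID: actual IPFS CIDs add a version and codec prefix and typically use base58 or base32 multibase encoding, and cas.example.com is the hypothetical domain from above.

```python
import base64
import hashlib

def content_address(data: bytes) -> str:
    """Build a simplified multihash-style address for some content."""
    # Raw multihash layout: <hash-fn code><digest length><digest>.
    # 0x12 = sha2-256, 0x20 = a 32-byte digest.
    digest = hashlib.sha256(data).digest()
    multihash = bytes([0x12, 0x20]) + digest
    return base64.b32encode(multihash).decode().rstrip("=").lower()

# The same bytes always map to the same address, so the link can never
# silently change what it points to:
url = f"https://cas.example.com/{content_address(b'some release tarball')}"
```

The point is that the URL is derived from the content itself, so anyone can verify the download and anyone can serve it.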

For example with GitHub:

I currently dislike directly referencing GitHub archives, as organization name changes, project name changes, and various deletions could all threaten the permanence of that link. I would much prefer referencing a hash instead, particularly if I could easily pin that content to guarantee it didn’t get deleted.

The second step would then decentralize the serving of this content, to minimize chance of data loss and downtime. It would also allow organizations to seamlessly help host this content on IPFS without any URL changes. As well as all the various other advantages of IPFS.

This could apply to Amazon S3/CloudFront, to Hackage, to GitLab, to Google Hosted Libraries, and to many other organizations.

S3X (S3 on top of IPFS) seems to fall in line with what you’re talking about, as it lets anyone who uses MinIO use IPFS, and thus anyone who builds applications with an S3 client can now use IPFS.

One of the difficulties in improving adoption of IPFS is usability: it means people need to adjust the way their applications work, whether that’s building new ones from the ground up to use IPFS or modifying existing ones to use it.

Undoubtedly there will be more avenues for this kind of stuff, but S3X is a pretty good start.

Our system itself is inherently centralized and highly mutable, so we are serving it through DNS.

However there are a lot of opportunities to move parts of what we are doing into IPFS.

We have a lot of static assets which we would like to have publicly available, particularly to browsers. We currently use an S3 bucket with hash-based URLs, but would happily use IPFS instead as long as the pricing and performance were comparable, and as long as there were HTTP URLs available until IPFS is supported natively in browsers.
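Until browsers support ipfs:// natively, public gateways already provide exactly those HTTP URLs. A small sketch of the two common gateway URL layouts (the CID is a placeholder; ipfs.io and dweb.link are real public gateways):

```python
def gateway_urls(cid: str) -> dict:
    """HTTP fallback URLs for a given CID via public IPFS gateways."""
    return {
        # Path-style gateway: simple, but all content shares one origin.
        "path": f"https://ipfs.io/ipfs/{cid}",
        # Subdomain-style gateway: each CID gets its own origin, which
        # keeps the browser same-origin policy meaningful for web assets.
        "subdomain": f"https://{cid}.ipfs.dweb.link/",
    }

urls = gateway_urls("QmXoExampleCid")  # placeholder CID for illustration
```

Since the CID is in the URL either way, switching from gateway URLs to native ipfs:// later would not require re-publishing anything.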

We also work a lot with existing open source software, typically hosted on GitHub and sometimes deployed on Hackage. It would be ideal to develop and catalog this software in whichever way is most convenient for the developers (e.g. GitHub), but then use more permanent IPFS-y URLs when deploying or pinning a specific version of the software.

Long term we will also most likely be serving various useful datasets for offline usage. We would like to serve these over IPFS for URL permanence, and so that we don’t have to worry as much about keeping all the datasets around long term, as the ones people care about they can easily pin themselves. Again for this purpose we would want S3-comparable price and performance.

So to sum it up: pricing and performance, browser compatibility, interop with existing services and communities like GitHub.

In terms of cost, you will be hard pressed to find an existing service comparable to S3. Typical services charge $0.14 to $0.15/GB to store your data in one location. I’m a bit biased, but the only IPFS service out there that comes close to S3 in price is the one I work on, https://play2.temporal.cloud; its cost is roughly comparable to Google Compute Engine block storage. It’s $0.05/GB to $0.07/GB, and your data gets stored in two different geographical regions. But even this is more expensive than S3 by about double the pure storage cost.
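To put those rates side by side: the S3 figure below is an assumption, roughly the published S3 Standard storage price that the "about double" comparison implies; the other two are the figures quoted above.

```python
def monthly_storage_cost(gb: float, rate_per_gb: float) -> float:
    """Pure storage cost for one month, ignoring bandwidth."""
    return gb * rate_per_gb

S3_RATE = 0.023       # USD per GB-month, approx. S3 Standard (assumed)
TEMPORAL_RATE = 0.05  # USD per GB-month, low-end figure quoted above
TYPICAL_RATE = 0.14   # USD per GB-month, typical IPFS pinning service

# For 100 GB stored for one month:
#   S3       ~ $2.30
#   Temporal ~ $5.00   (about double S3)
#   typical  ~ $14.00  (roughly 6x S3)
```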

So S3X gives you 90% of what you want:

interop with existing services

performance

costs roughly comparable to GCE block storage

S3-comparable pricing for IPFS services does not exist; the closest is the cost I’ve mentioned. If that is undesirable, the only other option is to use go-ds-s3 and run your own IPFS nodes, but this likely isn’t a good idea, since you will undoubtedly rack up huge bandwidth costs with S3.

One thing to keep in mind is that S3 service providers typically charge for storage plus bandwidth. So if you are doing a lot of egress traffic with your S3 provider, your cost can balloon, which is the one area where IPFS services have leverage. There isn’t a single IPFS service provider that charges for bandwidth.
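The balloon effect is easy to see with a rough model. The S3 rates here are assumptions in the ballpark of published S3 Standard pricing; the IPFS rate is the Temporal figure mentioned earlier.

```python
def s3_monthly(storage_gb: float, egress_gb: float,
               store_rate: float = 0.023, egress_rate: float = 0.09) -> float:
    """Storage + bandwidth: the usual S3-style bill (rates assumed)."""
    return storage_gb * store_rate + egress_gb * egress_rate

def ipfs_pinning_monthly(storage_gb: float, store_rate: float = 0.05) -> float:
    """Storage only: the IPFS providers discussed here don't bill egress."""
    return storage_gb * store_rate

# 100 GB stored; the comparison flips as egress grows:
#   light egress (10 GB):  S3 ~$3.20  vs  IPFS ~$5.00
#   heavy egress (500 GB): S3 ~$47.30 vs  IPFS ~$5.00
```

Under this model the IPFS provider wins as soon as monthly egress is a few multiples of the stored data, which is common for popular public assets.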

Since our S3 costs are very low relative to our other AWS services, it may work out for us to use an IPFS storage provider, particularly if that means we don’t have to worry about cost surges due to bandwidth surges.

Is there a good guide somewhere to compare the available IPFS storage providers, and the available IPFS gateways, in terms of things like pricing, performance and long term stability?

I still think it would be quite useful to try and convince Amazon to provide IPFS-backed hosting, since so many people already use S3 for storage, and often in very immutable ways.

I also think that the GitHub use-case still benefits heavily from cooperation with GitHub itself. It is technically possible to just immediately download any GitHub archive I need and upload it to IPFS, but it’s not particularly convenient.
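For reference, the inconvenient manual route looks roughly like this. The example/project/v1.0.0 names are placeholders; the GitHub archive URL format and the `ipfs add` command are real, but the helper itself is a hypothetical sketch.

```python
def archive_to_ipfs_commands(org: str, repo: str, ref: str) -> list:
    """Shell commands to mirror one GitHub archive onto IPFS by hand."""
    url = f"https://github.com/{org}/{repo}/archive/{ref}.tar.gz"
    tarball = f"{repo}-{ref}.tar.gz"
    return [
        f"curl -L -o {tarball} {url}",          # fetch the archive
        f"ipfs add --cid-version 1 {tarball}",  # add (and pin) it locally
    ]

cmds = archive_to_ipfs_commands("example", "project", "v1.0.0")
```

This has to be repeated for every release of every project, which is exactly why cooperation from GitHub, publishing the CID alongside each archive, would be so much more convenient.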

I still think it would be quite useful to try and convince Amazon to provide IPFS-backed hosting, since so many people already use S3 for storage, and often in very immutable ways.

It would be quite useful for you, but how is it useful for Amazon to provide a service that competes with its wildly successful S3 offering and makes it effortless for customers to migrate away to other providers, when using other providers is part of the IPFS architecture? How would they charge for an IPFS service?

tysonzero:

I also think that the GitHub use-case still benefits heavily from cooperation with GitHub itself. It is technically possible to just immediately download any GitHub archive I need and upload it to IPFS, but it’s not particularly convenient.

Again, why would GitHub do that? I doubt that bandwidth and storage costs for GitHub archives are much of a problem for Microsoft.

It would be quite useful for you, but how is it useful for Amazon to provide a service that competes with its wildly successful S3 offering and makes it effortless for customers to migrate away to other providers, when using other providers is part of the IPFS architecture? How would they charge for an IPFS service?

They would charge for pinning at a fairly comparable rate to what they currently charge for storage. Maybe slightly higher but with no bandwidth fees as mentioned above?

From a business perspective it would keep people who like IPFS and content addressable storage paying them for pinning instead of someone else. But you are right that it’s possible the competitive reasons may outweigh that.

Again, why would GitHub do that? I doubt that bandwidth and storage costs for GitHub archives are much of a problem for Microsoft.

It’s less about the bandwidth and storage costs of GitHub. It’s about providing much more permanent URLs for archives rather than the current ones which can break for a variety of different reasons.

This would allow current GitHub-hosted software like Nix to be hosted on IPFS instead, using GitHub for development and pinning, and not relying on it to always be up in order to deploy.

The market has changed a little since then, with Pinata now doing “replications”, but each replication doubles your data costs. For example, Temporal stores your data in two different regions (east coast of Canada + west coast of Canada) and on 3 IPFS nodes in total, all for the same cost ($0.05/$0.07 per GB), whereas Pinata will charge you extra just for distributing your data onto an extra node.
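In other words, under the two pricing models just described (the per-replica rate here is the typical-service figure from earlier in the thread, used as an assumed stand-in for Pinata’s actual pricing):

```python
def per_replica_cost(gb: float, replicas: int, rate: float = 0.15) -> float:
    """Pinata-style model: every extra replica adds a full copy's cost."""
    return gb * rate * replicas

def flat_cost(gb: float, rate: float = 0.05) -> float:
    """Temporal-style model: one rate covers 3 nodes across 2 regions."""
    return gb * rate

# Going from 1 to 2 replicas doubles the bill in the first model,
# while the second stays flat regardless of replica count.
```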

Since our S3 costs are very low relative to our other AWS services, it may work out for us to use an IPFS storage provider, particularly if that means we don’t have to worry about cost surges due to bandwidth surges.

Temporal doesn’t charge you for bandwidth and simply charges for storage allocated.