Storing a few TB of data?

From @e12e on Sun Apr 05 2015 20:45:50 GMT+0000 (UTC)

If I push a few TB of data into ipfs (say some media files, install ISOs, etc.) – is that a reasonable use of the technology? How could/should I ensure that the network has the capacity to safely store the data, with suitable redundancy? If I dedicate a few TB of disk on my server to ipfs – can I make sure that some sensible portion is reserved for my use?


Copied from original issue: https://github.com/ipfs/faq/issues/5


From @whyrusleeping on Mon Apr 06 2015 00:07:02 GMT+0000 (UTC)

Data added to ipfs is only stored locally until another interested party requests it from you. ipfs itself (the protocol) provides no guarantees of redundancy or remote storage.
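
In practice that means replication is something you arrange yourself. A minimal sketch, assuming a stock go-ipfs install and a second node you control (the path and hash below are placeholders):

```sh
# On the node holding the data: add (and implicitly pin) a directory.
# /srv/media is an example path; the command prints a root hash.
ipfs add -r /srv/media

# Nothing has been replicated yet. On a second node you control,
# pin that root hash to fetch a full copy over the network and keep it:
ipfs pin add <root-hash>   # placeholder for the hash printed above

# To cap how much disk the local repo may use, set the go-ipfs
# Datastore.StorageMax option (a soft limit consulted by the garbage
# collector, not a hard reservation):
ipfs config Datastore.StorageMax 2TB
```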


From @jbenet on Mon Apr 06 2015 03:00:18 GMT+0000 (UTC)

I’ll add that that’s why Filecoin exists: as a way to incentivize networks of people to replicate and back up your data.

From @sudhirj on Thu Dec 03 2015 18:09:57 GMT+0000 (UTC)

BTW, could someone confirm that `ipfs add` makes a copy of the data I’m adding? So to add 1 TB of data into IPFS I’ll need at least 2 TB on my disk?

From @whyrusleeping on Thu Dec 03 2015 18:18:55 GMT+0000 (UTC)

@sudhirj That is correct, similar to adding files into git. In the future we may find a way to work around this.
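
One way to see the duplication for yourself, assuming the default repo location at ~/.ipfs (blocks live under ~/.ipfs/blocks in a stock go-ipfs install):

```sh
# Size of the block store before adding:
du -sh ~/.ipfs/blocks

# Add a large file; its blocks are copied into the repo:
ipfs add big-file.iso

# The block store should now have grown by roughly the file size,
# so the data exists twice on disk until you delete the original:
du -sh ~/.ipfs/blocks
```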

From @sudhirj on Fri Dec 04 2015 13:59:30 GMT+0000 (UTC)

Okay. Then let’s put down the alternative options for others as well (I’m looking at IPFS for large-file data distribution). The alternatives are:

  1. Mount the IPFS datastore as a FUSE drive (requires root permissions and a special installation). Will this take care of everything automatically? Is copying a file into the mounted folder the same as `ipfs add`, for instance? (See the mount sketch after this list.)
  2. Assuming a FUSE mount, can regular IPFS commands like `ipfs pin` be used on data on the mounted drive?
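
For anyone trying option 1, a minimal mount sketch, assuming go-ipfs with FUSE support and the default /ipfs and /ipns mountpoints. One caveat: as far as I know the /ipfs mount is read-only, so copying files into it is not a substitute for `ipfs add`, and pinning still goes through `ipfs pin add` against a hash rather than through the mounted path:

```sh
# Create the default mountpoints once (requires root):
sudo mkdir -p /ipfs /ipns
sudo chown $USER /ipfs /ipns

# With the daemon running, mount both filesystems:
ipfs daemon &
ipfs mount

# Read previously added content through the kernel, by hash:
ls /ipfs/<root-hash>      # placeholder hash
```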