Addressing petabytes of genetic data with IPFS

Hi.

I might have a similar use case to this. Let me explain the scenario a bit.

I work with various organizations who have very large sets of data. A typical set might be something like 50 TB, though some could be more or less. The data are mostly large binary blobs, but some of them could be text(ish).

In order to avoid moving all this data around, indices are built so that one person can pull out a subset of the data, but even a subset could still end up being some hundred(s) of megabytes.

I would like to put this data on a distributed platform so that each entity who needs the data can pull it from a peer or peers nearby instead of from the central store. So basically a BitTorrent type of system, but I would like to be able to set up different “containers” or fenced-in areas where access to the data is more restricted.
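For the “fenced-in areas” part, one approach I’ve seen is IPFS private networks, where every node shares a swarm key and refuses connections from nodes that don’t have it. A rough sketch of what that might look like with go-ipfs/Kubo (the key-generation tool and paths here are just illustrative assumptions, not a vetted setup):

```shell
# Sketch: fencing off a group of nodes with a shared swarm key.
# Assumes go-ipfs (Kubo) is installed; the key-gen tool is a separate install.
go install github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen@latest
ipfs-swarm-key-gen > ~/.ipfs/swarm.key

# Copy swarm.key into the IPFS repo of every node that belongs inside
# the fence, then force private-network mode so a node errors out rather
# than talking to peers without the key:
export LIBP2P_FORCE_PNET=1
ipfs daemon
```

Each separate “container” of data would then be its own private network with its own key, which is coarse-grained but maps fairly directly onto the restricted-access idea.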

Could IPFS do this?

It looks like filestore would be a pretty good fit for this, right?
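To spell out why filestore seems relevant: at 50 TB per set, duplicating the data into the IPFS block store on add would be painful, and filestore instead references the files in place on disk. A minimal sketch of enabling it in Kubo, assuming the data lives at a hypothetical path like /data/genomes:

```shell
# Sketch: enable the experimental filestore so large datasets can be
# served from their original location, without copying blocks into ~/.ipfs.
ipfs config --json Experimental.FilestoreEnabled true

# Restart the daemon, then add with --nocopy so ipfs stores references
# to the original files rather than duplicate blocks:
ipfs add --nocopy --recursive /data/genomes

# Inspect filestore-backed blocks and verify the backing files are intact:
ipfs filestore ls
ipfs filestore verify
```

One caveat worth knowing: with --nocopy the original files must stay in place and unmodified, or the referenced blocks become invalid.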