We have several IPFS repos, holding ~90GB, ~170GB, and ~220GB of content respectively. The majority of this data is explicitly pinned to support our application.
Recently, pin operations have slowed down and now never complete at all. Disk space is not an issue: these repos sit on a 2TB EBS volume, and each has a configured StorageMax of at least 500GB.
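For reference, the relevant datastore section of each node's config looks roughly like the following (values are illustrative; StorageMax is the only field we changed from the defaults):

```json
{
  "Datastore": {
    "StorageMax": "500GB",
    "StorageGCWatermark": 90,
    "GCPeriod": "1h"
  }
}
```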
Is there a way to avoid this sort of performance degradation with large pinsets?
Is there a maximum recommended storage limit for a single IPFS repo, beyond which this degradation is expected behavior?
It may be worth noting that these nodes also actively serve content from their pinsets.
For concrete numbers: one of the nodes has 266,600 pins and a repo size of 225G; another has 273,993 pins and 91G.
Thanks for the link! We can definitely add a parallel cache, but it would be great to see a native solution in the future.
Are there any other configuration options we can tune for such large pinsets?
Is it recommended that we use more nodes (maybe IPFS Cluster) to support this type of workload?
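One setting we are experimenting with ourselves, though we are not sure it addresses the pin slowdown directly, is the reprovider strategy, since announcing every block of a large pinset to the DHT is expensive. A sketch of what we are trying (interval shown is the default; "roots" announces only the root CIDs of pinned content):

```json
{
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "roots"
  }
}
```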