Let’s say I have a single private IPFS node and I connect another private node. How can I share the MFS between the nodes?
I think you can just pin the root and you’re done. The node you pin it on will go get the data and keep it, I think.
Interesting that

```
docker-compose exec private-ipfs0 ipfs pin add /
```

throws the error:

```
Error: invalid path "/": path does not begin with '/'
ERROR: 1
```
Also, pinning the CID on ipfsNode0 doesn’t make it show up in the MFS on ipfsNode1.
You need to add `-r` to pin directories, but do you really want to pin the root dir? You’ll also need to copy it into MFS after pinning it.
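Something like this, as a rough sketch (the CID and the MFS destination name are placeholders for whatever your setup uses):

```bash
# On the node that should hold the copy:
ipfs pin add -r QmYourRootDirCid             # fetch and pin the whole directory tree
ipfs files cp /ipfs/QmYourRootDirCid /mydir  # then make it visible in MFS as /mydir
```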
The original question was how to replicate the MFS across the private nodes. Then Mr. wclayf suggested pinning the root as one way, which doesn’t work: I tried it with a single CID and it doesn’t show up in the MFS explorer of the other nodes. Maybe this isn’t even possible, and if that’s the case, how do we make sure we can have a distributed MFS?! Something like this.
If IPFS successfully pins something (returns a success code) and then doesn’t even attempt to copy the data to the instance that pinned it, that seems like a bug to me. You can pin CIDs that don’t exist locally, right? Maybe I’m wrong about that too.
The main reason people have so many issues with IPFS is really that the documentation is almost non-existent. Sure, every method is documented (like here: HTTP API | IPFS Docs), but it’s actually about 10% as detailed as it should be, and this has been the case for several years. The docs never get added to; they’re just completely neglected. You can’t really blame the developers either, because at some point you have to blame the management, whoever that happens to be, which may get me kicked off here for saying. I hope not, because it needed to be said, and I don’t feel it needed to be stated any more politely than that.
Well, TBH, even the docs are not consistent across the implementations.
There are a lot of people writing the same tutorials over and over again, nothing new, and the ones that manage to create something useful keep it for themselves. It’s a shame, really, for a project like this to have such a shitty approach to developer friendliness.
If you get kicked out for saying that, they should ban me for life, because I truly agree with you. Part of the blame goes to the management and part to the developers. I know it’s not easy, I’m a dev too, but c’mon, how hard is it to explain a function? Two lines of good comments should be enough.
I just didn’t want to come across as mean-spirited, while also stating how bad the “documentation problem” really is. Interestingly, the way they generate the docs is from the code. They ask the developers to document each function, so the developer puts in about a sentence or two (in the code), or maybe one paragraph if you’re lucky, and then all the website docs and other docs are “generated” from the source code (the API docs, I mean).
What really needs to happen is that somewhere, for each API function, there are extensive discussions, use cases, and examples.
IPFS is considered by many in the industry to be the thing we want to build Web3.0 on, and to build public general-purpose blockchain-like systems on, so IPFS is very important to the entire world right now… yet the developers can’t be bothered to add more docs, sadly. I guess they’re too busy coding to get to these ‘unimportant’ things like helping the world learn to take advantage of their work.
I didn’t take it as being mean-spirited, more like honest and direct.
I’ve been asking on different channels so many times, but the core and near-core team take ages to respond; usually I get responses from people who are excited about the idea.
As for Web3.0, that is true; I am going to use it in that manner too, hence my question. Creating resilient and fail-safe replication for the MFS is something I really need.
You can see the project here: https://anagolay.dev and https://kelp.digital
But let’s close this, since we are digressing, and maybe continue discussing things, or even join forces, on other channels. https://twitter.com/woss_io is me.
Yeah, since there’s no DM support in this forum, you can find me here and get in touch:
The same goes for all IPFS people. Maybe we can set up a forum on Quanta too.
(I’m not on Twitter, although I do have some Fediverse accounts.)
Regarding this, users with trust level >= 1 can send DMs to others. You are at level 2, so you should be able to message other users.
If you have a `test_folder` in your first node:

1 - `ipfs files stat /test_folder` in node 1 (this prints the folder’s CID).
2 - `ipfs pin add <cid>` in node 2, using the CID from node 1.
3 - `ipfs files cp /ipfs/<cid> /test_folder` in node 2, using the CID from node 1.
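Putting those steps together as a minimal sketch (the CID shown is a placeholder; `--hash` just makes `files stat` print only the CID):

```bash
# On node 1: print the CID of the MFS folder.
ipfs files stat --hash /test_folder
# -> QmSomeRootCid   (placeholder)

# On node 2: fetch and pin the whole tree, then expose it in MFS.
ipfs pin add QmSomeRootCid
ipfs files cp /ipfs/QmSomeRootCid /test_folder
```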
I opened an issue: Parsing "/" results in 'Path does not start with "/"' · Issue #35 · ipfs/go-path · GitHub. The `pin add` command is expecting an “ipfs-path”, so something like `/ipfs/<cid>...` or `/ipns/<key>...`, though it also takes CIDs directly.
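For example, these forms should all be accepted (`<cid>` and `<key>` are placeholders):

```bash
ipfs pin add /ipfs/<cid>    # ipfs-path form
ipfs pin add /ipns/<key>    # ipns-path form, resolved to a CID first
ipfs pin add <cid>          # a bare CID also works
```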
Thanks, Hector!
So there is no automatic replication between the nodes. Huh, imagine doing that for 20 nodes…
As for the issue, thanks for that; I’m watching it already.
@hector, is this kind of request on your roadmap/feature list? If not, where would be a good place to contribute, talk, or discuss this?
Also, what do you think, would this be useful?
To conclude:
Hector’s answer is to copy the MFS from node1 to node2 manually (or using an event/script/cronjob). I guess that is as good as any ATM.
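A rough sketch of what such a script/cronjob could look like, run on the receiving node (the API address, folder name, and script structure are assumptions for illustration, not something prescribed in this thread):

```bash
#!/usr/bin/env bash
# Hypothetical one-way MFS replication: node1 -> the local node.
set -euo pipefail

SRC_API="/ip4/10.0.0.1/tcp/5001"   # placeholder: multiaddr of node1's API
MFS_PATH="/test_folder"            # MFS folder to replicate

# Ask node1 for the current root CID of the folder.
CID=$(ipfs --api "$SRC_API" files stat --hash "$MFS_PATH")

# Pin the tree locally, which fetches the blocks over the private network.
ipfs pin add "$CID"

# Point the local MFS copy at the latest root.
ipfs files rm -r "$MFS_PATH" 2>/dev/null || true
ipfs files cp "/ipfs/$CID" "$MFS_PATH"
```

Run something like this from cron on each node and you get a crude, pull-based replica of node1’s MFS.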
MFS replication is something that has been asked before in the context of ipfs-cluster, but it is not trivial.
One aspect I’m still confused about:
If someone pins an MFS folder root, and the folder doesn’t happen to be stored locally, then doesn’t that mean the ‘pin’ should return an error code? I mean, if you’re saying the files won’t actually be persisted until copied, then what does pin do in this case?
Pin will endlessly attempt to fetch files that are not available locally unless you specify a timeout; once the timeout expires it will throw an error.
@phillmac Are you talking about a scenario where the MFS files are no longer available on the network (remote machines), and so the PIN fails due to not being able to get the data? …or are you saying the PIN fails even though the data being pinned WAS available on the network?
IMO pinning should always go get the data, if it isn’t local. I mean that’s the definition of what pinning even is. So if there’s a scenario where MFS pins would otherwise fail, there should be some kind of ‘force’ option for the PIN command that says “Hey IPFS, I really mean it. Get this data and pin it.”
By default, pin will search indefinitely for a node with the data, without giving up.
Quite often I’ve tried to pin some data that’s already been deleted and no longer available in the swarm at all.
In that case the pin job will be ‘stuck’ forever, waiting for the data to become available.
You can observe this by using the `--progress` option: the fetched count will freeze either at zero or wherever it encounters the unavailable data.
If the data becomes available at a later time then all’s well and good; the pin job will carry on its merry way until it’s complete.
Otherwise, and this has been quite a problem for me, you can end up with many stuck jobs using bandwidth to the point that it slows everything else down.
So, to get around that issue, you can specify `--timeout=15m`, for example; the job will be terminated after 15 minutes if it’s not complete and will throw an error.
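For example (`<cid>` is a placeholder):

```bash
# Watch block-fetch progress; the counter stalls when it hits unavailable data.
ipfs pin add --progress <cid>

# Give up after 15 minutes instead of waiting forever.
ipfs pin add --timeout=15m <cid>
```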
What you’re saying about timeouts and unreachable (un-findable) data sounds fine. That makes sense.
However, the issue at hand here is that, apparently, if you pin the root folder of a large MFS file structure, the “pin” is guaranteed to NOT pin the data unless all the files just happen to have already been copied to the local machine… even if the files ARE reachable on the remote machine. That seemed like what @hector was implying, unless I’m confused.
Also, my opinion is that if pinning MFS files is therefore unreliable (for the case I just gave), then pin should return an error code immediately in this scenario, because it can definitely tell whether something is already stored locally or not. And if it’s not local, it’s not going to copy it automatically, and therefore cannot claim to have “pinned” it, right?
I did not imply anything. If you pin something, or `ipfs files cp` something, the operation will not finish until the whole tree is fully available locally.
Thanks @hector. So my original answer above, where I said this:
“I think you can just pin the root and you’re done. Your node you pin it on will go get the data and keep it I think.”
was actually correct then? We can use ‘PIN’ to get MFS files from foreign servers, even without using a copy command? It certainly seemed to me like you were saying copy should be used and that my suggestion to just PIN it was not correct. Thanks for clarifying.