I assume you meant ipfs add -n. And no, that won’t make a difference,
because ipfs add -n essentially just reads the content from your file
and does the “chunk and hash the blocks” step without reading the
blockstore or trying to fetch blocks from the network. Kind of like a
dry-run upload that gives you a CID back.
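To make that concrete, here’s a minimal sketch of the “chunk and hash
the blocks” step in Python. This is illustrative only: real ipfs add
uses a configurable chunker (256 KiB fixed-size by default) and wraps
each digest in a multihash/CID and a DAG, none of which this does.

```python
import hashlib

def chunk_and_hash(data: bytes, chunk_size: int = 256 * 1024):
    """Split data into fixed-size chunks and hash each one.

    Simplified stand-in for what `ipfs add -n` does locally: no
    blockstore writes, no network traffic -- just chunking and
    hashing. (Real IPFS encodes these digests as multihashes/CIDs
    and builds a DAG over them; this sketch stops at raw sha256.)
    """
    return [
        hashlib.sha256(data[i : i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

# One full 256 KiB chunk plus a 1-byte remainder -> two digests.
digests = chunk_and_hash(b"x" * (256 * 1024 + 1))
```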
Going back to your original post, you said:
“On rare occasions no errors are produced by ipfs get yet the file is […]”
To me, that sounds like a bug in ipfs get. Does it exit with status
zero in that case too? If so, then yeah, that’s definitely a bug.
Another thing you can use to recursively pull a DAG into your local
IPFS instance is the “ipfs dag stat” command, since it has to fetch
every block in the DAG in order to report its stats.
There’s another API method you might find useful: /api/v0/dag/get. This
will get a DAG node, non-recursively. By default, it returns a JSON
representation of the node with link and data fields. The data field
holds base64-encoded bytes whose format depends on the codec of the
CID.
The link field is an array of links (CIDs with some extra info).
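If it helps, here’s a small sketch of consuming that JSON in Python.
The sample response below is made up (the exact key names and shape
vary with your Kubo version and the CID’s codec), but the decode steps
are the same idea:

```python
import base64
import json

# Hypothetical dag/get-style response for a dag-pb node. The key
# names ("Data", "Links", "Cid") and the placeholder CID are
# assumptions for illustration, not a real server response.
sample = json.loads("""
{
  "Data": "CAE=",
  "Links": [
    {"Name": "child.txt", "Size": 12, "Cid": {"/": "bafy...example"}}
  ]
}
""")

# Raw bytes; what they mean depends on the codec of the CID.
data = base64.b64decode(sample["Data"])

# The links are what let you crawl the rest of the DAG.
child_cids = [link["Cid"]["/"] for link in sample["Links"]]
```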
You can use that JSON output to crawl the DAG. For
example, calling the endpoint with my sample CID
gives me back the following JSON for the root node:
I, too, found this stuff confusing until I did a deep dive. For a
while, I was running a 1+ terabyte package mirror on IPFS, and doing
that efficiently forced me to get a firm grasp on some of the IPFS
innards.
It’s content-addressable, so why can’t I fetch a file knowing its hash?
Another way to phrase the answer to that question is that IPFS is
content-addressed at the block level, not at the filesystem level, and a
file or directory on IPFS is just a root block of a DAG.
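To illustrate that last point, here’s a toy sketch: a fake in-memory
blockstore where a “file” is just a root block linking to child
blocks, and reading the file means walking the DAG. The store layout
and all names here are invented for illustration.

```python
# Toy in-memory "blockstore": each block has data plus links to child
# blocks, mirroring how a file on IPFS is a root block whose DAG of
# child blocks holds the actual content.
blocks = {
    "root": {"data": b"", "links": ["leaf-a", "leaf-b"]},
    "leaf-a": {"data": b"hello, ", "links": []},
    "leaf-b": {"data": b"world", "links": []},
}

def cat(cid: str) -> bytes:
    """Reassemble a 'file' by walking its DAG depth-first from the root."""
    block = blocks[cid]
    return block["data"] + b"".join(cat(child) for child in block["links"])

assert cat("root") == b"hello, world"
```

Knowing only the hash of the full file’s bytes gets you nowhere here;
what you need is the CID of the root block.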
Sorry if I ramble. I’m basically giving a brain dump in hopes that it
will be useful.