Hi,
I understand how a file is cut into blocks through the chunking process, allowing the creation of a Merkle DAG. My question is: how can the file be recreated from the blocks? We may have all the blocks of bytes, but in which order does IPFS manage to concatenate them all?
UnixFS stores the number of links of a node (number of children), but does it also store their order?
Thanks
A UnixFS root looks kinda like this:
{
  "data": null,
  "links": [
    {"hash": "Qmfoo", "size": 1000},
    {"hash": "Qmbar", "size": 500},
    {"hash": "Qmbaz", "size": 1000}
  ]
}
This is an ordered list.
The size field says how many bytes of the original file are covered by that link's subtree.
So to rebuild the file you just concatenate the leaf data in a depth-first (DFS) traversal order.
(note real sizes are far bigger than this)
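To make that concrete, here is a minimal sketch in Go. The Node type is a hypothetical, already-resolved stand-in for a UnixFS node, not the real go-ipfs API; the point is just that appending leaf data depth-first, in link order, reproduces the original file:

package main

import (
	"bytes"
	"fmt"
)

// Node is a hypothetical, simplified stand-in for a UnixFS node:
// either a leaf carrying data, or an internal node carrying ordered links.
type Node struct {
	Data  []byte  // non-nil only for leaves
	Links []*Node // ordered list of children (hashes already resolved)
}

// assemble walks the DAG depth-first, appending leaf data in link order.
// Because the links list is ordered, this reproduces the original file.
func assemble(n *Node, out *bytes.Buffer) {
	if len(n.Links) == 0 {
		out.Write(n.Data)
		return
	}
	for _, child := range n.Links {
		assemble(child, out)
	}
}

func main() {
	root := &Node{Links: []*Node{
		{Data: []byte("hello ")},  // e.g. Qmfoo
		{Data: []byte("merkle ")}, // e.g. Qmbar
		{Data: []byte("dag")},     // e.g. Qmbaz
	}}
	var buf bytes.Buffer
	assemble(root, &buf)
	fmt.Println(buf.String()) // prints "hello merkle dag"
}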
This also allows you to do seeking and streaming. For example, if I only care about the bytes from 1250 to 1350, I can skip every link whose offsets fall outside this range (you compute the offsets by summing the sizes).
In this case I would only fetch the root block and the leaf block Qmbar (because Qmfoo and Qmbaz are outside of the range I want).
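Here is a minimal sketch of that offset arithmetic in Go (the Link type and linksForRange helper are hypothetical, just for illustration): it sums the sizes to get each link's running offset and keeps only the links that overlap the requested byte range:

package main

import "fmt"

// Link is a hypothetical, simplified stand-in for a UnixFS link entry.
type Link struct {
	Hash string
	Size uint64 // how many bytes of the file this subtree covers
}

// linksForRange returns the links overlapping [start, end) by accumulating
// offsets; every other block can be skipped without fetching it.
func linksForRange(links []Link, start, end uint64) []Link {
	var needed []Link
	var offset uint64
	for _, l := range links {
		next := offset + l.Size
		// Keep the link only if its span [offset, next) intersects [start, end).
		if offset < end && next > start {
			needed = append(needed, l)
		}
		offset = next
	}
	return needed
}

func main() {
	links := []Link{
		{"Qmfoo", 1000}, // covers bytes [0, 1000)
		{"Qmbar", 500},  // covers bytes [1000, 1500)
		{"Qmbaz", 1000}, // covers bytes [1500, 2500)
	}
	// Asking for bytes 1250..1350 touches only Qmbar.
	for _, l := range linksForRange(links, 1250, 1350) {
		fmt.Println(l.Hash) // prints "Qmbar"
	}
}

For a deep DAG you would apply the same test recursively at each level, descending only into the subtrees that overlap the range.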
(this is a simplified version; the actual spec is here: specs/UNIXFS.md at main · ipfs/specs · GitHub)
Thanks @Jorropo for your time