I Like Big Blocks And I Cannot Slice

Hm… if we descope solutions that depend on some sort of data envelope that facilitates storing metadata and chunking (e.g. UnixFS), then the lowest common denominator across all the mentioned systems is opaque bytes of user data without any wrappers.

Such raw user data is identified by a CID with the raw (0x55) codec, and for discussions that involve interop and backward compatibility, that may be what we mean when we say “big block”.
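
As a side note, a minimal sketch of what such an identifier looks like with the JS multiformats library, assuming sha2-256 as the hash (nothing here is specific to any of the systems above):

```ts
import { CID } from 'multiformats/cid'
import * as raw from 'multiformats/codecs/raw'
import { sha256 } from 'multiformats/hashes/sha2'

// Opaque user bytes, with no UnixFS or other envelope around them.
const payload = new TextEncoder().encode('big opaque blob goes here')

// Hash the raw bytes and wrap the digest in a CIDv1 with the raw (0x55) codec.
const digest = await sha256.digest(payload)
const cid = CID.createV1(raw.code, digest) // raw.code === 0x55

console.log(cid.toString()) // bafkrei… (CIDv1, raw codec, sha2-256)
```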

Probably? Bluesky uses raw blocks for blobs such as images, video, and audio (docs).
One should be able to put a big raw block in a CAR just fine, no matter the size of the block.
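
For example, a rough sketch of writing one big raw block into a CAR with @ipld/car; the file name and blob size are made up, and the CAR layer itself imposes no block size cap:

```ts
import { createWriteStream } from 'node:fs'
import { Readable } from 'node:stream'
import { CarWriter } from '@ipld/car'
import { CID } from 'multiformats/cid'
import * as raw from 'multiformats/codecs/raw'
import { sha256 } from 'multiformats/hashes/sha2'

// A hypothetical multi-megabyte blob of opaque bytes.
const blob = new Uint8Array(16 * 1024 * 1024)

const digest = await sha256.digest(blob)
const cid = CID.createV1(raw.code, digest)

// Write a one-block CAR with the raw CID as the root.
const { writer, out } = CarWriter.create([cid])
Readable.from(out).pipe(createWriteStream('big-block.car'))
await writer.put({ cid, bytes: blob })
await writer.close()
```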

The block size limits are enforced mainly at the data transfer layer (e.g. in Bitswap over libp2p, or in an HTTP retrieval client) for perf/security reasons: a block fetched from an untrusted peer can only be hash-verified once it has been downloaded in full. If you are a semi-trusted, logged-in user of a platform, some of those limitations could be lifted.
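
A sketch of why the transfer layer is where the limit lives: the client has to buffer the whole block before it can check it against the CID, so it needs some cut-off when talking to untrusted peers. The limit value and helper below are hypothetical, and it assumes the CID uses sha2-256:

```ts
import { CID } from 'multiformats/cid'
import { equals } from 'multiformats/bytes'
import { sha256 } from 'multiformats/hashes/sha2'

// Hypothetical cut-off, in the low-MiB range that transfer layers tend to use.
const MAX_UNTRUSTED_BLOCK_SIZE = 2 * 1024 * 1024

// Accept a block from an untrusted peer only if it is within the size limit
// and the bytes actually hash to the advertised CID.
async function acceptUntrustedBlock (cid: CID, bytes: Uint8Array): Promise<Uint8Array> {
  if (bytes.length > MAX_UNTRUSTED_BLOCK_SIZE) {
    throw new Error(`block ${cid} is ${bytes.length} bytes, over the untrusted-peer limit`)
  }
  const digest = await sha256.digest(bytes)
  if (!equals(digest.bytes, cid.multihash.bytes)) {
    throw new Error(`bytes do not match CID ${cid}`)
  }
  return bytes
}
```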

PS: a relevant prior-art discussion where Adin elaborated on data transfer security when dealing with low-trust p2p contexts:

In that context, my realistic hopes for future “big block” support in IPFS Mainnet are mainly around trusted setups that raise the limits, or around HTTP retrieval (once it is enabled by default in the main IPFS implementations like Kubo/Helia). Security concerns could be addressed with range requests within those opaque blobs (e.g. having an HTTP spec for Blake3 and raw CIDs), or in some other way (it is open-ended, thanks to HTTP content type negotiation).
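
To make the range-request idea concrete, a hedged sketch of a partial retrieval of an opaque raw block over HTTP; the gateway URL, CID, and range are placeholders, and it assumes the gateway serves application/vnd.ipld.raw and honors Range on it (with Blake3 or another incrementally verifiable hash, the partial bytes could also be verified client-side):

```ts
const cid = 'bafkrei…' // placeholder raw CID

// Ask for only the first 1 MiB of the opaque blob, as a raw block.
const res = await fetch(`https://example-gateway.tld/ipfs/${cid}`, {
  headers: {
    Accept: 'application/vnd.ipld.raw',
    Range: 'bytes=0-1048575'
  }
})

if (res.status === 206) {
  const partial = new Uint8Array(await res.arrayBuffer())
  console.log(`got ${partial.length} bytes of the blob`)
} else {
  // The gateway ignored the Range header and returned the whole block (200),
  // or refused the request entirely.
  console.log(`status ${res.status}`)
}
```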
