WEEK 2 QUESTIONS | April 11 - 17, 2021 (times are in EST)

It’s that time of week again! We’ve got some good questions here :tada:.

Do you have any suggestions for how to resolve any of these questions? Do you know any of the answers? Do you have your own questions? Fire away below and let’s see if we can clear things up together :rocket:!


parashar asked (paraphrased) “Why does ipfs init display an error on first run?” (forum - 03:43 April 11)

ERROR provider.queue Failed to enqueue cid: leveldb: closed


I’m not sure why this error is displayed on first run; perhaps it should be suppressed if it’s harmless?

dreamdust asked “Given an IPNS hash, is there an API method to lookup the corresponding public key?” (#ipfs - 22:54 April 13)

RubenKeleva replied:

well, the IPNS is the public key … at least for ed25519 keys (which is the default on 0.8)

dreamdust replied:

Hmm… the docs say an IPNS CID is the hash of the public key. Are you saying that’s incorrect and it is the public key?

At least that’s what the docs say here: IPNS | IPFS Docs

So I’m wondering if it’s possible via the API to get the public key for an IPNS address. It looks like the answer is no.


Considering we validate IPNS records by checking the signature against the public key, surely there must be a way to retrieve the public key! If anyone knows where to find information on how this procedure works, I’d love to see it!



TLDR: IPNS identifiers are represented as libp2p peer IDs. A libp2p peer ID is either the SHA-256 of the public key (wrapped in a protobuf) or, if the public key is small enough, the public key itself, not hashed at all.

When the key is hashed the full public key can be included in the IPNS record itself. Historical note: IIRC it used to be that we’d put the public key in the DHT too (in a separate namespace since it needs different validation), but that resulted in an extra network lookup that didn’t seem like it was really necessary so for a long time now they’ve been bundled together.
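The rule above can be sketched roughly as follows. The 42-byte inline threshold and the multihash codes come from the libp2p peer ID conventions; the function name is my own, and this is a simplification, not the actual go-ipfs implementation:

```python
import hashlib

def peer_id_multihash(pubkey_proto):
	# pubkey_proto: the protobuf-serialized public key.
	# Small keys (<= 42 bytes, e.g. ed25519) are inlined using the
	# "identity" multihash (code 0x00), so the peer ID *contains* the key.
	# Larger keys (e.g. RSA) are hashed with sha2-256
	# (code 0x12, digest length 0x20 = 32 bytes).
	if len(pubkey_proto) <= 42:
		return bytes([0x00, len(pubkey_proto)]) + pubkey_proto
	digest = hashlib.sha256(pubkey_proto).digest()
	return bytes([0x12, 0x20]) + digest
```

This is why ed25519 IPNS names (the 0.8 default) effectively *are* the public key, while RSA names are only a hash of it.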

Final Observation:

So it seems all the validating happens for us! I wonder if there’s really a need for an API to lookup the key, but there does seem to be some interest in that. I’d love to hear everyone’s thoughts on this!

roozelena asked (paraphrased) “Why does ipfs dht findprovs sometimes return no peers, even when I know a peer is serving the content?” (forum - 8:39 April 14)

For example, I uploaded two files to slate.host, but ipfs dht findprovs displays no IDs!


I’ve also noticed this, and I’m not sure why. If I add a file somewhere, I can ipfs cat it fine, but if I first try dht findprovs, I don’t get an output. Not sure why :thinking:.
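For what it’s worth, when poking at this through the HTTP API, dht/findprovs streams newline-delimited JSON query events, and only events with Type 4 actually carry providers, so an empty-looking result may just mean the query finished without any Type-4 events. A rough parser for that stream (field names are from the go-ipfs HTTP API as I understand it; treat this as a sketch, not a reference):

```python
import json

def parse_findprovs(ndjson_text):
	# Each line is one routing query event; Type 4 means "Provider".
	providers = []
	for line in ndjson_text.splitlines():
		if not line.strip():
			continue
		msg = json.loads(line)
		if msg.get("Type") == 4:
			for resp in msg.get("Responses") or []:
				providers.append(resp["ID"])
	return providers
```

If anyone has insight into when a provider record is announced versus when it becomes findable, that might explain the gap roozelena and I are seeing.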

lolita asked “Merging two directories with their CID?” (forum - 18:45 April 11)

Hi folks,

Is there a way, other than doing it manually, to merge two folders using their CIDs? For example, the folder with CID xxx has file abc, and the folder with CID yyy has file def; a command like ipfs merge xxx yyy would give a CID zzz that contains both abc and def. Assume there are no conflicting files, or provide a merging option.



Is this something people commonly need to do? Should maybe ipfs cp have an option to handle merging? I wrote a script to merge directories together:

#!/usr/bin/env python
import requests, json, sys

# Usage: ipfs_merge_dir.py <dir1> <dir2> [dir3 dir4...]

# Address of node to use
NODE = "http://localhost:5001"

API = "/api/v0/"
# TempDir
TEMP = "/merge_tmp"

# Creates temp dir in MFS
def MkTempDir():
	requests.post(NODE+API+"files/mkdir?arg="+TEMP)

# Removes temp dir (and contents) from MFS
def RmTempDir():
	requests.post(NODE+API+"files/rm?arg=%s&force=true" % TEMP)

# Returns CID of the temp dir
def CIDTempDir():
	req = requests.post(NODE+API+"files/stat?arg=" + TEMP)
	data = json.loads(req.text)
	return data["Hash"]

# Copies contents of CIDs to temp dir
def CpCIDs(cids):
	req = requests.post(NODE+API+"ls?arg="+cids)
	dirData = json.loads(req.text)
	if "Type" in dirData and dirData["Type"] == "error":
		sys.exit("Error listing CIDs: " + dirData["Message"])

	for obj in dirData["Objects"]:
		if len(obj["Links"]) == 0:
			sys.exit("No links on CID, aborting: " + obj["Hash"])
		if obj["Links"][0]["Name"] == "":
			sys.exit("Not a directory: " + obj["Hash"])

		for i in obj["Links"]:
			req = requests.post(NODE+API+"files/cp?arg=/ipfs/%s&arg=%s/%s" % (i["Hash"], TEMP, i["Name"]))
			if req.text != "":
				sys.exit("Error copying %s: %s" % (i["Name"], req.text))

# Builds the ls query string from the directories to merge
def parseFlags():
	if len(sys.argv) < 3:
		sys.exit("Usage: ipfs_merge_dir.py <dir1> <dir2> [dir3 dir4...]")
	out = sys.argv[1]
	for i in sys.argv[2:]:
		out += "&arg="+i
	return out

# Main program
if __name__ == "__main__":
	cids = parseFlags()
	RmTempDir()          # clear any leftover temp dir from a previous run
	MkTempDir()
	CpCIDs(cids)
	print(CIDTempDir())  # CID of the merged directory
	RmTempDir()


echoSMILE asked “What is the current solution for bad content inside IPFS?” (#ipfs - 19:47 April 18)

aschmahmann replied:

For a while people have been just using NGINX proxies to block any content they don’t feel like providing and that’s been fine, but there’s an interest in making this easier, more portable, and more featureful (e.g. so I can choose to add the list of content your gateway blocks to a list of content mine blocks).


Looks like there’s some desire for more features like this! I noticed another user here expressing their concerns about this as well. Maybe there should be a writeup explaining the differences between HTTP and IPFS, comparing how both solve the “bad content” problem.

Personally I really like the idea of being able to have a blacklist, ensuring I don’t keep any record of certain CIDs and don’t help propagate them. I think this is an excellent idea.
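To make the “add your list to mine” idea concrete, here’s a minimal sketch. The one-CID-per-line format with # comments is purely hypothetical (no such standard exists yet as far as I know), and merging then reduces to a set union:

```python
def load_denylist(text):
	# Parse a denylist: one CID per line, '#' starts a comment.
	cids = set()
	for line in text.splitlines():
		line = line.split("#", 1)[0].strip()
		if line:
			cids.add(line)
	return cids

def merge_denylists(*lists):
	# Combining my gateway's list with yours is just a set union.
	merged = set()
	for l in lists:
		merged |= l
	return merged
```

A gateway would then refuse to serve (and to reprovide) any CID in the merged set.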
