JS-IPFS limit bandwidth per peer

Hi, I’m currently playing around with js-ipfs in Node, trying to understand all the different parts.
What I would like to do is limit the bandwidth of data my node provides to each individual peer, based on a list.
Is that currently doable, or are there any examples I can check out?

Just letting you know that the Go implementation has the same issue. For some reason the IPFS developers consider this either low priority or unsolvable. It’s a deal-breaker for me, and I cannot use IPFS until this issue is fixed. I think a huge number of developers attempting to use IPFS are also driven away by this flaw.

Is there an issue tracking this? I couldn’t find one on github.com/ipfs/js-ipfs.

I did all my searching around go-ipfs, and here are some links:

https://www.reddit.com/r/ipfs/comments/8yllit/ipfs_bandwidth_usage/

but no, I didn’t see any mention of it for the JS implementation. I had no luck getting ‘trickle’ to work, which I concluded is because trickle is documented as not working with standalone executables that lack the right kind of dynamic linking.

I also asked about it on the #ipfs channel on freenode IRC. This is a major problem that deserves to be treated as a top-priority bug, because it has the potential to drive away enough early adopters to significantly hurt the success of IPFS itself.

I’m reviving this topic, as it seems to be a show-stopper with regard to UX. I captured this bandwidth issue in a YouTube video, showing exactly what the issue is and why it’s so detrimental to front-end UX:

Solutions I’ve tried:

  • Limited connections by setting LowWater to 20 and HighWater to 40 (see the sketch after this list).
  • Used the ‘low-power’ profile.
  • Tried to block the ipfs.io nodes that seem to be the source of the bandwidth, but was not successful.
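For reference, here’s roughly how I’m applying those connection-manager limits when creating the node. This is a minimal sketch based on the js-ipfs create-time config option; whether js-ipfs actually enforces these watermarks the same way go-ipfs does is part of what I’m unsure about.

// Minimal sketch: pass connection-manager watermarks through the
// create-time config (runs inside an async function).
const IPFS = require('ipfs')

const node = await IPFS.create({
  config: {
    Swarm: {
      ConnMgr: {
        LowWater: 20,  // keep at least this many connections
        HighWater: 40  // start trimming connections above this
      }
    }
  }
})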

Seems to me there are two possible solutions:

  • Create a blacklist filter to block nodes that push too much bandwidth.
  • Create a per-peer or overall bandwidth limit setting as part of the IPFS node software.

If anyone else has a proposed solution to this problem, I’m keen to try it.


Have you verified your routing is in “dhtclient” mode? The “lowpower” profile should turn that on, but maybe you can also set it manually (i.e. specify ‘dhtclient’ as the routing option). Also, “lowpower” has no dash in it, right?

Here’s the profiles list (hopefully up to date)

Maybe there’s some command that prints the current profile/routing after startup that you can call to verify it actually accepted the option (in case it somehow isn’t getting set where you think it is).
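For example, in js-ipfs I believe you can read the live config back from the running node and check the Routing section. A rough sketch, assuming ‘node’ is the instance returned by IPFS.create and a js-ipfs version where config.getAll() exists (older versions use config.get() with no arguments):

// Rough sketch: read the active config back and verify the routing
// and connection-manager settings actually took effect.
const cfg = await node.config.getAll()
console.log('Routing:', cfg.Routing)
console.log('ConnMgr:', cfg.Swarm && cfg.Swarm.ConnMgr)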

Yes, I’m using the ‘low-power’ profile. In this thread, @hsn10 indicated that js-ipfs does not have the DHT implemented yet.

For reference, here are the IPFS config settings for the node at chat.fullstack.cash (the app in the YouTube video). And here is the code that sets it.

IPFS node configuration: {
  "Addresses": {
    "Swarm": [],
    "Announce": [],
    "API": "",
    "Gateway": "",
    "RPC": "",
    "Delegates": [
      "/dns4/node0.delegate.ipfs.io/tcp/443/https",
      "/dns4/node1.delegate.ipfs.io/tcp/443/https",
      "/dns4/node2.delegate.ipfs.io/tcp/443/https",
      "/dns4/node3.delegate.ipfs.io/tcp/443/https"
    ]
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    },
    "webRTCStar": {
      "Enabled": true
    }
  },
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmZa1sAxajnQjVM8WjWXoMbmPd7NsWhfKsPkErzpm9wGkp",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/dns4/node0.preload.ipfs.io/tcp/443/wss/p2p/QmZMxNdpMkewiVZLMRxaNxUeZpDUb34pWjZ1kZvsd16Zic",
    "/dns4/node1.preload.ipfs.io/tcp/443/wss/p2p/Qmbut9Ywz9YEDrz8ySBSgWyJk41Uvm2QJPhwDJzJyGFsD6",
    "/dns4/node2.preload.ipfs.io/tcp/443/wss/p2p/QmV7gnbW5VTcJ3oyM2Xk1rdFBJ3kTkvxc87UFGsun29STS",
    "/dns4/node3.preload.ipfs.io/tcp/443/wss/p2p/QmY7JB6MQXhxHvq7dBDh4HpbH29v4yE9JRadAVpndvzySN"
  ],
  "Pubsub": {
    "Enabled": true
  },
  "Swarm": {
    "ConnMgr": {
      "LowWater": 20,
      "HighWater": 40
    },
    "DisableNatPortMap": true
  },
  "Routing": {
    "Type": "none"
  },
  "Identity": {
    "PeerID": "QmXQaP57JMXHe3SQC2JDmsxN3kQ1aHeManRhR4ue4zHArJ",
    "PrivKey": "CAASqAkwggSkAgEAAoIBAQDc1jqvgnfs0Pb0gYd3fYhgZbFgJYZOfW8jHeazJwMGj2y5b+fgtygAOGtd51mRlMl30ihpvgiZd4lcnmUEJeKD1Aasrg4F0ZGeS+Fu14IJUMCsBBf0yHsAyIFdL0imqSEKbsL1lg0vqCSMNI8lT3LMjXQG+96YwdZv8TLnZQr/seileE68aJ5RS3e+Ke5IYuAuNTUCouvBsSShddqQVjj87N2PWeSTckCaweMS/e6Gm81vsxDqicB8+E8STUxw1DnoHdD6I7ljT9sIFS9crLM/ZoM2cOtGXCY2FjWSSt/zLH4d11JWrxyagX+hoTf5Xnycp5GKfvjyjx2ewZQDl//xAgMBAAECggEAAvKxrRzf4reN6mjtwOa6OrY00lih5LuYL5bzONZHHC/vNsEDjoyHYkxeg44GdDLxJxI1Q6cbqIfP276KEO58CgA7OBQpQALij6NJ7r+9/seXENzLoJMKEFI85txuGvp0RFZC8EIY6jdTiJMdi5UWTlx/jWXQnIeu6AbnY+8lgNEN9WwR5Of8KikWceqrQ+fChLunUz/Ug6SG2pf3q6lGuftfc9GfHqOT2OQOQDOi+gjHeYPF1R63l7db4gCikGlwvfpEYXP3fVkgHVxvsZMfq7TlasUcHW0TLTqRF9YbDLsBDnEvsOTBdIMPO6YGG5tYkH63IDZKshgxsRLZ3d1WGQKBgQDyhwN5l293y6ql9uOL6MmrRIR58/4VpeQgJYarEYvId3s3ffaMqPkx2vXLPkHaeraaJjdZfah7Q+aOqew4l+0n3BDGclSJ4lopKi8tCP3XA8ZDyYwjO07jwWh2rkrEylpDz/VoUDVJUK4ioBD9T6TUEJqlIiGyryocg9HqvB7FyQKBgQDpGsD3PalIbCuPdrMlhlNndsCymY2bMQo2FVo9wlOr4f2Je2byKBVai5gc4FzHzqoNowYOuoSF35SYB/U65b1dWIewAxSpUlPRQOWEf/6GjAB7N1ZsFDarvodkbadCFqB5AyiT7+MZxDTziARlDb1EFLqN13jFQQyf4RiSSFQc6QKBgQCgZz2cINVfhPuTkuvCcC9ZsBJyWjaVeMedn1QnNo6eArAi7pOvSl6uY6QnTUDe0ESPRXFcJejVxf3qI2aRs6Htt/X8WkehfmylRzo2bfj9SYjK8rVV4/b0WcnOnM3kw/TZXuRvnoTvvYW+buFtuExK2cR+LUneVU3j2CdxOgScSQKBgQCaABvV+7235AbPTAtE0j6Nzy21kK62BasKWgb5YEXo+2+GAancd9DLtgezpCKHuqgsRDS/TEg7LZ+85R0FYTw+zDswdIiU6JgJWceIRws/loTG4qNM2fnYcxJ9rdffWJWB/S00tzohDrgw3/6PSIluzgcFqIHYR4ZwpcSW+APh6QKBgAeqk7LcK5TqDK3PhNflBrFHxnyA9scJWk06UCrJc/2t3oVB/khg5MzEyeXDO47JoebKibankEYhP4qiKMwVRc/DsM8bv+uCOrTJP719c9AO/Z+Wrh8LKXAaivW5lmbEWAtgTZALjiXsst33PMYzmzeporDG7CAw4rnKGhHeW4fP"
  },
  "datastore": {
    "Spec": {
      "type": "mount",
      "mounts": [
        {
          "mountpoint": "/blocks",
          "type": "measure",
          "prefix": "flatfs.datastore",
          "child": {
            "type": "flatfs",
            "path": "blocks",
            "sync": true,
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2"
          }
        },
        {
          "mountpoint": "/",
          "type": "measure",
          "prefix": "leveldb.datastore",
          "child": {
            "type": "levelds",
            "path": "datastore",
            "compression": "none"
          }
        }
      ]
    }
  },
  "Keychain": {
    "dek": {
      "keyLength": 64,
      "iterationCount": 10000,
      "salt": "jQsI5EhejR5u197UIYvx1Wpj",
      "hash": "sha2-512"
    }
  }
}

I guess you’re saying that since the JS implementation doesn’t have the DHT implemented, you can’t put “dhtclient” as your Routing Type. I don’t know much about the rest of those settings, so I’m out of ideas. :frowning:

It does look like js-ipfs can use dhtclient routing, according to these release notes for v0.48.0. However, switching Routing.Type to dhtclient does not seem to have any effect; if anything, it actually makes the bandwidth consumption higher.

I also added the libp2p configuration below, and confirmed it was applied at runtime. It still doesn’t seem to make a difference to the bandwidth usage.

// Enable the libp2p DHT in client-only mode (query the DHT,
// but don't act as a DHT server for other peers).
const node = await IPFS.create({
  libp2p: {
    config: {
      dht: {
        enabled: true,
        clientMode: true,
      },
    },
  },
})

So when you programmatically stop just that one ‘node’ you got back from the ‘IPFS.create’ call, can you confirm that the network use goes away? You stated that already, but I just want to be sure you’ve completely eliminated the possibility of some other instance/process being the culprit.
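Something along these lines, for example (just a sketch; ‘node’ is the instance from IPFS.create), and then watch whether the network usage actually drops with it:

// Sketch: shut down only this instance and see if the traffic stops too.
await node.stop()
console.log('still online?', node.isOnline()) // should now report false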

Also, maybe something like this will show something:

Or try turning on more logging through the logging endpoint, in case that has any helpful info.
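If it helps, js-ipfs uses the ‘debug’ module for its logging, so in a browser you can usually enable verbose logs from the devtools console and then reload. A sketch; the exact namespaces are a guess and may need adjusting:

// Sketch: enable verbose js-ipfs / libp2p debug logging in the browser.
// Run this in the devtools console, then reload the page.
localStorage.setItem('debug', 'ipfs*,libp2p*,bitswap*')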

I’m running js-ipfs in the browser, so I’m not sure how to use the logging suggested by @wclayf above. Also, ‘whyrusleeping’ mentioned that there is ‘logging bandwidth by protocol’, but I’m not sure how to tap into that.
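Looking at the docs, I think the stats API might be the way to tap into it, since stats.bw() appears to accept a protocol filter. A sketch of what I mean; the protocol string below is just an illustrative guess, not something I’ve verified:

// Sketch: poll overall bandwidth from the running browser node.
for await (const stats of node.stats.bw({ poll: true })) {
  console.log('in/out (bytes):', stats.totalIn.toString(), stats.totalOut.toString())
}

// Per-protocol variant (protocol string is a guess):
// node.stats.bw({ poll: true, proto: '/ipfs/bitswap/1.2.0' })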

If you watch the YouTube video, or visit chat.fullstack.cash, it seems like the bulk of the bandwidth is coming from nodeX.preload.ipfs.io.

I found this thread on how to set up your own preload nodes, and I’m attempting to do that, but the directions are not as straightforward as they first appear. I also don’t see those original preload nodes listed in the IPFS config.
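In the meantime, js-ipfs does seem to let you point the preloader somewhere else (or disable it) at create time, which is what I’m experimenting with. A sketch; the multiaddr below is a hypothetical placeholder for a preload node I’d run myself:

// Sketch: point the preloader at my own node instead of nodeX.preload.ipfs.io.
// The multiaddr is a placeholder, not a real node.
const node = await IPFS.create({
  preload: {
    enabled: true,
    addresses: ['/dns4/preload.example.com/tcp/443/https']
  }
})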

I thought I’d add an update to this thread:

I replaced the Bootstrap array in the IPFS config settings with a list of my own IPFS nodes, and since then I haven’t seen the bandwidth spikes that I recorded in the YouTube video.
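Concretely, I’m passing my own bootstrap list through the create-time config, roughly like this (a sketch; the multiaddrs and peer IDs are placeholders for my own nodes):

// Sketch: replace the default bootstrap list with my own nodes.
// The multiaddrs and peer IDs below are placeholders.
const node = await IPFS.create({
  config: {
    Bootstrap: [
      '/dns4/ipfs-node1.example.com/tcp/443/wss/p2p/<peer-id-1>',
      '/dns4/ipfs-node2.example.com/tcp/443/wss/p2p/<peer-id-2>'
    ]
  }
})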

I’m still waiting to see if they come back. It worries me that I don’t know exactly how I ‘fixed’ the problem, because that means it could come back and I still wouldn’t know why.


The responses on the refs endpoint you were seeing were almost certainly not where the traffic was coming from; you can see the responses there are on the order of hundreds of bytes. Where you probably want to look for traffic is in the WebSocket connections.

However, all that refs traffic indicates that you are asking the preload nodes to find data for you so that you can download it, which means your node was likely not idle at the time but actively requesting data for download.

I’m not super familiar with how the js-ipfs APIs are exposed, but it looks like the data should be available via the stats API (js-ipfs/STATS.md at master · ipfs/js-ipfs · GitHub). The steps to walk through would likely be: first check whether the data usage is coming from Bitswap (i.e. the data retrieval protocol), then figure out what data your application is trying to download and see if the numbers add up. If your application is trying to download a 1 GiB file and you’re getting several hundred kbps of download speed, that’s probably a good thing, given that the alternative is downloading that file really slowly.
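A first pass at that check might look something like this (a sketch, not a definitive diagnosis; ‘node’ is the js-ipfs instance):

// Sketch: see whether the traffic lines up with Bitswap activity.
const bitswap = await node.stats.bitswap()
console.log('blocks received:', bitswap.blocksReceived.toString())
console.log('data received (bytes):', bitswap.dataReceived.toString())
console.log('wantlist length:', bitswap.wantlist.length)

// Compare against the node's overall bandwidth counters.
for await (const bw of node.stats.bw()) {
  console.log('total in (bytes):', bw.totalIn.toString())
}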


I really appreciate your time looking at this and providing the link to the Stats endpoint, @adin. I agree with your assessment. This whole adventure has really forced me to get better at debugging js-ipfs. It’s been a good learning experience.

I’ll continue to dig into my app issues, using your suggestions.

I don’t want to hijack this thread any more than necessary. My app is clearly having an issue, which is separate from the bandwidth limiting issue raised in this thread.
