IPFS repo storing a huge amount of files (30 GB+): "ipfs add" slows down in a high-concurrency environment

Version information:
go-ipfs version: 0.4.17-
Repo version: 7
System version: amd64/linux
Golang version: go1.10.3

Type:
Bug, Feature

Description:
I tested "ipfs add" performance in a high-concurrency scenario on my CentOS node.

When my IPFS repo stores a huge amount of files (30 GB+), "ipfs add" slows down. Surprisingly, when I simulated 100 processes adding at the same time, each add took more than 10 seconds; when I raised it to 1000 processes, it took almost 2 minutes.

This is our test shell script:

#!/bin/bash
echo "IPFS-Swing-Test Start!"
Njob=100
for ((i=0; i<Njob; i++)); do
    echo "progress $i is testing"
    dd if=/dev/urandom of=$i bs=1K count=50
    time docker exec ipfs_host ipfs add $i &
done
wait
# Wait for the loop to finish before continuing past "wait"
echo -e "time-consuming: $SECONDS seconds"
# Print the script's elapsed time
This is result of time consuming:

"real 0m9.627s
user 0m0.077s
sys 0m0.017s

real 0m9.634s
user 0m0.089s
sys 0m0.007s

real 0m9.826s
user 0m0.083s
sys 0m0.019s

real 0m9.784s
user 0m0.090s
sys 0m0.013s

real 0m9.949s
user 0m0.089s
sys 0m0.011s

real 0m10.046s
user 0m0.089s
sys 0m0.017s

real 0m10.108s
user 0m0.102s
sys 0m0.012s

real 0m10.383s
user 0m0.084s
sys 0m0.017s

real 0m10.530s
user 0m0.105s
sys 0m0.015s

real 0m10.527s
user 0m0.080s
sys 0m0.017s
time-consuming: 12 seconds
My guess is that the cause may be the cost of shard retrieval in the Merkle tree. Has the official team encountered this problem? What is the reason for this issue? Can I help you optimize it?


Hello, I have encountered this issue before. It appears to be due to a bug with concurrent file adds that also pin, and with pinning in general.

To work around it, whenever I add a file to IPFS I do so with the --local (i.e., no-pinning) option enabled, and then trigger the pin manually afterwards. This has been working quite well.
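The two-step workaround described above can be sketched as a small shell function. This is a minimal sketch, assuming a running go-ipfs daemon and `ipfs` on the PATH; the function name `add_and_pin` is hypothetical, not part of IPFS itself.

```shell
#!/bin/bash
# Add a file without pinning, then pin it explicitly afterwards,
# keeping the pin out of the hot add path.
add_and_pin() {
    local file="$1"
    local hash
    # -q prints only the resulting hash; --pin=false skips the pin step
    hash=$(ipfs add -q --pin=false "$file") || return 1
    # Trigger the pin manually in a second, separate call
    ipfs pin add "$hash"
}
```

Usage would then be `add_and_pin /export/yangxing`, with the pin calls optionally batched or serialized separately from the concurrent adds.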

Here is the result:
[devuser@blockchain40 export]$ time sudo docker exec ipfs_host ipfs add /export/yangxing
added QmaKK2vxX4bGzXrnxdCuGBCBx6RAmez6TGtVXxE4c5QDst yangxing

real 0m1.652s
user 0m0.164s
sys 0m0.044s
[devuser@blockchain40 export]$ time sudo docker exec ipfs_host ipfs add --local /export/yangxing
added QmaKK2vxX4bGzXrnxdCuGBCBx6RAmez6TGtVXxE4c5QDst yangxing

real 0m1.574s
user 0m0.149s
sys 0m0.021s

You will find mine is much slower than your IPFS because it has stored millions of objects (less than 10 KB each).
I have used --local, but it did not help.

Thanks. What is the difference between ipfs add --pin=false and ipfs add --local?

Try: ipfs add --pin=false