Create a block from scratch

I’ve never seen that before. It seems like a Windows-specific issue; you’re gonna need someone familiar with Windows to help you with it (I’m a Unix guy).

As for your files still being there: if you used “ipfs add”, they are pinned automatically, and if you used “ipfs files” (or import in the Web UI), they act as if they were pinned for as long as you don’t remove them from there. And even if your files aren’t pinned, as long as you haven’t run the gc (or told the daemon to run the gc on a regular basis), they are still in there.

And if the files are still in the repo, they get “reprovided” every 12 hours, so they should be accessible from the network (assuming your node is running and reachable).
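To see where a given CID stands on your node, something like the following works (the daemon must be running; `$CID` is a placeholder):

```shell
ipfs pin ls --type=recursive | grep $CID   # pinned explicitly via "ipfs add"?
ipfs files stat /                          # content under "ipfs files" (MFS) is gc-protected too
ipfs repo gc                               # only this actually deletes unpinned blocks
```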

After I sent you the previous email, I tried a few times to load certain CIDs which I believe no longer exist in the system. The browser stopped loading new files, and I had to restart my daemon for it to work again.

Things are smooth today except for the above observation. As long as I don’t load those “lost” CIDs, the system works fine.

I have a few questions, and I’m not sure whether it’s appropriate to ask them here. Please let me know if I need to open new topics for them.

  1. I have a lot of “old” data blocks from my previous failed setup. Are they of any use anymore?
  2. Is there a folder size limit? I have a folder over 2 GB, and it couldn’t be loaded in a new tab.
  3. I want to move my .ipfs directory from my C:\ drive (an SSD) to my D:\ drive (a much bigger HDD). What should I do?

Yes, you probably should start a new topic, for a simple reason: there’s no way anyone else is still reading this thread. If you want others to chime in, start a new one.

  1. Unless they hold information you can’t get back any other way, no.
  2. IPFS really doesn’t care how much data you put in (as long as your repo is sized to handle it). However, directories with so many files in them that they need multiple blocks to store are not very efficient; don’t do that (use a better folder hierarchy). Also, if your repo ends up containing a lot of blocks, you might want to consider enabling the AcceleratedDHTClient:
ipfs config --json Experimental.AcceleratedDHTClient true
  3. Your best bet is to use a symlink.
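On Windows, the “symlink” can be a directory junction, or you can skip the link and point the daemon at the new location with the IPFS_PATH environment variable. A sketch, assuming the default repo location and D:\ipfs as the target (paths are examples; stop the daemon first, run from an elevated cmd prompt):

```shell
:: option 1: move the repo and leave a junction behind
move C:\Users\User\.ipfs D:\ipfs
mklink /J C:\Users\User\.ipfs D:\ipfs

:: option 2: no link, just tell ipfs where the repo lives
setx IPFS_PATH D:\ipfs
```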

Could Not Connect to IPFS at all this morning

Tried rebooting Windows, but it’s still not working.

Tried a few previous configs… same, it just won’t connect.

C:\Users\User>ipconfig /all

Windows IP Configuration

Host Name . . . . . . . . . . . . : DESKTOP-D08P37A
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No

Ethernet adapter Ethernet:

Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Realtek PCIe GbE Family Controller
Physical Address. . . . . . . . . : 50-81-40-91-47-D4
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::c02d:d258:d8d1:d0b9%8(Preferred)
IPv4 Address. . . . . . . . . . . : 223.16.242.253(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.240.0
Lease Obtained. . . . . . . . . . : Monday, February 28, 2022 4:42:54 AM
Lease Expires . . . . . . . . . . : Monday, February 28, 2022 7:18:49 AM
Default Gateway . . . . . . . . . : 223.16.240.1
DHCP Server . . . . . . . . . . . : 10.17.8.51
DHCPv6 IAID . . . . . . . . . . . : 223379776
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-28-CA-C6-C9-50-81-40-91-47-D4
DNS Servers . . . . . . . . . . . : 210.3.59.68
210.3.59.77
NetBIOS over Tcpip. . . . . . . . : Enabled

Wireless LAN adapter Wi-Fi:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . : localdomain
Description . . . . . . . . . . . : Realtek RTL8821CE 802.11ac PCIe Adapter
Physical Address. . . . . . . . . : 5C-FB-3A-8D-90-5B
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes

Never mind, it’s back on again
Sorry for the troubles…

There is only one possible explanation: your computer is haunted :grin:

For your information: I did the following last night to my Brave settings:

  • set redirect to “On”
  • increased the cache size to 5 GB

I’m pretty sure this is the culprit!

I have mine set to use my own node:

My desktop won’t work with your setting.

The browser has to be on the same machine as the node for it to work. Maybe “localhost” isn’t defined on your machine, try and replace that word with “127.0.0.1” instead.

Still not working; both show the following error:
Only a valid IPFS gateway with Origin isolation enabled can be used in Brave

So what you mean is that the best practice for storing files is something like: directory → directory → directory → … → files (as few files as possible per directory)?

Yesterday I uploaded close to 200 files (77 MB) to the system, in one single directory. Is that bad?

That’s probably still just 1 block, so it’s fine. The block is just the list of files, not the files themselves. The file sizes don’t matter, only how many there are per folder. If the list gets too long, it has to be split into multiple blocks, and that slows things down (more blocks to find and download just to get to the file).
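You can check this on your own directory CID; at the time of this thread, “ipfs object stat” was the command for it (`$DIR_CID` is a placeholder):

```shell
ipfs object stat $DIR_CID
# NumLinks       → number of entries in the folder
# BlockSize      → size of the directory block itself (the listing)
# CumulativeSize → everything reachable from it, files included
```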

So something like a thousand files in a single directory/folder might slow things down, correct?

It depends on the length of the filenames: if they have short names, you can have more of them than if they have long names. It all has to fit into 1 block, which is about a quarter of a MB.
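A back-of-the-envelope estimate of how many entries fit in one ~256 KiB directory block (the ~40-byte per-entry overhead for the CID and framing is an assumption, not an exact figure):

```shell
BLOCK=262144   # ~256 KiB block size
OVERHEAD=40    # assumed bytes per entry beyond the filename
NAMELEN=20     # average filename length in bytes
echo $(( BLOCK / (NAMELEN + OVERHEAD) ))   # rough number of entries per block
```

With 20-character names that comes out to roughly four thousand entries, so a folder of a thousand files is likely still a single block.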

In the past 30 hours, the system has been very unstable. Sometimes it works like a charm, but most of the time it returns 504 Gateway Time-out. Besides the time-out error, I also got this error: “internalWebError: promise channel was closed”.

I can’t help wondering why it’s so hard to load something from my own node.

Is there anything I can do to help? Below is my current ipfs config for your reference.

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/127.0.0.1/tcp/4001",
      "/ip4/127.0.0.1/udp/4001/quic"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true,
      "Interval": 10
    }
  },
  "Experimental": {
    "AcceleratedDHTClient": true,
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "12D3KooWK5GQfwTovUTHR8Ud2WG4fMWFx644w59g6YAoXvYA278y",
    "PrivKey": "CAESQPwYl0Ho9KT/Jx9J3eD5pebE/+Ayb/UaaaS+foWu7LW/iYphu8IcjlLRL1LyhxZAfkIIjicmxy9HpcaDRKL2EP4="
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {
    "Interval": "6h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "60s",
      "HighWater": 300,
      "LowWater": 50,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": true,
    "EnableHolePunching": true,
    "RelayClient": {
      "Enabled": true
    },
    "RelayService": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

There are two ways to improve that (you should do both):

  • open port tcp/udp 4001 in your modem’s firewall (and adjust node settings once done)
  • re-pin your content in multiple places (my content appears in about 10 separate nodes after I put it up, which makes it instantly discoverable)
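On the “adjust node settings” part: the config posted above only listens for swarm connections on 127.0.0.1, so even with port 4001 open in the firewall, no outside peer can reach the node. A sketch of restoring the wildcard listeners (the addresses below are the stock kubo defaults; restart the daemon afterwards):

```shell
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001","/ip4/0.0.0.0/udp/4001/quic","/ip6/::/tcp/4001","/ip6/::/udp/4001/quic"]'
```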

How do I re-pin? Using services like Pinata, nft.storage?

Correct. Here is the way I do it for myself:

It all starts on Fleek. My content is managed using git and stored on GitHub. After making a change and pushing it to GitHub, Fleek picks it up automatically, builds it, and puts it up on IPFS.

Next, I re-pin it on my 2 nodes using “ipfs pin update”. The first one gets it from Fleek; the second gets it from the first over my LAN (gotta love how IPFS does that).

Next, I re-pin on Pinata using “ipfs pin remote”.

Next, I use “ipfs dag export” on my node to create a .car file, and use “curl” to upload it to web3.storage, where it gets re-pinned on several nodes and backed up on Filecoin.
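The steps above, roughly as kubo commands. The CIDs, the “pinata” remote-service name (it must already be registered with “ipfs pin remote service add”), and the web3.storage upload endpoint are placeholders/assumptions, not the poster’s exact setup:

```shell
# refresh the local pin in place (old CID -> new CID)
ipfs pin update $OLD_CID $NEW_CID
# re-pin on the remote pinning service
ipfs pin remote add --service=pinata $NEW_CID
# snapshot the whole DAG into a .car file
ipfs dag export $NEW_CID > site.car
# upload the .car to web3.storage (endpoint assumed)
curl -X POST https://api.web3.storage/car \
  -H "Authorization: Bearer $WEB3STORAGE_TOKEN" \
  -H "Content-Type: application/vnd.ipld.car" \
  --data-binary @site.car
```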

Once that is done, the content is at least on 10 nodes and gets discovered from pretty much anywhere instantly (even if I shut my nodes down for a while, which can happen).