
Topic: Bitcoin full nodes with IPv4 and IPv6 - Why are most peers on IPv4? (Read 3537 times)

full member
Activity: 179
Merit: 131
I still suspect that there is something wrong in Bitcoin package related to the handling of IPv6.

I have installed and set up bitcoind on the PC at home that I use as a NAS backup. I set it to run only on IPv6, i.e. onlynet=ipv6. It has been running for more than a day now, but only 2 IPv6 peers have connected to it and all 8 outgoing connection slots are in use. It reached this state about an hour after restarting bitcoind, and nothing has changed since. When I checked just now, there were 29 unique IPv6 peers recorded in the debug.log, and 16 of those 29 IPv6 peers had port 8333 open.

On IPv4, I usually get more than 20 peers (8 of them outgoing) within an hour of restarting bitcoind. I expected my node to get more than 10 IPv6 peers after running for more than a day, so it looks like peer address propagation on IPv6 is much slower. I still don't believe it is simply because there are fewer IPv6 peers, but let's see how this looks over the next 2 days.
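For reference, this is how such a unique-peer count can be extracted from debug.log (a sketch: the `peeraddr=[...]` field is the log format of this bitcoind version, and the sample log lines below are made up, using documentation addresses):

```shell
# Build a tiny sample debug.log (addresses are placeholders from the
# 2001:db8::/32 documentation range).
cat > /tmp/debug.log <<'EOF'
2016-02-01 08:01:10 receive version message: peeraddr=[2001:db8::a]:8333
2016-02-01 08:02:31 receive version message: peeraddr=[2001:db8::b]:8333
2016-02-01 09:15:02 receive version message: peeraddr=[2001:db8::a]:8333
EOF

# List each IPv6 peer once; here 3 log entries collapse to 2 unique peers.
# Requires GNU grep built with PCRE support (-P).
grep -oP '(?<=peeraddr=\[)[^\]]+(?=\])' /tmp/debug.log | sort -u
```

Piping the output through `wc -l` gives the unique-peer count directly.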
full member
Activity: 179
Merit: 131
My implementation would be to connect to as many nodes as possible until there are none left to connect to or the outbound connections are full.

For example, with IPv6, it would search the local database for all IPv6 nodes and attempt to connect to all of them. Then it would ask for peers and connect to any IPv6 nodes that its peers give it. Once either all 8 outbound connections are taken or there are no remaining reachable IPv6 nodes in the databases described above, it would make outbound connections to IPv4 nodes if any outbound slots are still available. It would accept incoming connections from both IPv4 and IPv6.
I would imagine that you would need to modify many other places in the current Bitcoin software. So the risk of breaking other parts of the system, possibly parts not even related to the main purpose of the changes, would be higher.

So I still think the risk is lower if we simply separate MAX_OUTBOUND_CONNECTIONS into individual limits for IPv4 and IPv6. In that case, any parts that break are very likely the new ones related to IPv6, and I would hope the modifications to the IPv4-related parts would be minimal.
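As a rough sketch of the accounting I have in mind (plain shell, not actual Bitcoin Core code; the cap value and the peer list below are made-up illustrations):

```shell
# Hypothetical per-network outbound caps: 8 slots for IPv4 plus 8 for IPv6,
# instead of one shared MAX_OUTBOUND_CONNECTIONS pool.
MAX_OUT_PER_NET=8

# Current outbound peers, one network label per line (illustrative data:
# IPv4 slots are all taken, only one IPv6 slot is in use).
peers='ipv4
ipv4
ipv4
ipv4
ipv4
ipv4
ipv4
ipv4
ipv6'

v4=$(printf '%s\n' "$peers" | grep -c '^ipv4$')
v6=$(printf '%s\n' "$peers" | grep -c '^ipv6$')

# IPv4 is full (8/8) but IPv6 still has 7 free slots.
[ "$v4" -lt "$MAX_OUT_PER_NET" ] || echo "ipv4 outbound full"
[ "$v6" -lt "$MAX_OUT_PER_NET" ] && echo "ipv6 slots free: $((MAX_OUT_PER_NET - v6))"
```

The point of the split is visible in the output: with a single shared pool of 8, the node above could open no further outbound connections at all.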
sr. member
Activity: 268
Merit: 258
I think it will be more complex to implement that kind of network type prioritisation. Suppose you want to prioritise IPv6 over IPv4. How do you manage the threshold for the number of IPv6 peers before you allow IPv4 peers to connect to the node and be connected to from the node? What if the node never reaches that threshold, leaving you with few IPv6 peers and no IPv4 peers for a long time? Would you use a dynamic timer to avoid that issue? As the network is dynamic, would you keep regularly maintaining the number of IPv6 peers above the threshold? Or would you manage it based on the timer? And so on :)

And even if you manage to find a solution for that without increasing MAX_OUTBOUND_CONNECTIONS, i.e. the maximum of 8 outbound connection slots remains, the chances are quite high that the node will have fewer IPv4 peers for a longer time than before your solution was implemented.
My implementation would be to connect to as many nodes as possible until there are none left to connect to or the outbound connections are full.

For example, with IPv6, it would search the local database for all IPv6 nodes and attempt to connect to all of them. Then it would ask for peers and connect to any IPv6 nodes that its peers give it. Once either all 8 outbound connections are taken or there are no remaining reachable IPv6 nodes in the databases described above, it would make outbound connections to IPv4 nodes if any outbound slots are still available. It would accept incoming connections from both IPv4 and IPv6.
full member
Activity: 179
Merit: 131
I think that is actually a bad idea. You run into memory issues at some point, and having more connections takes up more bandwidth. I tried setting MAX_OUTBOUND_CONNECTIONS to something really high before and it crashed the program; it just took up way too much memory.
I don't think you would have significant issues if you increased MAX_OUTBOUND_CONNECTIONS to 16. But in any case, as I previously mentioned, I am quite sure it still would not solve the problem that many more IPv4 peers than IPv6 peers connect to and from the node, unless the maximum outbound connection setting is separated for IPv4 and IPv6 addresses.

I think it might be better to have an option to prioritize one network over another, e.g. get as many IPv6 connections as possible before getting IPv4 connections.

I think I will try writing some code up and submitting a PR for that.
I think it will be more complex to implement that kind of network type prioritisation. Suppose you want to prioritise IPv6 over IPv4. How do you manage the threshold for the number of IPv6 peers before you allow IPv4 peers to connect to the node and be connected to from the node? What if the node never reaches that threshold, leaving you with few IPv6 peers and no IPv4 peers for a long time? Would you use a dynamic timer to avoid that issue? As the network is dynamic, would you keep regularly maintaining the number of IPv6 peers above the threshold? Or would you manage it based on the timer? And so on :)

And even if you manage to find a solution for that without increasing MAX_OUTBOUND_CONNECTIONS, i.e. the maximum of 8 outbound connection slots remains, the chances are quite high that the node will have fewer IPv4 peers for a longer time than before your solution was implemented.
sr. member
Activity: 268
Merit: 258
Thanks for your comment. That makes sense to me.

I think the fundamental problem is that the Bitcoin software only allows a maximum of 8 outbound connections regardless of how many IP addresses are available on the node. Apart from being an obsolete design, i.e. based only on the IPv4 network, I think the intention is possibly also to prevent bad nodes with multiple IP addresses from flooding the network.

Due to that 8-outbound-connection limitation, it is quite likely that my nodes always connect to IPv4 peers first, and those quickly occupy all 8 outbound connection slots. For the same reason, the likelihood of IPv6 peers connecting to my nodes is also quite low, as those IPv6 peers experience the same issue, i.e. the 8-outbound-connection limitation, unless they run solely on IPv6. I think this is what makes the propagation of IPv6 nodes through the whole network so slow.

I can easily change MAX_OUTBOUND_CONNECTIONS in net.cpp and recompile my bitcoind. But I am quite sure that will not solve the problem, because my nodes will first connect to the available IPv4 peers and occupy all the outbound connection slots again.
I think that is actually a bad idea. You run into memory issues at some point, and having more connections takes up more bandwidth. I tried setting MAX_OUTBOUND_CONNECTIONS to something really high before and it crashed the program; it just took up way too much memory.

I think the solution is to separate the maximum outbound connections per IP address type, so that IPv4 and IPv6 each get a maximum of 8 outbound connections, as they are clearly separate networks. For this reason I raised this feature request. Unfortunately, I am not a hardcore programmer, otherwise I would fork Bitcoin, create a patch and submit a pull request.
I think it might be better to have an option to prioritize one network over another, e.g. get as many IPv6 connections as possible before getting IPv4 connections.

I think I will try writing some code up and submitting a PR for that.
full member
Activity: 179
Merit: 131
You clearly already know everything and don't need any help. Sorry to disturb you. Have a nice day!

Please don't get me wrong. If I knew the reasons for the problem, I would not have asked the question in the first place.

However, you didn't actually answer my main question, especially why a Bitcoin node does not automatically connect to the available IPv6 peers while I can manually force it to connect to them.
It has to do with how node discovery works. Your node has an internal database of nodes that it will try to connect to. That database is built from nodes you have previously connected to (so it will include your IPv6 peers from the past) and nodes that your peers have announced to you. Given that you have only connected to a few IPv6 nodes, your database of available IPv6 nodes is probably very small. Since you can only have 8 total outgoing connections, your node is unable to connect to other IPv6 nodes that it knows about once all 8 connections are taken. After that, you just have to wait for other IPv6 nodes to connect to you, and you have no control over that whatsoever. All you can do is hope that your peers announce you to their peers and that your IP address makes it onto a DNS seed, so that new nodes may be able to see your IPv6 address and connect to you.

Basically it all comes down to there being significantly fewer IPv6 nodes than IPv4 nodes, and to the addresses of those IPv6 nodes not being propagated through the network.

To get more IPv6 peers, you could force your node to only accept IPv6 connections, or force it to get peers from the DNS seeds, which will probably return more IPv6 nodes for you to connect to.

Thanks for your comment. That makes sense to me.

I think the fundamental problem is that the Bitcoin software only allows a maximum of 8 outbound connections regardless of how many IP addresses are available on the node. Apart from being an obsolete design, i.e. based only on the IPv4 network, I think the intention is possibly also to prevent bad nodes with multiple IP addresses from flooding the network.

Due to that 8-outbound-connection limitation, it is quite likely that my nodes always connect to IPv4 peers first, and those quickly occupy all 8 outbound connection slots. For the same reason, the likelihood of IPv6 peers connecting to my nodes is also quite low, as those IPv6 peers experience the same issue, i.e. the 8-outbound-connection limitation, unless they run solely on IPv6. I think this is what makes the propagation of IPv6 nodes through the whole network so slow.

I can easily change MAX_OUTBOUND_CONNECTIONS in net.cpp and recompile my bitcoind. But I am quite sure that will not solve the problem, because my nodes will first connect to the available IPv4 peers and occupy all the outbound connection slots again.

I think the solution is to separate the maximum outbound connections per IP address type, so that IPv4 and IPv6 each get a maximum of 8 outbound connections, as they are clearly separate networks. For this reason I raised this feature request. Unfortunately, I am not a hardcore programmer, otherwise I would fork Bitcoin, create a patch and submit a pull request.
sr. member
Activity: 268
Merit: 258
You clearly already know everything and don't need any help. Sorry to disturb you. Have a nice day!

Please don't get me wrong. If I knew the reasons for the problem, I would not have asked the question in the first place.

However, you didn't actually answer my main question, especially why a Bitcoin node does not automatically connect to the available IPv6 peers while I can manually force it to connect to them.
It has to do with how node discovery works. Your node has an internal database of nodes that it will try to connect to. That database is built from nodes you have previously connected to (so it will include your IPv6 peers from the past) and nodes that your peers have announced to you. Given that you have only connected to a few IPv6 nodes, your database of available IPv6 nodes is probably very small. Since you can only have 8 total outgoing connections, your node is unable to connect to other IPv6 nodes that it knows about once all 8 connections are taken. After that, you just have to wait for other IPv6 nodes to connect to you, and you have no control over that whatsoever. All you can do is hope that your peers announce you to their peers and that your IP address makes it onto a DNS seed, so that new nodes may be able to see your IPv6 address and connect to you.

Basically it all comes down to there being significantly fewer IPv6 nodes than IPv4 nodes, and to the addresses of those IPv6 nodes not being propagated through the network.

To get more IPv6 peers, you could force your node to only accept IPv6 connections, or force it to get peers from the DNS seeds, which will probably return more IPv6 nodes for you to connect to.
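For the first option, a bitcoin.conf fragment along these lines should work (a sketch; `onlynet`, `listen` and `dnsseed` are existing bitcoind options, and these particular values are just one possible choice):

```
# Restrict outgoing connections to IPv6 peers only
onlynet=ipv6
# Keep listening for and accepting inbound connections
listen=1
# Query the DNS seeds for fresh peer addresses
dnsseed=1
```

Note that `onlynet` only restricts outgoing connections; inbound connections on other networks are still accepted while listening.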
full member
Activity: 179
Merit: 131
You clearly already know everything and don't need any help. Sorry to disturb you. Have a nice day!

Please don't get me wrong. If I knew the reasons for the problem, I would not have asked the question in the first place.

However, you didn't actually answer my main question, especially why a Bitcoin node does not automatically connect to the available IPv6 peers while I can manually force it to connect to them.
legendary
Activity: 2128
Merit: 1073
You clearly already know everything and don't need any help. Sorry to disturb you. Have a nice day!

If you configured your IPv6 addresses by hand then your deployment wasn't canonical. The normal, expected way of deploying IPv4 is DHCP; the normal, expected way of deploying IPv6 is DHCPv6 or SLAAC.
I am not sure which part of my information led you to this wrong conclusion. As I clearly wrote, this is a VPS, i.e. a server, so I don't need DHCPv6 to configure IPv6. As I also wrote above, I got a /64 subnet of IPv6 addresses from my VPS providers, so I can use any of the 18,446,744,073,709,551,616 IPv6 addresses they gave me. This is the normal way of configuring IPv6 on servers.

The temporary and permanent IPv6 addresses are described in RFC 4941 http://tools.ietf.org/html/rfc4941 "Privacy Extensions for Stateless Address Autoconfiguration in IPv6". There are plenty of tutorials available if you find reading RFCs too difficult.
If you have been given a subnet of IPv6 addresses, the permanent or temporary addresses (in your terms) are not really relevant any more. I think you misunderstood the information that you mentioned.

My guess is that your nodes are operating properly, although maybe non-optimally. You probably simply don't understand some wrinkle in the expected network deployment procedures. IPv6 represents a return to normalcy: auto-configuration is the default, like it was in nearly every network protocol stack such as DECnet, Novell, AppleTalk, etc. IPv4 was the odd one, requiring so many manual settings.
I have been using and working on IPv6 at work in the last 5 years. And 3 years ago, my ISP provider deployed IPv6. So I am quite familiar with it and I understood enough about the settings related to that.

What I would actually recommend is that you rent a Windows VPS for a short-term evaluation, just to understand how it should be done. It somewhat pains me to recommend Windows, but that is the reality: Microsoft did it right while most Linux distributions screwed it up.
These are really interesting comments :) There are 5 PCs in my home and none of them uses Window$ OS. I use a Window$ PC at work because I have to. There are countless IPv6 setup problems on Window$ servers and workstations in my office, which even Microsoft-certified engineers cannot fix properly. There are no IPv6-related issues on my Linux PCs that I cannot fix myself.

I guess this discussion has gone too far off topic.
full member
Activity: 179
Merit: 131
If you configured your IPv6 addresses by hand then your deployment wasn't canonical. The normal, expected way of deploying IPv4 is DHCP; the normal, expected way of deploying IPv6 is DHCPv6 or SLAAC.
I am not sure which part of my information led you to this wrong conclusion. As I clearly wrote, this is a VPS, i.e. a server, so I don't need DHCPv6 to configure IPv6. As I also wrote above, I got a /64 subnet of IPv6 addresses from my VPS providers, so I can use any of the 18,446,744,073,709,551,616 IPv6 addresses they gave me. This is the normal way of configuring IPv6 on servers.

The temporary and permanent IPv6 addresses are described in RFC 4941 http://tools.ietf.org/html/rfc4941 "Privacy Extensions for Stateless Address Autoconfiguration in IPv6". There are plenty of tutorials available if you find reading RFCs too difficult.
If you have been given a subnet of IPv6 addresses, the permanent or temporary addresses (in your terms) are not really relevant any more. I think you misunderstood the information that you mentioned.

My guess is that your nodes are operating properly, although maybe non-optimally. You probably simply don't understand some wrinkle in the expected network deployment procedures. IPv6 represents a return to normalcy: auto-configuration is the default, like it was in nearly every network protocol stack such as DECnet, Novell, AppleTalk, etc. IPv4 was the odd one, requiring so many manual settings.
I have been using and working on IPv6 at work in the last 5 years. And 3 years ago, my ISP provider deployed IPv6. So I am quite familiar with it and I understood enough about the settings related to that.

What I would actually recommend is that you rent a Windows VPS for a short-term evaluation, just to understand how it should be done. It somewhat pains me to recommend Windows, but that is the reality: Microsoft did it right while most Linux distributions screwed it up.
These are really interesting comments :) There are 5 PCs in my home and none of them uses Window$ OS. I use a Window$ PC at work because I have to. There are countless IPv6 setup problems on Window$ servers and workstations in my office, which even Microsoft-certified engineers cannot fix properly. There are no IPv6-related issues on my Linux PCs that I cannot fix myself.

I guess this discussion has gone too far off topic.
legendary
Activity: 2128
Merit: 1073
On both of my VPS Bitcoin nodes, I have a single eth0 interface with 1 IPv4 and 4 IPv6 addresses.

And my IPv6 setup was initially "standard". But I thought that caused the problem with IPv6 Bitcoin node connections, as the outgoing packets use a different IPv6 address than the IPv6 address the Bitcoin node listens on for incoming packets. That is why I made sure that Bitcoin only uses a single IPv6 address for both incoming and outgoing packets. But according to you, that turned out to in fact be an issue itself.

I am not sure what you meant by "permanent" and "temporary" addresses. The IPv4 and IPv6 addresses on my eth0 are all permanent addresses. Perhaps the "temporary" addresses you meant are the "link-local addresses".

This is becoming more confusing than I expected. Unfortunately, I cannot find any documentation related to this.
If you configured your IPv6 addresses by hand then your deployment wasn't canonical. The normal, expected way of deploying IPv4 is DHCP; the normal, expected way of deploying IPv6 is DHCPv6 or SLAAC.

The temporary and permanent IPv6 addresses are described in RFC 4941 http://tools.ietf.org/html/rfc4941 "Privacy Extensions for Stateless Address Autoconfiguration in IPv6". There are plenty of tutorials available if you find reading RFCs too difficult.

My guess is that your nodes are operating properly, although maybe non-optimally. You probably simply don't understand some wrinkle in the expected network deployment procedures. IPv6 represents a return to normalcy: auto-configuration is the default, like it was in nearly every network protocol stack such as DECnet, Novell, AppleTalk, etc. IPv4 was the odd one, requiring so many manual settings.

What I would actually recommend is that you rent a Windows VPS for a short-term evaluation, just to understand how it should be done. It somewhat pains me to recommend Windows, but that is the reality: Microsoft did it right while most Linux distributions screwed it up.
full member
Activity: 179
Merit: 131
If you have a single IPv6 address per interface then your IPv6 deployment is somewhat nonstandard. The canonical way of deploying IPv6 is with at least two IPv6 aliases per interface: one "permanent" and at least one "temporary" that is changed on a regular schedule. The "permanent" one is mostly for incoming connections, the "temporary" one mostly for outgoing connections. If you have many long-lasting outgoing connections (like e.g. Bitcoin, unlike e.g. HTTP) you should have multiple simultaneous "temporary" IPv6 addresses active on the interface.

On both of my VPS Bitcoin nodes, I have a single eth0 interface with 1 IPv4 and 4 IPv6 addresses.

And my IPv6 setup was initially "standard". But I thought that caused the problem with IPv6 Bitcoin node connections, as the outgoing packets use a different IPv6 address than the IPv6 address the Bitcoin node listens on for incoming packets. That is why I made sure that Bitcoin only uses a single IPv6 address for both incoming and outgoing packets. But according to you, that turned out to in fact be an issue itself.

I am not sure what you meant by "permanent" and "temporary" addresses. The IPv4 and IPv6 addresses on my eth0 are all permanent addresses. Perhaps the "temporary" addresses you meant are the "link-local addresses".

This is becoming more confusing than I expected. Unfortunately, I cannot find any documentation related to this.
legendary
Activity: 2128
Merit: 1073
I set 4 IPv6 addresses from the /64 subnet that I got from my providers. But I use only 1 IPv6 address for the node, and I made sure that both incoming and outgoing Bitcoin packets use that particular IPv6 address via ip route and iptables settings.

Given what you mentioned, does this mean we should not set 2 different IP addresses on a single Bitcoin node even if they are on different networks, i.e. IPv4 and IPv6?
If you have a single IPv6 address per interface then your IPv6 deployment is somewhat nonstandard. The canonical way of deploying IPv6 is with at least two IPv6 aliases per interface: one "permanent" and at least one "temporary" that is changed on a regular schedule. The "permanent" one is mostly for incoming connections, the "temporary" one mostly for outgoing connections. If you have many long-lasting outgoing connections (like e.g. Bitcoin, unlike e.g. HTTP) you should have multiple simultaneous "temporary" IPv6 addresses active on the interface.

There are so many Linux distributions that I can't give you the specifics for yours. I happen to know that Windows does come out of the box with a well-optimized, canonical IPv6 setup. However, if you really want to understand what is going on locally with your node, you will have to delve into reading the Bitcoin source.

As far as Bitnodes goes, it is not to be 100% trusted. It has been subject to Sybil attacks by people trying to game the node version statistics. It deployed some secret anti-Sybil and anti-DDoS code, with the net effect that it undercounts the real reachable nodes. This is a well-known and well-discussed issue with Bitcoin: node counts really mean nothing without proof-of-work. So there's no easy, simple, obviously right way of counting Bitcoin peers. It really depends on the particular configuration of your machine as well as on the configurations of any possible Bitcoin peers using the same ISP as you.
full member
Activity: 179
Merit: 131
The general idea was that the Bitcoin client is supposed to randomize its outgoing IP address usage to avoid constantly connecting to the same subnets. The code is a little weird: it avoids connecting to the same /16 subnet over IPv4; what it effectively does with IPv6 I did not analyze. But even with regularly changing IPv4 addresses (like the vast majority of xDSL deployments, every 24 hours) it doesn't operate well, and is certainly much worse than BitTorrent at locating and connecting to peers.

I set 4 IPv6 addresses from the /64 subnet that I got from my providers. But I use only 1 IPv6 address for the node, and I made sure that both incoming and outgoing Bitcoin packets use that particular IPv6 address via ip route and iptables settings.

Given what you mentioned, does this mean we should not set 2 different IP addresses on a single Bitcoin node even if they are on different networks, i.e. IPv4 and IPv6?
legendary
Activity: 2128
Merit: 1073
Bitnodes is quite buggy/unreliable; essentially I wouldn't count on it. I haven't kept exact statistics, but my rough guess is that it undercounts nodes by 25% to 75%, depending on the exact routing between their prober and your client.

Also, the Bitcoin client isn't working really well with IPv6. If IPv6 address privacy is enabled (practically all Windows machines), it doesn't update its own "temporary" IPv6 address properly. Additionally, if you have e.g. a laptop with both a wired and a wireless port on the same LAN (e.g. a docking station to recharge), it doesn't properly pick between the multiple available IPv6 adapters and doesn't track the changes (visible via getnetworkinfo localaddresses).

The general idea was that the Bitcoin client is supposed to randomize its outgoing IP address usage to avoid constantly connecting to the same subnets. The code is a little weird: it avoids connecting to the same /16 subnet over IPv4; what it effectively does with IPv6 I did not analyze. But even with regularly changing IPv4 addresses (like the vast majority of xDSL deployments, every 24 hours) it doesn't operate well, and is certainly much worse than BitTorrent at locating and connecting to peers.
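The /16 bucketing over IPv4 is easy to illustrate: grouping peer addresses by their first two octets shows how many distinct /16 groups a peer set spans (a sketch using made-up documentation addresses):

```shell
# Four peers, but only three distinct /16 groups: 192.0.2.1 and 192.0.2.99
# share 192.0.x.x, so the client would avoid a second connection there.
printf '%s\n' 192.0.2.1 192.0.2.99 198.51.100.7 203.0.113.5 |
  cut -d. -f1,2 | sort -u
```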

full member
Activity: 179
Merit: 131
Out of those few nodes, how many are still up or at those addresses? You are checking the debug.log, which only tells you about nodes you have connected to in the past; that doesn't mean all of those nodes are still online. Out of those 34 nodes, it is possible that many have gone offline, or that some are the same node just with a different IP address. Of course, your nodes wouldn't know that, so they are counted separately.

Basically, the probability of getting IPv6 nodes is low compared to IPv4 nodes since there are significantly fewer of them. According to Bitnodes, out of 5703 nodes, only 826 are IPv6 nodes. That isn't a lot compared to the IPv4 nodes.

I pinged all of those 11 and 23 IPv6 nodes and all of them are replying.


Correction

The word "replying" in my comment above is wrong. I just assumed that all the nodes were replying because they are up according to nmap. But in fact not all of them have port 8333 open, so some of them actually show, for instance, the following:

Code:
Starting Nmap 6.47 ( http://nmap.org ) at 2016-01-31 19:04 CET
Nmap scan report for XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX
Host is up.
PORT     STATE    SERVICE
8333/tcp filtered unknown

or

Code:
Starting Nmap 6.47 ( http://nmap.org ) at 2016-01-31 19:05 CET
Nmap scan report for XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX
Host is up.
PORT     STATE    SERVICE
8333/tcp closed unknown

At the time I checked them, only 6 of the IPv6 peers in my debug.log had port 8333 open.
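Checking each address by hand gets tedious; the per-host port state can also be pulled out of saved nmap output mechanically (a sketch; the report below is fabricated sample data in nmap's report format, with placeholder addresses):

```shell
# Sample saved nmap output for three scanned hosts.
cat > /tmp/scan.txt <<'EOF'
Nmap scan report for 2001:db8::10
Host is up.
PORT     STATE    SERVICE
8333/tcp open     unknown
Nmap scan report for 2001:db8::11
Host is up.
PORT     STATE    SERVICE
8333/tcp filtered unknown
Nmap scan report for 2001:db8::12
Host is up.
PORT     STATE    SERVICE
8333/tcp closed   unknown
EOF

# Remember the current host, print it only when 8333/tcp shows as open.
awk '/^Nmap scan report for /{host=$NF} /^8333\/tcp +open/{print host}' /tmp/scan.txt
```

Only the hosts whose 8333/tcp state is "open" are listed, so filtered and closed ports drop out automatically.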
full member
Activity: 179
Merit: 131
I don't see any proof that your client is somehow not accepting IPv6 connections, preferring IPv4 connections, or malfunctioning in any way. There aren't that many users with IPv6 on their connections (and I've definitely never had a node connect via IPv6, either on the different VPS services I used or on my home connection). The probability of having IPv4 nodes connected is much higher.

Here are Google's IPv6 statistics. They're worth what they're worth... but it's a starting point to measure adoption.

Yes, there is nothing preventing my nodes from connecting to any available IPv6 peers, as I can manually initiate connections to those peers. And there is nothing preventing any IPv6 peers from connecting to my nodes. However, I have the impression that something in the Bitcoin node prefers IPv4 peers over IPv6 peers, hence my questions about this.

I think Google is not a good reference for IPv6 usage among Bitcoin nodes. According to https://bitnodes.21.co/nodes/, there are 5705 nodes at the time I am writing this: 4875 IPv4 nodes and 830 IPv6 nodes. So I was wondering why at most 3 of those 830 nodes connect to each of my 2 nodes. I guess I just have to force my nodes to connect to as many of those IPv6 nodes as possible.
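One mechanical way to do that is to turn a list of known IPv6 peers into `addnode=` lines for bitcoin.conf (a sketch; the addresses are placeholders, and the bracketed-address-plus-port form is how bitcoind expects IPv6 endpoints to be written):

```shell
# List of IPv6 peers to force-connect to (placeholder addresses).
cat > /tmp/ipv6_peers.txt <<'EOF'
2001:db8::10
2001:db8::11
EOF

# Wrap each address as addnode=[addr]:8333, ready to paste into bitcoin.conf.
sed 's/.*/addnode=[&]:8333/' /tmp/ipv6_peers.txt
```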
staff
Activity: 3458
Merit: 6793
Just writing some code
There aren't many nodes or people that are using IPv6. It is likely that there are simply not enough IPv6 nodes that can connect to you.

I am not sure if that is the main reason. When I did a quick check with "cat debug.log | grep 'peeraddr=\[' | wc -l" just now, I got a list of 2256 IPv6 peers. There is of course duplication of IPv6 addresses, but I expect that my nodes should be able to connect to more than one IPv6 peer, and more IPv6 peers should be able to connect to my nodes instead of only a maximum of 3.

Just for completeness, with the following command:
Code:
cat debug.log | grep -oP '(?<=peeraddr=\[).+(\])' | wc -l

I got 2268 IPv6 peers.

Out of that, there are actually only 11 unique IPv6 peers, which I obtained using the following command:
Code:
cat debug.log | grep -oP '(?<=peeraddr=\[).*(?=\])' | sort | uniq | wc -l

On the other node, I got 23 unique IPv6 addresses.

As I mentioned above, I think my nodes should be able to get more than only 3 IPv6 peers connected to them, and they should be able to connect to more than one IPv6 peer.
Out of those few nodes, how many are still up or at those addresses? You are checking the debug.log, which only tells you about nodes you have connected to in the past; that doesn't mean all of those nodes are still online. Out of those 34 nodes, it is possible that many have gone offline, or that some are the same node just with a different IP address. Of course, your nodes wouldn't know that, so they are counted separately.

Basically, the probability of getting IPv6 nodes is low compared to IPv4 nodes since there are significantly fewer of them. According to Bitnodes, out of 5703 nodes, only 826 are IPv6 nodes. That isn't a lot compared to the IPv4 nodes.
legendary
Activity: 1512
Merit: 1012
I don't see any proof that your client is somehow not accepting IPv6 connections, preferring IPv4 connections, or malfunctioning in any way. There aren't that many users with IPv6 on their connections (and I've definitely never had a node connect via IPv6, either on the different VPS services I used or on my home connection). The probability of having IPv4 nodes connected is much higher.

Here are Google's IPv6 statistics. They're worth what they're worth... but it's a starting point to measure adoption.
full member
Activity: 179
Merit: 131
There aren't many nodes or people that are using IPv6. It is likely that there are simply not enough IPv6 nodes that can connect to you.

I am not sure if that is the main reason. When I did a quick check with "cat debug.log | grep 'peeraddr=\[' | wc -l" just now, I got a list of 2256 IPv6 peers. There is of course duplication of IPv6 addresses, but I expect that my nodes should be able to connect to more than one IPv6 peer, and more IPv6 peers should be able to connect to my nodes instead of only a maximum of 3.

Just for completeness, with the following command:
Code:
cat debug.log | grep -oP '(?<=peeraddr=\[).+(\])' | wc -l

I got 2268 IPv6 peers.

Out of that, there are actually only 11 unique IPv6 peers, which I obtained using the following command:
Code:
cat debug.log | grep -oP '(?<=peeraddr=\[).*(?=\])' | sort | uniq | wc -l

On the other node, I got 23 unique IPv6 addresses.

As I mentioned above, I think my nodes should be able to get more than only 3 IPv6 peers connected to them, and they should be able to connect to more than one IPv6 peer.
staff
Activity: 3458
Merit: 6793
Just writing some code
There aren't many nodes or people that are using IPv6. It is likely that there are simply not enough IPv6 nodes that can connect to you.
full member
Activity: 179
Merit: 131
I have been running 2 Bitcoin full nodes for a few weeks using version 0.11.2 on Linux. On each node, I accept connections on both IPv4 and IPv6.

For incoming connections, the peers connected to my nodes are mostly on IPv4, on the order of 80 or more, while only about 3 peers connect over IPv6.

For outgoing connections, my nodes always connect on their own to 7 different peers on IPv4 but only 1 peer on IPv6. I am using the default maxconnections, which I believe is 125 connections for outgoing and incoming combined.

I can force my nodes to connect to IPv6 peers using the addnode command, but I am wondering why my nodes do not connect to more IPv6 peers by themselves, and why only a few IPv6 peers connect to them. Do you guys have any idea why this is happening?