In Bitcoin you only vote with your associated hashpower as a mining pool. Full nodes don't vote or play any part in consensus. They are just observers of the outcome (block creation) that mining nodes produce. I think we agree on that?
If you think so, then what do you think about this page?
https://en.bitcoin.it/wiki/Bitcoin_is_not_ruled_by_miners
There are two options: this page is wrong, or you are wrong.
Propagation with nodes on what bandwidth? And with what specs?
You can see the minimum requirements for running Bitcoin Core; they are listed on the project page.
Of course nodes need time to validate blocks.
This is the problem: validation takes a lot of time. I can run two nodes on localhost and let one empty node synchronize from the other. Guess how long it would take? Most of the time will be spent on validation. Downloading will be nearly instant, because data on localhost is transferred at the maximum speed your machine can sustain; the two nodes can even be on the same disk! If validation were simpler, syncing could be as fast as just copying a folder. But for some reason, it can take many hours. Which means bandwidth is not the biggest issue.
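The argument above can be sketched with a toy experiment (an assumption for illustration only: repeated SHA-256 hashing stands in for the signature and script checks that real validation performs; this is not Bitcoin Core's actual validation code):

```python
import hashlib
import os
import time

# Pretend this buffer is one 4 MB block already sitting on localhost.
data = os.urandom(4 * 1024 * 1024)

# "Downloading" from localhost is essentially a memory copy.
t0 = time.perf_counter()
copied = bytes(data)
copy_time = time.perf_counter() - t0

# "Validating" means CPU work over the same bytes; here, 50 rounds of
# SHA-256 stand in for per-transaction checks (a deliberate simplification).
t0 = time.perf_counter()
for _ in range(50):
    hashlib.sha256(data).digest()
validate_time = time.perf_counter() - t0

# The CPU-bound step dominates the transfer, which is the point:
# bandwidth is not the bottleneck when the peer is on the same machine.
print(validate_time > copy_time)
```

Under this sketch, the copy finishes in a fraction of the time the hashing loop needs, mirroring why a localhost sync is still slow.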
Who defines what decentralisation is?
Of course full nodes. The more independent full nodes you have, the more decentralized the whole system is. And obviously, they should be owned by independent people and controlled from independent machines. Which brings us back to the first quote: if you think that miners can "vote" and full nodes can "only observe", then what kind of decentralization is present in your model?
So, my definition of decentralization is quite simple: you count all full nodes, and you count how many of them are owned by independent people. The more such nodes you can find in the wild, the higher the whole decentralization is. If you don't like this definition, then give a better one, because you cannot beat something with nothing.
And how decentralised or centralised is a network?
If you express it by the number of independent nodes, as stated in my previous sentences, then it is quite simple: a network with 100 independent nodes is 10x more decentralized than a network with 10 independent nodes. Machines can be better or worse, but if you take only a single parameter called "decentralization", then the number of independent parties can be used to conclude what it is. Which means three Byzantine generals are 3x less decentralized than nine generals.
A common question I ask: can a 5-node network be decentralised and a 10000-node network be centralised?
Of course, because my definition mentions the number of independent nodes. Which means 10000 nodes won't help if they are all owned by a single entity. And because you can never prove that any two nodes are independent, it is hard to measure. But you can judge by their behavior: just observe the nodes you connect with, and make your own conclusions based on what you can see in the wild.
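The counting definition above can be written down directly (a hypothetical sketch; the `decentralization` function and the owner labels are illustrative, since real ownership can never be proven from the network alone):

```python
# Decentralization measured as the number of *distinct* independent
# operators, not the raw node count.
def decentralization(nodes):
    """nodes: list of (node_id, owner) pairs; score = distinct owners."""
    return len({owner for _node, owner in nodes})

# 10000 nodes run by one entity score worse than 5 independent nodes.
sybil = [(i, "one-entity") for i in range(10_000)]
small = [(i, f"person-{i}") for i in range(5)]

print(decentralization(sybil))  # 1
print(decentralization(small))  # 5
```

By this metric the 10000-node Sybil network is maximally centralized, while the 5-node network of independent people is 5x more decentralized.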
Decentralisation comes from the economic incentives participants have. Being an active node contributes to that, and that's achieved only through PoW.
Having a huge hashrate is not enough to rule the world. You can have quite an impressive hashrate, but you will have no impact on BTC if you mine BCH. Which means you have to produce blocks that are accepted by the users. Which also means you don't have to be a miner to "vote". Why? Because miners alone could, for example, decide that they want 50 BTC forever, and that halvings should be disabled. If they theoretically could do that, then tell me why it is not the case. The answer is simple: because it is all about connections between users, miners, developers, and all other people involved in Bitcoin. As I linked at the very beginning of this post, "Bitcoin is not ruled by miners".
Who do you think would be more interested than anyone in staying on the network?
A lot of people value backward compatibility more than you can imagine. We still use IPv4, even though IPv6 was deployed. We started from ASCII, and UTF-16 was created before UTF-8, but UTF-8 seems to be the winning side, because of backward compatibility. And I can tell you more: in the context of Bitcoin, we started from P2PK, and now P2TR is more similar to P2PK than to P2SH. We have just public keys with commitments, because this is backward-compatible. And we could have ended up with a single address type, P2PK, with everything else attached through commitments, if the whole system had supported that from the very beginning! And when it comes to hash functions, even collisions for SHA-1 didn't stop people from using it; instead they created "hardened SHA-1", which is still used by "git". Guess why: because it is backward-compatible!
The world is "unupgradable", and the sooner you accept it, the easier your life will be. Then you will understand why some countries have quite advanced banking, while in other places you can still use cheques. If you have nothing, then your new system can be built from scratch, without any serious limitations. But if something is already there, upgrading it is very hard. Extremely hard, so you will lose a lot of customers if you break backward compatibility.
If all of this were done by following what the weakest participant wanted, then there couldn't be decentralisation.
The system is not about "the weakest" or "the worst". It is about "compatible" vs "incompatible". You could have terabyte blocks, here and now. But please, compress your terabyte-sized blocks, and include some proofs, so that absolutely everyone is not forced to process all of that. Make it a commitment. Guess what: the block header has 80 bytes. Does that mean you can process only 80 bytes per 10 minutes? No, you can process a lot more. And the same is true here: just commit your terabyte-sized network to the existing 4 MB witness, and you are good to go. Also, by tweaking public keys, you won't need any additional on-chain bytes to form a commitment. Taproot can show you that: you have a 256-bit public key, no matter how big the Ordinals behind a single address are. And in the same way, the network can process much more than 4 MB per 10 minutes, without any hard-fork or soft-fork, if you know how to make a commitment.
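The core idea can be shown with a toy hash commitment (an assumption for illustration: plain SHA-256 is used here instead of the actual Taproot construction, which tweaks an elliptic-curve public key with tagged hashes; the principle of binding large data to a fixed-size value is the same):

```python
import hashlib

def commit(data: bytes) -> bytes:
    """Bind arbitrarily large off-chain data to a fixed 32-byte digest."""
    return hashlib.sha256(data).digest()

# Pretend this is a huge off-chain payload; only its 32-byte
# commitment would ever need to appear on-chain.
huge = b"\x00" * (16 * 1024 * 1024)
c = commit(huge)

assert len(c) == 32                  # on-chain footprint stays constant
assert commit(huge) == c             # anyone holding the data can verify it
assert commit(huge + b"x") != c      # any change breaks the commitment
```

This is why the payload's size does not matter to the chain: whether it is 16 MB or a terabyte, the committed value stays 32 bytes, and only parties interested in the data ever process it.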