
Topic: Why can Core not Scale at a 0.X% for blocksize in the S-Curve f(X) (Read 1126 times)

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange

Heart has basically diverged from bitcoin's original idea: a p2p electronic cash system. He is a bitcoin-as-gold guy. Don't bother listening to him.

We need bitcoin as the monetary system of the future. Banks should back off; any other proposal is void, imo.

Of course we ideally want both: A store of value and a way to make cheap and fast transactions, but can we have both with no tradeoffs? This is what seems impossible to me thus far.

And Bitcoin already works well as a digital gold. What you are proposing is: "let's try my untested ideas by changing bitcoin, with the risk of ruining the digital gold property which already works, in order to see if my ideas actually turn out great in real life".

Like I said, I'm not seeing any numbers, tests, or research. You need a testnet and to gather some data with your modifications. I don't think you will get much support for your hard fork proposal otherwise.

Something like this for starters:





FYI, those images come from Bitfury Research, 6 September 2015 --> https://bitfury.com/content/downloads/block-size-1.1.1.pdf

But IMO there are a few problems with that image/research:
1. No info on what kind of computer/server was used for the research
2. It looks like the daily/yearly traffic excludes the traffic used when new nodes (which don't have the blockchain yet) connect
3. Since SegWit and newer Bitcoin Core versions, block verification time should be faster

TL;DR: it's outdated research and no longer accurate under today's conditions

--snip--
I've pretty much given up my pursuit of an adaptive/dynamic blocksize.  I used to think like you do.  I couldn't see a good reason why we shouldn't be pursuing what seemed like a really good idea.  But I can't code it, no one else is volunteering to do so, which means it's clearly not happening.  Consider that the same is happening with your ideas right now.  You can't force these things.  Either people get on board, or they don't.

I'm pretty sure a few developers/contributors made BIPs about an adaptive/dynamic blocksize a few years ago. But we know they were rejected, like most proposals.

Edit: it's BIPs 105, 106 and 107.
legendary
Activity: 2898
Merit: 1823
@Wind_FURY,
It is the first time I'm hearing about this fork  Cheesy

Just check their web site,  Undecided

Too many forks out there, many of them with obvious poisonous incentives, many others with little or zero history of discussion and theoretical efforts, ...

My proposals, both for the block time decrease and for collaborative work, are meant to improve the current bitcoin network at the scale it is experiencing. I don't understand how a coin/fork with a few million dollars of market cap and a few petahashes of mining power would need any kind of improvement, or how useful it would be as a testbed for such improvements.

So, thanks but no, I'm not interested.

But you want to use the Bitcoin network as your "testbed" for your ideas, and then criticize the Bitcoin Core developers if they reject them? Hahaha.

Plant your feet on the ground, my friend. The Core developers are doing a good job maintaining the network's decentralization and security. That must not change.
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
--snip--
I've pretty much given up my pursuit of an adaptive/dynamic blocksize.  I used to think like you do.  I couldn't see a good reason why we shouldn't be pursuing what seemed like a really good idea.  But I can't code it, no one else is volunteering to do so, which means it's clearly not happening.  Consider that the same is happening with your ideas right now.  You can't force these things.  Either people get on board, or they don't.

I'm pretty sure few developer/contributor have made BIP about adaptive/dynamic blocksize few years ago. But we know it's rejected like most proposal.

Edit : it's BIP 105, 106 and 107.

Indeed.  And if an idea that can both raise and lower throughput limits depending on network conditions can be rejected, then an idea that can only raise them (such as a blocktime reduction) will be even more prone to rejection (unless there were a clear and urgent need for it, which there currently isn't).

A ~90 second blocktime like aliashraf proposes would be the approximate equivalent of a 6.66MB base blocksize, or a potential 26.64MB SegWit blockweight, at the current ~10 minute blocktime.  If the community deemed SegWit2x's potential 8MB blockweight excessive, how would anyone in their right mind think the community would openly support more than three times that amount of potential throughput?
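For anyone who wants to check the quoted figures, the equivalence is simple division. This is a minimal sketch; the 1 MB base size and 4x SegWit weight factor are the standard parameters, not numbers taken from this thread, and the exact decimals differ slightly from the truncated 6.66/26.64 above:

```python
# Throughput equivalence: shortening the block time by a factor F is
# roughly equivalent to multiplying the block size by F.
CURRENT_BLOCKTIME_S = 600   # ~10 minutes
BASE_BLOCK_MB = 1.0         # pre-SegWit base block size
SEGWIT_WEIGHT_FACTOR = 4    # max block weight multiplier under SegWit

def equivalent_base_size_mb(new_blocktime_s):
    """Base block size giving the same bytes/second as the current block time."""
    return BASE_BLOCK_MB * CURRENT_BLOCKTIME_S / new_blocktime_s

base = equivalent_base_size_mb(90)      # 600/90 = 6.67 MB equivalent base size
weight = base * SEGWIT_WEIGHT_FACTOR    # ~26.67 MB potential block weight
print(round(base, 2), round(weight, 2))
```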

And don't even get me started on the PoW change where he'd happily obliterate the security of the network and reset the difficulty to circa 2011 levels.  Total lunacy. 
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@Doomad
I don't understand how your example would help prove your point. @Evil-Knievel made an argument about the Moore bound, and I mathematically showed that it is not applicable, because it is a very high bound for a graph with a degree of 100 (totally feasible) and a diameter of 4 (interestingly short) ...
So, who is not listening/reading here?  Huh
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
That is not how it works. Improvement proposals have to be discussed and refined before being coded. You are not interested in discussing ideas? Ok then, don't participate; just wait for the alpha/beta/release versions, no problem.

As for my PoCW project, I need a LOT of contribution, for which, thanks to you and guys like you, I have got nothing other than discouraging comments and FUD about how "dangerous" touching bitcoin is.

You only seem interested in having a "discussion" up to the point where someone raises a concern, then you just start talking over them.  Case in point:

Sure, when you scale up the number of connections each node has to have, you can always ensure very low hop counts.
Clearly, this doesn't apply for random graphs (as we have in Bitcoin) so you would need some kind of fancy management overhead that constructs those special graphs in some decentralized way. Also, you would have to check what effects those 100+ node connections actually have: are there any new attack surfaces regarding DOS, net split? Are slow nodes doomed to fail because they suddenly have to process and rebroadcast all that crap from the other 100 nodes.
So, no Moore Bound problem, ok?



"Yeah, that's just not a problem, so la-la-la-I'm-not-listening-la-la-la".  You don't want a discussion, you want people to blindly follow you without question as though what you're saying is the only way to move forwards.  You aren't looking for ways to overcome or work around any issues that are being raised, you're simply ignoring them and hoping they don't cause you grief later down the line.

You are not an easy person to have a discussion with.  
legendary
Activity: 1372
Merit: 1252
I'm up for discussing the game theory involved in the different proposals, and I'm not discouraging you from continuing to work on it; I hope we can finally see the code eventually.

But nonetheless, we've discussed Richard Heart's claims on blocksize and blocktime. You said he was a "bitcoin as digital gold" charlatan who should be ignored.

I claim that this is a mistake. Most of the "bitcoin as digital gold" guys are the biggest whales in Bitcoin; in other words, they are the main guys you must convince, since in any hardfork people vote with their coins at the end of the day.

My point is that the "bitcoin as digital gold" guys are not going to give you an inch of space for the tradeoffs involved in your proposal. They are usually extremely conservative and consider Bitcoin good enough for their needs and don't care if the rest cannot afford transactions or if it's too slow or whatnot.

So yeah, don't ignore their opinion or your hardfork will get dumped hard.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
It is the first time I'm hearing about this fork  Cheesy

Just check their web site,  Undecided

Too many forks out there, many of them with obvious poisonous incentives, many others with little or zero history of discussion and theoretical efforts, ...

My proposals, both for the block time decrease and for collaborative work, are meant to improve the current bitcoin network at the scale it is experiencing. I don't understand how a coin/fork with a few million dollars of market cap and a few petahashes of mining power would need any kind of improvement, or how useful it would be as a testbed for such improvements.

So, thanks but no, I'm not interested.

At least they're willing to actually code it and put their idea into practice, which is seemingly more than you're willing or able to do.  Get it coded, put it on a testnet, show everyone that it works without sacrificing any of the current qualities the current userbase sees value in.  Until then, thanks but no, we're not interested.  

I've pretty much given up my pursuit of an adaptive/dynamic blocksize.  I used to think like you do.  I couldn't see a good reason why we shouldn't be pursuing what seemed like a really good idea.  But I can't code it, no one else is volunteering to do so, which means it's clearly not happening.  Consider that the same is happening with your ideas right now.  You can't force these things.  Either people get on board, or they don't.


That is not how it works. Improvement proposals have to be discussed and refined before being coded. You are not interested in discussing ideas? Ok then, don't participate; just wait for the alpha/beta/release versions, no problem.

As for my PoCW project, I need a LOT of contribution, for which, thanks to you and guys like you, I have got nothing other than discouraging comments and FUD about how "dangerous" touching bitcoin is.

I'm not a shit-fork guy, nor a scammy ico/start-up one, so I continue running my campaign to convince more people of the feasibility of having a better bitcoin. By better I mean better at accomplishing its mission as an alternative monetary system: decentralized, secure and fast.

For this I have already proposed an innovative, original idea for replacing bitcoin's winner-takes-all approach with a contributor-takes-share approach that can neutralize pooling pressure and eliminate pools from the ecosystem forever, plus a series of complementary improvements like decreasing the block time, improving the transaction format, and more.

Coding is in progress and I will keep you informed about it   Wink
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
It is the first time I'm hearing about this fork  Cheesy

Just check their web site,  Undecided

Too many forks out there, many of them with obvious poisonous incentives, many others with little or zero history of discussion and theoretical efforts, ...

My proposals, both for the block time decrease and for collaborative work, are meant to improve the current bitcoin network at the scale it is experiencing. I don't understand how a coin/fork with a few million dollars of market cap and a few petahashes of mining power would need any kind of improvement, or how useful it would be as a testbed for such improvements.

So, thanks but no, I'm not interested.

At least they're willing to actually code it and put their idea into practice, which is seemingly more than you're willing or able to do.  Get it coded, put it on a testnet, show everyone that it works without sacrificing any of the current qualities the current userbase sees value in.  Until then, thanks but no, we're not interested.  

I've pretty much given up my pursuit of an adaptive/dynamic blocksize.  I used to think like you do.  I couldn't see a good reason why we shouldn't be pursuing what seemed like a really good idea.  But I can't code it, no one else is volunteering to do so, which means it's clearly not happening.  Consider that the same is happening with your ideas right now.  You can't force these things.  Either people get on board, or they don't.

legendary
Activity: 1372
Merit: 1252

Heart has basically diverged from bitcoin's original idea: a p2p electronic cash system. He is a bitcoin-as-gold guy. Don't bother listening to him.

We need bitcoin as the monetary system of the future. Banks should back off; any other proposal is void, imo.

Of course we ideally want both: A store of value and a way to make cheap and fast transactions, but can we have both with no tradeoffs? This is what seems impossible to me thus far.

And Bitcoin already works well as a digital gold. What you are proposing is: "let's try my untested ideas by changing bitcoin, with the risk of ruining the digital gold property which already works, in order to see if my ideas actually turn out great in real life".

Like I said, I'm not seeing any numbers, tests, or research. You need a testnet and to gather some data with your modifications. I don't think you will get much support for your hard fork proposal otherwise.

Something like this for starters:



legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@Wind_FURY,
It is the first time I'm hearing about this fork  Cheesy

Just check their web site,  Undecided

Too many forks out there, many of them with obvious poisonous incentives, many others with little or zero history of discussion and theoretical efforts, ...

My proposals, both for the block time decrease and for collaborative work, are meant to improve the current bitcoin network at the scale it is experiencing. I don't understand how a coin/fork with a few million dollars of market cap and a few petahashes of mining power would need any kind of improvement, or how useful it would be as a testbed for such improvements.

So, thanks but no, I'm not interested.
legendary
Activity: 2898
Merit: 1823
Good news for you, aliashraf. The "real Bitcoin Core", BTCC, https://thebitcoincore.org/, will go through a hard fork to one minute block times.

I believe it will be on Thursday next week. You can contribute by reviewing their code for bugs. It should be a good experiment for their network. Cool

legendary
Activity: 1456
Merit: 1175
Always remember the cause!

"t0 2.5 min then at t1 90s" sounds a bit reckless.
I don't see anything reckless here:
With the current mining situation, dominated by pools, there are no worries about network diameter; they have already resolved it, and under my proposed pool-free protocol it would be taken care of properly.

 

Apart from the block-size and block-time changes, your other proposition (from another thread) is a radical change to the PoW algorithm. ...

Changing the mining algorithm so as to wreak uncertainty on miners worldwide would be a bad-disruptive thing to do, not a good-disruptive one. How disruptive the network specification changes you propose here would be should be demonstrable on a testnet.. Undecided

My PoCW proposal keeps SHA256 unchanged; the only disruptive aspect is eliminating pooling pressure and the need for pools (not the possibility).

I'm not slandering anybody. It would be a sign of arrogance for these guys if they take my criticism as an offense.
A lot of people including Greg Maxwell have pointed out that any technical discussion here at BCT has lost significance because of the way it quickly gets personal/ political. Here's what you said in your post:
Spreading FUD about  mysterious huge risks of 'breaking' the network because of improvement and evolution is an important part of the Core's strategy.
In my opinion, accusing someone of spreading FUD as part of their 'strategy' is not technical criticism (which is what you should focus on, and which no doubt will be welcomed). You cannot call them FUDders and dogma-ridden and expect them not to think that you have an ulterior agenda. There have been too many of those out here. ¯\_(ツ)_/¯
No, I don't agree. Spreading FUD about a mysterious fragility in bitcoin and labeling every serious improvement proposal a potentially dangerous move is a regular practice of these guys and should be denounced properly. Nobody has a privilege or diplomatic immunity to act like that and be exempt from criticism.

Plus, I think it is very normal in a technical discussion to accuse the other side of such misconduct when they are just handwaving about mysterious threats and consequences.

For instance, the whole Buterin trilemma thing is FUD that targets average crypto enthusiasts and users, pushing them to hopelessly give up on having a decentralized, secure, and well-performing system and instead accept his weird proposals about PoS and weak subjectivity (can you believe it? 'weak' shittiness).

Analogously, when someone claims that reducing the block time by 2-4x in a fully decentralized mining scene will lead to centralization, he is spreading FUD, simply because there is no evidence or technical proof supporting such a claim. It is just FUD.

Protesting against extreme conservatism and asking for a more open-minded team in charge of bitcoin development is not a crime. Nor should being against the idea of leaving bitcoin unchanged and sticking with 2nd-layer projects be considered an act of slander.
Many people don't view it as extreme conservatism. It's accepting that you don't know what you don't know. ...
When it comes to protocols we design and implement, there is almost nothing that we don't know.



legendary
Activity: 1456
Merit: 1175
Always remember the cause!
Sure, when you scale up the number of connections each node has to have, you can always ensure very low hop counts.
Clearly, this doesn't apply for random graphs (as we have in Bitcoin) so you would need some kind of fancy management overhead that constructs those special graphs in some decentralized way. Also, you would have to check what effects those 100+ node connections actually have: are there any new attack surfaces regarding DOS, net split? Are slow nodes doomed to fail because they suddenly have to process and rebroadcast all that crap from the other 100 nodes.
So, no Moore Bound problem, ok?

The Bitcoin relay network consists of two different classes of full nodes: non-mining nodes (wallets, browsers, etc.) and mining-related nodes.

By mining-related I mean full nodes that are directly connected to a mining facility (pools, farms, etc.). Currently, with pools dominating the mining scene, I believe the number of mining nodes is extremely low (below 2k, probably). Most of these nodes belong to farms/pools, centers with hundreds of thousands up to millions of dollars of investment, which have the incentives and strength to do whatever they find necessary to optimize their presence on the network.

IOW, having a very compact graph of mining full nodes with a 4-hop diameter, in the current bitcoin situation with pools, seems to be a trivial job. Actually, it is already done by miners, using simple strategies like deliberately choosing each other as peers or, more effectively, by joining FIBRE and the like.

But my target is more ambitious: I wanna set miners free from pool slavery by neutralizing pooling pressure. So I have to solve a much more difficult problem, for a network with a minimum of something like 100,000 mining nodes constrained by the same 4-hop diameter condition. Cool

For this, we would need some minor improvements to the networking module of bitcoin on one hand, and the promotion of independent relay networks like FIBRE, which provide a backbone service for instantaneous message relay, on the other. I think we don't have to worry about centralization or trust problems with such networks, because nodes always have the legacy bitcoin relay network as a backup and as a reference witness for evaluating the service, both in terms of validity and speed, and can easily abandon unfaithful services.

Quote
I am sure it is doable somehow, but whoever has tried to implement a proper, resilient Chord P2P network probably figured out quickly that it's virtually impossible to properly handle all the different side cases. For this type of graph, I bet the task would be even harder. Probably, the overhead of maintaining the graph structure would become the limiting factor at some point.
I don't think the Chord protocol is necessary here. Just imagine we have a handful of commercial services, each consisting of a complete graph of 20-50 non-mining full nodes trusting each other and optimized to handle hundreds of connected full nodes. Miners would pay for the service and, besides the legacy bitcoin p2p network, they would join such services, probably through a load balancing mechanism ...

legendary
Activity: 1904
Merit: 1159
@amishmanish
More people have correctly reminded me of political obstacles rather than technical ones, unlike what you suggest. I've been around for a while. I understand who is who in bitcoin and which frontiers are worth fighting for.

Buterin's trilemma is one example, and Core's exaggerations about the fragility and sensitivity of the bitcoin protocol/software, which supposedly prohibit any "reckless" manipulation, are another. We should get rid of such superstitious pretexts to have a fresh and dynamic development environment, because we need contribution.
Apart from the block-size and block-time changes, your other proposition (from another thread) is a radical change to the PoW algorithm. I am not someone who understands the game theory, graph theory and associated higher mathematics needed to judge the decentralization aspects of a mining algorithm. After much reading and thought, I came to the understanding that PoW works. The history of its development shows that various such "Proof-of-X" ideas were tried but failed to pass the trustlessness criterion. This made me decide that bitcoin is my best bet. I am sure that the whales invested in this are also aiming for stability, not radical experimentation.

If the obstacle is not technical but political, then you need to generate enough information to help people decide whether a change in the algorithm or block times is feasible. This needs code, testing and evidence. Hence, a testnet. It will be far easier to convince people that way. Or maybe you could point us to a peer-reviewed discussion of these topics.

As I admitted above, there are technical aspects I don't understand yet. What I do understand is that Bitcoin's PoW works and has been working for the past decade. What I know is that a whole hardware manufacturing industry supports it with increasing hashpower, and that hashpower is what keeps it valuable. I understand that mining has a centralization aspect, but I also know that the entities involved will be wary of risking an own goal by targeting the very thing that keeps them rich. As a believer in bitcoin's ideals, I hope and cheer for news that mining hardware may soon see the entry of other major players, leading to some resolution of the issue that you wish to address.

Changing the mining algorithm so as to wreak uncertainty on miners worldwide would be a bad-disruptive thing to do, not a good-disruptive one. How disruptive the network specification changes you propose here would be should be demonstrable on a testnet.. Undecided

I'm not slandering anybody. It would be a sign of arrogance for these guys if they take my criticism as an offense.
A lot of people including Greg Maxwell have pointed out that any technical discussion here at BCT has lost significance because of the way it quickly gets personal/ political. Here's what you said in your post:
Spreading FUD about  mysterious huge risks of 'breaking' the network because of improvement and evolution is an important part of the Core's strategy.
In my opinion, accusing someone of spreading FUD as part of their 'strategy' is not technical criticism (which is what you should focus on, and which no doubt will be welcomed). You cannot call them FUDders and dogma-ridden and expect them not to think that you have an ulterior agenda. There have been too many of those out here. ¯\_(ツ)_/¯

Protesting against extreme conservatism and asking for a more open-minded team in charge of bitcoin development is not a crime. Nor should being against the idea of leaving bitcoin unchanged and sticking with 2nd-layer projects be considered an act of slander.
Many people don't view it as extreme conservatism. It's accepting that you don't know what you don't know. There are enough examples of lower blocktimes and bigger block sizes, and there is nothing to suggest that they won't ever face bottlenecks. There is the Github way to do it, and it says it well.
Quote
Testing and code review is the bottleneck for development; we get more pull requests than we can review and test on short notice. Please be patient and help out by testing other people's pull requests, and remember this is a security-critical project where any mistake might cost people lots of money.
With that, I'll bow out of this discussion and leave you to point us to information on why what you say should work. Maybe some of that material you have been promising for a while. It will be good learning for us.
legendary
Activity: 1372
Merit: 1252
@cellard

No block size increase is required or recommended for 100 tps:

Step 1: Decrease block time immediately to 2.5 minutes.  
It gives us roughly 60 tps of throughput, with considerable positive decentralization consequences and no security drawbacks (unlike what Buterin claims with his stupid trilemma).

Step 2: Improve the relay network.  Greater support should be available for advanced communication features like unsolicited push messages. Full nodes connected to mining facilities should have access to parallel dedicated channels to push/pull critical messages with the fewest hops. The possibility of merging protocols like FIBRE should be studied.

Step 3: Decrease the block time even more, down to 90 seconds, and hit the 100 tps mark.
The main challenge would be the centralized situation with pools, which potentially lets them commit selfish-mining-like misbehavior. To mitigate this, non-mining honest full nodes should participate in maintaining a backup channel on which they push critical unsolicited new_block_found messages as soon as they find the block header valid.
I think a clean implementation can keep the propagation delay below 2 seconds and the longest distance below 4 hops. That yields a worst-case progress of around 6 seconds.
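A rough sanity check on these tps targets, sketched in a few lines. The arithmetic is just transactions per block divided by block time; the ~2000 transactions per 1 MB block figure is my own assumption for comparison, not something from this thread:

```python
def txs_per_block_needed(tps_target, blocktime_s):
    """Transactions each block must carry to sustain a given tps."""
    return tps_target * blocktime_s

# Step 1 target: ~60 tps at 2.5-minute (150 s) blocks
step1 = txs_per_block_needed(60, 150)
# Step 3 target: ~100 tps at 90 s blocks
step3 = txs_per_block_needed(100, 90)
# Assuming roughly 2000 tx fit in a 1 MB block today, both targets imply
# blocks carrying about 4.5x the transactions of a full 1 MB block.
print(step1, step3)
```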



As for your eternal concerns about community consensus on change:

This discussion is about how feasible it is to improve bitcoin's throughput without putting the network in danger of centralization or security risks, contrary to what PoW/cryptocurrency opponents, along with off-chain/second-layer proponents, claim.

For a real practical plan, I'd start with PoCW and eliminate pools in the first place.
I'm not personally recommending any such optimizations within the current framework of bitcoin PoW, which is based on the winner-takes-all approach. So, my plan includes eliminating pools from the ecosystem as well as optimizing performance.

So, my agenda is more sophisticated than just a few performance optimizations, and harder to accomplish; but that is a completely off-topic subject.





You must first prove empirically that your ideas will not result in a clusterfuck when put into practice. You need some kind of test model to try out how it would work and to gather the data. At least in BCash, Craig Wright, Rizun and co seem to be spending resources running models with which they can gather data to see what massive blocksizes would look like.

"t0 2.5 min then at t1 90s" sounds a bit reckless.

Richard Heart explains here why touching blocktime may not be a good idea:

https://www.youtube.com/watch?time_continue=2940&v=iFJ2MZ3KciQ

Again, you will need a lot more to convince people to hardfork.
legendary
Activity: 1260
Merit: 1168
Sure, when you scale up the number of connections each node has to have, you can always ensure very low hop counts.
Clearly, this doesn't apply for random graphs (as we have in Bitcoin) so you would need some kind of fancy management overhead that constructs those special graphs in some decentralized way. Also, you would have to check what effects those 100+ node connections actually have: are there any new attack surfaces regarding DOS, net split? Are slow nodes doomed to fail because they suddenly have to process and rebroadcast all that crap from the other 100 nodes.

I am sure it is doable somehow, but whoever has tried to implement a proper, resilient Chord P2P network probably figured out quickly that it's virtually impossible to properly handle all the different side cases. For this type of graph, I bet the task would be even harder. Probably, the overhead of maintaining the graph structure would become the limiting factor at some point.

But I am very interested in hearing your ideas, if you have some cool solution to such issues. I could never figure it out.
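One way to get a feel for the hop-count question is to measure eccentricities on a synthetic overlay with BFS. The sketch below is a toy construction of my own (a ring plus random chords, so connectivity is guaranteed), not Bitcoin's actual topology; it shows hop counts staying small but growing with the node count rather than staying fixed:

```python
import random
from collections import deque

def build_overlay(n, chords_per_node, seed=7):
    """Ring lattice plus random chords: a toy stand-in for an
    unstructured p2p overlay (the ring guarantees connectivity)."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        for _ in range(chords_per_node):
            j = rng.randrange(n)
            if j != i:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def eccentricity(adj, src):
    """Longest shortest path (in hops) from src, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

adj = build_overlay(2000, 8)
sample = random.Random(1).sample(range(2000), 10)
worst = max(eccentricity(adj, s) for s in sample)
print(worst)  # a small hop count for n=2000, average degree ~18
```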
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
Quote
Step 3: Decrease the block time even more, down to 90 seconds, and hit the 100 tps mark.
The main challenge would be the centralized situation with pools, which potentially lets them commit selfish-mining-like misbehavior. To mitigate this, non-mining honest full nodes should participate in maintaining a backup channel on which they push critical unsolicited new_block_found messages as soon as they find the block header valid.
I think a clean implementation can keep the propagation delay below 2 seconds and the longest distance below 4 hops. That yields a worst-case progress of around 6 seconds.

You "think" it will happen that way in theory, but everyone does not have access to the same speed of bandwidth.

That 4 hop idea won't work! Given a degree (maximum connections) d, and a maximum diameter (hop count) k, the construction of a graph (of maximum size) that matches these properties is called the "degree diameter problem" and has been around for ages. Funnily enough, research has shown that such graphs have an upper bound for the number of vertices (nodes) called the "Moore-Bound".

Either you restrict the Bitcoin network to a small number of participants (fewer than the Moore bound), or you accept the fact that the diameter of a scalable peer-to-peer network (that is, one without infinitely long routing tables or connection lists) will always depend on the number of nodes itself, increasing as the network grows.
Actually, the Moore bound is the maximum number of vertices for a given diameter (maximum distance) k and degree d, which is proved to be:

M(d, k) = 1 + d * (1 + (d-1) + (d-1)^2 + ... + (d-1)^(k-1))

For a degree (d) of 100 (100 peers) and a 4-hop diameter (k), the upper bound is roughly 100,000,000 vertices!

Of course, to fit so many nodes into such a network we would need a very restricted topology, which is infeasible for a permissionless network. But the point is that the possibilities are wide enough. For instance, I could imagine high-speed relay-only loops of nodes, with a cumulative number of tens of thousands of peers, that compensate for the topological imperfection.
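The figure above is easy to verify: the Moore bound for maximum degree d and diameter k is 1 + d·(1 + (d−1) + … + (d−1)^(k−1)), a couple of lines in Python:

```python
def moore_bound(d, k):
    """Moore bound: maximum vertices in a graph with maximum degree d
    and diameter k, i.e. 1 + d * sum_{i=0}^{k-1} (d-1)**i."""
    return 1 + d * sum((d - 1) ** i for i in range(k))

print(moore_bound(3, 2))    # 10 (attained by the Petersen graph)
print(moore_bound(100, 4))  # 98,020,001 -- roughly the 100 million quoted
```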

legendary
Activity: 1260
Merit: 1168
Quote
Step 3: Decrease the block time even more, down to 90 seconds, and hit the 100 tps mark.
The main challenge would be the centralized situation with pools, which potentially lets them commit selfish-mining-like misbehavior. To mitigate this, non-mining honest full nodes should participate in maintaining a backup channel on which they push critical unsolicited new_block_found messages as soon as they find the block header valid.
I think a clean implementation can keep the propagation delay below 2 seconds and the longest distance below 4 hops. That yields a worst-case progress of around 6 seconds.

You "think" it will happen that way in theory, but everyone does not have access to the same speed of bandwidth.

That 4 hop idea won't work! Given a degree (maximum connections) d, and a maximum diameter (hop count) k, the construction of a graph (of maximum size) that matches these properties is called the "degree diameter problem" and has been around for ages. Funnily enough, research has shown that such graphs have an upper bound for the number of vertices (nodes) called the "Moore-Bound".

Either you restrict the Bitcoin network to a small number of participants (fewer than the Moore bound), or you accept the fact that the diameter of a scalable peer-to-peer network (that is, one without infinitely long routing tables or connection lists) will always depend on the number of nodes itself, increasing as the network grows.
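And the bandwidth point is easy to make concrete. A back-of-the-envelope model of worst-case propagation is hops × (per-hop transmission + validation); the block size, link speeds, and validation time below are illustrative assumptions, not measurements:

```python
def worst_case_delay(hops: int, block_mb: float,
                     bandwidth_mbps: float, validate_s: float) -> float:
    """Rough worst-case block propagation time in seconds.
    Assumes each hop fully receives and validates before relaying."""
    transmit_s = block_mb * 8 / bandwidth_mbps  # MB -> Mbit, then divide by link speed
    return hops * (transmit_s + validate_s)

# Well-connected node: 1 MB block, 100 Mbit/s links, 0.4 s validation
print(worst_case_delay(4, 1, 100, 0.4))  # roughly 1.9 s, near the claimed 2 s
# Home node on a 2 Mbit/s link: the same 4 hops blow past the 6 s budget
print(worst_case_delay(4, 1, 2, 0.4))    # roughly 17.6 s
```

The model ignores compact-block-style relay tricks, but it shows why the per-hop figure is dominated by the slowest links, not the average ones.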
legendary
Activity: 2898
Merit: 1823
@cellard

No block size increase is required or recommended for 100 tps:

Step 1: Decrease block time immediately to 2.5 minutes.
It gives us roughly 60 tps of throughput, with considerable positive decentralization consequences and without any security drawbacks (unlike what Buterin claims with his stupid trilemma).

Then prove your theories by running a testnet, as amishmanish said.

How many minutes between blocks does Dogecoin have? I believe it is close to your 2.5 minute target. Do you believe it solved Bitcoin's scaling problem?

Quote
Step 2: Improve the relay network. Greater support should be available for advanced communication features like unsolicited push messages. Full nodes connected to mining facilities should have access to parallel dedicated channels to push/pull critical messages with the fewest possible hops. The possibility of merging protocols like FIBRE should be studied.

Ok. I cannot comment, but I hope someone will in connection with "Step 3".

Quote
Step 3: Decrease block time even more, down to 90 seconds, and touch the 100 tps record.
The main challenge would be the centralized situation with pools, which potentially lets them commit selfish-mining-like misbehavior. To mitigate this, non-mining honest full nodes should participate in maintaining a back-up channel on which they push critical unsolicited new_block_found messages as soon as they find the block header valid.
I think a clean implementation can keep propagation delay below 2 seconds and the longest distance below 4 hops. That yields a worst-case progress of around 6 seconds.

You "think" it will happen that way in theory, but not everyone has access to the same bandwidth.

Quote
As for your eternal concerns about community consensus on change:

This discussion is about how feasible it is to improve bitcoin throughput without putting the network at risk of centralization or security failures, contrary to what PoW/cryptocurrency opponents, along with off-chain/second-layer proponents, claim.

For a real practical plan, I'd start with PoCW and eliminate pools in the first place.
I'm not personally recommending any such optimizations within the current framework of bitcoin PoW, which is based on a winner-takes-all approach. So my plan includes eliminating pools from the ecosystem as well as optimizing performance.

So my agenda is more sophisticated than just a few performance optimizations, and harder to accomplish, which makes it a completely off-topic subject here.

Have you made a proposal, or could you tell everyone on the Bitcoin mailing list?
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@cellard

No block size increase is required or recommended for 100 tps:

Step 1: Decrease block time immediately to 2.5 minutes.
It gives us roughly 60 tps of throughput, with considerable positive decentralization consequences and without any security drawbacks (unlike what Buterin claims with his stupid trilemma).

Step 2: Improve the relay network. Greater support should be available for advanced communication features like unsolicited push messages. Full nodes connected to mining facilities should have access to parallel dedicated channels to push/pull critical messages with the fewest possible hops. The possibility of merging protocols like FIBRE should be studied.

Step 3: Decrease block time even more, down to 90 seconds, and touch the 100 tps record.
The main challenge would be the centralized situation with pools, which potentially lets them commit selfish-mining-like misbehavior. To mitigate this, non-mining honest full nodes should participate in maintaining a back-up channel on which they push critical unsolicited new_block_found messages as soon as they find the block header valid.
I think a clean implementation can keep propagation delay below 2 seconds and the longest distance below 4 hops. That yields a worst-case progress of around 6 seconds.
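The throughput numbers in steps 1 and 3 are just inverse scaling of tps with block interval, assuming block size stays fixed. The 15 tps baseline below is the figure implied by the plan's own numbers, not a measurement:

```python
def tps(base_tps: float, base_interval_s: float, new_interval_s: float) -> float:
    """Throughput scales inversely with block interval when block size is fixed."""
    return base_tps * base_interval_s / new_interval_s

# Implied baseline: 15 tps at 10-minute (600 s) blocks
print(tps(15, 600, 150))  # 60.0  -> step 1: 2.5-minute blocks
print(tps(15, 600, 90))   # 100.0 -> step 3: 90-second blocks
```

Note this only holds if orphan rates stay negligible at the shorter intervals, which is exactly what the relay improvements in step 2 are supposed to guarantee.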



As for your eternal concerns about community consensus on change:

This discussion is about how feasible it is to improve bitcoin throughput without putting the network at risk of centralization or security failures, contrary to what PoW/cryptocurrency opponents, along with off-chain/second-layer proponents, claim.

For a real practical plan, I'd start with PoCW and eliminate pools in the first place.
I'm not personally recommending any such optimizations within the current framework of bitcoin PoW, which is based on a winner-takes-all approach. So my plan includes eliminating pools from the ecosystem as well as optimizing performance.

So my agenda is more sophisticated than just a few performance optimizations, and harder to accomplish, which makes it a completely off-topic subject here.

