
Topic: Scaling Bitcoin Above 3 Million TX per block - page 4. (Read 3376 times)

hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 10:46:55 AM
#38
Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth connectivity but with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain in detail; this is a bit beyond me technically

You mean TCP retransmission rates?  That's a function of network congestion (assuming we can ignore radio interference, etc.), which is kinda related to your 'connectivity'.

And round we go again. TCP doesn't lose packets; the network drops them when it cannot forward them as quickly as it receives them. This has everything to do with the quality of your connection, not the protocol.
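For what it's worth, the loss-versus-throughput relationship being argued about here can be sketched with the classic Mathis et al. approximation, throughput ≈ MSS / (RTT · √p). The MSS, RTT, and loss figures below are illustrative assumptions, not measurements from the Bitcoin network:

```python
import math

# Mathis et al. steady-state TCP throughput bound:
#   throughput <= (MSS / RTT) * (1 / sqrt(p))
# where p is the packet loss rate. All numbers here are assumed for
# illustration, not measured.

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound on TCP throughput, in bytes per second."""
    return (mss_bytes / rtt_s) / math.sqrt(loss_rate)

def transfer_time_s(size_bytes: float, mss_bytes: float, rtt_s: float,
                    loss_rate: float) -> float:
    """Time to push size_bytes through a loss-limited TCP connection."""
    return size_bytes / tcp_throughput_bps(mss_bytes, rtt_s, loss_rate)

# A 1 MB block over a 100 ms RTT path with 1460-byte segments:
for p in (0.0001, 0.001, 0.01):
    t = transfer_time_s(1_000_000, 1460, 0.1, p)
    print(f"loss={p:.4f}  transfer ~= {t:.1f}s")
```

Since transfer time grows with √p, even a tenfold loss increase only slows the transfer about 3x, but on a high-RTT path that can still matter more than raw bandwidth.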
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 10:25:01 AM
#37
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle.  If you have an issue, state it, and support your contention with a relevant tech reference.

Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telecom industry, so there is little you can teach me about HF data propagation that I don't know.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
September 12, 2015, 08:47:43 AM
#36
Matt Corallo just pointed out at the conference that propagation has little to do with bandwidth connectivity but with general TCP packet loss, which can occur regardless of your connectivity.

I know there are some posts on the dev list referencing this. Don't ask me to explain in detail; this is a bit beyond me technically
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 08:37:33 AM
#35
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.

Point out to me where the 250x increase is, please.
legendary
Activity: 1386
Merit: 1009
September 12, 2015, 07:43:21 AM
#34
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you when you can't grasp some relatively simple technical ideas.
hero member
Activity: 798
Merit: 1000
Move On !!!!!!
September 12, 2015, 07:25:07 AM
#33
we have the tech more or less laid out and working
we just need to optimize it a little and make it part of the standard protocol
we can scale bitcoin
we can scale bitcoin to 4K TPS running on a silly home computer

excited?

you should be.


I am very excited! :)

All we need now is for the devs to start making changes and implementing new things. I'll be even more excited when we get to that point!
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
September 12, 2015, 07:16:30 AM
#32
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.

Their public nodes are listed as:

public.us-west.relay.mattcorallo.com
public.us-east.relay.mattcorallo.com
public.eu.relay.mattcorallo.com
public.{jpy,hk}.relay.mattcorallo.com
public.bjs.relay.mattcorallo.com
public.{sgp,au}.relay.mattcorallo.com

All registered under mattcorallo.com. If you rely on their service, then when this company is down, bitcoin is over.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 08:58:58 PM
#31
in other news

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 08:57:27 PM
#30
we have the tech more or less laid out and working
we just need to optimize it a little and make it part of the standard protocol
we can scale bitcoin
we can scale bitcoin to 4K TPS running on a silly home computer

excited?

you should be.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 08:53:18 PM
#29
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away


http://sourceforge.net/p/bitcoin/mailman/message/32676543/

Quote
Essentially instead of relaying entire
blocks, nodes keep a rolling window of recently-seen transactions and
skip those when relaying blocks.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
September 11, 2015, 08:25:24 PM
#28
The Corallo Relay Network is just a group of privately controlled servers on the internet backbone; it cannot be applied to all of the nodes around the world. If everyone is using the Corallo Relay Network to relay 1GB blocks and it is shut down by the government, bitcoin is dead right away
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 08:14:00 PM
#27
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



Right, this is a BIG PROBLEM, I understand.
The solution is to use the same method the "Corallo Relay Network" uses:
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

broadcasting a block is no longer an issue


250 times faster? Huh?  Where did you get that from?

My understanding is that this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.

No joke man, 250x faster.
It's not "just a bunch of dedicated nodes on a good connection".
It only sends out pointers to the TXs the new block includes. All miners have pretty much the same mempool, so they can use the pointers to rebuild the block, and they can check the merkle root to make sure they made the exact same block. If they are missing any TX they can ask a peer.

Miners are already using this, but it's not standard and it isn't P2P;
this method needs to be implemented at the P2P level.
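For readers who want to see the idea concretely, here is a toy sketch of pointer-based relay. The short-ID scheme, sizes, and function names are invented for illustration; the actual relay network wire protocol (and the later compact-block work) differs in detail:

```python
import hashlib

def h(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Merkle root over a list of txids (duplicating the last hash on odd layers)."""
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def short_id(txid: bytes) -> bytes:
    """A 6-byte pointer sent instead of the full transaction (toy scheme)."""
    return txid[:6]

def reconstruct(short_ids, mempool, expected_root):
    """Rebuild a block from short ids against the local mempool."""
    index = {short_id(txid): txid for txid in mempool}
    missing = [s for s in short_ids if s not in index]
    if missing:
        return None, missing  # would ask a peer for the missing txs
    txids = [index[s] for s in short_ids]
    # Merkle-root check proves we rebuilt the exact same block.
    assert merkle_root(txids) == expected_root
    return txids, []

# A 250-tx block relayed as 6-byte pointers: ~1500 bytes on the wire
# instead of ~100 KB of full transactions.
mempool = [h(str(i).encode()) for i in range(250)]
root = merkle_root(mempool)
txs, missing = reconstruct([short_id(t) for t in mempool], mempool, root)
print(len(txs), len(missing))  # 250 0
```

The bandwidth win comes from the mempool overlap: the relayer only ships pointers plus whatever transactions the receiver provably lacks.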
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
September 11, 2015, 08:11:16 PM
#26
For any suggestion, first consider a worst-case scenario; if that works, then you can make your point. A 1GB block in a worst-case scenario will definitely separate the network into many forks; each fork will just grow on its own USA/Europe/China chain and never receive blocks fast enough from the other chains, so bitcoin is totally broken and no transaction can be confirmed globally.
hero member
Activity: 700
Merit: 500
September 11, 2015, 08:10:59 PM
#25
Bitcoin started steadily being used in 2010; five years after that, under normal use conditions, it's still not suffering from the 1MB block limit. I've seen people arguing that merely increasing the block size limit will ramp up adoption, but I think it's much more complicated than that. After so many concentrated efforts, bitcoin's use cases don't seem to be growing at rampant rates. Not that there's a way to be 100% certain about that, but at least the data in the blockchain makes that evident.

There are many arguments against increasing the block size cap eightfold. For example, with a higher cap the fee market would likely change. Getting into the blockchain would be worth less, while right now putting data into the blockchain costs a somewhat significant sum. With the destruction of the current fee market for the upcoming years and blocks 8x bigger, you'd expect that we'd get something back, like 'stress tests' being harder and more expensive to carry out. But that's also not the case: bigger blocks would just be easier (and cheaper) to fill with trash transactions. In fact, the calculations we've seen about tx/s going up with the size only take into account transactions of a certain size. Limiting the mempool was suggested to counter this, but that wouldn't really work out well if the goal of increasing the block size cap was to make bitcoin handle more tx/s.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 11, 2015, 08:03:12 PM
#24
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



Right, this is a BIG PROBLEM, I understand.
The solution is to use the same method the "Corallo Relay Network" uses:
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

broadcasting a block is no longer an issue


250 times faster? Huh?  Where did you get that from?

My understanding is that this is just a bunch of dedicated nodes
on a good connection; it's not going to magically warp-speed the whole network.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 07:59:13 PM
#23
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.



Right, this is a BIG PROBLEM, I understand.
The solution is to use the same method the "Corallo Relay Network" uses:
broadcasting a block with this method is 250 times faster!
And more optimizations can be done.

With this method a 1GB block can be broadcast using only ~4MB; broadcasting a block is no longer an issue.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 07:56:49 PM
#22
The block limit should reflect the sort of requirements we expect users to be able to handle in order to run nodes.
We don't want that limit to be so high as to exclude ordinary users,
and we don't want it so low that it starts to limit TPS and slow confirmation times.
Luckily for us, we aren't in 1995 anymore, and typical home users can download at ~1MB per second.
Of course we want some comfort zone, so we should be talking about a 100-300MB block limit,
which gives bitcoin PLENTY of space to grow.
Maybe by the time we hit that limit again, internet speeds will have grown 100x.
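The arithmetic behind this comfort-zone argument can be checked in a few lines; the ~1 MB/s home-connection figure is the poster's assumption, not a measured value:

```python
# Can a home connection keep up with a given block limit within one
# ~600-second block interval? All figures are the thread's assumptions.
def download_time_s(block_mb: float, speed_mb_per_s: float = 1.0) -> float:
    """Seconds needed to download one block at the given link speed."""
    return block_mb / speed_mb_per_s

BLOCK_INTERVAL_S = 600.0
for limit_mb in (100, 300):
    t = download_time_s(limit_mb)
    print(f"{limit_mb} MB block: {t:.0f}s of a {BLOCK_INTERVAL_S:.0f}s "
          f"interval ({t / BLOCK_INTERVAL_S:.0%} of link capacity)")
```

At 1 MB/s a 100 MB block uses about a sixth of the interval and a 300 MB block half of it, which is where the "comfort zone" framing comes from.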
 

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 11, 2015, 07:51:53 PM
#21
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.




Blockchain reorgs.

The reason why proof-of-work and the longest chain rule are effective in maintaining distributed consensus,
is that the time between blocks is much bigger than the time it takes to broadcast a block to the network.

If two miners find a block at about the same time and start broadcasting them to about half the network
each, you have a fork.  One half is on one chain, and one half is on another.  It gets resolved by the
longest chain rule when the next block is found.  Since the winning block takes only a few seconds
to reach the network, the fork is quickly resolved.   However, if the blocks take a long time to broadcast,
then more competing blocks are going to be solved and broadcast.  I hope you get my point, but it
is clear to me that a huge difference in broadcast time vs block time is necessary for the longest chain
rule to maintain order, minimize reorgs and prevent network splits.  The longer the time, the more reorgs and the more
problems you will get.  Would a full minute be ok for broadcast times?  Probably, although reorgs will
increase.  Several minutes?  Now it's starting to be a real issue.  10 minutes? No way.
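The argument above can be put in rough numbers. If block discovery is a Poisson process with mean interval T, a competing block is found during a propagation window of t seconds with probability about 1 - e^(-t/T), which matches the "a minute is probably fine, ten minutes is not" intuition:

```python
import math

def stale_rate(propagation_s: float, interval_s: float = 600.0) -> float:
    """Rough probability a competing block is found while one propagates,
    assuming Poisson block arrivals with the given mean interval."""
    return 1.0 - math.exp(-propagation_s / interval_s)

for t in (5, 60, 300, 600):
    print(f"{t:>3}s propagation -> ~{stale_rate(t):.1%} stale blocks")
```

Roughly: a few seconds of propagation gives well under 1% stale blocks, a full minute close to 10%, and ten minutes over 60%, at which point the longest-chain rule can no longer keep the network converged.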





legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 11, 2015, 07:44:36 PM
#20
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?

reorgs?
with new block propagation being trivial thanks to "Corallo Relay Network" we only need nodes to be able to keep up with the TPS
so if every user on the network can comfortably download 100MB in 10mins, there shouldn't be any problem using 100MB as the block limit
actually there WOULD be a problem if blocks were bigger than this limit, which is what the limit should be about in the first place.

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 11, 2015, 07:40:34 PM
#19
Adam, the download time must be MUCH lower than the block interval.
Otherwise, the reorgs will be too high.

Do you get that?