
Topic: Scaling Bitcoin Above 3 Million TX per block - page 3. (Read 3376 times)

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 05:54:33 PM
#58
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

As far as I'm aware, the LN is off the main chain, so it's irrelevant to actually scaling the main chain.
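For reference, the off-chain mechanism being debated is easy to sketch. A toy model (not the actual LN protocol; all names here are invented for illustration) where a thousand payments settle on the main chain as just two transactions:

Code:
# Toy payment-channel model (hypothetical; not the real LN protocol).
# Many off-chain balance updates settle on-chain as only two
# transactions: one to open the channel and one to close it.

class Channel:
    def __init__(self, alice_funds):
        self.balances = {"alice": alice_funds, "bob": 0}
        self.onchain_txs = 1        # the funding (open) transaction
        self.offchain_updates = 0

    def pay(self, frm, to, amount):
        assert self.balances[frm] >= amount
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.offchain_updates += 1  # consumes no block space

    def close(self):
        self.onchain_txs += 1       # the settlement (close) transaction
        return self.balances

ch = Channel(alice_funds=100_000)
for _ in range(1000):
    ch.pay("alice", "bob", 10)
print(ch.close())                   # {'alice': 90000, 'bob': 10000}
print(ch.onchain_txs, "on-chain txs for", ch.offchain_updates, "payments")

Whether that counts as "scaling" depends on whether you only count main-chain transactions, which is exactly the disagreement here.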

legendary
Activity: 883
Merit: 1005
September 12, 2015, 05:51:26 PM
#57

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte IDs instead of ~250-byte transactions.

This really helped me understand your argument. It would be great if this were implemented, but it still would not address the issues of blockchain storage or the threat of spam.
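The reconstruction step being described can be sketched in a few lines (hypothetical names; not the actual relay-network wire protocol): rebuild the block from its txids, fetching only what the local mempool is missing.

Code:
# Minimal sketch of mempool-based block reconstruction. Hypothetical
# names, not the actual relay-network wire protocol.

def reconstruct_block(header, txids, mempool, request_tx):
    """Rebuild a full block from a header plus a list of txids.

    mempool    -- dict mapping txid -> full transaction bytes
    request_tx -- callback that fetches a missing tx from a peer
    """
    txs = []
    for txid in txids:
        tx = mempool.get(txid)
        if tx is None:
            tx = request_tx(txid)  # only never-seen txs cross the wire again
        txs.append(tx)
    return header, txs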
legendary
Activity: 1386
Merit: 1009
September 12, 2015, 05:29:56 PM
#56
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it does increase transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 05:06:26 PM
#55
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy
That's quite a straw man here, I didn't say that, please don't overgeneralize.  Huh

Let's not split hairs. You said 1 GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.
That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them.

Whaaat? Bitcoin is engineered to generate a block once every ~10 minutes. That is set in stone. So of course more transactions mean larger blocks - unless you are shrinking the transaction size. Huh What you said makes no logical sense.

Quote
I have said 1 GB blocks require a redefinition of bitcoin. I haven't said you have to redefine what bitcoin is to increase transaction throughput. But you have weakened/replaced my argument to make it easier to refute -- straw man.

The tx throughput can vary, but the rate of block creation is fixed. We can have as many transactions as users generate, but we still have the same number of blocks.

edit: Maybe we are getting hung up on the 1 GB thing. The same holds true for 2 MB, 4 MB ... 32 MB blocks. Above 32 MB, you need to change how bitcoin sends messages, but that's academic to this discussion.
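The arithmetic behind "more txs / constant time = higher tx rate" is easy to check. Taking the ~250-byte average transaction cited earlier in the thread and a ~600-second block interval:

Code:
# Back-of-the-envelope throughput at a fixed ~10-minute block interval,
# assuming ~250-byte transactions (figure used earlier in the thread).

AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

for mb in (1, 2, 4, 8, 32):
    txs_per_block = mb * 1_000_000 // AVG_TX_BYTES
    print(f"{mb:>2} MB blocks: {txs_per_block:>7,} txs/block, "
          f"{txs_per_block / BLOCK_INTERVAL_S:>6.1f} tx/s")

# 3 million txs per block, as in the thread title, would need roughly
# 750 MB blocks at this average transaction size.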

legendary
Activity: 1386
Merit: 1009
September 12, 2015, 04:46:41 PM
#54
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy
That's quite a straw man here, I didn't say that, please don't overgeneralize.  Huh

Let's not split hairs. You said 1 GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them.

I have said 1 GB blocks require a redefinition of bitcoin. I haven't said you have to redefine what bitcoin is to increase transaction throughput. But you have weakened/replaced my argument to make it easier to refute -- straw man.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 04:45:47 PM
#53

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte IDs instead of ~250-byte transactions.

OK, assuming the miner only sends the relay network a condensed version of the block with pointers, the relay network still has to broadcast the full block to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes (but not over the relay network - blocks get propagated over the vanilla p2p network as far as I know). But I can imagine a case where it could be extended to a wider network.

Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point. Although, as I pointed out to Adam, blocks would need to be much bigger (60 MB) before this is a problem with current Internet speeds.
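A rough model of that claim (illustrative figures only, not measured data), using the common approximation that a block is at risk of being orphaned while it is still in flight:

Code:
# Approximate stale/orphan risk from propagation delay. Naive
# single-hop model with assumed bandwidth; illustrative only.

import math

BLOCK_INTERVAL_S = 600

def stale_rate(block_mb, bandwidth_mbps):
    """P(orphan) ~ 1 - exp(-t/T): the chance a competing block is
    found during our block's propagation delay t."""
    delay_s = block_mb * 8 / bandwidth_mbps
    return 1 - math.exp(-delay_s / BLOCK_INTERVAL_S)

for mb in (1, 8, 60):
    print(f"{mb:>2} MB at 50 Mbit/s: ~{100 * stale_rate(mb, 50):.2f}% stale risk")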
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 04:40:38 PM
#52

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte IDs instead of ~250-byte transactions.

OK, assuming the miner only sends the relay network a condensed version of the block with pointers, the relay network still has to broadcast the full block to other nodes, correct?

Correct - while the relay backbone remains a separate network, then yes (but not over the relay network - blocks get propagated over the vanilla p2p network as far as I know). But I can imagine a case where it could be extended to a wider network.

hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 04:35:07 PM
#51
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy
That's quite a straw man here, I didn't say that, please don't overgeneralize.  Huh

Let's not split hairs. You said 1 GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

I think we are conflating two different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.
Once again, this is all based on a weak assumption that miners are cooperative -- in the worst-case scenario we fall back on the regular propagation protocol. While Matt's RN doesn't have any major downsides per se, it effectively downplays the issue at hand -- that in the worst case the information to be transmitted scales linearly with block size. While it appears we can easily increase block sizes thanks to Matt's RN, things get worse in the case of uncooperative behavior.

But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network. But Matt's relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument.
I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.

As it is, it's just a step in the right direction, but I'm also saying that it is an idea that can be developed and deployed across the network in general. But, yeah, I don't think it's a magic bullet; it is certainly an indicator, though, that positive thought exists in Bitcoin and that solutions to its inherent problems can be found.
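For concreteness, the "all they need is the merkle root" step works roughly like this (simplified sketch; Bitcoin's byte-order conventions are glossed over):

Code:
# Simplified merkle root check. Endianness and serialization details
# of real Bitcoin blocks are glossed over; this is just the idea.

import hashlib

def dhash(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Pair-and-hash up the tree, duplicating the last entry on odd
    levels as Bitcoin does."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A node that assembled the block's txs from its own mempool can verify
# them against the root committed in the header it received:
# assert merkle_root(assembled_txids) == header_merkle_root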
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 04:26:38 PM
#50

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte IDs instead of ~250-byte transactions.

OK, assuming the miner only sends the relay network a condensed version of the block with pointers, the relay network still has to broadcast the full block to other nodes, correct?


hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 04:22:47 PM
#49

I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 

A block is a (header + n(tx)). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent.

This is one of the long-standing conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte IDs instead of ~250-byte transactions.
legendary
Activity: 1386
Merit: 1009
September 12, 2015, 03:23:50 PM
#48
I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy
That's quite a straw man here, I didn't say that, please don't overgeneralize.  Huh

I think we are conflating two different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.
Once again, this is all based on a weak assumption that miners are cooperative -- in the worst-case scenario we fall back on the regular propagation protocol. While Matt's RN doesn't have any major downsides per se, it effectively downplays the issue at hand -- that in the worst case the information to be transmitted scales linearly with block size. While it appears we can easily increase block sizes thanks to Matt's RN, things get worse in the case of uncooperative behavior.

But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network. But Matt's relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument.
I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 02:55:34 PM
#47

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telco industry, so there is little you can teach me about HF data propagation that I don't know.

Explain what? That the network can't handle 1 GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) participating miners are cooperative, i.e. they only/mostly include txs that other miners have in their mempools;
2) participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.

I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy

I think we are conflating two different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.

But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network. But Matt's relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument.

tl;dr Matt's RN could have benefits both for miners' orphan concerns and for tx throughput (more txs per block).


I'm having trouble following this.

"But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network.  But Matts relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
 
legendary
Activity: 1358
Merit: 1014
September 12, 2015, 02:46:27 PM
#46
I think compression of blocks was already addressed by gmaxwell in here, but I can't find the actual facts. In any case, if this hasn't been considered the end-all-be-all solution against the blocksize problem, I'm sure there are drawbacks, so I'm pretty sure we will end up needing bigger blocks and Blockstream-type tech anyway.
sr. member
Activity: 294
Merit: 250
September 12, 2015, 02:16:01 PM
#45
Don't you think that it is a little too much btc for that amount?
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
September 12, 2015, 02:13:51 PM
#44

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telco industry, so there is little you can teach me about HF data propagation that I don't know.

Explain what? That the network can't handle 1 GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) participating miners are cooperative, i.e. they only/mostly include txs that other miners have in their mempools;
2) participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.

I don't see why you have to redefine what bitcoin is to increase transaction throughput.  Cheesy

I think we are conflating two different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So we [Matt's relay network] do that by relaying only the information they need. They already have the txs in the mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for. It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.

But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network. But Matt's relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument.

tl;dr Matt's RN could have benefits both for miners' orphan concerns and for tx throughput (more txs per block).
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
September 12, 2015, 12:54:58 PM
#43
Anything you compress has to be uncompressed on each node to be confirmed before it can be propagated out to the next node.
This would slow propagation even at the current block size.

The whole point of Corallo's approach has nothing to do with compression - it is to do with nodes already being aware of txs, so blocks can just use txids rather than the actual tx content.

It is simply saving bandwidth by not resending information that was already communicated.

The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I am pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).


Agree with both these points.
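The saving is easy to quantify. Using the thread's own ~250-byte transaction figure, and 32-byte full txids (the ~10-byte short ids cited earlier would save even more):

Code:
# Rough bandwidth saving from relaying txids instead of full txs.
# Figures assumed for illustration: ~250-byte txs, 32-byte txids.

TX_BYTES, TXID_BYTES = 250, 32
n_txs = 4_000  # roughly one full 1 MB block

full_mb = n_txs * TX_BYTES / 1e6
ids_mb = n_txs * TXID_BYTES / 1e6
print(f"full block relay: {full_mb:.2f} MB")
print(f"txid-only relay:  {ids_mb:.2f} MB "
      f"({100 * (1 - ids_mb / full_mb):.0f}% less)")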
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
September 12, 2015, 12:34:46 PM
#42
Anything you compress has to be uncompressed on each node to be confirmed before it can be propagated out to the next node.
This would slow propagation even at the current block size.

The whole point of Corallo's approach has nothing to do with compression - it is to do with nodes already being aware of txs, so blocks can just use txids rather than the actual tx content.

It is simply saving bandwidth by not resending information that was already communicated.

The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I am pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).
legendary
Activity: 883
Merit: 1005
September 12, 2015, 12:31:21 PM
#41
Anything you compress has to be uncompressed on each node to be confirmed before it can be propagated out to the next node.
This would slow propagation even at the current block size.

However, if a node were a few weeks/months/years behind, then it might benefit from compressed 'blocks-of-blocks'. This would require a lot of programming to set up and test.


Edit: I think adamstgBit should stop creating shitty threads on this topic; it's not helping anyone.
legendary
Activity: 1386
Merit: 1009
September 12, 2015, 12:17:40 PM
#40
Seeing your posts, Adam, I'm really excited by your apparent lack of understanding of the technical side of Bitcoin. I figured maybe you could consider refraining from posting nonsense and educating yourself first?
Because I see no point arguing with you when you can't grasp some relatively simple technical ideas.

Quit with the lame "It's beneath me to explain" angle. If you have an issue, state it, and support your contention with a relevant tech reference.
That's what I usually do, when there's hope for reasonable discussion.

Quote
Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telco industry, so there is little you can teach me about HF data propagation that I don't know.
Explain what? That the network can't handle 1 GB blocks without completely redefining what Bitcoin is?

What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data to be transmitted with a block. After all, full nodes will still have to download the full transaction data.
Moreover, it depends on two unreliable assumptions:
1) participating miners are cooperative, i.e. they only/mostly include txs that other miners have in their mempools;
2) participants' mempools are highly synchronized.

The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.
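That last point can be made concrete with a toy simulation (invented numbers): two nodes with different fee floors end up with visibly different mempools, so txid-only relay cannot assume every peer already holds every tx.

Code:
# Toy simulation (invented numbers) of mempool divergence caused by
# different fee acceptance policies, as seen during the stress test.

import random

random.seed(1)
# 1000 txs with random fee rates between 1 and 50 sat/byte
txs = [(f"tx{i}", random.uniform(1, 50)) for i in range(1000)]

def mempool(min_feerate):
    """Set of txids a node keeps under a given minimum fee-rate policy."""
    return {txid for txid, feerate in txs if feerate >= min_feerate}

node_a = mempool(1)   # default policy: accepts nearly everything
node_b = mempool(20)  # raised its floor to stay stable under spam
print(f"node A: {len(node_a)} txs, node B: {len(node_b)} txs, "
      f"A-only: {len(node_a - node_b)} txs node B would have to fetch")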
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
September 12, 2015, 10:52:29 AM
#39
Why is it that no one with any technical credibility backs @adamstgBit's claims (and in response, please show people who have quoted you as their source, rather than yourself misquoting them)?

Apparently he is smarter than everyone in the world, I guess - so why doesn't he just fork Bitcoin (perhaps BitcoinAB) and see how that goes?

Prior to this whole block size thing, I thought this guy was reasonable, but now that he creates a new thread every day full of bullshit claims, I can only wonder whether in fact he sold his account and whoever is posting this stuff is actually some newbie (and that wouldn't surprise me one bit).