
Topic: Gavin Andresen Proposes Bitcoin Hard Fork to Address Network Scalability

hero member
Activity: 924
Merit: 1001
I am glad to see that the viewpoint of the dev team has changed from "We'll deal with that later" to "We need to do this now, so there is a later".

The industry is evaluating bitcoin for its capabilities and restrictions now, for things they will build "later".

Hats off to Gavin for taking the reins. Wish it wasn't such a one-man show in this regard.

Any chance someone can begin to manage the development priorities and enhancements, and create a timeline for execution, so there is a "5-year plan" set in stone?

-B-
legendary
Activity: 2184
Merit: 1024
What's a fork? And what's the difference between forks when it's stiff or flaccid?

Not everyone's a bitcoin MSc Engineer FFS.
legendary
Activity: 1354
Merit: 1020
I was diagnosed with brain parasite
I think it is a wise choice.
Fork it!
legendary
Activity: 1512
Merit: 1036

For each additional second it takes for a block to propagate, there is only a ~1/600 chance (maybe more if the difficulty is increasing) that the block will be orphaned because of the extra propagation time. If the additional TX fees make it worth it for miners to take this risk, then they will include the additional TXs and accept the risk of their found block being orphaned.

It's about a 1 in 600.5001389 chance that a block will be found within a second of another. However, a block found faster than the network latency does not always result in an orphan: it is not an orphan if the same miner also finds the following block.
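That figure is easy to check (a quick sanity check, assuming block arrivals are exponentially distributed with a 600-second mean):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Probability that the next block arrives within one second, assuming
    // exponentially distributed inter-block times with a 600-second mean.
    double p = 1.0 - std::exp(-1.0 / 600.0);
    std::printf("P = %.10f, i.e. 1 in %.7f\n", p, 1.0 / p);  // 1 in ~600.5001389
    return 0;
}
```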

That is the problem with orphans: they reduce the strength of decentralized mining against attack by favoring larger miners and discarding proofs-of-work that would otherwise strengthen the difficulty. An extreme demonstration of this was just seen on testnet after a reset to difficulty 1 while hashrate suited to difficulty 100k was still present: even holding 1% of the network hashrate it was impossible to get your block find published, because the largest miner was finding blocks at nearly one per second and building upon their own chain, even though they had neither a majority of the hashrate nor were running an "attack" client.

An attacker can improve the chance of a malicious block being accepted by not including irrelevant transactions. This is in addition to a 51% attack actually becoming a 49.83% attack with a one-second delay between legitimate miners. There is already a multi-second delay in pooled mining between the pool software learning of a new block from the bitcoin client, pushing the new work out to miners, and the miner software flushing the old work and getting the new block hashing on hardware.
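For what it's worth, that 49.83% can be reproduced by subtracting the per-second orphan probability from 50% (a rough approximation, not a full game-theoretic model):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Share of honest blocks put at risk by one second of propagation delay.
    double pOrphan = 1.0 - std::exp(-1.0 / 600.0);   // ~0.1665%
    // Rough effective threshold for an attacker whose private chain
    // suffers no such delay.
    double threshold = 0.5 - pOrphan;
    std::printf("effective attack threshold ~ %.2f%%\n", threshold * 100.0);  // ~49.83%
    return 0;
}
```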
full member
Activity: 151
Merit: 100
Miners are leaving transactions out of blocks today because the blocks propagate faster (i.e., have better odds of being confirmed) if they are <250K. The expected loss of the block reward exceeds the transaction fees available, so the miner leaves the transaction unconfirmed.

Raising the size of the largest allowed blocks will not change that. 

Do we have an implementation yet of broadcasting only the block header, and then letting the nodes assemble the blocks out of transactions they've already received over the network? That would reduce miners' disincentives for including transactions, so wouldn't that be the more immediate means of increasing the number of transactions per block?

I agree that the block size needs to be increased; I'm just saying that increasing the allowed block size won't help if miners still have a financial incentive to limit the actual block size to 250K.
As the block subsidy is reduced this will become less of an issue, as a higher percentage of the total block reward will come from TX fees.

For each additional second it takes for a block to propagate, there is only a ~1/600 chance (maybe more if the difficulty is increasing) that the block will be orphaned because of the extra propagation time. If the additional TX fees make it worth it for miners to take this risk, then they will include the additional TXs and accept the risk of their found block being orphaned.
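To put rough numbers on that trade-off (assuming the 25 BTC subsidy of the time and the ~1/600-per-second orphan probability discussed above):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double subsidyBtc = 25.0;                   // block subsidy at the time
    double pOrphan = 1.0 - std::exp(-1.0 / 600.0);    // orphan risk per extra second
    double breakEvenFees = subsidyBtc * pOrphan;      // expected BTC lost per second
    std::printf("extra TXs must carry > %.4f BTC in fees per extra second of delay\n",
                breakEvenFees);                       // ~0.0416 BTC
    return 0;
}
```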
newbie
Activity: 25
Merit: 0
anyone got a link to info on the 7 trx per sec limit?

Current block size limit is 1 MB. An average transaction is 200-something bytes. One block every ten minutes, i.e. every 600 seconds.

You do the math. ;)
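Spelling that math out (a back-of-the-envelope check, assuming ~250-byte transactions):

```cpp
#include <cstdio>

int main() {
    const double blockBytes  = 1000000.0;  // 1 MB block size limit
    const double txBytes     = 250.0;      // "200-something" bytes per TX
    const double intervalSec = 600.0;      // one block every ten minutes

    double txPerSec = blockBytes / txBytes / intervalSec;  // ~6.7, rounded to "7"
    std::printf("~%.1f TX/s, ~%.0f TX/day, ~%.0f million TX/year\n",
                txPerSec, txPerSec * 86400.0,
                txPerSec * 86400.0 * 365.0 / 1e6);  // ~576k/day, ~210 million/year
    return 0;
}
```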

Is this a result of the number of nodes, then? Or is it the difficulty level the nodes have imposed upon them?

The limit is also written into the protocol. What you are implying is that the block size limit would prevent the average number of TX per second from being more than 7. Under this theory someone could broadcast 20 TX in one second and, as long as few enough other TXs get transmitted, they would all get included in the block; however, this is not how it works. It is my understanding that the nodes will reject TXs if more than 7 are transmitted in a second.

What script in the core has a hard limit written into it?

I think the transactions are limited more by difficulty than by any hard-coded core limit?

The first response was simple math: blocks are currently mined in X time and contain Y transactions.

So 'difficulty' in creating a new block is the limiting factor, is it not?

If there is a hard limit in the core somewhere, can anyone point out which file in the core on GitHub shows the hard limit?
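There is one, if I remember right: a constant in the source, historically defined in src/main.h (the exact file has moved around between versions, so treat the location as approximate):

```cpp
/** The maximum allowed size for a serialized block, in bytes */
static const unsigned int MAX_BLOCK_SIZE = 1000000;
```

CheckBlock() rejects any block whose serialized size exceeds this constant. Difficulty only controls how often blocks are found, not how many transactions fit in one.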

My understanding right now is that the so-called difficulty in mining new blocks is what limits the speed?

If 7 tranx a second is the real limit, then why not ease difficulty and let btc hit its full 21M limit, or whatever limit in btc there is? Less difficulty in mining should up tranx per second.

At 7 a second it's a 600k daily limit which is way too small.

Global debit and branded CC transactions in 2012 were 40 billion debit card and 18 billion credit card transactions, so roughly 60 billion global transactions yearly.

BTC at 7 trans a second can do about 220 million a year.

So to be equal to credit cards it would have to increase transaction speed 100-fold. That's why some geeks have said the core needs to be redone in assembly and put on a network of major supercomputers to get to the level of the global CC system; thousands of low-end computers can't compete with supercomputers and a sleek assembly-level system.

So maybe a trusted core network of 10 supercomputers, and the old system gets canned.

Right now btc has no need to try to compete with a network that can do 20 billion live transactions a year like Visa/MC.

But if btc is to keep growing, it has to do way more than 200M trans a year.

Debit and CC info from

http://www.creditcards.com/credit-card-news/credit-card-industry-facts-personal-debt-statistics-1276.php

full member
Activity: 183
Merit: 100
Needs to be done. Bitcoin wasn't just built perfectly the first time; it needs adjustments, and this is one of them.
The block size was appropriate for its early days when there were few transactions, and balanced the resources needed to run a node against maintaining the ability to have a lot of TX per block.

As bitcoin has evolved into more of a payment method, more transactions will occur every second, which requires more space in each block.
legendary
Activity: 1232
Merit: 1001
mining is so 2012-2013
Needs to be done. Bitcoin wasn't just built perfectly the first time; it needs adjustments, and this is one of them.
hero member
Activity: 568
Merit: 500
Smoke weed everyday!
anyone got a link to info on the 7 trx per sec limit?

Current block size limit is 1 MB. An average transaction is 200-something bytes. One block every ten minutes, i.e. every 600 seconds.

You do the math. ;)
The limit is also written into the protocol. What you are implying is that the block size limit would prevent the average number of TX per second from being more than 7. Under this theory someone could broadcast 20 TX in one second and, as long as few enough other TXs get transmitted, they would all get included in the block; however, this is not how it works. It is my understanding that the nodes will reject TXs if more than 7 are transmitted in a second.
legendary
Activity: 924
Merit: 1132
Miners are leaving transactions out of blocks today because the blocks propagate faster (i.e., have better odds of being confirmed) if they are <250K. The expected loss of the block reward exceeds the transaction fees available, so the miner leaves the transaction unconfirmed.

Raising the size of the largest allowed blocks will not change that. 

Do we have an implementation yet of broadcasting only the block header, and then letting the nodes assemble the blocks out of transactions they've already received over the network? That would reduce miners' disincentives for including transactions, so wouldn't that be the more immediate means of increasing the number of transactions per block?

I agree that the block size needs to be increased; I'm just saying that increasing the allowed block size won't help if miners still have a financial incentive to limit the actual block size to 250K.
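Not that I know of. As a sketch of the idea being described (purely hypothetical; every name and message here is invented for illustration, not an existing implementation):

```cpp
#include <cstring>
#include <map>
#include <vector>

// Hypothetical sketch: announce a block as (header + txid list) and let peers
// rebuild it from their own mempool, fetching only the transactions they lack.
struct Txid {
    unsigned char bytes[32];
    bool operator<(const Txid& o) const { return std::memcmp(bytes, o.bytes, 32) < 0; }
};
struct Tx {};      // stand-in for a full transaction
struct Header {};  // stand-in for the 80-byte block header

struct CompactAnnouncement {
    Header header;
    std::vector<Txid> txids;  // instead of the full transactions
};

// Rebuild the block body from the mempool; return the txids we still
// need to request from the announcing peer.
std::vector<Txid> Reconstruct(const CompactAnnouncement& ann,
                              const std::map<Txid, Tx>& mempool,
                              std::vector<Tx>& blockOut) {
    std::vector<Txid> missing;
    for (const Txid& id : ann.txids) {
        auto it = mempool.find(id);
        if (it != mempool.end())
            blockOut.push_back(it->second);  // already have it: nothing to download
        else
            missing.push_back(id);           // request only what we lack
    }
    return missing;
}
```

That way a larger block costs little extra bandwidth at relay time, since most of its transactions have already crossed the network once.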

legendary
Activity: 2674
Merit: 3000
Terminated.
Well, if this were to be done, then the developers should look at the hardfork wishlist and implement more stuff from that.
The fewer forks we have, the better.
Although I do agree that this needs to be done.
legendary
Activity: 1974
Merit: 1030
anyone got a link to info on the 7 trx per sec limit?

Current block size limit is 1 MB. An average transaction is 200-something bytes. One block every ten minutes, i.e. every 600 seconds.

You do the math. ;)
hero member
Activity: 490
Merit: 500
The blockchain is so bloated now; is there some way of reducing its size?

There's work being done on this. If you read Gavin's Bitcoin Foundation post where he talks about this proposed fork, he mentions it. Sorry, no link.
newbie
Activity: 25
Merit: 0
anyone got a link to info on the 7 trx per sec limit?

I thought the network limit was more reliant on the number of nodes than on a limit in the script?
hero member
Activity: 756
Merit: 500
The blockchain is so bloated now; is there some way of reducing its size?
hero member
Activity: 490
Merit: 500
OK, but why was the block size limited ... at the beginning?

Very good question! I don't have the answer. Maybe for the same reason that the IPv4 address space became too small?
legendary
Activity: 1512
Merit: 1012
OK, but why was the block size limited ... at the beginning?
hero member
Activity: 490
Merit: 500
Why do you want to increase the block size when the 7-transactions-per-second limit is not reached?
http://btcaudio.tk/live = 0.7-0.9 transactions per second.

It's called vision. It's what some men have: thinking ahead and not only looking at what's right in front of their nose.
legendary
Activity: 1512
Merit: 1012
Why do you want to increase the block size when the 7-transactions-per-second limit is not reached?
http://btcaudio.tk/live = 0.7-0.9 transactions per second.
legendary
Activity: 1862
Merit: 1015
I remain sceptical of any block size increase, because it promotes centralization of nodes. If Gavin's plan is implemented, there will be progressively fewer full nodes able or willing to participate in the network, because bandwidth and storage requirements become too demanding. An unconditional yearly 50% increase is a very bold figure. It's not even clear that technology will be able to keep up with this rate longer term.

If block size increases really can't be circumvented through other measures, they should not be done in a static, irreversible, step-by-step increase (comparable to coin issuance) but instead in a dynamic way that allows for increases and decreases based on previous usage (comparable to the difficulty adjustment). Doing it that way would allow systemic self-regulation, always providing as much capacity as necessary but not (much) more than needed. That way some healthy competition between transactions (size, fees) remains in place, and resources (bandwidth, storage) are much more likely to be used responsibly rather than wasted.
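To make that concrete, one purely illustrative retarget rule (every name and constant here is invented, not a proposal from the thread) could look like:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical dynamic limit: derive the next max block size from the median
// of the last N actual block sizes, so the limit can both grow and shrink
// with real usage, with the per-period change clamped.
uint64_t NextMaxBlockSize(std::vector<uint64_t> recentSizes, uint64_t currentMax) {
    if (recentSizes.empty()) return currentMax;
    std::sort(recentSizes.begin(), recentSizes.end());
    uint64_t median = recentSizes[recentSizes.size() / 2];
    uint64_t target = median * 2;                    // headroom above typical demand
    uint64_t upper  = currentMax + currentMax / 10;  // grow at most 10% per period
    uint64_t lower  = currentMax - currentMax / 10;  // shrink at most 10% per period
    return std::max(lower, std::min(target, upper));
}
```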
Just because the max block size is larger does not mean that every block (or even any block) will be as large as the max. If miners do not include enough TXs to produce larger blocks, then nodes will not need to use any more bandwidth than they otherwise would; therefore, there is little reason why a node that operates today would cease operating if the block size were increased tomorrow.

I think having the max block size change based on the last n block sizes would add an unnecessary level of complexity to the protocol.