Topic: Scaling Bitcoin - the Reliability Factor proposal

member
Activity: 140
Merit: 10
Decentralized Block-chain Voting
September 24, 2015, 01:36:53 PM
#9
Interesting proposal, but how would you measure consensus? 

There is a new way to measure consensus on the blockchain based on bitcoin ownership, as per TechCrunch:
http://techcrunch.com/2015/09/21/a-solution-to-bitcoins-governance-problem/
newbie
Activity: 14
Merit: 0
Interesting concept.
legendary
Activity: 1246
Merit: 1011
Quote
I'm still thinking... What if I change the meaning of R?

Code:
R = the number of full blocks we (statistically) accept for the next 2 weeks

I'm looking for a CSV file with block size values for several months
so I can run some tests. Can somebody give me a link to such a file?

I'm not sure exactly how you mean this to differ from your first notion.  My original query remains.  Why would we want to restrict block size at all?  Why should we throttle transactions some of the time when we could just not throttle them at all?

What I'm getting at is that your proposal has no "Motivation" section.  It achieves the goal of raising or lowering the block size limit depending on "user traffic" but it does not explain why this is desired.
member
Activity: 65
Merit: 16
I'm still thinking... What if I change the meaning of R?

Code:
R = the number of full blocks we (statistically) accept for the next 2 weeks

I'm looking for a CSV file with block size values for several months
so I can run some tests. Can somebody give me a link to such a file?
member
Activity: 65
Merit: 16
Quote
Can I take a step back and ask why we might want a block size limit at all?  Can you give a simple reason why we might prefer R = 1% to R = 0%?
You raised an important issue: what is the purpose of having a block size limit
if this limit is (almost) never reached (R = 1% => a 1% chance of a full block in the next 2 weeks)?
You are right: we could set R = 0 and there would be no limit, with little practical difference from the previous case.

So we come back to the fundamental question of what the purpose of a block size limit is,
and how to implement such a limit. I need to think more about it...
legendary
Activity: 1246
Merit: 1011
Can I take a step back and ask why we might want a block size limit at all?  Can you give a simple reason why we might prefer R = 1% to R = 0%?
member
Activity: 65
Merit: 16
Quote
The idea appears to be somewhat close to BIP 106.

Yes, it is very close. Thanks for pointing it out.

From BIP-106:

Code:
If more than 50% of block's size, found in the first 2000 of the last difficulty period, is more than 90% MaxBlockSize
      Double MaxBlockSize
  Else if more than 90% of block's size, found in the first 2000 of the last difficulty period, is less than 50% MaxBlockSize
      Half MaxBlockSize
  Else
      Keep the same MaxBlockSize
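
For comparison, here is a minimal runnable sketch (Python) of the BIP-106 rule quoted above; the names sizes and bip106_max are illustrative, not from the BIP:

Code:
# Sketch of the BIP-106 rule quoted above.
# 'sizes' holds the sizes of the first 2000 blocks of the last difficulty period.
def bip106_max(sizes, max_block_size):
    full = sum(1 for s in sizes if s > 0.90 * max_block_size)   # near-full blocks
    small = sum(1 for s in sizes if s < 0.50 * max_block_size)  # half-empty blocks
    if full > 0.50 * len(sizes):
        return max_block_size * 2       # Double MaxBlockSize
    elif small > 0.90 * len(sizes):
        return max_block_size / 2       # Half MaxBlockSize
    return max_block_size               # Keep the same MaxBlockSize
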
I can see 2 significant improvements with my proposal (Reliability Factor):

1) - with BIP-106, if 50% of the last 2000 blocks are 90% full, there will be a large number of full blocks
    - with the Reliability Factor this number stays very low (R = 1% => only a 1% chance of a full block in the next 2000 blocks)

2) - with BIP-106 there are only 3 possible corrections: 1/2, 1 or 2 times MaxBlockSize
    - with the Reliability Factor the correction can be anything between 1/2 and 2, and it takes into account even small changes in the average block size over time (see the sketch below)
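
As a minimal sketch of point 2 (assuming, per the description above, that the correction factor is clamped to [1/2, 2]; corrected_bmax is a hypothetical name):

Code:
# Hypothetical sketch of the continuous correction described in point 2,
# clamped between halving and doubling.
def corrected_bmax(old_bmax, proposed_bmax):
    factor = proposed_bmax / old_bmax
    factor = max(0.5, min(2.0, factor))  # anything between 1/2 and 2
    return old_bmax * factor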
legendary
Activity: 1662
Merit: 1050
The idea appears to be somewhat close to BIP 106.
member
Activity: 65
Merit: 16
Here is the proposal.

The block size limit (Bmax) is adjusted every 2 weeks by an algorithm
based on the block sizes of the last 2 weeks.

Prerequisites

1) The bitcoin community must agree on a Reliability Factor (R)

This factor (R) lies between 0% and 100% and represents
the probability of a block exceeding Bmax during the next 2 weeks.

Once set, this value does not change.

Example: R = 1% means there is a 1% probability that a block exceeds Bmax
during the next 2 weeks. Since the limit is recomputed every 2 weeks, this
represents roughly one occurrence every 4 years (one in 100 two-week periods = 200 weeks).

2) All blocks exceeding Bmax are rejected by the miners: no hard fork


Algorithm

1) Every 2 weeks, the algorithm computes the average size (Ba) of the last 2000 blocks
(about 2 weeks) and the root mean square (RMS) of their sizes over the same period.

2) Given Ba and RMS, and assuming block sizes follow a Gaussian distribution, the algorithm
computes the new Bmax such that the Reliability Factor (R) remains unchanged.
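
Here is a minimal sketch of step 2 (Python), assuming independent and roughly Gaussian block sizes; it uses the standard deviation as the spread measure and reads R as the chance of at least one oversized block over the next 2000 blocks. The name new_bmax is illustrative:

Code:
# Minimal sketch: recompute Bmax so that the Reliability Factor R holds.
# 'sizes' holds the sizes (in bytes) of the last 2000 blocks.
from statistics import NormalDist, mean, pstdev

def new_bmax(sizes, R=0.01, n_blocks=2000):
    Ba = mean(sizes)       # average block size over the period
    sigma = pstdev(sizes)  # spread of block sizes (standard deviation here)
    if sigma == 0:
        return Ba          # degenerate case noted in the conclusion below
    # Per-block exceedance probability p such that the chance of at least
    # one block over Bmax in n_blocks equals R:
    #   R = 1 - (1 - p)**n_blocks  =>  p = 1 - (1 - R)**(1 / n_blocks)
    p = 1 - (1 - R) ** (1 / n_blocks)
    # The new Bmax is the (1 - p) quantile of the fitted Gaussian
    return NormalDist(Ba, sigma).inv_cdf(1 - p)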


Conclusion

- Easy to implement
- The block size limit can go up or down depending on user traffic
- This presentation is oversimplified; for example, what happens if all the blocks
are full: Ba = Bmax and RMS = 0, so the new Bmax will always remain the same.
I think we can easily anticipate and handle such cases.

See you at https://scalingbitcoin.org/montreal2015/