
Topic: Clif High's solution for Bitcoin to scale to infinity. (Read 3519 times)

hero member
Activity: 770
Merit: 629

1) PoW is a bad cryptographic protection

2) a block chain is far too strict a form of consensus (we don't need the EXACT ORDER of transactions)

3) signatures of different entities can validate transactions much better/cheaper/faster than putting them into a block that needs PoW protection.

In other words, most of the principles of bitcoin are, well, improvable.  Which is no surprise, because it is the oldest, very first technology of its kind.


1. PoW has nothing to do with blockchain size. a 1KB block or a 1GB block both result in a 256-bit hash.
PoW only needs a 256-bit hash, and doesn't care about blocksize. (hint: you won't see a hard drive in an ASIC)
it's about timing


I wasn't making that remark in direct relationship to the current subject.  PoW is bad cryptographic security, because the "good guy" doesn't have any advantage over the "bad guy": it is simply the one that wastes more that wins.  I only mentioned it because digital signatures do have the advantage that PoW lacks: the one with the secret key can easily sign, and it is practically impossible to forge such a signature if you don't have the key.
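To illustrate the asymmetry concretely: in PoW, the "good guy" and the "bad guy" run exactly the same brute-force loop, and only the amount of waste decides who wins; with a signature, only the key holder can produce one, yet anyone can check it cheaply.  A rough Python sketch (a toy hashcash-style loop, and a signature using the third-party ecdsa package; illustrative only, not Bitcoin's actual formats):

Code:
import hashlib
from ecdsa import SigningKey, SECP256k1  # third-party package: pip install ecdsa

# Toy PoW: good guy or bad guy, everyone must grind the same loop.
def mine(header: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce  # winning is pure expenditure; no identity advantage
        nonce += 1

# Signature: only the key holder can sign, everyone can verify instantly.
sk = SigningKey.generate(curve=SECP256k1)   # the secret key
vk = sk.get_verifying_key()
msg = b"spend output X to address Y"
sig = sk.sign(msg)             # trivial for the owner of the key
assert vk.verify(sig, msg)     # trivial for everyone else; forging is infeasible

print("nonce:", mine(b"toy header", 16))   # small difficulty so it terminates quickly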

you really need to step back and really understand bitcoin
hashpower alone is meaningless.. it doesn't matter if one pool has 50,000 exahash and another pool only has 50 petahash.

the nodes will reject whatever block doesn't follow the rules.


Sure, and if there are no other blocks available, then those nodes just stop working.  The point is that blocks made by the "bad guys" follow the rules perfectly, and in PoW the rule is then that the bad guy wins.  The rule in PoW is that someone with sufficient hash rate can undo all the transactions of the last 50 blocks, simply by presenting an alternative chain with more PoW than those 50 blocks contain.  The rule is to orphan the "good" last 50 blocks.
If all miners then follow the rule (what else could they do?) and build only upon the chain with the most PoW (the bad guy's chain), then that's the only chain available.  If your node doesn't agree, your node will simply stop.  I've explained this 50 times to you, but you don't want to understand it, because you are stuck on the myth that non-mining nodes have power.
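A minimal sketch of that rule, with hypothetical chain objects (real nodes compare cumulative chainwork, not block counts, but the principle is the same):

Code:
# Fork choice under PoW: the chain presenting the most accumulated work wins,
# no matter who mined it. A rule-following node MUST switch to it.
def best_chain(chains):
    # each chain is a list of blocks; block["work"] is the work its PoW proves
    return max(chains, key=lambda chain: sum(block["work"] for block in chain))

honest   = [{"work": 100} for _ in range(50)]   # the 50 "good" blocks
attacker = [{"work": 101} for _ in range(50)]   # secret alternative with slightly more work
assert best_chain([honest, attacker]) is attacker   # the 50 honest blocks get orphaned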

sr. member
Activity: 392
Merit: 250
Best IoT Platform Based on Blockchain
Indeed, the "solution" in this thread doesn't bring any solution to any serious problem, but adds problems.

Maybe you can ask Clif High to join this forum, review the comments in this thread, and give his feedback.
legendary
Activity: 4410
Merit: 4766

1) PoW is a bad cryptographic protection

2) a block chain is far too strict a form of consensus (we don't need the EXACT ORDER of transactions)

3) signatures of different entities can validate transactions much better/cheaper/faster than putting them into a block that needs PoW protection.

In other words, most of the principles of bitcoin are, well, improvable.  Which is no surprise, because it is the oldest, very first technology of its kind.


1. PoW has nothing to do with blockchain size. a 1KB block or a 1GB block both result in a 256-bit hash.
PoW only needs a 256-bit hash, and doesn't care about blocksize. (hint: you won't see a hard drive in an ASIC)
it's about timing


I wasn't making that remark in direct relationship to the current subject.  PoW is bad cryptographic security, because the "good guy" doesn't have any advantage over the "bad guy": it is simply the one that wastes more that wins.  I only mentioned it because digital signatures do have the advantage that PoW lacks: the one with the secret key can easily sign, and it is practically impossible to forge such a signature if you don't have the key.

you really need to step back and really understand bitcoin
hashpower alone is meaningless.. it doesn't matter if one pool has 50,000 exahash and another pool only has 50 petahash.

the nodes will reject whatever block doesn't follow the rules.

yea the pools can play with themselves all they like, but it's the nodes whose consensus decides what is acceptable
blocks get rejected many times a week; it doesn't matter if the pool has 1% of the hash or 16% of the hash.. a bad rule-breaking block is still a bad rule-breaking block

Quote
2. you do. you then have a checkable history just by knowing the latest block contains data of the previous one. thus no need to constantly re-check everything, because the previous is locked.

I'm not saying that a block chain is "not good enough".  I'm saying it is far too severe.  You don't NEED full block chain ordering in order to verify transaction validity.  It is much harder to come to "exact order consensus" than to come to "correct transaction set consensus".  The order is not needed.  If you have consensus on a BAG of valid past transactions (no matter their order), that's good enough to find out whether a newly proposed transaction is valid or not:
1) does every input correspond to an existing output of a transaction in the bag?
2) does no input appear as the input of any existing transaction in the bag?

That's all that is needed.  No order needed.  In the bag, or not in the bag.  Mathematically, you only need the SET of transactions, not the sequence.  Of course a sequence is (more than) good enough, but it is a complication that is not needed.

you're thinking more about how centralised banks work. EG only needing the UTXO set to act as bank account balances.
but that's backward thinking from the old world of fiat.
please move past the centralist mindset of fiat banking setups and grasp the decentralised, diverse, symbiotic relationships of a peer network.
i know you find it complicated. but that's the beauty of its security. it can't be messed with that easily if no one has control.

stop thinking that bitcoin needs to be controlled, and that once controlled bitcoin doesn't need the chain.. otherwise you might as well just go play with fiat banks.

bitcoin is different, diverse, decentralised for good reason. please learn it.

Quote
3. and as we both pointed out, signatures from different pools become troublesome, with users seeing 20+ pools all with different signatures; plus, to offset things like propagation, the timing of signing becomes less instant, to give the network congestion room to breathe.
which then brings back the issue of "but it's not fast enough if you're waiting 1mb-10min in a grocery store checkout line hoping a confirm happens soon"

Indeed, the "solution" in this thread doesn't bring any solution to any serious problem, but adds problems.

it adds potential new features.. but agreed, there are far more issues that this thread's 'proposal' has not counteracted.
hero member
Activity: 770
Merit: 629

1) PoW is a bad cryptographic protection

2) a block chain is far too strict a form of consensus (we don't need the EXACT ORDER of transactions)

3) signatures of different entities can validate transactions much better/cheaper/faster than putting them into a block that needs PoW protection.

In other words, most of the principles of bitcoin are, well, improvable.  Which is no surprise, because it is the oldest, very first technology of its kind.


1. PoW has nothing to do with blockchain size. a 1KB block or a 1GB block both result in a 256-bit hash.
PoW only needs a 256-bit hash, and doesn't care about blocksize. (hint: you won't see a hard drive in an ASIC)
it's about timing


I wasn't making that remark in direct relationship to the current subject.  PoW is bad cryptographic security, because the "good guy" doesn't have any advantage over the "bad guy": it is simply the one that wastes more that wins.  I only mentioned it because digital signatures do have the advantage that PoW lacks: the one with the secret key can easily sign, and it is practically impossible to forge such a signature if you don't have the key.

Quote
2. you do. you then have a checkable history just by knowing the latest block contains data of the previous one. thus no need to constantly re-check everything, because the previous is locked.

I'm not saying that a block chain is "not good enough".  I'm saying it is far too severe.  You don't NEED full block chain ordering in order to verify transaction validity.  It is much harder to come to "exact order consensus" than to come to "correct transaction set consensus".  The order is not needed.  If you have consensus on a BAG of valid past transactions (no matter their order), that's good enough to find out whether a newly proposed transaction is valid or not:
1) does every input correspond to an existing output of a transaction in the bag?
2) does no input appear as the input of any existing transaction in the bag?

That's all that is needed.  No order needed.  In the bag, or not in the bag.  Mathematically, you only need the SET of transactions, not the sequence.  Of course a sequence is (more than) good enough, but it is a complication that is not needed.
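To make the "bag" idea concrete, here is a rough Python sketch (hypothetical transaction records; real script and signature checking is omitted):

Code:
# A transaction spends "inputs" (references to earlier outputs, as (txid, index) pairs)
# and creates "outputs". Validity against an unordered BAG needs only set membership:
def is_valid(tx, bag):
    existing_outputs = {(t["txid"], i) for t in bag for i in range(len(t["outputs"]))}
    already_spent    = {inp for t in bag for inp in t["inputs"]}
    return (all(inp in existing_outputs for inp in tx["inputs"])        # 1) inputs exist
            and all(inp not in already_spent for inp in tx["inputs"]))  # 2) not yet spent

bag = [{"txid": "a1", "inputs": [], "outputs": [10]}]          # a coinbase-like tx
tx  = {"txid": "b2", "inputs": [("a1", 0)], "outputs": [10]}
assert is_valid(tx, bag)    # note that the order of the bag is never consulted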

Quote
3. and as we both pointed out, signatures from different pools become troublesome, with users seeing 20+ pools all with different signatures; plus, to offset things like propagation, the timing of signing becomes less instant, to give the network congestion room to breathe.
which then brings back the issue of "but it's not fast enough if you're waiting 1mb-10min in a grocery store checkout line hoping a confirm happens soon"

Indeed, the "solution" in this thread doesn't bring any solution to any serious problem, but adds problems.

legendary
Activity: 4410
Merit: 4766

1) PoW is a bad cryptographic protection

2) a block chain is far too strict a form of consensus (we don't need the EXACT ORDER of transactions)

3) signatures of different entities can validate transactions much better/cheaper/faster than putting them into a block that needs PoW protection.

In other words, most of the principles of bitcoin are, well, improvable.  Which is no surprise, because it is the oldest, very first technology of its kind.


1. PoW has nothing to do with blockchain size. a 1KB block or a 1GB block both result in a 256-bit hash.
PoW only needs a 256-bit hash, and doesn't care about blocksize. (hint: you won't see a hard drive in an ASIC)
it's about timing

2. you do. you then have a checkable history just by knowing the latest block contains data of the previous one. thus no need to constantly re-check everything, because the previous is locked.

3. and as we both pointed out, signatures from different pools become troublesome, with users seeing 20+ pools all with different signatures; plus, to offset things like propagation, the timing of signing becomes less instant, to give the network congestion room to breathe.
which then brings back the issue of "but it's not fast enough if you're waiting 1mb-10min in a grocery store checkout line hoping a confirm happens soon"
hero member
Activity: 770
Merit: 629

There is really no difference between putting all these transactions in one big block or making all these "clusters".

to put them into 600 clusters allows for a 'semi-confirm' of 1 second, instead of waiting 10 minutes. drawback is the user needs to see 20 different signatures (from all the pools)

i explained this already...!


Then you are just doing a kind of "20 masternode scheme" like DASH, with the "pools" electing themselves as masternodes.
This is nothing else but the instant-pay mechanism of DASH, with informal masternodes instead of protocol-defined masternodes.
In other words, bitcoin then has 20 certificate authorities signing the validity of transactions.

Once you do that, why not simply use PoS?  Why waste electricity on mining, if you trust miner signatures?  They could simply sign off the main block too, in a PoS scheme, instead of wasting electricity on PoW!

Quote
yep i explained it..

Ok, sorry, I misunderstood your post as "being in favour" of this scheme, while it is a total clusterfuck concerning bandwidth etc...

If 8 MB blocks are a "bandwidth issue", then 20 times the block size is most probably a "bandwidth issue"!


However, what is positive in these discussions is that people are slowly, very slowly, discovering that:

1) PoW is a bad cryptographic protection

2) a block chain is far too strict a form of consensus (we don't need the EXACT ORDER of transactions)

3) signatures of different entities can validate transactions much better/cheaper/faster than putting them into a block that needs PoW protection.

In other words, most of the principles of bitcoin are, well, improvable.  Which is no surprise, because it is the oldest, very first technology of its kind.
legendary
Activity: 4410
Merit: 4766

There is really no difference between putting all these transactions in one big block or making all these "clusters".

to put them into 600 clusters allows for a 'semi-confirm' of 1 second, instead of waiting 10 minutes. drawback is the user needs to see 20 different signatures (from all the pools)

i explained this already...!

Note that as long as a pool hasn't built all its clusters together, it CANNOT START HASHING on the main block, because it doesn't know the final signature to include.

yep, again, i explained this already
it would be signing clusters of fresh transactions. so by the time it has validated a previous main block (to know which tx's were already in the mempool, to add to the new main block), it would have already, separately, grabbed a whole load of fresh tx's to put into clusters.
i've explained this already...!

In fact, the "list" of signatures is more wasteful than the "Merkle tree" hash that is used within a big block.  So from the miner's PoV, there is no difference between making his list of linked clusters, so as to include the final signature in the block on which he will start hashing, and making one big block with a Merkle tree of hashes (his list is just slightly slower).

yep, again, i explained this already
1 extended block vs 600 clusters = 42-48KB of signatures wasted. but to users it's a 1-second confirm feature

The "signatures" of the pools are absolutely no guarantee that the whole series will not be orphaned if another pool wins the main block.  And, as you point out, not only do the individual transactions fly through the network, but all the *different versions of clusters of all pools* fly through the network with their signatures.  That is a large multitude of traffic as compared to one big block.  

again, i explained this..
issues: '12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each)'

The factor is the number of competing pools (not only the big ones!).  So if you see a transaction flying by, and there are 30 mining pools active, you will see 30 different miner clusters containing this transaction fly by within a second or so...  You will see this transaction 30 times, you will have to check the validity of 30 clusters, and finally only one of them will make it, as the 25th cluster in a row by the pool that won the race for the main block.  And maybe the winning main block doesn't, in the end, include that transaction in one of its side clusters, despite the 29 signatures you saw, because the 29 pools that signed it didn't win the main block.
Essentially, if there are about 30 active mining pools, the spent bandwidth is multiplied by about 30 as compared to a single big block.

wow, you waffled a paragraph to repeat what i said, and all you did was change the number 20 to 30..
yep i explained it..

issues: '12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each)'

but it's good to see you have a critical hat on. but you're just repeating what i already said.
hero member
Activity: 770
Merit: 629
ok this is how i see it

We already know that whatever is in purple gets hashed. And that the hash is thrown to ASICs to make a more secure hash with a bunch of zeros at the start
there are [80 bytes of data] that are unused in a block..



now imagine that we used those 80 bytes for something as simple as a signature (sidehash)..
more precisely, a signature hash of data signed by the pool's own chosen coinbase (reward) keypair.
and then hashed the same as before.

now you may well be asking what 'message' that signature could be part of


the message could be a cluster of tx's plus the signature belonging to the previous cluster of tx's.. and so on.. and so on

so how would it work.

well imagine every second new tx's are signed into a cluster (extended block) by the pool making the main block

so, timeline, using just 3 cluster blocks/extended blocks for example:
previous block solved. previous block hash is added to the new block, as well as 2500 tx added to the new block.
2500 tx added to sideblock A and signed as A by that pool
2500 tx and signature A added to sideblock B data and signed as B by that pool
2500 tx and signature B added to sideblock C data and signed as C by that pool
sideblock C signature added to the block, and the block is hashed and PoW'd as usual, all within the same 10 minute window


possibilities: because signatures are involved, clusters/extended blocks can be signed once a second, meaning a pool can make 600 cluster/extended blocks, allowing a tx to get semi-confirmed within a second of being seen by a pool.
yes, those 600 clusters instead of 1 extended block may carry 42-48KB of extra bloat (600 signatures), but then you gain the feature of 1-second semi-confirmation, instead of just extra tx per single block while waiting 10 minutes for a confirm.

issues:
there are 20+ pools
so your TX might be in cluster/extended block 350 of the 600 blocks signed by antpool
 or 123 of the 600 blocks signed by bitfury
 or 500 of the 600 blocks signed by btcc
all because in the 600 seconds between main blocks, each pool has its own 600 cluster/extended blocks and a tx arrangement that pool has solely/independently chosen
resolution:
user receives at least 20 pool responses saying that the user's tx is in a cluster/extended block somewhere in each pool's set

issue:
this then causes a lot more data flying around the network, 'soft confirming' 12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each)

issue:
logical minds will think screw it, let's make it 60 clusters/extended blocks with a 10-second semi-confirm. practical minds then argue that although 10 seconds is better than 10 minutes, there are still 1,200 extended blocks flying through the network, and 10 seconds is now slower than visa's 'touchless' NFC swipe-and-go payment method.

my opinion:
1. just using 1 extended block that can hold an infinite amount of tx's is a way of 'going soft' with dynamics. but then the nodes should have a dynamic useragent flag that pools find a consensus of, to know how many tx's can be put in this extended block without causing issues for the majority of nodes.

2. using it to also 'speed up' the confirm, by having MULTIPLE extended blocks signed every second/10 seconds/whatever, is better than wrecking block rewards/difficulty/halving schedules like some other lame proposals that just reduce the 10-min average to 2 minutes, 1 min, 30 seconds. but again, nodes will need rules of acceptability to keep pools in line, so that pools do not overdo it.

i could continue waffling on, but i'll stop here for now

There is really no difference between putting all these transactions in one big block or making all these "clusters".  Note that as long as a pool hasn't built all its clusters together, it CANNOT START HASHING on the main block, because it doesn't know the final signature to include.  In fact, the "list" of signatures is more wasteful than the "Merkle tree" hash that is used within a big block.  So from the miner's PoV, there is no difference between making his list of linked clusters, so as to include the final signature in the block on which he will start hashing, and making one big block with a Merkle tree of hashes (his list is just slightly slower).

The "signatures" of the pools are absolutely no guarantee that the whole series will not be orphaned if another pool wins the main block.  And, as you point out, not only do the individual transactions fly through the network, but all the *different versions of clusters of all pools* fly through the network with their signatures.  That is a large multitude of traffic as compared to one big block.  The factor is the number of competing pools (not only the big ones !).  So if you see a transaction flying by, and there are 30 mining pools active, you will see 30 different miner clusters containing this transaction fly by within a second or so...  You will see this transaction 30 times, you will have to check the validity of 30 clusters, and finally, only one of them will make it as the 25th cluster in a row by a given pool that won the race for the main block.  And maybe it is a main block that doesn't, finally include that transaction in one of its side clusters despite the 29 signatures you saw, because these 29 pools that signed it, didn't win the main block.

Essentially, if there are about 30 active mining pools, the spent bandwidth is multiplied by about 30 as compared to a single big block.
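Back-of-the-envelope, with the kind of numbers assumed in this thread (30 pools, 600 clusters each per main block, 2500 transactions of roughly 250 bytes per cluster; all of these are illustrative assumptions):

Code:
pools, clusters_per_pool = 30, 600
tx_per_cluster, tx_bytes = 2500, 250    # ~250 bytes is an assumed rough typical tx size
one_big_block = clusters_per_pool * tx_per_cluster * tx_bytes   # relayed once
all_cluster_versions = pools * one_big_block                    # every pool's own version relayed
print(one_big_block / 1e6, "MB vs", all_cluster_versions / 1e6, "MB")
# the ratio is simply the number of competing pools: x30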
hero member
Activity: 770
Merit: 629
Very doubtful.

Where are all these extra lateral blocks going to be stored?

Reminds me of pseudoscience.

Geez.

You never used a spreadsheet before?

Block #52652 A   |   Block #52652  B   |   Block #52652  C
Block #52653 A
Block #52654 A
Block #52655 A   |   Block #52655 B

They would be stored in the blockchain like the rest of the blocks.


Tell me, do blocks B, C etc. also come within the same 10 minutes as the A-blocks?  Do they also have block rewards?  Do these rewards increase bitcoin's inflation?

If yes --> Mwhahaha

If no -->
Who mines them?  Does block A have to certify (a Merkle of blocks?) blocks B, C etc., and hence "wait for them"?  Is there only one miner of blocks A, B and C, namely the one mining block A and getting its rewards?  What's then the difference if the one mining block A also includes the transactions of B and C in block A?


Since no one has written the code yet,
Just my speculation:
Blocks B & C would be broadcast as quickly as possible, so in the same 10-minute window.
ZERO block rewards for the lateral blocks; however, the miner would receive the transaction fees for the additional blocks.
1 miner for each block #, and the A, B, C or more that follow.

You are correct, 1 large block A could contain B & C


But if it is not a miner, who will "broadcast" blocks B and C?  Will this not be a cacophony of many people broadcasting many slightly different blocks B and C?  Normally, only a miner can "broadcast" a valid block (everybody can claim to be a miner and broadcast a block with wrong proof of work, of course, but that's quickly discovered in the block header).  But who will broadcast blocks B and C?

If it is the miner of block A, he's just cutting his own big block into a few pieces.

Nope, only the miner that mined the main block would be able to broadcast the lateral blocks for that main block.
The lateral or adjacent blocks would have to match a checksum included in the main block, or fail inclusion into the chain.



Well, then he could just as well make one big block, instead of splitting the one big block that he alone decides about into several small ones.
 
Quote
Quote
Difference is you have to verify the block, so doing it Clif's way means you can validate a smaller block faster, and only use the lateral blocks when it is full of transactions. The additional blocks would be more important if they can run past the next main block that is found.
Meaning someone tries to spam the network, and a main block plus 20 lateral blocks eats up the entire spam attack,
and then the next block is only 1 main block again.
Quote
But who is broadcasting these non-PoW, non-PoS blocks?  Every node?

Only the miner of the main block would be able to broadcast the lateral blocks, and only for the main block that he himself mined.



So he's just putting what he'd put in one big block into several smaller ones.

Quote
Quote
Single large blocks can also adapt, but usually BTC miners are not lowering the blocksize now, even when they include only 1 transaction.

At some point the larger blocksizes would not be able to be verified in the time before the next block is found, therefore limiting the maximum size.

Quote
How are you going to be able to verify the gazillion of POSSIBLE and BROADCAST small blocks, while the miner of block A has just picked out two of them?

Suppose that the mempool contains, at a certain moment, 10,000 transactions.  There are 5000 different ways to put 6000 of these 10,000 transactions into 3 blocks of about 2000 transactions each, because there are 5000 nodes doing so, each with slightly different mempools (not in sync, of course).  So you receive 15,000 blocks from 5000 different nodes.  The miner of block A has picked 3 of them, which he calls A, B and C.  You will indeed find, amongst the 15,000 broadcast blocks, that one is block A, another is block B and still another is block C.  But if you don't have the time to verify the big block {A,B,C}, do you think you have the time to verify those 15,000 blocks??

Only block A requires complete verification; the additional lateral blocks are only accepted if they match the checksums listed by the verified block.


Of course not.  There are two "levels" of verification: "header verification" and "full transaction consensus verification".

Let us not forget the PURPOSE of a block chain: coming to a consensus on *transactions*.  You want to know whether a given transaction is part of the past consensus or not, because the validity of a new transaction depends on that.  It is the only reason a block chain exists: knowing which set of past transactions there is consensus on.

You can check the validity of the block headers; that tells you that:
1) the header fits correctly into the chain
2) it has the right amount of PoW

By verifying only the headers, you can verify the block chain structure and the Merkle tree hashes included in them.  But you don't know anything about the consensus on transactions that this chain is supposed to bring you.

In order to know that, you have to know the transactions themselves.  They need two verifications:
1) the validity of the transactions themselves, for which you need previous consensus knowledge: that is, which previous transactions did we agree upon?  They provide the outputs that can be used as inputs in a valid transaction.
2) their inclusion in the consensus the miner decided upon.  You combine their hashes into a Merkle tree, and you verify whether that Merkle tree root corresponds to what is in the block header.  If it fits, ALL of them are OK; if it doesn't fit, the block, including its block header, is false.

But to know this, you need to know ALL THE SIDE BLOCKS and verify all of them.  It is sufficient that one single transaction in block C doesn't work, and your block header is, in the end, false.

==> there is no conceptual difference between verifying block A only and not verifying any block.  It is A, B and C, or nothing.

Because a smart guy could include a transaction in block A, but "screw up" block C.  In that case, block A is just as invalid as if, in a normal chain, the block itself were false.  If you ONLY verify block A, think it is OK, and accept the payment, then if it turns out that block C was erroneous, your block A is JUST AS FALSE and your transaction will not be part of consensus.  The block header will turn out to be false after all, just as if you had included a double spend or a wrong Merkle hash in today's chain.

Verifying ONLY block A doesn't verify anything: if block B or block C is false, this invalidates the block header, and hence also block A.
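For concreteness, here is a toy version of the Merkle commitment that makes this "all or nothing" property visible (Bitcoin really does use double-SHA256 and duplicates the last hash on odd levels; everything else here is simplified):

Code:
import hashlib

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):          # txids: list of 32-byte transaction hashes
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:       # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The header commits to ALL transactions, wherever you choose to store them:
txids = [dsha(bytes([i])) for i in range(6)]      # pretend these span blocks A, B and C
committed_root = merkle_root(txids)
# Corrupt one "block C" transaction and the committed root no longer matches:
assert merkle_root(txids[:5] + [dsha(b"screwed up")]) != committed_root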
legendary
Activity: 4410
Merit: 4766
ok this is how i see it

We already know that whatever is in purple gets hashed. And that the hash is thrown to ASICs to make a more secure hash with a bunch of zeros at the start
there are [80 bytes of data] that are unused in a block..



now imagine that we used those 80 bytes for something as simple as a signature (sidehash)..
more precisely, a signature hash of data signed by the pool's own chosen coinbase (reward) keypair.
and then hashed the same as before.

now you may well be asking what 'message' that signature could be part of


the message could be a cluster of tx's plus the signature belonging to the previous cluster of tx's.. and so on.. and so on

so how would it work.

well imagine every second new tx's are signed into a cluster (extended block) by the pool making the main block

so, timeline, using just 3 cluster blocks/extended blocks for example:
previous block solved. previous block hash is added to the new block, as well as 2500 tx added to the new block.
2500 tx added to sideblock A and signed as A by that pool
2500 tx and signature A added to sideblock B data and signed as B by that pool
2500 tx and signature B added to sideblock C data and signed as C by that pool
sideblock C signature added to the block, and the block is hashed and PoW'd as usual, all within the same 10 minute window
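roughly, the chaining would look something like this (python sketch, made-up structures, using the third-party ecdsa package to stand in for the pool's keypair; just to show how each cluster commits to the previous one):

Code:
import hashlib
from ecdsa import SigningKey, SECP256k1   # pip install ecdsa

pool_key = SigningKey.generate(curve=SECP256k1)   # the pool's chosen coinbase keypair

def sign_cluster(txs, prev_sig):
    # the cluster's 'message' = its tx's plus the previous cluster's signature
    body = hashlib.sha256(repr(sorted(txs)).encode() + prev_sig).digest()
    return pool_key.sign(body)

sig = b""                                  # the first cluster has nothing before it
for name in ("A", "B", "C"):               # sideblocks A, B, C
    txs = [f"tx_{name}_{i}" for i in range(2500)]
    sig = sign_cluster(txs, sig)           # chains A -> B -> C

final_commitment = sig   # this goes into the spare bytes, then the block is PoW'd as usual
print(len(final_commitment), "byte signature commits to the whole chain of clusters")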


possibilities: because signatures are involved, clusters/extended blocks can be signed once a second, meaning a pool can make 600 cluster/extended blocks, allowing a tx to get semi-confirmed within a second of being seen by a pool.
yes, those 600 clusters instead of 1 extended block may carry 42-48KB of extra bloat (600 signatures), but then you gain the feature of 1-second semi-confirmation, instead of just extra tx per single block while waiting 10 minutes for a confirm.

issues:
there are 20+ pools
so your TX might be in cluster/extended block 350 of the 600 blocks signed by antpool
 or 123 of the 600 blocks signed by bitfury
 or 500 of the 600 blocks signed by btcc
all because in the 600 seconds between main blocks, each pool has its own 600 cluster/extended blocks and a tx arrangement that pool has solely/independently chosen
resolution:
user receives at least 20 pool responses saying that the user's tx is in a cluster/extended block somewhere in each pool's set

issue:
this then causes a lot more data flying around the network, 'soft confirming' 12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each)

issue:
logical minds will think screw it, let's make it 60 clusters/extended blocks with a 10-second semi-confirm. practical minds then argue that although 10 seconds is better than 10 minutes, there are still 1,200 extended blocks flying through the network, and 10 seconds is now slower than visa's 'touchless' NFC swipe-and-go payment method.

my opinion:
1. just using 1 extended block that can hold an infinite amount of tx's is a way of 'going soft' with dynamics. but then the nodes should have a dynamic useragent flag that pools find a consensus of, to know how many tx's can be put in this extended block without causing issues for the majority of nodes.

2. using it to also 'speed up' the confirm, by having MULTIPLE extended blocks signed every second/10 seconds/whatever, is better than wrecking block rewards/difficulty/halving schedules like some other lame proposals that just reduce the 10-min average to 2 minutes, 1 min, 30 seconds. but again, nodes will need rules of acceptability to keep pools in line, so that pools do not overdo it.

i could continue waffling on, but i'll stop here for now
hero member
Activity: 924
Merit: 506
Looks like you are confused about what is really happening here. People want to be able to mine up to the 21M limit in a few years, not decades. The problem isn't bandwidth and storage: mining is now a specialized industry, and anyone wanting to start mining needs to invest large amounts anyway, so let them spend some of it on good internet and hardware; with ASIC miners you'll only need one full node.

The problem isn't a lack of space, but how many transactions per block, and how often (the time between finding blocks), should be mined.
Unless anyone has a method/program/algorithm that could compress all sorts of signature versions/transaction data of normal and multi-sig addresses, reducing their size so that miners can fit more TXs into 1MB of space, I instead suggested a mining mechanism where all the miners point their hash power together and mine exactly 1 block every 10 minutes: that would be one huge pool with no orphaned/aborted/rejected blocks. It would also mean no more lottery for each miner trying to win, which is much easier, and people would be able to mine with one tenth of the electricity, though that could potentially affect the price in a negative way.
legendary
Activity: 1092
Merit: 1000
Very doubtful.

Where are all these extra lateral blocks going to be stored?

Reminds me of pseudoscience.

Geez.

You never used a spreadsheet before?

Block #52652 A   |   Block #52652  B   |   Block #52652  C
Block #52653 A
Block #52654 A
Block #52655 A   |   Block #52655 B

They would be stored in the blockchain like the rest of the blocks.


Tell me, do blocks B, C etc. also come within the same 10 minutes as the A-blocks?  Do they also have block rewards?  Do these rewards increase bitcoin's inflation?

If yes --> Mwhahaha

If no -->
Who mines them?  Does block A have to certify (a Merkle of blocks?) blocks B, C etc., and hence "wait for them"?  Is there only one miner of blocks A, B and C, namely the one mining block A and getting its rewards?  What's then the difference if the one mining block A also includes the transactions of B and C in block A?


Since no one has written the code yet,
Just my speculation:
Blocks B & C would be broadcast as quickly as possible, so in the same 10-minute window.
ZERO block rewards for the lateral blocks; however, the miner would receive the transaction fees for the additional blocks.
1 miner for each block #, and the A, B, C or more that follow.

You are correct, 1 large block A could contain B & C


But if it is not a miner, who will "broadcast" blocks B and C?  Will this not be a cacophony of many people broadcasting many slightly different blocks B and C?  Normally, only a miner can "broadcast" a valid block (everybody can claim to be a miner and broadcast a block with wrong proof of work, of course, but that's quickly discovered in the block header).  But who will broadcast blocks B and C?

If it is the miner of block A, he's just cutting his own big block into a few pieces.

Nope, only the miner that mined the main block would be able to broadcast the lateral blocks for that main block.
The lateral or adjacent blocks would have to match a checksum included in the main block, or fail inclusion into the chain.
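Speculating further, the "checksum" could be as simple as a hash digest per lateral block, committed inside the PoW'd main block (illustrative Python, made-up structures, not actual Bitcoin formats):

Code:
import hashlib

def checksum(block_bytes: bytes) -> str:
    return hashlib.sha256(block_bytes).hexdigest()

lateral_blocks = [b"raw bytes of block 52652 B", b"raw bytes of block 52652 C"]

# the mined, PoW-verified main block commits to its lateral blocks up front:
main_block = {"height": 52652, "lateral_checksums": [checksum(b) for b in lateral_blocks]}

# a node accepts a later-arriving lateral block only if its checksum matches:
def accept_lateral(candidate: bytes) -> bool:
    return checksum(candidate) in main_block["lateral_checksums"]

assert accept_lateral(lateral_blocks[0])
assert not accept_lateral(b"tampered block")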

 

Quote
Difference is you have to verify the block, so doing it Clif's way means you can validate a smaller block faster, and only use the lateral blocks when it is full of transactions. The additional blocks would be more important if they can run past the next main block that is found.
Meaning someone tries to spam the network, and a main block plus 20 lateral blocks eats up the entire spam attack,
and then the next block is only 1 main block again.
Quote
But who is broadcasting these non-PoW, non-PoS blocks?  Every node?

Only the miner of the main block would be able to broadcast the lateral blocks, and only for the main block that he himself mined.




Quote
Single large blocks can also adapt, but usually BTC miners are not lowering the blocksize now, even when they include only 1 transaction.

At some point the larger blocksizes would not be able to be verified in the time before the next block is found, therefore limiting the maximum size.

Quote
How are you going to be able to verify the gazillion of POSSIBLE and BROADCAST small blocks, while the miner of block A has just picked out two of them?

Suppose that the mempool contains, at a certain moment, 10,000 transactions.  There are 5000 different ways to put 6000 of these 10,000 transactions into 3 blocks of about 2000 transactions each, because there are 5000 nodes doing so, each with slightly different mempools (not in sync, of course).  So you receive 15,000 blocks from 5000 different nodes.  The miner of block A has picked 3 of them, which he calls A, B and C.  You will indeed find, amongst the 15,000 broadcast blocks, that one is block A, another is block B and still another is block C.  But if you don't have the time to verify the big block {A,B,C}, do you think you have the time to verify those 15,000 blocks??

Only block A requires complete verification; the additional lateral blocks are only accepted if they match the checksums listed by the verified block.


legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
If Clif High's idea is another variant of extension blocks or auxiliary blocks, then that's different, but it's not exactly a new idea, as you said.  The downside with that sort of idea is that it raises questions over fungibility, in that coins aren't equal and identical if they are stored in two different varieties of block.

Yes, a "ExtensionBlockBitcoin" would not be the same than a "MainChainBitcoin". My understanding is, however, that the "peg" between these two varieties of Bitcoin could be easier to implement with extension/lateral blocks  than with near-fully-separated sidechains.

If the peg is working well then we could hide the functionality from the users (at least those who opt-in to it) in the clients - and for merchants, I think it would be a competitive disadvantage to not accept "ExtensionBlockBitcoins". So the fungibility problem looks solvable, for me - the "hard problem" is making the peg work.
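A very rough model of the accounting such a peg has to enforce (purely illustrative; the hard part - enforcing this invariant without a trusted party - is exactly what this sketch does not solve):

Code:
# Two ledgers whose supplies must mirror each other 1:1.
main_locked = 0     # coins locked on the main chain
ext_supply  = 0     # "ExtensionBlockBitcoins" in circulation

def peg_in(amount):      # lock on the main chain, mint on the extension side
    global main_locked, ext_supply
    main_locked += amount
    ext_supply  += amount

def peg_out(amount):     # burn on the extension side, release on the main chain
    global main_locked, ext_supply
    assert ext_supply >= amount, "cannot redeem more than was pegged in"
    ext_supply  -= amount
    main_locked -= amount

peg_in(5); peg_out(2)
assert main_locked == ext_supply == 3   # the invariant a working peg must maintain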
sr. member
Activity: 434
Merit: 250
It's really scary: everything looks OK on paper, but I'm concerned about its viability. That's just the theory; we need something more practical as proof. I look forward to a good result from his solution and hope it will succeed. I am tired of seeing things that are fine in theory but weak in practice.
sr. member
Activity: 378
Merit: 250
Don't be fucking stupid.

You can't get blood from a stone, there are no perpetual motion machines, you can't fold a piece of paper in half unlimited times, if a genie gives you three wishes you can't use one of them to wish for more wishes, and you can't fit more than X bits of information into a space providing X bits of information.

WE AGREE!

I was going to say the same thing; for once I agree with Johnny
hero member
Activity: 770
Merit: 629
Very doubtful.

Where are all these extra lateral blocks going to be stored?

Reminds me of pseudoscience.

Geez.

You never used a spreadsheet before?

Block #52652 A   |   Block #52652  B   |   Block #52652  C
Block #52653 A
Block #52654 A
Block #52655 A   |   Block #52655 B

They would be stored in the blockchain like the rest of the blocks.


Tell me, do blocks B, C etc. also come within the same 10 minutes as the A-blocks?  Do they also have block rewards?  Do these rewards increase bitcoin's inflation?

If yes --> Mwhahaha

If no -->
Who mines them?  Does block A have to certify (a Merkle of blocks?) blocks B, C etc., and hence "wait for them"?  Is there only one miner of blocks A, B and C, namely the one mining block A and getting its rewards?  What's then the difference if the one mining block A also includes the transactions of B and C in block A?


Since no one has written the code yet,
Just my speculation:
Blocks B & C would be broadcast as quickly as possible, so in the same 10-minute window.
ZERO block rewards for the lateral blocks; however, the miner would receive the transaction fees for the additional blocks.
1 miner for each block #, and the A, B, C or more that follow.

You are correct, 1 large block A could contain B & C


But if it is not a miner, who will "broadcast" blocks B and C?  Will this not be a cacophony of many people broadcasting many slightly different blocks B and C?  Normally, only a miner can "broadcast" a valid block (everybody can claim to be a miner and broadcast a block with wrong proof of work, of course, but that's quickly discovered in the block header).  But who will broadcast blocks B and C?

If it is the miner of block A, he's just cutting his own big block into a few pieces.

Quote
Difference is you have to verify the block, so doing it Clif's way means you can validate a smaller block faster, and only use the lateral blocks when it is full of transactions. The additional blocks would be more important if they can run past the next main block that is found.
Meaning someone tries to spam the network, and a main block plus 20 lateral blocks eats up the entire spam attack,
and then the next block is only 1 main block again.

But who is broadcasting these non-PoW, non-PoS blocks?  Every node?

Quote
Single large blocks can also adapt, but usually BTC miners are not lowering the blocksize now, even when they include only 1 transaction.

At some point the larger blocksizes would not be able to be verified in the time before the next block is found, therefore limiting the maximum size.

How are you going to be able to verify the gazillion of POSSIBLE and BROADCAST small blocks, while the miner of block A has just picked out two of them?

Suppose that the mempool contains, at a certain moment, 10,000 transactions.  There are 5000 different ways to put 6000 of these 10,000 transactions into 3 blocks of about 2000 transactions each, because there are 5000 nodes doing so, each with slightly different mempools (not in sync, of course).  So you receive 15,000 blocks from 5000 different nodes.  The miner of block A has picked 3 of them, which he calls A, B and C.  You will indeed find, amongst the 15,000 broadcast blocks, that one is block A, another is block B and still another is block C.  But if you don't have the time to verify the big block {A,B,C}, do you think you have the time to verify those 15,000 blocks??

wck
member
Activity: 70
Merit: 10
I will appreciate it (I am sure anyone with large savings in Bitcoin will too) if someone can explain how and why Clif High's solution can/cannot work, instead of outright saying it can/cannot.
At least I would like to see the details of how and why it can/cannot work, instead of an opinion.
Clif High developed his own spider to scout the entire web for collective human sentiment for prediction, and many of his predictions came true, so I don't assume he would make a flimsy suggestion on solving Bitcoin's scaling issue.
I assume he knows what he is talking about.
The only thing left is: if he's right, then we need to lay out a plan for how to make it possible with his concept.
His solution has nothing to do with perpetual motion machines or free energy from nothing (so please don't make stuff up or level accusations).
I think his solution has to do with parallel programming or multithreading on the block chain numbers to solve the next block, but this is just my interpretation.

Ok, I actually watched the vid (at least the part you referenced)

He is misunderstanding the problem.

The problem is not how to organize the data structure.
If that were the issue, then what he is saying would be fine -- you could have 'slave blocks' or 'ancillary blocks'
that are pointed to by the main block.

You still have to process all those blocks.

When people talk about scalability challenges in Bitcoin, they mean: how do we deal with all the storage requirements, the bandwidth requirements, the propagation requirements, etc.?
As the network grows to accommodate greater transaction volume and a bigger blockchain, these requirements also grow.

So whether it's all in one big block or in a master block with a bunch of slave blocks, you still have to propagate that data, you still have to process that data, and you still have to store
that data.  There are more efficient ways and less efficient ways to do those things, but nothing he is saying provides a way to "scale to infinity" or a huge increase in efficiency.

The Lightning Network is a huge increase in efficiency because it moves things off the main chain into bidirectional payment channels on a different network.  Aside from
something radical like that, the main scaling improvements will come as technology gives us faster internet and cheaper hard drives, stuff like that.

Hope that helps and makes sense.



Very clear explanation!  Now if only people would listen to what you are saying.
sr. member
Activity: 322
Merit: 253
Property1of1OU
Very doubtful.

Where are all these extra lateral blocks going to be stored?

Reminds me of pseudoscience.

Geez.

You never used a spreadsheet before?

Block #52652 A   |   Block #52652  B   |   Block #52652  C
Block #52653 A
Block #52654 A
Block #52655 A   |   Block #52655 B

They would be stored in the blockchain like the rest of the blocks.


If this proposed "solution" is just extra blocks at the same time, it's no different from an increase to the blocksize.  Nodes still have to store and relay the additional data.  Blocks B and C still take up space in terms of storage and bandwidth.  If anything, it would use fractionally more space than a flat increase to the blocksize, because there would be a few extra block headers to store.  If Clif High's idea is another variant of extension blocks or auxiliary blocks, then that's different, but it's not exactly a new idea, as you said.  The downside with that sort of idea is that it raises questions over fungibility, in that coins aren't equal and identical if they are stored in two different varieties of block.


I agree ... reminds me of "IPv4 address exhaustion" ( https://en.wikipedia.org/wiki/IPv4_address_exhaustion ) ... Also reminds me of "Colored Coins" ( building new tokens from the dust; all we are is dust in the wind )
sr. member
Activity: 322
Merit: 253
Property1of1OU
Clif High is a bit insane, and I don't think you can predict the future by analyzing keywords; it's just ridiculous that he can make predictions like he does. He said $30,000 is coming this year or something.

Actually, 'Machine Learning'1 (and science in general) is all about making predictions, testing hypotheses, etc.

I think a new version of his 'keyword prediction' software could be adapted on top of deep learning frameworks like Caffe, CNTK, TensorFlow, Theano and Torch ... for instance, you can visualize word proximity using word2vec2 at http://projector.tensorflow.org/

But I wonder what code could cryptographically bind adjacent blocks while fitting into 83 bytes ...


1 - Machine Learning is the subfield of AI concerned with making predictions from data (Statistical Learning).
2 - word2vec: "Distributed Representations of Words and Phrases and their Compositionality"

sr. member
Activity: 392
Merit: 250
Best IoT Platform Based on Blockchain
Clif said (in my own words) in his video that his solution applies to (preceding?) blocks of the exact same size and structure.