
Topic: Dumb question (Read 755 times)

sr. member
Activity: 476
Merit: 251
COINECT
May 09, 2013, 06:26:48 AM
#10
Here's an interesting quote from Satoshi himself on this point:

Quote
It is strictly necessary that the longest chain is always considered the valid
one.  Nodes that were present may remember that one branch was there first and
got replaced by another, but there would be no way for them to convince those
who were not present of this.  We can't have subfactions of nodes that cling to
one branch that they think was first, others that saw another branch first, and
others that joined later and never saw what happened.  The CPU power
proof-of-work vote must have the final say.  The only way for everyone to stay
on the same page is to believe that the longest chain is always the valid one,
no matter what.

From http://www.mail-archive.com/cryptography@metzdowd.com/msg09980.html

He seemed to be against this type of scheme.
sr. member
Activity: 476
Merit: 251
COINECT
April 29, 2013, 04:20:16 AM
#9
Quote
Maybe I missed the point... What difference will it make when such a solution is implemented within the protocol?

It would prevent any problems if whoever is entrusted to distribute the centralized checkpoints is ever compromised.
member
Activity: 97
Merit: 10
April 27, 2013, 06:03:36 AM
#8
Maybe I missed the point... What difference will it make when such a solution is implemented within the protocol?
staff
Activity: 4284
Merit: 8808
April 27, 2013, 04:06:41 AM
#7
"Make the best chain selection not a pure function of the chain" is a perennial proposal, and can be answered to as a class:

Chain-external statefulness can result in fatal consistency failures. Not all nodes will have observed identical prior states: consider newly started nodes, or an attacker who simultaneously announces conflicting chain segments to distinct groups of nodes in order to intentionally create inconsistent state.

For example, one of the most common proposals in this space is simply to refuse to make reorganizations over some size X. But this means an attacker who can produce X+1 blocks can simultaneously announce one fork to half the network while giving the other half one more block. Everyone locks in and the network is forever split. And if you assume that an attacker couldn't produce an X-block reorg anyway, then the "protection" was pointless in the first place.
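
To make that concrete, here is a toy sketch (Python, with invented names; not real client code, and the numbers are arbitrary) of how the split happens: each half of the network applies a hypothetical "never reorganize more than MAX_REORG blocks" rule, the attacker shows each half a different branch at the same moment, and afterwards neither half will ever switch.

Code:
MAX_REORG = 6  # the hypothetical reorg-depth limit "X"

class Node:
    def __init__(self, chain):
        self.chain = list(chain)

    def consider(self, fork):
        # Length of the common prefix between our chain and the candidate fork.
        common = 0
        while (common < min(len(self.chain), len(fork))
               and self.chain[common] == fork[common]):
            common += 1
        if len(self.chain) - common > MAX_REORG:
            return  # refuse: adopting the fork would discard too many blocks
        if len(fork) > len(self.chain):
            self.chain = list(fork)  # otherwise, the ordinary longest-chain rule

honest = list(range(94))                       # history both halves agree on
fork_a = honest + [f"A{i}" for i in range(7)]  # X+1 attacker blocks for one half
fork_b = honest + [f"B{i}" for i in range(8)]  # one block more for the other half

group_a, group_b = Node(honest), Node(honest)
group_a.consider(fork_a); group_b.consider(fork_b)  # simultaneous announcement
group_a.consider(fork_b); group_b.consider(fork_a)  # later cross-propagation is refused

print(group_a.chain[-1], group_b.chain[-1])  # 'A6' vs 'B7': permanently split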

Likewise, all the soft versions I've seen proposed have the same kind of problem, though potentially less fatal: while closing off one reorg pattern, they act as a reorg-size multiplier for a different attack pattern optimized against them.

I don't expect this general line of research to be promising, simply because, above all, Bitcoin is a consensus system, and using node-local data to make decisions gets in the way of achieving the fastest possible consensus.
newbie
Activity: 3
Merit: 0
April 24, 2013, 07:52:55 AM
#6
anti-scam is right, you'd have to be careful with the math so you didn't wind up with blockchains accidentally diverging all over the place. Let's give ourselves a few definitions so we're on the same page.

The credibility C of a blockchain is a function of the blocks in that chain. The work W is a measure of how much work went into a given block. The age A of a block is how long ago this node first saw that block. Right now, this looks like:
    C = sum of W(b)
But there's no reason it couldn't look like this:
    C = sum of W(b) * F(A(b))
where F(t) is some function of a block's age.

The hard part is how you pick F(t). Discuss!
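
Here's a tiny Python sketch of that scoring rule, just to have something concrete to argue about. The half-life weighting is an arbitrary stand-in for F, and the block fields are made up, not anything a real client tracks.

Code:
import math
import time

def age_weight(age_seconds, half_life=7 * 24 * 3600):
    """A stand-in F(t): weight grows from 1 toward 2 as a block ages,
    so blocks this node has known about for a long time count a bit more."""
    return 2.0 - math.exp(-age_seconds * math.log(2) / half_life)

def credibility(chain, now=None):
    """C = sum over blocks b of W(b) * F(A(b)).
    Each block is a dict with 'work' and 'first_seen' (unix timestamp).
    Note: first_seen is node-local, which is exactly where the
    accidental-divergence risk comes from."""
    now = now or time.time()
    return sum(b["work"] * age_weight(now - b["first_seen"]) for b in chain)

# A node would then prefer whichever candidate chain scores highest:
#     best = max(candidate_chains, key=credibility)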
newbie
Activity: 7
Merit: 0
April 24, 2013, 02:37:07 AM
#5
Thank you for this finding.
sr. member
Activity: 476
Merit: 251
COINECT
April 24, 2013, 12:50:15 AM
#4

Quote
There is an assumption that the reduction of the block reward will reduce the hashing power of legitimate miners, making it very easy to attack the block chain. This is far from certain, but it is a possibility. In fact, the first halving did cause a severe drop in mining, but it was only temporary, suggesting that the assumption may be wrong.

There is no reason why the checkpointing can't be decentralized. However, there seems to be a risk that a higher threshold for revising a block chain could result in a permanent fork.

How exactly would you determine which checkpoint is considered legitimate in this scheme? Some sort of stake-based voting? I suppose it would seem reasonable that the stake would have to include a coin older than any transaction you're claiming knowledge of. This actually seems like a fairly good idea, but I've never seen it proposed.
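
A rough Python sketch of what I mean (entirely hypothetical; no such protocol exists): a checkpoint vote only counts if it is backed by a coin that was created before the block being checkpointed.

Code:
from dataclasses import dataclass

@dataclass
class Coin:
    value: float
    created_height: int  # block height at which this coin was created

@dataclass
class CheckpointVote:
    checkpoint_height: int
    checkpoint_hash: str
    stake: Coin

def vote_weight(vote: CheckpointVote) -> float:
    """Count the stake only if the coin predates the checkpointed block;
    otherwise the voter can't credibly claim first-hand knowledge of it."""
    if vote.stake.created_height >= vote.checkpoint_height:
        return 0.0
    return vote.stake.value

def tally(votes):
    """Total eligible stake behind each candidate checkpoint hash."""
    totals = {}
    for v in votes:
        totals[v.checkpoint_hash] = totals.get(v.checkpoint_hash, 0.0) + vote_weight(v)
    return totals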
legendary
Activity: 4466
Merit: 3391
April 24, 2013, 12:37:08 AM
#3
Quote
Hi everyone!

I've been a lurker on these forums for a while, but now I actually have a genuine question so I figured I'd start getting those newbie posts out of the way.  I'm almost certain this has been asked before so it's perfect fodder for the newbie forum, but I couldn't find it anywhere.  So here goes.

The Stanford paper pointed out two interesting things.

First, as long as Moore's law holds, the cost of attacking the entire blockchain is proportional to the cost of attacking the last hour.  Or day.  Or whatever.  Granted, that's a large proportion, but that seems to be exactly the fear that led to Bitcoin's current checkpointing solution.  But centralized checkpoints suck.

Second, you can achieve a sort of decentralized checkpointing by making old nodes skeptical of blocks that would invalidate blocks they themselves witnessed a long time ago.  So revisionist histories would require more and more proof of work the more (and older) blocks they invalidated.

This seems like an awesome idea, but since the paper was published in 2012 and Bitcoin hasn't done this yet, I'm assuming there's something wrong with it.  What am I missing?

There is an assumption that the reduction of the block reward will reduce the hashing power of legitimate miners, making it very easy to attack the block chain. This is far from certain, but it is a possibility. In fact, the first halving did cause a severe drop in mining, but it was only temporary, suggesting that the assumption may be wrong.

There is no reason why the checkpointing can't be decentralized. However, there seems to be a risk that a higher threshold for revising a block chain could result in a permanent fork.
sr. member
Activity: 476
Merit: 251
COINECT
April 24, 2013, 12:12:54 AM
#2
The only question I would ask is how those old nodes then convince other nodes of their skepticism. What is the "threshold" of skepticism that needs to be reached before a transaction is questioned? This seems to me like it would get into a voting/consensus sort of system, which is fraught with error.
newbie
Activity: 3
Merit: 0
April 24, 2013, 12:11:00 AM
#1
Hi everyone!

I've been a lurker on these forums for a while, but now I actually have a genuine question so I figured I'd start getting those newbie posts out of the way.  I'm almost certain this has been asked before so it's perfect fodder for the newbie forum, but I couldn't find it anywhere.  So here goes.

The Stanford paper pointed out two interesting things.

First, as long as Moore's law holds, the cost of attacking the entire blockchain is proportional to the cost of attacking the last hour.  Or day.  Or whatever.  Granted, that's a large proportion, but that seems to be exactly the fear that led to Bitcoin's current checkpointing solution.  But centralized checkpoints suck.

Second, you can achieve a sort of decentralized checkpointing by making old nodes skeptical of blocks that would invalidate blocks they themselves witnessed a long time ago.  So revisionist histories would require more and more proof of work the more (and older) blocks they invalidated.

This seems like an awesome idea, but since the paper was published in 2012 and Bitcoin hasn't done this yet, I'm assuming there's something wrong with it.  What am I missing?
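
For what it's worth, here is one rough Python sketch of how I picture the paper's idea. It's my own guess at a rule, not the paper's actual formula: a fork has to bring extra proof of work, and the penalty grows with how many blocks it would invalidate and how long ago this node first saw them.

Code:
def common_prefix_len(chain, fork):
    """Number of leading blocks the two chains share (blocks are dicts
    with 'hash', 'work', and 'first_seen' fields)."""
    n = 0
    while n < min(len(chain), len(fork)) and chain[n]["hash"] == fork[n]["hash"]:
        n += 1
    return n

def extra_work_required(chain, fork, now, penalty_per_day=0.01):
    """Sum a penalty over every block the fork would invalidate, growing
    with how long ago this node first witnessed that block."""
    split = common_prefix_len(chain, fork)
    penalty = 0.0
    for b in chain[split:]:
        days_known = (now - b["first_seen"]) / 86400
        penalty += b["work"] * penalty_per_day * days_known
    return penalty

def prefer_fork(chain, fork, now):
    """Switch only if the fork's total work beats ours plus the age penalty."""
    total_work = lambda c: sum(b["work"] for b in c)
    return total_work(fork) > total_work(chain) + extra_work_required(chain, fork, now)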