Topic: Nxt source code flaw reports - page 31. (Read 113369 times)

hero member
Activity: 687
Merit: 500
January 08, 2014, 04:10:31 PM
Well, I can just wait before I corrupt the block. And by what means do peers decide which block is the real one?
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 04:02:21 PM
You mean that the corrupted block doesn't get distributed well enough?

The next few blocks will make one of these twin blocks orphaned.
hero member
Activity: 687
Merit: 500
January 08, 2014, 03:56:35 PM
You mean that the corrupted block doesn't get distributed well enough?
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 03:49:33 PM
If my analysis is correct, this is a serious bug in version 0.4.7e.

This is not a bug. I have always said that 1 confirmation == 0 confirmations. This is called the "Cunicula attack"; Cunicula was the first to devise it.

Btw, in the future such an incorrect block will be reported by peers and u'll be penalized.

Edit: Bitcoin has a similar attack called "Finney attack".
hero member
Activity: 687
Merit: 500
January 08, 2014, 03:45:02 PM
Hmmm...am I drunk?

Let's say I am a rich guy with millions of NXT. That means I will forge a block quite often. When someone forges a new block, my counter in the client gets updated.
If the new value of the counter is very low, say 15 seconds, I can be quite confident that I will forge the next block. So I create a transaction and transfer some NXT to an exchange.
When I forge the block, my transaction gets included in it, and within a matter of minutes it is validated and I can use my NXT on that exchange.
I convert the NXT on the exchange into BTC and transfer the BTC to some wallet.
Now the dark part of my mind thinks: I got the BTC, but I would like to have my NXT back too. If only I could eliminate the transaction to the exchange...
Well, I forged the block, so in a certain sense I own it. Let's change the data of the block!
1) I throw out the transaction to the exchange and update the relevant fields (numberOfTransactions, totalAmount, totalFee, payloadLength, payloadHash).
2) I leave the timestamp as it is so nobody gets suspicious.
3) Finally, I re-sign the block (i.e., change the block signature).
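To make those three steps concrete, here is a toy, self-contained sketch of the tampering. This is not the Nxt Block class; the field names mirror the ones discussed below, while ToyTx, TamperSketch and signWithForgerKey are made up for illustration.
Code:
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the tampering described above. This is NOT the Nxt Block class;
// the field names mirror the ones discussed in this thread, everything else is made up.
public class TamperSketch {

    static class ToyTx {
        final long amount, fee;
        final byte[] bytes;
        ToyTx(long amount, long fee, byte[] bytes) { this.amount = amount; this.fee = fee; this.bytes = bytes; }
    }

    public static void main(String[] args) throws Exception {
        // The block I just forged, containing my transfer to the exchange plus one other tx.
        List<ToyTx> transactions = new ArrayList<>();
        ToyTx toExchange = new ToyTx(1_000_000, 1, new byte[128]);
        transactions.add(toExchange);
        transactions.add(new ToyTx(500, 1, new byte[128]));

        // 1) Throw out the transaction to the exchange and update the bookkeeping fields.
        transactions.remove(toExchange);
        int numberOfTransactions = transactions.size();
        long totalAmount = transactions.stream().mapToLong(tx -> tx.amount).sum();
        long totalFee = transactions.stream().mapToLong(tx -> tx.fee).sum();
        int payloadLength = transactions.stream().mapToInt(tx -> tx.bytes.length).sum();
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        for (ToyTx tx : transactions) sha256.update(tx.bytes);
        byte[] payloadHash = sha256.digest();

        // 2) The timestamp is left untouched so nothing looks suspicious.

        // 3) Re-sign the block. The signature covers the new contents but is still made
        //    with the original forger's key, so verifyBlockSignature() keeps passing.
        byte[] blockSignature = signWithForgerKey(payloadHash);

        System.out.printf("tampered block: %d txs, amount %d, fee %d, payload %d bytes%n",
                numberOfTransactions, totalAmount, totalFee, payloadLength);
    }

    // Stand-in for the real signing call in the client (hypothetical helper).
    static byte[] signWithForgerKey(byte[] data) { return data.clone(); }
}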

Since I have a lot of NXT, my public node is attractive. Many peers ask it for blocks, and in some cases for older blocks too (we all know how often we have had to start from scratch Wink ).
Will a peer accept my corrupted block? Let's check. The peer on the other side calls pushBlock to test whether the block is acceptable. In pushBlock the client checks:

Code:
if (blockTimestamp > curTime + 15 || blockTimestamp <= Block.getLastBlock().timestamp)
{
    return false;
}
if ((payloadLength > 32640) || (224 + payloadLength != buffer.capacity()))
{
    return false;
}
No problem: we didn't change the blockTimestamp, and the payloadLength got updated.

Code:
if (block.previousBlock != lastBlock ||
    blocks.get(block.getId()) != null ||
    !block.verifyGenerationSignature() ||
    !block.verifyBlockSignature())
{
    return false;
}
No problem: the generation signature didn't change, and the new block signature still verifies because we made it with the forger's own key.
Next, the peer checks the transactions. Since we simply deleted our transaction, it will not complain. It will also not complain when checking the balances later.
The numberOfTransactions, totalAmount and totalFee got updated, so
Code:
if (i != block.numberOfTransactions ||
    calculatedTotalAmount != block.totalAmount ||
    calculatedTotalFee != block.totalFee)
{
    return false;
}
should not be a problem either.
The payloadHash got updated too, so
Code:
if (!Arrays.equals(digest.digest(), block.payloadHash))
{
    return false;
}
should be ok.
That's it, the peer has no way to find out that the block is corrupted.

The more clients receive the corrupted block, the more the network is corrupted. If rich NXT accounts collude, the situation gets even worse because it's easier to distribute the corrupted block.
My transaction is gone, and the guy running the exchange (let's call it DGOX), who is struggling to manage all the transfers on his exchange manually, will at some point ask himself why some NXT are missing.

If my analysis is correct, this is a serious bug in version 0.4.7e.
sr. member
Activity: 602
Merit: 268
Internet of Value
January 08, 2014, 03:39:16 PM
I have a question regarding referencedTransactions. A referencedTransaction is used so that a transaction only gets confirmed once the transaction it references is already confirmed.

Okay, let's say I create a transaction T2 which has T1 as its referencedTransaction. T1 is in the latest block, so T2 gets confirmed in the next block. What if the block containing T1 is on a fork and gets orphaned later? T1 gets put back into the unconfirmed transaction list. Now it could be that its deadline has expired and the transaction gets lost, while T2 is in the main chain and confirmed. So the referencedTransaction scheme does not work in that case?

If the block with T1 is orphaned then the block with T2 is orphaned too.
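For illustration, here is a minimal sketch of the rule described above, assuming validation has access to the set of already-confirmed transaction ids. The class, the method and the "referencedTransaction == 0 means no reference" convention are assumptions, not the actual 0.4.7e code.
Code:
import java.util.Set;

// Minimal sketch (not the actual Nxt 0.4.7e code) of the rule described above:
// a transaction that references another one is only acceptable in a block if the
// referenced transaction is already in the chain (or earlier in the same block).
// Because of this rule, if the block holding T1 is orphaned, every later block
// holding T2 becomes invalid on the surviving chain as well.
public class ReferencedTxSketch {

    // referencedTransaction == 0 is treated here as "no reference" (an assumption).
    static boolean isAcceptable(long referencedTransaction,
                                Set<Long> confirmedTxIds,
                                Set<Long> earlierTxIdsInSameBlock) {
        if (referencedTransaction == 0) {
            return true;
        }
        return confirmedTxIds.contains(referencedTransaction)
                || earlierTxIdsInSameBlock.contains(referencedTransaction);
    }

    public static void main(String[] args) {
        long t1 = 111L;
        // T1 confirmed -> T2 referencing it is acceptable.
        System.out.println(isAcceptable(t1, Set.of(t1), Set.of()));  // true
        // T1's block orphaned, T1 no longer confirmed -> T2 is rejected.
        System.out.println(isAcceptable(t1, Set.of(), Set.of()));    // false
    }
}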

FYI, this is actually how Satoshi Dice sends semi-instant payouts. They include your payment as an input to the payout, so if you try to double-spend, their transaction back to you is invalidated when your transaction is invalidated. Likewise, if your TX ends up in an orphaned block, their payment will subsequently be part of an orphaned chain.

So... semi-instant transactions are possible under certain circumstances.

Actually, one of my small "test the waters using NXT in a service"-projects is using that form for instant payments... works like a charm so far Smiley
https://nxtschrodinger.appspot.com if you want to check it out Wink

Super fast payout. I love it rico, but I just won some NXT from you  Grin
newbie
Activity: 56
Merit: 0
January 08, 2014, 02:44:09 PM
I have a question regarding referencedTransactions. A referencedTransaction is used so that a transaction only gets confirmed once the transaction it references is already confirmed.

Okay, let's say I create a transaction T2 which has T1 as its referencedTransaction. T1 is in the latest block, so T2 gets confirmed in the next block. What if the block containing T1 is on a fork and gets orphaned later? T1 gets put back into the unconfirmed transaction list. Now it could be that its deadline has expired and the transaction gets lost, while T2 is in the main chain and confirmed. So the referencedTransaction scheme does not work in that case?

If the block with T1 is orphaned then the block with T2 is orphaned too.

FYI, this is actually how Satoshi Dice sends semi-instant payouts. They include your payment as an input to the payout, so if you try to double-spend, their transaction back to you is invalidated when your transaction is invalidated. Likewise, if your TX ends up in an orphaned block, their payment will subsequently be part of an orphaned chain.

So... semi-instant transactions are possible under certain circumstances.

Actually, one of my small "test the waters using NXT in a service"-projects is using that form for instant payments... works like a charm so far Smiley
https://nxtschrodinger.appspot.com if you want to check it out Wink
legendary
Activity: 1470
Merit: 1004
January 08, 2014, 12:52:26 PM
I can't answer all the questions, coz this may reveal the flaw. Sorry.
I have a strong feeling that everything bad is inside this spaghetti thread Cheesy
But I can't see anything except potential bugs with synchronization, which should already be fixed by Jean-Luc.

Well, I already revealed that the fatal flaw will bring a copycoin to death within a day. Use this hint.

That narrows it down to 1440 possibilities  Smiley
hero member
Activity: 784
Merit: 501
January 08, 2014, 12:21:57 PM
I can't answer all the questions, coz this may reveal the flaw. Sorry.
I have a strong feeling that everything bad is inside this spaghetti thread Cheesy
But I can't see anything except potential bugs with synchronization, which should already be fixed by Jean-Luc.

Well, I already revealed that the fatal flaw will bring a copycoin to death within a day. Use this hint.
I do remember this hint.
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 12:17:06 PM
I can't answer all the questions, coz this may reveal the flaw. Sorry.
I have a strong feeling that everything bad is inside this spaghetti thread Cheesy
But I can't see anything except potential bugs with synchronization, which should already be fixed by Jean-Luc.

Well, I already revealed that the fatal flaw will bring a copycoin to death within a day. Use this hint.
hero member
Activity: 784
Merit: 501
January 08, 2014, 12:15:35 PM
I can't answer all the questions, coz this may reveal the flaw. Sorry.
I have a strong feeling that everything bad is inside this spaghetti thread Cheesy
But I can't see anything except potential bugs with synchronization, which should already be fixed by Jean-Luc.
hero member
Activity: 784
Merit: 501
January 08, 2014, 12:13:04 PM
In line 4594
Code:
if (Block.pushBlock(buffer, false)) {
the incoming blocks are already pushed once,
and in line 4609
Code:
futureBlocks.add(block);
they are also added to futureBlocks.
No.
pushBlock() returns true if it actually pushed the block onto the chain, and false if not.
So the block is added to futureBlocks only if it was not pushed onto the chain.
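In other words, the control flow presumably looks like this toy model. Only the push-else-queue pattern mirrors the answer above; the pushBlock here is a made-up stand-in, not the real one.
Code:
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Toy model of the pattern described above (push first, queue only on failure).
// Everything here is made up for illustration; only the control flow mirrors the answer.
public class FutureBlocksSketch {

    static final List<ByteBuffer> futureBlocks = new ArrayList<>();

    // Stand-in for Block.pushBlock(buffer, false): pretend only "even" blocks attach.
    static boolean pushBlock(ByteBuffer buffer) {
        return buffer.getInt(0) % 2 == 0;
    }

    static void handleIncomingBlock(ByteBuffer buffer) {
        if (pushBlock(buffer)) {
            // Accepted: the block is now on the chain, nothing else to do here.
            return;
        }
        // Not accepted (e.g. its previous block is still missing): keep it for a later
        // retry, which matches "added to futureBlocks only if it is not pushed to chain".
        futureBlocks.add(buffer);
    }

    public static void main(String[] args) {
        handleIncomingBlock(ByteBuffer.allocate(4).putInt(0, 2)); // pushed
        handleIncomingBlock(ByteBuffer.allocate(4).putInt(0, 3)); // queued
        System.out.println("queued future blocks: " + futureBlocks.size()); // 1
    }
}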
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 12:11:47 PM
In line 4594
Code:
if (Block.pushBlock(buffer, false)) {
the incoming blocks are already pushed once,
and in line 4609
Code:
futureBlocks.add(block);
they are also added to futureBlocks.

Then in line 4637, without any further checking of the incoming blocks,
Code:
Block.saveBlocks("blocks.nxt.bak");
Transaction.saveTransactions("transactions.nxt.bak");
note: the incoming blocks are already in blocks at this point.

In line 4642
Code:
while (lastBlock != commonBlockId && Block.popLastBlock()) { }
I think this is meant to ensure the blockchain's consistency, but at this moment lastBlock != commonBlockId is inevitable, because futureBlocks is not empty and we already pushed those blocks into blocks at line 4594 (see the beginning of this post), so we pop them out again  Huh  I don't understand why we pop the elements we just pushed in.

Once we get lastBlock == commonBlockId, we go on,
and in line 4660
Code:
if (!Block.pushBlock(buffer, false))
we push the futureBlocks into blocks a second time. So push, pop, and push again  Huh
After all of this is done,
we reach here:
Code:
if (Block.getLastBlock().cumulativeDifficulty.compareTo(curCumulativeDifficulty) < 0) {

    Block.loadBlocks("blocks.nxt.bak");
    Transaction.loadTransactions("transactions.nxt.bak");

    peer.blacklist_UserBroadCast();

}
This checks the cumulativeDifficulty; if the check is not satisfied, we load back the blocks and transactions we saved at line 4637.
But blocks.nxt.bak already contains all the blocks from futureBlocks, so won't we restore all the incoming blocks (futureBlocks) along with all the old blocks?

Do I miss something? I'm just stuck here. Please point it out, thanks.

I can't answer all the questions, coz this may reveal the flaw. Sorry.
hero member
Activity: 784
Merit: 501
January 08, 2014, 12:09:50 PM
I can just imagine that the node makes a number of requests in the loading thread, and between those requests the peer can change the tail of its blockchain, simply because there's no such thing as a "loading session".
In the best case the peer just adds a new block. In the worst case it can switch to another fork below the common block.
It's too late here, so I refuse to analyse what we get in that case Smiley
To protect the "loading session" we could add a response parameter "lastBlockHash" to all requests after "getCumulativeDifference"; if it changes, we abandon all blocks and changes we already got from that peer.
newbie
Activity: 26
Merit: 0
January 08, 2014, 12:05:21 PM
In line 4594
Code:
if (Block.pushBlock(buffer, false)) {
the incoming blocks are already pushed once,
and in line 4609
Code:
futureBlocks.add(block);
they are also added to futureBlocks.

Then in line 4637, without any further checking of the incoming blocks,
Code:
Block.saveBlocks("blocks.nxt.bak");
Transaction.saveTransactions("transactions.nxt.bak");
note: the incoming blocks are already in blocks at this point.

In line 4642
Code:
while (lastBlock != commonBlockId && Block.popLastBlock()) { }
I think this is meant to ensure the blockchain's consistency, but at this moment lastBlock != commonBlockId is inevitable, because futureBlocks is not empty and we already pushed those blocks into blocks at line 4594 (see the beginning of this post), so we pop them out again  Huh  I don't understand why we pop the elements we just pushed in.

Once we get lastBlock == commonBlockId, we go on,
and in line 4660
Code:
if (!Block.pushBlock(buffer, false))
we push the futureBlocks into blocks a second time. So push, pop, and push again  Huh
After all of this is done,
we reach here:
Code:
if (Block.getLastBlock().cumulativeDifficulty.compareTo(curCumulativeDifficulty) < 0) {

    Block.loadBlocks("blocks.nxt.bak");
    Transaction.loadTransactions("transactions.nxt.bak");

    peer.blacklist_UserBroadCast();

}
This checks the cumulativeDifficulty; if the check is not satisfied, we load back the blocks and transactions we saved at line 4637.
But blocks.nxt.bak already contains all the blocks from futureBlocks, so won't we restore all the incoming blocks (futureBlocks) along with all the old blocks?

Do I miss something? I'm just stuck here. Please point it out, thanks.
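For readers following along, here is a toy walk-through of the order of operations described in this post. The types are made up, and only the sequence (save a backup, pop back to the common block, push the peer's blocks, restore on lower difficulty) mirrors the quoted lines; it does not answer the question above.
Code:
import java.util.ArrayList;
import java.util.List;

// Toy walk-through of the sequence described above. All types here are made up; only
// the order of steps mirrors the Nxt.java lines quoted in this post (4637, 4642, 4660,
// and the final cumulativeDifficulty check).
public class ReorgSketch {
    public static void main(String[] args) {
        List<Integer> chain = new ArrayList<>(List.of(1, 2, 3, 4, 5)); // our block ids
        int commonBlock = 3;
        List<Integer> peerBlocks = List.of(6, 7);                      // peer's fork after block 3
        long curCumulativeDifficulty = chain.size();                   // stand-in for the BigInteger difficulty

        List<Integer> backup = new ArrayList<>(chain);                 // ~line 4637: saveBlocks("blocks.nxt.bak")

        while (chain.get(chain.size() - 1) != commonBlock) {           // ~line 4642: pop back to the common block
            chain.remove(chain.size() - 1);
        }
        chain.addAll(peerBlocks);                                      // ~line 4660: push the peer's blocks

        if (chain.size() < curCumulativeDifficulty) {                  // final check: did difficulty get worse?
            chain = backup;                                            // restore the backup, blacklist the peer
        }
        System.out.println(chain);                                     // [1, 2, 3, 6, 7]
    }
}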
hero member
Activity: 784
Merit: 501
January 08, 2014, 11:11:32 AM
Seems like there's an attack vector here. Not very deadly, but...

When some peer asks the attacker's node for its cumulative difficulty, nothing prevents that node from asking the peer's difficulty at the same time and returning some greater number.
Then, on the milestone request from the same peer, the attacker returns just one milestone block id, 720 ids below the latest block.
Then, on the request for the next blocks, the attacker returns just one block, keeping itself free from heavy traffic.
Note that this attack doesn't break the attacked node, its blockchain, or even its cumulativeDifficulty, which is calculated dynamically from the blockchain. It just shrinks the blockchain on the attacked node, stripping the last 720 blocks from it.
One second later the attacked node runs the same loop again and asks another, good peer for those 720 blocks, generating some amount of traffic between good nodes.
The attacker's node will not be blacklisted. This attack will be almost invisible in clients; there will be some jumps in the last blocks for a second, no more.

Not a very dangerous attack Smiley Just some thoughts.
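A toy sketch of that behaviour from the attacker's side; the method names and types are invented stand-ins for the three peer requests mentioned above, not actual Nxt API calls.
Code:
import java.math.BigInteger;
import java.util.List;

// Toy sketch of the behaviour described above, from the attacker's side. The method
// names and types are invented; they only stand in for the three peer requests
// discussed in this thread (cumulative difficulty, milestone ids, next blocks).
public class ShrinkingPeerSketch {

    // 1) Claim a difficulty slightly above whatever the victim just reported to us.
    static BigInteger reportCumulativeDifficulty(BigInteger victimDifficulty) {
        return victimDifficulty.add(BigInteger.ONE);
    }

    // 2) Return a single "milestone" id that sits 720 blocks below the victim's tip,
    //    so the victim picks a common block deep in its own chain.
    static List<Long> reportMilestoneBlockIds(long victimBlockIdAtHeightMinus720) {
        return List.of(victimBlockIdAtHeightMinus720);
    }

    // 3) Hand over just one block, so the attacker itself barely spends any traffic;
    //    the victim then has to re-fetch ~720 blocks from honest peers.
    static List<byte[]> reportNextBlocks() {
        return List.of(new byte[224]);
    }

    public static void main(String[] args) {
        System.out.println(reportCumulativeDifficulty(BigInteger.valueOf(1_000_000)));
        System.out.println(reportMilestoneBlockIds(123456789L));
        System.out.println(reportNextBlocks().size());
    }
}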
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 10:50:09 AM
The problem I'm trying to point out is that in steps 2 and 3 we produce a lot of unnecessary traffic. Yes, most of the time it stops at step 1, but when a new block arrives, the network starts to move a lot of bytes. And this traffic can be lowered.

Let's rewrite the algo. What's ur proposal?

1. Add lowestHeight to the "getMilestoneBlockIds" request.
Code:
int lowestHeight = Math.max(0, Block.getLastBlock().height-720);
Simply because we will not download blocks any deeper than that anyway.

2. In the response to "getMilestoneBlockIds", use different logic and send the first block ids tightly packed:

Code:
int lowestHeight=...;
int jumpLength=1;
int maxJumpLength = block.height * 4 / 1461 + 1;

while (block.height>=lowestHeight) {
    milestoneBlockIds.add(convert(block.getId()));
    for (int i = 0; i < jumpLength && block.height>=lowestHeight; i++) {
        block = blocks.get(block.previousBlock);
    }
    jumpLength=Math.min(jumpLength*2, maxJumpLength);
}
milestoneBlockIds.add(convert(block.getId())); // just to be sure...

This gives us about 12 ids in the milestone request and 2-4 ids in the getNextBlockIds request most of the time.

Thx, I'll try this after resolving the current issues.
hero member
Activity: 784
Merit: 501
January 08, 2014, 10:40:10 AM
The problem I'm trying to point out is that in steps 2 and 3 we produce a lot of unnecessary traffic. Yes, most of the time it stops at step 1, but when a new block arrives, the network starts to move a lot of bytes. And this traffic can be lowered.

Let's rewrite the algo. What's ur proposal?

1. Add lowestHeight to the "getMilestoneBlockIds" request.
Code:
int lowestHeight = Math.max(0, Block.getLastBlock().height-720);
Simply because we will not download blocks any deeper than that anyway.

2. In the response to "getMilestoneBlockIds", use different logic and send the first block ids tightly packed:

Code:
int lowestHeight=...;
int jumpLength=1;
int maxJumpLength = block.height * 4 / 1461 + 1;

while (block.height>=lowestHeight) {
    milestoneBlockIds.add(convert(block.getId()));
    for (int i = 0; i < jumpLength && block.height>=lowestHeight; i++) {
        block = blocks.get(block.previousBlock);
    }
    jumpLength=Math.min(jumpLength*2, maxJumpLength);
}
milestoneBlockIds.add(convert(block.getId())); // just to be sure...

This gives us about 12 ids in the milestone request and 2-4 ids in the getNextBlockIds request most of the time.
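A quick sanity check of the "about 12 ids" claim, simulating the proposed loop with plain heights instead of real blocks; the height of 100,000 is hypothetical, and only the loop structure is taken from the proposal above.
Code:
// Quick sanity check of the claim above, simulating the proposed loop with plain heights
// instead of real blocks. The numbers are hypothetical; only the loop structure is
// taken from the proposal.
public class MilestoneCountSketch {
    public static void main(String[] args) {
        int height = 100_000;                                   // hypothetical chain height
        int lowestHeight = Math.max(0, height - 720);           // step 1 of the proposal
        int jumpLength = 1;
        int maxJumpLength = height * 4 / 1461 + 1;

        int ids = 0;
        int h = height;
        while (h >= lowestHeight) {
            ids++;                                              // milestoneBlockIds.add(...)
            for (int i = 0; i < jumpLength && h >= lowestHeight; i++) {
                h--;                                            // walk one block down the chain
            }
            jumpLength = Math.min(jumpLength * 2, maxJumpLength);
        }
        ids++;                                                  // the final "just to be sure" id

        System.out.println("milestone ids sent: " + ids);       // roughly a dozen
    }
}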
legendary
Activity: 2142
Merit: 1010
Newbie
January 08, 2014, 10:21:50 AM
The problem I'm trying to point out is that in steps 2 and 3 we produce a lot of unnecessary traffic. Yes, most of the time it stops at step 1, but when a new block arrives, the network starts to move a lot of bytes. And this traffic can be lowered.

Let's rewrite the algo. What's ur proposal?
hero member
Activity: 784
Merit: 501
January 08, 2014, 10:20:21 AM
Yes, I failed at elementary math Cheesy
Code:
int jumpLength = block.height * 4 / 1461 + 1;

It results in about 365 blockIds every time (the number of ids is roughly height / jumpLength ≈ 1461 / 4 ≈ 365), so once a year has passed there will be one milestone id per day.
So the response to "getMilestoneBlockIds" stays more or less the same size, but the response to each "getNextBlockIds" gets longer and longer.
Are all the remaining assumptions correct?
I see a lot of unnecessary traffic, every second.

If u fetch blocks starting from the genesis block then yes, more and more traffic will be required.
Hmmm...
Let's "debug" it step-by-step.

1. Get peer's cumulativeDifficulty.
Okay, let's assume the peer has a higher difficulty, so it has more blocks than we do. And assume the simplest case: we are on the same chain, but the peer has a newer block or two.

2. Get milestones.
We get an array of ids. The first id in the array is the peer's latest block id. We definitely don't have that block, simply because we have a lower cumulative difficulty.
But the next block id is common to our node and the peer. As of today that is about 100 blocks down the chain.

3. Get next block ids starting from common.
We get about 100 ids in the response, then find that the common one is the 98th. Or the 99th.

4. Get new block or two from peer.

The problem I'm trying to point out is that in steps 2 and 3 we produce a lot of unnecessary traffic. Yes, most of the time it stops at step 1, but when a new block arrives, the network starts to move a lot of bytes. And this traffic can be lowered.
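As an outline, the four steps above might look roughly like this in code. Every method name here is an invented stand-in for the corresponding peer request, and only the order of the steps comes from this post.
Code:
import java.math.BigInteger;
import java.util.List;

// Outline of the four steps walked through above, as one iteration of the loading loop.
// Every method here is an invented stand-in for the corresponding peer request; only
// the order of the steps comes from this post.
public class SyncLoopSketch {

    interface Peer {
        BigInteger getCumulativeDifficulty();
        List<Long> getMilestoneBlockIds();
        List<Long> getNextBlockIds(long commonBlockId);
        List<byte[]> getNextBlocks(long commonBlockId);
    }

    static void syncOnce(Peer peer, BigInteger ourDifficulty, List<Long> ourBlockIds) {
        // 1. Most of the time it ends here: the peer is not ahead of us.
        if (peer.getCumulativeDifficulty().compareTo(ourDifficulty) <= 0) {
            return;
        }
        // 2. Find a milestone id we also have; typically one of the first few ids.
        long commonMilestone = peer.getMilestoneBlockIds().stream()
                .filter(ourBlockIds::contains)
                .findFirst()
                .orElseThrow();
        // 3. Ask for the block ids after that milestone and locate the last common one.
        List<Long> nextIds = peer.getNextBlockIds(commonMilestone);
        long commonBlockId = commonMilestone;
        for (long id : nextIds) {
            if (!ourBlockIds.contains(id)) break;
            commonBlockId = id;
        }
        // 4. Download and push the one or two blocks we are actually missing.
        List<byte[]> newBlocks = peer.getNextBlocks(commonBlockId);
        System.out.println("blocks to push: " + newBlocks.size());
    }

    public static void main(String[] args) {
        Peer peer = new Peer() {
            public BigInteger getCumulativeDifficulty() { return BigInteger.valueOf(101); }
            public List<Long> getMilestoneBlockIds()    { return List.of(9L, 7L, 3L); }
            public List<Long> getNextBlockIds(long c)   { return List.of(8L, 9L); }
            public List<byte[]> getNextBlocks(long c)   { return List.of(new byte[224]); }
        };
        // We are one block behind the peer: our chain ends at 8, the peer also has 9.
        syncOnce(peer, BigInteger.valueOf(100), List.of(3L, 5L, 7L, 8L));
    }
}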