
Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 95. (Read 450522 times)

hero member
Activity: 952
Merit: 501
maybe we need to rebuild the website?

sr. member
Activity: 448
Merit: 250
Ben2016
I see clear communication among our wonderful devs, along with solid determination and effort.... all great signs that good news will arrive soon  Smiley
sr. member
Activity: 476
Merit: 500
sr. member
Activity: 464
Merit: 260
Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server?  I checked the miner code and I don't see how it could send more than 32 bytes.

Took me a few days to find out, but the bug was the following:

The core server only allows a hash of at most 32 bytes in the BountyAnnouncement transaction. When something longer than that was submitted, the hash was simply set to null:

Code:
PiggybackedProofOfBountyAnnouncement(final ByteBuffer buffer, final byte transactionVersion)
        throws NxtException.NotValidException {
    super(buffer, transactionVersion);
    this.workId = buffer.getLong();
    final short hashSize = buffer.getShort();

    if ((hashSize > 0) && (hashSize <= Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES)) {
        this.hashAnnounced = new byte[hashSize];
        buffer.get(this.hashAnnounced, 0, hashSize);
    } else {
        // An oversized hash is silently dropped here: hashAnnounced becomes
        // null and the extra bytes stay unconsumed in the buffer.
        this.hashAnnounced = null;
    }
}

When signing such a transaction, which came in via HTTP, everything apparently went fine ... the core server just signed the BountyAnnouncement with an empty hash and submitted it. The verification passed as well, because Attachments have no verification of their own.

The problem happened in the rebroadcast. Imagine someone originally attached 33 bytes (instead of 32) as the hash: the hash would be nulled, i.e., parsed with length zero. But the original transaction is still broadcast in its original form, leaving 33 extra "unexpected bytes" after parsing the TX. This is exactly the error you posted earlier.

I fixed it and learned ... variable-length input sucks a lot. I tried to port many features to fixed-length inputs (as you can see from my commits today).

However, those 33 bytes must have come from somewhere, i.e., the miner. Maybe an extra byte for a minus sign?



Thx for the explanation.  Many months back we had this same issue and it was due to a negative sign in the miner... I thought I got rid of that issue.  I'll take a look again and get it fixed.
legendary
Activity: 1260
Merit: 1168
Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server?  I checked the miner code and I don't see how it could send more than 32 bytes.

Took me a few days to find out, but the bug was the following:

The core server only allows a hash of at most 32 bytes in the BountyAnnouncement transaction. When something longer than that was submitted, the hash was simply set to null:

Code:
PiggybackedProofOfBountyAnnouncement(final ByteBuffer buffer, final byte transactionVersion)
        throws NxtException.NotValidException {
    super(buffer, transactionVersion);
    this.workId = buffer.getLong();
    final short hashSize = buffer.getShort();

    if ((hashSize > 0) && (hashSize <= Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES)) {
        this.hashAnnounced = new byte[hashSize];
        buffer.get(this.hashAnnounced, 0, hashSize);
    } else {
        // An oversized hash is silently dropped here: hashAnnounced becomes
        // null and the extra bytes stay unconsumed in the buffer.
        this.hashAnnounced = null;
    }
}

When signing such a transaction, which came in via HTTP, everything apparently went fine ... the core server just signed the BountyAnnouncement with an empty hash and submitted it. The verification passed as well, because Attachments have no verification of their own.

The problem happened in the rebroadcast. Imagine someone originally attached 33 bytes (instead of 32) as the hash: the hash would be nulled, i.e., parsed with length zero. But the original transaction is still broadcast in its original form, leaving 33 extra "unexpected bytes" after parsing the TX. This is exactly the error you posted earlier.
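
For illustration, the failure mode boils down to a leftover-byte check after parsing, roughly like this (a minimal sketch; the class, method name and message are made up, not the actual elastic-core code):

Code:
import java.nio.ByteBuffer;

// Minimal sketch (hypothetical names): after parsing all fields of a TX,
// any bytes still left in the buffer indicate a malformed transaction and
// trigger the kind of "unexpected bytes" error described above.
final class ParseCheck {
    static void requireFullyParsed(final ByteBuffer buffer) throws NxtException.NotValidException {
        if (buffer.hasRemaining()) {
            // With a 33-byte announcement the hash parses as null (0 bytes),
            // so 33 "unexpected bytes" remain here after parsing.
            throw new NxtException.NotValidException(
                    buffer.remaining() + " unexpected bytes after parsing transaction");
        }
    }
}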

I fixed it and learned ... variable-length input sucks a lot. I tried to port many features to fixed-length inputs (as you can see from my commits today).
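
A minimal sketch of the fixed-length approach for the constructor above (illustrative only, not necessarily identical to the committed fix):

Code:
// Sketch of a fixed-length variant: no length prefix at all.
PiggybackedProofOfBountyAnnouncement(final ByteBuffer buffer, final byte transactionVersion)
        throws NxtException.NotValidException {
    super(buffer, transactionVersion);
    this.workId = buffer.getLong();
    // Always read exactly 32 bytes; a short buffer throws
    // BufferUnderflowException instead of silently nulling the hash
    // and leaving unconsumed bytes behind.
    this.hashAnnounced = new byte[Constants.MAX_HASH_ANNOUNCEMENT_SIZE_BYTES];
    buffer.get(this.hashAnnounced);
}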

However, those 33 bytes must have come from somewhere, i.e., the miner. Maybe an extra byte for a minus sign?

sr. member
Activity: 464
Merit: 260
Just thinking out loud: do we really need the bounty announcements in the new SN scheme?

It depends on how well the SN can handle a huge volume of submissions...but I believe it will have to do this either way.

Yesterday, I submitted a job that had an issue which allowed every pass of the interpreter to find a "Bounty", so the miner tried to send hundreds of submissions very quickly.... This is something anyone could do (i.e., create a simple job whose legitimate bounty submissions spam the SN).

That's why I thought your original design put a small fee on each of these submissions, along with the announcement, in order to deter this kind of behaviour.  But if you have another approach that simplifies things, I'd be all for it.



Well, first of all, a job has a natural bounty limit ... submissions beyond this limit are not permitted. But of course, there is a grace period between the submissions and their actual inclusion in the blockchain (or in the unconfirmed transaction cache). In this time window it is possible to flood as much as you can. We could add an "SN rate limit" which would allow no more than x transactions per node per second.

What sucks more is the lack of direct feedback from the SN. Since we queue at the moment, the miner does not even know whether his submission was dropped, accepted or denied. We really have to think this through! Is queuing the right way to go at all?

Btw: I could reproduce your bug today. I just could not yet find out why it happens.

Fixed that bug. We will have to make sure that hashes and multiplicators are at most 32 bytes long ... not 33 as could happen before.

Fix is here: https://github.com/OrdinaryDude/elastic-core/commit/b95596e572af659cb7355a68643a58098579109f
Extra checks in API are here: https://github.com/OrdinaryDude/elastic-core/commit/4870aa5c22786e27fbfdc37665a45e82f99410c9

Do not use that yet!

Hi EK, can you please confirm if this was due to something the miner sent, or is it an issue only in the core server?  I checked the miner code and I don't see how it could send more than 32 bytes.
legendary
Activity: 1260
Merit: 1168
Just thinking out loud: do we really need the bounty announcements in the new SN scheme?

It depends on how well the SN can handle a huge volume of submissions...but I believe it will have to do this either way.

Yesterday, I submitted a job that had an issue which allowed every pass of the interpreter to find a "Bounty", so the miner tried to send hundreds of submissions very quickly.... This is something anyone could do (i.e., create a simple job whose legitimate bounty submissions spam the SN).

That's why I thought your original design put a small fee on each of these submissions, along with the announcement, in order to deter this kind of behaviour.  But if you have another approach that simplifies things, I'd be all for it.



Well, first of all, a job has a natural bounty limit ... submissions beyond this limit are not permitted. But of course, there is a grace period between the submissions and their actual inclusion in the blockchain (or in the unconfirmed transaction cache). In this time window it is possible to flood as much as you can. We could add an "SN rate limit" which would allow no more than x transactions per node per second.
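
For illustration, such an SN rate limit could be as simple as a per-peer counter per second (a sketch with assumed names; nothing like this exists in the code yet):

Code:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative per-node rate limit: allow at most maxPerSecond transactions
// per peer per second. Purely a sketch of the idea, not part of elastic-core.
final class SnRateLimiter {
    private final int maxPerSecond;
    private final Map<String, long[]> buckets = new ConcurrentHashMap<>();

    SnRateLimiter(final int maxPerSecond) {
        this.maxPerSecond = maxPerSecond;
    }

    boolean allow(final String peerId) {
        final long now = System.currentTimeMillis() / 1000L; // current second
        final long[] bucket = buckets.computeIfAbsent(peerId, k -> new long[] { now, 0 });
        synchronized (bucket) {
            if (bucket[0] != now) { // new second: reset the counter
                bucket[0] = now;
                bucket[1] = 0;
            }
            return ++bucket[1] <= maxPerSecond; // accept only the first maxPerSecond tx
        }
    }
}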

What sucks more is the lack of direct feedback from the SN. Since we queue at the moment, the miner does not even know whether his submission was dropped, accepted or denied. We really have to think this through! Is queuing the right way to go at all?
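
One way to give at least drop/accept feedback would be a bounded queue whose offer result goes straight back into the miner's HTTP response (again just a sketch with made-up names):

Code:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a bounded submission queue that immediately tells the submitter
// whether his submission was queued or dropped, instead of failing silently.
final class SubmissionQueue<T> {
    enum Status { QUEUED, DROPPED }

    private final BlockingQueue<T> queue = new ArrayBlockingQueue<>(10_000);

    // offer() never blocks: it returns false when the queue is full, and that
    // result can be reported back to the miner in the HTTP response.
    Status submit(final T submission) {
        return queue.offer(submission) ? Status.QUEUED : Status.DROPPED;
    }
}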

Btw: I could reproduce your bug today. I just could not yet find out why it happens.

Fixed that bug. We will have to make sure that hashes and multiplicators are at most 32 bytes long ... not 33 as could happen before.

Fix is here: https://github.com/OrdinaryDude/elastic-core/commit/b95596e572af659cb7355a68643a58098579109f
Extra checks in API are here: https://github.com/OrdinaryDude/elastic-core/commit/4870aa5c22786e27fbfdc37665a45e82f99410c9

Do not use that yet!
legendary
Activity: 1526
Merit: 1034
When do these tokens finally launch? And any plans for which exchange?

Judging by the messages posted here between the devs and the problems they are facing, I'd say we are still months away

Honestly, not a huge issue. The more time spent making this project perfect, the more it'll be used and the more people will value it. I think a lot of people in the crypto world have very high hopes for what Elastic is going to be when it comes to fruition. Really excited to see what's to come.
legendary
Activity: 2165
Merit: 1002
When do these tokens finally launch? And any plans for which exchange?

Judging by the messages posted here between the devs and the problems they are facing, I'd say we are still months away
hero member
Activity: 2147
Merit: 518
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?

Is there any practical use for this? I mean, are there any parties that would be interested in posting jobs that need solving to the XEL network? And is there really a chance that somebody - a hypothetical person - has nothing better to do than submit dumb jobs, when they could simply put the entire Elastic thread on ignore and not bother about it?
ImI
legendary
Activity: 1946
Merit: 1019
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of the SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to verify what it does. Whether all SNs have to check all jobs, I don't know.

SNs are checked through guard nodes, afaik.
sr. member
Activity: 464
Merit: 260
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of the SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to verify what it does. Whether all SNs have to check all jobs, I don't know.

That is not correct; the current design does not require multiple SNs to validate a job.
hero member
Activity: 994
Merit: 513
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?

I think jobs must be checked by at least a portion of the SNs, otherwise an SN could send bad jobs, be it by accident or with malicious intent. So, to check an SN's integrity, other SNs have to verify what it does. Whether all SNs have to check all jobs, I don't know.
sr. member
Activity: 464
Merit: 260
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?

Yes, I can target all my submissions to a specific SN.  I'm not suggesting anyone do this, but we have to account for the fact that someone could.
full member
Activity: 294
Merit: 101
Aluna.Social
When do these tokens finally launch? And any plans for which exchange?
sr. member
Activity: 448
Merit: 250
Ben2016
Hi everyone, how high is the block height supposed to go on the Elastic Explorer?
hero member
Activity: 1022
Merit: 507
Is queuing the right way to go at all?

I don't really see any way around it, but maybe some other bright minds around here can come up with a solution.

But the bottom line is that we can easily have complex jobs that take considerable time to solve, and each submission would need to be validated.  For example, if I create a job that takes 0.1 sec to solve and I dump 1000 submissions onto an SN, that node would be tied up for almost 2 minutes (1000 x 0.1 s = 100 s) before it could do anything else.  And this will happen while people are dumping hundreds if not thousands of legitimate POW submissions on the SN.

I still don't know of a good solution... I still wish we didn't have POW in XEL and just focused on bounties.  I know you'll say we need the POW logic ;-)  I'm just not convinced, and I feel like it just complicates the validation / targeting / volume issues.

Are users allowed to dump submissions to nodes of their choice? With many SNs in the network, all those submissions could be handled by (probably) tens of SNs simultaneously, right?
Assuming the jobs are spread evenly between all SNs, is it really a problem at all?
legendary
Activity: 1848
Merit: 1334
just in case
@EK:
I created the Elastic-Coin GitHub organization. My suggestion: we keep the repos in this organization. I invited you with admin privileges.
https://github.com/elastic-coin

We can add more people and share the work.
If you accept the invite, you should start pushing to the elastic-coin repo. We will update all links to this organization's URL.
I will use release tags from your commits & I will update the Docker side on GitHub.

@EK, what do you think?
legendary
Activity: 1848
Merit: 1334
just in case
@EK: We could reduce the block time to 30 sec for testing purposes; we never get to see what happens beyond 30k blocks.
Btw, pls import my last changes to the repos.
Reducing the block time for testing is good advice Grin

Don't listen to him. Increase the block time to 2 minutes and put it on mainnet; you don't even need to test it. 2 min is ideal if you seek a trade-off between security and transaction propagation time. Reducing the block time opens the door to massive micro-tx spam, and the tx fee is ridiculously low atm, making spam attacks cheaper than ever before.
I said for testing purposes. You don't need to explain block time effects  Wink