Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 235. (Read 450524 times)

legendary
Activity: 1260
Merit: 1168
I think this sounds like a good approach and may finally give Elastic the performance that it needs.

As mentioned previously, I only put together my miner as a way to test the AST Parser I wrote (I simply wanted to learn about AST trees). However, I'll clean up my miner and get it posted to github this weekend if someone wants to rip out my AST logic and replace it with a C interpreter that you have described above.



The ideal would be to use your AST parser to generate C code on the fly, which is then compiled and linked into a library that the miner in turn uses to execute the logic ;-) A polymorphic miner, basically, changing its own code depending on the live work.
On-the-fly inline assembly would be great (and geeky) as well, but I think that's way too hard for now.

If you post your code to the git, I will try to "hack in" the library approach shortly after ;-)
sr. member
Activity: 464
Merit: 260
Just finished a rudimentary, not-yet-tested Elastic-to-C compiler.
See the current ElasticPL GitHub tree.

Just do
Code:
java -jar ElasticToCCompiler.jar yourprogram.spl

The test program, written in ElasticPL
Code:
m[18]=-(1>2);
m[19]=-2424;
m[19]=2424;
m[20]=1-2424;
m[21]=--2424;
m[21]=-3;

verify m[1]<=50000;

converts to (and saves as yourprogram.spl.c)
Code:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int32_t m[64000];

int execute();
int main(){
clock_t start, end;
int counter = 0;
start = clock();
while(1==1){end = clock(); execute(); counter=counter+1;
if((double)(end-start)/CLOCKS_PER_SEC >=1) break;
}
printf("BENCHMARK: %d evaluations per second.\n",counter);
}

int execute(){
m[18] = (-1*((((1) > (2))?1:0)));
m[19] = (-1*(2424));
m[19] = 2424;
m[20] = ((1) - (2424));
m[21] = (-1*((-1*(2424))));
m[21] = (-1*(3));
return ((((m[1]) <= (50000))?1:0));
}


... aaaaand executes 6 MILLION TIMES PER SECOND

(at least for now without the sha256 pow check, but written in pure C this will be no real "overhead")

Code:
beavis@methusalem ~/Development/elastic-pl (git)-[master] % ./a.out                                   
BENCHMARK: 6359785 evaluations per second.

Now, a mining application can link that C "library" and use it for mining! Somehow ... of course the PoW check and so on is not yet included in the estimate!

I think this sounds like a good approach and may finally give Elastic the performance that it needs.

As mentioned previously, I only put together my miner as a way to test the AST Parser I wrote (I simply wanted to learn about AST trees). However, I'll clean up my miner and get it posted to github this weekend if someone wants to rip out my AST logic and replace it with a C interpreter that you have described above.

legendary
Activity: 1260
Merit: 1168
Btw. I followed HunterMinerCrafter's advice and entirely dropped the Boolean type. We are working with word-width types only.
legendary
Activity: 1260
Merit: 1168
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built ast parser:

1) The good - it's faster.  My C miner is running your latest changes at about 60K evals/sec

Might be the solution to all evil!!!

Just finished a rudimentary, not-yet-tested Elastic-to-C compiler.
See the current ElasticPL GitHub tree.

Just do
Code:
java -jar ElasticToCCompiler.jar yourprogram.spl

The test program, written in ElasticPL
Code:
m[18]=-(1>2);
m[19]=-2424;
m[19]=2424;
m[20]=1-2424;
m[21]=--2424;
m[21]=-3;

verify m[1]<=50000;

converts to (and saves as yourprogram.spl.c)
Code:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int32_t m[64000];

int execute();
int main(){
clock_t start, end;
int counter = 0;
start = clock();
while(1==1){end = clock(); execute(); counter=counter+1;
if((double)(end-start)/CLOCKS_PER_SEC >=1) break;
}
printf("BENCHMARK: %d evaluations per second.\n",counter);
}

int execute(){
m[18] = (-1*((((1) > (2))?1:0)));
m[19] = (-1*(2424));
m[19] = 2424;
m[20] = ((1) - (2424));
m[21] = (-1*((-1*(2424))));
m[21] = (-1*(3));
return ((((m[1]) <= (50000))?1:0));
}


... aaaaand executes 6 MILLION TIMES PER SECOND

(at least for now without the sha256 pow check, but written in pure C this will be no real "overhead")

Code:
beavis@methusalem ~/Development/elastic-pl (git)-[master] % ./a.out                                   
BENCHMARK: 6359785 evaluations per second.

Now, a mining application can link that C "library" and use it for mining! Somehow ... of course the PoW check and so on is not yet included in the estimate!



EDIT:
Also, loops are treated correctly:

Code:
m[18]=-(1>2);
m[19]=-2424;
m[19]=2424;
m[20]=1-2424;
m[21]=--2424;
m[21]=-3;
if(m[1]==1){
repeat(5){
m[1]=m[2]<<3;
}
}

verify (m[1] and 255)<35000;

becomes

Code:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int32_t m[64000];

int execute();
int main(){
clock_t start, end;
int counter = 0;
start = clock();
while(1==1){end = clock(); execute(); counter=counter+1;
if((double)(end-start)/CLOCKS_PER_SEC >=3) break;
}
printf("BENCHMARK: %d evaluations per second.\n",counter/3);
}

int execute(){
m[18] = (-1*((((1) > (2))?1:0)));
m[19] = (-1*(2424));
m[19] = 2424;
m[20] = ((1) - (2424));
m[21] = (-1*((-1*(2424))));
m[21] = (-1*(3));
if (((((m[1]) == (1))?1:0)) !=0 ) {int loop1 = 0;
for (; loop1 < (5); ++loop1) {m[1] = ((m[2]) << (3));
}}
return ((((((m[1]) & (255))) < (35000))?1:0));
}


Aaaand things may become complicated in C, though I suspect GCC will level that out anyway.
An example is our "fail-safe modulo function".

Elastic PL:
Code:
m[0]=m[1]%1;

C:
Code:
m[0] = (((1) != 0)?(((m[1]) % (1))):0);
legendary
Activity: 1260
Merit: 1168
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built ast parser:

1) The good - it's faster.  My C miner is running your latest changes at about 60K evals/sec


Yeah, still poor! I guess the best thing you could do is generate C code directly from the ElasticPL AST. This should be trivial; of course, several things would have to be handled correctly ;-)
For example, "--2424" (which at the moment is actually valid ElasticPL) would be converted to -1*(-1*(2424)).

But if we came up with an ElasticPL-to-C or ElasticPL-to-OpenCL converter, we would see millions of evals/sec.
sr. member
Activity: 464
Merit: 260
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built ast parser:

1) The good - it's faster.  My C miner is running your latest changes at about 60K evals/sec

2) Bounties seem to be working correctly.

3) m[21]=--2424;  Why does your parser allow this?  Mine failed, then I tried it in Visual Studio and it failed as well.  Is this really valid in ElasticPL?

4) However, I'm not convinced mangle_state is working correctly.  I walked your sample ElasticPL through the interpreter line by line, and to me it doesn't look like it fires off the mangle_state for '<', '>', '&', etc.  This is the good thing about comparing your code to mine, as it's a truly independent test, but I don't know if the bug is in your code or mine.  Mine is pretty literal and fires off for each operator...but yours seems to skip some.  Not sure if this will be an issue or not.

Now that POW is running fast again, I'll spend some time this weekend to see if all my earlier concerns are resolved.  For example, in the old version, I was able to manually feed canceled work packages into my miner and get paid for POW...still need to test this in the new version.

legendary
Activity: 1260
Merit: 1168
Code:
$ git pull
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 68 (delta 33), reused 4 (delta 4), pack-reused 7
Unpacking objects: 100% (68/68), done.
From https://github.com/OrdinaryDude/elastic-miner
   7f9cea7..84ae187  master     -> origin/master
   7b77987..57e2e6b  development -> origin/development
Updating 7f9cea7..84ae187
Fast-forward
 lib/ElasticPL.jar                  | Bin 0 -> 131216 bytes
 lib/bcprov-ext-jdk15on-155.jar     | Bin 0 -> 3466487 bytes
 lib/gnu-crypto.jar                 | Bin 0 -> 598036 bytes
 lib/javax-crypto.jar               | Bin 0 -> 96430 bytes
 lib/javax-security.jar             | Bin 0 -> 16969 bytes
 miner.jar                          | Bin 7401526 -> 7367002 bytes
 src/elastic_miner/CryptoStuff.java |   3 ++
 src/elastic_miner/Main.java        |  60 ++++++++++++++++++++++---------------
 8 files changed, 39 insertions(+), 24 deletions(-)
 create mode 100644 lib/ElasticPL.jar
 create mode 100644 lib/bcprov-ext-jdk15on-155.jar
 create mode 100644 lib/gnu-crypto.jar
 create mode 100644 lib/javax-crypto.jar
 create mode 100644 lib/javax-security.jar
$ ./compile.sh
javac: file not found: src/evil/ElasticPL/*.java
Usage: javac
use -help for a list of possible options

idea ?




Found in elastic-reference-client/

No, this will fail because you are pulling in the wrong ElasticPL version. Just remove the ElasticPL part from the compile script, or use the binary for now! I will fix the script in a few minutes.
full member
Activity: 168
Merit: 100
Code:
$ git pull
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 68 (delta 33), reused 4 (delta 4), pack-reused 7
Unpacking objects: 100% (68/68), done.
From https://github.com/OrdinaryDude/elastic-miner
   7f9cea7..84ae187  master     -> origin/master
   7b77987..57e2e6b  development -> origin/development
Updating 7f9cea7..84ae187
Fast-forward
 lib/ElasticPL.jar                  | Bin 0 -> 131216 bytes
 lib/bcprov-ext-jdk15on-155.jar     | Bin 0 -> 3466487 bytes
 lib/gnu-crypto.jar                 | Bin 0 -> 598036 bytes
 lib/javax-crypto.jar               | Bin 0 -> 96430 bytes
 lib/javax-security.jar             | Bin 0 -> 16969 bytes
 miner.jar                          | Bin 7401526 -> 7367002 bytes
 src/elastic_miner/CryptoStuff.java |   3 ++
 src/elastic_miner/Main.java        |  60 ++++++++++++++++++++++---------------
 8 files changed, 39 insertions(+), 24 deletions(-)
 create mode 100644 lib/ElasticPL.jar
 create mode 100644 lib/bcprov-ext-jdk15on-155.jar
 create mode 100644 lib/gnu-crypto.jar
 create mode 100644 lib/javax-crypto.jar
 create mode 100644 lib/javax-security.jar
$ ./compile.sh
javac: file not found: src/evil/ElasticPL/*.java
Usage: javac
use -help for a list of possible options

idea ?




Found in elastic-reference-client/
full member
Activity: 168
Merit: 100
Code:
$ git pull
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 68 (delta 33), reused 4 (delta 4), pack-reused 7
Unpacking objects: 100% (68/68), done.
From https://github.com/OrdinaryDude/elastic-miner
   7f9cea7..84ae187  master     -> origin/master
   7b77987..57e2e6b  development -> origin/development
Updating 7f9cea7..84ae187
Fast-forward
 lib/ElasticPL.jar                  | Bin 0 -> 131216 bytes
 lib/bcprov-ext-jdk15on-155.jar     | Bin 0 -> 3466487 bytes
 lib/gnu-crypto.jar                 | Bin 0 -> 598036 bytes
 lib/javax-crypto.jar               | Bin 0 -> 96430 bytes
 lib/javax-security.jar             | Bin 0 -> 16969 bytes
 miner.jar                          | Bin 7401526 -> 7367002 bytes
 src/elastic_miner/CryptoStuff.java |   3 ++
 src/elastic_miner/Main.java        |  60 ++++++++++++++++++++++---------------
 8 files changed, 39 insertions(+), 24 deletions(-)
 create mode 100644 lib/ElasticPL.jar
 create mode 100644 lib/bcprov-ext-jdk15on-155.jar
 create mode 100644 lib/gnu-crypto.jar
 create mode 100644 lib/javax-crypto.jar
 create mode 100644 lib/javax-security.jar
$ ./compile.sh
javac: file not found: src/evil/ElasticPL/*.java
Usage: javac
use -help for a list of possible options

idea ?


legendary
Activity: 1260
Merit: 1168
EK, I upgraded elastic-core, but I'm not seeing any peers. 

Do you need to update your amazon aws nodes to 0.5.0?  If so, once upgraded please send a few test xel to XEL-8DES-WHKN-M6SZ-8JRP5 so I can submit some work packages.

Yes, the hard fork to 0.5.0 was not yet done ... check your PM for the genesis account info ... you can use it to submit work on your local testnet node.
sr. member
Activity: 464
Merit: 260
EK, I upgraded elastic-core, but I'm not seeing any peers. 

Do you need to update your amazon aws nodes to 0.5.0?  If so, once upgraded please send a few test xel to XEL-8DES-WHKN-M6SZ-8JRP5 so I can submit some work packages.
legendary
Activity: 1260
Merit: 1168
The simple solution here may be to use your proposal with a caveat that the target can't ever go below a certain level...maybe 0x0000FFFFFFF....  This may suck for the high-WCET packages, but if the author sets the rewards correctly, this would work out.  (i.e. if their WCET is 10x the average, then you would expect their POW reward to be 10x the average, or miners will go to other packages).

This is what I was thinking when I said "minimum possible diff" ... I'd rather say "minimum allowed diff". The unconfirmed PoW limit should contain any burst that may happen due to "new miners" entering the arena.

Still not sure whether this scheme is solid at all, or how it could be attacked!
sr. member
Activity: 464
Merit: 260
lol...thx EK  Smiley

First, I completely agree that the target should be per work package.  However, I don't know if your proposal gets rid of the burst of POW submissions for low difficulty work.  If no POW has been submitted yet, the difficulty lowers automatically, but the issue is whether there is no POW because it's too difficult vs no one mining that package.  If no one is mining it, you would still get the burst of low difficulty POW when people jump on board.

The simple solution here may be to use your proposal with a caveat that the target can't ever go below a certain level...maybe 0x0000FFFFFFF....  This may suck for the high-WCET packages, but if the author sets the rewards correctly, this would work out.  (i.e. if their WCET is 10x the average, then you would expect their POW reward to be 10x the average, or miners will go to other packages).
legendary
Activity: 1260
Merit: 1168
Just in case Lannister does a bounty hunting ;-)

Here is my first submission of an idea; it is probably really bad and up for discussion:

1. Every work has its own target value
2. Once a work is created, the target value gets initialized with the average final target value of the last 10 closed jobs.
3. As long as no PoW submission is made, the difficulty drops very quickly, so that within at most 5 blocks it reaches the "least possible difficulty" (to help readjust to a changed miner population)
4. There can be only 20 unconfirmed PoW submissions in the memory pool (and in each block) at most per work.
5. The retargeting mechanism reacts quickly, per block, adapting to a target value that results in, on average, 10 PoW submissions per block

This approach:
- mitigates the problem of a "too easy initial difficulty"
- accounts for a changed number of miners (which can of course change over long periods with no work) in two ways:
1.) If the number of miners decreased, or if potent miners disappeared, it takes at most 5 blocks to "heal" the problem
2.) If we suddenly have too many miners, or if potent miners joined in the meantime, they can only clutter the blockchain at a rate of at most 20 tx/block until the retarget mechanism kicks in (should take 1, or at most 2, blocks).
- mitigates the attack where a potent miner "waits" until the difficulty drops and then bursts his precomputed PoW submissions. In this case he can only get through at most 20 until the difficulty readjusts in the following block.

PS: If I win, I dedicate my "bounty" (if one will ever exist) to coralreefer!  Grin
legendary
Activity: 1260
Merit: 1168
Preliminaries:

Work authors push work to the Elastic network. At any given time there may be 0 or more works active in parallel.
To ensure that "workers" who work on the work packages are paid out continuously, they may submit PoW packages even if they do not solve the actual task. Those are solutions whose "hash", formed when the program has been executed fully, meets a certain target value (more precisely, is lower than the target value).

Also: The blockchain grows by PoS only ... PoW submissions are just normal transactions that are included in the blocks.

Problem description:

Right now, all work packages share the same target value. Very quick jobs may therefore lower the target value (i.e. increase the difficulty) more significantly than long-running jobs. This can have several effects ...

- on the one hand, it's hard to estimate a good "per PoW submission price" for a work author, as it highly depends on the other work in the network.
- on the other hand, effects like this may happen: currently, the target value is derived from a sliding window over the last 3 blocks and how many PoW submissions were made in those blocks. It is basically used to rate-limit the number of valid PoW submissions; the ideal number is 10 per block on average. But if there is no work in the network, the blockchain still grows (since it is backed by PoS), and when a new work is then created, the last 3 blocks had exactly 0 PoW submissions in them. 0 PoW submissions cause the difficulty to drop again. This in turn may cause a huge burst when a lot of workers/miners start submitting PoW submissions that meet the "minimum possible difficulty" before the target value approximates its desired value again.

So now?

We need a way to regulate the rate of PoW submissions in a manner that is fair for the work creators, and that prevents bursts of shitloads of PoW submissions after periods without any work online.

Naive Approach:

The naive approach of giving each work its own target value and keeping the rest as is sucks as well ... every time a work is newly created, the target value first has to find its correct level; until then, the blockchain is bloated with dozens of PoW transactions.

Keeping a global target value and "remembering" the last value throughout periods without work (meaning, periods with no PoW submissions) sucks as well: the problem described in the first bullet point of the problem description still applies. Also, this scheme does not adapt to miners that shut down their rigs. A potent miner could boost the difficulty sky-high and then leave, rendering the entire project useless, as no PoW would ever get submitted again and, as a consequence, the old target value would be "remembered".
legendary
Activity: 1260
Merit: 1168
I think, given that the new PoW hash thing works out, the most important remaining question is how to model the PoW retargeting ... the way it is now sucks! Should we go for a "target value per work"? But if yes, how do we model it correctly?

Maybe lannister reads this and wants to call out a bounty for the "best scheme"?
sr. member
Activity: 424
Merit: 250
Blockchain is the future
In which exchange can I buy XEL?

It's not listed yet. Few weeks minimum to mainnet launch...

Thanks
hero member
Activity: 1022
Merit: 507
In which exchange can I buy XEL?

It's not listed yet. Few weeks minimum to mainnet launch...
legendary
Activity: 1260
Merit: 1168

When I dump the stream, I'm not seeing any randomization.  To me, it looks like it's just pushing the first 4 bytes of the digest to stream ids 0,1,2,3,4, 8,9,10 and zeros to the others.

I suspect the problem was this?
https://github.com/OrdinaryDude/elastic-miner/commit/885d2f75f34e17641261fbbf0d6527c6b4a9e3f5
sr. member
Activity: 424
Merit: 250
Blockchain is the future
In which exchange can I buy XEL?