
Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 224. (Read 450523 times)

ImI
legendary
Activity: 1946
Merit: 1019
hero member
Activity: 882
Merit: 548
The Golem crowdsale is over! We raised 820000 ETH in 20 minutes. Thank you to everyone who participated. It is among the 3 fastest ICO startups:
       
Firstblood, SingularDTV, Golem
  Wink Wink Wink


hero member
Activity: 742
Merit: 501
Looks like a lot of work is being done on this.  Can't wait to see it go live.

Will be watching this one for sure!

you are also doing pretty well on zclassic.

congratulations.
lol

Maybe he is preparing to release "Elastic Classic". He is waiting for the full version of the Elastic software.
hero member
Activity: 994
Merit: 513
(…)

I get the whole Nash equilibrium thing and for the most part, I agree. In fact, we can see that this is working the way E-K writes it "in the wild" right now: miners don't mine only one altcoin at a time; they jump from coin to coin as a group.

(…)

A possible, but very invasive solution would be to implement the bidding system mentioned earlier, or something similar, which would let new jobs enter only at certain points in (block-)time. I think systems like that should be seen as a "last resort" for when game theory and the assumption that users act rationally don't work out, though.

To clarify: I don't think we should take precautions right now. I think this is something that needs to be observed and tested on the testnet, and that very likely is either a very minor problem or no problem at all. It is just something to keep in mind that may need focused testing at some point.


Here is another idea from the "just putting it out there" kind:

What would happen if the overall difficulty were reset to a fixed value every X PoS blocks (as in "at a certain point in time")? Or, as a variant of that, what if every X blocks a global difficulty were determined and applied to all jobs? That's just a random thought I had, so I don't know what to make of it yet.
Having fixed points in time at which things happen would be a way to help rational players act rationally, though.

By the way, is there a good bitcoin wiki out there? Does anyone have a link?
legendary
Activity: 1330
Merit: 1000
Looks like a lot of work is being done on this.  Can't wait to see it go live.

Will be watching this one for sure!

you are also doing pretty well on zclassic.

congratulations.
lol
sr. member
Activity: 346
Merit: 250
Looks like a lot of work is being done on this.  Can't wait to see it go live.

Will be watching this one for sure!
hero member
Activity: 994
Merit: 513
Lots of stuff happening, love it!

Haven't had time to read it all carefully, just skimmed over it, so maybe it is mentioned already, but I think what you describe above is why it would be good to have at least some type of difficulty setting that is affected by overall network activity, so that miners are less likely to jump between jobs.

EDIT: OK, I've read a little more, so here are some more thoughts on the matter:

I get the whole Nash equilibrium thing and for the most part, I agree. In fact, we can see that this is working the way E-K writes it "in the wild" right now: miners don't mine only one altcoin at a time; they jump from coin to coin as a group.

However, the example mentioned above doesn't account for a situation that will occur in the Elastic network frequently:

What happens, when a new job is posted?
I'm not a miner and don't follow hashrate and difficulty charts much, but in a small system, this could have a big effect on the whole network. Maybe like the effect Zcash saw a few days back?

I think a likely scenario is that lots of miners jump on it at the same time, leading to the oscillation effect until the network finds the perfect balance again. I don't know how much of a problem this is, but it sounds like it could be one.

Should this be accounted for? And if yes, how? There is also a good chance that miners "mature" and learn that this can happen, and thus will become a little more cautious about working on newly posted jobs. Every so often, a job may come along with such a big reward that everybody wants a piece of the cake, though. And then, what?

A possible, but very invasive solution would be to implement the bidding system mentioned earlier, or something similar, which would let new jobs enter only at certain points in (block-)time. I think systems like that should be seen as a "last resort" for when game theory and the assumption that users act rationally don't work out, though.
legendary
Activity: 1260
Merit: 1168
We ignore the target, since it is just there to throttle the POW frequency.

I agree with your analysis except maybe I'm not fully understanding the new POW target.  In your example above these 2 jobs would have different targets, right?  If so, my assumption is that miners would also be using this to determine what to mine.

So like you said, assume 1 WCET equals 1 second of computation; therefore, Job 1 is twice as easy to evaluate as Job 2.  But if Job 1 has a target of 0x00000000... and Job 2 has a target of 0x0000FFFF..., Job 2 would actually be easier to get POW rewards from, even though you'd only check for them half as often, since the difficulty is exponential.  I haven't added this logic to the miner yet, as I've been waiting for the targeting to be done first, but I do plan to implement logic similar to this.

Either way, I think you've demonstrated that this model should work and that, so far, there isn't an obvious way to exploit it that I can see.

I am not sure if this is really solid; to me, at least, it looks OK! But four eyes see more than two ;-)

I completely ignored the target value in my calculation and just assumed that it quickly scales in a way that X miners compete for 10 POWs in each block. So the more miners work on one job, the lower the chance each of them has to find a POW, and the "more lucrative" other jobs start to look. The remainder of the proof just tries to find a miner-to-work distribution equilibrium so miners have the optimal "time to block" (-> game theory: assuming selfish miners, this should happen automatically).

I hope this way of "analyzing" is valid here. If yes, this leaves two questions:
- How to correctly "quickly scale" that target value
- How to avoid everyone deciding at the "same time"

sr. member
Activity: 464
Merit: 260
We ignore the target, since it is just there to throttle the POW frequency.

I agree with your analysis except maybe I'm not fully understanding the new POW target.  In your example above these 2 jobs would have different targets, right?  If so, my assumption is that miners would also be using this to determine what to mine.

So like you said, assume 1 WCET equals 1 second of computation; therefore, Job 1 is twice as easy to evaluate as Job 2.  But if Job 1 has a target of 0x00000000... and Job 2 has a target of 0x0000FFFF..., Job 2 would actually be easier to get POW rewards from, even though you'd only check for them half as often, since the difficulty is exponential.  I haven't added this logic to the miner yet, as I've been waiting for the targeting to be done first, but I do plan to implement logic similar to this.

Either way, I think you've demonstrated that this model should work and that, so far, there isn't an obvious way to exploit it that I can see.
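The miner-side selection logic described here could be sketched roughly as follows (hypothetical code, not the actual miner; the class name and the example targets are made up). A job's expected POW rate is the per-evaluation success probability (target / 2^256) times the evaluation rate (about 1/WCET per second), which is why an exponentially easier target outweighs a doubled WCET:

```java
import java.math.BigInteger;

// Hypothetical sketch: rank jobs by expected POWs per second, combining
// WCET (evaluations per second) and target (success chance per evaluation).
public class JobRanker {
    static final double HASH_SPACE = Math.pow(2, 256);

    // Expected POWs per second: (target / 2^256) per evaluation,
    // at roughly 1/WCET evaluations per second (1 WCET ~ 1 second here).
    static double expectedPowPerSecond(BigInteger target, int wcet) {
        return target.doubleValue() / HASH_SPACE / wcet;
    }

    public static void main(String[] args) {
        BigInteger hardTarget = BigInteger.ONE.shiftLeft(224); // like 0x00000000...
        BigInteger easyTarget = BigInteger.ONE.shiftLeft(240); // like 0x0000FFFF...
        double job1 = expectedPowPerSecond(hardTarget, 100); // half the WCET, much harder target
        double job2 = expectedPowPerSecond(easyTarget, 200); // double the WCET, far easier target
        System.out.println(job2 > job1); // prints true: the target dominates
    }
}
```

With these made-up numbers, Job 2 pays out roughly 2^16 / 2 ≈ 32000 times more often per second of work, matching the intuition that "the difficulty is exponential".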
legendary
Activity: 1260
Merit: 1168
Lets make it a bit easier so we can think more thoroughly about this.
1000 Miners are active ...

These are the jobs at the beginning
Code:
Job 1:
    WCET:           100
    POW Reward:    1
    Workers: 0

Job 2:
    WCET:            200
    POW Reward:    1
    Workers: 0

So you are right, if they all "fetch work simultaneously" this would happen


Code:
Job 1:
    WCET:           100
    POW Reward:    1
    Workers: 1000


Job 2:
    WCET:            200
    POW Reward:    1
    Workers: 0

We ignore the target, since it is just there to throttle the POW frequency; let's assume it converges quickly enough that the miners involved find exactly 10 POWs per block.

Now, assume that the 1000 workers on Job 1 have exactly the same computation power.  Thinking further... again, to make it easy... assume 1 WCET equals 1 second of computation.
Each of the 1000 miners in this case has a chance of 10/1000 = 1 in 100 of finding a POW in a block (10 out of 1000 miners find a POW in each block), hence has to work 100*100WCET*1s = 10000 seconds on average to find one POW solution.

For the first miner to "think this through", it is more lucrative to work on the second job: if one miner switches, he has a 100% chance of finding at least 1 POW per block, since he is the only miner. If two switch, the chance is still 50%, and so on.

Let's say that, one after the other, miners reevaluate the best thing to do.
The equilibrium in this case, where every miner has found the best selfish strategy, would then be:

Code:
Job 1:
    WCET:           100
    POW Reward:    1
    Workers: 667


Job 2:
    WCET:            200
    POW Reward:    1
    Workers: 333

In this case, everyone in the first group has a chance of 10/667 = 1/66.7 of finding a POW, which takes on average 100WCET*66.7*1s = 6670s (a lot better than the 10000 seconds from before the equilibrium was reached).
The second group has a chance of 10/333 = 1/33.3, but has a higher WCET, taking on average 200WCET*33.3*1s = 6660s to find one POW.

Switching jobs makes no sense for either of the two groups, since both operate at the "local optimum"; switching would only increase their own "time to POW".

Of course, the second job has fewer workers, but its submitter has only himself to blame ... he could have increased his POW rewards ;-)
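This equilibrium can be reproduced with a toy greedy simulation (a hypothetical sketch, not client code; it assumes the target adjusts instantly so each job pays out exactly 10 POWs per block, and lets miners re-decide one at a time):

```java
// Toy reproduction of the example above: 1000 equal miners, Job 1 (WCET 100)
// and Job 2 (WCET 200), each paying 10 POWs per block. Miners reevaluate one
// at a time and switch whenever it lowers their expected time-to-POW.
public class EquilibriumSketch {
    // Expected seconds for one miner to find a POW on a job: the per-block
    // chance is 10/workers, i.e. ~workers/10 blocks of WCET seconds each.
    static double timeToPow(int wcet, int workers) {
        return wcet * workers / 10.0;
    }

    public static void main(String[] args) {
        int[] wcet = {100, 200};
        int[] workers = {1000, 0};      // everyone fetches Job 1 first
        boolean switched = true;
        while (switched) {              // miners re-decide one after another
            switched = false;
            for (int from = 0; from < 2; from++) {
                int to = 1 - from;
                if (workers[from] > 0
                        && timeToPow(wcet[to], workers[to] + 1) < timeToPow(wcet[from], workers[from])) {
                    workers[from]--;
                    workers[to]++;
                    switched = true;
                }
            }
        }
        // Settles at 667/333 (~6670s vs ~6660s): no single switch improves either side.
        System.out.println(workers[0] + " / " + workers[1]); // prints 667 / 333
    }
}
```

The loop stops exactly where the post's arithmetic predicts: a Job-1 miner moving over would face 200*334/10 = 6680s > 6670s, and vice versa 100*668/10 = 6680s > 6660s.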



The only problem that might arise is when miners synchronize and "decide" on a strategy at the same time. Then oscillations like the ones you described can occur.

Not sure how we could prevent such synchronization (which could occur, as all miners will probably reevaluate when a new block is found).
Maybe a pseudorandomly long "lock" on a job, different for each miner, could be the solution.
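The pseudorandom lock idea could look something like this (a hypothetical sketch; the class name, the 2-8 block bounds, and the seeding scheme are all made up for illustration). Each miner derives its own lock length deterministically from its ID and the job ID, so miners come up for reevaluation at different block heights instead of all at once:

```java
import java.util.Random;

// Hypothetical sketch of a per-miner pseudorandom job "lock": miners on the
// same job unlock (and may reevaluate their job choice) at different blocks.
public class JobLock {
    static final int MIN_LOCK = 2, MAX_LOCK = 8; // blocks; arbitrary example bounds

    // Deterministic per-(miner, job) lock duration in blocks.
    static int lockBlocks(long minerId, long jobId) {
        Random r = new Random(minerId * 31 + jobId); // seed mixes both IDs
        return MIN_LOCK + r.nextInt(MAX_LOCK - MIN_LOCK + 1);
    }

    // A miner may only reevaluate its job choice once its lock has expired.
    static boolean mayReevaluate(long minerId, long jobId, int blocksOnJob) {
        return blocksOnJob >= lockBlocks(minerId, jobId);
    }

    public static void main(String[] args) {
        // Different miners on the same job unlock at different block heights:
        for (long miner = 1; miner <= 5; miner++)
            System.out.println("Miner " + miner + " locked for "
                    + lockBlocks(miner, 42L) + " blocks");
    }
}
```

Because the lock is a pure function of the two IDs, every node could verify it, and the staggered expiries would break up the "everyone reevaluates on the new block" synchronization.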
sr. member
Activity: 464
Merit: 260
Of course, this does not mean that such Nash equilibria distribute the work in the network "well", but theoretically those equilibria must exist. Or did I miss something here?

I think it will be fine, but here is a more clear example of what I was trying to say:

Job 1:
    WCET:           100
    POW Reward:    1
    Target:          0x0000FFFF....

Job 2:
    WCET:            90
    POW Reward:    1
    Target:          0x0000FFFF....

Initially, I would expect miners to jump on Job #2.  Then, after some retargeting, let's say Job 2's target changes to 0x000000FF...
I was thinking miners would then determine that Job 1 is their best bet and everyone rushes over there.  Then once Job 1 retargets, they may all rush back to Job 2, etc.

As you pointed out, the reality is probably that these miners will get distributed to some degree over time.  Just trying to think through this as much as I can.

legendary
Activity: 1260
Merit: 1168
So thinking through the retargeting a bit more, in general it would seem that miners will move from job to job as a group.  For example, if there are 10 miners total and X jobs total with similar WCET / rewards, there's a good possibility that they will all determine that job Y is the most profitable and move over there (yes, there could be variation in this logic, but in general, they probably gravitate towards the same job).  After a few blocks, the target for job Y will change enough that it's not as profitable as job Z, and everyone will then move over to job Z, driving up its difficulty while job Y's difficulty begins to drop... then they move on to the next job, etc.

I'm not saying anything is wrong with this, and maybe this is a good thing, but we should just be aware that this model may encourage miners to hop from job to job as a group rather than find some sort of distribution across jobs.  So if the 2 most profitable jobs have similar rewards and WCET, it would seem that the group of miners would just bounce between the two jobs, which may be the best outcome we could hope for.  I was just trying to think if there was any way to keep the miners more distributed and less hopping, which would help with the retargeting.

And just to finish the thought...if there is one profitable job, and 2 low profit jobs, I believe this model would still encourage the miners to migrate to the profitable job regardless of the difficulty as they still get to compete for 10 POW rewards of the profitable job regardless, which is what we want.


Well, thinking in terms of game theory ...
Assuming the fact that we more or less have always "work" to do, and assuming that all miners do not cooperate but instead mine selfishly aiming for optimizing the own reward, the work distribution should over time reach some kind of "nash equilibrium" where no worker can improve his situation by switching to another job.

Of course, this does not mean that such Nash equilibria distribute the work in the network "well", but theoretically those equilibria must exist. Or did I miss something here?
sr. member
Activity: 464
Merit: 260
So thinking through the retargeting a bit more, in general it would seem that miners will move from job to job as a group.  For example, if there are 10 miners total and X jobs total with similar WCET / rewards, there's a good possibility that they will all determine that job Y is the most profitable and move over there (yes, there could be variation in this logic, but in general, they probably gravitate towards the same job).  After a few blocks, the target for job Y will change enough that it's not as profitable as job Z, and everyone will then move over to job Z, driving up its difficulty while job Y's difficulty begins to drop... then they move on to the next job, etc.

I'm not saying anything is wrong with this, and maybe this is a good thing, but we should just be aware that this model may encourage miners to hop from job to job as a group rather than find some sort of distribution across jobs.  So if the 2 most profitable jobs have similar rewards and WCET, it would seem that the group of miners would just bounce between the two jobs, which may be the best outcome we could hope for.  I was just trying to think if there was any way to keep the miners more distributed and less hopping, which would help with the retargeting.

And just to finish the thought...if there is one profitable job, and 2 low profit jobs, I believe this model would still encourage the miners to migrate to the profitable job regardless of the difficulty as they still get to compete for 10 POW rewards of the profitable job regardless, which is what we want.




legendary
Activity: 1260
Merit: 1168
Hi EK, nice work as usual.   Here are a couple questions / comments:

1) Can you clarify whether this logic is setting a global target, or a target per job?  (I'm assuming this is per job)

Sure, every block carries the "base target difficulty", which is the minimum difficulty of the last x closed jobs.
All jobs that get included with a block adopt this block's value and adjust their own target value from there.

2) If it's per job, will the network accept 10 POW per job / block, or just 10 POW total across all jobs?

At the moment, per job and block. So twice as many jobs = twice as many PoWs.

3) Will the network reject all POW submissions > 10 per block, or will it accept them all with the understanding (based on retargeting) that over time the average will be 10 POW per block?  I'm just trying to understand the scenario where a farm comes online, etc., and we get a burst of POW... it could be hundreds of submissions... and we don't want this high-powered user to get blacklisted.

At the moment, they are just dropped with a "duplicate tx message" without causing the peers to be blacklisted.

I see where you are getting a minimum difficulty.  I may have missed it, but I didn't see where you are ensuring the 5% & 20% adjustments don't drop the target below the minimum.  I still think we need to maintain the minimum even if it's giving fewer than 10 POWs per block, as even my Pi can produce 10 POWs per block at the minimum.

This check happens at the end ;-)

Code:
if(targetI.compareTo(Constants.least_possible_target) == 1){
    targetI = Constants.least_possible_target;
}else if(targetI.compareTo(BigInteger.valueOf(1L)) == -1){ // safeguard, should never happen at all
    targetI = BigInteger.valueOf(1L);
}
sr. member
Activity: 464
Merit: 260
Hi EK, nice work as usual.   Here are a couple questions / comments:

1) Can you clarify whether this logic is setting a global target, or a target per job?  (I'm assuming this is per job)
2) If it's per job, will the network accept 10 POW per job / block, or just 10 POW total across all jobs?
3) Will the network reject all POW submissions > 10 per block, or will it accept them all with the understanding (based on retargeting) that over time the average will be 10 POW per block?  I'm just trying to understand the scenario where a farm comes online, etc., and we get a burst of POW... it could be hundreds of submissions... and we don't want this high-powered user to get blacklisted.

1. If no POW submission is received it may be that either the job is too difficult or that nobody mines it because some other job is more lucrative. In this case we increase the target (decrease the difficulty) by only 5% per block.
2. If POW submissions are received then we just scale it at most by 20% per block, so that we receive a certain number of POW per 60 seconds (timestamps of the blocks and the duration to mine them are accounted for).

I see where you are getting a minimum difficulty.  I may have missed it, but I didn't see where you are ensuring the 5% & 20% adjustments don't drop the target below the minimum.  I still think we need to maintain the minimum even if it's giving fewer than 10 POWs per block, as even my Pi can produce 10 POWs per block at the minimum.





legendary
Activity: 1260
Merit: 1168
Coralreefer, before I commit everything, maybe a public discussion on that topic might be good:
Currently the retargeting is very simple. We take the minimum difficulty over the last 10 jobs as the baseline for the new job.
Then:

1. If no POW submission is received it may be that either the job is too difficult or that nobody mines it because some other job is more lucrative. In this case we increase the target (decrease the difficulty) by only 5% per block.
2. If POW submissions are received then we just scale it at most by 20% per block, so that we receive a certain number of POW per 60 seconds (timestamps of the blocks and the duration to mine them are accounted for).

What still sucks a little bit is point 1. I would like to see a better way to distinguish "too hard jobs" from "too boring jobs" in the retarget function. Maybe we could use some "heuristic", such as increasing in 5% steps only until a certain lower-bound difficulty is met. Maybe the lowest one over the last 1024 blocks (or some other lower bound derived from the WCET) or so, to avoid insanely low difficulties after long periods of inactivity.

EDIT: Or we could only decrease the difficulty if there is no other job in the network that receives POWs??? That idea just came to my head in the shower and it seems legit! (It was implicit in a comment ttookk made earlier.)
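Numerically, the two rules above (plus the "do others get POWs?" extension from the EDIT) boil down to a clamped scaling factor applied to the target each block. A simplified sketch, not the actual client code; the class name is made up:

```java
// Simplified sketch of the retargeting rules as stated above (not the actual
// client code): returns the factor the target value is multiplied by.
public class RetargetRules {
    static double scalingFactor(double powsPer60s, boolean othersGotPows) {
        if (powsPer60s > 0) {
            // Rule 2: steer towards 10 POWs per 60s, clamped to +/-20% per block.
            double f = 10.0 / powsPer60s;
            return Math.max(0.80, Math.min(1.20, f));
        }
        if (!othersGotPows) {
            // Rule 1: nobody finds anything anywhere -> ease the target by 5%.
            return 1.05;
        }
        return 1.0; // only this job gets no POWs: probably "too boring", leave it
    }

    public static void main(String[] args) {
        System.out.println(scalingFactor(40, true));  // POW flood -> 0.8 (harder)
        System.out.println(scalingFactor(5, true));   // too few -> 1.2 (easier)
        System.out.println(scalingFactor(0, false));  // dead network -> 1.05
        System.out.println(scalingFactor(0, true));   // boring job -> 1.0
    }
}
```

Note how the "others got POWs" flag is what separates a too-hard job (ease off by 5%) from a too-boring one (do nothing), which is exactly the distinction the EDIT is after.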


I feel that it's just 3-4 more lines of code to make it solid enough!  Wink

This is what gets called for each job on each new block separately (EDIT: with the latest "do others get POWs?" extension from my edit above):

Code:
public void updatePowTarget(Block currentBlock){

    // Initialize with the block's base target (this is set in BlockImpl::calculateNextMinPowTarget
    // to the lowest difficulty over the last 1<=n<=10 closed jobs,
    // or to the minimal possible difficulty if there aren't any closed jobs yet)
    BigInteger targetI = Nxt.getBlockchain().getBlock(this.getBlock_id()).getMinPowTarget();

    if( currentBlock.getId() != this.getBlock_id() ){
        // Do standard retargeting (yet to be peer reviewed)

        long PastBlocksMass = 0;
        int account_for_blocks_max = 3;
        long seconds_passed = 0;
        int desired_pows = 0;
        long PastBlocksTotalMass = 0;

        Block b = currentBlock;
        int counter = 0;
        while(true){
            if(b == null || b.getId() == this.getBlock_id()) break;
            counter = counter + 1;
            PastBlocksMass += b.countNumberPOWPerWorkId(this.getId());
            PastBlocksTotalMass += b.countNumberPOW();
            seconds_passed = currentBlock.getTimestamp() - b.getTimestamp();
            if(seconds_passed < 0) seconds_passed = 60 * counter; // Crippled timestamps, assume everything went smoothly! Should not happen anyway!
            if(b.getPreviousBlock() == null || counter == account_for_blocks_max)
                break;
            b = b.getPreviousBlock();
        }

        // Now see how we would scale the target; this is important because 3 blocks do not always take the same amount of time
        if(seconds_passed < 1) seconds_passed = 1;

        // Normalize the time span so we always work with "60 second windows"
        double pows_per_60_seconds = (PastBlocksMass * 60.0 / seconds_passed);

        // DIRTY HACK: treat any rate strictly between 0 and 1 as one POW per 60 seconds
        if(pows_per_60_seconds > 0 && pows_per_60_seconds < 1)
            pows_per_60_seconds = 1;

        double factor = 0;

        if(pows_per_60_seconds > 0){
            // We have received at least one POW in the last 60 seconds
            System.out.println("*** RETARGETING ***");
            System.out.println("Workid: " + this.getId());
            System.out.println("Accounted last blocks: " + counter);
            System.out.println("Blocks span how much time: " + seconds_passed);
            System.out.println("How many seen POWs: " + PastBlocksMass);
            System.out.println("Normalized # per 60s: " + pows_per_60_seconds);
            System.out.println("Wanted # per 60s: " + 10);
            factor = 10 / pows_per_60_seconds;
            // Clamp the factor so the difficulty changes by at most 20% per block
            if(factor < 0.80) factor = 0.80;
            if(factor > 1.20) factor = 1.20;

            System.out.println("Scalingfactor: " + factor);
        }else if(pows_per_60_seconds == 0 && PastBlocksTotalMass == 0){
            // This job did not get any POWs, but others didn't get any either! Seems the difficulty is too high!
            // The best way is to gradually decrease the difficulty (increase the target value) until the job is mineable again.
            // As a safeguard, we do not allow "too high" changes in this case. Let's move by 5% at a time.
            // The target value would double after ~15 blocks. Let's assume that this time span will never be reached,
            // as (in the case the job is too difficult) there will be at least one POW at some point, causing the
            // above branch of the if statement to apply.
            System.out.println("*** RETARGETING ***");
            System.out.println("Workid: " + this.getId());
            System.out.println("Accounted last blocks: " + counter + "\nNO POW SUBMISSIONS IN LAST " + counter + " BLOCKS!");
            factor = 1.05;
            System.out.println("Scalingfactor: " + factor);
        }else{
            // This job is just too boring, others still get POWs
            System.out.println("*** RETARGETING ***");
            System.out.println("Workid: " + this.getId());
            System.out.println("Skipped retargeting, no POW received for this job but others got some!");
        }

        BigDecimal intermediate = new BigDecimal(targetI);
        System.out.println("Factor is: " + factor);
        intermediate = intermediate.multiply(BigDecimal.valueOf(factor));
        targetI = intermediate.toBigInteger();
        if(targetI.compareTo(Constants.least_possible_target) == 1){
            targetI = Constants.least_possible_target;
        }else if(targetI.compareTo(BigInteger.valueOf(1L)) == -1){ // safeguard, should never happen at all
            targetI = BigInteger.valueOf(1L);
        }
    }else{
        // do nothing, especially in the block where the work was included
    }
    this.work_min_pow_target = targetI.toString(16);

}
legendary
Activity: 1232
Merit: 1001
Too much technical discussion...

Roughly what percentage of the XEL project is completed? Tell us normal XEL followers.

EK, please make a summary for us.

Hi beyinsi,

my personal point of view is that perhaps 90% is done. The remaining 10% is mostly details (e.g., the retargeting algorithm) which we all discussed over the last pages and still need to reach a consensus on.

Website: Check
Blockexplorer: Check
Fast and efficient C based miner: Check
Wallet/Backend functionality: Check
Elastic PL Development Editor: Check
Elastic PL Parser (the core language): Check
Wallet Frontend: Missing parts (Claim from Genesis or Miner overview for example)
The retargeting algorithm: Done, but not sure if flawed. Discussion is still open here

And of course testing testing testing ;-)


Thank you for the summary!

You're doing great work. Keep it up!

indeed, great summary. Nice
sr. member
Activity: 243
Merit: 250
Too much technical discussion...

Roughly what percentage of the XEL project is completed? Tell us normal XEL followers.

EK, please make a summary for us.

Hi beyinsi,

my personal point of view is that perhaps 90% is done. The remaining 10% is mostly details (e.g., the retargeting algorithm) which we all discussed over the last pages and still need to reach a consensus on.

Website: Check
Blockexplorer: Check
Fast and efficient C based miner: Check
Wallet/Backend functionality: Check
Elastic PL Development Editor: Check
Elastic PL Parser (the core language): Check
Wallet Frontend: Missing parts (Claim from Genesis or Miner overview for example)
The retargeting algorithm: Done, but not sure if flawed. Discussion is still open here

And of course testing testing testing ;-)


Thank you for the summary!

You're doing great work. Keep it up!
legendary
Activity: 1512
Merit: 1004
Any timeline for working client release?
Thanks.
hero member
Activity: 994
Merit: 513
Can you play Crysis on this Super Computer?

If you manage to show that the problem of "playing Crysis" can somehow be reduced to a known NP-complete problem, such as the Hamiltonian cycle problem, I am pretty sure you could then (somehow) express it as a brute-force search over a 384-bit search space. If you succeed with that, chances are in your favor that you'll be able to play Crysis on this supercomputer!

Wow… Sounds a little far-fetched at the moment and kinda off-topic, but just imagine what you could do if you could borrow computing power in real time and play a super complex, high-definition game over it… I'd imagine a lot of people would pay good money for that.

Combine this with something similar to Upsilon Circuit and… well, need I say more?


I want E-K's quote above written in calligraphy to hang on a wall, by the way.