Author

Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 195. (Read 450523 times)

hero member
Activity: 500
Merit: 507
I would love to see XEL listed on serious exchanges as soon as possible (considering, of course, all the testing that needs to take place to have a solid system). It sounds to me, though, that with the suggested gradual forked release it will take substantial time until we actually see this happen. For holders who are considering liquidating some of their coins, that is not the best of plans.
I hope we can get some additional suggestions on the best way to roll it out to the masses.
legendary
Activity: 1148
Merit: 1000
nice to see some progress!

Hopefully the project can release before the end of this year.
hero member
Activity: 952
Merit: 501
when will step one start?
legendary
Activity: 2165
Merit: 1002
Does that mean we can redeem from 1.0?


Or, to be safe, redeem after 1.5?

Do we have simple Windows and Mac builds for newbies? Thanks.


I suppose the hardforks will not affect the original coin amounts in the genesis block. So you can claim whenever you want.

But if there is the slightest chance of reverting the blockchain, people should not be trading (well, at least should not be buying, unless they realise the risk) until the final version is out. Exchanges should definitely not list it until then.
sr. member
Activity: 448
Merit: 250
Ben2016

I like the release plan. One issue I see is that after the mainnet launch we have basically no way to keep exchanges from listing XEL. As soon as we launch, folks will start trading XEL, even if it's just some bogus exchange like liqui.io or something. That could lead to problems.
I don't understand, what kind of problems? Some fools are going to trade this as soon as it's out, but that's their problem!
ImI
legendary
Activity: 1946
Merit: 1019

I like the release plan. One issue I see is that after the mainnet launch we have basically no way to keep exchanges from listing XEL. As soon as we launch, folks will start trading XEL, even if it's just some bogus exchange like liqui.io or something. That could lead to problems.
legendary
Activity: 1330
Merit: 1000
Does that mean we can redeem from 1.0?


Or, to be safe, redeem after 1.5?

Do we have simple Windows and Mac builds for newbies? Thanks.
hero member
Activity: 994
Merit: 513
What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The Mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (remember, I code this for you guys) decide themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from Block 1 to Block 5000 and then stop.
Then, version 1.1 will run from Block 5001 to 10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)



I think this would give us some real field testing and not just on the test net, and give us five chances to fix any future bugs.


What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Step 1 seems to be a very good idea.
But the possibility of resetting the whole chain in step two would probably disappoint everybody externally interested, because they have been waiting for a couple of months to get on board, and now the mainnet starts; they might buy tokens from others and would get ripped off by a reset.
Otherwise this model would be great for getting some media focus.
But bad reviews from disappointed "late investors" could be a killing argument for this project...

Yes, I see a similar problem with step 2. I think there should be no going back. Hard forks are a good tool in principle (I advocated them some pages back), but I think rolling back the chain is a no-go. When a hard fork is applied, I think all balances MUST stay the way they are, no matter how unfair the distribution has become for whatever reason. If a miner is able to exploit a loophole, that should be considered a bounty.
sr. member
Activity: 464
Merit: 260
Yeah, I think we need additional mathematical operators.
EDIT: SQRT on integers sucks a bit, we might need to think about the design! What do you think?

Nice job with the SDK.

I had thought about this float issue during my original miner design. Having an integer-based design allows us to run at the best speeds, and I'd prefer not to move to a float-based design. However, if we create a small chunk of memory (maybe 1000 floats) that can be used to store this type of data if needed, we may get the best of both worlds: it would be available if needed, and if not, I don't think we'd see any decrease in performance.

Or am I oversimplifying this?
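
Just to illustrate the integer-only angle: a square root can be computed with Newton's method on integers alone, no floats involved. A rough sketch in plain JavaScript (not ElasticPL, and not part of the miner; purely to show the idea):

Code:
// Integer square root via Newton's method -- no floating point needed.
// Returns the largest integer r with r*r <= n, for n >= 0.
function isqrt(n) {
    if (n < 2) return n;
    var x = n;
    var y = Math.floor((x + 1) / 2);
    while (y < x) {
        x = y;
        y = Math.floor((x + Math.floor(n / x)) / 2);
    }
    return x;
}

console.log(isqrt(1000000)); // 1000
console.log(isqrt(999999));  // 999 (999*999 = 998001 <= 999999)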
full member
Activity: 206
Merit: 106
Old Account was Sev0 (it was hacked)
What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The Mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (remember, I code this for you guys) decide themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from Block 1 to Block 5000 and then stop.
Then, version 1.1 will run from Block 5001 to 10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)



I think this would give us some real field testing and not just on the test net, and give us five chances to fix any future bugs.


What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Step 1 seems to be a very good idea.
But the possibility of resetting the whole chain in step two would probably disappoint everybody externally interested, because they have been waiting for a couple of months to get on board, and now the mainnet starts; they might buy tokens from others and would get ripped off by a reset.
Otherwise this model would be great for getting some media focus.
But bad reviews from disappointed "late investors" could be a killing argument for this project...
legendary
Activity: 1456
Merit: 1000
What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The Mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (remember, I code this for you guys) decide themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from Block 1 to Block 5000 and then stop.
Then, version 1.1 will run from Block 5001 to 10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)



I think this would give us some real field testing and not just on the test net, and give us five chances to fix any future bugs.


What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Sounds like an innovative release plan with the stages, and a good approach with the predetermined hard forks. Sort of similar to the predetermined Monero forks, which removes contention.

Thanks again for all your hard work.
legendary
Activity: 1260
Merit: 1168
What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The Mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (remember, I code this for you guys) decide themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from Block 1 to Block 5000 and then stop.
Then, version 1.1 will run from Block 5001 to 10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)



I think this would give us some real field testing and not just on the test net, and give us five chances to fix any future bugs.


What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.
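
To make the staged launch a bit more concrete, here is a minimal sketch of how a client could gate itself on the known fork heights. Plain JavaScript with made-up names, not the actual Elastic core code:

Code:
// Hypothetical sketch: every staged release carries a hard-coded stop
// height; 1.5 has none. Old clients simply stop syncing at their height,
// so protocol fixes can ship in the next stage without splitting the net.
var STOP_HEIGHTS = {
    "1.0": 5000,
    "1.1": 10000,
    "1.2": 15000,
    "1.3": 20000,
    "1.4": 25000,
    "1.5": Infinity
};

function acceptsBlock(clientVersion, blockHeight) {
    return blockHeight <= STOP_HEIGHTS[clientVersion];
}

console.log(acceptsBlock("1.0", 5001));   // false -> 1.0 nodes halt after block 5000
console.log(acceptsBlock("1.5", 123456)); // true  -> 1.5 goes on forever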
legendary
Activity: 1260
Merit: 1168
EK, I'm starting to look at fitting the traveling salesman problem into Elastic. Obviously, this problem requires distances to be calculated. Is the expectation that the author takes care of these calculations outside of Elastic and provides them as part of the raw data, or do you think we need to add some additional math functions to ElasticPL, such as sqrt (and I'm sure there are others)?

Yeah, I think we need additional mathematical operators.
EDIT: SQRT on integers sucks a bit, we might need to think about the design! What do you think?

I have started to implement an "Elastic NodeJS-based SDK".
This will allow us to use the Elastic network programmatically. Here is a simple example that creates a simple work and listens for status changes as well as new POWs and bounties ;-) It's not finished yet, but will be today (also, the core client had to be extended, so this version is not compatible with the 0.8.0 release).

Github:

https://github.com/OrdinaryDude/elastic-nodejs


Simple Elastic PL controller:

Code:
var elastic = require("./lib/index.js");

/* Configuration BEGIN */
var passphrase = "test";
var host = "127.0.0.1";
var port = 6876;
var use_ssl = false;
/* Configuration END */

elastic.init(passphrase, host, port, use_ssl, function(){

    console.log("[!] Established connection with Elastic Core.");

    /* Create a new work */
    var work = {
        work_title: "My simple work",
        source_code: "verify m[0] >= 1000000;",
        xel_per_bounty: 1,
        xel_per_pow: 0.1,
        deposit_in_xel: 10000
    };

    /* Publish and watch the new work */
    elastic.publish_work(work, function(event, data){
        switch(event){
            case elastic.events.WORK_WAITING_TO_CONFIRM:
                console.log("[*] Work with id = " + data.transaction + " is awaiting first confirmation.");
                break;
            case elastic.events.WORK_STARTED:
                console.log("[*] Work with id = " + data.transaction + " just became live.");
                break;
            case elastic.events.WORK_CLOSED:
                console.log("[*] Work with id = " + data.transaction + " is finished.");
                break;
            case elastic.events.WORK_UPDATE:
                // Right now, this event is fired every block with the current work JSON object as the data!
                // You can read the left balance and the number of POW/Bounties here
                break;
            case elastic.events.NEW_BOUNTY:

                break;
            case elastic.events.NEW_POW:

                break;
            case elastic.events.FAILURE:
                console.log("[E] Failure:", data);
                break;
        }
    });
});
sr. member
Activity: 464
Merit: 260
EK, I'm starting to look at fitting the traveling salesman problem into Elastic. Obviously, this problem requires distances to be calculated. Is the expectation that the author takes care of these calculations outside of Elastic and provides them as part of the raw data, or do you think we need to add some additional math functions to ElasticPL, such as sqrt (and I'm sure there are others)?
full member
Activity: 124
Merit: 100
Vernam cipher, or one-time pad (the only provably unbreakable system)? Could that be used, based on irrational numbers for generation or something?
hero member
Activity: 994
Merit: 513
I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, very optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.

Ok, I'll spend some time looking at the annealing or traveling salesman problem to see if I can get a real-world example in Elastic.  EK, I'm sure I'll have some questions for you  Wink

Maybe ttookk, you can create your chess example as the final exam for your newly acquired coding skills  Grin

Whoa, whoa, whoa… That's like saying "I want to learn blacksmithing and the second piece I'm going to make is a full-blown katana with steel I smelted myself from raw iron ore" or something like that. A final exam for sure!

The chess thing would probably be a classic decision-tree situation: the further you look into the future, the more options and possible outcomes there are. This would at least be the brute-force approach.

Nah, but seriously, I'm going to look at some chess source code I found on GitHub.
sr. member
Activity: 464
Merit: 260
I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, very optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.

Ok, I'll spend some time looking at the annealing or traveling salesman problem to see if I can get a real-world example in Elastic.  EK, I'm sure I'll have some questions for you  Wink

Maybe ttookk, you can create your chess example as the final exam for your newly acquired coding skills  Grin
hero member
Activity: 994
Merit: 513
Not sure yet how exactly this might work. Imagine this algorithm:

A genetic algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded on our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol).
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we find no further improvements.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make such a simple optimization algorithm possible.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.

Okay, I guess this is where I'm the non-conformist of the blockchain movement. I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)... the results should be saved locally on their workstation and sent back as an updated job via RPC. To me, the blockchain is the engine performing the work... but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?

I agree with you on this one. Maybe we (I?? You??) should just implement some real-world problem clients.
This will require constant monitoring of the blockchain (and the unconfirmed transactions), along with automated cancelling and recreation of jobs.

What we still have to think about: input might be larger than 12 integers (when it comes to Bloom filters, for example). Do you think it might be wise to have at least some sort of persistent distributed storage?

And what about "job updating"? Could we allow jobs to be updated while running?

Yes, I thought this was the approach from the beginning, actually. That's why I compared Elastic to Lego: I assumed everything that is being built right now was just the blocks, and everything else would grow out of it. For more complex approaches (e.g. the traveling salesman problem), schemes will emerge that job authors can implement "from the outside", i.e. using Elastic as the building blocks and outside libraries to improve functionality by managing what actually needs computation.

I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, very optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.
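
As a rough illustration of the generation step described in the quote above (plain JavaScript with made-up helpers, not ElasticPL; in ElasticPL the mutation would have to be driven by the supplied random inputs, and distances by integer math):

Code:
// One generation of the genetic search sketched above: mutate the surviving
// tours many times, score them, and keep only the best `keep` candidates.
function tourLength(tour, cities) {
    var total = 0;
    for (var i = 0; i < tour.length; i++) {
        var a = cities[tour[i]];
        var b = cities[tour[(i + 1) % tour.length]];
        total += Math.sqrt(Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2));
    }
    return total;
}

// Mutation: swap two random cities in a copy of the parent tour.
function mutate(tour) {
    var child = tour.slice();
    var i = Math.floor(Math.random() * child.length);
    var j = Math.floor(Math.random() * child.length);
    var tmp = child[i]; child[i] = child[j]; child[j] = tmp;
    return child;
}

function nextGeneration(survivors, cities, mutationsPerParent, keep) {
    var candidates = survivors.slice();
    survivors.forEach(function (parent) {
        for (var k = 0; k < mutationsPerParent; k++) {
            candidates.push(mutate(parent));
        }
    });
    candidates.sort(function (a, b) {
        return tourLength(a, cities) - tourLength(b, cities);
    });
    return candidates.slice(0, keep); // e.g. keep = 1000 "intermediate bounties"
}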
legendary
Activity: 1260
Merit: 1168
I think the first thing to do would be to create some sort of SDK to submit and control jobs programmatically.
I think I can pull one off in Python pretty quickly!
sr. member
Activity: 464
Merit: 260
What we still have to think about: input might be larger than 12 integers

Maybe we implement a randomize function that authors can use to expand the 12 inputs into as many randomized inputs as they need.
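
A rough sketch of what such a helper could look like, in plain JavaScript with made-up names (there is no such built-in today): fold the 12 supplied integers into a seed and expand them deterministically with a small xorshift generator.

Code:
// Hypothetical: expand the 12 work inputs m[0..11] into `count`
// deterministic pseudo-random 32-bit values (xorshift32 stream).
function expandInputs(m, count) {
    // Fold the 12 inputs into one 32-bit seed.
    var seed = 0;
    for (var i = 0; i < m.length; i++) {
        seed = (seed ^ m[i]) >>> 0;
        seed = (Math.imul(seed, 2654435761) + 0x9e3779b9) >>> 0;
    }
    var x = seed || 1; // xorshift must not start at zero
    var out = [];
    for (var n = 0; n < count; n++) {
        x ^= x << 13; x >>>= 0;
        x ^= x >>> 17;
        x ^= x << 5;  x >>>= 0;
        out.push(x);
    }
    return out;
}

// e.g. turn the 12 supplied integers into 1000 derived inputs:
// var derived = expandInputs(m, 1000);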

Do you think it might be wise to have at least some sort of persistent distributed storage?

I could definitely see a use case for this... not sure how to implement it efficiently, but it should probably be on the roadmap.

And what about "job updating"? Could we allow jobs to be updated while running?

I thought about this when I first wrote the miner... right now it doesn't account for this. However, if the UI allows an update and submits it with a different work ID, then the miner would run as currently designed. I was trying to avoid having to reparse the ElasticPL every 60 seconds or so to see if it changed... as long as the update cancels the old work ID and creates a new one, it should work seamlessly.
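
With the NodeJS SDK posted earlier, that "cancel the old work ID and create a new one" flow might look roughly like this. Note that cancel_work is an assumption on my side; the snippet above only showed publish_work, so the real call may be named differently:

Code:
var elastic = require("./lib/index.js");

// Hypothetical "update" of a running job: close the old work ID and
// republish the revised ElasticPL as a brand-new job, so the miner never
// has to reparse anything in place. `cancel_work` is an ASSUMED call.
function updateWork(oldWorkId, newWork, onEvent) {
    elastic.cancel_work(oldWorkId, function () {
        elastic.publish_work(newWork, onEvent);
    });
}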