
Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 81. (Read 450520 times)

hero member
Activity: 1111
Merit: 588




My car is on the verge of breaking down ;-) Time to replace it with something new and maybe save a polar bear! If you'd like to support me, with no reward expected in return, here you go:
BTC: 1Liz5VpeqYEUQXdU8yepPRc3erZQDKDajm
ETH: 3a43e0311a02743de52878543c92a13170efbeee


Time to support our hero EK.  Smiley





Sent 0.2 BTC to show my respect and support.


https://blockchain.info/tx/3deb0732e51bf3829d69f45a74deba89c713e54e7f0bf8b63b62d711c7639ff3

Sent 0.2 BTC also. Thank you for everything, EK.
legendary
Activity: 1148
Merit: 1000




My car is on the verge of breaking down ;-) Time to replace it with something new and maybe save a polar bear! If you'd like to support me, with no reward expected in return, here you go:
BTC: 1Liz5VpeqYEUQXdU8yepPRc3erZQDKDajm
ETH: 3a43e0311a02743de52878543c92a13170efbeee


Time to support our hero EK.  Smiley





Sent 0.2 BTC to show my respect and support.


https://blockchain.info/tx/3deb0732e51bf3829d69f45a74deba89c713e54e7f0bf8b63b62d711c7639ff3
sr. member
Activity: 354
Merit: 250
I can't wait for this project to launch. Hoping it opens 10x on the market  Grin Grin Cheesy

I really, really don't want to discuss price here.

But seriously, 10x for this great project?

Look out there, many scams are valued at more than 10 million.
Yes, the stationmaster is right. Maybe we should not discuss the price, but participate in the test instead.
legendary
Activity: 1330
Merit: 1000
I can't wait for this project to launch. Hoping it opens 10x on the market  Grin Grin Cheesy

I really, really don't want to discuss price here.

But seriously, 10x for this great project?

Look out there, many scams are valued at more than 10 million.
full member
Activity: 196
Merit: 100
Did you see the roonies on him???
I can't wait for this project to launch. Hoping it opens 10x on the market  Grin Grin Cheesy
hero member
Activity: 952
Merit: 501
Got so many ideas ;-)
Maybe we could add an alternative verification method.

Currently we have to limit the program complexity so that the supernodes can run the code in a fairly short amount of time in order to verify if the user actually found a bounty. So basically, the supernode takes the user's input, runs the entire code, and sees if the result meets the bounty criteria.

What if we (optionally) allowed bounties to be checked differently? Meaning we allow checking just the result. So it does not matter if the user actually ran the program or asked some tooth fairy, as long as the value that the user submitted meets the criteria, it's all right. (Example: sha256d(x) has 10 leading zeros.)

Since we would allow the "calculation" to be disconnected entirely from the "verification", the actual program can be much longer and a lot more complex. As long as the verification routine that the supernodes must run is short enough, it's fine.

For this type of work verification, we would have to disable POW payments since it may happen that the program runs 10 minutes or so.

What do you think? These are little things that open up entirely new use cases!

That's a great idea, it can bring a lot of real use cases, not just sell the concept.

Blockchain should aim for daily use. Wink
hero member
Activity: 952
Merit: 501




My car is on the verge of breaking down ;-) Time to replace it with something new and maybe save a polar bear! If you'd like to support me, with no reward expected in return, here you go:
BTC: 1Liz5VpeqYEUQXdU8yepPRc3erZQDKDajm
ETH: 3a43e0311a02743de52878543c92a13170efbeee


Time to support our hero EK.  Smiley

legendary
Activity: 1364
Merit: 1000
Very excited about the project!! Keep us posted!
full member
Activity: 139
Merit: 100
Best project I've ever seen!
full member
Activity: 169
Merit: 100
Nice new website: http://www.elastic.pw/ Really good community around this project. Hope the launch will be in summer or earlier.
sr. member
Activity: 581
Merit: 253
Great to see so much progress guys.
hero member
Activity: 661
Merit: 500
Got so many ideas ;-)
Maybe we could add an alternative verification method.

Currently we have to limit the program complexity so that the supernodes can run the code in a fairly short amount of time in order to verify if the user actually found a bounty. So basically, the supernode takes the user's input, runs the entire code, and sees if the result meets the bounty criteria.

What if we (optionally) allowed bounties to be checked differently? Meaning we allow checking just the result. So it does not matter if the user actually ran the program or asked some tooth fairy, as long as the value that the user submitted meets the criteria, it's all right. (Example: sha256d(x) has 10 leading zeros.)

Since we would allow the "calculation" to be disconnected entirely from the "verification", the actual program can be much longer and a lot more complex. As long as the verification routine that the supernodes must run is short enough, it's fine.

For this type of work verification, we would have to disable POW payments since it may happen that the program runs 10 minutes or so.

What do you think? These are little things that open up entirely new use cases!

This sounds appropriate, EK, and could eliminate a potential bottleneck around the verification process.

If I am understanding this correctly, you are essentially allowing the calculation queue to be decoupled from the verification queue. So if jobs start in order (x, y, z) with job lengths of (x=3sec, y=2sec, z=1sec), the first calculations verified would be those of the first job to finish (verification order: z, y, x), not necessarily the first calculation job that went into the queue.

Kind of a first-come-first-served approach to both the calculation queue and the verification queue?


A follow-up question. Will a user be able to terminate an order if they do not like the approximated length of the job? Or will users be able to dynamically adjust the processing power allocated/purchased for their jobs? The best term I could come up with is something like dynamic processing power allocation (after initial job submission)?

Wondering if there is a possibility that people continuously start and cancel jobs, which could potentially bog down the network.
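
For what it's worth, here is a tiny sketch (Python, purely illustrative, not actual XEL code) of the ordering described above, assuming all three jobs start at the same time and finish after their stated run times:

Code:
import heapq

# jobs in submission order with their (approximate) run times in seconds
jobs = [("x", 3), ("y", 2), ("z", 1)]

# calculations start in submission order, but verification happens in completion order
completion_heap = [(runtime, name) for name, runtime in jobs]
heapq.heapify(completion_heap)

verification_order = [heapq.heappop(completion_heap)[1] for _ in range(len(jobs))]
print(verification_order)   # ['z', 'y', 'x'] - first finished, first verified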
hero member
Activity: 734
Merit: 500
I can't open the website, what's the problem?

Which website? elastic-project.com is down and will stay down. Try http://www.elastic.pw
sr. member
Activity: 308
Merit: 250
I can't open the website, what's the problem?
legendary
Activity: 1260
Merit: 1168
Coralreefer, thumbs up for your hunch concerning the POWs ;-)
legendary
Activity: 1260
Merit: 1168
Got so many ideas ;-)
Maybe we could add an alternative verification method.

Currently we have to limit the program complexity so that the supernodes can run the code in a fairly short amount of time in order to verify if the user actually found a bounty. So basically, the supernode takes the user's input, runs the entire code, and sees if the result meets the bounty criteria.

What if we (optionally) allowed bounties to be checked differently? Meaning we allow checking just the result. So it does not matter if the user actually ran the program or asked some tooth fairy, as long as the value that the user submitted meets the criteria, it's all right. (Example: sha256d(x) has 10 leading zeros.)

Since we would allow the "calculation" to be disconnected entirely from the "verification", the actual program can be much longer and a lot more complex. As long as the verification routine that the supernodes must run is short enough, it's fine.

For this type of work verification, we would have to disable POW payments since it may happen that the program runs 10 minutes or so.

What do you think? These are little things that open up entirely new use cases!
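
To make that concrete, here is a minimal sketch (Python, purely illustrative, not the actual supernode code; the function names and the zero-digit threshold are made up) of result-only verification against the sha256d example above:

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    # double SHA-256, the hash used in the example criterion
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_bounty_criterion(x: bytes, leading_zero_digits: int = 10) -> bool:
    # accept x purely by its result: sha256d(x) must start with the required
    # number of zero hex digits; the worker's program is never re-executed
    return sha256d(x).hex().startswith("0" * leading_zero_digits)

# the verifier's cost is two hashes, no matter how long the worker's
# search program ran to find x (minutes, hours, ...)
submission = bytes.fromhex("00deadbeef")   # made-up value submitted by a worker
print(meets_bounty_criterion(submission, 2))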
full member
Activity: 139
Merit: 100
Don't use the current git! New features are coming soon  Wink And a first real use case.
The new feature will be a "storage" for jobs. So jobs can be created with multiple "iterations".

This works like this:
1. In the first round, bounties are submitted normally. All bounty submissions have a "storage" value attached - this is basically data that the work author wants to "keep". The storage is filled from within the Elastic PL program.
2. When the maximum number of bounties is reached, the job goes into iteration 2 (and 3 and 4 ...) where the job can reuse the "storage" from other iterations.
3. The job is closed when ITERATIONS*MAX_BOUNTIES bounties are reached.


Why do we need this? Simulated Annealing, Genetic Algorithms, and any other optimization technique which is iteration-based and needs information from the prior iteration.

Not sure whether the 10 bounty limit is too low at this point. Will finish the implementation tomorrow, then we can test it!


can't wait  Lips sealed Grin
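
A rough toy in Python (not Elastic PL and not the actual implementation; the MAX_BOUNTIES and ITERATIONS values are made up) showing why carrying "storage" across iterations matters for something like simulated annealing:

Code:
import random

MAX_BOUNTIES = 10    # bounty submissions accepted per iteration
ITERATIONS = 5       # the job closes after ITERATIONS * MAX_BOUNTIES bounties

def objective(x):
    return (x - 42.0) ** 2   # toy function to minimize

storage = [random.uniform(-100, 100)]   # iteration 1 starts from a random guess
for iteration in range(ITERATIONS):
    candidates = []
    for _ in range(MAX_BOUNTIES):
        # each "bounty" perturbs a value taken from the previous iteration's storage
        base = random.choice(storage)
        candidates.append(base + random.gauss(0, 10.0 / (iteration + 1)))
    # the storage carried into the next iteration keeps the best candidates found so far
    storage = sorted(candidates, key=objective)[:3]

print("best after", ITERATIONS * MAX_BOUNTIES, "bounties:", min(storage, key=objective))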
full member
Activity: 414
Merit: 101
Looking forward to the XEL release.
full member
Activity: 414
Merit: 101
Don't use the current git! New features are coming soon  Wink And a first real use case.
The new feature will be a "storage" for jobs. So jobs can be created with multiple "iterations".

This works like this:
1. In the first round, bounties are submitted normally. All bounty submissions have a "storage" value attached - this is basically data that the work author wants to "keep". The storage is filled from within the Elastic PL program.
2. When the maximum number of bounties is reached, the job goes into iteration 2 (and 3 and 4 ...) where the job can reuse the "storage" from other iterations.
3. The job is closed when ITERATIONS*MAX_BOUNTIES bounties are reached.


Why do we need this? Simulated Annealing, Genetic Algorithms, and any other optimization technique which is iteration-based and needs information from the prior iteration.

Not sure whether the 10 bounty limit is too low at this point. Will finish the implementation tomorrow, then we can test it!


We can see Elastic is better than Golem and iEx.  Cheesy
hero member
Activity: 500
Merit: 507
Don't use the current git! New features are coming soon  Wink And a first real use case.
The new feature will be a "storage" for jobs. So jobs can be created with multiple "iterations".

This works like this:
1. In the first round, bounties are submitted normally. All bounty submissions have a "storage" value attached - this is basically data that the work author wants to "keep". The storage is filled from within the Elastic PL program.
2. When the maximum number of bounties is reached, the job goes into iteration 2 (and 3 and 4 ...) where the job can reuse the "storage" from other iterations.
3. The job is closed when ITERATIONS*MAX_BOUNTIES bounties are reached.


Why do we need this? Simulated Annealing, Genetic Algorithms, and any other optimization technique which is iteration-based and needs information from the prior iteration.

Not sure whether the 10 bounty limit is too low at this point. Will finish the implementation tomorrow, then we can test it!


Thank you for the update! Amazing work as always!!