
Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 196. (Read 450523 times)

legendary
Activity: 1260
Merit: 1168
Not sure yet how exactly this might work. Imagine this algorithm:

Genetic Algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded on our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol)
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we no longer find better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.

Okay, I guess this is where I'm the non-conformist to the blockchain movement.  I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)...the results should be saved locally on their workstation and sent back as an updated job via RPC.  To me, the blockchain is the engine performing the work...but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?

I agree with you on this one. Maybe we (I??/You??) should just implement some real world problem clients.
This will require constant monitoring of the blockchain (and the unconfirmed transactions) along with automated cancelling and recreation of jobs.

What we still have to think about: input might be larger than 12 integers (when it comes down to bloom filters, for example). Do you think it might be wise to have at least some sort of persistent distributed storage?

And what about "job updating"? Could we allow jobs to be updated while running?
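
Just to make the client idea a bit more concrete, here is a rough control-flow sketch of such an iteration loop. Everything node-related is stubbed out because the RPC method names are not fixed yet - the point is only the submit / collect intermediate bounties / cancel / resubmit cycle:

Code:
import java.util.Collections;
import java.util.List;

// Hypothetical client-side driver -- every node call is a stub, the real RPC
// names are not decided yet.  Only the control flow matters here:
// submit job -> wait for intermediate bounties -> cancel -> resubmit seeded job.
public class IterativeJobDriver {

    // Stand-in for whatever RPC wrapper the real client would use.
    interface Node {
        String submitJob(String elasticPlSource);
        List<long[]> collectBounties(String jobId, int howMany) throws InterruptedException;
        void cancelJob(String jobId);
    }

    // In a real client this would splice the seed candidates into the
    // Elastic PL source as constants; here it is just a placeholder.
    static String buildSource(List<long[]> seeds) {
        return "/* Elastic PL program seeded with " + seeds.size() + " candidates */";
    }

    static void run(Node node, int maxRounds, int bountiesPerRound) throws InterruptedException {
        List<long[]> seeds = Collections.emptyList();      // round 0: unseeded random search
        for (int round = 0; round < maxRounds; round++) {
            String jobId = node.submitJob(buildSource(seeds));
            List<long[]> best = node.collectBounties(jobId, bountiesPerRound);
            node.cancelJob(jobId);                          // stop paying once enough bounties arrived
            if (best.isEmpty()) break;                      // no better candidates -> converged
            seeds = best;                                   // winners become the input of the next round
        }
    }
}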
sr. member
Activity: 464
Merit: 260
Not sure yet how exactly this might work. Imagine this algorithm:

Genetic Algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded on our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol)
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we no longer find better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.

Okay, I guess this is where I'm the non-conformist to the blockchain movement.  I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)...the results should be saved locally on their workstation and sent back as an updated job via RPC.  To me, the blockchain is the engine performing the work...but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?
legendary
Activity: 1260
Merit: 1168
Maybe we could collect some common problems and think if they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo Experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.

Actually this looks interesting to me.  I might need some help getting pointed in the right direction, but I'd like to take on one or two of these simulations...I agree...other than general testing, putting the elastic network up against some real-world scenarios is probably the most beneficial thing we can do right now to prove the design works.

Maybe I'll start with simulated annealing...reminds me of my material science classes from decades past  Smiley

https://github.com/chncyhn/simulated-annealing-tsp ;-) I think at the moment it might be hard to implement this, but maybe I am wrong about that.
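
The algorithm itself is tiny, though. A minimal self-contained sketch in plain Java (random cities just for illustration, nothing Elastic-specific yet):

Code:
import java.util.Random;

// Textbook simulated annealing on a random TSP instance.
// A tour is a permutation of city indices; neighbours are produced by swapping two cities.
public class SaTsp {
    static final int N = 50;
    static double[][] dist = new double[N][N];
    static Random rnd = new Random(42);

    public static void main(String[] args) {
        double[][] xy = new double[N][2];
        for (int i = 0; i < N; i++) { xy[i][0] = rnd.nextDouble(); xy[i][1] = rnd.nextDouble(); }
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                dist[i][j] = Math.hypot(xy[i][0] - xy[j][0], xy[i][1] - xy[j][1]);

        int[] tour = new int[N];
        for (int i = 0; i < N; i++) tour[i] = i;
        double cost = tourLength(tour);

        double temp = 1.0;
        while (temp > 1e-4) {
            int a = rnd.nextInt(N), b = rnd.nextInt(N);
            swap(tour, a, b);                                   // propose a neighbour
            double newCost = tourLength(tour);
            // Accept improvements always, worse tours with probability exp(-delta / temp).
            if (newCost <= cost || rnd.nextDouble() < Math.exp((cost - newCost) / temp)) {
                cost = newCost;
            } else {
                swap(tour, a, b);                               // reject: undo the swap
            }
            temp *= 0.9995;                                     // geometric cooling schedule
        }
        System.out.println("best tour length: " + cost);
    }

    static double tourLength(int[] t) {
        double s = 0;
        for (int i = 0; i < N; i++) s += dist[t[i]][t[(i + 1) % N]];
        return s;
    }

    static void swap(int[] t, int i, int j) { int tmp = t[i]; t[i] = t[j]; t[j] = tmp; }
}

The hard part is the same as with the GA: each cooling step depends on the previous one, so without some synchronization / intermediate-bounty feature the whole run would have to fit into a single work package.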
sr. member
Activity: 464
Merit: 260
Maybe we could collect some common problems and think if they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo Experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.

Actually this looks interesting to me.  I might need some help getting pointed in the right direction, but I'd like to take on one or two of these simulations...I agree...other than general testing, putting the elastic network up against some real-world scenarios is probably the most beneficial thing we can do right now to prove the design works.

Maybe I'll start with simulated annealing...reminds me of my material science classes from decades past  Smiley
legendary
Activity: 1260
Merit: 1168
I think it is not far away; we can use it to search any kind of search space in a brute-force manner. To make it perfect, I think we need some "synchronization" feature.

Makes sense.  Just thinking through this...so we would display the solved Bounties to the author, with some sort of cut/paste feature of the outputs to ease setup of the next job?  Or is there a way to automate this even further?

The shitty thing with manual processing is that we lose time.
Let's say you need a quick genetic algorithm over 10000 generations. Manual processing cannot be quicker than 10000*60 seconds, i.e. roughly a week - you get your results when you retire. It would be great to have some "intermediate" bounties (these may be different from the actual paid bounties). Even better would be to collect as many good "intermediate" bounties as possible; once a threshold is reached, the Elastic PL program automatically switches to round two and reuses them.

Not sure yet how exactly this might work. Imagine this algorithm:

Genetic Algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded on our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol)
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we no longer find better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.
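
To illustrate what one such generation looks like off-chain, here is a rough self-contained sketch in plain Java (toy instance; how the surviving candidates would be stored as intermediate bounties is exactly the open question):

Code:
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// One "generation" of the TSP genetic algorithm described above:
// take the surviving candidates, mutate them many times, keep the best KEEP tours.
public class GaGeneration {
    static final int CITIES = 30, KEEP = 1000, MUTATIONS = 100_000;
    static final Random RND = new Random(1);
    static double[][] dist = new double[CITIES][CITIES];

    public static void main(String[] args) {
        double[][] xy = new double[CITIES][2];
        for (int i = 0; i < CITIES; i++) { xy[i][0] = RND.nextDouble(); xy[i][1] = RND.nextDouble(); }
        for (int i = 0; i < CITIES; i++)
            for (int j = 0; j < CITIES; j++)
                dist[i][j] = Math.hypot(xy[i][0] - xy[j][0], xy[i][1] - xy[j][1]);

        // Generation 0: start from the identity tour.
        int[] seed = new int[CITIES];
        for (int i = 0; i < CITIES; i++) seed[i] = i;
        List<int[]> survivors = List.of(seed);

        for (int gen = 0; gen < 10; gen++) {
            survivors = nextGeneration(survivors);
            System.out.printf("gen %d best = %.4f%n", gen, length(survivors.get(0)));
        }
    }

    // Mutate the survivors MUTATIONS times and keep the KEEP shortest tours.
    static List<int[]> nextGeneration(List<int[]> survivors) {
        List<int[]> pool = new ArrayList<>(survivors);
        for (int m = 0; m < MUTATIONS; m++) {
            int[] child = survivors.get(RND.nextInt(survivors.size())).clone();
            int a = RND.nextInt(CITIES), b = RND.nextInt(CITIES);
            int tmp = child[a]; child[a] = child[b]; child[b] = tmp;   // swap mutation
            pool.add(child);
        }
        pool.sort(Comparator.comparingDouble(GaGeneration::length));
        return new ArrayList<>(pool.subList(0, Math.min(KEEP, pool.size())));
    }

    static double length(int[] t) {
        double s = 0;
        for (int i = 0; i < CITIES; i++) s += dist[t[i]][t[(i + 1) % CITIES]];
        return s;
    }
}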
sr. member
Activity: 464
Merit: 260
I think it is not far away; we can use it to search any kind of search space in a brute-force manner. To make it perfect, I think we need some "synchronization" feature.

Makes sense.  Just thinking through this...so we would display the solved Bounties to the author, with some sort of cut/paste feature of the outputs to ease setup of the next job?  Or is there a way to automate this even further?
legendary
Activity: 1260
Merit: 1168
In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

LOL...I always enjoy hearing your input...informative, and it always makes me laugh  Cheesy

If I recall, some of your early interest in this project was from an academic perspective (i.e. research).  Do you think the system "as-is" is marketable to researchers, or do we need this additional level of security?  Seems like HE is still in its infancy, so I'd hate to lose out on our advantage, but if there is a limited market w/o HE, then we need to pursue it.

I think it is not far away; we can use it to search any kind of search space in a brute-force manner. To make it perfect, I think we need some "synchronization" feature.

Often, optimization problems are solved iteratively. First, a lot of work is put in; after the iteration step we aggregate the solutions, prepare the next steps, and then perform more operations in the next iteration. Think of it as a multi-level bounty system: the bounties of one iteration go in as input to the next iteration.

Maybe we could collect some common problems and think if they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo Experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.
sr. member
Activity: 464
Merit: 260
In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

LOL...I always enjoy hearing your input...informative, and it always makes me laugh  Cheesy

If I recall, some of your early interest in this project was from an academic perspective (i.e. research).  Do you think the system "as-is" is marketable to researchers, or do we need this additional level of security?  Seems like HE is still in its infancy, so I'd hate to lose out on our advantage, but if there is a limited market w/o HE, then we need to pursue it.
hero member
Activity: 616
Merit: 500
watching this thread, interesting stuff
legendary
Activity: 1260
Merit: 1168
Maybe another way to look at this is to target specific use cases.  For example, someone mentioned video rendering...what functions are required for that?  Maybe the partial homomorphic encryption systems are good enough for some of the mainstream use cases.

I personally think that video rendering is impossible (and will also be impossible for Golem), because verification just takes a hell of a lot of time and needs to be performed on all nodes. Golem's idea to just verify several pixels ... well ... you know as much as I do how reliable that is with pseudo-random validators Grin And even if you only render 25% you still have a 1/4 chance to win. If you lose, you continue and try again at 30%. In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

But I can imagine that we will have some decentralized storage that we can perform work on. Like uploading a large dataset and using the nodes (which would need to support a master-slave method alongside the brute-force search approach) to perform some operations on it. We would still have to figure out what such a scheme might look like. But on confidential data sets it might be wise to use homomorphic encryption. Not sure if partial HE suffices here.

Paillier, for example, just supports a few basic operations...
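
To illustrate what "a few basic operations" means: Paillier is additively homomorphic, so multiplying two ciphertexts gives you an encryption of the sum of the plaintexts - and that is about it, no multiplication of encrypted values. A textbook sketch with java.math.BigInteger (toy parameters, not production-grade):

Code:
import java.math.BigInteger;
import java.security.SecureRandom;

// Textbook Paillier with the common g = n + 1 simplification.
// Shows the additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b mod n.
public class PaillierDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger q = BigInteger.probablePrime(512, rnd);
        BigInteger n = p.multiply(q);
        BigInteger n2 = n.multiply(n);
        BigInteger lambda = lcm(p.subtract(BigInteger.ONE), q.subtract(BigInteger.ONE));
        BigInteger mu = lambda.modInverse(n);                    // valid because g = n + 1

        BigInteger a = BigInteger.valueOf(1234), b = BigInteger.valueOf(5678);
        BigInteger ca = encrypt(a, n, n2, rnd);
        BigInteger cb = encrypt(b, n, n2, rnd);

        BigInteger sumCipher = ca.multiply(cb).mod(n2);          // homomorphic addition
        System.out.println(decrypt(sumCipher, n, n2, lambda, mu));   // prints 6912
    }

    static BigInteger encrypt(BigInteger m, BigInteger n, BigInteger n2, SecureRandom rnd) {
        BigInteger r;
        do { r = new BigInteger(n.bitLength(), rnd); }
        while (r.signum() == 0 || r.compareTo(n) >= 0 || !r.gcd(n).equals(BigInteger.ONE));
        // c = (1 + n)^m * r^n mod n^2  ==  (1 + m*n) * r^n mod n^2
        BigInteger gm = BigInteger.ONE.add(m.multiply(n)).mod(n2);
        return gm.multiply(r.modPow(n, n2)).mod(n2);
    }

    static BigInteger decrypt(BigInteger c, BigInteger n, BigInteger n2,
                              BigInteger lambda, BigInteger mu) {
        BigInteger l = c.modPow(lambda, n2).subtract(BigInteger.ONE).divide(n);   // L(x) = (x - 1) / n
        return l.multiply(mu).mod(n);
    }

    static BigInteger lcm(BigInteger a, BigInteger b) {
        return a.divide(a.gcd(b)).multiply(b);
    }
}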
sr. member
Activity: 464
Merit: 260
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer. Unless we only allow "operations" on the encrypted big integers, and leave encryption and decryption entirely to the user.
Also, not sure how effective the "verify" routine can be made. Does any cryptosystem have some usable operations that would be suitable for the verify routine?

Maybe another way to look at this is to target specific use cases.  For example, someone mentioned video rendering...what functions are required for that?  Maybe the partial homomorphic encryption systems are good enough for some of the mainstream use cases.
sr. member
Activity: 464
Merit: 260
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer.

Yep, saw that.  I'll keep looking though.  Wonder if there is a way to leverage FPGAs for this...I might have a bit of experience there ;-)
legendary
Activity: 1260
Merit: 1168
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer. Unless we only allow "operations" on the encrypted big integers, and leave encryption and decryption entirely to the user.
Also, not sure how effective the "verify" routine can be made. Does any cryptosystem have some usable operations that would be suitable for the verify routine?
sr. member
Activity: 464
Merit: 260
So what I see is left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.

This is a very good question. Maybe we could pick one together? I think we agree that we cannot use something that needs seconds of preprocessing; we need something blazing fast, otherwise plenty of DoS attacks are inevitable.

All the partially homomorphic encryption schemes, such as unpadded RSA, ElGamal, Goldwasser–Micali, Benaloh and Paillier, seem fast to me.
FHE schemes such as Brakerski-Gentry-Vaikuntanathan, Brakerski, NTRU and Gentry-Sahai-Waters are not only experimental but also have significantly higher overhead.

Not sure what to do, and tbh my knowledge of FHE comes down to a few hours of reading about it. But I agree, it's a plus.
It would be even more of a plus if it could somehow be used in the verify routine (like searching for a vanity address key without actually revealing the key itself to the network).

I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.
legendary
Activity: 1260
Merit: 1168
Here is the Brakerski-Gentry-Vaikuntanathan FHE implementation, I will take a look at it: https://github.com/shaih/HElib
legendary
Activity: 1260
Merit: 1168
So what I see is left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.

This is a very good question. Maybe we could pick one together? I think we agree that we cannot use something that needs seconds of preprocessing; we need something blazing fast, otherwise plenty of DoS attacks are inevitable.

All the partially homomorphic encryption schemes, such as unpadded RSA, ElGamal, Goldwasser–Micali, Benaloh and Paillier, seem fast to me.
FHE schemes such as Brakerski-Gentry-Vaikuntanathan, Brakerski, NTRU and Gentry-Sahai-Waters are not only experimental but also have significantly higher overhead.

Not sure what to do, and tbh my knowledge of FHE comes down to a few hours of reading about it. But I agree, it's a plus.
It would be even more of a plus if it could somehow be used in the verify routine (like searching for a vanity address key without actually revealing the key itself to the network).
sr. member
Activity: 464
Merit: 260
So what I see is left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.
hero member
Activity: 513
Merit: 500
Lannister (OP):
Last Active:   November 02, 2016, 04:56:07 PM

Might not be alive. He mentioned health issues.
sr. member
Activity: 464
Merit: 260

Did the "synching error" resolve? I will modify the parser now and then I will roll out the next testnet as quick as possible, with a new retargeting function.

So what I see is left is:
a) The "mining tab" in the GUI
b) GPU miner
c) Homomorphic Encryption
d) Testing testing testing

Yep, I am able to run offline.  But I had to set some properties so it doesn't try to sync.

I think the GPU miner is ready for Beta...I've done about all I can do with it unless you come up with something.
legendary
Activity: 1260
Merit: 1168
Maybe we should allow HEX input to avoid confusion!? For now, I have changed the Elastic editor to support HEX output! It's runnable out of IntelliJ IDEA for now! Works with MVN as well!

Thanks EK, that explains it...I dumped the data as an unsigned int instead of an int.  I think HEX input would be helpful; as a user, you wouldn't want to have to convert all your unsigned inputs to signed inputs, so if you decide to change the core server to support HEX, please let me know as I'll need to update the C parser as well....but once again....something that can probably wait until after mainnet launch  Smiley

EK, I went ahead and modified the miner to allow for hex inputs in case you decide to add this to core at a later point.

Cool, I will add it now!
Sorry, I was on multiple trains for the last 2 days, so I had both a limited connection and a limited toolset with me. Back at home in my command center I can work at full speed again.

Did the "synching error" resolve? I will modify the parser now and then I will roll out the next testnet as quick as possible, with a new retargeting function.
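
For the HEX input itself, the core side basically just comes down to Java's unsigned-int helpers - a minimal sketch (not the actual parser code):

Code:
// Sketch of the signed-vs-unsigned issue with the int inputs on the Java side.
// Integer.parseUnsignedInt / toUnsignedString exist since Java 8.
public class HexInputDemo {
    public static void main(String[] args) {
        // What the miner dumped: an unsigned 32-bit value as hex...
        String hex = "ffffff85";
        // ...which the core stores in a signed Java int (same 32 bits).
        int asSignedInt = Integer.parseUnsignedInt(hex, 16);            // -123
        System.out.println(asSignedInt);
        // And back to the hex form the C parser would see:
        System.out.println(Integer.toUnsignedString(asSignedInt, 16));  // ffffff85
    }
}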

So what I see is left is:
a) The "mining tab" in the GUI
b) GPU miner
c) Homomorphic Encryption
d) Testing testing testing