Author

Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 231. (Read 450523 times)

legendary
Activity: 1260
Merit: 1168
EK, I've merged your changes with my windows version and included some cleanup (like only re-compiling the code if the work package we are mining changed).  Here are a few compatibility items I've observed:

For Windows, the compiled C library has the following issues (I can't use the jar file because of them):
  - Windows doesn't use mm_malloc.h; it uses malloc.h instead
  - The external functions/variables need to be prefixed with "__declspec(dllexport)"
  - The compiler complained that init_ints needs to return a value (or change the return type)

I also built your code on my Pi (not sure if anyone would ever use this for XEL).
  - The "-std=c99" option is needed for gcc
  - The only gcc optimization option of the ones you included that worked on the Pi was "-Ofast"

Surprisingly, my little Pi gets around 180 kEval/s ... both with your test version and the one in my git that includes the win32 changes.


Great news ;-) 180k on a Pi is awesome!
Yeah, the return value was my mistake ... we don't need one!

Right now, the compilation process is somewhat "ugly". Ideally, we would not need to issue a single "system" or "popen" command and could do all the necessary work from the executable itself. I have not yet found a way to link clang or gcc into the binary and compile the code programmatically. I will keep trying.
sr. member
Activity: 464
Merit: 260
EK, I've merged your changes with my windows version and included some cleanup (like only re-compiling the code if the work package we are mining changed).  Here are a few compatibility items I've observed:

For Windows, the compiled C library has the following issues (I can't use the jar file because of them):
  - Windows doesn't use mm_malloc.h; it uses malloc.h instead
  - The external functions/variables need to be prefixed with "__declspec(dllexport)"
  - The compiler complained that init_ints needs to return a value (or change the return type)

I also built your code on my Pi (not sure if anyone would ever use this for XEL).
  - The "-std=c99" option is needed for gcc
  - The only gcc optimization option of the ones you included that worked on the Pi was "-Ofast"

Surprisingly, my little Pi gets around 180 kEval/s ... both with your test version and the one in my git that includes the win32 changes.
legendary
Activity: 1260
Merit: 1168
What else can we do?

This is a subset of the general project governance problem. Once again, I'd suggest taking a look at DASH's approach to the issue: masternode voting.

Hmm, maybe I have to read up on DASH's governance. Using trusted masternodes as the ones pointing towards the new version and where to download it looks interesting. On the other hand, if a majority, or maybe even a considerable minority, falls into the wrong hands, they could point towards compromised code… This is a hard one. Having fixed hardfork intervals would be no problem at all, but it wouldn't help with the problem at hand (i.e. emergencies, buggy code immediately after a HF and so on…).

I will also start doing some research! I mean, how do others do it? Or don't they have buggy code?
But then again ... look at the Ethereum fiasco. They were just not prepared!
hero member
Activity: 994
Merit: 513
What else can we do?

This is a subset of the general project governance problem. Once again, I'd suggest taking a look at DASH's approach to the issue: masternode voting.

Hmm, maybe I have to read up on DASH's governance. Using trusted masternodes as the ones pointing towards the new version and where to download it looks interesting. On the other hand, if a majority, or maybe even a considerable minority, falls into the wrong hands, they could point towards compromised code… This is a hard one. Having fixed hardfork intervals would be no problem at all, but it wouldn't help with the problem at hand (i.e. emergencies, buggy code immediately after a HF and so on…).
hero member
Activity: 535
Merit: 500
Please fill faucet again. Thanks.

XEL-7UTR-VYZY-ZUZZ-DTJJ5
full member
Activity: 235
Merit: 100
What else can we do?

This is a subset of the general project governance problem. Once again, I'd suggest taking a look at DASH's approach to the issue: masternode voting.
hero member
Activity: 535
Merit: 500
Guys, outstanding contribution!

@EK the explorer is dead. After a pull (deleted the blockchain and waited more than 20 min) there are no up-to-date peers according to my two nodes.

I'll try to restart explorer tomorrow.

Strange, are you sure the pull was successful (or did it fail because you didn't stash your changes)?
Did it compile correctly? I just did a fresh checkout, and it connects right away.

I always do git reset --hard HEAD before pull

EDIT:

Waited more than 40 min and now it's fine. The explorer seems OK.

EDIT2:

But my second node is still an "outsider". Still banned. I will try to relaunch it tomorrow.
hero member
Activity: 792
Merit: 501
Guys, outstanding contribution!

@EK the explorer is dead. After a pull (deleted the blockchain and waited more than 20 min) there are no up-to-date peers according to my two nodes.

I'll try to restart explorer tomorrow.


If nothing helps, use my node: elastic.cryptnodes.site

regards
hero member
Activity: 792
Merit: 501
Better late than never ..

public testnet node is updated : https://elastic.cryptnodes.site/

If someone could send me some testnet XEL: XEL-XFMU-85XU-V4S3-EZZY2. Thanks!

Regards
legendary
Activity: 1260
Merit: 1168
Guys, outstanding contribution!

@EK the explorer is dead. After a pull (deleted the blockchain and waited more than 20 min) there are no up-to-date peers according to my two nodes.

I'll try to restart explorer tomorrow.

Strange, are you sure the pull was successful (or did it fail because you didn't stash your changes)?
Did it compile correctly? I just did a fresh checkout, and it connects right away.
hero member
Activity: 535
Merit: 500
Guys, outstanding contribution!

@EK the explorer is dead. After a pull (deleted the blockchain and waited more than 20 min) there are no up-to-date peers according to my two nodes.

I'll try to restart explorer tomorrow.
hero member
Activity: 994
Merit: 513
@ttookk: Thanks for the comments.
Well ... I think I misused the word "fork" here  Wink

By "fork" I meant: what happens if we notice that, for example, the parser crashes for a specific input and we need to update the code? Since the new code would be incompatible with the old chain, we would have to "fork" (so yes, here again it's correct ... we would have to create a slightly different version).

But how do we do this most gracefully? Comfortable both for the devs and the users.
I mean, just pushing an update will not be enough, since some users will end up on the old client, creating a parallel (forked) chain next to the one created by users of the new version.

So maybe I should ask: "how do we handle updates the best way?"
Updates in the sense of both urgent and not-so-urgent updates.

Hm, I either don't get it, or I don't think there is a better system than either automatic updates or letting users and miners know that they are on a fork…

Or is the question how to let them know that they will end up on a fork? Sounds like a job for the nodes?

By the way, good job on optimizing the miner! Too bad it's all gibberish to me…

Automatic updates seem very dangerous to me ... what if an attacker gains access to the server where the update is centrally hosted? That could have severe consequences.

Yes, you could inform users that they are on a fork ... but what if you need to push an update IMMEDIATELY because some critical 0-day just got released? Then you push the new code to the git and write an urgent update notice somewhere. At first only a few people update, so technically THEY are on the fork with the lower number of nodes. The majority is still on the old version!

But we somehow need to ensure (or better, achieve it through the protocol itself) that either everyone switches to the new version quickly, or that the switch is made "soft", so that at no time do old clients and new clients work on separate chains. In the above scenario, that would not be the case.

Sorry for my bad explanations: I have been working around 15 hours straight again.


Yeah, automatic updates look dangerous as hell. I suspect anything we do in that regard would need the nodes as gatekeepers…

Anything regarding the "second solution", by the way?

And an extra kudos to coralreefer for the awesome job!
legendary
Activity: 1260
Merit: 1168
@ttookk: Thanks for the comments.
Well ... I think I misused the word "fork" here  Wink

By "fork" I meant: what happens if we notice that, for example, the parser crashes for a specific input and we need to update the code? Since the new code would be incompatible with the old chain, we would have to "fork" (so yes, here again it's correct ... we would have to create a slightly different version).

But how do we do this most gracefully? Comfortable both for the devs and the users.
I mean, just pushing an update will not be enough, since some users will end up on the old client, creating a parallel (forked) chain next to the one created by users of the new version.

So maybe I should ask: "how do we handle updates the best way?"
Updates in the sense of both urgent and not-so-urgent updates.

Hm, I either don't get it, or I don't think there is a better system than either automatic updates or letting users and miners know that they are on a fork…

Or is the question how to let them know that they will end up on a fork? Sounds like a job for the nodes?

By the way, good job on optimizing the miner! Too bad it's all gibberish to me…

Automatic updates seem very dangerous to me ... what if an attacker gains access to the server where the update is centrally hosted? That could have severe consequences.

Yes, you could inform users that they are on a fork ... but what if you need to push an update IMMEDIATELY because some critical 0-day just got released? Then you push the new code to the git and write an urgent update notice somewhere. At first only a few people update, so technically THEY are on the fork with the lower number of nodes. The majority is still on the old version!

But we somehow need to ensure (or better, achieve it through the protocol itself) that either everyone switches to the new version quickly, or that the switch is made "soft", so that at no time do old clients and new clients work on separate chains. In the above scenario, that would not be the case.

Sorry for my bad explanations: I have been working around 15 hours straight again.
hero member
Activity: 994
Merit: 513
@ttookk: Thanks for the comments.
Well ... I think I misused the word "fork" here  Wink

By "fork" I meant: what happens if we notice that, for example, the parser crashes for a specific input and we need to update the code? Since the new code would be incompatible with the old chain, we would have to "fork" (so yes, here again it's correct ... we would have to create a slightly different version).

But how do we do this most gracefully? Comfortable both for the devs and the users.
I mean, just pushing an update will not be enough, since some users will end up on the old client, creating a parallel (forked) chain next to the one created by users of the new version.

So maybe I should ask: "how do we handle updates the best way?"
Updates in the sense of both urgent and not-so-urgent updates.

Hm, I either don't get it, or I don't think there is a better system than either automatic updates or letting users and miners know that they are on a fork…

Or is the question how to let them know that they will end up on a fork? Sounds like a job for the nodes?

By the way, good job on optimizing the miner! Too bad it's all gibberish to me…
legendary
Activity: 1260
Merit: 1168
@ttookk: Thanks for the comments.
Well ... I think I misused the word "fork" here  Wink

By "fork" I meant: what happens if we notice that, for example, the parser crashes for a specific input and we need to update the code? Since the new code would be incompatible with the old chain, we would have to "fork" (so yes, here again it's correct ... we would have to create a slightly different version).

But how do we do this most gracefully? Comfortable both for the devs and the users.
I mean, just pushing an update will not be enough, since some users will end up on the old client, creating a parallel (forked) chain next to the one created by users of the new version.

So maybe I should ask: "how do we handle updates the best way?"
Updates in the sense of both urgent and not-so-urgent updates.
hero member
Activity: 994
Merit: 513
I am thinking about how to keep the software as "updateable" as possible, and how to model an "emergency stop" in case something goes wrong.
We already have a few ideas in this thread:

- Require a hard fork every X blocks ... basically force everyone to update the client at given block numbers:

E.g.:
from block 100000+ we require version 0.5.0
from block 200000+ we require version 0.6.0
etc.

Problem: If all the devs die in an earthquake, things may come to a halt. Not really sure if we want this.

(…)


My brain is only half working at the moment, so this may come off as a little unstructured; I hope my point gets across anyway.

I think the moment the Elastic project has no active devs working on it, having an upcoming hardfork is the smallest concern, really. If you can't pay a dev to basically change version numbers, you might have a serious problem.

But isn't having a mandatory hardfork kind of centralized as well? In a scenario in which multiple devs work on slightly different clients, how will it be determined which one is the "true" version to rule them all? In a case like ETH, the miners were the ones deciding which is the new "true" Ethereum for them; they could have decided to stay on the old chain. In Elastic, they could probably do so as well, simply by cloning the old code and rebranding it. So a "mandatory" hardfork is not that mandatory in the end.

I don't understand why people are so afraid of hardforks, anyway. I mean, a fork could happen at any given time, couldn't it? Pretty much anyone could create a fork of any coin at any time. With that in mind, this looks more like a philosophical question, or a question of "ethos" (I have a hard time finding the right word for it), than a technical one.
I get that forks are bad and that they divide hashing and brain power and spread it over multiple chains, but in practice, people have good reason to stay on the biggest chain.

And in the end, the code is there to serve people. The code has to adapt, not the other way around…


May the fork be with you.
legendary
Activity: 1260
Merit: 1168
I am thinking about how to keep the software as "updateable" as possible, and how to model an "emergency stop" in case something goes wrong.
We already have a few ideas in this thread:

- Require a hard fork every X blocks ... basically force everyone to update the client at given block numbers:

E.g.:
from block 100000+ we require version 0.5.0
from block 200000+ we require version 0.6.0
etc.

Problem: If all the devs die in an earthquake, things may come to a halt. Not really sure if we want this.

- Add an "emergency pubkey" which is able to "require an update"

E.g. such public key could announce a "required version number" and no transaction is being allowed or relayed unless the software is updated

Problem: This is centralized

- Use bitcoins soft-forking mechanism

E.g.
only use features / additions / changes if at least 75% of all last x blocks incorporate a specific "feature flag".

Problem: Loooong reaction times to unforeseeable disasters


What else can we do?
legendary
Activity: 1260
Merit: 1168
Testnet hardforked to version 0.5.0! Delete your nxt_test_db/  Wink

Now, let's start on the last two remaining things:
- Getting the miner ready cross platform and to support all missing hashes and EC calculations
- Getting the new retarget mechanism working ...

... we still have to decide exactly how to model the retargeting!

Sidechannel brainstorming: how can we keep the system as adjustable as possible without the need to hard fork in the future?
legendary
Activity: 1260
Merit: 1168
Really nice !

Thanks ;-) We went from 120k to 3 million overnight ;-) Whooo!
copper member
Activity: 2324
Merit: 1348
It becomes even better:

The multithreaded version is now working solidly. On a notebook I get

Code:
[17:03:18] Attempting to start 4 miner threads
[17:03:18] 4 mining threads started
[17:03:24] CPU2: 840.81 kEval/s
[17:03:24] CPU0: 840.78 kEval/s
[17:03:24] CPU1: 838.86 kEval/s
[17:03:24] CPU3: 845.26 kEval/s
[17:02:52] CPU0: ***** POW Accepted! *****

which is

3,363,000+ evals per second!

Bounties and POW are all correctly found and submitted ;-)

On my desktop rig I get 7M+.  Grin
The miner is highly experimental, and needs GCC for the dlmopen().

Really nice !