Author

Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 244. (Read 450524 times)

legendary
Activity: 1260
Merit: 1168
Small bug: I just noticed that the hashing statements are still limited to a 256 cell memory space, despite the memory having been increased to 64k cells.

Never mind, ignore me, I was looking at the wrong branch of elastic-core.  Grin

However, I did find some other parser/interp weirdness just now...

First, negative literals can't be specified.  So you always need to put "-1" as "(0-1)" which eats up WCET unnecessarily.

Somewhat similarly, it seems that the parser would allow "((m[0] = 0) & (m[1] = 1)) + 1" but not "(m[0] = 0) & (m[1] = 1) + 1"....

Finally, the following program does not interpret as one would expect:
Code:
m[20] = ((m[15] = 15) & (m[16] = 16) & (m[17] = 17)) +1;

It never assigns m[20] and assigns 1 to m[15].

I have fixed a couple of things in the parser now. I agree that a linear language would be way better; I have started experimenting with ANTLR (which would also be capable of generating parsers in C, which might be handy for coralreefer), but I feel it would be better, at least for now, to focus on the design questions regarding the PoW retarget ... assuming the parser is safe now ;-)

Fixes in latest git commit:
  • Updated to JavaCC 6.1 RC
  • Assignments can no longer be used inside nested expressions; the problem was that the assignment statement does not push any value onto the stack, causing non-deterministic behaviour whenever it was used as if it returned a value (e.g., inside another assignment)
  • The stack depth is now limited not only for the calculation stack, but also for the parser stack. Nested blocks (or any other nodes) deeper than 512 levels are not allowed and throw a ParseException during parsing. This saves us from a real StackOverflowError, which could crash the entire Java process (a minimal sketch of such a depth guard follows below). It was a bit tricky, as JavaCC regenerates the JJParserState on every invocation.
    (See the POW example in stack_overflow.spl, which is basically your 100000x nested block worst case example)
  • Fixed a NumberFormatException that was thrown when an integer literal did not fit into an int (e.g., 1523589238529385239)
  • The unary minus operator, previously "coming soon", is now there
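For illustration, here is a minimal sketch of what such a depth guard (plus the literal check) could look like. This is hypothetical hand-written code, not the actual JavaCC-generated parser, and all names are invented:
Code:
// Illustrative only -- not the actual JavaCC-generated elastic-core parser.
// The idea: fail deterministically with a ParseException once nesting gets too
// deep, instead of letting the JVM die with a StackOverflowError.
public class DepthGuardedParser {

    private static final int MAX_DEPTH = 512;   // same limit as described above
    private int depth = 0;

    // Stand-in for JavaCC's ParseException.
    public static class ParseException extends Exception {
        public ParseException(String msg) { super(msg); }
    }

    private void enterNode() throws ParseException {
        if (++depth > MAX_DEPTH) {
            throw new ParseException("maximum nesting depth of " + MAX_DEPTH + " exceeded");
        }
    }

    private void leaveNode() {
        depth--;
    }

    // Called wherever the grammar recurses into a nested block "{ ... }".
    void parseBlock() throws ParseException {
        enterNode();
        try {
            // ... parse the statements inside the block, possibly recursing ...
        } finally {
            leaveNode();
        }
    }

    // Rejecting out-of-range literals up front avoids the NumberFormatException
    // mentioned above (1523589238529385239 does not fit into an int).
    static int parseIntLiteral(String text) throws ParseException {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            throw new ParseException("integer literal out of range: " + text);
        }
    }
}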

I will have your 1 BTC gift ready this evening  Wink Thanks for everything!
hero member
Activity: 994
Merit: 513
Quote
I still think taxed hashes are an ok solution. I know it has disadvantages, but I think those are small enough to consider it.

I also kinda like the idea of taxed hashes when implemented right (e.g. using actual time spans instead of block heights to specify a time frame where solutions have to be revealed) ;-) But I also fully trust HunterMinerCrafter when he says that this is not the correct way to go ... he has proven to me that he has a much better eye for the "big picture" than I have.

I am eager to dive deeper into a discussion why the current (and any other) approaches discussed so far are bad, and how we could possibly come to a satisfying solution  Wink

Sorry that I have been offline today, I will be online the entire day tomorrow.

Well, maybe we have to settle for the least incorrect way at some point Wink
I agree though, that HunterMinerCrafter (as well as you, for that matter) has proven his expertise enough to have the last word on whether or not taxed hashes are the way to go.

(…)

It seems like this could leave a lot of miners unhappy when their deposits aren't returned because of "natural" causes.  (Like a sudden run of "by chance" rapidly forged blocks which do not include their reveal tx, but which run out their "x blocks" counter.)


Ok, here is another "just putting it out there so that it's out there, and I don't expect anybody to take it even remotely seriously" idea: instead of taxing hashes, you could also implement Decred's PoS lottery system. When you submit a hash, you buy a PoS ticket with the XEL submitted (obviously, in a system where no new coins are mined, locking up a relevant amount of coins for an unknown period of time just to scoop up some tx fees is not exactly exciting).
Nice that we talked about it Tongue Wink
legendary
Activity: 1260
Merit: 1168
Quote
I still think taxed hashes are an ok solution. I know it has disadvantages, but I think those are small enough to consider it.

I also kinda like the idea of taxed hashes when implemented right (e.g. using actual time spans instead of block heights to specify a time frame where solutions have to be revealed) ;-) But I also fully trust HunterMinerCrafter when he says that this is not the correct way to go ... he has proven to me that he has a much better eye for the "big picture" than I have.

I am eager to dive deeper into a discussion why the current (and any other) approaches discussed so far are bad, and how we could possibly come to a satisfying solution  Wink

Sorry that I have been offline today, I will be online the entire day tomorrow.
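Just to make the "actual time spans instead of block heights" point concrete, here is a rough sketch. The field names and the 30-minute window are invented for illustration, not a proposal:
Code:
import java.time.Duration;
import java.time.Instant;

// Illustration only: the reveal deadline is checked against a time span
// (block timestamps), not against an "x blocks" counter, so a burst of
// quickly forged blocks cannot run out a miner's reveal window by itself.
public class RevealWindowSketch {

    // Hypothetical window length; a real value would be a consensus parameter.
    private static final Duration REVEAL_WINDOW = Duration.ofMinutes(30);

    // hashSubmittedAt  = timestamp of the block that included the hash commitment
    // revealIncludedAt = timestamp of the block that includes the reveal tx
    public static boolean revealStillAllowed(Instant hashSubmittedAt, Instant revealIncludedAt) {
        return Duration.between(hashSubmittedAt, revealIncludedAt).compareTo(REVEAL_WINDOW) <= 0;
    }

    public static void main(String[] args) {
        Instant submitted = Instant.parse("2016-11-20T12:00:00Z");
        System.out.println(revealStillAllowed(submitted, Instant.parse("2016-11-20T12:20:00Z"))); // true
        System.out.println(revealStillAllowed(submitted, Instant.parse("2016-11-20T13:00:00Z"))); // false
    }
}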
hero member
Activity: 994
Merit: 513
What if there are not one, but two separate blockchains? One is dedicated to moving XEL around, so that people can send and receive XEL. This blockchain has a normal difficulty retarget rhythm, to keep a steady blocktime. All in all, it works like a traditional blockchain.
When it comes to solving jobs, you wouldn't really need a steady blocktime, would you? And even if you would, you'd have it from the first blockchain, keeping the system alive, giving it a "pulse". This would allow the second blockchain to have no set blocktime, so it wouldn't need constant difficulty retargeting, and in case no jobs are incoming, the second blockchain could go into "hibernation mode" until a job gets in…

This is effectively already how XEL operates.  The XEL blockchain, itself, is a PoS chain.  It has embedded into its tx structure a "nested" PoW chain for jobs.

Heh, yeah, after posting I thought it probably is that way. I'm a little confused about why difficulty retargeting is a problem then. Can you explain this to me like I'm a five-year-old (from now on, just assume that I am a five-year-old)?

Quote
This means that the "Pseudo-FAA" can't be used to attack the security of the network consensus, it can only be used to manipulate fund distributions for work jobs.

I still think taxed hashes are an ok solution. I know it has disadvantages, but I think those are small enough to consider it.
sr. member
Activity: 434
Merit: 250
What if there are not one, but two separate blockchains? One is dedicated to moving XEL around, so that people can send and receive XEL. This blockchain has a normal difficulty retarget rhythm, to keep a steady blocktime. All in all, it works like a traditional blockchain.
When it comes to solving jobs, you wouldn't really need a steady blocktime, would you? And even if you would, you'd have it from the first blockchain, keeping the system alive, giving it a "pulse". This would allow the second blockchain to have no set blocktime, so it wouldn't need constant difficulty retargeting, and in case no jobs are incoming, the second blockchain could go into "hibernation mode" until a job gets in…

This is effectively already how XEL operates.  The XEL blockchain, itself, is a PoS chain.  It has embedded into its tx structure a "nested" PoW chain for jobs.

This means that the "Pseudo-FAA" can't be used to attack the security of the network consensus, it can only be used to manipulate fund distributions for work jobs.
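A toy sketch of what "a nested PoW chain embedded into the tx structure" means in data-structure terms; these classes and field names are invented for illustration and are not the actual elastic-core types:
Code:
import java.util.List;

// Toy model only. The outer chain is secured by PoS; PoW solutions for work
// jobs ride along as ordinary transactions inside those PoS blocks, forming a
// "nested" PoW chain per job.
public class NestedChainSketch {

    // A PoW submission for a specific work job, carried inside a normal tx.
    static class PowSubmission {
        long workJobId;        // which job this solution belongs to
        byte[] powHash;        // hash meeting that job's current target
        long previousPowTxId;  // links the job's submissions into a chain
    }

    // An ordinary transaction on the PoS chain, optionally embedding a PoW submission.
    static class Transaction {
        long senderAccountId;
        long amount;
        PowSubmission attachedPow;  // null for plain transfers
    }

    // A block on the outer chain: forged by stake, not by hashing.
    static class PosBlock {
        long forgerAccountId;
        List<Transaction> transactions;
    }
}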
legendary
Activity: 1232
Merit: 1001
what is the chance of success for elastic?

It is somewhat hard to say, at this point.  As I see it, the biggest stumbling block is what to do about difficulty calculation for PoW, particularly given the "pseudo-FAA" problem(s).  Right now, if you follow the implications of this combination of problems, the end result plays out (assuming all actors as rational) such that the only ones who could feasibly contribute to work shares are those with both significant general cpu resources, and significant sha hash-rate capabilities.  (As they would have incentive to saturate the embedded PoW chain and keep difficulty maxed out by pushing through "hashing only trapdoor work" for themselves, to push other, smaller miners out of the competition.)

This is a problem for a couple of reasons:  First, it will consolidate and centralize work resources around those with specialized hashing hardware.  Second, it will drive the cost of computation up significantly (both due to the smaller pool of contributors leading to decreased supply of work, and due to the cost added by the energy consumption in the hashing required to contribute) such that it will not be able to be nearly cost effective relative to things like AWS/Goog.  Considering the "developer contortion" hoops that you have to jump through to even compute a job on XEL, relative to the ease of pushing work out to traditional cloud providers, and with the combined "extra costs," the relative difference becomes immense.  The value proposition of XEL then becomes highly questionable.

(…)

From a non-technical standpoint, I think a very important question is what we believe Elastic's workload is going to look like. Do we expect a few jobs a day, or one per week? I think this is crucial to the question of whether or not Elastic is going to be able to survive long term, and how to fill voids in the workload. I guess there is a significant difference between filling small voids lasting a few minutes and filling days' worth of nothingness.

What I thought about was basically creating some kind of Elastic mining pool. If there are no jobs to be done, the power of those mining XEL will be used to mine other currencies via Elastic; this way, the miners are incentivised to keep mining on Elastic, but they still get rewarded. I realize that this idea has a lot of flaws, especially if the voids that need to be filled get too big/long, because at some point, miners won't bother to mine through Elastic. Additionally, if Elastic gets big enough, this could have a serious impact on the currencies mined that way (i.e., the difficulty fluctuations could be forwarded to those currencies). There may be tons of other flaws in that idea as well.

I feel like this is a problem that can only be partially solved technically and needs a "social" solution, i.e., Elastic being popular could possibly lessen the implications.

Ok, now, this is me thinking even waaaaaaaaay outside the box:

What if there are not one, but two separate blockchains? One is dedicated to moving XEL around, so that people can send and receive XEL. This blockchain has a normal difficulty retarget rhythm, to keep a steady blocktime. All in all, it works like a traditional blockchain.
When it comes to solving jobs, you wouldn't really need a steady blocktime, would you? And even if you would, you'd have it from the first blockchain, keeping the system alive, giving it a "pulse". This would allow the second blockchain to have no set blocktime, so it wouldn't need constant difficulty retargeting, and in case no jobs are incoming, the second blockchain could go into "hibernation mode" until a job gets in…

Feel free to tell me to shut up. Seriously, if I'm bothering you guys, say the word and I'll stop posting in this thread.
As non-technical as I am, I understand some of your suggestions. Please keep posting. EK, any comments on this, please!
Interesting points. How does, for example, NiceHash handle this? Is there an algorithm that could be integrated into XEL to handle these kinds of scenarios? @ttookk keep posting
sr. member
Activity: 448
Merit: 250
Ben2016
what is the chance of success for elastic?

It is somewhat hard to say, at this point.  As I see it, the biggest stumbling block is what to do about difficulty calculation for PoW, particularly given the "pseudo-FAA" problem(s).  Right now, if you follow the implications of this combination of problems, the end result plays out (assuming all actors as rational) such that the only ones who could feasibly contribute to work shares are those with both significant general cpu resources, and significant sha hash-rate capabilities.  (As they would have incentive to saturate the embedded PoW chain and keep difficulty maxed out by pushing through "hashing only trapdoor work" for themselves, to push other, smaller miners out of the competition.)

This is a problem for a couple of reasons:  First, it will consolidate and centralize work resources around those with specialized hashing hardware.  Second, it will drive the cost of computation up significantly (both due to the smaller pool of contributors leading to decreased supply of work, and due to the cost added by the energy consumption in the hashing required to contribute) such that it will not be able to be nearly cost effective relative to things like AWS/Goog.  Considering the "developer contortion" hoops that you have to jump through to even compute a job on XEL, relative to the ease of pushing work out to traditional cloud providers, and with the combined "extra costs," the relative difference becomes immense.  The value proposition of XEL then becomes highly questionable.

(…)

From a non-technical standpoint, I think a very important question is what we believe Elastic's workload is going to look like. Do we expect a few jobs a day, or one per week? I think this is crucial to the question of whether or not Elastic is going to be able to survive long term, and how to fill voids in the workload. I guess there is a significant difference between filling small voids lasting a few minutes and filling days' worth of nothingness.

What I thought about was basically creating some kind of Elastic mining pool. If there are no jobs to be done, the power of those mining XEL will be used to mine other currencies via Elastic; this way, the miners are incentivised to keep mining on Elastic, but they still get rewarded. I realize that this idea has a lot of flaws, especially if the voids that need to be filled get too big/long, because at some point, miners won't bother to mine through Elastic. Additionally, if Elastic gets big enough, this could have a serious impact on the currencies mined that way (i.e., the difficulty fluctuations could be forwarded to those currencies). There may be tons of other flaws in that idea as well.

I feel like this is a problem that can only be partially solved technically and needs a "social" solution, i.e., Elastic being popular could possibly lessen the implications.

Ok, now, this is me thinking even waaaaaaaaay outside the box:

What if there are not one, but two separate blockchains? One is dedicated to moving XEL around, so that people can send and receive XEL. This blockchain has a normal difficulty retarget rhythm, to keep a steady blocktime. All in all, it works like a traditional blockchain.
When it comes to solving jobs, you wouldn't really need a steady blocktime, would you? And even if you would, you'd have it from the first blockchain, keeping the system alive, giving it a "pulse". This would allow the second blockchain to have no set blocktime, so it wouldn't need constant difficulty retargeting, and in case no jobs are incoming, the second blockchain could go into "hibernation mode" until a job gets in…

Feel free to tell me to shut up. Seriously, if I'm bothering you guys, say the word and I'll stop posting in this thread.
As non-technical as I am, I understand some of your suggestions. Please keep posting. EK, any comments on this, please!
sr. member
Activity: 464
Merit: 260
OK.  In that case, I would probably suggest switching to a different parsing model. The current generated parser simply has too much overhead.  The grammar is small and straightforward enough that the current parsing infrastructure is almost certainly overkill.  (It also seems like it could probably be made into a "linear" language, without syntactic nesting, which would obviate both the stack depth concerns and the parser complexity concerns.)

I agree that a fairly simplistic parser might work.  I've been coding an ElasticPL parser in C to use with the cpuminer fork I've been using for my FPGAs.  Even though I have no experience with coding a parser from scratch, I've already got it doing most of the basic EPL parsing correctly....and if I can code this then I'm sure you guys with more experience in this could put together something pretty robust.

Yes it's a complete hack job, but I'm really just doing it as a learning experience as I don't even have a mining rig.  I just find all the posts in this thread around ElasticPL really interesting and wanted to learn more about what it takes to build a vm.
hero member
Activity: 994
Merit: 513
what is the chance of success for elastic?

It is somewhat hard to say, at this point.  As I see it, the biggest stumbling block is what to do about difficulty calculation for PoW, particularly given the "pseudo-FAA" problem(s).  Right now, if you follow the implications of this combination of problems, the end result plays out (assuming all actors as rational) such that the only ones who could feasibly contribute to work shares are those with both significant general cpu resources, and significant sha hash-rate capabilities.  (As they would have incentive to saturate the embedded PoW chain and keep difficulty maxed out by pushing through "hashing only trapdoor work" for themselves, to push other, smaller miners out of the competition.)

This is a problem for a couple of reasons:  First, it will consolidate and centralize work resources around those with specialized hashing hardware.  Second, it will drive the cost of computation up significantly (both due to the smaller pool of contributors leading to decreased supply of work, and due to the cost added by the energy consumption in the hashing required to contribute) such that it will not be able to be nearly cost effective relative to things like AWS/Goog.  Considering the "developer contortion" hoops that you have to jump through to even compute a job on XEL, relative to the ease of pushing work out to traditional cloud providers, and with the combined "extra costs," the relative difference becomes immense.  The value proposition of XEL then becomes highly questionable.

(…)

From a non-technical standpoint, I think a very important question is what we believe Elastic's workload is going to look like. Do we expect a few jobs a day, or one per week? I think this is crucial to the question of whether or not Elastic is going to be able to survive long term, and how to fill voids in the workload. I guess there is a significant difference between filling small voids lasting a few minutes and filling days' worth of nothingness.

What I thought about was basically creating some kind of Elastic mining pool. If there are no jobs to be done, the power of those mining XEL will be used to mine other currencies via Elastic; this way, the miners are incentivised to keep mining on Elastic, but they still get rewarded. I realize that this idea has a lot of flaws, especially if the voids that need to be filled get too big/long, because at some point, miners won't bother to mine through Elastic. Additionally, if Elastic gets big enough, this could have a serious impact on the currencies mined that way (i.e., the difficulty fluctuations could be forwarded to those currencies). There may be tons of other flaws in that idea as well.

I feel like this is a problem that can only be partially solved technically and needs a "social" solution, i.e., Elastic being popular could possibly lessen the implications.

Ok, now, this is me thinking even waaaaaaaaay outside the box:

What if there are not one, but two separate blockchains? One is dedicated to moving XEL around, so that people can send and receive XEL. This blockchain has a normal difficulty retarget rhythm, to keep a steady blocktime. All in all, it works like a traditional blockchain.
When it comes to solving jobs, you wouldn't really need a steady blocktime, would you? And even if you would, you'd have it from the first blockchain, keeping the system alive, giving it a "pulse". This would allow the second blockchain to have no set blocktime, so it wouldn't need constant difficulty retargeting, and in case no jobs are incoming, the second blockchain could go into "hibernation mode" until a job gets in…

Feel free to tell me to shut up. Seriously, if I'm bothering you guys, say the word and I'll stop posting in this thread.
sr. member
Activity: 434
Merit: 250
what is the chance of success for elastic?

It is somewhat hard to say, at this point.  As I see it, the biggest stumbling block is what to do about difficulty calculation for PoW, particularly given the "pseudo-FAA" problem(s).  Right now, if you follow the implications of this combination of problems, the end result plays out (assuming all actors as rational) such that the only ones who could feasibly contribute to work shares are those with both significant general cpu resources, and significant sha hash-rate capabilities.  (As they would have incentive to saturate the embedded PoW chain and keep difficulty maxed out by pushing through "hashing only trapdoor work" for themselves, to push other, smaller miners out of the competition.)

This is a problem for a couple of reasons:  First, it will consolidate and centralize work resources around those with specialized hashing hardware.  Second, it will drive the cost of computation up significantly (both due to the smaller pool of contributors leading to decreased supply of work, and due to the cost added by the energy consumption in the hashing required to contribute) such that it will not be able to be nearly cost effective relative to things like AWS/Goog.  Considering the "developer contortion" hoops that you have to jump through to even compute a job on XEL, relative to the ease of pushing work out to traditional cloud providers, and with the combined "extra costs," the relative difference becomes immense.  The value proposition of XEL then becomes highly questionable.

Can these problems be overcome, resulting in a network that is both "sound" and price-competitive with contemporary centralized compute services?  I'm, personally, still quite skeptical.  Cool

(Note that this is an improvement over my position of a few weeks ago, at which point I believed that it probably could not ever even be made "sound."  This is a large part of why I decided to dive into bug hunting on it - to answer that question more definitively for myself.)
legendary
Activity: 1330
Merit: 1000

hi man,

just a small question.

what is the chance of success for elastic?

and thanks for your contribution here. that is huge.
sr. member
Activity: 434
Merit: 250
Small bug: I just noticed that the hashing statements are still limited to a 256 cell memory space, despite the memory having been increased to 64k cells.

Never mind, ignore me, I was looking at the wrong branch of elastic-core.  Grin

However, I did find some other parser/interp weirdness just now...

First, negative literals can't be specified.  So you always need to put "-1" as "(0-1)" which eats up WCET unnecessarily.

Somewhat similarly, it seems that the parser would allow "((m[0] = 0) & (m[1] = 1)) + 1" but not "(m[0] = 0) & (m[1] = 1) + 1"....

Finally, the following program does not interpret as one would expect:
Code:
m[20] = ((m[15] = 15) & (m[16] = 16) & (m[17] = 17)) +1;

It never assigns m[20] and assigns 1 to m[15].
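A toy sketch of the failure mode behind this (not the actual elastic-core interpreter; the operations and behaviour here are invented): when an assignment stores into memory but pushes nothing back onto the evaluation stack, the enclosing expression is left short of an operand and the outer assignment never completes.
Code:
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stack machine, invented for illustration. assign() stores into memory but
// pushes nothing back, so the surrounding "+ 1" is missing an operand and the
// outer assignment to m[20] is never reached. The real interpreter misbehaved
// in its own way, but the root cause is the same: no value for the assignment.
public class AssignmentStackSketch {

    static int[] m = new int[32];
    static Deque<Integer> stack = new ArrayDeque<>();

    static void pushConst(int v)  { stack.push(v); }
    static void assign(int index) { m[index] = stack.pop(); /* nothing pushed back */ }
    static void add()             { stack.push(stack.pop() + stack.pop()); }

    public static void main(String[] args) {
        // Intended: m[20] = (m[15] = 15) + 1;
        pushConst(15);
        assign(15);          // m[15] = 15, stack is now empty
        pushConst(1);
        try {
            add();           // needs two operands, only one is there -> underflow
            assign(20);      // never reached, so m[20] stays 0
        } catch (RuntimeException e) {
            System.out.println("expression collapsed: " + e);
        }
        System.out.println("m[15]=" + m[15] + ", m[20]=" + m[20]);
    }
}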
sr. member
Activity: 434
Merit: 250
Small bug: I just noticed that the hashing statements are still limited to a 256 cell memory space, despite the memory having been increased to 64k cells.
sr. member
Activity: 434
Merit: 250
java stack exception as in "maximum allowed AST depth reached?" or a true java stack exception?

True java stack exception.  For example:
Code:
{{{{{{{{{{............m[0]=0; }}}}}}.......
Nested 10k deep or so.

Also, it occurs to me that because the java stack depth is non-deterministic (varies machine to machine, or even run-to-run on the same machine) this could be considered a consensus-breaker.

Quote
In my eyes 7 seconds is bad enough  Wink

OK.  In that case, I would probably suggest switching to a different parsing model. The current generated parser simply has too much overhead.  The grammar is small and straightforward enough that the current parsing infrastructure is almost certainly overkill.  (It also seems like it could probably be made into a "linear" language, without syntactic nesting, which would obviate both the stack depth concerns and the parser complexity concerns.)
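A small sketch of what such a "linear" form could look like: a flat array of instructions executed in a loop, so neither the parser nor the interpreter ever recurses. The instruction set here is invented purely for illustration:
Code:
// Invented instruction set, illustration only. A program is a flat array of
// instructions executed in a simple loop, so there is no nesting to bound and
// no parser recursion; a hard step limit replaces fragile WCET estimation.
public class LinearVmSketch {

    enum Op { SET, ADD, JMPZ, HALT }

    static class Insn {
        final Op op; final int a, b, c;
        Insn(Op op, int a, int b, int c) { this.op = op; this.a = a; this.b = b; this.c = c; }
    }

    static void run(Insn[] prog, int[] m, int maxSteps) {
        int pc = 0, steps = 0;
        while (pc < prog.length && steps++ < maxSteps) {
            Insn i = prog[pc];
            switch (i.op) {
                case SET:  m[i.a] = i.b;             pc++; break;
                case ADD:  m[i.a] = m[i.b] + m[i.c]; pc++; break;
                case JMPZ: pc = (m[i.a] == 0) ? i.b : pc + 1; break;  // loops stay bounded by maxSteps
                case HALT: return;
            }
        }
    }

    public static void main(String[] args) {
        int[] m = new int[64];
        Insn[] prog = {
            new Insn(Op.SET, 15, 15, 0),      // m[15] = 15
            new Insn(Op.SET, 16, 1, 0),       // m[16] = 1
            new Insn(Op.ADD, 20, 15, 16),     // m[20] = m[15] + m[16]
            new Insn(Op.HALT, 0, 0, 0)
        };
        run(prog, m, 1000);
        System.out.println(m[20]);  // 16
    }
}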

Quote
I will send you a BTC from my own savings soon, to express my gratitude for helping me with the parser.
We still have to figure out what to do with this awful FAA ;-) And, generally, with all those other "really interesting attacks".

I'm starting to get bored with banging on the parser and RuntimeEstimator (and increasingly feeling like they should both just be entirely overhauled - replaced with something simpler) so I might "skip ahead" to more interesting stuff soon.  Yesterday I just realized a dead simple solution to what I've been considering as the most interesting attack, and the solution adds only a very minor constraint on the vm, so maybe we'll delve into that next.

Quote
And I hope that we will soon see official bounty bug hunting guidelines here, but until then I strongly assume that you will be rewarded by Lannister for sure  Wink Thanks once again for all your help, without you we would be 10 steps behind.

At least 10.  Wink

Soon we'll be getting into the "real stuff" - the problems for which I don't see practical solutions being likely, at least without some significant design changes.  These are mostly all centered around the difficulty targeting and stuff like that "pseudo-FAA" problem.  (I'm still entirely convinced that "per-job" difficulty will be a must, somehow.)
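Since "per-job" difficulty keeps coming up, here is a very rough sketch of the shape such a thing could take; the retarget rule, the clamping and every parameter are invented and are not a proposal:
Code:
import java.util.HashMap;
import java.util.Map;

// Invented retarget rule, illustration only: every work job carries its own
// target, adjusted from that job's observed solution rate instead of using a
// single chain-wide PoW difficulty.
public class PerJobDifficultySketch {

    static final long DESIRED_SECONDS_PER_SOLUTION = 60;  // hypothetical

    static class JobDifficulty {
        double target = 1.0;       // toy convention: higher target = easier
        long lastSolutionSeconds;  // timestamp of the previous accepted solution

        void onSolution(long nowSeconds) {
            if (lastSolutionSeconds != 0) {
                double ratio = (nowSeconds - lastSolutionSeconds)
                        / (double) DESIRED_SECONDS_PER_SOLUTION;
                // clamp so a single lucky/slow sample cannot swing the target wildly
                ratio = Math.max(0.5, Math.min(2.0, ratio));
                target *= ratio;   // solutions arriving too fast -> smaller target -> harder
            }
            lastSolutionSeconds = nowSeconds;
        }
    }

    static final Map<Long, JobDifficulty> perJob = new HashMap<>();

    static JobDifficulty forJob(long jobId) {
        return perJob.computeIfAbsent(jobId, id -> new JobDifficulty());
    }
}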

full member
Activity: 168
Merit: 100
Code:
@echo off
if exist jdk (
set "javaDir=jdk"
goto startJava
)
rem inspired by http://stackoverflow.com/questions/24486834/reliable-way-to-find-jre-installation-in-windows-bat-file-to-run-java-program
rem requires Windows 7 or higher
setlocal enableextensions disabledelayedexpansion

rem find Java information in the Windows registry
rem for 64 bit Java on Windows 64 bit or for Java on Windows 32 bit
set "javaKey=HKLM\SOFTWARE\JavaSoft\Java Development Kit"

rem look for Java version
set "javaVersion="
for /f "tokens=3" %%v in ('reg query "%javaKey%" /v "CurrentVersion" 2^>nul') do set "javaVersion=%%v"

rem for 32 bit Java on Windows 64 bit
set "javaKey32=HKLM\SOFTWARE\Wow6432Node\JavaSoft\Java Development Kit"

rem look for 32 bit Java version on Windows 64 bit
set "javaVersion32="
for /f "tokens=3" %%v in ('reg query "%javaKey32%" /v "CurrentVersion" 2^>nul') do set "javaVersion32=%%v"

echo Java version in "%javaKey%" is "%javaVersion%" and in "%javaKey32%" is "%javaVersion32%"

rem test if a java version has been found
if not defined javaVersion if not defined javaVersion32 (
echo Java JDK not found, please install a Java JDK
goto endProcess
)

if not defined javaVersion ( set "javaVersion=0" )

if not defined javaVersion32 ( set "javaVersion32=0" )

rem test if a java version is compatible
if not %javaVersion% geq 1.8 (
if not %javaVersion32% geq 1.8 (
echo Java version is lower than 1.8, please install JDK 8 or later
goto endProcess
) else (
echo using Java 32 bit on a 64 bit Windows workstation
set "javaKey=%javaKey32%"
set "javaVersion=%javaVersion32%"
)
)

rem Get java home for current java version
for /f "tokens=2,*" %%d in ('reg query "%javaKey%\%javaVersion%" /v "JavaHome" 2^>nul') do set "javaDir=%%e"

if not defined javaDir (
echo Java directory not found
goto endProcess
) else (
echo using Java home directory "%javaDir%"
)

:startJava
    rem cd src
    rem dir *.java /s /B > ../FilesList.txt
    rem cd ..
    rem mkdir classes
"%javaDir%"\bin\javac.exe -classpath classes;lib\*;conf -d classes/ @FilesList.txt  

:endProcess
endlocal

to generate FilesList.txt

Code:
cd src
dir *.java /s /B > ../FilesList.txt
cd ..
mkdir classes
legendary
Activity: 1456
Merit: 1000
I don't see compile.bat under that link.  Could you post the contents that should be in compile.bat and we can create our own?

Cheers.

Sorry, I mixed up the link. It's been 13 hours of work for me today already. Here it is, but it probably needs some tweaking since it's for the old repo. For example, the "rem" should be removed to regenerate the FilesList.txt. I can try to fix it for the current repo tomorrow, or maybe someone else is quicker than me ;-)

https://github.com/OrdinaryDude/old-development-tree/blob/master/compile.bat

thanks
legendary
Activity: 1260
Merit: 1168
I don't see compile.bat under that link.  Could you post the contents that should be in compile.bat and we can create our own?

Cheers.

Sorry, I mixed up the link. It's been 13 hours of work for me today already. Here it is, but it probably needs some tweaking since it's for the old repo. For example, the "rem" should be removed to regenerate the FilesList.txt. I can try to fix it for the current repo tomorrow, or maybe someone else is quicker than me ;-)

https://github.com/OrdinaryDude/old-development-tree/blob/master/compile.bat
legendary
Activity: 1456
Merit: 1000
@unvoid: Maybe we should start writing a Wiki? And we could polish your small tutorial to be published in the OP posting. I am sure if we prepare everything, Lannister will easily swap out the content (am I correct, Lannister?)


Btw. I have added a compile.bat in the development tree
https://github.com/OrdinaryDude/elastic-core/tree/development
Not sure if it works for the master branch, though.

Quote
Let the @Lannister reward us all as community with little help with OP. Give him a sign if you have better contact with him.

If only Bitmessage would work more reliably. Sometimes I get a message through very quickly; on other days I wait hours to get an "Ack" for my sent message.

We could also consider letting him close this thread and have someone else open a new one? If everyone agrees, of course.
We will also need a reliable person to take care of the website anyway; not sure what happened with Isarmatrose, but he registered www.elastic-project.org. It's just that the content is poor at the moment. We could maybe get this domain from him (or convince him to become more active again) to fill it with content.

Examples could be: introduction, programmers reference for elastic PL, download links, faucet and explorer links, some background information on how everything works, ...

The thing is, I am totally busy with the reference client and the programming language parser at the moment so I have no time for a website or anything  Wink

I don't see compile.bat under that link.  Could you post the contents that should be in compile.bat and we can create our own?

Cheers.
legendary
Activity: 1260
Merit: 1168
@unvoid: Maybe we should start writing a Wiki? And we could polish your small tutorial to be published in the OP posting. I am sure if we prepare everything, Lannister will easily swap out the content (am I correct, Lannister?)


Btw. I have added a compile.bat in the development tree
https://github.com/OrdinaryDude/elastic-core/tree/development
Not sure if it works for the master branch, though.

Quote
Let the @Lannister reward us all as community with little help with OP. Give him a sign if you have better contact with him.

If only Bitmessage would work more reliably. Sometimes I get a message through very quickly; on other days I wait hours to get an "Ack" for my sent message.

We could also consider letting him close this thread and have someone else open a new one? If everyone agrees, of course.
We will also need a reliable person to take care of the website anyway; not sure what happened with Isarmatrose, but he registered www.elastic-project.org. It's just that the content is poor at the moment. We could maybe get this domain from him (or convince him to become more active again) to fill it with content.

Examples could be: introduction, programmers reference for elastic PL, download links, faucet and explorer links, some background information on how everything works, ...

The thing is, I am totally busy with the reference client and the programming language parser at the moment so I have no time for a website or anything  Wink
hero member
Activity: 535
Merit: 500
I strongly assume that you will be rewarded by Lannister for sure  Wink

Let @Lannister reward us all as a community with a little help with the OP. Give him a sign if you have better contact with him.

Thanks!