Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer - page 187. (Read 450523 times)

legendary
Activity: 1848
Merit: 1334
just in case
Evil-Knievel, dude, we want to see mainnet, c'mon my friend  Smiley
hero member
Activity: 1111
Merit: 588

To me, it seems that the journalist could have done his job better. There is no other way I could explain why the almost finished and already working XEL is ignored while for example "iex.ce" is thoroughly elaborated on.

totally agree on it.

I was thinking of mentioning XEL to Ben Dickson on Twitter, but I couldn't send him a link with extensive information about the project. I think we should hire a professional for promotion.
A new ANN thread would be a nice first step, where we could gather all the information spread across this thread and inform people about the project's roadmap. For something like that we need graphics, a logo, and a nice website.
Maybe we should make a mini donation as a community, or donate a portion of our coins for that purpose.
EK and the guys (ttook, coralreefer, etc.) have done so many things for XEL; I think it's time for the rest of us to do our part of the job.
sr. member
Activity: 448
Merit: 250
Ben2016
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!

There's always room for "one more test" or "one more feature", you know. But this is one of a few projects where the words "long term" don't sound scammy to me.
Nothing scammy here, but since the developers don't get paid (which is terrible), they have to work on this alongside their paying jobs, family, etc.

Who said that the developers don't get paid? Evil-Knievel has been paying, as far as I remember.
EK paid others out of his own pocket, from what I know.
sr. member
Activity: 434
Merit: 250
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!

There's always room for "one more test" or "one more feature", you know. But this is one of a few projects where the words "long term" don't sound scammy to me.
Nothing scammy here, but since the developers don't get paid (which is terrible), they have to work on this alongside their paying jobs, family, etc.

Who said that the developers don't get paid? Evil-Knievel has been paying, as far as I remember.
sr. member
Activity: 448
Merit: 250
Ben2016
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!

There's always room for "one more test" or "one more feature", you know. But this is one of a few projects where the words "long term" don't sound scammy to me.
Nothing scammy here, but since the developers don't get paid (which is terrible), they have to work on this alongside their paying jobs, family, etc.
hero member
Activity: 690
Merit: 500
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!

There's always room for "one more test" or "one more feature", you know. But this is one of a few projects where the words "long term" don't sound scammy to me.
hero member
Activity: 588
Merit: 500
Gon Totto
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!

Almost done might mean today, next week, next month, who knows  Smiley

only EK knows  Grin
sr. member
Activity: 448
Merit: 250
Ben2016
Are you planning to launch before the end of 2017?
2017? I thought we were almost done!
hero member
Activity: 690
Merit: 500
Are you planning to launch before the end of 2017?
sr. member
Activity: 243
Merit: 250

To me, it seems that the journalist could have done his job better. There is no other way I could explain why the almost finished and already working XEL is ignored while for example "iex.ce" is thoroughly elaborated on.


Totally agree. So few announcements about XEL.

We will do more marketing. Angry
hero member
Activity: 661
Merit: 500

To me, it seems that the journalist could have done his job better. There is no other way I could explain why the almost finished and already working XEL is ignored while for example "iex.ce" is thoroughly elaborated on.

totally agree on it.

Yeah, if you didn't pay the journo, you ain't gonna get "hyped". No matter. Keep on chuggin', fellas. You are an inspiration to the real folks here who aren't trolling.  Grin Smiley Cheesy
full member
Activity: 124
Merit: 100
There is no need to rush to be recognized. Bitcoin never did. There is going to be a functional and useful system very soon. From that point it will grow steadily for sure.
ImI
legendary
Activity: 1946
Merit: 1019

To me, it seems that the journalist could have done his job better. There is no other way I could explain why the almost finished and already working XEL is ignored while for example "iex.ce" is thoroughly elaborated on.

Such articles very often work on a basis of initiation. That means we contact an author, introduce him to XEL, and next time he eventually mentions us.

For us it's best to wait with such contacts until we are live and can point not only to something we want to accomplish in the future, but to something that already WORKS. That's a BIG advantage in contrast to all those nice plans from Golem and others.
legendary
Activity: 1260
Merit: 1168

To me, it seems that the journalist could have done his job better. There is no other way I could explain why the almost finished and already working XEL is ignored while for example "iex.ce" is thoroughly elaborated on.
legendary
Activity: 1330
Merit: 1000
legendary
Activity: 1260
Merit: 1168
Hmm... so, why is there a limit like that? I mean 64K ints is not nearly enough to process big data, so that seems like a bottleneck to me.

EK mentioned he wanted this for some of his own use cases. Can he elaborate and point to a practical use case for XEL?

Sorry guys, it's Christmas time and I am family-hopping the entire time. I'll be back home by tomorrow.
We have to have some sort of limit, because otherwise you can DOS weak nodes easily and split them off from the rest of the network.
With no limit, you could start processing gigabytes of data in memory by loading it up slowly and waiting until the weak nodes crash first with a memory overflow, with the rest following.

If anyone sees a way here to use unlimited memory without (a) bloating the blockchain, (b) bloating the memory of slower nodes, and (c) writing the data to disk (which would both just shift the point of failure and make processing really slow), we should discuss it!

EDIT: A few weeks earlier I came up with the idea of allowing "data sets" of arbitrary size to be uploaded and having the nodes do calculations on them. But here we run into synchronization problems and would require Golem's block-partial-work-on-subscribe scheme ... which sucks big time  Wink

Enjoy holidays! Well deserved!

The way I picture distributed computing is that the work request doesn't contain the work data, or even the code, but just a hash of it, e.g. using IPFS to distribute the data/code. Then multiple workers should arrive at the same result, and workers only publish the hash of their result. Once there is agreement that the work is correctly computed, payment is made. At least that's how I would do it. But there's nothing wrong with making what you have go live, and later expanding it to include more computation methods as they mature.


Thanks ;-) Well, the holiday was two-fold. First, the Mustang broke down, then I got a rental which also broke down, and I stood on the highway for 5 hours waiting for a replacement  Wink What a weekend.

Regarding the work scheme, I (and all of us) was just sticking to the original idea, and together we came up with something that actually works and allows for an almost-reliable proof-of-execution. Admittedly, it is suited to one specific class of computation problems.

What you describe is usable for different kinds of tasks: ones that do not explore a search space for the right solution to a complex computation problem, but instead "map-reduce" a large but simple computation problem into multiple smaller tasks which are computed by multiple nodes and then pieced together afterward.

Golem does exactly what you have described, and I think it is not sufficient to verify the computation result by letting multiple nodes (or should I say my own Sybils) execute the same package and checking the results for equality. I would go even further ;-) I am quite sure that I will be able to show a way to bypass Golem's verification mechanism (as it is currently described), and I think I would be willing to put a public 1000-dollar bet on it! Sure, I could lose, but at the moment I am really confident. And if I can't make it, I think HMC would very likely be able to claim my stake ;-)

(Reference: https://golem.network/doc/Golemwhitepaper.pdf, Nov 2016, Resilience chapter)
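The memory cap EK defends above can be pictured as a simple per-job allocation budget that a node checks before committing real memory. Below is a minimal Python sketch of that idea; the names (`WorkContext`, `MemoryBudgetExceeded`, `alloc`) are invented for illustration and are not Elastic's actual code, though the 64K-int cap matches the limit discussed in this thread.

```python
class MemoryBudgetExceeded(Exception):
    """Raised when a job asks for more memory than a node will grant."""

class WorkContext:
    # Hypothetical per-job accounting; 64K ints matches the cap
    # discussed above, but the class itself is purely illustrative.
    MAX_INTS = 64 * 1024

    def __init__(self):
        self.used = 0  # ints allocated so far by this job

    def alloc(self, n_ints):
        # Reject oversized jobs up front instead of letting a slow
        # node swap or crash with an out-of-memory error.
        if self.used + n_ints > self.MAX_INTS:
            raise MemoryBudgetExceeded(
                f"job wants {self.used + n_ints} ints, cap is {self.MAX_INTS}")
        self.used += n_ints
```

The point of the hard cap is exactly the DOS argument above: an attacker who can grow a job's working set without bound crashes the weakest nodes first and splits them from the network, so the budget has to be enforced before allocation, not after.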
hero member
Activity: 513
Merit: 500
Hmm... so, why is there a limit like that? I mean 64K ints is not nearly enough to process big data, so that seems like a bottleneck to me.

EK mentioned he wanted this for some of his own use cases. Can he elaborate and point to a practical use case for XEL?

Sorry guys, it's Christmas time and I am family-hopping the entire time. I'll be back home by tomorrow.
We have to have some sort of limit, because otherwise you can DOS weak nodes easily and split them off from the rest of the network.
With no limit, you could start processing gigabytes of data in memory by loading it up slowly and waiting until the weak nodes crash first with a memory overflow, with the rest following.

If anyone sees a way here to use unlimited memory without (a) bloating the blockchain, (b) bloating the memory of slower nodes, and (c) writing the data to disk (which would both just shift the point of failure and make processing really slow), we should discuss it!

EDIT: A few weeks earlier I came up with the idea of allowing "data sets" of arbitrary size to be uploaded and having the nodes do calculations on them. But here we run into synchronization problems and would require Golem's block-partial-work-on-subscribe scheme ... which sucks big time  Wink

Enjoy holidays! Well deserved!

The way I picture distributed computing is that the work request doesn't contain the work data, or even the code, but just a hash of it, e.g. using IPFS to distribute the data/code. Then multiple workers should arrive at the same result, and workers only publish the hash of their result. Once there is agreement that the work is correctly computed, payment is made. At least that's how I would do it. But there's nothing wrong with making what you have go live, and later expanding it to include more computation methods as they mature.
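The hash-agreement scheme quoted above can be sketched in a few lines of Python. This is only an illustration of the idea (names like `settle` and `result_hash` are invented here, and the IPFS distribution step is left out), and it deliberately exhibits the weakness EK raises in his reply: counting agreeing workers is meaningless without Sybil resistance.

```python
import hashlib
from collections import Counter

def result_hash(result: bytes) -> str:
    # Workers publish only the SHA-256 of their computed result,
    # not the (possibly large) result itself.
    return hashlib.sha256(result).hexdigest()

def settle(reported, quorum):
    """Pay the workers whose published result hash reached agreement.

    `reported` maps worker id -> published result hash. Note the
    objection above: without Sybil resistance, one party can run many
    "workers" and manufacture this agreement.
    """
    if not reported:
        return []
    winner, votes = Counter(reported.values()).most_common(1)[0]
    if votes >= quorum:
        return [w for w, h in reported.items() if h == winner]
    return []  # no quorum yet; withhold payment
```

For example, with three workers of which two report the same hash and a quorum of two, the two agreeing workers get paid; a single honest report never settles on its own.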
member
Activity: 237
Merit: 10
Is there anything I should know before giving it a shot? Any pitfalls? Do you have a record of offences for which you have been convicted and sentenced to two years in an Australian correctional facility? Do you eat bugs? Damn it, I wouldn't touch it until I have a detailed report on the introvert behind the XEL btctalk account.
legendary
Activity: 1260
Merit: 1168
hi ek, any update?


* elastic pl
* website
* core testing
* mainnet

?

Just came back an hour ago. I will post some updates tomorrow.
hero member
Activity: 742
Merit: 501
hi ek, any update?


* elastic pl
* website
* core testing
* mainnet

?