
Topic: NXT :: descendant of Bitcoin - Updated Information - page 1089. (Read 2761629 times)

legendary
Activity: 1176
Merit: 1134
Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price?

What is an objective difference between those two options?  They look like two ways to state the same thing to me.

Is the trouble here in the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?

One of the problems we run into is the Halting Problem -- there's just no way to know before-hand if a program will find an answer or chug away with calculations until the heat death of the universe.

kill -9 after the time limit is up. Time limit = # of NXT paid * milliseconds.
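That policy could be sketched as follows (Python, purely illustrative; the `run_with_budget` helper and the 1-ms-per-NXT rate are assumptions taken from the post above, not anything NXT actually implements):

```python
import subprocess

def run_with_budget(cmd, nxt_paid, ms_per_nxt=1.0):
    """Run an untrusted script with a wall-clock budget derived from its fee.

    Assumed policy from the post above: time limit = NXT paid * milliseconds.
    """
    timeout_s = nxt_paid * ms_per_nxt / 1000.0
    try:
        # On timeout, subprocess.run() kills the child process
        # (the "kill -9" step) and raises TimeoutExpired.
        return subprocess.run(cmd, capture_output=True, timeout=timeout_s).returncode
    except subprocess.TimeoutExpired:
        return None  # budget exhausted; the script was killed
```

The point is that a forger never needs to solve the Halting Problem: a runaway program is simply cut off at the limit it paid for.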
legendary
Activity: 1176
Merit: 1134
Well ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here in the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?

Of course if you don't set some sort of a price per instruction then you'd end up "stopping the network".

Sure it makes some sense for a forger to determine how many operations they would be willing to execute but then how would we actually "check" that this has been done correctly if other nodes are *not* doing the same work?

BTW - I am not really so excited about the whole DAC thing (which I gather James is) and would instead like to see simpler "more useful" concepts being created (rather than "pie in the skynet" ideas).

All sorts of useful things can be done using the middle NXT layers.
I envision the ability for a Turing script to trigger sending an email. It would do this by processing its inputs (aliases and AM) and outputting an AM. Then services running on hub nodes process the AM and send out an email.

Lots of details missing, but those are details. I am trying to understand the concepts for each layer.

Now not all forging nodes will have all services available, but as long as 90% of the forging is done on full service nodes, then within 10 minutes it is almost assured that any services needed will be done.

More details needed to make sure things are done once and only once, but these again are details.

So, if you don't like the DAC layer, you can just deal with the services layer. Whatever service we can run on 100+ hub servers can be triggered by AM created by Turing scripts.

This requires proper segmentation of the overall task to the right layer. From my understanding, as long as the Turing scripts can get access to input data, they won't have to do a whole lot of complicated calculations. Though for a price, any calculation is allowed.

I think 1 NXT per millisecond of compute time is reasonable.

James
sr. member
Activity: 490
Merit: 250
I don't really come from outer space.
Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price?

What is an objective difference between those two options?  They look like two ways to state the same thing to me.

Is the trouble here in the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?

One of the problems we run into is the Halting Problem -- there's just no way to know before-hand if a program will find an answer or chug away with calculations until the heat death of the universe.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Well ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here in the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?

Of course if you don't set some sort of a price per instruction then you'd end up "stopping the network".

Sure it makes some sense for a forger to determine how many operations they would be willing to execute but then how would we actually "check" that this has been done correctly if other nodes are *not* doing the same work?

BTW - I am not really so excited about the whole DAC thing (which I gather James is) and would instead like to see simpler "more useful" concepts being created (rather than "pie in the skynet" ideas).
legendary
Activity: 1722
Merit: 1217
If, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.

Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"?

So - let's say we go with the idea of the 1 instruction low-level language and we charge say 1 NXT per 100 "operations".

Then a simple SHA256 operation will likely cost you 100's of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned" I guess).

Quote
No one has answered what it will do to the 1000 tps claims?

Indeed - the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all.
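As a rough sanity check of that pricing (all the operation counts below are loose assumptions for illustration, not measurements of any real VM):

```python
# Back-of-the-envelope: cost of one SHA-256 compression at "1 NXT per 100 ops".
ROUNDS = 64                  # SHA-256 compression has 64 rounds
OPS_PER_ROUND = 50           # assumed simple 32-bit ops per round (rough guess)
SUBLEQ_EXPANSION = 10        # assumed blow-up on a one-instruction machine
NXT_PER_100_OPS = 1

total_ops = ROUNDS * OPS_PER_ROUND * SUBLEQ_EXPANSION
cost_nxt = total_ops // 100 * NXT_PER_100_OPS
print(total_ops, cost_nxt)   # 32000 320
```

Even with generous assumptions a single hash lands in the hundreds of NXT, which is exactly the "100's of NXT" objection above.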


Well ok, I may have misunderstood you, so let me just step back and ask some questions. Why should we set a price per operation? Why not let the forger determine how many operations he is willing to perform for a given price? Is the trouble here in the fact that it would be unreasonable for the rest of the network to check the forger's work if the forger happened to be running on a really powerful computer?

I would imagine it's much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into AM.

In which case I would suggest: why bother doing "any calcs" at all - there seems to be *no thought* here about what this concept is needed for, apart from somehow answering Ethereum with "me too"!

If we don't even have any idea what it is going to be used or useful for then I'd suggest not building it (at least we *know* what Zerocoin can offer).


This is the way it works, man. We don't expect that as soon as someone suggests an idea he must have every detail worked out. Someone suggests an idea and then we talk about it.
legendary
Activity: 1176
Merit: 1134
I would imagine it's much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into AM.

In which case I would suggest: why bother doing "any calcs" at all - there seems to be *no thought* here about what this concept is needed for, apart from somehow answering Ethereum with "me too"!

Did you see my post about the layers with 400000 NXT bounty?
Building from Turing complete you can get to where you can flowchart DAC control flow and have it automatically generate the code to implement it.
How is that not useful?
We will of course need to do a few reference DACs so people will get an idea of what can be implemented.

Alias has a bunch of ideas:
*************
Alright. I'm going to interpret "most valuable" as meaning the following:

Quote
The most valuable DACs will be those that simultaneously engage a large audience (far beyond those individuals with exclusively an economic interest in cryptocurrencies) and inspire them to imagine a world where decentralized/distributed trustless systems are the norm.

You might watch the following to get near the same wavelength as me - http://www.youtube.com/watch?v=dE1DuBesGYM. Also just for clarity, I tend to focus on the Decentralised Autonomous Community definition rather than the Decentralised Autonomous Corporation/Company definition.

Simple DAC games - These would allow simple competitive gaming with or without betting or financial reward (think tournaments).

Tic-tac-toe
Chess
Rock, Paper, Scissors

Complex DACs:

[1] Prototype Resource Based Economy (RBE) - A simplistic version of this would be a simple ledger of all the resources owned by this DAC's community. Effectively a stock-take of all the lawn mowers, projectors, step-ladders, and basketballs owned by the community, and who has access to them right now. And who has requested access to them in the future. And who needs access to them right now in particular cases (e.g., a defibrillator).

I cannot stress enough that even a poorly implemented prototype RBE would generate a huge amount of interest. http://www.youtube.com/watch?v=PIMy0QBSQWo.

[2] Prototype Decentralized Government - This doesn't have to be at a very large scale. It could allow a golf club or a soccer club to organize themselves and vote on pressing issues.
[3] Prototype Credit Union - I think that this is the pièce de résistance. It doesn't have to be incredibly complex. Think 10 members in the first credit union. They all lodge 100 NXT into the credit union. One member has a great idea for a new business/DAC etc. The community listens to his pitch and votes to give him 500 NXT. He launches his venture, makes a tonne of money, and pays back the credit union (most likely with interest), and he lodges a bit more anyway.
[4] Prototype Voting Registry
[5] Prototype Distributed Census

*I say prototype here because one would get their head in a twist if they tried to implement all the nuances associated with each system.
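The resource ledger in [1] could be sketched as a tiny data structure (Python, purely illustrative; every name here is hypothetical, and a real DAC would keep this state on-chain rather than in memory):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Resource:
    """One community-owned item: who holds it now, who is queued for it."""
    name: str
    holder: Optional[str] = None            # current holder, if any
    queue: list = field(default_factory=list)  # future access requests

class Ledger:
    """Toy stock-take of community resources (lawn mowers, projectors, ...)."""
    def __init__(self):
        self.items = {}

    def register(self, name):
        self.items[name] = Resource(name)

    def request(self, name, member):
        r = self.items[name]
        if r.holder is None:
            r.holder = member       # free right now: hand it over
        else:
            r.queue.append(member)  # otherwise queue the future request

    def release(self, name):
        r = self.items[name]
        r.holder = r.queue.pop(0) if r.queue else None
```

Even this toy version captures the three things the post asks for: what exists, who has it now, and who has requested it next.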

I hope that helps. Feel free to ask me any more questions. I will be much more active within the NXT community from the middle of February onwards.

As a closing remark I might suggest that there might be no extremely complex DACs. What looks like a complex DAC might be the result of emergent behavior on top of simple interacting DACs. Determine the most primitive lego bricks of how humans communicate, organize, and transfer/store data, and 95% of all DACs will be entirely constructible from them. That is where NXT could beat Ethereum. In their case, it is almost hard to know where to start because you can do almost anything. If we can create the lego bricks that guide people, we might end up with things like this - http://totallytop10.com/wp-content/uploads/2010/01/lego-giraffe1.jpg. I suppose that is also the Minecraft way of looking at things.
legendary
Activity: 1176
Merit: 1134
Except CfB said it had to be a low-level (assembly language) type of language. He started a contest to find the language with the fewest opcodes that is still Turing complete. I doubt anything is less than 1, though there were half a dozen variants of single-opcode Turing-complete languages.

This isn't a competition to find the least # of instructions (but if it was then "you won"). Cheesy

I am guessing CfB just isn't keen on something that is too complicated but believe me *without* instructions such as SHA256 such a VM would really be of no practical use at all.

So I guess it depends whether we are wanting to add something "useful" or whether we just want to say "me too" when it comes to having some sort of "Turing complete VM".


I think the whole idea is stupid and goes nowhere. Any time something new is announced by someone else, we have a "me too" discussion here. Last week it was Zerocoin. What happened to it? Nothing. This week it's "Turing complete VM".
 
No one has answered what it will do to the 1000 tps claims?


I have created a project plan that is being worked on to add zerocoin functionality. I am sorry I could not get this project done in 3 days. How long will I have to get the zerocoin functionality done?

We will be adding all features to NXT that make NXT more valuable.
I think NXTcash will make NXT valuable. I also think being able to flowchart DACs and submit them to be run on the NXT network is a very valuable feature.

Notice that running DACs will require a lot of transactions.

James
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
I would imagine it's much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into AM.

In which case I would suggest: why bother doing "any calcs" at all - there seems to be *no thought* here about what this concept is needed for, apart from somehow answering Ethereum with "me too"!

If we don't even have any idea what it is going to be used or useful for then I'd suggest not building it (at least we *know* what Zerocoin can offer).
legendary
Activity: 1176
Merit: 1134
If, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.

Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"?

So - let's say we go with the idea of the 1 instruction low-level language and we charge say 1 NXT per 100 "operations".

Then a simple SHA256 operation will likely cost you 100's of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned" I guess).

Quote
No one has answered what it will do to the 1000 tps claims?

Indeed - the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all.

I would imagine it's much cheaper to pay 1 NXT for a client-generated AM that does any complex calcs. The scripts can access AM data, so the idea is for the higher-level code to push expensive-to-calculate data down into AM.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
If, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.

Good grief - suggesting ideas for a "useful instruction set" is now called "central planning"?

So - let's say we go with the idea of the 1 instruction low-level language and we charge say 1 NXT per 100 "operations".

Then a simple SHA256 operation will likely cost you 100's of NXT - so we now have a completely useless VM for doing SHA256 (but at least it wasn't "centrally planned" I guess).

Quote
No one has answered what it will do to the 1000 tps claims?

Indeed - the reason why I was suggesting we would need a "practical" instruction set if we are going to bother with this at all.
hero member
Activity: 644
Merit: 500
Except CfB said it had to be a low-level (assembly language) type of language. He started a contest to find the language with the fewest opcodes that is still Turing complete. I doubt anything is less than 1, though there were half a dozen variants of single-opcode Turing-complete languages.

This isn't a competition to find the least # of instructions (but if it was then "you won"). Cheesy

I am guessing CfB just isn't keen on something that is too complicated but believe me *without* instructions such as SHA256 such a VM would really be of no practical use at all.

So I guess it depends whether we are wanting to add something "useful" or whether we just want to say "me too" when it comes to having some sort of "Turing complete VM".


I think the whole idea is stupid and goes nowhere. Any time something new is announced by someone else, we have a "me too" discussion here. Last week it was Zerocoin. What happened to it? Nothing. This week it's "Turing complete VM".
 
No one has answered what it will do to the 1000 tps claims?

member
Activity: 98
Merit: 10
Think about what such a VM is going to be "useful" for first and foremost. To my mind it would be for executing contracts which may require "proofs" for which either EC or SHA256 operations are likely to be the "bread and butter" (same as is the case with Bitcoin's "script" and exactly why it *has* such op codes).

Trouble is, I don't know (see my above post). I'm coming into all this from a math perspective (subleq and one-instruction computing piqued my interest here), not a cryptocurrency perspective. I just bought my first crypto a little over a month ago Smiley Could you expand on this? What else might smart contracts need?

legendary
Activity: 1722
Merit: 1217
Or are you thinking of hardware optimization, for applications that require lots of it?

Think about what such a VM is going to be "useful" for first and foremost. To my mind it would be for executing contracts which may require "proofs" for which either EC or SHA256 operations are likely to be the "bread and butter" (same as is the case with Bitcoin's "script" and exactly why it *has* such op codes).

Just being able to execute "arbitrary" code for the purpose of "executing arbitrary" code seems kind of silly to me (e.g. do you think anyone is going to spend large amounts of Nxt fees to run some sort of "card game" or the like?).


Paying enough to incentivize the network to run your code would be expensive, so we can trust self-interested people not to use it for arbitrary tasks. I'm sure some would, but it would be a small problem. If, on the other hand, we tried to centrally plan this and dictate what sorts of code are acceptable and what sorts aren't, then who knows what sort of innovation we may be stifling for lack of creativity and foresight on our part.
member
Activity: 98
Merit: 10
I wish CfB had answered my question about whether he liked subleq or not. That is really the only thing that matters. If we can cut time to market in half by using subleq, it would make sense to use it.

I'd wait (for just a while! The clock won't tick by much) until we know more clearly what the parameters are, and what exactly is being achieved by having a reduced instruction set, before we decide which particular instructions would be good.

(All I know right now is "Smart contracts need to run algos", I don't know why they need algos that are built from few instructions.)
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Or are you thinking of hardware optimization, for applications that require lots of it?

Think about what such a VM is going to be "useful" for first and foremost. To my mind it would be for executing contracts which may require "proofs" for which either EC or SHA256 operations are likely to be the "bread and butter" (same as is the case with Bitcoin's "script" and exactly why it *has* such op codes).

Just being able to execute "arbitrary" code for the purpose of "executing arbitrary" code seems kind of silly to me (e.g. do you think anyone is going to spend large amounts of Nxt fees to run some sort of "card game" or the like?).
legendary
Activity: 1722
Merit: 1217
Choose your language:

Versions of non-JVM languages (Language / On JVM):
Erlang / Erjang
JavaScript / Rhino
Pascal / Free Pascal
PHP / Quercus
Python / Jython
REXX / NetRexx
Ruby / JRuby
Tcl / Jacl

Languages designed expressly for the JVM:
BBj
Clojure
Fantom
Groovy
MIDletPascal
Scala
Kawa
We need a low-level language. Most (all?) languages in your list are high-level.

Why? What's the difference?

Low-level languages interface directly with the hardware - the disk, RAM, and processor - and with them you give those components very specific directions. Higher-level languages interface with lower-level ones: they are designed to make the processes executed by lower-level languages more intuitive and to allow for more abstraction in deploying the lower-level language's functionality.
legendary
Activity: 1176
Merit: 1134
I was thinking about subleq some more over lunch, and took a look at some of what's been done.

We talked about three factors earlier: testing, execution speed, memory use. However we weigh the importance of each of these factors, I don't think there's a situation where subleq, by itself, is a good choice (a more comprehensive reduced instruction set could be, though).

Here's what I mean in more detail:

It takes at minimum three subleq (subtraction) operations to perform one addition. To do c = a + b, you do

i = 0 - a
i = i - b  (remember this is an assignment equal, so the math isn't funny)
c = 0 - i

Was fun to do as a mental exercise, but pretty stupid in practice. Much better to just have a 2nd instruction,   ADD a, b   which replaces b with a + b directly in one operation. The testing cost should be minimal, since addition is so well-understood. It saves on memory and computation. And is a lot less stupid to work with.

For more complex operations, this compounds further. Take a look at http://www.sccs.swarthmore.edu/users/06/adem/engin/e25/finale/ for some idea of it.


The budget is millions of instructions per script instance to get to milliseconds of runtime.
If we are talking about a few hundred lines of C code, there is no way that expands to a million instructions.
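Those two figures are at least self-consistent. Assuming a VM that retires on the order of 10^9 simple instructions per second (a loose modern-CPU figure, not a measurement):

```python
OPS_PER_SECOND = 1_000_000_000   # assumed: ~1e9 simple ops/sec, native speed
budget_ops = 2_000_000           # "millions of instructions per script instance"
runtime_ms = budget_ops / OPS_PER_SECOND * 1000
print(runtime_ms)                # 2.0
```

Even with a 10x interpreter slowdown, that budget stays in the tens-of-milliseconds range, so millisecond-scale pricing is plausible.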

If subleq is no good, then we need some other instruction set. This is only one layer of the total picture, though certainly an important one.

I wish CfB had answered my question about whether he liked subleq or not. That is really the only thing that matters. If we can cut time to market in half by using subleq, it would make sense to use it.

Who really looks at compiler output nowadays anyway? When all the layers are complete, people will be designing DACs using flowcharting software. They won't care what the assembly code is like. As long as the C compiler works, do you care about the machine code?

Time to market. The clock is ticking. Ethereum is way ahead of us.

James
member
Activity: 98
Merit: 10
Except CfB said it had to be a low-level (assembly language) type of language. He started a contest to find the language with the fewest opcodes that is still Turing complete. I doubt anything is less than 1, though there were half a dozen variants of single-opcode Turing-complete languages.

This isn't a competition to find the least # of instructions (but if it was then "you won"). Cheesy

I am guessing CfB just isn't keen on something that is too complicated but believe me *without* instructions such as SHA256 such a VM would really be of no practical use at all.

So I guess it depends whether we are wanting to add something "useful" or whether we just want to say "me too" when it comes to having some sort of "Turing complete VM".


SHA256 could just be implemented as a higher-level function (not part of the opcode). After all, it's just a composition of ANDs, ORs, NOTs, and bitshifts.

Or are you thinking of hardware optimization, for applications that require lots of it?
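For what it's worth, the core SHA-256 bit-mixing functions really are expressible with just those primitives (the full hash also needs additions mod 2^32, which the post above glosses over). A Python sketch, illustrative only:

```python
MASK = 0xFFFFFFFF  # work in 32-bit words

def rotr(x, n):
    # rotate-right built from two shifts and an OR
    return ((x >> n) | (x << (32 - n))) & MASK

def xor(a, b):
    # XOR built only from AND, OR, NOT
    return (a | b) & (~(a & b) & MASK)

def ch(x, y, z):
    # SHA-256 "choose": picks bits of y where x is 1, bits of z where x is 0
    return xor(x & y, (~x & MASK) & z)

def maj(x, y, z):
    # SHA-256 "majority" of the three inputs at each bit position
    return xor(xor(x & y, x & z), y & z)
```

The cost argument stands, though: on a one-instruction machine even these few lines explode into many subleq steps per 32-bit word, which is why a built-in SHA256 instruction keeps coming up.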
legendary
Activity: 1176
Merit: 1134
Except CfB said it had to be a low-level (assembly language) type of language. He started a contest to find the language with the fewest opcodes that is still Turing complete. I doubt anything is less than 1, though there were half a dozen variants of single-opcode Turing-complete languages.

This isn't a competition to find the least # of instructions (but if it was then "you won"). Cheesy

I am guessing CfB just isn't keen on something that is too complicated but believe me *without* instructions such as SHA256 such a VM would really be of no practical use at all.

So I guess it depends whether we are wanting to add something "useful" or whether we just want to say "me too" when it comes to having some sort of "Turing complete VM".

There are 28 of the usual type of assembly instructions that can be built from the single subleq instruction. There is a C compiler that generates higher_subleq.

The only other alternative so far is 28 instructions, but I'm not sure whether an open-source C compiler was found for it, nor how much other code already exists.

This is why I defined a layered approach to get to where anybody can flowchart a DAC. That sure sounds useful; just ask Alias.

James
member
Activity: 98
Merit: 10
I was thinking about subleq some more over lunch, and took a look at some of what's been done.

We talked about three factors earlier: testing, execution speed, memory use. However we weigh the importance of each of these factors, I don't think there's a situation where subleq, by itself, is a good choice (a more comprehensive reduced instruction set could be, though).

Here's what I mean in more detail:

It takes at minimum three subleq (subtraction) operations to perform one addition. To do c = a + b, you do

i = 0 - a  (there's a register holding the constant 0 somewhere)
i = i - b  (remember this is an assignment equal, so the math isn't funny)
c = 0 - i

Was fun to come up with as a mental exercise, but pretty stupid in practice. Much better to just have a 2nd instruction,   ADD a, b   which replaces b with a + b directly in one operation. The testing cost should be minimal, since addition is so well-understood. It saves on memory and computation. And is a lot less stupid to work with.

For more complex operations, this compounds further. Take a look at http://www.sccs.swarthmore.edu/users/06/adem/engin/e25/finale/ for some idea of it. (GOTO section 3.2 Implemented Operations)
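The three-subtraction addition above can be checked with a minimal subleq interpreter (a sketch; this uses the common `subleq a, b, c` convention where mem[b] -= mem[a] and execution jumps to c when the result is <= 0, halting on a negative jump target):

```python
def run_subleq(mem, pc=0):
    """Tiny subleq machine: one instruction type, three cells (a, b, c) each.
    Semantics: mem[b] -= mem[a]; jump to c if the result is <= 0, else fall
    through to the next instruction. A negative pc halts the machine."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# c = a + b via three subtractions (plus a halt), as in the post above.
# Data cell addresses: A=12, B=13, I=14 (scratch), C=15 (result), Z=16 (zero).
A, B, I, C, Z = 12, 13, 14, 15, 16
prog = [A, I, 3,        # i -= a        -> i = -a
        B, I, 6,        # i -= b        -> i = -a - b
        I, C, 9,        # c -= i        -> c = a + b
        Z, Z, -1,       # z -= z (stays 0), jump to -1: halt
        7, 5, 0, 0, 0]  # data: a=7, b=5, i=0, c=0, z=0
run_subleq(prog)
print(prog[C])  # 12
```

Four instructions and a handful of memory cells just to add two numbers; the expansion factor the post describes is exactly this.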

Jump to: