
Topic: Do you think "iamnotback" really has the" Bitcoin killer"? - page 13. (Read 79977 times)

sr. member
Activity: 336
Merit: 265
reddit is very important for a cryptocurrency's community.

Thanks for giving me an opportunity to rant.

I fucking hate Redditardit. Their system is corrupt. I want to replace that piece-of-shit with an app that runs on our blockchain. Redditardit must die.

I'm serious. Many pieces-of-shit are going to die.
legendary
Activity: 1358
Merit: 1014
I was going to advise getting a bitnet reddit registered, but it's already registered:

https://www.reddit.com/r/Bitnet/

Looks like it was something related to Bitcoin, but the guy who registered it deleted his account. Maybe it's possible to regain control of it somehow?
reddit is very important for a cryptocurrency's community.
sr. member
Activity: 336
Merit: 265
Just a question from a non-technical guy. I haven't really read all that you guys wrote, but I've noticed JavaScript being mentioned. So why JavaScript and not WebAssembly? Apparently WASM is superior; even JavaScript's creator Brendan Eich endorses it.

I just looked into it:

http://webassembly.org/docs/web/

WASM is unstable, incomplete, and not supported everywhere. Can't yet target it. Also it is a low-level paradigm, so not really suitable for a programming language. Think of it more as an output target for a compiler.

We use C where we need absolute performance. Otherwise we prefer to use a better high-level language. Problem is that high-level languages sort of suck. There really isn't an ideal choice yet. But C is yucky for rapid coding. Yet JavaScript is also lacking many things, especially the ability to manipulate packed binary data structures. Node.js has some verbose API for that which is not elegant. These are some of the reasons I wanted to try to create a statically typed scripting language that compiles to TypeScript.

I find it very difficult to comprehend what @iadix writes. His explanations are not clear to me. Overly verbose and unable to get directly to the point. I know I could figure out his points if I wanted to take the time to unravel it, but it would be much easier if he could communicate more effectively. Sorry.

I know basically that he is trying to solve the issue of how to interface modules and APIs with typed objects. Me too. He has his formulation, and I was working on a different one which I think would be more elegant and attract more app developers.
sr. member
Activity: 336
Merit: 265
Or alternatively, creating another scripting language that recognizes the module interfaces/APIs and can make calls to those modules, to allow better high-level node definitions and building applications with this scripting language.

90% of an application is making calls to module interfaces and formatting the UI.

If the scripting language can include some form of HTML templating, in addition to exchanging data via the API, it could allow more of the application to be programmed with this scripting language.

That's my intended plan if I can get it done. Not sure if I can, but if so I think it will be much more elegant.
sr. member
Activity: 336
Merit: 265
Lol, the random april fools trolling busted you; eth apologist, steem fanboy, and ltc facilitator. Do you think this is by mistake, or by design?

Sorry you are incorrect.

The miners are being induced to buy new ASICs which will force them to vote for SegWit. It is a very clever strategy.

^^^ LOL. Humour-free zone, obviously.  Embarrassed  ^^^

@stereotype what can you say now that LTC is $9.50?  Tongue
full member
Activity: 322
Merit: 151
They're tactical
In the abstract, to do something based on COM & IDL, it would be easy to come up with something like this:

Code:

class MyModule : public IDL_INTERFACE
{
    int method1(int a, string b)
    {
        mem_zone_ref params = { ptr_null };

        create_obj(&params, "params", type_json_object);
        set_key_int(&params, "a", a);
        set_key_str(&params, "b", b);

        module_call("myModule", "method1", &params);

        deRef(&params);
    }
};

IDL_INTERFACE *myInterface = new MyModule();
myInterface->method1(18, "toto");


Or the opposite

Code:

class jsonRPCHost
{
    IDL_INTERFACE *myInterface = new MyModule();

    JsonResult method1(string jsonParams)
    {
        mem_zone_ref params = { ptr_null };
        int a;
        string b;

        tree_from_json(jsonParams, &params);

        get_key_int(&params, "a", &a);
        get_key_str(&params, "b", &b);

        myInterface->method1(a, b);

        deRef(&params);
    }
};

jsonRPCHost rpc;

rpc.method1("{a:18,b:\"toto\"}");



Or the same can be done passing the parameters as a JSON array instead of named values.

Code:
rpc.method1("[18,\"toto\"]");

This gives a binding between the COM interface, the JSON/RPC interface, and the binary modules.


The code extracting the parameters needs to change depending on the mode. The RPC layer supports both: with my code it detects whether the JSON is an array, and the module has to adapt its parameter parsing depending on whether arrays or named values are used. By default it uses an array for the moment, but using named values would allow passing JSON-ified JS objects directly to the RPC on the client side; currently the JS API builds the array out of the JS object for the RPC.
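As a sketch of the array-vs-named-values detection described above (this is not the framework's actual tree API; `detect_params_kind` and the enum are hypothetical names, and the real runtime would inspect its binary tree representation rather than the raw text):

```c
#include <ctype.h>

/* Decide whether a JSON-RPC params payload is positional (array) or
 * named (object), so the module can adapt its parameter parsing. */
typedef enum { PARAMS_ARRAY, PARAMS_OBJECT, PARAMS_INVALID } params_kind;

params_kind detect_params_kind(const char *json)
{
    while (*json && isspace((unsigned char)*json))
        json++;                             /* skip leading whitespace */
    if (*json == '[') return PARAMS_ARRAY;  /* positional: [18,"toto"] */
    if (*json == '{') return PARAMS_OBJECT; /* named: {a:18,b:"toto"}  */
    return PARAMS_INVALID;
}
```

A module supporting both modes would branch on this result before extracting its parameters.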




A generic module call in C:

Code:
tree_from_json("{a:18,b:\"toto\"}", &params);

module_call("myModule", "method1", &params);


The name of the method can be extracted from the full JSON-RPC request.

With a "mod.method" syntax in the RPC request, the runtime function executing the call to the module method could make the call automatically, with generic code, for any RPC request.
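A minimal sketch of that "mod.method" extraction (`split_rpc_method` is a hypothetical helper, not part of the framework; the runtime would then look the module up and forward the decoded JSON params to it):

```c
#include <string.h>

/* Split an RPC method string like "myModule.method1" into its module
 * and method parts. Returns 1 on success, 0 on a malformed request. */
int split_rpc_method(const char *rpc, char *mod, size_t modsz,
                     char *method, size_t methodsz)
{
    const char *dot = strchr(rpc, '.');
    if (!dot || dot == rpc || dot[1] == '\0')
        return 0; /* no dot, empty module, or empty method */
    size_t mlen = (size_t)(dot - rpc);
    if (mlen >= modsz || strlen(dot + 1) >= methodsz)
        return 0; /* would not fit in the caller's buffers */
    memcpy(mod, rpc, mlen);
    mod[mlen] = '\0';
    strcpy(method, dot + 1);
    return 1;
}
```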

For the moment it uses the HTTP URL path to determine the module name, and the config file binds the HTTP path to the module interface. It can either be used as a CGI, extracting the parameters from the HTTP query string, or through the RPC interface; both can be used. The block explorer uses the CGI API with parameters from the HTTP query string; the wallet uses the RPC API.

In any case, the module call can be made to a local module, or to a remote module via an HTTP/RPC request. The C syntax just removes the boilerplate of C++ object definitions, and in the end the C++ interfaces just translate parameters to/from JSON into C++ compiler-specific data, potentially to re-transform them back to JSON to make an RPC call.

The C++ interface definition just adds boilerplate, breaks binary compatibility, and is not even very useful from the JS app's point of view. It can make development of modules easier using C++ syntax, but that's about it. And I don't think we can expect OpenGL C++ apps too soon anyway. Other than for doing OpenGL apps, the JS API can do the job, and it can still somehow handle OpenGL even if RPC calls are slow.

Maybe for programming pure server-side modules without a UI, it could be useful to have a C++ layer to make code typing faster, but it raises certain questions about error checking & C++ operators. Without using exceptions I don't see how to check operator errors, and without operators it would end up barely less fugly in C++. For the interfacing, glue code can be made to interface C++/COM with JSON for the JS RPC & other modules. But making an interface that handles inter-object operators is not easy. And then the module execution can't be distributed, because of the operators, unlike with the C syntax.


With a bit more work it can be embedded into IE, and scripted in VB, C#, and JS from IE (without the RPC interface, via the COM runtime), but that only works for Microsoft stuff.

The equivalent of this is XPCOM, which is supposed to be more portable; FF, Chrome & Safari can use it, even if Chrome is dropping it and will develop something else. Safari has its own native plugin format, but it supports the XPCOM thing.

But even under Linux, it's not widely used to develop applications. It's mostly used by browsers, to define the DOM objects and the plugins, and outside of Flash the other plugins are marginal.

So in the end this can be a third way: developing more of a COM/XPCOM-like approach in C++. I'm not sure how far IDL can go with design patterns; I think you can at least define methods that deal with other IDL interfaces. There might be a way to define complex types, but I'm not sure; most likely the conversion from a JSON object to C++ method parameters needs to be done manually, with the JSON object members passed manually, in the right order, to the IDL interface method.

To have something similar to C++ operators with COM, you would have

Code:

StringInterface *A;
IntegerInterface *B;
int value;

A = rpc->getStringInterface("1234");

B = A;                 // automatic operator overload defined in the interface;
                       // returns the interface to an instance of B
value = B->getValue(); // rpc call
B->deRef();            // rpc call


If the A and B implementations are hosted remotely, it means the host needs to keep an instance of B alive as long as the client needs it, and it takes three RPC requests to get the equivalent of the operators.


And you could get to

Value = A->B->member[xx]->method1().result.value;

with an RPC call each time. But it's hard to have good error handling, or asynchronous requests, with a syntax like this in C++. If one call returns an invalid object interface, it most likely means a crash. And it's not always possible to know whether a function will succeed with remote calls.

So in the end, to have something safe, it will always be fugly one way or another.




In C you can get the same operation in one call, with explicit typing

Code:

tree_from_json("\"toto\"", &params);

module_call("A", "getValueToInt", &params, &value);

// params may contain data used by the operator,
// or the operand (like B) represented as a json object if the operator uses the value of B

// value is the return value, represented as a json object or in binary form
// if the return type is explicit in the module call declaration


But each call needs to be made step by step, with fugly syntax.

There is a single RPC request, and no boilerplate for the API call itself, but there is boilerplate in the code that manipulates the binary JSON object from C. (But again, the code is very safe and hard to crash; there will always be a meaningful answer to an interface method call.)
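To make the "getValueToInt" idea concrete, here is a hedged sketch of such a one-call typed conversion, with a plain string parse standing in for the tree/module machinery (`get_value_to_int` is a hypothetical name; the real module_call goes through the binary tree):

```c
#include <stdlib.h>
#include <string.h>

/* Parse a JSON string literal such as "\"1234\"" and return the integer
 * it encodes. Success/failure is reported through the return value, in
 * the same spirit as the framework's status-returning calls. */
int get_value_to_int(const char *json_string, int *out)
{
    size_t len = strlen(json_string);
    if (len < 3 || json_string[0] != '"' || json_string[len - 1] != '"')
        return 0; /* not a JSON string literal */
    char *end;
    long v = strtol(json_string + 1, &end, 10);
    if (end != json_string + len - 1)
        return 0; /* non-numeric content, e.g. "toto" */
    *out = (int)v;
    return 1;
}
```

The caller always gets a meaningful answer: either the converted value, or an explicit failure it can check.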




Other than IDL I don't see another interface definition system that has a real practical advantage.



But there is no really convenient, easy way to force JavaScript to use a particular format derived from an interface definition, or any way to check that the object passed to the RPC method call from JS matches the format used to define the interface in C++, or to check from JS that the result is the type it expects.


My main concern is not optimisation, but a language to program a blockchain node must have good support for all of these:

Network API; asynchronous IO is a big plus; UPnP is a big plus.

HTTP protocol, JSON, data formats, compression, URL & query string parsing, UTF-8, etc.

Object serialization and hashing.

Elliptic-curve crypto.

Database system.

GC / ref counting is a must.

SIMD vector math and OpenGL are a big plus.

Good threading support is a big plus.

No need for a virtual machine or interpreter is a big plus.



Easy integration with HTML templating or an integrated UI is a small plus; templating can be done in the browser from JSON, or the HTML can be made externally.


With all this in mind there are not many languages that fit the bill, and even without premature optimization, once you are stuck with a language framework there is not always a good way to optimize much further, or to work around certain bugs or deficiencies in one area or another.


Other than C and C++ I don't see what other language can fit the bill, without even getting into resource use, memory, performance, access to local system/hardware resources, binary compatibility, etc.
sr. member
Activity: 336
Merit: 265
Satoshi our great NWO master:

What you describe, what you are suggesting, perhaps, is that a benevolent Satoshi has great power to do good, and that, conversely, a malevolent Satoshi has a nuclear bomb in regard to his private keys.

Wink

If you wanted to utilize Bitcoin reserves which could not be visibly spent until it was time to enslave the world, how would you do it?

What if you could print paper high-powered SDRs implicitly backed by Bitcoin, and then create Basel rounds that progressively ratchet the old banking system into default by requiring Tier 1 reserves of this quality.



@traincarswreck there is no such thing as a stable fungible value. It can't exist as it violates the laws of physics. That is Nash's error. And there is no such thing as a plurality of asymptotically fungible stable values. That is a fantasy in the mind of a crazy, brilliant man who didn't quite figure out his error. His mistake was not realizing that his ideal would only be plausible in the non-fungible case. He was close to realizing that.

Nash was on the right track though. We can have an asymptotic plurality of stable values, when they are all non-fungible. And my project will bring that theory into existence.

Bitcoin will be destroyed. Mankind will prosper. And I will prove you are wrong. But it won't happen overnight. It will take a while yet.

Fungible money will die. Slowly but it will wither away.

That is what my Rise of Knowledge, Demise of Finance points out. Yeah, atoms are heavy, but they don't get heavier. Relative value will decline (the absolute value will always have mass, but that is irrelevant, as I had pointed out to Eric Raymond on his blog, c.f. the Dark Enlightenment thread).

There are no stable values in a relativistic universe. But this is a good thing; otherwise we would not exist, because the past and the future would collapse into being indistinguishable (the light cones of relativity would overlap) if there could be any absolute reference point, since relativism wouldn't exist.

End of story.

I am tired of talking. The discussion is redundant. I will reply to @dinofelis' other errors then end my participation in this thread. Adios amigos.

P.S. thanks to all for the discussion.

No one believes you.

Anyone who whoreships fungible value can never believe me, for their entire thesis is destroyed. So they will just have to be destroyed. It is their destiny.

Love of money, is the root of all evil.

Love of knowledge and production is glorious and fruitful.

I am a true capitalist. The financiers (especially the whale-most of all financiers) are leeches and parasites.
sr. member
Activity: 336
Merit: 265
@iadix be back with you shortly...

You seem to forecast further out than I'm comfortable doing. I've grown to like waiting for the charts to be fairly definitive.

You are a trader. I am a value investor (with a very strong technological slant).

That is why I missed, for example, a good trade on Ripple.

My greatest value is as a creator and programmer so I shouldn't be doing this activity.

But I had a need to do this, to turn my 10 BTC into potentially 50 BTC, which can aid the funding of my altcoin project (given I am not doing an ICO). Also, this analysis was an offshoot of diversionary analysis I needed to do anyway, to make sure I understand the Nash ideal-money value proposition of Bitcoin, so as to understand how my project fits into the big picture.

If I disappear from speculation discussion, you'll know it is because I am heads-down on my primary vocation, although I notice I am finding it too tempting lately to go off on too many polymath-like tangents. I really need to discipline myself asap. For example, I was tempted to go off and research Ripple right now to get to the essence of it, but I decided not to.
full member
Activity: 322
Merit: 151
They're tactical
Not to sound insistent but... Cheesy

Well, again, it's not to play a smarter-than-you game or an authority contest, but I'm a bit of a code addict; if I don't get my dose of coding my hands start to shake & all. It's just about knowing where you want to get to, and how to collaborate efficiently Wink, not beating around the bush forever; as you say, talk is cheap Grin https://youtu.be/IQTgQ0PNGHU Grin Grin

If you want to make language-agnostic interface definitions, OK, but what do you have in mind to define them?

The only systems I know of that have cross-language definitions are COM and NS XPCOM, with the IDL file that can be compiled to different languages, defining the interface in the target language and letting the client application call methods implemented in the host module from the client application's language. With COM the object can then be used from VB, C#, JS, any language that supports COM interfaces, but it's hardcore Microsoft stuff; on Linux it's not so great.

But ultimately, for it to be useful, there needs to be a way to implement both the host side of the interface in the host component's language, and the client side of the call in the client application's language. Each language will have its own internal representation of the data; if you don't understand this, you don't understand anything about the problems of cross-language interfacing.

With C++, even if the two classes are defined with the exact same code in the same language, but the class implementations are compiled with different compilers, there is a good chance they can't call each other's methods, even though the definition is the same, just because of incompatibilities between the compilers.

And my objective is to make interface calls either to locally hosted modules with regular local language calls, or as remote calls via HTTP/JSON/RPC if the interface implementation is hosted on another node, transparently for the application.

The whole point of my framework is to have a network of nodes hosting modules that implement an interface/API which can be called either locally or remotely, and to make that as transparent for app developers as possible.

You say I'm into premature optimization, but if I had to state my main concerns with the design, they are memory safety and binary compatibility. Actually, although it's C code, you will see very little direct memory access outside of stack variables and strings. All memory accesses can be checked; with the runtime in paranoid mode, you could fill the whole memory with junk and it would not crash. Even if it can't allocate memory it shouldn't crash.

The code is written with the main purpose of having zero memory exceptions. That's also the advantage of using regular C calls with a pointer to the result: the call can return a success/failure state in addition to the result, which allows detecting invalid accesses without triggering exceptions, something that is not really possible with C++ operators.

In the end, between

Code:
if (!tree_get_key(object, key, value))
{
    // INVALID VALUE
}

Or

Code:
try
{
    value = object["key"];
}
catch (...)
{
    // INVALID VALUE
}

You can't say the code is much better with C++, in the end, to get the same level of safety.
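A sketch of this status-returning style, with a flat key/value table standing in for the real tree representation (`tree_get_key_int` and `struct kv` are hypothetical stand-ins, not the framework's API):

```c
#include <string.h>

/* A toy key/value table standing in for the binary JSON tree. */
struct kv { const char *key; int value; };

/* Look a key up and report success through the return value: the
 * caller always gets a boolean flag instead of an exception. */
int tree_get_key_int(const struct kv *tbl, size_t n,
                     const char *key, int *out)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(tbl[i].key, key) == 0) {
            *out = tbl[i].value;
            return 1; /* found */
        }
    }
    return 0; /* INVALID VALUE: handled by the caller, no unwinding */
}
```

The failure path is an ordinary branch, so it works identically across modules built with different compilers, which is the point being made about exceptions.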


Let's say you want to have

Code:
try
{
    MyAddr = peer[2].block["xxxx"].tx[3].output[5].addr[0];
}
catch (...)
{
    // INVALID VALUE
}

You could easily end up with operators for different objects implemented in different modules compiled with different compilers, and an exception thrown by a runtime different from the one that is supposed to handle it, ending in a happy mess.

The best way I thought of to be able to program a C++ app with it is to have a C++ layer on top of the framework, providing all the C++ syntax with overloaded operators, OO design, interface definitions with C++ classes, iterators, etc., with the application developer himself dealing with binary compatibility of the C++ code.

As long as the methods are implemented at the top level, at the application level, and don't have to be used by other applications or distributed over the network, why not. Or the trick can be to define the RPC interface with C exports, extract the JSON parameters, and use the C++ code inside; this can work normally, as long as the C++ code doesn't have to be shared, since the C++ OO pattern is invisible at the interface level.


My objective globally is this: having nodes that expose an API to both local and remote applications, with memory safety on dynamic types and cross-language interfaces (which is different from language-agnostic), and having a way to easily program applications in a high-level language using this distributed API.

If you want to get into interface definitions and all, why not, but which format should be used so that the different sides of the interface can easily be implemented in different languages? COM IDL? XML?

Ultimately, what JS application developers are interested in is the JSON definition of the object; that's the only thing they will be looking at.

And I'm OK with a perfectly immutable system and protocol. I can accept the idea that a blockchain protocol is not necessarily a thing that is supposed to evolve, but that would only be true if we had the actual equivalent of blockchain mathematics, to define the protocol once and for all without ever needing to change anything in it. Thinking you'll have the perfect protocol right the first time, from scratch, is foolish. So ultimately you still need a way to upgrade the protocol as easily as possible, even if the goal is not to upgrade it at all.

Especially if the protocol is more complex. You really need to see the two aspects of the Bitcoin protocol: the network data format, and the block validation algorithm. 90% of blockchains use the same network data format but different algorithms to accept/reject blocks, so the protocol is not only about data format definitions, but also about the algorithms to validate those data.

If a high-level representation of a blockchain is to be thought of, it also needs to be able to represent the block validation algorithm, so most likely as some kind of script or code that defines block validity. If a reward scheme is to be implemented in the block validation, it would happen there, even though it most likely sits below the application level.


But if it's about programming a full blockchain node from scratch, and you don't have a clear idea of the language or framework/SDK toolset to use, and you don't want to use my design (which fits my purpose) but don't know how to get the equivalent features, I can't help you much, lol. I have expressed all the concerns I have with the current solutions I know of, and why they are not fit for my purpose.

I'm all for layered designs and design patterns, interfaces and all, but it has to stay within the scope of the features I'm after for distributed application problems. Having a distributed application coded in bare C++ is, for me, only trouble. Or it needs a layer of glue code between the different parts of the application, which I want to avoid.

And there isn't really much of a truly cross-platform solution, even with C++.

The closest is NS XPCOM. But I don't think it's even really supported much.

Other than this, it all comes down to hacking glue code somewhere in between, to have interfaces that can share information represented in different formats by different languages.

By manipulating the JSON tree in C, with the safe memory representation and access, it solves all the low-level interface definition aspects. Even if the syntax is heavy, it's still very simple logic. The code is long to write, but it's hard to make mistakes, and the purpose of each function, the types involved, and the return state are obvious and explicit. So afterwards there is very little debugging time.

A parameter can be added to the interface easily: recompile the module, copy it to the Linux and Windows machines, restart the node, done. JS applications can then use the parameter. And only the module implementing the interface has to be recompiled, and only once for all Windows & Linux machines on Intel.

C/C++ programs can share a binary representation of the data via the tree API and make direct module function calls in the local process memory, and remote programs or JS programs can call the functions with JSON/RPC over HTTP. All work on the same data format, with extra typing possible on the C side to differentiate between different JSON objects. And all the data on the C side can be turned into JSON and vice versa.

So C/C++ applications can easily call module functions independently of the module's location, and JS applications can call those same functions via HTTP requests, using their native object format as parameters. Knowing it's not so much about linear scalability as about asymmetric distribution of resources across different nodes.

My requirements are these: to implement the low-level blockchain protocol, other protocols on top of it, or distributed applications. And I don't see a better way to get there than the solutions I'm working on.

sr. member
Activity: 336
Merit: 265
full member
Activity: 322
Merit: 151
They're tactical
But altogether I'm sure it could be turned into C++ with a much better syntax, even just using overloaded operators.

You'll notice the only thing needed to encapsulate modules as C++ classes is adding the class { } in the file and removing the prefix in the function names, and you roughly get a class definition out of the modules. They are a single C file; it's designed to allow this easily, and macros/regexps could easily turn the tree manipulation functions into a better syntax with C++ operators.

Like


tree_manager_get_child_value_hash(&last_blk, NODE_HASH("prev"), prev);

=>

prev = last_blk["prev"];

and

tree_manager_set_child_value_hash(result, "merkleroot", merkle);

=>

result["merkleroot"] = merkle;


Etc., with overloaded operators and type conversion.



(The tree_xx functions allow explicit type conversion and are roughly the equivalent of C++ operators; they can either just copy/pass a reference (by default) or act as copy constructors, and they already allow looping, for-each style, over arrays of JSON objects with the C syntax; this could easily be implemented as C++ iterators.)


But...


C++ operator functions' exported names are not compatible between compilers.

The calling convention for the object's this pointer is not compatible between compilers.

It can make compilation more complex on certain platforms (like the Raspberry Pi).

It can make complex type conversions implicit/hidden, and they are sometimes even hard to really express with C++ operators.


So for distributed applications it's a bit bothersome.




And anyway the real application is programmed in JS, so it's better to have application-driven data definitions and use JSON as the basis for data definitions. In the absolute, all the data formats and protocols could be defined at run-time, loaded from a config file or supplied by the user from the UI.

The C++ would just add syntax sugar to code that is supposed to be low level, while making compatibility with the JSON data definitions from the application more difficult, because of the lack of binary-level compatibility of module interface implementations. And it wouldn't change anything in the syntax of the JS code anyway, as it all happens through JSON/RPC.

The whole OO syntax sugar to manipulate data in the UI can be done in JavaScript with a hardcoded data format to communicate with the node, and it doesn't have to know about the node's internal object representation, just the JSON encoding of the parameters and the return result of the RPC request, no matter how the module interface is implemented in the node or which compiler was used to compile it.

full member
Activity: 322
Merit: 151
They're tactical
Just a question from a non-technical guy. I haven't really read all that you guys wrote, but I've noticed JavaScript being mentioned. So why JavaScript and not WebAssembly? Apparently WASM is superior; even JavaScript's creator Brendan Eich endorses it.

I just looked into it:

http://webassembly.org/docs/web/

Apparently it still needs to be compiled to JS to run in a browser.


It can be interesting if it has good support for network protocols and crypto.


ABIs

In the MVP, WebAssembly does not yet have a stable ABI for libraries. Developers will need to ensure that all code linked into an application are compiled with the same compiler and options.

In the future, when WebAssembly is extended to support dynamic linking, stable ABIs are expected to be defined in accompaniment.


My ABI already has this Smiley and it supports aligned 128-bit SIMD operations too Smiley and full native compatibility with C (it's C Cheesy), cross-compiler binary compatibility, dynamic linking, and JSON/RPC compatibility Smiley and I already have a large part of the blockchain protocol implemented with it.

But it seems similar in purpose to where I want to get to.


The big advantage of JS is that it has a large base of code, SDKs, programmers, support, etc., which is mostly what makes it interesting.

The only problems are access to local storage, and performance for binary data or crypto; for operating a full node it seems a bit light.

But for programming high-level logic, UI, and interactive applications, it's good. It just needs a simple design pattern or it can become dicey.

With good naming conventions and specialized objects that are all horizontal, it can be doable in JS.


But adding a layer in C++, or true OO, just to have compile-time design patterns in the node, where all the application logic happens in another layer anyway through an RPC/JSON interface, seems a bit pointless.

For me the two ways are:

Either doing a full OO script to define the node and the RPC interface to the app, and eventually programming some parts or all of the app itself with it, or having a syntax to export RPC methods from the script code for other modules or the JS app.

Or doing the OO layer with JS and RPC calls.


WASM seems to have a system like this to bind module interfaces to a JS API, but it doesn't seem to support HTTP/RPC, or executing remote modules. With the JS RPC it can execute code on a remote machine.

The thing with table/element definitions in the modules to provide data definitions in WASM is replaced here by runtime type definitions based on JSON.

So the data definitions can be shared easily between the low level and the high level. No need for "glue code".




http://webassembly.org/docs/semantics/#table

Table

A table is similar to a linear memory whose elements, instead of being bytes, are opaque values of a particular table element type. This allows the table to contain values—like GC references, raw OS handles, or native pointers—that are accessed by WebAssembly code indirectly through an integer index. This feature bridges the gap between low-level, untrusted linear memory and high-level opaque handles/references at the cost of a bounds-checked table indirection.


My system, with explicit types added to JSON, does the same as this, except it's not opaque, and it's convertible to JSON, using the JSON-allowed typed string/integer/real/object/array.

So the equivalent definition for the module interface can also be used as the definition of the JSON/RPC interface. And it's never totally opaque.

In the worst-case scenario, such a thing, with arrays of elements at binary offsets, can be done with the tree system, but it's better to add named and typed sub-members, in a transparent structure that can be shared with JS.




The function argument passing is the same for a C program call (DLL API) or an RPC call (via the HTTP/JSON API).

Having internal compile-time constraints for module RPC interfaces is not necessarily a good thing, and it will be useless in any case for the application layer, unless the application-layer language can understand those types. Which will not be the case with JS.


The best you will have for data definition in js is json and flaky runtime types.

Unless you add a runtime layer in js that can pick the right class instance from a high-level definition, like a factory that instantiates a complex class hierarchy based on runtime parameters, to build an OO representation of the node with a specialized class for each rpc interface/module.

But the data format will be hardcoded one way or another in the javascript code anyway, independently of the node definition. You can just pick between different hardcoded formats at run-time by using the class instance corresponding to the node's rpc interface.

Unless you want to compile the js code with the hardcoded values from a definition that is also used to compile the node interface implementation, but that wouldn't give much security at run-time.

Or you can generate the js code that calls the api at run-time based on the module definition, like WASM does. But that doesn't seem all that convenient.

Having the data definition based on json in the node's internal C dll-api is not the worst solution.

And there is an api: it can compile as a dll on windows or an so on linux, as in debug mode, with the dependencies and api definition etc., but the data format of function parameters is defined/parsed at run-time, and it can be derived automatically from a json object.

The API at the C level can be called directly via http/rpc transparently.

C function parameters are sent as a json object instead of a C structure or a list of parameters. The C code can check the validity of the data structure at runtime, do its operation safely on it if it has the right type/children, and return the data to the app in the same format via json/rpc.

The only things the compiler sees are pointers and integers, but all memory accesses are checked at run-time.

The http/json side is like a layer added on top of the call to handle remote access to the api: it turns the json parameters from the rpc request into the tree, calls the exported function specified in the rpc request's method with this parameter, encodes the result to json, and handles the http layer.

From the js point of view, it's like directly calling the C function exported in the module. The rpc request method is the exported function name, and the module is selected with the http url path. The parameters are parsed at run-time by the module code from the json format.
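As a rough illustration of that layer (all names here are invented for the sketch; the real node resolves modules and parameters differently), the rpc "method" string can be resolved to an exported function at run-time, and the very same function pointer can be invoked by a direct C call:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: the rpc request's "method" is just the name of an
   exported module function; the http/json layer looks it up at run-time,
   while C code can call the same function directly. */

typedef struct { int height; int time; } result_t;  /* stand-in for the result tree */

/* an "exported" module function: fills the result from its parameters */
static int getlastblock(const char *params, result_t *result)
{
    (void)params;            /* this toy version ignores its parameters */
    result->height = 1234;
    result->time   = 1491004800;
    return 1;
}

typedef int (*rpc_fn)(const char *params, result_t *result);

/* the method table the http/json layer would consult */
static const struct { const char *method; rpc_fn fn; } rpc_table[] = {
    { "getlastblock", getlastblock },
};

/* resolve the rpc request method to an exported function, by name */
static rpc_fn find_method(const char *method)
{
    for (size_t i = 0; i < sizeof rpc_table / sizeof rpc_table[0]; i++)
        if (strcmp(rpc_table[i].method, method) == 0)
            return rpc_table[i].fn;
    return NULL;
}
```

Whether `getlastblock` is reached through `find_method` (the rpc path) or called directly (the C path), it is the same function with the same argument convention, which is the point being made.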
hero member
Activity: 532
Merit: 500
Just a question from a non technical guy. I haven't really read all that you guys wrote, but i've noticed javascript being mentioned. So why javascript and not webassembly ? Apparently WASM is superior, even javascript's creator brendan eich endorses it.
full member
Activity: 322
Merit: 151
They're tactical
I guess my use of the word 'spaghetti' offended you. Sorry but I think writing high-level code for apps in C is not a good idea (because developers don't like to do that, it produces less readable code, reduces productivity, slower to market, etc). But I think we need to differentiate between the server-side and client-side. I suppose most of the code you are showing me is for a server-side full node. For the client side apps, I hope you aren't proposing to use C


For the app code, I see two possibilities.

Either use javascript, encapsulating the modules' rpc interfaces in js classes. The design pattern is still weak, but good applications can already be made with html5/js.

Or make another script language that recognizes the module interfaces/apis and can make calls to these modules, to get a better high-level node definition, and build applications with this script language.

90% of an application is making calls to module interfaces and formatting the UI.

If the script language can include some form of html templating, in addition to exchanging data via the api, it could allow more of the application to be programmed with this script language.

Maybe program in C for the warriors, or when specific code really needs to be done in C, but normally the idea is to have modules already made to fit the purpose, to be used by the high-level app language. For certain real-time apps like 3d video games or opengl, the http/rpc can be too slow, if the app still needs a real-time connection to the blockchain in a lower-level language.

The part of the application to be made in C/C++ or js is up to each developer. Most webapps should be made with js.

For purely blockchain-related stuff, the wallet rpc interface and block explorer should be enough, with the in-browser crypto part in js. Normally, most operations that an application can do on a blockchain are implemented in those modules, with the regular rpc api and block explorer api.

Other modules are there to implement other interfaces for other types of applications, but the logic can be programmed either in C or js, knowing that with in-browser js there can't be local data: all the user data has to be handled via a module interface (whether it's on a local node or not).

But outside of code that needs to store permanent data, it can be programmed in js too; js is just not necessarily good at doing complex parsing on long lists. So complex parsing/filtering etc. is better done via C modules.

And really, the syntax is not so bad; there are not too many keywords, it's mostly tree_set_child_value_type(object, key, value); and the gets. It's for manipulating a json tree in C, with reference counting etc.
Such a tree can be translated to/from json in one line, whether it's to build one out of an rpc/json request or to output the json result for the javascript app.
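A toy version of that calling pattern (the real tree_manager_* calls work on a refcounted tree; this stand-in uses a fixed array purely to show the set/get style, and all names are invented) might look like:

```c
#include <assert.h>
#include <string.h>

#define MAX_CHILDREN 16

/* Hypothetical stand-in for the refcounted json tree: a flat list of
   named integer children, addressed by key and checked at run-time. */
typedef struct {
    struct { char key[32]; int value; } child[MAX_CHILDREN];
    int count;
} tree_t;

/* set a named child, creating it if needed; returns 0 when the tree is full */
static int tree_set_child_value_i32(tree_t *t, const char *key, int value)
{
    for (int i = 0; i < t->count; i++)
        if (strcmp(t->child[i].key, key) == 0) { t->child[i].value = value; return 1; }
    if (t->count >= MAX_CHILDREN) return 0;
    strncpy(t->child[t->count].key, key, sizeof t->child[0].key - 1);
    t->child[t->count].key[sizeof t->child[0].key - 1] = '\0';
    t->child[t->count].value = value;
    t->count++;
    return 1;
}

/* get a named child; returns 0 when missing, so the caller handles it at run-time */
static int tree_get_child_value_i32(const tree_t *t, const char *key, int *out)
{
    for (int i = 0; i < t->count; i++)
        if (strcmp(t->child[i].key, key) == 0) { *out = t->child[i].value; return 1; }
    return 0;
}
```

The compiler only sees pointers and integers here; all the structure lives in the keys, which matches the "blind compiler, checking runtime" description.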

As most of the node operations, even low-level stuff, use this tree structure, a js app can plug in anywhere in the internal api with a json/rpc call.

Only the very low level things like kernel level io or sockets really need to be done in C.

But parsing thousands of blocks and txs in js is not a good idea either Wink

As all the data manipulated by the node internally uses this structure, all of the internal "DLL-based api" is also the "rpc/json api", without anything special to do. The function doesn't care whether it's called from another C module or from a js rpc. The data format is the same for both the call arguments and the result data.

Both the C program and js/rpc work on the same api. The http server just loads the json parameters of the rpc request into the tree, and transforms the result tree into json for the rpc/json side. The C app directly sends arguments using the tree and parses the function result from the tree, instead of using json.

But internally it's the same as an old-school dll api, except parameters and data are defined dynamically as a json data tree, and the compiler is blind to it (but the runtime is not).

Any level of hierarchy of arrays of objects can be represented like this.

By object I mean object in the json sense: a collection of named values.


full member
Activity: 322
Merit: 151
They're tactical
Loading precompiled modules is called dynamic linking Smiley

DLLs are already supported by the OS.

Well, I made my own dll format that can be built from an so or a dll, and can be loaded and linked on both linux and windows, independently of the compiler.

The goal to have module is to  like a static class

That is not correct English grammar. It is difficult to understand your writing.

Sorry lol, I'm typing on the tablet Smiley
What I mean is that the module definition is used as a static class: a collection of methods and a unique instance of data.



, and they can be moved from linux to win without recompilation.

Yeah I read that on your site, and I don't understand what the worthwhile benefit is.

Developers know how to use compilers.

Do you want to dynamically load this over the network? Why? And code trust may be an issue in that case depending on your use case.


For linux, binary compatibility can be an issue, and binaries often have to be recompiled for each machine. This way it's 100% the same binary code running on all nodes.

Code trust can be solved with verticalization and private networks, like I explained before.

The main reason for doing these modules is not this project, but I think they are useful here to deploy distributed applications easily; it removes the burden of compiling. They can provide a good abstraction for interface definition, and as they can deal with json data, they are a fit for json/rpc. A module implements a json/rpc interface.


The point of doing a script language is to have a syntax improvement over the C.

That is what I thought. You are trying to model OOP in C instead of using a more appropriate language, all because you want this dynamic linking of code?

It is the holistic reasoning that I am not getting. I don't understand why you would choose to make these design decisions.


Not really oop; I know the limits of C for this, but for interface definition and encapsulation the module does the trick. Full oop needs to be done in another layer. Or it can be exploited directly in js, adding the OO syntax with a js class definition.


An api cannot be language agnostic, as it needs to take into account the function call parameters and the types of the arguments lol

An API can surely be language agnostic. REST APIs are language agnostic.


They use XML. If the language you use doesn't support XML, you can't use it. If your language has certain limitations on data types, it can be unsafe.


Using json you can pass language-agnostic data, but the language has to parse it, and with compiled languages, structures or classes can't be made at run-time from the members of the json object.

Orthogonality (separation-of-concerns) does have a performance cost. But premature optimization is bad design.





I'm not hacking stuff together like spaghetti lol; there are design patterns, modules, typing, interfaces, apis, etc.

Really premature optimization is spaghetti. You may have certain features, but that doesn't mean the priorities of how you achieved them are not spaghetti. Spaghetti is conflating things which should remain orthogonal until there is an overriding justification for conflating them.

Also OOP as in subclassing, is an anti-pattern. Not saying you have that feature (subclassing, i.e. virtual inheritance with subtypes).

Where am I talking about optimisation ?
sr. member
Activity: 336
Merit: 265
Loading precompiled modules is called dynamic linking Smiley

DLLs are already supported by the OS.

The goal to have module is to  like a static class

That is not correct English grammar. It is difficult to understand your writing.

, and they can be moved from linux to win without recompilation.

Yeah I read that on your site yesterday, and I don't understand what the worthwhile benefit is.

Developers know how to use compilers.

Do you want to dynamically load this over the network? Why? And code trust may be an issue in that case depending on your use case.

The point of doing a script language is to have a syntax improvement over the C.

That is what I thought. You are trying to model OOP in C instead of using a more appropriate language, all because you want this dynamic linking of code?

It is the holistic reasoning that I am not getting. I don't understand why you would choose to make these design decisions.

An api cannot be language agnostic, as it needs to take into account the function call parameters and the types of the arguments lol

An API can surely be language agnostic. REST APIs are language agnostic.

Using json you can pass language-agnostic data, but the language has to parse it, and with compiled languages, structures or classes can't be made at run-time from the members of the json object.

Orthogonality (separation-of-concerns) does have a performance cost. But premature optimization is bad design.

I'm not hacking stuff together like spaghetti lol; there are design patterns, modules, typing, interfaces, apis, etc.

Really premature optimization is spaghetti. You may have certain features, but that doesn't mean the priorities of how you achieved them are not spaghetti. Spaghetti is conflating things which should remain orthogonal until there is an overriding justification for conflating them.

Also OOP as in subclassing, is an anti-pattern. Not saying you have that feature (subclassing, i.e. virtual inheritance with subtypes).

But ok, I'll take this as: I'll have to build applications for a nonexistent blockchain made with a nonexistent language that is so agnostic that it doesn't exist. So I'll let you at it and keep doing applications with my thing Smiley

No you don't have to do anything. You are free to do what ever you want. I am not coercing anyone. I am going to build a correct design. Those who find it appealing will build on it.

I am merely asking questions to try to understand your reasoning about why you made the design decisions you did. Until I understand well your reasons, I can't be 100% sure whether I reject or accept your reasons and your design choice.

Remember you were trying to help/influence me to base my design around or compatible with your existing code. It is not unappreciated, but if I don't agree with design decisions, then I don't agree. I am not 100% sure yet. I need to fully understand your reasons for your design choices.

I guess my use of the word 'spaghetti' offended you. Sorry but I think writing high-level code for apps in C is not a good idea (because developers don't like to do that, it produces less readable code, reduces productivity, slower to market, etc). But I think we need to differentiate between the server-side and client-side. I suppose most of the code you are showing me is for a server-side full node. For the client side apps, I hope you aren't proposing to use C  Huh

But my goal is to improve the syntax, and modules, in my mind, are more for the low-level operations.

Incorrect. Modules are integral with static typing. That is one of the aspects that I mean when I use the word spaghetti to describe what I think you may be doing in your design.

Edit: I see you edited your post and added some code examples. I am too sleepy to review those now. I had looked at that block explorer C code of yours yesterday.
full member
Activity: 322
Merit: 151
They're tactical
@iadix, I have not yet finished trying to figure out what you are doing with your code. I see you are doing some strange things such as loading precompiled modules and some scripting concept which I don't yet see the relevance of. I was too exhausted when I was looking at it. Had to sleep. Woke up and was finishing up my analysis of blockchain economics. Now I need to prepare to go for a doctor's appointment which will consume 6 hours because I have to get x-rays first.

So then when I return, I will probably be too exhausted to look at your stuff today.

I will say that the code of manually building these JSON objects looks very fugly to me. And coding everything in C seems not ideal and not programmer friendly. We need C only for code which needs fine tuned performance, which typically is only a small portion of the overall code.

There is TypeScript for static typing on JS.

I will read your last message later as I don't have enough time right now.

But again I really don't understand the overall point of the JSON stuff. And I don't understand why you claim it is optimal to conflate the languages used to create programs which implement the APIs. APIs should be language agnostic. Good design is to not prematurely optimize and commit to a language.

Sure seems to me that you are probably conflating things instead of building good APIs. But I need to think about this more before I can definitively respond to you. I am still open minded.

My goal is not to prove which one of us is more clever. I simply want to design the systems that will scale (and I mean scale in terms of programmers' adoption of our platform, thus obtuse frameworks are not desirable, not just the other meanings of scaling). And to balance that with needing to get something launched asap.

I put a lot of effort into correct layered design. I don't believe in just hacking things together like spaghetti. I don't understand what APIs you claim you have designed and why you claim they are optimal? I don't even understand yet the significance of your tree concept. It is as if you are trying to build an object system in program/library code that should instead be integrated into a programming language. Perhaps I just don't yet grasp it holistically.

Loading precompiled modules is called dynamic linking Smiley There is no scripting in this code. The goal of having modules is to act like a static class, and they can be moved from linux to win without recompilation.

A json object can be loaded from json text; tree_manager_set_child_value(node, "xx", value) is like a hash key list, like node["xx"] = value. It's the same as having a hash table.

But in C you can't have

load_json("{key:value}", &xx); and then xx.key, like in as3 or js,

because of static typing, so it's read_node_child(&xx, "key", &value);

The point of doing a script language is to have a syntax improvement over the C.

An api cannot be language agnostic, as it needs to take into account the function call parameters and the types of the arguments lol. Even with XPCOM, api definitions are compiled from IDL to the target language. I could write interface definitions, but if it's to have an interface definition in json, to define an interface with json parameters, the api definition is just decoration at the programming level. The json is the api/data-format definition used by the json/rpc api. I don't see the point of defining interfaces otherwise. To have an IDL-like thing, compiled to json? Or compiled to a C header to load preformatted json?
 
Using json you can pass language-agnostic data, but the language has to parse it, and with compiled languages, structures or classes can't be made at run-time from the members of the json object.

I'm not hacking stuff together like spaghetti lol; there are design patterns, modules, typing, interfaces, apis, etc.

But ok, I'll take this as: I'll have to build applications for a nonexistent blockchain made with a nonexistent language that is so agnostic that it doesn't exist. So I'll let you at it and keep doing applications with my thing Smiley

But my goal is to improve the syntax, and modules, in my mind, are more for the low-level operations, with the high-level part in js. Or adding an icing in c++, even if I'd rather leave the scripting to another language. Anyway, most of the programmable part is made in js. Modules are mostly there to implement the hard part, to remove it from the js app, while still being able to parse json/rpc requests and output json results.

I can make a definition of the rpc interface, but it's useless at the programming level. The compiler is blind to the api.

It's more like as3/js: objects and types are built at run-time based on a data definition.

Here is the implementation of the wallet rpc api:

https://github.com/iadix/purenode/blob/master/rpc_wallet/rpc_methods.c


Code:

       function rpc_call(in_method,in_params,in_success)
        {
            $.ajax({
                url: '/jsonrpc',
                data: JSON.stringify({ jsonrpc: '2.0', method: in_method, params: in_params, id: 1 }),  // id is needed !!
                type: "POST",
                dataType: "json",
                success: in_success,
                error: function (err) { alert("Error"); }
            });
        }


The js wallet :
https://github.com/iadix/purenode/blob/master/export/web/wallet.html

Code:


function import_address(address) {
            rpc_call('importaddress', [address], function (data) { });
        }


        function get_addrs(username) {
            rpc_call('getpubaddrs', [username], function (data) {
                $('#newaddr').css('display', 'block');
                if ((typeof data.result.addrs === 'undefined') || (data.result.addrs.length == 0)) {
                    my_addrs = null;
                }
                else {
                    my_addrs = data.result.addrs;
                }
                update_addrs ();
            });
        }


        function import_keys(username, label) {
            rpc_call('importkeypair', [username, label, pubkey, privkey, 0], function (data) {
                get_addrs(username);
            });
        }




Complete api for the rpc wallet in js

http://iadix.com/web/js/keys.js

And for the block explorer

http://iadix.com/web/js/blocks.js


Implemented in:

https://github.com/iadix/purenode/blob/master/block_explorer/block_explorer.c

Code:
OS_API_C_FUNC(int) getlastblock(mem_zone_ref_const_ptr params, unsigned int rpc_mode, mem_zone_ref_ptr result)
{
    mem_zone_ref last_blk = { PTR_NULL };

    if (tree_manager_find_child_node(&my_node, NODE_HASH("last block"), NODE_BITCORE_BLK_HDR, &last_blk))
    {
        mem_zone_ref txs = { PTR_NULL };
        char         chash[65];
        hash_t       hash, merkle, proof, nullhash, rdiff, hdiff, prev;
        size_t       size;
        unsigned int version, time, bits, nonce;
        uint64_t     height;

        memset_c(nullhash, 0, sizeof(hash_t));

        if (!tree_manager_get_child_value_hash(&last_blk, NODE_HASH("blk_hash"), hash))
        {
            compute_block_hash(&last_blk, hash);
            tree_manager_set_child_value_hash(&last_blk, "blk_hash", hash);
        }

        tree_manager_get_child_value_str(&last_blk, NODE_HASH("blk_hash"), chash, 65, 16);
        tree_manager_get_child_value_hash(&last_blk, NODE_HASH("merkle_root"), merkle);
        tree_manager_get_child_value_hash(&last_blk, NODE_HASH("prev"), prev);
        tree_manager_get_child_value_i32(&last_blk, NODE_HASH("version"), &version);
        tree_manager_get_child_value_i32(&last_blk, NODE_HASH("time"), &time);
        tree_manager_get_child_value_i32(&last_blk, NODE_HASH("bits"), &bits);
        tree_manager_get_child_value_i32(&last_blk, NODE_HASH("nonce"), &nonce);

        if (!get_block_size(chash, &size))
            size = 0;

        get_blk_height(chash, &height);

        if (is_pow_block(chash))
        {
            SetCompact(bits, hdiff);
            get_pow_block(chash, proof);
            tree_manager_set_child_value_hash(result, "proofhash", proof);
            tree_manager_set_child_value_hash(result, "hbits", rdiff);
        }
        else if (get_blk_staking_infos)
            get_blk_staking_infos(&last_blk, chash, result);

        tree_manager_set_child_value_hash(result, "hash", hash);
        tree_manager_set_child_value_i32(result, "confirmations", 0);
        tree_manager_set_child_value_i32(result, "size", size);
        tree_manager_set_child_value_i64(result, "height", height);
        tree_manager_set_child_value_i32(result, "time", time);
        tree_manager_set_child_value_i32(result, "version", version);
        tree_manager_set_child_value_i32(result, "bits", bits);
        tree_manager_set_child_value_i32(result, "nonce", nonce);
        tree_manager_set_child_value_hash(result, "merkleroot", merkle);
        tree_manager_set_child_value_hash(result, "previousblockhash", prev);
        tree_manager_set_child_value_hash(result, "nextblockhash", nullhash);
        tree_manager_set_child_value_float(result, "difficulty", GetDifficulty(bits));
        tree_manager_add_child_node(result, "txs", NODE_JSON_ARRAY, &txs);
        get_blk_txs(chash, &txs, 10);
        release_zone_ref(&txs);
        /*
        "mint" : 0.00000000,
        "blocktrust" : "100001",
        "chaintrust" : "100001",
        "nextblockhash" : "af49672bafd39e39f8058967a2cce926a9b21db14c452a7883fba63a78a611a6",
        "flags" : "proof-of-work stake-modifier",
        "entropybit" : 0,
        */
        return 1;
    }

    return 0;
}


The json/rpc method calls are directly module function calls. This is the rpc/json api, but it's defined at run-time.

The tree can be built automatically from a json string in one call, and the json string from the tree too; that's the point of it.
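As a sketch of that convenience (toy code with invented names; the real tree is refcounted and fully typed), a flat tree of named integer values can be serialized to a json string in one call:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a tree of named integer values. */
typedef struct { const char *key; int value; } kv_t;

/* write {"k1":v1,"k2":v2,...} into buf; returns the number of chars written */
static int tree_to_json(const kv_t *kv, int n, char *buf, size_t len)
{
    size_t pos = 0;
    pos += (size_t)snprintf(buf + pos, len - pos, "{");
    for (int i = 0; i < n; i++)
        pos += (size_t)snprintf(buf + pos, len - pos, "%s\"%s\":%d",
                                i ? "," : "", kv[i].key, kv[i].value);
    pos += (size_t)snprintf(buf + pos, len - pos, "}");
    return (int)pos;
}
```

From the caller's point of view it's a single call in each direction, which is what makes the same tree usable both as the internal C data and as the json/rpc payload.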
hero member
Activity: 532
Merit: 500
OMG @iamnotback your conspiracies and thoughts about eth were right. Vitalik admitted AXA and bilderberg group are shareholders of ethereum foundation.

Also in metropolis there's going to be implemented the possibility to reverse any contract if desired, named "Proof of Vitalik".

Being overcome with guilt, Vitalik decided to redirect 20% of the mining rewards to the dao hackers.

Being unable to find solutions for nothing at stake and stake grinding, Vitalik decided to give up on proof of stake and go with PoA instead.

As if it wasn't bad enough, Devcon 3 will be held in Pyongyang.

I ain't tell no lies, proof inside https://blog.ethereum.org/2017/04/01/ethereum-dev-roundup-q1/
legendary
Activity: 1554
Merit: 1000
^^^ LOL. Humour-free zone, obviously.  Embarrassed  ^^^
sr. member
Activity: 336
Merit: 265
Lol, the random april fools trolling busted you; eth apologist, steem fanboy, and ltc facilitator. Do you think this is by mystake, or design ?

Sorry you are incorrect.

The miners are being induced to buy new ASICs which will force them to vote for SegWit. It is a very clever strategy.