
Topic: Do you think "iamnotback" really has the "Bitcoin killer"? - page 11. (Read 79971 times)

full member
Activity: 322
Merit: 151
They're tactical
Oki Smiley Believe it or not, I'm already trying to be concise Grin Grin


Well, just one last thing I wanted to mention about structs. I know you already know most of this stuff, but you asked why it's better than plain structures etc Smiley

But if I really had to explain all my holistic reasoning, it would take a whole book, because it's a system that is originally bootable and can run on bare metal, so many aspects of the design come from ActiveX and web browsers, and from the problems of video streaming, interfacing, data formats, p2p web 2.0, javascript, the DOM etc Smiley



But the very first motivation for doing the tree thing instead of C structures is not having to bother about pointer ownership and data translation.


Consider this

Code:

/* Event types dispatched through the event loop. */
typedef enum event_type
{
   image_event,
   /* ... other event types ... */
} event_type;

typedef struct image
{
   unsigned char *data;
   int width;
   int height;
} image;

typedef struct event
{
   event_type type;
   void *data;
} event;

image *global_list[16];
int n = 0; /* next free slot in global_list */

void handle_image(image *myimage)
{
   if (myimage->width < 256)
      global_list[n++] = myimage; /* keeps a second reference to the image */
}

void event_handler(event *myevent)
{
   if (myevent->type == image_event)
      handle_image((image *)myevent->data);
}





A simple case, but the question is: do you need to free the event data after the call to event_handler or not?

Without a reference counter you can't know.

If you free it, you might end up with a dangling pointer in global_list, or worse, a pointer to something else, with no way to detect it.

If you don't free it and it's not copied, you end up with a memory leak (allocated memory without any valid pointer to it left in the program's data).

The same applies if you need to reallocate the pointer.


The main motivation is this: you can pass a generic reference between different modules and functions, and they can keep their own copy of the reference, with a lockless reference-counting algorithm so references can be shared between different threads.
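For illustration, a minimal sketch of the idea with C11 atomics (simplified; not my actual API, obj_retain/obj_release are just names for the example):

Code:

#include <stdatomic.h>

typedef struct obj
{
   atomic_int refs;                /* shared, lock-free reference count */
   void (*destroy)(struct obj *);  /* type-specific destructor */
} obj;

/* Take an extra shared reference; safe from any thread. */
static inline obj *obj_retain(obj *o)
{
   atomic_fetch_add_explicit(&o->refs, 1, memory_order_relaxed);
   return o;
}

/* Drop a reference; the last dropper frees the object, so the
   "who must free the event data" question answers itself. */
static inline void obj_release(obj *o)
{
   if (atomic_fetch_sub_explicit(&o->refs, 1, memory_order_acq_rel) == 1)
      o->destroy(o);
}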

Originally I just developed the json parser to be able to construct a complex object hierarchy from one line, to get a more 'atomic feeling' in object construction, which succeeds or fails as a whole, and to have a sense of internal 'static object typing'.

And it's a very critical feature for my paradigm, because I want the design pattern to be based on asynchronous events rather than on direct function/method calls, so there is no explicit code path the compiler can easily check.

This is a must for single-threaded processing of events emitted by asynchronous sources, and for a 'green threaded' feeling at the high level, even in C; in the most simple cases the green thread can stay locklessly synchronized with other heavy threads or interrupts.





And the other motivation is interfacing and data serialization/jsonification, binary compatibility etc.


If you start from a C struct, you need 4 functions around each structure you want to serialize or share: one to serialize, one to deserialize, one to jsonify, one to de-jsonify.

That code is spread across many different parts, away from the 'hot code' where the real work on the data takes place.

With this system, you can just add a member to the structure in the 'hot code', in the code that actually produces the data, and it will automatically be serialized or jsonified with the new member, without a single line of code to change anywhere else, even if that involves some boilerplate in the 'hot code'.
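For illustration, a sketch of the usage idea (the tree_* declarations are hypothetical names just for the example, not the real API):

Code:

#include <stddef.h>

/* Hypothetical tree API, declarations only, for illustration. */
typedef struct tree_node tree_node;
tree_node *tree_new_object(const char *name);
void       tree_set_int(tree_node *n, const char *key, long v);
void       tree_set_str(tree_node *n, const char *key, const char *v);
char      *tree_to_json(tree_node *n);                     /* textual form */
size_t     tree_serialize(tree_node *n, unsigned char *o); /* binary form  */
void       tree_release(tree_node *n);                     /* refcounted   */

void hot_code(unsigned char *buf)
{
   tree_node *tx = tree_new_object("transaction");
   tree_set_int(tx, "amount", 5000);
   /* Adding one more member right here in the hot code... */
   tree_set_str(tx, "comment", "hello");
   /* ...and both output formats pick it up with no other change. */
   char *json = tree_to_json(tx);
   size_t len = tree_serialize(tx, buf);
   (void)json; (void)len;
   tree_release(tx);
}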

This policy with rpc does mean that the caller defines the data format of the input, and the callee the data format of the output, even though they can be in different languages. But it seems more logical and more efficient from a programmer's perspective.



But after all it's still C code, so you can use all the dirty C stuff, binary packed structures, cheap casts, inline assembler, stack manipulation, and screw everything up and end up with 0day exploits every day.

Or you can use the boilerplate and have no buffer overflows, automatic conversion to/from json and binary, lockless reference counting, and multi-thread safety when manipulating arrays and lists.



Most of the time, cpu caches and instruction pipelining will do a much better job at dealing with these problems, whether it's concurrent access or caching. With instruction reordering, most modern cpus have complex pipelines that can do the job at runtime much better than any compiler, and the latest generation of cpus and motherboards (north bridge, south bridge etc) handles SMP well, with data shared between cores, caches and bridges, atomic operations etc.


My dream would be to get to green threads in C straight from bare-metal interrupts Smiley And I'm not too far off actually lol


I hope it's not too long Smiley


I'm trying to be succinct; I think this is the last very long thing I need to explain for the moment anyway Smiley


And I know you already know most of this stuff, and you already discussed it for months with sean Smiley

But it's to explain my side Cheesy


I started to code the script parser too; I'll try to get something working in a few days Smiley
sr. member
Activity: 336
Merit: 265
I hope you don't disappear. And I hope I can show you something in code asap that makes you interested.

And maybe you will understand why, for me, compile-time checking is mostly irrelevant.

Rather I would say compile-time checks are important especially for small details, but we can't possibly type every semantic due to unbounded semantics.

Because every programmer considers the Turing model as some kind of holy bible, but in truth it's really only useful in the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with the computed result.

Once you throw in irqs (usb events, key/mouse input, network events, hard drive events) or threads, it's not a Turing machine anymore. State can change outside of the influence of the program.

This is unbounded nondeterminism.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, where the functions will be called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters with which they will be called are not determined by the node's own code, and that code is the only thing the compiler can see.

That is only fundamentally incompatible with compile-time (i.e. static) typing in the sense of an exponential explosion of types in type signatures.

Whatever the language used, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.

I don't think anyone was proposing dependent typing.

Anyway @iadix, I must say that I'd better spend some time on developing my stuff and not talking about all this theory. I was already talking about all this theory for months with @keean. I don't want to repeat it all again now.

Let me go try to do some coding right now today. And let's see how soon I could show something in code you could respond to.

We can then trade ideas on specific coding improvements, instead of this abstract discussion.

I understand what you want, and I essentially want the same. We just perhaps have a different idea about the exact form and priorities, but let's see how close we are to agreement once I have something concrete in code to discuss.

Most of the code defined will not be called from inside the component, but by other programs/applications, and you know nothing about them, how they might represent data or objects internally, or how they might make function calls passing those objects and data. The compiler can't have a clue about any of it, neither about the high-level code path nor the global application logic.

It doesn't have to, and shouldn't. The application does not follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add levels of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can just check the code flow of the abstraction layer, which is mostly glue code and doesn't matter that much in the big picture.

I think you may not be fully versed on typeclasses. They are not OOP classes. @keean and I had long discussions about how these apply to genericity and callbacks.

Please allow me some time to try to show something. This talk about abstractions, or your framework, will just slow me down.


Please two or three sentence replies only. We really need to be more concise right now.
sr. member
Activity: 336
Merit: 265
I wrote something in private that I wanted to share, so I decided to stick it here.

As a trader I like the mess and the drama Kiss



I've been told since forever that in order for something to have value, you need to exchange it for other value

...

Yes, you may have a breakthrough and an answer for those "centralized by fiat power" consensus systems,
but will the value of this new coin truly make the developers/early adopters insanely wealthy?
If anyone could make that coin basically without exchanging what they currently have, what would the coin's value become, other than early adopters' initial speculation?

You could exchange your knowledge for tokens.



How do you justify the blatant BS linked above (your quote of me) wherein the Chinaman deliberately says some BS to crash the LTC price?

That's one tool in a pool of many. I've seen Americans (the famous 'Bitcoin is a failed experiment' line), Russians, and Japanese all do the same to bitcoin/crypto over its history.

That is why we need an altcoin wherein the hodlers aren't speculating; they are using the tokens for something, with no desire to speculate with them.

With that wide base of transaction use which doesn't care about the exchange value, the manipulators will not be able to have much impact.

As I said, I have some ideas about how to make whales impotent. Traders won't like it, but long-term HODLers are going to love it, because it is deflationary, which is even better than Bitcoin (i.e. the coin supply will shrink forever, never reaching 0).
sr. member
Activity: 336
Merit: 265
One of the reasons I am creating my project...


Who else is tired of this shit?
full member
Activity: 322
Merit: 151
They're tactical
The core holistic concept I wanted to get at originally was to get to "loopless programs". And maybe you will understand why, for me, compile-time checking is mostly irrelevant.


Because every programmer considers the Turing model as some kind of holy bible, but in truth it's really only useful in the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, run their loop, and finish with the computed result.

Once you throw in irqs (usb events, key/mouse input, network events, hard drive events) or threads, it's not a Turing machine anymore. State can change outside of the influence of the program.

And most applications today do not follow this model at all.

Most applications, whether servers or UIs, have a mostly empty main loop and are programmed with handlers for certain events; for UI apps the main loop is mostly pumping and dispatching events from different sources. In a webapp there is no 'main loop' at all (it's in the browser or js engine).

In any case, the whole code path is never determined at compile time.

There is no predefined code path that the compiler can follow.

The whole program is made as a collection of classes and modules to handle different kinds of events, whether they come from the hardware or from another thread.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, where the functions will be called by programs outside the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters with which they will be called are not determined by the node's own code, and that code is the only thing the compiler can see.

In this event-programming model, it's really more about high-level component definitions, which can be manipulated by generic low-level functions matching certain events or data types, rather than having the whole code flow and function-call hierarchy determined at compile time.

In this context, the number of things for which the compiler can be useful is very low. And this would be true for any language used, whether it's java with tomcat, or c++ to program a server/UI, or php.

Whatever the language used, it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.

Most of the code is event-handler definitions, and there is no full OO code path or call hierarchy.


Blockchain nodes fit this case too: they don't have much of a main loop, they mostly react to network events, either from the P2P network or the rpc interface. The qt wallet UI is the same, no big main loop, only event handlers.



And my goal is to get to this kind of programming paradigm, where applications are not programmed as a single block whose whole code path can be determined at compile time, but as a collection of components implementing certain interfaces or event handlers, with a low-level scheduler to format and dispatch the event handling to different threads, modules or nodes.

The scheduler doesn't have to know anything specific about the high-level type of any module, and modules don't have to know anything about the lower layers or about other modules they don't use.

The compiler can't check anything, because generic types are used to represent the objects, but it doesn't matter, because it wouldn't be able to determine the call hierarchy or code path anyway: there is no call hierarchy in the program being compiled.

It just dispatches generic events to generic components/modules that handle them, as in the sketch below.
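For illustration, a minimal sketch of such a generic dispatcher (simplified; the real scheduler also handles threads and modules):

Code:

#include <stddef.h>

typedef struct event { int type; void *data; } event;
typedef void (*event_handler)(event *);

#define MAX_HANDLERS 32

static struct { int type; event_handler fn; } handlers[MAX_HANDLERS];
static size_t n_handlers = 0;

/* Modules register handlers for event types; no call hierarchy is
   visible to the compiler. */
void register_handler(int type, event_handler fn)
{
   if (n_handlers < MAX_HANDLERS)
   {
      handlers[n_handlers].type = type;
      handlers[n_handlers].fn = fn;
      n_handlers++;
   }
}

/* The scheduler dispatches generic events to whichever modules
   subscribed, knowing nothing about their high-level types. */
void dispatch(event *e)
{
   for (size_t i = 0; i < n_handlers; i++)
      if (handlers[i].type == e->type)
         handlers[i].fn(e);
}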

In this paradigm, programming an application is mostly like a java applet or ActiveX: it doesn't contain a main loop, and it's more about programming routines to handle events or process some data than about a predetermined code path.

Most of the code defined will not be called from inside the component, but by other programs/applications, and you know nothing about them, how they might represent data or objects internally, or how they might make function calls passing those objects and data. The compiler can't have a clue about any of it, neither about the high-level code path nor the global application logic.

It doesn't have to, and shouldn't. The application does not follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they force you to complexify the code or add levels of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can just check the code flow of the abstraction layer, which is mostly glue code and doesn't matter that much in the big picture.

full member
Activity: 322
Merit: 151
They're tactical
But I think you really overestimate the relevance of compile-time type checking in the context of cross-language interfaces and distributed applications.

Even in the best case, with a 100% homogeneous language for the whole app, all modules and all layers down to the kernel level, the compiler can only check its own side of the application.

Nothing says all parts will be compiled with the same compiler, or even written in the same language.

And it seems impossible to have all the parts, from binary data protocols and crypto, through cross-language interface definitions, to high-level OO programming for distributed applications, in a single high-level language.

For me it's impossible.

You need to make a compromise somewhere.

The compromise is that the glue code between low & high level is full of boilerplate, but I don't see a better way to do it.

It remains usable on both ends, from low level/kernel/binary data to high-level OO, network protocols and rpc, and the data representation remains consistent in all parts.

It should be easy to program simple OO/GC-like languages on top of it, or to have a high-level definition of a binary network protocol, with a syntax similar to js, or almost compatible with a js engine, but with other built-in objects made available through module api/interface calls.

The module abstraction replaces the javascript DOM object definitions, and is a much simpler interface.

Or it's possible to simulate the interface of the browser's http object, to have more browser-js-compatible code for distributed requests.
full member
Activity: 322
Merit: 151
They're tactical
From your discussion on the git

https://gist.github.com/shelby3/19ecb56e16f159096d9c178a4b4cd8fd

Quote
One example is that C can't stop you from accidentally reading from uninitialized memory or writing+reading past the end of an allocated memory block, thus opening your code to innumerable security holes. You'll never know if you've caught all these bugs, which become Zeroday exploits, because all your code is written with grains-of-sand detail. Rather, low-level code should only be used for the 5 - 20% of the code that needs to be finely tuned by an expert. High-level programming languages result in much faster programs most of the time in real-world scenarios (not benchmark contests). High-level programs are less verbose, provide more consistency of structure and performance, and enable less opaque expression of semantics (i.e. the program's logic and purpose). C obscures the actual semantics in loads of unnecessary low-level details such as manually managing pointers, and the inability to express and abstract over high-level data structures such as collections and functional composition.

Quote
Not fault, but rather reading the wrong data injected by the hacker.

Hacker is injecting some data into memory that you read by accident; because he observes one of your arrays sometimes precedes a certain string that you store in memory. For example, you receive some input string from the Internet, store that on the heap, then with certain random probability that string ends up being read from due to a bug in your code that reads from an array on the heap. In this way, the hacker has injected some data where you did not expect it. This is one way how Zeroday exploits are made.


That's why it's better to use the object tree structure than C structures directly.

And C structures can be dicey with cross-compiling and binary network protocols.

Creating a json string from an object tree structure is not so trivial in C either (stdio, varargs, strcpy, strtod etc). Even with c++ and boost spirit, it's far from being extremely limpid.

The object tree system avoids bad memory accesses like this, among other things.

Even when the caller is blind to the data that the function uses (to the C compiler it's just a reference pointer).

But again, nothing prevents declaring and using C structures; I just avoid them as much as possible in the API.

The only exception in the non-kernel code, I think, is the string structure, but it has 3 members and is very generic.

Otherwise I avoid using binary C structures as arguments in function calls that potentially need to be cross-compiler/network safe.

Strings are packed internally with the bitcore variable-string format for easy serialization to the binary p2p protocol.
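For reference, a minimal sketch of this kind of Bitcoin-style variable-length string encoding (a compact-size length prefix followed by the raw bytes; simplified, the 0xFF/64-bit length case is omitted):

Code:

#include <stdint.h>
#include <string.h>

/* Write a compact-size length prefix, then the string bytes.
   Returns the number of bytes written into out (assumed large enough). */
size_t write_varstr(uint8_t *out, const char *s, size_t len)
{
   size_t n = 0;
   if (len < 0xFD) {
      out[n++] = (uint8_t)len;
   } else if (len <= 0xFFFF) {
      out[n++] = 0xFD;                /* 16-bit little-endian length follows */
      out[n++] = (uint8_t)(len);
      out[n++] = (uint8_t)(len >> 8);
   } else {
      out[n++] = 0xFE;                /* 32-bit little-endian length follows */
      out[n++] = (uint8_t)(len);
      out[n++] = (uint8_t)(len >> 8);
      out[n++] = (uint8_t)(len >> 16);
      out[n++] = (uint8_t)(len >> 24);
   }
   memcpy(out + n, s, len);           /* raw string bytes, no terminator */
   return n + len;
}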

With all this, say bye to zeroday exploits, bad memory accesses, etc etc etc.

And say hi to cross-compiler binary compatibility, cross-language interfaces, safe and secure distributed applications etc etc.


I'm sure you see what I'm getting at Cheesy Cheesy
full member
Activity: 322
Merit: 151
They're tactical
The C compiler can only understand the "leaves" of the object: the vars that can be represented by native C types (int, float, strings, pointers, arrays etc). For the object tree itself it doesn't understand the type; to the C compiler, all objects are a reference pointer.

Internally there is static typing for objects with built-in types.

Or you can use an "object template", close to a type class, to instantiate an object based on a compile-time structure, using those built-in types as a static type definition in a json-like format, as in the sketch below.
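A hypothetical illustration of such a template (the json-like format and the tree_instantiate name are invented just for the example):

Code:

/* Hypothetical object template in a json-like format: field names bound
   to built-in static types. Instantiation succeeds or fails as a whole. */
typedef struct tree_node tree_node;
tree_node *tree_instantiate(const char *template_def); /* assumed API */

static const char *tx_template =
   "{"
   "  \"amount\"  : \"uint64\","
   "  \"address\" : \"string\","
   "  \"inputs\"  : [ { \"txid\" : \"hash\", \"index\" : \"uint32\" } ]"
   "}";

/* Usage: tree_node *tx = tree_instantiate(tx_template); */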

The C compiler won't understand anything beyond its basic native types, in the leaves of the objects.

Objects can have a static type associated with their reference, and a name.

That's the advantage over lists of C structures: members can be referenced by name and type based on runtime input, close to functional programming.

And structures can be manipulated by the same functions whether they are deserialized from binary data, constructed from a json object, or constructed manually in C from the C compiler's binary types.

But different compilers can have different binary packing for struct members.

In the absolute, nothing prevents using C structs, but you need specific glue code on both sides to deserialize them (compilers can have weird binary packing of structures) and to turn them into json.
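A small self-contained illustration of the packing problem (standard C, nothing framework-specific):

Code:

#include <stdio.h>
#include <stdint.h>

/* The same logical structure can have different binary layouts depending
   on the compiler and packing settings, which is what breaks naive
   struct-over-the-wire protocols. */
struct msg_padded {
   uint8_t type;     /* 1 byte, then typically 3 bytes of padding */
   uint32_t length;
};

#pragma pack(push, 1)
struct msg_packed {
   uint8_t type;     /* no padding: matches the wire format exactly */
   uint32_t length;
};
#pragma pack(pop)

int main(void)
{
   printf("padded: %zu bytes, packed: %zu bytes\n",
          sizeof(struct msg_padded),   /* commonly 8 */
          sizeof(struct msg_packed));  /* 5 */
   return 0;
}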

With this you get instantiation based on a type-class-like format, serialization to binary data and conversion to textual json automatically, with reference counting, arrays, lists, collections, thread safety, dynamic linking at the binary level, and interfaces implemented in cross-compiler/system executable binaries, without glue code specific to each structure.

Static object types can be defined based on either built-in structures/objects or native C compiler types, and arrays/lists of those.

full member
Activity: 322
Merit: 151
They're tactical
The tree can be serialized to binary format and text json. Both.

The node has both the p2p protocol in binary data and the rpc interface in json, working on the same objects.

Okay, so you support APIs with either binary compatibility or JSON.

The type definition will always escape the C compiler's comprehension, but you can use a typedef alias on the reference pointer to help.

Why can't you translate these JSON objects to binary data structures in C? Which data structures can't a C struct model?

It would be much more elegant and typed to access fields of data structures with the normal dot notation instead of the boilerplate of library calls.

The articulation at the higher level won't be made in C. It can be distributed in pure binary form for linux and windows; modules with new rpc interfaces can be added to the node without recompiling.

The relevant part of the api for app programmers is not there, but in the javascript code, with the rpc/json api.

Articulation is a strange word to use there. I think what you mean to say is that the high-level coding will be done in JavaScript, and calls can be made to C modules which can be loaded at run-time.


There is a way to pack a binary structure into a single variable in binary form. I do this for der signatures.

When there is a complex hierarchy, and the types need to be kept across the layers that transport it (eg event loops dealing with events whose members have different types), it can be useful to have meta-type objects like this.

It can safely transport a complex data structure through a generic reference-pointer type. And the object can be serialized both to binary data and hashed, or to textual json for the rpc/json & js api.


Yes, basically the idea is to have binary module plugins implement the rpc interface: the rpc method is the exported symbol name, the class is the module name, and params are passed with the tree structure (a typedef alias could be added for a basic C compiler check). A sketch of that routing follows.
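For illustration, a minimal sketch of that routing on linux (the module path convention and the tree type are assumptions just for the example):

Code:

#include <dlfcn.h>
#include <stdio.h>

/* Opaque tree type standing in for the object tree. */
typedef struct tree_node tree_node;
typedef tree_node *(*rpc_method)(tree_node *params);

/* Route a "class.method" rpc call to a symbol exported by a binary
   module, so new interfaces can be added without recompiling the node. */
tree_node *rpc_dispatch(const char *class_name, const char *method,
                        tree_node *params)
{
   char path[256];
   snprintf(path, sizeof path, "./modules/%s.so", class_name);

   void *mod = dlopen(path, RTLD_NOW);             /* class == module  */
   if (!mod)
      return NULL;

   rpc_method fn = (rpc_method)dlsym(mod, method); /* method == symbol */
   return fn ? fn(params) : NULL;
}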
sr. member
Activity: 336
Merit: 265
The tree can be serialized to binary format and text json. Both.

The node has both the p2p protocol in binary data and the rpc interface in json, working on the same objects.

Okay, so you support APIs with either binary compatibility or JSON.

The type definition will always escape the C compiler's comprehension, but you can use a typedef alias on the reference pointer to help.

Why can't you translate these JSON objects to binary data structures in C? Which data structures can't a C struct model?

It would be much more elegant and typed to access fields of data structures with the normal dot notation instead of the boilerplate of library calls.

The articulation at the higher level won't be made in C. It can be distributed in pure binary form for linux and windows; modules with new rpc interfaces can be added to the node without recompiling.

The relevant part of the api for app programmers is not there, but in the javascript code, with the rpc/json api.

Articulation is a strange word to use there. I think what you mean to say is that the high-level coding will be done in JavaScript, and calls can be made to C modules which can be loaded at run-time.
full member
Activity: 322
Merit: 151
They're tactical
And there is the "binary form", a tree of referenced objects, constructed from text json, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from json transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for its arguments, and used to define the json/rpc API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.

The p2p protocol uses a binary data format; opengl too, crypto too.

I thought you wrote you are serializing these to JSON text? Why are you now saying you are transmitting them in binary format?

Your communication is very difficult for me to understand.

Yes, it can't be checked at compile time, but

Why the "but"? Can it or can't it?

there is a way to have a static definition of type too, DOM-objects style, with a default structure template associated with the meta type, which can make sure all objects of this meta type have a certain forced structure on instantiation, even at the binary level (so they can be serialized/hashed from a json or binary data definition).

What do these words mean?

Do you understand that it is very difficult to get a huge number of programmers to adopt some strange framework written for one person's preferences? Generally, changes in programming have to follow what is popular and understood.

If you have a superior coding paradigm, then it should be something that can be articulated fairly simply and programmers will get it easily and say "a ha! that is nice!".

Something that is very convoluted to explain is probably not going to be popular.

The tree can be serialized to binary format and text json. Both.

The node has both the p2p protocol in binary data and the rpc interface in json, working on the same objects.

The type definition will always escape the C compiler's comprehension, but you can use a typedef alias on the reference pointer to help.

The articulation at the higher level won't be made in C. It can be distributed in pure binary form for linux and windows; modules with new rpc interfaces can be added to the node without recompiling. Nobody has to see any bit of the source to develop a js app with it.

And if it's to be used at low level, I'll document the low-level API so it can be used to make C or C++ apps. But that's not the point for the moment.


The relevant part of the api for app programmers is not there, but in the javascript code, with the rpc/json api.

Only the blockchain protocol implementation, or the host side of the interface for application modules, has to be done in C with the internal api.

Js app developers or node hosters can just get the exe and modules and build their app from the rpc/api.

It could be in assembler or Lisp; it wouldn't change a thing.

I can document the internal api and interface, but it's already all in the source code, and there are examples.

To program high-level scripting with it, you need to know the high-level API with the tree.



If there were already a well-adopted solution for this that made all app developers happy, with a safe, secure, efficient high-level distributed application framework, I would say ok, but there isn't ... so now what ...

full member
Activity: 322
Merit: 151
They're tactical
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

Yet you still wrote another wall of text. Can't you make your point more concisely? Shouldn't you think more carefully about what you want to write?

In one sentence: the idea is to have a representation of a hierarchy of objects and lists (keyname-ref) (collections of collections of collections) manipulated in C and used as function arguments, to facilitate cross-compiler compatibility and memory-leak detection, to allow representing simple high-level operators on objects and variables from C, and to be convertible to/from textual json with generic functions.

Okay this is slightly better communication. Now you are talking to me in high-level concepts that can be digested and understood.

1. I don't need cross-compiler compatibility if I am using Java or JavaScript that runs everywhere. Performance and up-time hardening are not my first priorities. That will come later. I am one guy trying to get to testnet, not trying to write the perfect C++ implementation on the first draft of the code.

2. I don't need memory-leak detection (i.e. refcounting) if I have GC from Java, JavaScript, or Go.

3. Emulating high-level data structures in C with a library, means the static typing of those data structures is lost. I remember you wrote that you didn't want to use C++ because it is a mess. So apparently you decided to forsake static typing.

4. I would prefer to have a language which can statically type the data structures and which doesn't require the boilerplate of library calls for interfacing with higher-level data structures.

In other words, I see you have made compromises because of priorities which you think are more important. And what are those very important priorities? Performance?

1. Up-time should already be good Smiley but yes, you can write code in C using this data format as function arguments and call it from js or java, even remotely via http/json.

2. Yes, normally, but I need to check the particular case sent in pm; for most things, yes.

3. Static typing can be emulated at the meta-typing level at run-time, but hardly by the C compiler; maybe some compile-time check tricks could be made with a macro or pragma, as in the sketch below.

4. There is some form of static typing internally, but it's not visible at the C level. It could be seen by a higher-level script supporting static typing.
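For point 3, a sketch of the kind of compile-time macro trick I mean, using C11 _Generic/_Static_assert (the names are hypothetical):

Code:

typedef struct tree_node tree_node;
typedef tree_node *image_ref;   /* typedef alias documenting the meta type */

/* Compile-time check that an expression really is a tree_node reference:
   _Generic selects 1 only for tree_node*, and _Static_assert rejects 0. */
#define ASSERT_TREE_REF(x) \
   _Static_assert(_Generic((x), tree_node *: 1, default: 0), \
                  "not a tree_node reference")

void example(image_ref img, int *wrong)
{
   ASSERT_TREE_REF(img);      /* compiles */
   /* ASSERT_TREE_REF(wrong);    would fail at compile time */
   (void)img; (void)wrong;
}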


Performance is not the priority in the short term.



Initially the real motivation is an operating system project based on a micro kernel, with system-agnostic binary modules that can be compiled from windows or linux, and that can abstract away most of the need for complex memory allocation and object trees at the driver level.

So it can be booted directly on a pi, or a pc, or in virtualbox on bare metal, with rpc and distributed modules also in mind, for doing efficient server-side operations in C, for 3d or data processing, with distributed applications programmed on top of this.

Like an application server, with integrated crypto, vector math and data lists, somewhat like a small tomcat for embedded systems, oriented toward json and webapps.


The goal was originally this, except I integrated modules to deal with the blockchain protocol and implemented the low-level functions with the win32/Linux kernel api to make blockchain nodes with an rpc server.
sr. member
Activity: 336
Merit: 265
And there is the "binary form", a tree of referenced objects, constructed from text json, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from json transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for its arguments, and used to define the json/rpc API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.

The p2p protocol uses a binary data format; opengl too, crypto too.

I thought you wrote you are serializing these to JSON text? Why are you now saying you are transmitting them in binary format?

Your communication is very difficult for me to understand.

Yes, it can't be checked at compile time, but

Why the "but"? Can it or can't it?

there is a way to have a static definition of type too, DOM-objects style, with a default structure template associated with the meta type, which can make sure all objects of this meta type have a certain forced structure on instantiation, even at the binary level (so they can be serialized/hashed from a json or binary data definition).

What do these words mean?

Do you understand that it is very difficult to get a huge number of programmers to adopt some strange framework written for one person's preferences? Generally, changes in programming have to follow what is popular and understood.

If you have a superior coding paradigm, then it should be something that can be articulated fairly simply and programmers will get it easily and say "a ha! that is nice!".

Something that is very convoluted to explain is probably not going to be popular.
sr. member
Activity: 336
Merit: 265
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

Yet you still wrote another wall of text. Can't you make your point more concisely? Shouldn't you think more carefully about what you want to write?

In one sentence: the idea is to have a representation of a hierarchy of objects and lists (keyname-ref) (collections of collections of collections) manipulated in C and used as function arguments, to facilitate cross-compiler compatibility and memory-leak detection, to allow representing simple high-level operators on objects and variables from C, and to be convertible to/from textual json with generic functions.

Okay this is slightly better communication. Now you are talking to me in high-level concepts that can be digested and understood.

1. I don't need cross-compiler compatibility if I am using Java or JavaScript that runs everywhere. Performance and up-time hardening are not my first priorities. That will come later. I am one guy trying to get to testnet, not trying to write the perfect C++ implementation on the first draft of the code.

2. I don't need memory-leak detection (i.e. refcounting) if I have GC from Java, JavaScript, or Go.

3. Emulating high-level data structures in C with a library, means the static typing of those data structures is lost. I remember you wrote that you didn't want to use C++ because it is a mess. So apparently you decided to forsake static typing.

4. I would prefer to have a language which can statically type the data structures and which doesn't require the boilerplate of library calls for interfacing with higher-level data structures.

In other words, I see you have made compromises because of priorities which you think are more important. And what are those very important priorities? Performance?
full member
Activity: 322
Merit: 151
They're tactical
Well, idk what format you want to use to define the api then? (to start somewhere)

Yes, otherwise see you in 6 months when you have code and an api to show.
sr. member
Activity: 336
Merit: 265
All the API is documented in the white paper

...

Now if you can't understand my grammar, don't have time to read my code

...

If you are not interested in working in collaboration

Your white paper is incomprehensible to me. I tried to read it.

Your low-level code is doing strange things, which I am not sure are good design or not.

I don't have time to reverse engineer your high-level concepts, by combing over 1000s of lines of low-level code.

Collaboration is a mutual responsibility. I will definitely collaborate with those who make me more efficient.

I am most interested in new ideas, when the progenitor of the ideas is able to explain their ideas succinctly, coherently, and cogently.

Most important is for me to make APIs and a testnet so that app developers can start coding. You can use what ever code you want to write apps. We don't really need to collaborate. You and I should be independent.

Now if there is something I can use from your framework in my work, then great. But it isn't really a requirement for what we need to do.

I think your concern is that my work won't be done in time. I understand that. That is a legitimate concern. You can surely go your own way, if you see my progress is too slow or if you feel my design decisions are incorrect. But as of this moment, you haven't even seen any APIs or design decisions from me yet. So it is difficult for you to judge.

No offense is intended. I am just being frank/honest. I am not intending to piss you off. But you keep slamming me with explanations which are not very cogent from my perspective. We aren't forced to collaborate on your framework. If your explanations were easier for me to readily grasp, then I could justify perhaps the tangential discussion on your framework. But if your explanations are difficult or cryptic for me to try to understand, then I reach the point I have by now where I see I am losing a lot of time reading stuff that doesn't quickly convey to me your high-level justifications and concepts.

Maybe it's my fault, or yours, or both. But it isn't intended to be offensive. It just is what it is.
full member
Activity: 322
Merit: 151
They're tactical
And there is the "binary form", a tree of referenced objects, constructed from text json, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from json transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for its arguments, and used to define the json/rpc API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.

The p2p protocol uses a binary data format; opengl too, crypto too.

Yes, it can't be checked at compile time, but there is a way to have a static definition of type too, DOM-objects style, with a default structure template associated with the meta type, which can make sure all objects of this meta type have a certain forced structure on instantiation, even at the binary level (so they can be serialized/hashed from a json or binary data definition).
full member
Activity: 322
Merit: 151
They're tactical
I asked you to please:

For example, I have no idea why you are serializing to JSON textual format in your C code and not just passing (little or big endian) binary data that is compatible with C structs. And please don't write another long explanation; I can't really understand your descriptions well. You try to say too many things at the same time, and it ends up not being that comprehensible.

Yet you still wrote another wall of text. Can't you make your point more concisely? Shouldn't you think more carefully about what you want to write?


In one sentence: the idea is to have a representation of a hierarchy of objects and lists (keyname-ref) (collections of collections of collections) manipulated in C and used as function arguments, to facilitate cross-compiler compatibility and memory-leak detection, to allow representing simple high-level operators on objects and variables from C, and to be convertible to/from textual json with generic functions.
sr. member
Activity: 336
Merit: 265
And there is the "binary form", a tree of referenced objects, constructed from text json, or manually with the API (or eventually from a script).

The idea is to completely abstract function calling and parameter passing, using a format that can be converted to/from json transparently and automatically, with object reference counting, stored internally as binary data aligned on 128 bits for fast SIMD operations.

So you want to serialize binary data structures to JSON text and deserialize them back to binary data structures? Why?

The same function prototype in C can be used to declare different functions with different input parameters.

Like a generic function type that can be implemented with different data formats for its arguments, and used to define the json/rpc API from the C side.

You mean you are supplementing C with a meta type system residing in the run-time parsing of these JSON tree structures? But then your meta type system is not statically checked at compile-time.
full member
Activity: 322
Merit: 151
They're tactical
(The rest is to answer other points.)


I already have a full API, and I did not make this framework specially for blockchain, even if for applications, yes, I do have the blockchain ecosystem in mind specifically. But the framework can handle all kinds of things (ray tracing, manipulation of graphic objects, manipulation of blockchain objects, private keys, signatures, in-browser staking etc).

All the API is documented in the white paper, and in other places. All the source code is on the git. There are working examples running.

You have all the low-level code and explanations in the PMs.

Now if you can't understand my grammar, don't have time to read my code, and your idea is to start programming a blockchain and an api for distributed applications alone from scratch, including high-level OO interfacing, good luck with that Smiley I'll see where you get lol

If you are not interested in working in collaboration, again ok; I have ideas for handling most of the issues you raise in the git discussion: local stack frames, circular references, multi-threading, no memory leaks, asynchronous events with generic function declarations, compatibility with typescript/json and javascript objects, list/array processing.