
Topic: Do you think "iamnotback" really has the "Bitcoin killer"? - page 10. (Read 79971 times)

sr. member
Activity: 336
Merit: 265
There is a serious inconsistency in how UTXOs are referenced.
On one hand, there is all the work of having a totally ordered consensus of transactions: the block chain.  It would have been extremely simple to refer to a transaction output in a block chain: the block number, the transaction number in the block, and the output number in the transaction uniquely specify the UTXO.  No need for a hash, no need for 256 bits!

Seriously you need to stop pretending you know anything about blockchain design.

This is an egregious beginner's error.

Lol, you just flunked the most fundamental issue of decentralized systems, which is that there is no total order.

Well, deep down a blockchain is still a decentralized database, which preserves total order Smiley

Even if the way the chain is constructed is not ordered, the system makes sure to guarantee a total order consistent across the network.

Incorrect. Chain reorganizations can happen at any time. PoW is probabilistically (i.e. never) final, not deterministically final.

Thus referencing outputs by chain position instead of by hash, as @dinofelis suggested, would be at the very least a DDoS security vulnerability, and would cause other cascading issues.
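To make the reorg point concrete, here is a minimal sketch (hypothetical struct and field names, not any real client's code) contrasting the two referencing schemes: a positional reference is only meaningful relative to one particular chain and silently points at a different output after a reorg, while a hash-based outpoint (the txid + output index form Bitcoin actually uses) either still identifies the same transaction or simply fails to resolve.

Code:
/* Sketch: positional vs hash-based references to a transaction output.
   Hypothetical types for illustration only. */
#include <stdint.h>
#include <stdio.h>

/* Positional reference: meaningful only relative to one specific chain. */
typedef struct {
    uint32_t height;    /* block number                    */
    uint16_t tx_index;  /* transaction index within block  */
    uint16_t vout;      /* output index within transaction */
} pos_ref;

/* Hash-based reference (outpoint): independent of chain position. */
typedef struct {
    uint8_t  txid[32];  /* hash of the transaction */
    uint16_t vout;
} outpoint;

int main(void)
{
    pos_ref  p = { 420000, 17, 0 };
    outpoint o = { { 0xab, 0xcd /* ... remaining 30 bytes ... */ }, 0 };

    /* After a reorg, block 420000 can contain entirely different
       transactions, so p now silently "refers" to someone else's output. */
    printf("positional: block %u, tx %u, out %u (chain-dependent)\n",
           (unsigned)p.height, (unsigned)p.tx_index, (unsigned)p.vout);

    /* The outpoint identifies the transaction by content; if that tx was
       reorged out, a lookup fails instead of pointing at the wrong output. */
    printf("outpoint: txid %02x%02x..., out %u (chain-independent)\n",
           o.txid[0], o.txid[1], (unsigned)o.vout);
    return 0;
}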

@IadixDev, that is why you leave the blockchain design work to me. I am the expert. You are not.
hero member
Activity: 715
Merit: 500
Didn't realize he was back  Smiley. Bitnet is the name of your project? Do you have your whitepaper available?

I wrote one for the consensus algorithm in Oct/Nov 2016, but I haven't published it yet.

Much work to do yet.

Looking forward to it.
sr. member
Activity: 336
Merit: 265
Didn't realize he was back  Smiley. Bitnet is the name of your project? Do you have your whitepaper available?

I wrote one for the consensus algorithm in Oct/Nov 2016, but I haven't published it yet.

Much work to do yet.
sr. member
Activity: 336
Merit: 265
Re: Speculation Rule: sell when others are irrationally optimistic or too exuberant

and it will never be forgotten when you are actually "right" as it won't be forgotten when you are actually "wrong"

Actually the opposite is true, because people always blame their mistakes on someone else. So I get no credit for the numerous times I've been correct, and people invent mirages in their mind of how I was wrong, when it was really their own mistake.

I've found that I can be the first to make a very important correct statement, then it spreads around the forum and suddenly everybody thinks it was their idea and nobody knows who was the original seed.

This is why fungible money can be a useful information tool, because it measures phenomena that humans can't accurately appraise (measure). He who has the most money was the one who was correct more than anyone else. Bullshit walks, money talks.

Note however that fungible money has some serious drawbacks, such as that it is a winner-take-all power vacuum.

Yes, some rating system, or just a + or - like YouTube. Would be fun.

Not fun for those who aren't idiots. It would be a clusterfuck power vacuum of ignorance and politics, just like democracy. As if this forum needs more of that.  Roll Eyes
hero member
Activity: 715
Merit: 500
Didn't realize he was back  Smiley. Bitnet is the name of your project? Do you have your whitepaper available?
hero member
Activity: 656
Merit: 500
From my deep study of the range of plausible designs for a blockchain consensus system (and I studied much deeper than what is contained in that linked thread), I conclude that it is impossible to have a fungible token on a blockchain in which the consensus doesn't become centralized iff the presumption is that the users of the system gain the most value from the system due to its monetary function.

However, I was able to outsmart the global elite, because I realized that if the users of the system gained more value from the system for its non-monetary function and iff that value can't be financed (i.e. its value can be leeched off by control of fungible money), and if I provided a way for the users to provide the Byzantine fault DETECTION as a check-and-balance against the power of the whales and if I provided this in a way that is not democracy and is a crab bucket mentality Nash equilibrium, then I would have defeated the problems with the concept of fungible money.

The elite simply weren't aware of these concepts, because I invented them. Nash didn't know this.

And that is what I intend to launch with BitNet.


Quoting, because this post is too valuable  Cheesy
full member
Activity: 149
Merit: 103
Thanks for your insights iamnotback!

From my deep study of the range of plausible designs for a blockchain consensus system (and I studied much deeper than what is contained in that linked thread), I conclude that it is impossible to have a fungible token on a blockchain in which the consensus doesn't become centralized iff the presumption is that the users of the system gain the most value from the system due to its monetary function.

What exactly do you mean by "monetary function"? The fact that miners receive rewards and fees?

However, I was able to outsmart the global elite, because I realized that if the users of the system gained more value from the system for its non-monetary function and iff that value can't be financed (i.e. its value can be leeched off by control of fungible money), and if I provided a way for the users to provide the Byzantine fault DETECTION as a check-and-balance against the power of the whales and if I provided this in a way that is not democracy and is a crab bucket mentality Nash equilibrium, then I would have defeated the problems with the concept of fungible money.

Do you imply that your system (only) offers non-monetary incentives to miners?

sr. member
Activity: 336
Merit: 265
Quote from: iamnotback
Besides the shadow elite are apt to love the altcoin I will launch, because they will see it as yet another speculation that falls under Bitcoin's umbrella.

Why would they love a currency that is designed to be truly decentralized? Please be more specific on that.

Because they don't think anything can be. They will view it as another speculation or if necessary something they can capture when needed.

I assume you meant "decentralized". But that leaves us with a dilemma:

a) They think it because they are very smart and proved the impossibility of decentralized currencies before releasing Bitcoin. Though that would mean that your design would turn out as impossible as well. Either because it's impossible as such or because it will finally get captured by them.

b) They didn't prove it and just think (or hope) it. In that case they would be very dumb (and thus cannot be called an "elite") since they must have been aware of the risk that someone would eventually come and fix Bitcoin's flaw of becoming centralized.

@iamnotback: Do you have any thoughts on how to solve that dilemma?

From my deep study of the range of plausible designs for a blockchain consensus system (and I studied much deeper than what is contained in that linked thread), I conclude that it is impossible to have a fungible token on a blockchain in which the consensus doesn't become centralized iff the presumption is that the users of the system gain the most value from the system due to its monetary function.

However, I was able to outsmart the global elite, because I realized that if the users of the system gained more value from the system for its non-monetary function and iff that value can't be financed (i.e. its value can be leeched off by control of fungible money), and if I provided a way for the users to provide the Byzantine fault DETECTION as a check-and-balance against the power of the whales and if I provided this in a way that is not democracy and is a crab bucket mentality Nash equilibrium, then I would have defeated the problems with the concept of fungible money.

The elite simply weren't aware of these concepts, because I invented them. Nash didn't know this.

And that is what I intend to launch with BitNet.



Satoshi clearly stated that he intended to have VISA-like transaction volumes on-chain with bitcoin, but that bitcoin would become a semi-centralized, server-based thing.

He lied by not mentioning that this wasn't his intended use case. He was just responding to a question about whether Bitcoin could scale from a bandwidth consideration alone. You can find other cases where he lied by not pointing out how impractical something would be, such as when he claimed some nodes would still be willing to process a transaction for free:

I read from Satoshi also that he realized that his system would only be viable in the long term in the hands of an oligarchy of miners.

So why can't you add 2+2?

He knows it will become centralized, yet somehow he thinks hobbyist nodes will still process for free. Satoshi was a liar.

Btw, John Nash was a prankster and deviant.

I was listening to him in an interview and he said he isn't concerned about helping the poor, because they are adjusted to their poverty.

So much for the P2P nature of bitcoin, which was only intended as a bootstrap with useful idiots. He clearly didn't care about a long-term P2P network, or the importance of decentralized nodes:

Now you are starting to understand.

So why can't you add 2+2?

The block chain was just the ledger that a few oligarchs would share amongst them, hopefully keeping one another in check, to serve as the new centralized VISA backbone to which all users would connect.

However, the way bitcoin is evolving, and was actually designed with the 1 MB limit (and other practical limits), is that on-chain transactions will be limited to a few big actors and will not reach large scale; but on the other hand, most people will be able to download a chain with which they cannot do anything, apart from contemplating how the big guys are filling it with their expensive transactions.

Bitcoin is "rich sleazy business" OWN private money, NOT to be used by normal people, contrary to what Satoshi initially announced.  Bitcoin IS downloadable by anybody, but not usable; Satoshi announced bitcoin to be usable by anybody, but not downloadable except for a few miner oligarchs.

And why did it become rich sleazy business money and not a VISA administered by a few miners? Because Satoshi himself put a 1 MB limit on the block chain. If he understood the game structure of bitcoin, he would have known that this limit would become immutable, because it was needed to generate fees (which he needed, in the longer term, because of his diminishing coin-creation scheme); but then it couldn't turn into a VISA kind of money, and he would be denying what he had been proposing from the start. And if he didn't understand the consequences of introducing a "temporary" 1 MB limit, then he couldn't foresee that it was going to become a rich-business-only crypto either.

Yup. So why can't you admit the evil genius of Satoshi?


Btw, I think it was necessary to murder John Nash before the blockchain scaling debate reached its boiling point. Because by now even people such as yourself are starting to realize something smells funny.
sr. member
Activity: 336
Merit: 265
Jihan Wu approves of my posts?

Last year, Kevin Pan recommended a book to me called The Cathedral and the Bazaar. I got it. We will put in lots of money.

I regretted one thing. In China, open source culture is not popular. I did not understand it. We put too little, or zero, money into the community.

Ahem. Is someone named Jihan reading my posts?

That chart is clearly indicating that Bitcoin can't move higher until Litecoin catches up.

Litecoin's price is undergoing the same technology adoption as Bitcoin and all the rest; it is just that the first hump is very volatile (because silver is more volatile than gold, for the reasons I have explained). So this means Litecoin's price is going to $100+:

Jihan replied?

Let me correct some FUD: miners love LN. LN makes the bitcoin price higher, and miners love bitcoin sold at a high price, hence miners love LN.
sr. member
Activity: 336
Merit: 265
or some alt account

I haven't used any other account in 2017.

If you see any posts on my old accounts in 2017, that is because they've been hacked. I scrambled the passwords, but apparently not well enough. As for TPTB_need_war, I stopped using it because it was banned, but I forgot to scramble the password, and it has since been hacked into. Archive.org has the historical record of that account before it got hacked. So if they delete or edit it, I can prove it.

And I have no accounts that aren't listed in the link in my signature.
sr. member
Activity: 336
Merit: 265
Interesting discussion going on over there... (click the quote if you're interested to go read the context)

Also BTC will not be used by billionaires only, that's stupid; people will demand changes so that bitcoin cannot be used only by a handful of people on earth. If it takes a UASF then a UASF will happen so segwit + LN can happen and everyone can use bitcoin; additional blocksize increases will come too. Nobody will support the "billionaires only blockchain", that's stupid.

Sorry you can't do anything to stop it:


Cry and scream all you want. You are wasting your time fighting what is inevitable.

Soon you will realize this. Go ahead and try. My popcorn is laughing.

It's as easy as a UASF + PoW change with a new solution, such as a randomly changing algorithm to avoid efficient ASIC stacking.

"BillionaireChain" used by 2,000 people on earth will be seen as a joke by the rest of the population and it will no longer be Bitcoin. Progress will move on.

None of your democracy shenanigans will prosper. But feel free to lose all your wealth trying.

The opinion of the masses does not matter, if we presume that fungible money will remain supreme in the economy.

I have one alternative to offer, which is the theory that the economy will bifurcate into a fungible-money-driven tangible economy and a knowledge-age economy in an Inverse Commons. The latter is what my BitNet project is about. If I am correct, then that will be our only alternative.

But don't believe me. Please go waste your time and lose all your wealth. The smart money is starting to recognize my expertise. Please do your own due diligence.
legendary
Activity: 1554
Merit: 1000
You are, without doubt, a professional psychologist's wet dream.

And again more evidence of my expertise.
What is it you would like people to think, when they read how absolutely fabulous you are, Shelby?
sr. member
Activity: 336
Merit: 265
You are, without doubt, a professional psychologist's wet dream.

And again more evidence of my expertise.
legendary
Activity: 1554
Merit: 1000
It is important for me to clear up the record on the following because I am preparing to blog on a ToE which ties in everything we've been discussing lately.  Shocked

Re: OT crap from Compact Confidential Transactions for Bitcoin

Edit2: Thanks for the move, totally appropriate.

Hitler Gregory had moved it from the original thread, where it belonged in context, and he renamed the thread with this ad hominem insult of a name, OT crap from Compact Confidential Transactions for Bitcoin.

What is so ironic is that I think I ended up later potentially solving the proof-of-square requirement (required by the flaw Andrew Poelstra aka andytoshi had discovered) for Compact Confidential Transactions (CCT) when I merged that homomorphic encryption with Cryptonote ring signatures, prior to the similar attempt to merge Blockstream's less efficient CT with Cryptonote.

Andrew Poelstra and Gregory Maxwell don't need any defense from me, their records stand on their own, but I'm thinking pointing this out may be helpful to those who aren't familiar with your antics. I'll also point out that most people, especially GMaxwell, have been overwhelmingly patient with you.

https://bitcointalksearch.org/topic/m.5640949

Lol, you linked to where I had been the first one to point out to Gregory Maxwell that CoinJoin can always be jammed with DoS, since one can't blacklist the attacker: the entire point of CoinJoin is to provide mixing so that an attacker can obscure his UTXO history.

You are so careless that you didn't even realize that was my famous shaming of Gregory. Did you miss the post where I declared "checkmate", then Gregory responded with ad hominem, and then by the force of my correct logic he had to STFU?



Lol, again you missed where at the end I showed the math derivation of how to defeat selfish mining, which was the basic idea behind published designs such as GHOST (which I wasn't aware of at the time, and only became aware of when I read Vitalik's blog).

You linked to a guy who is technologically ignorant and is currently a BU shill.



Yes, Gregory did point out an error in my conceptualization of Winternitz, which I had only become aware of hours or days before that, and I admitted it. I even went on to write Winternitz code and become quite expert in it, even incorporating Winternitz into my anti-DDoS conceptualization.

But you failed to cite the other occasions where I put Gregory's foot in his mouth, such as my recent exposé on how Bitmain checkmated Blockstream, and how in 2016 I pointed out his flawed logic and math on why Ogg shouldn't have an index (a format in which he was intimately involved as a co-designer of one of the key compression codecs!):


And how is not having the index any worse than not allowing an index? I fail to see the logic. It seems you are arguing that the receiving end will expect indexes and not be prepared for the case where indexes are not present. But then that is a bug in the receiving end's software. And in that case, there is no assurance that such software would have done the index-less seeking more efficiently under the status quo of not allowing an index. None of this makes sense to me.

Also, I don't understand how you calculate a 20% increase in file size for adding an index. For example, let's take an average 180-second song consuming roughly 5 MB with VBR encoding. Let's assume my users are satisfied with seeking in 1-second increments, so that means I need at most 180 22-bit indices, which is only 495 bytes, i.e. only about a 0.01% increase! On top of that, I could even compress those 22-bit indices into relative offsets if I want to shrink that by roughly 75%, to about 0.0025%.

Ah, that reminds me why @stereotype keeps trolling my threads, again and again, while continuing to be habitually incorrect.

You are, without doubt, a professional psychologist's wet dream.
sr. member
Activity: 336
Merit: 265
It is important for me to clear up the record on the following because I am preparing to blog on a ToE which ties in everything we've been discussing lately.  Shocked

Re: OT crap from Compact Confidential Transactions for Bitcoin

Edit2: Thanks for the move, totally appropriate.

Hitler Gregory had moved it from the original thread, where it belonged in context, and he renamed the thread with this ad hominem insult of a name, OT crap from Compact Confidential Transactions for Bitcoin.

What is so ironic is that I think I ended up later potentially solving the proof-of-square requirement (required by the flaw Andrew Poelstra aka andytoshi had discovered) for Compact Confidential Transactions (CCT) when I merged that homomorphic encryption with Cryptonote ring signatures, prior to the similar attempt to merge Blockstream's less efficient CT with Cryptonote.

Andrew Poelstra and Gregory Maxwell don't need any defense from me, their records stand on their own, but I'm thinking pointing this out may be helpful to those who aren't familiar with your antics. I'll also point out that most people, especially GMaxwell, have been overwhelmingly patient with you.

https://bitcointalksearch.org/topic/m.5640949

Lol, you linked to where I had been the first one to point out to Gregory Maxwell that CoinJoin can always be jammed with DoS, since one can't blacklist the attacker: the entire point of CoinJoin is to provide mixing so that an attacker can obscure his UTXO history.

You are so careless that you didn't even realize that was my famous shaming of Gregory. Did you miss the post where I declared "checkmate", then Gregory responded with ad hominem, and then by the force of my correct logic he had to STFU?



Lol, again you missed where at the end I showed the math derivation of how to defeat selfish mining, which was the basic idea behind published designs such as GHOST (which I wasn't aware of at the time, and only became aware of when I read Vitalik's blog).

You linked to a guy who is technologically ignorant and is currently a BU shill.



Yes, Gregory did point out an error in my conceptualization of Winternitz, which I had only become aware of hours or days before that, and I admitted it. I even went on to write Winternitz code and become quite expert in it, even incorporating Winternitz into my anti-DDoS conceptualization.

But you failed to cite the other occasions where I put Gregory's foot in his mouth, such as my recent exposé on how Bitmain checkmated Blockstream, and how in 2016 I pointed out his flawed logic and math on why Ogg shouldn't have an index (a format in which he was intimately involved as a co-designer of one of the key compression codecs!):


And how is not having the index any worse than not allowing an index? I fail to see the logic. It seems you are arguing that the receiving end will expect indexes and not be prepared for the case where indexes are not present. But then that is a bug in the receiving end's software. And in that case, there is no assurance that such software would have done the index-less seeking more efficiently under the status quo of not allowing an index. None of this makes sense to me.

Also, I don't understand how you calculate a 20% increase in file size for adding an index. For example, let's take an average 180-second song consuming roughly 5 MB with VBR encoding. Let's assume my users are satisfied with seeking in 1-second increments, so that means I need at most 180 22-bit indices, which is only 495 bytes, i.e. only about a 0.01% increase! On top of that, I could even compress those 22-bit indices into relative offsets if I want to shrink that by roughly 75%, to about 0.0025%.
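For what it's worth, here is a quick worked check of that arithmetic, using only the figures assumed in the post above (a 180-second song, roughly 5 MB of VBR audio, one 22-bit index entry per second). It is just a sanity-check sketch, not code from any codec:

Code:
/* Sanity check of the index-overhead arithmetic quoted above. */
#include <stdio.h>

int main(void)
{
    const double file_bytes   = 5e6;   /* ~5 MB encoded file          */
    const int    seconds      = 180;   /* one index entry per second  */
    const int    bits_per_idx = 22;

    double index_bytes = seconds * bits_per_idx / 8.0;    /* 495 bytes */
    double overhead    = index_bytes / file_bytes * 100;  /* ~0.0099%  */

    printf("index: %.0f bytes = %.4f%% of the file\n", index_bytes, overhead);

    /* Relative-offset compression shrinking the index by ~75%: */
    printf("compressed: %.0f bytes = %.4f%%\n",
           index_bytes * 0.25, overhead * 0.25);
    return 0;
}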

Ah, that reminds me why @stereotype keeps trolling my threads, again and again, while continuing to be habitually incorrect.
sr. member
Activity: 336
Merit: 265
@iadix, PLEASE DO NOT REPLY TO THIS. We will talk about this in the future. I have some coding to do first.

This applies to what we've been discussing:

https://github.com/keean/zenscript/issues/11#issuecomment-287266460
sr. member
Activity: 336
Merit: 265
Re: Who else is tired of this shit?

People (like me) who might be thinking that Bitcoin is a platform that can really transform the whole world, and can really be a big push for human development, may be in for a big disappointment. It is just like any other nice technology or development introduced by man and can also be corrupted along the way, because Bitcoin or cryptocurrency has no power to change human nature. We still bring the same kind of greed and selfishness we have, and this is quite true of the raging debate in Bitcoin right now. While we criticize the world order as it is now and talk about how Bitcoin can change the system, people in the Bitcoin community are sadly manifesting the same sickness and the same traits we want to avoid. This is getting quite ironic, but it is not surprising to me.

The fundamental solution to eliminate greed and power vacuums requires a shift away from fungible money to knowledge trading in Inverse Commons.

I have a plan.


Well, if you are so tired of all of it, I think you need to take a break and take a moment of silence, or stop thinking about it. I think you are pretty stressed out by the same topic over and over, and by the nonsense that has been driving people crazy about bitcoin. I will not make a rant, because all of us are only human and need some rest from these kinds of issues.

I mean I am tired of the greed and power vacuums, not that I am tired of only this specific instance of them. Those negative attributes of fungible money are very noisy and waste so much of humanity's resources.



Any market cap other than bitcoin's is a smoke-and-mirrors illusion; any market cap lower than $10B should not even be considered, because a few big investors could easily manipulate the entire ecosystem, but not with a market cap above $10-15B, as the risks are too high for a few big whales.
So I wouldn't count on alts that much.

full member
Activity: 322
Merit: 151
They're tactical
The only point where I see my approach is different is the GC vs ref counting thing, but I think the two are orthogonal.

I want ref counting to avoid pointer ownership, and it's mostly used for short-lived objects or permanent objects; I don't really make intermediate ones.

And you want GC to have macro management of memory with determined lifetimes.



The thing is, for me most GC languages (like Java or JS) are incredibly inefficient with memory, and GC sucks for general purposes; and for most simple webapps, the cases are not so complex.

The only kind of GC that really makes sense to me is in a video game engine or a game console SDK, because there is a high-level abstraction of objects like levels and BSP trees, and a high-level abstraction of the objects and the execution flow inside the engine.

So it allows for good, automatic management of object lifetimes, because every execution context and the objects it uses are well defined at the application level.

And mostly the whole GC is hard-coded into another SDK, and hard-coded into the game program.



Ultimately, for me GC only makes sense in the context of a higher-level paradigm that defines object lifetimes in relation to explicit lifetime boundaries based on the application's execution.


General-purpose GCs suck, and they just end up not freeing the memory.

Try using any Java or JavaScript app that is a bit intensive with OpenGL, multimedia, or such for a long time, and it will all end up a big mess, unless the user flushes the GC manually.

Android cheats a lot on this with fake multitasking: only the memory for one app really has to be loaded at once, so it's not so bad when you have 40 apps and tabs running; only one is in physical memory at a time, so even if the GC sucks, it doesn't matter too much.




But I would be all for defining high-level application paradigms where it makes sense to have a fast GC: when object lifetime boundaries can be determined easily at compile time, or when it's possible to determine at a certain point of the program that no reference to a certain set of marked objects will be used anymore anywhere, so they can be safely swept.



But other than this, for general purposes it sucks, and for short-lived objects it sucks.



As the way I program the blockchain node doesn't use much in-memory caching, and deals mostly with short-lived objects that come from the network and are either stored or sent back to the network, it's not too hard to manage object lifetimes.



And with the reference counter, it still allows a lot of freedom when passing references to other functions, even for short-lived objects, or when the lifetime doesn't matter or can't really be known at the moment the object is instantiated.



Even if it's not extremely efficient for complex cases, with the lockless internal allocator and memory pool it's still fast to allocate and free memory, and I can easily create specific pools for objects that have determined lifetime boundaries.
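As a rough illustration of that approach, here is a minimal sketch of reference-counted objects drawn from a fixed-size pool. The names are hypothetical, it is single-threaded, and the lockless allocator of the real framework is deliberately omitted:

Code:
/* Sketch: reference-counted objects from a fixed-size pool, so short-lived
   objects can be passed around without a single owner. Hypothetical names;
   not the actual framework code, and no thread safety. */
#include <stdio.h>
#include <string.h>

#define POOL_SIZE 64

typedef struct {
    int    refcount;          /* 0 means the slot is free */
    size_t size;              /* payload size in bytes    */
    unsigned char data[256];
} obj_t;

static obj_t pool[POOL_SIZE];

/* Allocate from the pool and start with one reference. */
static obj_t *obj_new(const void *payload, size_t size)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (pool[i].refcount == 0 && size <= sizeof pool[i].data) {
            pool[i].refcount = 1;
            pool[i].size = size;
            memcpy(pool[i].data, payload, size);
            return &pool[i];
        }
    }
    return NULL;  /* pool exhausted */
}

static obj_t *obj_ref(obj_t *o) { o->refcount++; return o; }

static void obj_release(obj_t *o)
{
    if (--o->refcount == 0)
        o->size = 0;  /* slot simply returns to the pool; no free() needed */
}

/* A consumer holds its own reference for as long as it needs the object. */
static void send_to_network(obj_t *msg)
{
    obj_ref(msg);
    printf("sending %zu bytes\n", msg->size);
    obj_release(msg);
}

int main(void)
{
    obj_t *msg = obj_new("tx-bytes", 8);
    send_to_network(msg);  /* callee takes and drops its own reference */
    obj_release(msg);      /* creator drops the last reference         */
    return 0;
}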
full member
Activity: 322
Merit: 151
They're tactical
I hope you don't disappear. And I hope I can show you something in code asap that makes you interested.

And maybe you will understand why, for me, compile-time checks are mostly irrelevant.

I shouldn't disappear normally, but you never know these days  Shocked

Oki for code example Smiley

Rather I would say compile-time checks are important especially for small details, but we can't possibly type every semantic due to unbounded semantics.

Because every programmer considers the Turing model as some kind of holy bible, but in truth it's really only useful in the paradigm of programs that do scientific computation.

The Turing model is for programs that start with initial parameters, do their loop, and finish with the computed result.

Once you throw in IRQs (USB events, key/mouse input, network events, hard drive events) or threads, it's not a Turing machine anymore. State can change outside of the influence of the program.


Well, to me, ultimately I still stay with the idea that the only things a CPU knows about are registers and memory addresses.

I spent some time, once upon a time, close to the cracking scene, and I'm quite familiar with Intel assembler; for me, if you don't have a security understanding at the C level, you ain't got any security at all.

And most high-level languages will still use a C compiler or interpreter, and still run on a CPU that only knows about registers and memory addresses.

And no VM is perfectly safe; as far as I know, most exploits of Java come more from exploiting bugs in the VM rather than in the program itself, or even from the layers underlying the VM: kernel, libs, etc.

It's the problem with Rust, and why it doesn't really bring that much more security than C for programming operating systems, kernels, or low-level things, as they say.

And why, to me, high-level languages are only false security; and in the context of event-based programming built on plugins/components, the compiler can't check much.

A C++ compiler can't really check for most memory leaks, nor does the Java SDK/VM know how to prevent deadlocks or screw-ups, not even talking about exploits via the VM or the underlying layers down to the kernel.

It just gives the impression that it does, by giving a sort of semantics to express it in the programming language, but in the end it still can't really check that the program will really function as the semantics imply.

This is unbounded nondeterminism.

Most of the program flow is not functions calling each other in a precompiled order, but components integrated on top of an event-processing layer, and the functions will be called by programs outside of the compiler's scope.

It's even more relevant in the context of distributed applications and application servers.

Most of the high-level functions will not be called by the node itself. The parameters with which they will be called are not determined by the code of the node itself, and that code is the only thing the compiler can see.

It looks like an oracle-based Turing machine, but it's not even a problem of deterministic algorithms, because the code path is determined by hardware interrupts or other threads rather than by the algorithm itself.

Even if the algorithm is perfectly predictable in itself, if it's called from an interrupt or uses shared state in a multitasking environment, the execution flow is still not predictable from the code.
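A minimal sketch of that situation (hypothetical names, not the actual node code): the component only registers handlers with an event layer, and which handler runs, when, and with what payload is decided at runtime by whatever feeds the events, not by any call graph the compiler can see.

Code:
/* Sketch: a component registers handlers with an event layer; the callers
   and call order are outside the compiler's view. Hypothetical names. */
#include <stdio.h>
#include <string.h>

typedef void (*handler_fn)(const char *payload);

struct route { const char *event; handler_fn fn; };

static struct route routes[8];
static int n_routes = 0;

static void register_handler(const char *event, handler_fn fn)
{
    if (n_routes < (int)(sizeof routes / sizeof routes[0]))
        routes[n_routes++] = (struct route){ event, fn };
}

/* The event layer: dispatches whatever arrives, in whatever order. */
static void dispatch(const char *event, const char *payload)
{
    for (int i = 0; i < n_routes; i++)
        if (strcmp(routes[i].event, event) == 0)
            routes[i].fn(payload);
}

/* Component code: it never calls these itself, it only registers them. */
static void on_new_tx(const char *payload)   { printf("store tx: %s\n", payload); }
static void on_peer_msg(const char *payload) { printf("reply to peer: %s\n", payload); }

int main(void)
{
    register_handler("tx",   on_new_tx);
    register_handler("peer", on_peer_msg);

    /* In a real node these events arrive from the network or IRQs at
       unpredictable times; here we just simulate two arrivals. */
    dispatch("peer", "ping");
    dispatch("tx",   "deadbeef");
    return 0;
}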




That is only fundamentally incompatible with compile-time (i.e. static) typing in the sense of an exponential explosion of types in type signatures.

It doesn't matter which language is used; it cannot determine most of the relevant code paths at compile time, or check the client side of the interface. It doesn't even really make sense to try to figure that out at compile time.


Yes, for me the trade-off between the explosion of data types and signatures, and the benefit it brings in terms of high-level programming, is in this context not in favor of compile-time type checking Smiley


I don't think anyone was proposing dependent typing.

Anyway @iadix, I must say that I'd better spend some time on developing my stuff and not talking about all this theory. I already talked about all this theory for months with @keean. I don't want to repeat it all again now.

Let me go try to do some coding right now today. And let's see how soon I could show something in code you could respond to.

We can then trade ideas on specific coding improvements, instead of this abstract discussion.

I understand what you want, and I essentially want same. We just have perhaps a different idea about the exact form and priorities but let's see how close we are to agreement once I have something concrete in code to discuss.

Yes, I think I got most of my thinking across anyway, or the part relevant for the moment, and I can get yours from the discussion on GitHub or on this forum Smiley


Most of the code defined will not be called from inside the component, but by other programs/applications, and you know nothing about them: how they might represent data or objects internally, or how they might make function calls passing those objects and data. The compiler can't have a clue about any of it, nor about the high-level code path or the global application logic.

It doesn't have to, and shouldn't. The application does not follow the Turing model anyway, and the high-level features of a C/C++ compiler are at best gadgets; at worst they can force you to complexify the code or add levels of abstraction where it doesn't really matter, because the compiler still can't check what is relevant in the application's execution flow. It can just check the code flow of the abstraction layer, which is mostly glue code and doesn't matter that much in the big picture.

I think you may not be fully versed in typeclasses. They are not OOP classes. @keean and I had long discussions about how these apply to genericity and callbacks.

Please allow me  some time to try to show something. This talk about abstracts or your framework, will just slow me down.


Please two or three sentence replies only. We really need to be more concise right now.

I have read those discussions; I need a deeper look into it. To me it seems similar in certain aspects to the purpose of my system of dynamic object trees, but not exactly in the same context Smiley

But I'm pretty sure this system of typeclasses could be used as input to create those object trees in the script language.

It would not be too hard to implement the low-level aspects of memory/instances/references and concurrent access to the instances with the framework, a large part of the low-level functions already being made to have a runtime to manipulate data and objects created out of this definition.

But it's not exactly done with the same purpose in mind, coming from the same place, or solving exactly the same things =)
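As a rough sketch of how a typeclass-style definition could drive such a dynamic object tree, here is one possible encoding in C as an explicit dictionary of function pointers. The names are hypothetical; this is neither the zenscript design nor the actual framework code, just an illustration of the shape of the idea:

Code:
/* Sketch: a "typeclass" as an explicit dictionary of function pointers,
   which generic tree-walking code receives alongside each payload. */
#include <stdio.h>

/* The "typeclass": operations any node payload must provide. */
typedef struct {
    void (*print)(const void *self);
    int  (*size)(const void *self);
} NodeOps;

/* Two concrete payload types with their instances. */
typedef struct { int value; } IntBox;
static void int_print(const void *self) { printf("%d", ((const IntBox *)self)->value); }
static int  int_size (const void *self) { (void)self; return (int)sizeof(IntBox); }
static const NodeOps int_ops = { int_print, int_size };

typedef struct { const char *s; } StrBox;
static void str_print(const void *self) { printf("\"%s\"", ((const StrBox *)self)->s); }
static int  str_size (const void *self) { (void)self; return (int)sizeof(StrBox); }
static const NodeOps str_ops = { str_print, str_size };

/* A dynamic object-tree node: payload + its dictionary + children. */
typedef struct Node {
    const void    *payload;
    const NodeOps *ops;
    struct Node   *child, *sibling;
} Node;

/* Generic walk: works for any payload that has a NodeOps instance. */
static void dump(const Node *n, int depth)
{
    for (; n; n = n->sibling) {
        printf("%*s", depth * 2, "");
        n->ops->print(n->payload);
        printf(" (%d bytes)\n", n->ops->size(n->payload));
        dump(n->child, depth + 1);
    }
}

int main(void)
{
    IntBox i = { 42 };
    StrBox s = { "hello" };
    Node leaf = { &s, &str_ops, NULL, NULL };
    Node root = { &i, &int_ops, &leaf, NULL };
    dump(&root, 0);
    return 0;
}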


