
Topic: John Nash created bitcoin - page 14. (Read 22259 times)

hero member
Activity: 770
Merit: 629
April 11, 2017, 06:27:05 AM
==> Something went wrong when I tried to post, so here it is, hopefully, again...


Well, this is the kind of cryptographic "common sense" that doesn't make sense.  As I said before, one has to assume, in a cryptographic design, that the cryptographic primitives are at about the security level that is currently known - for the simple reason that one cannot predict the deterioration of that security level by future cryptanalysis.  For all one knows, the deterioration can be total.

You're either not comprehending or being disingenuous. Do you not know that Shor's algorithm is known to break ECC but not hash functions? I had already told you, as follows, that the hash offers greater security against the prospect of quantum computing. It is a prudent precaution.


I answered that already.  If you think that your system has to work when a component is totally broken, you don't use that component.  My answer to your rebuttal was already written out above, because I expected this rebuttal; but you're missing the point I'm making:

if 160 bit ECC is not secure in the long term, then 128 bit ECC is not secure in the REALLY REALLY short term. So this argument backfires.
Note that I'm totally aware that any public key cryptographic system is *potentially* easier to attack cryptanalytically than the symmetric-key mixers that are used in things like hash functions.  But that doesn't solve the issue; on the contrary.

Again: if you need a symmetric-key hash algorithm to keep your 320 bit public key (160 bit security) safe from a quantum computer (I don't believe in them, but that's another matter), then a 128 bit security public key is going to be cracked before it even gets included in the block chain.

If a 160 bit secure public key cannot survive 50 years of attack, a 128 bit secure public key cannot withstand a second of attack.

This is what I tried to explain to you: if you think that ECC is going to be fundamentally broken by quantum computers, DON'T USE IT AT ALL.  And if you use it, assume that a certain security level is going to remain secure, and apply it.  But don't "protect" the broken system with another system; because you are going to use the broken system at a certain point, and then you're done, and that's what is happening here.

Again, if 160 bit security ECC won't do where bitcoin hides it behind a hash, then 128 bit security won't do out in the open, in the very short term where bitcoin needs it.

Quote
If you argue that it doesn't matter if we have the hashes when ECC is broken by quantum computing, because the transactions can be intercepted and cracked by the attacker before they are confirmed in the network, you would not be thinking clearly. Quantum computing would, in its inception stages, likely only be able to break long-term security, not short-term security. So there would be a period in which to transition, as I already stated in the above quote from my prior post.

No, you don't seem to understand that if you can crack an ECC of 160 bits security in the long term (say, 50 years of computing), you can crack an ECC of 128 bits security in a single second *with the same equipment*.  There's a factor of 2^32 = 4 billion between them.  Earlier I was only assuming 16 bits, which is why I arrived at an afternoon; but between 160 bits and 128 bits there are 32 bits.

Look: 50 years = 50 * 365 * 24 * 3600 = 1.57 billion seconds.  If your quantum computer can crack a 160 bit security ECC in 50 years, it can do that 4 billion times faster for a 128 bit ECC, because the search space is 4 billion times smaller.  ==> 0.4 seconds.
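
A quick sanity check of that arithmetic in Python (assuming only the premise above, that attack time scales with the effective search space 2^(security bits)):

Code:
# 50 years of attack at 160-bit security vs. the same attack at 128 bits.
fifty_years_s = 50 * 365 * 24 * 3600   # 1_576_800_000 s, ~1.58e9
factor = 2 ** (160 - 128)              # 2^32 ~ 4.29e9: the search-space ratio
print(fifty_years_s / factor)          # ~0.37 s, the "0.4 seconds" above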

This is why I tell you that it doesn't make any sense to have (strong hash) 160 bit security in the long term, and (weak ECC) 128 bit security in the short term.  

Quote
But your analogy does not apply, because Shor's algorithm (a form of cryptanalysis) is already known! It is not a future unknown.

My point is that if you assume this will work one day, then you shouldn't use ECC.  And I just showed you why.

Quote
Also (and this is a minor point which isn't really necessary for my slamdunk) you are conflating the breakage of discrete logarithm math theoretic security with the security of permutation algorithms of hash functions. I repeat the distinction between the two which you have failed to assimilate:

I'm perfectly aware of that.  But that's not the point I am making.  The 160 bit hash security is USELESS if the 128 bit ECC is not secure.  I'm perfectly aware that ECC has a higher chance of being broken than a hash function.  I don't believe in quantum computers, though; I have my own reasons to think so, on which I will not digress.  But that's not even the point.  If you THINK that they will exist, simply don't use ECC (nor any known form of discrete log public key crypto), because it is TOTALLY BROKEN in that case.

There's no point in having 300 years of a secure hashed public key on the chain, which is then broken the second after you propagate your transaction and the node you send it to modifies it.  

You seem to be focussed on the long term security, but what fails utterly in the consistency of the system is the *short term security* under the assumptions made.  (And yes, I also made that error at the beginning of this discussion.)

It is not the 160 bits that is insecure: it is the 128 bits after propagating one's signature and public key IF ever one makes the assumption that 160 bits ECC is not long-term secure.

There's no point in discussing the rest as long as you haven't seen my point here, so I state it once more:

If ever you have a doubt about the security of 160 bit ECC in the long term, then 128 bit ECC security is gone essentially IMMEDIATELY upon broadcasting your public key and signature, making the entire concept broken.  This is why one should never use a crypto primitive about whose security one has doubts.
sr. member
Activity: 336
Merit: 265
April 11, 2017, 06:14:25 AM
Instead of us discussing it here, let Nash come up with some genuine proof. We all know Satoshi created bitcoin, and Satoshi certainly knows how to prove his identity.

Umm. John Nash is (purportedly) dead.
legendary
Activity: 854
Merit: 1000
April 11, 2017, 06:12:29 AM
Instead of us discussing it here, let Nash come up with some genuine proof. We all know Satoshi created bitcoin, and Satoshi certainly knows how to prove his identity.
hero member
Activity: 770
Merit: 629
April 11, 2017, 05:53:06 AM
@dinofelis you are so myopic. Give me a moment to compose my rebuttal.

I'm only capable of understanding logical arguments, unfortunately :)
sr. member
Activity: 336
Merit: 265
April 11, 2017, 05:52:04 AM
Well, this is the kind of cryptographic "common sense" that doesn't make sense.  As I said before, one has to assume, in a cryptographic design, that the cryptographic primitives are about at the security level that is known - for the simple reason that one cannot predict the deterioration of its security level by future cryptanalysis.  As far as one goes, it can be total.

You're either not comprehending or being disingenuous. Do you not know that Shor's algorithm is known to theoretically break ECC but not hash functions? I had already told you, as follows, that the hash offers greater security against the prospect of quantum computing. It is a prudent precaution.

Another reason (in addition to the compression of the UTXO set) to hash the values on the block chain is that when the use of a quantum computer is detected, we have some protection against chaos and can map out a strategy for burning the values over to a new design securely. Hashes are much more likely to be quantum computing resistant.

If you argue that it doesn't matter if we have the hashes when ECC is broken by quantum computing, because the transactions can be intercepted and cracked by the attacker before they are confirmed in the network, you would not be thinking clearly. Quantum computing would, at its inception (nascent) stages, likely only be able to break long-term security, not short-term security. So there would be a period in which to transition, as I already stated in the above quote from my prior post.

Please be more thoughtful, because I am losing precious time on something I am confident you are smart enough to have figured out yourself, if you'd remove your highly irrational confirmation biases.

Let us take a very simplistic example to illustrate what I mean (it is simplistic, for illustrative purposes; don't think I'm a cretin like that :) ).  Suppose that we take as our group the additive group modulo a prime number, and that we don't know that multiplication forms a field with it.  We could have the "discrete log" problem in this group, where "adding together n times" the generator g, with n a random number between 1 and p-1, is the "hard problem to solve", exactly as we do in an elliptic curve group.  Suppose that we take p a 2048 bit number.  Now THAT's a big group, isn't it?  Alas, the Euclidean division algorithm solves my "discrete logarithm" problem as fast as I can compute the signature!

2048, 4096, 10^9 bit key, it doesn't matter: the difficulty of cracking grows polynomially with the difficulty of using it! (Here, even linearly!)

So the day that one finds the "Euclidean division" of an ECC group, it is COMPLETELY BROKEN.  The time it takes a user to calculate his signature is about the time it takes an attacker to calculate the secret key from the public key.  As such, the ECC has become a simple MAC, and it doesn't even last 3 seconds once the key is broadcast.
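
To make the toy example concrete, here is a minimal Python sketch; the modular inverse pow(g, -1, p) is the Euclidean-algorithm step described above, and a 2203-bit Mersenne prime stands in for the 2048-bit p:

Code:
# "Discrete log" in the ADDITIVE group mod p: public key y = n*g mod p.
# The secret n is recovered instantly by modular inversion (extended
# Euclid), no matter how many bits p has.
import secrets

p = 2**2203 - 1                      # a 2203-bit Mersenne prime
g = 5                                # public "generator"
n = secrets.randbelow(p - 1) + 1     # secret key in [1, p-1]
y = n * g % p                        # public key

recovered = y * pow(g, -1, p) % p    # pow(g, -1, p): g^-1 mod p (Python 3.8+)
assert recovered == n                # cracked as fast as a signature is made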

You are describing future cryptanalytic breakage of the math-theoretic security of the intractability of the discrete logarithm over certain fields.

But your analogy does not apply, because Shor's algorithm (a form of cryptanalysis) is already known! It is not a future unknown.

Also (and this is a minor point which isn't really necessary for my slamdunk) you are conflating the breakage of discrete logarithm math theoretic security with the security of permutation algorithms of hash functions. I repeat the distinction between the two which you have failed to assimilate:

You are uninformed. Cryptanalytic breaks of hash functions typically lower the security in bits, but don't lower it to 0 bits.

As I had originally pointed out, you are conflating two entirely different systems of security, and each can benefit orthogonally from increased bit lengths when we are not concerned about an intractable brute-force enumeration attack but instead with math-theoretic cryptanalytic breakage.

Thus...

--> if we assume that ECC will be broken one day, bitcoin's crypto scheme is IN ANY CASE not usable.  This is why the "common sense" in cryptography of "protecting primitive crypto building blocks because we cannot assume they are secure" is just as much a no-go as the other common sense of security by obscurity.  It sounds logical, but it is a fallacy.  If you think ECC will be broken, don't use it.  And if you use it, accept its security level as of today.  Because you cannot foresee HOW BADLY it will be broken, and if it is totally broken, you are using, well, broken crypto.

Not only are you failing to assimilate the fact that Shor's breakage is already known (not a future unknowable, as you are arguing), which is a sufficient slamdunk on why you are incorrect, but you are also claiming that hash functions can typically be entirely broken in one swoop, which afaik is not the case (and I studied the cryptanalysis history of past SHA submissions from round 1 to the final rounds).

Now, what is the reason we can allow LOWER security for the exposed public key than for the long-term address in an output?  The reason is a priori (and I also fell into that trap - as I told you before, my reason for these discussions is only to improve my own understanding, and here it helped) that the public key only needs to secure things between broadcasting and inclusion in the chain.  But as you point out, that can take longer than 10 minutes if blocks are full.  This can be a matter of hours.

Now, if we are on a security requirement of days or weeks, then there's essentially not much difference between days or weeks and centuries.  The factor between them is 10000 or so.  That's 16 bits.  A scheme that is secure for days or weeks only needs 16 bits of extra security to be secure for centuries ====>  there is no reason to nitpick over 16 bits if we are talking about 128 bits or so.
There is no reason to introduce "short term security" if this is only 16 bits less than the long term security level.
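
Spelled out with stand-in numbers (one week versus two centuries; the exact choices don't matter):

Code:
import math

short_term_s = 7 * 24 * 3600            # "days or weeks": one week
long_term_s = 200 * 365 * 24 * 3600     # "centuries": two hundred years
ratio = long_term_s / short_term_s      # ~10_430, the "10000 or so"
print(math.log2(ratio))                 # ~13.3 bits -> 16 bits is ample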

You have an incorrect conceptualization. The point of long-term security is not the difference in the time it takes to crack with a given level of technology, but rather that over the long term we can't know when the moment comes that cracking has become sufficiently fast. The Bitcoin UTXO from 8 years ago that Satoshi has not spent could have been under attack for the past 8 years. By having the hash for long-term security, we force all attacks to begin only when the UTXO are spent. This enables us to restrict the damage to a very small number of transactions, and the community will become alarmed and take corrective action.

Yes you are learning from me (and sometimes I learn from you also). I was quite busy the past 4 years learning this stuff.

But I truly hope we can wrap this up, because I have more important work I want to do today than debating with you, unless you're sure you have an important point to make. But please try hard to think, because I have thought deeply about all this stuff already. You are not likely to find a mistake in my past analysis unless it is based on math above the level I'm knowledgeable in.

You are not accurately accounting for the savings in the portion of the UTXO set that must be stored in DRAM (for performance) versus what can be put on SSDs. Without that in DRAM, the propagation time for blocks would be horrendous and the orphan rate would skyrocket (because nodes can't propagate block solutions until they re-validate all transactions, due to the anonymity of who produced the PoW).

Of course not.  You don't have to keep all UTXO in DRAM.  You can use much smarter database lookup tables.  If the idea is that a node has to keep all UTXO in RAM, then bitcoin will be dead soon.

Satoshi just nailed you to the cross.  :P

Nope, Gavin Andresen is talking bullshit and confuses cryptographic hashes and lookup table hashes.

http://qntra.net/2015/05/gavin-backs-off-blocksize-scapegoats-memory-utxo-set/

If you need to keep a LOOKUP HASH of the UTXO, then that doesn't need cryptographic security.  There's no point in having 160 bit hashes if you can only keep a few GB of them in memory!  160 bit lookup hashes mean you expect on the order of 2^160 UTXO to be indexed.  Now try to fit 2^160 things in a few GB of RAM ;)

You only need about a 48 bit hash of the UTXO to keep a database in RAM.  That doesn't need to be cryptographically secure.  It is completely crazy to keep 160 bit hashes as LOOKUP HASHES in a database hash table!  And there are smarter ways to design lookup tables in databases than keeping a long hash table in RAM; ask Google :)

Of course I am knowledgeable about Shannon information content (entropy) and also about the efficiency of various sparse-space lookup algorithms (in fact I did considerable research on this in 2015, which is why I am prepared to refute you now). The problem is DDoS resistance. The collision resistance of 48 bits is too low (especially when you relate it to the equations for a sparse-space lookup algorithm, and the relationship between how full it is and the probability of collisions), so an attacker can DDoS the network with invalid addresses, which forces you to go off to SSD storage and brings your system down to a crawl.
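
To put rough numbers on the collision question (the 50 million UTXO figure is an arbitrary illustration, not a measured count):

Code:
# Expected colliding pairs among n random b-bit lookup hashes
# (birthday bound): E ~ n*(n-1) / (2 * 2^b).
def expected_collisions(n: int, b: int) -> float:
    return n * (n - 1) / (2 * 2**b)

print(expected_collisions(50_000_000, 48))    # ~4.4 pairs: collisions happen
print(expected_collisions(50_000_000, 64))    # ~7e-5: practically none
print(expected_collisions(50_000_000, 160))   # ~0: cryptographic overkill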

And don't give me some BS about using PoW to rate limit, because I already refuted @Gmaxwell when he offered that to me as a solution. Just remember that PoW is a vulnerability to botnets at the nascent stage and then later it is a winner-take-all power vacuum. Satoshi had thought out all of these issues.

I'm not even putting this on Satoshi's back.  I claim he made enough errors for him not to be a math genius, but he is a smart guy nevertheless.  I can criticise him because of hindsight; I'm absolutely not claiming to be at his level.  But I claim that he's not the type of math genius that a guy like Nash was.  This is the kind of argument I'm trying to build.

But SUCH stupid errors, I don't even think Satoshi is capable of.  It is Gavin Andresen who is talking bullshit to politically limit the block size.  If it were true that RAM limits the amount of UTXO in a hard way, then bitcoin would be dead from the start.  But it isn't.

I am sorry, friend, but the errors continue to be all yours. This is the 3rd time I've replied and obliterated your arguments (pointed out the errors in your thought processes). I went down all the same thought processes you did, but have since realized that indeed Satoshi was a genius.

Maybe you will finally pop out of your pompous confirmation-bias delusion and start to realize it too. I hope you aren't going to make a total asshat of yourself as Jorge Stolfi hath done. I know you are smart enough to stop insisting once you've become informed. You are informed now.

The first piece in bold is the network configuration we talked about earlier: the backbone of miner nodes, with all others directly connecting to it - no more P2P network. (It has nothing to do with the current subject, but I thought it was interesting to note that Satoshi already conceived of miner centralization from the start.)

The second part is indeed considering bitcoin scaling on chain to VISA-like transaction rates, with the chain growing at 100 GB per day.  He's absolutely not considering a P2P network here, but a "central backbone and clients" system.

I quoted that back in 2013 when making my points about Bitcoin being designed by the global elite.

The point, however, is that most certainly he doesn't think of any RAM limits on cryptographic hashes, and hence on the maximum amount of existing UTXO permissible.

No, you don't understand.

The DRAM issue is for the early decentralized years of Bitcoin's adoption, so that the illusion of the idealism could be sustained long enough for Bitcoin to become unstoppable.

Given that Bitcoin will be a settlement layer between power brokers, the UTXO set will not be a problem, and centralized mining will also mean that DRAM cost is not a problem.

Bitcoin has already reached the stage of being unstoppable, which will be proven when it can't be mutated. Not by anyone. Not even national governments. The NWO will be another matter, however.
full member
Activity: 322
Merit: 151
They're tactical
April 11, 2017, 04:57:07 AM
WOAH...guys....


guys... guys...

you're obviously overlooking the OBVIOUS here:

John Nash was AMERICAN....mkay???

Satoshi Nakamoto was JAPANESE.....

DUHHHHHHH.....

so he's not Satoshi, stupid.

unbelievable.




I asked my girlfriend yesterday; she is Indonesian, works at the Chinese embassy, and knows a bit of Japanese, I think.


What she told me is that satoshi is a word related to the Taoist concept of the balance of opposites (male/female, yin/yang), etc., and nakamoto means 'the central source of'.

http://www.ancestry.com/name-origin?surname=nakamoto

Nakamoto Name Meaning Japanese: ‘central origin’ or ‘(one who lives) in the middle’; found mostly in the Ryukyu islands.

http://www.behindthename.com/name/satoshi/submitted

Given Name SATOSHI
GENDER: Feminine & Masculine
USAGE: Japanese
PRONOUNCED: sa-TOH-shee
OTHER FORMS: 覚 Japanese
CONTRIBUTOR: Spirited Sarah on 3/12/2011
LAST EDITOR: Spirited Sarah on 5/11/2011   [revision history]
Meaning & History
Means "wisdom" or "sense" in Japanese


So literally satoshi nakamoto would mean 'the central source of balance'. (Because wisdom in Taoism is related to the duality of symmetric opposites.)

Because with kanji, a name can also be translated into a meaning.

And I'm pretty sure it's not hard to find a connection between Nash's theories of equilibrium among opposing agents and the Taoist philosophy of a harmonious whole arising from opposites and central balance.

The idea being: it's not because you name a coin 'Archimedes' or 'Descartes' that this is necessarily your name :)

It looks more like a philosophical concept based on Tao and/or Nash's game theories, surely involving people from different fields.

It's almost certain to me that it involved at least two people, with at least mathematics and IT backgrounds.

Because getting to the point of implementing a game theory model on a P2P network like this necessarily needs two people. And it's obvious the code itself doesn't involve game theory math. But that in itself doesn't mean the values and parameters were not computed from a math model derived from game theory.


And I'm becoming more and more convinced that there might be a game theory model directly involved with the code of bitcoin.

The two most important values for a node on PoW are basically the block reward and the mining difficulty; those are the two things that matter, and they are exactly the parameters you would find in a game theory model (risk = difficulty = 1 / rate of (good nonce)).

And it doesn't look like something that would be very hard to express in terms of Nash's game theories.

hero member
Activity: 770
Merit: 629
April 11, 2017, 04:00:26 AM
Yet another "strange aspect" of the bitcoin protocol is the following: the fact that we use bloody 256 bit transaction hashes.  Transaction hashes have limited cryptographic validity.  They are essentially "lookup" hashes.  Maybe I'm missing one or another security point, but I don't think so.  Being able to "fake a transaction hash" doesn't seem to win any advantage, because in the end, what counts is the signature as the cryptographic "unlocking".  So inasmuch as the transaction hash is just a lookup hash, 256 bits is totally absurd.  There cannot ever be 2^256 transactions on the block chain.  64 bits would even be overkill.  At VISA rate, with say 10^11 transactions per year (that is, 2^36 or so), we'd have thousands of years of spending with 48 bits, and with 64 bits, till the end of the universe or close.
If 64 bits is more than enough, we are wasting about 200 bits per spending for nothing.  If there was much ado about "compressing" (erroneously) a 256 bit key into 160 bits to win 100 bits, then wasting 200 bits sounds cavalier.

But even if there is some cryptographic security role for transaction hashes (which escapes me entirely), there's absolutely no reason to require much higher security levels for the transaction hash than for the key security!  The 200 bits of extra room could be better spent on ECC security, or on key hash security, rather than on this silly transaction hash "security" which doesn't serve much of a purpose.

If bits are expensive, transaction hashes of 256 bits are ridiculous.

The 32 bit word indicating the "entry" in the transaction is also somewhat crazy.  Transactions won't contain 4 billion outputs.

So there's a lot of "spilled bits" which are totally useless in the specification of a transaction input.

===> EDIT: I should multiply my lookup hash lengths by two to avoid the birthday paradox, of course.  But that doesn't change the principle.
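
Putting numbers on that (10^11 transactions per year is the rough VISA-scale figure used above):

Code:
import math

tx_per_year = 10**11                  # rough VISA-scale figure from above
print(2**48 / tx_per_year)            # ~2815 years before 48-bit IDs run out

# The EDIT's birthday correction: to keep collisions unlikely among n
# items, a lookup hash needs roughly 2*log2(n) bits.
n = tx_per_year * 1000                # 10^14 transactions: ~a millennium
print(2 * math.log2(n))               # ~93 bits, still far below 256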
hero member
Activity: 770
Merit: 629
April 11, 2017, 03:36:00 AM
There are seemingly only two valid reasons to hash the public key:

1) you think that the public key scheme is vulnerable in the long term
2) you want to separate long term and short term security.

I already told you that if the public key were exposed for a longer (indefinite!) time, you would need to increase the security of the public key.  But to what level, given that quantum computing may be coming?

And 256-bit was about the upper limit of what was available and well accepted in 2008.

Well, this is the kind of cryptographic "common sense" that doesn't make sense.  As I said before, one has to assume, in a cryptographic design, that the cryptographic primitives are at about the security level that is currently known - for the simple reason that one cannot predict the deterioration of that security level by future cryptanalysis.  For all one knows, the deterioration can be total.

Let us take a very simplistic example to illustrate what I mean (it is simplistic, for illustrative purposes; don't think I'm a cretin like that :) ).  Suppose that we take as our group the additive group modulo a prime number, and that we don't know that multiplication forms a field with it.  We could have the "discrete log" problem in this group, where "adding together n times" the generator g, with n a random number between 1 and p-1, is the "hard problem to solve", exactly as we do in an elliptic curve group.  Suppose that we take p a 2048 bit number.  Now THAT's a big group, isn't it?  Alas, the Euclidean division algorithm solves my "discrete logarithm" problem as fast as I can compute the signature!

2048, 4096, 10^9 bit key, it doesn't matter: the difficulty of cracking grows polynomially with the difficulty of using it! (Here, even linearly!)

So the day that one finds the "Euclidean division" of an ECC group, it is COMPLETELY BROKEN.  The time it takes a user to calculate his signature is about the time it takes an attacker to calculate the secret key from the public key.  As such, the ECC has become a simple MAC, and it doesn't even last 3 seconds once the key is broadcast.

--> if we assume that ECC will be broken one day, bitcoin's crypto scheme is IN ANY CASE not usable.  This is why the "common sense" in cryptography of "protecting primitive crypto building blocks because we cannot assume they are secure" is just as much a no-go as the other common sense of security by obscurity.  It sounds logical, but it is a fallacy.  If you think ECC will be broken, don't use it.  And if you use it, accept its security level as of today.  Because you cannot foresee HOW BADLY it will be broken, and if it is totally broken, you are using, well, broken crypto.

Now, what is the reason we can allow LOWER security for the exposed public key than for the long-term address in an output?  The reason is a priori (and I also fell into that trap - as I told you before, my reason for these discussions is only to improve my own understanding, and here it helped) that the public key only needs to secure things between broadcasting and inclusion in the chain.  But as you point out, that can take longer than 10 minutes if blocks are full.  This can be a matter of hours.  Also, in micro-payment channel spending, you have to expose your public key to the counterparty for the time the channel is open.

Now, if we are on a security requirement of days or weeks, then there's essentially not much difference between days or weeks and centuries.  The factor between them is 10000 or so.  That's 16 bits.  A scheme that is secure for days or weeks only needs 16 bits of extra security to be secure for centuries ====>  there is no reason to nitpick over 16 bits if we are talking about 128 bits or so.
There is no reason to introduce "short term security" if this is only 16 bits less than the long term security level.

In other words, if you are afraid that 160 bits isn't good enough in ECC for the long term, well, then 128 bits (as it is now) is not good enough either in the short term.  If you think a "quantum computer" can crack a 320 bit ECC key in 50 years, then that quantum computer will be able to crack a 256 bit ECC key in less than a day.

So you may very well protect an address with an unbreakable 160 bit hash that your quantum computer breaks its teeth on for 50 years; the day you use that address in a micro-payment channel, the key is cracked by the evening.

Quote
You are not accurately accounting for the savings in the portion of the UTXO set that must be stored in DRAM (for performance) versus what can be put on SSDs. Without that in DRAM, the propagation time for blocks would be horrendous and the orphan rate would skyrocket (because nodes can't propagate block solutions until they re-validate all transactions, due to the anonymity of who produced the PoW).

Of course not.  You don't have to keep all UTXO in DRAM.  You can use much smarter database lookup tables.  If the idea is that a node has to keep all UTXO in RAM, then bitcoin will be dead soon.

Quote
Satoshi just nailed you to the cross.  :P

Nope, Gavin Andresen is talking bullshit and confuses cryptographic hashes and lookup table hashes.

http://qntra.net/2015/05/gavin-backs-off-blocksize-scapegoats-memory-utxo-set/

If you need to keep a LOOKUP HASH of the UTXO, then that doesn't need cryptographic security.  There's no point in having 160 bit hashes if you can only keep a few GB of them in memory!  160 bit lookup hashes mean you expect on the order of 2^160 UTXO to be indexed.  Now try to fit 2^160 things in a few GB of RAM ;)

You only need about a 48 bit hash of the UTXO to keep a database in RAM.  That doesn't need to be cryptographically secure.  It is completely crazy to keep 160 bit hashes as LOOKUP HASHES in a database hash table!  And there are smarter ways to design lookup tables in databases than keeping a long hash table in RAM; ask Google :)

I'm not even putting this on Satoshi's back.  I claim he made enough errors for him not to be a math genius, but he is a smart guy nevertheless.  I can criticise him because of hindsight; I'm absolutely not claiming to be at his level.  But I claim that he's not the type of math genius that a guy like Nash was.  This is the kind of argument I'm trying to build.

But SUCH stupid errors, I don't even think Satoshi is capable of.  It is Gavin Andresen who is talking bullshit to politically limit the block size.  If it were true that RAM limits the amount of UTXO in a hard way, then bitcoin would be dead from the start.  But it isn't.

This is a very interesting read BTW:

http://satoshi.nakamotoinstitute.org/emails/cryptography/2/

Quote
>Satoshi Nakamoto wrote:
>> I've been working on a new electronic cash system that's fully
>> peer-to-peer, with no trusted third party.
>>
>> The paper is available at:
>> http://www.bitcoin.org/bitcoin.pdf
>
>We very, very much need such a system, but the way I understand your
>proposal, it does not seem to scale to the required size.
>
>For transferable proof of work tokens to have value, they must have
>monetary value.  To have monetary value, they must be transferred within
>a very large network - for example a file trading network akin to
>bittorrent.
>
>To detect and reject a double spending event in a timely manner, one
>must have most past transactions of the coins in the transaction, which,
>  naively implemented, requires each peer to have most past
>transactions, or most past transactions that occurred recently. If
>hundreds of millions of people are doing transactions, that is a lot of
>bandwidth - each must know all, or a substantial part thereof.
>


Long before the network gets anywhere near as large as that, it would be safe
for users to use Simplified Payment Verification (section 8) to check for
double spending, which only requires having the chain of block headers, or
about 12KB per day. Only people trying to create new coins would need to run
network nodes. At first, most users would run network nodes, but as the
network grows beyond a certain point, it would be left more and more to
specialists with server farms of specialized hardware.
A server farm would
only need to have one node on the network and the rest of the LAN connects with
that one node.


The bandwidth might not be as prohibitive as you think. A typical transaction
would be about 400 bytes (ECC is nicely compact). Each transaction has to be
broadcast twice, so let's say 1KB per transaction. Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.


If the network were to get that big, it would take several years, and by then,
sending 2 HD movies over the Internet would probably not seem like a big deal.

Satoshi Nakamoto

---------------------------------------------------------------------

The first piece in bold is the network configuration we talked about earlier: the backbone of miner nodes, with all others directly connecting to it - no more P2P network. (It has nothing to do with the current subject, but I thought it was interesting to note that Satoshi already conceived of miner centralization from the start.)

The second part is indeed considering bitcoin scaling on chain to VISA-like transaction rates, with the chain growing at 100 GB per day.  He's absolutely not considering a P2P network here, but a "central backbone and clients" system.

The point, however, is that most certainly he doesn't think of any RAM limits on cryptographic hashes, and hence on the maximum amount of existing UTXO permissible.

sr. member
Activity: 336
Merit: 265
April 11, 2017, 01:22:09 AM
There are seemingly only two valid reasons to hash the public key:

1) you think that the public key scheme is vulnerable in the long term
2) you want to separate long term and short term security.

I already told you that if the public key were exposed for a longer (indefinite!) time, you would need to increase the security of the public key.  But to what level, given that quantum computing may be coming?

And 256-bit was about the upper limit of what was available and well accepted in 2008.

I remember seeing that 256-bit was expected to be the recommended security level for ECC for only another decade or so.

https://www.keylength.com/en/3/

https://www.keylength.com/en/compare/

I will now show you why there's some craziness in this scheme:
Take Satoshi's system: L = 160 bits, S = 128 bits, which makes his B_hash(160,128) = 928.

Suppose that I would have taken L = 160 bits overall: B_nohash(160) = 960.

So I would only have used 32 bits more, on a total of about 1 K, to have OVERALL SECURITY of 160 bits.

The hashing wins me 3% of room, to decrease the ECC security from 160 to 128 bits.

You are not accurately accounting for the savings in the portion of the UTXO set that must be stored in DRAM (for performance) versus what can be put on SSDs. Without that in DRAM, the propagation time for blocks would be horrendous and the orphan rate would skyrocket (because nodes can't propagate block solutions until they re-validate all transactions, due to the anonymity of who produced the PoW). 320-bit public keys (i.e. 160-bit security) in the UTXO set would require 100% more (double the) DRAM.

Satoshi just nailed you to the cross.  :P

And if there is a suspicion on that fragility, it is very wasteful to take a useless 256 bit key which would in any case easily be cracked by assumption.

You are not assimilating all the information I already provided to you.

The public keys can be hacked off the users' wallets. So we need more than trivial security there for the ECC public key cryptography.

Another reason (in addition to the compression of the UTXO set) to hash the values on the block chain is that when the use of a quantum computer is detected, we have some protection against chaos and can map out a strategy for burning the values over to a new design securely. Hashes are much more likely to be quantum computing resistant.

Satoshi's cryptography choices are so clever and abstruse that even a very smart person such as yourself takes a long time to finally grasp his genius. That indicates how much of a genius Satoshi is. When we find that PhDs (college professors?) are offended by the notion of Satoshi being a genius, and such PhDs are committing Dunning-Kruger blunders when analyzing Satoshi's work, then we have a very strong indication that Satoshi's IQ was in the 180+ range. For example, when listening to Freeman Dyson or John Nash (180+ IQ for both) speak, the unsophisticated observer (not you @dinofelis) might initially conclude they are not super intelligent. But that is simply because the observer is incapable of perceiving the depth of complexity being communicated so concisely. I have had public+private discussions with college professor Jorge Stolfi on Reddit in 2016 and generally thought him to be intelligent and mathematical, but I was shocked to read his myopic presentation to the SEC recently concerning the decision on the approval of the ETF.

If we appreciate how rare 180 IQ is, then we understand that the set of people who could have been Satoshi is quite small.

P.S. readers, I don't know who @dinofelis is. And I would guess he is probably more formally trained than I am in math and physics and other STEM fields. I have some areas of programming-level expertise that he may not have (not sure about that though). My main talent is that I am a highly creative, non-conventional thinker, similar to John Nash but lacking the full breadth of Nash's mathematical genius. I was nowhere near a teenage math genius, but this could be because I was so into athletics and also wasn't even exposed to learning materials until about 8th grade (my parents had me in inner-city public schools, changing schools every 6 months). I did ace Calculus in a night session at a college while I was still in high school. My SAT was high in math but not a perfect score (although I had a hangover and was still slightly drunk when I took it). I wasn't interested in studying for standardized tests, and I never showed up for my math classes in high school, yet still aced the exams. In short, I excelled at the things I was motivated to excel at, but was more interested in my intellectual and athletic hobbies than in conforming to the structured curriculum. At the university, I hated to attend lectures and would learn independently, also doing my own research in the library on things I was interested in. And spending the rest of my time partying and playing sports.
hero member
Activity: 770
Merit: 629
April 11, 2017, 01:10:58 AM
The point of using 160 bits is compression of block size. What is the #1 issue of Bitcoin right now? Block size.

The 160 bits is more than the 128-bit security level of the 256-bit ECC.

It is a perfectly balanced and clever choice.

A priori, by hashing the public key you don't win space on the chain, you LOSE it.  The reason is that you should consider an input and an output together.  A UTXO by itself is worthless if it is never spent.  So you have to consider both together.

Now, if in the output you HASH the public key, you will have to publish that key openly in the corresponding future input, because otherwise nobody will be able to check the signature.  If, on the other hand, you publish the public key in the output directly, without a hash, you don't have to repeat it in the corresponding input; you only have to specify the signature, as everyone can go and get the public key to check it.

--> hashing the public key adds a bit load equal to the hash length.

There are seemingly only two valid reasons to hash the public key:

1) you think that the public key scheme is vulnerable in the long term
2) you want to separate long term and short term security.

It is true that hashing a public key of 256 bits (which has a security of 128 bits) INCREASES its security to the number of hashed bits, if that number is between 128 and 256.  So it is true that a key hashed to 160 bits is 160 bits secure, while the key itself is only 128 bits secure.  This 160 bit security is maintained until the key is published in a transaction.

However, let us make a small calculation.  Consider H the hash length, and K the key length.

Let us call long term security L, and short term security S.

Let us call B the total bit cost of an input and an output.

If there is no hashing, that is, if you directly publish the public key from its outset, then:

L = S = K/2

B_nohash = 3 K = 6 L = 6 S

(because there is the public key of length K, and the signature size is twice the key length, hence 3K)

If there is hashing, and we assume H between K/2 and K, then:

L = H

S = K / 2

B_hash = H + 3 K = L + 6 S

I will now show you why there's some craziness in this scheme:
Take Satoshi's system: L = 160 bits, S = 128 bits, which makes his B_hash(160,128) = 928.

Suppose that I would have taken L = 160 bits overall: B_nohash(160) = 960.

So I would only have used 32 bits more, on a total of about 1 K, to have OVERALL SECURITY of 160 bits.

The hashing wins me 3% of room, to decrease the ECC security from 160 to 128 bits.

If I would have a direct address with a 320 bit ECC key, I would use about as much room on the block chain, as Satoshi's scheme, which LOWERS the security of ECC to 128 bit in the short term.

If I consider 128 bits enough, I would  have B_nohash(128) = 768 bits, which is about 20% less room.

In other words, apart from a suspicion on the fragility of ECC, there was no point in doing what he did.  And if there is a suspicion on that fragility, it is very wasteful to take a useless 256 bit key which would in any case easily be cracked by assumption.
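
That bookkeeping, restated as a small Python sketch using the formulas defined above (B_nohash = 6L and B_hash = L + 6S, all in bits):

Code:
def b_nohash(l_bits: int) -> int:
    return 6 * l_bits              # L = S = K/2, so B = 3K = 6L

def b_hash(l_bits: int, s_bits: int) -> int:
    return l_bits + 6 * s_bits     # L = H, S = K/2, so B = H + 3K = L + 6S

print(b_hash(160, 128))   # 928: Satoshi's scheme, L=160, S=128
print(b_nohash(160))      # 960: bare 320-bit key, L=S=160, only ~3% more
print(b_nohash(128))      # 768: bare 256-bit key, L=S=128, ~20% less room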

legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
April 10, 2017, 05:09:31 PM
WOAH...guys....


guys... guys...

you're obviously overlooking the OBVIOUS here:

John Nash was AMERICAN....mkay???

Satoshi Nakamoto was JAPANESE.....

DUHHHHHHH.....

so he's not Satoshi, stupid.

unbelievable.

full member
Activity: 322
Merit: 151
They're tactical
April 10, 2017, 02:06:05 PM
It just occurred to me now how reward seeking is completely the base variable for a game theory algorithm :o

https://sites.google.com/a/nau.edu/game-theory/about/philosophy

In Economics, Game Theory models the behavior of individuals as if they are participating in a game. Much like any other game, they are playing to receive some sort of payoff or benefit. The goal of the game is to attain the highest reward for themselves by using any strategies available to them. Risk dominance and payoff dominance are two related refinements of the Nash Equilibrium solution concept in game theory defined by John Harsanyi and Reinhard Selten (Risk Dominance, 2013).

So it's still possibly someone familiar with game theory and mathematics. I was looking for links to mathematics jargon in the code.


Now I can completely see the equation, with risk taking = computing a hash, reward = coin emission for the block miner, how the thing is tied together with the probability/risk as work, and how low risk taking leads to seeking consensus for mutual benefit, and so leads to an equilibrium.

Where the forces of the market, aka the speculators/whales, are still separated from decision power by the risk taking of computing hashes to win the reward, so they won't directly manipulate the network even if they own a large part of the data it holds.

And it still makes it so that everyone keeps relaying the good transactions, whether speculators/whales or miners, with different risk taking (buying the coin / computing a probabilistic hash (difficulty = 1/rate of (xx))) and different rewards (coin emission for miners, high coin value for whales). Well, it still needs more thinking to get it completely, but the similarities in thinking with game theory math are starting to appear to me :D

I'm sure a good mathematician could pull out the 2x2 matrices, with the coefficients being reward & difficulty (aka risk), weighted by whether the risk is computing a hash or buying coins, and deduce the right parameters for it to reach equilibrium on consensus (a toy sketch of such a matrix follows below). (! :D fatal genius)
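
A toy sketch of what such a matrix could look like, in Python, with entirely made-up payoff numbers (REWARD and COST are hypothetical placeholders, not bitcoin's actual parameters):

Code:
# Two miners each choose to "follow" consensus or "defect"; a pure Nash
# equilibrium is a cell where neither gains by unilaterally deviating.
import itertools

REWARD, COST = 12.5, 2.0   # made-up block reward and difficulty cost
ACTS = ("follow", "defect")
payoff = {                 # (row act, col act) -> (row pay, col pay)
    ("follow", "follow"): (REWARD - COST, REWARD - COST),
    ("follow", "defect"): (REWARD - COST, -COST),
    ("defect", "follow"): (-COST, REWARD - COST),
    ("defect", "defect"): (-COST, -COST),
}

def pure_nash_equilibria(p):
    eqs = []
    for r, c in itertools.product(ACTS, ACTS):
        row_ok = all(p[(r, c)][0] >= p[(a, c)][0] for a in ACTS)
        col_ok = all(p[(r, c)][1] >= p[(r, a)][1] for a in ACTS)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(payoff))   # [('follow', 'follow')]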

It's a bit more twisted than this, because miners also benefit from a high price of the coin, and you would also expect PoW difficulty to get higher as trading volume & market cap increase due to higher coin value. And the inflation rate of the block reward also crosses the speculators' reward (and maybe involves a risk for them).

I really wonder now if the parameters were pulled out of MATLAB ??? ::)

I'm sure it's very simple 2D stuff in game theory now, with difficulty/reward linking trading and mining :D



But it's hard to see how he went from the math theory to the code, or from the business problem to the math and then to the code.

If he went from the math and ideal concept to the application code, there is definitely a break in the chain somewhere, and at least 2 persons involved.

But satoshi nakamoto looks more like a project name based on the concept of balance and the source of balance, involving people with different expertise, and probably inspired by Nash in some part.

sr. member
Activity: 336
Merit: 265
April 10, 2017, 01:56:23 PM
I will use moderated threads from now on. I resisted because I despise censorship. But I need to consider that I am not of much value to the community if I am expending time fighting instead of producing.

Moderated threads = yes

I don't know if you've heard of Athene and his "Logic Nation" - https://logicnation.org/

Athene's Theory of Everything
https://www.youtube.com/watch?v=dbh5l0b2-0o

Science Finds God (documentary / sci-fi short film)
https://www.youtube.com/watch?v=SXDw73rToPE

One of the debates regarding his "cult" - "logic nation":
[Livestream Debate] Glink vs Athene: "Click" Cult, Logic, and Reality
https://www.youtube.com/watch?v=EWq1VNk6T2g

(doesn't matter if you watch those videos, just want to let you know how he - as someone with a huge audience - deals with trolls)

He had 100k+ followers when he was gaming. Millions of people heard about him.
Seems like it was his plan for ~15 years to just build an audience
for his project (he wanted to have proof of concept before going public with it).
He explained (I don't have a source for this atm, but heard it in one of the debates) that
he must ban trolls (mainly from his twitch channel);
it's simply not worth his time to constantly explain basics over and over again.
He learned by trial and error, e.g. he didn't ban trolls at first - he gave them a chance,
but seeing that trolling never ends, he now bans them on sight:

Athene bans "fuckin moron" debater after just 1 min!
https://www.youtube.com/watch?v=XOTvuJnf4yk
Ok, maybe he overreacted there a bit, but I couldn't find a better example :)


And yes, he received some negative feedback, but in the long run it pays off.
No one's gonna miss a few trolls.

So... it kinda reminds me of you.

To me as a reader, it makes sense. You are better off with moderated threads.
You are not only saving your time, you are also saving our (readers) time.
People who want to learn can just browse your post history - or simply click your links
(no need to ask questions which you explained few posts ago).

I just wanted to clear up any doubts (if you had any) about whether to moderate threads or not - go for it. It's worth it.

Thank you.

Yes I see now that is the only reasonable way to have quality discussions.
sr. member
Activity: 336
Merit: 265
April 10, 2017, 01:50:36 PM
Back for sloppy seconds I see.

No need to read, for we know what he be.
sr. member
Activity: 279
Merit: 250
April 10, 2017, 01:48:16 PM
lol, what a perfectly worded comment. It's almost as if it was crafted to look like he didn't read and just replied :D I guess we will never know.

You thought you were a clever fuck, didn't you? ;)

Really though, I was done 3 comments ago. It's similar to how a cat gets bored after the mouse he's playing with dies.
sr. member
Activity: 336
Merit: 265
April 10, 2017, 01:41:55 PM
I don't need to read, I can just type my reply.

Thought you were a clever fuck, didn't you?

NaNaNa, NaNaNa, Hey, Hey, Goodbye.

Go take your meds, dufus.
sr. member
Activity: 279
Merit: 250
April 10, 2017, 01:40:45 PM
lol, we could, or we could kiss and make up, and you could just admit that not all skeptics are idiots and that we got off on the wrong foot.
sr. member
Activity: 336
Merit: 265
April 10, 2017, 01:39:37 PM
I think we could continue this forever.

Until his F5 key craps out.

Let's take this to the absurdum that he wants to prove.

When this thread has 100 pages of his nonsense, then we'll see if the moderator takes action or not.
sr. member
Activity: 279
Merit: 250
April 10, 2017, 01:34:16 PM
Somebody forgot to take their meds today.

Would you like me to get you a glass of water? I think the Nash/millionaire delusions are coming back.

Those on my Ignore list (BitWhale was added), will get their posts deleted without even being read. This will foster civilized discussion.

Eliminating the baboons can only make the S/N ratio much higher.

I thought I was on ignore?

I hope you don't say things you don't mean about the super-duper amazing multi-gagillion dollar altcoin you are creating, because what you SAY and DO are clearly two different things, lmao.

God man, you really aren't as bright as you like to make yourself out to be, are you? Why would you respond to that? I was literally done.
sr. member
Activity: 336
Merit: 265
April 10, 2017, 01:29:21 PM
Somebody forgot to take their meds today.