
Topic: [Fundraising] Finish “Ultimate blockchain compression”

legendary
Activity: 905
Merit: 1012
UTXO commitments have always required a soft-fork, whether they are added to the coinbase or separate commitment transactions.

What open data fields in transactions? Do you mean OP_RETURN? If so, see my pull request for a soft-fork change to enable this kind of miner commitment with compact proofs:

https://github.com/bitcoin/bitcoin/pull/3977

I would not put proof of stake anywhere near a consensus system. Not with current designs at least.
legendary
Activity: 2618
Merit: 1007
So you moved away from using available fields, e.g. in the coinbase, and are going for a forking solution?

What about the new open data fields for transactions? Any way to fit a UTXO hash in there and come to some kind of consensus e.g. via Proof of Stake?
legendary
Activity: 905
Merit: 1012
Here is something I just posted on a related pull request last night:

Quote
The UTXO commitments I'm working on are currently developer-time limited. There's a working Python implementation and complete test vectors, as well as an optimized commitment mechanism (#3977, which needs a rebase), and C++ serialization code for the proofs. All we need is the C++ code for updating proofs, and the LevelDB backend. So if there are other bitcoind hackers out there interested in doing this The Right Way, contact me. However it requires a soft-fork, so rolling it out necessitates some degree of developer consensus, community education, and a miner voting process (or the beneficence of ghash.io...), all of which together requires as much as a year if past experience is any judge.
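
For readers less familiar with how such compact proofs check out against a commitment, here is a minimal, generic sketch in Python. It is only an illustration of the idea, not the node serialization or proof format from the BIP, and the helper names are invented for this example.

Code:
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf_data: bytes, path: list, committed_root: bytes) -> bool:
    """Re-derive the root from a leaf and its sibling hashes.

    path is a list of (sibling_is_right, sibling_hash) pairs ordered from the
    leaf up to the root, so verification costs O(log n) hash operations.
    """
    node = h(leaf_data)
    for sibling_is_right, sibling_hash in path:
        node = h(node + sibling_hash) if sibling_is_right else h(sibling_hash + node)
    return node == committed_root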
legendary
Activity: 1274
Merit: 1004
Any update on this?
legendary
Activity: 905
Merit: 1012
Oy, something is wrong: I have not been receiving email updates about this thread. My apologies to everyone.

So an update on the current status:

A BIP describing the authenticated prefix tree resulting from my research has been published to the developer mailing list, and is available to view online. The feedback I got from that is being worked into a second version of the proposal, which is nearing completion (I'm currently adding descriptions of the peer-to-peer messages and JSON-RPC APIs, after which it is finished).

I have also written about 80% of a BIP describing the validation index, the outline of one describing the wallet index, and an assortment of notes towards two future BIPs related to document timestamping and merged-mining improvements. These are the first deliverables, since once developer consensus is achieved other people can start working on implementations of these various features.

Besides the Python implementation hosted on GitHub, there's a C++ implementation suitable for inclusion in the reference client that is about half finished. It could use a lot more unit tests.

However, since funding mostly dried up after Armory's very generous donation some months back, I have been working on this proposal only part time (1-2 days per week). The rest of my time is spent on various other funded projects, including CoinJoin and preparation for blockchain pruning. Work has not stopped, just slowed down.

@dansmith, unfortunately I think there's some confusion. I did not switch addresses between the first and the second funding rounds, so most of the money received at that address was for the first round. (And, while it sounds like a lot, bitcoin was worth an order of magnitude less back then and it was mostly cashed out upon receipt. Too bad I don't have a knack for predicting markets :\ ) I was unable to finish out the round, so I've been forced to take on other (thankfully related) work as well which has unfortunately slowed things down.
full member
Activity: 202
Merit: 100
I am therefore requesting a total of 195btc to finish the implementation as described above. If this is received by the end of the month, then I can promise the deliverable by the end of the year.

maaku, was there a deliverable as per your promise?
full member
Activity: 182
Merit: 100
Any news on this yet, or have you already received enough BTC?
legendary
Activity: 2674
Merit: 3000
So how far along is this? Still under development?
legendary
Activity: 905
Merit: 1012
Wow, that was very kind and unexpected. Thank you! I forwarded those coins into a cold-storage wallet that will be the start of her education endowment. Hopefully 20 years of deflation will turn it into a sizable chunk of her college education, and maybe even the start of a nest egg.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Education endowment for maaku's daughter:
1Naginiq19q9fdeFLipqFr4TZjjsSdhT3B
-----BEGIN PGP SIGNATURE-----

iQIcBAEBAgAGBQJSbiBOAAoJECsa1YSJj/UD6z4P/3jawSFnpMNt89elZmsm3OaJ
q2GMvC++X8+Y7n/RI5mN7Ubg4E2YLyGS6Xrf5f7mHBMq5FjW5UPL2GV0JDHEN+ni
ILQMvvNhI28Dj1rlZY2KdAYj6WUf50gVFY3Tv3hq3JZHEnoS2r2YPZ4dPH9Ls4zM
i6iQ/LJNs7hZFEiO+EDsPThjepFmRzpaBqi2mpW4BCnFl1gqYJiFUgSKrY2G8YYI
Eqh9Xq1rQft+utYY2gNuc8XtqrscmFmr5dHrWnqPe0cFPWrpG0HqY5QftaqfjMXg
WWZHuTPwul+QVwXOeimoPjdJ9gCi7sXyCZTa9MxvJ5n1QS/ubuHWKx5jUzbSx6m+
iFb+IX+tLYxcOecxfmftM645LQ8WlCJeY0p+EoskNpkdH4q63Sk1Nyq9dJMuQK+2
+MaFbB0Pv3AMk/Qoi/qrCF9F1BVD/CYK72uI8t+7r8iDvQ97iK5lGfLvCxR89Ev3
X3uCiJOpUctMozELjI7w3ga3xRah6mufKU2HgNRqkZAaphp7Cf5v34NwnSeaf9LM
A2tdBo8N86NVpjeyLz5U8TRnSrUcPDQPR4snz+IYF1sCKsRqYz9rNy1WhaIO7j6x
2A6nTnzvLUnenYLusu4hAIJ5XAlQMiGDL6sdALL2Z3or7mlx3IPz5elvn2FRd1fh
xkvdfNXeOkmrFd4x6m6h
=fzcm
-----END PGP SIGNATURE-----
legendary
Activity: 1120
Merit: 1164
There was a gap in funding in late September, and then I went incommunicado for a few weeks, because of this:

[photo]

Congrats!

Sent you a donation for his or her college fund. :)

Now that we're back home and settled, I'm back to working on authentication trees. Thanks in large part to the generous donation of Armory Technologies, Inc. and a few others, and the recent rise in price, I have the ability to focus on this project for some more months.

Right now I am translating the Python code into a C++ libauthtree implementation, and working on a series of BIPs that describe the trie structure, various generic operations on it, and the specific application to txid indices, address indices, and merged mining. Invictus Innovations is helping with the BIP process due to their interest in using the structure for merged mining headers.

I will be consolidating the recent news into a blog post update soon, hopefully by the end of the week.

I should write up a similar document explaining TXO commitments.
donator
Activity: 1466
Merit: 1048
There was a gap in funding in late September, and then I went incommunicado for a few weeks, because of this:

[photo]

Now that we're back home and settled, I'm back to working on authentication trees. Thanks in large part to the generous donation of Armory Technologies, Inc. and a few others, and the recent rise in price, I have the ability to focus on this project for some more months.

Right now I am translating the Python code into a C++ libauthtree implementation, and working on a series of BIPs that describe the trie structure, various generic operations on it, and the specific application to txid indices, address indices, and merged mining. Invictus Innovations is helping with the BIP process due to their interest in using the structure for merged mining headers.

I will be consolidating the recent news into a blog post update soon, hopefully by the end of the week.

Congrats!
legendary
Activity: 905
Merit: 1012
There was a gap in funding in late September, and then I went incommunicado for a few weeks, because of this:

[photo]

Now that we're back home and settled, I'm back to working on authentication trees. Thanks in large part to the generous donation of Armory Technologies, Inc. and a few others, and the recent rise in price, I have the ability to focus on this project for some more months.

Right now I am translating the Python code into a C++ libauthtree implementation, and working on a series of BIPs that describe the trie structure, various generic operations on it, and the specific application to txid indices, address indices, and merged mining. Invictus Innovations is helping with the BIP process due to their interest in using the structure for merged mining headers.

I will be consolidating the recent news into a blog post update soon, hopefully by the end of the week.
sr. member
Activity: 337
Merit: 250
Interested in the status of this as well.
sr. member
Activity: 602
Merit: 254
What's the latest on the development of this?
legendary
Activity: 1120
Merit: 1164
The performance cost of SHA256^2 compared to SHA256 is a lot less than 2. Heck, I'd bet you 1BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...

Enlighten me, please.

EDIT: After thinking about this for some time, the only conclusion I can come to is that you are assuming multiple blocks on the first pass. This is only true of about half of the nodes (most of which are two blocks); the other half are small enough to fit in a single block. While not a performance factor of 2, it's definitely >= 1.5.

Where do you think the first SHA256() gets the data it's working on? What about the second?
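
To make the point concrete, here is a rough Python sketch that counts SHA-256 compression-function invocations instead of timing anything. SHA-256 works on 64-byte blocks after at least 9 bytes of padding, so the second pass over a 32-byte digest always costs exactly one extra block; whether that extra block is noticeable next to serialization, traversal, and allocation is what the bet is really about. The 100-byte node size below is an arbitrary stand-in, not the actual serialized size.

Code:
def sha256_blocks(msg_len: int) -> int:
    # Number of 64-byte compression blocks needed for a msg_len-byte message,
    # accounting for the mandatory 0x80 byte and 8-byte length field.
    return (msg_len + 9 + 63) // 64

node_len = 100                              # hypothetical serialized node size
first_pass = sha256_blocks(node_len)        # 2 compression calls
second_pass = sha256_blocks(32)             # always 1: the digest is 32 bytes
print(first_pass, "vs", first_pass + second_pass)   # SHA256 vs SHA256^2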
legendary
Activity: 905
Merit: 1012

I have read that, and I don't agree. I'm far more inclined to side with Gavin on the issue of block size. Bitcoin will either scale up or become irrelevant. These are the only two viable outcomes I see.

However this is now offtopic, so I'd rather keep discussions of your "keep bitcoin free" campaign outside of this thread.

Have some imagination: if the attack can cause technical problems with Bitcoin for whatever reason, there will be people and organizations with strong incentives to carry it out. For instance note how you could create a block that's fraudulent, but in a way where proving that it's fraudulent requires a proof that most implementations can't process because of the ridiculously large UTXO path length involved - an excellent way to screw up inflation fraud proofs and get some fake coins created.

You are arguing a strawman: you assume that nodes will validate blocks probabilistically, and then show how this will result in possible fraud. The problem lies in the security model you are proposing, namely reliance on fraud proofs instead of full validation.

Out of curiosity, have you ever worked on any safety critical systems and/or real-time systems? Or even just software where the consequences of it failing had a high financial cost? (prior to your work on Bitcoin that is...)

Yes.

Creating those outputs requires 0 BTC, not 14,675BTC - zero-satoshi outputs are legal. With one satoshi outputs it requires 27BTC. That you used the dust value suggests you don't understand the attack.

So now you're assuming that an attacker will be able to successfully mine 10,000 blocks in order to set up a one-off 20-minute DoS? I'm pretty sure it'd be cheaper to just buy the 14,675BTC. If someone who wants to destroy or defraud the bitcoin network is able to mine that many blocks in preparation, it's either a *really* long con or Bitcoin is hosed for so many other reasons.

I'm not arguing that it is impossible. I'm arguing that it is not even remotely close to economically plausible, and so far beyond other attack vectors that it's not worth worrying about.

But even if that weren't the case, there are fundamentally simple ways to deal with the issue: introduce a hard limit on the number of index node updates per-block, just as the number of signature operations is currently limited.
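
As a very rough sketch of what such a cap could look like, by analogy with the existing per-block signature-operation limit: the constant and helper below are hypothetical, chosen only for illustration, and are not part of any BIP or of the reference client.

Code:
MAX_INDEX_NODE_UPDATES = 1_000_000  # purely illustrative figure

def count_index_updates(tx) -> int:
    # Stand-in: a real implementation would count the trie nodes touched by
    # the inputs and outputs of tx. Here we assume the number has already
    # been computed and attached to the transaction.
    return tx["index_updates"]

def check_index_update_limit(block_txs) -> bool:
    """Consensus-rule sketch: reject blocks whose transactions would touch
    more index nodes than the cap allows."""
    return sum(count_index_updates(tx) for tx in block_txs) <= MAX_INDEX_NODE_UPDATES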

As I say, such extreme differences make it very likely that you'll run into implementations - or just different versions - that can't handle such blocks, while others can, leading to dangerous forks. Look at how the March fork happened because of a previously unknown limit on how many database operations could happen in one go in BDB.

That's a wonderfully general argument you can use against anything you don't like. Implementation bugs can exist anywhere. There's no reason to suppose that we're more or less likely to run into them here than somewhere else.

The performance cost of SHA256^2 compared to SHA256 is a lot less than 2. Heck, I'd bet you 1BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...

Enlighten me, please.

EDIT: After thinking about this for some time, the only conclusion I can come to is that you are assuming multiple blocks on the first pass. This is only true of about half of the nodes (most of which are two blocks); the other half are small enough to fit in a single block. While not a performance factor of 2, it's definitely >= 1.5.

A re-org is a failure of consensus, though one that healed itself.

That's not a very useful definition of consensus. Reorgs happen even without malicious intent, and the network is eventually able to reach agreement about older blocks. A real error is when one node rejects a block that other validating nodes accept - no amount of time or work built on the other chain will convince that node to switch. If this proposal is susceptible to that kind of validation error, I'd be interested to know.

Currently re-orgs don't happen too often because the difference in performance between the worst, average, and best performing Bitcoin nodes is small, but if that's not true serious problems develop. As an example when I was testing out the Bloom filter IO attack vulnerability fixed in 0.8.4 I found I was able to get nodes I attacked to fall behind the rest of the network by more than six blocks; an attacker could use such an attack against miners to construct a long-fork to perform double-spend attacks.

If you know a way to do the same with this proposal, which isn't easily solved by processing blocks in parallel, and which doesn't require an additional network DoS vulnerability, please let me know. Until then there are simply too many "what if"s attached.
legendary
Activity: 1120
Merit: 1164
You can't assume that slowing down block propagation is a bad thing for a miner. First of all block propagation is uneven - it'd be quite possible for large pools to have much faster propagation than the more decentralized solo/p2pool miners we want in the network. Remember, if you can reliably get your blocks to just over 50% of the hashing power, you're better off than if they get to more than that because there are more fee paying transactions for you to pick from, and because in the long run you drive difficulty down due to the work your competitors are wasting. Of course reliably getting to 50% isn't easy, but reliably getting to, say, no more than 80% is probably quite doable, especially if efficient UTXO set handling requires semi-custom setups with GPUs to parallelize it that most full-node operators don't have.

Still, you're playing a game that is not in your favor. In this case you are trading a few extra seconds of alone time hashing the next block against the likelihood that someone else will find a competing (and faster validating) block in the mean time. Unless you are approaching 50% hash power, it's not worth it. The faster your blocks get validated, the better off you are.

Read this before making assumptions: https://bitcointalksearch.org/topic/how-a-floating-blocksize-limit-inevitably-leads-towards-centralization-144895

Under the right circumstances those perverse incentives do exist.

As you say, they would have to make at the very least one set of adjustments for the coinbase which becomes newly active, requiring access to part of the datastructure. This could be facilitated by miners including a proof with the block header that provides just the paths necessary for making the coinbase update. Miners know that any full node will reject a header with improperly constructed commitment hashes, so I fail to see what the issue is.

Shit... your comment just made me realize that with the variant of the perverse block propagation incentives where you simply want to keep other miners from being able to collect transaction fees profitably, it's actually in your incentive to propagate whatever information is required to quickly update the UTXO set for the coinbase transaction that becomes spendable in the next block. Other miners will be able to build upon your block - making it less likely to be orphaned - but they won't have the information required to safely add transactions to it until they finish processing the rest of the UTXO updates some time later. Write the code to do this and miners will start using this to keep their profitability up regardless of the consequences down the road.

The full scriptPubKey needs to be stored, or else two lookups would be required - one from the wallet index to check existence, and one from the validation index to actually get the scriptPubKey. That is wasteful. The alternative is to key by hash(scriptPubKey) and put the scriptPubKey in the resulting value: that nearly doubles the size of the index on disk. You'd have to have a really strong reason for doing so, which I don't think the one given is.

BTW, the structure is actually keyed by compress(scriptPubKey), using the same compression mechanism as ultraprune. A standard transaction (pubkey, pubkeyhash, or p2sh) compresses to the minimal number of bytes of either pubkey or hash data. "Attacking" pathways to pubkeyhash or p2sh keys requires multiple hash preimages. I put it in quotes because honestly, what exactly is the purpose of the attack?

Have some imagination: if the attack can cause technical problems with Bitcoin for whatever reason, there will be people and organizations with strong incentives to carry it out. For instance note how you could create a block that's fraudulent, but in a way where proving that it's fraudulent requires a proof that most implementations can't process because of the ridiculously large UTXO path length involved - an excellent way to screw up inflation fraud proofs and get some fake coins created.

Out of curiosity, have you ever worked on any safety critical systems and/or real-time systems? Or even just software where the consequences of it failing had a high financial cost? (prior to your work on Bitcoin that is...)

"Tricky and expensive" to create such a block? You'd have to create ~270,270,000 outputs, the majority of which are probably non-spendable, requiring *at minimum* 14675.661btc. For an "attack" which at worst requires more than the entire existing block chain history to construct, and adds only 22 minutes to people's resync times. I fail to see why this should be anything but an academic concern.

Creating those outputs requires 0 BTC, not 14,675BTC - zero-satoshi outputs are legal. With one satoshi outputs it requires 27BTC. That you used the dust value suggests you don't understand the attack.

As I say, such extreme differences make it very likely that you'll run into implementations - or just different versions - that can't handle such blocks, while others can, leading to dangerous forks. Look at how the March fork happened because of a previously unknown limit on how many database operations could happen in one go in BDB.

re: single SHA256, I'd recommend SHA256^2 on the basis of length-extension attacks. Who cares if they matter; why bother analyzing it? Doubling the SHA256 primitive is a well accepted belt-and-suspenders countermeasure, and done already elsewhere in Bitcoin.

A performance factor of 2 is worth a little upfront analysis work. Padding prevents you from being able to construct length-extension attacks - append padding bytes and you no longer have a valid node structure. Length-extension is not an issue, but performance considerations are.

The performance cost of SHA256^2 compared to SHA256 is a lot less than 2. Heck, I'd bet you 1BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...

Lots of optimizations, sure, but given that Bitcoin is a system where failure to update sufficiently fast is a consensus failure, if any of those optimizations have significantly different average-case and worst-case performance it could lead to a fork; that such optimizations exist isn't a good thing.

I think you mean it could lead to a reorg. "Failure to update sufficiently fast" does not in itself lead to consensus errors. Examples please?

A re-org is a failure of consensus, though one that healed itself. Currently re-orgs don't happen too often because the difference in performance between the worst, average, and best performing Bitcoin nodes is small, but if that's not true serious problems develop. As an example when I was testing out the Bloom filter IO attack vulnerability fixed in 0.8.4 I found I was able to get nodes I attacked to fall behind the rest of the network by more than six blocks; an attacker could use such an attack against miners to construct a long-fork to perform double-spend attacks.


FWIW thinking about it further, I realized that your assertion that the UTXO tree can be updated in parallel is incorrect because parallel updates don't allow for succinct fraud proofs of what modifications a given transaction made to the UTXO set; blocks will need to commit to what the top-level UTXO state hash was for every individual transaction processed because without such fine-grained commitments you can't prove that a given UTXO should not have been added to the set without providing all the transactions in the block.
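
A rough sketch of what committing to the intermediate state after every transaction might look like follows; the apply_tx callback and the way the roots are folded into a single commitment are hypothetical, not something the proposal as written specifies. A fraud proof would then only need to exhibit one bad root[i] -> root[i+1] transition plus the offending transaction, rather than every transaction in the block.

Code:
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def per_tx_commitment(utxo_root: bytes, txs, apply_tx) -> bytes:
    # Record the top-level UTXO root after applying each transaction...
    roots = [utxo_root]
    for tx in txs:
        utxo_root = apply_tx(utxo_root, tx)   # returns the new UTXO tree root
        roots.append(utxo_root)
    # ...then fold the intermediate roots into one committed hash (a plain
    # hash chain here for brevity; a Merkle tree would allow shorter proofs).
    commitment = b'\x00' * 32
    for root in roots:
        commitment = h(commitment + root)
    return commitment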
legendary
Activity: 905
Merit: 1012
@jtimon, thank you for the reference and kind words. Yes, our “assets” colored-coin proposal is what drew me back to this proposal. If people think SatoshiDice brings a lot of traffic and bloats the block chain, wait until they see what a distributed exchange and transitive credit will do, especially once the block-size limit is relaxed.

I was under the impression coloured coins don't cause blockchain bloat, unlike Mastercoin, and that this is one of their main benefits. Am I wrong?

I was looser with terminology than I should have been. I meant legitimate bitcoin traffic, not Mastercoin-like data and dust bloat. Bitcoin revolutionized payments. Colored coins apply all that same innovation to every other kind of financial contract or property ownership. Of course switching from one specific application to infinitely many general applications will result in more traffic ... "lots more traffic" would be a bit of an understatement. Just the distributed exchange traffic alone would probably consume more blockchain space than payments, and that's just one application.

When colored coin technology becomes widespread, bitcoin will have to scale. Right now that would push out mobile or embedded devices that already can't keep up with the block chain in a secure manner. This proposal of authenticated indices would make miners and other full nodes do a little extra upfront work for the benefit of lightweight clients that desire to securely work with only the parts of the block chain that concern them.

BTW, we have since published our "user-issued assets" colored coin proposal, under the name Freimarkets:

https://bitcointalksearch.org/topic/crowdfund-user-issued-assets-p2p-exchange-off-chain-transactions-and-more-278671
legendary
Activity: 905
Merit: 1012
You can't assume that slowing down block propagation is a bad thing for a miner. First of all block propagation is uneven - it'd be quite possible for large pools to have much faster propagation than the more decentralized solo/p2pool miners we want in the network. Remember, if you can reliably get your blocks to just over 50% of the hashing power, you're better off than if they get to more than that because there are more fee paying transactions for you to pick from, and because in the long run you drive difficulty down due to the work your competitors are wasting. Of course reliably getting to 50% isn't easy, but reliably getting to, say, no more than 80% is probably quite doable, especially if efficient UTXO set handling requires semi-custom setups with GPUs to parallelize it that most full-node operators don't have.

Still, you're playing a game that is not in your favor. In this case you are trading a few extra seconds of alone time hashing the next block against the likelihood that someone else will find a competing (and faster validating) block in the mean time. Unless you are approaching 50% hash power, it's not worth it. The faster your blocks get validated, the better off you are.
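
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch assuming Poisson block arrivals with a 600-second mean interval. It deliberately ignores the propagation asymmetries raised in the quoted post, and is only meant to show the order of magnitude of the orphan-race risk per second of extra validation delay.

Code:
from math import exp

def p_competing_block(delay_seconds: float, mean_interval: float = 600.0) -> float:
    """Probability that at least one competing block appears during the delay."""
    return 1.0 - exp(-delay_seconds / mean_interval)

for delay in (2, 5, 10, 30):
    print(f"{delay:>3}s extra delay -> {p_competing_block(delay):.2%} chance of a race")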

Secondly if block propagation does become an issue miners will switch to mining empty or near empty blocks building on top of the most recent blockheader that they know about until they finally get around to receiving the block in full as a way to preserve their income. Unfortunately even with UTXO commits they can still do that - just re-use the still valid UTXO commitment from the previous block.(1) Obviously this has serious issues with regard to consensus... which is why I say over and over again that any UTXO commitment scheme needs to also have UTXO possession proofs as a part of it to make sure miners can't do that.

1) Modulo the fact that the UTXO set must change because a previously unspendable coinbase output became spendable; if we don't do explicit UTXO possession proofs we should think very carefully about whether or not a miner (or group of miners) can extract the info required to calculate the delta to the UTXO set that the coinbase created more efficiently than just verifying the block honestly. I'm not sure yet one way or another if this is in fact true.

As you say, they would have to make at the very least one set of adjustments for the coinbase which becomes newly active, requiring access to part of the datastructure. This could be facilitated by miners including a proof with the block header that provides just the paths necessary for making the coinbase update. Miners know that any full node will reject a header with improperly constructed commitment hashes, so I fail to see what the issue is.

re: content of the scriptPubKey, change your algorithm to index by H(scriptPubKey) please - a major part of the value of UTXO commitments is that they allow for compact fraud proofs. If your tree is structured by scriptPubKey the average case and worst case size of a UTXO proof differ by 454x; it's bad engineering that will bite us when someone decides to attack it. In any case the majority of core developers are of the mindset that in the future we should push everything to P2SH addresses anyway; making the UTXO set indexable by crap like "tags" - the rationale for using the scriptPubKey directly - just invites people to come up with more ways of stuffing data into the blockchain/UTXO set.

The full scriptPubKey needs to be stored, or else two lookups would be required - one from the wallet index to check existence, and one from the validation index to actually get the scriptPubKey. That is wasteful. The alternative is to key by hash(scriptPubKey) and put the scriptPubKey in the resulting value: that nearly doubles the size of the index on disk. You'd have to have a really strong reason for doing so, which I don't think the one given is.

BTW, the structure is actually keyed by compress(scriptPubKey), using the same compression mechanism as ultraprune. A standard transaction (pubkey, pubkeyhash, or p2sh) compresses to the minimal number of bytes of either pubkey or hash data. "Attacking" pathways to pubkeyhash or p2sh keys requires multiple hash preimages. I put it in quotes because honestly, what exactly is the purpose of the attack?
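
For anyone wondering what that keying looks like in practice, here is a simplified Python sketch loosely modeled on ultraprune's script compression. The exact type bytes and edge cases are omitted, so treat it as an illustration of the idea rather than the serialization the implementation actually uses.

Code:
def compress_script_key(script_pub_key: bytes) -> bytes:
    # P2PKH: OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
    if (len(script_pub_key) == 25 and script_pub_key[:3] == b'\x76\xa9\x14'
            and script_pub_key[23:] == b'\x88\xac'):
        return b'\x00' + script_pub_key[3:23]          # 21-byte key
    # P2SH: OP_HASH160 <20-byte hash> OP_EQUAL
    if (len(script_pub_key) == 23 and script_pub_key[:2] == b'\xa9\x14'
            and script_pub_key[22:23] == b'\x87'):
        return b'\x01' + script_pub_key[2:22]          # 21-byte key
    # P2PK with a compressed pubkey: <33-byte pubkey> OP_CHECKSIG
    if (len(script_pub_key) == 35 and script_pub_key[0] == 0x21
            and script_pub_key[1] in (0x02, 0x03) and script_pub_key[34] == 0xac):
        return script_pub_key[1:34]                    # 33-byte key
    # Non-standard scripts fall back to the raw script as the key.
    return script_pub_key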

Worst case would be if a miner creates a block filled with transactions spending previously created 10,000 byte scriptPubKeys and creating no new ones. A minimal CTxIn is 37 bytes, so you'd get roughly 1,000,000/37=27,027 txouts spent. Each txout requires on the order of 10,000 hash operations to update it, so you've actually got an upper limit of ~270,270,000 updates, or 22 minutes by your metric... There's a good chance something else would break first, somewhere, which just goes to show that indexing by scriptPubKey directly is a bad idea. Sure it'd be tricky and expensive to create such a monster block, but it is possible and it leads to results literally orders of magnitude different than what is normally expected.

"Tricky and expensive" to create such a block? You'd have to create ~270,270,000 outputs, the majority of which are probably non-spendable, requiring *at minimum* 14675.661btc. For an "attack" which at worst requires more than the entire existing block chain history to construct, and adds only 22 minutes to people's resync times. I fail to see why this should be anything but an academic concern.

re: single SHA256, I'd recommend SHA256^2 on the basis of length-extension attacks. Who cares if they matter; why bother analyzing it? Doubling the SHA256 primitive is a well accepted belt-and-suspenders countermeasure, and done already elsewhere in Bitcoin.

A performance factor of 2 is worth a little upfront analysis work. Padding prevents you from being able to construct length-extension attacks - append padding bytes and you no longer have a valid node structure. Length-extension is not an issue, but performance considerations are.

Lots of optimizations, sure, but given that Bitcoin is a system where failure to update sufficiently fast is a consensus failure, if any of those optimizations have significantly different average-case and worst-case performance it could lead to a fork; that such optimizations exist isn't a good thing.

I think you mean it could lead to a reorg. "Failure to update sufficiently fast" does not in itself lead to consensus errors. Examples please?
hero member
Activity: 868
Merit: 1000

@jtimon, thank you for the reference and kind words. Yes, our “assets” colored-coin proposal is what drew me back to this proposal. If people think SatoshiDice brings a lot of traffic and bloats the block chain, wait until they see what a distributed exchange and transitive credit will do, especially once the block-size limit is relaxed.


I was under the impression coloured coins don't cause blockchain bloat, unlike Mastercoin, and that this is one of their main benefits. Am I wrong?