
Topic: [XMR] Monero Improvement Technical Discussion - page 3. (Read 14760 times)

legendary
Activity: 2968
Merit: 1198
There are ways around this, such as out-of-band signaling, or recipient-provided keys. There are different trade-offs, though, and I haven't yet found the "perfect" one.

I figured you could just scan the blockchain without saving it: find a way to get the daemon to synchronize with the network without storing the blockchain, and have simplewallet scan the downloaded blocks as the daemon synchronizes.

Of course you can do that but (with some recognition of MKN's laws) does it really make sense to expect someone on a cell phone to receive and scan every transaction being generated by millions or billions of people? Probably not.

My personal favorite for in-person transactions is that the sender signs the transaction, transmits it locally to the recipient via QR/NFC/BT/etc., and the recipient transmits it to the blockchain. In doing so the recipient now knows which transactions require confirmation and scanning. The rest can be ignored. As luigi said though, there are multiple approaches that can work for different use cases.

In the original post, he didn't explicitly mention the user would be on a phone. I'm imagining a scenario where the blockchain is 600 GB; you surely don't need to keep all of it on your home PC. I think this has all been hashed out here: ... well, I can't find it on the forum.getmonero.org site. Basically, some thread discussed different node setups: archival nodes (whole blockchain) vs. light nodes (part of the blockchain).

The in-person thing is a different boat with similar problems.

Of course you don't have to keep the whole blockchain. Pruning is fine, but it only addresses the storage component and doesn't help with the bandwidth and scanning (or more generally "processing") issues at all.
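To make the trade-off concrete, here is a toy sketch (hypothetical names, plain hashing instead of Monero's actual ed25519 ECDH key derivation) of scanning blocks as they stream in without storing them: storage stays tiny, but bandwidth and CPU still scale with total network activity, which is exactly the part pruning doesn't fix.

```python
import hashlib

def scan_block(block_txs, view_secret):
    """Toy stand-in for wallet output scanning.  Real Monero derives
    one-time keys via ECDH on ed25519 points; plain hashing here just
    illustrates the control flow, not the cryptography."""
    found = []
    for tx in block_txs:
        for idx, out in enumerate(tx["outputs"]):
            derived = hashlib.sha256(
                (view_secret + tx["pubkey"] + str(idx)).encode()
            ).hexdigest()
            if derived == out["target"]:
                found.append((tx["id"], idx, out["amount"]))
    return found

def sync_without_storing(block_stream, view_secret):
    """Pull blocks, scan them, keep only our outputs, drop the rest.
    Storage is O(own outputs), but bandwidth and CPU still grow with
    total chain activity."""
    wallet_outputs = []
    for block_txs in block_stream:
        wallet_outputs += scan_block(block_txs, view_secret)
        # block_txs is discarded here; nothing is written to disk
    return wallet_outputs
```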


legendary
Activity: 1260
Merit: 1008
And finally, is there any reason why a miner can't be working on different blocks simultaneously?

Yes, the nature of proof of work is such that you can only work on one block at a time. Of course you can multitask, but the hash rate would just be lower on each one, so I'm not sure why you would want to do that.
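A quick way to see why splitting hashrate across candidate blocks buys nothing: each hash is an independent trial, so the expected number of solutions per unit time depends only on total hashrate, not on how it is divided. A minimal illustration:

```python
def expected_blocks_per_sec(hashrate_split, success_prob):
    """Expected PoW solutions per second for a miner whose hashrate is
    divided among several candidate blocks.  Hashing is memoryless, so
    only the total matters, not the split."""
    return sum(h * success_prob for h in hashrate_split)

# Mining one block, or two blocks at half speed each, yields the same expectation:
assert expected_blocks_per_sec([100], 1e-6) == expected_blocks_per_sec([50, 50], 1e-6)
```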


Okay. So it would be a completely different approach / context than merged mining? I was thinking merged mining might be possible with self.

I suppose you could possibly do that but I don't understand the purpose. Merged mining doesn't really add any security unless there are added rewards to mining (which would then increase the hash rate).



The goal of fusion blocks (and any offshoot) isn't to explicitly increase security, it is a countermeasure for orphanization, which I think is the primary reason large blocks are feared to cause centralization. You fix the orphanization problem, you fix the largeblock decentralization problem.

As detailed above, if there are 10 transactions in the pool, you could (with the right software and protocol mods) simultaneously make two blocks: one block has transactions 1 through 5, the second has transactions 6 through 10. If you find a solution for 6-10, you broadcast the block, add it to the chain, etc. However, it turns out you're on a high-lag part of the network, so while you found your 6-10 block solution, someone else found a solution for 1-5. What I'm calling for is a protocol mod that would allow the daemon to recognize that two blocks competing for the same block height are both valid, so they should be fused.

It is also the case that if one block includes transactions 1,2,3,4, and 5, while another block includes 3,4,5,6, and 7, they can still be merged as long as transactions 3,4, and 5 are identical. So with this observation you can just pick any subset of transactions you want.

There is an issue that such blocks can be incompatible if they include incompatible (double spend) transactions. Then you get into what sort of rules to use to resolve the conflict and what sorts of attacks that allows.
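A sketch of the merge rule described above (purely illustrative, not Monero code): fuse two sibling blocks only if their shared transactions are identical and their remaining transactions spend disjoint key images, i.e. no double spend across the pair.

```python
def try_fuse(block_a, block_b):
    """Fusion check sketch: each block maps txid -> tx, and each tx
    lists the key images it spends.  Returns the fused transaction
    set, or None if the siblings conflict."""
    shared = block_a.keys() & block_b.keys()
    for txid in shared:
        if block_a[txid] != block_b[txid]:
            return None  # same txid with different contents: reject
    images_a = {img for txid in block_a.keys() - shared
                for img in block_a[txid]["key_images"]}
    images_b = {img for txid in block_b.keys() - shared
                for img in block_b[txid]["key_images"]}
    if images_a & images_b:
        return None  # double spend across the pair: orphan one instead
    fused = dict(block_a)
    fused.update(block_b)  # union of the two transaction sets
    return fused
```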

There are various people working on ideas like that.

See "Breaking the chains of blockchain protocols" and "Braiding the chain" here: https://scalingbitcoin.org/hongkong2015/#presentations

marvelous, thanks for the leads.

I think in general, for the bolded, it would be kept as simple as possible. That braid paper seems to have really taken this to an extreme. Ultimately, I think orphanization should still be possible; it should just be less likely to occur. Braiding, or fusion, or whatever it's called, should only be allowed to happen at fresh block heights, not deep. Cohorts/siblings? Too much. Or that's an eventuality that can be figured out later.

I think you will find that the added complexity is needed to prevent various attacks where miners can get extra credit beyond what they deserve based on work done, or get more control of the chain with less hash power.


Very likely. I doubt in my random musings I'm going to stumble into obvious answers. In general, though, it's good to see people working on this in Bitcoin. It should be easy to integrate into other cryptocurrencies, especially those where network latency will really cause problems (e.g., adaptive block size like ours).

Is the bolded related to an instance where A is transactions 1,2,3,4,5 and A' is 5,6?

And blocks don't need to have multiple parents to get rid of orphans. What this whole thing depends on is the frequency of orphanization; I don't know what it is, but I doubt it's 100%. These two papers seek to replace blocks with something else. I think something can be made that coexists with blocks. I.e., you have A and A', both valid. So you then add a meta-block (hashing those two blocks) that describes the details: "blocks A and A' are both OK; treat the union set as single transactions, and treat those outside the union set as single transactions as well". All of this might actually be done at the re-org level.

I guess what you might be getting at is the problem of "well if A' is found, and then that propagates, and then A'+1 is found before A and A' are sewn together into a fusion block, then what?" I think the simple answer is that A'+1 then gets orphaned.
legendary
Activity: 1260
Merit: 1008

Merged mining of self is just a piece of this.
legendary
Activity: 1260
Merit: 1008
There was nothing on-topic ("Improvement Technical Discussion") in your last post, thus no need for any further response. Please stay on topic.


(snipped fusion block concept, left the meat of it)

why couldn't something be implemented where the above fork turns into a bleb

                            .
._._._._._._._._._./\.
                          \,/

which is eventually resolved to a fusion block

._._._._._._._._._._!_._



First here is a link to Peter R's paper without all the marketing clutter generated by Scribd http://www.bitcoinunlimited.info/downloads/feemarket.pdf.

My take is that the solution to this issue is to mitigate it by creating a proper fee market in Monero. This is something I am actually very interested in working on. In such a scenario the transactions in the memory pool would be ordered by fee per KB, a rational miner would prioritize them in order of return, and the sender would pay a fee depending on the priority desired for a transaction. The existing penalty function for oversize blocks in Monero provides a start, but it and the fee structure will likely have to be optimized in such a way that minimizes the cost to legitimate users while maximizing the cost of spam and attacks.

I don't know if we're talking about the same issue. If I'm not mistaken, the bolded above is actually quite possible and could already be implemented with the existing code. My question revolves around this: in a world where Monero is the blockchain and has very high activity, the large blocks being created will probably cause centralization onto infrastructure with high bandwidth. This is simply because nodes connected to each other on wide pipes will be able to sling 200 MB blocks to each other. The first miner to get a new 200 MB block has the best chance of making the next block. That's Satoshi consensus, right?

To keep it decentralized, we need to find a way to keep the narrow-pipe miners in the game. The fusion block concept above may do that. Actually, the fusion block with the light block concept would really do the trick.

So ultimately, yes, the fee market will exist. But even with a fee market we still run into the centralization problem. Thus, instead of slinging 200 MB blocks around, a block solver can send the block index in advance (I've described this concept in the lightblock thing), which is significantly smaller, and a miner having received this block index can immediately create an alternative block that doesn't include those transactions, and either attempt to create a fusion block or the next block.
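The block-index idea can be sketched roughly like this (hypothetical structure, in the spirit of what was later called compact blocks in Bitcoin): announce only the ordered txids, and let peers rebuild the block body from their own mempool, fetching only the transactions they are missing.

```python
def make_block_index(txids):
    """Hypothetical 'light block' announcement: ordered txids only,
    instead of megabytes of full transaction data."""
    return {"txids": list(txids)}

def reconstruct_block(index, mempool):
    """Rebuild the block body from the local mempool; only the
    transactions this node is missing must be fetched over the wire."""
    have, missing = [], []
    for txid in index["txids"]:
        if txid in mempool:
            have.append(mempool[txid])
        else:
            missing.append(txid)
    return have, missing
```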

And finally, is there any reason why a miner can't be working on different blocks simultaneously?
legendary
Activity: 2282
Merit: 1050
Monero Core Team
There was nothing on-topic ("Improvement Technical Discussion") in your last post, thus no need for any further response. Please stay on topic.


Yeah I didn't want to delete things because I just figured this would run its course eventually so might as well. Are we done?

So five pages ago I posted this


So I was reading this
http://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit#scribd

and my thoughts started to drift when I encountered the concept that orphanization is one of the impediments to picking what to mine and the whole block size fee market debate etc...

Is there any work in this space regarding what could be called sister blocks, or fusion blocks?

Basically, the way I understand it (and granted, my assumptions could be flawed) is that there exists a set of transactions in the mempool. We'll just use 5 here

Trans1
Trans2
Trans3
Trans4
Trans5

If miner A decides to put 1,2,3 in his block (block A), and miner B decides to put 3,4,5 in his block (block B), they are both technically valid blocks (they both have the previous block's hash and contain valid transactions from the mempool). However, due to the nature of satoshi consensus, if block A makes it into the chain first, block B becomes orphan - even though it is entirely valid.

It's even easier to understand the inefficiency of satoshi consensus if block A has 1,2,3 and block B has 4,5. In this case, there's really no reason both blocks aren't valid.

I see now as I continue to think about this the problem lies in the transaction fees associated with each transaction, for if they exist in two blocks, which block finder gets the reward? But this isn't an intractable problem.

Essentially what I'm thinking is that you can imagine these two blocks existing as blebs on the chain.
                             .
._._._._._._._._._./
                            \,

each dot is a block, and the comma indicates a sister block
in current protocol, this would happen
                             ._._._
._._._._._._._._._./
                            \,_,

And eventually one chain would grow longer (which is ultimately influenced by bandwidth) and the entire sister chain would be dropped, and if your node was on that chain you'd experience a reorg (right?).

why couldn't something be implemented where the above fork turns into a bleb

                            .
._._._._._._._._._./\.
                            \,/

which is eventually resolved to a fusion block

._._._._._._._._._._!_._


where the ! indicates a fusion block. When encountering a potential orphan scenario (the daemon receives two blocks in close proximity, or has already added a block but then receives a similar block for the same block height), instead of the daemon rejecting one as an orphan, it scans the sister block as a candidate for fusion. There would be some parameters (X% of transactions overlap; only concurrent block heights are candidates, which is effectively the time window). As part of this, the system would somehow need to be able to send transaction fees to different block finders, but again this seems tractable (though I await being schooled as to why it's not possible). In addition, the block reward itself would need to be apportioned.
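One conceivable apportionment rule, purely as an existence proof that the fee-splitting problem is tractable (nothing in the thread fixes these exact fractions): each finder keeps the fees unique to their block, while shared-transaction fees and the base reward are split evenly.

```python
def apportion_rewards(fees_a, fees_b, base_reward):
    """Illustrative split for a fusion block: unique fees go to each
    block's finder; fees of shared (identical) transactions and the
    base reward are halved.  fees_a/fees_b map txid -> fee."""
    shared = fees_a.keys() & fees_b.keys()
    shared_fees = sum(fees_a[t] for t in shared)  # identical in both blocks
    unique_a = sum(f for t, f in fees_a.items() if t not in shared)
    unique_b = sum(f for t, f in fees_b.items() if t not in shared)
    half = shared_fees / 2 + base_reward / 2
    return unique_a + half, unique_b + half
```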

Or is this what a reorg does? The way I understand reorgs, this is different than a reorg.

Though upon creation of the fusion block a reorganization would have to occur. So at the cost of overall bandwidth we provide a countermeasure for the loss of economic incentive for large blocks.

And one problem to address is that you would need a new block header for the fusion block, but this could really just be the hash of the two sister blocks. Both sisters are valid, therefore the hash of those valid blocks is valid.
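A minimal sketch of that header construction, with a deterministic ordering so every node derives the same fusion-block id regardless of which sister it saw first (the hash choice here is illustrative, not Monero's):

```python
import hashlib

def fusion_header(sister_hash_a, sister_hash_b):
    """Fusion-block id as a hash over both sister block hashes.
    Sorting first makes the result order-independent, so every node
    derives the same id whichever sister arrived first."""
    lo, hi = sorted((sister_hash_a, sister_hash_b))
    return hashlib.sha256((lo + hi).encode()).hexdigest()
```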

Ok back to work.


Is the above possible or am I crazy?

Edited to add: I will be enforcing the moderation again. Please talk technical improvements to monero.

First here is a link to Peter R's paper without all the marketing clutter generated by Scribd http://www.bitcoinunlimited.info/downloads/feemarket.pdf.

My take is that the solution to this issue is to mitigate it by creating a proper fee market in Monero. This is something I am actually very interested in working on. In such a scenario the transactions in the memory pool would be ordered by fee per KB, a rational miner would prioritize them in order of return, and the sender would pay a fee depending on the priority desired for a transaction. The existing penalty function for oversize blocks in Monero provides a start, but it and the fee structure will likely have to be optimized in such a way that minimizes the cost to legitimate users while maximizing the cost of spam and attacks.
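The prioritization described above amounts to a greedy selection by fee per KB. A minimal sketch (illustrative only; Monero's actual oversize-block penalty function is not modeled):

```python
def select_txs(mempool, max_kb):
    """Greedy fee-per-KB selection: take the best-paying transactions
    per unit of block space until the size budget is exhausted."""
    ranked = sorted(mempool, key=lambda tx: tx["fee"] / tx["kb"], reverse=True)
    chosen, used = [], 0
    for tx in ranked:
        if used + tx["kb"] <= max_kb:
            chosen.append(tx)
            used += tx["kb"]
    return chosen
```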
legendary
Activity: 1470
Merit: 1030
Wondering if anyone has looked at the possibility of using the iPhone as a mining device. I understand that since the 5S, AES has been present in the hardware, and AES hardware is essential for efficient CryptoNight mining. A combined wallet/miner app could be very compelling; I'm thinking mining while charging overnight. It would certainly require jailbreaking the device, if it's possible at all.
sr. member
Activity: 420
Merit: 262
You say that zk-snarks can't do any worthwhile scripts, yet an entirely anonymous coin has been implemented with them that offers superior anonymity to CN/RingCT.

Your stubbornness is the main reason I can't work with you. Leadership requires symbiosis of ideas and directions. It requires vision.

Smooth, it is quite evident that open source needs leadership. Direction is not likely to be driven by a random contributor (if he were that capable, he would fork or otherwise start his own project rather than battle against a leadership that is not focused on the direction the contributor wants to go).

Edit: you apparently haven't even yet quantified the metrics, which is pretty lame for a competitor not to do.

Edit#2: perhaps if you all spent less time talking about market movements on the exchanges, and more time doing technical research and marketing research...
legendary
Activity: 2968
Merit: 1198
What is missing from your analysis, smooth, is at what level of featureness businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.

You are cherry picking points. zk-snarks scripts wasn't my only nor even my main justification.

Afaik, zk-snarks can implement any circuit if one accepts the proving time and verification time (there might also be some other resource constraint, such as RAM, but I think not), with proving time being much worse than verification time. And one would expect that it could be radically sped up with ASICs to enable more complex circuits to be verified in realistic time!

All of this is entirely consistent with what I said about coming back in a few years. The proving and verification times for Zerocash, with the most simple scripts possible, are barely feasible to use, and even that might be disputed.

Quote
The point is there are very likely some simple scripts that can surely be done with zk-snarks in realistic times, and which are very useful for businesses interoperating on the block chain. IoT is one likely candidate and probably many more.

Implementing custom scripts for specific use cases is way out of scope for Monero as I understand the project. But it is open source after all, so if interesting pull requests are submitted they would probably be merged (after sufficient testing and review). That could include "useful" scripts along with an open source zksnark library.

Quote
"build it and they will come after 5 years" is a nice pitch to speculators, but in my line of work I had to produce a marketed product to earn an income. You worked in (programming for) finance (something you acknowledged recently in a public post), thus I assume you never had to do this. So I understand that in for-profit software the mantra is "ship it, sell it"; otherwise projects go on and on and on and are never finished.

Again consistent with take a break and come back when the technology is ready for a "build it, ship it, sell it" approach.

Also this is entirely irrelevant to Monero since Monero is an open source project not a product. So off topic for the thread. Please respect the thread starter, the forum, and the community and try to stay on topic.
sr. member
Activity: 420
Merit: 262
What is missing from your analysis smooth is that at what level of featureness are businesses willing to embrace block chains. I argue CN/RingCT is below the acceptable level and can not be raised to that level because the fundamentals are not End-to-End principled (also because can only make the payers, payees, and values obscured and not any type of script and other aspects of the block chain data). Business will prefer private databases where they can hide all the data until public block chains mature enough to do so. Public block chains promise more interoption and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.

You are cherry picking points. zk-SNARK scripts weren't my only, nor even my main, justification.

AFAIK, zk-SNARKs can implement any circuit if one accepts the proving time and verification time (there might also be some other resource constraint such as RAM, but I think not), with proving time being much worse than verification time. And one would expect that it can be radically sped up with ASICs, enabling more complex circuits to be handled in realistic time!

The point is there are very likely some simple scripts that can surely be done with zk-SNARKs in realistic times, and which are very useful for businesses interoperating on the block chain. IoT is one likely candidate, and there are probably many more.

"Build it and they will come after 5 years" is a nice pitch to speculators, but in my line of work I had to produce a marketed product to earn an income. You worked in (programming for) finance (something you acknowledged recently in a public post), thus I assume you never had to do this. So I understand that in for-profit software the mantra is "ship it, sell it"; otherwise projects go on and on and on and are never finished.
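To make the "any circuit" claim concrete: in practice the statement to be proven is flattened into a rank-1 constraint system (R1CS), which is the form SNARK constructions such as the one underlying Zerocash actually work over. The toy Python sketch below (illustrative only: no proving, no zero knowledge, and all variable names are made up, not from any real SNARK library) shows just the constraint-satisfaction step for the statement "I know x such that x³ + x + 5 = 35":

```python
# Toy R1CS illustration (NOT a real zk-SNARK). The statement
# "I know x such that x^3 + x + 5 == 35" is flattened into four
# rank-1 constraints of the form (A_i . s) * (B_i . s) == (C_i . s),
# where s is the witness vector. A real SNARK proves such a system
# is satisfied without revealing s.

def dot(row, s):
    return sum(a * b for a, b in zip(row, s))

def r1cs_satisfied(A, B, C, s):
    """True iff every constraint (A_i . s) * (B_i . s) == (C_i . s) holds."""
    return all(dot(a, s) * dot(b, s) == dot(c, s) for a, b, c in zip(A, B, C))

# Witness vector layout: [1, x, out, t1, t2, t3]
# with t1 = x*x, t2 = t1*x, t3 = t2 + x, out = t3 + 5
A = [[0,1,0,0,0,0], [0,0,0,1,0,0], [0,1,0,0,1,0], [5,0,0,0,0,1]]
B = [[0,1,0,0,0,0], [0,1,0,0,0,0], [1,0,0,0,0,0], [1,0,0,0,0,0]]
C = [[0,0,0,1,0,0], [0,0,0,0,1,0], [0,0,0,0,0,1], [0,0,1,0,0,0]]

witness = [1, 3, 35, 9, 27, 30]                 # honest prover: x = 3
print(r1cs_satisfied(A, B, C, witness))          # True
print(r1cs_satisfied(A, B, C, [1, 4, 35, 16, 64, 68]))  # False: x = 4 fails
```

The proving-time cost TPTB refers to grows with the number of such constraints, which is why only simple scripts are feasible today.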
legendary
Activity: 2968
Merit: 1198
What is missing from your analysis, smooth, is the level of features at which businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases, where they can hide all the data, until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

There is no feasible technology to do non-trivial scripts using zksnarks at this time. It doesn't exist. Zerocash is pushing the limits already.

While there may be a market for zero knowledge smart contracts on a blockchain, that doesn't even matter because it can't be implemented.

Perhaps if you think that is the only market that exists you should just take a break and come back to the space in a few years and reevaluate.
sr. member
Activity: 420
Merit: 262
What is missing from your analysis, smooth, is the level of features at which businesses are willing to embrace block chains. I argue CN/RingCT is below the acceptable level and cannot be raised to that level because the fundamentals are not End-to-End principled (and also because it can only obscure the payers, payees, and values, not any type of script or other aspects of the block chain data). Businesses will prefer private databases, where they can hide all the data, until public block chains mature enough to do so. Public block chains promise more interoperation and network effects, once we can make them truly private.

I try to light a fire under you guys to get you refocused on technology that can meet your goal of being a privacy block chain for businesses. That is where the real market is.
legendary
Activity: 2968
Merit: 1198
So can we conclude that Monero's underlying cryptonote technology will not be the best privacy technology forever?

Can we conclude that Monero is one of the few fully functioning private cryptocurrency networks currently?

Can we conclude that off chain data (ip addresses) are something that needs to be addressed for all private cryptocurrency networks?

Can we conclude that a possible technical improvement to Monero would be some kind of zero-knowledge proof thing?

TPTB, I commend your enthusiasm, but one of the problems I think in this conversation is a lack of brevity. No one has time to read ALL of this, so things are missed, and you get frustrated. If you want to have useful discussions, it's probably better to not have paragraphs of text, regardless of how much needs to be said. Writing 1 paragraph is much more difficult than writing 10 pages.

Off the top of my head, to return the favor for you not deleting posts (I may be missing a few points):

  • zk-SNARKs can be used to make any script anonymous, not just currency as with CN/RingCT.
  • Anonymity of Zerocash (ZC) is never compromised by compromising the master key; only the coin supply is.
  • ZC makes the entire block chain a blob uncorrelated to metadata, whereas CN/RingCT have distinct UTXOs which can be so correlated.
  • ZC doesn't require Tor/I2P, thus has more degrees of freedom and is End-to-End principled, whereas CN/RingCT are not.
  • Both ZC and CN/RingCT can lose anonymity or suffer an undetectable increase in coin supply if the crypto is cracked.
  • CN/RingCT has lowest-common-denominator anonymity, which is usually I2P, i.e. maybe 99% vs 99.999% for ZC.
  • Businesses will favor ZC as the more provable choice with more End-to-End freedom.
  • I think the chance of jail time when using CN/RingCT for any action that the State doesn't want you to do is very high. The anonymity is not robust, as I summarized above.
  • I can't think of any user-adoption markets of any significant size for CN/RingCT, other than selling it to speculators. In other words, I view CN/RingCT as just another pump job, albeit with some strong developers (who hopefully will get better leadership).
  • I am saying that CN/RingCT is not a viable technology. So arguing that it is the best we have for now IMO doesn't make much sense, unless that is just a sales pitch to speculators (again keeping in mind Securities Law and the Howey test in the USA, and the implications of leading speculators into an investment with a misleading prospectus not registered with the SEC).

As you dug deeper into the topic and talk about businesses adopting ZC rather than CN: does ZC have the option to be auditable? Real businesses favor something that can be audited. Can you actually prove you own xxx amount of ZC without handing over your whole keys? CN has the view key for that; what does ZC have? (Besides, neither of our guesses about what businesses will or will not adopt holds any fact or argument, as it's not up to us but up to those who run the businesses.)

Sorry for the bad English, hope you get the points.

ZC is very immature at this point. You can't even make payments with more than one output. No multisig or other sorts of contracts, even simple ones. There is no "view key", though it has been mentioned that one could be added. For more complex contracts, the current approach will be infeasible for the foreseeable future (it is barely feasible for simple coin pours -- it takes 1 minute on a desktop). Eventually that stuff can be worked out (feasibility beyond a certain point is not guaranteed, though, just reasonable to expect eventually with technological advances), but we are talking about some indefinite future.

There is always going to be some better technology on the horizon. By the time Zerocash becomes more mature, there will likely be something else on the horizon that is superior in various ways, yet itself not mature. And so it continues.
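For context on the view-key auditing that CN has and ZC lacks: the auditor re-derives the sender's Diffie-Hellman shared secret from the published transaction public key and the private view key, and can thereby recognize outputs without any ability to spend them. A minimal toy sketch (assumption: real CryptoNote uses Ed25519 point arithmetic; here a stand-in group based on modular exponentiation is used so the structure is visible, and all key values are made up):

```python
# Toy sketch of CN-style view-key scanning. Illustrative only: the real
# scheme works over the Ed25519 curve; here "scalar * basepoint" is
# modeled as modular exponentiation in a prime-order-ish toy group.
import hashlib

P_MOD = 2**127 - 1   # Mersenne prime modulus for the toy group
G = 5                # toy generator

def point(k):
    """Stand-in for scalar-times-basepoint."""
    return pow(G, k, P_MOD)

def H(x):
    """Hash-to-scalar."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % P_MOD

# Recipient key pair: a = private view key, b = private spend key.
a, b = 123456789, 987654321
A, B = point(a), point(b)          # the public address is (A, B)

# Sender picks a random tx secret r, publishes R = rG, and derives a
# one-time output key from the shared secret A^r (= g^(ar)).
r = 555555
R = point(r)
one_time_key = (point(H(pow(A, r, P_MOD))) * B) % P_MOD

# An auditor holding only the view key a computes the same shared secret
# as R^a (= g^(ra)) and recognizes the output -- without spend capability.
candidate = (point(H(pow(R, a, P_MOD))) * B) % P_MOD
print(candidate == one_time_key)   # True: output detected with view key alone
```

Spending additionally requires the private spend key b, which is why handing an auditor the view key reveals incoming funds but not control of them.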
sr. member
Activity: 420
Merit: 262
As you dug deeper into the topic and talk about businesses adopting ZC rather than CN: does ZC have the option to be auditable? Real businesses favor something that can be audited. Can you actually prove you own xxx amount of ZC without handing over your whole keys? CN has the view key for that; what does ZC have? (Besides, neither of our guesses about what businesses will or will not adopt holds any fact or argument, as it's not up to us but up to those who run the businesses.)

Sorry for the bad English, hope you get the points.

Good point. Someone should check.

P.S. I edited my prior post.