
Topic: [XMR] Monero Improvement Technical Discussion - page 2.

legendary
Activity: 1260
Merit: 1008
Thought of this because I came across another Bitcoin block propagation network, and I hope that Monero can be designed to avoid any type of centralizing development.

I'm jumping ahead possibly many years, but suppose we encounter scaling issues regarding transaction propagation. Presumably, there are logical means for light block propagation that have been described elsewhere. But what if we encounter the fact that transactions do not propagate quickly enough? Monero transactions are larger than your average cryptocurrency's, and it's unknown whether a ring size of 3 (a mixin of 2 in the old parlance) will stay the standard. I.e., what if a ring size of 10 becomes necessary? Or 100?

So my main question is whether the current block hash is made from all of the information in the transactions (inputs, outputs, etc.), or from the hashes of the transactions.

If it's made from the hash of the transaction, then we could implement hash-first transaction propagation. So you could imagine the network cache being split into more layers: a hash pool (just the hashes), then the transaction pool (hashes + the rest of the transaction data), and then however the rest of the cache is divvied up (i.e., if light blocks are implemented, there's another caching layer).

So, first the transaction hash races through the relay network at its measly size of whatever it is, 32 bytes? Every node that gets it can now start mining with that transaction in its block. Presumably, while they are mining on this transaction, the rest of the data will catch up. Hell, it could even be possible to solve and broadcast a block before the rest of the transaction data even arrives.

If it's made from the entire transaction, then we'd first need to modify the block architecture to just be a hash of the transaction hashes.

Hrm... I see it now. The main problem is validation. But then again, if the transaction data associated with a given hash turns out to be crap, then the transaction data isn't stored in the blockchain. So at worst you have empty hash placeholders in the blockchain (which could be pruned); you sacrifice blockchain bloat (and potentially grease the ability to spam the network) for transaction propagation speed.
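To make the layering concrete, here's a toy Python sketch of the two-tier pool idea. None of this is actual Monero code - the class and method names are made up, and sha3_256 is just a stand-in for Keccak:

Code:
# Hypothetical sketch of the two-tier pool described above, NOT Monero's
# actual relay code. Names (hash_pool, tx_pool) are invented.
import hashlib

class RelayNode:
    def __init__(self):
        self.hash_pool = set()   # tier 1: bare 32-byte tx hashes, relayed first
        self.tx_pool = {}        # tier 2: hash -> full transaction bytes

    def on_hash(self, tx_hash: bytes):
        """A bare hash arrives ahead of the transaction body. The miner
        can already commit it into a block template; the body is expected
        to catch up before (or even after) the block is solved."""
        self.hash_pool.add(tx_hash)

    def on_transaction(self, tx_bytes: bytes) -> bool:
        """The full body arrives; check it matches a previously relayed hash."""
        tx_hash = hashlib.sha3_256(tx_bytes).digest()  # stand-in for Keccak
        if tx_hash not in self.hash_pool:
            return False         # unsolicited body, ignore
        # ... full validation (signatures, key images) would happen here ...
        self.tx_pool[tx_hash] = tx_bytes
        return True

    def block_template_hashes(self):
        """Hashes eligible for mining, including bodies not yet received."""
        return sorted(self.hash_pool)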

Unless there were some way to determine whether a hash is valid. But then we're defeating the whole purpose of one-way functions (unidirectional mathematics, or whatever the term is).

legendary
Activity: 2282
Merit: 1050
Monero Core Team
This brings us back to the Cryptonote adaptive blocksize limit combined with a tail emission found in Monero where:
1) The cost of mining a block is set by the block subsidy

Correct, meaning the amount miners spend on hashrate will equal the block subsidy[1] (where the block subsidy will ultimately be Monero's perpetual tail reward, which is necessarily a fixed number of coins), because (as I pointed out in our prior discussion) transaction fees will trend to costs: the median block size M_N will trend upwards to match market demand, so there is no pricing power on transaction fees.

[1] Note this means the tail reward security of Monero will be very weak and insufficient.

2) The total amount in fees per block has to rise to a number comparable to, but most of the time smaller than, the block subsidy.

You wrote that before in our prior discussion:

The reason the above two scenarios do not apply to a Cryptonote coin with a tail emission such as Monero becomes apparent when one considers the economics of the total block reward components of fees and base reward (new coin emission). If the total in fees per block significantly exceeds the base reward, then it becomes economically attractive for miners to burn coins to the penalty by mining larger blocks. The block size rises until the total fees per block fall below a level where it is uneconomic for the miners to pay the penalty by increasing the blocksize. This level is comparable to the base reward. It is at this point that the need for a tail emission becomes clear, since without the tail emission the total block reward (fees plus base reward) would go to zero.

And it still doesn't make any sense to me. The block size will trend upwards to match transaction demand: miners can justify burning some of the transaction fees to the penalty, which drives the median block size upwards, which in turn drives the penalty back to 0. The median block size has no incentive to decrease again, so transaction fees then fall to costs.
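For readers following along, here is a back-of-the-envelope model of the penalty we are arguing about. The formula (penalty = base reward × (B/M - 1)^2 for blocks between the median M and 2M) is the published Cryptonote rule; the parameter values below are invented for illustration:

Code:
# Back-of-the-envelope model of the Cryptonote blocksize penalty.
# Parameter values are made up for illustration only.
def penalty(base_reward: float, block_size: int, median: int) -> float:
    """Cryptonote rule: base_reward * ((B/M) - 1)^2 for M < B <= 2M."""
    if block_size <= median:
        return 0.0
    assert block_size <= 2 * median, "blocks beyond 2x the median are invalid"
    return base_reward * ((block_size / median) - 1) ** 2

base_reward = 0.6       # XMR per block, roughly the planned tail reward
median      = 300_000   # bytes, a made-up current median

# A miner grows the block past the median only while the marginal fees
# collected exceed the marginal penalty, so fees get burned against the
# penalty until the two balance out - the dynamic both posts describe.
for size in range(median, 2 * median + 1, 60_000):
    print(f"B = {size:>7,} bytes  penalty = {penalty(base_reward, size, median):.4f} XMR")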

Sorry, as I told you before, Monero does not solve the Tragedy of the Commons in Satoshi's design. It does adaptively increase the block size while preventing spam surges.

I doubt John Conner's design has achieved any better because, as I explained in our prior discussion, there is no decentralized solution to that Tragedy of the Commons in current proof-of-work designs. I have a solution, but it is a very radical change to the proof-of-work design that relies on unprofitable mining by payers.

As far as I can see, Monero has not solved the Tragedy of the Commons in Satoshi's design. I reiterated my rebuttal to ArticMine:

https://bitcointalksearch.org/topic/m.14599446

Clarification:

The security of a coin will be tightly tied to its transaction rate × average transaction size, i.e., adoption velocity and the wealth behind that velocity. The problem I have with the fixed-size tail reward, as compared to the design I am contemplating, is that the tail reward only captures those metrics indirectly, through exchange-price appreciation. I am not sure the two models are equally powerful. I will need to think more deeply about it. My design also has an orthogonal tail reward.

Edit: some aspects of Monero's tail reward and block size adjustment algorithm are analogous to aspects of my design. There are some other things I didn't mention. I will need to really take the time to distil this into a carefully written white paper. So I would caution readers not to form any concrete conclusions (either for or against any design mentioned here) from these vague discussions.

BTW, I would suggest that Tragedy of the Commons is an ineffective analogy for explaining whatever it is you are trying to explain because obviously-intelligent people such as ArticMine don't understand it. It may be that you are entirely correct, but if you want to communicate effectively you need a differently-worded explanation.

Agreed at the appropriate time. I deem it necessary to be vague since I am months (or moar!) away from implementing my design.

The fundamental issue here is that transaction fees should not be seen as a way to secure a Cryptonote coin such as Monero. The security of the coin is based on the base reward. Now that does not mean that transaction fees will tend to zero and stay there. In fact, one would expect the total transaction fees per block to reach, for the most part, an equilibrium at some fraction of the block reward.

To understand this we must understand that while transaction fees are needed to overcome the penalty in order to increase the blocksize, there is no rebate on the penalty when the blocksize falls. This means that just normal fluctuation in transaction demand will require a significant fraction of the block reward in transaction fees. The median blocksize adjustment time is less than a day in Monero. In practice one would expect total transaction fees per block to temporarily approach or even slightly exceed the block reward if there is a sharp rise in transaction demand, and conversely it is possible for transaction fees to temporarily fall close to zero if there is a sharp drop in transaction demand.

Since security is only as strong as its weakest point, for this reason alone one cannot count on transaction fees to secure a Cryptonote coin. The implication for Monero is that the tail emission alone becomes the source of POW security. A further important conclusion is that a coin that uses a Cryptonote adaptive blocksize (or something similar) and does not have a tail emission or equivalent (for example, demurrage) big enough to secure the coin will become insecure and fail.

In the case of Monero one must keep in mind that over time one would expect the tail emission to reach an equilibrium with lost coins for a given purchasing power of Monero. The assumption here is that lost coins are proportional to the total emission for a given purchasing power and transaction velocity.

One advantage that Monero has here is that monitoring Bytecoin could provide a significant early indication of the risk of too low a tail emission, with a two-year or more lead time. Paradoxically, this early-warning system works because of the Bytecoin two-year premine / ninjamine. The Bitcoin inflation rate will also fall below that of Monero, so it could likewise provide an advance indication of risk. As has been indicated above, this risk is of course largely mitigated by exchange-rate appreciation, which would normally correlate very strongly with use of the currency.
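A quick sketch of that lost-coin equilibrium, under the same assumption as above (a fixed fraction of supply lost per year; the loss rate here is a number I am making up): supply follows dS/dt = E - λS and converges to S* = E/λ.

Code:
# Lost-coin equilibrium under the stated assumption: a fixed fraction
# lam of the outstanding supply is lost per year (lam is invented),
# while the tail emission adds E coins per year. dS/dt = E - lam*S.
E   = 157_680        # ~0.6 XMR/block * ~262,800 two-minute blocks/year
lam = 0.005          # assumed annual loss rate (made up)

S = 18_400_000       # approximate supply when the tail emission begins
for year in range(500):
    S += E - lam * S
print(f"supply after 500 years: {S:,.0f} XMR")
print(f"analytic equilibrium E/lam: {E / lam:,.0f} XMR")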
legendary
Activity: 1750
Merit: 1036
Facts are more efficient than fud
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited?

Human populations don't grow faster than Moore's law. Duh.

Disk arrays scale to anything we can fathom.

The issue is that no blockchain consensus can maintain decentralization of validation, not because of scaling problems but because of the fundamental economic reality that not every miner can have an equal share of the hashrate, so verification costs are not shared equally. This creates an asymmetry where economies of scale maximize profit and grow hashrate the fastest, thus centralizing mining.

The solution requires some clever innovation on proof-of-work.

Not necessarily: algorithms could be programmed to move your funds between coins if an attack threshold is passed. Of course this requires trustless exchanges to fill the gap left by the inability to decentralize mining, and it also produces lemming effects if the coin's mining doesn't adjust responsively, though this assumes the attacker is just greedy and not actually trying to destroy the coin. In a full-on attack on crypto, a war to end the battles, the algorithm approach forces the attack to be multi-pronged, but it would not prevent coins from attacking each other to gain market share, though my guess is miners would make algorithm adjustments and speed becomes an asset as much as hashing power. I'm rambling possibilities; my guess is that there are some noob assumptions to flesh out, if not totally dismiss.
hyc
member
Activity: 88
Merit: 16
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited? Are there any proposals for a solution to the new faulty database that can easily fail an SSD? So far I've failed two during initial download at the 50% mark, and the disk writes were well into the hundreds of gigabytes (what is this database doing?).

To be blunt, you're full of s#it.

https://gist.github.com/hyc/33f3eec6bae83246209d
sr. member
Activity: 420
Merit: 262
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited?

Human populations don't grow faster than Moore's law. Duh.

Disk arrays scale to anything we can fathom.

The issue is that no blockchain consensus can maintain decentralization of validation, not because of scaling problems but because of the fundamental economic reality that not every miner can have an equal share of the hashrate, so verification costs are not shared equally. This creates an asymmetry where economies of scale maximize profit and grow hashrate the fastest, thus centralizing mining.

The solution requires some clever innovation on proof-of-work.
sr. member
Activity: 596
Merit: 251
How does Monero propose to resolve the fact that its blockchain is growing faster than Moore's Law and pruning is so limited? Are there any proposals for a solution to the new faulty database that can easily fail an SSD? So far I've failed two during initial download at the 50% mark, and the disk writes were well into the hundreds of gigabytes (what is this database doing?).
legendary
Activity: 1260
Merit: 1008
Ye olde selection-of-outputs problem. We'll call this the Monero problem. Maybe it will go down in the books like the Byzantine generals problem or whatever.

I have a list of things. Some of the things in this list are mine - I own them. I want to use some of my things in this list without an observer knowing which of the things I'm using are actually mine. Therefore, I select my thing and some others as decoys, so that the observer doesn't know which of the things is mine. However, the nature of the things is such that once they are used by their true owner, they cannot be used again. Thus, older things in this list have a higher probability of already having been used and of only appearing as decoys, even though there is no way to determine whether a thing has been used.

Thus, how do I select my set of decoys to have the highest probability of appearing to be unused?

The triangular distribution (I think this is what is currently used in Monero), as demonstrated by mWo12 here: http://pastebin.com/raw/4TzcF9b9 , seems to produce the following pattern:

1. recent outputs - highly likely unused: seen as the columnar pattern on the far right.
2. then, as far as I can tell, completely random selection throughout the blockchain. It is unknown what the probability of usage actually is.

Probability of usage - this is probably something that we could define somehow. Out-of-band information tells us that the probability of usage increases with more users of the blockchain, but how this manifests in the blockchain is a different beast.
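For concreteness, here is a minimal sketch of what I understand the triangular pick to be - index = floor(n · sqrt(u)) over the global output indices, which biases the density linearly toward recent outputs. This is my reading of the behavior in mWo12's plot, not a copy of the actual wallet code:

Code:
# Sketch of triangular decoy selection as I understand it: pick
# idx = floor(n * sqrt(u)), u uniform in [0,1), giving a density that
# rises linearly toward the newest outputs. Not the actual wallet code.
import math
import random

def triangular_pick(num_outputs: int) -> int:
    """Global output index, linearly biased toward recent outputs."""
    return int(num_outputs * math.sqrt(random.random()))

def select_ring(num_outputs: int, real_index: int, ring_size: int) -> list:
    """The real output plus (ring_size - 1) distinct decoys."""
    ring = {real_index}
    while len(ring) < ring_size:
        ring.add(triangular_pick(num_outputs))
    return sorted(ring)

print(select_ring(num_outputs=10_000_000, real_index=9_990_000, ring_size=3))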

legendary
Activity: 1260
Merit: 1008
As I was dozing off last night I remembered a possible point... this puts a level of trust back into the system, which seems anathema to a trustless currency system. Unless we come up with a way to decentralize the auditing mechanism?

i dunno ... we share hashes of our wallets ?

fresh morning thoughts. possibly useless.
legendary
Activity: 2968
Merit: 1198
I guess ultimately it's a matter of keeping your wallet.bin safe and non-corrupted once this becomes a stable thing. But I've gone through at least 5 wallet.bins for every one of my accounts by now.

Have you seen the corruption recently? I think the wallet saving was changed to not overwrite in place within the past several months, which should eliminate most of the corruption. In time it will be replaced with a database.
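For anyone curious, the no-overwrite-in-place idea is the standard write-temp-then-rename pattern: a crash leaves you with either the old wallet or the new one, never a half-written file. A generic sketch (not the actual simplewallet code):

Code:
# Generic atomic-save pattern (write temp file, fsync, rename), not the
# actual simplewallet code. os.replace is atomic on POSIX filesystems,
# so a crash leaves either the old wallet.bin or the new one intact.
import os
import tempfile

def atomic_save(path: str, data: bytes) -> None:
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".wallet-tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make the bytes durable before renaming
        os.replace(tmp, path)      # atomically swap in the new file
    except BaseException:
        os.unlink(tmp)             # clean up the partial temp file
        raise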


legendary
Activity: 1260
Merit: 1008
Over the past couple of weeks, I've seen at least two instances where it's become apparent the auditability of Monero transactions is inadequate

It isn't entirely clear to me whether it is or not. Is auditability of cash or gold inadequate?

The auditability comes from surrounding processes and not from the asset itself.

Maybe it is a mistake to try to push auditability too far into the system itself. The current receive view keys might be a reasonable middle ground, because with one it can be proved that you received some funds. It is then up to you to show what you did with them, and if you can't you can be held responsible (for embezzlement, etc.).

Valid points - separating the asset from the rest of the mechanisms. I would argue, then, as have others, that we need to come up with a better way to describe the actual auditability capabilities of the extant Monero network. I was under the impression, until I learned better, that Monero was private / opaque by default, but that you could then turn it into a bitcoin-style thing if you gave someone the ability to do so. I think the term "viewkey" has this baked into it: "oh, now I can view everything, where before I couldn't, because cryptography".

I just don't like the bolded part because it's more work, and it will probably be the number 1 reason future Monerians would use a centralized service - to keep them compliant by keeping all their "receipts" in order. I guess ultimately it's a matter of keeping your wallet.bin safe and non-corrupted once this becomes a stable thing. But I've gone through at least 5 wallet.bins for every one of my accounts by now.
legendary
Activity: 2968
Merit: 1198
Over the past couple of weeks, I've seen at least two instances where it's become apparent the auditability of Monero transactions is inadequate

It isn't entirely clear to me whether it is or not. Is auditability of cash or gold inadequate?

The auditability comes from surrounding processes and not from the asset itself.

Maybe it is a mistake to try to push auditability too far into the system itself. The current receive view keys might be a reasonable middle ground, because with one it can be proved that you received some funds. It is then up to you to show what you did with them, and if you can't you can be held responsible (for embezzlement, etc.).
sr. member
Activity: 420
Merit: 262
I don't have a solution, but I'm hoping that we can collectively start thinking about one. Like I've said before, I don't understand the cryptography well enough to understand why it's difficult to view outgoing transactions without the ability to sign them.

Viewing the outgoing transactions would break the rings for others, not just your own, by reducing the anonymity sets - for example, if the entity you provided the viewkey to was a very popular entity to provide viewkeys to.

Other than that, I think it would be technically possible to make a viewkey for outgoing transactions by having two private keys and then proving in zero knowledge that the two ring signatures are equivalent, without giving up the private key that has the power to spend your outputs.
legendary
Activity: 1260
Merit: 1008
xpost: https://forum.getmonero.org/4/academic-and-technical/2525/enhancing-the-auditability-of-monero-transactions

Over the past couple of weeks, I've seen at least two instances where it's become apparent the auditability of Monero transactions is inadequate, and this is a common critique of Monero from those who dive past the limitations of the viewkey. It's incredibly ironic that auditability is an issue, but it does make sense. There needs to be a trustless way to share your financial information if you choose to do so.

To date, there are two mechanisms: the view key, which has the fault of only showing incoming transactions, and the transaction history kept by simplewallet, which has the fault of relying on trust: "I trust that you have logged all of the transactions associated with this account."
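To see why the view key only catches incoming outputs, here is a toy model of the detection check (integers mod a prime standing in for ed25519 points; every name and number is illustrative only). The receiver can recompute the one-time output key from the view key alone, but linking a later spend to an output needs the spend key:

Code:
# Toy model of view-key scanning. Integers mod a Mersenne prime stand in
# for ed25519 points; all names and numbers here are illustrative only.
import hashlib

P = 2**127 - 1                      # toy group modulus (not the real curve)
G = 3                               # toy generator

def scalarmult(k, point):           # stand-in for k*Point on the curve
    return pow(point, k, P)

def hash_to_scalar(*parts):
    h = hashlib.sha3_256(b"|".join(str(p).encode() for p in parts)).digest()
    return int.from_bytes(h, "big") % (P - 1)

a, b = 271828, 314159               # receiver's private view / spend keys
A, B = scalarmult(a, G), scalarmult(b, G)

# Sender picks a fresh tx key r and addresses output i to the receiver:
# R = r*G goes in the tx, and P_out = H(r*A, i)*G + B is the output key.
r, i = 999331, 0
R = scalarmult(r, G)
P_out = (scalarmult(hash_to_scalar(scalarmult(r, A), i), G) * B) % P

# Receiver scans with ONLY the view key a, since a*R equals r*A:
candidate = (scalarmult(hash_to_scalar(scalarmult(a, R), i), G) * B) % P
print(candidate == P_out)           # True -> incoming output detected

# No analogous test exists for spends: a spend is a ring signature over
# several candidate outputs, and the key image that would link it to this
# output is derived from the spend key b, which the view-key holder lacks.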

I don't have a solution, but I'm hoping that we can collectively start thinking about one. Like I've said before, I don't understand the cryptography well enough to understand why it's difficult to view outgoing transactions without the ability to sign them.
hyc
member
Activity: 88
Merit: 16
Random writes wear out an SSD much more rapidly because they cause entire sectors to be cleared and rewritten (potentially even moved) even if only one bit in the sector changes. Sequential writes are much healthier for the SSD.

Yes, but that's only part of the story. Flash sectors are 512 bytes each; LMDB writes in pages (4KB on common systems). So right off the top, the wearout factor is reduced by a factor of 8. Also, LMDB is copy-on-write, so rewriting of individual pages is a rare occurrence; rewriting of single random sectors is a non-occurrence.

The one factor we can't control for is filesystem and partition alignment - if your disk layout doesn't line up with a multiple of 2MB, you're probably fragmenting all your accesses. This is the essential part of using SSDs: flash memory can be read and written in sectors, but erases can only be done in erase blocks that are commonly 2 or 4 MB in size. The insane thing is that for years, flash drive vendors shipped their devices with a default disk geometry of 255 heads and 63 sectors/track, which gives you 16065 sectors per cylinder. Even though disk devices generally use LBA instead of CHS addressing, disk partitioning tools and filesystems still use cylinders. I always reformat to 256 heads and 32 sectors/track, which gives 8192-sector cylinders: 4MB. A misaligned partition can cause major performance degradation and accelerated wearout.
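The arithmetic, for anyone who wants to check it:

Code:
# The alignment arithmetic: legacy 255x63 geometry vs 256x32 geometry
# against a typical 4 MiB flash erase block.
SECTOR = 512                        # bytes per sector
ERASE  = 4 * 1024 * 1024            # bytes per erase block

legacy = 255 * 63                   # 16065 sectors/cylinder (~7.84 MiB)
sane   = 256 * 32                   #  8192 sectors/cylinder (exactly 4 MiB)

print(legacy * SECTOR % ERASE)      # 4030976 -> cylinders straddle erase blocks
print(sane * SECTOR % ERASE)        # 0 -> every cylinder boundary is erase-aligned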
sr. member
Activity: 420
Merit: 262
Random writes wear out an SSD much more rapidly because they cause entire sectors to be cleared and rewritten (potentially even moved) even if only one bit in the sector changes. Sequential writes are much healthier for the SSD.
hyc
member
Activity: 88
Merit: 16
I was doing some research on long-term storage solutions for work and came across the notion that the factor that limits an SSD's life is the number of writes performed. Reading from a cell doesn't really kill the cell; it's the need to write to a cell that will eventually kill it.

So, my question is whether LMDB currently writes (or can be modified to write) to the database only once for a given entry.

My gut tells me that - yeah - because the blockchain is just a database, once your node is synced up, everything is written. However, the inner workings of LMDB are unknown to me - so perhaps it has to rewrite certain locations?

Reading from a cell can wear it out too; it just takes longer to happen.

LMDB is a copy-on-write design, so it almost never overwrites existing pages. It is actually proven to be more SSD/flash-friendly than most other DB designs in existence.
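For illustration, here is what usage looks like through the py-lmdb binding (not monerod's actual C++ usage); with monotonically increasing keys the workload is effectively append-only, and committed pages are never rewritten in place:

Code:
# Illustration via the py-lmdb binding, not monerod's actual C++ usage.
import lmdb

env = lmdb.open("./blocks.mdb", map_size=2**34)   # 16 GiB map, grows lazily
with env.begin(write=True) as txn:
    # Monotonically increasing keys (e.g. block heights) make the workload
    # append-like; LMDB's copy-on-write never rewrites committed pages.
    txn.put((1000).to_bytes(8, "big"), b"<serialized block 1000>")
env.sync()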

Here's some relevant work on a flash-optimized database, compared against LevelDB - LevelDB is over 70x worse in terms of write amplification and data wearout. https://www.usenix.org/conference/atc15/technical-session/presentation/marmol

(LMDB is not compared in this work, but we have other tests that show LMDB's write amplification is far smaller than LevelDB's http://symas.com/mdb/#bench )

Relax. There is nothing going on in the database world that LMDB hasn't already solved.

I've been developing systems on SSDs for over a decade; I encountered and solved these wearout problems long before any other DB authors even knew they existed. LMDB was designed for solid-state storage.

http://forums.storagereview.com/index.php/topic/22805-does-mechanical-storage-have-a-future/#entry230060
http://forums.storagereview.com/index.php/topic/24749-160-gb-flash-ssd-anounced/#entry240229
legendary
Activity: 1260
Merit: 1008
I was doing some research on long-term storage solutions for work and came across the notion that the factor that limits an SSD's life is the number of writes performed. Reading from a cell doesn't really kill the cell; it's the need to write to a cell that will eventually kill it.

So, my question is whether LMDB currently writes (or can be modified to write) to the database only once for a given entry.

My gut tells me that - yeah - because the blockchain is just a database, once your node is synced up, everything is written. However, the inner workings of LMDB are unknown to me - so perhaps it has to rewrite certain locations?
legendary
Activity: 2282
Merit: 1050
Monero Core Team
I didn't intend to post in this thread again, but I seem to remember Monero will soon add multi-sig, and I wanted to make you aware of a potential 51% attack hole enabled by multi-sig:

https://bitcointalksearch.org/topic/m.14002317

Thanks
sr. member
Activity: 420
Merit: 262
I didn't intend to post in this thread again, but I seem to remember Monero will soon add multi-sig, and I wanted to make you aware of a potential 51% attack hole enabled by multi-sig:

https://bitcointalksearch.org/topic/m.14002317