
Topic: [XMR] Monero Improvement Technical Discussion (Read 14744 times)

legendary
Activity: 1596
Merit: 1030
Sine secretum non libertas
September 02, 2016, 12:21:21 PM
Post-quantum motivation: https://www.technologyreview.com/s/602283/googles-quantum-dream-may-be-just-around-the-corner/

TL;DR: By the end of 2017, a 50-qubit machine is possible, perhaps even likely.
legendary
Activity: 2282
Merit: 1050
Monero Core Team
arcticmine, sorry I haven't followed up on the adaptive blocksize algo. Par for the course with my manic tendencies. Ah well.

I've been following the bitfinex hack (as I'm sure we all have), and came across the Vault Address idea: http://fc16.ifca.ai/bitcoin/papers/MES16.pdf

I find it interesting and think it would be something that would add a lot of value to Monero. Based on my limited knowledge of both Monero and this "covenant" concept, it seems like Monero could pull it off, but I'm not sure. The more basic text on the concept comes from here: http://hackingdistributed.com/2016/02/26/how-to-implement-secure-bitcoin-vaults/


Quote
Operationally, the idea is simple. You send your money to a vault address that you yourself create. Every vault address has a vault key and a recovery key. When spending money from the vault address with the corresponding vault key, you must wait for a predefined amount of time (called the unvaulting period) that you established at the time you created the vault -- say, 24 hours. When all goes well, your vault funds are unlocked after the unvaulting period and you can move them to a standard address and subsequently spend them in the usual way. Now, in case Harry the Hacker gets a hold of your vault key, you have 24 hours to revert any transaction issued by Harry, using the recovery key. His theft, essentially, gets undone, and the funds are diverted unilaterally to their rightful owner. It’s like an “undo” facility that the modern banking world relies on, but for Bitcoin.

Now, the astute reader will ask what happens when Harry is really really good, and he lies in wait to steal not just your vault key, but also your recovery key. That is, he has thoroughly pwnd you and, as far as the network is concerned, is indistinguishable from you. Vaults protect you even in this case. The recovery keys have a similar lock period, allowing you to perpetually revert every transaction Harry makes. Unfortunately, at this point, Harry can do the same and revert every transaction you make. To avoid a perpetual standoff, the recovery keys can also burn the funds, so no one gets the money. The upshot is that Harry is not going to be able to collect a dime of proceeds from his theft. And this, in turn, means that Harry is unlikely to target vaults in the first place, because there is no positive outcome where he gets to keep the proceeds.

The guts of the paper go into detail about the bitcoin scripting that is necessary... and because Monero uses less scripting, it might not work? I dunno.

The important thing, though, is that this type of blockchain technology improvement requires a hardfork, which we're all fine with in Monero.


My first comment is that reversibility is no remedy for poor security. This applies equally to both "banks" such as exchanges and individuals. The reality, when it comes to security, is that crypto currency by design turns the clock back over 50 years, if not 100 years. So the security mindset of the 1960's or earlier is actually appropriate here. Even better, the security mindset of the 1890's. If all the bank's gold was stolen, the bank went bust and the depositors lost all of their deposits. End of story. Monero, after all, is by design a form of digital gold.

Many people today would be shocked to find out that, as recently as 50 years ago, most fiat transactions were not reversible. Cash and bearer instruments were the norm. Those transactions that were reversible were only reversible for a very limited period of time. Cheques, for example, could only be reversed while they were in transit. Once the post office delivered the cheque and it was cashed, it was no longer reversible. Reversibility of transactions on a large scale started with the use of credit cards for card-not-present transactions, first by telephone, fax and mail and then over the Internet. When a credit card is used over the Internet or the phone, for example, there is no way to legally bind the payer to the transaction. Even with fax or mail there is no way for the merchant to verify that the signature provided is valid, since they do not see the card. This callous lack of security in the banking system has also been extended to many other transactions. For example, in Canada, if one deposits a cheque in an ATM the banks do not verify that the account to which the cheque was deposited is actually that of the named payee on the cheque.

If the above is not enough, this is all compounded by the design of the most popular proprietary operating systems. Security of the end user, especially for "consumer" products, is not the primary design goal. The primary design goals are instead the deterrence of copyright infringement (DRM) and the collection of information from the end user for commercial exploitation. The Microsoft Windows registry was designed primarily to make it very difficult to copy an installed application from one computer to another, while the primary design goal of the security in iOS is to prevent the copying of installed apps from one phone to another. The result of this is that most people, with good reason, do not trust their computers, and instead in many cases keep their crypto currencies in centralized exchanges. This to a large degree defeats the whole point of Bitcoin, and to a much larger degree of Monero.

For individuals, basic security starts with using a FLOSS OS such as GNU/Linux or FreeBSD. One has of course to practice sound computer security and "harden" the system. There are already hardened distributions such as Qubes OS https://www.qubes-os.org/, but even a mainstream GNU/Linux distribution such as Ubuntu, combined with some simple common-sense security practices, is orders of magnitude ahead of what Microsoft and Apple sell to consumers.

For "banks" I would start with finding a retired banker over the age of 80, preferably over the age of 90, and getting some training on the security practices that were in place in the banking system say 50 - 60 years ago. I mean, let's start here with some of the basics, such as multiple signatures for amounts over a certain threshold. I mean say 10,000 USD rather than 50,000,000 USD. I mean a hot wallet with over 100,000 XBT in it? Seriously. The regulators also have a role to play here in setting standards for security and internal controls.

After my lengthy security rant, I will address the articles. The concept here is to create a time locked transaction that can be reversed by the sender for a fixed period of time. There is some merit to this. It would require multi-signature transactions that are time locked. My instinct is that this could be done in Bitcoin with what the Lightning Network is proposing. Multi-signature transactions will be possible in Monero after RingCT is implemented. I am not very clear on how time locked transactions could be implemented in Monero, so someone more expert on this than myself may be able to comment. In the meantime Monero "banks" will have to focus on preventing the "gold" from being stolen in the first place, just as in the 19th century.
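To make the vault mechanics concrete, here is a minimal Python sketch of the lifecycle described in the paper. The class, state names, and the 24 hour period are purely illustrative; nothing here is Bitcoin or Monero code.

```python
# Minimal sketch of the MES16 vault lifecycle (all names illustrative).

UNVAULT_PERIOD = 24  # hours, fixed when the vault is created

class Vault:
    def __init__(self):
        self.state = "vaulted"
        self.unvault_started = None

    def request_spend(self, now):
        # Vault-key holder starts the unvaulting clock.
        self.state = "unvaulting"
        self.unvault_started = now

    def finalize_spend(self, now):
        # Succeeds only after the unvaulting period has fully elapsed.
        if (self.state == "unvaulting"
                and now - self.unvault_started >= UNVAULT_PERIOD):
            self.state = "spent"
        return self.state

    def recover(self):
        # Recovery key reverts any pending unvault back to the vault.
        if self.state == "unvaulting":
            self.state = "vaulted"
        return self.state

    def burn(self):
        # Last resort when both keys are compromised: nobody gets the funds.
        if self.state != "spent":
            self.state = "burned"
        return self.state
```

During the unvaulting window a thief's request_spend can always be undone with recover; if the thief also holds the recovery key, burn denies them any proceeds, which is what removes the incentive to attack vaults at all.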
legendary
Activity: 1260
Merit: 1008
arcticmine, sorry I haven't followed up on the adaptive blocksize algo. Par for the course with my manic tendencies. Ah well.

I've been following the bitfinex hack (as I'm sure we all have), and came across the Vault Address idea: http://fc16.ifca.ai/bitcoin/papers/MES16.pdf

I find it interesting and think it would be something that would add a lot of value to Monero. Based on my limited knowledge of both Monero and this "covenant" concept, it seems like Monero could pull it off, but I'm not sure. The more basic text on the concept comes from here: http://hackingdistributed.com/2016/02/26/how-to-implement-secure-bitcoin-vaults/


Quote
Operationally, the idea is simple. You send your money to a vault address that you yourself create. Every vault address has a vault key and a recovery key. When spending money from the vault address with the corresponding vault key, you must wait for a predefined amount of time (called the unvaulting period) that you established at the time you created the vault -- say, 24 hours. When all goes well, your vault funds are unlocked after the unvaulting period and you can move them to a standard address and subsequently spend them in the usual way. Now, in case Harry the Hacker gets a hold of your vault key, you have 24 hours to revert any transaction issued by Harry, using the recovery key. His theft, essentially, gets undone, and the funds are diverted unilaterally to their rightful owner. It’s like an “undo” facility that the modern banking world relies on, but for Bitcoin.

Now, the astute reader will ask what happens when Harry is really really good, and he lies in wait to steal not just your vault key, but also your recovery key. That is, he has thoroughly pwnd you and, as far as the network is concerned, is indistinguishable from you. Vaults protect you even in this case. The recovery keys have a similar lock period, allowing you to perpetually revert every transaction Harry makes. Unfortunately, at this point, Harry can do the same and revert every transaction you make. To avoid a perpetual standoff, the recovery keys can also burn the funds, so no one gets the money. The upshot is that Harry is not going to be able to collect a dime of proceeds from his theft. And this, in turn, means that Harry is unlikely to target vaults in the first place, because there is no positive outcome where he gets to keep the proceeds.

The guts of the paper go into detail about the bitcoin scripting that is necessary... and because Monero uses less scripting, it might not work? I dunno.

The important thing, though, is that this type of blockchain technology improvement requires a hardfork, which we're all fine with in Monero.

legendary
Activity: 2282
Merit: 1050
Monero Core Team
Yes. The above is my current understanding.

This is actually a discrete optimization problem.

For a small number of transactions one can test all the possibilities to find the optimum; however this may become computationally expensive to the miner if the number of transactions to include becomes very large.

For a very large number of transactions, with each individual transaction size very small when compared to the blocksize, a simple solution would be to add transactions in order of per KB fees, while testing each addition for an increase in revenue (total fees - penalty). When adding transactions causes a decrease in revenue, stop adding transactions.

It may be simplest to choose between one of the two cases above depending on the number of transactions and their individual size relative to the block size.
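As a toy illustration of the greedy case, here is a hedged Python sketch. It assumes the PB = Rbase * B^2 penalty discussed in this thread; the base reward, median, and transaction list are made-up numbers, and real miner software would of course be more careful.

```python
def penalty(blk_size, median, r_base):
    """Penalty P_B = R_base * B^2 with B = blk_size / median - 1
    (zero below the median). Blocks with B > 1 are invalid."""
    b = max(0.0, blk_size / median - 1.0)
    return r_base * b * b

def greedy_select(txs, median, r_base):
    """txs: list of (size_kb, fee) pairs. Add transactions in order of
    fee per KB while miner revenue (total fees - penalty) keeps rising."""
    ordered = sorted(txs, key=lambda t: t[1] / t[0], reverse=True)
    size = fees = 0.0
    chosen = []
    for sz, fee in ordered:
        if size + sz > 2 * median:          # would exceed the hard 2x cap
            continue
        gain = fee - (penalty(size + sz, median, r_base)
                      - penalty(size, median, r_base))
        if gain < 0:
            break    # marginal penalty now exceeds the fee: stop adding
        chosen.append((sz, fee))
        size += sz
        fees += fee
    return chosen, fees - penalty(size, median, r_base)

# Toy data: median 100 KB, base reward 10 XMR, two well-paying 50 KB
# transactions and one that cannot cover the penalty it would trigger.
sel, revenue = greedy_select([(50.0, 1.0), (50.0, 1.0), (50.0, 0.001)],
                             100.0, 10.0)
```

With these numbers the third transaction would push B to 0.5 for a penalty of 2.5 XMR against a 0.001 XMR fee, so the greedy loop stops at two transactions, matching the "stop when revenue decreases" rule above.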
legendary
Activity: 1260
Merit: 1008

For simplicity I will define:
BlkSize = (1 + B) * M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

Where M_N = the median size of the last N blocks.

The penalty for a given B becomes:
P_B = R_base * B^2
While the new reward for a given B becomes:
R_B = R_base * (1 - B^2)
The first derivative of P_B with respect to B is
dP_B / dB = 2 * R_base * B

Where B is the relative increase in the block size over the median blocksize
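A quick numeric check of these formulas; the base reward of 10 XMR is illustrative only.

```python
# Numeric check of the penalty formulas above (R_base = 10 is illustrative).
r_base = 10.0
b = 0.2                        # block is 20% over the median

p_b = r_base * b ** 2          # penalty:        P_B ≈ 0.4
r_b = r_base * (1 - b ** 2)    # new reward:     R_B ≈ 9.6
dp_db = 2 * r_base * b         # marginal slope: dP_B/dB = 4.0
```

So at B = 0.2 the miner forfeits about 4% of the base reward, and each further unit of B costs 4 XMR at the margin.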

-------

@arcticmine, would you say all the above equations are your current state of thinking on the matter? If so, I might start toying around with some simulations / scenarios for determining optimum block inclusion algos. I might stick with bash, but it could end up in R. Either way, it'll be on GitHub. I just wanna be sure I got the right formulas before I dive into it. I figure I'll use a toy data set first, and then try it against a real data set (I think one of us linked to a litecoin dataset somewhere, and bitcoin should be obtainable).
legendary
Activity: 1260
Merit: 1008
i can't help it, i drink coffee.

Peerfriends Metanode Clusters-

what: self organizing nodes clusters that share blockchain and transaction data (somehow).

how - the daemon monitors peers, finds ones that send good data and have good connectivity.

The clusters will self organize. Individual nodes maintain block header chains and request data from peerfriends if needed. Small cluster sizes (relative to the network, perhaps) will ensure that blockchain data doesn't "go missing". Peerfriends share information about their peer connections, so overlap is minimized. In this way, each member of the peerfriend cluster is less likely to receive duplicate information, because the members are receiving information from different parts of the network. This lightens the load for individual nodes, because it reduces the number of transactions coming in per node. It may also reduce the cost of relaying larger blocks.
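A toy model of the peer-partitioning idea; everything here (names, round-robin assignment) is hypothetical, and real peer selection would be dynamic and fault tolerant.

```python
# Toy model of the peerfriend idea: cluster members partition their outside
# peers so no two members listen to the same peer, reducing duplicate
# inbound relays. Hypothetical sketch, not Monero networking code.

def partition_peers(cluster, outside_peers):
    """Assign each outside peer to exactly one cluster member, round-robin."""
    assignment = {node: [] for node in cluster}
    for i, peer in enumerate(outside_peers):
        assignment[cluster[i % len(cluster)]].append(peer)
    return assignment

cluster = ["node_a", "node_b", "node_c"]
peers = ["peer_%d" % i for i in range(9)]
assignment = partition_peers(cluster, peers)
# Each member now hears from 3 distinct peers and forwards anything new to
# its two peerfriends exactly once, instead of all three hearing everything.
```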



btw, this website is awesome: https://sketch.io/sketchpad/
legendary
Activity: 1260
Merit: 1008
I dunno why this is in my head. I think because I recently saw xthin blocks mentioned somewhere.

But I'm curious if another level of thinness, and speed, could be achieved in a blockchain consensus network.

As with the thin block approaches, an index of transactions is used - a hash of each transaction. However, instead of a daemon sending out this index and then forcing a downstream daemon to request the transactions to flesh out the block, why not just mine in the hashes? Consensus can be reached in a fraction of the time, and the actual transaction data can be filled in later. There's a block window where monero transactions have to mature anyway, so we have that window to fill in the data.

node 1 - finds block, sends out meta-block of block header and transaction hashes.
node 2 - accepts block, starts solving next block. While solving, the rest of the block data is filled in and validated. If the filled-in data is not valid, the block is rejected and node 2 waits to receive another block candidate or reverts to finding its own.

Obvious attack vectors are to spam with hashes that don't have corresponding transactions. We would also need to come up with some way to prevent things from being filled in if they shouldn't be. E.g., in the meta block there's a hash of something not in the mempool - anyone's mempool at the time of block forming (a malicious miner just put it in there). The meta block gets added, nodes then request data to fill that index. The malicious miner then transmits the matching transaction...

I guess my point is there exists a certain network buffer due to possibility of re-organizations, and we can use this buffer to increase the speed of consensus state independent of the transfer of data.

hashes are so cool.

it seems that the only thing, block protocol wise, that would need to be changed is how the block ID is hashed. Instead of the full transactions, the Merkle root is just computed from the transaction hashes.

5. Calculation of Block Identifier

   The identifier of a block is the result of hashing the following data
   with Keccak:

      - size of [block_header, Merkle root hash, and the number of
        transactions] in bytes (varint)

      - block_header,

      - Merkle root hash,

      - number of transactions (varint).

   The goal of the Merkle root hash is to "attach" the transactions
   referred to in the list to the block header: once the Merkle root
   hash is fixed, the transactions cannot be modified.


5.1  Merkle Root Hash Calculation

   Merkle root hash is computed from the list of transactions as
   follows: let tx[i] be the i-th transaction in the block, where 0 <= i
   <= n-1 (n is the number of transactions) and tx[0] is the base
   transaction. Let m be the largest power of two, less than or equal to
   n. Define the array h as follows:

      h[i] = H(h[2*i] || h[2*i+1])
        where 1 <= i <= m-1 or 3*m-n <= i <= 2*m-1.
      h[i] = H(tx[i-m])
        where m <= i <= 3*m-n-1
      h[i] = H(tx[i-4*m+n])
        where 6*m-2*n <= i <= 4*m-1.

from: https://cryptonote.org/cns/cns003.txt
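For illustration, here is a rough Python sketch of that tree-hash reduction. hashlib's sha3_256 stands in for CryptoNote's original (non-NIST) Keccak, so the digests will not match real block IDs; only the structure is the point.

```python
import hashlib

def H(data):
    # Stand-in hash: CryptoNote uses the original Keccak, not SHA3-256,
    # so real block IDs will differ from these digests.
    return hashlib.sha3_256(data).digest()

def tree_hash(hashes):
    """Merkle root over a list of 32-byte tx hashes, following the CNS003
    scheme: pass the first 2*m - n leaves through unchanged, pair up the
    remaining 2*(n - m), then do a plain binary reduction."""
    n = len(hashes)
    if n == 1:
        return hashes[0]
    if n == 2:
        return H(hashes[0] + hashes[1])
    m = 1
    while m * 2 < n:            # m = largest power of two strictly below n
        m *= 2
    level = list(hashes[:2 * m - n])
    for i in range(2 * m - n, n, 2):
        level.append(H(hashes[i] + hashes[i + 1]))
    while len(level) > 1:       # len(level) is now m, a power of two
        level = [H(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

For three transactions this gives H(tx0_hash || H(tx1_hash || tx2_hash)): only the odd leaves get an extra hashing level, which is why the tree stays balanced for any n.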
legendary
Activity: 2968
Merit: 1198
Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement out of the median.

We have on multiple occasions. Not with the sort of smarter algorithms that ArticMine discusses though.

I'm not sure I agree we can ignore miner optimization. Such behavior may affect the overall incentives. Transaction creators and miners are in a sort of conversation, with fees as the language. You probably can't understand one side of this conversation, or the conclusion of it, without considering the other.


miner optimization as in miners creating better algos to include transactions in blocks to reap the most reward?

So this gives us some choices - we either think of the most optimal block inclusion algo based on the parameters set in place by the auto-fee adjuster (can we give this thing a name?), or we make the auto-fee adjuster parameters such that gaming them doesn't totally bork the system.

In either case, you still have to consider what strategies might be used.
legendary
Activity: 1260
Merit: 1008
Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement out of the median.

We have on multiple occasions. Not with the sort of smarter algorithms that ArticMine discusses though.

I'm not sure I agree we can ignore miner optimization. Such behavior may affect the overall incentives. Transaction creators and miners are in a sort of conversation, with fees as the language. You probably can't understand one side of this conversation, or the conclusion of it, without considering the other.


miner optimization as in miners creating better algos to include transactions in blocks to reap the most reward?

So this gives us some choices - we either think of the most optimal block inclusion algo based on the parameters set in place by the auto-fee adjuster (can we give this thing a name?), or we make the auto-fee adjuster parameters such that gaming them doesn't totally bork the system.

edited to add - so it's engineering perfection vs. fault tolerance.
legendary
Activity: 2968
Merit: 1198
Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement out of the median.

We have on multiple occasions. Not with the sort of smarter algorithms that ArticMine discusses though.

I'm not sure I agree we can ignore miner optimization. Such behavior may affect the overall incentives. Transaction creators and miners are in a sort of conversation, with fees as the language. You probably can't understand one side of this conversation, or the conclusion of it, without considering the other.
legendary
Activity: 1260
Merit: 1008
...

Okay. So in the equations stated above, I think you're just laying out the existing penalty fee as it exists in Monero.

From an earlier post, you state

Quote
In this scenario per KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, Rbase/MN.

I think this is the only place I see a clear exposition of how per KB fees will be calculated, and it is directly tied to Mn, so the blocksize penalty mechanics are sort of encapsulated by Mn. The base reward will eventually hit 0.6 (at the current 2 minute blocktime), so at that point the median blocksize is the main driver.

Quote
I say linear because for a given base reward the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example if MN is 10x larger the cost per transaction falls by a factor of 10 since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.
With this I can see where your head is at - basically we assume that if the MN has grown, the network has become more valuable due to the increase in activity. The increased external valuation has to be countered with an internal decrease in the xmr cost to add to the chain, though it must be done with the proper safeguards to prevent bloat attack (or something). In your approach, this is done by directly scaling the blockchain fee with the penalty, which is directly coupled to MN.

So, in practice (for humans using the network), we will have a statistic that details MN in real time, and the software client will make suggestions based on MN for per KB fees.

Meanwhile, the consensus protocol creating a given block will have calculated MN for a given span, determined the optimal size of the block to create. If the optimal size of the block to create is <=  the previous block, any transaction meeting the current blockchain-fee threshold can be included in that block. If the optimal size of the block to create is > the previous block, any transaction meeting the current blockchain-fee threshold can be included in addition to transactions with lower fees.

My brain hurts just imagining how the optimal block size calculations will look.

I wonder if we should pull the training wheels with the 60 kb thing and just let the protocol do its thing, then we could see how the blocksize penalty actually affects things. Though that would force miners to not mine any transactions until the mempool is stuffed. Hrm...

Setting the fees in the penalty area is actually the easy part.

It is coming up with an optimal, or at least close to optimal, algorithm for determining the best transactions to include in a block from the miner's point of view that I am still wrapping my head around. I do have some ideas at this point but nothing concrete yet. By the way, we have already had blocks that triggered the penalty going all the way back to 2014. Furthermore there was an attack in 2014 that produced a fair number of blocks into the penalty area, at a time when M0 was 20,000 K. This was before the recent fork to 2 min blocks. So in effect we already have experience with the training wheels taken off. While the penalty works as designed, the economics are very far from optimal, with only one fixed fee tier that sits approximately at the low end of the penalty range and basically little if any optimization on the miner side.

Re: bolded section - I would argue that we shouldn't invest too much into optimizing this. This is an algorithm that the market will figure out. Pool ops will claim they have a better block maker than other pool ops, etc. etc.

Indeed I know we've seen blocks trigger the penalty, but we haven't seen a movement out of the median.

Ultimately I think I like what you're putting down, but for it to have any effect on things now we'd either have to remove the 60 kb pseudo window thing or come up with something different for the current era of pure speculation without significant financial activity on the chain.
legendary
Activity: 2282
Merit: 1050
Monero Core Team
...

Okay. So in the equations stated above, I think you're just laying out the existing penalty fee as it exists in Monero.

From an earlier post, you state

Quote
In this scenario per KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, Rbase/MN.

I think this is the only place I see a clear exposition of how per KB fees will be calculated, and it is directly tied to Mn, so the blocksize penalty mechanics are sort of encapsulated by Mn. The base reward will eventually hit 0.6 (at the current 2 minute blocktime), so at that point the median blocksize is the main driver.

Quote
I say linear because for a given base reward the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example if MN is 10x larger the cost per transaction falls by a factor of 10 since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.
With this I can see where your head is at - basically we assume that if the MN has grown, the network has become more valuable due to the increase in activity. The increased external valuation has to be countered with an internal decrease in the xmr cost to add to the chain, though it must be done with the proper safeguards to prevent bloat attack (or something). In your approach, this is done by directly scaling the blockchain fee with the penalty, which is directly coupled to MN.

So, in practice (for humans using the network), we will have a statistic that details MN in real time, and the software client will make suggestions based on MN for per KB fees.

Meanwhile, the consensus protocol creating a given block will have calculated MN for a given span, determined the optimal size of the block to create. If the optimal size of the block to create is <=  the previous block, any transaction meeting the current blockchain-fee threshold can be included in that block. If the optimal size of the block to create is > the previous block, any transaction meeting the current blockchain-fee threshold can be included in addition to transactions with lower fees.

My brain hurts just imagining how the optimal block size calculations will look.

I wonder if we should pull the training wheels with the 60 kb thing and just let the protocol do its thing, then we could see how the blocksize penalty actually affects things. Though that would force miners to not mine any transactions until the mempool is stuffed. Hrm...

Setting the fees in the penalty area is actually the easy part.

It is coming up with an optimal, or at least close to optimal, algorithm for determining the best transactions to include in a block from the miner's point of view that I am still wrapping my head around. I do have some ideas at this point but nothing concrete yet. By the way, we have already had blocks that triggered the penalty going all the way back to 2014. Furthermore there was an attack in 2014 that produced a fair number of blocks into the penalty area, at a time when M0 was 20,000 K. This was before the recent fork to 2 min blocks. So in effect we already have experience with the training wheels taken off. While the penalty works as designed, the economics are very far from optimal, with only one fixed fee tier that sits approximately at the low end of the penalty range and basically little if any optimization on the miner side.
legendary
Activity: 1260
Merit: 1008
...

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) * M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think it's the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base * B^2
While the new reward for a given B becomes:
R_B = R_base * (1 - B^2)
The first derivative of P_B with respect to B is
dP_B / dB = 2 * R_base * B

I apologize that I can't parse the equations. It's even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just using something other than the blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then for tiers 3, 4, and 5 it uses a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?

No apologies needed. You are not being obtuse; in fact you are being very helpful. These are far from easy concepts to understand and explain. Furthermore I am not aware of any other crypto currency that has even seriously looked at these issues, let alone proposed a solution or set of solutions, even though these are issues that every crypto currency faces.

You are of course correct that "we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain".

Monero however imposes a significant linear correlation between the internal cost (xmr) and the external value of adding data to the blockchain, by virtue of the penalty function for blocksize scaling. I say linear because for a given base reward the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example if MN is 10x larger the cost per transaction falls by a factor of 10 since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.

B is the relative increase in the block size over the median blocksize. It can range from 0 (no increase) to 1 (100% increase / doubling of the blocksize). The critical point here is that B attracts the same penalty in XMR for an increase in blocksize from say 1 MB to 1.1 MB as for say 1 GB to 1.1 GB, since in both cases B = 0.1. In the latter case there are 1024x more transactions to absorb the cost of the penalty, so the cost per transaction falls by a factor of 1024. Again I am assuming for simplicity the same distribution by size of transactions in the 1.1 MB and 1.1 GB blocks.

Now here is where it can get interesting. If the natural relationship between network value and network size is say MN log (MN) rather than MN, then it is possible for the cost of a transaction, in real terms, in the penalty area to rise with log (MN), at least for a period of time. This can happen if, for example, the market responds to this difference by optimizing transactions in order to minimize paying the penalty. This would occur because all transactions do not have the same priority. It is for this reason that there can be very significant merit to using a different scaling formula (difficulty adjusted for block reward) for the low tier fee levels than for the high tier fee levels, where the fees are effectively set by the base reward and median blocksize.
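A worked example of that scale invariance, with an illustrative base reward of 10 XMR:

```python
# The penalty for a given relative bump B is identical at any median size,
# so the per-KB cost of filling the penalty area falls in proportion to
# the median. R_base = 10 XMR is illustrative only.
r_base = 10.0
b = 0.1                                   # 10% over the median in both cases

def penalty_cost_per_kb(median_kb):
    penalty = r_base * b ** 2             # same total penalty at every scale
    extra_kb = b * median_kb              # the KB that pay for that penalty
    return penalty / extra_kb

small = penalty_cost_per_kb(1024)         # ~1 MB median
large = penalty_cost_per_kb(1024 * 1024)  # ~1 GB median
ratio = small / large                     # ≈ 1024x cheaper per KB at 1 GB
```

This is the linear fall in per-transaction cost described above: the same 0.1 XMR of penalty is spread over 102.4 KB at a 1 MB median but over ~102,400 KB at a 1 GB median.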

Okay. So in the equations stated above, I think you're just laying out the existing penalty fee as it exists in Monero.

From an earlier post, you state

Quote
In this scenario per KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, Rbase/MN.

I think this is the only place I see a clear exposition of how per KB fees will be calculated, and it is directly tied to Mn, so the blocksize penalty mechanics are sort of encapsulated by Mn. The base reward will eventually hit 0.6 (at the current 2 minute blocktime), so at that point the median blocksize is the main driver.

Quote
I say linear because, for a given base reward, the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example, if M_N is 10x larger, the cost per transaction falls by a factor of 10, since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.
With this I can see where your head is at - basically we assume that if the MN has grown, the network has become more valuable due to the increase in activity. The increased external valuation has to be countered with an internal decrease in the xmr cost to add to the chain, though it must be done with the proper safeguards to prevent bloat attack (or something). In your approach, this is done by directly scaling the blockchain fee with the penalty, which is directly coupled to MN.

So, in practice (for humans using the network), we will have a statistic that details MN in real time, and the software client will make suggestions based on MN for per KB fees.

Meanwhile, the consensus protocol creating a given block will have calculated M_N for a given span and determined the optimal size of the block to create. If the optimal size is <= the previous block, any transaction meeting the current blockchain-fee threshold can be included in that block. If the optimal size is > the previous block, transactions with lower fees can be included in addition to those meeting the current threshold.

My brain hurts just imagining how the optimal block size calculations will look.

I wonder if we should pull off the training wheels (the 60 kB thing) and just let the protocol do its thing; then we could see how the blocksize penalty actually affects things. Though that would force miners to not mine any transactions until the mempool is stuffed. Hrm...
legendary
Activity: 2492
Merit: 1473
LEALANA Bitcoin Grim Reaper
One of the ideas I had which I'm not sure how much demand there will be for is a selectable list of fees & mixing for a tx.

It would calculate the tx fee based on different mixins and you could select the "best" tx fee for the best mixin.

I've noticed this is all determined on the fly, as outputs to mix with are selected in a "random" fashion using a triangular distribution (at least that's what I remember after reviewing the source).

But one thing I noticed is that the tx fee can actually be higher with a lower mixin than a higher mixin in some cases.

In terms of efficiency I can see people wanting to see a range of mixins and fees that go along with it when creating the tx and select the one they want to actually execute based on their determination of what is "efficient" for them.

This may not end up saving users much money on tx fees now but it could in the long term if they can see a range of mixins/fees to pick from.

Obviously this would make the most sense to incorporate into a GUI rather than the command line, as it would be that much more confusing to the average user.
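A wallet-side sketch of such a fee/mixin table might look like this. Note that the size model and every constant here are hypothetical placeholders, not Monero's actual transaction format; a real wallet would derive sizes from the serialized transaction.

```python
def estimate_tx_size_kb(n_inputs, n_outputs, mixin):
    """Hypothetical size model: each extra mixin adds ring members
    (and signature data) per input, growing the tx roughly linearly."""
    base = 0.6                        # kB of fixed overhead (assumed)
    per_input = 0.2 * (1 + mixin)     # kB per input, scaled by ring size (assumed)
    per_output = 0.04                 # kB per output (assumed)
    return base + n_inputs * per_input + n_outputs * per_output

def fee_table(n_inputs, n_outputs, per_kb_fee, mixins=range(0, 11)):
    """Fee for each candidate mixin, for display in a wallet GUI."""
    return {m: estimate_tx_size_kb(n_inputs, n_outputs, m) * per_kb_fee
            for m in mixins}

table = fee_table(n_inputs=2, n_outputs=2, per_kb_fee=0.002)
assert table[0] < table[10]   # under this model, higher mixin -> larger tx -> higher fee
```

Under this simple linear model the fee always rises with the mixin count; the case the post describes, where a lower mixin costs more, would come from the output-selection step changing the input set, which this sketch does not model.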
legendary
Activity: 2282
Merit: 1050
Monero Core Team
...

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think its the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is
dP_B / dB = 2 R_base B

I apologize that I can't parse the equations. It's even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just use something other than blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then tiers 3, 4, and 5 use a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?

No apologies needed. You are not being obtuse, in fact, on the contrary, you are being very helpful. These are far from easy concepts to understand and explain. Furthermore I am not aware of any other crypto currency that has even seriously looked at these issues let alone proposed a solution or set of solutions, even though these are issues that every crypto currency faces.

You are of course correct that "we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain".

Monero, however, imposes a significant linear correlation between the internal cost (xmr) and the external value of adding data to the blockchain, by virtue of the penalty function for blocksize scaling. I say linear because, for a given base reward, the cost in xmr of adding a particular transaction of a given size in KB to a particular part of the penalty area falls linearly with the median of the blocksize. For example, if M_N is 10x larger, the cost per transaction falls by a factor of 10, since there are 10x as many transactions paying for a given amount of penalty. In this example I am assuming the transactions are all of the same size for simplicity.

B is the relative increase in the block size over the median blocksize. It can range from 0 (no increase) to 1 (100% increase, a doubling of the blocksize). The critical point here is that B attracts the same penalty in XMR for an increase in blocksize from, say, 1 MB to 1.1 MB as for an increase from 1 GB to 1.1 GB, since in both cases B = 0.1. In the latter case there are 1024x more transactions to absorb the cost of the penalty, so the cost per transaction falls by a factor of 1024. Again, I am assuming for simplicity the same distribution of transaction sizes in the 1.1 MB and 1.1 GB blocks.

Now here is where it can get interesting. If the natural relationship between network value and network size is, say, M_N log(M_N) rather than M_N, then it is possible for the cost of a transaction, in real terms, in the penalty area to rise with log(M_N), at least for a period of time. This can happen if, for example, the market responds to this difference by optimizing transactions in order to minimize paying the penalty. This would occur because not all transactions have the same priority. It is for this reason that there can be very significant merit in using a different scaling formula (difficulty adjusted for block reward) for the low fee tiers than for the high fee tiers, where the fees are effectively set by the base reward and median blocksize.
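The scale invariance described above can be checked numerically. A minimal sketch, assuming equal-sized transactions and ignoring the B_min dead zone for simplicity (the base reward value is hypothetical):

```python
def penalty(base_reward, blk_size, median):
    """Cryptonote penalty: base_reward * (blk_size / median - 1)^2,
    applied only once the block exceeds the median
    (the B_min dead zone is ignored for simplicity)."""
    b = blk_size / median - 1.0
    return base_reward * b * b if b > 0 else 0.0

base = 10.0  # hypothetical base reward in XMR

# Same relative growth B = 0.1 at two very different medians:
small = penalty(base, 1.1 * 2**20, 2**20)   # 1 MiB -> 1.1 MiB
large = penalty(base, 1.1 * 2**30, 2**30)   # 1 GiB -> 1.1 GiB
assert abs(small - large) < 1e-9            # penalty depends only on B

# With 1024x more (equal-sized) transactions in the large block,
# the penalty cost borne by each transaction falls by 1024x.
per_tx_small = small / 1000
per_tx_large = large / (1000 * 1024)
print(round(per_tx_small / per_tx_large))   # -> 1024
```

This is exactly the point made above: the penalty is a function of B alone, so the per-transaction cost of challenging the penalty falls linearly as M_N grows.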
legendary
Activity: 1260
Merit: 1008
...
I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during the initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail-emission portion of the coin's existence, which doesn't happen for another ... however many years.




No, the second scenario is triggered by the median blocksize becoming larger than 60,000 bytes, not by Monero reaching the tail emission. One can consider Bitcoin here as a special case where the penalty is infinite, with a fixed blocksize of 300,000 bytes. This was to a large degree the situation with Bitcoin until the spring of 2013.

In either case the top three fee tiers would be determined by the blocksize penalty formula. There is really not much choice here if these fee tiers are going to challenge the blocksize penalty formula. There is no reason, however, why the bottom two fee tiers could not be based on a difficulty ratio, as in my original idea, even after tail emission. The one proviso here would be to cap the lower two tiers at some percentage of the third tier. If the difficulty were to increase with time due to much more efficient hardware, this could cause a spread in the fees over time that would make Monero transactions very affordable when there is no pressure on the blocksize. Spammers would still be blocked by the third and higher tiers.

Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

Edit 1: There was an effective maximum blocksize in Bitcoin of around 256 KB until the spring of 2013.
Edit 2: A better distinction here is between those fee tiers at the lower end that are not challenging the penalty and those above that are challenging the penalty rather than between the two scenarios I indicated before.

I'm going to copy the formula you posted earlier to make sure I get it -

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

I'm assuming that B is the current block? Is it the size of the block in bytes? kb? I think its the 1 + B that is throwing me off. Does that just mean "the next block"?

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is
dP_B / dB = 2 R_base B

I apologize that I can't parse the equations. It's even worse as I type now with the bbcode or whatever formatting it is.

Quote
Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

So if I'm reading this right, you're using hashrate as an indicator of the size of the network? Or just use something other than blocksize scaling / penalty formula for the first two tiers?

I apologize for being obtuse in my understanding.

The way that I understand things is that we need a way to match the internal cost (xmr) with the external value of xmr for adding data to the blockchain. With the multi-tiered system you propose, something is used to adjust the xmr cost for the first two tiers, and then tiers 3, 4, and 5 use a component of the block size penalty. I think what I'm not getting is that as the network becomes more valuable, the internal xmr cost has to go down to maintain the usability of the network. Now, if the change in the transaction fee is coupled to a transaction priority system and is dependent on the block penalty... wouldn't all those things imply that the transaction fee is increasing?
legendary
Activity: 2282
Merit: 1050
Monero Core Team
...
I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during the initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail-emission portion of the coin's existence, which doesn't happen for another ... however many years.




No, the second scenario is triggered by the median blocksize becoming larger than 60,000 bytes, not by Monero reaching the tail emission. One can consider Bitcoin here as a special case where the penalty is infinite, with a fixed blocksize of 300,000 bytes. This was to a large degree the situation with Bitcoin until the spring of 2013.

In either case the top three fee tiers would be determined by the blocksize penalty formula. There is really not much choice here if these fee tiers are going to challenge the blocksize penalty formula. There is no reason, however, why the bottom two fee tiers could not be based on a difficulty ratio, as in my original idea, even after tail emission. The one proviso here would be to cap the lower two tiers at some percentage of the third tier. If the difficulty were to increase with time due to much more efficient hardware, this could cause a spread in the fees over time that would make Monero transactions very affordable when there is no pressure on the blocksize. Spammers would still be blocked by the third and higher tiers.

Here is some research on Metcalfe's Law which to a large degree does support the case for not relying solely on the blocksize scaling / penalty formula for the lower two fee tiers, since the research indicates n log (n) rather than n as the rate of growth of the value of a network with n nodes. The extrapolation here is to replace the size of the network by the number of transactions in a given period of time. http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong

Edit 1: There was an effective maximum blocksize in Bitcoin of around 256 KB until the spring of 2013.
Edit 2: A better distinction here is between those fee tiers at the lower end that are not challenging the penalty and those above that are challenging the penalty rather than between the two scenarios I indicated before.
legendary
Activity: 1260
Merit: 1008
( snip )
There are two very distinct scenarios I see for setting fees in Monero.

The first scenario is when the median of the blocksize over the last N blocks, M_N, is below the minimum effective median, M_0 = 60,000 bytes. This is the current scenario and not the scenario I discuss above, since M_N is replaced above by M_0, eliminating the penalty for a blocksize increase. The current M_N is less than 300 bytes.

The second scenario is where M_N > M_0 and the penalty applies. In this scenario per-KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, R_base / M_N. The objective here is to stimulate an efficient market between blocksize scaling and fees. One way to achieve this is to set five fee levels corresponding to transaction confirmation priorities: minimal, low, normal, high, and very high. The per-KB fees are calculated using the known values of the block reward and the median of the block size. One places minimal and low below the penalty boundary, normal at the penalty boundary, and high and very high above the penalty boundary.

The prior discussion in this thread can realistically only be applied to the first scenario, and only for the minimal and low fees. The rest of the fees would have to be set as above, but using M_0 rather than M_N.

Edit: One important consideration is that there is room for a 200x increase in transaction volume, and a possible corresponding increase in price, before we trigger the second scenario, so we will have to scale down the fees with no trigger of the penalty. It is here where my original idea, as edited and modified above, may be a possibility for setting the minimal and low fees.

I'm confused, though I think I'm putting it together. I see you're threading the needle of making the fee adaptive while giving the fee the ability to serve its intended purpose of preventing blocksize expansion. I think you were on to something before with using the difficulty as an on-chain surrogate of external value. I think the need for that factor will exist at any stage of the chain's life - during the initial distribution curve and during the tail emission. Your second scenario above seems focused on the tail-emission portion of the coin's existence, which doesn't happen for another ... however many years.


legendary
Activity: 2282
Merit: 1050
Monero Core Team
Thanks for reviving this thread and the old discussions from last summer. My thoughts on this have evolved considerably since then, largely due to gaining a further understanding of the adaptive blocksize limit in Monero and how it impacts fees. In the following post I discussed the response of the adaptive blocksize limit to a spam attack; this is critical to understanding what determines fees in Monero over the long term.

ArticMine PMed me after I wrote that flaming post, and said he would reply after studying my posts. He has not yet replied. Does that mean I am correct and there is no solution for Monero? I think so.

It is fundamental. Afaics, you'd have to completely rewrite Moaneuro. Tongue

Rewriting Monero is not necessary at all, but some documentation on how the Cryptonote adaptive blocksize limits actually work is needed, especially given that the formula in section 6.2.3 of the Cryptonote whitepaper is wrong. https://cryptonote.org/whitepaper.pdf. My response will come in time.

I will start by examining the Cryptonote Penalty Function for oversize blocks. This is critical to understand any form of spam attack against a Cryptonote coin. From the Cryptonote whitepaper I cited above the penalty function is:

Penalty = BaseReward (BlkSize / M_N - 1)^2

The new reward is:

NewReward = BaseReward - Penalty

Where M_N is the median of the blocksize over the last N blocks
BlkSize is the size of the current block
BaseReward is the reward as per the emission curve or, where applicable, the tail emission
NewReward is the actual reward paid to the miner
The maximum allowed BlkSize is 2 M_N
The penalty is only applied when BlkSize > (1 + B_min) M_N, where 0 < B_min < 1. In the Cryptonote whitepaper B_min = 0.1.
 
The error in the Cryptonote Whitepaper was to set NewReward = Penalty

For simplicity I will define:
BlkSize = (1 + B) M_N
BaseReward = R_base
Penalty (for a given B) = P_B
NewReward (for a given B) = R_B

The penalty for a given B becomes:
P_B = R_base B^2
While the new reward for a given B becomes:
R_B = R_base (1 - B^2)
The first derivative of P_B with respect to B is
dP_B / dB = 2 R_base B
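These formulas can be checked with a small numerical sketch (a minimal illustration; the base reward value is hypothetical):

```python
def penalty(r_base, b):
    """P_B = R_base B^2: the block reward forfeited for growing
    the block to (1 + B) M_N, valid for 0 <= B <= 1."""
    assert 0.0 <= b <= 1.0, "blocks larger than 2 M_N are not allowed"
    return r_base * b * b

def new_reward(r_base, b):
    """R_B = R_base (1 - B^2): what the miner actually receives."""
    return r_base - penalty(r_base, b)

def marginal_penalty(r_base, b):
    """dP_B / dB = 2 R_base B: the marginal cost of further growth."""
    return 2.0 * r_base * b

r_base = 10.0                              # hypothetical base reward in XMR
assert new_reward(r_base, 0.0) == r_base   # no growth -> full reward
assert new_reward(r_base, 1.0) == 0.0      # doubling forfeits the entire reward
assert abs(penalty(r_base, 0.5) - 2.5) < 1e-12
```

The last assertion shows why large blocks are expensive: growing the block 50% beyond the median already costs a quarter of the base reward, and the marginal cost keeps rising with B.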

In order to attack the coin by bloating the blocksize, the attacker needs to cause over 50% of the miners to mine oversize blocks, and for an expedient attack close to 100% of the miners. This attack must be maintained over a sustained period of time, and more importantly must be maintained in order to keep the blocks oversized, since once the attack stops the blocks will fall back to their normal size. There are essentially two options here:

1) A 51% attack. I am not going to pursue this for obvious reasons.

2) Induce the existing miners to mine oversize blocks. This is actually the more interesting case; however, after cost analysis it becomes effectively a rental version of 1 above. Since the rate of change (first derivative) of P_B is proportional to B, the most effective option for the attacker is to run the attack with B = 1. The cost of the attack has R_base as a lower bound, but would be higher, and proportional to R_base, because miners will demand a substantial premium over the base reward to mine the spam blocks, due to the increased risk of orphan blocks as the blocksize increases and competition from legitimate users, whose cost per KB for transaction fees needed to compete with the attacker falls as the blocksize increases. The impact on the coin is to stop new coins from being created while the attack is going on. These coins are replaced by the attacker having to buy coins on the open market in order to continue the attack, which further increases the attacker's costs.

It is at this point that we see the critical importance of a tail emission, since if R_base = 0 this attack has zero cost and the tragedy of the commons actually occurs. This is the critical difference between those Cryptonote coins that have a tail emission and have solved the problem, such as Monero, and those that do not and will in time become vulnerable, such as Bytecoin.




There are two very distinct scenarios I see for setting fees in Monero.

The first scenario is when the median of the blocksize over the last N blocks, M_N, is below the minimum effective median, M_0 = 60,000 bytes. This is the current scenario and not the scenario I discuss above, since M_N is replaced above by M_0, eliminating the penalty for a blocksize increase. The current M_N is less than 300 bytes.

The second scenario is where M_N > M_0 and the penalty applies. In this scenario per-KB fees are proportional to the base reward divided by the median of the blocksize over the last N blocks, R_base / M_N. The objective here is to stimulate an efficient market between blocksize scaling and fees. One way to achieve this is to set five fee levels corresponding to transaction confirmation priorities: minimal, low, normal, high, and very high. The per-KB fees are calculated using the known values of the block reward and the median of the block size. One places minimal and low below the penalty boundary, normal at the penalty boundary, and high and very high above the penalty boundary.
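A rough sketch of this tiered schedule, where the proportionality constant and tier multipliers are purely hypothetical placeholders rather than proposed values:

```python
def per_kb_fee(r_base, m_n, k=0.25):
    """Per-KB fee proportional to R_base / M_N (M_N in KB here).
    k is a hypothetical proportionality constant."""
    return k * r_base / m_n

# Hypothetical tier multipliers around the penalty boundary:
TIERS = {"minimal": 0.25, "low": 0.5, "normal": 1.0, "high": 2.0, "very high": 4.0}

def suggested_fees(r_base, m_n):
    """Five priority tiers, all scaling together with R_base / M_N."""
    base = per_kb_fee(r_base, m_n)
    return {name: base * mult for name, mult in TIERS.items()}

fees = suggested_fees(r_base=10.0, m_n=1000.0)   # 10 XMR reward, 1 MB median
assert fees["normal"] == 4 * fees["minimal"]     # tiers keep their ratios
# Doubling the median halves every fee: cost per KB falls as M_N grows.
assert suggested_fees(10.0, 2000.0)["normal"] == fees["normal"] / 2
```

The key property is the last assertion: because every tier is proportional to R_base / M_N, fees fall automatically as the network (and its median blocksize) grows.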

The prior discussion in this thread can realistically only be applied to the first scenario, and only for the minimal and low fees. The rest of the fees would have to be set as above, but using M_0 rather than M_N.

Edit: One important consideration is that there is room for a 200x increase in transaction volume, and a possible corresponding increase in price, before we trigger the second scenario, so we will have to scale down the fees with no trigger of the penalty. It is here where my original idea, as edited and modified above, may be a possibility for setting the minimal and low fees.
legendary
Activity: 1260
Merit: 1008
Okay. First things first - I deleted a lot of posts during the TPTB_need_war threadsplosion that is best summarized here -
https://bitcointalksearch.org/topic/m.13634887

there are remnants of the discussion still in there, because some contain some good tidbits and frankly I just got lazy. Deleting posts one by one is a bitch.

After some pruning of this thread I want to start things up with the auto adjusting transaction fees.

@arcticmine, I remember that you were going to lead this thought train. I'm curious if you've had any insights into things?

During the last dev meeting, you mentioned that the minimum block inclusion fee (what we commonly think of as the transaction fee) would be based on a calculation that uses the block size as the independent variable. However, in your original musings on the topic, you suggested that the difficulty would be used

https://bitcointalksearch.org/topic/m.12199259

and edited for the final post:

There should be a factor for block subsidy.


Yes. In order for the difficulty to be a surrogate for the value of XMR, one needs to factor in the actual block reward, so the formula would look as follows:

F = F_0 * D_0 * R_A / (D_A * R_0)

Where

F_0 is a constant comparable to the current per-KB fee
R_0 is a constant comparable to the current block reward
R_A is the average block reward of blocks N-2K to N-K
D_0 is a constant comparable to the current difficulty
D_A is the average difficulty of blocks N-2K to N-K
N is the last block number
K is a constant, for example 1000

One could replace R_A by the average of the base block reward, namely by netting out fees and penalties; however, then it would not track value as closely, particularly if fees were to dominate the actual block reward in the future.
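The difficulty-surrogate formula can be sketched as follows (a toy illustration; the constants and the flat toy chain are hypothetical):

```python
def adaptive_fee(f0, d0, r0, difficulties, rewards, n, k=1000):
    """F = F_0 * D_0 * R_A / (D_A * R_0): R_A and D_A are averages
    over blocks N-2K .. N-K, a window lagged K blocks behind the tip."""
    window = slice(n - 2 * k, n - k)
    ra = sum(rewards[window]) / k
    da = sum(difficulties[window]) / k
    return f0 * d0 * ra / (da * r0)

# Toy chain of 4000 blocks with constant reward.
n, reward = 4000, 10.0
rewards = [reward] * n
f_flat = adaptive_fee(0.01, 1_000_000, reward, [1_000_000] * n, rewards, n)
f_doubled = adaptive_fee(0.01, 1_000_000, reward, [2_000_000] * n, rewards, n)

# A doubled difficulty (proxy for a higher XMR price) halves the per-KB fee.
assert abs(f_flat - 0.01) < 1e-12
assert f_doubled == f_flat / 2
```

The lagged averaging window is what gives the fee its smoothing: a sudden difficulty spike only reaches the fee K blocks later, and then only as part of a K-block average.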

The question of the responsiveness of the difficulty to a sudden increase in price is valid to a degree; however, it is tempered by the fact that the increase would be temporary, that it would hit XMR holders at a time when they are experiencing a significant increase in the value of their XMR, and that in the worst-case scenario any drop in transaction volume would likely temper the sudden price movement. Attacks based on manipulating the hash rate I do not see as an issue here, simply because any impact on fees is extremely minor compared to the risk of a 51%-type attack.

My main concern here is to not create a situation similar to what is currently happening in Bitcoin with the blocksize debate. It is fairly easy to get the necessary consensus to fork an 18-month-old coin; it is quite another matter to try to undo the fork 5 years later, particularly when there is no immediate threat. This is the critical lesson we must learn from Bitcoin.

The discussion initiated by that post was good, and I recommend re-reading.

We could also use the data from the recent price spike and the subsequent difficulty adjustment to see how the fee would have played out. I'm using the queenly we here and should be smacked for doing so. Maybe I'll play around with excel in a bit.