
Topic: [Aug 2022] Mempool empty! Use this opportunity to Consolidate your small inputs! - page 9. (Read 88487 times)

sr. member
Activity: 1666
Merit: 310
why double so often?

btw, as I read the two of you argue, it looks like scrypt, with DOGE and LTC merge-mined, is a better solution


than frozen BTC blocks or endless BTC block doubling.
DOGE is highly inflationary (maybe not as much as USD, but they still issue over 14m coins per day).

LTC could play second fiddle to BTC (like gold vs silver).


That is a common misconception. DOGE is superior to both LTC and BTC; it has solved the reward/fee ratio issue.

year 1 = x                           2013
year 2 = 2x   100% inflation
year 3 = 3x   50% inflation
year 4 = 4x   33% inflation
year 5 = 5x   25% inflation
...
year 10 = 10x
year 11 = 11x  10% inflation
...
year 20 = 20x
year 21 = 21x  5% inflation    2034
year 22 = 22x  4.5% inflation  2035
year 23 = 23x  4.34% inflation 2036


BTC by 2024:  3.125
2028:         1.5625
2032:         0.78125
2036:         0.390625             serious issues for the reward/fee ratio

not solved as of today, and that is just 12 years from now


As time goes on, DOGE looks better and BTC looks worse.

DOGE has a perfectly steady, decreasing rate of inflation.
BTC, as of today, has a reward/fee ratio that still needs to be fixed.

By 2036 it looks like a real issue.


It's why people argue over how to fix fees in the future.

DOGE has obviously fixed the inflation issue:

4.34% in 2036
2.00% in 2064, its 51st year.

1.00% in 2114, its 101st year. Never running out of reward money for fees, and with 10x the blocks it can clone any fix BTC adopts, but at 10x the volume.
DOGE's inflation is even higher than the Fed's 2% target. Shocked

If you ask me, I don't consider deflation a bad thing, especially if the human population is going to decrease accordingly (Japan is a good example).

By 2036 BTC should be the global reserve currency and extremely valuable.

0.39 BTC could be a small fortune by then, even if you measure it in today's dollars. Imagine if 1 BTC is worth $10 million (as Hal Finney envisioned back in 2009)... you think mining will stop being profitable? Let alone the ever-increasing fees!

Also, AI & robots will be everywhere, which is going to cause huge deflationary pressure everywhere. Cost of labor will be basically zero by then.

I know this sounds nuts, but the humanity of 2036 will be very different compared to today.

We won't really need inflation in the future. Smiley
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
why double so often?

btw, as I read the two of you argue, it looks like scrypt, with DOGE and LTC merge-mined, is a better solution


than frozen BTC blocks or endless BTC block doubling.
DOGE is highly inflationary (maybe not as much as USD, but they still issue over 14m coins per day).

LTC could play second fiddle to BTC (like gold vs silver).


That is a common misconception. DOGE is superior to both LTC and BTC; it has solved the reward/fee ratio issue.

year 1 = x                           2013
year 2 = 2x   100% inflation
year 3 = 3x   50% inflation
year 4 = 4x   33% inflation
year 5 = 5x   25% inflation
...
year 10 = 10x
year 11 = 11x  10% inflation
...
year 20 = 20x
year 21 = 21x  5% inflation    2034
year 22 = 22x  4.5% inflation  2035
year 23 = 23x  4.34% inflation 2036


BTC by 2024:  3.125
2028:         1.5625
2032:         0.78125
2036:         0.390625             serious issues for the reward/fee ratio

not solved as of today, and that is just 12 years from now


As time goes on, DOGE looks better and BTC looks worse.

DOGE has a perfectly steady, decreasing rate of inflation.
BTC, as of today, has a reward/fee ratio that still needs to be fixed.

By 2036 it looks like a real issue.


It's why people argue over how to fix fees in the future.

DOGE has obviously fixed the inflation issue:

4.34% in 2036
2.00% in 2064, its 51st year.

1.00% in 2114, its 101st year. Never running out of reward money for fees, and with 10x the blocks it can clone any fix BTC adopts, but at 10x the volume.
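To make the arithmetic of the two schedules above explicit, here is a rough Python sketch. It only encodes the posts' own simplifications (DOGE adds roughly one initial year's supply "x" per year, BTC halves its subsidy every four years); actual emission differs in detail.

Code:
# The posts' simplified emission models, not exact chain parameters.
# DOGE model: issuing x per year on top of roughly n*x of supply gives
# inflation of roughly 1/n in year n (year 1 = 2013).
for year in (2036, 2064, 2114):
    n = year - 2013
    print(year, f"DOGE inflation ~ {100 / n:.2f}%")

# BTC model: the subsidy halves every four years.
subsidy = 3.125                      # BTC per block after the 2024 halving
for year in (2024, 2028, 2032, 2036):
    print(year, f"BTC subsidy {subsidy} BTC")
    subsidy /= 2

Under those assumptions the printout lands close to the 4.34%, 2% and 1% figures quoted above, next to the shrinking BTC subsidy.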
sr. member
Activity: 1666
Merit: 310
why double so often?

btw, as I read the two of you argue, it looks like scrypt, with DOGE and LTC merge-mined, is a better solution


than frozen BTC blocks or endless BTC block doubling.
DOGE is highly inflationary (maybe not as much as USD, but they still issue over 14m coins per day).

LTC could play second fiddle to BTC (like gold vs silver).
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
why double so often?

btw, as I read the two of you argue, it looks like scrypt, with DOGE and LTC merge-mined, is a better solution


than frozen BTC blocks or endless BTC block doubling.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
If the on-chain volume doesn't escalate and grow by 2x every time, you will NOT have 1 GB blocks!
Except that you should always expect the blocks to be filled. In the worst case, spammers will fill them with garbage. May I remind you that spammers currently pay for the highest priority? I'm pretty sure they will still be around if they can spam for free.

If the chain volume doesn't grow one bit from now on, you will not have even 2 MB blocks, since, unless I have lived in a different reality, having the maximum limit at 1 MB didn't make all blocks at least 1 MB; it made them at most 1 MB!
Doesn't work that way. If a spammer sees they can fill blocks with movies, expect movies on-chain. We've already seen this on BSV, and we all know what its demand is.

As for sustainability, the fee share of the reward right now is 9%; you only need 10x demand to completely override the block subsidy, which will not be gone tomorrow. If you don't think on-chain demand will be 10x more in 25 years, then you don't have to worry about the fate of Bitcoin anymore, since it's pretty obvious what that would be!
You've misunderstood the assignment. There's supply too in supply and demand, not only demand. If you're doubling the block size every four years, transaction fees will drop a lot. When the mempool weighs less than the block size, miners are incentivized to confirm even 0.01 sat/vB transactions. For your math to apply, demand (for on-chain volume) needs to double every four years.
legendary
Activity: 2912
Merit: 6403
Blackjack.fun
Alright. In the present epoch, it's 16. In the subsequent epoch, it's 32, then 64, then 128, and so on. Do you understand that the on-chain volume doesn't escalate at the same pace, ensuring the system's sustainability? What's your plan for 25 years ahead, when the block size surpasses 1 GB, there's no competition in the fee market, and Bitcoin inevitably comes to an end?

As I said before, make up your mind!
If the on-chain volume doesn't escalate and grow by 2x every time, you will NOT have 1 GB blocks!
If the chain volume doesn't grow one bit from now on, you will not have even 2 MB blocks, since, unless I have lived in a different reality, having the maximum limit at 1 MB didn't make all blocks at least 1 MB; it made them at most 1 MB!

So again, your fear is irrational, since if you say there will be no volume for that, then there will be no big blocks either!

As for sustainability, the fee share of the reward right now is 9%; you only need 10x demand to completely override the block subsidy, which will not be gone tomorrow. If you don't think on-chain demand will be 10x more in 25 years, then you don't have to worry about the fate of Bitcoin anymore, since it's pretty obvious what that would be! But again, it's a funny contradiction:
- you don't expect 10x more people to pay $3 for a tx
- you expect people to pay $30 for a tx
Yup!  Wink
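A quick back-of-the-envelope for that 10x figure (Python, purely illustrative: it assumes fees are about 9% of the total block reward today and uses the 3.125 BTC subsidy mentioned elsewhere in the thread; real numbers vary block to block):

Code:
# Illustrative only: assumes fees make up ~9% of the total block reward today
# and a 3.125 BTC subsidy; actual figures vary from block to block.
subsidy = 3.125
fee_share = 0.09

fees_today = subsidy * fee_share / (1 - fee_share)   # ~0.31 BTC of fees per block
fees_at_10x = 10 * fees_today                        # ~3.1 BTC, about the subsidy itself

print(f"fees today   ~ {fees_today:.2f} BTC per block")
print(f"fees at 10x  ~ {fees_at_10x:.2f} BTC per block (roughly the whole subsidy)")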




legendary
Activity: 1512
Merit: 7340
Farewell, Leo
I already did
That's my definition of scaling as well, but I was more concerned about the linear growth. What's your opinion about growing linearly?

Responding to your mail:
Stompix himself, in his great wisdom, wanted the block doubling since before the last halving, and when I said doubling I meant that it would have already been doubled by now and we would be looking in the future for another one, so 4 MB, or a theoretical maximum size of 16.
Alright. In the present epoch, it's 16. In the subsequent epoch, it's 32, then 64, then 128, and so on. Do you understand that the on-chain volume doesn't escalate at the same pace, ensuring the system's sustainability? What's your plan for 25 years ahead, when the block size surpasses 1 GB, there's no competition in the fee market, and Bitcoin inevitably comes to an end?
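To spell out the arithmetic behind that question (a minimal Python sketch; the 16 MB starting point and the double-every-halving cadence are taken from the posts above, not from any actual proposal):

Code:
# Start from the 16 MB figure discussed above and double it every halving (~4 years).
limit_mb, year = 16, 2024
while limit_mb < 1024:               # stop once the limit passes 1 GB
    year += 4
    limit_mb *= 2
    print(year, limit_mb, "MB")
# Reaches 1024 MB (1 GB) around 2048, i.e. roughly 25 years out, as the post says.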
legendary
Activity: 2912
Merit: 6403
Blackjack.fun
It still is surrounded by loads of controversy. First things first, what's the ideal block size increase? We're agreeing on the 16 MB limit, but stompix wants it to double on every halving. Another one might think 16 MB is still very small. I can already predict that the hardfork will end up just like Bitcoin Cash; uncertainty on the ideal adjustment will encourage people to stick with the tested, conservative 4 MB.

Just to make things clear!
Stompix himself, in his great wisdom, wanted the block doubling since before the last halving, and when I said doubling I meant that it would have already been doubled by now and we would be looking in the future for another one, so 4 MB, or a theoretical maximum size of 16.
So I never advocated for a sudden increase of 1,000x right now. I said it's possible, and all the technical mumbo jumbo about the capacity that would restrain it is BS, but I was never that radical about it!

Things like:

Also, increasing the size of the block makes this attack worse than it currently is: https://bitcointalksearch.org/topic/m.1491085
Quote
Code:
The bandwidth might not be as prohibitive as you think.  A typical transaction
would be about 400 bytes (ECC is nicely compact).  Each transaction has to be
broadcast twice, so lets say 1KB per transaction.  Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.

Just a tiny reminder: this is no longer 2013, and a 2 Gbit/s guaranteed server line is $99 per month.
So how about we talk in 2024 prices and capacities, not things from a decade ago, deal?   Grin
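Putting numbers on that comparison (Python; the 100 GB/day volume comes from the Satoshi quote above and the 2 Gbit/s line from this post, the rest is unit conversion):

Code:
# How long does 100 GB/day of transaction data occupy a 2 Gbit/s line?
gb_per_day = 100                     # Visa-scale volume from the quote above
line_gbit_per_s = 2                  # the $99/month server line mentioned above

seconds_needed = (gb_per_day * 8) / line_gbit_per_s   # GB -> Gbit, then divide by rate
print(f"~{seconds_needed / 60:.1f} minutes of the line per day")   # ~6.7 minutes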

But again, this is turning funny: first we had no need for bigger blocks, as bigger blocks are empty over at BCH and BSV; then we have 100 GB blocks full of transactions because... majik..., I swear. And I have mentioned it a hundred times already: why can't you guys be consistent in your scenarios?

Oh, and back to the main topic...
Bad weekend for consolidations!  Wink
 


copper member
Activity: 909
Merit: 2301
Scaling is directly related to compression. If you can use the same resources to achieve more goals, then that thing is "scalable". So, if the size of the block is 1 MB, and your "scaling" is just "let's increase it to 4 MB", then it is not scaling anymore. It is just pure, linear growth. You increase the numbers four times, so you can now handle 4x more traffic. But it is not scaling. Not at all.

Scaling is about resources. If you can handle 16x more traffic with only 4x bigger blocks, then this is somewhat scalable. But we can go even further: if you can handle 100x more traffic, or even 1000x more traffic with only 4x bigger blocks, then this has even better scalability.
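That definition can be boiled down to a single illustrative ratio (my framing, not the poster's): traffic gained per unit of extra resources, where 1.0 is just linear growth.

Code:
# Illustrative metric only: traffic multiplier divided by resource multiplier.
def scaling_efficiency(traffic_multiplier: float, resource_multiplier: float) -> float:
    return traffic_multiplier / resource_multiplier

print(scaling_efficiency(4, 4))      # 1.0   -> 4x bigger blocks for 4x traffic: linear growth
print(scaling_efficiency(16, 4))     # 4.0   -> "somewhat scalable" per the post
print(scaling_efficiency(1000, 4))   # 250.0 -> much better scalability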

Quote
I think you support a small block size, because I've read you on the bitcoin-dev mailing list, and you're all small blockers there, IIRC.
Yes, for example here: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019988.html

As you can see, I wrote about commitments a long time ago, and today I still think they are better than legacy OP_RETURNs. But instead of creating a separate output, they can be moved into the R-value of the signature; then it is even better, because it can be used with every address type.

Quote
I highly respect the endless hours you've spent discussing on that mailing list. I think you can enlighten us.
Well, everyone can post on the mailing list. The main difference is that you have to wait some time for publication, so your post is not visible immediately; it is first read by some human and then manually accepted. But besides that, it is similar to forums, and it doesn't matter that much, because many times I write posts to disk and they sit there, unpublished, waiting for future input. If, after some days or weeks, they are still good enough to be published, then they are released.

By the way, in the current queue I have some notes about RIPEMD-160 (and how it is related to SHA-1 and other hash functions), but I have to implement it from scratch to write properly about it. And I guess Garlo Nicon is thinking about DLEQ, for example in the context of secp160k1. But all of that is work in progress.

Quote
And it's worth mentioning that the usage of CPU instructions (such as SSE2) also somewhat allows more scaling.
In general, yes, but it depends on how things are internally wired. A lot of optimizations are based on parallelism, and if you have to do some sequential hashing, then it hurts all full nodes equally. Which means that if some transaction is complex, then not only is it a bottleneck for mining pools, but it is also a bottleneck for the non-mining nodes used to propagate transactions in the network.

A good video about x86 internal assembly, and why we are doomed to stick with some problems for years, even if we switch to another architecture: https://www.youtube.com/watch?v=xCBrtopAG80 (by the way, I expect that sooner or later Script will also be split into smaller parts, like micro-ops, and it will be possible to deal with them more directly than today, so maybe the cost would not be measured in "satoshis per byte").
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
--snip--

Also note that if something can be done in parallel, then yes, you can use a 64-core processor and execute 64 different things at the same time. However, many steps during validation are sequential. The whole chain is a sequence of blocks. A whole block is a sequence of transactions (and their order does matter if one output is an input of another transaction in the same block). The hashing of legacy transactions is sequential (also in cases like bare multisig, which has O(n^2) complexity for no reason).

So yes, you can have a 64-core processor at 4 GHz per core, but a single core at 256 GHz would allow much more scaling. And this is one of the reasons why we don't have bigger blocks: the progress in validation time is just not sufficient to increase them much further.

But on the other hand, a node already verifies all the transactions in its mempool. So when the node receives a new block, it only needs to verify the few transactions that are not already in its mempool, plus the other block data. And it's worth mentioning that the usage of CPU instructions (such as SSE2) also somewhat allows more scaling.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
What happens during those 12 minutes? I'd expect the mining pools to continue mining for that block, and chances are they'll find a new block within 12 minutes. Once that happens, they can stop verifying the attacker's block, and the attacker risks having his block replaced.
It is true that the rest of the mining pools can trivially notice that the attacker is trying to make them run late, and deliberately ignore validating his block, because it's not worth the effort/risk. Currently, no mining pool software does that, but it might be updated if there's a need to.

One strategy is to mine a block with just the coinbase transaction, and nothing else. But: on top of which block should it be mined?
Sounds like a bad strategy regardless. If it's mined on top of the attacker's block, then it depends on the attacker's block's validity (trusting instead of verifying). If it ignores the attacker's block and mines at the same height, then there's no reason not to include the highest-paying transactions, since it competes with the attacker.

Maybe all of those transactions were flying around in mempools, and some mining pool just included all of them, without having any evil plan in mind? But then, how do you quickly check all the rules without performing full block validation?
Would this megabyte-sized type of transaction be propagated normally? I think it is non-standard.

Also, vjudeu, I'd appreciate it if you could spare your two cents on the earlier block size debate. The bigger-block-size arguments don't sound that bad (anymore). This kind of attack is what still discourages an arbitrary block size increase, in my mind (apart from the political conflicts which will, without doubt, arise). I think you support a small block size, because I've read you on the bitcoin-dev mailing list, and you're all small blockers there, IIRC.

I highly respect the endless hours you've spent discussing on that mailing list. I think you can enlighten us.  Smiley
sr. member
Activity: 1666
Merit: 310
Code:
The bandwidth might not be as prohibitive as you think.  A typical transaction
would be about 400 bytes (ECC is nicely compact).  Each transaction has to be
broadcast twice, so lets say 1KB per transaction.  Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.
There is one problem with that approach: verification. Sending the whole chain is not a problem. But verifying still is. And what is the bottleneck of verification? For example, CPU speed, which depends on frequency:

2011-09-13: Maximum Speed | AMD FX Processor Takes Guinness World Record
Quote
On August 31, an AMD FX processor achieved a Guiness World Record with a frequency of 8.429GHz, a stunning result for a modern, multi-core processor. The record was achieved with several days of preparation and an amazing and inspired run in front of world renowned technology press in Austin, Texas.

2022-12-21: First 9 GHz CPU (overclocked Intel 13900K)
Quote
It's over 9000. ElmorLabs KTH-USB: https://elmorlabs.com/product/elmorla... Validation: https://valid.x86.fr/t14i1f

Thank you to Asus and Intel for supporting the record attempt!

Intel Core i9-13900K
Asus ROG Maximus Z790 Apex
G.Skill Trident Z5 2x16GB
ElmorLabs KTH-USB Thermometer
ElmorLabs Volcano CPU container

See? Humans are still struggling to reach 8-9 GHz, and you need liquid nitrogen to maintain that value. And more than a decade ago, the situation was pretty much the same. So CPU speed does not "double" every year. Instead, you just get more and more cores, and you have, for example, a 64-core processor instead of a 2-core or 4-core one.

Which means that yes, you can download 100 GB, maybe even more. But is the whole system really trustless if you have no chance of verifying that data, and you have to trust that all of it is correct? Imagine that you can download the whole chain very quickly, but it is not verified. What then?

Also note that if something can be done in parallel, then yes, you can use a 64-core processor and execute 64 different things at the same time. However, many steps during validation are sequential. The whole chain is a sequence of blocks. A whole block is a sequence of transactions (and their order does matter if one output is an input of another transaction in the same block). The hashing of legacy transactions is sequential (also in cases like bare multisig, which has O(n^2) complexity for no reason).

So yes, you can have a 64-core processor at 4 GHz per core, but a single core at 256 GHz would allow much more scaling. And this is one of the reasons why we don't have bigger blocks: the progress in validation time is just not sufficient to increase them much further.
Very insightful post... I guess we'll need a graphene terahertz CPU for proper verification scaling. Smiley
copper member
Activity: 909
Merit: 2301
Quote
What happens during those 12 minutes?
Different pools will do different things. The attacker will know in advance that "yes, this block is correct" or "no, it abuses the sigops limit, or some other quirky rule". And during those 12 minutes, different pools may apply different strategies. One strategy is to mine a block with just the coinbase transaction, and nothing else. But: on top of which block should it be mined?

An honest miner can decide to mine on top of what is already validated. But then, there is a risk that this "12-minute block" is not an attack. Maybe all of those transactions were flying around in mempools, and some mining pool just included all of them, without having any evil plan in mind? But then, how do you quickly check all the rules without performing full block validation?

Quote
Once that happens, they can stop verifying the attacker's block
It depends. Mining a block with just a coinbase transaction, and nothing else, would always be valid if the previous block is valid. Which means that mining pools can mine on top of the attacker's block, and then they will keep validating it.

An example of a sigops limit violation: https://bitcointalksearch.org/topic/m.62014494

And then, imagine that you know in advance that some mining pools use some kind of simplified block validation and don't check every rule (to get better performance, because they use outdated custom software, or for whatever reason). Then, you may broadcast, for example, a lot of 1-of-3 multisig transactions into the network on purpose, and wait for other pools to grab them and mine a new block. There is an 80k sigops limit, but some of their blocks may accidentally have 80,003. And there are more quirky rules to exploit.

Also, if you broadcast transactions which are valid, but can become invalid in a particular context, then you can always say later: "See? Those transactions are valid, because they are included in other blocks; we did nothing wrong!"
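For concreteness, the kind of counting involved (a Python sketch; the constants are assumptions about Bitcoin Core's rules, an 80,000 sigop-cost limit per block, a bare OP_CHECKMULTISIG output counted as 20 legacy sigops, and legacy sigops scaled by 4, so verify them against the actual consensus code before relying on this):

Code:
# Assumed constants, to be checked against Bitcoin Core's consensus code.
MAX_BLOCK_SIGOPS_COST = 80_000
SIGOPS_PER_BARE_CHECKMULTISIG = 20   # legacy counting for a bare multisig output
WITNESS_SCALE_FACTOR = 4             # legacy sigops are scaled up in the cost metric

cost_per_output = SIGOPS_PER_BARE_CHECKMULTISIG * WITNESS_SCALE_FACTOR   # 80
outputs_at_limit = MAX_BLOCK_SIGOPS_COST // cost_per_output              # 1000
print(outputs_at_limit, "bare multisig outputs reach the limit")
# A pool with simplified validation that packs even one extra such output
# builds a block the rest of the network must reject.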
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
if the block size limit is 16 MB, a mining pool attacker could always fill a quarter of their block with these transactions. This would force the rest of the network to spend more than 12 minutes verifying them, effectively giving the attacker 12 minutes to mine alone.
What happens during those 12 minutes? I'd expect the mining pools to continue mining for that block, and chances are they'll find a new block within 12 minutes. Once that happens, they can stop verifying the attacker's block, and the attacker risks having his block replaced.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Also, increasing the size of the block makes this attack worse than it currently is: https://bitcointalksearch.org/topic/m.1491085
That's the attack I was looking for but couldn't find. Thanks, garlonicon! As I understand it, this attack requires dedicating a little less than 1 MB of block space to execute. Currently, it is very expensive, but if the block size limit is 16 MB, a mining pool attacker could always fill a quarter of their block with these transactions. This would force the rest of the network to spend more than 12 minutes verifying them, effectively giving the attacker 12 minutes to mine alone.

Is that perhaps one of the best arguments in favor of a small block size? I see that it can practically only be resolved if we start implementing even more severe exclusions in the Script, but I'm not sure if it could be trivially resolved otherwise.
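A rough feel for how those numbers scale (Python; this takes the 12-minute figure above at face value and assumes worst-case legacy validation time grows roughly quadratically with the amount of such data; real timings depend on hardware and the exact transactions):

Code:
# Not a benchmark: quadratic scaling anchored to "4 MB of worst-case data ~ 12 minutes".
ANCHOR_MB, ANCHOR_MINUTES = 4, 12

def worst_case_minutes(size_mb: float) -> float:
    return ANCHOR_MINUTES * (size_mb / ANCHOR_MB) ** 2

for size_mb in (1, 4, 16):
    print(f"{size_mb:>2} MB -> ~{worst_case_minutes(size_mb):.1f} minutes")
# Under these assumptions: 1 MB ~ 0.8 min, 4 MB ~ 12 min, 16 MB ~ 192 min.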
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Miners are another obstacle I completely overlooked. By raising the block size to 16 MB, you need to convince them that this will be more beneficial for their pockets, which is very debatable. At the moment, a simple wave of Ordinals can raise their block fee income by 100%.

You need to convince them that on-chain transaction volume will eventually increase by orders of magnitude compared to before, but none of us can responsibly make that promise. No one can be held accountable for the potential shortcomings.
In a way, it's a flaw of the way Bitcoin works: miners have a financial incentive to keep transaction fees high, even if that reduces Bitcoin's usability.
copper member
Activity: 821
Merit: 1992
Code:
The bandwidth might not be as prohibitive as you think.  A typical transaction
would be about 400 bytes (ECC is nicely compact).  Each transaction has to be
broadcast twice, so lets say 1KB per transaction.  Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.
There is one problem with that approach: verification. Sending the whole chain is not a problem. But verifying still is. And what is the bottleneck of verification? For example, CPU speed, which depends on frequency:

2011-09-13: Maximum Speed | AMD FX Processor Takes Guinness World Record
Quote
On August 31, an AMD FX processor achieved a Guiness World Record with a frequency of 8.429GHz, a stunning result for a modern, multi-core processor. The record was achieved with several days of preparation and an amazing and inspired run in front of world renowned technology press in Austin, Texas.

2022-12-21: First 9 GHz CPU (overclocked Intel 13900K)
Quote
It's over 9000. ElmorLabs KTH-USB: https://elmorlabs.com/product/elmorla... Validation: https://valid.x86.fr/t14i1f

Thank you to Asus and Intel for supporting the record attempt!

Intel Core i9-13900K
Asus ROG Maximus Z790 Apex
G.Skill Trident Z5 2x16GB
ElmorLabs KTH-USB Thermometer
ElmorLabs Volcano CPU container

See? Humans are still struggling to reach 8-9 GHz, and you need liquid nitrogen to maintain that value. And more than a decade ago, the situation was pretty much the same. So CPU speed does not "double" every year. Instead, you just get more and more cores, and you have, for example, a 64-core processor instead of a 2-core or 4-core one.

Which means that yes, you can download 100 GB, maybe even more. But is the whole system really trustless if you have no chance of verifying that data, and you have to trust that all of it is correct? Imagine that you can download the whole chain very quickly, but it is not verified. What then?

Also note that if something can be done in parallel, then yes, you can use a 64-core processor and execute 64 different things at the same time. However, many steps during validation are sequential. The whole chain is a sequence of blocks. A whole block is a sequence of transactions (and their order does matter if one output is an input of another transaction in the same block). The hashing of legacy transactions is sequential (also in cases like bare multisig, which has O(n^2) complexity for no reason).

So yes, you can have a 64-core processor at 4 GHz per core, but a single core at 256 GHz would allow much more scaling. And this is one of the reasons why we don't have bigger blocks: the progress in validation time is just not sufficient to increase them much further.
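A tiny illustration of why that legacy hashing is O(n^2) (Python; a simplified model in which each input's signature hash covers roughly the whole transaction, which is approximately what legacy SIGHASH does and what segwit's BIP143 hashing was designed to avoid):

Code:
# Simplified model: every legacy input re-hashes roughly the whole transaction,
# so the total bytes hashed grow ~quadratically with transaction size.
def legacy_hashed_bytes(num_inputs: int, bytes_per_input: int = 150) -> int:
    tx_size = num_inputs * bytes_per_input   # ignore outputs for simplicity
    return num_inputs * tx_size              # one near-full-tx hash per input

for n in (100, 1_000, 5_000):
    print(f"{n:>5} inputs -> ~{legacy_hashed_bytes(n) / 1e6:.0f} MB hashed")
# 10x more inputs means ~100x more data to hash: sequential, CPU-bound work.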
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Miners are another obstacle I completely overlooked. By raising the block size to 16 MB, you need to convince them that this will be more beneficial for their pockets, which is very debatable. At the moment, a simple wave of Ordinals can raise their block fee income by 100%.

You need to convince them that on-chain transaction volume will eventually increase by orders of magnitude compared to before, but none of us can responsibly make that promise. No one can be held accountable for the potential shortcomings.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
SHA-256 miners are in charge of the BTC network
Miners are in charge of creating new blocks; every user is in charge of which consensus rules they accept. Of course, without miners there are no new blocks.
It's kinda sad the "one computer, one vote" thing can't work.
sr. member
Activity: 1666
Merit: 310
For the sake of the discussion, let's assume that the 16 MB limit is a harmless one. How do we enforce it in a softfork way? Segwit was enforced in a clever way, by separating the witness data from the transaction data. AFAIK, it's impossible to achieve in a softfork, unless you've figured out another way to restructure the transaction data.
I don't think a softfork can do this, but then again, when the limit was lowered, it must have been a hardfork too. Except that back then there wasn't much controversy about it.
Are you sure about that?

Because even serious bugs early on were fixed with a soft fork.

Quote
"Why shouldn't this be done in a hardfork way?". It's already history, called "Bitcoin Cash". There is no point in redoing the same thing.
That was 7 years ago, surrounded by loads of controversy, and promoted by some people with their own agenda. I don't think it's right to use that failure to keep blocks the same size for eternity. I don't think a proper change, coming from the Bitcoin Core devs, that actually improves Bitcoin's future will be rejected.
But this isn't ETH where Vitalik's team controls the network... SHA-256 miners are in charge of the BTC network, so good luck trying to convince them.