Topic: Elastic block cap with rollover penalties - page 3. (Read 24075 times)

donator
Activity: 2058
Merit: 1054
I think a better solution would be to require miners to do more work for larger block sizes. Instead of hashing just the header of a block, miners have to hash something more: perhaps something proportional to the block size. So if a header is 80 bytes, it takes up 80/1000000=8e-05 of the whole block. So for any block size x > 1 MB, require a miner to hash the first (8e-05)x of the block in order for it to be valid. This will make Bitcoin automatically scale to the power of computers in the future, as big blocks will only be plentiful if computers (ASICs) are fast enough that it is worth taking the extra transaction fees with bigger blocks. Any problems with this?
That's the basic idea behind Greg's proposal. I've yet to examine it in detail; I think it was actually what I thought about first, before eschewing it in favor of a deductive penalty.

I think there are errors in your description of how to implement this. It's not about what you hash, it's about what your target hash is. Also, you need to carefully choose the function that maps block size to mining effort.
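To illustrate the distinction (a minimal sketch - the linear size-to-work ramp is a placeholder I made up, not Greg's actual function):

Code:
# Minimal sketch: scale required proof-of-work with block size.
# The linear ramp and all names here are illustrative assumptions.

BASE_SIZE = 1_000_000  # bytes; blocks up to this size need the normal target

def effective_target(base_target, block_size):
    # Hash target a block of block_size bytes must meet. Above BASE_SIZE
    # the target shrinks, so expected work grows in proportion to size.
    if block_size <= BASE_SIZE:
        return base_target
    return base_target * BASE_SIZE // block_size

# e.g. a 2 MB block must meet half the target, i.e. do twice the expected
# hashing. The shape of this size-to-work function is the part that needs
# careful tuning.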
legendary
Activity: 1106
Merit: 1026
Mining hardware has little to do with the issue, besides the fact that if large blocks are slow to download, it could allow large miners to [unintentionally] start creating small forks while the slower miners create orphans. The mining process is virtually unchanged.

Sorry, poor choice of words. I wasn't thinking about mining hardware, but deploying additional bandwidth/adjusting hosting plans/[...] and the like, to handle larger blocks.
legendary
Activity: 2128
Merit: 1005
ASIC Wannabe
I think as long as it averages over at least 1-2 weeks, that's sufficient to prevent any sort of rampant spamming.

I'm not sure if "how long can spam last" covers the whole picture. I'd also like to ask "how fast can miners deploy new hardware/adjust to an increased cap?"

Mining hardware has little to do with the issue, besides the fact that if large blocks are slow to download, it could allow large miners to [unintentionally] start creating small forks while the slower miners create orphans. The mining process is virtually unchanged.
legendary
Activity: 1106
Merit: 1026
I think as long as it averages over at least 1-2 weeks, that's sufficient to prevent any sort of rampant spamming.

I'm not sure if "how long can spam last" covers the whole picture. I'd also like to ask "how fast can miners deploy new hardware/adjust to an increased cap?"
member
Activity: 100
Merit: 16
I think a better solution would be to require miners to do more work for larger block sizes. Instead of hashing just the header of a block, miners have to hash something more: perhaps something proportional to the block size. So if a header is 80 bytes, it takes up 80/1000000=8e-05 of the whole block. So for any block size x > 1 MB, require a miner to hash the first (8e-05)x of the block in order for it to be valid. This will make Bitcoin automatically scale to the power of computers in the future, as big blocks will only be plentiful if computers (ASICs) are fast enough that it is worth taking the extra transaction fees with bigger blocks. Any problems with this?
legendary
Activity: 2128
Merit: 1005
ASIC Wannabe
The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
this

Actually, just use twice the average block size of the last two weeks.

And you don't really need any of this complicated system.
1. Floating block limits have their own set of problems, and may result in a scenario where there is no effective limit at all.

2. Even if you do place a floating block limit, it doesn't really relieve us of the need for an elastic system. Whatever the block limit is, tx demand can approach it and then we have a crash landing scenario. We need a system that gracefully degrades when approaching the limit, whatever it is.
1) I beg to differ, so long as the timespan is sufficient that only a LONG-lasting spam attack or other growth could cause massive block caps. Personally, I think as long as it averages over at least 1-2 weeks, that's sufficient to prevent any sort of rampant spamming.
2) If the cap is set at double the recent volumes, it should provide enough room for fuller blocks so long as we don't see 5x network growth within less than a 1-2 month timespan. Even then, the cap would grow over time, and lower-priority transactions may just be pushed back a few blocks. Fees take priority until everything balances out after a few days/weeks.

(I suggest 2.5x the average of 40 days, or 6000 blocks) OR ((2x the average of the last 6000 blocks) + (0.5x the average of the last 400 blocks)). The second allows for slightly faster growth if there's sudden demand for room.
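In code, the second variant would look something like this (a sketch; helper names are made up):

Code:
# Sketch of the suggested floating cap:
# (2x the average of the last 6000 blocks) + (0.5x the average of the
# last 400 blocks).

def average(sizes):
    return sum(sizes) / len(sizes)

def floating_cap(block_sizes):
    # block_sizes: sizes in bytes of all blocks so far, oldest first
    return 2.0 * average(block_sizes[-6000:]) + 0.5 * average(block_sizes[-400:])

# In a steady state where blocks average S bytes, the cap sits at 2.5*S.
# A sudden burst of demand moves the 400-block term first, so this variant
# reacts a bit faster than a single 6000-block window would.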
donator
Activity: 2058
Merit: 1054
The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
this

Actually, just use twice the average block size of the last two weeks.

And you don't really need any of this complicated system.
1. Floating block limits have their own set of problems, and may result in a scenario where there is no effective limit at all.

2. Even if you do place a floating block limit, it doesn't really relieve us of the need for an elastic system. Whatever the block limit is, tx demand can approach it and then we have a crash landing scenario. We need a system that gracefully degrades when approaching the limit, whatever it is.
member
Activity: 133
Merit: 26
The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.

this

Actually, just use twice the average block size of the last two weeks.

And you don't really need any of this complicated system.
legendary
Activity: 1106
Merit: 1026
So far I like this proposal very much, too.

I'm for Gavin's simple 20MB kicking-the-can-down-the-road proposal. With the rollover penalty in place I might be willing to wait longer and let some pressure build on developing scaling solutions.

The elastic cap, penalty fee pool and the hard limit could be addressed separately, although it probably doesn't make much sense to introduce this mechanism with the current block size limit.

In this context I'd like to add some visibility to TierNolan's post on the second page:

... increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.

It seems viable to set a high hard limit, and start with a lower-than-max elastic cap, which could be increased further at some point in the future.
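To illustrate with made-up numbers (a toy sketch, not a concrete parameter proposal):

Code:
# Toy sketch of "high hard limit, lower elastic cap". All numbers are
# made up for illustration.

HARD_LIMIT = 20_000_000   # bytes; fixed once by the hard fork
T = 1_000_000             # elastic cap; penalties start here, and the
                          # effective maximum is 2*T, well under HARD_LIMIT

def size_penalty(size):
    # Same shape as the f() used elsewhere in the thread: zero up to T,
    # growing and finally diverging as size approaches 2*T.
    if size <= T:
        return 0.0
    if size >= 2 * T:
        return float("inf")  # effectively invalid
    return (size - T) ** 2 / (T * (2 * T - size))

# Raising T later (up to HARD_LIMIT / 2) loosens the elastic cap without
# touching the hard limit again.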
donator
Activity: 2772
Merit: 1019
I really like this idea! Keep up the great work Meni Rosenfeld  Smiley

I like it, too.

Thinking about the next steps I re-skimmed the OP (pretending to be someone just being introduced to the idea) and I think the introduction of (and reference to) the 'rollover fee pool' is very misleading. I know it is explained right after that it's really a 'rollover size penalty', but I fear it might lead people onto the wrong track and make it harder than necessary for them to grasp the idea. Maybe it'd be less confusing and easier to understand the concept if that part was removed?

I have a feeling this idea is a hard sell, mainly because it isn't what many might expect: it's neither...

  • a way to dynamically set the block size cap nor
  • a solution for scaling nor
  • a rollover fee pool

It concerns itself with a different (although related) issue, namely the way the system behaves when approaching the transaction throughput limit.

I personally think this is a very important issue, and my expectation of the current behavior and the ramifications thereof regarding user experience and media coverage is one of the reasons I'm for Gavin's simple 20MB kicking-the-can-down-the-road proposal. With the rollover penalty in place I might be willing to wait longer and let some pressure build on developing scaling solutions. I'm not opposed to seeing how a fee market would develop, and I'm also not opposed to seeing business opportunities for entities working on scaling solutions. I just don't want to hit a brick wall, as Meni so aptly put it... it would do much damage and could potentially set us back years, I fear.

So what are people's ideas of what a roadmap could look like, what kind of funds we might need, and how we could organize enough (monetary and political) support?
member
Activity: 84
Merit: 10
I really like this idea! Keep up the great work Meni Rosenfeld  Smiley
donator
Activity: 2058
Merit: 1054
I'll try to repeat the calculations with a different demand curve, to demonstrate my point. But this will take some time and Shabbat is in soon, so that will have to wait.
Let's assume the demand curve - the number of transactions demanded as a function of the fee, per 10 minutes - is d(p) = 27/(8000p^2). It's safe to have d(p)->infinity as p->0 because supply is bounded (if there were no bound on supply, we'd need a more realistic bound on demand to have meaningful results). The behavior below is the same for other reasonable demand curves, as long as demand diminishes superlinearly with p (sublinear decay is less reasonable economically, and results in very different dynamics).

We'll assume 4000 transactions go in a MB, and that T=1MB. So the penalty, as a function of the number n of transactions, is f(n) = max(n-4000,0)^2 / (4000*(8000-n)).

We'll also assume that transactions are in no particular rush - users will pay the minimal fee that gives them a good guarantee to have the tx accepted in reasonable time (where this time is long enough to include blocks from the different miner groups). So there is a specific fee p for which the tx demand clears with the average number of txs per block (the number of txs can change between blocks). It would have been more interesting to analyze what happens when probabilistic urgency premiums enter the scene, but that's not relevant to the issue of mining centralization.

Scenario 1: 100 1% miners.

Each miner reclaims 1% of the penalty. If the optimal strategy is to have n txs per block, resulting in a fee of p, then n=d(p) and the marginal penalty (derivative of f) at n, corrected for the reclaiming, must equal p (so that adding another transaction generates no net profit). In other words, 0.99f'(d(p)) = p. Solving this gives p = 0.7496 mBTC, n = 6007.

Penalty is 0.5053 BTC, so pool size is 50.53 BTC.
Miners get 4.5027 BTC per block (6007 * 0.0007496 from txs + 0.5053 collection - 0.5053 penalty).
6007 txs are included per block.
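
For anyone who wants to verify these numbers, here's a small self-contained script (my own sketch for checking, not part of the proposal) that solves 0.99 f'(d(p)) = p by bisection:

Code:
# Reproduce scenario 1 (100 miners of 1% each). Fees p are in BTC.

def d(p):                      # demand: txs per 10 minutes at fee p
    return 27 / (8000 * p * p)

def f(n):                      # penalty in BTC for a block of n txs
    return max(n - 4000, 0) ** 2 / (4000 * (8000 - n))

def f_prime(n):                # marginal penalty df/dn
    if n <= 4000:
        return 0.0
    return (2 * (n - 4000) * (8000 - n) + (n - 4000) ** 2) / (4000 * (8000 - n) ** 2)

# Equilibrium: 0.99 * f'(d(p)) = p. Bracket chosen so d(p) stays below 8000.
lo, hi = 0.00066, 0.00100
for _ in range(100):
    mid = (lo + hi) / 2
    if 0.99 * f_prime(d(mid)) > mid:   # marginal profit still positive
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2
n = d(p)
print(f"p = {p * 1000:.4f} mBTC, n = {n:.0f}, penalty = {f(n):.4f} BTC")
# -> p ≈ 0.7496 mBTC, n ≈ 6007, penalty ≈ 0.5053 BTC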

Scenario 2: One 90% miner and 10 1% miners.

The market clears with a tx fee of p, with the 90% miner including n0 txs per block and the 1% miners including n1 txs per block.
The average #txs/block must equal the demand, so 0.9n0 + 0.1n1 = d(p).
Every miner must have 0 marginal profit per additional transaction, correcting for reclaiming. So
0.1 f'(n0) = p
0.99 f'(n1) = p

Solving all of this results in:
n0 = 7251
n1 = 5943
p = 0.6885 mBTC (lower than in scenario 1)

Penalty paid by 1% miners: f(5943) = 0.4589 BTC
Penalty paid by 90% miner: f(7251) = 3.5294 BTC
Average penalty: 0.9*3.5294 + 0.1*0.4589 = 3.2223 BTC
Pool size: 322.23 BTC

Reward per block for 1% miner: 5943 * 0.0006885 + 3.2223 - 0.4589 = 6.8552 BTC (more than in scenario 1)
Reward per block for 90% miner: 7251 * 0.0006885 + 3.2223 - 3.5294 = 4.6852 BTC (less than 1% miners in this scenario; more than the miners in scenario 1).

Average number of txs per block: 0.9 * 7251 + 0.1 * 5943 = 7120, more than in scenario 1.
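
And the corresponding check for this scenario (same sketch style; a miner with hashrate share q expects to reclaim a fraction q of his own penalty, so his marginal condition is (1-q) f'(n) = p):

Code:
# Reproduce scenario 2 (one 90% miner, ten 1% miners). Self-contained.

def d(p):
    return 27 / (8000 * p * p)

def f(n):
    return max(n - 4000, 0) ** 2 / (4000 * (8000 - n))

def f_prime(n):
    if n <= 4000:
        return 0.0
    return (2 * (n - 4000) * (8000 - n) + (n - 4000) ** 2) / (4000 * (8000 - n) ** 2)

def n_for(k, p):
    # Block size n at which k * f'(n) = p; f' is increasing on (4000, 8000).
    lo, hi = 4000.0, 7999.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if k * f_prime(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Market clearing: 0.9 * n0 + 0.1 * n1 = d(p). Average block size grows
# with p while demand falls, so bisect on p.
lo, hi = 0.0005, 0.0010
for _ in range(100):
    mid = (lo + hi) / 2
    n0, n1 = n_for(0.10, mid), n_for(0.99, mid)
    if 0.9 * n0 + 0.1 * n1 < d(mid):
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2
n0, n1 = n_for(0.10, p), n_for(0.99, p)
pool_in = 0.9 * f(n0) + 0.1 * f(n1)      # average penalty per block
print(f"p = {p * 1000:.4f} mBTC, n0 = {n0:.0f}, n1 = {n1:.0f}")
print(f"1% miner:  {n1 * p + pool_in - f(n1):.4f} BTC per block")
print(f"90% miner: {n0 * p + pool_in - f(n0):.4f} BTC per block")
# -> p ≈ 0.6885 mBTC, n0 ≈ 7251, n1 ≈ 5943, rewards ≈ 6.855 / 4.685 BTC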

Miners are happy - big or small, they gain more rewards.
Users are happy - more of their transactions are included, at a lower fee.
Nodes are not happy - they have to deal with bigger blocks.
Exactly as with the previously discussed demand curve.

Over time, difficulty will go up, nullifying the extra mining reward; and whoever is in charge of placing the checks and balances will make the function tighter (or hold off on making it looser), to keep the block sizes at the desired level.


There is another issue at play here - the ones who benefit the most from the big miner's supersized blocks are the small miners. The big miner could threaten to stop creating supersized blocks if the small miners don't join and create supersized blocks themselves. Forming such a cartel is advantageous over not having supersized blocks at all - however, I think the big miner's bargaining position is weak, and small miners will prefer to call the bluff and mine small blocks, avoiding the penalty and enjoying the big miner's supersized blocks. This is a classic tragedy of the commons, but in a sort of reverse way - usually, TotC is discussed in this context when the mining cartel wants to exclude txs, not include them.
legendary
Activity: 2268
Merit: 1141
An examination of the prior art is warranted.
Pointing to Monero as an examination of prior art is asking a bit much. Are you expecting us to dig through the Monero source code? How do they get around the problem?

This is not very helpful:

Quote
The Basics
A special type of transaction included in each block, which contains a small amount of monero sent to the miner as a reward for their mining work.

https://getmonero.org/knowledge-base/moneropedia/coinbase


Did you miss this link? -> https://github.com/monero-project/bitmonero/blob/c41d14b2aa3fc883d45299add1cbb8ebbe6c9ed8/src/cryptonote_core/blockchain.cpp#L2230-L2244
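
For those who don't want to dig through the C++: as I read that code, the rule shrinks the block reward quadratically once a block exceeds the median size of recent blocks, and rejects anything over twice the median. Roughly (a from-memory paraphrase, not a line-for-line translation):

Code:
# Rough sketch of the CryptoNote/Monero block-size penalty, as I read
# the linked blockchain.cpp - a paraphrase, not a translation.

def penalized_reward(base_reward, block_size, median_size):
    # Blocks up to the median of recent block sizes get the full reward.
    if block_size <= median_size:
        return base_reward
    # Blocks over twice the median are invalid outright.
    if block_size > 2 * median_size:
        raise ValueError("oversized block - rejected by consensus")
    excess = block_size / median_size - 1.0   # in (0, 1]
    return base_reward * (1.0 - excess ** 2)  # quadratic reward cut

# Key difference from Meni's proposal: the withheld reward is simply never
# minted, rather than rolled over into a pool for later miners to collect.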
donator
Activity: 2058
Merit: 1054
So more analysis is still in order, but overall, I don't think these dynamics encourage the formation of big miners.
This is encouraging... it sounded yesterday as if you had almost regretted making this thread and were about to pull your own support from the proposal because of this, and now it looks like it might be less of a problem than initially thought.
At the moment I'm confident that the claimed centralization issue does not invalidate the method - whatever effect there might be, it's nullified with a correct parameter choice. I'm not as confident in my ability to convince others of this, but I can try.

Note, though, that superlinear mining rewards is a problem in general, I just don't see how this proposal contributes to it.

  • Do the 2 25% miners (or 2 of the 10% miners) have a higher-than-in-current-system incentive to collude?
  • Is Meni's proposal making it easier for the 2 25% miners to try to drive out small (as in bandwidth) miners by mining disproportionately large blocks?
Both questions boil down to
  • Does Meni's proposal encourage centralization more than the current system?
Before I go on... am I asking the correct questions?
To be honest, I'm not sure. There are different elements at play here with complex interplay at different timescales. The usual question I'd ask is "do big miners have a superlinear advantage over small miners?". I claim above that the answer is no, but I'm not sure that answering this question alone is sufficient.

Obviously if you can find someone to work on a PoC for your proposal, that would be fantastic.
That was one of the purposes of this thread.

If the proposal finds enough support, I'm sure we can crowd-fund a PoC.
FWIW, I'll be happy to contribute to it.
donator
Activity: 2058
Merit: 1054
Here are example scenarios, with made up values for the penalty function. I assume for simplicity (not a necessary assumption) that there is endless demand for transactions paying 1mBTC fee, that typical blocks are around 2K txs, and that there are no minted coins. The pool clears at 1% per block.

Scenario 1: The network has 100 1% miners.
Every 1% miner knows he's not going to claim much of any penalty he pays, so he includes a number of transactions that maximize fees-penalty for the block. This happens to be 2K txs, with a total fee of 2 BTC and penalty of 1 BTC.

The equilibrium for the pool size is 100 BTC.
Miners get 2 BTC per block (2 fees + 1 pool collection - 1 penalty).
There are 2K txs per block.

Scenario 2: The network has 1 90% miner, and 10 1% miners.
The 1% miners build blocks with 2K txs, fee 2 BTC, penalty 1 BTC, like before.
The 90% miner knows that if he includes more txs, he'll be able to reclaim most of the penalty, so the marginal effective penalty exceeds the marginal fee only with larger blocks - say, blocks with 4K txs, 4 BTC fees, 4 BTC penalty.

The average penalty per block is 3.7 BTC. The equilibrium pool size is 370 BTC.
There are on average 3.8K txs per block.
A 1% miner gets, per block he finds, (2 + 3.7 - 1) = 4.7 BTC - more than in scenario 1!
The 90% miner gets, per block he finds, (4 + 3.7 - 4) = 3.7 BTC - less than small miners get in this scenario, but more than miners get in scenario 1!
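
A quick sanity check of the quoted pool sizes, before getting to the objection - the pool pays out 1% per block, so it's in equilibrium when 1% of the balance equals the average penalty paid in:

Code:
# Pool equilibrium: inflow (average penalty) = outflow (1% of pool).

CLEAR_RATE = 0.01  # the pool pays out 1% of its balance per block

def equilibrium_pool(avg_penalty):
    return avg_penalty / CLEAR_RATE

print(equilibrium_pool(1.0))                # scenario 1 -> 100.0 BTC
print(equilibrium_pool(0.9 * 4 + 0.1 * 1))  # scenario 2 -> 370.0 BTC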
Those examples do not stand. They hinge on the premise that there is endless demand for transactions paying 1mBTC fee. I understand the need to simplify these demonstrations, but that defeats the underlying premise of this entire discussion. Your example assumes there is no competition over fees, which is the premise of a "no block limit + hard fees" system. Your system sets both soft and hard caps to the block size, so there is no reason to believe people will sit at a 1mBTC fee when there is an endless demand for transactions.

Model your demonstration with a fee structure using the Pareto principle, i.e. 20% of the transactions pay 80% of the total fees available in the mempool (which is a lot closer to the current network's fee distribution than your examples), and the system falls apart. Anyone building blocks large enough to get penalized is just giving his rewards away to miners that prioritize big-fee transactions and make a point of staying under the soft cap.

The issue with your proposal is not the penalty per say, it's the reward: there is a point where it is more profitable to let others get penalized. The existence of this point creates a threat that keeps all miners functioning below the soft cap. The threat is that they lose profitability in comparison to other pools, and those pools start siphoning away their hash rate as miners migrate.
I'm sorry, but I disagree. The scenarios don't "hinge" on this assumption; I just had to assume something in order to give concrete numbers. The effects I discussed should remain intact whatever the true transaction demand curve is.

The assumption does not defeat the premise of the discussion, either. There is some demand curve for txs which depends on the real-world usage of the system, and we're talking about matching a supply curve to it. It's completely legitimate (though of course grossly exaggerated) to assume the demand curve is: infinite demand for txs with fee <1mBTC, and 0 for higher fees. With this hypothetical demand curve, fees will remain at 1mBTC because no one is willing to pay a higher fee, and no tx with a lower fee will be accepted. In other words, the supply curve must intersect demand at its vertical drop line.

I'll try to repeat the calculations with a different demand curve, to demonstrate my point. But this will take some time and Shabbat is in soon, so that will have to wait.

As for the analysis, I also disagree. You can't "let others get penalized"; every miner chooses his own penalty. The best you can do is penalize yourself, knowing that you'll reclaim the penalty later - but in so doing, you also increase the rewards for others.

The drawback is that since there is no reward, obviously the penalties are just destroyed. I'm not sure that's a drawback per say, for the following reasons:

1) It's trivial to blackhole bitcoins, and it's been agreed that this is not damaging to the system. So this method isn't introducing any new DoS attacks on the system.
2) By destroying the penalty, the value of every other BTC just went up. As opposed to your system, where you want to reward other miners from the penalties, this time everyone is getting rewarded, albeit at a much smaller magnitude. This means not just miners, but everyone else holding coins is rewarded when a miner builds a block above the soft cap. Incidentally, that also means people running nodes (as long as they hold BTC, which is expectable).
Destroying coins in small amounts is not very harmful. But if done continuously as part of the system, it has negative macroeconomic implications.

per say
Grammar Nazi regulations state that if you make the same error twice in a discussion, you must be corrected. We're already at 4. It's "per se".
legendary
Activity: 3738
Merit: 1360
Armory Developer
Firstly, what we're talking about here is not, as DumbFruit generalizes, a "rollover fee". It's a disproportional penalty on mining large blocks. I'm not sure whether this changes his argument or its validity.

Unless I missed something huge, the proposal is not only to penalize large blocks, but to redistribute the penalties collected from these blocks back to other miners. In that sense there is a rollover of mining rewards (since penalized miners stand to earn their own penalties back), just not "fees rollover" per say.

Here are example scenarios, with made up values for the penalty function. I assume for simplicity (not a necessary assumption) that there is endless demand for transactions paying 1mBTC fee, that typical blocks are around 2K txs, and that there are no minted coins. The pool clears at 1% per block.

Scenario 1: The network has 100 1% miners.
Every 1% miner knows he's not going to claim much of any penalty he pays, so he includes a number of transactions that maximize fees-penalty for the block. This happens to be 2K txs, with a total fee of 2 BTC and penalty of 1 BTC.

The equilibrium for the pool size is 100 BTC.
Miners get 2 BTC per block (2 fees + 1 pool collection - 1 penalty).
There are 2K txs per block.

Scenario 2: The network has 1 90% miner, and 10 1% miners.
The 1% miners build blocks with 2K txs, fee 2 BTC, penalty 1 BTC, like before.
The 90% miner knows that if he includes more txs, he'll be able to reclaim most of the penalty, so the marginal effective penalty exceeds the marginal fee only with larger blocks - say, blocks with 4K txs, 4 BTC fees, 4 BTC penalty.

The average penalty per block is 3.7 BTC. The equilibrium pool size is 370 BTC.
There are on average 3.8K txs per block.
A 1% miner gets, per block he finds, (2 + 3.7 - 1) = 4.7 BTC - more than in scenario 1!
The 90% miner gets, per block he finds, (4 + 3.7 - 4) = 3.7 BTC - less than small miners get in this scenario, but more than miners get in scenario 1!

Those examples do not stand. They hinge on the premise that there is endless demand for transactions paying 1mBTC fee. I understand the need to simplify these demonstrations, but that defeats the underlying premise of this entire discussion. Your example assumes there is no competition over fees, which is the premise of a "no block limit + hard fees" system. Your system sets both soft and hard caps to the block size, so there is no reason to believe people will sit at a 1mBTC fee when there is an endless demand for transactions.

Model your demonstration with a fee structure using the Pareto principle, i.e. 20% of the transactions pay 80% of the total fees available in the mempool (which is a lot closer to the current network's fee distribution than your examples), and the system falls apart. Anyone building blocks large enough to get penalized is just giving his rewards away to miners that prioritize big-fee transactions and make a point of staying under the soft cap.

The issue with your proposal is not the penalty per say, it's the reward: there is a point where it is more profitable to let others get penalized. The existence of this point creates a threat that keeps all miners functioning below the soft cap. The threat is that they lose profitability in comparison to other pools, and those pools start siphoning away their hash rate as miners migrate.

If you were to take away the reward from the system, a few things would be smoother:

1) No opportunities to game the system anymore. It all comes down to where the acceptable margin of fee vs penalty stands for the given mempool.
2) Very simple to implement.

The drawback is that since there is no reward, obviously the penalties are just destroyed. I'm not sure that's a drawback per say, for the following reasons:

1) It's trivial to blackhole bitcoins, and it's been agreed that this is not damaging to the system. So this method isn't introducing any new DoS attacks on the system.
2) By destroying the penalty, the value of every other BTC just went up. As opposed to your system, where you want to reward other miners from the penalties, this time everyone is getting rewarded, albeit at a much smaller magnitude. This means not just miners, but everyone else holding coins is rewarded when a miner builds a block above the soft cap. Incidentally, that also means people running nodes (as long as they hold BTC, which is expectable).

This is perhaps the only proposition so far that has some sort of reward mechanism for node maintainers (granted, it's tiny), who take an equal part in the cost of block propagation and validation as miners do.
donator
Activity: 2772
Merit: 1019
Obviously if you can find someone to work on a PoC for your proposal, that would be fantastic.

If the proposal finds enough support, I'm sure we can crowd-fund a PoC.
donator
Activity: 2772
Merit: 1019
So more analysis is still in order, but overall, I don't think these dynamics encourage the formation of big miners.

This is encouraging... it sounded yesterday as if you had almost regretted making this thread and were about to pull your own support from the proposal because of this, and now it looks like it might be less of a problem than initially thought.

I'm having some trouble following the logic of the objection. I dug it up from upthread:

That said, a problem with any kind of rollover fee is that it assumes that moving fees to future blocks is also moving fees to different nodes.

Put differently: centralizing nodes is a way of avoiding the penalties you're trying to introduce with this protocol.

Put differently again: Paying fees over consecutive blocks gives a competitive advantage to larger mining entities when making larger blocks.

Put triply differently: A node that can reliably get another block within X blocks is less penalized than a node that cannot, where "X" is the number of blocks that the rollover fee is given.

So if the goal is to avoid centralization, then the protocol does the opposite of the intention. If the goal is to make Bitcoin fail-safe, I'm not convinced that Bitcoin isn't already. When blocks fill, we will see higher transaction fees, potentially lengthier times before a transaction is included in a block, and as a result more 3rd party transactions.

TLDR: How does a fee over "X" blocks not incentivize a centralization of nodes?

Firstly, what we're talking about here is not, as DumbFruit generalizes, a "rollover fee". It's a disproportional penalty on mining large blocks. I'm not sure whether this changes his argument or its validity.

For thinking about this I'm using the following hypothetical mining landscape: 25%, 25%, 5 x 10%.

Now I think there are at least 2 interesting questions we can ask:

  • Do the 2 25% miners (or 2 of the 10% miners) have a higher-than-in-current-system incentive to collude?
  • Is Meni's proposal making it easier for the 2 25% miners to try to drive out small (as in bandwidth) miners by mining disproportionately large blocks?

Both questions boil down to

  • Does Meni's proposal encourage centralization more than the current system?

Before I go on... am I asking the correct questions?
legendary
Activity: 3738
Merit: 1360
Armory Developer
If I were in his position, I too would ask to see some code or at least some data analysis supporting the design. You can't just propose stuff and expect the people reviewing it to do all the leg work. An implementation at least proves your design is conceptually sound. It's easy to forget certain aspects when you theorycraft, and having to implement at least the PoC certainly motivates you to keep it as simple as possible.
Not sure to which extent this is criticism to me. But I believe everyone has a part to play in this world, and should be doing what he's best at. My comparative advantage is in coming up with ideas and discussing them; and in unrelated work (to those who don't know me, my day "job" is in promoting Bitcoin in Israel). It's not in coding and empirical analysis - I'll leave that to others. This methodology worked quite well at the time I helped mining pool operators with implementing DGM. Perhaps the discussion I've started will result in this or a similar idea being implemented and accepted. But if not, so be it.

I'll clarify that I think Gavin's request is perfectly legitimate. I didn't exactly expect him to be so dazzled by the idea that he'd drop everything he was doing and start working on it.

It's not criticism directed towards anyone per say. In the course of my work with Armory, I get suggestions to implement this and that, but an idea that can be summarized in a single sentence can often demand 10k LoC. I'm much more inclined to look at a pull request than just some formulated concepts. As I said, having a PoC to support the idea has several advantages, one of which is to make the task of reviewers simpler, another of which is to go through the most obvious optimizations right away. An idea without a PoC is not diminished, but an idea with a PoC is certainly improved. I felt like I should share that. It wasn't even an attempt to defend Gavin.

Obviously if you can find someone to work on a PoC for your proposal, that would be fantastic.

You're an idea man, I'm a nuts and bolts guy, and I can't help but look at this from my perspective. Your natural stance towards people with my skill set is "you don't sophisticate enough". My natural stance towards people with your skill set is "you complicate too much". This isn't about to change anytime soon, yet that doesn't make it a personal attack. Present a patient with some general syndrome to N different medical specialists, all in different fields, and they will come up with N different diagnoses. They're not all necessarily wrong.

If you think there is some underlying ad hominem in my criticism of your proposal, that is not my intent. There are plenty of other sections in this forum which are ripe for this kind of rhetoric. I'm going to defend my point of view at every opportunity I get, and I don't expect less from others. The intensity of the criticism may come across as unwarranted, but that's only because I'm genuinely interested in this discussion. That should vouch on its own for the importance I attach to theoretical research.
donator
Activity: 2058
Merit: 1054
If I were in his position, I too would ask to see some code or at least some data analysis supporting the design. You can't just propose stuff and expect the people reviewing it to do all the leg work. An implementation at least proves your design is conceptually sound. It's easy to forget certain aspects when you theorycraft, and having to implement at least the PoC certainly motivates you to keep it as simple as possible.
Not sure to which extent this is criticism to me. But I believe everyone has a part to play in this world, and should be doing what he's best at. My comparative advantage is in coming up with ideas and discussing them; and in unrelated work (to those who don't know me, my day "job" is in promoting Bitcoin in Israel). It's not in coding and empirical analysis - I'll leave that to others. This methodology worked quite well at the time I helped mining pool operators with implementing DGM. Perhaps the discussion I've started will result in this or a similar idea being implemented and accepted. But if not, so be it.

I'll clarify that I think Gavin's request is perfectly legitimate. I didn't exactly expect him to be so dazzled by the idea that he'd drop everything he was doing and start working on it.