
Topic: ToominCoin aka "Bitcoin_Classic" #R3KT - page 75. (Read 157162 times)

sr. member
Activity: 409
Merit: 286
March 16, 2016, 07:38:00 AM
And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.
There is no plan. Let the market decide.

I wonder where you pull that ~$10 from. Are you a prophet or something? Do you predict prices?


Have a look at this chart:
cost for each transaction

I know it is a very rough and questionable chart, but it could give us an idea of how much we would have to pay per transaction if there were no miner reward and we wanted to keep the system as secure as it is today.

(It's also an excellent measure of how strong a bubble is and, imho, how healthy the system is.)



No miner reward = year 2140

So send this bitcoin with a big fee (~10$) in year 2140. You predict that $10 in year 2140 will be a big fee. Are you a prophet or something? Do you predict prices?

I initially thought your prediction was for the near term Sad


I'm not keen on talking about the far future. I was asked something and gave an answer. Don't blame the answer when you think the question is wrong.
legendary
Activity: 1260
Merit: 1002
March 16, 2016, 07:21:01 AM
And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.
There is no plan. Let the market decide.

I wonder where you pull that ~$10 from. Are you a prophet or something? Do you predict prices?


Have a look at this chart:
cost for each transaction

I know it is a very rough and questionable chart, but it could give us an idea of how much we would have to pay per transaction if there were no miner reward and we wanted to keep the system as secure as it is today.

(It's also an excellent measure of how strong a bubble is and, imho, how healthy the system is.)



No miner reward = year 2140

So send this bitcoin with a big fee (~10$) in year 2140. You predict that $10 in year 2140 will be a big fee. Are you a prophet or something? Do you predict prices?

I initially thought your prediction was for the near term Sad


maybe $10 could not even buy you a needle by then... inflation and political money is a bitch, y'know. Roll Eyes
sr. member
Activity: 471
Merit: 250
BTC trader
March 16, 2016, 07:16:50 AM
And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.
There is no plan. Let the market decide.

I wonder where you pull that ~$10 from. Are you a prophet or something? Do you predict prices?


Have a look at this chart:
cost for each transaction

I know it is a very rough and questionable chart, but it could give us an idea of how much we would have to pay per transaction if there were no miner reward and we wanted to keep the system as secure as it is today.

(It's also an excellent measure of how strong a bubble is and, imho, how healthy the system is.)



No miner reward = year 2140

So send this bitcoin with a big fee (~10$) in year 2140. You predict that $10 in year 2140 will be a big fee. Are you a prophet or something? Do you predict prices?

I initially thought your prediction was for the near term Sad
sr. member
Activity: 409
Merit: 286
March 16, 2016, 06:32:55 AM
And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.
There is no plan. Let the market decide.

I wonder where you pull that ~$10 from. Are you a prophet or something? Do you predict prices?


Have a look at this chart:
cost for each transaction

I know it is a very rough and questionable chart, but it could give us an idea of how much we would have to pay per transaction if there were no miner reward and we wanted to keep the system as secure as it is today.

(It's also an excellent measure of how strong a bubble is and, imho, how healthy the system is.)
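
To make the idea behind such a chart concrete, here is a minimal back-of-the-envelope sketch; the price, subsidy and transactions-per-block figures are assumptions for illustration, not numbers taken from the chart.

Code:
# Rough sketch: what each transaction would have to pay in fees if fees alone
# had to replace today's block reward. All numbers below are assumptions.

block_reward_btc = 25       # current subsidy per block (pre-halving, 2016)
btc_price_usd = 415         # assumed exchange rate
txs_per_block = 1500        # assumed tx count for ~540 kB average blocks

security_budget_usd = block_reward_btc * btc_price_usd   # miner income per block today
fee_per_tx_usd = security_budget_usd / txs_per_block

print(f"required fee per transaction: ~${fee_per_tx_usd:.2f}")   # ~$6.9 with these inputs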


sr. member
Activity: 471
Merit: 250
BTC trader
March 16, 2016, 06:23:29 AM
And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.
There is no plan. Let the market decide.

I wonder where you pull that ~$10 from. Are you a prophet or something? Do you predict prices?
sr. member
Activity: 409
Merit: 286
March 16, 2016, 05:57:09 AM
If I need a datacentre to host a lightning hub, it will not be free. There will be huge centralization pressure, which results in censorship of transactions.

The "datacenter model" is a result of bandwidth pressures on nodes that need to validate and relay every transaction they receive. LN doesn't have such requirements because hops aren't committed to the blockchain. It's much, much more scalable than bitcoin's broadcast network model. (As an aside, it's a commonly held belief that broadcast networks are not scalable)

Does an LN hub need to receive and send transactions? Does it need to store them to keep a record?

Why does lightning prevent us from applying a solution for a problem we have now / in the near-term future?

LN doesn't prevent anything. But the burden is on you to explain how increasing the block size is a "solution" for anything.

I explained it several times.

In other words: fees are not enough to prevent spam. What's your idea? Some kind of transaction authority where you, even if you pay fees, have to explain the economic purpose of a transaction?

People are doing a lot hand-waving around the word "spam" lately. It has a pretty specific connotation in bitcoin and usually refers to dust (unspendable outputs). If fees are not enough to prevent spam, it simply means there is not much demand for block space and transactions are cheap. For example, my last dozen or so transactions paid 3-8 cents each, depending on network load. At that price, it apparently makes sense to spam advertisements, etc. across the blockchain with as many dust outputs as possible, all the time.

My solution? Well, I don't know how much support this would receive among developers. Between general statistical principles (see Pareto) and simply in looking at thousands of spam transactions, I would guess that 20% of users occupy 80% of block space. Thus I think users would support this, since spammers are very likely to be a smaller population taking up more room per capita than other users: Instead of a pure validation-cost metric like we have, integrate a UTXO-cost metric that contemplates not only block size (validation cost) but also a transaction's net effect on the UTXO set. It would apply a cost in extra fees required to create unspent outputs. This makes dust outputs considerably more expensive than regular spend outputs.

Right now all we have are band-aid solutions to dust attacks. Raise the dust threshold node policy, etc. Here, we could directly target spammers rather than externalizing the costs of their dust onto users in the form of transaction fees and onto nodes in the form of UTXO bloat.

I think it's a win-win, but in the current atmosphere, nobody cares about this shit. They only care about "the block size." Roll Eyes

why not? Not the worst idea.

Quote
Do you not understand that in regards to the block size -- with or without Segwit -- that the limit is a function of the network's infrastructural limitations? That applies to both cases, with or without Segwit. I'm pretty confused at how you could construe this as Segwit being an obstacle to scaling.

You don't seem to understand.

SegWit changes nothing about throughput efficiency. There are indications it makes it worse by adding ~50 bytes to every tx. But SegWit raises the block capacity to "at worst 4 MB" (Gregory Maxwell) while providing a capacity increase for typical transactions of 1.7-2.0x.

This is the reason that with a network capacity of 4 MB blocks we could have
- 4 MB blocks without SW and
- 2 MB blocks with SW

That's all. It's similar to building a road with 50% of the lanes reserved for special traffic and not usable for normal traffic.
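
To make the numbers concrete, here is a rough sketch of where the "1.7-2.0x" and "at worst 4 MB" figures come from; the byte counts for a "typical" transaction are ballpark assumptions, not measured data.

Code:
# SegWit counts non-witness bytes 4x and witness bytes 1x against a 4,000,000-unit
# limit, so a worst-case block approaches 4 MB while typical blocks land well below.
# The transaction sizes below are ballpark assumptions for a 2-in/2-out payment.

WEIGHT_LIMIT = 4_000_000                # consensus limit in cost/weight units

def tx_weight(base_bytes, witness_bytes):
    return 4 * base_bytes + witness_bytes

base, witness = 155, 215                # assumed non-witness / witness bytes
per_tx = tx_weight(base, witness)       # 835 units per transaction
txs_per_block = WEIGHT_LIMIT // per_tx
typical_block_mb = txs_per_block * (base + witness) / 1e6

print(f"~{txs_per_block} txs per block, ~{typical_block_mb:.2f} MB of transaction data")
# -> roughly 1.8 MB of typical traffic (a 1.7-2.0x increase over 1 MB), while a block
#    stuffed with almost pure witness data can approach the full 4 MB.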

Quote
Maybe it's because you're hung up on round numbers for block size? The issue is throughput capacity.

No, I'm not. Block size numbers are just an important variable for capacity. They are not everything, but saying they are nothing - as you try to - is as wrong as saying they are everything. And the fact is that SW decreases our capacity to scale with this variable while, at the moment, adding nothing to throughput efficiency.



Quote
Quote
Don't calculate rewards in bitcoin but in dollars. If you cut growth now, the reward in dollars will stop rising. That's a good method to cut miners' income now.

Sorry, but there is no evidence that growth is being "cut" nor that the market will do, frankly, anything. You're just spreading FUD.

Since it is not possible to measure the absence of growth, your accusation is not adequate. There are strong indications that the current situation already cuts growth and that Core's roadmap will limit growth for a long time.

Quote
Ah, right. And how do you guarantee that a massive amount of onchain transactions will exist? With unlimited capacity, why would anyone even pay fees at all? And if there is no block subsidy, why would any rational miner expend resources securing such a network for no reward?

You are essentially saying that a small number of high-fee transactions, serving as the basis for fee-paying offchain transactions, are a good basis to secure miners' income? Right?

And the plan is that every user will 1.) buy bitcoin 2.) send this bitcoin with a big fee (~10$) to a lightning hub and 3) pay for ln-transactions? Correct me, if I'm wrong.

sr. member
Activity: 400
Merit: 250
March 16, 2016, 05:18:17 AM
If I need a datacentre to host a lightning hub, it will not be free. There will be huge centralization pressure, which results in censorship of transactions.

The "datacenter model" is a result of bandwidth pressures on nodes that need to validate and relay every transaction they receive. LN doesn't have such requirements because hops aren't committed to the blockchain. It's much, much more scalable than bitcoin's broadcast network model. (As an aside, it's a commonly held belief that broadcast networks are not scalable)

Why does lightning prevent us from applying a solution for a problem we have now / in the near-term future?

LN doesn't prevent anything. But the burden is on you to explain how increasing the block size is a "solution" for anything. It certainly isn't a solution for scalability since it does absolutely nothing to scale throughput.

Quote
How? It provides less latency reduction than the relay network (which can be run in a decentralized environment). As for non-relay bandwidth, 0.12's blocksonly mode has all bases covered.

Urg. Sorry. Did you try thinblocks? Did you connect your node with thinblocks to other nodes using thinblocks? Did you watch the bandwidth? I did. Without thinblocks I, with my shitty internet connection, hit my limits after blocks were found. With thinblocks I never ever reached 20% of my limits.

I don't know the exact mechanism, but for sure thinblocks does a great job in
- reducing node bandwidth
- speeding up the propagation of blocks in the WHOLE network

Did you check the links I posted? I'm guessing you didn't.

Feel free to test.

That's not my responsibility as an end user.

Ok, here we are again. Nobody is talking about "optimal". There is no optimal solution for anything, since everything has its tradeoffs. We are talking about a solution that is needed now or in the short-term future and that is possible.

You still think raising the limit does not solve any problem. I told you why it solves a problem - that of capacity limits and a restriction of growth - and it's hard for me to understand why you think this is a non-solution / non-problem.

We should be talking about "optimal." Apart from a contentious hard fork -- which should always be avoided under any circumstances -- that's all that matters. Sure, you said lots of stuff, but you didn't make a case for how it solves any problem. How is growth being restricted? How do you know, for instance, that new users aren't replacing space previously occupied by spam (dust)?

Meanwhile, this debating around a constant threat of hard forking without consensus has likely taken a lot of time and energy away from developing real scaling solutions that actually attempt to make bitcoin more accessible, rather than simply doubling the cost to run a node and slapping small miners with fatter orphan risks.

You know what? I've been hearing Gavin claim that the sky was falling since a year ago. And according to Mike Hearn, bitcoin is already dead. And yet, everything is working just fine. The mempool has <2500 transactions, the average size of the last 6 blocks = 538kB and a fee of .0001 (less, actually) gets you in the next block. Remind me -- when is the sky scheduled to fall?
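
For scale, a quick conversion of that fee into cents, assuming an exchange rate of ~$415 (an assumption, not a figure from the post):

Code:
fee_btc = 0.0001
btc_price_usd = 415                                     # assumed, roughly the March 2016 range
print(f"~{fee_btc * btc_price_usd * 100:.1f} cents")    # ~4.2 cents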

In other words: fees are not enough to prevent spam. What's your idea? Some kind of transaction authority where you, even if you pay fees, have to explain the economic purpose of a transaction?

People are doing a lot hand-waving around the word "spam" lately. It has a pretty specific connotation in bitcoin and usually refers to dust (unspendable outputs). If fees are not enough to prevent spam, it simply means there is not much demand for block space and transactions are cheap. For example, my last dozen or so transactions paid 3-8 cents each, depending on network load. At that price, it apparently makes sense to spam advertisements, etc. across the blockchain with as many dust outputs as possible, all the time.

My solution? Well, I don't know how much support this would receive among developers. Between general statistical principles (see Pareto) and simply in looking at thousands of spam transactions, I would guess that 20% of users occupy 80% of block space. Thus I think users would support this, since spammers are very likely to be a smaller population taking up more room per capita than other users: Instead of a pure validation-cost metric like we have, integrate a UTXO-cost metric that contemplates not only block size (validation cost) but also a transaction's net effect on the UTXO set. It would apply a cost in extra fees required to create unspent outputs. This makes dust outputs considerably more expensive than regular spend outputs.

Right now all we have are band-aid solutions to dust attacks. Raise the dust threshold node policy, etc. Here, we could directly target spammers rather than externalizing the costs of their dust onto users in the form of transaction fees and onto nodes in the form of UTXO bloat.

I think it's a win-win, but in the current atmosphere, nobody cares about this shit. They only care about "the block size." Roll Eyes
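
A minimal sketch of how such a combined validation-cost / UTXO-cost fee policy might look; the constants and function name are purely illustrative assumptions, not a concrete proposal from any implementation.

Code:
# Illustrative only: charge extra fee for the net number of unspent outputs a
# transaction adds to the UTXO set, on top of the usual size-based fee.
# Both constants are made-up assumptions.

FEE_PER_BYTE_SAT = 20            # ordinary validation-cost component (sat/byte)
FEE_PER_NEW_UTXO_SAT = 5000      # assumed surcharge per net new unspent output

def required_fee_sat(tx_size_bytes, num_inputs, num_outputs):
    validation_cost = tx_size_bytes * FEE_PER_BYTE_SAT
    net_utxo_growth = max(0, num_outputs - num_inputs)   # only growth is penalized
    return validation_cost + net_utxo_growth * FEE_PER_NEW_UTXO_SAT

# A normal 2-in/2-out spend adds nothing to the UTXO set -> only the size fee.
print(required_fee_sat(370, 2, 2))     # 7400 sat
# A dust spray with 1 input and 50 tiny outputs pays much more per byte.
print(required_fee_sat(1800, 1, 50))   # 36000 + 245000 = 281000 sat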

It is true if we believe what the authors of SW say.
It's quite strange that you say all the time we need better solutions to make the network scale, but now, when we agree that SW is an obstacle to future scaling, you say it's the network's problem? Why?

We don't agree on that. We just need to account for the Segwit discount when contemplating the scope of future block size increases. Do you not understand that in regards to the block size -- with or without Segwit -- that the limit is a function of the network's infrastructural limitations? That applies to both cases, with or without Segwit. I'm pretty confused at how you could construe this as Segwit being an obstacle to scaling. Maybe it's because you're hung up on round numbers for block size? The issue is throughput capacity.

Quote
it only means that we must plan block size increases with the additional witness capacity in mind (if 6MB is safe but 8MB is not, switch to 1.5MB blocks instead of 2MB; if 8GB is safe but 16GB is not, switch to 2GB blocks instead of 4GB blocks). We can also change the witness discount if this were a real issue (hint: it's not).

That's the definition of the opposite of making a network scale. That's anti-scaling, because it enormously reduces the network's capability to scale.

What are you talking about? It has no effect on that. Is the issue throughput or not? Then account for total throughput capacity rather than obsessing over block size. Please read what I wrote again. If 6MB is safe and 8MB is not, then a 1.5MB block size = total capacity of 6MB.

With or without Segwit, the network is subject to the same limitations. Simply put: what you are trying to paint as a problem is simply a misunderstanding of math -- or an intentional conflation of "throughput capacity" with "block size." Isn't throughput the reason you're incessantly complaining? Or at this point, are you just focused on achieving the biggest blocks possible, as soon as possible?

Whoever ingrained the idea into you that Segwit cannot scale past 1MB was spreading disinformation. (See, for example, Matt Corallo's proposal for a hard fork to 1.5MB after implementing Segwit.)

But the old nodes are no longer nodes with full functionality, right?

No, they are not, in the one sense that they don't validate Segwit transactions. It's a security trade-off. All else equal, we can expect increased throughput to continue to pressure nodes off the network. We can see them forced off the network entirely or we can retain them as partially-validating nodes which still provide great value.

Quote
Once the reward subsidy ends (and it is rapidly decreasing, with a reward halving in a few months),

Don't calculate rewards in bitcoin but in dollars. If you cut growth now, the reward in dollars will stop rising. That's a good method to cut miners' income now.

Sorry, but there is no evidence that growth is being "cut" nor that the market will do, frankly, anything. You're just spreading FUD.

Quote
how do you propose to secure the network?

Possibly not by routing fees to lightning hubs and by limiting the number of products miners can sell.
More sustainable would be a massive amount of onchain transactions with low fees.

Ah, right. And how do you guarantee that a massive amount of onchain transactions will exist? With unlimited capacity, why would anyone even pay fees at all? And if there is no block subsidy, why would any rational miner expend resources securing such a network for no reward?
sr. member
Activity: 409
Merit: 286
March 16, 2016, 03:34:57 AM
I can't help but feel like I say "blue is a color" and you say "no, I prefer red" -- or the other way round ... it seems like we try hard to misunderstand each other and to blow details up into big problems ...

But let's go on. If we just manage to agree on one point I call it a success. If not, we spent some time discussing and sharpening our minds / arguments.

bigger blocks might cause node centralization (maybe), while Lightning causes transaction-processing centralization. The question is what's more important: that the poor can make a transaction or that the poor can host a node.

How? LN is a trustless model with a competitive fee market which anyone can join. And this has nothing to do with rich vs. poor, but rather infrastructural limits based on region.

Yes. We are talking about unbaked bread, so it's hard to say anything about it.

I just argue that
infinite transaction volume (the famous Visa) === high throughput === centralization.
Whether it's hubs or nodes is not that big of a difference. Neither a node nor a lightning hub will be able to handle Visa-level throughput without industrial-grade memory, hard disks and bandwidth.
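
To put rough numbers on "Visa-level throughput" (all figures below are assumptions for illustration):

Code:
# What Visa-scale volume would mean for onchain data, with assumed numbers.
visa_tps = 2000            # assumed average Visa transactions per second
tx_size_bytes = 250        # assumed average bitcoin transaction size
block_interval = 600       # seconds between blocks

block_mb = visa_tps * tx_size_bytes * block_interval / 1e6          # ~300 MB per block
chain_growth_gb_per_day = visa_tps * tx_size_bytes * 86400 / 1e9    # ~43 GB per day
print(f"~{block_mb:.0f} MB blocks, ~{chain_growth_gb_per_day:.0f} GB of new chain data per day")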

If I need a datacentre to host a lightning hub, it will not be free. There will be huge centralization pressure, which results in censorship of transactions.

If we don't need datacentre-sized hubs, because we don't have Visa-level volumes (and I think we never will), we could simply scale onchain.

So, you see - why do we have to discuss a solution that is not needed now and maybe never needed, and that will have very similar hypothetical problems to onchain scaling, instead of just raising the blocksize a bit? Why does lightning prevent us from applying a solution to a problem we have now / in the near-term future?


But the imho biggest advance was thinblocks, not made by Core and not included in the 0.12 version of Core (which makes another implementation superior).

How? It provides less latency reduction than the relay network (which can be run in a decentralized environment). As for non-relay bandwidth, 0.12's blocksonly mode has all bases covered.

Urg. Sorry. Did you try thinblocks? Did you connect your node with thinblocks to other nodes using thinblocks? Did you watch the bandwidth? I did. Without thinblocks I, with my shitty internet connection, hit my limits after blocks were found. With thinblocks I never ever reached 20% of my limits.

I don't know the exact mechanism, but for sure thinblocks does a great job in
- reducing node bandwidth
- speeding up the propagation of blocks in the WHOLE network
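
For context, the mechanism is roughly this: peers already hold most of a new block's transactions in their mempools, so the block can be relayed as short transaction IDs plus only the few transactions the peer is missing. A toy estimate with assumed numbers (not measurements of xthin itself):

Code:
txs_per_block = 2000          # ~1 MB block of ~500-byte transactions (assumed)
avg_tx_size = 500
short_id_size = 8             # assumed bytes per short transaction ID
mempool_hit_rate = 0.95       # assumed share of block txs the peer already has

full_block = txs_per_block * avg_tx_size
thin_block = (txs_per_block * short_id_size
              + txs_per_block * (1 - mempool_hit_rate) * avg_tx_size)

print(f"full block: {full_block/1e6:.2f} MB, thin block: ~{thin_block/1e3:.0f} kB "
      f"({100 * thin_block / full_block:.0f}% of the full size)")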


blablabla.

This has nothing to do with what is optimal for the network. 2x transactions may do little to nothing to alleviate complaints over congestion, as there is no sign that spam (tx that create dust) is being forced off the blockchain. Engineering shouldn't be based on what makes us feel good. "Conquering the limit" is meaningless if it doesn't solve any problems -- and for scalability, it doesn't.

Ok, here we are again. Nobody is talking about "optimal". There is no optimal solution for anything, since everything has its tradeoffs. We are talking about a solution that is needed now or in the short-term future and that is possible.

You still think raising the limit does not solve any problem. I told you why it solves a problem - that of capacity limits and a restriction of growth - and it's hard for me to understand why you think this is a non-solution / non-problem.

Second: No, we have no evidence whether 2 MB blocks will be filled instantly with spam, obviously, because it has not happened yet.

Evidence could take the form of assessing the prevalence of the attacks the 1MB limit was intended to prevent. [-snip-] Do we have any evidence that 2MB blocks won't simply be filled to capacity more or less immediately, due to the (temporarily) decreased fees? [-snip-] Perhaps the fee mechanism as coded is not quite optimal for that. But if so, I don't see how that justifies simply allowing all blockchain and UTXO bloat with no regard for the validation and UTXO costs, which have measurable impacts on nodes/relay.

In other words: fees are not enough to prevent spam. What's your idea? Some kind of transaction authority where you, even if you pay fees, have to explain the economic purpose of a transaction?

The major reason why I prefer a hard fork to bigger blocks over SW is that SW opens attacks with 4 MB blocks while only providing 1.7-2.0 MB. This means Segregated Witness is a major obstacle to every future onchain scaling.

If that were true, then the network's infrastructural limits are to blame, and those will limit any attempt to linearly increase capacity via block size too. It actually doesn't change anything --

It is true if we believe what the authors of SW say.
It's quite strange that you say all the time we need better solutions to make the network scale, but now, when we agree that SW is an obstacle to future scaling, you say it's the network's problem? Why?

SW blinds every node that does not upgrade. A non-SW-node will be a blind, non-validating node.

No, this is inaccurate and would defeat the point. Non-updated nodes do fully validate transactions. You are correct that they cannot validate those transactions that have witness data segregated. The point is to retain these partially-validating nodes on the network (which are still valuable for non-Segwit transaction validation and block relay) rather than forcing them to use SPV protocols and therefore off the network. 

Ok, we have a winner: it's a bit lighter than a softfork. But the old nodes are no longer nodes with full functionality, right?

That idea is proof that you should not let technicians decide about economics. The fee market model is the model of the medieval guilds, which set limits on e.g. furniture production to keep prices high. Modern economics prefers to enable high throughput with low margins.

Firstly, Core developers are not limited to knowledge on coding. Adam Back isn't even a coder, for instance. Further, their opinions are not the only ones that get considered here.

That Adam Back is not a coder doesn't make medieval economics better.

Quote
Once the reward subsidy ends (and it is rapidly decreasing, with a reward halving in a few months),

Don't calculate rewards in bitcoin but in dollars. If you cut growth now, the reward in dollars will stop rising. That's a good method to cut miners' income now.

Quote
how do you propose to secure the network?

Possibly not by routing fees to lightning hubs and by limiting the number of products miners can sell.
More sustainable would be a massive amount of onchain transactions with low fees.

legendary
Activity: 996
Merit: 1013
March 16, 2016, 02:46:21 AM
Without a scarcity mechanism (and we are not talking about an artificial one, given that we need to account for the real (bandwidth and latency) costs of transactions), this puts miners more and more between a rock and a hard place as time goes on. The choices: 1) increase orphaning risk to capture more transaction fees or 2) publish only small blocks to prevent orphaning. People commonly make the claim that more transactions = more fees for miners (hence miners should support bigger blocks) without considering that this comes at a cost (orphaning risk) that may not be economical. Here is an excellent post from Pieter Wuille on the centralization pressure caused by larger blocks that can squeeze the profitability of smaller miners: https://www.reddit.com/r/bitcoin_devlist/comments/3bsvm9/mining_centralization_pressure_from_nonuniform/

What scarcity mechanism do you have in mind here?

Big thanks for the link to P Wuille's simulation, I didn't know of this (or that there was
a subreddit for the devlist discussions). However, as the ensuing discussion makes clear,
the results can be interpreted as reflecting connectivity rather than blocksize issues.


The biggest problem with all of BU's code is that there is little to no peer review, and it's not clear that their code is properly tested.

We've got fewer resources over at Unlimited. What we mainly
lack currently is automated regression tests.
sr. member
Activity: 400
Merit: 250
March 15, 2016, 05:46:33 PM
bigger blocks might cause node centralization (maybe), while Lightning causes transaction-processing centralization. The question is what's more important: that the poor can make a transaction or that the poor can host a node.

How? LN is a trustless model with a competitive fee market which anyone can join. And this has nothing to do with rich vs. poor, but rather infrastructural limits based on region.

But the imho biggest advance was thinblocks, not made by Core and not included in the 0.12 version of Core (which makes another implementation superior).

How? It provides less latency reduction than the relay network (which can be run in a decentralized environment). As for non-relay bandwidth, 0.12's blocksonly mode has all bases covered.

The biggest problem with all of BU's code is that there is little to no peer review, and it's not clear that their code is properly tested. I commend Core for maintaining very high standards for auditing software changes since we have so much money on the line. Sometimes I wonder how heavily invested people are that push for hasty software changes.

Obviously 2 MB blocks are such a big problem that a lot of people threaten to split the network if someone activates them. Saying it is no problem while fighting it at the risk of breaking bitcoin is kind of ... doublespeak?

Only if you ignore the actual debate. The big issues are 1) how to scale (increasing block size does nothing to answer that and is not a sustainable solution to lack of scalability) and 2) that a contentious hard fork could permanently split the network and erode trust in the protocol. It really isn't clear at all that increasing the limit to 2MB will do much of anything to alleviate transaction congestion, while it exposes the network to huge systemic risks.

2 MB affords us 2x space for transactions and demonstrates that bitcoin is able to move on and leave behind some limits that were never planned to be permanent. 2 MB blocks show that bitcoin is allowed to grow, and it reunifies the community. I think this is really a major win.

This has nothing to do with what is optimal for the network. 2x transactions may do little to nothing to alleviate complaints over congestion, as there is no sign that spam (tx that create dust) is being forced off the blockchain. Engineering shouldn't be based on what makes us feel good. "Conquering the limit" is meaningless if it doesn't solve any problems -- and for scalability, it doesn't.

Second: No, we have no evidence whether 2 MB blocks will be filled instantly with spam, obviously, because it has not happened yet.

Evidence could take the form of assessing the prevalence of the attacks the 1MB limit was intended to prevent. On that, context matters:

The major reason why I prefer a hard fork to bigger blocks over SW is that SW opens attacks with 4 MB blocks while only providing 1.7-2.0 MB. This means Segregated Witness is a major obstacle to every future onchain scaling.

If that were true, then the network's infrastructural limits are to blame, and those will limit any attempt to linearly increase capacity via block size too. It actually doesn't change anything -- it only means that we must plan block size increases with the additional witness capacity in mind (if 6MB is safe but 8MB is not, switch to 1.5MB blocks instead of 2MB; if 8GB is safe but 16GB is not, switch to 2GB blocks instead of 4GB blocks). We can also change the witness discount if this were a real issue (hint: it's not).

SW blinds every node that does not upgrade. A non-SW-node will be a blind, non-validating node.

No, this is inaccurate and would defeat the point. Non-updated nodes do fully validate transactions. You are correct that they cannot validate those transactions that have witness data segregated. The point is to retain these partially-validating nodes on the network (which are still valuable for non-Segwit transaction validation and block relay) rather than forcing them to use SPV protocols and therefore off the network. 

That idea is proof that you should not let technicians decide about economics. The fee market model is the model of the medieval guilds, which set limits on e.g. furniture production to keep prices high. Modern economics prefers to enable high throughput with low margins.

Firstly, Core developers are not limited to knowledge on coding. Adam Back isn't even a coder, for instance. Further, their opinions are not the only ones that get considered here.

Once the reward subsidy ends (and it is rapidly decreasing, with a reward halving in a few months), how do you propose to secure the network? Block space has definitive costs (bandwidth, latency) -- if you perpetually increase capacity limits without those in mind, you are diminishing any capacity the system has to incentivize miners to secure the network with proof of work. If all transactions are free due to unlimited capacity, no rational miner would ever secure the network once reward subsidy is gone.

So, how will bitcoin be secured once the reward subsidy is gone?

That has nothing to do with Segregated Witness vs. Bigger blocks. The other implementations plan the same.

Block propagation is probably the single most contentious issue of the debate and is central to the notion of increased load on the system. Developers of other implementations can plan all they want, but all I've seen so far is continually forking Core's work into their own codebases. I don't have any faith in their plans and I've seen nothing to suggest that they are capable of maintaining the repo of the dominant implementation. Core, on the other hand, has demonstrated such capability with years of tireless, excellent work.

If Classic succeeds in implementing a contentious hard fork, the case may be that they can't just continue forever forking Core's code. What happens when they need to implement real optimizations or see the network subjected to centralization pressures? From what I can tell, their answer is always to ignore such concerns and increase capacity without considering scalability at all.
sr. member
Activity: 409
Merit: 286
March 15, 2016, 04:33:58 PM
Nobody claimed increasing the blocksize would fix anything but capacity limits. And that's what it does. It scales bitcoin to process more transactions.

How does processing more transactions make bitcoin "scale?"

It doesn't make Bitcoin a system with infinite scalability, but it scales bitcoin to a throughput of 2 MB. Sometimes the obvious answer is the correct answer.


That idea is proof that you should not let technicians decide about economics. The fee market model is the model of the medieval guilds, which set limits on e.g. furniture production to keep prices high. Modern economics prefers to enable high throughput with low margins.

Just do the math.

Quote
   4) Additional features that solve transaction malleability, make SPV nodes safer (fraud proofs), and add new scripting functionality.

These are two points.
Fraud proofs may be a real advantage, but I don't know the details.
As far as I know, "new script functionality" means that with Segregated Witness we can enable new scripts with a softfork instead of a hardfork, because we don't need to change the transaction composition fundamentally. That can be good or bad. Plans like integrating Schnorr signatures sound really fantastic (but they could also be done with a hard fork). Whether it is good or bad depends on the trust you place in the devs and the miners. After all that has happened I find it hard to trust Core, but that's a question of personal taste and should not be an objective point. It should demonstrate, though, that this is not an advantage for everyone.

Quote
   5) Backward compatibility prevents nodes from being forked off the network.

That's the same as 1./2., and it's not true. With SegWit, old nodes no longer validate transactions. They may stay "officially" part of the network, but they are not really a part of it.

Quote
  6) Core is addressing propagation delays head on with IBLTs and weak blocks to make a future block size increase safer.

That has nothing to do with Segregated Witness vs. bigger blocks. The other implementations plan the same. But unfortunately Core seems unwilling to cooperate (examples are thinblocks and the traffic shaper).

sr. member
Activity: 400
Merit: 250
March 15, 2016, 03:32:50 PM
Nobody claimed increasing the blocksize would fix anything but capacity limits. And that's what it does. It scales bitcoin to process more transactions.

How does processing more transactions make bitcoin "scale?" Scalable systems can see increased load without degrading the quality or robustness of the system. You've got the "increased load" part down, but you've done nothing to suggest that we can simply increase capacity limits (repeatedly?) without optimizing the system to retain its quality and robustness. The latter depends on the distribution of nodes and miners, which have financial pressures directly related to increased load (bandwidth and latency). If the solution is simply to squeeze nodes and miners at the behest of some users (by externalizing the cost of cheap/free transactions), I think that's a bad solution that doesn't contemplate the incentives of all actors in the system. As a node operator, my nodes vote against such a "solution" by ignoring any blocks > 1MB. The bandwidth optimizations in 0.12 do much to mitigate the risks of node centralization under a moderately increased block size (Thanks, Core!), but we desperately need solutions for propagation delays.

Inflation is rapidly decreasing. In the future, fees will be the only mechanism to ensure that miners will secure the network. Without a scarcity mechanism (and we are not talking about an artificial one, given that we need to account for the real (bandwidth and latency) costs of transactions), this puts miners more and more between a rock and a hard place as time goes on. The choices: 1) increase orphaning risk to capture more transaction fees or 2) publish only small blocks to prevent orphaning. People commonly make the claim that more transactions = more fees for miners (hence miners should support bigger blocks) without considering that this comes at a cost (orphaning risk) that may not be economical. Here is an excellent post from Pieter Wuille on the centralization pressure caused by larger blocks that can squeeze the profitability of smaller miners: https://www.reddit.com/r/bitcoin_devlist/comments/3bsvm9/mining_centralization_pressure_from_nonuniform/
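
A toy model of that trade-off (this is not Wuille's simulation; the delay and fee figures are assumptions for illustration):

Code:
import math

BLOCK_INTERVAL = 600.0      # expected seconds between blocks

def orphan_probability(extra_propagation_seconds):
    # chance a competing block appears while ours is still propagating (Poisson arrivals)
    return 1.0 - math.exp(-extra_propagation_seconds / BLOCK_INTERVAL)

SUBSIDY = 25.0              # BTC per block (2016)
FEES_PER_MB = 0.25          # assumed average fees per MB of transactions
DELAY_PER_MB = 4.0          # assumed extra propagation seconds per MB for a poorly connected miner

for extra_mb in (1, 2, 4, 8):
    p = orphan_probability(DELAY_PER_MB * extra_mb)
    extra_fees = FEES_PER_MB * extra_mb
    expected_loss = p * (SUBSIDY + extra_fees)
    print(f"+{extra_mb} MB: orphan risk {p:.2%}, extra fees {extra_fees:.2f} BTC, "
          f"expected loss {expected_loss:.3f} BTC")
# A well-connected miner with a smaller DELAY_PER_MB bears far less expected loss for the
# same extra fees, which is the asymmetry the linked post describes.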

Is 2MB where this becomes a real problem? Maybe not. But testing by XT (and later Classic) developers showed that 4MB blocks could significantly threaten network relay as propagation from within to outside the Great Firewall is delayed anywhere from 3x-30x: https://np.reddit.com/r/btc/comments/3ygo96/blocksize_consensus_census/cye0bmt

So these questions logically follow: 1) What will an increase to 2MB really afford us? 2) Are we forced to choose between Segwit and 2MB hard fork, and if so, what is the most beneficial choice to make?

On the first question... are dust outputs being forced off the blockchain? Given the recent months of stress testing dust attacks and mempool-induced node crashes, I'd suggest they haven't been. A few months ago, the default dust threshold was increased 5x due to the RAM impact of spam (uneconomical transactions that create unspendable outputs) on nodes. Do we have any evidence that 2MB blocks won't simply be filled to capacity more or less immediately, due to the (temporarily) decreased fees? Unfortunately, node policies on dust relay have historically been band-aid solutions that don't do much to address the UTXO impact of spammy (uneconomical) transactions. The intention of fees -- at least in the early stages, as reward subsidy remains high -- is to prevent spam/DOS attacks. Perhaps the fee mechanism as coded is not quite optimal for that. But if so, I don't see how that justifies simply allowing all blockchain and UTXO bloat with no regard for the validation and UTXO costs, which have measurable impacts on nodes/relay. An interesting thought on that:

Quote from: Mark Friedenbach
Separately, many others including Greg Maxwell have advocated for a "net-UTXO" metric instead of, or in combination with a validation-cost metric. In the pure form the block size limit would be replaced with a maximum UTXO set increase, thereby applying a cost in extra fee required to create unspent outputs. This has the distinct advantage of making dust outputs considerably more expensive than regular spend outputs.

On the second question... if even jtoomim agrees that 4MB could cause problems for network relay, then we know that the upper bound for total throughput that Segwit allows per block at a 2MB limit (8MB) is a problem. Given that we must then make a choice, Segwit is superior for the following reasons:
   1) On node centralization -- a network of fully and partially validating nodes is superior to a network that increasingly shifts towards SPV.
   2) The dangers of a hard fork, particularly a contentious one are not justified when we have safer methods to increase throughput capacity.
   3) The necessity of a fee market to secure the blockchain.
   4) Additional features that solve transaction malleability, make SPV nodes safer (fraud proofs), and add new scripting functionality.
   5) Backward compatibility prevents nodes from being forked off the network.
   6) Core is addressing propagation delays head on with IBLTs and weak blocks to make a future block size increase safer.

In other words, "Patience, young grasshopper."
full member
Activity: 154
Merit: 100
I2VPN Lead developer. Antidote to 3-letter agencies
March 15, 2016, 05:19:19 AM
As for my signature, it's not *me* telling people what Bitcoin is/isn't but rather three noted experts stating the facts.
>noted experts
>LukeJr
Ahahahahaha Cheesy
Another anti-LukeJr idiot.
If you can't understand his words, go cry in a corner, no one will care.

No thinblock BIP = no thinblocks.  You are welcome to go deep diving into the esoteric specifics of why thinblocks aren't the best solution, but don't spread lies about how 'they were rejected because bandwidth doesn't matter.'

You offer no evidence supporting the assertion that Bitcoin's usefulness is "already declining."  The market says you are wrong about that.

Your personal anecdotes about banksters are not relevant.  VC/fintech investment in BTC is exploding; Hearn's R3 consortium is going hard.

And here's the thing that really makes you look clueless: USAA is now offering Coinbase integration on all customer accounts.

As for my signature, it's not *me* telling people what Bitcoin is/isn't but rather three noted experts stating the facts.  Sorry if that hurts your butt because you think you understand Bitcoin better than Hal Finney.   Smiley

What's wrong with you?

Because there is no BIP we can't talk about it? (there was a BUIP, btw)

Saying investments are "exploding" is a strange way to say they are drying up (that's a fact; most investments currently go to other coins or to "blockchain" / "sidechain" tech)

USAA just completed an integration with a watch-only wallet that they started nearly a year ago. Is that what you were waiting for?

And about your signature: you pick the quotes that fit your opinion. Do you really think I'm so stupid that I don't know it's possible to lie / manipulate with quotes? Please, show some respect. I'm not stupid.

And Hal Finney - do you think you understand Bitcoin better than Satoshi? Smiley

https://www.reddit.com/r/btc/comments/4ahgqf/i_created_an_ad_unit_calling_miners_to_support/
Here you go, there's another kid calling miners to support Classic.
Damn dumb people are doing it tho.

https://www.reddit.com/r/Bitcoin/comments/2pvhs3/we_are_moving_towards_1mb_average_block_size_very/cn0ez6w
A good explanation for the newbs.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
March 15, 2016, 01:17:40 AM
"Satoshi's Bitcoin" Officially REKT




The lifespan of these hopelessly contentious vanity forks (XT, Unlimited, Classic, Satoshi's) keeps getting shorter.   Grin
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
March 14, 2016, 09:50:45 PM

No thinblock BIP = no thinblocks.  You are welcome to go deep diving into the esoteric specifics of why thinblocks aren't the best solution, but don't spread lies about how 'they were rejected because bandwidth doesn't matter.'

You offer no evidence supporting the assertion that Bitcoin's usefulness is "already declining."  The market says you are wrong about that.

Your personal anecdotes about banksters are not relevant.  VC/fintech investment in BTC is exploding; Hearn's R3 consortium is going hard.

And here's the thing that really makes you look clueless: USAA is now offering Coinbase integration on all customer accounts.

As for my signature, it's not *me* telling people what Bitcoin is/isn't but rather three noted experts stating the facts.  Sorry if that hurts your butt because you think you understand Bitcoin better than Hal Finney.   Smiley

What's wrong with you?

Because there is no BIP we can't talk about it? (there was a BUIP, btw)

Saying investments are "exploding" is a strange way to say they are drying up (that's a fact; most investments currently go to other coins or to "blockchain" / "sidechain" tech)

USAA just completed an integration with a watch-only wallet that they started nearly a year ago. Is that what you were waiting for?

And about your signature: you pick the quotes that fit your opinion. Do you really think I'm so stupid that I don't know it's possible to lie / manipulate with quotes? Please, show some respect. I'm not stupid.

And Hal Finney - do you think you understand Bitcoin better than Satoshi? Smiley

Once again, you are welcome to discuss xthin blocks all you like, just please don't spread the false idea they are not being implemented in Core 'because bandwidth doesn't matter.'  LMK when there is a BIP, and we can go from there.

Where is the lie in my quotes?  They all seem like solid facts to me.  Sorry they don't support your ideas for CoinbaseCoin.

Try focusing on the quotes that are actually in my sig, not the general class of all quotes ever (including dishonest, selective ones).   Smiley
sr. member
Activity: 409
Merit: 286
March 14, 2016, 07:28:05 PM

No thinblock BIP = no thinblocks.  You are welcome to go deep diving into the esoteric specifics of why thinblocks aren't the best solution, but don't spread lies about how 'they were rejected because bandwidth doesn't matter.'

You offer no evidence supporting the assertion that Bitcoin's usefulness is "already declining."  The market says you are wrong about that.

Your personal anecdotes about banksters are not relevant.  VC/fintech investment in BTC is exploding; Hearn's R3 consortium is going hard.

And here's the thing that really makes you look clueless: USAA is now offering Coinbase integration on all customer accounts.

As for my signature, it's not *me* telling people what Bitcoin is/isn't but rather three noted experts stating the facts.  Sorry if that hurts your butt because you think you understand Bitcoin better than Hal Finney.   Smiley

What's wrong with you?

Because there is no BIP we can't talk about it? (there was a BUIP, btw)

Saying investments are "exploding" is a strange way to say they are drying up (that's a fact; most investments currently go to other coins or to "blockchain" / "sidechain" tech)

USAA just completed an integration with a watch-only wallet that they started nearly a year ago. Is that what you were waiting for?

And about your signature: you pick the quotes that fit your opinion. Do you really think I'm so stupid that I don't know it's possible to lie / manipulate with quotes? Please, show some respect. I'm not stupid.

And Hal Finney - do you think you understand Bitcoin better than Satoshi? Smiley

sr. member
Activity: 409
Merit: 286
March 14, 2016, 07:23:47 PM

You realize how stupid this sentence is? Bitcoin can scale. We just need to raise the blocksize to, let's say, 4 MB...


You don't address what I really mean by ignoring the most important part of the sentence.

No matter how many participants join the node network (e.g. 10k individual computers running nodes around the world vs. let's say 100 supercomputers), transaction processing remains the same. Bitcoin has no mechanism to take advantage of how many people participate in order to improve transaction processing times. Increasing the block size does not fix that.

Not only that but it also makes the network more prone to attacks as it'd undoubtedly decrease the number of participants.  

Nobody claimed increasing the blocksize would fix anything but capacity limits. And that's what it does. It scales bitcoin to process more transactions.

You really don't need a doctorate to realize that this means bitcoin can scale and that saying anything else is wrong.

Neither I nor anybody else from the big-blocks camp said bitcoin should scale to the sun and for eternity. That has nothing to do with scaling to current demand.

People like you who keep answering scaling demands by wrongly saying "bitcoin can't scale" are manipulating others (maybe without knowing it).
legendary
Activity: 2422
Merit: 1451
Leading Crypto Sports Betting & Casino Platform
March 14, 2016, 03:53:01 PM

You realize how stupid this sentence is? Bitcoin can scale. We just need to raise the blocksize to, let's say, 4 MB...


You don't address what I really mean by ignoring the most important part of the sentence.

No matter how many participants join the node network (e.g. 10k individual computers running nodes around the world vs. let's say 100 supercomputers), transaction processing remains the same. Bitcoin has no mechanism to take advantage of how many people participate in order to improve transaction processing times. Increasing the block size does not fix that.

Not only that but it also makes the network more prone to attacks as it'd undoubtedly decrease the number of participants.  
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
March 14, 2016, 03:21:54 PM
As for my signature, it's not *me* telling people what Bitcoin is/isn't but rather three noted experts stating the facts.
>noted experts
>LukeJr
Ahahahahaha Cheesy

LukeJr is a noted expert on Bitcoin.  That's a fact.

Do you resent the fact so greatly that you must use forced, canned laughter to avoid the cognitive dissonance entailed by accepting it?

How sad for you.   Cheesy