If I need a datacentre to host a lightning hub, it will not be free. There will be huge centralization pressure, which results in censorship of transactions.
The "datacenter model" is a result of bandwidth pressures on nodes that need to validate and relay every transaction they receive. LN doesn't have such requirements because hops aren't committed to the blockchain. It's much, much more scalable than bitcoin's broadcast network model. (As an aside, it's a commonly held belief that
broadcast networks are not scalable)
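To put rough numbers on the difference (purely illustrative; every figure below is an assumption, not a measurement):

```python
# Rough comparison of network-wide validation work when every payment is an
# on-chain transaction (broadcast to and validated by every full node)
# versus payments routed off-chain over LN channels, where only channel
# opens and closes touch the chain. All numbers are made-up assumptions.

def validation_work(onchain_txs: int, num_nodes: int) -> int:
    """Each on-chain transaction is relayed to and validated by every node."""
    return onchain_txs * num_nodes

payments = 1_000_000   # assumed number of payments
nodes = 6_000          # assumed number of full nodes
channels = 10_000      # assumed number of LN channels (2 on-chain txs each)

all_onchain = validation_work(payments, nodes)
via_ln = validation_work(2 * channels, nodes)

print(f"every payment on-chain : {all_onchain:,} validations network-wide")
print(f"payments routed via LN : {via_ln:,} validations network-wide")
```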
Why does lightning prevent us from applying a solution to a problem we have now or in the near-term future?
LN doesn't prevent anything. But the burden is on you to explain how increasing the block size is a "solution" for anything. It certainly isn't a solution for scalability, since it does absolutely nothing to scale throughput; it's a one-time capacity bump.
Urgh. Sorry. Did you try thinblocks? Did you connect your node with thinblocks to other nodes using thinblocks? Did you watch the bandwidth? I did. Without thinblocks I, with my shitty internet connection, hit my limits after blocks were found. With thinblocks I never ever reached 20% of my limits.
I don't know the exact mechanism, but for sure thinblocks does a great job of:
- reducing node bandwidth
- speeding up the propagation of blocks in the WHOLE network
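My rough understanding of why, with back-of-the-envelope numbers (all of them assumptions, not measurements): most transactions in a freshly found block are already in your peers' mempools, so the block can be relayed as a list of short IDs plus only the few missing transactions, instead of the full block data.

```python
# Back-of-the-envelope estimate of thin-block-style relay savings.
# Every constant below is an assumption, not a measurement.

AVG_TX_SIZE_BYTES = 500    # assumed average transaction size
SHORT_ID_BYTES = 6         # assumed short-ID size per transaction
TXS_PER_BLOCK = 2000       # assumed transactions per block
MEMPOOL_HIT_RATE = 0.95    # assumed fraction already in peers' mempools

full_relay = TXS_PER_BLOCK * AVG_TX_SIZE_BYTES
thin_relay = (TXS_PER_BLOCK * SHORT_ID_BYTES
              + int(TXS_PER_BLOCK * (1 - MEMPOOL_HIT_RATE)) * AVG_TX_SIZE_BYTES)

print(f"full block relay : ~{full_relay / 1e6:.2f} MB per block per peer")
print(f"thin block relay : ~{thin_relay / 1e6:.3f} MB per block per peer")
print(f"savings          : ~{(1 - thin_relay / full_relay) * 100:.0f}%")
```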
Did you check the links I posted? I'm guessing you didn't.
Feel free to test.
That's not my responsibility as an end user.
Ok, here we are again. Nobody is talking about "optimal". There is no optimal solution for anything, since everything has its tradeoffs. We are talking about a solution that is needed now or in the short-term future and that is possible.
You still think raising the limit does not solve any problem. I told you why it solves a problem - that of capacity limits and a restriction of growth - and it's hard for me to understand that you think this is a non-solution / non-problem.
We should be talking about "optimal." Apart from a contentious hard fork -- which should always be avoided under any circumstances -- that's all that matters. Sure, you said lots of stuff, but you didn't make a case for how it solves any problem. How is growth being restricted? How do you know, for instance, that new users aren't replacing space previously occupied by spam (dust)?
Meanwhile, this debating around a constant threat of hard forking without consensus has likely taken a lot of time and energy away from developing real scaling solutions that actually attempt to make bitcoin more accessible, rather than simply doubling the cost to run a node and slapping small miners with fatter orphan risks.
You know what? I've been hearing Gavin claim that the sky was falling since a year ago. And according to Mike Hearn, bitcoin is already dead. And yet, everything is working just fine. The mempool has <2500 transactions, the average size of the last 6 blocks = 538kB and a fee of .0001 (less, actually) gets you in the next block.
Remind me -- when is the sky scheduled to fall?
In other words: fees are not enough to prevent spam. What's your idea? Some kind of transaction authority where you, even if you pay fees, have to explain what the economic sense of a transaction is?
People are doing a lot of hand-waving around the word "spam" lately. It has a pretty specific connotation in bitcoin and usually refers to dust (unspendable outputs). If fees are not enough to prevent spam, it simply means there is not much demand for block space and transactions are cheap. For example, my last dozen or so transactions paid 3-8 cents each, depending on network load. At that price, it apparently makes sense to spam advertisements, etc. across the blockchain with as many dust outputs as possible, all the time.
My solution? Well, I don't know how much support this would receive among developers. Between general statistical principles (see Pareto) and simply looking at thousands of spam transactions, I would guess that 20% of users occupy 80% of block space. Thus I think users would support this, since spammers are very likely a smaller population taking up more room per capita than other users. The idea: instead of the pure validation-cost metric we have now, integrate a UTXO-cost metric that accounts not only for block size (validation cost) but also for a transaction's net effect on the UTXO set. It would apply a cost, in extra fees required, to creating unspent outputs. This makes dust outputs considerably more expensive than regular spend outputs.
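A rough sketch of what I mean (the function and all constants below are made up for illustration; this isn't an actual policy or anyone's implementation):

```python
# Sketch of the UTXO-cost fee metric described above. All names and
# constants are illustrative assumptions, not real node policy.

def required_fee(tx_size_bytes: int, outputs_created: int, inputs_spent: int,
                 fee_per_byte: float = 0.25, utxo_cost: float = 100.0) -> float:
    """Fee in satoshis: the usual size-based component plus a surcharge for
    every net output the transaction adds to the UTXO set. Transactions
    that consolidate UTXOs (spend more than they create) pay no surcharge."""
    net_new_utxos = max(0, outputs_created - inputs_spent)
    return tx_size_bytes * fee_per_byte + net_new_utxos * utxo_cost

# A dust-spam transaction fanning out to 50 outputs pays far more than a
# normal 2-output spend:
print(required_fee(tx_size_bytes=2000, outputs_created=50, inputs_spent=1))  # 5400.0
print(required_fee(tx_size_bytes=250, outputs_created=2, inputs_spent=1))    # 162.5
```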
Right now all we have are band-aid solutions to dust attacks: raise the dust threshold node policy, etc. Here, we could directly target spammers rather than externalizing the costs of their dust onto users in the form of transaction fees and onto nodes in the form of UTXO bloat.
I think it's a win-win, but in the current atmosphere, nobody cares about this shit. They only care about "the block size."
It is true if we believe what the authors of Segwit say.
It's quite strange that you say all the time that we need better solutions to make the network scale, but now, when we agree that Segwit is an obstacle to future scaling, you say it's the network's problem? Why?
We don't agree on that. We just need to account for the Segwit discount when contemplating the scope of future block size increases. Do you not understand that, with or without Segwit, the block size limit is a function of the network's infrastructural limitations? I'm pretty confused at how you could construe this as Segwit being an obstacle to scaling. Maybe it's because you're hung up on round numbers for block size? The issue is throughput capacity.
It only means that we must plan block size increases with the additional witness capacity in mind (if 6MB is safe but 8MB is not, switch to 1.5MB blocks instead of 2MB; if 8GB is safe but 16GB is not, switch to 2GB blocks instead of 4GB). We can also change the witness discount if this were a real issue (hint: it's not).
That's the definition of the opposite of making a network scale. That's anti-scaling, because it enormously reduces the network's capability to scale.
What are you talking about? It has no effect on that. Is the issue throughput or not? Then account for total throughput capacity rather than obsessing over block size. Please read what I wrote again. If 6MB is safe and 8MB is not, then a 1.5MB block size = total capacity of 6MB.
With or without Segwit, the network is subject to the same limitations. Simply put: what you are trying to paint as a problem is simply a misunderstanding of math -- or an intentional conflation of "throughput capacity" with "block size." Isn't throughput the reason you're incessantly complaining? Or at this point, are you just focused on achieving the biggest blocks possible, as soon as possible?
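To make the arithmetic concrete (a rough sketch only; the 4x worst-case witness factor is an assumption about transaction mix, not a fixed constant):

```python
# Illustrative arithmetic: how a base block size maps to worst-case total
# throughput once the Segwit witness discount is counted. The 4x factor is
# an assumed worst case; the real multiplier depends on the transaction mix.

WITNESS_FACTOR = 4  # assumption

def worst_case_total_mb(base_block_mb: float) -> float:
    return base_block_mb * WITNESS_FACTOR

for base in (1.0, 1.5, 2.0):
    print(f"{base} MB base block -> up to ~{worst_case_total_mb(base)} MB total")

# If ~6 MB of total throughput is considered safe but ~8 MB is not, pick a
# 1.5 MB base block rather than 2 MB.
```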
Whoever ingrained the idea into you that Segwit cannot scale past 1MB was spreading disinformation (see, for example, Matt Corallo's proposal for a hard fork to 1.5MB after implementing Segwit).
But the old nodes are no longer nodes with full functionality, right?
No, they are not, in the one sense that they don't validate Segwit transactions. It's a security trade-off. All else equal, we can expect increased throughput to continue to pressure nodes off the network. We can either see them forced off the network entirely, or we can retain them as partially-validating nodes which still provide great value.
Once the reward subsidy ends (and it is rapidly decreasing, with a reward halving in a few months),
Don't calculate rewards in bitcoin but in dollars. If you cut growth now, the reward in dollars will stop rising. That's a good method to cut miners' income now.
Sorry, but there is no evidence that growth is being "cut" nor that the market will do, frankly, anything. You're just spreading FUD.
how do you propose to secure the network?
Possibly not by routing fees to lightning hubs and by limiting the number of products miners can sell.
More sustainable would be a massive amount of onchain transactions with low fees.
Ah, right. And how do you guarantee that a massive amount of onchain transactions will exist? With unlimited capacity, why would anyone even pay fees at all? And if there is no block subsidy, why would any rational miner expend resources securing such a network for no reward?