Topic: . - page 23. (Read 24757 times)

legendary
Activity: 2674
Merit: 2965
Terminated.
February 04, 2016, 01:41:47 PM
I am not wrong in saying that the Lightning Network is being developed by Blockstream, and that SegWit arbitrarily favors the transaction types used by this technology over other transaction types; call it a subsidy if you will.
Your post made it seem like Blockstream is the only one behind the Lightning Network (and implied control). You should watch your wording, then.
Even if Blockstream is benign, the conflict of interest is clear. Just another example of Core attempting to apply a centrally planned economic policy onto the Bitcoin protocol.
As has already been demonstrated a number of times, Bitcoin does not scale efficiently via the block size limit. Bitcoin was never and will never be suited for microtransactions. These two things you can't change unless you redesign Bitcoin from scratch (obviously not possible anymore; you can create a new coin though). The only way that Bitcoin could ever achieve mainstream adoption would be through second-layer solutions such as LN.
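To make the disputed "discount" concrete: under the SegWit proposal, block capacity is counted in "cost" rather than raw bytes, with witness bytes weighted at roughly a quarter of non-witness bytes. Below is a minimal sketch of that accounting, based on the 75% witness discount in the public BIP141 draft; TxCost and MAX_BLOCK_COST are made-up names for illustration, not Bitcoin Core's actual API.

Code:
// Sketch of the proposed SegWit cost accounting (BIP141 draft figures):
// witness bytes are discounted ~75% relative to non-witness bytes.
// TxCost is a hypothetical helper, not Bitcoin Core's real API.
#include <cstdint>
#include <iostream>

static const uint32_t MAX_BLOCK_COST = 4000000; // 4x the old 1,000,000-byte cap

uint32_t TxCost(uint32_t base_size, uint32_t witness_size) {
    uint32_t total_size = base_size + witness_size;
    return 3 * base_size + total_size; // base bytes count 4x, witness bytes 1x
}

int main() {
    std::cout << "block cost limit: " << MAX_BLOCK_COST << "\n";
    // Hypothetical sizes: a signature-heavy spend that moves most of its
    // bytes into the witness pays a lower cost per byte carried.
    std::cout << TxCost(250, 0)   << "\n"; // all-base tx:      cost 1000
    std::cout << TxCost(100, 150) << "\n"; // witness-heavy tx: cost 550
}

Whether that weighting is a sound engineering incentive or an arbitrary subsidy is precisely the disagreement in this exchange.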
hero member
Activity: 546
Merit: 500
February 04, 2016, 01:39:13 PM
One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.
Orphaning blocks like that would cause more problems than it would solve. The main result is that the smaller miners, who don't have the hashpower, would get orphaned more often, because they include fewer transactions to try to gain an advantage against the large farms. That would make the mining network less distributed and favour the large mining farms.
Hmm, https://blockchain.info/block-index/1010926/0000000000000000001e7d9e5bf68fda5332ee606963b407353fc8b0f1f4d38b came from AntPool.
Meanwhile, I haven't seen the much smaller Slush pool put out such empty blocks.
The blocksize does not affect pools such as Slush; it would have to be much larger to affect mining centralization in this way. If anything it would move the mining nodes out of China, having a temporary decentralizing effect. We can increase the blocksize much more before it has any significant effect on mining centralization, since it is trivial for pools like Slush to run a full node for the purpose of mining, considering the scale at which these pools need to operate to even be viable in the first place. The miners themselves do not run full nodes for the purpose of mining; we pool our hash power. I mine with Slush myself; it is a good pool. Smiley
hero member
Activity: 709
Merit: 503
February 04, 2016, 01:37:04 PM
One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.
Orphaning blocks like that would cause more problems than it would solve. The main result is that the smaller miners, who don't have the hashpower, would get orphaned more often, because they include fewer transactions to try to gain an advantage against the large farms. That would make the mining network less distributed and favour the large mining farms.
Hmm, https://blockchain.info/block-index/1010926/0000000000000000001e7d9e5bf68fda5332ee606963b407353fc8b0f1f4d38b came from AntPool.
Meanwhile, I haven't seen the much smaller Slush pool put out such empty blocks.
Filling a block takes a tiny amount of time compared to finding a compliant nonce, for a pool of any size or even a solo miner.
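Some rough arithmetic behind that claim, with assumed (not measured) numbers: at difficulty D, a miner needs about D x 2^32 hashes on average to find a valid header, which dwarfs the milliseconds needed to assemble a full block template.

Code:
// Back-of-the-envelope comparison of nonce search vs. template assembly.
// All figures below are illustrative assumptions, not measurements.
#include <cstdio>

int main() {
    double difficulty    = 1.2e11;                    // ballpark early-2016 difficulty (assumed)
    double hashes_needed = difficulty * 4294967296.0; // expected hashes = D * 2^32
    double pool_hashrate = 3.0e16;                    // assumed pool at ~30 PH/s, a few % of network
    double search_secs   = hashes_needed / pool_hashrate;
    printf("expected nonce search: ~%.0f seconds (hours)\n", search_secs);
    printf("template assembly:     milliseconds, whatever the block size\n");
    return 0;
}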
legendary
Activity: 4410
Merit: 4788
February 04, 2016, 01:36:12 PM
Just another example of Core attempting to apply a centrally planned economic policy onto the Bitcoin protocol.

Don't worry. The Blockstream fanboys have been blasting that Bitcoinocracy link among all their friends.. it's not a fair view of the community.

Hell, even Hillary Clinton can fake a poll by only asking her best friends to get on TV and show their support for her.
hero member
Activity: 709
Merit: 503
February 04, 2016, 01:34:07 PM
One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.
Orphaning blocks like that would cause more problems than it would solve. The main result is that the smaller miners, who don't have the hashpower, would get orphaned more often, because they include fewer transactions to try to gain an advantage against the large farms. That would make the mining network less distributed and favour the large mining farms.
Hmm, https://blockchain.info/block-index/1010926/0000000000000000001e7d9e5bf68fda5332ee606963b407353fc8b0f1f4d38b came from AntPool.
Meanwhile, I haven't seen the much smaller Slush pool put out such empty blocks.
hero member
Activity: 546
Merit: 500
February 04, 2016, 01:32:02 PM
Core completes years-long development work which speeds signature validation more than 5x, invents a clean way to allow new efficiencies and security trade-offs, gets node pruning working, and comes up with a way to get roughly 2MB worth of capacity without a risky and coordination-problematic hardfork, while simultaneously fixing some long-standing bugs like malleability and misaligned incentives (UTXO bloat).
The work Core has done is good, aside from certain details within the code discounting some transaction types in favor of the Lightning Network developed by Blockstream. It being open source, I am sure it can be adapted and adopted into another client which will scale Bitcoin to its true potential.
This is what you label as 'good'? You obviously don't even have the slightest clue as far as the complexity is concerned. Another thing that you're wrong about is "LN developed by Blockstream". There are multiple implementations of the Lightning Network; the most advanced ones are:
Joseph, Tadge and roasbeef's version: https://github.com/LightningNetwork/lnd/  
Rusty's version (Blockstream): https://github.com/ElementsProject/lightning

The only way to "prove" "community support" is through proof of work. The moment of truth is approaching.
Power to the people! Grin
Look at all the support for the 'forkers':
Quote
I am not wrong in saying that the Lightning Network is being developed by Blockstream, and that SegWit arbitrarily favors the transaction types used by this technology over other transaction types; call it a subsidy if you will. Even if Blockstream is benign, the conflict of interest is clear. Just another example of Core attempting to apply a centrally planned economic policy onto the Bitcoin protocol.
legendary
Activity: 2674
Merit: 2965
Terminated.
February 04, 2016, 01:27:19 PM
Core completes years-long development work which speeds signature validation more than 5x, invents a clean way to allow new efficiencies and security trade-offs, gets node pruning working, and comes up with a way to get roughly 2MB worth of capacity without a risky and coordination-problematic hardfork, while simultaneously fixing some long-standing bugs like malleability and misaligned incentives (UTXO bloat).
The work Core has done is good, aside from certain details within the code discounting some transaction types in favor of the Lightning Network developed by Blockstream. It being open source, I am sure it can be adapted and adopted into another client which will scale Bitcoin to its true potential.
This is what you label as 'good'? You obviously don't even have the slightest clue as far as the complexity is concerned. Another thing that you're wrong about is "LN developed by Blockstream". There are multiple implementations of the Lightning Network; the most advanced ones are:
Joseph, Tadge and roasbeef's version: https://github.com/LightningNetwork/lnd/  
Rusty's version (Blockstream): https://github.com/ElementsProject/lightning

The only way to "prove" "community support" is through proof of work. The moment of truth is approaching.
Power to the people! Grin
Look at all the support for the 'forkers':
Quote
I am not wrong in saying that the Lightning Network is being developed by Blockstream, and that SegWit arbitrarily favors the transaction types used by this technology over other transaction types; call it a subsidy if you will. Even if Blockstream is benign, the conflict of interest is clear. Just another example of Core attempting to apply a centrally planned economic policy onto the Bitcoin protocol.
hero member
Activity: 709
Merit: 503
February 04, 2016, 01:26:46 PM
One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.
Orphaning blocks like that would cause more problems than it would solve. The main result is that the smaller miners, who don't have the hashpower, would get orphaned more often, because they include fewer transactions to try to gain an advantage against the large farms. That would make the mining network less distributed and favour the large mining farms.
Hmm, https://blockchain.info/block-index/1010926/0000000000000000001e7d9e5bf68fda5332ee606963b407353fc8b0f1f4d38b came from AntPool.
hero member
Activity: 546
Merit: 500
February 04, 2016, 01:24:26 PM
The only way to "prove" "community support" is through proof of work. The moment of truth is approaching.

Power to the people! Grin

You are seriously deluded (or more likely your account was bought).

The miners are not supporting 2MB.

(did you forget that I live in China?)
Better check your facts. There is no reason to insult me either; just check my post history and you will see that my writing style and views are consistent. I am also a miner, and I support 2MB. Smiley

https://bitcoinclassic.com/
https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit?pref=2&pli=1#gid=0
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
February 04, 2016, 01:21:34 PM
The only way to "prove" "community support" is through proof of work. The moment of truth is approaching.

Power to the people! Grin

You are seriously deluded (or more likely your account was bought).

The miners are not supporting 2MB.

(did you forget that I live in China?)
legendary
Activity: 4410
Merit: 4788
February 04, 2016, 01:19:47 PM
SegWit does seem good, although it is a little tricky, and I wonder if we will get it perfect the first time.

Increasing the blocksize limit seems good to me.  It seems trivial to get it right.

Doing both seems OK to me but does contradict the wisdom of only making one change at a time. Can we change one and then, a little later, the other, or is there some compelling reason to do them at the same time?

2MB is just a few lines of code that just sit there as a buffer.. just like 1MB sat as a buffer for years, never hitting top capacity. SegWit is a total change.
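For reference, the "few lines" in question: at the time, the hard cap lived in a single constant in Bitcoin Core's consensus code. A simplified sketch of what a 2MB patch centers on follows; a real patch would also update the sigop limit and add fork-activation logic.

Code:
// Simplified: the 1MB cap as a single consensus constant, and the value
// a 2MB hardfork patch would change. Real patches also touch the sigop
// limit handling and add activation rules for the fork.
static const unsigned int MAX_BLOCK_SIZE   = 1000000;             // current 1MB cap
static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE / 50; // scales with the cap

// After a 2MB hardfork patch:
// static const unsigned int MAX_BLOCK_SIZE = 2000000;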

One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.

Orphaning blocks like that would cause more problems than it would solve. The main result is that the smaller miners, who don't have the hashpower, would get orphaned more often, because they include fewer transactions to try to gain an advantage against the large farms. That would make the mining network less distributed and favour the large mining farms.
hero member
Activity: 546
Merit: 500
February 04, 2016, 01:17:20 PM
So don't make it out to sound like the community who want 2MB real blocks are following Gavin.. it's the other way round.. Gavin is giving in to the community.
Again - you have never provided proof of any such "community support" for 2MB.
The only way to "prove" "community support" is through proof of work. The moment of truth is approaching.

Power to the people! Grin
hero member
Activity: 709
Merit: 503
February 04, 2016, 01:14:58 PM
SegWit does seem good, although it is a little tricky, and I wonder if we will get it perfect the first time.

Increasing the blocksize limit seems good to me.  It seems trivial to get it right.

Doing both seems OK to me but does contradict the wisdom of only making one change at a time. Can we change one and then, a little later, the other, or is there some compelling reason to do them at the same time?

One thing I wonder about is how to motivate miners to fill up blocks; mining partial blocks when there's a backlog is pretty dumb. Here's my thought: when a block is announced, look at the backlog, and if the backlog is big enough, keep working on the pre-announcement work, trying to find a fuller block. Orphaning partial blocks motivates miners to fill their blocks. Full blocks will attract subsequent work and eventually build a longer chain. Everyone on the fuller (and hopefully longer) chain wins; everyone on chains with partial blocks loses.
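To make the proposal concrete, here is a minimal sketch of the decision rule being suggested. Every type, name, and threshold below is a hypothetical illustration, not part of any real client.

Code:
// Sketch of the proposed miner policy: on hearing a new block, keep
// extending the previous tip when the new block left a large backlog
// unconfirmed, hoping a fuller block orphans the partial one.
#include <cstddef>
#include <iostream>

struct Block   { std::size_t size_bytes; };
struct Mempool { std::size_t backlog_bytes; };

const std::size_t kBacklogThreshold = 5000000; // assumed: ~5 full blocks waiting
const std::size_t kPartialCutoff    = 500000;  // assumed: under half full counts as "partial"

bool ShouldKeepMiningOldTip(const Block& announced, const Mempool& pool) {
    return pool.backlog_bytes > kBacklogThreshold    // backlog is big enough
        && announced.size_bytes < kPartialCutoff;    // announced block is partial
}

int main() {
    Mempool pool{6000000};   // assumed 6MB of waiting transactions
    Block   partial{120000}; // a 120KB block arrives
    std::cout << std::boolalpha
              << ShouldKeepMiningOldTip(partial, pool) << "\n"; // prints: true
}

Note the objection quoted in the replies: a rule like this hits small miners hardest, since they deliberately keep blocks small to reduce orphan risk against the large farms.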
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
February 04, 2016, 01:14:17 PM
So don't make it out to sound like the community who want 2MB real blocks are following Gavin.. it's the other way round.. Gavin is giving in to the community.

Again - you have never provided proof of any such "community support" for 2MB (you just keep on repeating the word community like a fucking retard).

Hmm.. maybe that's because you are a fucking retard?

(now go and sulk because I called you a bad name)
legendary
Activity: 4410
Merit: 4788
February 04, 2016, 01:11:49 PM

So why isn't Gavin going for 32MB instead of the 2MB (after he has downsized a few times)?


Because he is following community demand rather than personal choice. It was the community that first said 2MB was acceptable.. Gavin followed afterwards..

So don't make it out to sound like the community who want 2MB real blocks are following Gavin.. it's the other way round.. Gavin is giving in to the community.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
February 04, 2016, 01:08:09 PM
Fact: Satoshi reduced the blocksize to an arbitrary 1MB as a temporary security measure only. It was never intended to stay fixed at 1MB and certainly never intended as a consensus rule.

Fact: Satoshi's original software had 32MB as the limit.

So why isn't Gavin going for 32MB instead of the 2MB (after he has downsized a few times)?
hero member
Activity: 546
Merit: 500
February 04, 2016, 01:07:22 PM
Just some spit-balling here, I'm sure I have some ordering and details wrong.
I am going to correct some of the things you said here, in the interest of truth.

Mike and Gavin argue for no limits and ~free transactions forever, regardless of the costs.
I think that you might be lying here, since I think you should know better. Please supply a source where Gavin definitively states his support for free transactions forever.

Some in core suggest that if we really need to show the flexibility of a blocksize increase, 2MB could probably be sold and made safe enough (with planned future improvements) if we really had to.
Even today Core has not given the community a date for a hard-fork increase of the blocksize limit. This is unacceptable, and you should not expect the community to trust Core. This is why the community can fork the network themselves: if the interests of the "reference client" no longer reflect the will of the economic majority, then we are certainly justified in doing so.

Gavin gets some clue and realizes no limit at all makes no sense; Mike grudgingly agrees to go along with a 20MB proposal.
It's called compromise; Core could learn from this.

Miners reject that as unrealistically large.
Because miners decide, incentives aligning their interests with those of the economic majority. Ultimately this economic majority decides, not a centralized group of technocrats in charge of one implementation of Bitcoin.

Gavin and Mike retrench with an 8MB proposal that rapidly ramps to gigabytes, and announce intent to adversarially fork the network with Bitcoin XT: a new client which is 99.99% code from Core, but with the addition of the new expanded blocksize and Mike Hearn declared benevolent dictator of the project.
What you describe as "rapidly" actually takes place over a twenty-year period. It is fine to be skeptical of the schedule within BIP101, but you are dramatically exaggerating the facts with the language you choose to use.

Furthermore, you fail to acknowledge that Core is also effectively a dictatorship. Every implementation of Bitcoin is arguably a dictatorship; it is the nature of open-source projects. In Bitcoin, the democratic will of the economic majority can be represented through the choice of multiple alternative implementations. This solves this particular problem within the governance of Bitcoin.

In spite of announced intentions by some, almost no one adopts Bitcoin XT and XT development sits at a near standstill compared to Core.
Again with the exaggeration: around ten percent of the nodes have been alternatives to Core since the launch of XT, and I would not call that nothing. When Classic is released I am sure it will take away an even bigger share of nodes running Core. Not to mention the supermajority of miners now supporting Classic.

Systems testing is finally performed on XT via testnet a month before it was intended to activate. Many performance problems are found with 8MB; the person testing it suggests that 4MB or 3MB may be more realistic initially.
Other tests showed that it was safe. I am not a technical expert myself, but there are technical experts besides yourself who say eight megabytes would be fine.

Core completes years-long development work which speeds signature validation more than 5x, invents a clean way to allow new efficiencies and security trade-offs, gets node pruning working, and comes up with a way to get roughly 2MB worth of capacity without a risky and coordination-problematic hardfork, while simultaneously fixing some long-standing bugs like malleability and misaligned incentives (UTXO bloat).
Hard forks allow us to test consensus and serve as a check against the power of any development team. They are not something to be avoided; they should be embraced. The work Core has done is good, aside from certain details within the code discounting some transaction types in favor of the Lightning Network developed by Blockstream. It being open source, I am sure it can be adapted and adopted into another client which will scale Bitcoin to its true potential.

Core posts a capacity roadmap including these solutions, along with a plan for further development to allow more capacity into the future.
We have discussed this before; it comes down to a fundamental disagreement over the vision and future of Bitcoin. To say that Core stands for scaling Bitcoin directly is something of a misnomer; at least Peter Todd was able to acknowledge this. Core seems to want to fundamentally change the economic policy of Bitcoin. Many people do not agree with this, having signed up for the original vision of Bitcoin. Not everyone accepts off-chain solutions as a way to scale Bitcoin.

Almost the entire development community and many in industry sign an open letter in support of this plan. On the order of fifty people in all, it includes all of the most active contributors to Bitcoin Core and to many other pieces of Bitcoin software.
This is a very misleading statement, especially considering that at the time three out of the five Core committers did not even sign the road map: Gavin Andresen, Jeff Garzik and Peter Todd, prominent developers to say the least. Not to mention the many companies that also disagree with the road map.

Gavin, Jeff, and a few other people (including people involved with the recently insolvent Cryptsy exchange) announce that they're creating "Bitcoin Classic": a retry of the XT approach, but with added popular voting on a centralized web voting site.
Explain to me how Classic is a retry of XT? The answer is that it is not; the only thing they have in common is that they increase the blocksize and that Gavin is involved. I think you are falsely equating these two projects.

Mike Hearn catches fire, slams Bitcoin with a one-sided attack piece in the NYT calling Bitcoin a failure. Some argue that Mike's position is driven by his employment at R3, a company adversarial to Bitcoin working with major banks. Astute followers know this isn't true: Mike's misalignment with Bitcoin has existed forever.
Mike's loss of belief and going off to work for R3 is unfortunate. However, it is wrong to think without evidence that he was always a shill for R3. You should know better than to throw around baseless accusations.

Bitcoin market price crashes significantly.
I have called you out on this before: relating these events to the markets. Unless you are also an expert in markets, you are just fearmongering.

Core creates a public test network for the new improvements and many people are actively testing on it. Several wallets begin their integration process for the new improvements. Development moves rapidly, several standards documents are written.

Market price substantially recovers.
You are attributing the rise in price to the work done in Core? Seriously, you should know better.

Gavin finally announces code for the new "Classic", largely duplicating the XT functionality. Instead of the BIP101 rapid-growth scheme, it features a 2MB hardfork, and none of the other improvements recently landed in Core or in the works.

Bitcoin market price drops significantly again.
You are using people's fear for their monetary investment to sway them over to your ideological side. That is manipulation; what you are doing here is propaganda.

I'm hoping we get to the point where the market realizes it is being toyed with here and that repeated XT-reloaded attempts are pretty meaningless. We're seemingly not there yet.
What are you even trying to say here? That all alternative implementations that want to increase the blocksize are meaningless? It sounds like you have contempt for the freedom of choice, and the very check on the power of Core. Bitcoin is freedom, and I think that you are trying to control it.

Have I basically summarized the last year? Anyone want to add any bullets?
Classic will be released soon, having already gained supermajority support from the miners. The narrative has evolved, and more people are aware of the divergence of vision and the possible conflict of interest within Core. Bitcoin is freedom, and I am confident that the original vision of Satoshi will triumph. Smiley
legendary
Activity: 2506
Merit: 1030
Twitter @realmicroguy
February 04, 2016, 01:05:07 PM
I have been so done with the block fight that I don't even know what SegWit is or why it can't work with 2MB blocks; if it can, then why not both?

As I understand it, SegWit and increased blocksizes are two completely separate issues. SegWit improves efficiency, and is the equivalent of a 2MB blocksize, or better. It introduces a number of other possibilities as well. As there is no immediate need for a blocksize increase, it would be better to implement SegWit and then see what is needed for the future. Who knows, maybe the Litecoin and Bitcoin chains should be combined to facilitate exchanges. Smiley

An irrational fear of hard forking to 2MB is not a valid justification for Blockstream's paid/owned Core devs to ignore the wider community and strap on their experimental, protocol-perverting sidechain.

Fact: Satoshi reduced the blocksize to an arbitrary 1MB as a temporary security measure only. It was never intended to stay fixed at 1MB and certainly never intended as a consensus rule. What we are seeing is a 'classic' example of a hostile corporate takeover that has fully co-opted command and control of Bitcoin Core.
legendary
Activity: 4410
Merit: 4788
February 04, 2016, 12:57:34 PM
I have been so done with the block fight that I don't even know what SegWit is or why it can't work with 2MB blocks; if it can, then why not both?

As I understand it, SegWit and increased blocksizes are two completely separate issues. SegWit improves efficiency, and is the equivalent of a 2MB blocksize, or better. It introduces a number of other possibilities as well. As there is no immediate need for a blocksize increase, it would be better to implement SegWit and then see what is needed for the future. Who knows, maybe the Litecoin and Bitcoin chains should be combined to facilitate exchanges. Smiley

There was no immediate need for 1MB when blocks were only being made at less than 500KB in 2009-2013.. but the setting was there at 1MB.. AS A BUFFER, to allow for growth without any rush or debate.
legendary
Activity: 2814
Merit: 2472
https://JetCash.com
February 04, 2016, 12:52:35 PM
I have been so done with the block fight that I don't even know what SegWit is or why it can't work with 2MB blocks; if it can, then why not both?

As I understand it, SegWit and increased blocksizes are two completely separate issues. SegWit improves efficiency, and is the equivalent of a 2MB blocksize, or better. It introduces a number of other possibilities as well. As there is no immediate need for a blocksize increase, it would be better to implement SegWit and then see what is needed for the future. Who knows, maybe the Litecoin and Bitcoin chains should be combined to facilitate exchanges. Smiley