
Topic: <<How Software Gets Bloated: From Telephony to Bitcoin>> (Read 1449 times)

staff
Activity: 4284
Merit: 8808
Isn't placing the Merkle root for the block's witness data in a transaction, rather than in the block's header, a quintessential example of a "kludge" or a "hack"?
The block header would emphatically _not_ be the right place to put it. Increasing the header size would drive up costs for applications of the system, including ones which do not need the witness data. No one proposing a hardfork is proposing header changes either, so even if it were-- that's a false comparison. (But I wish you luck in a campaign to consign much of the existing hardware to the scrap heap, if you'd like to try!)
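For concreteness, here is a minimal sketch of how BIP141 carries that commitment without touching the 80-byte header: the commitment is derived from the witness Merkle root and a reserved value and placed in an OP_RETURN output of the coinbase transaction. The function names below are illustrative, not Bitcoin Core's actual code.

Code:
# Sketch: how BIP141 commits to the witness data without changing the
# 80-byte block header. The commitment lives in an OP_RETURN output of
# the coinbase transaction instead.
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def witness_commitment_script(witness_merkle_root: bytes,
                              witness_reserved_value: bytes = bytes(32)) -> bytes:
    """Build the coinbase scriptPubKey carrying the witness commitment:
    OP_RETURN (0x6a), a 36-byte push (0x24), the tag 0xaa21a9ed, then
    sha256d(witness_root || reserved_value)."""
    commitment = sha256d(witness_merkle_root + witness_reserved_value)
    return bytes([0x6a, 0x24]) + bytes.fromhex("aa21a9ed") + commitment

# Example with a dummy (all-zero) witness Merkle root:
print(witness_commitment_script(bytes(32)).hex())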

Quote
What about the accounting trick needed to permit more bytes of transactional data per block without a hard-forking change? This hack doesn't just add technical debt; it adversely affects the economics of the system itself.
Fixing the _incorrect costing_ of limiting based on a particular serialization's size-- one which ignores the fact that signatures are immediately prunable while UTXOs are perpetual state that is far more costly to the system-- was an intentional and mandatory part of the design. Prior to segwit, the discussion coming out of Montreal was that some larger sizes could potentially be safe, if there were a way to do so without worsening the UTXO bloat problem.

In a totally new system that costing would likely have been done somewhat differently, e.g. also reducing the cost of the vin txid and indexes relative to the txouts; but it's unclear that this would have been materially better, and whatever it was would likely have been considerably more complex (look at the complex table of costs in Ethereum, a greenfield system), with little benefit.

Again we also find that your technical understanding is lacking. No "discount" is required to get a capacity increase. It could have been constructed as two linear constraints: a 1MB limit on the non-witness data, and a 2MB limit on the composite. Multiple constraints are less desirable, because the multiple dimensions potentially make the cost calculation depend on the mixture of demands, but they would be perfectly functional. The discounting itself is not driven by anything related to capacity; instead, it is just a better approximation of the system's long-term operating costs. By better accounting for the operating costs, the amount of headroom needed for safety is reduced.
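To make the two formulations concrete, here is a minimal sketch (illustrative only, not Bitcoin Core's validation code) comparing segwit's single weight constraint with the hypothetical two-constraint version described above. The function and variable names are mine; base_size and total_size stand for a block serialized without and with its witness data.

Code:
# Two ways to limit block size, per the discussion above.
# base_size:  bytes of the block serialized WITHOUT witness data
# total_size: bytes of the block serialized WITH witness data

MAX_WEIGHT = 4_000_000  # BIP141 block weight limit

def within_weight_limit(base_size: int, total_size: int) -> bool:
    """Segwit's single linear constraint: non-witness bytes count 4x,
    witness bytes 1x, commonly written as 3*base + total <= 4M."""
    return 3 * base_size + total_size <= MAX_WEIGHT

def within_two_limits(base_size: int, total_size: int) -> bool:
    """The alternative mentioned above: two separate linear constraints,
    1 MB on the non-witness data and 2 MB on the composite block."""
    return base_size <= 1_000_000 and total_size <= 2_000_000

# 800 kB of non-witness data plus 500 kB of witness data passes both:
print(within_weight_limit(800_000, 1_300_000),  # True (weight 3,700,000)
      within_two_limits(800_000, 1_300_000))    # True

# 1 MB of non-witness data plus 1 MB of witness data shows the two forms
# are not identical: the two-constraint version accepts it, weight does not.
print(within_weight_limit(1_000_000, 2_000_000),  # False (weight 5,000,000)
      within_two_limits(1_000_000, 2_000_000))    # True

Either form allows more total bytes per block than the old 1MB serialized-size rule; the weight form simply folds the long-term cost weighting into a single constraint rather than a pair of limits.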
legendary
Activity: 1162
Merit: 1007
None of these soft-fork changes results in anything like mislabeled fields in an implementation or similar technical debt...

Isn't placing the Merkle root for the block's witness data in a transaction, rather than in the block's header, a quintessential example of a "kludge" or a "hack"?

What about the accounting trick needed to permit more bytes of transactional data per block without a hard-forking change? This hack doesn't just add technical debt; it adversely affects the economics of the system itself.
staff
Activity: 4284
Merit: 8808
He glosses over the important part in order to make an extremely tenuous inference:
"Most of my brain feels that this is a brilliant trick, except my deja-vu neurons are screaming with 'this is the exact same repurposing trick as in the phone switch.' It's just across software versions in a distributed system, as opposed to different layers in a single OS."
Bolded for emphasis.

That is not "just" a minor difference that can be hand-waved. That is the raison d'être of the soft fork. It is not because Bitcoin developers are concerned that they might mess something up, or because it would be a lot more work to do a hard fork, which seems to be the implication of this article.

It also ignores, or isn't aware of, the fact that we implemented segwit in the "green-field" form (with no constraint on history) in Elements Alpha-- and that we consider the form proposed for Bitcoin superior and are changing future Elements test chains to use it.

Applying this to Bitcoin conflates misleading source code with protocol design. None of these soft-fork changes results in anything like mislabeled fields in an implementation or similar technical debt-- if that happens it's purely the fault of the implementation.

legendary
Activity: 1708
Merit: 1049
I feel the bloat at I/O...

Jeff Dean's "Numbers that everyone should know" emphasizes the painful latencies involved with all forms of I/O. For years, the consistent message to developers has been that good performance is guaranteed by keeping the working set of an application small enough to fit into RAM, and ideally into processor caches. If it isn't that small, we are in trouble.
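For context, these are the rough order-of-magnitude figures from that talk; exact values vary by hardware generation, so treat the constants below (and the dictionary layout, which is just for illustration) as approximate.

Code:
# Approximate latencies from Jeff Dean's "Numbers Everyone Should Know"
# (circa-2010 hardware; order-of-magnitude only).
NS = 1
US = 1_000 * NS
MS = 1_000 * US

LATENCY_NS = {
    "L1 cache reference":                 0.5 * NS,
    "Branch mispredict":                  5 * NS,
    "L2 cache reference":                 7 * NS,
    "Main memory reference":              100 * NS,
    "Read 1 MB sequentially from memory": 250 * US,
    "Round trip within same datacenter":  500 * US,
    "Disk seek":                          10 * MS,
}

# One disk seek costs about as much as 100,000 main-memory references,
# which is why keeping the working set in RAM (or cache) matters so much.
print(LATENCY_NS["Disk seek"] / LATENCY_NS["Main memory reference"])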

...This could change pretty soon - but give it a few years before it hits mainstream adoption (maybe they want to milk the corporate profits first):

http://www.extremetech.com/extreme/211087-intel-micron-reveal-xpoint-a-new-memory-architecture-that-claims-to-outclass-both-ddr4-and-nand
sr. member
Activity: 433
Merit: 267
He glosses over the important part in order to make an extremely tenuous inference:
"Most of my brain feels that this is a brilliant trick, except my deja-vu neurons are screaming with 'this is the exact same repurposing trick as in the phone switch.' It's just across software versions in a distributed system, as opposed to different layers in a single OS."
Bolded for emphasis.

That is not "just" a minor difference that can be hand-waved. That is the raison d'être of the soft fork. It is not because Bitcoin developers are concerned that they might mess something up, or because it would be a lot more work to do a hard fork, which seems to be the implication of this article.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
http://hackingdistributed.com/2016/04/05/how-software-gets-bloated/

In my experience, software bloat almost always comes from smart, often the smartest, devs who are technically the most competent. Couple their abilities with a few narrowly interpreted constraints, a well-intentioned effort to save the day (specifically, to save today at the expense of tomorrow)
...
As systems get bloated, the effort required to understand and change them grows exponentially. It's hard to get new developers interested in a software project if we force them to not just learn how it works, but also how it got there, because its process of evolution is so critical to the final shape it ended up in. They became engineers precisely because they were no good at history.