
Topic: [POLL] Is bigger block capacity still a taboo?

legendary
Activity: 2422
Merit: 1451
Leading Crypto Sports Betting & Casino Platform
Many people keep bringing the results of this poll as a way of saying that the community is divided.

First off, bitcointalk polls receive anonymous responses from any account, so don't take them too seriously.

But even as a pointer, I don't think it shows any division. If anything, these results show that the discussion has matured a lot since 2015 and the community has grown a lot:



The environment was very hostile in those years, around the "block size wars", because many had jumped at the opportunity to profit at BTC's expense by forking off into their own coins and whatnot.
A long time has passed since then, and most of those altcoins are no longer relevant. So to me this vote indicates progress.
If a sound solution to scaling were proposed now, it would probably receive a more level-headed discussion.
copper member
Activity: 901
Merit: 2244
Quote
Sure, but why is linear growth necessarily bad?
Note the difference between "bad" and "non-scalable". It is merely non-scalable; that doesn't mean it is bad, it only means you cannot call it "scaling", because the scale stays exactly the same. Unless you think that a 1:1 scale is something entirely different from a 10:10 scale or a 100:100 scale.

Quote
You need a linearly grown basis to base your compression model later on.
Why later, and why not now? Is compressing 4 MB somehow different from compressing 32 MB? In the end, if you want to talk about scaling, you will still express your compression as a percentage, or as the total number of bytes used, if your initial size is constant.

Also note that signatures are easier to compress than arbitrary data: you can add public keys together and the result is cryptographically equivalent, whereas you cannot losslessly drop data chunks from the middle, because then you lose that data. This means Ordinals are hard to compress while signatures are comparatively easy, so regular signatures should be compressed and Ordinals left as they are. That way, regular users would have at least some way to defend themselves when fees are too high.
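To make the "adding public keys" point concrete, here is a minimal pure-Python sketch (not code from any client; the private keys are made-up examples) showing that on secp256k1 the sum of two public keys equals the public key of the summed private keys, which is the property that key/signature aggregation schemes such as MuSig build on:

Code:
# Minimal secp256k1 sketch (illustrative only): P1 + P2 == (k1 + k2) * G
p = 2**256 - 2**32 - 977                      # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                            # point at infinity
    if P == Q:
        lam = (3 * P[0] * P[0]) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

k1, k2 = 12345, 67890                          # made-up private keys
P1, P2 = ec_mul(k1, G), ec_mul(k2, G)
assert ec_add(P1, P2) == ec_mul((k1 + k2) % n, G)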

Quote
What was the reasoning of this rejection?
There were many different reasons. You can read the mailing list, or read topics like this one: https://bitcointalksearch.org/topic/drivechain-critiques-by-gmaxwell-revisited-maybe-you-changed-your-mind-5231460
Maybe I will add something more if this is not enough, but probably not in this topic.

Quote
And by whom were they rejected?
By some of the mailing list participants and code reviewers, of course. I can look for more details if you need them, but probably not in this topic, because as you can see above, there are other topics about sidechains, and they probably fit better.

Also, maybe "rejected" is a too strong word. Some people could argue, that they were "postponed" or "paused, and sent back to fix some points". It depends, who you ask, because this is how decentralization works: different people will tell you different things, and maybe a better way to say that is "those specific, and submitted BIPs, related to sidechains, failed to reach consensus (yet)".
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Is it because it would also depend on what that person's definition of "scaling" is?
I'll have to agree with vjudeu's definition of scaling, even though I'd endorse a block size increase that is based on a well-studied rationale.

It is just a pure, linear growth.
Sure, but why is linear growth necessarily bad? You need a linearly grown basis to base your compression model later on.

But because they were rejected as a soft-fork, I am trying to create something else.
What was the reasoning of this rejection? And by whom were they rejected?
copper member
Activity: 821
Merit: 1992
Quote
Spend 0.0053% of system cost to scale the entire system by 10x. Is this a good scaling?
Go on, give it a try. Or find somebody who thinks the same way and encourage that person to code it. Or maybe you already did? If so, where can I download the client?

If you want to ask me the same question, then sidechains can be downloaded here: https://www.drivechain.info/
But because they were rejected as a soft-fork, I am trying to create something else.
newbie
Activity: 6
Merit: 2
Quote
It's strange that you count the resources spent in bytes, in "virtual bytes", when all over the world resources spent are usually assessed in monetary terms.
Testnet can handle exactly the same bytes, with exactly the same hardware, but those coins are worthless. Does it mean that we should take the current price of Bitcoin in dollars, and base our scalability on that? Because guess what: 1 BTC is worth 1 BTC. If you want to measure it "in monetary terms", then you have to use a price, relative to altcoins, or to fiat currencies (for example USD), or to goods and services you can buy with that (for example food). Is this what you want? Do you want to measure for example fees in dollars to decide, if the price is too high or not?
Did I understand correctly that you are trying to refute that costs are measured in money? There is nothing much to comment on here.

"1 BTC is worth 1 BTC"
OK. Let's calculate in BTC. How many bitcoins do you currently need to rewrite the code? Let's estimate it at 1,000 BTC; I won't be far off. The total supply is ~19 million BTC.

Spend 0.0053% of the system's value to scale the entire system by 10x. Is this good scaling? Smiley
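For what it's worth, the arithmetic behind that percentage, using the poster's own assumed numbers:

Code:
# Reproducing the figure above with the poster's assumptions (not measured values)
dev_cost_btc = 1_000           # assumed cost to "rewrite the code"
total_supply_btc = 19_000_000  # approximate circulating supply
print(f"{dev_cost_btc / total_supply_btc:.4%}")  # ~0.0053%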
copper member
Activity: 901
Merit: 2244
Quote
It's strange that you count the resources spent in bytes, in "virtual bytes", when all over the world resources spent are usually assessed in monetary terms.
Testnet can handle exactly the same bytes, with exactly the same hardware, but those coins are worthless. Does it mean that we should take the current price of Bitcoin in dollars, and base our scalability on that? Because guess what: 1 BTC is worth 1 BTC. If you want to measure it "in monetary terms", then you have to use a price, relative to altcoins, or to fiat currencies (for example USD), or to goods and services you can buy with that (for example food). Is this what you want? Do you want to measure for example fees in dollars to decide, if the price is too high or not?
newbie
Activity: 6
Merit: 2
Scaling is about resources. If you can handle 16x more traffic with only 4x bigger blocks, then this is somewhat scalable. But we can go even further: if you can handle 100x more traffic, or even 1000x more traffic with only 4x bigger blocks, then this has even better scalability.

It's strange that you count the resources spent in bytes, in "virtual bytes", when all over the world resources spent are usually assessed in monetary terms.

Now, thanks to the development of technology, a modern computer can process 10 MB blocks without problems and without significant additional cost. (It's time to record this fact and not come back to it again.)

And it turns out that simply by rewriting the code, we can increase the capacity of a system now valued at ~$700 billion by 10 times. I repeat: just by rewriting the code.

Evaluate the ratio of the results obtained to the resources spent. Isn't this ideal scaling? Smiley
copper member
Activity: 901
Merit: 2244
Quote
Is it because it would also depend on what that person's definition of "scaling" is?
Yes. I like that way of approaching the problem, by the way.

Quote
What is your definition or idea of how scaling Bitcoin should actually be?
Scaling is directly related to compression. If you can use the same resources to achieve more, then that thing is "scalable". So, if the block size is 1 MB, and your "scaling" is just "let's increase it to 4 MB", then it is not scaling anymore. It is just pure, linear growth. You multiply the numbers by four, so you can now handle 4x more traffic. But it is not scaling. Not at all.

Scaling is about resources. If you can handle 16x more traffic with only 4x bigger blocks, then this is somewhat scalable. But we can go even further: if you can handle 100x more traffic, or even 1000x more traffic with only 4x bigger blocks, then this has even better scalability.

Also, scaling is directly related to intelligence. You can read more about the Hutter Prize, which was advertised by Garlo Nicon some time ago: http://prize.hutter1.net/

Quote
I want to know everyone's definition/idea about scaling.
I will add a bonus question: how do you measure whether your model is scalable or not? Write it as a function in big-O notation, or anything you like. I support "constant-based scaling", meaning O(1) scaling: leaving current resources as they are and improving algorithms to build things on top of them, for example through commitments.
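As one way to picture the difference, here is a tiny sketch (hypothetical vbyte numbers, not measurements) contrasting an O(n) on-chain footprint, where every payment is its own transaction, with an O(1) footprint, where n payments settle against a constant number of on-chain transactions, in the spirit of the commitment-based approach mentioned above:

Code:
def onchain_vbytes_linear(n_payments, vbytes_per_tx=140):
    # every payment hits the chain individually: cost grows with n
    return n_payments * vbytes_per_tx

def onchain_vbytes_constant(n_payments, open_vbytes=200, close_vbytes=200):
    # payments tracked off-chain against one commitment: on-chain cost is independent of n
    return open_vbytes + close_vbytes

for n in (10, 1_000, 100_000):
    print(n, onchain_vbytes_linear(n), onchain_vbytes_constant(n))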
legendary
Activity: 2898
Merit: 1823

I believe some of you might go so deep into the argument for bigger blocks that you start to think Roger Ver and Bitcoin Cash made the right decision, and start arguing for Satoshi's original vision.

It really depends on what classifies as "right". Opinions fundamentally differ on block size policy.


Is it because it would also depend on what that person's definition of "scaling" is?

I'm actually now very curious. What is your definition or idea of how scaling Bitcoin should actually work? Because we might not share a common definition/idea, and that's where the issues start.

It's not just for BlackHatCoiner. I want to know everyone's definition/idea about scaling.
copper member
Activity: 901
Merit: 2244
Quote
Script is necessary for bitcoin to work as a network that apps and new type of transactions can be build upon .
1. If it is necessary, then why is it not included in the whitepaper?
2. You won't believe how many things can be coded with public keys alone. With Schnorr signatures you can add and multiply 256-bit numbers, and that is enough to implement a lot of conditions.
3. There was no need to include Script in the first version. It could have been added later if needed, exactly the same way Taproot was added: you can spend by P2PK, or spend by P2SH, and wrap all of that in a single address type. This is how Taproot works: you never know whether there is some huge Ordinal behind a P2TR address, or just a public key and nothing else. The same could have been done with P2PK back then; there were no technical difficulties in implementing that.

Quote
I can't get into the techinical details as i'm not educated in this .
This is the problem. As Garlo Nicon mentioned above, "you cannot beat something with nothing". So, if you don't know how to write the code, then ask someone who thinks like you to do it, or join some existing team. The best case is when you discover your ideas are already implemented, because then no additional work is needed and you can just support the right team.

Quote
But , you can understand that at that time HDD's and broadband speed were not the same as today .
Then I have another question: why didn't Satoshi make it gradual? Why wasn't it "double the block size every halving" or some other "gradually increase the block size" rule? Today we even have BIPs for that! If people around 2017 thought about it before implementing Segwit, then why didn't Satoshi code it that way, instead of making it constant?
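For illustration, a schedule like the one asked about here ("double the block size every halving", which was never implemented) would only be a few lines; the numbers below are purely hypothetical:

Code:
def max_block_size_mb(height, initial_mb=1, halving_interval=210_000):
    # hypothetical "double every halving" schedule, only to show how simple it would have been
    return initial_mb * 2 ** (height // halving_interval)

print(max_block_size_mb(0), max_block_size_mb(210_000), max_block_size_mb(840_000))  # 1, 2, 16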

Quote
Bitcoin isn't currently practical for very small micropayments.
How do you define "very small"? Is it millisatoshi in Lightning Network, that is never enforced on-chain? Or is it microsatoshi, nanosatoshi, or even smaller unit? Been there, done that. I had some channels with a friend, when we used CPU mining (with Merged Mining) to pay each other the smallest amount we could, proportional to the amount of satoshis in the coinbase transaction, scaled down to the minimal difficulty we produced. So yes, you can send someone a millisatoshi, or even smaller unit, as long as both clients support that feature. But of course, it doesn't change the fact that to enforce it on-chain, you need to pay quite high on-chain fee. Which is why testnet, or some separate network, fits better for such purposes, because then you can use zero satoshis to have any on-chain representation, and to actually push your millisatoshis on-chain (or wrap them into a multisig, and enforce on-chain by using a proper Script).

Also note that you don't need LN to enforce such things. The minimal fee for relaying any transaction is something you can change in your node settings. You can accept free transactions into your mempool, but then be prepared to deal with spam somehow. Guess what: even today there are still transactions flying around with fees below one satoshi per virtual byte. They are usually used by mining pools, but you can build anything on top of that if you have two nodes which accept more transactions than usual. But of course, to enforce microtransactions on-chain you need some computing power, similar to what centralized mining pools already have. Or, if you don't want that, you can send a lot of cheap transactions, batch them, and broadcast the result to the main network. If you use a proper Script, you can do that without any trust; see 2P-ECDSA and Homomorphic Encryption for more details.

Quote
the road for hard forks open . If satoshi was against it , he wouldn't post such a thing , but would propose something in a backwards compatible way .
This is quite simple: Satoshi could do a hard-fork then, because the network was small enough. In fact, the current version is not fully compatible with the first version, because there were some minor changes here and there; for example, some P2P messages were changed to include a checksum, and the longest chain rule was changed into the heaviest chain rule in a hard-fork way.

But even if you assume that Satoshi was pro-hard-fork, I have another question: do you think Satoshi would do a hard fork today and ignore the miners? I don't think so. Even though we have Segwit and Taproot, cleaning them up doesn't mean ignoring the miners or reverting the chain. What happened is already set in stone. So you can change the code to disable Segwit or Taproot, but you cannot overwrite the history. And good luck creating a proposal to roll back those soft-forks and make those coins spendable by anyone. It will simply not happen.

Quote
Script is necessary for bitcoin to work as a network that apps and new type of transactions can be build upon .
Technically, you can start with P2PK and add Script later. There are no technical problems with that approach. You also end up with a single address type, P2PK, which could be good for privacy, because you don't know whether something is a public key or a Script until it is spent. And users could try non-standard scripts without risking their money, because they could always fall back to spending by public key if their Script turned out to be non-standard or unspendable.

Quote
Also states " There are other things we can do if necessary" . What he meant by that ? Who knows .
It means "if fees will not be sufficient, then we can do something else to limit it further". If you think I just made it up from the thin air, then you can go to the link, and read the whole context. Also, again, as Garlo Nicon said, "you cannot beat something with nothing", which means if you think that my interpretation is wrong, then what is yours? Because your answer "I don't know" will not push us any further.

Quote
He doesn't explicitly states that using OP_pushdata4 is forbidden like you make it look like .
It is "forbidden" beyond 32 MiB, because he coded it in that way. And 32 MiB takes more than two bytes, which means you need at least four bytes, to handle it correctly. But guess what: he put 32 MiB as the limit, not 4 GiB.

Quote
And to put it another way , if he didn't want it he wouldn't have added it to OP's .
You cannot handle 32 MiB with three bytes. It is 0x02000000. It needs at least four bytes.

I also have another question: if you think Satoshi didn't care about size, then why did he introduce VarInt to compress things? Why did he store the difficulty using four bytes, instead of using 32 bytes to store the target explicitly, with full precision?
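Both of those size-saving tricks are easy to show. Here is a rough Python sketch of Bitcoin's CompactSize ("VarInt") encoding and of how the 4-byte nBits field expands into the full 256-bit target; this is a sketch based on the documented formats, not code taken from any client:

Code:
def encode_compact_size(n: int) -> bytes:
    # CompactSize ("VarInt"): small lengths take 1 byte, larger ones 3, 5 or 9 bytes
    if n < 0xfd:
        return n.to_bytes(1, "little")
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def bits_to_target(bits: int) -> int:
    # nBits: 1-byte exponent + 3-byte mantissa, instead of a full 32-byte target
    exponent = bits >> 24
    mantissa = bits & 0x007fffff
    return mantissa << (8 * (exponent - 3))

print(encode_compact_size(250).hex())      # 'fa' (one byte)
print(encode_compact_size(70_000).hex())   # 'fe70110100' (five bytes)
print(hex(bits_to_target(0x1d00ffff)))     # the genesis-block target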
hero member
Activity: 1111
Merit: 588
But, I can ask you another question: if Satoshi wanted to fully use OP_PUSHDATA4, then why he limited MAX_SIZE of the P2P message into 32 MiB (0x02000000), and later limited block size into 1 MB (1000000)? Why November 2008 version was limited to only 32 MiB P2P message size? Yes, that November 2008 version, which contained quite huge Script support, what you can discover by exploring the Script of the coinbase transaction, used in that version (also note that it contained 100.00 BTC, expressed with 2-digit precision only as "10k satoshis", instead of 8-digit precision we have today, and that OP_CODESEPARATOR was mandatory at that time).
I can't get into the technical details as I'm not educated in this. But you can understand that at that time HDDs and broadband speeds were not the same as today. Also, the network was in its infancy. Things have changed since then.
I'll quote Satoshi:

The nature of Bitcoin is such that once version 0.1 was released, the core design was set in stone for the rest of its lifetime.  Because of that, I wanted to design it to support every possible transaction type I could think of.  The problem was, each thing required special support code and data fields whether it was used or not, and only covered one special case at a time.  It would have been an explosion of special cases.  The solution was script, which generalizes the problem so transacting parties can describe their transaction as a predicate that the node network evaluates.  The nodes only need to understand the transaction to the extent of evaluating whether the sender's conditions are met.

The script is actually a predicate.  It's just an equation that evaluates to true or false.  Predicate is a long and unfamiliar word so I called it script.

The receiver of a payment does a template match on the script.  Currently, receivers only accept two templates: direct payment and bitcoin address.  Future versions can add templates for more transaction types and nodes running that version or higher will be able to receive them.  All versions of nodes in the network can verify and process any new transactions into blocks, even though they may not know how to read them.

The design supports a tremendous variety of possible transaction types that I designed years ago.  Escrow transactions, bonded contracts, third party arbitration, multi-party signature, etc.  If Bitcoin catches on in a big way, these are things we'll want to explore in the future, but they all had to be designed at the beginning to make sure they would be possible later.

I don't believe a second, compatible implementation of Bitcoin will ever be a good idea.  So much of the design depends on all nodes getting exactly identical results in lockstep that a second implementation would be a menace to the network.  The MIT license is compatible with all other licenses and commercial uses, so there is no need to rewrite it from a licensing standpoint.

"Bitcoin isn't currently practical for very small micropayments.  Not for things like pay per search or per page view without an aggregating mechanism, not things needing to pay less than 0.01.  The dust spam limit is a first try at intentionally trying to prevent overly small micropayments like that.
Bitcoin is practical for smaller transactions than are practical with existing payment methods.  Small enough to include what you might call the top of the micropayment range.  But it doesn't claim to be practical for arbitrarily small micropayments. "

"Quote from: theymos on October 03, 2010, 08:28:39 PM
Applying this patch will make you incompatible with other Bitcoin clients.

+1 theymos.  Don't use this patch, it'll make you incompatible with the network, to your own detriment.

We can phase in a change later if we get closer to needing it."

And for the backwards compatibility part that most believe should not change: the moment Satoshi proposed

"It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don’t have it are already obsolete. "

the road for hard forks opened. If Satoshi was against it, he wouldn't have posted such a thing, but would have proposed something in a backwards-compatible way.

I admire the flexibility of the scripts-in-a-transaction scheme, but my evil little mind immediately starts to think of ways I might abuse it.  I could encode all sorts of interesting information in the TxOut script, and if non-hacked clients validated-and-then-ignored those transactions it would be a useful covert broadcast communication channel.

That's a cool feature until it gets popular and somebody decides it would be fun to flood the payment network with millions of transactions to transfer the latest Lady Gaga video to all their friends...
That's one of the reasons for transaction fees.  There are other things we can do if necessary.
See? The answer was not "just use OP_PUSHDATA4, and upload it". The answer was "fees are needed to discourage that". And even more: "if fees will not be sufficient, then we can do something else to limit it further". That's what he said, as you can see above, and click on the link to the quote to confirm it.
Nope, don't twist the words into what you want them to be. He says "that's one of the reasons for transaction fees". If you want to use the network that way, you have to pay the price.
He also states "There are other things we can do if necessary". What did he mean by that? Who knows.
He doesn't explicitly state that using OP_PUSHDATA4 is forbidden, the way you make it look.
And to put it another way, if he didn't want it, he wouldn't have added it to the opcodes.
legendary
Activity: 1582
Merit: 1006
beware of your keys.
In 2019, there were 108.6 million card transactions per day.

So the block size increase should have been done far sooner, and more radically, if Bitcoin is to reach mass adoption; 4 MB per block under today's load is still treating Bitcoin as if it were a community currency. When we look at the mempool right now, there are more than 200k unconfirmed transactions, so you can imagine how slow it is.

Let's say a 2 MiB block holds 5k transactions on average, and each block happens every 10 minutes: that means 720k transactions per day. If we have to match 2019's rate of roughly 108 million transactions per day, it works out to about 300 MiB per block.
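The arithmetic behind those figures, using the poster's own assumptions rather than measured values:

Code:
txs_per_2mib_block = 5_000                              # assumed average for a 2 MiB block
blocks_per_day = 24 * 60 // 10                          # one block per ~10 minutes -> 144
btc_txs_per_day = txs_per_2mib_block * blocks_per_day   # 720,000
card_txs_per_day = 108_600_000                          # 2019 card-payment figure cited above
ratio = card_txs_per_day / btc_txs_per_day              # ~151x
print(round(ratio * 2), "MiB per block")                # ~302 MiB, i.e. "about 300 MiB"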

But the problem is the entire blockchain, which stands at about 526 GB right now. We need to change the blockchain synchronisation requirements to increase efficiency and decrease the burden on those who sync. For example, a fragmented blockchain which keeps the per-user storage at a level each user can afford (about 15~100 GB per user, depending on their ability and the network's stability).

I still remember that in the earlier days, syncing a Bitcoin wallet took me at least a week, back when the chain was 75 GB.
copper member
Activity: 821
Merit: 1992
Quote
What are your thoughts on increasing bitcoin's block capacity? (please explain in response)
Quote
I am for it
Go for it. Write some code. Try to get there: create your own altcoin and fail (the first scenario, if you are not experienced enough), or submit a proposal and get it rejected (the second scenario, if you know how to write code), or make it a no-fork and get blamed for spamming the network (the Ordinals scenario, if you know how to write a no-fork block size increase).

For now, the only thing I can see is discussion. I wonder how many people who are big blockers have actually tried to write some code and submit it. Because if they tried it at least once, I guess they would reach at least one of the three scenarios explained above, or maybe some other outcome I didn't think about.

About blaming Core: you cannot beat something with nothing. If you think their code is bad, then I guess you are using your own client. So where is it? How can I download it? Talking is easy, but if big blockers do nothing, then even if they win this poll, it will not change anything.

But of course, if I really voted "I am for it", then I should present my own solution as well, right? Good news: I did. Been there, done that: I support sidechains, as proposed by Paul Sztorc, and they were officially rejected as a soft-fork. And now I am trying to reach gate number three (no-fork), but without spamming the chain the way Ordinals do. So far, so good, but the code is not fully tested yet, so it is not yet public. And finally, maybe I will join some other no-fork proposal instead of publishing my own version; we will see.
copper member
Activity: 1330
Merit: 899
🖤😏
LN, ...snip... only answer.
Oh hi, when did you land? From Mars, I mean; it seems you haven't been on Earth for a few months.

What keeps a network really strong is its userbase, and we all know that after the whales are done moving their funds around, the only users left will be those who can afford to pay from $1 up to $3. If they can't do that, there are two certain outcomes: some will dump all they have and move to alternatives, and the other half will keep part of their funds in BTC but stop transacting, just hodling for the long term. Then we can say goodbye to at least 35% of loyal users. Since Bitcoin is not a Ponzi, we can't expect new blood to keep joining and keeping the system strong, because people have eyes.


If I were an online shop, I would immediately drop Bitcoin payments. Now, if 5,000 online shops do that, say goodbye to 5,000 potential long-standing users, each of them capable of using Bitcoin dozens of times a day.
legendary
Activity: 4410
Merit: 4766
LN, as well as other off-chain solutions (present and future solutions) are the only answer.

LN is not a present solution,
and suggesting "bitcoin do nothing, everyone use LN" is not the solution either.
Nor is the solution "just don't transact and wait a couple of months, the congestion might die down by then".

There are reasons people need the confirmation/settlement security of the Bitcoin blockchain, and there are security pitfalls in using other systems, so suggesting that people should move to other systems and stay away from Bitcoin won't be helpful either.

Yes, subnetworks have a niche, and in the future new and better ones will be made to meet the criteria they were designed for. But confirming/settling on Bitcoin will still be a need, so things still need to be done to Bitcoin.

and its not just "bigger blocks" its actually
leaner transactions.
count every byte, make every byte count
condition all transaction formats to perform, whereby every byte serves a purpose.
limit the transaction overhead computationally
make spammers/bloaters personally penalised to not cause everyone to be expensed out of using bitcoin
penalise spammer/bloater by enough of a multiple that they actually stop due to the expense

all these things can help make genuine bitcoin transactors able to confirm/settle their transactions with less remorse of using bitcoin.
hero member
Activity: 2240
Merit: 848
Circa 2014, the block size wars took Bitcoin by storm.
Eventually they subsided, after a lot of drama that has been well documented elsewhere (I made my own attempt at documenting a timeline of the block size wars, its outcomes and conclusions here).

But I'm starting this poll in hopes of getting the community's opinion, so I'll not go through my own opinion in the OP.
Please feel free to answer the question in the poll and even better if you can also explain your answer.

Back in 2015, and even after the adoption of a compromise in the form of SegWit, the topic of block capacity was very taboo.
Well, we're in 2023 now, and for various reasons congestion is still an issue. Increasing the capacity of Bitcoin blocks could potentially be a solution to the problems of high fees and congestion.

So I think it's worth exploring what people think about this issue. Please share your thoughts below.


There's nothing inherently wrong with bigger blocks, but it's not a real solution to anything; at best it's a temporary, partial solution. You don't get mass usage through big blocks; all you'd get is a weak, broken blockchain. It also requires a hard fork, and the continuity of Bitcoin's blockchain being backwards compatible is kinda important. If one day the community decides for some reason to do a hard-fork upgrade, then I'd be fine with something like a 2x or 4x increase in block size, as I don't think that would be enough to really have negative impacts, but it's not a real solution to much at all; at best it might help miners in the decades to come when the coinbase reward gets low. Bitcoin capacity is reached off-chain, not on-chain. Off-chain solutions are the only real solution to mass adoption. This is why the community chose SegWit, while those who thought a weak Bitcoin would be okay created altcoins like BCH and BSV, which are useless random tokens no different from thousands of others.

LN, as well as other off-chain solutions (present and future solutions) are the only answer.
legendary
Activity: 4410
Merit: 4766
I'm confident that anyone who wants micropayments will adopt LN one way or another, sooner or later, and scaling the network to make room for non-micro transactions is begging to happen.

The LN subnetwork is too flawed, as has even been proven and admitted by LN devs themselves.
I'm sure the niche that wants microtransactions should start afresh and build a subnetwork that can meet its promises and learn from LN's mistakes. But users wanting microtransactions should not be looking towards LN as their saviour; it's not the solution that was promised, no matter how much it is advertised as a utopian dream.

Millions of users have looked at LN, seen its failures, and decided to use other subnetwork bridges for microtransactions. It's time LN devs went back to the drawing board and learned from the mistakes.

Just look at DCG, which was a big sponsor of Blockstream (Core + LN devs) doing what they did (SegWit as the gateway key to LN) to push LN into existence. Now, six years later, look at DCG's portfolio of sister companies and how FEW of them actually advocate for and use LN. It just shows even they don't see it as the ready-to-use product they paid for. E.g., if you need to ask why Coinbase (a DCG sister) doesn't use LN, you already know the answer.
copper member
Activity: 1330
Merit: 899
🖤😏
to stand up to game theory.  If there are ways to manipulate or rig the system to an outcome which suits an attackers goals, then it's no good.  We can't inadvertently introduce that kind of weakness.
Friendly fire! Friendly fire!, lol. Are we still talking about future upgrades here? Because despite my lack of understanding of the entirety of Taproot, I am pretty much certain that what you just described is happening right now. I swear I'm not lying or trolling; you can check mempool.space and see those orange blocks turning red, about to explode.

Believe it or not, I'm an extremist when it comes to having a central and united development team, because despite it being a central point of failure, it keeps strangers' hands off changes to the code, and if they start showing any signs of incompetence, we could replace them by persuading the whole community. My concern is that I can already see some yellow flags, and I don't want to wait to see the obvious red ones.

I'm confident that anyone who wants micropayments will adopt LN one way or another, sooner or later, and scaling the network to make room for non-micro transactions is begging to happen.
legendary
Activity: 4410
Merit: 4766
b. not have the cludge of 1mb base 3mb witness. and just have full open access 4mb for transactions to fully utilise

This reeks of potential technical debt.  It's something you've proposed on numerous occasions for the last few years now, but have never fleshed out.  How, exactly, do you propose enacting this?  Don't just give us the wishlist, tell us step by step what this option actually entails.
The technical debt already exists: it's the kludge they already put in to get SegWit working.

The code sipa added when rewriting Bitcoin Core to make SegWit work (the kludge) reeks of technical debt because it is technical debt. However, removing the kludge and getting back to simpler, straightforward block sizes, proper coding, proper byte counting and proper validation lessens/undoes that technical debt.

If you don't think Core screwed up and introduced an open door to let junk in, then explain the Ordinals inscription junk that abuses Core's unconditioned opcodes via SegWit/Taproot.

As for the kludge: yes, there are many lines of code in different sections that deal with legacy*witness and serialised/witness data, and all the other kludge of vbyte and weight-unit manipulations of byte counting. But that's Core's own fault for creating the kludgy way of trying to slide SegWit in.


For instance, in 2016, when Core had a simple blocksize=1000000, it was much easier to move to blocksize=2000000, blocksize=4000000 or blocksize=8000000 without affecting other areas of the code much.

But with the way they patched SegWit together, with max block weight = 4000000 and witness scale factor = 4 (1 MB base + 3 MB witness = 4 MB), it is not simple to go to max block weight = 8000000 with witness scale factor = 4 (2 MB base + 6 MB witness = 8 MB), because the kludge in the other parts of the code then has to be changed too, in many places.

However, undoing the kludge to get a unified, simplified blocksize=4000000 (which, yes, will take rewriting lots of areas) will later make it easier to go to blocksize=8000000, and even before going to 8 MB it will help more transactions utilise the 4 MB space in the unified blocksize=4000000.
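For reference, the current weight accounting being discussed here can be sketched like this; the constants are from BIP 141, while the example transaction sizes are made up:

Code:
WITNESS_SCALE_FACTOR = 4
MAX_BLOCK_WEIGHT = 4_000_000   # the "1 MB base + 3 MB witness = 4 MB" ceiling

def tx_weight(stripped_size: int, total_size: int) -> int:
    # BIP 141: weight = non-witness (stripped) size * 3 + total serialized size,
    # so witness bytes count 1/4 as much as base bytes
    return stripped_size * (WITNESS_SCALE_FACTOR - 1) + total_size

legacy = tx_weight(250, 250)   # no witness data: weight = 4 * size = 1000
segwit = tx_weight(150, 300)   # 150 witness bytes counted at a discount: weight = 750
print(legacy, segwit)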

And yes, while they are at it cleaning the code up, they can put conditions on the opcodes so that a single script (witness) cannot waste the full block size.
legendary
Activity: 4256
Merit: 8551
'The right to privacy matters'
If 4GB blocks become the norm
BSV, BCH, etc. are all irrelevant, and talking about GB blocks is also irrelevant to Bitcoin block size discussions; no one is stupid enough to actually think about anything above 100 MB by 2030. So I think bringing up GB blocks, with the "it's bad, it makes mining centralized" etc. arguments, is a fallacy.
I don't know who you are, but it seems you have had a few pockets filled by the recent spam attacks, and you are trying to deflect from the actual concern, which is: Bitcoin has to scale. I would say that if we had 8 MB blocks, we could have hit $80,000 by now, but that's just speculation, right?

And it needs to scale better than the LTC/DOGE 12x tx-capacity edge.

So we are at 4 MB right now; 12x that is 48 MB.

Maybe it tries 32 MB blocks, with 0.0001000 as the minimum dust send and 0.00001 as the minimum fee.

See what that does for ordinal prevention.