
Topic: [POLL] Is bigger block capacity still a taboo? - page 6. (Read 1816 times)

legendary
Activity: 966
Merit: 1042
#SWGT CERTIK Audited
try reading again

a miner does not even see the full block data. a miner simply hashes a header (hashes a hash of a header, to be more precise)
the header and the hash that miners deal with never change length, no matter how big a transaction or the block size gets

I think I've gone wrong somewhere in expressing it. What I meant was that it will directly increase the storage the miner needs to download the entire blockchain, and it will require higher network bandwidth. So even if the miner is not reading the entire block data, it still keeps a copy of the full data; the computational resources are not increasing, but the network and storage resources are.

I was generally expressing my view on it and somehow mixed up nodes and miners at the same time. Anyway, there are some concerns for both miners and nodes.
legendary
Activity: 3472
Merit: 10611
Since I have to dot all the i's and cross all the t's: legitimate use is anything that is not a spam attack, meaning where people use Bitcoin to transfer money, or in other words when Bitcoin is the "peer to peer digital cash system" it was intended to be.

If the sole purpose of Bitcoin was just to transfer money, what's the role of OP_PUSHDATA? And especially OP_PUSHDATA4?
Seems that the creator had a different view than yours.
Satoshi was not a god; he made a lot of weird decisions when implementing Bitcoin. Unless he explicitly expressed his intentions behind a decision, you cannot draw conclusions like the one you just made here.

In this case, if I had to guess, I'd say there may be two reasons for this decision:
1. Scalability of the code.
As a developer, especially when you are writing consensus-critical code for a decentralized system, it is easier to have a functionality and never use it than to not have it and want to add it in the future. The introduction of other functionality, like the interpretation of SegWit outputs, the addition of OP_SUCCESSx, etc., does exactly that.

2. Translating DER lengths.
I think this is the most plausible reason. Since Satoshi was aware of DER encoding (used for signatures), and DER lengths take a similar approach to encoding data lengths while also supporting very large values, Satoshi might have adopted the same approach and implemented as big a size as he could.
BTW, the same approach is used to encode VarInt (used for script lengths), which is another way of encoding lengths; some of its forms can never be used in Bitcoin, but they can encode extremely large values nonetheless.

Otherwise I doubt that Satoshi ever intended to let anyone push 4+ GB of data onto a stack that only supports pushing elements no bigger than 520 bytes.
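For reference, a minimal Python sketch of how those push opcodes encode a payload length (the opcode values are the real ones from the script format; the helper function and example data are only illustrative):
Code:
# Sketch of how a Bitcoin script encodes the length of pushed data.
def push_data(payload: bytes) -> bytes:
    n = len(payload)
    if n <= 75:                  # direct push: the opcode byte itself is the length
        return bytes([n]) + payload
    elif n <= 0xFF:              # OP_PUSHDATA1 (0x4c): 1-byte length follows
        return bytes([0x4C, n]) + payload
    elif n <= 0xFFFF:            # OP_PUSHDATA2 (0x4d): 2-byte little-endian length
        return bytes([0x4D]) + n.to_bytes(2, "little") + payload
    else:                        # OP_PUSHDATA4 (0x4e): 4-byte length, up to ~4.29 GB
        return bytes([0x4E]) + n.to_bytes(4, "little") + payload

# OP_PUSHDATA4 can *encode* lengths far beyond what the rules allow:
# a single pushed stack element is capped at 520 bytes.
print(push_data(b"hello").hex())        # 0568656c6c6f
print(push_data(b"A" * 600)[:6].hex())  # 4d5802414141 (OP_PUSHDATA2, length 600)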

The reason is pretty simple: adoption hasn't grown enough for the current block capacity to be insufficient for legitimate usage.
And how do you measure that? Even if we were to follow that route and have a block size limit that corresponds to the "legitimate usage", how do we begin? In my opinion, adoption has grown a lot since the last time the Bitcoin community had this debate.
It has definitely grown, but considering that fee spikes have never been severe or long-lasting without spam attacks, we can say with a high degree of certainty that adoption has not yet surpassed the capacity.
legendary
Activity: 4424
Merit: 4794
windfury
none of your replies are about REAL data, code, opcodes, or limits.
none of your replies are about the REAL maths related to the data/limits.
none of your replies are even about technology limitations IN REALITY.

try for once to make an on-topic reply that shows some thinking about the subject, outside of idolising certain people and insulting others.

when even your idol god devs say nodes can handle the creating, signing, relaying, co-signing, relaying again and then verifying of "a million payments a second" pre-confirm.. it shows nodes have no issues with just a few thousand transactions. especially if node mempools can verify and maintain 300MB-500MB of pre-confirm transactions without blowing up

when 4MB blocks (one every ~10 minutes) work out to roughly 7KB/s of internet speed
when 10MB blocks (your group's suggestion) work out to roughly 17KB/s
when 100MB blocks (your group's exaggeration) work out to roughly 167KB/s
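the arithmetic behind those figures, for anyone who wants to check it (a trivial sketch; it only counts block download, one block every ~600 seconds, and ignores relay overhead):
Code:
# Rough bandwidth needed just to receive blocks of a given size,
# assuming one block every ~600 seconds and ignoring other relay traffic.
BLOCK_INTERVAL_S = 600

for block_mb in (1, 4, 10, 100):
    kb_per_s = block_mb * 1000 / BLOCK_INTERVAL_S   # MB -> KB, spread over 10 minutes
    print(f"{block_mb:>4} MB blocks ~= {kb_per_s:6.1f} KB/s sustained")

# 1 MB ~= 1.7 KB/s, 4 MB ~= 6.7 KB/s, 10 MB ~= 16.7 KB/s, 100 MB ~= 166.7 KB/s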

so please tell me the real, defined reason you're against it... apart from dev-god cult following and wanting to appease your mentor
legendary
Activity: 2898
Merit: 1823
That's because that's what you believe. In my personal opinion, it's probably better to maintain its current course of development and value decentralization and security more than the accommodation of a higher number of transactions through bigger blocks.


that is not your personal opinion. it's a plagiarised script from a group of idiots that lulled you into repeating their narrative

just reciting the mantra of the party line.. without doing any math or independent thought/analysis of what he is about to write


You're right, it's not my opinion, BUT in my personal opinion, I agree that the Core Developers have made design decisions that value maintaining decentralization and security for the network MORE than increasing the transaction throughput per block.

I know you're gaslighting everyone into believing I am wrong and therefore you are right. BUT OK, they can either DYOR and learn the truth, OR they can listen to you and learn the truth the HARD WAY like I did. Cool

Plus, do you actually believe a single decentralized database could process all of the trillions of financial transactions per day? No? Then use Netflix's centralized/federated data centers with their very low latency and high rate of data transfer. That would probably fix the problem. But that wouldn't be a protocol called Bitcoin.
you then double down by using nonsense exaggerations, exaggerations that are not even original or well thought out

there are 8 billion people on the entire planet. and not all of them use one currency..
but let's say they did.. "trillions of transactions" would require each of those people to make roughly 125 separate purchases a day just to reach one trillion.. you used the plural, so multiply that by whatever magic plural amount you have in your head..

but because not everyone uses one currency naturally, let's take yours
~250 million mature people (toddlers don't shop for themselves) using the dollar. they do not even make 3 billion transactions per day

so stop with the exaggeration that one currency needs to do trillions.. let alone billions of transactions a day

your training makes you turn a discussion about reasonable adjustments at regular intervals into a pretend argument about "gigabytes by next year" huge leaps, and it makes you look foolish

get away from your training officer(forum-daddy) and think for yourself. he is not helping you
try to use actual data, real statistics, math and logic.. not some story some troll you call daddy tells you


Is that you trying to convince me again that "bigger blocks are better"? Pardon me, ser, but I have already learned the hard way that you're wrong.

Roger Ver and Jihan Wu had a thesis that if they hard forked to bigger blocks and called their shitcoin "Bitcoin", then the community would follow because "big blocks are what the community needs". How is Bitcoin Cash doing today, frankandbeans?
legendary
Activity: 4424
Merit: 4794
Edit: Also, Satoshi explicitly said something about uploading videos:
I admire the flexibility of the scripts-in-a-transaction scheme, but my evil little mind immediately starts to think of ways I might abuse it.  I could encode all sorts of interesting information in the TxOut script, and if non-hacked clients validated-and-then-ignored those transactions it would be a useful covert broadcast communication channel.

That's a cool feature until it gets popular and somebody decides it would be fun to flood the payment network with millions of transactions to transfer the latest Lady Gaga video to all their friends...
That's one of the reasons for transaction fees.  There are other things we can do if necessary.
See? The answer was not "just use OP_PUSHDATA4, and upload it". The answer was "fees are needed to discourage that". And even more: "if fees will not be sufficient, then we can do something else to limit it further". That's what he said, as you can see above, and click on the link to the quote to confirm it.

emphasising.. some trolls that don't want the junk to stop think the only way is fees
they also pretend the only way via fees is everyone paying more equally..

there are many other ways to limit it. such as adding rules to opcodes to limit length (remove the assumptions and add conditions). and also penalising transactions just for using certain opcodes: multiply their base fee by 2000x, much like the dev policy decision to multiply legacy by 4 (a rough sketch of that idea is below)
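a toy sketch of what such a policy could look like (purely hypothetical; nothing like this exists in Bitcoin Core, and the opcode set and the 2000x multiplier are just the numbers from the post above):
Code:
# Hypothetical relay policy: demand a fee multiplier from transactions
# whose scripts use certain data-carrying opcodes. Illustrative only.
PENALISED_OPCODES = {0x4E}      # e.g. OP_PUSHDATA4
PENALTY_MULTIPLIER = 2000       # the "2000x" figure mentioned above

def min_fee(tx_vsize_bytes, script_opcodes, base_feerate_sat_per_vb=1.0):
    """Minimum fee (in sats) a node would demand under this hypothetical policy."""
    multiplier = PENALTY_MULTIPLIER if script_opcodes & PENALISED_OPCODES else 1
    return tx_vsize_bytes * base_feerate_sat_per_vb * multiplier

print(min_fee(250, {0x76, 0xA9}))   # ordinary payment script: 250 sats
print(min_fee(250, {0x4E}))         # uses OP_PUSHDATA4: 500,000 sats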
copper member
Activity: 909
Merit: 2301
I admire the flexibility of the scripts-in-a-transaction scheme, but my evil little mind immediately starts to think of ways I might abuse it.  I could encode all sorts of interesting information in the TxOut script, and if non-hacked clients validated-and-then-ignored those transactions it would be a useful covert broadcast communication channel.

That's a cool feature until it gets popular and somebody decides it would be fun to flood the payment network with millions of transactions to transfer the latest Lady Gaga video to all their friends...
That's one of the reasons for transaction fees.  There are other things we can do if necessary.
See? The answer was not "just use OP_PUSHDATA4, and upload it". The answer was "fees are needed to discourage that". And even more: "if fees will not be sufficient, then we can do something else to limit it further". That's what he said, as you can see above, and click on the link to the quote to confirm it.
Quote
Seems that the creator had a different view than yours .
sr. member
Activity: 1666
Merit: 310
You are asking the wrong people here
Really?

I thought at least some Bitcoiners were IT-savvy. Guess I was wrong. My bad!

IP format is related to telecommunication companies controlled by governments
That's not true.

RFC documents have nothing to do with ISPs/governments. The Internet is decentralized; there is no central entity controlling the infrastructure (just like Bitcoin).

also I see no backward compatibility issues if we were to have for example 6 or 8MB blocks, even if we had 8MB blocks, the spammers would keep spamming
Sure, if you can invent something like SegWit... a soft fork.

that's why other than blocksize, we need to consider increasing the fees for non-monetary transactions somehow.
Possibly.

I was never a fan of storing arbitrary data in the blockchain, but that's how BTC was designed from the get-go.

Hell, some people want 4GB blocks to store video feeds from surveillance cameras. Shocked

I can understand why Satoshi wanted to store text messages (like the bank bailout one), but storing entire HD videos in the blockchain? That's absurd!
copper member
Activity: 909
Merit: 2301
Quote
What are your thoughts on increasing bitcoin's block capacity?
Quote
I'm against it
Think about the size of the block header. It takes only 80 bytes. Only that is really mined; the whole block is just committed to it. Does that mean we only have 80-byte blocks in practice? Of course not, because 32 bytes is all you need to form a valid commitment. Which means you could have gigabyte-sized blocks, or even terabyte-sized blocks, if you just used commitments to make them. Just tweak the R-value in your signature, and then your terabyte-sized block will be connected to your signature. And it would cost you no additional on-chain bytes.

Which means that every time you move coins with a standard transaction and use a single ECDSA signature, you can either use a random R-value and commit to no data, or tweak your random R-value by some non-random value and commit to any data you want, without increasing the size of your transaction.

Technical details were also explained on mailing list here: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022176.html

Quote
To be able to verify sign-to-contract, you reveal R0 and data, and the verification is just checking that R=R0+SHA256(R0, data)*G. That works with both ecdsa and schnorr signatures, so doesn't require any advance preparation.
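A self-contained sketch of that check, for anyone who wants to play with it (pure Python over secp256k1; the nonce and the committed data are toy values, and a real implementation would use a proper library rather than this bare-bones point arithmetic):
Code:
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):                  # affine point addition (None = point at infinity)
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow((b[0] - a[0]) % P, -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul(k, pt):                 # scalar multiplication, double-and-add
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt, k = add(pt, pt), k >> 1
    return r

def ser(pt):                    # 33-byte compressed point encoding
    return (b'\x03' if pt[1] & 1 else b'\x02') + pt[0].to_bytes(32, 'big')

def tweak(R0, data):            # SHA256(R0, data) reduced mod the curve order
    return int.from_bytes(hashlib.sha256(ser(R0) + data).digest(), 'big') % N

# signer side (toy nonce; never build real signatures this way)
r0   = 0x1D0C1E9F3B2A45660718293A4B5C6D7E8F90A1B2C3D4E5F60718293A4B5C6D7E
data = b"any data at all -- only its hash ends up influencing the nonce"
R0 = mul(r0, G)
r  = (r0 + tweak(R0, data)) % N      # tweaked nonce actually used for signing
R  = mul(r, G)                       # R is what shows up in the signature

# verifier side: given R, R0 and data, check R = R0 + SHA256(R0, data)*G
assert R == add(R0, mul(tweak(R0, data), G))
print("sign-to-contract commitment verifies")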
copper member
Activity: 1330
Merit: 899
🖤😏
You are asking the wrong people here. IP format is related to telecommunication companies controlled by governments. Also, I see no backward compatibility issues if we were to have, for example, 6 or 8 MB blocks. Even if we had 8 MB blocks, the spammers would keep spamming; that's why, other than the block size, we need to consider increasing the fees for non-monetary transactions somehow.
sr. member
Activity: 1666
Merit: 310
Every online application has plans to scale when its userbase increases. Can you imagine what would have happened if bitcointalk never upgraded its servers? Or what would have happened to Instagram if they had kept the same server capacity for more than 4 years? I can imagine what would have happened: they'd be dead by now. Why isn't Bitcoin dead yet? Because it still has support from the community. If I were a mining pool, I'd think about that before including spam in my block; loyalty has its limit.
Care to explain why the internet still uses 32-bit addresses (IPv4) despite the fact we have around 15 billion devices connected online?

Why don't we ditch IPv4 in favor of IPv6 (128-bit)?

Why did we invent NAT/RFC 1918 (a solution that doesn't compromise backwards compatibility, SegWit-style)?

I'm still waiting for someone to answer these questions.

I thought this forum was IT-dominated. Nobody here is an expert in computer networks/internetworking? Huh
copper member
Activity: 1330
Merit: 899
🖤😏
Anyone who is against increasing the block size is either a shill for miners or has another agenda which benefits them as it benefits mining pools. I excluded ignorant opposition.

Every online application has plans to scale when its userbase increases. Can you imagine what would have happened if bitcointalk never upgraded its servers? Or what would have happened to Instagram if they had kept the same server capacity for more than 4 years? I can imagine what would have happened: they'd be dead by now. Why isn't Bitcoin dead yet? Because it still has support from the community. If I were a mining pool, I'd think about that before including spam in my block; loyalty has its limit.
legendary
Activity: 4424
Merit: 4794
I hear a lot about how big blocks will reduce decentralization and make things harder for individual miners.

Mining is already pretty centralized; almost no blocks are mined by miners outside of pools... The difficulty is so high it's hard to mine on your own. But even with bigger blocks, how would that prevent someone from mining like he does now? If someone is going to invest hundreds of thousands of dollars in mining equipment, are we to assume that they can't invest in an internet connection that is at least at ADSL or 3G speeds?

mining has nothing to do with blocksize and blocksize has nothing to do with mining
mining hashes a blockheader.. blockheaders and their hashes remain the same size no matter how big the blockdata part containing the transactions becomes

people can still mine from home. they can buy their own full-size asics or USB miners. the hashrate is linked to difficulty, but difficulty is not related to blocksize. the reward amount of sats is related to hashrate and to market price, not to blocksize directly
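to make that concrete, a tiny sketch of what the mining loop actually hashes (all field values below are made up; the point is that the header is always 80 bytes whether the block carries one transaction or a million):
Code:
import hashlib, struct

# What a miner repeatedly hashes: the 80-byte block header, nothing else.
version     = 0x20000000
prev_block  = bytes(32)          # hash of the previous block header
merkle_root = bytes(32)          # one 32-byte commitment to ALL the transactions
timestamp   = 1700000000
bits        = 0x17034219         # compact difficulty target
nonce       = 0

header = (struct.pack("<L", version) + prev_block + merkle_root +
          struct.pack("<LLL", timestamp, bits, nonce))

assert len(header) == 80         # always 80 bytes, regardless of block size
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(block_hash[::-1].hex())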

here is the thing: if bitcoin transaction users have to pay more per fee.. fewer people want to use bitcoin, fewer want to buy bitcoin, fewer people want to transact with bitcoin.. this is a bad economic game, like having only one table in a restaurant and thinking the restaurant can stay profitable serving one customer a day by charging that one customer a huge price for a meal..

however, more transactions per block does not harm or break miners, but it does allow more transactions, more users and more opportunity to earn from more people without charging them outrageous prices.. thus a more sustainable economic model

Can somebody show me some data or a paper to support this?

I am genuinely curious, bitcoin's block size from 1 MB going to 4 MB didn't cause any such issues. Propagation time on Bitcoin is already pretty good. We are not talking about decreasing block times here.

the 6 trolls against blocksize increases are not thinking with their heads about the pros and cons for bitcoin. they are instead thinking with their training on how to promote that people should use other networks.
the fun part is their other-network promotions/training admit that nodes can handle relay/verification of millions of transactions a second.. so there is no node problem with a reasonable request of a few dozen thousand per second

legendary
Activity: 3780
Merit: 1170
www.Crypto.Games: Multiple coins, multiple games
This is such a weird position to be in; the results are very close to each other. All sides have super reasonable explanations for why they feel that way: bigger-blocks people like me want cheaper and faster transactions, but the ones who are against it also want to make sure security doesn't get lowered because of it, which makes sense.

I understand not wanting something like this, because in the end we are talking about something that may hurt the blockchain altogether. Remember all those alts with blockchain issues and hacks? We wouldn't want that in Bitcoin. I am sure in an ideal situation everyone would want it; why would we be against it, why would anyone be? But people fear what could go wrong with it.
legendary
Activity: 4382
Merit: 9330
'The right to privacy matters'
it is not a solution.

1MB, 4MB, 8MB, 16MB, 32MB, 64MB all fail to solve why this occurs.

scrypt has 2 viable coins for merge mining and they make 12x the blocks per hour or day or week or month or year.

scrypt is the better design for small value frequent moves.

Oh but btc added LN
and Scrypt added LN for ltc and could do so for Doge.

oh a big block fixes things and scrypt can make a big block.

A  large freighter can carry more than 1 million barrels of oil

you do not use it to carry  a few cans of oil

SHA-256/BTC will not ever really win over scrypt/LTC/DOGE if you are doing many transactions of 100 USD or less.

If BTC users understood this they would

A) hodl large BTC amounts, 10k or higher, in self-custody
B) keep small amounts of BTC, LTC and DOGE on an exchange like Coinbase, maybe 1k total
C) never pay small amounts of BTC to anyone; use LTC or DOGE

all issues solved.

this fixes fees.

end of story.


If someone can show me a way around the 12x-blocks-an-hour edge that scrypt has, I would reconsider, but frankly I do not see a way for BTC to win by using a freighter to move 5 dollars' worth of oil


legendary
Activity: 2422
Merit: 1451
Leading Crypto Sports Betting & Casino Platform
lol at these numbers, the beast is calling  Grin



I hear a lot about how big blocks will reduce decentralization and make things harder for individual miners.

Can somebody show me some data or a paper to support this?

I am genuinely curious, bitcoin's block size from 1 MB going to 4 MB didn't cause any such issues. Propagation time on Bitcoin is already pretty good. We are not talking about decreasing block times here.

Mining is already pretty centralized; almost no blocks are mined by miners outside of pools... The difficulty is so high it's hard to mine on your own. But even with bigger blocks, how would that prevent someone from mining like he does now? If someone is going to invest hundreds of thousands of dollars in mining equipment, are we to assume that they can't invest in an internet connection that is at least at ADSL or 3G speeds?
newbie
Activity: 6
Merit: 2
(even though 5.2 TB per year is quite a lot).
What does "a lot" mean? That's ~$100 per year, when users pay $10 fees per transaction.

Why didn't you calculate this for 10 MB blocks? After all, we are talking about this increase now.

Or you did the math and realized that posting such calculations is somehow inconvenient.

And you decided to go straight to 100 MB blocks. Only, an error crept into that calculation. These calculations don't scare anyone.

Why be modest? Others usually go straight to 1GB blocks. There the numbers will be more impressive. Smiley

We are now at 33% for, 33% undecided, and 33% against!  Roll Eyes

Actually, who cares about this. That issue was solved a long time ago by splitting BCH and BTC.
Creating a fork with an increased block size did not solve the block size problem for Bitcoin. Absolutely not. Smiley

This problem was and remains the same. And this problem will have to be solved. There is no scenario in which Bitcoin with its current block size will be able to reach 1 billion users. Forget a billion; Bitcoin in its current form cannot handle even 100 million users.
legendary
Activity: 4424
Merit: 4794
We are now at 33% for, 33% undecided, and 33% against!  Roll Eyes

Actually, who cares about this. That issue was solved a long time ago by splitting BCH and BTC.

FRANCY1, save me from the censors who deleted my subject like common spam. Is the truth disturbing? Sorry to impose myself like this before my probable disappearance...

What does this memory dump have to do with block size?

nothing
he posted nonsense in a topic unrelated to what he was trying to announce (newbies can't create new topics)

it's some troll that every year creates a new account just to announce he knows the identity of satoshi because of some book he read.. it seems he enjoys me debunking him, i think he tries to use lessons from my debunks to improve his trolling/tweak his script later.. but he never wins..
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
We are now at 33% for, 33% undecided, and 33% against!  Roll Eyes

Actually, who cares about this. That issue was solved a long time ago by splitting BCH and BTC.

FRANCY1, save me from the censors who deleted my subject like common spam. Is the truth disturbing? Sorry to impose myself like this before my probable disappearance...

What does this memory dump have to do with block size?
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
100mb blocks at your extreme is not 500GB... its actually 5.2GB a year
100 MB × 144 × 365 = 5,256,000 MB ≈ 5.26 TB per year. The 500 GB per year corresponds to the previously mentioned 10 MB block size.

Edit: You've corrected yourself. But, as I have said many times, it's not just the storage requirement (you can find external HDDs pretty inexpensively, even though 5.2 TB per year is quite a lot). It's the verification process which takes most of the time, not the downloading. It already takes too much time; I can't fathom how long it will take if we are to verify 100 MB blocks.
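For reference, the same arithmetic over a few block sizes (a trivial sketch; 144 blocks a day is the usual 10-minute average, and it assumes every block is full):
Code:
# Yearly storage added to the chain if every block were full, at various sizes.
BLOCKS_PER_DAY = 144             # one block per ~10 minutes on average

for block_mb in (1, 4, 10, 100):
    gb_per_year = block_mb * BLOCKS_PER_DAY * 365 / 1000
    print(f"{block_mb:>4} MB blocks -> ~{gb_per_year:,.0f} GB per year")

# 1 MB -> ~53 GB, 4 MB -> ~210 GB, 10 MB -> ~526 GB, 100 MB -> ~5,256 GB (~5.3 TB)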
legendary
Activity: 4424
Merit: 4794
Yeah, and why should I care about that?
Peer-to-peer cash. Not trusting third parties. Do these remind you of anything? I'm pretty sure you'll have to trust some data center after the 100 MB proposal is accepted, unless you're willing to give up ~500 GB of space each year.

no one is saying 100MB by 2024.. you and your forum family troll the scenario numbers with exaggerations and miscounts, and it just makes you look dumb..
but here's the thing you don't know.. math
100MB blocks at your extreme is not 500GB/year... it's actually 5.2TB a year
and hard drives these days: you can get 16TB for less than your household TV

you don't need to buy a $x billion server centre

now work out the true cost of a few years of running a bitcoin node.. real math
and then look at the cost per transaction for someone making 1 tx a day over the same few years..

then come back and tell the forum which costs more.. securing or spending.. (hint: the fees are the problem, not the hardware)