
Topic: [POLL] Is bigger block capacity still a taboo?

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
November 26, 2023, 04:03:03 AM
Maybe that could make Initial Blockchain Download faster?
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default

But is it really necessary to implement the "milestone" at the Bitcoin protocol level when several Bitcoin full node implementations already do that? For reference, here are a few examples:

  • Gocoin

Source: https://github.com/piotrnar/gocoin/blob/master/website/gocoin_manual_config.html#L57-L60
Code:
LastTrustedBlock
string
updated regularly
Hash of the highest trused block (used to speed up initial chain sync).

Source: https://github.com/piotrnar/gocoin/blob/master/client/common/config.go#L21
Code:
const LastTrustedBTCBlock = "00000000000000000001fcf207ce30e9172433f815bf4ca0e90ecd0601286a20" // #817490

  • Bitcoin Core

Source: https://github.com/bitcoin/bitcoin/blob/v25.1/src/kernel/chainparams.cpp#L107
Code:
consensus.defaultAssumeValid = uint256S("0x000000000000000000035c3f0d31e71a5ee24c5aaf3354689f65bd7b07dee632"); // 784000
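
For illustration, here is a minimal sketch (in Go, not Bitcoin Core's actual code) of how such an "assume valid" shortcut works conceptually: headers, proof-of-work and UTXO bookkeeping are still checked for every block, but expensive script/signature validation is skipped for blocks buried under a hard-coded trusted hash. The struct fields and the trustedHeight constant are assumptions made up for this example.

Code:
package main

import "fmt"

// Block is a simplified stand-in for a fully connected block.
// Height and AncestorOfTrusted would come from the node's header index.
type Block struct {
    Height            int
    AncestorOfTrusted bool // true if this block is an ancestor of the hard-coded trusted hash
}

// hypothetical height of the hard-coded "assume valid" block
const trustedHeight = 784000

// needsScriptChecks decides whether expensive signature/script validation
// can be skipped: only for blocks buried under the trusted hash.
func needsScriptChecks(b Block) bool {
    if b.AncestorOfTrusted && b.Height <= trustedHeight {
        return false // structure and UTXO updates are still verified, scripts are not
    }
    return true
}

func main() {
    fmt.Println(needsScriptChecks(Block{Height: 500000, AncestorOfTrusted: true}))  // false: skip scripts
    fmt.Println(needsScriptChecks(Block{Height: 820000, AncestorOfTrusted: false})) // true: validate fully
}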

Quote
Increasing linear growth in the witness space is inherently more inclusive than forcing an increase upon everyone and risking greater division.
As far as I know, increasing the witness is incompatible with old nodes. You cannot change 4 MB into 32 MB just by changing a single constant. It is a consensus-level change, which means you need a second witness to make it compatible. Which means you probably need commitments that can be used on legacy, witness, or whatever, and can be increased or decreased at will. Otherwise, you will end up with more than one witness, and people will complain that your code is convoluted.

It's possible by increasing the discount factor for witness data from 4 to 32, although a soft fork is needed, and a far higher discount factor doesn't bring a far higher average block size, since witness data is only a small part of an average Bitcoin TX.
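
A rough back-of-envelope sketch of that point, using assumed (not measured) sizes of about 113 base bytes plus 109 witness bytes for a typical 1-input, 2-output P2WPKH transaction: raising the discount mainly stretches the already-small witness share, so the total block size only grows modestly.

Code:
package main

import "fmt"

// Illustrative sizes for a 1-in/2-out P2WPKH transaction (assumed, not measured).
const (
    baseBytes    = 113 // non-witness part, seen by legacy nodes
    witnessBytes = 109 // witness part, discounted
)

// maxBlockBytes returns how many such transactions fit and the resulting
// total block size, if the witness discount factor were `discount`
// while keeping the 1,000,000-byte base limit for old nodes.
func maxBlockBytes(discount int) (txCount, totalBytes int) {
    weightLimit := discount * 1_000_000
    txWeight := baseBytes*discount + witnessBytes
    txCount = weightLimit / txWeight
    totalBytes = txCount * (baseBytes + witnessBytes)
    return
}

func main() {
    for _, d := range []int{4, 32} {
        n, total := maxBlockBytes(d)
        fmt.Printf("discount %2d: ~%d tx, ~%.2f MB total\n", d, n, float64(total)/1e6)
    }
    // discount  4: ~7130 tx, ~1.58 MB total
    // discount 32: ~8590 tx, ~1.91 MB total
}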
legendary
Activity: 4410
Merit: 4766
Maybe that could make Initial Blockchain Download faster?
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default

But is it really necessary to implement the "milestone" at the Bitcoin protocol level when several Bitcoin full node implementations already do that? For reference, here are a few examples:

maybe read a lil
legendary
Activity: 3472
Merit: 10611
Your problem, Franky, is that you have some abstract idea of a problem and then expand that to the whole system and bash everything in your way.

back then they had buzzwords for them: empty opcodes, called nops, nulls.. these days, for the new subclass of opcodes for segwit and then the next subclass of opcodes for taproot, they use buzzword names like opsuccess
LEARN THEM, learn what they do and don't do
OP_NOPs have always existed in Bitcoin and they are a good thing. They exist to allow future expansion while having their usage restricted by standard rules. Need I remind you of how OP_CHECKLOCKTIMEVERIFY was activated?

Quote
anyways
the very next bytes you can put after an op_0 is op_push4, which says the next bytes after that can be up to 4,294,967,295 bytes (4.29 GB), which are only prevented by the block limit from actually being 4.29 GB
also, multisig did not use op_0, it used other operation bytes..
Wrong.
As I said, witness version 0 is literally the strictest script that exists in Bitcoin. After OP_0 there can only be either 20 bytes or 32 bytes and absolutely nothing else. If you include anything else (e.g. 19 bytes) your transaction is rejected as invalid (not even non-standard; it is outright invalid).
https://github.com/bitcoin/bitcoin/blob/b5a271334ca81a6adcb1c608d85c83621a9eae47/src/script/interpreter.cpp#L1901
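
A minimal sketch of that rule (paraphrasing the linked interpreter code rather than copying it): for witness version 0, the program that follows OP_0 must be exactly 20 bytes (P2WPKH) or 32 bytes (P2WSH), otherwise the script is invalid.

Code:
package main

import (
    "errors"
    "fmt"
)

// checkWitnessV0Program mirrors, in spirit, the version-0 rule enforced by
// Bitcoin Core's witness program verification: the program must be 20 or 32 bytes.
func checkWitnessV0Program(program []byte) error {
    switch len(program) {
    case 20: // P2WPKH: program is a pubkey hash
        return nil
    case 32: // P2WSH: program is a script hash
        return nil
    default:
        return errors.New("invalid witness program length for version 0")
    }
}

func main() {
    fmt.Println(checkWitnessV0Program(make([]byte, 20))) // <nil>
    fmt.Println(checkWitnessV0Program(make([]byte, 32))) // <nil>
    fmt.Println(checkWitnessV0Program(make([]byte, 19))) // invalid witness program length for version 0
}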

Quote
so it can be abused in so many ways due to its openness at the time of writing it in 2016
I don't know what SegWit was or may have been before it was activated. It is as it is right now, and your statements in the previous quote chunk are in the present tense, not about something that might have been; and your bashing of SegWit is still happening today, not in some distant past, with false information such as the previous quote chunk!

Quote
seems doomad and pooya have rejoined each other in the cultish narrative of merit circle jerkin and defending each other again, to prevent bitcoin scaling by not even knowing about the code or exploits that are causing the congestion that is not helping bitcoin have lean transactions or increased transaction counts
It is one thing to talk about the actual problem, which is the "loose ends" in witness evaluation rules, some of which were introduced in version 1. Nobody has talked about them more than I have. But it is another thing to bash SegWit as a whole and use the strictest version (which is version 0) as the basis of your false arguments!
copper member
Activity: 901
Merit: 2244
Quote
Either every block is a checkpoint or none of them are.
This is a wrong take, because a one-block chain reorganization is something that can happen no matter how high a hashrate the network reaches. However, a 210,000-block chain reorganization is something that has never happened.

Also, there is a reason why we have coinbase maturity set to 100 blocks. If you don't agree with protecting the last 210,000 blocks, then what about the last 100 blocks?

Another place where there is such an assumption is pruning: you cannot prune the last 288 blocks. So, what about 288, if 100 is not enough? Or maybe 2016, to align it with difficulty adjustments, and compress those four bytes that are mostly identical during those two weeks?

So, if protecting a single block were enough, then we wouldn't have coinbase maturity (100 blocks) or a minimal pruning level (288 blocks, which means around 2 days).
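
For concreteness, a tiny sketch of those two depth rules as commonly described (constants as in the post; the function names are made up for the example):

Code:
package main

import "fmt"

const (
    coinbaseMaturity = 100 // coinbase outputs are spendable only after 100 confirmations
    minPruneDepth    = 288 // a pruned node must keep at least the last 288 blocks
)

// coinbaseSpendable: can a coinbase mined at minedHeight be spent at tipHeight?
func coinbaseSpendable(minedHeight, tipHeight int) bool {
    return tipHeight-minedHeight >= coinbaseMaturity
}

// prunable: may the block at height be deleted by a pruned node at tipHeight?
func prunable(height, tipHeight int) bool {
    return tipHeight-height >= minPruneDepth
}

func main() {
    fmt.Println(coinbaseSpendable(817000, 817099)) // false: only 99 confirmations
    fmt.Println(coinbaseSpendable(817000, 817100)) // true
    fmt.Println(prunable(816000, 817000))          // true: buried 1000 blocks deep
}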

Quote
If users want to carry a smaller burden, they can just sync the 'legacy' blockdata and leave out the 'witness' while still contributing to, and taking part in, the network.
Note that by doing things in this way, you open a potential gate in the future to disable SegWit, or other soft forks. It is up to you to decide if you want that. But yes, it is possible, and can be easily achieved, for example by just running some old client. And yes, that famous block with almost 4 MB of witness data is very small if you have a non-SegWit client.

Quote
Increasing linear growth in the witness space is inherently more inclusive than forcing an increase upon everyone and risking greater division.
As far as I know, increasing the witness is incompatible with old nodes. You cannot change 4 MB into 32 MB just by changing a single constant. It is a consensus-level change, which means you need a second witness to make it compatible. Which means you probably need commitments that can be used on legacy, witness, or whatever, and can be increased or decreased at will. Otherwise, you will end up with more than one witness, and people will complain that your code is convoluted.
legendary
Activity: 4410
Merit: 4766
The ideal solution, in my mind, would be one in which users have some level of flexibility to determine their own level of involvement.  Similar to what SegWit achieved.  If users want to carry a smaller burden, they can just sync the 'legacy' blockdata and leave out the 'witness' while still contributing to, and taking part in, the network.  The opt-in nature to retain a low barrier to entry is preferable to a one-size-fits-all approach, where we risk people digging their heels in and increasing the potential for a contentious fork.  Increasing linear growth in the witness space is inherently more inclusive than forcing an increase upon everyone and risking greater division.

people don't opt-in, 'a nature to retain a low barrier of entry'... they are opting-out of being a full node
you opt-in by upgrading your node..
you are opted out by the network when new funky tx start appearing but you did not upgrade to be ready

only downloading the legacy data is not contributing because
old 'backward' nodes dont validate pre-confirm funky tx.
old 'backward' nodes dont relay pre-confirm funky tx.
old 'backward' nodes dont validate post-confirm
old 'backward' nodes dont retain the signature proof of new funky tx in a block
old 'backward' nodes dont keep full data to then offer initial block download to other nodes

yes you are still on the network. but become a leecher not a seeder of blockdata, you become a network burden on full nodes because you become a downstream bottleneck endpoint, not an equal peer

there is a big difference between treating assume valid blocks that are 8-12 years old vs live data unconfirmed or just confirmed today being assumed as valid without as many nodes doing the full checks and archiving

there was a big reason for proper consensus votes. and that was to ensure there were a healthy amount of upgraded nodes ready to fully validate a new proposed change.. when new funky transactions were being introduced... its called decentralisation

using assume valids to allow new junk without a consensus vote of readiness hurts the network.. a lot more than using an assume valid of blocks 8-12 years ago that can be audited and double milestoned into an audited utxoset

sr. member
Activity: 1666
Merit: 310
I don't personally believe "milestones" or "checkpoints" to be the right approach.  I outlined some reasons as to why back in 2020.  Either every block is a checkpoint or none of them are.

The ideal solution, in my mind, would be one in which users have some level of flexibility to determine their own level of involvement.  Similar to what SegWit achieved.  If users want to carry a smaller burden, they can just sync the 'legacy' blockdata and leave out the 'witness' while still contributing to, and taking part in, the network.  The opt-in nature to retain a low barrier to entry is preferable to a one-size-fits-all approach, where we risk people digging their heels in and increasing the potential for a contentious fork.  Increasing linear growth in the witness space is inherently more inclusive than forcing an increase upon everyone and risking greater division.
What you're proposing is actually an ingenious solution and reminds me of AMD's x86-64 architecture 20+ years ago.

What did AMD do? They took Intel's i386 proven architecture (despite not being perfect, especially compared to RISC behemoths) and extended it (double register width + double the amount of registers).

You can operate an AMD64 processor in various ways (legacy i386 or 64-bit mode or even 16-bit mode for DOS apps)!

On the other hand, Intel invented Itanium (a brand new, clean slate architecture) and it failed miserably. Eventually they were forced to copy AMD's approach.

What's the lesson here for IT-minded folks?

I'm honestly surprised we have this kind of discussion in the Bitcoin community... I'm really curious to know the professional background of big blockers, if they have any.

Something tells me they have almost zero IT experience, which explains their fallacious arguments...
legendary
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
I don't personally believe "milestones" or "checkpoints" to be the right approach.  I outlined some reasons as to why back in 2020.  Either every block is a checkpoint or none of them are.

The ideal solution, in my mind, would be one in which users have some level of flexibility to determine their own level of involvement.  Similar to what SegWit achieved.  If users want to carry a smaller burden, they can just sync the 'legacy' blockdata and leave out the 'witness' while still contributing to, and taking part in, the network.  The opt-in nature to retain a low barrier to entry is preferable to a one-size-fits-all approach, where we risk people digging their heels in and increasing the potential for a contentious fork.  Increasing linear growth in the witness space is inherently more inclusive than forcing an increase upon everyone and risking greater division.
legendary
Activity: 4410
Merit: 4766
Quote
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default
This is not trustless. If we could do that, then it would be possible to make it a consensus rule. For example: only the last 210,000 blocks are stored, and the rest is automatically marked as valid. Which means, if some bug remain unnoticed for 4 years, it is then set in stone, and cannot be fixed by any chain reorganization.

Which means, in that case, it would allow us to have a "constant storage client". Then you could multiply, for example, 210,000 blocks by 4 MB, and guarantee that 840 GB is sufficient to run your node. Of course, the UTXO set is also important, so it would potentially be bigger than that, but that approach would just mean that pruning could be set, for example, to 210,000 blocks, and consensus could enforce it.

Another consequence is that new nodes would start synchronization from 210,000 blocks in the past (or would start from the latest block and process it backwards), and check only the last N blocks. And then the whole model should be more UTXO-based, because in your scenario you also have to somehow remember which coins were created, for example, 10 years ago and were not moved in the last 4 years (so they are still valid, and should not be dropped).

its not about starting a sync from block 210k (your example), its about all blocks from 0 forward still being downloaded.. but no transaction validation is done, they are just treated as valid and put into/taken from the utxoset depending on whether they were created or spent.. but all blockdata is stored, just fewer validations done on earlier blockdata
however, code is a great thing and many things can be done. such as locking in a UTXO-set image at the position of block 420k, whereby we hash the utxoset at that position, put the hash into a coinbase reward output, and nodes that match it against their own set do not reject the block. then, after some time of having this hash locked in, the utxoset becomes a ball of data of unspent history that no longer needs the previous TX data, because the milestoned depth agrees the immutable data is so deep that a re-org of that magnitude would be more of a risk event in itself

some are calling this a "ball and chain": a 0-420k ball of utxo and a chain of 420,001-now
ofcourse the block headers and txid tree of 0-420k still remain, to show the tx of the utxoset did exist in said block
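
A very rough sketch of the "hash the utxoset and commit it" idea described above; the serialization and field names are invented for the example, and real proposals in this direction (e.g. assumeutxo-style snapshots) differ in detail:

Code:
package main

import (
    "crypto/sha256"
    "fmt"
    "sort"
)

// UTXO is a simplified unspent output: outpoint plus value and script.
type UTXO struct {
    Outpoint string // "txid:vout"
    Value    uint64
    Script   string
}

// utxoSetHash builds a deterministic commitment to the whole UTXO set
// by hashing the entries in a canonical (sorted) order.
func utxoSetHash(set []UTXO) [32]byte {
    sort.Slice(set, func(i, j int) bool { return set[i].Outpoint < set[j].Outpoint })
    h := sha256.New()
    for _, u := range set {
        fmt.Fprintf(h, "%s|%d|%s\n", u.Outpoint, u.Value, u.Script)
    }
    var out [32]byte
    copy(out[:], h.Sum(nil))
    return out
}

func main() {
    set := []UTXO{
        {"aaaa…:0", 5000000000, "OP_DUP OP_HASH160 …"},
        {"bbbb…:1", 1200000, "OP_0 <20-byte-hash>"},
    }
    // A node could compare this hash against one committed in a coinbase
    // output at the agreed "milestone" height before trusting the snapshot.
    fmt.Printf("%x\n", utxoSetHash(set))
}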

code can do many great things
legendary
Activity: 4354
Merit: 3614
what is this "brake pedal" you speak of?
Quote
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default
This is not trustless. If we could do that, then it would be possible to make it a consensus rule. For example: only the last 210,000 blocks are stored, and the rest is automatically marked as valid. Which means, if some bug remains unnoticed for 4 years, it is then set in stone, and cannot be fixed by any chain reorganization.

wouldn't a 4-year reorg basically be catastrophic anyway?
copper member
Activity: 901
Merit: 2244
Quote
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default
This is not trustless. If we could do that, then it would be possible to make it a consensus rule. For example: only the last 210,000 blocks are stored, and the rest is automatically marked as valid. Which means, if some bug remains unnoticed for 4 years, it is then set in stone, and cannot be fixed by any chain reorganization.

Which means, in that case, it would allow us to have a "constant storage client". Then you could multiply, for example, 210,000 blocks by 4 MB, and guarantee that 840 GB is sufficient to run your node. Of course, the UTXO set is also important, so it would potentially be bigger than that, but that approach would just mean that pruning could be set, for example, to 210,000 blocks, and consensus could enforce it.

Another consequence is that new nodes would start synchronization from 210,000 blocks in the past (or would start from the latest block and process it backwards), and check only the last N blocks. And then the whole model should be more UTXO-based, because in your scenario you also have to somehow remember which coins were created, for example, 10 years ago and were not moved in the last 4 years (so they are still valid, and should not be dropped).
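
The storage bound in that scenario is simple arithmetic; a tiny sketch with the numbers from the post (ignoring the extra UTXO-set storage it mentions):

Code:
package main

import "fmt"

func main() {
    const (
        keptBlocks   = 210_000   // consensus-enforced pruning depth from the example
        maxBlockSize = 4_000_000 // worst-case 4 MB per block
    )
    total := int64(keptBlocks) * maxBlockSize
    fmt.Printf("guaranteed block storage bound: %d GB (plus the UTXO set)\n", total/1_000_000_000) // 840 GB
}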
legendary
Activity: 4410
Merit: 4766
Maybe that could make Initial Blockchain Download faster?
or maybe the code in bitcoin protocol nodes already has "milestones" to treat blocks older than X height as valid by default


the important part of a bitcoin block for answering "did mining pool solve a block" is actually just
the blockheader
and a list of TXID

the TXIDs are in an order that a node can merkle-tree into a merkle root to compare against the merkle root in the blockheader.
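
As a sketch of that step, here is a Bitcoin-style merkle root fold (double-SHA256 of concatenated pairs, duplicating the last hash on odd-length levels); the txids are placeholders and the byte-order details of real txids are ignored:

Code:
package main

import (
    "crypto/sha256"
    "fmt"
)

// dsha256 is Bitcoin's double SHA-256.
func dsha256(b []byte) []byte {
    h1 := sha256.Sum256(b)
    h2 := sha256.Sum256(h1[:])
    return h2[:]
}

// merkleRoot folds a list of 32-byte txids up to a single root,
// duplicating the last entry of odd-length levels, as Bitcoin does.
func merkleRoot(txids [][]byte) []byte {
    level := txids
    for len(level) > 1 {
        if len(level)%2 == 1 {
            level = append(level, level[len(level)-1])
        }
        var next [][]byte
        for i := 0; i < len(level); i += 2 {
            next = append(next, dsha256(append(append([]byte{}, level[i]...), level[i+1]...)))
        }
        level = next
    }
    return level[0]
}

func main() {
    // Placeholder txids; a real node would take them straight from the block's tx list.
    a, b, c := sha256.Sum256([]byte("tx-a")), sha256.Sum256([]byte("tx-b")), sha256.Sum256([]byte("tx-c"))
    root := merkleRoot([][]byte{a[:], b[:], c[:]})
    fmt.Printf("merkle root: %x\n", root) // compared against the value in the block header
}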

the tests about transaction validity are mostly already done by nodes pre-confirm. when a new block is mined
the actual transactions are mostly already in most nodes mempool(relayed pre-confirm) or can be sent separately in batches on request. its not like 4mb needs to be sent as one lump in one go

with the knowledge that millions of transactions can be relayed per second, requesting just a few thousand, or a batch of less than a few thousand, is not a hardware/bandwidth hardship

the blockheader+txid that represents "4mb" is actually far less than actual 4mb of data people think needs to be transmitted

there are extra efficiencies that can be done(different to bloom filters)
the list of TXID can be batched into smaller lumps instead of one lump.. made into say 10 lumps of 400tx instead of 1 lump of X000tx. and if a node sees the blockheaders list of TXID and notices a TXID the node doesnt already have it can just make a request for blockID:XXXX-txbatch:4
meaning it just grabs transactions 1601-2000 of X000 instead of all X000
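
A minimal sketch of that request pattern, assuming a made-up batch size of 400 txids and a made-up message format (block ID plus batch index):

Code:
package main

import "fmt"

const batchSize = 400

// missingBatches returns the indices of the txid batches a node would need
// to request, given the block's ordered txid list and what is already in
// its mempool. Batch i covers txids [i*batchSize, (i+1)*batchSize).
func missingBatches(blockTxids []string, mempool map[string]bool) []int {
    var need []int
    for i := 0; i*batchSize < len(blockTxids); i++ {
        end := (i + 1) * batchSize
        if end > len(blockTxids) {
            end = len(blockTxids)
        }
        for _, txid := range blockTxids[i*batchSize : end] {
            if !mempool[txid] {
                need = append(need, i)
                break // one missing txid is enough to request the whole batch
            }
        }
    }
    return need
}

func main() {
    // 2000 fake txids, of which one in batch 4 (txids 1600-1999) is unknown.
    txids := make([]string, 2000)
    mempool := map[string]bool{}
    for i := range txids {
        txids[i] = fmt.Sprintf("txid-%04d", i)
        mempool[txids[i]] = true
    }
    delete(mempool, "txid-1700")
    fmt.Println(missingBatches(txids, mempool)) // [4] -> request "blockID:XXXX-txbatch:4"
}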
copper member
Activity: 901
Merit: 2244
Quote
I really wonder if someone has come up with a better solution that doesn't only involve a block size increase.
1. Sidechains: blocks can be big and small at the same time, if you have a two-way peg. Then you decide what you want, by making signatures.
2. Cut-through: if you have an Alice->Bob->Charlie transaction chain, then you need to store only Alice->Charlie. If you have long unconfirmed transaction chains, you can batch them. If you have to show someone that Bob was inside, then you can do so through commitments.
3. Scanners: if some data is already pushed on-chain, then you can create a scanner that would locate it in the already existing chain, instead of pushing it again, like the user who tried to push the whitepaper again, but as an Ordinal this time (ignoring the fact that it was already pushed a long time ago with some multisig).
4. No-fork compression: you can compress things in any way you want, and stay compatible with the existing network, as long as you decompress things before sending them to old nodes. If it gets standardized, then new nodes will have better performance, so people will upgrade soon.
5. Commitments: you don't have to push big data on-chain. In many cases, all you need is proving that a given piece of data existed at a given time, and just use Bitcoin as a timestamping server. Commitments can be used to do that, and it will cost no additional on-chain bytes (a minimal sketch follows below).
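
For item 5, a minimal sketch of the commitment idea in its simplest form (hash the document, keep the document off-chain, later prove it by re-hashing); how the 32-byte hash gets anchored on-chain, e.g. via OP_RETURN or a tweaked key, is left out:

Code:
package main

import (
    "bytes"
    "crypto/sha256"
    "fmt"
)

// commit produces the 32-byte value that would be anchored on-chain.
// The document itself never has to touch the chain.
func commit(document []byte) [32]byte {
    return sha256.Sum256(document)
}

// verify checks, later, that a revealed document matches the anchored commitment.
func verify(document []byte, commitment [32]byte) bool {
    h := sha256.Sum256(document)
    return bytes.Equal(h[:], commitment[:])
}

func main() {
    doc := []byte("contract text, whitepaper, anything")
    c := commit(doc)
    fmt.Println(verify(doc, c))                         // true
    fmt.Println(verify([]byte("tampered document"), c)) // false
}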

Should I continue? Or maybe there are some questions? Coming up with some new idea is not that hard. The most difficult part is turning all of that into code. And this is what I am trying to do. One of my experimental projects assumes that everything is P2PK-only, and is fully commitment-based (I don't have to care about things that are not OP_CHECKSIG-based, because they can be easily simplified into OP_TRUE; also, you cannot use a coin in any real-life scenario without using any public key). Maybe that could make Initial Blockchain Download faster? We will see if I manage to do that; after all, it is also quite likely that I will merge my ideas with someone else's rather than release it alone.

Quote
I don't understand why people join mining pools that don't share fees with their customers. Every big mining pool keeps the fees for itself instead of sharing, while smaller ones give away the fees but still don't attract people.
It is a matter of time. Note that with each and every halving, the basic block reward gets smaller and smaller. So we will eventually reach a point where those pools will have a choice: convince miners to work for free, or lose them. Because the basic block reward will eventually be zero, and that will force those pools to change their rules.
hero member
Activity: 882
Merit: 792
Watch Bitcoin Documentary - https://t.ly/v0Nim
If there are ways to manipulate or rig the system to an outcome which suits an attacker's goals, then it's no good.  We can't inadvertently introduce that kind of weakness.
There will always be a way to manipulate the system even if we increase the block size up to 1 TB. Miners have a huge advantage: they can send transactions with very high fees while losing nothing, because all the paid fees end up in their pocket anyway. So, if I am a big miner and make tens of thousands of transactions and pay millions in fees, I will increase the fee for everyone and collect additional money from it, but I also get all my spent millions back, because the fees and the block reward go to the miners together.

I really wonder if someone has come up with a better solution that doesn't only involve a block size increase. I don't understand why people join mining pools that don't share fees with their customers. Every big mining pool keeps the fees for itself instead of sharing, while smaller ones give away the fees but still don't attract people.
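
A rough expected-value sketch of the fee-recycling scenario above, under the simplifying assumption that the padded transactions sit in the public mempool and each block is won in proportion to hashrate, so the miner only recovers the fees from blocks it mines itself:

Code:
package main

import "fmt"

// expectedNetCost: if a miner broadcasts transactions paying `fees` in total
// and holds fraction `share` of the hashrate, it expects to win back
// share*fees and to lose the rest to other miners.
func expectedNetCost(fees, share float64) float64 {
    return fees * (1 - share)
}

func main() {
    fees := 10_000_000.0 // e.g. $10M spent on fee padding (illustrative)
    for _, share := range []float64{0.05, 0.25, 0.50} {
        fmt.Printf("hashrate %.0f%%: expected net cost $%.1fM\n",
            share*100, expectedNetCost(fees, share)/1e6)
    }
}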
legendary
Activity: 4410
Merit: 4766

thats what "scaling" is for rather than "LEAPING"
just saying no is not the solution to LEAPING.. scaling is the solution to leaping

also SCALING is about the long run.. its not about the high-jump.. (two different sporting events)

Unfortunately, I don't follow you. I will quote myself:

Quote
To this extent, I think increasing the block size a little will not hurt, but I am against it because it will not solve the "problem" in the long run.

I don't want to increase the blocksize "just a little" because in my opinion it will not solve the issue.
I don't want to increase the blocksize "a lot" because of the reasons I have already mentionned.

So I don't want to scale, leap, high-jump, run, dive or do any sporting event.

If the blocksize scales "a little", I will accept it, even though I disagree.
If the blocksize scales "a lot", I am leaving.

So simple!


SCALING is not a one-time "just a little", it's not a one-time "a lot".
 it's a long run of progressive steps..
you solve the long run by doing the long run.. one foot in front of the other..

we dont need 1gb blocks today.
today we can fix the issues of the 4mb block bloat, spam, and its badly utilised 3mb witness segregation space offset, to allow more transactions and actually get to potentials of 16k transactions instead of being stuck at averages under 4.2k, without adjusting beyond 4mb..
then move to 8mb, and at a later date progress again, and more later to progress again..
one step in front of the other.. step by step
.. this is literally the meaning of scaling..  the long run

all the other detractors/objectors talking about needing to high jump now to solve the long run later..(facepalm)
all the other detractors/objectors talking about a high jump cant happen now so lets do nothing at all..(facepalm)
all the other detractors/objectors completely ignoring/avoiding/ twist/lie about discussion of SCALING (facepalm)

even though 99% of transactions are relayed/broadcast PRE CONFIRM so the "4mb" block limit is not actually "4mb" sent in one go anymore
but lets say it was.. heck lets say it was 10mb

10 MB / 10 min = 1 MB/min ≈ 16.7 kB/s
a 10mb block is less data than dial-up
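
The same steady-state arithmetic for a few block sizes (average rate only; this ignores relay bursts, protocol overhead, and uploading to multiple peers):

Code:
package main

import "fmt"

func main() {
    const blockIntervalSec = 600 // ~10 minutes per block on average
    for _, mb := range []float64{1, 4, 10, 32} {
        kbPerSec := mb * 1000 / blockIntervalSec // MB per block -> kB per second
        fmt.Printf("%5.1f MB blocks ≈ %5.1f kB/s sustained\n", mb, kbPerSec)
    }
    // 1 MB ≈ 1.7 kB/s, 4 MB ≈ 6.7 kB/s, 10 MB ≈ 16.7 kB/s, 32 MB ≈ 53.3 kB/s
}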

those in africa who dont live near cabled internet use 4g/5g cellular, which they 'hotspot' to a PC
millions of africans livestream, watch tv and game via 4g/5g..

do you think netflix (data of gigabytes an hour to users) said "lets not start a business because customers cant handle gigabytes an hour"

as for the initial block download.. things can be done about the user interface of things like core to not be a splash screen telling people to wait to sync-up, when people just want to generate an address to get started, many things can be done to make that a better user experience. not just the user interface but also the underlying download process
hero member
Activity: 560
Merit: 1060

thats what "scaling" is for rather than "LEAPING"
just saying no is not the solution to LEAPING.. scaling is the solution to leaping

also SCALING is about the long run.. its not about the high-jump.. (two different sporting events)

Unfortunately, I don't follow you. I will quote myself:

Quote
To this extent, I think increasing the block size a little will not hurt, but I am against it because it will not solve the "problem" in the long run.

I don't want to increase the blocksize "just a little" because in my opinion it will not solve the issue.
I don't want to increase the blocksize "a lot" because of the reasons I have already mentionned.

So I don't want to scale, leap, high-jump, run, dive or do any sporting event.

If the blocksize scales "a little", I will accept it, even though I disagree.
If the blocksize scales "a lot", I am leaving.

So simple!
legendary
Activity: 4410
Merit: 4766

I really don't get this logic.

There are many people who don't share the same logic that I do and it is totally fine.

There are people on the planet (close to 50%) that have no PC or internet access. Does that mean that the network should stop because of that?

The network shouldn't stop. No! However, the inspiration was that everyone should be able to run a node.

To this extent, I think increasing the block size a little will not hurt, but I am against it because it will not solve the "problem" in the long run.

In case you are running a node, because someone else doesn't have that comfort, does it mean that you should stop having one too?

I didn't say that. If someone doesn't have the comfort to run a node, it doesn't mean I will stop running a node. I will, however, keep the blocksize low, so that I don't make it even more difficult for them to run a node.

thats what "scaling" is for rather than "LEAPING"
just saying no is not the solution to LEAPING.. scaling is the solution to leaping

also SCALING is about the long run.. its not about the high-jump.. (two different sporting events)

dont be fooled into disregarding progress via scaling..
dont be fooled by trolls talking about the high jump when rational people are talking about the long run
hero member
Activity: 560
Merit: 1060

I really don't get this logic.

There are many people who don't share the same logic that I do and it is totally fine.

There are people on the planet (close to 50%) that have no PC or internet access. Does that mean that the network should stop because of that?

The network shouldn't stop. No! However, the inspiration was that everyone should be able to run a node.

To this extent, I think increasing the block size a little will not hurt, but I am against it because it will not solve the "problem" in the long run.

In case you are running a node, because someone else doesn't have that comfort, does it mean that you should stop having one too?

I didn't say that. If someone doesn't have the comfort to run a node, it doesn't mean I will stop running a node. I will, however, keep the blocksize low, so that I don't make it even more difficult for them to run a node.

hero member
Activity: 1111
Merit: 588
Of course, but it seriously depends on where you live. Unfortunately some people live in countries that don't advance at the same pace, meaning that poverty is too high for people to be able to cope with technological progress...
There are people on the planet (close to 50%) that have no PC or internet access. Does that mean that the network should stop because of that? In case you are running a node, because someone else doesn't have that comfort, does it mean that you should stop having one too? If we have to follow a standard, let's follow the one of the most vulnerable and quit using our PCs or phones. That would be the most fair.
I really don't get this logic.
legendary
Activity: 4410
Merit: 4766
Clearly no one saw this current situation coming.  

funny part is i was having lengthy debates with gmax and others in 2016 about the exploit-ability of "anyonecanspend" (unconditioned opcodes).. it is funny how things come full circle where core gods and their cult followers lie, deceive try to deny things to push their agenda using tactics that the person on other side of debate is lying and deceiving

because, you see, data in the blockchain proves who's right in the end.

Congratulations, one of your hundreds of Chicken Little "Sky is falling!" cries turned out not to be 100% wrong.  Have a cookie.  Purely by the law of averages, even a fruitloop like you has to get it right once in a while.  Don't let it go to your head.
Actually the thing he quoted has nothing to do with this topic, and the example chosen in that comment is actually wrong and the silliest one, since P2WPKH (i.e. OP_0 + 20 bytes) is literally the strictest SegWit script that exists and there is absolutely no way of exploiting it.

incorrect.. because there is no expectation of content in op_0
remember op_0 was not a strict segwit script.. because.. 2016... yep segwit wasnt even a thing!!

back then they had buzzwords for them: empty opcodes, called nops, nulls.. these days, for the new subclass of opcodes for segwit and then the next subclass of opcodes for taproot, they use buzzword names like opsuccess
LEARN THEM, learn what they do and don't do

anyways
the very next bytes you can put after an op_0 is op_push4, which says the next bytes after that can be up to 4,294,967,295 bytes (4.29 GB), which are only prevented by the block limit from actually being 4.29 GB
also, multisig did not use op_0, it used other operation bytes..

so it can be abused in so many ways due to its openness at the time of writing it in 2016

seems doomad and pooya have rejoined each other in the cultish narrative of merit circle jerkin and defending each other again, to prevent bitcoin scaling by not even knowing about the code or exploits that are causing the congestion that is not helping bitcoin have lean transactions or increased transaction counts

but let them pat themselves on the back merit cycling each other while not wanting to learn how things really work

i can literally hear it now, them high fiving each other shouting "Got-em'" but not realising they just proved they dont even know about the opcodes or the abuse the openness of the opcode caused by the stuff it allowed.. so give each other a pat on the back for proving you didnt know what your talking about.

it doesnt pay to learn from each others idiocies,, it pays to actually learn bitcoin. doomad already proved he cant earn an income from having blind adoration and just repeat what they told him.

so dont waste your time circle jerking things you dont know, and take the time to learn bitcoin
hero member
Activity: 560
Merit: 1060
But don't forget technology continues to progress, so increasing the block size doesn't always mean sacrificing decentralization.

Of course, but it seriously depends on where you live. Unfortunately some people live in countries that don't advance at the same pace, meaning that poverty is too high for people to be able to cope with technological progress...