
Topic: (Ordinals) BRC-20 needs to be removed

sr. member
Activity: 1190
Merit: 469
April 19, 2024, 06:05:19 PM
Quote
This part is strange: many people want bigger blocks, but they don't want to run their own nodes to handle all of that traffic. Why?
the same reason end users of any product don't want, and shouldn't have to, involve themselves in how the back end works: if they pay the fee, they expect the service.

Quote
If you are the one who wants big blocks, then you should also be the one who runs a node 24/7 and shows everyone: "See? I can handle that without any problems!". I wonder what the reason is for wanting to increase the maximum block size while not being willing to share in the costs of doing so. And the same is true for Ordinals: people want to use an existing network (like Bitcoin) and put their data there, instead of maintaining their own chain and honestly bearing all the costs of storing that additional data on a chain of their own.
let's not try and force people into running a bitcoin node for being critical of bitcoin's capacity limits. as internet speeds, storage space and cpu power go up, so should bitcoin's transactions per second. and i'm not saying that capacity should include people looking to store their home videos or pictures of monkeys.

Quote from: pooya87
That's just unrelated to what I brought up and also unrelated to this topic, which is about the Ordinals Attack. Otherwise I don't entirely disagree with what you have in mind.
I have always said that at some point in the near future Bitcoin needs a solid capacity increase where the block size is also increased. That is alongside second-layer scaling solutions, and as a complementary improvement, because something like LN cannot solve much alone.

at least someone agrees with me. but back to the ordinals attack. it's a problem bitcoin caused for itself. so bitcoin needs to fix it. if not then there's no one to complain to.
hero member
Activity: 813
Merit: 1944
April 19, 2024, 08:24:34 AM
Quote
I would like to use BRC-20 for completely uncensored communication.
Guess what: you don't need a blockchain to achieve that. If you want "uncensored communication", then what you probably care about is delivering your messages to your recipient. I guess you don't also want to broadcast your neighbours' conversations and keep track of them. And there are applications which can provide that, and the common factor is that none of them use a blockchain. Because you don't need one. You don't need "double-spending resistance" on your conversations. If Alice says "hi" and Bob replies "hello", you don't need to keep track of that to make sure nobody said "hello" twice.

Also note that when Satoshi tried to bootstrap the first nodes, he just used IRC, an existing communication protocol which was there long before Bitcoin, and which you can use for "uncensored communication" even today. He also used SMTP (e-mail), another existing communication protocol. Which means that if you say "I need instant communication", the answer is not "BRC-20", but rather something like "IRC". And if you say "I want to receive messages when I am offline", then again, you could use something like "SMTP" instead of "BRC-20".
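To make that concrete, here is a minimal Python sketch of "uncensored communication" over plain IRC, with no blockchain involved. The server, nickname, and channel are made-up examples, and a real client would do more error handling:

Code:
import socket

# Minimal IRC sketch: connect, identify, join a channel, say "hi".
# Server, nickname and channel are made-up examples; a real client would
# wait for the 001 welcome reply before joining.
HOST, PORT = "irc.libera.chat", 6667

s = socket.create_connection((HOST, PORT))

def send(line: str) -> None:
    s.sendall((line + "\r\n").encode())

send("NICK uncensored_demo")
send("USER uncensored_demo 0 * :demo")
send("JOIN #uncensored-demo")
send("PRIVMSG #uncensored-demo :hi")     # the whole "uncensored message"

# Answer server PINGs so the connection stays alive.
while True:
    for line in s.recv(4096).decode(errors="ignore").splitlines():
        if line.startswith("PING"):
            send("PONG " + line.split(" ", 1)[1])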

Quote
That is alongside second-layer scaling solutions, and as a complementary improvement, because something like LN cannot solve much alone.
Well, if the problem is that on-chain peg-in transactions are not sufficient to bring all people into LN, then there are other solutions. One of them is to put channel-opening transactions into the off-chain world: https://delvingbitcoin.org/t/can-game-theory-secure-scaling/797/1 And another is of course CoinPool, or other similar proposals, to switch from 2-of-2 multisig into N-of-N multisig: https://coinpool.dev/
legendary
Activity: 3472
Merit: 10611
April 19, 2024, 07:37:51 AM
Quote
just imagine the day when everyone has 50TB hdds (yes that day is going to come) and bitcoin is only using 80GB per year. that seems like a serious problem but maybe no one else thinks so.
That's just unrelated to what I brought up and also unrelated to this topic, which is about the Ordinals Attack. Otherwise I don't entirely disagree with what you have in mind.
I have always said that at some point in the near future Bitcoin needs a solid capacity increase where the block size is also increased. That is alongside second-layer scaling solutions, and as a complementary improvement, because something like LN cannot solve much alone.
sr. member
Activity: 1518
Merit: 264
April 19, 2024, 07:29:51 AM
Quote
I would like to use BRC-20 for completely uncensored communication.

What? You gonna bring up the Catholic with 11 children? Raving lunatic.



Quote
that would mean someone can't create a new UTXO unless someone else spends one.
It depends on the underlying scripts. In the past, people mainly used single-key scripts, so a single person owned a single coin. Now, 2-of-2 multisig is quite popular because of the Lightning Network, and in general it seems to be a basic building block for many second layers. However, it doesn't have to stop there: since Taproot, you can make an N-of-N multisig behind a single public key. In that case, if you want to introduce someone else to the network, you could change it into an (N+1)-of-(N+1) multisig, and that wouldn't increase the size of the UTXO set.

You're on another level. I'll excuse myself and find my beer. Cheers.

Mod note: Consecutive posts merged
hero member
Activity: 813
Merit: 1944
April 19, 2024, 04:46:43 AM
Quote
that would mean someone can't create a new UTXO unless someone else spends one.
It depends on the underlying scripts. In the past, people mainly used single-key scripts, so a single person owned a single coin. Now, 2-of-2 multisig is quite popular because of the Lightning Network, and in general it seems to be a basic building block for many second layers. However, it doesn't have to stop there: since Taproot, you can make an N-of-N multisig behind a single public key. In that case, if you want to introduce someone else to the network, you could change it into an (N+1)-of-(N+1) multisig, and that wouldn't increase the size of the UTXO set.
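To illustrate the aggregation idea, here is a toy Python sketch: N public keys sum to a single curve point, so N owners can hide behind one on-chain key. The private keys are made-up numbers, and real protocols such as MuSig2 add per-key coefficients (against rogue-key attacks) that this toy deliberately omits:

Code:
# Toy key aggregation on secp256k1: N public keys sum to a single point,
# so an N-of-N (or (N+1)-of-(N+1)) multisig still looks like one key
# on-chain. Real protocols (MuSig2) add per-key coefficients to prevent
# rogue-key attacks; this sketch deliberately skips that.
P = 2**256 - 2**32 - 977   # field prime of secp256k1
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None                                    # inverse points
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P)   # tangent slope
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P)  # chord slope
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, pt):
    out = None
    while k:
        if k & 1:
            out = point_add(out, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return out

privkeys = [0x1111, 0x2222, 0x3333]        # made-up toy keys
agg = None
for k in privkeys:
    agg = point_add(agg, point_mul(k, G))  # sum of the individual pubkeys
assert agg == point_mul(sum(privkeys), G)  # one key on-chain, N owners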
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
April 19, 2024, 04:23:02 AM
Quote
Currently, the size of the whole UTXO set is not limited by consensus rules, but I guess some people really want to abuse that and force developers to introduce some limits there.
How? You can't really put a limit on UTXOs; that would mean someone can't create a new UTXO unless someone else spends one.
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
April 19, 2024, 02:18:30 AM
Quote
This part is strange: many people want bigger blocks, but they don't want to run their own nodes to handle all of that traffic. Why? If you are the one who wants big blocks, then you should also be the one who runs a node 24/7 and shows everyone: "See? I can handle that without any problems!". I wonder what the reason is for wanting to increase the maximum block size while not being willing to share in the costs of doing so.

It's just the something-for-nothing brigade.  Happy for others to bear the burden, while getting a free ride.  And yet they don't realise that's precisely why blocksize hasn't magically increased just because they're moaning about it.

Pure sense of entitlement.  But it won't get them anywhere.  Tangible and practical factors will always take precedence over the noise of the rabble.  

Those securing the chain will almost certainly make the choice based on the impact it has on them.  The opinions of freeloaders won't be part of that assessment.
hero member
Activity: 813
Merit: 1944
April 19, 2024, 01:18:37 AM
Quote
i'm not interested in downloading 450GB and then having to keep it in sync all the time.
This part is strange: many people want bigger blocks, but they don't want to run their own nodes to handle all of that traffic. Why? If you are the one who wants big blocks, then you should also be the one who runs a node 24/7 and shows everyone: "See? I can handle that without any problems!". I wonder what the reason is for wanting to increase the maximum block size while not being willing to share in the costs of doing so. And the same is true for Ordinals: people want to use an existing network (like Bitcoin) and put their data there, instead of maintaining their own chain and honestly bearing all the costs of storing that additional data on a chain of their own.

Also, if you tried running your own node with increased limits and performed some testing, then you would know that "downloading 450GB" is not the biggest problem. Storing it on your server is a bigger issue (because bandwidth is usually sufficient, but you need to rent additional disks), and verification is an even bigger problem (because then you also need to rent more CPUs). And then you would also know that Initial Blockchain Download can currently take something around a week (it depends on your hardware), and if you increase the block size, you could end up in the situation you can see on some altcoins: you can take some CPU-mineable altcoin and download its chain very quickly, because it takes for example 10 GB, but verification takes something like a month.
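A back-of-envelope comparison shows why verification, not download, dominates. Both throughput figures below are assumptions for illustration, not measurements of any real node:

Code:
# Back-of-envelope IBD estimate. Both throughput figures are assumptions
# for illustration, not measurements of any real node.
chain_gb        = 450    # approximate chain size
bandwidth_mbps  = 100    # download link
verify_mb_per_s = 2      # script checks + UTXO updates, assumed

download_hours = chain_gb * 8 * 1000 / bandwidth_mbps / 3600
verify_hours   = chain_gb * 1000 / verify_mb_per_s / 3600

print(f"download ~{download_hours:.0f} h, verify ~{verify_hours:.0f} h")
# download ~10 h, verify ~62 h: grow the blocks and the verification term
# grows with them, which is exactly the altcoin trap described above.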

Quote
But if you allocate enough RAM for all UTXO, then it's boring.
Currently, the size of the whole UTXO set is not limited by consensus rules, but I guess some people really want to abuse that and force developers to introduce some limits there.
sr. member
Activity: 1190
Merit: 469
April 19, 2024, 12:35:36 AM

Quote
That's a completely different topic.
Regardless of what Bitcoin's block capacity is, whether it is 1 byte or 1 terabyte, Bitcoin should remain a payment system and not cloud storage.

just imagine the day when everyone has 50TB hdds (yes that day is going to come) and bitcoin is only using 80GB per year. that seems like a serious problem but maybe no one else thinks so.



Quote
400MB per year is not the growth rate of Bitcoin's blockchain. A conservative estimate with approx. 1.5MB per block gives you a growth of roughly 2016 blocks per two weeks times 26 (for a year) times 1.5MB which equals 78,624MB per year, thus somewhere in the ballpark of 80GB growth per year at minimum.

I wonder where you got this 400MB figure from...

my mistake.

Quote
And those 240TB hard drives are still vaporware.
apparently there is a working prototype. not good enough for you? but expect to open up that wallet when they come to market, unless you're willing to wait 10 years for prices to come down...

Quote
Try an Initial Blockchain Download and node sync on a mechanical hard drive, it's real fun... not so much for the drive. You can keep the irony if you detect it.
no thanks. i'm not interested in downloading 450GB and then having to keep it in sync all the time. you're welcome.  Angry


legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
April 18, 2024, 06:17:32 AM
Quote
Try an Initial Blockchain Download and node sync on a mechanical hard drive, it's real fun... not so much for the drive. You can keep the irony if you detect it.

It's generally true. But if you allocate enough RAM for the whole UTXO set, then it's boring.
legendary
Activity: 2898
Merit: 1823
April 18, 2024, 03:36:31 AM
Quote

But from the viewpoint of the Ordinals/soon Runes users, if they have paid the fees and their transactions are following the consensus rules, are they truly "exploiting" the system?


Yes. In the same way, you could try to copy-paste the whole chain from some altcoin into Bitcoin by using "a coin in a coin" scheme. Or copy-paste all posts from bitcointalk and put them into Bitcoin transactions. Or even abandon GitHub, and copy-paste all commits behind "OP_SHA1 OP_EQUALVERIFY OP_CHECKSIG". The purpose of Bitcoin is not to be "the global chain for every use case".


Ser, I'm confused because none of what you have just said makes sense. What matters is the protocol as it is right NOW. Bitcoin is still a decentralized, permissionless, censorship-resistant protocol, isn't it? If the Core developers propose a "fix" for the "bug", it would need to go through the proper process, unless UASF.


Quote
Piling every proof-of-work quorum system in the world into one dataset doesn't scale.


And then he also continues:


Quote
Bitcoin and BitDNS can be used separately.  Users shouldn't have to download all of both to use one or the other.  BitDNS users may not want to download everything the next several unrelated networks decide to pile in either.


So, if other use cases used separate chains for that data, it could be fine. They could build some honest, valuable protocol out of that. But the problem is that they just decided not to, and abuse the Bitcoin network instead.

And yet another sentence, from the same post:


Quote
The networks need to have separate fates.  BitDNS users might be completely liberal about adding any large data features since relatively few domain registrars are needed, while Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.


Which means that if you put all of those additional features on separate chains, then Bitcoin can still stay "easy for lots of users and small devices", and just take the role of "a notarization chain", providing Proof of Work to timestamp all other chains. But if you put "everything on Bitcoin" instead, then guess what: the current limits will take down existing payments. Because it is always a choice: confirm this payment, or this Ordinal. Which means that if you allow using and abusing Bitcoin for everything other than the payment system, then it may stop being useful for payments, and lose its utility.


I'm not saying Satoshi is wrong, and currently there are solutions being built or already built, but it's irrelevant, because if users want to inscribe/etch digital artifacts into the blockchain, and they are willing to pay the fees and follow the consensus rules, then who could stop them?
hero member
Activity: 714
Merit: 1010
Crypto Swap Exchange
April 17, 2024, 03:06:17 PM
Quote
How would bitcoin continue to justify itself with only 400mb per year when people have 240TB hard drives? I just don't think it could. Adapt or die.

400MB per year is not the growth rate of Bitcoin's blockchain. A conservative estimate with approx. 1.5MB per block gives you a growth of roughly 2016 blocks per two weeks times 26 (for a year) times 1.5MB which equals 78,624MB per year, thus somewhere in the ballpark of 80GB growth per year at minimum.
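For anyone who wants to check the arithmetic, here is the same estimate in a few lines; only the 1.5MB average block size is an assumption:

Code:
# The same estimate, line by line. Only the 1.5MB average is an assumption.
blocks_per_period = 2016   # blocks per difficulty period (two weeks)
periods_per_year  = 26
mb_per_block      = 1.5    # conservative average

growth_mb = blocks_per_period * periods_per_year * mb_per_block
print(growth_mb)           # 78624.0 MB, i.e. roughly 80 GB per year

# For scale: a (still hypothetical) 240 TB drive would hold about
# 240_000 / 80 = 3000 years of chain at this rate.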

I wonder where you got this 400MB figure from...

And those 240TB hard drives are still vaporware. Try an Initial Blockchain Download and node sync on a mechanical hard drive, it's real fun... not so much for the drive. You can keep the irony if you detect it.


Quote
But from the viewpoint of the Ordinals/soon Runes users, if they have paid the fees and their transactions are following the consensus rules, are they truly "exploiting" the system? It's not their fault it's there. They're merely using it because the system allows them to. Plus if it wasn't Casey Rodarmor, it would be another developer who would use this "hack/bug" and make something from it.

I'm not sure if I get it right, so excuse me if I confuse consensus rules with something else. My standpoint is: it's a flaw in the consensus rules to allow arbitrarily sized witness data for an input in a transaction. There's likely no real need for this. It's an oversight which is now exploited. Can it be fixed easily without weird or problematic side effects? I don't know, that's a bit beyond my technical Bitcoin expertise.

I don't care who exploits it for whatever reason. The inscription shitheads and shit-token-on-Bitcoin-blockchain morons exploit it and abuse the blockchain for storage of bullshit data. They don't care about Bitcoin, period!

Sure, I'm exaggerating that it's bullshit of whatever flavor in my personal opinion. I don't expect anybody to agree on this with me. I'm free to have my own opinion and defend it.


Quote
I believe that they should be careful. It might start a hash war again.

Did you rather mean a blocksize war? I believe yes, because I can't make out anything related to hash power in this topic.


Quote
I don't like it either, it's making on-chain transactions a little more expensive to use for plebs like us. But what can we do? Literally ANYONE can use Bitcoin the way it allows us to if we pay the fees and follow the consensus rules.

We can debate the magnitude of "more expensive" fees, but that's not what I dislike in particular. More adoption and more transactions would also fill up the mempool and make it more expensive for you and me. That's the fee market, and I have no valid point to complain about it if more adoption and "normal" transaction volume were the reason for mempool clogs.

OP_RETURN data costs 4WU per byte. I wouldn't be happy if people started to use it excessively, but at least they'd have to pay a fair fee for it. And it's limited in size for a reason. Someone who wants to pay for it could use as many OP_RETURN outputs in their transaction(s) as they feel they need. Would I like it? Certainly not. But I'm pretty sure it wouldn't be exploited to the extent we see with the inscription shit.

This superfluous data burdens every archival node, and because of its arbitrary size within the limits of the max. blocksize, any exploiter can cram in data that could become a problem with the law. OP_RETURN is somewhat similar, but you have the 80-byte limit; even if used in multiples, you get a segmentation of problematic data like pictures or movies that collide with the law.
I hope that such forced segmentation wouldn't allow anyone to say there's a whole problematic picture, intact in its entirety, in the blockchain. (Yes, I'm aware that such things were attempted in the past, or maybe also in the present. Stupid data abusers...)

Another issue is the bloating of the UTXO set by idiots who like to send dust or more to the genesis block's coinbase, or rather to the P2PKH address derived from the P2PK coinbase public key, or to other "Patoshi" blocks. But that's another, entirely different topic, though it shows to some extent that many people simply don't care to look at the whole picture and the consequences for all of us.

Sorry for the rant but such sort of asocial ego-centric narrow-bubbled beings piss me off from time to time.
hero member
Activity: 813
Merit: 1944
April 17, 2024, 04:19:27 AM
Quote
Piling every proof-of-work quorum system in the world into one dataset doesn't scale.
And then he also continues:

Quote
Bitcoin and BitDNS can be used separately.  Users shouldn't have to download all of both to use one or the other.  BitDNS users may not want to download everything the next several unrelated networks decide to pile in either.
So, if other use cases used separate chains for that data, it could be fine. They could build some honest, valuable protocol out of that. But the problem is that they just decided not to, and abuse the Bitcoin network instead.

And yet another sentence, from the same post:

Quote
The networks need to have separate fates.  BitDNS users might be completely liberal about adding any large data features since relatively few domain registrars are needed, while Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.
Which means that if you put all of those additional features on separate chains, then Bitcoin can still stay "easy for lots of users and small devices", and just take the role of "a notarization chain", providing Proof of Work to timestamp all other chains. But if you put "everything on Bitcoin" instead, then guess what: the current limits will take down existing payments. Because it is always a choice: confirm this payment, or this Ordinal. Which means that if you allow using and abusing Bitcoin for everything other than the payment system, then it may stop being useful for payments, and lose its utility.
legendary
Activity: 2898
Merit: 1823
April 17, 2024, 03:40:52 AM

Quote
There's the clear inequity under current rules that one byte of data in an OP_RETURN output is worth 4WU and one byte in witness data as exploited by inscriptions is only 1WU. If you want data on the blockchain to be treated equally, current rules simply don't do it and I consider this a fault/bug and/or ongoing exploit.


But from the viewpoint of the Ordinals/soon Runes users, if they have paid the fees and their transactions are following the consensus rules, are they truly "exploiting" the system? It's not their fault it's there. They're merely using it because the system allows them to. Plus if it wasn't Casey Rodarmor, it would be another developer who would use this "hack/bug" and make something from it.

Quote

I don't follow the Core devs' discussions much, but frankly, my perception is a disturbing unwillingness to tackle this weight-unit inequity.


I believe that they should be careful. It might start a hash war again.

Quote

I don't need to like what's "inscribed" to the blockchain, it's not my business to judge or censor. Bitcoin is not made to judge or censor.


I don't like it either, it's making on-chain transactions a little more expensive to use for plebs like us. But what can we do? Literally ANYONE can use Bitcoin the way it allows us to if we pay the fees and follow the consensus rules.
legendary
Activity: 3472
Merit: 10611
April 16, 2024, 11:14:56 PM
Quote
Keep in mind that the bottom line is that Bitcoin is not cloud storage, it is a payment system.
Quote
well, all i can say about that is bitcoin is going to be LEFT BEHIND if it clings to small block sizes when consumer hard drives end up being 250 TB in size.
That's a completely different topic.
Regardless of what Bitcoin's block capacity is, whether it is 1 byte or 1 terabyte, Bitcoin should remain a payment system and not cloud storage.
sr. member
Activity: 1190
Merit: 469
April 16, 2024, 09:11:48 PM

Quote
Keep in mind that the bottom line is that Bitcoin is not cloud storage, it is a payment system.

well, all i can say about that is bitcoin is going to be LEFT BEHIND if it clings to small block sizes when consumer hard drives end up being 250 TB in size.

Hard drives aren't dead yet as Seagate demos new multi-layer 3D magnetic tech with potential for 240TB capacities
https://www.pcgamer.com/hardware/graphics-cards/hard-drives-arent-dead-yet-as-seagate-demos-new-multi-layer-3d-magnetic-tech-with-potential-for-240tb-capacities/

How would bitcoin continue to justify itself with only 400mb per year when people have 240TB hard drives? I just don't think it could. Adapt or die.
legendary
Activity: 3472
Merit: 10611
April 15, 2024, 11:49:04 PM
Quote
Being part of the consensus rules doesn't mean something is not a bug or exploitable. Read my two examples again; they were also part of the consensus rules, and yet they were bugs in the protocol that could have been exploited.

Quote
Your premise appears to be that any kind of data storage, unrelated to transactional data, isn't a valid use of Bitcoin.  But I'm not convinced developers see things in such black and white terms.  I've certainly seen some developer discussion relating to the standardisation of data storage, but I don't see any particular push to restrict it completely.  Perhaps the patches you're referring to weren't the correct format in which devs were looking to support data storage.

Is it possible you might be working under the assumption that devs want to prevent data storage because they made those particular changes?  If so, I think you might be misinterpreting what they were looking to achieve.
My examples were about "exploits" and fixing them, more than about data storage. But I agree that in Bitcoin we are slightly "flexible" when it comes to data storage (though not that much). For example, we already have OP_RETURN, which is the standard way to store data, but it is limited in a way that is acceptable.

Keep in mind that the bottom line is that Bitcoin is not cloud storage, it is a payment system.
hero member
Activity: 813
Merit: 1944
April 14, 2024, 03:48:38 PM
Quote
There's the clear inequity under current rules that one byte of data in an OP_RETURN output is worth 4WU and one byte in witness data as exploited by inscriptions is only 1WU.
Guess what: on the mailing list, there was a topic about exactly this issue, and it was continued on Delving Bitcoin: https://delvingbitcoin.org/t/bug-spammers-get-bitcoin-blockspace-at-discounted-price-lets-fix-it/327

But I think it won't be fixed, because of this line of thinking:
Quote
The byte size of transactions in the P2P protocol is an artifact of the encoding scheme used. It does matter, because it directly correlates with bandwidth and disk usage for non-pruned nodes, but if we really cared about the impact these had we could easily adopt more efficient encodings for transactions on the network or on disk that encodes some parts of transactions more compactly. If we would do that, the consensus rules (ignoring witness discount) would still count transaction sizes by their old encoding, which would then not correspond to anything physical at all. Would you then still say 1 byte = 1 byte?
And in general, I agree with that statement, but I also think that in the current model not all bytes are counted properly. For example, there is an incentive to send coins to P2WPKH, but spend by key from P2TR. However, if you count the total on-chain footprint of P2WPKH and compare it with P2TR key-path spending, then P2WPKH is cheaper to send to, but it takes more on-chain bytes (and the cost is just moved to the recipient, so it is cheaper for the sender).
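To put rough numbers on that, here is a small sketch; the byte counts are assumed typical sizes (a 72-byte ECDSA signature, a 64-byte Schnorr signature), not exact for every transaction:

Code:
# Rough byte counts for one output plus one later spend of it; sizes are
# assumed typical values, not exact for every transaction.
WU_BASE, WU_WITNESS = 4, 1   # weight units per base byte / witness byte

outputs = {
    "P2WPKH": {"output": 31, "witness": 108},  # ECDSA sig + pubkey on spend
    "P2TR":   {"output": 43, "witness": 66},   # 64-byte Schnorr sig, key path
}

for name, sz in outputs.items():
    sender_wu   = sz["output"] * WU_BASE       # paid when sending *to* it
    spender_wu  = sz["witness"] * WU_WITNESS   # paid when spending *from* it
    total_bytes = sz["output"] + sz["witness"]
    print(f"{name}: sender {sender_wu} WU, spender {spender_wu} WU, "
          f"total {total_bytes} raw bytes")

# P2WPKH: sender 124 WU, spender 108 WU, total 139 raw bytes
# P2TR:   sender 172 WU, spender  66 WU, total 109 raw bytes
# P2WPKH is cheaper for the sender, yet leaves a bigger on-chain footprint.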

Also, if we optimized things and represented them on-chain differently, for example by saving raw public keys and using a single byte to indicate "this should use old DER encoding" or "this pubkey is wrapped in P2SH", then the whole chain could probably be much smaller than it currently is. However, as compression is a no-fork, it can always be applied, so the only problem is standardizing the data, so that you can rely on other nodes getting exactly the same results when compressing data with the same algorithms. It is all about making a "ZIP format for blockchain data": it is easier to send a ZIP file if you can unzip it in the same way on both computers. The same is true for historical blockchain data (and of course, some custom algorithm would be more effective than just zipping it, because it could take into account that you are compressing, for example, secp256k1 points, so it could efficiently use x-only pubkeys, without having to build a weird auto-generated "dictionary" for that).
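As a toy illustration of the "no-fork" property, here zlib stands in for whatever standardized codec would actually be chosen, and the payload is invented:

Code:
import zlib

# Pretend payload: 100 identical 33-byte compressed pubkeys. A real codec
# would exploit blockchain structure far better; zlib is just a stand-in.
blob = bytes.fromhex("02" + "ab" * 32) * 100

compressed = zlib.compress(blob, level=9)
assert zlib.decompress(compressed) == blob   # bytes round-trip exactly
print(len(blob), "->", len(compressed))      # 3300 -> far fewer bytes

# Because consensus still validates the decoded bytes, any node can adopt
# such an encoding locally or on the wire without a fork, as long as both
# sides agree on the algorithm.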

Quote
I've certainly seen some developer discussion relating to the standardisation of data storage, but I don't see any particular push to restrict it completely.
Well, if you want to simplify the current model, then standardisation is the first step in the right direction. And then, if you have for example some widely deployed model where you have a huge table of all public keys that have appeared on-chain, then you can generalize it, switch to a different model (utreexo), or view the scripting language from a different perspective: https://delvingbitcoin.org/t/btc-lisp-as-an-alternative-to-script/682

So, I guess it could be restricted if the network is abused too much, but people are currently focused on things which need to be done no matter whether you want to restrict it or not. Simply put, "the status quo" is the default, so if you work on a change that does not require touching consensus, then it can be easily merged. But if you work on some serious soft fork instead, then you may end up with working code that simply won't be merged. And of course, writing code which is not consensus-critical is still needed, and is often required as a dependency of your soft fork (you need standardized data compression to compress and decompress the chain reliably, and to "undo your pruning" if needed).
hero member
Activity: 714
Merit: 1010
Crypto Swap Exchange
April 14, 2024, 02:20:16 PM
There's the clear inequity under current rules that one byte of data in an OP_RETURN output is worth 4WU and one byte in witness data as exploited by inscriptions is only 1WU. If you want data on the blockchain to be treated equally, current rules simply don't do it and I consider this a fault/bug and/or ongoing exploit.
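To put rough numbers on that inequity, a sketch; the payload size and fee rate are invented, and this ignores transaction overhead and the fact that standardness caps OP_RETURN at 80 bytes per output (so multiple outputs would be needed):

Code:
# Same payload, two embedding routes, under current weight accounting.
# Payload size and fee rate are invented; this ignores tx overhead and the
# 80-byte OP_RETURN standardness cap (multiple outputs would be needed).
payload_bytes = 10_000
fee_sat_vb    = 20                      # 1 vbyte == 4 WU

op_return_vb   = payload_bytes * 4 / 4  # output data: 4 WU per byte
inscription_vb = payload_bytes * 1 / 4  # witness data: 1 WU per byte

print("OP_RETURN:  ", op_return_vb * fee_sat_vb, "sat")    # 200000.0 sat
print("inscription:", inscription_vb * fee_sat_vb, "sat")  #  50000.0 sat
# The witness route is 4x cheaper per byte -- exactly the discount at issue.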

I don't follow the Core devs' discussions much, but frankly, my perception is a disturbing unwillingness to tackle this weight-unit inequity.

I don't need to like what's "inscribed" to the blockchain, it's not my business to judge or censor. Bitcoin is not made to judge or censor.
legendary
Activity: 2898
Merit: 1823
April 14, 2024, 10:53:51 AM
Quote
You can call it an "exploit", or a "bug", or something else, but from the network's viewpoint those transactions followed the consensus rules, paid the fees, and miners are also incentivized to include them in their blocks because of that. You can have your opinion about what it is, but that's merely what it is. An Opinion. But if removing the "exploit/bug" gets community consensus through a soft/hard fork, then OK. Fork accepted.


Quote
Being part of the consensus rules doesn't mean something is not a bug or exploitable. Read my two examples again; they were also part of the consensus rules, and yet they were bugs in the protocol that could have been exploited.


But from the viewpoint of the network, if a transaction paid the fees and followed the consensus rules, then it is technically valid and will be included in a block. You can have an opinion on what to call it. It's an "exploit" according to your opinion? OK, but for dick pic/fart sound collectors, their usage of the blockchain is merely something you don't approve of.