
Topic: NFTs in the Bitcoin blockchain - Ordinal Theory - page 5. (Read 9532 times)

copper member
Activity: 909
Merit: 2301
Quote
In practice, there needs to be a network upgrade that lets such a process happen non-interactively.
Yes, it requires a soft-fork or no-fork upgrade. Some things can be deployed as a no-fork: for example, if you have a one-input-one-output transaction, it should be signed with SIGHASH_SINGLE|SIGHASH_ANYONECANPAY. And then there should exist some SIGHASH_PREVOUT_SOMETHING, to allow chaining more than one transaction without affecting the txid. So yes, in the current state it is half-baked, but I hope we will get there in the future. And I think that if it is done properly, people will upgrade quickly, because it will make their transactions cheaper.
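
To make those flags concrete, here is a minimal Python sketch (not Bitcoin Core code): the four flag values are the real Bitcoin sighash constants, while the helper function and the sample transaction are simplified illustrations of why a one-input-one-output transaction signed this way can later be merged with other inputs and outputs without invalidating its signature.
Code:
SIGHASH_ALL          = 0x01
SIGHASH_NONE         = 0x02
SIGHASH_SINGLE       = 0x03
SIGHASH_ANYONECANPAY = 0x80

flags = SIGHASH_SINGLE | SIGHASH_ANYONECANPAY
assert flags == 0x83

def committed_parts(tx_inputs, tx_outputs, input_index, flags):
    """Roughly which parts of a transaction a signature with these flags commits to."""
    ins = [tx_inputs[input_index]] if flags & SIGHASH_ANYONECANPAY else list(tx_inputs)
    base = flags & 0x1f
    if base == SIGHASH_NONE:
        outs = []
    elif base == SIGHASH_SINGLE:
        outs = [tx_outputs[input_index]]   # only the output at the same index as the input
    else:
        outs = list(tx_outputs)            # SIGHASH_ALL commits to every output
    return ins, outs

# With one input and one output, the signature pins "this coin pays that output" and nothing
# else, which is what lets a coordinator merge many such pairs into one bigger transaction.
print(committed_parts(["alice_utxo"], ["alice_payment"], 0, flags))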

Quote
And I doubt it can merge transactions with custom spending conditions.
This is a feature, not a bug. You want to compress transactions based on public keys and signatures, while leaving, for example, Ordinals uncompressed. In this way, regular users could have cheap transactions, while Ordinals users would still pay a lot of satoshis for pushing their data on-chain. Unless they compress it in some way, but I doubt it. Compressing regular public keys into an aggregated signature is much easier than compressing arbitrary data pushes.

In general, Ordinals are incompatible with many protocols, for example CoinJoin or cut-through. But that is their problem, not mine. If users want to use protocols that were not tested enough, it is their choice, and they should pay for their spam, while regular users should have a way to make their transactions cheaper.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
Quote
Let's be real, 1GB blocks can't be compressed down to 1 MB and that's the problem.
You cannot do that in the general case, for all kinds of transactions. But for specific cases? There is no problem doing it in practice. Of course, compressing 1 GB of Ordinals could be hard if you want to preserve that data and put everything on-chain. But compressing 1 GB of regular transactions, based on public keys alone? This is definitely possible. Why?

1. Public keys can be combined. Schnorr signatures show that you can have R = R1 + R2. And in general, if you can express something as an R-value, then you can use a sum to combine it. Then, instead of having 100 different signatures, you can have a single signature that is equivalent to a 100-of-100 multisig on the given data.
2. Transactions in the middle could be batched, skipped, or otherwise summed up, without landing in a block. If you use the Lightning Network, you have one transaction to open a channel, one transaction to close it, and in between you can have 1 GB, or even 1 TB, of transaction traffic that will never be visible on-chain after the channel is closed.
3. Transactions flying through mempools can be batched by using full-RBF. Which means, if you have Alice->Bob->Charlie->...->Zack transactions in mempools, and they all take 1 GB, then guess what: if you put an Alice->Zack transaction in the final block, it could take 1 MB instead, as long as you properly pick the final transaction after batching and protect users from double-spends. Which means the whole state of the mempool could be committed in every block, even through commitments, long before it is batched and confirmed on-chain. That would protect transactions from double-spending, if nodes handled it properly and miners respected those rules.

But it's only in theory. In practice, there needs to be a network upgrade that lets such a process happen non-interactively. And I doubt it can merge transactions with custom spending conditions.

--snip--
Bootstrapping would significantly reduce that time from days/weeks to hours.

Do you mean using a bootstrap.dat file? At least for Bitcoin Core, it's slower than letting the software just sync normally.
hero member
Activity: 1114
Merit: 588
Initial Blockchain Download takes more and more time. Today, fully synchronizing the chain can take a week. Before, it took a few days, and in the old times you needed just a few hours. And you have to download everything, even if you use pruning: downloading everything once is still required, and it can take a lot of time. And that is only getting worse.

Which means it doesn't matter that you have to download 500 GB; validation time is what matters most. If you can download 1 GB in 10 seconds, that is 5,000 seconds to download 500 GB. But downloading is not the problem, validation is. One week of validation for 500 GB means around 1 MB of data validated every second, and that is too slow. It doesn't matter that after a few hours you will have the full chain, if you then spend many days validating it. And if you increase the block size to huge values, then guess what: if your validation speed is around 1 MB per second, you can validate about 600 MB per 10 minutes. Which means, if you get 1 GB every 10 minutes, you will never catch up, even with everything fully downloaded on your local disk.

Bootstrapping would significantly reduce that time from days/weeks to hours.

legendary
Activity: 1512
Merit: 7340
Farewell, Leo
This is never the case, because users have different hardware.
There is an endless list of reasons why this is never the case, but theoretically, if everyone owned the same part of the network in terms of both computational and financial power, then that would be ideal decentralization.

If that were the case, then BCH would have been "the true Bitcoin" for a short moment, when it had the higher hashrate.
Never used the term "true", and also never claimed that hash rate per se is directly proportional to decentralization. It's the share of the hash rate that matters. The hash rate itself has to do with security, not decentralization.

Even if you have 51%, then still, if you have Proof of Work instead of Proof of Stake, you cannot get those coins
To clarify my position: if you have 51% of the hash rate and are malicious, then these coins are worthless. There is nothing else to be said. You don't need to reorg and access the coins. Simply reversing a confirmed transaction proves the concept has failed.
copper member
Activity: 909
Merit: 2301
Quote
Let's be real, 1GB blocks can't be compressed down to 1 MB and that's the problem.
You cannot do that in the general case, for all kinds of transactions. But for specific cases? There is no problem doing it in practice. Of course, compressing 1 GB of Ordinals could be hard if you want to preserve that data and put everything on-chain. But compressing 1 GB of regular transactions, based on public keys alone? This is definitely possible. Why?

1. Public keys can be combined. Schnorr signatures show that you can have R = R1 + R2. And in general, if you can express something as an R-value, then you can use a sum to combine it. Then, instead of having 100 different signatures, you can have a single signature that is equivalent to a 100-of-100 multisig on the given data (a toy sketch of this algebra follows the list).
2. Transactions in the middle could be batched, skipped, or otherwise summed up, without landing in a block. If you use the Lightning Network, you have one transaction to open a channel, one transaction to close it, and in between you can have 1 GB, or even 1 TB, of transaction traffic that will never be visible on-chain after the channel is closed.
3. Transactions flying through mempools can be batched by using full-RBF. Which means, if you have Alice->Bob->Charlie->...->Zack transactions in mempools, and they all take 1 GB, then guess what: if you put an Alice->Zack transaction in the final block, it could take 1 MB instead, as long as you properly pick the final transaction after batching and protect users from double-spends. Which means the whole state of the mempool could be committed in every block, even through commitments, long before it is batched and confirmed on-chain. That would protect transactions from double-spending, if nodes handled it properly and miners respected those rules.
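
Here is a toy Python sketch of point 1, using a tiny multiplicative group instead of secp256k1 and skipping the rogue-key protections that a real scheme such as MuSig2 adds; it only illustrates the algebra described above, where nonces and keys are summed and a single signature verifies for the whole group.
Code:
import hashlib

# Toy group: the subgroup of order q = 11 generated by g = 4 modulo p = 23.
p, q, g = 23, 11, 4

def H(*parts):
    """Hash the transcript down to a challenge in [0, q)."""
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keypair(secret):
    return secret % q, pow(g, secret, p)

msg = "spend these coins"
(x1, P1), (x2, P2) = keypair(7), keypair(5)   # two signers' keys
(r1, R1), (r2, R2) = keypair(3), keypair(9)   # their per-signature nonces

# "R = R1 + R2" from the post: summing exponents shows up as multiplying group elements here.
R_agg = (R1 * R2) % p
P_agg = (P1 * P2) % p

e = H(R_agg, P_agg, msg)                 # one shared challenge for both signers
s = (r1 + e * x1 + r2 + e * x2) % q      # partial signatures simply add up

# A single signature (R_agg, s) verifies against the combined key, i.e. it acts like a 2-of-2.
assert pow(g, s, p) == (R_agg * pow(P_agg, e, p)) % p
print("aggregate signature verifies:", (R_agg, s))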

Also, I think people should look at the "effective UTXO set diff", instead of just looking at the size of transactions in mempools. Which means, people should not see "oh no, our mempool contains 1 GB of transactions waiting for confirmation". They should see instead: "our mempool contains N inputs and M outputs to be confirmed". Everything in between could be batched and stripped, and in many cases all you need is the list of confirmed inputs being spent and the list of outputs being created. If you have Alice->Bob->Charlie->...->Zack transactions, then you don't have to include the Bob->Charlie transaction in a new block, if the Alice->Zack transaction is included. All that matters is whether the last user gets the proper amount of coins, and whether any double-spending is going on in the middle. That's all that matters. Five years later, nobody cares whether the Bob->Charlie transaction was there or not.
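
A small Python sketch of that "N inputs and M outputs" view, with made-up outpoint labels: it nets a chain of unconfirmed transactions down to the already-confirmed inputs they consume and the final outputs they create, which is the only part a block would really need (double-spend checks are left out for brevity).
Code:
# Hypothetical mempool transactions as (spent_outpoints, created_outpoints) pairs.
mempool = [
    ({"alice:0"},   {"bob:0"}),      # Alice -> Bob
    ({"bob:0"},     {"charlie:0"}),  # Bob -> Charlie
    ({"charlie:0"}, {"zack:0"}),     # Charlie -> Zack
]

def effective_utxo_diff(txs):
    spent, created = set(), set()
    for ins, outs in txs:
        for outpoint in ins:
            if outpoint in created:
                created.remove(outpoint)   # intermediate output consumed inside the batch
            else:
                spent.add(outpoint)        # must already be confirmed on-chain
        created |= outs
    return spent, created

spent, created = effective_utxo_diff(mempool)
print("confirmed inputs consumed:", spent)    # {'alice:0'}
print("new outputs created:", created)        # {'zack:0'}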

Quote
Also, since the number of Bitcoin users and Bitcoin transactions keeps increasing, and computer hardware is getting better and cheaper, there is absolutely no argument that an increase in block size is either bad or unaffordable.
Initial Blockchain Download takes more and more time. Today, fully synchronizing the chain can take a week. Before, it took a few days, and in the old times you needed just a few hours. And you have to download everything, even if you use pruning: downloading everything once is still required, and it can take a lot of time. And that is only getting worse.

Which means it doesn't matter that you have to download 500 GB; validation time is what matters most. If you can download 1 GB in 10 seconds, that is 5,000 seconds to download 500 GB. But downloading is not the problem, validation is. One week of validation for 500 GB means around 1 MB of data validated every second, and that is too slow. It doesn't matter that after a few hours you will have the full chain, if you then spend many days validating it. And if you increase the block size to huge values, then guess what: if your validation speed is around 1 MB per second, you can validate about 600 MB per 10 minutes. Which means, if you get 1 GB every 10 minutes, you will never catch up, even with everything fully downloaded on your local disk.
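
The same arithmetic written out, with the round numbers assumed in the paragraph above (1 GB per 10 seconds of download, 1 MB per second of validation):
Code:
chain_size_gb = 500
download_mb_s = 100.0   # roughly 1 GB every 10 seconds
validate_mb_s = 1.0     # roughly 500 GB per week of validation

download_s = chain_size_gb * 1000 / download_mb_s
validate_s = chain_size_gb * 1000 / validate_mb_s
print(f"download: {download_s:,.0f} s (~{download_s / 3600:.1f} hours)")
print(f"validate: {validate_s:,.0f} s (~{validate_s / 86400:.1f} days)")

# You only ever catch up if a block interval brings less data than you can validate in it.
block_mb, interval_s = 1000, 600    # hypothetical 1 GB blocks every 10 minutes
print("can a 1 MB/s validator keep up with 1 GB blocks?", validate_mb_s * interval_s >= block_mb)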

Quote
If anyone thinks that high fees are okay and people will pay for it, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.
Two things:
1. On-chain fees will eventually be high, because when the block subsidy drops to zero, fees will be the only reward.
2. It could still be cheap for users, if their low-fee transactions are batched into high-fee transactions and then stored on-chain. Which means, if 500 satoshis as a fee is too expensive, imagine 500 users paying one satoshi each and having their transactions batched into a single on-chain transaction.

Quote
A network of 10,000 independently running nodes is fully decentralized if each node has an equal Proof-of-Work vote (i.e., 1 CPU = 1 vote).
This is never the case, because users have different hardware. And their wealth is never equal either; there are rich and poor users. Which means, trying to reach a situation where every user has the same CPU power is the same as trying to reach a situation where every user owns the same amount of satoshis. That will simply never happen.

And even if you have "1 CPU = 1 vote", Alice could still own 5 CPUs and Bob only 3 CPUs, and then their voting power will never be equal.

Quote
So, there appears to be an extra parameter; the subset of that group which produces the votes.
If that were the case, then BCH would have been "the true Bitcoin" for a short moment, when it had the higher hashrate. But still, having enough hashrate is not everything, because you also have to produce blocks in the proper format, accepted by other full nodes.

But surprisingly, Merged Mining could solve that issue in case of such an attack. If we always traced the strongest chain of SHA-256 headers and distributed coinbase rewards accordingly, then, in case of an attack, a network with 10% of the global SHA-256 hashrate would see its miners receive only 10% of the coinbase amount, with the rest burned or timelocked into the future. Then the attackers would not receive more coins on the real network supported by users, but the attack would be noticed, and the coinbase amount would react to it properly.
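
A rough Python sketch of that reward rule, with invented numbers, since nothing like this exists in any deployed consensus code: the spendable coinbase is scaled by the network's share of the global SHA-256 hashrate, and the remainder would be burned or timelocked.
Code:
def split_coinbase(full_subsidy_sats, local_hashrate, global_sha256_hashrate):
    """Pay out only the share of the coinbase that matches this chain's share of SHA-256 work."""
    local = min(local_hashrate, global_sha256_hashrate)
    spendable = full_subsidy_sats * local // global_sha256_hashrate
    withheld = full_subsidy_sats - spendable    # to be burned or timelocked into the future
    return spendable, withheld

# A chain backed by 10% of the global SHA-256 hashrate only pays out 10% of the subsidy
# (hashrates in arbitrary but consistent units, e.g. EH/s).
print(split_coinbase(625_000_000, local_hashrate=60, global_sha256_hashrate=600))  # (62500000, 562500000)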

Quote
If we compare a network wherein a person owns 90% of the total coins in circulation from the beginning versus a network wherein a person releases the cryptocurrency without exploiting the financial advantage, we can sense there is an orders of magnitude difference in centralization.
See? Hashrate is not everything. Even if you have 51%, then still, if you have Proof of Work instead of Proof of Stake, you cannot get those coins, unless you trigger a huge chain reorganization. And in that case:
1. It will be noticed by every full node. Some pruned nodes will even require re-downloading the chain, and that would bring a lot of attention and alert a lot of people.
2. It will require a huge amount of Proof of Work that could instead be spent producing new coins on top of the chain.

Quote
I would not be so quick to call ordinals the issue.
They are, because you cannot compress them that easily. Public-key-based transactions can be joined. Data-based transactions require revealing that data, which is much harder to compress, because you cannot "just add keys and produce a Schnorr signature out of that".

Quote
and LN is not the best answer
Of course, sidechains could potentially be better, if they were decentralized and had a peg as strong as LN's. But still, some sidechain proposals were rejected, which means new ones should be made, maybe even no-fork based, if soft-fork ones are not accepted.

Quote
Basically attacking ordinals is like trying to prevent murder by restricting guns. People will use other weapons to do it.
Of course. Those "other weapons" could mean "UTXO flood", and for that reason, you don't have any code censoring Ordinals in the official Bitcoin Core release.
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
Quote
In short, if you think that the 1 MB limit is sensible in our times, then I don't know what to say.
It depends on whether you add compression into your equations or not. Pure 1 MB, without any compression, is not enough. The 4 MB witness we have today, without any compression, is also not enough, because you can still see a congested mempool, and that situation is far from perfect. However, if you can imagine 1 GB blocks that could be compressed down to 1 MB, would you agree to such a thing? You didn't expect that kind of question from me, did you?
Let's be real, 1GB blocks can't be compressed down to 1 MB and that's the problem. Also, since the number of Bitcoin users and Bitcoin transactions keeps increasing, and computer hardware is getting better and cheaper, there is absolutely no argument that an increase in block size is either bad or unaffordable. It's simply a demand and supply process. When demand is high and supply remains the same, prices increase. In our case, demand is getting very high and supply remains as low as it was when only hundreds of people were using Bitcoin.

But to be frank, in our case, the main problem is not block size (it's still a problem though) but Bitcoin Ordinals. If this problem continues to exist long-term, then something should really be done to give people some relief. If anyone thinks that high fees are okay and people will pay for it, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.


I would not be so quick to call ordinals the issue. 

Ordinals are simply what is being used to make the fees higher this time.

Until you make btc as easy to scale as scrypt there will be an incentive to jack the fees on the btc chain.

and LN is not the best answer.

read how fees were jacked in 2017

https://bitcointalksearch.org/topic/why-all-miners-need-to-mine-on-a-pool-that-pays-them-the-tx-fees-2634505


and see that the largest pool right now is foundry, which does not allow you to join unless you are at 20ph.

I can see them prepping to jack fees without using ordinals.

they have a relationship with bitmain and with riot.

the combined size of the two main pools, foundry and ant pool, is over 40%.

they can simply repeat what I describe in my thread.

making sure big players are on foundry and moving the high fees to them.

make sure smaller players are on ant pool and use that pool to flood the mempool.

No need to have any ordinal to do it.


Basically attacking ordinals is like trying to prevent murder by restricting guns. People will use other weapons to do it.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Of course, because in my definition I mentioned the number of independent nodes. Which means 10,000 nodes won't help if they are owned by a single entity.
Your definition is close to what I consider to be decentralized, but I feel like throwing my 2 cents in. A network of 10,000 independently running nodes is fully decentralized if each node has an equal Proof-of-Work vote (i.e., 1 CPU = 1 vote). Respectively, a network where 2 out of the 10,000 running nodes are the only Proof-of-Work voters is, if not close to centralized, definitely not fully decentralized. So, there appears to be an extra parameter; the subset of that group which produces the votes.

For the sake of simplicity, I'm ignoring the centralization involved in minting money, but even that is another parameter, because owning bitcoins means owning a part of the network. If we compare a network wherein a person owns 90% of the total coins in circulation from the beginning versus a network wherein a person releases the cryptocurrency without exploiting the financial advantage, we can sense there is an orders of magnitude difference in centralization.
hero member
Activity: 882
Merit: 792
Watch Bitcoin Documentary - https://t.ly/v0Nim
Quote
In short, if you think that the 1 MB limit is sensible in our times, then I don't know what to say.
It depends on whether you add compression into your equations or not. Pure 1 MB, without any compression, is not enough. The 4 MB witness we have today, without any compression, is also not enough, because you can still see a congested mempool, and that situation is far from perfect. However, if you can imagine 1 GB blocks that could be compressed down to 1 MB, would you agree to such a thing? You didn't expect that kind of question from me, did you?
Let's be real, 1GB blocks can't be compressed down to 1 MB and that's the problem. Also, since the number of Bitcoin users and Bitcoin transactions keeps increasing, and computer hardware is getting better and cheaper, there is absolutely no argument that an increase in block size is either bad or unaffordable. It's simply a demand and supply process. When demand is high and supply remains the same, prices increase. In our case, demand is getting very high and supply remains as low as it was when only hundreds of people were using Bitcoin.

But to be frank, in our case, the main problem is not block size (it's still a problem though) but Bitcoin Ordinals. If this problem continues to exist long-term, then something should really be done to give people some relief. If anyone thinks that high fees are okay and people will pay for it, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.
member
Activity: 172
Merit: 20
Has anyone else heard of the Sophon bot that snipes BRC-20s in the mempool and front-runs them? Looks like this was the reason why we had a break in inscriptions last month...
https://decrypt.co/205377/a-bitcoin-devs-bot-bucked-brc-20s-now-he-might-share-the-sophon-with-the-world
legendary
Activity: 2436
Merit: 6643
be constructive or S.T.F.U
Partially, because Bitcoin's value proposition doesn't depend only on its market cap and security, but also on the "value of use" the network has for certain user groups. If we have a service which brings value in bubbles, like Ordinals, but harms the "value of use" of other user groups, like those who don't care about NFTs and only want to transact BTC, then the overall impact on Bitcoin's value proposition can be negative.

It's impossible to guess the outcome of this until enough time has passed. Depending on how you look at it, you can easily be in favor of one thing against the other. The way I see it now, the majority of on-chain transactions are not being used for P2P payments or average daily-life payments. I also see that blocks on average are nearly half empty/full, and I think the majority of people treat BTC as a store of value; simply put, "bad money drives out good".

I can also bet that this will never change; in reality, it will only get worse/better (depending on how you look at it). Some years back, people used to buy pizza with their BTC. Who is unwise enough now to spend BTC on pizza? The more value BTC has, the less people will spend it.

So it all looks like BTC could use another set of users without having a huge negative impact on the existing group.

Below is a weekly report of the current year's max/average block size. Does it seem like the "average" users are fully utilizing the blocks?



Code:
Date / Week    Max (MB)    Average (MB)
02.01.2023 2.385818 1.078639058
09.01.2023 2.218972 1.215408896
16.01.2023 2.314076 1.242836436
23.01.2023 2.308702 1.081787341
30.01.2023 3.955272 1.532155448
06.02.2023 3.922801 2.22460364
13.02.2023 3.952315 2.171715432
20.02.2023 3.942952 1.992871463
27.02.2023 3.934367 1.966236953
06.03.2023 3.898503 2.03098848
13.03.2023 3.937095 2.047473786
20.03.2023 3.899083 2.217817512
27.03.2023 3.912969 1.895896211
03.04.2023 3.838533 1.914944445
10.04.2023 3.787417 1.789833782
17.04.2023 3.978938 1.729685294
24.04.2023 2.944742 1.62130792
01.05.2023 3.060343 1.638557167
08.05.2023 2.879682 1.697199985
15.05.2023 3.68587 1.724362221
22.05.2023 3.692033 1.738233043
29.05.2023 3.615734 1.690612679
05.06.2023 3.544835 1.738112999
12.06.2023 3.552997 1.751367282
19.06.2023 3.882649 1.721703219
26.06.2023 3.834804 1.707652426
03.07.2023 3.760753 1.740363174
10.07.2023 3.403137 1.673777459
17.07.2023 3.763782 1.668705377
24.07.2023 3.314762 1.672129874
31.07.2023 3.857157 1.657433457
07.08.2023 3.350484 1.632211423
14.08.2023 3.279122 1.679278465
21.08.2023 3.250997 1.657125809
28.08.2023 3.934083 1.659440563
04.09.2023 2.511372 1.623018103
11.09.2023 2.263405 1.660510314
18.09.2023 2.318758 1.671535611
25.09.2023 3.799031 1.726022796
02.10.2023 3.403939 1.685859414
09.10.2023 3.114767 1.657934399
16.10.2023 3.516149 1.625550023
23.10.2023 3.819042 1.68099429
30.10.2023 2.431199 1.660580813
06.11.2023 2.129122 1.653148697
copper member
Activity: 909
Merit: 2301
Quote
The limit wasn't 32 MB in the beginning... in version 0.1.0 of Core it was open (no limit).
This is the November 2008 version:
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const int64 COIN = 1000000;
static const int64 CENT = 10000;
static const int64 TRANSACTIONFEE = 1 * CENT; /// change this to a user options setting, optional fee can be zero
///static const unsigned int MINPROOFOFWORK = 40; /// need to decide the right difficulty to start with
static const unsigned int MINPROOFOFWORK = 20;  /// ridiculously easy for testing
As you can see, MAX_SIZE is equal to 0x02000000, which means 32 MiB.

This is "BitCoin v0.01 ALPHA", as you can read in "readme.txt". And it contains this code:
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const int64 COIN = 100000000;
static const int64 CENT = 1000000;
static const int COINBASE_MATURITY = 100;

static const CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
See? MAX_SIZE is also equal to 32 MiB.

This is the current master branch:
Quote
Code:
/** The maximum allowed size for a serialized block, in bytes (only for buffer size limits) */
static const unsigned int MAX_BLOCK_SERIALIZED_SIZE = 4000000;
/** The maximum allowed weight for a block, see BIP 141 (network rule) */
static const unsigned int MAX_BLOCK_WEIGHT = 4000000;
/** The maximum allowed number of signature check operations in a block (network rule) */
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;
/** Coinbase transaction outputs can only be spent after this number of new blocks (network rule) */
static const int COINBASE_MATURITY = 100;

static const int WITNESS_SCALE_FACTOR = 4;
Of course, the MAX_SIZE value also exists in different places in the code, and it is equal to 32 MiB. Which means you cannot send a bigger message via the Bitcoin P2P protocol, even if you increase the size of the block, because that limit is also present in some other places, and you would have to change them too if you wanted, for example, 1 GB blocks.

Edit: I can give you a better link: this is the exact commit where Satoshi added this code: https://github.com/bitcoin/bitcoin/commit/a30b56ebe76ffff9f9cc8a6667186179413c6349#diff-506a3b93711ef8e9623d329cf0a81447492e05867d2f923c6fa9fcffeca94f35
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const unsigned int MAX_BLOCK_SIZE = 1000000;
static const int64 COIN = 100000000;
static const int64 CENT = 1000000;
static const int COINBASE_MATURITY = 100;

static const CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
See? There was already a limit, enforced by MAX_SIZE and set to 32 MiB, and Satoshi added another constant, called MAX_BLOCK_SIZE, which limited it further to 1 MB. Also note the difference between those two constants: one is 32 MiB and the other is 1 MB; one is written in hexadecimal and the other in decimal.
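
A quick Python check of the arithmetic behind those two constants (just an illustration, not anything from the repository):
Code:
MAX_SIZE       = 0x02000000   # the network message cap from the code above
MAX_BLOCK_SIZE = 1000000      # the block size cap Satoshi added later

assert MAX_SIZE == 32 * 1024 * 1024    # 33,554,432 bytes = 32 MiB (binary units)
assert MAX_BLOCK_SIZE == 1_000_000     # exactly one million bytes = 1 MB (decimal units)
print(MAX_SIZE // MAX_BLOCK_SIZE)      # the message cap is about 33x the old block cap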
legendary
Activity: 2898
Merit: 1823
That's with the presumption that every node would have access to above-average internet connection speeds.
Plus, there's the question of how many nodes are required to be "appropriately" decentralized. I believe there's no right number, BUT I can tell you that MORE full nodes = MORE security assurances.

2009:
https://www.bbc.com/news/technology-10786874
Quote
The data, from network giant Akamai reveals the average global net speed is only 1.7Mbps (megabits per second) although some countries have made strides towards faster services.

2021:
Quote
According to internet speed specialists Ookla the global average download speed on fixed broadband as of September 2021 was 113.25 Mbps on fixed broadband and 63.15 Mbps on mobile.

If 1MB was not a global problem then I kind of doubt 10 or 25 MB would be a problem now!
Weird that Satoshi didn't have the same attitude, otherwise he would have made blocks 50 kB!  Wink


Although as plebs, we can't merely pull those numbers from a few news clippings and truly claim that we have found the answer, no? The solution to scale the network and maintain decentralization would definitely be more complicated than that.

To help better understand ACTUAL SCALING, and for the more technical people, this might help: https://www.youtube.com/watch?v=EHIuuKCm53o

But to be honest, please ELI-5, I don't understand most of that.
jr. member
Activity: 38
Merit: 22
The limit wasn't 32 MB in the beginning... in version 0.1.0 of Core it was open (no limit).

Fast forward to 2023... there's less chance of a DDoS because of the fees!! Lightning Network is a compromise. If it's off-chain... let's just keep the data on central databases again?
legendary
Activity: 2912
Merit: 6403
Blackjack.fun
It is not only about how high you push the limit; it is also about how, technically, you want to do that. Note that we already increased 1 MB to a 4 MB witness limit, and it was accepted. It could even be a 1 GB witness limit if needed; this is not only about the size you want to pick.

Yeah, it's not about the limit, it's about ego! There are devs ready to die on the 1/4 block size barricade.

Quote
Wait till the US goes full South Korea mode and Foundry decides to only accept transactions between whitelisted addresses in its blocks.
Good luck.
1. Today, people observe blocks more carefully than in the past. On some block explorers, for example mempool.space, you can see additional parameters, like "block health". If there is more censorship, it will not go unnoticed.
2. Foundry, or any other huge pool, is not the sole owner of all mining equipment. They have that power only because miners are connected to them. If they start censoring blocks, then many people will start switching to other mining pools.
3. Having some centralized mining pools is not the only way to mine blocks. It is quite effective, and for that reason it is so popular. But if centralized mining pools destroy their reputation, then we will switch to fully P2P-based mining.

One tiny flaw in your theory!
Foundry is a closed pool; it's basically the big guys and the big farms mining there. If you don't have 20 PH/s and aren't US-based, you have no way of mining there. And since most of them are publicly traded companies, if the government goes full AML/KYC/FATF, they will have nothing to do but comply.
And you can add Mara on top of that, since it's again a private pool.

Imagine right now a 30% reduction in network capacity!

And then, on-chain supporters will face a serious problem: support their customers properly, or lose them.

Second flaw!
Lose your freedom for your customers, or have a drink with Uncle Sam and settle for one-third of your former business revenue but no jail time.
Guess what those companies, tied down by investors, assets, and other liabilities to US soil, would do?

The thing is, I agree with you in theory; that's how things should work. But you see, the reality on the ground is unfortunately different.
copper member
Activity: 909
Merit: 2301
Quote
If 1MB was not a global problem then I kind of doubt 10 or 25 MB would be a problem now!
It is not only about how high you push the limit; it is also about how, technically, you want to do that. Note that we already increased 1 MB to a 4 MB witness limit, and it was accepted. It could even be a 1 GB witness limit if needed; this is not only about the size you want to pick.

Quote
Weird that Satoshi didn't have the same attitude, otherwise he would have made blocks 50 kB!
At the very beginning, the limit was set to 32 MiB. Since then, it was changed several times to address spam. Because guess what: at that time, anyone with a CPU could fully fill the blocks, and then Initial Blockchain Download would be even worse today, and we would have entered the "UTXO flood era" even sooner. But fortunately, that point is still in the future, so maybe developers will deal with UTXOs correctly before it becomes serious.

Quote
Wait till the US goes full South Korea mode and Foundry decides to only accept transactions between whitelisted addresses in its blocks.
Good luck.
1. Today, people observe blocks more carefully than in the past. On some block explorers, for example mempool.space, you can see additional parameters, like "block health". If there is more censorship, it will not go unnoticed.
2. Foundry, or any other huge pool, is not the sole owner of all mining equipment. They have that power only because miners are connected to them. If they start censoring blocks, then many people will start switching to other mining pools.
3. Having some centralized mining pools is not the only way to mine blocks. It is quite effective, and for that reason it is so popular. But if centralized mining pools destroy their reputation, then we will switch to fully P2P-based mining. There are some promising ideas, and by scamming customers, you will just push decentralized solutions further, because then programmers will get mad and start publishing proposals (I saw some of them; I even tried an "LN-based mining with Merged Mining" model).
4. With each halving, the basic block reward gets smaller and smaller. If some mining pools still decide to keep the fees for themselves and only share the basic block reward with miners, then eventually people will switch to P2P solutions, just because it will be more profitable. You cannot expect that showing a middle finger to miners will have no consequences. A sketch of this follows below the list.
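
Here is that sketch, with assumed round numbers (the real 50 BTC subsidy halving every 210,000 blocks, plus an invented flat 2 BTC of fees per block): as the subsidy shrinks, a pool that keeps the fees and shares only the subsidy passes on a smaller and smaller fraction of the real block value.
Code:
# Real halving schedule; the flat 2 BTC of fees per block is a made-up illustration.
def subsidy_btc(height):
    return 50 / 2 ** (height // 210_000)

fees_btc = 2.0
for epoch, height in enumerate([0, 210_000, 420_000, 630_000, 840_000]):
    sub = subsidy_btc(height)
    shared = sub / (sub + fees_btc)   # fraction miners get if the pool keeps all the fees
    print(f"epoch {epoch}: subsidy {sub:6.3f} BTC, subsidy-only payout = {shared:.0%} of block value")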

Quote
Suddenly you will realize you can have 1 billion decentralized nodes but all you can relay is a centralized decision
Maybe you don't realize that, but since the Lightning Network was deployed, there are more and more ways to transact off-chain. So, if you lose all trust in the typical transactions that happen on-chain, then guess what: it is possible to reach a state where more coins flow off-chain than on-chain. And then, on-chain supporters will face a serious problem: support their customers properly, or lose them.

So you don't want to block miners. You don't want to turn off power grids, censor transactions, or do other stuff like that. Because if the model we have today is destroyed, another model will be created. Which means, if some off-chain model turns out to be more stable because of the destruction of an on-chain model, then nodes will become more and more important. And then, if you take Proof of Work out of the equation, you are left with "Chaumian e-cash", which may be worse, but it still works, and it would work well in case of emergency.
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressive low cost to run a Bitcoin mainnet node.


A few things to note, though, since people are quoting Satoshi. He never flat out disagreed with data. He also never had a block limit in the beginning; it was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status, though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah, I talked to foundry and told them I had 5.0 ph, which is around 50 s19s.

they only accept miners with 20ph.

even if I switch everything to s21 gear I can go to about 16ph, so I am simply shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
What kind of power do you use? Grid? Renewable?

solar and grid and a prepaid contract to the grid.

my power is cheap, but at location one the master transformer is 320 kW.

this allows us 230 kW in winter and 190 kW in summer.

the second location is about 60 kW in winter and 50 kW in summer.

so 240-290 kW in total.

we may add a second transformer; we are trying to do a Tesla Supercharger deal.

if that works we add three Superchargers, run two of them, and get a 500 kW transformer. this could spare us 200 more kW.

so divide by 3.5 kW per machine: maybe 80 s21s on 1 transformer max.

our prices for power will always make money, but right now an s21 makes about 200 TH/s x 8.5 cents = 17 dollars a day.

it burns about 85 kWh a day, so 85 x 4 cents = $3.40 in power.

thus 17 - 3.40 = $13.60 per machine per day, which is decent for 70 units. But I am still shut out of foundry.
and 70 units is 280k out of pocket.
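
Reading those figures as per-machine, per-day numbers (an assumption on my part: roughly 200 TH/s per S21 earning about 8.5 cents per TH/s per day, and about 85 kWh per day at the 4-cent power rate), the math above works out like this:
Code:
# Assumed interpretation of the figures above; everything is per machine, per day.
hashrate_th     = 200      # rough Antminer S21 hashrate
revenue_per_th  = 0.085    # USD per TH/s per day ("200 x 8.5 = 17 dollars")
energy_kwh      = 85       # roughly 3.5 kW running around the clock
power_price_kwh = 0.04     # USD, the 4-cent rate mentioned above

revenue = hashrate_th * revenue_per_th      # 17.00
power   = energy_kwh * power_price_kwh      # 3.40
profit  = revenue - power                   # 13.60
print(f"daily profit per S21: ${profit:.2f}, across 70 units: ${profit * 70:,.2f}")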

If the machines show up in January, I could get 90 days at $1,000 a day, then the halving, which means $350 a day.

we are always slowly adding the new gear.

maybe just ten machines for January, say $43k.
sr. member
Activity: 1666
Merit: 310
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressive low cost to run a Bitcoin mainnet node.


A few things to note, though, since people are quoting Satoshi. He never flat out disagreed with data. He also never had a block limit in the beginning; it was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status, though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah, I talked to foundry and told them I had 5.0 ph, which is around 50 s19s.

they only accept miners with 20ph.

even if I switch everything to s21 gear I can go to about 16ph, so I am simply shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
What kind of power do you use? Grid? Renewable?
legendary
Activity: 4326
Merit: 8950
'The right to privacy matters'
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressive low cost to run a Bitcoin mainnet node.


A few things to note, though, since people are quoting Satoshi. He never flat out disagreed with data. He also never had a block limit in the beginning; it was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status, though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah, I talked to foundry and told them I had 5.0 ph, which is around 50 s19s.

they only accept miners with 20ph.

even if I switch everything to s21 gear I can go to about 16ph, so I am simply shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
jr. member
Activity: 38
Merit: 22
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressive low cost to run a Bitcoin mainnet node.


A few things to note, though, since people are quoting Satoshi. He never flat out disagreed with data. He also never had a block limit in the beginning; it was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status, though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io
legendary
Activity: 2912
Merit: 6403
Blackjack.fun
That's with the presumption that every node would have access to above-average internet connection speeds.
Plus, there's the question of how many nodes are required to be "appropriately" decentralized. I believe there's no right number, BUT I can tell you that MORE full nodes = MORE security assurances.

2009:
https://www.bbc.com/news/technology-10786874
Quote
Who defines what decentralisation is?
Of course, full nodes. The more independent full nodes you have, the more decentralized the whole system is. And obviously, they should be owned by independent people and controlled from independent machines. Which brings us back to the first quote: if you think that miners can "vote" and full nodes can "only observe", then what kind of decentralization is present in your model?

Wait till the US goes full South Korea mode and Foundry decides to only accept transactions between whitelisted addresses in its blocks.
Suddenly you will realize you can have 1 billion decentralized nodes but all you can relay is a centralized decision  Wink
Much freedom! Such beauty! Wow!

