
Topic: NFTs in the Bitcoin blockchain - Ordinal Theory - page 5. (Read 9163 times)

hero member
Activity: 1111
Merit: 584
Initial Blockchain Download takes more and more time. Today, fully synchronizing the chain can take one week. Before, it took a few days, and in the old times you needed only a few hours. And you have to download everything, even if you use pruning; downloading everything once is still needed, and it can take a lot of time. And that is only getting worse.

Which means it doesn't matter that you have to download 500 GB; validation time is what matters most. If you can download 1 GB in 10 seconds, that is 5,000 seconds to download 500 GB. But downloading is not the problem, validation is. One week of validation for 500 GB means around 1 MB of data being validated every second, and that is too slow. It doesn't matter that after a few hours you will have the full chain if you then spend many days validating it. And if you increase the size to huge values, then guess what: if your validation speed is around 1 MB per second, you can validate 600 MB per 10 minutes. Which means that if you get 1 GB every 10 minutes, you will never catch up, even if everything is already fully downloaded on your local disk.

Bootstrapping would significantly reduce that time from days/weeks to hours.

legendary
Activity: 1344
Merit: 6415
Farewell, Leo
This is never the case, because users have different hardware.
There is an endless list of reasons why this is never the case, but theoretically, if everyone owned the same share of the network in terms of both computational and financial power, that would be ideal decentralization.

If that were the case, then BCH could have been "the true Bitcoin" for a short moment, when it had a higher hashrate.
Never used the term "true", and also never claimed that hash rate per se is directly proportional to decentralization. It's the share of the hash rate that matters. The hash rate itself has to do with security, not decentralization.

Even if you have 51%, then still, with Proof of Work instead of Proof of Stake, you cannot get those coins
To clarify my position: if you have 51% of the hash rate and are malicious, then these coins are worthless. There is nothing else to be said. You don't need to reorg and access the coins. Simply reversing a confirmed transaction proves that the concept has failed.
hero member
Activity: 667
Merit: 1529
Quote
Let's be real, 1 GB blocks can't be compressed down to 1 MB and that's the problem.
You cannot do that in the general case, for all kinds of transactions. But for specific cases? There is no problem doing that in practice. Of course, compressing 1 GB of Ordinals could be hard, if you want to preserve that data and put everything on-chain. But compressing 1 GB of regular transactions, based on public keys alone? This is definitely possible. Why?

1. Public keys can be combined. Schnorr signatures show that you can have R=R1+R2. And in the general case, if you can express something as an R-value, then you can combine it by summing. And then, instead of having 100 different signatures, you can have a single signature that is equivalent to a 100-of-100 multisig on the given data (see the toy sketch after this list).
2. Transactions in the middle can be batched, skipped, or otherwise summed up, without landing in a block. If you have the Lightning Network, then you have one transaction to open a channel, one transaction to close it, and in between you can have 1 GB, or even 1 TB, of transaction traffic that will never be visible on-chain once the channel is closed.
3. Transactions flying around in mempools can be batched by using full-RBF. Which means that if you have Alice->Bob->Charlie->...->Zack transactions in mempools, and together they take 1 GB, then guess what: if you put a single Alice->Zack transaction in the final block, it could take 1 MB instead, as long as you properly pick the final transaction after batching, and protect users from double-spends. The whole state of the mempool could even be committed in every block, long before it is batched and confirmed on-chain. That would protect transactions from double-spending, if nodes handled it properly and miners respected those rules.
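
A toy sketch of point 1, showing why one combined signature can stand in for many signers. This is NOT real cryptography: it uses a tiny multiplicative group instead of secp256k1, a made-up challenge function instead of a real hash, and naive aggregation without the MuSig-style protections real Schnorr aggregation needs against rogue keys.
Code:
// Toy illustration only: naive Schnorr-style aggregation in the multiplicative
// group mod a small prime, where "adding" R-values corresponds to multiplying
// group elements. Real Bitcoin Schnorr (BIP 340) works on secp256k1 and needs
// MuSig-style protections; this only shows why one signature can cover N signers.
#include <cstdint>
#include <iostream>

static const uint64_t P = 1019; // small safe prime, P = 2*Q + 1
static const uint64_t Q = 509;  // prime order of the subgroup generated by G
static const uint64_t G = 4;    // generator of that order-Q subgroup

uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

// Toy challenge "hash": any deterministic function of (R_agg, message) works here.
uint64_t challenge(uint64_t r_agg, uint64_t msg) {
    return (r_agg * 131 + msg) % Q;
}

int main() {
    const uint64_t msg = 42;           // message both parties co-sign
    const uint64_t x[2] = {123, 456};  // private keys of two signers
    const uint64_t k[2] = {77, 300};   // their secret nonces

    // Each signer publishes y_i = G^x_i and R_i = G^k_i; they simply combine.
    uint64_t y_agg = 1, r_agg = 1;
    for (int i = 0; i < 2; ++i) {
        y_agg = (y_agg * modpow(G, x[i], P)) % P;  // combined public key
        r_agg = (r_agg * modpow(G, k[i], P)) % P;  // combined nonce point
    }

    // Everyone signs against the same challenge; partial signatures just add up.
    uint64_t e = challenge(r_agg, msg);
    uint64_t s_agg = 0;
    for (int i = 0; i < 2; ++i)
        s_agg = (s_agg + k[i] + e * x[i]) % Q;

    // One verification checks both signers at once: G^s == R_agg * y_agg^e.
    uint64_t lhs = modpow(G, s_agg, P);
    uint64_t rhs = (r_agg * modpow(y_agg, e, P)) % P;
    std::cout << "aggregate signature valid: " << (lhs == rhs ? "yes" : "no") << std::endl;
    return 0;
}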

Also, I think people should look at "the effective UTXO set diff", instead of just the size of transactions in mempools. Which means people should not see "oh no, our mempool contains 1 GB of transactions waiting for confirmation". They should see instead: "our mempool contains N inputs and M outputs to be confirmed". Everything in between could be batched and stripped, and in many cases all you need is the list of all inputs that are spent and all outputs that are created. If you have Alice->Bob->Charlie->...->Zack transactions, then you don't have to include the Bob->Charlie transaction in a new block if the Alice->Zack transaction is included. All that matters is whether the last user gets the proper amount of coins, and whether no double-spending is going on in the middle. Five years later, nobody will care whether the Bob->Charlie transaction was there or not. A minimal sketch of that netting idea is below.
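
A minimal sketch of that netting idea, on hypothetical data: given a chain of unconfirmed payments, only the outpoints spent from outside the set and the outpoints left unspent at the end actually have to reach a block.
Code:
// Simplified model with made-up outpoint names, not real transaction data.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Tx {
    std::vector<std::string> inputs;   // outpoints this transaction spends
    std::vector<std::string> outputs;  // outpoints this transaction creates
};

int main() {
    // Alice -> Bob -> Charlie -> Zack: each payment spends the previous output.
    std::vector<Tx> mempool = {
        {{"alice:0"},   {"bob:0"}},
        {{"bob:0"},     {"charlie:0"}},
        {{"charlie:0"}, {"zack:0"}},
    };

    std::set<std::string> spent, created;
    for (const Tx& tx : mempool) {
        for (const auto& in  : tx.inputs)  spent.insert(in);
        for (const auto& out : tx.outputs) created.insert(out);
    }

    // Net effect: inputs taken from the existing UTXO set...
    std::cout << "net inputs:";
    for (const auto& in : spent)
        if (!created.count(in)) std::cout << " " << in;
    // ...and outputs that survive once the intermediate hops are collapsed.
    std::cout << "\nnet outputs:";
    for (const auto& out : created)
        if (!spent.count(out)) std::cout << " " << out;
    std::cout << std::endl;
    return 0;
}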

Quote
Also, since the number of Bitcoin users and the number of Bitcoin transactions increase, and computer hardware is getting better and cheaper, there is absolutely no argument to say that an increase in block size is either bad or unaffordable.
Initial Blockchain Download takes more and more time. Today, fully synchronizing the chain can take one week. Before, it took a few days, and in the old times you needed only a few hours. And you have to download everything, even if you use pruning; downloading everything once is still needed, and it can take a lot of time. And that is only getting worse.

Which means it doesn't matter that you have to download 500 GB; validation time is what matters most. If you can download 1 GB in 10 seconds, that is 5,000 seconds to download 500 GB. But downloading is not the problem, validation is. One week of validation for 500 GB means around 1 MB of data being validated every second, and that is too slow. It doesn't matter that after a few hours you will have the full chain if you then spend many days validating it. And if you increase the size to huge values, then guess what: if your validation speed is around 1 MB per second, you can validate 600 MB per 10 minutes. Which means that if you get 1 GB every 10 minutes, you will never catch up, even if everything is already fully downloaded on your local disk.
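
A back-of-the-envelope sketch of that argument, assuming a fixed validation speed of 1 MB per second (the rough number implied by the week-long figure above): past some block size, a node can never catch up, no matter how fast it downloads.
Code:
// Simple arithmetic only; the 1 MB/s validation rate is an assumption.
#include <cstdio>

int main() {
    const double validation_mb_per_s = 1.0;    // assumed validation speed
    const double block_interval_s    = 600.0;  // one block every 10 minutes
    const double block_sizes_mb[]    = {1, 4, 100, 600, 1000};

    for (double size : block_sizes_mb) {
        double seconds_per_block = size / validation_mb_per_s;
        bool keeps_up = seconds_per_block <= block_interval_s;
        std::printf("%6.0f MB block: %7.0f s to validate -> node %s\n",
                    size, seconds_per_block,
                    keeps_up ? "keeps up" : "falls behind forever");
    }

    // Validating a full 500 GB chain at the same rate:
    std::printf("500 GB chain: about %.1f days of validation\n",
                500.0 * 1024.0 / validation_mb_per_s / 86400.0);
    return 0;
}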

Quote
If anyone thinks that high fees are okay and people will pay them, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.
Two things:
1. On-chain fees will eventually be high, because when the block reward reaches zero, fees will be the only reward.
2. It can still be cheap for users, if their low-fee transactions are batched into high-fee transactions and then stored on-chain. Which means that if 500 satoshis as a fee is too expensive, then imagine 500 users paying one satoshi each, and having their transactions batched into a single on-chain transaction (a quick illustration is below).
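
A quick illustration of point 2, with made-up numbers: one fixed on-chain fee, shared across everyone inside the batched transaction.
Code:
// The 500-satoshi fee and the batch sizes are just example values.
#include <cstdio>

int main() {
    const long on_chain_fee_sats = 500;          // fee of the single batched transaction
    const long batch_sizes[]     = {1, 10, 100, 500};

    for (long users : batch_sizes)
        std::printf("%4ld users in the batch -> %.2f sats paid per user\n",
                    users, (double)on_chain_fee_sats / users);
    return 0;
}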

Quote
A network of 10,000 independently running nodes is fully decentralized if each node has an equal Proof-of-Work vote (i.e., 1 CPU = 1 vote).
This is never the case, because users have different hardware. And also, their wealth is never equal; there are rich and poor users. Which means that trying to reach a situation where every user has the same CPU power is the same as trying to reach a situation where every user owns the same amount of satoshis. This will simply never happen.

And even if you have "1 CPU = 1 vote", then still, Alice could own 5 CPUs and Bob could own 3 CPUs. And then their voting power will never be equal.

Quote
So, there appears to be an extra parameter; the subset of that group which produces the votes.
If that were the case, then BCH could have been "the true Bitcoin" for a short moment, when it had a higher hashrate. But still, having enough hashrate is not everything, because you also have to produce blocks in the proper format, accepted by other full nodes.

But surprisingly, Merged Mining could solve that issue in case of such an attack. If we always traced the strongest chain of SHA-256 headers, and distributed coinbase rewards accordingly, then in case of an attack, if some network has only 10% of the global SHA-256 hashrate, its miners should receive only 10% of the coinbase amount, and the rest should be burned or timelocked into the future. Then the attackers would not receive more coins on the real network supported by users, but the attack would be noticed, and the coinbase amount would react to it properly. A rough sketch of that payout rule is below.
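
A rough sketch of that payout rule (a hypothetical policy, not an existing consensus rule), using the 6.25 BTC subsidy of 2023 and an assumed 10% hashrate share as example numbers.
Code:
// Hypothetical merged-mining payout: miners get only their share of global work.
#include <cstdio>

int main() {
    const double full_subsidy_btc = 6.25;  // block subsidy at the time of writing
    const double hashrate_share   = 0.10;  // this network's share of global SHA-256 work

    double paid_now = full_subsidy_btc * hashrate_share;
    double withheld = full_subsidy_btc - paid_now;

    std::printf("subsidy paid to miners now:     %.5f BTC\n", paid_now);
    std::printf("burned or timelocked for later: %.5f BTC\n", withheld);
    return 0;
}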

Quote
If we compare a network wherein a person owns 90% of the total coins in circulation from the beginning versus a network wherein a person releases the cryptocurrency without exploiting the financial advantage, we can sense there is an orders of magnitude difference in centralization.
See? Hashrate is not everything. Even if you have 51%, then still, with Proof of Work instead of Proof of Stake, you cannot get those coins, unless you trigger a huge chain reorganization. And in that case:
1. It will be noticed by every full node. Some pruned nodes will even require re-downloading the chain, and that would bring a lot of attention and alert a lot of people.
2. It will require a huge amount of Proof of Work, which could instead be put into producing new coins on top of the chain.

Quote
I would not be so quick to call ordinals the issue.
They are, because you cannot compress them that easily. If you have public-key-based transactions, they can be joined. If you have data-based transactions, then you need to reveal that data, and then it is harder to compress, because you cannot "just add keys and produce a Schnorr signature out of that".

Quote
and LN is not the best answer
Of course, sidechains could potentially be better, if they were decentralized, and if they had a peg as strong as LN's. But still, some sidechain proposals were rejected, which means new ones should be made, maybe even no-fork based, if soft-fork ones are not accepted.

Quote
Basically, attacking ordinals is trying to prevent murder by restricting guns. People will use other weapons to do it.
Of course. Those "other weapons" could mean a "UTXO flood", and for that reason you don't have any code censoring Ordinals in the official Bitcoin Core release.
legendary
Activity: 4102
Merit: 7765
'The right to privacy matters'
Quote
In short, if you think that the 1 MB limit is sensible in our times, then I don't know what to say.
It depends on whether you add compression into your equations or not. Pure 1 MB, without any compression, is not enough. The 4 MB witness we have today, without any compression, is also not enough, because you can see a congested mempool as well, and that situation is far from perfect. However, if you can imagine 1 GB blocks that could be compressed down to 1 MB, then would you agree to such a thing? You didn't expect that kind of question from me, did you?
Let's be real, 1 GB blocks can't be compressed down to 1 MB and that's the problem. Also, since the number of Bitcoin users and the number of Bitcoin transactions increase, and computer hardware is getting better and cheaper, there is absolutely no argument to say that an increase in block size is either bad or unaffordable. It's a simple supply and demand process. When demand is high and supply remains the same, prices increase. In our case, demand is getting very high and supply remains as low as it was when only hundreds of people were using Bitcoin.

But to be frank, in our case the main problem is not block size (it's still a problem though) but Bitcoin Ordinals. If this problem continues to exist long-term, then something should really be done to give people relief. If anyone thinks that high fees are okay and people will pay them, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.


I would not be so quick to call ordinals the issue. 

Ordinals are simply what is being used to make the fees higher this time.

Until you make btc as easy to scale as scrypt there will be an incentive to jack the fees on the btc chain.

and LN is not the best answer.

read how fees were jacked in 2017

https://bitcointalksearch.org/topic/why-all-miners-need-to-mine-on-a-pool-that-pays-them-the-tx-fees-2634505


and see the largest pool right now is foundry which does not allow you to join unless you are at 20ph.

I can see them prepping to jack fees without using ordinals.

they have a relationship with bitmain and with riot.

the combined size of the two main pools  foundry and ant pool is over 40%.

they can simply repeat what I describe in my thread.

making sure big players are on foundry and moving the high fees to them.

make sure smaller players are on ant pool and using that pool to flood the memspace.

No need to have any ordinal to do it.


Basically, attacking ordinals is trying to prevent murder by restricting guns. People will use other weapons to do it.
legendary
Activity: 1344
Merit: 6415
Farewell, Leo
Of course, because in my definition I mentioned the number of independent nodes. Which means 10,000 nodes won't help if they are owned by a single entity.
Your definition is close to what I consider to be decentralized, but I feel like throwing my 2 cents in. A network of 10,000 independently running nodes is fully decentralized if each node has an equal Proof-of-Work vote (i.e., 1 CPU = 1 vote). Respectively, a network where 2 out of the 10,000 running nodes are the only Proof-of-Work voters is, if not close to centralized, definitely not fully decentralized. So, there appears to be an extra parameter; the subset of that group which produces the votes.

For the sake of simplicity, I'm ignoring the centralization involved in minting money, but even that is another parameter, because owning bitcoins means owning a part of the network. If we compare a network wherein a person owns 90% of the total coins in circulation from the beginning versus a network wherein a person releases the cryptocurrency without exploiting the financial advantage, we can sense there is an orders of magnitude difference in centralization.
hero member
Activity: 840
Merit: 756
Watch Bitcoin Documentary - https://t.ly/v0Nim
Quote
In short, if you think that the 1 MB limit is sensible in our times, then I don't know what to say.
It depends on whether you add compression into your equations or not. Pure 1 MB, without any compression, is not enough. The 4 MB witness we have today, without any compression, is also not enough, because you can see a congested mempool as well, and that situation is far from perfect. However, if you can imagine 1 GB blocks that could be compressed down to 1 MB, then would you agree to such a thing? You didn't expect that kind of question from me, did you?
Let's be real, 1 GB blocks can't be compressed down to 1 MB and that's the problem. Also, since the number of Bitcoin users and the number of Bitcoin transactions increase, and computer hardware is getting better and cheaper, there is absolutely no argument to say that an increase in block size is either bad or unaffordable. It's a simple supply and demand process. When demand is high and supply remains the same, prices increase. In our case, demand is getting very high and supply remains as low as it was when only hundreds of people were using Bitcoin.

But to be frank, in our case the main problem is not block size (it's still a problem though) but Bitcoin Ordinals. If this problem continues to exist long-term, then something should really be done to give people relief. If anyone thinks that high fees are okay and people will pay them, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.
member
Activity: 172
Merit: 20
Anyone else heard of the Sophon bot that snipes brc-20s in the mempool and front-runs them? Looks like this was the reason why we had an inscriptions break last month...
https://decrypt.co/205377/a-bitcoin-devs-bot-bucked-brc-20s-now-he-might-share-the-sophon-with-the-world
legendary
Activity: 2170
Merit: 6279
be constructive or S.T.F.U
Partially, because Bitcoin's value proposition doesn't depend only on its market cap and security, but also on the "value of use" the network has for certain user groups. If we have a service which brings value in bubbles, like Ordinals, but harms the "value of use" of other user groups, like those who don't care about NFTs and only want to transact BTC, then the overall impact on Bitcoin's value proposition can be negative.

It's impossible to guess the outcome of this until enough time has passed. Depending on how you look at it, you can easily be in favor of one thing against the other. The way I see it now, the majority of on-chain transactions are not being used for P2P payments or average daily-life payments. I also see that blocks on average are nearly half empty/full, and I think that the majority of people treat BTC as a store of value; simply put, "bad money drives out good".

I can also bet that this will never change; in reality, it will only get worse/better (depending on how you look at it). Some years back, people used to buy pizza using their BTC. Who is unwise enough now to spend BTC on pizza? The more value BTC has, the less people will spend it.

So it all looks like BTC could use another set of users without having a huge negative impact on the existing group.

Below is a weekly report of the current year's Max / Average block size; does it seem like the "average" users are fully utilizing the blocks?



Code:
Date / Week         Max (MB)       Average (MB)
02.01.2023 2.385818 1.078639058
09.01.2023 2.218972 1.215408896
16.01.2023 2.314076 1.242836436
23.01.2023 2.308702 1.081787341
30.01.2023 3.955272 1.532155448
06.02.2023 3.922801 2.22460364
13.02.2023 3.952315 2.171715432
20.02.2023 3.942952 1.992871463
27.02.2023 3.934367 1.966236953
06.03.2023 3.898503 2.03098848
13.03.2023 3.937095 2.047473786
20.03.2023 3.899083 2.217817512
27.03.2023 3.912969 1.895896211
03.04.2023 3.838533 1.914944445
10.04.2023 3.787417 1.789833782
17.04.2023 3.978938 1.729685294
24.04.2023 2.944742 1.62130792
01.05.2023 3.060343 1.638557167
08.05.2023 2.879682 1.697199985
15.05.2023 3.68587 1.724362221
22.05.2023 3.692033 1.738233043
29.05.2023 3.615734 1.690612679
05.06.2023 3.544835 1.738112999
12.06.2023 3.552997 1.751367282
19.06.2023 3.882649 1.721703219
26.06.2023 3.834804 1.707652426
03.07.2023 3.760753 1.740363174
10.07.2023 3.403137 1.673777459
17.07.2023 3.763782 1.668705377
24.07.2023 3.314762 1.672129874
31.07.2023 3.857157 1.657433457
07.08.2023 3.350484 1.632211423
14.08.2023 3.279122 1.679278465
21.08.2023 3.250997 1.657125809
28.08.2023 3.934083 1.659440563
04.09.2023 2.511372 1.623018103
11.09.2023 2.263405 1.660510314
18.09.2023 2.318758 1.671535611
25.09.2023 3.799031 1.726022796
02.10.2023 3.403939 1.685859414
09.10.2023 3.114767 1.657934399
16.10.2023 3.516149 1.625550023
23.10.2023 3.819042 1.68099429
30.10.2023 2.431199 1.660580813
06.11.2023 2.129122 1.653148697
hero member
Activity: 667
Merit: 1529
Quote
The limit wasn't 32 MB in the beginning... in version 0.1.0 of Core it is open (no limit).
This is the November 2008 version:
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const int64 COIN = 1000000;
static const int64 CENT = 10000;
static const int64 TRANSACTIONFEE = 1 * CENT; /// change this to a user options setting, optional fee can be zero
///static const unsigned int MINPROOFOFWORK = 40; /// need to decide the right difficulty to start with
static const unsigned int MINPROOFOFWORK = 20;  /// ridiculously easy for testing
As you can see, MAX_SIZE is equal to 0x02000000, which means 32 MiB.

This is "BitCoin v0.01 ALPHA", as you can read in "readme.txt". And it contains this code:
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const int64 COIN = 100000000;
static const int64 CENT = 1000000;
static const int COINBASE_MATURITY = 100;

static const CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
See? MAX_SIZE is also equal to 32 MiB.

This is the current master branch:
Quote
Code:
/** The maximum allowed size for a serialized block, in bytes (only for buffer size limits) */
static const unsigned int MAX_BLOCK_SERIALIZED_SIZE = 4000000;
/** The maximum allowed weight for a block, see BIP 141 (network rule) */
static const unsigned int MAX_BLOCK_WEIGHT = 4000000;
/** The maximum allowed number of signature check operations in a block (network rule) */
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;
/** Coinbase transaction outputs can only be spent after this number of new blocks (network rule) */
static const int COINBASE_MATURITY = 100;

static const int WITNESS_SCALE_FACTOR = 4;
Of course, a MAX_SIZE value also exists in different places in the code, and it is equal to 32 MiB. Which means you cannot send a bigger message via the Bitcoin protocol, even if you increase the size of the block, because that limit is also present in some other places, and you would have to change them as well if you want, for example, 1 GB blocks.

Edit: I can give you a better link: this is the exact commit, where Satoshi added this code: https://github.com/bitcoin/bitcoin/commit/a30b56ebe76ffff9f9cc8a6667186179413c6349#diff-506a3b93711ef8e9623d329cf0a81447492e05867d2f923c6fa9fcffeca94f35
Quote
Code:
static const unsigned int MAX_SIZE = 0x02000000;
static const unsigned int MAX_BLOCK_SIZE = 1000000;
static const int64 COIN = 100000000;
static const int64 CENT = 1000000;
static const int COINBASE_MATURITY = 100;

static const CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
See? There was a limit, enforced by MAX_SIZE, set to 32 MiB, and Satoshi added another constant, called MAX_BLOCK_SIZE, which limited it further to 1 MB. Also note the difference between those two constants: one is 32 MiB, and the other is 1 MB. One is a hexadecimal power of two (32 MiB), and the other is a round decimal number (1 MB).
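
A tiny check of those two numbers (plain arithmetic, nothing Bitcoin-specific):
Code:
// 0x02000000 bytes is exactly 32 MiB (binary), while 1000000 bytes is 1 MB (decimal).
#include <cstdio>

int main() {
    const unsigned int MAX_SIZE       = 0x02000000;  // 33,554,432 bytes
    const unsigned int MAX_BLOCK_SIZE = 1000000;     // 1,000,000 bytes

    std::printf("MAX_SIZE       = %u bytes = %.0f MiB\n",
                MAX_SIZE, MAX_SIZE / (1024.0 * 1024.0));
    std::printf("MAX_BLOCK_SIZE = %u bytes = %.0f MB\n",
                MAX_BLOCK_SIZE, MAX_BLOCK_SIZE / 1e6);
    return 0;
}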
legendary
Activity: 2898
Merit: 1823
That's with the presumption that every node would have access to above-average internet connection speeds.
Plus for the question, how many nodes are required to be "appropriately" decentralized. I believe there's no right number, BUT I could tell you that the MORE full nodes = MORE security assurances.

2009:
https://www.bbc.com/news/technology-10786874
Quote
The data, from network giant Akamai reveals the average global net speed is only 1.7Mbps (megabits per second) although some countries have made strides towards faster services.

2021:
Quote
According to internet speed specialists Ookla the global average download speed on fixed broadband as of September 2021 was 113.25 Mbps on fixed broadband and 63.15 Mbps on mobile.

If 1 MB was not a global problem then I kind of doubt 10 or 25 MB would be a problem now!
Weird that Satoshi didn't have the same attitude, otherwise he would have made blocks 50 kB!  Wink


Although as plebs, we can't merely pull those numbers from a few news clippings and truly claim that we have found the answer, no? The solution to scale the network and maintain decentralization would definitely be more complicated than that.

I believe to help better understand ACTUAL SCALING and for the more technical people, this might help, https://www.youtube.com/watch?v=EHIuuKCm53o

But to be honest, please ELI-5, I don't understand most of that.
jr. member
Activity: 38
Merit: 21
The limit wasn't 32 MB in the beginning... in version 0.1.0 of Core it is open (no limit).

Fast forward to 2023... there's less chance of a DDoS because of the fees!! The Lightning Network is a compromise. If it's off-chain... let's just keep the data on central databases again?
legendary
Activity: 2828
Merit: 6108
Jambler.io
It is not only about how high you push the limit. It is also about how, technically, you want to do that. Note that we already increased 1 MB to a 4 MB witness, and it was accepted. It could even be a 1 GB witness if needed; this is not only about the size you want to pick.

Yeah, it's not about the limit, it's about ego! There are devs ready to die on the 1 MB / 4 MB block size barricade.

Quote
Wait till the US goes full South Korea mode and Foundry will decide to only accept transactions between whitelisted addresses in its blocks.
Good luck.
1. Today, people observe blocks more carefully than in the past. On some block explorers, for example mempool.space, you can see additional parameters, like "block health". If there is more censorship, it will not remain unnoticed.
2. Foundry, or any other huge pool, is not the sole owner of all mining equipment. They have that power only because miners are connected to them. If they start censoring blocks, then many people will start switching to other mining pools.
3. Having some centralized mining pools is not the only way to mine blocks. It is quite effective, and for that reason it is so popular. But if centralized mining pools destroy their reputation, then we will switch to fully P2P-based mining.

One tiny flaw in your theory!
Foundry is a closed pool; it's basically the big guys and big farms mining there. If you don't have 20 PH and aren't US-based, you have no way of mining there. And since most of them are publicly traded companies, if the government went full AML/KYC/FATF, they would have nothing to do but comply.
And you can add Mara on top of that since it's again a private pool.

Imagine right now a 30% reduction in network capacity!

And then, on-chain supporters will face a serious problem: support their customers properly, or lose them.

Second flaw!
Lose your freedom for your customers, or have a drink with Uncle Sam and settle for one-third of your former business revenue but no jail time.
Guess what those companies tied down by investors, assets, and other liabilities to US soil would do?

The thing is, I agree with you in theory; that's how things should work. But you see, the reality on the ground is unfortunately different.
hero member
Activity: 667
Merit: 1529
Quote
If 1 MB was not a global problem then I kind of doubt 10 or 25 MB would be a problem now!
It is not only about how high you push the limit. It is also about how, technically, you want to do that. Note that we already increased 1 MB to a 4 MB witness, and it was accepted. It could even be a 1 GB witness if needed; this is not only about the size you want to pick.

Quote
Weird that Satoshi didn't have the same attitude, otherwise he would have made blocks 50 kB!
At the very beginning, the limit was set to 32 MiB. Since then, it was changed several times, to address spam. Because guess what: at that time, anyone with a CPU could fully fill the blocks, and then Initial Blockchain Download would be even worse today, and we would have entered the "UTXO flood era" even sooner. But fortunately, that point is still in the future, so maybe developers will deal with UTXOs correctly before it becomes serious.

Quote
Wait till the US goes full South Korea mode and Foundry will decide to only accept transactions between whitelisted addresses in its blocks.
Good luck.
1. Today, people observe blocks more carefully than in the past. On some block explorers, for example mempool.space, you can see additional parameters, like "block health". If there is more censorship, it will not remain unnoticed.
2. Foundry, or any other huge pool, is not the sole owner of all mining equipment. They have that power only because miners are connected to them. If they start censoring blocks, then many people will start switching to other mining pools.
3. Having some centralized mining pools is not the only way to mine blocks. It is quite effective, and for that reason it is so popular. But if centralized mining pools destroy their reputation, then we will switch to fully P2P-based mining. There are some promising ideas, and by scamming customers, you will just push those decentralized solutions further. Because then programmers will be mad, and they will start revealing some proposals (I saw some of them; I even tried an "LN-based mining with Merged Mining" model).
4. With each halving, the basic block reward will be smaller and smaller. If some mining pools still decide to keep fees for themselves, and only share the basic block reward with miners, then eventually people will switch to P2P solutions, just because it will be more profitable. You cannot expect that showing a middle finger to some miners will have no consequences.

Quote
Suddenly you will realize you can have 1 billion decentralized nodes but all you can relay is a centralized decision
Maybe you don't realize it, but since the Lightning Network was deployed, there are more and more ways to transact off-chain. So, if you lose all trust in the typical transactions that happen on-chain, then guess what: it is possible to reach a state where more coins flow off-chain than on-chain. And then, on-chain supporters will face a serious problem: support their customers properly, or lose them.

So, you don't want to block miners. You don't want to turn off power grids, censor transactions, and do other stuff like that. Because if the model we have today is destroyed, then another model will be created. Which means that if some off-chain model becomes more stable because of the destruction of some on-chain model, then nodes will become more and more important. And then, if you take Proof of Work out of the equation, you are left with "chaumian e-cash", which may be worse, but it still works, and it will work well in case of emergency.
legendary
Activity: 4102
Merit: 7765
'The right to privacy matters'
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressively low cost for running a Bitcoin mainnet node.


A few things to note, though. Since people are quoting Satoshi: he never flat out disagreed with data. He also never had a block limit in the beginning. It was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah I talked to foundry told them I had 5.0 ph which is around 50 s19s.

they only accept miners with 20ph.

even if i switch everything to s21 gear I can go to about 16ph so I simply am shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
What kind of power do you use? Grid? Renewable?

solar and grid and a prepaid contract to the grid.

my power is cheap but location one  the master transformer is 320 kwatts.

this allows us 230 in winter and 190 in summer

the second location is about 60 kwatts winter 50 kwatts summer.

so 240-290 kwatts.

we may add a second transformer we are trying to do a telsa super charger deal .

if that works we add three super chargers. run two of them and get  a 500kwatt transformer.  this could spare us 200 more kwatts.

so divide by 3.5  maybe 80 s21s on 1 transformer max

our prices for power will always make money but right now a s21 makes 200 x 8.5 = 17 dollars.

it burns 85 kwatts so 85 x 4  cents = 3.40 power

thus 17-3.4 = 13.60 per machine which is decent for 70 units . but i am still shut out of foundry.
and 70 units is 280k out of pocket.

If the machines show up in Jan, I could get 90 days at 1000 a day, then the halving, which means 350 a day.

we are always slowly adding the new gear.

maybe just ten machines for January. say 43k
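
A rough sanity check of the numbers above (all figures are the poster's own estimates, read as: an S21 doing about 200 TH/s, roughly 8.5 cents of revenue per TH per day, about 85 kWh per machine per day, and 4-cent power), with revenue assumed to halve at the halving.
Code:
// Poster's own estimates, not authoritative market data.
#include <cstdio>

int main() {
    const double th_per_machine      = 200.0;   // S21 hashrate, TH/s
    const double usd_per_th_per_day  = 0.085;   // assumed hashprice (~8.5 cents/TH/day)
    const double kwh_per_machine_day = 85.0;    // roughly 3.5 kW running all day
    const double usd_per_kwh         = 0.04;    // 4-cent power
    const int    machines            = 70;

    double revenue = th_per_machine * usd_per_th_per_day;   // ~17 USD/day
    double power   = kwh_per_machine_day * usd_per_kwh;     // ~3.40 USD/day
    std::printf("per machine:  %.2f revenue - %.2f power = %.2f USD/day\n",
                revenue, power, revenue - power);
    std::printf("fleet of %d:  about %.0f USD/day\n",
                machines, machines * (revenue - power));

    // After the halving, assume block-reward revenue roughly halves.
    double rev_halved = revenue / 2.0;
    std::printf("post-halving: about %.0f USD/day for the fleet\n",
                machines * (rev_halved - power));
    return 0;
}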
sr. member
Activity: 1624
Merit: 294
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressively low cost for running a Bitcoin mainnet node.


A few things to note, though. Since people are quoting Satoshi: he never flat out disagreed with data. He also never had a block limit in the beginning. It was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah I talked to foundry told them I had 5.0 ph which is around 50 s19s.

they only accept miners with 20ph.

even if i switch everything to s21 gear I can go to about 16ph so I simply am shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
What kind of power do you use? Grid? Renewable?
legendary
Activity: 4102
Merit: 7765
'The right to privacy matters'
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressively low cost for running a Bitcoin mainnet node.


A few things to note, though. Since people are quoting Satoshi: he never flat out disagreed with data. He also never had a block limit in the beginning. It was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io


yeah I talked to foundry told them I had 5.0 ph which is around 50 s19s.

they only accept miners with 20ph.

even if i switch everything to s21 gear I can go to about 16ph so I simply am shut out of the largest pool.

I burn about 210kwatts and will be fully expanded to 280kwatts in a few weeks.

so mining for the 300-1000 kwatt guy is tough even with cheap power.

the big issue for me is 280kwatts cost me
jr. member
Activity: 38
Merit: 21
I've run a full node for 4 years using a small EC2 instance. It very occasionally has a hiccup. This is a pretty impressively low cost for running a Bitcoin mainnet node.


A few things to note, though. Since people are quoting Satoshi: he never flat out disagreed with data. He also never had a block limit in the beginning. It was Hal who convinced him to do so. Also, it was temporary at 1 MB... it's now 2023. Due to high transaction fees there are now 1000 shitcoins!


Satoshi also explained that eventually Bitcoin will be run by a few big data farms, and users would use thin clients. I think that future is inevitable. It is already here when considering the centralisation of mining.

Data from NFTs is stored in public keys, not on chain. You can verify data existed (by supplying the preimage) but not recover it. So it won't bloat the chain, as the address contains the data already. It's true that subsequent txs from the wallet will likely be related to their coloured status though.


A contentious subject... but I created a protocol that allows for storage of data on Bitcoin and many other networks. Although many disagree, there are also others who believe in the free market.

https://aidios.io
legendary
Activity: 2828
Merit: 6108
Jambler.io
That's with the presumption that every node would have access to above-average internet connection speeds.
Plus for the question, how many nodes are required to be "appropriately" decentralized. I believe there's no right number, BUT I could tell you that the MORE full nodes = MORE security assurances.

2009:
https://www.bbc.com/news/technology-10786874
Quote
Who defines what decentralisation is?
Of course full nodes. The more independent full nodes you have, the more decentralized the whole system is. And obviously, they should be owned by independent people, and controlled from independent machines. Which brings us back to the first quote: if you think that miners can "vote", and full nodes can "only observe", then what kind of decentralization is present in your model?

Wait till the US goes full South Korea mode and Foundry will decide to only accept transactions between whitelisted addresses in its blocks.
Suddenly you will realize you can have 1 billion decentralized nodes but all you can relay is a centralized decision  Wink
Much freedom! Such beauty! Wow!


legendary
Activity: 2898
Merit: 1823
Pardon me ser, but Bitcoin is NOT a Democracy where its participants vote for something to be activated. It's a Proof-of-Work system that requires nodes to do "the work" and solve "math problems" to mine blocks/verify transactions.

Although in your definition, organizing for a proposal to be heard, IS part of the Democratic process.

I totally agree that Bitcoin is not democratic, that's why I put quotes. In Bitcoin you only vote with your associated hashpower as a mining pool. Full nodes don't vote/play a part in consensus. They are just observers of the outcome (block creation) that mining nodes produce. I think we agree on that?


I agree, but it would depend on the node. If it's an economic node, then it plays an important role. Because what secures Bitcoin? The Miners or the Full Nodes? It's a long debate, BUT if Bitcoin requires consensus to "define" what "Bitcoin" is, which is a system/protocol of key assignment and re-assignment of values, then what secures Bitcoin? The nodes. They enforce the rules.

Quote

Quote
It's my thread, and I won't consider it a derailement if it's a debate where we could learn from each other. Please enter into a greater detail. Cool

The article is based on one argument: that bigger blocks will create problems for propagation and decentralisation. What it should do is first define propagation and decentralisation.

Propagation with nodes that are on what bandwidth? And with what specs?

The average download speed globally is more than 50 Mbps and upload is close to 30 (source: https://www.broadbandsearch.net/blog/average-internet-speed-around-the-world ). Let's see how fast a 1 MB block is downloaded with 50 Mbps (that's the speed an average user has currently). We will use https://www.omnicalculator.com/other/download-time .

So for 50 Mbps, the time to download a 1 MB block is well under 1 sec (the website says 0 sec).
For 10 MB blocks the download time is still under 2 sec.
For 100 MB blocks the download time is about 16 sec.
The time to propagate these blocks to a node's connected peers will probably be double that. The whole process stays under a minute even for 100 MB blocks.
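
A quick check of those download times, assuming a 50 Mbps connection (sizes are in megabytes, so multiply by 8 to get megabits), with one extra relay hop as a crude stand-in for propagation.
Code:
// Plain arithmetic; the 50 Mbps figure is the average quoted above.
#include <cstdio>

int main() {
    const double mbps = 50.0;                      // assumed average download speed
    const double block_sizes_mb[] = {1, 10, 100};  // block sizes in megabytes

    for (double size_mb : block_sizes_mb) {
        double seconds = size_mb * 8.0 / mbps;     // megabits divided by megabits/second
        std::printf("%5.0f MB block: %5.1f s to download, ~%5.1f s with one relay hop\n",
                    size_mb, seconds, 2.0 * seconds);
    }
    return 0;
}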

Of course, nodes need time to validate blocks. If I try to do it by hand it will take a lot of time; if I try it with a Raspberry Pi, which is the most common setup for BTC, it will be faster; and if I do it with high-end machines it will be really fast. The problem is that the author doesn't consider the third option at all. Nor does most of the BTC community. The belief is that even a user with a low-spec machine and an awful bandwidth should be able to validate transactions. And all the other users should suffer high fees, accept adoption limits, wait for years for L2 solutions that don't work, etc., because someone who might never use the network, or use it only occasionally, wants to validate his own transactions.

Now let's get to decentralisation. Who defines what decentralisation is? And how decentralised or centralised is a network? A common question I ask: can a 5-node network be decentralised and a 10,000-node network centralised?

Decentralisation doesn't come from the number of nodes. Decentralisation comes from the economic incentives participants have. Being an active node contributes to that, and that's achieved only through PoW.
At some point the article says that soft forks are inclusive while hard forks are exclusive. Why? Because he thinks so. That's the whole article: his opinion, without any scientific facts.
Soft forks (the easy path) make certain that no one is left out of the network, even if they don't update. Who do you think would be interested more than anyone in staying on the network? Those who profit from it, not by speculation on prices but from providing real services. Miners would do it. Listening nodes that can profit would do it. But we want to protect the laziest, most cheapskate user, and I don't know what other description I can give, "because decentralisation".

Decentralisation is not a new concept. Tocqueville wrote his book "Democracy in America" close to 1830. In that book he writes about the importance of participation, voluntary structures, local government and more. If all these were done by following what the weakest person wanted, then there couldn't be decentralisation (and that's why current "democracy" sucks). It would be a race to the bottom, as everyone would have to follow what the worst person wants, and usually the worst person wants to do nothing and have everything. Bitcoin is a race to the top. If you want to be a participant in the network that provides work, you have to learn to cooperate. Look how pools were created. You have to be honest. Look how many dishonest pools disappeared. You have got to learn to trust. Look how miners get paid for their shares because pools earned their trust.

Listening nodes would be more expensive, yes. But that doesn't mean there can't be collaborative nodes. In an environment such as BTC has become, though, where no one trusts anyone, that is not an option. And the fun part is that this forum has a trust evaluation system. And taking that a little further, one of the key elements in finance, trade and much more is trust.

That article mentions the internet rules, and that if you change the rules you create a new network, so no one would ever do a hard fork to change its rules. Imagine an internet that only 7 users per second could use. There would be no online commerce. There would be no fast innovation, as information/knowledge would be limited. And I can't even think what else wouldn't be possible. In summary, that author provides examples without any reasonable facts behind them. He just expresses his opinion, which is pure nonsense.


That's with the presumption that every node would have access to above-average internet connection speeds.

Plus for the question, how many nodes are required to be "appropriately" decentralized. I believe there's no right number, BUT I could tell you that the MORE full nodes = MORE security assurances.
hero member
Activity: 1111
Merit: 584
If you think so, then what do you think about this page? https://en.bitcoin.it/wiki/Bitcoin_is_not_ruled_by_miners
There are two options: this page is wrong, or you are wrong.

I'll add some context and I'll let you decide whether I'm wrong or the page is.
The whitepaper is written on the premise that all nodes at that time were mining nodes, which the author of the wiki doesn't take into account. There was no reason back then to have a "full node", as your node had high chances of earning rewards.
Satoshi points that out in version 0.1.0: https://github.com/trottier/original-bitcoin/commit/4184ab26345d19e87045ce7d9291e60e7d36e096
"To support the network by running a node, select: Options-> Generate Coins". He doesn't just say to support the network by merely running the client.
It is also mentioned in the whitepaper, section 5, what the nodes do in the network. He didn't have another paragraph about what the full nodes' role is.
"Network":
The steps to run the network are as follows:
1) New transactions are broadcast to all nodes.
2) Each node collects new transactions into a block.
3) Each node works on finding a difficult proof-of-work for its block.
4) When a node finds a proof-of-work, it broadcasts the block to all nodes.
5) Nodes accept the block only if all transactions in it are valid and not already spent.
6) Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash

And then the wiki author gets into what he thinks decentralisation is and why it should be rejected.
"If Bitcoin were ruled by miners, then this would currently be quite terrible security-wise. As of 2017, less than 10 individuals command a majority of hashrate. This is probably far more centralized than even most fiat currencies, and completely defeats the main point of Bitcoin, which is to be decentralized money."
How many individual entities, each with their own personal interests, command the fiat money of their own countries, so that Bitcoin becomes even more centralised than the fiat system? Fiat is centralised because there is no consensus between different economic entities with personal interests. So, nonsense again.
Let's move to "efficiency". He says: if you don't like what I've said so far, then move to chaumian ecash, which is better with a taste of PoW added. But Bitcoin is the exact opposite of chaumian money. Not centralised, not just trust based, with a publicly auditable ledger. And PoW by whom? Designated signers, where "as long as a majority of signers are honest, the system remains secure". What do these signers have to lose? Who decides if they are trusted? What's the mechanism to punish the untrusted? No mention, just believe what he says. His thought process is laughable.

Let's move to legal issues. "If it is possible to say that some group of 10 or so miners control Bitcoin absolutely, then these miners may be viewed from a legal perspective as issuing a currency, transmitting money, etc., which are often highly regulated activities." There's no such case, as mining nodes do nothing other than adding and confirming transactions into blocks. They don't transfer anything; they are just confirming a change of ownership by timestamping it (I guess you understand the UTXO model far better than me). FinCEN was specific: https://www.fincen.gov/sites/default/files/shared/FIN-2014-R001.pdf .

So all the wiki author wants is just to demonise pools. Why? Because it fits his narrative of what he wants Bitcoin to be.

And just to sum up: who do you think has more interest in Bitcoin, those who just run a "full node" and have spent 100 dollars, or someone who has spent hundreds of millions and wants to profit from it? Don't forget that an attack on the network just gives you the opportunity to double-spend your own money; you can't steal others'. So would you, as a pool, risk having
your block rejected by other pools (and not by "full nodes"), as they would be the first in the network to see your invalid block? And also lose the work committed to that block? And face the chance of losing your trust and future income if you were an honest node so far? An attempt at such a double-spend would have to involve several millions. And who would accept all of those millions with just 1 conf or 0 conf? No one.

So, to my conclusion, the page is wrong in many fields.


The system is not about "the weakest" or "the worst". It is about "compatible" vs "incompatible". You could have terabyte blocks, here and now. But please, compress your terabyte-sized blocks, and include some proofs, so as not to force absolutely everyone to process all of it. Make it a commitment. Guess what: the block header has 80 bytes. Does that mean you can process only 80 bytes per 10 minutes? No, you can process a lot more. And the same is true here: just commit your terabyte-sized network to the existing 4 MB witness, and you are good to go. Also, by tweaking public keys, you won't need any additional on-chain bytes to form a commitment. Taproot can show you that: you have a 256-bit public key, no matter how big the Ordinals behind a single address are. And in the same way, the network can process much more than 4 MB per 10 minutes, without any hard fork or soft fork, if you know how to make a commitment.
Compatible or incompatible for whom? Pools/nodes that have an economic interest, or a user with a Raspberry Pi?
I don't follow you on the rest, not techy enough.