Topic: please delete - page 4.

legendary
Activity: 1512
Merit: 7340
Farewell, Leo
September 20, 2021, 10:20:11 AM
#75
It's not a huge increase. Well, it's roughly a 40% increase, but it's not out of the question; people's hard drives could handle it.
Nope. I guess most of us who run a full node (e.g. on a Raspberry Pi) do it with a 1TB external drive. If we extended the chain by 10.5GB every month, we'd sooner or later need more storage, so running my own node would become more expensive.
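To put a rough number on it (the ~400GB figure for the current chain is just an assumption of mine; the 10.5GB/month figure is the estimate from earlier in the thread):
Code:
// Back-of-the-envelope estimate of how long a 1TB external drive lasts.
// Both input figures below are assumptions from this thread, not measurements.
#include <iostream>

int main() {
    const double drive_gb     = 1000.0; // advertised 1TB external drive
    const double chain_gb     = 400.0;  // assumed current blocks + chainstate
    const double growth_gb_pm = 10.5;   // assumed monthly chain growth

    const double months = (drive_gb - chain_gb) / growth_gb_pm;
    std::cout << "Roughly " << months << " months (~" << months / 12.0
              << " years) before the drive is full\n";
}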

If Bitcoin SV can pump out gigabyte-sized blocks every now and then
You know that there aren't many transactions broadcast on BSV, right? At least not compared with Bitcoin.

Once you put the entire UTXO set into a block, that's a checkpoint, and people don't need to download anything prior to it
If you're trying to find a cheaper way for me to run a node, don't even think of doing it with checkpoints. If you just want to verify payments, SPV is the solution.

(Sorry, but I didn't read the entire discussion, so I don't know exactly what you're referring to.)
legendary
Activity: 3038
Merit: 4418
Crypto Swap Exchange
September 20, 2021, 10:12:43 AM
#74
Snapshot blocks are possible without any forks; nobody has done it yet. As long as new nodes remain backward-compatible, there is no problem with that. I can imagine, for example, two-week snapshots, done once per difficulty change. Then your new client can download the latest UTXO set and start backward verification, processing 2,016 blocks at a time in compressed form. Later, if a 2,016-block snapshot is not enough, it could be replaced by a 210,000-block snapshot; that would be 104+1/6 two-week periods.

There is no need for a second Genesis Block. One is needed to set a starting point and to prevent the whole chain from being overwritten in the early days, when messing with timestamps, for example by mining blocks dated 1970, could have been possible. But if you want to create a universal solution for the foreseeable future, then you need some rule for creating that kind of block regularly, for example every 2,016 blocks, every 210,000 blocks, and so on.
Backwards validation is not efficient, because you need the state of the blockchain and have to build it as you go. Your UTXO set should be defined within the client rather than downloaded from other sources, because the latter would result in certain security issues.

Checkpoints are actually being deprecated in Core, and they don't have much use right now given that we're using assumevalid. You can also disable both assumevalid and checkpoints if you want, so it is a Bitcoin Core-specific implementation.
It's not a huge increase. Well, it's roughly a 40% increase, but it's not out of the question; people's hard drives could handle it. If Bitcoin SV can pump out gigabyte-sized blocks every now and then, then Bitcoin should be able to put the entire UTXO set into a block every now and then. Once you put the entire UTXO set into a block, that's a checkpoint, and people don't need to download anything prior to it, thus solving the "downloading the blockchain" issue. Some nodes might keep 2 checkpoints, some might keep 10, some might only keep one. No problem!
We absolutely cannot handle blocks that big. You're going to have loads of nodes, especially the underpowered ones, bottlenecked and taking an hour or more to download and validate a data set that size; a 4GB block over a 10 Mbit/s link takes roughly an hour to download before validation even starts.

I'm going to have to ask again: if you think that is an issue, then wouldn't it be better for you to just run an SPV client? Not being able to validate everything is antithetical to what Bitcoin Core has been striving for. Not to argue with the practicality of it, but given there is already a ready-made solution, why would we have to complicate things further? You're also going to run into groups of people who don't want others to choose and define checkpoints for them.
sr. member
Activity: 1190
Merit: 469
September 20, 2021, 08:53:56 AM
#73

Rather than trusting a few for-profit companies, why don't you just keep the older blocks and use a snapshot block as the "new" genesis block?

Yeah exactly!


Quote
Simpler, but the whole UTXO set is very big and would quickly bloat the blockchain. On Bitcoin Core, the chainstate folder (which stores the UTXO set) is around 4GB. Assuming each block is 2.5MB, in 1 month (30 days) the blockchain would grow by ~10.5GB. If you also store the whole UTXO set on the blockchain every month, the blockchain would grow by ~14.5GB.

It's not a huge increase. Well, it's roughly a 40% increase, but it's not out of the question; people's hard drives could handle it. If Bitcoin SV can pump out gigabyte-sized blocks every now and then, then Bitcoin should be able to put the entire UTXO set into a block every now and then. Once you put the entire UTXO set into a block, that's a checkpoint, and people don't need to download anything prior to it, thus solving the "downloading the blockchain" issue. Some nodes might keep 2 checkpoints, some might keep 10, some might only keep one. No problem!
copper member
Activity: 909
Merit: 2301
September 20, 2021, 06:02:18 AM
#72
Quote
Is the "assumeutxo" project similar to what you are suggesting?
It is somewhat similar, but I assume full backward compatibility. So, creating a UTXO set for blocks from M to N is possible. Sharing that set with new nodes is possible, but old nodes will know nothing about it. New nodes could use things like that, because it would be better than putting trust in SPV clients or Electrum servers. However, replacing the existing system with all-pruned nodes and removing old data entirely is impossible, because that is a backward-incompatible change, and then creating new full nodes would be impossible.
full member
Activity: 228
Merit: 156
September 20, 2021, 04:58:59 AM
#71
Snapshot blocks are possible without any forks; nobody has done it yet. As long as new nodes remain backward-compatible, there is no problem with that. I can imagine, for example, two-week snapshots, done once per difficulty change. Then your new client can download the latest UTXO set and start backward verification, processing 2,016 blocks at a time in compressed form. Later, if a 2,016-block snapshot is not enough, it could be replaced by a 210,000-block snapshot; that would be 104+1/6 two-week periods.

Is the "assumeutxo" project similar to what you are suggesting?
copper member
Activity: 909
Merit: 2301
September 20, 2021, 04:24:45 AM
#70
Snapshot blocks are possible without any forks; nobody has done it yet. As long as new nodes remain backward-compatible, there is no problem with that. I can imagine, for example, two-week snapshots, done once per difficulty change. Then your new client can download the latest UTXO set and start backward verification, processing 2,016 blocks at a time in compressed form. Later, if a 2,016-block snapshot is not enough, it could be replaced by a 210,000-block snapshot; that would be 104+1/6 two-week periods.

There is no need for a second Genesis Block. One is needed to set a starting point and to prevent the whole chain from being overwritten in the early days, when messing with timestamps, for example by mining blocks dated 1970, could have been possible. But if you want to create a universal solution for the foreseeable future, then you need some rule for creating that kind of block regularly, for example every 2,016 blocks, every 210,000 blocks, and so on.
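As a rough sketch of such a rule (purely illustrative; none of these names exist in Bitcoin Core, and the constants simply mirror the retarget and halving intervals mentioned above):
Code:
// Illustrative only: a scheduling rule for regular "snapshot blocks".
#include <iostream>

constexpr int RETARGET_INTERVAL = 2016;   // one difficulty period (~2 weeks)
constexpr int HALVING_INTERVAL  = 210000; // one subsidy era (~4 years)

// A UTXO snapshot is produced once per difficulty change...
bool IsSnapshotHeight(int height) {
    return height > 0 && height % RETARGET_INTERVAL == 0;
}

// ...and old two-week snapshots can later be superseded by one snapshot
// per halving era (210,000 / 2,016 = 104 + 1/6 two-week periods).
bool IsEraSnapshotHeight(int height) {
    return height > 0 && height % HALVING_INTERVAL == 0;
}

int main() {
    std::cout << IsSnapshotHeight(210000) << " "
              << IsEraSnapshotHeight(210000) << "\n"; // prints "1 1"
}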

Edit: Also, we have some checkpoints, so overwriting them is impossible:
Code:
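// Hard-coded mainnet checkpoints, as defined in Bitcoin Core's src/chainparams.cpp: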
checkpointData = {
{
{ 11111, uint256S("0x0000000069e244f73d78e8fd29ba2fd2ed618bd6fa2ee92559f542fdb26e7c1d")},
{ 33333, uint256S("0x000000002dd5588a74784eaa7ab0507a18ad16a236e7b1ce69f00d7ddfb5d0a6")},
{ 74000, uint256S("0x0000000000573993a3c9e41ce34471c079dcf5f52a0e824a81e7f953b8661a20")},
{105000, uint256S("0x00000000000291ce28027faea320c8d2b054b2e0fe44a773f3eefb151d6bdc97")},
{134444, uint256S("0x00000000000005b12ffd4cd315cd34ffd4a594f430ac814c91184a0d42d2b0fe")},
{168000, uint256S("0x000000000000099e61ea72015e79632f216fe6cb33d7899acb35b75c8303b763")},
{193000, uint256S("0x000000000000059f452a5f7340de6682a977387c17010ff6e6c3bd83ca8b1317")},
{210000, uint256S("0x000000000000048b95347e83192f69cf0366076336c639f9b7228e9ba171342e")},
{216116, uint256S("0x00000000000001b4f4b433e81ee46494af945cf96014816a4e2370f11b23df4e")},
{225430, uint256S("0x00000000000001c108384350f74090433e7fcf79a606b8e797f065b130575932")},
{250000, uint256S("0x000000000000003887df1f29024b06fc2200b55f8af8f35453d7be294df2d214")},
{279000, uint256S("0x0000000000000001ae8c72a0b0c301f67e3afca10e819efa9041e458e9bd7e40")},
{295000, uint256S("0x00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983")},
}
};
That means the first 295,000 blocks are impossible to overwrite, even if you have enormous computing power and somehow mount not just a 51% attack, but a 99.99% attack.
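To show what that guarantee means in practice, here is a simplified sketch of how such a hard-coded map can be used during validation (this is just the idea, not Bitcoin Core's actual code):
Code:
// Simplified idea only: a block at a checkpointed height whose hash differs
// from the hard-coded one is rejected, so no amount of hashpower can rewrite
// the chain below the last checkpoint.
#include <map>
#include <string>

using BlockHash = std::string; // stand-in for uint256

const std::map<int, BlockHash> MAINNET_CHECKPOINTS = {
    { 11111, "0x0000000069e244f73d78e8fd29ba2fd2ed618bd6fa2ee92559f542fdb26e7c1d"},
    // ... remaining entries as in the snippet above ...
    {295000, "0x00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983"},
};

// Returns false if a block contradicts a hard-coded checkpoint.
bool PassesCheckpoints(int height, const BlockHash& hash) {
    const auto it = MAINNET_CHECKPOINTS.find(height);
    return it == MAINNET_CHECKPOINTS.end() || it->second == hash;
}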
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
September 20, 2021, 03:35:33 AM
#69
People want to argue that you can't trust the new genesis block. Bullcrap, you could. If enough big websites published the genesis block, then people could come to a consensus pretty fast. I'm sure bitcoin.com and Coinbase would oblige (also, I heard Twitter is useful for publishing new genesis blocks). Grin And Twitter has "verified" accounts, so we could be sure!

Rather than trusting a few for-profit companies, why don't you just keep the older blocks and use a snapshot block as the "new" genesis block? In this scenario, a new node could either:
1. Trust the snapshot block, which means syncing only from the snapshot block onwards.
2. Sync starting from the snapshot block, then download and verify all older blocks.
3. Download and verify all blocks sequentially (just like Bitcoin Core currently works).

But seriously, Bitcoin was never designed to verify the UTXO set without having the entire blockchain, so trying to bolt that feature on now results in something that probably shouldn't even be attempted. A redesign of Bitcoin, from the ground up, might be more appropriate.

That would require a hard fork of the Bitcoin protocol and major changes to Bitcoin software.

I mean, why not just stick the entire darn UTXO set into a block once a month? That seems way simpler.

Simpler, but the whole UTXO set is very big and would quickly bloat the blockchain. On Bitcoin Core, the chainstate folder (which stores the UTXO set) is around 4GB. Assuming each block is 2.5MB, in 1 month (30 days) the blockchain would grow by ~10.5GB. If you also store the whole UTXO set on the blockchain every month, the blockchain would grow by ~14.5GB.
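For reference, the arithmetic behind those numbers (all inputs are rough assumptions: 2.5MB average block, 144 blocks per day, ~4GB chainstate):
Code:
// Re-deriving the ~10.5GB and ~14.5GB per month figures quoted above.
#include <iostream>

int main() {
    const double block_mb       = 2.5;   // assumed average block size
    const double blocks_per_day = 144.0; // one block per ~10 minutes
    const double chainstate_gb  = 4.0;   // approximate size of the UTXO set

    const double monthly_gb = block_mb * blocks_per_day * 30.0 / 1024.0;
    std::cout << "Blocks only:            ~" << monthly_gb << " GB/month\n";                 // ~10.5
    std::cout << "Blocks + UTXO snapshot: ~" << monthly_gb + chainstate_gb << " GB/month\n"; // ~14.5
}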
sr. member
Activity: 1190
Merit: 469
September 20, 2021, 12:07:01 AM
#68

I agree it's complicated, but what makes you think it seems overly complicated?

Just cut the blockchain at a specific block height, have a new genesis block, and jettison the blocks before that. Problem solved. If you can have a genesis block in 2009 or whenever Satoshi made it, you can surely do that today. You could make the new genesis block say something like "Biden presidency is in shambles, Afghanistan situation makes him look very bad." That way people would know that your genesis block occurred some time after August 2021.

People want to argue that you can't trust the new genesis block. Bullcrap, you could. If enough big websites published the genesis block, then people could come to a consensus pretty fast. I'm sure bitcoin.com and Coinbase would oblige (also, I heard Twitter is useful for publishing new genesis blocks). Grin And Twitter has "verified" accounts, so we could be sure!

But seriously, Bitcoin was never designed to verify the UTXO set without having the entire blockchain, so trying to bolt that feature on now results in something that probably shouldn't even be attempted. A redesign of Bitcoin, from the ground up, might be more appropriate. I mean, why not just stick the entire darn UTXO set into a block once a month? That seems way simpler.

Quote

If all versions of Electrum (including altcoin forks) fail to work, it's more likely that something is wrong with your OS.


Doubtful. I can install pretty much any other software and run it with no problem.

Quote
Then what are you looking for? Something like this (https://bitcointalksearch.org/topic/m.57186698)?

No, not really. That's just the same thing as going to a block explorer and pasting in the signed transaction. It would be nice if Bitcoin Core could just let me do a sendrawtransaction without having to download the entire damn blockchain! I know it's possible, but they don't want to let people do that.
sr. member
Activity: 1190
Merit: 469
September 19, 2021, 01:00:17 AM
#67

There have been discussions about it for Bitcoin, but the community is generally not interested or doesn't like it, because some trust is required.

Yeah, there seemed to be a lot of pushback from some members about this scheme, although not everyone was against the idea.

Quote
Obviously there are downsides. I don't remember if there is any standard for UTXO commitments, but here are a few past discussions about UTXO commitments and other similar ideas:

Yeah, what stands out to me is that this UTXO commitment scheme seems overly complicated. Maybe more complicated than anything else in Bitcoin. When you have to make new things that are more complicated than the old things just to gain some type of convenience, that's like letting the tail wag the dog.


Quote
You could always use Electrum or a block explorer, though.

Well, me personally, I can't use Electrum. It stopped working on my computer. All versions of it, not just the BTC version but the LTC version and others. They don't care about that, though. It used to work, but then it stopped. All you can do with a block explorer is push signed transactions (I think!). That's not what I'm looking for.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
September 18, 2021, 03:55:32 AM
#66
Similar technology already exists; it's called a "UTXO commitment". The differences are:
1. It doesn't delete any older blocks before the snapshot, so it's possible to verify whether the snapshot itself is valid.
2. The goal is to speed up the node sync process. A node has the option to blindly trust it or to verify it later after downloading all the blocks.
If it's so great, then how come no one ever introduced it into Bitcoin?

There have been discussions about it for Bitcoin, but the community is generally not interested or doesn't like it, because some trust is required.

There have to be some downsides as well, right? Exactly how does this UTXO commitment thing work? We should go into some detail so we can see whether it really is any good or not. The thing we don't want to do in Bitcoin is add a lot of complexity and overhead in terms of processing and storage. But other than that, I'm all ears. Grin

Obviously there are downsides. I don't remember if there is any standard for UTXO commitments, but here are a few past discussions about UTXO commitments and other similar ideas:
1. Idea: Add the UTXO set to blocks
2. Proposal: including (UTXO) state hash in blocks (to eliminate IBD for new nodes)

Especially if it would finally allow me to execute "sendrawtransaction" from Bitcoin Core without having to download the entire blockchain.

You could always use Electrum or a block explorer, though.
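For what it's worth, a block-explorer broadcast can even be scripted. A minimal sketch; the Esplora-style "POST /tx" endpoint on blockstream.info is an assumption about one public service, and the transaction hex is a placeholder:
Code:
// Sketch: push an already-signed raw transaction through a public explorer
// API instead of a local full node. Build with: g++ broadcast.cpp -lcurl
#include <curl/curl.h>
#include <iostream>
#include <string>

int main() {
    const std::string raw_hex = "0200000001..."; // your signed transaction hex (placeholder)

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Assumed public Esplora broadcast endpoint; request body is the raw hex.
    curl_easy_setopt(curl, CURLOPT_URL, "https://blockstream.info/api/tx");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, raw_hex.c_str());

    // The response (txid on success, or an error message) goes to stdout by default.
    const CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << curl_easy_strerror(res) << "\n";

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}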
sr. member
Activity: 1190
Merit: 469
September 18, 2021, 01:14:58 AM
#65

Similar technology already exists; it's called a "UTXO commitment". The differences are:
1. It doesn't delete any older blocks before the snapshot, so it's possible to verify whether the snapshot itself is valid.
2. The goal is to speed up the node sync process. A node has the option to blindly trust it or to verify it later after downloading all the blocks.


If it's so great, then how come no one ever introduced it into Bitcoin? There have to be some downsides as well, right? Exactly how does this UTXO commitment thing work? We should go into some detail so we can see whether it really is any good or not. The thing we don't want to do in Bitcoin is add a lot of complexity and overhead in terms of processing and storage. But other than that, I'm all ears. Grin

Especially if it would finally allow me to execute "sendrawtransaction" from Bitcoin Core without having to download the entire blockchain.
sr. member
Activity: 1190
Merit: 469
September 18, 2021, 12:22:14 AM
#64

I don't understand. Why should an opendime be less secure than another kind of paper / offline wallet?

Because you can't make a backup of them.
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
September 17, 2021, 10:24:32 AM
#63
I don't understand. Why should an opendime be less secure than another kind of paper / offline wallet?

Opendimes can't survive if they are crushed, melted by high temperatures, or exposed to water for long periods of time. Your typical steel backup will stick around ... but it's a different use case.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
September 17, 2021, 06:38:42 AM
#62
...
They still have to download the entire blockchain, though, and keep it updated. That's kind of a hassle...

Why is it a hassle?
No matter what the blockchain looks like, it's the same process: start the node and wait till it's done.
Then keep it running to get all the new blocks as they are mined.

If you prune off a bunch of the blockchain, let me check the process... yep: start a node, wait till it's done, and then keep it running.

A new 1TB drive is under $40 here in the US.
A 4th-gen i5 PC with 8GB of RAM and a 1TB drive is under $150, delivered to your door.

Blockchain size is only an issue for people who want to make it an issue.

-Dave
Edit to add: as of now there is a sale going on at newegg.com for a 16TB drive at $325, so you can just get one of those and not worry about it ever again...
hero member
Activity: 910
Merit: 5935
not your keys, not your coins!
September 17, 2021, 04:33:20 AM
#61
Yeah, that really needs to be looked at, because I have no interest in storing 400GB on a hard drive, let alone ten times that.
Not sure where you get the '10 times that' from, but it will take a looong time for the blockchain to get that large Cheesy By then, it might well be cheaper to get a 4TB SSD than it is to get a 400GB SSD now.
I agree with Dabs here.
Mobile phones will have 10 TB of storage in 10 years, and your average low-budget laptop will be running on 16 to 20 TB SSDs.
For anyone who stores more than a few thousand dollars' worth of bitcoin, it would not be a bad idea to run a full node that costs a hundred bucks.

In general, you're trying to argue for huge, substantial changes to Bitcoin just to save 4GB of disk space, which costs around 28 cents at a current price of ~8 cents per GB. Nobody would change Bitcoin in such a way to save 28 cents per node.

This makes the whole topic pointless until there is a solution for the blockchain size 'problem', because nobody cares about 4GB more or less if they already have to store 400GB of blockchain. On that note, even this is not an issue, as has been explained. I can get a 2TB (!!) SSD now for 140 bucks, which is barely the cost of many hardware wallets, plug it into an old PC or laptop, and have a fully validating node with enough storage to run for something like 20 years or more.

Maybe 0.001 BTC, but 0.01 BTC is like $500. I don't think I would trust an Opendime with that much money. But I guess it's all relative. If someone can afford to lose five hundred dollars and it wouldn't hurt them, then maybe they would be fine with having an Opendime lying around with that much on it. It really is like a ticking time bomb until you take the funds off.
I don't understand. Why should an opendime be less secure than another kind of paper / offline wallet?
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
September 16, 2021, 06:55:19 AM
#60
As time goes on, fewer and fewer people/organizations are going to maintain a copy of the entire blockchain. Maybe one day no one will have an entire copy. That would be a pretty bad situation.

Unlikely. There are lots of companies that do not delete tons of data. All the social media networks. ...

There are plenty of otherwise "normal" people with large hard drives who keep data and backups of that data.

And there are the ever-present pirates who hoard abandoned software, games, and movies on terabytes of storage.

Someone will always be keeping the entire copy. Right now, that's at least 10k full nodes, and another 20k lightning nodes (I think they need to be full nodes as well).

I do like the idea of committed transactions and summary "re-genesis" blocks, but I don't think that will be a problem for several decades, and by then storage may have advanced and become cheaper. Mobile phones will have 10 TB of storage in 10 years, and your average low-budget laptop will be running on 16 to 20 TB SSDs.

For anyone who stores more than a few thousand dollars' worth of bitcoin, it would not be a bad idea to run a full node that costs a hundred bucks.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
September 16, 2021, 04:30:02 AM
#59
It doesn't... Relative to the rest of the spam that the network has to deal with. UTXOs are always getting added and deleted so the growth shouldn't be that significant. If your logic is that each UTXO imposes a burden on the network, then each transaction imposes a much higher burden than that, for which there is zero compensation to the nodes. How do we go about solving that?
I get the feeling that your question is somewhat rhetorical in nature and that you don't really believe anything needs to be "solved". But there are possible solutions to the blockchain size issue. I even thought of one myself recently. Actually, two of them. I'll share one of them here; the other deserves a bigger platform, maybe its own thread!

Just take a snapshot of the UTXO set after mining block #XYZ. The UTXO set at that time becomes the new genesis block, and that is all that would need to be saved. Rinse and repeat every 10 years or so. Bitcoin purists and crypto historians can enjoy keeping the old blockchain and seeing how things got to the state they are in, if they want to, while the network goes about its business in a more efficient manner.

Similar technology already exists; it's called a "UTXO commitment". The differences are:
1. It doesn't delete any older blocks before the snapshot, so it's possible to verify whether the snapshot itself is valid.
2. The goal is to speed up the node sync process. A node has the option to blindly trust it or to verify it later after downloading all the blocks.

Or just use an SPV wallet if you can't afford, or don't want, to run a full node.
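To give a feel for what a "UTXO commitment" actually commits to, here is a very rough sketch. The structs and the digest below are purely illustrative and don't come from any real implementation; actual proposals (and the set hash exposed by Core's gettxoutsetinfo) use a proper cryptographic construction such as MuHash rather than std::hash:
Code:
// Reduce the whole UTXO set to one digest that a block could commit to,
// so a new node can check a downloaded snapshot against it.
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct Outpoint { std::string txid; uint32_t vout; };
struct Coin     { int64_t amount_sats; std::string script_pubkey_hex; };

// Deterministic ordering so every node derives the same digest.
bool operator<(const Outpoint& a, const Outpoint& b) {
    return a.txid != b.txid ? a.txid < b.txid : a.vout < b.vout;
}

uint64_t UtxoSetDigest(const std::map<Outpoint, Coin>& utxos) {
    uint64_t digest = 0;
    std::hash<std::string> h; // placeholder; a real commitment needs a cryptographic hash
    for (const auto& [op, coin] : utxos) {
        // Fold each (outpoint, coin) entry into the running digest.
        digest ^= h(op.txid + ':' + std::to_string(op.vout) + ':' +
                    std::to_string(coin.amount_sats) + ':' + coin.script_pubkey_hex)
                  + 0x9e3779b97f4a7c15ULL + (digest << 6) + (digest >> 2);
    }
    return digest;
}

int main() {
    std::map<Outpoint, Coin> utxos;
    utxos[{"<txid placeholder>", 0}] = {5000000000, "<scriptPubKey hex placeholder>"};
    std::cout << std::hex << UtxoSetDigest(utxos) << "\n";
}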
legendary
Activity: 3038
Merit: 4418
Crypto Swap Exchange
September 16, 2021, 12:18:33 AM
#58
They still have to download the entire blockchain, though, and keep it updated. That's kind of a hassle.

Once the network came to a consensus about the new genesis block, it would be just like Satoshi's genesis block; everyone could trust it just as much. There's always a risk of the things you mentioned, even without doing this. As time goes on, fewer and fewer people/organizations are going to maintain a copy of the entire blockchain. Maybe one day no one will have an entire copy. That would be a pretty bad situation.
Why do you need a full node? For the majority, any SPV client would suffice, and it wouldn't incur much resource usage either.

The problem with a snapshot is that someone has to define the interval or the block height at which it occurs, and consensus will never be reached on that. The decision will ultimately fall to a certain group of people instead of the community as a whole. Satoshi's genesis block was universally defined as block zero, which the first Bitcoin client shipped with; there was no consensus process about that. Having to trust someone on the state of the blockchain instead of verifying it as a whole would, to me, just be akin to using an SPV client.

The issue that you've stated so far is not a pressing issue at all. The HDD argument is not very valid; areal density is improving year on year, and prices have decreased thanks to lower production costs and increasing drive density.

I never heard that one before, but if Satoshi said that, then I guess I have to rethink my idea about making UTXOs expire. I can't be contradicting the man.
Not saying that we shouldn't follow it, but it really doesn't make sense. Bitcoin belongs to the community, not to Satoshi. It really doesn't matter what he says; just look at the issue on its practicality as well as its ethics. On this issue, the problem lies with the ethics.
sr. member
Activity: 1190
Merit: 469
September 15, 2021, 11:37:49 PM
#57

It wasn't really a rhetorical question. Given how the argument is centered on UTXO size, I was wondering if you were concerned about blockchain size as well.

Yeah, of course I'm concerned about it. Unless you download the entire blockchain, you can't really interact fully with it.

Quote
I think this "solution" has been discussed quite a few times, and I don't really find it an issue anymore. We've had pruning for quite a few years now, and it describes exactly what you're talking about: users choose how much block data they want to keep while discarding the rest and retaining the UTXO set.

They still have to download the entire blockchain, though, and keep it updated. That's kind of a hassle.


Quote
Anyhow, throughout my time here, I've seen this solution proposed more than a dozen times. The reason the UTXO commitment or blockchain truncation isn't viable right now is that the user has to trust that the commitment is accurate and isn't intentionally manipulated. I would think that this only serves to solve the problem with regard to storage space, for which pruning is sufficient.

Once the network came to a consensus about the new genesis block, it would be just like Satoshi's genesis block; everyone could trust it just as much. There's always a risk of the things you mentioned, even without doing this. As time goes on, fewer and fewer people/organizations are going to maintain a copy of the entire blockchain. Maybe one day no one will have an entire copy. That would be a pretty bad situation.

Quote
Will you listen to Satoshi?

Quote from: satoshi
You should never delete a wallet.

Wallets/UTXOs *never* expire.

I never heard that one before, but if Satoshi said that, then I guess I have to rethink my idea about making UTXOs expire. I can't be contradicting the man.
legendary
Activity: 990
Merit: 1108
September 15, 2021, 05:28:58 AM
#56
Maybe 0.001 btc but 0.01 BTC is like $5000.

More like $500.