Topic: FACT CHECK: Bitcoin Blockchain will be 700GB in 4 Years (page 2)

legendary
Activity: 1456
Merit: 1000
Does it mean that every miner will have 700 GB of data?
I have no idea about that. Does the blockchain reside on every miner's system, or is it stored in one particular system? Well, 700 GB is quite a large number; even a database of only 3 columns would be huge at 700 GB.

Miners find ways to avoid having to run full nodes with a full data set. Don't worry about the miners' use of full nodes; worry about the miners being concentrated into small pockets of big industrial groups. All it takes is for two or three big miners to collude and they can fix the game to suit their ends. The only thing stopping them from colluding to rig the game is self-interest, so as not to collapse the market price. If they get a government visit, however, governments don't care about corporate self-interest. Maybe we should have a mining canary?

It is looking like users will just be users, so the average person can simply use Bitcoin without worrying about such things as the backbone of the system.

We are likely in a transition state where home connections are going to reach maximum capacity. The transition then is towards professionally hosted nodes.

This guy on r/btc posted some stats about his node's bandwidth consumption. He has 200 Mbit download and 20 Mbit upload, which is a very good connection in the Western world:

Code:
month        rx      |     tx      |    total    |   avg. rate
------------------------+-------------+-------------+---------------
  Feb '16      8.87 GiB |   22.38 GiB |   31.25 GiB |  104.62 kbit/s
  Mar '16    109.58 GiB |  635.21 GiB |  744.79 GiB |    2.33 Mbit/s
  Apr '16    144.85 GiB |    1.05 TiB |    1.19 TiB |    3.95 Mbit/s
  May '16    112.24 GiB |    1.08 TiB |    1.19 TiB |    3.80 Mbit/s
  Jun '16     95.28 GiB |  880.11 GiB |  975.38 GiB |    3.16 Mbit/s
  Jul '16     90.72 GiB |  925.71 GiB |    0.99 TiB |    3.18 Mbit/s
  Aug '16    178.99 GiB |    1.02 TiB |    1.20 TiB |    3.84 Mbit/s
  Sep '16    133.12 GiB |    1.03 TiB |    1.16 TiB |    3.83 Mbit/s
  Oct '16    115.43 GiB |    1.18 TiB |    1.30 TiB |    4.16 Mbit/s
  Nov '16     15.69 GiB |  213.81 GiB |  229.50 GiB |    6.46 Mbit/s
------------------------+-------------+-------------+---------------
estimated    136.41 GiB |    1.82 TiB |    1.95 TiB |

He is reaching his connection limits, but that depends on how he sets his inbound/outbound connections.
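
For reference, this kind of usage can be capped with existing Bitcoin Core options; a minimal sketch (the option names exist in 0.12+, the values here are purely illustrative):

Code:
# limit peer count and set a soft daily upload target (in MiB per 24h)
bitcoind -maxconnections=40 -maxuploadtarget=5000

His peer list at the time, for reference: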

Code:
>$ bitcoin-cli getpeerinfo | grep subver | sort | uniq -c | sort -nr
   21     "subver": "/Satoshi:0.13.0/",
   19     "subver": "/Satoshi:0.12.1/",
    9     "subver": "/bitcoinj:0.13.4/Bitcoin Wallet:4.46/",
    9     "subver": "/bitcoinj:0.13.2/MultiBitHD:0.1.4/",
    9     "subver": "/bitcoinj:0.13.1/Bitsquare:0.3.3/",
    9     "subver": "/bitcoinj:0.12.2/Bitcoin Wallet:2.9.3/",
    9     "subver": "/BitCoinJ:0.11.2/MultiBit:0.5.18/",
    7     "subver": "/Satoshi:0.12.0/",
    7     "subver": "/Satoshi:0.11.2/",
    6     "subver": "/Satoshi:0.11.0/",
    6     "subver": "/BitcoinUnlimited:0.12.1(EB16; AD4)/",
    4     "subver": "/Satoshi:0.9.99/",
    4     "subver": "/Satoshi:0.8.1/",
    3     "subver": "/iguana 0.00/",
    3     "subver": "/bitcoinj:0.14-SNAPSHOT/",
    3     "subver": "/bitcoinj:0.14.3/Bitcoin Wallet:5.03/",
    3     "subver": "/bitcoinj:0.12.2/",
    2     "subver": "/Satoshi:0.13.1/",
    2     "subver": "/Satoshi:0.10.1/",
    2     "subver": "/Classic:0.12.0/",
    2     "subver": "/btcwire:0.4.1/btcd:0.12.0/",
    2     "subver": "/bitcore:1.1.0/",
    1     "subver": "/ViaBTC:bitpeer.0.2.0/",
    1     "subver": "/Satoshi:0.9.1/",
    1     "subver": "/Satoshi:0.12.1(bitcore)/",
    1     "subver": "/Satoshi:0.11.1/",
    1     "subver": "/Bitcoin XT:0.11.0/",
    1     "subver": "/bitcoinj:0.14.3/Bitcoin Wallet:5.02/",
    1     "subver": "/bitcoinj:0.14.3/",
    1     "subver": "/BitCoinJ:0.11.3/",
    1     "subver": "",

https://www.reddit.com/r/btc/comments/5b0g4x/remember_that_time/
sr. member
Activity: 280
Merit: 250
Does it mean that every miner will have 700 GB of data?
I have no idea about that. Does the blockchain reside on every miner's system, or is it stored in one particular system? Well, 700 GB is quite a large number; even a database of only 3 columns would be huge at 700 GB.
sr. member
Activity: 812
Merit: 250
A Blockchain Mobile Operator With Token Rewards
Devs should just build the clients with a built-in, highly pruned blockchain, and let that be the starting point for syncing old blocks when running that client.

Nodes don't need the history from day 1 to work; all they need is a starting point they know will agree with the rest of the network.
So exactly what is this supposed to do? Centralize Bitcoin in order to avoid downloading a part of the blockchain? That's a terrible idea. If you don't have enough storage space right now, you can run a pruned node.
All it would centralize is the source from which you get this "built-in highly pruned blockchain start point". But I guess you could go a step further and get the network to validate that this "start point" is not a lie, and does in fact accurately represent the past history. Hell, the network itself could make this start point available for download, instead of having the network send the blockchain in its entirety.

It means placing some trust in the client you run, but that's already the case today, and it kind of always will be...
False.
If you subscribe to the idea that you don't need to trust the code behind the client you choose to run, because it has been peer reviewed by many people and is known to do exactly what you'd expect it to do, then it isn't much of a stretch to say that you don't need to trust the "built-in highly pruned blockchain start point" for the same reason.
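
For context, pruning is already a one-line option in Bitcoin Core (available since 0.11); a minimal sketch, with 550 MiB being the smallest allowed target:

Code:
# validate the full chain during initial sync, then discard old raw block files
bitcoind -prune=550

A pruned node still verifies every block while syncing; it only stops storing (and serving) the old raw block data afterwards.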
legendary
Activity: 2674
Merit: 3000
Terminated.
Devs should just build the clients with a built-in, highly pruned blockchain, and let that be the starting point for syncing old blocks when running that client.

Nodes don't need the history from day 1 to work; all they need is a starting point they know will agree with the rest of the network.
So exactly what is this supposed to do? Centralize Bitcoin in order to avoid downloading a part of the blockchain? That's a terrible idea. If you don't have enough storage space right now, you can run a pruned node.

It means placing some trust in the client you run, but that's already the case today, and it kind of always will be...
False.
sr. member
Activity: 812
Merit: 250
A Blockchain Mobile Operator With Token Rewards
Devs should just build the clients with a built-in, highly pruned blockchain, and let that be the starting point for syncing old blocks when running that client.

Nodes don't need the history from day 1 to work; all they need is a starting point they know will agree with the rest of the network.

It means placing some trust in the client you run, but that's already the case today, and it kind of always will be... unless Core not only insists that everyone must be able to run a node, but code one from scratch too.
legendary
Activity: 1456
Merit: 1000
Will probably reference this in an update to the chart.

Quote from: nullc
Rough numbers, as I might have omitted a byte here or there since this is a Reddit comment:
A typical 2-in, 2-out transaction:
nversion + ninputs + 2*(txid + vin + sequence + scriptlen + pubkey + signature) + noutputs + 2*(scriptlen + dup + hash160 + hash + equalverify + checksig + value) + nlocktime
1 + 1 + 2*(32+4+4+1+34+73) + 1 + 2*(1+1+1+1+21+1+1+8) + 4 = 373
Maximum per block: 2680
With segwit (p2wpkh):
1 + 1 + 2*(32+4+4+1+(34+73)*0.25) + 1 + 2*(1+1+1+1+21+1+1+8) + 4 = 212
Maximum per block: 4716
If instead we do the same with 2-of-3 multisig, which makes up a significant percentage of the transactions on the network, we get:
nversion + ninputs + 2*(txid + vin + sequence + scriptlen + 3*pubkey + 2*signature + redeempush + dummy + checksig) + noutputs + 2*(scriptlen + hash160 + hash + equal + value) + nlocktime
1 + 1 + 2*(32+4+4+1+3*34+2*73+1+1+1) + 1 + 2*(1+1+21+1+8) + 4 = 655
Maximum per block: 1526
With segwit (p2wsh):
1 + 1 + 2*(32+4+4+1+(3*34+2*73+1+1+1)*0.25) + 1 + 2*(1+1+21+1+8) + 4 = 278.50
Maximum per block: 3590
These examples use direct P2WSH instead of P2SH-P2WSH; the latter will be somewhat more common at first and is somewhat less efficient.

https://www.reddit.com/r/btc/comments/5azo8c/eli_40_how_do_we_get_17x_the_current_transaction/
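
As a quick sanity check, the non-segwit sums above can be reproduced on the command line (assuming a 1,000,000-byte block for the per-block maxima):

Code:
$ echo '1+1+2*(32+4+4+1+34+73) + 1 + 2*(1+1+1+1+21+1+1+8) + 4' | bc    # 2-in/2-out P2PKH
373
$ echo '1+1+2*(32+4+4+1+3*34+2*73+1+1+1) + 1 + 2*(1+1+21+1+8) + 4' | bc    # 2-of-3 multisig
655
$ echo '1000000/373; 1000000/655' | bc    # transactions per 1 MB block
2680
1526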
legendary
Activity: 2674
Merit: 3000
Terminated.
Lauda, why do you go from talking about a block of 1mb... to jump immediately to blocks of [random number] or 22.8mb (100gb a month)?
Why are you not thinking rationally about what were previously the consensually safe numbers, something like
2mb in 2015, 4mb by 2018, 8mb by 2021, 16mb by 2024?
Calm down. This has nothing to do with what you're writing about, nor does it have anything to do with what the user wanted. I gave them the current internet speed required to download the maximum size of a block (currently) and an arbitrary guess at the point where people would likely not bother at all. He asked for the *size* at which users would not want to download at current speeds, not what was consensually safe or not (which I clearly stated we are disregarding). One more time: the 100GB per month is completely arbitrary.

-snip-
Everything else is doomsday propaganda and mumbo jumbo not relevant to this thread.
legendary
Activity: 4424
Merit: 4794
In this thread some guy talks about the 700GB in 4 years prediction, which seems to be wrong according to DannyHamilton. It seems there is nothing to worry about. You would need to raise the blocksize + have blocks full all the time to reach numbers like those... so the blockchain will be lighter.

https://bitcointalksearch.org/topic/ive-read-somewhere-in-4-years-the-blockchain-will-be-700gb-1667806

The exponential graph on earlier pages is wrong not so much about the number, but in how he assumes that number based purely on a curve, because he didn't take into account the consensus limits, which hold back natural growth.
E.g.:
If segwit does not activate, we will see the bloat held under the consensus limit of 1mb a block, meaning the maximum bloat in 4 years for the entire blockchain will be about 300gb.

But if segwit were activated within the year, followed by a year of adoption, we will see the bloat held under the limit of ~1.8mb a block, meaning the maximum bloat in 4 years for the entire blockchain will be about 300gb.

But if other features were activated, followed by a year of adoption to fill the remaining 2.2mb of spare weight, we will see the bloat held under the consensus limit of ~4mb a block, meaning the maximum bloat in 4 years for the entire blockchain will be about 650gb.

Thus it's not a curve, but a curve then line, curve then line, curve then line.

Of course that's all theory, because segwit may not activate, or may activate sooner.
Also, everyone may move their funds over to segwit-compatible HD seed addresses sooner, causing the bloat to hit the red line in 2017 instead of 2018; and if other features then fill the remaining weight sooner, the bloat hits the yellow line sooner, and thus we have more than 650gb of bloat in 4 years. That makes 700gb+ a possible and not irrational number, even if the OP of that topic didn't arrive at the number using a logical/rational methodology.
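
A rough sketch of where those ceilings come from (assuming ~144 blocks a day, roughly 100gb of chain already on disk, and an illustrative phasing of 1 year at 1mb, 1 year at ~1.8mb, then ~4mb weight; results in GB, upper bounds rather than predictions):

Code:
$ echo 'scale=2; 100 + 144*365*1*4/1000' | bc              # held at 1mb for 4 more years
310.24
$ echo 'scale=2; 100 + 144*365*(1 + 1.8 + 4*2)/1000' | bc  # 1yr at 1mb, 1yr at ~1.8mb, then ~4mb weight
667.64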
legendary
Activity: 1610
Merit: 1183
In this thread some guy talks about the 700GB in 4 years prediction, which seems to be wrong according to DannyHamilton. It seems there is nothing to worry about. You would need to raise the blocksize + have blocks full all the time to reach numbers like those... so the blockchain will be lighter.

https://bitcointalksearch.org/topic/ive-read-somewhere-in-4-years-the-blockchain-will-be-700gb-1667806
legendary
Activity: 4424
Merit: 4794
It will grow, but I'm not sure by that much. What size would it have to reach to become prohibitive, so that users would not want to download it at their current connection speeds?
Well, currently we are talking about a ridiculously low speed: 1 MB / 10 / 60 ≈ 1.666 kB/s. We can't really estimate where the problem will start popping up for most users. The blockchain is already heavy as it is, and the bigger it gets, the fewer people will want to use a full wallet or run a full node. Current average (global) internet speeds are almost able to download 1 GB blocks in under 10 minutes (which requires a bit less than 2 MB/s). However, this is way beyond the realm of possibility at this time, because even if such blocks were only half full on average we'd have over 2 TB worth of blocks each month (500 x 6 x 24 x 30 = 2 160 000 MB).

I think a good portion of users would probably give up if we were talking about 100 GB monthly (probably even less) in blockchain growth.

Lauda, why do you go from talking about a block of 1mb... to jump immediately to blocks of [random number] or 22.8mb (100gb a month)?
Why are you not thinking rationally about what were previously the consensually safe numbers, something like
2mb in 2015, 4mb by 2018, 8mb by 2021, 16mb by 2024?

Why are you always trying to push a doomsday that is at least a decade away as a problem of today? Seriously, 16 years ago people were on dialup and the largest hard drive was 4gb... so what may seem a problem today, when you scream blue murder about any randomly large number you seem to pull out of a hat, won't be a problem in a decade. You don't seem to get the concept of natural progressive growth.

If you want to talk about the possibility of blocks of 22.8mb, at least assume that we would get there in a few years and not today.
You know, like emphasising that you are speculating about the year 202X and not actually talking about 2016/2017.
Because all you're doing with your fake doomsdays of 1mb today, 22mb tomorrow, is creating ammo to avoid safe and agreeable amounts of 2mb or 4mb, pushing fake rhetoric just so that one group of devs can do their own thing.

If you actually came to the realisation that there is no need for one entity to make 5000 inputs/outputs in one tx, you would realise that limiting sigops and using libsecp256k1 optimisations would have solved your other fake doomsday of 'quadratics'.
At this point of reading this post you would probably want to rebut that yes, quadratics won't be an issue by limiting sigops, but malleability won't be solved and double spends are still an issue.
To which I would rebut that double spends are still an issue even after a malleability fix, due to RBF, CPFP and other new features added to mess with unconfirmed transactions.
You would rebut that malleability also has issues for LN...
I would rebut that LN has even bigger issues to overcome first, such as the address reuse dilemma.

I actually laughed that you were on the "2mb is bad" bandwagon but are now on the "4mb is acceptable" bandwagon.
Care to comment on your revelation concerning BLOAT (the hard drive and bandwidth doomsdays you were squawking about just months ago)?

Also, care to comment on how half of that 4mb weight will be bloated with privacy features and other mundane data rather than expanding the capacity of actual transactions?
Yep, that's right: Core's 1mb base / 4mb weight will be at most 2mb (~1.8mb) of serialised data (full tx + sigs included) and 2mb (~2.2mb) left vacant until filled with confidential payment codes and whatnot.
E.g.:
1mb base + 0.8mb witness... leaving 2.2mb spare for feature bloat rather than more transactions = 4mb total weight.

OK, let's word it a different way.
What's your future-mindset proposition and propaganda preference:

1mb base + 0.8mb witness + 2.2mb bloat for future features, for 4500tx
or
2mb base + 1.6mb witness for 9000tx

where in both cases the bloat is ~<4mb.

How would you like to see 4mb of data used?
hero member
Activity: 1022
Merit: 564
Need some spare btc for a new PC
This came to my mind a few days ago: how can the size be lowered, though? Can it be separated somehow, or archived? Also, the usage of full wallets will decrease if people do not wish to download 100gb; will that affect BTC somehow, or will people switch to a new way of storing and mining BTC?
legendary
Activity: 2674
Merit: 3000
Terminated.
It will grow, but I'm not sure by that much. What size would it have to reach to become prohibitive, so that users would not want to download it at their current connection speeds?
Well, currently we are talking about a ridiculously low speed: 1 MB / 10 / 60 ≈ 1.666 kB/s. We can't really estimate where the problem will start popping up for most users. The blockchain is already heavy as it is, and the bigger it gets, the fewer people will want to use a full wallet or run a full node. Current average (global) internet speeds are almost able to download 1 GB blocks in under 10 minutes (which requires a bit less than 2 MB/s). However, this is way beyond the realm of possibility at this time, because even if such blocks were only half full on average we'd have over 2 TB worth of blocks each month (500 x 6 x 24 x 30 = 2 160 000 MB).

I think a good portion of users would probably give up if we were talking about 100 GB monthly (probably even less) in blockchain growth.
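
For reference, the arithmetic behind those figures (assuming one 1 MB block every 10 minutes, and 500 MB as the half-full size of a hypothetical 1 GB block):

Code:
$ echo 'scale=3; 1000/600' | bc          # kB/s needed to keep pace with 1 MB per 10 min
1.666
$ echo 'scale=3; 1000000/600' | bc       # kB/s for 1 GB blocks (~1.7 MB/s)
1666.666
$ echo '500*6*24*30' | bc                # MB of blocks per month at 500 MB per block
2160000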

Thanks for clarifying though, Pepe now understands that this is just one giant mental masturbation (*).
*Intellectual activity that serves no practical purpose
member
Activity: 120
Merit: 13
Pepe is NOT a hate symbol

But those lines are meaningless in reality and are just showing hypotheticals.


OK, Pepe now understands.
When you put hypotheticals on top of assumptions which are based on suppositions, then Pepe's head can't gain a foothold and gets confused.
Thanks for clarifying though, Pepe now understands that this is just one giant mental masturbation (*).

*Intellectual activity that serves no practical purpose
legendary
Activity: 2702
Merit: 1072
It will grow, but I'm not sure by that much. What size would it have to reach to become prohibitive, so that users would not want to download it at their current connection speeds?
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
If we were to remedy this issue, here's how: re-validate every transaction in every block in the entire block chain, EVERY time someone loads up their Bitcoin network.

Well, no, Carlton. I would advocate that every transaction be validated at least once by each client. Upon initial download.

I would be more than happy to trust that data on local storage had not been meddled with. Heck, we've got the world's best cryptographers, right? We could secure data-at-rest with some sort of hash or something to ensure that nobody de-installed the drive, modified the data, and reinstalled it since the last time we ran the node. If that were a worry.

I am just pointing out that -- with the current Core implementation -- we do indeed need to trust in the goodness of others. For validation of older transactions. We are decidedly not operating in a trustless manner. I am heartened to learn that -- on this point at least -- you agree.
legendary
Activity: 2674
Merit: 3000
Terminated.
Pepe thinks this graph is absolutely misleading.
Why make assumptions based on something from back in 2012?
Indeed. I would prefer it if that were removed in the next update.

Also, in 2016 the blockchain is 100 GB in size, not 350 GB as the graph wants to make Pepe believe.
I think you need a pair of glasses, mister Pepe.

All the lines are hypothetical...
I think that is the point. The graph was supposed to represent the potential growth that the blockchain size might see in the future (as it clearly says "potential").
legendary
Activity: 4424
Merit: 4794
I'll just leave this here


Bump, just because it's better than my chart.

What will the next halving bring?

edits

(fucking hate mobile auto-correct sometimes)

Pepe thinks this graph is absolutely misleading.
Why make assumptions based on something from back in 2012?
Also, in 2016 the blockchain is 100 GB in size, not 350 GB as the graph wants to make Pepe believe.

Pepe is not stupid, you know?
So why fudge the numbers like that?
What is the point?

The green glow line is not actual data; it's what could have been POTENTIAL if every block had been filled from 2009.
The orange glow line is not actual data; it's what could have been POTENTIAL if the hard fork had been implemented at block 210,000.

The funny part is that I actually moved some of the goalposts in favour of the anti-bloaters. After all, satoshi wanted the hardfork at block 115,000,
but based on other historic things I just set the hypothetical line at 210,000.

But those lines are meaningless in reality and are just showing hypotheticals.

anyway

Actual data is the grey filled-in shape; more realistic POTENTIAL data growth is in the other grey shape.

All the lines are hypothetical... the greyed-out shape is more of the actual and then expected results, based on the hypothetical lines and the scenarios of delays in the timeline to produce and activate features and then user adoption of those features.

Here, I'll throw in another graph.
This time the orange glow hypothetical is based on:
Quote from: satoshi
It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

and the important demonstration (the grey shape data) this time assumes segwit and other features never activating.


Not sure why anyone should actually care about the green/orange glow lines, as they are just arbitrary lines. It's the grey shape that should be emphasised.
member
Activity: 120
Merit: 13
Pepe is NOT a hate symbol
I'll just leave this here


Bump, just because it's better than my chart.

What will the next halving bring?

edits

(fucking hate mobile auto-correct sometimes)

Pepe thinks this graph is absolutely misleading.
Why make assumptions based on something from back in 2012?
Also, in 2016 the blockchain is 100 GB in size, not 350 GB as the graph wants to make Pepe believe.

Pepe is not stupid, you know?
So why fudge the numbers like that?
What is the point?
legendary
Activity: 3430
Merit: 3080
"only a few last blocks are verified at startup, but they were validated before they were written, so everything gets validated. Doesn't it?"

If one is starting up a new node, and only the newest blocks are verified, you are not verifying the older blocks. You are trusting others to have done the verification of the earlier blocks. Ergo, Core does not validate all transactions.

To some this may be an acceptable risk. However, it is most certainly not operating in a trustless mode.


The usual trolling nonsense from you, j breher


If we were to remedy this issue, here's how: re-validate every transaction in every block in the entire block chain, EVERY time someone loads up their Bitcoin network. Great proposal, jbreher; it'll only take people a few days every time they want to send a transaction, lol. Tell the BU devs to add that "upgrade" to the latest version, then someone might at least think that you believe what you're saying.
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
According to Gregory Maxwell, even Core does not validate all transactions.

[citation needed]

Please, can you elaborate on this? Do you have a link or something? I'd like to know what escapes validation.

I understand that things it reads from disk are not validated, and only the last few blocks are verified at startup, but they were validated before they were written, so everything gets validated. Doesn't it?

I understand your request for a citation. Seems implausible, right? I agree that the burden of proof - as far as 'according to GM' is concerned - is on me. A cursory check does not turn up a quote. I will continue to look. However, you point out the issue yourself:

"only a few last blocks are verified at startup, but they were validated before they were written, so everything gets validated. Doesn't it?"

If one is starting up a new node, and only the newest blocks are verified, you are not verifying the older blocks. You are trusting others to have done the verification of the earlier blocks. Ergo, Core does not validate all transactions.

To some this may be an acceptable risk. However, it is most certainly not operating in a trustless mode.

IOW to answer your question - to wit: "Doesn't it?", the answer is: 'no, it doesn't'.
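
For what it's worth, the depth of that startup check is configurable; a hedged sketch using existing bitcoind options (-checkblocks and -checklevel; exact defaults vary by version):

Code:
$ bitcoind -checkblocks=6 -checklevel=3    # quick: sanity-check only the most recent blocks at startup
$ bitcoind -checkblocks=0 -checklevel=4    # slow: check every block still on disk, as thoroughly as the level allows

Note that this only re-checks what is already in the local block database; it doesn't by itself settle the trust question being argued here.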