Topic: Developer: Write code to generate all possible private keys (Read 571 times)

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
Tape drives don't really have a capacity; the tape has the capacity, but LTO-9 is 18TB. Hard drives of that size sell for maybe $300 if you get a sale. The LTO-9 tape maybe sells for half that at best. Not much of an advantage in the size and price category to justify spending $3000 on a tape drive.

While I know tape drive capacity isn't much different from HDD, are tape drives really that expensive? Looking at websites such as Newegg and tapeandmedia, some tape drive prices aren't that different from HDD.

plus the hard drive is way faster.

Not true. For comparison, WD Gold 8TB speed is 255MB/s while IBM TS1160 and IBM LTO 9 speeds are 400MB/s (without compression). Check their product specifications:
https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-gold/product-brief-wd-gold-hdd.pdf
https://www.ibm.com/downloads/cas/ZV2V7D8Q
https://www.ibm.com/downloads/cas/3MD86RLJ

anything goes wrong with your tape drive and it's another $3000 to spend.

Also applies to HDD.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
Don't forget the high cost of performing read/write operations.

Under the DNA fountain scheme, Erlich and Zielinski (2017) spent USD 7000 to encode 2.14 MB data. Hence, DNA fountain costs about ~ USD 3500 per MB of data writing and another USD 1000 to read it (Service 2017).

I'm sure when hard drives were first developed they had a high research cost to write to them also. But obviously it goes without saying the price has to come down to reach the consumer's desktop. It always does.

While I agree the price will most likely decline over time, I disagree with the part about it reaching/becoming popular among desktop users. For example, tape drives have high capacity, high speed (compared with HDD) and a very long lifespan (usually up to 20-30 years). But almost no desktop users use them, although they're still a popular option for archival and enterprise use. Besides, storage isn't the only concern in this case.
sr. member
Activity: 1106
Merit: 430

While I know tape drive capacity isn't much different from HDD, are tape drives really that expensive? Looking at websites such as Newegg and tapeandmedia, some tape drive prices aren't that different from HDD.

As I think I mentioned, you can get an 18TB hard drive for about twice the price you can get the same size tape. Check the prices yourself.

anything goes wrong with your tape drive and it's another $3000 to spend.

Quote
Also applies to HDD.

I'd rather have 30 things that cost $100 than one thing that cost $3000. Big point of failure there, but I guess to each their own. Grin Plus, hard drives are a commodity item. Tape drives aren't. You can't just pick one up at Best Buy.
sr. member
Activity: 1106
Merit: 430


While I agree the price will most likely decline over time, I disagree with the part about it reaching/becoming popular among desktop users. For example, tape drives have high capacity, high speed (compared with HDD) and a very long lifespan (usually up to 20-30 years). But almost no desktop users use them, although they're still a popular option for archival and enterprise use. Besides, storage isn't the only concern in this case.

Tape drives don't really have a capacity; the tape has the capacity, but LTO-9 is 18TB. Hard drives of that size sell for maybe $300 if you get a sale. The LTO-9 tape maybe sells for half that at best. Not much of an advantage in the size and price category to justify spending $3000 on a tape drive. Plus the hard drive is way faster. Anything goes wrong with your tape drive and it's another $3000 to spend.
hero member
Activity: 882
Merit: 5829
not your keys, not your coins!
I'm looking for a developer to write a script to generate all possible private keys and write them to an SQL database. Payment available

You're too late. Someone else did it already.

https://allprivatekeys.com

'All private keys list
Whole range of Bitcoin and Bitcoin Cash Private Keys, compressed/ uncompressed, SegWit and HD wallet. Whole wallets including YOURS.
Don't believe?

Just open to see.'
This is not an offline database though (which OP was looking for) and instead generates the keys on the fly. As was shown already in this thread, the whole world's storage wouldn't be able to store all the private keys.
Just calculating a public key from a private key is a pretty trivial thing.

Nearly every 256-bit number is a valid ECDSA private key. Specifically, any 256-bit number from 0x1 to 0xFFFF FFFF FFFF FFFF FFFF FFFF FFFF FFFE BAAE DCE6 AF48 A03B BFD2 5E8C D036 4140 is a valid private key.
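The range quoted above can be checked with a trivial integer comparison. A minimal Python sketch (the constant below is the largest valid key stated in the post, i.e. the secp256k1 curve order minus one; the function name is my own):

```python
# Sketch: checking whether a number is a valid secp256k1 private key.
# MAX_KEY is the largest valid key quoted above (curve order minus one).
MAX_KEY = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140

def is_valid_private_key(k: int) -> bool:
    """Any integer in [1, MAX_KEY] is a valid private key."""
    return 1 <= k <= MAX_KEY

print(is_valid_private_key(1))           # True
print(is_valid_private_key(MAX_KEY))     # True
print(is_valid_private_key(0))           # False
print(is_valid_private_key(2**256 - 1))  # False: above the valid range
```

So "nearly every" is accurate: out of all 2^256 possible 256-bit values, only a vanishingly small fraction fall outside this range.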


sr. member
Activity: 1106
Merit: 430

The price to read/write to "DNA" will come down, but the cost to process that much data will exceed the resources available to process it. See the picture posted by NeuroticFish above.


That picture is referring to 2^256, not 2^64. Big difference. But if you're referring to 2^64 and the 3 kg of DNA, then I guess it depends on your definition of "process". I guess you already decided that 3 kg of DNA can't be processed efficiently. OK.
copper member
Activity: 1624
Merit: 1899
Amazon Prime Member #7
when will that be feasible? probably not in the next 10 years right?
That's 390 zettabytes. Various estimates (linked below) put global storage at around 175-200 zettabytes by 2025. So globally we will be storing 390 zettabytes by around 2030, I would imagine. How long will it take to turn the storage for 8 billion people into a medium which can be bought, owned, and operated by a single person? I would say well over 100 years.

DNA could store that in about 3 kilograms, apparently. DNA data storage has its issues though, so it won't make the cut to users' desktops.

I'm sure when hard drives were first developed they had a high research cost to write to them also. But obviously it goes without saying the price has to come down to reach the consumer's desktop. It always does.
The price to read/write to "DNA" will come down, but the cost to process that much data will exceed the resources available to process it. See the picture posted by NeuroticFish above.

in order to check n private keys, the computer would need to perform n calculations.
It's far more than a single calculation per private key to arrive at an address which can be checked for balance. And if you don't want to perform those calculations every single time you want to check for balance and would rather just have a list of addresses to look up, then you are going to need to multiply your storage capacity several times if you want to cover every address type.
You are right. I was thinking in terms of Big O Notation for the time complexity of calculating an address, based on a private key. So if you want to calculate j addresses from their private keys, you will perform p * j calculations, and if you want to calculate (j + 1) addresses from their private keys, you will need to perform p * (j + 1) calculations. Or, to put it another way, for every additional address you want to calculate from a private key, you will need to perform a constant additional number of calculations, with that constant being a positive integer.
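The p * j argument above can be sketched in a few lines. This is purely illustrative: the constant P_STEPS_PER_KEY is an assumed placeholder, not a measured cost of deriving a real address.

```python
# Sketch of the linear-cost argument: if deriving one address takes p
# elementary steps, deriving j addresses takes p * j steps.
P_STEPS_PER_KEY = 1000  # assumed constant cost per key -> address derivation

def total_steps(j: int, p: int = P_STEPS_PER_KEY) -> int:
    """Total work to derive addresses for j private keys."""
    return p * j

# Each extra key adds the same constant amount of work:
print(total_steps(11) - total_steps(10))  # 1000
print(total_steps(1_000_001) - total_steps(1_000_000))  # 1000
```

The point being: the work is O(n) in the number of keys, so with n on the order of 2^256 (or even 2^64), no constant factor saves you.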
sr. member
Activity: 1106
Merit: 430


Don't forget the high cost of performing read/write operations.

Under the DNA fountain scheme, Erlich and Zielinski (2017) spent USD 7000 to encode 2.14 MB data. Hence, DNA fountain costs about ~ USD 3500 per MB of data writing and another USD 1000 to read it (Service 2017).

I'm sure when hard drives were first developed they had a high research cost to write to them also. But obviously it goes without saying the price has to come down to reach the consumer's desktop. It always does.
legendary
Activity: 2268
Merit: 18588
Don't forget the high cost of performing read/write operations.

Under the DNA fountain scheme, Erlich and Zielinski (2017) spent USD 7000 to encode 2.14 MB data. Hence, DNA fountain costs about ~ USD 3500 per MB of data writing and another USD 1000 to read it (Service 2017).
Nice! So when bitcoin hits $65 trillion per coin, then if we sell all 21 million bitcoin we can encode every private key into DNA. Unfortunately, we'll have no money left over to read or perform any operations on the data.

When moon?
sr. member
Activity: 1106
Merit: 430
when will that be feasible? probably not in the next 10 years right?
That's 390 zettabytes. Various estimates (linked below) put global storage at around 175-200 zettabytes by 2025. So globally we will be storing 390 zettabytes by around 2030, I would imagine. How long will it take to turn the storage for 8 billion people into a medium which can be bought, owned, and operated by a single person? I would say well over 100 years.

DNA could store that in about 3 kilograms, apparently. DNA data storage has its issues though, so it won't make the cut to users' desktops.
legendary
Activity: 3668
Merit: 6382
Looking for campaign manager? Contact icopress!
Will OP have enough energy for generating all those private keys?
I remember a picture telling otherwise:

legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
Storing 2^64 (or 2^69) bits is possible with today's technology. To get 2^69 bits in a cubic meter, we need a 100x100x100 nm cell size. IIRC an SRAM cell fits in 100 nm. Making new Si layers is used today; CVD grows layers at 10-20 nm per minute.

There are some concerns though. The power consumption might be too big, even for SRAM. And the biggest concern is bit rot. Such amount of memory will start degrading instantly (cosmic rays, etc.), so one needs to use lots of power for repairing it. Orders of magnitude more than just storing.

Since the stored information is easy to regenerate, it could be better to checksum, and regenerate it on access in case of error.

Some searching gave me a failure rate of around 10^-13 ≈ 2^-43. So even storing it all would need workarounds in order to be error-free.

It is quite pointless to do all this. Moreover, it might be more cost efficient to do a brute force search every time a new address appears.

You also left out cooling and controller chips; even very efficient low-power memory will get hot at that density. And although you did mention power, there's also the power regulation circuitry needed.
Interesting thought experiment, but beyond that, not really a real-world option. Unless you have Bezos / Musk money to throw around.

-Dave
full member
Activity: 206
Merit: 444
Storing 2^64 (or 2^69) bits is possible with today's technology. To get 2^69 bits in a cubic meter, we need a 100x100x100 nm cell size. IIRC an SRAM cell fits in 100 nm. Making new Si layers is used today; CVD grows layers at 10-20 nm per minute.

There are some concerns though. The power consumption might be too big, even for SRAM. And the biggest concern is bit rot. Such amount of memory will start degrading instantly (cosmic rays, etc.), so one needs to use lots of power for repairing it. Orders of magnitude more than just storing.

Since the stored information is easy to regenerate, it could be better to checksum, and regenerate it on access in case of error.

Some searching gave me a failure rate of around 10^-13 ≈ 2^-43. So even storing it all would need workarounds in order to be error-free.

It is quite pointless to do all this. Moreover, it might be more cost efficient to do a brute force search every time a new address appears.
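The bit-rot concern above is easy to quantify. A back-of-envelope Python sketch, using the post's own figures (2^69 stored bits and a failure rate of ~2^-43):

```python
# Expected number of corrupted bits: 2^69 bits at a per-bit failure
# rate of about 10^-13 (~2^-43), as stated in the post above.
stored_bits = 2**69
failure_rate = 2**-43  # ~1e-13
expected_errors = stored_bits * failure_rate
print(expected_errors)  # 67108864.0, i.e. 2^26 (~67 million bad bits)
```

Tens of millions of expected errors is why the post suggests checksumming and regenerating keys on access rather than trusting raw storage.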
legendary
Activity: 2268
Merit: 18588
when will that be feasible? probably not in the next 10 years right?
That's 390 zettabytes. Various estimates (linked below) put global storage at around 175-200 zettabytes by 2025. So globally we will be storing 390 zettabytes by around 2030, I would imagine. How long will it take to turn the storage for 8 billion people into a medium which can be bought, owned, and operated by a single person? I would say well over 100 years.

https://cybersecurityventures.com/the-world-will-store-200-zettabytes-of-data-by-2025/
https://www.prnewswire.com/news-releases/the-world-will-store-200-zettabytes-of-data-by-2025-301072627.html
https://www.networkworld.com/article/3325397/idc-expect-175-zettabytes-of-data-worldwide-by-2025.html

in order to check n private keys, the computer would need to perform n calculations.
It's far more than a single calculation per private key to arrive at an address which can be checked for balance. And if you don't want to perform those calculations every single time you want to check for balance and would rather just have a list of addresses to look up, then you are going to need to multiply your storage capacity several times if you want to cover every address type.
legendary
Activity: 3472
Merit: 10611
If I am not mistaken, you are describing the amount of space required to store all private keys as 32-bit integers. Most private keys are numbers that are greater than 32 bits.
Bitcoin private keys are 256 bits or 32 bytes.
The total number of keys in "range 64" (which I assume dextronomous meant between 1 and 2^64, like the puzzle people love these days!) is 2^64 (-1, which we ignore) and each of them is 32 bytes, so we multiply the total by 32 to get the total size in bytes.
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
The question is why would one save 2^64 private keys on their hard drive? I mean, what's the point? Do you also want to store the address after each key, just to make searching easier for the first 2^64 uncompressed addresses? There's no reason to store only the keys.

If I am not mistaken, you are describing the amount of space required to store all private keys as 32-bit integers. Most private keys are numbers that are greater than 32 bits.
He's talking about the [1, 2^64] range. Storing all the private keys would cost much, much more space.
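"Much, much more space" can be made concrete: the full keyspace is nearly 2^256 keys at 32 raw bytes each. A quick Python sketch (treating the count as exactly 2^256 for simplicity; the true count is marginally smaller):

```python
# Rough size of storing ALL private keys, not just the [1, 2^64] range.
keys = 2**256            # approximate size of the full keyspace
size_bytes = keys * 32   # 32 raw bytes per key = 2^261 bytes
size_tb = size_bytes / 10**12
print(f"{size_bytes:.3e} bytes")  # ~3.7e78 bytes
```

For comparison, that is vastly more bytes than there are atoms in the observable universe (~10^80), so the exact per-key overhead is irrelevant.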
copper member
Activity: 1624
Merit: 1899
Amazon Prime Member #7

Code:
32 * 2^64 = 2^69 = 5.9E+20 bytes = 590,000,000 terabytes

when will that be feasible? probably not in the next 10 years right?
Storing all private keys is pretty pointless, IMO. Simply storing a list of private keys without the ability to a) quickly access the corresponding public key and address, and b) quickly check if the associated address in "a" has received any transactions, will not provide much value to anyone.

As an example, I currently have all possible private keys stored in my head. This includes all of your private keys (although I have no way of filtering out all private keys that do not belong to you). However, my brain cannot quickly calculate an associated address from a private key, so the process of obtaining an address from any private key in my head is very slow. The process for me to look up if an address has received a transaction is even slower.

The above concept can be applied to a computer that is able to store all private keys on a hard drive. Even if a computer could quickly check many private keys to see if a private key's associated address has received a transaction, in order to check n private keys, the computer would need to perform n calculations. The number of private keys is too large for any computer to ever perform any calculation on all possible private keys, given theoretical computational limits.

was thinking, how much is it going to be in TB of data if range 64 were saved uncompressed as a raw txt file, and only this range? Is it doable?
It would be silly to store things in string form; for example, in this case it would be 51-52 bytes versus 32. So to compute the total size needed, you just multiply the number of items by the raw-byte size, which is 32.
Code:
32 * 2^64 = 2^69 = 5.9E+20 bytes = 590,000,000 terabytes
If I am not mistaken, you are describing the amount of space required to store all private keys as 32-bit integers. Most private keys are numbers that are greater than 32 bits.
legendary
Activity: 3472
Merit: 10611
when will that be feasible? probably not in the next 10 years right?
I don't really follow hardware development enough to give an informed response, but considering that over the past 10 years we've gone from about 60 TB to the maximum 100 TB SSDs, which is roughly a 2x rise, I don't see how a revolution could occur in the next 10 years that could increase this maximum capacity 5.9 million times!
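The "5.9 million times" figure follows directly from the earlier calculation. A trivial check, taking 590,000,000 TB as the required capacity (from the 32 * 2^64 figure in this thread) and ~100 TB as today's largest single drive:

```python
# Ratio of required capacity to the largest single drive available today.
required_tb = 590_000_000  # ~2^69 bytes expressed in TB, from the earlier post
largest_ssd_tb = 100       # today's largest SSD, per the post above
print(required_tb / largest_ssd_tb)  # 5900000.0
```

So even a hypothetical 100 TB drive would need to be scaled up roughly 5.9 million times just to hold the [1, 2^64] range.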
sr. member
Activity: 1106
Merit: 430

Code:
32 * 2^64 = 2^69 = 5.9E+20 bytes = 590,000,000 terabytes

when will that be feasible? probably not in the next 10 years right?
legendary
Activity: 3472
Merit: 10611
was thinking, how much is it going to be in TB of data if range 64 were saved uncompressed as a raw txt file, and only this range? Is it doable?
It would be silly to store things in string form; for example, in this case it would be 51-52 bytes versus 32. So to compute the total size needed, you just multiply the number of items by the raw-byte size, which is 32.
Code:
32 * 2^64 = 2^69 = 5.9E+20 bytes = 590,000,000 terabytes
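The arithmetic in that code box checks out. A one-liner verification in Python:

```python
# Verifying the quoted figure: 2^64 keys at 32 raw bytes each.
keys = 2**64
size_bytes = keys * 32        # 32 * 2^64 = 2^69 bytes
size_tb = size_bytes / 10**12  # decimal terabytes
print(f"{size_bytes:.2e} bytes = {size_tb:,.0f} TB")  # ~5.9e20 bytes, ~590 million TB
```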