
Topic: List of all Bitcoin addresses ever used - currently UNavailable on temp location - page 7.

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Just yesterday, I got a good deal on a new VPS (more memory, more disk, more CPU and more bandwidth). It's dedicated to only this project (and I have no idea how reliable it's going to be). I've updated the OP.

There's a problem though. There are:
  • 756,494,121 addresses according to addresses_in_order_of_first_appearance.txt.gz
  • 756,524,407 addresses according to addresses_sorted.txt.gz
Obviously, these numbers should be the same. I haven't scheduled automated updates yet; I first want to recreate this data from scratch to see which number is correct.
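A minimal sketch of how the discrepancy could be tracked down with standard tools, assuming both dumps are available locally and that addresses_sorted.txt.gz was sorted with LC_ALL=C (if not, re-sort it the same way before comparing):
Code:
# Count the lines in each dump without extracting to disk
zcat addresses_in_order_of_first_appearance.txt.gz | wc -l
zcat addresses_sorted.txt.gz | wc -l

# Show the addresses that appear in only one of the two lists.
# comm needs both inputs in the same byte order, hence the LC_ALL=C sort.
comm -3 <(zcat addresses_in_order_of_first_appearance.txt.gz | LC_ALL=C sort) \
        <(zcat addresses_sorted.txt.gz) > difference.txt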
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Are you downloading Blockchair dumps at the slow rate?
Yes. But 100 kB/s isn't a problem anymore: the initial download took a long time, but for daily updates it doesn't take that long.

Quote
I just contacted Blockchair for an API key, which enables people to download at the fast rate, and a support rep told me they cost $500/month.
I thought they'd offer it for free for certain users, but this makes sense from a business point of view.

Quote
If network bandwidth is a problem I'm able to host this on my hardware if you like.
Just this month I'm at 264 GB for this project, and 174 GB for all Bitcoin addresses with a balance. That means this full list is only downloaded a few times per month, but the funded addy list is downloaded a few times per day.
I'm more in need of additional disk space for sorting this data, but I haven't decided yet where to host it. 100 GB of disk space isn't enough.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
@LoyceV

Are you downloading Blockchair dumps at the slow rate? I just contacted Blockchair for an API key, which enables people to download at the fast rate, and a support rep told me they cost $500/month.

If network bandwidth is a problem I'm able to host this on my hardware if you like.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Sample: unique_addresses.txt.gz: all Bitcoin addresses ever used, without duplicates, sorted by address (Warning: 15 GB)
I didn't have enough disk space to process the 31 GB file the way I want it, so I've (temporarily) removed this file. After I'm done with that, I'll restore the missing file. Give it a few days.
Well, that didn't go as planned :( Although I can produce the list of all unique addresses in order of first appearance, it turns out 100 GB of disk space is not enough for the temporary files it needs. Because of the large data traffic, I don't want to use loyce.club's AWS hosting for this, and I'm not sure yet if I should get another VPS just for this.

An alternative would be to run it from my home PC, but the heavy writing will just wear out my SSD. So this project is on hold for now. Daily updates still continue.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Sample: unique_addresses.txt.gz: all Bitcoin addresses ever used, without duplicates, sorted by address (Warning: 15 GB)
I didn't have enough disk space to process the 31 GB file the way I want it, so I've (temporarily) removed this file. After I'm done with that, I'll restore the missing file. Give it a few days.

Since I got no response to my question above, I'll go with 2 versions:
  • All addresses ever used, without duplicates, in order of first appearance.
  • All addresses ever used, without duplicates, sorted.
The first file feels nostalgic, the second file will be very convenient to match addresses with a list of your own.
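As an illustration of that last point, matching a list of your own against the sorted file could look something like this (a sketch; your_addresses.txt is a hypothetical one-address-per-line file, and it assumes the dump was sorted with LC_ALL=C):
Code:
# Print only the addresses that appear both in your list and in the dump.
# comm requires both inputs to be sorted in the same byte order.
comm -12 <(LC_ALL=C sort -u your_addresses.txt) \
         <(zcat addresses_sorted.txt.gz) > matches.txt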
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Sample: addresses.txt.gz: all addresses in chronological order, with duplicates (Warning: 31 GB):
Code:
1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa
12c6DSiU4Rq3P4ZxziKxzrL5LmMBrzjrJX
1HLoD9E4SDFFPDiYfNYnkBLQ85Y51J3Zb1
.......
3GFfFQAFgXKiA1qqUK6rqBpEpG4vZDos6t
3Mbtv47gZ2eN6Fy7owpgHHwSLYHS42P56P
38JyF2RQknBUMETyRT2yGndDJFYSp6hJNg
Due to disk space limitations, I'm considering removing this file, unless anyone has a need for it. So: can anyone tell me what it can be used for? I know it can be used to make a Top 100 of addresses with the most receiving transactions.
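For what it's worth, a Top 100 like that could be built from this file with standard tools; a sketch (slow on 31 GB, but it keeps memory bounded by letting sort spill to disk):
Code:
mkdir -p tmp
# Count how many times each address appears (= received an output),
# then keep the 100 most frequent ones.
zcat addresses.txt.gz \
  | LC_ALL=C sort -S 2G -T tmp \
  | uniq -c \
  | sort -rn -T tmp \
  | head -n 100 > top100_receiving_addresses.txt
rm -r tmp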

Instead of this list, I want to make a new list without duplicates, but still in order of first appearance of each address. Thanks to bob123, I can do that now!
I'll also keep the sorted list, because it's very convenient for finding matches against a list of your own.



I need some time to process all data. When done, I'll rewrite some of my posts.
legendary
Activity: 2982
Merit: 2681
Top Crypto Casino
This is an awesome contribution to the community. Some weeks ago I saw a user asking for a list like this to use for brute-forcing... Some users use their addy as a password, which is why a list like this is a great tool. Thanks again to LoyceV for making it for us.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
If someone has enough RAM to experiment, I'd love to see the result of this (on the 31 GB file):
This looks very promising:
Code:
cat -n input.txt | sort -uk2 | sort -nk1 | cut -f2- > output.txt
I'll be testing it soon.

Some results: the awk approach uses just over 1 GB of memory for 10 million addresses. So for 1.5 billion addresses, a 256 GB server should be enough. At AWS, that would cost a few dollars per hour.

I've tested with the first 10 million lines, and can confirm both give the same result:
Code:
head -n 10000000 addresses.txt | awk '!a[$0]++' | md5sum
head -n 10000000 addresses.txt | nl | sort -uk2 | sort -nk1 | cut -f2 | md5sum
As expected, awk is faster.
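To see how that memory use scales before renting a big server, GNU time can report the peak resident set size of the awk pass (a sketch; /usr/bin/time is the external GNU time binary, not the shell built-in, and may need to be installed separately):
Code:
# Run the dedup on a 10-million-line sample and report its peak memory.
head -n 10000000 addresses.txt | /usr/bin/time -v awk '!a[$0]++' > /dev/null
# Look for "Maximum resident set size (kbytes)" in the report on stderr.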
newbie
Activity: 29
Merit: 50
I actually can :D I found this regexp on Stack Overflow:
Code:
egrep --regexp="^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$" filename
With some slight changes it stops matching parts of Eth-addresses:
Code:
egrep -w --regexp="[13][a-km-zA-HJ-NP-Z1-9]{25,34}" *


I have compiled these from various sources and use them to automatically set my blockchain explorer options based on user input; I also keep them in my .zshrc:
Code:
#cryptocurrency greps

#btc1 and btc2 combined
alias btcgrep="grep -Ee '\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b' -e '\bbc(0([ac-hj-np-z02-9]{39}|[ac-hj-np-z02-9]{59})|1[ac-hj-np-z02-9]{8,87})\b'"

#legacy addresses only
alias btcgrep1="grep -E '\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b'"
#http://mokagio.github.io/tech-journal/2014/11/21/regex-bitcoin.html

#bech32 v1 and v0 addresses
alias btcgrep2="grep -E '\bbc(0([ac-hj-np-z02-9]{39}|[ac-hj-np-z02-9]{59})|1[ac-hj-np-z02-9]{8,87})\b'"
#https://stackoverflow.com/questions/21683680/regex-to-match-bitcoin-addresses

#bech32 addresses only
alias btcgrep3="grep -E '\bbc1[ac-hj-np-zAC-HJ-NP-Z02-9]{11,71}\b'"

#both legacy and bech32
alias btcgrep4="grep -E '\b([13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[ac-hj-np-zAC-HJ-NP-Z02-9]{11,71})\b'"
#http://mokagio.github.io/tech-journal/2014/11/21/regex-bitcoin.html

#private keys
alias btcgrep5="grep -E '\b[5KL][1-9A-HJ-NP-Za-km-z]{50,51}\b'"
#word boundary: '\b'
#https://bitcoin.stackexchange.com/questions/56737/how-can-i-find-a-bitcoin-private-key-that-i-saved-in-a-text-file

#transaction hashes
alias btcgrep6="grep -E '\b[a-fA-F0-9]{64}\b'"
#https://stackoverflow.com/questions/46255833/bitcoin-block-and-transaction-regex
#https://bitcoin.stackexchange.com/questions/70261/recognize-bitcoin-address-from-block-hash-and-transaction-hash

#block hashes
alias btcgrep7="grep -E '\b[0]{8}[a-fA-F0-9]{56}\b'"
#https://stackoverflow.com/questions/46255833/bitcoin-block-and-transaction-regex

#ethereum address hash
#test for 'plausibility'
alias ethgrep="grep -E '\b(0x)?[0-9a-fA-F]{40}\b'"
#https://ethereum.stackexchange.com/questions/1374/how-can-i-check-if-an-ethereum-address-is-valid

#ethereum transaction hash
alias ethgrep2="grep -E '\b(0x)?([A-Fa-f0-9]{64})\b'"  #parentheses are not necessary
#https://ethereum.stackexchange.com/questions/34285/what-is-the-regex-to-validate-an-ethereum-transaction-hash/34286

The -w flag means 'word boundary' and can also be set within the regex with '\b' at both ends.
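A quick usage sketch of those aliases, after they have been sourced in an interactive shell (the sample address is the genesis address from the list above; somefile.txt is a hypothetical input file):
Code:
# Legacy address, matched by btcgrep1 (and by the combined btcgrep)
echo "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa" | btcgrep1

# Scan a whole file for anything that looks like a Bitcoin address
btcgrep somefile.txt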

Very good work on compiling those addresses, mate!
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
If someone has enough RAM to experiment, I'd love to see the result of this (on the 31 GB file):
I suggest instead of the awk one-liner you look at gz-sort, a small Linux program that sorts gzip-compressed files on disk while using a very small memory buffer, as low as 4 megabytes.
I checked, but it does what I'm already doing. The awk command removes duplicate lines without sorting them; I'd like to do that, but I don't have enough memory to run it.

Quote
This prints 1111111111111111111114oLvT2. This address was used 55405 times (!)
I'd be interested to see which real address is the shortest. The 111111111-addresses are all burn addresses. I'm not entirely sure what determines address length, but from what I've seen, shorter addresses are much harder to find. I've been looking for short addresses created from mini-private-keys, and they were quite rare.
To find a real short address, it needs to have sent funds too.
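A sketch of how the shortest addresses could be pulled out of the deduplicated list (file name as in the OP; finding the shortest one that has also sent coins would still need a cross-check against spending data, which these dumps don't contain):
Code:
# Prefix every address with its length, sort numerically, show the shortest.
zcat unique_addresses.txt.gz \
  | awk '{ print length($0), $0 }' \
  | sort -n \
  | head -n 25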

Quote
Maybe you can also make a list of addresses sorted by balance
See List of all Bitcoin addresses with a balance.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
If someone has enough RAM to experiment, I'd love to see the result of this (on the 31 GB file):

I suggest instead of the awk one-liner you look at gz-sort, a small Linux program that sorts gzip-compressed files on disk while using a very small memory buffer, as low as 4 megabytes.

You sort the file using
Code:
gz-sort -u addresses.txt.gz addresses_sorted.txt.gz

The -u switch removes duplicate lines from the sorted output, and you can increase the buffer size with -S, though this isn't necessary. I gave it a 1 GB buffer with -S 1G and it took around 7 hours to complete, not much shorter than the advertised completion time of 9 or 10 hours. So this program will run well in your VM; the amount of RAM isn't important.

You need to compile it yourself using make but it has minimal dependencies, only zlib and GNU headers.
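A build sketch, assuming this refers to keenerd's gz-sort on GitHub (repository location not confirmed in this thread) and that gcc, make and the zlib development headers are installed:
Code:
git clone https://github.com/keenerd/gz-sort   # assumed repository location
cd gz-sort
make
# Sort and deduplicate the dump with a 1 GB buffer, as described above.
./gz-sort -u -S 1G addresses.txt.gz addresses_sorted.txt.gz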

I used it to find the smallest address in the dump using
Code:
zcat addresses_sorted.txt.gz | head -n 55405 | uniq

This prints 1111111111111111111114oLvT2. This address was used 55405 times (!)

Here are some of the other smallest addresses:

Code:
1111111111111111111114oLvT2
111111111111111111112BEH2ro
111111111111111111112xT3273
1111111111111111111141MmnWZ
111111111111111111114ysyUW1
1111111111111111111184AqYnc
11111111111111111111BZbvjr
11111111111111111111CJawggc
11111111111111111111HV1eYjP
11111111111111111111HeBAGj
11111111111111111111QekFQw
11111111111111111111UpYBrS
11111111111111111111g4hiWR
11111111111111111111jGyPM8
11111111111111111111o9FmEC
11111111111111111111ufYVpS
111111111111111111121xzjPWX1
111111111111111111128gzo7iT
11111111111111111112AmVxQeF
11111111111111111112Fr3DURyz
11111111111111111112GvNtZ1K
11111111111111111112VUYD4wA
1111111111111111111313xyAwW
111111111111111111137vGPgFbT
11111111111111111113aT9ZSLG
111111111111111111168xDACCG
11111111111111111116B8w87yU



Maybe you can also make a list of addresses sorted by balance, now that you have an efficient way to deduplicate them.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
@LoyceV how large is the uncompressed addresses.txt.gz?
It gets around 50% larger; Bitcoin addresses don't compress very well.
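For an exact number without extracting anything to disk, the uncompressed size can be counted from the stream (a sketch; gzip -l isn't reliable here because its size field wraps around at 4 GB):
Code:
# Count the uncompressed bytes; nothing is written to disk.
zcat addresses.txt.gz | wc -c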
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
@LoyceV how large is the uncompressed addresses.txt.gz? It's at least 200 GB and counting, and it's still extracting legacy addresses. I'm worried I may run out of disk space before it's all extracted. I have a 1 TB quota. If you know how big the uncompressed unique_addresses.txt.gz is while you're at it, that would be useful to know.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
That's strange because all AWS servers have an SSD configured as the boot disk.
I guess it wasn't clear that alladdresses.loyce.club:20319 doesn't run at AWS. It uses an HDD.

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Quote
-S will tell your machine to use at most 65% CPU
I think you mean RAM, not CPU. This VM has only 256 MB, so I'll let "sort" figure it out on its own.

That is correct: the argument to -S is the amount of memory for sort(1) to use for its main buffer (see the sort manpage). With a percentage it should calculate the amount of memory to reserve. But I think even a 256 MB buffer is too small for the size of the dataset you're sorting; it will hit the disk too much.

Quote
-T puts temporary files in a directory (here named "tmp") and not in RAM; if you have an SSD, the speed isn't too shabby
That's default behaviour :) It doesn't have an SSD though, and I'm using "cputool" to keep the server load low. I'm okay without daily updates on this; I wouldn't want users to download this large file on a daily basis anyway.

Quote
I have sorted huge lists (>80 GB) on budget laptops using these two arguments. Worth a shot! If you want better hosting, PM me.
Since last year, I've been using an AWS server donated by suchmoon for loyce.club. However, since AWS charges $0.15/GB, I'm not comfortable hosting very large files on suchmoon's server.
When I tested sorting data on AWS, it started throttling disk IO after a while, which made it very slow. I've also tested a pay-by-the-hour VPS, and obviously it was a lot faster.

That's strange because all AWS servers have an SSD configured as the boot disk. If you are sorting in a VM, then all that sorting is done in a virtual hard disk, so not only are you moving data from memory into temporary host SSD space, it's also being moved inside a virtual disk file on that SSD, which puts extra strain on your hypervisor's emulated disk controller.

So it's emulating all the disk controller calls that read and write data from the disk, update the disk cache and do its other jobs, while sort(1) moves data between its memory buffer in RAM and the hard disk (which is actually a file on your host). And it's doing that for the entire 31 GB of addresses, and the external merge sort that sort(1) uses needs temporary disk space on the order of the input size, so tens of gigabytes of scratch files on top of the input and output. All this while running emulated disk reads and writes. On top of that there are the hardware-accelerated reads and writes that the host does for the VM to its disk file. That explains the poor performance while sorting.

You'll have better disk performance if you sort outside of a VM.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Code:
cat unsorted.txt | sort -u -S 65% -T tmp > sorted.txt
I'm already using "sort", which uses /tmp by default.

I'll try "sort -u" though, it might need less temporary storage than "sort | uniq". The next update is scheduled for tomorrow, I'll see how it performs.

Quote
-S will tell your machine to use at most 65% CPU
I think you mean RAM, not CPU. This VM has only 256 MB, so I'll let "sort" figure it out on its own.

Quote
-T puts temporary files in a directory (here named "tmp") and not in RAM; if you have an SSD, the speed isn't too shabby
That's default behaviour :) It doesn't have an SSD though, and I'm using "cputool" to keep the server load low. I'm okay without daily updates on this; I wouldn't want users to download this large file on a daily basis anyway.

Quote
I have sorted huge lists (>80 GB) on budget laptops using these two arguments. Worth a shot! If you want better hosting, PM me.
Since last year, I've been using an AWS server donated by suchmoon for loyce.club. However, since AWS charges $0.15/GB, I'm not comfortable hosting very large files on suchmoon's server.
When I tested sorting data on AWS, it started throttling disk IO after a while, which made it very slow. I've also tested a pay-by-the-hour VPS, and obviously it was a lot faster.

There's one thing on my wish list though: a method to show only unique addresses in order of appearance (without sorting them). It can be done with awk '!a[$0]++', but this requires a lot of memory and doesn't use temporary files.
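One low-memory alternative for that wish-list item (a sketch, building on the line-numbering trick quoted higher up this page, which was confirmed to match the awk output): tag each line with its line number, let sort deduplicate on the address while spilling to on-disk temporary files, then restore the original order:
Code:
mkdir -p tmp
# nl numbers the lines; sort -u -k2 keeps the first line seen for each address;
# sort -n -k1 restores chronological order; cut strips the line numbers again.
zcat addresses.txt.gz \
  | nl \
  | LC_ALL=C sort -u -k2 -T tmp \
  | sort -n -k1 -T tmp \
  | cut -f2- \
  | gzip > addresses_in_order_of_first_appearance.txt.gz
rm -r tmp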
copper member
Activity: 193
Merit: 234
Click "+Merit" top-right corner

Updates
Sorting a list that doesn't fit in the server's RAM is very slow. Therefore I only update unique_addresses.txt.gz twice a month (on the 6th and 21st). Check the file date here to see how old it is. If an update fails, please post here.
In between updates, I create daily updates: alladdresses.loyce.club:20319/daily_updates/. These txt-files contain unique addresses (for that day) in order of appearance.
Due to limitations in disk space, I don't do automatic updates for addresses.txt.gz. It's complete until blockchair_bitcoin_outputs_20200719.tsv.gz.



This is a wonderful initiative! A comment: Sorting a very large list with little RAM is not necessarily a problem! Try:


Code:
mkdir tmp
cat unsorted.txt | sort -u -S 65% -T tmp > sorted.txt
rm -r tmp

-S will tell your machine to use at most 65% CPU; this is some sort of optimum, according to my experience
-T puts temporary files in a directory (here named "tmp") and not in RAM; if you have an SSD, the speed isn't too shabby

I have sorted huge lists (>80 GB) on budget laptops using these two arguments. Worth a shot! If you want better hosting, PM me.
legendary
Activity: 2758
Merit: 6830
Great, saves me the trouble :)
Can I request a CSV of all the results? That makes it so much easier to use all the data than getting it per address through your site.
Just something with (at least) "address,userID,msgID" would be great for further analysis.
Of course. Once in the database, it's pretty easy to export them to the format I want.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
This is planned for my post archive. I had done that, but only with ETH addresses and the 15m posts you sent me plus the newly scraped ones.
Great, saves me the trouble :)
Can I request a CSV of all the results? That makes it so much easier to use all the data than getting it per address through your site.
Just something with (at least) "address,userID,msgID" would be great for further analysis.

I'm still at the planning stage of deciding which to do first, and with all the scraped data you've collected, it would help me scrape less and instead make an API to just look up your data.
I can get you a copy of all archived posts like I gave TryNinja if it helps. It beats scraping the forum again, although I didn't keep track of board names per topic.
legendary
Activity: 2758
Merit: 6830
I could run this code on 53 million archived posts, but the main problem will be excluding quotes. That's annoying and slow to do, and if I don't exclude them, it will completely mess up the data. On the other hand, quotes may still contain information that was deleted by the user who posted it.
Even without quotes, users still post Bitcoin addresses that aren't theirs, for instance when providing evidence on a scammer.
This is planned for my post archive. I had done that, but only with ETH addresses and the 15m posts you sent me plus the newly scraped ones.

I plan to scan all old posts + new ones for ETH and BTC addresses after everything is working fine (new bot + full database with the whole post archive).