
Topic: LoyceV's small Linux commands for handling big data (Read 916 times)

legendary
Activity: 3346
Merit: 3130
@seoincorporation: thanks for the scripts you provided, but this would be very time-consuming and slow. Imagine running your script against LoyceV's file which contains all funded addresses (1.8 GB file size). Would it take weeks (?) until it finishes? What do you think, are there any ways to optimize it?

Code:
$ cat addyBalance.tsv | cut -f1 | sed '/.\{70\}/d' | sed '/^3/d' | sed '/^1/d' |wc -l
11407170

I know the data is big (the filter above leaves 11.4 million bc1 addys), but I don't think it would take weeks.

I replaced cat with head -n 10000, and with the time command I get:

Code:
real 0m37.877s
user 0m31.956s
sys 0m5.951s

So, 10,000 in 40 seconds; that's 4,000 seconds, or a little more than an hour, for each million, and roughly 13 hours for all 11.4 million.

I think it would be faster to do it all in Python rather than calling Python from Bash once per address, as I did.
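For comparison, a single-process sketch of the base58 branch, assuming the same addyBalance.tsv layout and the base58 package used above; it reads the file once instead of spawning a Python interpreter per address:

Code:
# batch_hash160.py - decode all base58 (1.../3...) addresses in one process
import base58

with open('addyBalance.tsv') as f:
    for line in f:
        addr = line.split('\t')[0]
        if not addr.startswith(('1', '3')):
            continue  # skips bc1 addresses and the 'Addres' header line
        try:
            # b58decode_check verifies and strips the 4-byte checksum;
            # [1:] drops the version byte, leaving the 20-byte hash
            print(base58.b58decode_check(addr)[1:].hex())
        except ValueError:
            pass  # ignore lines that fail the checksum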
hero member
Activity: 630
Merit: 731
Bitcoin g33k
@seoincorporation: thanks for the scripts you provided, but this would be very time-consuming and slow. Imagine running your script against LoyceV's file which contains all funded addresses (1.8 GB file size). Would it take weeks (?) until it finishes? What do you think, are there any ways to optimize it?
legendary
Activity: 3346
Merit: 3130
Hello LoyceV, i have been working in the Address to HASH160 conversion and i made some scripts that i would like to add to your Linux Commands.

Script to get all the HASH160s from the addyBalance.tsv file for addresses starting with 1.

Code:
for a in $(cat addyBalance.tsv | cut -f1 | sed '/^b/d' | sed '/^3/d')
do
python3 -c "import binascii, base58; hash160 = binascii.hexlify(base58.b58decode_check(b'$a')).decode()[2:]; print(hash160)";
done

Script to get all the HASH160s from the addyBalance.tsv file for addresses starting with bc1.

Code:
for a in $(cat addyBalance.tsv | cut -f1 | sed '/.\{70\}/d' | sed '/^3/d' | sed '/^1/d')
do
python3 -c "import bech32; hash1 = bech32.decode(\"bc\", \"$a\"); hash2 = bytes(hash1[1]); print(hash2.hex())";
done

Run:

You can print the HASH with:
Code:
sh addy.sh
Or you can send it to a file:
Code:
sh addy.sh > a.txt

The script prints an error because the first word in the file is 'Addres' (the header line; piping through tail -n +2 would skip it), but it works fine:
Code:
$ sh addy.sh
Traceback (most recent call last):
  File "", line 1, in
  File "/home/windows/.local/lib/python3.10/site-packages/base58/__init__.py", line 157, in b58decode_check
    raise ValueError("Invalid checksum")
ValueError: Invalid checksum
23e522dfc6656a8fda3d47b4fa53f7585ac758cd
cec49f4d16b05fe163e41b15856732d985d36277
d401a46b6d847399a45878b1f25f499aad959830
4017a4219d12f11e4649d2ae9eef5bd4c9bf0d80
c8ca150ee82589d47f69b8dcd7cad684d88283f1
288b23e4a5886136a544159550a8e99f2e5672ab
cd49d5f5215aaaa0fdbf1bd2e454250edf8a54e2
cafacdc40cf8d3daa60aa479774ccd9085b4c512
b91e28f4f8f6fced313112c0c72407d85ecec39a
4a782fe173a0b6718d39667b420d9c8b07e94262
9518af9ff9c31a47f94a36b95dce267e5edcd82d

And I made a small script for a single address too:

sh bc.sh bc1qBitcoinAddress
Code:
python3 -c "import bech32; hash1 = bech32.decode(\"bc\", \"$1\"); hash2 = bytes(hash1[1]); print(hash2.hex())"

sh 1.sh 1BitcoinAddress
Code:
python3 -c "import binascii, hashlib, base58; hash160 = binascii.hexlify(base58.b58decode_check(b'$1')).decode()[2:]; print(hash160)"

You will need the Python dependencies to run these scripts (binascii and hashlib are part of the standard library and don't need to be installed):

Code:
pip install base58 bech32
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Addresses with balance=0 AND outputs=0 should not be listed.
No balance and no outputs, that means the address is unused. Those aren't in any of the data dumps.

Quote
Only those matching this condition
if balance>0
That list I have Smiley

Quote
OR (balance=0 AND outputs>0)
I'm confused: why would the 2 addresses I gave above not qualify for this?
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Not really. I am interested in addresses like this:

address, balance, outputs

1aDdressExampLeFundedxXxx, 123456, 789
bc1qnotfundedbutspent0utput, 0, 3

Addresses with balance=0 AND outputs=0 should not be listed. Only those matching this condition

if balance>0 OR (balance=0 AND outputs>0)
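Assuming balances are never negative, that condition simplifies to balance>0 OR outputs>0, which makes for an easy streaming filter; a rough Python sketch (the tab delimiter and column order are assumptions, adjust them to the real file):

Code:
# filter_funded_or_spent.py - keep addresses with balance>0 OR outputs>0
import sys

for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')  # assumed: address<TAB>balance<TAB>outputs
    try:
        balance, outputs = int(fields[1]), int(fields[2])
    except (IndexError, ValueError):
        continue  # skip the header or malformed lines
    if balance > 0 or outputs > 0:  # only balance=0 AND outputs=0 drops out
        print(fields[0])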
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
are funded (=positive balance)
See List of all Bitcoin addresses with a balance.

outputs:
Code:
block_id transaction_hash index time value value_usd recipient type script_hex is_from_coinbase is_spendable
152764 498ec7f88857d23bd19370252da439876960dad296a64534987ad54081a9cc39 0 2011-11-11 00:05:55 5001400010 143.5402 17aA19GvhzMHsq8xPwSXAPZutyr6kuzLEB pubkey 4104a8dd3f118a6122c2f8c6be3261670f7af76568cac8ff0ed95e5ef63238018e69b223dbdef9f9d5e94a053ff1afc390e230844c0b71f3648405807cd668979958ac 1 1
152764 5cc203b9389f0f7eb50669eba04ac32d666892fc95f07c25603da8a6ed9316ae 0 2011-11-11 00:05:55 1199480 0.0344 1KPxwAbFVoDimPrVECF2zgiyfX9jGW9TCy pubkeyhash 76a914c9ca1b452087cdc6b89754c1090928d7a67ef23988ac 0 -1
Would 17aA19GvhzMHsq8xPwSXAPZutyr6kuzLEB and 1KPxwAbFVoDimPrVECF2zgiyfX9jGW9TCy be what you're looking for?
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Hello all, and thanks to LoyceV for providing this great resource of information. For a certain query I'd like to have a file containing all addresses which

either

are funded (=positive balance)

or

had an output in the past (=sent some coins to someone)

Is it possible to somehow generate such a big file with this data, which I could use for a query? Alternatively, I don't mind having two separate files: the one that already exists and an additional one containing all addresses with outputs. I could run my query against both of those; that would certainly do the job.

I'm grateful for any helpful information. Thank you so much!
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
The question would be whether you look for double hits per day/per file, or in total across all files.
The total Smiley For now, I got it covered.
I can try something else too: if I use the list of addresses that are used more than once, it's only 3 GB (instead of 67), and I can search against that list. That was slower in my initial test, but that didn't cause memory problems, and I have to do it less often so it may pay off.

I'm looking for all addresses funded with 1, 2, 4, ...., 8192 mBTC in one transaction, that have no more transactions than one funding and (possibly) one spending. I want to count how many of those chips exist on each day. It could be a good measure of privacy.
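If that 3 GB list is one address per line, the search against it could be a plain set lookup; a rough sketch with placeholder file names (a Python set of 8.9 million addresses takes on the order of 1 GB of RAM):

Code:
# reused_check.py - which of my addresses appear on the reused-addresses list?
with open('addresses.txt') as f:             # placeholder: the 8.9M-address file
    mine = {line.strip() for line in f}

with open('reused_addresses.txt') as f:      # placeholder: the 3 GB reused list
    for line in f:
        addr = line.strip()
        if addr in mine:
            print(addr)
            mine.discard(addr)               # report each address only once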
legendary
Activity: 952
Merit: 1386
I do not understand what you mean by "directory" - is it a file with addresses? Or a directory on the HDD where each file has a name like an address? Then how could you have the same address twice?
It's a directory with files. Each file has all Bitcoin addresses that were used that day, some of them more than once.

Ok, I understand now. That way I can change the program to process all the files from the given directory, not only the one file (daily snapshot). The question would be whether you look for double hits per day/per file, or in total across all files.

If you change your mind, give it a try; maybe it will use fewer resources. I do not know how much memory 8.7 million addresses will take.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
I do not understand what you mean by "directory" - is it a file with addresses? Or a directory on the HDD where each file has a name like an address? Then how could you have the same address twice?
It's a directory with files. Each file has all Bitcoin addresses that were used that day, some of them more than once.

To my surprise, my grep script actually completed! It used up all RAM and added some swap, and after 24 hours of high load, it's done Cheesy

Quote
I have prepared a small program for you:
https://github.com/PawelGorny/Loyce60787783
It reads the list of addresses into memory and then reads the "directory" files with addresses - if an address exists in memory, it is marked; if the same address is hit a second time, it is removed from memory and saved to a file.
Thanks for this! I tried to test it, but I don't really want to install Java on the server just for this. I am curious how it would perform though.
legendary
Activity: 952
Merit: 1386
Question

I have a file (300 MB) with 8.9 million Bitcoin addresses. I also have a directory (67 GB) with all Bitcoin addresses. I want to know which address from the file is in the directory more than once.

Hi,
I do not understand what you mean by "directory" - is it a file with addresses? Or a directory on the HDD where each file has a name like an address? Then how could you have the same address twice?
I have prepared a small program for you:
https://github.com/PawelGorny/Loyce60787783
It reads the list of addresses into memory and then reads the "directory" files with addresses - if an address exists in memory, it is marked; if the same address is hit a second time, it is removed from memory and saved to a file. If you want to count how many times an address was hit, a change is needed.
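For anyone who would rather not run Java, roughly the same idea in Python (file names are placeholders matching the question above):

Code:
# second_hit.py - addresses from addresses.txt hit at least twice in alladdys/*
import glob

with open('addresses.txt') as f:
    pending = {line.strip() for line in f}   # loaded, not yet seen
seen_once = set()

with open('output.txt', 'w') as out:
    for path in sorted(glob.glob('alladdys/*')):
        with open(path) as f:
            for line in f:
                addr = line.strip()
                if addr in pending:          # first hit: mark it
                    pending.discard(addr)
                    seen_once.add(addr)
                elif addr in seen_once:      # second hit: save and forget
                    out.write(addr + '\n')
                    seen_once.discard(addr)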
legendary
Activity: 3346
Merit: 3130
...
Main question: how do I put this in .db format?

You don't have to put it in a .db format at all, because you can import text files into a database. The trick is to use tabs instead of spaces between the address and the balance.

Code:
echo "hello word" | sed -e 's/ /\t/g'

Once you have changed that, you can load it into a table with:

Code:
LOAD DATA INFILE '/tmp/addys.txt' INTO TABLE AddresTable;

Source: https://stackoverflow.com/questions/13579810/how-to-import-data-from-text-file-to-mysql-database
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Question

I have a file (300 MB) with 8.9 million Bitcoin addresses. I also have a directory (67 GB) with all Bitcoin addresses. I want to know which address from the file is in the directory more than once.

I use this:
Code:
grep -hf addresses.txt alladdys/* | sort | uniq -d > output.txt
For small lists of addresses, this works fine! However, with the 300 MB list, grep takes 94% of my 16 GB RAM, and there doesn't seem to be any progress. I didn't expect grep to use this much memory for a 300 MB file.
What would be a better solution?
newbie
Activity: 24
Merit: 9
I also prefer the clear text model.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Thanks! I only now realize I can just add all balances for each address, and sum them later when needed.

The drawback of such a versatile database is that I have a lot of catching up to do. Thanks for the links, I'll see if I can get something working tomorrow Smiley

Update: I don't need a database anymore, I'll stick to what I know: clear text Smiley
legendary
Activity: 952
Merit: 1386
MySQL is also good; they provide a workbench: https://www.mysql.com/products/workbench/
I will not force you to install MS SQL Server or any monster from Oracle.
Let's say you decide to use PostgreSQL. Then you get a very nice client, pgAdmin: https://www.pgadmin.org/  Using a tool like that will be very helpful for you.

Then you may, for example, create a table (txid, address, balanceChange);
txid could be your primary key (unique), and you should also create an index on address, as you will run searches on that field.
Load the data:
https://sunitc.dev/2021/07/17/load-csv-data-into-a-database-table-mysql-postgres-etc/
(create the index after you load the data, otherwise loading will take ages)

Then you can very easily check the balance change (delta) for each address:
Code:
select address, sum(balance) from tableName group by address
It should give you a list of addresses with their balance changes.

Just try to list all the possible use cases and think about what you need and how you want to use it, in order to build a correct data model. That may be the most difficult task: not duplicating data, and so on.
https://www.guru99.com/relational-data-model-dbms.html
https://en.wikipedia.org/wiki/Database_normalization

But once you start, the sky is the limit Wink It will be much easier than playing with text files.
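To get a feel for that workflow without installing a server, the same steps can be sketched with Python's built-in sqlite3 module. The rows here are made-up toy data in the (sender, recipient, value, fee) format of LoyceV's list below, and the values are kept small because SQLite integers are 64-bit; the real wei-sized amounts would need something like PostgreSQL's NUMERIC type:

Code:
# balance_db.py - toy version of the load-then-aggregate workflow
import sqlite3

# Made-up example rows: (sender, recipient, value, fee)
rows = [
    ('0xaaaa', '0xbbbb', 100, 7),
    ('0xbbbb', '0xcccc', 40, 3),
]

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE deltas (address TEXT, change INTEGER)')
for sender, recipient, value, fee in rows:
    # value+fee leaves the sender; value arrives at the recipient
    con.execute('INSERT INTO deltas VALUES (?, ?)', (sender, -(value + fee)))
    con.execute('INSERT INTO deltas VALUES (?, ?)', (recipient, value))
con.execute('CREATE INDEX idx_addr ON deltas(address)')  # index after loading

for address, total in con.execute(
        'SELECT address, SUM(change) FROM deltas GROUP BY address'):
    print(address, total)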
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
I do not really understand what you mean by a "database". Are you thinking of any particular implementation? What do you mean by ".db format"?
That confirms that I know nothing about databases Sad

Quote
Why not launch a MySQL or, maybe better, a PostgreSQL server? Loading a file like that into a database table is trivial.
RAM has nothing to do with that, I think. I mean, it helps, but it is not a blocking constraint.
I've heard of MySQL, but not PostgreSQL.

"Trivial" sounds great Cheesy But I have no idea how Tongue Google shows this, if that's the right track I can try it.
Any idea how to handle duplicate addresses: 1 address with 2 balances that have to be added together?
legendary
Activity: 952
Merit: 1386
I do not really understand what you mean by a "database". Are you thinking of any particular implementation? What do you mean by ".db format"?
Why not launch a MySQL or, maybe better, a PostgreSQL server? Loading a file like that into a database table is trivial.
RAM has nothing to do with that, I think. I mean, it helps, but it is not a blocking constraint.

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
@iceland2k14: thanks, but it feels like I'm in over my head. I've abandoned the pubkey project (at least for now).



It looks like I'm going to need to learn to use a database.
Let's say I have a list like this:
Code:
sender recipient value fee
0xae0bf57678cf8151ff95889078e944a7696e18d5 0x930509a276601ca55d508cb5983c2c0d699fd7e9 1 39890258223793255
0x8c0fcd139568055e92a2b96c48ac85fa076c6c6a 0x202f1cbc8a208ee6dece54bb8837950b89e704b6 0 316430894723098310
0x1c7e19f5283aa41a496c1f351b36e96dbaad507f 0x7e75aefd78dbfd7e0846cf608151563164fbb7b2 0 42016257624091770
0xeca2e2d894d19778939bd4dfc34d2a3c45e96456 0xeca2e2d894d19778939bd4dfc34d2a3c45e96456 0 7521743554059000
0x26bce6ecb5b10138e4bf14ac0ffcc8727fef3b2e 0x26bce6ecb5b10138e4bf14ac0ffcc8727fef3b2e 0 7521743554059000
0x57845987c8c859d52931ee248d8d84ab10532407 0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 63984998281303121920 9298601644341820
0x6604ac53a82cd784525e5f90652c4d6e6b2252af 0x8a0f69b5f97d5c5a2573314e91ef9d7f46ba6da1 0 32647788000000000
0x23d9e4be4d1d2b2a43a51cc66da725f0bd25ec43 0x95172ccbe8344fecd73d0a30f54123652981bd6f 0 17067300000000000
0x6e90ae41af1dea6f0006aa7752d9db2cf5e6a49f 0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 55455427699986136143 38403951353497129
What I want is a database that only contains Addresses and their Value. To get this, Value and Fee get subtracted from the Sender's balance, and Value gets added to the Recipient's balance. The input data will be around 200 GB, including many duplicate addresses.

Given that I know nothing about databases, how would I start doing this? Is it going to be a problem if the database is larger than my RAM? If needed, I can (easily) split this list up into 2 lists: one with Sender, Value and Fee, and the other with Recipient and Value.

@TryNinja: Considering the performance you managed to get on ninjastic.space, I think you're the right person to ask Smiley Allow me to notify you Smiley



To make it easier to understand what I need, I can turn the above table into this:
Code:
0x930509a276601ca55d508cb5983c2c0d699fd7e9 1
0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 63984998281303121920
0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 55455427699986136143

0xae0bf57678cf8151ff95889078e944a7696e18d5 -1
0x57845987c8c859d52931ee248d8d84ab10532407 -63984998281303121920
0x6e90ae41af1dea6f0006aa7752d9db2cf5e6a49f -55455427699986136143

0xae0bf57678cf8151ff95889078e944a7696e18d5 -39890258223793255
0x8c0fcd139568055e92a2b96c48ac85fa076c6c6a -316430894723098310
0x1c7e19f5283aa41a496c1f351b36e96dbaad507f -42016257624091770
0xeca2e2d894d19778939bd4dfc34d2a3c45e96456 -7521743554059000
0x26bce6ecb5b10138e4bf14ac0ffcc8727fef3b2e -7521743554059000
0x57845987c8c859d52931ee248d8d84ab10532407 -9298601644341820
0x6604ac53a82cd784525e5f90652c4d6e6b2252af -32647788000000000
0x23d9e4be4d1d2b2a43a51cc66da725f0bd25ec43 -17067300000000000
0x6e90ae41af1dea6f0006aa7752d9db2cf5e6a49f -38403951353497129

Sorting gives this:
Code:
0x1c7e19f5283aa41a496c1f351b36e96dbaad507f -42016257624091770
0x23d9e4be4d1d2b2a43a51cc66da725f0bd25ec43 -17067300000000000
0x26bce6ecb5b10138e4bf14ac0ffcc8727fef3b2e -7521743554059000
0x57845987c8c859d52931ee248d8d84ab10532407 -63984998281303121920
0x57845987c8c859d52931ee248d8d84ab10532407 -9298601644341820
0x6604ac53a82cd784525e5f90652c4d6e6b2252af -32647788000000000
0x6e90ae41af1dea6f0006aa7752d9db2cf5e6a49f -38403951353497129
0x6e90ae41af1dea6f0006aa7752d9db2cf5e6a49f -55455427699986136143
0x8c0fcd139568055e92a2b96c48ac85fa076c6c6a -316430894723098310
0x930509a276601ca55d508cb5983c2c0d699fd7e9 1
0xae0bf57678cf8151ff95889078e944a7696e18d5 -1
0xae0bf57678cf8151ff95889078e944a7696e18d5 -39890258223793255
0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 55455427699986136143
0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f 63984998281303121920
0xeca2e2d894d19778939bd4dfc34d2a3c45e96456 -7521743554059000
Sorting is going to be slow for the full data set, and will probably take about 500 GB of tmp space, but it's doable if it helps.
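If the database route doesn't work out, the sorted file can also be summed in one streaming pass with constant memory; a rough sketch (Python integers are arbitrary precision, so the large values are fine):

Code:
# sum_sorted.py - sum consecutive deltas per address in an address-sorted stream
import sys

current, total = None, 0
for line in sys.stdin:
    parts = line.split()
    if len(parts) != 2:
        continue                  # skip blank or malformed lines
    address, delta = parts
    if address != current:
        if current is not None:
            print(current, total)
        current, total = address, 0
    total += int(delta)
if current is not None:
    print(current, total)         # flush the final address

Something like sort deltas.txt | python3 sum_sorted.py would then give one final balance per address.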

Main question: how do I put this in .db format?
jr. member
Activity: 37
Merit: 68
@LoyceV The values in the sigscript (containing R, S, and the pubkey) are not fixed in size, but they have a defined structure. One piece of the structure is shown by @MrFreeDragon in this link: https://pastebin.com/Q55PyUgB
But even in this structure the length is not always 0x21 or 0x20 or 0x41; it varies, and therefore the lengths of R, S and the pubkey will vary. You will need to use dynamically sized variables to extract them. Perhaps use an Awk script or Python; that might be easier. I don't know if the Bash shell can do all of it.

The basic way to decode and extract the variable-sized data is shown in the code below...
Code:
def get_rs(sig):
    # sig is the DER-encoded signature hex, starting at the first 0x02 integer
    rlen = int(sig[2:4], 16)
    r = sig[4:4+rlen*2]
#    slen = int(sig[6+rlen*2:8+rlen*2], 16)
    s = sig[8+rlen*2:]
    return r, s

def split_sig_pieces(script):
    # Layout: <push> <DER sig + sighash byte> <push> <pubkey>; offsets are in hex chars
    sigLen = int(script[2:4], 16)
    # This slice stops 2 chars early, dropping the trailing sighash byte (e.g. 01),
    # which is why S comes out clean in get_rs
    sig = script[2+2:2+sigLen*2]
    r, s = get_rs(sig[4:])  # skip the 0x30 <len> DER sequence header
    pubLen = int(script[4+sigLen*2:4+sigLen*2+2], 16)
    pub = script[4+sigLen*2+2:]
#    assert(len(pub) == pubLen*2)
    return r, s, pub

# The example sigscript shown below
script = "8b4830450221008bf415b6c4bc7118a1d93ef8f6c63b0801d9abe2e41e390670acf9677ee58e5602200da3df76f11ae04758c947a975f84dd7dba990e00c146b451dc4fa514c6cb52d01410421557041f930252b79b0fa28e6587680053b3a3672ff0c1dca6a623c79bdc0b6125a7a2be5450e28e49731ba8f60231dd8eceeff170923717d97a1ca5a67acd4"
r, s, pub = split_sig_pieces(script)
print("R: ", r)
print("S: ", s)
print("pub: ", pub)

Code:
script:  8b4830450221008bf415b6c4bc7118a1d93ef8f6c63b0801d9abe2e41e390670acf9677ee58e5602200da3df76f11ae04758c947a975f84dd7dba990e00c146b451dc4fa514c6cb52d01410421557041f930252b79b0fa28e6587680053b3a3672ff0c1dca6a623c79bdc0b6125a7a2be5450e28e49731ba8f60231dd8eceeff170923717d97a1ca5a67acd4
R:  008bf415b6c4bc7118a1d93ef8f6c63b0801d9abe2e41e390670acf9677ee58e56
S:  0da3df76f11ae04758c947a975f84dd7dba990e00c146b451dc4fa514c6cb52d
pub:  0421557041f930252b79b0fa28e6587680053b3a3672ff0c1dca6a623c79bdc0b6125a7a2be5450e28e49731ba8f60231dd8eceeff170923717d97a1ca5a67acd4

This way you can not only extract all the pubkeys, but also extract all the R and S values of the signatures, if needed.