
Topic: Keyhunt - development requests - bug reports - page 9.

newbie
Activity: 5
Merit: 0
Thank you for your help!!!
hero member
Activity: 861
Merit: 662
I don't currently have a file called KEYFOUNDKEYFOUND.txt
Will this file be created automatically when the private key is found?

Yes, it will be created automatically.
newbie
Activity: 5
Merit: 0
I don't currently have a file called KEYFOUNDKEYFOUND.txt
Will this file be created automatically when the private key is found?
hero member
Activity: 861
Merit: 662
Hello! I launched Keyhunt. Please tell me, how can I tell that the key has been found, and where can I view it?
Sincerely!

Read the FAQ

https://github.com/albertobsd/keyhunt#faq
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Where did you download the tool? RTFM.
newbie
Activity: 5
Merit: 0
Hello! I launched Keyhunt. Please tell me, how can I tell that the key has been found, and where can I view it?
Sincerely!

Code:
$ ./keyhunt -m bsgs -f tests/125.txt -b 125 -R -k 256 -q -t 8 -s 10 -S
[+] Version 0.2.230507 Satoshi Quest, developed by AlbertoBSD
[+] Random mode
[+] K factor 256
[+] Quiet thread output
[+] Threads : 8
[+] Stats output every 10 seconds
[+] Mode BSGS random
[+] Opening file tests/125.txt
[+] Added 1 points from file
[+] Bit Range 125
[+] -- from : 0x10000000000000000000000000000000
[+] -- to   : 0x20000000000000000000000000000000
[+] N = 0x100000000000
[+] Bloom filter for 1073741824 elements : 3680.66 MB
[+] Bloom filter for 33554432 elements : 115.02 MB
[+] Bloom filter for 1048576 elements : 3.59 MB
[+] Allocating 16.00 MB for 1048576 bP Points
[+] processing 1073741824/1073741824 bP points : 100%
[+] Making checkums .. ... done
[+] Sorting 1048576 elements... Done!
[+] Writing bloom filter to file keyhunt_bsgs_4_1073741824.blm .... Done!
[+] Writing bloom filter to file keyhunt_bsgs_6_33554432.blm .... Done!
[+] Writing bP Table to file keyhunt_bsgs_2_1048576.tbl .. Done!
[+] Writing bloom filter to file keyhunt_bsgs_7_1048576.blm .... Done!
[+] Total 187789848216885788672 keys in 8590 seconds: ~21 Pkeys/s (21861449152140371 keys/s)
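As a rough guide to where the memory in runs like this goes (every constant below is read off the logs on this page rather than keyhunt's documentation, so treat it as an estimate): the logs are consistent with a baseline of 2^22 bP points that -k multiplies (2^22 * 256 = 2^30 above; 2^22 * 4096 = 2^34 in the runs further down), a bloom false-positive rate around 10^-6, two smaller bloom filters at 1/32 and 1/1024 of the bP count, and 16 bytes per bP table entry. A small C sketch of that estimate:

Code:
/* Back-of-the-envelope memory estimator for the BSGS runs in this thread.
 * Every constant here is an assumption read off the logs, not taken from
 * keyhunt's source: baseline of 2^22 points scaled by -k, bloom
 * false-positive rate ~1e-6, secondary structures at 1/32 and 1/1024 of
 * the bP count, 16 bytes per bP table entry. */
#include <stdio.h>
#include <math.h>

static double bloom_mb(double elements) {
    /* standard bloom sizing: bits = n * -ln(p) / ln(2)^2 */
    double bits_per_elem = -log(1e-6) / (log(2.0) * log(2.0));   /* ~28.76 */
    return elements * bits_per_elem / 8.0 / (1024.0 * 1024.0);
}

int main(void) {
    double baseline = 4194304.0;   /* 2^22: 2^22 * 256 = 2^30, 2^22 * 4096 = 2^34 */
    int ks[] = { 256, 4096 };      /* the -k values used on this page */
    for (int i = 0; i < 2; i++) {
        double bp = baseline * ks[i];
        double total_mb = bloom_mb(bp) + bloom_mb(bp / 32.0) + bloom_mb(bp / 1024.0)
                        + (bp / 1024.0) * 16.0 / (1024.0 * 1024.0);
        printf("-k %4d : %.0f bP points, roughly %.0f MB\n", ks[i], bp, total_mb);
    }
    return 0;
}

That works out to roughly 3.8 GB for -k 256 and roughly 61 GB for -k 4096, which is why the -k 4096 runs further down need a machine with far more than 64 GB of RAM.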
newbie
Activity: 4
Merit: 0
Hi Alberto,

Everything worked, but after a reinstall (in VirtualBox) it didn't anymore. The process hangs on the "bP points" step; however, it uses 100% CPU and ~64 GB of RAM. Do you know what could be wrong?
I use Ubuntu 18.04, here is the output:


How much RAM do you have? I mean real RAM not virtualized

128GB


P.S. After a full restart and reinstall on Ubuntu 20.04, everything seems to be working just fine.

Code:
ubuntu@keyhunt:~$ git clone https://github.com/albertobsd/keyhunt.git
Cloning into 'keyhunt'...
remote: Enumerating objects: 566, done.
remote: Counting objects: 100% (255/255), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 566 (delta 161), reused 163 (delta 118), pack-reused 311
Receiving objects: 100% (566/566), 796.20 KiB | 3.83 MiB/s, done.
Resolving deltas: 100% (320/320), done.
ubuntu@keyhunt:~$ cd keyhunt
ubuntu@keyhunt:~/keyhunt$ make
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c oldbloom/bloom.cpp -o oldbloom.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c bloom/bloom.cpp -o bloom.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c base58/base58.c -o base58.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c rmd160/rmd160.c -o rmd160.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c sha3/sha3.c -o sha3.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c sha3/keccak.c -o keccak.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c xxhash/xxhash.c -o xxhash.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c util.c -o util.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/Int.cpp -o Int.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/Point.cpp -o Point.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/SECP256K1.cpp -o SECP256K1.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/IntMod.cpp -o IntMod.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c secp256k1/Random.cpp -o Random.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c secp256k1/IntGroup.cpp -o IntGroup.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/ripemd160.o -ftree-vectorize -flto -c hash/ripemd160.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/sha256.o -ftree-vectorize -flto -c hash/sha256.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/ripemd160_sse.o -ftree-vectorize -flto -c hash/ripemd160_sse.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/sha256_sse.o -ftree-vectorize -flto -c hash/sha256_sse.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -o keyhunt keyhunt.cpp base58.o rmd160.o hash/ripemd160.o hash/ripemd160_sse.o hash/sha256.o hash/sha256_sse.o bloom.o oldbloom.o xxhash.o util.o Int.o  Point.o SECP256K1.o  IntMod.o  Random.o IntGroup.o sha3.o keccak.o  -lm -lpthread
rm -r *.o
ubuntu@keyhunt:~/keyhunt$ mcedit addresses.txt

ubuntu@keyhunt:~/keyhunt$ nice ./keyhunt -m bsgs -k 4096 -c btc -t 8 -s 0 -q -S -R -r 0:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140
[+] Version 0.2.230507 Satoshi Quest, developed by AlbertoBSD
[+] K factor 4096
[+] Threads : 8
[+] Turn off stats output
[+] Quiet thread output
[+] Random mode
[+] Mode BSGS random
[+] Opening file addresses.txt
[+] Added 25 points from file
[+] Range
[+] -- from : 0x0
[+] -- to   : 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140
[+] N = 0x100000000000
[+] Bloom filter for 17179869184 elements : 58890.60 MB
[+] Bloom filter for 536870912 elements : 1840.33 MB
[+] Bloom filter for 16777216 elements : 57.51 MB
[+] Allocating 256.00 MB for 16777216 bP Points
[+] processing 17179869184/17179869184 bP points : 100%
[+] Making checkums .. ... done
[+] Sorting 16777216 elements... Done!
[+] Writing bloom filter to file keyhunt_bsgs_4_17179869184.blm .... Done!
[+] Writing bloom filter to file keyhunt_bsgs_6_536870912.blm .... Done!
[+] Writing bP Table to file keyhunt_bsgs_2_16777216.tbl .. Done!
[+] Writing bloom filter to file keyhunt_bsgs_7_16777216.blm .... Done!



I'm not sure what was wrong with Ubuntu 18.04. My apologies for the wasted time.
hero member
Activity: 861
Merit: 662
Hi Alberto,

Everything worked, but after a reinstall (in VirtualBox) it didn't anymore. The process hangs on the "bP points" step; however, it uses 100% CPU and ~64 GB of RAM. Do you know what could be wrong?
I use Ubuntu 18.04, here is the output:


How much RAM do you have? I mean real RAM not virtualized
newbie
Activity: 4
Merit: 0
Hi Alberto,

Everything worked, but after a reinstall (in VirtualBox) it didn't anymore. The process hangs on the "bP points" step; however, it uses 100% CPU and ~64 GB of RAM. Do you know what could be wrong?
I use Ubuntu 18.04, here is the output:
Code:
git clone https://github.com/albertobsd/keyhunt.git
Cloning into 'keyhunt'...
remote: Enumerating objects: 566, done.
remote: Counting objects: 100% (255/255), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 566 (delta 161), reused 163 (delta 118), pack-reused 311
Receiving objects: 100% (566/566), 796.20 KiB | 4.52 MiB/s, done.
Resolving deltas: 100% (320/320), done.
ubuntu@ubuntu:~$ cd keyhunt/
ubuntu@ubuntu:~/keyhunt$ make
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c oldbloom/bloom.cpp -o oldbloom.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c bloom/bloom.cpp -o bloom.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c base58/base58.c -o base58.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c rmd160/rmd160.c -o rmd160.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c sha3/sha3.c -o sha3.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c sha3/keccak.c -o keccak.o
gcc -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c xxhash/xxhash.c -o xxhash.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c util.c -o util.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/Int.cpp -o Int.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/Point.cpp -o Point.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/SECP256K1.cpp -o SECP256K1.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -c secp256k1/IntMod.cpp -o IntMod.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c secp256k1/Random.cpp -o Random.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -flto -c secp256k1/IntGroup.cpp -o IntGroup.o
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/ripemd160.o -ftree-vectorize -flto -c hash/ripemd160.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/sha256.o -ftree-vectorize -flto -c hash/sha256.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/ripemd160_sse.o -ftree-vectorize -flto -c hash/ripemd160_sse.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -o hash/sha256_sse.o -ftree-vectorize -flto -c hash/sha256_sse.cpp
g++ -m64 -march=native -mtune=native -mssse3 -Wno-unused-result -Wno-write-strings -Ofast -ftree-vectorize -o keyhunt keyhunt.cpp base58.o rmd160.o hash/ripemd160.o hash/ripemd160_sse.o hash/sha256.o hash/sha256_sse.o bloom.o oldbloom.o xxhash.o util.o Int.o  Point.o SECP256K1.o  IntMod.o  Random.o IntGroup.o sha3.o keccak.o  -lm -lpthread
rm -r *.o
ubuntu@ubuntu:~/keyhunt$ nice ./keyhunt -m bsgs -k 4096 -c btc -t 8 -s 0 -q -S -R -r 0:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140
[+] Version 0.2.230507 Satoshi Quest, developed by AlbertoBSD
[+] K factor 4096
[+] Threads : 8
[+] Turn off stats output
[+] Quiet thread output
[+] Random mode
[+] Mode BSGS random
[+] Opening file addresses.txt
[+] Added 25 points from file
[+] Range
[+] -- from : 0x0
[+] -- to   : 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140
[+] N = 0x100000000000
[+] Bloom filter for 17179869184 elements : 58890.60 MB
[+] Bloom filter for 536870912 elements : 1840.33 MB
[+] Bloom filter for 16777216 elements : 57.51 MB
[+] Allocating 256.00 MB for 16777216 bP Points
[+] processing 0/17179869184 bP points : 0%

Legacy version:
Code:
[+] Version 0.2.230507 Satoshi Quest (legacy), developed by AlbertoBSD
[+] K factor 4096
[+] Threads : 8
[+] Turn off stats output
[+] Quiet thread output
[+] Random mode
[+] Mode BSGS random
[+] Opening file addresses.txt
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
ParsePublicKeyHex: Error invalid public key specified (Not lie on elliptic curve)
[+] Added 7 points from file
[+] Range
[+] -- from : 0x0
[+] -- to   : 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140
[+] N = 0x100000000000
[+] Bloom filter for 17179869184 elements : 58890.60 MB
[+] Bloom filter for 536870912 elements : 1840.33 MB
[+] Bloom filter for 16777216 elements : 57.51 MB
[+] Allocating 256.00 MB for 16777216 bP Points
[+] processing 0/17179869184 bP points : 0%
member
Activity: 177
Merit: 14
Guys, leave Alberto alone and let him manage his time freely... Alberto, we won't pressure you; take your time and develop freely, whatever and whenever you want.
hero member
Activity: 861
Merit: 662
@Op, you were going to buy a laptop with a GPU

I am going to buy a desktop computer with a GPU (not a fancy one, but a decent GPU).

When? I hope this month; I haven't had much time lately as I've been pretty busy.

What happened?

Oh, you know, life just happened. I've got some other things going on besides this crypto community, so I had to prioritize those for a bit.

Anyways, how come you have no GPU but have a CPU?

The only good computer I have isn't actually mine. I borrowed it from someone else, but unfortunately, it doesn't have a decent GPU.

AFAIK, there haven't been any CPU-only systems for years; maybe I'm wrong though.

Well, the laptop that I borrowed comes with some basic Intel GPU, if that is what you mean...

But come on guys, I understand you and I get your point: everyone wants to hit a puzzle with their powerful GPUs, and that is OK. That is why I am going to develop it for GPU, and I also want to create the pool client and server for my program...

I am not rich enough to buy all that fancy hardware, regardless of the fact that there is some asshole user out there who thinks that I somehow magically received some coins from him.

I fully support that. I think developers should work on slow computers (at least slower than the average user's computer); it would force them to improve their code without relying on the latest/fastest hardware.

Agreed, that is my point.
legendary
Activity: 952
Merit: 1367
I just want to say that if everyone only focuses on developing for GPUs just because they're faster, they might miss out on some cool speed hacks and math shortcuts that can be really helpful for people like me who don't have top-of-the-line hardware yet.

I fully support that. I think developers should work on slow computers (at least slower than the average user's computer); it would force them to improve their code without relying on the latest/fastest hardware.
copper member
Activity: 1330
Merit: 899
🖤😏
@Op, you were going to buy a laptop with a GPU, what happened? Anyways, how come you have no GPU but have a CPU? AFAIK, there haven't been any CPU-only systems for years; maybe I'm wrong though.
hero member
Activity: 861
Merit: 662
Will there be any speed improvement from using AVX2 and hugepages on CPUs? And what about using the L3 cache of Ryzen CPUs, which in some cases can replace RAM reads and increase the speed several times over?

I will check that... thanks for reminding me of these instructions; I need to verify what I can do with them.

What does the abbreviation KV mean? Can you point me to his GitHub repository?

kanhavishva

https://github.com/secp8x32


@albert0bsd
if I read between the lines correctly, there seems to be something bothering you about the versions of KeyHunt-Cuda available on GitHub (I'm thinking of the versions by Qalander and/or Manyu). I'm guessing that you may be upset because they may have stolen code from you or otherwise misused it without giving you credit. But that's just my guess. At least I gather that you have no interest in cooperating with them. Nevertheless, thank you very much for your work so far and for sharing your programs with the community. Hats off and keep it up!

WHO? I have never heard of them, and actually I don't care much...

Out of pure curiosity and personal interest in a working KeyHunt-Cuda version, I'd like to ask you: for what reasons haven't you designed a CUDA-enabled version yet? Working with a CPU lags far behind for these purposes; after all, GPUs simply have the highest power and performance. Or did you design a KeyHunt-Cuda version in the meantime and I missed it and didn't notice? I am curious and look forward to your answer.

My project started as a proof-of-concept program for myself; I was learning all the things about the elliptic curve at the same time that I was developing it.

It started with some small code using libgmp, and then I started to add some extra functions here and there.

The main reason that I haven't designed it for CUDA... or GPU is because I DON'T HAVE ONE.

So, I moved from using libgmp to the same libsecp256k1 code that JLP uses in his BSGS implementation. I only used a part of his code and made some small tweaks to make it work with my initial approach. Right now, I think my BSGS implementation is the fastest one you can get for CPU, and I'm hoping to double its speed again soon.

My C code style is somewhat old, maybe some other developers also notice it, but it is what I actually know from teaching myself.

I just want to say that if everyone only focuses on developing for GPUs just because they're faster, they might miss out on some cool speed hacks and math shortcuts that can be really helpful for people like me who don't have top-of-the-line hardware yet.
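For readers who haven't met BSGS before, here is a minimal sketch of the algorithm over a toy group (integers modulo a small prime); it is not secp256k1 and not keyhunt's code, it only illustrates why a large precomputed baby-step table (the "bP points" in the logs on this page) plus a fast membership structure (the bloom filters) sit at the heart of the approach.

Code:
/* Toy baby-step giant-step: solve g^e = target in the multiplicative group
 * mod a small prime.  Illustration only; keyhunt does the same kind of
 * search on secp256k1 points and replaces the linear table scan below with
 * bloom filters plus a sorted bP table. */
#include <stdio.h>
#include <stdint.h>

#define P 1000003ULL   /* small prime, toy group */
#define G 2ULL

static uint64_t mulmod(uint64_t a, uint64_t b) { return (a * b) % P; }

static uint64_t powmod(uint64_t b, uint64_t e) {
    uint64_t r = 1;
    while (e) { if (e & 1) r = mulmod(r, b); b = mulmod(b, b); e >>= 1; }
    return r;
}

int main(void) {
    uint64_t target = powmod(G, 728193);   /* pretend only this value is known */
    uint64_t m = 1024;                     /* baby-step table size ~ sqrt(range) */
    static uint64_t table[1024];           /* must hold m entries */

    /* Baby steps: precompute g^j for j = 0..m-1 (keyhunt's "bP points"). */
    for (uint64_t j = 0; j < m; j++) table[j] = powmod(G, j);

    /* Giant steps: walk target * g^(-i*m) and look for it in the table. */
    uint64_t step = powmod(powmod(G, m), P - 2);   /* g^(-m) via Fermat inverse */
    uint64_t gamma = target;
    for (uint64_t i = 0; i * m < P; i++) {
        for (uint64_t j = 0; j < m; j++) {
            if (table[j] == gamma) {
                printf("found exponent %llu\n", (unsigned long long)(i * m + j));
                return 0;
            }
        }
        gamma = mulmod(gamma, step);
    }
    return 1;
}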
full member
Activity: 1078
Merit: 219
Shooters Shoot...
What does the abbreviation KV mean? Can you point me to his GitHub repository?
It was his name, I believe; he changed his username.

A brilliant programmer, as is albert0bsd!

I will message you his GitHub.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
What does the abbreviation KV mean? Can you point me to his GitHub repository?
full member
Activity: 1078
Merit: 219
Shooters Shoot...
Quote
What or who do you mean by KV?
KV was the first to build Keyhunt-cuda.

Quote
If I understood correctly, the mismatch occurs after 536870912 items, so up to 536,870,912 (2^29) there are no issues or errors in the KeyHunt-Cuda implementation by Qalander or Manyu; can anyone confirm this?
My own version, a mod of the original/last KV version, works fine with 2^28 xpoints/addresses; I cannot confirm with 2^29 xpoints/addresses.

Quote
I'm guessing that you may be upset because they may have stolen code from you or otherwise misused it without giving you credit.
KV really just borrowed the name KeyHunt. It does implement a bloom filter, but that is no secret/special code. The rest of the original keyhunt-cuda was heavily based on JLP's VanitySearch, with the addition of Ethereum addresses.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
No, it is not that. The bloom filter that he is using has some bugs; I reported those bugs to the developer of the bloom filter:
https://github.com/jvirkki/libbloom/issues/20
But the developer had not updated it for some years before that, so I solved it with my own implementation.


I notified KV but he didn't fix it, as far as I can remember.
What or who do you mean by KV?

So for a small list of addresses it works fine, but for bigger lists it can fail.
Yes, for a small list of addresses or a single address the bug that I'm describing doesn't exist, so it will work fine, but I don't know if there are other errors there.
Are you saying that if your address list contains close to 2^32 addresses, a bug may exist, as you described above? That's a large number of addresses! I've run it with 2^28 addresses and found the key I had put in there as a POW address.
Yes, the bug exists, but its behavior is unexpected.

if you see this PoC:
https://github.com/jvirkki/libbloom/issues/20

Code:
Items 4194304 bloom filter bytes 7537997
Items 8388608 bloom filter bytes 15075994
Items 16777216 bloom filter bytes 30151987
Items 33554432 bloom filter bytes 60303973
Items 67108864 bloom filter bytes 120607946
Items 134217728 bloom filter bytes 241215893
Items 268435456 bloom filter bytes 482431785
Items 536870912 bloom filter bytes 427992657
Items 1073741824 bloom filter bytes 319114402
Items 2147483648 bloom filter bytes 101357891

Why do more items in the bloom filter lead to a lower number of bytes? Look carefully:

Items 268435456 bloom filter bytes 482431785
Items 536870912 bloom filter bytes 427992657
Items 1073741824 bloom filter bytes 319114402
Items 2147483648 bloom filter bytes 101357891
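The shrinking byte counts look exactly like a 32-bit overflow in the size calculation. Below is a small standalone sketch (not the actual libbloom code; the 0.001 false-positive rate is an assumption picked to roughly match the numbers above) that reproduces the pattern by truncating the bit count to 32 bits:

Code:
/* Hypothetical illustration of the wrap-around: if the number of bits is
 * stored in a 32-bit integer, it wraps once entries * bits_per_element
 * exceeds 2^32, so the reported size shrinks as the item count grows. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    double error = 0.001;                                  /* assumed rate */
    double bpe = -log(error) / (log(2.0) * log(2.0));      /* bits per element */
    for (uint64_t entries = 4194304ULL; entries <= 2147483648ULL; entries *= 2) {
        uint64_t bits64 = (uint64_t)(entries * bpe);       /* correct size   */
        uint32_t bits32 = (uint32_t)bits64;                /* wraps mod 2^32 */
        printf("Items %llu  bytes(64-bit) %llu  bytes(32-bit) %u\n",
               (unsigned long long)entries,
               (unsigned long long)(bits64 / 8), bits32 / 8);
    }
    return 0;
}

Up to 268435456 items the two columns agree; from 536870912 items onward the 32-bit column wraps around and starts shrinking, just like the PoC.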

If I understood correctly, the mismatch occurs after 536870912 items, so up to 536,870,912 (2^29) there are no issues or errors in the KeyHunt-Cuda implementation by Qalander or Manyu; can anyone confirm this?

I updated my bloom filter implementation to use xxhash (64 bits) instead of murmurhash (32 bits), because with murmurhash, if the number of addresses in the list is near 2^32, the bloom filter may become somewhat saturated: almost any data that you pass to the bloom filter will produce a collision. Look at the original code in https://github.com/jvirkki/libbloom/blob/master/bloom.c#L57 With a 64-bit hash the problem is solved, but the original bloom filter may never change it for compatibility reasons. The KV version may have some other bugs, but since it was some kind of copy of my keyhunt, I prefer to improve my own version rather than fix other versions.
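For context, a common way to turn a single 64-bit hash into the k probe positions of a bloom filter is to split it into two 32-bit halves and use double hashing. Keyhunt's actual bloom code may differ; the sketch below only assumes the xxhash library that the build log shows bundled as xxhash/xxhash.c, and everything else (names, filter size, k) is made up for illustration:

Code:
/* Sketch only: derive k bloom positions from one 64-bit xxhash value via
 * double hashing.  With a 32-bit hash there are only 2^32 distinct (h1, h2)
 * starting points, which is where the saturation described above comes
 * from; 64 bits removes that ceiling. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>
#include "xxhash.h"          /* bundled with keyhunt as xxhash/xxhash.c */

static bool bloom_test_or_add(uint8_t *bf, uint64_t bits, int k,
                              const void *buf, size_t len, bool add)
{
    uint64_t h  = XXH64(buf, len, 0);   /* seed 0, arbitrary choice */
    uint64_t h1 = h & 0xffffffffULL;
    uint64_t h2 = (h >> 32) | 1ULL;     /* force odd so the probes spread */
    bool present = true;
    for (int i = 0; i < k; i++) {
        uint64_t pos  = (h1 + (uint64_t)i * h2) % bits;
        uint8_t  mask = (uint8_t)(1u << (pos & 7));
        if (!(bf[pos >> 3] & mask)) {
            present = false;
            if (add) bf[pos >> 3] |= mask;
            else return false;          /* definitely not in the filter */
        }
    }
    return present;                     /* "probably present" if true */
}

int main(void) {
    static uint8_t filter[1 << 20];     /* 1 MiB toy filter, zero-initialized */
    const char *entry = "example entry";             /* hypothetical data */
    uint64_t bits = (uint64_t)sizeof(filter) * 8;
    bloom_test_or_add(filter, bits, 7, entry, strlen(entry), true);
    printf("second lookup says: %s\n",
           bloom_test_or_add(filter, bits, 7, entry, strlen(entry), false)
               ? "probably present" : "not present");
    return 0;
}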

@albert0bsd
if I read between the lines correctly, there seems to be something bothering you about the versions of KeyHunt-Cuda available on GitHub (I'm thinking of the versions by Qalander and/or Manyu). I'm guessing that you may be upset because they may have stolen code from you or otherwise misused it without giving you credit. But that's just my guess. At least I gather that you have no interest in cooperating with them. Nevertheless, thank you very much for your work so far and for sharing your programs with the community. Hats off and keep it up!

Out of pure curiosity and personal interest in a working KeyHunt-Cuda version, I'd like to ask you: for what reasons haven't you designed a CUDA-enabled version yet? Working with a CPU lags far behind for these purposes; after all, GPUs simply have the highest power and performance. Or did you design a KeyHunt-Cuda version in the meantime and I missed it and didn't notice? I am curious and look forward to your answer.

I would be very interested in a multi-GPU, CUDA-capable KeyHunt version that comes with a fully functional bloom filter and can handle not only p2pkh legacy (compressed and uncompressed) but also p2sh and bech32 addresses. That would be gigantic. But I know that is a lot to wish for, and it surely needs a lot of work. Does anyone know if something like this already exists? I am looking forward to your feedback.
newbie
Activity: 29
Merit: 0
Will there be any speed improvement from using AVX2 and hugepages on CPUs? And what about using the L3 cache of Ryzen CPUs, which in some cases can replace RAM reads and increase the speed several times over?