Will there be any speed improvement from using AVX2 and hugepages on CPUs? The same goes for the L3 cache on Ryzen CPUs, which in some cases can take the place of RAM reads and increase the speed several times over.
I will check that... thanks for reminding me of those instructions, I need to verify what I can do with them.
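For anyone curious about the hugepages part, here is a minimal sketch, assuming a Linux system with 2 MB hugepages reserved in the kernel, of how a large scratch buffer could be requested through mmap with MAP_HUGETLB and fall back to normal pages otherwise. This is only an illustration of that interface, not keyhunt's actual allocation code; the buffer size and fallback behaviour are made up for the example.

Code:
/*
 * Minimal sketch (illustration only, not keyhunt code): try to back a large
 * buffer with 2 MB hugepages on Linux via mmap + MAP_HUGETLB, and fall back
 * to normal 4 KB pages if no hugepages are reserved.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define HUGEPAGE_SIZE (2UL * 1024 * 1024)   /* typical x86-64 hugepage size */

static void *alloc_big_buffer(size_t size)
{
    /* round the request up to a multiple of the hugepage size */
    size_t rounded = (size + HUGEPAGE_SIZE - 1) & ~(HUGEPAGE_SIZE - 1);

    void *p = mmap(NULL, rounded, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p != MAP_FAILED)
        return p;   /* hugepage-backed: fewer TLB misses on big lookup tables */

    /* fallback: regular pages (e.g. when vm.nr_hugepages is 0) */
    p = mmap(NULL, rounded, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}

int main(void)
{
    void *buf = alloc_big_buffer(256UL * 1024 * 1024);  /* e.g. a 256 MB table */
    printf("buffer at %p\n", buf);
    return 0;
}

AVX2 would be a separate change (vectorizing the hot loops with intrinsics), and how much either of these helps depends on how memory-bound the search actually is.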
What does the KV abbreviation mean? Can you point me to his GitHub repository?
kanhavishva
https://github.com/secp8x32

@albert0bsd
If I read between the lines correctly, there seems to be something bothering you about the versions of KeyHunt-Cuda available on GitHub (I'm thinking of the versions by Qalander and/or Manyu). I'm guessing that you may be upset because they may have stolen code from you or otherwise misused it without giving you credit. But that's just my guess. At least I gather that you have no interest in cooperating with them. Nevertheless, thank you very much for your work so far and for sharing your programs with the community. Hats off and keep it up!
WHO? I have never heard of them, and actually I don't care much...
Out of pure curiosity and personal interest in a working KeyHunt-Cuda version, I'd like to ask you: for what reasons haven't you designed a CUDA-enabled version yet? After all, working on a CPU lags far behind for these purposes; GPUs simply offer the highest power and performance. Or did you design a KeyHunt-Cuda version in the meantime and I missed it? I'm curious and look forward to your answer.
My project started as a proof-of-concept program for myself; I was learning everything about the elliptic curve at the same time that I was developing it.
It started as some small code based on libgmp, and then I began adding extra functions here and there.
The main reason that I haven't designed it for CUDA... or GPU is because I DON'T HAVE ONE.
So, I moved from using libgmp to the same libsecp256k1 code that JLP uses in his BSGS implementation. I only used a part of his code and made some small tweaks to make it work with my initial approach. Right now, I think my BSGS implementation is the fastest one you can get for CPU, and I'm hoping to double its speed again soon.
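For readers who have not seen BSGS before, here is a minimal toy sketch of the baby-step giant-step idea over integers modulo a small prime. It only illustrates the meet-in-the-middle structure (a roughly sqrt(n)-sized table of baby steps, then giant steps searched against it) and is not the libsecp256k1-based elliptic-curve code that keyhunt actually uses; all numbers in it are made-up demo values.

Code:
/*
 * Toy baby-step giant-step (BSGS) demo for the discrete-log problem in the
 * multiplicative group mod a small prime. Illustration only, NOT keyhunt's
 * secp256k1 implementation. Compile with: gcc bsgs_demo.c -o bsgs_demo -lm
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

typedef struct { uint64_t val; uint64_t idx; } entry_t;

/* modular exponentiation: base^exp mod p */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t p) {
    uint64_t r = 1 % p;
    base %= p;
    while (exp) {
        if (exp & 1) r = (r * base) % p;
        base = (base * base) % p;
        exp >>= 1;
    }
    return r;
}

static int cmp_entry(const void *a, const void *b) {
    uint64_t x = ((const entry_t *)a)->val, y = ((const entry_t *)b)->val;
    return (x > y) - (x < y);
}

/* solve g^x = h (mod p); returns x, or -1 if no solution was found */
static int64_t bsgs(uint64_t g, uint64_t h, uint64_t p) {
    uint64_t m = (uint64_t)ceil(sqrt((double)(p - 1)));
    entry_t *baby = malloc(m * sizeof(entry_t));
    if (!baby) return -1;

    /* baby steps: store g^j for j = 0..m-1, then sort for fast lookup */
    uint64_t cur = 1;
    for (uint64_t j = 0; j < m; j++) {
        baby[j].val = cur;
        baby[j].idx = j;
        cur = (cur * g) % p;
    }
    qsort(baby, m, sizeof(entry_t), cmp_entry);

    /* giant steps: walk h * (g^-m)^i and look for a collision in the table */
    uint64_t factor = powmod(g, p - 1 - (m % (p - 1)), p); /* g^(-m) via Fermat */
    uint64_t gamma = h % p;
    int64_t result = -1;
    for (uint64_t i = 0; i < m && result < 0; i++) {
        entry_t key = { gamma, 0 };
        entry_t *hit = bsearch(&key, baby, m, sizeof(entry_t), cmp_entry);
        if (hit) result = (int64_t)(i * m + hit->idx);
        gamma = (gamma * factor) % p;
    }
    free(baby);
    return result;
}

int main(void) {
    /* toy example: find x with 2^x = 33 (mod 101); 2 is a primitive root */
    uint64_t p = 101, g = 2, h = 33;
    int64_t x = bsgs(g, h, p);
    if (x >= 0 && powmod(g, (uint64_t)x, p) == h)
        printf("x = %lld\n", (long long)x);
    else
        printf("no solution found\n");
    return 0;
}

The real program applies the same table-then-search idea to elliptic-curve points instead of integers, which this toy version deliberately leaves out.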
My C code style is somewhat old-fashioned; maybe other developers have noticed it too, but it is what I actually know from teaching myself.
I just want to say that if everyone only focuses on developing for GPUs just because they're faster, they might miss out on some cool speed hacks and math shortcuts that can be really helpful for people like me who don't have top-of-the-line hardware yet.