
Topic: VanitySearch (Yet another address prefix finder) - page 61. (Read 31159 times)

sr. member
Activity: 462
Merit: 696
Thanks for the link Smiley
On the GPU, I must say I don't have a clear idea. Nsight is not obvious and it's difficult to interpret the results. It is good for determining whether the GPU is well used (grid size, stream processor occupancy, memory transfers, ...), but I didn't manage to get a clear profile function by function. The GPU does not do Base58; it computes up to the hash160 and sends the results back to the CPU, which checks the full Base58 addresses.
Concerning the OpenCL version, I will see; I'm not familiar with it.
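For the curious, a rough sketch (plain C++, not the actual VanitySearch code) of that CPU-side check: turning a 20-byte hash160 returned by the GPU into a Base58Check address before matching the prefix. The sha256() helper is assumed to be available and is not defined here.

#include <cstdint>
#include <string>
#include <vector>

// Assumed available from any SHA-256 implementation; not defined in this sketch.
std::vector<uint8_t> sha256(const std::vector<uint8_t>& data);

// Build a P2PKH address from a hash160: version byte + hash160 + 4-byte checksum,
// then Base58 encode the 25-byte payload.
std::string HashToAddress(const uint8_t hash160[20]) {
  std::vector<uint8_t> payload;
  payload.push_back(0x00);                               // mainnet P2PKH version byte
  payload.insert(payload.end(), hash160, hash160 + 20);
  std::vector<uint8_t> chk = sha256(sha256(payload));    // double SHA-256 checksum
  payload.insert(payload.end(), chk.begin(), chk.begin() + 4);

  static const char* alphabet =
      "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";
  size_t zeros = 0;                                      // leading zero bytes become '1'
  while (zeros < payload.size() && payload[zeros] == 0) zeros++;

  std::string digits;                                    // base58 digits, least significant first
  size_t start = zeros;
  while (start < payload.size()) {                       // repeated division of the buffer by 58
    int rem = 0;
    for (size_t i = start; i < payload.size(); i++) {
      int v = rem * 256 + payload[i];
      payload[i] = (uint8_t)(v / 58);
      rem = v % 58;
    }
    digits.push_back(alphabet[rem]);
    while (start < payload.size() && payload[start] == 0) start++;
  }
  std::string out(zeros, '1');
  out.append(digits.rbegin(), digits.rend());
  return out;
}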

legendary
Activity: 1914
Merit: 2071
Hello,

I would like to thank arulbero, who gave me a great tip by PM to improve speed using some symmetries Wink
I missed this, shame on me.
It will save a few modular mults. However, only ~40% of the CPU time is spent on modular mults; the other 60% mainly goes to SHA, RIPEMD, Base58, ModInv and byte swapping, so I don't know if I can reach the 2.0 MKey/s (x1.66).
For Linux (CPU side), I have to work on code generation optimization, but assembly in AT&T syntax drives me crazy.

As a reference for SHA and RIPEMD, you could look here: https://github.com/klynastor/supervanitygen

I don't use Base58 in my code, because I only need addresses in hex format, not Base58.

When will there be an OpenCL implementation?  Smiley


EDIT: on the CPU, 40% is used for ECC arithmetic; what about on the GPU? I'm curious.
sr. member
Activity: 462
Merit: 696
Hello,

I would like to thank arulbero, who gave me a great tip by PM to improve speed using some symmetries Wink
I missed this, shame on me.
It will save a few modular mults. However, only ~40% of the CPU time is spent on modular mults; the other 60% mainly goes to SHA, RIPEMD, Base58, ModInv and byte swapping, so I don't know if I can reach the 2.0 MKey/s (x1.66).
For Linux (CPU side), I have to work on code generation optimization, but assembly in AT&T syntax drives me crazy.

Anyway, I managed to set up CUDA SDK 8.0 on the old Ubuntu PC. I had to patch the NVIDIA driver, a nightmare.
But now CUDA works: I managed to compile the sample code and make it run, so I will be able to develop the multi-GPU release of VanitySearch.
sr. member
Activity: 462
Merit: 696
I have to wait 1 hour to answer your last PM Sad
It's time for me to go to sleep.
See you Smiley
sr. member
Activity: 462
Merit: 696
b) a * b = c mod p:  a*b --> 8 x 64-bit limbs, then upper 4 limbs * (2**256 - p) + lower 4 limbs.

I tried this. ~Same performance, as for secp256k1 the multiplication by (2**256 - p) in the modular mult can be reduced to a single 64-bit mult. So I'm interested in c).
OK, on Linux, performance is still bad, I'm sorry. Some problem with the intrinsics...
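For reference, a minimal sketch (an illustration, not code from either library) of the folding in b): for secp256k1, 2**256 - p = 0x1000003D1 fits in a single 64-bit word, so the "upper 4 limbs * (2**256 - p)" step is just four 64-bit scalar multiplications.

#include <cstdint>
typedef unsigned __int128 u128;

static const uint64_t K = 0x1000003D1ULL;      // 2^256 - p = 2^32 + 977 for secp256k1

// One folding pass: x is an 8-limb (little-endian) 512-bit product, r receives a
// 4-limb value congruent to x mod p. Returns the (very rare) carry out of the top
// limb; the caller folds it back (add carry*K) and does a final conditional
// subtraction of p.
uint64_t Reduce512(const uint64_t x[8], uint64_t r[4]) {
  uint64_t t[5];
  uint64_t c = 0;
  for (int i = 0; i < 4; i++) {                // t = low half + high half * K
    u128 s = (u128)x[4 + i] * K + x[i] + c;
    t[i] = (uint64_t)s;
    c = (uint64_t)(s >> 64);
  }
  t[4] = c;                                    // small (roughly 33 bits)

  u128 s = (u128)t[4] * K + t[0];              // fold the small overflow limb again
  r[0] = (uint64_t)s;
  uint64_t carry = (uint64_t)(s >> 64);
  for (int i = 1; i < 4; i++) {
    u128 s2 = (u128)t[i] + carry;
    r[i] = (uint64_t)s2;
    carry = (uint64_t)(s2 >> 64);
  }
  return carry;                                // almost always 0
}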
legendary
Activity: 1914
Merit: 2071
Linux or Windows?
Is it open source? Can I try it?
Linux. You have a PM.
sr. member
Activity: 462
Merit: 696
Linux or Windows?
Is it open source? Can I try it?
legendary
Activity: 1914
Merit: 2071
A group size of 512 does not bring a significant improvement (less than 1%). The DRS62 ModInv is fast and almost negligible with a group size of 256.
If you have a modular mult faster than the digit-serial Montgomery mult on a 256-bit field, I'm obviously fully open. Folding does not improve things on 256 bits when working with 64-bit digits. I'm not sure whether Barrett could be faster; I must say I didn't try, and for a "medium size" field there can be traps.


On my pc:

VanitySearch -stop -u -t 1 1tryme --> 1.2 MKeys/s

my ECC library --> 2.0 MKeys/s (17 M public keys/s)

EDIT:
I use:

a) group of 4096 points
b) a * b = c mod p:  a*b --> 8 x 64-bit limbs, then upper 4 limbs * (2**256 - p) + lower 4 limbs.
c) exploit some properties of the secp256k1 curve



sr. member
Activity: 462
Merit: 696
A group size of 512 does not bring a significant improvement (less than 1%). The DRS62 ModInv is fast and almost negligible with a group size of 256.
If you have a modular mult faster than the digit-serial Montgomery mult on a 256-bit field, I'm obviously fully open. Folding does not improve things on 256 bits when working with 64-bit digits. I'm not sure whether Barrett could be faster; I must say I didn't try, and for a "medium size" field there can be traps.
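For readers following along, here is a toy sketch of what a Montgomery multiplication does, with a single 64-bit limb standing in for the 256-bit field (a digit-serial version interleaves this reduction step across the four limbs). The p < 2^63 restriction only keeps the toy intermediates within 128 bits; this is not the VanitySearch code.

#include <cstdint>
typedef unsigned __int128 u128;

// Montgomery multiplication with R = 2^64: returns a*b*R^-1 mod p.
// Assumes a, b < p, p odd and p < 2^63 (so intermediates fit in 128 bits),
// and pInv = -p^-1 mod 2^64 precomputed once.
uint64_t montmul(uint64_t a, uint64_t b, uint64_t p, uint64_t pInv) {
  u128 t = (u128)a * b;                        // full 128-bit product
  uint64_t m = (uint64_t)t * pInv;             // m = (t mod R) * (-p^-1) mod R
  u128 u = (t + (u128)m * p) >> 64;            // t + m*p is exactly divisible by R
  uint64_t r = (uint64_t)u;
  return (r >= p) ? r - p : r;                 // single conditional subtraction
}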


legendary
Activity: 1914
Merit: 2071
Hello,

Affine coordinates for search (faster):
Each group performs p = startP + i*G, for i in [1..group_size], where i*G is a pre-computed table containing G, 2G, 3G, ... in affine coordinates. The inversion of the delta x (x1 - x2) is done once per group (1 ModInv and 256*3 mult). group_size is 256 keys long.

Projective coordinates for the EC multiplication (computation of the starting keys). Normalization of the key is done after the multiplication of the starting key.

Edit:
You may also have noticed that I have an innovative implementation of modular inversion (DRS62) which is almost 2 times faster than the Montgomery one. Some benchmarks and comments are available in IntMod.cpp.


OK.
Two questions:

1) Why only 256 for the group size? Is there a memory problem? Fewer inversions are better.

2) For the field multiplication a*b = c mod p: why do you use Montgomery? Are you sure it is worth it?
sr. member
Activity: 462
Merit: 696
Hello,

Some news:
I just published a new release (1.4) with a few fixes (especially for Linux), but the uninitialized memory bug may also affect Windows (I didn't manage to reproduce this bug on Windows, but it can be random).

I managed to get back an old PC from my company (~8 years old) with 2 Quadro 600 cards inside Smiley
Unfortunately the Quadro 600 (Fermi) has only compute capability 2.1, so I will have to set up CUDA SDK 8.0 (the last version that supports Fermi). I set up Ubuntu on this PC and will try to develop the multi-GPU release under Linux.
I hope I will manage to get working drivers for the Quadro 600 and to make it all work.
sr. member
Activity: 462
Merit: 696
Hello,

Affine coordinates for search (faster):
Each group performs p = startP + i*G, for i in [1..group_size], where i*G is a pre-computed table containing G, 2G, 3G, ... in affine coordinates. The inversion of the delta x (x1 - x2) is done once per group (1 ModInv and 256*3 mult). group_size is 256 keys long.

Projective coordinates for the EC multiplication (computation of the starting keys). Normalization of the key is done after the multiplication of the starting key.

Edit:
You may also have noticed that I have an innovative implementation of modular inversion (DRS62) which is almost 2 times faster than the Montgomery one. Some benchmarks and comments are available in IntMod.cpp.
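To make the "1 ModInv and 256*3 mult" figure above concrete, here is a small sketch of the batch-inversion idea (Montgomery's trick), with plain 64-bit arithmetic modulo a prime standing in for the 256-bit field; modmul/modinv below are illustrative helpers, not the actual VanitySearch classes.

#include <cstdint>
#include <vector>

static const uint64_t P = 0xFFFFFFFFFFFFFFC5ULL;      // 2^64 - 59, a prime (toy field)

uint64_t modmul(uint64_t a, uint64_t b) {
  return (uint64_t)(((unsigned __int128)a * b) % P);
}

uint64_t modinv(uint64_t a) {                          // Fermat: a^(p-2) mod p
  uint64_t r = 1, e = P - 2;
  while (e) { if (e & 1) r = modmul(r, a); a = modmul(a, a); e >>= 1; }
  return r;
}

// Replace every dx[i] by its inverse using a single modinv: one inversion plus
// roughly 3 multiplications per element instead of one inversion per element.
void BatchInverse(std::vector<uint64_t>& dx) {
  size_t n = dx.size();
  std::vector<uint64_t> prefix(n);
  uint64_t acc = 1;
  for (size_t i = 0; i < n; i++) {                     // running prefix products
    prefix[i] = acc;
    acc = modmul(acc, dx[i]);
  }
  uint64_t inv = modinv(acc);                          // the single inversion
  for (size_t i = n; i-- > 0; ) {                      // walk back, peeling one term per step
    uint64_t invI = modmul(inv, prefix[i]);            // = 1 / dx[i]
    inv = modmul(inv, dx[i]);                          // = 1 / (dx[0] * ... * dx[i-1])
    dx[i] = invI;
  }
}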
legendary
Activity: 1914
Merit: 2071
Hello,

I would like to present a new bitcoin prefix address finder called VanitySearch. It is very similar to Vanitygen.
The main differences with Vanitygen are that VanitySearch does not use the heavy OpenSSL library for the CPU calculation and that the kernel is written in CUDA in order to take full advantage of inline PTX assembly.
On my Intel Core i7-4770, VanitySearch runs ~4 times faster than vanitygen64 (1.32 MKey/s -> 5.27 MKey/s).
On my GeForce GTX 645, VanitySearch runs ~1.5 times faster than oclvanitygen (9.26 MKey/s -> 14.548 MKey/s).
If you want to compare VanitySearch and Vanitygen results, use the -u option to search for uncompressed addresses.

There are still lots of improvements to make.
Feel free to test it and to submit issues.


Are you using affine or Jacobian coordinates for the points?
sr. member
Activity: 462
Merit: 696
Hello,
No problem.
Done Wink
member
Activity: 117
Merit: 32
Hello jean_luc, I would like to send you a PM, but you would need to activate this option in your profile, because otherwise new members cannot. Smiley
sr. member
Activity: 462
Merit: 696
Hello,

I published a new release (1.3) with a ~15% global performance increase (~20% on GPU).
On my hardware, VanitySearch is now 2 times faster (GPU) than oclvanitygen.
My goal was to reach an 8-character (case sensitive) prefix in a reasonable time on my 6-year-old hardware; it still needs 2 weeks of computation for a 50% probability.
I'm not sure I will reach my goal of 2 or 3 days without changing my hardware Cheesy
The next step will be to handle multiple GPUs and to support CUDA on Linux.
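As an aside, the "50% probability" figure follows from treating each generated key as an independent trial; a quick back-of-the-envelope sketch (the difficulty and key rate below are placeholder values, not the actual figures from this run):

#include <cmath>
#include <cstdio>

int main() {
  double difficulty = 1e13;   // keys per expected match for the prefix (placeholder)
  double rate = 17e6;         // keys per second (placeholder)
  // P(found after N keys) = 1 - (1 - 1/D)^N  ~=  1 - exp(-N/D)
  double n50 = std::log(2.0) * difficulty;    // keys needed for a 50% chance
  double days = n50 / rate / 86400.0;
  std::printf("~%.1f days for a 50%% probability\n", days);
  return 0;
}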

sr. member
Activity: 462
Merit: 696
Hello,

Thank you for your interest and for reporting issues Smiley

I just published a new release (v1.2):
-Updated probability calculation for very large prefixes
-Avoid the default configuration hanging the system when the GPU is enabled
-Performance increase (~10%)
donator
Activity: 4718
Merit: 4218
Great to see someone continuing to develop an open source vanity application.  I'll have to check this out when I get an opportunity. 
sr. member
Activity: 462
Merit: 696
Thanks for testing  Smiley
The 2 "Check" fields are here especially for debugging/checking purposes. The 2 'checked' addresses are recomputed from the private key by a direct multiplication. To reach the desired address, during the search, generator points are added one by one.
You're right by default, if you just add the -gpu option, all CPU cores are used and it slows down much the system and even the GPU. The CPU cannot handle GPU/CPU transfer efficiently. I wrote few words about this on the README but I will let one CPU core free if the gpu is selected.
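In the meantime, a practical workaround is to pass an explicit thread count together with -gpu, for example on a 4-core CPU something like:

VanitySearch -gpu -t 3 1Prefix

(the prefix here is just an example) so that one core stays free to feed the GPU.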
sr. member
Activity: 462
Merit: 696
Hello  Smiley

I've just published a new release. There is also a makefile for Linux, but it supports only the CPU release; the CUDA release for Linux is coming. I'm very interested in knowing the performance you get on your hardware (Linux/Windows/CPU/GPU).

Thanks for testing and reporting issues.