
Topic: Neural Networks and Secp256k1 (Read 470 times)

member
Activity: 182
Merit: 30
May 19, 2021, 04:51:15 AM
#41
Is it good or not if I change the 32 to a max of 256?

~

I tried modifying Keras sample code (the Titanic and California-housing prediction examples) to use a pubkey dataset.
It is just a test; this may be the wrong algorithm to use.

On GitHub there are a lot of Bitcoin trading bots that use neural networks; predicting price is easier than predicting an elliptic curve.

I'm no ML expert but I don't think adding more NN layers is going to improve the accuracy of this neural network. Remember that we basically have a hexadecimal number as input (trying to feed millions of hex numbers as input to the same neurons makes no sense because the numbers have no relationship with each other), and as such, they should be in bytes form instead of ASCII, and then we'd do something like assign each bit of the number to one or more input neurons, and then apply layers on it.

If you do want to use millions of numbers as input then you should use a ML classification algorithm and also give as input the polarity of these points for training. Neural networks are meant to work on one input only e.g. given a drawing of the number "2", identify which digit it is.

It's on GitHub; I posted it here a few years ago: an RNN converting training BTC addresses to private keys. All the code to train it is there, and of course if you know TensorFlow it's easy to add layers and do as you wish.

https://github.com/btc-room101/bitcoin-rnn

The reason I did this was to get a ballpark private-key estimate as input to Kangaroo, where you need to be within 2^40 of the key to make it work; but certainly any relation can be found if you have enough data.

The code is written in Python; all the source is there.

I'll tell you how the NSA does this backdoor: you use the endomorphisms of secp256k1, from 1 to infinity, and train from simple cases up to advanced ones.

See the GTX 1060 post I made today for a reference on the endomorphisms.

It's best to start small: use SageMath to learn about secp256k1 on simple cases like [0,7], and keep the 'p' small until you know what you're doing; don't jump right into the 2^256 stuff. All the data comes from SageMath and is then processed in Python.

SageMath has all the glue for secp256k1 already installed. What follows is the code to get you started; if you want to add the NN, grab the GitHub repo.

from tinyec.ec import SubGroup, Curve

# Domain parameters for the `secp256k1` curve
# (as defined in http://www.secg.org/sec2-v2.pdf)
name = 'secp256k1'
p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f
n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141
a = 0x0000000000000000000000000000000000000000000000000000000000000000
b = 0x0000000000000000000000000000000000000000000000000000000000000007
g = (0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798,
     0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)
h = 1
curve = Curve(a, b, SubGroup(p, g, n, h), name)
print('curve:', curve)

privKey = int('0x51897b64e85c3f714bba707e867914295a1377a7463a9dae8ea6a8b914246319', 16)
print('privKey:', hex(privKey)[2:])

pubKey = curve.g * privKey
pubKeyCompressed = '0' + str(2 + pubKey.y % 2) + hex(pubKey.x)[2:].zfill(64)  # zero-pad x to 64 hex digits
print('pubKey:', pubKeyCompressed)

# endomorphism constants (note the integer division // -- plain / would
# produce a float in Python 3 and break pow())
lamN = pow(3, (n - 1) // 3, n)
betP = pow(2, (p - 1) // 3, p)

# G is the "1/2 special point"; n is the order of E, and G is the secp256k1
# generator. The point
#   00000000000000000000003b78ce563f89a0ed9414f5aa28ad0d96d6795f9c63
#   c0c686408d517dfd67c2367651380d00d126e4229631fd03f8ff35eef1a61e3c
# is easily computable as ((n+1)/2)*G:
sage: h=(n+1)/2
sage: int(h)*G
(86918276961810349294276103416548851884759982251107 : 87194829221142880348582938487511785107150118762739500766654458540580527283772 : 1)
sage: hex(86918276961810349294276103416548851884759982251107)
'3b78ce563f89a0ed9414f5aa28ad0d96d6795f9c63'
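The half-point claim above can be checked without Sage. Here is a minimal pure-Python secp256k1 sketch (standard domain parameters assumed; illustrative only, not production crypto):

```python
# Affine double-and-add on secp256k1, enough to verify the half point above.
p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f
n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141
G = (0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798,
     0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)

def ec_add(P, Q):
    """Add two affine points (None is the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        m = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        m = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

H = ec_mul((n + 1) // 2, G)   # the "half" point: 2*H == G
print(hex(H[0]))
```

Doubling H gives back G, since ((n+1)/2)*2 = n+1 ≡ 1 (mod n).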

***

High-Value Key 123NW9MjzA1ruijnfVepMBRkAgHeNAeRDC
50.0 btc
sage: V=E.lift_x(100257432922100916568143421243988773726881105572982433208434735787631308087100)
sage: V
(100257432922100916568143421243988773726881105572982433208434735787631308087100 : 17976700734488631664679765453517918083636436394515771729223336359641082615751 : 1)
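The endomorphism constants defined earlier in this post can be sanity-checked in plain Python, with no curve code at all:

```python
# Sanity check of the endomorphism constants lamN and betP above (note the
# integer division // -- plain / would produce a float in Python 3). Both
# are cube roots of unity by Fermat's little theorem, since p and n are prime.
p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f
n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141
lamN = pow(3, (n - 1) // 3, n)
betP = pow(2, (p - 1) // 3, p)
print(pow(lamN, 3, n), pow(betP, 3, p))  # -> 1 1
```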

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
May 17, 2021, 06:58:39 AM
#39
Is it good or not if I change the 32 to a max of 256?

~

I tried modifying Keras sample code (the Titanic and California-housing prediction examples) to use a pubkey dataset.
It is just a test; this may be the wrong algorithm to use.

On GitHub there are a lot of Bitcoin trading bots that use neural networks; predicting price is easier than predicting an elliptic curve.

I'm no ML expert but I don't think adding more NN layers is going to improve the accuracy of this neural network. Remember that we basically have a hexadecimal number as input (trying to feed millions of hex numbers as input to the same neurons makes no sense because the numbers have no relationship with each other), and as such, they should be in bytes form instead of ASCII, and then we'd do something like assign each bit of the number to one or more input neurons, and then apply layers on it.

If you do want to use millions of numbers as input then you should use a ML classification algorithm and also give as input the polarity of these points for training. Neural networks are meant to work on one input only e.g. given a drawing of the number "2", identify which digit it is.
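The bit-per-neuron encoding suggested above can be sketched in plain Python (a hypothetical helper, not code from the thread):

```python
# Hypothetical encoding per the suggestion above: one bit of the number per
# input feature, rather than feeding raw hex/ASCII digits to the network.
def hex_to_bits(h, width=256):
    """Turn a hex string into a fixed-width list of 0/1 input features."""
    return [int(b) for b in bin(int(h, 16))[2:].zfill(width)]

bits = hex_to_bits("79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798")
print(len(bits), bits[:8])  # -> 256 [0, 1, 1, 1, 1, 0, 0, 1]
```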
newbie
Activity: 15
Merit: 0
May 16, 2021, 08:55:27 AM
#38
Is it good or not if I change the 32 to a max of 256?


Layer (type)                 Output Shape             
=============================
input_1 (InputLayer)         [(None, 256)]           
_____________________________________
multi_category_encoding (Mul (None, 256)               
_____________________________________
dense (Dense)                (None, 32)               
_____________________________________
re_lu (ReLU)                 (None, 32)               
_____________________________________
dense_1 (Dense)              (None, 32)               
_____________________________________
re_lu_1 (ReLU)               (None, 32)               
_____________________________________
regression_head_1 (Dense)    (None, 1)                 
=============================



I will test adding 5 more layers.
For activation I use ReLU.


Layer (type)                 Output Shape             
=============================
input_1 (InputLayer)         [(None, 256)]           
_____________________________________
multi_category_encoding (Mul (None, 256)               
_____________________________________
dense (Dense)                (None, 256)               
_____________________________________
re_lu (ReLU)                 (None, 256)               
_____________________________________
dense_1 (Dense)              (None, 128)               
_____________________________________
re_lu_1 (ReLU)               (None, 128)               
_____________________________________
regression_head_1 (Dense)    (None, 1)                 
=============================



I tried modifying Keras sample code (the Titanic and California-housing prediction examples) to use a pubkey dataset.
It is just a test; this may be the wrong algorithm to use.

On GitHub there are a lot of Bitcoin trading bots that use neural networks; predicting price is easier than predicting an elliptic curve.

Don't forget to pad your X,Y strings to length 32 (bytes) or 256 (bits), e.g. in Python:
X = X.zfill(256)
member
Activity: 406
Merit: 47
May 15, 2021, 08:22:24 PM
#37
Is it good or not if I change the 32 to a max of 256?


Layer (type)                 Output Shape             
=============================
input_1 (InputLayer)         [(None, 256)]           
_____________________________________
multi_category_encoding (Mul (None, 256)               
_____________________________________
dense (Dense)                (None, 32)               
_____________________________________
re_lu (ReLU)                 (None, 32)               
_____________________________________
dense_1 (Dense)              (None, 32)               
_____________________________________
re_lu_1 (ReLU)               (None, 32)               
_____________________________________
regression_head_1 (Dense)    (None, 1)                 
=============================



I will test adding 5 more layers.
For activation I use ReLU.


Layer (type)                 Output Shape             
=============================
input_1 (InputLayer)         [(None, 256)]           
_____________________________________
multi_category_encoding (Mul (None, 256)               
_____________________________________
dense (Dense)                (None, 256)               
_____________________________________
re_lu (ReLU)                 (None, 256)               
_____________________________________
dense_1 (Dense)              (None, 128)               
_____________________________________
re_lu_1 (ReLU)               (None, 128)               
_____________________________________
regression_head_1 (Dense)    (None, 1)                 
=============================



I tried modifying Keras sample code (the Titanic and California-housing prediction examples) to use a pubkey dataset.
It is just a test; this may be the wrong algorithm to use.

On GitHub there are a lot of Bitcoin trading bots that use neural networks; predicting price is easier than predicting an elliptic curve.
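The layer stacks summarized above can be illustrated with a plain-Python forward pass (random untrained weights; a toy sketch of the shapes only, not a real Keras model):

```python
# Illustrative forward pass of the second stack above:
# 256 inputs -> Dense(256) -> ReLU -> Dense(128) -> ReLU -> Dense(1) head.
import random
random.seed(0)

def rand_layer(n_in, n_out):
    # w[j] holds the incoming weights of output unit j
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

def dense(x, layer):
    w, b = layer
    return [sum(xi * wi for xi, wi in zip(x, wj)) + bj for wj, bj in zip(w, b)]

def relu(v):
    return [max(0.0, z) for z in v]

l1, l2, head = rand_layer(256, 256), rand_layer(256, 128), rand_layer(128, 1)
x = [random.randint(0, 1) for _ in range(256)]          # 256 binary features
out = dense(relu(dense(relu(dense(x, l1)), l2)), head)  # regression head
print(len(out))
```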
member
Activity: 406
Merit: 47
May 15, 2021, 07:46:06 PM
#36
My GPU is a small GTX 1050; even 1 million is slow to train.

A bit off-topic, but I've heard from my friend that they're using Google Colab and Kaggle, which give you access to high-end professional/data-center GPUs. I don't know the limitations, but it might be worth a try.

A bit off-topic,

I tried neural networks on Colab, working with a secp256k1 dataset:
generate the pubkeys and save them to Colab.
Storage is limited to 100 GB (the system takes 40 GB, leaving about 60 GB usable).

A dataset under 1 million entries may work on Colab;
over 1 million is too much for it.

Google Colab Pro is good because you can use a Tesla V100 (limited to 24-hour sessions).
Colab is designed for working with neural networks,
and Pro is good value at $9.99.
The name "lab" already tells you it is meant for testing and proof of concept, not production work, because real jobs take more than 24 hours.

Kaggle is good for finding sample code (though most people use GitHub);
I like Google Colab more than Kaggle.

With Colab free you need to finish the job within 12 hours and are limited to 1 GPU;
with Colab Pro ($9.99) you can finish a job within 24 hours.
It is good for testing jobs.

On Colab you still need to code (no GUI).
You risk losing your job if you disconnect from the server or the session resets (you have to start over),
and saving files only to Colab's SSD is risky.

It is better to use Colab with Google Drive and keep your files on Drive (though that is slower than Colab's local SSD).

For real work with neural networks, using your own GPU is the only good way.
I ran a small pubkey dataset through AutoKeras (optimized code) and it took over 48 hours on a low-end GTX 1050 (slow going).

Neural networks with secp256k1 take a very long time to train, and a lot of time goes into optimizing the code to get the best result.

Some problems:
a pubkey dataset of 1,000 to 10,000 entries tests fine,
but with a 1-million dataset I get a lot of error messages during training.

Optimizing the code on a small dataset is fine,
but a large 1-million dataset is another matter.

For the dataset, I tested three encodings (splitting each digit):
hex, decimal, and binary (bit strings like 10101, not byte code).
My best result was with the binary encoding.

I am still testing on small datasets only.

I agree with trying a dataset of over 1 billion:
work on the large dataset and fix problems at the same time.
Testing on small data and then switching to large is so different that it is like starting over on the large dataset.

Testing NN algorithms:
classification gives an answer with a percentage;
number prediction gives a predicted number without a percentage.

I think classification training is better because it gives the result with a prediction percentage, which is useful for decisions.
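The "answer with a percent" point above comes from the classifier's final activation, which can be sketched in plain Python (the 0.2 score is a hypothetical raw network output):

```python
# Why classification "gives a percent": the final sigmoid (or softmax) maps
# the network's raw score to a probability that can drive a decision.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

score = 0.2                       # hypothetical raw output of the last layer
p_even = sigmoid(score)
decision = "even" if p_even >= 0.5 else "odd"
print(f"{decision} with {p_even:.1%} confidence")
```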
member
Activity: 406
Merit: 47
May 15, 2021, 06:49:58 PM
#35
NNs can capture very hard dependencies; it depends on the type and number of hidden layers.

https://www.sciencedirect.com/science/article/pii/S0895717707000362

You wish, but the reality says no.

In the paper they look at 14-, 20-, and 32-bit elliptic curves. The corresponding weights storage is 2^13, 2^13.5, and 2^14.2. Theoretically the storage would be on the order of 2^7, 2^10, and 2^16. So all looks good, no breakthrough.

To even learn the secp256k1 weights you'd need at least 2^128 examples. Good luck executing that.



I think a small neural network cannot succeed against secp256k1; the curve's large numbers make the problem very complex.
Neural networks work with small values in their neurons,
and the problem remains the large numbers.

Another idea is to create some algorithm whose prediction target is small and easy; then even being correct only slightly more than 50% of the time could be called a success.
full member
Activity: 206
Merit: 447
May 15, 2021, 07:46:43 AM
#34
NNs can capture very hard dependencies; it depends on the type and number of hidden layers.

https://www.sciencedirect.com/science/article/pii/S0895717707000362

You wish, but the reality says no.

In the paper they look at 14-, 20-, and 32-bit elliptic curves. The corresponding weights storage is 2^13, 2^13.5, and 2^14.2. Theoretically the storage would be on the order of 2^7, 2^10, and 2^16. So all looks good, no breakthrough.

To even learn the secp256k1 weights you'd need at least 2^128 examples. Good luck executing that.
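As a back-of-envelope check of how hopeless 2^128 examples is (the 10^9 examples-per-second rate is an arbitrarily generous assumption):

```python
# Even generating 10**9 training examples per second, 2**128 examples would
# take on the order of 10**22 years -- far beyond any feasible training run.
examples = 2**128
per_second = 10**9
years = examples / per_second / (3600 * 24 * 365)
print(f"{years:.2e} years")
```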

member
Activity: 406
Merit: 47
May 15, 2021, 07:42:21 AM
#33
My GPU is a small GTX 1050; even 1 million is slow to train.

A bit off-topic, but I've heard from my friend that they're using Google Colab and Kaggle, which give you access to high-end professional/data-center GPUs. I don't know the limitations, but it might be worth a try.

Google Colab is very good. You need a $9.99/month subscription for Colab Pro, which lets you use a Tesla V100; it is a very good deal.

Colab Pro is limited to 5 notebooks at the same time and 24-hour sessions before it reconnects; it requires keeping the browser open, so you cannot close the work page.

You need to use Google Drive to save your job; subscribing for more than 100 GB of Drive storage is a good idea.

I have just started trying Colab Pro.
newbie
Activity: 15
Merit: 0
May 15, 2021, 06:44:38 AM
#32
My GPU is a small GTX 1050; even 1 million is slow to train.

A bit off-topic, but I've heard from my friend that they're using Google Colab and Kaggle, which give you access to high-end professional/data-center GPUs. I don't know the limitations, but it might be worth a try.

Colab has heavy limitations on the GPU in its free tier where they'll stop your whole notebook once you exceed a certain number of hours.


The problem with ML.NET:
the dataset looks random, with no pattern.

Training with a 1-million dataset gives very low accuracy, around 0.0001%.
It does not work.
I will try Keras with 5 layers and 256 neurons;
the result will possibly be the same.

Neural networks may only work on datasets with a pattern the NN can find,
but secp256k1 / elliptic curves look like randomness.


1M is nothing; I've tried 50M. I suppose we need tests on a 1-billion dataset.
And for the calculation we need to take small curves and increase the size on each success, to get a formula and calculate how much data is required for secp256k1.

What are your detection rates for the 50m dataset (true positive/negative %, false positive/negative % etc.)?

Selection error was 0.9916, but in tests it was 50%, oscillating between 1000 and -1000 (good vs. bad). So still close to 50%.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
May 15, 2021, 06:21:41 AM
#31
My GPU is a small GTX 1050; even 1 million is slow to train.

A bit off-topic, but I've heard from my friend that they're using Google Colab and Kaggle, which give you access to high-end professional/data-center GPUs. I don't know the limitations, but it might be worth a try.

Colab has heavy limitations on the GPU in its free tier where they'll stop your whole notebook once you exceed a certain number of hours.


The problem with ML.NET:
the dataset looks random, with no pattern.

Training with a 1-million dataset gives very low accuracy, around 0.0001%.
It does not work.
I will try Keras with 5 layers and 256 neurons;
the result will possibly be the same.

Neural networks may only work on datasets with a pattern the NN can find,
but secp256k1 / elliptic curves look like randomness.


1M is nothing; I've tried 50M. I suppose we need tests on a 1-billion dataset.
And for the calculation we need to take small curves and increase the size on each success, to get a formula and calculate how much data is required for secp256k1.

What are your detection rates for the 50m dataset (true positive/negative %, false positive/negative % etc.)?
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
May 15, 2021, 04:35:22 AM
#30
My GPU is a small GTX 1050; even 1 million is slow to train.

A bit off-topic, but I've heard from my friend that they're using Google Colab and Kaggle, which give you access to high-end professional/data-center GPUs. I don't know the limitations, but it might be worth a try.
member
Activity: 406
Merit: 47
May 15, 2021, 04:26:16 AM
#29

1M is nothing; I've tried 50M. I suppose we need tests on a 1-billion dataset.
And for the calculation we need to take small curves and increase the size on each success, to get a formula and calculate how much data is required for secp256k1.

It would be great to do it with a 1-billion dataset.

My GPU is a small GTX 1050; even 1 million is slow to train.
Maybe the model code needs optimizing.

Tests at 1,000 and 10,000 are no problem, but 100,000 and 1,000,000 show a lot of errors during training.

I can only test and train;
actual use may have to be done manually.


newbie
Activity: 15
Merit: 0
May 15, 2021, 03:57:41 AM
#28

The problem with ML.NET:
the dataset looks random, with no pattern.

Training with a 1-million dataset gives very low accuracy, around 0.0001%.
It does not work.
I will try Keras with 5 layers and 256 neurons;
the result will possibly be the same.

Neural networks may only work on datasets with a pattern the NN can find,
but secp256k1 / elliptic curves look like randomness.


1M is nothing; I've tried 50M. I suppose we need tests on a 1-billion dataset.
And for the calculation we need to take small curves and increase the size on each success, to get a formula and calculate how much data is required for secp256k1.
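The small-curve idea above can be made concrete with a toy field where the whole "dataset" is enumerable (p = 43 is an arbitrary illustrative choice; same curve equation y^2 = x^3 + 7):

```python
# Toy version of the small-curve scaling idea: enumerate every affine point
# of y^2 = x^3 + 7 over a tiny prime field, so the dataset is exhaustive and
# behaviour can be measured as p grows.
p = 43
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + 7)) % p == 0]
labels = [y % 2 for _, y in points]   # e.g. label each point by y parity
print(len(points))
```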
member
Activity: 406
Merit: 47
May 14, 2021, 07:36:17 PM
#27

The problem with ML.NET:
the dataset looks random, with no pattern.

Training with a 1-million dataset gives very low accuracy, around 0.0001%.
It does not work.
I will try Keras with 5 layers and 256 neurons;
the result will possibly be the same.

Neural networks may only work on datasets with a pattern the NN can find,
but secp256k1 / elliptic curves look like randomness.
newbie
Activity: 15
Merit: 0
May 14, 2021, 09:27:54 AM
#26
Using NN for cracking cryptographic functions is pointless. NN can capture only simple dependencies.

I expect the number of weights needed for capturing one bit with better-than-insignificant probability to be on the order of 2^128.



NNs can capture very hard dependencies; it depends on the type and number of hidden layers.

https://www.sciencedirect.com/science/article/pii/S0895717707000362
full member
Activity: 206
Merit: 447
May 14, 2021, 09:19:45 AM
#25
Using NN for cracking cryptographic functions is pointless. NN can capture only simple dependencies.

I expect the number of weights needed for capturing one bit with better-than-insignificant probability to be on the order of 2^128.

newbie
Activity: 15
Merit: 0
May 14, 2021, 09:12:02 AM
#24
A sample Python script to create a dataset for neural networks.

This script is just for testing (tested on ML.NET);
for real use it needs upgrading and fixing.

You need to modify it to fit your use.

In my ML.NET tests, binary features (1s and 0s) gave better results than decimal numbers.


test 1
datasetNN1.py
Code:
import random
import time
from bit import Key
import math
 
timestr = time.strftime("%Y%m%d-%H%M%S")
filename = "datasetNN_" + str(timestr) + ".csv"
print(time.strftime("%Y-%m-%d-%H:%M:%S"))
print(filename)

feature = ""

f = open(filename, "w")
j = 1
while j <= 256:
    #print(j)
    feature = feature + "f" + str(j) + ","
    j += 1
header = feature + "Label"
#print(header)
f.write(header+"\n")
f.close()


i = 1
while i < 1000:
#while i < 1000000:
    #label_output  = '0'
    label_output  = 'even'
    #print(i)
    seed = random.randrange(2**119,2**120)
    #seed = random.randrange(2**256)
    key = Key.from_int(seed)
    address = key.address
    pubkey = key.public_key.hex()
    x,y = key.public_point
    if y % 2 == 0:
        #label_output = 0  # even
        #label_output = 'even'  # even
        label_output = 1  # even
    else:
        #label_output = 1  # odd
        #label_output = 'odd'  # odd
        label_output = 2  # odd
    
    y2_bin = bin(y)[2:]
    bin2_split = list(y2_bin)

    if len(bin2_split) == 256:
        feature_binary = ""
        for x in range(len(bin2_split)):
            feature_binary = feature_binary + bin2_split[x] + ","

    
        adddataline = feature_binary + str(label_output)
        #print(addline)
        f = open(filename, "a")
        f.write(adddataline+"\n")
        f.close()
        i += 1

    
print(time.strftime("%Y-%m-%d-%H:%M:%S"))



test 2
datasetNN2.py
Code:
import random
import time
from bit import Key
import math
 
timestr = time.strftime("%Y%m%d-%H%M%S")
filename = "datasetNN_" + str(timestr) + ".csv"
print(time.strftime("%Y-%m-%d-%H:%M:%S"))
print(filename)

feature = ""

f = open(filename, "w")
j = 1
#while j <= 256:
while j <= 64:
    #print(j)
    #feature = feature + "f" + str(j) + ","
    feature = feature + "x" + str(j) + ","
    j += 1
header = feature + "Label"
#print(header)
f.write(header+"\n")
f.close()


i = 1
while i < 1000:
#while i < 1000000:
    #label_output  = '0'
    label_output  = 'even'
    #print(i)
    seed = random.randrange(2**119,2**120)
    #seed = random.randrange(2**256)
    key = Key.from_int(seed)
    address = key.address
    pubkey = key.public_key.hex()
    x,y = key.public_point

    if y % 2 == 0:
        #label_output = 0  # even
        #label_output = 'even'  # even
        label_output = 1  # even
    else:
        #label_output = 1  # odd
        #label_output = 'odd'  # odd
        label_output = 2  # odd
    
    #y2_bin = bin(y)[2:]
    #bin2_split = list(y2_bin)
    bin2_split = list(pubkey[2:])

    #if len(bin2_split) == 256:
    #if len(pubkey) == 64:
    feature_hex = ""
    for x in range(len(bin2_split)):
        #feature_hex = feature_hex + bin2_split[x] + ","
        hex2_num = int(bin2_split[x], 16)
        feature_hex = feature_hex + str(hex2_num) + ","


    adddataline = feature_hex + str(label_output)
    #print(addline)
    f = open(filename, "a")
    f.write(adddataline+"\n")
    f.close()
    i += 1

    
print(time.strftime("%Y-%m-%d-%H:%M:%S"))



Wrong: the label should be key % 2 or key > n/2, not y % 2.
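That correction could look like this hypothetical relabeling helper (`labels_for` is an illustrative name, not from the scripts):

```python
# Hypothetical relabeling per the comment above: the interesting targets are
# properties of the private key (parity, upper/lower half of the order n),
# not the parity of y, which is trivially just the last bit of y itself.
n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141

def labels_for(priv):
    return {"key_parity": priv % 2, "upper_half": int(priv > n // 2)}

print(labels_for(5))  # -> {'key_parity': 1, 'upper_half': 0}
```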
member
Activity: 406
Merit: 47
May 14, 2021, 09:02:05 AM
#23
A sample Python script to create a dataset for neural networks.

This script is just for testing (tested on ML.NET);
for real use it needs upgrading and fixing.

You need to modify it to fit your use.

In my ML.NET tests, binary features (1s and 0s) gave better results than decimal numbers.


test 1
datasetNN1.py
Code:
import random
import time
from bit import Key
import math
 
timestr = time.strftime("%Y%m%d-%H%M%S")
filename = "datasetNN_" + str(timestr) + ".csv"
print(time.strftime("%Y-%m-%d-%H:%M:%S"))
print(filename)

feature = ""

f = open(filename, "w")
j = 1
while j <= 256:
    #print(j)
    feature = feature + "f" + str(j) + ","
    j += 1
header = feature + "Label"
#print(header)
f.write(header+"\n")
f.close()


i = 1
while i < 1000:
#while i < 1000000:
    #label_output  = '0'
    label_output  = 'even'
    #print(i)
    seed = random.randrange(2**119,2**120)
    #seed = random.randrange(2**256)
    key = Key.from_int(seed)
    address = key.address
    pubkey = key.public_key.hex()
    x,y = key.public_point
    if y % 2 == 0:
        #label_output = 0  # even
        #label_output = 'even'  # even
        label_output = 1  # even
    else:
        #label_output = 1  # odd
        #label_output = 'odd'  # odd
        label_output = 2  # odd
   
    y2_bin = bin(y)[2:]
    bin2_split = list(y2_bin)

    if len(bin2_split) == 256:
        feature_binary = ""
        for x in range(len(bin2_split)):
            feature_binary = feature_binary + bin2_split[x] + ","

   
        adddataline = feature_binary + str(label_output)
        #print(addline)
        f = open(filename, "a")
        f.write(adddataline+"\n")
        f.close()
        i += 1

   
print(time.strftime("%Y-%m-%d-%H:%M:%S"))



test 2
datasetNN2.py
Code:
import random
import time
from bit import Key
import math
 
timestr = time.strftime("%Y%m%d-%H%M%S")
filename = "datasetNN_" + str(timestr) + ".csv"
print(time.strftime("%Y-%m-%d-%H:%M:%S"))
print(filename)

feature = ""

f = open(filename, "w")
j = 1
#while j <= 256:
while j <= 64:
    #print(j)
    #feature = feature + "f" + str(j) + ","
    feature = feature + "x" + str(j) + ","
    j += 1
header = feature + "Label"
#print(header)
f.write(header+"\n")
f.close()


i = 1
while i < 1000:
#while i < 1000000:
    #label_output  = '0'
    label_output  = 'even'
    #print(i)
    seed = random.randrange(2**119,2**120)
    #seed = random.randrange(2**256)
    key = Key.from_int(seed)
    address = key.address
    pubkey = key.public_key.hex()
    x,y = key.public_point

    if y % 2 == 0:
        #label_output = 0  # even
        #label_output = 'even'  # even
        label_output = 1  # even
    else:
        #label_output = 1  # odd
        #label_output = 'odd'  # odd
        label_output = 2  # odd
   
    #y2_bin = bin(y)[2:]
    #bin2_split = list(y2_bin)
    bin2_split = list(pubkey[2:])

    #if len(bin2_split) == 256:
    #if len(pubkey) == 64:
    feature_hex = ""
    for x in range(len(bin2_split)):
        #feature_hex = feature_hex + bin2_split[x] + ","
        hex2_num = int(bin2_split[x], 16)
        feature_hex = feature_hex + str(hex2_num) + ","


    adddataline = feature_hex + str(label_output)
    #print(addline)
    f = open(filename, "a")
    f.write(adddataline+"\n")
    f.close()
    i += 1

   
print(time.strftime("%Y-%m-%d-%H:%M:%S"))
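The CSV-writing part of the scripts above can be condensed into a dependency-free sketch using the csv module (the `bit` library calls are left out; rows are assumed precomputed, and `write_dataset` is a hypothetical helper name):

```python
# Dependency-free sketch of the CSV-writing logic in the scripts above,
# using the csv module instead of manual string concatenation.
import csv
import io

def write_dataset(rows, n_features):
    """rows: iterable of (bit_string, label) pairs; returns CSV text."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow([f"f{i + 1}" for i in range(n_features)] + ["Label"])
    for bits, label in rows:
        w.writerow(list(bits) + [label])
    return buf.getvalue()

out = write_dataset([("10110010", 1), ("01101100", 2)], n_features=8)
print(out)
```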

member
Activity: 406
Merit: 47
May 14, 2021, 07:34:11 AM
#22

For a simple GUI, try ML.NET Model Builder with GPU support (Preview).

Download Visual Studio 2019 Community for Windows and install ML.NET Model Builder.

But ML.NET Model Builder is not a neural network (NNs use perceptrons); it tries several algorithms and automatically selects the best one for the prediction.

(For real neural networks you should use Keras, which has no GUI, or AutoKeras; both still require coding.)

With ML.NET Model Builder, try the Text classification or Value prediction scenarios.


newbie
Activity: 15
Merit: 0
May 14, 2021, 06:30:22 AM
#21

There is no known relationship between Y and -Y, at least not a polynomial one. That's why I'm trying to use a neural network to discover one.

Do you have a sample dataset of Y and the result?
What is the input? What is the output?


Of course, I've tried a bunch of them.

Code:
X1;X2;X3;X4;X5;X6;X7;X8;X9;X10;X11;X12;X13;X14;X15;X16;X17;X18;X19;X20;X21;X22;X23;X24;X25;X26;X27;X28;X29;X30;X31;X32;Y1;Y2;Y3;Y4;Y5;Y6;Y7;Y8;Y9;Y10;Y11;Y12;Y13;Y14;Y15;Y16;Y17;Y18;Y19;Y20;Y21;Y22;Y23;Y24;Y25;Y26;Y27;Y28;Y29;Y30;Y31;Y32;target
233;18;54;13;167;132;41;227;170;248;134;37;7;113;94;20;171;72;185;202;195;2;247;229;78;165;85;239;238;206;187;41;122;192;153;227;188;238;59;122;136;199;95;24;167;130;100;146;89;92;190;97;65;161;50;95;94;31;78;35;162;106;231;205;1
233;18;54;13;167;132;41;227;170;248;134;37;7;113;94;20;171;72;185;202;195;2;247;229;78;165;85;239;238;206;187;41;181;59;102;28;66;17;196;133;119;56;160;231;88;125;155;109;166;163;65;158;190;94;205;160;161;224;177;220;93;149;24;50;0
That's an example of X, Y, and the target (whether it is positive),
with each coordinate converted via ToByteArray.
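The ToByteArray step can be sketched in Python (assuming 256-bit big-endian coordinates, matching the 32 byte columns per coordinate in the sample rows):

```python
# Hypothetical ToByteArray equivalent: each 256-bit coordinate becomes 32
# big-endian byte columns (X1..X32 / Y1..Y32) like the CSV sample above.
x = 0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
row = list(x.to_bytes(32, "big"))
print(len(row), row[:4])  # -> 32 [121, 190, 102, 126]
```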



Again, we don't need large sophisticated neural networks, small ones will do. Though at this point you're making more of an empirical test of already used pubkeys since the Y polarity of the entire public key space converges to 50%.

If we do a small neural network:

Those who know about neural networks can do it on their Nvidia CUDA GPU; they already know what to do.
But there is still no sample code on GitHub to try and test.

For those who don't know much about neural networks,
try AutoML Tables on Google Cloud:
https://cloud.google.com/automl-tables

Just upload the dataset to the server and train; it is easy to use with no coding at all.

(Other services, Microsoft Azure AutoML and Amazon AWS SageMaker, have similar automatic neural-network offerings.)

Or OpenNN / Neural Designer, which has a friendly GUI and is easy to use.