
Topic: Ultra-Lightweight Database with Public Keys (for puzzle btc) (Read 1532 times)

member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
If it's about precomputing and reuse, oh well: precomputation that reduces the number of computations on every subsequent run can already be applied to basically any other algorithm as well, with better results and with a much lower need for storage.

And yes, it was confirmed that #120 was solved using kangaroos. No one on Earth can stop someone from building an ASIC kangaroo that runs millions of times faster than any CPU, but no one on Earth will ever be able to do the same with BSGS (they would also need to pretty much change the basics of how a computer works, to use an insanely huge amount of memory that doesn't even exist). Only a flat-earther like the OP would refuse to understand this indeed.

It's absurd to debate with you, bro; you are very ignorant, you state wrong things as facts, and you are boring.
Coincidentally, since you are a Kangaroo believer, you don't doubt that it was done with Kangaroo, and you present it as an irrefutable fact just because someone said so. What great science you have.

use an insanely huge amount of memory that doesn't even exist). Only a flat-earther like the OP would refuse to understand this indeed.

You are ignorant. There may be 1000 ways to deal with this; it's not like I strictly have to use a lot of memory. Simple answer: Alberto did it that way because it was the best fit for his approach.

I'm sure that one day someone will reveal that they unlocked puzzle 130 with a different version of BSGS or something else, and you'll say it's a lie.
member
Activity: 165
Merit: 26
In my previous example I split 2^40 = 2^20 * 2^20

but you can split the same interval in this way: 2^40 = 2^22 (space) * 2^18 (time)

We can never split 40 bits into two parts such that neither part is at least half. The part that is the "space", no matter whether big or small, first needs to be filled (using the corresponding number of operations). It does not matter how well it is optimized, stored, scanned, or queried; the first requirement is that it needs to be created.

This is the issue the OP refuses to understand: the sqrt(n) bound. He basically proposes an idea that makes a good algorithm tens to hundreds of times slower than it should be (the higher the range, the bigger the slowdown), in exchange for less space used and a highly increased amount of work.
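
To make that bound concrete, here is a tiny illustrative Python check (not tied to any particular implementation): for every split of a 2^40 interval into space * time, the larger of the two factors is always at least 2^20.

Code:
# Illustrative only: for any split 2^40 = S * T (powers of two), the larger
# of the two factors is always at least 2^20 = sqrt(2^40).
N_BITS = 40

best = None
for s_bits in range(N_BITS + 1):
    t_bits = N_BITS - s_bits            # S * T = 2^40 exactly
    worst = max(s_bits, t_bits)         # the factor that dominates the cost
    if best is None or worst < best[0]:
        best = (worst, s_bits, t_bits)

print(f"best split: 2^{best[1]} (space) * 2^{best[2]} (time); "
      f"larger factor = 2^{best[0]}")   # prints: larger factor = 2^20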

If it's about precomputing and reuse, oh well: precomputation that reduces the number of computations on every subsequent run can already be applied to basically any other algorithm as well, with better results and with a much lower need for storage.

And yes, it was confirmed that #120 was solved using kangaroos. No one on Earth can stop someone from building an ASIC kangaroo that runs millions of times faster than any CPU, but no one on Earth will ever be able to do the same with BSGS (they would also need to pretty much change the basics of how a computer works, to use an insanely huge amount of memory that doesn't even exist). Only a flat-earther like the OP would refuse to understand this indeed.
legendary
Activity: 1932
Merit: 2077
I guess the lesson here is that space x time is always a constant!

....

Faster = Larger
Slower = Smaller
---

Now imagine you find an interval of 2^20 keys where a certain property occurs with probability 1/4, while the same property occurs with probability 1/2 in the rest of the space (example: points with X coordinates lower than 0.5 * 2^256),

i.e. a set where that property occurs less frequently than the expected value.

----


By the time you're doubling up on the amount of work (time) just to halve the required space, you're already far behind a different approach, which uses the same amount of time (work), but a constant amount of required space...


In my previous example I split 2^40 = 2^20 * 2^20

but you can split the same interval in this way: 2^40 = 2^22 (space) * 2^18 (time)

In my idea you do more work precomputing the keys, but you store fewer keys (2^22 / 2^2 = 2^20 keys stored, if there is a certain property with probability 1/4);

then you need to generate only 2^18 * 2 = 2^19 keys (time) to retrieve the private key.

In this way I got: 2^20 (space) x 2^19 (time) = 2^39 (the constant is lower than 2^40).

You have to do more work to generate the DB, but then you need to do fewer calculations.

In general you can reduce the 2^40 constant by a factor equal to (probability of a certain property in the entire 2^40 interval) / (frequency of the same property in the small interval of the precomputed keys).

I'm not trying to break key #135; I'm trying to optimize the search in small intervals (just for fun, to understand better how it works).
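
To ground the discussion, here is a minimal, purely illustrative baby-step giant-step sketch in Python. It uses a toy multiplicative group instead of secp256k1 (only the group operation changes), and it shows only the standard space/time split; arulbero's property-filter variation would change how the baby table is built and scanned, and is not shown here. All names and constants are chosen just for the demo.

Code:
# Minimal BSGS sketch over a toy group (modular exponentiation stands in for
# secp256k1 point arithmetic). Purely illustrative.
p = 1_000_003                # a small prime; Z_p^* is our toy group
g = 2                        # base used as the "generator" for the demo

space_bits, time_bits = 12, 8            # 2^12 * 2^8 = 2^20 = interval size
m = 1 << space_bits

def bsgs(target):
    # Baby steps: this table MUST be built first, costing 2^space_bits ops.
    table = {pow(g, i, p): i for i in range(m)}
    # Giant steps: at most 2^time_bits lookups.
    giant = pow(g, -m, p)                # g^(-m) mod p (Python 3.8+)
    gamma = target
    for j in range(1 << time_bits):
        if gamma in table:
            return j * m + table[gamma]
        gamma = gamma * giant % p
    return None

secret = 123_456                          # pretend private key in [0, 2^20)
assert bsgs(pow(g, secret, p)) == secret

Changing space_bits/time_bits reproduces, at toy scale, the 2^22 (space) x 2^18 (time) split discussed above: the product always has to cover the whole interval.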
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
By the time you're doubling up on the amount of work (time) just to halve the required space, you're already far behind a different approach, which uses the same amount of time (work), but a constant amount of required space...

The costs of really wanting a 100% guaranteed deterministic solution stop scaling from some point on, unfortunately. But watching a cheap probabilistic method scale, never finding a case where it does NOT work, and still insisting it doesn't work, is like pretending that humanity can advance to a Type 4 civilization by tomorrow (possible? sure! likely? not so much!). It's more likely that something like that will happen than that a probabilistic algorithm fails to find a solution. A probabilistic method is not like some lottery or whatever people try to draw an analogy to; it is unimaginable to actually find a counter-example that beats all the odds.

False: once the limit is exceeded, Kangaroo has no advantage, so the time spent generating the DB is worth it. If you would use mathematical references to validate your claims, that would be great.
This is a method of improving scalability.
But hey, keep wasting your time with Kangaroo; 3emi sends you his best wishes.

So what limit is that? The fact that #120 was just solved using "crazy fast kangaroos" doesn't help your case very much; it just proves again that they do work, exactly according to the math you dismiss as "false". Let me see some BSGS solving any 90+ bit puzzle, please. Without months of "building a database", as if that should not be taken into account at all as "work".


Don't worry, I understand my math; this post is just an idea. Don't take Kangaroo as a religion, bro.
I've made a lot of progress with a group of friends (secretly). Are you sure it was done with Kangaroo? Be careful when pursuing your beliefs.
member
Activity: 165
Merit: 26
By the time you're doubling up on the amount of work (time) just to halve the required space, you're already far behind a different approach, which uses the same amount of time (work), but a constant amount of required space...

The costs of really wanting a 100% guaranteed deterministic solution stop scaling from some point on, unfortunately. But watching a cheap probabilistic method scale, never finding a case where it does NOT work, and still insisting it doesn't work, is like pretending that humanity can advance to a Type 4 civilization by tomorrow (possible? sure! likely? not so much!). It's more likely that something like that will happen than that a probabilistic algorithm fails to find a solution. A probabilistic method is not like some lottery or whatever people try to draw an analogy to; it is unimaginable to actually find a counter-example that beats all the odds.

False: once the limit is exceeded, Kangaroo has no advantage, so the time spent generating the DB is worth it. If you would use mathematical references to validate your claims, that would be great.
This is a method of improving scalability.
But hey, keep wasting your time with Kangaroo; 3emi sends you his best wishes.

So what limit is that? The fact that #120 was just solved using "crazy fast kangaroos" doesn't help your case very much; it just proves again that they do work, exactly according to the math you dismiss as "false". Let me see some BSGS solving any 90+ bit puzzle, please. Without months of "building a database", as if that should not be taken into account at all as "work".
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
I guess the lesson here is that space x time is always a constant!

....

Faster = Larger
Slower = Smaller

The only way to reduce this constant would be to find a property satisfied with different probabilities in 2 sets:

the set of the keys stored (Space)
the set of the keys generated (Time)

For example, if you have to work in a 2^40 interval,

usually you have to precompute 2^20 keys to store (space)
and 2^20 keys at run time (time)
then 2^20 * 2^20 = 2^40

Now imagine you find an interval of 2^20 keys where a certain property occurs with probability 1/4, while the same property occurs with probability 1/2 in the rest of the space (example: points with X coordinates lower than 0.5 * 2^256),

i.e. a set where that property occurs less frequently than the expected value.

You can store 2^20/2^2 = 2^18 keys, only the keys that satisfy such a property,

and then you need to generate 2^20 keys * 2 = 2^21 keys (on average you need to generate 2 keys at each step, because only 1 of 2 satisfies the property).

So: 2^18 (space) x 2^21 (time) = 2^39

But you have to find such a property.


By the time you're doubling up on the amount of work (time) just to halve the required space, you're already far behind a different approach, which uses the same amount of time (work), but a constant amount of required space...

The costs of really wanting a 100% guaranteed deterministic solution stop scaling from some point on, unfortunately. But watching a cheap probabilistic method scale, never finding a case where it does NOT work, and still insisting it doesn't work, is like pretending that humanity can advance to a Type 4 civilization by tomorrow (possible? sure! likely? not so much!). It's more likely that something like that will happen than that a probabilistic algorithm fails to find a solution. A probabilistic method is not like some lottery or whatever people try to draw an analogy to; it is unimaginable to actually find a counter-example that beats all the odds.

False: once the limit is exceeded, Kangaroo has no advantage, so the time spent generating the DB is worth it. If you would use mathematical references to validate your claims, that would be great.
This is a method of improving scalability.
But hey, keep wasting your time with Kangaroo; 3emi sends you his best wishes.
member
Activity: 165
Merit: 26
I guess the lesson here is that space x time is always a constant!

....

Faster = Larger
Slower = Smaller

The only way to reduce this constant would be to find a property satisfied with different probabilities in 2 sets:

the set of the keys stored (Space)
the set of the keys generated (Time)

For example, if you have to work in a 2^40 interval,

usually you have to precompute 2^20 keys to store (space)
and 2^20 keys at run time (time)
then 2^20 * 2^20 = 2^40

Now imagine you find an interval of 2^20 keys where a certain property occurs with probability 1/4, while the same property occurs with probability 1/2 in the rest of the space (example: points with X coordinates lower than 0.5 * 2^256),

i.e. a set where that property occurs less frequently than the expected value.

You can store 2^20/2^2 = 2^18 keys, only the keys that satisfy such a property,

and then you need to generate 2^20 keys * 2 = 2^21 keys (on average you need to generate 2 keys at each step, because only 1 of 2 satisfies the property).

So: 2^18 (space) x 2^21 (time) = 2^39

But you have to find such a property.


By the time you're doubling up on the amount of work (time) just to halve the required space, you're already far behind a different approach, which uses the same amount of time (work), but a constant amount of required space...

The costs of really wanting a 100% guaranteed deterministic solution stop scaling from some point on, unfortunately. But watching a cheap probabilistic method scale, never finding a case where it does NOT work, and still insisting it doesn't work, is like pretending that humanity can advance to a Type 4 civilization by tomorrow (possible? sure! likely? not so much!). It's more likely that something like that will happen than that a probabilistic algorithm fails to find a solution. A probabilistic method is not like some lottery or whatever people try to draw an analogy to; it is unimaginable to actually find a counter-example that beats all the odds.
legendary
Activity: 1932
Merit: 2077
I guess the lesson here is that space x time is always a constant!

....

Faster = Larger
Slower = Smaller

The only way to reduce this constant would be to find a property satisfied with different probabilities in 2 sets:

the set of the keys stored (Space)
the set of the keys generated (Time)

For example, if you have to work in a 2^40 interval,

usually you have to precompute 2^20 keys to store (space)
and 2^20 keys at run time (time)
then 2^20 * 2^20 = 2^40

Now imagine you find an interval of 2^20 keys where a certain property occurs with probability 1/4, while the same property occurs with probability 1/2 in the rest of the space (example: points with X coordinates lower than 0.5 * 2^256),

i.e. a set where that property occurs less frequently than the expected value.

You can store 2^20/2^2 = 2^18 keys, only the keys that satisfy such a property,

and then you need to generate 2^20 keys * 2 = 2^21 keys (on average you need to generate 2 keys at each step, because only 1 of 2 satisfies the property).

So: 2^18 (space) x 2^21 (time) = 2^39

But you have to find such a property.


One way would be to perform a statistical analysis of the set (1*G, 2*G, 3*G, ..., 2^20*G).

Generate all the points and sort them by their x coordinates;

on average you should get more or less 1% of the points in [0, 0.01), more or less 1% in [0.01, 0.02), and so on (normalizing x to [0, 1)).

You select the range where the actual percentage is minimal; maybe it is 0.5% or 0.7%, but below 1%.

This is the property needed to reduce the set of keys to store by more than the required increase in the number of calculations.
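
A small Python sketch of the kind of analysis described above (illustrative only, not arulbero's exact procedure): generate k*G for k = 1..N on secp256k1, bucket the x coordinates into 100 bins of width 0.01 (after normalizing x to [0, 1)), and report the least-populated bin. N is kept at 2^16 here so it finishes quickly in pure Python.

Code:
# Illustrative sketch: bucket the x coordinates of 1*G .. N*G into 100 bins
# and find the least-populated one. N = 2^16 keeps pure Python fast enough.
P  = 2**256 - 2**32 - 977                      # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(p1, p2):
    # Affine secp256k1 addition (assumes p1 != -p2; true for this walk).
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

N = 1 << 16
bins = [0] * 100
point = (Gx, Gy)                               # 1*G
for k in range(1, N + 1):
    bins[point[0] * 100 // P] += 1             # bucket by x / P in [0, 1)
    point = ec_add(point, (Gx, Gy))            # advance to (k+1)*G

low = min(range(100), key=bins.__getitem__)
print(f"expected ~{N // 100} per bin; least populated: "
      f"[{low / 100:.2f}, {(low + 1) / 100:.2f}) with {bins[low]} points")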
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
How to create a baby table file (bPfile.bin) using your method here?
https://github.com/iceland2k14/bsgs/tree/main/v2_gmp

Basically you need to rewrite all the code for this... I did my own approach and it was slower than my current BSGS version.

The Ultra-Lightweight Database version (by mcdouglasx) may solve the speed problem of BSGS, but it needs to pre-process a lot of points, near 2^50, just to reach the same speed as my current version of BSGS on GitHub (such a pre-processing task alone may take some months).

So, extending my answer to your question: you need to understand mcdouglasx's algorithm and rewrite the code. There is NO easy way to do that, and once you edit that code, you need to pre-process points up to 2^50 or something like that just to match the speed of current keyhunt.

@mcdouglasx I want to write my thoughts on this topic, because it caused debate and friction between you and other users. I want to organize my ideas and write them down here for all of you. I am going to do that later.



I think that if an algorithm does not depend directly on computing power and its efficiency improves cumulatively over time, then after a certain time it will end up being more efficient than the rest.
hero member
Activity: 862
Merit: 662
How to create a baby table file (bPfile.bin) using your method here?
https://github.com/iceland2k14/bsgs/tree/main/v2_gmp

Basically you need to rewrite all the code for this... I did my own approach and it was slower than my current BSGS version.

The Ultra-Lightweight Database version (by mcdouglasx) may solve the speed problem of BSGS, but it needs to pre-process a lot of points, near 2^50, just to reach the same speed as my current version of BSGS on GitHub (such a pre-processing task alone may take some months).

So, extending my answer to your question: you need to understand mcdouglasx's algorithm and rewrite the code. There is NO easy way to do that, and once you edit that code, you need to pre-process points up to 2^50 or something like that just to match the speed of current keyhunt.

@mcdouglasx I want to write my thoughts on this topic, because it caused debate and friction between you and other users. I want to organize my ideas and write them down here for all of you. I am going to do that later.
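
For a rough sense of why that pre-processing dominates, here is a back-of-the-envelope Python estimate. The 2^50 figure is the one mentioned above; the 60 Mkeys/s throughput is only an assumed example, not a measured number from keyhunt or any other tool.

Code:
# Back-of-the-envelope estimate of the pre-processing cost mentioned above.
# The 60 Mkeys/s throughput is only an assumed example figure.
points_needed   = 2**50                  # size of the pre-processed set
keys_per_second = 60e6                   # assumed single-machine throughput
seconds = points_needed / keys_per_second
print(f"~{seconds / 86400:.0f} days (~{seconds / 86400 / 30:.1f} months) "
      f"of pre-processing at 60 Mkeys/s")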

jr. member
Activity: 42
Merit: 0
How to create a baby table file (bPfile.bin) using your method here?
https://github.com/iceland2k14/bsgs/tree/main/v2_gmp
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
In short, you are only hindering research with your supposed theoretical intuitions without real scientific basis in practice. Respect the community. Of your five messages, all contain fallacies that do not represent reality because you speak from misinformation. That’s why you were so disliked when you used the name @digaran. Self-analyze, bro. If you find this topic useless, just ignore it.

OK, I admit it, I am digaran! I only have one problem: I can't prove it.

Fuck all the research papers and all the mathematicians who dedicated their lives to them, and of course all the very practical software that proves you're full of nonsense, and negate what is in front of your eyes. You live in your own reality, the one that broke the square-root bound (congrats on that), and I'm really impressed by how everyone managed to put your ultra-lightweight DB into practice. Or not, because it can't work the way you think it can, not even if von Neumann himself rises from the dead and takes a stab at it, because of the square-root bound (5th time I mention it, since you probably didn't even look up what it means). You're lost, buddy. Buy a GPU, try out JLP's kangaroo: does it work 50x faster than your CPU while using less power? If yes, then you have something to meditate on for a while. Go read the CUDA introduction, look at the nice drawings on the first page, understand what transistors are used for, or the difference between a generic CPU and dedicated computing units. Only then can we have a serious talk. No amount of storage optimization will ever decrease the number of actual operations needed to solve a problem. You are looking in the wrong place.

Ok, Digaran, bye, end of topic, "You won"
member
Activity: 165
Merit: 26
In short, you are only hindering research with your supposed theoretical intuitions without real scientific basis in practice. Respect the community. Of your five messages, all contain fallacies that do not represent reality because you speak from misinformation. That’s why you were so disliked when you used the name @digaran. Self-analyze, bro. If you find this topic useless, just ignore it.

OK, I admit it, I am digaran! I only have one problem: I can't prove it.

Fuck all the research papers and all the mathematicians who dedicated their lives to them, and of course all the very practical software that proves you're full of nonsense, and negate what is in front of your eyes. You live in your own reality, the one that broke the square-root bound (congrats on that), and I'm really impressed by how everyone managed to put your ultra-lightweight DB into practice. Or not, because it can't work the way you think it can, not even if von Neumann himself rises from the dead and takes a stab at it, because of the square-root bound (5th time I mention it, since you probably didn't even look up what it means). You're lost, buddy. Buy a GPU, try out JLP's kangaroo: does it work 50x faster than your CPU while using less power? If yes, then you have something to meditate on for a while. Go read the CUDA introduction, look at the nice drawings on the first page, understand what transistors are used for, or the difference between a generic CPU and dedicated computing units. Only then can we have a serious talk. No amount of storage optimization will ever decrease the number of actual operations needed to solve a problem. You are looking in the wrong place.
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
You lack long-term vision. I assure you that, in the future, BSGS with a sufficiently large database would double your current speed with Kangaroo, and to surpass it you would have to double your computing power, whereas BSGS would only have to store more keys. This is the interesting point of this research: it is not limited by how much computing power you have. I'm not saying that Kangaroo is not efficient, I'm just saying that it depends on raw computing power, which is already beginning to be seen as a stone in the shoe.

You might as well be right. I started this off-topic with "space x time is a constant", so obviously if you use a ton of "space" then you need less "time" to solve the same problem.

This is valid no matter what algorithm you use. So I get it now - you want to store lots and lots of keys in less space, maybe so that more keys fit in fast memory (RAM?).

The fallacy in your logic, however, is this: you are thinking only in terms of a system that actually has fast memory, and comparing two algorithms on that same system. In this case, the one that trades more memory for less time is BSGS. I wish you good luck with that.

However, if we start comparing BSGS vs Kangaroo on a system that trades slow (or no) memory for a lot more computing power, then what you find is that BSGS does not even apply (since memory is either really, really slow, or non-existent), and an algorithm based on computing power alone will always outperform it, simply because the extra computing far surpasses any level of storage you optimized for on a system with a low amount of computing power.

Your theory is biased by your own interests, so please stop spamming here. Your theories without test models are just that: theories based on what you believe.

Sometimes, what seems logical in theory doesn’t always work in practice. This can be due to many factors, such as unconsidered variables, implementation errors, or simply because reality is more complex than anticipated.

Let me tell you a story about the arrogance of thinking you know everything and how it can lead to self-humiliation. When Marilyn vos Savant published her solution to the Monty Hall problem in 1990, she received a lot of criticism, especially from mathematicians and statisticians. More than 1,000 people with doctorates wrote to the magazine where she published her response to tell her she was wrong. Over time, her solution was accepted and became a classic example of how intuition can fail in probabilistic problems.

In short, you are only hindering research with your supposed theoretical intuitions without real scientific basis in practice. Respect the community. Of your five messages, all contain fallacies that do not represent reality because you speak from misinformation. That’s why you were so disliked when you used the name @digaran. Self-analyze, bro. If you find this topic useless, just ignore it.
member
Activity: 165
Merit: 26
You lack long-term vision. I assure you that, in the future, BSGS with a sufficiently large database would double your current speed with Kangaroo, and to surpass it you would have to double your computing power, whereas BSGS would only have to store more keys. This is the interesting point of this research: it is not limited by how much computing power you have. I'm not saying that Kangaroo is not efficient, I'm just saying that it depends on raw computing power, which is already beginning to be seen as a stone in the shoe.

You might as well be right. I started this off-topic with "space x time is a constant", so obviously if you use a ton of "space" then you need less "time" to solve the same problem.

This is valid no matter what algorithm you use. So I get it now - you want to store lots and lots of keys in less space, maybe so that more keys fit in fast memory (RAM?).

The fallacy in your logic, however, is this: you are thinking only in terms of a system that actually has fast memory, and comparing two algorithms on that same system. In this case, the one that trades more memory for less time is BSGS. I wish you good luck with that.

However, if we start comparing BSGS vs Kangaroo on a system that trades slow (or no) memory for a lot more computing power, then what you find is that BSGS does not even apply (since memory is either really, really slow, or non-existent), and an algorithm based on computing power alone will always outperform it, simply because the extra computing far surpasses any level of storage you optimized for on a system with a low amount of computing power.
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
You show that you do not know how this DB works (that's why I call you ignorant).

If you give a monkey infinite time typing randomly, at some point it will discover all the existing keys.
So this is not about brute force and birthday paradoxes; this is beyond what Kangaroo can do today.

I don't need to care how it works, but you fail to understand my question: does it help find an ECDLP solution faster than the sqrt bound?

It has zero relevance that you can store lots and lots of keys if, at the end of the day, it only works based on EC scalar multiplications. I hope you realize that Kangaroo requires only a small fixed number of multiplications (simply the number of kangaroos, for the initial setup), and that everything else is simple additions, which are hundreds or thousands of times faster than a point multiply? And that it can solve a key, on average, in around 1.71 sqrt(N) operations (by operation, I mean addition, not multiplication)? Which is a lot less than what BSGS requires? It has zero squat relevance how well you optimize the storage, because the first thing that matters is how many EC operations are performed, not how many trillions of keys you can scan as visited or not. Because first, those trillion keys need to be computed one way or the other.

It's like drawing some mountains on a sheet of paper and saying "here are the world's mountains" - but you actually need to climb them, not look at a pretty picture. These are totally separate orders of magnitude we are talking about. But anyway, you answered my question. Which is a no. Case closed.

You lack long-term vision. I assure you that, in the future, BSGS with a sufficiently large database would double your current speed with Kangaroo, and to surpass it you would have to double your computing power, whereas BSGS would only have to store more keys. This is the interesting point of this research: it is not limited by how much computing power you have. I'm not saying that Kangaroo is not efficient, I'm just saying that it depends on raw computing power, which is already beginning to be seen as a stone in the shoe.

It's like drawing some mountains on a sheet of paper and saying "here are the world's mountains" - but you actually need to climb them, not look at a pretty picture. These are totally separate orders of magnitude we are talking about. But anyway, you answered my question. Which is a no. Case closed.

Did you know that to measure the height of a pole you don't need to climb it? You calculate it just by measuring its shadow. That's how it works, but you keep giving these silly examples that don't reflect the reality of this.
member
Activity: 165
Merit: 26
You show that you do not know how this DB works (that's why I call you ignorant).

If you give a monkey infinite time typing randomly, at some point it will discover all the existing keys.
So this is not about brute force and birthday paradoxes; this is beyond what Kangaroo can do today.

I don't need to care how it works, but you fail to understand my question: does it help find an ECDLP solution faster than the sqrt bound?

It has zero relevance that you can store lots and lots of keys if, at the end of the day, it only works based on EC scalar multiplications. I hope you realize that Kangaroo requires only a small fixed number of multiplications (simply the number of kangaroos, for the initial setup), and that everything else is simple additions, which are hundreds or thousands of times faster than a point multiply? And that it can solve a key, on average, in around 1.71 sqrt(N) operations (by operation, I mean addition, not multiplication)? Which is a lot less than what BSGS requires? It has zero squat relevance how well you optimize the storage, because the first thing that matters is how many EC operations are performed, not how many trillions of keys you can scan as visited or not. Because first, those trillion keys need to be computed one way or the other.

It's like drawing some mountains on a sheet of paper and saying "here are the world's mountains" - but you actually need to climb them, not look at a pretty picture. These are totally separate orders of magnitude we are talking about. But anyway, you answered my question. Which is a no. Case closed.
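
For readers who want to see the "few multiplications, mostly group additions" structure in code, here is a toy Pollard kangaroo sketch on secp256k1, solving Q = x*G for x in a 2^20 interval. It is only an illustration of the technique, not JLP's implementation, and the step counts are loose rather than tuned to the 1.71*sqrt(N) average mentioned above.

Code:
# Toy Pollard kangaroo on secp256k1 for Q = x*G with x in [0, W).
# Scalar multiplications are needed only for the setup (start point and
# jump table); every step afterwards is a single point addition.
import math

P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                     # point at infinity
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def ec_mul(k, pt):                                      # double-and-add
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

def kangaroo(Q, W=1 << 20):
    k = 11
    jumps = [1 << i for i in range(k)]                  # distances 1 .. 1024
    jump_pts = [ec_mul(d, G) for d in jumps]            # setup: k scalar muls
    step = lambda pt: pt[0] % k                         # jump chosen by x coord

    y, d = ec_mul(W, G), 0                              # tame starts at W*G
    for _ in range(8 * math.isqrt(W)):
        i = step(y); y, d = ec_add(y, jump_pts[i]), d + jumps[i]
    trap, trap_dist = y, d                              # single trap at the end

    y, d = Q, 0                                         # wild starts at Q
    for _ in range(100 * math.isqrt(W)):
        if y == trap:
            return W + trap_dist - d
        i = step(y); y, d = ec_add(y, jump_pts[i]), d + jumps[i]
    return None                                         # unlucky miss: retry

x = 777_777
assert kangaroo(ec_mul(x, G)) == x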
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
1 KB file is not relevant to the subject.
Here you show your ignorance: are you saying that the size of the DB is irrelevant, in the context of a post made exclusively about that? Lol.


Yes, that is exactly what I said. More than that, you yourself admit that your so-called DB has some % probability of false positives, yet at the same time (!) you argue about deterministic vs probabilistic methods; I mean, really, WTF? I won't even mention again the nonsense about probability decay, or whatever you called that thing you believe happens when the interval size grows.

Ignorance is when you make claims without actually having anything to back them up with. So - do you know of any key that can't be solved by Kangaroo, when I can show you that ANY public key you give me in some interval can be broken in sub-sqrt time, which proves you wrong? Would you say that using some 10 GB of helper data that can solve any key is worse or better than having a 1 MB file that takes forever to process and isn't even as deterministic as you claim? For Kangaroo, we can simply add any false collision to a secondary table, which by itself is much more of a guarantee of never missing a true positive collision, unlike your super-tiny ultra-lightweight bitmap.


You show that you do not know how this DB works (that's why I call you ignorant). When we talk about false positives, I am referring to the context of the test script, which was deliberately written that way so as not to have to create a massive DB just to test it; so "IT IS NOT PROBABILISTIC", although it could be, as a decision of the end user.

So - do you know of any key that can't be solved by Kangaroo,

If you give a monkey infinite time typing randomly, at some point it will discover all the existing keys.
So this is not about brute force and birthday paradoxes; this is beyond what Kangaroo can do today.

Do you see me writing in JLP's thread talking about databases? No, because that would be off-topic. That's what you do: you come here just to spam because you don't understand how it works. Contrary to what you say ("1 MB that takes forever to process"), the DB is no longer inefficient as storage. If you understood it before commenting, you wouldn't be treated as ignorant.
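
As a neutral illustration of the false-positive trade-off being argued about here: a compact probabilistic membership structure such as a Bloom filter can shrink a table of stored x coordinates dramatically, with a tunable false-positive rate, as long as every hit is re-verified with a real group operation. This generic sketch is not mcdouglasx's DB design; all sizes and parameters are assumptions.

Code:
# Generic Bloom-filter membership table (illustration only, made-up sizing).
import hashlib

class BloomFilter:
    def __init__(self, n_bits, n_hashes):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item: bytes):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):       # may return false positives
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

# ~1.25 MB of bit array sized for one million stored x coordinates with
# 7 hashes gives roughly a 1% false-positive rate; each positive would then
# be confirmed against the actual point, keeping the final answer exact.
bf = BloomFilter(n_bits=10_000_000, n_hashes=7)
bf.add((123456789).to_bytes(32, "big"))
print((123456789).to_bytes(32, "big") in bf)    # True
print((987654321).to_bytes(32, "big") in bf)    # almost surely False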
member
Activity: 165
Merit: 26
1 KB file is not relevant to the subject.
Here you show your ignorance: are you saying that the size of the DB is irrelevant, in the context of a post made exclusively about that? Lol.


Yes, that is exactly what I said. More than that, you yourself admit that your so-called DB has some % probability of false positives, yet at the same time (!) you argue about deterministic vs probabilistic methods; I mean, really, WTF? I won't even mention again the nonsense about probability decay, or whatever you called that thing you believe happens when the interval size grows.

Ignorance is when you make claims without actually having anything to back them up with. So - do you know of any key that can't be solved by Kangaroo, when I can show you that ANY public key you give me in some interval can be broken in sub-sqrt time, which proves you wrong? Would you say that using some 10 GB of helper data that can solve any key is worse or better than having a 1 MB file that takes forever to process and isn't even as deterministic as you claim? For Kangaroo, we can simply add any false collision to a secondary table, which by itself is much more of a guarantee of never missing a true positive collision, unlike your super-tiny ultra-lightweight bitmap.
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Is it the same to use Kangaroo with 4 cores as with 12 cores? No, right? Because it is an algorithm that depends on computing power, which is limited (technology does not advance that quickly, and the difficulty of the puzzles grows exponentially). That is, your probability of success decreases exponentially with time and with the difficulty of the puzzle.


.. rest of BS ...

Let's cut straight to the chase: did you break the square-root bound for finding a key in an interval? I am talking about the ECDLP problem in an interval.

If you did, congrats; I await the paper.

If you didn't (which is most likely), then we are definitely not on the same page, so engaging further in a discussion is useless, just like with COBRAS, since you didn't understand what the issues are. Shrinking some exakeys into a 1 KB file is not relevant to the subject.

I know what I'm doing, and I know my approach and my limits. I'm not interested in breaking ECC. I know you are digaran ("your actions and opinions are admirable and those of others are worthless," that's your pseudo-scientific motto), but that doesn't matter to me. We live in a free world, and there are faster methods that have not been revealed. I have had one reserved since the beginning of the year, as I already mentioned, because I only need a more powerful PC. I didn't think my life was going to take so many turns, but so it is. I'm starting over until the day comes; I'm not in a hurry, because money is only a necessity. I will only feel the gratitude of victory before revealing it, and meanwhile I share ideas that I think will be useful in the future. I only answer you to leave you without arguments, and you run away with this nonsense that comes more from an immature personality than from actual arguments.

What is your argument in the last message? This is not how you win debates; you only expose yourself.

1 KB file is not relevant to the subject.
Here you show your ignorance: are you saying that the size of the DB is irrelevant, in the context of a post made exclusively about that? Lol.