Seriously? You say the second one has better or equal success rate?
Nah, I just mean that if you run the second one as many times as needed to get the same number of matches as the first one, it finishes in less time, on average, on the same system, with the same resources and under the same conditions. You know, in order to compare them you have to bring both to a common denominator. But again, the code is not a demonstration of either Kangaroo or the birthday paradox, so God knows what you were trying to prove... It might as well count apples in Satoshi's bag in the story above, as far as I'm concerned.
My script is the simplest demonstration of Kangaroo's obsolescence.
Your script samples some arbitrary number of items and then traverses some interval pseudo-randomly; it is neither a demonstration of Kangaroo nor a use of the birthday paradox.
If you want to talk about Kangaroo, just try bits 66, 67, 68, ..., 125.
If you want to talk about Kangaroo, then understand it correctly, code a proper implementation, and you'll be amazed: it runs at the same speed and with the same efficiency no matter what interval you run it on. This is why we have notions like "algorithm complexity" and big-O notation, which are expressed relative to the problem size and to fundamental step counts, not to random opinions.
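For example, here is a minimal toy sketch of the kangaroo (lambda) method, working in the multiplicative group mod a 64-bit prime instead of secp256k1. The modulus, base, and parameter choices below are illustrative assumptions only (and G is assumed to have a large enough order mod P), but the point carries over: the expected work is on the order of sqrt(b - a) group operations, so two intervals of the same width take about the same effort no matter where they sit.

```python
import random

P = 2**64 - 59      # largest 64-bit prime, used here as a toy modulus (assumption for the demo)
G = 3               # toy base; assumed to have a large enough order mod P for these intervals

def kangaroo(h, a, b, seed=0):
    """Try to find x in [a, b) with pow(G, x, P) == h; None means 'retry with another seed'."""
    n = b - a
    k = max(n.bit_length() // 2 + 2, 1)
    jumps = [2**i for i in range(k)]              # jump distances, mean on the order of sqrt(n)
    random.Random(seed).shuffle(jumps)
    gjumps = [pow(G, d, P) for d in jumps]        # precomputed G^d for each jump distance
    mean = sum(jumps) // k

    # Tame kangaroo: starts at the known exponent b, hops a fixed number of times, sets a trap.
    x_t, y_t = b, pow(G, b, P)
    for _ in range(3 * mean):
        i = y_t % k
        x_t, y_t = x_t + jumps[i], (y_t * gjumps[i]) % P

    # Wild kangaroo: starts at the unknown point h = G^x and hops with the same deterministic
    # jump rule; if it ever steps onto the tame path, it walks straight into the trap.
    x_w, y_w = 0, h
    while x_w <= x_t - a:
        if y_w == y_t:
            return x_t - x_w
        i = y_w % k
        x_w, y_w = x_w + jumps[i], (y_w * gjumps[i]) % P
    return None

# Same interval width at two very different offsets: the work per solve is comparable,
# because only the width n = b - a matters, not where the interval is located.
for a in (1 << 20, 1 << 44):
    b = a + (1 << 20)
    x = random.randrange(a, b)
    h, found, seed = pow(G, x, P), None, 0
    while found is None:
        found = kangaroo(h, a, b, seed)
        seed += 1
    print(f"offset 2^{a.bit_length() - 1}: recovered={found == x}")
```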
That way you see that the higher the puzzle, the worse it performs, and there comes a point where even acquiring the necessary compute power would cost more than the prize itself.
That is generic to any problem and any algorithm; however, some algorithms depend more on space than on time, and the main issue here is that the upper bound on the space you can throw at the problem is much, much lower than the upper bound on the time, which you can always shrink (translation: faster speed, thanks to more compute power).
And I won't touch on the subject of BSGS or databases, because you don't understand it, and you don't even try to understand it enough to speak about it properly.
Yeah, sure, I don't understand that the mere fact of using a database slows down any algorithm that relies on fast memory internally. I mean, fast memory assumes O(1) steps per read/write, while a database, guess what, works in a number of steps logarithmic in the number of entries, which is slower than O(1), so it must be taken into account if you intend to analyze the performance of an algorithm. Cool. Do you realize that a database is actually a practical-side emulation of the fast-memory concept?
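Here is a minimal sketch of that difference (assuming Python, with an in-memory sqlite3 table standing in for "a database" and a dict standing in for fast memory; the sizes are arbitrary): the hash map answers a probe in O(1) expected steps, while the B-tree index behind the table needs O(log n) steps plus per-query overhead, and that cost has to show up in any honest analysis of the surrounding algorithm.

```python
import random
import sqlite3
import time

N = 1_000_000
keys = random.sample(range(1 << 62), N)          # arbitrary unique 62-bit keys

fast = {k: i for i, k in enumerate(keys)}        # "fast memory": hash table, O(1) expected probes

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pts (k INTEGER PRIMARY KEY, v INTEGER)")   # B-tree index: O(log n) probes
db.executemany("INSERT INTO pts VALUES (?, ?)", ((k, i) for i, k in enumerate(keys)))
db.commit()

probes = random.sample(keys, 100_000)

t0 = time.perf_counter()
for k in probes:
    _ = fast[k]
print("dict lookups   :", time.perf_counter() - t0, "s")

t0 = time.perf_counter()
cur = db.cursor()
for k in probes:
    cur.execute("SELECT v FROM pts WHERE k = ?", (k,)).fetchone()
print("sqlite3 lookups:", time.perf_counter() - t0, "s")
```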
You just spout fallacies in huge walls of text without any code.
And I'm backed by the fact that all my posts, whether considered good or bad, include code to support the idea.
What fallacies are you referring to, or is it just your opinion?
The fact that you actually post code works against you when there is no correlation between your idea, the problem, and the code. Seriously, do you believe that your two scripts in any way support the text that came before them, you know, the one that starts with "The correct option is..."? Your code does not refute anything you were given arguments about, since it has nothing to do with the said arguments, but rather with some dubious problem: let's take some random numbers, traverse an interval pseudo-randomly, and see what happens. Zero relevance to either Kangaroo or the birthday paradox; it's just some arbitrarily created problem.
One other thing you should meditate on: there is no need to make any changes to Kangaroo to benefit from pre-computation, which is what you strongly suggest, probably because you didn't understand it correctly. You might as well have some already-compiled binary executable, and if you simply save results from one run to the next, then each subsequent run will solve the next problem (same key, or another key) in less and less time. There is no change in the computing speed, only a logarithmic slowdown at the collision-check layer, which of course, if it's a database, means you need to double the number of stored items before a lookup ever needs one additional step to reach a stored value. So overall, the efficiency grows and never decreases, no matter if you run it once, twice, or a quadrillion times. It simply asymptotically approaches the point where a single run needs only the minimal possible amount of computation to land on some already-computed point.
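As an illustration (a sketch only, on the same toy mod-p group as before; the file name, distinguished-point rule, and parameters are assumptions made up for the example): tame walks store every distinguished point they reach in a file, and every later run, same key or not, loads that file first, so its wild walk can stop at any previously stored point. The walk function itself never changes between runs; only the stored table grows.

```python
import os
import pickle
import random

P = 2**64 - 59        # same toy modulus as before (assumption for the demo)
G = 3                 # toy base, assumed to have a large enough order mod P
DP_MASK = (1 << 12) - 1          # a point is "distinguished" if its low 12 bits are zero
STORE = "dp_table.pkl"           # hypothetical file that survives between runs

def load_store():
    if os.path.exists(STORE):
        with open(STORE, "rb") as f:
            return pickle.load(f)
    return {}

def save_store(table):
    with open(STORE, "wb") as f:
        pickle.dump(table, f)

def walk(y, x, jumps, gjumps, limit):
    """Deterministic pseudo-random walk; returns the first distinguished point (y, x) it hits."""
    k = len(jumps)
    for _ in range(limit):
        if y & DP_MASK == 0:
            return y, x
        i = y % k
        x, y = x + jumps[i], (y * gjumps[i]) % P
    return None

def solve(h, a, b, table):
    """One run: tame walks feed the shared table, wild walks look it up."""
    n = b - a
    k = n.bit_length() // 2 + 2
    jumps = [2**i for i in range(k)]            # must stay identical across runs for reuse to work
    gjumps = [pow(G, d, P) for d in jumps]
    limit = 64 * (DP_MASK + 1)
    while True:
        # Tame walk from a random *known* exponent: pure precomputation, key-independent.
        t = random.randrange(a, b)
        dp = walk(pow(G, t, P), t, jumps, gjumps, limit)
        if dp:
            table[dp[0]] = dp[1]                # table: distinguished point -> known exponent
        # Wild walk from the unknown point h = G^x, shifted by a random known offset s.
        s = random.randrange(n)
        dp = walk((h * pow(G, s, P)) % P, s, jumps, gjumps, limit)
        if dp and dp[0] in table:
            cand = (table[dp[0]] - dp[1]) % (P - 1)   # x = (t + d_tame) - (s + d_wild)
            if pow(G, cand, P) == h:
                return cand

table = load_store()
x = random.randrange(1 << 30, 1 << 31)          # pretend this is the unknown key
found = solve(pow(G, x, P), 1 << 30, 1 << 31, table)
save_store(table)                               # the next run starts with a bigger table
print("recovered:", found == x, "| stored distinguished points:", len(table))
```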