Haven't read it yet, will later, so maybe I don't fully understand. I do know that ICMP is slower. But my point is that if I can do 300 hashes in 0.7 milliseconds, the "hardest" bet to brute force is a 95% win, in which case you need, on average, 11 hashes to force a loss. In extreme cases it can take 350+ or so (the longest streak of "losses" in a row I saw for 5% bets, which is obviously not the maximum, but in 100,000,000 runs a 95% chance roll came up 349 times in a row only once or twice), which is why I chose to hash 300 and time that.
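Roughly the kind of simulation I'm describing, as a sketch: assume each candidate hash independently makes a 95% bet lose with probability 0.05, count how many hashes the house would have to grind through, and time 300 SHA-256 hashes. The seed strings and run counts here are made up for illustration, and the site's actual bet mechanics may shift the numbers.

```python
import hashlib
import random
import time

WIN_CHANCE = 0.95  # player's win probability per roll

def tries_until_loss(rng):
    """Count candidate hashes until one makes the 95% bet lose (5% chance each)."""
    n = 1
    while rng.random() < WIN_CHANCE:
        n += 1
    return n

rng = random.Random(42)
runs = [tries_until_loss(rng) for _ in range(100_000)]
print("average tries:", sum(runs) / len(runs))
print("worst streak seen:", max(runs))

# Time 300 SHA-256 hashes, roughly what I benchmarked above
start = time.perf_counter()
for i in range(300):
    hashlib.sha256(f"server-seed-{i}".encode()).hexdigest()
print("300 hashes took", (time.perf_counter() - start) * 1000, "ms")
```

The tail of that distribution is the point: most searches finish fast, but rare long streaks are why I padded the benchmark out to 300 hashes.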
What about autobets, where you have to iterate through several rolls just to see if that one is the way you want? E.g. where it works like a multiplier?
What about the load of doing this to everyone, as is implied? Presumably they are not just screwing you over specifically (if they are doing it at all).
Hundreds of bets per second, maybe thousands with some users autobetting (there are several userscripts for that), plus the load of the web server, the MSSQL server, etc. It's a one-box show, apparently. It's also running on Windows Server 2008, which, depending on who you talk to, performs better than Windows 8 Server.
The contention rate that would exist is actually quite high if what you propose is going on; it would be extremely noticeable. Even so, timestamps have been used to fingerprint the clock skew of a specific system even when it physically moves networks or hides behind Tor hidden services, and some of those methods have pretty fine accuracy. Read the footnotes of the papers I provided as well; that will give you 10 or so other papers you can read on the subject. The Pearson book I listed is partially available on Google Books, so partially free to read. (I only found it because I was searching for who cited my paper; they misspelled my name, which makes me think there are no fact checkers on it, so it may not be worth buying. I read nothing other than the footnote, so I don't know its overall quality.)
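The skew-fingerprinting idea from those papers boils down to a linear fit: sample a host's timestamp counter against your local clock and measure how far the slope drifts from 1.0, in parts per million. That drift follows the physical machine wherever it goes. Here's a sketch of just the regression step on synthetic data; it does not read real TCP timestamps, and the +50 ppm figure is made up for illustration.

```python
def estimate_skew_ppm(samples):
    """Least-squares slope of (local_time_s, remote_timestamp_s) pairs,
    returned as deviation from 1.0 in parts per million."""
    n = len(samples)
    mx = sum(t for t, _ in samples) / n
    my = sum(r for _, r in samples) / n
    cov = sum((t - mx) * (r - my) for t, r in samples)
    var = sum((t - mx) ** 2 for t, _ in samples)
    slope = cov / var  # remote seconds per local second
    return (slope - 1.0) * 1e6

# Synthetic host whose clock runs 50 ppm fast, sampled once a second for an hour
samples = [(t, t * (1 + 50e-6)) for t in range(3600)]
print(round(estimate_skew_ppm(samples)))  # ~50
```

With real traffic you'd also have to filter network jitter and convert the raw integer timestamp ticks at the host's (unknown) tick rate, which is where the accuracy claims in the papers come from.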
And on a web server that appears to be hosted in Germany, with random amounts of internet traffic, unknown server load, and unknown user load on the site, you'd be hard pressed to notice a delay that could be anywhere between 0.002 ms and 10 ms and attribute it to brute forcing a new hash. I haven't checked on TCP timestamps, but are they accurate to the 0.000001th of a second?
You really should read the one about guessing valid usernames. In that one, using TCP timestamps, they were able to tell whether a username was valid because an invalid one would return faster: the server skips comparing the password and doing the single hash on the supplied password. That is just one hash, on a server far away, with other things going on.
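A toy model of that leak, so it's concrete what "returns faster" means: the password hash only runs when the username exists, so the invalid-username path is measurably quicker. This is not the paper's actual server code; the user table, iteration count, and helper names here are all made up.

```python
import hashlib
import time

def slow_hash(pw: bytes) -> str:
    # stand-in for an expensive password hash (bcrypt etc.)
    digest = pw
    for _ in range(200_000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

USERS = {"alice": slow_hash(b"hunter2")}

def login(username: str, password: str) -> bool:
    if username not in USERS:
        return False  # fast path: no hash computed, leaks "no such user"
    return slow_hash(password.encode()) == USERS[username]

def time_login(username: str) -> float:
    start = time.perf_counter()
    login(username, "wrong-password")
    return time.perf_counter() - start

print("valid user:  ", time_login("alice"))
print("invalid user:", time_login("mallory"))
```

The usual fix is to run the hash on a dummy value even when the username is unknown, so both paths take about the same time.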
It would at least let you confirm the theory, if it's really happening.