
Topic: Isn't this a massive vulnerability built into the system? - page 2. (Read 1852 times)

hero member
Activity: 812
Merit: 1022
No Maps for These Territories
What's to stop someone from writing a rigged mining client with the random number generation function replaced with code to try sequential values so it never tries the same value twice?  Like "value += 1" for example.
This is already how miners work. They try all 2^32 values of the nonce field in sequence (CPU) or in parallel (GPU).

But since everyone is working with different block contents, there is no 'race'. Every miner is exploring a different space, and there are multiple 'right' answers, since many different inputs produce a hash value below the threshold.

So it doesn't matter at all what your iteration algorithm does. You cannot cheat this way.
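
For reference, here is a minimal sketch in Python of what a CPU miner's inner loop already looks like. The header layout is simplified and the target is an arbitrary illustrative value, not a real difficulty, with byte-order details glossed over:

```python
import hashlib
import struct

def double_sha256(data):
    # Bitcoin hashes the 80-byte block header with SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Stand-in for the first 76 bytes of the header (version, previous block hash,
# merkle root, timestamp, bits) -- all fixed while the nonce is being searched.
header_prefix = b"\x00" * 76
target = 1 << 230                  # arbitrary illustrative target, not a real difficulty

def mine(header_prefix, target):
    # The nonce is simply counted up: 0, 1, 2, ..., 2^32 - 1.
    # Nothing is ever tried twice.
    for nonce in range(2**32):
        header = header_prefix + struct.pack("<I", nonce)
        if int.from_bytes(double_sha256(header), "big") < target:
            return nonce           # found a "low enough" hash
    return None                    # nonce space exhausted: change the block contents and retry
```

If the whole nonce range is exhausted without a hit, a real miner changes something else in the block (the timestamp, or the extranonce in the coinbase transaction, which changes the merkle root) and starts counting again, which is why every miner ends up exploring a different space.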


legendary
Activity: 966
Merit: 1004
Keep it real
I'll admit I don't know exactly how mining works... but I would assume you're wrong, otherwise everyone who wrote miners would have done it differently.  I haven't looked at any of the miners' source code, but I'm pretty sure they wouldn't rehash the same stuff over and over, because that would be a huge waste of time.
sr. member
Activity: 392
Merit: 250
I couldn't get past the whole "there is no progress" thing, because with everything to do with hashing I've ever heard about, there obviously is progress.  Like when brute-forcing by generating hashes to compare against the one you want to crack, you're making progress by ruling out the ones that didn't work.  So why doesn't Bitcoin mining make progress?

I thought about it and came to this hopefully incorrect conclusion because otherwise everyone's gonna be pretty pissed off at me :P

The only thing that would make that statement about "there is no progress" true is if people's miners didn't remember what they already tried, so they could theoretically hash the same value twice or more.  Here's my poorly arranged proof-ish thing :P Note: all the numeric values are completely made up.
-----------------------------------------------------------------------------------
The rate at which my miner calculates hashes is extremely steady, so the size of the values being hashed must be constant, right?  I think that's how it works.  Like if it were some 16-bit values being hashed and then some 256-bit ones after that, the hash rate would be erratic, because it takes longer to hash larger data, right?  So I assume it's feeding in values to be hashed that are always the same bit-size.
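
For what it's worth, that assumption is correct: a Bitcoin block header is always exactly 80 bytes, so every hash attempt runs over the same amount of data. A quick sanity check of the field sizes, sketched in Python:

```python
# Bitcoin block header fields and their sizes in bytes.
HEADER_FIELDS = {
    "version": 4,
    "prev_block_hash": 32,
    "merkle_root": 32,
    "timestamp": 4,
    "bits": 4,      # compact encoding of the difficulty target
    "nonce": 4,
}

assert sum(HEADER_FIELDS.values()) == 80   # same fixed-size input on every attempt
```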

Variable time!
The size of each value to be hashed by my GPU is A bits.  So like A = 256 bits for example.

There are X total possible hashes based on the size of A (like if A is 64 bits of data, there are maybe 100 billion possible hash results with a character set of 52 or whatever, assuming it's string data, which it apparently is, since the first block was based on text from a news story, and strings and characters are really just binary data anyway).

So back on the network, it decides that difficulty level 0.23934 means all hashes of 00000xxxxxxxxxxx and below would complete the block. That results in a "low enough" range of hashes containing Y hash values that will complete the block.  Let's say, given the difficulty rating of whatever, 12,345 out of a possible 100 billion hashes are "low enough", so Y = 12,345.
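
To put a rough number on the "low enough" range: a good hash behaves like a uniformly random 256-bit value, so the fraction of hashes below the target is simply target / 2^256. A tiny illustration in Python, using a made-up target rather than a real difficulty:

```python
target = 1 << 230            # made-up target, not a real difficulty
total_hashes = 1 << 256      # X: size of the 256-bit hash space
low_enough = target          # Y: every value strictly below the target "completes the block"

p_per_try = low_enough / total_hashes
print(p_per_try)             # 2^-26, about 1.5e-08 for this made-up target
```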

Your miner takes data from the last block, adds a random value resulting in a fixed total of A bits, and at any given point in time it has tried Z hashes so far.  So at 40.0 MH/s for 5 seconds, you've tried 200 million hashes, so Z = 200 million.  Let's say all the hashes so far were found to be outside the "low enough" range.

So while mining, at any given point, there are only X minus Z hash values left to check (total possible hashes minus the hashes you already tried), because your client knows the ones it already tried aren't inside the range.

So in that case you would be "making progress."

But...everyone keeps saying there is no progress and you're not making any progress.

That would only be true under one single condition: your client tries the same values multiple times because it's picking them completely at random, with no memory of past tries.  As far as I understand, that's how it works, right?
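
As a back-of-envelope check using the made-up numbers above (X = 100 billion possible values, Y = 12,345 "low enough" ones): the expected number of tries until the first hit is X / Y when repeats are allowed, and (X + 1) / (Y + 1) when nothing is ever retried (the standard waiting-time results for sampling with and without replacement). The two come out almost identical:

```python
X = 100_000_000_000                    # made-up total number of possible values
Y = 12_345                             # made-up number of "low enough" values

with_repeats = X / Y                   # picking completely at random, repeats allowed
without_repeats = (X + 1) / (Y + 1)    # never trying the same value twice

print(f"{with_repeats:,.0f}")          # about 8,100,446 tries on average
print(f"{without_repeats:,.0f}")       # about 8,099,789 tries on average
```

So even in this toy example, never repeating a value saves well under 0.01% of the expected work, and the real hash space is astronomically larger than the number of hashes anyone ever tries.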

Which brings me to my point about the vulnerability.  I think you're thinking "He's gonna say, 'Can't someone rig a client to remember the values it already tried?'" but no, that'd be gigabytes of data and nobody has the RAM for that.  You don't have to "remember" the values, you just have to not try them twice... like trying them sequentially, for example.

What's to stop someone from writing a rigged mining client with the random number generation function replaced with code to try sequential values so it never tries the same value twice?  Like "value += 1" for example.  Instead of hashing values based on random numbers like everyone else, they're strategically trying a sequence so none get repeated, which means they'd hit a hash value inside the "low enough" range a hell of a lot quicker than everyone else.

That would give them an enormous advantage over everyone in the long term. They'd get an abnormally high bitcoin share and cause blocks to be created more quickly than the system predicted random chance would allow, and who knows what the effect of that would be.  So hopefully this works a little differently than I'm imagining, or we've got a problem here.
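
For anyone who wants to test the "no progress" claim empirically, here is a toy simulation with deliberately tiny, made-up parameters: it compares a never-repeat sequential search against a purely random search with repeats allowed, over many independent runs (a fresh "block" each time). Both need about the same number of tries on average:

```python
import random
import statistics

SPACE = 100_000        # toy search space (stand-in for X)
WINNERS = 50           # toy count of "low enough" values (stand-in for Y)
RUNS = 500

def tries_sequential():
    # Fresh "block" each run: the winning values land in new places,
    # and the searcher counts upward from a random starting value.
    winners = set(random.sample(range(SPACE), WINNERS))
    start = random.randrange(SPACE)
    for i in range(SPACE):
        if (start + i) % SPACE in winners:
            return i + 1

def tries_random():
    # Same setup, but values are picked at random with no memory of past tries.
    winners = set(random.sample(range(SPACE), WINNERS))
    tries = 0
    while True:
        tries += 1
        if random.randrange(SPACE) in winners:
            return tries

print(statistics.mean(tries_sequential() for _ in range(RUNS)))   # roughly 2,000
print(statistics.mean(tries_random() for _ in range(RUNS)))       # roughly 2,000
```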