Topic: Isn't this a massive vulnerability built into the system? (Read 1901 times)

donator
Activity: 2058
Merit: 1054
oh, so if the overall odds are against you, you want lower variance or you're more likely to have an overall loss.  Like if your goal is 60% heads in coin flips, only flip 3 coins cuz if you flip 100, it's not gonna happen.
Depends... But generally in a situation like you describe you want higher variance. 3 flips is higher variance than 100 flips.
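The "3 flips vs. 100 flips" point can be checked exactly with the binomial distribution. A quick standard-library sketch (the function name is mine): hitting at least 60% heads means 2 of 3 flips, or 60 of 100 flips.

```python
from math import comb

def p_heads_at_least(n, k_min):
    """Exact probability of at least k_min heads in n fair coin flips."""
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2**n

# "60% heads": 2 of 3 flips vs 60 of 100 flips
print(p_heads_at_least(3, 2))     # 0.5
print(p_heads_at_least(100, 60))  # ~0.028
```

With 3 flips you hit the target half the time; with 100 flips, under 3% of the time. Fewer trials means higher variance, which is exactly what you want when the target is above your per-flip expectation.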
sr. member
Activity: 392
Merit: 250
oh, so if the overall odds are against you, you want lower variance or you're more likely to have an overall loss.  Like if your goal is 60% heads in coin flips, only flip 3 coins cuz if you flip 100, it's not gonna happen.

Btw I won $15 on  $5 once and got $15 in mixed tickets and on those, I won $14 back in a combination of about 3 winning cards out of the 5.  Talk about an unexpected outcome.  And then I cashed in the $14 for money and chicken sandwich Cheesy nom nom nom nom nom
newbie
Activity: 42
Merit: 0
Well since we're on the topic...well off the topic Tongue ummm, I have a theoretic question that's been bugging me because I suck at remotely complicated probability calculations.

If you were to win let's say $51 on $1 scratch off for a profit of $50 and because you're dumb, you decide to take the payout in 100% more lottery tickets  Cheesy Grin

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world the payout-vs-odds scales up slightly unevenly on more expensive tickets, but at least sort of close, so ignore that.  In fact, let's say the average overall win on the $10 one is 500x the card's value with overall average odds of 1 in 1000, and the $2 tickets have an average overall win of 50x the card's value and overall average odds of 1 in 100.  So they look identical, cuz the $2 one has 10x better odds of winning 10x less money.  BUT the $2 choice gives you more tries for less money each, and all of those tries are going after the same high-value jackpots, of which there are a finite number.  So they're not independent tries, they're sequential ones, and the odds get better the more you have, which makes them look superior to buying fewer cards of a higher value.  So would the $2 one be a significantly better choice, or the $10 ones?

A week ago I bought $10 worth of scratch off tickets: a $2 ticket and 8 $1 tickets.  Only two of the $1 tickets were winners; overall winnings were $11.

I then put that money into 11 $1 tickets.  Again, only two winners, and again, $11.

I then, AGAIN, put that money into 11 $1 tickets.  No winners.

IMO, go to a Casino and play Craps.  Much better odds.  Read up on the rules first, because it can get quite advanced.  And always remember, Roulette is NEVER 50/50.  You can place a bet on both red and black, yet the ball could come up 0/00 Green.
donator
Activity: 2058
Merit: 1054
Well since we're on the topic...well off the topic Tongue ummm, I have a theoretic question that's been bugging me because I suck at remotely complicated probability calculations.

If you were to win let's say $51 on $1 scratch off for a profit of $50 and because you're dumb, you decide to take the payout in 100% more lottery tickets  Cheesy Grin

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world the payout-vs-odds scales up slightly unevenly on more expensive tickets, but at least sort of close, so ignore that.  In fact, let's say the average overall win on the $10 one is 500x the card's value with overall average odds of 1 in 1000, and the $2 tickets have an average overall win of 50x the card's value and overall average odds of 1 in 100.  So they look identical, cuz the $2 one has 10x better odds of winning 10x less money.  BUT the $2 choice gives you more tries for less money each, and all of those tries are going after the same high-value jackpots, of which there are a finite number.  So they're not independent tries, they're sequential ones, and the odds get better the more you have, which makes them look superior to buying fewer cards of a higher value.  So would the $2 one be a significantly better choice, or the $10 ones?
If you assume that:
1. There are few tickets in total, so buying many tickets has a noticeable effect on your odds, and
2. You stop buying tickets if you find a winning one,
Then there's an advantage to the $2 tickets.

Regardless, you also need to consider the variance. With the $10 tickets you have more variance; whether that's good or bad depends on your perspective. If you don't want variance, you're better off not buying any tickets at all.
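Using the hypothetical numbers from the question, the two options have the same expected value but very different variance. A minimal sketch (function name is mine; ticket cost ignored):

```python
def winnings_stats(n_tickets, prize, p_win):
    """Mean and variance of total winnings from n independent tickets,
    each paying `prize` with probability `p_win` (ticket cost ignored)."""
    mean_one = prize * p_win
    var_one = prize**2 * p_win - mean_one**2
    return n_tickets * mean_one, n_tickets * var_one

# The question's numbers: a $10 ticket pays 500x its value at 1-in-1000
# odds; a $2 ticket pays 50x its value at 1-in-100 odds.
mean10, var10 = winnings_stats(5, 500 * 10, 1 / 1000)
mean2, var2 = winnings_stats(25, 50 * 2, 1 / 100)
print(mean10, var10 ** 0.5)  # $25 expected, std dev ~$353
print(mean2, var2 ** 0.5)    # $25 expected, std dev ~$50
```

Both choices expect $25 back, but the $10 tickets have roughly 7x the standard deviation: a tiny chance at $5000 versus a decent chance at a few $100 wins.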
sr. member
Activity: 392
Merit: 250
Well since we're on the topic...well off the topic Tongue ummm, I have a theoretic question that's been bugging me because I suck at remotely complicated probability calculations.

If you were to win let's say $51 on $1 scratch off for a profit of $50 and because you're dumb, you decide to take the payout in 100% more lottery tickets  Cheesy Grin

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world, the payout vs odds scale up slightly unevenly on more expensive tickets but at least sort of close so ignore that.  In fact, lets say the average overall win on the $10 one is 500x the card's value with an overall average odds of 1 in 1000.  The $2 tickets have an average overall win of 50x the card's value and average overall odds of 1 in 100.  So they look identical cuz the $2 one has 10x better odds of winning 10x less money BUT the $2 choice results in more tries but for less money but all the tries are trying to win the same high value jackpots which there are a finite number of so they're not individual tries, they're sequential ones so the odds get better the more you have, making them appear to be superior to buying less cards of a higher value.  So would the $2 one be a significantly better choice or the $10 ones?
sr. member
Activity: 448
Merit: 250
You can buy 100 scratch off tickets, scratch them off and lose 100 times, and the 101st ticket still has the same odds as before you scratched any. It doesn't matter if you keep going 24/7 or take breaks and stop then restart. The odds are the same for each ticket (hash) no matter what you've already calculated in the past (assuming you aren't duplicating work, which they don't).

This isn't a good comparison, as there are a finite amount of lottery tickets printed. Each one that is consumed reduces the overall volume and affects the odds. Say you have 1000 tickets with 20 jackpots and you win one, then your odds of the next ticket being a jackpot are 19/999. If you lose on 100 tickets in a row after that getting that jackpot, the odds are 19/899. Obviously the tickets are spread out temporally, you aren't the only one playing and there are millions and millions of them so the effect is extremely minute, but it is there.

Either way, lottery tickets serve to tax the poor and the stupid...but as I tell my super-Jew ladyfriend who scoffs at my gambling, "You're never going to win the jackpot. I might..."
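The finite-pool effect described above is the hypergeometric distribution: drawing without replacement from a fixed print run gives slightly different odds than independent draws. A sketch with the post's numbers (function names are mine):

```python
from math import comb

def p_any_winner_finite(total, winners, bought):
    """P(at least one winner) drawing `bought` tickets from a finite
    printed pool, i.e. without replacement (hypergeometric)."""
    return 1 - comb(total - winners, bought) / comb(total, bought)

def p_any_winner_independent(p, bought):
    """P(at least one win) if every ticket were an independent draw."""
    return 1 - (1 - p) ** bought

# The post's numbers: 1000 printed tickets, 20 jackpots, you buy 100.
finite = p_any_winner_finite(1000, 20, 100)
indep = p_any_winner_independent(20 / 1000, 100)
print(f"without replacement: {finite:.4f}")  # ~0.88
print(f"independent draws:   {indep:.4f}")   # ~0.87
```

The difference is real but small, which matches the post's caveat: with millions of tickets spread across all buyers, the effect is extremely minute.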
newbie
Activity: 42
Merit: 0
one thing to keep in mind with the "coin flip 50/50 chance" thing is that there are more factors involved than simply the two sides of the coin.  Slight differences like how it is flipped, wind speed, imperfections of the coin, etc. all change the chances and the outcome.  Although that imperfect 50/50 chance of a coin flip does fit very well with Bitcoin, since those slight differences are part of Bitcoin as a whole.
sr. member
Activity: 392
Merit: 250
Oh yeah, I forgot timestamps were involved Sad that does in fact throw it off I guess cuz there's a new pool every single time then.  Now I get what you all were saying Tongue

Though you're wrong about the lottery tickets though Tongue  If there are 100,000 of one type of scratch off and 5,000 are winners and you buy one and lose at it, there are now 99,999 tickets and 5,000 winners.  And if you lose again, there's 99,998 and 5,000 winners.  And you're probably gonna keep going on that pattern cuz the odds are freakin terrible rofl.

Coin flips though, yeah Tongue
newbie
Activity: 42
Merit: 0
I just posted a "simple question" thread and then read this one.

Keep in mind that the Nonce is not the only changing value that is hashed within the attempts to create the next block.  The "timestamp" also changes, presumably every second in the case of a UNIX timestamp.

Let's assume we have a mining rig that can check 1 gigahash per second.  Within that 1 second, all of those hashes are tested with the exact same timestamp value as part of their computation.  The next second, the next gigahash's worth are tested with a +1 second change to the timestamp, and we all know that even a small difference like that between a 10 and an 11 can have vastly different outcomes in their hash counterparts.  Example:

the MD5 hash value of '10' is : d3d9446802a44259755d38e6d163e820
the MD5 hash value of '11' is : 6512bd43d9caa6e02c990b0a82652dca

the SHA-256 hash value of '10' is : 4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5
the SHA-256 hash value of '11' is : 4fc82b26aecb47d2868c4efbe3581732a3e7cbcc6c2efb32062c08170a05eeb8

And this is just hashing two numbers, one of which, numerically and Spinal Tappingly speaking, is 1 louder than the other.  The actual block contains far more information than a simple 10 or 11.
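The hashes quoted above can be reproduced with Python's standard `hashlib` (a quick check of the avalanche effect the post is describing):

```python
import hashlib

for data in (b"10", b"11"):
    print("MD5    ", data.decode(), hashlib.md5(data).hexdigest())
    print("SHA-256", data.decode(), hashlib.sha256(data).hexdigest())

# Count how many hex digits even match position-for-position:
a = hashlib.sha256(b"10").hexdigest()
b = hashlib.sha256(b"11").hexdigest()
print(sum(x == y for x, y in zip(a, b)), "of 64 hex digits match")
```

A one-character change to the input leaves the two digests with roughly the 1-in-16 per-digit agreement you'd expect from unrelated random strings.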
member
Activity: 112
Merit: 10
When they say you don't "make progress", it means that the rules of probability have no memory when it comes to independent events. You can buy 100 scratch off tickets, scratch them off and lose 100 times, and the 101st ticket still has the same odds as before you scratched any. It doesn't matter if you keep going 24/7 or take breaks and stop then restart. The odds are the same for each ticket (hash) no matter what you've already calculated in the past (assuming you aren't duplicating work, which they don't).

The odds of a coin coming up tails is 0.5 or 50%. But if you flip a coin and it comes up heads 100 times in a row, what are the odds that the next time it'll come up tails?  Yep, still 50%.  You weren't "making progress" towards successfully getting a tails by doing a bunch of failed attempts. The attempts are independent, probabilistically, from each other regardless of success or failure.
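The "no memory" claim is easy to check empirically: condition on a long streak of heads and look at the very next flip. A small simulation sketch (function name is mine):

```python
import random

def tails_rate_after_heads_streak(streak, trials, seed=0):
    """Flip until `streak` heads in a row, then record the NEXT flip.
    Over many trials, that next flip is still ~50% tails."""
    rng = random.Random(seed)
    tails = 0
    for _ in range(trials):
        run = 0
        while run < streak:
            run = run + 1 if rng.random() < 0.5 else 0  # heads extends, tails resets
        tails += rng.random() < 0.5
    return tails / trials

print(tails_rate_after_heads_streak(5, 100_000))  # ~0.5
```

Even immediately after five heads in a row, tails comes up about half the time: the streak carries no information about the next independent flip.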
hero member
Activity: 527
Merit: 500
Transaction fees.
newbie
Activity: 6
Merit: 0
I heard that the maximum number of bit coins will be 21 million. So what is the incentive to keep mining after this amount has been reached?
donator
Activity: 2058
Merit: 1054
But I would think everyone would use an incrementing function instead of a random one cuz there's got to be many times more math involved in generating a random number than just adding 1 to the last number.  I mean it like reads a value off the clock and does some whole big equation thing and then spits it out based on the range you're looking for and then hashes it.  That would throw my hash rate out the window, not to mention I doubt the clock's value interval would be small enough to actually change between hashes at, let's say, 250 million hash calculations per second.  I dunno, maybe it's based on 1 billionth of a second.
That's true, generating a new pseudorandom number for every hash wouldn't be very efficient. But the clock granularity has little to do with it, it's just used as a seed. Once you have the seed you can generate as many pseudorandom numbers in a sequence as you want.
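The seed-once idea looks like this with Python's standard `random` module (any seeded PRNG works the same way; the specific seed here is arbitrary):

```python
import random

# Seed the generator once (in practice, e.g. from the clock); after that,
# each new pseudorandom number is just a cheap state update, so the
# clock's granularity never matters again.
rng = random.Random(1234)
seq = [rng.getrandbits(32) for _ in range(5)]
print(seq)

# The same seed reproduces the same sequence:
rng2 = random.Random(1234)
assert [rng2.getrandbits(32) for _ in range(5)] == seq
```

The clock is read exactly once; every subsequent draw costs a few arithmetic operations, not a full reseed.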
sr. member
Activity: 392
Merit: 250
Well now that makes a lot more sense Tongue btw I somehow missed that wiki page, I mostly read the FAQ and some technical explanations on I think wikipedia.  But since I didn't know the size of the data involved, I assumed that since a block can be solved in about 10 minutes by a pool instead of like 100 years, then trying random numbers over again in that relatively short period of time when you consider all the computers involved, would be inefficient as a whole with a huge percentage of repeats.

But I would think everyone would use an incrementing function instead of a random one cuz there's got to be many times more math involved in generating a random number than just adding 1 to the last number.  I mean it like reads a value off the clock and does some whole big equation thing and then spits it out based on the range you're looking for and then hashes it.  That would throw my hash rate out the window, not to mention I doubt the clock's value interval would be small enough to actually change between hashes at, let's say, 250 million hash calculations per second.  I dunno, maybe it's based on 1 billionth of a second.
donator
Activity: 2058
Merit: 1054
Your mistake is a vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But that's implementation details.

Thanks dude with more technical bitcoin knowledge than me!

Helped me understand the inner workings of bitcoin more.
Oh, I don't know anything myself, I just quote what I read here Smiley.

Wait wait wait a minute.  If they're using non-repeating values of any sort, then....how are clients not "making progress" towards a low enough hash value? If they're leaving behind non-matching hashes that they're not going to try again, obviously they are making progress then because there are less total possible hashes left to try.
Implementation details. If the nonce was longer (say 128 bits), you could just pick nonces randomly. If the Merkle root was easy to change, you could just use a different random Merkle root and nonce for every hash. But a compromise was reached with a short nonce which is checked sequentially, followed by a random change in the Merkle root. There's "no progress" in the sense that if you hashed for X minutes and didn't find a block, you aren't any closer to finding one than you were in the beginning. (That you don't try the same header twice only means that your progress isn't negative.)

But also, how is everyone working with different block contents?  You mean previous block contents or current block contents?  It has to be based on the block before it, so does everyone just grab a random partial piece of the last block and not all of it, then when it claims to have a legit block, it tells the verification system what specific chunk it used?  Or is the difference the transactions they're processing, and the transactions are included in the about-to-be-hashed chunk of data, and each transaction only gets grabbed by one pool so each pool's attempted block is different?
You say you read the documentation, yet you keep asking these very basic questions.
This page details the contents of a block header. One of the fields is the hash of the previous block; this much doesn't change between different miners at the same time. The main thing that changes is the Merkle root, a hash of a data structure of all of the transactions to be included in the block. Assuming everyone knows about and is willing to include all transactions, most of the data doesn't change between different miners. What does change is the generation transaction: everyone uses an address of his own in it. Also, there's an "extra nonce" in this transaction which can be chosen freely. Hash functions being what they are, every such change alters the Merkle root in ways you can't imagine. Thus, the Merkle root can be for all purposes chosen randomly.

This will all become so much clearer to you if you hang around http://blockexplorer.com/ for a while.
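The sequential-nonce-plus-extra-nonce scheme described above can be sketched as a toy loop. This is NOT real Bitcoin serialization; the fixed header fields are faked as a plain byte string, and the extra nonce is appended directly instead of flowing through a coinbase transaction and Merkle tree:

```python
import hashlib
from struct import pack

def sha256d(data):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix, target):
    """Toy mining loop. `header_prefix` stands in for the fixed header
    fields (version, previous hash, Merkle root, time, bits)."""
    extra_nonce = 0
    while True:
        base = header_prefix + pack("<I", extra_nonce)
        for nonce in range(2**32):
            digest = sha256d(base + pack("<I", nonce))
            if int.from_bytes(digest, "little") < target:
                return extra_nonce, nonce, digest
        extra_nonce += 1  # 32-bit nonce exhausted: new "Merkle root", retry

# Absurdly easy toy target (about 1 in 16 hashes qualifies) so this
# returns almost instantly rather than after billions of hashes:
extra, nonce, digest = mine(b"toy-header", 2**252)
print(extra, nonce, digest.hex())
```

The key structural point survives the simplification: the inner nonce is scanned sequentially, and only when it runs out does the (effectively random) outer value change.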
legendary
Activity: 966
Merit: 1004
Keep it real
Your mistake is a vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But that's implementation details.

Thanks dude with more technical bitcoin knowledge than me!

Helped me understand the inner workings of bitcoin more.
donator
Activity: 2058
Merit: 1054
Your mistake is a vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But that's implementation details.
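Both figures in the post follow from the standard birthday bound: among n uniform random draws from a space of size d, the probability of any repeat is at most n(n-1)/(2d). A quick check:

```python
# 10^18 random draws from the 2^288 freely-choosable header space:
n = 10**18   # a quintillion hashes
d = 2**288   # Merkle root + nonce bits
print(n * (n - 1) / (2 * d))   # ~1e-51: effectively no collisions

# Contrast with the 32-bit nonce: a few million random nonces would
# already collide many times, which is why it's scanned sequentially.
m = 4_000_000
print(m * (m - 1) / (2 * 2**32))  # ~1860 expected duplicate pairs
```

This is why random hashing is fine for the 288-bit space but sequential scanning is used for the 32-bit nonce.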
legendary
Activity: 966
Merit: 1004
Keep it real
It's different current block contents (the transactions being included), not everyone will have the exact same thing... but most will be pretty similar.

Your best bet is to look at source code for some of the major miners, that way you can figure out how they're generating hashes/nonces/etc.
sr. member
Activity: 392
Merit: 250
Wait wait wait a minute.  If they're using non-repeating values of any sort, then....how are clients not "making progress" towards a low enough hash value? If they're leaving behind non-matching hashes that they're not going to try again, obviously they are making progress then because there are less total possible hashes left to try.
sr. member
Activity: 392
Merit: 250
I thought a "nonce" was by definition a number generated at random and used once.  So it's used and forgotten about so you might try it twice and not know it.  Everyone seemed to imply random numbers were being used to generate them but if you say they're sequential or parallellaly logically processed in an overall non-repeating way (oh yeah, I made a word Tongue), I'll take your word for it Tongue

Also GO PACKERS.

But also, how is everyone working with different block contents?  You mean previous block contents or current block contents?  It has to be based on the block before it, so does everyone just grab a random partial piece of the last block and not all of it, then when it claims to have a legit block, it tells the verification system what specific chunk it used?  Or is the difference the transactions they're processing, and the transactions are included in the about-to-be-hashed chunk of data, and each transaction only gets grabbed by one pool so each pool's attempted block is different?