Topic: Does this simple provably fair scheme work?

hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 18, 2014, 11:46:22 AM
#4
I just noticed this now.

And I just noticed now that we're both responding to a post made in November 2013 Tongue (a spammer necroposted, but the spammer's post has since been deleted by the mods).
legendary
Activity: 3416
Merit: 1912
The Concierge of Crypto
October 18, 2014, 10:37:25 AM
#3
I just noticed this now. You'd get faster answers if you posted this in the gambling subforum. There are also articles and posts there about this topic every now and then (and older ones too, if you do a search).

And there are a bunch of people there who actually take the time to understand any new gaming sites that appear, to see whether they are provably fair.

A fair number have been exposed as not truly provably fair, or as having flawed implementations.
hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
October 13, 2014, 12:00:49 PM
#2
...
What I can't understand is the need for all of the extra operations (initial shuffle, cutting the deck, factoradics, etc.) over what I think is a much simpler scheme:

- Server computes server_seed and shares hash(server_seed) with the client
- Client computes client_seed and shares it with the server
- Server uses (server_seed + client_seed) as the seed for a PRNG
- Server generates N random variables required for the result of the game (cards, numbers, slot positions, etc.) with the PRNG. For games that require unique values (card games), we keep generating until they are all unique.
- Share the game results (random variables) and server_seed with the client, who can run the same deterministic PRNG with (server_seed + client_seed) to obtain the same N random variables and ensure the game is indeed fair.

I simply might have misunderstood some fundamental need behind all of the extra complication. The only downside I can think of with this approach is the situation where we need unique random variables (like card games). The idea here is not to compute the full deck, since that is never needed (maximum number of cards that can occur in a game of single-deck blackjack, for example, is <25). Yes, a few collisions might happen while generating uniques, but PRNGs are very efficient and deterministic (so they don't use up the precious entropy), so that shouldn't be a problem.
...

The fair shuffling article you mentioned actually explains why, but you may have missed it. PRNGs are typically seeded with either 32- or 64-bit numbers. Let's say we choose a PRNG that has a 64-bit seed (the article went with one using a 32-bit seed). That means it has about 1.8 × 10^19 different starting points, so at most that many different combinations of hands can be dealt.

Single-player blackjack, assuming an 8-deck shoe (very common) and a 4-hand split limit (also common), can consume at least 68 cards (that's the best I could figure, anyways). Producing 68 choices from an 8-deck shoe yields 1.5 × 10^79 different combinations (around 15% of the count of atoms in the visible universe, Wolfram|Alpha tells me). Edit: that's all wrong; it's closer to (but larger than) 2.7 × 10^114 different permutations (20-ish million billion billion billion times the count of atoms in the visible universe). This analysis isn't entirely fair, because many of those permutations would never come up when hands bust early, but suffice it to say that if you start with only a 64-bit seed, you are immediately eliminating the ability to generate the vast majority of possible combinations. Their solution was: (1) the server shuffles first, using a long-running CSPRNG that does not have this limitation, and then (2) both parties identically shuffle the result, using a simpler PRNG with a smallish seed.
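If it helps, here's a minimal sketch of that two-stage approach (my own illustration in Python, not Bitzino's actual code): the server's initial shuffle uses the OS CSPRNG, and the second shuffle is a deterministic re-shuffle that both parties can reproduce from the combined seeds.

Code:
import random

def server_initial_shuffle(shoe):
    # Stage 1: the server shuffles with a CSPRNG, so the result is not
    # constrained by a small seed space.
    shuffled = list(shoe)
    random.SystemRandom().shuffle(shuffled)
    return shuffled

def deterministic_reshuffle(shuffled_shoe, server_seed, client_seed):
    # Stage 2: both parties re-shuffle the server's result with a simple
    # PRNG seeded from the combined seeds, so the client can reproduce it
    # once server_seed is revealed.
    rng = random.Random(server_seed + client_seed)
    reshuffled = list(shuffled_shoe)
    rng.shuffle(reshuffled)
    return reshuffled

The server would also commit to its stage-1 result (e.g. by publishing a hash of it) before the client picks client_seed, then reveal it afterwards so the client can verify both stages.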

Edited to add: if you're using a 32-bit seed, as they were in the article, it's also more likely than it should be that you deal the exact same hand multiple times.
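For a sense of scale (my own back-of-the-envelope figure, not one from the article), the birthday bound says a repeated seed, and therefore a repeated deal, becomes more likely than not after only about 77,000 games when there are just 2^32 possible seeds:

Code:
import math

seed_space = 2 ** 32
# A collision among n uniform draws from d values becomes more likely than
# not at roughly n = sqrt(2 * ln(2) * d).
games = math.sqrt(2 * math.log(2) * seed_space)
print(round(games))  # about 77163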

Regarding your random card selection algorithm, I'd suggest that you read up on the Fisher-Yates shuffle algorithm instead. It's actually a bit faster than your proposed algorithm (because it never has "collisions"), and like yours it does not need to shuffle the entire deck if that's not required (although it makes shuffling the entire deck much faster, if desired). I suspect that a gambling site would prefer to go ahead and shuffle the entire deck anyways, just because it would appear more transparent to the average user, even if it's not really necessary.
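Here's a rough sketch of a partial Fisher-Yates shuffle (Python, my own illustration): one PRNG call per card, no retries, and you only touch as many positions as you actually deal.

Code:
import random

def deal(deck, num_cards, rng=random):
    # Partial Fisher-Yates: fix only the first num_cards positions.
    deck = list(deck)  # work on a copy
    for i in range(num_cards):
        j = rng.randrange(i, len(deck))  # one call per card, never a collision
        deck[i], deck[j] = deck[j], deck[i]
    return deck[:num_cards]

Dealing the whole deck is just deal(deck, len(deck)), which is why a site might prefer to shuffle everything up front anyways.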

Does this answer some of your questions?


Also, to Tetraplay Casino: congratulations on being the very first user I've ever added to my ignore list. Your obnoxious, off-topic, 100% spam response fully qualifies you for this honor. I'm happy to see that the mods have since nuked said spam, thanks!
newbie
Activity: 22
Merit: 0
November 28, 2013, 11:58:25 AM
#1
I'm trying to wrap my head around how Provably Fair works and recently ran into Bitzino's articles on deck shuffling and factoradics. What I can't understand is the need for all of the extra operations (initial shuffle, cutting the deck, factoradics, etc.) over what I think is a much simpler scheme:

- Server computes server_seed and shares hash(server_seed) with the client
- Client computes client_seed and shares it with the server
- Server uses (server_seed + client_seed) as the seed for a PRNG
- Server generates N random variables required for the result of the game (cards, numbers, slot positions, etc.) with the PRNG. For games that require unique values (card games), we keep generating until they are all unique.
- Share the game results (random variables) and server_seed with the client, who can run the same deterministic PRNG with (server_seed + client_seed) to obtain the same N random variables and ensure the game is indeed fair.

I simply might have misunderstood some fundamental need behind all of the extra complication. The only downside I can think of with this approach is the situation where we need unique random variables (like card games). The idea here is not to compute the full deck, since that is never needed (maximum number of cards that can occur in a game of single-deck blackjack, for example, is <25). Yes, a few collisions might happen while generating uniques, but PRNGs are very efficient and deterministic (so they don't use up the precious entropy), so that shouldn't be a problem.
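To make the above concrete, here's a rough sketch of what I have in mind (Python; I'm assuming SHA-256 for the hash and the standard random module as the PRNG, but any agreed-upon pair would do):

Code:
import hashlib
import random
import secrets

# 1. Server picks server_seed and publishes hash(server_seed) as a commitment.
server_seed = secrets.token_hex(32)
commitment = hashlib.sha256(server_seed.encode()).hexdigest()

# 2. Client picks client_seed and shares it with the server.
client_seed = secrets.token_hex(16)

# 3. Both sides seed the same deterministic PRNG with (server_seed + client_seed).
rng = random.Random(server_seed + client_seed)

# 4. Server generates N unique values (here: 25 cards from a 52-card deck),
#    re-drawing on the occasional collision.
cards = []
while len(cards) < 25:
    card = rng.randrange(52)
    if card not in cards:
        cards.append(card)

# 5. After the game, the server reveals server_seed; the client checks the
#    commitment and replays the PRNG to verify the same cards come out.
assert hashlib.sha256(server_seed.encode()).hexdigest() == commitment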

What have I overlooked?

Update:
I just realized that some of the games don't occur in a single step but rather involve sharing some of the random variables (i.e. cards) between the steps. Thus, theoretically it would be possible to observe some of the deterministic outcomes and, by data-mining them, in the worst case deduce the rest of the results. However, given that the seed involves a true random component and is unknown to the client, and the period of the PRNG is HUGE (2^19937 − 1 for the Mersenne Twister), I'm not sure this would really matter.