
Topic: In which language top gambling sites are coded? - page 6. (Read 12759 times)

hero member
Activity: 626
Merit: 500
https://satoshibet.com
We use MySQL & MongoDB for the DB (SatoshiBet)
legendary
Activity: 1876
Merit: 1295
DiceSites.com owner
Well, for example pocketdice.io sends a PHPSESSID cookie, so it's safe to say they use PHP ( https://pocketdice.io/index.php works too ;) ). Besides that, you can easily spot the socket.io socket.


A socket is an open connection between the client (browser) and the server with bidirectional communication. If you make a bet, you send the info through the socket (and get the result back), and if others bet you receive their bets the same way. So it's all real-time, and you only connect once. There are different libraries/servers for this; Socket.io is a popular one built on node.js. Perfect for a dice site.

An alternative is indeed fetching new data with AJAX requests. With that, the browser makes a connection/request every x seconds to check for new bets. For a dice site this would obviously be pretty terrible, with noticeable lag and other problems, but AJAX is useful for other purposes.
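The push-vs-poll difference described above can be sketched with a toy in-memory hub standing in for a socket.io server. The `BetHub` class and `poll` function are invented for illustration, not taken from any site's code:

```python
class BetHub:
    """Stand-in for a socket.io server: pushes each new bet to every
    connected listener the moment it happens."""
    def __init__(self):
        self.listeners = []

    def connect(self, callback):
        # One connection per client, kept open for its lifetime.
        self.listeners.append(callback)

    def publish(self, bet):
        # Push model: no polling delay, every client sees the bet now.
        for cb in self.listeners:
            cb(bet)

# Push: the client registers once and receives bets as they happen.
hub = BetHub()
received = []
hub.connect(received.append)
hub.publish({"user": "alice", "amount": 100, "win": True})

# Poll (the AJAX alternative): the client asks every x seconds, so a
# bet can sit unseen for up to x seconds until the next request.
def poll(server_bets, last_seen):
    new = server_bets[last_seen:]
    return new, len(server_bets)
```

With polling, each client repeats `poll` on a timer and a new bet waits for the next cycle; with the hub, `publish` delivers it immediately over the already-open connection.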
legendary
Activity: 2380
Merit: 1209
The revolution will be digital
PocketDice.io = nginx, PHP, socket.io, AngularJS



About that lack of PHP sites:

IMO every developer has to decide per project which language and frameworks to use.

Node.js is single-threaded by design and is perfect for realtime communication between the browser and server. It just makes sense to use it for chatrooms, games, etc., in other words dice/gambling sites :)

It doesn't mean that PHP is necessarily bad, it's just bad for this type of website. Better yet: the 2 websites with PHP still use socket.io (node.js) for the socket. On the other hand, suggesting a MEAN (MongoDB, Express, AngularJS, and Node.js) stack for a normal informative company CMS wouldn't make much sense either (IMO). I would prefer PHP and/or an open-source CMS for that specific goal.

How do you find out what is being used in the backend? Moreover, as I understand it, socket.io is used to auto-update the latest data on the front end. So what would be the drawback of storing the data temporarily in a flat file and auto-refreshing it with JS?
legendary
Activity: 1876
Merit: 1295
DiceSites.com owner
PocketDice.io = nginx, PHP, socket.io, AngularJS



About that lack of PHP sites:

IMO every developer has to decide per project which language and frameworks to use.

Node.js is single-threaded by design and is perfect for realtime communication between the browser and server. It just makes sense to use it for chatrooms, games, etc., in other words dice/gambling sites :)

It doesn't mean that PHP is necessarily bad, it's just bad for this type of website. Better yet: the 2 websites with PHP still use socket.io (node.js) for the socket. On the other hand, suggesting a MEAN (MongoDB, Express, AngularJS, and Node.js) stack for a normal informative company CMS wouldn't make much sense either (IMO). I would prefer PHP and/or an open-source CMS for that specific goal.
legendary
Activity: 2380
Merit: 1209
The revolution will be digital
sr. member
Activity: 323
Merit: 254
thanks for sorting this out into a table format! way easier on the eyes
legendary
Activity: 1876
Merit: 1295
DiceSites.com owner
I like the idea of knowing this, so let's make an overview (grouped by backend language, not really sorted)



| Site           | Server | Backend         | Socket lib     | DB/Storage  | Frontend  |
|----------------|--------|-----------------|----------------|-------------|-----------|
| Just-Dice.com  |        | Node.js         | socket.io      | redis+MySQL |           |
| PrimeDice.com  | Cowboy | Node.js         | socket.io      | PostgreSQL  | EmberJS   |
| DiceBitco.in   | nginx  | Node.js+Express | Primus         |             | AngularJS |
| Dicenow.com    | nginx  | Node.js+Express | socket.io      |             | AngularJS |
| Satoshibet.com | nginx  | Node.js         | socket.io      |             |           |
| MoneyPot.com   | Cowboy | Node.js         | socket.io      | PostgreSQL  |           |
| PRCDice.eu     | MS-IIS | .NET            | SignalR        |             |           |
| Win88.me       | MS-IIS | .NET            | ServiceStack   |             | AngularJS |
| Peerbet.org    | MS-IIS | .NET            | -              | -           | -         |
| Dice.ninja     | cyclone| Python          | sockjs-cyclone | MySQL       |           |
| luckyb.it      |        | Python+Flask    | -              | -           | -         |
| Rollin.io      | Apache | PHP+laravel     | socket.io      |             |           |
| BitDice.me     | nginx  | RoR+RabbitMQ    |                | Redis+MySQL |           |


More / corrections? Let us know :)
legendary
Activity: 2380
Merit: 1209
The revolution will be digital
Committing 10k rolls at a time may go against the atomic consistency of the DB, resulting in unexpected effects on the bankroll (+ve/-ve).

The bulk insert is applied atomically, but each inserted row still has to respect its constraints.

One big reason you'd want to accumulate rolls in-memory and then commit them to the DB in bulk is if each roll needs to be applied sequentially to the DB and you don't want sequential roll insertion to be the bottleneck.

For example, imagine if each roll needs to calculate the total house bankroll to adjust the edge of the roll. The bankroll changes each roll. If each roll has to hit the DB server, then you're going to be dealing with a lot of insert contention.

Instead, let's say you only make one DB insertion at the turn of each second. As rolls come in, you buffer the results in memory (assured that the DB cannot change during this time) and then commit all the rolls at once.

Here's an idea: imagine if you write a function `roll(prevDB, params)` that returns `newDB` where prevDB and newDB are just associative datastructures that represent the state of the DB in memory.

That means you can `reduce(roll, prevDB, [params, ...])` where `[params, ...]` is a sequence of user roll parameters coming down, say, a websocket.

At the turn of each second, you stop the reduction, take the latest newDB value (the result of all the buffered rolls), and commit it to the DB. The post-commit value of the DB is now your `prevDB` that you feed back into the reduction and resume processing rolls.



I was thinking about what you said. Instead of storing in a memory pool, how about storing it in a text file, inserting that file's data after a certain time interval, say one minute, and then removing the old data from the file? Real-time display can be done from the text file itself.
newbie
Activity: 41
Merit: 0
Committing 10k rolls at a time may go against the atomic consistency of the DB, resulting in unexpected effects on the bankroll (+ve/-ve).

The bulk insert is applied atomically, but each inserted row still has to respect its constraints.

One big reason you'd want to accumulate rolls in-memory and then commit them to the DB in bulk is if each roll needs to be applied sequentially to the DB and you don't want sequential roll insertion to be the bottleneck.

For example, imagine if each roll needs to calculate the total house bankroll to adjust the edge of the roll. The bankroll changes each roll. If each roll has to hit the DB server, then you're going to be dealing with a lot of insert contention.

Instead, let's say you only make one DB insertion at the turn of each second. As rolls come in, you buffer the results in memory (assured that the DB cannot change during this time) and then commit all the rolls at once.

Here's an idea: imagine if you write a function `roll(prevDB, params)` that returns `newDB` where prevDB and newDB are just associative datastructures that represent the state of the DB in memory.

That means you can `reduce(roll, prevDB, [params, ...])` where `[params, ...]` is a sequence of user roll parameters coming down, say, a websocket.

At the turn of each second, you stop the reduction, take the latest newDB value (the result of all the buffered rolls), and commit it to the DB. The post-commit value of the DB is now your `prevDB` that you feed back into the reduction and resume processing rolls.
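The buffered reduction described above can be sketched in a few lines of Python. The `roll`/`reduce` names follow the post; the even-money payout rule and the shape of the in-memory DB are invented for illustration:

```python
from functools import reduce

def roll(db, params):
    """Pure function: takes the in-memory DB state and one bet's
    parameters, returns the new state. Nothing touches the real DB."""
    payout = params["bet"] if params["win"] else -params["bet"]
    user = params["user"]
    return {
        "bankroll": db["bankroll"] - payout,
        "balances": {**db["balances"],
                     user: db["balances"].get(user, 0) + payout},
    }

prev_db = {"bankroll": 10_000, "balances": {"alice": 500}}
buffered = [                      # rolls that arrived this second
    {"user": "alice", "bet": 100, "win": True},
    {"user": "bob",   "bet": 50,  "win": False},
]

# At the turn of the second, fold the whole buffer into one new state...
new_db = reduce(roll, buffered, prev_db)
# ...then commit new_db to the real DB in a single write (not shown),
# and feed the post-commit state back in as the next prev_db.
```

Because `roll` never mutates its input, `prev_db` stays valid until the commit succeeds, which is what lets you resume the reduction from the post-commit state.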

hero member
Activity: 776
Merit: 522
BitDice.me and family use RoR with Redis & RabbitMQ as the backend, and MySQL as the main database.

Most sites choose node.js because of websockets.
legendary
Activity: 1662
Merit: 1050
But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.


10K bets in 500 ms? Incredible. I really wonder how you achieved that. I mean, there's stuff like calculating the lucky number, inserting into the DB, broadcasting back to the user, etc.

Hehe, I should have mentioned it only returned the combined profit/loss of those rolls to the user. It did simulate (i.e. calculate the lucky number etc.) and insert 10,000 rolls into the DB at once though, with a single query rather than repeating an insert query 10,000 times. It also updated the user balance just once, after all the rolls were done.

I bet this would be fast in other languages too.

Committing 10k rolls at a time may go against the atomic consistency of the DB, resulting in unexpected effects on the bankroll (+ve/-ve).
m19
full member
Activity: 186
Merit: 100
But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.


10K bets in 500 ms? Incredible. I really wonder how you achieved that. I mean, there's stuff like calculating the lucky number, inserting into the DB, broadcasting back to the user, etc.

Hehe, I should have mentioned it only returned the combined profit/loss of those rolls to the user. It did simulate (i.e. calculate the lucky number etc.) and insert 10,000 rolls into the DB at once though, with a single query rather than repeating an insert query 10,000 times. It also updated the user balance just once, after all the rolls were done.

I bet this would be fast in other languages too.
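A rough sketch of that batching, using Python with sqlite3 as a stand-in DB. `executemany` here approximates the single multi-row INSERT the post describes (the win comes from one transaction instead of 10,000 separate ones); the 49.5% win rule, table layout, and fixed stake are invented for illustration:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rolls (user_id INTEGER, lucky INTEGER, profit INTEGER)")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 100000)")
conn.commit()

# Simulate all 10,000 rolls in memory first...
rng = random.Random(42)
rolls = []
for _ in range(10_000):
    lucky = rng.randrange(10_000)
    profit = 100 if lucky < 4950 else -100   # illustrative 49.5% win chance
    rolls.append((1, lucky, profit))

# ...then hit the DB once: one bulk insert plus one balance update,
# instead of 10,000 individual INSERT statements.
with conn:
    conn.executemany("INSERT INTO rolls VALUES (?, ?, ?)", rolls)
    total = sum(p for _, _, p in rolls)
    conn.execute("UPDATE users SET balance = balance + ? WHERE id = 1", (total,))
```

The combined `total` is also exactly what gets reported back to the user as profit/loss for the batch.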
legendary
Activity: 1008
Merit: 1000
GigTricks.io | A CRYPTO ECOSYSTEM FOR ON-DEMAND EC
But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.


10K bets in 500 ms? Incredible. I really wonder how you achieved that. I mean, there's stuff like calculating the lucky number, inserting into the DB, broadcasting back to the user, etc.
m19
full member
Activity: 186
Merit: 100
The main point of focus for building a gambling site is ensuring data/domain integrity at the database level.

You need to be able to answer questions like:

- What happens to in-flight DB transactions when they fail or the app crashes?
- Are all of your domain-atomic data updates wrapped in transactions? For example, if the server crashes right after you increment my account balance but before you decrement your house bankroll, is my balance rolled back or did you just persist a bad ledger?
- If your provably-fair mechanism depends on sequential, gapless nonces (like Just-Dice's "bet number"), can you actually guarantee a gapless sequence? For example, AUTO_INCREMENT in MySQL and SERIAL in Postgres do not produce gapless sequences.
- What do you do when the values change out from under your DB transactions? If some button should only increment a value in the DB once per hour and someone double-clicks it (launches two queries in flight at once), is the value incremented twice (bad)? Is it incremented once only because the queries both incremented the initial value (bad)? Or is it only incremented once because the second query fails because of your one-increment-per-hour domain constraint (good)?

The stack you use for your application layer matters very little because what matters is your ability to sufficiently answer these types of questions.

You shouldn't use the ID for nonces in my opinion. Just add an extra DB field for the nonce; that's the only way to really prove it.

I like your thoughts on transactions; they should be used more often I think. With transactions, the queries are only committed when all of them succeed. Nothing will (should) happen if one query fails or the app crashes.

How would you deal with your last point? This is always interesting and I haven't found a very good way to deal with it.
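One common way to deal with that last point is to put the guard inside the UPDATE's WHERE clause, so the check and the write are a single atomic statement rather than a read-then-write pair. A sketch with sqlite3 as a stand-in DB (the table, column names, and `increment_once_per_hour` helper are invented for illustration):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, value INTEGER, last_inc REAL)")
conn.execute("INSERT INTO counters VALUES (1, 0, 0)")
conn.commit()

def increment_once_per_hour(conn, now):
    # The once-per-hour rule lives in the WHERE clause, so the check
    # and the increment happen as one atomic statement: a second
    # in-flight click matches zero rows instead of incrementing again.
    cur = conn.execute(
        "UPDATE counters SET value = value + 1, last_inc = ? "
        "WHERE id = 1 AND last_inc <= ?",
        (now, now - 3600),
    )
    conn.commit()
    return cur.rowcount == 1   # True only if the increment actually happened

now = time.time()
first = increment_once_per_hour(conn, now)    # succeeds
second = increment_once_per_hour(conn, now)   # double-click: matches no rows
```

This is the "good" outcome from the quoted post: the second query simply fails the domain constraint instead of re-incrementing the initial value.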
newbie
Activity: 41
Merit: 0
The main point of focus for building a gambling site is ensuring data/domain integrity at the database level.

You need to be able to answer questions like:

- What happens to in-flight DB transactions when they fail or the app crashes?
- Are all of your domain-atomic data updates wrapped in transactions? For example, if the server crashes right after you increment my account balance but before you decrement your house bankroll, is my balance rolled back or did you just persist a bad ledger?
- If your provably-fair mechanism depends on sequential, gapless nonces (like Just-Dice's "bet number"), can you actually guarantee a gapless sequence? For example, AUTO_INCREMENT in MySQL and SERIAL in Postgres do not produce gapless sequences.
- What do you do when the values change out from under your DB transactions? If some button should only increment a value in the DB once per hour and someone double-clicks it (launches two queries in flight at once), is the value incremented twice (bad)? Is it incremented once only because the queries both incremented the initial value (bad)? Or is it only incremented once because the second query fails because of your one-increment-per-hour domain constraint (good)?

The stack you use for your application layer matters very little because what matters is your ability to sufficiently answer these types of questions.
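The balance/bankroll question above can be sketched with a transaction that persists both updates or neither. sqlite3 stands in for the real DB, and a CHECK constraint plays the role of a failure mid-transaction; the table layout and `pay_out` helper are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY, "
    "amount INTEGER CHECK (amount >= 0))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("player", 1000), ("bankroll", 100_000)])
conn.commit()

def pay_out(conn, win):
    """Credit the player and debit the bankroll in ONE transaction:
    either both updates persist, or neither does."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute(
                "UPDATE accounts SET amount = amount + ? WHERE name = 'player'",
                (win,))
            # A crash right here would leave no trace after restart:
            # the player credit above is still uncommitted.
            conn.execute(
                "UPDATE accounts SET amount = amount - ? WHERE name = 'bankroll'",
                (win,))
        return True
    except sqlite3.Error:
        return False  # everything rolled back; the ledger still balances

pay_out(conn, 200)        # ok: player 1200, bankroll 99800
pay_out(conn, 1_000_000)  # CHECK fails on bankroll: both updates rolled back
```

Without the transaction, the second call would leave the player credited while the bankroll debit never happened, which is exactly the bad-ledger scenario in the bullet list above.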
member
Activity: 67
Merit: 10
Most sites use Node.js for the backend, mainly because it's the easiest (though not always the best) way to build a real-time site that handles a decent number of users.

But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.

Just-Dice was also built in Node.js + MySQL by the way.

Just-Dice uses redis as well as MySQL.

Did you store all the J-D bets in the MySQL DB? I've been trying to figure out a good way to store crazy amounts of bets (think tens of terabytes) without having the weirdest SQL database on earth.
legendary
Activity: 2940
Merit: 1333
Most sites use Node.js for the backend, mainly because it's the easiest (though not always the best) way to build a real-time site that handles a decent number of users.

But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.

Just-Dice was also built in Node.js + MySQL by the way.

Just-Dice uses redis as well as MySQL.
m19
full member
Activity: 186
Merit: 100
Most sites use Node.js for the backend, mainly because it's the easiest (though not always the best) way to build a real-time site that handles a decent number of users.

But my guess is we'll be seeing a lot more Go in the coming years. I built a small dice project in it that handles 10,000 rolls in about 500 ms.

Just-Dice was also built in Node.js + MySQL by the way.
hero member
Activity: 626
Merit: 500
https://satoshibet.com
SatoshiBet.com is coded in node.js.
member
Activity: 67
Merit: 10

Thank you, Stunna, for dropping by. So does it mean that until PD2 a few months ago, PHP + PostgreSQL was doing fine handling the traffic? I'm asking because elsewhere another dice site owner is moving from PHP to Java for better performance. Is PHP less efficient at handling concurrency just because it does not support threading like Java? Then how come Facebook runs primarily on PHP?


It's not that PHP or MySQL is necessarily bad; they can handle traffic just fine under the right conditions. Facebook's use of PHP can kind of be seen as a mistake on their end: they've put crazy amounts of money into compiling it to C++ to avoid the pitfalls of PHP. Node.js offers interesting and easy handling of async. PHP should have no effect on profit...