
Topic: Flexible mining proxy - page 7. (Read 88832 times)

sr. member
Activity: 402
Merit: 250
June 23, 2011, 08:21:03 PM
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet).

A quick glance at the DB shows no reason why it should be slow, if the queries match. So those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see immediate reasons why this would be slow, but I have to wait for the data set to increase.

If anyone has got, say, an 8G+ dataset they don't mind sharing, I would be willing to look into it. Or any size with which someone is having serious perf issues.

The code should be PLENTY better commented, btw.

Check out the join that creates the status display.

LOL! Yeah, that would cause some serious issues (first query in admin/index.php); the 3rd query is a monstrosity.

Well, there is the problem: using dynamic (on-the-fly created) tables etc.

These queries are almost like SELECT *; I wonder if they ever hit any indexes ...

In any case I need a bigger data set before I can optimize them properly.

But in the first query, the FROM ( XXXX XXXX ) sw should be changed to just selecting from the table directly, moving the LIMIT 10 to the whole query.
Inner joins -> just select from multiple tables, using the same p.id = sw.pool_id condition in the WHERE clause.

So something like

SELECT w.name AS worker, p.name AS pool, p.id AS poolId, sw.worker_id AS workerId  [AND SO ON FOR ALL FIELDS REQUIRED]
FROM submitted_work sw, pool p, worker w
WHERE p.id = sw.pool_id AND w.id = sw.worker_id
ORDER BY
LIMIT

INNER JOINs are like SELECT *, if I recall right.

The bottom line is that carefully crafted queries can do a scored full-text match on a 100G dataset with multiple text matches, joining multiple tables (not using the JOIN clause, though it's still called joining), on a quad-core Xeon with 16G of RAM (pre-i7 Xeon) in well under 100ms. (The target was 15 searches/sec; the achieved peak was above that, and the real-world bottleneck was actually the PHP parsing that transformed our simplified custom query language into a MySQL query, for easier use by the end users.)
kjj
legendary
Activity: 1302
Merit: 1026
June 23, 2011, 08:10:22 PM
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet).

A quick glance at the DB shows no reason why it should be slow, if the queries match. So those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see immediate reasons why this would be slow, but I have to wait for the data set to increase.

If anyone has got, say, an 8G+ dataset they don't mind sharing, I would be willing to look into it. Or any size with which someone is having serious perf issues.

The code should be PLENTY better commented, btw.

Check out the join that creates the status display.
sr. member
Activity: 402
Merit: 250
June 23, 2011, 08:07:36 PM
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet).

A quick glance at the DB shows no reason why it should be slow, if the queries match. So those who are having speed issues probably just have a bad MySQL config. MySQL scales really efficiently and I don't see immediate reasons why this would be slow, but I have to wait for the data set to increase.

If anyone has got, say, an 8G+ dataset they don't mind sharing, I would be willing to look into it. Or any size with which someone is having serious perf issues.

The code should be PLENTY better commented, btw.
sr. member
Activity: 402
Merit: 250
June 23, 2011, 01:34:46 PM
Can this include some actual proxy kind of features, i.e. caching?
So that this could be used behind a flaky internet connection to keep miners 100% at work, if the flakiness is in the seconds range?

Or does this try to connect to the pool directly at the same instant as the miner connects, etc.?

What I'm wondering is, would this enable me to run miners behind a 3G connection?

I'll try this out soon, and I will then optimize the MySQL DB for you after it starts to slow down (one of my specialties is optimization, esp. MySQL), to increase the performance.
Not sure how well you've done performance-wise, but we can make it work even with TB-sized datasets if you want to. Contact me via PM or freenode #PulsedMedia if you want to chat about it.

I have a mining hosting idea (in the mining hw section) for which using this as an intermediary could be PERFECT. It would just need more stats etc. and eventually tagging, grouping of nodes/GPUs etc. "advanced workflow features", and a RESTful API (or more like my own relaxed version, which is way easier to use than the full REST spec).

Sorry for the basic questions; it's just not yet time for me to research into this, as I've got other dev tasks I need to finish first before I get to play around with this.
newbie
Activity: 18
Merit: 0
June 23, 2011, 11:34:39 AM
Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information.
I misread you the first time -- the rejected-per-hour stats are there, but efficiency is not.  How do you define efficiency, and how would you compute it?

Based on context, I think he's referring to a % good/stale shares stat.  So (#good shares) / (#good + #stale shares).  Any better efficiency metric would require you to load the current difficulty...
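As a minimal sketch of that metric (the function name is mine, hypothetical, not anything from the proxy code):

```python
def efficiency(good_shares: int, stale_shares: int) -> float:
    """Share efficiency as the fraction of accepted shares.

    efficiency = good / (good + stale); returns 0.0 when no shares yet.
    """
    total = good_shares + stale_shares
    return good_shares / total if total else 0.0

# Example: 950 accepted shares and 50 stale ones -> 0.95 (95% efficiency).
print(efficiency(950, 50))
```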
full member
Activity: 182
Merit: 107
June 23, 2011, 10:33:50 AM
the Dashboard is getting slower and slower with the whole data.
There are some indexes missing from the schema.  I need to add those and provide a migration script for older installs, but this will take a bit of work and I haven't finished it yet.  Once this is done, running the upgrade script will create the indexes and the dashboard will load much faster.

Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" every 24h?
I thought about dropping the data from hour 0 -> 23 every 24h ... so I keep some stats (like the hashrate of the last hour).
See the "Database maintenance" section of the readme.
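For the quick-and-dirty cron-job route, a pruning query along these lines is one option. This is only a sketch, shown against SQLite as a stand-in for the proxy's MySQL database, and the "time" column name is my assumption, so check the actual schema (and the readme's "Database maintenance" section) first:

```python
import sqlite3
import time

# Stand-in demo of a 24h pruning job on a submitted_work-style table.
# Column names here are assumptions, not the proxy's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submitted_work (id INTEGER PRIMARY KEY, time INTEGER)")

now = int(time.time())
rows = [(now - 90000,), (now - 3600,), (now,)]  # 25h, 1h, and 0h old
conn.executemany("INSERT INTO submitted_work (time) VALUES (?)", rows)

# Keep only the last 24 hours (86400 seconds):
conn.execute("DELETE FROM submitted_work WHERE time < ?", (now - 86400,))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM submitted_work").fetchone()[0])  # 2
```

The same DELETE ... WHERE time < cutoff shape works in MySQL from a cron job, and it preserves the last hour's stats rather than truncating everything.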
full member
Activity: 182
Merit: 107
June 23, 2011, 10:29:16 AM
Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information.
I misread you the first time -- the rejected-per-hour stats are there, but efficiency is not.  How do you define efficiency, and how would you compute it?
newbie
Activity: 28
Merit: 0
June 23, 2011, 09:48:44 AM
Hello,

the Dashboard is getting slower and slower with the whole data.

Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" every 24h?
I thought about dropping the data from hour 0 -> 23 every 24h ... so I keep some stats (like the hashrate of the last hour).

If I get my cronjob working ... I'll let you know ... if somebody already has one, please let me know.

cheers

How big is your database? Mine is at 19MB and my dashboard still loads in under a second. I have about 4-5 days worth of data in there. This is the same speed I get when accessing the dashboard from work as well, while it is hosted on a little web server in my basement.
full member
Activity: 133
Merit: 100
June 23, 2011, 08:51:22 AM
Hello,

the Dashboard is getting slower and slower with the whole data.

Does somebody have a quick and dirty solution to erase the "submitted_work" and the "work_data" every 24h?
I thought about dropping the data from hour 0 -> 23 every 24h ... so I keep some stats (like the hashrate of the last hour).

If I get my cronjob working ... I'll let you know ... if somebody already has one, please let me know.

cheers
full member
Activity: 182
Merit: 107
June 23, 2011, 08:40:13 AM
My client tried to connect twice. The proxy tried to connect a number of times and received several responses, but forwarded none on to my miner.
If you can save a pcap file and email it to me, maybe I could have a look and see what's going on.

I wonder if there is an issue here: this server has two IPs on the same subnet. Is there a way to prioritize your proxy to choose one of them exclusively? I'm afraid it's expecting an answer on eth0 but receiving one on eth1 and not properly handling it.
The kernel should take care of routing stuff correctly.  If traffic is coming back on the wrong interface, that would be a problem with an upstream router and not any particular configuration on your box.  If you can load other websites fine then the network config shouldn't be interfering.
full member
Activity: 210
Merit: 105
June 22, 2011, 06:22:14 PM
I'm getting the response
Code:
{"error":"No enabled pools responded to the work request.","result":null,"id":1}
with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second.

How can I troubleshoot this?
If you can, run a sniffer on the web server and see if it even tries to connect.  If it does, see if you can diagnose the problem from the content of the HTTP conversation.

If it does not try to connect:

  • Verify that you have pools assigned to the worker you are using.
  • Verify that the php allow_url_fopen configuration flag is set to On.
My client tried to connect twice. The proxy tried to connect a number of times and received several responses, but forwarded none on to my miner.

I wonder if there is an issue here: this server has two IPs on the same subnet. Is there a way to prioritize your proxy to choose one of them exclusively? I'm afraid it's expecting an answer on eth0 but receiving one on eth1 and not properly handling it.
full member
Activity: 182
Merit: 107
June 22, 2011, 05:17:37 PM
Do you want patches?  I'm testing putting 50 miners through this proxy and was running into stability issues.  I have mysql on a separate machine and am used to that being a bottleneck, so I looked into making it use persistent connections.  I am not familiar with the PDO extension, but found the instantiator in common.inc.php on line 31; I added array(PDO::ATTR_PERSISTENT => true) to the new call and this took care of my problems.

I now have 32 connections from the proxy web server staying open (which is what I want).
Sounds like a good idea.  I'll test this on my setup and commit if there are no issues.

There should be a way to copy worker/pool configs

So instead of setting up 10x pools on 10x workers, you could set it up once, copy it to the rest, then go in and change the logins as necessary. Or there could be a way to use a variable in the config so you build X workers with the suffix X changed on each one.
I might add one or more of these ideas in the future.  Right now I'm trying to keep the number of features low and work on the ones that are having trouble.

Problem 3 was solved by adding 'http://' to the pool's url. Won't work without.
Yup.  I'll add something like this to the readme, and maybe add some validation too so that it will bail unless it sees a scheme in the URL.

Really wish there was a way to monitor anything on solo mining besides getwork w/ the proxy
I've no clue what you're asking for here.

What sort of values can i set for 'average_interval'? and what effects will it have?
When reporting on the number of shares submitted and the miner speed, this is the window of time it will look back to gather data.  If you set it longer, the query will take longer and will generally even out more; if you set it shorter, the query will run faster but will result in wild fluctuations of the worker speed (for example) as the worker has lucky and unlucky periods.

It is purely used for reports; it will have no effect on mining.
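In other words, average_interval is just a lookback window over the submitted shares. Roughly like this (the function and variable names are mine, hypothetical, not the proxy's):

```python
import time

def shares_in_window(share_times, average_interval):
    """Count shares whose timestamp falls inside the lookback window.

    share_times: share submission timestamps in seconds since the epoch.
    average_interval: window length in seconds (larger = smoother reports).
    """
    cutoff = time.time() - average_interval
    return sum(1 for t in share_times if t >= cutoff)

now = time.time()
# Three shares in the last 10 minutes, plus one older share.
times = [now - 30, now - 200, now - 550, now - 4000]
print(shares_in_window(times, 600))  # shares counted by a 600s window
```

A longer window scans more rows (slower query, smoother speed estimate); a shorter one scans fewer rows but swings with the worker's lucky and unlucky periods.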

Also, please add rejected per hour and efficiency.
Another user has a nearly-ready patch that adds a lot of this information.

Connections to the proxy started timing out; restarting Apache fixed it, but I didn't trust it enough to keep going.

This was on a VM w/ 1GB of RAM.

What kind of hardware are you guys running on?
I'm running with 768MB, most of which is in use by a Minecraft server...

Try increasing the number of worker children in your Apache configuration.  With more miners, you need more workers to keep up with the demand.  If Apache hits its limit then it will simply stop responding to requests.

I'm getting the response
Code:
{"error":"No enabled pools responded to the work request.","result":null,"id":1}
with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second.

How can I troubleshoot this?
If you can, run a sniffer on the web server and see if it even tries to connect.  If it does, see if you can diagnose the problem from the content of the HTTP conversation.

If it does not try to connect:

  • Verify that you have pools assigned to the worker you are using.
  • Verify that the php allow_url_fopen configuration flag is set to On.

Unfortunately, I get extreme stale rates (>10%) when using the proxy on my local mining-rig. But I'm behind a firewall - could it be that the Mining Proxy needs to be listening on port 80 from the "outside" to be able to receive LP-queries?
No, the proxy needs no ports open except for the miners themselves.  It may be worth running a packet sniffer and seeing if the LP requests are actually making it through.  If you are not running on Apache, some config tweaks may be necessary to get LP to work at all.
full member
Activity: 182
Merit: 100
June 22, 2011, 09:25:26 AM
I randomly get this using DiabloMiner. Hashkill will just crash. Phoenix seems to be 100% stable and never crashes, but I don't doubt it runs into the same problem behind the scenes.

Code:
[23/06/11 12:20:47 AM] DEBUG: Attempt 26 found on Juniper (#1)
[23/06/11 12:20:47 AM] Accepted block 26 found on Juniper (#1)
[23/06/11 12:21:08 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:09 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:09 AM] ERROR: Cannot connect to Bitcoin: Bitcoin returned error message: No enabled pools responded to the work request.
[23/06/11 12:21:17 AM] DEBUG: Attempt 27 found on Juniper (#1)
[23/06/11 12:21:23 AM] Rejected block 1 found on Juniper (#1)
[23/06/11 12:21:46 AM] DEBUG: Attempt 28 found on Juniper (#1)
[23/06/11 12:21:46 AM] Rejected block 2 found on Juniper (#1)
[23/06/11 12:22:43 AM] DEBUG: Attempt 29 found on Juniper (#1)
[23/06/11 12:22:43 AM] Accepted block 27 found on Juniper (#1)

192.168.1.10 - beasty [23/Jun/2011:00:20:13 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:20:47 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 372 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 373 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:06 +1000] "POST / HTTP/1.1" 200 373 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:17 +1000] "POST / HTTP/1.1" 200 395 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:21:46 +1000] "POST / HTTP/1.1" 200 395 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:08 +1000] "POST / HTTP/1.1" 200 951 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:22:43 +1000] "POST / HTTP/1.1" 200 394 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 951 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
192.168.1.10 - beasty [23/Jun/2011:00:23:09 +1000] "POST / HTTP/1.1" 200 952 "-" "Java/1.6.0_18"
full member
Activity: 126
Merit: 100
June 22, 2011, 08:25:00 AM
Unfortunately, I get extreme stale rates (>10%) when using the proxy on my local mining-rig. But I'm behind a firewall - could it be that the Mining Proxy needs to be listening on port 80 from the "outside" to be able to receive LP-queries?
full member
Activity: 210
Merit: 105
June 21, 2011, 10:03:05 PM
I'm getting the response
Code:
{"error":"No enabled pools responded to the work request.","result":null,"id":1}
with Multipool. I just verified -- Multipool is up. In fact, when I make the exact same work request which I made to the proxy (with updated credentials) to multipool.hpc.tw, I get a response in under a second.

How can I troubleshoot this?
sr. member
Activity: 520
Merit: 253
555
June 21, 2011, 10:26:13 AM
Has anyone got this working with Lighttpd (under Gentoo)? I am trying to rule out wider system-level errors before digging deeper into this code. Basically, my clients (Phoenix) are connecting to the proxy, but they are not getting any work.

I got exactly the same error with Apache, and I even did some debugging via tcpdump. It seemed like the proxy was not even asking for any work from the pools. The solution turned out to be simple: I had to set

allow_url_fopen = On

in php.ini.

So, lighttpd (which uses PHP via FastCGI) seems to work. As I just got it working, I have no long-term data, which might be important for long polling, but so far things are OK.
newbie
Activity: 12
Merit: 0
June 19, 2011, 07:53:25 AM
Every time the client requests new work from the proxy, so every 40-50 seconds, this will happen:

The proxy asks the first pool in its priority list for new work. If that pool does not respond within X seconds, the proxy moves on and asks the next pool in its priority list.
This repeats until a pool responds and delivers new work which the proxy can relay to the client, or until the end of the priority list is reached.
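That failover loop can be sketched like so (the names and the 3-second timeout are illustrative only; the real proxy does this over HTTP getwork requests):

```python
def get_work(pools, fetch, timeout=3):
    """Try each pool in priority order; return the first work response.

    pools: pool identifiers, highest priority first.
    fetch: callable(pool, timeout) that returns work, or raises on
           failure/timeout (stand-in for the proxy's HTTP getwork call).
    """
    for pool in pools:
        try:
            return fetch(pool, timeout)
        except Exception:
            continue  # pool down or too slow -- fall through to the next one
    # End of the priority list reached: this is the
    # "No enabled pools responded to the work request." case.
    return None

# Example: the first pool is down, the second answers.
def fake_fetch(pool, timeout):
    if pool == "pool-a":
        raise TimeoutError("no response")
    return {"pool": pool, "work": "deadbeef"}

print(get_work(["pool-a", "pool-b"], fake_fetch))  # work from pool-b
```

Note this sketch restarts from the top of the list on every request, which matches the behavior described above: once the highest-priority pool is healthy again, it gets asked first on the next getwork.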
full member
Activity: 182
Merit: 100
June 19, 2011, 07:31:02 AM
So when does it switch back to the highest priority?
newbie
Activity: 12
Merit: 0
June 19, 2011, 07:29:35 AM
EDIT: Once a pool goes down, does it auto switch as a fail over and then switch back once it's back up?
EDIT: How do you test if the pool is down? Mine seems to be constantly cycling through the pools even though they seem to be ok.
If a pool does not respond within a given time, which is about 3 seconds, the proxy will continue with the next pool in the priority list.
full member
Activity: 182
Merit: 100
June 17, 2011, 09:10:38 PM
Is it safe to drop the submitted work data? I want to reset my stats so I can use it for benchmarking.

Also what is average_interval?

EDIT: Once a pool goes down, does it auto switch as a fail over and then switch back once it's back up?
EDIT: How do you test if the pool is down? Mine seems to be constantly cycling through the pools even though they seem to be ok.