
Topic: Flexible mining proxy - page 6. (Read 88832 times)

full member
Activity: 182
Merit: 107
June 24, 2011, 04:10:00 PM
Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
Damn, beat me to it, wyze. :)

Yes, if nobody can connect to Multipool then it's likely some communication issue caused by the proxy code.  If wyze figures it out then he'll probably fix it (he has a fork over on GitHub you could use until I merge his fix), and if not, I'll have a look sometime this weekend too.
newbie
Activity: 28
Merit: 0
June 24, 2011, 04:07:31 PM
Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....

I will take a look at this for you over the weekend.
member
Activity: 79
Merit: 14
June 24, 2011, 02:46:54 PM
Has anyone successfully used Multipool as a target pool through the Flexible Mining Proxy? cdhowie, have you tried it?

I've seen at least 2-3 reports in addition to mine claiming it doesn't work, and nobody saying it works for them.

No signup or setup is needed -- just connect to http://multipool.hpc.tw:8337 with a Bitcoin address as your username, and anything for your password.  I'd be interested to hear if multipool is broken for everyone....
full member
Activity: 182
Merit: 107
June 24, 2011, 12:08:54 PM
Installed and it seems to work, though the weighting seems to be quite a bit off (I haven't looked at that portion of the code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

Code should be PLENTY more commented, btw ;)
Probably, particularly index.php.  The rest is fairly well-separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.
full member
Activity: 182
Merit: 107
June 24, 2011, 11:59:49 AM
Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.
I just took a few minutes to knuckle-down and get this done.  I recommend that all users get the latest from master and apply the DB migration script.  Dashboard performance should significantly improve (to sub-second response times).
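If you'd rather create the indexes by hand, they look roughly like this -- a sketch only, with the column lists inferred from the index names and key lengths in the EXPLAIN output further down this page, so treat the migration script itself as the authoritative version:
Code:
-- Sketch: index names come from the EXPLAIN output quoted below;
-- the exact column lists are inferred, not copied from the migration script.
CREATE INDEX worker_time             ON work_data      (worker_id, time);
CREATE INDEX dashboard_status_index  ON submitted_work (worker_id, time);
CREATE INDEX dashboard_status_index2 ON submitted_work (time);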
full member
Activity: 182
Merit: 107
June 24, 2011, 10:59:51 AM
Can this include some actual proxy-type features, i.e. caching?
So that this could be used behind a flaky internet connection to keep miners 100% at work, if the flakiness is in the seconds range?
A non-PHP getwork backend is planned to resolve this and other LP issues.  PHP is not well-suited to this kind of task.

Check out the join that creates the status display.

LOL! Yeah, that would cause some serious issues (the first query in admin/index.php).  The 3rd query is a monstrosity.

Well, there is the problem: using dynamic (on-the-fly created) tables, etc.

These queries are almost like SELECT *; I wonder if they ever hit any indexes ...
On my database, an EXPLAIN against that query shows zero table scans and very little processing.  I spent a lot of time optimizing that query.  Note that a few indexes that are required to prevent table scans are not present in the schema; these were added later and I don't have a database migration script just yet, so it's expected that these queries will run a bit slow unless you've manually created the needed indexes.

Well, there is the problem: using dynamic (on-the-fly created) tables, etc.
This in particular made me lol.  Subqueries can be an effective optimization technique if you know how to use them correctly, and any DBA knows that.  In the "last 10 submissions" cases, MySQL creates a plan that executes the LIMIT after the JOIN, which results in a full table scan of work_data/submitted_work.  Querying those tables in a subquery with LIMIT forces it to execute the limit first, which results in a very fast join.  This was a pain point until I refactored the query to use a subquery to derive the data tables.  Please know WTF you are talking about and use EXPLAIN, kthx.
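To make the trick concrete, here is a sketch using the recent-submissions query quoted later in this thread (an illustration of the technique, not the exact production query):
Code:
-- Naive form: MySQL may run the JOINs first and apply LIMIT last,
-- table-scanning all of submitted_work.
SELECT w.name AS worker, p.name AS pool, sw.result, sw.time
FROM submitted_work sw
JOIN pool p   ON p.id = sw.pool_id
JOIN worker w ON w.id = sw.worker_id
ORDER BY sw.time DESC
LIMIT 10;

-- Subquery form: the derived table forces the LIMIT to run first
-- (10 rows off the time ordering), so the JOINs touch only those 10 rows.
SELECT w.name AS worker, p.name AS pool, sw.result, sw.time
FROM (
    SELECT pool_id, worker_id, result, time
    FROM submitted_work
    ORDER BY time DESC
    LIMIT 10
) sw
JOIN pool p   ON p.id = sw.pool_id
JOIN worker w ON w.id = sw.worker_id
ORDER BY sw.time DESC;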

EDIT: I just checked, and interestingly it doesn't hit the index, which is really weird. Oh well, I'll check why at a better time.
Exactly.  MySQL doesn't use the indexes in this case because it has decided to apply the LIMIT after the joins.  So it does a table scan.  And you don't need indexes to do a table scan, now do you?  Essentially, MySQL's query analyzer sucks, and the subquery is the workaround.

So let's do some investigation:

Code:
mysql> SELECT COUNT(*) FROM work_data;
+----------+
| COUNT(*) |
+----------+
|    76422 |
+----------+
1 row in set (0.01 sec)

mysql> SELECT COUNT(*) FROM submitted_work;
+----------+
| COUNT(*) |
+----------+
|   126715 |
+----------+
1 row in set (0.00 sec)

After executing the dashboard status query:
Code:
3 rows in set (0.11 sec)

EXPLAIN on the dashboard status query:

Code:
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
| id | select_type | table          | type   | possible_keys                                  | key                     | key_len | ref                             | rows | Extra                                        |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+
|  1 | PRIMARY     | w              | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    3 | Using temporary; Using filesort              |
|  1 | PRIMARY     | <derived2>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  1 | PRIMARY     | <derived4>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 |                                              |
|  1 | PRIMARY     | <derived6>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  6 | DERIVED     | sw             | range  | dashboard_status_index2                        | dashboard_status_index2 | 8       | NULL                            |  136 | Using where; Using temporary; Using filesort |
|  4 | DERIVED     | <derived5>     | ALL    | NULL                                           | NULL                    | NULL    | NULL                            |    2 | Using temporary; Using filesort              |
|  4 | DERIVED     | sw             | ref    | dashboard_status_index,dashboard_status_index2 | dashboard_status_index  | 13      | sw2.worker_id,sw2.latest        |    1 |                                              |
|  4 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.sw.pool_id |    1 |                                              |
|  5 | DERIVED     | submitted_work | range  | NULL                                           | dashboard_status_index  | 5       | NULL                            |    3 | Using where; Using index for group-by        |
|  2 | DERIVED     | <derived3>     | system | NULL                                           | NULL                    | NULL    | NULL                            |    1 |                                              |
|  2 | DERIVED     | wd             | ref    | PRIMARY,worker_time                            | worker_time             | 12      | const,const                     |    1 |                                              |
|  2 | DERIVED     | p              | eq_ref | PRIMARY                                        | PRIMARY                 | 4       | bitcoin-mining-proxy.wd.pool_id |    1 |                                              |
|  3 | DERIVED     | work_data      | range  | NULL                                           | worker_time             | 4       | NULL                            |    3 | Using index for group-by                     |
+----+-------------+----------------+--------+------------------------------------------------+-------------------------+---------+---------------------------------+------+----------------------------------------------+

Sorry, the query is fine.  A bit big, but it's attempting to reduce quite a bit of data down to three rows.  So meh.  You do better and I'll take a patch.
full member
Activity: 182
Merit: 100
June 24, 2011, 07:39:52 AM
Why is my proxy all of a sudden not submitting shares?

If I point my miner to the pool directly, all is fine.

EDIT: While I'm here, can somebody change the last submitted/request display to say "X days/hours/seconds ago"?
newbie
Activity: 28
Merit: 0
June 24, 2011, 07:36:05 AM
those are not my queries! :O

I didn't explain the image, lol. The top results were from your optimized version of the first query, and the bottom result was from the query as it is now. It may be displayed differently by phpMyAdmin, but it still runs the same. Like I said, I didn't go past comparing the first query until I got some clarification on whether I was reading the results of EXPLAIN correctly.
sr. member
Activity: 402
Merit: 250
June 24, 2011, 07:13:50 AM
those are not my queries! :O

Here they are:
Code:
$viewdata['recent-submissions'] = db_query($pdo, '
    SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
    FROM submitted_work sw, pool p, worker w
    WHERE p.id = sw.pool_id AND w.id = sw.worker_id
    ORDER BY sw.time DESC
    LIMIT 10
');

$viewdata['recent-failed-submissions'] = db_query($pdo, '
    SELECT w.name AS worker, p.name AS pool, sw.result AS result, sw.time AS time
    FROM submitted_work sw, pool p, worker w
    WHERE sw.result = 0 AND p.id = sw.pool_id AND w.id = sw.worker_id
    ORDER BY sw.time DESC
    LIMIT 10
');


FYI, I have not actually even checked what they do exactly; I just wrote simplified queries.

EDIT: I just checked, and interestingly it doesn't hit the index, which is really weird. Oh well, I'll check why at a better time.
newbie
Activity: 28
Merit: 0
June 24, 2011, 07:09:33 AM
Anyway, the first 2 queries are optimized: one less table to look up, and the indices are actually being hit. Far from completely optimized (there's still a filesort happening!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor to profile correctly, due to the small dataset size, so any measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

Looking at the image below, I fail to see how your 'optimized queries' are better. Maybe I just don't have a full grasp of the EXPLAIN command from phpMyAdmin, but it looks to me as if your query scans all rows in the index, which comes out to 2436 with my current data set. That is way more than what the query was scanning before. Please correct me if I am wrong in reading the output. I only looked at the first query and stopped to get clarification.

http://i.imgur.com/G4bJM.png
sr. member
Activity: 402
Merit: 250
June 24, 2011, 06:41:09 AM
30k and 50k

That might give some hint of the effect, but really we start seeing a difference at around 10x that size ...
full member
Activity: 182
Merit: 100
June 24, 2011, 06:38:52 AM
30k and 50k
sr. member
Activity: 402
Merit: 250
June 24, 2011, 06:28:56 AM
I will test. Got a fix for the 3rd?

Yes, but the approach was wrong, so it might actually perform worse.

Also try these and see how they affect things: http://pastebin.com/kcWN9gPH

How many rows in your submitted_work and work_data tables?
full member
Activity: 182
Merit: 100
June 24, 2011, 06:21:39 AM
I will test. Got a fix for the 3rd?
sr. member
Activity: 402
Merit: 250
June 24, 2011, 05:38:55 AM
Well, I fooled around. According to profiling, my changes are beneficial to the 3rd query, but due to the really tiny sample data set I've got, the actual measurements mean nothing; the time spent might just be all in parsing the query and the optimization engine, not the actual query.

So take even the earlier ones with a grain of salt: I've got no clue of the impact, as I can't actually measure the difference, even though I'm testing on an ancient Athlon XP 1900+ ...

As soon as I've got enough data I will see again how my changed queries affect performance.
sr. member
Activity: 402
Merit: 250
June 24, 2011, 03:07:43 AM
Wow, there are many kinds of bad things going on, code-QA-wise.

This code is what I call confused newbie abstraction.

Using functions to output HTML in a view (which is already outputting HTML)? Yup!

Using an abstraction function to output a simple form image button? Yup!

Doing an insane joined-and-dynamic-tables query for a few fields of simple data? Yup!

Anyway, the first 2 queries are optimized: one less table to look up, and the indices are actually being hit. Far from completely optimized (there's still a filesort happening!), but it should prove to be an order of magnitude faster, DESPITE hitting more rows. I have NO way to test, however, nor to profile correctly, due to the small dataset size, so any measurements would be within the margin of error.

Replace the first 2 queries in admin/index.php with those found at: http://pastebin.com/hwncLV1w

NOTE: I have not done proper testing, but the results seem to be correct :)

EDIT: Looking into the 3rd query now; it's worse than expected. I will rework the complete query and the accompanying view portion. It actually checks all rows multiple times, assuming MySQL realizes how to optimize. It makes 5 dynamic (on-the-fly created) tables, hits *ALL* rows of submitted_work, creates temp tables (for lack of a suitable index), filesorts 3 times, and touches 13 tables total; if I interpret correctly, the total rows to go through is in the range of hundreds of thousands or more with my test data of 2526 submitted shares! :O
sr. member
Activity: 402
Merit: 250
June 24, 2011, 01:53:57 AM
Meh.  Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts.  Bonus: I didn't even have to think of any clever SQL tricks.

I looked at the linked post... There are a bunch of bad things I could say about that cron job ...

Slow, fault-prone gum h4x to fix performance issues? People! This is how we create bloatware!

Fix the problem, not the symptoms!
member
Activity: 79
Merit: 14
June 23, 2011, 10:35:12 PM
In case anyone is interested, I was able to get the mining proxy working on my Dreamhost shared server with a bit of tweaking.  This should be equally applicable to other hosts that may impose similar restrictions.

Since Dreamhost forces PHP-CGI (rather than mod_php) on its shared hosting users, the .htaccess tricks don't work and the PHP_AUTH_USER / PHP_AUTH_PW variables in the scripts are empty.

First, follow all the basic setup instructions in the standard Proxy guide.

Then, you need to make sure that magic_quotes_gpc and allow_url_fopen are set to the defaults that we need; otherwise, I don't think there's anything you can do.  To check, create a file called phpinfo.php in your htdocs with the following code:
Code:
<?php
phpinfo();
?>

Then open it in your browser.

Search for magic_quotes_gpc (it needs to be off) and allow_url_fopen (it needs to be on).  If these settings don't match, getting the proxy working is beyond the scope of this tweak.


If those settings are okay, then we can start with the tweaking.  The way the script is written won't allow you to authenticate, but everything else works fine.  To fix this...

Replace the contents of your .htaccess file with the following:
Code:
Options -Indexes
RewriteEngine on
RewriteRule .* - [env=HTTP_AUTHORIZATION:%{HTTP:Authorization},last]

Then, edit your common.inc.php file to include this code inside the "do_admin_auth()" function:
Code:
if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches))
    {
        // Decode the base64-encoded "user:password" pair; limiting explode
        // to 2 parts keeps passwords that contain ':' intact.
        list($name, $password) = explode(':', base64_decode($matches[1]), 2);
        $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
        $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
    }

This function will now look something like:
Code:
function do_admin_auth() {
    global $BTC_PROXY;
    if (preg_match('/Basic\s+(.*)$/i', $_SERVER['HTTP_AUTHORIZATION'], $matches))
    {
        list($name, $password) = explode(':', base64_decode($matches[1]), 2);
        $_SERVER['PHP_AUTH_USER'] = strip_tags($name);
        $_SERVER['PHP_AUTH_PW'] = strip_tags($password);
    }
    if (!isset($_SERVER['PHP_AUTH_USER'])) {
        auth_fail();
    }

    if (    $_SERVER['PHP_AUTH_USER'] != $BTC_PROXY['admin_user'] ||
            $_SERVER['PHP_AUTH_PW']   != $BTC_PROXY['admin_password']) {
        auth_fail();
    }
}

EDIT: You also need to include this code snippet near the top of index.php.
member
Activity: 79
Merit: 14
June 23, 2011, 10:15:04 PM
Just as another data point: loading Multipool into the proxy doesn't work for me, either.  My miner sits there at 0 MHash/sec.  Mining at Multipool directly works fine for me.

Also, I get weird authentication issues with hashkill.  About half the time it will fail the initial auth; the other half, it authenticates, does a little bit of work, and then claims it had authentication issues:
Code:
[hashkill] Version 0.2.5
[hashkill] Plugin 'bitcoin' loaded successfully
[hashkill] Found GPU device: Advanced Micro Devices, Inc. - Barts
[hashkill] GPU0: AMD Radeon HD 6800 Series [busy:0%] [temp:49C]
[hashkill] Temperature threshold set to 90 degrees C
[hashkill] This plugin supports GPU acceleration.
[hashkill] Initialized hash indexes
[hashkill] Initialized thread mutexes
[hashkill] Spawned worker threads
[hashkill] Successfully connected and authorized at my.bitcoinminingproxy.com:80
[hashkill] Compiling OpenCL kernel source (amd_bitcoin.cl)
[hashkill] Binary size: 349376
[hashkill] Doing BFI_INT magic...

Mining statistics...
Speed: 273 MHash/sec [proc: 4] [subm: 1] [stale: 0] [eff: 25%]     [error] (ocl_bitcoin.c:141) Cannot authenticate!

Phoenix miner works fine for me with the same proxy settings.
kjj
legendary
Activity: 1302
Merit: 1026
June 23, 2011, 08:47:12 PM
Meh.  Took me like 5 minutes to modify the work_data table, add new history tables, write a cron job to rotate the records out, and post my scripts.  Bonus: I didn't even have to think of any clever SQL tricks.
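For the curious, the rotation idea is roughly the following -- a hypothetical sketch, not the actual posted script; the history table name and the one-day cutoff are made up for illustration:
Code:
-- Hypothetical sketch: move old rows into a history table, then purge them.
-- 'work_data_history' and the 1-day cutoff are assumptions, not the real script.
INSERT INTO work_data_history
    SELECT * FROM work_data
    WHERE time < NOW() - INTERVAL 1 DAY;

DELETE FROM work_data
    WHERE time < NOW() - INTERVAL 1 DAY;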