
Topic: Flexible mining proxy - page 5. (Read 88812 times)

full member
Activity: 182
Merit: 100
June 28, 2011, 10:50:56 PM
Diablo doesn't work at all; after a while the proxy stops accepting shares. Rejected doesn't even show up.
hero member
Activity: 737
Merit: 500
June 28, 2011, 03:10:23 PM
DiabloMiner does its time increment thing with or without the X-Roll-NTime header in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.).  So DiabloMiner is going to submit data that won't be found with or without that header at least some of the time.
I can't even find a reference to X-Roll-NTime in the DiabloMiner sources.  As long as it only does this when otherwise idle, it's as though the pool didn't support the feature anyway.  So the effect will be the same as if DiabloMiner didn't do this at all, since all of those shares will be rejected.  Therefore this is effectively a feature request against the proxy and not a bug.

As far as I know, poclbm is the only miner that supports X-Roll-NTime.

I think the reason people complain about Diablo is that it makes the Diablo output look wrong (it shows a massive number of rejected shares).

That said, you are right that it should only happen in cases when it wouldn't have been productive anyway, so no harm done.
full member
Activity: 182
Merit: 107
June 28, 2011, 03:04:29 PM
DiabloMiner does its time increment thing with or without the X-Roll-NTime header in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.).  So DiabloMiner is going to submit data that won't be found with or without that header at least some of the time.
I can't even find a reference to X-Roll-NTime in the DiabloMiner sources.  As long as it only does this when otherwise idle, it's as though the pool didn't support the feature anyway.  So the effect will be the same as if DiabloMiner didn't do this at all, since all of those shares will be rejected.  Therefore this is effectively a feature request against the proxy and not a bug.
hero member
Activity: 737
Merit: 500
June 28, 2011, 02:46:24 PM
Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, it should think that the pool doesn't support it.

DiabloMiner does its time increment thing with or without the X-Roll-NTime header in any circumstance where it would otherwise have to be idle (getworks not returning fast enough, getworks erroring out, etc.).  So DiabloMiner is going to submit data that won't be found with or without that header at least some of the time.
full member
Activity: 182
Merit: 107
June 28, 2011, 02:44:17 PM
Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.
I'll add support for this at some point.  Right now I'm trying to get the new C# getwork backend finished up.

Note that as long as X-Roll-NTime is sent as an HTTP header, the proxy should still work with DiabloMiner; since the proxy will not forward this HTTP header on to the miner, it should think that the pool doesn't support it.  If DiabloMiner is assuming that the pool supports it (I can't find a reference to X-Roll-NTime in the DiabloMiner sources) then, well, that's a DiabloMiner bug.
hero member
Activity: 737
Merit: 500
June 28, 2011, 11:45:36 AM
Hey, guys.  I am not sure if the bitcoin mining proxy has support yet for X-Roll-NTime and for clients that increment the time header (DiabloMiner), but I added it to my ASP.NET implementation of the proxy last night and wanted to share what I had to do to make it work, so that you guys can add it to the original PHP implementation if you like.

If you are not aware, the X-Roll-NTime header is a header that some pools may return to indicate to clients that they are allowed to just increment the time header when they have exhausted the nonce space in the getwork they have.  This allows poclbm, for example, to do only one getwork request per minute instead of one every 5-10 seconds or so on fast GPUs.  This is not only better for the pool because it has to respond to fewer getwork requests, but it also appears to dramatically reduce the occurrence of "the miner is idle", because the miner can always keep mining by just incrementing the time header while waiting for a new getwork.
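
To make that concrete, here is a minimal PHP sketch of what "rolling" the time means (illustrative only -- this is not code from the proxy or from any miner, and byte-order details are glossed over):

Code:
// Bump the timestamp word (hex chars 136-143 of the data string, i.e. the
// second-to-last of the first 19 32-bit words) so the miner gets a fresh
// search space without asking the pool for a new getwork.
function roll_ntime($data)
{
    $ntime = hexdec(substr($data, 136, 8));
    return substr($data, 0, 136)
         . sprintf('%08x', $ntime + 1)
         . substr($data, 144);
}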

Not all pools include this header, so the first change I made was to look for it when proxying getwork (both an actual getwork and a submitted share) and when proxying a long poll.  I then made sure to pass the header through to the actual mining client when it was present.
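
Roughly, the pass-through looks like this (an illustrative PHP sketch only; the function and variable names are made up, not the proxy's actual ones):

Code:
// $pool_headers is assumed to be an array of raw response header lines
// captured from the pool's getwork / long-poll response.
function forward_roll_ntime(array $pool_headers)
{
    foreach ($pool_headers as $line) {
        if (stripos($line, 'X-Roll-NTime:') === 0) {
            header(trim($line));  // repeat the header verbatim to the miner
            return true;
        }
    }
    return false;  // pool did not offer ntime rolling
}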

The second, related situation is that some mining clients (DiabloMiner) will now choose to increment the time header even when this header is not present, in circumstances where the miner would otherwise be idle.  They are doing this not as a mechanism to reduce the need for frequent getworks, but as a means of continuing to look for hashes even in the presence of network problems that are interfering with getwork requests (either erroring them out or just slowing them down).

In both cases (X-Roll-NTime and DiabloMiner's new algorithm), what will happen is that a submitted share will come in with data that is not exactly the same as the data we returned in the getwork, even when comparing only the first 152 characters of the hex data string.  This means that we won't find their submitted data in the work_data table and won't be able to determine which pool to submit to.

I fixed this by changing what data is stored in the work_data table, stripping out the integer (8 hex characters) that represents the changing timestamp.  I don't claim to understand what all 19 integers (152 characters of hex) of the data we are saving mean, but I determined experimentally that it was the second-to-last integer (8 hex characters) that was changing when the miners were incrementing the time header.

So, my code looks like this for pruning both the data returned from a getwork request to a pool and the data submitted by the client during a submit.  Note, this is C#, but you can probably tell what the equivalent PHP code would look like:

Code:
string fullDataString = (string)parameters[0];
string data;
if (fullDataString.Length > 152)
{
    data = fullDataString.Substring(0, 136) + fullDataString.Substring(144, 8);
}
else
{
    data = fullDataString;
}
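
For reference, a rough PHP equivalent of the pruning above might look like this (just a sketch, assuming the data string is already available as plain hex):

Code:
// Keep the first 136 hex chars, skip the 8-char timestamp word, and keep the
// 8 chars after it, so rolled-ntime submits still match what we stored.
$fullDataString = $parameters[0];
if (strlen($fullDataString) > 152) {
    $data = substr($fullDataString, 0, 136) . substr($fullDataString, 144, 8);
} else {
    $data = $fullDataString;
}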

With this change (and with the change to correctly proxy the X-Roll-NTime header), my proxy now successfully proxies submits where the time header has been incremented.

I hope this is useful to you.

Update: See this thread for context on the DiabloMiner change:  http://forum.bitcoin.org/index.php?topic=1721.msg290512#msg290512
newbie
Activity: 11
Merit: 0
June 27, 2011, 09:50:30 AM
Just remembered another thing. Even when I'm not using the Phoenix miner, I seem to get a quite high rejected-share rate when checking the mining proxy database. When I check the mining pool dashboard, though, it is all fine. So I guess this proxy currently also marks certain bad connection issues, or something similar, as rejected shares when they really are not?

But yeah, PulsedMedia... I wouldn't be surprised if your issues are partly caused by Phoenix. I'm going to test-run poclbm on Windows on my machines today; so far it seems very stable with this proxy when running rpcminer clients.
sr. member
Activity: 402
Merit: 250
June 27, 2011, 09:11:33 AM
I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: Still getting a quite fast load on the dashboard with this dataset. In fact, most of the time it's in the ms range; at slowest, nearly a second. At what point do people start to experience serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow more slowly than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues sooner.)

Nope, not using the latest, but at least the first two queries are mine, and my added indexes are in place.

Haven't yet checked what the diffs are.

I'm running about 1.1 GHash against this, and I noticed a severe drop in earnings too, due to the more frequent Phoenix crashes.

Will need to put aside two identical rigs to properly test whether there is a difference.
My rejected rate is also "through the roof", at about 7% via this proxy! But again, I need the testing setup of two identical rigs to verify.

Need to go buy more 6950s or 6770s to set up two identical rigs ;)
newbie
Activity: 11
Merit: 0
June 27, 2011, 08:37:36 AM
I have Phoenix crashing issues too, but I'm not really certain whether it is this mining proxy causing them. The problem went away when I switched over to another miner (rpcminer). There are some known problems with Phoenix. Then again, without the proxy Phoenix doesn't crash as often, so I dunno.

I guess the truth is somewhere in the middle, but since using other miners with this proxy works, I don't see this as a problem.
full member
Activity: 182
Merit: 107
June 27, 2011, 07:44:08 AM
I'll reply to the rest of the post shortly, but wanted to answer this question before I leave for work:

EDIT: Still getting a quite fast load on the dashboard with this dataset. In fact, most of the time it's in the ms range; at slowest, nearly a second. At what point do people start to experience serious performance degradation all the time?
If you've fetched the latest code and applied the database migration script, then you shouldn't be seeing any degradation anymore.  If you didn't, then you would certainly see horrible dashboard performance after a while, as there were some missing indexes that would cause table scans against submitted_work during the dashboard query.  (As submitted_work tends to grow more slowly than work_data -- but proportionally to it -- you'll need a fast miner, or a bunch of slow ones, to start seeing the performance issues sooner.)
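
To illustrate the general idea only (the column names and connection details below are guesses, not the actual schema or migration script):

Code:
// Hypothetical illustration of the kind of index that lets the dashboard
// query seek into submitted_work instead of scanning the whole table.
// Column names and credentials are assumptions, not the real ones.
$db = new PDO('mysql:host=localhost;dbname=mining_proxy', 'user', 'password');
$db->exec('CREATE INDEX idx_submitted_work_time ON submitted_work (time)');
$db->exec('CREATE INDEX idx_submitted_work_pool ON submitted_work (pool_id, time)');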
sr. member
Activity: 402
Merit: 250
June 27, 2011, 04:43:08 AM
Installed and seems to work. Though weighting seems to be quite a bit off (haven't looked at that portion of the code yet).
There is no such thing as "weighting."  Did you read the readme?  Particularly the section on how priority works?

Code should be PLENTY more commented btw ;)
Probably, particularly index.php.  The rest is fairly well-separated and should be readable as-is.  My philosophy on comments is that if you have to comment a lot, your code doesn't explain itself well enough and should be refactored.

IMHO, code should be commented nevertheless; it saves time reading. E.g. doc blocks to explain what a method is for. Later on this saves time, because you can just generate code documentation to refer to. The larger the project, the more important this is. Though this might be a bit small a project for that level of documentation, a short comment here and there is still a good idea.

If you check the commenting rules or philosophies of some major projects, like the Linux kernel, they suggest that one comment per 10 lines is a good level to aim for.

As for the queries, if you can avoid tables created on the fly, that's better. Indeed, I did not even check how my queries did, as I was expecting them to work as expected.

I spent some time working on the third one, and in under an hour I managed to reduce the amount of work: fewer subqueries and no tables created on the fly. No temporaries, no filesorts, etc. But on the testing machine I'm using (an Athlon XP 1900+, so REALLY slow) I couldn't verify the results, as the times fell easily within the margin of error; it might just be that parsing the query and deciding indices takes that long on such a slow CPU with the tiny dataset I had at the time. It was faster without the MHash rate and slower with the MHash rate added (comparing oranges to oranges, so I compared to yours without the MHash rate, and with it). Now I have a larger dataset, so when I have time I'll check it out again.
Though I might begin from scratch, as the approach is wrong.

Now I have 50k rows in the table, but that's still too small; I want to get to at least 150k+ to be able to go through it properly, even on this slow machine.

Why are you using the echo_html function in the view templates? Have you looked into Smarty, btw? It's very easy to implement, and it makes views much simpler to read (just view them as HTML, and most highlighters handle it perfectly). And to those saying it forces bad code behaviour: it's all about how you use it. It doesn't force anything; it simply compiles the views you ask for, actually provides an EXCELLENT "isolation layer" between the different layers of code, and eases debugging quite a bit, saving tons of time. The overhead is extremely minimal too, and the first time you even want to think about it is when you are hitting high requests per second.
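
For anyone unfamiliar, a minimal Smarty example looks roughly like this (file and variable names made up; this is not code from this proxy):

Code:
require_once 'Smarty.class.php';

$smarty = new Smarty();
$pools = array('BTC Guild', 'Deepbit', 'Bitcoin.cz');
$smarty->assign('pools', $pools);   // hand the view its data
$smarty->display('dashboard.tpl');  // render the dashboard template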

Also, I noticed some core functionality issues with this: Phoenix started crashing now and then, about once a day per instance. I'm using BTCGuild, Bitcoin.cz and Deepbit, and have two workers set up for it; one works with two discrete GPUs, the other with just one. Sometimes it will just not pass new work, and sometimes it pauses work for a while. Is it only me? I'm using LinuxCoin v0.2a and its accompanying Phoenix.

EDIT: Still getting a quite fast load on the dashboard with this dataset. In fact, most of the time it's in the ms range; at slowest, nearly a second. At what point do people start to experience serious performance degradation all the time?
legendary
Activity: 1855
Merit: 1016
June 27, 2011, 02:05:33 AM
you can delete your own post.

The option to delete posts was there before; now it has been removed by admins/mods.
It used to be QUOTE EDIT DELETE.
Now it's only QUOTE EDIT.
full member
Activity: 182
Merit: 107
June 26, 2011, 07:38:02 PM
.

Just wanted to point out that this was an empty reply :)
Yes, I wrote a reply and then realized that my reply was incorrect.  And since I can't delete my own posts, I just edited it to be empty.  :)
member
Activity: 79
Merit: 14
June 26, 2011, 04:55:22 PM
.

Just wanted to point out that this was an empty reply :)
newbie
Activity: 28
Merit: 0
June 26, 2011, 10:00:47 AM
Since the recent dashboard changes, the recent rejected list is not working properly; it seems to be missing a lot of records.

No code was changed that would affect this. I simply removed the Result column, as it was not really needed since they were all rejected anyway. :)

feature request:
  • bulk pool priority change
  • drag and release to change priority of pools

These look pretty good. Please open an issue here with some more detail and we can get that labeled properly for you.

For the stale rate, wouldn't it make more sense if it were daily or even lifetime instead of the default 1h interval?
But for the submitted share alert, I would probably like 5 min, to check whether one card is down.
For hashing speed, 15 min will be fine.

TL;DR: I recommend separate interval settings for the submitted share alert, hashing speed and stale rate.

You can always open an issue here for the recommended changes. :)
full member
Activity: 182
Merit: 100
June 26, 2011, 03:34:45 AM
Since the recent dashboard changes, the recent rejected list is not working properly; it seems to be missing a lot of records.


This will display "5 seconds/minutes/days/weeks/years ago" instead of a full date/time string. Unfortunately the database doesn't store milliseconds.



Code: (common.inc.php)
function human_time($difference)
{
    $postfix = array("second", "minute", "hour", "day", "week", "month", "year");
    $lengths = array(60, 60, 24, 7, 4.3452380952380952380952380952381, 12);

    for($i = 0; $i < count($lengths) && $difference >= $lengths[$i]; $i++)
        $difference /= $lengths[$i];

    $difference = round($difference);

    if($difference != 1)
        $postfix[$i] .= "s";

    return "$difference $postfix[$i] ago";
}

function format_date($date)
{
        global $BTC_PROXY;
        $obj = new DateTime($date, new DateTimeZone('UTC'));
        $obj->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));

        if($BTC_PROXY['date_format'] != "human")
                return $obj->format($BTC_PROXY['date_format']);
        else
        {
                $now = new DateTime("now", new DateTimeZone('UTC'));
                $now->setTimezone(new DateTimeZone($BTC_PROXY['timezone']));
                $timespan = $now->getTimestamp() - $obj->getTimestamp();
                return human_time($timespan);
        }
}

Code: (config.inc.php)
# Custom php date format or "human" for "x days/minutes/seconds ago"
'date_format'           => 'Y-m-d H:i:s T',
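
For example (hypothetical timestamps and config, assuming the functions above are loaded):

Code:
echo format_date('2011-06-26 01:23:45');
// => e.g. "2 hours ago"           when 'date_format' is 'human'
// => "2011-06-26 01:23:45 UTC"    with 'Y-m-d H:i:s T' and a UTC timezone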
full member
Activity: 182
Merit: 100
June 25, 2011, 11:56:38 PM
Well, I think I found the bug, cdhowie; I sent you a packet dump yesterday.

http://forum.bitcoin.org/index.php?topic=1721.msg285078#msg285078


lol, the pool went down at the time of the update, what a coincidence...

Can't be me... 90% of shares aren't being submitted?

Bug?

Miner 1
mhash 177.7/178.9 | a/r/hwe: 1/0/0 | ghash: 110.2 | fps: 30.0

Miner 2
mhash 819.5/826.6 | a/r/hwe: 0/3/0 | ghash: 114.0 113.3 112.3 | fps: 30.2

Quote
[24/06/11 8:49:58 PM] DEBUG: Attempt 77 found on Cypress (#3)
[24/06/11 8:50:02 PM] DEBUG: Attempt 78 found on Cypress (#2)
[24/06/11 8:50:07 PM] DEBUG: Attempt 79 found on Cypress (#2)
[24/06/11 8:50:12 PM] DEBUG: Attempt 80 found on Cypress (#3)
[24/06/11 8:50:13 PM] DEBUG: Attempt 81 found on Cypress (#2)
[24/06/11 8:50:14 PM] DEBUG: Attempt 82 found on Cypress (#3)
[24/06/11 8:50:15 PM] DEBUG: Attempt 83 found on Cypress (#2)
[24/06/11 8:50:18 PM] DEBUG: Attempt 84 found on Cypress (#3)
[24/06/11 8:50:21 PM] DEBUG: Attempt 85 found on Cypress (#1)
[24/06/11 8:50:23 PM] DEBUG: Attempt 86 found on Cypress (#2)
[24/06/11 8:50:33 PM] DEBUG: Attempt 87 found on Cypress (#2)

Both updated to the latest git.

Just updated the Windows machine.

Quote
[24/06/11 8:56:17 PM] DEBUG: Attempt 3 found on Cayman (#2)
[24/06/11 8:56:18 PM] DEBUG: Attempt 4 found on Cayman (#2)
[24/06/11 8:56:20 PM] DEBUG: Attempt 5 found on Cayman (#2)
[24/06/11 8:56:26 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:31 PM] DEBUG: Forcing getwork update due to nonce saturation
[24/06/11 8:56:32 PM] DEBUG: Attempt 6 found on Cayman (#2)
[24/06/11 8:56:32 PM] DEBUG: Attempt 7 found on Cayman (#2)
[24/06/11 8:56:34 PM] DEBUG: Attempt 8 found on Cayman (#2)
[24/06/11 8:56:38 PM] DEBUG: Attempt 9 found on Cayman (#2)

mhash 364.5/362.8 | a/r/hwe: 0/1/0 | ghash: 30.1 | fps: 30.4

Nothing is being submitted?

EDIT: Now how do I go back to the old version?

Nope, it's DiabloMiner and/or the flexible proxy (though I never touched the proxy). It will find a few results, submit one or two and say they were accepted. After that it says "Attempt found" but never submits?

Phoenix works 100% rock solid

The proxy probably does not correctly support things DiabloMiner does, such as time incrementing and returning multiple nonces for the same getwork over short periods. It looks like the sendwork thread is being choked by the proxy.

So, clearly, it's a proxy bug.

Edit: IIRC I get the same behaviour with hashkill.
full member
Activity: 182
Merit: 107
June 25, 2011, 03:20:54 PM
.
newbie
Activity: 28
Merit: 0
June 25, 2011, 01:25:10 AM
It looks like when we do a getwork request to MultiPool, they are also returning extra headers after the JSON string. I am not sure if they are checking the user agent or what on their side. I have a workaround so that the data makes it into the work_data table. Now, it still looks like there is a problem submitting the data. I will look into this later in the day (Saturday), after I get some sleep. I'm thinking it is going to be along the same lines: after we submit the work, MultiPool returns JSON + headers.
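
To illustrate the kind of workaround I mean (a sketch only, not the actual fix; the function name is made up):

Code:
// MultiPool appears to append extra header-ish lines after the JSON body,
// so trim everything past the final closing brace before json_decode().
function strip_trailing_junk($body)
{
    $end = strrpos($body, '}');
    return ($end === false) ? $body : substr($body, 0, $end + 1);
}

// $raw_body: raw response text received from the pool
$response = json_decode(strip_trailing_junk($raw_body), true);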

EDIT: I have a workable solution. I was able to successfully submit work with the proxy to MultiPool.

I will discuss the fix further with cdhowie in the morning and see how he wants to go about implementing it.
member
Activity: 79
Merit: 14
June 24, 2011, 11:53:24 PM
Re: the multipool issue, I added some debugging dumps to the place_json_call function.  

Slush's response to a getwork request looks like:
Code:
 {"id": "1", "result": {"hash1": "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000", "data": "00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d2000000005fc23ab758404b1f6067f111978fcbcc1d4a227a377315ec590f71ed04bc0d8a4e0567941a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000", "midstate": "bd4be1f1643712bd150248e7e2d9ace616611d3b9e8ea76b6b76a0180f6b00ce", "target": "ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000"}, "error": null}

Whereas Multipool's looks like:
Code:
{"error":null,"id":"json","result":{"target":"ffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000","midstate":"6dfada9f763c6ae458d123a4a9e71a56bf5fc65946d7b40c8b679e865d7ebad6","data":"00000001575fd9ef5901da864fff435588650d21f8619fa33cb3510f000006d200000000a006b13a7db011c6779926e01ec4f67bc3246bc44419c1b4d204c0650be396a64e0567021a0c2a1200000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000","hash1":"00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000"}}

I haven't investigated enough to determine if this is the issue, but any chance it's due to Multipool's "id" field being non-numeric?
edit: Doubt that's it; slush's pool sometimes returns id="json" as well.

Either way, it appears (to my eyes, which aren't familiar with the pool mining protocol) that Multipool is returning valid data, but it isn't making its way into the work_data table.  There are zero entries with the pool_id that corresponds to Multipool.

edit2: the JSON response when Multipool is enabled is: {"error":"No enabled pools responded to the work request.","result":null,"id":1}

Hope this data helped...