Topic: hashkill - testing bitcoin miner plugin

full member
Activity: 182
Merit: 100
After a random amount of time hashkill will crash with this:

Quote
Mining statistics...
Speed: 171 MHash/sec [proc: 8] [subm: 5] [stale: 1] [eff: 62%]     [error] (ocl_bitcoin.c:141) Cannot authenticate!

Using phoenix it's 100% stable. How can I debug this?
newbie
Activity: 24
Merit: 0
I'm running dual 5830s on my test machine clocked at 875/900. Also using the latest Ubuntu 11.04 x64, SDK 2.4 and Catalyst 11.6.

Using the Deepbit pool, hashkill is delivering about 433 MHash/sec but phoenix 1.5/phatk is delivering about 528 MHash/sec. Should I be using an earlier driver? I tried playing with -D, -G1, -G2, -G3 and -G4. Is there anything else I can try to duplicate the results other folks have had?
member
Activity: 111
Merit: 10
★Trash&Burn [TBC/TXB]★
* Integrated support for getting stats from pools (currently only bitcoinpool.com, deepbit.net and mining.bitcoin.cz)

Keep up the good work, love your changes!

Mind adding support for BitClockers.com Bitcoin Mining Pool stats?

JSON API:
Pool Stats: http://bitclockers.com/api/
User Stats: http://bitclockers.com/api/APIKEYHERE/

If you need any more information I'd love to help.
sr. member
Activity: 256
Merit: 250
It is going to be in the same format as the command line: a text file, one pool entry per line.
full member
Activity: 124
Merit: 100
What format will the pool list use? Something like [pool address] - [include in failover=1/0] - [workername] - [password]?
sr. member
Activity: 256
Merit: 250
Got some bad news... multi-pool support (failover/load-balance) will be delayed. I've had some problems making it work correctly (especially as far as the LP stuff is concerned). Another thing is that I will be quite busy for the next week or two and won't have time to work on it.

Regarding the recent DDoS attacks on the slush/deepbit pools, this is not good. Nevertheless, there is a tip that may be helpful, and it's simple: just run two or more instances against different pools. Since hashkill utilizes all GPUs, the GPU load will be balanced nicely. Once a pool is DDoS'd, connections to it fail and the other instances will utilize more GPU power. To make it clearer, here is an example:

You have 2x5870 cards, running at 400 MH/s each, 800 MH/s overall. You run two instances - the first one against slush's pool, the second one against deepbit. GPU load is balanced - you'd roughly spend 400 MH/s mining for bitcoin.cz and 400 MH/s mining for deepbit.net. Then imagine deepbit.net gets attacked and your connections to it fail. Instance #2 would wait until it can successfully reconnect. No GPU power would be wasted though - the GPUs would now be fully utilized by instance #1 running against bitcoin.cz, getting 800 MH/s. Once deepbit.net comes back online, GPU utilization would balance itself out again.

This is a very quick and dirty load-balancing scheme that is otherwise hard to achieve with multi-GPU configurations and one miner per GPU. I am using it and it works nicely.
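
A minimal startup-script sketch of this trick (assuming a POSIX shell; POOL1_ARGS/POOL2_ARGS are placeholders for whatever options you already pass to hashkill-gpu for each pool - no particular hashkill option names are assumed):

Code:
#!/bin/sh
# One hashkill instance per pool, both launched from the same script.
POOL1_ARGS=""    # placeholder: your usual options for slush's pool (bitcoin.cz)
POOL2_ARGS=""    # placeholder: your usual options for deepbit.net
./hashkill-gpu $POOL1_ARGS &   # instance #1
./hashkill-gpu $POOL2_ARGS &   # instance #2
wait                           # if one pool goes down, the other instance
                               # naturally takes over the freed GPU time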
full member
Activity: 182
Merit: 100
One thing I've noticed is that it generates fewer shares compared to the others but has a higher hash rate?
newbie
Activity: 12
Merit: 0
Quote
Anyone know why launching hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from the terminal window, but it produces this error if I run my batch file independently of the terminal window.

This is causing problems with automatic startup of the miner after logging in...

Put the export LD_LIBRARY_PATH=... line in your script.

Thanks very much for that - got my problem fixed. A really awesome miner; I'm liking the user interface too.
sr. member
Activity: 256
Merit: 250
Doesn't poclbm display stales?

Some pools do report them (deepbit for sure). As for whether hashkill is more likely to get them... it depends on the pool and your "luck" mostly. Hashkill does flush the queues, but it does not immediately cancel the current getwork, so if you have a share in the current NDRange, it will readily submit it and it will get displayed as stale. I could of course not submit that, but it does not matter... users would feel happier about it of course, but they would not benefit from that in any way (other than feeling happier about fewer stales being indicated).
sr. member
Activity: 392
Merit: 250
So if I use, say, Poclbm it won't tell me about the stale shares, even though I'm probably getting just as many?

Do you think Hashkill is more likely to get stales, the way it's designed? It might need long polling more than other miners do.
sr. member
Activity: 256
Merit: 250
Slush's pool does not support long polling.

Quote
Anyone know why launching hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from the terminal window, but it produces this error if I run my batch file independently of the terminal window.

This is causing problems with automatic startup of the miner after logging in...

Put the export LD_LIBRARY_PATH=... line in your script.
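
A minimal sketch of such a startup script, assuming the OpenCL library was installed under /opt/AMDAPP (both paths below are assumptions - adjust them to wherever libOpenCL.so and hashkill actually live on your system):

Code:
#!/bin/sh
# Let the dynamic loader find libOpenCL.so even when the script is started
# outside an interactive terminal (e.g. on login).
export LD_LIBRARY_PATH=/opt/AMDAPP/lib/x86_64:$LD_LIBRARY_PATH
cd /path/to/hashkill      # placeholder: your hashkill directory
./hashkill-gpu            # add your usual mining options here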
sr. member
Activity: 392
Merit: 250
I just noticed today that Hashkill doesn't work nearly as well with Slush's pool as it does with Deepbit.

I'm getting 10-11% stale shares now with Slush.

Doesn't Hashkill work with Slush's long polling?
newbie
Activity: 12
Merit: 0
Anyone know why launching hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from the terminal window, but it produces this error if I run my batch file independently of the terminal window.

This is causing problems with automatic startup of the miner after logging in...
hero member
Activity: 731
Merit: 503
Libertas a calumnia
Fine, thanks for all the details.
sr. member
Activity: 256
Merit: 250
There seems to be a misunderstanding here. Shares are not "processed"; getworks are. Each of the 'procd' getworks results in zero, one or more shares. Your stats indicate that:

* You've requested a new getwork 5452 times since the program was run.
* Working on those 5452 getworks, you found 4755 shares and submitted them
* You have 64 stale (or invalid) shares - you submitted them but the pool rejected them.
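
Put differently, that efficiency figure is just submitted shares divided by processed getworks: 4755 / 5452 ≈ 0.87, which is the 87% reported.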

Now since hashkill requests getworks in advance, even if you happened to find exactly one share per getwork (not zero and not more than one), you would still not have processed = submitted. That's because the queue is filled "in advance".

Multiple "short" blocks in a row are likely to bring that "efficiency" down. That's because on a new block, all the getworks in the queue that have already been counted as "processed" are discarded. Efficiency is calculated over the whole program run, not the current block.

Connection failures (e.g. being unable to connect to the pool to submit a share) obviously drop efficiency as well.
hero member
Activity: 731
Merit: 503
Libertas a calumnia
Thank you gat3way for the explanations. I still have a lot of "holes" in my comprehension of the whole thing, but I'm beginning to understand.

Not sure why you are unhappy with 86% efficiency; if your pool doesn't have an issue with efficiency then the ratio shouldn't matter that much to the end user.
What you should care about is the number of stales/invalids compared to your submitted shares.

Ok, these are my stats:
Quote
Speed: 1469 MHash/sec [proc: 5452] [subm: 4755] [stale: 64] [eff: 87%]
What I can't understand is why I have so many processed shares and why not all of them are being submitted.

How come a share is processed but not submitted?
In which scenarios can this happen?
hero member
Activity: 504
Merit: 502
Not sure why you have such poor efficiency, because I'm at 100-104% the whole time; though I'm confused how I can get more than 100% efficiency.
gat3way, I have the same question - can you please explain how this can happen?

Also, a follow-up to my test: while for a single device I get more shares (going by the server-side statistics) relative to hashkill, the result is better for hashkill when I use multiple cards.

I guess there are a lot of details to check... BTW, I still get low efficiency, around 86%.

Not sure why you are unhappy with 86% efficiency; if your pool doesn't have an issue with efficiency then the ratio shouldn't matter that much to the end user.

What you should care about is the number of stales/invalids compared to your submitted shares.
sr. member
Activity: 256
Merit: 250
It is possible and a matter of luck. Efficiency is calculated based on the number of shares divided by the number of getworks received.

For a single getwork, you may have zero, one or more submitted shares and it is a matter of luck. If you have 3 getworks requested and found 4 shares while processing them - then yes, efficiency would be 125%. In the ideal scenario, efficiency would get close to 100% after hashkill has been working long enough.
hero member
Activity: 731
Merit: 503
Libertas a calumnia
Not sure why you have such poor efficiency, because I'm at 100-104% the whole time; though I'm confused how I can get more than 100% efficiency.
gat3way, I have the same question - can you please explain how this can happen?

Also, a follow-up to my test: while for a single device I get more shares (going by the server-side statistics) relative to hashkill, the result is better for hashkill when I use multiple cards.

I guess there are a lot of details to check... BTW, I still get low efficiency, around 86%.
sr. member
Activity: 256
Merit: 250
There shouldn't be much of a difference (though device-host transfers would be slower with larger buffers of course). BTW is mapping/unmapping device memory possible with pyopencl?