
Topic: DiabloMiner GPU Miner - page 49.

hero member
Activity: 772
Merit: 500
July 04, 2011, 10:22:54 AM
Update: Removed a lot of dead code that the compiler should remove on its own; I think it might have been missing some of it.

Did you take a look at my kernel mod? ;)

Dia
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
July 04, 2011, 09:36:39 AM
Update: Removed a lot of dead code that the compiler should remove on its own; I think it might have been missing some of it.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
July 02, 2011, 09:03:28 PM
Update: Finished adding all the old optimizations; increases speed by 1-2% or so.

I seem to be at the limit here, though. I went from 378 to 379 on SDK 2.1. I think I'll work on undoing more frankenkernel insanity later.
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
July 01, 2011, 06:57:12 PM
Still haven't been able to get anything from bitcoins.lc with regard to the high rate of stales, but other pools are looking really good. Added 2 more GPUs; nothing like one instance of Diablo churning at 3.2 Gh/s.
legendary
Activity: 1937
Merit: 1001
July 01, 2011, 11:01:42 AM
Could you add a version number or date in the file?
newbie
Activity: 27
Merit: 0
June 30, 2011, 11:43:31 PM
Update: Added bitless's hack.

My 5850@918 on 2.1 went from 369 to 378, so a 2.4% increase.
Just installed the one with the June 27th .jar file on the machines out in the 'server oven'.
Great stuff!  Pretty much got +10 MegaHootzels per card.

Mostly running #! Linux AMD64, drivers v11.2 / 11.3, OpenCL 2.1, BOINC running 100% on all CPU cores/threads:

1x5850@775/875 went from 304 to 314   
2x5830x@875/900 went from 533 to 553
1x5830@875/900 went from 266 to 276
6970@885/800 + 6870@995/910 went from 670 to 694 (driver v11.6, OpenCL 2.4).
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 30, 2011, 09:38:40 PM
I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once, does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.

6 GPUs on this one; holding off on turning on the other 2 GPUs until I get this under control. bitcoins.lc is looking at it to see what they can figure out; thanks for your help. It seems it's not an issue if I run a worker per GPU, keeping the per-worker hash rate lower.

While I'm posting, a quick note on another topic: I remember seeing a post stating -f 0 was not a good idea. I always run that on my poclbm miners; any insight on what it should be for dedicated cards on a headless system?

If it is 6, you should be seeing about 12 in a burst.
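
For context, "nonce saturation" means the 32-bit nonce space of a single getwork has been used up: the header is fixed and only the nonce varies, so after 2^32 hashes a work item can produce no more unique hashes and fresh work must be fetched. A rough sketch of the idea (illustrative only, not DiabloMiner's actual code; the helper names are made up):

Code:
// Illustrative sketch of nonce saturation in a getwork miner.
public class NonceSaturationSketch {
    static final long NONCE_SPACE = 1L << 32; // the header's 32-bit nonce field
    static final long WORKSIZE = 1L << 22;    // nonces per kernel launch (example value)

    long base = 0;          // next nonce to hand to the GPU
    byte[] work = getwork();

    void mineOnce() {
        if (base + WORKSIZE > NONCE_SPACE) {
            // No unique nonces left for this header: force fresh work.
            System.out.println("Forcing getwork update due to nonce saturation");
            work = getwork();
            base = 0;
        }
        runKernel(work, base, WORKSIZE); // hash nonces [base, base + WORKSIZE)
        base += WORKSIZE;
    }

    byte[] getwork() { return new byte[80]; }   // stub: fetch work from the pool
    void runKernel(byte[] w, long b, long n) {} // stub: enqueue GPU hashing
}

With two in-flight work items per GPU (the two-buffer scheme discussed further down the thread), 6 GPUs saturating at once would plausibly produce the "about 12 in a burst" figure above.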

-f 1 is recommended; it'll push it up significantly.

As for a miner per GPU: this won't fix your problem. It sounds like their patch just doesn't work right. I use a single networking thread to process all async getworks, and a single thread to process all async sendworks (i.e., two threads total). If it is trying to pair stuff to TCP sessions, then it is 100% broken, and the guy who wrote the patch doesn't understand how HTTP works either.
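
The two-thread layout described there might look roughly like this (a minimal sketch of the description, not the actual DiabloMiner source):

Code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One thread drains every queued getwork request and one drains every
// queued share submission, no matter how many GPUs are mining. Neither
// thread is tied to a particular TCP session; with HTTP keepalive the
// underlying connections are simply reused when available.
public class NetworkThreadsSketch {
    private final ExecutorService getworkThread  = Executors.newSingleThreadExecutor();
    private final ExecutorService sendworkThread = Executors.newSingleThreadExecutor();

    void requestWork(Runnable getwork)  { getworkThread.submit(getwork); }
    void submitShare(Runnable sendwork) { sendworkThread.submit(sendwork); }
}

This is why a pool-side patch that assumes one TCP session per worker breaks: HTTP makes no such guarantee.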
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
June 30, 2011, 09:35:32 PM
I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once, does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.

6 GPUs on this one; holding off on turning on the other 2 GPUs until I get this under control. bitcoins.lc is looking at it to see what they can figure out; thanks for your help. It seems it's not an issue if I run a worker per GPU, keeping the per-worker hash rate lower.

While I'm posting, a quick note on another topic: I remember seeing a post stating -f 0 was not a good idea. I always run that on my poclbm miners; any insight on what it should be for dedicated cards on a headless system?
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 30, 2011, 09:27:26 PM
I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once, does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
June 30, 2011, 09:13:39 PM
I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once, does that provide any hints?
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 30, 2011, 09:01:25 PM
I didn't see this coming, but it seems since bitcoins.lc patched their bitcoind to add support for socket reuse, my primary rig running Diablo started getting 15% rejects. My miners running 400 Mh/s or so seem alright, but my main rig running at 2.3 Gh/s is having a really hard time; other pools are no problem, but bitcoins.lc is hating that speed on one worker. The backend is pushpool.

How would I go about running an instance per GPU or something similar? I hate the idea of doing it, but it looks like that might be the only option.

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
June 30, 2011, 08:52:06 PM
I didn't see this coming, but it seems since bitcoins.lc patched their bitcoind to add support for socket reuse, my primary rig running Diablo started getting 15% rejects. My miners running 400 Mh/s or so seem alright, but my main rig running at 2.3 Gh/s is having a really hard time; other pools are no problem, but bitcoins.lc is hating that speed on one worker. The backend is pushpool.

How would I go about running an instance per GPU or something similar? I hate the idea of doing it, but it looks like that might be the only option.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 30, 2011, 08:48:51 AM
Is it possible to make it so that if the connection is lost, instead of losing any found blocks, it stores them and tries to submit them when the connection is restored? They may end up being rejected if it was a long period of no connectivity, but it's more likely that they'll be good?

Yes it is possible, seeing as DiabloMiner already does this. DiabloMiner will submit a share no matter what, mere network problems will not stop it.
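
A minimal sketch of that retry-until-delivered behavior (illustrative, not the actual implementation):

Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Found shares sit in a queue until a submission attempt actually
// reaches the pool, so a network outage delays them instead of
// dropping them.
public class ShareResubmitSketch {
    private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<>();

    void foundShare(byte[] share) { pending.add(share); }

    void submitLoop() throws InterruptedException {
        while (true) {
            byte[] share = pending.take();
            while (!trySubmit(share)) { // keep retrying across outages
                Thread.sleep(1000);     // simple fixed back-off
            }
        }
    }

    boolean trySubmit(byte[] share) {
        return true; // stub: POST the share to the pool, false on failure
    }
}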
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 30, 2011, 08:48:08 AM
The latest DiabloMiner is 2 MH/s faster than poclbm on my card! I even patched the maj function.
edit: but I'm getting a third of my shares rejected lolwat

Because it was already patched.
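
For reference, the "maj" patch being discussed is presumably the standard rewrite of SHA-256's majority function from five bitwise operations down to four (that this matches the exact patch here is an assumption):

Code:
// SHA-256 Maj(x, y, z): for each bit, take the value that at least
// two of x, y, z agree on. Both forms below are equivalent.
class MajSketch {
    static int majTextbook(int x, int y, int z) {
        return (x & y) ^ (x & z) ^ (y & z); // textbook form: 5 operations
    }

    static int majPatched(int x, int y, int z) {
        return (x & y) | (z & (x | y));     // common rewrite: 4 operations
    }
}

On GPUs the same function is often expressed with the OpenCL bitselect builtin instead, which is likely why it "was already patched."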
sr. member
Activity: 371
Merit: 250
June 30, 2011, 08:32:09 AM
Is it possible to make it so that if the connection is lost, instead of losing any found blocks, it stores them and tries to submit them when the connection is restored? They may end up being rejected if it was a long period of no connectivity, but it's more likely that they'll be good?
hero member
Activity: 658
Merit: 500
June 29, 2011, 10:28:51 PM
The latest DiabloMiner is 2 MH/s faster than poclbm on my card! I even patched the maj function.
edit: but I'm getting a third of my shares rejected lolwat
member
Activity: 83
Merit: 10
June 29, 2011, 06:50:30 PM
Anyone who lost speed with the new kernel should try the latest version. It is now much faster for me.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 29, 2011, 04:42:40 PM
He alternates the buffers so the kernel can be executing on one buffer at the same time as the read runs on the other. They can both execute out of order (if it happens to run out of order, anyway).

Oddly, no. I do that by using two (formerly three) threads, each thread having its own queue.

So, that would be 4 (formerly 6) buffers.

The problem is the Radeon driver sometimes does not finish copying the buffer quickly due to system IO load (since SDK 2.1 does not support DMA, nor does 2.4 on Linux). Alternating buffers (which costs me nothing) means I can schedule the kernel execution without having to wait for the previously used buffer to unlock.

So, it's executing in parallel, just strictly out of order.
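
Sketched with JOCL-style OpenCL bindings, the alternating-buffer scheme looks roughly like this (illustrative only: the buffer sizes and setup are assumed, and this collapses into one loop what the post above describes as two threads with a queue each, i.e. four buffers total):

Code:
import static org.jocl.CL.*;
import org.jocl.*;

// While the read of one output buffer is still in flight, the next
// kernel launch targets the other buffer, so kernel scheduling never
// waits for a slow copy to release its buffer.
public class DoubleBufferSketch {
    cl_command_queue queue;           // assumed created elsewhere
    cl_kernel kernel;                 // assumed built elsewhere
    cl_mem[] output = new cl_mem[2];  // two alternating output buffers
    int[][] results = new int[2][256];
    cl_event[] readDone = new cl_event[2];

    void mineLoop(long globalSize) {
        int i = 0;
        while (true) {
            // Before reusing buffer i, make sure its previous read finished.
            if (readDone[i] != null) {
                clWaitForEvents(1, new cl_event[]{ readDone[i] });
            }
            clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(output[i]));
            clEnqueueNDRangeKernel(queue, kernel, 1, null,
                    new long[]{ globalSize }, null, 0, null, null);
            // Non-blocking read: the copy drains in the background while
            // the next pass launches against the other buffer.
            readDone[i] = new cl_event();
            clEnqueueReadBuffer(queue, output[i], false, 0,
                    256 * Sizeof.cl_int, Pointer.to(results[i]),
                    0, null, readDone[i]);
            i ^= 1; // alternate buffers each pass
        }
    }
}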
newbie
Activity: 39
Merit: 0
June 29, 2011, 03:59:18 PM
He alternates the buffers so the kernel can be executing on one buffer at the same time as the read runs on the other. They can both execute out of order (if it happens to run out of order, anyway).
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
June 27, 2011, 07:54:06 PM
In the part where you enqueue the work to the GPU and then read the output buffer, why do you alternate two buffers? (buffer and output are two arrays of two buffers)

Because it's faster on some setups.