
Topic: Phoenix - Efficient, fast, modular miner - page 4. (Read 760757 times)

legendary
Activity: 1512
Merit: 1036
January 21, 2012, 09:11:05 PM
Just out of curiosity, I thought I'd see if there's any hashrate to be gained by compiling pyOpenCL in Visual Studio 2010 with Stream SDK 2.6 RC3:

Python 2.7.2
Base-12.1.1.win32-py2.7
numpy-MKL-1.6.1.win32-py2.7-2011-10-29
scipy-0.10.0.win32-py2.7
zope.interface-3.8.0.win32-py2.7
Twisted-11.1.0.win32-py2.7

I compiled boost_1_48_0 multithreaded with msvc-10.0, so I now have boost_python-vc100-mt-1_48.dll, then compiled pyopencl-0.92 after the manifest tweaks and environment variables needed to get it to work.

Results? Exactly the same 224.00 MHash/s as the Phoenix 1.7.4 exe gives me. Yay. Three hours I won't get back... At least my Python isn't slower than the exe's Python anymore.
legendary
Activity: 3080
Merit: 1080
January 21, 2012, 05:36:40 PM
Version 1.7.4 has been released.

Changes:
 - Added X-Work-Identifier support to RPC for better compatibility with P2Pool
 - Tweaked kernel WORKSIZE validation


Download

Latest version: 1.7.4
Windows binaries
Source code/Linux release (requires Python, Twisted, and PyOpenCL)

GitHub:
https://github.com/jedi95/Phoenix-Miner

Ummm, I think you either forgot to change the version number or haven't uploaded the 1.7.4 source yet, because when I download the latest git tarball it still says 1.7.3. Just thought I'd let ya know :p
full member
Activity: 219
Merit: 120
January 21, 2012, 04:21:40 PM
Version 1.7.4 has been released.

Changes:
 - Added X-Work-Identifier support to RPC for better compatibility with P2Pool
 - Tweaked kernel WORKSIZE validation


Download

Latest version: 1.7.4
Windows binaries
Source code/Linux release (requires Python, Twisted, and PyOpenCL)

GitHub:
https://github.com/jedi95/Phoenix-Miner
hero member
Activity: 769
Merit: 500
January 15, 2012, 03:54:51 PM
Thanks, jedi95, for your explanation; I did another test. This time Phoenix ran for 105 minutes, with 91 shares submitted and 0 (!!!!) rejects.
That makes 0.87 shares/min @ ~66 MH/s ... I currently don't know if this is better or worse than with PyOpenCL 0.92, but the 0 rejects are stunning.
I'll do another test with Phoenix 1.7.3 + PyOpenCL 0.92 tomorrow, also running for 105 minutes; I'm interested in the result.

A very short test with CGMINER, which outputs a shares/min statistic, shows a value below 0.87!
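
For reference, a quick sanity check on that rate: a difficulty-1 share is expected once every 2^32 hashes on average, so ~66 MH/s should average roughly 0.92 shares/min, and 0.87 is in line with that.
Code:
# Expected difficulty-1 share rate for a given hashrate
# (one share per 2**32 hashes on average)
hashrate = 66e6  # hashes per second (~66 MH/s)
print "%.2f shares/min" % (hashrate * 60 / 2**32)  # ~0.92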

Edit: Damn, it seems my pool did an upgrade; the 0 rejects seem to be the result of that. The share rate between 0.92 and 2011.2 is identical.

Dia
full member
Activity: 219
Merit: 120
January 15, 2012, 01:18:55 PM
@jedi95:

I observed a strange bug with the compiled Phoenix based on 2011.2: it crashes on this line, which works in your latest fixed version and worked with the one before that, which had 2011.1beta3.
Code:
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)

I know you won't give support for that one, but have you got any ideas why this happens?

Edit: I removed the line for testing, changed all deprecated calls to the new functions, and made another observation. I had never seen messages like which block we are on, or that the server gave work from the previous block. Is this info from the pool, or new in Phoenix?
Code:
[15/01/2012 15:29:45] LP: New work pushed
[15/01/2012 15:29:46] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:46] New block (WorkQueue)
[15/01/2012 15:29:46] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:46] Server gave work from the previous block, ignoring.
[15/01/2012 15:29:52] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:53] Currently on block: 162310
[15/01/2012 15:30:52] Server gave new work; passing to WorkQueue
[15/01/2012 15:31:57] Server gave new work; passing to WorkQueue

Edit 2: I tested 2011.2 for 30 minutes and got 0 rejects, which is great for my 6550D ... dunno if this was luck or if this version runs smoother.

Thanks,
Dia

And this is exactly why I won't be using the newer versions. The "Server gave work from the previous block, ignoring" message is one of the many problems that occur because of the work delay.

This is basically what happened:
1. Sometime before 15:29:45 Phoenix started a work request
2. LP returned new work at 15:29:45
3. The work request completed at 15:29:46, but because it took so long, the work is from the previous block and useless

When this happens in reverse (sending work to the server) you get stale shares. 30 minutes isn't really long enough for an effective stale-share test, especially on a slower GPU like the 6550D.

"Currently on block: " is sent by the pool using x-blocknum. Some pools send it, others don't.
hero member
Activity: 769
Merit: 500
January 15, 2012, 09:10:30 AM
@jedi95:

I observed a strange bug with the compiled Phoenix based on 2011.2: it crashes on this line, which works in your latest fixed version and worked with the one before that, which had 2011.1beta3.
Code:
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)

I know you won't give support for that one, but have you got any ideas why this happens?

Edit: I removed the line for testing, changed all deprecated calls to the new functions, and made another observation. I had never seen messages like which block we are on, or that the server gave work from the previous block. Is this info from the pool, or new in Phoenix?
Code:
[15/01/2012 15:29:45] LP: New work pushed
[15/01/2012 15:29:46] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:46] New block (WorkQueue)
[15/01/2012 15:29:46] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:46] Server gave work from the previous block, ignoring.
[15/01/2012 15:29:52] Server gave new work; passing to WorkQueue
[15/01/2012 15:29:53] Currently on block: 162310
[15/01/2012 15:30:52] Server gave new work; passing to WorkQueue
[15/01/2012 15:31:57] Server gave new work; passing to WorkQueue

Edit 2: I tested 2011.2 for 30 minutes and got 0 rejects, which is great for my 6550D ... dunno if this was luck or if this version runs smoother.

Thanks,
Dia
hero member
Activity: 769
Merit: 500
January 14, 2012, 06:58:05 PM
Thanks for these versions, I will do some tests with them later today. Have you got any clue why PyOpenCL would increase the time it takes to get new work?
As I understand it, the work is retrieved in another part or thread of Phoenix that has nothing to do with OpenCL ... the QueueReader, I guess?

Dia
full member
Activity: 219
Merit: 120
January 14, 2012, 12:29:38 PM

I'm not sure if I understand this behaviour, but it seems that with finish(), results are confirmed immediately, while without finish() it takes a few seconds for the results to get processed. Any explanation for this?

Dia

My best guess is that removing finish() makes the delay getting/sending results worse. (assuming you used 2011.1 or 2011.2 for that test)

Even if you don't see the same hashrate differences, the delay that 2011.1 and 2011.2 introduce to getting work/sending results is problematic.

Have a look at these test runs using -q 20 and a local bitcoin client:

0.92
Code:
C:\phoenix>python phoenix.py -u http://bitcoin:bitcoin@localhost:8332 -v -q 20 -k poclbm platform=1 device=0 aggression=2
[14/01/2012 10:13:35] Phoenix v1.7.3 starting...
[14/01/2012 10:13:35] Connected to server
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] New block (WorkQueue)
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[14/01/2012 10:13:35] Server gave new work; passing to WorkQueue
[137.12 Mhash/sec] [0 Accepted] [0 Rejected] [RPC]

Notice how it is able to fully populate the queue in less than a second.

Now compare that to 2011.2:
Code:
C:\phoenix>python phoenix.py -u http://bitcoin:bitcoin@localhost:8332 -v -q 20 -k poclbm platform=1 device=0 aggression=2
[14/01/2012 10:17:14] Phoenix v1.7.3 starting...
[14/01/2012 10:17:14] Connected to server
[14/01/2012 10:17:14] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:14] New block (WorkQueue)
[14/01/2012 10:17:14] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:14] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:17] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:17] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:17] Server gave new work; passing to WorkQueue
[14/01/2012 10:17:17] Server gave new work; passing to WorkQueue
[133.43 Mhash/sec] [0 Accepted] [0 Rejected] [RPC]

Notice how it now took a full 3 seconds to populate the queue.

EDIT: It seems that higher AGGRESSION makes the problem worse: (again using 2011.2)
Code:
C:\phoenix>python phoenix.py -u http://bitcoin:bitcoin@localhost:8332 -q 20 -v -k poclbm platform=1 device=0 aggression=8
[14/01/2012 10:59:12] Phoenix v1.7.3 starting...
[14/01/2012 10:59:12] Connected to server
[14/01/2012 10:59:12] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:12] New block (WorkQueue)
[14/01/2012 10:59:13] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:14] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:14] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:15] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:16] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:17] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:18] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:19] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:20] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:21] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:22] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:23] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:24] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:25] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:26] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:27] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:28] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:29] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:29] Server gave new work; passing to WorkQueue
[14/01/2012 10:59:30] Server gave new work; passing to WorkQueue
[143.70 Mhash/sec] [0 Accepted] [0 Rejected] [RPC]

We are now looking at close to 1 second per work request, which is simply unacceptable. This delay gets worse when connecting to remote servers. Unless a means of fixing this problem is found, the official binaries for Phoenix will never use these versions of PyOpenCL.
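
For comparison, timing a bare getwork outside the miner should complete in milliseconds, which points at the PyOpenCL-linked process rather than the server. A rough sketch (assumes a local bitcoind with user/pass "bitcoin"):
Code:
import base64, json, time, urllib2

req = urllib2.Request("http://localhost:8332",
                      json.dumps({"method": "getwork", "params": [], "id": 1}))
req.add_header("Authorization", "Basic " + base64.b64encode("bitcoin:bitcoin"))
req.add_header("Content-Type", "application/json")

start = time.time()
urllib2.urlopen(req).read()
print "getwork took %.3f s" % (time.time() - start)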

However, I don't mind compiling a few binaries with the newer versions for testing:
http://jedi95.com/files/phoenix-1.7.3-2011.1.zip
http://jedi95.com/files/phoenix-1.7.3-2011.2.zip

To anyone who hasn't been following this thread:
The above binaries are intended for testing only, and WILL cause an increase in stale shares.
hero member
Activity: 769
Merit: 500
January 14, 2012, 06:51:26 AM
@jedi95: I saw that you are using PyOpenCL version 2011.1beta3, so I tried to replace cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output) with cl.enqueue_copy(self.commandQueue, self.output_buf, self.output), because enqueue_read_buffer() and enqueue_write_buffer() are not supported anymore; at least the PyOpenCL docs say "Add pyopencl.enqueue_copy(). Deprecate all other transfer functions." Perhaps a switch to the latest PyOpenCL version can fix this, and we could throw out the unsupported calls. What do you think?

I read that vector types were added as well, see: http://documen.tician.de/pyopencl/array.html#pyopencl.array.vec

Edit: I saw a few "Reject reason: stale" errors; what does that mean?

Thanks,
Dia

"Reject reason:" is sent by the pool server to indicate why a share was rejected. "stale" simply indicates that the share was from the previous block, and not useful.

The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.

The deprecated functions are only at the API level. enqueue_copy() uses enqueue_write_buffer() and enqueue_read_buffer() internally.

If you want to make the deprecation warnings go away, just do this: (Will not work with compiled binaries, since they use 0.92)
Code:
Replace:
 cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output)
With:
 cl.enqueue_copy(self.commandQueue, self.output, self.output_buf)

and

Replace:
 cl.enqueue_write_buffer(self.commandQueue, self.output_buf, self.output)
With:
 cl.enqueue_copy(self.commandQueue, self.output_buf, self.output)

If you want to compare versions for yourself, you can download them here: (These only work with Python 2.6)
http://jedi95.com/files/pyopencl-0.92.zip
http://jedi95.com/files/pyopencl-2011.1.zip
http://jedi95.com/files/pyopencl-2011.2.zip

The current version now reports 0.92 on Windows ... well, I have to say I really would like to see a Windows binary with the latest PyOpenCL version in order to play around with it. By the way, 0.92 is NOT faster here, it's even a tad slower than 2011.1beta3 :-(. Is it hard for you to offer such a version for testing?

By the way, I made an interesting change in __init__.py: I removed self.commandQueue.finish() and made cl.enqueue_read_buffer(...) + cl.enqueue_write_buffer(...) blocking via is_blocking=True. I got that from AMD's documentation:

Quote
Since the AMD OpenCL runtime supports only in-order queuing, using clFinish() on a queue and queuing a blocking command gives the same result. The latter saves the overhead of another API command.

For example:
clEnqueueWriteBuffer(myCQ, buff, CL_FALSE, 0, buffSize, input, 0, NULL, NULL);
clFinish(myCQ);
is equivalent, for the AMD OpenCL runtime, to:
clEnqueueWriteBuffer(myCQ, buff, CL_TRUE, 0, buffSize, input, 0, NULL, NULL);
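
In Phoenix terms, the change Dia describes would look roughly like this (a sketch using the attribute names from the kernel snippets quoted in this thread):
Code:
# Before: non-blocking read followed by an explicit finish()
cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output)
self.commandQueue.finish()

# After: let the transfer itself block, saving one API call
cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output,
                       is_blocking=True)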

That change does something I had not seen before; take a look at this (finish() removed):
Code:
[14/01/2012 12:42:47] full: 1240247612892774400 hi: 288767650 lo: 0
[14/01/2012 12:42:48] full: 1297148181443772416 hi: 302015846 lo: 0
[14/01/2012 12:42:59] Result 00000000b6ea40ac... accepted
[14/01/2012 12:43:13] Result 00000000dbb7347a... accepted
And here (finish() in place):
Code:
[14/01/2012 12:47:50] full: 10315288014967799808 hi: 2401715148 lo: 0
[14/01/2012 12:47:50] Result 000000003c0db858... accepted
[14/01/2012 12:48:11] full: 16230176632667635712 hi: 3778882472 lo: 0
[14/01/2012 12:48:11] Result 0000000044807a7e... accepted

I'm not sure if I understand this behaviour, but it seems that with finish(), results are confirmed immediately, while without finish() it takes a few seconds for the results to get processed. Any explanation for this?

Dia
full member
Activity: 219
Merit: 120
January 14, 2012, 02:14:21 AM

still quits after 10 sec
It's totally buggy

If by "quits", you mean "this version randomly crashes my computer with a frozen screen when using AGGRESSION=12, even at lower overclocks than before", then that's what I see too.

I just tested with AGGRESSION=12 and the only problem I can see is extreme desktop lag. (to be expected though, on a GTX 580 this gives a kernel execution time of close to 2 seconds)

I would test with higher values but this would trigger a TDR and driver reset due to the kernel execution time exceeding 2 seconds.

Can you post your system specs and command line?

I haven't really investigated, since this is my main PC where the symptom emerged; the problem only cropped up doing benchmarks, and things still work at an unintrusive phatk2 Worksize=128 Aggression=6 Fastloop=True. I got at least 5 lockups in an hour of testing on both phatk and Dia's kernel at worksize=64 aggression=12, usually when clicking any other app but the miner. I suspect it is something similar to diapolo's 8-27 kernel, where its optimizations made a previously good overclock unstable (although that symptom was the kernel returning bad hashes).
Catalyst 12.1a / OpenCL 2.6 / Radeon HD 5770 @ 970/970 (stable up to 990/1300) / Win7 x86 / phoenix.exe 1.7.3.

Benchmarks and more details of the config.

This might be a TDR issue. When you do things with the computer it slows down the miner somewhat, which can push it just over the edge when it is already close to causing a TDR. Any time a kernel execution takes longer than 2 seconds, a TDR should occur, which will either crash the system or result in a GPU driver reset. This won't be a problem when using sane values of AGGRESSION. To test this, try running AGGRESSION=13; it should instantly cause a TDR and lockup regardless of overclock settings on your configuration.

More info on TDR:
http://msdn.microsoft.com/en-us/windows/hardware/gg487368
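
The 2-second limit is the TdrDelay setting documented on that page; here is a quick way to check what a machine is configured for (a sketch using the documented registry value; if it is absent, the 2-second default applies):
Code:
import _winreg  # Python 2; key and value names per the MSDN TDR page above

key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE,
                      r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers")
try:
    print "TdrDelay = %d s" % _winreg.QueryValueEx(key, "TdrDelay")[0]
except WindowsError:
    print "TdrDelay not set; default is 2 s"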

legendary
Activity: 1512
Merit: 1036
January 14, 2012, 01:55:09 AM

still quits after 10 sec
It's totally buggy

If by "quits", you mean "this version randomly crashes my computer with a frozen screen when using AGGRESSION=12, even at lower overclocks than before", then that's what I see too.

I just tested with AGGRESSION=12 and the only problem I can see is extreme desktop lag. (to be expected though, on a GTX 580 this gives a kernel execution time of close to 2 seconds)

I would test with higher values but this would trigger a TDR and driver reset due to the kernel execution time exceeding 2 seconds.

Can you post your system specs and command line?

I haven't really investigated, since this is my main PC where the symptom emerged; the problem only cropped up doing benchmarks, and things still work at an unintrusive phatk2 Worksize=128 Aggression=6 Fastloop=True. I got at least 5 lockups in an hour of testing on both phatk and Dia's kernel at worksize=64 aggression=12, usually when clicking any other app but the miner. I suspect it is something similar to diapolo's 8-27 kernel, where its optimizations made a previously good overclock unstable (although that symptom was the kernel returning bad hashes).
Catalyst 12.1a / OpenCL 2.6 / Radeon HD 5770 @ 970/970 (stable up to 990/1300) / Win7 x86 / phoenix.exe 1.7.3.

Benchmarks and more details of the config.
legendary
Activity: 3080
Merit: 1080
January 14, 2012, 12:38:12 AM
Is Phoenix still closing the connection after an LP broadcast? I see that as pretty annoying, because managing new connections after every LP broadcast puts unnecessary load on the server...

Nope, it's not doing that anymore. With 1.7.0 I used to get TONS and tons of disconnects and "worker is idle.." errors with your pool. I just never noticed how bad it was because I had the Phoenix processes hidden behind "screen". But with 1.7.3 there are no disconnects whatsoever (watched it for 30 minutes straight... none, and the reject rate is super low).

Conclusion: I love Phoenix :)))!!
full member
Activity: 219
Merit: 120
January 14, 2012, 12:26:11 AM

still quits after 10 sec
It's totally buggy

If by "quits", you mean "this version randomly crashes my computer with a frozen screen when using AGGRESSION=12, even at lower overclocks than before", then that's what I see too.

I just tested with AGGRESSION=12 and the only problem I can see is extreme desktop lag. (to be expected though, on a GTX 580 this gives a kernel execution time of close to 2 seconds)

I would test with higher values but this would trigger a TDR and driver reset due to the kernel execution time exceeding 2 seconds.

Can you post your system specs and command line?
legendary
Activity: 1512
Merit: 1036
January 13, 2012, 11:56:24 PM
The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.
When running Dia's kernel on the 1.7.3 Win32 exe, it spits out this message:
[13/01/2012 12:58:20] using PyOpenCL version 2011.1beta3


thanks to these lines in __init__.py:
# output used pyOpenCL version
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)


The hashrate loss in 2011.x is similar to what I've seen running from source, so if the 1.7.3 exe has 2011.1, maybe a repackaging can get a bit more hashrate(?)


Ah, I must have missed some of the files when I was messing around with it trying to fix the delay getting work. The _cl.pyd file is from 0.92.

I have re-uploaded 1.7.3 with this fix applied. The old version of 1.7.3 has been re-uploaded as phoenix-1.7.3_old.zip.

still quits after 10 sec
It's totally buggy

If by "quits", you mean "this version randomly crashes my computer with a frozen screen when using AGGRESSION=12, even at lower overclocks than before", then that's what I see too.
full member
Activity: 221
Merit: 100
January 13, 2012, 05:42:11 PM
The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.
When running Dia's kernel on the 1.7.3 Win32 exe, it spits out this message:
[13/01/2012 12:58:20] using PyOpenCL version 2011.1beta3


thanks to these lines in __init__.py:
# output used pyOpenCL version
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)


The hashrate loss in 2011.x is similar to what I've seen running from source, so if the 1.7.3 exe has 2011.1, maybe a repackaging can get a bit more hashrate(?)


Ah, I must have missed some of the files when I was messing around with it trying to fix the delay getting work. The _cl.pyd file is from 0.92.

I have re-uploaded 1.7.3 with this fix applied. The old version of 1.7.3 has been re-uploaded as phoenix-1.7.3_old.zip.

still quits after 10 sec
It's totally buggy
legendary
Activity: 1512
Merit: 1036
January 13, 2012, 04:48:11 PM
I have re-uploaded 1.7.3 with this fix applied. The old version of 1.7.3 has been re-uploaded as phoenix-1.7.3_old.zip.

Same hashrate, using -a 100 and running for five minutes until the last digit stops changing, but the reported version is 0.98. Thanks.
full member
Activity: 219
Merit: 120
January 13, 2012, 04:23:52 PM
The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.
When running Dia's kernel on the 1.7.3 Win32 exe, it spits out this message:
[13/01/2012 12:58:20] using PyOpenCL version 2011.1beta3


thanks to these lines in __init__.py:
# output used pyOpenCL version
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)


The hashrate loss in 2011.x is similar to what I've seen running from source, so if the 1.7.3 exe has 2011.1, maybe a repackaging can get a bit more hashrate(?)


Ah, I must have missed some of the files when I was messing around with it trying to fix the delay getting work. The _cl.pyd file is from 0.92.

I have re-uploaded 1.7.3 with this fix applied. The old version of 1.7.3 has been re-uploaded as phoenix-1.7.3_old.zip.
legendary
Activity: 1512
Merit: 1036
January 13, 2012, 04:01:36 PM
The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.
When running Dia's kernel on the 1.7.3 Win32 exe, it spits out this message:
[13/01/2012 12:58:20] using PyOpenCL version 2011.1beta3


thanks to these lines in __init__.py:
# output used pyOpenCL version
self.interface.debug('using PyOpenCL version ' + cl.VERSION_TEXT)


The hashrate loss in 2011.x is similar to what I've seen running from source, so if the 1.7.3 exe has 2011.1, maybe a repackaging can get a bit more hashrate(?)
full member
Activity: 219
Merit: 120
January 13, 2012, 03:51:12 PM
@jedi95: I saw that you are using PyOpenCL version 2011.1beta3, so I tried to replace cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output) with cl.enqueue_copy(self.commandQueue, self.output_buf, self.output), because enqueue_read_buffer() and enqueue_write_buffer() are not supported anymore; at least the PyOpenCL docs say "Add pyopencl.enqueue_copy(). Deprecate all other transfer functions." Perhaps a switch to the latest PyOpenCL version can fix this, and we could throw out the unsupported calls. What do you think?

I read that vector types were added as well, see: http://documen.tician.de/pyopencl/array.html#pyopencl.array.vec

Edit: I saw a few "Reject reason: stale" errors; what does that mean?

Thanks,
Dia

"Reject reason:" is sent by the pool server to indicate why a share was rejected. "stale" simply indicates that the share was from the previous block, and not useful.

The Windows binaries for Phoenix use PyOpenCL 0.92. I have tried both PyOpenCL 2011.1 and 2011.2, but for some reason they introduce a substantial delay to getting work. These versions also have a 1-2% hashrate loss compared to 0.92.

The deprecated functions are only at the API level. enqueue_copy() uses enqueue_write_buffer() and enqueue_read_buffer() internally.

If you want to make the deprecation warnings go away, just do this: (Will not work with compiled binaries, since they use 0.92)
Code:
Replace:
 cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output)
With:
 cl.enqueue_copy(self.commandQueue, self.output, self.output_buf)

and

Replace:
 cl.enqueue_write_buffer(self.commandQueue, self.output_buf, self.output)
With:
 cl.enqueue_copy(self.commandQueue, self.output_buf, self.output)

If you want to compare versions for yourself, you can download them here: (These only work with Python 2.6)
http://jedi95.com/files/pyopencl-0.92.zip
http://jedi95.com/files/pyopencl-2011.1.zip
http://jedi95.com/files/pyopencl-2011.2.zip
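
Note that enqueue_copy() takes the destination before the source, which is why the read and write replacements above swap argument order. A minimal round-trip sketch (assumes PyOpenCL 2011.x and numpy):
Code:
import numpy
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
host = numpy.zeros(16, dtype=numpy.uint32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=host.nbytes)

cl.enqueue_copy(queue, buf, host)  # host -> device (dest first, then src)
cl.enqueue_copy(queue, host, buf)  # device -> host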
hero member
Activity: 769
Merit: 500
January 13, 2012, 02:41:00 AM
@jedi95: I saw that you are using PyOpenCL version 2011.1beta3, so I tried to replace cl.enqueue_read_buffer(self.commandQueue, self.output_buf, self.output) with cl.enqueue_copy(self.commandQueue, self.output_buf, self.output), because enqueue_read_buffer() and enqueue_write_buffer() are not supported anymore; at least the PyOpenCL docs say "Add pyopencl.enqueue_copy(). Deprecate all other transfer functions." Perhaps a switch to the latest PyOpenCL version can fix this, and we could throw out the unsupported calls. What do you think?

I read that vector types were added as well, see: http://documen.tician.de/pyopencl/array.html#pyopencl.array.vec

Edit: I saw a few "Reject reason: stale" errors; what does that mean?

Thanks,
Dia