
Topic: Phoenix - Efficient, fast, modular miner - page 46. (Read 760839 times)

hero member
Activity: 742
Merit: 500
BTCDig - mining pool
Code:
Unhandled error in Deferred:
Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 354, in _startRunCallbacks
    self._runCallbacks()
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 371, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 330, in _continue
    self.unpause()
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 325, in unpause
    self._runCallbacks()
--- ---
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 371, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File "/phoenix-1.3/minerutil/RPCProtocol.py", line 244, in
    d.addErrback(lambda x: self._failure())
  File "/phoenix-1.3/minerutil/RPCProtocol.py", line 307, in _failure
    self._setLongPollingPath(None)
  File "/phoenix-1.3/minerutil/RPCProtocol.py", line 174, in _setLongPollingPath
    self.activeLongPoll.cancel()
exceptions.AttributeError: Deferred instance has no attribute 'cancel'
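That AttributeError usually means the Twisted release shipped with the distro predates 10.1, which is when Deferred.cancel() was added. A minimal defensive sketch of guarding the call (the helper name is made up; this is not the actual Phoenix fix):
Code:
def cancel_long_poll(active_long_poll):
    # Cancel a pending long-poll Deferred only if this Twisted version
    # supports it; Deferred.cancel() first appeared in Twisted 10.1,
    # and older releases raise the AttributeError shown above.
    if active_long_poll is not None and hasattr(active_long_poll, 'cancel'):
        active_long_poll.cancel()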
member
Activity: 158
Merit: 10
Hi guys, what is the command line for mining solo?
full member
Activity: 188
Merit: 100
Nice work guys! I get the following warning messages right after I start up my miners (version 1.3).

KernelInterface.py:195: DeprecationWarning: struct integer overflow masking is deprecated hashInput = pack('>76sI', staticData, nonce)

KernelInterface.py:210: DeprecationWarning: struct integer overflow masking is deprecated formattedResult = pack('<76sI', nr.unit.data[:76], nonce)

But it seems to work great after the warnings. I'm using Ubuntu 10.10 with SDK 2.1 and four 5870s.

Thanks in advance for any help.
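Those warnings are Python 2.6's struct module pointing out that the nonce value no longer fits the unsigned 'I' field, so it masks the value and warns. A minimal sketch, assuming the pack calls quoted above, that masks the nonce to 32 bits first (the wrapper name is made up); the packed bytes are identical, only the warning disappears:
Code:
from struct import pack

def pack_hash_input(static_data, nonce):
    # Masking the nonce to 32 bits keeps it inside the range of 'I',
    # so Python 2.6 no longer emits the deprecation warning.  struct
    # would have masked it to the same value anyway.
    return pack('>76sI', static_data, nonce & 0xFFFFFFFF)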
sr. member
Activity: 392
Merit: 250
Changed from 1.2 to 1.3 and it started showing "Warning: work queue empty, miner is idle". :s
If I run 1.2 it doesn't happen.



So far I have only seen mine do that right before a Long Poll update when a block was found.

What were the circumstances? Was it right before or after a Long Poll Update? What pool are you using?
member
Activity: 77
Merit: 10
Changed from 1.2 to 1.3 and it started showing "Warning: work queue empty, miner is idle". :s
If I run 1.2 it doesn't happen.

full member
Activity: 155
Merit: 100
Thank you, sir; got my Vapor-X 5870 from 375 to 415 MH/s without any changes to hardware settings. Here is what works for me, if anyone is interested:

phoenix -u http://[email protected]:[email protected]:8332/ -k poclbm DEVICE=0 VECTORS BFI_INT AGGRESSION=10 WORKSIZE=2056
legendary
Activity: 1666
Merit: 1000


Adding WORKSIZE, no matter what size, dropped my speed 10 Mh/s. I haven't had a problem running without it.

If not specified, it defaults to your card's spec -- probably increased from 128 to 256, I would guess.
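If you want to see what limit your card actually reports, here is a rough pyopencl snippet (Phoenix already depends on pyopencl) that prints each device's maximum work-group size; presumably that is what the default falls back to when WORKSIZE is not given:
Code:
# Print the work-group size limit reported by every OpenCL device.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print device.name, device.max_work_group_size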
sr. member
Activity: 392
Merit: 250
So I am running this on a stock 5870 and I am getting almost exactly the same hashrate as poclbm. Has poclbm been updated, or is there something wrong with my command line for Phoenix?

I'm using:

./phoenix.py -u -k poclbm VECTORS=on BFI_INT AGGRESSION=13 WORKSIZE=128 DEVICE=1

Should I be using something different?  Getting 365 Mh/s in both poclbm and phoenix.


Try a different AGGRESSION; 12 was better than 13 or 11 for me.
Try lowering the WORKSIZE to 64, and also try without it completely.

Adding WORKSIZE, no matter what size, dropped my speed 10 Mh/s. I haven't had a problem running without it.
sr. member
Activity: 308
Merit: 251
Thank you very much, sir.

With today's version my stale shares have dropped from 3.5% to 0%! As for the comment about BFI_INT for 6xxx cards: Phoenix runs exactly the same as poclbm without BFI_INT on my XFX 5830 XXX Edition. With it I get the extra boost and it steadily sits around 267 MH/s.

I'm running Debian Sid with Catalyst 11.3 installed via the ATI packages, fglrx version 8.831.2-110308a-115935C-ATI, and AMD APP v2.4.

Oh, and today's version of Phoenix from trunk.
hero member
Activity: 607
Merit: 500
Is there a possibility for 6xxx series cards to have a unique command (like BFI_INT) to make them more profitable than 5xxx?
I wonder...
legendary
Activity: 1260
Merit: 1000
How many GPUs are you running? Are they all mining? How much CPU are they hogging? You could also try putting the miner in verbose mode (with -v flag) to see any possible errors.

3 miners (two are poclbm) on 3 5870s. CPU usage is virtually nothing... 0.35, 0.13, 0.21.

No errors shown with the -v flag.
newbie
Activity: 15
Merit: 0
So I am running this on a stock 5870 and I am getting almost exactly the same hashrate as poclbm. Has poclbm been updated, or is there something wrong with my command line for Phoenix?

I'm using:

./phoenix.py -u -k poclbm VECTORS=on BFI_INT AGGRESSION=13 WORKSIZE=128 DEVICE=1

Should I be using something different?  Getting 365 Mh/s in both poclbm and phoenix.


I think vectors are specified with just "VECTORS", not "VECTORS=on". That might solve the performance issue.

Yeah, I tried it both ways with the same result.


How many GPUs are you running? Are they all mining? How much CPU are they hogging? You could also try putting the miner in verbose mode (with -v flag) to see any possible errors.
hero member
Activity: 607
Merit: 500
What if you lower the AGGRESSION? Perhaps to 10.
legendary
Activity: 1260
Merit: 1000
So I am running this on a stock 5870 and I am getting almost exactly the same hashrate as poclbm. Has poclbm been updated, or is there something wrong with my command line for Phoenix?

I'm using:

./phoenix.py -u -k poclbm VECTORS=on BFI_INT AGGRESSION=13 WORKSIZE=128 DEVICE=1

Should I be using something different?  Getting 365 Mh/s in both poclbm and phoenix.


I think vectors are specified with just "VECTORS", not "VECTORS=on". That might solve the performance issue.

Yeah, I tried it both ways with the same result.
full member
Activity: 238
Merit: 100
Do you mean a new getwork with a new previous block hash? Because midstate would change on
every new getwork regardless of whether it's based on a new block or not.

I'm curious how both situations are handled.

Quote
When a getwork with a new previous block comes in, however, the queue instantly purges itself
and tries to prevent the Phoenix kernel from running any of the old work (regardless of whether it's full or half-full).

Does it mean that the work is dropped only if the hash of the previous block changes (good idea), and otherwise a new getwork is just added to the queue? I'm not sure how the bitcoin daemon would handle a proof-of-work for an old midstate that has the current prevblock (e.g. in case only the merkle root changed due to new transactions). Would it be accepted?
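For reference, the previous-block hash is bytes 4-36 of the 80-byte block header returned by getwork, so "new previous block" can be detected by comparing just that slice of the data field. A rough sketch (the helper name is made up for illustration):
Code:
def previous_block_changed(old_header, new_header):
    # Block header layout: version (0:4), previous block hash (4:36),
    # merkle root (36:68), then timestamp, bits and nonce.  Comparing
    # the raw 4:36 slice is enough to answer "did the previous block
    # change?", whatever byte order the getwork data happens to use.
    return old_header[4:36] != new_header[4:36]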

newbie
Activity: 15
Merit: 0
So I am running this on a stock 5870 and I am getting almost exactly the same hashrate as poclbm. Has poclbm been updated, or is there something wrong with my command line for Phoenix?

I'm using:

./phoenix.py -u -k poclbm VECTORS=on BFI_INT AGGRESSION=13 WORKSIZE=128 DEVICE=1

Should I be using something different?  Getting 365 Mh/s in both poclbm and phoenix.


I think vectors are specified with just "VECTORS", not "VECTORS=on". That might solve the performance issue.
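For what it's worth, kernel arguments in this style are normally split on '=', with bare tokens acting as boolean switches, which is why plain VECTORS works. A purely illustrative parser (not Phoenix's actual code):
Code:
def parse_kernel_args(args):
    # Bare tokens like VECTORS or BFI_INT become boolean switches;
    # KEY=VALUE tokens keep their value as a string.
    options = {}
    for arg in args:
        if '=' in arg:
            key, value = arg.split('=', 1)
            options[key.upper()] = value
        else:
            options[arg.upper()] = True
    return options

print parse_kernel_args(['VECTORS', 'BFI_INT', 'AGGRESSION=13', 'WORKSIZE=128'])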
member
Activity: 84
Merit: 10
So, taken together, can it be summarized that askrate=10 is sufficient and the same as askrate=5?
legendary
Activity: 1260
Merit: 1000
So I am running this on a stock 5870 and I am getting almost exactly the same hashrate as poclbm. Has poclbm been updated, or is there something wrong with my command line for Phoenix?

I'm using:

./phoenix.py -u -k poclbm VECTORS=on BFI_INT AGGRESSION=13 WORKSIZE=128 DEVICE=1

Should I be using something different?  Getting 365 Mh/s in both poclbm and phoenix.
member
Activity: 63
Merit: 10
In terms of full getwork responses, if the queue is already full when more work is received, the oldest work is discarded.
This doesn't affect the chances of finding a block, since it's just as likely that the new work contains a solution.

What if the work queue is half full and a new getwork with a new midstate is received?

Do you mean a new getwork with a new previous block hash? Because midstate would change on
every new getwork regardless of whether it's based on a new block or not.

When a getwork with a new previous block comes in, however, the queue instantly purges itself
and tries to prevent the Phoenix kernel from running any of the old work (regardless of whether it's full or half-full).
This can result in the devices going idle for a short period of time, if Phoenix can't restock its queue fast enough to
feed them all with new work... but it's better than having them work on shares that are just going to get
dropped on the floor the moment they are received by the pool server, at any rate.
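For anyone curious, here is a toy sketch of the policy described above -- drop the oldest entry when the queue is full, purge everything when the previous block changes. Class and method names are made up for illustration; this is not Phoenix's actual work queue:
Code:
from collections import deque

class SimpleWorkQueue(object):
    def __init__(self, size):
        # A deque with maxlen silently discards the oldest entry when full.
        self.queue = deque(maxlen=size)
        self.prev_block = None

    def add_work(self, work, prev_block):
        if prev_block != self.prev_block:
            # New previous block: everything queued is stale, purge it all.
            self.queue.clear()
            self.prev_block = prev_block
        self.queue.append(work)

    def get_work(self):
        return self.queue.popleft() if self.queue else None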
full member
Activity: 238
Merit: 100
In terms of full getwork responses, if the queue is already full when more work is received, the oldest work is discarded.
This doesn't affect the chances of finding a block, since it's just as likely that the new work contains a solution.

What if the work queue is half full and a new getwork with a new midstate is received?