
Topic: Phoenix - Efficient, fast, modular miner - page 8. (Read 760757 times)

vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 07, 2011, 11:49:34 AM
I'm not sure what's going on with the new 1.6.3 version, but I've just upgraded, and as soon as mining starts it throws out the following and never reconnects, making it unusable:
Code:
[07/10/2011 17:39:39] Phoenix v1.6.3 starting...
[07/10/2011 17:39:40] Connected to server
[07/10/2011 17:39:40] Currently on block: 148444
[07/10/2011 17:39:40] Disconnected from server
[07/10/2011 17:39:45] Result: 4d183fe6 accepted
[07/10/2011 17:39:51] Result: 5c8a01ea accepted
[07/10/2011 17:39:52] Warning: work queue empty, miner is idle
[07/10/2011 17:39:53] Connected to server
[07/10/2011 17:39:53] Disconnected from server
[07/10/2011 17:40:05] Warning: work queue empty, miner is idle

That doesn't sound good. Has anyone else tested this version out yet?
hero member
Activity: 927
Merit: 1000
฿itcoin ฿itcoin ฿itcoin
October 07, 2011, 11:42:38 AM
I'm not sure what's going on with the new 1.6.3 version, but I've just upgraded, and as soon as mining starts it throws out the following and never reconnects, making it unusable:
Code:
[07/10/2011 17:39:39] Phoenix v1.6.3 starting...
[07/10/2011 17:39:40] Connected to server
[07/10/2011 17:39:40] Currently on block: 148444
[07/10/2011 17:39:40] Disconnected from server
[07/10/2011 17:39:45] Result: 4d183fe6 accepted
[07/10/2011 17:39:51] Result: 5c8a01ea accepted
[07/10/2011 17:39:52] Warning: work queue empty, miner is idle
[07/10/2011 17:39:53] Connected to server
[07/10/2011 17:39:53] Disconnected from server
[07/10/2011 17:40:05] Warning: work queue empty, miner is idle
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 07, 2011, 07:31:31 AM
Quote
[16:42] <@Eleuthria> Phoenix is terrible and inefficient, it tosses out a getwork after finding a single share instead of exhausting the space.

I don't know where he got that idea. The only time Phoenix ever tosses a getwork is when the queue is purged on a block change (since those getworks would only produce stale work anyway). Otherwise it works through the entire 2^32 nonce space.

To anyone who is still using SVN: we are no longer using it for development. All the latest code is on GitHub. The latest SVN revision is significantly out of date now.

As for the issues users have been reporting, we are in the process of fixing several RPC bugs. It turns out that the RPC code for persistent connections completely ignores the 2-connection limit, which can result in many extra connections being created under high load (along with some other odd behavior).

Rolltime functionality is planned, but it won't be in the next version. For now, the RPC problems take priority.

Thanks for the reply; I'm glad to hear development is continuing. I will point El to your reply later today. Thanks again. When a new release is published, I will be more than happy to pay my part of the bounty.
legendary
Activity: 1386
Merit: 1097
October 07, 2011, 07:24:42 AM
jedi95: Thanks for the response. I found that I was still using the SVN repository (unfortunately, other pool users are using it too), which may explain my problems. Can you please state in big bold letters somewhere (e.g. the top post of this thread) that they should migrate to the current repository?

I understand that X-Roll-NTime is a lower priority for you. However, can you at least add a simple timeout on job validity? Ideally 60-120 seconds. There are many reasons why miners should not use old jobs; some slow miners use one job for 5 minutes or even more! And one request per minute or two can hardly be called 'inefficiency' or 'network overhead'. Thanks a lot!
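
A validity timeout like this would be conceptually tiny. As a minimal sketch (illustrative Python, not Phoenix's actual classes; WorkItem and WORK_MAX_AGE are made-up names):
Code:
import time

WORK_MAX_AGE = 90  # seconds; somewhere in the suggested 60-120s window

class WorkItem:
    """Hypothetical wrapper around one getwork result."""
    def __init__(self, data):
        self.data = data
        self.fetched = time.time()

    def expired(self):
        # discard jobs older than the limit and request fresh work instead
        return time.time() - self.fetched > WORK_MAX_AGE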
full member
Activity: 219
Merit: 120
October 06, 2011, 08:33:40 PM
Quote
[16:42] <@Eleuthria> Phoenix is terrible and inefficient, it tosses out a getwork after finding a single share instead of exhausting the space.

I don't know where he got that idea. The only time Phoenix ever tosses a getwork is when the queue is purged on a block change (since those getworks would only produce stale work anyway). Otherwise it works through the entire 2^32 nonce space.
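
Schematically, that behavior looks like the following (an illustration only, not Phoenix's actual code; the names are made up, and a real miner does this on the GPU, not in a Python loop):
Code:
import hashlib
import struct

def hash_header(header76, nonce):
    # Bitcoin block hashing: double SHA-256 over the 76-byte header
    # prefix plus the 4-byte little-endian nonce, byte-reversed
    blob = header76 + struct.pack('<I', nonce)
    return hashlib.sha256(hashlib.sha256(blob).digest()).digest()[::-1]

def scan_getwork(header76, target, submit):
    # exhaust the full 2^32 nonce space of one getwork, submitting
    # every share found along the way instead of refetching work
    for nonce in range(2 ** 32):
        if hash_header(header76, nonce) <= target:
            submit(nonce)
    # only after the whole range is exhausted is a new getwork needed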

To anyone who is still using SVN: we are no longer using it for development. All the latest code is on GitHub. The latest SVN revision is significantly out of date now.

As for the issues users have been reporting, we are in the process of fixing several RPC bugs. It turns out that the RPC code for persistent connections completely ignores the 2-connection limit, which can result in many extra connections being created under high load (along with some other odd behavior).

Rolltime functionality is planned, but it won't be in the next version. For now, the RPC problems take priority.
legendary
Activity: 1386
Merit: 1097
October 06, 2011, 02:47:34 PM
Yes, Aldiyen's fork works for me now. The odd part is that stock phoenix (latest version) _now_ works as well. I didn't change anything; maybe my connection is slightly better or something, but now I have 0.2% stales with both versions. Previously it was almost 30% stale.

But Aldiyen's fork has ntime rolling, which is a very nice feature for strong rigs. I haven't watched it carefully, but my feeling is that it performs fewer getworks than stock phoenix. So far I can recommend it.
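
For context: ntime rolling means squeezing extra work out of a single getwork by advancing the timestamp field of the block header. As a rough sketch over a raw 80-byte header (an illustration only, not Aldiyen's actual code):
Code:
import struct

NTIME_OFFSET = 68  # version(4) + prev_hash(32) + merkle_root(32)

def roll_ntime(header, step=1):
    # return a copy of a raw 80-byte block header with nTime advanced;
    # each rolled header is a fresh piece of work from the same getwork.
    # (getwork's 'data' field is word-swapped hex, so a real miner has
    # to convert before and after rolling)
    ntime, = struct.unpack_from('<I', header, NTIME_OFFSET)
    rolled = bytearray(header)
    struct.pack_into('<I', rolled, NTIME_OFFSET, ntime + step)
    return bytes(rolled)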
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 06, 2011, 02:17:57 PM
Looks interesting. I'm now testing Aldiyen's fork; he also fixed some network errors. Maybe it will solve my issues as well...

Any luck, slush?
legendary
Activity: 1386
Merit: 1097
October 06, 2011, 10:17:04 AM
Looks interesting. I'm now testing Aldiyen's fork; he also fixed some network errors. Maybe it will solve my issues as well...
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 06, 2011, 10:04:36 AM
On github ( https://github.com/jedi95/Phoenix-Miner/network ), it looks like aldiyen has added rollNTime support to his git repo.
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 06, 2011, 09:51:47 AM
Phoenix is the most used miner on my pool (together with poclbm), so I'll pay something for fixing the bugs I described above, plus adding correct X-Roll-NTime support. I have no idea how much work that is for somebody who already knows phoenix internals, but would 20 BTC be interesting?


I'll add another 10 BTC to the bounty to see these issues fixed.
legendary
Activity: 1386
Merit: 1097
October 06, 2011, 09:46:00 AM
Phoenix is the most used miner on my pool (together with poclbm), so I'll pay something for fixing the bugs I described above, plus adding correct X-Roll-NTime support. I have no idea how much work that is for somebody who already knows phoenix internals, but would 20 BTC be interesting?
legendary
Activity: 1386
Merit: 1097
October 06, 2011, 09:43:41 AM
Well, I'm sure there are a lot of people using Phoenix to mine. Is there anyone who can fork the project and create a new release fixing these issues if Jedi95 is not around? Again, I am willing to pay to see the phoenix miner properly cleaned up with the necessary changes made. Maybe we should all start a bounty? That might get development on the project going again.

I tried to understand phoenix's sources and fix those issues myself, but it's an asynchronous mess inside. Async programming is good for everything except understanding and debugging.

Btw, I tested phoenix r111 on my problematic miner and it worked like a charm overnight. I have 0.1% stales on 12000 shares so far. It's not as efficient as poclbm (as it does not implement the roll-ntime extension correctly), but at least it has a lower stale rate for me.
brand new
Activity: 0
Merit: 250
October 04, 2011, 02:22:50 AM
^^

generalfault - if that solves my problems then you are a star... does this progressively eat CPU cycles as time goes on? I've already written a Ruby script (I used to be a coder more than a decade ago... Ruby got me interested in messing around with code again), but it works on the logged output of Phoenix, which ends up being a 20-megabyte file after a day's work...

You say that pipes the output into your perl script - does the voluminous output from phoenix get processed by your perl code and then end up in the bit bucket, or is that 20 MB of output being stored in RAM somewhere?

Perl (especially the way you've written it) is somewhat opaque to me (and I used to be a C coder, so that goes to show there are two types of coder: those who understand regular expressions and those who don't - and yes, I know regexps are fully supported in Ruby; I ought to learn how they really work some day). I can't tell immediately from the code whether it's doing exactly what my Ruby code does, which is open the file and then scan *backwards* from the end looking for the last result.

The annoying thing is that the Ruby script eventually slows down the entire machine waiting on disk I/O - all these miners run on USB flashdrives, and I presume Ruby's File object, even when scanning backwards, is reading the entire file into RAM (going backwards from the end of a file that's constantly being written to is a tricky proposition)...
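
A constant-cost alternative is to seek to the end and read back only the last few kilobytes instead of the whole file. As a sketch (in Python rather than the poster's Ruby, with the 4 KB tail size an arbitrary choice):
Code:
def tail_lines(path, nbytes=4096):
    # read only the last nbytes of the log; cost stays constant
    # no matter how large the file grows
    with open(path, 'rb') as f:
        f.seek(0, 2)                      # jump to the end of the file
        size = f.tell()
        f.seek(max(0, size - nbytes))
        chunk = f.read().decode('ascii', 'replace')
    # Phoenix redraws its status line with ^H; treating backspaces as
    # line breaks makes the final status line easy to recover
    return chunk.replace('\b', '\n').splitlines()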

If you could let me know what's different between your code and mine, perhaps I can rewrite the Ruby script to work the same way (i.e. pipe the stdout from phoenix into Ruby stdin). If this is something cool about Perl then I'll use your script and most certainly will send you some BTC.

PM me if you can. And many thanks!
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 06, 2011, 09:22:56 AM
No, I was referring to the problems posted by slush.

Well, I'm sure there are a lot of people using Phoenix to mine. Is there anyone who can fork the project and create a new release fixing these issues if Jedi95 is not around? Again, I am willing to pay to see the phoenix miner properly cleaned up with the necessary changes made. Maybe we should all start a bounty? That might get development on the project going again.
full member
Activity: 221
Merit: 100
October 06, 2011, 08:52:47 AM
Sure would be nice if jedi95 would address these current problems; his last post was 2 months ago.

Are these the open issues over at github.com, or are there other outstanding issues not yet posted?


No, I was referring to the problems posted by slush.
brand new
Activity: 0
Merit: 250
October 02, 2011, 05:03:35 PM
Quick question, because I haven't read all 50 pages - it may already have been answered, but it isn't in the first post.

I've been running Phoenix since 1.48, then 1.50 (most of them), one instance of 1.60, and now I'm about to try 1.6.2. Having streaming status output from a load of separate miners isn't convenient (especially with 4 instances per machine and 6 machines). Instead, I redirect all Phoenix output for each GPU to its own log file, and then every 5 seconds or so run a Ruby script which opens each logfile and scans backwards to get the important information (hashrate, work units accepted, work units rejected, and the timestamp and message from the last output).

For example, running Phoenix in the terminal gives something like this (on a slow Mac...):

[02/10/2011 22:42:25] Result: e389dbcb accepted           
[02/10/2011 22:43:19] Result: 6fff67fd accepted           
[02/10/2011 22:45:52] Result: 28a3f178 accepted           
[70.60 Mhash/sec] [7822 Accepted] [12 Rejected] [RPC (+LP)]


The data I'm interested in picking out is the hashrate, the accepted/rejected counts, and the last timestamped message. I take this data from each instance (4 per machine) and build a chunk of HTML containing it. Each machine does the same thing, and my main webserver then does a simple server-side include for each miner box to create a consolidated table of all machines, cards, and their status (it also does other stuff, like adding in the temperature and clock from aticonfig, etc.). I'm happy with this approach; it's nice and neat. But over time, as the log files grow, parsing them becomes slow. Yes, I need to optimise my Ruby code. But...

In the log file itself, there are a LOT of ^H characters; presumably this is how Phoenix keeps the last line updated with the current hashrate. It's a pain in the arse for my Ruby script to parse... but MUCH more seriously, these log files grow to enormous sizes over time. Some of my machines are pretty reliable and rarely lock up or overheat; with those, the Phoenix output can grow to hundreds of megabytes.

This gives me two problems. Firstly, the Ruby scripts eat much more CPU than they need to, scanning through multi-MB files. Secondly, my miners use USB flashdrives as their boot devices, and while they're all 16 GB drives, files growing to multi-megabyte size doesn't help the speed or the useful life of the flash.


Is there any way to get immediate data out of a Phoenix instance - for example, the hashrate, the results accepted / rejected, and the last status message / timestamp?

Or, since I'm not a Linux god (my Unix skills come from being a Mac OS X guru) - is there any funky file type I can send the stdout from Phoenix to that I can still read from Ruby, but that can be 'trimmed' regularly (i.e. old entries removed), or that can be capped at a specific size it won't grow beyond? Sort of like a FIFO, but allowing, say, 32K of log text to build up before dropping the first bytes?
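
One way to get exactly that FIFO-like behavior is to pipe Phoenix's stdout through a small filter that keeps a bounded number of recent lines and rewrites a fixed-size status file. As a sketch (illustrative Python; the 400-line cap stands in for "about 32K of log text" and status.log is a made-up filename):
Code:
import sys
from collections import deque

MAX_LINES = 400  # stand-in for roughly 32K of log text

lines = deque(maxlen=MAX_LINES)  # oldest entries fall off automatically

for raw in sys.stdin:
    # Phoenix separates status updates with ^H rather than newlines,
    # so split on backspaces as well
    for line in raw.replace('\b', '\n').splitlines():
        if line.strip():
            lines.append(line)
    # rewrite the capped file on each batch; it never grows past the cap
    with open('status.log', 'w') as out:
        out.write('\n'.join(lines) + '\n')

Run it by piping phoenix's stdout into the script; the parser can then read status.log, which stays small enough to be flash-friendly.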
vip
Activity: 1358
Merit: 1000
AKA: gigavps
October 06, 2011, 06:57:51 AM
Sure would be nice if jedi95 would address these current problems; his last post was 2 months ago.

Are these the open issues over at github.com, or are there other outstanding issues not yet posted?

EDIT:

The reason I am interested is that El over at btcguild.com has implemented QoS in his pool server, and it doesn't play nice with phoenix. Here is the conversation:

[16:37] @Eleuthria still getting miner idle warnings, since you suggested it was somehow on my end i reset all network equipment
[16:37] [05/10/2011 20:34:26] Phoenix v1.6.2 starting... [05/10/2011 20:34:26] Setting auto kill signal for 180 seconds. [05/10/2011 20:34:26] Connected to server [05/10/2011 20:34:26] Currently on block: 148211 [05/10/2011 20:34:30] Result: 050dafcb accepted [05/10/2011 20:34:33] Result: c74016fc accepted [05/10/2011 20:34:34] Result: 099d91dd accepted [05/10/2011 20:34:45] Result: 6c6bc416 accepted [05/10/2011 20:34:46] Result: 2e039bbe accepted [05/10/2011 20:34:50] Result: 0a9de4ce accepted [05/10/2011 20:35:08
[16:37] [05/10/2011 20:35:51] Warning: work queue empty, miner is idle
[16:41] <@Eleuthria> I'd suggest a better miner if you're running that fast on a single client
[16:41] all gpus are 310-317 Mh/s
[16:42] i'm using phoenix
[16:42] are you suggesting that phoenix is not a good miner?
[16:42] <@Eleuthria> Yes.
[16:42] <@Eleuthria> Phoenix is terrible and inefficient, it tosses out a getwork after finding a single share instead of exhausting the space
[16:44] <@Eleuthria> cgminer is better in every way
[16:44] i am all for learning and improving, but why do i not have these problems on other pools like arsbitcoin.com, yourbtc.net, mainframe.nl?
[16:44] <@Eleuthria> But if you insist on using phoenix, add -q 2 or -q 3
[16:45] <@Eleuthria> Because we put you in the back of the line after just feeding you work.
[16:45] <@Eleuthria> So you can't just spam us with getwork requests.
[16:45] <@Eleuthria> Which is what's happening in your log
[16:45] <@Eleuthria> You finished 3 shares in under 3 seconds
[16:45] you're the guy with the 20GH/s operation?
[16:46] <@Eleuthria> On phoenix, that means you're issuing a new getwork multiple times per second
[16:46] <@Eleuthria> Even though a single getwork can contain multiple shares
[16:46] <@Eleuthria> poclbm fixed that, cgminer never had that problem, and my understanding is diablo never had that issue either.

Is there someone watching this thread that can fix this issue?

I would be willing to offer a bounty to have phoenix patched and a new release of the miner created.
full member
Activity: 221
Merit: 100
October 05, 2011, 01:15:33 PM
Sure would be nice if jedi95 would address these current problems; his last post was 2 months ago.
newbie
Activity: 26
Merit: 0
October 03, 2011, 08:00:47 PM

Is there any way to get immediate data out of a Phoenix instance - for example, the hashrate, the results accepted / rejected, and the last status message / timestamp?

Or, since I'm not a Linux god (my Unix skills come from being a Mac OS X guru) - is there any funky file type I can send the stdout from Phoenix to that I can still read from Ruby, but that can be 'trimmed' regularly (i.e. old entries removed), or that can be capped at a specific size it won't grow beyond? Sort of like a FIFO, but allowing, say, 32K of log text to build up before dropping the first bytes?

So, I liked that question for some reason (most likely because I had a similar problem).
Here is a little perl script:

Code:
#!/usr/bin/perl

my @s;          # buffer of pending log lines
$/ = "\b";      # split input records on ^H (backspace), since
                # Phoenix redraws its status line with them
while (<>) {
        next if (!m/^\[/);   # keep only timestamped lines like "[07/10/2011 ...]"
        chomp;               # strip the trailing \b record separator
        print "$_\n";        # echo to the terminal as usual
        # flush after 20 buffered lines, or immediately on shares/new work
        if (push(@s, $_) > 20 or m/(Result|New Work)/) {
                open(FH, ">>", "out.log");
                print FH join("\n", @s) . "\n";
                close FH;
                @s = ();
        }
}

you'd just pipe the output of phoenix into it:
./phoenix blah blah blah |./logger

It buffers up to 20 lines at a time and appends them to out.log.

If you are parsing out.log, then you'd just:

mv out.log out.log.parsing

parse the renamed file, then:

rm out.log.parsing

and a new out.log will be created on the next write.
legendary
Activity: 1344
Merit: 1004
October 02, 2011, 03:57:50 PM
Ok so I installed 1.64 (replacing 1.50) and my 5870s perform about 10 MH/s faster.  However, my 5970 is about 10 MH/s slower per core, so I guess it's a wash.  Anybody have any tips on parameters for a 5970?  I'm currently using BFI_INT VECTORS FASTLOOP=FALSE AGGRESSION=11 WORKSIZE=256 -k phatk2 and getting about 390 MH/s per core at 910/200MHz.

Where did you find phoenix 1.64? The latest that's publicly out is 1.6.2.