
Topic: GUI mining - updated Dec 3 with 7970 bugfix, also supports Stratum! - page 59. (Read 3232159 times)

newbie
Activity: 4
Merit: 0
Sounds like your settings file is corrupted. Try renaming/deleting it and restarting the application.
Thanks.
%APPDATA%\poclbm\poclbm.ini was empty.
I deleted it and restarted the GUI, then it worked.
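For anyone hitting the same crash: an empty or truncated poclbm.ini makes json.load raise ValueError on startup. A loader that falls back to defaults avoids the crash entirely; a minimal sketch (hypothetical helper, not guiminer's actual code):

```python
import json
import os

def load_config_safe(path, defaults=None):
    """Load a JSON config file, falling back to defaults if the
    file is missing, empty, or contains invalid JSON."""
    if defaults is None:
        defaults = {}
    if not os.path.exists(path):
        return dict(defaults)
    try:
        with open(path) as f:
            return json.load(f)
    except ValueError:  # empty or corrupted file
        return dict(defaults)
```

Deleting the bad file, as suggested above, has the same effect: the GUI starts from defaults and writes out a valid config on exit.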
sr. member
Activity: 294
Merit: 252
Sounds like your settings file is corrupted. Try renaming/deleting it and restarting the application.
newbie
Activity: 4
Merit: 0
Today I got this message when starting up the client:
Code:
ERROR:root:Exception:
Traceback (most recent call last):
  File "guiminer.py", line 1790, in &lt;module&gt;
  File "guiminer.py", line 1379, in __init__
  File "guiminer.py", line 1524, in load_config
  File "json\__init__.pyo", line 267, in load
  File "json\__init__.pyo", line 307, in loads
  File "json\decoder.pyo", line 319, in decode
  File "json\decoder.pyo", line 338, in raw_decode
ValueError: No JSON object could be decoded
Traceback (most recent call last):
  File "guiminer.py", line 1790, in &lt;module&gt;
  File "guiminer.py", line 1379, in __init__
  File "guiminer.py", line 1524, in load_config
  File "json\__init__.pyo", line 267, in load
  File "json\__init__.pyo", line 307, in loads
  File "json\decoder.pyo", line 319, in decode
  File "json\decoder.pyo", line 338, in raw_decode
ValueError: No JSON object could be decoded

Does anyone know what the problem is?
newbie
Activity: 56
Merit: 0
If I'm trying to mine locally and I have 2x 6950s, should I be creating two miners, one for device 0 (Cayman) and one for device 1 (Cayman), or should I just create one miner for device 0, since they're in CrossFire?

Right now I have it on just (0) Cayman and I'm generating 260 Mhash/s. Is that good or bad?

Yes, you do need two miners, one for each card.
member
Activity: 82
Merit: 10
I'm generating 260 Mhash/s, is that good or bad?
I'm not an expert, but I think it's not running at full speed. I have a single 5850 that averages ~230 Mhash/s (default clocks, no OC at all). Based on that, yours should be much higher.
newbie
Activity: 19
Merit: 0
If I'm trying to mine locally and I have 2x 6950s, should I be creating two miners, one for device 0 (Cayman) and one for device 1 (Cayman), or should I just create one miner for device 0, since they're in CrossFire?

Right now I have it on just (0) Cayman and I'm generating 260 Mhash/s. Is that good or bad?
Kiv
full member
Activity: 162
Merit: 100
Sure, it should be easy for my program to support multiple languages through GNU gettext. I unfortunately don't know any other languages well enough to do a translation, though.

If you're offering to do a translation then I would be happy to get things set up for you and include your work in the next version.

Thanks for the donation, every BTC is appreciated Smiley

I'm not sure whether it was mentioned above; I'm asking about multilingual support. It would be great, especially if done in a way that allows adding new languages just by editing a template file and adding a new language file in a specified directory.
But anyway, my second BTC is going to your wallet. The first went to Momchil Smiley.

sr. member
Activity: 350
Merit: 250
I'm not sure whether it was mentioned above; I'm asking about multilingual support. It would be great, especially if done in a way that allows adding new languages just by editing a template file and adding a new language file in a specified directory.
But anyway, my second BTC is going to your wallet. The first went to Momchil Smiley.
Kiv
full member
Activity: 162
Merit: 100
Does the console view display both miners crunched together or will it display only one miner at a time?  I'm seeing lines similar to "Line: 64256 khash/s".  If it does display both miners, it would be helpful if the name of the miner was put in the front of the line, like "Primary: 64256 khash/s".

Those lines would show individual miners. You can see the hash rate of a specific miner by going to its tab and looking in the status bar; the total hash rate is summed up on the status bar of the Summary tab.

I'm probably going to remove those "Line" reports from the console now that the CUDA seems to be working ok - I'm trying to keep the console output sparse so it's easier to see things like connection errors.

Glad you liked the balance feature Smiley
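Prefixing console output with the miner's name, as suggested, only needs a small change wherever the GUI logs a miner's stdout; a sketch (hypothetical helper, not guiminer's actual code):

```python
def format_console_line(miner_name, line):
    """Prefix a raw miner output line with the miner's name,
    e.g. 'Line: 64256 khash/s' -> 'Primary: 64256 khash/s'."""
    text = line.strip()
    # Strip the generic 'Line:' marker if present, then tag with the name.
    if text.startswith("Line:"):
        text = text[len("Line:"):].strip()
    return "%s: %s" % (miner_name, text)
```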
newbie
Activity: 56
Merit: 0
I'm running 2011-04-03-pre and the CUDA miner stuff is all working quite well.  Sorry for the slow reply.  I've been messing with all manner of combinations of driver versions and console commands for my 460s and came up with pretty much what I expected from the get-go.  Now with driver version 270.51 and rpcminer-cuda arguments of -gpugrid 128 -gputhreads 128, I get 128.6 MHash/s for my two overclocked 460s.  I'm still thinking that I may not mine for much longer.  It doesn't seem to be worth it on nVidia hardware while paying for electricity.

I've got a question.  Does the console view display both miners crunched together or will it display only one miner at a time?  I'm seeing lines similar to "Line: 64256 khash/s".  If it does display both miners, it would be helpful if the name of the miner was put in the front of the line, like "Primary: 64256 khash/s".

By the way, the balance button is quite nice.
member
Activity: 93
Merit: 10
+1 for a great mining program  Smiley
sr. member
Activity: 294
Merit: 252
Hi,

I have installed it on Win 7 64-bit, but it seems not to work; it does not recognize the 64-bit Radeon driver.

I've got it installed on Win7 x64 and it works fine. What version of the Catalyst drivers have you installed?
newbie
Activity: 10
Merit: 0
Hi,

I have installed it on Win 7 64-bit, but it seems not to work; it does not recognize the 64-bit Radeon driver.
Kiv
full member
Activity: 162
Merit: 100
Thanks, good bug report. It looks like the CUDA version gives a slightly different output format than the other puddinpop miners, so I'll adjust the code for that. I'll have a new version with the fix on the weekend.

It always says 0 accepted even though I can look in the console and see the valid server response. It also displays the last line of the response in the status panel: Listener for "CUDA": Server sent: {"result":true,"error":null,"id":"1"}

Thanks again. Looking forward to an update.
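The fix presumably comes down to recognizing the CUDA miner's response lines when tallying shares. A sketch of the idea (hypothetical, not the actual guiminer code), assuming each accepted share produces a JSON response like the one quoted above:

```python
import json

def count_accepted(lines):
    """Count accepted shares from miner console output: a line containing
    a JSON object with "result": true counts as one accepted share."""
    accepted = 0
    for line in lines:
        # The CUDA miner prefixes responses, e.g. 'Server sent: {...}'.
        start = line.find("{")
        if start == -1:
            continue
        try:
            msg = json.loads(line[start:])
        except ValueError:
            continue
        if msg.get("result") is True:
            accepted += 1
    return accepted
```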
newbie
Activity: 34
Merit: 0
Hi everyone, by popular demand I'm putting up a beta version with support for puddinpop's RPC miners. I don't actually have access to CUDA hardware, so I need testers to see if that part works at all Smiley

The link is here:

    guiminer-20110403-pre.exe (self-extracting archive)

To use rpcminer for the backend, pick File -> New External Miner and then point to the miner EXE you want to use (CUDA, 4way, etc). I packaged the latest rpcminer with the GUI but you should be able to point to another version of the EXE if you want.

There's no device dropdown for these miners so to run on multiple devices you'll need to use the standard rpcminer flags.

Let me know if you have any luck and if it seems to be working I'll update the first post.

Thanks for the new version supporting puddinpop's CUDA miner. I didn't try the program before because the CUDA miner gives me a few extra MHash/s compared to poclbm. Anyway, everything works fine except that it does not record the number of accepted shares at the bottom correctly. It always says 0 accepted even though I can look in the console and see the valid server response. It also displays the last line of the response in the status panel: Listener for "CUDA": Server sent: {"result":true,"error":null,"id":"1"}

Thanks again. Looking forward to an update.
Kiv
full member
Activity: 162
Merit: 100

Then, in guiminer.py, in your frame subclass add:
Code:
if os.path.exists("guiminer.exe"):
    self.SetIcon(wx.Icon("guiminer.exe", wx.BITMAP_TYPE_ICO))

Windows should handle it natively after that, as long as the icon is in position 0.

Neat, I didn't realize you could create an wx.Icon from an executable.
newbie
Activity: 56
Merit: 0
As opposed to trying to fit multiple icons into the same .ico file, you can take a single icon of the maximum resolution for Windows (96x96x256) and build it directly into the EXE when compiling, then reference the EXE for the icon. Windows will automatically downscale an icon from an executable, but won't if it's a standalone .ico file (which is why a standalone file requires multiple sizes).

Change setup.py to have:
Code:
setup(windows=[
    {
        "script": "guiminer.py",
        "icon_resources": [(0, "guiminer.ico")]
    }
])

Then, in guiminer.py, in your frame subclass add:
Code:
if os.path.exists("guiminer.exe"):
    self.SetIcon(wx.Icon("guiminer.exe", wx.BITMAP_TYPE_ICO))

Windows should handle it natively after that, as long as the icon is in position 0.

The problem is that the taskbar icon was getting scaled from the wrong size anyway, which caused jagged edges on the circle. When Windows scales a 32-bit icon, it uses nearest-neighbor scaling rather than bilinear, which looks horrible. The largest icon Windows supports is 256x256x32-bit; however, since Windows does scale 8-bit icons with bilinear filtering, that may be one way to resolve this issue. Of course, with an 8-bit icon the edges can't be alpha-blended to smooth the lower-resolution versions, so it would have to be a big icon. In Windows 7 and Vista, Windows reads multi-res icons in the sizes 256x256, 64x64, 48x48, 32x32, 24x24, and 16x16, and ignores all other resolutions even if they're present. If you're going to do a multi-res icon, use only those sizes. If you do only one resolution in an icon, I would suggest a size divisible by 8; other than that, any resolution should be fine.

Also, Windows 7 and Vista support PNG compression for icons within .ico, .dll, and .exe files.  Just in case 256x256x32-bit seems too ridiculous.  Windows XP won't read that icon if it comes across it, but it should still parse the other sizes present in the file.

EDIT:  To help illustrate the problem, try this. Run Calculator (calc.exe). Now go to Windows\System32, right-click calc.exe, and select Properties. Look at the taskbar representations of the two programs (Calculator and the calc.exe Properties window). If you have "use small icons" turned on for your taskbar, you should see a significant difference in the clarity of the two buttons. You may also need to turn off "always combine" to see the Properties window in the taskbar, or you could close the System32 window you had opened. As far as I know, this issue only affects the taskbar when it wants to display a 16x16 icon and the 32x32 icon gets loaded instead. The same thing happens to the notification-area icons in all versions of Windows, but that has been a known issue. The taskbar crap is new since Vista.
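The size rules above can be checked mechanically when assembling a multi-res .ico; a small sketch (the size list comes from the post above, the helper itself is hypothetical):

```python
# Sizes that Windows 7 / Vista actually read from a multi-res icon,
# per the discussion above; all other resolutions are ignored.
WIN7_ICON_SIZES = {256, 64, 48, 32, 24, 16}

def usable_icon_sizes(sizes):
    """Filter a list of square icon sizes down to the ones Windows 7
    and Vista will read from a multi-resolution .ico file."""
    return sorted(s for s in set(sizes) if s in WIN7_ICON_SIZES)
```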
sr. member
Activity: 258
Merit: 250
As opposed to trying to fit multiple icons into the same .ico file, you can take a single icon of the maximum resolution for Windows (96x96x256) and build it directly into the EXE when compiling, then reference the EXE for the icon. Windows will automatically downscale an icon from an executable, but won't if it's a standalone .ico file (which is why a standalone file requires multiple sizes).

Change setup.py to have:
Code:
setup(windows=[
    {
        "script": "guiminer.py",
        "icon_resources": [(0, "guiminer.ico")]
    }
])

Then, in guiminer.py, in your frame subclass add:
Code:
if os.path.exists("guiminer.exe"):
    self.SetIcon(wx.Icon("guiminer.exe", wx.BITMAP_TYPE_ICO))

Windows should handle it natively after that, as long as the icon is in position 0.
newbie
Activity: 56
Merit: 0
It works on my 8600m GT. but i get lower khash/s and my computer is slower... So reverting back to opengl Smiley.

If you don't feed the CUDA miner specific gpugrid and gputhreads parameters, it will benchmark all of the available options to find the best one. Either due to a bug or driver issues, your first couple of runs of the CUDA miner may be slower than what you could get. Sometimes leaving your system idle for about 60 seconds gives better speeds; sometimes you have to run it immediately after closing it. It's somewhat random. If you do see one instance of it running at its fastest, take note of the last pair of numbers that shows up; they appear in the order (gpugrid, gputhreads). For me, the best pair ended up being (128, 128). Otherwise, you can just run the miner yourself, varying the inputs until you see the greatest average mining speed. The first report of the hash speed will be lower than all of the others, so give it some time.

Also, for the large majority of nVidia cards, your ask rate should be larger than the default. If you get less than 40 Mhash/s, it should be 10 seconds (10000 ms) or greater. Everybody has a different philosophy on the interval between getwork requests, so if you mine in a pool you may have to PM a pool operator for the settings they would like you to use; they'll need to know your hash rate. As for system slowness, there's an aggression setting you can raise or lower to change the responsiveness.
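The manual sweep described above can be scripted: run the miner once per (gpugrid, gputhreads) pair, record its reported hash rates, and keep the best pair. A sketch of the bookkeeping half (hypothetical; the samples would come from your own test runs):

```python
def best_parameters(results):
    """Given {(gpugrid, gputhreads): [hash-rate samples]}, return the
    pair with the highest average rate, skipping the first sample
    (the miner's first report is always low)."""
    def avg_rate(samples):
        usable = samples[1:] if len(samples) > 1 else samples
        return sum(usable) / float(len(usable))
    return max(results, key=lambda pair: avg_rate(results[pair]))
```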

Hi everyone, by popular demand I'm putting up a beta version with support for puddinpop's RPC miners. I don't actually have access to CUDA hardware, so I need testers to see if that part works at all Smiley

I haven't had the time to run it, yet.  I'll be doing that sometime within the next 24 hours.  I'm glad to test it out for you and offer any suggestions I can, since I've been toying with these miners for some time.

There's no device dropdown for these miners so to run on multiple devices you'll need to use the standard rpcminer flags.

Yeah, I can't see how you would fix this other than making a specific form for each miner.  As far as I know, puddinpop's RPC OpenCL miner and m0mchil's Python OpenCL miner yield pretty much the same hash rate on the majority of hardware.  I don't think there will be too much of an outrage if puddinpop's OpenCL miner gets left out, but it would still be a bit of a headache to set up hardware lists for each miner (OpenCL vs. CUDA).  The RPC CPU miner without the -4way suffix runs the same code as the bitcoin client, if I remember correctly; that one can probably be ignored as well... maybe.  Also, CUDA has the weird behavior of making my secondary card GPU 0 and my primary card GPU 1.  Just thought I'd pass that along.
newbie
Activity: 37
Merit: 0
First time poster, long time lurker.

GTX 560Ti (main video card)
GTX 460 (secondary card)

Both cards have been OC'd, not by much, roughly +100 MHz on the clock speed using stock air cooling.

Previously, with your GUI and poclbm, I was doing 132 Mhash/s (116 Mhash/s when not OC'd). I have since seen a combined increase of ~24 Mhash/s, bringing me up to ~156 Mhash/s. I'm assuming the difference is in the miner and not your GUI.

Overall, the GUI and miner seem to be stable and are not producing any errors or problems. I will update if this changes. Kiv, you have yet again done a great job; expect some coin in the mail.