Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 600. (Read 5805546 times)

legendary
Activity: 1260
Merit: 1000
Are we somehow stuck in the current display format? Why not compact the display? Do away with the share-submission lines, or compact them to one line or something. Who cares what the hash of the share being submitted is? It's useful under limited circumstances, but it's really just fluff and not relevant.

In a standard 24-line terminal, 5 of cgminer's lines are static lines that can be removed. Each line for a GPU/BFL/ICA unit has lots of whitespace that can be compacted, possibly giving the ability to display 2 units per line instead of 1.

The line beginning with TQ is completely irrelevant to immediate status information and can be moved to another screen. The "Connected to..." line can be combined with the Block line below it and/or moved to another screen for expanded display information.

That gives ~19 lines of display for unit status information, possibly with two units per line, for a total of 38 units per miner... While that limit still sucks (a full rig box will get close to it), it's far more reasonable than 8.

Also - remove the fractional Mh/s display ... just use whole numbers. You can definitely fit two units per line with a little creative compacting of the relevant information.
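The compaction being proposed can be sketched quickly. This is a hypothetical layout, not cgminer's actual format: whole-number Mh/s, minimal whitespace, two device cells per terminal row.

```python
# Sketch of a compacted status line: two devices per row, whole-number
# Mh/s.  Field layout and names here are hypothetical illustrations.

def device_cell(idx, mhs, accepted, rejected, hw_errors):
    # Round the hashrate to a whole number, as suggested above.
    return f"{idx}:{round(mhs):4d}Mh/s A:{accepted} R:{rejected} HW:{hw_errors}"

devices = [
    (0, 412.7, 1021, 4, 0),
    (1, 409.2, 998, 6, 1),
    (2, 78.4, 310, 2, 0),
    (3, 80.1, 305, 1, 0),
]
cells = [device_cell(*d) for d in devices]
# Pack two device cells into each display row.
rows = [" | ".join(cells[i:i + 2]) for i in range(0, len(cells), 2)]
```

With 19 usable rows this packing would indeed show up to 38 devices.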
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
For my 5xxx cards, I found that 2.2.6 works best. 2.3.3 also works fine, although the cursor jumps around the screen randomly.

I did see a performance drop when I switched to the diablo kernel; I don't know why.
The diablo kernel is only for 2.6 SDK based installations. That said, the custom modified poclbm kernel I include is probably equally good, if not better, for 2.6 SDK based installations. However, 2.1/2.4/2.5 SDK installations will usually work best with the phatk kernel (which is chosen automatically anyway). As for why your cursor jumps all over the screen randomly, NFI.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
For my 5xxx cards, I found that 2.2.6 works best. 2.3.3 also works fine, although the cursor jumps around the screen randomly.

I did see a performance drop when I switched to the diablo kernel; I don't know why.
hero member
Activity: 518
Merit: 500
As a workaround, you could run several instances of cgminer, each managing 8 devices (or whatever number works). You can define which devices each instance manages by using --device.
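A sketch of that workaround, building the two command lines. The pool URL and credentials are placeholders, and I'm assuming the repeated -d/--device form (one device index per flag); check your cgminer version's README for the exact syntax:

```python
# Build command lines for two cgminer instances, each managing 8 devices.
# Pool URL and credentials below are placeholders, not real settings.

def instance_cmd(devices, api_port):
    cmd = [
        "cgminer",
        "-o", "http://pool.example.com:8332",
        "-u", "worker", "-p", "x",
        "--api-listen", "--api-port", str(api_port),
    ]
    for d in devices:
        cmd += ["-d", str(d)]  # one -d flag per device this instance uses
    return cmd

first = instance_cmd(range(0, 8), 4028)    # devices 0-7
second = instance_cmd(range(8, 16), 4029)  # devices 8-15
# Each list can then be launched with subprocess.Popen(first), etc.
```

Giving each instance its own --api-port also means any external monitor can still query both.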

In the long run, I think it's clear the GUI has to be separated from the miner, so ckolivas can concentrate on doing what he does best, and any half-competent dev can make a GUI using the API. Of course, the API should then work and not crash; that crash is something I hadn't heard of before, and I hope someone looks into it.

In that context, I'd also like to bump my earlier feature request:

Got another feature request for the API. Two, in fact:

1) Be able to read the worker name of a pool over the API.
When using multiple workers for things like GPUmax, it's currently impossible to distinguish between them. I can understand the password being hidden, but the username is even visible on the screen, so why not expose at least the worker name over the API when calling the "pools" command?

2) Ability to delete pools over the API.
This has been discussed before; since it's possible in the CLI, it would be great if it were enabled over the API too. It's one of the last things an alternate GUI using the API can't do.

I'll pledge 2.5 BTC for each feature if they make it into the mainline.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Reluctant to add another line for something so flaky, variable, unreliable, and just plain broken. It doesn't make sense. Alas, curses just isn't flexible enough to cope with this sort of usage; it has some idea of being used on fixed terminals, where resizing is a big deal. curses...
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Who was the moron who thought up 8? Me. 14 was the tipping point. Beyond 14 the display became corrupt. It is impossible to create "windows" off-screen with a curses display meaning that when I tried to make the window larger, it generated all sorts of annoying problems to do with trying to "resuscitate" the display when it's resized to a reasonable size. Now I'm quite sure you really could not care less about the reasons behind the annoying code problems that affect me. The reason I chose 8 was that I figured anyone with more than 8 devices is likely to at some stage get to 16, 20, whatever, and NOT be using GPUs, since most drivers are limited to 8 devices. Bad choice? Maybe, but there is another limit at 14 which will affect you at some stage even if you're okay at 12. So I can easily change it to 14 on the next release, but then you'll be burnt beyond that. When I created the curses interface for cgminer I never anticipated device numbers would be a limit. I CAN rewrite it entirely to make no upper limit but that would involve doing some annoying code rewrites.
I was thinking (well, I've actually already done the change on my machine) of making it 8 by default (though 14 now, since you said there is some issue above 14) and adding a parameter --max-status-lines that people can use, accepting the consequences of their own choice if they resize the screen?

No idea if that is problematic or not - but if resizing the screen before starting cgminer works OK for more than 14 (and you don't shrink it later), then those with many FPGAs who still want to see all that info in a single terminal window can cause their own problems, if they wish, by using the --max-status-lines option?

(and adding the option is of course a simple code change)

Though, I guess, I can see that coming back in the future to bite you, because people will then say it doesn't work when they use that option and resize the screen to mess it up ...
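The clamping logic being proposed might look something like this. A hypothetical sketch in Python rather than cgminer's C, using the 14-line safe ceiling and the 5 static header lines mentioned earlier in the thread:

```python
# Hypothetical sketch of the proposed --max-status-lines behaviour:
# one status line per device, but never more than the user's cap,
# the known-safe ceiling, or what the terminal actually has room for.

SAFE_CEILING = 14   # the curses display reportedly corrupts beyond 14
HEADER_ROWS = 5     # static lines at the top of the display

def device_status_lines(n_devices, term_rows, max_status_lines=None):
    usable = max(term_rows - HEADER_ROWS, 0)
    cap = max_status_lines if max_status_lines is not None else SAFE_CEILING
    return min(n_devices, cap, usable)
```

With this shape, passing --max-status-lines simply overrides the safe default, and anyone who then resizes the terminal too small is limited by the `usable` term instead.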
sr. member
Activity: 446
Merit: 250
Who was the moron who thought up 8? Me. 14 was the tipping point. Beyond 14 the display became corrupt. It is impossible to create "windows" off-screen with a curses display meaning that when I tried to make the window larger, it generated all sorts of annoying problems to do with trying to "resuscitate" the display when it's resized to a reasonable size. Now I'm quite sure you really could not care less about the reasons behind the annoying code problems that affect me. The reason I chose 8 was that I figured anyone with more than 8 devices is likely to at some stage get to 16, 20, whatever, and NOT be using GPUs, since most drivers are limited to 8 devices. Bad choice? Maybe, but there is another limit at 14 which will affect you at some stage even if you're okay at 12. So I can easily change it to 14 on the next release, but then you'll be burnt beyond that. When I created the curses interface for cgminer I never anticipated device numbers would be a limit. I CAN rewrite it entirely to make no upper limit but that would involve doing some annoying code rewrites.

Thank you for the explanation. Raising the limit to 14 would be a great option; if you add it next time around, it would definitely help.

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Who was the moron who thought up 8? Me. 14 was the tipping point. Beyond 14 the display became corrupt. It is impossible to create "windows" off-screen with a curses display meaning that when I tried to make the window larger, it generated all sorts of annoying problems to do with trying to "resuscitate" the display when it's resized to a reasonable size. Now I'm quite sure you really could not care less about the reasons behind the annoying code problems that affect me. The reason I chose 8 was that I figured anyone with more than 8 devices is likely to at some stage get to 16, 20, whatever, and NOT be using GPUs, since most drivers are limited to 8 devices. Bad choice? Maybe, but there is another limit at 14 which will affect you at some stage even if you're okay at 12. So I can easily change it to 14 on the next release, but then you'll be burnt beyond that. When I created the curses interface for cgminer I never anticipated device numbers would be a limit. I CAN rewrite it entirely to make no upper limit but that would involve doing some annoying code rewrites.
sr. member
Activity: 446
Merit: 250
Ok, so I haven't been following the cgminer thread... but I stopped in here because of this horrible commit that removes the display of more than 8 devices. This basically makes cgminer useless for large farms. I was just getting to the point where I was digging cgminer and getting comfortable with it, so I cranked it up on one of my larger boxes and, lo and behold, it became a useless brick.

Ugh.  I can't really fathom how the thought process went in removing this.  Why would you want to remove literally the most important information from cgminer?  Every other bit of information cgminer provides is less relevant than the current hashrate and temps of the units that are connected.  You can literally take every single other piece of information and do away with it and cgminer will retain value.  However, removing that information makes cgminer less functional than every other mining program out there.  So... what was the thought process behind that?


+1

I too wish to know why it is limited to 8 - why not 6, 12, or 30? Why not allow as many as one wants; if it gets to be too many, that individual could decide to run more instances or something.

I've had to revert to 2.3.1-2 so I can at least see my individual hash rates.
legendary
Activity: 1260
Merit: 1000
Ok, so I haven't been following the cgminer thread... but I stopped in here because of this horrible commit that removes the display of more than 8 devices. This basically makes cgminer useless for large farms. I was just getting to the point where I was digging cgminer and getting comfortable with it, so I cranked it up on one of my larger boxes and, lo and behold, it became a useless brick.

Ugh.  I can't really fathom how the thought process went in removing this.  Why would you want to remove literally the most important information from cgminer?  Every other bit of information cgminer provides is less relevant than the current hashrate and temps of the units that are connected.  You can literally take every single other piece of information and do away with it and cgminer will retain value.  However, removing that information makes cgminer less functional than every other mining program out there.  So... what was the thought process behind that?

Before someone mentions the API for a different front end: the API does not work on all machines. I have 3 machines in my farm that will crash if the API is enabled. Disabling the API is the only thing that allows cgminer to run on those machines. The API is not the solution.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
No, there was no objection to this fix. You're thinking of something else, where Kano admitted his objection was unreasonable.
The land of the pedants, this be, where I must be proven wrong. What a fun game. This is starting to feel like the linux kernel mailing list and I will start ignoring people if it continues. Yes there were 3 different code issues in question. One was the PGA display thing, one was the hidden CPU mining bugs, and one was to make cgminer's display work with more than 14 devices. I refused all of them. Your debate was over the first. My issue is you muddied the water and somehow made this discussion about me being a tyrant and not accepting obvious fixes. The display thing was pretty much unrelated. Stretch this out any further, and I'll ignore you.
legendary
Activity: 2576
Merit: 1186
By the way, another example to demonstrate multiple points that have come up tonight...

Gigavps recently came into the #CGMiner channel to report a bug about the "semi-graphical" command line display malfunctioning with more than 16 devices - he had just turned on a total of 17+ FPGAs.

How did I confirm this bug? Not with 17 FPGAs - can't expect every developer to have that kind of equipment handy for testing - but by using CPUmining to generate 17 CPU threads. So yet another thing CPUmining helps test is CGMiner's basic frameworks themselves.

Unfortunately, ckolivas expressed that he would refuse to merge a fix for this issue even if I wrote it. Pretty much defeats the point. (Though I did still offer to debug and write the fix for Gigavps, at a reasonable per-hour cost; I can't blame him for declining, considering it wouldn't get merged)

P.S. Kanoi, thanks for digging out the logs which show you alone agreed to abide by your poll, but #CGMiner is a private channel and posting logs publicly is technically forbidden.
Yes, I'm a real bad guy, I don't accept code, and I keep the cgminer channel private. Playing on words makes this whole discussion look ridiculous. There is no rule about posting IRC logs publicly. It is not even remotely a private channel, and I happily accept code when it's good, useful, and not controversial. The code in question, Kano had a reasonable objection to in an area that affected him and his code, and I don't have an opinion regarding it, so I wasn't going to make the decision (unlike other times). I don't care how you spin this, but unless you and Kano can agree on it, I will continue to refuse to accept it.
No, there was no objection to this fix. You're thinking of something else, where Kano admitted his objection was unreasonable.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
By the way, another example to demonstrate multiple points that have come up tonight...

Gigavps recently came into the #CGMiner channel to report a bug about the "semi-graphical" command line display malfunctioning with more than 16 devices - he had just turned on a total of 17+ FPGAs.

How did I confirm this bug? Not with 17 FPGAs - can't expect every developer to have that kind of equipment handy for testing - but by using CPUmining to generate 17 CPU threads. So yet another thing CPUmining helps test is CGMiner's basic frameworks themselves.

Unfortunately, ckolivas expressed that he would refuse to merge a fix for this issue even if I wrote it. Pretty much defeats the point. (Though I did still offer to debug and write the fix for Gigavps, at a reasonable per-hour cost; I can't blame him for declining, considering it wouldn't get merged)

P.S. Kanoi, thanks for digging out the logs which show you alone agreed to abide by your poll, but #CGMiner is a private channel and posting logs publicly is technically forbidden.
Yes, I'm a real bad guy, I don't accept code, and I keep the cgminer channel private. Playing on words makes this whole discussion look ridiculous. There is no rule about posting IRC logs publicly. It is not even remotely a private channel, and I happily accept code when it's good, useful, and not controversial. The code in question, Kano had a reasonable objection to in an area that affected him and his code, and I don't have an opinion regarding it, so I wasn't going to make the decision (unlike other times). I don't care how you spin this, but unless you and Kano can agree on it, I will continue to refuse to accept it.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I just downloaded cgminer 2.3.3 and thought I'd give it a try. My initial impression is that it's rather buggy, and general usage is complicated by the long list of options displayed by cgminer --help.

Other miners I've used, like phoenix and ufa-soft miner, do not exhibit such problems and work as expected. On the other hand, I can't even get a proper benchmark readout from cgminer. A search of this thread shows someone else with nvidia hardware encountering similar problems and difficulty, though they were using an earlier version, 2.2.3. Apparently the latest cgminer build hasn't addressed these problems yet.
You can't have your cake and eat it too. Either you have lots of power and lots of features, and with that, involved ways to set up and use all those features, or you don't. It doesn't look like you ever actually managed to get it working at all, and blaming the software is a great way to sort out the problem. At the very least, start the program with -D --verbose -T and then give us the output, the way the documentation in the README says to when you have a problem. Didn't read the README? Of course not; it's too long. Why is it too long? Because cgminer has 100 times as many features as other miners, so the documentation needs to be extensive. It's a catch-22. cgminer is extensive machinery; it is NOT a fancy GUI app. For that, use guiminer.
legendary
Activity: 1876
Merit: 1000
Just be sure to never save the config ...


Speaking of saving the config... I finally got around to trying this through the API. When I did, the voltages were all 0.00 :(
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
While it may be true that a minimal interface is adequate, those of us who happen to have more than 8 devices on one cgminer, and can no longer monitor them individually, are not receiving the same output that others see with fewer than 9 devices.

I can no longer monitor the temps, fan speeds, individual hash rates, rejects, or hardware errors. There is no way of knowing IF one of the devices has a problem and, even if you knew, no way of figuring out which one has the problem either.

I have a mix of GPUs and PGAs. That output is important to me. Why limit it to 8 devices?
Valid concern, yes. I haven't tried this, but if you had 16 devices (as an example) could you run 2 instances of cgminer on the same machine, each servicing 8 devices?

That is kind of what i am thinking. Not nearly as slick but doable. I'll be testing that later today.
Um ... the sample miner.php now allows you to show multiple rigs with a single script
(which can run anywhere on your network you like - though you'd need the latest miner.php version from my git until it's committed).
Just add an auto-refresh to the page and it can even act just like the cgminer screen ...

And of course there are much more expansive monitoring tools for cgminer.

As for the limit - well, crashing cgminer or completely messing up the display sounds like a good reason to have a limit ...

... and yes, you can run 10 copies of cgminer - one for each GPU/PGA - if you really want to and don't have a very tight memory limit.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Does the cgminer config file have a "comment" symbol?

If not, could it be added in the next version? #?

The simplest/easiest method would be to support only a leading-character comment symbol.
Code:
#This line is a comment
# "auto-gpu" : true
#The command above has been commented out.

"auto-gpu" : true  #This is invalid because the format only support leading char comment symbols

More comprehensive would be to allow comments to the right of content (like the last line in the code above), but I would be happy with just a leading-comment check.


this works:
Code:
"__auto-gpu" : true

Preceding any attribute with underscores prevents that attribute from loading.
It's the issue of using a 'standard' that doesn't have what you want ...
Thus the answer to the first question is no.
Of course jjiimm_64's solution works to comment out an option.
Just be sure to never save the config ...
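Since the config is plain JSON, which has no comment syntax, an external pre-processing wrapper is about the only way to get # comments. A sketch follows; this is not something cgminer itself does, and as noted above, saving the config from cgminer would discard anything like this:

```python
import json

def load_config_with_comments(text):
    """Parse a cgminer-style JSON config after dropping any line whose
    first non-blank character is '#'.  Pre-processing sketch only;
    cgminer's own JSON parser would reject these comment lines."""
    kept = [ln for ln in text.splitlines() if not ln.lstrip().startswith("#")]
    return json.loads("\n".join(kept))

conf = load_config_with_comments("""
{
# "auto-gpu" : true,
"__auto-gpu" : true,
"intensity" : "9"
}
""")
# jjiimm_64's underscore trick needs no pre-processing at all:
# "__auto-gpu" is valid JSON, and the miner simply fails to match it
# to any known option name, so the setting is skipped.
```

The underscore trick survives a config save, whereas the # lines would not, which is exactly why the "never save the config" warning applies to the comment approach.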
sr. member
Activity: 446
Merit: 250
While it may be true that a minimal interface is adequate, those of us who happen to have more than 8 devices on one cgminer, and can no longer monitor them individually, are not receiving the same output that others see with fewer than 9 devices.

I can no longer monitor the temps, fan speeds, individual hash rates, rejects, or hardware errors. There is no way of knowing IF one of the devices has a problem and, even if you knew, no way of figuring out which one has the problem either.

I have a mix of GPUs and PGAs. That output is important to me. Why limit it to 8 devices?
Valid concern, yes. I haven't tried this, but if you had 16 devices (as an example) could you run 2 instances of cgminer on the same machine, each servicing 8 devices?

That is kind of what i am thinking. Not nearly as slick but doable. I'll be testing that later today.
legendary
Activity: 922
Merit: 1003
While it may be true that a minimal interface is adequate, those of us who happen to have more than 8 devices on one cgminer, and can no longer monitor them individually, are not receiving the same output that others see with fewer than 9 devices.

I can no longer monitor the temps, fan speeds, individual hash rates, rejects, or hardware errors. There is no way of knowing IF one of the devices has a problem and, even if you knew, no way of figuring out which one has the problem either.

I have a mix of GPUs and PGAs. That output is important to me. Why limit it to 8 devices?
Valid concern, yes. I haven't tried this, but if you had 16 devices (as an example) could you run 2 instances of cgminer on the same machine, each servicing 8 devices?
sr. member
Activity: 446
Merit: 250
I want the best possible code; putting a purity test on code (which is pure logic) is just asinine. Luke has found bugs (and fixes) in cgminer and bitcoind. Would the community be better served by running buggier software because you dislike him? As long as merges are vetted and not done without due diligence, I honestly don't care if the Unabomber wants to make a pull request.

Either the code changes have value or they don't. THAT (and only that) should be the metric.

To try and get this somewhat back on topic: my opinion is that changes to the interface on cgminer should be a low priority. The API - paid for by many of us, coded by Kano, and integrated and tested by conman - provides the perfect path forward. Nobody will ever agree on the perfect interface. There is no such thing. The API allows multiple front ends to be developed independently of cgminer.

It allows separation of responsibilities:
kernel = hashing engine
cgminer = control & management
GUI = user interface, reporting, charting, etc

cgminer just needs enough of a native interface to allow low level troubleshooting.   So many people cling to the obsolete GUIminer that I am surprised nobody has made a Windows GUI interface for cgminer (maybe I should).

+1 for maintaining sanity. Also agree with keeping UI development on cgminer minimal. There is great value and benefit keeping the UI decoupled from the 'work' code. It is a good design/implementation practice which manages complexity while adding flexibility. I once wondered why cgminer didn't have a Windows front-end, but it didn't take long to realize that the existing curses-based text UI is more than adequate.

While it may be true that a minimal interface is adequate, those of us who happen to have more than 8 devices on one cgminer, and can no longer monitor them individually, are not receiving the same output that others see with fewer than 9 devices.

I can no longer monitor the temps, fan speeds, individual hash rates, rejects, or hardware errors. There is no way of knowing IF one of the devices has a problem and, even if you knew, no way of figuring out which one has the problem either.

I have a mix of GPUs and PGAs. That output is important to me. Why limit it to 8 devices?