
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 592. (Read 5805740 times)

hero member
Activity: 481
Merit: 500
I tried moving my hard drive from a motherboard that had only one GPU to a motherboard with three GPUs plugged in.

I ran sudo aticonfig -f --adapter=all --initial and rebooted; I also deleted the .bin file in cgminer's directory.

CGMiner gave an error about the number of devices not matching: it saw three devices, but OpenCL was only seeing one.

Where else does OpenCL learn the number of video cards besides xorg.conf, and what else should I have done besides running aticonfig?

The system was built with these instructions: https://github.com/kanoi/linux-usb-cgminer/blob/master/linux-usb-cgminer
OpenCL has nothing to do with xorg, I'm afraid, but what you need before starting cgminer is:
Code:
export DISPLAY=:0
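A slightly fuller launch-script sketch of the same advice (the :0 display number is an assumption; use whichever display your X server actually owns):

```shell
# cgminer's GPU code talks to the X server that owns the cards, so
# DISPLAY must be set; default to :0 only when it isn't set already.
if [ -z "$DISPLAY" ]; then
    export DISPLAY=:0
fi
echo "DISPLAY=$DISPLAY"
```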


I'm not running as root. DISPLAY is already set. And I'm not using SSH, I'm on the local console.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
So, is this all currently reflected in 2.3.4, or upcoming in 2.3.5?
2.3.5, which I'm in the process of completing shortly.
sr. member
Activity: 462
Merit: 250
I heart thebaron
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I tried moving my hard drive from a motherboard that had only one GPU to a motherboard with three GPUs plugged in.

I ran sudo aticonfig -f --adapter=all --initial and rebooted; I also deleted the .bin file in cgminer's directory.

CGMiner gave an error about the number of devices not matching: it saw three devices, but OpenCL was only seeing one.

Where else does OpenCL learn the number of video cards besides xorg.conf, and what else should I have done besides running aticonfig?

The system was built with these instructions: https://github.com/kanoi/linux-usb-cgminer/blob/master/linux-usb-cgminer
OpenCL has nothing to do with xorg, I'm afraid, but what you need before starting cgminer is:
Code:
export DISPLAY=:0
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Likely a simple fix, but trying to run cgminer 2.3.4 under Debian 6.0.1 x64 I get

./cgminer -n
cgminer: error while loading shared libraries: libusb-1.0.so.0: cannot open shared object file: No such file or directory

Any ideas?


Never mind. I finally stopped being a Linux wimp, stopped using the prebuilt binary, and compiled it myself. It only took (don't laugh, I am a SQL Server developer by trade) a little over an hour.

Still, I learned something, and it's working fine now.
Just for further reference:
Code:
sudo apt-get install libusb-1.0-0
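Also for reference, a quick way to check whether the dynamic linker can actually see the library before (re)installing anything. The "cannot open shared object file" error means the loader couldn't find it; the package name below is Debian's, other distros differ:

```shell
# ldconfig -p lists every shared library the dynamic linker knows about;
# grep it for the exact soname from the error message.
if command -v ldconfig >/dev/null 2>&1 && ldconfig -p | grep -q 'libusb-1.0.so.0'; then
    status="found"
else
    status="missing"
fi
echo "libusb-1.0.so.0: $status"
if [ "$status" = "missing" ]; then
    echo "try: sudo apt-get install libusb-1.0-0"
fi
```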
hero member
Activity: 481
Merit: 500
I tried moving my hard drive from a motherboard that had only one GPU to a motherboard with three GPUs plugged in.

I ran sudo aticonfig -f --adapter=all --initial and rebooted; I also deleted the .bin file in cgminer's directory.

CGMiner gave an error about the number of devices not matching: it saw three devices, but OpenCL was only seeing one.

Where else does OpenCL learn the number of video cards besides xorg.conf, and what else should I have done besides running aticonfig?

The system was built with these instructions: https://github.com/kanoi/linux-usb-cgminer/blob/master/linux-usb-cgminer
donator
Activity: 1218
Merit: 1079
Gerald Davis
Likely a simple fix, but trying to run cgminer 2.3.4 under Debian 6.0.1 x64 I get

./cgminer -n
cgminer: error while loading shared libraries: libusb-1.0.so.0: cannot open shared object file: No such file or directory

Any ideas?


Never mind. I finally stopped being a Linux wimp, stopped using the prebuilt binary, and compiled it myself. It only took (don't laugh, I am a SQL Server developer by trade) a little over an hour.

Still, I learned something, and it's working fine now.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Ah there was also one bug where submitold was not being checked on re-submission. This has been fixed in my git tree.
legendary
Activity: 2576
Merit: 1186
Pools usually selectively request submitold on work items, usually for merged-mining reasons: a share for an old NMC block is still a relevant share for the current BTC block. Most pools do not universally enable it, except in the case of p2pool.
Eloipool always enables submitold. There's no real reason not to enable it, unless the pool wants to misreport a lower stale rate.
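As a hedged illustration of the mechanics being discussed: the pool signals submitold on individual work items inside its getwork/longpoll JSON response, and the miner checks that flag before deciding whether to discard stale shares. The response below is a made-up minimal example (a real one carries full hex data and target fields); only the submitold key is the point:

```shell
# Made-up getwork-style JSON response with the submitold flag set.
response='{"result":{"data":"deadbeef","submitold":true},"error":null,"id":1}'
if echo "$response" | grep -q '"submitold":true'; then
    verdict="submit stale shares to the pool"
else
    verdict="discard stale shares"
fi
echo "$verdict"
```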
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Pools usually selectively request submitold on work items, usually for merged-mining reasons: a share for an old NMC block is still a relevant share for the current BTC block. Most pools do not universally enable it, except in the case of p2pool.
hero member
Activity: 807
Merit: 500
Issue two is related to submission of stale shares.  Note that I did not have --submit-stale on the command line in this case, but I have previously seen cgminer submit detected stales to the pool because it was instructed to (pool is eligius).  This morning, I just happened to look at my miner while this was still on the screen:
Quote
[2012-03-23 08:22:53] LONGPOLL requested work restart, waiting on fresh work
[2012-03-23 08:22:57] Accepted 00000000.949b59d0.7ecd59ab GPU 0 thread 1 pool 0
[2012-03-23 08:23:03] Accepted 00000000.58e019ff.7a3bc657 GPU 0 thread 1 pool 0
[2012-03-23 08:23:05] Pool 0 communication failure, caching submissions
[2012-03-23 08:23:05] Stale share detected, discarding
[2012-03-23 08:23:07] Pool 0 communication resumed, submitting work
[2012-03-23 08:23:07] Accepted 00000000.ade4193a.07083649 GPU 0 thread 0 pool 0
I had recently restarted cgminer, so I can't be 100% certain cgminer was again instructed to submit stales. Assuming it was, I believe the discarded share above indicates a bug in stale-share handling in this scenario. I also believe the share would have been accepted, given the length of time since the last longpoll and the fact that other shares were accepted before and after this one with no other new-block event shown.
If you don't tell cgminer to submit stales, then it will discard what it considers stale.
The thing to know about stales is that a work request is stale based on when it was received, not when you see its share(s) shown in cgminer.
Cgminer threw away that share because it wasn't told to submit stales and the getwork time implied the share was stale.
You can't assume the getwork order matches the share display order.

Edit: oops, yes, the pool can also tell cgminer to submit stales - forgot to mention that - as mentioned above.
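The rule described above - staleness is keyed to when the work was fetched, not to when the share shows up on screen - can be sketched as follows (the timestamps and variable names are made up for illustration):

```shell
# A share is judged against the time its work was fetched, not the time
# the share appears in the log (made-up epoch seconds below).
work_fetched_at=100       # when the getwork for this share was received
last_block_change=150     # when the last longpoll / new block arrived
submit_stale=0            # would be 1 with --submit-stale or pool submitold
if [ "$work_fetched_at" -lt "$last_block_change" ] && [ "$submit_stale" -eq 0 ]; then
    verdict="discard"     # matches the "Stale share detected, discarding" log line
else
    verdict="submit"
fi
echo "verdict: $verdict"
```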
Today seems like a good day to beat a dead horse...
Quote
[2012-04-27 15:48:45] LONGPOLL detected new block on network, waiting on fresh work
[2012-04-27 15:48:45] Stale share detected, submitting as pool requested
[2012-04-27 15:48:51] Accepted 00000000.1cf4d45e.99314e90 GPU 0 thread 0 pool 0
[2012-04-27 15:48:56] Accepted 00000000.ce0e4d50.781165a9 GPU 0 thread 1 pool 0
[2012-04-27 15:49:10] Accepted 00000000.3ae325b1.654a64ff GPU 0 thread 1 pool 0
[2012-04-27 15:49:16] Pool 0 communication failure, caching submissions
[2012-04-27 15:49:16] Stale share detected, discarding
[2012-04-27 15:49:21] Pool 0 communication resumed, submitting work
[2012-04-27 15:49:21] Accepted 00000000.bfeef9eb.14ed8c5e GPU 0 thread 1 pool 0
[2012-04-27 15:49:23] Accepted 00000000.95be4b9a.137f31c6 GPU 0 thread 1 pool 0
So, given that there was a longpoll, then a stale share was submitted (and apparently accepted), and there was not another longpoll, should it be safe to assume submitold was enabled during this [~11 second] communication failure? If so, does this output indicate that maybe I really did see what I thought I saw before, regardless of my faulty assumptions at the time? Note that this is on 2.3.2 (in case the relevant code changed in the recent network rework), but I figured I should report it anyway, since I'm not specifically looking for this and it's dumb luck when I catch a communication failure or a stale share being discarded.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I do think that ckolivas' lack of interest in FPGAs is a negative point, but honestly, as long as he keeps doing the great job he has done so far (and I'm sure luke-jr will still help cgminer development, as he's building on top of it), it just doesn't matter.

I do, however, understand luke-jr's and others' frustration; I just don't share it Smiley
Thanks for the comments. I was quite happy merging code for FPGAs with just the simplest of code audits on my part. The problem began when multiple people wanted to hack on the same code and I chose conservative changes over aggressive ones. My "lack of interest in FPGAs" was more to do with annoyance at wasting time managing that argument than actually having anything against FPGAs. Meanwhile luke-jr kept getting more and more aggressive about my avoiding his code pushes. I stood my ground; he forked off. Now we have some weird middle ground where collaboration is expected yet the plan is to run as two separate projects. Apparently he's still willing to do code pushes to cgminer when suitable and will take my changes as I add to cgminer, which still makes me wonder why there should be two projects at all. In my experience, forks diverge irrevocably over time and the push-pull relationship will cease. There is still time to rescue this back into one project, but I think luke-jr has already burnt that bridge. Vaguely reminds me of the BIP16 saga.
legendary
Activity: 2576
Merit: 1186
Not that it matters much in the event of an arms race, of course, but I just wanted to state publicly that while I have a lot of respect for luke-jr's work and I *love* the name (which was kano's idea, I believe), my money is on cgminer for now.

Reasoning is:
- No one knows the core better
- As ztex worker maintainer I prefer a high entrance price development style, i.e. one where all my commits are verified
- I cannot possibly work on two code bases at once
- I have committed myself to supporting other devices on cgminer

I do think that ckolivas' lack of interest in FPGAs is a negative point, but honestly, as long as he keeps doing the great job he has done so far (and I'm sure luke-jr will still help cgminer development, as he's building on top of it), it just doesn't matter.

I do, however, understand luke-jr's and others' frustration; I just don't share it Smiley
I don't consider it an "arms race", and if peoples' changes are compatible with CGMiner, I encourage doing development there.
legendary
Activity: 1540
Merit: 1002
BFGMiner is forked, starting with 2.3.4. Comments etc welcome, but let's not clutter up the CGMiner thread.
In all honesty I'm sorry to see this, and long term I envision these projects will diverge too much for there to be code going to and from each of them. It may well be that cgminer becomes the dead project and I'll stop maintaining it. Good luck. I know the FPGA miners out there will be happier with you at the helm.

Not that it matters much in the event of an arms race, of course, but I just wanted to state publicly that while I have a lot of respect for luke-jr's work and I *love* the name (which was kano's idea, I believe), my money is on cgminer for now.

Reasoning is:
- No one knows the core better
- As ztex worker maintainer I prefer a high entrance price development style, i.e. one where all my commits are verified
- I cannot possibly work on two code bases at once
- I have committed myself to supporting other devices on cgminer

I do think that ckolivas' lack of interest in FPGAs is a negative point, but honestly, as long as he keeps doing the great job he has done so far (and I'm sure luke-jr will still help cgminer development, as he's building on top of it), it just doesn't matter.

I do, however, understand luke-jr's and others' frustration; I just don't share it Smiley
hero member
Activity: 591
Merit: 500
Updated git tree:
The most significant change is a massive rework of the way network connections are made, so MUCH fewer connections should be opened, which should more or less eliminate the overloading of people's networks.
Yes PLEASE. My crappy DSL has been needing this. Smiley
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Updated git tree:
The most significant change is a massive rework of the way network connections are made, so MUCH fewer connections should be opened, which should more or less eliminate the overloading of people's networks. Also, longpoll is now tied to each pool set up, but only the longpoll attached to the primary pool causes a work restart. Also updated ztex support.


Changelog (reverse order):
Start longpoll only after we have tried to extract the longpoll URL.
Check for submitold flag on resubmit of shares, and give different message for stale shares on retry.
Check for submitold before submitstale.
Don't force fresh curl connections on anything but longpoll threads.
Create one longpoll thread per pool, using backup pools for those pools that don't have longpoll.
Use the work created from the longpoll return only if we don't have failover-enabled, and only flag the work as a longpoll if it is the current pool.
This will work around the problem of trying to restart the single longpoll thread on pool changes that was leading to race conditions.
It will also have less work restarts from the multiple longpolls received from different pools.
Remove the ability to disable longpoll. It is not a useful feature and will conflict with planned changes to longpoll code.
Remove the invalid entries from the example configuration file.
Add support for latest ATI SDK on windows.
Export missing function from libztex.
miner.php change socktimeoutsec = 10 (it only waits once)
Bugfix: Make initial_args a const char** to satisfy exec argument type warning (on Windows only)
miner.php add a timeout so you don't sit and wait ... forever
Create discrete persistent submit and get work threads per pool, thus allowing all submitworks belonging to the same pool to reuse the same curl handle, and all getworks to reuse their own handle.
Use separate handles for submission to not make getwork potentially delay share submission which is time critical.
This will allow much more reusing of persistent connections instead of opening new ones which can flood routers.
This mandated a rework of the extra longpoll support (for when pools are switched) and this is managed by restarting longpoll cleanly and waiting for a thread join.
miner.php only show the current date header once
miner.php also add current time like single rig page
miner.php display rig 'when' table at top of the multi-rig summary page
README - add some Ztex details
api.c include zTex in the FPGA support list
api.c ensure 'devs' shows PGA's when only PGA code is compiled
cgminer.c sharelog code consistency and compile warning fix
README correct API version number
README spelling error
api.c combine all pairs of sprintfs()
api.c uncomment and use BLANK (and COMMA)
Code style cleanup
Annotating frequency changes with the changed from value
README clarification of 'notify' command
README update for API RPC 'devdetails'
api.c 'devdetails' list static details of devices
Using less heap space as my TP-Link seems to not handle this much
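The "one persistent handle per pool" idea behind the connection changes above can be illustrated with a toy sketch: requests for the same pool reuse an already-open handle instead of opening a fresh connection every time (the pool names, request stream, and counters below are all hypothetical; a real implementation keeps curl handles, not flags):

```shell
# Simulate 5 requests spread across 2 pools and count how many
# connections actually get opened when handles are reused.
opened=0
handle_pool0=""
handle_pool1=""
for pool in pool0 pool1 pool0 pool0 pool1; do
    case "$pool" in
        pool0) h="$handle_pool0" ;;
        pool1) h="$handle_pool1" ;;
    esac
    if [ -z "$h" ]; then
        opened=$((opened + 1))            # would open a persistent handle here
        case "$pool" in
            pool0) handle_pool0="open" ;;
            pool1) handle_pool1="open" ;;
        esac
    fi
done
echo "connections opened: $opened (for 5 requests across 2 pools)"
```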
legendary
Activity: 2576
Merit: 1186
- so the software overhead of closing and opening it every time isn't there except when the error occurs - because:
You forgot to mention there isn't any actual overhead to this workaround at all.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Meanwhile back on topic Smiley

One problem I've had with Icarus is that there is a UART bug that only affects some people.
The suggested solution was to open and close the USB serial port every time you start/stop a work unit.
However, what I want to do is identify when it happens and only then do the USB serial close/open
(or something else if that doesn't fix it)
- so the software overhead of closing and opening it every time isn't there except when the error occurs - because:

With my rig I can't get it to happen at all, so I guess it must be certain Icarus units, or certain computers, that have the problem.
I've tried on 3 very different computers with 2 Icarus, and one of those computers is also Windows.
I've been unable to make it happen after running for more than 2 days in each case (Windows was the shortest - about 2 days non-stop)

So I need to find someone who does have it happen and get them to run some tests for me (that I'll add once I find someone)

What happens (according to luke-jr) is that it simply stops returning valid nonce values - cgminer stops returning shares forever.
(You need to restart/reset)
Also, apparently he said he gets it happening reliably within about 5 hours of running (if not doing the open/close of the USB every time)

I've had that happen on rare occasions when I move the Icarus from one computer to another by moving only the USB cable - the 3 lights stay hard on.
But even when I start cgminer it doesn't work, so I reset it with a power cycle or a Python USB script I have.
So that's not related to it changing into this state while it is hashing away successfully.

Anyone able to help with this, that has this UART bug - and can see it happen regularly in cgminer?
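The "reopen only when the error occurs" plan above can be sketched in pure logic: instead of bouncing the port for every work unit, watch for consecutive work units that return no valid nonce and reset only then (the simulated result stream, streak limit, and variable names are all hypothetical):

```shell
# Reopen the port only after N consecutive work units yield no nonce.
no_nonce_streak=0
limit=3
reopened=0
for got_nonce in 1 1 0 0 0 1; do     # simulated per-work-unit results
    if [ "$got_nonce" -eq 0 ]; then
        no_nonce_streak=$((no_nonce_streak + 1))
        if [ "$no_nonce_streak" -ge "$limit" ]; then
            reopened=$((reopened + 1))   # would close/reopen the USB port here
            no_nonce_streak=0
        fi
    else
        no_nonce_streak=0                # a valid nonce clears the streak
    fi
done
echo "reopens triggered: $reopened"
```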
sr. member
Activity: 378
Merit: 250
Why is it so damn hot in here?
BFGMiner is forked, starting with 2.3.4. Comments etc welcome, but let's not clutter up the CGMiner thread.
In all honesty I'm sorry to see this, and long term I envision these projects will diverge too much for there to be code going to and from each of them. It may well be that cgminer becomes the dead project and I'll stop maintaining it. Good luck. I know the FPGA miners out there will be happier with you at the helm.

Not really. 
hero member
Activity: 896
Merit: 1000
BFGMiner is forked, starting with 2.3.4. Comments etc welcome, but let's not clutter up the CGMiner thread.
Argh, I can't pledge 32 BTC for each project! Well, for now my coins are on CGMiner; it could change depending on which project keeps the focus of the contributors in the long term, but I see no reason to switch right now.