
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 332. (Read 5806004 times)

sr. member
Activity: 448
Merit: 250
Is it normal for the memory clock on a 7950 to be locked, ignoring the values I specify either in the config file or on the command line? With the following config file on Windows (I also have the same problem on Linux), the memory clock runs at 1250 MHz.

Code:
{
"pools" : [
        {
                "url" : "http://cryptominer.org:9332/",
                "user" : "username",
                "pass" : "password"
        }
]
,
"intensity" : "10",
"vectors" : "1",
"worksize" : "64",
"kernel" : "poclbm",
"lookup-gap" : "0",
"thread-concurrency" : "0",
"shaders" : "0",
"gpu-engine" : "1100",
"gpu-fan" : "0-85",
"gpu-memclock" : "0-750",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "80",
"api-port" : "4028",
"auto-fan" : true,
"auto-gpu" : true,
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "0",
"scan-time" : "60",
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}

Do I need to disable auto-gpu or something different?
legendary
Activity: 3586
Merit: 1098
Think for yourself
My two PCs are still shutting down because of overheating. What can I do to prevent this? How do I make cgminer slow down mining when it overheats?

If it is your GPUs overheating, then use the auto fan and auto gpu command line arguments.

There are examples in the executive summary in the top post of this thread.

I have the auto-fan flag, but not auto-gpu, because it overheats more when I have auto-gpu.


That makes no sense.  What's your target temp?
member
Activity: 84
Merit: 10
Luke-Jr just posted this in the Cairnsmore FPGA thread:

"In brief, cg = original GPU miner bfg was based on, plus old bfgminer FPGA code with various things broken and a few minor things added

These days it usually only makes sense to use BFGMiner."

Anyone here beg to differ?
Ya, BFG is basically a ripoff of CG. LJR is always trying to discredit CGMiner at every turn and promote his miner instead. He was the one who originally worked on adding FPGA support to CGMiner, but the other devs couldn't work with him, so he made his own fork (now BFG). Both CG and BFG have now added support for different ASICs, but IMO Con and Kano do it better. See here for the full story.

Ah - somehow thought so - funny how I gathered that from Luke-Jr's tone  Cool
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Luke-Jr just posted this in the Cairnsmore FPGA thread:

"In brief, cg = original GPU miner bfg was based on, plus old bfgminer FPGA code with various things broken and a few minor things added

These days it usually only makes sense to use BFGMiner."

Anyone here beg to differ?
I've now rewritten all the old serial drivers to use the new usbutils in cgminer, which all the new drivers do/will use.
Ztex is still the same, as it is also libusb - but it doesn't handle hotplug, since it predates the new code (and I don't have one).

3.1.1 doesn't have the new Icarus driver yet - I've now written that, and it is being tested in git.
For the new driver to come, Linux is OK, but Windows needs some work to solve some hotplug issues.

3.1.1 will auto-detect and correctly handle BFL, BAJ and MMQ devices.
The Icarus driver in 3.1.1 is still the old serial-USB driver.

The next will include usbutils/auto/hotplug for all the Icarus: ICA, BLT, LLT and AMU (not 100% sure about CMR yet though)
sr. member
Activity: 412
Merit: 250
My two PCs are still shutting down because of overheating. What can I do to prevent this? How do I make cgminer slow down mining when it overheats?

If it is your GPUs overheating, then use the auto fan and auto gpu command line arguments.

There are examples in the executive summary in the top post of this thread.

I have the auto-fan flag, but not auto-gpu, because it overheats more when I have auto-gpu.
hero member
Activity: 896
Merit: 1000
Luke-Jr just posted this in the Cairnsmore FPGA thread:

"In brief, cg = original GPU miner bfg was based on, plus old bfgminer FPGA code with various things broken and a few minor things added

These days it usually only makes sense to use BFGMiner."

Anyone here beg to differ?

I tried bfgminer once because it advertised support for the dynamic clocking ability of the Cairnsmore1 FPGA boards with the hashvoodoo bitstream.
On p2pool, the dead-on-arrival shares went through the roof, at ~50%. I never saw this kind of behaviour with cgminer, including with FPGAs (Icarus and Cairnsmore1 with fixed frequencies).

I use MPBM for my Cairnsmore1 (it supports dynamic clocking) and cgminer for my GPUs and Icarus boards (and hopefully for my Avalon when it is finally delivered).
legendary
Activity: 952
Merit: 1000
Luke-Jr just posted this in the Cairnsmore FPGA thread:

"In brief, cg = original GPU miner bfg was based on, plus old bfgminer FPGA code with various things broken and a few minor things added

These days it usually only makes sense to use BFGMiner."

Anyone here beg to differ?
Ya, BFG is basically a ripoff of CG. LJR is always trying to discredit CGMiner at every turn and promote his miner instead. He was the one who originally worked on adding FPGA support to CGMiner, but the other devs couldn't work with him, so he made his own fork (now BFG). Both CG and BFG have now added support for different ASICs, but IMO Con and Kano do it better. See here for the full story.
full member
Activity: 140
Merit: 100
STATUS=S,When=1369336525,Code=78,Msg=CGMiner coin,Description=cgminer 3.1.1|COIN,Hash Method=sha256,Current Block Time=1369336161.322788,Current Block Hash=00000000000000c8fb30dc785e322b76269315b15684575fabb06a6d9a1175b8,LP=false,Network Difficulty=18446744073709553000.00000000|

--- openwrt (ar71xx, mips, wr703n): why is the network diff ...? The command is echo -n "coin" | nc IP 4028
member
Activity: 84
Merit: 10
Luke-Jr just posted this in the Cairnsmore FPGA thread:

"In brief, cg = original GPU miner bfg was based on, plus old bfgminer FPGA code with various things broken and a few minor things added

These days it usually only makes sense to use BFGMiner."

Anyone here beg to differ?
hero member
Activity: 497
Merit: 500
the readme doesn't help me so much :/

And it can't, unless you actually read it and spend the time doing research so that you can comprehend it as well.

If you're unwilling or incapable of moving toward the goal of understanding, then I would kindly suggest you find something else to spend your/our time on.
Sam

I've tried setting "thread-concurrency" : "",

But I have this error

GPU0: invalid nonce - HW error


Here's my setup after reading the readme in depth

Code:
{
"pools" : [
{
"url" : "stratum+tcp://eu.wemineltc.com:3333",
"user" : "xxx",
"pass" : "xxx"
}
]
,
"intensity" : "20",
"vectors" : "1",
"worksize" : "256",
"kernel" : "scrypt",
"lookup-gap" : "0",
"thread-concurrency" : "",
"shaders" : "1792",
"gpu-engine" : "0-0",
"gpu-fan" : "0-0",
"gpu-memclock" : "0",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "30",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}

This is in the README, word for word. Maybe someone can translate it for you. If you do not wish to read, then maybe you should pay someone to do this for you.
hero member
Activity: 896
Merit: 1000
the readme doesn't help me so much :/

And it can't, unless you actually read it and spend the time doing research so that you can comprehend it as well.

If you're unwilling or incapable of moving toward the goal of understanding, then I would kindly suggest you find something else to spend your/our time on.
Sam

I've tried setting "thread-concurrency" : "",

But I have this error

GPU0: invalid nonce - HW error

HW error: try lowering the intensity value (I find 17/18 work best on my hardware).
If that doesn't remove almost all HW errors (<1-2% of shares being HW errors warns you that you are near your hardware's limit, but it's for you to decide whether that's acceptable; the more you overclock, the more you risk frying your card), try lowering the GPU/memory clocks.
sr. member
Activity: 336
Merit: 250
the readme doesn't help me so much :/

And it can't unless you actually read it and spend the time doing research so that you can comprehend it as well.

If your unwilling or incapable of moving toward the goal of understanding then I would, kindly, suggest your finding something else to spend your/our time on.
Sam

I've tried setting "thread-concurrency" : "",

But I have this error

GPU0: invalid nonce - HW error


Here's my setup after reading the readme in depth

Code:
{
"pools" : [
{
"url" : "stratum+tcp://eu.wemineltc.com:3333",
"user" : "xxx",
"pass" : "xxx"
}
]
,
"intensity" : "20",
"vectors" : "1",
"worksize" : "256",
"kernel" : "scrypt",
"lookup-gap" : "0",
"thread-concurrency" : "",
"shaders" : "1792",
"gpu-engine" : "0-0",
"gpu-fan" : "0-0",
"gpu-memclock" : "0",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "30",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}
legendary
Activity: 3586
Merit: 1098
Think for yourself
the readme doesn't help me so much :/

And it can't, unless you actually read it and spend the time doing research so that you can comprehend it as well.

If you're unwilling or incapable of moving toward the goal of understanding, then I would kindly suggest you find something else to spend your/our time on.
Sam
sr. member
Activity: 336
Merit: 250
I didn't say set it to zero, I said take it out - the "" entry entirely. CGMiner will find a value for you, then you can tune from there. Try the Scrypt README.


I don't understand what you mean by take it out "".

Sorry, I'm noobish with scrypt and the readme doesn't help me so much :/
hero member
Activity: 497
Merit: 500
I didn't say set it to zero, I said take it out - the "" entry entirely. CGMiner will find a value for you, then you can tune from there. Try the Scrypt README.
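In config terms, "take it out" means deleting the "thread-concurrency" line entirely (not leaving it as "" or "0") and setting "shaders" instead, so cgminer can derive a starting value itself. A sketch of the relevant fragment, using the 1792 shader count mentioned elsewhere in the thread (exactly how cgminer derives the value is version-dependent - check the Scrypt README for your build):

Code:
```
"kernel" : "scrypt",
"scrypt" : true,
"shaders" : "1792"
```

Note there is no thread-concurrency line at all; once it runs, read off the value cgminer chose and tune from there.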
sr. member
Activity: 336
Merit: 250
Hi,

I have an issue when trying to run CGMiner to mine Litecoin - maybe you can help me?

Thanks in advance Wink

Error:

Quote

 [2013-05-23 11:44:56] Started cgminer 3.1.1
 [2013-05-23 11:44:56] Started cgminer 3.1.1
 [2013-05-23 11:44:56] Loaded configuration file cgminer.conf
 [2013-05-23 11:44:56] Probing for an alive pool
 [2013-05-23 11:45:01] Maximum buffer memory device 0 supports says 536870912
 [2013-05-23 11:45:01] Your scrypt settings come to 1572864000
 [2013-05-23 11:45:01] Error -61: clCreateBuffer (padbuffer8), decrease TC or increase LG
 [2013-05-23 11:45:01] Failed to init GPU thread 0, disabling device 0
 [2013-05-23 11:45:01] Restarting the GPU from the menu will not fix this.
 [2013-05-23 11:45:01] Try restarting cgminer.
Press enter to continue:




Here my cgminer.conf

Quote

{
"pools" : [
   {
      "url" : "stratum+tcp://ltc.coinat.com:3333",
      "user" : "xxx",
      "pass" : "xxx"
   }
]
,
"intensity" : "18",
"vectors" : "1",
"worksize" : "256",
"kernel" : "scrypt",
"lookup-gap" : "2",
"thread-concurrency" : "24000",
"shaders" : "0",
"gpu-engine" : "0-0",
"gpu-fan" : "0-0",
"gpu-memclock" : "0",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "30",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}



Set your shaders and take out thread-concurrency, then run it.


If shaders is set to "1792" and thread-concurrency to "0", I get hardware errors.

But if shaders is set to "1792" and thread-concurrency to "22400", cgminer runs well, averaging 635 Kh/s.

Not bad, but for a 7950 @ 1175 GPU / 1500 mem that's pretty low if you look at the Mining Hardware List - what do you think?
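The numbers in that clCreateBuffer error are consistent with each scrypt scratchpad taking 1024 × 128 bytes, divided by lookup-gap. A quick sanity check (the scratchpad formula here is inferred from the log values, not taken from cgminer's source, and the variable names are illustrative):

```python
# Check the scrypt buffer arithmetic from the error log above.
SCRATCHPAD_BYTES = 1024 * 128      # scrypt N=1024, 128 bytes per entry
lookup_gap = 2                     # from the posted config
thread_concurrency = 24000         # from the posted config

needed = thread_concurrency * SCRATCHPAD_BYTES // lookup_gap
max_buffer = 536870912             # 512 MiB, reported by the device

print(needed)                      # 1572864000, matching "Your scrypt settings come to"
print(needed <= max_buffer)        # False: hence "decrease TC or increase LG"

# Largest thread-concurrency that fits under the reported limit, by this formula:
print(max_buffer // (SCRATCHPAD_BYTES // lookup_gap))   # 8192
```

By that arithmetic, TC 24000 at lookup-gap 2 asks for ~1.5 GB against a 512 MiB maximum allocation, which is exactly the trade-off the error message names: a smaller TC or a larger LG shrinks the buffer.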
legendary
Activity: 2912
Merit: 1060
Try my p2pool; I too get 4-second LPs.
hero member
Activity: 497
Merit: 500
Hi,

I have an issue when trying to run CGMiner to mine Litecoin - maybe you can help me?

Thanks in advance Wink

Error:

Quote

 [2013-05-23 11:44:56] Started cgminer 3.1.1
 [2013-05-23 11:44:56] Started cgminer 3.1.1
 [2013-05-23 11:44:56] Loaded configuration file cgminer.conf
 [2013-05-23 11:44:56] Probing for an alive pool
 [2013-05-23 11:45:01] Maximum buffer memory device 0 supports says 536870912
 [2013-05-23 11:45:01] Your scrypt settings come to 1572864000
 [2013-05-23 11:45:01] Error -61: clCreateBuffer (padbuffer8), decrease TC or increase LG
 [2013-05-23 11:45:01] Failed to init GPU thread 0, disabling device 0
 [2013-05-23 11:45:01] Restarting the GPU from the menu will not fix this.
 [2013-05-23 11:45:01] Try restarting cgminer.
Press enter to continue:




Here my cgminer.conf

Quote

{
"pools" : [
   {
      "url" : "stratum+tcp://ltc.coinat.com:3333",
      "user" : "xxx",
      "pass" : "xxx"
   }
]
,
"intensity" : "18",
"vectors" : "1",
"worksize" : "256",
"kernel" : "scrypt",
"lookup-gap" : "2",
"thread-concurrency" : "24000",
"shaders" : "0",
"gpu-engine" : "0-0",
"gpu-fan" : "0-0",
"gpu-memclock" : "0",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "1",
"hotplug" : "5",
"log" : "5",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "30",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}



Set your shaders and take out thread-concurrency, then run it.
legendary
Activity: 3586
Merit: 1098
Think for yourself
My two PCs are still shutting down because of overheating. What can I do to prevent this? How do I make cgminer slow down mining when it overheats?

If it is your GPUs overheating, then use the auto fan and auto gpu command line arguments.

There are examples in the executive summary in the top post of this thread.
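Concretely, those arguments map to config entries that already appear in the configs posted in this thread: with auto-gpu enabled, cgminer throttles the engine clock toward temp-target, backs off harder at temp-overheat, and stops the device at temp-cutoff. A sketch of the relevant fragment (the temperature values are the ones used earlier in the thread; the 300 floor on gpu-engine is illustrative - auto-gpu needs a range to have room to downclock):

Code:
```
"auto-fan" : true,
"auto-gpu" : true,
"gpu-engine" : "300-1100",
"temp-target" : "75",
"temp-overheat" : "85",
"temp-cutoff" : "95"
```

If the whole PC is shutting down rather than just the GPU throttling, also check the PSU and case airflow - cgminer can only manage the cards it controls.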
newbie
Activity: 60
Merit: 0
I just got my mining rig up and running tonight. I pointed it at a P2Pool to mine LTC. I'm getting a decent hash rate (~500 kH/s x 2 7950 cards), but I am seeing a lot of "Stratum from pool 0 requested work restart" entries in the log -- I mean like half of the entries. I'm also getting about a 30% reject rate.
...
P2Pool has a 10-second LP ... so you should, on average, get one every 10 seconds ...

Thanks for your help.

I'm getting these, on average, approximately every three to four seconds (according to the timestamps on the log entries).

If this is an oddity with this pool, I'm not opposed to switching to a different pool. But, if it is something in my settings, I would like to fix it.

Anything else you can suggest?