
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0

sr. member
Activity: 658
Merit: 250
Hello,

I have a problem with cgminer 3.0.0:

pool 2 JSON stratum auth failed: (unknown reason)

   {
      "url" : "stratum+tcp://mining.eligius.st:3334",
      "user" : "MYBTCADRESS",
      "pass" : "123"
   }


The same config runs with cgminer 2.11.4 without problems.


Try the latest git master version; this should be fixed now. Or you can wait for a new release.
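For anyone unsure how to get git master, a typical build sequence looks something like this (a sketch; configure flags depend on your platform, see the README for prerequisites):
Code:
git clone https://github.com/ckolivas/cgminer.git
cd cgminer
./autogen.sh    # generate the configure script
./configure     # add driver/feature flags as needed; see ./configure --help
make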
sr. member
Activity: 344
Merit: 250
Flixxo - Watch, Share, Earn!
Hello,

I have a problem with cgminer 3.0.0:

pool 2 JSON stratum auth failed: (unknown reason)

   {
      "url" : "stratum+tcp://mining.eligius.st:3334",
      "user" : "MYBTCADRESS",
      "pass" : "123"
   }


The same config runs with cgminer 2.11.4 without problems.
newbie
Activity: 19
Merit: 0
Is there any way to specify share difficulty in cgminer, maybe in the .conf file? I switched to P2Pool on stratum and was wondering if I can change to diff 2 shares instead of diff 1. Or does it even matter?
If you use stratum and submit more than one diff 1 share per second, the difficulty will rise automagically.
On some pools you can add /n or +n at the end of the username to get share diff = n.
It is only useful at high hashrates, or if you need a lower-bandwidth data stream between miner and pool.
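For example, on a pool that honours the +n suffix, a fixed diff 8 would look something like this (a sketch; the pool URL, port, and address are placeholders, and the exact suffix syntax varies by pool):
Code:
cgminer -o stratum+tcp://pool.example.com:3333 -u YourBTCAddress+8 -p x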
Thanks for the response, that's what I thought. BTC Guild says over 2 GH/s gets diff 2, but P2Pool stays on diff 1. Oh well, it works fine I guess.
Is there any way to specify share difficulty in cgminer, maybe in the .conf file? I switched to P2Pool on stratum and was wondering if I can change to diff 2 shares instead of diff 1. Or does it even matter?

P.S. cgminer is the best miner out there in my opinion. Con, keep up the great work!
Thanks! Much nicer than the same newbie questions over and over again. :)

Alas, it's not up to stratum or cgminer to set difficulty, as there is no official way to do so (yet?). Some pools let you specify it via the username or password field instead, as you can see from rav3n_pl's response.
Yes, thank you both. :) I understand some pools have that feature. Seeing as ASICs are coming, we will have greatly varying hash rates, so being able to select difficulty in cgminer or stratum might be needed. I for one think mining should go back to its roots and steer away from pools, but I could be wrong.
legendary
Activity: 3583
Merit: 1094
Think for yourself
Hi,

I just started using cgminer a few days ago. It's amazing! The gpu-engine and max temperature parameters are just great for me. But I'm looking for a special function; maybe somebody can help: I would like to schedule the max temperature setting, for example 75 °C during the day and 80 °C at night. Would that work, and if so, how? My mining computer is in an office room, so during the day (especially in summer) it must run at lower temperatures. At night, when nobody is there, they can be higher.

Does anyone have an idea?

I don't think there is a scheduling function in cgminer; at least, I haven't seen any reference to scheduling in the documentation linked in the top post of this thread.

Unless you're changing your engine clocks, your GPUs are generating the same amount of waste heat whether your target temp is 75 °C or 80 °C. At 80 °C your fans will just have to work less, as they won't have to spin up to a higher speed. So I would think you would want the higher temp during the day so that the fans don't make as much noise.

At any rate I think changing your target temp will have little effect on the temperature in the room.
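If you really want different day/night targets, the closest thing I can think of is scripting it outside cgminer, e.g. restarting it from cron with a different --temp-target. A rough sketch only, assuming a Linux box; the user, paths, and times are all illustrative:
Code:
# /etc/cron.d/cgminer-temp
0 8  * * * miner /home/miner/restart-cgminer.sh 75   # office hours: 75 C target
0 20 * * * miner /home/miner/restart-cgminer.sh 80   # overnight: 80 C target

# /home/miner/restart-cgminer.sh
#!/bin/sh
# restart-cgminer.sh TEMP - relaunch cgminer with a new temperature target
killall cgminer 2>/dev/null
sleep 5
screen -dmS miner cgminer --config /home/miner/cgminer.conf --temp-target "$1"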
Sam
newbie
Activity: 17
Merit: 0
That's crashing in the driver code that measures temperature. Are you using another app to adjust temperature/fanspeed at the same time? Theoretically they can fight over it. If that's the case, disabling the other app might help. Not making cgminer adjust fanspeed/gpuspeed might help, or disabling all control/monitoring entirely with --no-adl should help if all else fails.
No, I don't use any other adjustment software. Your diagnosis is right: if I don't export DISPLAY, just COMPUTE, I get no fanspeed/temperature monitoring, and there is no error on quit. But I'd really like to have these features... summer is coming.
What I can't understand is why the driver fails when I quit the program but works perfectly fine while it is still running: it adjusts the fanspeed and engine clock to match my target temperature.

P.S. You're great, -ck. Thanks for all your work. :)
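For anyone hitting the same crash, the two modes described above look roughly like this (a sketch; :0 assumes the default X display):
Code:
# With monitoring (needs DISPLAY/ADL; this is the path that oopses in fglrx on exit here):
export DISPLAY=:0
./cgminer -c cgminer.conf

# Compute-only: no fanspeed/temperature monitoring, but avoids the driver's fan-control code:
unset DISPLAY
export COMPUTE=:0
./cgminer -c cgminer.conf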
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
sr. member
Activity: 252
Merit: 250
Hi,

I just started using cgminer a few days ago. It's amazing! The gpu-engine and max temperature parameters are just great for me. But I'm looking for a special function; maybe somebody can help: I would like to schedule the max temperature setting, for example 75 °C during the day and 80 °C at night. Would that work, and if so, how? My mining computer is in an office room, so during the day (especially in summer) it must run at lower temperatures. At night, when nobody is there, they can be higher.

Does anyone have an idea?
newbie
Activity: 46
Merit: 0
Hi,

I had no luck with cgminer on Ubuntu 12.10 when NOT using dummy plugs with two 7950 cards. After adding dummy plugs, everything works as expected.

full member
Activity: 238
Merit: 100
In Gord We Trust
Quote
Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.

Which version of the README is that in? I searched mine (by copying and pasting it into OpenOffice and using the search command) and it's not in there.

If you have the answer to the question that you posted, would you be so kind as to paste the answer to it as well?

Thanks again!
Yeah, what a ludicrous response.

Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.

A: Your driver setup is failing to properly use the accessory GPUs. Your
driver may be configured wrong or you have a driver version that needs a dummy
plug on all the GPUs that aren't connected to a monitor.



Thanks a million, zvs! Is that from the 3.0 README? It most definitely is not in the version I have. WTF? I am running Xubuntu 12.04 with cgminer 2.11.4, and I don't know if I have a special case going on here or what. If my drivers are misconfigured, then I am clueless as to what to do next... :( As far as I understand, Linux doesn't require dummy plugs, correct?
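In case it helps: on Linux with fglrx, the usual way to get all adapters initialised without dummy plugs is roughly the following (a sketch; run as root, restart X afterwards, and note that aticonfig rewrites xorg.conf):
Code:
sudo aticonfig --adapter=all --initial -f   # write an xorg.conf entry for every adapter
sudo reboot                                 # or just restart X
export DISPLAY=:0                           # point cgminer at the running X server
./cgminer -n                                # check that all GPUs are now listed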
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
Quote
Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.

Which version of the README is that in? I searched mine (by copying and pasting it into OpenOffice and using the search command) and it's not in there.

If you have the answer to the question that you posted, would you be so kind as to paste the answer to it as well?

Thanks again!
Yeah, what a ludicrous response.

Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.

A: Your driver setup is failing to properly use the accessory GPUs. Your
driver may be configured wrong or you have a driver version that needs a dummy
plug on all the GPUs that aren't connected to a monitor.

zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
With cgminer 2.11.4 and 3.0.0 I get the error "GPU1 invalid nonce: HW error" on my Radeon HD 5870; it runs fine under 2.11.3. It even looks like it's producing normal hashes on the other two versions: all the averages look right, and the 5s and avg values show what both of my cards produce together. It's just reported as HW error instead of accepted on the 5870. The other card is a 7970, if that matters. Running Windows 7 x64.

I had this issue on P2Pool until I set the shares to a specific difficulty. About half were reported as 'hardware errors' on core 1 of a 5970.

cgminer appears to have shittons of issues with combos of 5xxx and 7xxx cards. I'd use something else.
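Before giving up on it entirely, one workaround worth trying on mixed 5xxx/7xxx rigs is isolating each card in its own cgminer instance with -d, so each process only drives a single GPU family. A sketch only; the device numbers are illustrative, check cgminer -n for your ordering:
Code:
./cgminer -c cgminer.conf -d 0    # first instance: the 7970 only
./cgminer -c cgminer.conf -d 1    # second instance: the 5870 only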
newbie
Activity: 17
Merit: 0
CGMiner locks up Windows on exit whether I press 'q' or close the window, but only when my 6770 is mining. Any fix for this?

But seriously, any help with this?

Same problem here on Ubuntu 12.04. cgminer version 2.11.4.
After that, /var/log/syslog shows these messages:
Code:
[13788.238066] divide error: 0000 [#1] SMP
[13788.238076] CPU 1
[13788.238079] Modules linked in: vesafb snd_hda_codec_hdmi snd_hda_codec_realtek kvm_amd kvm hid_generic microcode snd_hda_intel snd_hda_codec snd_hwdep fglrx(PO) serio_raw snd_pcm ipt_MASQUERADE iptable_nat amd_iommu_v2 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack snd_seq_midi video xt_LOG edac_core edac_mce_amd snd_rawmidi xt_tcpudp k10temp wmi rfcomm bnep bluetooth parport_pc ppdev snd_seq_midi_event snd_seq usbhid snd_timer hid snd_seq_device iptable_filter iptable_mangle ip_tables x_tables snd i2c_nforce2 mac_hid soundcore snd_page_alloc f71882fg lp parport r8169 pata_amd ahci libahci forcedeth
[13788.238163]
[13788.238170] Pid: 1605, comm: Xorg Tainted: P           O 3.5.0-27-generic #46~precise1-Ubuntu MSI MS-7578/NF750-G55 (MS-7578)
[13788.238180] RIP: 0010:[]  [] CIslands_FanCtrl_SetFanSpeedRPM+0x71/0x160 [fglrx]
[13788.238394] RSP: 0018:ffff880067ef1d08  EFLAGS: 00010246
[13788.238400] RAX: 00000000608f3d00 RBX: ffff880068405828 RCX: 0000000000000000
[13788.238405] RDX: 0000000000000000 RSI: 0000000000000080 RDI: ffff88006840580c
[13788.238409] RBP: ffff88006840580c R08: 00000000c05001a0 R09: ffff88006a09d608
[13788.238414] R10: ffff88006a09d608 R11: ffff880066af7808 R12: 0000000000000000
[13788.238418] R13: 0000000000000001 R14: 00007fffeef19d80 R15: 0000000000000000
[13788.238423] FS:  00007fd3830e1880(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
[13788.238428] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[13788.238432] CR2: 00007f5b99a084d4 CR3: 000000007a4ed000 CR4: 00000000000007e0
[13788.238437] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[13788.238441] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[13788.238446] Process Xorg (pid: 1605, threadinfo ffff880067ef0000, task ffff8800689e1700)
[13788.238449] Stack:
[13788.238453]  ffff88006840500c ffffffffa02fd5c0 0000000000000000 ffffffffa02fd5d2
[13788.238461]  0000000000000000 ffffffffa031b9b9 ffff880063291ff0 ffff880067ef1db8
[13788.238469]  ffff880063291ff0 0000000000000000 ffff880068758000 ffffffffa0312c33
[13788.238476] Call Trace:
[13788.238684]  [] ? PHM_SetFanSpeedPercent+0x50/0x50 [fglrx]
[13788.238884]  [] ? PHM_SetFanSpeedRPM+0x12/0x50 [fglrx]
[13788.239078]  [] ? PEM_SetFanSpeed+0x79/0xa0 [fglrx]
[13788.239274]  [] ? PEM_CWDDEPM_OD6_SetFanSpeed+0xd3/0x1c0 [fglrx]
[13788.239468]  [] ? PP_Cwdde+0x109/0x180 [fglrx]
[13788.239599]  [] ? firegl_pplib_cwddepm_call+0x1e0/0x250 [fglrx]
[13788.239609]  [] ? ns_capable+0x30/0x60
[13788.239739]  [] ? firegl_pplib_iri_call+0x2f0/0x2f0 [fglrx]
[13788.239856]  [] ? firegl_ioctl+0x1ed/0x250 [fglrx]
[13788.239964]  [] ? ip_firegl_unlocked_ioctl+0xe/0x20 [fglrx]
[13788.239974]  [] ? do_vfs_ioctl+0x8a/0x340
[13788.239983]  [] ? vfs_read+0x10d/0x180
[13788.239991]  [] ? sys_ioctl+0x91/0xa0
[13788.240000]  [] ? system_call_fastpath+0x16/0x1b
[13788.240004] Code: 00 00 00 48 89 ef 48 8d 5d 1c e8 4b 88 ff ff 48 89 ef 31 d2 89 c6 42 8d 0c e5 00 00 00 00 69 f6 c0 27 09 00 89 f0 be 80 00 00 00 f1 ba 70 00 30 c0 41 89 c4 e8 c0 37 f9 ff 48 89 ef 83 e0 07
[13788.240077] RIP  [] CIslands_FanCtrl_SetFanSpeedRPM+0x71/0x160 [fglrx]
[13788.240250]  RSP
[13788.240256] ---[ end trace ab7d0505fc0ce5b0 ]---
[13788.284891] [fglrx:firegl_release] *ERROR* device busy: 1 0
[13788.284897] [fglrx] release failed with code -EBUSY
[13788.861813] init: lightdm main process (1598) terminated with status 1

My GPU is a 7790 (Bonaire), AMD APP SDK version 2.8 RC, and the graphics driver is 12.101.2.1 (the only one that seems to support my card).
Is there anything I can do to prevent this error?

I'll check 3.0 now.
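If 3.0 still oopses the same way, -ck's --no-adl suggestion quoted earlier keeps cgminer out of the driver's fan-control path entirely, at the cost of losing monitoring and automatic fan control:
Code:
./cgminer -c cgminer.conf --no-adl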
full member
Activity: 238
Merit: 100
In Gord We Trust
Quote
Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.

Which version of the README is that in? I searched mine (by copying and pasting it into OpenOffice and using the search command) and it's not in there.

If you have the answer to the question that you posted, would you be so kind as to paste the answer to it as well?

Thanks again!
newbie
Activity: 41
Merit: 0
Here's an odd one I'm running into today after re-burning my USB stick's linux image from a clean copy, pulling in the latest git, compiling and firing up cgminer...

cgminer's interface ignores all normal menu keystrokes (p, g, and so on), UNLESS I neglect to specify a pool in the config file, in which case it works normally because it had to accept keyboard input to let me set the pool credentials. I can use Ctrl-C to kill it and get back to a prompt, no problem. I just can't adjust settings while cgminer is running.

This is latest git pull as of "Thu Apr 11 13:31:17 2013 +1000".
Found this bug at last and it should be fixed in git master now, thanks.

Has this fix made it into cgminer 3.0 yet? I'm experiencing the same problem on Ubuntu: no menu options displayed and keystrokes ignored after I specify a config file at startup.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Quote
Q: I have multiple GPUs and although many devices show up, it appears to be
working only on one GPU splitting it up.
full member
Activity: 238
Merit: 100
In Gord We Trust
I did that during setup, and I tried it again in case anything got knocked around. No joy... :(

This problem is driving me nuts. I can't figure what it could be.

Thanks for your help and suggestions so far though.
The README, believe it or not, has a FAQ about this...

Are you referring to exporting the display settings? I have been doing that. If there is a section of the README that you know solves my problem, would you be so kind as to point it out to me? I went through the README trying to find the answer for myself before posting so that I wouldn't have to bother anyone with my noob questions. I really only decided to post here as a last resort, because I know you (and everyone else) are busy with other things.

I decided to run ./cgminer -n again. It looks like the problem might lie here, no? Shouldn't the 5800 series be listed as Cypress? Having read the section on --gpu-map, I am still unsure how I should solve this issue, or if --gpu-map is even the correct solution. I have avoided using that option because I read the warning in the README. I am just a hobbyist and would prefer not to toast any of my hardware.

Thanks again for your time.

Code:
 [2013-04-23 13:02:17] CL Platform 0 vendor: Advanced Micro Devices, Inc.                    
 [2013-04-23 13:02:17] CL Platform 0 name: AMD Accelerated Parallel Processing                   
 [2013-04-23 13:02:17] CL Platform 0 version: OpenCL 1.2 AMD-APP (1016.4)                   
 [2013-04-23 13:02:17] Platform 0 devices: 4                   
 [2013-04-23 13:02:17] 0 Barts                   
 [2013-04-23 13:02:17] 1 Barts                   
 [2013-04-23 13:02:17] 2 Barts                   
 [2013-04-23 13:02:17] 3 Barts                   
 [2013-04-23 13:02:17] GPU 0 AMD Radeon HD 6700 Series   hardware monitoring enabled                   
 [2013-04-23 13:02:17] GPU 1 AMD Radeon HD 6700 Series   hardware monitoring enabled                   
 [2013-04-23 13:02:17] GPU 2 AMD Radeon HD 6700 Series   hardware monitoring enabled                   
 [2013-04-23 13:02:17] GPU 3 ATI Radeon HD 5800 Series hardware monitoring enabled                   
 [2013-04-23 13:02:17] 4 GPU devices max detected                   
 [2013-04-23 13:02:17] USB all: found 10 devices - listing known devices                   
 [2013-04-23 13:02:17] No known USB devices
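On the --gpu-map question: that option pairs OpenCL device numbers with ADL device numbers as comma-separated OPENCL:ADL pairs. The pairing below is purely illustrative and has to be worked out from your own -n output; a wrong mapping means clock/fan changes hit the wrong card, which is what the README warning is about:
Code:
./cgminer -c cgminer.conf --gpu-map 3:0,0:1,1:2,2:3   # e.g. OpenCL 3 is ADL 0, and so on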

hero member
Activity: 770
Merit: 502
Using 3.0.0-windows. Coming from 2.11.4-windows.

Two 5850s, working flawlessly. I don't see any hiccups: no decrease in hash rates and no increase either.

Thank you, ckolivas.

Edit:

But my WU:996.7/m to WU:1026/m is crazy high! Which is great; I was at WU:741/m to 771/m.

False alarm on the WU figure: it fell back down to the usual level after many hours.
hero member
Activity: 770
Merit: 502
I have. I don't think it says anything about the hashrate going up a little and then dropping back down 5 seconds after overclocking.

OK, it was worth a shot asking, and notifying you of the guide in case you hadn't known of it.
full member
Activity: 196
Merit: 100
1. Every time I change the mem clock, the hashrate goes up slightly (about 15 Kh/s) and then drops back down to where it was (275 Kh/s). According to GPU-Z, the speed is staying where I set it, so why is the hashrate going back down?

It's complicated.

Really? Care to explain, or is your comment just a reflection of your inadequacies? Either your inadequacies in articulation, which prevent you from explaining, or your inadequacies in comprehension, which prevent you from understanding it yourself?

yes
newbie
Activity: 19
Merit: 0
1. Every time I change the mem clock, the hashrate goes up slightly (about 15 Kh/s) and then drops back down to where it was (275 Kh/s). According to GPU-Z, the speed is staying where I set it, so why is the hashrate going back down?

It's complicated.

Really? Care to explain, or is your comment just a reflection of your inadequacies? Either your inadequacies in articulation, which prevent you from explaining, or your inadequacies in comprehension, which prevent you from understanding it yourself?