Author

Topic: EWBF's CUDA Zcash miner - page 181. (Read 2164329 times)

newbie
Activity: 16
Merit: 0
March 31, 2017, 04:57:23 AM
I also thought they were drawing 350 watts, but when I plugged a meter into the wall socket I was unpleasantly surprised. The software shows quite a different value, up to 450 watts of consumption, while in the real world it's a full 750.

I have 6x 1060 3GB cards at 400 watts measured at the wall. You just need to use MSI Afterburner and slide the TDP slider to the left until you reach 60%.


How many power supplies run your video cards? One 400 W unit?
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
March 31, 2017, 04:53:18 AM
I also thought they were drawing 350 watts, but when I plugged a meter into the wall socket I was unpleasantly surprised. The software shows quite a different value, up to 450 watts of consumption, while in the real world it's a full 750.

I have 6x 1060 3GB cards at 400 watts measured at the wall. You just need to use MSI Afterburner and slide the TDP slider to the left until you reach 60%.
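For anyone without Afterburner, the same cap can be set with nvidia-smi from an admin/root shell. A minimal sketch, assuming a 1060's 120 W reference limit (so 72 W is the 60% point); query the valid range first:
Code:
nvidia-smi -q -d POWER
nvidia-smi -i 0 -pl 72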
newbie
Activity: 16
Merit: 0
March 31, 2017, 04:49:23 AM
Dear friends, please forgive my bad English. I want to share my observations of a farm with 4 GTX 1060 3GB graphics cards at a +100/+500 overclock. My ZEC farm consumes 750 watts, while at idle it draws 110 watts; the rig has 2 gigabytes of memory, two 500 W power supplies, and a Celeron G550 processor. From my observations I will say that the 120 watts these cards supposedly draw is a fairy tale: without overclocking it is about 150 watts each, and overclocked about 170 watts each. On ETH the overclocked farm draws about 100 watts less. These observations were made with proper instruments and meters over at least a full day.


I have got 5x GTX 1060 at 90 W each with 260 Sol/s, 450 W total plus 30 W for the motherboard.


What tool did you use to measure the farm's consumption? I hope not software, but a meter at the 220 V outlet reading the amps actually drawn, because the software misreports the real figures by 30-50 percent.



I also thought they were drawing 350 watts, but when I plugged a meter into the wall socket I was unpleasantly surprised. The software shows quite a different value, up to 450 watts of consumption, while in the real world it's a full 750.
newbie
Activity: 16
Merit: 0
March 31, 2017, 04:42:06 AM
Dear friends, please forgive my bad English. I want to share my observations of a farm with 4 GTX 1060 3GB graphics cards at a +100/+500 overclock. My ZEC farm consumes 750 watts, while at idle it draws 110 watts; the rig has 2 gigabytes of memory, two 500 W power supplies, and a Celeron G550 processor. From my observations I will say that the 120 watts these cards supposedly draw is a fairy tale: without overclocking it is about 150 watts each, and overclocked about 170 watts each. On ETH the overclocked farm draws about 100 watts less. These observations were made with proper instruments and meters over at least a full day.


I have got 5x GTX 1060 at 90 W each with 260 Sol/s, 450 W total plus 30 W for the motherboard.


What tool did you use to measure the farm's consumption? I hope not software, but a meter at the 220 V outlet reading the amps actually drawn, because the software misreports the real figures by 30-50 percent.
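For reference, a plug-in meter is just reading mains volts times amps drawn, so a 750 W figure corresponds to roughly 3.4 A at 220 V:
Code:
P = U x I = 220 V x 3.4 A ≈ 750 W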
member
Activity: 219
Merit: 30
March 31, 2017, 04:30:27 AM
Dear friends, please forgive my bad English. I want to share my observations of a farm with 4 GTX 1060 3GB graphics cards at a +100/+500 overclock. My ZEC farm consumes 750 watts, while at idle it draws 110 watts; the rig has 2 gigabytes of memory, two 500 W power supplies, and a Celeron G550 processor. From my observations I will say that the 120 watts these cards supposedly draw is a fairy tale: without overclocking it is about 150 watts each, and overclocked about 170 watts each. On ETH the overclocked farm draws about 100 watts less. These observations were made with proper instruments and meters over at least a full day.


I have got 5x GTX 1060 at 90 W each with 260 Sol/s, 450 W total plus 30 W for the motherboard.
newbie
Activity: 16
Merit: 0
March 31, 2017, 04:16:49 AM
Dear friends, please forgive my bad English. I want to share my observations of a farm with 4 GTX 1060 3GB graphics cards at a +100/+500 overclock. My ZEC farm consumes 750 watts, while at idle it draws 110 watts; the rig has 2 gigabytes of memory, two 500 W power supplies, and a Celeron G550 processor. From my observations I will say that the 120 watts these cards supposedly draw is a fairy tale: without overclocking it is about 150 watts each, and overclocked about 170 watts each. On ETH the overclocked farm draws about 100 watts less. These observations were made with proper instruments and meters over at least a full day.
member
Activity: 112
Merit: 10
March 31, 2017, 01:26:07 AM
Has anyone here tested this miner on the NVIDIA Tesla K80 or M60?
I have one of each, which I use for ETH mining, and was wondering whether it would be a good idea to mine Zcash with them.
full member
Activity: 240
Merit: 100
March 31, 2017, 01:15:15 AM
Windows Defender has now marked version 3.2b as a virus and will block the file.

I'm also just finding this out... disable Win Defender? lol

aarrggghhhhh


Just an FYI: if the miner.exe file disappears from your system and you're running Windows 10, you can thank Defender for auto-deleting it without permission. Since you can't permanently disable Defender, the only way around it is to allow an exception: use an admin account, disable real-time protection, then unzip the miner and add miner.exe as an exclusion. Otherwise, one day the file will simply be gone.

Yet again, Microsoft thinking for us poor humans...
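For reference, the same exclusion can be added in one line from an elevated PowerShell prompt; the folder path below is only an example, point it at wherever you unzipped the miner:
Code:
Add-MpPreference -ExclusionPath "C:\mining\ewbf"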

Icon
sr. member
Activity: 281
Merit: 250
March 30, 2017, 10:55:10 PM
Windows Defender has now marked version 3.2b as a virus and will block the file.

I'm also just finding this out... disable Win Defender? lol

aarrggghhhhh
newbie
Activity: 29
Merit: 0
March 30, 2017, 09:28:38 PM
newbie
Activity: 14
Merit: 0
March 30, 2017, 06:28:54 PM
Thx for the miner EWBF Kiss
member
Activity: 130
Merit: 11
March 30, 2017, 04:24:52 PM
nvidia-smi is pretty broken for Pascal cards:

https://devtalk.nvidia.com/default/topic/992477/linux/bug-378-xx-nvml-nvidia-smi-core-clock-is-wrong-on-pascal-devices/

I hope NVIDIA will fix this in the next driver release, but I am not very confident.
newbie
Activity: 29
Merit: 0
March 30, 2017, 04:01:30 PM
I've got a new 1080 Ti to play around with and I've found a bug (most likely in the NVIDIA Linux drivers), so I may be heading to Windows. Here's the kernel backtrace:

Mar 30 16:10:33 zcash3 kernel: [12308.114736] BUG: unable to handle kernel NULL pointer dereference at 0000000000000160
Mar 30 16:10:33 zcash3 kernel: [12308.114983] IP: [] _nv015951rm+0x1c6/0x2b0 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.115312] PGD 0
Mar 30 16:10:33 zcash3 kernel: [12308.115377] Oops: 0000 [#1] SMP
Mar 30 16:10:33 zcash3 kernel: [12308.115484] Modules linked in: nvidia_uvm(POE) snd_hda_codec_hdmi nvidia_drm(POE) nvidia_modeset(POE) nvidia(POE) intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass snd_hda_codec_realtek serio_raw snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core mei_me snd_hwdep snd_pcm mei snd_timer snd lpc_ich soundcore shpchp tpm_infineon 8250_fintek mac_hid acpi_pad ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd nouveau psmouse ahci mxm_wmi i2c_algo_bit libahci ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops alx drm mdio video wmi fjes
Mar 30 16:10:33 zcash3 kernel: [12308.118288] CPU: 0 PID: 1370 Comm: miner Tainted: P           OE   4.4.0-71-generic #92-Ubuntu
Mar 30 16:10:33 zcash3 kernel: [12308.118553] Hardware name: MSI MS-7917/Z97 GAMING 5 (MS-7917), BIOS V1.13 02/16/2016
Mar 30 16:10:33 zcash3 kernel: [12308.118792] task: ffff88002b715400 ti: ffff8804a2ae0000 task.ti: ffff8804a2ae0000
Mar 30 16:10:33 zcash3 kernel: [12308.119023] RIP: 0010:[]  [] _nv015951rm+0x1c6/0x2b0 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.124837] RSP: 0018:ffff8804a2ae39e0  EFLAGS: 00010246
Mar 30 16:10:33 zcash3 kernel: [12308.130505] RAX: 0000000000000000 RBX: ffff880482a12ea0 RCX: 00000001fe86cfff
Mar 30 16:10:33 zcash3 kernel: [12308.136322] RDX: 00000001fe86c000 RSI: 0000000000000000 RDI: ffff8804bb7d0008
Mar 30 16:10:33 zcash3 kernel: [12308.142085] RBP: ffff880482a12e68 R08: 0000000000000000 R09: 0000000000000001
Mar 30 16:10:33 zcash3 kernel: [12308.147813] R10: 0000000002020008 R11: ffffffffc1aaaf20 R12: ffff8804bb7d0008
Mar 30 16:10:33 zcash3 kernel: [12308.153497] R13: 0000000000000001 R14: 00000001fe86c000 R15: 0000000000001000
Mar 30 16:10:33 zcash3 kernel: [12308.159101] FS:  00007f7c45fff700(0000) GS:ffff8804cec00000(0000) knlGS:0000000000000000
Mar 30 16:10:33 zcash3 kernel: [12308.170096] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 30 16:10:33 zcash3 kernel: [12308.175759] CR2: 0000000000000160 CR3: 0000000001e0a000 CR4: 00000000001406f0
Mar 30 16:10:33 zcash3 kernel: [12308.181439] Stack:
Mar 30 16:10:33 zcash3 kernel: [12308.186928]  0000000000000000 00000000001fe86c ffff8804ba123008 ffff880482a12ff8
Mar 30 16:10:33 zcash3 kernel: [12308.197989]  0000000000000000 ffffffffc17fb890 ffff8804ba123008 00000000001fe86c
Mar 30 16:10:33 zcash3 kernel: [12308.209097]  0000000000000000 ffff880482a12ff8 ffff8804869a6608 ffffffffc1a9e5cd
Mar 30 16:10:33 zcash3 kernel: [12308.220213] Call Trace:
Mar 30 16:10:33 zcash3 kernel: [12308.225662]  [] ? _nv010389rm+0xb0/0x270 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.231167]  [] ? _nv016944rm+0x6bd/0x700 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.236552]  [] ? _nv016990rm+0x20/0xc0 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.241796]  [] ? rm_gpu_ops_stop_channel+0x120/0x140 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.252001]  [] ? nvUvmInterfaceStopChannel+0x31/0x50 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.262244]  [] ? uvm_user_channel_stop+0x34/0x40 [nvidia_uvm]
Mar 30 16:10:33 zcash3 kernel: [12308.272502]  [] ? uvm_va_space_stop_all_user_channels.part.7+0x6f/0xc0 [nvidia_uvm]
Mar 30 16:10:33 zcash3 kernel: [12308.282820]  [] ? uvm_va_space_destroy+0x383/0x390 [nvidia_uvm]
Mar 30 16:10:33 zcash3 kernel: [12308.293238]  [] ? uvm_release+0x11/0x20 [nvidia_uvm]
Mar 30 16:10:33 zcash3 kernel: [12308.298613]  [] ? __fput+0xe4/0x220
Mar 30 16:10:33 zcash3 kernel: [12308.304003]  [] ? ____fput+0xe/0x10
Mar 30 16:10:33 zcash3 kernel: [12308.309266]  [] ? task_work_run+0x81/0xa0
Mar 30 16:10:33 zcash3 kernel: [12308.314522]  [] ? do_exit+0x2e1/0xb00
Mar 30 16:10:33 zcash3 kernel: [12308.319701]  [] ? poll_select_copy_remaining+0x140/0x140
Mar 30 16:10:33 zcash3 kernel: [12308.324856]  [] ? do_group_exit+0x43/0xb0
Mar 30 16:10:33 zcash3 kernel: [12308.329957]  [] ? get_signal+0x292/0x600
Mar 30 16:10:33 zcash3 kernel: [12308.334992]  [] ? do_signal+0x37/0x6f0
Mar 30 16:10:33 zcash3 kernel: [12308.339929]  [] ? poll_select_copy_remaining+0x140/0x140
Mar 30 16:10:33 zcash3 kernel: [12308.344859]  [] ? poll_select_copy_remaining+0x140/0x140
Mar 30 16:10:33 zcash3 kernel: [12308.349596]  [] ? poll_select_copy_remaining+0x140/0x140
Mar 30 16:10:33 zcash3 kernel: [12308.354140]  [] ? exit_to_usermode_loop+0x8c/0xd0
Mar 30 16:10:33 zcash3 kernel: [12308.358581]  [] ? syscall_return_slowpath+0x4e/0x60
Mar 30 16:10:33 zcash3 kernel: [12308.362931]  [] ? int_ret_from_sys_call+0x25/0x8f
Mar 30 16:10:33 zcash3 kernel: [12308.367182] Code: 0f 00 00 00 0f 84 d4 fe ff ff 48 8b 83 88 00 00 00 45 31 c0 48 85 c0 0f 85 c2 00 00 00 4c 89 f2 4b 8d 4c 3e ff 4c 89 c6 4c 89 e7 <41> ff 90 60 01 00 00 84 c0 8b 43 08 0f 94 c2 a9 00 00 00 01 0f
Mar 30 16:10:33 zcash3 kernel: [12308.380668] RIP  [] _nv015951rm+0x1c6/0x2b0 [nvidia]
Mar 30 16:10:33 zcash3 kernel: [12308.385433]  RSP
Mar 30 16:10:33 zcash3 kernel: [12308.389949] CR2: 0000000000000160
Mar 30 16:10:33 zcash3 kernel: [12308.400824] ---[ end trace 283780b61aeb276b ]---
sr. member
Activity: 312
Merit: 250
March 30, 2017, 04:00:26 PM
Did you do a clean install of the driver, or just install over the previous version?
Maybe that left the broken nvidia-smi behind?
legendary
Activity: 3892
Merit: 4331
March 30, 2017, 02:37:54 PM
how to see the clocks and how to change them?
Thanks
EDIT: so far I tried both mem and gr and it says [NOT SUPPORTED]

http://cryptomining-blog.com/7341-how-to-squeeze-some-extra-performance-mining-ethereum-on-nvidia/
It's for ETH mining, but the same principle applies for reading the max clocks and setting them via nvidia-smi.

If it says [NOT SUPPORTED], then the nvidia-smi that came with the driver is broken.
NVIDIA tends to release many Linux drivers with a broken nvidia-smi...
I am sure that 375.39 has a working nvidia-smi.
375.39 is exactly the driver that I have, and it says
SUPPORTED_CLOCKS    N/A

The card (Zotac 1060 6GB Mini) is probably locked, darn it.
sr. member
Activity: 312
Merit: 250
March 30, 2017, 02:18:41 PM
how to see the clocks and how to change them?
Thanks
EDIT: so far I tried both mem and gr and it says [NOT SUPPORTED]

http://cryptomining-blog.com/7341-how-to-squeeze-some-extra-performance-mining-ethereum-on-nvidia/
It's for ETH mining, but the same principle applies for reading the max clocks and setting them via nvidia-smi.

If it says [NOT SUPPORTED], then the nvidia-smi that came with the driver is broken.
NVIDIA tends to release many Linux drivers with a broken nvidia-smi...
I am sure that 375.39 has a working nvidia-smi.
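For reference, a minimal sequence on a working driver looks like this; the clock pair below is only an example of the kind of values a 1060 reports, so substitute a pair from your own card's supported list (persistence mode is Linux-only and keeps the setting across miner restarts):
Code:
nvidia-smi -q -d SUPPORTED_CLOCKS
nvidia-smi -i 0 -pm 1
nvidia-smi -i 0 -ac 4004,1708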
legendary
Activity: 3892
Merit: 4331
March 30, 2017, 12:04:47 PM

Personally, I too prefer Linux for mining. Smiley
You can get more GPU usage by forcing P0 instead of P2, by setting the highest application clocks via nvidia-smi.
I use this on a rig with a GeForce GTX 970 that also defaults to P2.

I've tried many ways to overclock NVIDIA cards in Linux, none of them successfully (I use Ubuntu Server, headless, without a GUI).

I am reading the nvidia-smi -h output, but I could not work out how to see the clocks or how to change them.
I pass commands and get puzzling responses instead of a simple readout.
So, could you tell me how to see the clocks and how to change them?
Thanks
EDIT: so far I tried both mem and gr and it says [NOT SUPPORTED]
legendary
Activity: 3892
Merit: 4331
March 30, 2017, 11:40:02 AM

Just noticed the same bug on one of the rigs
Code:
Temp: GPU0: 46C GPU1: 24C GPU2: 26C
GPU0: 0 Sol/s GPU1: 0 Sol/s GPU2: 0 Sol/s
Total speed: 0 Sol/s
ERROR: Looks like GPU0 are stopped. Restart attempt.
INFO: GPU0 are restarted.
ERROR: Looks like GPU1 are stopped. Restart attempt.
INFO: GPU1 are restarted.
ERROR: Looks like GPU2 are stopped. Restart attempt.
INFO: GPU2 are restarted.
CUDA: Device: 2 User selected solver: 3
CUDA: Device: 1 User selected solver: 3
CUDA: Device: 0 User selected solver: 0
CUDA: Device: 2 Thread exited with code: 46
CUDA: Device: 1 Thread exited with code: 46
CUDA: Device: 0 Thread exited with code: 46

I've been running the 3b version since release; this behavior started 2-3 days ago.

I had a PC with one card, added a second, and cannot make it work.
The second card starts fine, then drops out within 24 hours or less; mining continues, but the PC is frozen and I have to do a hard restart.
I am not sure whether it is the program, a card glitch, or not enough power.
I have a 500 W PSU and thought that should be enough for two Zotac 1060s, but maybe it isn't, even though they are supposed to use just 120 W each.
The CPU only uses 54 W (TDP).
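A rough budget, assuming stock 120 W per card and typical platform overhead:
Code:
2 x 120 W (GPUs)       = 240 W
CPU (54 W TDP)         =  54 W
Board / RAM / drives   ~  40 W
------------------------------
Nominal total          ~ 334 W

That is within 500 W on paper, but overclock transients plus the usual advice to keep a PSU under about 80% continuous load (roughly 400 W on a 500 W unit) leave less headroom than it looks, so a weak or aging supply could still explain the freezes.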
newbie
Activity: 29
Merit: 0
March 30, 2017, 11:35:54 AM

Personally, I too prefer Linux for mining. Smiley
Currently I use NVIDIA's 375.39 driver - it is rock solid, but it lacks support for the 1080 Ti in your case.
You can get more GPU usage by forcing P0 instead of P2, by setting the highest application clocks via nvidia-smi.
I use this on a rig with a GeForce GTX 970 that also defaults to P2.

I've tried many ways to overclock NVIDIA cards in Linux, none of them successfully (I use Ubuntu Server, headless, without a GUI).

How do you manually set P0?

Uggh - may have to give Windoze a try then
sr. member
Activity: 312
Merit: 250
March 30, 2017, 11:16:28 AM
EWBF,

Will it be possible to add another switch under --eexit?
Like, exit the miner if total speed reaches 0 sol/s.



I second this. Mine will run for a good 2 days and then start hashing 0. I don't think it is an OC issue, because I already had to clock down from when I ran 3b; 3b was the most stable at the highest clocks. I don't mind the lower clocks, but I'm frustrated with the stability issue, so having a command to restart the miner if it reaches 0 would be great.

Yes, but I think it's better to fix this. Does the speed also fall on the pool side? And can you show the log file?
Just noticed the same bug on one of the rigs
Code:
Temp: GPU0: 46C GPU1: 24C GPU2: 26C
GPU0: 0 Sol/s GPU1: 0 Sol/s GPU2: 0 Sol/s
Total speed: 0 Sol/s
ERROR: Looks like GPU0 are stopped. Restart attempt.
INFO: GPU0 are restarted.
ERROR: Looks like GPU1 are stopped. Restart attempt.
INFO: GPU1 are restarted.
ERROR: Looks like GPU2 are stopped. Restart attempt.
INFO: GPU2 are restarted.
CUDA: Device: 2 User selected solver: 3
CUDA: Device: 1 User selected solver: 3
CUDA: Device: 0 User selected solver: 0
CUDA: Device: 2 Thread exited with code: 46
CUDA: Device: 1 Thread exited with code: 46
CUDA: Device: 0 Thread exited with code: 46

I've been running the 3b version since release; this behavior started 2-3 days ago.
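Until something like that is added, a shell wrapper around --eexit at least covers the outright-failure case: when the miner exits on an error, the loop restarts it (it won't catch the 0 Sol/s hang being asked about). A minimal sketch; the pool, wallet, and --eexit mode are placeholders to check against the miner's --help:
Code:
#!/bin/bash
# Restart the miner whenever it exits (e.g. after --eexit fires on a GPU failure).
while true; do
    ./miner --server eu1-zcash.flypool.org --port 3333 \
            --user t1ExampleWalletAddress.rig1 --eexit 1
    echo "miner exited, restarting in 10 seconds..."
    sleep 10
done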