
Topic: Building Cheap Miners : My "Secret" - page 16. (Read 60249 times)

newbie
Activity: 82
Merit: 0
April 24, 2018, 09:15:23 AM
Is cheap mining just a matter of using a cheap hard drive?
sr. member
Activity: 784
Merit: 282
April 24, 2018, 08:37:28 AM
What do you think about using one simple ATX supply per card and connecting their black and green wires for sync?
A simple ATX supply costs around $8 and can easily drive one GPU.

That would probably be highly inefficient. Each ATX PSU has to run some sort of fan to cool its components, so imagine if you had 8 GPUs and 8 PSUs: that is an extra 7 PSU fans running, all consuming electricity that would eat into your mining profits. It would be bulky and space-consuming too.

Not cool.
hero member
Activity: 714
Merit: 512
April 24, 2018, 08:30:50 AM
I had to do the following:

1) Type "sudo sysctl -w vm.nr_hugepages=128"
2) Edit "/etc/security/limits.conf"
3) Add "* soft memlock 262144" and "* hard memlock 262144"
4) In XMR-STAK's "config.txt", set "use_slow_memory" to "never"

This resolved the issue.

So... it seems I have to do #1 every time I reboot as well now.

The other steps are saved, but it seems to forget #1 upon reboot.

SOLVED:

Replace Step #1 as follows:

1) In a console, go to "/etc", type "sudo gedit sysctl.conf", and add the line "vm.nr_hugepages=128"

This saves it permanently.
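Condensed, the permanent version of the whole fix looks like this (a sketch, assuming an Ubuntu-style layout matching the gedit steps above; note that 128 huge pages at 2 MB each is 256 MB, which lines up with the 262144 KB memlock limit):

    # persist the hugepage count across reboots
    echo "vm.nr_hugepages=128" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

    # let non-root processes lock 256 MB of RAM (steps 2-3 above)
    echo "* soft memlock 262144" | sudo tee -a /etc/security/limits.conf
    echo "* hard memlock 262144" | sudo tee -a /etc/security/limits.conf

    # verify after a reboot; HugePages_Total should read 128
    grep -i hugepages /proc/meminfo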

So... not a huge fan of Linux over here, haha... such a PITA.
hero member
Activity: 881
Merit: 502
April 24, 2018, 08:26:44 AM
I also build some cheap rigs. It is OK if you have a max of 5-6 rigs, but if you plan on 10-20 or more rigs, the best option is 8-GPU rigs with new motherboards and GPUs. The reason is maintenance.

I beg to differ.

I will put my oldish HP DL580 G7 from 2010, upgraded to 4x E7-8837s, with 9-11 GPUs up against those newfangled rigs you mentioned. Being a server, it has robustness designed in: redundant high-efficiency Gold/Platinum power supplies, ECC error-corrected memory, and superior cooling. The parts in it are made to last a very long time, and if maintenance is needed it is easy to do.

The cost savings on these older systems vs. new systems far outweigh whatever little maintenance cost they might incur sometime in the future.

If you use the same systems to build your farm, it shouldn't matter whether they are oldish or newish when it comes to maintenance, as long as you go with quality hardware.

If somebody has those unique server MBs, yes, you are 100% right.

However, I am talking about old-generation MBs with a maximum of 4 PCIe slots, and with 500 W power supplies.

If you can find an MB with 8-10 PCIe slots, that is 100% fine.

What I mean is that the number of rigs matters, since more rigs usually take more time. Also, each CPU and system consumes energy, so a minimum of 8 GPUs per rig is better.
hero member
Activity: 714
Merit: 512
April 24, 2018, 08:08:51 AM
I had to do the following:

1) Type "sudo sysctl -w vm.nr_hugepages=128"
2) Edit "/etc/security/limits.conf"
3) Add "* soft memlock 262144" and "* hard memlock 262144"
4) In XMR-STAK's "config.txt", set "use_slow_memory" to "never"

This resolved the issue.

So... it seems I have to do #1 every time I reboot as well now.

The other steps are saved, but it seems to forget #1 upon reboot.
hero member
Activity: 714
Merit: 512
April 24, 2018, 06:24:45 AM
Have you been using hugepages already?  I've been using it since day one.

The only thing I can think of is that, because of the power flicker, something happened kernel-wise, maybe? Something changed that created the need for the band-aid you had to apply.

For shits and giggles, since I know you have a lot of boxes that are going to need your remedy: image one of the new boxes (if using HDDs), swap the drive into a broken box, and see if it magically starts to work right, because it appears the fix was software-based and had nothing to do with hardware.

I'm really leaning toward something in the reboot having screwed up the OS and caused all your grief, which is why I'm assuming a fresh install worked just fine on the new boxes.

I'm gearing up to start removing all my drives and do network boot here soon.

I found something that is working awesome on these Dells... but... do I share? You have enough firepower to drastically increase the diff and you still won't return my calls... Cheesy

Sorry... I am terrible at returning PMs, so I tend not to look at them on forums (I'm on a lot of forums for my main business as well) Tongue

-----

I never set up huge pages on the Dell systems, as I had already achieved what I had read to be the max hash rate of ~2100 H/s without doing so, and being new to Linux, I saved myself the hassle. I am going to go ahead and do it on the new ones I set up, just in case.

On my Windows systems I always enable it as I see a direct difference.
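(For anyone wanting the same thing on Windows: large pages there require the "Lock pages in memory" user right. Assuming a stock install, run "secpol.msc", go to Local Policies > User Rights Assignment > Lock pages in memory, add your user, and reboot.)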
hero member
Activity: 714
Merit: 512
April 24, 2018, 06:22:19 AM
I also build some cheap rigs. It is OK if you have a max of 5-6 rigs, but if you plan on 10-20 or more rigs, the best option is 8-GPU rigs with new motherboards and GPUs. The reason is maintenance.

I beg to differ.

I will put my oldish HP DL580 G7 from 2010, upgraded to 4x E7-8837s, with 9-11 GPUs up against those newfangled rigs you mentioned. Being a server, it has robustness designed in: redundant high-efficiency Gold/Platinum power supplies, ECC error-corrected memory, and superior cooling. The parts in it are made to last a very long time, and if maintenance is needed it is easy to do.

The cost savings on these older systems vs. new systems far outweigh whatever little maintenance cost they might incur sometime in the future.

If you use the same systems to build your farm, it shouldn't matter whether they are oldish or newish when it comes to maintenance, as long as you go with quality hardware.

Indeed.

I have a few "brand new" hardware miners... and I've had several motherboard failures (either total, or a single PCIe slot).

Z400s... zero system hardware has failed (I had one hard drive go out... but that is unrelated to the Z400 itself).
hero member
Activity: 714
Merit: 512
April 24, 2018, 06:21:06 AM
What do you think about using one simple ATX supply per card and connecting their black and green wires for sync?
A simple ATX supply costs around $8 and can easily drive one GPU.

It's something I've thought of but haven't tried... I have 80 or so of the stock Z400 supplies at this point, haha... each one can power one fairly robust card.

I built a gaming PC for my son with a Z400 and the stock PSU, and when he isn't using it I mine on it... with a GTX 1080. Just have to turn the power down a bit in Afterburner =)
member
Activity: 420
Merit: 10
April 24, 2018, 12:33:29 AM
What do you think about using one simple ATX supply per card and connecting their black and green wires for sync?
A simple ATX supply costs around $8 and can easily drive one GPU.
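For reference, the "black and green wires" bit is the standard ATX jump-start trick (a sketch based on the standard ATX 24-pin pinout):

* The green wire on the 24-pin connector is PS_ON#; every black wire is ground.
* Shorting green to black (the classic paperclip test) turns the supply on without a motherboard.
* To sync a secondary PSU with the main rig, that short is usually switched from the primary supply, e.g. with an add2psu-style relay board.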
member
Activity: 214
Merit: 24
April 23, 2018, 11:34:38 PM
I also build some cheap rigs. It is OK if you have a max of 5-6 rigs, but if you plan on 10-20 or more rigs, the best option is 8-GPU rigs with new motherboards and GPUs. The reason is maintenance.

I beg to differ.

I will put my oldish HP DL580 G7 from 2010, upgraded to 4x E7-8837s, with 9-11 GPUs up against those newfangled rigs you mentioned. Being a server, it has robustness designed in: redundant high-efficiency Gold/Platinum power supplies, ECC error-corrected memory, and superior cooling. The parts in it are made to last a very long time, and if maintenance is needed it is easy to do.

The cost savings on these older systems vs. new systems far outweigh whatever little maintenance cost they might incur sometime in the future.

If you use the same systems to build your farm, it shouldn't matter whether they are oldish or newish when it comes to maintenance, as long as you go with quality hardware.
jr. member
Activity: 176
Merit: 1
April 23, 2018, 05:12:21 PM

The lithium batteries for your power wall I understand, man. Good job on saving money on those. But these old laptop batteries?

If you don't mind, may I ask what you use them for? That's a lot of junk that could be flammable, if I understand correctly.

I hope you understand that the batteries for his power wall come from those laptop batteries, right? He tears the laptop battery casings open to extract the lithium cells, then tests and bins them accordingly.
jr. member
Activity: 176
Merit: 1
April 23, 2018, 05:10:04 PM
Have you been using hugepages already?  I've been using it since day one.

The only thing I can think of is that, because of the power flicker, something happened kernel-wise, maybe? Something changed that created the need for the band-aid you had to apply.

For shits and giggles, since I know you have a lot of boxes that are going to need your remedy: image one of the new boxes (if using HDDs), swap the drive into a broken box, and see if it magically starts to work right, because it appears the fix was software-based and had nothing to do with hardware.
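If it helps, that image-and-swap is just a block-level clone; a sketch, assuming "/dev/sda" is the known-good drive and "/dev/sdb" is the spare (double-check device names with "lsblk" first):

    # clone the good drive onto the spare, then move the spare into the broken box
    sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync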

I'm really leaning toward something in the reboot having screwed up the OS and caused all your grief, which is why I'm assuming a fresh install worked just fine on the new boxes.

I'm gearing up to start removing all my drives and do network boot here soon.

I found something that is working awesome on these Dells... but... do I share? You have enough firepower to drastically increase the diff and you still won't return my calls... Cheesy
hero member
Activity: 714
Merit: 512
April 23, 2018, 01:50:20 PM
Try clearing the NVRAM by changing the jumper position on the board.

* Locate the 3-pin CMOS jumper (CLR CMOS) on the motherboard
* Remove the jumper plug from pins 1 and 2
* Place the jumper plug on pins 2 and 3 and wait approximately 5 seconds
* Replace the jumper plug on pins 1 and 2

Are you running both PSUs or did you remove one like I did?  Maybe try swapping PSU to the other bay or even have both PSUs installed and plugged in.

Another thing to try is resetting BIOS to defaults in the BIOS itself.

I'll do what I can to help you troubleshoot even though you won't take my phone calls!



So I have tried:

1) Resetting the NVRAM with the jumper.
2) Completely reflashing the BIOS with the Dell utility.

Neither worked... still ~1600 H/s.

I am running both supplies... I'll try swapping them with one another.

----

I just set up two more R815s that I had not previously powered on or set up at all... they both run at 2100 H/s, just like they should and just like my other ones used to. I cannot find a single difference in any setting.

SOLVED... but it leaves me with more questions than answers...

I had to do the following:

1) Type "sudo sysctl -w vm.nr_hugepages=128"
2) Edit "/etc/security/limits.conf"
3) Add "* soft memlock 262144" and "* hard memlock 262144"
4) In XMR-STAK's "config.txt", set "use_slow_memory" to "never"

This resolved the issue.

So... the weird part is I did NOT have to do this before the power flicker. I also did NOT have to do this on the two new systems I set up today to get the full speed.

I have no idea how a power flicker could make changing these settings a necessity.
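(For comparing a broken box against a good one, assuming both run the same distro, these show exactly the settings the fix touches:)

    cat /proc/sys/vm/nr_hugepages   # should print 128 once the fix is in
    ulimit -l                       # max locked memory in KB; 262144 once limits.conf applies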
hero member
Activity: 881
Merit: 502
April 23, 2018, 01:41:16 PM
I also build some cheap rigs. It is OK if you have a max of 5-6 rigs, but if you plan on 10-20 or more rigs, the best option is 8-GPU rigs with new motherboards and GPUs. The reason is maintenance.
hero member
Activity: 714
Merit: 512
April 23, 2018, 01:18:27 PM
Try clearing the NVRAM by changing the jumper position on the board.

* Locate the 3-pin CMOS jumper (CLR CMOS) on the motherboard
* Remove the jumper plug from pins 1 and 2
* Place the jumper plug on pins 2 and 3 and wait approximately 5 seconds
* Replace the jumper plug on pins 1 and 2

Are you running both PSUs or did you remove one like I did?  Maybe try swapping PSU to the other bay or even have both PSUs installed and plugged in.

Another thing to try is resetting BIOS to defaults in the BIOS itself.

I'll do what I can to help you troubleshoot even though you won't take my phone calls!



So I have tried:

1) Resetting the NVRAM with the jumper.
2) Completely reflashing the BIOS with the Dell utility.

Neither worked... still ~1600 H/s.

I am running both supplies... I'll try swapping them with one another.

----

I just set up two more R815s that I had not previously powered on or set up at all... they both run at 2100 H/s, just like they should and just like my other ones used to. I cannot find a single difference in any setting.
full member
Activity: 280
Merit: 102
April 22, 2018, 09:35:44 PM

These are some of the batteries I got last month from going around and asking politely at many locations, and from picking up some at locations that told me they would set them aside for me...



The lithium batteries for your power wall I understand, man. Good job on saving money on those. But these old laptop batteries?

If you don't mind, may I ask what you use them for? That's a lot of junk that could be flammable, if I understand correctly.
jr. member
Activity: 176
Merit: 1
April 22, 2018, 09:24:29 PM
IN GENERAL DO NOT SEE MEET TO TAKE ALL THE EARLY CHEAP EQUIPMENT, LOOK BETTER BETWEEN THE PRICE AND THE QUALITY THEN WILL GO ON TOLY

I'm not sure I understand your statement.


 Cheesy Cheesy Cheesy
hero member
Activity: 714
Merit: 512
April 22, 2018, 02:48:17 PM
IN GENERAL DO NOT SEE MEET TO TAKE ALL THE EARLY CHEAP EQUIPMENT, LOOK BETTER BETWEEN THE PRICE AND THE QUALITY THEN WILL GO ON TOLY

I'm not sure I understand your statement.

All of my Z400s have been bulletproof... close to a year now on the first ones I bought, running 24/7.
newbie
Activity: 124
Merit: 0
April 22, 2018, 02:30:17 PM

IN GENERAL DO NOT SEE MEET TO TAKE ALL THE EARLY CHEAP EQUIPMENT, LOOK BETTER BETWEEN THE PRICE AND THE QUALITY THEN WILL GO ON TOLY
hero member
Activity: 714
Merit: 512
April 22, 2018, 10:25:14 AM
So I have been on ZergPool for a while now, since they launched X16R on there.

But... the earnings seem a bit low lately, so I am experimenting with my friend's farm first (I manage it for him)... as he has only 10 machines.

He has 60x GTX 1080 Ti cards but has only averaged ~$1.90 USD/day per card for the last week or so (roughly $114/day across the farm), so I am pushing him back onto ZPOOL to see how it stacks up (when we both swapped to ZERG it was doing much higher).