Author

Topic: Will this setup work for my first mining rig? (Diagram included) (Read 808 times)

jr. member
Activity: 31
Merit: 4
Since the 1660 Tis are generally low-power GPUs, you can try plugging a 6-pin PCIe connector into the 8-pin PCIe socket on the GPU and it will most likely still work. Most of my RX 470/570s have 8-pin PCIe connectors, but plugging in a 6-pin still works on them.

You can verify this with a multimeter: check that all the GND pins have continuity and that none of them is a sense pin. You couldn't do this with older GPUs back in the 2013 days, such as the R9 280X, because they had a sense pin to make sure an actual 8-pin PCIe was connected, but those were power hogs at over 300 W per GPU.

I think the reason they do this is that many people have a hard-wired 8-pin PCIe connector on their PSU that doesn't split apart, and they wanted those PSUs to be usable.

I can confirm that the GPUs don't work with only 6 pins (even though the 1660 Tis get a large majority of their power from the risers). I didn't purposely test this out; I just didn't fully plug in the last two pins on one GPU and was wondering why it wasn't being recognized.

Anyways, I no longer have any issue with plugging in my GPUs properly since I contacted EVGA about providing extra 8+6-pin cables, so I'm all good now.

Here is my current issue (which I posted in my most recent reply): I've had my rig up for a month (pretty stable for the past 2 weeks), so I have devoted my time to tweaking the OC settings to extract a little more juice out of the cards. I had a few questions about my process and results.

For the past couple of weeks I have been mining Zcoin (MTP) with the T-Rex miner. I have been overclocking with NVIDIA Inspector.

I started by undervolting the cards to 0.8 V and +150 core clock. (Well, I bumped the core clock up by +25 until +175 caused immediate crashes.)

I kept it at +150 and the rig ran for 15 hours straight without any GPUs crashing. Then one GPU crashed, so I changed that GPU to +140 and left the rest at +150. I ran the rig and another GPU crashed after only 3 hours of uptime. Strange, because that GPU was fine during the previous 15 hours of uptime. Anyways, I lowered the clock on that one too.

And I have been repeating that process for about a week and a half. The rig will run fine for half a day, but then I will check on it and see that a GPU has crashed. I've had uptimes of 24+ hours and uptimes as low as 15 minutes. The good thing about T-Rex miner is that it restarts itself (https://imgur.com/a/qcknxQW) and the affected GPU then runs like normal; it's not "crashed" in the sense that it stops working. T-Rex also keeps a running list of all the GPUs that crashed during the current session, which saves me from having to scroll up to find errors (https://imgur.com/a/WHORZA7).

My questions: Am I doing this overclocking thing correctly?

I keep lowering the core clocks, seemingly without end. For my 9-GPU rig my core clocks are: +110, +110, +130, +150, +100, +100, +70, +100, +100. And still the rig will have a GPU that crashes once a day. The weirdest thing is that one GPU will run fine at its current clock for days and then crash for no apparent reason. When will it end??
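The trial-and-error loop described above (drop the offset on whichever GPU crashed, then keep mining) can be sketched in a few lines. This is just a hypothetical illustration of the bookkeeping; the offsets are this rig's numbers and the 10 MHz step is an assumption, not a recommendation:

```python
# Hypothetical sketch of the "lower the clock on whichever GPU crashed" routine.
# The crash list would come from the miner's log; here it's supplied by hand.

def step_down(offsets, crashed_gpus, step=10, floor=0):
    """Return new core-clock offsets, lowering each crashed GPU by `step` MHz."""
    new = list(offsets)
    for gpu in crashed_gpus:
        new[gpu] = max(floor, new[gpu] - step)
    return new

offsets = [110, 110, 130, 150, 100, 100, 70, 100, 100]
# Suppose GPU 3 crashed during the last session:
offsets = step_down(offsets, [3])
print(offsets[3])  # 140: GPU 3 drops from +150 to +140, the rest are untouched
```

The floor keeps the loop from driving an offset below stock, which is one answer to "when will it end": once a card is back at +0 and still crashing, the clocks were never the problem.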

Why is it that one GPU ran fine for 24 hours at +130 but then couldn't run more than 2 hours at +120? Is the issue my core clocks or something else?

The GPUs never "crash" so hard that they freeze the PC or even become unresponsive. T-Rex miner restarts its processes and all the GPUs operate like normal after the restart, sometimes for many hours afterwards. Other miner apps I've used freeze, or freeze the whole PC, when a GPU OC error occurs, so that's neat. So should I even lower the core clocks, or leave them high since the downtime is only a few seconds? Is there any long-term damage to the cards from the crashes?

Could this be a T-Rex issue?
legendary
Activity: 3808
Merit: 1723
Since the 1660 Tis are generally low-power GPUs, you can try plugging a 6-pin PCIe connector into the 8-pin PCIe socket on the GPU and it will most likely still work. Most of my RX 470/570s have 8-pin PCIe connectors, but plugging in a 6-pin still works on them.

You can verify this with a multimeter: check that all the GND pins have continuity and that none of them is a sense pin. You couldn't do this with older GPUs back in the 2013 days, such as the R9 280X, because they had a sense pin to make sure an actual 8-pin PCIe was connected, but those were power hogs at over 300 W per GPU.

I think the reason they do this is that many people have a hard-wired 8-pin PCIe connector on their PSU that doesn't split apart, and they wanted those PSUs to be usable.
jr. member
Activity: 31
Merit: 4
I've had my rig up for a month (pretty stable for the past 2 weeks), so I have devoted my time to tweaking the OC settings to extract a little more juice out of the cards. I had a few questions about my process and results.

For the past couple of weeks I have been mining Zcoin (MTP) with the T-Rex miner. I have been overclocking with NVIDIA Inspector.

I started by undervolting the cards to 0.8 V and +150 core clock. (Well, I bumped the core clock up by +25 until +175 caused immediate crashes.)

I kept it at +150 and the rig ran for 15 hours straight without any GPUs crashing. Then one GPU crashed, so I changed that GPU to +140 and left the rest at +150. I ran the rig and another GPU crashed after only 3 hours of uptime. Strange, because that GPU was fine during the previous 15 hours of uptime. Anyways, I lowered the clock on that one too.

And I have been repeating that process for about a week and a half. The rig will run fine for half a day, but then I will check on it and see that a GPU has crashed. I've had uptimes of 24+ hours and uptimes as low as 15 minutes. The good thing about T-Rex miner is that it restarts itself (https://imgur.com/a/qcknxQW) and the affected GPU then runs like normal; it's not "crashed" in the sense that it stops working. T-Rex also keeps a running list of all the GPUs that crashed during the current session, which saves me from having to scroll up to find errors (https://imgur.com/a/WHORZA7).

My questions: Am I doing this overclocking thing correctly?

I keep lowering the core clocks, seemingly without end. For my 9-GPU rig my core clocks are: +110, +110, +130, +150, +100, +100, +70, +100, +100. And still the rig will have a GPU that crashes once a day. The weirdest thing is that one GPU will run fine at its current clock for days and then crash for no apparent reason. When will it end??

Why is it that one GPU ran fine for 24 hours at +130 but then couldn't run more than 2 hours at +120? Is the issue my core clocks or something else?

The GPUs never "crash" so hard that they freeze the PC or even become unresponsive. T-Rex miner restarts its processes and all the GPUs operate like normal after the restart, sometimes for many hours afterwards. Other miner apps I've used freeze, or freeze the whole PC, when a GPU OC error occurs, so that's neat. So should I even lower the core clocks, or leave them high since the downtime is only a few seconds? Is there any long-term damage to the cards from the crashes?

Could this be a T-Rex issue?
sr. member
Activity: 512
Merit: 260
https://i.imgur.com/54l3eFY.jpg Bad idea linking between PSUs. The riser and the GPU's PCIe power need to come from the same PSU. Don't mix.
member
Activity: 273
Merit: 12
Instead of plugging 2 PSUs into 2 separate outlets, I always recommend purchasing a good surge protector that can handle a 12 A, 120 V constant load. That way it's easier to start up the system: instead of worrying about 2 different power buttons, just flip the switch on the surge protector and you are good. That 1300 W PSU will power more than 5 GPUs no issue; you would actually be able to power all your GPUs with it. Purchase an 8-pin to dual 8-pin cable to power 2 GPUs from 1 single PCIe slot on the PSU. SATA cables are perfectly fine. Since you are using 1660 Tis, they pull very little from the slot and very little from the cables, so you will have no issue with the SATA connector. You will also be perfectly fine chaining 2 risers per SATA connector on the PSU.

It's really questionable whether the people on here saying not to do these things have actually built out larger farms. People don't really understand, and use like 2 threads as evidence that SATA connectors are bad. Just look: der8auer pulled over 200 W from a PCIe slot; that would drive all these people crazy.

I run many 8x 1080 Ti rigs at 65-70% TDP with 2 EVGA 850 W G3 PSUs: 2 risers per SATA connector on the PSU, and 8-pin to dual 8-pin on the 4 PCIe connectors from the PSU. Ran them for years, not one single issue.
legendary
Activity: 2534
Merit: 6080
Self-proclaimed Genius

-snip-
Nothing in particular; that adapter is designed to switch on the two/three PSUs simultaneously by connecting all the PS_ON (green, if colored) wires to a ground (COM) wire of the PSUs when you press the power-on switch connected to the mobo.
(your image: the adapter's two wires)

Those will blow if you stare at them too much, JK.
Just don't short anything.
jr. member
Activity: 31
Merit: 4
https://imgur.com/a/5wH9a6Q

So I'm going to try to connect my second PSU to my rig so I can add more GPUs. My mobo (Gigabyte B250 Fintech) came with this mobo adapter. Is there anything I need to do in order to stay safe and not blow anything up? I read that I need to power on the non-mobo-connected PSU first and then power on the main PSU, and that I have to turn them off in the opposite order. Or do I not have to do that if I am using the adapter? When I shut down the computer, can I keep both PSUs in their switched-on state or do I have to cut the power?
jr. member
Activity: 31
Merit: 4
So it turns out that one of my Gigabyte 1660 Tis runs at 140 W at 100% TDP. WTF?? How is that possible? I thought all 1660 Tis run at 120 W. Do I have a fake 1660 Ti? lol. All my Asus 1660 Tis are fine, 120 W at 100%.
sr. member
Activity: 487
Merit: 266
So I've been mining for a couple of days. I have my power limit set to 80%, but I noticed on the graphs (https://imgur.com/a/f5DVVfw) that there are some spikes over the limit, up to 100-120% at the peaks of the circled spikes. Is this a reading error or something that absolutely shouldn't be occurring?

Another question I have: I have read that running GPUs at 100% power ruins the longevity of the cards. Does overclocking the core and memory clocks have the same effect?

Last question: I've seen conflicting information. Does running at 100% power ruin cards, or does running at high temps (i.e. 70 °C+) ruin cards? I've been running at 60% power and have only been hitting 39-51 °C. Is it safe to up the power? The thing I'm struggling with is finding overclock settings. What I've been doing is upping the core by 25 and the mem by 150 and seeing if it's stable, but I feel like I've already overclocked further than I should. How do I know when it's too much? Right now I'm sitting at 60% power, +150 core, +700 mem, mining MTP (Zcoin).

I think those are just myths. As long as your cooling is OK (fans can sometimes die on GPUs; I've never had the issue, but it can happen), your cards will not have any longevity issues. Most of my gear is comprised of RX 470s that are past 2 years old now, and they still run like a charm.

The same goes for when I was selling some of my gear; people would always ask, "Has it been used for mining???"

They assumed that because you use a GPU for what it is essentially intended for (doing calculations; when you game, it's the same thing) the GPU would be worn out and on its way out, which is about the most ridiculous question you can ask.

The fact is most miners will actually turn things down a notch with voltages and usually tune the card's frequency to match, often underclocking it relative to the undervolt. What matters isn't having the highest hashrate; it's having a stable GPU that doesn't give you a high number of stale or bad shares and that you can leave running for days on end without a problem.

So the conclusion is that if there were any scientific evidence on whether overclocking affects a GPU's longevity, you could make the point that a miner's card would be in better shape than a gamer's, since a gamer will usually overclock the card to get the max performance out of it.

Regarding temps, GPUs can handle 70 °C and up no problem.

Also look at the pool side of things: how well does the pool's reported hashrate match your miner's hashrate? Any large discrepancy means you likely went too far with your over/underclock or undervolt.
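One way to quantify that pool-side check; the 280 and 252 MH/s figures below are made up for illustration, not from this rig:

```python
def discrepancy_pct(miner_hashrate, pool_reported):
    """Percent by which the pool-side hashrate falls short of the local one,
    measured over the same time window."""
    return 100.0 * (miner_hashrate - pool_reported) / miner_hashrate

# e.g. the miner shows 280 MH/s locally but the pool credits 252 MH/s:
print(round(discrepancy_pct(280, 252), 1))  # 10.0
```

A few percent is normal (stale shares, luck over short windows); a sustained double-digit gap like this example is the sign the poster describes that the over/underclock went too far.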
jr. member
Activity: 31
Merit: 4
So I got around to min-maxing my overclock settings today, and I noticed that 3 GPUs put out 31.2 MH/s at a +895 mem clock, but at +900 they each drop to 28.7 MH/s. Is this some sort of throttling? I find it strange that they would drop so much at a hard limit, while my 3 other GPUs all crash before the +895 mark.
The "throttling" GPUs must be reaching high temperatures, and the ones that crash aren't stable.
"Silicon lottery."

The temps are fine, only 54 °C for each of those cards. Or is there a memory-controller temp as well?
legendary
Activity: 2534
Merit: 6080
Self-proclaimed Genius
Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?
-snip-
-snip- Your colleague used SATA cables to connect to GPUs or to risers? Aren't SATA cables inadvisable because they're rated for low power draw? I'm not trying to use SATA connections at all, just VGA PCIe 6+2/6-pin cables to power my GPUs (the 6+2-pin for my GPU and the 6-pin for the riser of the GPU above it).
He definitely meant risers (SATA to 4/6-pin adapters).

∙ SATA power cables are the same as the 4-pin Molex or the GPU cables except for the connector.
∙ Some PSUs have the same output for the whole 12 V rail, while some low-quality ones have a dedicated rail for the GPU cables and lower power for other peripherals; so "rated for low power consumption" isn't an issue for your 80+ Bronze rated PSU.

Yes, it's not advisable, because the pins are too thin and small, so they have a lot of resistance and can produce a lot of heat if the current is too high;
the result: the connector (not the wires) will burn, but it's rare.
So I got around to min-maxing my overclock settings today, and I noticed that 3 GPUs put out 31.2 MH/s at a +895 mem clock, but at +900 they each drop to 28.7 MH/s. Is this some sort of throttling? I find it strange that they would drop so much at a hard limit, while my 3 other GPUs all crash before the +895 mark.
The "throttling" GPUs must be reaching high temperatures, and the ones that crash aren't stable.
"Silicon lottery."
jr. member
Activity: 31
Merit: 4
So I got around to min-maxing my overclock settings today, and I noticed that 3 GPUs put out 31.2 MH/s at a +895 mem clock, but at +900 they each drop to 28.7 MH/s. Is this some sort of throttling? I find it strange that they would drop so much at a hard limit, while my 3 other GPUs all crash before the +895 mark.
jr. member
Activity: 31
Merit: 4
use the VGA cable to power riser & GPU.

Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?

That depends on the cards you are using. A colleague of mine used to connect two GPUs with the same SATA cable, which can only support one GPU, and the result was that a Thermaltake PSU blew up; we were using two of them for a six-GPU rig. Better stick to the plan that works best: do not overload your cabling and you should be fine.

I don't really understand your anecdote. Your colleague used SATA cables to connect to GPUs or to risers? Aren't SATA cables inadvisable because they're rated for low power draw? I'm not trying to use SATA connections at all, just VGA PCIe 6+2/6-pin cables to power my GPUs (the 6+2-pin for my GPU and the 6-pin for the riser of the GPU above it).
legendary
Activity: 3318
Merit: 1247
Bitcoin Casino Est. 2013
use the VGA cable to power riser & GPU.

Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?

That depends on the cards you are using. A colleague of mine used to connect two GPUs with the same SATA cable, which can only support one GPU, and the result was that a Thermaltake PSU blew up; we were using two of them for a six-GPU rig. Better stick to the plan that works best: do not overload your cabling and you should be fine.
full member
Activity: 270
Merit: 115
Not really; every card is different due to the "silicon lottery" for the GPU and memory used on the card, plus other factors like the algo being mined and the temperature (the room as well as the heat generated by the card(s)/PC). There is also card variation: the make/model, e.g. EVGA makes 3-5 models of a card, as does MSI and others, all with different clock specs and parts used for that model.

At the end of the day, it's trial and error for the cards that you have.
jr. member
Activity: 31
Merit: 4
For the OP and others here: please visit the site below, the best site for those who are new to mining with GPUs.

https://www.gpuminingresources.com/p/what-hardware-do-i-buy-to-start-mining.html

That's the site I used the most when I was building my rig, but it doesn't cover any of the overclocking aspects. Is there any site that addresses proper overclocking methods?
full member
Activity: 270
Merit: 115
For the OP and others here: please visit the site below, the best site for those who are new to mining with GPUs.

https://www.gpuminingresources.com/p/what-hardware-do-i-buy-to-start-mining.html
newbie
Activity: 164
Merit: 0
It's good to stay around 75% or less; that's most efficient in my experience. Keep temps in the 60s or below if you can. I keep my fans below 70% also.

Finding the best settings is trial and error. Each brand/model/card is different; you have to play around with them and find what's best for you and for each card. Each card can do different things, as not all silicon is the same; some are better than others.
jr. member
Activity: 31
Merit: 4
So I've been mining for a couple of days. I have my power limit set to 80%, but I noticed on the graphs (https://imgur.com/a/f5DVVfw) that there are some spikes over the limit, up to 100-120% at the peaks of the circled spikes. Is this a reading error or something that absolutely shouldn't be occurring?

Another question I have: I have read that running GPUs at 100% power ruins the longevity of the cards. Does overclocking the core and memory clocks have the same effect?

Last question: I've seen conflicting information. Does running at 100% power ruin cards, or does running at high temps (i.e. 70 °C+) ruin cards? I've been running at 60% power and have only been hitting 39-51 °C. Is it safe to up the power? The thing I'm struggling with is finding overclock settings. What I've been doing is upping the core by 25 and the mem by 150 and seeing if it's stable, but I feel like I've already overclocked further than I should. How do I know when it's too much? Right now I'm sitting at 60% power, +150 core, +700 mem, mining MTP (Zcoin).
jr. member
Activity: 31
Merit: 4
use the VGA cable to power riser & GPU.

Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?
Each riser & GPU combination should be connected to the same PSU. You can use one VGA cable for the GPU and the riser connected to that GPU, or you could use one VGA cable to power multiple risers, but it is easiest to use one VGA cable per GPU/riser combination. You probably need the splitters linked on eBay.

If you're keeping your GPUs under 120 watts (like around 90-120 watts) you could put 2 GPUs on each VGA cable. I think those VGA cables are rated for 300 watts, so running 225 watts on one cable would be no problem.

I have a rig right now with 2 GPUs per cable where I keep each card under 110 watts. No issues.

The issue is my PSU doesn't have any VGA cables with two 8-pin connectors on them, so I can't string two GPUs on one VGA cable (since my 1660 Tis all use 8-pin connections).

TBH, while doing my homework for my rig, I think I read that it's better not to use any extensions, splitters, or converters for the PSU cabling, since more connections mean a higher chance of an issue, aka fire, etc. So that's why I would rather not add an extension for the 6-pin connector on my cables just to keep both the riser and the GPU on that riser on the same VGA cable.

If it's advisable, I could power a GPU with an 8-pin and then use the 6-pin on that same cable to power the riser of the GPU hanging above it. And then I could use an 8-pin cable to power the GPU that is hanging above the first GPU. They would all be on the same PSU.
newbie
Activity: 164
Merit: 0
use the VGA cable to power riser & GPU.

Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?
Each riser & GPU combination should be connected to the same PSU. You can use one VGA cable for the GPU and the riser connected to that GPU, or you could use one VGA cable to power multiple risers, but it is easiest to use one VGA cable per GPU/riser combination. You probably need the splitters linked on eBay.

If you're keeping your GPUs under 120 watts (like around 90-120 watts) you could put 2 GPUs on each VGA cable. I think those VGA cables are rated for 300 watts, so running 225 watts on one cable would be no problem.

I have a rig right now with 2 GPUs per cable where I keep each card under 110 watts. No issues.
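A quick sanity check on the per-cable math above. The 300 W figure is the poster's rule of thumb, not a verified cable spec, and the 75% margin is an assumption added here for headroom:

```python
# "2 GPUs per VGA cable" check, using the thread's 300 W cable figure.
CABLE_RATING_W = 300  # poster's estimate, not a datasheet value

def cable_load_ok(gpu_watts, gpus_per_cable=2, margin=0.75):
    """True if the combined GPU draw stays within `margin` of the cable rating."""
    return gpu_watts * gpus_per_cable <= CABLE_RATING_W * margin

print(cable_load_ok(110))  # True: 220 W on the cable, comfortably inside 225 W
print(cable_load_ok(150))  # False: 300 W would load the cable to its full rating
```

This matches the advice in the post: doubling up is only comfortable when each card is held around 90-120 W.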
jr. member
Activity: 31
Merit: 4
use the VGA cable to power riser & GPU.

Is it OK if I use one VGA cable to power a GPU and the riser for another GPU, all on the same cable?
newbie
Activity: 164
Merit: 0
Use 1 cable per riser. It does not matter what type of cable you use to power the riser (Molex, SATA, or 6-pin) as long as it has 2 yellow (12 V) wires, but use 1 PSU connection per riser, or use the VGA cable to power both the riser & GPU. If you need to split the 8-pin GPU connection, buy splitters from eBay: https://www.ebay.com/itm/5-pack-PCI-E-8-pin-to-2x-6-2-pin-Power-Splitter-Cable-PCIE-PCI-Express-5X/142553881213?epid=21005666775&hash=item2130df9a7d:g:X8YAAOSwaj9dAJKX

Power supplies are most efficient at around 50% DC power draw, so you should make sure each set of GPUs is using 50-60% of your PSU's max. That's the safest and most efficient use of the PSU.
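The 50-60% sizing rule works out like this for the hardware discussed in this thread; the wattages below are illustrative assumptions, not measurements:

```python
# PSU sizing toward the ~50-60% efficiency sweet spot described above.
def rig_load_fraction(gpu_watts, n_gpus, base_watts, psu_watts):
    """Fraction of PSU capacity the rig draws (GPUs plus mobo/CPU/fans)."""
    return (gpu_watts * n_gpus + base_watts) / psu_watts

# Five 1660 Tis held at ~110 W plus ~100 W of system overhead on a 1300 W PSU:
frac = rig_load_fraction(110, 5, 100, 1300)
print(round(frac, 2))  # 0.5
```

That lands the 5-GPU half of the OP's rig right at 50% of the EVGA 1300 W unit, which is consistent with the advice above.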
jr. member
Activity: 31
Merit: 4
OK, I've bought and set up everything. Can you guys check over what I have and what I plan to do with my rig, so that I don't start any fires in my rig or in my apartment?

Power: The breaker panel in my place has the number 20 on each breaker, so I assume each circuit is rated for 20 A. I have 120 V service (although the Kill A Watt meter reads 124-125 V). Theoretically such a circuit can support 2400 W, but I am not trying to start any fires, so I based all my calculations on a theoretical limit of 15 A × 120 V = 1800 W. I will be plugging the rig into a 2-outlet surge protector (https://imgur.com/a/Mwakrf6) plugged into one outlet. That outlet is on its own circuit with nothing else on it. There will be two PSUs but only one outlet, which is why I got the 2-outlet surge protector. Do I even need a surge protector, or can I just use a power strip without surge protection?

PSUs: I got two PSUs, EVGA 1300 W G2. In my first setup I will run 5 GPUs and the mobo from one PSU, but eventually I would like to run 6 GPUs on each PSU.

Mobo: I bought the Gigabyte B250 Fintech, which can support up to 13 GPUs, since in my finished setup I would like to run 12 GPUs. I know people say that running 12 GPUs from a single mobo is a nightmare. What kind of issues arise? Is it really that bad? Nevertheless, if I run into issues running 12 GPUs I can always slap another mobo onto my frame and use the other PSU for it. (As it is, I have a second mobo already; I bought the B250 Mining Expert but didn't realize it doesn't support 12 NVIDIA GPUs.) The Gigabyte B250 Fintech has two Molex connections, one on either side of the PCIe slots. When should I plug a Molex cable into those? After installing 1 GPU? 6 GPUs? 10+ GPUs? Do I need to power those PCIe connectors if I am using powered risers?

GPUs: I plan on having 12 1660 Tis. With 12 running at 90 W each, they would only be consuming 1080 W total.

Wiring: I'm a bit limited by the cables that EVGA provided with the PSUs, so I can only wire up 5 GPUs currently. I have each GPU wired up with an 8-pin PCIe cable. There were only two Molex cables, so I have 3 risers powered by those 2 Molex cables, and the 6th VGA cable powers two risers via 6-pin power. I think I have read that it's best to power the risers with 6-pin cables. Should I use a Molex to 6-pin PCIe converter to power the risers? Eventually, when I add a second row of GPUs, I plan on using the VGA cables that have both an 8-pin and a 6-pin connector to power a GPU and a riser for another GPU right above it. Is it OK to use one PCIe cable to power a GPU and a riser that isn't connected to that GPU? Also, can I plug a Molex cable into the SATA port on my PSU? I have four SATA ports on my EVGA 1300 PSU and only two PERIF (Molex) ports. If I acquire more Molex cables, will the SATA ports provide the same amount of power as the PERIF ports?

Frame: This is the frame my father and I built (https://imgur.com/a/OVEShmm). It was a pain in the ass to plan out and build, and it took so many hours. And the materials cost like 60-70% of a prebuilt one. TBH, I will never recommend anyone try to build their own; just buy a prebuilt one.

Here is the progress I have with 6 GPUs set up (only 5 GPUs plugged in) (https://imgur.com/a/gYy54em). I haven't turned it on yet because I wanted to make sure everything is fine before I fry anything.

Is there anything I have done wrong? Will it cause any electrical issues to either my PC components or the wall circuitry?
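For the circuit question, the usual 80% continuous-load derating gives a concrete budget. The breaker rating and voltage are the numbers from this post; treating the whole rig as a continuous load is an assumption:

```python
# Continuous-load budget for a wall circuit, using the common 80% derating rule.
def circuit_budget_watts(breaker_amps, volts, derate=0.8):
    """Max continuous wattage that should be drawn from a breaker of this rating."""
    return breaker_amps * volts * derate

print(circuit_budget_watts(20, 120))  # 1920.0 W on the 20 A circuit described above
print(circuit_budget_watts(15, 120))  # 1440.0 W under the conservative 15 A assumption
```

Either way, the planned 12 GPUs at ~90 W (about 1080 W plus overhead) fit under the budget, though the conservative 15 A figure leaves far less headroom.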
jr. member
Activity: 31
Merit: 4


12 amps on a 15 amp circuit, if that is all the circuit does.

12 × 110 = 1320 watts

12 × 120 = 1440 watts.

2 five-card rigs should work. Just don't overclock them.

I have a few 1660 Tis and 1660s; I can set them to about 90 watts each, so 10 × 90 = 900. Add about 100 watts per rig for fans, CPU, and mobo and you are at 1100, which is under 1320. Here is a link for the RM1000x on the Corsair website:


https://www.corsair.com/us/en/Categories/Products/Certified-Refurbished/Power-Supplies/RMx-Series%E2%80%9A%C3%91%C2%A2-RM1000x-%E2%80%9A%C3%84%C3%AE-1000-Watt-80-PLUS%C2%AC%C3%86-Gold-Certified-Fully-Modular-PSU-%28NA%29-%28Refurbished%29/p/CP-9020094-NA/RF


These work well past the 1-year refurbished warranty if you pull 500 to 650 watts.

Here is EVGA refurbished, but I do not see a good deal:
https://www.evga.com/products/productlist.aspx?type=8

Thanks. That's the math I was using. Just wanted to make sure I have it all planned out correctly before I blow anything up.

BTW, I ordered two new RM1000x's since they are $160 new. For $20 more you get an unused PSU covered for 10 years.
legendary
Activity: 4326
Merit: 8899
'The right to privacy matters'

Honestly an 8-GPU setup is the sweet spot, mixed with a server PSU or an LC1650 mining edition.


This can't be stressed enough.

When you get dense rigs, it is nice having fewer machines to manage, but when one goes down you lose a LOT of hash compared to smaller rigs. Also, the work of getting 8+ GPUs to play friendly (usually a mix of AMD/Nvidia) is too much for someone just starting off.

Extra risers for testing/diagnosis/failure replacements are a must.

Also, when you are putting these high-current-draw miners on an electrical circuit, let's say a 15 A circuit, you want to utilize no more than 80% of that circuit if you are planning on a consistent high load. That gives you 12 A to play with. Running a 15 A circuit at 15 A will lead to failure and quite possibly a fire; nobody wants that. It may be rated to handle it, but the wires will heat up, and a lot of the time you never know the quality of the wire that was put into your walls.


Running dual-PSU machines can be tricky, as one PSU usually starts up a little before the other, sometimes even when they are the same model from the same batch (it can likely be narrowed down to variance in the PSUs' inrush-current limiters giving them different startup times). So in the event of a power failure, you must be sure they both start up at the same time, or that the one that starts up second is the one hooked to the motherboard's ATX+CPU connectors.

Using a modified server PSU is a valid thing, but it is smartest overall not to deal with any of the mess that could add to your potential problems when you are starting off and figuring things out.

What if I built two rigs with 5/6 1660 Tis each? Would they be allowed on the same circuit, or is that too much draw?

12 amps on a 15 amp circuit, if that is all the circuit does.

12 × 110 = 1320 watts

12 × 120 = 1440 watts.

2 five-card rigs should work. Just don't overclock them.

I have a few 1660 Tis and 1660s; I can set them to about 90 watts each, so 10 × 90 = 900. Add about 100 watts per rig for fans, CPU, and mobo and you are at 1100, which is under 1320. Here is a link for the RM1000x on the Corsair website:


https://www.corsair.com/us/en/Categories/Products/Certified-Refurbished/Power-Supplies/RMx-Series%E2%80%9A%C3%91%C2%A2-RM1000x-%E2%80%9A%C3%84%C3%AE-1000-Watt-80-PLUS%C2%AC%C3%86-Gold-Certified-Fully-Modular-PSU-%28NA%29-%28Refurbished%29/p/CP-9020094-NA/RF


These work well past the 1-year refurbished warranty if you pull 500 to 650 watts.

Here is EVGA refurbished, but I do not see a good deal:
https://www.evga.com/products/productlist.aspx?type=8
jr. member
Activity: 31
Merit: 4

Honestly an 8-GPU setup is the sweet spot, mixed with a server PSU or an LC1650 mining edition.


This can't be stressed enough.

When you get dense rigs, it is nice having fewer machines to manage, but when one goes down you lose a LOT of hash compared to smaller rigs. Also, the work of getting 8+ GPUs to play friendly (usually a mix of AMD/Nvidia) is too much for someone just starting off.

Extra risers for testing/diagnosis/failure replacements are a must.

Also, when you are putting these high-current-draw miners on an electrical circuit, let's say a 15 A circuit, you want to utilize no more than 80% of that circuit if you are planning on a consistent high load. That gives you 12 A to play with. Running a 15 A circuit at 15 A will lead to failure and quite possibly a fire; nobody wants that. It may be rated to handle it, but the wires will heat up, and a lot of the time you never know the quality of the wire that was put into your walls.


Running dual-PSU machines can be tricky, as one PSU usually starts up a little before the other, sometimes even when they are the same model from the same batch (it can likely be narrowed down to variance in the PSUs' inrush-current limiters giving them different startup times). So in the event of a power failure, you must be sure they both start up at the same time, or that the one that starts up second is the one hooked to the motherboard's ATX+CPU connectors.

Using a modified server PSU is a valid thing, but it is smartest overall not to deal with any of the mess that could add to your potential problems when you are starting off and figuring things out.

What if I built two rigs with 5/6 1660 Tis each? Would they be allowed on the same circuit, or is that too much draw?
legendary
Activity: 1848
Merit: 1166
My AR-15 ID's itself as a toaster. Want breakfast?
Dude, I am Phil from Queens 😀

Born in Far Rockaway.
Grew up in Ozone Park.

Got my degree from Queens College.


Back to your post: try six GPUs to start.

Do not go the server route.

Buy extra risers, as they fail a lot.

I prefer using EVGA or Corsair.

A refurbished Corsair RM1000x costs $139.99; I have 5 or 6 of them and they run very well.

Honestly, an 8-GPU setup is the sweet spot, paired with a server PSU or an LC1650 Mining Edition.


This can't be stressed enough.

When you get into dense rigs, it is nice having fewer machines to manage; but when one goes down, you lose a LOT of hash compared to smaller rigs. Also, the logistics of getting 8+ GPUs to play friendly (usually a mix of AMD/Nvidia) are too much for someone just starting off.

Extra risers for testing/diagnosis/failure replacements are a must.

Also, when you are putting these high-current miners on an electrical circuit, let's say a 15 A circuit, you should plan to use no more than 80% of its rating if the load will be sustained. That gives you 12 A to play with. Running a 15 A circuit at a constant 15 A will lead to failure and quite possibly a fire. Nobody wants that. The breaker may be rated to handle it, but the wires will heat up, and you often can't know the quality of the wire that was put in your walls.


Running dual-PSU machines can be tricky, as one PSU usually starts up a little before the other, sometimes even when they are the same model from the same batch (likely down to variance in each PSU's inrush-current limiter giving them different startup times). So in the event of a power failure, you must be sure they both start up at the same time, or that the one that starts up second is the one hooked to the motherboard's ATX+CPU connectors.

Using a modified server PSU is a valid option, but it is smartest overall not to take on anything that could add to your potential problems while you are starting off and figuring things out.
legendary
Activity: 4326
Merit: 8899
'The right to privacy matters'
Dude I am Phil from Queens😀

Born in Far Rockaway.
Grew up in Ozone Park.

Got my degree from Queens College.


Back to your post. Try six GPUs to start.

Do not go the server route.

Buy extra risers, as they fail a lot.

I prefer using EVGA or Corsair.

A refurbished Corsair RM1000x costs $139.99. I have 5 or 6 of them; they run very well.
jr. member
Activity: 86
Merit: 3
Honestly, an 8-GPU setup is the sweet spot, paired with a server PSU or an LC1650 Mining Edition.

I had heard about server PSUs, but the whole breakout-board thing really turned me off. Seemed pretty confusing to me.

What are the cons of the LC1650? Seems too good to be true. Does it not have a warranty? Do they sell it in America? Will it burn my house down?
Server PSUs are pretty easy to use: you need a breakout board and cables. You attach the board to the end of the server PSU and connect the rest up as you would with a normal PSU. The only downside is that server PSUs are much louder than standard desktop ones like the EVGA you were looking at.
jr. member
Activity: 31
Merit: 4
Honestly, an 8-GPU setup is the sweet spot, paired with a server PSU or an LC1650 Mining Edition.

I had heard about server PSUs, but the whole breakout-board thing really turned me off. Seemed pretty confusing to me.

What are the cons of the LC1650? Seems too good to be true. Does it not have a warranty? Do they sell it in America? Will it burn my house down?
jr. member
Activity: 31
Merit: 4
When you are a newbie, you shouldn't start with a 10-GPU setup.
My advice is to get a 6-GPU mainboard and start with 1 PSU and 6 GPUs.
I plan on starting with six 1660 Tis at first to try it out and see how it works. I hope to add more GPUs later if my blueprint is theoretically sound. Of course, if something comes up during the upgrade process, I can always buy a second mobo and use the other PSU to make a second 6-GPU rig if the 10-12 GPU rig doesn't work out.
member
Activity: 438
Merit: 27
+1 Mr. Chung.
This is a nice and inexpensive PSU.
member
Activity: 277
Merit: 23
Honestly, an 8-GPU setup is the sweet spot, paired with a server PSU or an LC1650 Mining Edition.
member
Activity: 438
Merit: 27
When you are a newbie, you shouldn't start with a 10-GPU setup.
My advice is to get a 6-GPU mainboard and start with 1 PSU and 6 GPUs.
member
Activity: 357
Merit: 26
Another question I have now: is it worth buying a used PSU (like an EVGA G2), since the warranty isn't really transferable to the second owner? I feel like having the 10-year warranty is a big plus.

True, but some sellers will give you a receipt, which will get you the warranty. Also, about the upfront cost: you'll get G2s (even T2s) for a third of their new value on feebay. There is a risk, but...
jr. member
Activity: 31
Merit: 4
Another question I have now: is it worth buying a used PSU (like an EVGA G2), since the warranty isn't really transferable to the second owner? I feel like having the 10-year warranty is a big plus.
jr. member
Activity: 31
Merit: 4
Quote
The B250 supports up to 13 GPUs of ANY kind. What error are you getting when you use more than 7?

I haven't received it yet, actually; I just ordered it a couple of days ago. But I was looking over the manual, and this is the page where I got the info from.

Quote
Yes and no. The risers do draw power, but that is part of the GPU's power draw. So if a GPU is rated at 90 watts, it draws part of it (let's say 70 W) from the PCIe 6/8-pin connector and the rest (20 W) from the PCIe slot itself, in this case the riser.
Thanks for clearing that up! That's reassuring.
hero member
Activity: 751
Merit: 517
Fail to plan, and you plan to fail.
Quote
1. Will this board support 10 GPUs? I accidentally bought the B250 Mining Expert for 19 cards, but that board can only do 7 Nvidia cards at most. Are there any such restrictions for this board?
The B250 supports up to 13 GPUs of ANY kind. What error are you getting when you use more than 7?

Quote
2. The risers draw power from the PSUs, so do I have to add them to my power-draw estimates? Every guide I have read so far says I only need to calculate the wattage per GPU and add 150 W for the fans, CPU, and RAM.
Yes and no. The risers do draw power, but that is part of the GPU's power draw. So if a GPU is rated at 90 watts, it draws part of it (let's say 70 W) from the PCIe 6/8-pin connector and the rest (20 W) from the PCIe slot itself, in this case the riser.

Quote
3. Can I plug the two wall plugs into a surge protector so that they only use one wall socket, or should each go into its own outlet?
You can, and I prefer doing that for multi-PSU systems, but make sure you get a high-quality surge protector rated at at least 10 A per socket.
jr. member
Activity: 31
Merit: 4
Two 850 W PSUs are overkill for 900 W of GPUs!
Just buy lower-wattage models and get some good dual-PCIe cables if that's your concern.

I'm running 12 RX 570s on a Fintech without any problem. Don't know about Nvidia.

I was just buying this PSU model because it had 6 VGA / 6 peripheral connections, whereas smaller models only had 4/4. I was going under the assumption that each GPU needed a dedicated 8-pin PCIe cable coming directly from the PSU.
sr. member
Activity: 661
Merit: 250
Two 850 W PSUs are overkill for 900 W of GPUs!
Just buy lower-wattage models and get some good dual-PCIe cables if that's your concern.

I'm running 12 RX 570s on a Fintech without any problem. Don't know about Nvidia.
jr. member
Activity: 31
Merit: 4


Will this rig of 10 GPUs work as I have envisioned it? I am planning on plugging it into a standard US wall outlet. I assume it's 15 A, so that would be about 1500-1600 W. I want to get two 850 W PSUs with 6 PCIe connections and 6 peripheral connections each. I would hook up one card to each of the PCIe slots and run a Molex cable from each peripheral slot to power two risers. Each riser would be plugged into the motherboard via USB, obviously. I would have 6 case fans to start with (will add more if space permits), connected to the motherboard via splitters. The two PSUs will use an adapter to hook up to the ATX board.

I have some questions, assuming this setup is good to go.

1. Will this board support 10 GPUs? I accidentally bought the B250 Mining Expert for 19 cards, but that board can only do 7 Nvidia cards at most. Are there any such restrictions for this board?
2. The risers draw power from the PSUs, so do I have to add them to my power-draw estimates? Every guide I have read so far says I only need to calculate the wattage per GPU and add 150 W for the fans, CPU, and RAM.
3. Can I plug the two wall plugs into a surge protector so that they only use one wall socket, or should each go into its own outlet?
4. Is it worth buying a used PSU (like an EVGA G2), since the warranty isn't really transferable to the second owner? I feel like having the 10-year warranty is a big plus.
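As a sanity check on this plan, here is a hedged Python sketch of the power budget. The 120 W per card and 90% PSU efficiency are assumptions; the 150 W overhead is the figure from the guides mentioned in question 2:

```python
# Power-budget check for the planned rig: 10 GPUs, two 850 W PSUs, one outlet.
CARDS = 10
CARD_WATTS = 120            # assumed draw per 1660 Ti
OVERHEAD_WATTS = 150        # fans, CPU, RAM (figure from the guides)
PSU_CAPACITY_WATTS = 2 * 850
PSU_EFFICIENCY = 0.90       # assumed wall-to-DC conversion efficiency
OUTLET_BUDGET_WATTS = 0.80 * 15 * 120   # 1440 W continuous on 15 A / 120 V

dc_load = CARDS * CARD_WATTS + OVERHEAD_WATTS    # 1350 W on the DC side
wall_load = dc_load / PSU_EFFICIENCY             # roughly 1500 W at the wall

print(f"PSU side: {dc_load} W of {PSU_CAPACITY_WATTS} W capacity")
print(f"Wall side: about {wall_load:.0f} W vs a {OUTLET_BUDGET_WATTS:.0f} W safe budget")
# The PSUs have headroom, but the wall draw edges past the 80% budget,
# which is an argument for undervolting or splitting onto two circuits.
```

Under these assumptions the two PSUs have plenty of headroom, but the draw at the wall is tight against a single 15 A circuit's safe continuous budget.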