
Topic: Building Cheap Miners : My "Secret" - page 18. (Read 60235 times)

full member
Activity: 322
Merit: 233
April 18, 2018, 01:30:16 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?


I'm really shocked you haven't invested in UPSes for your systems, man. I ordered 25 of the CyberPower CP1500AVRLCD, refurbished off eBay. The seller only had 25 left, so I contacted him to ask if I could buy all 25 at a better rate, and he sold them to me for $88 each. On top of that it was spring sale season, so I knocked another 20% off with an emailed spring code I had sitting in reserve. I have one of these 1500 units on every rig of 6 GPUs or fewer that I have running right now...
hero member
Activity: 714
Merit: 512
April 18, 2018, 01:29:55 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?

Some servers have a bios setting for throttling in the event of a power failure. It's possible it got stuck in that state? Have you tried a full proper power shut down and restart?

I have not done a complete "hard" power off -- I will give that a shot.

Okay tried on two systems. One just a "shut down" then power back on. One with a "shut down" and unplug / replug the system. No effect on either system. Both still doing only 1600 H/S

Apparently this IS, in fact, a thing :

https://community.spiceworks.com/topic/1447342-dell-r820-slow-after-power-outages
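If you want to confirm it really is throttling before digging through the BIOS, a quick sanity check is to compare each core's current clock to its reported max while the miner is running. A minimal sketch, assuming a Linux install with the standard cpufreq sysfs files (paths can differ by distro):

Code:
#!/usr/bin/env python3
# Flag cores whose current clock sits well below their reported maximum.
# Assumes the standard Linux cpufreq sysfs interface; run while xmr-stak is loaded.
import glob

for cur_path in sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")):
    max_path = cur_path.replace("scaling_cur_freq", "cpuinfo_max_freq")
    cur = int(open(cur_path).read())
    mx = int(open(max_path).read())
    cpu = cur_path.split("/")[5]  # e.g. "cpu12"
    note = "  <-- throttled?" if cur < 0.9 * mx else ""
    print(f"{cpu}: {cur // 1000} MHz of {mx // 1000} MHz{note}")

If every core tops out a few hundred MHz below its rated max even under full load, that lines up with the power-capping behavior described in that Spiceworks thread.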

So... that being said I'll have to work on them tomorrow when I have some time.
full member
Activity: 322
Merit: 233
April 18, 2018, 01:21:54 PM
The prices on the 580s have gone up by at least 50% of what I was getting them for.

One of my ideas wasn't to exactly water cool the GPUs themselves but to do something similar using geothermal.  I have a backhoe so digging ain't a problem, ya know?  I was going to bury a 2k-3k gallon reservoir, run a bunch of PEX and then pump that through a series of radiators (having had an automotive shop, I have a lot lying around).  The radiators would in essence be a precooler for my geothermal heat pump, which will only run in AC mode.  The colder the incoming air to the exchanger, the cooler the output will be.  If I can take 105 degree Texas summer heat (or even the exhaust output of a closed loop system) and turn it into 80 or 85 degree air before it hits the AC system, my AC system will be way more efficient.  I have the majority of the items already to make the precooler, and even the AC system.  Just need time.

I started this venture in January so I have no idea how I'm going to cope with the heat.  Painting the containers a semi gloss white has definitely helped cut down heat absorption and I expect the solar shade to equally help.

If you wouldn't mind, I might need a copy of that script you wrote! Cheesy

I was going to use an automotive radiator also, but you can't mix dissimilar metals; copper and aluminum don't play well together and you'd have major issues down the line if you tried, which is why I had to source an all-copper radiator. I managed to find one on eBay after some careful searching. My plan B is to add more length to the loop; I have plenty of property still and may even try running some of it down into the earth vertically. Someone on another forum said I could buy PVC pipe large enough to let the tubing run down vertically and use water to frack the sand out by running the water hose down the middle as I disturb the soil with the PVC pipe, then afterwards run the cable down the middle, but I'm worried it would kink. Another suggestion was to buy A/C copper line and build a DIY radiator without the fins. I don't know; I just hope the temps stay manageable for me, since I've spent a few grand redoing this system multiple times, mostly on pumps failing (sigh)... my current pump that has lasted the longest is a simple submerged utility pump rated for 100% duty cycle at 80% load that I picked up from the local hardware store...

In regards to the code, let me see if I can find it. The script was just a simple NVIDIA Inspector batch file to change GPU settings, but I added lines to run it continuously: apply one set of overclock settings, pause for 720 minutes, then run the next lines of code and pause for another 720 minutes before looping. Basically I started the batch on the rigs manually at a set time of day to keep it stupid simple. The pauses were 12 hours, so for the 12 hours in the middle of the day it would run one set of overclock settings, then after the pause finished it would run the next set, and it just looped back and forth over and over 24/7.
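If it helps anyone copy the idea, here is roughly the same loop sketched in Python instead of a batch file. The nvidiaInspector path and the -setPowerTarget flag values are placeholders; check the flag syntax against your own copy of NVIDIA Inspector before relying on it:

Code:
import subprocess
import time

NVI = r"C:\tools\nvidiaInspector.exe"   # placeholder install path
HALF_DAY = 720 * 60                     # same 720-minute pause as the batch file

def set_power_target(percent: int) -> None:
    """Apply one power-target profile to GPU 0 (flag syntax assumed; verify locally)."""
    subprocess.run([NVI, f"-setPowerTarget:0,{percent}"], check=True)

# Start it manually at a set time of day so the 12-hour blocks line up with the heat.
while True:
    set_power_target(50)   # hottest part of the day: drop to 50% TDP
    time.sleep(HALF_DAY)
    set_power_target(65)   # overnight: back up to 65% TDP
    time.sleep(HALF_DAY)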
hero member
Activity: 714
Merit: 512
April 18, 2018, 01:18:56 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?

Some servers have a bios setting for throttling in the event of a power failure. It's possible it got stuck in that state? Have you tried a full proper power shut down and restart?

I have not done a complete "hard" power off -- I will give that a shot.

Okay tried on two systems. One just a "shut down" then power back on. One with a "shut down" and unplug / replug the system. No effect on either system. Both still doing only 1600 H/S
hero member
Activity: 714
Merit: 512
April 18, 2018, 01:04:59 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?

Some servers have a bios setting for throttling in the event of a power failure. It's possible it got stuck in that state? Have you tried a full proper power shut down and restart?

I have not done a complete "hard" power off -- I will give that a shot.
hero member
Activity: 1118
Merit: 541
April 18, 2018, 01:00:29 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?


Some servers have a bios setting for throttling in the event of a power failure. It's possible it got stuck in that state? Have you tried a full proper power shut down and restart?

hero member
Activity: 714
Merit: 512
April 18, 2018, 12:54:35 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.

Well... I have to take that back a bit now.

My power flickered last night thus turning off all my hardware. Came in, turned all the Dells back on... started XMR-STAK on them all & walked off.

Noticed my pool hash-rate never returned to how it was... every one of my R815s is now ~400 H/S slower than they were before the power flickered (1600 H/S vs. 2000 H/S).

I've tried re-booting and even re-compiling XMR-STAK... seems to be permanent ?!

Any ideas ?
full member
Activity: 1179
Merit: 131
April 18, 2018, 12:33:55 AM

If I want to do more than 8 then I will need to rewire the SAS power. I believe that is what you did. Can you provide details on how to do that?

It's pretty easy.  The SAS power connector has 10 pins.  5 commons / grounds (black).  3 12V (red) and 2 3V (yellow). 

Assuming your cards only need 1 power connector, you are talking a total of 13 power connectors if 3 cards are installed in the system and 5 are on risers.  If the GPU connectors have 2 each on 3 cables, that's 6.  You'll get another two out of the SAS connector.  That leaves you short 5 power connectors unless you are going to use splitters.

I've been successful increasing density on my breakout boards by using molex powered risers that I can power three off one pcie power header.  The pcie power header has 3 + and 3 negative at 12V.  Each gets a 12v to 5v step down and then are passed to a molex plug.  In essence I've turned one of the breakout board power connectors into 3 functioning molex connectors that will drive 3 risers saving 2 of pcie power connectors.

What version of Windows are you using?

Since it's quad CPU the DL580s are running Server 2016.  However, the 4 GPU limitation I have experienced on both Server 2016 AND Win10.  The DL360s and 380s run 10 since they are only dual proc.

It honestly sounds like a driver or firmware issue.  Unfortunately with HPE servers you have to pay for driver updates.  If you want to give it another try I would install windows and then install this service pack:  https://www.teimouri.net/review-hpe-service-pack-proliant-2018-03-0-hpe-spp/

Look toward the bottom for this link:

Service Pack for ProLiant (4.81 GB)
MD5 Checksum: a259b671a8cc37e89194d1f050d68424
jr. member
Activity: 176
Merit: 1
April 18, 2018, 12:04:31 AM
So I've got 4 GPUs going on an R815.  I started out to run 6 but HiveOS doesn't load a network driver for the R815.  Instead of scratching my head on how to figure that out, I simply threw in a pcie network card which stole a slot.  The riser I threw the network card in has two slots.  The second slot wouldn't accept a GPU.  I said screw it and settled for four at the moment.  Later on I'll see if I can get a 4 to 1 riser to work.  I have not had success with the 4 to 1's in the 580s but I have in the 360s and 380s.

It took me four solid days to deploy a few more vegas and reconfigure all my gpu rigs which were all HP Proliant boxes.  What a royal pain but it was necessary due to the hodge podge manner in which my little army grew over the last 100 days.  

At this point I consider myself a fairly well versed miner, but there is so much more to learn.  I can say that HiveOS, once you understand how to use it, simply makes life easier.  I'm using all three free rigs and I'm seriously contemplating pulling the trigger for the rest of them.  I'm first going to play with some watchdog scripts for the Windows boxes and set up a proxy, which should simplify management.  I also need to come to grips with PowerPlay tables for the Vegas, as they are a pain to restart when necessary.
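For the Windows watchdog piece, something as simple as the sketch below usually does the job: poll tasklist and relaunch the miner if it has disappeared. The process name and launch command here are placeholders, not anyone's actual setup:

Code:
import subprocess
import time

MINER_EXE = "xmr-stak.exe"                        # placeholder process name
MINER_CMD = [r"C:\miners\xmr-stak\xmr-stak.exe"]  # placeholder launch command

def miner_running() -> bool:
    """Check with tasklist whether the miner process is still alive."""
    out = subprocess.run(
        ["tasklist", "/FI", f"IMAGENAME eq {MINER_EXE}"],
        capture_output=True, text=True,
    ).stdout
    return MINER_EXE.lower() in out.lower()

while True:
    if not miner_running():
        subprocess.Popen(MINER_CMD)   # relaunch and keep watching
    time.sleep(60)                    # poll once a minute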

However the simplest by far has been an ASIC.  Set it and forget it.

Now up to 55 CPUs, 42 GPUs, 2 ASICs with one left to arrive that might be DOA with all the mother forking ASIC hating going on...  Cheesy A few dual and quad CPU stragglers to get online, but they are being bitches so they aren't up yet.
hero member
Activity: 714
Merit: 512
April 17, 2018, 04:32:03 PM
The Dell R815s that I got are running rock solid.

Hard to justify the price at this point, though, IMHO -- unless you get a sweet deal.
jr. member
Activity: 176
Merit: 1
April 17, 2018, 03:06:39 PM
The prices on the 580s have gone up by at least 50% of what I was getting them for.

One of my ideas wasn't to exactly water cool the GPUs themselves but to do something similar using geothermal.  I have a backhoe so digging ain't a problem, ya know?  I was going to bury a 2k-3k gallon reservoir, run a bunch of PEX and then pump that through a series of radiators (having had an automotive shop, I have a lot lying around).  The radiators would in essence be a precooler for my geothermal heat pump, which will only run in AC mode.  The colder the incoming air to the exchanger, the cooler the output will be.  If I can take 105 degree Texas summer heat (or even the exhaust output of a closed loop system) and turn it into 80 or 85 degree air before it hits the AC system, my AC system will be way more efficient.  I have the majority of the items already to make the precooler, and even the AC system.  Just need time.

I started this venture in January so I have no idea how I'm going to cope with the heat.  Painting the containers a semi gloss white has definitely helped cut down heat absorption and I expect the solar shade to equally help.

If you wouldn't mind, I might need a copy of that script you wrote! Cheesy
full member
Activity: 350
Merit: 100
April 17, 2018, 11:38:26 AM
That's really awesome, storx. I was thinking about water cooling everything; it's so much better than fans.

In the end I started dismantling and selling off everything, though, so now my warehouse only has 10 rigs. I have to decide whether to buy more gear or end the lease, lol.


These servers look really fun to play with, though, so maybe... but I don't know. Decisions, decisions.
full member
Activity: 322
Merit: 233
April 17, 2018, 11:19:32 AM

How are the servers working for y'all?

Just realized you're a fellow container farmer like myself, but I have mine set up on an all-watercooled loop to control temps... how have you managed to control temps in yours?

Mine is located in Florida. I bought some property that had 2 old mobile home pads with separate power panels, where abandoned, burned-down mobile homes sat a long time ago... Picked up the property for next to nothing and settled with the county to offset all the fees tacked onto the property from the junk and the 2 abandoned mobiles, on the condition that I cleaned it all up and got rid of the junk... so I spent a few months burning junk from the '80s, lol...

Currently I'm forcing tons of air through the unit.  Soon I will have to add AC.  I've painted both containers white and am in the process of building a solar shade.  I have my work cut out for me!  Even though the servers have given me some fits, I'm glad I went that route.

I'm new to using USB sticks to run an OS for mining. Is it possible, or does anyone have experience, running Windows as the OS on these USB sticks? Or do I need a custom mining OS like Hive or ethOS? I'm familiar with Linux/Ubuntu as well, but Windows is just way more user friendly for configuring and remote troubleshooting.

USB sticks for Linux, OK!  USB sticks for Windows, NOPE.  Sucks.  Might work, but you won't be happy.  HIVEOS is the easiest of ALL once you figure out the interface.

I hit a wall at 9 cards on the 580 also, using SMOS. Spent countless hours getting 9 cards to work. In one way I love the HP servers, and the H/s on CryptoNight is a bonus, but as a GPU-mining platform they look good on paper and in my experience are not fun to maintain. Did anyone try to run Windows 10 on the 580 with success?

I haven't tried more than 9.  I have one box that doesn't have the add-in PCIe board, so I tried to use the 6th 8x PCIe connector located on the SPI board (the one that's supposed to be for a dual 10Gb adapter) so I could have 6 1080s going in it.  Well, when I powered it on I heard a pop, and I'm currently down one server awaiting a new SPI board.   Cheesy Cheesy  Shocked  Sad  Angry

You can run Win10 on the 580 if you only want 2 procs recognized.

In my experience, Win10 and Server 2016 act identical on these servers.

Sooo beat.  I was up until 3:30AM last night trying to finish my migration.  I'm currently back to 90% operational.

I need to do some research on these servers that all of yall are running to see if they are still profitable to buy... GPU prices have dropped drastically, but profits have been marginal at best..

Ya, I tried to use forced air at first, but the heat here in the summer just made it about impossible to maintain temps on the gear, so I had to build a script that would underclock the gear to 50% TDP during the hottest part of the day and back up to 65% during the night hours. Then I looked into better means of cooling and invested in a large single water loop that I now run my entire farm off of. I rented a Ditch Witch, dug a series of 2ft-deep trenches on the property, and buried a few hundred feet of PEX tube in the ground as my means of radiating the heat away.

My first attempt at running a test rig off the setup showed that a typical watercooled PC pump was out of the picture, because the head pressure of pushing fluid through 6 GPUs on the first test rig was too large. Then I moved to commercial inline pumps, and over time they kept dying on me too; the head pressure combined with the heat would destroy a commercial pump in a matter of a month and it would start leaking from everywhere. So I moved on to a spa/jacuzzi style pump, thinking heat was the problem, and it turned out the heat was indeed what had been killing the pumps all along, because that one lasted the longest. Then I started scaling up, adding rigs to the water loop, and quickly found myself pumpless again as the head pressure from the rigs was just too much.

Then a forum member mentioned an idea and it's been rock stable for 6+ months now. I currently use a double pump system, but only one runs at a time; the other is on a pressure switch just as an emergency backup. Now I use a gravity-fed system to cool all the gear. I have large 4-inch pipes running along the top of the container wall, fed with water from the pump, and at the end of the pipe, at about the 90% level, a large pipe dumps runoff back into the pump tank. Basically I'm using gravity to flow the water from the large 4-inch pipe into a DIY manifold spreading the water to all 6 cards, and it flows back out to the ground loop via a pipe at a level above the pump tank. This has greatly reduced the watts the pump runs at and allowed it to stay cool and reliable so far... I'm letting gravity do all the work for me now. Granted, I went from a 40C loop to a loop in the 50s, but it is the best way to go about it for me.

I run universal copper CPU coolers I bought from Alibaba in bulk, so my cost on them was around $8 each. After converting the system over to water, I dropped about 1500 watts from the farm by removing all the GPU fans and just running a water pump, letting mother earth remove the heat. Now my entire farm runs at 58C max so far on the 90F+ days... not sure yet what the 100F+ days will bring, but if I have to, I'm prepared to add additional cooling via a large radiator I purchased that's meant for an industrial A/C system. So far it's been super nice running it this simple.
jr. member
Activity: 176
Merit: 1
April 17, 2018, 10:56:13 AM

How are the servers working for y'all?

Just realized you're a fellow container farmer like myself, but I have mine set up on an all-watercooled loop to control temps... how have you managed to control temps in yours?

Mine is located in Florida. I bought some property that had 2 old mobile home pads with separate power panels, where abandoned, burned-down mobile homes sat a long time ago... Picked up the property for next to nothing and settled with the county to offset all the fees tacked onto the property from the junk and the 2 abandoned mobiles, on the condition that I cleaned it all up and got rid of the junk... so I spent a few months burning junk from the '80s, lol...

Currently I'm forcing tons of air through the unit.  Soon I will have to add AC.  I've painted both containers white and am in the process of building a solar shade.  I have my work cut out for me!  Even though the servers have given me some fits, I'm glad I went that route.

I'm new to using USB sticks to run an OS for mining. Is it possible, or does anyone have experience, running Windows as the OS on these USB sticks? Or do I need a custom mining OS like Hive or ethOS? I'm familiar with Linux/Ubuntu as well, but Windows is just way more user friendly for configuring and remote troubleshooting.

USB sticks for Linux, OK!  USB sticks for Windows, NOPE.  Sucks.  Might work, but you won't be happy.  HIVEOS is the easiest of ALL once you figure out the interface.

I hit a wall at 9 cards on the 580 also, using SMOS. Spent countless hours getting 9 cards to work. In one way I love the HP servers, and the H/s on CryptoNight is a bonus, but as a GPU-mining platform they look good on paper and in my experience are not fun to maintain. Did anyone try to run Windows 10 on the 580 with success?

I haven't tried more than 9.  I have one box that doesn't have the add-in PCIe board, so I tried to use the 6th 8x PCIe connector located on the SPI board (the one that's supposed to be for a dual 10Gb adapter) so I could have 6 1080s going in it.  Well, when I powered it on I heard a pop, and I'm currently down one server awaiting a new SPI board.   Cheesy Cheesy  Shocked  Sad  Angry

You can run Win10 on the 580 if you only want 2 procs recognized.

In my experience, Win10 and Server 2016 act identical on these servers.

Sooo beat.  I was up until 3:30AM last night trying to finish my migration.  I'm currently back to 90% operational.
full member
Activity: 139
Merit: 100
April 17, 2018, 04:48:55 AM
Under Linux, I have 6 Nvidia cards working.  Not sure what is up with the Proliant G7 series.  I have tried a DL360, DL380 and the DL580 and none of these will boot with more than 4 GPUs installed.  Windows will lock up at loading screen.

Slap a USB stick with HiveOS on it and 6 get recognized easily.

I have a ML350P G8 that has taken 6 GPUs in windows easily.  

Edit* Make that 9 1060s in one 580 in HiveOS.

I hit a wall at 9 cards on the 580 also, using SMOS. Spent countless hours getting 9 cards to work. In one way I love the HP servers, and the H/s on CryptoNight is a bonus, but as a GPU-mining platform they look good on paper and in my experience are not fun to maintain. Did anyone try to run Windows 10 on the 580 with success?
sr. member
Activity: 784
Merit: 282
April 17, 2018, 04:01:22 AM

Question on running off 16GB usb flash drives. Do these wear out pretty fast and die? When I tried running off a flash drive for HiveOS the flash drives died in about a week because of the constant log writing that was going on.


I'm using SanDisk USB 3.0 16GB sticks with no problems so far.  I did have a few issues early on with some cheap no-name sticks.

The R815s, while power hogs, are workhorses.  They just plug away like nothing is wrong and require minimal babysitting.

I'm having an issue getting more than 4 cards working on the 580s under Windows.  Two separate boxes lock up with 5 Nvidia cards connected.  I will try hive OS next.

How are the servers working for y'all?

Just realized you're a fellow container farmer like myself, but I have mine set up on an all-watercooled loop to control temps... how have you managed to control temps in yours?

Mine is located in Florida. I bought some property that had 2 old mobile home pads with separate power panels, where abandoned, burned-down mobile homes sat a long time ago... Picked up the property for next to nothing and settled with the county to offset all the fees tacked onto the property from the junk and the 2 abandoned mobiles, on the condition that I cleaned it all up and got rid of the junk... so I spent a few months burning junk from the '80s, lol...

I'm new to using USB sticks to run an OS for mining. Is it possible, or does anyone have experience, running Windows as the OS on these USB sticks? Or do I need a custom mining OS like Hive or ethOS? I'm familiar with Linux/Ubuntu as well, but Windows is just way more user friendly for configuring and remote troubleshooting.
full member
Activity: 322
Merit: 233
April 17, 2018, 02:47:23 AM

Question on running off 16GB usb flash drives. Do these wear out pretty fast and die? When I tried running off a flash drive for HiveOS the flash drives died in about a week because of the constant log writing that was going on.


I'm using SanDisk USB 3.0 16GB sticks with no problems so far.  I did have a few issues early on with some cheap no-name sticks.

The R815s, while power hogs, are workhorses.  They just plug away like nothing is wrong and require minimal babysitting.

I'm having an issue getting more than 4 cards working on the 580s under Windows.  Two separate boxes lock up with 5 Nvidia cards connected.  I will try hive OS next.

How are the servers working for y'all?

Just realized you're a fellow container farmer like myself, but I have mine set up on an all-watercooled loop to control temps... how have you managed to control temps in yours?

Mine is located in Florida. I bought some property that had 2 old mobile home pads with separate power panels, where abandoned, burned-down mobile homes sat a long time ago... Picked up the property for next to nothing and settled with the county to offset all the fees tacked onto the property from the junk and the 2 abandoned mobiles, on the condition that I cleaned it all up and got rid of the junk... so I spent a few months burning junk from the '80s, lol...
jr. member
Activity: 176
Merit: 1
April 17, 2018, 01:33:03 AM

If I want to do more than 8 then I will need to rewire the SAS power. I believe that is what you did. Can you provide details on how to do that?

It's pretty easy.  The SAS power connector has 10 pins.  5 commons / grounds (black).  3 12V (red) and 2 3V (yellow). 

Assuming your cards only need 1 power connector, you are talking a total of 13 power connectors if 3 cards are installed in the system and 5 are on risers.  If the GPU connectors have 2 each on 3 cables, that's 6.  You'll get another two out of the SAS connector.  That leaves you short 5 power connectors unless you are going to use splitters.

I've been successful increasing density on my breakout boards by using molex powered risers that I can power three off one pcie power header.  The pcie power header has 3 + and 3 negative at 12V.  Each gets a 12v to 5v step down and then are passed to a molex plug.  In essence I've turned one of the breakout board power connectors into 3 functioning molex connectors that will drive 3 risers saving 2 of pcie power connectors.
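To spell the connector math out (plain arithmetic, reading the 13 as one plug per card plus one per powered riser):

Code:
# Connector budget: 3 cards internal + 5 on risers, one PCIe plug per card.
cards_internal = 3
cards_on_risers = 5

gpu_plugs_needed = cards_internal + cards_on_risers   # 8 cards, 1 plug each
riser_plugs_needed = cards_on_risers                  # each powered riser needs a feed
total_needed = gpu_plugs_needed + riser_plugs_needed  # 13

from_gpu_cables = 3 * 2   # three 10-pin to dual 6-pin cables
from_sas_rewire = 2       # two more plugs recovered from the SAS connector
total_available = from_gpu_cables + from_sas_rewire   # 8

print(f"need {total_needed}, have {total_available}, "
      f"short {total_needed - total_available}")      # -> need 13, have 8, short 5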

What version of Windows are you using?

Since it's quad CPU the DL580s are running Server 2016.  However, the 4 GPU limitation I have experienced on both Server 2016 AND Win10.  The DL360s and 380s run 10 since they are only dual proc.
full member
Activity: 1179
Merit: 131
April 16, 2018, 10:03:26 PM
Under Linux, I have 6 Nvidia cards working.  Not sure what is up with the Proliant G7 series.  I have tried a DL360, DL380 and the DL580 and none of these will boot with more than 4 GPUs installed.  Windows will lock up at loading screen.

Slap a USB stick with HiveOS on it and 6 get recognized easily.

I have a ML350P G8 that has taken 6 GPUs in windows easily.  

Edit* Make that 9 1060s in one 580 in HiveOS.

What version of Windows are you using?
member
Activity: 214
Merit: 24
April 16, 2018, 09:37:39 PM
Under Linux, I have 6 Nvidia cards working.  Not sure what is up with the Proliant G7 series.  I have tried a DL360, DL380 and the DL580 and none of these will boot with more than 4 GPUs installed.  Windows will lock up at loading screen.

Slap a USB stick with HiveOS on it and 6 get recognized easily.

I have a ML350P G8 that has taken 6 GPUs in windows easily.  

Edit* Make that 9 1060s in one 580 in HiveOS.

Nice.

I am only going to be able to do 8 GTX 750's for now.

I only want to use the internal power and no breakout box/power supply. That means the three 10-pin to dual 6-pin cables could power six cards. The GTX 750s take up two card slots, so each of them covers two PCIe slots. That does leave 7 slots, but with only 6 PCIe 6-pin power plugs it adds up to 8 cards. A similar solution, and what I will do, is put three GTX 750s in the internal PCIe slots (covering 6 PCIe slots and leaving 5 free) and use 5 risers for the external GTX 750s. That also adds up to eight.

If I want to do more than 8 then I will need to rewire the SAS power. I believe that is what you did. Can you provide details on how to do that?