
Topic: Review of the RX 580 Gigabyte card (Read 155 times)

September 16, 2018, 12:46:01 PM
#1
My colleagues and I have been testing various video cards available on the market! This time it is the Gigabyte RX 580 Gaming version with memory modules from Micron. Gaming video cards are traditionally built so the user can play around the clock, not mine. Simply turning the card on and overclocking it isn't any fun, so let's take it apart together and check everything out.

Let's start with the casing. It's not badly made, but it's so thin on the sides that it can easily be broken. The cooling system consists of three heat pipes, and the radiator appears to be in contact with the power system. Let's open the graphics card and see how everything works. The main question is how the memory modules are cooled inside the video card. Around-the-clock cooling greatly affects the stability of the hashrate and determines how long the chips will run at maximum performance without degrading.

The contact with the GPU isn't of very high quality. There is a small gap between the three heat pipes, which means part of the surface doesn't touch the chip, and the surface itself isn't perfectly even. Manufacturers usually only try to cool the GPU, because they assume only the processor heats up, but we won't be using this card for games, and as we can see, the cooling design has its problems. The power system is one of the three components that directly affect the stability of the hashrate and the overclocking parameters. It contacts a steel base plate through a thick gasket, so its heat is not really being removed at all; it's just a piece of steel. The memory is the part we will actually be overclocking for the best results, and the memory power supply contacts the board. What interests us is whether this board is in contact with an aluminum radiator. Let's unscrew it and take a look.

As we can see, the memory is simply cooled by a piece of steel. This is unacceptable for mining, since the memory and its power supply system heat up just as much as the GPU, possibly even more. For Ethereum mining we will need to lower the GPU voltage for better energy efficiency; in other words, to make the card consume less electricity so that we earn more.
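The point of undervolting is easy to put in numbers: efficiency is hashrate per watt, and since an Ethereum hashrate depends mostly on the memory rather than GPU voltage, cutting board power raises efficiency almost directly. A minimal sketch, with all wattage and hashrate figures being illustrative assumptions rather than measurements from this test:

```python
# Hypothetical efficiency comparison. All numbers are illustrative
# assumptions, not measurements from this review.
def efficiency(hashrate_mhs, power_w):
    """Mining efficiency in MH/s per watt."""
    return hashrate_mhs / power_w

stock = efficiency(28.0, 135.0)        # assumed stock voltage settings
undervolted = efficiency(28.0, 105.0)  # assumed after lowering GPU voltage

print(f"stock:       {stock:.3f} MH/s per W")
print(f"undervolted: {undervolted:.3f} MH/s per W")
```

With the hashrate held constant, any watt saved goes straight into the efficiency figure, which is why undervolting is standard practice for Ether mining.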

The thermal pad on the chokes is, in reality, useless. The memory power supply itself is quite good: a two-phase design. However, the power phase of the memory controller is poor. It consists of only a single phase, and there is reason to doubt the quality of the clamp and the heat sink; the heat can spread over the entire area of the board. A voltage regulator on the reverse side also contributes to the heating of this area. We can also see the six phases of the GPU power supply, a 12-volt input choke, and the PCI-e power supply together with its separate controller.

We reassemble the card and apply fresh thermal paste. We installed the card on a riser, positioned conveniently to show the real temperatures inside the card. The fans are now working at 100% and the card has been mining for 15 minutes straight. Let's look at the temperature readings.

The maximum temperature on the power circuits is 75 degrees Celsius. This is not a good sign: at this temperature the power delivery can become less stable, which can affect the overclocking parameters, since overclocking loads the power supply even more! The temperature on the memory is 69 degrees, and after the rest of the card warms up over a longer run, it can reach as high as 80 degrees Celsius.

The temperature on the GPU is 67 degrees. With the GPU undervolted for mining Ether, where the GPU isn't the most important part of the process, the processor will most likely be fine in a room at 28 degrees Celsius. But let's see what temperature the software reports. Strangely, the temperature shown in the program differs from what our thermal imager reveals, which means the sensor on the card isn't doing its job. Most importantly, the program doesn't show the temperature of the memory or the power supply units at all. This misleads many users: the GPU works at a comfortable temperature while everything else overheats. Cards quickly fail, and users only understand what went wrong after the breakdown occurs.

Let's look at how hot the coolers themselves get. When running at 100% they reach up to 59 degrees Celsius, and at that speed the fan motors also heat up and add heat to the board. The more they wear out, the hotter they will get.

We overclocked the card and decided to see its potential by modifying the memory timings in the BIOS. We achieved 33.5 MH/s on Ethereum with no errors during mining. This is a fantastic result. If you are interested in how this was achieved, write in the comments and I'll reply with more details. The most important conclusion is that without liquid cooling, the memory and power supply modules overheat. This affects the stability of the hashrate, and after 2 to 3 months of continuous use the chips will degrade, which in turn affects mining income.
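To see how a hashrate figure like 33.5 MH/s translates into income, the usual back-of-the-envelope estimate is your share of the network hashrate times the blocks mined per day times the block reward. A minimal sketch; the 33.5 MH/s is from our test, but the network hashrate, block time, and block reward below are placeholder assumptions, not current network data:

```python
# Rough expected-reward estimate for proof-of-work mining.
def daily_coins(my_hashrate, network_hashrate, block_time_s, block_reward):
    """Expected coins per day: your share of the network's daily blocks."""
    blocks_per_day = 86400 / block_time_s
    return (my_hashrate / network_hashrate) * blocks_per_day * block_reward

# 33.5 MH/s from the test; network figures below are assumed placeholders.
eth_per_day = daily_coins(
    my_hashrate=33.5e6,       # 33.5 MH/s
    network_hashrate=250e12,  # assumed 250 TH/s total network hashrate
    block_time_s=14,          # assumed average block time in seconds
    block_reward=3.0,         # assumed block reward in ETH
)
print(f"~{eth_per_day:.5f} ETH/day")
```

The formula makes the conclusion concrete: any hashrate instability or chip degradation from overheating cuts `my_hashrate`, and income falls in direct proportion.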

We also have a video of this experiment!