That's quite an odd story as well. I thought that most computers would favor the cold in order to function (though it does have to stay above zero degrees, otherwise components start to fail like you stated; a lot of that failure can be temporary). How are they doing year-round with the frost?
Please excuse Professor mode here...
The above-freezing requirement pretty much only applies to circuits that use wet electrolytic capacitors. Being typically water-based, they can literally freeze and be damaged by the resulting expansion. Dry or solid caps have no problem with that part. One other physical aspect that may not be happy with very low temps is the very fine soldered connections to the chips and other components. As temp goes down, typical solder becomes brittle and may actually crack from the stress of the parts shrinking.
But - now comes the matter of the components having specs that change with temp (temperature coefficient). It is usually expressed as a % change per degree from nominal values. Many of the circuits in digital equipment, such as clock timers and voltage references, are designed to operate in-spec over a narrow range of temperatures based on what the expected change in tolerances is over temp. Too cold, and it might not start at all, or will run too fast or slow. Too hot, and it probably will start but again will run out of spec. IF care is taken in the design to deal with extreme temps by using better (read: more expensive) components, then no problem - but that costs money to implement.
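To make the tempco idea concrete, here is a minimal sketch of how a %/degC coefficient shifts a part's effective value away from its datasheet nominal. The function name, the 25 MHz clock, and the 0.005 %/degC figure are all invented for illustration, not taken from any real datasheet:

```python
def drifted_value(nominal, tempco_pct_per_degc, temp_c, nominal_temp_c=25.0):
    """Effective value of a part at temp_c, given a linear tempco
    expressed as a percent change per degree C from nominal."""
    drift = tempco_pct_per_degc / 100.0 * (temp_c - nominal_temp_c)
    return nominal * (1.0 + drift)

# A hypothetical 25 MHz reference clock with a 0.005 %/degC tempco:
hot = drifted_value(25_000_000, 0.005, 85)    # runs fast when hot
cold = drifted_value(25_000_000, 0.005, -20)  # runs slow when cold
print(hot, cold)
```

A real circuit is only guaranteed in-spec while that drifted value stays inside the design's tolerance window, which is why a narrow intended temperature range keeps the parts cheap.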
Then come semiconductors (transistors): they are excruciatingly sensitive to temp. The voltage required to switch from 'off' to 'on' shifts with temperature, and on top of that the resulting on-state voltage drop also changes with temp (it goes up as temp rises).
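As a rough numeric sketch of that last point: a power MOSFET's on-resistance (and therefore its on-state voltage drop at a given current) climbs with junction temperature. The linear slope, part values, and currents below are illustrative assumptions, not figures from any specific device:

```python
def rds_on(r25_ohms, temp_c, pct_per_degc=0.4):
    """Linear approximation of on-resistance vs. junction temp,
    using an assumed ~0.4 %/degC slope (illustrative only)."""
    return r25_ohms * (1.0 + pct_per_degc / 100.0 * (temp_c - 25.0))

def on_state_vdrop(current_a, r25_ohms, temp_c):
    """Ohm's law: the switch's on-state drop at a given temp."""
    return current_a * rds_on(r25_ohms, temp_c)

# 10 A through a hypothetical 5 mOhm switch: the drop grows as the die heats.
print(on_state_vdrop(10, 0.005, 25))   # cool die
print(on_state_vdrop(10, 0.005, 100))  # hot die: larger drop, more waste heat
```

More drop means more heat dissipated in the transistor itself, which is part of why designers care so much about the operating temperature window.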
Now the Black Majik of semiconductor design starts kicking in. While this has always been an issue, from the days of the 1st hand-assembled transistors through roughly the 1-micron and lower nodes, as junction size decreases the amount of Black Art applied increases. For us, 'Black Art' is a decent way to describe what a Foundry does to make ever-smaller transistor junction sizes. They literally make scads of test structures (gates), tweaking tiny bits of the process over and over and over until they get physical devices that mirror what the simulation models say they should do. Those process tweaks become part of a Foundry's Secret Sauce for producing chips. Costs and dev times are staggering, ergo the count-on-one-hand number of companies/Foundries that can produce advanced node sizes such as 16nm and lower.
In short, 28/22nm chips were touchy, but being a reasonably mature tech they also had fairly predictable production-rule (Black Art) libraries. The 16/14nm nodes still have a long way to go regarding predictable results (operating specs) per batch of wafers run.... IMHO that is a large part of the reason for Bitmain's Auto-tune firmware. It is definitely why even the s7's used a pre-heat mode when being booted: the 1st boot is at high Vcore to get the chips near operating temp, followed about a minute later by a 2nd boot at the final Vcore. In short, Bitmain and others designed their miners with the much more expected moderate-to-high ambient operating temps in mind vs running cold.
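That two-stage pre-heat boot can be sketched as below. This is purely a hypothetical illustration of the sequence described above - the voltage values, names, and timing are invented, and this is in no way Bitmain's actual firmware logic:

```python
import time

PREHEAT_VCORE = 0.72   # elevated core voltage for the warm-up pass (made up)
FINAL_VCORE = 0.63     # normal operating core voltage (made up)
PREHEAT_SECONDS = 60   # the "about a minute later" from the post

def preheat_boot(preheat_seconds=PREHEAT_SECONDS):
    """Boot hot to warm the dies, then re-boot at the final voltage.
    Returns the event log instead of touching real hardware."""
    events = []
    events.append(("boot", PREHEAT_VCORE))  # 1st boot: high Vcore warms the chips
    time.sleep(preheat_seconds)             # let the dies reach operating temp
    events.append(("boot", FINAL_VCORE))    # 2nd boot: settle at the running Vcore
    return events

print(preheat_boot(0))
```

The point of the pattern is simply that the chips' switching behavior is only predictable once the dies are near their intended operating temperature, so the firmware gets them there before committing to the final voltage.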
... and those are the simplified reasons why the miners like high temps...