Gaming reviews have NOTHING to do with mining usage.
Try looking at the various build-up reviews for the Vega FOR MONERO MINING USAGE.
For a specific related example, most of the reviews I have seen put "high load" power usage of a GTX 1070 ti in the 180-220 watt range - yet the MOST EFFICIENT mining point for those cards is 106 watts, give or take a couple. Even folks that "push" them for higher hashrate rarely go above the 145-150 watt range, because the gains above that are TINY compared to the increase in power usage (my specific 1070 ti mining cards are currently set to 105 watts to keep total system draw under 6 amps on the system they are in).
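To put rough numbers on that efficiency argument, here is a minimal sketch of the hash-per-watt math - the hashrate values are purely illustrative placeholders (NOT benchmark quotes); only the power limits come from what I wrote above:

    # Rough hash-per-watt sketch for a power-limited card.
    # The hashrates are made-up placeholders for illustration only; the power
    # limits (106 W efficient point, ~150 W "pushed") are the ones given above.
    settings = {
        "efficient (106 W)": {"watts": 106, "hashrate": 700},  # placeholder H/s
        "pushed (150 W)":    {"watts": 150, "hashrate": 780},  # placeholder H/s
    }
    for name, s in settings.items():
        print(f"{name}: {s['hashrate'] / s['watts']:.2f} H/s per watt")
    # Even being generous to the pushed setting, the extra ~44 W buys only a
    # small hashrate bump, so hash-per-watt drops noticeably.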
Also, the Vega 64 pulls a lot more power IN GAMING USAGE than the 56 does; the 56 was deliberately designed by AMD to have a much lower TDP limit.
YOU need to check your ASSUMPTIONS, that's where you are messing up.
Floating Point operations are 100% WORTHLESS for cryptocoin usage, which is 100% INTEGER operations.
Another BAD ASSUMPTION you make, not a mistake on MY part.
Would you also care to explain how the "X16 LTE" cell phone modem helps mining? Just for ONE example of the "not useful stuff for mining" on a Snapdragon 835.
My "under 300 watt" measurement was made AT THE WALL on a Brand power meter while the system was actively hashing at 1950+ hash on Monero mining - and that was on a system that is NOT "power optimised" well, as I've never figured out how to get bloody Wattman to do a lot of the stuff I can do routinely in Afterburner like UNDERVOLT (Vega 56 cards LOVE to be undervolted, they tend to clock HIGHER with some undervolt as it lets them stay below the TDP easier). It is running a severely overkill Gold-rated power supply (Seasonic X-850) because that's what I had available when I put the system together, but the system is based on a FM2 motherboard with am AMD A10-7890k (which is NOT a low power APU) with the iGPU running the graphics for Win10, an HGST 3TB hd (system is also doing BURST mining) so the actual power draw of the SYSTEM as a whole would probably be about 275 watts (That model of PS usually pulls about 92% efficiency in the 30-50% load range).
220 watts draw for the GPU is a PESSIMISTIC estimate, as I'm pretty sure the rest of the system is pulling 80-100 watts total NOT 50-60.
Presuming an optimistic 2 watts, can a Snapdragon manage 20 hash on Monero?
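For reference, here is that comparison worked out as a quick sketch, using only the numbers already given above (wall draw, PSU efficiency, hashrate, and the hypothetical 2 watt / 20 hash Snapdragon):

    # Hash-per-watt comparison using only the figures quoted above.
    wall_watts = 300          # measured at the wall
    psu_efficiency = 0.92     # Seasonic X-850 in its 30-50% load range
    dc_watts = wall_watts * psu_efficiency  # ~276 W actually drawn by the system
    system_hashrate = 1950    # H/s on Monero

    snapdragon_watts = 2      # the optimistic assumption in the question above
    snapdragon_hashrate = 20  # the hypothetical figure being asked about

    print(f"Vega 56 system: {system_hashrate / dc_watts:.1f} H/s per watt")  # ~7.1
    print(f"Snapdragon (IF 20 hash at 2 W): {snapdragon_hashrate / snapdragon_watts:.1f} H/s per watt")  # 10.0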
8 cores at 2 GHz (ballpark average - I saw the "big/little" core split) should in theory manage more, but how much CACHE do they have? Monero wants roughly 2 MB of CPU CACHE MEMORY per thread to run efficiently (and I can't find a spec anywhere that shows the amount of CPU cache on a Snapdragon).
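To show why that cache spec matters so much, here is a minimal sketch - the 2 MB-per-thread rule is the one I just stated, while the cache sizes fed in are example inputs, since I can't find the real Snapdragon figure:

    # How many Monero (CryptoNight) threads fit in cache without thrashing.
    # The 2 MB-per-thread rule of thumb is the one quoted above; the cache
    # sizes passed in below are example inputs, NOT confirmed Snapdragon specs.
    SCRATCHPAD_MB = 2

    def efficient_threads(cache_mb, cores):
        return min(cores, int(cache_mb // SCRATCHPAD_MB))

    print(efficient_threads(cache_mb=2, cores=8))   # 2 MB shared cache -> 1 efficient thread
    print(efficient_threads(cache_mb=8, cores=8))   # 8 MB -> 4 efficient threads
    print(efficient_threads(cache_mb=16, cores=8))  # 16 MB -> all 8 cores run efficiently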
Then figure in the COST of the things - even *IF* they can mine efficiently, is it worth the COST of the things for whatever hashrate they achieve?
THAT is the primary reason pretty much any SoC setup gets ignored for mining - even if it IS efficient in hash/watt, the sheer COST makes the time to achieve ROI end up being measured in YEARS.
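As a sketch of that ROI math (the cost and revenue numbers are placeholder assumptions, not quotes - they swing constantly with price and difficulty):

    # Time-to-ROI sketch: hardware cost divided by net daily earnings.
    # Every dollar figure here is a placeholder assumption for illustration -
    # coin price and difficulty swing constantly.
    device_cost_usd = 500            # assumed price of the SoC board/phone
    hashrate = 20                    # H/s, the figure discussed above
    revenue_per_hash_day = 0.002     # USD per H/s per day (placeholder)
    watts = 2
    electricity_usd_per_kwh = 0.10

    daily_net = hashrate * revenue_per_hash_day - (watts / 1000) * 24 * electricity_usd_per_kwh
    print(f"Days to break even: {device_cost_usd / daily_net:,.0f}")  # thousands of days = years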
Everyone knows that Nvidia cards are much more efficient right now.
Your 300 W load figure is only considering the GPU load. And like I said before, even if you accept that figure, you cannot reach the 30 you've mentioned - that's simple math.
20 hash on Monero is totally possible on an 835, in theory. Neither the 835 nor the X1 has 2 MB of cache per core - they have 2 MB per 4 cores - but you are not factoring in the GPU side of things.
Like I said before, the cost from a random supplier is expensive, but if you can build your own board it can be interesting. It is still hard to justify such a project considering that the miners and software are not efficient on these platforms yet.
As for:

> Floating Point operations are 100% WORTHLESS for cryptocoin usage, which is 100% INTEGER operations.
> Another BAD ASSUMPTION you make, not a mistake on MY part.
> Would you also care to explain how the "X16 LTE" cell phone modem helps mining? Just for ONE example of the "not useful stuff for mining" on a Snapdragon 835.
There are always different versions of these SoCs for different markets. For example, when Nvidia or Qualcomm produce a SoC for embedded machines, those SoCs do not have said functionalities. And again, you keep forgetting about the GPU side of these SoCs - please check out the SDA835 (APQ8098) for example! As for floating point performance, it's a valid way of comparing hardware, especially across different platforms, since it doesn't require platform optimisation and whatnot.
ARM processors aren't like x86; they usually have a shared L2 cache for all cores. My Allwinner H3 averages 8 H/s across four cores and 2.25 H/s on a single core. The processor only has a 512 KB L2 cache, so if the rule were correct I'd see a performance drop rather than near-linear scaling. The reason is that the L2 is shared and low latency, and even though you're using SD cards, only tiny amounts of information are being read, so you don't get big drops in performance. You really have to look at the whole design - these are precursors to ASICs.
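A quick sanity check on those H3 numbers, using just the figures I gave, shows four threads come close to perfect scaling despite the shared 512 KB L2:

    # Scaling check for the Allwinner H3 figures above.
    single_core = 2.25                   # H/s with one thread
    four_core_actual = 8.0               # H/s with four threads
    four_core_ideal = single_core * 4    # 9.0 H/s if scaling were perfect

    loss = 1 - four_core_actual / four_core_ideal
    print(f"Scaling loss from the shared 512 KB L2: {loss:.0%}")  # ~11%, not a collapse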