I don't have exact numbers to support this claim, but based on my estimates over the last few days, EC2 mining on GPU/g2.2xlarge instances seems to be more cost-efficient than EC2 mining on c3.8xlarge. Is that at all possible?
I had about 30 c3.8xlarge instances running yesterday, and replaced them with about 30 g2.2xlarge instances. The total hashrate seems to be lower (it's hard for me to add up exactly, but maybe about half of what the CPU instances delivered), while the cost is closer to 1/3 of the CPU instances.
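A rough back-of-the-envelope check of that intuition (the hashrate values below are placeholder ratios, not measurements, and the prices are the approximate hourly rates I'm paying):

```python
# Back-of-the-envelope cost-per-hash comparison.
# Hashrates are guesses expressed as ratios; only the ratio matters here.

cpu_cost_per_hour = 0.30   # USD/hour, approximate c3.8xlarge rate
gpu_cost_per_hour = 0.10   # USD/hour, approximate g2.2xlarge rate

cpu_hashrate = 1.0         # normalized per-instance hashrate for c3.8xlarge
gpu_hashrate = 0.5         # guess: roughly half the CPU instance's hashrate

cpu_cost_per_hash = cpu_cost_per_hour / cpu_hashrate   # 0.30 per hashrate unit
gpu_cost_per_hash = gpu_cost_per_hour / gpu_hashrate   # 0.20 per hashrate unit

print(f"CPU: {cpu_cost_per_hash:.2f} USD/hour per hashrate unit")
print(f"GPU: {gpu_cost_per_hash:.2f} USD/hour per hashrate unit")
# If the GPU instances really deliver half the hashrate at a third of the
# cost, they come out roughly a third cheaper per hash.
```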
Just asking, hoping someone with more experience in this can maybe confirm this intuition, or tell me if I'm most likely wrong.
What are the hash rates on the c3.8xlarge and g2.2xlarge, and what are the costs?
The rates are easy: the c3.8xlarge is around 0.3 USD/hour, the g2.2xlarge around 0.1 USD/hour. Both are slightly lower in practice, but let's compare them at those maximum rates.
Hashrate is more difficult. I don't know exactly how to read the miner logs, and the hashrate values seem to change quite a bit over time.
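What I've been doing, roughly, is averaging the reported hashrate over a longer window instead of trusting any single log line. A minimal sketch with made-up sample values (how you extract the readings depends on the particular miner, so that part is not shown):

```python
import statistics

# Hypothetical hashrate readings (kH/s) pulled from one instance's miner
# log over an hour. Individual readings fluctuate, so the mean over a
# longer window is a more useful per-instance figure.
samples = [310.2, 295.7, 342.1, 301.9, 318.4, 288.0, 330.5]

mean_rate = statistics.mean(samples)
stdev_rate = statistics.stdev(samples)
print(f"mean hashrate: {mean_rate:.1f} kH/s (+/- {stdev_rate:.1f})")

# Cost efficiency per instance, using the approximate hourly price above.
gpu_cost_per_hour = 0.10  # USD/hour, approximate g2.2xlarge rate
print(f"cost: {gpu_cost_per_hour / mean_rate:.5f} USD per kH/s-hour")
```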
An additional complication: when I ran a large number of concurrent c3.8xlarge instances in the same region, the per-instance rate seemed to be lower than when I had only one or two instances running in that region.