I would imagine you started mining sooner than 8 hours after the launch of the coin, and didn't make your estimate based on your block generation rate right when difficulty reached 0.1. Otherwise, if you were 1/1000th of the network hash power, you would've been successfully mining between 2 and 2.5 non-orphaned blocks per hour on your 6-core i7 at that point in time. Or if you were 1/600th of the network hash power, you would've been successfully mining between 3 and 4 non-orphaned blocks per hour. Is that your claim?
Correct - I mined 3-4 blocks in under 2 hours. There was a new block every 2-4 seconds if I remember correctly, so 900-1800 new blocks per hour. I switched yacoin-qt off at times in between, so there are gaps.
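For anyone following along, the expected yield is just your share of the network hash power times the network block rate. A quick sketch, using the share and block-time figures quoted above (they're the thread's claims, not measurements):

```python
def expected_blocks_per_hour(hash_share, block_time_s):
    """Expected non-orphaned blocks per hour for a miner holding
    `hash_share` of the network hash power, assuming an average
    network block time of `block_time_s` seconds."""
    network_blocks_per_hour = 3600 / block_time_s
    return hash_share * network_blocks_per_hour

# A block every 2-4 s means the network finds 900-1800 blocks/hour,
# so a 1/1000th miner would expect roughly 0.9-1.8 blocks/hour.
print(expected_blocks_per_hour(1 / 1000, 4))  # ~0.9 blocks/hour
print(expected_blocks_per_hour(1 / 1000, 2))  # ~1.8 blocks/hour
```

With gaps from switching the client off, 3-4 blocks in under 2 hours sits at the high end of that range.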
You'd have no way of knowing I didn't just lift someone else's data center photo off Google Images anyway, unless I went around scribbling silly YAC-specific messages on everything with a Sharpie to prove otherwise.
Photoshop?
I didn't doubt that you might have one rack with a few blades. I know that old blades are cheap on eBay, but they're pretty power-inefficient, which is why large companies usually sell them off. Heck, even one of my colleagues had a rack server at home when I was doing my PhD.
Not including the additional blades used as file servers for all the other blades to network-boot from, we're talking about 197 kW. It's >200 kW if we count cooling, however (62 kW more, for 60 tons of HVAC).
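For the curious, those totals are easy to sanity-check. A back-of-the-envelope sketch; the 800-server count comes from later in this thread, and the per-blade draw is derived from the quoted totals, not measured:

```python
# Sanity-check the quoted power figures.
compute_kw = 197.0  # quoted draw of the compute blades
cooling_kw = 62.0   # quoted draw of the HVAC (60 tons)
servers = 800       # cluster size mentioned later in the thread

total_kw = compute_kw + cooling_kw
watts_per_blade = compute_kw * 1000 / servers

print(f"total draw: {total_kw:.0f} kW")                    # 259 kW, i.e. >200 kW as claimed
print(f"implied per-blade draw: {watts_per_blade:.0f} W")  # ~246 W per blade
```

Around 246 W per blade is in the right ballpark for older dual-socket blade hardware, which is consistent with the "cheap but power-inefficient" point above.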
Uh, I feel so bad, 3 kW short! And that's from someone who is more into statistics than into building blade farms and who didn't bother to google it.
The error in the power costs is easy to explain: the UK is more expensive.
If you're actually curious, my cluster is rented out for large 3D render jobs for film projects.
Yes, that's interesting. It'd be a nice project to have people rent out their CPU cycles for a distributed renderer instead of mining coins... The upload bandwidth would be a bit shitty and introduce some render latency, but probably negligible.
If you've had any experience with the BladeCenter platform, you may be aware that this hardware doesn't fail particularly often (I can count the hardware failures in the last year on one hand), and managing an 800-server cluster is a one-person job.
Hmm, our infra team is a bit larger, and we do have quite a few outages throughout the year. But then, we actually run disks in our machines that fail, discover Red Hat Linux bugs along the way, and monitor network latency very tightly.