You are funny. But it gets on my nerves when people are not telling the truth.
I suspect you're guessing here. Do you have first-hand knowledge of my data center?
Edit - I had "1000 YAC is yours if you can even correctly guess which US state it's in" here, but then I posted a screenshot which makes it obvious which timezone I'm in, which significantly cuts down on the number of states I could be in.

1) I mined very successfully with a hexacore Intel i7 up until difficulty 0.1-0.2. From the number of blocks generated I estimated that I was 1/1000 - 1/600 of the network. Check my posts, I posted that when it happened, not when I felt bitter about not getting enough coins or needed to disprove you. Maybe your setup was wrong.
Well, good for you and congratulations on your mining success. I hope you profit greatly from your YAC fortune. If my setup had been wrong, I would've seen a significant number of orphaned blocks and/or would have achieved significantly lower hash rates than the average 300kH/sec per HS21 blade server and 200kH/sec per Amazon c1.xlarge instance. The reality is, apparently I had very nearly the lowest orphan rate of anyone who has weighed in so far. I bet that wasn't the result of misconfiguration, and instead happened because I tweaked the client source to form a low-latency mesh amongst my own servers, which make up a significant portion of the Yacoin network nodes, and jacked up the outbound connection count. Am I right? :-)
I would imagine you started mining sooner than 8 hours after the launch of the coin, and didn't make your estimate based on your block generation rate right when difficulty reached 0.1. Otherwise, if you were 1/1000th of the network hash power, you would've been successfully mining between 2 and 2.5 non-orphaned blocks per hour on your 6-core i7 at that point in time. Or if you were 1/600th of the network hash power, you would've been successfully mining between 3 and 4 non-orphaned blocks per hour. Is that your claim? Or did you actually mean that from the coin launch until difficulty reached somewhere between 0.1 and 0.2, you mined between 1/600th and 1/1000th of the "moneysupply" mined by the client? I would consider that quite possible if you started early, but that's significantly different than what I claimed was occurring when difficulty reached 0.1.
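In case the arithmetic behind those figures isn't obvious, it's just proportionality between your share of the network hash power and the network-wide block rate. A minimal Python sketch, using only the numbers already quoted above (nothing here is measured chain data):

```python
# Proportionality between hash power share and block finds.
# Only the figures quoted in the paragraph above are used; nothing is measured.

def expected_blocks_per_hour(hash_share, network_blocks_per_hour):
    """Expected non-orphaned blocks/hour for a miner with the given share of hash power."""
    return hash_share * network_blocks_per_hour

def implied_network_rate(my_blocks_per_hour, hash_share):
    """Network-wide block rate implied by a miner's own block rate and hash power share."""
    return my_blocks_per_hour / hash_share

# 2 to 2.5 blocks/hour at a 1/1000 share implies the network as a whole was finding:
low = implied_network_rate(2.0, 1 / 1000)
high = implied_network_rate(2.5, 1 / 1000)
print(f"~{low:.0f} to ~{high:.0f} blocks/hour network-wide")
```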
2) You show a pic of 1 blade rack on a table (!) and want us to believe you have access to a whole blade center consuming >200kW of power and hence costing $1000 per day in power alone to operate? How much would you have to sell your YACs for to be profitable?
Where is the table (!) in my photos? It sure looks to me like the BladeCenter chassis visible in the photos is very obviously mounted in a standard 19" rack. Did you not see the other 3 photos? It sorta sounds like you only saw the last of the 4. If your point is that I did not share a photo showing 58 loaded BladeCenter chassis in a row of racks, you are correct, I did not share such a photo. And since I took the photos with such a ridiculously narrow depth of field, I must be hiding something pretty interesting in the background that I wasn't willing to share with the world. Guilty as charged, the depth of field is intentionally narrow and I won't be posting photos of the whole data center. You'd have no way of knowing I didn't just lift someone else's data center photo off Google Images anyway, unless I went around scribbling silly YAC-specific messages on everything with a Sharpie to prove otherwise.
Your numbers are close, but a bit off: an IBM HS21 w/ 2x E5450's consumes 210W at full load on all cores, taking into account the 92% efficiency of the standard 2000W BladeCenter power supplies. Each 8677 chassis consumes ~500W to operate the dual fans, AMM and two Ethernet switch modules:
58x 8677 chassis base load: 58 x 500W = 29kW
800x HS21 blades w/ E5450's: 800 x 210W = 168kW
Not including the additional blades used as file servers for all the other blades to network boot from, we're talking about 197kW. It does exceed 200kW if we count cooling, however: 62kW more, for 60 tons of HVAC.
Where you're real far off is on power cost:
259kW x $0.065/kWh x 24 hours = $404/day.
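If anyone wants to check that arithmetic, here it is spelled out in a few lines of Python, using exactly the figures above:

```python
# Power and cost arithmetic for the figures quoted above.
CHASSIS_COUNT = 58
CHASSIS_BASE_W = 500          # fans, AMM, two Ethernet switch modules per 8677 chassis
BLADE_COUNT = 800
BLADE_W = 210                 # HS21 w/ 2x E5450, full load, incl. PSU efficiency loss
HVAC_W = 62_000               # ~60 tons of cooling
POWER_COST_PER_KWH = 0.065    # USD

it_load_kw = (CHASSIS_COUNT * CHASSIS_BASE_W + BLADE_COUNT * BLADE_W) / 1000
total_kw = it_load_kw + HVAC_W / 1000
cost_per_day = total_kw * POWER_COST_PER_KWH * 24

print(f"IT load: {it_load_kw:.0f} kW")           # 197 kW
print(f"Total with cooling: {total_kw:.0f} kW")  # 259 kW
print(f"Cost per day: ${cost_per_day:.0f}")      # ~$404
```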
We're only talking about an 800A 208V 3-phase service entrance feeding my cluster. I've seen much larger 3-phase service entrances in warehouses, industrial spaces, even offices. Nothing particularly exciting or out of the ordinary, no? Half the homes in my area have 400A 1-phase 120/240V service entrances, and that's already a capacity of approximately 96kW, for an everyday residence.
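And for the capacity side, the standard 3-phase apparent power formula is all you need; a quick sketch, nothing specific to my installation:

```python
import math

# 3-phase apparent power: S = sqrt(3) * line-to-line voltage * line current
service_kva = math.sqrt(3) * 208 * 800 / 1000
print(f"800A @ 208V 3-phase: ~{service_kva:.0f} kVA")      # ~288 kVA, comfortably above 259 kW

# Residential comparison from above: 400A split-phase at 240V
print(f"400A @ 240V 1-phase: {400 * 240 / 1000:.0f} kW")   # 96 kW
```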
3) Did you lose your job already? Seriously, if you did what you claim you did, you should be worried! Blade centers like this are operated by companies with >>1.000-10.000 employees. I could have tried to use my employer's compute blades (20x2x8 Xeon cores). Your infrastructure team must be morons if they have not seen the traffic/CPU etc. and acted upon it.
That's a bit presumptuous. You've assumed that I do not own the data center and server cluster. Where exactly are IBM's qualification requirements that restrict sales of BladeCenter components to only "companies with >>1.000-10.000 employees"? Hell, you can even hop on eBay and snag used BladeCenter chassis all day long for $175/ea shipped and HS21 or HS21 XM blades w/ 2x E5450's and 8GB of ECC RAM for $100/ea to $140/ea shipped. Zero of my blade servers, chassis and related components were purchased directly from IBM; the whole cluster was built for minimal cost from surplus components.
If such amazing technology as used 4-year-old Xeon blade servers is simply not available to companies without 1000-10000 employees, how did I post a photo showing 14 of them powered on, then another photo with an obviously YAC-specific message written with a Sharpie across the heatsinks of one of the blade servers? By your logic, this should not have been possible. Unless I Photoshopped it?
If you're actually curious, my cluster is rented out for large 3D render jobs for film projects. If I'm between render jobs, I'll certainly do with my hardware as I please. And I guess my "infrastructure team" will have to just continue twiddling their thumbs watching my bandwidth consumption (wow, a whopping 18Mbps, the world is ending and my dual-homed fiber connections just can't cope with that kind of extreme pressure!). If you've had any experience with the BladeCenter platform, you may be aware that this hardware doesn't fail particularly often (I can count hardware failures in the last year on one hand) and managing an 800-server cluster is a one-person job. Particularly when the entire cluster (other than 2 blades) network boots the exact same Ubuntu image off the file server blade for batch rendering. I sure as heck wouldn't hire any network admin who couldn't achieve this simple task.
4) You install possibly harmful software on company resources?!? And possibly disrupt company operations? For a few coins? WELL DONE DUDE!
I'll be sure to remind myself not to install possibly harmful software on my own hardware in the future. Thank you for your concern, however. I guess for now I'll have to just rely on the fact that I can read and understand everything the code is doing and can run a diff between pocopoco's code and the NovaCoin source. Come back and talk when you're no longer a slave to your employer and have achieved a level of IT experience where you are able to pursue your own career path and ambitions.
Given that I have all the correct numbers on power consumption and HVAC heat load for a cluster of this size, perhaps that's sufficient for you to draw your own judgement. If not, well, sorry, I have nothing for you.
Are you next going to suggest that there's no way I could have possibly launched the additional 760 c1.xlarge spot instances across the 8 Amazon data centers? It would've been 800 if the Sao Paulo data center hadn't run out of spot instance availability and set the spot price to whatever amount I was bidding.
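If it helps, here's a rough boto3-style sketch of the kind of multi-region spot request involved. To be clear, this is illustrative only and not the actual script, bids, or counts I used at the time; the region list is simply my best guess at the eight public EC2 regions of that era, and the AMI IDs are placeholders:

```python
# Illustrative only: requesting c1.xlarge spot instances across multiple EC2 regions.
# Modern boto3, not the 2013-era tooling; AMI IDs, bid price, and per-region counts
# are placeholders, not actual values.
import boto3

REGIONS = [
    "us-east-1", "us-west-1", "us-west-2", "eu-west-1",
    "ap-northeast-1", "ap-southeast-1", "ap-southeast-2", "sa-east-1",
]
AMI_BY_REGION = {r: "ami-xxxxxxxx" for r in REGIONS}   # placeholder AMI per region
PER_REGION = 100                                       # placeholder instance count
BID = "0.10"                                           # placeholder bid, USD/hour

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.request_spot_instances(
        SpotPrice=BID,
        InstanceCount=PER_REGION,
        LaunchSpecification={
            "ImageId": AMI_BY_REGION[region],
            "InstanceType": "c1.xlarge",
        },
    )
    print(region, len(resp["SpotInstanceRequests"]), "spot requests submitted")
```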