I have experimented a bit with CPU mining. Using my own computer with an Intel Core 2 Duo, I get about 10-15 kH/s per CPU. Using a cloud server (Windows Azure, AMD Opteron 4171), I get 700-800 kH/s per CPU.
I ran SiSoft Sandra's cryptography benchmark on both systems and found that my cores have about half the processing power of Azure's, but that is a far cry from the >50x gap in mining performance I witnessed.
The only other difference is that my desktop PC runs Windows while the cloud server ran Ubuntu, but I don't believe the gap boils down to that.
Anyone care to explain what's going on?
SiSoft Sandra is a synthetic benchmark. It measures relatively simple operations, such as AES ciphers or straightforward hashing of a memory buffer. These are small workloads, usually designed to adapt to a wide range of hardware.
Cryptocurrency hashes may look trivial, but they usually chain several operations together and mangle memory along the way, so the processor has a harder time 'figuring out' the best way to execute them.
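To see the chained structure concretely, here is a minimal hash-rate sketch, assuming a Bitcoin-style workload of double SHA-256 over an 80-byte block header (the question doesn't say which coin is being mined, so the workload is an assumption). A pure-Python loop won't match an optimized miner's throughput, but it shows the kind of back-to-back hashing a synthetic micro-test doesn't exercise:

```python
import hashlib
import os
import time

def double_sha256(header: bytes) -> bytes:
    # Bitcoin-style: SHA-256 the 80-byte header, then SHA-256 the digest.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

header = os.urandom(80)  # stand-in for a block header
count = 200_000

start = time.perf_counter()
for _ in range(count):
    double_sha256(header)
elapsed = time.perf_counter() - start

print(f"{count / elapsed / 1000:.1f} kH/s on a single core (pure Python)")
```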
In particular, the Opteron has an L3 cache, and I'm pretty sure no Core 2 model has one. That can affect synthetic benchmarks hugely. The Opteron also has a richer instruction set, which could accelerate Sandra.
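Cache matters even more if the coin uses a memory-hard hash such as scrypt (which Litecoin uses; again an assumption, since the question doesn't name the coin). A rough sketch with Litecoin-like parameters, where the 128 KB scratchpad is exactly the kind of working set an L3 cache helps with (requires a Python build with OpenSSL 1.1+ for hashlib.scrypt):

```python
import hashlib
import os
import time

data = os.urandom(80)  # stand-in for a block header
count = 2_000

start = time.perf_counter()
for _ in range(count):
    # scrypt fills and repeatedly revisits a 128*r*N-byte scratchpad
    # (128 KB here), so throughput tracks cache/memory latency, not
    # just raw ALU speed.
    hashlib.scrypt(data, salt=data, n=1024, r=1, p=1, dklen=32)
elapsed = time.perf_counter() - start

print(f"{count / elapsed / 1000:.2f} kH/s scrypt on a single core")
```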
By contrast, most CPU miners are built for 'high compatibility' and often don't enable support for newer CPU features by default.
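If you want to check what your cores actually advertise before rebuilding or reconfiguring a miner, here is a quick Linux-only sketch (which fits the Ubuntu server; the feature list below is just illustrative) that reads the kernel's reported CPU flags:

```python
def cpu_flags() -> set:
    # Parse the feature flags the kernel reports for the first CPU.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("sse2", "ssse3", "sse4_1", "sse4a", "avx"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```

If the miner binary was built without using features the CPU actually has, rebuilding it for the local machine may close part of the gap.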