Author

Topic: how many sha256 hashes can a consumer-grade CPU compute per second on average? (Read 374 times)

legendary
Activity: 4326
Merit: 8914
'The right to privacy matters'
A long time ago, the Bitcoin mining hash rate on a CPU was up to 3 MH/s, and up to 1 GH/s on a GPU. They may be somewhat higher now, but still the same order of magnitude.

Here is a chart: https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison


Core i5 2500K   4/4   20.6 MH/s from your chart is accurate, as I used to mine just a tiny bit with an i5 2500 on the pool bitminter back in 2012
full member
Activity: 161
Merit: 230
This seems to be an example of the XY problem

https://en.wikipedia.org/wiki/XY_problem

It would be easier if you told us why you need the number of hashes a consumer grade CPU can do, so we can better aid with the underlying problem you've encountered.
legendary
Activity: 4466
Merit: 3391
A long time ago, the Bitcoin mining hash rate on a CPU was up to 3 MH/s, and up to 1 GH/s on a GPU. They may be somewhat higher now, but still the same order of magnitude.

Here is a chart: https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison
legendary
Activity: 3822
Merit: 2703
Evil beware: We have waffles!
Keep in mind that Intel, AMD, et al. have had support for hardware SHA-256 functions since at least 2013, and the more recent generations of Intel & AMD CPUs now have said hardware built into the chip. Sorta like the early PC days, when there were first math coprocessors for the 186, 286 & 386 CPUs, followed by the 486 with the coprocessor in the CPU chip. Details of the Intel extensions are here.

Thing is that they are for full data encryption & decryption functions, both of which rely on knowing both the private and public keys for speedy processing - not mining, where we only try to randomly find the right key.
newbie
Activity: 5
Merit: 1
Using the code provided by seek3r, with some adaptations (such as using more recent package versions for the sha2 and digest crates), I was able to obtain around 3 million double SHA-256 hashes per second with 2 threads. I actually computed double SHA-256 hashes (the output of the first digest being the input for the second), so it would be 6 million single hashes. The tests were done on an x86_64 machine, running 2 threads, each on an Intel CPU core at 2.60 GHz (boost up to 4 GHz).
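For anyone who wants to sanity-check the double-SHA256 construction without setting up Rust, here is a minimal Python sketch using only the standard library (the helper name is mine, not from seek3r's code); it chains the raw 32-byte output of the first digest into the second, as described above:

```python
import hashlib
import time

def double_sha256(data: bytes) -> bytes:
    # The second pass hashes the raw 32-byte digest, not its hex string.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Rough single-thread rate check, timed inside the program itself.
n = 100_000
msg = b"sha256_maxhash"
start = time.perf_counter()
for _ in range(n):
    double_sha256(msg)
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} double-SHA256 hashes/sec on one thread")
```

Absolute numbers will differ from the Rust version (interpreter overhead per call), but the ratio of single to double hashes should hold.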
legendary
Activity: 1316
Merit: 2018
Thank you very much! It worked for my purposes!
Glad to hear that!
Always a pleasure, especially if it means I can improve/check my Rust skills.    Cool
newbie
Activity: 5
Merit: 1
I have a linux machine, x86_64 architecture, 4-core Intel. Thank you!

Alright, thanks. Tried something in Rust.
You can play around with NUM_THREADS - set it to 8 if your CPU supports hyper-threading. You can also increase NUM_HASHES each round to get a more precise result.

Since I used the rust-crypto and rayon crates, you have to add them as dependencies in your Cargo.toml (the package is named rust-crypto, but it is imported as crypto):

Code:
[dependencies]
rust-crypto = "0.2.36"
rayon = "1.8.0"

After that, implement this code:

Code:
extern crate crypto;
extern crate rayon;

use crypto::digest::Digest;
use crypto::sha2::Sha256;
use rayon::prelude::*;
use std::time::Instant;

const NUM_HASHES: usize = 1_000_000; // hashes computed per thread
const NUM_THREADS: usize = 4;        // match your core count

fn compute_hashes() {
    // Hash the same short message over and over, resetting the hasher each round.
    let string = "sha256_maxhash".as_bytes();
    let mut sha = Sha256::new();
    for _ in 0..NUM_HASHES {
        sha.input(string);
        let _ = sha.result_str();
        sha.reset();
    }
}

fn main() {
    rayon::ThreadPoolBuilder::new()
        .num_threads(NUM_THREADS)
        .build_global()
        .unwrap();

    let start = Instant::now();
    (0..NUM_THREADS).into_par_iter().for_each(|_| compute_hashes());
    let duration = start.elapsed();

    let total_hashes = NUM_HASHES * NUM_THREADS;
    let hashes_per_second = total_hashes as f64 / duration.as_secs_f64();
    println!("Time taken: {:?} seconds", duration);
    println!("Hash rate: {:.0} hashes per second", hashes_per_second);
}

Save it and you are done! To run it just execute it via cargo run --release in the same directory.


Thank you very much! It worked for my purposes!
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
While some members have provided nice answers, I would just recommend OP search for Hashcat benchmark results for a given device. A few examples:
1. Apple M1 Ultra: 1785.4 MH/s. Source: https://gist.github.com/Chick3nman/ccfb883d2d267d94770869b09f5b96ed
2. 8x Nvidia GTX 1080: 23012.1 MH/s. Source: https://gist.github.com/epixoip/a83d38f412b4737e99bbef804a270c40

Having said that, someone made a performance benchmark to check how fast calculating SHA256 your CPU is, inside the browser: https://www.measurethat.net/Benchmarks/Show/1246/0/sha256

Doesn't seem reliable to me, since different browsers and browser versions could affect the benchmark result.
legendary
Activity: 3500
Merit: 6320
Crypto Swap Exchange
As far as SHA256 performance on CPU is concerned, there are special instructions available on most CPUs for accelerating the computation of SHA256 hashes.

These are available per-thread, so the performance you get from using some generic SHA-256 function is going to be slower than a specialized function that makes use of these opcodes.

Having said that, someone made a performance benchmark to check how fast calculating SHA256 your CPU is, inside the browser: https://www.measurethat.net/Benchmarks/Show/1246/0/sha256

Not even tl;dr, just dr, but isn't that only going to be able to use 1 core of the CPU? Usually most browsers will limit that.

But no matter what, running it in a browser, although simple, will never give good results due to overhead.

-Dave
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
As far as SHA256 performance on CPU is concerned, there are special instructions available on most CPUs for accelerating the computation of SHA256 hashes.

These are available per-thread, so the performance you get from using some generic SHA-256 function is going to be slower than a specialized function that makes use of these opcodes.

Having said that, someone made a performance benchmark to check how fast calculating SHA256 your CPU is, inside the browser: https://www.measurethat.net/Benchmarks/Show/1246/0/sha256
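As a quick way to check whether your CPU actually advertises those special instructions, here is a Linux-only Python sketch (the /proc/cpuinfo path and the x86 "sha_ni" flag name are assumptions about your platform; the function name is mine):

```python
def has_sha_extensions(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the first CPU's flags line lists the x86 'sha_ni' bit."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "sha_ni" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print("SHA extensions available:", has_sha_extensions())
```

If the flag is absent, any SHA-256 library falls back to a plain software implementation, which is typically several times slower per block.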
legendary
Activity: 1316
Merit: 2018
I have a linux machine, x86_64 architecture, 4-core Intel. Thank you!

Alright, thanks. Tried something in Rust.
You can play around with NUM_THREADS - set it to 8 if your CPU supports hyper-threading. You can also increase NUM_HASHES each round to get a more precise result.

Since I used the rust-crypto and rayon crates, you have to add them as dependencies in your Cargo.toml (the package is named rust-crypto, but it is imported as crypto):

Code:
[dependencies]
rust-crypto = "0.2.36"
rayon = "1.8.0"

After that, implement this code:

Code:
extern crate crypto;
extern crate rayon;

use crypto::digest::Digest;
use crypto::sha2::Sha256;
use rayon::prelude::*;
use std::time::Instant;

const NUM_HASHES: usize = 1_000_000; // hashes computed per thread
const NUM_THREADS: usize = 4;        // match your core count

fn compute_hashes() {
    // Hash the same short message over and over, resetting the hasher each round.
    let string = "sha256_maxhash".as_bytes();
    let mut sha = Sha256::new();
    for _ in 0..NUM_HASHES {
        sha.input(string);
        let _ = sha.result_str();
        sha.reset();
    }
}

fn main() {
    rayon::ThreadPoolBuilder::new()
        .num_threads(NUM_THREADS)
        .build_global()
        .unwrap();

    let start = Instant::now();
    (0..NUM_THREADS).into_par_iter().for_each(|_| compute_hashes());
    let duration = start.elapsed();

    let total_hashes = NUM_HASHES * NUM_THREADS;
    let hashes_per_second = total_hashes as f64 / duration.as_secs_f64();
    println!("Time taken: {:?} seconds", duration);
    println!("Hash rate: {:.0} hashes per second", hashes_per_second);
}

Save it and you are done! To run it just execute it via cargo run --release in the same directory.
legendary
Activity: 3472
Merit: 10611
It depends on the message you want to compute the hash for.
The smaller the message, the fewer 64-byte blocks (used internally by SHA256) it needs, hence the faster the hash computes. For example, computing the hash of 10 bytes is faster than of 33 bytes, which is faster than 80 bytes, which is faster than 200 bytes.
It is slower if you want to compute a double SHA256 hash.

In the context of bitcoin we almost always compute double SHA256.
33 bytes is the size of the majority of public keys that are hashed in the most common OP_HASH160 scripts.
80 bytes is the size of the block header that is hashed in mining, and you can easily find hashrate stats for both CPUs and GPUs on the internet, like [1].

[1] https://en.bitcoin.it/wiki/Non-specialized_hardware_comparison
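To make the block-count point concrete: SHA256 pads the message with one marker byte plus an 8-byte length field, rounded up to a multiple of 64 bytes, so the number of compression calls is easy to count. A small Python sketch (the helper name is mine):

```python
def sha256_blocks(msg_len: int) -> int:
    # Padding adds 1 marker byte (0x80) and an 8-byte length field,
    # then the total is rounded up to a multiple of 64 bytes.
    return (msg_len + 1 + 8 + 63) // 64

for n in (10, 33, 80, 200):
    print(f"{n:3d}-byte message -> {sha256_blocks(n)} block(s)")
```

Note that 10 and 33 bytes both fit in a single block, so they cost the same number of compressions; an 80-byte header needs two, and the second pass of a double hash (32-byte input) adds one more.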
newbie
Activity: 5
Merit: 1
I just made a script for this, so OP can verify how fast his PC can generate SHA-256 hashes.

Code:
time for a in $(seq 1 10); do echo "$a" | sha256sum; done

The code is for Linux, and what it does is generate 10 SHA-256 hashes and print the time:

Quote
real   0m0.019s
user   0m0.015s
sys   0m0.008s

Now let's try with 1000 and see the time.

Quote
real   0m1.501s
user   0m1.413s
sys   0m0.570s

And with 10,000

Quote
real   0m16.384s
user   0m14.474s
sys   0m5.943s

And I get these results with this CPU; maybe other users could test and post their results with a better PC:

Code:
*-cpu
          product: Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz
          vendor: Intel Corp.
          physical id: 1
          bus info: cpu@0
          version: 6.94.3
          size: 2660MHz
          capacity: 3200MHz
          width: 64 bits

Thanks for the code!

However, it underestimates the hash rate, because it runs shell commands each time. For every hash, the operating system loads the sha256sum binary into memory, makes the proper system calls, then executes it. A lot of CPU cycles go there instead of into computing the hashes.

I think the proper way is to compute millions of hashes within a C or Rust program (maybe C++ too) and also measure the time within the program itself, using the language's libraries.

That's because running the `time` command-line utility also takes into account the time to load the program into memory, set up the main function, and exit the program, which is a lot of system calls and numerous CPU cycles that are not used for the computation of the hashes, and they do take some milliseconds.
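To illustrate the difference, the same kind of measurement can be done entirely inside one process with Python's standard library (a sketch, not a definitive benchmark; absolute numbers will vary by machine, so none are claimed here):

```python
import hashlib
import time

N = 200_000
msg = b"1\n"  # the same kind of short input the shell loop piped to sha256sum

# All hashing happens inside this one process: no fork/exec per hash,
# and the timer brackets only the loop itself.
start = time.perf_counter()
for _ in range(N):
    hashlib.sha256(msg).hexdigest()
elapsed = time.perf_counter() - start

print(f"{N} hashes in {elapsed:.3f}s -> {N / elapsed:,.0f} hashes/sec")
```

Even in an interpreted language this lands orders of magnitude above the roughly 600 hashes/sec the 10,000-iteration shell loop above implies, which shows how much of that time was process startup rather than hashing.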
newbie
Activity: 5
Merit: 1

Otherwise, could somebody help me write C or Rust code with threads to run the computations on a multicore computer? So I can test it locally.


I might be able to help you with that.

Can you give me some info about the CPU that will be used for this test? I need to know so I can set the maximum number of threads to use.
Otherwise I will just set two variables so you can adjust them while testing: one for the number of threads and one for the number of hashes per thread.
Is Rust fine for you?

I have a linux machine, x86_64 architecture, 4-core Intel. Thank you!
legendary
Activity: 3346
Merit: 3130
I just made a script for this, so OP can verify how fast his PC can generate SHA-256 hashes.

Code:
time for a in $(seq 1 10); do echo "$a" | sha256sum; done

The code is for Linux, and what it does is generate 10 SHA-256 hashes and print the time:

Quote
real   0m0.019s
user   0m0.015s
sys   0m0.008s

Now let's try with 1000 and see the time.

Quote
real   0m1.501s
user   0m1.413s
sys   0m0.570s

And with 10,000

Quote
real   0m16.384s
user   0m14.474s
sys   0m5.943s

And I get these results with this CPU; maybe other users could test and post their results with a better PC:

Code:
*-cpu
          product: Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz
          vendor: Intel Corp.
          physical id: 1
          bus info: cpu@0
          version: 6.94.3
          size: 2660MHz
          capacity: 3200MHz
          width: 64 bits
legendary
Activity: 1316
Merit: 2018

Otherwise, could somebody help me write C or Rust code with threads to run the computations on a multicore computer? So I can test it locally.


I might be able to help you with that.

Can you give me some info about the CPU that will be used for this test? I need to know so I can set the maximum number of threads to use.
Otherwise I will just set two variables so you can adjust them while testing: one for the number of threads and one for the number of hashes per thread.
Is Rust fine for you?
copper member
Activity: 1330
Merit: 899
🖤😏
Maybe this topic could also help you. The StackOverflow link there is from 2012.

Edit: I remember a powerful tool capable of running different hash functions and doing benchmarks. It's called hashcat, from hashcat.net 😉
full member
Activity: 193
Merit: 124
Just digging around:
https://stackoverflow.com/questions/4764026/how-many-sha256-hashes-can-a-modern-computer-compute

There is code there that you can run and post the results from.

I believe, though, that this is not that relevant, as GPUs are WAAAAY better. So it's pointless to run this on a CPU; look for a CUDA implementation.
newbie
Activity: 5
Merit: 1
Is there a reliable source of information, maybe academic work or an independent analysis with statistics, on the average number of SHA-256 hashes a consumer-grade CPU can compute?

Otherwise, could somebody help me write C or Rust code with threads to run the computations on a multicore computer? So I can test it locally.

I would just like a rough estimate of the order of magnitude, such as hundreds of thousands or maybe millions of hashes per second.