Depends on speed. We know sieving runs about 25x faster, but how much of the CPU time was spent sieving and how much on Fermat testing? Let x = CPU sieve time and y = CPU Fermat time. If x = y (50/50 split), the GPU sieve takes 2% of the total, for 52% of normal time, a 92.3% speed increase. If x = 9y (90/10 split), the GPU sieve takes 3.6%, for 13.6% of normal time, a 635% speed increase. If 9x = y (10/90 split), the GPU sieve takes 0.4%, for 90.4% of normal time, a 10.6% speed increase. These are extremely rough figures; communication time between GPU and CPU will also apply. Maybe he'll move Fermat testing onto the GPU too and get the full 25x speed increase. Short answer: too many unknowns right now.
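The three splits above are just Amdahl's law with only the sieve portion sped up 25x; a quick sketch to reproduce the numbers (the splits are the made-up ones from the post, not measurements):

```python
# Amdahl-style estimate: only the sieve portion speeds up 25x.
# s = fraction of CPU time spent sieving (the rest is Fermat testing).
def speed_increase_pct(s: float, sieve_speedup: float = 25.0) -> float:
    new_time = s / sieve_speedup + (1.0 - s)   # fraction of original run time
    return (1.0 / new_time - 1.0) * 100.0      # percent speed increase

for s in (0.5, 0.9, 0.1):
    print(f"{s:.0%} sieve: {speed_increase_pct(s):.1f}% faster")
# 50% sieve: 92.3% faster
# 90% sieve: 635.3% faster
# 10% sieve: 10.6% faster
```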
I did a similar calculation, but then Koooooj@reddit pointed out that I was wrong: with a much more powerful sieve, the CPU has fewer tests to run for a fixed amount of output.
For example (with made-up numbers):
before: 10000 numbers --sieve--> 100 numbers --test--> 1 number
after:  10000 numbers --sieve-->  10 numbers --test--> 1 number
This way, even if we don't speed up the Fermat test itself at all, the testing time drops to 10%.
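Putting both effects together (a 25x faster sieve that also passes 10x fewer survivors to an unchanged Fermat test), a toy cost model shows how the two savings compound. The unit costs here are invented purely for illustration, like the candidate counts above:

```python
# Toy pipeline model with the made-up numbers above.
# Assumed unit costs, purely illustrative:
SIEVE_COST = 1.0    # CPU time to sieve a batch of 10000 candidates
TEST_COST = 0.05    # CPU time per Fermat test on a survivor

before = SIEVE_COST + 100 * TEST_COST      # old sieve: 100 survivors tested
after = SIEVE_COST / 25 + 10 * TEST_COST   # 25x sieve, only 10 survivors

print(f"before: {before:.2f}, after: {after:.2f}, "
      f"{(before / after - 1) * 100:.0f}% faster")
# before: 6.00, after: 0.54, 1011% faster
```

With these particular assumed costs the combined gain dwarfs the sieve-only estimates, which is exactly the point Koooooj was making: the real answer depends entirely on the relative costs, which we don't know.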
But he's comparing apples to oranges. If sieving speed is increased 25x, then you need different parameters to sieve deeper. Also, where do these numbers come from? 10000 seems extremely small, and 100 survivors is 1%, so are we to believe the CPU sieve is 99% effective and the GPU sieve 99.9% effective? There are a lot of questions that need to be asked here:
What is he using for h_0?
What is he using for c?
What is he using for e?
What is the first sieving prime?
What is he using for P_s?
I'm sorry, but the numbers he's tossing out look made up, with no proof to back them up.
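For context, the Fermat test being argued about is a single modular exponentiation per candidate (checking a^(n-1) ≡ 1 mod n); a minimal sketch, using base 2 as an assumed default:

```python
# Minimal Fermat probable-prime test (base a). A composite can still
# pass (a "Fermat liar", e.g. 341 with base 2), which is why miners
# treat it as a cheap filter on sieve survivors, not a proof.
def fermat_test(n: int, a: int = 2) -> bool:
    if n < 2 or n % 2 == 0:
        return n == 2
    return pow(a, n - 1, n) == 1   # built-in modular exponentiation

print(fermat_test(97))   # True  (97 is prime)
print(fermat_test(91))   # False (91 = 7 * 13)
```

The per-candidate cost of this modexp, relative to the sieve cost, is exactly the unknown x/y split the whole thread hinges on.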