- Generate the generator polynomial up to a certain primorial size. I use 200560490130 (31#) or the next primorial as my base and store a vector of all 48923875 entries.
- Sieve *this* out up to the huge primorial in advance.
- Do your operations relative to the huge primorial.
But, as warned - the simple bitvector is still working better for me.
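Something like the following sketch captures the generator construction (the offsets {0,4,6,10,12,16} are the standard sextuplet pattern; the code is illustrative, not anyone's production implementation). Extending one primorial at a time, it reproduces the counts quoted in this thread: 1 residue mod 210, 5 mod 2310, 35 mod 30030, and eventually 48923875 mod 200560490130 (at 8 bytes each, a few hundred MB).

#include <cstdint>
#include <cstdio>
#include <vector>

static const int kOffsets[6] = {0, 4, 6, 10, 12, 16};

int main() {
    // Extend the wheel one prime at a time: keep the residues mod p# for
    // which all six offsets stay coprime to p#. Stops at 13# so the demo
    // runs instantly; push the list further for the bigger primorials.
    const uint64_t primes[] = {2, 3, 5, 7, 11, 13};
    std::vector<uint64_t> residues = {1};
    uint64_t primorial = 1;
    for (uint64_t p : primes) {
        std::vector<uint64_t> next;
        for (uint64_t k = 0; k < p; ++k)
            for (uint64_t r : residues) {
                uint64_t c = r + k * primorial;
                bool ok = true;
                for (int off : kOffsets)
                    if ((c + off) % p == 0) { ok = false; break; }
                if (ok) next.push_back(c);
            }
        primorial *= p;
        residues.swap(next);
        printf("%zu residues mod %llu\n", residues.size(),
               (unsigned long long)primorial);
    }
    return 0;
}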
Cool, that is what I am doing, but looking at your numbers I also pre-sieve the possible p6 chains, reducing my candidate count by ~128 times:
const uint64_t primorial = 7420738134810;
const uint32_t sexcount = 14243984;
Then I run a second scan inline to catch the next two dozen or so primes (this lets me avoid GMP and use simple 64-bit math) before I hit the expensive code. The general idea was to get a list of candidates which could be fed into something else (a GPU was the thought).
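A minimal sketch of what that inline 64-bit scan can look like (the exact prime list and names here are assumptions, not the code from this post):

#include <cstdint>

// Primes just past the pre-sieve's primorial; assumes factors <= 37 are
// already gone and that base is large enough that base+off never equals p.
static const uint32_t kScanPrimes[] = {
    41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89,
    97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151};
static const int kOffsets[6] = {0, 4, 6, 10, 12, 16};

// False if any member of the would-be sextuplet has a small factor;
// only survivors need to reach the expensive primality test.
bool survives_scan(uint64_t base) {
    for (uint32_t p : kScanPrimes) {
        uint64_t r = base % p;          // one 64-bit division per prime
        for (int off : kOffsets)
            if ((r + off) % p == 0)
                return false;
    }
    return true;
}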
It is much faster than the reference implementation, but it is reaching the limit of how fast I can push it.
I have probably made a horrendous error in my algorithm... but coding again was fun...
Regards,
My implementation is a little different from both implementations mentioned above. In fact, the overhead is much less than that of jh00's implementation. My implementation is almost identical to Kim Walisch's primesieve implementation, with a few minor exceptions.
Please see Kim Walisch's description of wheel factorization if you would like to know exactly what I am doing:
http://primesieve.org/
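The core idea, reduced to a toy mod-30 wheel (a sketch of the technique only, not primesieve's actual code): the 8 residues coprime to 30 mean one byte can cover 30 numbers.

#include <cstdint>
#include <cstdio>
#include <vector>

// Residue mod 30 -> bit index for the wheel {1,7,11,13,17,19,23,29},
// or -1 if the residue shares a factor with 30.
static int bit_index(uint64_t n) {
    switch (n % 30) {
        case 1:  return 0; case 7:  return 1; case 11: return 2;
        case 13: return 3; case 17: return 4; case 19: return 5;
        case 23: return 6; case 29: return 7; default: return -1;
    }
}

int main() {
    const uint64_t limit = 1000;
    std::vector<uint8_t> sieve(limit / 30 + 1, 0xFF); // bit i of byte b <-> 30*b + wheel[i]
    sieve[0] &= 0xFE;                                 // 1 is not prime
    for (uint64_t p = 7; p * p <= limit; ++p) {
        int bi = bit_index(p);
        if (bi < 0 || !((sieve[p / 30] >> bi) & 1)) continue; // off-wheel or composite
        for (uint64_t w = p; p * w <= limit; ++w) {
            int mi = bit_index(p * w); // on the wheel only when w is, since p is
            if (mi >= 0) sieve[(p * w) / 30] &= (uint8_t)~(1u << mi);
        }
    }
    printf("2 3 5");                                  // the wheel primes themselves
    for (uint64_t n = 7; n <= limit; ++n) {
        int bi = bit_index(n);
        if (bi >= 0 && ((sieve[n / 30] >> bi) & 1))
            printf(" %llu", (unsigned long long)n);
    }
    printf("\n");
    return 0;
}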
@bsunau7 - mine does the same. I kill any location that fails to produce a six-set. I wonder which of us has a bug? *grin* I'll check my sieving code again. As one way to start comparing, the polynomials for the first few primorials are:
Generator at Pn7 (210)
97
Generator at Pn11 (2310)
97 937 1147 1357 2197
Generator at Pn13 (30030)
97 1357 2407 3457 4717 5557 5767 6817 7867 8077 8287 10177 10597 11647 12907 13747 13957 15007 16057 16267 17107 18367 19417 19837 21727 21937 22147 23197 24247 24457 25297 26557 27607 28657 29917
@Supercomputing - Did you figure out a way to combine wheel factorization with storing a dense bitvector div 2310 (or div 210)? Or do you just allow a large bitvector and handle it through segmentation? I liked the way the jh implementation saved a lot of sieve space that way; a straightforward prime sieve ends up with a 3-4x less dense packing.
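For concreteness, the kind of packing I mean, as a sketch (the names are mine, nothing from either implementation): one bit per residue coprime to 2310, i.e. phi(2310) = 480 bits per block of 2310 integers.

#include <cstdint>
#include <vector>

struct Wheel2310 {
    std::vector<int16_t> index_of;   // residue mod 2310 -> bit index, -1 off-wheel
    std::vector<int16_t> residue_of; // bit index -> residue; 480 entries
    Wheel2310() : index_of(2310, -1) {
        for (int r = 1; r < 2310; ++r)
            if (r % 2 && r % 3 && r % 5 && r % 7 && r % 11) {
                index_of[r] = (int16_t)residue_of.size();
                residue_of.push_back((int16_t)r);
            }
    }
    // Global bit position of n in the packed vector, or -1 if n is off-wheel
    // (off-wheel n have a factor in {2,3,5,7,11} and need no storage at all).
    int64_t bit_of(uint64_t n) const {
        int16_t i = index_of[n % 2310];
        if (i < 0) return -1;
        return (int64_t)(n / 2310) * (int64_t)residue_of.size() + i;
    }
};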
My money is on me being at fault: to save memory (make it fit!) I merge two smaller sieves... I'm working on an alternate method to validate; I suspect my error only shows up in numbers beyond 10 million. For reference, my first few numbers:
$ od -t u8 sextuplet.bin | head
0000000 1 7
0000020 97 1357
0000040 3457 4717
0000060 5767 8077
0000100 10597 12907
0000120 19417 23197
0000140 29917 30127
0000160 32947 34417
0000200 35797 36847
0000220 37897 38107
A quick spot check, picking 10177 as an example... 10177 is prime, but (10177+6) is not prime (it is divisible by 17), so 10177 shouldn't be a valid sextuplet location.
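The same spot check, mechanized as a quick sketch (plain trial division, illustrative only):

#include <cstdint>
#include <cstdio>

int main() {
    const int off[6] = {0, 4, 6, 10, 12, 16};
    const uint64_t base = 10177;                      // the example above
    for (int i = 0; i < 6; ++i) {
        uint64_t n = base + off[i];
        for (uint64_t d = 2; d * d <= n; ++d)         // smallest factor wins
            if (n % d == 0) {
                printf("%llu + %d = %llu is divisible by %llu\n",
                       (unsigned long long)base, off[i],
                       (unsigned long long)n, (unsigned long long)d);
                return 1;
            }
    }
    printf("%llu starts a prime sextuplet\n", (unsigned long long)base);
    return 0;
}

For 10177 this reports the same factor of 17 at offset 6 as the manual check above.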
Regards,
--
bsunau7