Primality tests take longer as the numbers get larger, so the whole search process takes longer. For example, searches with 11k-digit numbers are very slow.
Empirically, in the 100-8000 digit range the time for a BPSW test grows at roughly the 2.5th power of the number of digits, i.e. 2x larger size means 5-6x longer time.
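A minimal way to see that scaling for yourself (a sketch, not the code behind the numbers above; GMP has no public BPSW call, so mpz_probab_prime_p stands in, with a similar per-test cost profile, and the digit sizes and trial counts are arbitrary):

    /* Timing sketch: cost of a probable-prime test vs. number size.
     * Assumptions: mpz_probab_prime_p as a stand-in for BPSW, arbitrary sizes.
     * Build: gcc ppbench.c -lgmp -o ppbench */
    #include <stdio.h>
    #include <time.h>
    #include <gmp.h>

    int main(void) {
        gmp_randstate_t rng;
        gmp_randinit_default(rng);

        mpz_t small;                  /* product of primes <= 1000 */
        mpz_init(small);
        mpz_primorial_ui(small, 1000);

        int sizes[] = { 100, 200, 400, 800, 1600, 3200 };   /* decimal digits */
        for (int i = 0; i < 6; i++) {
            mp_bitcnt_t bits = (mp_bitcnt_t)(sizes[i] * 3.3219);
            mpz_t n, g;
            mpz_inits(n, g, NULL);
            mpz_urandomb(n, rng, bits);
            mpz_setbit(n, bits - 1);  /* force the full size */
            mpz_setbit(n, 0);         /* force it odd        */

            /* Skip candidates with a factor below 1000 so the timed calls
             * run the expensive part of the test instead of exiting early
             * on trial division. */
            do {
                mpz_add_ui(n, n, 2);
                mpz_gcd(g, n, small);
            } while (mpz_cmp_ui(g, 1) != 0);

            int trials = 5;
            clock_t t0 = clock();
            for (int t = 0; t < trials; t++)
                mpz_probab_prime_p(n, 1);    /* one strong-pseudoprime round */
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

            printf("%5d digits: %.4f s per test\n", sizes[i], secs / trials);
            mpz_clears(n, g, NULL);
        }
        mpz_clear(small);
        gmp_randclear(rng);
        return 0;
    }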
The larger size also means a longer range to cover for a given merit, which means more tests: the gap needed for merit M is about M x ln(N), so the number of candidates grows roughly linearly with the number of digits. There is a complicating factor in that the partial sieve has a dynamic depth.
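To put rough numbers on that (my own back-of-envelope figures, with made-up merit and sieve-depth values, not anything measured in the searches above):

    /* Back-of-envelope sketch: the range to cover for merit M around a
     * d-digit number is about M * d * ln(10), and sieving to depth P leaves
     * roughly a fraction exp(-gamma)/ln(P) of the candidates (Mertens).
     * Build: gcc merit.c -lm -o merit */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double merit       = 20.0;          /* example target merit (assumption) */
        const double sieve_depth = 1e8;           /* example sieve depth (assumption)  */
        const double exp_mgamma  = 0.5614594836;  /* exp(-gamma) */

        int digits[] = { 1000, 10000, 100000 };
        for (int i = 0; i < 3; i++) {
            double logn      = digits[i] * log(10.0);   /* ln(N) for d digits */
            double gap       = merit * logn;            /* range to cover     */
            double survivors = gap * exp_mgamma / log(sieve_depth);
            printf("%7d digits: range ~ %9.0f, ~%7.0f candidates survive the sieve\n",
                   digits[i], gap, survivors);
        }
        return 0;
    }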
Usually the tradeoff is that small sizes run faster but are much better covered, hence need high merits to get a record, while large sizes (200k+) are slow but so sparse that almost anything found is a record.
The sweet spot this year (2015 at the time of writing) seems to be the 70-90k range for efficiency of generating records; there are lots of gaps in that range with merit under 10.
A little experiment, looking at the time taken and the number of gaps with merit >= 5.0 found using centers of the form k times the product of the first p primes, with k = 1..10000 excluding multiples of 2, 3, and 5 (a rough sketch of this kind of search follows the results):
p=20: 1.7s 102 found = 60/s (28-30 digits)
p=40: 4.1s 236 found = 58/s (69-71 digits)
p=80: 19.6s 515 found = 26/s (166-169 digits)
p=160: 235s 985 found = 4/s (392-395 digits)
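For reference, here is a stripped-down sketch of this kind of experiment (my reconstruction, not the actual test program): it assumes centers of the form k times the product of the first p primes, which is what the digit counts above suggest, and it finds the surrounding gap by naive prev/next probable-prime stepping with GMP rather than with a partial sieve, so the absolute timings will be worse than the figures listed.

    /* Gap-merit experiment sketch.  Assumptions: center = k * (product of
     * the first p primes); prev/next probable prime found by naive stepping
     * with mpz_probab_prime_p (no sieve), so slower than a real searcher.
     * Build: gcc gapexp.c -lgmp -lm -o gapexp */
    #include <stdio.h>
    #include <time.h>
    #include <math.h>
    #include <gmp.h>

    /* Previous probable prime below n (n assumed > 3). */
    static void prev_prime(mpz_t p, const mpz_t n) {
        mpz_sub_ui(p, n, 1);
        if (mpz_even_p(p)) mpz_sub_ui(p, p, 1);
        while (!mpz_probab_prime_p(p, 25))
            mpz_sub_ui(p, p, 2);
    }

    int main(void) {
        const unsigned long p = 20;          /* number of primes in the primorial */
        const unsigned long kmax = 10000;
        const double target_merit = 5.0;

        mpz_t pth, prim, center, lo, hi, gap;
        mpz_inits(pth, prim, center, lo, hi, gap, NULL);

        /* p-th prime, then the product of the first p primes */
        mpz_set_ui(pth, 1);
        for (unsigned long i = 0; i < p; i++)
            mpz_nextprime(pth, pth);
        mpz_primorial_ui(prim, mpz_get_ui(pth));

        unsigned long found = 0, tested = 0;
        clock_t t0 = clock();
        for (unsigned long k = 1; k <= kmax; k++) {
            if (k % 2 == 0 || k % 3 == 0 || k % 5 == 0)
                continue;                    /* skip multiples of 2, 3, 5 */
            mpz_mul_ui(center, prim, k);
            prev_prime(lo, center);
            mpz_nextprime(hi, center);
            mpz_sub(gap, hi, lo);

            /* merit = gap / ln(start of gap) */
            double logn  = mpz_sizeinbase(lo, 2) * log(2.0);
            double merit = mpz_get_d(gap) / logn;
            if (merit >= target_merit)
                found++;
            tested++;
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("p=%lu: %lu centers, %lu gaps with merit >= %.1f, %.1f s\n",
               p, tested, found, target_merit, secs);

        mpz_clears(pth, prim, center, lo, hi, gap, NULL);
        return 0;
    }

A real searcher would sieve the interval around each center first and only run a BPSW test on the survivors, which should change the constants much more than the overall shape of the time-vs-found tradeoff.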
Interestingly, with this form the number we find with merit >= 5 goes up as p gets larger, but the time taken goes up quite a bit faster. This explains the shape of the graph of current records: high at the small end and dropping off as gap size increases.
It’s certainly possible that a different method of selecting the search points would be more efficient, and it’s also possible to improve the speed of this or other methods compared with doing prev/next prime with my GMP code.
For example, with numbers larger than ~3000 digits, gwnum would be faster than GMP. Gapcoin uses a different method, but it’s not obvious how to make exact efficiency comparisons.
Answered by Dana Jacobsen.