A multimodular algorithm for computing Bernoulli numbers
Abstract.
We describe an algorithm for computing Bernoulli numbers. Using a parallel implementation, we have computed $B_k$ for $k = 10^8$, a new record. Our method is to compute $B_k$ modulo $p$ for many small primes $p$, and then reconstruct $B_k$ via the Chinese Remainder Theorem. The asymptotic time complexity is $O(k^2 \log^{2+\varepsilon} k)$, matching that of existing algorithms that exploit the relationship between $B_k$ and the Riemann zeta function. Our implementation is significantly faster than several existing implementations of the zeta-function method.
1. Introduction
The Bernoulli numbers $B_k$ are rational numbers defined by
$$\frac{x}{e^x - 1} = \sum_{k=0}^{\infty} \frac{B_k}{k!}\, x^k.$$
In this paper we focus on the problem of computing $B_k$, as an exact rational number, for a single large (even) value of $k$. To date, the most efficient algorithms for computing an isolated $B_k$ have been based on the formula
(1)  $B_k = \dfrac{(-1)^{k/2+1}\, 2\, k!\, \zeta(k)}{(2\pi)^k},$
where $\zeta$ is the Riemann zeta function [IR90, p. 231]. One first computes a high-precision approximation to $B_k$ via (1), using the Euler product for $\zeta(k)$. The exact denominator of $B_k$ is known by the von Staudt–Clausen theorem. Provided that the numerical approximation to $B_k$ has enough correct bits (on the order of $k \log_2 k$ bits is enough), we may recover the numerator of $B_k$ exactly. We will refer to this algorithm and its variants as the ‘zeta-function algorithm’.
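To make the procedure concrete, here is a minimal sketch of the zeta-function algorithm in Python, using mpmath for the high-precision step. It is not any of the implementations discussed below; the function name is ours, and the working precision is chosen generously rather than by a sharp bound.

```python
from fractions import Fraction
from math import log10
from mpmath import mp, mpf, zeta, pi, factorial, power

def bernoulli_zeta(k):
    """Recover B_k exactly from a high-precision approximation; k even, k >= 2."""
    assert k >= 2 and k % 2 == 0
    # Exact denominator from the von Staudt-Clausen theorem: the product of
    # all primes p with (p - 1) | k.
    denom = 1
    for p in range(2, k + 2):
        if k % (p - 1) == 0 and all(p % q for q in range(2, int(p**0.5) + 1)):
            denom *= p
    # Generous working precision; roughly k*log2(k) bits, suitably sharpened,
    # would suffice.
    mp.dps = int(k * log10(k)) + 30
    approx = 2 * factorial(k) * zeta(k) / power(2 * pi, k)   # |B_k| via (1)
    numer = int(approx * denom + mpf('0.5'))                 # round to nearest integer
    sign = 1 if k % 4 == 2 else -1
    return Fraction(sign * numer, denom)

print(bernoulli_zeta(12))   # -691/2730
```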
The zeta-function algorithm has been rediscovered numerous times. The formula (1) itself goes back to Euler. Chowla and Hartung [CH72] give an ‘exact formula’ for $B_k$, the evaluation of which is tantamount to executing the algorithm, but using the defining sum for $\zeta(k)$ instead of the Euler product, and using $2(2^k - 1)$ in place of the exact denominator of $B_k$. Fillebrown [Fil92] mentions [CH72] and points out that using the Euler product is asymptotically faster; she gives a detailed time and space complexity analysis, and attributes the algorithm to Herbert Wilf (unpublished). Fee and Plouffe [FP07] state that they discovered and implemented the zeta-function algorithm in 1996. Kellner [Kel02] also refers to [CH72] and suggests using the Euler product. He mentions some data produced by Fee and Plouffe, although not their algorithm. According to Pavlyk [Pav08], the implementation in Mathematica 6 [Wol07] derives from Kellner’s work. Stein [Ste07, p. 30] indicates another independent discovery by Henri Cohen, Karim Belabas, and Bill Daly (unpublished), which led to the implementation in PARI/GP [Par07].
In this paper we present a new algorithm for computing $B_k$ that does not depend on (1) or on any properties of $\zeta$. The basic idea is to compute $B_k$ modulo $p$ for sufficiently many small primes $p$, and then reconstruct $B_k$ using the Chinese Remainder Theorem. Using a parallel implementation, we have computed $B_k$ for $k = 10^8$, improving substantially on the previous record of $k = 10^7$ [Pav08].
Let $M(n)$ denote the time (bit operations) required to multiply two $n$-bit integers. We may assume that $M(n) = O(n \log n \log\log n)$, using for example the Schönhage–Strassen algorithm [SS71]. According to Fillebrown’s analysis [Fil92, p. 441], the zeta-function algorithm for computing $B_k$ requires $O(k)$ multiplications of integers with $O(k \log k)$ bits, so its running time is $O(k\, M(k \log k)) = O(k^2 \log^{2+\varepsilon} k)$. The new algorithm also runs in time $O(k^2 \log^{2+\varepsilon} k)$. However, we will see that over the feasible range of computation the running time behaves more like $k^2 \log k$, since in that range the modular arithmetic operates on single machine words.
The new algorithm is easily parallelisable, as the computations for the various $p$ are independent. Only the final stages of the modular reconstruction are more difficult to execute in parallel, but these account for only a small proportion of the total running time. During the modular computations, each thread requires only $O(\log k)$ space. The zeta-function algorithm may also be parallelised, for instance by mapping Euler factors to threads, but this requires $O(k \log k)$ space per thread.
2. Computing $B_k$ modulo $p$
Throughout this section $p$ denotes a prime, and $k$ an even integer satisfying $2 \le k \le p - 3$. Our algorithm for computing $B_k$ modulo $p$ is based on the following well-known congruence. Let $c$ be an integer, and suppose that $p \nmid c$ and $c^k \not\equiv 1 \pmod{p}$. Then
(2)
where
Equation (2) may be deduced by reading the statement of Theorem 2.3 of [Lan90, §2] modulo $p$ (as is done in the proof of Corollary 2 to that theorem).
Equation (2) may be used to compute $B_k$ modulo $p$ directly, but computing $x^{k-1}$ modulo $p$ separately for each $x$ is unnecessarily expensive. Instead, we select a generator $c$ of the multiplicative group $(\mathbf{Z}/p\mathbf{Z})^*$ and rewrite the sum as
(3)
Note that $c^k \not\equiv 1 \pmod{p}$, since $(p-1) \nmid k$. Pairing each term with the term for its negative (which is possible because $k$ is even), we obtain the half-length sum
(4)
leading to the following algorithm.
Proposition 1.
Let $p$ be a prime, and let $k$ be an even integer in the interval $[2, p-3]$. Algorithm 1 computes $B_k$ modulo $p$ in time $O(p \log^{1+\varepsilon} p)$.
Proof.
At the top of the loop we have and . The value assigned to is . Therefore the value added to is . By (4) the return value is modulo .
We now analyse the complexity. A generator of $(\mathbf{Z}/p\mathbf{Z})^*$ may be found deterministically in time $O(p^{1/4+\varepsilon})$ [Shp96]. In the main loop, the algorithm performs $O(p)$ additions, multiplications and divisions (with remainder) of integers with $O(\log p)$ bits. The computation of $c^k$ requires another $O(\log p)$ such operations. The division by $c^k - 1$ requires an extended GCD of integers with $O(\log p)$ bits. The total cost is $O(p \log^{1+\varepsilon} p)$. ∎
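For concreteness, the following sketch computes $B_k \bmod p$ by a single $O(p)$-term loop. It is not Algorithm 1: it uses the classical Voronoi-type congruence $(c^k - 1)B_k/k \equiv c^{k-1}\sum_{x=1}^{p-1} x^{k-1} \lfloor cx/p \rfloor \pmod{p}$, which belongs to the same family as (2), and unlike Algorithm 1 it spends a modular exponentiation on every term of the sum.

```python
def bernoulli_mod_p(k, p):
    """B_k mod p for a prime p >= 5 and even k with 2 <= k <= p - 3.
    Naive sketch based on the Voronoi-type congruence described above."""
    assert p >= 5 and k % 2 == 0 and 2 <= k <= p - 3
    c = 2
    while pow(c, k, p) == 1:     # need c^k != 1 (mod p); such c < p always exists here
        c += 1
    total, cx = 0, 0
    for x in range(1, p):
        cx += c                                  # cx = c * x, maintained incrementally
        total = (total + pow(x, k - 1, p) * (cx // p)) % p
    num = k * pow(c, k - 1, p) * total % p
    return num * pow(pow(c, k, p) - 1, -1, p) % p

# Example: B_12 = -691/2730, so bernoulli_mod_p(12, 23) == (-691 * pow(2730, -1, 23)) % 23.
```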
3. Computing $B_k$ as a rational number
In this section we present a multimodular algorithm for computing $B_k$ as an exact rational number. Before getting to the algorithm proper, we make some preliminary calculations. In what follows, we assume that $k \ge 2$ and that $k$ is even.
The denominator $D_k := \operatorname{denom}(B_k)$ is given by the von Staudt–Clausen theorem [IR90, p. 233]:
$$D_k = \prod_{(p-1) \mid k} p.$$
Note that $D_k < 2^{k+1}$, since $D_k$ divides $2(2^k - 1)$ [CH72, p. 114].
The size of the numerator may be bounded quite tightly as follows. From (1) and Stirling’s approximation [Lan97, p. 120] we have
Since and , a short calculation shows that where .
For any , let . Our strategy will be to select so that , and then compute modulo . This ensures that we have sufficient information to reconstruct . In the algorithm itself our choice of will be quite sharp, but we also need a rough a priori estimate. We will take . To check that this works, we will use the estimate , a weak form of the prime number theorem, valid for [Nar00, p. 111]. Then we have
so that . Algorithm 2 presents the resulting algorithm; several important details are given in the proof of the following theorem.
Theorem 2.
Algorithm 2 computes $B_k$ in time $O(k^2 \log^{2+\varepsilon} k)$.
Proof.
Computing all primes less than the rough a priori bound requires time $k^{1+o(1)}$, via an elementary sieving algorithm, since that bound is $O(k \log k)$. In what follows we assume that all primality tests and enumeration of primes are performed with the aid of this list.
To compute $D_k$, make a list of the divisors of $k$ (for example, by testing divisibility of $k$ by each integer up to $\sqrt{k}$), and for each divisor $d$ check whether $d+1$ is prime. Multiply together those $d+1$ that are prime, using a product tree [vzGG03, Algorithm 10.3, p. 293]. Since $D_k < 2^{k+1}$, this takes time $O(M(k)\log k) = k^{1+o(1)}$.
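A sketch of this denominator computation follows (helper names are ours; the product tree shown is the balanced pairwise multiplication of [vzGG03] in its simplest form):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def bernoulli_denominator(k):
    """Product of the primes p with (p - 1) | k, computed via a product tree."""
    divisors = set()
    for d in range(1, int(k**0.5) + 1):
        if k % d == 0:
            divisors.update((d, k // d))
    leaves = [d + 1 for d in sorted(divisors) if is_prime(d + 1)]
    while len(leaves) > 1:                       # balanced pairwise multiplication
        pairs = [leaves[i] * leaves[i + 1] for i in range(0, len(leaves) - 1, 2)]
        if len(leaves) % 2:
            pairs.append(leaves[-1])
        leaves = pairs
    return leaves[0]

print(bernoulli_denominator(12))   # 2730
```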
The goal of the while-loop is to quickly find a bound , much tighter than , such that . The variable holds an approximation to the product of the primes encountered so far. It should be represented in floating-point, with at least bits in the mantissa, so that each multiplication by magnifies the relative error by at most . We claim that the loop terminates before exhausting the precomputed list of primes. Indeed, let be the number of such that and . After iterations, approximates with relative error at most ; that is, . This is impossible since the loop termination condition would already have been met earlier. This argument also shows that the selected bound satisfies . Since , computing takes time .
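A simplified variant of this step (accumulating logarithms rather than the floating-point product described above, and with an ad hoc error allowance) might look as follows; `primes` is the precomputed list, and `target_bits` stands for $\log_2(2|N_k|)$ or any upper bound for it:

```python
import math

def choose_prime_bound(primes, target_bits, k):
    """Return the smallest prime X in `primes` such that the product of the
    usable primes up to X certainly exceeds 2**target_bits."""
    acc = 0.0                                   # approximate log2 of the product so far
    for i, p in enumerate(primes):
        if k % (p - 1) == 0:
            continue                            # primes dividing denom(B_k) are not used
        acc += math.log2(p)
        if acc - 1e-9 * (i + 1) > target_bits:  # allow for accumulated rounding error
            return p
    raise ValueError("precomputed prime list is too short")
```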
In the for-loop, for each $p$ we use Algorithm 1 to compute $B_k$ modulo $p$. Note that Algorithm 1 requires that $k \le p - 3$. To satisfy this requirement, we put $k' = k \bmod (p-1)$, compute $B_{k'}$ modulo $p$ using Algorithm 1, and then recover $B_k$ modulo $p$ via Kummer’s congruence $B_k/k \equiv B_{k'}/k' \pmod{p}$ [Lan90, p. 41]. The total number of primes processed is $O(k)$. According to Proposition 1, the cost for each prime is $O(k \log^{2+\varepsilon} k)$, since $p = O(k \log k)$. Therefore the total cost of the for-loop is $O(k^2 \log^{2+\varepsilon} k)$.
We compute the product of these primes and the corresponding residue of the numerator $N_k := B_k D_k$ simultaneously using fast Chinese Remaindering [vzGG03, Theorem 10.25, p. 302]; the cost is $k^{1+o(1)}$.
In the last few lines we recover $B_k$. By construction the product of the primes used exceeds $2|N_k|$, so the residue computed in the previous step determines $N_k$, and hence $B_k = N_k / D_k$. Note that $N_k$ has the same sign as $B_k$. If $k \equiv 2 \pmod{4}$ then $B_k$ is positive, so $N_k$ is simply the residue itself; otherwise $B_k$ is negative, and we must correct the residue by subtracting the product of the primes. ∎
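Putting the pieces together, here is a minimal end-to-end sketch of the multimodular strategy. It is emphatically not the bernmm implementation: the prime bound is handled crudely, the Chinese Remaindering is the naive incremental version rather than the fast product-tree version, and it reuses `bernoulli_mod_p`, `is_prime` and `bernoulli_denominator` from the earlier sketches. It also illustrates the Kummer reduction $k' = k \bmod (p-1)$ discussed above.

```python
from fractions import Fraction
from math import lgamma, log, pi

def next_prime(n):
    n += 1
    while not is_prime(n):            # is_prime from the earlier sketch
        n += 1
    return n

def bernoulli_multimodular(k):
    assert k >= 4 and k % 2 == 0
    D = bernoulli_denominator(k)
    # Crude bound on log2 |N_k|, from N_k = B_k * D_k and |B_k| <= 4 * k! / (2*pi)^k.
    target_bits = (lgamma(k + 1) - k * log(2 * pi) + log(4)) / log(2) + D.bit_length() + 2
    residues, moduli, M, p = [], [], 1, 3      # p = 2, 3 always divide D_k, so start above 3
    while M.bit_length() <= target_bits + 1:   # ensure product of moduli > 2 |N_k|
        p = next_prime(p)
        if k % (p - 1) == 0:
            continue                           # skipped primes (they divide D_k)
        kp = k % (p - 1)                       # Kummer reduction: kp is even, 2 <= kp <= p - 3
        b = bernoulli_mod_p(kp, p) * pow(kp, -1, p) % p    # B_{kp} / kp   (mod p)
        b = b * (k % p) % p                                # B_k = (k / kp) * B_{kp}  (mod p)
        residues.append(b * (D % p) % p)                   # residue of N_k = B_k * D_k
        moduli.append(p)
        M *= p
    # Naive incremental CRT: R = N_k mod M.
    R, Mi = 0, 1
    for r, m in zip(residues, moduli):
        t = (r - R) * pow(Mi, -1, m) % m
        R += Mi * t
        Mi *= m
    N = R if k % 4 == 2 else R - M             # sign of B_k (hence of N_k) is known
    return Fraction(N, D)

print(bernoulli_multimodular(12))   # -691/2730
```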
Remark.
During the modular collection phase, we skipped those $p$ for which $(p-1) \mid k$. An alternative would be to use the fact that $pB_k \equiv -1 \pmod{p}$ for these $p$ [IR90, p. 233], and to apply the multimodular algorithm directly to the integer $B_k + \sum_{(q-1) \mid k} 1/q$ rather than to $N_k$. This would require the additional minor overhead of computing $\sum_{(q-1) \mid k} q^{-1} \bmod p$ for all of the other primes $p$.
4. Implementation techniques for computing $B_k$ modulo $p$
In this section we sketch several techniques that yield a large constant-factor improvement in the speed of computing $B_k$ modulo $p$, for almost all $k$ and $p$. In experiments we have found that these techniques yield an increase in speed by a large constant factor compared to an efficient implementation of Algorithm 1.
We first perform some preparatory algebra. Consider again the congruence (2). Let $n$ denote the multiplicative order of $c$ in $(\mathbf{Z}/p\mathbf{Z})^*$, and put $m = (p-1)/n$. Let $g$ be a generator of $(\mathbf{Z}/p\mathbf{Z})^*$. Then the elements $g^i c^j$, for $0 \le i < m$ and $0 \le j < n$, form a complete set of representatives for $(\mathbf{Z}/p\mathbf{Z})^*$. Rewriting the sum over this set of representatives, we obtain
(5)
Considerations similar to those of §2 allow us to halve the number of terms, as follows. First suppose that is even. Since and , the double sum in (5) becomes
Now suppose that is odd. Then is even, and we have , so that for some . The double sum in (5) becomes
We combine both cases into a single statement by writing
(6)
where
The sum in (6) may be evaluated by an algorithm similar to Algorithm 1, with an outer loop over $i$ and an inner loop over $j$; we omit the details. The inner loop is executed $(p-1)/2$ times in total, the same number of iterations as in the main loop of Algorithm 1. Note that if $c$ is chosen ‘randomly’, then we expect the order $n$ to be quite large, so it is imperative to make the inner loop as efficient as possible.
To this end, we specialise to the case $c = 2$; this will allow us to exploit the fact that multiplication by $2$ modulo $p$ is very cheap. Of course, this specialisation does not work if $2^k \equiv 1 \pmod{p}$. In practice, such primes tend to be rare for any given $k$, and we may fall back on Algorithm 1 to handle them.
Directly from the definition of , we have where
From (6) we obtain
(7)
where $n$ is the order of $2$ in $(\mathbf{Z}/p\mathbf{Z})^*$, and $m$ and $g$ are defined as before.
In the algorithm that evaluates (7), the inner loop is considerably simpler than the main loop of Algorithm 1. Given at the beginning of the loop, we compute the next term by doubling it and subtracting if necessary, and is or according to whether the subtraction occurred. Therefore only a single modular multiplication is required on each iteration, to keep track of .
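A bare-bones illustration of this inner-loop step (variable names are ours, not the paper's):

```python
def doubling_steps(y, p, count):
    """Starting from a residue 0 <= y < p, repeatedly replace y by 2*y mod p,
    recording for each step the bit that says whether 'subtract p' fired."""
    out = []
    for _ in range(count):
        y *= 2
        bit = 1 if y >= p else 0
        y -= bit * p
        out.append((y, bit))
    return out
```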
Next we describe further optimisations for computing a sum of the form
(8)
(For evaluating (7), we take , , .) The following observation is crucial: if , and if the binary expansion of is (where each is 0 or 1), then . Indeed, the binary expansion of is obtained by shifting that of to the left by digits, so
and then
Assume that we have available a routine for computing the binary expansion of , whose output is a sequence of base- digits. Strip off the last terms of (8) and compute them separately; we may then assume that is divisible by . Put , so that (8) becomes
(9)
We may now use a caching strategy as follows. Maintain a table of length , where . Each is a residue modulo , and is initialised to zero. For each , add to the table element , where is determined from the -th base- digit of by
(In practice, is represented by the sequence of bits themselves, and is obtained by a simple bit-mask.) After finishing the loop over , compute the values , where . Then the desired sum is given by . The main benefit of this approach is that in the inner loop we only perform modular multiplications, instead of . The tradeoff is that we must maintain the table , and some extra work is required at the end to assemble the information from the table. For this to be worthwhile, we should have . We should also choose so that the base- digits of can be extracted efficiently. Moreover, memory locality problems can offset the savings in arithmetic if is too large. We have found in practice that works quite well.
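The caching idea can be illustrated generically as follows (assumed notation; this is not a transcription of the scheme above, whose precise indexing is not reproduced here). If every term of a sum factors as $u_i \cdot v(m_i) \bmod p$, where the digit $m_i$ takes only $2^t$ distinct values, the $u_i$ can be accumulated into a table using additions only, leaving one modular multiplication per table entry at the end:

```python
def bucketed_sum(terms, v, t, p):
    """terms: iterable of (u_i, m_i) with 0 <= m_i < 2**t; v(m) is the factor shared
    by all terms whose digit equals m.  Returns sum_i u_i * v(m_i) mod p."""
    table = [0] * (1 << t)
    for u, m in terms:                 # inner loop: additions only
        table[m] += u
    return sum(T * v(m) for m, T in enumerate(table) if T) % p
```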
In the implementation discussed in §5, we made several further optimisations, which we describe only briefly.
For better memory locality (and indeed less total memory consumption) we split (9) into blocks and process each block separately. This avoids needing to store and then later retrieve too many bits of the expansion of $1/p$. Instead of performing a division by $p$ for each term separately, we precompute an approximation to $1/p$, and then multiply it by each value that occurs, taking advantage of the speed of multiplication by a single-word integer compared to the corresponding division.
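The "multiply by a precomputed reciprocal instead of dividing" trick can be sketched as follows (a textbook Barrett-style variant; bernmm's exact formulation, word size and shift may differ):

```python
SHIFT = 64                               # assumed word-size-style shift

def reciprocal(p):
    return (1 << SHIFT) // p             # precomputed once per prime p

def floor_div(x, p, rp):
    """Compute x // p for 0 <= x < 2**SHIFT using the precomputed rp = reciprocal(p)."""
    q = (x * rp) >> SHIFT                # candidate quotient, possibly one too small
    if x - q * p >= p:                   # single correction step
        q += 1
    return q
```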
We use several tables instead of just one. For example, on a 32-bit machine, we use four tables: the highest 8 bits of each word of the expansion contribute to the first table, the next 8 bits to the second table, and so on. This further reduces the number of modular multiplications. When adding values into a table, we skip the reduction modulo $p$, delaying it until the end of the loop. Of course, this is only possible when the word size is large enough compared to $p$; otherwise overflow may occur. Finally, instead of re-initialising the table on each iteration of the outer loop, we simply continue to accumulate data into the same table. This partly mitigates the overhead incurred when the number of outer-loop iterations is unusually large.
Most of the modular arithmetic uses Montgomery’s reduction algorithm [Mon85].
5. Performance data
The author implemented the above algorithms in a C++ package named bernmm (Bernoulli multi-modular). The source code is released under the GNU General Public License (GPL), and is freely available from the author’s web page. It depends on the GMP library [Gra08] for arbitrary precision arithmetic, and on NTL [Sho07] for some of the single-precision modular arithmetic. It is included as a standard component of the Sage computer algebra system (version 3.1.2) [SJ05], accessible via the bernoulli function.
The parallel version distributes the modular computations among the requested number of threads, and depends on the standard pthreads API. For the reconstruction phase, the lowest layers of the subproduct tree are performed in parallel, but the number of threads decreases towards the top of the tree, since the extended GCD routine is not threaded.
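In Python one could mimic this distribution of work with a process pool (a toy sketch, not the pthreads implementation; `bernoulli_mod_p` is the function from the earlier sketch and must be importable by the workers, and every prime supplied must satisfy $(p-1) \nmid k$):

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def _residue(k, p):
    kp = k % (p - 1)                    # Kummer reduction, as in Section 3
    return bernoulli_mod_p(kp, p) * (k % p) * pow(kp, -1, p) % p

def residues_parallel(k, primes, workers=4):
    """Compute B_k mod p for each p in `primes` using `workers` processes."""
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return dict(zip(primes, ex.map(partial(_residue, k), primes)))
```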
Table 1 shows timing data for a range of values of $k$, for bernmm and several existing implementations of the zeta-function algorithm. The timings were obtained on a 16-core 2.6GHz AMD Opteron (64-bit) machine with 96 GB RAM, running Ubuntu Linux. The last two rows of the table improve on the prior record set by Pavlyk [Pav08]. The corresponding values of $B_k$ may be downloaded from the author’s web page.
The compiler used was gcc 4.1.3. We used the version of GMP shipped with Sage 3.1.2, which is the same as GMP 4.2.2, but also includes assembly code written by Pierrick Gaudry that improves performance on the Opteron, and code by Niels Möller that implements a quasi-linear time extended GCD algorithm [Möl08]. We used NTL version 5.4.1, also included in Sage.
The timings for PARI/GP were obtained for version 2.3.3, using the same patched version of GMP. We used the bernfrac function from the GP interpreter. The timings for calcbn, a specialised program for computing Bernoulli numbers, were obtained from calcbn 2.0 [Kel08], again using the same patched GMP. The timings for Mathematica were obtained from the Linux x86 (64-bit) version of Mathematica 6.0, using the BernoulliB function. Neither PARI/GP nor Mathematica supply a parallel implementation as far as we are aware.
The peak memory usage for our largest computation was approximately 5 GB; this occurred during the final extended GCD operation. By comparison, PARI/GP used 4.3 GB, and calcbn used 1.7 GB (with ten threads), on the largest inputs for which we ran them.
For each implementation, we observe that the ratio of the times in adjacent rows of the table is approximately 10, reflecting the predicted quasi-quadratic asymptotic running time of both the zeta-function algorithm and the multimodular algorithm.
We checked our results by two methods. First, we checked that, as a real number, the proposed value for $B_k$ agrees with the numerical estimate furnished by (1). Since we computed $B_k$ from modular information alone, this is a very strong consistency check. Second, we reduced the proposed value for $B_k$ modulo a few primes distinct from those used to compute $B_k$, and compared the results against the output of Algorithm 1. Again, this is a very strong consistency check (stronger in fact, since it involves all of the bits of the proposed $B_k$).
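The second check amounts to something like the following (again reusing `bernoulli_mod_p` from the earlier sketch; the fresh prime $q$ is assumed to satisfy $(q-1) \nmid k$ and to be distinct from the primes used in the reconstruction):

```python
def check_mod_q(bk, k, q):
    """bk: the proposed Fraction value of B_k.  Compare bk mod q against an
    independent modular computation."""
    lhs = bk.numerator * pow(bk.denominator, -1, q) % q
    kq = k % (q - 1)
    rhs = bernoulli_mod_p(kq, q) * (k % q) * pow(kq, -1, q) % q
    return lhs == rhs
```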
Mathematica, PARI/GP and calcbn use GMP internally, so naturally any improvement in GMP’s large integer arithmetic will improve the timings reported in the table. In this connection we mention the improvements in multiplication speed reported by [GKZ07]; their paper also gives comparisons with other implementations of large-integer arithmetic.
| $k$ | threads | PARI CPU time | MMA CPU time | calcbn CPU time | calcbn wall time | bernmm CPU time | bernmm wall time |
|---|---|---|---|---|---|---|---|
| | 1 | 0.07 | 0.18 | 0.04 | | 0.25 | |
| | 1 | 0.68 | 1.80 | 0.49 | | 1.02 | |
| | 1 | 7.45 | 21.7 | 5.72 | | 5.46 | |
| | 1 | 88.3 | 285 | 82.2 | | 38.6 | |
| | 2 | | | 82.3 | 41.6 | 38.6 | 20.4 |
| | 1 | 1058 | 3434 | 935 | | 340 | |
| | 4 | | | 967 | 246 | 341 | 96.1 |
| | 1 | | | | | 3397 | |
| | 4 | | | 3385 | | 3406 | 909 |
| | 1 | | | | | | |
| | 10 | | | 3938 | | | |
| | 10 | | | | | | |
| | 10 | | | | | | |
Acknowledgments
Many thanks to Joe Buhler, Bernd Kellner, and Andrew Sutherland for their comments on a draft of this paper, and to the Department of Mathematics at Harvard University for providing the hardware on which the computations were performed.
References
- [CH72] S. Chowla and P. Hartung, An “exact” formula for the $m$-th Bernoulli number, Acta Arith. 22 (1972), 113–115.
- [Fil92] Sandra Fillebrown, Faster computation of Bernoulli numbers, J. Algorithms 13 (1992), no. 3, 431–445.
- [FP07] Greg Fee and Simon Plouffe, An efficient algorithm for the computation of Bernoulli numbers, 2007, preprint arXiv:math.NT/0702300v2.
- [GKZ07] Pierrick Gaudry, Alexander Kruppa, and Paul Zimmermann, A GMP-based Implementation of Schönhage–Strassen’s Large Integer Multiplication Algorithm, ISSAC (Dongming Wang, ed.), ACM, 2007, pp. 167–174.
- [Gra08] Torbjörn Granlund, The GNU Multiple Precision Arithmetic library, 2008, http://gmplib.org/.
- [IR90] Kenneth Ireland and Michael Rosen, A classical introduction to modern number theory, 2 ed., Graduate Texts in Mathematics, vol. 84, Springer-Verlag, 1990.
- [Kel02] Bernd C. Kellner, Über irreguläre Paare höherer Ordnungen, Diplomarbeit, 2002.
- [Kel08] by same author, calcbn, 2008, http://www.bernoulli.org/.
- [Lan90] Serge Lang, Cyclotomic fields I and II, second ed., Graduate Texts in Mathematics, vol. 121, Springer-Verlag, New York, 1990.
- [Lan97] by same author, Undergraduate analysis, second ed., Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1997.
- [Möl08] Niels Möller, On Schönhage’s algorithm and subquadratic integer GCD computation, Math. Comp. 77 (2008), no. 261, 589–607 (electronic).
- [Mon85] Peter L. Montgomery, Modular multiplication without trial division, Math. Comp. 44 (1985), no. 170, 519–521.
- [Nar00] Władysław Narkiewicz, The development of prime number theory, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2000, From Euclid to Hardy and Littlewood.
- [Par07] The PARI Group, Bordeaux, PARI/GP, version 2.3.3, 2007, available from http://pari.math.u-bordeaux.fr/.
- [Pav08] Oleksandr Pavlyk, Today We Broke the Bernoulli Record: From the Analytical Engine to Mathematica, 2008, posted on Wolfram Blog (http://blog.wolfram.com/) on 29th April 2008, retrieved 15th June 2008.
- [Sho07] Victor Shoup, NTL: A library for doing number theory, http://www.shoup.net/ntl/, 2007.
- [Shp96] Igor Shparlinski, On finding primitive roots in finite fields, Theoret. Comput. Sci. 157 (1996), no. 2, 273–275.
- [SJ05] William Stein and David Joyner, Sage: System for algebra and geometry experimentation, Communications in Computer Algebra (ACM SIGSAM Bulletin) 39 (2005), no. 2, 61–64.
- [SS71] A. Schönhage and V. Strassen, Schnelle Multiplikation grosser Zahlen, Computing (Arch. Elektron. Rechnen) 7 (1971), 281–292.
- [Ste07] William Stein, Modular forms, a computational approach, Graduate Studies in Mathematics, vol. 79, American Mathematical Society, Providence, RI, 2007, With an appendix by Paul E. Gunnells.
- [vzGG03] Joachim von zur Gathen and Jürgen Gerhard, Modern computer algebra, second ed., Cambridge University Press, Cambridge, 2003.
- [Wol07] Wolfram Research, Inc., Mathematica 6.0, 2007, http://www.wolfram.com/.