
SZ Sequences: Binary-Based $(0,2^{q})$-Sequences

Abdalla G. M. Ahmed (0000-0002-2348-6897), Shenzhen University, China, abdalla_gafar@hotmail.com; Matt Pharr (0000-0002-0566-8291), NVIDIA, USA, matt@pharr.org; Victor Ostromoukhov, Univ Lyon 1, CNRS, INSA Lyon, France, victor.ostromoukhov@liris.cnrs.fr; and Hui Huang, Shenzhen University, China
Abstract.

Low-discrepancy sequences have seen widespread adoption in computer graphics thanks to the superior rates of convergence that they provide. Because rendering integrals are often composed of products of lower-dimensional integrals, recent work has focused on developing sequences that are also well-distributed in lower-dimensional projections. To this end, we introduce a novel construction of binary-based $(0,4)$-sequences, that is, progressive fully multi-stratified sequences of 4D points, and extend the idea to higher power-of-two dimensions. We further show that not only is it possible to nest lower-dimensional sequences in higher-dimensional ones (for example, embedding a $(0,2)$-sequence within our $(0,4)$-sequence), but that we can ensemble two $(0,2)$-sequences into a $(0,4)$-sequence, four $(0,4)$-sequences into a $(0,16)$-sequence, and so on. Such sequences can provide excellent rates of convergence when integrals include lower-dimensional integration problems in 2, 4, 16, … dimensions. Our construction is based on using $2\times2$ block matrices as symbols to construct larger matrices that potentially generate a sequence with the target $(0,s)$-sequence in base $s$ property. We describe how to search for suitable alphabets and identify two distinct, cross-related alphabets of block symbols, which we call $s$ and $z$, hence SZ for the resulting family of sequences. Given the alphabets, we construct candidate generator matrices and search for valid sets of matrices. We then infer a simple recurrence formula to construct full-resolution (64-bit) matrices. Because our generator matrices are binary, they allow highly efficient implementation using bitwise operations and can be used as a drop-in replacement for Sobol matrices in existing applications. We compare SZ sequences to state-of-the-art low-discrepancy sequences, and demonstrate mean relative squared error improvements of up to $1.93\times$ in common rendering applications.

submissionid: 1042; copyright: acmcopyright; journal: TOG; journalyear: 2025; journalvolume: 45; journalnumber: 6; article: 1; publicationmonth: 12
Figure 1. (a) Visualization of all pairwise projections of the first 16 dimensions of the Sobol sequence. The binary Sobol generator matrices are shown along the diagonal. Underneath them, each square shows the first 256 points of the 2D projection using the pair of matrices in its row and column. Above the matrices are power spectra of the projection using the pair of matrices in the corresponding row and column, averaged over 1 million Owen-scrambled realizations. (b) The corresponding visualization for our SZ points. SZ points are well-distributed over all 2D projections; here we see that all sets of 256 points satisfy $16\times16$ stratification, unlike Sobol points. The power spectra of their projections are much more regular and smoother at higher frequencies. Further, unlike Sobol, where only the first two dimensions are a $(0,2)$-sequence (red background), with SZ points each successive pair of matrices yields a $(0,2)$-sequence. Further, each consecutive set of 4 matrices generates 4D points that are $(0,4)$-sequences (green background), which means that they not only have all combinations of $4\times64$ stratifications in 2D, but also all combinations of $4\times4\times16$ stratifications for all 4 triplets in 3D, as illustrated in the exploded cube for the first three dimensions, and then there is a full $4\times4\times4\times4$ stratification in 4D. Finally, all these 16 SZ matrices together give a $(0,16)$-sequence. (c, d) These properties lead to superior results in common rendering applications: the insets on the left are rendered using Sobol points and exhibit unconverged Sobol's characteristic checkerboard pattern, while the insets on the right, using SZ points, do not suffer from this artifact, and have lower numeric error as well. For this scene, overall mean relative squared error is $1.93\times$ lower at 64 samples per pixel with SZ points.

1. Introduction

Sampling is a fundamental process in computer graphics (CG), underlying multiple areas including halftoning and stippling, geometry processing, machine learning, and most notably rendering, where pixels are computed by Monte-Carlo integration of complex light-transport paths. Even though these paths are inherently high-dimensional, they are mostly built from 2D surface interactions, in addition to some 1D constituents; from here emerged the idea of similarly building the high-dimensional samples by pairing 2D and 1D samples over these constituents (Cook et al., 1984; Glassner, 1994). The research focus therefore went to optimizing 2D distributions, and it was long believed that optimizing over the high-dimensional space is "at best a secondary concern" (Pharr and Humphreys, 2004). Even when the inherently high-dimensional low-discrepancy (LD) sequences were brought to CG, generally only the first two dimensions were used and similarly "padded" to build high-dimensional samples (Kollig and Keller, 2002).

While LD sequences could be used to sample the image plane directly (Mitchell, 1992), doing so with parallel rendering based on screen-space decomposition was challenging since consecutive image samples end up at far-away pixels. A milestone came with Grünschloß et al. (2012) showing how to invert LD sequences, making it possible to determine which sample in the sequence corresponds to a given sample index in a pixel. This inversion greatly facilitated global LD sampling, where a single high-dimensional LD sequence is used to generate all samples for all pixels in all dimensions. Despite noticeable artifacts before convergence, the undeniably superior performance of this sampling strategy clearly indicated that high-dimensional uniformity, in fact, does matter. It is worth noting that the employed Sobol sequences are already optimized for improved 2D projections by Joe and Kuo (2008), hence it is actually a combination of two- and high-dimensional uniformity that accounts for this success. Soon after, pbrt and Mitsuba, the common research rendering engines, both adopted global Sobol sampling as their defaults, and most subsequent sampling research in CG was devoted to LD constructions, with a primary goal of imposing more control on these distributions, especially their lower-dimensional projections.

In this paper, we make a major advancement in this research direction of controlling LD distributions. Rather than adapting one of the few known sequences, we present a novel construction of a complete family of LD sequences, built entirely from first principles. We use binary matrices to construct $(0,4)$-sequences in base 4, as defined in Section 2 below. We then show how to extend this sequence to additional dimensions, achieving $(0,2)$-sequences in each pair of dimensions, $(0,4)$-sequences in each quad, $(0,16)$-sequences in each set of 16 dimensions, and so on. Our sequences thus give superior integration performance in many cases where lower-dimensional projections are important. Because our generator matrices are binary, they can be used as a drop-in replacement in systems that currently use Sobol samples; no algorithmic changes are necessary, and performance is unaffected by the substitution.

2. Technical Background and Related Work

Our work belongs to the research area on uniform distribution of points (Kuipers and Niederreiter, 1974), aiming at obtaining point distributions that uniformly cover a domain better than random (white noise) distributions. The $s$-dimensional sampled domain is usually scaled to a half-open unit hypercube $(0,1]^{s}$. In this section, we furnish a basic technical background needed to understand our proposed method, and briefly review the most closely-related work.

2.1. Discrepancy

The term “discrepancy” refers to the error of using a point set to estimate the volume of a region by counting the ratio of points falling inside. Star discrepancy, commonly notated $D^{*}$, refers to the maximum discrepancy measured over all axis-aligned hyper-rectangles in the concerned domain, and gives a reliable measure of the performance of the point set in numerical integration. A point set is considered a low-discrepancy (LD) set if it attains $\mathcal{O}\left(\log^{s-1}(N)/N\right)$ discrepancy for $N$ points in $s$ dimensions. A low-discrepancy sequence is an infinite sequence of points that provides an LD set for certain contiguous blocks of points of certain sizes, and attains $\mathcal{O}\left(\log^{s}(N)/N\right)$ discrepancy for all contiguous blocks.

2.2. Radix-Based Constructions

A significant construction of a one-dimensional LD sequence was presented by van der Corput (1935). If

(1) $\ldots v_{3}v_{2}v_{1}=\sum_{i\geq 1}v_{i}2^{i-1}$

denotes the binary encoding of the sample index $v$, then the $v$th sample is computed by mirroring the bits of $v$ at the fraction point:

(2) $x_{v}=\sum_{i\geq 1}v_{i}2^{-i}\,.$
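The bit-mirroring of Eq. (2) is compact in code; the following is a minimal illustrative sketch (ours, not part of the paper):

```python
def van_der_corput(v: int, bits: int = 32) -> float:
    """Mirror the bits of the sample index v at the binary point (Eq. 2)."""
    x = 0
    for i in range(bits):
        if (v >> i) & 1:              # index digit v_{i+1} ...
            x |= 1 << (bits - 1 - i)  # ... becomes fractional digit 2^{-(i+1)}
    return x / (1 << bits)

# The first few samples are 0, 1/2, 1/4, 3/4, 1/8, ...
```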

This approach can be generalized to higher dimensions by generating the $k$th component/dimension of the sample using a distinct linear mapping

(3) $X^{(k)}_{v}=C^{(k)}V\,,$

where $V$ is a vector representing the $v$th digit-reversed van-der-Corput-like sample in some base $b$, $X^{(k)}_{v}$ is a vector representing the same-base fractional digits of the final sample location along the $k$th axis, and $C^{(k)}$ is a matrix over the Galois Field GF($b$), known as a generator matrix, that linearly maps reversed digits of the sample index to digits of the computed sample, hence the name “digital” to collectively describe these methods. Next we briefly outline the three most established digital construction approaches to LD sequences.
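For intuition, in base 2 this matrix-vector product collapses to a few XORs. The sketch below (our illustration; columns are assumed stored as machine words, with the most significant bit holding the first matrix row) mirrors how Sobol-style samplers are commonly implemented:

```python
def digital_sample(C, v, bits=32):
    """Eq. (3) in base 2: XOR together the columns of the generator
    matrix C selected by the set bits of the sample index v.  Each
    column is a `bits`-bit integer whose MSB is the first matrix row,
    so the XOR result reads directly as a binary fraction."""
    x = 0
    col = 0
    while v:
        if v & 1:
            x ^= C[col]
        v >>= 1
        col += 1
    return x / (1 << bits)

# With the identity matrix, this reduces to the van der Corput sequence.
identity = [1 << (31 - i) for i in range(32)]
```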

The first known higher-dimensional extension to van der Corput sequences is due to Halton (1960), and simply uses a different prime base for encoding the index $v$ along each axis. Identity matrices may be used for $C^{(k)}$ in Eq. (3), but more complex scrambling matrices have been proposed to improve the pairwise 2D projections.

A very different approach was taken by Sobol (1967), who stuck to a binary-base encoding, and constructed a different generator matrix for each dimension, using a recurrence operation to compute each column of the matrix from the preceding columns, taking the coefficients from an irreducible polynomial. These polynomials, in a sense, play the role that the different bases play in Halton's construction. We skip the details and only highlight the most relevant point here: the first two irreducible polynomials lead to the pair

(4) $(I,P)$: the $32\times32$ identity matrix $I$ and the $32\times32$ binary Pascal matrix $P$ (rendered in the paper as dot arrays)

that constitutes the first two, invariant, and most well-known dimensions of Sobol’s construction, where

(5) $P=\left(p_{i,j}:\binom{j}{i}\bmod 2\right)$

happens to be the Pascal matrix in GF(2), where $i$/$j$ are row/column indices, indexed from 0. Please note that we hereinafter adopt the convention of showing binary matrices as arrays of gray (0) and black (1) dots for better visibility, and also occasionally use dots instead of zeros for the same purpose.
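Since, by Lucas' theorem, $\binom{j}{i}$ is odd exactly when the set bits of $i$ are a subset of those of $j$, the binary Pascal matrix of Eq. (5) can be tabulated without computing any binomials; a small sketch of ours:

```python
def pascal_gf2(n: int):
    """Binary Pascal matrix of Eq. (5): P[i][j] = C(j, i) mod 2.
    By Lucas' theorem, C(j, i) is odd iff (i AND j) == i."""
    return [[1 if (i & j) == i else 0 for j in range(n)] for i in range(n)]

# The matrix is upper-triangular, with the familiar Sierpinski pattern.
```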

These first two dimensions of Sobol inspired a third construction by Faure (1982) that uses powers of the Pascal matrix

(6) $C^{(k)}=\left(c^{(k)}_{i,j}:\binom{j}{i}k^{j-i}\bmod b\right)\,,$

in a prime base $b$, and generates a $b$-dimensional LD sequence analogous to the 2D Sobol sequence, in a sense explained in the following subsection. Niederreiter (1987) extended the Faure construction to power-of-prime bases $b^{q}$ using symbolic matrices and arithmetic over GF($b^{q}$).

Thus, we have three well-established construction approaches to high-dimensional LD sequences:

  1. Halton, using different prime bases.

  2. Sobol and variants, using binary-polynomial-based matrices.

  3. Faure and Niederreiter's extension, using Pascal matrices over fields.

Among the three, two factors made Sobol the most adopted one in computer graphics: (a) the binary base, leading to extremely fast computation, and (b) that higher distribution quality is obtained for lower dimensions, as will be explained in the following subsection. Halton comes next, sharing the second but not the first property of Sobol. Because the prime bases are known at compile time, the cost of integer divisions and modulus operations can be reduced (Warren, 2012), bringing the cost of Halton's digit reversal within tolerable limits; this has led to Halton sequences being used in common rendering platforms like pbrt (Pharr et al., 2016).

In contrast, we are unaware of any mention of Faure sequences and their derivatives in CG, and we may attribute this primarily to the large base used for all dimensions, requiring a considerably large number of points to attain the advertised uniformity. The lookup-based arithmetic of Niederreiter's extension exacerbates this by adding considerable computational overhead.

While our work is developed from first principles, the resulting construction may be considered an extension of Faure's, specifically Niederreiter's extension, that addresses both limitations: we ensemble higher-dimensional sequences from lower-dimensional ones, and use GF(2) matrices like Sobol's. Thus, our construction effectively introduces the third category of LD sequences to CG.

2.3. $(t,m,s)$-Nets and $(t,s)$-Sequences

Niederreiter (1987) also established a complete theoretical framework for developing and studying a large class of LD constructions, including the radix-based ones listed above. At the heart of this theory are so-called $(t,m,s)$-nets in base $b$, which refer to a multi-stratified distribution of $b^{m}$ points in an $s$-dimensional hypercube such that, for all possible ways of stratifying (slicing) the domain into $b^{m-t}$ similar rectangular strata (cells), we find exactly $b^{t}$ points in each stratum. An illustration of the essence of a $(0,4,4)$-net in base 4, for example, is shown in Fig. 1. Then come $(t,s)$-sequences in base $b$, which are infinite sequences of $s$-dimensional points such that, for all integers $m$, the first and all subsequent blocks of $b^{m}$ points comprise a $(t,m,s)$-net in base $b$. While these definitions are independent of the construction method, the model possibly owes its existence to the availability of algebraic recipes for construction. Specifically, the definition of $(t,m,s)$-nets translates directly into a simple condition on digital constructions: if we build a hybrid $m\times m$ matrix

(7) $H_{m_{1},m_{2},\ldots,m_{s}}=\left(\begin{array}{ccc}c^{(1)}_{0,0}&\cdots&c^{(1)}_{0,m-1}\\ \vdots&&\vdots\\ c^{(1)}_{m_{1}-1,0}&\cdots&c^{(1)}_{m_{1}-1,m-1}\\ \hline \vdots&&\vdots\\ \hline c^{(s)}_{0,0}&\cdots&c^{(s)}_{0,m-1}\\ \vdots&&\vdots\\ c^{(s)}_{m_{s}-1,0}&\cdots&c^{(s)}_{m_{s}-1,m-1}\end{array}\right)\,,$

by combining the first $m_{k}$ rows of the $k$th generator matrix in Eq. (3), with the $m_{k}$ summing to $m$, then each such hybrid matrix, multiplied by $V$, computes the allocation of the points to strata in a specific stratification. The $(0,m,s)$-net condition, for example, then corresponds to requiring that each hybrid matrix is invertible, ensuring a one-to-one correspondence of points to strata. A $(0,s)$-sequence requires a set of generator matrices that maintains this hybrid-matrix condition progressively over their top-left corners.
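The hybrid-matrix condition is easy to mechanize; the sketch below (ours, with matrix rows stored as bit masks) checks the $(0,m,2)$-net condition for the identity/Pascal pair of Eq. (4) over every split $m_{1}+m_{2}=m$:

```python
W = 8  # matrix resolution for this small demo

def gf2_invertible(rows):
    """Gaussian elimination over GF(2); rows are n-bit masks, with
    column c of an n x n matrix living at bit position n-1-c."""
    rows, n, rank = list(rows), len(rows), 0
    for c in range(n):
        piv = next((r for r in range(rank, n) if (rows[r] >> (n - 1 - c)) & 1), None)
        if piv is None:
            return False
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(n):
            if r != rank and (rows[r] >> (n - 1 - c)) & 1:
                rows[r] ^= rows[rank]
        rank += 1
    return True

# Generator matrices as row masks (first column = most significant bit).
I = [1 << (W - 1 - i) for i in range(W)]
P = [sum(((i & j) == i) << (W - 1 - j) for j in range(W)) for i in range(W)]

def is_0m2_net(C1, C2, m):
    """Eq. (7) for s = 2: every hybrid of the top m1 rows of C1 and the
    top m - m1 rows of C2 (top-left m x m corner) must be invertible."""
    return all(gf2_invertible([r >> (W - m) for r in C1[:m1] + C2[:m - m1]])
               for m1 in range(m + 1))
```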

These definitions suggest that it is desirable to keep both the base $b$ and the parameter $t$ small, though different needs might call for different configurations. To give some examples, the van der Corput construction is a $(0,1)$-sequence in base 2. Faure sequences are $(0,b)$-sequences in their respective prime bases, while Niederreiter's extension to Faure creates $(0,b^{q})$-sequences in power-of-prime bases $b^{q}$. The first two dimensions of the Sobol sequence comprise a $(0,2)$-sequence in base 2, but the $t$ value increases as we take more dimensions, which makes lower dimensions attain better quality, as we mentioned in Sec. 2.2. Halton sequences, in contrast, do not readily fit within the $(t,s)$-sequence model, unless we adapt it to accept different bases. They still exhibit a multi-stratified structure, though. Hence, the common ground of all three families of constructions is that they attain optimal quality at powers of the base, or products of bases in the case of Halton, and here we can see the reason for the superiority of Sobol and Halton over Faure for graphics applications, where lower dimensions are usually more important.

In this paper we develop a novel construction of $(0,2^{q})$-sequences using binary GF(2) matrices. Technically speaking, we are breaking two traditional rules of nets and sequences: that a composite number cannot directly be used as a base, and that one cannot obtain a $(0,s)$-sequence with a base $b<s$. We achieve this result, however, by combining the two violations: instead of using base 4 directly, for example, we use $2\times2$ binary matrices as building blocks to emulate base-4 matrices. Please note that $b=s$ is implied in the following if no base is mentioned.

2.4. Scrambling and Shuffling of LD Constructions

Beyond the algebraic recipes for constructing LD sets and sequences, the design space is greatly enriched by scrambling techniques that can derive new valid nets/sequences from given ones. Tezuka (1994) showed that multiplying the generator matrices from the left by arbitrary invertible lower-triangular matrices produces valid generators, since doing so preserves the ranks of all hybrid matrices. Owen (1995), in contrast, introduced a post-processing technique, “Owen scrambling”, that preserves the net/sequence structure. In contrast to scrambling, which applies to sample locations, the term shuffling refers to reordering (permuting) the sample indices (Faure and Tezuka, 2002). As $(0,s)$-sequences, our sequences admit all of these techniques, and we will use them in our development.
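As a small illustration of Tezuka's observation, left-multiplying a generator matrix by a random unit-lower-triangular GF(2) matrix yields another valid generator; a sketch of ours, with matrices stored as row bit masks:

```python
import random

W = 8

def gf2_matmul(A, B):
    """Product of W x W GF(2) matrices stored as row bit-masks."""
    Bt = [sum(((B[r] >> (W - 1 - c)) & 1) << (W - 1 - r) for r in range(W))
          for c in range(W)]  # columns of B, for bitwise dot products
    return [sum((bin(A[r] & Bt[c]).count("1") & 1) << (W - 1 - c)
                for c in range(W)) for r in range(W)]

def random_unit_lower_triangular(rng):
    """Invertible scrambling matrix: ones on the diagonal, random below."""
    return [(1 << (W - 1 - r)) | ((rng.getrandbits(r) << (W - r)) if r else 0)
            for r in range(W)]

# Identity and Pascal matrices, then a Tezuka-scrambled variant of P.
I = [1 << (W - 1 - i) for i in range(W)]
P = [sum(((i & j) == i) << (W - 1 - j) for j in range(W)) for i in range(W)]
L = random_unit_lower_triangular(random.Random(1))
scrambledP = gf2_matmul(L, P)  # still a valid generator, per Tezuka (1994)
```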

2.5. LD Sampling in Computer Graphics

The interest in LD constructions has grown in the graphics community over the past decade, mostly devoted to finding handles to impose control over these distributions, as commonly needed in graphics applications. Ahmed et al. (2016) presented an original 2D low-discrepancy construction that enables imposing a blue-noise spectrum; Perrier et al. (2018) then imposed a blue-noise spectrum onto selected pairs of dimensions of a Sobol sequence. While both methods compromised the discrepancy of the underlying distribution, they presented a proof of concept that encouraged further research.

In a different path, Christensen et al. (2018) searched for dyadic (i.e., base-2) $(0,2)$-sequences with good spatial and spectral properties, and Pharr (2019) presented a more efficient search, but Ahmed and Wonka (2021, Theorem 4.2) then presented a theorem implying that the search space is actually confined to Owen-scrambled 2D Sobol, and soon the first group of authors, led by Helmer (2021), developed an extremely fast implementation of Owen scrambling. We take this as an excellent example of the fast-paced learning cycles of the community in analyzing and manipulating the inherently difficult-to-understand LD constructions. In this paper we present an almost complete cycle from experimentation to elements of a theory.

In more recent years, the research on LD in CG reached the state of enriching the quasi-Monte-Carlo (QMC) literature with significant findings and novel constructions, including an efficient algorithm that spans the whole space of dyadic $(0,m,2)$-nets (Ahmed and Wonka, 2021), a cascade of dyadic $(0,m,2)$-nets over successive pairs of dimensions (Paulin et al., 2021), a family of self-similar dyadic $(0,2)$-sequences (Ahmed et al., 2023), a gradient-descent optimization of discrete Owen scrambling (Doignies et al., 2024), and a base-3 analog to Sobol sequences (Ostromoukhov et al., 2024). Our current work is partially inspired by this last work of Ostromoukhov et al., where we thought of using base 4 instead to gain the advantage of binary computation.

3. Exploration

In this section we report our pilot search for binary matrices that generate $(0,2^{q})$-sequences, starting from experimental exploration and gradually developing an analytical framework distilled from empirical findings. Starting with four dimensions, our initial goal is to construct a $(0,4)$-sequence by searching for a working set of generator matrices that linearly map a reverse-ordered vector of digits of the sample index into vectors of digits representing coordinates along the four axes, as in Eq. (3). It is already established and well known that plain matrices and straightforward arithmetic in a composite base like 4 would not serve the purpose. An intuition of this deficiency may be obtained by noting that a factor of the base, like 2 in base 4, would not preserve all the ranks of a multiplied sequence-number digit modulo the base, which drastically limits the opportunity of finding a working set of matrices. Rather than using GF(4) symbolic arithmetic like Niederreiter (1987), our idea is to use plain GF(2) matrices and vectors, interpreting $2\times2$ blocks of the generator matrices and pairs of bits of the multiplied $V$ vector as base-4 digits. Our first key insight is that the set of invertible $2\times2$ binary matrix blocks is capable of resolving all 6 permutations of the list of non-zero values of a base-4 digit. While this does not prove that building matrices from such blocks would yield a $(0,4)$-sequence, it is at least encouraging to explore.
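This insight is easy to verify by brute force: the sketch below (ours) applies each of the six invertible $2\times2$ binary blocks to the three non-zero 2-bit digit values and confirms that all $3!=6$ permutations arise:

```python
from itertools import product

def apply(m, v):
    """Multiply a 2x2 binary matrix (a, b, c, d), row-major, by a
    2-bit column vector (v1, v0) over GF(2)."""
    a, b, c, d = m
    v1, v0 = v
    return ((a & v1) ^ (b & v0), (c & v1) ^ (d & v0))

invertible = [m for m in product((0, 1), repeat=4)
              if (m[0] & m[3]) ^ (m[1] & m[2])]
nonzero = [(0, 1), (1, 0), (1, 1)]  # the non-zero values of a base-4 digit

# Each invertible block permutes the non-zero digit values; together the
# six blocks realize every permutation of the three values.
perms = {tuple(apply(m, v) for v in nonzero) for m in invertible}
```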

We start with an exhaustive search over a small range to validate the concept. Without loss of generality, we may assume the identity matrix $I$ for the first dimension, since it can be factored out from the right of the four matrices to permute the sequence number. Our plan, then, is to progressively search for three binary square matrices that grow by 2 rows and columns at a time and fulfill the hybrid-matrix condition (Eq. (7)) in each step. That is, in each step, and for each dimension, we expand each matrix

(8) $A_{m+2}\leftarrow\left(\begin{array}{c|c}A_{m}&C\\ \hline R&X\end{array}\right)\,,$

by adding an $m\times2$ column $C$, a $2\times m$ row $R$, and a $2\times2$ corner block $X$. We will drop the suffix hereinafter where not deemed ambiguous. We may always assume zeros for $R$ by factoring out a lower-triangular Tezuka-scrambling matrix:

(9) (ACRX)=(I0RA1I)(AC0RA1C+X),\left(\begin{array}[]{c|c}A&C\\ \hline\cr R&X\end{array}\right)=\left(\begin{array}[]{c|c}I&0\\ \hline\cr RA^{-1}&I\end{array}\right)\left(\begin{array}[]{c|c}A&C\\ \hline\cr 0&RA^{-1}C+X\end{array}\right)\,,

where $I$ is an appropriately sized identity matrix. Then we use the factoring

(10) (AC0X)=(I00X).(AC0I),\left(\begin{array}[]{c|c}A&C\\ \hline\cr 0&X\end{array}\right)=\left(\begin{array}[]{c|c}I&0\\ \hline\cr 0&X\end{array}\right)\,.\left(\begin{array}[]{c|c}A&C\\ \hline\cr 0&I\end{array}\right)\,,

to reveal that $X$ must be invertible. This reduces the set of corner-extension matrices from 16 to the following six:

(11) $\left\{\left(\begin{array}{cc}1&.\\.&1\end{array}\right),\left(\begin{array}{cc}1&1\\.&1\end{array}\right),\left(\begin{array}{cc}1&.\\1&1\end{array}\right),\left(\begin{array}{cc}.&1\\1&.\end{array}\right),\left(\begin{array}{cc}1&1\\1&.\end{array}\right),\left(\begin{array}{cc}.&1\\1&1\end{array}\right)\right\}\,.$

We may further exclude the third and fifth by factoring out a lower-triangular Tezuka-scrambling matrix, reducing the set to only four:

(12) $\left\{\left(\begin{array}{cc}1&.\\.&1\end{array}\right),\left(\begin{array}{cc}1&1\\.&1\end{array}\right),\left(\begin{array}{cc}.&1\\1&.\end{array}\right),\left(\begin{array}{cc}.&1\\1&1\end{array}\right)\right\}\,,$

which is more favorable as a power of 2.

We built an efficient exhaustive search algorithm by packing a whole matrix compactly into a 64-bit word, allowing for fast bit manipulation and for building lookup tables of invertible matrices. This enabled searching up to $8\times8$ matrices, i.e., four extension steps. The search confirmed the existence of multiple working sets of matrices, which encouraged us to conduct two important experiments. We first considered nesting a $(0,2)$-sequence, namely 2D Sobol, in our $(0,4)$ set. Towards that end, we enforced the extension of the second-dimension matrix to reproduce the Pascal matrix, and searched for a complementing pair of matrices. This worked successfully all the way to $8\times8$ matrices, with multiple choices in each extension. Next, we considered ensembling two $(0,2)$-sequences into a $(0,4)$-sequence. To achieve this, we used the formula recently derived by Ahmed et al. (2023), which states that for any pair $\{U_{x},U_{y}\}$ of upper-triangular generator matrices to make a $(0,2)$-sequence, they have to satisfy

(13) $U_{y}U_{x}^{-1}=U_{x}U_{y}^{-1}=P\,.$

Thus, we conducted the search as with nesting, and in each extension filtered the results that satisfy this condition. Once again, the experiment worked successfully all the way to the $8\times8$ matrices.
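As a sanity check of Eq. (13) on the pair of Eq. (4), take $U_{x}=I$ and $U_{y}=P$: then $U_{y}U_{x}^{-1}=P$ trivially, and $U_{x}U_{y}^{-1}=P^{-1}=P$, since the binary Pascal matrix is self-inverse, a fact that is quickly confirmed numerically (our sketch):

```python
N = 8

# Binary Pascal matrix via Lucas' theorem: C(j, i) is odd iff (i AND j) == i.
P = [[1 if (i & j) == i else 0 for j in range(N)] for i in range(N)]
I = [[1 if i == j else 0 for j in range(N)] for i in range(N)]

# P * P over GF(2); P being its own inverse makes Eq. (13) hold for (I, P).
PP = [[sum(P[i][k] * P[k][j] for k in range(N)) % 2 for j in range(N)]
      for i in range(N)]
```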

3.1. Initial Findings

For all combinations of valid initial $2\times2$ blocks of the three matrices, we consistently obtained 768 distinct valid sets for the first extension to $4\times4$, and 384 distinct valid extensions on each branch for the subsequent extensions to $6\times6$ and $8\times8$, which suggests an analytical construction model. The fixed number of alternatives in each extension step indicates that the degrees of freedom are possibly limited to the top and bottom blocks of the extension columns. Following this hint, we note that the corner extension block $X$ in Eq. (8) is not involved in any of the hybrid matrices, hence it comprises, for each of the three matrices, a free choice among the four options in Eq. (12). This accounts for a factor of $4^{3}=64$, and we verified that fixing the extension corner blocks $X$ to identity reduces the list to 12/6 sets in the first/subsequent extension steps. This leaves us with the degrees of freedom in the top row, which we analyze next.

Let

(14) $\left(X_{0}\,X_{1}\,\ldots\,X_{j-1}\,X_{j}\right)$

denote the first block row of one matrix $X$ of the three upon the $j$th extension, where each $X_{i}$ is a $2\times2$ block matrix. By constructing a hybrid matrix comprising $j$ block rows of the first (identity) matrix and this row of $X$, we find that

(15) $\left|\begin{array}{ccc|c}&I&&\\ \hline X_{0}&\ldots&X_{j-1}&X_{j}\end{array}\right|=1\implies\left|X_{j}\right|=1\,,$

leading to the conclusion that, for each of the three matrices, all $2\times2$ blocks of the first block row have to be invertible.

Now consider taking a hybrid of the first $j-1$ block rows of $I$ with the first rows of each of two other matrices $A$ and $B$; then we get

(16) $\left|\begin{array}{ccc|cc}&I&&&\\ \hline A_{0}&\ldots&A_{j-2}&A_{j-1}&A_{j}\\ B_{0}&\ldots&B_{j-2}&B_{j-1}&B_{j}\end{array}\right|=1\implies\begin{vmatrix}A_{j-1}&A_{j}\\ B_{j-1}&B_{j}\end{vmatrix}=1\,.$

Since all four elements are invertible by virtue of Eq. (15), we can use our earlier “$RA^{-1}C+X$” reduction of Eq. (9) to get

(17) $\left|B_{j-1}A_{j-1}^{-1}A_{j}+B_{j}\right|=1\,,$

or

(18) $\left|A_{j-1}^{-1}A_{j}+B_{j-1}^{-1}B_{j}\right|=1\,.$

Let us define a symbol

(19) $x_{j}=X_{j-1}^{-1}X_{j}\,,$

to designate the product of the inverse of the $(j-1)$st entry by the $j$th entry in the first row of block matrix $X$. Then Eq. (18), rewritten as

(20) $\left|a_{j}+b_{j}\right|=1\,,$

says that the sum of corresponding symbols in the first rows of each pair $\{A,B\}$ of generator matrices must be invertible, which also implies that the symbols at corresponding slots must be distinct for the three matrices. Thus, for each slot in the first block row, the inserted block-matrix symbols must be

  1. Invertible.

  2. Distinct for each matrix.

  3. Have invertible sums.
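These three conditions pin the alphabet down completely; a brute-force sketch of ours over the six invertible $2\times2$ matrices finds exactly two triples whose pairwise sums are all invertible, matching the two disjoint sets identified below:

```python
from itertools import combinations, product

def det2(m):
    """GF(2) determinant of a 2x2 binary matrix flattened as (a, b, c, d)."""
    a, b, c, d = m
    return (a & d) ^ (b & c)

def add2(m, n):
    """Entrywise sum of two binary matrices over GF(2)."""
    return tuple(x ^ y for x, y in zip(m, n))

invertible = [m for m in product((0, 1), repeat=4) if det2(m)]

# Condition (1) restricts us to `invertible`; conditions (2) and (3) ask
# for distinct symbols whose pairwise sums are again invertible.
alphabets = [t for t in combinations(invertible, 3)
             if all(det2(add2(p, q)) for p, q in combinations(t, 2))]
```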

The population of six invertible $2\times2$ matrices comprises two disjoint sets,

(21) (S,Z)=\left(\left\{\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{s_{1}},\underbrace{\begin{pmatrix}0&1\\1&1\end{pmatrix}}_{s_{2}},\underbrace{\begin{pmatrix}1&1\\1&0\end{pmatrix}}_{s_{3}}\right\},\left\{\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{z_{1}},\underbrace{\begin{pmatrix}1&0\\1&1\end{pmatrix}}_{z_{2}},\underbrace{\begin{pmatrix}1&1\\0&1\end{pmatrix}}_{z_{3}}\right\}\right)\,,

that satisfy these requirements. Each set comprises two diagonally-mirrored triangular elements and a counter-diagonal element that switches between them via addition; we call the sets S and Z, after the letter shapes suggested by their patterns, to make it easier to recall which is which, and hence the name SZ for the resulting sequences. We will also stick hereinafter to this S/Z convention in discriminating left- and right-hand elements and operations, respectively.
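These defining properties can be checked mechanically. The following sketch (ours, using NumPy; the grouping criterion by multiplicative order is our characterization, not part of the construction) enumerates the six invertible 2\times 2 binary matrices and verifies that each of the two resulting triples satisfies rules (1)–(3):

```python
import numpy as np
from itertools import product

# Enumerate all 2x2 binary matrices and keep the invertible ones
# (odd determinant); there are exactly six.
mats = [np.array(m, dtype=np.uint8).reshape(2, 2)
        for m in product([0, 1], repeat=4)]
inv = [M for M in mats if round(np.linalg.det(M)) % 2 == 1]

def key(M):
    return tuple(M.flatten())

def order(M):
    """Multiplicative order of M in GL(2, GF(2))."""
    P, n = M.copy(), 1
    while key(P) != key(np.eye(2, dtype=np.uint8)):
        P, n = P.dot(M) % 2, n + 1
    return n

# S collects the identity and the two order-3 elements; Z the rest.
S = [M for M in inv if order(M) in (1, 3)]
Z = [M for M in inv if order(M) == 2]

# Rules: invertible (by construction), distinct, invertible pairwise sums.
for group in (S, Z):
    assert len({key(M) for M in group}) == 3
    for A in group:
        for B in group:
            if key(A) != key(B):
                assert round(np.linalg.det((A + B) % 2)) % 2 == 1
```

Both triples pass all three rules, in agreement with the two sets in Eq. (21).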

We conclude this subsection by noting that these three simple rules comprise all the requirements for satisfying the hybrid-matrix condition for the first 2\times 2 block matrices, and that they extend analogously to other powers of 2. Thus, constructing binary matrices that generate a “(0,2,2^{q})-net in base 2^{q}” reduces to finding an alphabet of q\times q block matrices satisfying these rules. In Niederreiter’s notation, this name simply means a set of 2^{2q} points in 2^{q} dimensions that is stratified in all 2^{q-1}(2^{q}-1) pairs of 2D projections; cf. Jarosz et al. (2019). Next, we show how to extend the matrices to arbitrary sizes for (0,2^{q})-sequences in base 2^{q}.

3.2. Refinement

Instead of defining the symbols in Eq. (19) in terms of adjacent entries, it is more constructive to populate the matrices directly with symbols. Realizing that all first-row blocks must be invertible lets us achieve this via the following subtle factoring:

(22) (IAJ1)((II),(III),(IBjAJ1I),(ICjAJ1I))(IAj).\displaystyle\scalebox{0.6}{\mbox{$\displaystyle\begin{pmatrix}I&&\\ &\ddots&\\ &&A_{J}^{-1}\end{pmatrix}\left(\begin{pmatrix}I&&\\ &\ddots&\\ &&I\end{pmatrix},\begin{pmatrix}I&\ldots&I\\ &\ddots&\vdots\\ &&I\end{pmatrix},\begin{pmatrix}I&\ldots&B_{j}A_{J}^{-1}\\ &\ddots&\vdots\\ &&I\end{pmatrix},\begin{pmatrix}I&\ldots&C_{j}A_{J}^{-1}\\ &\ddots&\vdots\\ &&I\end{pmatrix}\right)\begin{pmatrix}I&&\\ &\ddots&\\ &&A_{j}\end{pmatrix}$}}\,.

The common S (left-hand-side) factor represents a Tezuka scrambling, and hence may be dropped. Thus, without loss of generality, we may populate all blocks in the first row of the second-dimension (first of the three) matrix with identity blocks. Once we apply this last factoring, the exhaustive search reduces to the following unique quadruple of generator matrices:

(23) [Quadruple of four explicit 32\times 32 binary generator matrices: the identity matrix together with three upper-triangular matrices assembled from 2\times 2 blocks; the verbatim bit patterns are not reproduced here.]

here shown to 32-bit depth. As per our exhaustive search, this quadruple represents the template of all binary-matrix-based (0,4)-sequences; all other possible variants may be derived by plugging back the degrees of freedom factored out earlier, as we discuss later.
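Once full-resolution matrices are available, evaluating a coordinate costs only a handful of XORs, exactly as with Sobol matrices. A minimal sketch of such a bitwise evaluator (function and variable names ours; 32-bit columns assumed, with the most significant bit holding the first matrix row):

```python
def sample(columns, i):
    """XOR together the generator-matrix columns selected by the set
    bits of the point index i, then scale to [0, 1)."""
    x, bit = 0, 0
    while i:
        if i & 1:
            x ^= columns[bit]
        i >>= 1
        bit += 1
    return x * 2.0**-32

# With the identity matrix this reproduces the van der Corput sequence.
identity_cols = [1 << (31 - k) for k in range(32)]
assert sample(identity_cols, 1) == 0.5
assert sample(identity_cols, 6) == 0.375  # bits 1 and 2 -> 1/4 + 1/8
```

This column-XOR loop is what makes binary generator matrices a drop-in replacement in existing Sobol samplers.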

Upon careful inspection of these matrices, the first important observation is that they are built exclusively from the SS block symbols in Eq. (21), plus empty (0) blocks. Further inspection, guided by a preliminary assumption that these sequences bear some relation to Faure’s, reveals that these are actually Pascal matrices

(24) \mathcal{P}(a)=\left(p_{i,j}=\binom{j}{i}\,a^{j-i}\right)\,,

constructed from these four block symbols, with the binomial coefficients reduced modulo 2.
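In the scalar case a = 1, Eq. (24) reduces to the familiar binary Pascal matrix of 2D Sobol/Faure constructions; by Lucas’ theorem, \binom{j}{i} is odd exactly when the bits of i form a subset of the bits of j. A small sketch (ours) that also checks the group property \mathcal{P}(a)\mathcal{P}(b)=\mathcal{P}(a+b), which over GF(2) makes \mathcal{P}(1) an involution:

```python
import numpy as np

def pascal_binary(n):
    """n x n upper-triangular Pascal matrix over GF(2):
    p[i][j] = binom(j, i) mod 2 = 1 iff the bits of i are a subset
    of the bits of j (Lucas' theorem)."""
    return np.array([[1 if (i & j) == i else 0 for j in range(n)]
                     for i in range(n)], dtype=np.uint8)

P = pascal_binary(16)
# P(a)P(b) = P(a+b); with a = b = 1 over GF(2), P^2 = P(0) = I.
assert np.array_equal(P.dot(P) % 2, np.eye(16, dtype=np.uint8))
```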

4. SZ Sequences

Based on our exhaustive search for binary (0,4)-sequence generators, and on similar partial searches for (0,8)- and (0,16)-sequences, we find that what characterizes the S set of 2\times 2 block symbols, compared to the complementary Z set in Eq. (21), is that it is closed not only under addition but also under multiplication, and hence constitutes a finite field, as dictated by Wedderburn’s little theorem, which states that “every finite division ring is a field.” In a nutshell, a finite field is a set of elements equipped with operations analogous to addition and multiplication, under which the set is closed, with neutral elements analogous to 0 and 1, and with analogous definitions of additive and multiplicative inverses. The only element missing from our alphabets to form a field is a zero, but in fact this element has been present and used all along in our development: it is the symbol used to build the identity matrix in our set! This could be slightly confusing, since we have a “grand” identity matrix for the final set of generator matrices, built from the 0 symbol, and a “Pascal-ed” identity symbol used to build the second-dimension generator matrix.
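Concretely, for q = 2 the S set plus the zero block can be verified to form a field with four elements. The sketch below (ours) represents the nonzero elements as I, M, and M^2, where M is the companion matrix of x^2 + x + 1, and checks closure and multiplicative inverses:

```python
import numpy as np
from itertools import product

# Candidate representation of GF(4): the zero block plus the S symbols.
O = np.zeros((2, 2), dtype=np.uint8)
I = np.eye(2, dtype=np.uint8)
M = np.array([[0, 1], [1, 1]], dtype=np.uint8)  # companion of x^2 + x + 1
F = [O, I, M, M.dot(M) % 2]

def find(X):
    """Index of X within F; raises if X is not in the set."""
    for k, Y in enumerate(F):
        if np.array_equal(X, Y):
            return k
    raise AssertionError("set is not closed")

# Closure under addition and multiplication (both mod 2).
for A, B in product(F, repeat=2):
    find((A + B) % 2)
    find(A.dot(B) % 2)

# Every nonzero element has a multiplicative inverse in the set.
for A in F[1:]:
    assert any(np.array_equal(A.dot(B) % 2, I) for B in F[1:])
```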

Once we arrive at this conclusion for 4D, we may readily envision analogous constructions for higher power-of-2 dimensions, as summarized in Algorithm 1.

Input: the index q of the required power-of-2 dimension s=2^{q}.
Output: A set of s binary generator matrices that produce a (0,s)-sequence.
1 Search for a set (alphabet) \{x_{i}\}_{i=1}^{s-1} of s-1 invertible q\times q matrices that is closed under addition, multiplication, and inversion;
2 Build a Pascal matrix \mathcal{P}(x) of the desired numerical resolution for each symbol in the alphabet;
3 Along with an equally sized identity matrix, these constitute the desired set of generator matrices.
Algorithm 1 Constructing an SZ (0,2^{q})-sequence.

Using this algorithm, we constructed SZ sequences for several values of q, and empirically verified their (0,s)-sequence property by building and validating all the sets of hybrid matrices. It is worth noting here that, as a special case of our model, the conventional Pascal matrix in 2D Sobol/Faure constructions is the Pascal matrix of the identity of a 2D alphabet comprising the scalars 0 and 1 as its block matrices. Thus, we hereinafter consider an all-zero q\times q block matrix an intrinsic symbol in our alphabets.
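As a concrete instance of this verification (a sketch under our conventions, not the full validation), the q = 2 case builds 4\times 4 generator matrices from the GF(4) block alphabet and confirms that the first 16 points form a (0,2,4)-net in base 4, i.e., every dimension is 16-stratified and every pair of dimensions places one point per cell of the 4\times 4 grid:

```python
import numpy as np
from itertools import combinations

# GF(4) block symbols: zero, identity, M, and M^2 (M = companion of x^2+x+1).
O = np.zeros((2, 2), dtype=np.uint8)
s1 = np.eye(2, dtype=np.uint8)
s2 = np.array([[0, 1], [1, 1]], dtype=np.uint8)
s3 = s2.dot(s2) % 2

def pascal2(a):
    """Two-block-row Pascal matrix [[I, a], [0, I]] (4x4 over GF(2))."""
    return np.block([[s1, a], [O, s1]])

# Generator matrices of the four dimensions, to 4-bit depth.
Cs = [np.eye(4, dtype=np.uint8), pascal2(s1), pascal2(s2), pascal2(s3)]

def coord(C, i, nbits=4):
    """Apply a binary generator matrix to the bits of index i (LSB first);
    output row 0 becomes the most significant bit of the coordinate."""
    bits = np.array([(i >> k) & 1 for k in range(nbits)], dtype=np.uint8)
    y = C.dot(bits) % 2
    return sum(int(y[k]) << (nbits - 1 - k) for k in range(nbits)) / 2.0**nbits

pts = [[coord(C, i) for C in Cs] for i in range(16)]

# Net property: 16 strata per dimension, one point per 4x4 cell per pair.
for d in range(4):
    assert sorted(int(16 * p[d]) for p in pts) == list(range(16))
for d, e in combinations(range(4), 2):
    assert len({(int(4 * p[d]), int(4 * p[e])) for p in pts}) == 16
```

The same checks extend to deeper bit depths and, with larger alphabets, to higher values of q.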

Thus, while we did not proactively choose to use fields, they emerged automatically, bringing us closer to Faure’s sequences and their Niederreiter extension. Let us briefly abstract our empirical search results of Section 3 to understand what happened: we constructed the following four base-4 matrices:

(25)

\scalebox{0.42}{\mbox{$\displaystyle\left\{\begin{array}[]{ll}\left(\begin{array}[]{cccccccccccccccc}1&.&.&.&.&.&.&.&.&.&.&.&.&.&.&.\\ .&1&.&.&.&.&.&.&.&.&.&.&.&.&.&.\\ .&.&1&.&.&.&.&.&.&.&.&.&.&.&.&.\\ .&.&.&1&.&.&.&.&.&.&.&.&.&.&.&.\\ .&.&.&.&1&.&.&.&.&.&.&.&.&.&.&.\\ .&.&.&.&.&1&.&.&.&.&.&.&.&.&.&.\\ .&.&.&.&.&.&1&.&.&.&.&.&.&.&.&.\\ .&.&.&.&.&.&.&1&.&.&.&.&.&.&.&.\\ .&.&.&.&.&.&.&.&1&.&.&.&.&.&.&.\\ .&.&.&.&.&.&.&.&.&1&.&.&.&.&.&.\\ .&.&.&.&.&.&.&.&.&.&1&.&.&.&.&.\\ .&.&.&.&.&.&.&.&.&.&.&1&.&.&.&.\\ .&.&.&.&.&.&.&.&.&.&.&.&1&.&.&.\\ .&.&.&.&.&.&.&.&.&.&.&.&.&1&.&.\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&1&.\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&.&1\end{array}\right),&\left(\begin{array}[]{cccccccccccccccc}1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1\\ .&1&.&1&.&1&.&1&.&1&.&1&.&1&.&1\\ .&.&1&1&.&.&1&1&.&.&1&1&.&.&1&1\\ .&.&.&1&.&.&.&1&.&.&.&1&.&.&.&1\\ .&.&.&.&1&1&1&1&.&.&.&.&1&1&1&1\\ .&.&.&.&.&1&.&1&.&.&.&.&.&1&.&1\\ .&.&.&.&.&.&1&1&.&.&.&.&.&.&1&1\\ .&.&.&.&.&.&.&1&.&.&.&.&.&.&.&1\\ .&.&.&.&.&.&.&.&1&1&1&1&1&1&1&1\\ .&.&.&.&.&.&.&.&.&1&.&1&.&1&.&1\\ .&.&.&.&.&.&.&.&.&.&1&1&.&.&1&1\\ .&.&.&.&.&.&.&.&.&.&.&1&.&.&.&1\\ .&.&.&.&.&.&.&.&.&.&.&.&1&1&1&1\\ .&.&.&.&.&.&.&.&.&.&.&.&.&1&.&1\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&1&1\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&.&1\end{array}\right),\\ \\ \left(\begin{array}[]{cccccccccccccccc}1&2&3&1&2&3&1&2&3&1&2&3&1&2&3&1\\ .&1&.&3&.&2&.&1&.&3&.&2&.&1&.&3\\ .&.&1&2&.&.&2&3&.&.&3&1&.&.&1&2\\ .&.&.&1&.&.&.&2&.&.&.&3&.&.&.&1\\ .&.&.&.&1&2&3&1&.&.&.&.&3&1&2&3\\ .&.&.&.&.&1&.&3&.&.&.&.&.&3&.&2\\ .&.&.&.&.&.&1&2&.&.&.&.&.&.&3&1\\ .&.&.&.&.&.&.&1&.&.&.&.&.&.&.&3\\ .&.&.&.&.&.&.&.&1&2&3&1&2&3&1&2\\ .&.&.&.&.&.&.&.&.&1&.&3&.&2&.&1\\ .&.&.&.&.&.&.&.&.&.&1&2&.&.&2&3\\ .&.&.&.&.&.&.&.&.&.&.&1&.&.&.&2\\ .&.&.&.&.&.&.&.&.&.&.&.&1&2&3&1\\ .&.&.&.&.&.&.&.&.&.&.&.&.&1&.&3\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&1&2\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&.&1\end{array}\right),&\left(\begin{array}[]{cccccccccccccccc}1&3&2&1&3&2&1&3&2&1&3&2&1&3&2&1\\ .&1&.&2&.&3&.&1&.&2&.&3&.&1&.&2\\ 
.&.&1&3&.&.&3&2&.&.&2&1&.&.&1&3\\ .&.&.&1&.&.&.&3&.&.&.&2&.&.&.&1\\ .&.&.&.&1&3&2&1&.&.&.&.&2&1&3&2\\ .&.&.&.&.&1&.&2&.&.&.&.&.&2&.&3\\ .&.&.&.&.&.&1&3&.&.&.&.&.&.&2&1\\ .&.&.&.&.&.&.&1&.&.&.&.&.&.&.&2\\ .&.&.&.&.&.&.&.&1&3&2&1&3&2&1&3\\ .&.&.&.&.&.&.&.&.&1&.&2&.&3&.&1\\ .&.&.&.&.&.&.&.&.&.&1&3&.&.&3&2\\ .&.&.&.&.&.&.&.&.&.&.&1&.&.&.&3\\ .&.&.&.&.&.&.&.&.&.&.&.&1&3&2&1\\ .&.&.&.&.&.&.&.&.&.&.&.&.&1&.&2\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&1&3\\ .&.&.&.&.&.&.&.&.&.&.&.&.&.&.&1\end{array}\right)\end{array}\right\}$}}\,.

These are exactly the matrices in Eq. (23), encoding each distinct 2×2 block as a base-4 digit, indexed by 0 and the three S symbols in Eq. (21). When multiplied by a base-4 column-vector representation of a sequence number, these specific matrices generate a (0, 4)-sequence in base 4, i.e., a sequence of 4D points that is fully multi-stratified for powers-of-4 numbers of points. The multiplication, however, is not standard modulo-4 arithmetic, but uses a special table

(26) \begin{array}[]{cc|cccc}&Z&0&1&2&3\\ S&&&&&\\ \hline\cr 0&&0&0&0&0\\ 1&&0&1&2&3\\ 2&&0&2&3&1\\ 3&&0&3&1&2\end{array}\,,

that is built to amend the degenerate multiplication table modulo 4, preparing it for the Faure construction. Here S refers to the left multiplicand digit, taken from the matrix, and Z to the right one, taken from the vector. Finally, this arithmetic table is neither built arbitrarily nor looked up; it is intrinsic to the matrix-block alphabet, and is evaluated automatically via GF(2) matrix-vector multiplication:

(27) \begin{pmatrix}\binaryMatrix[-0.7]{2}{....}\\ \binaryMatrix[-0.7]{2}{1..1}\\ \binaryMatrix[-0.7]{2}{.111}\\ \binaryMatrix[-0.7]{2}{111.}\end{pmatrix}\begin{pmatrix}\binaryMatrix[-1]{1}{..}&\binaryMatrix[-1]{1}{1.}&\binaryMatrix[-1]{1}{.1}&\binaryMatrix[-1]{1}{11}\end{pmatrix}=\begin{pmatrix}\binaryMatrix[-1]{1}{..}&\binaryMatrix[-1pt]{1}{..}&\binaryMatrix[-1pt]{1}{..}&\binaryMatrix[-1pt]{1}{..}\\ \binaryMatrix[-1]{1}{..}&\binaryMatrix[-1pt]{1}{1.}&\binaryMatrix[-1pt]{1}{.1}&\binaryMatrix[-1pt]{1}{11}\\ \binaryMatrix[-1]{1}{..}&\binaryMatrix[-1pt]{1}{.1}&\binaryMatrix[-1pt]{1}{11}&\binaryMatrix[-1pt]{1}{1.}\\ \binaryMatrix[-1]{1}{..}&\binaryMatrix[-1pt]{1}{11}&\binaryMatrix[-1pt]{1}{1.}&\binaryMatrix[-1pt]{1}{.1}\end{pmatrix}\,.

Please mind the bit-reversed ordering: the top row is the least significant bit, so \binaryMatrix[-1.4]{1}{1.} is 1 and \binaryMatrix[-1.4]{1}{.1} is 2.
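As a concrete check, the table of Eq. (26) can be reproduced mechanically from Eq. (27). The following sketch (variable and function names are ours) encodes the block symbols and digit vectors with the bit-reversed convention just described:

```python
# Block symbols of the base-4 alphabet, rows listed top to bottom;
# encodings are read off Eq. (27).
SYMBOLS = {
    0: ((0, 0), (0, 0)),
    1: ((1, 0), (0, 1)),  # identity
    2: ((0, 1), (1, 1)),
    3: ((1, 1), (1, 0)),
}

def digit_to_vec(d):
    """Base-4 digit -> 2-bit GF(2) column vector, least significant bit on top."""
    return (d & 1, (d >> 1) & 1)

def vec_to_digit(v):
    return v[0] | (v[1] << 1)

def gf2_matvec(m, v):
    """Matrix-vector product over GF(2)."""
    return tuple(sum(mij * vj for mij, vj in zip(row, v)) & 1 for row in m)

def multiply(s, z):
    """The S * Z digit product of Eq. (26), via a GF(2) matrix-vector product."""
    return vec_to_digit(gf2_matvec(SYMBOLS[s], digit_to_vec(z)))

table = [[multiply(s, z) for z in range(4)] for s in range(4)]
```

The rows of `table` reproduce the four rows of the S * Z table in Eq. (26).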

Even though we arrived at this construction experimentally and independently, as detailed in Section 3, once abstracted this way it shows a close similarity to the analytically developed construction of Niederreiter (1987; 1992) that extends Faure's sequences to power-of-prime bases. Further, the uniqueness of finite fields makes us strongly believe that our construction does in fact coincide with Niederreiter's. This remains a hypothesis awaiting analytical validation, though, since the multiple degrees of freedom in both constructions make them produce different realizations, which makes empirical verification difficult. In all cases, both constructions span the same universe of (0, 2^q)-sequences in base 2^q. It is the emphasized text in the preceding description, however, that marks the essential difference between our construction and Niederreiter's: instead of using symbolic GF(2^q) arithmetic with lookup tables, as in the standard implementation of Niederreiter sequences (Bratley et al., 1992), our field elements are embedded as block matrices in GF(2) generator matrices that handle both the field arithmetic and the bijection operators in Niederreiter's construction. This makes our binary-based construction far more efficient and scalable. Another important advantage is that our experimental approach led us to discover nesting and ensembling, which make a significant difference in graphics applications, as we will see in Section 5.2.

4.1. Alphabets

Identifying alphabets as finite fields brings with it many insights and implications. For example, it implies commutativity of matrix multiplication over the set, which we will use later for randomization. The most relevant property for our construction, however, is that in fields like our alphabets there exists a symbol, in fact many, that multiplicatively cycles through all the non-zero symbols; that is, it can generate all the non-zero elements as its powers. Such a symbol is called “primitive” and is commonly designated α. The alphabet may then be written as

(28) \Sigma=\{0,I,\alpha,\alpha^{2},\ldots,\alpha^{s-2}\}\,.

Aside from analytical insights and proofs, alpha elements prove quite helpful in practice. The first facility they offer is a straightforward search plan for alphabets, summarized in Algorithm 2, which replaces the complex and time-consuming stack-based searches we implemented in our exploration phase.

Input: (1) A power-of-two element size s = 2^q;
(2) a timeout number of attempts T.
Output: An alphabet of s block matrices.
for i ← 0 to T−1 do
  Construct a random invertible q×q matrix block x;
  Repeatedly multiply x by itself until reaching I, and record the needed number of multiplications: its “root index” n;
  if n = s−1 then
    return {0, {x^i}, i = 0..s−2};
Return a timeout message.
Algorithm 2: AlphaSearch: searching for an alphabet.
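A minimal Python sketch of Algorithm 2, with q×q GF(2) matrices stored as tuples of row bitmasks, may look as follows (all function names and the bitmask representation are our own choices):

```python
import random

def gf2_matmul(a, b, q):
    """Product of q-by-q GF(2) matrices stored as tuples of row bitmasks."""
    bcols = [sum(((b[k] >> j) & 1) << k for k in range(q)) for j in range(q)]
    return tuple(
        sum((bin(a[i] & bcols[j]).count("1") & 1) << j for j in range(q))
        for i in range(q)
    )

def gf2_invertible(m, q):
    """Invertibility test by Gaussian elimination over GF(2)."""
    rows = list(m)
    for col in range(q):
        piv = next((r for r in range(col, q) if (rows[r] >> col) & 1), None)
        if piv is None:
            return False
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(q):
            if r != col and (rows[r] >> col) & 1:
                rows[r] ^= rows[col]
    return True

def gf2_order(m, q):
    """Smallest n with m^n = I; m must be invertible."""
    ident = tuple(1 << i for i in range(q))
    x, n = m, 1
    while x != ident:
        x, n = gf2_matmul(x, m, q), n + 1
    return n

def alpha_search(q, attempts=10000, seed=0):
    """Algorithm 2 sketch: find an alphabet {0, I, x, ..., x^(s-2)}, s = 2^q."""
    rng = random.Random(seed)
    s = 1 << q
    for _ in range(attempts):
        x = tuple(rng.randrange(1 << q) for _ in range(q))
        if not gf2_invertible(x, q):
            continue
        if gf2_order(x, q) == s - 1:
            alphabet = [tuple([0] * q), tuple(1 << i for i in range(q))]
            p = x
            for _ in range(s - 2):
                alphabet.append(p)
                p = gf2_matmul(p, x, q)
            return alphabet
    return None  # timeout
```

Since the returned set is a field, it is closed under symbol addition (entrywise XOR of the row bitmasks), which makes a convenient sanity check on the result.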

Secondly, noting that an alpha element belongs to a unique alphabet, the search algorithm can easily be modified to enumerate all alphabets; their count follows from dividing the number of alpha elements in the universe of invertible matrices by the number of alphas in a single alphabet. We performed such a search over the feasible range, and obtained

(29) \left(\left|\left\{\Sigma(q)\right\}_{q=1}^{6}\right|\right)=(1,1,8,336,64512,53329920)\,.

A brief Internet search identified this sequence with OEIS A258745 (2015), described as “Order of general affine group AGL(n,2) (=A028365(n)) divided by (n+1)”. This hints at the possibility of building alphabets algorithmically, which we leave for future research.
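For q = 3, the count in Eq. (29) can be checked by brute force: the alpha elements among all binary 3×3 matrices are exactly those of multiplicative order s − 1 = 7, and each alphabet contains φ(7) = 6 of them. A small sketch under these assumptions (names are ours):

```python
from itertools import product

I3 = (1, 0, 0, 0, 1, 0, 0, 0, 1)

def matmul(a, b):
    """3x3 GF(2) matrix product; matrices are 9-tuples in row-major order."""
    return tuple(
        (a[3 * i] * b[j] + a[3 * i + 1] * b[3 + j] + a[3 * i + 2] * b[6 + j]) & 1
        for i in range(3) for j in range(3)
    )

def order(m, cap=8):
    """Multiplicative order of m, or None if no power of m reaches I."""
    x, n = m, 1
    while x != I3:
        x, n = matmul(x, m), n + 1
        if n > cap:
            return None  # singular matrix, or order beyond cap
    return n

# Count alpha elements (order s - 1 = 7) among all 512 binary 3x3 matrices,
# then divide by the number of primitive elements per alphabet, phi(7) = 6.
alphas = sum(1 for m in product((0, 1), repeat=9) if order(m) == 7)
num_alphabets = alphas // 6
```

The result, 48 / 6 = 8, matches the third entry of Eq. (29).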

4.2. Randomization

With the template set of generator matrices built up exclusively of alphabet building blocks, we may now randomize the set by plugging back the degrees of freedom we factored out earlier. The extension rows we removed in Eq. (9), along with the X diagonal blocks we factored later, exactly identify with Tezuka's (1994) scrambling mentioned in Section 2.4. Beyond that, our model readily supports Owen's (1995) scrambling and Faure-Tezuka (2002) shuffling in both base 2 and base 2^q, as well as Xor-scrambling (Kollig and Keller, 2002).

4.3. Nesting

The key idea of nesting, introduced in the exploration phase, is to find an alphabet over 2^{2q} that embeds in its symbols the symbols of a given alphabet over 2^q, in such a way that the generators of the nested sequence may be reproduced via sequence-preserving manipulations of the corresponding generators of the nesting sequence. Toward that end, we start by extracting symbols of the nesting alphabet from generators of the nested sequence. We note that 2×2 arrangements of blocks of the nested generator tile a single block of the nesting generator. The symbol of the nesting alphabet is found in the second block of the first row of the Pascal matrix, but we first have to factor out the first block, leaving only an identity; hence

(30) \langle a\rangle=\begin{pmatrix}I&a\\ &I\end{pmatrix}^{-1}\begin{pmatrix}a^{2}&a^{3}\\ &a^{2}\end{pmatrix}=\begin{pmatrix}a^{2}&\\ &a^{2}\end{pmatrix}\,,

where ⟨a⟩ denotes the symbol of the nesting alphabet that embeds symbol a of the nested sequence's alphabet. We then search for a 2^{2q} alphabet that contains this subset. A brute-force approach to such a search is far more difficult than before, since we seek quite specific alphabets. We therefore generated and analyzed all nesting alphabets in a feasible range, and found a fortunate pattern: all nesting alphabets are built exclusively from nested-alphabet symbols. This reduces the search space dramatically, from 2^{4q^2} to 2^{4q}, which makes it feasible to reach 64K dimensions. This reduction not only enables us to find working alphabets, but also to sample them uniformly.
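Eq. (30) is easy to verify numerically over GF(2). The sketch below (names ours) uses the alpha symbol of the base-4 alphabet from Eq. (27), noting that in characteristic 2 the upper-triangular factor is its own inverse:

```python
def mul(a, b):
    """GF(2) product of square matrices given as tuples of row tuples."""
    n = len(a)
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(n)) & 1 for j in range(n))
        for i in range(n)
    )

def blocks2x2(tl, tr, bl, br):
    """Assemble a 2x2 block matrix from four equally sized blocks."""
    q = len(tl)
    rows = [tuple(tl[r]) + tuple(tr[r]) for r in range(q)]
    rows += [tuple(bl[r]) + tuple(br[r]) for r in range(q)]
    return tuple(rows)

Z2 = ((0, 0), (0, 0))
I2 = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))   # the alpha symbol a
A2 = mul(A, A)          # a^2
A3 = mul(A2, A)         # a^3; equals I, since a has order 3

U = blocks2x2(I2, A, Z2, I2)   # (I a; 0 I), its own inverse over GF(2)
M = blocks2x2(A2, A3, Z2, A2)  # (a^2 a^3; 0 a^2)
embedded = mul(U, M)           # Eq. (30): should equal diag(a^2, a^2)
```

Here `embedded` indeed comes out block-diagonal, since the off-diagonal block a^3 + a·a^2 vanishes in characteristic 2.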

Once the alphabet is found, the generator matrices are built as per Algorithm 1, then the generator matrices of embedded symbols are multiplied by a factor

(31) \mathcal{P}(a)=\begin{pmatrix}I&a\\ &I\end{pmatrix}\mathcal{P}(\langle a\rangle)\,,

to restore the original matrices. This provably works, as seen by expanding the multiplication for the block entry (j, i):

(32) \begin{pmatrix}I&a\\ &I\end{pmatrix}\begin{pmatrix}\langle a\rangle^{j-i}\end{pmatrix}=\begin{pmatrix}I&a\\ &I\end{pmatrix}\begin{pmatrix}a^{2j-2i}&\\ &a^{2j-2i}\end{pmatrix}=\begin{pmatrix}a^{2j-2i}&a^{(2j+1)-2i}\\ &a^{2j-2i}\end{pmatrix}\,,

which matches the entries in \mathcal{P}(a); the coefficients

(33) \begin{pmatrix}\binom{2j}{2i}&\binom{2j+1}{2i}\\ \\ \binom{2j}{2i+1}&\binom{2j+1}{2i+1}\end{pmatrix}\,,

are matched using Lucas’ Theorem. Note that for the nesting set, this multiplication represents a Tezuka scrambling, hence preserves the structure of the sequence.

We conclude this section by remarking that nesting, as described above, works only by doubling q, i.e., squaring the dimension in each upsampling step of the sequence space, rather than doubling the dimension as we would hope.

4.4. Ensembling

Ensembling is less straightforward than nesting, since it depends on a non-obvious property. We start with a nested sequence, and the key idea is to make subsequent bands of dimensions of the nesting sequence look, relative to each other, identical to their counterparts in the nested band. This starts by arranging the symbols in a suitable order, as outlined in Algorithm 3,

Input: An alphabet Σ of s = 2^q block-matrix symbols.
Output: An ordered alphabet with a nested substructure.
Ensure the placement of the 0 and I symbols in the first and second slots, respectively;
k ← 2;
while k < s do
  for i ← 1 to k−1 do
    x ← Σ[i] + Σ[k];
    Locate the slot j containing x;
    Swap Σ[i+k] with Σ[j];
  k ← 2·k;
Algorithm 3: NestSort: order an alphabet to enable ensembling.

which arranges the symbols in a literally nested algebraic structure of bands

(34) B_{1}\leftarrow(0,I)\,,\;B_{2}\leftarrow(B_{1},B_{1}+x_{1})\,,\;B_{3}\leftarrow(B_{2},B_{2}+x_{2})\,,\;\ldots
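To make the band structure of Eq. (34) concrete, here is a minimal Python sketch of Algorithm 3 (NestSort) using the 2×2 block symbols of the base-4 alphabet from Eq. (27); symbol addition is entrywise XOR, and all variable and function names are ours:

```python
def gf2_add(a, b):
    """Symbol addition: entrywise XOR of two equally sized block matrices."""
    return tuple(tuple(x ^ y for x, y in zip(ra, rb)) for ra, rb in zip(a, b))

def nest_sort(alphabet):
    """Algorithm 3 (NestSort) sketch: reorder symbols into nested bands."""
    s = len(alphabet)
    q = len(alphabet[0])
    zero = tuple(tuple(0 for _ in range(q)) for _ in range(q))
    ident = tuple(tuple(int(i == j) for j in range(q)) for i in range(q))
    sigma = list(alphabet)
    # Place 0 and I in the first and second slots.
    z = sigma.index(zero)
    sigma[0], sigma[z] = sigma[z], sigma[0]
    e = sigma.index(ident)
    sigma[1], sigma[e] = sigma[e], sigma[1]
    k = 2
    while k < s:
        for i in range(1, k):
            x = gf2_add(sigma[i], sigma[k])
            j = sigma.index(x)           # locate the slot containing x
            sigma[i + k], sigma[j] = sigma[j], sigma[i + k]
        k *= 2
    return sigma

# The base-4 alphabet of Eq. (27), deliberately given out of order.
ZERO = ((0, 0), (0, 0))
I2 = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))
B = ((1, 1), (1, 0))
ordered = nest_sort([A, ZERO, B, I2])
```

After sorting, each band satisfies Σ[i+k] = Σ[i] + Σ[k], which is exactly the structure of Eq. (34).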

This idea is based on an interesting property

(35) \mathcal{P}(a+x)=\mathcal{P}(a)\mathcal{P}(x)\,,

of Pascal matrices: summing the symbols is equivalent to multiplying the corresponding Pascal matrices. A sketch of a proof starts by noting that each entry

(36) \binom{j}{i}(a+x)^{j-i}\,,

of \mathcal{P}(a+x) on the left-hand side embodies a binomial expansion with the same range of powers as the corresponding row-column product on the right-hand side, and it then remains to equate the binomial coefficients. Applying this property to the nested structure in Eq. (34), we note that adding a symbol to each member of a subset is equivalent to multiplying the corresponding generator matrices, with a net effect of reordering the points of the subset while preserving their geometry. That is, the sub-sequence of each band is just a reordering of the sequence of the reference band, and hence can be turned into a nested sequence by the same restoration in Eq. (31); in fact the identical sequence, only in a different order. This can be observed in Fig. 1(b), where all (0, 2) components look identical.
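The identity in Eq. (35) is also easy to verify numerically. The sketch below, with helper names of our own choosing, builds the block Pascal matrix over GF(2) for symbols of the base-4 alphabet of Eq. (27) and checks P(a + x) = P(a) P(x) for a = α, x = I:

```python
from math import comb

def gf2_mul(a, b):
    """GF(2) product of square matrices given as tuples of row tuples."""
    n = len(a)
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(n)) & 1 for j in range(n))
        for i in range(n)
    )

def gf2_pow(m, e):
    n = len(m)
    r = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
    for _ in range(e):
        r = gf2_mul(r, m)
    return r

def pascal(sym, nblocks):
    """Block Pascal matrix: block (i, j) is binom(j, i) * sym^(j - i) mod 2."""
    q = len(sym)
    size = nblocks * q
    out = [[0] * size for _ in range(size)]
    for i in range(nblocks):
        for j in range(i, nblocks):
            if comb(j, i) % 2:
                blk = gf2_pow(sym, j - i)
                for r in range(q):
                    for c in range(q):
                        out[i * q + r][j * q + c] = blk[r][c]
    return tuple(tuple(row) for row in out)

I2 = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))                     # the alpha symbol of Eq. (27)
AX = tuple(tuple(x ^ y for x, y in zip(ra, rb)) for ra, rb in zip(A, I2))

# Eq. (35): P(A + I) equals P(A) P(I) over GF(2).
ok = pascal(AX, 8) == gf2_mul(pascal(A, 8), pascal(I2, 8))
```

The check holds for any pair of symbols from a common alphabet, since they commute.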

Ensembling, as described above, imposes substantial correlation between corresponding dimensions in different bands, which might cause undesirable effects in some applications. To mitigate this, we may shuffle the points in different bands, as done when pairing independent 2D sequences (Burley, 2020), but that would break the high-dimensional structure in our case. Fortunately, we may take advantage of the commutative multiplication property of alphabets mentioned in Section 4.1: if we multiply any of the Pascal generator matrices from the left by any symbol from the alphabet, the symbol multiplies all blocks, so the multiplication order can be exchanged and the symbol factored on the right. The operation on the left is a Tezuka scrambling, hence preserves the structure of the sequence, but it is effectively equivalent to a Faure-Tezuka shuffling (see Section 2.4) that reorders the coordinates in the concerned dimension. To use this idea with ensembling, we use 4D symbols to shuffle 2D bands, 16D symbols to shuffle 4D bands, and so on. More details are in our supplementary code.

5. Results and Applications

To evaluate the performance of SZ sequences, we tested them both on the numerical integration of synthetic functions and on rendering. We further include empirical measurements of discrepancy, as well as an analysis of their power spectra. Our implementation is included in the supplemental material.

(Plot columns: f^D_{1,2}*f^D_{3,4}; f^D_{1,2,3,4}; f^D_{1,2}*f^D_{3,4}+f^D_{5,6}*f^D_{7,8}; f^D_{1,2}*f^D_{1,3}*f^D_{1,4}*f^D_{2,3}*f^D_{2,4}*f^D_{3,4}. Rows: linear (g^1) and Gaussian (g^∞).)
Figure 2. Mean relative squared error when integrating a variety of analytic 4- and 8-dimensional functions. In these plots, “Sobol-d0” and “Sobol-d4” denote Sobol points starting at dimension 0 and 4, respectively, and similarly for SZ. SZ-d0 performs as well as Sobol-d0 at power-of-4 sample counts, though it is sometimes worse at intermediate power-of-2 counts. In some cases (e.g., the third column of g^∞), it gives significantly lower error. SZ-d4 generally performs as well as SZ-d0, while Sobol-d4 often has higher error than Sobol-d0. See Fig. 8 for results with the g^0 function.

5.1. Numerical Integration

In order to compare the performance of SZ points to other point sets for integration, we measured mean relative squared error (MRSE) when integrating a variety of simple analytic functions. We follow the approach developed by Jarosz et al. (2019, Section 5.2), which we summarize here. Three 1D functions are used as building blocks:

(37) g^{0}(r)=1-\mathrm{binaryStep}(r,r_{\mathrm{end}})\,,
(38) g^{1}(r)=1-\mathrm{linearStep}(r,r_{\mathrm{start}},r_{\mathrm{end}})\,,
(39) g^{\infty}(r)=\mathrm{e}^{-r^{2}/(2\sigma^{2})}\,,

with r_end = 3/π, r_start = r_end − 0.2, and σ = 1/3. The projection of an n-dimensional point p onto a set of dimensions d_i is denoted by

(40) p_{d_{1},\ldots,d_{n}}=(p_{d_{1}},\ldots,p_{d_{n}})\,,

n-dimensional functions are then defined by taking the vector norm of projected points:

(41) f_{d_{1},\ldots,d_{n}}^{D}(p)=g^{D}(\|p_{d_{1}},\ldots,p_{d_{n}}\|)\,.

These are plots of associated 2D functions:

(Plots of f^0_{1,2}, f^1_{1,2}, and f^∞_{1,2}.)

We used four forms of functions for our tests:

  • f^D_{1,2} * f^D_{3,4}: the product of two 2D functions.

  • f^D_{1,2,3,4}: a fully 4D function.

  • f^D_{1,2} * f^D_{3,4} + f^D_{5,6} * f^D_{7,8}: the sum of two products of two 2D functions.

  • f^D_{1,2} * f^D_{1,3} * f^D_{1,4} * f^D_{2,3} * f^D_{2,4} * f^D_{3,4}: the product of functions of all 2D projections of a 4D point.

For each of these and for each of the three g functions defined above, we computed a reference value and then ran 1,024 independent trials, each taking up to 2^16 samples. Owen scrambling was used to randomize the low-discrepancy sequences. In addition to our SZ sequence and the Sobol sequence, we also measured results with independent uniform random numbers and with the Halton sequence. For SZ and Sobol, we also measured results using samples starting at the 4th dimension, to reflect the case in rendering where such integrands might be encountered after a few dimensions of the sequence had already been consumed. We will use “SZ-d0” and “SZ-d4” to distinguish these, and similarly for Sobol. We measured MRSE at all power-of-two numbers of samples.
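For reference, the building blocks of Eqs. (37)-(39) and (41) can be sketched as follows. We use 0-based dimension indices, and plain Monte Carlo with Python's random module stands in for the low-discrepancy samplers; all names are ours:

```python
import math
import random

R_END = 3 / math.pi
R_START = R_END - 0.2
SIGMA = 1 / 3

def g0(r):
    """Binary step profile, Eq. (37)."""
    return 0.0 if r >= R_END else 1.0

def g1(r):
    """Linear step profile, Eq. (38)."""
    t = min(max((r - R_START) / (R_END - R_START), 0.0), 1.0)
    return 1.0 - t

def ginf(r):
    """Gaussian profile, Eq. (39)."""
    return math.exp(-r * r / (2 * SIGMA * SIGMA))

def f(g, p, dims):
    """Eq. (41): profile g applied to the norm of p's projection onto dims."""
    return g(math.sqrt(sum(p[d] * p[d] for d in dims)))

def mc_estimate(g, dims, n, ndim=4, seed=1):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)^ndim."""
    rng = random.Random(seed)
    return sum(
        f(g, [rng.random() for _ in range(ndim)], dims) for _ in range(n)
    ) / n
```

MRSE is then obtained by averaging (estimate − reference)^2 / reference^2 over independent trials.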

(a) Complex Lighting (b) Depth of Field (c) Complex Light Transport (d) Environment Map (e) Glossy Surfaces (f) Bidirectional Path Tracing
Figure 4. Plots comparing RMSE of SZ-based samplers to state-of-the-art samplers for various common test scenes (insets). G/Z refer to global/Z indexing strategy (Pharr et al., 2023), and SZ-16D is a raw (non-nested) 16D SZ sequence. Beneath each plot we highlight an interesting feature of the scene. The range is extended in (e) as an example of the complex convergence behaviors.

For all functions, the performance of all samplers other than independent is similar for g^0, so we focus on g^1 and g^∞ in the following. Error plots for g^1 and g^∞ are shown in Fig. 2; the plot for g^0 can be seen in Fig. 8 in our supplemental. For both the first and second function forms, SZ-d0 matches Sobol-d0's performance at power-of-4 sample counts, though it has higher error at intermediate powers of two. However, SZ-d4 delivers error equally good as SZ-d0's, while Sobol-d4 is significantly worse. We would expect this trend to continue at higher dimensions, as SZ maintains its (0, 2)- and (0, 4)-sequences, while the quality of lower-dimensional projections of Sobol points worsens. For the third function form, the sum of two products of two 2D functions, both SZ-d0 and SZ-d4 significantly outperform the Sobol sampler. We believe this is due both to SZ providing (0, 2)-sequences for each of the constituent 2D functions and to it providing (0, 4)-sequences for each term. Finally, for the product of all of the 2D projections of a 4D point, we again see SZ-d0 and SZ-d4 matching Sobol at power-of-4 sample counts, and SZ-d4 having much lower error than Sobol-d4.

5.2. Rendering

To evaluate the effectiveness of SZ sequences for rendering, we implemented two Samplers in pbrt (Pharr et al., 2016): one using the global sampling strategy, where the first two dimensions are used for the image plane, and another using Z-Sampling (Ahmed and Wonka, 2020), where a shuffled Z-sequence of the pixels is used to index the samples. We modified the latter, however, to sample all dimensions at once using SZ sequences.

In Fig. 4, we compare these SZ-based samplers to state-of-the-art samplers for various common test scenes. We also included a raw (non-nested) 16D Z-indexed SZ sampler to give a rough idea of the expected performance of randomly chosen Niederreiter (0, 16)- and Faure (0, 17)-sequences. The plots suggest that the baseline performance of SZ is generally superior to Halton, Faure, and Niederreiter, thanks to finer granularity, and is in the same range as Sobol. We also note that the Z-indexed SZ sampler offers a decent aliasing/noise reduction compromise at low/high sampling rates, respectively. That is, it outperforms global samplers at low sampling rates and the original Z sampler at higher rates.

(Crops at 64spp; MRSE / FLIP: Sobol 0.037195 / 0.049664, SZ 0.019270 / 0.047296.)
Figure 5. Comparisons of rendering using the Sobol sequence and using our SZ sequence with the Hangar Interior environment map. Rendering was at 64spp with direct lighting only. Crops show representative results; full images are available in our supplemental material.

Beyond these baseline tests, we implemented a more detailed test in which we evaluated Sobol and SZ points for rendering direct illumination from an environment map, using two error metrics: MRSE and the perceptually-based FLIP metric (Andersson et al., 2020, 2021). For this evaluation, however, we modified pbrt so that the first two dimensions of the sequence are used to select a point inside each pixel area, the next two are used to select a point on the lens, the next two to importance-sample a direction from the environment map's distribution, and then two more are used to sample the BSDF. Note that this differs from pbrt's default assignment of dimensions for integration, which does not consider the possibility of (0, 2^q)-sequences and so consumes additional 1D samples, such as for time and for choosing which light source to sample, from early dimensions of samplers. We chose the Sportscar scene for this evaluation since it includes a variety of complex measured BSDFs, ranging from nearly diffuse to highly specular.

Fig. 5 shows results with a high-dynamic range environment map, including crops of representative parts of the image rendered at 64spp. The SZ sampler provides a meaningful error reduction of 1.93× in MRSE; this error reduction is visually evident in the images. In this case, we can see that SZ does not suffer from the characteristic checkerboard pattern of unconverged Sobol sampling. See Section A for additional results with this scene, including a variety of different environment maps, visualizations of error across entire images, and error measurements at additional sampling rates.

5.3. Discrepancy

The essence of Niederreiter's framework (1992, Chapter 4) is that identifying a sequence like SZ as a (0, 2^q)-sequence in base 2^q is sufficient to qualify it as a low-discrepancy sequence. We nonetheless ran a test in the feasible range to empirically validate our model, and obtained the results in Fig. 6, confirming the theoretical model. Please note that we skipped the first two dimensions, as they are identical to Sobol, Faure, and Niederreiter.

Figure 6. Star discrepancy in (a) three and (b) four dimensions, comparing raw and ensembled SZ sequences to Sobol, measured over 100 Owen-scrambled realizations. All three sequences oscillate about similar averages, with Sobol slightly better overall, though SZ seems to slightly excel at powers of 4 in 4D.

5.4. Frequency Spectra

In Fig. 1 we compare the frequency spectra of corresponding pairs of dimensions of SZ and Sobol sequences. Our construction evidently admits better pairwise projections at power-of-2^q octaves, though the discrepancy plots in Fig. 6 suggest that Sobol behaves better at the more granular power-of-2 octaves. Further, the fact that all consecutive pairs of dimensions of ensembled SZ sequences are dyadic (0, 2)-sequences makes them ripe for optimizations of such sequences like those suggested by Ahmed et al. (2023; 2024) or Doignies et al. (2024). In the supplementary materials we provide more detailed spectral plots, as well as a tool to produce more.

5.5. Time and Space Complexity

As a binary-matrix-based construction, the baseline time and space complexity is that of Sobol sequences, and migrating from Sobol in existing systems is as simple as replacing the matrices. Notably, the matrices are identical for the first two dimensions, hence preserving any special code for inversions (Grünschloß et al., 2012) or fast execution (Ahmed, 2024).

Understanding the anatomy of SZ matrices, however, offers multiple ways to optimize the space and/or time complexity. For a non-trivial example, in the code listing of Fig. 7(a), we exploit the alpha property to generate a complete set of 256 pair-wise stratified points in 16D, as visualized in Fig. 7(b).

Figure 7. (a) Self-contained code example to compute 256 16D points that are pair-wise stratified in all 2D projections. (b) Plots visualizing the generated points without/with Owen scrambling. The diagonal shows the implicit matrices used to compute the points in the dimensions of the respective rows and columns.

In this example, the space complexity is reduced to a single integer, 0x13A5, representing the alpha symbol of the alphabet in Fig. 1(b), while the other symbols are evaluated implicitly by computing subsequent dimensions from preceding ones. Combined with Owen/Xor scrambling, this could represent a viable GPU-friendly sampling solution for some applications, or a self-contained sample generator for hardware integration and/or embedded systems. The example shown is limited to two octaves and does not support nesting, but similar specialized solutions might be tailored to more advanced requirements.

Apart from such specialized hacks and tricks, SZ sequences admit a generic optimization via this “S-P-Z decomposition”:

(42) \mathcal{P}(a)=\begin{pmatrix}I&&&\\ &a^{-1}&&\\ &&a^{-2}&\\ &&&\ddots\\ \end{pmatrix}\mathcal{P}(I)\begin{pmatrix}I&&&\\ &a&&\\ &&a^{2}&\\ &&&\ddots\\ \end{pmatrix}\,.

The middle Pascal factor may be evaluated using the fast diagonal factoring introduced by Ahmed (2024), while the block-diagonal factors are treated as sums of diagonal matrices that translate similarly into bit shift-and-mask operations. This reduces the time and space complexity from O(m) for m-bit precision to O(log m) for the middle P factor and O(2q−1) for the S and Z factors, independent of numeric precision, which is especially significant for 64-bit computations. We measured 11%/23% memory savings and 20×/15× speedups for the first 16/256 dimensions, respectively. An implementation is available in our supplementary materials.
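The decomposition in Eq. (42) can be checked numerically. The sketch below, with names of our own choosing, verifies it over GF(2) for the alpha symbol of the base-4 alphabet, for which a^{-1} = a^2 since a^3 = I:

```python
from math import comb

def mul(a, b):
    """GF(2) product of square matrices given as tuples of row tuples."""
    n = len(a)
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(n)) & 1 for j in range(n))
        for i in range(n)
    )

def pow_(m, e):
    n = len(m)
    r = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
    for _ in range(e):
        r = mul(r, m)
    return r

def pascal(sym, nblocks):
    """Block Pascal matrix: block (i, j) is binom(j, i) * sym^(j - i) mod 2."""
    q = len(sym)
    out = [[0] * (nblocks * q) for _ in range(nblocks * q)]
    for i in range(nblocks):
        for j in range(i, nblocks):
            if comb(j, i) & 1:
                blk = pow_(sym, j - i)
                for r in range(q):
                    for c in range(q):
                        out[i * q + r][j * q + c] = blk[r][c]
    return tuple(tuple(row) for row in out)

def block_diag(blocks):
    """Block-diagonal matrix from a list of equally sized blocks."""
    q = len(blocks[0])
    n = len(blocks)
    out = [[0] * (n * q) for _ in range(n * q)]
    for i, b in enumerate(blocks):
        for r in range(q):
            for c in range(q):
                out[i * q + r][i * q + c] = b[r][c]
    return tuple(tuple(row) for row in out)

I2 = ((1, 0), (0, 1))
A = ((0, 1), (1, 1))    # alpha: A^3 = I, hence A^-1 = A^2
N = 8                    # number of block rows/columns

S = block_diag([pow_(A, (2 * i) % 3) for i in range(N)])  # I, A^-1, A^-2, ...
Z = block_diag([pow_(A, i % 3) for i in range(N)])        # I, A, A^2, ...
ok = pascal(A, N) == mul(mul(S, pascal(I2, N)), Z)
```

Block (i, j) of S · P(I) · Z is a^{-i} · binom(j, i) · a^{j} = binom(j, i) a^{j-i}, which is exactly the corresponding block of P(a).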

5.6. Coding Complexity

Even though it ends up being only a few lines of code, implementing low-discrepancy constructions to compute generator matrices is relatively difficult and quite prone to mistakes, and SZ sequences are no exception. Acknowledging this, we will make our code publicly available, and aim to maintain libraries for C, Python, etc., to encourage the adoption of these sequences. That said, we still encourage readers to try implementing the sequences themselves, as doing so can improve understanding of their properties.

6. Conclusion

In this paper, we enriched the sampling library with a novel construction of binary-based low-discrepancy sequences that scale flexibly with dimension, offering multiple degrees of freedom and control to enable sampling-solution architects to build sequences tailored to their needs. Though started experimentally, our empirical modeling led us close to the very well-established constructions of Faure and Niederreiter, with the primary difference that we replace elementary fields over (power-of-)prime bases by synthetic fields built over binary matrices, which enables very efficient computation. More importantly, our model uniquely offers the capability of ensembling a high-dimensional LD sequence from low-dimensional sub-sequences, which is desirable in graphics applications like rendering, where the integration domain is mostly made of 2D surfaces.

Our sequences are evidently superior to Halton's for CG applications, and through multiple abstract and rendering tests we demonstrated that they benchmark competitively against the well-established Sobol sequences, offering a promising alternative to break their monopoly. Sobol seems to lead in some measures, like discrepancy, while SZ excels in others, like spectral control and speed. We believe, however, that a deeper analysis of the integrator-sampler interaction is needed before arriving at a final conclusion about the performance of these sequences. We therefore expect this work to spark future research aimed at analyzing, improving, and utilizing these sequences.

References

  • Ahmed (2024) Abdalla G. M. Ahmed. 2024. An Implementation Algorithm of 2D Sobol Sequence Fast, Elegant, and Compact. In Eurographics Symposium on Rendering, Eric Haines and Elena Garces (Eds.). The Eurographics Association. https://doi.org/10.2312/sr.20241147
  • Ahmed et al. (2016) Abdalla G. M. Ahmed, Hélène Perrier, David Coeurjolly, Victor Ostromoukhov, Jianwei Guo, Dong-Ming Yan, Hui Huang, and Oliver Deussen. 2016. Low-Discrepancy Blue-Noise Sampling. ACM Trans. Graph. 35, 6, Article 247 (Nov. 2016), 13 pages. https://doi.org/10.1145/2980179.2980218
  • Ahmed et al. (2023) Abdalla G. M. Ahmed, Mikhail Skopenkov, Markus Hadwiger, and Peter Wonka. 2023. Analysis and Synthesis of Digital Dyadic Sequences. ACM Trans. Graph. 42, 6, Article 218 (Dec. 2023), 17 pages. https://doi.org/10.1145/3618308
  • Ahmed and Wonka (2020) Abdalla G. M. Ahmed and Peter Wonka. 2020. Screen-space blue-noise diffusion of monte carlo sampling error via hierarchical ordering of pixels. ACM Trans. Graph. 39, 6, Article 244 (Nov. 2020), 15 pages. https://doi.org/10.1145/3414685.3417881
  • Ahmed and Wonka (2021) Abdalla G. M. Ahmed and Peter Wonka. 2021. Optimizing Dyadic Nets. ACM Trans. Graph. 40, 4, Article 141 (jul 2021), 17 pages. https://doi.org/10.1145/3450626.3459880
  • Andersson et al. (2020) Pontus Andersson, Jim Nilsson, Tomas Akenine-Möller, Magnus Oskarsson, Kalle Åström, and Mark D. Fairchild. 2020. FLIP: A Difference Evaluator for Alternating Images. Proceedings of the ACM on Computer Graphics and Interactive Techniques 3, 2 (2020), 15:1–15:23. https://doi.org/10.1145/3406183
  • Andersson et al. (2021) Pontus Andersson, Jim Nilsson, Peter Shirley, and Tomas Akenine-Möller. 2021. Visualizing Errors in Rendered High Dynamic Range Images. In Eurographics Short Papers. https://doi.org/10.2312/egs.20211015
  • Bratley et al. (1992) Paul Bratley, Bennett L. Fox, and Harald Niederreiter. 1992. Implementation and Tests of Low-Discrepancy Sequences. ACM Trans. Model. Comput. Simul. 2, 3 (July 1992), 195–213. https://doi.org/10.1145/146382.146385
  • Burley (2020) Brent Burley. 2020. Practical Hash-based Owen Scrambling. Journal of Computer Graphics Techniques (JCGT) 10, 4 (29 December 2020), 1–20. http://jcgt.org/published/0009/04/01/
  • Christensen et al. (2018) Per Christensen, Andrew Kensler, and Charlie Kilpatrick. 2018. Progressive Multi-Jittered Sample Sequences. In Computer Graphics Forum, Vol. 37. Wiley Online Library, 21–33.
  • Cook et al. (1984) Robert L. Cook, Thomas Porter, and Loren Carpenter. 1984. Distributed Ray Tracing. In Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’84). Association for Computing Machinery, New York, NY, USA, 137–145. https://doi.org/10.1145/800031.808590
  • Doignies et al. (2024) Bastien Doignies, David Coeurjolly, Nicolas Bonneel, Julie Digne, Jean-Claude Iehl, and Victor Ostromoukhov. 2024. Differentiable Owen Scrambling. ACM Trans. Graph. 43, 6, Article 255 (Nov. 2024), 12 pages. https://doi.org/10.1145/3687764
  • Faure (1982) Henri Faure. 1982. Discrépance de Suites Associées à un Système de Numération (en Dimension s). Acta Arithmetica 41, 4 (1982), 337–351. http://eudml.org/doc/205851
  • Faure and Tezuka (2002) Henri Faure and Shu Tezuka. 2002. Another Random Scrambling of Digital (t,s)-Sequences. In Monte Carlo and Quasi-Monte Carlo Methods 2000, Kai-Tai Fang, Harald Niederreiter, and Fred J. Hickernell (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 242–256.
  • Glassner (1994) Andrew S. Glassner. 1994. Principles of Digital Image Synthesis. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
  • Grünschloß et al. (2012) Leonhard Grünschloß, Matthias Raab, and Alexander Keller. 2012. Enumerating Quasi-Monte Carlo Point Sequences in Elementary Intervals. In Monte Carlo and Quasi-Monte Carlo Methods 2010, Leszek Plaskota and Henryk Woźniakowski (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 399–408.
  • Halton (1960) John H Halton. 1960. On The Efficiency Of Certain Quasi-Random Sequences Of Points In Evaluating Multi-Dimensional Integrals. Numer. Math. 2 (1960), 84–90.
  • Helmer et al. (2021) Andrew Helmer, Per Christensen, and Andrew Kensler. 2021. Stochastic Generation of (t, s) Sample Sequences. In Eurographics Symposium on Rendering - DL-only Track, Adrien Bousseau and Morgan McGuire (Eds.). The Eurographics Association. https://doi.org/10.2312/sr.20211287
  • Hošek and Wilkie (2012) Lukáš Hošek and Alexander Wilkie. 2012. An Analytic Model for Full Spectral Sky-Dome Radiance. ACM Transactions on Graphics 31, 4, Article 95 (July 2012), 9 pages. https://doi.org/10.1145/2185520.2185591
  • Hošek and Wilkie (2013) Lukáš Hošek and Alexander Wilkie. 2013. Adding a Solar-Radiance Function to the Hošek-Wilkie Skylight Model. IEEE Computer Graphics and Applications 33, 3 (2013), 44–52. https://doi.org/10.1109/MCG.2013.18
  • OEIS Foundation Inc. (2015) OEIS Foundation Inc. 2015. Sequence A258745: Order of the general affine group AGL(n,2). The Online Encyclopedia of Integer Sequences. https://oeis.org/A258745 Accessed: 2025-01-18.
  • Jarosz et al. (2019) Wojciech Jarosz, Afnan Enayet, Andrew Kensler, Charlie Kilpatrick, and Per Christensen. 2019. Orthogonal Array Sampling for Monte Carlo Rendering. In Computer Graphics Forum, Vol. 38. Wiley Online Library, 135–147.
  • Joe and Kuo (2008) Stephen Joe and Frances Y. Kuo. 2008. Constructing Sobol Sequences with Better Two-Dimensional Projections. SIAM J. Sci. Comput. 30, 5 (Aug. 2008), 2635–2654. https://doi.org/10.1137/070709359
  • Kollig and Keller (2002) Thomas Kollig and Alexander Keller. 2002. Efficient Multidimensional Sampling. In Computer Graphics Forum, Vol. 21. 557–563.
  • Kuipers and Niederreiter (1974) Lauwerens Kuipers and Harald Niederreiter. 1974. Uniform Distribution of Sequences. John Wiley & Sons, New York. http://opac.inria.fr/record=b1083239 A Wiley-Interscience publication..
  • Mitchell (1992) Don Mitchell. 1992. Ray Tracing and Irregularities of Distribution. In Third Eurographics Workshop on Rendering Proceedings. 61–69.
  • Niederreiter (1987) Harald Niederreiter. 1987. Point Sets and Sequences with Small Discrepancy. Monatshefte für Mathematik 104, 4 (1987), 273–337.
  • Niederreiter (1992) Harald Niederreiter. 1992. Random Number Generation and Quasi-Monte Carlo Methods. SIAM.
  • Ostromoukhov et al. (2024) Victor Ostromoukhov, Nicolas Bonneel, David Coeurjolly, and Jean-Claude Iehl. 2024. Quad-Optimized Low-Discrepancy Sequences. In ACM SIGGRAPH 2024 Conference Papers (Denver, CO, USA) (SIGGRAPH ’24). Association for Computing Machinery, New York, NY, USA, Article 72, 9 pages. https://doi.org/10.1145/3641519.3657431
  • Owen (1995) Art B. Owen. 1995. Randomly Permuted (t,m,s)-Nets and (t, s)-Sequences. In Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Harald Niederreiter and Peter Jau-Shyong Shiue (Eds.). Springer New York, New York, NY, 299–317.
  • Paulin et al. (2021) Loïs Paulin, David Coeurjolly, Jean-Claude Iehl, Nicolas Bonneel, Alexander Keller, and Victor Ostromoukhov. 2021. Cascaded Sobol’ Sampling. ACM Trans. Graph. 40, 6, Article 275 (dec 2021), 13 pages. https://doi.org/10.1145/3478513.3480482
  • Perrier et al. (2018) Hélène Perrier, David Coeurjolly, Feng Xie, Matt Pharr, Pat Hanrahan, and Victor Ostromoukhov. 2018. Sequences with Low-Discrepancy Blue-Noise 2-D Projections. Computer Graphics Forum (Proceedings of Eurographics) 37, 2 (2018), 339–353.
  • Pharr (2019) Matt Pharr. 2019. Efficient Generation of Points that Satisfy Two-Dimensional Elementary Intervals. Journal of Computer Graphics Techniques (JCGT) 8, 1 (27 February 2019), 56–68. http://jcgt.org/published/0008/01/04/
  • Pharr and Humphreys (2004) Matt Pharr and Greg Humphreys. 2004. Physically-Based Rendering: from Theory to Implementation (1st ed.). Morgan Kaufmann Publishers Inc.
  • Pharr et al. (2016) Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based Rendering: From Theory to Implementation (3rd ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
  • Pharr et al. (2023) Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2023. Physically Based Rendering: From Theory to Implementation (4th ed.). MIT Press.
  • Sobol’ (1967) Il’ya Meerovich Sobol’. 1967. On the Distribution of Points in a Cube and the Approximate Evaluation of Integrals. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki 7, 4 (1967), 784–802.
  • Tezuka (1994) Shu Tezuka. 1994. A Generalization of Faure Sequences and its Efficient Implementation. Technical Report, IBM Research, Tokyo Research Laboratory (1994). https://doi.org/10.13140/RG.2.2.16748.16003
  • van der Corput (1935) J.G. van der Corput. 1935. Verteilungsfunktionen. Proceedings of the Nederlandse Akademie van Wetenschappen 38 (1935), 813–821.
  • Warren (2012) Henry S. Warren. 2012. Hacker’s Delight (2nd ed.). Addison-Wesley Professional.

Appendix A Additional Results

A.1. Numerical Integration

Fig. 8 presents MRSE measurements for four synthetic functions using the g0g^{0} function from Sec. 5.1. Note that, unlike with the g1g^{1} and gg^{\infty} functions, all of the considered LD sampling patterns have similar error with the step function, though all are still substantially better than independent sampling.
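Mean relative squared error (MRSE) for these experiments is the squared error of each independent estimate, normalized by the squared reference value, averaged over runs. A minimal sketch, with a toy 1D integrand standing in for the paper's test functions:

```python
import random

def mrse(estimates, reference):
    """Mean relative squared error of independent estimates of the
    same integral, relative to a known reference value."""
    return sum(((e - reference) / reference) ** 2
               for e in estimates) / len(estimates)

# Toy example: Monte Carlo estimates of the integral of x^2 over [0,1],
# whose exact value is 1/3, using 256 uniform samples per run, 50 runs.
random.seed(7)
reference = 1.0 / 3.0
runs = [sum(random.random() ** 2 for _ in range(256)) / 256
        for _ in range(50)]
print(mrse(runs, reference))  # small positive value; shrinks with more samples
```

Replacing the uniform samples with an LD sequence is what drives the differences between the curves in Fig. 8.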

[Figure 8 panels, left to right and top to bottom: f^{D}_{1,2}*f^{D}_{3,4};  f^{D}_{1,2,3,4};  f^{D}_{1,2}*f^{D}_{3,4}+f^{D}_{5,6}*f^{D}_{7,8};  f^{D}_{1,2}*f^{D}_{1,3}*f^{D}_{1,4}*f^{D}_{2,3}*f^{D}_{2,4}*f^{D}_{3,4}]
Figure 8. MRSE with g0g^{0} with a variety of analytic 4- and 8-dimensional functions. As in Fig. 2, “Sobol-d0” and “Sobol-d4” denote Sobol points starting at dimension 0 and 4, respectively, and similarly for SZ.

A.2. Rendering

Fig. 9 presents additional results for the Sportscar scene with additional environment maps to supplement Fig. 5. Together, we have the following environment maps, spanning a range of illumination:

  • Empty Warehouse: the interior of a large warehouse, primarily lit by fluorescent lights but with a small window allowing some daylight. (Fig. 5).

  • Cayley Interior: the interior of a house, mostly illuminated with daylight through the doors of a balcony.

  • Hangar Interior: the interior of an airplane hangar, lit by daylight both through skylights and a large open door.

  • Sky: an analytic sky model (Hošek and Wilkie, 2012, 2013).

For all of these (and for Fig. 5), we report the average error over 50 images rendered with different random seeds to average out small variations in error across independent runs. Reference images were rendered with 32,768 samples per pixel (spp).

For the first two additional environment maps, SZ also reduces error compared to Sobol sampling; Sky has slightly higher error with SZ than with Sobol samples, though error for both is low. In general, the greatest improvement seems to come with the most complex environment maps. Given both a complex BSDF and complex illumination, sampling their full 4D product well is important for accurate rendering. We hypothesize that the scenes with more complex environments see more improvement thanks to SZ providing a (0,4)(0,4)-sequence for those dimensions.

[Figure 9 panels: for each of the three environment maps, reference images, Sobol and SZ renderings with crops, and per-sampler MRSE visualizations. MRSE / FLIP at 64spp:
  Cayley Interior — Sobol 0.018208 / 0.037904; SZ 0.012227 / 0.034762
  Empty Warehouse — Sobol 0.021288 / 0.035138; SZ 0.013279 / 0.032293
  Sky — Sobol 0.001472 / 0.012582; SZ 0.001641 / 0.012173]
Figure 9. Comparisons of images rendered using the Sobol sequence and using our SZ sequence. All images are rendered with 64spp with direct lighting only, using four varied environment maps. Crops show representative results; full images are available in our supplemental material. See Figure 10 for the specification of the heatmap used to visualize MRSE error.

MRSE visualizations for the entire images with both samplers are shown in Figure 10. We can see that the error reduction is greatest with the complex glossy specular BSDF of the car’s paint, though for some scenes there is also some benefit for the near-diffuse ground plane and background.

Error at different numbers of samples per pixel is reported in Tables 1 and 2. SZ has lower MRSE than Sobol over a range of power-of-4 sampling rates and also has lower FLIP error at low sampling rates; at higher sampling rates, both have very low FLIP error. It is interesting to note that the relative performance of SZ and Sobol can be quite different at different sampling rates, even for the same scene. This is somewhat unexpected, as we generally expect MRSE to decrease at a constant rate as the sampling rate increases. We also see that with Sky, SZ does extremely well at 16 and 256spp, both even powers of 4. At those rates, SZ has nearly 3× lower error than Sobol sampling.

We have also measured performance of our sampler on a 32-core AMD 3970X CPU and an NVIDIA A6000 GPU. We find that there is no meaningful performance difference when using the SobolSampler or our SZSampler; the time spent generating sample points is less than 3% of total rendering time.

[Figure 10 panels: full-image MRSE visualizations, SZ (left) and Sobol (right), for Hangar Interior, Cayley Interior, Empty Warehouse, and Sky, with a shared heatmap scale up to 0.200]
Figure 10. Visualizations of MRSE for the four Sportscar scenes, generated with test images rendered at 64spp. The benefit from SZ points is generally small in the relatively diffuse ground plane and background; the largest reductions in error are on the car body, which has a complex BSDF.
       Hangar Interior          Cayley Interior          Empty Warehouse          Sky
spp    Sobol     SZ       Ratio    Sobol    SZ       Ratio    Sobol    SZ       Ratio    Sobol     SZ        Ratio
16     0.128     0.111    1.15×    0.0856   0.0697   1.23×    0.111    0.0803   1.38×    0.0329    0.0111    2.96×
64     0.0372    0.0193   1.93×    0.0182   0.0122   1.49×    0.0213   0.0133   1.60×    0.00147   0.00164   0.897×
256    0.00383   0.00393  0.977×   0.0029   0.00216  1.34×    0.00297  0.00227  1.31×    0.000412  0.000152  2.71×
1024   0.000945  0.000838 1.13×    0.000914 0.000519 1.76×    0.000702 0.000645 1.09×    0.000046  0.000054  0.852×
Table 1. MRSE for the four Sportscar scenes at various power-of-4 numbers of pixel samples. For most scenes and sampling rates, MRSE is lower with the SZ sampler than with the Sobol sampler. For Sky, SZ sampling yields much lower error at 16 and 256spp but slightly higher error at 64 and 1024spp.
       Hangar Interior          Cayley Interior           Empty Warehouse          Sky
spp    Sobol    SZ      Diff.     Sobol    SZ      Diff.      Sobol    SZ      Diff.     Sobol    SZ       Diff.
16     0.101    0.0986  0.00245   0.0852   0.0795  0.00573    0.0762   0.0712  0.00497   0.0413   0.0297   0.0117
64     0.0497   0.0473  0.00237   0.0379   0.0348  0.00314    0.0351   0.0323  0.00285   0.0126   0.0122   0.000409
256    0.024    0.0255  −0.00154  0.0192   0.0194  −0.000228  0.017    0.0167  0.000275  0.00606  0.00622  −0.000159
1024   0.0137   0.0151  −0.00149  0.0111   0.0119  −0.000744  0.00934  0.0107  −0.00132  0.0029   0.00482  −0.00193
Table 2. FLIP error for the test scenes at various power-of-4 numbers of pixel samples. FLIP error is consistently lower with the SZ sampler at 16 and 64 samples per pixel; at 256 and 1024 samples per pixel, the image is nearly converged and FLIP reports nearly equal error for both samplers.