Ripple Attention for Visual Perception with Sub-quadratic Complexity
Abstract
Transformer architectures are now central to sequence modeling tasks. At their heart is the attention mechanism, which enables effective modeling of long-term dependencies in a sequence. Recently, transformers have been successfully applied in the computer vision domain, where 2D images are first segmented into patches and then treated as 1D sequences. Such linearization, however, impairs the notion of spatial locality in images, which bears important visual clues. To bridge the gap, we propose ripple attention, a sub-quadratic attention mechanism for vision transformers. Built upon recent kernel-based efficient attention mechanisms, we design a novel dynamic programming algorithm that weights contributions of different tokens to a query with respect to their relative spatial distances in the 2D space, in linear observed time. Extensive experiments and analyses demonstrate the effectiveness of ripple attention on various visual tasks.
1 Introduction
The transformer architecture (Vaswani et al., 2017) has been dominant in various important natural language processing (NLP) tasks, including machine translation (Vaswani et al., 2017; Dehghani et al., 2019), language understanding (Devlin et al., 2018), language modeling (Dai et al., 2019; Baevski & Auli, 2019) and many others. The cornerstone of a transformer is the attention mechanism (Bahdanau et al., 2014), which computes pair-wise interactions between all token pairs of the input sequence. As a result, it is capable of modeling long-term dependencies in a sequence, which is an important factor in the success of transformers.
Recently, the transformer architecture has also found its applications in the domain of computer vision (CV). It is adopted for image classification (Dosovitskiy et al., 2020; Touvron et al., 2020; Liu et al., 2021a; Yang et al., 2021; Wang et al., 2021b), segmentation (Wang et al., 2020b; Strudel et al., 2021), low-level image processing (Chen et al., 2020), image generation (Parmar et al., 2018), object detection (Carion et al., 2020; Meng et al., 2021) and many other tasks. In these vision applications, a 2D image is represented as a set of patches flattened into a 1D sequence. These patches are analogous to the tokens in sequence modeling tasks that are commonly seen in NLP. Nevertheless, such linearization undermines the inherent local structure of a 2D image, which bears important visual clues (Simoncelli & Olshausen, 2001). There often exist strong correlations within local neighborhoods in an image. Therefore, paying more attention to patches in a closer region could facilitate gathering information that is particularly useful in visual pattern recognition. This is similar to the concept of context in NLP, except that the structural context of a visual token is scattered across the 1D sequence, making it difficult for the transformer to capture such prior knowledge. In contrast, the convolutional neural network (CNN) (Fukushima & Miyake, 1982; LeCun et al., 1989; Krizhevsky et al., 2012), which has been the de-facto architecture in computer vision tasks for decades, utilizes local receptive fields and achieves good performance. The drawback is that, since convolution operations are limited to small receptive fields, they have great difficulty extracting global image features.
Therefore, it is appealing to incorporate the notion of spatial vicinity into the transformer, while still preserving its capacity of modeling long-term dependencies. To bridge the gap, we propose ripple attention (Figure 1; §3), an efficient attention mechanism for vision transformers based on recently proposed linearized attention variants (§2.2). In ripple attention, contributions from different tokens to a query are weighted with respect to their relative spatial distances in the 2D space. These spatial weights are derived through a stick-breaking transformation (§3.2), which promotes local correlations by learning to assign larger weights to spatially closer tokens. We then design a dynamic programming algorithm (§3.3) that is capable of executing ripple attention in linear observed time, taking advantage of the recently proposed linearized attention (§2.2) and the summed-area table technique (§3.3).
We validate our method by conducting extensive experiments on image classification and object detection tasks (§4). Ripple attention significantly improves the accuracy of the original vision transformer in image classification and performs competitively with detection transformers for object detection (§4.3), in asymptotically faster runtime (§5.3). Further analysis on the rippling distance and ablation studies (§5.1) indicate that ripple attention favors contributions from tokens in the vicinity yet preserves global information from long-term dependencies.
2 Preliminary
2.1 Attention Mechanism
Let $\mathbf{Q} \in \mathbb{R}^{N \times d}$ denote a set of query vectors, which attend to $M$ key and value vectors, denoted by matrices $\mathbf{K} \in \mathbb{R}^{M \times d}$ and $\mathbf{V} \in \mathbb{R}^{M \times d}$ respectively. For a query vector $\mathbf{q}_i$ at position $i$, the softmax attention function computes the following quantity (we omit the scaling factor $1/\sqrt{d}$ for simplicity):
$$\mathrm{Attn}(\mathbf{q}_i, \mathbf{K}, \mathbf{V}) = \sum_{j=1}^{M} \frac{\exp(\mathbf{q}_i^\top \mathbf{k}_j)}{\sum_{j'=1}^{M} \exp(\mathbf{q}_i^\top \mathbf{k}_{j'})}\, \mathbf{v}_j \qquad (1)$$
which is an average of the set of value vectors weighted by the normalized similarity between different queries and keys. However, computing this quantity requires the similarity between all pairs of queries and keys, incurring quadratic complexity in both time and memory. This makes the computational overhead for long sequences prohibitive, especially in the case of vision tasks.
2.2 Linearized Attention
To reduce the computational complexity of the attention mechanism, prior works propose to linearize the softmax kernel (Choromanski et al., 2020; Katharopoulos et al., 2020; Peng et al., 2021). In particular, they replace the exponential kernel $\exp(\mathbf{q}^\top \mathbf{k})$ used in the softmax function with a dot product of two feature maps $\phi(\mathbf{q})^\top \phi(\mathbf{k})$, where $\phi: \mathbb{R}^d \to \mathbb{R}^{d'}$. Further details about the choice of feature maps can be found in Appendix C.1. With the feature map, linearized attention can be written as:
$$\mathrm{LinAttn}(\mathbf{q}_i, \mathbf{K}, \mathbf{V}) = \frac{\phi(\mathbf{q}_i)^\top \sum_{j=1}^{M} \phi(\mathbf{k}_j)\, \mathbf{v}_j^\top}{\phi(\mathbf{q}_i)^\top \sum_{j=1}^{M} \phi(\mathbf{k}_j)} \qquad (2)$$
In other words, by grouping together the computations over keys and values, their statistics can be shared among all queries. It therefore achieves linear complexity in both time and memory with respect to the length of the sequence, as we only need to compute $\sum_{j} \phi(\mathbf{k}_j)\mathbf{v}_j^\top$ and $\sum_{j} \phi(\mathbf{k}_j)$ once and then reuse them for each query.
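To make the shared computation concrete, the following PyTorch sketch implements equation 2 for a single sequence; the tensor names and the example feature map are illustrative, not the implementation used in this work.

```python
import torch

def linearized_attention(q, k, v, phi):
    """q, k: (n, d) queries/keys; v: (n, d_v) values; phi maps (n, d) -> (n, d') features."""
    q_prime, k_prime = phi(q), phi(k)
    kv = torch.einsum('nr,nd->rd', k_prime, v)        # sum_j phi(k_j) v_j^T, shared by all queries
    z = k_prime.sum(dim=0)                            # sum_j phi(k_j)
    num = torch.einsum('nr,rd->nd', q_prime, kv)      # numerator of equation 2 for every query
    den = (q_prime @ z).clamp(min=1e-6)               # denominator of equation 2
    return num / den.unsqueeze(-1)

# e.g. with the elu(x) + 1 feature map of Katharopoulos et al. (2020):
# out = linearized_attention(q, k, v, lambda x: torch.nn.functional.elu(x) + 1)
```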
3 Model
In this section, we introduce ripple attention, a novel attention mechanism that features the relative spatial vicinity. We start from a reformulation of the linearized attention (§2.2) under the notation of vicinal groups (§3.1). This reformulation makes it straightforward to introduce a spatial weight associated with each vicinal group, which is the cornerstone of ripple attention. We then describe the derivation of these spatial weights through a stick-breaking transformation (§3.2) and a dynamic programming algorithm (§3.3) based on the summed-area table technique to perform the computation of ripple attention efficiently.
3.1 Ripple Attention
We assume an input image consists of $N = H \times W$ patch tokens. Given a query token at position $t = (x, y)$, we partition the whole set of patch tokens into vicinal groups $\mathcal{N}_0(t), \mathcal{N}_1(t), \ldots, \mathcal{N}_R(t)$ according to their Chebyshev (or chessboard) distances to the query, which means that every token at position $t' = (x', y')$ belongs to $\mathcal{N}_r(t)$ if and only if $\max(|x - x'|, |y - y'|) = r$. Illustrations of such vicinal group partitioning can be found in Figure 1(a) or Figure 1(b), where each group is marked by a different color.
[Figure 1: Illustration of ripple attention. (a) and (b) show the vicinal group partitioning around a query token, with each group marked by a different color; (c) shows the adaptive truncating-and-merging of distal groups.]
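For concreteness, a small sketch (ours, not part of the released code) that assigns every position of an $H \times W$ grid to its vicinal group for a given query position:

```python
import torch

def vicinal_group_index(H, W, x, y):
    """Chebyshev distance of every position on an H x W grid to the query at (x, y);
    positions with the same index form one vicinal group (one colored ring in Figure 1)."""
    rows, cols = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    return torch.maximum((rows - x).abs(), (cols - y).abs())

# vicinal_group_index(5, 5, 2, 2) gives 0 at the centre, 1 on the surrounding ring and
# 2 on the outermost ring.
```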
Under the notation of vicinal groups, we can reformulate the linearized attention as:
$$\mathrm{LinAttn}(\mathbf{q}_t, \mathbf{K}, \mathbf{V}) = \frac{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} \sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'})\, \mathbf{v}_{t'}^\top}{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} \sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'})} \qquad (3)$$
This formulation is computationally equivalent since the summations over $r$ and $t' \in \mathcal{N}_r(t)$ also cover all positions within the image. The essence of ripple attention is to let tokens respond differently to a query according to their relative distances. Typically, tokens close to the query in the 2D space should in general weigh more than tokens far away, since there exist strong local correlations in images. This control is translated into a spatial weight $w_r(t)$ associated with each vicinal group and can be easily introduced into linearized attention (the partitioning is hard to implement in the softmax attention mechanism; more discussions about ripple-softmax and its complexity can be found in Appendix A):
$$\mathrm{RippleAttn}(\mathbf{q}_t, \mathbf{K}, \mathbf{V}) = \frac{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} w_r(t) \sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'})\, \mathbf{v}_{t'}^\top}{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} w_r(t) \sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'})} \qquad (4)$$
We define $w_r(t) \geq 0$ and $\sum_{r=0}^{R} w_r(t) = 1$ to reweigh contributions of different vicinal groups with respect to a query in the attention computation. By respecting the spatial structure of images, ripple attention constructs a structured context for queries, which facilitates the model to reconcile both global and local information. The name ripple attention comes from its similarity to the ripples on the surface of water (Figure 1).
3.2 Spatial Weights
To derive the spatial weights $w_0(t), w_1(t), \ldots, w_R(t)$ (in this section, we sometimes drop the dependence on position $t$ when there is no ambiguity), we first define a sequence of scalars $\beta_0, \beta_1, \ldots, \beta_{R-1}$, where $\beta_r \in (0, 1)$. The spatial weights are parameterized as follows (with $w_R = \prod_{i=0}^{R-1}(1 - \beta_i)$):
$$w_r = \beta_r \prod_{i=0}^{r-1} (1 - \beta_i), \qquad r = 0, 1, \ldots, R-1 \qquad (5)$$
The sequence $\beta_0, \ldots, \beta_{R-1}$ is generated through a small neural network followed by a sigmoid function. See Appendix C.3 for more details. Our construction is analogous to the stick-breaking process in Bayesian nonparametrics (Wasserman, 2006), except that here each $\beta_r$ is a deterministic scalar rather than a random variable. One of its appealing properties is:
$$\sup_{\beta_r \in (0,1)} w_r = \prod_{i=0}^{r-1} (1 - \beta_i) \;\geq\; \prod_{i=0}^{r} (1 - \beta_i) = \sup_{\beta_{r+1} \in (0,1)} w_{r+1}.$$
In other words, we only assume that the supremum of each spatial weight is non-increasing. A stronger constraint would be monotonicity of the weights themselves, for example $w_0 \geq w_1 \geq \cdots \geq w_R$, which holds if $\beta_r \geq 1/2$ for all $r$. We argue that the former is more favorable, because it offers the flexibility to let distant tokens outweigh closer ones when necessary; the effective modeling of such long-term dependencies is deemed key to the success of the transformer architecture as well.
In theory, the stick-breaking transformation produces $R + 1$ different spatial weights, one per vicinal group. However, as the weights become trivially small near the end of this transformation, the computational overhead they incur is not worthwhile. Therefore, we define a threshold $\epsilon$ to adaptively terminate the transformation at vicinal group $\mathcal{N}_k(t)$ when the length of the remaining stick falls below $\epsilon$ (i.e., $\prod_{i=0}^{k-1}(1 - \beta_i) < \epsilon$). We then merge all vicinal groups with $r \geq k$ and share the same weight among them, assuming they contribute equally:
$$w_r = \frac{1}{R - k + 1} \prod_{i=0}^{k-1} (1 - \beta_i), \qquad r = k, k+1, \ldots, R \qquad (6)$$
This truncating-and-merging operation is demonstrated in Figure 1(c), where the threshold $\epsilon$ is reached at an intermediate vicinal group. In this case, all the remaining groups (outside the dashed line) share the same weight according to equation 6. Compared to Figure 1(b), the adaptive ripple allows the transformation to stop before hitting the boundary, which prevents potentially worthless computations for distal groups. At the same time, it still weighs in the contributions from those groups, preserving the ability to capture long-term dependencies.
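The truncated stick-breaking transformation can be sketched as follows; the function below is illustrative (not the trained module of Appendix C.3) and assumes the stick fractions $\beta_r$ have already been produced by the network.

```python
import torch

def stick_breaking_weights(betas, eps=1e-3):
    """betas: (R,) stick fractions in (0, 1). Returns the weights produced before the adaptive
    termination and the leftover stick mass shared by the merged distal groups (equation 6)."""
    weights, remaining = [], torch.tensor(1.0)
    for beta in betas:
        if remaining < eps:                  # adaptive termination once the stick is nearly used up
            break
        weights.append(beta * remaining)     # w_r = beta_r * prod_{i<r} (1 - beta_i), equation 5
        remaining = remaining * (1 - beta)
    return torch.stack(weights), remaining

# Example: stick_breaking_weights(torch.tensor([0.6, 0.5, 0.5, 0.5])) yields
# weights [0.6, 0.2, 0.1, 0.05] and a leftover of 0.05 for all remaining groups.
```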
3.3 Dynamic Programming
The only problem left now is how to compute ripple attention efficiently. A naïve implementation of equation 4 has a time complexity of $\mathcal{O}(N R^2 d)$. Since $R$ is bounded by $\max(H, W) = \mathcal{O}(\sqrt{N})$, the computation is quadratic with respect to the length of the sequence. We give detailed derivations of the runtime complexity of each attention variant in Appendix A.
In this section, we present a dynamic programming algorithm built on the summed-area table (SAT) technique, a classic algorithm in computer graphics and computer vision (Crow, 1984; Viola et al., 2001), which reduces the time complexity of ripple attention to $\mathcal{O}(N R d)$. SAT is an efficient data structure that stores prefix sums for each pixel position of an image such that summations over any window in the image can be retrieved in constant time. For an image with height $H$ and width $W$, it first initializes the table $\mathbf{S}$ by computing the cumulative sum of all the tokens above and to the left of each position $(x, y)$, inclusively, in the 2D plane (in practice, we adopt a linear-complexity implementation which first performs the cumulative summation over the row axis and then the column axis, yielding the same result):
$$\mathbf{S}[x, y] = \sum_{x' \leq x} \sum_{y' \leq y} \mathbf{z}_{x', y'} \qquad (7)$$
where $\mathbf{z}_{x', y'}$ denotes the token (generalized pixel) at position $(x', y')$.
For a square window with center $(x, y)$ and radius $r$ (i.e., a region centered at $(x, y)$ with both its height and width equal to $2r + 1$), the summation over its elements is denoted by $\mathrm{Win}(x, y, r)$ and can be computed in constant time (Crow, 1984):
$$\mathrm{Win}(x, y, r) = \mathbf{S}[x+r, y+r] - \mathbf{S}[x-r-1, y+r] - \mathbf{S}[x+r, y-r-1] + \mathbf{S}[x-r-1, y-r-1] \qquad (8)$$
where out-of-boundary entries of $\mathbf{S}$ are treated as zero (or clipped to the image boundary).
In this work, we consider $\phi(\mathbf{k}_{t'})\mathbf{v}_{t'}^\top$ and $\phi(\mathbf{k}_{t'})$ as generalized pixels within the input image and construct two SATs, $\mathbf{S}^{kv}$ and $\mathbf{S}^{k}$, to compute their prefix summations respectively. According to equation 8, the window sums can be obtained efficiently and are denoted as $\mathrm{Win}^{kv}(x, y, r)$ and $\mathrm{Win}^{k}(x, y, r)$.
We show that the sums of $\phi(\mathbf{k}_{t'})\mathbf{v}_{t'}^\top$ and $\phi(\mathbf{k}_{t'})$ over vicinal group $\mathcal{N}_r(t)$ can also be computed within constant time from the SATs:
$$\sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'})\, \mathbf{v}_{t'}^\top = \mathrm{Win}^{kv}(x, y, r) - \mathrm{Win}^{kv}(x, y, r-1), \qquad \sum_{t' \in \mathcal{N}_r(t)} \phi(\mathbf{k}_{t'}) = \mathrm{Win}^{k}(x, y, r) - \mathrm{Win}^{k}(x, y, r-1).$$
Intuitively, this can be viewed as taking the difference between the smallest square window containing the group and the largest square window enclosed by it. Equipped with SATs, the formulation of ripple attention becomes:
$$\mathrm{RippleAttn}(\mathbf{q}_t, \mathbf{K}, \mathbf{V}) = \frac{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} w_r(t)\, \big[\mathrm{Win}^{kv}(x, y, r) - \mathrm{Win}^{kv}(x, y, r-1)\big]}{\phi(\mathbf{q}_t)^\top \sum_{r=0}^{R} w_r(t)\, \big[\mathrm{Win}^{k}(x, y, r) - \mathrm{Win}^{k}(x, y, r-1)\big]} \qquad (9)$$
In §3.2, we merge all groups with $r \geq k$ assuming equal contributions (equation 6); their joint contribution can be computed in constant time as the difference between the full-image sum and $\mathrm{Win}(x, y, k-1)$. Therefore, given a reasonable hyper-parameter choice of the halting threshold $\epsilon$ (and the maximum rippling distance; §4.2), the algorithm achieves linear observed time in the sequence length. This is due to the fact that after the precomputation of the SATs (in linear complexity), for each query the required summations over vicinal groups can be computed in constant time.
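For illustration, a didactic PyTorch sketch of the SAT-based forward pass of equation 9 for a single image is given below; the tensor names and the handling of the merged distal groups (the last weight entry) are our own simplifications rather than the optimized CUDA kernel used in the experiments.

```python
import torch
import torch.nn.functional as F

def window_sum(sat, x, y, r):
    """Sum over the square window of radius r centred at (x, y), read from a zero-padded SAT."""
    H, W = sat.shape[0] - 1, sat.shape[1] - 1
    x0, y0 = max(x - r, 0), max(y - r, 0)
    x1, y1 = min(x + r, H - 1), min(y + r, W - 1)
    return sat[x1 + 1, y1 + 1] - sat[x0, y1 + 1] - sat[x1 + 1, y0] + sat[x0, y0]

def ripple_attention_forward(phi_q, phi_k, v, weights):
    """phi_q, phi_k: (H, W, r) mapped queries/keys; v: (H, W, d) values;
    weights: (H, W, R + 1) spatial weights, with index R covering all merged distal groups."""
    H, W, r_dim = phi_k.shape
    d = v.shape[-1]
    R = weights.shape[-1] - 1
    kv = phi_k.unsqueeze(-1) * v.unsqueeze(-2)                        # (H, W, r, d) generalized pixels
    # Summed-area tables: cumulative sums over rows then columns, zero-padded on the top/left.
    sat_kv = F.pad(kv.cumsum(0).cumsum(1), (0, 0, 0, 0, 1, 0, 1, 0))
    sat_k = F.pad(phi_k.cumsum(0).cumsum(1), (0, 0, 1, 0, 1, 0))
    out = torch.zeros(H, W, d)
    for x in range(H):
        for y in range(W):
            num, den = torch.zeros(r_dim, d), torch.zeros(r_dim)
            prev_kv, prev_k = 0.0, 0.0
            for ring in range(R + 1):
                radius = ring if ring < R else max(H, W)              # last entry merges all distal groups
                cur_kv = window_sum(sat_kv, x, y, radius)
                cur_k = window_sum(sat_k, x, y, radius)
                num = num + weights[x, y, ring] * (cur_kv - prev_kv)  # equation 9, numerator
                den = den + weights[x, y, ring] * (cur_k - prev_k)    # equation 9, denominator
                prev_kv, prev_k = cur_kv, cur_k
            out[x, y] = (phi_q[x, y] @ num) / (phi_q[x, y] @ den).clamp(min=1e-6)
    return out
```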
Efficient Gradient Computation.
The algorithm discussed above addresses the runtime complexity of the forward pass of ripple attention. In Appendix B, we also present a dynamic programming algorithm to compute gradients for the backward pass, again with sub-quadratic time and linear space complexity. The main idea is to utilize the symmetry of vicinal groups and reformulate the gradient calculations as summations over different groups, where computations can be further reduced using SATs; in contrast, a naïve implementation would come with quadratic complexity.
Complexity Analysis.
As mentioned above, ripple attention runs in $\mathcal{O}(N R d)$ time complexity with the help of dynamic programming on the introduced vicinal groups, and it achieves linear observed runtime in practice with an appropriate hyper-parameter configuration. Algorithm 1 sketches the dynamic programming for the ripple attention mechanism given a single query and a threshold $\epsilon$ (note that the algorithm can be easily executed in parallel for all queries). Due to the flexibility of ripple attention, its time complexity can be further improved if we adapt the step size of the rippling process. For example, we could achieve $\mathcal{O}(N d \log N)$ time complexity if we allow ripples to be exponentially thicker; see §5.2 for a more detailed discussion. As for memory consumption, we observe that the tensor of per-group window summations does not even need to be explicitly materialized, since previously computed results for closer vicinal groups can be reused for more distant vicinal groups. Therefore, the space complexity of ripple attention remains linear in the number of tokens, irrespective of the rippling distance $R$.
4 Experiments
We conduct extensive experiments on image classification and detection tasks to demonstrate the effectiveness of ripple attention.
4.1 Experimental Setup
Datasets
For image classification, we evaluate our model on standard benchmark datasets: (1) ImageNet1k dataset (Deng et al., 2009), consisting of approximately 1,280K/50K images of 1000 classes for training/validation splits respectively; (2) CIFAR-100 (Krizhevsky et al., 2009), which contains 50K images of 100 classes for training and 10K for evaluation. For detection tasks, we conduct our experiment on the COCO benchmark (Lin et al., 2014) consisting of 118k training and 5k validation images respectively.
Baselines
Our model for image classification is based on the vision transformer architecture (ViT) (Dosovitskiy et al., 2020; Touvron et al., 2020), where the attention block is replaced with ripple attention. We compare ripple attention (referred to as ripple hereafter) with various attention mechanisms in ViT:
• deit, which adopts the same architecture as ViT and vanilla softmax attention (to facilitate comparisons and simplify experimental settings, we do not use the distillation technique).
• convit, which imposes a soft convolutional inductive bias on the vanilla attention mechanism.
• deit-la, a deit model equipped with linearized attention (§2.2) instead of softmax attention. We also include several variants that improve deit-la, such as permuteformer (Chen, 2021), spe (Liutkus et al., 2021) and rotary positional embeddings (rope; Su et al., 2021), which incorporate relative positional encodings.
For object detection, we evaluate our model in the general framework of the detection transformer (DETR; Carion et al., 2020) to test the generalization ability of ripple. However, due to the slow convergence of detr, we are unable to run the detr model for a full training schedule given limited computational resources. Instead, we adopt smca (Gao et al., 2021) as our baseline, a variant of detr that greatly speeds up convergence by constraining the attention map on the decoder side. Our model, referred to as smca-ripple, replaces all attention blocks in the transformer encoder with ripple attention. For completeness, we also compare with smca-la, an smca variant that adopts linearized attention in the encoder attention blocks.
4.2 Main Implementation Details
Here we discuss key ingredients for implementing ripple; see Appendix C for more comprehensive implementation details and discussions.
Feature Map Parameterization.
Note that ripple is based on the linearized attention mechanism (§2.2). In this work, the feature map is defined to be deterministic with learnable parameters, which consists of a two-layer MLP with trigonometric and ReLU activations in turn. We find it works well in our experiments. Detailed discussions about our choice and a corresponding ablation study can be found in Appendix C.1.
Rippling Attention Specifications.
In §3.2 we define a threshold $\epsilon$ to control the termination of the rippling process. In practice we find it beneficial to introduce a hard constraint such that the model explicitly limits the maximum distance of rippling propagation to $\delta$ and then merges all the remaining groups. In this way, we not only further reduce the computational overhead, but also encourage the attention mechanism to allocate more weight to distal groups. This can be seen as a stronger version of the halting threshold $\epsilon$, and it is easier to tune due to its more intuitive effect on the rippling process. Given $\delta$, our model is robust to the change of $\epsilon$; therefore, we set $\epsilon$ to 0.001 throughout our experiments and only conduct ablation studies on $\delta$ (§5.1). We find an intermediate value of $\delta$ gives a reasonable trade-off between local and long-term dependencies.
Furthermore, when applied to vision transformers for image classification tasks, we only replace the first several attention layers with ripple attention, while the remaining ones adopt linearized attention.
4.3 Main Results
Results on ImageNet-1K Dataset.
The results of comparing ripple with other models on the ImageNet1k dataset are presented in Table 1. We observe that ripple outperforms both deit-la, upon which ripple is built, and its variants by a large margin. Although deit-la shows a clear performance drop compared to the standard vision transformer deit, ripple still performs better than deit and achieves results comparable to the improved variant convit, while running in asymptotically faster time, which clearly demonstrates the effectiveness of our approach.
Model | # Params | Top-1 Acc. | Top-5 Acc. |
Models with quadratic complexity | |||
deit | 5.72M | 72.20 | 91.10 |
convit (d’Ascoli et al., 2021) | 5.72M | 73.11 | 91.71 |
Models with sub-quadratic complexity | |||
deit-la | 5.76M | 70.67 | 90.16 |
deit-la + sincspe (Liutkus et al., 2021) | 5.84M | 67.32 | 88.14 |
deit-la + convspe (Liutkus et al., 2021) | 6.69M | 67.64 | 88.40 |
deit-la + rope (Su et al., 2021) | 5.76M | 71.19 | 90.48 |
permuteformer (Chen, 2021) | 5.76M | 71.42 | 90.51 |
ripple | 5.78M | 73.02 | 91.56 |
Results on CIFAR-100 Dataset.
We further conduct experiments on the CIFAR-100 dataset and report results in Table 2. ripple outperforms both deit-la and deit by a substantial margin on CIFAR-100, and also achieves competitive performance compared to convit. This suggests that ripple also generalizes well on a relatively smaller dataset. Following the setting in (Wu et al., 2021; Yan et al., 2021), we also compare these models in the absence of absolute positional embeddings. We observe a significantly larger performance gap between vanilla vision transformers and models designed to incorporate the notion of locality (Table 2). This implies ripple could structure the scattered spatial context, which is beneficial to information aggregation among patches. Still, the performance decrease of ripple in the absence of positional embeddings suggests that absolute global positions contain information complementary to the prior knowledge of locality, which is also consistent with a recent study (Islam et al., 2020).
Model | w/ APE | w/o APE | ||||
# Params | Top-1 Acc. | Top-5 Acc. | # Params | Top-1 Acc. | Top-5 Acc. | |
deit-la | 5.42M | 67.00 | 88.57 | 5.36M | 54.04 | 79.66 |
deit | 5.42M | 67.87 | 89.71 | 5.36M | 53.64 | 80.30 |
convit | 5.42M | 74.34 | 92.87 | 5.36M | 73.88 | 92.20 |
ripple | 5.47M | 73.94 | 92.37 | 5.42M | 72.94 | 91.86 |
Results on COCO Benchmark.
In Table 3 we report the results for object detection. Again, we see the same trend that the performance drops by over 2 AP when using linearized attention in the encoder (smca-la). However, smca-ripple improves smca-la on all object scales with a marginal increase of GFLOPs and almost catches up with smca. The mAP gap between smca-ripple and smca is further narrowed down from 0.5 to 0.3 with 108 training epochs. In addition, smca-ripple achieves better results than smca on small scale objects, which is attributed to the promoted locality of ripple attention.
Model | # Params | GFLOPs | Inference time(s) | 50 epochs | 108 epochs | ||||||
AP | APS | APM | APL | AP | APS | APM | APL | ||||
smca | 41.5M | 88 | 0.059 | 41.0 | 21.9 | 44.3 | 59.1 | 42.7 | 22.8 | 46.1 | 60.0 |
smca-la | 41.7M | 79 | 0.062 | 39.1 | 19.8 | 42.8 | 56.5 | 41.1 | 22.0 | 44.5 | 59.0 |
smca-ripple | 41.8M | 80 | 0.065 | 40.5 | 22.1 | 44.1 | 57.7 | 42.3 | 23.2 | 45.6 | 60.0 |
5 Analysis
5.1 Inspecting the Rippling Process
Model | locality | global dep. | $\delta$ | Speed | Top-1 Acc. | Top-5 Acc. |
ripple | ✓ | ✓ | 2 | 832 | 72.65 | 91.83 |
4 | 792 | 73.94 | 92.37 | |||
8 | 578 | 73.37 | 92.21 | |||
16 | 382 | 73.48 | 92.25 | |||
fixed-ripple | ✓ | ✓ | 4 | 795 | 71.34 | 90.77 |
logarithmic-ripple | ✓ | ✓ | – | 810 | 73.05 | 91.78 |
truncated-ripple | ✓ | ✗ | 4 | 820 | 72.18 | 91.66 |
ripple w/o sbt | ✗ | ✓ | 4 | 784 | 71.94 | 91.83 |
deit-la | ✗ | ✓ | – | 2664 | 67.00 | 88.57 |
On the Effect of Maximum Rippling Distances.
The maximum rippling distance $\delta$ defined in §4.2 controls the boundary of ripple with informative spatial weights. To evaluate its effect on modeling performance, we vary the maximum rippling distance and report the results on the CIFAR-100 dataset in Table 4. Overall, ripple performs well with a moderate or larger $\delta$. If $\delta$ is too small, the performance drops significantly, although it still outperforms deit-la. This can be attributed to the fact that if the stick-breaking transformation terminates early, the query attends mostly to its immediate spatial neighbors while not sufficiently respecting global dependencies.
On the Effect of Stick Breaking Transforms.
We construct a variant of ripple with fixed, exponentially-decayed spatial weights (i.e., $\beta_r \equiv 0.5$, so that $w_r = 2^{-(r+1)}$), which is denoted by fixed-ripple. This variant appears to be a trivial solution for incorporating locality into the model, although it does not respect potentially strong long-term dependencies and only assigns diminishing weights to distal groups. We find that ripple with hard-coded weights still performs better than deit-la, which indicates the effectiveness of recovering spatial structures in transformers. Our stick-breaking transformation gives a further boost over the fixed-weight baseline, thanks to its flexibility in the spatial weights. We also consider a baseline ripple w/o sbt, which replaces the stick-breaking transformation (§3.2) with a simple softmax function to generate spatial weights. This variant does not promote any locality but has the potential to learn such patterns implicitly. It performs slightly better than hard-coded weights and much worse than ripple, verifying the effectiveness of the stick-breaking transformation.
To further investigate how spatial weights generated by our proposed stick-breaking transform (SBT; §3.2) deviate from fixed-ripple, we plot the training dynamics of the average Jensen-Shannon divergence (JSD) between the distributions induced by SBT and fixed-ripple, shown in Figure 3. The JSD scores are averaged over all training samples, for each of which we further average over all attention blocks and heads. Intuitively, a higher JSD value reflects a larger discrepancy between the induced distributions. Since the logits in SBT are usually initialized around 0, the spatial weights are close to the exponential weights during the early stage of training; however, as soon as training starts, the JSD value rises sharply, which is possibly due to balancing between global and local information; after that, the curve decreases slightly, indicating that the mechanism might tend to favor local correlations; finally it plateaus at a high JSD value, which indicates that the induced distribution does not simply degenerate to a fixed distribution nor to vanilla linearized attention (with uniform weights).
On the Effect of Global and Local Information.
To demonstrate the relation between global and local information, we design another baseline, truncated-ripple, which puts a hard termination on the rippling process such that all distant groups beyond $\delta$ are discarded (i.e., $w_r = 0$ for $r > \delta$) instead of merged. This results in a limited receptive field without global dependency modeling. As shown in Table 4, the comparison among truncated-ripple, ripple and deit-la reveals that both global and local information play an important role in modeling, while the notion of locality is possibly more important than global connectivity in vision tasks, which concurs with previous findings (Dosovitskiy et al., 2020; d'Ascoli et al., 2021).
5.2 Alternative Partitioning schemes for Vicinal Groups
Ripple attention is a flexible framework for balancing running time complexity and predictive accuracy. To explore this, we compare the full ripple attention against a rippling process where ripples get exponentially thicker, so that the process reaches the image boundary in a logarithmic number of steps. Formally, recall that the vicinal groups with respect to a query token at position $t = (x, y)$ are defined such that every token at position $t' = (x', y')$ belongs to $\mathcal{N}_r(t)$ if and only if $\max(|x - x'|, |y - y'|) = r$. We generalize this notion by relaxing the equality condition, that is, $t' \in \mathcal{N}_g(t)$ if $2^{g-1} \leq \max(|x - x'|, |y - y'|) < 2^{g}$, which allows the thickness of vicinal groups to grow exponentially instead of staying constant. In this way, the number of vicinal groups is $\mathcal{O}(\log N)$. This method, which we refer to as logarithmic-ripple, enjoys $\mathcal{O}(N d \log N)$ time complexity and becomes more efficient than the base ripple attention. As reported in Table 4, logarithmic-ripple also outperforms deit-la by a large margin, although it leads to a clear performance drop compared to the full ripple attention. This may be because logarithmic-ripple processes visual tokens at a coarser-grained level. Nevertheless, logarithmic-ripple demonstrates the flexibility of our framework: one can trade task accuracy for higher efficiency and vice versa. More details and run-time comparisons can be found in §D.2.
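For reference, the logarithmic partition can be sketched as follows (an illustrative helper, not the paper's code), reusing the Chebyshev-distance formulation of §3.1.

```python
import torch

def log_vicinal_group_index(H, W, x, y):
    """Exponentially thickening rings: group 0 is the query itself, and group g >= 1 holds
    positions whose Chebyshev distance to (x, y) lies in [2^(g-1), 2^g)."""
    rows, cols = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    dist = torch.maximum((rows - x).abs(), (cols - y).abs())
    groups = dist.clamp(min=1).float().log2().floor().long() + 1
    return torch.where(dist == 0, torch.zeros_like(dist), groups)
```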
5.3 Empirical Running Time and Memory Consumption
To verify the advantage of asymptotically faster running complexity in ripple, we conduct a simulation experiment on vision transformers to compare the empirical running time and memory consumption of ripple against its baselines under different numbers of tokens. The detailed setup can be found in Appendix E. Figure 2 demonstrates the comparison results. As mentioned in §3.3, both deit and ripple (Naïve) come with quadratic complexity in the number of tokens. We observe that ripple with dynamic programming (DP) performs significantly better than ripple (Naïve), which demonstrates the effectiveness of our dynamic programming algorithm. Furthermore, ripple behaves similarly to deit-la as the number of tokens increases, verifying that it can be executed in linear observed time. When processing a large number of tokens, ripple often achieves a 5× or even 10× reduction in running time and memory compared to its quadratic counterparts.
6 Related Work
Transformer architectures (Vaswani et al., 2017) were first introduced for neural machine translation. Recently, researchers have begun to apply the transformer model to the computer vision domain, showing promising results in various tasks, such as image generation (Parmar et al., 2018), video action recognition (Bertasius et al., 2021; Liu et al., 2021b), segmentation (Wang et al., 2020b; Strudel et al., 2021), object detection (Carion et al., 2020; Gao et al., 2021), low-level image processing (Chen et al., 2020) and image classification (Dosovitskiy et al., 2020; Touvron et al., 2020; Liu et al., 2021a). A large body of research has been devoted to improving the efficiency and effectiveness of vision transformers (Dosovitskiy et al., 2020). Recent advances improve the original vision transformer from various perspectives, such as data-efficient training (Touvron et al., 2020), adopting pyramid architectures (Wang et al., 2021a; Liu et al., 2021a; Heo et al., 2021) and incorporating the notion of locality, which can be done by applying convolutional modules in the architecture (Li et al., 2021; Wu et al., 2021; Yan et al., 2021; Xu et al., 2021; Yuan et al., 2021; d'Ascoli et al., 2021), restricting the scope of self-attention (Liu et al., 2021a; Dong et al., 2021; Chen et al., 2021) or initializing self-attention maps as a convolution kernel (d'Ascoli et al., 2021). In contrast to these prior works, we directly model locality inside the attention mechanism while still permitting long-term dependencies, without relying on any convolutional operations or limiting the receptive field; at the same time, ripple attention runs in linear observed time so that the quadratic bottleneck in standard vision transformers can be greatly alleviated. Our work is orthogonal to previous works that modify the transformer architecture, and it is worth exploring their combination to improve the overall vision transformer model design.
Our model is built on the linearized attention mechanism, which approximates the softmax kernel with the dot product of feature maps. The feature maps can be stochastic, as in RFA (Peng et al., 2021) and Performer (Choromanski et al., 2020), or deterministic (Katharopoulos et al., 2020; Schlag et al., 2021). Recently, many works have been proposed to improve linearized attention by incorporating relative positional encodings (Liutkus et al., 2021; Luo et al., 2021; Chen, 2021; Su et al., 2021). Other efficient attention mechanisms include methods that constrain the attention pattern to be sparse (Child et al., 2019; Ho et al., 2019; Kitaev et al., 2020) or utilize a low-rank approximation by projecting input sequences to fewer key-value pairs (Wang et al., 2020a). A comprehensive review of recent advances in efficient attention mechanisms can be found in (Tay et al., 2020a, b).
7 Conclusion
In this work, we present ripple attention, a novel attention mechanism for visual perception with sub-quadratic complexity. In ripple attention, contributions of different tokens to a query are weighted with respect to their spatial distances in the 2D space. We design a dynamic programming algorithm that computes weighted contributions for all queries in linear observed time and derive the spatial weights through an adaptive stick-breaking transformation. We conduct extensive experiments and analyses to demonstrate the effectiveness of ripple attention.
Acknowledgements
We thank the anonymous reviewers for their valuable suggestions that greatly helped improve this work. This research was supported in part by the joint research scheme of the National Natural Science Foundation of China (NSFC) and the Research Grants Council (RGC) under grant number N_HKU714/21.
References
- Baevski & Auli (2019) Baevski, A. and Auli, M. Adaptive input representations for neural language modeling. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByxZX20qFQ.
- Bahdanau et al. (2014) Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
- Berman et al. (2019) Berman, M., Jégou, H., Vedaldi, A., Kokkinos, I., and Douze, M. Multigrain: a unified image embedding for classes and instances. arXiv preprint arXiv:1902.05509, 2019.
- Bertasius et al. (2021) Bertasius, G., Wang, H., and Torresani, L. Is space-time attention all you need for video understanding? In Proceedings of the International Conference on Machine Learning (ICML), July 2021.
- Carion et al. (2020) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. End-to-end object detection with transformers. In European Conference on Computer Vision, pp. 213–229. Springer, 2020.
- Chen et al. (2021) Chen, C.-F., Panda, R., and Fan, Q. Regionvit: Regional-to-local attention for vision transformers. arXiv preprint arXiv:2106.02689, 2021.
- Chen et al. (2020) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., and Gao, W. Pre-trained image processing transformer. arXiv preprint arXiv:2012.00364, 2020.
- Chen (2021) Chen, P. Permuteformer: Efficient relative position encoding for long sequences. arXiv preprint arXiv:2109.02377, 2021.
- Child et al. (2019) Child, R., Gray, S., Radford, A., and Sutskever, I. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
- Choromanski et al. (2020) Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
- Crow (1984) Crow, F. C. Summed-area tables for texture mapping. In Proceedings of the 11th annual conference on Computer graphics and interactive techniques, pp. 207–212, 1984.
- Cubuk et al. (2020) Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703, 2020.
- Dai et al. (2019) Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q., and Salakhutdinov, R. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988, Florence, Italy, 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P19-1285.
- d’Ascoli et al. (2021) d’Ascoli, S., Touvron, H., Leavitt, M., Morcos, A., Biroli, G., and Sagun, L. Convit: Improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697, 2021.
- Dehghani et al. (2019) Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., and Kaiser, L. Universal transformers. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7.
- Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009.
- Devlin et al. (2018) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Dong et al. (2021) Dong, X., Bao, J., Chen, D., Zhang, W., Yu, N., Yuan, L., Chen, D., and Guo, B. Cswin transformer: A general vision transformer backbone with cross-shaped windows. arXiv preprint arXiv:2107.00652, 2021.
- Dosovitskiy et al. (2020) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- Fukushima & Miyake (1982) Fukushima, K. and Miyake, S. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and cooperation in neural nets, pp. 267–285. Springer, 1982.
- Gao et al. (2021) Gao, P., Zheng, M., Wang, X., Dai, J., and Li, H. Fast convergence of detr with spatially modulated co-attention. arXiv preprint arXiv:2101.07448, 2021.
- Glorot & Bengio (2010) Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
- Heo et al. (2021) Heo, B., Yun, S., Han, D., Chun, S., Choe, J., and Oh, S. J. Rethinking spatial dimensions of vision transformers. arXiv preprint arXiv:2103.16302, 2021.
- Ho et al. (2019) Ho, J., Kalchbrenner, N., Weissenborn, D., and Salimans, T. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019.
- Hoffer et al. (2020) Hoffer, E., Ben-Nun, T., Hubara, I., Giladi, N., Hoefler, T., and Soudry, D. Augment your batch: Improving generalization through instance repetition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8129–8138, 2020.
- Huang et al. (2016) Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. Deep networks with stochastic depth. In European conference on computer vision, pp. 646–661. Springer, 2016.
- Islam et al. (2020) Islam, M. A., Jia, S., and Bruce, N. D. B. How much position information do convolutional neural networks encode? In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJeB36NKvB.
- Kasai et al. (2021) Kasai, J., Peng, H., Zhang, Y., Yogatama, D., Ilharco, G., Pappas, N., Mao, Y., Chen, W., and Smith, N. A. Finetuning pretrained transformers into rnns. arXiv preprint arXiv:2103.13076, 2021.
- Katharopoulos et al. (2020) Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156–5165. PMLR, 2020.
- Kitaev et al. (2020) Kitaev, N., Kaiser, L., and Levskaya, A. Reformer: The efficient transformer. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgNKkHtvB.
- Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
- Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097–1105, 2012.
- LeCun et al. (1989) LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
- Li et al. (2021) Li, Y., Zhang, K., Cao, J., Timofte, R., and Van Gool, L. Localvit: Bringing locality to vision transformers. arXiv preprint arXiv:2104.05707, 2021.
- Lin et al. (2014) Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740–755. Springer, 2014.
- Liu et al. (2021a) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021a.
- Liu et al. (2021b) Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., and Hu, H. Video swin transformer. arXiv preprint arXiv:2106.13230, 2021b.
- Liutkus et al. (2021) Liutkus, A., Cífka, O., Wu, S.-L., Simsekli, U., Yang, Y.-H., and Richard, G. Relative positional encoding for transformers with linear complexity. In International Conference on Machine Learning, pp. 7067–7079. PMLR, 2021.
- Loshchilov & Hutter (2016) Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
- Loshchilov & Hutter (2019) Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
- Luo et al. (2021) Luo, S., Li, S., Cai, T., He, D., Peng, D., Zheng, S., Ke, G., Wang, L., and Liu, T.-Y. Stable, fast and accurate: Kernelized attention with relative positional encoding. arXiv preprint arXiv:2106.12566, 2021.
- Meng et al. (2021) Meng, D., Chen, X., Fan, Z., Zeng, G., Li, H., Yuan, Y., Sun, L., and Wang, J. Conditional detr for fast training convergence. arXiv preprint arXiv:2108.06152, 2021.
- Parmar et al. (2018) Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, L., Shazeer, N., Ku, A., and Tran, D. Image transformer. In International Conference on Machine Learning, pp. 4055–4064. PMLR, 2018.
- Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026–8037, 2019.
- Peng et al. (2021) Peng, H., Pappas, N., Yogatama, D., Schwartz, R., Smith, N. A., and Kong, L. Random feature attention. arXiv preprint arXiv:2103.02143, 2021.
- Schlag et al. (2021) Schlag, I., Irie, K., and Schmidhuber, J. Linear transformers are secretly fast weight programmers. In International Conference on Machine Learning, pp. 9355–9366. PMLR, 2021.
- Shaw et al. (2018) Shaw, P., Uszkoreit, J., and Vaswani, A. Self-attention with relative position representations. In NAACL-HLT (2), 2018.
- Simoncelli & Olshausen (2001) Simoncelli, E. P. and Olshausen, B. A. Natural image statistics and neural representation. Annual review of neuroscience, 24(1):1193–1216, 2001.
- Strudel et al. (2021) Strudel, R., Garcia, R., Laptev, I., and Schmid, C. Segmenter: Transformer for semantic segmentation. arXiv preprint arXiv:2105.05633, 2021.
- Su et al. (2021) Su, J., Lu, Y., Pan, S., Wen, B., and Liu, Y. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021.
- Tay et al. (2020a) Tay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., Rao, J., Yang, L., Ruder, S., and Metzler, D. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020a.
- Tay et al. (2020b) Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. Efficient transformers: A survey. arXiv preprint arXiv:2009.06732, 2020b.
- Touvron et al. (2020) Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
- Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
- Viola et al. (2001) Viola, P., Jones, M., et al. Robust real-time object detection. International journal of computer vision, 4(34-47):4, 2001.
- Wang et al. (2020a) Wang, S., Li, B., Khabsa, M., Fang, H., and Ma, H. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020a.
- Wang et al. (2021a) Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122, 2021a.
- Wang et al. (2021b) Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. Pvtv2: Improved baselines with pyramid vision transformer. arXiv preprint arXiv:2106.13797, 2021b.
- Wang et al. (2020b) Wang, Y., Xu, Z., Wang, X., Shen, C., Cheng, B., Shen, H., and Xia, H. End-to-end video instance segmentation with transformers. arXiv preprint arXiv:2011.14503, 2020b.
- Wasserman (2006) Wasserman, L. All of nonparametric statistics. Springer Science & Business Media, 2006.
- Wightman (2019) Wightman, R. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
- Wu et al. (2021) Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., and Zhang, L. Cvt: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808, 2021.
- Xiao et al. (2021) Xiao, T., Singh, M., Mintun, E., Darrell, T., Dollár, P., and Girshick, R. Early convolutions help transformers see better. arXiv preprint arXiv:2106.14881, 2021.
- Xu et al. (2021) Xu, Y., Zhang, Q., Zhang, J., and Tao, D. Vitae: Vision transformer advanced by exploring intrinsic inductive bias. arXiv preprint arXiv:2106.03348, 2021.
- Yan et al. (2021) Yan, H., Li, Z., Li, W., Wang, C., Wu, M., and Zhang, C. Contnet: Why not use convolution and transformer at the same time? arXiv preprint arXiv:2104.13497, 2021.
- Yang et al. (2021) Yang, J., Li, C., Zhang, P., Dai, X., Xiao, B., Yuan, L., and Gao, J. Focal attention for long-range interactions in vision transformers. Advances in Neural Information Processing Systems, 34, 2021.
- Yuan et al. (2021) Yuan, K., Guo, S., Liu, Z., Zhou, A., Yu, F., and Wu, W. Incorporating convolution designs into visual transformers. arXiv preprint arXiv:2103.11816, 2021.
- Yun et al. (2019) Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032, 2019.
- Zhang et al. (2017) Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
- Zhong et al. (2020) Zhong, Z., Zheng, L., Kang, G., Li, S., and Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 13001–13008, 2020.
Appendices
Appendix A Analysis of Runtime Complexity
In this section, we give more details about the runtime complexity (§3.3) for different attention mechanisms in Table 5. In particular, we focus on variants (1) Ripple-softmax and (2) Ripple (naïve), since the rest have been clarified in the main text.
Complexity of Ripple-softmax.
Ripple-softmax aims to inject the radial bias of rippling into vanilla softmax attention. Here we present two possible ways to achieve this goal, either explicitly or implicitly. Given a query, we could perform vanilla attention over each of its vicinal groups in turn so that the rippling effect is explicitly encoded. Since the number of tokens within the vicinal group of distance $r$ is directly proportional to $r$ (it is simply the difference between the areas of the square windows of radii $r$ and $r-1$), and there are $\mathcal{O}(\sqrt{N})$ groups in total, the overall complexity for one query is $\mathcal{O}(Nd)$; considering all queries then results in $\mathcal{O}(N^2 d)$ complexity (along with a large constant). On the other hand, we could also add specific spatial weights directly to the attention matrix, implicitly enforcing the radial bias. This is similar to relative positional encodings (Shaw et al., 2018), but in this case the overall complexity is at least $\mathcal{O}(N^2 d)$ due to the computation of attention matrices, which is as inefficient as vanilla softmax attention in terms of complexity. In this work, we simply refer to the explicit method as Ripple-softmax.
Complexity of Ripple (naïve).
Naïvely implementing ripple attention comes with $\mathcal{O}(N^2 d)$ complexity, as mentioned at the beginning of §3.3. This is due to the fact that for each query, we have to first sum over all tokens in one vicinal group, which costs time proportional to the group size (see the analysis in the paragraph above), and then aggregate over all groups. This results in $\mathcal{O}(Nd)$ complexity per query, and $\mathcal{O}(N^2 d)$ for all queries (also with a large constant; see the empirical running time comparison in §5.3).
Appendix B Efficient Gradient Computation in Ripple Attention
To perform gradient back-propagation for ripple attention, a naive implementation would directly adopt automatic differentiation, since all operations in our forward pass (Algorithm 1) are differentiable; however, since many computations in Algorithm 1 overlap with each other, this would lead to substantially repetitive calculations. In addition, we find it even takes quadratic time and memory, which is highly inefficient compared to the forward pass.
In this section, we present an algorithm based on dynamic programming to perform efficient back-propagation for ripple attention, which again comes with sub-quadratic complexity. Recall that given a single query $\mathbf{q}_t$ and all the keys and values $\mathbf{K}, \mathbf{V}$, ripple attention executes the following computation during the forward pass (without loss of generality, we merge the feature map into the vector representations of queries and keys to simplify notation):
$$\mathbf{o}_t = \frac{\mathbf{q}_t^\top \sum_{r=0}^{R} w_r(t) \sum_{t' \in \mathcal{N}_r(t)} \mathbf{k}_{t'}\, \mathbf{v}_{t'}^\top}{\mathbf{q}_t^\top \sum_{r=0}^{R} w_r(t) \sum_{t' \in \mathcal{N}_r(t)} \mathbf{k}_{t'}} \qquad (10)$$
The main difference between linearized attention and ripple attention lies in the computation of the summations over $\mathbf{k}_{t'}\mathbf{v}_{t'}^\top$ and $\mathbf{k}_{t'}$. Therefore, we focus on calculating gradients for the following quantity
$$\mathbf{u}_t = \sum_{r=0}^{R} w_r(t) \sum_{t' \in \mathcal{N}_r(t)} \mathbf{z}_{t'} \qquad (11)$$
where $\mathbf{z}_{t'}$ denotes a $c$-dimensional vector located at position $t'$, which could be $\mathbf{k}_{t'}\mathbf{v}_{t'}^\top$ (unrolled) or $\mathbf{k}_{t'}$ (we focus on this general form since the derived algorithm applies to both the numerator and the denominator). The remaining computation in ripple attention (e.g., the dot product with $\mathbf{q}_t$) can be easily handled by standard automatic differentiation. We are mainly interested in computing gradients with respect to the spatial weights $w_r(t)$ and the inputs $\mathbf{z}_{t'}$.
In ripple attention, we maintain summed-area tables (SATs) to efficiently retrieve the partial reductions over vicinal groups (see Algorithm 1 for more details). Although all operations in Algorithm 1 are differentiable and thus admit the use of automatic differentiation to calculate gradients, doing so is very inefficient since the computations of most intermediate nodes in the computation graph (for example, the window sums and per-group partial reductions in Algorithm 1) overlap with each other, resulting in a large amount of repeated computation.
Here we inspect the form of equation 11 and show that the properties of vicinal groups can be exploited to derive efficient gradient computation.
Gradients with respect to spatial weights.
We assume that the gradients $\partial \mathcal{L} / \partial \mathbf{u}_t$, that is, the gradients of our loss objective w.r.t. the output at all positions, are available during back-propagation. According to the chain rule, the partial derivative of the objective w.r.t. $w_r(t)$ has the following form:
$$\frac{\partial \mathcal{L}}{\partial w_r(t)} = \sum_{j} \frac{\partial \mathcal{L}}{\partial u_{t,j}} \frac{\partial u_{t,j}}{\partial w_r(t)} = \sum_{j} \frac{\partial \mathcal{L}}{\partial u_{t,j}} \sum_{t' \in \mathcal{N}_r(t)} z_{t',j},$$
where $z_{t',j}$ and $u_{t,j}$ denote the $j$-th dimension of $\mathbf{z}_{t'}$ and the output $\mathbf{u}_t$ respectively. The first equality holds since the spatial weights at every position only affect the output at that position; but since the same spatial weight applies to all dimensions of $\mathbf{u}_t$, we have to reduce over the embedding dimension to compute the partial derivative. Similar to the forward pass computation (equation 9), we recognize that the inner summation over the vicinal group can again be computed efficiently by utilizing SATs with Algorithm 1.
Gradients with respect to $\mathbf{z}$.
The partial derivative w.r.t. an element $z_{s,j}$ (the $j$-th dimension of the vector at position $s$) can be written as
$$\frac{\partial \mathcal{L}}{\partial z_{s,j}} = \sum_{t} \frac{\partial \mathcal{L}}{\partial u_{t,j}} \sum_{r=0}^{R} w_r(t)\, \mathbb{1}[s \in \mathcal{N}_r(t)] \qquad (12)$$
where we define $\mathbb{1}[\cdot]$ as the indicator function that is set to 1 if $s \in \mathcal{N}_r(t)$ and 0 otherwise. A naive way to compute the partial derivatives above has $\mathcal{O}(N^2 d)$ complexity, since for every key vector at position $s$ we need to sum its influence over all positions $t$. However, we show that we can again solve them via dynamic programming.
Our key observation is that the vicinal group relation is symmetric in its arguments, that is, $s \in \mathcal{N}_r(t)$ if and only if $t \in \mathcal{N}_r(s)$. Then the partial derivative (equation 12) is equivalent to
$$\frac{\partial \mathcal{L}}{\partial z_{s,j}} = \sum_{r=0}^{R} \sum_{t \in \mathcal{N}_r(s)} w_r(t)\, \frac{\partial \mathcal{L}}{\partial u_{t,j}}.$$
Thanks to the symmetry, the computation of the partial derivatives is converted into reductions over vicinal groups, which can again be solved effectively by dynamic programming (§3.3) in sub-quadratic time; this involves instantiating SATs for the quantities $w_r(t)\, \partial \mathcal{L} / \partial u_{t,j}$ over all positions $t$. Equipped with this result, substituting $\mathbf{k}_{t'}\mathbf{v}_{t'}^\top$ or $\mathbf{k}_{t'}$ for $\mathbf{z}_{t'}$ yields the gradient terms for the numerator and denominator, respectively.
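To make the reduction concrete, the following sketch (ours, not the released backward kernel) computes the gradient with respect to $\mathbf{z}$ in equation 12 by building, for each vicinal-group index, an SAT over the weighted gradient field; `window_sum` is the same zero-padded helper as in the forward-pass sketch of §3.3.

```python
import torch
import torch.nn.functional as F

def window_sum(sat, x, y, r):
    """Sum over the square window of radius r centred at (x, y), read from a zero-padded SAT."""
    H, W = sat.shape[0] - 1, sat.shape[1] - 1
    x0, y0 = max(x - r, 0), max(y - r, 0)
    x1, y1 = min(x + r, H - 1), min(y + r, W - 1)
    return sat[x1 + 1, y1 + 1] - sat[x0, y1 + 1] - sat[x1 + 1, y0] + sat[x0, y0]

def ripple_grad_z(grad_u, weights):
    """grad_u: (H, W, c) gradients w.r.t. the outputs u_t; weights: (H, W, R + 1) spatial weights.
    Returns grad_z[s] = sum_r sum_{t in N_r(s)} w_r(t) * grad_u[t], exploiting the symmetry."""
    H, W, c = grad_u.shape
    R = weights.shape[-1] - 1
    grad_z = torch.zeros(H, W, c)
    for ring in range(R + 1):
        field = weights[..., ring].unsqueeze(-1) * grad_u          # w_r(t) * dL/du_t for this ring
        sat = F.pad(field.cumsum(0).cumsum(1), (0, 0, 1, 0, 1, 0))
        radius = ring if ring < R else max(H, W)                   # last entry merges all distal groups
        for x in range(H):
            for y in range(W):
                cur = window_sum(sat, x, y, radius)
                prev = window_sum(sat, x, y, ring - 1) if ring > 0 else 0.0
                grad_z[x, y] += cur - prev
    return grad_z
```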
Softmax attention | Ripple-softmax† | Linearized attention | Ripple (naïve) | Ripple (DP)
$\mathcal{O}(N^2 d)$ | $\mathcal{O}(N^2 d)$ | $\mathcal{O}(N d^2)$ | $\mathcal{O}(N^2 d)$ | $\mathcal{O}(N R d)$
Appendix C Additional Implementation Details
We implement our model using PyTorch (Paszke et al., 2019) and PyTorch image models (timm) toolkit (Wightman, 2019). We also implement a CUDA kernel for the ripple attention mechanism.
C.1 Deterministic Adaptive Feature Maps for Linearized Attention
Background.
Generally, a random feature map is defined by a function $h: \mathbb{R}^d \to \mathbb{R}$, uni-variate functions $f_1, \ldots, f_l: \mathbb{R} \to \mathbb{R}$, as well as identically distributed random vectors $\mathbf{w}_1, \ldots, \mathbf{w}_m$ following some distribution $\mathcal{D}$ (Choromanski et al., 2020):
$$\phi(\mathbf{x}) = \frac{h(\mathbf{x})}{\sqrt{m}} \left( f_1(\mathbf{w}_1^\top \mathbf{x}), \ldots, f_1(\mathbf{w}_m^\top \mathbf{x}), \ldots, f_l(\mathbf{w}_1^\top \mathbf{x}), \ldots, f_l(\mathbf{w}_m^\top \mathbf{x}) \right),$$
yielding a map from $\mathbb{R}^d$ to $\mathbb{R}^{lm}$, where $m$ denotes the number of random samples. Then, by setting different configurations of $h$, the $f_i$'s and $\mathcal{D}$, we can construct various unbiased estimators for the quantity $\exp(\mathbf{q}^\top \mathbf{k})$, that is,
$$\mathbb{E}\big[\phi(\mathbf{q})^\top \phi(\mathbf{k})\big] = \exp(\mathbf{q}^\top \mathbf{k}).$$
For instance, we could let $h(\mathbf{x}) = \exp(\|\mathbf{x}\|^2 / 2)$ and $l = 2$, where $f_1 = \sin$ and $f_2 = \cos$ are trigonometric functions and $\mathcal{D} = \mathcal{N}(\mathbf{0}, \mathbf{I}_d)$ (Peng et al., 2021; Choromanski et al., 2020). Although unbiased, researchers note that the use of trigonometric functions does not ensure non-negative scores, which may lead to large estimation variance and unstable training (Choromanski et al., 2020). Alternatively, we could construct an estimator by setting $h(\mathbf{x}) = \exp(-\|\mathbf{x}\|^2 / 2)$ with $l = 1$ and $f_1 = \exp$, which is again unbiased but enjoys positiveness (FAVOR+, Choromanski et al., 2020).
Our proposed deterministic adaptive feature map.
Recently, researchers also proposed various heuristic designs of feature maps (Choromanski et al., 2020; Schlag et al., 2021; Kasai et al., 2021) that do not guarantee unbiasedness but might either exhibit lower variance, simplify computation or bring other useful benefits. Unfortunately, through extensive preliminary experiments we found most of these linearized attention variants (either random or deterministic) did not work well in the setting of vision transformers. We hypothesize there are two reasons for the performance drop: the first one is the usage of random samples, which suffers from the slow Monte Carlo convergence rate and instability during training; the second one is due to fixed weights, preventing the map from being adaptive and learning useful patterns. To this end, we propose the following deterministic feature map:
$$\phi(\mathbf{x}) = \mathrm{ReLU}\Big( \mathbf{W}_2 \big[ \sin(\mathbf{W}_1 \mathbf{x});\ \cos(\mathbf{W}_1 \mathbf{x}) \big] + \mathbf{b}_2 \Big) \qquad (13)$$
Intuitively, we still follow the trigonometric feature map, except that we set the projection matrix $\mathbf{W}_1$ to be initialized with independent standard Gaussian samples but learnable during training; the generated features are then passed through a fully connected layer (parameterized by $\mathbf{W}_2$ and $\mathbf{b}_2$) followed by a ReLU activation. The resulting map is deterministic and involves learnable parameters, which we found greatly improves performance.
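A minimal PyTorch sketch of this feature map is given below; the hidden width, the output dimension and the $1/\sqrt{d}$ scaling are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import math
import torch
import torch.nn as nn

class AdaptiveTrigFeatureMap(nn.Module):
    """Learnable trigonometric features followed by a fully connected layer and ReLU."""
    def __init__(self, d_model, n_features):
        super().__init__()
        # Initialized as independent standard Gaussian samples, then learned during training.
        self.proj = nn.Parameter(torch.randn(d_model, n_features))
        self.fc = nn.Linear(2 * n_features, n_features)

    def forward(self, x):
        h = x @ self.proj / math.sqrt(x.shape[-1])      # scaling by sqrt(d) is our stabilizing choice
        trig = torch.cat([torch.sin(h), torch.cos(h)], dim=-1)
        return torch.relu(self.fc(trig))                # non-negative features for the linearized kernel
```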
Comparison with other feature maps and ablation study.
We conduct a simple ablation study to demonstrate the effectiveness of our proposed feature map and report comparisons with other feature maps in Table 6 (for methods that adopt random features, we sample a set of random weights at every training step and use the same set of weights during evaluation; we also attempted various schedules for redrawing random weights during training, but did not observe any performance gain). In general, we find our feature map works well in practice and outperforms other feature maps that are either deterministic or random. For our ablation study, we consider two variants of our proposed approach: (1) a variant that recovers the original random trigonometric feature map, that is, recasting $\mathbf{W}_1$ as random samples and re-drawing them at every iteration; (2) a variant that removes the fully connected layer (characterized by parameters $\mathbf{W}_2$ and $\mathbf{b}_2$). From Table 6, we see a large performance drop if we use random weights, which indicates that random feature maps lead to more difficult training in vision transformers. In addition, the feed-forward layer gives a further performance boost due to the increased flexibility. Therefore, we adopt our proposed deterministic feature map throughout this work.
Feature map | Deterministic | Top-1 Acc. |
RFA (Peng et al., 2021) | ✗ | 67.10 |
Performer (Choromanski et al., 2020) | ✗ | 65.92 |
DPFP (Schlag et al., 2021) | ✓ | 63.95* |
T2R (Kasai et al., 2021) | ✓ | 70.02 |
Ours | ✓ | 70.67 |
Ours w/ randomly sampled | ✗ | 66.82 |
Ours w/o fully connected network | ✓ | 70.02† |
C.2 Explicitly Controlling the Maximum Rippling Distance
In §3.2 we define the threshold $\epsilon$ to control the termination of the rippling process. In practice we find it beneficial to introduce a hard constraint such that the model limits the maximum distance of rippling propagation to $\delta$ and then merges all the remaining groups. In this way, we not only further reduce the computational overhead, but also encourage the attention mechanism to allocate more weight to distal groups. This can be seen as a stronger version of the halting threshold $\epsilon$, and it is easier to tune due to its more intuitive effect on the rippling process. We find an intermediate value of $\delta$ gives a reasonable trade-off between local and long-term dependencies. Given $\delta$, our model is robust to the change of $\epsilon$; therefore, we set $\epsilon$ to 0.001 by default and mainly conduct ablation studies on $\delta$.
C.3 Parameterization of Spatial Weights
In terms of parameterizing the spatial weights, we allocate an embedding vector for every stick unit, so that they can adapt themselves to learn useful patterns from the data. To compute the spatial weights, we first linearly project each value vector and then take its dot product with each stick unit embedding to produce logits. (Since stick-breaking transformations ensure that the produced weights lie inside a simplex, the number of logits needed is one fewer than the number of spatial weights.) Every logit is then passed through a modified sigmoid function to yield the length of each stick unit. This modification, which is inspired by the default stick-breaking transform implementation in the PyTorch distributions package (Paszke et al., 2019), prevents the model from putting most of the mass on the first several sticks. We find this trick slightly improves performance. Consequently, the spatial weights are derived by applying the stick-breaking transformation to the stick lengths according to equation 5.
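The following PyTorch sketch illustrates this parameterization under assumed names; the shifted ("modified") sigmoid mirrors the offset used in PyTorch's StickBreakingTransform, but the exact projection and initialization below are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialWeights(nn.Module):
    """Sketch: per-stick embeddings produce logits from a projected value vector,
    and a stick-breaking transform turns them into weights on a simplex."""

    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        # One embedding per stick unit; num_groups - 1 logits give num_groups weights.
        self.stick_emb = nn.Parameter(torch.randn(num_groups - 1, dim) * dim ** -0.5)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (..., dim) value vectors; returns (..., num_groups) spatial weights.
        logits = self.proj(v) @ self.stick_emb.t()                  # (..., num_groups - 1)
        k = logits.shape[-1]
        offset = torch.arange(k, 0, -1, device=logits.device, dtype=logits.dtype)
        z = torch.sigmoid(logits - offset.log())                    # modified sigmoid
        z_cumprod = (1 - z).cumprod(dim=-1)
        # Stick-breaking: w_1 = z_1, w_i = z_i * prod_{j<i}(1 - z_j), w_K = prod(1 - z_j).
        return F.pad(z, (0, 1), value=1.0) * F.pad(z_cumprod, (1, 0), value=1.0)
```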
C.4 Architecture Details
For image classification, all model architectures follow the tiny variant of deit (Touvron et al., 2020), which consists of 12 transformer layers with the embedding dimension set to 192, except that we set the number of heads per attention block to 6 for all models. For object detection, our model is based on the architecture of smca with single-scale features (Gao et al., 2021), which facilitates comparisons and demonstrates the effectiveness of ripple attention more clearly. In particular, the number of transformer layers is 6 for both the encoder and decoder, with the number of attention heads and the embedding dimension set to 8 and 256, respectively; the backbone is a ResNet-50 (He et al., 2016) pre-trained on ImageNet1k, with its batch-norm layers kept fixed.
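For reference, the hyper-parameters above can be summarized by the following illustrative configuration; the keys are ours, not those of any released code.

```python
# Hypothetical configuration dicts mirroring the hyper-parameters listed above.
CLASSIFICATION_CONFIG = dict(
    depth=12,            # transformer layers (deit-tiny backbone)
    embed_dim=192,
    num_heads=6,         # heads per attention block (vs. 3 in standard deit-tiny)
)

DETECTION_CONFIG = dict(
    encoder_layers=6,
    decoder_layers=6,
    embed_dim=256,
    num_heads=8,
    backbone="resnet50",          # ImageNet1k pre-trained, batch-norm kept fixed
    single_scale_features=True,   # following smca
)
```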
C.5 Specifics for Applying Ripple Attention in Vision Transformers
Average pooling instead of using class tokens for classification.
Since ripple attention operates directly on the 2D layout of an image, it is hard to employ the widely used class token for classification tasks (Dosovitskiy et al., 2020; Touvron et al., 2020). Instead, we apply average pooling over all tokens to extract the feature vector that is fed into the classification head.
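A minimal sketch of this readout, with illustrative shapes and names:

```python
import torch
import torch.nn as nn

# Average over all patch tokens instead of reading out a [CLS] token.
batch, num_tokens, embed_dim, num_classes = 2, 196, 192, 1000
encoder_output = torch.randn(batch, num_tokens, embed_dim)   # no class token prepended
classification_head = nn.Linear(embed_dim, num_classes)

pooled = encoder_output.mean(dim=1)          # (batch, embed_dim)
logits = classification_head(pooled)         # (batch, num_classes)
```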
Multi-head ripple attention.
Similar to multi-head attention (Vaswani et al., 2017), which is used in most vision transformer architectures, we also adopt a multi-head variant of ripple attention, where different heads maintain different sets of spatial weights. Multi-head ripple attention allows different heads to focus on locality to various degrees, increasing the overall expressiveness.
On the number of ripple layers.
A straightforward implementation choice is to replace regular attention with ripple attention at all layers of the ViT. However, we find empirically that replacing only the first several transformer layers works equally well. Since the input tokens of transformers consist of local patches, promoting local correlations at lower layers and maintaining structural spatial context facilitates information aggregation; at higher layers, however, every token is already contextualized by global information, and adding the notion of locality there might mislead the modeling. Therefore, we propose a hybrid architecture, where the lower layers use ripple attention while the upper ones still adopt linearized attention. This choice is further supported by our ablation study in Appendix D.1, where our model achieves the best performance across various settings when only the first 9 transformer layers use ripple attention. We therefore use this configuration throughout our experiments unless otherwise stated.
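A sketch of the hybrid stack; the two block classes are placeholders standing in for the actual ripple and linearized attention blocks.

```python
import torch.nn as nn


class RippleBlock(nn.Identity):
    """Placeholder for a transformer block with ripple attention."""


class LinearAttentionBlock(nn.Identity):
    """Placeholder for a transformer block with linearized attention."""


def build_hybrid_blocks(depth: int = 12, num_ripple: int = 9) -> nn.ModuleList:
    # The first `num_ripple` layers promote locality with ripple attention;
    # the remaining upper layers keep plain linearized attention.
    return nn.ModuleList(
        [RippleBlock() if i < num_ripple else LinearAttentionBlock() for i in range(depth)]
    )
```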
C.6 Training Setup
In this section, we describe our full training setup for both image classification and object detection.
Training details for image classification
We follow the same procedure to train the models as in deit (Touvron et al., 2020), including the data augmentation, regularization and hyper-parameter settings, for a head-to-head comparison. We use the AdamW optimizer (Loshchilov & Hutter, 2019) to train our model on 8 NVIDIA V100 GPUs for 300 epochs on both the CIFAR-100 and ImageNet1k datasets. We adopt commonly used data augmentation methods, including random clipping, cropping, Rand-Augment (Cubuk et al., 2020) and random erasing (Zhong et al., 2020). However, we remove repeated augmentation (Hoffer et al., 2020), as we find it slows down convergence for both linearized attention and ripple attention, which was also observed in previous studies (Berman et al., 2019; Xiao et al., 2021). For regularization, we employ stochastic depth (Huang et al., 2016), Mixup (Zhang et al., 2017) and CutMix (Yun et al., 2019), all with the default settings of deit (Touvron et al., 2020). Training protocols specific to each dataset are listed below (a minimal optimizer and schedule sketch follows the list):
• For the ImageNet1k dataset, we set the batch size to 1024 and the learning rate to 0.001 with cosine learning rate decay (Loshchilov & Hutter, 2016). The image size is set to 224 × 224 with patch size 16 × 16 (the standard deit setting), resulting in 196 tokens.
• For the CIFAR-100 dataset, the batch size and the learning rate are set to 512 and 0.0005 respectively, with the same cosine learning rate decay. In terms of the image size, we use the original 32 × 32 scale, split into non-overlapping patches.
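A minimal sketch of the corresponding optimization setup (AdamW with cosine decay). The model stand-in, the weight decay value and the training-loop skeleton are assumptions; the learning rate shown follows the ImageNet1k bullet above.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(192, 100)          # stand-in for the vision transformer
epochs = 300

optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)   # lr 0.001; weight decay assumed
scheduler = CosineAnnealingLR(optimizer, T_max=epochs)              # cosine learning rate decay

for epoch in range(epochs):
    # ... one training epoch over the dataset ...
    scheduler.step()
```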
During evaluation, we report top-1 and top-5 accuracy on the evaluation set of both ImageNet1k and CIFAR-100 datasets.
Training details for object detection
We follow the same training protocol as smca (Gao et al., 2021). In particular, we initialize the transformer parameters with Xavier initialization (Glorot & Bengio, 2010) and use ImageNet1k pre-trained weights for the backbone. We adopt the AdamW optimizer (Loshchilov & Hutter, 2019), with weight decay and separate learning rates for the backbone and the transformer. We decrease the learning rate to 1/10 of its original value after 40 epochs for the 50-epoch schedule and after 80 epochs for the 108-epoch schedule. The dropout rate is set to 0.1. The data augmentation scheme and the loss objective are also the same as in smca (Gao et al., 2021). All detection models are trained on 8 NVIDIA V100 GPUs with a total batch size of 16.
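A sketch of the two-group optimizer and step decay described above; the concrete learning rates and weight decay below are placeholders (their values are not reproduced in this appendix), and the modules are stand-ins.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import MultiStepLR

backbone = torch.nn.Conv2d(3, 64, 7)       # stand-in for the ResNet-50 backbone
transformer = torch.nn.Linear(256, 256)    # stand-in for the detection transformer

optimizer = AdamW(
    [
        {"params": backbone.parameters(), "lr": 1e-5},      # placeholder backbone lr
        {"params": transformer.parameters(), "lr": 1e-4},   # placeholder transformer lr
    ],
    weight_decay=1e-4,                                       # placeholder weight decay
)
# Drop the learning rate to 1/10 after 40 epochs for the 50-epoch schedule.
scheduler = MultiStepLR(optimizer, milestones=[40], gamma=0.1)
```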
Appendix D Additional Experiment Results
D.1 On the Effect of Various Ripple Layers
As mentioned in §4.1, directly replacing all attention layers in deit-la with ripple attention could be sub-optimal. To validate this, we conduct an ablation study on the ImageNet1k dataset to investigate the effect of different numbers of ripple layers, where the first several layers use ripple attention while the upper ones still adopt the linearized attention mechanism. The results are shown in Table 7. We find that model performance consistently improves as the number of ripple layers increases, but drops slightly once the number of ripple layers exceeds a certain level (e.g., 9). This observation aligns with our intuition and suggests that hybrid attention layers achieve a good trade-off between locality promotion and global dependency modeling. Therefore, ripple uses 9 ripple layers by default throughout our experiments unless otherwise stated.
Table 7: Effect of the number of ripple layers on ImageNet1k.
# ripple layers | Speed | Top-1 Acc. | Top-5 Acc. |
0 | 2664 | 70.67 | 90.16 |
3 | 1355 | 71.63 | 90.42 |
6 | 916 | 72.41 | 90.32 |
9 | 792 | 73.02 | 91.56 |
12 | 563 | 72.69 | 91.30 |
D.2 On the Effect of Different Parameterization Schemes for Vicinal Groups
Ripple attention is a flexible framework in that it allows trading off running time against task accuracy. To explore this, we compare the full ripple attention against a rippling process where the ripples get exponentially thicker, so that the process reaches the image boundary in a logarithmic number of steps. Formally, recall that the vicinal groups with respect to a query token are defined so that a token belongs to the $k$-th group if and only if its spatial distance to the query equals $k$. In the setting of ripple-logarithmic, we generalize this notion by relaxing the equality condition: a token now belongs to the $k$-th group if its distance falls within an exponentially widening band, which allows the size of the vicinal groups to grow exponentially instead of staying constant. In this way, the total number of vicinal groups is only logarithmic in the maximum rippling distance. This model, which we refer to as ripple-logarithmic, therefore enjoys a time complexity that grows only logarithmically with the rippling distance and is more efficient than the full ripple attention.
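The relaxed group assignment can be sketched as follows; the exact bucketing boundaries are an assumption.

```python
import torch


def log_group_index(dist: torch.Tensor) -> torch.Tensor:
    """Bucket spatial distances into exponentially widening bands so that the
    number of groups grows logarithmically with the maximum distance.
    Convention assumed here: distance 0 -> group 0, distances in
    [2^{k-1}, 2^k - 1] -> group k."""
    dist = dist.long()
    group = torch.zeros_like(dist)
    nonzero = dist > 0
    group[nonzero] = dist[nonzero].float().log2().floor().long() + 1
    return group


# Example: distances 0..8 map to groups [0, 1, 2, 2, 3, 3, 3, 3, 4].
```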
To empirically evaluate the efficiency of this variant, we plot the empirical running statistics of ripple-logarithmic under different numbers of tokens. For completeness, we also include a variant whose maximum rippling distance scales linearly with the image height (or width), denoted by ripple-dense. As shown in Figure 4, ripple-logarithmic runs as fast as base ripple (whose maximum rippling distance is fixed) and becomes more efficient than the dense version as the number of tokens increases. On the other hand, all of these models consume the same amount of memory, as their space complexity does not depend on the rippling distance. In terms of task performance, as reported in Table 4, we see a clear performance drop with ripple-logarithmic, which may be because it processes visual tokens at a coarser granularity. This again illustrates the flexibility of our framework: one can trade task accuracy for efficiency and vice versa.
[Figure 4: empirical running time and memory consumption of ripple variants under different numbers of tokens.]
D.3 Performance Comparison under the Same Speed Constraint
We conduct an ablation experiment to compare the performance of ripple against deit-la under the same speed constraint. The hyper-parameter configuration remains the same as in the main experiments. As shown in Table 8, a 4-layer transformer with ripple outperforms a 12-layer model with deit-la by a clear margin while running at a similar speed. This indicates that ripple has a larger modeling capacity than conventional linearized attention, demonstrating the effectiveness of our approach.
Table 8: Comparison of ripple and deit-la under a similar speed constraint.
Models | Speed | # Params | Top-1 Acc. |
4-layer deit-la | 6953 | 1.88M | 63.35 |
deit-la | 2686 | 5.50M | 67.10 |
4-layer ripple | 2369 | 1.89M | 70.03 |
ripple | 893 | 5.52M | 74.11 |
Appendix E Setup for Empirical Running Time and Memory Consumption
For the simulation experiment conducted in §5.3, we use the same vision transformer architecture for all models, with the hyper-parameter settings specified in Appendix C.4, except that we set the embedding dimension to 96 and the batch size to 4; otherwise, most of the configurations tested here would be infeasible for deit and ripple (Naïve) to fit into the 32GB memory of a single NVIDIA V100 GPU.