Sparse transformer with local and seasonal adaptation for multivariate time series forecasting

Transformers have achieved remarkable performance in multivariate time series (MTS) forecasting due to their capability to capture long-term dependencies. However, the canonical attention mechanism has two key limitations: (1) its quadratic time complexity limits the sequence length, and (2) it generates future values from the entire historical sequence. To address these limitations, we propose a Dozer Attention mechanism consisting of three sparse components: (1) Local, in which each query exclusively attends to keys within a localized window of neighboring time steps; (2) Stride, which enables each query to attend to keys at predefined intervals; and (3) Vary, which allows queries to selectively attend to keys from a subset of the historical sequence. Notably, the size of this subset dynamically expands as the forecasting horizon extends. These three components are designed to capture essential attributes of MTS data, including locality, seasonality, and global temporal dependencies. Additionally, we present the Dozerformer framework, incorporating the Dozer Attention mechanism for the MTS forecasting task. We evaluated the proposed Dozerformer framework against recent state-of-the-art methods on nine benchmark datasets and confirmed its superior performance. The experimental results indicate that excluding a subset of historical time steps from the forecasting process does not compromise accuracy while significantly improving efficiency. Code is available at https://github.com/GRYGY1215/Dozerformer.


Introduction
Multivariate time series (MTS) forecasting endeavors to model the dynamic evolution of multiple variables over time from their historical records, thereby facilitating the accurate prediction of future values. This task holds significant importance in various applications Petropoulos et al. (2022). The advent of deep learning has significantly advanced MTS forecasting, and various methods have been proposed based on Recurrent Neural Networks (RNN) Lai et al. (2018b); Zhang et al. (2024b; 2021) and Convolutional Neural Networks (CNN) Shih et al. (2019); Wang et al. (2023). In recent years, the Transformer has demonstrated remarkable efficacy in MTS forecasting. This notable performance can be attributed to its inherent capacity to capture global temporal dependencies across the entire historical sequence. The Transformer was initially proposed for natural language processing (NLP) tasks Vaswani et al. (2017), with the primary objective of extracting information from words within sentences. It achieved great success in the NLP field and swiftly extended its impact to computer vision Dosovitskiy et al. (2021) and time series analysis Wen et al. (2023). The critical challenge in applying transformers to MTS forecasting is to modify the attention mechanism based on the characteristics of time series data. Informer Zhou et al. (2021) introduced a ProbSparse attention mechanism that restricts each key to interact only with a fixed number of the most significant queries. This model employs a generative-style decoder to produce predictions for multiple time steps in a single forward pass, aiming to capture temporal dependencies at the individual time step level. Autoformer Wu et al. (2021) proposed an Auto-Correlation mechanism to model temporal dependencies at the sub-series level. It also introduced a time series decomposition block that utilizes moving averages to separate seasonal and trend components from raw MTS data. FEDformer Zhou et al.
(2022) utilizes a Frequency Enhanced Block and Frequency Enhanced Attention to discern patterns in the frequency domain. It achieves linear computational complexity by selectively sampling a limited number of Fourier components for the attention mechanism's queries, keys, and values. Pyraformer Liu et al. (2022) introduces a pyramidal graph organized as a tree structure in which each parent node has several child nodes. It aims to capture data patterns at multiple scales by enabling each parent node to merge the sub-series from its child nodes. As a result, the scale of the graph increases exponentially with the number of child nodes. Crossformer Zhang & Yan (2023) introduced a Two-Stage Attention mechanism that applies full attention across both the time step and variable dimensions. It also proposed a three-level framework designed to capture hierarchical latent representations of MTS data by merging tokens produced at lower levels. Dlinear Zeng et al. (2023) challenges the effectiveness of the transformer by proposing a straightforward linear layer to map historical records directly to predictions, outperforming the aforementioned Transformer-based methods on benchmark datasets. MICN Wang et al. (2023) replaces self-attention with multi-scale convolution to capture both local features and global correlations. Nevertheless, these methods generate predictions for all future time steps from the entire historical sequence. Thus, they ignore that the look-back window of historical time steps is critical to generating accurate predictions. For example, predicting the value at horizon 1 from the entire historical sequence is suboptimal and inefficient.

Sparse Self Attentions
Transformers have achieved significant success in various domains, including NLP Vaswani et al. (2017), computer vision Dosovitskiy et al. (2021), and time series analysis Wen et al. (2023). However, their quadratic computational complexity limits the input sequence length. Recent studies have tackled this issue by modifying the full attention mechanism. Longformer Beltagy et al. (2020) introduces a sparse attention mechanism in which each query is restricted to attending only to keys within a defined window or dilated window, except for global tokens, which interact with the entire sequence. Similarly, BigBird Zaheer et al. (2020) proposes a sparse attention mechanism consisting of Random, Local, and Global components. The Random component limits each query to attend to a fixed number of randomly selected keys. The Local component allows each query to attend to keys of nearby neighbors. The Global component selects a fixed number of input tokens to participate in the query-key production process for the entire sequence. In contrast to NLP, where the input consists of word sequences, and computer vision Khan et al. (2022), where image patches are used, time series tasks involve historical records at multiple time steps.
To effectively capture the seasonality of time series data, a sufficient historical record length is crucial. For instance, capturing weekly seasonality in MTS data sampled every 10 minutes necessitates approximately 6 × 24 × 7 time steps. Consequently, applying the Transformer architecture to time series data is impeded by its quadratic computational complexity.
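The arithmetic above can be made concrete with a tiny helper. This is an illustrative function of our own (`steps_per_period` is not from the paper's code): it converts a seasonal period and a sampling interval into the number of time steps the look-back window must cover.

```python
# Required look-back length to cover one full season, given the sampling rate.
# Example: weekly seasonality at a 10-minute sampling interval needs
# 6 samples/hour * 24 hours * 7 days = 1008 time steps.
def steps_per_period(period_hours: float, sample_minutes: float) -> int:
    return int(period_hours * 60 / sample_minutes)

weekly = steps_per_period(24 * 7, 10)  # 6 x 24 x 7 = 1008
```

With full attention, covering this window costs on the order of 1008² query-key products per layer, which motivates the sparse components introduced below.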

Method
This section presents the Dozerformer method, incorporating the Dozer attention mechanism. We first introduce the Dozerformer framework, comprising a transformer encoder-decoder pair and a linear layer designed to forecast the seasonal and trend components of MTS data. Subsequently, we provide a comprehensive description of the proposed Dozer attention mechanism, focusing on eliminating query-key pairs between input and output time steps that do not contribute to accuracy.

Framework
The MTS forecasting task aims to infer the values of D variables for O future time steps, X pred ∈ R O×D, from their historical records X ∈ R I×D. Figure 2 illustrates the overall Dozerformer framework. To predict the trend component, we employ a linear layer for the direct inference of trend forecasts from historical trend components. Subsequently, the forecasts for the seasonal and trend components are aggregated by summation to obtain the final predictions, denoted as X pred ∈ R O×D. A detailed step-by-step computation procedure is introduced in Supplementary Section 2.
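The two-path design above can be sketched at the shape level. This is a minimal sketch under stated assumptions, not the paper's implementation: `seasonal_model` and `W_trend` are stand-ins we made up for the transformer seasonal model and the learned trend linear layer.

```python
import numpy as np

# Shape-level sketch of the Dozerformer forecast pipeline.
# Assumed placeholders: `seasonal_model` (the transformer path) and
# `W_trend` (the trend linear layer) are zero stand-ins, not learned weights.
I, O, D = 96, 24, 3
x_seasonal = np.random.randn(I, D)   # seasonal component of the input
x_trend = np.random.randn(I, D)      # trend component of the input

W_trend = np.zeros((O, I))                     # stand-in linear layer (O x I)
seasonal_model = lambda xs: np.zeros((O, D))   # stand-in transformer output

pred_trend = W_trend @ x_trend        # (O, D): direct linear inference of trend
pred_seasonal = seasonal_model(x_seasonal)     # (O, D)
x_pred = pred_seasonal + pred_trend   # final prediction: elementwise sum
```

The key point is the aggregation: each path produces an (O, D) forecast, and the two are summed to yield the final prediction.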

Dozer Attention
The canonical scaled dot-product attention is as follows:

Attention(Q, K, V) = softmax(QK^T / √d_k)V, (1)

where Q, K, and V represent the queries, keys, and values obtained through embedding from the input sequence of the d-th series, denoted as X d enc ∈ R c×Nenc×p. It is noteworthy that we flatten the feature map and patch size dimensions, yielding X d enc ∈ R Nenc×(c×p), representing the latent representations of patches with a size of p. The scaling factor d_k denotes the dimension of the query and key vectors. The canonical transformer exhibits two key limitations: first, its encoder's self-attention is afflicted by quadratic computational time complexity, thereby restricting the length of N enc; second, its decoder's cross-attention employs the entire sequence to forecast all future time steps, entailing unnecessary computation for ineffective historical time steps. To overcome these issues, we propose the Dozer Attention mechanism, comprising three components: Local, Stride, and Vary.
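Equation 1 can be written out in a few lines. The paper's implementation is in PyTorch; the following is a plain NumPy sketch of scaled dot-product attention with an optional binary mask, which is the hook the sparse components below plug into.

```python
import numpy as np

# NumPy sketch of canonical scaled dot-product attention (Equation 1).
# `mask` is a 0/1 matrix: a 0 entry blocks that query-key pair, which is
# exactly how the sparse Dozer components can be applied.
def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask.astype(bool), scores, -1e9)
    return softmax(scores) @ V

N, d = 8, 16
Q = K = V = np.random.randn(N, d)
out = attention(Q, K, V)   # (N, d)
```

An all-ones mask reproduces full attention; the Local, Stride, and Vary masks of the next sections zero out the gray squares of Figure 3.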

In Figure 3, we illustrate the sparse attention matrices of the Local, Stride, and Vary components employed in the proposed Dozer attention mechanism. Squares shaded in gray signify zeros, indicating areas where the computation of the query-key product is redundant and thus omitted. These areas represent saved computation compared to full attention. The pink squares correspond to the Local component, signifying that each query exclusively computes the product with keys that fall within a specified window. Blue squares denote the Stride component, where each query selectively attends to keys positioned at fixed intervals. The green squares represent the Vary component, where queries dynamically adapt their attention to historical sequences of varying lengths based on the forecasting horizon. As a result, the Dozer attention mechanism exclusively computes the query-key pairs indicated by colored (pink, blue, and green) squares while eliminating the computation of the gray query-key pairs.

Local
The MTS data comprises variables that change continuously and exhibit locality, with nearby values in the time dimension displaying higher correlations than those at greater temporal distances. This characteristic is evident in Figure 1(a), whose heatmaps illustrate the strong locality within MTS data. Building upon this observation, we introduce the Local component, a key feature of our approach. In the Local component, each query selectively attends to keys that reside within a specified time window along the temporal dimension. Equation 2 defines the Local component as follows:

A^self_{i,j} = q_i k_j^T if |i − j| ≤ ⌊w/2⌋, and 0 otherwise;
A^cross_{i,j} = q_i k_j^T if t − ⌊w/2⌋ ≤ j ≤ t, and 0 otherwise, (2)

where A represents the attention matrix that records the products between queries and keys. The superscripts self and cross denote self-attention and cross-attention, respectively. The subscripts i and j represent the time steps of the query vector and key vector along the temporal dimension, respectively. t specifies the time step at which forecasting is conducted, and w represents the window size of the Local component. In the case of self-attention, the Local component enables each query to selectively attend to a specific set of keys that fall within a defined time window. Specifically, each query in self-attention attends to 2 × ⌊w/2⌋ + 1 keys, with ⌊w/2⌋ keys representing the neighboring time steps on each side of the query. For cross-attention, the concept of locality differs since future time steps cannot be utilized during forecasting. Instead, locality is derived from the last ⌊w/2⌋ + 1 time steps, situated at the end of the sequence. These steps encompass the most recent time points up to the moment of forecasting, denoted as time t.
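The Local rules above translate directly into binary masks. This is our own sketch following the section's description (the function names and index conventions are ours, not the paper's): self-attention keeps a window of width w around each query, and cross-attention keeps the last ⌊w/2⌋ + 1 input steps.

```python
import numpy as np

# Binary masks for the Local component (Equation 2), as we read it.
def local_self_mask(n: int, w: int) -> np.ndarray:
    # query i attends keys j with |i - j| <= floor(w/2)
    i = np.arange(n)
    return (np.abs(i[:, None] - i[None, :]) <= w // 2).astype(int)

def local_cross_mask(n_q: int, n_k: int, w: int) -> np.ndarray:
    # every decoder query attends the last floor(w/2) + 1 input steps
    mask = np.zeros((n_q, n_k), dtype=int)
    mask[:, n_k - (w // 2 + 1):] = 1
    return mask

m = local_self_mask(6, w=3)
# interior queries attend 2*floor(w/2) + 1 = 3 keys; boundary rows attend fewer
```

Passing such a mask to a masked attention routine zeroes out exactly the gray squares of the Local panel in Figure 3.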

Stride
Seasonality is a key characteristic of time series data, and to capture this attribute effectively, we introduce the Stride component. In this component, each query selectively attends to keys positioned at fixed intervals in the temporal dimension. To illustrate this concept, consider a query at time step t, denoted as q_t, and assume that the time series data exhibits a seasonality pattern with a period of interval time steps. Equation 3 defines the Stride component as follows:

A^self_{i,j} = q_i k_j^T if |i − j| mod interval = 0, and 0 otherwise;
A^cross_{i,j} = q_i k_j^T if (t − j) mod interval = 0, and 0 otherwise. (3)

In self-attention, the Stride component initiates by computing the product between the query q_t and the key at the same time step, denoted as k_t. It then progressively expands its attention to keys positioned at a temporal distance of interval time steps from t, encompassing keys such as k_{t−interval}, k_{t+interval}, and so forth, until it spans the entire sequence range. For cross-attention, the Stride component identifies time steps within the input sequence X_I that are separated by multiples of interval time steps, aligning with the seasonality pattern. Hence, the Stride component consistently computes attention scores (query-key products) using these selected keys from the encoder's I input time steps, yielding a total of s = ⌊I/interval⌋ query-key pairs per query.
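As with the Local component, the Stride rules can be sketched as masks. This is an illustrative reading of Equation 3, not the paper's code; in particular, anchoring the cross-attention stride to the most recent input step is our implementation choice for aligning with the series' phase.

```python
import numpy as np

# Binary masks for the Stride component (Equation 3), as we read it.
def stride_self_mask(n: int, interval: int) -> np.ndarray:
    # query i attends keys whose distance from i is a multiple of `interval`
    i = np.arange(n)
    return (np.abs(i[:, None] - i[None, :]) % interval == 0).astype(int)

def stride_cross_mask(n_q: int, n_k: int, interval: int) -> np.ndarray:
    # every decoder query attends input steps spaced `interval` apart,
    # counted back from the most recent step (our phase-alignment choice)
    mask = np.zeros((n_q, n_k), dtype=int)
    mask[:, n_k - 1::-interval] = 1
    return mask
```

Each cross-attention query thus meets roughly ⌊I/interval⌋ keys, matching the count s stated above.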

Vary
In MTS forecasting scenarios employing the canonical attention mechanism, predictions for all future time steps are computed through a weighted sum over the entire historical sequence. Regardless of the forecasting horizon, this canonical cross-attention mechanism computes all possible query-key pairs. However, it is important to recognize that harnessing information from the entire historical sequence does not necessarily improve forecasting accuracy, especially for time series data characterized by evolving distributions, as exemplified by stock prices.
To tackle this challenge, we propose the Vary component, which dynamically expands the historical time steps considered as the forecasting horizon extends. We define the initial length of the historical sequence utilized as v. When forecasting a single time step into the future (horizon 1), the Vary component exclusively utilizes v time steps from the sequence. As the forecasting horizon increases by 1, the history length used also increments by 1, starting from the vary window v until it reaches the maximum input length. Equation 4 defines the Vary component as follows:

A^cross_{i,j} = q_i k_j^T if I − (v + i − 1) < j ≤ I, and 0 otherwise, (4)

where i ∈ {1, ..., O} indexes the forecasting horizon.

Training details. We implemented the proposed Dozerformer in PyTorch and trained it with the Adam optimizer (β1 = 0.9 and β2 = 0.99). The initial learning rate is selected from {5e−5, 1e−4, 5e−4, 1e−3} via grid search and updated using the cosine annealing scheme. The number of transformer encoder layers is set to 2 and the number of decoder layers to 1. We select the look-back window size of utilized historical time steps from {96, 192, 336, 720} via grid search, except for the ILI dataset, for which it is set to 120. The patch size is selected from {24, 48, 96} via grid search. The main results, summarized in Table 1, are average values derived from six repetitions using seeds 1, 2022, 2023, 2024, 2025, and 2026. The ablation study and parameter sensitivity results are obtained with random seed 1.
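The Vary rule above can also be sketched as a cross-attention mask. This is our reading of the description (the function name and 1-indexed horizon convention are ours): the query for horizon step o attends the last min(v + o − 1, I) historical steps.

```python
import numpy as np

# Binary cross-attention mask for the Vary component, under our reading:
# horizon step o (1-indexed) attends the last min(v + o - 1, n_k) input steps.
def vary_cross_mask(n_q: int, n_k: int, v: int) -> np.ndarray:
    mask = np.zeros((n_q, n_k), dtype=int)
    for o in range(1, n_q + 1):
        span = min(v + o - 1, n_k)    # history grows with the horizon, capped at n_k
        mask[o - 1, n_k - span:] = 1
    return mask

m = vary_cross_mask(4, 10, v=1)   # with v = 1, row o attends the o most recent steps
```

With v = 1 (the setting used throughout the experiments), the mask is a small triangle anchored at the end of the input, which is what keeps the Vary component's cost quadratic in O rather than in I.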

Main Results
Table 1 presents a quantitative comparison between the proposed Dozerformer and baseline methods across nine benchmarks using MSE and MAE, where bolded and underlined entries mark the best and second-best results per case (dataset, horizon, and metric), respectively. Dozerformer achieved superior performance over state-of-the-art baseline methods, performing best in 48 cases and second-best in 22. Compared to state-of-the-art baselines, Dozerformer achieved reductions in MSE of 0.4% relative to PatchTST Nie et al. (2023), 8.3% relative to Dlinear Zeng et al. (2023), 20.6% relative to MICN Wang et al. (2023), and a substantial 40.6% relative to Crossformer Zhang & Yan (2023). While PatchTST came closest to our method in performance, it utilizes full attention, making it less efficient in computational complexity and memory usage. We conclude that Dozerformer outperforms recent state-of-the-art baseline methods in accuracy.

Computational Efficiency
We analyze the computational complexity of transformer-based methods and present it in Table 2. For self-attention, the proposed Dozer self-attention achieves linear computational complexity w.r.t. I with a coefficient (w + s)/p, where w and s denote the numbers of keys each query attends to via the Local and Stride components, which are small (e.g., w ∈ {1, 3} and s ∈ {2, 3}), and p corresponds to the size of the time series patches, which is set to a larger value (e.g., p ∈ {24, 48, 96}).
We conclude that the coefficient (w + s)/p is consistently smaller than 1 under the experimental conditions, highlighting the superior computational efficiency of Dozer self-attention.
To analyze the computational complexity of cross-attention, it is essential to consider the specific design of each method's decoder and analyze them individually. The Transformer computes products between all L + O queries and I keys, with its complexity influenced by the inputs of both the encoder and decoder. The Local and Stride components of Dozer cross-attention address (L + O)/p queries, computing the product of each query with w keys within the local window and s keys positioned at fixed intervals from the query. Consequently, their computational complexity is linear with respect to L + O, characterized by the coefficient (w + s)/p. The Vary component's complexity exhibits a quadratic relationship with respect to O, with a coefficient of 1/(2p^2). This quadratic complexity is notably more efficient than linear complexity when O < 2p^2 (e.g., 1152 when p = 24). Additionally, the Vary component maintains linear complexity with respect to O when v is greater than 1, although its coefficient (v − 1)/p should be significantly less than 1 for practical efficiency. Throughout all experiments, we consistently set v to 1, rendering (v − 1)O/p negligible.
The memory complexity of the Dozer Attention is O((w + s)I/p) for self-attention and O((w + s)(L + O)/p + (O/p)^2/2 + (v − 1)O/p) for cross-attention. Note that the analysis of Dozer Attention's computational complexity applies equally to its memory complexity. In conclusion, the Local and Stride components excel in efficiency by restricting each query to a limited number of keys. The Vary component proves more efficient than linear complexity in practical scenarios, with its cost influenced only by the forecasting horizon.
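A back-of-the-envelope count makes the savings concrete. This is an illustrative calculation of our own (function names are ours), comparing full self-attention over I raw steps with the (w + s)/p scaling described above.

```python
# Illustrative count of query-key products per attention layer.
def full_pairs(I: int) -> int:
    # canonical self-attention over I raw time steps
    return I * I

def dozer_self_pairs(I: int, w: int, s: int, p: int) -> int:
    n = -(-I // p)          # ceil(I/p) patch tokens after patching
    return n * (w + s)      # each query meets w local + s stride keys

saving = dozer_self_pairs(720, w=3, s=3, p=24) / full_pairs(720)
# with I = 720, w = s = 3, p = 24: 30 tokens * 6 keys = 180 products,
# versus 518,400 for full attention over raw steps
```

Under these settings the sparse layer computes well under 0.1% of the products of full raw-step attention, consistent with the coefficients in Table 2.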
We selected ETTh1 as an illustrative example and evaluated Params (total number of learnable parameters), FLOPs (floating-point operations), and Memory (maximum GPU memory consumption). The results for a historical length of 720 and a forecasting horizon of 96 are reported in Table 3.

To investigate the effect of the local size w, stride size s, and vary size v on MTS forecasting accuracy, we conduct experiments and present the results in Figure 4. These hyperparameters influence the sparsity of the attention matrix, and consequently the efficiency of the Dozerformer in computation and memory usage. A smaller local size w, a smaller vary starting size v, and a larger stride size s result in a sparser attention matrix and thus greater efficiency. Notably, increasing the number of historical time steps utilized in the attention mechanism does not necessarily enhance accuracy. The optimal local size varies with the dataset's characteristics, but we observe higher MSE values as the local size increases and more time steps are utilized. The optimal stride size is 9 for the ETTh1 and ETTm1 datasets and 1 for the Weather dataset. For the vary size, a small value of 1 yields good accuracy and efficiency. In summary, the selection of these hyperparameters varies across datasets; however, a sparse attention matrix generally performs better than a denser one.

Effect of look-back window size
The size of the look-back window, denoted as I, primarily impacts the Stride component of the Dozerformer, as it determines the model's receptive field. Figure 5 illustrates the effect of look-back window size on seven methods at horizons 96 and 720. Remarkably, the Dozerformer consistently outperforms the other methods across sequence lengths on the ETTh1, ETTm1, and Weather datasets. The Exchange-Rate dataset is an exception; due to its lack of seasonality, all methods experienced a decline in accuracy with longer input sequences. Notably, an increase in input sequence length does not necessarily improve the accuracy of the Dozerformer. In contrast, the performance of the remaining baseline methods tends to deteriorate as the input sequence length increases. We attribute this trend to longer input sequences incorporating more historical time steps that lack correlation with the forecasting target, thereby degrading the performance of these methods.

A.2 Detailed Computation of the Dozerformer
Given MTS input X ∈ R I×D, the first step is to decompose it into seasonal and trend components Wu et al. (2021); Zhou et al. (2022); Zeng et al. (2023) as follows:

X_t = AvgPool(Padding(X)),  X_s = X − X_t, (5)

where X_s ∈ R I×D and X_t ∈ R I×D are the seasonal and trend-cyclical components, respectively.
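The decomposition in Equation 5 can be sketched directly. This is a minimal NumPy version under stated assumptions: edge padding stands in for the Padding operation so the trend has the same length as the input, and the kernel size is our choice for illustration.

```python
import numpy as np

# Moving-average decomposition (Equation 5, Autoformer-style sketch).
# Edge padding keeps the trend the same length I as the input; the
# kernel size of 25 is an illustrative choice, not the paper's setting.
def decompose(x: np.ndarray, kernel: int = 25):
    pad_l, pad_r = kernel // 2, kernel - 1 - kernel // 2
    xp = np.pad(x, ((pad_l, pad_r), (0, 0)), mode="edge")
    trend = np.stack([xp[i:i + kernel].mean(axis=0) for i in range(x.shape[0])])
    seasonal = x - trend
    return seasonal, trend

x = np.random.randn(96, 3)
s, t = decompose(x)
# by construction, seasonal + trend reconstructs the input exactly
```

Because the seasonal part is defined as the residual, the two components always sum back to X, which is what makes the additive recombination in Equation 11 valid.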
A simple linear layer is utilized to model the trend component and generate trend predictions:

X pred t = Linear(X_t), (6)

where X pred t ∈ R O×D is the prediction for the trend component. The Linear operation directly generates O future trend values by projecting from the I historical trend values of each variable in the MTS data.
The transformer encoder-decoder pair utilizing the Dozer attention is employed as the seasonal model. The DI embedding Zhang et al. (2024a) is utilized to embed the raw MTS into feature maps and partition them into patches:

X emb = Conv(X_s),  X pat = Patch([X emb, X_0]), (7)

where X_s ∈ R 1×I×D is the seasonal component of the input and X emb ∈ R c×I×D represents the c feature maps embedded by a convolutional layer with a kernel size of 3 × 1. The Patch procedure divides the time series into N_I = ⌈I/p⌉ non-overlapping patches of size p, yielding X pat ∈ R c×N_I×p×D. X_0 is zero-padding applied when the input sequence length I is not divisible by the patch size p. The Dozerformer adheres to a channel-independent design, where each variable is individually input into the transformer. Consequently, we combine the feature map dimension and the patch size dimension to form the transformer's input X d pat ∈ R N_I×(p×c). The transformer encoder and decoder employ the Dozer attention:

Attention(Q, K̃, Ṽ) = softmax(QK̃^T / √d_k)Ṽ, (8)

where K̃ is the subset of keys selected according to the characteristics of the datasets. Note that X d pat is one variable of the MTS data, following the channel-independent design of PatchTST Nie et al. (2023). We follow the canonical multi-head attention, utilizing the attention mechanism presented in Equation 8:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h)W^O,  head_i = Attention(QW_i^Q, KW_i^K, VW_i^V). (9)

Please note that the multi-head attention, as presented in Equations 8 and 9, differs in cross attention, as its Query is derived from the decoder's input. Here, H d ∈ R N_O×(p×c) represents the transformer output, where N_O = ⌈O/p⌉ denotes the number of patches of the decoder. Subsequently, the Dozerformer separates the feature dimension and patch size dimension, resulting in H d ∈ R c×N_O×p. The output for all variables is obtained by concatenating the outputs of individual variables, denoted as H ∈ R c×N_O×p×D.
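The patch-and-flatten step described above is mostly reshaping, which a shape-level sketch makes explicit. This is our illustrative version (the `patch` helper is ours): the tail is zero-padded to a multiple of p, the sequence is split into N_I patches, and the (p, c) block of each patch is flattened into one channel-independent token.

```python
import numpy as np

# Shape-level sketch of the patching step for one variable.
# x_emb: (c, I) feature maps for a single series; output: (N_I, p*c) tokens.
def patch(x_emb: np.ndarray, p: int) -> np.ndarray:
    c, I = x_emb.shape
    n = -(-I // p)                                        # N_I = ceil(I/p)
    padded = np.pad(x_emb, ((0, 0), (0, n * p - I)))      # zero-pad the tail (X_0)
    patches = padded.reshape(c, n, p)                     # (c, N_I, p)
    return patches.transpose(1, 2, 0).reshape(n, p * c)   # flatten (p, c) per patch

tokens = patch(np.random.randn(8, 100), p=24)  # N_I = ceil(100/24) = 5 tokens
```

Each of the N_I tokens carries the full (p × c) latent content of one patch, matching the X d pat ∈ R N_I×(p×c) input described above.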
Then, a Conv layer with a kernel size of 1 × 1 is utilized to reduce the number of feature maps from c to 1 as follows:
X pred s = Conv(H), (10)

where X pred s ∈ R O×D is the prediction for the seasonal components. Lastly, the seasonal and trend components are summed element-wise to generate the final predictions:
X pred = X pred s + X pred t, (11)

where X pred ∈ R O×D is the prediction for the future O time steps of the D variables in the MTS data.

B.1 The quantitative results in MASE
Table 7 illustrates the quantitative comparison results between the proposed Dozerformer and baseline methods using the Mean Absolute Scaled Error (MASE) metric.

Figure 1 :
Figure 1: (a) The heatmap illustrates the correlation among 168 time steps for the ETTh1, ETTm1, Weather, and Exchange-Rate datasets. (b) Full Attention: generates predictions using the entire historical sequence. Local Component: utilizes time steps within a specified window. Stride Component: utilizes time steps at fixed intervals from the target. Vary Component: adaptively uses historical time steps as the forecasting horizon increases.
Figure 1(b) shows the full attention mechanism: it generates predictions for all O future time steps from the entire historical sequence of length I, ignoring characteristics of MTS data such as locality and seasonality. Figure 1(a) shows the heatmap of correlations among 168 time steps of four datasets, which indicates clear locality and seasonality. An attention mechanism should utilize only those time steps that have a high correlation with the prediction target.
Figure 2 illustrates the overall framework of Dozerformer. Initially, it decomposes the MTS data into seasonal and trend components, following previous methods Wu et al. (2021); Zhou et al. (2022); Zeng et al. (2023); Wang et al. (2023). Subsequently, the transformer and linear models generate forecasts for the seasonal component X s ∈ R I×D and the trend component X t ∈ R I×D, respectively.

Figure 2 :
Figure 2: The architecture of our proposed Dozerformer framework. The dimension invariant (DI) embedding Zhang et al. (2024a) transforms raw single-channel MTS data into multi-channel feature maps while preserving both the time step and variable dimensions. It further partitions the MTS data into multiple patches along the time step dimension, resulting in patched MTS embeddings denoted as X enc ∈ R c×Nenc×p×D. Here, c represents the number of feature maps, N enc = ⌈I/p⌉ indicates the number of patches for the encoder, and p represents the patch size. Similarly, the decoder's input, denoted as X dec ∈ R c×Ndec×p×D (not shown in Figure 2), is derived from L historical time steps and O zero paddings. In this context, N dec = ⌈(L + O)/p⌉ indicates the number of patches for the decoder.

Figure 3 :
Figure 3: Illustration of the Dozer attention mechanism. Upper: the self-attention consists of Local and Stride components. Lower: the cross-attention consists of Local, Stride, and Vary components.

Figure 6 :
Figure 6: Visualizations of forecasting results on ETTh1 dataset at horizon 96.

Figure 7 :
Figure 7: Visualizations of forecasting results on ETTh1 dataset at horizon 720.
To address those issues, we propose Dozerformer, a novel framework that incorporates an innovative attention mechanism comprising three key components: Local, Stride, and Vary. Figure 1(b) illustrates the concept of the Local, Stride, and Vary components. They leverage historical time steps based on the distinctive characteristics of MTS data. The Local component exclusively considers time steps within a window determined by the target time step. The Stride component selectively employs time steps that are at fixed intervals from the target time step. Lastly, the Vary component dynamically adjusts its use of historical time steps according to the forecasting horizon, employing a shorter historical sequence for shorter horizons and a longer one for longer horizons. The contributions of our work are summarized as follows:

To address this challenge, various methods have been proposed. Informer Zhou et al. (2021) introduces a ProbSparse attention mechanism, allowing each key to attend to a limited number of queries. This modification achieves O(I log I) complexity in both computation and memory usage. Autoformer Wu et al. (2021) proposes an Auto-Correlation mechanism designed to model period-based temporal dependencies. It achieves O(I log I) complexity by selecting only the top log I query-key pairs. FEDformer Zhou et al. (2022) introduces frequency-enhanced blocks, including Fourier and Wavelet components, to transform queries, keys, and values into the frequency domain. The attention mechanism computes a fixed number of randomly selected frequency components from queries, keys, and values, resulting in linear complexity. Both Crossformer Zhang & Yan (2023) and PatchTST Nie et al. (2023) propose patching mechanisms that partition time series data into patches spanning multiple time steps, effectively reducing the total length of the historical sequence.
The Informer selects log(L + O) queries and I/2 keys (time steps near t), resulting in O(log(L + O) · I/2). Autoformer and FEDformer pad keys with zeros when I is smaller than L + O and select only the last L + O keys when I is greater than L + O. As a result, their cross-attention complexity is linear w.r.t. L + O. The Crossformer decoder takes only O zero paddings as input; consequently, its cross-attention attends to O/p queries and I/p keys. It is worth noting that Crossformer also applies full attention across the variable dimension, so its complexity is additionally influenced by the variable count D. PatchTST solely employs the transformer encoder, making cross-attention irrelevant in this context.

Table 2 :
Computational complexity of self-attention and cross-attention. The encoder's input historical sequence length is denoted as I, and the decoder's input historical sequence length and forecasting horizon are denoted as L and O, respectively. D indicates the variable dimension of the MTS data, and p is the patch size.
Table 3 presents the quantitative results of this efficiency comparison. Dlinear Zeng et al. (2023), characterized by a straightforward architecture employing a linear layer to directly generate forecasts from historical records, achieved the best efficiency among the methods considered. Notably, the proposed Dozerformer exhibited superior efficiency compared to the other baseline methods based on CNNs or transformers. It is essential to highlight that PatchTST Nie et al. (2023) is the only method that did not significantly lag behind Dozerformer in accuracy. However, its parameter count is six times greater than that of Dozerformer, and its FLOPs are 17 times larger.

Table 3 :
Results on the ETTh1 dataset for an input length of 720 and a forecasting horizon of 96. To investigate the impact of the Local, Stride, and Vary components individually, we conducted experiments using only one of these components in the Dozer Attention. The results are summarized in Table 4; the mean values across four horizons (96, 192, 336, and 720) for seed 1 are presented. The hyperparameters of each component follow the settings of the main results. Overall, each component working alone performed slightly worse than the full Dozer Attention, showing the effectiveness of each component in capturing the essential attributes of MTS data. To demonstrate the effectiveness of the proposed Dozer attention mechanism, we conducted a comparative analysis by replacing it with other attention mechanisms in the Dozerformer framework. Specifically, we compared it with canonical full attention Vaswani et al. (2017), Auto-Correlation Wu et al. (2021), Frequency Enhanced Auto-Correlation Zhou et al. (2022), and ProbSparse attention Zhou et al. (2021). The results of this comparison are presented in Table 5. The Dozer Attention outperformed the other attention mechanisms, securing the top position in 22 of 32 best cases, with canonical full attention performing second-best.

Table 4 :
Ablation study of three key components of Dozer attention: Local, Stride, and Vary.