
STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model

Lincan Li, Hanchen Wang, Wenjie Zhang
University of New South Wales, Sydney, Australia
Corresponding author.
lincan.li@unsw.edu.au, hanchen.wang@unsw.edu.au, wenjie.zhang@unsw.edu.au
Abstract

Spatial-Temporal Graph (STG) data is dynamic, heterogeneous, and non-stationary, making spatial-temporal graph learning a continuing challenge. In the past few years, various GNN-based methods have been proposed that focus solely on modeling the relationships among individual nodes of the STG network, ignoring the significance of modeling the intrinsic features of the STG system as it evolves over time. In contrast, modern Selective State Space Models (SSSMs) offer a new approach that treats the STG network as a system and meticulously explores the STG system's dynamic state evolution across the temporal dimension. In this work, we introduce Spatial-Temporal Graph Mamba (STG-Mamba) as the first exploration of leveraging powerful selective state space models for STG learning: it treats the STG network as a system and employs the Spatial-Temporal Selective State Space Module (ST-S3M) to precisely focus on selected STG latent features. Furthermore, to strengthen GNNs' ability to model STG data under the selective state space setting, we propose Kalman Filtering Graph Neural Networks (KFGN) to dynamically integrate and upgrade STG embeddings from different temporal granularities through a learnable, Kalman Filtering-based statistical approach. Extensive empirical studies on three benchmark STG forecasting datasets demonstrate the performance superiority and computational efficiency of STG-Mamba. It not only surpasses existing state-of-the-art methods in STG forecasting performance, but also alleviates the computational bottleneck of large-scale graph networks by reducing FLOPs and test inference time. The implementation code is available at: https://github.com/LincanLi98/STG-Mamba.

1 Introduction

Spatial-temporal graph data is a kind of non-Euclidean data that widely exists in daily life, such as urban traffic networks, metro-system InFlow/OutFlow, social networks, regional energy load, and weather observations. Owing to the dynamic, heterogeneous, and non-stationary nature of STG data, accurate and efficient STG forecasting has long been a challenging task.

Lately, with the popularization of Mamba Gu and Dao (2023); Wang et al. (2024); Liu et al. (2024), modern Selective State Space Models (SSSMs) have aroused considerable interest among researchers in computer vision and natural language processing. The SSSM is a variant of the State Space Model, which originated in the Control Science and Engineering fields Lee et al. (1994); Friedland (2012). State Space Models provide a principled framework for describing a physical system's dynamic state evolution through sets of input, output, and state variables related by first-order differential or difference equations. The approach allows for a compact way to model and analyze systems with multiple inputs and outputs (MIMO) Aoki (2013).

STG learning can be treated as a complex process of understanding and forecasting the evolution of STG networks over time, which closely resembles the state space transition process, where each state encapsulates the current configuration of a system and transitions represent changes over time. Deep learning-based SSSMs bring new horizons to STG learning tasks. However, great challenges arise in accurately and effectively adopting SSSM architectures for STG modeling.

Motivated by the excellent long-term contextual modeling ability and low computational overhead of SSSMs, we propose Spatial-Temporal Graph Mamba (STG-Mamba) as the first deep learning-based SSSM for effective data-centric STG learning. The main contributions of this work are:

  • We make the first attempt at adapting SSSMs to STG learning tasks. A simple yet elegant way is employed to extend SSSMs to handle STG data. Specifically, we formulate the framework in the stacked residual encoder fashion, which takes N stacked Graph Selective State Space Blocks (GS3B) as the basic module, with the proposed Kalman Filtering Graph Neural Network (KFGN), the Spatial-Temporal Selective State Space Module (ST-S3M), and a simultaneous STG Feed-Forward connection to serialize and coordinate the internal modules.

  • A novel Spatial-Temporal Selective State Space Module (ST-S3M) is proposed, pioneering the integration of STG networks with selective state space models, which performs input-dependent adaptive spatial-temporal graph feature selection. The Graph Selective Scan algorithm within ST-S3M simultaneously receives graph information from KFGN through the Feed-Forward connection; this Feed-Forward graph information is then employed to assist a more effective update of the state transition matrix and the control matrix.

  • We introduce KFGN as a specialized adaptive spatiotemporal graph generation and upgrading method, which fits smoothly within the SSSM-based context. In KFGN, DynamicFilter-GNN serves as the module that generates input-specific dynamic graph structures. The KF-Upgrading mechanism models STG inputs from different temporal granularities as parallel streams, and the output embeddings are integrated and optimized through the statistical learning of Kalman Filtering.

  • Extensive evaluations are carried out on three open-source benchmark STG datasets. Results demonstrate that STG-Mamba not only exceeds other benchmark methods in STG prediction performance, but also achieves O(n) computational complexity, remarkably reducing the computational overhead compared with Transformer-based methods.

2 Preliminaries

Spatial-Temporal Graph System (STG system). For the first time, we define a network consisting of STG data as a Spatial-Temporal Graph System. In systems theory, a system is defined as the representation of a physical process, comprising state variables that describe the system's current condition, input variables influencing the system's state, and output variables reflecting the system's response. For the STG system, this framework is adapted to accommodate spatial dependencies and temporal evolution, structuring the system with nodes representing spatial entities, edges denoting spatial interactions, and state variables evolving over time to capture the dynamics of STG data.

State Space Models (SSMs). Deep learning-based SSMs are a recently introduced class of sequential models that are closely related to RNN architectures and classical state space models. They are characterized by a particular continuous system model that maps a multi-dimensional input sequence to a corresponding output sequence through an implicit latent state representation. SSMs are defined by four parameters $(\mathbb{A},\mathbb{B},\mathbb{C},\mathbb{D})$ that specify how the input (control signal) and the current state determine the next state and output. This framework allows for efficient sequential modeling by enabling both linear and non-linear computations. As a variant of SSMs, SSSMs emphasize how to build a selection mechanism upon SSMs, which highly resembles the core idea of the attention mechanism, making them a competitive counterpart of the Transformer architecture.
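The $(\mathbb{A},\mathbb{B},\mathbb{C},\mathbb{D})$ parameterization can be made concrete with a toy discrete-time recurrence. The matrices below are illustrative values only, not the learned parameters of any model:

```python
import numpy as np

def ssm_step(A, B, C, D, x, u):
    """One discrete state-space update: evolve the state, then read out."""
    x_next = A @ x + B @ u        # state transition, driven by the input via B
    y = C @ x_next + D @ u        # output mixes the new state and a direct feed-through D
    return x_next, y

# Toy 2-state, single-input, single-output system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))
outputs = []
for u_val in [1.0, 0.0, 0.0]:     # impulse input; the state then decays under A
    x, y = ssm_step(A, B, C, D, x, np.array([[u_val]]))
    outputs.append(float(y))
```

Running the impulse through this system yields a decaying response governed by the eigenvalues of $\mathbb{A}$, which is exactly the behavior the selection mechanism later modulates per input.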

STG Forecasting based on SSSMs. Employing SSSMs for spatial-temporal graph forecasting, the problem can be formulated as dynamically identifying and utilizing relevant portions of historical spatiotemporal data and graph structures to predict the future states of an STG system.

Given a spatiotemporal graph $\mathbb{G}^{ST}=(V^{ST},E^{ST},A^{ST})$ and historical spatial-temporal sequential data $X_{t-p+1:t}=\{X_{t-p+1},X_{t-p+2},\dots,X_{t}\}$, the objective is to utilize SSSMs to forecast the future STG system states $\hat{X}_{t+1:t+k}=\{\hat{X}_{t+1},\dots,\hat{X}_{t+k}\}$. This is achieved by learning a mapping function $F_{SSSM}(\cdot)$ that dynamically selects relevant state transitions and interactions for prediction:

$F_{SSSM}(X_{t-p+1:t};\mathbb{G}^{ST})=\hat{X}_{t+1:t+k}$ (1)

3 Methodology

In this section, we start by introducing the model architecture of STG-Mamba and the formulation of STG Selective State Space Models in Section 3.1 (and Appendix D). Following that, we elaborate on the key contributions of this work: the Kalman Filtering Graph Neural Networks (Section 3.2), which are specifically designed for SSSMs to enable adaptive statistics-based graph learning in contexts with noisy data and uncertainty, and the Spatial-Temporal Selective State Space Module (ST-S3M), which serves as the state space selection mechanism for spatial-temporal graph learning (Section 3.3). Finally, a detailed computational efficiency analysis of STG-Mamba is presented in Appendix B because of limited space.

3.1 Architecture of STG Selective State Space Models

Figure 1: The comprehensive illustration of STG-Mamba’s Model Architecture.

Figure 1 illustrates the comprehensive architecture of the proposed STG-Mamba. Specifically, we formulate the overall architecture in the residual encoder fashion for efficient sequential data modeling and prediction. In STG-Mamba, we leverage the Graph Selective State Space Block (GS3B) as the basic encoder module and repeat it N times. GS3B consists of several networks and operations: Layer Norm; the Kalman Filtering Graph Neural Networks (KFGN), which consist of DynamicFilter-GNN followed by Kalman Filter Upgrading (KF-Upgrading); the Spatial-Temporal Selective State Space Module (ST-S3M), which consists of Linear & Split, Conv1D, SiLU, the Graph State Space Selection Mechanism (GSSSM), and element-wise concatenation; the Graph Information Feed-Forward; and the residual connection. The Graph Information Feed-Forward structure is specifically designed to coordinate the information transmission and updating of different modules, ensuring the latest STG information is available to every module.

3.2 Kalman Filtering Graph Neural Networks

The motivation for employing Kalman Filtering-based methods in constructing GNNs stems from the need to enhance the reliability and accuracy of spatial-temporal forecasting. STG big data (e.g., traffic sensor records, weather station records) often contains inherent biases and noise that existing methods typically overlook. By integrating Kalman Filtering-based optimization and upgrading, KFGN addresses these inaccuracies by dynamically weighing the reliability of data streams from different temporal granularities and optimizing their fusion based on the estimated variances. This approach not only corrects inherent dataset errors but also significantly improves the model's capacity to capture the complex inter-dependencies within STG patterns.

As depicted in Figure 2, the KFGN pipeline consists of two key steps. In the first step, embeddings generated from the model inputs of different temporal granularities (i.e., recent steps/periodic steps/trend steps) are sent into a DynamicFilter-GNN module for processing. In the second step, the outputs of the DynamicFilter-GNN module are integrated and optimized through the Kalman Filtering Upgrading module. In the initial stage of DynamicFilter-GNN, a learnable Base Filter parameter matrix in $\mathbb{R}^{\text{in\_fea}\times\text{in\_fea}}$ is defined. It is employed to transform the graph adjacency matrix, thereby dynamically adjusting the connection degrees between nodes. Following that, we uniformly initialize the weights and biases, with an initialization standard deviation stdv calculated as the reciprocal of the square root of the number of input features: $\text{stdv}=\frac{1}{\sqrt{\text{in\_fea}}}$. This uniform initialization is critical for ensuring the module starts from a neutral point.

Figure 2: Kalman Filtering Graph Neural Networks.

Subsequently, we transform the Base Filter via a linear transformation layer and combine it with the original adjacency matrix $A_{\text{ini}}^{c_{i}}$, resulting in a dynamically adjusted adjacency matrix $A_{\text{dyn}}^{c_{i}}$. Let $c_{i}\in\{r,p,q\}$, $i\in\{1,2,3\}$, represent the input data from different temporal granularities (i.e., $c_{1}=r$ denotes the historical recent data, $c_{2}=p$ the historical period data, and $c_{3}=q$ the historical trend data). Leveraging $A_{\text{dyn}}^{c_{i}}$, the input embedding $h_{in}^{c_{i}}$, weights $W_{DF}^{c_{i}}$, and bias $b_{DF}^{c_{i}}$, the input embedding undergoes a graph convolution:

$h_{DF}^{c_{i}}=h_{in}^{c_{i}}\cdot(A_{\text{dyn}}^{c_{i}}\cdot W_{DF}^{c_{i}})+b_{DF}^{c_{i}}$ (2)

This design enables the model to dynamically adjust the strengths of connections between nodes within the graph based on the learned adjustments.
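As a rough illustration of the DynamicFilter-GNN step, the numpy sketch below follows the spirit of Eq. 2 under assumed shapes: the paper leaves the exact dimensions and the filter-adjacency combination implicit, so here the linearly transformed Base Filter is simply added to the static adjacency, and the graph convolution propagates over the dynamic adjacency before a feature transform.

```python
import numpy as np

rng = np.random.default_rng(0)

class DynamicFilterGNN:
    """Minimal sketch of DynamicFilter-GNN. Shapes and the additive
    adjacency combination are assumptions, not the paper's exact design."""

    def __init__(self, num_nodes, in_fea, out_fea):
        stdv = 1.0 / np.sqrt(in_fea)   # initialization scale from the paper
        self.base_filter = rng.uniform(-stdv, stdv, (num_nodes, num_nodes))
        self.W_lin = rng.uniform(-stdv, stdv, (num_nodes, num_nodes))  # linear transform of the Base Filter
        self.W_df = rng.uniform(-stdv, stdv, (in_fea, out_fea))
        self.b_df = np.zeros(out_fea)

    def forward(self, h_in, A_ini):
        # Dynamically adjusted adjacency: static graph + transformed Base Filter.
        A_dyn = A_ini + self.base_filter @ self.W_lin
        # Graph convolution in the spirit of Eq. 2: propagate, then transform features.
        return A_dyn @ h_in @ self.W_df + self.b_df

gnn = DynamicFilterGNN(num_nodes=5, in_fea=3, out_fea=4)
h_out = gnn.forward(rng.standard_normal((5, 3)), np.eye(5))
```

The learnable `base_filter` lets gradients reshape the effective connectivity per training signal, which is the mechanism the text describes for adjusting connection degrees between nodes.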

Following DynamicFilter-GNN, the next step is Kalman Filtering Upgrading. Since the input data come from the same dataset but at different temporal granularities, they are assumed to follow Gaussian distributions:

$y_{q}(x;\mu_{q},\sigma_{q})\triangleq\frac{1}{\sqrt{2\pi\sigma_{q}^{2}}}\exp\left(-\frac{(x-\mu_{q})^{2}}{2\sigma_{q}^{2}}\right);\quad y_{p}(x;\mu_{p},\sigma_{p})\triangleq\frac{1}{\sqrt{2\pi\sigma_{p}^{2}}}\exp\left(-\frac{(x-\mu_{p})^{2}}{2\sigma_{p}^{2}}\right);\quad y_{r}(x;\mu_{r},\sigma_{r})\triangleq\frac{1}{\sqrt{2\pi\sigma_{r}^{2}}}\exp\left(-\frac{(x-\mu_{r})^{2}}{2\sigma_{r}^{2}}\right)$ (3)

where the subscripts $q=c_{3}$, $p=c_{2}$, $r=c_{1}$ denote the historical trend/period/recent data, respectively.

We employ the Kalman Filtering method to derive accurate information from the multi-granularity observation sets. Specifically, the individual probability density functions can be integrated by multiplication:

$y_{fuse}(x;\mu_{q},\sigma_{q},\mu_{p},\sigma_{p},\mu_{r},\sigma_{r})=\frac{1}{\sqrt{2\pi\sigma_{q}^{2}}}\exp\left(-\frac{(x-\mu_{q})^{2}}{2\sigma_{q}^{2}}\right)\times\frac{1}{\sqrt{2\pi\sigma_{p}^{2}}}\exp\left(-\frac{(x-\mu_{p})^{2}}{2\sigma_{p}^{2}}\right)\times\frac{1}{\sqrt{2\pi\sigma_{r}^{2}}}\exp\left(-\frac{(x-\mu_{r})^{2}}{2\sigma_{r}^{2}}\right)=\frac{1}{(2\pi)^{3/2}\sqrt{\sigma_{q}^{2}\sigma_{p}^{2}\sigma_{r}^{2}}}\exp\left(-\frac{(x-\mu_{q})^{2}}{2\sigma_{q}^{2}}-\frac{(x-\mu_{p})^{2}}{2\sigma_{p}^{2}}-\frac{(x-\mu_{r})^{2}}{2\sigma_{r}^{2}}\right)$ (4)

By reorganizing Eq. 4 into a simplified version, we have:

$y_{fuse}(x;\mu_{fuse},\sigma_{fuse})=\frac{1}{\sqrt{2\pi\sigma_{fuse}^{2}}}\exp\left(-\frac{(x-\mu_{fuse})^{2}}{2\sigma_{fuse}^{2}}\right)$ (5)

where $\mu_{fuse}=\frac{\mu_{q}/\sigma_{q}^{2}+\mu_{p}/\sigma_{p}^{2}+\mu_{r}/\sigma_{r}^{2}}{1/\sigma_{q}^{2}+1/\sigma_{p}^{2}+1/\sigma_{r}^{2}}$ and $\sigma_{fuse}^{2}=\frac{1}{1/\sigma_{q}^{2}+1/\sigma_{p}^{2}+1/\sigma_{r}^{2}}$.

To simplify the representation of $\mu_{fuse}$ and $\sigma_{fuse}^{2}$, we introduce the parameters $\omega_{q}=\frac{1}{\sigma_{q}^{2}}$, $\omega_{p}=\frac{1}{\sigma_{p}^{2}}$, $\omega_{r}=\frac{1}{\sigma_{r}^{2}}$. Then, $\mu_{fuse}$ and $\sigma_{fuse}^{2}$ can be re-written as:

$\mu_{fuse}=\frac{\mu_{q}\omega_{q}+\mu_{p}\omega_{p}+\mu_{r}\omega_{r}}{\omega_{q}+\omega_{p}+\omega_{r}};\quad\sigma_{fuse}^{2}=\frac{1}{\omega_{q}+\omega_{p}+\omega_{r}}$ (6)

This means that observations from different branches can be effectively integrated through a weighted sum, where the weights are derived from the variances:

$\mu_{fuse}=\mu_{q}\left(\frac{\omega_{q}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)+\mu_{p}\left(\frac{\omega_{p}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)+\mu_{r}\left(\frac{\omega_{r}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)$ (7)
$\Downarrow$
$y_{fuse}=y_{q}\left(\frac{\omega_{q}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)+y_{p}\left(\frac{\omega_{p}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)+y_{r}\left(\frac{\omega_{r}}{\omega_{q}+\omega_{p}+\omega_{r}}\right)$
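The precision-weighted fusion of Eqs. 6-7 is a few lines of numpy. The sketch below computes the fused mean and variance for three branch estimates; the example values are illustrative, not drawn from any dataset:

```python
import numpy as np

def kalman_fuse(mus, sigmas):
    """Precision-weighted fusion of Gaussian estimates (Eqs. 6-7)."""
    w = 1.0 / np.square(sigmas)              # precisions: omega = 1 / sigma^2
    mu_fuse = np.sum(w * mus) / np.sum(w)    # precision-weighted mean
    var_fuse = 1.0 / np.sum(w)               # fused variance
    return mu_fuse, var_fuse

# Three branch estimates (e.g., trend/period/recent) of the same quantity.
mu, var = kalman_fuse(np.array([10.0, 12.0, 11.0]),
                      np.array([1.0, 2.0, 1.0]))
```

Note that the fused variance is always smaller than the smallest branch variance, which is why the multiplication of Gaussians sharpens the estimate rather than averaging it away.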

Directly calculating the variance of an observation set requires the complete data, which is computationally expensive. Therefore, we estimate the variance by computing the variance distribution of the training samples:

$\mathbb{E}[\sigma_{\{q,p,r\}}^{2}]=\frac{1}{N_{m}}\sum_{i}\frac{(S_{i}-\overline{S})^{2}}{L}$ (8)

where $L$ is the length of each sample sequence and $N_{m}$ is the number of data samples. $S_{i}$ denotes the $i$-th observation, and $\overline{S}$ denotes the average value of all observed samples. To further improve the integration and upgrading, we add two learnable weight parameters $\epsilon,\varphi$ to balance the observation branches. Based on the formulation of $y_{fuse}$ in Eq. 7, the output of the Kalman Filtering Upgrading module is:

$\tilde{y}_{fuse}=\frac{\epsilon\cdot(\hat{y}_{q}\omega_{q})+\varphi\cdot(\hat{y}_{p}\omega_{p})+\hat{y}_{r}\omega_{r}}{\omega_{q}+\omega_{p}+\omega_{r}}$ (9)

where $\omega_{q}=\frac{1}{\sigma_{q}^{2}}$, $\omega_{p}=\frac{1}{\sigma_{p}^{2}}$, $\omega_{r}=\frac{1}{\sigma_{r}^{2}}$, and $\epsilon$ and $\varphi$ are trainable weights. Finally, to facilitate neural network training and ensure scalability, we remove the constant denominator $(\omega_{q}+\omega_{p}+\omega_{r})$ in Eq. 9. As such, the final output of Kalman Filtering Upgrading is defined as:

$\hat{y}_{fuse}=\epsilon\cdot(\hat{y}_{q}\omega_{q})+\varphi\cdot(\hat{y}_{p}\omega_{p})+\hat{y}_{r}\omega_{r}$ (10)

Here, $\hat{y}_{fuse}$ is the final output of the KF-Upgrading module. Note that our approach is an optimized version of classic Kalman Filtering theory, specifically designed for deep learning-based methodology, ensuring computational efficiency, dynamic hierarchical STG feature fusion, and improved accuracy.
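A minimal sketch of the two KF-Upgrading steps follows, assuming one plausible reading of Eq. 8 (per-sequence squared deviations divided by $L$, averaged over the $N_{m}$ samples; the paper's indexing is ambiguous) and with the learnable $\epsilon,\varphi$ fixed to 1.0 for illustration:

```python
import numpy as np

def estimate_variance(samples):
    """One reading of Eq. 8: squared deviations from the global mean,
    summed per sequence, divided by L, averaged over the N_m samples."""
    samples = np.asarray(samples, dtype=float)     # shape (N_m, L)
    s_bar = samples.mean()
    return np.mean(np.sum((samples - s_bar) ** 2, axis=1) / samples.shape[1])

def kf_upgrade(y_q, y_p, y_r, var_q, var_p, var_r, eps=1.0, phi=1.0):
    """Eq. 10: precision-weighted sum of the three branch outputs;
    eps and phi stand in for the trainable balancing weights."""
    return eps * y_q / var_q + phi * y_p / var_p + y_r / var_r

out = kf_upgrade(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
```

In the actual module, `eps` and `phi` would be learned parameters updated by backpropagation; the constant denominator of Eq. 9 is dropped exactly as the text describes.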

3.3 ST-S3M: Spatial-Temporal Selective State Space Module

ST-S3M plays the significant role of adaptive STG feature selection, serving as the counterpart of the attention mechanism. The architecture of ST-S3M is shown in Figure 1. After being processed by the KFGN module for dynamic spatiotemporal dependency modeling and statistics-based integration and upgrading, the generated embedding $\hat{h}_{fuse}$ is sent to ST-S3M for input-specific dynamic feature selection. In ST-S3M, the input STG embedding passes through a linear layer, followed by a splitting operation. Let $b$ denote the batch size, $l$ the sequence length, $d_{model}$ the feature dimension, and $d_{inner}$ the model's inner feature dimension; we have:

$h_{\text{main-res}}=W_{in}\hat{h}_{fuse}+b_{in};\quad(h_{\text{main}},res)=\text{split}(h_{\text{main-res}})$ (11)

where $\hat{h}_{fuse}\in\mathbb{R}^{b\times l\times d_{model}}$, $W_{in}\in\mathbb{R}^{2d_{inner}\times d_{model}}$, $b_{in}\in\mathbb{R}^{2d_{inner}}$, $h_{\text{main-res}}\in\mathbb{R}^{b\times l\times 2d_{inner}}$, and $h_{\text{main}},res\in\mathbb{R}^{b\times l\times d_{inner}}$.

Then, $h_{\text{main}}$ flows into a 1D convolution layer, followed by a SiLU activation function:

$h_{\text{main}}^{\prime}=\text{SiLU}(\text{Conv1D}(h_{\text{main}}))$ (12)

where $h_{\text{main}}^{\prime}\in\mathbb{R}^{b\times l\times d_{inner}}$. The output of SiLU is sent to the graph state space selection mechanism:

$h_{\text{sssm}}=\text{GSSSM}(h_{\text{main}}^{\prime})$ (13)

where $h_{\text{sssm}}\in\mathbb{R}^{b\times l\times d_{inner}}$. Meanwhile, the residual part $res$ is also processed by a SiLU activation, and finally we employ an element-wise product to fuse the main STG embedding with its residual: $h_{\text{sssm}}\odot\text{SiLU}(res)$. The fused result is transformed through a linear projection:

$h_{out}=W_{out}(h_{\text{sssm}}\odot\text{SiLU}(res))+b_{out}$ (14)

where $h_{out}\in\mathbb{R}^{b\times l\times d_{model}}$ is the final output of ST-S3M, $W_{out}\in\mathbb{R}^{d_{model}\times d_{inner}}$, and $b_{out}\in\mathbb{R}^{d_{model}}$.
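The forward pass of Eqs. 11-14 can be sketched in numpy as below. The Conv1D stage is omitted for brevity and the GSSSM is a stand-in callable, so this is a shape-level illustration rather than the module itself:

```python
import numpy as np

rng = np.random.default_rng(1)
silu = lambda x: x / (1.0 + np.exp(-x))     # SiLU activation

def st_s3m_forward(h_fuse, W_in, b_in, W_out, b_out, gsssm):
    """Sketch of Eqs. 11-14: project, split into main/residual streams,
    run the selection mechanism, gate with SiLU(res), project back."""
    h = h_fuse @ W_in.T + b_in                 # (b, l, 2*d_inner), Eq. 11
    h_main, res = np.split(h, 2, axis=-1)      # split along the feature dim
    h_main = silu(h_main)                      # Eq. 12 (Conv1D omitted here)
    h_sssm = gsssm(h_main)                     # GSSSM stand-in, Eq. 13
    return (h_sssm * silu(res)) @ W_out.T + b_out   # gated fusion + projection, Eq. 14

b, l, d_model, d_inner = 2, 4, 8, 16
h_fuse = rng.standard_normal((b, l, d_model))
W_in = rng.standard_normal((2 * d_inner, d_model)) * 0.1
W_out = rng.standard_normal((d_model, d_inner)) * 0.1
h_out = st_s3m_forward(h_fuse, W_in, np.zeros(2 * d_inner),
                       W_out, np.zeros(d_model), gsssm=lambda x: x)
```

The SiLU-gated residual branch is what makes the selection multiplicative: features the residual stream suppresses are zeroed out regardless of what the GSSSM emits.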

The GSSSM in ST-S3M plays the main role in adaptive spatiotemporal feature selection. We detail the parameter computation and update process of the Graph State Space Selection Mechanism in Algorithm 2. The Graph Selective Scan algorithm (Algorithm 1), which receives the graph information feed-forward, is the most significant step in the state space selection process. We detail it in the following.

Graph Selective Scan Algorithm. The algorithm is an extension of the basic selective scan that integrates the dynamic graph information generated by KFGN into the state-space selection and update procedure, enhancing Mamba's capability in capturing STG dependencies. The key steps and modifications are as follows:

We start by obtaining the dimensions of the input tensor $u\in\mathbb{R}^{b\times l\times d_{in}}$, where $b$ is the batch size, $l$ the sequence length, and $d_{in}$ the input feature dimension; the second dimension of $\mathbb{A}$ is denoted $n$.

Next, we highlight the main novelty of the Graph Selective Scan algorithm: the integrated upgrade of the graph information feed-forward and the parameter $\Delta^{*}$, corresponding to lines 2-5 of Algorithm 1. We retrieve the dynamic graph adjacency matrix $\alpha_{t}$ from the DynamicFilter-GNN: $\alpha_{t}=\text{DynamicFilter-GNN}.\text{get\_transformed\_adjacency()}$. To let the graph information participate in the state space selection and update process, an intuitive and natural way is to fuse it into the parameter $\Delta^{*}$ and let the fused parameter flow into the subsequent calculation of $\text{delta}\mathbb{A}$ and $\text{delta}\mathbb{B}_{u}$. However, the dimensions of $\alpha_{t}$ and $\Delta^{*}$ may not be identical. Thus, we initialize a padding matrix $\text{adj\_padded}=\mathbf{1}^{d_{\text{in}}\times d_{\text{in}}}$ and fill it with the graph information from $\alpha_{t}$ as $\text{adj\_padded}[:\alpha_{t}.\text{size}(0),:\alpha_{t}.\text{size}(1)]=\alpha_{t}$. After this dimension adjustment, we integrate $\Delta^{*}$ and $\text{adj\_padded}$ by matrix multiplication: $\Delta^{\prime}=\text{matmul}(\Delta^{*},\text{adj\_padded})$.

Following that, we discretize the continuous parameters $\mathbb{A}$ and $\mathbb{B}$ as:

State transition matrix update: $\text{delta}\mathbb{A}=\exp(\text{einsum}(\Delta^{\prime},\mathbb{A}))$ (15)
Control matrix update: $\text{delta}\mathbb{B}_{u}=\text{einsum}(\Delta^{\prime},\mathbb{B},u)$

where einsum denotes the Einstein summation convention, $\text{delta}\mathbb{A}$ is the updated state transition matrix, and $\text{delta}\mathbb{B}_{u}$ is the updated control matrix.

Then, the iterative state update is executed on the state $x\in\mathbb{R}^{b\times d_{in}\times n}$. For time steps $i=1$ to $l$, we have:

$x\leftarrow\text{delta}\mathbb{A}[:,i]\times x+\text{delta}\mathbb{B}_{u}[:,i]$ (16)
$z\leftarrow\text{einsum}(x,\mathbb{C}[:,i,:])$

where $z$ is the current output computed by the Einstein summation convention. $z$ is then appended to the output list $z_{s}$. The list of outputs $z_{s}$ is stacked to form the output tensor $z$, and finally we add the direct gain $\mathbb{D}$ to the final output as $z\leftarrow z+u\times\mathbb{D}$.
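The steps above can be sketched end to end in numpy. Shapes follow the text where stated; the input-dependent shapes of $\mathbb{B}$ and $\mathbb{C}$ (per batch and timestep, as in standard selective scans) are assumptions, and the example inputs are random placeholders:

```python
import numpy as np

def graph_selective_scan(u, delta, A, B, C, D, adj):
    """Sketch of Algorithm 1. Assumed shapes: u, delta (b, l, d_in);
    A (d_in, n); input-dependent B, C (b, l, n); D (d_in,);
    adj is the dynamic adjacency retrieved from DynamicFilter-GNN."""
    b, l, d_in = u.shape
    # Pad the adjacency to d_in x d_in with ones, then fold it into Delta.
    adj_padded = np.ones((d_in, d_in))
    adj_padded[:adj.shape[0], :adj.shape[1]] = adj
    delta = delta @ adj_padded                              # Delta' = matmul(Delta, adj_padded)
    deltaA = np.exp(np.einsum('bld,dn->bldn', delta, A))    # discretized state transition (Eq. 15)
    deltaBu = np.einsum('bld,bln,bld->bldn', delta, B, u)   # discretized control (Eq. 15)
    x = np.zeros((b, d_in, A.shape[1]))
    ys = []
    for i in range(l):                                      # iterative state update (Eq. 16)
        x = deltaA[:, i] * x + deltaBu[:, i]
        ys.append(np.einsum('bdn,bn->bd', x, C[:, i]))      # readout through C
    y = np.stack(ys, axis=1)                                # (b, l, d_in)
    return y + u * D                                        # add the direct gain D

b, l, d_in, n = 2, 3, 4, 5
rng = np.random.default_rng(0)
y = graph_selective_scan(
    u=rng.standard_normal((b, l, d_in)),
    delta=rng.random((b, l, d_in)),
    A=-rng.random((d_in, n)),          # negative entries keep exp(Delta' * A) stable
    B=rng.standard_normal((b, l, n)),
    C=rng.standard_normal((b, l, n)),
    D=rng.standard_normal(d_in),
    adj=rng.random((3, 3)),
)
```

Because the adjacency enters through $\Delta^{\prime}$, stronger graph connections enlarge the effective step size for the corresponding feature channels, which is how the scan couples spatial structure to temporal selection.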

The Graph Selective Scan algorithm offers several advantages including: enhanced spatiotemporal dependency modeling, adaptability to changing graph structures, and improved forecasting accuracy, making it particularly suitable for spatiotemporal graph learning tasks.

4 Experiments

4.1 Dataset Statistics and Baseline Methods

Datasets. We select three real-world STG datasets: California road network speed records, Hangzhou metro system entry/exit records, and weather station records across mainland China, namely PeMS04 Song et al. (2020), HZMetro Liu et al. (2022), and KnowAir Wang et al. (2020), respectively. The detailed dataset statistics and descriptions are summarized in Table 1.

Table 1: Statistics and descriptions of the three datasets used for experimental evaluation.
Dataset PeMS04 HZMetro KnowAir
City California, USA Hangzhou, China 184 main cities across China
Data Type Network traffic speed Station InFlow/OutFlow Weather records
Total Nodes 307 80 184
Time Interval 5-minute 15-minute 3-hour
Time Range 1/Jan/2018 - 28/Feb/2018 1/Jan/2019 - 25/Jan/2019 1/Sep/2016 - 31/Jan/2017
Dataset Length 16,992 2,400 1,224

Baseline Methods. For fair comparison, we consider both benchmark spatiotemporal graph neural network (STGNN)-based methods, and Transformer-based methods for STG learning.

STGNN-based Methods: (I) STGCN Yu et al. (2018), (II) STSGCN Song et al. (2020), (III) STG-NCDE Choi et al. (2022), (IV) DDGCRN Weng et al. (2023).

Attention (Transformer)-based Methods: (V) ASTGCN Guo et al. (2019), (VI) ASTGNN Guo et al. (2021), (VII) PDFormer Jiang et al. (2023), (VIII) STAEformer Liu et al. (2023), (IX) MultiSPANS Zou et al. (2024).

4.2 Implementation Settings

We split each dataset along the time axis with a ratio of 6:2:2 into training/validation/testing sets. Before training, all data samples are normalized into the range $[0,1]$ with MinMax normalization. We set the batch size to 48 and employ AdamW as the optimizer. The learning rate is initialized to 1e-4, with a weight decay of 1e-2. CosineAnnealingLR Loshchilov and Hutter (2016) is adopted as the learning rate scheduler, with a maximum of 50 iterations and a minimum learning rate of 1e-5. The number of training epochs is set to 100, and the number of GS3B encoder layers to N=4. We use the historical ground-truth data from the past 12 steps (for the historical recent/period/trend scales) to forecast the future 12 time steps. Mean Squared Error (MSE) is employed as the loss function. All experiments are conducted on one NVIDIA A100 80GB GPU.
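The data preparation described above (chronological 6:2:2 split, then MinMax scaling to [0, 1]) can be sketched as below. Fitting the scaler on the training portion only is a common convention assumed here; the paper does not specify which portion the scaler statistics come from:

```python
import numpy as np

def prepare_splits(series, train=0.6, val=0.2):
    """Chronological 6:2:2 split followed by MinMax scaling to [0, 1].
    The scaler is fit on the training portion only (assumed convention)."""
    n = len(series)
    i_tr, i_va = int(n * train), int(n * (train + val))
    tr, va, te = series[:i_tr], series[i_tr:i_va], series[i_va:]
    lo, hi = tr.min(), tr.max()                 # statistics from the train split
    scale = lambda x: (x - lo) / (hi - lo)      # MinMax normalization
    return scale(tr), scale(va), scale(te)

tr, va, te = prepare_splits(np.arange(10.0))
```

Splitting along the time axis (rather than shuffling) is essential here, since a random split would leak future observations into training.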

4.3 Evaluation Metrics

We employ RMSE/MAE/MAPE/$std_{\text{MAE}}$ as the deterministic evaluation metrics (lower is better for all four), and $\Delta_{\text{VAR}}$ (Change in Variance Accounted For) and $R^{2}$ (Coefficient of Determination) as statistics-based evaluation metrics, where lower $\Delta_{\text{VAR}}$ and higher $R^{2}$ are better. The reported results in the Experiments section are averages over 10 runs. The detailed metric calculation methods are provided in Appendix B because of limited space.
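For reference, the standard definitions of these metrics are sketched below. These are the textbook formulas, assumed rather than taken from Appendix B; in particular, the paper's exact handling of near-zero targets in MAPE is not specified, so a small epsilon guard is used here:

```python
import numpy as np

def forecast_metrics(y_true, y_pred, eps=1e-8):
    """Textbook RMSE/MAE/MAPE/R^2 (definitions assumed, not the paper's
    Appendix B formulas; eps guards MAPE against near-zero targets)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / (np.abs(y_true) + eps))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mae, mape, r2

rmse, mae, mape, r2 = forecast_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```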

4.4 Results Evaluation and Comparison

A comprehensive evaluation of STG-Mamba against the baseline methods is conducted, and we report the RMSE/MAE/MAPE/$std_{\text{MAE}}$ results in Table 2. STG-Mamba consistently outperforms the other benchmark methods, except on the MAPE criterion for PeMS04 (Flow). It can also be observed that Transformer-based methods that integrate GNNs within the architecture consistently outperform purely GNN-based methods. For instance, PDFormer, STAEformer, and MultiSPANS consistently outperform STGCN, STSGCN, and STG-NCDE. The reason is that although GNNs are effective at modeling STG tasks, Transformer-based methods have a stronger ability to capture local and long-term dynamic dependencies. Among the GNN-based methods, STG-NCDE and DDGCRN stand out for their competitive performance. STG-NCDE models continuous-time dynamics through neural controlled differential equations (NCDEs), offering a more precise representation of spatiotemporal feature changes over time compared to other discrete GNN-based methods.

As a new contender poised to challenge the Transformer architecture, the selective state space mechanism of Mamba diverges from attention by utilizing a continuous-time model that captures the dynamic evolution of spatial-temporal dependencies more naturally and efficiently. Unlike attention, the SSSM directly captures the temporal evolution of features, enabling it to scale more gracefully with the sequence length and complexity of STG data. Furthermore, we equip STG-Mamba with Kalman Filtering-based adaptive STG evolution, the elaborately designed Graph Selective Scan algorithm, and the Feed-Forward connection, making it inherit the advantages of modern SSSMs and rendering it highly suitable for STG learning.

Table 2: Performance Eval and Comparison with Baselines on PeMS04/HZMetro/KnowAir Dataset.
Model PeMS04 (Flow) HZMetro KnowAir
RMSE MAE MAPE $std_{\text{MAE}}$ RMSE MAE MAPE $std_{\text{MAE}}$ RMSE MAE MAPE $std_{\text{MAE}}$
STGCN 35.55 22.70 14.59 1.2068 34.85 21.33 13.47 1.1835 11.46 8.37 11.26 0.4358
STSGCN 33.65 21.19 13.90 1.1820 33.24 20.79 13.06 1.1249 11.22 8.15 10.89 0.4134
STG-NCDE 31.09 19.21 12.76 1.0736 32.91 20.75 12.88 1.1097 10.85 7.93 10.47 0.3922
DDGCRN 30.51 18.45 12.19 1.0365 31.69 19.52 12.45 1.0586 10.43 7.84 10.38 0.3794
ASTGCN 35.22 22.93 16.56 1.1853 34.36 21.12 13.50 1.1258 10.27 7.57 10.51 0.3745
ASTGNN 31.16 19.26 12.65 1.0583 32.75 20.63 12.47 1.0784 9.68 7.35 10.23 0.3569
PDFormer 29.97 18.32 12.10 1.0217 30.18 19.13 11.92 1.0205 9.46 7.12 10.06 0.3375
STAEformer 30.18 18.22 11.98 0.9678 29.94 18.85 12.03 0.9728 8.69 6.93 9.89 0.3083
MultiSPANS 30.46 19.07 13.29 0.9851 30.31 18.97 11.85 0.9931 8.57 6.84 10.05 0.3149
STG-Mamba 29.53 18.09 12.11 0.9218 29.23 18.26 11.59 0.9271 8.05 6.37 9.64 0.2648

4.5 Robustness Analysis

Table 3: Robustness analysis. We choose rush/non-rush, weekend/non-weekend traffic scenarios.
Models Rush Hours Non Rush Hours
RMSE MAE MAPE RMSE MAE MAPE
ASTGNN 33.57 21.05 13.81 29.76 18.22 12.49
PDFormer 31.69 19.54 12.38 28.95 18.16 11.97
STAEformer 31.36 19.10 12.22 29.47 17.98 11.93
MultiSPANS 30.95 19.28 13.31 30.04 18.37 12.54
STG-Mamba 29.58 18.13 12.15 29.23 18.04 11.91
Models Weekend Non Weekend
RMSE MAE MAPE RMSE MAE MAPE
ASTGNN 30.66 19.08 13.35 31.82 19.53 12.28
PDFormer 29.87 18.26 12.04 30.39 18.68 12.23
STAEformer 29.83 18.19 11.98 30.26 18.42 12.27
MultiSPANS 30.13 18.42 12.73 30.52 19.31 13.34
STG-Mamba 29.31 18.05 11.92 29.57 18.15 12.16

STG data exhibit notable periodicity and diversity, with urban mobility/traffic data showing clear differences between morning/evening peak travel times and non-peak periods, as well as between weekdays and weekends. Given these variations due to external environmental changes, it is important to determine whether deep learning models can effectively model the spatiotemporal dependencies under different conditions. As such, we establish four distinct external scenarios: (a) rush hours on weekdays, 8:00-11:00 AM and 4:00-7:00 PM; (b) non-rush hours during weekdays; (c) weekends (all hours); and (d) non-weekends (all hours). Extensive experiments are conducted on the PeMS04 dataset. Table 3 presents the forecasting results for the four traffic scenarios.

An ideal deep learning-based STG model should be robust to external disturbances, maintaining consistent performance across different scenarios. Compared methods such as ASTGNN, PDFormer, and STAEformer show pronounced performance gaps between the Rush Hour and Non-Rush Hour scenarios. In contrast, STG-Mamba exhibits the smallest performance gap in both the Rush/Non-Rush and the Weekend/Non-Weekend scenarios, while also achieving the best RMSE/MAE/MAPE results. In summary, STG-Mamba is the most robust of the compared models and outperforms existing baselines under diverse traffic conditions.

Statistics-based evaluation is indispensable in spatiotemporal forecasting, as it provides a quantifiable measure of how well a model captures and predicts complex data dynamics over time and space. Specifically, $R^{2}$ and $\Delta_{\text{VAR}}$ help assess the accuracy and robustness of models under varying conditions, ensuring the reliability of predictions and effectively informing decision-making. The statistical evaluation results are presented in Table 4. STG-Mamba demonstrates superior performance across all datasets, achieving the highest $R^{2}$ values and the lowest $\Delta_{\text{VAR}}$ scores, indicating its advantage in both accuracy and efficiency when handling spatiotemporal dependencies.

Table 4: Statistical evaluation results.
Model PeMS04 (Flow) HZMetro KnowAir
$\Delta_{\text{VAR}}$ $R^{2}$ $\Delta_{\text{VAR}}$ $R^{2}$ $\Delta_{\text{VAR}}$ $R^{2}$
STGCN 0.23145 0.77069 0.25769 0.74385 0.49851 0.50487
STSGCN 0.20538 0.79456 0.21474 0.78862 0.47536 0.52831
STG-NCDE 0.18275 0.81973 0.19827 0.80492 0.46173 0.54125
DDGCRN 0.13291 0.87355 0.14796 0.85511 0.43185 0.56506
ASTGCN 0.17289 0.82698 0.18131 0.82134 0.42729 0.58461
ASTGNN 0.14863 0.84850 0.15765 0.84627 0.39726 0.60563
PDFormer 0.11849 0.88190 0.13259 0.87095 0.36182 0.64719
STAEformer 0.09631 0.90632 0.10383 0.89738 0.33215 0.66085
MultiSPANS 0.06218 0.93804 0.07596 0.92561 0.32569 0.67122
STG-Mamba 0.04387 0.95489 0.05148 0.94752 0.30182 0.69537

4.6 Ablation Study

Refer to caption
Figure 3: Model component analysis of STG-Mamba.

To investigate the effectiveness of each component of STG-Mamba, we design five model variants and evaluate their forecasting performance on the PeMS04/HZMetro/KnowAir datasets: (I) STG-Mamba: the full STG-Mamba model without any modification. (II) STG-Mamba w/o KF-Upgrading: we replace the KF-Upgrading module in KFGN with a simple sum-average operation to fuse the three temporal branches. (III) STG-Mamba w/o Dynamic Filter Linear: we replace the proposed Dynamic Filter Linear in KFGN with basic static graph convolution. (IV) STG-Mamba w/o GSSSM: the Graph State Space Selection Mechanism (GSSSM) is replaced by the basic SSSM Gu and Dao [2023]. (V) STG-Mamba w/o ST-S3M: the whole ST-S3M module is removed from the GS3B Encoder.

As illustrated in Figure 3, STG-Mamba w/o KF-Upgrading performs worse than the full model, demonstrating the necessity and suitability of statistical learning-based GNN state space upgrading and optimization for SSSM-based methods. The performance drop caused by removing Dynamic Filter Linear demonstrates its effectiveness and the necessity of a suitable adaptive GNN for STG feature learning. Furthermore, replacing the Graph State Space Selection Mechanism in ST-S3M results in a substantial degradation of model capability, supporting the feasibility of using SSSMs as a substitute for the attention mechanism. Finally, removing ST-S3M altogether degrades STG-Mamba to a plain GNN-based model, resulting in the lowest performance.

5 Conclusions

In this work, we introduce, for the first time, deep learning-based selective state space models (SSSMs) to spatial-temporal graph learning tasks. We propose STG-Mamba, which leverages modern SSSMs for accurate and efficient STG forecasting. In STG-Mamba, the ST-S3M module facilitates input-dependent graph evolution and feature selection, successfully integrating the STG network with SSSMs. Through the Kalman Filtering Graph Neural Networks (KFGN), the learned STG embeddings undergo smooth, statistical learning-based optimization and upgrading, aligning with the overall STG selective state space modeling process. Compared with attention-based methods, STG-Mamba achieves linear-time complexity and a substantial reduction in FLOPs and inference time. Extensive empirical studies on three open-source STG datasets demonstrate the consistent superiority of STG-Mamba over benchmark methods. We believe STG-Mamba offers a promising new approach to general STG learning, providing competitive model performance and input-dependent contextual feature selection at an affordable computational cost.

References

  • Gu and Dao [2023] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
  • Wang et al. [2024] Chloe Wang, Oleksii Tsepa, Jun Ma, and Bo Wang. Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv preprint arXiv:2402.00789, 2024.
  • Liu et al. [2024] Jiarun Liu, Hao Yang, Hong-Yu Zhou, Yan Xi, Lequan Yu, Yizhou Yu, Yong Liang, Guangming Shi, Shaoting Zhang, Hairong Zheng, et al. Swin-umamba: Mamba-based unet with imagenet-based pretraining. arXiv preprint arXiv:2402.03302, 2024.
  • Lee et al. [1994] Jay H Lee, Manfred Morari, and Carlos E Garcia. State-space interpretation of model predictive control. Automatica, 30(4):707–717, 1994.
  • Friedland [2012] Bernard Friedland. Control system design: an introduction to state-space methods. 2012.
  • Aoki [2013] Masanao Aoki. State space modeling of time series. 2013.
  • Song et al. [2020] Chao Song, Youfang Lin, Shengnan Guo, and Huaiyu Wan. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 914–921, 2020.
  • Liu et al. [2022] Lingbo Liu, Jingwen Chen, Hefeng Wu, Jiajie Zhen, Guanbin Li, and Liang Lin. Physical-virtual collaboration modeling for intra- and inter-station metro ridership prediction. IEEE Transactions on Intelligent Transportation Systems, 23(4):3377–3391, 2022.
  • Wang et al. [2020] Shuo Wang, Yanran Li, Jiang Zhang, Qingye Meng, Lingwei Meng, and Fei Gao. Pm2.5-gnn: A domain knowledge enhanced graph neural network for pm2.5 forecasting. In Proceedings of the 28th international conference on advances in geographic information systems, pages 163–166, 2020.
  • Yu et al. [2018] Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), pages 3634–3640, 2018.
  • Choi et al. [2022] Jeongwhan Choi, Hwangyong Choi, Jeehyun Hwang, and Noseong Park. Graph neural controlled differential equations for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6367–6374, 2022.
  • Weng et al. [2023] Wenchao Weng, Jin Fan, Huifeng Wu, Yujie Hu, Hao Tian, Fu Zhu, and Jia Wu. A decomposition dynamic graph convolutional recurrent network for traffic forecasting. Pattern Recognition, 142:109670, 2023.
  • Guo et al. [2019] Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 922–929, 2019.
  • Guo et al. [2021] Shengnan Guo, Youfang Lin, Huaiyu Wan, Xiucheng Li, and Gao Cong. Learning dynamics and heterogeneity of spatial-temporal graph data for traffic forecasting. IEEE Transactions on Knowledge and Data Engineering, 34(11):5415–5428, 2021.
  • Jiang et al. [2023] Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, and Jingyuan Wang. Pdformer: Propagation delay-aware dynamic long-range transformer for traffic flow prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pages 4365–4373, 2023.
  • Liu et al. [2023] Hangchen Liu, Zheng Dong, Renhe Jiang, Jiewen Deng, Jinliang Deng, Quanjun Chen, and Xuan Song. Spatio-temporal adaptive embedding makes vanilla transformer sota for traffic forecasting. In Proceedings of the 32nd ACM international conference on information and knowledge management, pages 4125–4129, 2023.
  • Zou et al. [2024] Dongcheng Zou, Senzhang Wang, Xuefeng Li, Hao Peng, Yuandong Wang, Chunyang Liu, Kehua Sheng, and Bo Zhang. Multispans: A multi-range spatial-temporal transformer network for traffic forecast via structural entropy optimization. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pages 1032–1041, 2024.
  • Loshchilov and Hutter [2016] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
  • Kong et al. [2024] Jianlei Kong, Xiaomeng Fan, Min Zuo, Muhammet Deveci, Xuebo Jin, and Kaiyang Zhong. Adct-net: Adaptive traffic forecasting neural network via dual-graphic cross-fused transformer. Information Fusion, 103:102122, 2024.
  • Huo et al. [2023] Guangyu Huo, Yong Zhang, Boyue Wang, Junbin Gao, Yongli Hu, and Baocai Yin. Hierarchical spatio–temporal graph convolutional networks and transformer network for traffic flow forecasting. IEEE Transactions on Intelligent Transportation Systems, 24(4):3855–3867, 2023.
  • Liang et al. [2023] Yuxuan Liang, Yutong Xia, Songyu Ke, Yiwei Wang, Qingsong Wen, Junbo Zhang, Yu Zheng, and Roger Zimmermann. Airformer: Predicting nationwide air quality in china with transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14329–14337, 2023.
  • Chen et al. [2023] Ling Chen, Jiahui Xu, Binqing Wu, and Jianlong Huang. Group-aware graph neural network for nationwide city air quality forecasting. ACM Transactions on Knowledge Discovery from Data, 18(3):1–20, 2023.
  • Zhang et al. [2023] Yipeng Zhang, Xin Wang, Hong Chen, and Wenwu Zhu. Adaptive disentangled transformer for sequential recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 3434–3445, 2023.
  • Wei et al. [2023] Yinwei Wei, Wenqi Liu, Fan Liu, Xiang Wang, Liqiang Nie, and Tat-Seng Chua. Lightgt: A light graph transformer for multimedia recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1508–1517, 2023.
  • Chen et al. [2022] Changlu Chen, Yanbin Liu, Ling Chen, and Chengqi Zhang. Bidirectional spatial-temporal adaptive transformer for urban traffic flow forecasting. IEEE Transactions on Neural Networks and Learning Systems, 2022.
  • Cong et al. [2021] Yuren Cong, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn, and Michael Ying Yang. Spatial-temporal transformer for dynamic scene graph generation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 16372–16382, 2021.
  • Zhou et al. [2021] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 11106–11115, 2021.
  • Zhou et al. [2022] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pages 27268–27286, 2022.
  • Rampášek et al. [2022] Ladislav Rampášek, Michael Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35:14501–14515, 2022.

Appendix A ST-S3M Graph Selective Scan Algorithm & Parameter Calculation and Upgrade Algorithm

Algorithm 1 Graph Selective Scan Algorithm

Input: $u,\Delta^{*},\mathbb{A},\mathbb{B},\mathbb{C},\mathbb{D}$;
Output: $z$;

1:  Obtain the dims of $u$ as $(b,l,d_{\text{in}})$ and the second dim of $\mathbb{A}$ as $n$;
2:  Get the dynamic graph adjacency matrix from DynamicFilterGNN: $\alpha_{t}\leftarrow$ DynamicFilterGNN.get_transformed_adjacency();
3:  Initialize a padding matrix $\text{adj\_padded}\in\mathbb{R}^{d_{\text{in}}\times d_{\text{in}}}$: $\text{adj\_padded}\leftarrow\mathbf{1}^{d_{\text{in}}\times d_{\text{in}}}$;
4:  Fill the padding matrix with graph information: $\text{adj\_padded}[{:}\alpha_{t}.\text{size}(0),\,{:}\alpha_{t}.\text{size}(1)]\leftarrow\alpha_{t}$;
5:  Integrate $\Delta^{*}$ and $\text{adj\_padded}$ through multiplication: $\Delta^{\prime}\leftarrow\text{matmul}(\Delta^{*},\text{adj\_padded})$;
6:  Discretize the continuous parameters $\mathbb{A}$ and $\mathbb{B}$:
7:  $\text{delta}\mathbb{A}\leftarrow\exp(\text{einsum}(\Delta^{\prime},\mathbb{A}))$, where $\text{delta}\mathbb{A}\in\mathbb{R}^{b\times l\times d_{\text{in}}\times n}$;
8:  $\text{delta}\mathbb{B}_{u}\leftarrow\text{einsum}(\Delta^{\prime},\mathbb{B},u)$, where $\text{delta}\mathbb{B}_{u}\in\mathbb{R}^{b\times l\times d_{\text{in}}\times n}$;
9:  Initialize the state $x$ as zeros with dimension $\mathbb{R}^{b\times d_{\text{in}}\times n}$;
10:  for $i$ from $1$ to $l$ do
11:     $x\leftarrow\text{delta}\mathbb{A}[:,i]\times x+\text{delta}\mathbb{B}_{u}[:,i]$;
12:     $z\leftarrow\text{einsum}(x,\mathbb{C}[:,i,:])$;
13:     Append $z$ to the output list $z_{s}$;
14:  end for
15:  $z\leftarrow\text{stack}(z_{s})$, where $z\in\mathbb{R}^{b\times l\times d_{\text{in}}}$;
16:  Add the direct gain $\mathbb{D}$:
17:  $z\leftarrow z+u\times\mathbb{D}$;
18:  Return $z$.
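For concreteness, Algorithm 1 above can be sketched in NumPy. This is a minimal reading of the pseudocode, not the authors' implementation: the shapes follow the paper's notation, but the DynamicFilterGNN module is replaced by a plain (pre-padded) adjacency argument, and the einsum subscripts are our assumption about how the contractions line up.

```python
import numpy as np

def graph_selective_scan(u, delta_star, A, B, C, D, adj_padded):
    """Sketch of the Graph Selective Scan. Assumed shapes:
    u: (b, l, d_in), delta_star: (b, l, d_in), A: (d_in, n),
    B, C: (b, l, n), D: (d_in,), adj_padded: (d_in, d_in)."""
    b, l, d_in = u.shape
    # Step 5: fuse the dynamic graph structure into the step sizes.
    delta_p = delta_star @ adj_padded                        # (b, l, d_in)
    # Steps 6-8: discretize A and B with the graph-aware delta.
    deltaA = np.exp(np.einsum('bld,dn->bldn', delta_p, A))   # (b, l, d_in, n)
    deltaBu = np.einsum('bld,bln,bld->bldn', delta_p, B, u)  # (b, l, d_in, n)
    # Steps 9-14: sequential state recurrence over the l time steps.
    x = np.zeros((b, d_in, A.shape[1]))
    zs = []
    for i in range(l):
        x = deltaA[:, i] * x + deltaBu[:, i]
        zs.append(np.einsum('bdn,bn->bd', x, C[:, i]))
    z = np.stack(zs, axis=1)                                 # (b, l, d_in)
    # Step 17: add the direct (skip) gain D.
    return z + u * D
```

With a negative-valued $\mathbb{A}$ the recurrence is stable, since each discretized factor $\exp(\Delta^{\prime}\mathbb{A})$ then lies in $(0,1)$.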
Algorithm 2 State Space Selection Mechanism Parameter Calculation & Upgrade Algorithm.

Input: $x\in\mathbb{R}^{b\times l\times d_{\text{model}}}$, learnable model params $\mathbb{A}_{\log}\in\mathbb{R}^{d_{\text{in}}\times n}$ and $\mathbb{D}\in\mathbb{R}^{d_{\text{in}}}$;
Output: $y\in\mathbb{R}^{b\times l\times d_{\text{inner}}}$;

1:  Compute the input-independent params $\mathbb{A}\in\mathbb{R}^{d_{\text{in}}\times n}$ and $\mathbb{D}\in\mathbb{R}^{d_{\text{in}}}$:
2:  $\mathbb{A}=-\exp(\mathbb{A}_{\log})$, $\mathbb{D}=\mathbb{D}$;
3:  Process the input $x$ with the linear layer $Input\_proj$:
4:  $x_{\text{dbl}}=Input\_proj(x)$, $x_{\text{dbl}}\in\mathbb{R}^{b\times l\times(dt_{\text{rank}}+2n)}$;
5:  Split $x_{\text{dbl}}$ to obtain the input-dependent params $\Delta,\mathbb{B},\mathbb{C}$, where $\Delta\in\mathbb{R}^{b\times l\times dt_{\text{rank}}}$ and $\mathbb{B},\mathbb{C}\in\mathbb{R}^{b\times l\times n}$;
6:  Use the linear layer $dt\_proj$ and the $SoftPlus$ activation to adjust $\Delta$:
7:  $\Delta^{*}=SoftPlus(dt\_proj(\Delta))$, where $\Delta^{*}\in\mathbb{R}^{b\times l\times d_{\text{in}}}$;
8:  Employ the GraphSelectiveScan algorithm on $x,\Delta^{*},\mathbb{A},\mathbb{B},\mathbb{C},\mathbb{D}$:
9:  $y=\text{GraphSelectiveScan}(x,\Delta^{*},\mathbb{A},\mathbb{B},\mathbb{C},\mathbb{D})$;
10:  Return the output $y$.
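The parameter pipeline of Algorithm 2 can likewise be sketched with plain matrices standing in for the learned linear layers. The weight names `W_in` (for $Input\_proj$) and `W_dt` (for $dt\_proj$) are our hypothetical stand-ins, not names from the paper.

```python
import numpy as np

def softplus(x):
    # Numerically safe SoftPlus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def ssm_params(x, W_in, W_dt, A_log, dt_rank, n):
    """Sketch of Algorithm 2. x: (b, l, d_model);
    W_in: (d_model, dt_rank + 2n) stands in for Input_proj;
    W_dt: (dt_rank, d_in) stands in for dt_proj."""
    # Steps 1-2: input-independent state matrix, kept negative for stability.
    A = -np.exp(A_log)                                  # (d_in, n)
    # Steps 3-5: project the input and split into Delta, B, C.
    x_dbl = x @ W_in                                    # (b, l, dt_rank + 2n)
    delta = x_dbl[..., :dt_rank]                        # (b, l, dt_rank)
    B = x_dbl[..., dt_rank:dt_rank + n]                 # (b, l, n)
    C = x_dbl[..., dt_rank + n:]                        # (b, l, n)
    # Steps 6-7: up-project Delta and keep step sizes positive via SoftPlus.
    delta_star = softplus(delta @ W_dt)                 # (b, l, d_in)
    return delta_star, A, B, C
```

The low-rank detour through $dt_{\text{rank}}$ keeps the input-dependent projection cheap before $\Delta$ is expanded back to the full channel dimension.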

Appendix B Computational Efficiency Analysis and Comparison

B.1 Theoretical Computational Efficiency Analysis

For Transformer-based methods, increasing the sequence length $L$ or the graph node scale $N$ by a factor of $k$ results in a drastic expansion of computation. Leveraging a data-specific selection process, STG-Mamba instead introduces a hardware-aware algorithm that exploits the layered structure of GPU memory to minimize this cost. In detail, for a batch size $B$, STG feature dimension $d^{\prime}$, and node number $N$, STG-Mamba reads the $O(BLd^{\prime}+Nd^{\prime})$ input data of $\mathbb{A},\mathbb{B},\mathbb{C},\Delta$ from High Bandwidth Memory (HBM), processes the intermediate states of size $O(BLd^{\prime}N)$ in Static Random-Access Memory (SRAM), and writes the final output of size $O(BLd^{\prime})$ back to HBM. In contrast, a Transformer-based model reads $O(BL^{2}d^{\prime})$ input data from HBM, and the computation in SRAM remains of size $O(BL^{2}d^{\prime})$. In common practice we have $BL^{2}d^{\prime}\gg BLd^{\prime}+Nd^{\prime}$, so the SSSM-based approach effectively decreases the computational overhead of large-scale parallel processing such as neural network training. Additionally, by not retaining intermediate states, it also reduces memory usage, since these states are recomputed during the backward pass for gradient evaluation. This GPU-optimized design lets STG-Mamba operate with linear time complexity $O(L)$ in the input sequence length, a substantial speed advantage over the dense attention computation of traditional Transformers, which exhibits quadratic time complexity $O(L^{2})$.
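As a back-of-the-envelope check of this analysis, the following sketch compares the two HBM read sizes as element counts with constants dropped, exactly as in the $O(\cdot)$ expressions. The function names are ours, not part of the paper.

```python
def ssm_hbm_reads(B, L, d, N):
    # O(BLd' + Nd'): inputs A, B, C, Delta read by the selective SSM.
    return B * L * d + N * d

def attention_hbm_reads(B, L, d):
    # O(BL^2 d'): dense-attention reads grow quadratically in L.
    return B * L * L * d

# Doubling the sequence length (12 -> 24, matching the paper's horizon)
# doubles the SSM-side term but quadruples the attention-side term.
base_ssm = ssm_hbm_reads(48, 12, 64, 300)
long_ssm = ssm_hbm_reads(48, 24, 64, 300)
base_attn = attention_hbm_reads(48, 12, 64)
long_attn = attention_hbm_reads(48, 24, 64)
```

The batch size 48 matches the experimental setting; the feature dimension 64 and node count 300 are illustrative values only.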

B.2 Experimental Computational Efficiency Analysis

STG-Mamba offers a substantial improvement in computational efficiency in addition to its performance gains. To quantitatively evaluate computational efficiency, we select the widely recognized Floating Point Operations (FLOPs) and inference time as criteria; for both, a smaller value indicates higher efficiency. STG-Mamba is compared with other Transformer-based benchmark methods (ASTGNN, PDFormer, and STAEformer) on the same testing set.

Table 5 shows the inference time comparison on the PeMS04 and KnowAir testing sets; we report the average inference time per 100 batches with Sequence_Length=12 and Batch_Size=48. ASTGNN is relatively redundant, with the highest inference time among the compared Transformer-based methods. STG-Mamba reduces inference time by 26.54% and 23.38%, respectively, compared with the second-best model (STAEformer).

We further compare how computational FLOPs grow with the scale complexity of the STG network, using the number of nodes to represent network scale. In the experiments, we select node scales of $[50,100,150,200,250,300]$ for PeMS04 and $[30,60,90,120,150,180]$ for KnowAir, respectively, and compare STG-Mamba with the benchmark STAEformer in every setting. Figure 4 visualizes the FLOPs under the different STG scale complexities. STG-Mamba enjoys a linear increase in FLOPs, while STAEformer shows a quadratic increase. At the initial small-scale settings, the FLOPs of STG-Mamba and STAEformer are quite similar; however, as network complexity and scale grow, the difference becomes more pronounced. At the 300-node setting of PeMS04, the FLOPs of STG-Mamba and STAEformer are 60.85G and 134.92G (2.22$\times$), respectively. At the 180-node setting of KnowAir, they are 34.03G and 102.79G (3.02$\times$), respectively.

Benefiting from the linear computational growth brought by modern selective state space models, STG-Mamba and similar models demonstrate stronger capability and computational efficiency for modeling large-scale STG data, which holds promise for addressing the computational overhead of future large-scale STG learning.

Table 5: Test inference time comparison with benchmark Transformer-based methods.
Model Inference Time on PeMS04 Inference Time on KnowAir
ASTGNN 3.285 s 3.018 s
PDFormer 2.431 s 2.357 s
STAEformer 2.106 s 1.933 s
STG-Mamba 1.547 s 1.481 s
performance $\uparrow$ +26.54% +23.38%
Refer to caption
Figure 4: (Computational Efficiency Eval) FLOPs comparison between STG-Mamba and STAEformer on PeMS04/KnowAir dataset under different STG node number settings.

Appendix C Evaluation Metrics Calculation Method

We employ RMSE/MAE/MAPE/$std_{\text{MAE}}$ as deterministic evaluation metrics (lower is better for all), and $\Delta_{\text{VAR}}$ (Change in Variance Accounted For) and $R^{2}$ (Coefficient of Determination) as statistics-based evaluation metrics, where a lower $\Delta_{\text{VAR}}$ and a higher $R^{2}$ are better. The results reported in the Experiments section are the average of 10 runs.

Here, $std_{\text{MAE}}$ refers to the standard deviation of the MAE values obtained during the testing stage; it provides a measure of consistency of the MAE results across multiple evaluation sets. A smaller $std_{\text{MAE}}$ indicates that the model performs more stably under different conditions, with less variation in prediction errors. It is calculated in two steps. Step 1: calculate the average MAE, $\mu=\frac{1}{n}\sum_{i=1}^{n}\text{MAE}_{i}$, where $n$ denotes the number of batches in the testing dataset and $\text{MAE}_{i}$ is the MAE of the $i$-th batch. Step 2: calculate the standard deviation, $std_{\text{MAE}}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\text{MAE}_{i}-\mu)^{2}}$.

$R^{2}$, the Coefficient of Determination, is a statistical index measuring the proportion of variance in the dependent variable that is predictable from the independent variables; it gauges how effectively the model explains the variation in the observed data. A higher $R^{2}$ indicates higher predictive accuracy. It is formulated as $R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}$, where $y_{i}$ denotes the ground-truth values, $\hat{y}_{i}$ the model predictions, and $\bar{y}$ the mean of the ground-truth values. The numerator $\sum_{i}(y_{i}-\hat{y}_{i})^{2}$ is the variance left unexplained by the model, while the denominator $\sum_{i}(y_{i}-\bar{y})^{2}$ is the total variance of the observed data.

Change in Variance Accounted For ($\Delta_{\text{VAR}}$) quantifies the improvement or regression in how much variance a model accounts for relative to previous models or baselines. This metric is particularly valuable in iterative model optimization, where understanding the variance captured by model updates is crucial. $\Delta_{\text{VAR}}$ is calculated as $\Delta_{\text{VAR}}=\frac{\text{Var}(y-\hat{y})}{\text{Var}(y)}$, where $\text{Var}(y-\hat{y})$ is the variance of the residuals and $\text{Var}(y)$ is the variance of the ground-truth data. Comparing this ratio between the current and previous models yields the change in variance accounted for, reflecting the model's improvement in explaining data variability.
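The three metric definitions above translate directly to code; a minimal NumPy version (the helper names are ours) is:

```python
import numpy as np

def std_mae(batch_maes):
    """Standard deviation of per-batch MAE values (population form, as in Step 2)."""
    m = np.asarray(batch_maes, dtype=float)
    return np.sqrt(np.mean((m - m.mean()) ** 2))

def r2_score(y, y_hat):
    """Coefficient of determination: 1 - unexplained variance / total variance."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def delta_var(y, y_hat):
    """Variance ratio Var(y - y_hat) / Var(y); 0 for a perfect prediction."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.var(y - y_hat) / np.var(y)
```

A perfect forecast gives $R^{2}=1$ and $\Delta_{\text{VAR}}=0$, matching the "higher/lower is better" directions stated above.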

Appendix D Formulation of Selective State Space Models for STG Learning

SSSMs such as structured state space sequence models (S4) and Mamba, initially designed for sequential data, can be adapted to Spatial-Temporal Graph (STG) data modeling by leveraging their structure to handle spatiotemporal dependencies dynamically. The adaptation involves discretizing the continuous system, described by linear Ordinary Differential Equations (ODEs), into a format amenable to deep neural network modeling and optimization, enhancing computational efficiency while capturing the evolving dynamics of STG networks.

Modern selective state space models such as Mamba build upon classical continuous systems by incorporating deep learning techniques and discretization methods suitable for complex time-series data. Mamba maps a one-dimensional input sequence $x(t)\in\mathbb{R}$ through intermediate implicit states $h(t)\in\mathbb{R}^{N}$ to an output $y(t)\in\mathbb{R}$. The fundamental process is described by a modified set of state space equations that include discretization adjustments to accommodate deep learning frameworks:

$h^{\prime}(t)=\overline{\mathbb{A}}h(t)+\overline{\mathbb{B}}x(t),\qquad y(t)=\mathbb{C}h(t)$   (17)

where $\overline{\mathbb{A}}\in\mathbb{R}^{N\times N}$ and $\overline{\mathbb{B}}\in\mathbb{R}^{N\times 1}$ are discretized representations of the continuous state transition and input matrices, respectively, and $\mathbb{C}\in\mathbb{R}^{1\times N}$ is the output matrix. The discretization converts $\mathbb{A}$ and $\mathbb{B}$ from their continuous forms into computationally efficient formats. These transformations are determined by a selective scan algorithm that uses a convolutional approach to model interactions across the temporal dimension:

$\overline{\mathbb{A}}=\exp(\Delta\mathbb{A}),\qquad \overline{\mathbb{B}}=(\Delta\mathbb{A})^{-1}(\exp(\Delta\mathbb{A})-\mathbb{I})\cdot\Delta\mathbb{B}$   (18)
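For a diagonal state matrix (the form used in S4/Mamba), this zero-order-hold discretization can be computed elementwise. The sketch below uses the standard ZOH form including the $(\Delta\mathbb{A})^{-1}$ factor; the function name is ours.

```python
import numpy as np

def discretize_zoh(A_diag, B, delta):
    """Zero-order-hold discretization for a diagonal A.
    A_diag: (n,) diagonal entries of A (negative for stability);
    B: (n,) input vector; delta: scalar step size."""
    dA = np.exp(delta * A_diag)                    # A_bar = exp(delta * A)
    # B_bar = (delta*A)^{-1} (exp(delta*A) - I) * (delta*B), elementwise here.
    dB = (dA - 1.0) / (delta * A_diag) * (delta * B)
    return dA, dB
```

For small $\Delta$, `dB` reduces to $\Delta\mathbb{B}$ (the first-order Euler approximation), a simplification some implementations adopt.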

Mamba extends classic state space model functionality by incorporating a selective mechanism that dynamically adapts to the characteristics of the input sequence. This adaptability is achieved through an architecture that selectively applies convolution kernels across the sequence, optimizing the model's response to spatiotemporal variations:

$\overline{\mathbb{K}}=(\mathbb{C}\overline{\mathbb{B}},\,\mathbb{C}\overline{\mathbb{A}}\overline{\mathbb{B}},\,\dots,\,\mathbb{C}\overline{\mathbb{A}}^{L-1}\overline{\mathbb{B}})$   (19)

where $\overline{\mathbb{K}}$ denotes a structured convolutional kernel capturing dependencies across the spatial and temporal dimensions, and $L$ is the length of the input sequence $x$. This selective convolutional approach significantly enhances the model's ability to forecast spatiotemporal data by focusing computation on the critical features of the data sequence, improving both the accuracy and the efficiency of predictions.
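Eq. (19) unrolls the recurrence into a single convolution kernel. A tiny single-channel sketch (small dense matrices of our choosing, not the paper's implementation) makes the construction explicit:

```python
import numpy as np

def ssm_conv_kernel(A_bar, B_bar, C, L):
    """Build K_bar = (C B_bar, C A_bar B_bar, ..., C A_bar^{L-1} B_bar).
    A_bar: (n, n); B_bar, C: (n,); returns a length-L kernel."""
    K = np.empty(L)
    x = B_bar.astype(float).copy()   # holds A_bar^i @ B_bar
    for i in range(L):
        K[i] = C @ x                 # i-th kernel tap: C A_bar^i B_bar
        x = A_bar @ x
    return K
```

Convolving the input sequence with $\overline{\mathbb{K}}$ reproduces the output of the sequential scan, which is what permits parallel (convolutional) training alongside recurrent inference.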

Appendix E Discussion of Limitations

STG-Mamba for spatial-temporal graph learning demonstrates significant advantages, such as superior forecasting accuracy, model robustness, computational efficiency, and the ability to effectively handle datasets with noise and uncertainty. Despite these promising results, there are some limitations to consider:

  • Although STG-Mamba has been extensively evaluated on three diverse open-source datasets (weather/traffic/urban metro), these datasets may not fully represent the wide range of possible spatiotemporal scenarios, potentially limiting the generalizability of the model. Future work should test the model on a broader range of datasets from different geographical regions and temporal scales.

  • Model interpretability is another concern worthy of further investigation. As with other deep learning-based methods, the black-box nature of the Mamba architecture makes it difficult to interpret how specific predictions are made. Enhancing the interpretability of deep learning-based SSSMs could be beneficial, particularly in applications where understanding the decision-making process is crucial.

  • While the novel integration of Kalman Filtering-based statistical optimization with dynamic graph neural networks helps manage noise and uncertainty, the model's performance can still be affected by the quality and completeness of the input dataset. Developing robust data pre-processing techniques and improving the model's ability to directly handle incomplete or noisy data would further enhance its reliability.

We believe by addressing the aforementioned limitations, the future iterations of STG-Mamba can achieve even greater accuracy, robustness, and applicability across a broader range of spatial-temporal graph learning tasks.

Appendix F Illustration of the STG feature selection procedure

Figure 5 illustrates the spatial-temporal state space selection mechanism, which facilitates input-dependent feature selection and focusing.

Refer to caption
Figure 5: Illustration of the spatial-temporal state space selection mechanism that facilitates input-dependent feature selection and focusing.

Appendix G Related Works

Owing to their strong ability to capture dynamic features across both the temporal and spatial dimensions, attention-based methods such as the Spatial-Temporal Transformer and its variants have recently been broadly applied to spatial-temporal graph (STG) learning tasks. In STG traffic forecasting, seminal works leverage these methods to enhance prediction accuracy. ASTGCN Guo et al. [2019] introduces an innovative approach combining spatial and temporal attention mechanisms with graph neural networks. ADCT-Net Kong et al. [2024] incorporates a dual-graphic cross-fusion Transformer for dynamic traffic forecasting. Huo et al. [2023] integrates hierarchical graph convolutional networks with Transformer networks to capture sophisticated STG relationships in traffic data. For weather forecasting, Liang et al. [2023] exemplifies how Transformer-based models can be utilized for accurate air quality forecasting across vast geographical areas. Chen et al. [2023] further demonstrates the power of GNNs in predicting air quality, emphasizing the importance of group-aware mechanisms in enhancing STG forecasting performance. In the realm of social recommendation, Zhang et al. [2023] introduces an Adaptive Disentangled Transformer (ADT) framework that optimizes the disentanglement of attention heads across the layers of the Transformer architecture, and Wei et al. [2023] improves the efficiency of multimedia recommendation by introducing modal-specific embeddings and a lightweight self-attention.

Although Transformer-based methods demonstrate notable improvements in STG relationship learning Chen et al. [2022], Cong et al. [2021], their ability to model large-scale STG networks and Long-Term Sequential Forecasting (LTSF) tasks is greatly hindered by the quadratic computational complexity $O(n^{2})$ of their core component, the attention mechanism Zhou et al. [2021, 2022], Rampášek et al. [2022]. Moreover, attention mechanisms usually attend over the full-length historical data, or encode all context features, which may not be necessary for modeling the long-term dependencies of STG learning tasks. As such, the research community has been actively searching for alternatives to attention-based models.