Adaptive Two-Stage Cloud Resource Scaling via Hierarchical Multi-Indicator Forecasting and Bayesian Decision-Making
Abstract.
The surging demand for cloud computing resources, driven by the rapid growth of sophisticated large-scale models and data centers, underscores the critical importance of efficient and adaptive resource allocation. As major tech enterprises deploy massive infrastructures with thousands of GPUs, existing cloud platforms still struggle with low resource utilization due to three key challenges: capturing hierarchical indicator structures, modeling non-Gaussian distributions, and decision-making under uncertainty. To address these challenges, we propose HARMONY, an adaptive Hierarchical Attention-based Resource Modeling and Decision-Making System. HARMONY combines hierarchical multi-indicator distribution forecasting and uncertainty-aware Bayesian decision-making. It introduces a novel hierarchical attention mechanism that comprehensively models complex inter-indicator dependencies, enabling accurate predictions that adapt to evolving environment states, and it transforms Gaussian projections into adaptive non-Gaussian distributions via Normalizing Flows. Crucially, HARMONY leverages the full predictive distributions in an adaptive Bayesian process, proactively incorporating uncertainties to optimize resource allocation while robustly meeting SLA constraints under varying conditions. Extensive evaluations across four large-scale cloud datasets demonstrate HARMONY's state-of-the-art performance, significantly outperforming nine established methods. A month-long real-world deployment validated HARMONY's substantial practical impact, saving over 35,000 GPU hours and translating to $100K+ in cost reduction, showcasing its remarkable economic value through adaptive, uncertainty-aware scaling. Our code is available at https://github.com/Floating-LY/HARMONY1.
1. Introduction
The rapid growth and increasing complexity of cloud service data centers have amplified the demand for computational resources, underscoring the critical importance of efficient resource allocation. As major technological enterprises expand their infrastructure—such as Meta deploying an AI cloud supercluster with over 24,000 NVIDIA H100 GPUs (Meta, [n. d.]) and OpenAI scaling to 7,500 GPU servers (OpenAI, [n. d.])—the scale and management complexity of these data centers have surged considerably. This expansion has resulted in significantly more intricate architectures and operational requirements, leading to challenges in optimizing computational resource utilization (Wang et al., 2023; Zhang et al., 2022a). Consequently, many methods have been proposed to address these scaling issues by analyzing future workloads and making informed resource allocation decisions, aiming to improve resource utilization and economic benefits (Shi et al., 2023; Qiu et al., 2020; Chen et al., 2023b; Zhang et al., 2022b). Tackling the resource scaling challenge has become increasingly crucial as cloud service data centers continue to expand in scale.

Although cloud resource scaling remains an actively researched topic, the growing scale and complexity of cloud services continue to present new challenges. Based on our real-world observations and an extensive literature review, we identify the following three key challenges for cloud service resource scaling:
Challenge 1: Hierarchical Structure in Indicators. Considering multiple performance indicators simultaneously is imperative for cloud service resource scaling decisions (Pan et al., 2023; Feng and Ding, 2023). For instance, when scaling GPU services, service providers must account for the joint impact of CPU and GPU consumption as well as service quality indicators like response time. Single-indicator predictions are insufficient due to the intricate hierarchical relationships among these indicators. As shown in Fig. 1 and study (Meng et al., 2020), this hierarchy stems from the user access flow: service requests generate computational tasks consuming system resources (e.g., CPU, GPU), which in turn affect the perceived service quality. Relying on any single indicator fails to capture the complex interplay within this hierarchical structure. Accurately understanding and modeling these intricate relationships is crucial for capturing the intrinsic system behavior.
Challenge 2: Non-Gaussian Distribution of Indicators. Cloud services experience frequent business adjustments and complex workloads, leading to non-Gaussian distributions of performance indicators (Wang, 2024; Han et al., 2021). These indicators often exhibit non-parametric distributions, reflecting the increasing complexity of their temporal patterns. Traditional Gaussian models are inadequate for accurately capturing the true distributions of cloud service indicators. Thus, more flexible and sophisticated modeling approaches are necessary to avoid inefficiencies and potential violations of Service-Level Agreements (SLAs) caused by inaccurate distribution assumptions.
Challenge 3: Decision-Making under Uncertainty. As cloud service architectures grow more complex, decision-making must consider inherent uncertainties (Pan et al., 2023; Chen et al., 2021). Relying on point estimates can mislead the decision process, especially when prediction accuracies vary across indicators. For example, if GPU usage predictions are less accurate, scaling decisions may become skewed, leading to inefficiencies or SLA violations. The dynamic nature of cloud environments further complicates point prediction, as accuracy can vary significantly across different scenarios. This variability limits the adaptability of point predictions and increases the risk of poor decisions. To mitigate these issues, it is crucial to model uncertainties and integrate them into the decision-making process.
However, existing methods overlook these key challenges posed by large-scale cloud services. Most works focus on single indicator patterns (Wang et al., 2023; Chen et al., 2023b; Xue et al., 2022; Hua et al., 2023; Qian et al., 2022), neglecting hierarchical inter-indicator relationships (Challenge 1), which limits their adaptivity to evolving business needs. Recent studies (Feng and Ding, 2023; Zhang et al., 2022a) exploring these relationships still rely on point predictions, failing to model decision uncertainties (Challenge 3). While MagicScaler (Pan et al., 2023) combines distribution prediction and optimization for uncertainty modeling, it assumes Gaussian distributions and inadequately captures hierarchical structures, compromising accuracy (Challenge 1, 2). Consequently, uncertainty-aware decision methods leveraging multi-indicator distribution forecasts, which hold significant potential, remain largely underexplored. Appx. B provides further details.
Therefore, we propose the Hierarchical Attention-based Resource Modeling and Decision-Making System (HARMONY), a novel unified framework that holistically addresses these challenges. HARMONY comprises two main components: Hierarchical Multi-Indicator Distribution Forecasting and Bayesian Decision-Making. The Forecasting component adopts a hierarchical attention mechanism to capture intricate, causal relationships among multiple cloud service indicators (Challenge 1), enabling comprehensive awareness of the evolving environment state. Furthermore, HARMONY models the non-Gaussian nature of indicators using Normalizing Flows (Challenge 2), ensuring predictions reliably capture real-world complexities. For decision-making, HARMONY integrates the predicted multi-indicator distributions to manage uncertainties dynamically (Challenge 3). This adaptive Bayesian algorithm leverages the full predictive distributions, incorporating uncertainties to optimize resource allocation under varying conditions while meeting SLA constraints. By combining multi-indicator awareness and uncertainty modeling, HARMONY achieves highly adaptive scaling that can proactively respond to environment changes and mitigate potential risks. In summary, HARMONY fills a critical research gap in large-scale cloud services by proposing a unified framework that:
• Introduces a novel hierarchical attention mechanism to capture complex inter-indicator dependencies, addressing causality and improving prediction accuracy.
• Models non-Gaussian indicator distributions via Normalizing Flows, capturing real-world cloud indicator complexities for more reliable resource predictions.
• Designs a Bayesian decision algorithm that leverages predictive distributions, incorporating uncertainties for optimized resource allocation under SLA constraints.
• Achieves superior performance on four large-scale cloud datasets, significantly outperforming nine established methods. In a month-long scaling test, HARMONY demonstrated substantial economic impact, saving over 35,000 GPU hours.
2. Problem Definition
In this section, we introduce the fundamental concepts of Cloud Service Resource Scaling (CRS). Our task centers on two objectives: (1) forecasting the operating conditions of cloud services by analyzing performance indicators, and (2) deciding the appropriate number of resources for cloud services, guided by the forecasting outcomes. Table 3 offers a comprehensive overview of the key symbols used throughout our discussion.
Cloud services are characterized by multiple indicators, each recorded at distinct time stamps and capturing different aspects of the service’s performance and utilization. We formally define these historical indicators in Def. 1:
Definition 1 (Historical Indicators).
Historical indicators represent the status of a service at different time stamps, including information on service requests, consumption, and quality. The historical indicators are denoted as $\mathbf{X} \in \mathbb{R}^{N \times T}$, where $N$ and $T$ denote the number of indicators and time stamps, respectively. In addition, we use $\mathbf{X}^{req}$, $\mathbf{X}^{use}$, and $\mathbf{X}^{qua}$ to denote the service request, usage, and quality indicators, where $\mathbf{X} = \mathbf{X}^{req} \cup \mathbf{X}^{use} \cup \mathbf{X}^{qua}$.
Due to the hierarchical nature of cloud service indicators, we introduce hierarchical level information to capture and improve predictive performance. The level information is structured as Def. 2:
Definition 2 (Level Information).
Level information contains the hierarchical details of the historical indicators. It is denoted by an array $\mathbf{l} = [l_1, \dots, l_N]$, where each element $l_i$ represents the level of the corresponding indicator in the hierarchy.
Based on historical indicators and level information, we aim to predict future distributions of these indicators. The formal definition of the forecasting task is given in Def. 3:
Definition 3 (Forecasting).
Given historical indicators $\mathbf{X}$ and level information $\mathbf{l}$, the goal of forecasting is to predict the values of the indicators at a future time stamp $T{+}\tau$. This prediction is denoted as $\hat{\mathbf{D}} \in \mathbb{R}^{N \times S}$, where $S$ is the number of possible states in the distribution of each indicator.
Next, we define the basic computational unit for scheduling in cloud services, which may be referred to by different names depending on the system, such as Pods in Kubernetes.
Definition 4 (Computational Unit).
A computational unit (CU) comprises a combination of computing resources such as CPU, GPU, and memory. The computational units have a specific configuration denoted as $\mathbf{c} = [c_1, \dots, c_K]$, where each element $c_k$ represents the quantity of the $k$-th resource type. Here $K$ corresponds to the number of service consumption indicators. All computational units for a given cloud service share identical configurations.
After getting the future distributions of service indicators and computational unit configuration, we determine the optimal number of CUs required. The formal definition is provided in Def. 5:
Definition 5 (Decision).
Given the predicted distributions $\hat{\mathbf{D}}$ and computational unit configuration $\mathbf{c}$, the goal of the auto-scaling decision algorithm is to determine the appropriate number of CUs to allocate. This optimal number is denoted by $n^*$.

3. Forecasting Future Distributions
In this section, we describe how HARMONY predicts future distributions from historical indicators. Our forecasting model adopts an encoder-decoder architecture. First, the historical indicators are processed through Indicator Embedding within the Encoder to integrate temporal information. Hierarchical Attention is then applied across the embeddings of different indicators to capture their hierarchical relationships, and the output is projected onto a Gaussian distribution. The Decoder then applies Normalizing Flows to transform the Gaussian distribution into complex, non-parametric distributions. The architecture of the model is depicted in Fig. 2. In the remainder of this section, we introduce the design of the encoder and decoder in detail.
3.1. Encoder: Capturing Temporal and Hierarchical Dependencies
The encoder processes historical indicators to extract temporal and hierarchical dependencies, addressing the complex nature of cloud service data highlighted in Challenge 1. Utilizing Indicator Embedding and Hierarchical Attention mechanisms, it captures the diverse, interrelated aspects of cloud services. The encoder then projects these processed embeddings into latent Gaussian distributions, forming a solid basis for predictions. This approach effectively tackles the challenges presented by the data’s temporal dynamics and hierarchical structure, ensuring a comprehensive representation of cloud service indicators.
3.1.1. Indicator Embedding
The encoder begins with the input of historical indicators $\mathbf{X} \in \mathbb{R}^{N \times T}$. Recent research (Liu et al., 2024) has demonstrated that embedding along the variable dimension $N$, rather than the temporal dimension $T$, can more effectively model the relationships between variables. This approach is particularly beneficial in large-scale cloud service scenarios, where the interdependencies between different indicators are both significant and complex.
To further enhance the temporal dependencies, we extract time features and concatenate them with the indicators, forming an enhanced feature matrix $\mathbf{X}' \in \mathbb{R}^{(N+F) \times T}$, where $F$ is the number of time features. For instance, time stamp information such as year, month, and day can be concatenated with the indicators.
Subsequently, the enhanced indicators are projected to the embedding $\mathbf{E} \in \mathbb{R}^{(N+F) \times d}$ through a dense layer, where $d$ is the dimensionality of the embedding. This projection facilitates the robust extraction of temporal patterns, laying a strong foundation for the Hierarchical Attention mechanism.
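To make this step concrete, the following is a minimal PyTorch sketch of inverted indicator embedding with concatenated time features; the class and argument names are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class IndicatorEmbedding(nn.Module):
    # Treat each indicator (and each time-feature row) as one token and
    # project its length-T history through a dense layer, in the spirit of
    # iTransformer's inverted embedding. Names and shapes are illustrative.
    def __init__(self, seq_len: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(seq_len, d_model)

    def forward(self, x: torch.Tensor, time_feats: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, T) indicators; time_feats: (batch, F, T), e.g. hour/day
        enhanced = torch.cat([x, time_feats], dim=1)  # (batch, N+F, T)
        return self.proj(enhanced)                    # (batch, N+F, d_model)
```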
3.1.2. Hierarchical Attention
We design a series of blocks for hierarchical attention, each comprising multiple levels. The input to the $b$-th block is denoted as $\mathbf{E}^{(b)}$. In each attention block, we implement $L_{\max}{+}1$ layers of attention to model the structure, where $L_{\max}$ is the largest level of indicators. To incorporate hierarchical information, we introduce the level information $\mathbf{l}$ and expand it into $\tilde{\mathbf{l}} \in \mathbb{N}^{N+F}$, where we set the level of the time features to $0$ to capture the global temporal dynamics. We project the input into shared queries $\mathbf{Q}$, keys $\mathbf{K}$, and values $\mathbf{V}$ for use across all layers. This approach promotes the sharing and reuse of learned representations and enhances the modeling capability.
For the $l$-th layer in the block, attention focuses on elements within the same hierarchical level and on the output of the preceding layer. Initially, we compute the mask $\mathbf{M}^{(l)}$ for the current level by Eqn. (1):

(1)  $M^{(l)}_i = \begin{cases} 1, & \tilde{l}_i = l \\ 0, & \text{otherwise} \end{cases}$

Then, we obtain masked queries $\mathbf{Q}^{(l)}$, keys $\mathbf{K}^{(l)}$, and values $\mathbf{V}^{(l)}$ by element-wise multiplication with $\mathbf{M}^{(l)}$, subsequently adding the output $\mathbf{O}^{(l-1)}$ of the previous layer, as Eqn. (2) denotes. This operation enables the attention mechanism at this layer to focus on the relationships among elements within the current level of indicators, as well as the comprehensive information involving the current level and the preceding layer. Eqn. (3) exhibits the calculation of the layer output $\mathbf{O}^{(l)}$.

(2)  $\mathbf{Q}^{(l)} = \mathbf{M}^{(l)} \odot \mathbf{Q} + \mathbf{O}^{(l-1)}, \quad \mathbf{K}^{(l)} = \mathbf{M}^{(l)} \odot \mathbf{K} + \mathbf{O}^{(l-1)}, \quad \mathbf{V}^{(l)} = \mathbf{M}^{(l)} \odot \mathbf{V} + \mathbf{O}^{(l-1)}$

(3)  $\mathbf{O}^{(l)} = \mathrm{Attention}\big(\mathbf{Q}^{(l)}, \mathbf{K}^{(l)}, \mathbf{V}^{(l)}\big)$
The $\mathrm{Attention}(\cdot)$ function can take various forms, such as scaled dot-product attention, depending on the specific architecture. In our implementation, we employ the commonly used FullAttention, which calculates the attention scores across all unmasked indicators. By adhering to the hierarchical structure, this mechanism allows each layer to specialize in capturing relationships at a specific level of the data hierarchy. It ensures that no information from lower layers leaks into the higher layers, preserving the causal order inherent in the data. This design is crucial for accurately modeling inter-indicator dependencies in the forecasting task.
Finally, we combine the outputs of all layers and apply Layer Normalization (Ba et al., 2016) and a Feed Forward Network (FFN), as Eqn. (4) denotes, which is the commonly used configuration in transformers. In our implementation, the FFN consists of two dense layers. The combined output serves as the input for the next block.

(4)  $\mathbf{E}^{(b+1)} = \mathrm{FFN}\Big(\mathrm{LayerNorm}\Big(\textstyle\sum_{l=0}^{L_{\max}} \mathbf{O}^{(l)}\Big)\Big)$
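The sketch below illustrates one hierarchical attention block under our reconstruction of Eqns. (1)-(4); single-head attention, the FFN width, and the example level assignment are simplifying assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionBlock(nn.Module):
    # One block under our reading of Eqns. (1)-(4): a shared Q/K/V projection,
    # one masked attention layer per hierarchy level, and LayerNorm + FFN over
    # the combined layer outputs.
    def __init__(self, d_model: int, levels: torch.Tensor):
        super().__init__()
        self.register_buffer("levels", levels)      # (N+F,) level per token
        self.qkv = nn.Linear(d_model, 3 * d_model)  # shared projection
        self.norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (batch, N+F, d_model)
        q, k, v = self.qkv(e).chunk(3, dim=-1)
        out_prev = torch.zeros_like(e)   # O^(l-1), zero for the first layer
        combined = torch.zeros_like(e)
        for level in range(int(self.levels.max().item()) + 1):
            mask = (self.levels == level).float().view(1, -1, 1)  # Eqn. (1)
            ql = mask * q + out_prev                               # Eqn. (2)
            kl = mask * k + out_prev
            vl = mask * v + out_prev
            scores = ql @ kl.transpose(-2, -1) / ql.size(-1) ** 0.5
            out_prev = torch.softmax(scores, dim=-1) @ vl          # Eqn. (3)
            combined = combined + out_prev
        return self.ffn(self.norm(combined))                       # Eqn. (4)

# Example: time features at level 0, request/usage/quality indicators at 1/2/3.
block = HierarchicalAttentionBlock(d_model=64,
                                   levels=torch.tensor([0, 0, 1, 2, 2, 3]))
out = block(torch.randn(8, 6, 64))  # (batch=8, N+F=6 tokens, d_model=64)
```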
3.1.3. Distribution Projector
Having comprehensively captured the temporal and hierarchical dependencies of the indicators, our objective is to estimate the future conditions of the indicators through a distributional approach. To achieve this, we map the output $\mathbf{H}$ of the last attention block through a projector to a parameterized Gaussian distribution, which serves as the output of the encoder, as shown in Eqn. (5). Here $\mathbf{H}$ represents the embeddings of the indicators, and the projector is implemented by two dense layers. The last two dimensions of its output respectively represent the mean and variance of the Gaussian distribution.

(5)  $(\boldsymbol{\mu}, \boldsymbol{\sigma}^2) = \mathrm{Projector}(\mathbf{H})$
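A minimal sketch of such a projector follows; predicting the log-variance and exponentiating it is our own choice to keep the variance positive, and is not stated in the paper.

```python
import torch
import torch.nn as nn

class DistributionProjector(nn.Module):
    # Two dense layers mapping each indicator embedding to Gaussian
    # parameters (Eqn. (5)). The log-variance trick is an assumption.
    def __init__(self, d_model: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, 2))

    def forward(self, h: torch.Tensor):
        # h: (batch, N+F, d_model) -> mean and variance, each (batch, N+F)
        mu, log_var = self.net(h).unbind(dim=-1)
        return mu, log_var.exp()
```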
3.2. Decoder: Transforming Gaussian to Complex Distributions
As discussed in Challenge 2, the complex distribution of indicators in cloud services renders simple Gaussian models inadequate. Hence, we need to transform the parameterized Gaussian distribution from the encoder into a non-parametric distribution with greater representational flexibility. To achieve this, we employ a Real-NVP (Dinh et al., 2017) architecture, a form of Normalizing Flows (NF). It enables efficient sampling and density estimation by transforming simple distributions, such as isotropic Gaussians, into more complex ones through a series of invertible transformations.
3.2.1. Normalizing Flow
The transformation starts by sampling a latent vector $\mathbf{z}^{(0)}$ from the Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\sigma}^2)$. Each NF consists of two coupling layers, and each layer transforms one half of the input embedding while leaving the other half unchanged. Let $\mathbf{z}^{(f)}$ denote the input to the $f$-th flow. It is split into two halves: $\mathbf{z}^{(f)}_1$ and $\mathbf{z}^{(f)}_2$.
In the first coupling layer, $\mathbf{z}^{(f)}_1$ is processed by two separate Multi-Layer Perceptrons (MLPs) to produce the scaling factor $\mathbf{s}$ and the translation factor $\mathbf{t}$, as Eqn. (6) describes:

(6)  $\mathbf{s} = \mathrm{MLP}_s\big(\mathbf{z}^{(f)}_1\big), \quad \mathbf{t} = \mathrm{MLP}_t\big(\mathbf{z}^{(f)}_1\big)$

These factors are then used to transform the second half of the input embedding in Eqn. (7):

(7)  $\tilde{\mathbf{z}}^{(f)}_2 = \mathbf{z}^{(f)}_2 \odot \exp(\mathbf{s}) + \mathbf{t}$
The output of the first coupling layer is formed by concatenating the transformed half $\tilde{\mathbf{z}}^{(f)}_2$ with the unchanged half $\mathbf{z}^{(f)}_1$. The second coupling layer mirrors the first but with the roles of the two halves reversed: it takes the output of the first layer, uses $\tilde{\mathbf{z}}^{(f)}_2$ to compute the scaling and translation factors, and transforms $\mathbf{z}^{(f)}_1$ accordingly, resulting in $\tilde{\mathbf{z}}^{(f)}_1$.
Finally, the outputs from the two coupling layers are concatenated to produce the output of the $f$-th flow, as Eqn. (8) denotes.

(8)  $\mathbf{z}^{(f+1)} = \big[\,\tilde{\mathbf{z}}^{(f)}_1 ;\, \tilde{\mathbf{z}}^{(f)}_2\,\big]$
This process repeats for all flows, gradually transforming the Gaussian distribution into a complex non-parametric distribution.
Because each coupling transformation is invertible given its scaling and translation factors, the model enables efficient, bidirectional conversion between distributions, facilitating both inference and generation tasks. In summary, the carefully designed NF architecture, incorporating coupling layers and alternating transformations, enhances the model's expressiveness and flexibility. These advantages collectively contribute to the model's ability to effectively capture the complex latent space of cloud service indicators and improve overall performance.
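Below is a self-contained sketch of one flow built from two alternating Real-NVP coupling layers, matching Eqns. (6)-(8); the MLP depth and the embedding size of 16 are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One Real-NVP coupling layer (Eqns. (6)-(8)). `swap=True` reverses the
    # roles of the two halves, as in the second coupling layer of each flow.
    def __init__(self, dim: int, swap: bool):
        super().__init__()
        half = dim // 2
        self.swap = swap
        self.scale_net = nn.Sequential(nn.Linear(half, half), nn.Tanh(),
                                       nn.Linear(half, half))
        self.shift_net = nn.Sequential(nn.Linear(half, half), nn.Tanh(),
                                       nn.Linear(half, half))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z1, z2 = z.chunk(2, dim=-1)
        if self.swap:
            z1, z2 = z2, z1
        s, t = self.scale_net(z1), self.shift_net(z1)  # Eqn. (6)
        z2 = z2 * torch.exp(s) + t                     # Eqn. (7)
        if self.swap:
            z1, z2 = z2, z1   # restore the original half ordering
        return torch.cat([z1, z2], dim=-1)             # Eqn. (8)

# One flow = two coupling layers with alternating halves; stacking several
# flows turns Gaussian samples into non-parametric ones.
flow = nn.Sequential(AffineCoupling(16, swap=False), AffineCoupling(16, swap=True))
samples = flow(torch.randn(100, 16))   # 100 latent draws -> transformed samples
```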
3.2.2. Update Parameters
After obtaining the output $\hat{\mathbf{X}}$ of the last Normalizing Flow and the ground truth $\mathbf{X}$, the loss function is directly computed on the transformed samples. We use the mean squared error (MSE) loss in Eqn. (9) to update the parameters of the Encoder and Decoder.

(9)  $\mathcal{L}_{\mathrm{MSE}} = \frac{1}{|\mathbf{X}|}\,\big\|\hat{\mathbf{X}} - \mathbf{X}\big\|_2^2$
The MSE loss function is chosen due to the absence of a parameterized probability distribution, which makes loss functions based on probability density, such as the negative log-likelihood, impractical to apply. It also offers simplicity and statistical interpretability, contributing to faster optimization. In addition, it is important to note that alternative loss functions, such as the quantile loss, could also be considered for their ability to handle extreme cases or outliers. After training, we draw $S$ samples from the Gaussian distribution and transform them through the decoder to obtain non-parametric distributions $\hat{\mathbf{D}} \in \mathbb{R}^{N \times S}$ of the indicators, where $S$ is the number of samples. Leveraging the advantages of parameterized distributions and normalizing flows, this approach produces samples that better match real-world distributions, thereby providing guidance for decision-making. Alg. 2 summarizes the training process.
4. Decision under Uncertainty
As highlighted in Challenge 3, relying solely on point predictions for decision-making fails to account for the uncertainty in forecasts, which compromises decision stability and effectiveness. To address this issue, we design a Bayesian decision-making method that utilizes the distribution obtained from our forecasting model. The Bayesian approach leverages the full predictive distribution, not just a single point estimate, allowing us to incorporate uncertainty into the decision-making process. For each potential decision, we compute the expected cost by combining the future distribution of the forecast indicators with the configuration of the computational units. Ultimately, we select the decision that minimizes the expected cost within the operational range $[n_{\min}, n_{\max}]$ as the final decision. Fig. 3 exhibits the framework of our decision process.

The input to our decision algorithm comprises the forecasted distribution $\hat{\mathbf{D}}$ and the configuration of computational units $\mathbf{c}$. For simpler scenarios, the Gaussian distribution produced by the encoder can be used instead. To ensure compatibility with the computational units, we use $\hat{\mathbf{D}}^{use}$ to represent the distributions of the service consumption indicators.
Our objective is to determine the optimal number of computational units (CUs). We achieve this by searching within an operational range from $n_{\min}$ to $n_{\max}$. For each candidate number $n$, we calculate the corresponding cost and select the configuration with the minimum cost as our optimal decision.
For each $n$, the cost is computed individually for each service consumption indicator. Specifically, we calculate the resources to be allocated for the $k$-th resource type as $n \cdot c_k$. We then assess the over-provisioning probability $P^{(k)}_{o}(n)$ and the under-provisioning probability $P^{(k)}_{u}(n)$. These probabilities can be estimated by various methods, such as kernel density estimation, or by simply computing the proportions of samples in $\hat{\mathbf{D}}^{use}_k$ that fall below or above $n \cdot c_k$. By quantifying the likelihood of over- and under-provisioning, the decision process can better manage risks related to resource allocation, ensuring that resources are neither wasted nor insufficient.
The cost for each indicator is derived by weighting the probabilities $P^{(k)}_{o}(n)$ and $P^{(k)}_{u}(n)$ with weights $w_o$ and $w_u$, respectively. The overall cost is then computed as Eqn. (10) denotes:

(10)  $\mathrm{Cost}(n) = \sum_{k=1}^{K}\big(w_o \, P^{(k)}_{o}(n) + w_u \, P^{(k)}_{u}(n)\big)$
Finally, we select the number of CUs that minimizes the cost as the optimal allocation, denoted as $n^* = \arg\min_{n_{\min} \le n \le n_{\max}} \mathrm{Cost}(n)$.
In continuous online decision-making, we dynamically adjust the weights $w_o$ and $w_u$ based on the estimated service quality indicators $\hat{\mathbf{D}}^{qua}$. For example, if the mean of the predicted response time distribution approaches the Service Level Agreement (SLA) constraints, we reduce the weight of the over-provisioning penalty $w_o$. This adjustment biases the decision algorithm towards allocating more resources, thereby maintaining high service quality. Alg. 1 details our Bayesian decision-making process.
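The following NumPy sketch shows this search under our reconstruction of Eqn. (10); the weight values, sample count, and demand distribution are hypothetical.

```python
import numpy as np

def decide_num_cus(samples: np.ndarray, config: np.ndarray,
                   n_min: int, n_max: int,
                   w_over: float = 1.0, w_under: float = 10.0) -> int:
    # samples: (S, K) forecast samples for the K consumption indicators;
    # config:  (K,) per-CU capacity of each resource type (Def. 4).
    # The weights are illustrative; online, they are adjusted from the
    # predicted service quality distribution.
    best_n, best_cost = n_min, float("inf")
    for n in range(n_min, n_max + 1):
        allocated = n * config                        # resources provided
        p_over = (samples < allocated).mean(axis=0)   # P_o^(k)(n)
        p_under = (samples > allocated).mean(axis=0)  # P_u^(k)(n)
        cost = float((w_over * p_over + w_under * p_under).sum())  # Eqn. (10)
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n

# Example with skewed, non-Gaussian demand samples (all numbers hypothetical):
rng = np.random.default_rng(0)
demand = rng.gamma(shape=5.0, scale=8.0, size=(1000, 2))   # CPU cores, GPUs
n_star = decide_num_cus(demand, config=np.array([4.0, 1.0]), n_min=1, n_max=64)
```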
In summary, our approach offers several key advantages:
(1) Modeling Uncertainty: By incorporating the full distribution of multiple indicators, our algorithm produces optimal decisions that avoid the limitations of point predictions.
(2) Comprehensive Optimization: Our algorithm jointly optimizes across different service indicators, allowing for a more balanced resource allocation than existing single-indicator decision methods.
(3) SLA Compliance and Adaptivity: Our algorithm takes SLA constraints into account by dynamically adjusting the weights for over-provisioning and under-provisioning based on predicted service quality. It also adapts to a wide range of business scenarios by allowing users to prioritize the allocation of specific resources, such as GPUs, according to their needs through adjustable weights.
5. Experiments
In this section, we conduct a series of experiments designed to address the following research questions:
RQ1: How does the performance of HARMONY compare with SOTA methods in predicting indicators of cloud service workload?
RQ2: Can the key components within HARMONY be identified as significant contributors to its superior performance?
RQ3: In a real-world scenario of automatic scaling decision-making, can our approach achieve SOTA performance?
5.1. Experimental Settings
5.1.1. Datasets
Our study utilizes a comprehensive collection of datasets from various types of cloud services to demonstrate the effectiveness of our scaling method. It includes the following components: (1) Pay_GPU: This dataset consists of indicators collected from high-load GPU inference services over a one-week period, featuring distinct GPUs. The data is sourced from the cloud platform that supports Alipay, one of the world's largest payment applications with over one billion users globally, and represents a high-performance computing environment optimized for GPU-based tasks. (2) Pay_CPU: This dataset incorporates data from 100 CPU services with a total of CPU cores, also collected over a one-week period from Alipay. It represents a general-purpose cloud infrastructure supporting CPU-intensive operations. (3) Ali (https://github.com/alibaba/clusterdata/tree/v2018): Sourced from the Alibaba Cluster Trace, this dataset documents multivariate workload indicators from about 4,000 machines over an 8-day period, showcasing a typical machine-based cloud architecture. (4) Fisher (https://github.com/chrisliu1995/Fisher-model/tree/master): This dataset includes workload data from containers over a -day period within a Kubernetes framework, representing a modern containerized environment. This diverse dataset selection ensures HARMONY's broad applicability and robust performance across different cloud service architectures.
5.1.2. Data preprocessing
In our data processing pipeline, we intentionally refrained from applying any preprocessing to sudden changes in the data, recognizing their prevalent occurrence within cloud services. For data normalization, we utilized MinMax normalization to standardize all indicators. Given that our data primarily originates from model inference services, we aggregated all indicators into -minute intervals, taking the maximum value of each indicator per interval. This conservative aggregation meets production environment requirements, enables effective data management and real-time prediction, and guards against potential Service Level Agreement (SLA) violations resulting from under-provisioning. Subsequently, the dataset was partitioned into training, testing, and validation sets in a 7:1:2 ratio to ensure robust model evaluation. Following Def. 1, we categorize the service indicators into distinct groups such as request, consumption, and quality, generating the Level Information used as one of the inputs of HARMONY.
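For illustration, a minimal pandas version of this pipeline might look as follows; the 5-minute interval and the synthetic data are assumptions, since the exact interval is elided above.

```python
import numpy as np
import pandas as pd

# Hypothetical one-day indicator log at 1-minute resolution.
idx = pd.date_range("2024-06-01", periods=1440, freq="min")
df = pd.DataFrame({"cpu": np.random.rand(1440),
                   "gpu": np.random.rand(1440)}, index=idx)

agg = df.resample("5min").max()                      # peak-preserving aggregation
norm = (agg - agg.min()) / (agg.max() - agg.min())   # MinMax per indicator

n = len(norm)                                        # chronological 7:1:2 split
train = norm.iloc[: int(0.7 * n)]
test = norm.iloc[int(0.7 * n): int(0.8 * n)]
val = norm.iloc[int(0.8 * n):]
```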
5.1.3. Baselines
To evaluate the performance of HARMONY, we compare it with nine state-of-the-art baselines. These baselines encompass probabilistic prediction models: DeepAR (2017) (Flunkert et al., 2017), CSDI (2021) (Tashiro et al., 2021), and MG-TSD (2024) (Fan et al., 2024); and multivariate prediction methods: Pyraformer (2022) (Liu et al., 2022), DLinear (2023) (Zeng et al., 2023), PatchTST (2023) (Nie et al., 2023), Crossformer (2023) (Zhang and Yan, 2023), TSMixer (2023) (Chen et al., 2023a), and iTransformer (2024) (Liu et al., 2024). Appx. C provides a detailed introduction.
5.1.4. Evaluation Metrics
To assess the accuracy of HARMONY in cloud service prediction, we use two metrics: Mean Squared Error (MSE) and Continuous Ranked Probability Score (CRPS). MSE evaluates the accuracy of point predictions, measuring the average squared difference between predicted and actual values. CRPS, a more comprehensive metric, assesses the quality of probabilistic forecasts. It evaluates how well the entire predicted probability distribution aligns with the observed outcome, considering both accuracy and forecast confidence. Lower CRPS scores indicate better probabilistic predictions. For probabilistic methods, we sample from the predicted distribution to compute MSE. For point prediction methods, we repeat the point predictions as inputs for CRPS calculation. By using both MSE and CRPS, we provide a thorough assessment of HARMONY’s performance in both point and probabilistic predictions for cloud service contexts.
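As a concrete reference, the sample-based CRPS estimator below uses the standard energy form $\mathrm{CRPS}(F, y) = \mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$; it is shown for clarity and is not necessarily the paper's exact implementation.

```python
import numpy as np

def crps_from_samples(samples: np.ndarray, y: float) -> float:
    # Energy form of CRPS: E|X - y| - 0.5 * E|X - X'|, X and X' ~ forecast.
    term1 = np.abs(samples - y).mean()
    term2 = np.abs(samples[:, None] - samples[None, :]).mean()
    return float(term1 - 0.5 * term2)

# Example: score 500 forecast samples against an observed value of 0.42.
rng = np.random.default_rng(1)
score = crps_from_samples(rng.normal(0.5, 0.1, size=500), y=0.42)
```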
5.1.5. Parameter Settings
HARMONY is developed using PyTorch and trained on an NVIDIA P100 GPU. The hyperparameter settings are as follows: the learning rate is set to 0.001, the batch size is set to , and the dimension of the representation is set to . The numbers of Hierarchical Attention blocks and Normalizing Flows are both set to . All models are trained for 20 epochs, and the best-performing model on the validation set is utilized for testing.
Datasets | T | Metric | DeepAR | CSDI | MG-TSD | Pyraformer | DLinear | PatchTST | Crossformer | TSMixer | iTransformer | HARMONY
---|---|---|---|---|---|---|---|---|---|---|---|---|
Pay_GPU | 72 | MSE | 0.0225 | 0.0374 | 0.0331 | 0.0227 | 0.0350 | 0.0202 | 0.8261 | 0.2013 | 0.0179 | 0.0171 |
CRPS | 0.0706 | 0.1214 | 0.1454 | 0.0781 | 0.1176 | 0.0719 | 0.6652 | 0.3686 | 0.0708 | 0.0563 | ||
144 | MSE | 0.0291 | 0.0409 | 0.0357 | 0.0241 | 0.0419 | 0.0267 | 1.2853 | 0.0880 | 0.0248 | 0.0229 | |
CRPS | 0.0807 | 0.1302 | 0.2628 | 0.0859 | 0.1372 | 0.0893 | 0.9662 | 0.2160 | 0.0892 | 0.0664 | ||
288 | MSE | 0.0366 | 0.0513 | 0.0443 | 0.0374 | 0.0413 | 0.0481 | 1.6736 | 0.5510 | 0.0476 | 0.0355 | |
CRPS | 0.0992 | 0.1828 | 0.3062 | 0.1140 | 0.1323 | 0.1363 | 1.1406 | 0.5487 | 0.1328 | 0.0890 | ||
Pay_CPU | 72 | MSE | 0.3593 | 0.2674 | 0.1258 | 0.0676 | 0.1365 | 0.0327 | 0.3639 | 0.2997 | 0.0337 | 0.0323 |
CRPS | 0.2142 | 0.5457 | 0.3317 | 0.0891 | 0.1920 | 0.0708 | 0.3067 | 0.2347 | 0.0711 | 0.0572 | ||
144 | MSE | 0.4260 | 0.2680 | 0.1718 | 0.1084 | 0.1748 | 0.0604 | 0.4071 | 0.2465 | 0.0683 | 0.0448 | |
CRPS | 0.2170 | 0.5273 | 0.4722 | 0.1146 | 0.2066 | 0.0921 | 0.3866 | 0.2449 | 0.0962 | 0.0730 | ||
288 | MSE | 0.4997 | 0.2998 | 0.2196 | 0.1441 | 0.2442 | 0.1219 | 0.5541 | 0.1610 | 0.1103 | 0.0762 | |
CRPS | 0.2351 | 0.5369 | 0.5837 | 0.1519 | 0.2356 | 0.1298 | 0.4622 | 0.2060 | 0.1295 | 0.0943 | ||
Ali | 72 | MSE | 0.0044 | 0.0172 | 0.0318 | 0.0042 | 0.0112 | 0.0044 | 0.0891 | 0.0410 | 0.0047 | 0.0039 |
CRPS | 0.0399 | 0.0730 | 0.0535 | 0.0327 | 0.0821 | 0.0328 | 0.2359 | 0.1550 | 0.0340 | 0.0289 | ||
144 | MSE | 0.0054 | 0.0116 | 0.0323 | 0.0056 | 0.0145 | 0.0049 | 0.0962 | 0.0246 | 0.0055 | 0.0048 | |
CRPS | 0.0492 | 0.0588 | 0.0729 | 0.0401 | 0.0855 | 0.0351 | 0.2496 | 0.1235 | 0.0390 | 0.0290 | ||
288 | MSE | 0.0066 | 0.0134 | 0.0349 | 0.0062 | 0.0167 | 0.0058 | 0.0612 | 0.0239 | 0.0071 | 0.0061 | |
CRPS | 0.0542 | 0.0613 | 0.0852 | 0.0473 | 0.0888 | 0.0426 | 0.2048 | 0.1166 | 0.0482 | 0.0399 | ||
Fisher | 72 | MSE | 0.0695 | 0.0766 | 0.0650 | 0.0461 | 0.0500 | 0.0467 | 0.0587 | 0.0581 | 0.0470 | 0.0453 |
CRPS | 0.1481 | 0.1634 | 0.1840 | 0.1141 | 0.1448 | 0.1143 | 0.1545 | 0.1484 | 0.1160 | 0.0995 | ||
144 | MSE | 0.0604 | 0.0968 | 0.0663 | 0.0469 | 0.0574 | 0.0472 | 0.0616 | 0.0863 | 0.0489 | 0.0458 | |
CRPS | 0.1390 | 0.2282 | 0.1991 | 0.1172 | 0.1606 | 0.1159 | 0.1596 | 0.1691 | 0.1200 | 0.0940 | ||
288 | MSE | 0.0562 | 0.0934 | 0.0757 | 0.0483 | 0.0557 | 0.0485 | 0.0640 | 0.1046 | 0.0496 | 0.0474 | |
CRPS | 0.1276 | 0.1481 | 0.2057 | 0.1213 | 0.1450 | 0.1195 | 0.1594 | 0.1903 | 0.1237 | 0.1097 | ||
Avg. Rank | All | MSE | 5.9167 | 7.8333 | 6.7500 | 3.1667 | 6.0833 | 3.0833 | 9.1667 | 8.0833 | 3.7500 | 1.0000 |
CRPS | 4.5833 | 7.8333 | 8.5000 | 3.3333 | 6.3333 | 3.0000 | 8.8333 | 8.0000 | 3.5833 | 1.0833 |
5.2. RQ1: Comparative Results
Table 1 presents the comparative results between HARMONY and various baselines. The experimental data lead to several insights:
(1) HARMONY demonstrates superior performance in both point prediction and probabilistic prediction across four diverse cloud service datasets, achieving optimal results with average improvements of % in MSE and % in CRPS. These consistent improvements across varied datasets underscore HARMONY’s generalization capability, evidencing its efficacy in providing accurate predictions for cloud services with different architectures. We attribute the enhancement in point prediction primarily to the hierarchical attention mechanism, which effectively models the causal relationships and sequential dependencies among indicators, thus addressing Challenge 1. The improvement in distribution prediction can be attributed to the Normalizing Flow’s non-parametric modeling of cloud service indicator distributions, addressing Challenge 2.
(2) The results from the majority of models reveal a counterintuitive trend: for cloud services characterized by short adjustment cycles and rapidly evolving indicators, longer input lengths do not consistently lead to improved prediction accuracy. This phenomenon can be elucidated by considering the dynamic nature of cloud services. Longer input sequences inevitably incorporate a greater number of historical time patterns, some of which may have become obsolete due to recent service adjustments or evolving operational conditions. These outdated patterns, when included in the prediction process, can introduce noise and potentially skew the model's understanding of current trends, thereby compromising prediction accuracy. Consequently, when developing predictive models for cloud service indicators, it is imperative to calibrate the input sequence length based on service-specific characteristics.
(3) Attention-based methods exhibit stronger generalization capabilities, providing adaptive performance across datasets. With the exception of Crossformer, whose performance may be sensitive to segment size configurations due to its locality-based embedding approach, attention-based methods, including Pyraformer, PatchTST, and iTransformer, consistently demonstrate superior performance across all datasets, ranking second only to HARMONY. In contrast, MLP-based approaches (TSMixer, DLinear) show limitations in fitting complex, large-scale data, as evidenced by their poor performance on the Pay_GPU dataset. Similarly, CSDI's inability to converge on the Pay_CPU dataset suggests that the repeated Gaussian noise addition and denoising iterations of the diffusion process may exacerbate the challenge of fitting the indicators.

5.3. RQ2: Ablation Study
We conducted ablation studies to analyze the impact of key components within our system. The outcomes are shown in Fig. 4.
(1) Impact of Hierarchical Attention: We replace the Hierarchical Attention with Full Attention blocks that do not differentiate levels and instead compute the attention coefficients for all indicator pairs. This substitution is denoted as 'w/o H'. Experimental results indicate that such a replacement leads to a deterioration in both MSE and CRPS across all input lengths and cloud service datasets. Notably, for the CPU service, which exhibits more consistent and stable relationships among indicators, removing Hierarchical Attention results in an average increase of in Mean Squared Error (MSE). We attribute this to the Full Attention mechanism's failure to account for the hierarchical relationships between indicators, leading to confusion in information propagation across different layers. Thus, for cloud services with clear structures, Hierarchical Attention proves to be the more effective choice.
(2) Impact of Normalizing Flow: We compared our model with a variant using a simple linear layer for Gaussian distribution mapping, denoted as ’w/o NF’. Results show significant increases in both Mean Squared Error (MSE) and Continuous Ranked Probability Score (CRPS) for the ’w/o NF’ variant. This suggests that the Normalizing Flow better captures the true data distribution, effectively addressing Challenge 2. The improvement is particularly pronounced in complex cloud environments like Pay_GPU, Ali, and Fisher, where operational patterns are intricate and subject to rapid changes. In these scenarios, the Normalizing Flow demonstrates superior adaptability and fidelity to the underlying distribution. Thus, incorporating the Normalizing Flow substantially enhances performance in dynamic service environments, underscoring its importance in accurate cloud service prediction.
5.4. RQ3: Online Decision-Making
We conducted a comprehensive one-month online A/B test from June 12, 2024, on Alipay’s internal cloud service inference platform. The test focused on ten large-scale GPU services, evaluating the performance of HARMONY in predicting future workload indicators and making resource allocation decisions using Algorithm 1. Appx. D provides detailed information on the deployment process.
The key metrics we considered included CPU and GPU usage hours (CPU_Usage, GPU_Usage), resource utilization rates (CPU_Uti, GPU_Uti), and the probability of allocated resources exceeding actual consumption (SuccR). Higher resource utilization while maintaining a high SuccR signifies better performance of the scaling method. We compared our approach with the following methods. The first group is non-predictive methods: Rule-Based, which uses the maximum of historical indicators to make resource allocations, and Autopilot (2020) (Rzadca et al., 2020), a classical auto-scaling method that selects the maximum value from historical indicators over a given period and updates periodically. To avoid SLA violations, a 10% buffer is added to the non-predictive methods. The second group is predictive methods: FIRM (2020) (Qiu et al., 2020), an adaptive auto-scaling method using reinforcement learning to make decisions based on service quality indicators, and FSA (2023) (Wang et al., 2023), a method that uses service request predictions to make resource allocation decisions, taking into account the relationship between service requests and service consumption.


From Fig. 5, it is evident that our approach adeptly aligns resource allocation with demand fluctuations, thereby allocating fewer resources than other methods during both peak and trough periods. Table 2 further substantiates our method's superior performance, achieving significantly higher utilization rates while maintaining a high success rate (SuccR) of 99.82%. This underscores the efficacy of our Bayesian decision-making algorithm, which incorporates multiple indicators to model uncertainties, thus achieving an optimal balance between resource savings and SLA constraints. The marginally lower SuccR can be attributed to occasional service restarts, which could be addressed through proactive human intervention. Notably, compared to the second-best method, our approach conserves roughly 240,000 CPU core hours and 35,000 GPU hours over the month-long test period (per Table 2), translating to estimated economic benefits exceeding $100K. This substantial cost saving is achieved while ensuring high service quality. Overall, our evaluation demonstrates the effectiveness of HARMONY in optimizing resource allocation, enhancing resource utilization, achieving significant cost savings, and maintaining high service quality.
Methods | CPU_Usage | GPU_Usage | CPU_Uti | GPU_Uti | SuccR |
---|---|---|---|---|---|
NoScale | 1.02e+7 | 1.13e+6 | 16.27% | 18.66% | 100% |
Rule-Based | 5.56e+6 | 6.50e+5 | 29.87% | 32.46% | 99.92% |
Autopilot | 5.35e+6 | 6.17e+5 | 31.03% | 34.20% | 99.88% |
FIRM | 3.72e+6 | 4.09e+5 | 44.62% | 51.59% | 99.23% |
FSA | 3.10e+6 | 3.54e+5 | 53.55% | 59.60% | 95.87% |
HARMONY | 2.86e+6 | 3.19e+5 | 58.04% | 66.14% | 99.82% |
6. Conclusion
We introduce HARMONY, a novel framework addressing cloud service resource scaling challenges by integrating multi-indicator distribution predictions into a Bayesian decision process. HARMONY leverages a hierarchical attention mechanism and Normalizing Flows to capture hierarchical dependencies and non-Gaussian distributions, ensuring accurate and reliable resource predictions. Empirical validation showed that HARMONY outperforms established methods and delivers significant economic benefits, saving over 35,000 GPU computation hours in a month-long A/B test, demonstrating its practical impact in large-scale cloud service environments.
7. Acknowledgement
This work was sponsored by the National Natural Science Foundation of China [U23A20309, 62272302, 62172276, 62372296], CCF-Ant Research Fund [CCF-AFSG RF20230408] and the Shanghai Municipal Science Technology Major Project [2021SHZDZX0102].
References
- Ba et al. (2016) Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. CoRR abs/1607.06450 (2016).
- Chen et al. (2021) Huangke Chen, Xiaomin Zhu, Guipeng Liu, and Witold Pedrycz. 2021. Uncertainty-Aware Online Scheduling for Real-Time Workflows in Cloud Service Environment. IEEE Transactions on Services Computing 14, 4 (2021), 1167–1178.
- Chen et al. (2023b) Jiadong Chen, Yang Luo, Xiuqi Huang, Fuxin Jiang, Yangguang Shi, Tieying Zhang, and Xiaofeng Gao. 2023b. IPOC: An Adaptive Interval Prediction Model based on Online Chasing and Conformal Inference for Large-Scale Systems. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 202–212.
- Chen et al. (2023a) Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan Arik, and Tomas Pfister. 2023a. TSMixer: An all-MLP Architecture for Time Series Forecasting. Transactions on Machine Learning Research (TMLR) (2023).
- Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. 2017. Density estimation using Real NVP. In International Conference on Learning Representations, (ICLR).
- Fan et al. (2024) Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, and Jiang Bian. 2024. MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process. In International Conference on Learning Representations (ICLR).
- Feng and Ding (2023) Binbin Feng and Zhijun Ding. 2023. GROUP: An End-to-end Multi-step-ahead Workload Prediction Approach Focusing on Workload Group Behavior. In ACM The Web Conference (WWW). 3098–3108.
- Flunkert et al. (2017) Valentin Flunkert, David Salinas, and Jan Gasthaus. 2017. DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks. CoRR abs/1704.04110 (2017).
- Han et al. (2021) Xing Han, Sambarta Dasgupta, and Joydeep Ghosh. 2021. Simultaneously Reconciled Quantile Forecasting of Hierarchically Related Time Series. In International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 130. 190–198.
- Hua et al. (2023) Qin Hua, Dingyu Yang, Shiyou Qian, Hanwen Hu, Jian Cao, and Guangtao Xue. 2023. KAE-Informer: A Knowledge Auto-Embedding Informer for Forecasting Long-Term Workloads of Microservices. In ACM The Web Conference (WWW). 1551–1561.
- Liu et al. (2022) Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, and Schahram Dustdar. 2022. Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting. In International Conference on Learning Representations (ICLR).
- Liu et al. (2024) Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. 2024. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. In International Conference on Learning Representations (ICLR).
- Luo et al. (2024) Yang Luo, Mohan Gao, Zhemeng Yu, Haoyuan Ge, Xiaofeng Gao, Tengwei Cai, and Guihai Chen. 2024. Integrating System State into Spatio Temporal Graph Neural Network for Microservice Workload Prediction. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
- Luo et al. (2023) Yang Luo, Zehao Gu, Shiyang Zhou, Yun Xiong, and Xiaofeng Gao. 2023. Meteorology-Assisted Spatio-Temporal Graph Network for Uncivilized Urban Event Prediction. In IEEE International Conference on Data Mining, ICDM. 468–477.
- Meng et al. (2020) Yuan Meng, Shenglin Zhang, Yongqian Sun, Ruru Zhang, Zhilong Hu, Yiyin Zhang, Chenyang Jia, Zhaogang Wang, and Dan Pei. 2020. Localizing Failure Root Causes in a Microservice through Causality Inference. In IEEE/ACM International Symposium on Quality of Service (IWQoS). 1–10.
- Meta ([n. d.]) Meta. [n. d.]. Building Meta’s GenAI Infrastructure. https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/
- Nie et al. (2023) Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. 2023. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. In International Conference on Learning Representations (ICLR).
- OpenAI ([n. d.]) OpenAI. [n. d.]. Scaling Kubernetes to 7,500 Nodes. https://openai.com/research/scaling-kubernetes-to-7500-nodes
- Pan et al. (2023) Zhicheng Pan, Yihang Wang, Yingying Zhang, Sean Bin Yang, Yunyao Cheng, Peng Chen, Chenjuan Guo, Qingsong Wen, Xiduo Tian, Yunliang Dou, Zhiqiang Zhou, Chengcheng Yang, Aoying Zhou, and Bin Yang. 2023. MagicScaler: Uncertainty-aware, Predictive Autoscaling. Proceedings of the VLDB Endowment (PVLDB) 16, 12 (2023), 3808–3821.
- Qian et al. (2022) Huajie Qian, Qingsong Wen, Liang Sun, Jing Gu, Qiulin Niu, and Zhimin Tang. 2022. RobustScaler: QoS-Aware Autoscaling for Complex Workloads. In IEEE International Conference on Data Engineering (ICDE). 2762–2775.
- Qiu et al. (2020) Haoran Qiu, Subho S. Banerjee, Saurabh Jha, Zbigniew T. Kalbarczyk, and Ravishankar K. Iyer. 2020. FIRM: An Intelligent Fine-grained Resource Management Framework for SLO-Oriented Microservices. In USENIX Symposium on Operating Systems Design and Implementation, OSDI. 805–825.
- Rzadca et al. (2020) Krzysztof Rzadca, Pawel Findeisen, Jacek Swiderski, Przemyslaw Zych, Przemyslaw Broniek, Jarek Kusmierek, Pawel Nowak, Beata Strack, Piotr Witusowski, Steven Hand, and John Wilkes. 2020. Autopilot: workload autoscaling at Google. In ACM EuroSys Conference. 16:1–16:16.
- Shi et al. (2023) Tao Shi, Hui Ma, Gang Chen, and Sven Hartmann. 2023. Auto-Scaling Containerized Applications in Geo-Distributed Clouds. IEEE Transactions on Services Computing (TSC) 16, 6 (2023), 4261–4274.
- Tashiro et al. (2021) Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. 2021. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In Advances in Neural Information Processing Systems (NeurIPS). 24804–24816.
- Wang (2024) Shiyu Wang. 2024. NeuralReconciler for Hierarchical Time Series Forecasting. In ACM International Conference on Web Search and Data Mining (WSDM). 731–739.
- Wang et al. (2023) Shiyu Wang, Yinbo Sun, Xiaoming Shi, Shiyi Zhu, Lintao Ma, James Zhang, Yangfei Zheng, and Liu Jian. 2023. Full Scaling Automation for Sustainable Development of Green Data Centers. In International Joint Conference on Artificial Intelligence, (IJCAI). 6264–6271.
- Xue et al. (2022) Siqiao Xue, Chao Qu, Xiaoming Shi, Cong Liao, Shiyi Zhu, Xiaoyu Tan, Lintao Ma, Shiyu Wang, Shijun Wang, Yun Hu, Lei Lei, Yangfei Zheng, Jianguo Li, and James Zhang. 2022. A Meta Reinforcement Learning Approach for Predictive Autoscaling in the Cloud. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 4290–4299.
- Zeng et al. (2023) Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. 2023. Are Transformers Effective for Time Series Forecasting?. In Conference on Artificial Intelligence (AAAI). 11121–11128.
- Zhang et al. (2022a) Wei Zhang, Quan Chen, Kaihua Fu, Ningxin Zheng, Zhiyi Huang, Jingwen Leng, and Minyi Guo. 2022a. Astraea: towards QoS-aware and resource-efficient multi-stage GPU services. In ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). 570–582.
- Zhang and Yan (2023) Yunhao Zhang and Junchi Yan. 2023. Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting. In International Conference on Learning Representations (ICLR).
- Zhang et al. (2022b) Zhen Zhang, Shuai Zheng, Yida Wang, Justin Chiu, George Karypis, Trishul Chilimbi, Mu Li, and Xin Jin. 2022b. MiCS: Near-linear Scaling for Training Gigantic Model on Public Cloud. Proceedings of the VLDB Endowment (PVLDB) 16, 1 (2022), 37–50.
Appendix A Notation Table & Pseudocode
Symbol | Description
---|---
$\mathbf{X}$ | Historical indicators
$\mathbf{l}$ | Level information of indicators
$\mathbf{c}$ | Computational unit's configuration
$\mathbf{E}$ | Enhanced indicator embedding
$\mathbf{H}$ | Embeddings after Hierarchical Attention
$\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\sigma}^2)$ | Learned latent Gaussian distributions
$\hat{\mathbf{X}}$ | Transformed samples of indicators
$\hat{\mathbf{D}}$ | Predicted future distribution of indicators
$n^*$ | Optimal number of CUs after the decision algorithm
Appendix B Related Works
Existing research on cloud service resource scaling focuses on two aspects: Workload Forecasting and Decision-Making.
Workload Forecasting: Initial studies primarily targeted the prediction of single service request indicators (Chen et al., 2023b; Hua et al., 2023; Wang et al., 2023; Luo et al., 2023). For instance, IPOC (Chen et al., 2023b) designed an online ensemble algorithm to handle the variable temporal patterns of cloud service requests. KAE-Informer (Hua et al., 2023) combined time series decomposition and attention mechanisms for diverse load prediction tasks. These methods focused on the irregularities within individual indicators but did not account for the intrinsic correlations among various indicators. With the advancement of multivariate time series prediction methods, more approaches (Luo et al., 2024; Feng and Ding, 2023; Liu et al., 2024) have started to consider the relationships among multiple cloud service indicators. For example, STAMP (Luo et al., 2024) employed spatio-temporal graph neural networks, and GROUP (Feng and Ding, 2023) used rule-based methods to capture inter-indicator correlations. However, these methods overlook hierarchical structures and causal relationships, thus failing to address Challenge 1.
In recent years, the increased scale and complexity of cloud services have led to some works (Wang, 2024; Pan et al., 2023; Chen et al., 2021; Tashiro et al., 2021; Fan et al., 2024) focusing on probabilistic distribution predictions instead of point predictions to tackle the uncertainty in indicator predictions described in Challenge 3. For example, MagicScaler (Pan et al., 2023) predicts Gaussian distributions of workloads. However, these methods do not address complex distributions of service indicators, failing to meet Challenge 2.
Decision in Resource Scaling: In terms of scheduling, many methods use simple rule-based approaches or optimization modeling to convert prediction outcomes into Computational Unit decisions (Chen et al., 2023b; Luo et al., 2024; Pan et al., 2023; Wang et al., 2023; Qian et al., 2022). For instance, IPOC (Chen et al., 2023b) multiplies predicted service consumption by a coefficient for resource decisions, and RobustScaler (Qian et al., 2022) integrates service quality and service consumption for optimization-oriented decision. However, these methods do not effectively utilize prediction uncertainty or achieve flexible, cross-indicator optimization, thus failing to address Challenge 3.
In summary, HARMONY aims to fill the existing research gap in large-scale cloud services by integrating predictions of the distributions of multiple indicators and embedding them into the Bayesian decision-making process for joint optimization.
Appendix C Baseline Description
The first group consists of probabilistic prediction models. Among them, DeepAR is implemented with a standard library (https://ts.gluon.ai/stable/api/gluonts/gluonts.mx.model.deepar.html), while both CSDI and MG-TSD employ their official implementations.
DeepAR: A commonly adopted probabilistic forecasting approach that utilizes recurrent neural networks to project predictions onto a Gaussian distribution.
CSDI: A method that utilizes a score-based diffusion model to generate conditional probability predictions.
MG-TSD: An advanced diffusion-based model that leverages the inherent granularity levels within the data as targets at diffusion steps, effectively guiding the learning process of diffusion models.
The second group consists of multivariate forecasting methods, which are the most widely used for cloud service workload prediction. All the following baselines are implemented with open-source code (https://github.com/thuml/Time-Series-Library).
Pyraformer: A method that utilizes pyramidal attention to effectively capture both short- and long-term temporal dependencies with low time and space complexity.
DLinear: A method utilizing linear layers to capture temporal correlation for multivariate time-series prediction tasks.
PatchTST: A Transformer-based model designed for multivariate time series forecasting and self-supervised representation learning, utilizing a channel-wise independent mechanism.
Crossformer: A Transformer-based model that employs Dimension Segment Wise (DSW) embedding to capture cross-time and cross-dimension dependencies for multivariate series forecasting.
TSMixer: A novel architecture based on MLPs that efficiently captures the complexity of multivariate series.
iTransformer: A Transformer-based model that redefines the attention and feed-forward network to function on inverted dimensions, enhancing its ability to capture multivariate correlations.
Appendix D Deployment Details
To ensure reliability and minimize the risk of SLA violations, we have implemented several key practices for all prediction-based methods during deployment:
90-90 Principle: A critical step in our approach is the rigorous use of the 90-90 validation check before deploying any service. This practice ensures that at least 90% of the predictions demonstrate an accuracy exceeding 90%. It is important to note that all services included in our testing have passed the 90-90 validation check, which significantly mitigates the risk of SLA breaches. Importantly, we find that larger-scale inference services with more stable workload patterns have a higher likelihood of passing the 90-90 test. In practice, we focus on optimizing a subset of larger-scale services to achieve rapid benefits, rather than optimizing all services.
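For reference, a minimal sketch of such a check is given below; the error definition (absolute percentage error below 10% counting as "accurate") is our assumption, as the paper does not spell it out.

```python
import numpy as np

def passes_90_90(y_true: np.ndarray, y_pred: np.ndarray) -> bool:
    # A prediction counts as "accurate" when its absolute percentage error
    # is below 10% (accuracy > 90%); the service passes when at least 90%
    # of predictions are accurate. The exact error metric is an assumption.
    ape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), 1e-8)
    return float((ape < 0.10).mean()) >= 0.90
```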
Frequent Model Retraining: Our model undergoes daily retraining in the production environment, and updates are deployed only after meeting the 90-90 criterion in offline testing to ensure consistent performance and adaptability to changing workloads. Our emphasis lies in prioritizing prediction accuracy over the training cycle, given the frequent adjustments in cloud service workloads. By opting for daily model retraining, we aim to avoid the influence of past, dissimilar workload patterns and improve prediction accuracy, as validated in Sec. 5.2. Moreover, the one-day model update cycle for a single cloud service provides sufficient time for retraining and enhances our focus on prediction accuracy improvement.
Resource Allocation Indicators: For a more direct comparison, we have primarily focused on optimizing CPU and GPU utilization as well as response time indicators while adjusting the weights in Bayesian decision-making. Additionally, HARMONY can accommodate joint optimization of memory, GPU memory, and other indicators as needed, adapting to utilize different combinations of indicators based on the specific demands of the task.
These practices have been invaluable in overcoming data collection, modeling, and deployment challenges within a production environment. They provide a robust framework ensuring successful implementation of our resource scaling solution.