Towards Lifelong Scene Graph Generation with Knowledge-aware In-context Prompt Learning
Abstract
Scene graph generation (SGG) aims to predict visual relationships between pairs of objects within an image. Prevailing SGG methods assume a one-off learning process: whenever new relationships emerge, the model must be retrained on all previously observed samples to avoid forgetting previously acquired knowledge. This work seeks to address this pitfall inherent in prior relationship-prediction methods. Motivated by the success of in-context learning in pretrained language models, our approach equips the model with the ability to predict relationships and continuously acquire novel knowledge without succumbing to catastrophic forgetting. To this end, we introduce a novel and pragmatic setting for scene graph generation, namely Lifelong Scene Graph Generation (LSGG), where tasks, i.e., sets of predicates, arrive in a streaming fashion. In this setting, the model is trained exclusively on the present task, without access to previously encountered training data except for a limited number of exemplars, yet it must infer all predicates it has encountered thus far. Towards LSGG, we propose to represent visual content as textual tokens that serve as input to a pretrained language model, e.g., GPT-2, and develop a rehearsal strategy via an in-context prompt built on a novel knowledge-aware prompt retrieval mechanism. Rigorous experiments demonstrate the superiority of our proposed method over state-of-the-art SGG models in the LSGG setting across a diverse array of metrics. In addition, extensive experiments on two mainstream benchmark datasets, VG and Open-Image (v6), show that our model outperforms a number of competitive SGG models under both the continual learning and conventional settings. Moreover, comprehensive ablation studies demonstrate the effectiveness of each component of our model.
Index Terms:
Scene Graph Generation, Continuous Learning, Visual Relationship Detection, In-context Learning.
I Introduction
The task of scene graph generation (SGG), which entails the detection and localization of visual objects and their relationships within an image, holds a fundamental position in the computer vision community and has garnered significant attention. In the context of SGG, visual concepts such as objects or attributes are systematically presented as a directed scene graph (SG). This representation proves to be of utmost importance in the broader realm of scene understanding [1] and extends its influence to diverse vision tasks, including image captioning [2], image retrieval [3], and visual question answering [4]. Formally, a scene graph is articulated as a set of relation triples, denoted as ⟨subject, predicate, object⟩.
A substantial body of recent research has made continuous strides in enhancing the performance of SGG [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Nevertheless, these methods commonly operate under the assumption of a closed set of predicates, conducting once-and-for-all training on a fixed dataset, such as Visual Genome [3]. While this conventional training and inference setup is convenient, it proves impractical in realistic applications where new relationships may emerge over time. In practical scenarios, those existing methods necessitate the integration of new data with all previously available data, followed by retraining on the combined dataset. This technique becomes increasingly resource-intensive and time-consuming, particularly as the size of the training set grows substantially. Conversely, training these models solely with new samples presents a challenge known as catastrophic forgetting [18, 19].
Consequently, an intuitive question emerges: is it feasible for an SGG model to undergo incremental training with newly arriving data without forgetting previously acquired knowledge? In pursuit of an answer to this question, we contemplate a more pragmatic lifelong learning setting (alternatively termed continual learning [19]) specific to scene graph generation, referred to as lifelong scene graph generation (LSGG). In this setting, an SGG model is continually updated with newly arriving data. It is worth highlighting that a scene graph comprises object and predicate labels, both of which have the potential to emerge over time. In this study, we narrow the focus of lifelong learning to the latter category, novel predicates, while assuming that all object labels are predetermined through a pretrained object detection network.
Addressing LSGG has to confront two primary challenges. Firstly, prevalent SGG datasets, such as Visual Genome [3], manifest a notably long-tailed predicate distribution. Consequently, each task within the LSGG paradigm may exhibit a highly skewed distribution, where certain predicates may be associated with only a few samples. This necessitates endowing the model with a few-shot learning capability for effective learning of the current task. Secondly, since the model lacks access to previously encountered training examples or simply replays with a few-shot examples [20, 21, 22], it must not only learn novel relationships using newly arriving data in a few-shot manner but also retain the acquired knowledge from earlier tasks.
In addressing the first challenge, we propose predicting visual relationships using pretrained language models, such as GPT-2 [23], by distilling rich language knowledge from a cross-modality model, e.g., GLIP [24]. Recently, pretrained models [25, 26, 27] have demonstrated considerable success in few-shot and zero-shot scenarios owing to their robust representation capabilities derived from unsupervised training on extensive corpora. A prevalent approach to harnessing these models involves designing or learning [28] task-specific prompts for extracting knowledge from them, particularly for downstream tasks such as text generation [29]. However, in SGG, there is no universally pretrained visual relationship model. While the prior work [30] has proposed pretraining a relation model by leveraging dense captions [3], the acquisition of such a language corpus proves to be resource-intensive and limits the method’s applicability. To circumvent this challenge, we initially extract visual features from a cross-modality model, e.g., GLIP [24]. Subsequently, we train an encoder to embed these features as a set of symbolic representations. These embeddings, enriched with linguistic knowledge, then serve as input for a pretrained language model to predict relationships.
The second challenge in LSGG pertains to mitigating forgetting during training. In this regard, L2P [64] proposed learning multiple prompts to preserve acquired knowledge. However, L2P did not formulate an effective rehearsal strategy for replaying exemplars from the memory buffer. Drawing inspiration from the widely employed in-context learning strategy in natural language processing [32, 33, 34], we propose a rehearsal strategy centered around an in-context prompt to retain learned tasks to the greatest extent possible. The fundamental concept behind in-context learning involves incorporating a few examples with ground-truth labels into a prompt, transforming these exemplars into demonstrations [33] or supplements to the conventional prompt [35]. Building upon this notion, we propose the learning of multiple knowledge-aware prompts, each of which retains a few examples for rehearsal. During inference, we employ a knowledge-based retrieval technique to identify the most suitable prompts and their corresponding exemplars. Subsequently, these prompts and exemplars are concatenated to form a comprehensive prompt for the language model to predict relationships.
In summary, our contributions are four-fold:
-
•
We propose a new challenging and practical task, dubbed lifelong scene graph generation (LSGG), aiming to learn to predict predicates in a streaming manner without forgetting the learned knowledge in the past.
-
•
Towards LSGG, we propose to present visual contents with rich textual symbolic representation via a transformer-based encoder and then design a rehearsal strategy based on an in-context prompt to alleviate forgetting. To the best of our knowledge, this is the first attempt for LSGG.
-
•
For identifying better prompts and exemplars in the prompt context, we introduce a knowledge-aware prompt retrieval strategy. This strategy proves to be highly advantageous for relation prediction and is particularly effective in mitigating the effects of forgetting.
-
•
We conduct extensive experiments over a number of state-of-the-art SGG models to evaluate their performance in the LSGG setting. Results show that our in-context-based prompting method is superior to other models by a large margin, setting a strong baseline for the LSGG task. Besides, on the task of conventional SGG tasks, our model also exhibits considerable improvements over many SGG models.
II Related Work
II-A Scene Graph Generation
Scene graph generation [36], also referred to as visual relationship detection [37], is a task focused on detecting and localizing relationships between subject-object pairs within an image. In the early stages of research [37, 38, 6, 39, 40], the prevailing pipeline involved leveraging an object detection network to extract object features. Subsequently, a feature refinement module was applied to capture contextual cues [5], external language priors [37], or localization information [41]. The primary research question during this phase often centered on the second step, specifically on how to learn robust representations for relationship classification. However, with the revelation of a highly skewed predicate distribution in datasets such as Visual Genome [3], subsequent works [16, 42, 13, 43, 14] shifted their focus to address the long-tail problem. In this line of research, various techniques were developed, including knowledge embedding [16], reweighting or resampling [5], causal analysis [13], fine-grained predicate prediction [14], and predicate probability distribution based loss (PPDL) [44].
However, these methods typically assume that the training data are arriving at once. In practical scenarios, where the collection of scene graph data is particularly challenging for rare predicates, there is a need for models to continuously learn when new data becomes available without access to previously trained samples. Addressing this gap, we propose a novel task, Lifelong Scene Graph Generation.
II-B Prompt-based Learning
Prompt learning [45] has emerged as a pivotal paradigm in recent years, fueled by great advancements in pretrained language models (PLMs) such as GPT-3 [25]. The exploration of prompt-based learning has been particularly pervasive in natural language processing (NLP), where researchers have harnessed PLMs as expansive knowledge repositories. This approach involves the strategic design of templates or prompts to query PLMs, obviating the need for extensive model retraining. Notable contributions include the development of adaptive or continuous prompts for downstream tasks, such as AdaPrompt [28], prompt-based finetuning [30], Prefix-Tuning [46], and learning to prompt [31]. These efforts underscore the flexibility and efficiency of prompt-based learning in extracting task-specific information from pretrained language models.
The success and versatility of prompt-based learning have transcended the domain of NLP into computer vision, marked prominently by the advent of Contrastive Language-Image Pre-training (CLIP) [27], which is a large-scale pretrained cross-modality model and has become a linchpin for prompt-based learning in computer vision. In this context, prompt-based learning has demonstrated its efficacy in tasks such as image classification [44] and open-vocabulary object detection [47]. These applications showcase the adaptability of prompt-based learning across diverse domains, underscoring its potential as a versatile methodology for knowledge extraction and task adaptation.
The exploration of prompt-based learning extends beyond the individual model level to the broader context of few-shot learning strategies. The concept of in-context learning has gained prominence in few-shot scenarios for pretrained models in both NLP [48, 32, 33, 49] and computer vision [35]. In-context learning [50] involves enriching the input prompt by incorporating few-shot exemplars or demonstrations within the context. This contextual information guides the pretrained model to make more informed predictions without the need for extensive fine-tuning. Such approaches [32, 51] address the challenge of finding optimal prompts for specific queries, enhancing the robustness and adaptability of prompt-based learning strategies.
II-C Lifelong learning
Lifelong learning [19], a paradigm that reflects the continuous acquisition of knowledge over time, has garnered significant attention across various domains. In machine learning and artificial intelligence, lifelong learning addresses the challenges posed by evolving datasets, dynamic environments, and the need for models to adapt to new information without catastrophic forgetting. Approaches can be broadly categorized into three main strategies: regularization-based methods, architectural (parameter-isolation) methods, and rehearsal (memory-based) methods. Regularization-based methods constrain updates to parameters that matter for earlier tasks; Elastic Weight Consolidation (EWC) [52] introduced the concept of selectively protecting important parameters based on their significance to previously learned tasks, and Learning without Forgetting (LwF) [54] distills knowledge from previous tasks to constrain the learning of subsequent ones, enabling models to leverage previously acquired knowledge while adapting to new tasks. Architectural methods isolate parameters associated with different tasks to prevent interference; Progressive Neural Networks (PNN) [53], for instance, add task-specific columns to the model architecture, allowing the continual integration of new tasks. Rehearsal-based methods mitigate forgetting by storing and periodically revisiting previously encountered data; memory-augmented neural networks, such as Neural Turing Machines (NTM) [55], integrate external memory components to facilitate the storage and retrieval of past experiences, enabling the effective handling of sequential tasks with selective memory access. Beyond these strategies, lifelong learning has also seen the integration of meta-learning techniques. Meta-learning methods, including Model-Agnostic Meta-Learning (MAML) [56] and Reptile [57], aim to instill a capacity for rapid adaptation to new tasks by exposing models to diverse scenarios during training.
In the context of visual recognition, lifelong learning has been applied to tasks such as object recognition, scene understanding, and image classification. Parisi et al. [58] employ ensemble learning to adapt to new tasks, while Hou et al. [59] leverage knowledge distillation for continual adaptation. Visual domain tasks often present additional challenges due to the high dimensionality of image data and the intricate relationships between visual elements.
III Preliminaries
III-A Problem Statement
Lifelong Scene Graph Generation (LSGG) involves training a visual relationship prediction model from a continuous stream of data, where novel relationship categories may emerge dynamically. Formally, the training data are presented as a sequence of tasks $\mathcal{D}=\{\mathcal{D}_1,\dots,\mathcal{D}_T\}$, where $\mathcal{D}_t=\{\mathcal{D}_t^{tr},\mathcal{D}_t^{val},\mathcal{D}_t^{te}\}$ represents the training, validation, and testing sets for the $t$-th task. The complete label set is defined as $\mathcal{Y}=\bigcup_{t=1}^{T}\mathcal{Y}_t$. For any two label sets, it is assumed that $\mathcal{Y}_i\cap\mathcal{Y}_j=\emptyset$ $(i\neq j)$. During the $t$-th training stage, the model is trained with $\mathcal{D}_t^{tr}$ under two configurations: (1) the model cannot access any previously seen data points, i.e., none of $\bigcup_{i<t}\mathcal{D}_i^{tr}$; and (2) the model can only access a few examples from the seen tasks, stored in a memory buffer $\mathcal{M}$ with $|\mathcal{M}|\ll|\bigcup_{i<t}\mathcal{D}_i^{tr}|$. This work focuses on the second setting. At the $t$-th inference stage, the model is tasked with predicting the labels of all previous tasks, i.e., $\bigcup_{i\leq t}\mathcal{Y}_i$.
In SGG, a scene graph is represented as a set of subject-predicate-object (SPO) triples, i.e., $\{(s,p,o)\mid s,o\in\mathcal{C},\ p\in\mathcal{P}\}$, where $\mathcal{C}$ and $\mathcal{P}$ denote the object and relationship label sets, respectively. In this work, we mainly consider the case where $\mathcal{P}$ arrives in a streaming manner; that is, an object class may occur in different tasks, but a relationship class emerges exclusively in one specific task. Mathematically, $\mathcal{C}_i\cap\mathcal{C}_j\neq\emptyset$ and $\mathcal{P}_i\cap\mathcal{P}_j=\emptyset$, where $i$ and $j$ denote any two training stages.
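To make the protocol concrete, the following is a minimal Python sketch of the LSGG stream described above, assuming a simple per-predicate exemplar buffer; all function and variable names (e.g., `train_one_stage`, `subsample_per_predicate`) are illustrative placeholders rather than our actual implementation.

```python
# Minimal sketch of the LSGG protocol: predicate tasks arrive as a disjoint
# stream, only a small exemplar buffer survives between stages, and the model
# is evaluated on all predicates seen so far after every stage.
import random

def lsgg_stream(task_predicates, task_data, buffer_size=2000):
    """task_predicates: list of disjoint predicate-label sets, one per stage.
    task_data: list of training sets, each a list of (sample, predicate) pairs."""
    seen_predicates, buffer = set(), []
    for preds, train_set in zip(task_predicates, task_data):
        assert not (preds & seen_predicates)        # relationship tasks are disjoint
        seen_predicates |= preds

        # Train on the current task plus the few stored exemplars (rehearsal).
        train_one_stage(train_set + buffer)

        # Keep roughly buffer_size / |seen predicates| exemplars per predicate.
        per_class = max(1, buffer_size // len(seen_predicates))
        buffer = subsample_per_predicate(train_set + buffer, per_class)

        # After each stage, evaluate on *all* predicates encountered so far.
        evaluate(seen_predicates)

def subsample_per_predicate(pairs, per_class):
    by_pred = {}
    for sample, pred in pairs:
        by_pred.setdefault(pred, []).append((sample, pred))
    return [ex for exs in by_pred.values()
            for ex in random.sample(exs, min(per_class, len(exs)))]

def train_one_stage(data):   # stand-in for training the actual SGG model
    pass

def evaluate(predicates):    # stand-in for evaluation over the seen label set
    pass
```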

III-B Prompt-based learning
Prompt-based learning aims to exploit a template with a [MASK] slot to probe a pretrained model. For example, in CLIP [27], we can use the embedding of a prompt, e.g., “a photo of [class]”, to classify the object label, rather than learning new classifiers. The challenge is how to find the best prompt for a given query. In the above example, possible prompts could be “an image of [class]” or “[class] in the photo”, etc. To address this, Zhou et al. [35] proposed to learn continuous prompt vectors while keeping the pretrained model frozen. Without loss of generality, we denote the learned prompt as:
$$\mathbf{x}_{\text{prompt}} = [\,\mathbf{x};\ \mathbf{p}_1,\dots,\mathbf{p}_L;\ \mathbf{e}_{\text{[MASK]}}\,], \tag{1}$$
where $\mathbf{x}$ denotes the input query tokens; $[\,\cdot\,;\,\cdot\,]$ is the concatenation operation; $\mathbf{p}_1,\dots,\mathbf{p}_L$ denotes the learned soft prompt and $L$ is its length; and $\mathbf{e}_{\text{[MASK]}}$ is the label prediction token. Then, we take $\mathbf{x}_{\text{prompt}}$ as input for a pretrained model and treat the output at the position of $\mathbf{e}_{\text{[MASK]}}$ as the predicted result.
III-C In-context learning
In-context learning has demonstrated its efficacy in few-shot learning scenarios for pretrained language models [48, 32, 33, 49]. The fundamental notion is to enrich the knowledge embedded in the input prompt by presenting few-shot exemplars complete with ground truth labels in the context. This context guides the pretrained model in making better predictions. Formally, the in-context prompt can be denoted as:
$$\mathbf{x}_{\text{ic}} = [\,(\mathbf{x}^1,y^1);\ \dots;\ (\mathbf{x}^K,y^K);\ \mathbf{x}_{\text{prompt}}\,], \tag{2}$$
where $\mathbf{x}_{\text{prompt}}$ represents the prompt in Eq. (1); each $(\mathbf{x}^k,y^k)$ denotes a pair of input data and the corresponding ground-truth label; and $K$ is the number of in-context examples. In Eq. (2), the context can be viewed as a supplement to the prompt in Eq. (1). It provides few-shot exemplars with ground-truth labels, and these exemplars do not introduce extra parameters.
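The following is a small sketch, assuming token embeddings of a fixed width, of how Eq. (1) and Eq. (2) compose a soft prompt with in-context demonstrations; the names and shapes are illustrative, not the interface of any specific library.

```python
# Illustrative composition of Eqs. (1)-(2): a learned soft prompt is appended
# to the query tokens, and K labelled exemplars are prepended as in-context
# demonstrations before the whole sequence is fed to a pretrained model.
import torch

def build_prompt(query_emb, soft_prompt, mask_emb):
    """Eq. (1): x_prompt = [x ; p_1..p_L ; e_[MASK]]  (all of shape [*, d])."""
    return torch.cat([query_emb, soft_prompt, mask_emb], dim=0)

def build_in_context_prompt(exemplars, query_emb, soft_prompt, mask_emb):
    """Eq. (2): prepend (x^k, y^k) pairs, each already embedded, to x_prompt."""
    context = [torch.cat([x_emb, y_emb], dim=0) for x_emb, y_emb in exemplars]
    return torch.cat(context + [build_prompt(query_emb, soft_prompt, mask_emb)], dim=0)

d = 768                                              # embedding width of the language model
query = torch.randn(16, d)                           # embedded query tokens
prompt = torch.nn.Parameter(torch.randn(8, d))       # learnable soft prompt (L = 8)
mask = torch.zeros(1, d)                             # prediction slot
demos = [(torch.randn(16, d), torch.randn(2, d)) for _ in range(3)]   # K = 3 exemplars
seq = build_in_context_prompt(demos, query, prompt, mask)
print(seq.shape)                                     # [3*(16+2) + 16 + 8 + 1, 768]
```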
IV Method
In Figure 1, we illustrate our primary framework for lifelong scene graph generation, which comprises three integral components: the mapping of visual features to symbolic representations, knowledge-aware prompt learning, and in-context exemplar selection. The core idea underlying this framework is to first present visual content as a set of symbolic representations understandable by a pretrained language model. Subsequently, we acquire a set of knowledge-specific prompts, which are stored in a dedicated memory slot. Upon the arrival of a new task, we retrieve the most pertinent prompts along with their associated exemplars and ground-truth labels. These elements are amalgamated to construct a comprehensive in-context prompt, subsequently employed to query the language model. Specifically, the first component of our framework transforms visual features into textual tokens or symbolic representations. This transformation is implemented by aligning image features derived from an object detection network with the textual space of a language model such as GPT-2. For a given query sample, a knowledge-guided retrieval strategy is deployed to identify the most suitable exemplars, which then serve as in-context samples. The selected exemplars are concatenated with the query sample and fed into the pretrained language model.
IV-A Visual features to symbolic representations
IV-A1 Visual feature extraction
In alignment with prior established SGG models [5, 17, 14], our approach extracts four key features, namely the context $\mathbf{v}^c$, relationship $\mathbf{v}^r$, subject $\mathbf{v}^s$, and object $\mathbf{v}^o$, to represent a subject-object pair comprehensively. Specifically, the context feature $\mathbf{v}^c$ is derived from the global image representation, emphasizing a holistic understanding of the entire image. In contrast, the other three features are extracted by an object detection network. Notably, the relationship feature $\mathbf{v}^r$ is characterized by the union region of the subject and object within the image, signifying their contextual information in the regional context.
Traditionally, the widely employed object detector in SGG is Faster R-CNN [60]. However, with the success of pretrained cross-modality models such as RegionCLIP [61] and GLIP [24], cross-modality features from these models have demonstrated greater robustness and generality [62], along with an open-vocabulary capability. Motivated by this, we adopt cross-modality representations as the visual features. By default, we select GLIP to extract the aforementioned four features. Similar to CLIP, GLIP comprises two encoders designed for images and texts, respectively. Concretely, given an input image, the image encoder of GLIP outputs the region features $\mathbf{V}\in\mathbb{R}^{N\times d}$, where $N$ is the number of regions and $d$ is the feature dimension, while the text encoder produces a set of textual embeddings $\mathbf{T}\in\mathbb{R}^{M\times d}$, where $M$ denotes the number of textual tokens. For object detection, we set the object class words as the text input, as in [24].
IV-A2 Visual-language to symbolic representations
After obtaining the cross-modality features $\mathbf{v}^{*}$, where the superscript $*\in\{c,r,s,o\}$ denotes the context, relationship, subject, and object feature respectively, we seek to decode them into textual/symbolic representations. To achieve this, we utilize a standard transformer to map these cross-modality features to textual tokens, as in [63]. Formally, we denote this as:
$$\mathbf{t}^{*} = F_{\text{enc}}\big([\,f_{\text{mlp}}(\mathbf{v}^{*});\ \mathbf{q}_1,\dots,\mathbf{q}_{L_s}\,]\big), \tag{3}$$
where $F_{\text{enc}}$ signifies a transformer employed to encode cross-modality features into textual representations; $f_{\text{mlp}}$ is an MLP projection that transforms the single vector $\mathbf{v}^{*}\in\mathbb{R}^{d}$ into $L_v$ vectors as the input for $F_{\text{enc}}$, i.e., $f_{\text{mlp}}(\mathbf{v}^{*})\in\mathbb{R}^{L_v\times d}$, in which $d$ is the feature dimension. The parameter $L_s$ denotes the length of the encoded textual tokens, and $[\,\cdot\,;\,\cdot\,]$ denotes concatenation. Besides, $\mathbf{q}_1,\dots,\mathbf{q}_{L_s}$ represents a set of learnable prompt tokens, and their corresponding hidden output states are taken as the transformed symbolic representation, i.e., $\mathbf{t}^{*}\in\mathbb{R}^{L_s\times d}$.
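As a reference point, a minimal PyTorch sketch of the encoder $F_{\text{enc}}$ in Eq. (3) is given below, in the spirit of [63]; layer counts, token lengths, and dimensions are assumptions chosen for illustration and may differ from our configuration.

```python
# Sketch of the visual-to-symbolic encoder F_enc: an MLP expands one
# cross-modality feature vector into several token embeddings, learnable query
# tokens are appended, and the transformer's outputs at the query positions are
# taken as the symbolic representation fed to the language model.
import torch
import torch.nn as nn

class SymbolicEncoder(nn.Module):
    def __init__(self, feat_dim=768, lm_dim=768, n_vis=4, n_sym=4, n_layers=4):
        super().__init__()
        # f_mlp: project one feature vector into n_vis token embeddings.
        self.mlp = nn.Sequential(nn.Linear(feat_dim, lm_dim * n_vis), nn.Tanh())
        # Learnable query tokens whose output states become the symbolic tokens.
        self.queries = nn.Parameter(torch.randn(n_sym, lm_dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.n_vis, self.n_sym, self.lm_dim = n_vis, n_sym, lm_dim

    def forward(self, feat):                       # feat: [B, feat_dim]
        vis = self.mlp(feat).view(-1, self.n_vis, self.lm_dim)
        q = self.queries.unsqueeze(0).expand(vis.size(0), -1, -1)
        hidden = self.encoder(torch.cat([vis, q], dim=1))
        return hidden[:, self.n_vis:, :]           # [B, n_sym, lm_dim] symbolic tokens

enc = SymbolicEncoder()
tokens = enc(torch.randn(2, 768))                  # e.g., the relationship feature v^r
print(tokens.shape)                                # torch.Size([2, 4, 768])
```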
IV-B In-context prompts for LSGG
The key challenge in LSGG lies in mitigating the risk of forgetting learned knowledge without recourse to excessive previously trained instances. A naïve prompt-based method (e.g., [47]) feeds the tokenized representations, i.e., $\mathbf{t}^c$, $\mathbf{t}^r$, $\mathbf{t}^s$, and $\mathbf{t}^o$, into a pretrained language model (e.g., GPT-2) as in Eq. (1), i.e.,
$$\mathbf{x}_{\text{prompt}} = [\,\mathbf{t}^c;\ \mathbf{t}^r;\ \mathbf{t}^s;\ \mathbf{t}^o;\ \mathbf{p}_1,\dots,\mathbf{p}_L;\ \mathbf{e}_{\text{[MASK]}}\,]. \tag{4}$$
Nevertheless, in practice, this approach gives rise to pronounced catastrophic forgetting. We conjecture that this can be attributed to two discernible reasons: firstly, the knowledge carried by the single prompt may prove effective for the current task, yet lack adaptability when confronted with a new task; secondly, the contextual information, encompassing solely the global image cues $\mathbf{t}^c$, is inherently limited for a language model. To mitigate these issues, we propose learning multiple adaptive knowledge-aware prompts, each of which stores a few representative exemplars that serve as the in-context part in Eq. (2).
IV-B1 Knowledge-aware prompt learning
A main challenge within in-context learning pertains to the selection of suitable examples for contextualization [48, 32, 33]. In Equation (2), prevailing in-context methods [48, 33] commonly employ a uniform prompt across all contextual exemplars. However, we posit that such an approach proves ineffectual for LSGG. As illustrated by the findings in L2P [64], relying on a singular prompt fails to adequately preserve acquired knowledge. Consequently, we contend that diverse exemplars may be associated with distinct prompts, each representing a distinct type of knowledge. For instance, in SGG, where the VG dataset features multiple predicate types such as localization and human actions, we advocate the utilization of disparate prompts, namely knowledge-aware prompts, for each specific type of knowledge.
To that end, we define our knowledge-aware prompts as a collection of triples $\mathcal{K}=\{(\mathbf{k}_i,\mathbf{p}_i,\mathcal{E}_i)\}_{i=1}^{M}$, where $M$ represents the number of prompts, $\mathbf{k}_i$ signifies a knowledge word, $\mathbf{p}_i$ is its prompt, and $\mathcal{E}_i$ is a set in which each element is an exemplar pair $(\mathbf{x}_j,y_j)$, corresponding to a seen instance with its ground truth. To elaborate, when presented with a query, the initial step utilizes the global context feature $\mathbf{v}^c$ to identify the top-$k$ most similar knowledge words through cosine similarity measurements across the set $\{\mathbf{k}_i\}_{i=1}^{M}$. This retrieval process can be articulated as follows:
$$\{\mathbf{k}_{i_1},\dots,\mathbf{k}_{i_k}\} = \phi_k\big(\mathbf{v}^c,\ \{\mathbf{k}_i\}_{i=1}^{M}\big), \tag{5}$$
where $\phi_k$ denotes a retrieval function that returns the top-$k$ closest knowledge words. Notably, each $\mathbf{k}_i$ is randomly initialized in its nascent stage. After the identification of the most similar knowledge words, their corresponding prompts $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ are selected as the input prompts.
As for the exemplars, different from the knowledge word retrieval, we employ the regional context representation, i.e., the relationship feature $\mathbf{v}^r$, to retrieve the optimal exemplar within each selected $\mathcal{E}_i$ through a cosine similarity search, defined as:
$$(\mathbf{x}_i^{*},y_i^{*}) = \phi_1\big(\mathbf{v}^r,\ \mathcal{E}_i\big), \tag{6}$$
where $\phi_1$ is a function that returns the closest exemplar in $\mathcal{E}_i$.
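A compact sketch of the retrieval in Eqs. (5) and (6) is shown below; the tensor names (`knowledge_words`, `exemplar_feats`) and the number of slots are assumptions for illustration.

```python
# Knowledge-aware retrieval: the global context feature selects the top-k
# knowledge words by cosine similarity (Eq. 5), and the relationship feature
# picks the closest stored exemplar inside a selected slot (Eq. 6).
import torch
import torch.nn.functional as F

def retrieve_prompts(ctx_feat, knowledge_words, k=3):
    """Eq. (5): ctx_feat [d], knowledge_words [M, d] -> indices of top-k slots."""
    sim = F.cosine_similarity(ctx_feat.unsqueeze(0), knowledge_words, dim=-1)  # [M]
    topk = torch.topk(sim, k)
    order = torch.argsort(topk.values)       # ascending similarity, as used in Eq. (7)
    return topk.indices[order]

def retrieve_exemplar(rel_feat, exemplar_feats):
    """Eq. (6): pick the stored exemplar whose relation feature is closest."""
    sim = F.cosine_similarity(rel_feat.unsqueeze(0), exemplar_feats, dim=-1)
    return int(torch.argmax(sim))

d, M = 768, 10
knowledge_words = torch.nn.Parameter(torch.randn(M, d))      # randomly initialised
slots = retrieve_prompts(torch.randn(d), knowledge_words, k=3)
best = retrieve_exemplar(torch.randn(d), torch.randn(20, d))  # 20 exemplars in one slot
print(slots.tolist(), best)
```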
IV-B2 In-context prompt of relation prediction
So far, we have the top-$k$ most similar prompts $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ along with their corresponding exemplars $\mathbf{x}_{i_1}^{*},\dots,\mathbf{x}_{i_k}^{*}$, complete with their labels $y_{i_1}^{*},\dots,y_{i_k}^{*}$. Then, we define our in-context prompt as:
$$\mathbf{x}_{\text{ic}} = [\,\mathbf{p}_{i_1};(\mathbf{x}_{i_1}^{*},y_{i_1}^{*});\ \dots;\ \mathbf{p}_{i_k};(\mathbf{x}_{i_k}^{*},y_{i_k}^{*});\ \mathbf{x}_{\text{prompt}}\,], \tag{7}$$
where $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ represent the retrieved prompts arranged in ascending order of similarity, as in [33]. In practice, $\mathbf{x}_{\text{prompt}}$ is the concatenation of the image context, relation region, subject, and object tokens, which are transformed by Eq. (3). Then, we feed $\mathbf{x}_{\text{ic}}$ into a pretrained language model as:
$$\mathbf{s} = F_{\text{LM}}\big(\mathbf{x}_{\text{ic}};\ \Theta\big), \tag{8}$$
where $F_{\text{LM}}$ denotes a pretrained language model (e.g., GPT-2), $\Theta$ denotes the parameters of the language model, and $\mathbf{s}$ is a prediction score distribution over the vocabulary. It is important to note that, in practical scenarios where relationship labels may comprise multiple words (e.g., ”sitting on”), $\mathbf{e}_{\text{[MASK]}}$ comprises $L_p$ learnable embeddings, where $L_p$ denotes the maximum length encompassing all potential predicate words.
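The following hedged sketch illustrates Eqs. (7) and (8) with a frozen HuggingFace GPT-2, feeding the assembled embeddings through `inputs_embeds`; the helper `predict_predicate`, the slot count, and all shapes are illustrative assumptions rather than our released code.

```python
# Assemble the in-context prompt (Eq. 7) from retrieved prompts, exemplars, and
# the tokenized query, then score predicate words with a frozen GPT-2 (Eq. 8).
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():            # the language model stays frozen
    p.requires_grad_(False)

d = model.config.n_embd                 # 768 for the base GPT-2

def predict_predicate(prompts, exemplars, query_tokens, n_slots=2):
    """prompts/exemplars: lists of [*, d] embeddings in ascending-similarity order;
    query_tokens: [Lq, d] symbolic tokens (context; relation; subject; object)."""
    pieces = []
    for p_emb, (x_emb, y_emb) in zip(prompts, exemplars):
        pieces += [p_emb, x_emb, y_emb]
    slots = torch.zeros(n_slots, d)     # learnable prediction slots in practice
    seq = torch.cat(pieces + [query_tokens, slots], dim=0).unsqueeze(0)  # [1, L, d]
    out = model(inputs_embeds=seq)
    return out.logits[0, -n_slots:, :]  # [n_slots, vocab] scores for predicate words

prompts = [torch.randn(8, d) for _ in range(2)]
exemplars = [(torch.randn(16, d), torch.randn(2, d)) for _ in range(2)]
scores = predict_predicate(prompts, exemplars, torch.randn(16, d))
print(scores.shape)                     # torch.Size([2, 50257])
```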
IV-C Training and Inference
IV-C1 Training
In summary, our framework comprises four primary modules: the cross-modality feature extraction network, GLIP; the visual-to-symbolic transformation $F_{\text{enc}}$ in Eq. (3); the learnable knowledge words and prompts stored in $\mathcal{K}$; and the relation predictor $F_{\text{LM}}$ in Eq. (8). It is noteworthy that, during training, we freeze the parameters of both the feature extraction network, GLIP, and the language model, and only learn $F_{\text{enc}}$, the prompts, and the knowledge words.
Intuitively, learning the knowledge words amounts to minimizing the distance between the query and the selected knowledge words $\mathbf{k}_{i_1},\dots,\mathbf{k}_{i_k}$, that is:
$$\mathcal{L}_{k} = \sum_{j=1}^{k}\big(1-\cos(\mathbf{v}^c,\ \mathbf{k}_{i_j})\big). \tag{9}$$
For predicate classification, we use the standard cross-entropy loss:
$$\mathcal{L}_{ce} = -\frac{1}{L_p}\sum_{l=1}^{L_p}\log P_{l}\big(y_l\big), \tag{10}$$
where $L_p$ is the length of the predicate slots in $\mathbf{x}_{\text{ic}}$; $y_l$ denotes the $l$-th ground-truth predicate word; and $P_l\in\mathbb{R}^{|V|}$ is the output probability distribution at the $l$-th slot, with $|V|$ being the vocabulary size of the language model. It is worth noting that during training, the examples in the buffer memory also need to be replayed through $F_{\text{LM}}$, yielding a rehearsal loss $\mathcal{L}_{reh}$ of the same form [19, 65]. Hence, the overall loss is:
$$\mathcal{L} = \mathcal{L}_{ce} + \lambda_1\mathcal{L}_{k} + \lambda_2\mathcal{L}_{reh}, \tag{11}$$
where $\lambda_1$ and $\lambda_2$ are two hyperparameters that weight the terms.
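For clarity, the composition of Eqs. (9)-(11) can be sketched as below; the exact form of the knowledge term and the loss weights are assumptions consistent with the description above, not the authors' exact code.

```python
# Training objective sketch: a cosine term pulls selected knowledge words
# toward the query context (Eq. 9), a token-level cross-entropy supervises the
# predicate-word slots (Eq. 10), and the same cross-entropy replayed on buffered
# exemplars acts as the rehearsal term in the weighted sum (Eq. 11).
import torch
import torch.nn.functional as F

def knowledge_loss(ctx_feat, selected_words):
    """Eq. (9): minimise 1 - cos(v^c, k_i) over the retrieved knowledge words."""
    sim = F.cosine_similarity(ctx_feat.unsqueeze(0), selected_words, dim=-1)
    return (1.0 - sim).mean()

def predicate_ce(slot_logits, target_ids):
    """Eq. (10): cross-entropy over the predicate-word slots ([n_slots, vocab])."""
    return F.cross_entropy(slot_logits, target_ids)

def total_loss(slot_logits, target_ids, ctx_feat, selected_words,
               buffer_logits, buffer_targets, lam1=1.0, lam2=1.0):
    """Eq. (11): L = L_ce + lam1 * L_knowledge + lam2 * L_rehearsal."""
    l_ce = predicate_ce(slot_logits, target_ids)
    l_k = knowledge_loss(ctx_feat, selected_words)
    l_reh = predicate_ce(buffer_logits, buffer_targets)
    return l_ce + lam1 * l_k + lam2 * l_reh

vocab, d = 50257, 768
loss = total_loss(torch.randn(2, vocab), torch.tensor([11, 257]),
                  torch.randn(d), torch.randn(3, d),
                  torch.randn(2, vocab), torch.tensor([11, 257]))
print(float(loss))
```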
IV-C2 Inference
We employ Eq. (5) to retrieve the top-$k$ most similar knowledge words from the buffer memory, which is populated after training. Subsequently, we apply Eq. (6) to identify the most similar exemplars. The retrieved prompts and exemplars are then concatenated with the query to form the extended input sequence $\mathbf{x}_{\text{ic}}$ in Eq. (7), which serves as the input to the language model.
V Experiments
In this section, we will first elaborate on our experimental dataset, evaluation setting and metrics. Then, we will report our model’s quantitative performance on LSGG, compared with the state-of-the-art methods. Last, a comprehensive ablation study and qualitative results are presented.
V-A Experiment Settings
V-A1 Dataset
Visual Genome (VG) is the mainstream benchmark dataset for SGG. Following previous works [5, 13, 6], we use the pre-processed VG split with 150 object classes and 50 predicates [6]. VG consists of about 108k images, of which 70% are used for training and 30% for testing; 5k images from the training split make up the validation set. Open-Image (v6) consists of 301 object categories and 31 predicate categories. Following the split of [66], the training set has 126,368 images, while the validation and test sets contain 1,813 and 5,322 images, respectively.
V-A2 Evaluation settings of LSGG
In LSGG, training data arrive in a sequential manner. To emulate this scenario, we partition the predicate words of VG into sub-sequences; on the VG dataset, each of the five tasks contains 10 predicates. Regarding task division, we employ two distinct splitting strategies, Random Splitting and Frequency-Based Splitting, and use random splitting by default. The former randomly divides the predicates into five tasks, as commonly adopted in mainstream continual learning studies [67, 68, 21]. The latter divides the predicates according to their frequencies, following the long-tail distribution observed in the training set. The rationale behind this strategy is that more common predicates are relatively easier to acquire, whereas rare ones pose a greater challenge and are anticipated to arrive later in the training sequence. During any training stage, the model either lacks access to previously seen examples or is restricted to a limited number of examples from prior training stages. After each training stage, the model is evaluated on all predicates that have arrived so far. Our experimental evaluations are reported on the three standard SGG sub-tasks: Predicate Classification (PredCls), Scene Graph Classification (SGCls), and Scene Graph Detection (SGDet).
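The two splitting strategies can be summarized by the short sketch below, assuming the 50 VG predicates are divided into five tasks; the helper names and the toy counts are illustrative.

```python
# Two ways of turning the predicate vocabulary into a task stream:
# random splitting, and frequency-based splitting from head to tail.
import random

def random_split(predicates, n_tasks=5, seed=0):
    preds = list(predicates)
    random.Random(seed).shuffle(preds)
    step = len(preds) // n_tasks
    return [preds[i * step:(i + 1) * step] for i in range(n_tasks)]

def frequency_split(predicate_counts, n_tasks=5):
    """Head predicates arrive first; tail predicates arrive in later stages."""
    ranked = sorted(predicate_counts, key=predicate_counts.get, reverse=True)
    step = len(ranked) // n_tasks
    return [ranked[i * step:(i + 1) * step] for i in range(n_tasks)]

counts = {f"pred_{i}": 10_000 // (i + 1) for i in range(50)}   # toy long-tailed counts
print([len(t) for t in random_split(counts.keys())])           # [10, 10, 10, 10, 10]
print(frequency_split(counts)[0][:3])                          # the most frequent predicates
```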
V-A3 Evaluation Metrics
We mainly report results on two types of metrics: the unbiased metric mean Recall@K (mR@K) and the conventional metric Recall@K (R@K). It is worth noting that the latter does not reflect a model’s true performance on tail relations [16, 13]. Hence, following [69], we also report the average of R@K and mR@K, denoted as M@K. Regarding continual learning, we measure how well the model prevents forgetting with the Forgetting Measure (FM) [70, 71, 67]. Besides, for Open-Image, following the settings of [66], we report four metrics: R@50, the weighted mean Average Precision of relationships (wmAP_rel), the weighted mean Average Precision of phrases (wmAP_phr), and score_wtd, computed as score_wtd = 0.2 × R@50 + 0.4 × wmAP_rel + 0.4 × wmAP_phr.
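The sketch below illustrates, under simplified inputs, how the recall-based metrics and the Forgetting Measure are computed; it is an illustration of the metric definitions rather than the official evaluation code.

```python
# Recall@K per image, mean recall over predicate classes, and the Forgetting
# Measure (FM): the drop from a task's best historical score to its score after
# the final training stage, averaged over all earlier tasks.

def recall_at_k(gt_triples, ranked_predictions, k=50):
    hits = sum(1 for t in gt_triples if t in set(ranked_predictions[:k]))
    return hits / max(1, len(gt_triples))

def mean_recall_at_k(gt_by_predicate, ranked_predictions, k=50):
    """Average per-predicate recall so tail classes count as much as head ones."""
    per_class = [recall_at_k(triples, ranked_predictions, k)
                 for triples in gt_by_predicate.values()]
    return sum(per_class) / max(1, len(per_class))

def forgetting_measure(score_history):
    """score_history[t][j]: score of task j measured after training stage t (j <= t)."""
    last = score_history[-1]
    fm = [max(score_history[t][j] for t in range(j, len(score_history) - 1)) - last[j]
          for j in range(len(last) - 1)]           # exclude the final task itself
    return sum(fm) / max(1, len(fm))

history = [[20.0], [18.5, 22.0], [16.0, 20.5, 24.0]]   # toy mR@K after each stage
print(round(forgetting_measure(history), 2))           # average drop on earlier tasks
```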
Methods | PredCls mR@50/100 | PredCls R@50/100 | PredCls M@50/100 | SGCls mR@50/100 | SGCls R@50/100 | SGCls M@50/100 | SGDet mR@50/100 | SGDet R@50/100 | SGDet M@50/100 |
---|---|---|---|---|---|---|---|---|---|
IMP[6] | 7.7 / 8.5 | 45.7 / 47.2 | 26.7 / 27.9 | 5.0 / 5.7 | 25.6 / 27.0 | 15.3/ 16.4 | 2.6 / 3.4 | 16.7 / 18.8 | 9.7 / 11.1 |
Motifs[5] | 9.6 / 11.2 | 50.2 / 51.9 | 29.9 / 31.6 | 6.4 / 7.2 | 28.3 / 30.9 | 17.4/ 19.1 | 3.5 / 4.2 | 18.8 / 20.9 | 11.2 / 12.6 |
VCTree[42] | 10.8 / 12.6 | 51.7 / 53.1 | 31.3 / 32.9 | 6.6 / 7.8 | 27.9 / 29.2 | 17.3 / 18.5 | 3.4 / 4.5 | 17.2 / 19.4 | 10.3 / 12.0 |
TDE[13] | 12.2 / 13.5 | 35.4 / 37.7 | 23.8 / 25.6 | 7.0 / 8.1 | 20.1 / 22.4 | 13.6 / 15.6 | 3.3 / 4.0 | 15.7 / 17.7 | 9.5 / 10.9 |
SHA[17] | 15.2 / 16.9 | 38.2 / 41.1 | 27.0 / 29.3 | 8.3 / 9.1 | 22.5 / 23.8 | 15.4 / 16.5 | 3.7 / 4.4 | 17.8 / 19.2 | 10.8 / 11.8 |
SQUAT[72] | 14.1 / 15.7 | 50.6 / 52.4 | 32.4 / 34.1 | 7.7 / 8.6 | 26.1 / 28.0 | 16.9 / 18.3 | 4.0 / 4.6 | 19.5 / 21.4 | 11.8 / 13.0 |
Ov-SGG [30] | 15.3 / 17.0 | 51.2 / 52.7 | 33.3 / 34.9 | 8.0 / 9.3 | 29.4 / 31.8 | 18.7 / 20.4 | – / – | – / – | – / – |
PE-Net [69] | 15.6 / 17.4 | 52.1 / 54.8 | 33.9 / 36.0 | 9.1 / 10.8 | 31.4 / 32.9 | 20.4 / 22.1 | 4.2 / 5.3 | 21.2 / 23.8 | 12.7 / 14.6 |
VS3[62] | 16.2 / 17.1 | 52.3 / 54.2 | 34.3 / 35.7 | 8.8 / 10.2 | 31.2 / 32.5 | 20.0 / 21.4 | 4.5 / 5.1 | 20.1 / 22.4 | 12.3 / 13.8 |
ICSGGs | 14.6 / 15.7 | 51.0 / 52.7 | 32.8 / 34.2 | 8.6 / 9.5 | 30.2 / 31.7 | 19.4 / 20.8 | 3.8 / 4.5 | 19.4 / 20.3 | 11.6 / 12.4 |
ICSGGm | 15.4 / 16.6 | 52.9 / 54.6 | 34.2 / 35.6 | 9.0 / 10.1 | 31.6 / 33.0 | 20.3 / 21.6 | 4.0 / 4.8 | 20.0 / 21.7 | 12.0 / 13.5 |
ICSGG | 18.6 / 20.3 | 54.1 / 56.4 | 36.4 / 38.4 | 9.7 / 11.4 | 33.1 / 34.8 | 21.4 / 23.1 | 4.9 / 5.6 | 22.7 / 24.5 | 13.8 / 15.1 |
V-A4 Baseline methods
We select a number of representative SOTA models as our baselines: IMP [6], Motifs [5], VCTree [42], TDE [13], SHA [17], Ov-SGG [30], VS3 [62], and PE-Net [69]. Note that since TDE and SHA are model-agnostic, for a fair comparison, we choose VCTree as their base model. Our model is dubbed ICSGG.
V-B Implementation Details
Following the setup in VS3 [62], we employ GLIP (i.e., the GLIP-T and larger GLIP-L encoders) as our object detection network, and their parameters remain frozen during training. Consistent with [62], we retain the top-scoring object detection results per image for scene graph detection. The encoder $F_{\text{enc}}$ is a multi-layer transformer with multi-head attention, and the context, relationship, subject, and object features are each represented with a small number of tokens whose dimension matches the input token dimension of the language model. The two hyperparameters $\lambda_1$ and $\lambda_2$ in Eq. (11) weight the knowledge and rehearsal losses. For our multiple prompts, each prompt consists of a fixed number of tokens and stores a fixed number of examples, akin to the buffer-size setting in [20]. We utilize GPT-2 as our default language model, with a vocabulary size of 50,257, and additionally experiment with smaller variants, i.e., GPT-2 Small and GPT-2 Medium. All experiments are conducted on Nvidia GPUs using the AdamW optimizer with weight decay. Our model implementation is based on the released code of TDE (https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch) and the open-source HuggingFace platform (https://huggingface.co).
V-C Main Results of LSGG and SGG
We first report the overall performance after the five training stages with the default buffer size. All models use the same object detection network, GLIP [24], to extract region features. For the other baseline models, we randomly sample examples from previous tasks to store in the memory buffer.
V-C1 Overall results of LSGG
Table I shows the comparison results of LSGG after the five training stages on the VG dataset. For the baseline models, following the incremental setting of [20], we dynamically store $B/N$ examples for each seen predicate, where $B$ is the buffer size and $N$ is the number of seen predicates, which increases with the training stage. Note that we further report two variants of our model, ICSGGs and ICSGGm, which use GPT-2 Small and GPT-2 Medium, respectively. It is worth noting that VS3, originally conceived as a weakly supervised scene graph generation model, is evaluated in our comparison based on its fully supervised version.
Models | Recall@50 | wmAPrel | wmAPphr | scorewtd |
---|---|---|---|---|
Motifs [5] | 58.27 | 21.59 | 23.86 | 29.83 |
VCtree[42] | 59.32 | 22.08 | 24.15 | 30.35 |
SQUAT [72] | 60.13 | 23.05 | 23.12 | 30.49 |
SHA [17] | 61.41 | 23.14 | 24.28 | 31.25 |
PE-Net [69] | 61.35 | 23.80 | 24.17 | 31.86 |
OV-SGG [30] | 61.21 | 24.12 | 25.36 | 32.03 |
VS3 [62] | 62.08 | 24.48 | 25.69 | 32.48 |
ICSGGs | 62.73 | 24.70 | 26.21 | 29.97 |
ICSGGm | 64.82 | 25.41 | 27.38 | 34.08 |
ICSGG | 65.24 | 25.97 | 28.39 | 34.79 |
From the results, we observe that many SOTA methods, such as SHA and PE-Net, commonly underperform on the new task of LSGG, especially on the mR@K metric, although they have shown competitive performance in the conventional SGG setting. Interestingly, VS3 obtains competitive results across many metrics and beats all the other prior models in the table. We posit that the primary contributing factor is the substantial language supervision included in VS3, which enhances the model’s grasp of visual-language knowledge and thereby mitigates the extent of forgetting of acquired knowledge. Our ICSGG obtains the best results on all terms; for example, on PredCls, ICSGG exceeds the second-best model, VS3, by nearly 3 points on average in terms of mR@50/100. The same holds against the other baseline models, e.g., SHA [17], despite its competitive SOTA performance on conventional SGG. Table II presents the results on the Open-Image (v6) dataset. Similar to the observations on the VG dataset, our ICSGG model attains the most favorable performance across all metrics, surpassing the second-best model, VS3, by roughly 2 points on average. This underscores the efficacy of our in-context-based knowledge-aware prompt learning approach in the context of lifelong scene graph generation. In the ablation section, we delve into a detailed evaluation of the effectiveness of each component of our model.
V-C2 Results of forgetting
An important metric in lifelong learning is the Forgetting Measure (FM) [71], which gauges how effectively a model retains previously acquired knowledge. Notably, a higher FM value indicates a greater extent of forgetting, i.e., worse performance on tasks from previous stages. We use mR@K as the base measurement to calculate FM. Table III shows the overall FM after five training stages. It is evident that conventional SGG models, such as Motifs and Ov-SGG, and even recent state-of-the-art methods like VS3, exhibit more pronounced forgetting than our model. ICSGG achieves an average FM value roughly 1.3 points lower than the second-best model, VS3, across all terms.
V-C3 Results after each training stage
We also present the PredCls results for task 1 to task 4 in terms of mean Recall after each training stage on the VG dataset, as illustrated in Figure 2. The outcomes reveal that, while several baseline models such as Motifs and VS3 exhibit similar performance initially, they manifest severe forgetting as training progresses. For instance, on task 1, VS3 and ICSGG display comparable performance at first, but after five training stages, ICSGG shows a clear improvement over VS3. These findings further confirm our model’s proficiency in mitigating catastrophic forgetting.
V-C4 Results of SGG
For a fair comparison with other SGG models, we also present results under the conventional training scheme, where all predicates arrive simultaneously. Table IV illustrates the outcomes on the VG dataset using the unbiased mR@K metric. It is important to note that all compared models utilize the same object detection results and region features from GLIP for an equitable comparison, and exclude reweighting [73] or resampling [13] strategies. Our memory buffer is kept at its default size. From the results, it is evident that our ICSGG still outperforms the other SGG models, with the exception of SQUAT on the mR@100 metric for PredCls. This underscores the effectiveness of our in-context prompt learning strategy not only in continual learning but also in mitigating bias.
Methods | PredCls FM@50/100 | SGCls FM@50/100 | SGDet FM@50/100 |
---|---|---|---|
Motifs[5] | 19.2 / 21.3 | 11.2 / 12.9 | 7.1 / 8.2 |
VCTree[42] | 18.1 / 20.6 | 10.5 / 12.0 | 6.4 / 7.4 |
SQUAT[72] | 17.3 / 19.5 | 9.8 / 11.3 | 7.4 / 8.1 |
PE-Net[69] | 16.5 / 18.4 | 9.2 / 10.4 | 6.3 / 7.0 |
Ov-SGG[30] | 16.0 / 17.7 | 8.5 / 9.4 | 6.2 / 6.9 |
VS3 [62] | 15.1 / 17.2 | 8.0 / 9.2 | 5.9 / 6.6 |
ICSGGs | 15.4 / 17.0 | 8.2 / 9.1 | 6.1 / 7.3 |
ICSGGm | 14.6 / 16.3 | 7.4 / 8.7 | 5.6 / 6.7 |
ICSGG | 13.5 / 15.3 | 6.4 / 8.0 | 5.0 / 5.8 |

Method | PredCls mR@50/100 | SGCls mR@50/100 | SGDet mR@50/100 |
---|---|---|---|
IMP [6] | 11.0 / 11.8 | 5.6 / 5.9 | 3.7 / 4.8 |
Motifs [5] | 12.7 / 15.8 | 7.6 / 8.8 | 5.9 / 6.7 |
VCTree [42] | 14.2 / 16.5 | 8.2 / 9.6 | 6.3 / 7.1 |
SHA [17] | 18.9 / 20.8 | 10.9 / 11.6 | 7.8 / 9.0 |
SQUAT [72] | 25.5 / 28.3 | 17.7 / 19.2 | 13.4 / 15.7 |
Ov-SGG [30] | 24.3 / 26.3 | 12.5 / 15.3 | 10.5 / 12.8 |
VS3 [62] | 21.4 / 24.9 | 14.7 / 17.1 | 11.5 / 13.9 |
ICSGGs | 23.4 / 25.2 | 13.6 / 15.5 | 11.5 / 12.4 |
ICSGGm | 24.5 / 26.9 | 15.2 / 17.0 | 12.3 / 13.3 |
ICSGG | 25.7 / 27.8 | 17.8 / 20.0 | 14.5 / 16.2 |
V-D Ablation Study
In this section, we test the effectiveness of the main components of our model. Specifically, we examine five aspects: (1) in-context exemplar selection and ordering; (2) the size of the memory buffer; (3) finetuning the language model; (4) the length of prompts; and (5) the task splitting strategy. In each experiment, we only modify the ablated component and keep the other settings at their best configuration. The results on VG and Open-Image (v6) are shown in Table V and Table VI, respectively.

V-D1 Exemplar selection strategies
To validate our proposed knowledge-based retrieval strategy, we compare it against two random selection approaches: (1) randomly selecting prompts from the prompt set and (2) randomly choosing one sample from the corresponding exemplar set, denoted as w/o-kap and w/o-toe, respectively. Additionally, we examine a random ordering strategy (w/o-aso) as an alternative to our ascending similarity order. The results indicate that all random strategies exhibit relative performance declines, with the most substantial decrease observed when randomly selecting prompts, e.g., about 3 points in PredCls mR@50. This affirms the effectiveness of our proposed knowledge-based retrieval strategy for exemplar selection. Moreover, this conclusion aligns with the findings in [33].
Methods | PredCls mR@50/100 | SGCls mR@50/100 | SGDet mR@50/100 |
---|---|---|---|
w/o-kap | 15.3 / 17.1 | 8.5 / 9.3 | 4.0 / 5.2 |
w/o-toe | 18.1 / 19.4 | 9.2 / 10.8 | 4.5 / 5.4 |
w/o-aso | 17.5 / 19.0 | 9.0 / 10.4 | 4.2 / 5.3 |
w/o-inc | 16.6 / 18.2 | 8.2 / 9.5 | 3.6 / 4.8 |
w-1k | 18.1 / 19.6 | 9.2 / 11.0 | 4.6 / 5.5 |
w-ft | 17.0 / 18.6 | 8.8 / 9.6 | 4.4 / 5.0 |
w-sc | 17.7 / 19.1 | 8.5 / 10.5 | 4.1 / 5.2 |
w-lc | 18.6 / 20.4 | 9.7 / 11.3 | 4.9 / 5.8 |
w-frq | 18.2 / 20.0 | 9.8 / 11.1 | 4.8 / 5.7 |
ICSGG | 18.6 / 20.3 | 9.7 / 11.4 | 4.9 / 5.6 |
Models | Recall@50 | wmAPrel | wmAPphr | scorewtd |
---|---|---|---|---|
w/o-kap | 61.71 | 22.37 | 25.29 | 31.41 |
w/o-toe | 64.18 | 25.05 | 27.52 | 33.86 |
w/o-aso | 63.29 | 24.84 | 26.17 | 33.06 |
w/o-inc | 62.22 | 23.82 | 26.08 | 32.40 |
w-1k | 64.63 | 25.20 | 28.10 | 34.25 |
w-ft | 60.17 | 22.49 | 23.47 | 30.42 |
w-sc | 63.52 | 23.95 | 26.50 | 32.88 |
w-lc | 65.18 | 26.06 | 28.61 | 34.90 |
w-frq | 65.16 | 26.10 | 28.04 | 34.71 |
ICSGG | 65.24 | 25.97 | 28.39 | 34.79 |
V-D2 The size of memory buffer
In this part, we explore the influence of different buffer sizes, as this parameter may affect the quality of the selected samples. By default, we follow the empirical buffer-size choice of [20]. We further examine two additional settings: (1) w/o-inc, no memory buffer, which uses Eq. (4) without the in-context learning strategy to predict relationships; and (2) w-1k, a smaller buffer. For the w/o-inc variant, we observe significant performance declines, with an average decrease of approximately 2 points compared to the full ICSGG. This demonstrates that a naïve prompt alone does not suffice to mitigate knowledge forgetting; however, when more inductive knowledge is incorporated, such as real samples with ground-truth labels, the more sophisticated in-context prompt can effectively resist forgetting.
On the other hand, when setting the memory buffer to a smaller size, we observe a slight decrease in performance across all metrics. This phenomenon is likely due to the larger buffer size improving the quality of retrieved samples, which, in turn, benefits the quality of the in-context prompt.
V-D3 Finetuning the language model
During training, we refrain from updating the language model by default, based on the assumption that fine-tuning may alter the knowledge preserved in the pretrained language model. To validate this assumption, we conduct an experiment that finetunes the pretrained language model, denoted as w-ft. During finetuning, we adopt a smaller learning rate for the language model, as done in [30]. The results indicate that updating the parameters of the language model does not yield any improvement; instead, a noticeable decline is observed across all metrics. This affirms our conjecture that updating the language model compromises the preserved knowledge and, consequently, harms the model’s generality.
V-D4 The length of prompts
Intuitively, the length of the input tokens can influence the output of a language model. In this experiment, in addition to the default token lengths for the context, relationship, and object representations, we test two alternative configurations: (1) w-sc, a shorter configuration, and (2) w-lc, a longer configuration. The results show that the shorter length leads to a slight performance decrease, primarily because shorter prompts yield inferior representations that capture less visual content. However, enlarging the length brings only a limited performance increment while incurring additional training time.
V-D5 The task splitting strategy
By default, we employ the random splitting strategy to obtain our subtasks. However, in practical scenarios, later-arriving predicates are often rarer. To account for this, we investigate an alternative splitting strategy based on predicate frequency, ordered from head to tail. The results under this frequency-based splitting are denoted as w-frq. We observe that the splitting strategy does not exert a significant influence on the overall results, with only a marginal decrease compared to ICSGG. This suggests that our in-context prompt learning exhibits few-shot learning capability, as the model can handle severe data starvation, particularly for the tail predicates under the frequency-based splitting.
V-E Qualitative Results
Figure 3 showcases four qualitative results comparing ICSGG and the state-of-the-art model VS3 [62] on the PredCls task after five training stages. Additionally, we present the embedded tokens of the context and relationship produced by our encoder $F_{\text{enc}}$. To visualize the transformed textual tokens, we choose the closest embedding from the vocabulary of GPT-2, following [63]. From the results, it is evident that although our textual tokens do not form a conventional sentence, certain key content is preserved; for instance, in the first figure, the context representation contains terms such as ”room” and ”on”. Remarkably, in the second example, our model predicts the entirely novel predicate ”stop on”, which is more informative than the ground truth ”on”.
VI Conclusion
In this paper, we propose a novel and practical task for scene graph generation, dubbed lifelong scene graph generation (LSGG), which aims to learn to predict relationships in an incremental fashion. Towards LSGG, two intrinsic challenges must be addressed: an extremely imbalanced data distribution and forgetting. To solve these problems, we propose to represent visual content with textual embeddings so that a pretrained language model can infer relationships through in-context-based prompts. Extensive experiments show that our proposed ICSGG clearly outperforms other SOTA SGG models. Limitations: we do not jointly train the object detection network within a one-stage framework in this paper. In future work, we will explore an end-to-end, one-stage model for LSGG to predict relationships.
References
- [1] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun, “Unified perceptual parsing for scene understanding,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 418–434.
- [2] J. Gu, S. Joty, J. Cai, H. Zhao, X. Yang, and G. Wang, “Unpaired image captioning via scene graph alignments,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 10 323–10 332.
- [3] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma et al., “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” International journal of computer vision, vol. 123, no. 1, pp. 32–73, 2017.
- [4] D. A. Hudson and C. D. Manning, “Gqa: A new dataset for real-world visual reasoning and compositional question answering,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 6700–6709.
- [5] R. Zellers, M. Yatskar, S. Thomson, and Y. Choi, “Neural motifs: Scene graph parsing with global context,” in CVPR, 2018, pp. 5831–5840.
- [6] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei, “Scene graph generation by iterative message passing,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 5410–5419.
- [7] R. Li, S. Zhang, B. Wan, and X. He, “Bipartite graph network with adaptive message passing for unbiased scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11 109–11 119.
- [8] J. Gu, H. Zhao, Z. Lin, S. Li, J. Cai, and M. Ling, “Scene graph generation with external knowledge and image reconstruction,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 1969–1978.
- [9] T. He, L. Gao, J. Song, and Y.-F. Li, “Exploiting scene graphs for human-object interaction detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15 984–15 993.
- [10] G. Yin, L. Sheng, B. Liu, N. Yu, X. Wang, J. Shao, and C. C. Loy, “Zoom-net: Mining deep feature interactions for visual relationship recognition,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 322–338.
- [11] A. Kolesnikov, A. Kuznetsova, C. Lampert, and V. Ferrari, “Detecting visual relationships using box attention,” in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019, pp. 0–0.
- [12] Z.-S. Hung, A. Mallya, and S. Lazebnik, “Contextual translation embedding for visual relationship detection and scene graph generation,” IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 11, pp. 3820–3832, 2020.
- [13] K. Tang, Y. Niu, J. Huang, J. Shi, and H. Zhang, “Unbiased scene graph generation from biased training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 3716–3725.
- [14] X. Lyu, L. Gao, Y. Guo, Z. Zhao, H. Huang, H. T. Shen, and J. Song, “Fine-grained predicates learning for scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 19 467–19 475.
- [15] A. Zareian, S. Karaman, and S.-F. Chang, “Bridging knowledge graphs to generate scene graphs,” in European conference on computer vision. Springer, 2020, pp. 606–623.
- [16] T. Chen, W. Yu, R. Chen, and L. Lin, “Knowledge-embedded routing network for scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 6163–6171.
- [17] X. Dong, T. Gan, X. Song, J. Wu, Y. Cheng, and L. Nie, “Stacked hybrid-attention and group collaborative learning for unbiased scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 19 427–19 436.
- [18] M. McCloskey and N. J. Cohen, “Catastrophic interference in connectionist networks: The sequential learning problem,” in Psychology of learning and motivation. Elsevier, 1989, vol. 24, pp. 109–165.
- [19] M. Delange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, “A continual learning survey: Defying forgetting in classification tasks,” IEEE Trans. Pattern Anal. Mach. Intell., pp. 1–1, 2021. [Online]. Available: https://doi.org/10.1109/TPAMI.2021.3057446
- [20] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “icarl: Incremental classifier and representation learning,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2017, pp. 2001–2010.
- [21] D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, and G. Wayne, “Experience replay for continual learning,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- [22] M. De Lange and T. Tuytelaars, “Continual prototype evolution: Learning online from non-stationary data streams,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8250–8259.
- [23] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019.
- [24] H. Zhang, P. Zhang, X. Hu, Y.-C. Chen, L. Li, X. Dai, L. Wang, L. Yuan, J.-N. Hwang, and J. Gao, “Glipv2: Unifying localization and vision-language understanding,” Advances in Neural Information Processing Systems, vol. 35, pp. 36 067–36 080, 2022.
- [25] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language models are few-shot learners,” Advances in neural information processing systems, vol. 33, pp. 1877–1901, 2020.
- [26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in ACL, 2019.
- [27] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
- [28] X. Chen, X. Xie, N. Zhang, J. Yan, S. Deng, C. Tan, F. Huang, L. Si, and H. Chen, “Adaprompt: Adaptive prompt-based finetuning for relation extraction,” arXiv preprint arXiv:2104.07650, 2021.
- [29] J. Li, T. Tang, W. X. Zhao, and J.-R. Wen, “Pretrained language models for text generation: A survey,” in IJCAI, 2021.
- [30] T. He, L. Gao, J. Song, and Y.-F. Li, “Towards open-vocabulary scene graph generation with prompt-based finetuning,” in European Conference on Computer Vision. Springer, 2022, pp. 56–73.
- [31] Y. Du, F. Wei, Z. Zhang, M. Shi, Y. Gao, and G. Li, “Learning to prompt for open-vocabulary object detection with vision-language model,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14 084–14 093.
- [32] Y. Chen, R. Zhong, S. Zha, G. Karypis, and H. He, “Meta-learning via language model in-context tuning,” arXiv preprint arXiv:2110.07814, 2021.
- [33] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer, “Rethinking the role of demonstrations: What makes in-context learning work?” in EMNLP, 2022.
- [34] Y. Lu, M. Bartolo, A. Moore, S. Riedel, and P. Stenetorp, “Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity,” in ACL, 2022.
- [35] K. Zhou, J. Yang, C. C. Loy, and Z. Liu, “Learning to prompt for vision-language models,” International Journal of Computer Vision, vol. 130, no. 9, pp. 2337–2348, 2022.
- [36] P. Xu, X. Chang, L. Guo, P.-Y. Huang, X. Chen, and A. G. Hauptmann, “A survey of scene graph: Generation and application,” EasyChair Preprint, no. 3385, 2020.
- [37] C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei, “Visual relationship detection with language priors,” in European conference on computer vision. Springer, 2016, pp. 852–869.
- [38] H. Zhang, Z. Kyaw, S.-F. Chang, and T.-S. Chua, “Visual translation embedding network for visual relation detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 5532–5540.
- [39] A. Newell and J. Deng, “Pixels to graphs by associative embedding,” in NIPS, 2017, pp. 2171–2180.
- [40] B. Dai, Y. Zhang, and D. Lin, “Detecting visual relationships with deep relational networks,” in CVPR, 2017, pp. 3076–3086.
- [41] S. Inayoshi, K. Otani, A. Tejero-de Pablos, and T. Harada, “Bounding-box channels for visual relationship detection,” in European Conference on Computer Vision. Springer, 2020, pp. 682–697.
- [42] K. Tang, H. Zhang, B. Wu, W. Luo, and W. Liu, “Learning to compose dynamic tree structures for visual contexts,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 6619–6628.
- [43] T. He, L. Gao, J. Song, J. Cai, and Y.-F. Li, “Learning from the scene and borrowing from the rich: Tackling the long tail in scene graph generation,” in IJCAI, 2020.
- [44] W. Li, H. Zhang, Q. Bai, G. Zhao, N. Jiang, and X. Yuan, “Ppdl: Predicate probability distribution based loss for unbiased scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 19 447–19 456.
- [45] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, “Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing,” arXiv preprint arXiv:2107.13586, 2021.
- [46] X. L. Li and P. Liang, “Prefix-tuning: Optimizing continuous prompts for generation,” in ACL, 2021.
- [47] Y. Du, F. Wei, Z. Zhang, M. Shi, Y. Gao, and G. Li, “Learning to prompt for open-vocabulary object detection with vision-language model,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 14 084–14 093.
- [48] A. K. Lampinen, I. Dasgupta, S. C. Chan, K. Matthewson, M. H. Tessler, A. Creswell, J. L. McClelland, J. X. Wang, and F. Hill, “Can language models learn from explanations in context?” arXiv preprint arXiv:2204.02329, 2022.
- [49] K.-L. Chiu and R. Alexander, “Detecting hate speech with gpt-3,” arXiv preprint arXiv:2103.12407, 2021.
- [50] S. Shin, S.-W. Lee, H. Ahn, S. Kim, H. Kim, B. Kim, K. Cho, G. Lee, W. Park, J.-W. Ha et al., “On the effect of pretraining corpora on in-context learning by a large-scale language model,” arXiv preprint arXiv:2204.13509, 2022.
- [51] J. Kossen, T. Rainforth, and Y. Gal, “In-context learning in large language models learns label relationships but is not conventional learning,” arXiv preprint arXiv:2307.12375, 2023.
- [52] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of the national academy of sciences, vol. 114, no. 13, pp. 3521–3526, 2017.
- [53] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell, “Progressive neural networks,” arXiv preprint arXiv:1606.04671, 2016.
- [54] Z. Li and D. Hoiem, “Learning without forgetting,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 12, pp. 2935–2947, 2017.
- [55] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” arXiv preprint arXiv:1410.5401, 2014.
- [56] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in International conference on machine learning. PMLR, 2017, pp. 1126–1135.
- [57] A. Nichol, J. Achiam, and J. Schulman, “On first-order meta-learning algorithms,” arXiv preprint arXiv:1803.02999, 2018.
- [58] G. I. Parisi, J. Tani, C. Weber, and S. Wermter, “Lifelong learning of human actions with deep neural network self-organization,” Neural Networks, vol. 96, pp. 137–149, 2017.
- [59] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, “Lifelong learning via progressive distillation and retrospection,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 437–452.
- [60] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969.
- [61] Y. Zhong, J. Yang, P. Zhang, C. Li, N. Codella, L. H. Li, L. Zhou, X. Dai, L. Yuan, Y. Li et al., “Regionclip: Region-based language-image pretraining,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16 793–16 803.
- [62] Y. Zhang, Y. Pan, T. Yao, R. Huang, T. Mei, and C.-W. Chen, “Learning to generate language-supervised and open-vocabulary scene graph using pre-trained visual-semantic space,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 2915–2924.
- [63] R. Mokady, A. Hertz, and A. H. Bermano, “Clipcap: Clip prefix for image captioning,” arXiv preprint arXiv:2111.09734, 2021.
- [64] Z. Wang, Z. Zhang, C.-Y. Lee, H. Zhang, R. Sun, X. Ren, G. Su, V. Perot, J. Dy, and T. Pfister, “Learning to prompt for continual learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 139–149.
- [65] G. M. van de Ven and A. S. Tolias, “Three scenarios for continual learning,” CoRR, vol. abs/1904.07734, 2019. [Online]. Available: http://arxiv.org/abs/1904.07734
- [66] X. Han, J. Yang, H. Hu, L. Zhang, J. Gao, and P. Zhang, “Image scene graph generation (sgg) benchmark,” arXiv preprint arXiv:2107.12604, 2021.
- [67] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “On tiny episodic memories in continual learning,” in ICML, 2019.
- [68] P. Buzzega, M. Boschini, A. Porrello, D. Abati, and S. Calderara, “Dark experience for general continual learning: a strong, simple baseline,” Advances in neural information processing systems, vol. 33, pp. 15 920–15 930, 2020.
- [69] C. Zheng, X. Lyu, L. Gao, B. Dai, and J. Song, “Prototype-based embedding network for scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 22 783–22 792.
- [70] A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, “Efficient lifelong learning with a-gem,” arXiv preprint arXiv:1812.00420, 2018.
- [71] A. Chaudhry, P. K. Dokania, T. Ajanthan, and P. H. Torr, “Riemannian walk for incremental learning: Understanding forgetting and intransigence,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 532–547.
- [72] D. Jung, S. Kim, W. H. Kim, and M. Cho, “Devil’s on the edges: Selective quad attention for scene graph generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 18 664–18 674.
- [73] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie, “Class-balanced loss based on effective number of samples,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 9268–9277.