Helpful or Harmful: Inter-Task Association in Continual Learning

Hyundong Jin, Eunwoo Kim
Chung-Ang University, South Korea
ECCV 2022

An overview of our proposed H2 framework, which selectively identifies helpful parameters from previous tasks through task-level model search and parameter-level sensitivity analysis, enabling optimal knowledge reuse while mitigating negative interference in continual learning.

Abstract

When optimizing sequentially incoming tasks, deep neural networks generally suffer from catastrophic forgetting due to their lack of ability to maintain knowledge from old tasks. This may lead to a significant performance drop on previously learned tasks. To alleviate this problem, studies on continual learning have been conducted as a countermeasure. Nevertheless, existing methods often suffer from an increase in computational cost due to the expansion of the network size, or from changes to knowledge that is favorably linked to previous tasks. In this work, we propose a novel approach that differentiates helpful and harmful information for old tasks using a model search, so that a current task can be learned effectively. Given a new task, the proposed method discovers an underlying association knowledge from old tasks, which can provide additional support in acquiring the new task knowledge. In addition, by introducing a sensitivity measure to the loss of the current task from the associated tasks, we find cooperative relations between tasks while alleviating harmful interference. We apply the proposed approach to both task- and class-incremental scenarios in continual learning, using a wide range of datasets from small to large scales. Experimental results show that the proposed method outperforms a large variety of continual learning approaches while effectively alleviating catastrophic forgetting.

Motivation
In continual learning, deep neural networks frequently encounter catastrophic forgetting when learning tasks sequentially, due to their limited capacity to preserve previously acquired knowledge. While structural allocation strategies mitigate this by assigning disjoint parameter subsets per task, they often reuse prior parameters without evaluating their relevance, which may introduce negative interference and impair learning of the current task.

Proposed Method
We reformulate the forgetting problem as a task interference problem and propose H2 (Helpful or Harmful), a continual learning approach that selectively reuses only helpful knowledge from previous tasks. Our method first conducts a model search to identify cooperative old tasks at a coarse level, and then applies a sensitivity measure using Fisher Information to isolate crucial parameters at a fine level. This dual-stage strategy enables the construction of an optimal subnetwork for the current task, facilitating efficient knowledge reuse while suppressing interference.
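To make the parameter-level step more concrete, the following is a minimal sketch, assuming PyTorch, of how a diagonal Fisher Information score on the current task could be computed and used to keep only the most sensitive (helpful) parameters from each previously allocated task mask. This is not the authors' released code; all names here (diagonal_fisher, select_helpful_masks, old_task_masks, keep_ratio) are hypothetical, and the coarser task-level model search that precedes this filtering is omitted.

import torch
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    # Approximate the diagonal Fisher Information on the current task:
    # the average squared gradient of the loss w.r.t. each parameter.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    n_samples = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None and n in fisher:
                fisher[n] += p.grad.detach() ** 2 * x.size(0)
        n_samples += x.size(0)
    return {n: f / max(n_samples, 1) for n, f in fisher.items()}

def select_helpful_masks(fisher, old_task_masks, keep_ratio=0.5):
    # For each previous task's binary parameter mask, keep only the fraction
    # of its parameters with the highest sensitivity to the current task.
    selected = {}
    for task_id, masks in old_task_masks.items():
        selected[task_id] = {}
        for n, mask in masks.items():
            scores = fisher[n][mask.bool()]
            if scores.numel() == 0:
                selected[task_id][n] = mask.bool()
                continue
            k = max(1, int(keep_ratio * scores.numel()))
            thresh = scores.topk(k).values.min()
            selected[task_id][n] = mask.bool() & (fisher[n] >= thresh)
    return selected

In this sketch, the retained masks would be combined with newly allocated parameters to form the subnetwork for the current task, while parameters deemed insensitive (potentially harmful) are simply not reused.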

Results

Ablation study results on Split CIFAR-10 under both task- and class-incremental scenarios. The effectiveness of the model search, sensitivity-based parameter selection, and contrastive loss is validated, with additional comparisons on parameter count, FLOPs, and memory usage for binary masks.

BibTeX


@inproceedings{jin2022helpful,
  title={Helpful or harmful: Inter-task association in continual learning},
  author={Jin, Hyundong and Kim, Eunwoo},
  booktitle={European Conference on Computer Vision},
  pages={519--535},
  year={2022},
  organization={Springer}
}