Sample-level weighting for multi-task learning with auxiliary tasks

Emilie Grégoire, Muhammad Hafeez Chaudhary, Sam Verboven

Research output: Contribution to journal › Article › peer-review

Abstract

Multi-task learning (MTL) can improve the generalization performance of neural networks by sharing representations with related tasks. Nonetheless, MTL is challenging in practice because it can also degrade performance through harmful interference between tasks. Recent work has pursued task-specific loss weighting as a solution to this interference. However, existing algorithms treat tasks as atomic and thus cannot explicitly separate harmful from helpful signals at a finer granularity than the task level. To address this, we propose SLGrad, a highly dynamic, sample-level weighting algorithm for multi-task learning with auxiliary tasks. By exploiting a hold-out meta objective, SLGrad reshapes the task distributions through sample-level weights to eliminate harmful auxiliary signals and amplify useful task signals. Substantial generalization performance gains are observed on semi-synthetic and synthetic datasets, as well as on three common real-world supervised multi-task problems. By studying the resulting sample and task weight distributions, we show that SLGrad bridges the existing gap between sample weighting in single-task learning and dynamic task weighting in multi-task learning.
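For readers who want a concrete picture of the mechanism the abstract describes, below is a minimal PyTorch sketch of sample-level weighting against a hold-out meta objective. It is not the paper's reference implementation: the toy model, the function names, and the specific weighting rule (ReLU of the dot product between each sample's gradient and the hold-out main-task gradient, normalized over the batch) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy setup: a shared trunk with a main head and one auxiliary head.
# All names and the weighting rule below are illustrative assumptions,
# not the authors' reference implementation.
torch.manual_seed(0)
trunk = nn.Linear(10, 16)
main_head = nn.Linear(16, 1)
aux_head = nn.Linear(16, 1)
shared_params = list(trunk.parameters())
loss_fn = nn.MSELoss()

def flat_grad(loss):
    """Gradient of `loss` w.r.t. the shared trunk, flattened to one vector."""
    grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def sample_weights(x, y, head, x_val, y_val):
    """Per-sample weights: ReLU of the alignment between each sample's
    gradient and the hold-out (meta) gradient of the main task."""
    meta_grad = flat_grad(loss_fn(main_head(trunk(x_val)), y_val))
    weights = []
    for i in range(x.shape[0]):
        g = flat_grad(loss_fn(head(trunk(x[i:i+1])), y[i:i+1]))
        # Samples whose gradients oppose the meta gradient get zero weight.
        weights.append(torch.relu(torch.dot(g, meta_grad)))
    w = torch.stack(weights)
    return w / (w.sum() + 1e-8)  # normalize into a distribution over samples

# One illustrative training step with a main and an auxiliary task.
x, y_main, y_aux = torch.randn(8, 10), torch.randn(8, 1), torch.randn(8, 1)
x_val, y_val = torch.randn(4, 10), torch.randn(4, 1)

opt = torch.optim.SGD(
    list(trunk.parameters()) + list(main_head.parameters())
    + list(aux_head.parameters()), lr=1e-2)
opt.zero_grad()
w_main = sample_weights(x, y_main, main_head, x_val, y_val)
w_aux = sample_weights(x, y_aux, aux_head, x_val, y_val)
per_sample_main = ((main_head(trunk(x)) - y_main) ** 2).mean(dim=1)
per_sample_aux = ((aux_head(trunk(x)) - y_aux) ** 2).mean(dim=1)
loss = (w_main * per_sample_main).sum() + (w_aux * per_sample_aux).sum()
loss.backward()
opt.step()
```

The design intuition mirrors the abstract: samples whose gradients point away from the hold-out main-task gradient receive zero weight, so harmful auxiliary signals are eliminated, while well-aligned samples are up-weighted.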

Original language: English
Pages (from-to): 3482-3501
Number of pages: 20
Journal: Applied Intelligence
Volume: 54
Issue number: 4
DOIs
Publication status: Published - Feb 2024

Keywords

  • Deep learning
  • Dynamic task weighting
  • Multi-task learning
  • Sample-level weighting
