Sample-level weighting for multi-task learning with auxiliary tasks

Emilie Grégoire, Muhammad Hafeez Chaudhary, Sam Verboven

Research output: Contribution to journal › Article › peer review

Abstract

Multi-task learning (MTL) can improve the generalization performance of neural networks by sharing representations with related tasks. Nonetheless, MTL is challenging in practice because it can also degrade performance through harmful interference between tasks. Recent work has pursued task-specific loss weighting as a solution for this interference. However, existing algorithms treat tasks as atomic, lacking the ability to explicitly separate harmful and helpful signals beyond the task level. To address this, we propose SLGrad, a highly dynamic, sample-level weighting algorithm for multi-task learning with auxiliary tasks. By exploiting a hold-out meta objective, SLGrad reshapes the task distributions through sample-level weights to eliminate harmful auxiliary signals and augment useful task signals. Substantial generalization performance gains are observed on both semi-synthetic and synthetic datasets, as well as on three common real-world supervised multi-task problems. By studying the resulting sample and task weight distributions, we show that SLGrad is able to bridge the existing gap between sample weighting in single-task learning and dynamic task weighting in multi-task learning.
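To make the core idea concrete, here is a minimal sketch (not the authors' actual SLGrad algorithm) of sample-level weighting against a hold-out meta objective, on a toy linear-regression problem in NumPy. Each auxiliary sample is weighted by how well its per-sample gradient aligns with the gradient of the hold-out (main-task) loss, and conflicting samples are clipped to zero weight. All names, the toy data, and the clipped-alignment weighting rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: linear regression. The hold-out (meta) set follows the
# true relation; the auxiliary set mixes helpful samples (same
# relation) with harmful ones (inverted targets).
d = 5
w_true = rng.normal(size=d)
w = np.zeros(d)

X_val = rng.normal(size=(64, d))  # hold-out meta set, main task
y_val = X_val @ w_true

X_aux = rng.normal(size=(32, d))
y_aux = X_aux @ w_true
y_aux[:16] = -y_aux[:16]          # first half: harmful auxiliary samples

def grad_per_sample(w, X, y):
    """Per-sample gradient of 0.5 * (x @ w - y)^2 w.r.t. w, shape (n, d)."""
    resid = X @ w - y             # (n,)
    return resid[:, None] * X     # (n, d)

lr = 0.05
for _ in range(200):
    # Meta gradient: average hold-out gradient of the main task.
    g_meta = grad_per_sample(w, X_val, y_val).mean(axis=0)
    g_i = grad_per_sample(w, X_aux, y_aux)          # (n, d)
    # Sample weight: alignment with the meta gradient, clipped at
    # zero so samples whose gradient conflicts with the hold-out
    # objective contribute nothing to the update.
    align = g_i @ g_meta
    wts = np.maximum(align, 0.0)
    if wts.sum() > 0:
        wts = wts / wts.sum()
    w -= lr * (wts[:, None] * g_i).sum(axis=0)

val_mse = np.mean((X_val @ w - y_val) ** 2)
```

Because the harmful samples' gradients point away from the hold-out objective, their clipped weights stay near zero and the model is trained almost exclusively on the helpful auxiliary samples, so the hold-out error drops well below its starting value. Per-sample gradients are cheap here because the model is linear; for deep networks this is the expensive part, which is why practical methods approximate it.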

Original language: English
Pages (from-to): 3482-3501
Number of pages: 20
Journal: Applied Intelligence
Volume: 54
Issue number: 4
Status: Published - Feb. 2024
