HydaLearn: Highly Dynamic Task Weighting for Multitask Learning with Auxiliary Tasks

Sam Verboven, Muhammad Hafeez Chaudhary, Jeroen Berrevoets, Vincent Ginis, Wouter Verbeke

Publication: Journal article · Peer-reviewed

Abstract

Multitask learning (MTL) can improve performance on one task by sharing representations with one or more related auxiliary tasks. Usually, MTL networks are trained on a composite loss function formed by a fixed weighted combination of separate task losses. In practice, however, static loss weights lead to poor results for two reasons. First, the relevance of the auxiliary tasks gradually drifts throughout the learning process. Second, for minibatch-based optimization, the optimal task weights vary significantly from one update to the next depending on the minibatch sample composition. Here, we introduce HydaLearn, an intelligent weighting algorithm that connects the main-task gain to the individual task gradients, to inform dynamic loss weighting at the minibatch level, addressing the two above shortcomings. We demonstrate significant performance increases on synthetic data and two real-world data sets.
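To make the idea of minibatch-level dynamic loss weighting concrete, here is a minimal sketch. It is not the HydaLearn algorithm itself: as an illustrative stand-in for the paper's main-task gain measure, it weights the auxiliary-task gradient by its cosine similarity with the main-task gradient, recomputed on every minibatch, so an auxiliary gradient that opposes the main task is suppressed. The function names are hypothetical.

```python
import numpy as np

def dynamic_task_weight(grad_main, grad_aux, eps=1e-12):
    """Illustrative per-minibatch auxiliary-task weight.

    Uses cosine similarity between the auxiliary-task gradient and the
    main-task gradient as a crude proxy for main-task gain: an auxiliary
    gradient aligned with the main task gets a weight near 1, an opposing
    one gets weight 0. This is a stand-in heuristic, not the gain
    measure defined in the paper.
    """
    cos = grad_main @ grad_aux / (
        np.linalg.norm(grad_main) * np.linalg.norm(grad_aux) + eps
    )
    return max(float(cos), 0.0)

def composite_gradient(grad_main, grad_aux):
    """Combine task gradients with a weight recomputed every minibatch,
    instead of a fixed weight chosen once before training."""
    w_aux = dynamic_task_weight(grad_main, grad_aux)
    return grad_main + w_aux * grad_aux, w_aux
```

Because the weight is recomputed per update, it can track both the gradual drift in auxiliary-task relevance over training and the sample-to-sample variation across minibatches that the abstract identifies as the two failure modes of static weights.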

Original language: English
Pages (from - to): 5808-5822
Number of pages: 15
Journal: Applied Intelligence
Volume: 53
Issue number: 5
DOIs
Publication status: Published - March 2023
