Multi-task learning
Multi-task learning (MTL) [1] is a subfield of machine learning in which a single model learns multiple tasks simultaneously.
In theory, this approach allows knowledge to be shared between tasks and achieves better results than training a separate model for each task. Moreover, because the model must learn a representation that works well for several tasks at once, it is less prone to overfitting and therefore generalizes better.
"Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better." [2]
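The shared-representation idea above can be sketched as a "hard parameter sharing" forward pass: one shared layer feeds several task-specific heads, and the training signal is the sum of the per-task losses. This is a minimal NumPy illustration, not any particular paper's architecture; all layer sizes, the two example tasks (regression and binary classification), and the equal loss weighting are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): batch of 8 inputs with 16 features,
# a 32-unit shared representation, and one output per task head.
x = rng.normal(size=(8, 16))
W_shared = rng.normal(size=(16, 32)) * 0.1

# Shared representation: every task reads from the same hidden features,
# so gradients from all tasks would flow back into W_shared.
h = np.tanh(x @ W_shared)

# Task-specific heads (assumed tasks): regression and binary classification.
W_reg = rng.normal(size=(32, 1)) * 0.1
W_cls = rng.normal(size=(32, 1)) * 0.1

y_reg_true = rng.normal(size=(8, 1))
y_cls_true = rng.integers(0, 2, size=(8, 1)).astype(float)

y_reg_pred = h @ W_reg                                # regression head
y_cls_pred = 1.0 / (1.0 + np.exp(-(h @ W_cls)))       # sigmoid classification head

mse = np.mean((y_reg_pred - y_reg_true) ** 2)
bce = -np.mean(y_cls_true * np.log(y_cls_pred)
               + (1.0 - y_cls_true) * np.log(1.0 - y_cls_pred))

# Joint objective: an (assumed) unweighted sum, so the shared weights
# receive training signal from both tasks at once.
total_loss = mse + bce
```

In a real system the losses are usually weighted, and how to balance them is itself an active research question; the key structural point is simply that `W_shared` is updated by every task's loss while each head is updated only by its own.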
In practice, large recommendation and search systems often measure user satisfaction based on multiple metrics, such as stay time, click-through rate, and…