Separating common from salient patterns with Contrastive Representation Learning
Abstract
Contrastive Analysis is a sub-field of Representation Learning that aims at separating the common factors of variation between two datasets, a background (e.g., healthy subjects) and a target (e.g., diseased subjects), from the salient factors of variation, present only in the target dataset. Despite their relevance, current models based on Variational Auto-Encoders have shown poor performance in learning semantically expressive representations. On the other hand, Contrastive Representation Learning has shown tremendous performance leaps in various applications (classification, clustering, etc.). In this work, we propose to leverage the ability of Contrastive Learning to learn semantically expressive representations well adapted for Contrastive Analysis. We reformulate it under the lens of the InfoMax Principle and identify two Mutual Information terms to maximize and one to minimize. We decompose the first two terms into an Alignment and a Uniformity term, as is commonly done in Contrastive Learning.
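For reference, the Alignment and Uniformity terms invoked here are commonly written as in Wang & Isola (2020); the notation below is that standard formulation, not necessarily the exact instantiation used in SepCLR. Here $f$ is the (normalized) encoder, $p_{\text{pos}}$ the positive-pair distribution, $p_{\text{data}}$ the data distribution, and $t > 0$ a temperature hyperparameter:

$$\mathcal{L}_{\text{align}} = \mathbb{E}_{(x,\,x^{+}) \sim p_{\text{pos}}}\big[\lVert f(x) - f(x^{+}) \rVert_2^2\big], \qquad \mathcal{L}_{\text{uniform}} = \log \mathbb{E}_{x,\,y \sim p_{\text{data}}}\big[e^{-t\,\lVert f(x) - f(y) \rVert_2^2}\big]$$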
Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between the common and salient distributions. We validate our method, called SepCLR, on three visual datasets and three medical datasets, specifically conceived to assess the pattern separation capability in Contrastive Analysis.
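To make the three-term objective concrete, below is a minimal PyTorch sketch, not the official SepCLR implementation. It assumes two augmented views of a batch are encoded into a common and a salient space; the two maximized Mutual Information terms are approximated by alignment plus uniformity, and the minimized term is replaced by a simple cross-correlation penalty. All names (`align_loss`, `leakage_penalty`, `lam`) and the cross-correlation proxy are illustrative assumptions; the paper derives its own Mutual Information minimization strategy.

```python
# Hedged sketch of a Contrastive-Analysis-style objective (illustrative only).
import torch
import torch.nn.functional as F

def align_loss(z1, z2):
    # Alignment: positive pairs should map to nearby points (Wang & Isola, 2020).
    return (z1 - z2).pow(2).sum(dim=1).mean()

def uniform_loss(z, t=2.0):
    # Uniformity: embeddings should spread out over the unit hypersphere.
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()

def leakage_penalty(c, s):
    # Illustrative MI-minimization proxy: penalize cross-correlation between
    # common (c) and salient (s) embeddings so that salient factors do not
    # leak into the common space. A stand-in for the paper's MI estimator.
    c = (c - c.mean(0)) / (c.std(0) + 1e-6)
    s = (s - s.mean(0)) / (s.std(0) + 1e-6)
    return (c.T @ s / c.shape[0]).pow(2).mean()

def sep_contrastive_loss(common_1, common_2, salient_1, salient_2, lam=1.0):
    # Project both views onto the unit hypersphere, as usual in Contrastive Learning.
    c1, c2 = F.normalize(common_1, dim=1), F.normalize(common_2, dim=1)
    s1, s2 = F.normalize(salient_1, dim=1), F.normalize(salient_2, dim=1)
    # Two MI terms to maximize, each decomposed into alignment + uniformity...
    mi_max = (align_loss(c1, c2) + 0.5 * (uniform_loss(c1) + uniform_loss(c2))
              + align_loss(s1, s2) + 0.5 * (uniform_loss(s1) + uniform_loss(s2)))
    # ...and one MI term to minimize (common <-> salient information leakage).
    return mi_max + lam * leakage_penalty(torch.cat([c1, c2]), torch.cat([s1, s2]))
```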