Domain adaptation

Figure: Distinction between the usual machine learning setting and transfer learning, and the positioning of domain adaptation.

Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain).

A common example is spam filtering, where a model trained on emails from one user (the source domain) is adapted to handle emails from another user whose email patterns differ significantly (the target domain).

Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation.[1]

Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).[2]

Classification of domain adaptation problems

Domain adaptation setups are classified in two different ways: according to the distribution shift between the domains, and according to the data available from the target domain during training.

Distribution shifts

Common distribution shifts are classified as follows (a synthetic sketch of the first two categories follows the list):[3][4]

  • Covariate Shift occurs when the input distributions of the source and target domains differ, but the relationship between inputs and labels remains unchanged. The above-mentioned spam filtering example typically falls in this category: the distributions (patterns) of emails may differ between the domains, but an email labeled as spam in one domain should be labeled the same way in the other.
  • Prior Shift (Label Shift) occurs when the label distribution differs between the source and target datasets, while the conditional distribution of features given labels remains the same. An example is a classifier of hair color in images from Italy (source domain) and Norway (target domain). The proportions of hair colors (labels) differ, but the appearance of images within each class (e.g., blond or black hair) remains consistent across domains. A classifier for the Norwegian population can exploit this prior knowledge of class proportions to improve its estimates.
  • Concept Shift (Conditional Shift) refers to changes in the relationship between features and labels, even if the input distribution remains the same. For instance, in medical diagnosis, the same symptoms (inputs) may indicate entirely different diseases (labels) in different populations (domains).
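
The following is a minimal synthetic sketch contrasting covariate shift and prior shift; the one-dimensional Gaussian data, the means, the class priors, and the labeling rule are illustrative assumptions only, not taken from the sources above.

```python
# Sketch: synthetic covariate shift vs. prior (label) shift.
# All distributions and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Covariate shift: p(x) changes, p(y|x) stays fixed.
# The labeling rule y = 1[x > 0] is identical in both domains;
# only the input distribution moves.
x_src = rng.normal(loc=-1.0, scale=1.0, size=n)   # source inputs
x_tgt = rng.normal(loc=+1.0, scale=1.0, size=n)   # target inputs, shifted
y_src = (x_src > 0).astype(int)                   # same rule in both domains
y_tgt = (x_tgt > 0).astype(int)

# Prior (label) shift: p(y) changes, p(x|y) stays fixed.
# Class-conditional densities are shared; only class proportions differ.
def sample(prior_pos, size):
    y = rng.binomial(1, prior_pos, size=size)
    x = np.where(y == 1,
                 rng.normal(2.0, 1.0, size=size),   # p(x | y=1), shared
                 rng.normal(0.0, 1.0, size=size))   # p(x | y=0), shared
    return x, y

x_s, y_s = sample(prior_pos=0.2, size=n)  # source: 20% positive labels
x_t, y_t = sample(prior_pos=0.7, size=n)  # target: 70% positive labels

print(f"covariate shift, mean of x: {x_src.mean():+.2f} -> {x_tgt.mean():+.2f}")
print(f"prior shift,     P(y = 1):  {y_s.mean():.2f} -> {y_t.mean():.2f}")
```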

Data available during training

Domain adaptation problems typically assume that some data from the target domain is available during training. Problems can be classified according to the type of this available data:[5][6]

  • Unsupervised: Unlabeled data from the target domain is available, but no labeled data. In the above-mentioned example of spam filtering, this corresponds to the case where emails from the target domain (user) are available but are not labeled as spam. Domain adaptation methods can benefit from such unlabeled data by comparing its distribution (patterns) with the labeled source domain data.
  • Semi-supervised: Most of the data available from the target domain is unlabeled, but some labeled data is also available. In the above-mentioned case of spam filter design, this corresponds to the case where the target user has labeled some emails as spam or not.
  • Supervised: All data available from the target domain is labeled. In this case, domain adaptation reduces to refinement of the source domain predictor. In the above-mentioned hair-color classification example, this could correspond to refining a network already trained on a large dataset of labeled images from Italy using newly available labeled images from Norway.

Formalization

Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning sample S = {(x_i, y_i)}_{i=1}^{m}.

Usually in supervised learning (without domain adaptation), we suppose that the examples (x_i, y_i) ∈ S are drawn i.i.d. from a distribution D_S of support X × Y (unknown and fixed). The objective is then to learn h (from S) such that it commits the least error possible when labelling new examples coming from the distribution D_S.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D_S and D_T on X × Y. The domain adaptation task then consists of the transfer of knowledge from the source domain D_S to the target one D_T. The goal is then to learn h (from labeled or unlabeled samples coming from the two domains) such that it commits as little error as possible on the target domain D_T.

The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
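
One common way to make this question precise (a sketch in standard notation, consistent with the formalization above) is to compare the source and target risks of a hypothesis h:

```latex
% Source risk (what training on D_S can minimize) and target risk
% (what matters at deployment) of a hypothesis h : X -> Y.
\epsilon_S(h) \;=\; \Pr_{(x,y) \sim D_S}\bigl[\, h(x) \neq y \,\bigr],
\qquad
\epsilon_T(h) \;=\; \Pr_{(x,y) \sim D_T}\bigl[\, h(x) \neq y \,\bigr].
```

Domain adaptation asks how small the target risk ε_T(h) can be made when h is chosen mostly from samples drawn from D_S; theoretical analyses typically bound the gap between the two risks in terms of a divergence between D_S and D_T.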

Four algorithmic principles

Reweighting algorithms

The objective is to reweight the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).[7][8]
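
One common instantiation, sketched below on hypothetical synthetic data, estimates the weights w(x) ≈ p_T(x)/p_S(x) with a probabilistic domain classifier and passes them to a weighted learner; this is only one reweighting strategy among several (kernel mean matching, as in [7], is another).

```python
# Sketch: importance reweighting via a domain classifier.
# The data, model choices, and the clipping constant are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(-0.5, 1.0, size=(500, 2))       # labeled source inputs
y_src = (X_src.sum(axis=1) > 0).astype(int)        # source labels
X_tgt = rng.normal(+0.5, 1.0, size=(500, 2))       # unlabeled target inputs

# 1. Train a classifier to distinguish source (0) from target (1) inputs.
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
dom_clf = LogisticRegression().fit(X_dom, d_dom)

# 2. Turn its probabilities into importance weights
#    w(x) = P(target | x) / P(source | x), proportional to p_T(x) / p_S(x)
#    when the two samples have equal size.
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
w = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)

# 3. Fit the task model on source labels, weighted toward target-like points.
task_clf = LogisticRegression().fit(X_src, y_src, sample_weight=w)
```

The reweighted source sample then "looks like" the target sample in expectation, which is exactly the stated objective.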

Iterative algorithms

A method for adapting consists in iteratively "auto-labeling" the target examples.[9] The principle is simple (a minimal sketch follows the list):

  1. a model h is learned from the labeled examples;
  2. h automatically labels some target examples;
  3. a new model is learned from the new labeled examples.
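
A minimal self-training sketch of these three steps is shown below; the arrays, the number of rounds, and the confidence threshold are illustrative assumptions, not part of the cited method.

```python
# Sketch: iterative "auto-labeling" (self-training) of target examples.
# X_src, y_src: labeled source data; X_tgt: unlabeled target data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_src, y_src, X_tgt, rounds=5, threshold=0.9):
    model = LogisticRegression().fit(X_src, y_src)      # step 1: initial model
    for _ in range(rounds):
        proba = model.predict_proba(X_tgt)
        keep = proba.max(axis=1) >= threshold           # step 2: auto-label only
        if not keep.any():                              # confident target points
            break
        X_new = np.vstack([X_src, X_tgt[keep]])
        y_new = np.r_[y_src, proba[keep].argmax(axis=1)]
        model = LogisticRegression().fit(X_new, y_new)  # step 3: retrain
    return model
```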

Note that there exist other iterative approaches, but they usually need target labeled examples.[10][11]

Search for a common representation space

The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while keeping good performance on the source labeling task. This can be achieved through adversarial machine learning techniques, where feature representations of samples from the different domains are encouraged to be indistinguishable.[12][13]
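
One widely used construction of this kind is a gradient reversal layer in the style of domain-adversarial training of neural networks (reference [12]); the sketch below is a minimal PyTorch illustration, not the authors' implementation, and the layer sizes are arbitrary assumptions.

```python
# Sketch: gradient reversal for domain-adversarial feature alignment.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on
    the backward pass, so the encoder is trained to *confuse* the domain
    classifier while the domain classifier is trained to succeed."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # shared representation
label_head = nn.Linear(32, 2)     # task classifier (trained on source labels)
domain_head = nn.Linear(32, 2)    # domain classifier (source vs. target)

x = torch.randn(8, 10)                               # hypothetical mini-batch
z = encoder(x)                                       # common representation
y_logits = label_head(z)                             # ordinary task loss here
d_logits = domain_head(GradReverse.apply(z, 1.0))    # adversarial signal for z
```

Minimizing a domain-classification loss on d_logits then pushes the encoder, via the reversed gradient, toward representations in which the two domains are indistinguishable.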

Hierarchical Bayesian model

The goal is to construct a Bayesian hierarchical model p(n), which is essentially a factorization model for counts n, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.[14]

Software

Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:

  • ADAPT[15] (Python)
  • TLlib[16] (Python)
  • Domain-Adaptation-Toolbox[17] (MATLAB)

References

  1. ^ Crammer, Koby; Kearns, Michael; Wortman, Jennifer (2008). "Learning from Multiple Sources" (PDF). Journal of Machine Learning Research. 9: 1757–1774.
  2. ^ Sun, Shiliang; Shi, Honglei; Wu, Yuanbin (July 2015). "A survey of multi-source domain adaptation". Information Fusion. 24: 84–92. doi:10.1016/j.inffus.2014.12.003. S2CID 18385140.
  3. ^ Kouw, Wouter M.; Loog, Marco (2019). "An introduction to domain adaptation and transfer learning". arXiv:1812.11806. Retrieved 2024-12-22.
  4. ^ Farahani, Abolfazl; Voghoei, Sahar; Rasheed, Khaled; Arabnia, Hamid R. (2020). "A Brief Review of Domain Adaptation". arXiv:2010.03978. Retrieved 2024-12-23.
  5. ^ Stanford Online (2023-04-11). Stanford CS330 Deep Multi-Task & Meta Learning - Domain Adaptation l 2022 I Lecture 13. Retrieved 2024-12-23 – via YouTube.
  6. ^ Farahani, Abolfazl; Voghoei, Sahar; Rasheed, Khaled; Arabnia, Hamid R. (2020). "A Brief Review of Domain Adaptation". arXiv:2010.03978. Retrieved 2024-12-23.
  7. ^ Huang, Jiayuan; Smola, Alexander J.; Gretton, Arthur; Borgwardt, Karsten M.; Schölkopf, Bernhard (2006). "Correcting Sample Selection Bias by Unlabeled Data" (PDF). Conference on Neural Information Processing Systems (NIPS). pp. 601–608.
  8. ^ Shimodaira, Hidetoshi (2000). "Improving predictive inference under covariate shift by weighting the log-likelihood function". Journal of Statistical Planning and Inference. 90 (2): 227–244. doi:10.1016/S0378-3758(00)00115-4. S2CID 9238949.
  9. ^ Gallego, A.J.; Calvo-Zaragoza, J.; Fisher, R.B. (2020). "Incremental Unsupervised Domain-Adversarial Training of Neural Networks" (PDF). IEEE Transactions on Neural Networks and Learning Systems. PP (11): 4864–4878. doi:10.1109/TNNLS.2020.3025954. hdl:20.500.11820/72ba0443-8a7d-4cdd-8212-38682d4f0730. PMID 33027004. S2CID 210164756.
  10. ^ Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5.
  11. ^ Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018-12-01). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks. 14 (3–4): 21:1–21:28. doi:10.1145/3217214. S2CID 54066723.
  12. ^ Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor (2016). "Domain-Adversarial Training of Neural Networks" (PDF). Journal of Machine Learning Research. 17: 1–35.
  13. ^ Hajiramezanali, Ehsan; Siamak Zamani Dadaneh; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2017). "Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation". arXiv:1703.01461 [cs.RO].
  14. ^ Hajiramezanali, Ehsan; Siamak Zamani Dadaneh; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". arXiv:1810.09433 [stat.ML].
  15. ^ de Mathelin, Antoine; Deheeger, François; Richard, Guillaume; Mougeot, Mathilde; Vayatis, Nicolas (2020). "ADAPT: Awesome Domain Adaptation Python Toolbox".
  16. ^ Jiang, Junguang; Fu, Bo; Long, Mingsheng (2020). "Transfer-learning-library".
  17. ^ Yan, Ke (2016). "Domain adaptation toolbox".