Transfer Learning From Multiple Source Domains via Consensus Regularization
Luo, Ping; Zhuang, Fuzhen; Xiong, Hui; Xiong, Yuhong; He, Qing
Keyword(s): Classification, Transfer Learning, Consensus Regularization
Abstract: Recent years have witnessed an increased interest in transfer learning. Despite the vast amount of research performed in this field, challenges remain in applying the knowledge learned from multiple source domains to a target domain. First, data from multiple source domains can be semantically related but have different distributions. It is not clear how to exploit the distribution differences among multiple source domains to boost the learning performance in a target domain. Second, many real-world applications demand that this transfer learning be performed in a distributed manner. To meet these challenges, we propose a consensus regularization framework for transfer learning from multiple source domains to a target domain. In this framework, a local classifier is trained by considering both the local data available in a source domain and the prediction consensus with the classifiers from other source domains. In addition, the training algorithm can be implemented in a distributed manner, in which all the source domains are treated as slave nodes and the target domain is used as the master node. To combine the training results from multiple source domains, it only needs to share some statistical data rather than the full contents of their labeled data. This can modestly relieve privacy concerns and avoids the need to upload all data to a central location. Finally, our experimental results show the effectiveness of our consensus regularization approach.
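The core idea in the abstract, training each source-domain classifier on its own labeled data while penalizing disagreement with the other classifiers' predictions on unlabeled target data, can be illustrated with a small sketch. This is not the paper's exact algorithm: the data, the choice of logistic regression, and the use of prediction variance as the consensus penalty are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Two toy source domains: the same underlying concept, sampled from
# shifted distributions (purely synthetic, illustrative data).
def make_domain(shift, n=200):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
    return X, y

sources = [make_domain(0.0), make_domain(1.0)]
X_t = rng.normal(0.5, 1.0, size=(100, 2))  # unlabeled target sample

def train(lam, steps=200, lr=0.1):
    """Gradient descent on: local logistic loss per source classifier
    plus lam * mean prediction variance across classifiers on target data
    (a stand-in consensus regularizer, not the paper's exact form)."""
    m = len(sources)
    ws = [np.zeros(2) for _ in sources]
    for _ in range(steps):
        preds = np.stack([sigmoid(X_t @ w) for w in ws])  # (m, n_target)
        pbar = preds.mean(axis=0)
        for k, (X, y) in enumerate(sources):
            p = sigmoid(X @ ws[k])
            g_local = X.T @ (p - y) / len(y)  # logistic-loss gradient
            pk = preds[k]
            # d/dw_k of lam * E_t[(1/m) sum_j (p_j - pbar)^2]
            # simplifies to lam * E_t[(2/m)(p_k - pbar) p_k (1 - p_k) x_t]
            g_cons = lam * (X_t.T @ ((2.0 / m) * (pk - pbar)
                                     * pk * (1 - pk))) / len(pk)
            ws[k] -= lr * (g_local + g_cons)
    return ws

# Mean absolute disagreement on target predictions, with and without
# the consensus term: regularization should pull the classifiers together.
ws_reg = train(lam=1.0)
ws_plain = train(lam=0.0)
dis_reg = np.abs(sigmoid(X_t @ ws_reg[0]) - sigmoid(X_t @ ws_reg[1])).mean()
dis_plain = np.abs(sigmoid(X_t @ ws_plain[0]) - sigmoid(X_t @ ws_plain[1])).mean()
print(f"disagreement with consensus: {dis_reg:.4f}, without: {dis_plain:.4f}")
```

Note that, as the abstract describes, each classifier's local gradient uses only its own labeled data; the only cross-domain quantity is the averaged target prediction `pbar`, which is the kind of summary statistic a master node could aggregate without seeing the slaves' labeled data.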
Additional Publication Information: To be published in the 17th ACM Conference on Information and Knowledge Management (CIKM'08), October 26-30, 2008, Napa Valley, California, USA
External Posting Date: September 21, 2008 [Fulltext]. Approved for External Publication
Internal Posting Date: September 21, 2008 [Fulltext]