D-LDA: A Topic Modeling Approach without Constraint Generation for Semi-Defined Classification
Zhuang, Fuzhen; Luo, Ping; Shen, Zhiyong; He, Qing; Xiong, Yuhong; Shi, Zhongzhi
Keyword(s): Semi-defined classification, Topic modeling, Gibbs Sampling, Semi-supervised clustering
Abstract: We study what we call semi-defined classification, which deals with categorization tasks where the taxonomy of the data is not well defined in advance. It is motivated by real-world applications in which the unlabeled data may come from unknown classes in addition to the known classes of the labeled data. Given the unlabeled data, our goal is not only to identify the instances belonging to the known classes, but also to cluster the remaining data into other meaningful groups. This setting differs from traditional semi-supervised clustering: in semi-supervised clustering the supervision knowledge is far from representative of a target classification, whereas in semi-defined classification the labeled data may suffice to supervise the learning on the known classes. In this paper we propose the model of Double-latent-layered LDA (D-LDA for short) for this problem. Compared with LDA, which has only one latent variable y for word topics, D-LDA contains another latent variable z for (known and unknown) document classes. With these double latent layers of y and z and the dependency between them, D-LDA injects the class labels directly into z to supervise the learning of word topics in y. Thus, semi-supervised learning in D-LDA does not require the generation of pairwise constraints, which most previous semi-supervised clustering approaches rely on. We present experimental results on ten different data sets for semi-defined classification. Our results are either comparable to (on one data set) or significantly better than (on the other nine data sets) those of the six compared methods, including state-of-the-art semi-supervised clustering methods.
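The abstract describes a generative structure in which a document-level class variable z selects the distribution over word topics y, and known class labels are injected directly into z. The following is a minimal illustrative sketch of such a double-latent-layer generative process; all sizes, distributions, and parameter names (theta, phi, NUM_CLASSES, NUM_TOPICS) are assumptions for illustration and do not reproduce the paper's exact model or its Gibbs sampling inference.

```python
import random

random.seed(0)

# Assumed illustrative sizes (not from the paper).
NUM_CLASSES = 3  # document classes: latent variable z (known + unknown)
NUM_TOPICS = 4   # word topics: latent variable y
VOCAB = ["w%d" % i for i in range(10)]

def normalize(rows):
    """Normalize each row into a discrete probability distribution."""
    return [[p / sum(row) for p in row] for row in rows]

# Class-conditional topic mixtures: z selects a distribution over topics y.
theta = normalize([[random.random() for _ in range(NUM_TOPICS)]
                   for _ in range(NUM_CLASSES)])

# Topic-conditional word distributions: y selects a distribution over words.
phi = normalize([[random.random() for _ in range(len(VOCAB))]
                 for _ in range(NUM_TOPICS)])

def sample(dist):
    """Draw an index from a discrete distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

def generate_doc(length, label=None):
    """Generate one document; a known label fixes z (the supervision step)."""
    z = label if label is not None else random.randrange(NUM_CLASSES)
    words = []
    for _ in range(length):
        y = sample(theta[z])                 # topic depends on document class
        words.append(VOCAB[sample(phi[y])])  # word depends on topic
    return z, words

# Labeled document: the class label is injected directly into z,
# so no pairwise constraints are needed to encode the supervision.
z, doc = generate_doc(8, label=1)
```

The dependency chain z → y → word is the point: supervising z on labeled documents biases which topic mixtures explain the known classes, while unlabeled documents may still draw z from the unknown classes.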
Additional Publication Information: To be published in the Tenth IEEE International Conference on Data Mining, Sydney, Australia, December 14-17, 2010
External Posting Date: October 21, 2010 [Fulltext]. Approved for External Publication
Internal Posting Date: October 21, 2010 [Fulltext]