On using mixtures and modes of mixtures in data analysis

by Weixin Yao

Abstract (Summary)
My thesis includes two topics: modal local polynomial regression and label switching for Bayesian mixtures.

Modal Local Polynomial Regression

By combining the ideas of local polynomial regression (LPR) and modal regression, we create a new adaptive robust nonparametric regression method, "modal local polynomial regression" (MLPR). We prove that MLPR asymptotically produces a smaller mean squared error (MSE) than LPR when there are outliers or when the error distribution has a heavier tail than the normal distribution. Furthermore, unlike other general robust methods, MLPR achieves this robustness without sacrificing efficiency: when there are no outliers, or when the error distribution has a light tail (e.g., the Gaussian distribution), MLPR produces results at least as good as those of the local polynomial method. At the cost of one additional tuning parameter, MLPR therefore performs better than traditional LPR.

Specifically, suppose the bivariate data $\{(x_i, y_i), i = 1, \ldots, n\}$ are independently and identically sampled from the model $Y = m(X) + \epsilon$, where $E(\epsilon \mid X) = 0$. Our goal is to estimate the smooth function $m(x)$. For any given $x_0$, Taylor's expansion in a neighborhood of $x_0$ gives
$$m(x) \approx \sum_{v=0}^{p} \frac{m^{(v)}(x_0)}{v!}(x - x_0)^v.$$
LPR fits this polynomial locally at $x_0$ by minimizing the weighted least squares criterion
$$\sum_{i=1}^{n} K_h(x_i - x_0)\Big\{y_i - \sum_{j=0}^{p} \beta_j (x_i - x_0)^j\Big\}^2,$$
where $K_h(t) = h^{-1}K(t/h)$ and $K(t)$ is a symmetric probability density function. Denote by $\hat\beta = (\hat\beta_0, \ldots, \hat\beta_p)$ the minimizer of this criterion. We estimate $m^{(v)}(x_0)$ by $v!\,\hat\beta_v$, $v = 0, \ldots, p$; in particular, for $v = 0$ we estimate $m(x_0)$ by $\hat\beta_0$.

In comparison, MLPR estimates $\beta$ by maximizing the objective function
$$\sum_{i=1}^{n} K_{h_1}(x_i - x_0)\,\phi_{h_2}\Big(y_i - \sum_{j=0}^{p} \beta_j (x_i - x_0)^j\Big),$$
where $\phi_{h_2}(t) = h_2^{-1}\phi(t/h_2)$, $\phi(t)$ is the standard normal density function, and $h_2$ is a constant depending on the error distribution. If $p = 0$, which gives the modal local constant model, $\hat\beta_0$ is the conditional mode of a kernel density estimator, conditional on $x = x_0$.

The EM algorithm for finding the modes of mixtures (Li, Ray, and Lindsay, 2007) can be extended to find $\hat\beta$. In the E step we calculate, for $j = 1, \ldots, n$,
$$\pi(j \mid \beta^{(k)}) \propto K_{h_1}(x_j - x_0)\,\phi_{h_2}\Big(y_j - \sum_{l=0}^{p} \beta_l^{(k)}(x_j - x_0)^l\Big).$$
In the M step, we find $\beta^{(k+1)}$ by maximizing
$$\sum_{j=1}^{n} \pi(j \mid \beta^{(k)}) \log\Big[\phi_{h_2}\Big(y_j - \sum_{l=0}^{p} \beta_l (x_j - x_0)^l\Big)\Big]$$
with respect to $\beta = (\beta_0, \ldots, \beta_p)$. The added robustness can be seen in the E step: the factor $\phi_{h_2}(y_j - \sum_{l=0}^{p}\beta_l(x_j - x_0)^l)$ acts as a weight, so outliers receive less weight under MLPR. Notice that as $h_2 \to \infty$, $\pi(j \mid \beta^{(k)}) \propto K_{h_1}(x_j - x_0)$ and MLPR becomes exactly the same as LPR; therefore, robustness is achieved without sacrificing efficiency. We also show how to select the adaptive optimal bandwidth $h_2$ based on the asymptotic MSE.

MLPR can also be extended to simple linear regression to create a robust linear regression, "modal linear regression" (MLR). Instead of minimizing the least squares criterion $\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2$, the objective function $\sum_{i=1}^{n} \phi_h(y_i - \beta_0 - \beta_1 x_i)$ is maximized to estimate the regression parameters $\beta = (\beta_0, \beta_1)$. For the same reasons as above, robustness is achieved without sacrificing efficiency.
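As a concrete illustration of the EM iteration above, here is a minimal Python sketch. The function name `mlpr_fit`, the Gaussian choice for $K$, the toy data, and the fixed bandwidths are our own illustrative assumptions, not specifications from the thesis; in practice $h_1$ and $h_2$ would be chosen by data-driven rules such as the asymptotic-MSE selection discussed above.

```python
import numpy as np

def mlpr_fit(x, y, x0, p=1, h1=0.5, h2=0.5, max_iter=200, tol=1e-8):
    """EM-type iteration for modal local polynomial regression at x0.

    Maximizes sum_i K_h1(x_i - x0) * phi_h2(y_i - sum_j beta_j (x_i - x0)^j);
    entry 0 of the returned vector estimates m(x0).
    """
    d = x - x0
    X = np.vander(d, N=p + 1, increasing=True)   # columns: 1, d, ..., d^p
    k = np.exp(-0.5 * (d / h1) ** 2)             # Gaussian kernel K_h1 (constants cancel)
    # Initialize at the ordinary LPR (weighted least squares) fit.
    sk = np.sqrt(k)
    beta = np.linalg.lstsq(X * sk[:, None], y * sk, rcond=None)[0]
    for _ in range(max_iter):
        resid = y - X @ beta
        # E step: pi(j) proportional to K_h1(x_j - x0) * phi_h2(residual_j).
        w = k * np.exp(-0.5 * (resid / h2) ** 2)
        w /= w.sum()
        # M step: log phi_h2 is quadratic in beta, so maximizing the weighted
        # log-density reduces to a weighted least squares problem.
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Example: heavy-tailed noise, where MLPR's downweighting of outliers helps.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x) + 0.2 * rng.standard_t(df=2, size=200)
print(mlpr_fit(x, y, x0=0.0, p=1)[0])   # estimate of m(0) = sin(0) = 0
```

Setting the kernel weights `k` identically to 1 (with p = 1) makes the same iteration maximize the global MLR objective $\sum_i \phi_h(y_i - \beta_0 - \beta_1 x_i)$ described above.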
Label Switching for Bayesian Mixtures

One of the most fundamental problems in Bayesian mixture model estimation is label switching. We propose two main methods to solve it. The first solution is to use the modes of the posterior distribution to do the labelling. To find the posterior modes, we create an algorithm for Bayesian mixtures that combines the ideas of ECM (Meng and Rubin, 1993) and the Gibbs sampler. This labelling method creates a natural and intuitive partition of the parameter space into labelled regions and has a nice interpretation based on the highest posterior density (HPD) region. The second solution is to do the labelling by minimizing the normal likelihood of the labelled Gibbs samples. Unlike the order constraint method, this new method extends easily to the high-dimensional case and is scale invariant with respect to the component parameters. In addition, this labelling method can also be used to solve label switching in the frequentist case.
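To make the relabelling step concrete, here is a small Python sketch of a generic relabelling loop over Gibbs output. The criterion used here, assigning each draw the component permutation closest in Euclidean distance to a reference point such as a posterior mode, is a simplified stand-in for the thesis's likelihood-based criterion; the function name and array shapes are our own assumptions.

```python
import numpy as np
from itertools import permutations

def relabel(samples, ref):
    """Undo label switching in Gibbs output by permuting component labels.

    samples: shape (T, K, d) -- T Gibbs draws of K components with d
             parameters each (e.g., columns for component mean and weight).
    ref:     shape (K, d) reference point, e.g., an estimated posterior mode.

    Each draw gets the label permutation minimizing its squared Euclidean
    distance to ref (a stand-in for the normal-likelihood criterion).
    """
    out = np.empty_like(samples)
    K = samples.shape[1]
    perms = [list(p) for p in permutations(range(K))]   # all K! candidates
    for t, draw in enumerate(samples):
        best = min(perms, key=lambda p: np.sum((draw[p] - ref) ** 2))
        out[t] = draw[best]
    return out
```

Because it enumerates all $K!$ permutations, this brute-force search is practical only for a small number of components; the point is the shape of the procedure: every draw is mapped into a single labelled region, matching the partition-of-the-parameter-space view described above.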
Bibliographical Information:

School: Pennsylvania State University

School Location: USA - Pennsylvania

Source Type: Master's Thesis
