c*******e Posts: 150 | 1 [Forwarded from the Statistics board]
From: cavaliere (Un Baiser S'il Vous Plaît), Board: Statistics
Title: The noise term in a regression is AR(1); how do I do MLE or another fit?
Posted: BBS 未名空间站 (Mon Sep 15 22:02:42 2014, US Eastern)
I'd like to ask the experts on this board: when the noise term in a linear regression is an AR(1) process, what established algorithms are there for fitting it by MLE or other methods?

Concretely, the model can be written as Y(t) = X(t) \cdot \beta + E(t), where X(t) and \beta are K-dimensional vectors and everything else is scalar. The sample at hand is t = 1, 2, 3, ..., T. Unlike classical linear regression, E(t) is not i.i.d. Gaussian white noise; instead, assume E(t) follows the model

E(t) = \rho * E(t-1) + \sigma * Z(t)

where \rho and \sigma are unknown parameters and Z(t) can be taken to be Gaussian white noise. The full parameter set is therefore the vector \beta plus the scalars \sigma and \rho.

A maximum-likelihood method would be best, so that I keep open the option of doing log-likelihood ratio tests later for model comparison/selection.
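For concreteness, the data-generating process described above can be simulated as follows; all parameter values here are illustrative, not from the post:

```python
import numpy as np

rng = np.random.default_rng(42)
T, K = 500, 3
beta = np.array([1.0, -2.0, 0.5])   # illustrative true coefficients
rho, sigma = 0.7, 0.3               # illustrative AR(1) noise parameters

X = rng.normal(size=(T, K))
E = np.zeros(T)
E[0] = sigma / np.sqrt(1 - rho**2) * rng.normal()  # stationary start
for t in range(1, T):
    E[t] = rho * E[t - 1] + sigma * rng.normal()
Y = X @ beta + E
```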
I did a quick Google and literature survey; perhaps my keywords were off, but I found nothing useful -_-

Thanks in advance for any pointers!

Y****a Posts: 243 | 2
Y(t) - \rho * Y(t-1) = \beta * (X(t) - \rho * X(t-1)) + e
where e is i.i.d. Normal(0, \sigma^2).

Apply an EM-style iterative algorithm to estimate \beta and \rho:
1. Initial value \rho = 0 => \beta(hat).
2. Plug in \beta(hat), transform your Y and X, and estimate \rho(hat).
3. Repeat steps 1 & 2 until convergence.

s*********i Posts: 218 | 3
Try the dynamic regression functions in R.

c*******e Posts: 150 | 4
Awesome. Upon doing a further survey of this topic, I also think this is the
best solution.
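For reference, the iteration in the reply above matches the classic Cochrane-Orcutt procedure (iterated feasible GLS). A minimal NumPy sketch, with function and variable names of my own choosing:

```python
import numpy as np

def cochrane_orcutt(X, y, tol=1e-8, max_iter=100):
    """Iteratively estimate beta, rho, sigma for
    y(t) = X(t) @ beta + E(t),  E(t) = rho*E(t-1) + sigma*Z(t)."""
    rho = 0.0
    # Step 1: OLS with rho = 0 gives the initial beta(hat).
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(max_iter):
        # Step 2a: estimate rho by regressing residuals on lagged residuals.
        e = y - X @ beta
        rho_new = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
        # Step 2b: quasi-difference Y and X, then re-estimate beta by OLS.
        y_star = y[1:] - rho_new * y[:-1]
        X_star = X[1:] - rho_new * X[:-1]
        beta_new = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
        # Step 3: stop at a fixed point.
        done = abs(rho_new - rho) < tol and np.allclose(beta_new, beta)
        rho, beta = rho_new, beta_new
        if done:
            break
    resid = (y[1:] - rho * y[:-1]) - (X[1:] - rho * X[:-1]) @ beta
    sigma = np.sqrt(resid @ resid / (len(resid) - X.shape[1]))
    return beta, rho, sigma
```

Strictly speaking this is iterated feasible GLS rather than EM in the latent-variable sense, but the fixed point is the one the reply describes.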
Out of curiosity, may I ask a further question: given the sample X(t) and Y(t), suppose beta_star maximizes the likelihood over all beta given rho == rho_star, and rho_star maximizes the likelihood over all rho given beta == beta_star; that is, the pair [beta_star, rho_star] is the fixed point we converge to at step (3). Is there any theoretical guarantee that this pair is the global maximum-likelihood estimator (MLE)? Or could there be counter-examples with a gap from the global maximum, so that we need to be careful when applying properties of the MLE to the obtained estimators?

Thanks very much!
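One practical safeguard against local optima in this particular model: for fixed \rho, both \beta and \sigma have closed-form maximizers, so the likelihood can be profiled over the single scalar \rho on a grid in (-1, 1) and the grid winner compared with the iterative fixed point. A sketch (my own, using the likelihood conditional on the first observation):

```python
import numpy as np

def profile_loglik(rho, X, y):
    """Log-likelihood conditional on the first observation, with
    beta and sigma profiled out in closed form for this fixed rho."""
    y_star = y[1:] - rho * y[:-1]          # quasi-differenced data
    X_star = X[1:] - rho * X[:-1]
    beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    resid = y_star - X_star @ beta
    n = len(y_star)
    sigma2 = resid @ resid / n             # ML variance estimate
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def global_rho(X, y, n_grid=199):
    """Brute-force rho over (-1, 1); the winner can be compared
    with the fixed point of the iteration to detect local optima."""
    grid = np.linspace(-0.99, 0.99, n_grid)
    lls = [profile_loglik(r, X, y) for r in grid]
    return grid[int(np.argmax(lls))]
```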
[Quoting Y****a's post #2]

Y****a Posts: 243 | 5
I remember there was a proof that, under certain conditions, the algorithm reaches the global MLE. But I forget what the conditions were :(

h*****7 Posts: 6781 | 6
To the best of my knowledge, EM has no theoretical guarantee of reaching the global optimum.
Keep one thing in mind: EM-type methods arguably belong to stochastic processes rather than pure probability theory, because they introduce indicator (latent) variables (see texts on numerical methods for stochastic processes, introductions to algorithms, and the like), so it is hard to give a theoretical bound or a probability of reaching the global optimum. The usual statement is only that they approach the optimum in the limit.
In the GMM (Gaussian mixture) setting, though, people have systematically measured how far EM's solution can sit from the global optimum.
Besides, MLE itself is not gospel; colloquially, where you sit decides what you think. Even a global optimum is an imperfect solution for parameter estimation, and the subsequent Wilks' theorem is only asymptotic, so OP shouldn't expect too much.
For OP's situation, plain time-series or frequency-domain analysis doesn't apply, so EM is a fairly good choice.
Damn, after saying all that, I look back and realize none of it really helps OP; YueJia (who I suspect is my old alt) put it better.

l*******m Posts: 1096 | 7
Kalman filter; that is essentially the EM algorithm. It has been studied for decades.
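To make the Kalman-filter remark concrete: taking the state to be E(t), with no measurement noise, a scalar Kalman filter evaluates the exact Gaussian likelihood of the OP's model, which a numerical optimizer can then maximize. A minimal sketch; the function name and parameter packing are my own, not from the thread:

```python
import numpy as np

def exact_loglik(params, X, y):
    """Exact log-likelihood of y = X @ beta + E,
    E(t) = rho*E(t-1) + sigma*Z(t), via a scalar Kalman filter
    with state E(t) and no measurement noise."""
    K = X.shape[1]
    beta, rho, sigma = params[:K], params[K], params[K + 1]
    a = 0.0                               # state mean
    P = sigma**2 / (1.0 - rho**2)         # stationary state variance
    ll = 0.0
    for t in range(len(y)):
        v = y[t] - X[t] @ beta - a        # innovation
        F = P                             # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * F) + v * v / F)
        a, P = a + v, 0.0                 # update: state fully observed
        a, P = rho * a, rho**2 * P + sigma**2  # predict next state
    return ll
```

Maximizing this (e.g. `scipy.optimize.minimize` on its negative, constraining rho to (-1, 1) and sigma > 0) gives the exact MLE, including the first observation that the quasi-differencing approach conditions away. In R, something like `arima(y, order = c(1, 0, 0), xreg = X)` fits essentially the same model by the same state-space route.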
[Quoting c*******e's post #4]