c*****l (posts: 1493) | 1 I am simulating a very basic model: Y_i | b_i ~ N(X_i*\beta + Z_i*b_i, \sigma^2 * I_{n_i});
b_i ~ N(0, \psi) # bivariate normal
where b_i is the latent variable, Z_i and X_i are n_i*2 design matrices, \sigma^2 is the error variance,
and Y_i are longitudinal data, i.e. there are n_i measurements for subject i.
The parameters are \beta, \sigma, \psi; call them \theta.
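For reference, the model as stated can be simulated in a few lines. This is a Python/NumPy sketch (the post's snippets look like R); all sizes and true parameter values (N, m, \beta, \sigma^2, \psi) are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and true parameters, chosen only for illustration
N, m = 100, 5                                  # N subjects, n_i = m measurements each
beta = np.array([1.0, 0.5])
sigma2 = 0.25
Psi = np.array([[1.0, 0.3], [0.3, 0.5]])       # 2x2 random-effects covariance

X = np.column_stack([np.ones(m), np.arange(1, m + 1)])  # cbind(rep(1, m), 1:m)
Z = X.copy()                                   # same naive design as in the post

L = np.linalg.cholesky(Psi)
Y = np.empty((N, m))
for i in range(N):
    b_i = L @ rng.standard_normal(2)           # b_i ~ N(0, Psi)
    eps = np.sqrt(sigma2) * rng.standard_normal(m)
    Y[i] = X @ beta + Z @ b_i + eps            # Y_i | b_i ~ N(X beta + Z b_i, sigma2 I)
```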
I wrote an EM algorithm. The M step maximizes the expected complete-data log-likelihood, E[log f(Y, b; \theta) | Y], in the regular way;
the E step evaluates that conditional expectation using Gauss-Hermite quadrature.
All data are simulated; X and Z are naive, e.g. cbind(rep(1, m), 1:m).
After 200 iterations the estimated \beta converges to the true value, while \sigma and \psi do not:
the estimated \sigma keeps increasing...
I am confused, since \hat{\beta} depends on the \sigma and \psi from the previous iteration;
if something were wrong there, all the estimates should be incorrect...
Another question: I computed log f(Y; \theta) to check whether it increases after each update of \theta.
It seems to decrease...
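That monitored quantity has a direct expression here, since marginally Y_i ~ N(X \beta, Z \psi Z' + \sigma^2 I); evaluating it exactly removes the quadrature as a possible source of the apparent decrease. A sketch (function name is illustrative, shared X and Z assumed as above):

```python
import numpy as np

def marginal_loglik(Y, X, Z, beta, sigma2, Psi):
    """log f(Y; theta): marginally Y_i ~ N(X beta, Z Psi Z' + sigma2 * I)."""
    N, m = Y.shape
    cov = Z @ Psi @ Z.T + sigma2 * np.eye(m)
    _, logdet = np.linalg.slogdet(cov)
    R = Y - X @ beta                              # residuals, (N, m)
    quad = np.einsum('ij,ij->i', R @ np.linalg.inv(cov), R).sum()
    return -0.5 * (N * (m * np.log(2 * np.pi) + logdet) + quad)
```

A correct EM implementation must make this quantity non-decreasing at every iteration, so logging it each step pinpoints exactly where the update goes wrong.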
I thought X and Z being linearly dependent might cause some issue, but I also tried changing Z.
Can anyone give some help? I have been stuck on it for days...
I can send the code to you. Seems | l******n (posts: 9344) | 2 Check your formulas and code with a simple example.
These two things are very prone to errors.
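In that spirit, one concrete simple example is to check the E step by brute force on a single tiny subject: compare the closed-form posterior mean of b | Y against a Monte Carlo estimate obtained by importance-weighting prior draws with the likelihood. A Python/NumPy sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# One tiny subject (all values arbitrary, for illustration only)
m = 4
X = np.column_stack([np.ones(m), np.arange(1, m + 1)])
Z = X
beta, sigma2 = np.array([1.0, 0.5]), 0.25
Psi = np.array([[1.0, 0.3], [0.3, 0.5]])
L = np.linalg.cholesky(Psi)
b = L @ rng.standard_normal(2)
y = X @ beta + Z @ b + np.sqrt(sigma2) * rng.standard_normal(m)

# Closed form: b | y ~ N(V Z'(y - X beta)/sigma2, V), V = (Z'Z/sigma2 + Psi^-1)^-1
V = np.linalg.inv(Z.T @ Z / sigma2 + np.linalg.inv(Psi))
mu = V @ Z.T @ (y - X @ beta) / sigma2

# Monte Carlo: draw from the prior, weight by the Gaussian likelihood
B = L @ rng.standard_normal((2, 200_000))             # prior draws, (2, S)
resid = y[:, None] - (X @ beta)[:, None] - Z @ B      # (m, S)
logw = -0.5 * (resid ** 2).sum(axis=0) / sigma2
w = np.exp(logw - logw.max())
w /= w.sum()
mu_mc = B @ w                                          # weighted posterior mean
```

If the Gauss-Hermite E step disagrees with both of these on a single subject, the bug is in the quadrature; if they all agree, the bug is in the M-step formulas.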