
Forum: help

RE: Equality constraint optimization
By: Ott Toomet on 2015-03-14 04:54
[forum:42024]
Hey,
I checked it: with equality constraints, both the 'iterlim' parameter (for Newton-Raphson and friends) and 'SUMTMaxIter' (for the outer sumt iterations) seem to be supported.
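
For concreteness, something along these lines should let you set both limits at once ('loglik' and 'start' are just placeholders; the constraint matrix encodes theta[1] = theta[2]):

library(maxLik)
## equality constraints are given as eqA %*% theta + eqB == 0,
## so theta[1] - theta[2] == 0 becomes:
A <- matrix(c(1, -1), nrow = 1)
res <- maxLik(loglik, start = start, method = "BHHH",
              constraints = list(eqA = A, eqB = 0),
              iterlim = 1,      # inner Newton-Raphson/BHHH iterations
              SUMTMaxIter = 1)  # outer sumt iterations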

BHHH is essentially the same as Newton-Raphson. The latter uses the true Hessian; the former approximates the Hessian using the Information Equality (IE). The IE is only valid when maximizing a log-likelihood, requires the observations to be independent, and requires observation-wise gradients. In practice BHHH tends to converge more slowly at first, but close to the optimum the two behave the same. This is because the IE only holds at the true parameter values. NR and BHHH are asymptotically the same (this is what the IE states). However, in finite samples you may get different inference (in my experience, up to 30% different standard errors). See Calzolari & Fiorentini (1993), Economics Letters 42(1), 5-13. Note that one cannot easily tell which standard errors are 'better' in a finite sample, as ML inference is only asymptotically valid.
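
To see the difference concretely, here is a minimal sketch of the two Hessians ('loglik' and 'theta' are placeholders; loglik is assumed to return the observation-wise log-likelihood contributions, and numDeriv supplies the numerical derivatives):

library(numDeriv)
G <- jacobian(loglik, theta)   # n x k matrix of observation-wise gradients
H_bhhh <- -crossprod(G)        # BHHH: -t(G) %*% G, via the Information Equality
H_nr <- hessian(function(p) sum(loglik(p)), theta)  # true Hessian used by NR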

Best,
Ott

RE: Equality constraint optimization
By: hannes rohloff on 2015-03-13 18:24
[forum:42020]
Hi Ott,

I am impressed by the response time^^.

You might have just solved my problem. I tried something like:

function(th) {
  th[rei] <- th[rej]   # impose the equality constraint th[rei] == th[rej]
  .......              # rest of the likelihood computation
}

where rei and rej are the restricted parameters. The problem was that the outcome depended on which index was rei and which was rej, even though that obviously shouldn't matter (i.e. th[17] <- th[18] should be equivalent to th[18] <- th[17]).

Regarding the example: the likelihood function depends on quite a lot of code (about 300 lines in total^^). I could send you the whole EM algorithm, but I think you have better things to do than look through 600 lines of (to you, probably messy) code. But you got the point, so your solution will probably work just fine.

Luckily I don't need the covariance matrix, since I only use the result in an LR test.
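
(The LR test itself is just this, as a sketch, with llU and llR standing for the maximized unrestricted and restricted log-likelihoods:)

LR <- 2 * (llU - llR)                    # likelihood-ratio statistic
pchisq(LR, df = 1, lower.tail = FALSE)   # one equality constraint => df = 1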

Regarding BHHH and line search:

Ok, so your code looks similar to this:

library(numDeriv)  # for jacobian()

## candidate step multipliers, largest first; steps[9] == 0 keeps th unchanged
steps <- c(5, 2, 1, 0.5, 0.25, 0.1, 0.01, 0.005, 0)
step <- 0.1

deriv <- jacobian(likeli_calc, th)   # observation-wise gradients (n x k)
hess <- t(deriv) %*% deriv           # BHHH Hessian approximation
lik <- sum(likeli_calc(th))          # current log-likelihood
deriv <- colSums(deriv)              # total gradient (length k)

## candidate parameter vectors, one per step multiplier
## (inv_all is my own matrix-inverse helper, essentially solve())
theta_list <- matrix(nrow = length(th), ncol = 9)
liksums <- matrix(nrow = 9, ncol = 2)
liksums[, 2] <- 1:9
liksums[9, 1] <- lik                 # steps[9] == 0, so this is the current value

for (j in 1:9) {
  theta_list[, j] <- th + step * steps[j] * inv_all(hess) %*% deriv
}

## walk from the smallest step towards larger ones;
## stop as soon as a larger step makes the likelihood worse
endscor <- 0
j <- 8
while (j >= 1 && endscor == 0) {
  liksums[j, 1] <- sum(likeli_calc(theta_list[, j]))
  if (!is.finite(liksums[j, 1])) {   # catches NA, NaN and +/-Inf at once
    liksums[j, 1] <- -1e7
  }
  if (liksums[j + 1, 1] > liksums[j, 1]) endscor <- 1
  j <- j - 1
}


?? Or let's say it works similarly? (I know it's ugly.)

Unfortunately I'm really no expert in optimization, let alone line search (I barely know what it is^^).

I was asking about that (how maxBHHH works) because I was afraid that the results of the EM algorithm with the constraints (since it would then use the sumt() algorithm) and without them wouldn't be comparable, due to the difference in the optimization routine.

Sorry for the long text; a big thank you, and best regards,

Hannes

RE: Equality constraint optimization
By: Ott Toomet on 2015-03-13 16:59
[forum:42018]
Hey,
I haven't checked sumt and maxiter. Do you have a tiny example I could try?

Otherwise, I would maybe wrap your true likelihood function inside a wrapper that imposes the constraints. Something in this direction:

fakeLoglik <- function(fakeTheta) {
  theta[1:2] <- fakeTheta[1]   # both constrained components share one value
  theta[..other components..] <- fakeTheta[-1]
  trueLoglik(theta)
}
maxLik(fakeLoglik, start = start, ...)

Essentially, you have just one parameter instead of two, and that's a simple way to force the likelihood function to understand that. As an additional bonus you also get correct inference now ;-)
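
A toy illustration of the trick (the data and the likelihood here are made up purely for demonstration; only maxLik itself is assumed):

library(maxLik)

set.seed(1)
x <- matrix(rnorm(200, mean = 1), ncol = 2)  # two columns sharing a true mean

## the unconstrained parameterization would be theta = c(mu1, mu2);
## the constraint of interest is mu1 == mu2
trueLoglik <- function(theta) {
  dnorm(x[, 1], mean = theta[1], log = TRUE) +
    dnorm(x[, 2], mean = theta[2], log = TRUE)
}

## constrained version: a single free parameter mu
fakeLoglik <- function(fakeTheta) {
  theta <- rep(fakeTheta[1], 2)   # mu1 <- mu2 <- mu
  trueLoglik(theta)               # observation-wise values, as BHHH needs
}

summary(maxLik(fakeLoglik, start = c(mu = 0), method = "BHHH"))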

About BHHH: it is the same optimizer as maxNR (implemented by me). It does not have any meaningful line search algorithm; if the original step fails, it just keeps halving the step until it finds a new estimate that is better. I would be happy to improve it; I know it causes trouble on several occasions, so any references are welcome :-)
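
In pseudocode, the step handling amounts to something like this (just a sketch, not the actual maxNR source; 'loglik', 'theta' and 'direction' are placeholders):

step <- 1
repeat {
  candidate <- theta + step * direction
  if (isTRUE(sum(loglik(candidate)) > sum(loglik(theta)))) break  # improvement found
  step <- step / 2
  if (step < 1e-10) stop("no improving step found")               # give up eventually
}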

Cheers,
Ott

Equality constraint optimization
By: hannes rohloff on 2015-03-13 16:29
[forum:42017]
Dear all,

I have to optimize a likelihood function with linear equality constraints. I use this optimization routine in the maximization step of an EM algorithm; since I figured it doesn't change the results significantly, I only use one iteration of the maxBHHH optimizer. Now I would like to do this for the constrained case as well, but I can't figure out how to pass the maxiter argument to the sumt() function (I cannot find sumt() at all).

Additionally, could you quickly explain how "exactly" the BHHH estimator is implemented, i.e. what is the underlying optimization routine (optim?), and is a line search carried out even if maxiter=1?

Just in case you can think of a workaround right away: the constraint is simply theta[1] = theta[2].

I hope this is comprehensible...

Thanks and best regards,

Hannes





