My adviser has a newly built model with unknown parameters a = [a1 a2 a3 a4];
I am assigned to estimate the parameters with MLE.
The model is
f(X) = p*exp( sqrt( (a1*x1)^2 + (a2*x2)^2 + (a3*x3)^2 + (a4*x4)^2 ) )
X ([x1 x2 x3 x4]) is known, p is known, and empirical data for f(X) corresponding to each X are known.
I know MATLAB offers mle.m to handle MLE scenarios, but I really have no idea how to use it to do this parameter estimation. Or maybe there is another way/function to do it?
BTW, there are 76 groups of X and empirical data.
Thanks.
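A possible sketch, under the assumption that the measurement errors are i.i.d. Gaussian (in which case maximizing the likelihood in a is equivalent to minimizing the sum of squared residuals); `X` (76x4), `y` (76x1 empirical data), and the scalar `p` are assumed variable names. This only needs base MATLAB, not the Statistics Toolbox:

```matlab
% Model prediction for a parameter vector a (4x1) and data matrix X (76x4)
model = @(a, X) p * exp(sqrt((a(1)*X(:,1)).^2 + (a(2)*X(:,2)).^2 ...
                           + (a(3)*X(:,3)).^2 + (a(4)*X(:,4)).^2));
% Sum of squared residuals = negative log-likelihood up to constants
ssr  = @(a) sum((y - model(a, X)).^2);
a0   = [1 1 1 1];            % initial guess (problem-dependent)
aHat = fminsearch(ssr, a0);  % MLE under the Gaussian-noise assumption
```

If a different noise model applies, the function to minimize would be the corresponding negative log-likelihood instead of `ssr`.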

Hello,
I have a problem using the ode45 function in MATLAB 6.
I want to pass some parameters (Fr, Ft, m) through the ode45 function
so I can change them each time. I did the following steps:
- Writing KoernKurv(t,y,Fr,Ft,m):
function dy = KoernKurv(t,y,Fr,Ft,m)
dy = zeros(4,1);
dy(1)= y(2);
dy(2)= Fr/m + y(1)*y(4)^2;
dy(3) = y(4);
dy(4) = (Ft/m - 2*y(1)*y(2)*y(4))/(y(1)^2);
- Calling ode45 in the command window:
[T, Q] = ode45('KoernKurv', [0 3], [1 1 1 1], [], 2, 1, 1)
I think the syntax is correct, but I get the error 'Too many input
arguments'. Could someone help me, please?
Many thanks!
Thiet
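A likely cause, sketched here as an assumption about the old string-based ("odefile") calling convention: when the ODE function is given as a string, the solver calls it with an extra flag argument before the trailing parameters, so KoernKurv receives six inputs but only declares five. Two possible fixes (the parameter values 2, 1, 1 are taken from the call above):

```matlab
% Fix for MATLAB 6 (odefile syntax): accept the extra flag argument,
% keeping the body unchanged:
%   function dy = KoernKurv(t, y, flag, Fr, Ft, m)
%   ...

% Alternative for MATLAB 7 and later: bind the parameters with an
% anonymous function and drop them from the ode45 call entirely.
Fr = 2; Ft = 1; m = 1;
[T, Q] = ode45(@(t, y) KoernKurv(t, y, Fr, Ft, m), [0 3], [1 1 1 1]);
```

The anonymous-function route is the one current MATLAB documentation recommends, but it is not available in MATLAB 6.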

Dear readers,
I am trying to run a maximum likelihood estimation of a likelihood
function that has two parameters, say a and b. The assembly of the
likelihood function depends on the parameters.
Let x_i be an observation, and say the marginals look like this:
f(a,b|x) = 2*a + b + x if a < b
f(a,b|x) = 2*b + a + x if b < a
f(a,b|x) = 4*a + x if a = b
Now the likelihood function is the product
L(a,b|x_1, x_2, ..., x_n) = product i=1:n f(a,b,x_i),
which can be transformed into a sum of logarithms:
log(L(a,b|x_1, ...)) = sum i=1:n log(f(a,b,x_i)).
Now my question is:
1) How do I code the definition of the marginal f(a,b,x) in MATLAB?
In Mathematica there is the possibility of appending "/; a > b"
("/; a == b" or "/; a < b", respectively) to a function definition. Is
there an equivalent in MATLAB, or do I have to use if/then/else in a
MATLAB function body?
Given there is no equivalent other than if/then/else: how can I give
fmincon the constraint a < b (a = b or a > b, respectively)?
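For the constraint part, a sketch of the usual approach: fmincon takes linear inequality constraints as a matrix/vector pair satisfying A*p <= c, so with the parameter vector p = [a; b] the region a <= b is one row. (Strict inequalities cannot be expressed directly; a small margin approximates them. `negLogLik` below is a hypothetical objective, defined elsewhere.)

```matlab
A  = [1 -1];      % encodes a - b <= 0, i.e. a <= b
c  = 0;           % use c = -1e-8 to approximate the strict a < b
p0 = [0.5; 1];    % hypothetical starting point inside the region
pHat = fmincon(@(p) negLogLik(p), p0, A, c);
```

For a = b, one can substitute b = a and optimize over a alone, which avoids an equality constraint entirely.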
2) How can I construct the sum of logarithms mentioned above? What I
did was the following:
function f = logLikelihood(x)
n = length(x);
a = sym('a','real');
b = sym('b','real');
i = 1:n;
m(i) = log(x(i,1) + x(i,2) + a + b);
f = sum(m);
% x is an n-by-2 matrix of n observations of two items, which I load
% with load('filename.txt')
The logLikelihood function nicely assembles the desired sum of
logarithms, but how do I make a function out of that which I can pass
to fmincon? I figure this can be achieved with function handles, but
how exactly do I do it? I always ran into fmincon telling me that a is
an unknown variable when I tried to use function handles. (I tried to
define an anonymous function using the return value of logLikelihood.)
(BTW: In logLikelihood I ignored problem 1. If there is no equivalent
to Mathematica's /;, then I will have to pass the a < b etc.
constraints to fmincon and run three separate fmincon runs. But how do
I pass these constraints? I could not figure out how to achieve this
with the constraint options available in fmincon.)
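One way around the "a is an unknown variable" problem, sketched here as a suggestion: skip the Symbolic Toolbox entirely and write the negative log-likelihood as an ordinary numeric function of the parameter vector, so fmincon only ever sees a function handle over numbers. (This uses the a < b branch of the marginal above and follows the x(:,1)+x(:,2) convention from logLikelihood; the other branches would get their own fmincon runs with their own constraints.)

```matlab
% Data: n-by-2 matrix of observations
x = load('filename.txt');
% Negative log-likelihood for p = [a; b], a < b branch: f = 2*a + b + x
negLogLik = @(p) -sum(log(x(:,1) + x(:,2) + 2*p(1) + p(2)));
% Minimize subject to a <= b, encoded as the linear inequality [1 -1]*p <= 0
pHat = fmincon(negLogLik, [0.5; 1], [1 -1], 0);
```

Because the data `x` is captured when the anonymous function is created, nothing symbolic is left for fmincon to complain about.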
Any help would be greatly appreciated!
Kind regards,
Markus

Yudha wrote:
> log(f(t|k,t0))=log(k)-log(gamma(1/k))-log(t0)-(-t/t0)^k
MATLAB has a GAMMALN function. Use that instead of log(gamma(1/k)).
> When I calculate L(k,t0|t) with two different dataset of t, it's always gave me 0. So it seems that the numerical scheme behind mle tool in matlab optimizes L(k,t0|t) by reaching it minimum value instead of maximum. Therefore the mle tool on matlab may not be working for this case.
As the help says,
>> help mle
MLE Maximum likelihood estimation.
[snip]
[...] = MLE(DATA,'nloglf',NLOGLF,'start',START,...) returns MLEs for
the parameters of the distribution whose negative log-likelihood is
given by NLOGLF.
But you haven't said how you are using MLE, nor provided any code, so it's impossible to say why you are getting zero.
There's a reason why one typically works with the _log_-likelihood rather than the likelihood: the likelihood in even moderately-sized data sets is often extremely small, to the point of underflowing in double precision. Is that the problem?
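For reference, a hedged sketch of how a custom negative log-likelihood can be handed to MLE; the parameter ordering params = [k, t0], the variable names, and the (t./t0).^k form of the last term are assumptions to be adapted to the actual density:

```matlab
% mle calls nloglf as nloglf(params, data, cens, freq)
nloglf = @(params, t, cens, freq) -sum( log(params(1)) ...
    - gammaln(1/params(1)) - log(params(2)) ...
    - (t ./ params(2)).^params(1) );
phat = mle(t, 'nloglf', nloglf, 'start', [1 1], 'lowerbound', [eps eps]);
```

Working in log space like this, with GAMMALN instead of log(gamma(...)), sidesteps exactly the underflow problem described above.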