2009/5/21 <josef.pktd@gmail.com>:
> Do you know how well these optimization functions would handle
> discontinuities at the boundary? e.g.
>
>     def wrapobjectivefn(x):
>         if transpose(x).M.x > 1.0:
>             return a_large_number
>         else:
>             return realobjectivefn(x)
>
> I don't know what the appropriate wrapper for the gradient would be,
> maybe also some large vector.
>
> I'm doing things like this in matlab, but I haven't tried with the
> scipy minimizers yet.

I would say they'd handle it badly, in the sense that most of them try
to do something like build up a quadratic form approximating the
function, then head for the minimum of that quadratic form. A
discontinuity is of course going to make nonsense of this quadratic
form, though the fact that you get a huge value will tend to send the
optimizer screeching back in the direction it came from. Unfortunately
it probably won't know it should discard the huge value, so if it
overshoots too much you could wind up with the set of evaluation
points being filled with bogus values. If feasible, it might not hurt
to return something like huge_value*(x**2+y**2+...) instead of a flat
constant, so that the solver tends to gravitate back towards the
allowed region.
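A minimal sketch of that growing-penalty idea, using scipy's derivative-free Nelder-Mead solver (the objective, the matrix M, and the constants are all invented for illustration; Nelder-Mead builds no quadratic model, so it tolerates the jump at the boundary better than the quasi-Newton routines):

```python
import numpy as np
from scipy.optimize import fmin

M = np.eye(2)  # hypothetical matrix defining the feasible region x'Mx <= 1

def realobjectivefn(x):
    # toy objective whose unconstrained minimum (2, 0) lies outside the region
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def wrapobjectivefn(x):
    q = x @ M @ x
    if q > 1.0:
        # penalty grows with the size of the violation, rather than being a
        # flat large number, so the solver is nudged back towards feasibility
        return 1e10 * (1.0 + q)
    return realobjectivefn(x)

# fmin is Nelder-Mead: no gradients, no quadratic form
xmin = fmin(wrapobjectivefn, np.array([0.1, 0.1]), disp=False)
```

The true constrained minimum of this toy problem sits on the boundary at (1, 0), and the simplex search lands close to it; a gradient-based routine fed the same wrapper can easily stall on the discontinuity instead.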
In an ideal world, even in the absence of constraints, it would be
possible for the objective function to return NaN, which the solver
would (ideally) recognize as indicating a point where the function
cannot be safely evaluated. It would then choose some other point to
evaluate the function. You might need a constraint to choose such a
new point safely. (And if you had a constraint, it would be safer and
simpler to refuse to call the objective anywhere the constraint is
not met.) But as I understand it, the problem with this is not just
that it hasn't been implemented, but that a good optimization scheme
would need to be more flexible about the locations of its samples than
the current ones are.
Anne
> Josef
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user