I am trying to follow the advice given here

http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-7.html

but I am having trouble. I have a complicated objective to minimize
subject to complicated nonlinear constraints, all of which are
functions of my 3 choice variables, and a wide variety of parameters.
My script that starts everything looks like this:

% define a bunch of parameters
gamma = .2;
beta = .3;
x0 = [.5 .5 .5];
% etc
solution = nested_minimization_program(x0,gamma,beta)

Nested_minimization_program looks like this:

function out = nested_minimization_program(x0,gamma,beta)
options = optimset('GradObj','on');
out = fmincon(@objective,x0,[],[],[],[],[0 0 0],[1 1 1],@nonlin,options);
    function [obj, obj_gradient] = objective(x)
        [obj, obj_gradient] = complicated_objective(x,gamma,beta);
    end
    function [ineq_constraint, eq_constraint] = nonlin(x)
        [ineq_constraint, eq_constraint] = complicated_constraints(x,beta,gamma);
    end
end

Complicated_objective is a file that returns the value of the
objective as its first output and the analytical gradient as its
second. Complicated_constraints returns a vector of nonlinear
inequality constraints as its first output and a vector of nonlinear
equality constraints as its second.

The reason to do this is so that I can use the @objective and @nonlin
syntax for fmincon; objective and nonlin are only functions of x, not
of the parameters, because they are subfunctions of a function that
has been passed the parameters already. I believe this is the form I
should use in order to pass the gradient and the nonlinear constraints
on to fmincon. My problem is that when I run this code, I get the
following error

>Warning: Trust-region-reflective algorithm does not solve this type of problem,
using active-set algorithm. You could also try the interior-point or sqp
algorithms: set the Algorithm option to 'interior-point' or 'sqp' and rerun.
For more help, see Choosing the Algorithm in the documentation.

DOD <dcodea@gmail.com> wrote in message <c3d4297a-b9d6-4819-beff-f974bb7a4ceb@a26g2000vbo.googlegroups.com>...
> My problem is that when I run this code, I get the
> following error
=======================

You mean "warning".

> IE, for some reason fmincon is leaving the Trust-region-reflective
> algorithm
============================

because trust-region only supports
xor(bound constraints, linear equality constraints)
as mentioned in the FMINCON doc page.

> and going to active set, which does not make use of my
> analytical gradient.
====================

Active set will not ignore your analytical gradient.
It simply doesn't require you to supply it (unlike trust-region).

The reason that you cannot use the trust-region-reflective algorithm has
nothing to do with the way you pass extra parameters. As documented
here, among other places: http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-18.html#bsbwxm7
the trust-region-reflective algorithm only takes bound constraints or
linear equality constraints, but not both, and cannot handle nonlinear
constraints. It's a good algorithm, but is limited in the range of
constraints it can handle.

The active-set algorithm does make use of your gradient calculation
assuming that
1. You set GradObj to 'on'
2. You set GradConstr to 'on'
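A minimal sketch of that options setup (GradObj and GradConstr are the optimset option names used elsewhere in this thread):

```matlab
% With both options on, fmincon asks your functions for analytical
% derivatives instead of estimating them by finite differences.
options = optimset('Algorithm','active-set', ...
    'GradObj','on', ...     % objective returns [f, gradf]
    'GradConstr','on');     % nonlcon returns [c, ceq, GC, GCeq]
```

Note that with GradConstr on, the nonlinear constraint function must return four outputs, not two.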

You might also want to try setting Algorithm to 'interior-point', as
recommended in the link above.

It is possible that your nonlinear constraints are not written
correctly, since you indicate that you have more than one, and that you
give a vector as the gradient of the constraints. As described here: http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-11.html#brhkghv-16
the gradient of the constraints should be a matrix, with the same number
of columns as the number of constraints, and the number of rows is the
dimension of your vector x.
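For instance, with two inequality constraints in three variables (the constraint expressions below are made up purely for illustration), a constraint function with gradients might be sketched as:

```matlab
function [c, ceq, GC, GCeq] = nonlin_with_grad(x)
% Two nonlinear inequality constraints c(x) <= 0, no equality constraints.
c   = [x(1)^2 + x(2)^2 - 1;    % c1
       x(2)*x(3) - 0.5];       % c2
ceq = [];
% Constraint gradients: one COLUMN per constraint, one ROW per element
% of x, so GC is 3-by-2 here (the transpose of the Jacobian).
GC   = [2*x(1)  0;
        2*x(2)  x(3);
        0       x(2)];
GCeq = [];
end
```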

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation

On 4/13/2011 12:29 PM, DOD wrote:
> I am trying to follow the advice given here
>
> http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-7.html
>
> but I am having trouble. I have a complicated objective to minimize
> subject to complicated nonlinear constraints, all of which are
> functions of my 3 choice variables, and a wide variety of parameters.
> My script that starts everything looks like this:
>
> % define a bunch of parameters
> gamma= .2;
> beta = .3;
> x0=[.5 .5 .5];
> %etc
> solution = nested_minimization_program(x0,gamma,beta)
>
>
> Nested_minimization_program looks like this:
>
> function out = nested_minimization_program(x0,gamma,beta)
> options = optimset('GradObj','on');
> out = fmincon(@objective,x0,[],[],[],[],[0 0 0],[1 1 1],@nonlin,options);
>     function [obj, obj_gradient] = objective(x)
>         [obj, obj_gradient] = complicated_objective(x,gamma,beta);
>     end
>     function [ineq_constraint, eq_constraint] = nonlin(x)
>         [ineq_constraint, eq_constraint] = complicated_constraints(x,beta,gamma);
>     end
> end
>
> Complicated_objective is a file that returns the value of the
> objective as its first output and the analytical gradient as its
> second. Complicated_constraints returns a vector of nonlinear
> inequality constraints as its first output and a vector of nonlinear
> equality constraints as its second.
>
> The reason to do this is so that I can use the @objective and @nonlin
> syntax for fmincon; objective and nonlin are only functions of x, not
> of the parameters, because they are subfunctions of a function that
> has been passed the parameters already. I believe this is the form I
> should use in order to pass the gradient and the nonlinear constraints
> on to fmincon. My problem is that when I run this code, I get the
> following error
>
>> Warning: Trust-region-reflective algorithm does not solve this type of problem,
> using active-set algorithm. You could also try the interior-point or sqp
> algorithms: set the Algorithm option to 'interior-point' or 'sqp' and rerun.
> For more help, see Choosing the Algorithm in the documentation.
>
> IE, for some reason fmincon is leaving the Trust-region-reflective
> algorithm and going to active set, which does not make use of my
> analytical gradient. The requirements for fmincon to use analytical
> gradients is, according to http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-3.html,
>
>> Write code that returns:
>> The objective function (scalar) as the first output
>
>> The gradient (vector) as the second output
>
>> Set the GradObj option to 'on' with optimset.
>
>
> objective returns a scalar value of the objective and a gradient as
> required, GradObj is turned on, so I don't see my problem.

Alan Weiss <aweiss@mathworks.com> wrote in message <io503h$l9m$1@fred.mathworks.com>...
>
>
> The active-set algorithm does make use of your gradient calculation
> assuming that
> 1. You set GradObj to 'on'
> 2. You set GradConstr to 'on'
=================

But as I understand things, you do not have to do both if you only want to supply an analytical gradient for one or the other.

On Apr 13, 3:08 pm, "Matt J " <mattjacREM...@THISieee.spam> wrote:
> DOD <dco...@gmail.com> wrote in message <c3d4297a-b9d6-4819-beff-f974bb7a4...@a26g2000vbo.googlegroups.com>...
>
> > My problem is that when I run this code, I get the
> > following error
>
> =======================
>
> You mean "warning".
>
> > IE, for some reason fmincon is leaving the Trust-region-reflective
> > algorithm
>
> ============================
>
> because trust-region only supports
> xor(bound constraints, linear equality constraints)
> as mentioned in the FMINCON doc page.
>
> > and going to active set, which does not make use of my
> > analytical gradient.
>
> ====================
>
> Active set will not ignore your analytical gradient.
> It simply doesn't require you to supply it (unlike trust-region)
>
> http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-3.html#bsj1e55
>
> However, you might try using interior-point as the warning message suggests to see if performance improves.

I'm not sure I understand; I am not supplying any linear equality
constraints - those matrices are empty. I am using upper and lower
bounds and nonlinear constraints. Is the problem therefore the very
use of a nonlinear constraint? IE the trust-region method will not
work in my case in any event?

As for your advice to try interior point, I am again having some
trouble that I think stems from the use of parameters. Here is my
code:

What is different from before: the objective function @ramsey_obj now
only supplies the scalar objective and the vector of gradients. The
hessian is supplied by a separate function @hessianfn. This new
function is again defined as a subfunction so that it is only a
function of x, since the parameters required by ramsey_hessian have
been passed to nested_ramsey_minimization.

When I run it, I get:

??? Error using ==> nested_ramsey_minimization>hessianfn
Too many input arguments.

I don't understand the problem; the only input to hessianfn is x,
which is a 3-vector, and in the code for ramsey_hessian, only x(1)
through x(3) are used. So I don't know what this error is really
telling me.

On Apr 13, 3:12 pm, Alan Weiss <awe...@mathworks.com> wrote:
> The reason that you cannot use the trust-region-reflective algorithm has
> nothing to do with the way you pass extra parameters. As documented
> here, among other places: http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-18.html#bsbwxm7
> the trust-region-reflective algorithm only takes bound constraints or
> linear equality constraints, but not both, and cannot handle nonlinear
> constraints. It's a good algorithm, but is limited in the range of
> constraints it can handle.
>
> The active-set algorithm does make use of your gradient calculation
> assuming that
> 1. You set GradObj to 'on'
> 2. You set GradConstr to 'on'
>
> You might also want to try setting Algorithm to 'interior-point', as
> recommended in the link above.
>
> It is possible that your nonlinear constraints are not written
> correctly, since you indicate that you have more than one, and that you
> give a vector as the gradient of the constraints. As described here: http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-11.html#brhkghv-16
> the gradient of the constraints should be a matrix, with the same number
> of columns as the number of constraints, and the number of rows is the
> dimension of your vector x.
>
> Good luck,
>
> Alan Weiss
> MATLAB mathematical toolbox documentation
>
> On 4/13/2011 12:29 PM, DOD wrote:
> > [original post snipped; quoted in full earlier in the thread]

It is correct that I have two nonlinear constraints, and they are
returned in a vector. The 'gradient' for these constraints is then
returned as a 3x2 matrix, which I believe is correct. So I don't
think that is the problem. You are correct that it was not clear to
me that the trust-region methods do not support nonlinear constraints,
so I am attempting to rewrite as an interior-point problem, but as
detailed in my reply above, that is not quite working yet either.
Thanks for your help!

DOD <dcodea@gmail.com> wrote in message <c8fed5ff-c706-4111-b990-77b5aea71685@q12g2000prb.googlegroups.com>...
>
> Is the problem therefore the very
> use of a nonlinear constraint? IE the trust-region method will not
> work in my case in any event?
===============

Yes.

> --
> ??? Error using ==> nested_ramsey_minimization>hessianfn
> Too many input arguments.
> ---
>
> I don't understand the problem; the only input to hessianfn is x,
> which is a 3-vector, and in the code for ramsey_hessian, only x(1)
> through x(3) are used. So I don't know what this error is really
> telling me.
=================

hessianfn is not supposed to be a function only of x. It is supposed to be a function of two arguments:

hessian = hessianfn(x, lambda)

"Matt J" wrote in message <io53ft$f7$1@fred.mathworks.com>...
>
>
> hessianfn is not supposed to be a function only of x. It is supposed to be a function of two arguments
>
> hessian = hessianfn(x, lambda)
>
====================

Granted, the FMINCON documentation could be clearer about this. It just says that the HessianFcn has to return a Hessian. It doesn't tell you that it's supposed to be the Hessian of the Lagrangian, as opposed to just the objective function.
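Under that reading, a user-supplied Hessian for the interior-point algorithm might be sketched as follows (the helpers objective_hessian and constraint_hessians are hypothetical stand-ins for your own derivative code):

```matlab
function H = hessianfn(x, lambda)
% Hessian of the LAGRANGIAN, evaluated at x:
%   H = hess(f) + sum_i lambda.ineqnonlin(i)*hess(c_i)
%               + sum_j lambda.eqnonlin(j)*hess(ceq_j)
H  = objective_hessian(x);        % 3x3 Hessian of the objective alone
Hc = constraint_hessians(x);      % cell array, one 3x3 per inequality constraint
for i = 1:numel(lambda.ineqnonlin)
    H = H + lambda.ineqnonlin(i)*Hc{i};
end
end
```

which would then be hooked up with something like

options = optimset('Algorithm','interior-point', 'GradObj','on', ...
    'GradConstr','on', 'Hessian','user-supplied', 'HessFcn',@hessianfn);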

On Apr 13, 4:24 pm, "Matt J " <mattjacREM...@THISieee.spam> wrote:
> "Matt J" wrote in message <io53ft$f...@fred.mathworks.com>...
>
> > hessianfn is not supposed to be a function only of x. It is supposed to be a function of two arguments
>
> > hessian = hessianfn(x, lambda)
>
> ====================
>
> Granted, the FMINCON documentation could be clearer about this. It just says that the HessianFcn has to return a Hessian. It doesn't tell you that it's supposed to be the Hessian of the Lagrangian, as opposed to just the objective function.

I see, this makes it much clearer what is required. Thanks very
much for your help; I think I see what I need to do.
