I work as a Quant for an Asset Management and Insurance company and have recently enrolled in a Master's degree in Computer Science. I am thinking about investigating how important having a "good" optimizer is for portfolio optimization / asset allocation problems in the presence of:

Multiple asset classes + multiple risk factors

Different risk-adjusted-return objective functions,

Linear constraints as well as VaR or CVaR constraints, and

Noise resulting from Monte Carlo Methods and Stochastic Processes

My hypothesis is that as you add more 'complexity' to the optimization problem, traditional local optimization algorithms will increasingly struggle, and that it may make more sense to use a global optimization algorithm. It's really just a hypothesis at this point, so I may be totally wrong.

My question is really twofold: firstly, do you think this is a worthwhile topic to research, and secondly, can anybody recommend any papers relating to the above topic(s)? Thanks in advance.

1 Answer

Let me try to give you an overview of the different approaches to optimization and their specific challenges. You'll see that the problem is more a trade-off continuum than a binary choice.

All an optimizer does is find a minimum of a specified problem. If you have an analytically tractable problem, that could be as easy as solving a Lagrangian; usually your problem is much more complex, so you'll need to resort to numerical approaches.
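As an illustration of the analytically tractable case, the classic minimum-variance portfolio with only a budget constraint can be solved in closed form via a Lagrangian (here $\Sigma$ is the asset covariance matrix and $\mathbf{1}$ the vector of ones):

$$\min_w \; w^\top \Sigma w \quad \text{s.t.} \quad \mathbf{1}^\top w = 1$$

$$\mathcal{L}(w,\lambda) = w^\top \Sigma w - \lambda\,(\mathbf{1}^\top w - 1), \qquad \nabla_w \mathcal{L} = 2\Sigma w - \lambda\mathbf{1} = 0 \;\Rightarrow\; w^* = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^\top \Sigma^{-1}\mathbf{1}}$$

Add a CVaR constraint or Monte Carlo noise, as in your question, and this closed form is gone — which is exactly when the numerical trade-offs below start to matter.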

With numerical approaches, you have to trade speed against accuracy. The most accurate option is to simply calculate all possible input-output combinations and find the minimum. From there you can increase speed by cleverly leaving out points you don't really need to calculate. For example, pick a point, move a very short increment to the left, see if the result is better, and either move to that new point or turn the other way. That's a gradient descent. Obviously this way you'll easily get trapped inside local minima, which you'd often like to avoid.
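A minimal sketch of that finite-difference descent in Python (the probe size, learning rate, and test objective are illustrative choices, not part of any library):

```python
def finite_difference_descent(f, x0, probe=1e-4, lr=0.1, iters=1000):
    """Minimize a 1-D objective by probing a short increment to each side."""
    x = x0
    for _ in range(iters):
        # estimate the local slope from two nearby evaluations
        slope = (f(x + probe) - f(x - probe)) / (2 * probe)
        x -= lr * slope  # step in the downhill direction
    return x

# On a convex objective this converges to the unique minimum at x = 2:
x_min = finite_difference_descent(lambda x: (x - 2) ** 2, 0.0)
```

On a multimodal objective, the same loop happily settles into whichever local minimum the starting point drains into — the trap described above.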

Next, you can add random jumps to periodically escape these traps; that's called simulated annealing. That's a little better, but it will not give you stable results if your problem is not sufficiently convex in its parameters. Then you'd need to work on your problem formulation to add convexity.
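A sketch of simulated annealing in the same vein (the Gaussian jump distribution, geometric cooling schedule, and two-basin test objective are illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.995, iters=2000, seed=0):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    x = best = x0
    for _ in range(iters):
        # random jump whose size shrinks as the temperature cools
        candidate = x + rng.gauss(0, 1) * temp
        delta = f(candidate) - f(x)
        # always accept improvements; accept worse points with
        # probability exp(-delta / temp) so we can escape local minima
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

# A non-convex objective with a local minimum near x = +1 and the global
# minimum near x = -1; plain descent from x0 = 2 would stop in the +1 basin.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
best = simulated_annealing(f, 2.0)
```

The instability the answer mentions shows up here directly: rerun with a different seed and the returned point can land in a different basin.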

Alternatively, you could use genetic algorithms: start with sets of initial parameters, compare them, and use some kind of mutation to generate new candidates. By comparing these in turn, you can often get a pretty accurate location of your global minimum.
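A toy genetic algorithm in that spirit — truncation selection plus Gaussian mutation is just one of many possible designs, and the population size and mutation scale are illustrative:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=30, generations=100,
                     mutation=0.3, seed=0):
    rng = random.Random(seed)
    # start from a set of random initial parameters
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # compare candidates: keep the better half as survivors
        pop.sort(key=f)
        survivors = pop[: pop_size // 2]
        # mutate the survivors to generate new candidates
        children = [x + rng.gauss(0, mutation) for x in survivors]
        pop = survivors + children
    return min(pop, key=f)

# Same two-basin objective: the population tends to concentrate
# around the global minimum near x = -1, not the local one near +1.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
best = genetic_minimize(f, -3.0, 3.0)
```

Because many candidates explore the space in parallel, a decent population rarely misses the global basin — at the cost of far more objective evaluations than a single descent.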

As you can see, it's pretty hard to make this trade-off decision. A common solution is to use a multi-step approach: first find some point in the region of the global minimum, then refine it using gradient descent.
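A sketch of such a multi-step approach: a coarse global scan to locate the right region, then the finite-difference descent from above to refine it (grid resolution and descent parameters are illustrative):

```python
def two_stage_minimize(f, lo, hi, coarse_points=50,
                       probe=1e-4, lr=0.01, iters=2000):
    # stage 1: coarse global scan to find a point near the global minimum
    step = (hi - lo) / coarse_points
    x = min((lo + i * step for i in range(coarse_points + 1)), key=f)
    # stage 2: refine locally with finite-difference gradient descent
    for _ in range(iters):
        slope = (f(x + probe) - f(x - probe)) / (2 * probe)
        x -= lr * slope
    return x

# The coarse scan lands in the global basin near x = -1; the descent
# then refines the estimate well beyond the grid resolution.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
best = two_stage_minimize(f, -3.0, 3.0)
```

The global stage only needs to be accurate to within one basin of attraction, which keeps it cheap; the local stage then delivers the precision.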