Convex optimization methods show promise for solving
least-squares problems. As exemplified by the least-squares Dix
equation, convex optimization can yield results similar to those
obtained with conjugate-gradient methods. Yet convex optimization
also has the advantage of imposing geologically motivated bounds to
further constrain the solution. The bounded solution also resolves
the faults slightly better than the conjugate-gradient solution,
although, as stated above, this could simply be a function of the
choices made.
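As a minimal sketch of the bounded least-squares Dix problem described above, the toy example below recovers squared interval velocities from integrated RMS data and enforces box bounds with projected gradient descent. The model, the numbers, and the projected-gradient algorithm are illustrative assumptions, not the solver or values used in the text.

```python
import numpy as np

# Toy 1-D Dix problem (all sizes/values are illustrative assumptions).
n = 20
dt = 0.05                           # sample interval (s)

# "True" interval velocities with a step (a fault-like jump)
v_int = np.full(n, 2000.0)
v_int[10:] = 2600.0
u_true = v_int ** 2                 # squared interval velocities

# Dix forward operator: t_k * v_rms_k^2 = running sum of u * dt
C = np.tril(np.ones((n, n))) * dt
d = C @ u_true                      # data: t * v_rms^2
d += np.random.default_rng(0).normal(0.0, 1.0, n)   # small noise

# Geologically motivated bounds on squared interval velocity
lo, hi = 1500.0 ** 2, 3000.0 ** 2

# Projected gradient descent on 0.5*||C u - d||^2 with box bounds:
# each gradient step is clipped back into [lo, hi].
u = np.full(n, 2200.0 ** 2)         # initial guess inside the bounds
step = 1.0 / np.linalg.norm(C, 2) ** 2   # safe step for convergence
for _ in range(5000):
    grad = C.T @ (C @ u - d)
    u = np.clip(u - step * grad, lo, hi)

v_est = np.sqrt(u)                  # estimated interval velocities
```

The projection is what distinguishes this from an unconstrained least-squares fit: the iterate can never leave the geologically plausible velocity range, no matter how noisy the data are.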

The convex optimization solver may not be as fast as conjugate-gradient
methods, but the solution it returns is guaranteed to meet a
preset accuracy. A conjugate-gradient solution is usually obtained by
iterating until we are tired. The convex solver, on the other hand,
works until a preset accuracy is achieved. In these problems, this
precision was set at 10^-9. The efficiency of preconditioning versus
regularization is quite striking: the regularized problem takes 8 times
more iterations than the preconditioned one, but only 3 times as many
when both are bounded. This difference is comparable to that seen with
conjugate gradients.
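The two stopping styles contrasted above can be sketched on a toy symmetric positive-definite system. The conjugate-gradient function below stops either after a fixed iteration budget (the "until we are tired" style) or when the residual drops below a preset tolerance, analogous to the 10^-9 accuracy setting; the system and all parameters are illustrative assumptions.

```python
import numpy as np

def cg(A, b, tol=None, max_iter=100):
    """Conjugate gradients on A x = b. Stops when ||r|| <= tol
    if tol is given, otherwise after max_iter iterations."""
    x = np.zeros_like(b)
    r = b - A @ x                   # initial residual
    p = r.copy()                    # initial search direction
    rs = r @ r
    for it in range(max_iter):
        if tol is not None and np.sqrt(rs) <= tol:
            return x, it            # preset accuracy reached
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p   # update search direction
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(1)
M = rng.normal(size=(30, 30))
A = M.T @ M + 30.0 * np.eye(30)     # well-conditioned SPD matrix
b = rng.normal(size=30)

x_budget, _ = cg(A, b, max_iter=5)              # fixed budget
x_tol, n_it = cg(A, b, tol=1e-9, max_iter=500)  # run to preset accuracy
```

The tolerance-based run takes however many iterations it needs, but the accuracy of the answer is known in advance; the fixed-budget run finishes quickly with whatever residual happens to remain.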

To apply convex optimization to larger problems and more complex
operators, a convex optimization solver that does not rely on MATLAB
is needed. While the existing software is efficient and easy to use,
it is limited by MATLAB's speed and available memory. If such a solver
can be created, convex optimization could be successful in future
endeavors.