Numerical Algebra, Control and Optimization (NACO) is an international journal devoted to publishing peer-refereed, high-quality original papers on any non-trivial interplay between numerical linear algebra, control, and optimization. These three areas are closely related and complementary. The development of many fundamentally important theories and methods in optimization and control is based on numerical linear algebra. Efficient implementation of algorithms in optimization and control also poses new theoretical challenges in numerical linear algebra. Furthermore, optimization theory and methods are widely used in control theory, especially for solving practical control problems. Conversely, control problems often initiate new theory, techniques, and methods in optimization.

The main objective of NACO is to provide a single forum for, and to promote collaboration between, researchers and practitioners in these areas. Significant practical and theoretical problems in one area can be addressed by applying recent advances in theory, techniques, and methods from the other two areas, leading to the discovery of new ideas and the development of novel methodologies in numerical algebra, control, and optimization.

The journal adheres to the publication ethics and malpractice policies outlined by COPE.

In this paper, we consider the unconstrained optimization problem under the following conditions:
(S1) The objective function is evaluated with a certain bounded error,
(S2) the error is controllable, that is, the objective function can be evaluated to any desired accuracy,
and (S3) more accurate evaluation requires a greater computation time.
This situation arises in many fields, such as engineering and finance, where objective function values are obtained by extensive numerical computation or simulation.
Under (S2) and (S3), it seems reasonable to set the accuracy of the evaluation to be low at points far from a solution, and high at points in the neighborhood of a solution.
In this paper, we propose a derivative-free trust-region algorithm based on this idea.
For this purpose, we consider (i) how to construct a quadratic model function by exploiting pointwise errors and (ii) how to control the accuracy of function evaluations to reduce the total computation time of the algorithm.
For (i), we propose a method based on support vector regression.
For (ii), we present an updating formula for the evaluation accuracy that is linked to the trust-region radius.
We present numerical results for several test problems taken from CUTEr and a financial problem of estimating implied volatilities from option prices.
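The coarse-to-fine accuracy control described above can be sketched as follows. This is only an illustrative toy, not the paper's algorithm: the test function, the quantized-error model of conditions (S1)-(S3), and the constant linking tolerance to radius are all assumptions.

```python
def f_noisy(x, tol):
    """Evaluate f(x) = (x - 2)**2 only to accuracy tol: the exact value is
    quantized to a grid of spacing tol, mimicking a simulation whose output
    error is bounded by the requested tolerance (conditions (S1)-(S2))."""
    exact = (x - 2.0) ** 2
    return round(exact / tol) * tol

def coarse_to_fine_minimize(x0, delta0=1.0, c=1e-2, max_iter=200):
    """Hypothetical 1-D derivative-free trust-region loop.  The evaluation
    tolerance is tied to the radius, tol = c * delta, so accuracy (and
    hence cost, condition (S3)) stays low far from the solution and is
    tightened only as the radius shrinks near the solution."""
    x, delta = x0, delta0
    while delta > 1e-8 and max_iter > 0:
        max_iter -= 1
        tol = c * delta                      # accuracy follows the radius
        fx = f_noisy(x, tol)
        cands = [x - delta, x + delta]       # trust-region boundary points
        fc = [f_noisy(z, tol) for z in cands]
        i = 0 if fc[0] < fc[1] else 1
        if fc[i] < fx - tol:                 # decrease beyond the noise level
            x = cands[i]                     # accept the step
        else:
            delta *= 0.5                     # shrink radius, tighten tolerance
    return x

xmin = coarse_to_fine_minimize(0.0)
```

Because the acceptance test demands a decrease larger than the current noise bound, cheap low-accuracy evaluations suffice for the early long steps, and expensive accurate ones are requested only once the radius is small.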

The nonlinear semidefinite optimization problem arises from applications in system control, structural design, financial management, and other fields. However, much work remains to be done to solve this problem effectively. We introduce some new theoretical and algorithmic developments in this field. In particular, we discuss first- and second-order algorithms that appear to be promising, including the alternating direction method, the augmented Lagrangian method, and the smoothing Newton method. Convergence theorems are presented and preliminary numerical results are reported.

Nonlinear equations and nonlinear least squares problems have many
applications in physics, chemistry, engineering, biology, economics,
finance, and many other fields. In this paper, we review some
recent results on numerical methods for these two classes of problems,
particularly Levenberg-Marquardt type methods, quasi-Newton type
methods, and trust region algorithms. Discussions of variable
projection methods and subspace methods are also given. Some
theoretical results on the local convergence of Levenberg-Marquardt
type methods without a nonsingularity assumption are presented.
A few model algorithms based on line searches and trust regions
are also given.
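As a concrete illustration of the Levenberg-Marquardt type methods surveyed above, here is a minimal sketch for a 2x2 nonlinear system. The damping choice mu_k = ||F(x_k)||^2 is one of the options considered in local-convergence analyses without a nonsingularity assumption; the test system, stopping rule, and Cramer's-rule solve are illustrative assumptions, not a particular method from the survey.

```python
def lm_solve(F, J, x, max_iter=500, tol=1e-8):
    """Minimal Levenberg-Marquardt sketch for a 2x2 system F(x) = 0 with
    Jacobian J(x), using damping mu_k = ||F(x_k)||^2.  The damped normal
    equations (J^T J + mu I) s = -J^T r are solved by Cramer's rule."""
    for _ in range(max_iter):
        r0, r1 = F(x)
        mu = r0 * r0 + r1 * r1            # ||F||^2 as the damping parameter
        if mu < tol * tol:                # stop once ||F|| < tol
            break
        (a, b), (c, d) = J(x)
        A11, A12, A22 = a*a + c*c + mu, a*b + c*d, b*b + d*d + mu
        g0, g1 = a*r0 + c*r1, b*r0 + d*r1     # g = J^T r
        det = A11 * A22 - A12 * A12
        s0 = (-g0 * A22 + g1 * A12) / det
        s1 = (-g1 * A11 + g0 * A12) / det
        x = (x[0] + s0, x[1] + s1)
    return x

# demo system: x0 + x1 = 3 and x0 * x1 = 2, with roots (1, 2) and (2, 1)
F = lambda x: (x[0] + x[1] - 3.0, x[0] * x[1] - 2.0)
J = lambda x: ((1.0, 1.0), (x[1], x[0]))
sol = lm_solve(F, J, (2.0, 0.0))
```

Far from a root the large damping keeps steps short; near a root the damping vanishes with the residual and the iteration behaves like Gauss-Newton.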

In this paper, we briefly review extensions of quasi-Newton methods
for large-scale optimization. Specifically, based on the idea of
maximum-determinant positive definite matrix completion, Yamashita (2008)
proposed a new sparse quasi-Newton update, called MCQN, for unconstrained
optimization problems with sparse Hessian structures. In exchange for
relaxing the secant equation, the MCQN update avoids solving difficult
subproblems and overcomes the ill-conditioning of approximate Hessian
matrices. In this paper, a global convergence analysis is given for the
MCQN update with Broyden's convex family, assuming that the objective
function is uniformly convex and that its dimension is only two.
This paper is dedicated to Professor Masao Fukushima on the occasion of his 60th birthday.

The all-together method is one of the support vector machine (SVM)
approaches to multiclass classification that uses a piecewise linear function.
Recently, we proposed a new hard-margin all-together model that maximizes
geometric margins in the sense of multiobjective optimization
for high generalization ability, called
the multiobjective multiclass SVM (MMSVM).
Moreover, we derived solution techniques that can find a Pareto optimal
solution of the MMSVM, and verified through numerical experiments that
the proposed techniques yield classifiers with larger geometric margins.
However, those experiments were not sufficient
to evaluate the classification performance of the proposed model,
and the MMSVM is a hard-margin model that can be applied only to piecewise
linearly separable data.
Therefore, in this paper, we extend the hard-margin model to a soft-margin one
by introducing penalty functions for the slack margin variables,
and derive a single-objective second-order cone programming (SOCP) problem
to solve it.
Furthermore, through numerical experiments we verify the classification
performance of the hard- and soft-margin MMSVMs on benchmark problems.
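The slack/penalty idea behind the soft-margin extension can be illustrated in its simplest binary, single-objective form. The paper's model is multiobjective, multiclass, and solved as an SOCP; the subgradient scheme below is only an assumed stand-in that shows how a penalty on violated margins replaces the hard separability requirement.

```python
def soft_margin_svm(points, labels, C=1.0, lr=0.01, epochs=200):
    """Binary soft-margin linear SVM by subgradient descent:
    minimize 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b)).
    The hinge terms play the role of penalized slack variables; C trades
    margin width against margin violations.  Illustrative only."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        gw, gb = [w[0], w[1]], 0.0        # gradient of the 0.5*||w||^2 term
        for (x, y) in zip(points, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:                # slack is active: hinge subgradient
                gw[0] -= C * y * x[0]
                gw[1] -= C * y * x[1]
                gb -= C * y
        w = [w[0] - lr * gw[0], w[1] - lr * gw[1]]
        b -= lr * gb
    return w, b

# demo: four separable points on the line x0 = x1
pts = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -3.0)]
ys = [1, 1, -1, -1]
w, b = soft_margin_svm(pts, ys)
```

With a finite C the model remains well defined even when no separating hyperplane exists, which is exactly the property the soft-margin MMSVM gains over its hard-margin predecessor.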

In this paper, we study the stochastic variational inequality problem
(SVIP) from the viewpoint of minimizing conditional value-at-risk.
We employ the D-gap residual function for VIPs to define
a loss function for SVIPs. In order to reduce the risk of high losses in applications of SVIPs, we use the
D-gap function and conditional value-at-risk to present a deterministic
minimization reformulation for SVIPs. We show that the new
reformulation is a convex program under suitable conditions.
Furthermore, by using smoothing techniques and Monte Carlo
methods, we propose a smoothing approximation method for finding a
solution of the new reformulation and show that this method is
globally convergent with probability one.
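The combination of Monte Carlo sampling and CVaR minimization can be sketched as follows, with a generic quadratic loss standing in for the D-gap-based loss of the paper; the sample-average CVaR estimator, the discretized decision set, and all constants are illustrative assumptions.

```python
import random

def empirical_cvar(losses, alpha=0.9):
    """Sample-average CVaR_alpha of a list of losses: the mean of the
    worst (1 - alpha) fraction of the sample."""
    losses = sorted(losses)
    k = int(len(losses) * alpha)
    tail = losses[k:]
    return sum(tail) / len(tail)

def minimize_cvar(scenarios, loss, grid, alpha=0.9):
    """Deterministic Monte Carlo reformulation: pick the decision x on a
    finite grid that minimizes the empirical CVaR of loss(x, xi) over the
    sampled scenarios xi.  The loss is a stand-in for a D-gap residual."""
    return min(grid, key=lambda x: empirical_cvar(
        [loss(x, xi) for xi in scenarios], alpha))

# demo: seeded Gaussian scenarios, quadratic loss (x - xi)^2
random.seed(0)
scenarios = [random.gauss(0.0, 1.0) for _ in range(1000)]
best_x = minimize_cvar(scenarios, lambda x, s: (x - s) ** 2,
                       [i / 10.0 - 1.0 for i in range(21)], alpha=0.9)
```

Minimizing CVaR rather than the plain expectation concentrates the optimization effort on the worst-case tail of the loss distribution, which is the risk-reduction motivation stated above.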

We consider a regularization method for the numerical solution of
mathematical programs with complementarity constraints (MPCC) introduced
by Gui-Hua Lin and Masao Fukushima. Existing convergence results are
improved in the sense that the MPCC-LICQ assumption is replaced
by the weaker MPCC-MFCQ. Moreover, some preliminary numerical results
are presented in order to illustrate the theoretical improvements.

In this paper, we propose a descent derivative-free method for
solving symmetric nonlinear equations. The method is an extension of
the modified Fletcher-Reeves (MFR) method proposed by Zhang, Zhou and Li [25] to symmetric
nonlinear equations. It can be applied to large-scale
symmetric nonlinear equations due to its low storage requirements.
An attractive property of the method is that
the directions it generates are descent directions for the residual
function. By using a backtracking line search technique,
the generated sequence of function values is made decreasing. Under
appropriate conditions, we show that
the proposed method is globally convergent. The preliminary numerical
results show that the method is practically effective.
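The flavor of such a low-storage residual iteration can be sketched as follows. This is a generic Fletcher-Reeves-type scheme with an FR-style ratio of residual norms and a simple derivative-free backtracking rule; the paper's exact MFR direction, line search conditions, and convergence safeguards differ, so everything below is an assumption-laden illustration.

```python
def residual_cg_solve(F, x, max_iter=5000, tol=1e-8):
    """Sketch of a conjugate-gradient-type derivative-free iteration for
    symmetric nonlinear equations F(x) = 0.  Direction: d = -F + beta*d_prev
    with beta = ||F_k||^2 / ||F_{k-1}||^2.  Only a few vectors are stored,
    which is the property that makes such methods attractive at scale."""
    def merit(y):                          # residual merit function ||F(y)||^2
        return sum(v * v for v in F(y))
    d = [-v for v in F(x)]
    hk = merit(x)
    for _ in range(max_iter):
        if hk < tol * tol:
            break
        # derivative-free backtracking: require decrease of the merit value
        alpha, accepted = 1.0, False
        dd = sum(v * v for v in d)
        while alpha > 1e-12:
            xn = [xi + alpha * di for xi, di in zip(x, d)]
            if merit(xn) < hk - 1e-4 * alpha * alpha * dd:
                accepted = True
                break
            alpha *= 0.5
        if not accepted:                   # safeguard: restart along -F
            d = [-v for v in F(x)]
            continue
        x = xn
        h_new = merit(x)
        beta = h_new / hk                  # FR-style ratio of residual norms
        d = [-v + beta * di for v, di in zip(F(x), d)]
        hk = h_new
    return x

# demo: F is the gradient of a uniformly convex function, so its
# Jacobian is symmetric and the unique root is the origin
F = lambda x: (x[0] + (x[0] + x[1]) ** 3, x[1] + (x[0] + x[1]) ** 3)
root = residual_cg_solve(F, [1.0, -0.5])
```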

In this paper, we focus on fractional programming problems
that minimize the ratio of two indefinite quadratic functions
subject to two quadratic constraints.
Utilizing the relationship between fractional programming
and parametric programming,
we transform the original problem into a univariate nonlinear equation.
To evaluate the function in the equation,
we need to solve a problem of minimizing a nonconvex quadratic function subject to two quadratic constraints.
This problem is commonly called a Celis-Dennis-Tapia (CDT) subproblem, which arises
in some trust region algorithms for equality constrained optimization.
In the outer iterations of the resulting algorithm,
we employ the bisection method and/or the generalized Newton method.
In the inner iterations, we utilize Yuan's theorem
to obtain the global optima of the CDT subproblems.
We also show some numerical results to examine the efficiency of the algorithm.
In particular, we observe that the generalized Newton method is
more robust than the bisection method to errors in the evaluation
of the univariate function.
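The parametric outer iteration with bisection can be sketched as below. Here the inner minimization is brute force over a finite candidate set rather than a CDT subproblem solved globally via Yuan's theorem, and the demo objective is an assumed one-dimensional example.

```python
def fractional_bisection(f1, f2, xs, lo, hi, tol=1e-8):
    """Parametric reformulation of min f1(x)/f2(x) with f2 > 0: the
    optimal ratio lambda* is the unique root of the decreasing function
    F(lam) = min_x [f1(x) - lam * f2(x)], located here by bisection.
    The inner minimum is taken over the finite candidate set xs."""
    def F(lam):
        return min(f1(x) - lam * f2(x) for x in xs)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) > 0:       # ratio mid is still beatable: root lies above
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# demo: minimize (x^2 + 1) / x over a grid in [0.5, 3];
# the optimal ratio is 2, attained at x = 1
xs = [0.5 + 0.001 * i for i in range(2501)]
lam = fractional_bisection(lambda x: x * x + 1.0, lambda x: x, xs, 0.0, 10.0)
```

Replacing the bisection by a generalized Newton step on F is the variant the abstract reports to be more robust to inexact evaluations of F.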

In this paper, a Filter Genetic Algorithm (FGA) method is proposed to find the global optimum of the constrained mixed-variable programming problem. The considered problem is reformulated as the optimization of two functions, the objective function and the constraint violation function. Then, the filter set methodology [5] is applied within a genetic algorithm framework to solve the reformulated problem. We use pattern search as a local search to improve the obtained solutions. Moreover, the gene matrix criterion [10] is applied to accelerate the search process and to terminate the algorithm. The proposed FGA method is promising compared with some other methods existing in the literature.
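The acceptance rule at the heart of the filter set methodology can be sketched as follows. This is the generic dominance test from the filter-methods literature, not the paper's exact update rules, so the data structure and acceptance criterion are assumptions.

```python
def filter_accept(filt, f, v):
    """Generic filter test: a candidate with objective value f and
    constraint violation v is acceptable iff no stored pair (fi, vi)
    dominates it, i.e. is no worse in both components.  On acceptance,
    entries dominated by the newcomer are discarded.  Returns the
    updated filter and a flag indicating acceptance."""
    if any(fi <= f and vi <= v for (fi, vi) in filt):
        return filt, False                     # dominated: reject candidate
    filt = [(fi, vi) for (fi, vi) in filt
            if not (f <= fi and v <= vi)]      # drop dominated entries
    filt.append((f, v))
    return filt, True

# demo: build a small filter
filt, ok1 = filter_accept([], 1.0, 0.5)        # first entry is accepted
filt, ok2 = filter_accept(filt, 2.0, 1.0)      # dominated by (1.0, 0.5)
filt, ok3 = filter_accept(filt, 0.5, 0.2)      # dominates (1.0, 0.5)
```

Within a genetic algorithm framework, such a filter lets infeasible but promising individuals survive, balancing objective decrease against constraint violation without a penalty parameter.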