Domenico Salvagnin

Department of Information Engineering, Padova

Publications

1.

Fast Approaches to Improve the Robustness of a Railway Timetable

M. Fischetti, D. Salvagnin, A. Zanette

Transportation Science 43:321-335, 2009

Abstract:

The Train Timetabling Problem (TTP) consists in finding a train schedule on a
railway network that satisfies some operational constraints and maximizes some
profit function which accounts for the efficiency of the infrastructure usage.
In practical cases, however, maximizing the objective function is not
enough: one also calls for a robust solution that is capable of absorbing
delays/disturbances on the network as much as possible. In this paper we
propose and computationally analyze four different methods to improve the
robustness of a given TTP solution for the aperiodic (non-cyclic) case. The
approaches combine Linear Programming (LP) and ad-hoc Stochastic Programming/Robust
Optimization techniques. We compare computationally the
effectiveness and practical applicability of the four techniques under
investigation on real-world test cases from the Italian railway company
(Trenitalia). The outcome is that two of the proposed techniques are very fast
and provide robust solutions of comparable quality with respect to the
standard (but very time consuming) Stochastic Programming approach.

Abstract:

Modern Mixed-Integer Programming (MIP) solvers exploit a rich arsenal of
tools to attack hard problems. It is widely accepted by the OR community that
the solution of very hard MIPs can take advantage of the solution of a
series of time-consuming auxiliary Linear Programs (LPs) intended to enhance
the performance of the overall MIP solver. E.g., auxiliary LPs may be solved
to generate powerful disjunctive cuts, or to implement a strong branching
policy. Also well established is the fact that finding good-quality heuristic
MIP solutions often requires a computing time that is just comparable to that
needed to solve the LP relaxations. So, it makes sense to think of a new
generation of MIP solvers where auxiliary MIPs (as opposed to LPs) are
heuristically solved on the fly, with the aim of bringing the MIP technology
under the chest of the MIP solver itself. This leads to the idea of
“translating into a MIP model” (MIPping) some crucial decisions to be taken
within a MIP algorithm (How to cut? How to improve the incumbent solution? Is
the current node dominated?). In this paper we survey a number of successful
applications of the above approach.

Abstract:

Finding a feasible solution of a given Mixed-Integer Programming
(MIP) model is a very important NP-complete problem that can be
extremely hard in practice. Feasibility Pump (FP) is a heuristic
scheme for finding a feasible solution to general MIPs that can be
viewed as a clever way to round a sequence of fractional solutions of
the LP relaxation, until a feasible one is eventually found. In this
paper we study the effect of replacing the original rounding function
(which is fast and simple, but somehow blind) with more clever
rounding heuristics. In particular, we investigate the use of a
diving-like procedure based on rounding and constraint propagation — a
basic tool in Constraint Programming. Extensive computational results
on binary and general integer MIPs from the literature show that the
new approach produces a substantial improvement of the FP success
rate, without slowing down the method, and with a significantly better
quality of the feasible solutions found.
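The diving idea above can be illustrated in a few lines. The toy sketch below (an illustration under simplifying assumptions, not the paper's implementation) rounds one fractional variable at a time and tightens bounds on a^T x <= b constraints after each fixing, aborting when some domain empties:

```python
# Hedged sketch of propagation-based rounding: round one variable at a time
# and propagate variable bounds through the constraints, instead of rounding
# all fractional values blindly. Toy setting (an assumption): constraints are
# a^T x <= b with integer variables and finite bounds.

import math

def propagate(cons, lb, ub):
    """One pass of bound tightening on a^T x <= b constraints.
    Returns False if some variable domain becomes empty."""
    for a, b in cons:
        # minimal activity of the constraint under the current bounds
        min_act = sum(c * (lb[j] if c > 0 else ub[j]) for j, c in a.items())
        for j, c in a.items():
            # minimal contribution of the other variables
            resid = min_act - c * (lb[j] if c > 0 else ub[j])
            if c > 0:
                ub[j] = min(ub[j], math.floor((b - resid) / c))
            else:
                lb[j] = max(lb[j], math.ceil((b - resid) / c))
            if lb[j] > ub[j]:
                return False
    return True

def dive_round(x_frac, cons, lb, ub):
    """Round fractional values one at a time, propagating after each fix."""
    x = dict(x_frac)
    for j in sorted(x, key=lambda j: abs(x[j] - round(x[j]))):  # most decided first
        v = min(max(round(x[j]), lb[j]), ub[j])
        lb[j] = ub[j] = v
        x[j] = v
        if not propagate(cons, lb, ub):
            return None  # infeasible: a full FP would perturb/restart here
        # keep the partially rounded point consistent with tightened bounds
        for k in x:
            x[k] = min(max(x[k], lb[k]), ub[k])
    return x
```

Rounding variables one at a time lets the propagation veto fixings that a blind rounding would silently accept, which is the mechanism behind the improved FP success rate reported above.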

Abstract:

The concept of dominance among nodes of a branch-and-bound tree, although
known for a long time, is typically not exploited by general-purpose
mixed-integer linear programming (MILP) codes. The starting point of our work
was the general-purpose dominance procedure proposed in the 1980s by Fischetti
and Toth, where the dominance test at a given node of the branch-and-bound
tree consists of the (possibly heuristic) solution of a restricted MILP only
involving the fixed variables. Both theoretical and practical issues
concerning this procedure are analyzed, and important improvements are
proposed. In particular, we use the dominance test not only to fathom the
current node of the tree, but also to derive variable configurations called
"nogoods" and, more generally, "improving moves." These latter configurations,
which we rename “pruning moves” so as to stress their use in a node-fathoming
context, are used during the enumeration to fathom large sets of dominated
solutions in a computationally effective way. Computational results on a
testbed of MILP instances whose structure is amenable to dominance are
reported, showing that the proposed method can lead to a considerable speedup
when embedded in a commercial MILP solver.

Finding the Next Solution in Constraint- and Preference-based Knowledge Representation Formalisms

R. Brafman, F. Rossi, D. Salvagnin, K. B. Venable, T. Walsh

KR 2010 Proceedings, 425-433

Abstract:

In constraint or preference reasoning, a typical task is to compute a
solution, or an optimal solution. However, when one already has a solution, it
may be important to produce the next solution following the given one in a
linearization of the solution ordering where more preferred solutions are
ordered first. In this paper, we study the computational complexity of finding
the next solution in some common preference-based representation formalisms.
We show that this problem is hard in general CSPs, but it can be easy in
tree-shaped CSPs and tree-shaped fuzzy CSPs. However, it is difficult in
weighted CSPs, even if we restrict the shape of the constraint graph. We also
consider CP-nets, showing that the problem is easy in acyclic CP-nets, as well
as in constrained acyclic CP-nets where the (soft) constraints are tree-shaped
and topologically compatible with the CP-net.

Abstract:

Cutting plane methods are widely used for solving convex optimization
problems and are of fundamental importance, e.g., to provide tight bounds for
Mixed-Integer Programs (MIPs). This is obtained by embedding a cut-separation
module within a search scheme. The importance of a sound search scheme is well
known in the Constraint Programming (CP) community. Unfortunately, the
"standard" search scheme typically used for MIP problems, known as the Kelley
method, is often quite unsatisfactory because of saturation issues. In this
paper we address the so-called Lift-and-Project closure for 0-1 MIPs
associated with all disjunctive cuts generated from a given set of elementary
disjunctions. We focus on the search scheme embedding the generated cuts. In
particular, we analyze a general meta-scheme for cutting plane algorithms,
called in-out search, that was recently proposed by Ben-Ameur and Neto.
Computational results on test instances from the literature are presented,
showing that using a more clever meta-scheme on top of a black-box cut
generator may lead to a significant improvement.

Abstract:

Gomory Mixed-Integer Cuts (GMICs) are widely used in modern branch-and-cut
codes for the solution of Mixed-Integer Programs. Typically, GMICs are
iteratively generated from the optimal basis of the current Linear Programming
(LP) relaxation, and immediately added to the LP before the next round of cuts
is generated. Unfortunately, this approach is prone to instability. In this
paper we analyze a different scheme for the generation of rank-1 GMICs read
from a basis of the original LP—the one before the addition of any cut. We
adopt a relax-and-cut approach where the generated GMICs are not added to the
current LP, but immediately relaxed in a Lagrangian fashion. Various
elaborations of the basic idea are presented, which lead to very fast, yet
accurate, variants of the basic scheme. Very encouraging computational results
are presented, with a comparison with alternative techniques from the
literature also aimed at improving the GMIC quality. We also show how our
method can be integrated with other cut generators, and successfully used in a
cut-and-branch enumerative framework.

Abstract:

This paper reports on the fifth version of the Mixed Integer Programming
Library. The MIPLIB 2010 is the first MIPLIB release that has been assembled
by a large group from academia and from industry, all of whom work in integer
programming. There was mutual consent that the concept of the library had to
be expanded in order to fulfill the needs of the community. The new version
comprises 361 instances sorted into several groups. This includes the main
benchmark test set of 87 instances, which are all solvable by today’s codes,
and also the challenge test set with 164 instances, many of which are
currently unsolved. For the first time, we include scripts to run automated
tests in a predefined way. Further, there is a solution checker to test the
accuracy of provided solutions using exact arithmetic.

Abstract:

We address the exact solution of the famous esc instances of the quadratic
assignment problem. These are extremely hard instances that remained
unsolved—even allowing for a tremendous computing power—by using all previous
techniques from the literature. During this challenging task we found that
three ideas were particularly useful, and qualified as a breakthrough for our
approach. The present paper is about describing these ideas and their impact
in solving esc instances. Our method was able to solve, in a matter of seconds
or minutes on a single PC, all easy cases (all esc16* plus esc32e and esc32g).
The three previously-unsolved esc32c, esc32d and esc64a were solved in less
than half an hour, in total, on a single PC. We also report the solution in
about 5 hours of the previously-unsolved tai64c. By using a facility-flow
splitting procedure, we were finally able to solve to proven optimality, for
the first time, both esc32h (in about 2 hours) as well as "the big fish"
esc128 (to our great surprise, the solution of the latter required just a few
seconds on a single PC).

Abstract:

In multiagent settings where agents have different preferences, preference
aggregation can be an important issue. Voting is a general method to aggregate
preferences. We consider the use of voting tree rules to aggregate agents’
preferences. In a voting tree, decisions are taken by performing a sequence of
pairwise comparisons in a binary tree where each comparison is a majority vote
among the agents. Incompleteness in the agents’ preferences is common in many
real-life settings due to privacy issues or an ongoing elicitation process. We
study how to determine the winners when preferences may be incomplete, not
only for voting tree rules (where the tree is assumed to be fixed), but also
for the Schwartz rule (in which the winners are the candidates winning for at
least one voting tree). In addition, we study how to determine the winners
when only balanced trees are allowed. In each setting, we address the
complexity of computing necessary (respectively, possible) winners, which are
those candidates winning for all completions (respectively, at least one
completion) of the incomplete profile. We show that many such winner
determination problems are computationally intractable when the votes are
weighted, although in some cases the exact complexity remains unknown. Since
it is generally computationally difficult to find the exact set of winners for
voting trees and the Schwartz rule, we propose several heuristics, based on
the completions of the (incomplete) majority graph built from the incomplete
profiles, that find in polynomial time a superset of the possible winners and
a subset of the necessary winners.
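For readers unfamiliar with voting trees, the winner computation over a complete profile is easy to sketch. The code below is an illustrative toy (the data layout, and the tie rule favoring the second candidate, are assumptions, not the paper's conventions):

```python
# Minimal sketch of a voting tree: each internal node holds a pairwise
# majority contest between the winners of its two subtrees.

def majority_winner(a, b, profile):
    """Pairwise majority between candidates a and b over a list of rankings
    (each ranking is a list, most preferred first). Ties go to b (assumption)."""
    votes_a = sum(1 for r in profile if r.index(a) < r.index(b))
    return a if votes_a * 2 > len(profile) else b

def eval_tree(tree, profile):
    """tree is either a candidate (leaf) or a pair (left subtree, right subtree)."""
    if not isinstance(tree, tuple):
        return tree
    return majority_winner(eval_tree(tree[0], profile),
                           eval_tree(tree[1], profile), profile)
```

On the Condorcet cycle [A,B,C], [B,C,A], [C,A,B], the tree ((A,B),C) elects C while ((B,C),A) elects A: the winner depends on the agenda, which is why the fixed-tree rules and the Schwartz (all-trees) rule are studied separately above.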

Abstract:

We propose a hybrid MIP/CP approach for solving multi-activity shift
scheduling problems, based on regular languages that partially describe the
set of feasible shifts. We use an aggregated MIP relaxation to capture the
optimization part of the problem and to get rid of symmetry. Whenever the MIP
solver generates an integer solution, we use a CP solver to check whether it
can be turned into a feasible solution of the original problem. A MIP-based
heuristic is also developed. Computational results are reported, showing that
the proposed method is a promising alternative compared to the
state-of-the-art.

Abstract:

The split closure has been proved in practice to be a very tight
approximation of the integer hull formulation of a generic mixed-integer
linear program. However, exact separation procedures for optimizing over the
split closure have unacceptable computing times in practice, hence many
different heuristic strategies have been proposed in recent years. In this
paper we present a new overall framework for approximating the split closure,
that merges different ideas from the previous approaches. Computational
results prove the effectiveness of the proposed procedure compared to the
state of the art, showing that a good approximation of the split closure bound
can be obtained with very reasonable computing times.

Abstract:

Branch-and-bound methods for mixed-integer programming (MIP) are traditionally
based on solving a linear programming (LP) relaxation and branching on a
variable which takes a fractional value in the (single) computed relaxation
optimum. In this paper we study branching strategies for mixed-integer
programs that exploit the knowledge of multiple alternative optimal solutions
(a cloud) of the current LP relaxation. These strategies naturally extend
state-of-the-art methods like strong branching, pseudocost branching, and
their hybrids. We show that by exploiting dual degeneracy, and thus multiple
alternative optimal solutions, it is possible to enhance traditional methods.
We present preliminary computational results, applying the newly proposed
strategy to full strong branching, which is known to be the MIP branching rule
leading to the fewest search nodes. It turns out that cloud
branching can reduce the mean running time by up to 30% on standard test sets.

Abstract:

Orbital shrinking is a newly developed technique in the MIP community to deal
with symmetry issues, which is based on aggregation rather than on symmetry
breaking. In a recent work, a hybrid MIP/CP scheme based on orbital shrinking
was developed for the multi-activity shift scheduling problem, showing
significant improvements over previous pure MIP approaches. In the present
paper we show that the scheme above can be extended to a general framework for
solving arbitrary symmetric MIP instances. This framework naturally provides a
new way for devising hybrid MIP/CP decompositions. Finally, we specialize the
above framework to the multiple knapsack problem. Computational results show
that the resulting method can be orders of magnitude faster than pure MIP
approaches on hard symmetric instances.

Abstract:

We develop a computational method for computing valid inequalities for any
mixed-integer set PJ. Our implementation takes the form of a separator and is
capable of returning only facet-defining inequalities for conv(PJ). The
separator is not comparable in speed with the specific cutting-plane
generators used in branch-and-cut solvers, but it is general-purpose. We can
thus use it to compute cuts derived from any reasonably small relaxation PJ of
a general mixed-integer problem, even when there exists no specific
implementation for computing cuts with PJ. Exploiting this, we evaluate, from
a computational perspective, the usefulness of cuts derived from several types
of multi-row relaxations. In particular, we present results with four
different strengthenings of the two-row intersection cut model, and multi-row
models with up to fifteen rows. We conclude that only fully-strengthened
two-row cuts seem to offer a significant advantage over two-row intersection
cuts. Our results also indicate that the improvement obtained by going from
models with very few rows to models with up to fifteen rows may not be worth
the increased computing cost.

Abstract:

We discuss the variability in the performance of multiple runs of branch-and-cut
mixed integer linear programming solvers, and we concentrate on the one deriving from
the use of different optimal bases of the linear programming relaxations. We propose
a new algorithm exploiting more than one of those bases and we show that different versions
of the algorithm can be used to stabilize and improve the performance of the solver.

Abstract:

Directional sensors are gaining importance due to applications including
surveillance, detection, and tracking. Such sensors have a limited field of view and
a discrete set of directions they can be pointed to. The Directional Sensor Control
problem (DSCP) consists in assigning a direction of view to each sensor. The location
of the targets is known with uncertainty given by a joint a-priori Gaussian
distribution, while sensor locations are known exactly. In this paper we study exact
and heuristic approaches for the DSCP with the goal of maximizing information gain on
the location of a given set of immobile target objects. In particular, we propose an
exact mixed integer convex programming (MICP) formulation to be solved by a black-box
MICP solver and several meta-heuristic approaches based on local search. A
computational evaluation shows the very good performance of both methods.

Abstract:

We address the solution of a very challenging (and previously unsolved) instance of
the quadratic 3-dimensional assignment problem, arising in digital wireless
communications. The paper describes the techniques developed to solve this instance to
optimality, from the choice of an appropriate mixed-integer programming formulation, to
cutting planes and symmetry handling. Using these techniques we were able to solve the
target instance with moderate computational effort (2.5 million nodes and one week of
computations on a standard PC).

Abstract:

Many combinatorial optimization problems can be formulated as the search for the best
possible permutation of a given set of objects, according to a given objective function.
The corresponding MIP formulation is thus typically made of an assignment substructure,
plus additional constraints and variables (as needed) to express the objective function.
Unfortunately, the permutation structure is generally lost when the model is flattened
out as a mixed integer program, and state-of-the-art MIP solvers do not take full
advantage of it. In the present paper we propose a heuristic procedure to detect
permutation problems from their MIP formulation, and show how we can take advantage of
this knowledge to speed up the solution process. Computational results on quadratic
assignment and single machine scheduling problems show that the technique, when embedded
in a state-of-the-art MIP solver, can indeed improve performance.

Abstract:

Parallel computation requires splitting a job among a set of processing units called
workers. The computation is generally performed by a set of one or more master workers
that split the workload into chunks and distribute them to a set of slave workers. To
guarantee correctness and achieve a desirable balancing of the split (needed for
scalability), many schemes introduce a (possibly large) overhead due to
communication/synchronization among the involved workers. We propose a simple mechanism
to avoid the communication issues of the approach above. In the new paradigm, called
SelfSplit, each worker is able to autonomously determine, without any communication with
the other workers, the job parts it has to process. This feature makes the scheme
well suited to those applications where communication among workers is time-consuming
or unreliable. In particular, it allows for a simple yet effective parallelization of
divide-and-conquer algorithms with a short input that produce a very large number of
time-consuming job parts, as happens, e.g., when an NP-hard problem is solved by an
enumerative method. Computational results are reported, showing that SelfSplit can
achieve an almost linear speedup for hard Constraint Programming applications, even when
64 workers are considered.
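The ownership rule at the heart of SelfSplit can be sketched in a few lines. In the toy code below (an illustration, not the published implementation), the "expensive" job is counting 0/1 vectors with no two consecutive ones; every worker deterministically enumerates the same prefixes and, with no communication, keeps only the subproblems whose running counter maps to its own id (the splitting depth is a tuning assumption):

```python
# Hedged sketch of the SelfSplit idea: identical deterministic enumeration
# on every worker, plus a counter-based ownership test instead of messages.

from itertools import product

def solve_subproblem(prefix, n):
    """Expensive part: count 0/1 completions of `prefix` (length n in total)
    with no two consecutive ones anywhere in the vector."""
    count = 0
    for tail in product((0, 1), repeat=n - len(prefix)):
        v = prefix + tail
        if all(not (v[i] and v[i + 1]) for i in range(n - 1)):
            count += 1
    return count

def selfsplit_worker(worker_id, num_workers, n, split_depth=3):
    total = 0
    for k, prefix in enumerate(product((0, 1), repeat=split_depth)):
        if k % num_workers == worker_id:        # deterministic ownership test
            total += solve_subproblem(prefix, n)
    return total
```

Because the shared enumeration is deterministic, the workers' subproblems partition the whole search: summing their counts reproduces the sequential result without any synchronization.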

Abstract:

Symmetry plays an important role in optimization. The usual approach
to cope with symmetry in discrete optimization is to try to eliminate
it by introducing artificial symmetry-breaking conditions into the
problem, and/or by using an ad-hoc search strategy. This is the common
approach in both the mixed-integer programming (MIP) and constraint
programming (CP) communities. In this paper we argue that symmetry is
instead a beneficial feature that we should preserve and exploit as
much as possible, breaking it only as a last resort. To this end, we
outline a new approach, which we call orbital shrinking, where
additional integer variables expressing variable sums within each
symmetry orbit are introduced and used to "encapsulate" model
symmetry. This leads to a discrete relaxation of the original problem,
whose solution yields a bound on its optimal value. Then, we show that
orbital shrinking can be turned into an exact method for solving
arbitrary symmetric MIP instances. The proposed method naturally
provides a new way for devising hybrid MIP/CP decompositions. Finally,
we report computational results on two specific applications of the
method, namely the multi-activity shift scheduling and the multiple
knapsack problem, showing that the resulting method can be orders of
magnitude faster than pure MIP or CP approaches.
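The first step of orbital shrinking, grouping variables into symmetry orbits, is a standard closure computation. A minimal sketch (the explicit-permutation input format is chosen for illustration):

```python
# Orbits of variables under symmetry generators: the building block of
# orbital shrinking, which introduces one aggregated integer variable
# (the sum over the orbit) per block returned here.

def orbits(n, generators):
    """Partition {0..n-1} into orbits under the group generated by
    `generators` (each generator g is a list with g[i] the image of i)."""
    seen = [False] * n
    result = []
    for start in range(n):
        if seen[start]:
            continue
        orbit, stack = {start}, [start]
        seen[start] = True
        while stack:
            i = stack.pop()
            for g in generators:
                j = g[i]
                if not seen[j]:
                    seen[j] = True
                    orbit.add(j)
                    stack.append(j)
        result.append(sorted(orbit))
    return result
```

For instance, with a generator swapping variables 0 and 1 and another cycling 3 -> 4 -> 5 -> 3, the orbits are {0,1}, {2}, and {3,4,5}, so the shrunken relaxation would have three aggregated variables instead of six.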

Abstract:

SelfSplit is a simple static mechanism to convert a sequential tree-search code into a parallel one.
In this paradigm, tree-search is distributed among a set of identical workers, each of which is able
to autonomously determine---without any communication with the other workers---the job parts it has to
process. SelfSplit already proved quite effective in parallelizing Constraint Programming solvers.
In the present paper we investigate the performance of SelfSplit when applied to a Mixed-Integer
Linear Programming (MILP) solver. Both ad-hoc and general purpose MILP codes have been considered.
Computational results show that SelfSplit, in spite of its simplicity, can achieve good speedups
even in the MILP context.

Abstract:

In this paper we consider a packing problem arising in inventory allocation
applications, where the operational cost for packing the bins is comparable to, or
even higher than, the cost of the bins (and of the items) themselves. This is the case,
for example, of warehouses that have to manage a large number of different customers
(e.g., stores), each requiring a given set of items. For this problem, we present
Mixed-Integer Linear Programming heuristics based on problem substructures that lead
to easy-to-solve and meaningful subproblems, and exploit them within an overall
meta-heuristic framework. The new heuristics are evaluated on a standard set of
instances, and benchmarked against known heuristics from the literature. Computational
experiments show that very good (often proven optimal) solutions can consistently
be computed in short computing times.

Abstract:

In mixed-integer programming, the branching rule is a key component to a fast convergence of the branch-and-bound algorithm. The most common strategy is to branch on simple disjunctions that
split the domain of a single integer variable into two disjoint intervals. Multi-aggregation is
a presolving step that replaces variables by an affine linear sum of other variables, thereby
reducing the problem size. While this simplification typically improves the performance of MIP
solvers, it also restricts the degree of freedom in variable-based branching rules. We present a
novel branching scheme that tries to overcome the above drawback by considering general
disjunctions defined by multi-aggregated variables in addition to the standard disjunctions based
on single variables. This natural idea results in a hybrid between variable- and constraint-based
branching rules. Our implementation within the constraint integer programming framework SCIP
incorporates this into a full strong branching rule and reduces the number of branch-and-bound
nodes on a general test set of publicly available benchmark instances. For a specific class of
problems, we show that the solving time decreases significantly.

Abstract:

The Steiner Tree Problem is a challenging NP-hard problem. Many hard instances of this problem
are publicly available that are still unsolved by state-of-the-art branch-and-cut codes.
A typical strategy to attack these instances is to enrich the polyhedral description of the
problem, and/or to implement more and more sophisticated separation procedures and branching
strategies. In this paper we investigate the opposite viewpoint, and try to make the solution
method as simple as possible while working on the modeling side. Our working hypothesis is that
the extreme hardness of some classes of instances mainly comes from over-modeling, and that some
instances can become quite easy to solve when a simpler model is considered. In other words,
we aim at “thinning out” the usual models for the sake of getting a more agile framework.
In particular, we focus on a model that only involves node variables, which is rather appealing
for the “uniform” cases where all edges have the same cost. We show that this model allows one
to quickly produce very good (sometimes proven optimal) solutions for notoriously hard instances
from the literature. In particular, we report improved solutions for several SteinLib instances,
including the (in)famous hypercube ones. Even though we do not claim our approach can work well
in all cases, we report surprisingly good results for a number of unsolved instances. In some
cases, our approach takes just a few seconds to prove optimality for instances never solved
(even after days of computation) by the standard methods.

Abstract:

Mixed Integer Linear Programming (MILP) is commonly used to model indicator constraints,
i.e., constraints that either hold or are relaxed depending on the value of a binary variable.
Unfortunately, those models tend to lead to weak continuous relaxations and turn out to be
unsolvable in practice, as in the case of classification problems with ramp loss functions, which
represent an important application in this context. In this paper we show computational
evidence that a relevant class of these classification instances can be solved far more
efficiently if a nonlinear, nonconvex reformulation of the indicator constraints is used instead
of the linear one. Inspired by this empirical and surprising observation, we show that aggressive
bound tightening is the crucial ingredient for solving this class of instances, and we devise a
pair of computationally effective algorithmic approaches that exploit it within MILP. More
generally, we argue that aggressive bound tightening is often overlooked in MILP, while it
represents a significant building block for enhancing MILP technology when indicator constraints
and disjunctive terms are present.

Abstract:

Current state-of-the-art MIP technology lacks a powerful modeling language based on global
constraints, a tool which has long been standard in constraint programming. In general, even
basic semantic information about variables and constraints is hidden from the underlying solver.
For example, in a network design model with unsplittable flows, both routing and arc capacity
variables could be binary, and the solver would not be able to distinguish between the two
semantically different groups of variables by looking at type alone. If available, such semantic
partitioning could be used by different parts of the solver, heuristics first and foremost, to improve
overall performance. In the present paper we will describe several heuristic procedures, all
based on the concept of partition refinement, to automatically recover semantic variable (and
constraint) groups from a flat MIP model. Computational experiments on a heterogeneous testbed of
models, whose original higher-level partition is known a priori, show that one of the proposed
methods is quite effective.
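A minimal sketch of the partition-refinement principle underlying these procedures (the signature used here, built from coefficients and the current blocks of co-occurring variables, is an illustrative choice, not the paper's exact rule):

```python
# Partition refinement on a flat model: variables start in one block and
# blocks are repeatedly split by a signature of how each variable appears
# in the constraints, until the partition is stable.

def refine(variables, constraints):
    """constraints: list of {var: coeff} rows. Returns a stable partition
    as a dict mapping each variable to a block id."""
    block = {v: 0 for v in variables}            # start from a single block
    while True:
        sig = {}
        for v in variables:
            s = []
            for con in constraints:
                if v in con:
                    # how v sees the rest of the row, block-wise
                    s.append((con[v], tuple(sorted((block[u], con[u])
                                                   for u in con if u != v))))
            sig[v] = (block[v], tuple(sorted(s)))
        new_ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_block = {v: new_ids[sig[v]] for v in variables}
        if new_block == block:
            return new_block
        block = new_block
```

On a toy unsplittable-flow-like model with rows r1 - c1, r2 - c2 and a joint capacity row c1 + c2, the refinement separates the routing variables from the capacity variables even though all four are binary, which is precisely the semantic grouping the abstract refers to.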

Abstract:

Feasibility pump (FP) is a successful primal heuristic for mixed-integer linear programs
(MILP). The algorithm consists of three main components: rounding a fractional solution to a
mixed-integer one, projecting infeasible solutions onto the LP relaxation, and a randomization
step used when the algorithm stalls. While many generalizations and improvements to the original
Feasibility Pump have been proposed, they mainly focus on the rounding and projection steps.
We start a more in-depth study of the randomization step in Feasibility Pump. For that,
we propose a new randomization step based on the WalkSAT algorithm for solving SAT instances.
First, we provide theoretical analyses that show the potential of this randomization step; to the
best of our knowledge, this is the first time any theoretical analysis of the running time of
the Feasibility Pump or its variants has been conducted. We also conduct computational
experiments, incorporating the proposed modification into a state-of-the-art Feasibility Pump code,
that reinforce the practical value of the new randomization step.
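The flavor of a WalkSAT-style randomization step can be sketched as follows, on 0/1 assignments and SAT-style clauses (a toy version; the paper's actual integration with the Feasibility Pump is not reproduced here):

```python
# WalkSAT-style perturbation of a stalled 0/1 assignment x: pick one
# violated clause and flip a variable occurring in it, at random with
# probability p, otherwise greedily (fewest violated clauses afterwards).

import random

def violated(clauses, x):
    # a clause is a list of literals: +v means x[v] = 1, -v means x[v] = 0
    return [c for c in clauses if not any(
        (x[abs(l)] == 1) == (l > 0) for l in c)]

def walksat_step(clauses, x, p=0.5, rng=random):
    viol = violated(clauses, x)
    if not viol:
        return x
    clause = rng.choice(viol)
    if rng.random() < p:
        v = abs(rng.choice(clause))          # random-walk move
    else:                                    # greedy move
        def breakage(v):
            y = dict(x)
            y[v] = 1 - y[v]
            return len(violated(clauses, y))
        v = min((abs(l) for l in clause), key=breakage)
    x = dict(x)
    x[v] = 1 - x[v]
    return x
```

Unlike a blind random flip, the move is always taken inside a violated constraint, which is what gives WalkSAT-style randomization its directed character compared to the original FP restart.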

Abstract:

The Feasibility Pump (FP) is probably the best known primal heuristic for mixed integer programming.
The original work by Fischetti, Glover, and Lodi, which introduced the heuristic for 0-1 mixed-integer
linear programs, has been succeeded by more than twenty follow-up publications which improve the
performance of the FP and extend it to other problem classes. Year 2015 was the tenth anniversary of the
first FP publication. The present paper provides an overview of the diverse Feasibility Pump literature
that has been presented over the last decade.


31.

Chasing First Queens by Integer Programming

M. Fischetti, D. Salvagnin

CPAIOR 2018 Proceedings, 232-244

Abstract:

The n-queens puzzle is a well-known combinatorial problem that requires placing n queens on an n x n
chessboard so that no two queens can attack each other. Since the 19th century, this problem has been
studied by many mathematicians (including Carl Friedrich Gauss) and, more recently, by Edsger Dijkstra, who used
it to illustrate a depth-first backtracking algorithm. While finding a solution to the n-queens puzzle
is rather straightforward, the problem of counting the number of such solutions is quite challenging and
received some attention in recent years. Very recently, in a private correspondence, Donald E. Knuth
pointed us to another very challenging version of the n-queens problem, namely, finding the
lexicographically-first (or smallest) feasible solution. Solutions of this type are known in the
literature for n ≤ 55, while for some larger chessboards only partial solutions are known.
The present paper was motivated by Knuth's question of whether Integer Linear Programming (ILP) can be
used to compute solutions for some open instances. We describe alternative ILP-based solution approaches,
and show that they are indeed able to compute (sometimes in unexpectedly-short computing times) many new
lexicographically optimal solutions for n ranging from 56 to 115.
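For small n, the lexicographically-first solution is exactly what plain depth-first backtracking returns when rows are filled top-down and columns are tried in increasing order; a minimal sketch (the paper's ILP approaches take over at the scales where this enumeration becomes hopeless):

```python
# Lexicographically-first n-queens by depth-first backtracking: the first
# complete placement found with columns tried in increasing order is the
# lex-smallest one. sol[r] is the (1-based) column of the queen in row r.

def first_queens(n):
    sol = []
    def safe(row, col):
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(sol))
    def dfs(row):
        if row == n:
            return True
        for col in range(1, n + 1):
            if safe(row, col):
                sol.append(col)
                if dfs(row + 1):
                    return True
                sol.pop()
        return False
    dfs(0)
    return sol
```

first_queens(8) returns [1, 5, 8, 6, 3, 7, 2, 4], the classical first 8-queens solution; the point of the paper is that this kind of enumeration does not reach the open instances with n > 55.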

Abstract:

We propose a way to derive symmetry breaking inequalities for a MIP model from the Schreier-Sims
table of the formulation group. We then show how to consider only the action of the formulation group
on a subset of the variables of interest. Computational results show that this can lead to considerable
speedups on some classes of models.

Abstract:

The position of each source and detector optode on the scalp, and their relative separations, determine
the sensitivity of each functional near-infrared spectroscopy (fNIRS) channel to the underlying cortex.
As a result, selecting appropriate scalp locations for the available sources and detectors is critical
to every fNIRS experiment. At present, it is standard practice for the user to undertake this task
manually: to select what they believe are the best locations on the scalp to place their optodes so as
to sample a given cortical region-of-interest (ROI). This process is difficult, time-consuming, and
highly subjective. Here, we propose a tool, Array Designer, that is able to automatically design
optimized fNIRS arrays given a user-defined ROI and certain features of the available fNIRS device.
Critically, the Array Designer methodology is generalizable and will be applicable to almost any
subject population or fNIRS device. We describe and validate the algorithmic methodology that
underpins Array Designer by running multiple simulations of array design problems in a realistic
anatomical model. We believe that Array Designer has the potential to end the need for manual array
design, and in doing so save researchers time, improve fNIRS data quality, and promote
standardization across the field.