3. Metamodel-based multi-objective optimization of a turning process by using finite element simulation

This study investigates the advantages and potentials of metamodel-based multi-objective optimization (MOO) of a turning operation through the application of finite element simulations and evolutionary algorithms to a metal cutting process. The objectives are minimizing the interface temperature and tool wear depth obtained from FE simulations using the DEFORM2D software, and maximizing the material removal rate. Tool geometry and process parameters are considered as the input variables. Seven metamodelling methods are employed and evaluated based on accuracy and suitability. Radial basis functions with a priori bias and Kriging are chosen to model the tool–chip interface temperature and tool wear depth, respectively. The non-dominated solutions are found using the strength Pareto evolutionary algorithm (SPEA2) and compared with the non-dominated front obtained from pure simulation-based MOO. The metamodel-based MOO method is not only advantageous in reducing the computational time by 70%, but is also able to discover 31 new non-dominated solutions over simulation-based MOO.
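As a minimal illustration of the non-dominated filtering that underlies this kind of MOO (assuming three minimized objectives, with material removal rate negated so that all objectives are minimized; the numbers are invented, not DEFORM2D results):

```python
# Hypothetical sketch of Pareto filtering for the three objectives above:
# minimize temperature, minimize wear depth, maximize MRR (stored negated).

def dominates(a, b):
    """True if a dominates b: no worse in every objective, strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the subset of solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Each tuple: (interface temperature, tool wear depth, negated MRR).
candidates = [
    (650.0, 0.12, -80.0),
    (700.0, 0.10, -95.0),
    (660.0, 0.15, -70.0),   # dominated by the first candidate
]
front = non_dominated(candidates)
```

Both SPEA2 and the simulation-based comparison in the abstract ultimately rely on this dominance relation to build their non-dominated fronts.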

Radial basis functions are commonly augmented with an a posteriori bias in order to perform robustly when used as metamodels. Recently, it has been proposed that the bias can simply be set a priori by using the normal equation, i.e., the bias becomes the corresponding regression model. In this study, we compare the performance of the suggested approach (RBFpri) with four other well-known metamodeling methods: Kriging, support vector regression, neural networks and multivariate adaptive regression. The performance of the five methods is investigated in a comparative study using 19 mathematical test functions, with five different degrees of dimensionality and sampling sizes for each function. The performance is evaluated by root mean squared error, representing accuracy; rank error, representing the suitability of metamodels when coupled with evolutionary optimization algorithms; training time, representing efficiency; and the variation of root mean squared error, representing robustness. Furthermore, a rigorous statistical analysis of the performance metrics is performed. The results show that the proposed radial basis function with a priori bias achieved the best performance in most of the experiments in terms of all three metrics. When considering the statistical analysis results, the proposed approach again performed best, while Kriging was comparably accurate and support vector regression was almost as fast as RBFpri. The proposed RBF is proven to be the most suitable method in predicting the ranking among pairs of solutions utilized in evolutionary algorithms. Finally, the comparison study is carried out on a real-world engineering optimization problem.
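The a priori bias idea described above can be sketched as follows: a linear model is first fitted by the normal equation, and a Gaussian RBF then interpolates only the residuals. The kernel choice, its width, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_rbf_with_prior_bias(X, y, width=1.0):
    # A priori bias: ordinary least squares via the normal equation.
    P = np.hstack([np.ones((len(X), 1)), X])        # [1, x] design matrix
    beta = np.linalg.solve(P.T @ P, P.T @ y)        # (P^T P) beta = P^T y
    residual = y - P @ beta
    # Gaussian RBF interpolation of the residuals (small jitter for stability).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), residual)

    def predict(Xq):
        Pq = np.hstack([np.ones((len(Xq), 1)), Xq])
        dq2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return Pq @ beta + np.exp(-dq2 / (2 * width ** 2)) @ w

    return predict

X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 0])
model = fit_rbf_with_prior_bias(X, y)
```

Because the RBF part interpolates the residuals exactly, the metamodel reproduces the training responses while the linear bias carries the global trend.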

Many real-world production systems are complex in nature, and it is a real challenge to find an efficient scheduling method that satisfies the production requirements as well as utilizes the resources efficiently. Tools like discrete-event simulation (DES) are very useful for modeling these systems and can be used to test and compare different schedules before dispatching the best ones to the targeted systems. DES alone, however, cannot be used to find the "optimal" schedule. Simulation-based optimization (SO) can be used to search for optimal schedules efficiently without much user intervention. Since long computing times may deter the use of SO for industrial scheduling, various techniques to speed up the SO process have to be explored. This paper presents a case study that shows the use of a Web-based parallel and distributed SO platform to support the operations scheduling of a machining line in an automotive factory.

Simulation modeling has the capability to represent complex real-world systems in detail and is therefore suitable for developing simulation models that generate detailed operation plans to control the shop floor. In the literature, there are two major approaches for tackling simulation-based scheduling problems, namely dispatching rules and meta-heuristic search algorithms. The purpose of this paper is to illustrate the advantages of combining these two approaches. More precisely, this paper introduces a novel hybrid genetic representation that combines a partially completed schedule (direct) with optimal dispatching rules (indirect), setting the schedules for critical stages (e.g. bottlenecks) and non-critical stages, respectively. When applied to an industrial case study, this hybrid method has been found to outperform the two common approaches, in terms of finding reasonably good solutions within a shorter time period for most of the complex scheduling scenarios.
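The hybrid representation can be sketched roughly as follows: a chromosome that directly encodes a job permutation for a critical (bottleneck) stage and, for each non-critical stage, an index into a set of dispatching rules. The rule set and names are assumptions for illustration, not the paper's encoding.

```python
import random

RULES = ["FIFO", "SPT", "EDD"]          # candidate dispatching rules (assumed)

def random_chromosome(n_jobs, n_other_stages, rng):
    direct = list(range(n_jobs))        # direct part: job order at the bottleneck
    rng.shuffle(direct)
    # Indirect part: one rule index per non-critical stage.
    indirect = [rng.randrange(len(RULES)) for _ in range(n_other_stages)]
    return direct, indirect

def decode(chromosome):
    direct, indirect = chromosome
    return {"bottleneck_sequence": direct,
            "stage_rules": [RULES[i] for i in indirect]}

rng = random.Random(42)
plan = decode(random_chromosome(n_jobs=5, n_other_stages=3, rng=rng))
```

A genetic algorithm would then apply permutation operators to the direct part and ordinary discrete mutation/crossover to the rule indices.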

7. Simulation-based Scheduling using a Genetic Algorithm with Consideration to Robustness

The use of optimization to solve a simulation-based multi-objective problem produces a set of solutions that provide information about the trade-offs that have to be considered by the decision maker. An incomplete or sub-optimal set of solutions will negatively affect the quality of any subsequent decisions. The parameters that control the search behavior of an optimization algorithm can be used to minimize this risk. However, choosing good parameter settings for a given optimization algorithm and problem combination is difficult. The aim of this paper is to take a step towards optimal parameter settings for optimization of simulation-based problems. Two parameter tuning methods, Latin Hypercube Sampling and Genetic Algorithms, are used to maximize the performance of NSGA-II applied to a simulation-based problem with discrete variables. The strengths and weaknesses of both methods are analyzed. The effect of the number of decision variables and the function budget on the optimal parameter settings is also studied.
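Latin Hypercube Sampling of a parameter space, as used in the tuning experiments above, can be sketched in a few stdlib lines, assuming each parameter is scaled to [0, 1]: every parameter range is split into n equal strata and each stratum is used exactly once, in shuffled order.

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """Return n_samples points in [0, 1]^n_params with one draw per stratum."""
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        # One uniform draw inside each stratum of width 1/n_samples.
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

rng = random.Random(0)
samples = latin_hypercube(n_samples=4, n_params=2, rng=rng)
```

Each sampled point would then be mapped to concrete NSGA-II parameter values (population size, crossover probability, etc.) and evaluated by running the optimizer.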

Evolutionary optimization algorithms typically use one or more parameters that control their behavior. These parameters, which are often kept constant, can be tuned to improve the performance of the algorithm on specific problems. However, past studies have indicated that the performance can be further improved by adapting the parameters during runtime. A limitation of these studies is that they only control, at most, a few parameters, thereby missing potentially beneficial interactions between them. Instead of finding a direct control mechanism, the novel approach in this paper is to use different parameter sets in different stages of an optimization. These multiple parameter sets, which remain static within each stage, are tuned through extensive bi-level optimization experiments that approximate the optimal adaptation of the parameters. The algorithmic performance obtained with tuned multiple parameter sets is compared against that obtained with a single parameter set. For the experiments in this paper, the parameters of NSGA-II are tuned when applied to the ZDT, DTLZ and WFG test problems. The results show that using multiple parameter sets can significantly increase the performance over a single parameter set.

Evolutionary optimization algorithms have parameters that are used to adapt the search strategy to suit different optimization problems. Selecting the optimal parameter values for a given problem is difficult without a priori knowledge. Experimental studies can provide this knowledge by finding the best parameter values for a specific set of problems. This knowledge can also be constructed into heuristics (rules of thumb) that adapt the parameters to the problem. The aim of this paper is to assess the heuristics of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm. This is accomplished by tuning the CMA-ES parameters so as to maximize its performance on the CEC'15 problems, using a bilevel optimization approach that searches for the optimal parameter values. The optimized parameter values are compared against the parameter values suggested by the heuristics. The difference between specialized and generalized parameter values is also investigated.

The performance of an Evolutionary Algorithm (EA) can be greatly influenced by its parameters. The optimal parameter settings are also not necessarily the same across different problems. Finding the optimal set of parameters is therefore a difficult and often time-consuming task. This paper presents results of parameter tuning experiments on the NSGA-II and NSGA-III algorithms using the ZDT test problems. The aim is to gain new insights into the characteristics of the optimal parameter settings and to study whether the parameters have the same effect on both NSGA-II and NSGA-III. The experiments also test whether the rule of thumb that the mutation probability should be set to one divided by the number of decision variables is a good heuristic on the ZDT problems. A comparison of the performance of NSGA-II and NSGA-III on the ZDT problems is also made.

The runtime of an evolutionary algorithm can be reduced by increasing the number of parallel evaluations. However, increasing the number of parallel evaluations can also result in wasted computational effort, since there is a greater probability of creating solutions that do not contribute to convergence towards the global optimum. A trade-off therefore arises between runtime and computational effort for different levels of parallelization of an evolutionary algorithm. When the computational effort is translated into cost, the trade-off can be restated as runtime versus cost. This trade-off is particularly relevant for cloud computing environments, where the computing resources can be exactly matched to the level of parallelization of the algorithm, and the cost is proportional to the runtime and the number of instances used. This paper empirically investigates this trade-off for two different evolutionary algorithms, NSGA-II and differential evolution (DE), when applied to a multi-objective discrete-event simulation (DES) based problem. Both generational and steady-state asynchronous versions of both algorithms are included. The approach is to perform parameter tuning on a simplified version of the DES model. A subset of the best configurations from each tuning experiment is then evaluated on a cloud computing platform. The results indicate that, for the included DES problem, the steady-state asynchronous version of each algorithm provides a better runtime versus cost trade-off than the generational versions and that DE outperforms NSGA-II.

Most optimization algorithms expose important algorithmic design decisions as control parameters. This is necessary because different problems can require different search strategies to be solved effectively. The control parameters allow the optimization algorithm to be adapted to the problem at hand. It is, however, difficult to predict what the optimal control parameters are for any given problem. Finding these optimal control parameter values is referred to as the parameter tuning problem. One approach to solving the parameter tuning problem is bilevel optimization, where the parameter tuning problem itself is formulated as an optimization problem with algorithmic performance as the objective(s). In this paper, we present a framework and architecture that can be used to solve large-scale parameter tuning problems using a bilevel optimization approach. The proposed framework is used to show that evolutionary algorithms are competitive as tuners against irace, which is a state-of-the-art tuning method. Two evolutionary algorithms, differential evolution (DE) and a genetic algorithm (GA), are evaluated as tuner algorithms using the proposed framework and software architecture. The importance of replicating optimizations and avoiding local optima is also investigated. The architecture is deployed and tested by running millions of optimizations on a computing cluster. The results indicate that the evolutionary algorithms can consistently find better control parameter values than irace. The GA, however, needs to be configured with an explicit exploration and exploitation strategy in order to avoid local optima.
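The bilevel structure described above can be caricatured in a few lines: an outer loop searches over a control parameter, while replicated inner runs estimate the (noisy) performance of the tuned optimizer. The inner optimizer here is a toy stand-in, not DE, the GA, or irace.

```python
import random

def inner_optimizer(step_size, rng, iters=30):
    """Toy (1+1) search minimizing x^2; returns the best value found."""
    x = rng.uniform(-5, 5)
    for _ in range(iters):
        cand = x + rng.gauss(0, step_size)
        if cand * cand < x * x:
            x = cand
    return x * x

def tune(candidate_step_sizes, replications, rng):
    """Upper level: pick the step size with the best mean performance.
    Replication mitigates the stochastic noise of the inner algorithm."""
    def performance(s):
        return sum(inner_optimizer(s, rng) for _ in range(replications)) / replications
    return min(candidate_step_sizes, key=performance)

rng = random.Random(1)
best = tune([0.001, 0.5, 50.0], replications=20, rng=rng)
```

In the paper's setting, the upper level would itself be an evolutionary algorithm and the lower level full multi-objective optimization runs, which is why the experiments require a computing cluster.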

14. On the Trade-off Between Runtime and Evaluation Efficiency In Evolutionary Algorithms

This paper presents a simulation-optimization system for personnel scheduling. The system is developed for the Swedish postal services and aims at finding personnel schedules that minimize both the total man-hours and the administrative burden of the person responsible for handling schedules. For the optimization, the multi-objective evolutionary algorithm NSGA-II is implemented. The simulation-optimization system is evaluated on a real-world test case, and the results from the evaluation show that the algorithm is successful in optimizing the problem.

Research regarding supply chain optimisation has been performed for a long time. However, it is only in the last decade that the research community has started to investigate multi-objective optimisation for supply chains. Supply chains are in general complex networks composed of autonomous entities in which multiple performance measures at different levels, which in most cases are in conflict with each other, have to be taken into account. In this chapter, we present a comprehensive literature review of existing multi-objective optimisation applications, both analytical-based and simulation-based, in supply chain management publications. Later in the chapter, we identify the need for an integration of multi-objective optimisation and system dynamics models, and present a case study on how such an integration can be applied to the investigation of bullwhip effects in a supply chain.

Agent-based simulation (ABS) represents a paradigm in the modelling and simulation of complex and dynamic systems distributed in time and space. Since manufacturing and logistics operations are characterised by distributed activities as well as decision making - in both time and space - and can be regarded as complex, the ABS approach is highly appropriate for these types of systems. The aim of this chapter is to present a new framework for applying ABS and simulation-based optimisation techniques to supply chain management, which considers the entities (supplier, manufacturer, distributor and retailer) in the supply chain as intelligent agents in a simulation. This chapter also gives an outline of how these agents pursue their local objectives/goals as well as how they react and interact with each other to achieve more holistic objectives/goals.

Purpose

The purpose of this study is to introduce an effective methodology for obtaining Pareto-optimal solutions when combining System Dynamics (SD) and Multi-Objective Optimization (MOO) for supply chain problems.

Design/methodology/approach

This paper proposes a new approach that combines SD and MOO within a simulation-based optimization framework to generate the efficient frontier that supports decision-making in Supply Chain Management (SCM). It also addresses the issue of the curse of dimensionality, commonly found in practical optimization problems, through design space reduction.

Findings

The integrated MOO and SD approach has been shown to be very useful in revealing how the decision variables in the Beer Game affect the optimality of the three common SCM objectives, namely, the minimization of inventory, backlog, and the bullwhip effect. The results of the in-depth Beer Game study clearly show that these three optimization objectives are in conflict with each other, in the sense that a supply chain manager cannot minimize the bullwhip effect without increasing the total inventory and total backlog levels.

Practical implications

Having a methodology that enables the effective generation of optimal trade-off solutions, in terms of computational cost and time as well as solution diversity and intensification, not only assists decision makers in making decisions on time, but also presents a diverse and intensified solution set to choose from.

Originality/value

This paper presents a novel supply chain MOO methodology that helps to find Pareto-optimal solutions in a more effective manner. In order to do so, the methodology tackles the so-called curse of dimensionality by reducing the design space and focusing the optimization search on regions of interest. Together with design space reduction, it is believed that the integrated SD and MOO approach can provide an innovative and efficient method for the design and analysis of manufacturing supply chain systems in general.
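The bullwhip effect that appears as an objective in the findings above is commonly quantified as the ratio of the variance of orders placed upstream to the variance of customer demand, with a ratio above 1 indicating amplification. A hedged sketch with invented series (not Beer Game output):

```python
from statistics import pvariance

def bullwhip_ratio(orders, demand):
    """Variance amplification of upstream orders relative to demand."""
    return pvariance(orders) / pvariance(demand)

demand = [4, 4, 8, 8, 8, 8, 8, 8]      # step demand, as in the classic Beer Game
orders = [4, 2, 10, 14, 6, 9, 8, 8]    # amplified, oscillating upstream orders

ratio = bullwhip_ratio(orders, demand)
```

Minimizing this ratio typically requires smoother ordering policies, which is exactly what drives the conflict with inventory and backlog noted in the findings.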

Research regarding supply chain optimization has been performed for a long time. However, it is only in the last decade that the research community has started to investigate multi-objective optimization for supply chains. Supply chains are in general complex networks composed of autonomous entities in which multiple performance measures at different levels, which in most cases are in conflict with each other, have to be taken into account. In this paper, we present a literature review of existing multi-objective optimization applications, both analytical-based and simulation-based, in supply chain management publications. Based on the literature review, the need for a multi-objective and multi-level optimization framework for supply chain management is identified. Such a framework considers not only the optimization of the overall supply chain, but also the optimization of each entity within the supply chain, in a multi-objective optimization context.

23. Multi-objective Optimization and Analysis of the Inventory Management Model

The aim of this paper is to address the dilemma of supply chain management (SCM) within a truly Pareto-based multi-objective context. This is done by introducing an integration of system dynamics and multi-objective optimisation. An extended version of the well-known pedagogical SCM problem, the Beer Game, originally developed at MIT in the 1960s, has been used as the illustrative example. As will be discussed in the paper, the integrated multi-objective optimisation and system dynamics model has been shown to be very useful for revealing how the parameters in the Beer Game affect the optimality of the three common SCM objectives, namely, the minimisation of inventory cost, backlog cost, and the bullwhip effect.

The aim of this paper is to address the dilemma of Supply Chain Management (SCM) within a truly Pareto-based multi-objective context. This is done by introducing an integration of System Dynamics and Multi-Objective Optimization. Specifically, the paper contrasts local optimization with global optimization for SCM, in which optimal trade-off solutions at the entity level, i.e., optimizing the supply chain from the perspectives of individual (local) entities, e.g., supplier, factory, distributor and retailer, are collected and compared to those obtained from an overall supply chain level (global) optimization. An extended version of the well-known pedagogical SCM problem, the Beer Game, originally developed at MIT in the 1960s, has been used as the illustrative example. As will be discussed in the paper, the integrated multi-objective optimization and system dynamics model has been shown to be very useful for revealing how the parameters in the Beer Game affect the optimality of the three common SCM objectives, namely, the minimization of inventory, backlog, and the bullwhip effect.

26. Strategy evaluation using system dynamics and multi-objective optimization for an internal supply chain

System dynamics, an approach built on information feedback and delays in the model in order to understand the dynamic behavior of a system, has been implemented successfully for supply chain management problems for many years. However, research on multi-objective optimization of supply chain problems modelled through system dynamics has been scarce. Supply chain decision making is much more complex than a single-objective optimization problem, due to the fact that supply chains are subject to multiple performance measures when optimizing their processes. This paper presents an industrial application study utilizing a simulation-based optimization framework that combines system dynamics simulation and multi-objective optimization. The industrial study presents a conceptual system dynamics model of an internal logistics system, with the aim of evaluating the effects of different material flow control strategies by minimizing the total system work-in-process as well as the total delivery delay.

Virtual production development is adopted by many companies in the production industry, and digital models and virtual tools are utilized for strategic, tactical and operational decisions in almost every stage of the value chain. This paper suggests a testbed concept that aims to help the production industry adopt a virtual production development process with integrated tool chains that enable holistic optimizations, all the way from the overall supply chain performance down to individual equipment/devices. The testbed, which is fully virtual, provides a means for the development and testing of integrated digital models and virtual tools, covering both technical and methodological aspects.

Machine reconditioning projects extend the life of old machines at a reduced investment; however, they frequently involve complex challenges. Due to the lack of technical documentation and the fact that the machines are running in production, they can require a reverse engineering phase and extremely short commissioning times. Recently, emulation software has become a key tool to create Digital Twins and carry out virtual commissioning of new manufacturing systems, reducing the commissioning time and increasing its final quality. This paper presents an industrial application study in which an emulation model is used to support a reconditioning project, highlighting the benefits gained in the working process.

Industrial automated systems are mostly designed and pre-adjusted to always work at their maximum production rate. Considering the production rate variations of real factories, this leaves room for significant reductions in energy consumption. This article presents a multi-objective optimization application targeting the cycle time and energy consumption of a robotic cell. A novel approach is presented in which an existing emulation model of a fictitious robotic cell was extended with low-level electrical components modeled and encapsulated as FMUs. The model, commanded by PLC and robot control software, was subjected to a multi-objective optimization algorithm in order to find the Pareto front between energy consumption and production rate. The result of the optimization process allows selecting the most efficient energy consumption for the robotic cell in order to achieve the required cycle time.

We consider a bilevel parameter tuning problem where the goal is to maximize the performance of a given multi-objective evolutionary optimizer on a given problem. The search for optimal algorithmic parameters requires the assessment of several sets of parameters, through multiple optimization runs, in order to mitigate the effect of noise that is inherent to evolutionary algorithms. This task is computationally expensive and therefore, in this paper, we propose to use sampling and metamodeling to approximate the performance of the optimizer as a function of its parameters. While such an approach is not unheard of, the choice of the metamodel to be used still remains unclear. The aim of this paper is to empirically compare 11 different metamodeling techniques with respect to their accuracy and training times in predicting two popular multi-objective performance metrics, namely, the hypervolume and the inverted generational distance. For the experiments in this pilot study, NSGA-II is used as the multi-objective optimizer for solving ZDT problems, 1 through 4.
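One of the two performance metrics mentioned above, the hypervolume, can be made concrete for the two-objective case. This is a compact sketch assuming minimization; the points and reference point are illustrative, not taken from the study.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective (minimization) front w.r.t. ref."""
    pts = sorted(front)                       # ascending in f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))
```

The metamodels in the paper would be trained to predict precisely such metric values as a function of the optimizer's parameters, avoiding repeated full optimization runs.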

This paper generalizes the automated innovization framework using genetic programming in the context of higher-level innovization. Automated innovization is an unsupervised machine learning technique that can automatically extract significant mathematical relationships from Pareto-optimal solution sets. These resulting relationships describe the conditions for Pareto-optimality for the multi-objective problem under consideration and can be used by scientists and practitioners as rules of thumb to understand the problem better and to innovate new problem solving techniques; hence the name innovization (innovation through optimization). Higher-level innovization involves performing automated innovization on multiple Pareto-optimal solution sets obtained by varying one or more problem parameters. The automated innovization framework was recently updated using genetic programming. We extend this generalization to perform higher-level automated innovization and demonstrate the methodology on a standard two-bar bi-objective truss design problem. The procedure is then applied to a classic case of inventory management with multi-objective optimization performed at both the system and process levels. The applicability of automated innovization to this area should motivate its use in other avenues of operational research.

Multi-objective evolutionary algorithms (MOEAs) are often criticized for their high computational costs. This becomes especially relevant in simulation-based optimization, where the objectives lack a closed form and are expensive to evaluate. Over the years, meta-modeling or surrogate modeling techniques have been used to build inexpensive approximations of the objective functions, which reduce the overall number of function evaluations (simulations). Some recent studies, however, have pointed out that accurate models of the objective functions may not be required at all, since evolutionary algorithms only rely on the relative ranking of candidate solutions. Extending this notion to MOEAs, algorithms which can 'learn' Pareto-dominance relations can be used to compare candidate solutions under multiple objectives. With this goal in mind, in this paper, we study the performance of ten different off-the-shelf classification algorithms for learning Pareto-dominance relations in the ZDT test suite of benchmark problems. We consider prediction accuracy and training time as performance measures with respect to dimensionality and skewness of the training data. Being a preliminary study, this paper does not include results of integrating the classifiers into the search process of MOEAs.
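How training data for such dominance classifiers can be labelled is sketched below: every ordered pair of evaluated objective vectors gets one of three classes (a dominates b, b dominates a, or incomparable). The vectors are arbitrary examples, not ZDT evaluations.

```python
def dominance_label(a, b):
    """Class label for the ordered pair (a, b) of minimized objective vectors."""
    a_better = all(x <= y for x, y in zip(a, b)) and a != b
    b_better = all(y <= x for x, y in zip(a, b)) and a != b
    if a_better:
        return 1        # a dominates b
    if b_better:
        return -1       # b dominates a
    return 0            # mutually non-dominated

def build_training_pairs(objectives):
    """All ordered pairs with their dominance labels, as classifier input."""
    return [((a, b), dominance_label(a, b))
            for i, a in enumerate(objectives)
            for j, b in enumerate(objectives) if i != j]

objs = [(0.1, 0.9), (0.2, 0.5), (0.3, 0.4), (0.4, 0.6)]
pairs = build_training_pairs(objs)
```

A classifier trained on such pairs could then replace expensive simulations when an MOEA only needs to know which of two candidates is preferable.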

Metamodeling plays an important role in simulation-based optimization by providing computationally inexpensive approximations for the objective and constraint functions. Additionally, metamodeling can serve to filter noise, which is inherent in many simulation problems and can mislead optimization algorithms. In this paper, we conduct a thorough statistical comparison of four popular metamodeling methods with respect to their approximation accuracy at various levels of noise. We use six scalable benchmark problems from the optimization literature as our test suite. The problems have been chosen to represent different types of fitness landscapes, namely, bowl-shaped, valley-shaped, steep ridges and multi-modal, all of which can significantly influence the impact of noise. Each metamodeling technique is used in combination with four different noise handling techniques that are commonly employed by practitioners in the field of simulation-based optimization. The goal is to identify the metamodeling strategy, i.e. a combination of metamodeling and noise handling, that performs significantly better than others on the fitness landscapes under consideration. We also demonstrate how these results carry over to a simulation-based optimization problem concerning a scalable discrete-event model of a simple but realistic production line.

Optimization of production systems often involves numerous simulations of computationally expensive discrete-event models. When derivative-free optimization is sought, one usually resorts to evolutionary and other population-based meta-heuristics. These algorithms typically demand a large number of objective function evaluations, which in turn, drastically increases the computational cost of simulations. To counteract this, meta-models are used to replace expensive simulations with inexpensive approximations. Despite their widespread use, a thorough evaluation of meta-modeling methods has not been carried out yet to the authors' knowledge. In this paper, we analyze 10 different meta-models with respect to their accuracy and training time as a function of the number of training samples and the problem dimension. For our experiments, we choose a standard discrete-event model of an unpaced flow line with scalable number of machines and buffers. The best performing meta-model is then used with an evolutionary algorithm to perform multi-objective optimization of the production model.

Practical multi-objective optimization problems often involve several decision variables that influence the objective space in different ways. All variables may not be equally important in determining the trade-offs of the problem. Decision makers, who are usually only concerned with the objective space, have a hard time identifying such important variables and understanding how the variables impact their decisions and vice versa. Several graphical methods exist in the MCDM literature that can aid decision makers in visualizing and navigating high-dimensional objective spaces. However, visualization methods that can specifically reveal the relationship between decision and objective space have not been developed so far. We address this issue through a novel visualization technique called trend mining that enables a decision maker to quickly comprehend the effect of variables on the structure of the objective space and easily discover interesting variable trends. The method uses moving averages with different windows to calculate an interestingness score for each variable along predefined reference directions. These scores are presented to the user in the form of an interactive heatmap. We demonstrate the working of the method and its usefulness through a benchmark and two engineering problems.
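The moving-average idea behind trend mining can be sketched as follows: solutions are assumed to be ordered along a reference direction, each variable is smoothed with two window sizes, and a variable scores higher when the smoothed trends disagree. The scoring formula here (spread between short- and long-window averages) is an assumption for illustration, not the paper's exact interestingness score.

```python
from statistics import pstdev

def moving_average(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def interestingness(variable_values, small=2, large=4):
    """Higher when short- and long-window trends diverge along the front."""
    short = moving_average(variable_values, small)
    long_ = moving_average(variable_values, large)
    n = min(len(short), len(long_))
    return pstdev(s - l for s, l in zip(short[:n], long_[:n]))

trending = [1, 2, 3, 4, 5, 6, 7, 8]      # smooth monotone trend: uninteresting
jumpy    = [1, 1, 1, 8, 8, 8, 1, 1]      # localized jump: structure to inspect

smooth_score = interestingness(trending)
jump_score = interestingness(jumpy)
```

In the method, such scores computed per variable and per reference direction would populate the interactive heatmap shown to the decision maker.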

Real-world optimization problems typically involve multiple objectives to be optimized simultaneously under multiple constraints and with respect to several variables. While multi-objective optimization itself can be a challenging task, equally difficult is the ability to make sense of the obtained solutions. In this two-part paper, we deal with data mining methods that can be applied to extract knowledge about multi-objective optimization problems from the solutions generated during optimization. This knowledge is expected to provide deeper insights about the problem to the decision maker, in addition to assisting the optimization process in future design iterations through an expert system. The current paper surveys several existing data mining methods and classifies them by methodology and type of knowledge discovered. Most of these methods come from the domain of exploratory data analysis and can be applied to any multivariate data. We specifically look at methods that can generate explicit knowledge in a machine-usable form. A framework for knowledge-driven optimization is proposed, which involves both online and offline elements of knowledge discovery. One of the conclusions of this survey is that while there are a number of data mining methods that can deal with data involving continuous variables, only a few ad hoc methods exist that can provide explicit knowledge when the variables involved are of a discrete nature. Part B of this paper proposes new techniques that can be used with such datasets and applies them to discrete variable multi-objective problems related to production systems.

The first part of this paper served as a comprehensive survey of data mining methods that have been used to extract knowledge from solutions generated during multi-objective optimization. The current paper addresses three major shortcomings of existing methods, namely, the lack of interactivity in the objective space, the inability to handle discrete variables, and the inability to generate explicit knowledge. Four data mining methods are developed that can discover knowledge in the decision space and visualize it in the objective space. These methods are (i) sequential pattern mining, (ii) clustering-based classification trees, (iii) hybrid learning, and (iv) flexible pattern mining. Each method uses a unique learning strategy to generate explicit knowledge in the form of patterns, decision rules and unsupervised rules. The methods are also capable of taking the decision maker's preferences into account to generate knowledge unique to preferred regions of the objective space. Three realistic production systems involving different types of discrete variables are chosen as application studies. A multi-objective optimization problem is formulated for each system and solved using NSGA-II to generate the optimization datasets. Next, all four methods are applied to each dataset. In each application, the methods discover similar knowledge for specified regions of the objective space. Overall, the unsupervised rules generated by flexible pattern mining are found to be the most consistent, whereas the supervised rules from classification trees are the most sensitive to user preferences.

Many simulation-based optimization packages provide powerful algorithms to solve industrial problems. But most of them fail to offer their users the techniques they need to effectively handle multiple-choice problems involving a large set of decision variables with mixed types (continuous, discrete and combinatorial) and problems that are highly constrained (e.g., with many equality constraints). Yet such issues are found in many real-world production system design and improvement problems. Thus, this paper introduces a method to effectively embed multiple-choice sets and Manhattan-distance-based constraint handling into multi-objective optimization algorithms like NSGA-II and NSGA-III. This paper illustrates and evaluates how these two techniques have been applied together to solve optimal workload, buffer and workforce allocation problems. An example follows, showing their application to a complex production system improvement problem at an automotive manufacturer.

Many simulation-based optimization packages provide powerful algorithms to solve large-scale system problems. But most of them fall short of offering their users techniques to effectively handle decision variables that are of multiple-choice type, as well as equality constraints, both of which can be found in many real-world industrial system design and improvement problems. Hence, this paper introduces how multiple-choice sets and Manhattan-distance-based constraint handling can be effectively embedded into a meta-heuristic algorithm for simulation-based optimization. It also presents how these two techniques have been applied together to make possible the improvement of a complex production system provided by an automotive manufacturer.
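The two techniques named in these abstracts can be sketched in miniature. This is a hedged illustration, not the papers' implementation: the choice-set decoding, the greedy repair heuristic, and the example values (buffer capacities, a workforce total of 10) are all assumptions, meant only to show how a multiple-choice gene and an equality constraint with a Manhattan (L1) distance measure might fit into an evolutionary loop.

```python
def decode_choice(gene, choices):
    # Map a real-coded gene in [0, 1) to one element of a multiple-choice set.
    idx = min(int(gene * len(choices)), len(choices) - 1)
    return choices[idx]

def manhattan_violation(x, target_sum):
    # Equality constraint sum(x) == target_sum; the violation is the L1
    # (Manhattan) distance to the nearest feasible allocation.
    return abs(sum(x) - target_sum)

def repair(x, target_sum):
    # Greedy Manhattan-minimal repair: shift one unit at a time.
    x = list(x)
    while sum(x) > target_sum:
        i = max(range(len(x)), key=lambda j: x[j]); x[i] -= 1
    while sum(x) < target_sum:
        i = min(range(len(x)), key=lambda j: x[j]); x[i] += 1
    return x

buffer_sizes = [1, 2, 5, 10]                # allowed buffer capacities
alloc = [decode_choice(g, buffer_sizes) for g in (0.1, 0.6, 0.95)]
fixed = repair([4, 1, 7], target_sum=10)    # workforce must total 10
```

The violation measure can drive a penalty or constraint-domination rule inside NSGA-II/NSGA-III, while the repair step restores feasibility after crossover and mutation.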

40. On the convergence of stochastic simulation-based multi-objective optimization for bottleneck identification

By innovatively formulating a bottleneck identification problem as a bi-objective optimization, simulation-based multi-objective optimization (SMO) can be effectively used as a new method for general production systems improvement. In a single optimization run, all attainable, maximum throughput levels of the system can be sought through various optimal combinations of improvement changes to the resources. Additionally, a post-optimality frequency analysis of the Pareto-optimal solutions can generate a rank order of the attributes of the resources required to achieve the target throughput levels. Observing that existing research mainly puts emphasis on measuring the convergence of the optimization in the objective space, leaving no information on when the solutions in the decision space have converged and stabilized, this paper represents the first effort to increase the knowledge about the convergence of SMO for rank ordering in the context of bottleneck analysis. By customizing Spearman's footrule distance and Kendall's tau, this paper presents how these metrics can be used effectively to provide the desired visual aid in determining the convergence of the bottleneck ranking, and hence can assist the user in correctly determining the terminating condition of the optimization process. It illustrates and evaluates the convergence of SMO for bottleneck analysis on a set of scalable benchmark models as well as two industrial simulation models. The results point to a promising direction for applying these new metrics to complex, real-world applications.
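Both rank-distance metrics mentioned above are straightforward to compute between two rankings. The sketch below is an illustration, not the paper's customized variants: the machine names and example generations are invented, and only the textbook forms of the two metrics are shown.

```python
from itertools import combinations

def footrule(rank_a, rank_b):
    # Spearman's footrule: sum of absolute rank displacements per item.
    pos_b = {m: i for i, m in enumerate(rank_b)}
    return sum(abs(i - pos_b[m]) for i, m in enumerate(rank_a))

def kendall_tau_distance(rank_a, rank_b):
    # Number of discordant pairs between two rankings of the same items.
    pos_a = {m: i for i, m in enumerate(rank_a)}
    pos_b = {m: i for i, m in enumerate(rank_b)}
    return sum(1 for x, y in combinations(rank_a, 2)
               if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)

# Bottleneck rankings from two consecutive optimization generations;
# a distance of 0 between generations signals a converged rank order.
gen_t  = ["M3", "M1", "M2", "M4"]
gen_t1 = ["M3", "M2", "M1", "M4"]
d_foot = footrule(gen_t, gen_t1)
d_tau  = kendall_tau_distance(gen_t, gen_t1)
```

Plotting such distances between successive generations gives the visual convergence aid the abstract refers to: the curve flattening at zero indicates the bottleneck ranking has stabilized.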

Bottleneck analysis can be defined as the process that includes both bottleneck identification and improvement. In the literature, most of the proposed bottleneck-related methods mainly address bottleneck detection. By innovatively formulating bottleneck analysis as a bi-objective optimization method, recent research has shown that all attainable, maximized throughput levels of a production system, through various combinations of improvement changes to the resources, can be sought in a single optimization run. Nevertheless, when applied with simulation-based evaluation, such a bi-objective optimization is computationally expensive, especially when the simulation model is complex and/or has a large number of decision variables representing the improvement actions. The aim of this paper is therefore to introduce a novel variable-screening-enabled bi-objective optimization that is customized for bottleneck analysis of production systems. By using the Sequential Bifurcation screening technique, which is particularly suitable for large-scale simulation models, fewer simulation runs are required to find the most influencing factors in a simulation model. With the knowledge of these input variables, the bi-objective optimization used in the bottleneck analysis can customize the genetic operators on these variables individually, according to their rank of main effects, with the target of speeding up the entire optimization process. The screening-enabled algorithm is then applied to a set of experiments designed to evaluate how well it performs when the number of variables increases, in a scalable benchmark model as well as in two real-world, industrial-scale simulation models from the automotive industry. The results illustrate the promising direction of incorporating knowledge of influencing variables and variable-wise genetic operators into a multi-objective optimization algorithm for bottleneck analysis.
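The core Sequential Bifurcation idea, halving factor groups until the influential factors are isolated, can be sketched on a toy model. This is an assumption-laden illustration, not the paper's procedure: the toy additive "simulation", the threshold, and the group-splitting policy are all invented to show why only a few runs are needed when most factors are negligible.

```python
def group_effect(simulate, low, high, group, n_factors):
    # Output change when the group's factors are set high (others low).
    base = [low] * n_factors
    up = [high if i in group else low for i in range(n_factors)]
    return simulate(up) - simulate(base)

def sequential_bifurcation(simulate, n_factors, low=0, high=1, threshold=0.5):
    # Recursively split groups whose aggregate effect exceeds the threshold;
    # assumes (as SB does) that effects are non-negative and additive.
    important, queue = [], [list(range(n_factors))]
    while queue:
        group = queue.pop()
        if group_effect(simulate, low, high, group, n_factors) <= threshold:
            continue  # whole group negligible: discard without splitting
        if len(group) == 1:
            important.append(group[0])
        else:
            mid = len(group) // 2
            queue += [group[:mid], group[mid:]]
    return sorted(important)

# Toy "simulation": only factors 0 and 5 matter.
toy = lambda x: 4.0 * x[0] + 2.5 * x[5] + 0.1 * x[3]
found = sequential_bifurcation(toy, n_factors=8)
```

Because whole groups with negligible aggregate effect are discarded after a single comparison, the number of model evaluations grows far more slowly than one-factor-at-a-time screening as the variable count increases.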

Manufacturing companies of today are under pressure to run their production most efficiently in order to sustain their competitiveness. Manufacturing systems usually have bottlenecks that impede their performance, and finding the causes of these constraints, or even identifying their locations, is not a straightforward task. SCORE (Simulation-based COnstraint REmoval) is a promising method for detecting and ranking bottlenecks of production systems that utilizes simulation-based multi-objective optimization (SMO). However, manually formulating a real-world, large-scale industrial bottleneck analysis problem into an SMO problem using the SCORE method involves tedious and error-prone tasks that may prevent manufacturing companies from benefiting from it. This paper presents how the greater part of the manual tasks can be automated by introducing a new, generic way of defining improvements of production systems, and illustrates how the simplified application of SCORE can assist manufacturing companies in identifying their production constraints.

A practical question in industry when designing or re-designing a production system is: how small can intermediate buffers be while still ensuring the desired production rate? This topic is usually called optimal buffer allocation, as the goal is to allocate the minimum buffer capacities that optimize the performance of the line. This paper presents a case study of using simulation-based evolutionary multi-objective optimization to determine the optimal buffer capacities and positions in the reconfiguration of a real-world truck axle assembly line at an automobile manufacturer. The case study not only reveals the applicability of the methodology for seeking optimal configurations in a truly multi-objective context, it also illustrates how additional important knowledge was gained by analyzing the optimization results in the objective space.
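The trade-off at the heart of buffer allocation, less total buffer capacity versus higher throughput, reduces to extracting the non-dominated configurations from simulated candidates. The sketch below is illustrative only: the candidate tuples (total buffer capacity, simulated throughput) are invented numbers, and the dominance rule simply encodes "minimize capacity, maximize throughput".

```python
def dominates(a, b):
    # a, b = (total_buffer, throughput): minimize the first objective,
    # maximize the second; a dominates b if it is no worse in both and
    # differs in at least one.
    return a != b and a[0] <= b[0] and a[1] >= b[1]

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# (total buffer capacity, simulated throughput) per candidate configuration.
candidates = [(10, 95.0), (14, 97.0), (12, 95.0), (14, 96.0), (8, 90.0)]
front = pareto_front(candidates)
```

In the case study, an evolutionary algorithm generates the candidates and a discrete-event simulation supplies the throughput values; the surviving front is exactly the set of configurations a decision maker needs to weigh capacity against production rate.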

This paper discusses information fusion, including its nature as well as some models for information fusion. Research on information fusion is dominated by defence applications and, therefore, most models are to a certain extent defence-specific; it is explained how they can be adapted to become more generic. It is explained how the manufacturing sector can benefit from information fusion research; some analogies between issues in manufacturing and issues in military applications are given. A specific area in which the manufacturing sector can benefit from research on information fusion is the area of virtual manufacturing. Many issues related to decision support through modelling, simulation and synthetic environments are identical for manufacturing and defence applications. A particular area of interest for the future will be verification, validation and accreditation of modelling and simulation components for synthetic environments with various involved parties.

45. The Information Fusion JDL-U model as a reference model for Virtual Manufacturing

This paper presents a description of Modelling and Simulation as used in the Virtual Systems Research Centre at the University of Skövde. It also gives a summarized account of issues discussed in previous work such as phases in a simulation project, Verification, Validation and Accreditation, and the use of simulation as a tool to reduce uncertainty. The role of the human in various phases/activities in simulation projects is highlighted. Examples of both traditional and advanced applications of Virtual Manufacturing are given. Examples of the latter are simulation-based monitoring and diagnostics, and simulation-based optimization. Two models for Information Fusion, the OODA loop and JDL-U model, are discussed, the latter being an extension of the JDL model that describes various levels of information fusion (JDL=“joint directors of laboratories”). Subsequently, the activities and phases in a Modelling and Simulation project are placed in the context of the JDL-U model. This comparison shows that there are very strong similarities between the six (0–5) levels in the JDL-U model and activities/phases in Modelling and Simulation projects. These similarities lead to the conclusion that the JDL-U model with its associated science base can serve as a novel reference model for Modelling and Simulation. In particular, the associated science base on the “user refinement” level could benefit the Virtual Manufacturing community.