Solving a Bi-Objective Winner Determination Problem in a Transportation Procurement Auction

Tobias Buer and Giselher Pankratz

Abstract: This paper introduces a bi-objective winner determination problem which arises in the procurement of transportation contracts via combinatorial auctions. The problem is modelled as an extension of the set covering problem and considers the minimisation of the total procurement costs and the maximisation of the service-quality level of the execution of all transportation contracts tendered. To solve the problem, an exact branch and bound algorithm and eight variants of a multiobjective genetic algorithm are proposed. The algorithms are tested on a set of new benchmark instances which comply with important economic features of the transportation domain. For some smaller instances, the branch and bound algorithm finds all optimal solutions. Large instances are used to compare the relative performance of the eight genetic algorithms. The results indicate that the quality of a solution depends largely on the initialisation heuristic and also suggest that a well-balanced combination of different operators is crucial to obtain good solutions. The best of the eight genetic algorithms is also evaluated on the small instances, with the results being compared to those of the exact branch and bound algorithm.

Keywords: bi-objective winner determination problem; multiobjective genetic algorithm; combinatorial auction

University of Hagen, Faculty of Business Administration and Economics, Department of Information Systems, Prof. Dr. H. Gehring, Profilstr. 8, Hagen, Germany

Please cite as: Buer, T. and Pankratz, G.: Solving a Bi-Objective Winner Determination Problem in a Transportation Procurement Auction, Working Paper No. 448, Faculty of Business Administration and Economics, University of Hagen (Germany), 2010.

1 Procurement of Transportation Contracts

Shippers, such as retailers and industrial enterprises, often procure the transportation services they require via reverse auctions, where the objects under auction are transportation contracts. Usually, such contracts are designed as framework agreements lasting for a period of one to three years, defining a pick-up location, a delivery location, and the type and volume of goods that are to be transported between both locations. Additionally, further details such as a contract-execution frequency (e.g. delivery twice a week) and the required quality of service (e.g. an on-time delivery quota) are specified in a transportation contract. A carrier can bid for one or more contracts. In each bid, the carrier states how much he wants to be paid for accepting the specified contracts. Transportation procurement auctions are of high economic relevance. Caplice and Sheffi [4] report on the size of real-world transportation auctions in which they were involved over a period of five years. According to their report, in a single transportation auction up to 470 (median 100) carriers participated, up to 5,000 (median 800) lanes were tendered, and the annual cost of transportation amounted to up to US-$ 700 million (median US-$ 75 million). Elmaghraby and Keskinocak [9] present a case study of a procurement auction event in which a do-it-yourself chain operating mainly in North America procured transportation services for about a quarter of the in-bound moves to their chain stores, which corresponds to over 600 lanes. In the study at hand, the terms lane and transportation contract are used interchangeably. In the scenario presented here there are a number of interesting problems on the carrier's as well as on the shipper's side.
This paper focuses on the allocation problem that has to be solved by the shipper after all bids are submitted. In particular, two characteristics of the given scenario are of interest. First, from a carrier's point of view, there are complementarities between some of the contracts. That is, the costs for executing some contracts simultaneously are lower than the sum of the costs of executing each of these contracts in isolation. The cost effect of such complementarities is also referred to as economies of scope. Second, the allocation of contracts to carriers has to be done taking into account multiple, often conflicting

decision criteria. While some of the criteria (e.g. limiting the total number of carriers employed) may be naturally expressed as side constraints, other criteria should be considered explicitly as objectives. In particular, there is usually a trade-off between the classical cost-minimisation goal on the one hand and the desire for high service quality on the other. Both objectives are of almost equal importance to most shippers, cf. Caplice and Sheffi [3] and Sheffi [19]. In their recent review of the carrier selection literature, Meixell and Norbis [16] identified that the issue of economies of scope is dealt with in only a few papers and should be emphasised in future research. In order to exploit economies of scope (i.e., complementarities) between contracts in the bidding process, the use of so-called combinatorial auctions is increasingly recommended [1], [2], [19]. Combinatorial auctions allow carriers to submit bids on any subset of all tendered contracts ("bundle bids"). Through this, carriers can express their preferences more extensively than in classical auction formats. However, bundle bidding complicates the selection of winning bids. This problem is known as the winner determination problem (WDP) of combinatorial auctions. In the procurement context, the WDP is usually modelled as a variant of a set partitioning or set covering problem, both of which are NP-hard combinatorial optimisation problems. For a survey on winner determination problems see e.g. [1]. As to the multiple-criteria property of the allocation problem, there are two ways by which most shippers solve the conflict between cost and quality goals: One way is to restrict participation in the auction to those carriers that comply with the minimum quality standard required to meet the quality demands of any of the contracts. Thus, the service-quality performance of all remaining carriers is considered equal, and the only objective is to minimise total procurement costs.
Unfortunately, unless the contract requirements are fairly homogeneous, this approach leads to the quality requirements of many contracts being exceeded. The second way is to take into account service-quality performance differences between carriers by applying penalties or bonuses to the bundle bid prices, depending e.g. on a carrier's service quality in previous periods. This paper focuses on a third alternative, which integrates quality and cost criteria by explicitly modelling the WDP as a bi-objective optimisation problem. This model extends a previous model presented in [2], which can be seen as a special case of the model presented in this paper. Previous work does not generally focus on modelling and solving winner determination problems under explicit consideration of multiple objectives. Different kinds of winner determination problems in combinatorial auctions for transportation contracts are treated in [4], [9], [14], [19], [20]. All these studies

focus on bundle bidding to exploit complementarities between contracts and consider minimisation of total procurement costs to be the only objective. The structure of the remaining paper is as follows: section two defines the bi-objective winner determination problem that is being studied. To solve this problem, an exact bi-objective branch and bound algorithm and a bi-objective genetic algorithm are introduced in section three. The algorithms are evaluated on newly generated benchmark instances in section four. Finally, section five gives an outlook on planned future work.

2 A Bi-Objective Winner Determination Problem (2WDP-SC)

The winner determination problem (WDP) of a combinatorial procurement auction with two objectives is a generalisation of the well-known set covering problem (SC). Hence the problem at hand is called 2WDP-SC. It is formulated as follows: Given are a set of transport contracts T, where t ∈ T denotes a single transport contract, and a set of bundle bids B, where a bundle bid b ∈ B is defined as a triple b := (c, τ, p), meaning that a carrier c ∈ C is willing to execute the subset of transport contracts τ ⊆ T at a price of p. Given is furthermore a set Q := {q_ct | c ∈ C, t ∈ T}, where q_ct ≥ 0 indicates the quality level at which carrier c fulfils the transport contract t. The task is to find a set of winning bids W ⊆ B such that every transport contract t is covered by at least one bid b. Furthermore, the total procurement costs, expressed in objective function f1, are to be minimised, and the total service quality, expressed in objective function f2, is to be maximised. The 2WDP-SC is modelled as follows:

min f1(W) = Σ_{b∈W} p(b)   (1)

max f2(W) = Σ_{t∈T} max{ q_ct | c ∈ {c(b) | b ∈ W ∧ t ∈ τ(b)} }   (2)

s.t. ∪_{b∈W} τ(b) = T   (3)

Each transport contract t has to be chosen at least once (3). Accordingly, some contracts may be covered by two or more winning bids and therefore paid for more than once by the shipper.
Hence, preferring a set covering to a set partitioning formulation might at first seem counterintuitive. However, given the same

set of bundle bids, the total cost of an optimal solution to the set covering problem never exceeds the total cost of an optimal set partitioning solution and might even be lower. Of course, a set partitioning formulation would be appropriate if each carrier could be forced to submit a bundle bid on each of the 2^|T| − 1 contract combinations. However, this seems unrealistic in practical scenarios due to the high number of possible combinations. For this reason, from the shipper's point of view, the set covering formulation appears more suitable. Nevertheless, if a contract is covered by more than one winning bid, there is at least one carrier who will not carry out this contract, although that carrier's bid won the auction. In the scenario at hand this is possible, as it appears reasonable to assume free disposal [18]. In the transportation-procurement context, free disposal means that a carrier has no disadvantage if he is asked by the shipper to carry out fewer contracts than he was paid for. The first objective function (1) minimises the total cost of the winning bids. The second objective function (2) maximises the total service-quality level of all transport contracts. Note that {c(b) | b ∈ W ∧ t ∈ τ(b)} is the set of carriers who have won a bid on transport contract t. Since contracts need to be executed only once, but may be part of more than one winning bid, it is not appropriate to simply add up the respective qualification values of all b ∈ W. Instead, it appears reasonable to assume that the shipper will break ties in favour of the bidder who offers the highest service level for a given contract. Hence, by assumption, for each transport contract t only the maximum qualification value q_ct with c ∈ {c(b) | b ∈ W ∧ t ∈ τ(b)} is added up. Note that this rule might introduce an incentive for the carriers towards undesired strategic-bidding behaviour. As this paper does not focus on auction-mechanism design, we leave this issue to forthcoming research.
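To make the model concrete, the following minimal Python sketch evaluates both objective functions and the covering constraint for a given set of winning bids. The bid representation, function names, and toy data are my own illustration, not taken from the paper:

```python
# Hedged sketch of the 2WDP-SC objectives: a bid is a hypothetical triple
# (carrier, set of contracts tau, price); q[(carrier, contract)] holds q_ct.

def f1(W):
    """Objective (1): total procurement cost, the sum of winning-bid prices."""
    return sum(price for _, _, price in W)

def f2(W, q):
    """Objective (2): for every covered contract, only the best quality level
    among the carriers that won a bid containing it is counted."""
    contracts = set().union(*(tau for _, tau, _ in W))
    return sum(max(q[(c, t)] for c, tau, _ in W if t in tau)
               for t in contracts)

def is_feasible(W, T):
    """Covering constraint (3): every tendered contract is won at least once."""
    return set().union(set(), *(tau for _, tau, _ in W)) >= T

# toy example: two carriers, three contracts; contract 2 is double-covered
T = {1, 2, 3}
q = {("c1", 1): 4, ("c1", 2): 2, ("c2", 2): 5, ("c2", 3): 3}
W = [("c1", {1, 2}, 10.0), ("c2", {2, 3}, 8.0)]
print(f1(W), f2(W, q), is_feasible(W, T))  # → 18.0 12 True
```

Contract 2 is won by both carriers but contributes only its maximum quality level 5, mirroring the tie-breaking assumption above.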
3 Solution Approaches for the Bi-Objective Winner Determination Problem

To solve the 2WDP-SC, this section presents two algorithms. The first is an exact algorithm based on the idea of branch and bound. Taking into account the NP-hardness of the bi-objective set covering problem, the non-linear objective function f2, and the large size of real-world problems, the branch and bound approach will probably solve only some of the relevant problems in reasonable time. Therefore, a second solution approach is presented, which is an extension of a successfully applied multiobjective genetic algorithm.

Both algorithms aim to find all trade-off solutions without weighting the two objective functions. Thus the shipper does not have to quantify his preferences, which can be challenging [19]. Both algorithms find a set of non-dominated solutions (the true Pareto set or a good approximation set, respectively). The shipper finally has to choose a solution from this set according to his subjective preferences. The latter is outside the scope of this study. For notational convenience, the 2WDP-SC is treated in the following as a pure minimisation problem, i.e. the objective function f2 is redefined as f2 := (−1) · f2 and is to be minimised. First, the underlying terminology is defined (cf. e.g. [24]): The set of all feasible solutions of an optimisation problem is denoted by X. A solution x ∈ X is evaluated by a vector-valued objective function f(x) = (f1(x), ..., fm(x)) with f(x) ∈ R^m. A solution x1 ∈ X dominates another solution x2 ∈ X (written x1 ≺ x2) if and only if no component of the vector-valued objective function f(x1) is larger and at least one component of f(x1) is smaller than the corresponding component of f(x2). A solution x is called Pareto optimal if there is no x' ∈ X that dominates x. The set of all Pareto optimal solutions is called the Pareto (solution) set Ω. A set of solutions Ω' is called an approximation of Ω, or (Pareto) approximation set, if every solution in Ω' is not dominated by any other solution in Ω'.

3.1 A Branch and Bound Algorithm Based on the Epsilon-Constraint Method

In order to solve the 2WDP-SC exactly, the epsilon-constraint method ([12], [5]) is used. The idea of the epsilon-constraint method is to optimise a single objective function, treating the other objective functions as additional side constraints whose values are each bounded by a particular ε. To obtain the Pareto set, a proper sequence of the resulting single-objective optimisation problems has to be solved for different values of ε.
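The dominance relation and the filtering of an approximation set, as defined above, can be sketched in a few lines (minimisation form for both components; the helper names are illustrative):

```python
# Hedged sketch of Pareto dominance for objective vectors in minimisation
# form (f2 is negated as described in the text, hence the negative values).

def dominates(fx1, fx2):
    """True iff fx1 is nowhere worse and somewhere strictly better than fx2."""
    return (all(a <= b for a, b in zip(fx1, fx2))
            and any(a < b for a, b in zip(fx1, fx2)))

def pareto_filter(solutions):
    """Keep only vectors not dominated by any other (an approximation set)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# (cost, negated quality): the second vector is dominated by the first
print(dominates((10, -12), (11, -12)))                     # → True
print(pareto_filter([(10, -12), (11, -12), (12, -15)]))    # → [(10, -12), (12, -15)]
```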
Here, the 2WDP-SC is scalarised by treating f2 as a side constraint. The derived single-objective minimisation problem is denoted as εWDP-SC and consists of the objective function (1), the covering constraint (3), and the epsilon-constraint f2(W) < ε. A general, problem-independent branch and bound approach based on linear relaxation, though seemingly natural, proved unsuitable for solving the εWDP-SC. This is due to the non-linearity of the second objective function f2, in which for each transport contract a max{.} term is calculated and the results are summed up over all contracts. To obtain a linear model, all max{.} terms have to be replaced by additional side constraints and additional decision variables (e.g. [21]). Compared to the |B| decision variables of the non-linear εWDP-SC, the linearised variant of the model contains |B| + |T| + |T|·|B| decision variables. For example, even for a small problem instance with 40 bundle bids and 20 contracts there

are already 860 decision variables. Therefore, a problem-specific branch and bound procedure is introduced to solve the εWDP-SC. This algorithm, referred to as εLookahead Branch and Bound (εLBB), consists of two main components. The first component (repeatLBBforDifferentEpsilons, Alg. 1) iteratively selects a feasible value for ε and hands it over to the second component, the actual branch and bound procedure LookaheadBB (Alg. 2). This procedure solves the εWDP-SC to find the cost-minimal solution for the given quality level ε. Alg. 1 initially determines the worst and the best possible values of f2, which relate to the maximum and minimum ε-values, respectively (keep in mind that f2 was redefined as a minimisation objective). On the one hand, the maximum (worst) feasible value for ε is calculated by solving the εWDP-SC using LookaheadBB with ε = 0. The obtained solution coincides with the minimal-cost solution of the set covering problem and is moreover the first Pareto optimal solution. On the other hand, the minimum (best) possible value for ε, denoted as ε_min, is simply given by f2(B) (generally, B is not in the Pareto set). After the minimum and maximum bounds for ε are known, repeatLBBforDifferentEpsilons triggers LookaheadBB to consecutively calculate the solutions in the Pareto set. Alg. 1 computes one solution in each iteration of the while-loop. The loop starts with the highest (worst) ε, calls LookaheadBB and then decreases ε to the f2 value of the current Pareto solution, until ε = ε_min. By this approach, the number of while-iterations required to find the Pareto set is minimal, i.e. the number of costly LookaheadBB calls is as low as possible.
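The epsilon loop just described can be sketched as follows. LookaheadBB itself is stubbed with a precomputed table of optima purely for illustration; the function and variable names are mine, not the paper's implementation:

```python
# Hedged sketch of the epsilon loop around the branch and bound.
# lookahead_bb(B, eps) is assumed to return the cost-minimal feasible
# solution whose (negated) quality value f2 lies below eps.

def repeat_lbb_for_different_epsilons(B, lookahead_bb, f2):
    W = lookahead_bb(B, 0)          # cost-minimal set covering solution
    pareto = [W]                    # first Pareto optimal solution
    eps = f2(W)                     # worst (highest) feasible epsilon
    eps_min = f2(B)                 # best possible value: take every bid
    while eps > eps_min:
        W = lookahead_bb(B, eps)    # next trade-off point
        pareto.append(W)
        eps = f2(W)                 # tighten the epsilon constraint
    return pareto

# stub: epsilon -> optimal solution, plus precomputed f2 values
table = {0: "W_a", -12: "W_b", -15: "W_c"}
vals = {"W_a": -12, "W_b": -15, "W_c": -17, "B": -17}
result = repeat_lbb_for_different_epsilons(
    "B", lambda B, e: table[e], lambda W: vals[W])
print(result)  # → ['W_a', 'W_b', 'W_c'], one LookaheadBB call per point
```

Each loop iteration yields exactly one new Pareto point, which is why the number of expensive branch and bound calls is minimal.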
Algorithm 1 repeatLBBforDifferentEpsilons
1: input: set of bundle bids B
2: W ← LookaheadBB(B, 0)
3: initialise approximation set Ω ← {W}
4: ε ← f2(W) // worst (highest) ε
5: ε_min ← f2(B) // best (lowest) ε
6: while ε > ε_min do
7: W ← LookaheadBB(B, ε)
8: Ω ← Ω ∪ {W}
9: ε ← f2(W)
10: end while
11: output: Ω, which is the Pareto set

The branch and bound procedure LookaheadBB (Alg. 2) solves the εWDP-SC for a given ε and the set of bundle bids B, represented as a sequence (b_i), b_i ∈ B, with 1 ≤ i ≤ max and max = |B|, by implicitly enumerating the solution space. The solution space is divided into subspaces which are represented in the

PN1 is only generated if b_{PN.i} contributes to reaching a feasible solution. This means that the current bundle bid b_{PN.i} has to cover at least one transport contract uncovered so far, or, if the epsilon constraint is not yet met, adding b_{PN.i} must reduce f2. On the other hand, PN2 is only generated if the current winning bids PN.W and the remaining free bids jointly lead to a feasible solution with respect to both the covering and the epsilon constraints. In checking this property, the algorithm has to look ahead on future bundle bids, which led to the label Lookahead in εLBB. Solving a relaxed problem to obtain a lower bound. For each problem node, a lower bound is calculated by solving a residual set covering problem which is defined through the remaining free bids and the transport contracts still uncovered, and by dropping the integrality constraints. LookaheadBB uses the procedure processNode (Alg. 3) to control how to continue processing a given PN. Provided that PN.W is feasible and a new lowest-cost solution is found, the current best solution and the current best cost value are updated. Additionally, all problem nodes from the queue whose lower bound is not less than the current best-known cost value are removed. Provided that PN.W is infeasible, a new lower bound PN.lb is computed. The lower bound equals f1(PN.W) plus the cost value of the optimal solution to the residual linear relaxed set covering problem. This set covering problem is defined by those contracts not covered by PN.W, which have to be covered by a subset of the bundle bids given by freebids.

Algorithm 3 processNode
1: input: problem node PN
2: if PN.W is feasible then
3: if f1(PN.W) < bestcost then
4: bestcost ← f1(PN.W)
5: bestsolution ← PN.W
6: delete all problem nodes in queue with lower bound ≥ bestcost
7: end if
8: else
9: if PN.i ≤ |B| then
10: PN.lb ← f1(PN.W) + cost of the linear relaxed solution to the residual set covering problem
11: add PN to queue
12: end if
13: end if
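A minimal sketch of this node handling is given below. The LP-based residual lower bound and the feasibility test are passed in as stand-ins, and the data structures are my own illustration rather than the paper's implementation:

```python
# Hedged sketch of processNode: either record a new incumbent and prune the
# open-node queue, or bound the node and enqueue it for later branching.
import heapq

def process_node(pn, state, lower_bound, f1, is_feasible, n_bids):
    if is_feasible(pn["W"]):
        cost = f1(pn["W"])
        if cost < state["best_cost"]:
            state["best_cost"] = cost
            state["best_solution"] = pn["W"]
            # prune: drop all open nodes that can no longer beat the incumbent
            state["queue"] = [n for n in state["queue"] if n[0] < cost]
            heapq.heapify(state["queue"])
    elif pn["i"] <= n_bids:
        pn["lb"] = f1(pn["W"]) + lower_bound(pn)   # residual LP bound
        heapq.heappush(state["queue"], (pn["lb"], pn["i"], pn))

# toy run: a feasible node improves the incumbent and prunes one open node
state = {"best_cost": 30.0, "best_solution": None,
         "queue": [(25.0, 1, {}), (18.0, 2, {})]}
node = {"W": [("c1", {1, 2}, 20.0)], "i": 3}
process_node(node, state,
             lower_bound=lambda pn: 0.0,
             f1=lambda W: sum(p for _, _, p in W),
             is_feasible=lambda W: True, n_bids=5)
print(state["best_cost"], [n[0] for n in state["queue"]])  # → 20.0 [18.0]
```

The node with lower bound 25.0 is discarded because it cannot improve on the new incumbent cost of 20.0.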

3.2 A Genetic Algorithm Based on SPEA2

To heuristically solve the 2WDP-SC, a multiobjective genetic algorithm (MOGA) is applied. This approach has proven suitable for solving hard multiobjective combinatorial optimisation problems, e.g. [7]. The proposed MOGA follows the Pareto approach and searches for a set of non-dominated solutions. To find a Pareto approximation set, a MOGA controls a set of core heuristics. The core heuristics of a MOGA can be divided into problem-specific and problem-independent operators. For those problem-independent operators which handle the specifics of population management in the multiobjective case (fitness-assignment strategy, selection of parents, and insertion of children into the population), the methods proposed by Zitzler et al. in their Strength Pareto Evolutionary Algorithm 2 (SPEA2) are applied [22], [23]. The decision to use SPEA2 rests on its competitive performance, particularly for solving bi-objective combinatorial optimisation problems [23]. In addition, standard bitflip mutation and standard uniform crossover [8] have been chosen as problem-independent mutation and crossover operators, respectively. As problem-specific operators, three core heuristics are introduced: Simple Insert, Greedy Randomised Construction and Remove If Feasible. Remove If Feasible is applied as a problem-specific mutation operator, whereas Simple Insert and Greedy Randomised Construction are both used to initialise a population as well as to repair an infeasible solution. The latter is necessary because both the uniform crossover operator and the bitflip mutation operator may end up with infeasible solutions. Since all three problem-specific core heuristics operate on encoded individuals, the chosen encoding is presented first. A binary encoding of a solution seems suitable for set covering-based problems like the 2WDP-SC. Every gene represents a bundle bid b. If b ∈ W the gene value is 1, and if b ∉ W the gene value is 0.
Simple Insert (SI) in each iteration randomly chooses, as a winning bid, a bundle bid b which contains at least one still uncovered transport contract. The transport contracts τ(b) in bid b are marked as covered. These steps are repeated until all contracts in T are covered, whereupon SI terminates. Greedy Randomised Construction (GRC) is inspired by the construction phase of the metaheuristic GRASP [11] and is slightly adapted for the bi-objective case (see Alg. 4). During each iteration, a winning bid is selected randomly from the restricted candidate list (RCL). Note that the RCL is an approximation set of best bundles, which holds only non-dominated bundles

Algorithm 4 GreedyRandomisedConstruction (GRC)
1: input: infeasible solution W
2: while W infeasible do
3: best bundle approximation set RCL ← {}
4: for all b ∈ B \ W do
5: if b not dominated by any b' ∈ RCL then
6: RCL ← RCL ∪ {b}
7: end if
8: end for
9: randomly choose a b from RCL
10: W ← W ∪ {b}
11: end while
12: output: feasible solution W

with respect to the rating function g := (g_p, g_q) with

g_p(b, W) = p(b) / |τ(b) \ τ(W)|   for |τ(b) \ τ(W)| > 0,
g_p(b, W) = ∞                      for |τ(b) \ τ(W)| = 0,
g_q(b, W) = (f2(W ∪ {b}) − f2(W)) / Σ_{b'∈W∪{b}} |τ(b')|,

where τ(W) denotes the set of contracts covered by the bids in W. Both functions assign smaller values to better bundles. While g_p rates a bundle according to the average additional costs attributed to each new (i.e. still uncovered) contract in b, g_q weights the reduction of f2 caused by adding b to the solution by the reciprocal of the total number of procured contracts (in the current solution). Remove If Feasible (RIF) randomly chooses a winning bid b ∈ W, labels b as visited, and removes b from W. If the solution W is still feasible after this, another randomly chosen winning bid (which is also labelled as visited) is removed, and so on. If W becomes infeasible by removing b, then b is reinserted into W. RIF terminates when all winning bids are labelled as visited. Via combination of the core heuristics, a set of different algorithms A is obtained (see Fig. 1). Each algorithm A_i ∈ A, i = 1, ..., 8, is denoted as a triple, e.g. A_2 is represented by (SI/BF/GRC), which reads as follows: A_2 uses SI to construct solutions, bitflip mutation (BF) as mutation operator and GRC as repair operator. Since uniform crossover is the only crossover operator, it is not considered as a distinctive feature in the taxonomy of Fig. 1. In order to refer to a set of algorithms, the wildcard * is used at one or more positions, e.g. (*/BF/GRC) identifies A_2 and A_6.
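The two rating functions can be sketched as follows. The bid representation and helper names are illustrative, f2 is the minimisation variant, and the formulas follow my reconstruction of the garbled originals above:

```python
# Hedged sketch of the GRC rating functions g_p and g_q; smaller is better.
import math

def covered(W):
    """Union of the contracts of all bids in W."""
    return set().union(*(tau for _, tau, _ in W)) if W else set()

def g_p(b, W):
    """Price per newly covered contract; infinite if b adds nothing new."""
    new = b[1] - covered(W)
    return b[2] / len(new) if new else math.inf

def g_q(b, W, f2):
    """Quality improvement of adding b (negative, since f2 is minimised),
    weighted by the total number of procured contracts."""
    n_procured = sum(len(tau) for _, tau, _ in W | {b})
    return (f2(W | {b}) - f2(W)) / n_procured

# a bid covering two new contracts at price 10 rates better (lower) than
# one covering a single new contract at price 8
b1 = ("c1", frozenset({1, 2}), 10.0)
b2 = ("c2", frozenset({2}), 8.0)
print(g_p(b1, set()), g_p(b2, set()))  # → 5.0 8.0
```

In GRC, candidate bids would then be compared by dominance on the vector (g_p, g_q) to build the restricted candidate list.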

Figure 1: Eight possible combinations of core heuristics to form an algorithm A_i. Each variant combines an initialisation heuristic (SI or GRC), a mutation operator (BF or RIF) and a repair operator (SI or GRC): A_1 = (SI/BF/SI), A_2 = (SI/BF/GRC), A_3 = (SI/RIF/SI), A_4 = (SI/RIF/GRC), A_5 = (GRC/BF/SI), A_6 = (GRC/BF/GRC), A_7 = (GRC/RIF/SI), A_8 = (GRC/RIF/GRC).

4 Evaluation

The εLBB and the eight MOGA variants are tested on a set of newly generated benchmark instances which reflect some important economic features of the transportation domain. First, the generation of these instances is described. After that, the results of the εLBB and the eight MOGA variants are presented.

4.1 Generating Test Instances

To the best of our knowledge, no benchmark instances exist for a multiobjective WDP like the proposed 2WDP-SC. However, there are several approaches for generating problem instances for single-objective winner determination problems with various economic backgrounds, e.g. the combinatorial auction test suite CATS of Leyton-Brown and Shoham [15] or the bidgraph algorithm introduced by Hudson and Sandholm [13]. To generate test instances for the 2WDP-SC, some ideas from the literature are extended to incorporate features specific to the procurement of transportation contracts. As this investigation does not address any game-theoretical issues like strategic bidding and incentive compatibility, it is assumed that carriers reveal their true preferences. Thus, the terms price and cost valuation of a contract combination can be used synonymously. General requirements for artificial instances for combinatorial auctions are stated by Leyton-Brown and Shoham. Both postulations seem self-evident, but have not always been accounted for in the past [15]: Some combinations of contracts are more frequently bid on than other combinations, owing to the usually different synergies between contracts. The charged price of a bundle bid depends on the contracts in this bundle bid. Simple random prices,

e.g. drawn from [0, 1], are unrealistic and can lead to computationally easy instances. Furthermore, it seems reasonable to demand that the following additional requirements specific to transportation procurement auctions are met: All submitted bids are binding and exhibit additive valuations (OR-bids, cf. [17]). Hence, a carrier is supposed to be able to execute any combination of his submitted bids at expenses which do not exceed the sum of the corresponding bid prices. Extra costs do not arise. Due to the medium-term contract period of one to three years in the scenario at hand, capacity adjustments are possible in order to avoid capacity bottlenecks. Furthermore, the carrier has the opportunity to resell some contracts to other carriers who guarantee the same quality of service. From the previous assumption it follows that a rational carrier c only bids on combinations of contracts which exhibit strictly subadditive cost valuations. The cost valuation of a set of contracts τ is called strictly subadditive if, for each non-trivial partition of the set τ, the cost valuation of τ is strictly lower than the sum of the cost valuations of all parts of the respective set partition. Formally, the carrier-specific set Π_c of all strictly subadditive bids can be defined as expressed in the following formula, in which Part(τ) denotes all set partitions of τ and P(τ) denotes the powerset of τ:

Π_c = { τ ⊆ T_c | ∀ T' ∈ Part(τ), T' ≠ {τ} : p_c(τ) < Σ_{τ'∈T'} p_c(τ') },

with Part(τ) = { T' ⊆ P(τ) | ∪_{τ'∈T'} τ' = τ, the elements of T' being pairwise disjoint and non-empty }.

Strict subadditivity in terms of cost is due to synergies between contracts. Bids composed of contracts which exhibit strictly subadditive cost valuations are referred to as essential bids. Since all submitted bids are supposed to be OR-bids, any non-essential bid could always be replaced by an equivalent combination of two or more essential bids. Therefore, bidding on non-essential bids is redundant. The 2WDP-SC was modelled as a set covering problem, as it appeared reasonable to assume free disposal.
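For small combinations, the strict-subadditivity condition can be checked directly by enumerating set partitions, as in the following brute-force sketch with hypothetical prices (exponential in the combination size, so illustration only):

```python
# Hedged sketch: verify strict subadditivity of a price function by
# enumerating all non-trivial set partitions of a contract combination.

def partitions(items):
    """Yield all set partitions of a list of distinct items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into each existing block, or into a block of its own
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield [{first}] + part

def strictly_subadditive(tau, price):
    """price(frozenset) must lie below the cost of every proper split of tau."""
    return all(price(frozenset(tau)) < sum(price(frozenset(p)) for p in part)
               for part in partitions(sorted(tau)) if len(part) > 1)

p = {frozenset({1}): 6.0, frozenset({2}): 5.0, frozenset({1, 2}): 9.0}
print(strictly_subadditive({1, 2}, p.__getitem__))  # → True: 9 < 6 + 5
```

A combination priced at 12.0 instead of 9.0 would fail the check and thus be a redundant, non-essential bid.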
Free disposal means that the price charged by carrier c for any subset of a set of contracts τ is not greater than the price carrier c would charge for τ. Formally this is expressed in the following

formula, in which B_c denotes the set of bundle bids submitted by carrier c:

p(b') ≤ p(b)   for all b, b' ∈ B_c with τ(b') ⊆ τ(b).

To be an instance suited to the 2WDP-SC, the bundle bids of each carrier should also feature the free disposal property. Finally, it is assumed that the carrier-specific costs of a transport contract depend on both the contract's resource requirements and the service-quality level at which the carrier is able to perform the contract. The bids are generated using Algorithm 5, which takes four values as input: the number nbids of bids to be generated, the sets C and T, which represent carriers and transport contracts, respectively, and the density ρ of the synergy matrix. The synergy matrix consists of binary values which indicate the pairwise synergies between contracts. Synergies between contracts imply that the respective contract combination is cost-subadditive. A higher density tends to result in more and larger contract combinations a carrier has to consider.

Algorithm 5 BidGeneration
1: input: nbids, density of synergy matrix ρ, T, C
2: ∀c ∈ C: randomly select relevant contracts T_c ⊆ T, such that ∪_{c∈C} T_c = T
3: for all carriers c ∈ C do
4: ∀i, j ∈ T_c: set s^c_ij ← 1 with probability ρ, indicating that synergies exist between contracts i and j
5: ∀t ∈ T_c: randomly set contract quality q_ct ∈ {1, 2, 3, 4, 5}
6: ∀t ∈ T_c: randomly set resource demand r_ct ∈ [0.1, 0.5]
7: determine essential contract combinations Π_c
8: SubadditiveBidGraph(Π_c) to calculate prices p(τ) for each τ ∈ Π_c
9: end for
10: ∀c ∈ C: B_c ← select(Π_c)
11: output: all carrier bids B = ∪_{c∈C} B_c

First of all, BidGeneration (Alg. 5) initialises some variables. For each carrier, a subset of contracts T_c is determined as the set of contracts which the carrier is supposed to be willing to bid for. While it is not necessary that all T_c are disjoint, they must jointly cover all contracts in T. After that, the following steps are performed for each carrier.
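The free disposal property of a single carrier's bid set can be verified with a short sketch (illustrative bid representation):

```python
# Hedged sketch: check that no subset of contracts is priced higher than
# any of its supersets within one carrier's bids (free disposal).
from itertools import permutations

def has_free_disposal(bids):
    """bids: list of (contracts, price) pairs of a single carrier."""
    return all(p_sub <= p_sup
               for (tau_sub, p_sub), (tau_sup, p_sup) in permutations(bids, 2)
               if tau_sub <= tau_sup)   # subset relation on frozensets

bids_ok = [(frozenset({1}), 6.0), (frozenset({1, 2}), 9.0)]
bids_bad = [(frozenset({1}), 10.0), (frozenset({1, 2}), 9.0)]
print(has_free_disposal(bids_ok), has_free_disposal(bids_bad))  # → True False
```

In the second bid set the singleton {1} is dearer than its superset {1, 2}, so a shipper asking the carrier to drop contract 2 would put him at a disadvantage.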
First, the carrier-specific synergy matrix is randomly filled according to density ρ. The service quality q_ct at which carrier c is able to execute contract t is chosen randomly

from the integer values one to five, with higher values indicating a higher service level. Furthermore, a resource demand r_ct is assigned to each contract. This is an abstract indicator for the resources required by carrier c to carry out contract t. The resource demand of a given contract may vary from carrier to carrier, as carriers might have, e.g., different locations of their depots, different types of vehicles, or existing transportation commitments which influence the required resources. The values r_ct are chosen randomly between 0.1 and 0.5. To obtain the set of essential contract combinations in line 7, assume for each carrier c a synergy graph SG_c = (T_c, E_c). Let the vertices be the contracts T_c carrier c is interested in. If two contracts i, j ∈ T_c feature synergies, that is s^c_ij = 1, then both contracts are connected via an edge, that is E_c = {(i, j) | s^c_ij = 1, i, j ∈ T_c}. It is assumed that any number of contracts can be combined in a single bid, as long as the sum of the corresponding resource demands does not exceed a maximum total resource demand of 1. This capacity limit is motivated by the fact that the unfolding of complementarities is generally subject to resource limitations. For example, contracts often feature synergies if they are carried out conjointly in the same tour, which, however, is subject to vehicle capacity restrictions. The resource demand of each contract t ∈ T_c is given by r_ct. Then the set of feasible essential combinations of contracts equals the set of all possible induced subgraphs of SG_c with Σ_t r_ct ≤ 1. In the next step, a price for each combination of contracts is determined using the SubadditiveBidGraph algorithm, which is explained below. After that, the select operator chooses, from among all feasible contract combinations, those combinations on which each carrier is supposed to place his bids.
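Under the additional assumption that an essential combination induces a connected subgraph of SG_c (the synergy-graph construction suggests this, though the text speaks only of induced subgraphs), the feasible essential combinations of a small carrier can be enumerated as follows:

```python
# Hedged sketch: brute-force enumeration of feasible essential contract
# combinations as connected induced subgraphs of the synergy graph whose
# total resource demand stays within the capacity limit of 1.
from itertools import combinations

def connected(nodes, edges):
    """BFS/DFS connectivity check on the induced subgraph."""
    nodes = set(nodes)
    seen, frontier = {min(nodes)}, [min(nodes)]
    while frontier:
        u = frontier.pop()
        for v in nodes - seen:
            if (u, v) in edges or (v, u) in edges:
                seen.add(v)
                frontier.append(v)
    return seen == nodes

def essential_combinations(T_c, edges, r):
    combos = []
    for k in range(1, len(T_c) + 1):
        for subset in combinations(sorted(T_c), k):
            if sum(r[t] for t in subset) <= 1 and connected(subset, edges):
                combos.append(set(subset))
    return combos

T_c = {1, 2, 3}
edges = {(1, 2), (2, 3)}            # synergies: 1-2 and 2-3, but not 1-3
r = {1: 0.4, 2: 0.4, 3: 0.4}        # resource demands
combos = essential_combinations(T_c, edges, r)
print(combos)  # → [{1}, {2}, {3}, {1, 2}, {2, 3}]
```

The pair {1, 3} is excluded for lack of a synergy edge, and the full set {1, 2, 3} is excluded because its total resource demand of 1.2 exceeds the capacity limit.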
Therefore, all contract combinations in Π_c are rated according to two criteria: average cost per contract, p(b)/|τ(b)|, and average quality per contract, Σ_{t∈τ(b)} q_{c(b)t} / |τ(b)|. Then the best contract combinations with respect to these criteria are selected according to the dominance concept. In doing so, select makes sure that, on the one hand, the total number of bids submitted by all bidders is nbids and, on the other hand, each t ∈ T is covered by at least one bundle bid, in order to obtain a solvable instance. The SubadditiveBidGraph algorithm (cf. Alg. 6) is applied to determine prices for the essential contract combinations which comply with the assumptions of free disposal and strict subadditivity. The algorithm is based on the approach of Hudson and Sandholm [13], which generates bids with free disposal. This approach is extended such that all generated bids also show strictly subadditive cost valuations. The idea of the original bidgraph algorithm as proposed by Hudson and Sandholm is to define lower bounds LB(τ) and upper bounds UB(τ) for each considered contract combination τ such that free disposal

Algorithm 6 SubadditiveBidGraph
1: input: set of essential contract combinations Π_c, carrier c
2: A_sup ← {(i, j) | i, j ∈ Π_c and i ⊂ j}
3: A_sub ← {(i, j) | i, j ∈ Π_c and i ⊃ j}
4: initialise bidgraph BG ← (Π_c, A_sup, A_sub)
5: ∀τ ∈ Π_c with |τ| = 1: UB(τ) ← LB(τ) ← p(τ) ← RandomBasePrice(τ, c)
6: initialise lower bounds ∀τ ∈ Π_c with |τ| = 1: UpdateLowerBounds(BG, τ)
7: initialise upper bounds ∀τ ∈ Π_c with |τ| > 1: UB(τ) ← Σ_{t∈τ} p({t})
8: k ← 2
9: while k ≤ |Π_c| do
10:   for all τ ∈ {τ ∈ Π_c | |τ| = k ∧ LB(τ) ≤ UB(τ)} do
11:     set price randomly: p(τ) ← random value from ]LB(τ), UB(τ)[
12:     UpdateLowerBounds(BG, τ)
13:     UpdateUpperBounds(BG, τ)
14:   end for
15:   k ← k + 1
16: end while
17: output: prices p(τ) for each τ ∈ Π_c, consistent with the free disposal and the subadditivity assumption

Then the procedure successively draws a price for each contract combination between its lower and upper bounds; this price is propagated through the bidgraph to sharpen the lower and upper bounds of the remaining contract combinations. In order to extend this approach to support contract combinations which exhibit both free disposal and strictly subadditive cost valuations, the bidgraph is initialised as follows: the vertices of the bidgraph BG represent all essential contract combinations Π_c. There are two sets of arcs, A_sup and A_sub. The arcs in A_sup indicate a superset relation, i.e., an arc (i, j) from vertex i to vertex j means that the contracts in j are a superset of the contracts in i. Similarly, the arcs in A_sub represent all subset relationships. In lines 5 through 8 of Alg. 6, the lower and upper bounds of all k-combinations of contracts are initialised. For a given k ∈ N, let the set of all k-combinations of contracts be defined as {τ ∈ Π_c : |τ| = k}. The lower bounds LB for all single contracts (k = 1) are initialised by Algorithm 7. The price p({t}) of a single contract t is a random variable which is normally distributed with mean µ and variance σ².
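The initialisation of the two arc sets in lines 2 and 3 of Alg. 6 can be sketched as follows. This is a minimal Python sketch; the dict-of-sets representation of the bidgraph is an assumption, not taken from the paper:

```python
def build_bidgraph(combos):
    """Build the superset/subset arc sets over essential combinations.

    combos: list of frozensets of contract ids (the vertices Pi_c).
    Returns (a_sup, a_sub): a_sup[i] holds all strict supersets of i,
    a_sub[j] all strict subsets of j, mirroring A_sup and A_sub of Alg. 6.
    """
    a_sup = {c: set() for c in combos}
    a_sub = {c: set() for c in combos}
    for i in combos:
        for j in combos:
            if i < j:            # i is a proper subset of j
                a_sup[i].add(j)  # arc (i, j): j is a superset of i
                a_sub[j].add(i)  # arc (j, i): i is a subset of j
    return a_sup, a_sub
```

With these two adjacency maps, price propagation only ever follows a_sup (to raise lower bounds of supersets) or a_sub (to cap upper bounds via partitions of subsets).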
The values of p({t}) are forced into the interval [minprice, maxprice] with minprice = 0.5 and maxprice = 1.5. As stated above, higher resource requirements and a higher service level should tend to result in a higher price. Thus, µ depends on the resource demand r_ct and the service quality q_ct of contract t. The variance σ² is set to 1.0.

Algorithm 7 RandomBasePrice
1: input: single-contract set {t}, carrier c
2: minprice ← 0.5
3: maxprice ← 1.5
4: resources_multiplier ← r_ct / 0.3 // expected mean of r_ct (Alg. 5)
5: qualification_multiplier ← q_ct / 3 // expected mean of q_ct (Alg. 5)
6: µ ← 1.0 + resources_multiplier · qualification_multiplier
7: σ² ← 1.0
8: p({t}) ← normally distributed random variable with mean µ and variance σ²
9: if p({t}) > maxprice OR p({t}) < minprice then
10:   RandomBasePrice({t}, c)
11: end if
12: output: p({t})

After RandomBasePrice (Alg. 7) has initialised the LB of all 1-combinations, Alg. 8 recursively propagates these prices through the bidgraph and updates the lower bounds of all superset contract combinations if necessary. Now the upper bounds for the k-combinations, k > 1, can be calculated as the sum of the prices of all respective 1-combination contracts. To ensure strictly subadditive valuations, the while-loop of Alg. 6 sets the bid prices for all k-combinations in the order of non-decreasing k, starting with k = 2. For all k-combinations with LB(τ) ≤ UB(τ), a price is drawn randomly between LB(τ) and UB(τ) and propagated through the bidgraph to adjust the lower and upper bounds of the other contract combinations. In doing so, it must be ensured that the upper bound never exceeds the costs of any partition of τ, since this may lead to inconsistencies with respect to the subadditivity requirement. Therefore, Alg. 9 solves a set partitioning problem to optimality. The instance of the set partitioning problem is given by the sets {j | (τ, j) ∈ A_sub} and the associated costs UB(j).

Algorithm 8 UpdateLowerBounds
1: input: BG, τ
2: for all τ' ∈ BG.Π with (τ, τ') ∈ BG.A_sup do
3:   if LB(τ') < p(τ) then
4:     LB(τ') ← p(τ)
5:     UpdateLowerBounds(BG, τ')
6:   end if
7: end for
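A possible rendering of the price draw of Alg. 7 is sketched below in Python. The rejection loop replaces the recursive redraw of lines 9 through 11, and the exact combination of the two multipliers in µ follows our reading of line 6, which is partly an assumption:

```python
import random

def random_base_price(r_ct, q_ct, minprice=0.5, maxprice=1.5):
    """Draw a base price for a single contract (sketch of Alg. 7).

    Draws from a normal distribution whose mean grows with the resource
    demand r_ct and the service quality q_ct; values outside
    [minprice, maxprice] are rejected and redrawn.
    """
    resources_multiplier = r_ct / 0.3      # expected mean of r_ct (Alg. 5)
    qualification_multiplier = q_ct / 3.0  # expected mean of q_ct (Alg. 5)
    mu = 1.0 + resources_multiplier * qualification_multiplier  # reading of line 6
    sigma = 1.0                            # variance sigma^2 = 1.0
    while True:                            # rejection sampling, not recursion
        p = random.gauss(mu, sigma)
        if minprice <= p <= maxprice:
            return p
```

Using a loop instead of recursion avoids unbounded call depth when many draws fall outside the admissible price interval.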

Algorithm 9 UpdateUpperBounds
1: input: BG, τ
2: for all τ' ∈ BG.Π with (τ, τ') ∈ BG.A_sup do
3:   p ← price of an optimal set partitioning solution over {τ'' | (τ', τ'') ∈ BG.A_sub} with the associated costs UB(τ'')
4:   if p < UB(τ') then
5:     UB(τ') ← p
6:     UpdateUpperBounds(BG, τ')
7:   end if
8: end for

The SubadditiveBidGraph algorithm continues until the prices of all essential bids are set. After that, the select operator of Alg. 5 is applied as described above. The procedure keeps generating bids for all carriers until the test instance is complete.

4.2 Measuring the Quality of an Approximation Set

To compare the performance of single-objective heuristics in terms of achieved solution quality, a major step is to compare the objective function values of the respective best found solutions. The matter is more complicated in the bi-objective case, as approximation sets have to be compared. Often there are no clear dominance relations between the solutions of different approximation sets, see e.g. Fig. 2. Therefore, various indicators to measure the quality of approximation sets are discussed in the literature, cf. [24] for a detailed discussion of the state of the art. To evaluate the solution quality of an approximation set, the popular hypervolume indicator I_HV is used [22]. I_HV measures the subspace dominated by an approximation set, bounded by a reference point RP. RP must be chosen such that it is dominated by all solutions of the approximation set. Furthermore, the reference point has to be identical for all compared heuristics on the same problem instance. Here, for each instance, RP is defined as (f_1^max; f_2^max) = (f_1(B); 0). Furthermore, the objective values of all solutions are normalised according to f_i' = (f_i − f_i^min)/(f_i^max − f_i^min) with i = 1, 2, f_1^max = f_1(B), f_2^min = f_2(B) · (−1), f_2^max = 0. Thus values of I_HV range from zero to one, and larger values indicate better approximation sets.
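For the bi-objective case, the dominated area can be computed with a simple sweep. The following Python sketch assumes both objectives have been converted to minimisation and normalised to [0, 1], so that RP = (1, 1) is dominated by every solution:

```python
def hypervolume_2d(points, rp=(1.0, 1.0)):
    """Area dominated by a set of bi-objective points (both minimised),
    bounded by the reference point rp."""
    # keep only the non-dominated points, sorted by the first objective
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(points)):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # sweep left to right, adding one rectangle per front point
    area, prev_f2 = 0.0, rp[1]
    for f1, f2 in front:
        area += (rp[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area
```

Because dominated points are filtered out first, adding a dominated solution to an approximation set never changes its hypervolume, which is exactly the behaviour required of the indicator.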
However, as RP can be chosen freely to a large degree, I_HV is an interval-scaled measure. Therefore, the quality gap between algorithms can only be expressed via absolute differences of I_HV, but not via percentage ratios of I_HV.

Figure 2: Illustration of the hypervolume indicator I_HV. (a) Solutions of two approximation sets found by two algorithms. (b) The shaded areas depict the subspace dominated by each algorithm, bounded by the reference point RP; the light-shaded area partly overlaps the dark-shaded area.

4.3 Evaluation of the εLookahead Branch and Bound

The εLBB was implemented in Java 6. A floating-point precision of ten digits is used. The lower bounds are calculated by Dantzig's simplex algorithm in the implementation of the Apache Commons Math library (version 2.1). The algorithm was tested on an Intel Pentium 4 (2.0 GHz) with 500 MB RAM available to the Java Virtual Machine. Preliminary testing gave evidence that computation times of εLBB increase rapidly with the number of bundle bids. Even moderate problem sizes caused the εLBB to run several hours before terminating. Therefore, a set of eight rather small test instances was generated according to Section 4.1 in order to evaluate εLBB. The instances vary only in the number of bundle bids (up to 80) and in the number of transport contracts (up to 40). The number of participating carriers and the density of the synergy matrix are held constant at values of 10 and 50%, respectively. The results for these instances are reported in Tab. 4. The table shows the number of solutions in the Pareto set and the required runtime in seconds. In addition, the table contains results from the MOGA, which will be discussed in more detail later. The findings demonstrate that εLBB is suited to solve small instances with up to 60 bundle bids in less than an hour. For problem instances with 80 bundle bids, εLBB consumes several hours of runtime. The test of the instance with 80 bundle bids and 40 contracts was aborted after a runtime of 24 hours.
These results strongly suggest that exact approaches like the εLBB are inappropriate as a solution approach for practical procurement scenarios, which easily reach problem sizes of several hundred bundle bids.
