2018-01-16T15:01:34Z
http://oai.repec.org/oai.php
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198
article
Exact and heuristic algorithms for the design of hub networks with multiple lines
In this paper we study a hub location problem in which the hubs to be located must form a set of interconnecting lines. The objective is to minimize the total weighted travel time between all pairs of nodes while taking into account a budget constraint on the total set-up cost of the hub network. A mathematical programming formulation, a Benders-branch-and-cut algorithm and several heuristic algorithms, based on variable neighborhood descent, greedy randomized adaptive search, and adaptive large neighborhood search, are presented and compared to solve the problem. Numerical results on two sets of benchmark instances with up to 70 nodes and three lines confirm the efficiency of the proposed solution algorithms.
Hub location; Hub-and-spoke networks; Lines; Network design;
http://www.sciencedirect.com/science/article/pii/S0377221715003100
Martins de Sá, Elisangela
Contreras, Ivan
Cordeau, Jean-François
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673
article
A model enhancement heuristic for building robust aircraft maintenance personnel rosters with stochastic constraints
This paper presents a heuristic approach to optimize staffing and scheduling at an aircraft maintenance company. The goal is to build robust aircraft maintenance personnel rosters that can achieve a certain service level while minimizing the total labor costs. Robust personnel rosters are rosters that can handle delays associated with stochastic flight arrival times. To deal with this stochasticity, a model enhancement algorithm is proposed that iteratively adjusts a mixed integer linear programming (MILP) model to a stochastic environment based on simulation results. We illustrate the performance of the algorithm with a computational experiment based on real-life data of a large aircraft maintenance company located at Brussels Airport in Belgium. The obtained results are compared to deterministic optimization and straightforward optimization. Experiments demonstrate that our model can ensure a certain desired service level with an acceptable increase in labor costs when stochasticity is introduced in the aircraft arrival times.
Model enhancement; Aircraft maintenance; Stochastic optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500380X
De Bruecker, Philippe
Van den Bergh, Jorne
Beliën, Jeroen
Demeulemeester, Erik
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169
article
Ant colony optimization based binary search for efficient point pattern matching in images
Point Pattern Matching (PPM) is the task of pairing up the points in two images of the same scene. Many approaches to point pattern matching exist in the literature; their drawback, however, lies in the high complexity of the algorithms. To overcome this drawback, an Ant Colony Optimization based Binary Search Point Pattern Matching (ACOBSPPM) algorithm is proposed. In this approach, the edges of the image are stored in the form of point patterns. To match an incoming image against the stored images, the ant agent chooses a point value in the incoming image point pattern and employs binary search to find a match among the point values of the stored image point pattern chosen for comparison. Once a match occurs, the ant agent searches for the next point value of the incoming image point pattern between the matching position and the end of the stored image point pattern. The stored image point pattern with the maximum number of matches is declared the match for the incoming image. Experimental results show that the ACOBSPPM algorithm is efficient compared to existing point pattern matching approaches in terms of both time complexity and matching precision.
Decision support systems; Image recognition; Point pattern matching; Ant Colony Optimization; Binary search;
http://www.sciencedirect.com/science/article/pii/S0377221715002842
Sreeja, N.K.
Sankar, A.
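The windowed binary-search matching step described in the abstract above can be sketched in a few lines. This is a minimal, hypothetical reading that encodes each point as a scalar value and omits the ant-colony guidance layer entirely; the function names are invented for illustration.

```python
import bisect

def match_point_patterns(incoming, stored):
    """Count matches between an incoming point pattern and one stored
    pattern via binary search, restarting each search just after the
    previous matching position (the windowing idea from the abstract)."""
    stored = sorted(stored)
    lo, matches = 0, 0
    for value in sorted(incoming):
        # binary search only between the last match and the end
        pos = bisect.bisect_left(stored, value, lo)
        if pos < len(stored) and stored[pos] == value:
            matches += 1
            lo = pos + 1  # next search starts after this match
    return matches

def best_match(incoming, database):
    """Return the stored pattern with the maximum number of matches."""
    return max(database, key=lambda p: match_point_patterns(incoming, p))
```

Sorting both patterns up front is what makes the per-point binary search valid; the narrowing window keeps the total matching cost near O(n log n) per stored pattern.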
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516
article
Control of Condorcet voting: Complexity and a Relation-Algebraic approach
We study the constructive variant of the control problem for Condorcet voting, where control is exercised by deleting voters. We prove that this problem remains NP-hard if, instead of Condorcet winners, the alternatives in the uncovered set win. Furthermore, we present a relation-algebraic model of Condorcet voting and relation-algebraic specifications of the dominance relation and the solutions of the control problem. All our relation-algebraic specifications can immediately be translated into the programming language of the OBDD-based computer system RelView. Our approach is very flexible, especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other voting rules and control problems.
Artificial intelligence; Condorcet voting; Control problem; Uncovered set; Relation algebra;
http://www.sciencedirect.com/science/article/pii/S0377221715003185
Berghammer, Rudolf
Schnoor, Henning
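The pairwise-majority notion underlying Condorcet voting can be sketched generically. This checker assumes complete strict rankings and is only the basic winner test; the paper's relation-algebraic machinery, control by voter deletion, and uncovered-set computation are not reproduced.

```python
def condorcet_winner(profile, alternatives):
    """Return the Condorcet winner (the alternative beating every other
    alternative in a pairwise majority contest), or None if no such
    alternative exists. `profile` is a list of rankings, each ordered
    from most- to least-preferred."""
    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        wins = sum(r.index(a) < r.index(b) for r in profile)
        return wins > len(profile) / 2

    for a in alternatives:
        if all(beats(a, b) for b in alternatives if b != a):
            return a
    return None
```

The `None` case (a majority cycle) is exactly the situation where weaker solution concepts such as the uncovered set take over.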
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43
article
An accelerated branch-and-price algorithm for multiple-runway aircraft sequencing problems
This paper presents an effective branch-and-price (B&P) algorithm for multiple-runway aircraft sequencing problems. This approach improves the tractability of the problem by several orders of magnitude when compared with solving a classical 0–1 mixed-integer formulation over a set of computationally challenging instances. Central to the computational efficacy of the B&P algorithm is solving the column generation subproblem as an elementary shortest path problem with aircraft time-windows and non-triangular separation times using an enhanced dynamic programming procedure. We underscore in our computational study the algorithmic features that contribute, in our experience, to accelerating the proposed dynamic programming procedure and, hence, the overall B&P algorithm.
Aircraft sequencing; Branch-and-price; Column generation; Dynamic programming; Elementary shortest path problems;
http://www.sciencedirect.com/science/article/pii/S0377221715003124
Ghoniem, Ahmed
Farhadi, Farbod
Reihaneh, Mohammad
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139
article
Efficient analysis of the MMAP[K]/PH[K]/1 priority queue
In this paper we consider the MMAP[K]/PH[K]/1 priority queue, covering both the preemptive resume and the non-preemptive service cases. The main idea of the presented analysis procedure is that the sojourn time of the low-priority jobs in the preemptive case (and the waiting time distribution in the non-preemptive case) can be represented by the duration of the busy period of a special Markovian fluid model. By making use of recent results on the busy period analysis of Markovian fluid models, it is possible to calculate several queueing performance measures in an efficient way, including the sojourn time distribution (both in the time domain and in the Laplace transform domain), the moments of the sojourn time, the generating function of the queue length, the queue length moments, and the queue length probabilities.
Queueing; Preemptive resume priority queue; Non-preemptive priority queue; Matrix-analytic methods;
http://www.sciencedirect.com/science/article/pii/S0377221715001976
Horváth, Gábor
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153
article
A noisy principal component analysis for forward rate curves
Principal Component Analysis (PCA) is the most common nonparametric method for estimating the volatility structure of Gaussian interest rate models. One major difficulty in the estimation of these models is that forward rate curves are not directly observable in the market, so non-trivial observational errors arise in any statistical analysis. In this work, we point out that classical PCA is not suitable for estimating factors of forward rate curves due to the presence of measurement errors induced by market microstructure effects and numerical interpolation. Our analysis indicates that PCA based on the long-run covariance matrix is capable of extracting the true covariance structure of the forward rate curves in the presence of observational errors. Moreover, it provides a significant reduction in the pricing errors due to the noisy data typically found in forward rate curves.
Finance; Pricing; Principal component analysis; Term-structure of interest rates; HJM models;
http://www.sciencedirect.com/science/article/pii/S0377221715003318
Laurini, Márcio Poletti
Ohashi, Alberto
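The intuition for why a long-run (autocovariance-based) estimator filters observational noise can be demonstrated on a toy scalar example: if the observed rate is a latent AR(1) factor plus i.i.d. measurement error, the error inflates the contemporaneous variance but drops out of lagged autocovariances. This is only an illustrative sketch of the statistical mechanism, not the paper's multivariate estimator.

```python
import random
import statistics

random.seed(0)
# latent AR(1) "forward-rate factor", observed with i.i.d. noise (sd = 2)
phi, n = 0.9, 200_000
x, xs = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0, 1)
    xs.append(x)
obs = [v + random.gauss(0, 2) for v in xs]

def lag_cov(series, lag):
    """Sample autocovariance of `series` at the given lag."""
    m = statistics.fmean(series)
    return statistics.fmean(
        (series[t] - m) * (series[t + lag] - m)
        for t in range(len(series) - lag)
    )

# contemporaneous variance is inflated by the noise variance (2**2 = 4) ...
noise_inflation = lag_cov(obs, 0) - lag_cov(xs, 0)   # ≈ 4
# ... but the lag-1 autocovariance is asymptotically noise-free
lag1_gap = lag_cov(obs, 1) - lag_cov(xs, 1)          # ≈ 0
```

Summing autocovariances over several lags (a long-run covariance estimate) therefore recovers the latent factor structure that a naive covariance-based PCA contaminates with noise.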
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434
article
Bi-Objective Multi-Mode Project Scheduling Under Risk Aversion
The paper proposes a model for stochastic multi-mode resource-constrained project scheduling under risk aversion with the two objectives makespan and cost. Activity durations and costs are assumed as uncertain and modeled as random variables. For the scheduling part of the decision problem, the class of early-start policies is considered. In addition to the schedule, the assignment of execution modes to activities has to be selected. To take risk aversion into account, the approach of optimization under multivariate stochastic dominance constraints, recently developed in other fields, is adopted. For the resulting bi-objective stochastic integer programming problem, the Pareto frontier is determined by means of an exact solution method, incorporating a branch-and-bound technique based on the forbidden set branching scheme from stochastic project scheduling. Randomly generated test instances, partially derived from a test case from the PSPLIB, are used to show the computational feasibility of the approach.
Project scheduling; Multi-objective optimization; Stochastic optimization; Risk aversion; Stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S0377221715003768
Gutjahr, Walter J.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306
article
Simulation-optimization approaches for water pump scheduling and pipe replacement problems
Network operation and rehabilitation are major concerns for water utilities due to their impact on providing a reliable and efficient service. Solving the optimization problems that arise in water networks is challenging mainly due to the nonlinearities inherent in the physics and the often binary nature of decisions. In this paper, we consider the operational problem of pump scheduling and the design problem of leaky pipe replacement. New approaches for these problems based on simulation-optimization are proposed as solution methodologies. For the pump scheduling problem, a novel decomposition technique uses solutions from a simulation-based sub-problem to guide the search. For the leaky pipe replacement problem a knapsack-based heuristic is applied. The proposed solution algorithms are tested and detailed results for two networks from the literature are provided.
Pump scheduling; Pipe replacement; Water networks; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003215
Naoum-Sawaya, Joe
Ghaddar, Bissan
Arandia, Ernesto
Eck, Bradley
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420
article
On heuristic solutions for the stochastic flowshop scheduling problem
We address the problem of scheduling jobs in a permutation flowshop when their processing times follow a given distribution (the stochastic flowshop scheduling problem), with the objective of minimizing the expected makespan. For this problem, optimal solutions exist only for very specific cases. Consequently, some heuristics have been proposed in the literature, all of them with similar performance. In our paper, we first focus on the critical issue of estimating the expected makespan of a sequence and find that, for instances with medium/large variability (expressed as the coefficient of variation of the processing times of the jobs), the number of samples or simulation runs usually employed in the literature may not be sufficient to derive robust conclusions with respect to the performance of the different heuristics. We thus propose a procedure with a variable number of iterations that ensures that the percentage error in the estimation of the expected makespan is bounded with a very high probability. Using this procedure, we test the main heuristics proposed in the literature and find significant differences in their performance, in contrast with existing studies. We also find that the deterministic counterpart of the most efficient heuristic for the stochastic problem performs extremely well for most settings, which indicates that, in some cases, solving the deterministic version of the problem may produce competitive solutions for the stochastic counterpart.
Scheduling; Flowshop; Stochastic; Makespan objective; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715003781
Framinan, Jose M.
Perez-Gonzalez, Paz
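The variable-number-of-iterations idea — keep simulating a sequence until the expected-makespan estimate is tight enough — can be sketched with a standard confidence-interval stopping rule. The rule, function names, and parameters below are generic assumptions for illustration; the paper's exact procedure may differ.

```python
import math
import random
import statistics

def estimate_expected_makespan(sample_makespan, rel_err=0.01,
                               conf_z=2.576, batch=100,
                               max_runs=1_000_000):
    """Draw simulation runs in batches until the confidence-interval
    half-width for the mean falls below rel_err * mean, so the relative
    estimation error is bounded with high probability."""
    runs = [sample_makespan() for _ in range(batch)]
    while len(runs) < max_runs:
        mean = statistics.fmean(runs)
        half = conf_z * statistics.stdev(runs) / math.sqrt(len(runs))
        if half <= rel_err * mean:
            break
        runs.extend(sample_makespan() for _ in range(batch))
    return statistics.fmean(runs), len(runs)

random.seed(1)
# toy stochastic "makespan": sum of 20 exponential processing times
mean, n = estimate_expected_makespan(
    lambda: sum(random.expovariate(1.0) for _ in range(20)))
```

The higher the coefficient of variation of the processing times, the more runs the rule demands — which is precisely why a fixed sample size calibrated on low-variability instances can mislead heuristic comparisons.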
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292
article
Optimal firm growth under the threat of entry
The paper studies the incumbent-entrant problem in a fully dynamic setting. We find that under an open-loop information structure the incumbent anticipates entry by overinvesting, whereas in the Markov perfect equilibrium the incumbent slightly underinvests in the period before entry. The entry cost level at which entry accommodation passes into entry deterrence is lower in the Markov perfect equilibrium. Furthermore, we find that the incumbent’s capital stock level needed to deter entry is hump-shaped as a function of the entry time, whereas the corresponding entry cost, at which the entrant is indifferent between entry and non-entry, is U-shaped.
Economics; Game theory; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003239
Kort, Peter M.
Wrzaczek, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504
article
Stochastic inflow modeling for hydropower scheduling problems
We introduce a new stochastic model for inflow time series that is designed with the requirements of hydropower scheduling problems in mind. The model is an “iterated function system”: it models inflow as continuous, but the random innovation at each time step has a discrete distribution. With this inflow model, hydro-scheduling problems can be solved by the stochastic dual dynamic programming (SDDP) algorithm exactly as posed, without the additional sampling error introduced by sample average approximations. The model is fitted to univariate inflow time series by quantile regression. We consider various goodness-of-fit metrics for the new model and some alternatives to it, including performance in an actual hydro-scheduling problem. The numerical data used are for inflows to New Zealand hydropower reservoirs.
OR in energy; Hydro-thermal scheduling; Stochastic dual dynamic programming; Time series; Quantile regression;
http://www.sciencedirect.com/science/article/pii/S0377221715004129
Pritchard, Geoffrey
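An iterated function system of the kind described — continuous inflow, discrete innovation — can be simulated in a few lines. The branch coefficients and probabilities below are invented for illustration (a dry/normal/wet caricature), not fitted to any data.

```python
import random

# each stage, the next inflow is a linear function of the current one,
# with the function drawn from a small discrete set of branches
BRANCHES = [
    (0.30, lambda x: 0.8 * x + 10.0),   # "dry" innovation
    (0.50, lambda x: 0.7 * x + 40.0),   # "normal" innovation
    (0.20, lambda x: 0.6 * x + 90.0),   # "wet" innovation
]

def next_inflow(x, u):
    """Apply the branch selected by the uniform draw u to inflow x."""
    acc = 0.0
    for p, f in BRANCHES:
        acc += p
        if u <= acc:
            return f(x)
    return BRANCHES[-1][1](x)  # guard against float round-off

def simulate_inflow(x0, stages, rng):
    path = [x0]
    for _ in range(stages):
        path.append(next_inflow(path[-1], rng.random()))
    return path

path = simulate_inflow(100.0, 52, random.Random(42))
```

Because the innovation is discrete, each stage has finitely many possible outcomes given the current inflow — which is what lets SDDP work on the model exactly as posed, with no sample average approximation.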
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495
article
A direct search method for unconstrained quantile-based simulation optimization
Simulation optimization has gained popularity over the decades because of its ability to solve many practical problems that involve profound randomness. The methodology development of simulation optimization, however, is largely concerned with problems whose objective function is a mean-based performance metric. In this paper, we propose a direct search method to solve unconstrained simulation optimization problems with quantile-based objective functions. Because the proposed method does not require gradient estimation in the search process, it can be applied to many practical problems where the gradient of the objective function does not exist or is difficult to estimate. We prove that the proposed method possesses a desirable convergence guarantee, i.e., the algorithm converges to the true global optimum with probability one. An extensive numerical study shows that the performance of the proposed method is promising. Two illustrative examples are provided at the end to demonstrate the viability of the proposed method in real settings.
Simulation; Quantile; Direct search method; Nelder–Mead simplex method;
http://www.sciencedirect.com/science/article/pii/S0377221715003823
Chang, Kuo-Hao
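The core mechanic — direct search on a sampled quantile, with no gradient estimation — can be sketched generically. Note the deliberate substitution: the sketch uses a simple compass (coordinate) search rather than the paper's Nelder–Mead-based method, and all names and parameters are illustrative assumptions.

```python
import random

def sample_quantile(f, x, q=0.9, n=400, rng=random):
    """Estimate the q-quantile of the random response f(x, rng)."""
    ys = sorted(f(x, rng) for _ in range(n))
    return ys[int(q * n)]

def compass_search(f, x0, step=1.0, tol=0.05, rng=random):
    """Gradient-free minimization of the estimated quantile: try +/- step
    along each coordinate, keep improvements, halve the step otherwise."""
    x, best = list(x0), sample_quantile(f, x0, rng=rng)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                cand = list(x)
                cand[i] += d
                val = sample_quantile(f, cand, rng=rng)
                if val < best:
                    x, best, improved = cand, val, True
        if not improved:
            step /= 2
    return x, best

rng = random.Random(7)
# noisy quadratic: its 0.9-quantile is minimized near (2, -1)
noisy = lambda x, r: (x[0] - 2) ** 2 + (x[1] + 1) ** 2 + r.gauss(0, 0.1)
xopt, qval = compass_search(noisy, [0.0, 0.0], rng=rng)
```

Every comparison is between two noisy quantile estimates, so the sample size per evaluation trades off against how small a step the search can resolve — the kind of issue a convergence proof for such methods has to address.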
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684
article
A multi-criteria Police Districting Problem for the efficient and effective design of patrol sectors
The Police Districting Problem (PDP) concerns the efficient and effective design of patrol sectors in terms of performance attributes such as workload, response time, etc. A balanced design of the patrol sectors is desirable, as it results in crime reduction and better service. In this paper, a multi-criteria Police Districting Problem defined in collaboration with the Spanish National Police Corps is presented. This is the first model for the PDP that considers the attributes of area, risk, compactness, and mutual support. The decision-maker can specify his/her preferences on the attributes, on workload balance, and on efficiency. The model is solved by means of a heuristic algorithm that is empirically tested on a case study of the Central District of Madrid. The solutions identified by the model are compared to patrol sector configurations currently in use, and their quality is evaluated by public safety service coordinators. The model and the algorithm produce designs that significantly improve on the current ones.
Location; Police Districting Problem; Multi-criteria decision making;
http://www.sciencedirect.com/science/article/pii/S0377221715004130
Camacho-Collados, M.
Liberatore, F.
Angulo, J.M.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527
article
Elicitation of multiattribute value functions through high dimensional model representations: Monotonicity and interactions
This work addresses the early phases of the elicitation of multiattribute value functions, proposing a practical method for assessing interactions and monotonicity. We exploit the link between multiattribute value functions and the theory of high dimensional model representations. The resulting elicitation method does not impose any a priori assumptions on an individual’s preference structure. We test the approach via an experiment in a riskless context in which subjects are asked to evaluate mobile phone packages that differ on three attributes.
Multiattribute value theory; High dimensional model representations; Value function elicitation; Decision analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715003355
Beccacece, Francesca
Borgonovo, Emanuele
Buzzard, Greg
Cillo, Alessandra
Zionts, Stanley
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19
article
A review of theory and practice in scientometrics
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
Altmetrics; Citations; H-index; Impact factor; Normalisation;
http://www.sciencedirect.com/science/article/pii/S037722171500274X
Mingers, John
Leydesdorff, Loet
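Among the citation metrics this review surveys, the h-index is simple enough to state computationally. The sketch below uses the standard definition (the largest h such that an author has h papers with at least h citations each); it is a generic illustration, not code from the review.

```python
def h_index(citations):
    """h-index: the largest h such that h of the papers have
    at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank      # paper at this rank still has >= rank citations
        else:
            break
    return h
```

The metric's insensitivity to a few very highly cited papers (and to field-dependent citation norms) is one reason normalisation, also covered in the review, matters.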
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618
article
A multi-objective approach with soft constraints for water supply and wastewater coverage improvements
In Brazil, due to public health, social, and economic cohesion problems, access to water and wastewater services is certainly one of the main concerns of the different stakeholders in the Brazilian water sector. But as the focus is mainly on the expansion and construction of new infrastructure, other features such as the robustness and resiliency of the systems are being neglected. This, among other reasons, highlights the importance of sustainable development and financing for the Brazilian water sector. To pursue this goal, a multi-objective optimization model was built with the aim of formulating strategies to reach a predefined coverage level while minimizing the time and costs incurred, under specific hard and soft constraints assembled to capture key sustainability concepts (e.g., affordability and coverage targets), as these should not be set aside. For that purpose, an achievement scalarizing function was adopted with three distinct scaling coefficient vectors for a given reference point. To solve this combinatorial optimization problem, we used a mixed integer linear programming optimizer that resorts to branch-and-bound methods. The work developed paves the way toward a decision-aiding tool, without disregarding the number of steps that still need to be taken to achieve the proposed objectives.
Multiple criteria analysis; Branch and bound; Combinatorial optimization; Reference point approach; Coverage of water and wastewater services;
http://www.sciencedirect.com/science/article/pii/S037722171500329X
Pinto, F.S.
Figueira, J.R.
Marques, R.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208
article
Pricing of fluctuations in electricity markets
In an electric power system, demand fluctuations may result in significant ancillary cost to suppliers. Furthermore, in the near future, deep penetration of volatile renewable electricity generation is expected to exacerbate the variability of demand on conventional thermal generating units. We address this issue by explicitly modeling the ancillary cost associated with demand variability. We argue that a time-varying price equal to the suppliers’ instantaneous marginal cost may not achieve social optimality, and that consumer demand fluctuations should be properly priced. We propose a dynamic pricing mechanism that explicitly encourages consumers to adapt their consumption so as to offset the variability of demand on conventional units. Through a dynamic game-theoretic formulation, we show that (under suitable convexity assumptions) the proposed pricing mechanism achieves social optimality asymptotically, as the number of consumers increases to infinity. Numerical results demonstrate that compared with marginal cost pricing, the proposed mechanism creates a stronger incentive for consumers to shift their peak load, and therefore has the potential to reduce the need for long-term investment in peaking plants.
OR in energy; Electricity market; Game theory; Dynamic pricing; Social welfare;
http://www.sciencedirect.com/science/article/pii/S0377221715003136
Tsitsiklis, John N.
Xu, Yunjian
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412
article
An integrative cooperative search framework for multi-decision-attribute combinatorial optimization: Application to the MDPVRP
We introduce the integrative cooperative search method (ICS), a multi-thread cooperative search method for multi-attribute combinatorial optimization problems. ICS musters the combined capabilities of a number of independent exact or meta-heuristic solution methods. Some of these methods work on sub-problems defined by suitably selected subsets of decision-set attributes of the problem, while others combine the resulting partial solutions into complete ones and, eventually, improve them. All these methods cooperate through an adaptive search-guidance mechanism, using the central-memory cooperative search paradigm. Extensive numerical experiments explore the behavior and benefits of ICS through an application to the multi-depot, periodic vehicle routing problem, for which ICS improves on the results of the current state-of-the-art methods.
Multi-attribute combinatorial optimization; Integrative cooperative search; Meta-heuristics; Decision-set decomposition; Multi-depot periodic vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221715003793
Lahrichi, Nadia
Crainic, Teodor Gabriel
Gendreau, Michel
Rei, Walter
Crişan, Gloria Cerasela
Vidal, Thibaut
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553
article
Elicitation of criteria importance weights through the Simos method: A robustness concern
In the field of multicriteria decision aid, the Simos method is considered an effective tool for assessing criteria importance weights. Nevertheless, the method's input data do not lead to a single weighting vector but to infinitely many, which often exhibit great diversity and threaten the stability and acceptability of the results. This paper proves that the feasible weighting solutions, of both the original and the revised Simos procedures, are vectors of a non-empty convex polyhedral set, and therefore proposes a set of complementary robustness analysis rules and measures, integrated into a Robust Simos Method. This framework supports analysts and decision makers in gaining insight into the degree of variation among the multiple acceptable sets of weights and their impact on the stability of the final results. In addition, the proposed measures determine whether, and which, actions should be implemented before an acceptable set of criteria weights is reached and a final decision is formed. Two numerical examples illustrate the paper's findings and demonstrate the importance of consistently analyzing the robustness of the Simos method results, for both the original and the revised versions of the method.
Multiple criteria; Decision analysis; Criteria weights; Robustness analysis; Simos method;
http://www.sciencedirect.com/science/article/pii/S0377221715003306
Siskos, Eleftherios
Tsotsolas, Nikos
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75
article
Stochastic lot sizing manufacturing under the ETS system for maximisation of shareholder wealth
The issues of carbon emission and global warming have attracted increasing worldwide attention in recent years. Despite huge progress in carbon abatement, few research studies have examined the impact of carbon emission reduction mechanisms on manufacturing optimisation, which often leads to environmentally unsustainable operating decisions and misestimation of performance. This paper explores carbon management under the carbon emission trading mechanism for optimisation of lot sizing production planning in stochastic make-to-order manufacturing, with the objective of maximising shareholder wealth. We are concerned not only with the economic benefits to investors, but also with the environmental impacts associated with production planning. Numerical experiments illustrate the significant influence of carbon emission trading, pricing, and caps on the dynamic decisions of the lot sizing policy. The results highlight the critical role of carbon management in production planning for achieving both environmental and economic benefits. They also provide managerial insights into operations management to help mitigate environmental deterioration arising from carbon emission, as well as improve shareholder wealth.
Production planning; Lot sizing; Carbon emission; ETS; Shareholder wealth;
http://www.sciencedirect.com/science/article/pii/S0377221715003148
Wang, X.J.
Choi, S.H.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262
article
On the estimation of the true demand in call centers with redials and reconnects
In practice, customers in many call centers often perform redials (i.e., reattempts after an abandonment) and reconnects (i.e., reattempts after an answered call). Call center models in the literature usually do not cover these features, although real data analysis and simulation results show that ignoring them inevitably leads to inaccurate estimation of the total inbound volume. Therefore, in this paper we propose a performance model that includes both features. In our model, the total volume consists of three types of calls: (1) fresh calls (i.e., initial call attempts), (2) redials, and (3) reconnects. In practice, the total volume is used to make forecasts, while simulation results show that this can lead to large forecast errors and, subsequently, wrong staffing decisions. However, most call center data sets do not contain customer-identity information, which makes it difficult to identify how many calls are fresh and what fractions of the calls are redials and reconnects.
Queueing; Forecasting; Redials; Reconnects; Call centers;
http://www.sciencedirect.com/science/article/pii/S0377221715003112
Ding, S.
Koole, G.
van der Mei, R.D.
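The three-way decomposition of total volume can be illustrated with a simple stationary balance: each attempt is either abandoned (and possibly redialed) or answered (and possibly followed by a reconnect), so the total feeds back into itself. This is only a back-of-the-envelope sketch under steady-state assumptions, not the paper's performance model.

```python
def total_call_volume(fresh, p_abandon, p_redial, p_reconnect):
    """Expected total offered calls per period, solving the balance
        total = fresh + total * p_abandon * p_redial
                      + total * (1 - p_abandon) * p_reconnect
    where each attempt is abandoned w.p. p_abandon (then redialed w.p.
    p_redial) or answered (then followed by a reconnect w.p. p_reconnect).
    """
    feedback = p_abandon * p_redial + (1 - p_abandon) * p_reconnect
    assert feedback < 1, "feedback loop must be subcritical"
    return fresh / (1 - feedback)
```

The gap between `total_call_volume(...)` and `fresh` is exactly the volume a forecaster double-counts when forecasting on totals without separating redials and reconnects.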
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660
article
The impact of the internet on the pricing strategies of the European low cost airlines
This study seeks to analyse the price determination of low cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes obtained reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily track the behaviour of users to adapt their pricing strategies using internal information.
Low cost airlines; Airline pricing; ICT; Travel industry strategies; Air fares;
http://www.sciencedirect.com/science/article/pii/S0377221715003859
Moreno-Izquierdo, L.
Ramón-Rodríguez, A.
Perles Ribes, J.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391
article
Scheduling resource-constrained projects with a flexible project structure
In projects with a flexible project structure, the activities that must be scheduled are not completely known in advance. Scheduling such projects includes deciding whether to perform particular activities. This decision also affects precedence constraints among the implemented activities. However, established model formulations and solution approaches for the resource-constrained project scheduling problem (RCPSP) assume that the project structure is provided in advance. In this paper, the traditional RCPSP is extended using a highly general model-endogenous decision on this flexible project structure. This extension is illustrated using the example of the aircraft turnaround process at airports. We present a genetic algorithm to solve this type of scheduling problem and evaluate it in an extensive numerical study.
Project scheduling; Genetic algorithms; RCPSP; Flexible projects;
http://www.sciencedirect.com/science/article/pii/S0377221715003732
Kellenbrink, Carolin
Helber, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:232-2412015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:232-241
article
Accommodating heterogeneity and nonlinearity in price effects for predicting brand sales and profits
We propose a hierarchical Bayesian semiparametric approach to account simultaneously for heterogeneity and functional flexibility in store sales models. To estimate own- and cross-price response flexibly, a Bayesian version of P-splines is used. Heterogeneity across stores is accommodated by embedding the semiparametric model into a hierarchical Bayesian framework that yields store-specific own- and cross-price response curves. More specifically, we propose multiplicative store-specific random effects that scale the nonlinear price curves while their overall shape is preserved. Estimation is fully Bayesian and based on novel MCMC techniques. In an empirical study, we demonstrate a higher predictive performance of our new flexible heterogeneous model over competing models that capture heterogeneity or functional flexibility only (or neither of them) for nearly all brands analyzed. In particular, allowing for heterogeneity in addition to functional flexibility can improve the predictive performance of a store sales model considerably, while incorporating heterogeneity alone only moderately improved or even decreased predictive validity. Taking into account model uncertainty, we show that the proposed model leads to higher expected profits as well as to materially different pricing recommendations.
Forecasting; Sales response modeling; Heterogeneity; Functional flexibility; Expected profits;
http://www.sciencedirect.com/science/article/pii/S0377221715001678
Lang, Stefan
Steiner, Winfried J.
Weber, Anett
Wechselberger, Peter
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:476-4862015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:476-486
article
Commodity derivatives pricing with cointegration and stochastic covariances
Empirically, cointegration and stochastic covariances, including stochastic volatilities, are statistically significant for commodity prices and energy products. To capture such market phenomena, we develop a continuous-time dynamics of cointegrated assets with a stochastic covariance matrix and derive the joint characteristic function of asset returns in closed form. The proposed model offers an endogenous explanation for the stochastic mean-reverting convenience yield. The time series of spot and futures prices of WTI crude oil and gasoline show a cointegration relationship under both physical and risk-neutral measures. The proposed model also allows us to fit the observed term structure of futures prices and calibrate the market-implied cointegration relationship. We apply it to value options on a single commodity and on multiple commodities.
Option pricing; Cointegration; Stochastic covariance; Stochastic convenience yield;
http://www.sciencedirect.com/science/article/pii/S0377221715003847
Chiu, Mei Choi
Wong, Hoi Ying
Zhao, Jing
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:218-2312015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:218-231
article
E-NAUTILUS: A decision support system for complex multiobjective optimization problems based on the NAUTILUS method
Interactive multiobjective optimization methods cannot always be easily applied when (industrial) multiobjective optimization problems are involved. There are at least two important factors to be considered with any interactive method: computationally expensive functions and aspects of human behavior. In this paper, we propose a method based on the existing NAUTILUS method and call it the Enhanced NAUTILUS (E-NAUTILUS) method. This method borrows the motivation of NAUTILUS along with the human aspects related to avoiding trading-off and anchoring bias and extends its applicability to computationally expensive multiobjective optimization problems. In the E-NAUTILUS method, a set of Pareto optimal solutions is calculated in a pre-processing stage before the decision maker is involved. When the decision maker interacts with the solution process in the interactive decision making stage, no new optimization problem is solved, thus avoiding the waiting time for the decision maker to obtain new solutions according to her/his preferences. In this stage, starting from the worst possible objective function values, the decision maker is shown a set of points in the objective space, from which (s)he chooses one as the preferable point. At successive iterations, (s)he always sees points which improve all the objective values achieved by the previously chosen point. In this way, the decision maker remains focused on the solution process, as there is no loss in any objective function value between successive iterations. The last post-processing stage ensures the Pareto optimality of the final solution. A real-life engineering problem is used to demonstrate how E-NAUTILUS works in practice.
Multiple objective programming; Interactive methods; Multiple criteria optimization; Computational cost; Trading-off;
http://www.sciencedirect.com/science/article/pii/S0377221715003203
Ruiz, Ana B.
Sindhya, Karthik
Miettinen, Kaisa
Ruiz, Francisco
Luque, Mariano
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:44-502015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:44-50
article
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a globally optimal subset using the branch and bound (BB) algorithm is limited to problems in very low dimension, typically d ≤ 5, as the complexity of the problem increases exponentially with d. We introduce a bold pruning strategy in the BB algorithm that results in a significant reduction in computing time, at the price of a negligible loss of accuracy. The novelty of our algorithm is that the bounds at the nodes of the BB tree come from pseudo-convexifications derived using a linearization technique with approximate bounds for the nonlinear terms. The approximate bounds are computed by solving an auxiliary semidefinite optimization problem. We show through a computational study that our algorithm performs well on a wide set of the most difficult instances of the LTSE problem.
Global optimization; Integer programming; High breakdown point regression; Branch and bound; Relaxation–linearization technique;
http://www.sciencedirect.com/science/article/pii/S0377221715003173
Flores, Salvador
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:528-5422015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:528-542
article
On the exact solution of the multi-period portfolio choice problem for an exponential utility under return predictability
In this paper we derive the exact solution of the multi-period portfolio choice problem for an exponential utility function under return predictability. It is assumed that the asset returns depend on predictable variables and that the joint random process of the asset returns and the predictable variables follows a vector autoregressive process. We prove that the optimal portfolio weights depend on the covariance matrices of the next two periods and the conditional mean vector of the next period. The case without predictable variables and the case of independent asset returns are special cases of our solution. Furthermore, we provide an exhaustive empirical study in which the cumulative empirical distribution function of the investor’s wealth is calculated using the exact solution. It is compared with the investment strategy obtained under the additional assumption that the asset returns are independently distributed.
Multi-period asset allocation; Expected utility optimization; Exponential utility function; Return predictability;
http://www.sciencedirect.com/science/article/pii/S037722171500332X
Bodnar, Taras
Parolya, Nestor
Schmid, Wolfgang
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:435-4492015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:435-449
article
Reverse supply chains: Effects of collection network and returns classification on profitability
Used products collected for value recovery are characterized by higher uncertainty regarding their quality condition compared to raw materials used in forward supply chains. Because of the need for timely information regarding their quality, a common business practice is to establish procedures for the classification of used products (returns), which is not always error-free. The existence of a multitude of sites where used products can be collected further increases the complexity of reverse supply chain design and management. In this paper we formulate the objective function for a reverse supply chain with multiple collection sites and the possibility of returns sorting, assuming general distributions of demand and returns quality in a single-period context. We derive conditions for the determination of the optimal acquisition and remanufacturing lot-sizing decisions under alternative locations of the unreliable classification/sorting operation. We provide closed-form expressions for the selection of the optimal sorting location in the special case of identical collection sites and guidelines for tackling the decision-making problem in the general case. Furthermore, we examine analytically the effect of the cost and accuracy of the classification procedure on the profitability of the alternative supply chain configurations. Our analysis, which is accompanied by a brief numerical investigation, offers insights regarding the impact of yield variability, number of collection sites, and location and characteristics of the returns classification operation both on the acquisition decisions and on the profitability of the reverse supply chain.
Multiple suppliers; Random yield; Location of sorting; Returns classification errors; Value of quality information;
http://www.sciencedirect.com/science/article/pii/S0377221715003744
Zikopoulos, Christos
Tagaras, George
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:471-4752015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:471-475
article
Pricing and sales-effort investment under bi-criteria in a supply chain of virtual products involving risk
This work develops a stochastic model of a two-echelon supply chain of virtual products in which the decision makers—a manufacturer and a retailer—may be risk-sensitive. Virtual products allow the retailer to avoid holding costs and ensure timely fulfillment of demand with no risk of shortage. We expand on the work of Chernonog and Avinadav (2014), who investigated the pricing of virtual products under uncertain and price-dependent demand, by including sales-effort as a decision variable that affects demand. Whereas in the previous work equilibrium was obtained exactly as in a deterministic case for any utility function, herein it is not. Consequently, we focus on the strategies of both the manufacturer and the retailer under different profit criteria, including the use of bi-criteria. By formulating the problem as a Stackelberg game, we show that the problem can be analytically solved by assuming certain common structures of the demand function and of the preferences of both the manufacturer and the retailer with regard to risk. We extend the solution to the case of imperfect information regarding the preferences and offer guidelines for the formation of efficient sets of decisions under bi-criteria. Finally, we provide numerical results.
Supply chain; Game theory; Risk; Multiple criteria; Imperfect information;
http://www.sciencedirect.com/science/article/pii/S0377221715004142
Chernonog, Tatyana
Avinadav, Tal
Ben-Zvi, Tal
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:562-5742015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:562-574
article
A systemic method for organisational stakeholder identification and analysis using Soft Systems Methodology (SSM)
This paper presents a systemic methodology for identifying and analysing the stakeholders of an organisation at many different levels. The methodology is based on soft systems methodology and is applicable to all types of organisation, both for profit and non-profit. The methodology begins with the top-level objectives of the organisation, developed through debate and discussion, and breaks these down into the key activities needed to achieve them. A range of stakeholders are identified for each key activity. At the end, the functions and relationships of all the stakeholder groups can clearly be seen. The methodology is illustrated with an actual case study in Hunan University.
Stakeholder identification; Stakeholder analysis; Soft systems methodology;
http://www.sciencedirect.com/science/article/pii/S0377221715003860
Wang, Wei
Liu, Wenbin
Mingers, John
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:76-852015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:76-85
article
Optimal inventory policy for two substitutable products with customer service objectives
We consider a firm facing stochastic demand for two products with downward, supplier-driven substitution and customer service objectives. We assume both products are perishable or prone to obsolescence, hence the firm faces a single period problem. The fundamental challenge facing the firm is to determine in advance of observing demand the profit maximizing inventory levels of both products that will meet given service level objectives. Note that while we speak of inventory levels, the products may be either goods or services. We characterize the firm’s optimal inventory policy with and without customer service objectives. Results of a numerical study reveal the benefits obtained from substitution and show how optimal inventory levels are impacted by customer service objectives.
Inventory management; Capacity management; Substitution; Perishability; Customer service objective;
http://www.sciencedirect.com/science/article/pii/S0377221715003264
Chen, Xu
Feng, Youyi
Keblis, Matthew F.
Xu, Jianjun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:331-3382015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:331-338
article
Tri-criterion modeling for constructing more-sustainable mutual funds
One of the most important factors shaping world outcomes is where investment dollars are placed. In this regard, there is the rapidly growing area called sustainable investing, in which environmental, social, and corporate governance (ESG) measures are taken into account. With people interested in this type of investing rarely able to gain exposure to the area other than through a mutual fund, we study a cross section of U.S. mutual funds to assess the extent to which ESG measures are embedded in their portfolios. Our methodology makes heavy use of points on the nondominated surfaces of many tri-criterion portfolio selection problems in which sustainability is modeled, after risk and return, as a third criterion. With the mutual funds acting as a filter, the question is: How effective is the sustainable mutual fund industry in carrying out its charge? Our findings are that the industry has substantial leeway to increase the sustainability quotients of its portfolios even at no cost to risk and return, thus implying that the funds are unnecessarily falling short on the reasons why investors are investing in these funds in the first place.
Socially responsible investing; Multiple criteria optimization; Portfolio selection; Nondominated surfaces; Quadratically constrained linear programs;
http://www.sciencedirect.com/science/article/pii/S0377221715003288
Utz, Sebastian
Wimmer, Maximilian
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:619-6302015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:619-630
article
A moment-matching method to generate arbitrage-free scenarios
We propose a new moment-matching method to build scenario trees that rule out arbitrage opportunities when describing the dynamics of financial assets. The proposed scenario generator is based on the monomial method, a technique to solve systems of algebraic equations. Extensive numerical experiments show the accuracy and efficiency of the proposed moment-matching method when solving financial problems in complete and incomplete markets.
Scenarios; Monomial method; Moment-matching;
http://www.sciencedirect.com/science/article/pii/S0377221715003653
Staino, Alessandro
Russo, Emilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:392-3992015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:392-399
article
Optimality cuts and a branch-and-cut algorithm for the K-rooted mini-max spanning forest problem
Let G = (V, E) be an undirected graph with costs associated with its edges and K pre-specified root vertices. The K-rooted mini-max spanning forest problem asks for a spanning forest of G defined by exactly K mutually disjoint trees. Each tree must contain a different root vertex, and the cost of the most expensive tree must be minimum. This paper introduces a branch-and-cut algorithm for the problem. It involves a multi-start linear programming heuristic and the separation of some new optimality cuts. Extensive computational tests indicate that the new algorithm significantly improves on the results available in the literature. The improvements are reflected in lower CPU times, smaller enumeration trees, and optimality certificates for previously unattainable K = 2 instances with as many as 200 vertices. Furthermore, for the first time, instances of the problem with K ∈ {3, 4} are solved to proven optimality.
Combinatorial optimization; Branch-and-cut; K-rooted mini–max spanning forest problem; Optimality cuts;
http://www.sciencedirect.com/science/article/pii/S0377221715003719
da Cunha, Alexandre Salles
Simonetti, Luidi
Lucena, Abilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:345-3782015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:345-378
article
The third comprehensive survey on scheduling problems with setup times/costs
Scheduling involving setup times/costs plays an important role in today's modern manufacturing and service environments for the delivery of reliable products on time. The setup process is not a value-added factor, and hence, setup times/costs need to be explicitly considered while scheduling decisions are made in order to increase productivity, eliminate waste, improve resource utilization, and meet deadlines. However, the vast majority of the existing scheduling literature, more than 90 percent, ignores this fact. Interest in scheduling problems where setup times/costs are explicitly considered began in the mid-1960s and has been increasing, though not at the anticipated level. The first comprehensive review paper (Allahverdi et al., 1999) on scheduling problems with setup times/costs appeared in 1999, covering about 200 papers from the mid-1960s to mid-1998, while the second comprehensive review paper (Allahverdi et al., 2008) covered about 300 papers published from mid-1998 to mid-2006. This paper is the third comprehensive survey, providing an extensive review of about 500 papers that have appeared from mid-2006 to the end of 2014, including static, dynamic, deterministic, and stochastic environments. This review classifies scheduling problems based on shop environments as single machine, parallel machine, flowshop, job shop, or open shop. It further classifies the problems as family and non-family as well as sequence-dependent and sequence-independent setup times/costs. Given that so many papers have been published in a relatively short period of time, different researchers have addressed the same problem independently, even using the same methodology. Throughout the survey, the independently addressed problems are identified, and the need for comparing these results is emphasized. Moreover, based on performance measures as well as shop and setup times/costs environments, the less studied problems are identified and the need to address them is specified. The current survey, along with those of Allahverdi et al. (1999, 2008), provides an up-to-date account of scheduling problems involving static, dynamic, deterministic, and stochastic problems for different shop environments with setup times/costs since the first research on the topic appeared in the mid-1960s.
Scheduling; Review; Setup time; Setup cost;
http://www.sciencedirect.com/science/article/pii/S0377221715002763
Allahverdi, Ali
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:575-5812015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:575-581
article
On solving matrix games with pay-offs of triangular fuzzy numbers: Certain observations and generalizations
The purpose of this paper is to highlight a serious omission in the recent work of Li (2012) for solving two-person zero-sum matrix games with pay-offs of triangular fuzzy numbers (TFNs) and to propose a new methodology for solving such games. Li (2012) proposed a method which always assures that the max player's gain-floor and the min player's loss-ceiling have a common TFN value. The present paper exhibits a flaw in this claim of Li (2012). The flaw arises because Li (2012) does not explain the meaning of a solution of the game under consideration. The present paper provides certain appropriate modifications to Li's model to address this omission. These modifications, in conjunction with the results of Clemente, Fernandez, and Puerto (2011), lead to an algorithm to solve matrix games with pay-offs of general piecewise linear fuzzy numbers.
Game theory; Fuzzy pay-offs; Fuzzy numbers; Multiobjective optimization; Pareto optimality;
http://www.sciencedirect.com/science/article/pii/S0377221715003835
Chandra, S.
Aggarwal, A.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:86-1072015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:86-107
article
A biased random-key genetic algorithm for the unequal area facility layout problem
This paper presents a biased random-key genetic algorithm (BRKGA) for the unequal area facility layout problem (UA-FLP), in which a set of rectangular facilities with given area requirements has to be placed, without overlapping, on a rectangular floor space. The objective is to find the location and the dimensions of the facilities such that the sum of the weighted distances between the centroids of the facilities is minimized. A hybrid approach is developed that combines a BRKGA, to determine the order of placement and the dimensions of each facility, a novel placement strategy, to position each facility, and a linear programming model, to fine-tune the solutions. The proposed approach is tested on 100 random datasets and 28 benchmark datasets taken from the literature and compared with 21 other benchmark approaches. The quality of the approach was validated by the improvement of the best known solutions for 19 of the 28 extensively studied benchmark datasets.
Facilities planning and design; Facility layout; Biased random-key genetic algorithms; Random-keys;
http://www.sciencedirect.com/science/article/pii/S0377221715003227
Gonçalves, José Fernando
Resende, Mauricio G.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:108-1182015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:108-118
article
Comments on the EOQ model for deteriorating items with conditional trade credit linked to order quantity in the supply chain management
Ouyang et al. (2009) consider an economic order quantity (EOQ) model for deteriorating items with a partially permissible delay in payments linked to order quantity. Their inventory model is practical, but it contains some defects from a logical and mathematical viewpoint. In this paper, the functional behaviors of the annual total relevant costs are explored by rigorous mathematical methods. A complete solution procedure is also developed to make up for the shortcomings of Ouyang et al. (2009). Numerical examples show that the new solution procedure avoids wrong decisions and the resulting cost penalties.
Inventory; EOQ; Trade credit; Partially permissible delay in payments; Deteriorating items;
http://www.sciencedirect.com/science/article/pii/S0377221715003665
Ting, Pin-Shou
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:339-3422015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:339-342
article
Optimal shelf-space stocking policy using stochastic dominance under supply-driven demand uncertainty
In this paper, we develop an optimal shelf-space stocking policy when demand, in addition to the exogenous uncertainty, is influenced by the amount of inventory displayed (supply) on the shelves. Our model exploits a stochastic dominance condition: we assume that the distribution of realized demand with a higher stocking level stochastically dominates the distribution of realized demand with a lower stocking level. We show that the critical fractile with endogenous demand may not exceed the critical fractile of the classical newsvendor model. Our computational results validate the optimality of the amount of units stocked on the retail shelves.
Displayed inventory; Stochastic dominance; Newsvendor; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S0377221715003240
Amit, R.K.
Mehta, Peeyush
Tripathi, Rajeev R.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:51-652015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:51-65
article
An efficient genetic algorithm with a corner space algorithm for a cutting stock problem in the TFT-LCD industry
In this study, we investigate a two-dimensional cutting stock problem in the thin film transistor liquid crystal display industry. Given the lack of an efficient and effective mixed production method that can produce various sizes of liquid crystal display panels from a glass substrate sheet, thin film transistor liquid crystal display manufacturers have relied on the batch production method, which only produces one size of liquid crystal display panel from a single substrate. However, batch production is not an effective or flexible strategy because it increases production costs by using an excessive number of glass substrate sheets and causes wastage costs from unused liquid crystal display panels. A number of mixed production approaches or algorithms have been proposed. However, these approaches cannot solve industrial-scale two-dimensional cutting stock problems efficiently because of their computational complexity. We propose an efficient and effective genetic algorithm that incorporates a novel placement procedure, called a corner space algorithm, and a mixed integer programming model to resolve the problem. The key objectives are to reduce the total production costs and to satisfy the requirements of customers. Our computational results show that, in terms of solution quality and computation time, the proposed method significantly outperforms the existing approaches.
Two-dimensional cutting; Mixed production; Genetic algorithm; TFT-LCD; Corner space algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715003379
Lu, Hao-Chun
Huang, Yao-Huei
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:263-2802015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:263-280
article
Joint optimization for coordinated configuration of product families and supply chains by a leader-follower Stackelberg game
Product family design by module configuration is conducive to accommodating product variety while maintaining mass production efficiency. Effective fulfillment of product families necessitates joint decision making of product family configuration (PFC) and downstream supply chain configuration (SCC), due to nowadays manufacturers’ moving towards assembly-to-order production throughout a distributed supply chain network. Existing decision models for joint optimization of product family and supply chain configuration are originated from an “all-in-one” approach that assumes both PFC and SCC decisions can be integrated into one optimization problem by aggregating two different types of objectives into a single objective function. Such an assumption neglects the complex tradeoffs underlying two different decision making problems and fails to reveal the inherent coupling of PFC and SCC.
Product family; Supply chain; Module configuration; Stackelberg game; Bi-level optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500315X
Yang, Dong
Jiao, Jianxin (Roger)
Ji, Yangjian
Du, Gang
Helo, Petri
Valente, Anna
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:554-5612015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:554-561
article
A nonparametric methodology for evaluating convergence in a multi-input multi-output setting
This paper presents a novel nonparametric methodology to evaluate convergence in an industry, considering a multi-input multi-output setting for the assessment of total factor productivity. In particular, we develop two new indexes to evaluate σ-convergence and β-convergence that can be computed using nonparametric techniques such as Data Envelopment Analysis. The methodology developed is particularly useful to enhance productivity assessments based on the Malmquist index. The methodology is applied to a real world context, consisting of a sample of Portuguese construction companies that operated in the sector between 2008 and 2010. The empirical results show that Portuguese companies tended to converge, in the sense of both σ and β, in all construction activity segments in the aftermath of the financial crisis.
Convergence; Productivity; Malmquist index; Data envelopment analysis; Construction industry;
http://www.sciencedirect.com/science/article/pii/S0377221715003872
Horta, Isabel M.
Camanho, Ana S.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:597-6082015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:597-608
article
Exact and heuristic approaches to the airport stand allocation problem
The Stand Allocation Problem (SAP) consists in assigning aircraft activities (arrival, departure and intermediate parking) to aircraft stands (parking positions) with the objective of maximizing the number of passengers/aircraft at contact stands and minimizing the number of towing movements, while respecting a set of operational and commercial requirements. We first prove that the problem of assigning each operation to a compatible stand is NP-complete by a reduction from the circular arc graph coloring problem. As a corollary, this implies that the SAP is NP-hard. We then formulate the SAP as a Mixed Integer Program (MIP) and strengthen the formulation in several ways. Additionally, we introduce two heuristic algorithms based on a spatial and time decomposition leading to smaller MIPs. The methods are tested on realistic scenarios based on actual data from two major European airports. We compare the performance and the quality of the solutions with state-of-the-art algorithms. The results show that our MIP-based methods provide significant improvements over the solutions outlined in previously published approaches. Moreover, their low computation times make them very practical.
Mixed integer programming; Gate assignment problem; Heuristic algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221715003331
Guépet, J.
Acuna-Agost, R.
Briant, O.
Gayon, J.P.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:209-2172015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:209-217
article
A generalized equilibrium efficient frontier data envelopment analysis approach for evaluating DMUs with fixed-sum outputs
The recently published equilibrium efficient frontier data envelopment analysis (EEFDEA) approach (Yang et al., 2014) represents a step forward in evaluating decision-making units (DMUs) with fixed-sum outputs compared to prior approaches such as the fixed-sum outputs DEA (FSODEA) approach (Yang et al., 2011) and the zero sum gains DEA (ZSG-DEA) approach (Lins et al., 2003). Building on the EEFDEA approach, in this paper we propose a generalized equilibrium efficient frontier data envelopment analysis (GEEFDEA) approach that improves and strengthens it. Compared to the EEFDEA approach, the proposed approach makes several improvements: (1) it is not necessary to determine the evaluation order in advance, which overcomes the limitation that different evaluation orders lead to different results; (2) the equilibrium efficient frontier can be achieved in a single step no matter how many DMUs there are, which greatly simplifies the procedure for reaching the equilibrium efficient frontier, especially when the number of DMUs is large; and (3) the constraint in prior approaches that the signs of each DMU's output adjustments must be the same (all non-positive or all non-negative) is relaxed. In this sense, the results obtained by the proposed approach are more consistent with the demands of practical applications. Finally, the proposed approach, combined with assurance regions (AR), is applied to the data set of the 2012 London Olympic Games.
Data envelopment analysis (DEA); Generalized equilibrium efficient frontier; Fixed sum outputs; Assurance region;
http://www.sciencedirect.com/science/article/pii/S0377221715003161
Yang, Min
Li, Yong Jun
Liang, Liang
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:320-3302015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:320-330
article
On the value of exposure and secrecy of defense system: First-mover advantage vs. robustness
It is commonly accepted in the literature that, when facing a strategic terrorist, the government can be better off manipulating the terrorist's target selection by exposing her defense levels and thus moving first. However, the terrorist's private information may significantly affect the government's first-mover advantage, an effect that has not been extensively studied in the literature. To explore the impact of asymmetry between the government's and the terrorist's target valuations on the defense equilibrium, we propose a model in which the government chooses between disclosure (sequential game) and secrecy (simultaneous game) of her defense system. Our analysis shows that the government's first-mover advantage in a sequential game is considerable only when the government and the terrorist share relatively similar valuations of the targets. In contrast, we find that, interestingly, the government no longer benefits from the first-mover advantage of exposing her defense levels when the degree of divergence between the government's and the terrorist's valuations of the targets is high. This is due to the robustness of the defense system under secrecy, in the sense that all targets should be defended in equilibrium irrespective of how much the terrorist's valuation of targets differs from the government's. We identify two phenomena that lead to this result. First, when the terrorist holds a significantly higher valuation of targets than the government believes, the government may waste her budget in a sequential game by over-investing in the high-valued targets. Second, when the terrorist holds a significantly lower valuation of targets, the government may incur a higher expected damage in a sequential game because the low-valued targets are left undefended. Finally, we believe that this paper provides some novel insights into homeland security resource allocation problems.
Defense system; Game theory; Secrecy; Exposure; Robustness;
http://www.sciencedirect.com/science/article/pii/S0377221715003367
Nikoofal, Mohammad E.
Zhuang, Jun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:119-1272015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:119-127
article
Multivariate control charts based on the James–Stein estimator
In this study, we focus on improving parameter estimation in the Phase I study to construct more accurate Phase II control limits for monitoring multivariate quality characteristics. For a multivariate normal distribution with unknown mean vector, the usual mean estimator is known to be inadmissible under the squared error loss function when the dimension of the variables is greater than two. Shrinkage estimators, such as the James–Stein estimators, have been shown in the literature to perform better than the conventional estimators. We utilize the James–Stein estimators to improve the Phase I parameter estimation. Multivariate control limits for Phase II monitoring based on the improved estimators are proposed in this study. The resulting control charts, JS-type charts, are shown to offer substantial performance improvements over the existing ones.
Average run length; Control chart; Multivariate normal distribution; James–Stein estimator;
http://www.sciencedirect.com/science/article/pii/S0377221715001666
Wang, Hsiuying
Huwang, Longcheen
Yu, Jeng Hung
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:641-6502015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:641-650
article
Optimal design of bilateral contracts for energy procurement
In this paper, we consider the problem of optimizing the portfolio of an aggregator that interacts with the energy grid via bilateral contracts. The purpose of the contracts is to achieve the pointwise procurement of energy to the grid. The challenge raised by the coordination of scattered resources and the securing of obligations over the planning horizon is addressed through a twin-time scale model, where robust short term operational decisions are contingent on long term resource usage incentives that embed the full extent of contract specifications.
Distributed energy resource; Bilateral contract; Dynamic resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221715003707
Gilbert, François
Anjos, Miguel F.
Marcotte, Patrice
Savard, Gilles
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:307-3192015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:307-319
article
Cost-effectiveness measures on convex and nonconvex technologies
Camanho and Dyson (2005) extended Shephard's (1974) revenue-indirect cost efficiency approach to a cost-effectiveness framework, which helps to assess the ability of a firm to achieve the current revenue (expressed in the firm's own prices and quantities) at minimum cost. The degree of cost-effectiveness is quantified as the ratio of the minimum cost to the observed cost of the evaluated firm, where the minimum cost is computed by simultaneously adjusting the output levels at the current revenue. In this paper, we develop two cost-effectiveness approaches based on convex data envelopment analysis and nonconvex free disposable hull technologies. The objectives of this paper are threefold. Firstly, we develop a convex cost-effectiveness (CCE) measure which is equivalent to the Camanho–Dyson CCE measure under the constant returns-to-scale assumption. Secondly, we introduce three nonconvex cost-effectiveness (NCCE) measures which are shown to be equivalent with respect to each returns-to-scale nonconvex technology. Finally, we apply our framework to real data.
Data envelopment analysis (DEA); Free disposal hull (FDH); Convex cost-effectiveness (CCE); Nonconvex cost-effectiveness (NCCE); Returns-to-scale;
http://www.sciencedirect.com/science/article/pii/S0377221715002751
Fukuyama, Hirofumi
Shiraz, Rashed Khanjani
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:462-4702015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:462-470
article
Decentral allocation planning in multi-stage customer hierarchies
This paper presents a novel allocation scheme to improve profits when splitting a scarce product among customer segments. These segments differ by demand and margin and they form a multi-level tree, e.g. according to a geography-based organizational structure. In practice, allocation has to follow an iterative process in which higher level quotas are disaggregated one level at a time, only based on local, aggregate information. We apply well-known econometric concepts such as the Lorenz curve and Theil’s index of inequality to find a non-linear approximation of the profit function in the customer tree. Our resulting Approximate Profit Decentral Allocation (ADA) scheme ensures that a group of truthfully reporting decentral planners makes quasi-coordinated decisions in support of overall profit-maximization in the hierarchy. The new scheme outperforms existing simple rules by a large margin and comes close to the first-best theoretical solution under a central planner and central information.
Supply chain management; Demand fulfillment; Allocation planning; Customer hierarchies; Customer heterogeneity;
http://www.sciencedirect.com/science/article/pii/S0377221715003811
Vogel, Sebastian
Meyr, Herbert
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:20-332015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:20-33
article
Solving stochastic resource-constrained project scheduling problems by closed-loop approximate dynamic programming
Project scheduling problems with both resource constraints and uncertain task durations have applications in a variety of industries. While the existing research literature has focused on finding an a priori open-loop task sequence that minimizes the expected makespan, finding a dynamic and adaptive closed-loop policy has been regarded as computationally intractable. In this research, we develop effective and efficient approximate dynamic programming (ADP) algorithms based on the rollout policy for this category of stochastic scheduling problems. To enhance the performance of the rollout algorithm, we employ constraint programming (CP) to improve the base policy offered by a priority-rule heuristic. We further devise a hybrid ADP framework that integrates both the look-back and look-ahead approximation architectures, to simultaneously achieve both the quality of a rollout (look-ahead) policy that sequentially improves a task sequence and the efficiency of a lookup table (look-back) approach. Computational results on the benchmark instances show that our hybrid ADP algorithm is able to obtain solutions competitive with state-of-the-art algorithms in reasonable computational time. It performs particularly well for instances with non-symmetric probability distributions of task durations.
Resource-constrained project scheduling; Uncertain task durations; Stochastic scheduling; Approximate dynamic programming; Simulation;
http://www.sciencedirect.com/science/article/pii/S037722171500288X
Li, Haitao
Womer, Norman K.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:582-5962015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:582-596
article
Methods for solving the mean query execution time minimization problem
One of the most significant and common techniques for accelerating user queries in multidimensional databases is view materialization. The problem of choosing an appropriate part of the data structure for materialization under limited resources is known as the view selection problem. In this paper, the problem of minimizing the mean query execution time under limited storage space is studied. Different heuristics based on a greedy method are examined, proofs regarding their performance are presented, and modifications to them are proposed, which not only improve the solution cost but also shorten the running time. Additionally, the heuristics and a widely used Integer Programming solver are experimentally compared with respect to the running time and the cost of the solution. What distinguishes this comparison is its comprehensiveness, which is obtained by the use of performance profiles. Two computational effort reduction schemas, which significantly accelerate heuristics as well as optimal algorithms without increasing the value of the cost function, are also proposed. The presented experiments were done on a large dataset with special attention to large problems, rarely considered in previous experiments. The main disadvantage of the greedy method indicated in the literature was its long running time. The results of the conducted experiments show that the modification of the greedy algorithm, together with the computational effort reduction schemas presented in this paper, results in a method that finds a solution in a short time, even for large lattices.
Decision support systems; Heuristics; OLAP; View materialization; View selection problem;
http://www.sciencedirect.com/science/article/pii/S0377221715003343
Łatuszko, Marek
Pytlak, Radosław
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:631-6402015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:631-640
article
A multi-step rolled forward chance-constrained model and a proactive dynamic approach for the wheat crop quality control problem
Handling weather uncertainty during the harvest season is an indispensable aspect of seed gathering activities. More precisely, this study focuses on the multi-period wheat quality control problem during the crop harvest season under meteorological uncertainty. In order to alleviate the problem's curse of dimensionality and to faithfully reflect exogenous uncertainties revealed progressively over time, we propose a multi-step joint chance-constrained model rolled forward step by step. This model is subsequently solved by a proactive dynamic approach, specially conceived for this purpose. On instances derived from real-world data, the obtained computational results exhibit proactive and accurate harvest scheduling solutions for the wheat crop quality control problem.
OR in agriculture; Multi-step joint chance constrained programming; Proactive dynamic approach; Exogenous Markov decision process; Wheat crop quality control;
http://www.sciencedirect.com/science/article/pii/S0377221715003689
Borodin, Valeria
Bourtembourg, Jean
Hnaien, Faicel
Labadie, Nacima
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:450-4612015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:450-461
article
A frontier measure of U.S. banking competition
The three main measures of competition (HHI, Lerner index, and H-statistic) are uncorrelated for U.S. banks. We investigate why this occurs, propose a frontier measure of competition, and apply it to five major bank service lines. Fee-based banking services comprise 35 percent of bank revenues, so assessing competition by service line is preferable to using a single measure for traditional activities extended to the entire bank. As the Lerner index and the H-statistic together explain only 1 percent of HHI variation, and the HHI is similarly unrelated to the frontier method developed here, current merger/acquisition guidelines should be adjusted, as banking concentration seems unrelated to likely more accurate competition measures.
(D) Productivity and competitiveness; Competition; Banks;
http://www.sciencedirect.com/science/article/pii/S0377221715003896
Bolt, Wilko
Humphrey, David
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:214-2252016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:214-225
article
Ambiguity in risk preferences in robust stochastic optimization
We consider robust stochastic optimization problems for risk-averse decision makers, where there is ambiguity about both the decision maker’s risk preferences and the underlying probability distribution. We propose and analyze a robust optimization problem that accounts for both types of ambiguity. First, we derive a duality theory for this problem class and identify random utility functions as the Lagrange multipliers. Second, we turn to the computational aspects of this problem. We show how to evaluate our robust optimization problem exactly in some special cases, and then we consider some tractable relaxations for the general case. Finally, we apply our model to both the newsvendor and portfolio optimization problems and discuss its implications.
Stochastic dominance; Robust optimization; Expected utility maximization;
http://www.sciencedirect.com/science/article/pii/S0377221716301448
Haskell, William B.
Fu, Lunce
Dessouky, Maged
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:202-2132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:202-213
article
The influence of challenging goals and structured method on Six Sigma project performance: A mediated moderation analysis
Over the past few decades, Six Sigma has diffused to a wide array of organizations across the globe, fueled by the reported financial benefits of Six Sigma. Implementing Six Sigma entails carrying out a series of Six Sigma projects that improve business processes. Scholars have investigated some mechanisms that influence project success, such as setting challenging goals and adhering to the Six Sigma method. However, these mechanisms have been studied in a piecemeal fashion and do not provide a deeper understanding of their interrelationships. Developing a deeper understanding of these mechanisms helps identify the contingency and boundary conditions that influence Six Sigma project execution. Drawing on Sociotechnical Systems theory, this research conceptualizes and empirically examines the interrelationships of the key mechanisms that influence project execution. Specifically, we examine the interrelationship between Six Sigma project goals (Social System), adherence to the Six Sigma method (Technical System), and knowledge creation. The analysis uses a mediation-moderation approach which helps empirically examine these relationships. The data come from a survey of 324 employees in 102 Six Sigma projects from two organizations. The findings show that project goals and the Six Sigma method can compensate for one another. They also suggest that adherence to the Six Sigma method becomes more beneficial for projects that create a lot of knowledge; otherwise, the method becomes less important. Prior research has not examined these contingencies and boundary conditions, which ultimately influence project success.
Six Sigma; Goal theory; Sociotechnical systems theory; Structured method; Mediated moderation;
http://www.sciencedirect.com/science/article/pii/S0377221716301503
Arumugam, V.
Antony, Jiju
Linderman, Kevin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:80-912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:80-91
article
An adaptive large neighborhood search for the two-echelon multiple-trip vehicle routing problem with satellite synchronization
The two-echelon vehicle routing problem (2E-VRP) consists in making deliveries to a set of customers using two distinct fleets of vehicles. First-level vehicles pick up requests at a distribution center and bring them to intermediate sites. At these locations, the requests are transferred to second-level vehicles, which deliver them. This paper addresses a variant of the 2E-VRP that integrates constraints arising in city logistics such as time window constraints, synchronization constraints, and multiple trips at the second level. The corresponding problem is called the two-echelon multiple-trip vehicle routing problem with satellite synchronization (2E-MTVRP-SS). We propose an adaptive large neighborhood search to solve this problem. Custom destruction and repair heuristics and an efficient feasibility check for moves have been designed and evaluated on modified benchmarks for the VRP with time windows.
Routing; Two-echelon VRP; Synchronization; City logistics; Adaptive large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221716301862
Grangier, Philippe
Gendreau, Michel
Lehuédé, Fabien
Rousseau, Louis-Martin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:226-2352016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:226-235
article
Quantifiers induced by subjective expected value of sample information with Bernstein polynomials
A kind of personalized quantifier, the so-called SEVSI-induced quantifier (an acronym for Subjective Expected Value of Sample Information), is developed in this paper by introducing Bernstein polynomials of higher degree. This provides a novel solution that improves the final representation of the quantifier, which generally performed poorly in our previous work, thus enhancing the quality of the global approximation of functions and improving the operability of this kind of quantifier for practical use. We show some properties of the developed quantifier. We also prove the consistency of OWA aggregation guided by this type of quantifier. Finally, we show experimentally that the developed quantifier outperforms the one based on piecewise linear interpolation in terms of both geometrical characteristics and operability. It can thus be considered an effective analytical tool for handling complex cases in which people's personalities or behavioral intentions must be taken into account in decision making under uncertainty.
Uncertainty modeling; Personalized quantifier; Bernstein polynomials; Ordered weighted averaging (OWA) aggregation;
http://www.sciencedirect.com/science/article/pii/S0377221716301436
Guo, Kaihong
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:92-1042016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:92-104
article
Efficient inventory control for imperfect quality items
In this paper, we present a general EOQ model for items that are subject to inspection for imperfect quality. Each lot that is delivered to the sorting facility undergoes 100 percent screening, and the percentage of defective items per lot reduces according to a learning curve. The generality of the model is viewed as important from both an academic and a practitioner perspective. The mathematical formulation considers arbitrary functions of time that allow the decision maker to assess the consequences of a diverse range of strategies by employing a single inventory model. A rigorous methodology is utilised to show that the solution is a unique global optimum, and a general step-by-step solution procedure is presented for continuous intra-cycle periodic review applications. The value of the temperature history and flow time through the supply chain is also used to determine an efficient policy. Furthermore, coordination mechanisms that may affect the supplier and the retailer are explored to improve inventory control at both echelons. The paper provides illustrative examples that demonstrate the application of the theoretical model in different settings and lead to the generation of interesting managerial insights.
Inventory; Imperfect quality; Deterioration; Perishable items; Periodic review;
http://www.sciencedirect.com/science/article/pii/S0377221716302041
Alamri, Adel A.
Harris, Irina
Syntetos, Aris A.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:418-4272016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:418-427
article
Value of information in portfolio selection, with a Taiwan stock market application illustration
Despite many proposed alternatives, the predominant model in portfolio selection is still mean–variance. However, the main weakness of the mean–variance model lies in the specification of the expected returns of the individual securities involved. If this process is not accurate, the allocations of capital to the different securities will almost certainly be incorrect. If, however, this process can be made accurate, then correct allocations can be made, and the additional expected return that follows is the value of information. This paper thus proposes a methodology to calculate the value of information. A related idea of a level of disappointment is also presented. How value-of-information calculations can help a mutual fund decide how much to set aside for research is discussed with reference to an illustrative Taiwan Stock Exchange application in which the value of information appears to be substantial. Heavy use is made of parametric quadratic programming to keep computation times down for the methodology.
Efficient points; Portfolio selection; Value of information; Piecewise linear paths; Parametric quadratic programming;
http://www.sciencedirect.com/science/article/pii/S0377221716300315
Kao, Chiang
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:113-1262016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:113-126
article
Risk measures and their application to staffing nonstationary service systems
In this paper, we explore the use of static risk measures from the mathematical finance literature to assess the performance of some standard nonstationary queueing systems. To do this we study two important queueing models, namely the infinite server queue and the multi-server queue with abandonment. We derive exact expressions for the value of many standard risk measures for the Mt/M/∞, Mt/G/∞, and Mt/Mt/∞ queueing models. We also derive Gaussian based approximations for the value of risk measures for the Erlang-A queueing model. Unlike more traditional approaches of performance analysis, risk measures offer the ability to satisfy the unique and specific risk preferences or tolerances of service operations managers. We also show how risk measures can be used for staffing nonstationary systems with different risk preferences and assess the impact of these staffing policies via simulation.
Queues and service systems; Risk measures; Healthcare; Time-inhomogeneous Markov processes; Staffing;
http://www.sciencedirect.com/science/article/pii/S0377221716301400
Pender, Jamol
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:290-2972016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:290-297
article
Scheduling under linear constraints
We introduce a parallel machine scheduling problem in which the processing times of jobs are not given in advance but are determined by a system of linear constraints. The objective is to minimize the makespan, i.e., the maximum job completion time among all feasible choices. This novel problem is motivated by various real-world application scenarios. We discuss the computational complexity and algorithms for various settings of this problem. In particular, we show that if there is only one machine with an arbitrary number of linear constraints, or there is an arbitrary number of machines with no more than two linear constraints, or both the number of machines and the number of linear constraints are fixed constants, then the problem is polynomial-time solvable via solving a series of linear programming problems. If both the number of machines and the number of constraints are inputs of the problem instance, then the problem is NP-hard. We further propose several approximation algorithms for the latter case.
Parallel machine scheduling; Linear programming; Computational complexity; Approximation algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716300650
Nip, Kameng
Wang, Zhenbo
Wang, Zizhuo
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:29-392016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:29-39
article
KKT optimality conditions in interval valued multiobjective programming with generalized differentiable functions
This paper studies a class of interval valued multiobjective programming problems. To this end, we consider two order relations, LU and LS, on the set of all closed intervals and propose several concepts of Pareto optimal solutions. Based on convexity concepts (viz. LU- and LS-convexity) and generalized differentiability (viz. gH-differentiability) of interval valued functions, KKT optimality conditions for the aforesaid problems are obtained. In addition, we compare our results with those given in Wu (2009) and show some advantages of our results. The theoretical development is illustrated by suitable examples.
Interval valued functions; gH-differentiability; LU; LS-convex functions; Pareto optimal solutions; KKT optimality conditions;
http://www.sciencedirect.com/science/article/pii/S0377221716301886
Singh, D.
Dar, B.A.
Kim, D.S.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:825-8422016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:825-842
article
The Hybrid Electric Vehicle – Traveling Salesman Problem
The reduction of carbon dioxide levels through the use of hybrid electric vehicles is an ongoing endeavor. Although this development is quite advanced for hybrid electric passenger cars, small transporters and trucks lag far behind. We address this challenge by introducing a new optimization problem that describes the delivery of goods with a hybrid electric vehicle to a set of customer locations. The Hybrid Electric Vehicle – Traveling Salesman Problem extends the well-known Traveling Salesman Problem by adding different modes of operation for the vehicle, causing different costs and driving times for each arc within the delivery network.
Travelling salesman; Hybrid electric vehicles; Transportation;
http://www.sciencedirect.com/science/article/pii/S0377221716301163
Doppstadt, C.
Koberstein, A.
Vigo, D.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:697-7102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:697-710
article
A data analytic approach to forecasting daily stock returns in an emerging market
Forecasting stock market returns is a challenging task due to the complex nature of the data. This study develops a generic methodology to predict daily stock price movements by deploying and integrating three data analytical prediction models: adaptive neuro-fuzzy inference systems, artificial neural networks, and support vector machines. The proposed approach is tested on the Borsa Istanbul BIST 100 Index over an 8 year period from 2007 to 2014, using accuracy, sensitivity, and specificity as metrics to evaluate each model. Using a ten-fold stratified cross-validation to minimize the bias of random sampling, this study demonstrates that the support vector machine outperforms the other models. For all three predictive models, accuracy in predicting down movements in the index outweighs accuracy in predicting the up movements. The study yields more accurate forecasts with fewer input factors compared to prior studies of forecasts for securities trading on Borsa Istanbul. This efficient yet also effective data analytic approach can easily be applied to other emerging market stock return series.
Prediction/forecasting; Stock market return; Business analytics; Borsa Istanbul (BIST 100); Istanbul Stock Exchange (ISE);
http://www.sciencedirect.com/science/article/pii/S0377221716301096
Oztekin, Asil
Kizilaslan, Recep
Freund, Steven
Iseri, Ali
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:659-6722016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:659-672
article
A model for clustering data from heterogeneous dissimilarities
Clustering algorithms partition a set of n objects into p groups (called clusters), such that objects assigned to the same groups are homogeneous according to some criteria. To derive these clusters, the data input required is often a single n × n dissimilarity matrix. Yet for many applications, more than one instance of the dissimilarity matrix is available, and so to conform to model requirements, it is common practice to aggregate (e.g., sum up, average) the matrices. This aggregation practice results in clustering solutions that mask the true nature of the original data. In this paper we introduce a clustering model which, to handle the heterogeneity, uses all available dissimilarity matrices and identifies groups of individuals who cluster the objects in a similar way. The model is a nonconvex problem that is difficult to solve exactly, and we thus introduce a Variable Neighborhood Search heuristic to provide solutions efficiently. Computational experiments and an empirical application to the perception of chocolate candy show that the heuristic algorithm is efficient and that the proposed model is suited for recovering heterogeneous data. Implications for clustering researchers are discussed.
Data mining; Clustering; Heterogeneity; Optimization; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301618
Santi, Éverton
Aloise, Daniel
Blanchard, Simon J.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:489-5022016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:489-502
article
A DEA based composite measure of quality and its associated data uncertainty interval for health care provider profiling and pay-for-performance
Composite measures calculated from individual performance indicators are increasingly used to profile and reward health care providers. We illustrate an innovative way of using Data Envelopment Analysis (DEA) to create a composite measure of quality for profiling facilities, informing consumers, and pay-for-performance programs. We compare DEA results to several widely used alternative approaches for creating composite measures: opportunity-based weights (OBW, a form of equal weighting) and a Bayesian latent variable model (BLVM, where weights are driven by variances of the individual measures). Based on point estimates of the composite measures, largely the same facilities appear in the top decile. However, when high performers are identified because the lower limits of their interval estimates are greater than the population average (or, in the case of the BLVM, the upper limits are less), there are substantial differences in the number of facilities identified: OBWs, the BLVM and DEA identify 25, 17 and 5 high performers, respectively. With DEA, where every facility is given the flexibility to set its own weights, it becomes much harder to distinguish the high performers. In a pay-for-performance program, the different approaches result in very different reward structures: DEA rewards a small group of facilities a larger percentage of the payment pool than the other approaches. Finally, as part of the DEA analyses, we illustrate an approach that uses Monte Carlo resampling with replacement to calculate interval estimates by incorporating uncertainty in the data generating process for facility input and output data. This approach, which can be used when data generating processes are hierarchical, has the potential for wider use than in our particular application.
Data Envelopment Analysis (DEA); Health care quality; Monte Carlo; Bootstrapping; Performance;
http://www.sciencedirect.com/science/article/pii/S0377221716301023
Shwartz, Michael
Burgess, James F.
Zhu, Joe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:524-5412016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:524-541
article
From stakeholders analysis to cognitive mapping and Multi-Attribute Value Theory: An integrated approach for policy support
One of the fundamental features of policy processes in contemporary societies is complexity. It follows from the plurality of points of view actors adopt in their interventions, and from the plurality of criteria upon which they base their decisions. In this context, collaborative multicriteria decision processes seem to be appropriate to address part of the complexity challenge. This study discusses a decision support framework that guides policy makers in their strategic decisions by using a multi-method approach based on the integration of three tools, i.e., (i) stakeholders analysis, to identify the multiple interests involved in the process, (ii) cognitive mapping, to define the shared set of objectives for the analysis, and (iii) Multi-Attribute Value Theory, to measure the level of achievement of the previously defined objectives by the policy options under investigation. The integrated decision support framework has been tested on a real world project concerning the location of new parking areas in a UNESCO site in Southern Italy. The purpose of this study was to test the operability of an integrated analytical approach to support policy decisions by investigating the combined and synergistic effect of the three aforementioned tools. The ultimate objective was to propose policy recommendations for a sustainable parking area development strategy in the region under consideration. The obtained results illustrate the importance of integrated approaches for the development of accountable public decision processes and consensus policy alternatives. The proposed integrated methodological framework will, hopefully, stimulate the application of other collaborative decision processes in public policy making.
Multiple criteria analysis; Decision analysis; Group decision and negotiations; Decision processes; Policy analytics;
http://www.sciencedirect.com/science/article/pii/S0377221716301072
Ferretti, Valentina
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:253-2682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:253-268
article
Forward thresholds for operation of pumped-storage stations in the real-time energy market
Pumped-storage hydroelectric plants are very valuable assets on the electric grid and in electric markets as they are able to pump and store water for generation, thus allowing for grid-level storage. Within the realm of short-term energy markets, we present a model for determining forward-looking thresholds for making generation and pumping decisions at such plants. A multistage stochastic programming framework is developed to optimize the thresholds with uncertain system prices over the next three days. Tractability issues are discussed and a novel method based on an implementation of the scatter search algorithm is proposed. Given the size of the multistage stochastic programming formulation, we argue that this novel method is a more accurate representation of the decision process. We demonstrate model stability and quality, and show that the forward thresholds obtained using a stochastic programming framework outperform the forward thresholds from a deterministic model, and thus can lead to efficiency gains for both the generation unit owner and the overall system in the real-time market.
Stochastic programming; OR in energy; Large scale optimization; Metaheuristics; Energy markets;
http://www.sciencedirect.com/science/article/pii/S0377221716301485
Vojvodic, Goran
Jarrah, Ahmad I.
Morton, David P.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:312-3192016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:312-319
article
Estimating the hyperbolic distance function: A directional distance function approach
Färe, Grosskopf, and Lovell (1985) merged Farrell’s input and output oriented technical efficiency measures into a new graph-type approach known as hyperbolic distance function (HDF). In spite of its appealing special structure in allowing for the simultaneous and equiproportionate reduction in inputs and increase in outputs, HDF is a non-linear optimization and it is hard to solve particularly when dealing with technologies operating under variable returns to scale. By connecting HDF to the directional distance function, we propose a linear programming based procedure for estimating the exact value of HDF within the non-parametric framework of data envelopment analysis. We illustrate the computational effectiveness of the algorithm on several real-world and simulated data sets, generating the optimal value of HDF through generally solving at most two linear programs. Moreover, our approach has several desirable properties such as: (1) introducing a computational dual formulation for the HDF and providing an economic interpretation in terms of shadow prices; (2) being readily adaptable to measure hyperbolic-oriented super-efficiency; and (3) being flexible to deal with HDF-based efficiency measures on environmental technologies.
Efficiency measurement; Data envelopment analysis; Hyperbolic distance function; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221716301916
Färe, Rolf
Margaritis, Dimitris
Rouse, Paul
Roshdi, Israfil
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:383-3912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:383-391
article
Optimal production planning for assembly systems with uncertain capacities and random demand
We study the optimal production planning for an assembly system consisting of n components in a single-period setting. Demand for the end-product is random, and production and assembly capacities are uncertain due to unexpected breakdowns, repairs, rework, and the like. The cost-minimizing firm (she) plans components production before the production capacities are realized, and after the outputs of components are observed, she decides the assembly amount before the demand realization. We start with a simplified system of selling two complementary products without an assembly stage and find that the firm's best choices can only be: (a) producing no products, or producing only the product with less stock such that its target amount is not higher than the other product's initial stock level, or (b) producing both products such that their target amounts are equal. Leveraging these findings, the two-dimensional optimization problem is reduced to two single-dimensional sub-problems and the optimal solution is characterized. For a general assembly system with n components, we show that if initially the firm has more end-products than a certain level, she will neither produce any component nor assemble end-product; if she does not have that many end-products but does have enough mated components, she will produce nothing and assemble up to that level; otherwise she will try to assemble all mated components and plan production of components accordingly. We characterize the structure of optimal solutions and find the solutions analytically.
Supply chain management; Assembly system; Uncertain capacity; Production planning;
http://www.sciencedirect.com/science/article/pii/S0377221716300583
Ji, Qingkai
Wang, Yunzeng
Hu, Xiangpei
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:681-6962016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:681-696
article
Unpacking multimethodology: Impacts of a community development intervention
Multimethodology interventions are being increasingly employed by operational researchers to cope with the complexity of real-world problems. In keeping with recent calls for more research into the ‘realised’ impacts of multimethodology, we present a detailed account of an intervention to support the planning of business ideas by a management team working in a community development context. Drawing on the rich stream of data gathered during the intervention, we identify a range of cognitive, task and relational impacts experienced by the management team during the intervention. These impacts are the basis for developing a process model that accounts for the personal, social and material changes reported by those involved in the intervention. The model explains how the intervention's analytic and relational capabilities incentivise the interplay of participants’ decision making efforts and integrative behaviours underpinning reported intervention impacts and change. Our findings add much needed empirical case material to enrich further our understanding of the realised impacts of operational research interventions in general, and of multimethodology interventions in particular.
Decision processes; Problem structuring; Multimethodology; Intervention; Impacts;
http://www.sciencedirect.com/science/article/pii/S0377221716300972
Henao, Felipe
Franco, L. Alberto
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:265-2792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:265-279
article
A cycle-based evolutionary algorithm for the fixed-charge capacitated multi-commodity network design problem
This paper presents an evolutionary algorithm for the fixed-charge multicommodity network design problem (MCNDP), which concerns routing multiple commodities from origins to destinations by designing a network through selecting arcs, with an objective of minimizing the fixed costs of the selected arcs plus the variable costs of the flows on each arc. The proposed algorithm evolves a pool of solutions using principles of scatter search, interlinked with an iterated local search as an improvement method. New cycle-based neighborhood operators are presented which enable complete or partial re-routing of multiple commodities. An efficient perturbation strategy, inspired by ejection chains, is introduced to perform local compound cycle-based moves to explore different parts of the solution space. The algorithm also allows infeasible solutions violating arc capacities while performing the “ejection cycles”, and subsequently restores feasibility by systematically applying correction moves. Computational experiments on benchmark MCNDP instances show that the proposed solution method consistently produces high-quality solutions in reasonable computational times.
Multi-commodity network design; Scatter search; Evolutionary algorithms; Ejection chains; Iterated local search;
http://www.sciencedirect.com/science/article/pii/S0377221716000072
Paraskevopoulos, Dimitris C.
Bektaş, Tolga
Crainic, Teodor Gabriel
Potts, Chris N.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:625-6382016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:625-638
article
Hub and Chain: Process Flexibility Design in Non-Identical Systems Using Variance Information
In multi-product multi-plant manufacturing systems, process flexibility is the ability to produce different types of products in the same manufacturing plant or production line. While several design methods and flexibility indices have been proposed in the literature on how to design process flexibility, most of the insights generated are focused on identical production systems whereby all plants have the same capacity and all products have identically distributed demands. In this paper, we examine the process flexibility design problem for non-identical systems. We first study the effect of non-identical demand distributions on the performance of the well-known long chain design, and discover three interesting insights: (1) products with low demand mean will create a bottleneck effect, (2) products with low demand variance will result in inefficient utilization of flexibility links, and (3) long chain efficiency decreases in demand variance of any product, hence the need to provide this product with access to more capacity. Using these insights, we develop the variance-based hub-and-chain method (VHC), a simple and graphically intuitive method which decomposes the long chain into smaller chains, one of which will serve as a hub to which the other chains will be connected. Numerical tests show that VHC outperforms the long chain by 15% on average and outperforms the constraint sampling method by 38% on average. Lastly, we implement VHC on a case study in the edible oil industry in China and find substantial benefits. We then summarize with some managerial insights.
Process flexibility; Chaining strategy; Stochastic maximum flow; Demand variance;
http://www.sciencedirect.com/science/article/pii/S0377221716301473
Chua, Geoffrey A.
Chen, Shaoxiang
Han, Zhiguang
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:179-1872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:179-187
article
Hierarchical outcomes and collusion neutrality on networks
We investigate TU-game solutions that are neutral to collusive agreements among players. A collusive agreement binds collusion members to act as a single player and is feasible when they are connected on a network. Collusion neutrality requires that no feasible collusive agreement can change the total payoff of collusion members. We show that on the domain of network games, there is a solution satisfying collusion neutrality, efficiency and null-player property if and only if the network is a tree. Considering a tree network, we show that affine combinations of hierarchical outcomes (Demange, 2004; van den Brink, 2012) are the only solutions satisfying the three axioms together with linearity. As corollaries, we establish characterizations of the average tree solution (equally weighted average of hierarchical outcomes); one established earlier in the literature and the others new.
Game theory; Hierarchical outcomes; Collusion neutrality; TU-game; Network game;
http://www.sciencedirect.com/science/article/pii/S0377221716301394
Park, Junghum
Ju, Biung-Ghi
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:68-792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:68-79
article
A service network design model for multimodal municipal solid waste transport
A modal shift from road transport towards inland water or rail transport could reduce the total greenhouse gas emissions and societal impact associated with Municipal Solid Waste management. However, this shift will take place only if demonstrated to be at least cost-neutral for the decision makers. In this paper we examine the feasibility of using multimodal truck and inland water transport, instead of truck transport alone, for shipping separated household waste in bulk from collection centres to waste treatment facilities. We present a dynamic tactical planning model that minimises the sum of transportation costs and external environmental and societal costs. The Municipal Solid Waste Service Network Design Problem allocates Municipal Solid Waste volumes to transport modes and determines transportation frequencies over a planning horizon. This generic model is applied to a real-life case in Flanders, the northern region of Belgium. Computational results show that multimodal truck and inland water transportation can compete with truck transport by avoiding or reducing transhipments and using barge convoys.
Solid Waste Management; Supply chain management; OR in societal problem analysis; Linear Programming; Networks;
http://www.sciencedirect.com/science/article/pii/S0377221716301643
Inghels, Dirk
Dullaert, Wout
Vigo, Daniele
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:843-8552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:843-855
article
Progressive hedging applied as a metaheuristic to schedule production in open-pit mines accounting for reserve uncertainty
Scheduling production in open-pit mines is characterized by uncertainty about the metal content of the orebody (the reserve) and leads to a complex large-scale mixed-integer stochastic optimization problem. In this paper, a two-phase solution approach based on Rockafellar and Wets’ progressive hedging algorithm (PH) is proposed. PH is used in phase I where the problem is first decomposed by partitioning the set of scenarios modeling metal uncertainty into groups, and then the sub-problems associated with each group are solved iteratively to drive their solutions to a common solution. In phase II, a strategy exploiting information obtained during the PH iterations and the structure of the problem under study is used to reduce the size of the original problem, and the resulting smaller problem is solved using a sliding time window heuristic based on a fix-and-optimize scheme. Numerical results show that this approach is efficient in finding near-optimal solutions and that it outperforms existing heuristics for the problem under study.
Open-pit mine production scheduling; Progressive hedging method; Lagrangian relaxation; Sliding time window heuristic; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301357
Lamghari, Amina
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:746-7602016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:746-760
article
Preference stability over time with multiple elicitation methods to support wastewater infrastructure decision-making
We used a multi-method and repeated elicitation approach across different stakeholder groups to explore possible differences in the outcome of an environmental decision. We compared different preference elicitation procedures based on Multi Criteria Decision Analysis (MCDA) over time for a water infrastructure decision in Switzerland. We implemented the SWING and SMART/SWING weight elicitation methods and also compared results with earlier stakeholder interviews. In all procedures, the weights for environmental protection and well-functioning (waste-)water systems were higher than those for cost reduction. The SMART/SWING variant produced statistically significantly different weights than SWING. Weights changed over time with both elicitation methods. Weights were more stable with the SWING method, which was also perceived as slightly more difficult than the SMART/SWING variant. We checked whether the difference in weights produced by the two elicitation methods, and the difference in their stability, affects the ranking of six alternatives. Overall, an unconventional decentralized alternative ranked first or second in 92 percent of all elicitation procedures (online surveys and interviews). For practical decision-making, using multiple methods across different stakeholder groups and repeating the elicitation can increase our confidence that the results reflect the true opinions of the decision makers and stakeholders.
Behavioral OR; Weight elicitation; Multiple criteria analysis; Online survey; OR in environment and climate change;
http://www.sciencedirect.com/science/article/pii/S0377221716301382
Lienert, Judit
Duygan, Mert
Zheng, Jun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:777-7902016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:777-790
article
A queueing model for managing small projects under uncertainties
We consider a situation in which a home improvement project contractor has a team of regular crew members who receive compensation even when they are idle. Because both project arrivals and the completion time of each project are uncertain, the contractor needs to manage the utilization of his crews carefully. One common approach adopted by many home improvement contractors is to accept multiple projects to keep crew members busy working on projects to generate positive cash flows. However, this approach has a major drawback because it causes “intentional” (or foreseeable) project delays. Intentional project delays can inflict explicit and implicit costs on the contractor when frustrated customers abandon their projects and/or file complaints or lawsuits. In this paper, we present a queueing model to capture uncertain customer (or project) arrivals and departures, along with the possibility of customer abandonment. Also, associated with each admission policy (i.e., the maximum number of projects that the contractor will accept), we model the underlying tradeoff between accepting too many projects (which can increase customer dissatisfaction) and accepting too few projects (which can reduce crew utilization). We examine this tradeoff analytically so as to determine the optimal admission policy and the optimal number of crew members. We further apply our model to analyze other issues including worker productivity and project pricing. Finally, our model can be extended to allow for multiple classes of projects with different types of crew members.
Project management; Multi-projects; Queueing models; Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301059
Bai, Jiaru
So, Kut C.
Tang, Christopher
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:880-8872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:880-887
article
From partial derivatives of DEA frontiers to marginal products, marginal rates of substitution, and returns to scale
The characterization of a technology, from an economic point of view, often uses the first derivatives of either the transformation or the production function. In a parametric setting, these quantities are readily available as they can be easily deduced from the first derivatives of the specified function. In the standard framework of data envelopment analysis (DEA) models these quantities are not so easily obtained. The difficulty resides in the fact that marginal changes of inputs and outputs might affect the position of the frontier itself, while the calculation of first derivatives for economic purposes assumes that the frontier is held constant. We develop here a procedure to recover first derivatives of transformation functions in DEA models and we show how to circumvent the problem of the (marginal) shift of the frontier. We show how the knowledge of the first derivatives of the frontier estimated by DEA can be used to deduce and compute marginal products, marginal rates of substitution, and returns to scale for each decision making unit (DMU) in the sample.
Data envelopment analysis; Marginal products; Transformation function; First derivatives;
http://www.sciencedirect.com/science/article/pii/S037722171630073X
Ouellette, Pierre
Vigeant, Stéphane
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:761-7762016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:761-776
article
On consumer preferences and the willingness to pay for term life insurance
We run a choice-based conjoint (CBC) analysis for term life insurance on a sample of 2017 German consumers using data from web-based experiments. Individual-level part-worth profiles are estimated by means of a hierarchical Bayes model. Drawing on the elicited preference structures, we then compute relative attribute importances and different willingness to pay measures. In addition, we present comprehensive simulation results for a realistic competitive setting that allows us to assess product switching as well as market expansion effects. On average, brand, critical illness cover, and underwriting procedure turn out to be the most important nonprice product attributes. Hence, if a policy comprises their favored specifications, customers accept substantial markups in the monthly premium. Furthermore, preferences vary considerably across the sample. While some individuals are prepared to pay relatively high monthly premiums, a large fraction exhibits no willingness to pay for term life insurance at all, presumably due to the absence of a need for mortality risk coverage. We also illustrate that utility-driven product optimization is well-suited to gain market shares, avoid competitive price pressure, and access additional profit potential. Finally, based on estimated demand sensitivities and a set of cost assumptions, it is shown that insurers require an in-depth understanding of preferences to identify the profit-maximizing price.
Preferences; Willingness to pay; Term life insurance; Choice-based conjoint analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716300601
Braun, Alexander
Schmeiser, Hato
Schreiber, Florian
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:338-3462016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:338-346
article
The weighted additive distance function
Distance functions in production theory are mathematical structures that characterize membership in the reference technology through a numerical value, behave as technical efficiency measures when the focus is on analyzing an observed input–output vector within its production possibility set, and present a dual relationship with some support function (profit, revenue, cost function). In this paper, we endow the well-known weighted additive models in Data Envelopment Analysis with a distance function structure, introducing the Weighted Additive Distance Function and showing its main properties.
Data envelopment analysis; Distance functions; Weighted additive model; Profit function;
http://www.sciencedirect.com/science/article/pii/S0377221716302259
Aparicio, Juan
Pastor, Jesus T.
Vidal, Fernando
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:169-1782016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:169-178
article
A multi-agent based cooperative approach to scheduling and routing
In this paper, we propose a general agent-based distributed framework where each agent is implementing a different metaheuristic/local search combination. Moreover, an agent continuously adapts itself during the search process using a direct cooperation protocol based on reinforcement learning and pattern matching. Good patterns that make up improving solutions are identified and shared by the agents. This agent-based system aims to provide a modular flexible framework to deal with a variety of different problem domains. We have evaluated the performance of this approach using the proposed framework which embodies a set of well known metaheuristics with different configurations as agents on two problem domains, Permutation Flow-shop Scheduling and Capacitated Vehicle Routing. The results show the success of the approach yielding three new best known results of the Capacitated Vehicle Routing benchmarks tested, whilst the results for Permutation Flow-shop Scheduling are commensurate with the best known values for all the benchmarks tested.
Combinatorial optimization; Scheduling; Vehicle routing; Metaheuristics; Cooperative search;
http://www.sciencedirect.com/science/article/pii/S0377221716300984
Martin, Simon
Ouelhadj, Djamila
Beullens, Patrick
Ozcan, Ender
Juan, Angel A.
Burke, Edmund K.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:791-8102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:791-810
article
Multicriteria decision support to evaluate potential long-term natural gas supply alternatives: The case of Greece
This paper assesses 27 alternative natural gas supply corridors for the case of Greece, according to a multicriteria analysis approach based on three main pillars: (1) economics of supply, (2) security of supply, and (3) cooperation between countries. The alternatives include onshore and offshore pipeline corridors and LNG shipping, determined after exhaustive investigation of all possible existing and future routes, taking into consideration all possible natural gas infrastructure development projects around Greece. A multicriteria additive value system is assessed via the robust ordinal regression methodology, aiming to support the national energy policy makers in devising favorable strategies concerning both long-term national natural gas supplies and infrastructure developments. The obtained ranking shows that noticeable alternative corridors for gas passage to Greece do exist, both in terms of maritime transport of LNG and in terms of potential future pipeline infrastructure projects.
Multiple criteria decision analysis; Natural gas supply; Energy policy; Robustness; Greece;
http://www.sciencedirect.com/science/article/pii/S0377221716301047
Androulaki, Stella
Psarras, John
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:279-2932016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:279-293
article
A multi-objective model for locating search and rescue boats
We present the Incident Based-Boat Allocation Model (IB-BAM), a multi-objective model designed to allocate search and rescue resources. The decision of where to locate search and rescue boats depends upon a set of criteria that are unique to a given problem such as the density and types of incidents responded in the area of interest, resource capabilities, geographical factors and governments’ business rules. Thus, traditional models that incorporate only political decisions are no longer appropriate. IB-BAM considers all these criteria and determines optimal boat allocation plans with the objectives of minimizing response time to incidents, fleet operating cost and the mismatch between boats’ workload and operation capacity hours.
Integer programming; Resource allocation; Search and rescue;
http://www.sciencedirect.com/science/article/pii/S0377221716301540
Razi, Nasuh
Karatas, Mumtaz
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:161-1682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:161-168
article
Queueing network MAP−(GI/∞)K with high-rate arrivals
An analysis of the open queueing network MAP−(GI/∞)K is presented in this paper. The MAP−(GI/∞)K network implements Markov routing, general service time distribution, and an infinite number of servers at each node. Analysis is performed under the condition of a growing fundamental rate for the Markovian arrival process. It is shown that the stationary probability distribution of the number of customers at the nodes can be approximated by multi-dimensional Gaussian distribution. Parameters of this distribution are presented in the paper. Numerical results validate the applicability of the obtained approximations under relevant conditions. The results of the approximations are applied to estimate the optimal number of servers for a network with finite-server nodes. In addition, an approximation of higher-order accuracy is derived.
Queueing network; Infinite number of servers; Markovian arrival process; Asymptotic analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716302302
Moiseev, Alexander
Nazarov, Anatoly
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:503-513 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:503-513
article
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
The problem of minimizing the sum of nonsmooth, convex objective functions defined on a real Hilbert space over the intersection of fixed point sets of nonexpansive mappings, onto which the projections cannot be efficiently computed, is considered. The use of proximal point algorithms that use the proximity operators of the objective functions and incremental optimization techniques is proposed for solving the problem. With the focus on fixed point approximation techniques, two algorithms are devised for solving the problem. One blends an incremental subgradient method, which is a useful algorithm for nonsmooth convex optimization, with a Halpern-type fixed point iteration algorithm. The other is based on an incremental subgradient method and the Krasnosel’skiĭ–Mann fixed point algorithm. It is shown that any weak sequential cluster point of the sequence generated by the Halpern-type algorithm belongs to the solution set of the problem and that there exists a weak sequential cluster point of the sequence generated by the Krasnosel’skiĭ–Mann-type algorithm, which also belongs to the solution set. Numerical comparisons of the two proposed algorithms with existing subgradient methods for concrete nonsmooth convex optimization show that the proposed algorithms achieve faster convergence.
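A toy version of the second scheme (incremental subgradient steps combined with Krasnosel’skiĭ–Mann averaging) can be sketched in one dimension, where the nonexpansive mapping is simply the projection onto an interval. The objective, step sizes, and data below are illustrative, not taken from the paper:

```python
# Minimize f(x) = |x - 2| + |x + 1| over Fix(T), with T = projection onto [0, 3].
# One incremental subgradient step per summand, then Krasnosel'skii-Mann averaging.

def proj(x, lo=0.0, hi=3.0):          # nonexpansive mapping T
    return min(max(x, lo), hi)

def subgrad_abs(x, a):                # a subgradient of |x - a|
    return 1.0 if x > a else (-1.0 if x < a else 0.0)

x = 10.0
for k in range(1, 2000):
    alpha = 1.0 / k                   # diminishing step size
    y = x
    for a in (2.0, -1.0):             # incremental pass over the two summands
        y -= alpha * subgrad_abs(y, a)
    x = 0.5 * x + 0.5 * proj(y)       # KM averaging with weight 1/2

# x settles in [0, 2], the solution set, where f attains its minimum value 3
print(x)
```

The KM averaging keeps the iterates attracted to Fix(T) while the subgradient pass drives the objective down, mirroring the structure of the second algorithm in the abstract.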
Fixed point; Halpern algorithm; Incremental subgradient method; Krasnosel’skiĭ–Mann algorithm; Proximal point algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716301102
Iiduka, Hideaki
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:441-455 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:441-455
article
Setting the right incentives for global planning and operations
We study incentive issues seen in a firm performing global planning and manufacturing, and local demand management. The stochastic demands in local markets are best observed by the regional business units, and the firm relies on the business units’ forecasts for planning of global manufacturing operations. We propose a class of performance evaluation schemes that induce the business units to reveal their private demand information truthfully by turning the business units’ demand revelation game into a potential game with truth telling being a potential maximizer, an appealing refinement of Nash equilibrium. Moreover, these cooperative performance evaluation schemes satisfy several essential fairness notions. After analyzing the characteristics of several performance evaluation schemes in this class, we extend our analysis to include the impact of effort on demand.
Production systems; Information asymmetry; Incentive management; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300662
Norde, Henk
Özen, Ulaş
Slikker, Marco
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:602-613 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:602-613
article
Shared resource capacity expansion decisions for multiple products with quantity discounts
When multiple products compete for the same storage space, their optimal individual lot sizes may need to be reduced to accommodate the storage needs of other products. This challenge is exacerbated with the presence of quantity discounts, which tend to entice larger lot sizes. Under such circumstances, firms may wish to consider storage capacity expansion as an option to take full advantage of quantity discounts. This paper aims to simultaneously determine the optimal storage capacity level along with individual lot sizes for multiple products being offered quantity discounts (either all-units discounts, incremental discounts, or a mixture of both). By utilizing Lagrangian techniques along with a piecewise-linear approximation for capacity cost, our algorithms can generate precise solutions regardless of the functional form of capacity cost (i.e., concave or convex). The algorithms can incorporate simultaneous lot-sizing decisions for thousands of products in a reasonable solution time. We utilize numerical examples and sensitivity analysis to understand the key factors that influence the capacity expansion decision and the performance of the algorithms. The primary characteristic that influences the capacity expansion decision is the size of the quantity discount offered, but variability in demand and capacity per unit influence the expansion decision as well. Furthermore, we discover that all-units quantity discounts are more likely to lead to capacity expansion compared to incremental quantity discounts. Our analysis illuminates the potential for significant savings available to companies willing to explore the option of increasing storage capacity to take advantage of quantity discount offerings for their purchased products.
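The tension the abstract describes, with all-units discounts enticing larger lots, already appears in the textbook single-product EOQ procedure with all-units price breaks. A sketch with made-up numbers (this is not the paper's multi-product capacitated algorithm):

```python
import math

D, K = 1000.0, 10.0                      # annual demand, fixed cost per order
h_rate = 0.20                            # holding cost rate (fraction of unit price)
breaks = [(0.0, 5.0), (100.0, 4.8)]      # all-units discount: (minimum qty, unit price)

def annual_cost(q, price):
    # ordering + holding + purchase cost per year
    return D / q * K + h_rate * price * q / 2.0 + D * price

candidates = []
for i, (qmin, price) in enumerate(breaks):
    qmax = breaks[i + 1][0] if i + 1 < len(breaks) else float("inf")
    q = math.sqrt(2.0 * D * K / (h_rate * price))   # unconstrained EOQ at this price
    q = min(max(q, max(qmin, 1.0)), qmax)           # clamp into the tier's range
    candidates.append((annual_cost(q, price), q, price))
    # (boundary candidates never win here; a full procedure drops infeasible tiers)

cost, q_star, p_star = min(candidates)
print(q_star, p_star)  # the discount pulls the lot size up into the 4.8 tier
```

The discount tier wins with a lot size above the price break, which is exactly the inflation of lot sizes that strains shared storage capacity in the multi-product setting.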
Purchasing; Quantity discounts; Capacity expansion; Lot sizing; Inventory;
http://www.sciencedirect.com/science/article/pii/S0377221716301527
Jackson, Jonathan E.
Munson, Charles L.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:711-733 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:711-733
article
Optimal contract design in the joint economic lot size problem with multi-dimensional asymmetric information
Previous work has studied the classical joint economic lot size model as an adverse selection problem with asymmetric cost information. Solving this problem is challenging due to the presence of countervailing incentives and two-dimensional information asymmetry, under which the classical single-crossing condition need not hold. In the present work we advance the existing knowledge of the problem at hand by conducting its optimality analysis, which leads to a better-informed and easier solution of the problem: First, we refine the existing closed-form solution, which simplifies problem solving and its analysis. Second, we prove that the Karush–Kuhn–Tucker conditions are necessary for optimality, and demonstrate that the problem may, in general, possess non-optimal stationary points due to non-convexity. Third, we prove that certain types of stationary points are always dominated, which eases the analytical solution of the problem. Fourth, we derive a simple optimality condition stating that weak Pareto efficiency of the buyer’s possible cost structures implies optimality of any stationary point. It simplifies the analytical solution approach and ensures a successful solution of the problem by means of conventional numerical techniques, e.g. with a general-purpose solver. We further establish properties of optimal solutions and indicate how they relate to the classical results on adverse selection.
Supply chain coordination; Asymmetric information; Nonlinear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301060
Pishchulov, Grigory
Richter, Knut
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:392-403 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:392-403
article
The role of co-opetition in low carbon manufacturing
Low carbon manufacturing has become a strategic objective for many developed and developing economies. This study examines the role of co-opetition in achieving this objective. We investigate the pricing and emissions reduction policies of two rival manufacturers with different emission reduction efficiencies under the cap-and-trade policy. We assume that product demand is price and emission sensitive. Based on non-cooperative and cooperative games, the optimal solutions for the two manufacturers are derived in the purely competitive and co-opetitive market environments respectively. Through the discussion and numerical analysis, we find that in both the pure competition and co-opetition models, the two manufacturers’ optimal prices depend on the unit price of carbon emission trading. In addition, higher emission reduction efficiency leads to lower optimal unit carbon emissions and higher profit in both models. Interestingly, compared to pure competition, co-opetition leads to more profit and less total carbon emissions. However, the improvement in economic and environmental performance rests on higher product prices and unit carbon emissions.
Low carbon manufacturing; Co-opetition; Carbon emission reduction; Green technology investment; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300674
Luo, Zheng
Chen, Xu
Wang, Xiaojun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:734-745 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:734-745
article
A simple yet effective decision support policy for mass-casualty triage
In the aftermath of a mass-casualty incident, effective policies for timely evaluation and prioritization of patients can mean the difference between life and death. While operations research methods have been used to study the patient prioritization problem, prior research has either proposed decision rules that only apply to very simple cases, or proposed formulating and solving a mathematical program in real time, which may be a barrier to implementation in an urgent situation. We connect these two regimes by proposing a general decision support rule that can handle survival probability functions and an arbitrary number of patient classifications. The proposed survival lookahead policy generalizes not only a myopic policy and a cμ type rule, but also the optimal solution to a version of the problem with two priority classes. This policy has other desirable properties, including index policy structure. Using simple heuristic parameterizations, the survival lookahead policy yields an expected number of survivors that is almost as large as published methods that require mathematical programming, while having the advantage of an intuitive structure and requiring minimal computational support.
Triage; Disaster response; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301151
Mills, Alex F.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:639-647 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:639-647
article
Designing repetitive screening procedures with imperfect inspections: An empirical Bayes approach
A batch of expensive items, such as IC chips, is often inspected multiple times in a sequential manner to discover additional conforming items. After several rounds of screening, we need to estimate the number of conforming items that still remain in the batch. We propose in this paper an empirical Bayes estimation method and compare its performance with that of the traditional maximum likelihood method. In the repetitive screening procedure, another important decision problem is when to stop the screening process and salvage the remaining items. We propose various types of stopping rules and illustrate their procedures with simulated inspection data. Finally, we explore various extensions to our empirical Bayes estimation method in multiple inspection plans.
Inspection; Product quality; Reliability; Empirical Bayes estimation;
http://www.sciencedirect.com/science/article/pii/S0377221716301138
Chun, Young H.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:456-471 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:456-471
article
Value added, educational accountability approaches and their effects on schools’ rankings: Evidence from Chile
Value added models have been proposed to analyze different aspects of school effectiveness on the basis of student growth. There is consensus in the literature about the need to control for socioeconomic status and other contextual variables at the student and school level in the estimation of value added, for which the methodologies employed have largely relied on hierarchical linear models. However, this approach is problematic because results are based on comparisons to the school’s average, implying no real incentive for performance excellence. Meanwhile, activity analysis models to estimate school value added have been unable to control for contextual variables at both the student and school levels. In this study we propose a robust frontier model to estimate contextual value added which merges relevant branches of the activity analysis literature, namely metafrontiers and partial frontier methods. We provide an application to a large sample of Chilean schools, a relevant country to study due to the reforms made to its educational system, which point to the need for accountability measures. Results indicate not only the general relevance of including contextual variables but also how they contribute to explaining the performance differentials found for the three types of schools: public, privately-owned subsidized, and privately-owned fee-paying. The results also indicate that contextual value added models generate school rankings more consistent with the evaluation models currently used in Chile than other types of evaluation models.
Efficiency; Order-m; School effectiveness; Value added;
http://www.sciencedirect.com/science/article/pii/S0377221716000527
Thieme, Claudio
Prior, Diego
Tortosa-Ausina, Emili
Gempp, René
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:356-371 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:356-371
article
Pro-active real-time routing in applications with multiple request patterns
Recent research reveals that pro-active real-time routing approaches that use stochastic knowledge about future requests can significantly improve solution quality compared to approaches that simply integrate new requests upon arrival. Many of these approaches assume that request arrivals on different days follow an identical pattern. Thus, they define and apply a single profile of past request days to anticipate future request arrivals. In many real-world applications, however, different days may follow different patterns. Moreover, the pattern of the current day may not be known beforehand, and may need to be identified in real-time during the day. In such cases, applying approaches that use a single profile is not promising. In this paper, we propose a new pro-active real-time routing approach that applies multiple profiles. These profiles are generated by grouping together days with a similar pattern of request arrivals. For each combination of identified profiles, stochastic knowledge about future request arrivals is derived in an offline step. During the day, the approach repeatedly evaluates characteristics of request arrivals and selects a suitable combination of profiles. The performance of the new approach is evaluated in computational experiments in direct comparison with a previous approach that applies only a single profile. Computational results show that the proposed approach significantly outperforms the previous one. We analyze further potential for improvement by comparing the approach with an omniscient variant that knows the actual pattern in advance. Based on the results, managerial implications that allow for a practical application of the new approach are provided.
Dynamic vehicle routing; Multiple request patterns; Request forecasting; Scenario identification; K-means clustering;
http://www.sciencedirect.com/science/article/pii/S0377221716300364
Ferrucci, Francesco
Bock, Stefan
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:570-583 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:570-583
article
Robust mixed-integer linear programming models for the irregular strip packing problem
Two-dimensional irregular strip packing problems are cutting and packing problems where small pieces have to be cut from a larger object, involving a non-trivial handling of geometry. Increasingly sophisticated and complex heuristic approaches have been developed to address these problems but, despite the apparently good quality of the solutions, there is no guarantee of optimality. Therefore, mixed-integer linear programming (MIP) models started to be developed. However, these models are heavily limited by the complexity of the geometry handling algorithms needed for the piece non-overlapping constraints. This has led to simplifications of the pieces in order to specialize the mathematical models developed. In this paper, to overcome these limitations, two robust MIP models are proposed. In the first model (DTM) the non-overlapping constraints are stated based on direct trigonometry, while in the second model (NFP−CM) pieces are first decomposed into convex parts and then the non-overlapping constraints are written based on nofit polygons of the convex parts. Both approaches are robust in terms of the type of geometries they can address, considering any kind of non-convex polygon with or without holes. They are also simpler to implement than previous models. This simplicity allowed us to consider, for the first time, a variant of the models that deals with piece rotations. Computational experiments with benchmark instances show that NFP−CM outperforms both DTM and the best exact model published in the literature. New real-world based instances with more complex geometries are proposed and used to verify the robustness of the new models.
Packing; Cutting; Nesting; MIP models;
http://www.sciencedirect.com/science/article/pii/S0377221716301370
Cherri, Luiz H.
Mundim, Leandro R.
Andretta, Marina
Toledo, Franklina M.B.
Oliveira, José F.
Carravilla, Maria Antónia
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:304-311 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:304-311
article
An auto-realignment method in quasi-Monte Carlo for pricing financial derivatives with jump structures
Discontinuities are common in the pricing of financial derivatives and have a tremendous impact on the accuracy of the quasi-Monte Carlo (QMC) method. If the discontinuities are parallel to the axes, however, good efficiency of the QMC method can still be expected. By realigning the discontinuities to be axes-parallel, [Wang & Tan, 2013] succeeded in recovering the high efficiency of the QMC method for a special class of functions. Motivated by this work, we propose an auto-realignment method to deal with more general discontinuous functions. The k-means clustering algorithm, a classical algorithm of machine learning, is used to select the most representative normal vectors of the discontinuity surface. By applying this new method, the discontinuities of the resulting function are realigned to be friendly to the QMC method. Numerical experiments demonstrate that the proposed method significantly improves the performance of the QMC method.
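The clustering step can be sketched with a tiny, self-contained k-means on hypothetical unit normal vectors of a discontinuity surface; the paper's pipeline then builds an orthogonal transform from the representative normals, which is omitted here:

```python
import math, random

def kmeans(points, k, iters=50):
    # deterministic init: spread initial centers across the data
    step = max(1, (len(points) - 1) // max(1, k - 1))
    centers = [points[min(i * step, len(points) - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign each point to the nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):      # move centers to cluster centroids
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers

rng = random.Random(1)

def noisy_normal(vx, vy):                      # unit vector with small perturbation
    x, y = vx + rng.gauss(0, 0.05), vy + rng.gauss(0, 0.05)
    n = math.hypot(x, y)
    return (x / n, y / n)

# hypothetical normals of a discontinuity surface, bundled around +x and +y
normals = [noisy_normal(1, 0) for _ in range(50)] + [noisy_normal(0, 1) for _ in range(50)]
reps = sorted(kmeans(normals, 2))
print(reps)  # one representative normal near (0, 1), one near (1, 0)
```

The two centroids recovered here play the role of the "most representative normal vectors" used to realign the discontinuity to be axes-parallel.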
Pricing; QMC; OT method; QR decomposition; Auto-realignment method;
http://www.sciencedirect.com/science/article/pii/S037722171630162X
Weng, Chengfeng
Wang, Xiaoqun
He, Zhijian
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:320-337 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:320-337
article
Understanding dynamic mean variance asset allocation
We provide a new portfolio decomposition formula that sheds light on the economics of portfolio choice for investors following the mean-variance (MV) criterion. We show that the number of components of a dynamic portfolio strategy can be reduced to two: the first is preference free and hedges the risk of a discount bond maturing at the investor’s horizon while the second hedges the time variation in pseudo relative risk tolerance. Both components entail strong horizon effects in the dynamic asset allocation as a result of time-varying risk tolerance and investment opportunity sets. We also provide closed-form solutions for the optimal portfolio strategy in the presence of market return predictability. The model parameters are estimated over the period 1963 to 2012 for the U.S. market. We show that (i) intertemporal hedging can be very large, (ii) the MV criterion hugely understates the true extent of risk aversion for high values of the risk aversion parameter, and the more so the shorter the investment horizon, and (iii) the efficient frontiers seem problematic for investment horizons shorter than one year but satisfactory for large horizons. Overall, adopting the MV model leads to acceptable results for medium and long term investors endowed with medium or high risk tolerance, but to very problematic ones otherwise.
Mean variance; Dynamic asset allocation; Time varying risk aversion; Intertemporal hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716302223
Lioui, Abraham
Poncet, Patrice
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:328-336 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:328-336
article
An ejection chain approach for the quadratic multiple knapsack problem
In an algorithm for a problem whose candidate solutions are selections of objects, an ejection chain is a sequence of moves from one solution to another that begins by removing an object from the current solution. The quadratic multiple knapsack problem extends the familiar 0–1 knapsack problem both with several knapsacks and with values associated with pairs of objects. A hybrid algorithm for this problem extends a local search algorithm through an ejection chain mechanism to create more powerful moves. In addition, adaptive perturbations enhance the diversity of the search process. The resulting algorithm produces results that are competitive with the best heuristics currently published for this problem. In particular, it improves the best known results on 34 out of 60 test problem instances and matches the best known results on all but 6 of the remaining instances.
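The basic compound move can be sketched on a toy instance of the quadratic multiple knapsack problem: eject an object from a knapsack to free capacity, then insert an unassigned object, accepting the chain only if the objective improves. The instance data and the single-level chain below are illustrative and far simpler than the paper's hybrid algorithm:

```python
from itertools import combinations

# toy QMKP instance (hypothetical data)
w = {"a": 6, "b": 6, "c": 5}           # weights
v = {"a": 1, "b": 1, "c": 10}          # individual values
q = {}                                  # pairwise values (all zero here)
cap = {1: 10, 2: 10}                    # knapsack capacities

def objective(assign):
    total = 0
    for k, items in assign.items():
        total += sum(v[i] for i in items)
        total += sum(q.get(frozenset(p), 0) for p in combinations(items, 2))
    return total

def ejection_chain_move(assign):
    """Try: eject object i from knapsack k, insert unassigned j; keep best gain."""
    base, best, best_move = objective(assign), 0, None
    assigned = set().union(*assign.values())
    free = set(w) - assigned
    for k, items in assign.items():
        for i in items:
            room = cap[k] - sum(w[x] for x in items) + w[i]  # capacity after ejecting i
            for j in free:
                if w[j] <= room:
                    trial = {kk: set(it) for kk, it in assign.items()}
                    trial[k].discard(i)
                    trial[k].add(j)
                    gain = objective(trial) - base
                    if gain > best:
                        best, best_move = gain, trial
    return best_move or assign

assign = {1: {"a"}, 2: {"b"}}           # greedy start leaves "c" stranded
assign = ejection_chain_move(assign)
print(assign, objective(assign))        # ejecting "a" admits "c": objective 11
```

No direct insertion of "c" is feasible from the start, so a plain insertion neighborhood is stuck; the ejection makes the improving move reachable, which is the point of the mechanism.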
Ejection chain; Quadratic multiple knapsack problem; Adaptive perturbation; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716300960
Peng, Bo
Liu, Mengqi
Lü, Zhipeng
Kochenberger, Gary
Wang, Haibo
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:314-327 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:314-327
article
Lagrangean relaxation of the hull-reformulation of linear generalized disjunctive programs and its use in disjunctive branch and bound
In this work, we present a Lagrangean relaxation of the hull-reformulation of discrete-continuous optimization problems formulated as linear generalized disjunctive programs (GDP). The proposed Lagrangean relaxation has three important properties. The first property is that it can be applied to any linear GDP. The second property is that the solution to its continuous relaxation always yields 0–1 values for the binary variables of the hull-reformulation. Finally, it is simpler to solve than the continuous relaxation of the hull-reformulation. The proposed Lagrangean relaxation can be used in different GDP solution methods. In this work, we explore its use as primal heuristic to find feasible solutions in a disjunctive branch and bound algorithm. The modified disjunctive branch and bound is tested with several instances with up to 300 variables. The results show that the proposed disjunctive branch and bound performs faster than other versions of the algorithm that do not include this primal heuristic.
MILP; Disjunctive programming; GDP; Lagrangean relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221716301011
Trespalacios, Francisco
Grossmann, Ignacio E.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:593-601 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:593-601
article
Impact of structure, market share and information asymmetry on supply contracts for a single supplier multiple buyer network
Market share of buyers and the influence of supply chain structure on the choice of supply contracts have received scant attention in the literature. This paper focuses on this gap and examines a network consisting of one supplier and two buyers under complete and partial decentralization. In the completely decentralized setting both buyers are independent of the supplier. In the partially decentralized setting the supplier and one of the buyers form a vertically integrated entity. Both buyers order from the single supplier and produce similar products to sell in the same market. The supplier charges the buyers through a contract. We investigate the influence of supply chain structure, market share and asymmetry of information on the supplier's choice of contracts. We demonstrate that both the linear two-part tariff and the quantity discount contract can coordinate the supply chain irrespective of the supply chain structure. By comparing profit levels of supply chain agents across different supply chain structures, we show that if a buyer possesses a minimum threshold market potential, the supplier has an incentive to collude with her. We calculate the cut-off policies for wholesale price and two-part tariff contracts by incorporating the reservation profit levels of individual agents. The managerial implications of the analyses and directions for future research are presented in the conclusion.
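The coordination claim for the linear two-part tariff can be checked in the simplest one-supplier, one-buyer case with linear demand: setting the wholesale unit price at the supplier's marginal cost makes the buyer's problem coincide with the integrated chain's, and the fixed fee merely splits the profit. A sketch with hypothetical parameters (the paper's model has two competing buyers and information asymmetry on top of this):

```python
# Linear demand D(p) = a - b*p, supplier marginal cost c (illustrative numbers).
a, b, c = 100.0, 2.0, 10.0

# Integrated chain: choose p to maximize (p - c) * (a - b*p)
p_int = (a + b * c) / (2 * b)
profit_int = (p_int - c) * (a - b * p_int)

# Two-part tariff (w, F) with w = c: the buyer maximizes
# (p - w) * (a - b*p) - F, so it picks the same p; F only transfers profit.
w, F = c, 0.5 * profit_int
p_buyer = (a + b * w) / (2 * b)
buyer_profit = (p_buyer - w) * (a - b * p_buyer) - F
supplier_profit = (w - c) * (a - b * p_buyer) + F

print(p_buyer == p_int, buyer_profit + supplier_profit == profit_int)
```

Because w equals marginal cost, double marginalization disappears and total chain profit equals the integrated optimum regardless of how F divides it.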
Supply chain management; Pricing; Asymmetric information; Competition; Market share;
http://www.sciencedirect.com/science/article/pii/S0377221716301424
Biswas, Indranil
Avittathur, Balram
Chatterjee, Ashis K
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:557-569 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:557-569
article
Benders decomposition without separability: A computational study for capacitated facility location problems
Benders is one of the most famous decomposition tools for Mathematical Programming, and it is the method of choice e.g., in mixed-integer stochastic programming. Its hallmark is the capability of decomposing certain types of models into smaller subproblems, each of which can be solved individually to produce local information (notably, cutting planes) to be exploited by a centralized “master” problem. As its name suggests, the power of the technique comes essentially from the decomposition effect, i.e., the separability of the problem into a master problem and several smaller subproblems. In this paper we address the question of whether the Benders approach can be useful even without separability of the subproblem, i.e., when its application yields a single subproblem of the same size as the original problem. In particular, we focus on the capacitated facility location problem, in two variants: the classical linear case, and a “congested” case where the objective function contains convex but non-separable quadratic terms. We show how to embed the Benders approach within a modern branch-and-cut mixed-integer programming solver, addressing explicitly all the ingredients that are instrumental for its success. In particular, we discuss some computational aspects that are related to the negative effects derived from the lack of separability. Extensive computational results on various classes of instances from the literature are reported, with a comparison with the state-of-the-art exact and heuristic algorithms. The outcome is that a clever but simple implementation of the Benders approach can be very effective even without separability, as its performance is comparable and sometimes even better than that of the most effective and sophisticated algorithms proposed in the previous literature.
Benders decomposition; Congested capacitated facility location; Perspective reformulation; Branch-and-cut; Mixed-integer convex programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301126
Fischetti, Matteo
Ljubić, Ivana
Sinnl, Markus
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:472-488 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:472-488
article
Stability and chaos in demand-based pricing under social interactions
Demand-based pricing is often used to moderate demand fluctuations so as to level resource utilization and increase profitability. However, such pricing policies may not be effective when customers’ purchase decisions are influenced by social interactions. This paper investigates the demand dynamics, under a demand-based pricing policy, of a frequently purchased service when social interactions are at work. Customers are heterogeneous and adaptively forward-looking. Existing customers’ re-purchase decisions are based on adaptively formed price expectations and reservation prices. Potential customers are attracted through social interactions with existing customers. The demand process is characterized by a two-dimensional dynamical system. It is shown that the equilibrium demand can be unstable. For a given reservation price distribution, we first analyze the stability of the equilibrium demand under various scenarios of social interactions and customers’ adaptively forward-looking behavior, and then characterize their dynamics using the bifurcation plots, Lyapunov exponents and return maps. The results indicate that the demand process can be stable, periodic or chaotic. The study shows that the intended effect of a demand-based pricing policy may be offset by customers’ adaptively forward-looking behavior under the influence of social interactions. In fact, the interplay of these factors may even lead to chaotic demand dynamics. The result highlights the complex dynamics produced by a simple demand-price mechanism under social interactions. For a demand-based pricing strategy to be effective, companies must take social interactions into account.
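The diagnostics mentioned in the abstract (bifurcation plots, Lyapunov exponents) can be illustrated on the one-dimensional logistic map rather than the paper's two-dimensional demand system; at r = 4 the map is chaotic with Lyapunov exponent ln 2. A generic sketch, not the authors' demand-price mechanism:

```python
import math

def lyapunov_logistic(r, x0=0.2, burn=1000, n=200_000):
    """Average log-derivative of x -> r*x*(1-x) along a trajectory."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        d = abs(r * (1 - 2 * x))           # |f'(x)| for the logistic map
        total += math.log(max(d, 1e-300))  # guard against log(0)
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic(4.0)
print(lam)  # positive exponent signals chaos; theory gives ln 2 for r = 4
```

A positive exponent means nearby demand trajectories diverge exponentially, which is the operational meaning of "chaotic demand dynamics" in the abstract.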
OR in service industries; Demand dynamics; Forward-looking; Social interaction; Chaos;
http://www.sciencedirect.com/science/article/pii/S037722171630100X
Yuan, Xuchuan
Hwarng, H. Brian
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:337-355 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:337-355
article
Modified Differential Evolution with Locality induced Genetic Operators for dynamic optimization
This article presents a modified version of the Differential Evolution (DE) algorithm for solving Dynamic Optimization Problems (DOPs) efficiently. The algorithm, referred to as Modified DE with Locality induced Genetic Operators (MDE-LiGO), incorporates changes in the three basic stages of a standard DE framework. The mutation phase has been entrusted to a locality-induced operation that retains traits of the Euclidean distance-based closest individuals around a potential solution. Diversity maintenance is further enhanced by the inclusion of a local-best crossover operation that empowers the algorithm with an explorative ability without directional bias. An exhaustive dynamic detection technique has been introduced to effectively sense changes in the landscape. An even distribution of solutions over different regions of the landscape calls for a solution retention technique that adapts the algorithm to dynamism by using previously stored information in diverse search domains. MDE-LiGO has been compared with seven state-of-the-art evolutionary dynamic optimizers on a set of benchmarks known as the Generalized Dynamic Benchmark Generator (GDBG), used in the competition on evolutionary computation in dynamic and uncertain environments held at the 2009 IEEE Congress on Evolutionary Computation (CEC). The experimental results clearly indicate that MDE-LiGO can outperform the other algorithms on most of the tested DOP instances in a statistically meaningful way.
Continuous optimization; Dynamic optimization; Differential Evolution; Self adaptation; Genetic operators;
http://www.sciencedirect.com/science/article/pii/S0377221716300959
Mukherjee, Rohan
Debchoudhury, Shantanab
Das, Swagatam
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:673-680 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:673-680
article
Licensing under general demand and cost functions
We consider a Cournot duopoly under general demand and cost functions, where an incumbent patentee has a cost-reducing technology that it can license to its rival using combinations of royalties and upfront fees (two-part tariffs). We show that for drastic technologies: (a) licensing occurs and both firms stay active if the cost function is superadditive, and (b) licensing does not occur and the patentee monopolizes the market if the cost function is additive or subadditive. For non-drastic technologies, licensing takes place provided the average efficiency gain from the cost-reducing technology is higher than the marginal gain computed at the licensee’s reservation output. Optimal licensing policies have both royalties and fees for significantly superior technologies if the cost function is superadditive. By contrast, for additive and certain subadditive cost functions, optimal licensing policies have only royalties and no fees.
Patent licensing; Superadditive function; Subadditive function; Royalties; Two-part tariff;
http://www.sciencedirect.com/science/article/pii/S037722171600103X
Sen, Debapriya
Stamatopoulos, Giorgos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:869-879 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:869-879
article
Spline based survival model for credit risk modeling
Survival modeling has been adopted in retail banking because of its capability to analyze censored data. It is an important tool for credit risk scoring, stress testing and credit asset evaluation. In this paper, we introduce a regression-spline-based discrete-time survival model. The flexibility of the spline function allows us to model the nonlinear and irregular shapes of hazard functions. By incorporating the regression spline into multinomial logistic regression, this approach complements the existing Cox model. From a practical perspective, logistic regression is relatively easy to understand and implement, and the simple parametric form is especially advantageous for predictive scoring. Using a credit card dataset, we demonstrate how to build a cubic-regression-spline-based survival model. We also compare the performance of the spline-based discrete-time survival model with the classical Cox model; our results show that the spline-based survival model provides similar statistical explanatory power and improves prediction accuracy for the attrition model, which has a low event rate.
Retail banking; Credit risk scoring; Survival modeling; Regression spline;
http://www.sciencedirect.com/science/article/pii/S0377221716301035
Luo, Sirong
Kong, Xiao
Nie, Tingting
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:584-592 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:584-592
article
Lower bounding procedure for the asymmetric quadratic traveling salesman problem
In this paper we consider the Asymmetric Quadratic Traveling Salesman Problem (AQTSP). Given a directed graph and a function that maps every pair of consecutive arcs to a cost, the problem consists in finding a cycle that visits every vertex exactly once such that the sum of the costs is minimal. We propose an extended Linear Programming formulation that has a variable for each cycle in the graph. Since the number of cycles is exponential in the graph size, we propose a column generation approach. Moreover, we apply a particular reformulation-linearization technique to a compact representation of the problem, and compute lower bounds based on Lagrangian relaxation. We compare our new bounds with those obtained by some linearization models proposed in the literature. Computational results on benchmark sets used in the literature show that our lower bounding procedures are very promising.
Traveling salesman; Reformulation-linearization technique; Cycle cover; Column generation; Lower bound;
http://www.sciencedirect.com/science/article/pii/S037722171630159X
Rostami, Borzou
Malucelli, Federico
Belotti, Pietro
Gualandi, Stefano
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:298-313 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:298-313
article
Scheduling cranes at an indented berth
Container terminals face great challenges in meeting the shipping industry’s requirements, an important one being increasing vessel sizes: within the last decade, ship size in the Asia–Europe trade has effectively doubled. Port productivity, however, has not kept pace with the larger vessel sizes, leading to increased vessel turnaround times at ports, which is a severe problem. In order to meet industry targets, a game-changer in container handling is required. The indented berth structure is one important opportunity to address this issue. This novel berth structure requires new models and solution techniques for scheduling the quay cranes serving the indented berth. Accordingly, in this paper we approach the quay crane scheduling problem at an indented berth, focusing on the challenges and constraints related to the novel architecture. We model the quay crane scheduling problem under this special structure and develop a solution technique based on branch-and-price. Extensive experiments are conducted to validate the efficiency of the proposed algorithm.
Maritime logistics; Crane sequencing; Crane scheduling; Container terminal operations; Indented berth;
http://www.sciencedirect.com/science/article/pii/S0377221716300753
Beens, Marie-Anne
Ursavas, Evrim
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:856-868 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:856-868
article
It’s not now or never: Implications of investment timing and risk aversion on climate adaptation to extreme events
Public investment into risk reduction infrastructure plays an important role in facilitating adaptation to climate impacted hazards and natural disasters. In this paper, we provide an economic framework to incorporate investment timing and insurance market risk preferences when evaluating projects related to reducing climate impacted risks. The model is applied to a case study of bushfire risk management. We find that optimal timing of the investment may increase the net present value (NPV) of an adaptation project for various levels of risk aversion. Assuming risk neutrality, while the market is risk averse, is found to result in an unnecessary delay of the investment into risk reduction projects. The optimal waiting time is shorter when the insurance market is more risk averse or when a more serious scenario for climatic change is assumed. A higher investment cost or a higher discount rate will increase the optimal waiting time. We also find that a stochastic discount rate results in higher NPVs of the project than a discount rate that is assumed fixed at the long run average level.
Climate change adaptation; Investment timing; Catastrophic risk; Risk aversion; Real option;
http://www.sciencedirect.com/science/article/pii/S0377221716000898
Truong, Chi
Trück, Stefan
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:51-67 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:51-67
article
Modeling parallel movement of lifts and vehicles in tier-captive vehicle-based warehousing systems
This paper models and analyzes tier-captive autonomous vehicle storage and retrieval systems. While previous models assume sequential commissioning of the lift and vehicles, we propose a parallel processing policy for the system, under which an arriving transaction can request the lift and the vehicle simultaneously. To investigate the performance of this policy, we formulate a fork-join queueing network in which an arriving transaction is split into a horizontal movement task served by the vehicle and a vertical movement task served by the lift. We develop an approximation method based on decomposition of the fork-join queueing network to estimate system performance, and build simulation models to validate the analytical models. The results show that the fork-join queueing network is accurate in estimating system performance under the parallel processing policy. Numerical experiments and a real case are carried out to compare the system response time of retrieval transactions under the parallel and sequential processing policies. The results show that, in systems with fewer than 10 tiers, the parallel processing policy outperforms the sequential processing policy by at least 5.51 percent. The advantage of the parallel processing policy decreases with rack height and aisle length. In systems with more than 10 tiers and a length-to-height ratio larger than 7, we can find a critical retrieval transaction arrival rate below which the parallel processing policy outperforms the sequential processing policy.
Logistics; Warehousing; AVS/RS; Analytical and simulation modelling; Performance analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716301679
Zou, Bipan
Xu, Xianhao
(Yale) Gong, Yeming
De Koster, René
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:148-160 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:148-160
article
Strategic behavior in an observable fluid queue with an alternating service process
We consider a fluid queue with two modes of service, representing a production facility where the processing of the customers (units) is typically carried out at a much faster time scale than the machine-related processes. We examine the strategic behavior of the customers regarding the joining/balking dilemma under two levels of information upon arrival. Specifically, just after arriving and before making the decision, a customer observes the level of the fluid, but may or may not be informed about the state of the server (fast/slow). Assuming that the customers evaluate their utilities based on a natural reward/cost structure, which incorporates their desire for processing and their unwillingness to wait, we derive symmetric equilibrium strategy profiles. Moreover, we illustrate various effects of the information level on the strategic behavior of the customers. The corresponding social optimization problem is also studied, and the inefficiency of the equilibrium strategies is quantified via the Price of Anarchy (PoA) measure.
Queueing; Fluid flow models; Strategic customers; Balking; Equilibrium strategies;
http://www.sciencedirect.com/science/article/pii/S0377221716301928
Economou, Antonis
Manou, Athanasia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:372-382 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:372-382
article
Generous, spiteful, or profit maximizing suppliers in the wholesale price contract: A behavioral study
Prior experimental research shows that, in aggregate, decision makers acting as suppliers to a newsvendor do not set the wholesale price to maximize supplier profits. However, these deviations from optimal have rarely been examined at an individual level. In this study, presented with scenarios that differ in terms of how profit is shared between retailer and supplier, suppliers set wholesale price contracts which deviate from profit-maximization in ways that are either generous or spiteful. On an individual basis, these deviations were found to be consistent with how the profit-maximizing contract compares to the subject's idea of a fair contract. Suppliers moved nearer to self-reported ideal allocations when they indicated a high degree of concern for fairness, consistent with previously proposed fairness models, and were found to be more likely to act upon generous inclinations than spiteful ones.
Behavioral OR; Supply chain management; Newsvendor; Contracting; Supplier pricing;
http://www.sciencedirect.com/science/article/pii/S0377221716300595
Niederhoff, Julie A.
Kouvelis, Panos
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:40-50 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:40-50
article
How to escape a declining market: Capacity investment or exit?
This paper considers a firm that faces a declining profit stream for its established product. The firm has the option to invest in a new technology with which it can produce an innovative product while having the option to exit at any point in time. In the presence of an exit option, earlier work determined the optimal timing to invest, where it was shown that higher uncertainty might accelerate investment timing. In the present paper the firm also decides on capacity. This extension leads to monotonicity, i.e. higher uncertainty delays investment timing. We also find that higher potential profitability of the innovative product market increases the incentive to invest earlier, where, however, we get the counterintuitive result that the firm invests in smaller capacity. Finally, if quantity has a smaller negative effect on price, the firm wants to acquire a larger capacity at a lower investment threshold.
Investment analysis; Exit; Capacity investment; Declining market; Real options;
http://www.sciencedirect.com/science/article/pii/S0377221716302284
Hagspiel, Verena
Huisman, Kuno J.M.
Kort, Peter M.
Nunes, Cláudia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:514-523 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:514-523
article
A nonhomogeneous hidden Markov model of response dynamics and mailing optimization in direct marketing
Catalog firms mail billions of catalogs each year. To stay competitive, catalog managers need to maximize the return on these mailings by deciding who should receive a mail-order catalog. In this paper, we propose a two-step approach that allows firms to address the dynamic implications of mailing decisions, and to make efficient mailing decisions by maximizing the long-term value generated by customers. Specifically, we first propose a nonhomogeneous hidden Markov model (HMM) to capture the interactive dynamics between customers and mailings. In the second step, we use the parameters obtained from the HMM to determine the optimal mailing decisions using a Partially Observable Markov Decision Process (POMDP). Both the immediate and the long-term effects of mailings are accounted for. The mailing endogeneity that may result in biased parameter estimates is also corrected. We conduct an empirical study using six years of quarterly solicitation data derived from the well-known DMEF donation data set. All metrics used suggest that the proposed model fits the data well in terms of correct predictions and outperforms all other benchmark models. Simulation results show that the proposed method for optimizing total accrued benefits outperforms the usual targeted-marketing methodology of optimizing each promotion in isolation. We also find that the sequential targeting rules acquired by our proposed methods are more cost-containment oriented in nature compared with the corresponding single-event targeting rules.
OR in marketing; HMM; POMDP; Customer lifetime value; Mailing optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301084
Ma, Shaohui
Hou, Lu
Yao, Wensong
Lee, Baozhen
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:269-278 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:269-278
article
Age-structured linear-state differential games
In this paper we search for conditions on age-structured differential games to make their analysis more tractable. We focus on a class of age-structured differential games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are sub-game perfect. By means of a simple age-structured advertising problem, we provide an application of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium.
Age-structured models; Differential games; Advertising;
http://www.sciencedirect.com/science/article/pii/S0377221716301539
Grosset, Luca
Viscolani, Bruno
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:280-289 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:280-289
article
Exact and heuristic algorithms for the Hamiltonian p-median problem
This paper presents an exact algorithm, a constructive heuristic algorithm, and a metaheuristic for the Hamiltonian p-Median Problem (HpMP). The exact algorithm is a branch-and-cut algorithm based on an enhanced p-median based formulation, which is proved to dominate an existing p-median based formulation. The constructive heuristic is a giant tour heuristic, based on a dynamic programming formulation to optimally split a given sequence of vertices into cycles. The metaheuristic is an iterated local search algorithm using 2-exchange and 1-opt operators. Computational results show that the branch-and-cut algorithm outperforms the existing exact solution methods.
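The giant-tour split step mentioned in the abstract, optimally partitioning a fixed vertex sequence into p cycles, can be sketched as a generic dynamic program. The sketch below is an illustration only, not the authors' exact formulation: it assumes cycles are contiguous segments of the sequence with a hypothetical minimum length of three, and that `dist` is a square cost matrix.

```python
def split_into_cycles(tour, dist, p, min_len=3):
    """Split a fixed vertex sequence into p contiguous cycles of minimum
    total cost. f[j][k] is the cheapest way to cover the first j vertices
    of the tour with k cycles; each cycle is a contiguous segment closed
    by an edge back to its first vertex. (min_len=3 is an assumption of
    this sketch, not taken from the paper.)"""
    n = len(tour)
    INF = float("inf")

    def cycle_cost(i, j):
        seg = tour[i:j]
        path = sum(dist[seg[k]][seg[k + 1]] for k in range(len(seg) - 1))
        return path + dist[seg[-1]][seg[0]]  # closing edge of the cycle

    f = [[INF] * (p + 1) for _ in range(n + 1)]
    parent = [[-1] * (p + 1) for _ in range(n + 1)]
    f[0][0] = 0.0
    for j in range(min_len, n + 1):
        for k in range(1, p + 1):
            for i in range(0, j - min_len + 1):
                if f[i][k - 1] < INF:
                    cost = f[i][k - 1] + cycle_cost(i, j)
                    if cost < f[j][k]:
                        f[j][k], parent[j][k] = cost, i
    if f[n][p] == INF:
        return None, INF  # no feasible split
    cycles, j, k = [], n, p
    while k > 0:  # walk the parent pointers to recover the segments
        i = parent[j][k]
        cycles.append(tour[i:j])
        j, k = i, k - 1
    return cycles[::-1], f[n][p]
```

For example, splitting a six-vertex sequence over two well-separated clusters with p = 2 recovers one cycle per cluster.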
Hamiltonian; p-median; Branch-and-cut; Metaheuristic;
http://www.sciencedirect.com/science/article/pii/S0377221716300327
Erdoğan, Güneş
Laporte, Gilbert
Rodríguez Chía, Antonio M.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:294-303 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:294-303
article
Sustaining cooperation in a differential game of advertising goodwill accumulation
The paper suggests a differential game of advertising competition among three symmetric firms, played over an infinite horizon. The objective of the research is to see if a cooperative agreement among the firms can be sustained over time. For this purpose the paper determines the characteristic functions (value functions) of individual players and all possible coalitions. We identify an imputation that belongs to the core. Using this imputation guarantees that, in any subgame starting out on the cooperative state trajectory, no coalition has an incentive to deviate from what was prescribed by the solution of the grand coalition’s optimization problem.
Differential games; Advertising competition; Core imputation;
http://www.sciencedirect.com/science/article/pii/S0377221716301576
Jørgensen, Steffen
Gromova, Ekaterina
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:614-624 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:614-624
article
Measures of dynamism and urgency in logistics
Dynamism was originally defined as the proportion of online versus offline orders in the literature on dynamic logistics. Such a definition however, loses meaning when considering purely dynamic problems where all customer requests arrive dynamically. Existing measures of dynamism are limited to either (1) measuring the proportion of online versus offline orders or (2) measuring urgency, a concept that is orthogonal to dynamism, instead. The present paper defines separate and independent formal definitions of dynamism and urgency applicable to purely dynamic problems. Using these formal definitions, instances of a dynamic logistic problem with varying levels of dynamism and urgency were constructed and several route scheduling algorithms were executed on these problem instances. Contrary to previous findings, the results indicate that dynamism is positively correlated with route quality; urgency, however, is negatively correlated with route quality. The paper contributes the theory that dynamism and urgency are two distinct concepts that deserve to be treated separately.
Logistics; Transportation; Dynamism; Urgency; Measures;
http://www.sciencedirect.com/science/article/pii/S0377221716301497
van Lon, Rinde R.S.
Ferrante, Eliseo
Turgut, Ali E.
Wenseleers, Tom
Vanden Berghe, Greet
Holvoet, Tom
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:243-264 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:243-264
article
Sustainable Operations
The field of “Sustainable Operations” and the term itself have arisen only in the last ten to twenty years in the context of sustainable development. Even though the term is frequently used in practice and research, it has hardly been characterized and defined precisely in the literature so far. For reasons of clarity and unambiguity, we present terms and definitions before we demarcate Sustainable Operations from its neighboring topics. We especially focus on the interactions between economic, social and ecological aspects as part of Sustainable Operations, but exclude the development of a normative ethics, instead focusing on the use of quantitative methods from Operations Research. Then the broad subject of Sustainable Operations is structured into various areas arising from the typical structure of an enterprise. For each area, we present examples of applications and refer to the existing literature. The paper concludes with future research directions.
Sustainable Operations; Sustainable development; Operations research; Computational sustainability; Triple bottom line;
http://www.sciencedirect.com/science/article/pii/S0377221716300996
Jaehn, Florian
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:811-824 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:811-824
article
Local matching of flexible load in smart grids
Today’s power systems are experiencing a transition from primarily fossil fuel based generation toward greater shares of renewable energy sources. It becomes increasingly costly to manage the resulting uncertainty and variability in power system operations solely through flexible generation assets. Incorporating demand side flexibility through appropriately designed incentive structures can add an additional lever to balance demand and supply. Based on a supply model using empirical wind generation data and a discrete model of flexible demand with temporal constraints, we design and evaluate a local online market mechanism for matching flexible load and uncertain supply. Under this mechanism, truthful reporting of flexibility is a dominant strategy for consumers, reducing payments and increasing the likelihood of allocation. Suppliers, during periods of scarce supply, benefit from elevated critical-value payments as a result of flexibility-induced competition on the demand side. We find that, for a wide range of the key parameters (supply capacity, flexibility level), the cost of ensuring incentive compatibility in a smart grid market, relative to the welfare-optimal matching, is relatively small. This suggests that local matching of demand and supply can be organized in a decentralized manner in the presence of a sufficiently flexible demand side. Extending the stylized demand model to include complementary demand structures, we demonstrate that decentralized matching induces only minor efficiency losses if demand is sufficiently flexible. Furthermore, by accounting for physical grid limitations we show that flexibility and grid capacity exhibit complementary characteristics.
OR in energy; Smart grid; Load flexibility; Online mechanism design;
http://www.sciencedirect.com/science/article/pii/S037722171630114X
Ströhle, Philipp
Flath, Christoph M.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:236-252 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:236-252
article
A two-stage classification technique for bankruptcy prediction
Ensemble techniques such as bagging or boosting, which are based on combinations of classifiers, make it possible to design models that are often more accurate than those that are made up of a unique prediction rule. However, the performance of an ensemble solely relies on the diversity of its different components and, ultimately, on the algorithm that is used to create this diversity. It means that such models, when they are designed to forecast corporate bankruptcy, do not incorporate or use any explicit knowledge about this phenomenon that might supplement or enrich the information they are likely to capture. This is the reason why we propose a method that is precisely based on some knowledge that governs bankruptcy, using the concept of “financial profiles”, and we show how the complementarity between this technique and ensemble techniques can improve forecasts.
Decision support systems; Finance; Bankruptcy; Forecasting; Financial profile;
http://www.sciencedirect.com/science/article/pii/S0377221716301369
du Jardin, Philippe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:404-417 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:404-417
article
Logistics capacity planning: A stochastic bin packing formulation and a progressive hedging meta-heuristic
We consider the logistics capacity planning problem arising in the context of supply-chain management. We address the tactical-planning problem of determining the quantity of capacity units, hereafter called bins, of different types to secure for the next period of activity, given the uncertainty on future needs in terms of demand for loads (items) to be moved or stored, and the availability and costs of capacity for these movements or storage activities. We propose a modeling framework introducing a new class of bin packing problems, the Stochastic Variable Cost and Size Bin Packing Problem. The resulting two-stage stochastic formulation with recourse assigns to the first stage the tactical capacity-planning decisions of selecting bins, while the second stage models the subsequent adjustments to the plan, securing extra bins and packing the items into the selected bins, performed each time the plan is applied and new information becomes known. We propose a new meta-heuristic based on progressive hedging ideas that includes advanced strategies to accelerate the search and efficiently address the symmetry strongly present in the problem considered due to the presence of several equivalent bins of each type. Extensive computational results for a large set of instances support the claim of validity for the model, efficiency for the solution method proposed, and quality and robustness for the solutions obtained. The method is also used to explore the impact on the capacity plan and the recourse to spot-market capacity of a quite wide range of variations in the uncertain parameters and the economic environment of the firm.
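As a generic illustration of the deterministic core of the Variable Cost and Size Bin Packing Problem introduced above (not the paper's two-stage stochastic model or its progressive hedging meta-heuristic), a first-fit-decreasing style greedy that opens bins of the type with the best cost per unit of capacity might look like this; item sizes and the `(capacity, cost)` bin types are illustrative assumptions.

```python
def vcsbpp_greedy(items, bin_types):
    """Greedy heuristic sketch for variable cost and size bin packing.

    items: list of item sizes. bin_types: list of (capacity, cost) pairs,
    each type available in unlimited quantity. Items are placed first-fit
    in decreasing size order; when no open bin fits, the cheapest-per-unit
    type that can hold the item is opened."""
    open_bins = []  # each bin: {"free": remaining capacity, "items": [...]}
    total_cost = 0.0
    for item in sorted(items, reverse=True):
        for b in open_bins:  # first fit among already-open bins
            if b["free"] >= item:
                b["free"] -= item
                b["items"].append(item)
                break
        else:
            # open the type with the lowest cost/capacity ratio that fits
            feasible = [(cost / cap, cap, cost)
                        for cap, cost in bin_types if cap >= item]
            if not feasible:
                raise ValueError("item larger than every bin type")
            _, cap, cost = min(feasible)
            total_cost += cost
            open_bins.append({"free": cap - item, "items": [item]})
    return total_cost, open_bins
```

For instance, packing items of sizes 6, 4, 4 and 3 with types (capacity 10, cost 5) and (capacity 5, cost 3) opens two large bins for a total cost of 10.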
Logistics capacity planning; Uncertainty; Stochastic Variable Cost and Size Bin Packing; Stochastic programming; Progressive hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716300777
Crainic, Teodor Gabriel
Gobbato, Luca
Perboli, Guido
Rei, Walter
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:105-112 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:105-112
article
Offsetting inventory replenishment cycles
The inventory-staggering problem is a multi-item inventory problem in which replenishment cycles are scheduled or offset in order to minimize the maximum inventory level over a given planning horizon. We incorporate symmetry-breaking constraints in a mixed-integer programming model to determine optimal and near-optimal solutions. Local-search heuristics and evolutionary polishing heuristics are also presented to achieve effective and efficient solutions. We examine extensions of the problem that include a continuous-time framework as well as the effect of stochastic demand.
Inventory; Replenishment staggering; Symmetry reduction; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716302016
Russell, Robert A.
Urban, Timothy L.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:127-137 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:127-137
article
Cost-effectiveness analysis for heterogeneous samples
The sampling information for cost-effectiveness analysis typically comes from different health care centers, and, as far as we know, it is taken for granted that the distribution of the cost and the effectiveness does not vary across centers. We argue that this assumption is unrealistic, and prove that failing to account for sample heterogeneity typically gives misleading results. Consequently, a cost-effectiveness procedure for heterogeneous samples is proposed here.
Clustering; Cost-effectiveness; Decision processes; Meta-analysis; Heterogeneous samples;
http://www.sciencedirect.com/science/article/pii/S0377221716301606
Moreno, E.
Girón, F.J.
Vázquez–Polo, F.J.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:648-658 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:648-658
article
An investigation of model risk in a market with jumps and stochastic volatility
The aim of this paper is to investigate model risk aspects of variance swaps and forward-start options in a realistic market setup where the underlying asset price process exhibits stochastic volatility and jumps. We devise a general framework in order to provide evidence of the model uncertainty attached to variance swaps and forward-start options. In our study, both variance swaps and forward-start options can be valued by means of analytic methods. We measure model risk using a set of 21 models embedding various dynamics with both continuous and discontinuous sample paths. To conduct our empirical analysis, we work with two major equity indices (S&P 500 and Eurostoxx 50) under different market situations. Our results evaluate model risk between 50 and 200 basis points, with an average value slightly above 100 basis points of the contract notional.
Risk management; Model risk; Robustness and sensitivity analysis; Variance swap; Forward-start option;
http://www.sciencedirect.com/science/article/pii/S0377221716301461
Coqueret, Guillaume
Tavin, Bertrand
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:9-18 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:9-18
article
A computational study for bilevel quadratic programs using semidefinite relaxations
In this paper, we deal with bilevel quadratic programming problems with binary decision variables in the leader problem and convex quadratic programs in the follower problem. For this purpose, we transform the bilevel problems into equivalent quadratic single-level formulations by replacing the follower problem with the equivalent Karush-Kuhn-Tucker (KKT) conditions. Then, we use the single-level formulations to obtain mixed integer linear programming (MILP) models and semidefinite programming (SDP) relaxations. Thus, we compute optimal solutions and upper bounds using linear programming (LP) and SDP relaxations. Our numerical results indicate that the SDP relaxations are considerably tighter than the LP ones. Consequently, the SDP relaxations allow finding tight feasible solutions for the problem, especially when the number of variables in the leader problem is larger than in the follower problem. Moreover, they are solved at a significantly lower computational cost for large-scale instances.
Conic programming and interior point methods; Bilevel programming; Semidefinite programming; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716000497
Adasme, Pablo
Lisser, Abdel
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:1-8 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:1-8
article
Edge coloring: A natural model for sports scheduling
In this work, we consider some basic sports scheduling problems and introduce the notions of graph theory which are needed to build adequate models. We show, in particular, how edge coloring can be used to construct schedules for sports leagues. Due to the emergence of various practical requirements, one cannot be restricted to classical schedules given by standard constructions, such as the circle method, to color the edges of complete graphs. The need of exploring the set of all possible colorings inspires the design of adequate coloring procedures. In order to explore the solution space, local search procedures are applied. The standard definitions of neighborhoods that are used in such procedures need to be extended. Graph theory provides efficient tools for describing various move types in the solution space. We show how formulations in graph theoretical terms give some insights to conceive more general move types. This leads to a series of open questions which are also presented throughout the text.
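The circle method named in the abstract is a standard construction; a minimal sketch of it for a single round robin with an even number of teams, equivalent to a proper edge coloring of the complete graph K_n with n-1 colors (each round is one color class):

```python
def circle_method(n):
    """Single round-robin schedule for n teams (n even) via the circle method.

    Team n-1 stays fixed while the other teams rotate one position per
    round; each of the n-1 rounds is a perfect matching of the teams."""
    assert n % 2 == 0, "the circle method needs an even number of teams"
    rotating = list(range(n - 1))  # teams 0..n-2 rotate; team n-1 is fixed
    rounds = []
    for _ in range(n - 1):
        pairs = [(rotating[0], n - 1)]  # the pivot meets the fixed team
        for i in range(1, n // 2):
            pairs.append((rotating[i], rotating[-i]))
        rounds.append(pairs)
        rotating = [rotating[-1]] + rotating[:-1]  # rotate one position
    return rounds
```

For n = 6 this yields 5 rounds of 3 games each, with every pair of teams meeting exactly once; local search procedures of the kind discussed here would then explore alternative colorings starting from such a schedule.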
OR in sports; Scheduling; Graph theory; Edge coloring; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221716301667
Januario, Tiago
Urrutia, Sebastián
Ribeiro, Celso C.
de Werra, Dominique
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:138-147 2016-05-19 RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:138-147
article
The predictive power of the business and bank sentiment of firms: A high-dimensional Granger Causality approach
We study the predictive power of industry-specific economic sentiment indicators for future macro-economic developments. In addition to the sentiment of firms towards their own business situation, we study their sentiment with respect to the banking sector – their main credit providers. The use of industry-specific sentiment indicators results in a high-dimensional forecasting problem. To identify the most predictive industries, we present a bootstrap Granger Causality test based on the Adaptive Lasso. This test is more powerful than the standard Wald test in such high-dimensional settings. Forecast accuracy is improved by using only the most predictive industries rather than all industries.
Bootstrap; Granger Causality; Lasso; Sentiment surveys; Time series forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221716301874
Wilms, Ines
Gelper, Sarah
Croux, Christophe
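The paper's bootstrap test based on the Adaptive Lasso generalizes the classical bivariate Granger-causality test to high dimensions; as background, a minimal numpy sketch of the classical F-test (a simplified illustration, not the authors' procedure):

```python
import numpy as np

def granger_f(y, x, p=2):
    """Classical bivariate Granger-causality F-test of 'x Granger-causes y'.

    Compares an AR(p) model of y (restricted) with a model that adds
    p lags of x (unrestricted). Returns the F statistic.
    """
    T = len(y)
    Y = y[p:]
    lag_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lag_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, lag_y])           # restricted design
    Xu = np.hstack([ones, lag_y, lag_x])    # unrestricted design
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

A large F statistic indicates that lags of x help predict y beyond y's own history.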
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:543-5562016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:543-556
article
Origin and early evolution of corner polyhedra
Corner Polyhedra are a natural intermediate step between linear programming and integer programming. This paper first describes how the concept of Corner Polyhedra arose unexpectedly from a practical operations research problem, and then describes how it evolved to shed light on fundamental aspects of integer programming and to provide a great variety of cutting planes for integer programming.
Integer programming; Cutting; Linear programming; Corner polyhedra;
http://www.sciencedirect.com/science/article/pii/S0377221716301114
Gomory, Ralph
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:19-282016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:19-28
article
Eidetic Wolf Search Algorithm with a global memory structureAuthor-Name: Fong, Simon
A recently proposed metaheuristics called Wolf Search Algorithm (WSA) has demonstrated its efficacy for various hard-to-solve optimization problems. In this paper, an improved version of WSA namely Eidetic-WSA with a global memory structure (GMS) or just eWSA is presented. eWSA makes use of GMS for improving its search for the optimal fitness value by preventing mediocre visited places in the search space to be visited again in future iterations. Inherited from swarm intelligence, search agents in eWSA and the traditional WSA merge into an optimal solution although the agents behave and make decisions autonomously. Heuristic information gathered from collective memory of the swarm search agents is stored in GMS. The heuristics eventually leads to faster convergence and improved optimal fitness. The concept is similar to a hybrid metaheuristics based on WSA and Tabu Search. eWSA is tested with seven standard optimization functions rigorously. In particular, eWSA is compared with two state-of-the-art metaheuristics, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). eWSA shares some similarity with both approaches with respect to directed-random search. The similarity with ACO is, however, stronger as ACO uses pheromones as global information references that allow a balance between using previous knowledge and exploring new solutions. Under comparable experimental settings (identical population size and number of generations) eWSA is shown to outperform both ACO and PSO with statistical significance. When dedicating the same computation time, only ACO can be outperformed due to a comparably long run time per iteration of eWSA.
Metaheuristics; Wolf Search Algorithm; Global memory structure; Ant Colony Optimization; Particle Swarm Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301898
Deb, Suash
Hanne, Thomas
Li, Jinyan (Leo)
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:188-2012016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:188-201
article
Timing of service investments for retailers under competition and demand uncertainty
We study how retailers can time their service investments when demand for a product is uncertain and consumers care both about price and service when choosing which retailer to buy from. By “service” we mean activities a retailer can invest in and which can drive traffic into the store. We consider offering extended operating hours as an example of such service and examine the timing of service investments for two competing retailers. Specifically, we analyze two retailers who compete on price and service level, and characterize both the prices and the service levels, as well as the timing of their service investment decisions. Our model also considers two effects of retailer service—the effect on total demand for the product and the effect on a retailer’s market share. We show that investing in service before demand realization, although counterintuitive, can be beneficial for competing retailers. On the other hand, a large mismatch between actual and expected demand and a low probability of high demand justify postponing service investments until after demand is observed. We also show that the incentive to invest in service before demand realization becomes more pronounced when service investments can increase the overall demand for the product in addition to protecting market share. Our findings have important implications for retailers with regard to the timing of their service investment decisions.
Retail; Service; Uncertainty; Competition; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716301515
Perdikaki, Olga
Kostamis, Dimitris
Swaminathan, Jayashankar M.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:428-4402016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:428-440
article
Carbon efficiency evaluation: An analytical framework using fuzzy DEA
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g., carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured using crisp inputs and outputs (desirable and undesirable). However, the input and output data, such as CO2 emissions, in real-world evaluation problems are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application of energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
Energy efficiency; Data envelopment analysis; Fuzzy expected interval; Fuzzy expected value; Fuzzy ranking approach;
http://www.sciencedirect.com/science/article/pii/S0377221716300340
Ignatius, Joshua
Ghasemi, M.-R.
Zhang, Feng
Emrouznejad, Ali
Hatami-Marbini, Adel
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:440-4522017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:440-452
article
A fast heuristic attribute reduction approach to ordered decision systems
Rough set theory has shown success as a filter-based feature selection approach for analyzing information systems. One of its main aims is to search for a feature subset called a reduct, which preserves the classification ability of the original system. In this paper, we consider ordered decision systems, where the preference order, a fundamental concept in the dominance-based rough set approach, plays a critical role. In the recent literature, based on the greedy hill climbing method, many heuristic attribute reduction algorithms have been proposed that utilize significance measures of attributes, and they have been extended to deal with ordered decision systems. Unfortunately, they are often time-consuming, especially when applied to large scale data sets with high dimensions. To reduce the complexity, a novel accelerator is introduced into the heuristic algorithms from the perspectives of objects and criteria. Based on the new accelerator, the number of objects and the dimension of criteria are reduced, making the accelerated algorithms faster than their original counterparts while producing the same reducts. Experimental analysis shows the validity and efficiency of the proposed methods.
Dominance-based rough set approach; Ordered decision system; Heuristic attribute reduction algorithm; Accelerator;
http://www.sciencedirect.com/science/article/pii/S0377221717302333
Du, Wen Sheng
Hu, Bao Qing
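As background for the heuristic attribute reduction the abstract describes, here is a sketch of the classical greedy reduct search based on the dependency degree (the indiscernibility-based version, not the paper's dominance-based variant; all names are illustrative):

```python
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """Objects whose equivalence class under attrs is pure in the decision."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(i)
    pos = set()
    for idx in groups.values():
        if len({rows[i][decision] for i in idx}) == 1:
            pos |= set(idx)
    return pos

def greedy_reduct(rows, cond_attrs, decision):
    """Greedy hill climbing: repeatedly add the attribute that most
    enlarges the positive region, until the full dependency is reached."""
    full = len(positive_region(rows, cond_attrs, decision))
    reduct = []
    while len(positive_region(rows, reduct, decision)) < full:
        best = max((a for a in cond_attrs if a not in reduct),
                   key=lambda a: len(positive_region(rows, reduct + [a], decision)))
        reduct.append(best)
    return reduct
```

The accelerator proposed in the paper speeds up exactly this kind of loop by shrinking the sets of objects and criteria between iterations.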
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:774-7962017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:774-796
article
Wildfire fuel management: Network-based models and optimization of prescribed burning
Wildfires are a common phenomenon on most continents. They have occurred for an estimated 60 million years and are part of a regular climatic cycle. Nevertheless, wildfires represent a real and continuing problem that can have a major impact on people, wildlife and the environment. The intensity and severity of wildfires can be reduced through fuel management activities. The most common and effective fuel management activity is prescribed burning. We propose a multi-period optimization framework based on mixed integer programming (MIP) techniques to determine the optimal spatial allocation of prescribed burning activities over a finite planning horizon. In contrast to the existing fuel management optimization literature, we model fuel accumulation with Olson’s equation. To capture potential fire spread along with irregular landscape connectivity considerations, we use a graph-theoretical approach that allows us to exploit graph connectivity measures (e.g., the number of connected components) as optimization objectives. The resulting mathematical programs can be tackled by general purpose MIP solvers, while for handling larger instances we propose a simple heuristic. Our computational experiments with test instances constructed based on real-life data reveal interesting insights and demonstrate the advantages and limitations of the proposed approaches.
OR in natural resources; Prescribed burning; Fuel management; Network-based model; Graph connectivity;
http://www.sciencedirect.com/science/article/pii/S037722171730591X
Matsypura, Dmytro
Prokopyev, Oleg A.
Zahar, Aizat
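One graph connectivity measure the abstract mentions as an optimization objective is the number of connected components of the remaining fuel after prescribed burns; a minimal sketch on a grid landscape (representation and names are illustrative, not the paper's model):

```python
def count_components(grid):
    """Number of 4-connected components of unburned cells (True = fuel).

    In the network view of the landscape, prescribed burns remove vertices;
    counting the remaining connected components is one way to quantify
    how well the burns fragment potential fire-spread paths.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    comps = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                comps += 1
                stack = [(r, c)]
                seen.add((r, c))
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] and (ni, nj) not in seen):
                            seen.add((ni, nj))
                            stack.append((ni, nj))
    return comps
```

Burning a row of cells that separates the landscape increases the component count, which a fragmentation-maximizing objective would reward.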
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:784-7972017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:784-797
article
Critical rationalism in practice: Strategies to manage subjectivity in OR investigations
The philosophical position referred to as critical rationalism (CR) is potentially important to OR because it holds out the possibility of supporting OR’s claim to offer managers a scientifically ‘rational’ approach. However, as developed by Karl Popper, and subsequently extended by David Miller, CR can only support practice (deciding what to do, how to act) in a very limited way: concentrating on the critical application of deductive logic, it ignores, or at least leaves underdeveloped, the crucial role of subjective judgements in making technical and moral choices. By reflecting on the way that managers, engineers, administrators and other professionals take decisions in practice, three strategies are identified for handling the inevitable subjectivity in practical decision-making. It is argued that these three strategies can be understood as attempts to emulate the scientific process of achieving intersubjective consensus, a process inherent in CR. The perspective developed in the paper provides practitioners with a way of understanding their clients’ approach to decision-making and holds out the possibility of making coherent the claim that they are offering advice on how to apply a scientific approach to decision-making; it presents academics with some philosophical challenges and some new avenues for research.
Practice of OR; Philosophy of OR; Critical rationalism; Critical systems thinking;
http://www.sciencedirect.com/science/article/pii/S0377221713010023
Ormerod, R.J.
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:798-8092017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:798-809
article
Maintaining the Regular Ultra Passum Law in data envelopment analysis
The variable returns to scale data envelopment analysis (DEA) model is developed with a maintained hypothesis of convexity in input–output space. This hypothesis is not consistent with standard microeconomic production theory that posits an S-shape for the production frontier, i.e. for production technologies that obey the Regular Ultra Passum Law. Consequently, measures of technical efficiency assuming convexity are biased downward. In this paper, we provide a more general DEA model that allows the S-shape.
Data envelopment analysis (DEA); S-shaped production function; Convex hull estimation; Isoquant estimation;
http://www.sciencedirect.com/science/article/pii/S0377221714000186
Olesen, Ole B.
Ruggiero, John
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:570-5812017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:570-581
article
Concealment measurement and flow distribution of military supply transportation: A double-entropy model
To address the issues of military supply distribution and transportation under concealment restrictions during war preparation and warfare periods, this study proposes a double-entropy model that measures the degree of comprehensive concealment of military supply transportation from the perspectives of transportation and detection. To reflect real road conditions, we further develop this double-entropy model by taking the width and length of roads into account and introducing the limitations of average transportation. The reasonableness of this model and its related definitions is then demonstrated by theoretical analysis and mathematical proof. Subsequently, three distinctive properties of military supply transportation via a road or a road network, namely unordering, scalability, and directionality, are investigated. Based on the double-entropy model and the above properties, a flow distribution model of military supply is designed, which addresses a vital issue in the event of a military confrontation or regional war. Finally, we provide an example that calculates an optimal flow distribution schedule for a regional military drill in Jiangsu Province, China, to demonstrate the proposed concepts and approaches.
Transportation; Military supply; Concealment; Double-entropy model; Flow distribution;
http://www.sciencedirect.com/science/article/pii/S0377221717305945
Zhou, Wei
Zhang, Cheng
Wang, Qiangqiang
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:582-6062017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:582-606
article
Energy management for stationary electric energy storage systems: A systematic literature review
Electric Energy Storage Systems (EESS) have received increased attention in recent years due to their important role in the active management of energy supply systems. With the increasing shares of intermittent Renewable Energy Sources (RES) in today's energy supply, balancing energy demand and energy supply over time becomes more and more challenging. EESS are recognized as a key technology to overcome this challenge by storing energy and converting it back when needed. Even though some EESS solutions are already available on the market, EESS suffer from technical limitations and entail high investment costs. Energy management is responsible for managing the operations of EESS and the interactions with the surrounding systems. An optimal energy management is an important precondition to ensure the economic viability of EESS.
OR in energy; Electric Energy Storage Systems; Energy management; Optimal policy; Optimal strategy; Optimal scheduling; Solution techniques; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221717305933
Weitzel, Timm
Glock, Christoph H.
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:697-7082017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:697-708
article
Emergency response in natural disaster management: Allocation and scheduling of rescue units
Natural disasters, such as earthquakes, tsunamis and hurricanes, cause tremendous harm each year. In order to reduce casualties and economic losses during the response phase, rescue units must be allocated and scheduled efficiently. As this problem is one of the key issues in emergency response and has been addressed only rarely in the literature, this paper develops a corresponding decision support model that minimizes the sum of completion times of incidents weighted by their severity. The presented problem is a generalization of the parallel-machine scheduling problem with unrelated machines, non-batch sequence-dependent setup times and a weighted sum of completion times – thus, it is NP-hard. Drawing on the scheduling and routing literature, we propose and computationally compare several heuristics, including a Monte Carlo-based heuristic, the joint application of 8 construction heuristics and 5 improvement heuristics, and GRASP metaheuristics. Our results show that problem instances (with up to 40 incidents and 40 rescue units) can be solved in less than a second, with results at most 10.9% to 33.9% higher than the optimal values. Compared to current best practice solutions, the overall harm can be reduced by up to 81.8%.
Decision support systems; Natural Disaster Management (NDM); Heuristics; Assignment; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221713008527
Wex, Felix
Schryen, Guido
Feuerriegel, Stefan
Neumann, Dirk
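The severity-weighted completion-time objective described above has a classical greedy building block, the weighted-shortest-processing-time (WSPT) rule; a simplified sketch with identical units (the paper's setting additionally has unrelated units and sequence-dependent setup times, which this illustration omits):

```python
import heapq

def wspt_schedule(incidents, n_units):
    """Greedy heuristic for minimising the severity-weighted sum of
    completion times on identical rescue units.

    incidents: list of (weight, processing_time) pairs; the highest
    weight/time ratio is served first (WSPT rule), each incident on the
    earliest-free unit. Returns the weighted sum of completion times.
    """
    order = sorted(incidents, key=lambda wp: wp[0] / wp[1], reverse=True)
    free = [0.0] * n_units          # heap of unit-free times
    heapq.heapify(free)
    total = 0.0
    for w, p in order:
        start = heapq.heappop(free)
        finish = start + p
        total += w * finish
        heapq.heappush(free, finish)
    return total
```

On a single unit, WSPT is optimal for the weighted sum of completion times without setups; with setups and unrelated units it becomes a construction heuristic of the kind the paper compares.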
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:453-4612017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:453-461
article
Preference modeling experiments with surrogate weighting procedures for the PROMETHEE method
One of the main tasks in a multi-criteria decision-making process is to define weights for the evaluation criteria. However, in many situations, the decision-maker (DM) may not be confident about defining specific values for these weights and may prefer to use partial information to represent the values of such weights with surrogate weights. Although the use of surrogate weighting procedures has already been explored in the literature for the additive model, there is a gap with regard to experiments with this kind of preference modeling in outranking-based methods such as PROMETHEE, even though applications with surrogate weights already exist in the literature. Thus, this paper presents an experimental study on preference modeling based on simulation, so as to increase understanding and acceptance of a recommendation obtained when using surrogate weights within the PROMETHEE method. The main approaches to surrogate weights in the literature (EW, RS, RR and ROC) have been evaluated for choice and ranking problematics through statistical procedures, including Kendall's tau coefficient. The surrogate weighting procedure that most faithfully represents a DM's value system according to this analysis is the ROC procedure.
Multi criteria decision analysis; Preference modeling; PROMETHEE method; Partial information; Surrogate weights;
http://www.sciencedirect.com/science/article/pii/S037722171730718X
de Almeida Filho, Adiel T.
Clemente, Thárcylla R.N.
Morais, Danielle Costa
de Almeida, Adiel Teixeira
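The four surrogate weighting procedures compared in the abstract (EW, RS, RR, ROC) have standard closed-form definitions from ranked criteria; a minimal sketch (function name is illustrative):

```python
def surrogate_weights(n):
    """Standard surrogate weighting procedures for n criteria ranked from
    most important (i = 1) to least important (i = n)."""
    ew = [1.0 / n] * n                                                  # equal weights
    rs = [2.0 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]   # rank sum
    rr_raw = [1.0 / i for i in range(1, n + 1)]                         # rank reciprocal
    rr = [v / sum(rr_raw) for v in rr_raw]
    roc = [sum(1.0 / k for k in range(i, n + 1)) / n                    # rank order centroid
           for i in range(1, n + 1)]
    return {"EW": ew, "RS": rs, "RR": rr, "ROC": roc}
```

All four vectors sum to one; ROC concentrates the most weight on the top-ranked criterion, which is consistent with the paper's finding that it best reproduces a DM's value system.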
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:115-1282017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:115-128
article
Exact algorithms for the traveling salesman problem with draft limits
This paper deals with the Traveling Salesman Problem (TSP) with Draft Limits (TSPDL), which is a variant of the well-known TSP in the context of maritime transportation. In this recently proposed problem, draft limits are imposed due to restrictions on the port infrastructures. Exact algorithms based on three mathematical formulations are proposed and their performance compared through extensive computational experiments. Optimal solutions are reported for open instances of benchmark problems available in the literature.
Draft limits; Traveling salesman; Cutting planes; Column generation; Extended formulation;
http://www.sciencedirect.com/science/article/pii/S0377221713008655
Battarra, Maria
Pessoa, Artur Alves
Subramanian, Anand
Uchoa, Eduardo
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:675-6852017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:675-685
article
Second-order stochastic dominance constrained portfolio optimization: Theory and computational tests
Due to the definition of second-order stochastic dominance (SSD) in terms of utility theory, portfolio optimization with SSD constraints is of major practical interest. We contribute to the field in two ways: first, we present a self-contained theory with some new results and new proofs of known results; second, we perform a set of tests for computational efficiency. We provide new and simple arguments for the formulation of SSD constraints in a mathematical programming framework. For many individuals, an SSD constraint may seem too severe, and various relaxations (ASSD) have been proposed. We introduce yet another relaxation, directional SSD, where a candidate portfolio is admissible if a step from the benchmark in the direction of the candidate yields a dominating portfolio. The optimal step size depends on individual preferences reflected by the objective function. We compare the computational efficiency of seven approaches for SD constrained portfolio problems, including SSD and ASSD constrained cases.
Portfolio optimization; Second-order stochastic dominance; Stochastic programming; Mean-risk model; Expected utility;
http://www.sciencedirect.com/science/article/pii/S0377221717306264
Kallio, Markku
Dehghan Hardoroudi, Nasim
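For equally likely return scenarios, the SSD relation used in such constraints has a simple finite characterization via cumulative sums of sorted outcomes; a minimal sketch (not the authors' formulation, just the standard equiprobable-scenario criterion):

```python
import numpy as np

def ssd_dominates(x, y):
    """True if return vector x dominates y by second-order stochastic
    dominance, assuming equally likely scenarios: for every k, the sum of
    the k smallest outcomes of x must be at least the sum of the k
    smallest outcomes of y."""
    cx = np.cumsum(np.sort(np.asarray(x, dtype=float)))
    cy = np.cumsum(np.sort(np.asarray(y, dtype=float)))
    return bool(np.all(cx >= cy - 1e-12))
```

An SSD constraint in a portfolio model then requires the candidate portfolio's scenario returns to dominate a benchmark's in this sense.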
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:405-4182017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:405-418
article
UTA-poly and UTA-splines: Additive value functions with polynomial marginals
Additive utility function models are widely used in multiple criteria decision analysis. In such models, a numerical value is associated with each alternative involved in the decision problem. It is computed by aggregating the scores of the alternative on the different criteria of the decision problem. The score of an alternative on each criterion is determined by a marginal value function that evolves monotonically with the performance of the alternative on that criterion. Determining the shape of the marginals is not easy for a decision maker. It is easier for him/her to make statements such as “alternative a is preferred to b”. In order to help the decision maker, UTA disaggregation procedures use linear programming to approximate the marginals by piecewise linear functions based only on such statements. In this paper, we propose to infer polynomials and splines instead of piecewise linear functions for the marginals. To this end, we use semidefinite programming instead of linear programming. We illustrate this new elicitation method and present some experimental results.
Multiple criteria decision analysis; Additive value function model; Preference learning; Ordinal regression; Semidefinite programming;
http://www.sciencedirect.com/science/article/pii/S0377221717302254
Sobrie, Olivier
Gillis, Nicolas
Mousseau, Vincent
Pirlot, Marc
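The piecewise-linear marginals of classical UTA, which the paper replaces with polynomials and splines, are straightforward to evaluate; a minimal sketch of the additive aggregation (names and data layout are illustrative):

```python
import numpy as np

def additive_value(performance, marginals):
    """Additive value of an alternative under piecewise-linear marginals,
    as in classical UTA.

    marginals: one (breakpoints, values) pair per criterion, with values
    nondecreasing; the marginal value is linearly interpolated between
    breakpoints, and the criterion contributions are summed.
    """
    return sum(np.interp(p, bp, v)
               for p, (bp, v) in zip(performance, marginals))
```

With two criteria whose marginals peak at 0.6 and 0.4, an alternative halfway up both scales gets value 0.3 + 0.2 = 0.5.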
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:717-7312017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:717-731
article
Evaluating the strategic behavior of cement producers: An equilibrium problem with equilibrium constraints
This paper investigates the equilibria reached by a number of strategic producers in the cement sector through a technological representation of the market. We present a bilevel model for each producer that characterizes its profit-maximizing behavior. In the bilevel model, the upper-level problem of each producer is constrained by a lower-level market clearing problem representing cement trading and whose objective function corresponds to social welfare. Replacing the lower-level problem by its optimality conditions yields a Mathematical Program with Equilibrium Constraints (MPEC). Then, all strategic producers are jointly considered. Representing their interaction requires solving jointly the interrelated MPECs of all producers, which results in an Equilibrium Problem with Equilibrium Constraints (EPEC).
Linear programming; Cement industry; Equilibrium problem with equilibrium Constraints (EPEC); Mixed-integer linear programming (MILP);
http://www.sciencedirect.com/science/article/pii/S0377221717305842
Allevi, E.
Conejo, A.J.
Oggioni, G.
Riccardi, R.
Ruiz, C.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:62-722017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:62-72
article
An experimental investigation of metaheuristics for the multi-mode resource-constrained project scheduling problem on new dataset instances
In this paper, an overview is presented of the existing metaheuristic solution procedures for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. A fair comparison is made between the different metaheuristic algorithms on the existing benchmark datasets and on a newly generated dataset. Computational results are provided and recommendations for future research are formulated.
Project scheduling; Multi-mode; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221713008357
Van Peteghem, Vincent
Vanhoucke, Mario
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:553-5682017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:553-568
article
Integer programming models for the multidimensional assignment problem with star costs
We consider a variant of the multidimensional assignment problem (MAP) with decomposable costs in which the resulting optimal assignment is described as a set of disjoint stars. This problem arises in the context of multi-sensor multi-target tracking problems, where a set of measurements, obtained from a collection of sensors, must be associated to a set of different targets. To solve this problem we study two different formulations. First, we introduce a continuous nonlinear program and its linearization, along with additional valid inequalities that improve the lower bounds. Second, we state the standard MAP formulation as a set partitioning problem, and solve it via branch and price. These approaches were put to test by solving instances ranging from tripartite to 20-partite graphs of 4 to 30 nodes per partition. Computational results show that our approaches are a viable option to solve this problem. A comparative study is presented.
Combinatorial optimization; Multidimensional assignment problem; Star covering; Multi-sensor multi-target tracking problem; Graph partitioning; Branch and price;
http://www.sciencedirect.com/science/article/pii/S0377221713008710
Walteros, Jose L.
Vogiatzis, Chrysafis
Pasiliao, Eduardo L.
Pardalos, Panos M.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:252-2642017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:252-264
article
A column generation approach for solving the patient admission scheduling problem
This paper addresses the Patient Admission Scheduling (PAS) problem. The PAS problem entails assigning elective patients to beds while satisfying a number of hard constraints and as many soft constraints as possible, and it arises at all planning levels of hospital management. There exist a few different variants of this problem. In this paper we consider one such variant and propose an optimization-based heuristic building on branch-and-bound, column generation, and dynamic constraint aggregation to solve it. We achieve tighter lower bounds than previously reported in the literature and, in addition, we are able to produce new best known solutions for five out of twelve instances from a publicly available repository.
OR in health services; Scheduling; Column generation; Dynamic constraint aggregation; Dual disaggregation; Branch and bound;
http://www.sciencedirect.com/science/article/pii/S0377221713008734
Range, Troels Martin
Lusby, Richard Martin
Larsen, Jesper
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:707-7162017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:707-716
article
Entropic risk measures and their comparative statics in portfolio selection: Coherence vs. convexity
We conduct a decision-theoretic analysis of optimal portfolio choices and, in particular, their comparative statics under two types of entropic risk measures, the coherent entropic risk measure (CERM) and the convex entropic risk measure (ERM). Starting with the portfolio selection between a risky and a risk free asset (framework of Arrow (1965) and Pratt (1964)), we find a restrictive all-or-nothing investment decision under the CERM, while the ERM yields diversification. We then address a portfolio problem with two risky assets, and provide comparative statics with respect to the investor’s risk aversion (framework of Ross (1981)). Here, both the CERM and the ERM exhibit closely interrelated inconsistencies with respect to the interpretation of their risk parameters as a measure of risk aversion: for any two investors with different risk parameters, it may happen that the investor with the higher risk parameter invests more in the riskier one of the two assets. Finally, we analyze the portfolio problem “risky vs. risk free” in the presence of an independent background risk, and analyze the effect of changes in this background risk (framework of Gollier and Pratt (1996)). Again, we find questionable predictions: under the CERM, the optimal risky investment is always increasing instead of decreasing when a background risk is introduced, while under the ERM it remains unaffected.
Decision analysis; Entropic risk measure; Portfolio selection; Ross risk aversion; Risk vulnerability;
http://www.sciencedirect.com/science/article/pii/S0377221717306343
Brandtner, Mario
Kürsten, Wolfgang
Rischau, Robert
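The convex entropic risk measure (ERM) analyzed in the abstract has a simple closed form for a position with equally likely outcomes; a minimal numpy sketch (the coherent variant CERM involves an additional construction not shown here):

```python
import numpy as np

def entropic_risk(x, gamma):
    """Convex entropic risk measure of a position with equally likely
    outcomes x: rho(X) = (1/gamma) * log E[exp(-gamma * X)].

    gamma > 0 is the risk parameter; larger gamma penalizes downside
    outcomes more heavily.
    """
    x = np.asarray(x, dtype=float)
    return np.log(np.mean(np.exp(-gamma * x))) / gamma
```

By Jensen's inequality, rho(X) >= -E[X], with equality for a riskless position, which is the risk-aversion property underlying the diversification behavior discussed in the paper.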
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:180-1862017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:180-186
article
Reliability analysis of a single warm-standby system subject to repairable and nonrepairable failures
An n-unit system provisioned with a single warm standby is investigated. The individual units are subject to repairable failures, while the entire system is subject to a nonrepairable failure at some finite but random time in the future. System performance measures for systems observed over a time interval of random duration are introduced. Two models to compute these system performance measures, one employing a policy of block replacement and the other without a block replacement policy, are developed. Distributional assumptions involving distributions of phase type introduce matrix Laplace transforms into the calculation of the performance measures. It is shown that these measures can easily be computed on a laptop computer using Microsoft Excel. A simple economic model is used to illustrate how the performance measures may be used to determine optimal economic design specifications for the warm standby.
Reliability; Applied probability; Warm standby system; Distributions of phase type;
http://www.sciencedirect.com/science/article/pii/S0377221713010114
Wells, Charles E.
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:428-4392017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:428-439
article
Determining the fuzzy measures in multiple criteria decision aiding from the tolerance perspective
We consider multiple criteria decision aiding (MCDA) in the case of interactions between criteria. In dealing with interactions between criteria, fuzzy measures and integrals have demonstrated great advantages. Nevertheless, the determination of fuzzy measures has proven difficult because the capacities of not only single criteria but also of all subsets of criteria need to be identified. Due to the value judgment essence of MCDA, the attitudes of the decision maker (DM) are typically modeled to identify fuzzy measures. In this paper, the tolerance attitudes of the DM, which imply a direct requirement rather than partial preference, are modeled for the determination of fuzzy measures for the first time. With two scales developed in this paper, the DM can directly express tolerance attitudes towards certain criteria rather than providing partial preferences through pairwise comparisons. As a result, the approach requires less prior knowledge and is, to some extent, more efficient. Further, the inherent interacting mechanism of criteria under different tolerance attitudes is explored. Finally, the tolerance attitudes are applied to the process of multiple criteria analysis using a Choquet integral. A classic student evaluation problem is given as an example, and the evaluation results are compared with additive models. This paper not only provides new inspiration for the determination of fuzzy measures but also improves the descriptive capacity of fuzzy measures with respect to the real world.
Multiple criteria analysis; Tolerance attitude; Interaction; Fuzzy measure; Choquet integral;
http://www.sciencedirect.com/science/article/pii/S0377221717304575
Li, Jianping
Yao, Xiaoyang
Sun, Xiaolei
Wu, Dengsheng
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:558-5692017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:558-569
article
Tactical sales forecasting using a very large set of macroeconomic indicators
Tactical forecasting in supply chain management supports planning for inventory, scheduling production, and raw material purchase, amongst other functions. It typically refers to forecasts up to 12 months ahead. Traditional forecasting models take into account univariate information extrapolating from the past, but cannot anticipate macroeconomic events, such as steep increases or declines in national economic activity. In practice this is countered by using managerial expert judgement, which is well known to suffer from various biases, is expensive and does not scale. This paper evaluates multiple approaches to improve tactical sales forecasting using macroeconomic leading indicators. The proposed statistical forecast automatically selects both the leading indicators and the order of the lead for each selected indicator. However, as the future values of the leading indicators are unknown, additional uncertainty is introduced. This uncertainty is controlled in our methodology by restricting inputs to an unconditional forecasting setup. We compare this with the conditional setup, where future indicator values are assumed to be known, and assess the theoretical loss of forecast accuracy. We also evaluate purely statistical model building against judgement-aided models, where potential leading indicators are pre-filtered by experts, quantifying the accuracy-cost trade-off. The proposed framework improves forecasting accuracy over established time series benchmarks, while providing useful insights about the key leading indicators. We evaluate the proposed approach on a real case study and find 18.8% accuracy gains over the current forecasting process.
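The joint selection of indicators and lead orders can be sketched with a plain coordinate-descent LASSO on synthetic data; the indicator set, the three-period lead, and all parameters below are invented for illustration and are not the paper's dataset or exact procedure:

```python
import numpy as np

def lasso_cd(X, y, alpha, iters=500):
    """Plain coordinate-descent LASSO via soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(0)
T, max_lead = 120, 6
indicators = rng.normal(size=(T, 5))                    # invented macro indicators
sales = 2.0 * np.roll(indicators[:, 0], 3) + rng.normal(scale=0.1, size=T)

# One column per (indicator, lead) pair: the penalty zeroes out irrelevant
# pairs, selecting the indicator and its lead order jointly.
X = np.column_stack([np.roll(indicators[:, j], k)
                     for j in range(indicators.shape[1])
                     for k in range(1, max_lead + 1)])
X, y = X[max_lead:], sales[max_lead:]                   # drop rows with wrapped lags

beta = lasso_cd(X, y, alpha=0.05)
selected = np.flatnonzero(np.abs(beta) > 1e-3)          # surviving (indicator, lead) pairs
```

Here column index 2 corresponds to indicator 0 at lead 3, the only true driver in the simulated data.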
Forecasting; Tactical planning; Leading indicators; LASSO; Variable selection;
http://www.sciencedirect.com/science/article/pii/S0377221717305957
Sagaert, Yves R.
Aghezzaf, El-Houssaine
Kourentzes, Nikolaos
Desmet, Bram
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:637-6522017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:637-652
article
A transient stochastic simulation–optimization model for operational fuel planning in-theater
Army fuel planners are responsible for developing daily loading plans that specify which tankers to load, with what fuel, and where to send the loaded tankers. The tools used to accomplish this task are custom-built spreadsheets which require large amounts of time and effort to use, update, and keep free of errors. This research presents a transient stochastic simulation–optimization model of the in-theater bulk fuel supply chain, where the simulation component evaluates the performance of the fuel supply chain under a particular fuel distribution policy and the optimization component updates the policy so that it results in the performance desired by the Army fuel planner. The fuel distribution policy can then be used to derive the daily loading plan. Due to the multi-objective nature of the problem, the set of policies that form the efficient frontier are all candidate policies for the Army fuel planner to select from. Results of experimentation with a wide variety of supply chain scenarios indicate that, for a given supply chain scenario, the optimization portion of the model identifies a set of fuel distribution policies that address the objectives of the Army fuel planner. In addition, the simulation–optimization model comfortably solves the largest supply chain scenarios the Army fuel planner would reasonably be expected to encounter.
Simulation; Optimization; Army logistics; Fuel supply chain; Decision support;
http://www.sciencedirect.com/science/article/pii/S0377221717305982
Lobo, Benjamin J.
Brown, Donald E.
Gerber, Matthew S.
Grazaitis, Peter J.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:334-3382017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:334-338
article
On two-echelon inventory systems with Poisson demand and lost sales
We consider a two-echelon, continuous review inventory system under Poisson demand and a one-for-one replenishment policy. Demand is lost if no items are available at the local warehouse, the central depot, or in the pipeline in between. We give a simple, fast and accurate approach to approximate the service levels in this system. In contrast to other methods, we do not need an iterative analysis scheme. Our method works very well for a broad set of cases, with deviations from simulation below 0.1% on average and below 0.36% for 95% of all test instances.
Inventory; Two-echelon system; Spare parts; Lost sales; Approximation;
http://www.sciencedirect.com/science/article/pii/S0377221713010151
Alvarez, Elisa
van der Heijden, Matthieu
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:755-7642017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:755-764
article
Measuring performance improvement of Taiwanese commercial banks under uncertainty
In order to enable domestic commercial banks to be more competitive globally, the Taiwanese government has twice attempted to financially restructure them, in 2001 and 2004. Different from other studies which use deterministic analyses to measure changes in performance between two periods, this paper adopts probabilistic analysis to take the uncertainty related to certain factors into account. Data from six years, from 2005 to 2010, are divided into two periods, 2005–2007 and 2008–2010, to calculate the global Malmquist productivity index (MPI) as a measure of the change in performance. By assuming beta distributions for the data, a Monte Carlo simulation is conducted to find the distribution of the MPI. The results show that, in general, the performance of the commercial banks has indeed improved. While conventional deterministic analyses may mislead top managers and make them overconfident about results that are actually uncertain, probabilistic analysis can produce more reliable information that can thus lead to better decisions.
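A hedged, much-simplified sketch of the probabilistic idea (a single-input productivity ratio in place of the paper's DEA-based global MPI): sample the uncertain data from scaled beta distributions and read off the distribution of the index; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
draws = 10_000

# Uncertain data modelled as scaled Beta variables (all figures invented):
out_p1 = 80 + 20 * rng.beta(2, 5, draws)    # bank output, period 1
out_p2 = 90 + 20 * rng.beta(5, 2, draws)    # bank output, period 2
inp_p1 = 50 + 10 * rng.beta(2, 2, draws)    # input, period 1
inp_p2 = 50 + 10 * rng.beta(2, 2, draws)    # input, period 2

# Productivity index as a ratio of single-input, single-output efficiencies;
# values above 1 indicate improved performance between the periods.
mpi = (out_p2 / inp_p2) / (out_p1 / inp_p1)

prob_improved = float(np.mean(mpi > 1.0))
ci = np.percentile(mpi, [2.5, 97.5])        # uncertainty band for the index
```

The point of the Monte Carlo step is exactly what the abstract argues: a single deterministic MPI value hides the spread that `ci` makes explicit.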
Data envelopment analysis; Malmquist productivity index; Probability; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221713009053
Kao, Chiang
Liu, Shiang-Tai
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:740-7542017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:740-754
article
Enhanced multi-Hoffmann heuristic for efficiently solving real-world assembly line balancing problems in automotive industry
In production systems of automobile manufacturers, multi-variant products are assembled on paced final assembly lines. The assignment of operations to workplaces and workers determines the productivity of the manufacturing process. In research, various exact and heuristic solution procedures have been developed for different versions of the so-called assembly line balancing problem.
Scheduling; Production; Assembly line; Line balancing; Combinatorial optimization; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221713009041
Sternatz, Johannes
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:534-5472017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:534-547
article
The daily tail assignment problem under operational uncertainty using look-ahead maintenance constraints
The tail assignment problem is a critical part of the airline planning process that assigns specific aircraft to sequences of flights, called lines-of-flight, to satisfy operational constraints. The aim of this paper is to develop an operationally flexible method, based upon the one-day routes business model, to compute tail assignments that satisfy short-range—within the next three days—aircraft maintenance requirements. While maintenance plans commonly span multiple days, the methods used to compute tail assignments for the given plans can be overly complex and provide little recourse in the event of schedule perturbations. The presented approach addresses operational uncertainty by using solutions from the one-day routes aircraft maintenance routing approach as input. The daily tail assignment problem is solved with an objective to satisfy maintenance requirements explicitly for the current day and implicitly for the subsequent two days. A computational study is performed to assess the performance of exact and heuristic solution algorithms that modify the input lines-of-flight to reduce maintenance misalignments. The daily tail assignment problem and the developed algorithms are demonstrated to compute solutions that effectively satisfy maintenance requirements when evaluated using input data collected from three different airlines.
Transportation; Tail assignment; Maintenance planning; Branch-and-price; Iterative algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221717305829
Maher, Stephen J.
Desaulniers, Guy
Soumis, François
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:419-4272017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:419-427
article
Efficient weight vectors from pairwise comparison matrices
Pairwise comparison matrices are frequently applied in multi-criteria decision making. A weight vector is called efficient if no other weight vector is at least as good in approximating the elements of the pairwise comparison matrix, and strictly better in at least one position. A weight vector is weakly efficient if the pairwise ratios cannot be improved in all non-diagonal positions. We show that the principal eigenvector is always weakly efficient, but numerical examples show that it can be inefficient. We propose linear programs to test whether a given weight vector is (weakly) efficient and, in the case of (strong) inefficiency, to calculate an efficient (strongly) dominating weight vector. The proposed algorithms are implemented in the Pairwise Comparison Matrix Calculator, available at pcmc.online.
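The principal eigenvector discussed above can be computed by simple power iteration; the sketch below uses a deliberately consistent 3×3 matrix (invented for illustration) so the exact weights are known, and is not the paper's LP-based efficiency test:

```python
import numpy as np

def principal_eigenvector(A, iters=100):
    """Perron eigenvector of a positive pairwise comparison matrix by
    power iteration, normalised to sum to one."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Consistent reciprocal matrix built from the weights (0.6, 0.3, 0.1),
# i.e. a_ij = w_i / w_j, so the eigenvector can be checked exactly:
A = np.array([[1.0, 2.0, 6.0],
              [0.5, 1.0, 3.0],
              [1 / 6, 1 / 3, 1.0]])
w = principal_eigenvector(A)
```

For a consistent matrix the eigenvector is efficient; the paper's interest lies in inconsistent matrices, where this vector can fail efficiency and the LP test becomes necessary.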
Multiple criteria analysis; Pairwise comparison matrix; Pareto optimality; Efficiency; Linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221717305726
Bozóki, Sándor
Fülöp, János
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:491-5072017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:491-507
article
Non-compensatory composite indicators for the evaluation of urban planning policy: The Land-Use Policy Efficiency Index (LUPEI)
In this research paper, we define and test an ELECTRE III-based approach to the construction of non-compensatory composite indicators; these indicators are used for the evaluation of environmental and social performances of urban and regional planning policies. We tested the methodology for the construction of the Land-Use Policy Efficiency Index (LUPEI) on the municipal scale applied to a sample of municipalities in the Apulia Region (Southern Italy). Based on the literature review concerning composite indicators, we found that linear aggregation rules are the most widely applied aggregation procedures for composite indicators. However, their applicability depends on a set of strong theoretical and operational conditions. If these conditions do not hold, then other aggregation and weighting procedures must be applied to construct the composite indicators. We tested the ELECTRE III-based approach through a fruitful interaction with three experts participating in a focus group. We found that composite indicators are powerful tools when it comes to the assessment of multidimensional planning issues. Since each sub-indicator provides different information and responds to different goals, rankings and assessment based on mono-indicator frameworks can lead to incomplete or even biased results that do not consider an integrated approach to land-use policy efficiency. Moreover, both experts and decision-makers appreciated the role of composite indicators in increasing knowledge and providing deeper insights into complex phenomena in the domains of urban and regional planning.
(S) Multiple-Criteria Analysis; (S) Decision Support Systems; Composite indicators; ELECTRE methods; Land-use change; Land-use policy;
http://www.sciencedirect.com/science/article/pii/S0377221717307075
Attardi, Raffaele
Cerreta, Maria
Sannicandro, Valentina
Torre, Carmelo Maria
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:461-4692017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:461-469
article
Benefits of a truck appointment system on the service quality of inland transport modes at a multimodal container terminal
Container terminals pay more and more attention to the service quality of inland transport modes such as trucks, trains and barges. Truck appointment systems are a common approach to reduce truck turnaround times. This paper provides a tool to use the truck appointment system to increase not only the service quality of trucks, but also of trains, barges and vessels. We propose a mixed integer linear programming model to determine the number of appointments to offer with regard to the overall workload and the available handling capacity. The model is based on a network flow representation of the terminal and aims to minimize overall delays at the terminal. It simultaneously determines the number of truck appointments to offer and allocates straddle carriers to different transport modes. Numerical experiments, conducted on actual data, quantify the benefits of this combined solution approach. Discrete-event simulation validates the results obtained by the optimization model in a stochastic environment.
Container terminal; Intermodal transportation; Resource allocation; Straddle carrier; Truck appointment system;
http://www.sciencedirect.com/science/article/pii/S0377221713005675
Zehendner, Elisabeth
Feillet, Dominique
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:624-6352017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:624-635
article
Spare parts management: Linking distributional assumptions to demand classification
Spare parts are known to be associated with intermittent demand patterns and such patterns cause considerable problems with regards to forecasting and stock control due to their compound nature that renders the normality assumption invalid. Compound distributions have been used to model intermittent demand patterns; there is however a lack of theoretical analysis and little relevant empirical evidence in support of these distributions. In this paper, we conduct a detailed empirical investigation on the goodness of fit of various compound Poisson distributions and we develop a distribution-based demand classification scheme the validity of which is also assessed in empirical terms. Our empirical investigation provides evidence in support of certain demand distributions and the work described in this paper should facilitate the task of selecting such distributions in a real world spare parts inventory context. An extensive discussion on parameter estimation related difficulties in this area is also provided.
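One of the compound Poisson candidates, the stuttering Poisson (a Poisson number of orders per period, each of geometric size), is easy to simulate and check against its theoretical moments; the parameter values below are invented, and this sketch is not the paper's classification scheme:

```python
import numpy as np

rng = np.random.default_rng(7)

def stuttering_poisson(lam, p, periods):
    """Per-period demand: a Poisson(lam) number of customer orders, each of
    geometric(p) size (support 1, 2, ...) -- one compound Poisson candidate."""
    orders = rng.poisson(lam, periods)
    return np.array([rng.geometric(p, n).sum() if n else 0 for n in orders])

demand = stuttering_poisson(lam=0.4, p=0.5, periods=50_000)

# Theoretical moments of this compound law:
mean_th = 0.4 / 0.5                   # lam * E[size] = lam / p = 0.8
var_th = 0.4 * (2 - 0.5) / 0.5 ** 2   # lam * E[size^2] = lam * (2 - p) / p^2 = 2.4
```

The many zero periods (probability e^(-lam) per period) and the variance well above the mean are exactly the intermittent, non-normal features the abstract refers to.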
Inventory; Demand distributions; Intermittent demand; Spare parts;
http://www.sciencedirect.com/science/article/pii/S0377221713010278
Lengu, D.
Syntetos, A.A.
Babai, M.Z.
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:378-3862017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:378-386
article
A service flow model for the liner shipping network design problem
Global liner shipping is a competitive industry, requiring liner carriers to deploy their vessels efficiently to construct a cost-competitive network. This paper presents a novel compact formulation of the liner shipping network design problem (LSNDP) based on service flows. The formulation alleviates issues faced by arc-flow formulations in handling multiple calls to the same port, a problem which earlier LSNDP formulations have not fully addressed. Multiple calls are handled by introducing service nodes alongside port nodes in a graph representation of the problem, with numbered arcs between a port and a service node. An arc from a port node to a service node indicates whether a service calls at the port. This representation allows recurrent calls of a service to a port, which previously could not be handled by LSNDP models. The model ensures strictly weekly frequencies of services, ensures that port-vessel draft capabilities are not violated, and respects vessel capacities and the number of vessels available. The profit of the generated network is maximized, i.e., the revenue of transported cargo minus the operational costs of the network and a penalty for cargo that is not transported. The model can be used to design liner shipping networks that utilize a container carrier's assets efficiently and to investigate possible scenarios of changed market conditions. The model is solved as a mixed integer program. Results are presented for the two smallest instances of the benchmark suite LINER-LIB-2012 presented in Brouer, Alvarez, Plum, Pisinger, and Sigurd (2013).
Liner shipping; Network design; Maritime optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713008801
Plum, Christian E.M.
Pisinger, David
Sigurd, Mikkel M.
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:511-5292017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:511-529
article
A rank-dependent bi-criterion equilibrium model for stochastic transportation environment
The paper proposes a rank-dependent bi-criterion (travel time and monetary travel cost) equilibrium model for route choice problems in which stochasticities in both the criteria measurements and the subjective preferences are considered simultaneously. Travelers rank all choices according to the generalized travel dis-utility and then choose from the first K best-ranked ones. By inversely searching the supporting preference sets for each alternative in each rank, the overall choice probability of a path is determined. The equilibrium model is formulated and transformed into a fixed-point problem. The existence of the equilibrium is established for a simple two-link network, but may not be guaranteed for more complex network topologies. When K=1, the proposed model reduces to the optimal user equilibrium that allows for the stochasticities of criteria measurements and arbitrarily distributed preferences. Remarks on the selection of some parameters in the new model and on solution algorithms are also discussed. Two numerical examples illustrate the implementation of the model, as well as its capability and flexibility in handling heterogeneity in traveler preferences and requirements. The paper concludes with discussions of the assumptions and limitations of the new model as well as possible future research opportunities.
Route choice; Rank dependent user equilibrium; Stochastic multicriteria acceptability analysis;
http://www.sciencedirect.com/science/article/pii/S0377221714000447
Wang, Guangchao
Jia, Ning
Ma, Shoufeng
Qi, Hang
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:583-5932017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:583-593
article
Adjusting a railway timetable in case of partial or complete blockades
Unexpected events, such as accidents or track damages, can have a significant impact on the railway system, causing trains to be canceled or delayed. In case of a disruption it is important that dispatchers quickly present a good solution in order to minimize the nuisance for the passengers. In this paper, we focus on adjusting the timetable of a passenger railway operator in case of major disruptions. Both a partial and a complete blockade of a railway line are considered. Given a disrupted infrastructure situation and a forecast of the characteristics of the disruption, our goal is to determine a disposition timetable, specifying which trains will still be operated during the disruption and determining the timetable of these trains. Without explicitly taking the rolling stock rescheduling problem into account, we develop our models such that the probability that feasible solutions to this problem exist is high. The main objective is to maximize the service level offered to the passengers. We present integer programming formulations and test our models using instances from Netherlands Railways.
Transportation; Railways; Disruption management; Timetabling; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221713010047
Louwerse, Ilse
Huisman, Dennis
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:607-6222017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:607-622
article
A branch-and-price approach to the feeder network design problem
In this paper we consider the problem of designing a container liner shipping feeder network. The designer has to choose which ports to serve on a set of rotations that start and end at a central hub. Many operational characteristics are considered, such as variable leg-by-leg speeds and cargo transit times. Realistic instances are generated from the LinerLib benchmark suite. The problem is solved with a branch-and-price algorithm, which can solve most instances to optimality within one hour. The results also provide insights on the cost structure and desirable features of optimal routes. These insights were obtained by means of an analysis where scenarios are generated varying internal and external conditions, such as fuel costs and port demands.
OR in maritime industry; Network design; Liner shipping; Branch and price; Vehicle routing problem;
http://www.sciencedirect.com/science/article/pii/S0377221717306045
Santini, Alberto
Plum, Christian E.M.
Ropke, Stefan
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:247-2512017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:247-251
article
Computation of the optimal tolls on the traffic network
The present paper is devoted to the computation of optimal tolls on a traffic network, described as a fuzzy bilevel optimization problem: a bilinear optimization problem with a crisp upper level and a fuzzy lower level. An effective algorithm for computing optimal tolls for the upper-level decision maker is developed under the assumption that the lower-level decision maker also chooses an optimal solution. The algorithm is based on the membership function approach and provides a global optimal solution of the fuzzy bilevel optimization problem.
Fuzzy optimization; Bilevel optimization; Toll problem;
http://www.sciencedirect.com/science/article/pii/S0377221713008886
Budnitzki, Alina
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:149-1582017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:149-158
article
Class clustering destroys delay differentiation in priority queues
This paper considers a discrete-time priority queueing model with one server and two types (classes) of customers. Class-1 customers have absolute (service) priority over class-2 customers. New customer batches enter the system at the rate of one batch per slot, according to a general independent arrival process, i.e., the batch sizes (total numbers of arrivals) during consecutive time slots are i.i.d. random variables with arbitrary distribution. All customers entering the system during the same time slot (i.e., belonging to the same arrival batch) are of the same type, but customer types may change from slot to slot, i.e., from batch to batch. Specifically, the types of consecutive customer batches are correlated in a Markovian way, i.e., the probability that any batch of customers has type 1 or 2, respectively, depends on the type of the previous customer batch that has entered the system. Such an arrival model allows one to vary not only the relative loads of both customer types in the arrival stream, but also the amount of correlation between the types of consecutive arrival batches. The results reveal that the amount of delay differentiation between the two customer classes that can be achieved by the priority mechanism strongly depends on the amount of such interclass correlation (or, class clustering) in the arrival stream. We believe that this phenomenon has been largely overlooked in the priority-scheduling literature.
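The clustering effect can be explored with a small slot-by-slot simulation; this is an illustrative sketch, not the paper's analytical model: batch sizes are taken Poisson (the paper allows a general distribution), class 0 plays the high-priority role, and p_stay is the probability that the next batch keeps the previous batch's class:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(slots, p_stay, batch_mean=0.45):
    """One server, one customer served per slot; each slot brings one batch
    whose class follows a two-state Markov chain with persistence p_stay.
    Class 0 has absolute priority over class 1."""
    queues = [[], []]               # arrival slots of waiting customers
    delays = [[], []]
    cls = 0
    for t in range(slots):
        if rng.random() > p_stay:   # next batch switches class
            cls = 1 - cls
        for _ in range(rng.poisson(batch_mean)):
            queues[cls].append(t)
        for c in (0, 1):            # serve one customer, class 0 first
            if queues[c]:
                delays[c].append(t - queues[c].pop(0))
                break
    return [float(np.mean(d)) for d in delays]

mixed = simulate(200_000, p_stay=0.5)       # class sequence nearly uncorrelated
clustered = simulate(200_000, p_stay=0.95)  # long runs of a single class
gap_mixed = mixed[1] - mixed[0]             # delay differentiation
gap_clustered = clustered[1] - clustered[0]
```

With long monochromatic runs, priority rarely gets a chance to act, so the delay gap between the classes shrinks, which is the clustering effect the title describes.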
Priority queueing; Multiclass; Discrete-time; Interclass correlation; Delay differentiation;
http://www.sciencedirect.com/science/article/pii/S0377221713009806
Bruneel, Herwig
Maertens, Tom
Walraevens, Joris
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:387-3982017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:387-398
article
Exact and heuristic methods for placing ships in locks
The ship placement problem constitutes a daily challenge for planners in tide river harbours. In essence, it entails positioning a set of ships into as few lock chambers as possible while satisfying a number of general and specific placement constraints. These constraints make the ship placement problem different from traditional 2D bin packing. A mathematical formulation for the problem is presented. In addition, a decomposition model is developed which allows for computing optimal solutions in a reasonable time. A multi-order best fit heuristic for the ship placement problem is introduced, and its performance is compared with that of the left-right-left-back heuristic. Experiments on simulated and real-life instances show that the multi-order best fit heuristic beats the other heuristics by a landslide, while maintaining comparable calculation times. Finally, the new heuristic’s optimality gap is small, while it clearly outperforms the exact approach with respect to calculation time.
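Ignoring the 2-D geometry and the lock-specific constraints, the flavor of a best-fit placement heuristic can be sketched in one dimension on ship areas alone; this simplification is ours, not the paper's multi-order best fit heuristic:

```python
def best_fit(ship_areas, chamber_area):
    """Best-fit decreasing on ship *areas* only: a 1-D simplification of the
    2-D ship placement problem (real lockage planning must also respect
    chamber geometry and the specific placement constraints)."""
    chambers = []   # remaining free area per opened chamber
    for area in sorted(ship_areas, reverse=True):
        fits = [i for i, free in enumerate(chambers) if free >= area]
        if fits:
            best = min(fits, key=lambda i: chambers[i])  # tightest fit
            chambers[best] -= area
        else:
            chambers.append(chamber_area - area)         # open a new chamber
    return len(chambers)

n_chambers = best_fit([60, 50, 45, 30, 20, 15], chamber_area=100)
```

The specific placement constraints mentioned in the abstract are what separate the real problem from this plain bin-packing view.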
Ship placement problem; Packing; Heuristics; Lock scheduling; Decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221713005523
Verstichel, J.
De Causmaecker, P.
Spieksma, F.C.R.
Vanden Berghe, G.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:187-1942017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:187-194
article
A bi-objective model for the location of landfills for municipal solid waste
This paper models the locations of landfills and transfer stations and simultaneously determines the sizes of the landfills that are to be established. The model is formulated as a bi-objective mixed integer optimization problem, in which one objective is the usual cost minimization, while the other minimizes pollution. Pollution is addressed through a two-pronged approach: on the one hand, the model includes constraints that enforce legislated limits on pollution; on the other, one of the objective functions attempts to minimize pollution effects, even when solutions formally satisfy the letter of the law. The model is formulated and solved for the data of a region in Chile. Computational results for a variety of parameter choices are provided. These results are expected to aid decision makers in excluding and selecting sites for solid waste facilities.
Landfill location; Transfer stations; Capacity allocation; Pollution decay; Case study; Solid waste management;
http://www.sciencedirect.com/science/article/pii/S0377221713008072
Eiselt, H.A.
Marianov, Vladimir
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:810-8122017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:810-812
article
Notes on classifying inputs and outputs in data envelopment analysis: A comment
Cook and Zhu (2007) introduced an innovative method to deal with flexible measures. Toloo (2009) found a computational problem in their approach and tackled this issue. Amirteimoori and Emrouznejad (2012) claimed that the models of both Cook and Zhu (2007) and Toloo (2009) overestimate the efficiency. In this response, we prove that their claim is incorrect and that there is no overestimation in these approaches.
Data envelopment analysis; Flexible measures; Classifier models;
http://www.sciencedirect.com/science/article/pii/S0377221714000125
Toloo, Mehdi
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:472-4902017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:472-490
article
Co-constructive development of a green chemistry-based model for the assessment of nanoparticles synthesis
Nanomaterials (materials at the nanoscale, 10⁻⁹ meters) are extensively used in several industry sectors due to the improved properties they confer on commercial products. There is a pressing need to produce these materials more sustainably. This paper proposes a Multiple Criteria Decision Aiding (MCDA) approach to assess the implementation of green chemistry principles as applied to the protocols for nanoparticles synthesis. In the presence of multiple green and environmentally oriented criteria, decision aiding is performed with a synergy of ordinal regression methods; preference information in the form of desired assignment for a subset of reference protocols is accepted. The classification models, indirectly derived from such information, are composed of an additive value function and a vector of thresholds separating the pre-defined and ordered classes. The method delivers a single representative model that is used to assess the relative importance of the criteria, identify the possible gains with improvement of the protocol's evaluations and classify the non-reference protocols. This precise recommendation is validated against the outcomes of robustness analysis exploiting the sets of all classification models compatible with all maximal subsets of consistent assignment examples. The introduced approach is used with real-world data concerning silver nanoparticles. It is proven to effectively resolve inconsistency in the assignment examples, tolerate ordinal and cardinal measurement scales, differentiate between inter- and intra-criteria attractiveness and deliver easily interpretable scores and class assignments. This work thoroughly discusses the learning insights that MCDA provided during the co-constructive development of the classification model.
Multiple criteria analysis; Green chemistry; Silver nanoparticles; Sustainability; Ordinal classification;
http://www.sciencedirect.com/science/article/pii/S0377221716308529
Kadziński, Miłosz
Cinelli, Marco
Ciomek, Krzysztof
Coles, Stuart R.
Nadagouda, Mallikarjuna N.
Varma, Rajender S.
Kirwan, Kerry
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:643-6592017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:643-659
article
Forecasting the volatility of crude oil futures using intraday data
We use the information in intraday data to forecast the volatility of crude oil at a horizon of 1–66 days using a variety of models relying on the decomposition of realized variance into its positive and negative parts (semivariances) and its continuous and discontinuous parts (jumps). We show the importance of these decompositions in predictive (in-sample) regressions using a number of specifications. Nevertheless, an important empirical finding comes from an out-of-sample analysis which unambiguously shows the limited value of considering these components. Overall, our results indicate that a simple autoregressive specification mimicking long memory and using past realized variances as predictors does not perform significantly worse than more sophisticated models which include the various components of realized variance.
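The realized-variance decomposition used as a predictor can be sketched directly: with intraday log returns, the positive and negative semivariances sum back to the realized variance. The price series below is invented:

```python
import numpy as np

def realized_measures(intraday_prices):
    """Daily realized variance and its signed decomposition into positive
    and negative semivariances, from intraday prices."""
    r = np.diff(np.log(intraday_prices))   # intraday log returns
    rv = float(np.sum(r ** 2))
    rs_pos = float(np.sum(r[r > 0] ** 2))  # upside semivariance
    rs_neg = float(np.sum(r[r < 0] ** 2))  # downside semivariance
    return rv, rs_pos, rs_neg

prices = [100.0, 100.5, 99.8, 100.2, 100.1]
rv, rs_pos, rs_neg = realized_measures(prices)
```

Separating the jump component would additionally require a jump-robust estimator such as bipower variation, which is omitted from this sketch.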
Volatility forecasting; Crude oil futures; Realized variance; Jumps; Realized semivariance;
http://www.sciencedirect.com/science/article/pii/S037722171400040X
Sévi, Benoît
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:765-7742017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:765-774
article
Electricity market clearing with improved scheduling of stochastic production
In this paper, we consider an electricity market that consists of a day-ahead and a balancing settlement, and includes a number of stochastic producers. We first introduce two reference procedures for scheduling and pricing energy in the day-ahead market: on the one hand, a conventional network-constrained auction purely based on the least-cost merit order, where stochastic generation enters with its expected production and a low marginal cost; on the other, a counterfactual auction that also accounts for the projected balancing costs using stochastic programming. Although the stochastic clearing procedure attains higher market efficiency in expectation than the conventional day-ahead auction, it suffers from fundamental drawbacks with a view to its practical implementation. In particular, it requires flexible producers (those that make up for the lack or surplus of stochastic generation) to accept losses in some scenarios. Using a bilevel programming framework, we then show that the conventional auction, if combined with a suitable day-ahead dispatch of stochastic producers (generally different from their expected production), can substantially increase market efficiency and emulate the advantageous features of the stochastic optimization ideal, while avoiding its major pitfalls.
OR in energy; Electricity market; Stochastic programming; Electricity pricing; Wind power; Bilevel programming;
http://www.sciencedirect.com/science/article/pii/S0377221713009120
Morales, Juan M.
Zugno, Marco
Pineda, Salvador
Pinson, Pierre
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:367-3772017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:367-377
article
Methods for strategic liner shipping network design
In this paper the combined fleet-design, ship-scheduling and cargo-routing problem with limited availability of ships in liner shipping is considered. A composite solution approach is proposed in which the ports are first aggregated into port clusters to reduce the problem size. When the cargo flows are disaggregated, a feeder service network is introduced to ship the cargo within a port cluster. The solution method is tested on a problem instance containing 58 ports on the Asia–Europe trade lane of Maersk. The best solution obtained improves profit by more than 10% compared to the reference network based on the Maersk network.
Transportation; Liner shipping; Network design; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221713007960
Mulder, Judith
Dekker, Rommert
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:660-6702017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:660-670
article
Testing inference in heteroskedastic fixed effects models
This paper considers the issue of performing testing inference in fixed effects panel data models under heteroskedasticity of unknown form. We use numerical integration to compute the exact null distributions of different quasi-t test statistics and compare them to their limiting counterpart. The test statistics use different heteroskedasticity-consistent standard errors. Our results reveal that the asymptotic approximation is usually poor in small samples when the test statistic is based on the covariance matrix estimator proposed by Arellano (1987). The quality of the approximation is greatly increased when the standard error is obtained using other heteroskedasticity-consistent estimators, most notably the CHC4 estimator. Our results also reveal that the performance of Arellano’s test improves considerably when standard errors are computed using restricted residuals.
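The quasi-t statistics discussed here divide coefficient estimates by heteroskedasticity-consistent standard errors. As a simplified illustration only, the sketch below computes White's HC0 estimator for plain cross-section OLS on simulated data; it is not the paper's fixed-effects setting, nor the Arellano or CHC4 estimators it compares:

```python
import numpy as np

def hc0_standard_errors(X, y):
    """OLS with White's HC0 heteroskedasticity-consistent standard errors.

    cov(beta) = (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}, where e are the
    OLS residuals; quasi-t statistics are beta / se.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y
    e = y - X @ beta
    meat = X.T @ (X * e[:, None] ** 2)  # X' diag(e^2) X
    cov = xtx_inv @ meat @ xtx_inv
    return beta, np.sqrt(np.diag(cov))

# Simulated data with error variance increasing in x (heteroskedasticity)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
X = np.column_stack([np.ones(200), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.1 + x, 200)
beta, se = hc0_standard_errors(X, y)
quasi_t = beta / se
```

The refinements the paper studies (HC-type weighting of residuals, restricted residuals) modify the `diag(e_i^2)` term.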
Distribution; Simulation; Heteroskedasticity; Numerical integration; Longitudinal data; Quasi-t tests;
http://www.sciencedirect.com/science/article/pii/S0377221714000538
Uchôa, Carlos F.A.
Cribari-Neto, Francisco
Menezes, Tatiane A.
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:594-6152017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:594-615
article
Designing a sustainable closed-loop supply chain network based on triple bottom line approach: A comparison of metaheuristics hybridization techniques
Recently, there has been growing concern about the environmental and social footprint of business operations. While most papers in the field of supply chain network design focus on economic performance, some recent studies have also considered environmental dimensions.
Supply chain management; Multi-objective programming; Metaheuristics; Network design; Environmental management; Corporate social responsibility;
http://www.sciencedirect.com/science/article/pii/S0377221713010163
Devika, K.
Jafarian, A.
Nourbakhsh, V.
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:756-7732017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:756-773
article
An integrated approach for scheduling health care activities in a hospital
To effectively utilise hospital beds, operating rooms (OR) and other treatment spaces, it is necessary to precisely plan patient admissions and treatments in advance. As patient treatment and recovery times are unequal and uncertain, this is not easy. In response, a sophisticated flexible job-shop scheduling (FJSS) model is introduced, whereby patients, beds, hospital wards and health care activities are respectively treated as jobs, single machines, parallel machines and operations. Our approach is novel because an entire hospital is describable and schedulable in one integrated approach. The scheduling model can be used to recompute timings after deviations, delays, postponements and cancellations. It also includes advanced conditions such as activity and machine setup times, transfer times between activities, blocking limitations and no-wait conditions, timing and occupancy restrictions, buffering for robustness, fixed activities and sequences, release times and strict deadlines. To solve the FJSS problem, constructive algorithms and hybrid meta-heuristics have been developed. Our numerical testing shows that the proposed solution techniques are capable of solving problems of real-world size. This outcome further highlights the value of the scheduling model and its potential for integration into actual hospital information systems.
Scheduling; Flexible job shop; Hospital scheduling; Disjunctive graph model; Hybrid meta-heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221717305921
Burdett, Robert L.
Kozan, Erhan
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:653-6642017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:653-664
article
A hybrid of adaptive large neighborhood search and tabu search for the order-batching problem
Given a set of customer orders and a routing policy, the goal of the order-batching problem (OBP) is to group customer orders to picking orders (batches) such that the total length of all tours through a rectangular warehouse is minimized. Because order picking is considered the most labor-intensive process in warehousing, effectively batching customer orders can result in considerable savings. The OBP is NP-hard if the number of orders per batch is greater than two, and the exact solution methods proposed in the literature are not able to consistently solve larger instances. To address larger instances, we develop a metaheuristic hybrid based on adaptive large neighborhood search and tabu search, called ALNS/TS. In numerical studies, we conduct an extensive comparison of ALNS/TS to all previously published OBP methods that have used standard benchmark sets to investigate their performance. ALNS/TS outperforms all comparison methods with respect to both average solution quality and run-time. Compared to the state-of-the-art, ALNS/TS shows the clearest advantages on the larger instances of the existing benchmark sets, which assume a higher number of customer orders and higher capacities of the picking device. Finally, ALNS/TS is able to solve newly generated large-scale instances with up to 600 customer orders and six articles per customer order with reasonable run-times and convincing scaling behavior and robustness.
Logistics; Order batching; Adaptive large neighborhood search; Tabu search; Hybrid metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221717305970
Žulj, Ivan
Kramer, Sergej
Schneider, Michael
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:102-1142017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:102-114
article
A variable neighborhood search with an effective local search for uncapacitated multilevel lot-sizing problems
In this study, we improved the variable neighborhood search (VNS) algorithm for solving uncapacitated multilevel lot-sizing (MLLS) problems. The improvement is twofold. First, we developed an effective local search method known as the Ancestors Depth-first Traversal Search (ADTS), which can be embedded in the VNS to significantly improve the solution quality. Second, we proposed a common and efficient approach for the rapid calculation of the cost change for the VNS and other generate-and-test algorithms. The new VNS algorithm was tested against 176 benchmark problems of different scales (small, medium, and large). The experimental results show that the new VNS algorithm outperforms all of the existing algorithms in the literature for solving uncapacitated MLLS problems because it was able to find all optimal solutions (100%) for 96 small-sized problems and new best-known solutions for 5 of 40 medium-sized problems and for 30 of 40 large-sized problems.
Metaheuristics; Multilevel lot-sizing (MLLS) problem; ADTS local search; Variable neighborhood search (VNS);
http://www.sciencedirect.com/science/article/pii/S0377221713008485
Xiao, Yiyong
Zhang, Renqian
Zhao, Qiuhong
Kaku, Ikou
Xu, Yuchun
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:206-2142017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:206-214
article
Fixed cost and resource allocation based on DEA cross-efficiency
In many managerial applications, situations frequently occur when a fixed cost is used in constructing the common platform of an organization, and needs to be shared by all related entities, or decision making units (DMUs). It is of vital importance to allocate such a cost across DMUs where there is competition for resources. Data envelopment analysis (DEA) has been successfully used in cost and resource allocation problems. Whether it is a cost or resource allocation issue, one needs to consider both the competitive and cooperative situation existing among DMUs in addition to maintaining or improving efficiency. The current paper uses the cross-efficiency concept in DEA to approach cost and resource allocation problems. Because DEA cross-efficiency uses the concept of peer appraisal, it is a very reasonable and appropriate mechanism for allocating a shared resource/cost. It is shown that our proposed iterative approach is always feasible, and ensures that all DMUs become efficient after the fixed cost is allocated as an additional input measure. The cross-efficiency DEA-based iterative method is further extended into a resource-allocation setting to achieve maximization in the aggregated output change by distributing available resources. Such allocations for fixed costs and resources are more acceptable to the players involved, because the allocation results are jointly determined by all DMUs rather than a specific one. The proposed approaches are demonstrated using an existing data set that has been applied in similar studies.
Data envelopment analysis (DEA); Cross-efficiency; Fixed cost allocation; Resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221713008035
Du, Juan
Cook, Wade D.
Liang, Liang
Zhu, Joe
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:524-5332017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:524-533
article
A two-stage supply chain coordination mechanism considering price sensitive demand and quantity discounts
This paper explores the coordination between a supplier and a buyer within a decentralized supply chain, through the use of quantity discounts in a game-theoretic model. Within this model, the players face inventory and pricing decisions. We propose both cooperative and non-cooperative approaches, considering that the product traded experiences price-sensitive demand. In the first case, we study the dynamics of the game from the supplier's side as the leader in the negotiation, obtaining a Stackelberg equilibrium, and then show how the payoff of this player could still improve from this point. In the second case, a cooperative model is formulated, where decisions are taken simultaneously, emulating a centralized firm, showing the benefits of cooperation between the players. We further formulate a pricing game, where the buyer is allowed to set different prices to the final customer as a reaction to the supplier's discount decisions. For the latter, we investigate the feasibility of implementing a retail discount both with and without an existing coordination mechanism. Finally, the implications of transportation costs for the quantity discount schedule are analyzed. Our findings are illustrated with a numerical example showing the difference in the players’ payoffs in each case and the optimal strategies, comparing our results in each case with existing work.
Supply chain coordination; Inventory; Game theoretical model; Price sensitive demand; Quantity discounts;
http://www.sciencedirect.com/science/article/pii/S0377221717305696
Venegas, Bárbara B.
Ventura, José A.
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:616-6232017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:616-623
article
Relief inventory modelling with stochastic lead-time and demand
The irregular demand and communication network disruption that are characteristics of situations demanding humanitarian logistics, particularly after large-scale earthquakes, present a unique challenge for relief inventory modelling. However, there are few quantitative inventory models in humanitarian logistics, and assumptions inherent in commercial logistics naturally have little applicability to humanitarian logistics. This paper develops a humanitarian disaster relief inventory model that assumes a uniformly distributed function in both lead-time and demand parameters, which is appropriate considering the limited historical data on relief operation. Furthermore, this paper presents different combinations of lead-time and demand scenarios to demonstrate the variability of the model. This is followed by the discussion of a case study wherein the decision variables are evaluated and sensitivity analysis is performed. The results reveal the presence of a unique reorder level in the inventory wherever the order quantity is insensitive to some lead-time demand values, providing valuable direction for humanitarian relief planning efforts and future research.
Inventory; Earthquake; Humanitarian logistics; Uniform distribution;
http://www.sciencedirect.com/science/article/pii/S0377221713010266
Das, Rubel
Hanaoka, Shinya
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:742-7552017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:742-755
article
Assessing the cost-effectiveness of university academic recruitment and promotion policies
This paper develops an approach for higher education institutions to assess the economic efficiency of their recruitment and promotion practices concerning academic staff. Research output potential is a key criterion in most academic appointments. Generally, there is a long lead time between the conduct of research and its ultimate value in the form of disseminated knowledge. This means higher education institutions usually reward staff financially on the prospect of research output, albeit on the basis of research outputs achieved up to the point of recruitment or discretionary salary rise (e.g. through promotion). We propose a Data Envelopment Analysis (DEA) model which can be used retrospectively to set salary costs against corresponding research outputs achieved as a measure of the financial efficacy of past recruitment and promotion practices. The analysis can identify potential issues with those practices and lead to improvements for the future.
Data Envelopment Analysis; Academic promotions; Academic recruitment; Cost efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221717305878
Thanassoulis, E.
Sotiros, D.
Koronakos, G.
Despotis, D.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:38-462017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:38-46
article
Line search methods with guaranteed asymptotical convergence to an improving local optimum of multimodal functions
This paper considers line search optimization methods using a mathematical framework based on the simple concept of a v-pattern and its properties. This framework provides theoretical guarantees on preserving, in the localizing interval, a local optimum no worse than the starting point. Notably, the framework can be applied to arbitrary unidimensional functions, including multimodal and infinitely valued ones. Enhanced versions of the golden section, bisection and Brent’s methods are proposed and analyzed within this framework: they inherit the improving local optimality guarantee. Under mild assumptions, the enhanced algorithms are proved either to converge to a point in the solution set in a finite number of steps or to have all of their accumulation points in the solution set.
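For reference, the classical golden-section method that the paper enhances maintains a shrinking localizing interval whose length contracts by 1/φ per iteration. A minimal textbook sketch (the classical method only, not the paper's v-pattern-enhanced version, and assuming a unimodal objective):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Classical golden-section search for a minimum of f on [a, b].

    Each iteration shrinks the localizing interval by the factor
    1/phi ≈ 0.618; for a unimodal f the interval brackets the minimizer.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c           # minimum lies in [a, d]; reuse c as new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d           # minimum lies in [c, b]; reuse d as new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

On multimodal functions this classical version can converge to a point worse than the starting one, which is exactly the failure mode the paper's framework rules out.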
Nonlinear programming; Line search; Golden section method; Brent’s algorithm; Bisection; Multimodal functions;
http://www.sciencedirect.com/science/article/pii/S0377221713010254
Vieira, Douglas Alexandre Gomes
Lisboa, Adriano Chaves
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:159-1692017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:159-169
article
Affine model of inflation-indexed derivatives and inflation risk premium
This paper proposes an affine-based approach which jointly captures the nominal interest rate, the real interest rate, and the inflation risk premium to price inflation-indexed derivatives, including zero-coupon inflation-indexed swaps, year-on-year inflation-indexed swaps, inflation-indexed swaptions, and inflation-indexed caps and floors. We provide an example and explain how to use traded zero-coupon inflation-indexed swap rates to estimate inflation risk premiums.
Inflation-indexed derivatives; Inflation risk premium; Affine models;
http://www.sciencedirect.com/science/article/pii/S0377221713009855
Ho, Hsiao-Wei
Huang, Henry H.
Yildirim, Yildiray
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:548-5572017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:548-557
article
A compact optimization model for the tail assignment problem
This paper investigates a new model for the so-called Tail Assignment Problem, which consists in assigning a well-identified airplane to each flight leg of a given flight schedule, in order to minimize total cost (cost of operating the flights and possible maintenance costs) while complying with a number of operational constraints. The mathematical programming formulation proposed is compact (i.e., involves a number of 0−1 decision variables and constraints polynomial in the problem size parameters) and is shown to be of significantly reduced dimension as compared with previously known compact models. Computational experiments on series of realistic problem instances (obtained by random sampling from real-world data set) are reported. It is shown that with the proposed model, current state-of-the art MIP solvers can efficiently solve to exact optimality large instances representing 30-day flight schedules with typically up to 40 airplanes and 1500 flight legs connecting as many as 21 airports. The model also includes the main existing types of maintenance constraints, and extensive computational experiments are reported on problem instances of size typical of practical applications.
OR in airlines; Tail assignment; Aircraft routing; Maintenance routing; Integer linear program;
http://www.sciencedirect.com/science/article/pii/S0377221717305866
Khaled, Oumaima
Minoux, Michel
Mousseau, Vincent
Michel, Stéphane
Ceugniet, Xavier
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:686-7062017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:686-706
article
Capturing preferences for inequality aversion in decision support
We investigate the situation where there is interest in ranking distributions (of income, of wealth, of health, of service levels) across a population, in which individuals are considered preferentially indistinguishable and where there is some limited information about social preferences. We use a natural dominance relation, generalised Lorenz dominance, used in welfare comparisons in economic theory. In some settings there may be additional information about preferences (for example, if there is policy statement that one distribution is preferred to another) and any dominance relation should respect such preferences. However, characterising this sort of conditional dominance relation (specifically, dominance with respect to the set of all symmetric increasing quasiconcave functions in line with given preference information) turns out to be computationally challenging. This challenge comes about because, through the assumption of symmetry, any one preference statement (“I prefer giving $100 to Jane and $110 to John over giving $150 to Jane and $90 to John”) implies a large number of other preference statements (“I prefer giving $110 to Jane and $100 to John over giving $150 to Jane and $90 to John”; “I prefer giving $100 to Jane and $110 to John over giving $90 to Jane and $150 to John”). We present theoretical results that help deal with these challenges and present tractable linear programming formulations for testing whether dominance holds between any given pair of distributions. We also propose an interactive decision support procedure for ranking a given set of distributions and demonstrate its performance through computational testing.
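Generalised Lorenz dominance itself has a simple computational test: sort each distribution ascending and compare cumulative sums. A minimal sketch (the unconditional dominance check only, not the paper's linear programs for conditional dominance under preference information):

```python
import numpy as np

def generalized_lorenz_dominates(x, y):
    """True if distribution x generalised-Lorenz dominates distribution y.

    Sort each distribution ascending and compare partial sums: x dominates
    y iff every cumulative sum of x is at least that of y.
    """
    cx = np.cumsum(np.sort(np.asarray(x, dtype=float)))
    cy = np.cumsum(np.sort(np.asarray(y, dtype=float)))
    return bool(np.all(cx >= cy))

# The abstract's example: ($100, $110) vs ($150, $90).
# Sorted partial sums: [100, 210] vs [90, 240] -> no dominance (210 < 240),
# which is why such a preference must be added as extra information.
print(generalized_lorenz_dominates([100, 110], [90, 150]))

# With equal totals, the more equal split dominates: [105, 210] >= [100, 210].
print(generalized_lorenz_dominates([105, 105], [100, 110]))
```

This illustrates why the stated preference in the abstract is genuinely additional information: it is not implied by generalised Lorenz dominance alone.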
Multiple criteria analysis; Equitable preferences; Generalised Lorenz dominance; Conditional dominance; Interactive approaches;
http://www.sciencedirect.com/science/article/pii/S0377221717306458
Karsu, Özlem
Morton, Alec
Argyris, Nikos
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:329-3332017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:329-333
article
Supply chain analysis under a price-discount incentive scheme for electric vehicles
We investigate an automobile supply chain where a manufacturer and a retailer serve heterogeneous consumers with electric vehicles (EVs) under a government’s price-discount incentive scheme that involves a price discount rate and a subsidy ceiling. We show that the subsidy ceiling is more effective in influencing the optimal wholesale pricing decision of the manufacturer with a higher unit production cost. However, the discount rate is more effective for the manufacturer with a lower unit production cost. Moreover, the expected sales are increasing in the discount rate but may be decreasing in the subsidy ceiling. Analytic results indicate that an effective incentive scheme should include both a discount rate and a subsidy ceiling. We also derive the necessary condition for the most effective discount rate and subsidy ceiling that maximize the expected sales of EVs, and obtain a unique discount rate and subsidy ceiling that most effectively improve the manufacturer’s incentive for EV production.
Electric vehicle; Supply chain; Price discount; Incentive scheme;
http://www.sciencedirect.com/science/article/pii/S0377221713009387
Luo, Chunlin
Leng, Mingming
Huang, Jian
Liang, Liping
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:1-162017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:1-16
article
A common framework and taxonomy for multicriteria scheduling problems with interfering and competing jobs: Multi-agent scheduling problems
Most classical scheduling research assumes that the objectives sought are common to all jobs to be scheduled. However, many real-life applications can be modeled by considering different sets of jobs, each one with its own objective(s), and an increasing number of papers addressing these problems has appeared over the last few years. Since so far the area lacks a unified view, the studied problems have received different names (such as interfering jobs, multi-agent scheduling, and mixed-criteria), some authors do not seem to be aware of important contributions in related problems, and solution procedures are often developed without taking into account existing ones. Therefore, the topic is in need of a common framework that allows for a systematic recollection of existing contributions, as well as a clear definition of the main research avenues. In this paper we review multicriteria scheduling problems involving two or more sets of jobs and propose a unified framework providing a common definition, name and notation for these problems. Moreover, we systematically review and classify the existing contributions in terms of the complexity of the problems and the proposed solution procedures, discuss the main advances, and point out future research lines in the topic.
Scheduling; Interfering jobs; Multi-customer; Multi-agent scheduling problems; Sets of jobs; Multicriteria;
http://www.sciencedirect.com/science/article/pii/S0377221713007728
Perez-Gonzalez, Paz
Framinan, Jose M.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:233-2462017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:233-246
article
Tree, web and average web values for cycle-free directed graph games
On the class of cycle-free directed graph games with transferable utility, solution concepts called web values are introduced axiomatically, each one with respect to a chosen coalition of players that is assumed to be an anti-chain in the directed graph and is considered as a management team. We provide their explicit formula representation and simple recursive algorithms to calculate them. Additionally, the efficiency and stability of web values are studied. Web values may be considered as natural extensions of the tree and sink values as defined correspondingly for rooted and sink forest graph games. In case the management team consists of all sources (sinks) in the graph, a kind of tree (sink) value is obtained. In general, at a web value each player receives the worth of this player together with his subordinates minus the total worths of these subordinates. This implies that every coalition of players consisting of a player with all his subordinates receives precisely its worth. We also define the average web value as the average of web values over all management teams in the graph. As an application, the water distribution problem of a river with multiple sources, a delta and possibly islands is considered.
TU game; Directed graph communication structure; Efficiency; Stability; Management team;
http://www.sciencedirect.com/science/article/pii/S0377221713008370
Khmelnitskaya, Anna
Talman, Dolf
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:665-6742017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:665-674
article
Ranking efficient decision making units in data envelopment analysis based on reference frontier share
Data envelopment analysis (DEA) is a powerful technique for performance evaluation of decision making units (DMUs). Ranking efficient DMUs based on a rational analysis is an issue that still needs further research. The impact of each efficient DMU in the evaluation of inefficient DMUs can be considered as additional information for discriminating among efficient DMUs. The concept of reference frontier share is introduced, in which the share of each efficient DMU in the construction of the reference frontier for evaluating inefficient DMUs is considered. For this purpose, a model for measuring the reference frontier share of each efficient DMU associated with each inefficient one is proposed, and then a total measure is provided based on which the ranking is made. The new approach has the capability of ranking extreme and non-extreme efficient DMUs. Further, it has no problem in dealing with negative data. These facts are verified by theorems, discussions and numerical examples.
Data envelopment analysis; Ranking; Reference frontier share;
http://www.sciencedirect.com/science/article/pii/S0377221717306057
Rezaeiani, M.J.
Foroughi, A.A.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:28-372017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:28-37
article
Robust portfolio optimization with copulas
Conditional Value at Risk (CVaR) is widely used in portfolio optimization as a measure of risk. CVaR clearly depends on the underlying probability distribution of the portfolio. We show how copulas can be introduced to any problem that involves distributions and how they can provide solutions for the modeling of the portfolio. We use this to provide the copula formulation of the CVaR of a portfolio. Given the critical dependence of CVaR on the underlying distribution, we use a robust framework to extend our approach to Worst Case CVaR (WCVaR). WCVaR is achieved through the use of rival copulas. These rival copulas have the advantage of exploiting a variety of dependence structures, both symmetric and asymmetric. We compare our model against two other models, Gaussian CVaR and Worst Case Markowitz. Our empirical analysis shows that WCVaR can assess the risk more adequately than the two competing models during periods of crisis.
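The sample-based definition of CVaR that copula and worst-case formulations build on is simple: CVaR at level α is the mean loss in the worst (1 − α) tail. A minimal sketch with hypothetical scenario losses (the plain empirical estimator only, not the paper's WCVaR over rival copulas):

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha: mean loss in the worst (1 - alpha) tail.

    VaR_alpha is the alpha-quantile of the loss distribution; CVaR_alpha
    averages the losses at or beyond that quantile.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # Value at Risk at level alpha
    tail = losses[losses >= var]       # worst-case tail scenarios
    return tail.mean()

# Hypothetical scenario losses of a portfolio
losses = np.array([-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 8.0])
cvar_90 = empirical_cvar(losses, alpha=0.9)  # mean of the worst 10% of scenarios
```

In the copula approach of the paper, the scenario distribution generating such losses would itself be derived from a fitted (or worst-case rival) copula rather than taken as given.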
Convex programming; Robust optimization; Copulas;
http://www.sciencedirect.com/science/article/pii/S0377221713010060
Kakouris, Iakovos
Rustem, Berç
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:300-3142017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:300-314
article
Joint optimization of spare parts ordering and maintenance policies for multiple identical items subject to silent failures
In this paper the joint maintenance and spare parts ordering problem for multiple identical operating items is studied. The operating items may suffer two types of silent failures: a minor failure, which results in item malfunctioning, and a major failure, which renders the item completely non-functional. Inspections are periodically held to detect any failures, and the inspected items are preventively maintained, repaired or replaced according to their condition. Two ordering policies are investigated to supply the necessary spare parts: a periodic review and a continuous review policy. The expected total maintenance and inventory cost per time unit is derived and the proposed models are optimized for real case data. In addition, the sensitivity of the proposed models is studied through numerical examples and the effect of some key problem characteristics on the optimal decisions is discussed.
Inventory; Spare parts; Maintenance; Replacement;
http://www.sciencedirect.com/science/article/pii/S0377221713008941
Panagiotidou, Sofia
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:341-3492017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:341-349
article
A survey on maritime fleet size and mix problems
This paper presents a literature survey on the fleet size and mix problem in maritime transportation. Fluctuations in the shipping market and frequent mismatches between fleet capacities and demands highlight the relevance of the problem and call for more accurate decision support. After analyzing the available scientific literature on the problem and its variants and extensions, we summarize the state of the art and highlight the main contributions of past research. Furthermore, by identifying important real life aspects of the problem which past research has failed to capture, we uncover the main areas where more research will be needed.
Logistics; Maritime transportation; Fleet planning;
http://www.sciencedirect.com/science/article/pii/S0377221713003846
Pantuso, Giovanni
Fagerholt, Kjetil
Hvattum, Lars Magnus
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:732-7412017-11-16RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:732-741
article
Connectivity modeling and optimization of linear consecutively connected systems with repairable connecting elements
Linear consecutively connected systems (LCCSs) are systems containing a linear sequence of ordered nodes. Connection elements (CEs) characterized by diverse connection ranges, time-to-failure and time-to-repair distributions are allocated to different nodes to provide the system connectivity, i.e., a connection between the source and sink nodes of the LCCS. Examples of LCCSs abound in practical applications such as flow transmission systems and radio communication systems. Considerable research efforts have been expended in modeling and optimizing LCCSs. However, most of the existing works have assumed that CEs either are non-repairable or undergo a restrictive minimal repair policy with constant repair time. This paper makes new technical contributions by modeling and optimizing LCCSs with CEs under corrective maintenance with random repair time and different repair policies (minimal, perfect, and imperfect). The characteristics of CEs can depend on their location because the distance between adjacent nodes and conditions of CE operation and maintenance at different nodes can be different, which further complicates the problem. We first propose a discrete numerical algorithm to evaluate the instantaneous availability of each CE. A universal generating function based method is then implemented for assessing instantaneous and expected system connectivity for a specific CE allocation. As the CE allocation can have significant impacts on the system connectivity, we further define and solve the optimal CE allocation problem, whose objective is to find the CE allocation among LCCS nodes maximizing the expected system connectivity over a given mission time. Effects of different parameters including repair efficiency, mission time and repair time are investigated. As illustrated through examples, optimization results can facilitate optimal decisions on robust design and effective operation and maintenance management of LCCSs.
Applied probability; Linear consecutively connected system; Connectivity optimization; Random repair time; Operation management;
http://www.sciencedirect.com/science/article/pii/S037722171730588X
Xing, Liudong
Levitin, Gregory
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:431-447 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:431-447
article
An exact method for scheduling a yard crane
This paper studies an operational problem arising at a container terminal, consisting of scheduling a yard crane to carry out a set of container storage and retrieval requests in a single container block. The objective is to minimize the total travel time of the crane to carry out all requests. The block has multiple input and output (I/O) points located at both the seaside and the landside. The crane must move retrieval containers from the block to the I/O points, and must move storage containers from the I/O points to the block. The problem is modeled as a continuous time integer programming model and its computational complexity is proven. We use intrinsic properties of the problem to propose a two-phase solution method to solve the problem optimally. In the first phase, we develop a merging algorithm which tries to patch subtours of an optimal solution of an assignment problem relaxation of the problem and to obtain a complete crane tour without adding extra travel time to the optimal objective value of the relaxed problem. The algorithm requires common I/O points to patch subtours. It is efficient and often yields an optimal solution of the problem. If an optimal solution has not been obtained, the solution of the first phase is embedded in the second phase, where a branch-and-bound algorithm is used to find an optimal solution. The numerical results show that the proposed method can quickly obtain an optimal solution of the problem. Compared to the random and Nearest Neighbor heuristics, the total travel time is on average reduced by more than 30% and 14%, respectively. We also validate the solution method at a terminal.
Container storage and retrieval; Sequencing; Multiple I/O points; Travel time; Traveling salesman problem;
http://www.sciencedirect.com/science/article/pii/S0377221713007935
Gharehgozli, Amir Hossein
Yu, Yugang
de Koster, René
Udding, Jan Tijmen
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:462-471 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:462-471
article
Are multi-criteria decision-making tools useful? An experimental comparative study of three methods
Many decision makers still question the usefulness of multi-criteria decision-making methods and prefer to rely on intuitive decisions. In this study we evaluated a number of multi-criteria decision-making tools for their usefulness using incentive-based experiments, which is a novel approach in operations research but common in psychology and experimental economics. In this experiment the participants were asked to compare five coffee shops to win a voucher for their best-rated shop. We found that, although the usefulness of the different multi-criteria decision-making tools varied to some extent, all the tools proved useful in the sense that, when participants decided to change their ranking, they followed the recommendation of the multi-criteria decision-making tool. Moreover, the level of inconsistency in the judgements provided had no significant effect on the usefulness of these tools.
Decision analysis; SMART; AHP; MACBETH; Experimental evaluation;
http://www.sciencedirect.com/science/article/pii/S0377221717304885
Ishizaka, Alessio
Siraj, Sajid
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:225-232 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:225-232
article
A cosine maximization method for the priority vector derivation in AHP
The derivation of a priority vector from a pair-wise comparison matrix (PCM) is an important issue in the Analytic Hierarchy Process (AHP). Existing methods for deriving the priority vector from a PCM include the eigenvector method (EV), the weighted least squares method (WLS), the additive normalization method (AN), and the logarithmic least squares method (LLS), among others. If a PCM is not perfectly consistent, the derived priority vector should be as similar as possible to each of its column vectors. Therefore, a cosine maximization method (CM) based on a similarity measure is proposed, which maximizes the sum of the cosines of the angles between the priority vector and each column vector of a PCM. An optimization model for the CM is proposed to derive a reliable priority vector. Using three numerical examples, the CM is compared with the other prioritization methods based on two performance evaluation criteria: Euclidean distance and minimum violation. The results show that the CM is flexible and efficient.
Analytic Hierarchy Process; Pair-wise comparison matrix; Cosine similarity measure; Priority vector; Consistency index;
http://www.sciencedirect.com/science/article/pii/S0377221713008424
Kou, Gang
Lin, Changsheng
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:403-404 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:403-404
article
Feature cluster: Learning perspectives in Multiple Criteria Decision Analysis
http://www.sciencedirect.com/science/article/pii/S0377221717307774
Greco, Salvatore
Kadziński, Miłosz
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:73-87 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:73-87
article
The distance constrained multiple vehicle traveling purchaser problem
In the Distance Constrained Multiple Vehicle Traveling Purchaser Problem (DC-MVTPP) a fleet of vehicles is available to visit suppliers offering products at different prices and with different quantity availabilities. The DC-MVTPP consists in selecting a subset of suppliers so as to satisfy product demand at minimum traveling and purchasing costs, while ensuring that the distance traveled by each vehicle does not exceed a predefined upper bound. The problem generalizes the classical Traveling Purchaser Problem (TPP) and adds new realistic features to the decision problem. In this paper we present different mathematical programming formulations for the problem. A branch-and-price algorithm is also proposed to solve a set partitioning formulation where columns represent feasible routes for the vehicles. At each node of the branch-and-bound tree, the linear relaxation of the set partitioning formulation, augmented by the branching constraints, is solved through column generation. The pricing problem is solved using dynamic programming. A set of instances has been derived from benchmark instances for the asymmetric TPP. Instances with up to 100 suppliers and 200 products have been solved to optimality.
Multiple vehicle traveling purchaser problem; Distance constraint; Formulations; Branch-and-price; Column generation;
http://www.sciencedirect.com/science/article/pii/S0377221713008412
Bianchessi, N.
Mansini, R.
Speranza, M.G.
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:195-205 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:195-205
article
Learning from discrete-event simulation: Exploring the high involvement hypothesis
Discussion of learning from discrete-event simulation often takes the form of a hypothesis stating that involving clients in model building provides much of the learning necessary to aid their decisions. Whilst practitioners of simulation may intuitively agree with this hypothesis, they are simultaneously motivated to reduce the model building effort through model reuse. As simulation projects are typically limited by time, model reuse offers an alternative learning route for clients as the time saved can be used to conduct more experimentation. We detail a laboratory experiment to test the high involvement hypothesis empirically, identify mechanisms that explain how involvement in model building or model reuse affect learning and explore the factors that inhibit learning from models. Measurement of learning focuses on the management of resource utilisation in a case study of a hospital emergency department and through the choice of scenarios during experimentation. Participants who reused a model benefitted from the increased experimentation time available when learning about resource utilisation. However, participants who were involved in model building simulated a greater variety of scenarios, including more validation type scenarios early on. These results suggest that there may be a learning trade-off between model reuse and model building when simulation projects have a fixed budget of time. Further work evaluating client learning in practice should track the origin and choice of variables used in experimentation; studies should also record the methods modellers find most effective in communicating the impact of resource utilisation on queuing.
Psychology of decision; Learning; Model building; Model reuse; Generic models; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221713008047
Monks, Thomas
Robinson, Stewart
Kotiadis, Kathy
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:276-286 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:276-286
article
Improving the robustness in railway station areas
In order to improve the robustness of a railway system in station areas, this paper introduces an iterative approach to successively optimize the train routing through station areas and to enhance this solution by applying some changes to the timetable in a tabu search environment. We present our vision of robustness and describe how this vision can be used in practice. By introducing the spread of the trains in the objective function for the route choice and timetabling module, we improve the robustness of a railway system. Using a discrete event simulation model, the performance of our algorithms is evaluated based on a case study of the Brussels area. The computational results indicate an average improvement in robustness of 6.2% together with a decrease in delay propagation of about 25%. Furthermore, the effect of some measures, like changing the train offer, to further increase the robustness is evaluated and compared.
Transportation; Robustness; Railway timetabling; Train routing; Bottleneck scheduling; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221713008916
Dewilde, Thijs
Sels, Peter
Cattrysse, Dirk
Vansteenwegen, Pieter
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:623-636 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:623-636
article
Production and replacement policies for a deteriorating manufacturing system under random demand and quality
This work investigates the production planning of an unreliable deteriorating manufacturing system under uncertainties. The effect of the deterioration phenomenon on the machine is mainly observed in its availability and the quality of the parts produced, with the rates of failure and defectives increasing with the age of the machine. The option to replace the machine should be considered to mitigate the effect of deterioration in order to ensure long-term satisfaction of demand. The objective of this paper is to find the production rate and the replacement policy that minimize the total discounted cost, which includes inventory, backlog, production, repair and replacement costs, over an infinite planning horizon. We formulate the stochastic control problem in the framework of a semi-Markov decision process to consider the machine's history. The integration of random demand and quality behaviour led us to propose a new modeling approach by developing optimality conditions in terms of a second-order approximation of Hamilton–Jacobi–Bellman (HJB) equations. Numerical methods are used to obtain the optimal control policies. Finally, a numerical example and a sensitivity analysis are presented in order to illustrate and confirm the structure of the optimal solution obtained.
Flexible manufacturing systems; Random quality; Random demand; Stochastic optimal control; Numerical methods;
http://www.sciencedirect.com/science/article/pii/S0377221717306033
Ouaret, Samir
Kenné, Jean-Pierre
Gharbi, Ali
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:47-61 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:47-61
article
Equilibrium network design of shared-vehicle systems
An equilibrium network design model is formulated to determine the optimal configuration of a vehicle sharing program (VSP). A VSP involves a fleet of vehicles (bicycles, cars, or electric vehicles) positioned strategically across a network. In a flexible VSP, users are permitted to check out vehicles to perform trips and return the vehicles to stations close to their destinations. VSP operators need to determine an optimal configuration in terms of station locations, vehicle inventories, and station capacities, that maximizes revenue. Since users are likely to use the VSP resources only if their travel utilities improve, a generalized equilibrium based approach is adopted to design the system. The model takes the form of a bi-level, mixed-integer program. Model properties of uniqueness, inefficiency of equilibrium, and transformations that lead to an exact solution approach are presented. Computational tests on several synthetic instances demonstrate the nature of the equilibrium configuration, the trade-offs between operator and user objectives, and insights for deploying such systems.
Equilibrium network design; Bi-level programs; Bicycle sharing; Car sharing;
http://www.sciencedirect.com/science/article/pii/S0377221713007741
Nair, Rahul
Miller-Hooks, Elise
oai:RePEc:eee:ejores:v:264:y:2018:i:2:p:508-523 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:264:y:2018:i:2:p:508-523
article
The static bike relocation problem with multiple vehicles and visits
This paper introduces the static bike relocation problem with multiple vehicles and visits, the objective of which is to rebalance at minimum cost the stations of a bike sharing system using a fleet of vehicles. The vehicles have identical capacities and service time limits, and are allowed to visit the stations multiple times. We present an integer programming formulation, implemented under a branch-and-cut scheme, in addition to an iterated local search metaheuristic that employs efficient move evaluation procedures. Results of computational experiments on instances ranging from 10 to 200 vertices are provided and analyzed. We also examine the impact of the vehicle capacity and of the number of visits and vehicles on the performance of the proposed algorithms.
Routing; Shared mobility systems; Bike sharing; Pickup and delivery;
http://www.sciencedirect.com/science/article/pii/S0377221717305672
Bulhões, Teobaldo
Subramanian, Anand
Erdoğan, Güneş
Laporte, Gilbert
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:775-783 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:775-783
article
Constant and variable returns to scale DEA models for socially responsible investment funds
In order to evaluate the performance of socially responsible investment (SRI) funds, we propose some models which use data envelopment analysis (DEA) and can be computed in all phases of the business cycle. These models focus on the most crucial elements of an investment in mutual funds.
Data envelopment analysis; Finance; Mutual fund performance evaluation; Socially responsible investing; Returns to scale;
http://www.sciencedirect.com/science/article/pii/S0377221713009417
Basso, Antonella
Funari, Stefania
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:265-275 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:265-275
article
Data and queueing analysis of a Japanese air-traffic flow
Congestion is a major cause of inefficiency in air transportation. A question is whether delays during the arrival phase of a flight can be absorbed more fuel-efficiently in the future. In this context, we analyze Japan’s flow strategy empirically and use queueing techniques in order to gain insight into the generation of the observed delays. Based on this, we derive a rule to balance congestion delays more efficiently between ground and en-route. Whether fuel efficiency can be further improved or not will depend on the willingness to review the concept of runway pressure.
Queueing; Applied probability; Air traffic management;
http://www.sciencedirect.com/science/article/pii/S0377221713008795
Gwiggner, Claus
Nagaoka, Sakae
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:170-179 2017-11-16 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:170-179
article
Tandem queueing system with infinite and finite intermediate buffers and generalized phase-type service time distribution
A tandem queueing system with infinite and finite intermediate buffers, heterogeneous customers and generalized phase-type service time distribution at the second stage is investigated. The first stage of the tandem has a finite number of servers without buffer. The second stage consists of an infinite and a finite buffers and a finite number of servers. The arrival flow of customers is described by a Marked Markovian arrival process. Type 1 customers arrive to the first stage while type 2 customers arrive to the second stage directly. The service time at the first stage has an exponential distribution. The service times of type 1 and type 2 customers at the second stage have a phase-type distribution with different parameters. During a waiting period in the intermediate buffer, type 1 customers can be impatient and leave the system. The ergodicity condition and the steady-state distribution of the system states are analyzed. Some key performance measures are calculated. The Laplace–Stieltjes transform of the sojourn time distribution of type 2 customers is derived. Numerical examples are presented.
Queueing; Tandem queueing system; Marked Markovian arrival process; Phase-type distribution; Laplace–Stieltjes transform;
http://www.sciencedirect.com/science/article/pii/S0377221713009879
Kim, Chesoong
Dudin, Alexander
Dudina, Olga
Dudin, Sergey
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:200-207 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:200-207
article
Forward search outlier detection in data envelopment analysis
In this paper we tackle the problem of outlier detection in data envelopment analysis (DEA). We propose a procedure where we merge the super-efficiency DEA and the forward search. Since DEA provides efficiency scores which are not parameters to fit the model to the data, we introduce a distance, to be monitored along the search. This distance is obtained through the integration of a regression model and the super-efficiency DEA. We simulate a Cobb–Douglas production function and we compare the super-efficiency DEA and the forward search analysis in both uncontaminated and contaminated settings. For inference about outliers, we exploit envelopes obtained through Monte Carlo simulations.
Data envelopment analysis (DEA); Super-efficiency; Forward search; Outlier detection;
http://www.sciencedirect.com/science/article/pii/S0377221711006254
Bellini, Tiziano
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:348-357 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:348-357
article
From deterministic to stochastic surrender risk models: Impact of correlation crises on economic capital
In this paper we raise the matter of considering a stochastic model of the surrender rate instead of the classical S-shaped deterministic curve (as a function of the spread), still used in almost all insurance companies. For extreme scenarios, due to the lack of data, it could be tempting to assume that surrenders are conditionally independent with respect to a S-curve disturbance. However, we explain why this conditional independence between policyholders' decisions, which has the advantage of being the simplest assumption, appears particularly ill-suited when the spread increases. Indeed, the correlation between policyholders' decisions is most likely to increase in this situation. We suggest and develop a simple model which integrates these phenomena. Using stochastic orders, we compare it qualitatively to the conditional independence approach. In a partially internal Solvency II model, we quantify the impact of the correlation phenomenon on a real life portfolio for a global risk management strategy.
Risk management; Applied probability; Life insurance; Surrender risk; Correlation risk;
http://www.sciencedirect.com/science/article/pii/S0377221711003821
Loisel, Stéphane
Milhaud, Xavier
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:108-119 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:108-119
article
Supply chain coordination with controllable lead time and asymmetric information
This paper considers coordinated decisions in a decentralized supply chain consisting of a vendor and a buyer with controllable lead time. We analyze two supply chain inventory models. In the first model we assume the vendor has complete information about the buyer's cost structure. By taking both the vendor's and the buyer's individual rationalities into consideration, a side payment coordination mechanism is designed to realize supply chain Pareto dominance. In the second model we consider a setting where the buyer possesses private cost information. We design the coordination mechanism by using a principal–agent model to induce the buyer to report his true cost structure. Solution procedures are also developed to obtain the optimal solutions of these two models. The results of numerical examples show that shortening the lead time to a certain extent can reduce inventory cost, and that the coordination mechanisms designed for both symmetric and asymmetric information situations are effective.
Supply chain management; Asymmetric information; Controllable lead time; Side payment; Coordination mechanism;
http://www.sciencedirect.com/science/article/pii/S037722171100806X
Li, Yina
Xu, Xuejun
Zhao, Xiande
Yeung, Jeff Hoi Yan
Ye, Fei
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:239-251 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:239-251
article
Optimization models for assessing the peak capacity utilization of intelligent transportation systems
With limited economic and physical resources, it is not feasible to continually expand transportation infrastructure to adequately support the rapid growth in its usage. This is especially true for traffic coordination systems, where the expansion of road infrastructure has not been able to keep pace with the increasing number of vehicles, thereby resulting in congestion and delays. Hence, in addition to striving for the construction of new roads, it is imperative to develop new intelligent transportation management and coordination systems. The effectiveness of a new technique can be evaluated by comparing it with the optimal capacity utilization. If this comparison indicates that substantial improvements are possible, then the cost of developing and deploying an intelligent traffic system can be justified. Moreover, developing an optimization model can also help in capacity planning. For instance, at a given level of demand, if the optimal solution worsens significantly, this implies that no amount of intelligent strategies can handle this demand, and expanding the infrastructure would be the only alternative. In this paper, we demonstrate these concepts through a case study of scheduling vehicles on a grid of intersecting roads. We develop two optimization models, namely a mixed integer programming model and a space–time network flow model, and show that the latter model is substantially more effective. Moreover, we prove that the problem is strongly NP-hard and develop two polynomial-time heuristics. The heuristic solutions are then compared with the optimal capacity utilization obtained using the space–time network model. We also present important managerial implications.
Traffic; Transportation; Integer programming; Intelligent system; Space–time network;
http://www.sciencedirect.com/science/article/pii/S0377221711006643
Shah, Nirav
Kumar, Subodha
Bastani, Farokh
Yen, I-Ling
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:252-254 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:252-254
article
A note on pricing with risk aversion
We consider the pricing problem of a risk-averse seller facing uncertain demand. Demand uncertainty stems from buyers’ valuations being privately observed. By imposing very mild restrictions on the distribution of buyers’ valuations (an increasing generalized failure rate distribution) and the Bernoulli utility function, we show that a risk-averse seller will unambiguously post a lower price than a risk-neutral counterpart.
Economics; Pricing; Risk management; Utility theory;
http://www.sciencedirect.com/science/article/pii/S037722171100659X
Colombo, Luca
Labrecciosa, Paola
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:633-642 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:633-642
article
Multi-criteria diagnosis of control knowledge for cartographic generalisation
The development of interactive map websites increases the need for efficient automatic cartographic generalisation. The generalisation process, which aims at decreasing the level of detail of geographic data in order to produce a map at a given scale, is extremely complex. A classical method for automating the generalisation process consists in using a heuristic tree-search strategy. This type of strategy requires high quality control knowledge (heuristics) to guide the search for the optimal solution. Unfortunately, this control knowledge is rarely perfect and its evaluation is often difficult. Yet, this evaluation can be very useful to manage knowledge and to determine when to revise it. The objective of our work is to offer an automatic method for evaluating the quality of control knowledge for cartographic generalisation based on a heuristic tree-search strategy. Our diagnosis method consists in analysing the system's execution logs, and in using a multi-criteria analysis method for evaluating the global quality of the knowledge. We present an industrial application as a case study using this method for building block generalisation, and this experiment shows promising results.
Multiple criteria analysis; Knowledge-based systems; Control knowledge quality diagnosis; Heuristic tree-search strategy; Cartographic generalisation;
http://www.sciencedirect.com/science/article/pii/S0377221711009039
Taillandier, Patrick
Taillandier, Franck
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:175-185 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:175-185
article
Improving envelopment in Data Envelopment Analysis under variable returns to scale
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple input and output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single input multiple output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs created by adjusting input and output levels of certain observed relatively efficient DMUs, in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs, which are in line with the general VRS technology. The suggested procedure is illustrated using real data.
Data Envelopment Analysis; Efficiency; Productivity; Unobserved DMUs; Value judgements;
http://www.sciencedirect.com/science/article/pii/S0377221711009088
Thanassoulis, Emmanuel
Kortelainen, Mika
Allen, Rachel
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:687-696 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:687-696
article
Data envelopment analysis models of investment funds
This paper develops theory missing in the sizable literature that uses data envelopment analysis to construct return–risk ratios for investment funds. It explores the production possibility set of the investment funds to identify an appropriate form of returns to scale. It discusses what risk and return measures can justifiably be combined and how to deal with negative risks, and identifies suitable sets of measures. It identifies the problems of failing to deal with diversification and develops an iterative approximation procedure to deal with it. It identifies relationships between diversification, coherent measures of risk and stochastic dominance. It shows how the iterative procedure makes a practical difference using monthly returns of 30 hedge funds over the same time period. It discusses possible shortcomings of the procedure and offers directions for future research.
Data envelopment analysis; Investment fund; Diversification; Coherent risk measure; Returns to scale; Stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S0377221711007600
Lamb, John D.
Tee, Kai-Hong
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:600-608 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:600-608
article
A heuristic method to schedule training programs for Small and Medium Enterprises
Small and Medium Enterprises (SMEs) residing in incubators need training programs during their incubation period to acquire the knowledge required to survive and succeed in the business environment. This paper presents a heuristic method based on an optimization model to schedule these programs at the most suitable times. Based on the proposed heuristic, each training program is implemented at a suitable time by considering the SMEs' requirements and some other logical constraints. The proposed heuristic is described in detail, and its implementation is demonstrated via a real-life numerical example. The numerical results of the heuristic are compared with those of other methods.
Course scheduling; Heuristics; Incubators; Job scheduling; Small and Medium Enterprises;
http://www.sciencedirect.com/science/article/pii/S0377221711008083
Rezaei, Mahmood
Shamsaei, Fahimeh
Mohammadian, Iman
Van Vyve, Mathieu
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:146-155 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:146-155
article
Simulation-based Selectee Lane queueing design for passenger checkpoint screening
There are two kinds of passenger checkpoint screening lanes in a typical US airport: a Normal Lane and a Selectee Lane that has enhanced scrutiny. The Selectee Lane is not effectively utilized in some airports due to the small number of passengers selected to go through it. In this paper, we propose a simulation-based Selectee Lane queueing design framework to study how to effectively utilize the Selectee Lane resource. We assume that passengers are classified into several risk classes via some passenger prescreening system. We consider how to assign passengers from different risk classes to the Selectee Lane based on how many passengers are already in the Selectee Lane. The main objective is to maximize the screening system's probability of true alarm. We first discuss a steady-state model, formulate it as a nonlinear binary integer program, and propose a rule-based heuristic. Then, a simulation framework is constructed and a neighborhood search procedure is proposed to generate possible solutions based on the heuristic solution of the steady-state model. Using the passenger arrival patterns from a medium-size airport, we conduct a detailed case study. We observe that the heuristic solution from the steady-state model results in a relative increase of more than 4% in the probability of true alarm with respect to current practice. Moreover, starting from the heuristic solution, we obtain even better solutions in terms of both probability of true alarm and expected time in system via a neighborhood search procedure.
Passenger checkpoint screening; Selectee Lane; Queueing design; Nonlinear binary integer programming; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711010897
Nie, Xiaofeng
Parab, Gautam
Batta, Rajan
Lin, Li
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:316-3262016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:316-326
article
Identifying large robust network clusters via new compact formulations of maximum k-club problems
Network robustness issues are crucial in a variety of application areas. In many situations, one of the key robustness requirements is the connectivity between each pair of nodes through a path that is short enough, which makes a network cluster more robust with respect to potential network component disruptions. A k-club, which by definition is a subgraph with diameter at most k, is a structure that addresses this requirement (assuming that k is small enough with respect to the size of the original network). We develop a new compact linear 0–1 programming formulation for finding maximum k-clubs that has substantially fewer entities than the previously known formulation (O(kn^2) instead of O(n^(k+1)), which is important in the general case of k>2) and is rather tight despite its compactness. Moreover, we introduce a new related concept referred to as an R-robust k-club (or (k,R)-club), which naturally arises from the developed k-club formulations and extends the standard definition of a k-club by explicitly requiring that there must be at least R distinct paths of length at most k between all pairs of nodes. A compact formulation for the maximum R-robust k-club problem is also developed, and error and attack tolerance properties of the important special case of R-robust 2-clubs are investigated. Computational results are presented for multiple types of random graph instances.
Combinatorial optimization; Graph theory; Robust network clusters; k-clubs; R-robust k-clubs; Compact 0–1 formulations;
http://www.sciencedirect.com/science/article/pii/S0377221711009477
Veremyev, Alexander
Boginski, Vladimir
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:759-7672016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:759-767
article
Portfolio symmetry and momentum
This paper presents a novel theoretical framework for modeling the evolution of a dynamic portfolio (i.e., a portfolio whose weights vary over time) under a given investment policy. The framework is based on graph theory and quantum probability. Embedding the dynamics of the portfolio in a graph, with each node representing a plausible portfolio, we provide the probabilities for a dynamic portfolio to lie on different nodes of the graph, characterizing its optimality in terms of returns. The framework embeds cross-sectional phenomena, such as the momentum effect, in stochastic processes, using portfolios instead of individual stocks. We apply our methodology to an investment policy similar to the momentum strategy of Jegadeesh and Titman (1993). We find that the strategy’s symmetry is a source of momentum.
Finance; Graph theory; Momentum; Quantum probability; Spectral analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711004188
Billio, Monica
Calès, Ludovic
Guégan, Dominique
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:132-1392016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:132-139
article
Mixture cure models in credit scoring: If and when borrowers default
Mixture cure models were originally proposed in medical statistics to model long-term survival of cancer patients in terms of two distinct subpopulations – those that are cured of the event of interest and will never relapse, along with those that are uncured and are susceptible to the event. In the present paper, we introduce mixture cure models to the area of credit scoring, where, similarly to the medical setting, a large proportion of the dataset may not experience the event of interest during the loan term, i.e. default. We estimate a mixture cure model predicting (time to) default on a UK personal loan portfolio, and compare its performance to the Cox proportional hazards method and standard logistic regression. Results for credit scoring at an account level and prediction of the number of defaults at a portfolio level are presented; model performance is evaluated through cross-validation on discrimination and calibration measures. Discrimination performance for all three approaches was found to be high and competitive. Calibration performance for the survival approaches was found to be superior to logistic regression for intermediate time intervals and useful for fixed 12-month time horizon estimates, reinforcing the flexibility of survival analysis as both a risk ranking tool and for providing robust estimates of probability of default over time. Furthermore, the mixture cure model’s ability to distinguish between two subpopulations can offer additional insights by estimating the parameters that determine susceptibility to default in addition to parameters that influence time to default of a borrower.
Credit scoring; Survival analysis; Mixture cure models; Regression; Risk analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711009064
Tong, Edward N.C.
Mues, Christophe
Thomas, Lyn C.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:379-3852016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:379-385
article
Uncertainty index based interval assignment by Interval AHP
In a multi-attribute decision making problem, values are assigned to attributes based on a decision maker’s subjective judgments. The given judgments are often uncertain, because of the uncertainty of situations and the intuitiveness of human judgments. To reflect this uncertainty in the assigned values, they are denoted as intervals whose widths represent the possibilities of the attributes. Since it is difficult for a decision maker to assign values directly to attributes when there are more than two of them, he/she gives a pairwise comparison matrix by comparing two attributes at a time. The given matrix contains two kinds of uncertainty: inconsistency among comparisons and incompleteness of comparisons. This paper proposes models to obtain the intervals of attributes from the given uncertain pairwise comparison matrix. First, the uncertainty indexes of a set of intervals are defined from the viewpoints of entropy in probability, the sum or maximum of widths, or ignorance. Then, considering that overly uncertain information is not useful, the intervals of attributes are obtained by minimizing their uncertainty indexes.
Decision analysis; Interval analysis; Analytic Hierarchy Process; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221712000276
Entani, Tomoe
Sugihara, Kazutomi
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:356-3662016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:356-366
article
The implications of postponement on contract design and channel performance
We analyze a supply chain consisting of one manufacturer and one retailer under consignment sales with a revenue sharing contract. The manufacturer produces before, but sets the selling price after, the demand curve is revealed, selling the products through the retailer. The retailer deducts a fraction from the selling price for each unit sold and remits the balance to the manufacturer. We refer to the capability whereby firms delay the price decision and make sales in response to actual market conditions as postponement. We find that, when market demand admits a multiplicative structure, the revenue share and allocation of channel profit between the firms when they have postponement capability are similar to when they do not. Postponement improves the profits of individual firms. This effect is more pronounced in the centralized system than in the decentralized system, and when market demand is more sensitive to price changes. However, postponement causes the profit loss, defined as the percentage deviation of channel profit in the decentralized system relative to the centralized system, to worsen, and the gap widens with the retailer’s sales cost. When demand has an additive structure, while the roles of postponement in the firms’ decisions differ slightly from those under the multiplicative structure, the structure of the strategic interactions between firms and relative channel performance are not significantly altered.
Supply chain management; Consignment; Revenue sharing; Postponement;
http://www.sciencedirect.com/science/article/pii/S0377221711006849
Jiang, Li
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:629-6382016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:629-638
article
Generating and improving orthogonal designs by using mixed integer programming
Analysts faced with conducting experiments involving quantitative factors have a variety of potential designs in their portfolio. However, in many experimental settings involving discrete-valued factors (particularly if the factors do not all have the same number of levels), none of these designs is suitable. In this paper, we present a mixed integer programming (MIP) method that is suitable for constructing orthogonal designs, or improving existing orthogonal arrays, for experiments involving quantitative factors with limited numbers of levels of interest. Our formulation makes use of a novel linearization of the correlation calculation. The orthogonal designs we construct do not satisfy the definition of an orthogonal array, so we do not advocate their use for qualitative factors. However, they do allow analysts to study, without sacrificing balance or orthogonality, a greater number of quantitative factors than is possible with orthogonal arrays that have the same number of runs.
Orthogonal design creation; Design of experiments; Statistics;
http://www.sciencedirect.com/science/article/pii/S0377221711006072
Vieira Jr., Hélcio
Sanchez, Susan
Kienitz, Karl Heinz
Belderrain, Mischel Carmen Neyra
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:477-4862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:477-486
article
Coordination via cost and revenue sharing in manufacturer–retailer channels
The problem of establishing efficiency in a manufacturer–retailer channel (channel coordination) is extensively discussed in the industrial economics, the marketing and the operations research literature. However, studies considering consumer demand to be simultaneously affected by price and non-price variables are scarce. One subset of models investigates efficient contracts with non-linear tariffs, but requires mechanisms which are rarely observed in managerial practice. The other subset analyses channel efficiency effects of alternative royalty payments, but omits to design an efficient contract. We contribute to this literature by investigating a contract of royalty payments that is sufficient for channel coordination. Based on the analysis of the underlying vertical externalities, we show that channel coordination requires cost and revenue sharing via a revenue sharing rate and marketing effort participation rates on both manufacturer and retailer level. Some surprising findings are highlighted: there exists a continuum of efficient contracts. Efficiency requires a retailer’s participation of at least 50% in the manufacturer’s cost of marketing effort. Moreover, the elimination of double marginalisation is not necessary for channel coordination. Manufacturer and retailer can choose an efficient contract via bargaining over the wholesale price. The main challenge for managers will be to create acceptance of new types of royalty payments based on a trustful manufacturer–retailer relationship. We also discuss the cases of the Apple iPhone market launch and of innovative restaurant franchising to further illustrate and underline the relevance of our results.
Marketing; Channel coordination; Cooperative advertising; Revenue sharing; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221711006035
Kunter, Marcus
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:789-8002016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:789-800
article
Supply chain design considering economies of scale and transport frequencies
In this paper we consider a 3-echelon, multi-product supply chain design model with economies of scale in transport and warehousing that explicitly takes transport frequencies into consideration. Our model simultaneously optimizes the locations and sizes of tank farms, material flows, and transport frequencies within the network. We consider all relevant costs: product cost, transport cost, tank rental cost, tank throughput cost, and inventory cost. The problem is based on a real-life example from a chemical company. We show that considering economies of scale and transport frequencies in the design stage is crucial, and that failing to do so can lead to substantially higher costs than optimal. We solve a wide variety of problems with branch-and-bound and with efficient solution heuristics, based on iterative linearization techniques, that we develop. We show that the heuristics are superior to the standard branch-and-bound technique for large problems like the one of the chemical company that motivated our research.
Supply chain design; Economies of scale; Transport frequencies; Iterative linearization;
http://www.sciencedirect.com/science/article/pii/S0377221711010411
Baumgartner, Kerstin
Fuetterer, André
Thonemann, Ulrich W.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:163-1742016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:163-174
article
A new methodology for generating and combining statistical forecasting models to enhance competitive event prediction
Forecasting methods are routinely employed to predict the outcome of competitive events (CEs) and to shed light on the factors that influence participants’ winning prospects (e.g., in sports events, political elections). Combining statistical models’ forecasts, shown to be highly successful in other settings, has been neglected in CE prediction. Two particular difficulties arise when developing model-based composite forecasts of CE outcomes: the intensity of rivalry among contestants, and the strength/diversity trade-off among individual models. To overcome these challenges we propose a range of surrogate measures of event outcome to construct a heterogeneous set of base forecasts. To effectively extract the complementary information concealed within these predictions, we develop a novel pooling mechanism which accounts for competition among contestants: a stacking paradigm integrating conditional logit regression and log-likelihood-ratio-based forecast selection. Empirical results using data related to horseracing events demonstrate that: (i) base model strength and diversity are important when combining model-based predictions for CEs; (ii) average-based pooling, commonly employed elsewhere, may not be appropriate for CEs (because average-based pooling exclusively focuses on strength); and (iii) the proposed stacking ensemble provides statistically and economically accurate forecasts. These results have important implications for regulators of betting markets associated with CEs and in particular for the accurate assessment of market efficiency.
Forecasting; Forecast combination; Competitive event prediction;
http://www.sciencedirect.com/science/article/pii/S0377221711009714
Lessmann, Stefan
Sung, Ming-Chien
Johnson, Johnnie E.V.
Ma, Tiejun
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:652-6582016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:652-658
article
Problem structuring methods ‘in the Dock’: Arguing the case for Soft OR
Problem structuring methods (‘soft’ OR) have been around for approximately 40 years, and yet these methods are still very much overlooked in the OR world. Whilst there are almost certainly a number of explanations for this, two key stumbling blocks are: (1) the subjective nature of the modelling, yielding insights rather than testable results, and (2) the demand on users to manage both content (through modelling) and process (working with, rather than on behalf of, groups). However, as evidenced from practice, there are also a number of significant benefits. This paper therefore aims to argue the case for Soft OR by examining the case both for and against problem structuring methods.
Problem structuring; Practice of OR; Mixing Methods;
http://www.sciencedirect.com/science/article/pii/S0377221711010010
Ackermann, Fran
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:434-4442016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:434-444
article
Information security trade-offs and optimal patching policies
We develop and simulate a basic mathematical model of the costly deployment of software patches in the presence of trade-offs between confidentiality and availability. The model incorporates representations of the key aspects of the system architecture, the managers’ preferences, and the stochastic nature of the threat environment. Using the model, we compute the optimal frequencies for regular and irregular patching, for both networks and clients, for two example types of organization, military and financial. Such examples are characterized by their constellations of parameters. Military organizations, being relatively less cost-sensitive, tend to apply network patches upon their arrival. The relatively high cost of applying irregular client patches leads both types of organization to avoid deployment upon arrival.
Information security; Optimal policy; Risk reduction; Stochastic processes;
http://www.sciencedirect.com/science/article/pii/S037722171100498X
Ioannidis, Christos
Pym, David
Williams, Julian
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:312-3252016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:312-325
article
Matching product architecture with supply chain design
Product architecture is typically established in the early stages of the product development (PD) cycle. Depending on the type of architecture selected, product design, manufacturing processes, and ultimately supply chain configuration are all significantly affected. It is therefore important to integrate product architecture decisions with manufacturing and supply chain decisions during the early stage of product development. In this paper, we present a multi-objective optimization framework for matching product architecture strategy to supply chain design. In contrast to the existing operations management literature, we incorporate the compatibility between the supply chain partners into our model to ensure the long-term viability of the supply chain. Since much of the supplier-related information may be very subjective in nature during the early stages of PD, we use fuzzy logic to compute the compatibility index of a supplier. The optimization model is formulated as a weighted goal programming (GP) model with two objectives: minimization of total supply chain costs, and maximization of the total supply chain compatibility index. The GP model is solved using a genetic algorithm. We present case examples for two different products to demonstrate the model’s efficacy, and present several managerial implications that evolved from this study.
Product architecture; Supply chain design; Modular strategy; Product development;
http://www.sciencedirect.com/science/article/pii/S0377221711006734
Nepal, Bimal
Monplaisir, Leslie
Famuyiwa, Oluwafemi
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:598-6102016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:598-610
article
The Home Care Crew Scheduling Problem: Preference-based visit clustering and temporal dependencies
In the Home Care Crew Scheduling Problem a staff of home carers has to be assigned a number of visits to patients’ homes, such that the overall service level is maximised. The problem is a generalisation of the vehicle routing problem with time windows. Required travel time between visits and time windows of the visits must be respected. The challenge when assigning visits to home carers lies in the existence of soft preference constraints and in temporal dependencies between the start times of visits.
Home care; Crew scheduling; Vehicle routing; Generalised precedence constraints; Branch-and-price; Set partitioning;
http://www.sciencedirect.com/science/article/pii/S0377221711009891
Rasmussen, Matias Sevel
Justesen, Tor
Dohn, Anders
Larsen, Jesper
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:651-6612016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:651-661
article
Short sales in Log-robust portfolio management
This paper extends the Log-robust portfolio management approach to the case with short sales, i.e., the case where the manager can sell shares he does not yet own. We model the continuously compounded rates of return, which have been established in the literature as the true drivers of uncertainty, as uncertain parameters belonging to polyhedral uncertainty sets, and maximize the worst-case portfolio wealth over that set in a one-period setting. The degree of the manager's aversion to ambiguity is incorporated through a single, intuitive parameter, which determines the size of the uncertainty set. The presence of short-selling requires the development of problem-specific techniques, because the optimization problem is not convex. In the case where assets are independent, we show that the robust optimization problem can be solved exactly as a series of linear programming problems; as a result, the approach remains tractable for large numbers of assets. We also provide insights into the structure of the optimal solution. In the case of correlated assets, we develop and test a heuristic where correlation is maintained only between assets invested in. In computational experiments, the proposed approach exhibits superior performance to that of the traditional robust approach.
Robust optimization; Nonlinear optimization; Portfolio management;
http://www.sciencedirect.com/science/article/pii/S0377221711005716
Kawas, Ban
Thiele, Aurélie
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:114-1222016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:114-122
article
Arithmetic Brownian motion and real options
We treat real option value when the underlying process is arithmetic Brownian motion (ABM). In contrast to the more common assumption of geometric Brownian motion (GBM) and multiplicative diffusion, with ABM the underlying project value is expressed as an additive process. Its variance remains constant over time rather than rising or falling along with the project’s value, even admitting the possibility of negative values. This is a more compelling paradigm for projects that are managed as a component of overall firm value. After outlining the case for ABM, we derive analytical formulas for European calls and puts on dividend-paying assets as well as a numerical algorithm for American-style and other more complex options based on ABM. We also provide examples of their use.
Investment analysis; Real options; Risk-neutral valuation; Arithmetic Brownian motion;
http://www.sciencedirect.com/science/article/pii/S0377221711011003
Alexander, David Richard
Mo, Mengjia
Stent, Alan Fraser
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:270-2792016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:270-279
article
Assessing financial distress where bankruptcy is not an option: An alternative approach for local municipalities
The goal of this paper is to build an operational model for evaluating the financial viability of local municipalities in Greece. For this purpose, a multicriteria methodology is implemented, combining a simulation analysis approach (stochastic multicriteria acceptability analysis) with a disaggregation technique. In particular, an evaluation model is developed on the basis of accrual financial data from 360 Greek municipalities for 2007. A set of financial ratios customized to the local government context is defined to rate municipalities and distinguish those in good financial condition from those experiencing financial problems. The model’s results are analyzed on the 2007 data as well as on a subsample of 100 local governments in 2009. The model succeeded in correctly classifying distressed municipalities according to a benchmark set by the central government in 2010. Such a model and methodology could be particularly useful for performance assessment in several European Union countries that have a local government framework similar to the Greek one and apply accrual accounting techniques.
Local governments; Financial distress; Multiple criteria analysis; Financial ratios; Greece;
http://www.sciencedirect.com/science/article/pii/S0377221711009404
Cohen, Sandra
Doumpos, Michael
Neofytou, Evi
Zopounidis, Constantin
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:442-4512016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:442-451
article
Linearized Nelson–Siegel and Svensson models for the estimation of spot interest rates
Linearized versions of the Nelson–Siegel (1987) and Svensson (1994) models for the cross-sectional estimation of spot yield curves from samples of coupon bonds are developed and analyzed. It is shown how these models can be made linear in the level, slope and curvature parameters and how prior information about these parameters can be incorporated in the estimation procedure. The performance of the linearized models is assessed in a Monte Carlo setting and with a sample of US government bonds. The results reveal that the linearized models compare favorably to the original models in terms of parameter estimate stability, computing effort and the prevalence of local optima.
Term structure of interest rates; Spot rate curves; Coupon bonds; Prior information; Linearization;
http://www.sciencedirect.com/science/article/pii/S0377221712000057
Gauthier, Geneviève
Simonato, Jean-Guy
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:367-3752016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:367-375
article
On the ordinal equivalence of the Johnston, Banzhaf and Shapley power indices
In this paper, we characterize the games in which the Johnston, Shapley–Shubik and Penrose–Banzhaf–Coleman indices are ordinally equivalent, meaning that they rank players in the same way. We prove that these three indices are ordinally equivalent in semicomplete simple games, which is a newly defined class that contains complete games and includes most of the real-world examples of binary voting systems. This result constitutes a twofold extension of Diffo Lambo and Moulen’s result (Diffo Lambo and Moulen, 2002) in the sense that ordinal equivalence emerges for three power indices (not just for the Shapley–Shubik and Penrose–Banzhaf–Coleman indices), and it holds for a class of games strictly larger than the class of complete games.
Game theory; Decision support systems; Simple games; Complete simple games; Power indices; Ordinal equivalence;
http://www.sciencedirect.com/science/article/pii/S0377221711006606
Freixas, Josep
Marciniak, Dorota
Pons, Montserrat
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:225-2312016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:225-231
article
A deterministic resource scheduling model in epidemic control: A case study
The resources available to tackle an epidemic infection are usually limited, while the time and effort required to control it are increasing functions of the starting time of the containment effort. The problem of scheduling limited available resources when there are several areas with infected populations is considered. A deterministic model, appropriate for large populations in which random interactions can be averaged out, is used for the epidemic’s rate of spread. The problem is tackled using the concept of deteriorating jobs, i.e. the model represents the increasing loss rate as more susceptibles become infected, and the increasing time and effort needed for the epidemic’s containment. A case study of a proposed application of the model to the mass vaccination against A(H1N1)v influenza in the Attica region of Greece is presented, together with a comparison of the model’s performance against the random practice actually applied.
Scheduling; Disaster management; Deteriorating jobs; Case study;
http://www.sciencedirect.com/science/article/pii/S0377221711006114
Rachaniotis, Nikolaos P.
Dasaklis, Tom K.
Pappis, Costas P.
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:70-822016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:70-82
article
A hierarchy of relaxations for linear generalized disjunctive programming
Generalized disjunctive programming (GDP), originally developed by Raman and Grossmann (1994), is an extension of the well-known disjunctive programming paradigm developed by Balas in the mid 70s in his seminal technical report (Balas, 1974). This mathematical representation of discrete-continuous optimization problems, which represents an alternative to the mixed-integer program (MIP), led to the development of customized algorithms that successfully exploited the underlying logical structure of the problem. The underlying theory of these methods, however, borrowed only in a limited way from the theories of disjunctive programming, and the unique insights from Balas’ work have not been fully exploited.
Integer programming; Disjunctive programming; Hull relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221711006205
Sawaya, Nicolas
Grossmann, Ignacio
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:224-2332016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:224-233
article
A binary particle swarm optimization algorithm inspired by multi-level organizational learning behavior
Recently, nature-inspired algorithms have increasingly attracted the attention of researchers. Because in binary particle swarm optimization (BPSO) the position vectors, consisting of ‘0’s and ‘1’s, can be seen as decision behaviors (support or oppose), in this paper we propose a BPSO with a hierarchical structure (BPSO_HS for short), based on multi-level organizational learning behavior. At each iteration of BPSO_HS, particles are divided into two classes, named ‘leaders’ and ‘followers’, and a different evolutionary strategy is used in each class. In addition, a mutation strategy is adopted to overcome premature convergence and slow convergence during the later stages of optimization. The algorithm was tested on two discrete optimization problems (Traveling Salesman and Bin Packing) as well as seven real-parameter functions. The experimental results showed that the performance of BPSO_HS was significantly better than that of several existing algorithms.
Binary particle swarm optimization; Multi-level organizational learning behavior; Hierarchical structure; Mutation strategy; Evolutionary strategy;
http://www.sciencedirect.com/science/article/pii/S0377221712000240
Bin, Wei
Qinke, Peng
Jing, Zhao
Xiao, Chen
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:252-2632016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:252-263
article
The effectiveness of manufacturer vs. retailer rebates within a newsvendor framework
This paper studies the impact of direct rebates to the end customer, offered by the manufacturer and/or the retailer, on the profitability and effectiveness of the policies of both channel members. Effectiveness is measured by the ratio of the retailer’s to the manufacturer’s profits and by the sum of the two parties’ profits across scenarios in which at least one party offers a rebate. The main result proves analytically the conditions under which either all three scenarios are equally profitable or the retailer-only rebate policy is dominant. Another important result illustrates the likelihood that the manufacturer can coordinate the supply chain through the appropriate choice of its pricing and rebate policies, thereby inducing the retailer to follow suit with its best pricing, ordering and rebate policies. Finally, numerical examples highlight the main features of the paper.
Supply-chain management; Cross-functional interfaces; Operations; Marketing; Conceptual modeling;
http://www.sciencedirect.com/science/article/pii/S037722171100600X
Arcelus, F.J.
Kumar, Satyendra
Srinivasan, G.
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:509-5202016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:509-520
article
Copositive optimization – Recent developments and applications
Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.
Clique number; Completely positive matrix; Convexity gap; Crossing number; Robust optimization; Standard quadratic optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711003705
Bomze, Immanuel M.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:546-5582016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:546-558
article
Solving a continuous local access network design problem with a stabilized central column generation approach
In this paper, we focus on a variant of the multi-source Weber problem. In the multi-source Weber problem, the location of a fixed number of concentrators, and the allocation of terminals to them, must be chosen to minimize the total cost of links between terminals and concentrators. In our variant, we have a third hierarchical level, two categories of link costs, and the number of concentrators is unknown. To solve this difficult problem, we propose several heuristics, and use a new stabilized column generation approach, based on a central cutting plane method, to provide lower bounds.
Location; Combinatorial optimization; Column generation; Central cutting plane; Multi-source Weber problem;
http://www.sciencedirect.com/science/article/pii/S0377221711004462
Trampont, M.
Destré, C.
Faye, A.
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:639-6502016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:639-650
article
Strategic investment under uncertainty: A synthesis
Investment is a central theme in economics, finance, and operational research. Traditionally, the focus of analysis has been either on assessing the value of flexibility (investment under uncertainty) or on describing commitment effects in competitive settings (industrial organization). Research contributions addressing the intersection of investment under uncertainty and industrial organization have become numerous in recent years. In this paper, we provide an overview aimed at categorizing and relating these research streams. We highlight managerial insights concerning the nature of competitive advantage (first- versus second-mover advantage), the manner in which information is revealed, firm heterogeneity, capital increment size, and the number of competing firms.
Finance; Investment analysis; Real options; Strategic investment; Option games;
http://www.sciencedirect.com/science/article/pii/S0377221711004863
Chevalier-Roignant, Benoît
Flath, Christoph M.
Huchzermeier, Arnd
Trigeorgis, Lenos
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:439-4472016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:439-447
article
A model for efficiency-based resource integration in services
Service processes, such as consulting, require coordinated efforts from the service recipient (client) and the service provider in order to deliver the desired output – a process known as resource integration. Client involvement directly affects the efficiency of service processes, thereby affecting capacity decisions. We present a mathematical model of the resource-integration decision for a service process through which the client and the service provider co-produce resource outputs. This workforce planning model is unique because we include the extent of client involvement as a policy variable and introduce to the resource-planning model efficiency and quality performance measures, which are functions of client involvement. The optimization of resource planning for services produces interesting policy prescriptions due to the presence of a client-modulated efficiency function in the capacity constraint and subjective client value placed on participation in the service process. The primary results of this research are optimal decision rules that provide insights into the optimal levels of client involvement and provider commitment in resource integration.
OR in manpower planning; Services; Coproduction; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221711008125
White, Sheneeta W.
Badinelli, Ralph D.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:333-3412016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:333-341
article
Contracting with asymmetric demand information in supply chains
We solve a buyback contract design problem for a supplier who is working with a retailer who possesses private information about the demand distribution. We model the retailer’s private information as a space of either discrete or continuous demand states so that only the retailer knows its demand state and the demand for the product is stochastically increasing in the state. We focus on contracts that are viable in practice, where the buyback price is strictly less than the wholesale price, which is itself strictly less than the retail price. We derive the optimal (for the supplier) buyback contract that allows for arbitrary allocation of profits to the retailer (subject to the retailer’s reservation profit requirements) and show that in the limit this contract leads to the first-best solution with the supplier keeping the entire channel’s profit (after the retailer’s reservation profit).
Supply chain management; Contracting; Asymmetric information; Return and buyback policies;
http://www.sciencedirect.com/science/article/pii/S0377221711008629
Babich, Volodymyr
Li, Hantao
Ritchken, Peter
Wang, Yunzeng
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:638-6462016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:638-646
article
Whose deletion does not affect your payoff? The difference between the Shapley value, the egalitarian value, the solidarity value, and the Banzhaf value
This study provides a unified axiomatic characterization method of one-point solutions for cooperative games with transferable utilities. Any one-point solution that satisfies efficiency, the balanced cycle contributions property (BCC), and the axioms related to invariance under a player deletion is characterized as a corollary of our general result. BCC is a weaker requirement than the well-known balanced contributions property. Any one-point solution that is both symmetric and linear satisfies BCC. The invariance axioms necessitate that the deletion of a specific player from games does not affect the other players’ payoffs, and this deletion differs across solutions. As corollaries of the above characterization result, we are able to characterize the well-known one-point solutions, the Shapley, egalitarian, and solidarity values, in a unified manner. We also study characterizations of an inefficient one-point solution, the Banzhaf value, which is a well-known alternative to the Shapley value.
Game theory; Axiomatization; Shapley value; Egalitarian value; Solidarity value; Banzhaf value;
http://www.sciencedirect.com/science/article/pii/S0377221711007302
Kamijo, Yoshio
Kongo, Takumi
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:134-1452016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:134-145
article
Measuring the efficiency of highway maintenance contracting strategies: A bootstrapped non-parametric meta-frontier approach
Highly deteriorated US road infrastructure, major budgetary restrictions and the significant growth in traffic have led to an emerging need for improving performance of highway maintenance practices. Privatizing some portions of road maintenance operations by state Departments of Transportation (DOTs) under the auspices of performance-based contracts has been one of the innovative initiatives in response to such a need. This paper adapts the non-parametric meta-frontier framework to the two-stage bootstrapping technique to develop an analytical approach for evaluating the relative efficiency of two highway maintenance contracting strategies. The first strategy pertains to the 180 miles of Virginia’s Interstate highways maintained by Virginia DOT using traditional maintenance practices. The second strategy pertains to the 250 miles of Virginia’s Interstate highways maintained via a Public Private Partnership using a performance-based maintenance approach. The meta-frontier approach accounts for the heterogeneity that exists among different types of highway maintenance contracts due to different limitations and regulations. The two-stage bootstrapping technique accounts for the large set of uncontrollable factors that affect the highway deterioration processes. The preliminary findings, based on the historical data for the state of Virginia, suggest that road authorities (counties) using traditional contracting have been more efficient at transforming maintenance expenditures into improved road conditions than road authorities using performance-based contracting. This paper recommends that road authorities use hybrid contracting approaches that include best practices of both traditional and performance-based highway maintenance contracting.
Data Envelopment Analysis; Meta-frontier; Bootstrapping; Highway maintenance contracting strategies; Performance-based contracting;
http://www.sciencedirect.com/science/article/pii/S0377221711010861
Fallah-Fini, Saeideh
Triantis, Konstantinos
de la Garza, Jesus M.
Seaver, William L.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:45-562016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:45-56
article
Scheduling inspired models for two-dimensional packing problems
We propose two exact algorithms for two-dimensional orthogonal packing problems whose main components are simple mixed-integer linear programming models. Based on the different forms of time representation in scheduling formulations, we extend the concept of multiple time grids into a second dimension and propose a hybrid discrete/continuous-space formulation. By relying on events to continuously locate the rectangles along the strip height, we aim to reduce the size of the resulting mathematical problem when compared to a pure discrete-space model, with hopes of achieving a better computational performance. Through the solution of a set of 29 test instances from the literature, we show that this was mostly accomplished, primarily because the associated search strategy can quickly find good feasible solutions prior to the optimum, which may be very important in real industrial environments. We also provide a comprehensive comparison to seven other conceptually different approaches that have solved the same strip packing problems.
Optimization; Integer programming; Strip packing; Resource-Task Network; Spatial grids;
http://www.sciencedirect.com/science/article/pii/S0377221711005078
Castro, Pedro M.
Oliveira, José F.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:351-3562016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:351-356
article
Effects of growth volatility on economic performance – Empirical evidence from Turkey
This paper examines the relationship between growth and growth volatility for a small open economy with high growth volatility: Turkey. Quarterly data for the period from 1987Q1 to 2007Q3 suggests that growth volatility reduces growth and that this result is robust under different specifications. This paper contributes to the literature by focusing on how growth volatility affects a set of variables that are crucial for growth. Empirical evidence from Turkey suggests that higher growth volatility reduces total factor productivity, investment, and the foreign currency value of local currency (depreciation). Moreover, it increases employment, though the evidence for this is not statistically significant.
Economics; Sustainability; Growth volatility; Total factor productivity; Investment; Real exchange rate;
http://www.sciencedirect.com/science/article/pii/S037722171100854X
Berument, M. Hakan
Dincer, N. Nergiz
Mustafaoglu, Zafer
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:86-952016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:86-95
article
Fitting piecewise linear continuous functions
We consider the problem of fitting a continuous piecewise linear function to a finite set of data points, modeled as a mathematical program with convex objective. We review some fitting problems that can be modeled as convex programs, and then introduce mixed-binary generalizations that allow variability in the regions defining the best-fit function’s domain. We also study the additional constraints required to impose convexity on the best-fit function.
Integer programming; Quadratic programming; Data fitting/regression; Piecewise linear function;
http://www.sciencedirect.com/science/article/pii/S0377221711011246
Toriello, Alejandro
Vielma, Juan Pablo
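The convex special case mentioned in the abstract above, fitting a continuous piecewise linear function when the breakpoints are fixed, reduces to ordinary least squares over a hinge basis. The sketch below illustrates that case only; the mixed-binary variants with variable regions are not reproduced, and the hinge-basis formulation is one standard convex model assumed for illustration, not the paper’s exact program.

```python
import numpy as np

def fit_pwl(x, y, knots):
    """Least-squares fit of a continuous piecewise linear function with
    fixed interior breakpoints, using the hinge basis max(0, x - k).
    Continuity is automatic because every basis function is continuous.
    """
    x = np.asarray(x, dtype=float)

    def design(t):
        # Columns: intercept, global slope, one slope change per knot.
        return np.column_stack(
            [np.ones_like(t), t] + [np.maximum(0.0, t - k) for k in knots])

    coef, *_ = np.linalg.lstsq(design(x), np.asarray(y, dtype=float), rcond=None)

    def predict(t):
        return design(np.asarray(t, dtype=float)) @ coef

    return coef, predict
```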
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:638-6402016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:638-640
article
A look at the past and present of optimization – An editorial
This special issue of the European Journal of Operational Research is devoted to the EURO XXIV Conference, which was held at the facilities of the University of Lisbon (Portugal) from July 11 to July 14, 2010. With over 700 sessions for a total of approximately 2350 presentations, and with 2700 participants (delegates and accompanying persons) coming from 69 countries, this was the largest EURO conference ever.
OR in research and development; Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711009994
Martello, Silvano
Pinto Paixão, José M.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:392-4002016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:392-400
article
Control of a production–inventory system with returns under imperfect advance return information
We consider a production–inventory system with product returns that are announced in advance by the customers. Demands and announcements of returns occur according to independent Poisson processes. An announced return is either actually returned or cancelled after a random return lead time. We consider both lost sale and backorder situations. Using a Markov decision formulation, the optimal production policy, with respect to the discounted cost over an infinite horizon, is characterized for situations with and without advance return information. We give insights in the potential value of this information. Also some attention is paid to combining advance return and advance demand information. Further applications of the model as well as topics for further research are indicated.
Reverse logistics; Inventory control; Stochastic dynamic programming; Advance return information;
http://www.sciencedirect.com/science/article/pii/S0377221711010046
Flapper, S.D.P.
Gayon, J.P.
Vercraene, S.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:538-5472016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:538-547
article
Estimating the population utility function: A parametric Bayesian approach
In this paper we consider the health utility index mark II for quantifying and describing a population’s health related quality of life over health states composed of multiple attributes. This measure can be used for various purposes such as evaluating the severity of the effect of a disease or comparing different treatment methods. We present a Bayesian framework for population utility estimation and health policy evaluation by introducing a probabilistic interpretation of the multi-attribute utility theory (MAUT) used in health economics. In doing so, our approach combines ideas from the MAUT and Bayesian statistics and provides an alternative method of modeling preferences and utility estimation.
Bayesian inference; Health services; Multi-attribute utility theory; OR in societal problem analysis; Group decision making;
http://www.sciencedirect.com/science/article/pii/S0377221711010083
Musal, R. Muzaffer
Soyer, Refik
McCabe, Christopher
Kharroubi, Samer A.
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:773-7832016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:773-783
article
Mathematical programming formulations for approximate simulation of multistage production systems
Mathematical programming representation has been recently used to describe the behavior of discrete event systems as well as their formal properties. This new way of representing discrete event systems paves the way to the creation of simpler mathematical programming models that reduce the complexity of the system analysis. The paper proposes an approximate representation for a class of production systems characterized by several stages, limited buffer capacities and stochastic production times. The approximation exploits the concept of a time buffer, modeled as a constraint that puts into a temporal relationship the completion times of two customers in a sample path. The main advantage of the proposed formulation is that it preserves its linearity even when used for optimization and, for such a reason, it can be adopted in simulation–optimization problems to reduce the initial solution space. The approximate formulation is applied to relevant problems such as buffer capacity allocation in manufacturing systems and control parameter setting in pull systems.
Simulation; Optimization; Queueing systems; Bounds; Linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221712000306
Alfieri, Arianna
Matta, Andrea
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:287-2992016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:287-299
article
Optimally routing and scheduling tow trains for JIT-supply of mixed-model assembly lines
In recent years, more and more automobile producers adopted the supermarket-concept to enable a flexible and reliable Just-in-Time (JIT) part supply of their mixed-model assembly lines. Within this concept, a supermarket is a decentralized in-house logistics area where parts are intermediately stored and then loaded on small tow trains. These tow trains travel across the shop floor on specific routes to make frequent small-lot deliveries which are needed by the stations of the line. To enable a reliable part supply in line with the JIT-principle, the interdependent problems of routing, that is, partitioning stations to be supplied among tow trains, and scheduling, i.e., deciding on the start times of each tow train’s tours through its assigned stations, need to be solved. This paper introduces an exact solution procedure which solves both problems simultaneously in polynomial runtime. Additionally, management implications regarding the trade-off between number and capacity of tow trains and in-process inventory near the line are investigated within a comprehensive computational study.
Mixed-model assembly lines; Just-in-Time; Material supply; Tow trains;
http://www.sciencedirect.com/science/article/pii/S0377221711008162
Emde, Simon
Boysen, Nils
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:420-4282016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:420-428
article
A heuristic method to rectify intransitive judgments in pairwise comparison matrices
This paper investigates the effects of intransitive judgments on the consistency of pairwise comparison matrices. Statistical evidence regarding the occurrence of intransitive judgments in pairwise matrices of acceptable consistency is gathered using a Monte Carlo simulation, which confirms that a relatively high percentage of comparison matrices satisfying Saaty’s CR criterion are ordinally inconsistent. It is also shown that ordinal inconsistency does not necessarily decrease in the group aggregation process, in contrast with cardinal inconsistency. A heuristic algorithm is proposed to improve ordinal consistency by identifying and eliminating intransitivities in pairwise comparison matrices. The proposed algorithm generates near-optimal solutions and outperforms other tested approaches with respect to computation time.
Decision analysis; AHP; Pairwise comparisons; Consistency; Simulation; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221711006667
Siraj, Sajid
Mikhailov, Ludmil
Keane, John
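The ordinal inconsistency studied above can be made concrete with a direct triad check. The brute-force counter below only illustrates what an intransitive judgment means in a multiplicative pairwise comparison matrix (entries above 1 meaning “row preferred to column”); it is not the paper’s identification-and-elimination heuristic.

```python
def count_intransitive_triads(A):
    """Count ordered triads (i, j, k) in a multiplicative pairwise
    comparison matrix where i beats j (A[i][j] > 1) and j beats k
    (A[j][k] > 1), yet k beats i (A[i][k] < 1): an ordinal
    transitivity violation. Each preference cycle is counted once
    per starting position.
    """
    n = len(A)
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][j] > 1 and A[j][k] > 1 and A[i][k] < 1:
                    count += 1
    return count
```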
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:764-7742016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:764-774
article
Maritime crude oil transportation – A split pickup and split delivery problem
The maritime oil tanker routing and scheduling problem is known to the literature since before 1950. In the presented problem, oil tankers transport crude oil from supply points to demand locations around the globe. The objective is to find ship routes, load sizes, as well as port arrival and departure times, in a way that minimizes transportation costs. We introduce a path flow model where paths are ship routes. Continuous variables distribute the cargo between the different routes. Multiple products are transported by a heterogeneous fleet of tankers. Pickup and delivery requirements are not paired to cargos beforehand and arbitrary split of amounts is allowed. Small realistic test instances can be solved with route pre-generation for this model. The results indicate possible simplifications and stimulate further research.
Routing; Scheduling; Maritime transportation; Pickup and delivery; Split;
http://www.sciencedirect.com/science/article/pii/S0377221711008964
Hennig, F.
Nygreen, B.
Christiansen, M.
Fagerholt, K.
Furman, K.C.
Song, J.
Kocis, G.R.
Warrick, P.H.
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:532-5382016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:532-538
article
Natural gas bilevel cash-out problem: Convergence of a penalty function method
This paper studies a special bi-level programming problem that arises from the dealings of a Natural Gas Shipping Company and the Pipeline Operator, with facilities of the latter used by the former. Because of the business relationships between these two actors, the timing and objectives of their decision-making process are different and sometimes even opposed. To model this, bi-level programming was used in previous works. Later, the problem was expanded and theoretically studied to facilitate its solution; this included extension of the upper level objective function, linear reformulation, heuristic approaches, and branch-and-bound techniques. In this paper, we present a linear programming reformulation of the latest version of the model, which is significantly faster to solve when implemented computationally. More importantly, this new formulation makes it easier to analyze the problem theoretically, allowing us to draw some conclusions about the nature of the solution of the modified problem. Numerical results concerning the running time, convergence, and optimal values are presented and compared to previous reports, showing a significant improvement in speed without actual sacrifice of the solution's quality.
OR in energy; Bi-level programming; Linearization; Penalty method;
http://www.sciencedirect.com/science/article/pii/S0377221711006059
Dempe, Stephan
Kalashnikov, Vyacheslav V.
Pérez-Valdés, Gerardo A.
Kalashnykova, Nataliya I.
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:222-2312016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:222-231
article
Mixed-integer linear optimization for optimal lift-gas allocation with well-separator routing
The lift-gas allocation problem with well-separator routing constraints is a mixed-integer nonlinear program of considerable complexity. To cope with this complexity, a mixed-integer linear formulation (compact) is obtained by piecewise-linearizing the nonlinear curves, using binary variables to express the linearization and routing decisions. A new formulation (integrated) combining the decisions on linearization and routing is developed by using a single binary variable. The structures of both formulations are explored to generate lifted cover cuts. Numerical tests show that solving the integrated formulation with cutting-plane generation is faster, despite it having more variables than the compact formulation.
Integer programming; Piecewise linearization; Lifted cover cuts; Lift-gas allocation; Routing constraints;
http://www.sciencedirect.com/science/article/pii/S0377221711007983
Codas, Andrés
Camponogara, Eduardo
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:679-6862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:679-686
article
A discrete model for optimal operation of fossil-fuel generators of electricity
This paper presents a new discrete approach to the price-based dynamic economic dispatch (PBDED) problem of fossil-fuel generators of electricity. The objective is to find a sequence of generator temperatures that maximizes profit over a fixed-length time horizon. The generic optimization model presented in this paper can be applied to automatic operation of fossil-fuel generators or to prepare market bids, and it works with various price forecasts. The model’s practical applications are demonstrated by the results of simulation experiments involving 2009 NYISO electricity market data, branch-and-bound, and tabu-search optimization techniques.
OR in energy; Price-based dynamic economic dispatch; Power generation; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221711007156
Kalczynski, Pawel J.
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:671-6792016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:671-679
article
Operations Research for green logistics – An overview of aspects, issues, contributions and challenges
The worldwide economic growth of the last century has given rise to a vast consumption of goods while globalization has led to large streams of goods all over the world. The production, transportation, storage and consumption of all these goods, however, have created large environmental problems. Today, global warming, created by large scale emissions of greenhouse gasses, is a top environmental concern. Governments, action groups and companies are asking for measures to counter this threat. Operations Research has a long tradition in improving operations and especially in reducing costs. In this paper, we present a review that highlights the contribution of Operations Research to green logistics, which involves the integration of environmental aspects in logistics. We give a sketch of the present and possible developments, focussing on design, planning and control in a supply chain for transportation, inventory of products and facility decisions. While doing this, we also indicate several areas where environmental aspects could be included in OR models for logistics.
Environment; Logistics; Supply chain management; Transportation;
http://www.sciencedirect.com/science/article/pii/S0377221711009970
Dekker, Rommert
Bloemhof, Jacqueline
Mallidis, Ioannis
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:296-3042016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:296-304
article
Multi-state throughput analysis of a two-stage manufacturing system with parallel unreliable machines and a finite buffer
This paper models and analyzes the throughput of a two-stage manufacturing system with multiple independent unreliable machines at each stage and one finite-sized buffer between the stages. The machines follow exponential operation, failure, and repair processes. Most of the literature uses binary random variables to model unreliable machines in transfer lines and other production lines. This paper first illustrates the importance of using more than two states to model parallel unreliable machines because of their independent and asynchronous operations in the parallel system. The system balance equations are then formulated based on a set of new notations of vector manipulations, and are transformed into a matrix form fitting the properties of the Quasi-Birth–Death (QBD) process. The Matrix-Analytic (MA) method for solving the generic QBD processes is used to calculate the system state probability and throughput. Numerical cases demonstrate that the solution method is fast and accurate in analyzing parallel manufacturing systems, thus proving the applicability of the new model and the effectiveness of the MA-based method. Such multi-state models and their solution techniques can be used as a building block for analyzing larger, more complex manufacturing systems.
Manufacturing; Parallel machine; Markovian analysis; Matrix-Analytic method;
http://www.sciencedirect.com/science/article/pii/S0377221711011027
Liu, Jialu
Yang, Sheng
Wu, Aiguo
Hu, S. Jack
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:584-5932016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:584-593
article
Two-dimensional efficiency decomposition to measure the demand effect in productivity analysis
This paper proposes a two-dimensional efficiency decomposition (2DED) of profitability for a production system to account for the demand effect observed in productivity analysis. The first dimension identifies four components of efficiency: capacity design, demand generation, operations, and demand consumption, using Network Data Envelopment Analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework. Thus, each component’s profitability change can be analyzed based on technical efficiency change, scale efficiency change and allocative efficiency change. An empirical study based on data from 2006 to 2008 for the US airline industry finds that the regress of productivity is mainly caused by a demand fluctuation in 2007–2008 rather than technical regression in production capabilities.
Data envelopment analysis; Productivity and profitability change; Efficiency decomposition; Demand fluctuation; Airlines industry;
http://www.sciencedirect.com/science/article/pii/S0377221711007168
Lee, Chia-Yen
Johnson, Andrew L.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:401-4072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:401-407
article
The impact of cost uncertainty on the location of a distribution center
The location of a distribution center (DC) is a key consideration for the design of supply chain networks. When deciding on it, firms usually allow for transportation costs, but not supplier prices. We consider simultaneously the location of a DC and the choice of suppliers offering different, possibly random, prices for a single product. A buying firm attempts to minimize the sum of the price charged by a chosen supplier, and inbound and outbound transportation costs. No costs are incurred for switching suppliers. We first derive a closed-form optimal location for the case of a demand-populated unit line between two suppliers offering deterministic prices. We then let one of the two suppliers offer a random price. If the price follows a symmetric and unimodal distribution, the optimal location is closer to the supplier with a lower mean price. We also show the dominance of high variability: the buyer can decrease the total cost more for higher price variability for any location. The dominance result holds for normal, uniform, and gamma distributions. We propose an extended model with more than two suppliers on a plane and show that the dominance result still holds. From numerical examples for a line and a plane, we observe that an optimal location gets closer to the center of gravity of demands as the variability of any supplier’s price increases.
Logistics; Location; Transportation; Sourcing; Distribution; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221711010071
Huang, Rongbing
Menezes, Mozart B.C.
Kim, Seokjin
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:544-5522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:544-552
article
Approximation algorithms for the parallel flow shop problem
We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson’s rule. For m=2, we present a 3/2-approximation algorithm, and for m=3, we present a 12/7-approximation algorithm. Both these algorithms run in O(n log n) time. These are the first approximation algorithms with fixed worst-case performance guarantees for the parallel flow shop problem.
Scheduling; Parallel flow shop; Hybrid flow shop; Approximation algorithms; Worst-case analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711007193
Zhang, Xiandong
van de Velde, Steef
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:531-5402016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:531-540
article
Setting staffing requirements for time dependent queueing networks: The case of accident and emergency departments
An incentive scheme aimed at reducing patients’ waiting times in accident and emergency departments was introduced by the UK government in 2000. It requires 98% of patients to be discharged, transferred, or admitted to inpatient care within 4 hours of arrival. Setting the minimal hour-by-hour medical staffing levels for achieving the government target, in the presence of complexities such as time-varying demand, multiple types of patients, and resource sharing, is the subject of this paper. Building on an extensive body of research on time-dependent queues, we propose an iterative scheme which uses infinite-server networks, the square-root staffing law, and simulation to come up with a good solution. The implementation of this algorithm in a typical A&E department suggests that significant improvement on the target can be gained, even without an increase in total staff hours.
Staffing emergency departments; 98% Target; Time-dependent queues; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711009805
Izady, Navid
Worthington, Dave
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:602-6132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:602-613
article
Consumer environmental awareness and competition in two-stage supply chains
This paper focuses on the impact of competition and consumers’ environmental awareness on key supply chain players. We consider both the production competition between partially substitutable products made by different manufacturers, and the competition between retail stores. We use two-stage Stackelberg game models to investigate the dynamics between the supply chain players given three supply chain network structures. We find that as consumers’ environmental awareness increases, retailers and manufacturers with superior eco-friendly operations will benefit; while the profitability of the inferior eco-friendly firm will tend to increase if the production competition level is low, and will tend to decrease if the production competition level is high. In addition, higher levels of retail competition may make manufacturers with inferior eco-friendly operations more likely to benefit from the increase of consumers’ environmental awareness. Moreover, as production competition intensifies, the profits of the retailers will always increase, while the profits of the manufacturers with inferior eco-friendly operations will always decrease. The profitability of the manufacturers with superior eco-friendly operations will also tend to decrease, unless consumers’ environmental awareness is high and the superior manufacturer has a significant cost advantage related to product environmental improvement.
Environmental awareness; Environmental responsibility; Supply chain management; Stackelberg game;
http://www.sciencedirect.com/science/article/pii/S0377221711010368
Liu, Zugang (Leo)
Anderson, Trisha D.
Cruz, Jose M.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:455-4572016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:455-457
article
Note on "An efficient approach for solving the lot-sizing problem with time-varying storage capacities"
In a recent paper, Gutiérrez et al. [1] show that the lot-sizing problem with inventory bounds can be solved in time. In this note we show that their algorithm does not lead to an optimal solution in general.
Inventory; Lot-sizing; Inventory bounds;
http://www.sciencedirect.com/science/article/pii/S0377221711003122
van den Heuvel, Wilco
Gutiérrez, José Miguel
Hwang, Hark-Chin
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:278-2862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:278-286
article
Comparing branch-and-price algorithms for the Multi-Commodity k-splittable Maximum Flow Problem
The Multi-Commodity k-splittable Maximum Flow Problem consists in routing as much flow as possible through a capacitated network such that each commodity uses at most k paths and the capacities are satisfied. The problem appears in telecommunications, specifically when considering Multi-Protocol Label Switching. The problem has previously been solved to optimality through branch-and-price. In this paper we propose two exact solution methods both based on an alternative decomposition. The two methods differ in their branching strategy. The first method, which branches on forbidden edge sequences, shows some performance difficulty due to large search trees. The second method, which branches on forbidden and forced edge sequences, demonstrates much better performance. The latter also outperforms a leading exact solution method from the literature. Furthermore, a heuristic algorithm is presented. The heuristic is fast and yields good solution values.
Branch and bound; Combinatorial optimization; Multi-commodity flow; k-Splittable; Dantzig–Wolfe decomposition; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221711008988
Gamst, M.
Petersen, B.
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:198-2032016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:198-203
article
Arbitration procedures with multiple arbitrators
We consider two final-offer arbitration procedures in the case where there is more than one arbitrator. Two players, labeled 1 and 2 and interpreted here as Labor and Management, respectively, are in dispute about an increase in the wage rate. They submit final offers to a Referee. There are N arbitrators. Each of the arbitrators has her own assessment and selects the offer which is closest to her assessment. After that each arbitrator informs the Referee about her decision. The Referee counts the votes and declares the player obtaining the most votes to be the winner. Under the second arbitration scheme, the Referee takes into account only the assessments which lie between the players’ offers. The game is modeled as a zero-sum game. The Nash equilibrium in this arbitration game is derived.
Group decision and negotiation; Final-offer arbitration; Multiple arbitrators; Equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221711008174
Mazalov, Vladimir
Tokareva, Julia
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:533-5432016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:533-543
article
A honey-bee mating optimization algorithm for educational timetabling problems
In this work, we propose a variant of the honey-bee mating optimization algorithm for solving educational timetabling problems. The honey-bee algorithm is a nature inspired algorithm which simulates the process of real honey-bees mating. The performance of the proposed algorithm is tested over two benchmark problems; exam (Carter’s un-capacitated datasets) and course (Socha datasets) timetabling problems. We chose these two datasets as they have been widely studied in the literature and we would also like to evaluate our algorithm across two different, yet related, domains. Results demonstrate that the performance of the honey-bee mating optimization algorithm is comparable with the results of other approaches in the scientific literature. Indeed, the proposed approach obtains best results compared with other approaches on some instances, indicating that the honey-bee mating optimization algorithm is a promising approach in solving educational timetabling problems.
Timetabling; Meta-heuristics; Honey-bee mating; Nature inspired;
http://www.sciencedirect.com/science/article/pii/S0377221711007181
Sabar, Nasser R.
Ayob, Masri
Kendall, Graham
Qu, Rong
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:519-5302016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:519-530
article
Optimizing system resilience: A facility protection model with recovery time
Optimizing system resilience is concerned with the development of strategies to restore a system to normal operations as quickly and efficiently as possible following potential disruption. To this end, we present in this article a bilevel mixed integer linear program for protecting an uncapacitated median type facility network against worst-case losses, taking into account the role of facility recovery time on system performance and the possibility of multiple disruptions over time. The model differs from previous types of facility protection models in that protection is not necessarily assumed to prevent facility failure altogether, but more precisely to speed up recovery time following a potential disruption. Three different decomposition approaches are devised to optimally solve medium to large problem instances. Computational results provide a cross comparison of the efficiency of each algorithm. Additionally, we present an analysis to estimate cost-efficient levels of investments in protection resources.
OR in strategic planning; Location; Protection; Bilevel programming; Decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221711008940
Losada, Chaya
Scaparra, M. Paola
O’Hanley, Jesse R.
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:755-7632016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:755-763
article
Stable network topologies using the notion of covering
An alternative perspective to evaluate networks and network evolution is introduced, based on the notion of covering. For a particular node in a network covering captures the idea of being outperformed by another node in terms of, for example, visibility and possibility of information gathering. In this paper, we focus on networks where these subdued network positions do not exist. We call these networks stable. Within this set we identify the minimal stable networks, which frequently have a ‘bubble-like’ structure. Severing a link in such a network results in at least one of the nodes being covered. In a minimal stable network therefore all nodes cooperate to avoid that one of the nodes ends up in a subdued position. Our results can be applied to, for example, the design of (covert) communication networks and the dynamics of social and information networks.
Graph theory; Network evolution; Information network; Degree distribution; Network centric operations;
http://www.sciencedirect.com/science/article/pii/S0377221711010563
Janssen, R.H.P.
Monsuur, H.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:394-4032016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:394-403
article
Dynamic pricing of limited inventories for multi-generation products
In this research, we consider a retailer selling products from two different generations, both with limited inventory over a predetermined selling horizon. Due to the spatial constraints or the popularity of a given product, the retailer may only display goods from one specific generation. If the transaction of the displayed item cannot be completed, the retailer may provide an alternative from another generation. We analyze two models – posted-pricing-first model and negotiation-first model. The former considers negotiation as being allowed on the price of the second product only and in the latter, only the price of the first product is negotiable. Our results show that the retailer can adopt both models effectively depending on the relative inventory levels of the products. In addition, the retailer is better off compared to the take-it-or-leave-it pricing when the inventory level of the negotiable product is high.
Revenue management; Multi-generation products; Bargaining; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221711008484
Kuo, Chia-Wei
Huang, Kwei-Long
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:105-1132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:105-113
article
Combined m-consecutive and k-out-of-n sliding window systems
This paper proposes a new model that generalizes the linear multi-state sliding window system. In this model the system consists of n linearly ordered multi-state elements. Each element can have different states: from complete failure up to perfect functioning. A performance rate is associated with each state. The system fails if at least one of the following two conditions is met: (1) there exist at least m consecutive overlapping groups of r adjacent elements having the cumulative performance lower than V; (2) there exist at least k arbitrarily located groups of r adjacent elements having the cumulative performance lower than W. An algorithm for system reliability evaluation is suggested which is based on an extended universal moment generating function. Examples of evaluating system reliability and elements’ reliability importance indices are presented. Optimal sequencing of system elements is demonstrated.
m-Consecutive; k-Out-of-n; Sliding window system; Universal moment generating function; Multi-state system;
http://www.sciencedirect.com/science/article/pii/S0377221711010836
Xiang, Yanping
Levitin, Gregory
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:204-2132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:204-213
article
A post-improvement procedure for the mixed load school bus routing problem
This paper aims to develop a mixed load algorithm for the school bus routing problem (SBRP) and measure its effects on the number of required vehicles. SBRP seeks to find optimal routes for a fleet of vehicles, where each vehicle transports students from their homes to their schools while satisfying various constraints. When mixed load is allowed, students of different schools can get on the same bus at the same time. Although many real-world SBRPs allow mixed load, only a few studies have considered these cases. In this paper, we present a new mixed load improvement algorithm and compare it with the only existing algorithm from the literature. Benchmark problems are proposed to compare the performances of algorithms and to stimulate further study by other researchers. The proposed algorithm outperforms the existing algorithm on the benchmark problem instances. It has also been successfully applied to some real-world SBRPs and could reduce the required number of vehicles compared with current practice.
Combinatorial optimization; School bus routing; Mixed load; Vehicle routing problem;
http://www.sciencedirect.com/science/article/pii/S0377221711007636
Park, Junhyuk
Tae, Hyunchul
Kim, Byung-In
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:434-4412016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:434-441
article
Multi-directional efficiency analysis of efficiency patterns in Chinese banks 1997–2008
DEA-type efficiency studies are often used to investigate levels of efficiencies, differences in those levels between subgroups within a data set and possible determinants of such differences. In the current paper we show how differences in the efficiency patterns between different subgroups within a data set can be investigated using the more recent MEA methodology.
Efficiency patterns; Multi-directional efficiency analysis (MEA); Chinese banks;
http://www.sciencedirect.com/science/article/pii/S0377221712000021
Asmild, Mette
Matthews, Kent
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:149-1602016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:149-160
article
Optimizing yard assignment in an automotive transshipment terminal
This paper studies a yard management problem in an automotive transshipment terminal. Groups of cars arrive to and depart from the terminal in a given planning period. These groups must be assigned to parking rows under some constraints resulting from managerial rules. The main objective is the minimization of the total handling time. Model extensions to handle application specific issues such as a rolling horizon and a manpower leveling objective are also discussed. The main features of the problem are modeled as an integer linear program. However, solving this formulation by a state-of-the-art solver is impractical. In view of this, we develop a metaheuristic algorithm based on the adaptive large neighborhood search framework. Computational results on real-life data show the efficacy of the proposed metaheuristic algorithm.
Logistics; Yard management; Automotive transshipment terminal; Adaptive large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221711005376
Cordeau, Jean-François
Laporte, Gilbert
Moccia, Luigi
Sorrentino, Gregorio
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:627-6432016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:627-643
article
QoS commitment between vertically integrated autonomous systems
Vertically integrated autonomous systems bargain to provide quality of service guarantees and revenue sharing. Depending on the perceived quality of service and access price, consumers determine whether they subscribe to the access provider's service. Four types of contracts are compared: (i) best effort, (ii) bilateral bargaining, (iii) cascade negotiations and (iv) grand coalition cooperation; the impact of the consumers' QoS sensitivity parameter and power relation are tested for each contract. Assuming that the consumers' quality of service sensitivity parameter is unknown and might evolve dynamically due to judgement errors, word-of-mouth effects or competition pressure, a learning algorithm is detailed and implemented by each integrated autonomous system under asymmetrical information. Its convergence and the influence of bias introduction by the most informed autonomous system are analyzed.
Bilateral bargaining; Supply chain; Shapley value; Learning;
http://www.sciencedirect.com/science/article/pii/S0377221711003870
Le Cadre, Hélène
Barth, Dominique
Pouyllau, Hélia
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:94-1042016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:94-104
article
Real-time production planning and control system for job-shop manufacturing: A system dynamics analysis
Much attention has been paid to production planning and control (PPC) in job-shop manufacturing systems. However, there is a remaining gap between theory and practice in the ability of PPC systems to capture dynamic disturbances in the manufacturing process. Since most job-shop manufacturing systems operate in a stochastic environment, the need for sound PPC systems has emerged, to identify the discrepancy between planned and actual activities in real-time and also to provide corrective measures. By integrating production ordering and batch sizing control mechanisms into a dynamic model, we propose a comprehensive real-time PPC system for arbitrary capacitated job-shop manufacturing. We adopt a system dynamics (SD) approach which is proved to be appropriate for studying the dynamic behavior of complex manufacturing systems. We study the system’s response under different arrival patterns for customer orders and the existence of various types of real-time events related to customer orders and machine failures. We determine the near-optimal values of control variables, which improve the shop performance in terms of average backlogged orders, work in process inventories and tardy jobs. The results of extensive numerical investigation are statistically examined by using analysis of variance (ANOVA). The examination reveals an insensitivity of near-optimal values to real-time events and to the arrival pattern and variability of customer orders. In addition, it reveals a positive impact of the proposed real-time PPC system on the shop performance. The efficiency of the PPC system is further examined by implementing data from a real-world manufacturer.
System dynamics; Production; Job-shop; Batch sizing; Robustness and sensitivity analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711006242
Georgiadis, Patroklos
Michaloudis, Charalampos
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:580-5882016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:580-588
article
Optimal dynamic pricing of inventories with stochastic demand and discounted criterion
We consider a continuous time dynamic pricing problem for selling a given number of items over a finite or infinite time horizon. The demand is price sensitive and follows a non-homogeneous Poisson process. We formulate this problem as to maximize the expected discounted revenue and obtain the structural properties of the optimal revenue function and optimal price policy by the Hamilton–Jacobi–Bellman (HJB) equation. Moreover, we study the impact of the discount rate on the optimal revenue function and the optimal price. Further, we extend the problem to the case with discounting and time-varying demand, the infinite time horizon problem. Numerical examples are used to illustrate our analytical results.
Revenue management; HJB equation; Optimal pricing; Discounted criterion;
http://www.sciencedirect.com/science/article/pii/S0377221711009015
Cao, Ping
Li, Jianbin
Yan, Hong
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:386-3952016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:386-395
article
Ecological modernization in the electrical utility industry: An application of a bads–goods DEA model of ecological and technical efficiency
Newly-developed data envelopment analysis techniques permit simultaneous consideration of ‘good and bad’ outputs in evaluating efficiency. We use these techniques to determine joint ecological and technical efficiencies of the 437 largest fossil-fueled electricity-generating plants in the United States. Utilizing the EPA’s E-Grid and Clean Air Markets databases and drawing on ecological modernization theory we evaluate whether innovations in organizational practices and technological solutions help achieve joint technical and environmental performance efficiencies.
Data envelopment analysis; Environment; Electrical utilities; Ecological modernization;
http://www.sciencedirect.com/science/article/pii/S0377221711008617
Sarkis, Joseph
Cordeiro, James J.
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:624-6372016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:624-637
article
Partnership formation based on multiple traits
A model of partnership formation based on two traits, called beauty and character, is presented. There are two classes of individual and partners must be of different classes. Individuals prefer prospective partners with a high beauty measure and of a similar character. This problem may be interpreted as e.g. a job search problem in which the classes are employer and employee, or a mate choice problem in which the classes are male and female. Beauty can be observed instantly. However, a costly date (or interview) is required to observe the character of a prospective partner. On observing the beauty of a prospective partner, an individual decides whether he/she wishes to date. During a date, the participants observe each other’s character and then decide whether to form a pair. Mutual acceptance is required both for a date to occur and pair formation. On finding a partner, an individual stops searching. Beauty has a continuous distribution on a finite interval, while character ‘forms a circle’ and has a uniform distribution. Criteria based on the concept of a subgame perfect Nash equilibrium are used to define a symmetric equilibrium of this game. It is argued that this equilibrium is unique. When dating costs are high, this equilibrium is a block separating equilibrium as in more classical formulations of two-sided job search problems. However, for sufficiently small dating costs the form of this equilibrium is essentially different.
Game theory; Partnership formation; Multiple traits; Subgame perfect equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221711007545
Ramsey, David M.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:347-3592016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:347-359
article
Product line pricing for services with capacity constraints and dynamic substitution
In this paper, we address a service provider’s product line pricing problem for substitutable products in services, such as concerts, sporting events, or online advertisements. For each product, a static price is selected from a pre-defined set such that the total revenue is maximised. The products are differentiated by some of their attributes, and their availability is restricted due to individual capacity constraints. Furthermore, they are simultaneously sold during a common selling period at the end of which the service is delivered. Consumers differ from one another with respect to their willingness to pay, and, hence, their reservation prices vary depending on the product. In the event of a purchase, they choose the product that maximises their consumer surplus.
Pricing; Mixed-integer programming; Branch and bound; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221711011039
Burkart, Wolfgang R.
Klein, Robert
Mayer, Stefan
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:264-2712016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:264-271
article
Sales effort free riding and coordination with price match and channel rebate
This paper studies sales effort coordination for a supply chain with one manufacturer and two retail channels, where an online retailer offers a lower price and free-rides on a brick-and-mortar retailer’s sales effort. The free riding effect reduces the brick-and-mortar retailer’s desired effort level, and thus hurts the manufacturer’s profit and the overall supply chain performance. To achieve sales effort coordination, we designed a contract with price match and selective compensation rebate. We also examined other contracts, including the target rebate contract and the wholesale price discount contract, both with price match. The numerical analysis shows that the selective rebate outperforms the other contracts in coordinating the brick-and-mortar retailer’s sales effort and improving supply chain efficiency.
Supply chain management; Sales effort free riding; Price match; Selective rebate;
http://www.sciencedirect.com/science/article/pii/S0377221711010381
Xing, Dahai
Liu, Tieming
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:442-4522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:442-452
article
A tabu search heuristic for the dynamic transportation of patients between care units
The problem studied in this paper stems from a real application to the transportation of patients in the Hospital Complex of Tours (France). The ambulance central station of the Hospital Complex has to plan the transportation demands between care units which require a vehicle. Some demands are known in advance and the others arise dynamically. Each demand requires a specific type of vehicle and a vehicle can transport only one person at a time. The demands can be subcontracted to a private company, which implies a high cost. Moreover, transportations are subject to particular constraints, among them priority of urgent demands, disinfection of a vehicle after the transportation of a patient with a contagious disease and respect of the type of vehicle needed. These characteristics involve a distinction between the vehicles and the crews during the modeling phase. We propose a modeling for solving this difficult problem and a tabu search algorithm inspired by Gendreau et al. (1999). This method combines an adaptive memory and a tabu search procedure. Computational experiments on a real-life instance and on randomly generated instances show that the method can provide high-quality solutions for this dynamic problem with a short computation time.
Transportation; Real-time; Health care; Tabu search; Vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221711003778
Kergosien, Y.
Lenté, Ch.
Piton, D.
Billaut, J.-C.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:388-3942016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:388-394
article
Single row facility layout problem using a permutation-based genetic algorithm
In this paper, a permutation-based genetic algorithm (GA) is applied to the NP-hard problem of arranging a number of facilities on a line with minimum cost, known as the single row facility layout problem (SRFLP). The GA individuals are obtained by using some rule-based as well as random permutations of the facilities, which are then improved towards the optimum by means of specially designed crossover and mutation operators. Such schemes led the GA to handle the SRFLP as an unconstrained optimization problem. In the computational experiments carried out with large-size instances of sizes from 60 to 80, available in the literature, the proposed GA improved several previously known best solutions.
Single row facility layout problem; Genetic algorithm; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711002712
Datta, Dilip
Amaral, André R.S.
Figueira, José Rui
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:435-4412016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:435-441
article
Optimal order lot sizing and pricing with free shipping
Companies, especially those in e-business, are increasingly offering free shipping to buyers whose order sizes exceed the free shipping quantity. In this paper, given that the supplier offers free shipping, we determine the retailer’s optimal order lot size and the optimal retail price. We explicitly incorporate the supplier’s quantity discount and transportation cost into the model. We analytically and numerically examine the impacts of free shipping, quantity discount and transportation cost on the retailer’s optimal lot sizing and pricing decisions. We find that free shipping can benefit the supplier, the retailer, and the end customers, and can effectively encourage the retailer to order more of the good, to the extent of ordering a few times the optimal order lot size without free shipping. The order lot size will increase and the retail price will decrease if the supplier offers proper free shipping.
Inventory; Logistics; Free shipping; Pricing; Purchasing;
http://www.sciencedirect.com/science/article/pii/S0377221711010307
Hua, Guowei
Wang, Shouyang
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:188-1972016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:188-197
article
SimLean: Utilising simulation in the implementation of lean in healthcare
Discrete-event simulation (DES) and lean are approaches that have a similar motivation: improvement of processes and service delivery. Both are being used to help improve the delivery of healthcare, but rarely are they used together. This paper explores from a theoretical and an empirical perspective the potential complementary roles of DES and lean in healthcare. The aim is to increase the impact of both approaches in the improvement of healthcare systems. Out of this exploration, the ‘SimLean’ approach is developed in which three roles for DES with lean are identified: education, facilitation and evaluation. These roles are demonstrated through three examples of DES in action with lean. The work demonstrates how the fusion of DES with lean can improve both stakeholder engagement with DES and the impact of lean.
OR in health services; Lean; Discrete-event simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711011234
Robinson, Stewart
Radnor, Zoe J.
Burgess, Nicola
Worthington, Claire
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:479-4822016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:479-482
article
On approximate Monetary Unit Sampling
Monetary Unit Sampling (MUS), also known as Dollar-Unit Sampling, is a popular sampling strategy in auditing, in which all units are to be randomly selected with probabilities proportional to their book values. However, if unit sizes vary greatly, no vector of probabilities exists fulfilling the requirement that all probabilities be proportional to the associated book values. In this note we propose a mathematical optimization approach to address this issue. An optimization program is posed, structural properties of the optimal solution are analyzed, and an algorithm yielding the optimal solution in time and space linear in the number of population units is given.
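The infeasibility the note addresses can be illustrated with a small sketch: when a unit's book value is so large that strict proportionality would push its inclusion probability above one, a standard remedy is to cap that probability at one and renormalize over the remaining units. This capping construction is shown purely for illustration; it is not necessarily the optimization approach of the note, and all names are illustrative.

```python
def mus_probabilities(book_values, n):
    """Inclusion probabilities proportional to book value for a sample of
    size n, capping at 1 those units whose proportional probability would
    exceed 1 (the infeasibility that arises under large size variability)."""
    values = list(book_values)
    certain = set()  # indices whose inclusion probability is capped at 1
    while True:
        remaining = [i for i in range(len(values)) if i not in certain]
        total = sum(values[i] for i in remaining)
        slots = n - len(certain)  # expected sample size left for the rest
        # Any remaining unit with slots * value > total cannot have a
        # proportional probability <= 1, so it must be taken with certainty.
        new_caps = [i for i in remaining if slots * values[i] > total]
        if not new_caps:
            break
        certain.update(new_caps)
    return [1.0 if i in certain else slots * values[i] / total
            for i in range(len(values))]
```

For example, with book values `[100, 1, 1, 1]` and sample size 2, pure proportionality would assign the first unit a probability of about 1.94; the sketch instead caps it at 1 and splits the remaining expected draw proportionally over the other units.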
Nonlinear programming; Monetary Unit Sampling; Statistical sampling; Karush–Kuhn–Tucker conditions;
http://www.sciencedirect.com/science/article/pii/S0377221711008654
Carrizosa, Emilio
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:611-6212016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:611-621
article
Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming
Emergency service providers must locate ambulances such that, in case of emergency, patients can be reached in a time-efficient manner. Two fundamental decisions need to be made in real time. First, immediately after a request emerges, an appropriate vehicle needs to be dispatched and sent to the request’s site. After having served a request, the vehicle needs to be relocated to its next waiting location. We propose a model and solve the underlying optimization problem using approximate dynamic programming (ADP), an emerging and powerful tool for solving stochastic and dynamic problems typically arising in the field of operations research. Empirical tests based on real data from the city of Vienna indicate that, by deviating from the classical dispatching rules, the average response time can be decreased from 4.60 to 4.01 minutes, which corresponds to an improvement of 12.89%. Furthermore, we show that it is essential to explicitly consider time-dependent information such as travel times and changes in request volume. Ignoring the current time and its consequences during modeling and optimization leads to suboptimal decisions.
OR in health services; Emergency vehicles; Ambulance location; Approximate dynamic programming; Stochastic optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711009830
Schmid, Verena
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:801-8092016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:801-809
article
A multicriteria approach to sustainable energy supply for the rural poor
Despite significant progress in energy technology, about two billion people worldwide, particularly the poor in rural areas of developing countries, have no access to electricity. Decision-making concerning the most appropriate energy technology for supplying these areas has been difficult; existing energy decision-support tools have been useful but are mostly incomplete. Trade-offs, as well as impacts that can be positive or negative, may emerge as a result of implementing modern forms of energy. These can affect both communities’ livelihoods and the confidence of decision-makers in alternative technologies. The paper discusses a newly designed multicriteria approach, and its novel robustness analysis, for selecting energy generation systems to improve livelihoods in rural areas. The proposed methodology builds upon a sustainable rural livelihoods framework to address multiple interactions and calculate trade-offs aimed at boosting decision-makers’ confidence in the selected technologies. The methodology is tested via a case study in Colombia.
Decision analysis; OR in energy; Multiple criteria analysis; Robustness and sensitivity analysis; OR in societal problem analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711010423
Henao, Felipe
Cherni, Judith A.
Jaramillo, Patricia
Dyner, Isaac
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:94-1072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:94-107
article
A simulated annealing heuristic for the team orienteering problem with time windows
This paper presents a simulated annealing based heuristic approach for the team orienteering problem with time windows (TOPTW). Given a set of known locations, each with a score, a service time, and a time window, the TOPTW finds a set of vehicle tours that maximizes the total collected scores. Each tour is limited in length and a visit to a location must start within the location’s service time window. The proposed heuristic is applied to benchmark instances. Computational results indicate that the proposed heuristic is competitive with other solution approaches in the literature.
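As a rough illustration of the general mechanism behind such a heuristic (not the authors' specific neighbourhood moves or cooling schedule, which the abstract does not detail), a simulated annealing skeleton for a maximization objective such as the total collected score might look like the following; all names and parameter values are illustrative.

```python
import math
import random

def simulated_annealing(initial, neighbor, score, t0=100.0, alpha=0.95,
                        iters_per_temp=50, t_min=1e-3, seed=0):
    """Generic simulated annealing skeleton for a maximization problem.
    neighbor(solution, rng) returns a perturbed candidate solution."""
    rng = random.Random(seed)
    current, best = initial, initial
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(current, rng)
            delta = score(cand) - score(current)
            # Always accept improvements; accept worse moves with
            # probability exp(delta / t), which shrinks as t cools.
            if delta >= 0 or rng.random() < math.exp(delta / t):
                current = cand
            if score(current) > score(best):
                best = current
        t *= alpha  # geometric cooling
    return best
```

For instance, maximizing the toy objective `-x*x` from a starting point of 10 with a small random-step neighbourhood drives the incumbent toward 0 as the temperature cools.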
Routing; Team orienteering problem; Time window; Simulated annealing;
http://www.sciencedirect.com/science/article/pii/S037722171100765X
Lin, Shih-Wei
Yu, Vincent F.
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:643-6522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:643-652
article
Efficiency of the medical care industry: Evidence from the Italian regional system
What might be the relation between clinical research and the efficiency of medical care suppliers? Is the hypothesis of a positive relation consistent? Considering efficiency as the supplier’s ability to maximize the number of patients hospitalized in a mobility process among regions (i.e. mobility balance), this work aims to highlight the existence of a positive externality of pharmaceutical clinical research on that kind of efficiency. In other words, this externality is able to affect patients’ perception of the good or bad quality of the outputs supplied by the medical care industry, guiding their mobility. Taking Italy and the mobility of patients among regions into account, an operational research study is performed in order to support this assumption.
OR in health services; Data Envelopment Analysis; Pharmaceutical clinical research; Medical researcher; Research subjects; Regional analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711009295
Ippoliti, Roberto
Falavigna, Greta
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:48-572016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:48-57
article
Connectivity-and-hop-constrained design of electricity distribution networks
This paper addresses the problem of designing the configuration of an interconnected electricity distribution network so as to maximize the minimum power margin over the feeders. In addition to the limitation of feeder power capacity, the distance (as hop count) between any customer and its allocated feeder is also limited, to prevent power losses and voltage drops. Feasibility conditions are studied and a complexity analysis is performed before introducing a heuristic algorithm and two integer linear programming formulations for the problem. A cutting-plane algorithm, relying on the generation of two classes of cuts that enforce connectivity and distance requirements respectively, is proposed for solving the second integer linear programming formulation. All the approaches are then compared on a set of 190 instances and their performance is discussed.
OR in energy; Electricity distribution networks; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221711009052
Rossi, André
Aubry, Alexis
Jacomino, Mireille
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:397-4082016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:397-408
article
Multistage stochastic portfolio optimisation in deregulated electricity markets using linear decision rules
The deregulation of electricity markets increases the financial risk faced by retailers who procure electric energy on the spot market to meet their customers’ electricity demand. To hedge against this exposure, retailers often hold a portfolio of electricity derivative contracts. In this paper, we propose a multistage stochastic mean–variance optimisation model for the management of such a portfolio. To reduce computational complexity, we apply two approximations: we aggregate the decision stages and solve the resulting problem in linear decision rules (LDR). The LDR approach consists of restricting the set of recourse decisions to those affine in the history of the random parameters. When applied to mean–variance optimisation models, it leads to convex quadratic programs. Since their size grows typically only polynomially with the number of periods, they can be efficiently solved. Our numerical experiments illustrate the value of adaptivity inherent in the LDR method and its potential for enabling scalability to problems with many periods.
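The affine restriction at the heart of the LDR approach can be sketched as follows: each recourse decision is an affine function of the observed history of random parameters, and the coefficients (here `q0` and `Q`, illustrative names) become the decision variables of the resulting convex quadratic program. This is a generic sketch of the restriction, not the paper's specific portfolio model.

```python
def make_affine_rule(q0, Q):
    """Linear decision rule: the stage-t decision is affine in the history
    xi_1, ..., xi_{t-1} of observed random parameters."""
    def rule(history):
        # q0 is the nominal decision; Q holds the sensitivities to history
        return q0 + sum(q * xi for q, xi in zip(Q, history))
    return rule

# The optimisation then searches over the coefficients (q0, Q) rather than
# over arbitrary recourse functions, which keeps the problem convex and of
# a size that grows only polynomially with the number of periods.
```

A rule with `q0 = 1` and sensitivities `[2, 3]` evaluated at the history `(1, 1)` returns `1 + 2 + 3 = 6`, showing how the decision adapts linearly to realized uncertainty.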
OR in energy; Electricity portfolio management; Stochastic programming; Risk management; Linear decision rules;
http://www.sciencedirect.com/science/article/pii/S0377221711007132
Rocha, Paula
Kuhn, Daniel
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:491-5072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:491-507
article
Incorporating human behaviour in simulation models of screening for breast cancer
Simulation modelling is widely used in many industries in order to assess and evaluate alternative options and to test strategies or operating rules which are too complex to be modelled analytically. Simulation software has developed its capability in parallel with the growth in computing power since the 1980s. However, in practice, the results from even the most sophisticated and complex simulation model may not truly reflect what happens in the real world, because such models do not account for human behaviour. For example, in the healthcare domain, simulation is often used to evaluate the outcomes of medical interventions such as new drug treatments. In reality, however, patients may not complete the course of a prescribed medication, perhaps because they find the side-effects unpleasant. A simulation study designed to evaluate this medication that ignores such behavioural factors may give unreliable results. In this paper we describe a model for breast cancer screening which includes behavioural factors to model women’s decisions to attend for mammography. The model results indicate that increasing attendance through education or publicity campaigns can be equally as effective as decreasing the intervals between screens. This would have considerable cost implications for healthcare providers.
Discrete-event simulation; Health care modelling; Human behaviour; Breast cancer screening;
http://www.sciencedirect.com/science/article/pii/S0377221711009817
Brailsford, S.C.
Harper, P.R.
Sykes, J.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:305-3152016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:305-315
article
Instance-specific multi-objective parameter tuning based on fuzzy logic
Finding good parameter values for meta-heuristics is known as the parameter setting problem. We propose a new instance-specific parameter tuning strategy, called IPTS, that takes the trade-off between solution quality and computational time into consideration. The method comprises two important steps: an a priori statistical analysis to identify the factors that determine heuristic performance, in both quality and time, for a specific type of problem; and the transformation of these insights into a fuzzy inference system rule base that aims to return parameter values on the Pareto front with respect to a decision maker’s preference.
Metaheuristics; Combinatorial optimisation; Travelling Salesman Problem; Parameter setting; Fuzzy logic;
http://www.sciencedirect.com/science/article/pii/S0377221711009441
Ries, Jana
Beullens, Patrick
Salt, David
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:368-3782016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:368-378
article
Willingness-to-pay estimation with choice-based conjoint analysis: Addressing extreme response behavior with individually adapted designs
The increasing consideration of behavioral aspects in operations management models has prompted greater use of choice-based conjoint (CBC) studies in operations research. Such studies can elicit consumers’ willingness to pay (WTP), a core input for many optimization models. However, optimization models can yield valid results only if consumers’ WTP is estimated accurately. A simulation study and two field studies show that extreme response behavior in CBC studies, such that consumers always or never choose the no-purchase option, harms the validity of WTP estimates. Reporting the share of consumers who always and never select the no-purchase option allows for detecting extreme response behavior. This study suggests an individually adapted design that avoids extreme response behavior and thus significantly improves WTP estimation accuracy.
Choice-based conjoint analysis; Willingness to pay; Marketing research;
http://www.sciencedirect.com/science/article/pii/S0377221712000033
Gensler, Sonja
Hinz, Oliver
Skiera, Bernd
Theysohn, Sven
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:541-5562016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:541-556
article
A stochastic control formalism for dynamic biologically conformal radiation therapy
State-of-the-art methods for optimizing cancer treatment over several weeks of external beam radiotherapy take a static–deterministic view of the treatment planning process, mainly focusing on spatial distribution of dose. Recent progress in quantitative functional imaging as well as mathematical models of tumor response to radiotherapy is increasingly enabling treatment planners to monitor/predict a patient’s biological response over weeks of treatment. In this paper we introduce dynamic biologically conformal radiation therapy (DBCRT), a mathematical framework intended to exploit these emerging technological and biological modeling advances to design patient-specific radiation treatment strategies that dynamically adapt to the spatiotemporal evolution of a patient’s biological response over several treatment sessions in order to achieve the best possible health outcome. More specifically, we propose a discrete-time stochastic control formalism where we use the patient’s biological condition to model the system state and the beam intensities as controls. Three approximate control schemes are then applied and compared for efficiency. Numerical simulations on test cases show that DBCRT results in a 64–98% improvement in treatment efficacy as compared to the more conventional static–deterministic approach.
OR in health services; Control; Dynamic programming; Intensity modulated radiation therapy; Adaptive radiotherapy;
http://www.sciencedirect.com/science/article/pii/S0377221711009799
Kim, Minsun
Ghate, Archis
Phillips, Mark H.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:425-4332016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:425-433
article
A systematic two phase approach for the nurse rostering problem
Nurse rostering is an NP-hard combinatorial problem, which makes it extremely difficult to solve real-life instances efficiently owing to their size and complexity. Real problem instances usually have complicated work rules related to safety and quality of service, in addition to rules about the quality of life of the personnel. For these reasons, computer-supported scheduling and rescheduling is indispensable for this problem. The specifications of the problem addressed were defined by the First International Nurse Rostering Competition (INRC-2010), sponsored by PATAT-2010, the leading conference in the automated timetabling domain. Since the competition imposed quality and time requirements, the problem instances were partitioned into sub-problems of manageable computational size and then solved sequentially using integer mathematical programming. A two-phase strategy was implemented: in the first phase the workload for each nurse and each day of the week was decided, while in the second phase the specific daily shifts were assigned. In addition, local optimization techniques searching across combinations of nurses’ partial schedules were applied. This sequence is repeated several times, depending on the available computational time. Our approach and the submitted software produced excellent solutions for both the known and the hidden problem instances, earning our team first place in all tracks of the INRC-2010 competition.
Nurse rostering; Integer programming; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221711011362
Valouxis, Christos
Gogos, Christos
Goulas, George
Alefragis, Panayiotis
Housos, Efthymios
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:509-5182016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:509-518
article
Specification and estimation of primal production models
When estimating production technology in a primal framework, production functions, input and output distance functions, and input requirement functions are widely used in the empirical literature. This paper shows that these popular primal-based models are algebraically equivalent in the sense that they can be derived from the same underlying transformation (production possibility) function. By assuming that producers maximize profit, we show that in all cases but one, the use of ordinary least squares (OLS) gives inconsistent estimates irrespective of whether the production, input distance or input requirement function is used. Based on several specifications of the production and input distance function models, we conclude that one can estimate the input elasticities and returns to scale consistently using instruments on only one regressor. No instruments are needed if either it is assumed that producers know the technology entirely (including the so-called error term) or a system approach is used. We use Norwegian timber harvesting data to illustrate the workings of the various model specifications.
Production function; Input distance function; Input requirement function; Cobb–Douglas; Translog;
http://www.sciencedirect.com/science/article/pii/S0377221711008939
Kumbhakar, Subal C.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:140-1512016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:140-151
article
Modeling age-based maintenance strategies with minimal repairs for systems subject to competing failure modes due to degradation and shocks
This paper deals with maintenance strategies with minimal repairs for single-unit repairable systems that are subject to competing and dependent failures due to degradation and traumatic shocks. The main aims are to study different approaches for making a minimal repair decision (i.e., time-based or condition-based), which is a possible corrective maintenance action under the occurrence of shocks, and to show which approach leads to a greater saving in maintenance cost in a given situation. Two age-based maintenance policies, with age-based minimal repairs and degradation-based minimal repairs respectively, are modeled, and their performance is compared with a classical pure age-based replacement policy without minimal repairs. Numerical results show the cost savings of the maintenance policies and allow us to draw conclusions about their performance under different combinations of system characteristics and maintenance costs. It is shown that carrying out minimal repairs is useful in many situations to improve the performance of maintenance operations. Moreover, comparing the optimal maintenance costs incurred by the two policies with minimal repairs allows us to identify the conditions under which the time-based and the condition-based minimal repair approaches are each appropriate.
Gamma process; Non-homogeneous Poisson process; Age replacement policy; Minimal repairs; Random inspection; Dynamic environment;
http://www.sciencedirect.com/science/article/pii/S0377221711009453
Huynh, K.T.
Castro, I.T.
Barros, A.
Bérenguer, C.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:280-2922016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:280-292
article
Design of regional production networks for second generation synthetic bio-fuel – A case study in Northern Germany
In the medium term, second-generation synthetic bio-diesel will make an important contribution to sustainable mobility. However, owing to political, technical, and market-related uncertainties, it is still not clear which interest groups will invest in production capacities and which technologies will be used. Hence, a multi-period MIP model is presented for integrated location, capacity and technology planning in the design of production networks for second-generation synthetic bio-diesel. The approach is applied to the region of Niedersachsen, Germany. Principal network configurations are developed for this region, considering different scenarios and different risk attitudes of interest groups. Based on the investigation, recommendations are drawn regarding advantageous plant concepts as well as strategies for capacity installation. Finally, recommendations for political decision makers as well as potential investors are derived.
(D) Supply chain management; Network design; Facility location planning; Synthetic bio-fuel; Case study;
http://www.sciencedirect.com/science/article/pii/S037722171100943X
Walther, Grit
Schatka, Anne
Spengler, Thomas S.
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:697-6992016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:697-699
article
Can international environmental cooperation be bought: Comment
Fuentes-Albero and Rubio (2010) analytically examine the effects of countries’ heterogeneity on international environmental cooperation. They consider two types of countries, having different abatement costs in one case and different environmental damages in another. Furthermore, they analyze whether a self-financed transfer system can diminish these heterogeneity effects. The paper shows, for both scenarios of asymmetry without transfers, that the maximum level of cooperation consists of three countries of the same type. For the case of heterogeneity in environmental damages, Fuentes-Albero and Rubio conclude that an agreement between one type 1 and one type 2 country is also self-enforcing, provided the differences in damages are not very large. In this comment, the derivation of this last result is shown to be incorrect by proving that the coalition is not self-enforcing.
Game theory; Self-enforcing international environmental agreements; Environment; Group decision and negotiation;
http://www.sciencedirect.com/science/article/pii/S037722171100717X
Glanemann, Nicole
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:257-2692016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:257-269
article
Recent advances in optimization techniques for statistical tabular data protection
One of the main services of National Statistical Agencies (NSAs) for the current Information Society is the dissemination of large amounts of tabular data, which is obtained from microdata by crossing one or more categorical variables. NSAs must guarantee that no confidential individual information can be obtained from the released tabular data. Several statistical disclosure control methods are available for this purpose. These methods result in large linear, mixed integer linear, or quadratic mixed integer linear optimization problems. This paper reviews some of the existing approaches, with an emphasis on two of them: cell suppression problem (CSP) and controlled tabular adjustment (CTA). CSP and CTA have concentrated most of the recent research in the tabular data protection field. The particular focus of this work is on methods and results of practical interest for end-users (mostly, NSAs). Therefore, in addition to the resulting optimization models and solution approaches, computational results comparing the main optimization techniques – both optimal and heuristic – using real-world instances are also presented.
Linear programming; Network flows; Mixed integer linear programming; Statistical disclosure control; Large-scale optimization;
http://www.sciencedirect.com/science/article/pii/S037722171100316X
Castro, Jordi
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:964-9712017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:964-971
article
An efficient bicriteria algorithm for stable robotic flow shop scheduling
We consider a flow shop processing a single type of part, serviced by a single robot. The robot transportation times are allowed to have small perturbations. We treat the robotic flow shop scheduling problem considering the stability of its schedule, where the robot route is fixed and the processing durations of parts are to be specified from given intervals. The stability radius of a schedule is defined as the largest amount of variation in the transportation times within which the schedule can still be executed as expected. We consider the bicriteria optimization problem of minimizing the cycle time and maximizing the stability radius. The objective is to handle the two criteria simultaneously, that is, to find their Pareto front. We propose a new strongly polynomial algorithm for finding the minimum cycle times for all possible values of the stability radius with time complexity O(m^4), where m is the number of processing machines in the flow shop. This implies that the entire Pareto front of the problem can be found in O(m^4) time.
Scheduling; Robotic flow shop scheduling; Cyclic scheduling; Stability analysis; Parametric critical path algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221717300681
Che, Ada
Kats, Vladimir
Levner, Eugene
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:926-9342017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:926-934
article
Mergers and acquisitions between risk-averse parties
This paper evaluates mergers and acquisitions (M&A) in a supply chain involving risk-averse parties. In contrast to prior literature, the analysis presented herein suggests that, because of risk considerations, different types of M&A can yield different outcomes. Specifically, we distinguish among three types of M&A arrangements—merger, forward acquisition and backward acquisition—and compare each arrangement to a decentralized supply chain (i.e., before M&A). We further analyze an application of M&A in the software industry. The expected utility gained by each party is examined under each type of M&A, and the effect of each type of M&A on the consumer is evaluated in terms of price and quality of the software product. We find that a merger yields higher expected utility to the parties and leads to higher product quality compared with forward and backward acquisitions; however, it may yield a higher price for the consumer. Moreover, we show that a decentralized supply chain can be more beneficial for the parties than a centralized supply chain (formed by acquisition).
Supply chain management; Mergers and acquisitions; Risk aversion; Mean-variance criterion; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716309559
Avinadav, Tal
Chernonog, Tatyana
Perlman, Yael
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:205-2132017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:205-213
article
Self-interest and equity concerns: A behavioural allocation rule for operational problems
In many economic situations, individuals with different bargaining power must agree on how to divide a given resource. For instance, in the dictator game the proposer has all the bargaining power. In spite of this, the majority of controlled experiments show that she shares a substantial amount of the resource with the receiver. In the present paper I consider how conflicting internal behavioural and psychological aspects, such as self-interest and equity concerns, determine the split of the resource. The individual allocation proposals are aggregated in terms of altruism and value for the resource under dispute to obtain a single allocation. The resulting allocation rule is generalized to the n-individual case through efficiency and consistency. Finally, I show that it satisfies a set of desirable properties. The obtained results are of practical interest for a number of situations, such as river sharing problems, sequential allocation and rationing problems.
Behavioural operational research; Sharing rules; Altruism; Equity concerns; Self-interest;
http://www.sciencedirect.com/science/article/pii/S0377221717300693
Osório, António
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:317-3362017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:317-336
article
Decision rule approximations for the risk averse reservoir management problem
This paper presents a new formulation for the risk-averse stochastic reservoir management problem. Using recent advances in robust optimization and stochastic programming, we propose a multi-stage model based on minimizing a risk measure associated with floods and droughts for a hydro-electric complex. We present our model and then identify approximate solutions using standard affine decision rules commonly found in the literature as well as lifted decision rules. Finally, we conduct thorough numerical experiments based on a real river system in Western Québec and draw conclusions about the relative performance of the families of decision rules.
Stochastic programming; Robust optimization; Risk analysis; OR in energy;
http://www.sciencedirect.com/science/article/pii/S0377221717300796
Gauvin, Charles
Delage, Erick
Gendreau, Michel
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1064-10722017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1064-1072
article
An analysis of insurance demand in the newsboy problem
In this paper we study the standard newsboy problem, but under two new assumptions when compared to the existing literature. First, we assume that the wholesaler is an expected-profit maximiser who sets the wholesale price optimally and, in doing so, takes into account the salvage value at which the newsboy can return unsold items to the wholesaler. Second, we assume that the salvage value is a choice variable of the newsboy, so that it acts as a standard insurance device. The newsboy’s optimal salvage value then represents an optimal demand for insurance. We study in particular the optimal pricing problem of the wholesaler, and show that it can be expressed as a mark-up equation. We also show that insurance is provided at an actuarially unfair price. As regards the newsboy’s optimal demand for insurance, the problem is too complex for a closed-form solution, so we resort to simulation. The results show that a strictly positive level of strictly partial insurance is demanded when the newsboy is strictly risk averse, and that the optimal level of insurance coverage increases with risk aversion.
Risk analysis; Newsboy problem; Insurance demand; Salvage value;
http://www.sciencedirect.com/science/article/pii/S0377221716309626
Watt, Richard
Vázquez, Francisco J.
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:129-1422017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:129-142
article
Environmental implications of transport contract choice - capacity investment and pricing under volume and capacity contracts
Inspired by the observation that capacity contracts are used by some retailers to increase their transport provider’s investments in green transport solutions, we investigate and compare a service provider’s optimal investment, and its environmental implications, under a volume contract and a capacity contract respectively. We solve the service provider’s investment problem under the assumption that the retailer uses the service to replenish a warehouse with storable goods. We then show that a capacity contract leads to more green transports, but not necessarily a larger investment in green transport solutions. At the same time, the optimal solution involves heavy investment in inventory at the retailer. The investment in inventory is non-decreasing in the cost benefit of the green transports, which may have a significant negative environmental impact. The implication is that a capacity contract will lead to better environmental performance than a volume contract only when the green transports’ cost benefit lies within a given interval. Whether the capacity contract is the more profitable option for the service provider within this interval depends on inventory-related costs and the relative environmental costs of transportation and inventory. Interestingly, owing to this, regulation that targets the price of conventional vehicles, such as a carbon tax, may lead to either an increase or a decrease in environmental performance.
Inventory; Green transports; OM and the environment; Contracting; Capacity contracts;
http://www.sciencedirect.com/science/article/pii/S0377221717301157
Berling, Peter
Eng-Larsson, Fredrik
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1115-11282017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1115-1128
article
Effects of process and outcome controls on business process outsourcing performance: Moderating roles of vendor and client capability risks
Control over outsourced projects is a significant concern for both clients and vendors. Although the effect of control on performance has been studied previously, vendor and client capability risks have rarely been merged into the control–performance relationship. Using paired quantitative data collected from 234 business process outsourcing projects, we empirically determine that outcome control is more effective than process control, although both positively influence the performance of outsourced projects. Vendor and client capability risks play miscellaneous moderating roles on the effects of process and outcome controls on performance. In the presence of high vendor capability risk, the effect of process control on performance is high, but the effectiveness of outcome control is low. By contrast, high client capability risk results in low effectiveness of process control but high effectiveness of outcome control. Different control modes have various attributes and generate different levels of performance. Either vendor or client capability risk serves as a double-edged sword with regard to control. Therefore, the risky situation of both vendors and clients should be considered in the selection and enforcement of control in managing outsourced projects.
Project management; Risk management; Business process outsourcing; Control mechanism; Vendor;
http://www.sciencedirect.com/science/article/pii/S0377221717300553
Liu, Shan
Wang, Lin
Huang, Wei (Wayne)
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:801-8172017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:801-817
article
The Benders decomposition algorithm: A literature review
The Benders decomposition algorithm has been successfully applied to a wide range of difficult optimization problems. This paper presents a state-of-the-art survey of this algorithm, emphasizing its use in combinatorial optimization. We discuss the classical algorithm, the impact of the problem formulation on its convergence, and the relationship to other decomposition methods. We introduce a taxonomy of algorithmic enhancements and acceleration strategies based on the main components of the algorithm. The taxonomy provides the framework to synthesize the literature, and to identify shortcomings, trends and potential research directions. We also discuss the use of Benders decomposition to develop efficient (meta-)heuristics, describe the limitations of the classical algorithm, and present extensions enabling its application to a broader range of problems.
Combinatorial optimization; Benders decomposition; Acceleration techniques; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221716310244
Rahmaniani, Ragheb
Crainic, Teodor Gabriel
Gendreau, Michel
Rei, Walter
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:368-3892017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:368-389
article
Flexible decision making in the wake of large scale nuclear emergencies: Long-term response
We develop a decision-making model that describes optimal protection and recovery strategies for a single economic location affected by radioactive release from a nearby nuclear power plant. The initial period of release and deposition is characterised by high degrees of uncertainty, which is likely to lead to precautionary emergency measures being carried out regardless of the actual dangers to the public, and therefore it is excluded from the optimisation problem. Instead, the analysis is performed on the timescale of weeks, months, years and decades after the accident, implying that the problem is largely deterministic if one disregards long-term economic uncertainties. It is on these longer timescales that economically-driven decisions could be made on whether or not to implement various protection and recovery measures, which include relocation, remediation, repopulation and food banning. Our model allows one to find the joint cost-minimal strategy across the set of measures, provided certain spatial and temporal flexibilities are permitted. Several qualitatively different strategies are identified, including those with no relocation and delayed remediation. Which strategy is optimal depends on the initial radiation levels, the rates and costs of the individual actions, and the preferred economic valuation of the relevant health effects associated with radiation. Our main message is that in many possible settings relocation should be used sparingly and repopulation should be delayed to exploit natural decay of the radioactive elements. These findings could provide useful recommendations to regulators in the civil nuclear industry and help devise better policies for implementing emergency response and recovery measures.
Decision support systems; Large-scale nuclear accidents; Economics of recovery measures; Continuous-time optimisation; Policy;
http://www.sciencedirect.com/science/article/pii/S0377221717301042
Yumashev, Dmitry
Johnson, Paul
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:898-9052017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:898-905
article
A constraint generation approach for two-machine shop problems with jobs selection
We consider job selection problems in two-stage flow shops and job shops. The aim is to select the best job subset with a given cardinality to minimize the makespan. These problems are known to be ordinary NP-hard, and the current state-of-the-art algorithms can solve flow shop problems with up to 3000 jobs. We introduce a constraint generation approach to the integer linear programming (ILP) formulation of these problems, in which the constraints associated with nearly all potential critical paths are relaxed and then only those violated by the relaxed solution are sequentially reinstated. The proposed approach is capable of solving problems with up to 100 000 jobs.
Scheduling; Jobs selection; Two-machine shop problems; Constraint generation approach;
http://www.sciencedirect.com/science/article/pii/S0377221716309614
Della Croce, Federico
Koulamas, Christos
T'kindt, Vincent
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:88-962017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:88-96
article
Risk-cost optimization for procurement planning in multi-tier supply chain by Pareto Local Search with relaxed acceptance criterion
We address a 2-objective optimization problem to minimize a retailer’s procurement cost and risk, where risk is evaluated as the recovery time of the retailer’s business after procurement is suspended by a catastrophic event. In order to reduce the recovery time, the retailer needs to decentralize ordering to multiple suppliers and hold contingency stock, which costs the retailer. In multi-tier supply chains, not only the retailer’s procurement plan but also their suppliers’ procurement plans affect the retailer’s risk and cost. Due to the huge number of combinations of these plans, it is difficult to find Pareto optimal solutions of the 2-objective optimization problem within a short space of time. We apply Pareto Local Search (PLS) based on heuristics to generate neighbors of a solution by changing suppliers’ plans in the tier closest to the retailer. The original PLS accepts only nondominated neighbor solutions for the next search, but this acceptance criterion is too strict to find all Pareto optimal solutions. We relax the acceptance criterion in order to include dominated solutions whose Pareto rank is equal to or less than a threshold. The threshold is updated based on changes of Pareto rank during local searches.
Risk-cost optimization; Multi-tier supply chain; Pareto Local Search;
http://www.sciencedirect.com/science/article/pii/S0377221717300632
Mori, Masakatsu
Kobayashi, Ryoji
Samejima, Masaki
Komoda, Norihisa
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:934-9482017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:934-948
article
Constraint propagation using dominance in interval Branch & Bound for nonlinear biobjective optimization
Constraint propagation has been widely used in nonlinear single-objective optimization inside interval Branch & Bound algorithms as an efficient way to discard infeasible and non-optimal regions of the search space. On the other hand, when considering two objective functions, constraint propagation is uncommon. It has mostly been applied in combinatorial problems inside particular methods. The difficulty lies in exploiting dominance relations to discard so-called non-Pareto-optimal solutions inside a decision domain, which complicates the design of complete and efficient constraint propagation methods.
Nonlinear optimization; Biobjective optimization; Constraint propagation; Interval Branch & Bound;
http://www.sciencedirect.com/science/article/pii/S0377221716303824
Martin, Benjamin
Goldsztejn, Alexandre
Granvilliers, Laurent
Jermann, Christophe
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1069-10802017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1069-1080
article
An interdiction game on a queueing network with multiple intruders
Security forces are deployed to protect networks that are threatened by multiple intruders. To select the best deployment strategy, we analyze an interdiction game that considers multiple simultaneous threats. Intruders route through the network as regular customers, while interdictors arrive at specific nodes as negative customers. When an interdictor arrives at a node where an intruder is present, the intruder is removed from the network. Intruders and interdictors compete over the value of this network, which is the throughput of unintercepted intruders. Intruders attempt to maximize this throughput by selecting a fixed route through the network, while the interdictors aim to minimize the throughput by selecting their arrival rate at each node. We analyze this game and characterize optimal strategies. For special cases, we obtain explicit formulas to evaluate the optimal strategies and use these to compute optimal strategies for general networks. We also consider the network with probabilistic routing of intruders and show that, for this case, the value and the interdictor's optimal strategies of the resulting game remain the same.
OR in defense; Network interdiction; Game theory; Queueing theory;
http://www.sciencedirect.com/science/article/pii/S0377221717301571
Laan, Corine M.
van der Mijden, Tom
Barros, Ana Isabel
Boucherie, Richard J.
Monsuur, Herman
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:355-3672017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:355-367
article
Optimal service order for mass-casualty incident response
In the aftermath of a mass-casualty incident, one of the first steps in the response is to triage the casualties. Triage systems categorize the casualties based on criticality, and then prioritize casualties for transfer to hospitals for further treatment. The prioritization is usually based on simply ordering the casualty types without considering the available resources to transport them and the scale of the disaster. These factors can significantly affect the outcome of the rescue efforts. In this research we study a mathematical model to incorporate the above mentioned factors in the triage process. We assume a disaster location with a set of casualties, categorized by criticality and care requirements, that must be transported to hospitals in the region using a fleet of available ambulances. The goal is to maximize the expected number of survivors. We analyze the structure of the optimal solution to this problem, and compare the performance of the model with the current practice and other related models in the literature.
OR in disaster relief; Emergency management; Mass-casualty triage; Service order;
http://www.sciencedirect.com/science/article/pii/S0377221717300826
Kamali, Behrooz
Bish, Douglas
Glick, Roger
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:468-4812017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:468-481
article
Competitive algorithms for multistage online scheduling
We study an online flow shop scheduling problem where each job consists of several tasks that have to be completed in t different stages and the goal is to maximize the total weight of accepted jobs. The set of tasks of a job contains one task for each stage and each stage has a dedicated set of identical parallel machines corresponding to it that can only process tasks of this stage. In order to gain the weight (profit) associated with a job j, each of its tasks has to be executed between a task-specific release date and deadline subject to the constraint that all tasks of job j from stages 1,⋯,i−1 have to be completed before the task of the ith stage can be started. In the online version, jobs arrive over time and all information about the tasks of a job becomes available at the release date of its first task. This model can be used to describe production processes in supply chains when customer orders arrive online.
Scheduling; Online optimization; Competitive analysis;
http://www.sciencedirect.com/science/article/pii/S0377221717300024
Hopf, Michael
Thielen, Clemens
Wendt, Oliver
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1152-11632017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1152-1163
article
Optimization of hospital ward resources with patient relocation using Markov chain modeling
Overcrowding of hospital wards is a well-known and often revisited problem in the literature, yet it appears in many different variations. In this study, we present a mathematical model to solve the problem of ensuring sufficient beds in hospital wards by re-distributing beds that are already available to the hospital. Patient flow is modeled using a homogeneous continuous-time Markov chain and optimization is conducted using a local search heuristic. Our model accounts for patient relocation, which has not been done analytically in the literature with similar scope. The study objective is to ensure that patient occupancy is reflected by our Markov chain model, and that a local optimum can be derived within a reasonable runtime.
OR in health services; Queueing; Markov chain; Stochastic optimization; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221717300619
Andersen, Anders Reenberg
Nielsen, Bo Friis
Reinhardt, Line Blander
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:841-8552017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:841-855
article
Efficient computation of the search region in multi-objective optimization
Multi-objective optimization procedures usually proceed by iteratively producing new solutions. For this purpose, a key issue is to determine and efficiently update the search region, which corresponds to the part of the objective space where new nondominated points could lie. In this paper we elaborate a specific neighborhood structure among local upper bounds. Thanks to this structure, the update of the search region with respect to a new point can be performed more efficiently compared to existing approaches. Moreover, the neighborhood structure provides new insight into the search region and the location of nondominated points.
Multi-objective optimization; Nondominated set; Search region; Local upper bounds; Scalarization;
http://www.sciencedirect.com/science/article/pii/S0377221716303496
Dächert, Kerstin
Klamroth, Kathrin
Lacour, Renaud
Vanderpooten, Daniel
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:195-2042017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:195-204
article
Inventory disclosure in online retailing
Unlike in a traditional store environment where inventory is directly visible to customers, Internet retailers can selectively choose how to divulge inventory level information to customers. For example, when viewing a particular item page, online shoppers may either see merely “in stock” or a specific inventory level. By choosing the appropriate inventory-level cue to display, a retailer can selectively signal stock-out risk to customers. In this paper, we analyze the optimal structure of an online retailer's inventory disclosure and pricing policy, in a two-period setting: a regular selling period followed by a clearance period. Consumers may potentially face uncertainty regarding the firm's inventory level, as well as the overall market demand. We show that there exists an inventory level threshold below which a retailer should optimally disclose inventory, and above which masking (i.e., showing only “in stock”) is optimal. Even though the optimal price decreases in the stock level, we show that equilibrium full-price sales may increase or decrease, highlighting the non-intuitive consumer behavior implications of selective inventory disclosure. The optimal “threshold-type” inventory disclosure that we derive reflects the practice of several prominent online retailers selling fashion products, and is new to the literature, as prior models of inventory sharing invoked assumptions that led to either consistent disclosure or consistent masking being optimal. We also extend the model to consider a stochastic market size, and thus highlight that the threshold policy structure continues to hold with demand uncertainty.
OR in marketing; Inventory information disclosure; Pricing; Retailing; Strategic consumers;
http://www.sciencedirect.com/science/article/pii/S0377221717300644
Aydinliyim, Tolga
Pangburn, Michael S.
Rabinovich, Elliot
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1121-11312017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1121-1131
article
Multivariate dependence and portfolio optimization algorithms under illiquid market scenarios
We propose a model for optimizing structured portfolios with liquidity-adjusted Value-at-Risk (LVaR) constraints, whereby linear correlations between assets are replaced by the multivariate nonlinear dependence structure based on Dynamic conditional correlation t-copula modeling. Our portfolio optimization algorithm minimizes the LVaR function under adverse market circumstances and multiple operational and financial constraints. When considering a diversified portfolio of international stock and commodity market indices under multiple realistic portfolio optimization scenarios, the obtained results consistently show the superiority of our approach, relative to other competing portfolio strategies including the minimum-variance, risk-parity and equally weighted portfolio allocations.
Finance; Dynamic copulas; LVaR; Dependence structure; Portfolio optimization algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716309444
Al Janabi, Mazin A.M.
Arreola Hernandez, Jose
Berger, Theo
Nguyen, Duc Khuong
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:739-7502017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:739-750
article
Capacity planning with technology replacement by stochastic dynamic programming
Technology replacement is capital intensive and highly risky in fast-paced high-tech industries facing lumpy demand. This article proposes a solution to decision-making related to (i) the technology replacement policy and (ii) the capacity plan of resources to satisfy customer demand under technological changes. The addressed problem is modeled by stochastic dynamic programming for technology replacement in conjunction with an integer programming for simultaneous capacity planning. The overall objective is to maximize the expected net present profit over a finite time horizon. The problem is solved by a pattern search-genetic algorithm. Experimental results indicate that a near-optimal solution is achieved in finite time.
Investment analysis; Capacity planning; Stochastic dynamic programming; Technology replacement;
http://www.sciencedirect.com/science/article/pii/S0377221717300012
Wang, Kung-Jeng
Nguyen, Phuc Hong
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:680-6922017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:680-692
article
A retail store SKU promotions optimization model for category multi-period profit maximization
Consumer promotions are an important element of competitive dynamics in retail markets and make a significant difference in the retailer's profits. But no study has so far included all the elements that are required to meet retail business objectives. We extend the existing literature by considering all the basic requirements for a promotional Decision Support System (DSS): reliance on operational (store-level) data only, the ability to predict sales as a function of prices, and the inclusion of other promotional variables affecting the category. The new model delivers an optimized promotional schedule at Stock-Keeping-Unit (SKU) level which maximizes multi-period category-level profit under the constraints of business rules typically applied in practice. We first develop a high dimensional distributed lag demand model which integrates both cross-SKU competitive promotion information and cross-period promotional influences. We estimate the model by proposing a two stage sign constrained regularization approach to ensure realistic promotional parameters. Based on the demand model, we then build a nonlinear integer programming model to maximize the retailer's category profits over a planning horizon under constraints that model important business rules. The output of the model provides optimized prices, display and feature advertising planning together with sales and profit forecasts. Empirical tests over a number of stores and categories using supermarket data suggest that our model generates accurate sales forecasts and increases category profits by approximately 17%, and that including cross-item and cross-period effects is also valuable.
OR in marketing; Promotion optimization; Demand forecasting; Fast-moving consumer goods retailing;
http://www.sciencedirect.com/science/article/pii/S0377221716310669
Ma, Shaohui
Fildes, Robert
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1073-10842017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1073-1084
article
A linear programming approach for learning non-monotonic additive value functions in multiple criteria decision aiding
A new framework for preference disaggregation in multiple criteria decision aiding is introduced. The proposed approach aims to infer non-monotonic additive preference models from a set of indirect pairwise comparisons. The preference model is presented as a set of marginal value functions and the discriminatory power of the inferred preference model is maximized against its complexity. To infer a value function that is compatible with the supplied preference information, the proposed methodology leads to a linear programming optimization problem that is easy to solve. The applicability and effectiveness of the new methodology is demonstrated in a thorough experimental analysis covering a broad range of decision problems.
Multiple criteria analysis; Preference disaggregation; Decision analysis; Linear programming; Non-monotonic value functions;
http://www.sciencedirect.com/science/article/pii/S0377221716309638
Ghaderi, Mohammad
Ruiz, Francisco
Agell, Núria
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:482-4932017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:482-493
article
Models and matheuristics for the unrelated parallel machine scheduling problem with additional resources
In this paper we analyze a parallel machine scheduling problem in which the processing of jobs on the machines requires a number of units of a scarce resource. This number depends both on the job and on the machine. The availability of resources is limited and fixed throughout the production horizon. The objective considered is the minimization of the makespan. We model this problem by means of two integer linear programming problems. One of them is based on a model previously proposed in the literature. The other one, which is based on the resemblance to strip packing problems, is an original contribution of this paper. As the models presented are incapable of solving medium-sized instances to optimality, we propose three matheuristic strategies for each of these two models. The algorithms proposed are tested over an extensive computational experience. Results show that the matheuristic strategies significantly outperform the mathematical models.
Scheduling; Parallel machine problem; Additional resources; Matheuristics; Makespan;
http://www.sciencedirect.com/science/article/pii/S0377221717300073
Fanjul-Peyro, Luis
Perea, Federico
Ruiz, Rubén
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:444-4592017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:444-459
article
Solving the maximum min-sum dispersion by alternating formulations of two different problems
The maximum min-sum dispersion problem aims to maximize the minimum accumulative dispersion among the chosen elements. It is known to be a strongly NP-hard problem. In this paper we present a heuristic in which the objective functions of two different problems are shifted within a variable neighborhood search framework. Though this heuristic can be seen as an extended variant of the variable formulation search approach, which takes into account alternative formulations of one problem, the important difference is that it allows using alternative formulations of more than one optimization problem. Here we use an alternative formulation of max-sum type of the originally max–min type maximum diversity problem. Computational experiments on the benchmark instances used in the literature show that the suggested approach improves the best known results for most instances in a shorter computing time.
Metaheuristics; Dispersion problems; Binary quadratic programming; Variable neighborhood search; Variable formulation search;
http://www.sciencedirect.com/science/article/pii/S0377221716310736
Amirgaliyeva, Zhazira
Mladenović, Nenad
Todosijević, Raca
Urošević, Dragan
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:917-9252017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:917-925
article
Stock keeping unit fill rate specification
The fill rate is the most widely applied service level measure in industry and yet there is minimal advice available on how it should be differentiated on an individual Stock Keeping Unit (SKU) basis given that there is an overall system target service level. The typical approach utilized in practice, and suggested in academic textbooks, is to set the individual service levels equal to the targeted performance required across an entire stock base or a certain class of SKUs (e.g., in ABC classification). In this paper it is argued that this approach is far from optimal and a simple methodology is proposed that is shown (on real life datasets) to be associated with reductions in stock investments. In addition, the new approach is intuitive, very easy to implement and thus highly likely to be positively received by practitioners and software manufacturers.
Inventory; Service level; Fill rate; Safety stock;
http://www.sciencedirect.com/science/article/pii/S0377221716309420
Teunter, R.H.
Syntetos, A.A.
Babai, M.Z.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:829-8402017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:829-840
article
Reference points and approximation algorithms in multicriteria discrete optimization
Mathematical research on multicriteria optimization problems predominantly revolves around the set of Pareto optimal solutions. In practice, on the other hand, methods that output a single solution are more widespread. Reference point methods are a successful example of this approach and are widely used in real-world multicriteria optimization. A reference point solution is the solution closest to a given reference point in the objective space.
Multicriteria optimization; Approximation algorithms; Reference point methods; Compromise programming; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716303472
Büsing, Christina
Goetzmann, Kai-Simon
Matuschke, Jannik
Stiller, Sebastian
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:557-5702017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:557-570
article
Integrated operational and financial hedging with capacity reshoring
We consider a multinational corporation that employs capacity reshoring, production switching, and financial hedging to manage supply–demand mismatches and currency risk. We optimize mean-conditional value-at-risk (CVaR) by decomposing operations and finance: operational flexibility maximizes expected profit subject to a CVaR constraint, whereas financial hedging minimizes CVaR subject to a minimum expected profit. We report three main findings. First, operational flexibility and financial hedging can be complements: operational flexibility enhances profitability and reduces downside risk, while financial hedging minimizes downside risk and can affect the feasible set of capacity portfolios (albeit indirectly) by relaxing a CVaR constraint. Second, operational flexibility and financial hedging are substitutes in risk reduction (though the latter has greater risk reduction effects in CVaR when used alone). Third, coordinating operations and finance is crucial for minimizing substitution effects. Efficient financial hedging depends on rigorous estimation of the cash flow distribution as shaped by operational flexibility, and a capacity portfolio's feasibility relies on financial hedging because of the CVaR constraint.
Production; Operations–finance interface; Capacity reshoring; Operational flexibility; Financial hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716310700
Zhao, Lima
Huchzermeier, Arnd
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1095-11042017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1095-1104
article
Strategic facility location, capacity acquisition, and technology choice decisions under demand uncertainty: Robust vs. non-robust optimization approaches
This paper presents a robust optimization (RO) modeling approach to develop insights into strategic capacity planning and resource acquisition decisions, including the facility location problem and the technology choice problem. The main purpose of the proposed model and numerical experiments is to examine the effects of economies of scale, economies of scope, and the combined effects of scale and scope under uncertain demand realizations using robust optimization. The type of capacities, or technology alternatives, that a firm can acquire can be classified on two basic dimensions. The first dimension relates to the effects of scale via distinction between labor-intensive technologies and capital-intensive technologies. The second dimension relates to the effects of scope via distinction between product-dedicated and flexible technologies. Moreover, each product-dedicated and flexible technology can have different levels of labor or capital-intensity, leading to the joint effects of economies of scale and economies of scope. Extensive computational results document how the optimal technology choice patterns depend on the cost structures and levels of model robustness specified to accommodate uncertain demand realizations. The results obtained by the two-stage robust optimization approach are compared to the results obtained by a non-robust approach.
Uncertainty modeling; Robust optimization; Strategic capacity planning; Technology choice; Flexible resources;
http://www.sciencedirect.com/science/article/pii/S0377221717300528
Jakubovskis, Aldis
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1112-11202017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1112-1120
article
The influence of public subsidies on farm technical efficiency: A robust conditional nonparametric approach
The objective of this paper is to assess the influence of public subsidies on farm technical efficiency using recent advances in nonparametric efficiency analysis. To this end, we use robust conditional frontier techniques as well as insights from recent developments in nonparametric econometrics. The paper contributes to the ongoing methodological discussion on how to model the effect of public subsidies on farmers’ production decisions. The analysis is conducted using an unbalanced panel of 1604 observations from 313 French farms located in the Meuse region over the period 2006–2011. The estimates indicate that public subsidies negatively influence the conditional technical efficiency of farms. This suggests that public subsidies affect both the range of the attainable set for the inputs and outputs and the distribution of the efficiency scores inside the attainable set.
Data Envelopment Analysis; Conditional efficiency; Nonparametric econometrics; Public subsidies; Farms;
http://www.sciencedirect.com/science/article/pii/S0377221716309390
Minviel, Jean Joseph
De Witte, Kristof
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:693-7052017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:693-705
article
A utility-based link prediction method in social networks
Link prediction is a fundamental task in social networks, with the goal of estimating the likelihood of a link between each node pair. It can be applied in many situations, such as friend discovery on social media platforms or co-author recommendations in collaboration networks. In contrast to the numerous traditional methods, this paper introduces utility analysis into link prediction by considering that individual preferences are the main reason behind the decision to form links; it also models the meeting process, a latent variable in the process of forming links. Accordingly, the link prediction problem is formulated as a machine learning process with latent variables; therefore, an Expectation–Maximization (EM, for short) algorithm is adopted and further developed to cope with the estimation problem. The performance of the present method is tested both on synthetic networks and on real-world datasets from social media networks and collaboration networks. All of the computational results illustrate that the proposed method yields more satisfying link prediction results than the selected benchmarks; in particular, logistic regression, as a special case of the proposed method, provides a lower bound on the likelihood function.
Networks; Link prediction; Utility analysis; EM algorithm; Latent variable;
http://www.sciencedirect.com/science/article/pii/S037722171631075X
Li, Yongli
Luo, Peng
Fan, Zhi-ping
Chen, Kun
Liu, Jiaguo
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:54-662017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:54-66
article
An algorithmic framework for the exact solution of tree-star problems
Many problems arising in the area of telecommunications ask for solutions with a tree-star topology. This paper proposes a general procedure for finding optimal solutions to a family of these problems. The family includes problems known in the literature as connected facility location, rent-or-buy and generalized Steiner tree-star. We propose a solution framework based on a branch-and-cut algorithm which also relies on sophisticated reduction and heuristic techniques. An important ingredient of this framework is a dual ascent procedure for asymmetric connected facility location. This paper shows how this procedure can be exploited in combination with various mixed integer programming formulations. Using the new framework, many benchmark instances in the literature for which only heuristic results were available so far can be solved to provable optimality within seconds. To better assess the computational performance of the new approach, we additionally consider larger instances and provide optimal solutions for most of them as well.
Combinatorial optimization; Connected facility location; Branch-and-cut; Dual ascent; Benders decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221717301170
Leitner, Markus
Ljubić, Ivana
Salazar-González, Juan-José
Sinnl, Markus
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:992-10022017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:992-1002
article
Design and analysis of mechanisms for decentralized joint replenishment
We consider jointly replenishing multiple firms that operate under an EOQ-like environment in a decentralized, non-cooperative setting. Each firm’s demand rate and inventory holding cost rate are private information. We are interested in finding a mechanism that would determine the joint replenishment frequency and allocate the joint ordering costs to these firms based on their reported stand-alone replenishment frequencies (if they were to order independently). We first provide an impossibility result showing that there is no direct mechanism that simultaneously achieves efficiency, incentive compatibility, individual rationality and budget-balance. We then propose a general, two-parameter mechanism in which one parameter is used to determine the joint replenishment frequency and the other is used to allocate the ordering costs based on firms’ reports. We show that efficiency cannot be achieved in this two-parameter mechanism unless the parameter governing the cost allocation is zero. When the two parameters are the same (a single-parameter mechanism), we find the equilibrium share levels and corresponding total cost. We finally investigate the effect of this parameter on equilibrium behavior. We show that properly adjusting this parameter leads to mechanisms that are better, in terms of fairness and efficiency, than other mechanisms suggested earlier in the literature.
Game Theory; Inventory; Joint replenishment; Economic Order Quantity model; Mechanism design;
http://www.sciencedirect.com/science/article/pii/S0377221716309547
Güler, Kemal
Körpeoğlu, Evren
Şen, Alper
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:818-8282017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:818-828
article
Minimizing flowtime for paired tasks
Certain service and production systems require that a single processor complete, for each job, a pair of ordered tasks separated in time by a minimal required delay. In particular circumstances, this mode of operation can characterize a physician in a hospital Emergency Department, a painting crew at a construction site, or a work station in a job shop. Motivated by these scenarios, we formulate an applicable scheduling problem and investigate its solution. To determine an optimal schedule, we formulate an appropriate mixed integer model in which the key lever for process improvement is the batching of tasks. We further show that two special cases of this problem can be solved optimally and efficiently. To expedite decision-making, we propose heuristic approaches supported by spreadsheet-based software. Numerical results are then presented and insights discussed.
Heuristics; Scheduling/batching; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221716308451
Courtad, Brenda
Baker, Kenneth
Magazine, Michael
Polak, George
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:715-7242017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:715-724
article
Crowd performance in prediction of the World Cup 2014
This paper investigates the performance of the Yahoo crowd and experts in predicting the outcomes of matches in the World Cup in 2014. The analysis finds that the Yahoo crowd was statistically significantly better at predicting outcomes of matches than experts and very similar in performance to established betting odds. In addition, this paper finds that there was a statistically significant difference between the performances of the Yahoo crowd and a different crowd on the same task, suggesting that characteristics of the “crowd” matter. Finally, this paper finds that different crowdsourcing approaches apparently provide different results. Accordingly, it is important to specify the particular crowdsourcing approach, rather than simply “crowdsource.”
OR in sports; Crowdsourcing; Expertise; Brier score; FIFA World Cup;
http://www.sciencedirect.com/science/article/pii/S0377221716310773
O'Leary, Daniel E.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:873-8862017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:873-886
article
Approximate dynamic programming for missile defense interceptor fire control
Given the ubiquitous nature of both offensive and defensive missile systems, the catastrophe-causing potential they represent, and the limited resources available to countries for missile defense, optimizing the defensive response to a missile attack is a necessary national security endeavor. For a single salvo of offensive missiles launched at a set of targets, a missile defense system protecting those targets must determine how many interceptors to fire at each incoming missile. Since such missile engagements often involve the firing of more than one attack salvo, we develop a Markov decision process (MDP) model to examine the optimal fire control policy for the defender. Due to the computational intractability of using exact methods for all but the smallest problem instances, we utilize an approximate dynamic programming (ADP) approach to explore the efficacy of applying approximate methods to the problem. We obtain policy insights by analyzing subsets of the state space that reflect a range of possible defender interceptor inventories. Testing of four instances derived from a representative planning scenario demonstrates that the ADP policy provides high-quality decisions for a majority of the state space, achieving a 7.74% mean optimality gap over all states for the most realistic instance, modeling a longer-term engagement by an attacker who assesses the success of each salvo before launching a subsequent one. Moreover, the ADP algorithm requires only a few minutes of computational effort versus hours for the exact dynamic programming algorithm, providing a method to address more complex and realistically-sized instances.
Approximate dynamic programming; Least squares temporal differences; Markov decision processes; Military applications; Weapon target assignment problem;
http://www.sciencedirect.com/science/article/pii/S0377221716309481
Davis, Michael T.
Robbins, Matthew J.
Lunday, Brian J.
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:247-2592017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:247-259
article
A stabilised scenario decomposition algorithm applied to stochastic unit commitment problems
In recent years the expansion of energy supplies from volatile renewable sources has triggered an increased interest in stochastic optimisation models for hydro-thermal unit commitment. Several studies have modelled this as a two-stage or multi-stage stochastic mixed-integer optimisation problem. Solving such problems directly is computationally intractable for large instances, and alternative approaches are required. In this paper we use a Dantzig–Wolfe reformulation to decompose the stochastic problem by scenarios. We derive and implement a column generation method with dual stabilisation and novel primal and dual initialisation techniques. A fast, novel schedule combination heuristic is used to construct very good primal solutions, and numerical results show that knowing these from the start improves the convergence of the column generation method significantly. We test our method on a central scheduling model based on the British National Grid and illustrate that convergence to within 0.1% of optimality can be achieved quickly.
Stochastic programming; Mixed-integer column generation; Dantzig–Wolfe decomposition; Lagrangian relaxation; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221717301108
Schulze, Tim
Grothey, Andreas
McKinnon, Ken
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:864-8722017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:864-872
article
A network simplex method for the budget-constrained minimum cost flow problem
We present a specialized network simplex algorithm for the budget-constrained minimum cost flow problem, which is an extension of the traditional minimum cost flow problem by a second kind of costs associated with each edge, whose total value in a feasible flow is constrained by a given budget B. We present a fully combinatorial description of the algorithm that is based on a novel incorporation of two kinds of integral node potentials and three kinds of reduced costs. We prove optimality criteria and combine two methods that are commonly used to avoid cycling in traditional network simplex algorithms into new techniques that are applicable to our problem. With these techniques and our definition of the reduced costs, we are able to prove a pseudo-polynomial running time of the overall procedure, which can be further improved by incorporating Dantzig’s pivoting rule. Moreover, we present computational results that compare our procedure with Gurobi (2016).
Combinatorial optimization; Algorithms; Network flow; Minimum cost flow; Network simplex;
http://www.sciencedirect.com/science/article/pii/S0377221716309493
Holzhauser, Michael
Krumke, Sven O.
Thielen, Clemens
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:778-7882017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:778-788
article
Attributing credit to coauthors in academic publishing: The 1/n rule, parallelization, and team bonuses
Universities looking to recruit or to rank researchers have to attribute credit scores to their academic publications. While they could use indexes, there remains the difficulty of coauthored papers. It is unfair to count an n-authored paper as one paper for each coauthor, i.e., as n papers added to the total: this is “feeding the multitude”. Sharing the credit among coauthors by percentages or by simply dividing by n (“1/n rule”) is fairer but somewhat harsh. Accordingly, we propose to take into account the productivity gains of parallelization by introducing a parallelization bonus that multiplies the credit allocated to each coauthor.
OR in scientometrics; Academic publishing; 1/n rule; Parallelization; Productivity gains;
http://www.sciencedirect.com/science/article/pii/S0377221717300140
de Mesnard, Louis
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:935-9482017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:935-948
article
Price and quality decisions in dual-channel supply chains
Since decisions on non-price features such as product quality have drawn little attention in the literature on dual-channel supply chains, this paper investigates price and quality decisions in dual-channel supply chains, in which a single product is delivered through a retail channel, a direct channel, or a dual channel with both retail and direct channels. Considering that the supply chains can be centralized or decentralized, we demonstrate that quality improvement can be realized when a new channel is introduced. Moreover, we employ two themes in terms of the channel-adding Pareto zone to characterize the impacts of channel structures on supply-chain performance, including the whole system's profit (for the centralized system), each player's profit (for the decentralized system), and consumer surplus. When price and quality decisions are considered, we find that supply chain performance can be improved when a new channel is added. Moreover, we show the effects of the quality sensitivity parameters of different channels on price and product quality, as well as on profits and consumer surplus.
Dual-channel supply chain; Pricing; Product quality; Pareto zone; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716309419
Chen, Jingxian
Liang, Liang
Yao, Dong-Qing
Sun, Shengnan
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:154-1682017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:154-168
article
A routing and scheduling approach to rail transportation of hazardous materials with demand due dates
This paper investigates the routing and scheduling of rail shipments of hazardous materials (hazmat) in the presence of due dates. In particular, we consider the problem of minimizing the weighted sum of earliness and tardiness for each demand plus the holding cost at each yard, while forcing a risk threshold on each service leg at any time instant. The Federal Railroad Administration (FRA) accident records, between 1999 and 2013, were analyzed to establish that train speed was the most significant factor in derailment. A mixed-integer programming model and a heuristic-based solution method are proposed for preparing the shipment plan. Finally, the analytical framework is used to study and analyze a number of realistic-sized problem instances generated using the infrastructure of a Class I railroad operator.
Transportation; Railway; Hazardous materials; Due date; Train speed;
http://www.sciencedirect.com/science/article/pii/S0377221717300802
Fang, Kan
Ke, Ginger Y.
Verma, Manish
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:828-8282017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:828-828
article
Re. “Discrete Representation of Non-dominated Sets in Multi-objective Linear Programming” [European Journal of Operational Research 255 (2016) 687–698]
http://www.sciencedirect.com/science/article/pii/S0377221716306579
Shao, Lizhen
Ehrgott, Matthias
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1014-10232017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1014-1023
article
Flexible lot sizing in hybrid make-to-order/make-to-stock production planning
Hybrid make-to-order/make-to-stock production systems are difficult to control. Batch production of make-to-stock products allows for efficient capacity usage, but fixed batch sizes can make the system less responsive to make-to-order customers. Using Markov Decision Process modeling, we show that the optimal policy for such hybrid systems varies the lot size in response to make-to-order backlogs as well as stock levels. We evaluate the performance of this flexible lot sizing policy for a two-product hybrid system. We find that it leads to savings of up to 23% compared to policies that use either completely or partly fixed lot sizes. We also find that flexible lot sizing is especially beneficial for systems with a low load and where make-to-stock is important in terms of production capacity usage and cost.
Production; Production planning; Lot sizing; Hybrid production; Markov Decision Process;
http://www.sciencedirect.com/science/article/pii/S0377221717300383
Beemsterboer, Bart
Land, Martin
Teunter, Ruud
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:222-2332017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:222-233
article
Undominated nonnegative excesses and core extensions of transferable utility games
The extension of the core for cooperative games with transferable utility is studied. By considering only nonnegative coalitional excesses, we introduce the concept of undominated nonnegative excess vectors and demonstrate that some well-known extended cores can be defined based on this concept. Moreover, we propose two new core extensions: the min-max tax core, derived by minimizing the maximal tax paid by all players, and the lexicographical min-max tax core, derived by lexicographically minimizing the taxes paid by all players in all feasible coalition structures for the stabilization of the grand coalition. Both of the new extended cores coincide with the core when the latter is not empty. We demonstrate that the min-max tax core is different from the least core but coincides with it for super-additive games with an empty core, and the lexicographical min-max tax core is different from the positive core but coincides with the latter for all super-additive games. Our study provides a new, taxation-based interpretation of the least core and the positive core for super-additive games.
Game theory; Cooperative games; Core extension; Taxation; Super-additive games;
http://www.sciencedirect.com/science/article/pii/S0377221717300851
Chen, Haoxun
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:460-4672017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:460-467
article
Identification of unidentified equality constraints for integer programming problems
Characterising the smallest dimension polytope containing all integer solutions of an integer programming problem can be a very challenging task. Frequently, this task is facilitated by identifying linear equality constraints that all integer solutions must satisfy. Typically, some of these constraints are readily available but others need to be discovered by more technical means. This paper develops a method to assist modellers to obtain such equality constraints. Note that the set of new equality constraints is not unique, and the proposed method generates a set of these new equality constraints for a sufficiently large dimension of the underlying problem. These generated constraints may be of a form that is easily extended for the general case of the underlying problem, or they may be in a more complicated form where a generalisable pattern is difficult to identify. For the latter case, a new mixed-integer program is developed to detect pattern-recognisable constraints. Furthermore, this mixed-integer program allows modellers to check if there is a new constraint satisfying specific criteria, such as only permitting coefficients to be 1, 0, and −1, or placing a limit on the number of non-zero coefficients. In order to illustrate the proposed method, a set of new equality constraints to supplement a previously published “Base polytope” are derived. Subsequently, exploiting these results, some techniques are proposed to tighten integer programming problems. Finally, relaxations of widely used TSP formulations are compared against one another and strengthened with the help of the newly discovered equality constraints.
Integer programming; Convex hull problem; Combinatorial optimisation problem; Extended formulations; Traveling salesman problem;
http://www.sciencedirect.com/science/article/pii/S0377221716310748
Moeini, Asghar
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:279-3012017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:279-301
article
Comparison of Kriging-based algorithms for simulation optimization with heterogeneous noise
In this article we investigate the unconstrained optimization (minimization) of the performance of a system that is modeled through a discrete-event simulation. In recent years, several algorithms have been proposed which extend the traditional Kriging-based simulation optimization algorithms (assuming deterministic outputs) to problems with noise. Our objective in this paper is to compare the relative performance of a number of these algorithms on a set of well-known analytical test functions, assuming different patterns of heterogeneous noise. We also apply the algorithms to a popular inventory test problem. The conclusions and insights obtained may serve as a useful guideline for researchers aiming to apply Kriging-based algorithms to solve engineering and/or business problems, and may be useful in the development of future algorithms.
Simulation; Stochastic Kriging; Heterogeneous noise; Ranking and selection; Optimization via simulation;
http://www.sciencedirect.com/science/article/pii/S037722171730070X
Jalali, Hamed
Van Nieuwenhuyse, Inneke
Picheny, Victor
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1164-11742017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1164-1174
article
Procurement auctions with ex post cooperation between capacity constrained bidders
The use of procurement auctions is a common practice for firms to procure goods and services. In this paper, we consider a first-price sealed-bid procurement auction consisting of two bidders with limited capacities. Neither bidder is able to complete the auctioned project on its own due to its capacity constraint. Once the auction ends, the winning bidder has the option of cooperating with the losing bidder to fulfill the project. We investigate how the ex post cooperation affects bidders’ bidding strategies and equilibrium profits as well as the competition intensity of the auction. Surprisingly, we find that a bidder’s profit at equilibrium may decrease in its capacity level and increase in its unit cost. This is because a bidder’s larger capacity or smaller unit cost would lower the other bidder’s cost to complete the project due to the presence of cooperation, and thus intensify the competition between bidders. We also find that the winning bid price may become higher or lower (depending on bidders’ characteristics) when bidders have the option of ex post cooperation. Further, it is shown that the bidder with a cost advantage may be hurt by the ex post cooperation.
Auctions/bidding; Co-opetition; Subcontracting; Capacity constraint;
http://www.sciencedirect.com/science/article/pii/S0377221717300735
Xu, Jiayan
Feng, Yinbo
He, Wen
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:143-1532017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:143-153
article
Flexible contract design for VMI supply chain with service-sensitive demand: Revenue-sharing and supplier subsidy
To coordinate a VMI (vendor-managed inventory) supply chain with service-level-sensitive customers, we relate the retailer's service level to customer demand. We establish the dynamic game relationship under a revenue-sharing contract. A pure revenue-sharing contract, focusing on the revenue-sharing ratio, often creates a competitive relationship among SC members, as the optimal sharing ratio is determined by the game leader and may fail to optimize SC performance. An improved mechanism is necessary to avoid sub-optimality. In this research, we propose three flexible subsidy contracts for VMI supply chains: (i) subsidizing all surplus products; (ii) subsidizing inventory that exceeds the target level and is unsold; (iii) subsidizing inventory that exceeds the target regardless of whether it is sold. These contracts help SC members to arrive at the optimal price, revenue-sharing ratio, inventory target, and subsidy rate, and to commit inventory early. The proposed mechanism can better ensure SC collaboration and bring the SC to a Pareto improvement by allowing members to negotiate, share profit, subsidize suppliers for their risks, and select from alternative contracts under each VMI setting. The contracts enhance service levels, maximize SC performance, and share SC profits fairly. Numerical examples are provided to validate our findings and to derive managerial implications.
Supply chain management; Flexible contract; Supplier subsidy; Revenue-sharing; Service-sensitive demand;
http://www.sciencedirect.com/science/article/pii/S0377221717300784
Cai, Jianhu
Hu, Xiaoqing
Tadikamalla, Pandu R.
Shang, Jennifer
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1142-11512017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1142-1151
article
Equilibrium joining strategies in batch service queueing systems
We consider strategic customers in a Markovian queue with batch services. We derive customer equilibrium strategies, regarding the joining/balking dilemma, in two cases with respect to the information provided upon arrival, unobservable and observable. In contrast to models with single services, a customer’s decision to join induces both positive and negative externalities to other customers. This fact leads to an intricate mixture of Follow-The-Crowd and Avoid-The-Crowd behavior and possibly multiple equilibrium strategies. Moreover, we discuss the effects of the two levels of information and the batch size on the strategic behavior of the customers and on the overall social welfare. Finally, we present several numerical experiments that reveal important differences in the strategic behavior of customers in batch service systems, in juxtaposition to single service systems.
Queueing; Strategic customers; Batch services; Balking; Equilibrium strategies;
http://www.sciencedirect.com/science/article/pii/S0377221717300590
Bountali, Olga
Economou, Antonis
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:97-1072017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:97-107
article
Capacity allocation under downstream competition and bargaining
In this study, we consider a monopolistic supplier’s capacity-allocation problem under bargaining. The supplier can allocate one type of key element to either an external channel with a manufacturer, an internal channel, or both. The firms use the element to produce substitutable final products and compete in the product market. By building a stylized model, we characterize the equilibrium decisions under different channel choices. The conditions of the equilibrium channel choices are derived. We find that the supplier’s shared capacity increases with his bargaining power, but the manufacturer’s shared capacity decreases with her bargaining power. Meanwhile, the higher bargaining power may backfire on the manufacturer, because her loss from a decreased shared capacity may dominate her benefit from an increase in her bargaining power.
Supply chain management; Capacity allocation; Market competition; Bargaining; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221717300668
Qing, Qiankai
Deng, Tianhu
Wang, Hongwei
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:920-9332017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:920-933
article
A branch-and-bound based heuristic algorithm for convex multi-objective MINLPs
We study convex multi-objective Mixed Integer Non-Linear Programming problems (MINLPs), which are characterized by multiple objective functions and non-linearities, features that appear in real-world applications. To derive a good approximated set of non-dominated points for convex multi-objective MINLPs, we propose a heuristic based on a branch-and-bound algorithm. It starts with a set of feasible points, obtained, at the root node of the enumeration tree, by iteratively solving, with an ε-constraint method, a single objective model that incorporates the other objective functions as constraints. Lower bounds are derived by optimally solving Non-Linear Programming problems (NLPs). Each leaf node of the enumeration tree corresponds to a convex multi-objective NLP, which is solved heuristically by varying the weights in a weighted sum approach. In order to improve the obtained points and remove dominated ones, a tailored refinement procedure is designed. Although the proposed method makes no assumptions on the number of objective functions nor on the type of the variables, we test it on bi-objective mixed binary problems. In particular, as a proof of concept, we test the proposed heuristic algorithm on instances of a real-world application concerning power generation, and on instances of the convex bi-objective Non-Linear Knapsack Problem. We compare the obtained results with those derived by well-known scalarization methods, showing the effectiveness of the proposed method.
Multi-objective; Convex Mixed Integer Non-Linear Programming; Heuristic algorithm; Hydro unit commitment; Knapsack problem;
http://www.sciencedirect.com/science/article/pii/S0377221716308487
Cacchiani, Valentina
D’Ambrosio, Claudia
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:390-4042017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:390-404
article
Optimal operation of a CHP plant participating in the German electricity balancing and day-ahead spot market
In recent years, operators of combined heat and power (CHP) plants have been challenged by changing circumstances in the electricity markets. While ensuring a stable heat and power supply for customers, CHP plant operators seek to maximise the profitability of the CHP plant operation. We propose a comprehensive concept and illustrate its potential for increasing profitability when operating a CHP plant with heat storage by participating in multiple electricity markets. For this purpose, the structure of the decision-making process consisting of bid submission, market clearing and CHP plant operation planning is represented. A multistage stochastic mixed-integer linear programming (MILP) model is developed that simultaneously optimises the operation of the CHP plant with heat storage and bidding in sequential electricity markets. Price uncertainty is captured by means of stochastic processes. The proposed concept is applied to the real case of a municipal energy supply company that participates in the German day-ahead spot and balancing market. Results of the case study illustrate how trading in sequential electricity markets can result in an increased profitability of the CHP plant operation, where additional revenues from trading offset higher generation costs. Furthermore, we exemplify how the optimal operation of the CHP plant and heat storage device are adjusted according to revenue potential in multiple electricity markets.
OR in energy; Combined heat and power; Multistage stochastic programming; Bidding; Electricity markets;
http://www.sciencedirect.com/science/article/pii/S037722171730111X
Kumbartzky, Nadine
Schacht, Matthias
Schulz, Katrin
Werners, Brigitte
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1081-1094 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1081-1094
article
Supporting strategy using system dynamics
This paper presents a protocol for supporting strategy development via system dynamics (SD) modeling in consultation with Chief Executive Officers (CEOs) of small organizations; it also reports on the effectiveness of this protocol one year after an initial study was conducted. The protocol was applied in five small organizations; it involves the development of a SD model that is used to generate scenarios of alternative strategic situations an organization may face. We found that when the CEOs identified more feedback loops and causal relationships among key resources through their modeling analyses, they increased their capacities to generate new strategic ideas through more developed mental models. However, those CEOs who were not able to generate alternative strategic ideas to overcome the challenges of scenarios presented during the simulation sessions found it difficult to make strategic decisions when the scenarios occurred one year after our intervention. This finding suggests that SD modeling can affect firm performance when the facilitation process helps CEOs reflect on potential strategic actions that can be taken in the future. When CEOs cannot change their strategic plans by imagining what should be done in a challenging scenario, they are not able to address challenging situations when they arise.
Strategic planning; Decision processes; System dynamics modeling;
http://www.sciencedirect.com/science/article/pii/S037722171730053X
Torres, Juan Pablo
Kunc, Martin
O'Brien, Frances
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:906-916 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:906-916
article
Integrated production and distribution scheduling with a perishable product
This research addresses the practical problem of producing and distributing a perishable product, at minimum cost, before it becomes unusable. The problem shares features with the integrated production and distribution scheduling problem, in that we determine the fleet size and the trucks’ routes subject to a planning-horizon constraint. It differs in that the product has a limited lifetime, the total demand must be satisfied within the planning horizon, multiple trucks can be used, and the production schedule and the distribution sequence are considered jointly. A mixed integer programming model is formulated for the problem, and heuristics based on evolutionary algorithms are then provided to solve it.
Production and distribution; Integrated scheduling; Genetic algorithms; Memetic algorithms; Logistics;
http://www.sciencedirect.com/science/article/pii/S037722171630755X
Devapriya, Priyantha
Ferrell, William
Geismar, Neil
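The evolutionary heuristics mentioned in the abstract can be sketched in miniature. The toy genetic algorithm below (encoding, cost function, and all data are illustrative assumptions, not the paper's model) evolves a delivery sequence that minimizes a lateness penalty for a product with a limited lifetime:

```python
import random

random.seed(42)

# Illustrative toy: sequence 6 customers; cost penalizes deliveries that
# arrive after the product's lifetime expires (all data is made up).
service = [2, 4, 3, 1, 5, 2]       # time to serve each customer
lifetime = 12                      # product unusable after this time

def cost(seq):
    t, late = 0, 0
    for c in seq:
        t += service[c]
        if t > lifetime:
            late += t - lifetime   # penalty for stale deliveries
    return late

def crossover(a, b):
    """Order crossover: keep a slice of `a`, fill the rest in `b`'s order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [c for c in b if c not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(seq):
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]

def ga(pop_size=20, generations=50):
    pop = [random.sample(range(6), 6) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = crossover(a, b)
            if random.random() < 0.2:
                mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```

A memetic variant, as the keywords suggest, would add a local-improvement step (e.g. pairwise swaps) to each child before it enters the population.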
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:546-556 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:546-556
article
Optimal dynamic pricing and ordering of a perishable product under additive effects of price and time on demand
When perishable products, such as dairy products, fruits and vegetables, drugs, or batteries are priced uniformly, without taking into consideration the amount of time remaining until the expiration date, consumers may gravitate towards fresher products, leaving some inventory unsold. A dynamic pricing policy, in which products are priced differently as they approach expiration, may encourage customers to buy less-fresh products, potentially increasing revenue and eliminating waste. Following scarce literature on dynamic pricing of storable perishable items, this paper develops a model to determine a product's optimal replenishment schedule and dynamic price over time, with the aim of maximizing the retailer's profit. Customer demand is assumed to be a pseudo-additive function of price and time since replenishment. Some properties of the optimal pricing and replenishment policy are derived by means of necessary and sufficient conditions of optimality. A number of examples show the dynamics of the optimal policy under different assumptions regarding demand. In particular, we evaluate the extent to which the retailer can benefit from the implementation of a dynamic pricing policy as opposed to a static one, and we show that the optimal policy is highly dependent on the form of demand incorporated into the model.
Inventory; Dynamic pricing; Differential pricing policy; Additive demand; Perishable items;
http://www.sciencedirect.com/science/article/pii/S0377221716310670
Herbon, Avi
Khmelnitsky, Eugene
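The benefit of dynamic over static pricing claimed in the abstract is easy to reproduce numerically. The sketch below uses an illustrative linear demand function that is additive in price and time since replenishment, and a grid search rather than the paper's continuous-time optimality conditions:

```python
# Toy comparison of static vs dynamic pricing over one replenishment
# cycle; the demand model and all numbers are illustrative only.
T = 5                  # periods until expiration
a, b, c = 20.0, 2.0, 1.5
unit_cost = 1.0
prices = [p / 10 for p in range(10, 101)]   # candidate prices 1.0 .. 10.0

def demand(p, t):
    """Additive effects of price and time since replenishment."""
    return max(0.0, a - b * p - c * t)

def profit(policy):
    return sum((policy[t] - unit_cost) * demand(policy[t], t)
               for t in range(T))

# Static policy: one price for the whole cycle.
static = max(([p] * T for p in prices), key=profit)

# Dynamic policy: the per-period problems separate, so each period
# can be optimized on its own.
dynamic = [max(prices, key=lambda p, t=t: (p - unit_cost) * demand(p, t))
           for t in range(T)]

print(profit(static), profit(dynamic))
```

Because the dynamic policy optimizes each period separately, it can never do worse than the best uniform price, and its optimal price falls as the product ages, mirroring the markdown intuition in the abstract.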
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:655-664 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:655-664
article
Semi-disposability of undesirable outputs in data envelopment analysis for environmental assessments
The assumptions of strong and weak disposability for undesirable outputs have long dominated studies of data envelopment analysis for environmental assessments. Unfortunately, these assumptions cannot describe the diverse technical features of different undesirable outputs during the actual production process. Thus, we introduce a non-disposal degree to develop a new semi-disposability assumption, which can replace the assumptions of strong and weak disposability in environmental assessments. This assumption ensures that decision makers can address undesirable outputs freely within the scope of current production technology; otherwise, they have to reduce desirable outputs in the same proportion to decrease undesirable outputs. A reference point comparison method is proposed for determining the non-disposal degree from an objective perspective. The assumption of semi-disposability is extended to uncertain circumstances by using the interval non-disposal degree. Finally, two empirical examples are provided to illustrate the effectiveness of the semi-disposability assumption.
Data envelopment analysis; Efficiency; Reference point comparison; Return to scale; Semi-disposability;
http://www.sciencedirect.com/science/article/pii/S0377221716310761
Chen, Lei
Wang, Ying-Ming
Lai, Fujun
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1175-1180 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1175-1180
article
The denominator rule for share-weighting aggregation
In this paper we prove that the denominator rule, namely that weighting shares should be defined in terms of the denominator variable of the relevant index, is a necessary and sufficient condition for the consistent arithmetic aggregation of any ratio-type performance measure, including efficiency indices. We then illustrate the applicability of the denominator rule in the aggregation of scale efficiency, of measures of the effect of input congestion, and of capacity utilization indices.
DEA; Share-weighting aggregation; Ratio-type performance measures; Efficiency indices;
http://www.sciencedirect.com/science/article/pii/S0377221717301133
Färe, Rolf
Karagiannis, Giannis
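The identity behind the denominator rule can be stated in one line: when each unit's index is a ratio and the weights are the denominator shares, the weighted average of the individual ratios equals the aggregate ratio.

```latex
r_i = \frac{a_i}{b_i}, \qquad w_i = \frac{b_i}{\sum_j b_j}
\quad\Longrightarrow\quad
\sum_i w_i \, r_i
  = \sum_i \frac{b_i}{\sum_j b_j} \cdot \frac{a_i}{b_i}
  = \frac{\sum_i a_i}{\sum_j b_j}.
```

The denominators cancel term by term, which is exactly why any other choice of weights breaks the consistency of the aggregation.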
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:650-654 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:650-654
article
Measuring and decomposing profit inefficiency through the Slacks-Based Measure
The Slacks-Based Measure was introduced by Tone (2001) in order to estimate technical efficiency in the input-output space by taking into account all sources of technical inefficiency and satisfying, at the same time, many interesting properties. Since then, the Slacks-Based Measure has attracted the interest of numerous researchers and practitioners. The Slacks-Based Measure has been applied to technical efficiency determination, productivity change measurement, the analysis of production process performance consisting of networks, and so on. However, so far, the Slacks-Based Measure has not been directly related to profit inefficiency as a component of the overall economic performance of firms. In this note, we show how a specific normalized measure of profit inefficiency may be decomposed through the Slacks-Based Measure.
Data Envelopment Analysis; Profit inefficiency; Slacks-Based Measure;
http://www.sciencedirect.com/science/article/pii/S0377221716310724
Aparicio, Juan
Ortiz, Lidia
Pastor, Jesus T.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:829-846 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:829-846
article
Adaptive large neighborhood search heuristics for multi-tier service deployment problems in clouds
This paper proposes adaptive large neighborhood search (ALNS) heuristics for two service deployment problems in a cloud computing context. The problems under study consider the deployment problem of a provider of software-as-a-service applications, and include decisions related to the replication and placement of the provided services. A novel feature of the proposed algorithms is a local search layer on top of the destroy and repair operators. In addition, we use a mixed integer programming-based repair operator in conjunction with other, faster heuristic operators. Because the repair operators differ considerably in running time, the scoring mechanism of the adaptive operator selection must account for time usage. The computational study investigates the benefits of implementing a local search operator on top of the standard ALNS framework. Moreover, we compare the proposed algorithms with a branch and price (B&P) approach previously developed for the same problems. The results of our experiments show that the benefits of the local search operators increase with the problem size. We also observe that the ALNS with the local search operators outperforms the B&P on larger problems, while remaining comparable with the B&P on smaller problems at short run times.
Metaheuristics; Cloud computing; Replication; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221716309092
Gullhav, Anders N.
Cordeau, Jean-François
Hvattum, Lars Magnus
Nygreen, Bjørn
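The time-aware operator scoring discussed in the abstract can be sketched as a roulette-wheel adaptive operator selection in which rewards are normalized by the operator's runtime. Operator names, simulated runtimes, and reward tiers below are illustrative stand-ins, not the paper's operators:

```python
import random

random.seed(0)

# Simulated per-call runtimes of a fast heuristic repair and a slow
# MIP-based repair (illustrative numbers).
operators = {"cheap_repair": 0.001, "mip_repair": 0.05}
weights = {name: 1.0 for name in operators}
reaction = 0.3                         # smoothing factor for weight updates

def select():
    """Roulette-wheel selection proportional to the adaptive weights."""
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name

def run(name):
    """Pretend to repair a solution: return a reward tier and the runtime."""
    reward = random.choice([0, 1, 3])  # e.g. no improvement / improved / new best
    return reward, operators[name]

for _ in range(30):
    name = select()
    reward, elapsed = run(name)
    score = reward / elapsed           # normalize the reward by time used
    weights[name] = (1 - reaction) * weights[name] + reaction * score

print(weights)
```

Dividing the reward by elapsed time keeps a slow but occasionally excellent operator (such as the MIP-based repair) from crowding out fast operators that deliver many small improvements per second.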
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1144-1155 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1144-1155
article
Stochastic dynamic pricing and advertising in isoelastic oligopoly models
In this paper, we analyze stochastic dynamic pricing and advertising differential games in special oligopoly markets with constant price and advertising elasticity. We consider the sale of perishable as well as durable goods and include adoption effects in the demand. Based on a unique stochastic feedback Nash equilibrium, we derive closed-form solution formulas of the value functions and the optimal feedback policies of all competing firms. Efficient simulation techniques are used to evaluate optimally controlled sales processes over time. This way, the evolution of optimal controls as well as the firms’ profit distributions are analyzed. Moreover, we are able to compare feedback solutions of the stochastic model with its deterministic counterpart. We show that the market power of the competing firms is exactly the same as in the deterministic version of the model. Further, we discover two fundamental effects that determine the relation between both models. First, the volatility in demand results in a decline of expected profits compared to the deterministic model. Second, we find that saturation effects in demand have an opposite character. We show that the second effect can be strong enough to either exactly balance or even overcompensate the first one. As a result we are able to identify cases in which feedback solutions of the deterministic model provide useful approximations of solutions of the stochastic model.
Pricing; Advertising; Stochastic differential games; Oligopoly competition; Adoption effects;
http://www.sciencedirect.com/science/article/pii/S0377221716309468
Schlosser, Rainer
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1045-1053 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1045-1053
article
Distribution and reliability evaluation of max-flow in dynamic multi-state flow networks
In this article, each arc of a multi-state flow network is weighted with a transit time, in addition to its multi-valued capacities and the associated operation probabilities. As a result, the value of the max-flow from source to sink within a specified time horizon is multi-valued. The existing literature evaluates the transit-time-integrated reliability under the restriction that data is transmitted through k disjoint minimal paths. This article considers the same reliability problem, but extends the transmission of data from k disjoint minimal paths to a flow that may use all disjoint and non-disjoint minimal paths simultaneously. We present an algorithm to evaluate the probability distribution of the values of the dynamic max-flow; the expectation of the dynamic max-flow, as a summary of this distribution, can be derived directly. The transit-time-integrated reliability Rd,T is then computed, which is the probability that at least d units of data can be transmitted from source to sink within time horizon T. This study is the first to discuss transit-time-integrated reliability in terms of flows. Finally, computational experiments are conducted on a benchmark network to explore the properties of the proposed algorithms.
Applied probability; Distribution; Reliability; Transit time; Multi-state flow network;
http://www.sciencedirect.com/science/article/pii/S0377221716310645
Jane, Chin-Chia
Laih, Yih-Wenn
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1043-1068 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1043-1068
article
Circulation network design for urban rail transit station using a PH(n)/PH(n)/C/C queuing network model
Width is crucial to the performance of a circulation network in an urban rail transit station. However, poor performance has been observed in most existing circulation systems. In fact, randomness and state-dependence are inherent in circulation networks, where blocking and feedback cannot be ignored. In this paper, we develop a PH(n)/PH(n)/C/C state-dependent queuing network model with an analytical solution. This model describes the random, state-dependent arrival intervals and service times by phase-type distributions, and feedback is also taken into account. The existing M/M(n)/C/C model is a special case of the proposed PH(n)/PH(n)/C/C queuing network model, and the existing M/G(n)/C/C and D/D/1/C models can be approximated by it. We then present a programming formulation for circulation network design with blocking probability control based on the queuing network model. Finally, we illustrate the applicability of the proposed design method by comparing it with existing design methods. The results show that: 1) the blocking probability is small and evenly distributed in the network designed by the new method, while considerably larger and fluctuating blocking probabilities arise in networks designed by the other two methods; 2) other performance measures, such as area per passenger, dwell time and throughput, are also considerably improved by the new method; 3) the performance measures of the proposed method enjoy high performance-cost elasticity compared with the other two methods. We also obtain the insight that the squared coefficient of variation of the arrival interval plays an important role in determining the optimal width of a circulation network.
Facilities planning and design; Queuing; Circulation networks; State-dependence; Coefficient of variation;
http://www.sciencedirect.com/science/article/pii/S0377221717300656
Zhu, Juanxiu
Hu, Lu
Jiang, Yangsheng
Khattak, Afaq
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:767-777 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:767-777
article
Truthfulness with value-maximizing bidders: On the limits of approximation in combinatorial markets
In some markets bidders want to maximize value subject to a budget constraint rather than payoff. This differs from the quasilinear utility functions typically assumed in auction theory and leads to different strategies and outcomes. We refer to bidders who maximize value as value bidders. While simple single-object auction formats are truthful, standard multi-object auction formats allow for manipulation. It is straightforward to show that there cannot be a truthful and revenue-maximizing deterministic auction mechanism with value bidders and general valuations. Approximation has been used as a remedy to achieve truthfulness in other mechanism design problems, and we study which approximation ratios can be obtained from truthful mechanisms. We show that the approximation ratio achievable by a deterministic and truthful approximation mechanism with n bidders cannot be higher than 1/n for general valuations. For randomized approximation mechanisms, there is a framework whose approximation ratio is tight.
Auctions/bidding; Game theory; Approximation mechanisms;
http://www.sciencedirect.com/science/article/pii/S0377221716310657
Fadaei, Salman
Bichler, Martin
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:601-612 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:601-612
article
Strategic planning: Design and coordination for dual-recycling channel reverse supply chain considering consumer behavior
In this paper, we consider a two-echelon reverse supply chain with dual recycling channels, where the recyclable dealer acts as the Stackelberg leader and the recycler acts as the follower. Owing to the price competition between the two channels, the dominant dealer faces the challenge of how to strategically design the reverse channel structure. By introducing consumer preference for the online recycling channel into the model, we examine this challenge in three scenarios: a single traditional recycling channel, a single online recycling channel, and a hybrid dual-recycling channel, in both centralized and decentralized cases. We investigate two problems: the design and the coordination of a reverse supply chain with a traditional and an online recycling channel. The results show that the dual-recycling channel always outperforms its single-channel counterparts from the recyclable dealer's and the system's perspectives. In the coordination problem, a contract with transfer and online recycling prices can coordinate the dual-recycling-channel reverse supply chain but harms the dealer. We therefore propose two complementary contracts, a two-part tariff contract and a profit-sharing contract, which succeed in coordinating the reverse supply chain and create a win-win situation. Finally, numerical examples illustrate the model and show that consumer preference for online recycling affects the dealer's acceptance of the above contracts.
Supply chain management; Stackelberg game theory; Online recycling channel; Consumer behaviors; Dual recycling channel;
http://www.sciencedirect.com/science/article/pii/S037722171730005X
Feng, Lipan
Govindan, Kannan
Li, Chunfa
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:234-246 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:234-246
article
Joint pricing and location decisions in a heterogeneous market
In this paper we consider the problem of joint location and pricing optimization for a firm in a heterogeneous market producing a single product. We assume that customers have a different willingness to pay for the product. We consider two classes of customers who are not uniformly distributed in the market and develop an analytical framework to determine the relationship between the optimal price and location of the firm. We demonstrate that the optimal price and location are closely related to each other, and thus there is a need for simultaneous optimization of the price and location. We provide both analytical and numerical results to illustrate the impact of transportation cost and the level of heterogeneity on the firm’s strategic decisions. Our results show that simplifying the analysis of such markets with a uniform demand assumption and a homogeneity of customers may reduce the firm’s profit significantly.
Location; Heterogeneous customers; Non-uniform market; Pricing;
http://www.sciencedirect.com/science/article/pii/S0377221717301054
Sedghi, Nafiseh
Shavandi, Hassan
Abouee-Mehrizi, Hossein
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:706-714 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:706-714
article
Project rankings for participatory budget based on the fuzzy TOPSIS method
In this study, a fuzzy technique for order preference by similarity to an ideal solution is proposed for the personalized ranking of projects in a participatory budget (PB). A PB is a group decision-making process in which citizens distribute public resources among a set of city investment proposals. The dynamic growth in the popularity of PB over the last 10 years has led to a significant increase in the number of projects submitted and has exposed the weakness of the traditional majority vote. The rationality of decision makers is restricted by the large number of possible options: voters can choose only a few within a limited amount of time, and thus have no opportunity to review all of the projects. Appropriate decision support tools can assist with the selection of the best outcome and help to cope with the growth of PB processes. The ranking of PB projects is a specific problem because the multi-criteria comparisons are based on non-quantitative criteria, i.e., nominal and fuzzy criteria. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) aims to minimize the distance to the ideal alternative while maximizing the distance to the worst. In the fuzzy extension of TOPSIS, the ratings of alternatives and the weights of the criteria are fuzzy numbers or linguistic variables. The major modification required for PB is that no objectively perfect solution exists among the maximum and minimum values of the criteria; instead, the ideal solution is the decision maker's subjective choice, and the negative ideal solution is the alternative most dissimilar to it. This study describes the application of fuzzy TOPSIS with this modification, based on an empirical example from a Poznań PB project (Poland).
Community operational research; Participatory budget; Project ranking; Fuzzy TOPSIS;
http://www.sciencedirect.com/science/article/pii/S0377221716310785
Walczak, Dariusz
Rutkowska, Aleksandra
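For readers unfamiliar with the base method, a crisp TOPSIS fits in a few lines; the paper's fuzzy variant replaces the crisp ratings below with fuzzy numbers, and its PB modification replaces the computed ideal points with the decision maker's chosen reference. The matrix and weights here are illustrative:

```python
import math

# Minimal crisp TOPSIS sketch; rows are projects, columns are benefit
# criteria (all values and weights are illustrative).
ratings = [
    [7.0, 9.0, 9.0],
    [8.0, 7.0, 8.0],
    [9.0, 6.0, 8.0],
]
weights = [0.5, 0.3, 0.2]

def topsis(ratings, weights):
    n_crit = len(weights)
    # 1. vector-normalize each column and apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in ratings))
             for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)]
         for row in ratings]
    # 2. ideal and anti-ideal points, column by column
    best = [max(row[j] for row in v) for j in range(n_crit)]
    worst = [min(row[j] for row in v) for j in range(n_crit)]
    # 3. closeness coefficient: far from the worst, near the best
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

scores = topsis(ratings, weights)
print(scores)
```

Ranking the projects by descending closeness coefficient yields the personalized order; in the fuzzy setting the distances are computed between fuzzy numbers instead of crisp vectors.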
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:972-991 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:972-991
article
A metaheuristic for the time-dependent pollution-routing problem
We propose a metaheuristic for the Time-Dependent Pollution-Routing Problem, which consists of routing a number of vehicles to serve a set of customers and determining their speed on each route segment, with the objective of minimizing the cost of drivers' wages and greenhouse gas emissions. The vehicles face traffic congestion which, at peak periods, significantly restricts vehicle speeds and leads to increased emissions. Our algorithm is based on an adaptive large neighborhood search heuristic and uses new removal and insertion operators which significantly improve the quality of the solutions. A previously developed departure-time and speed optimization procedure is used as a subroutine to optimize departure times and vehicle speeds. Results from extensive computational experiments demonstrate the effectiveness of our algorithm.
Routing; Freight transportation; Green vehicle routing; Greenhouse gas emissions; Metaheuristic algorithm; Departure time and speed optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716309511
Franceschetti, Anna
Demir, Emrah
Honhon, Dorothée
Van Woensel, Tom
Laporte, Gilbert
Stobbe, Mark
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:67-74 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:67-74
article
Anchored reactive and proactive solutions to the CPM-scheduling problem
In a combinatorial optimization problem under uncertainty, it is never the case that the real instance is exactly the baseline instance that has been solved earlier. The anchorage level is the number of individual decisions with the same value in the solutions of the baseline and the real instances. We consider the case of CPM-scheduling with simple precedence constraints when the job durations of the real instance may be different than those of the baseline instance. We show that, given a solution of the baseline instance, computing a reactive solution of the real instance with a maximum anchorage level is a polynomial problem. This maximum level is called the anchorage strength of the baseline solution with respect to the real instance. We also prove that this latter problem becomes NP-hard when the real schedule must satisfy time windows constraints. We finally consider the problem of finding a proactive solution of the baseline instance whose guaranteed anchorage strength is maximum with respect to a subset of real instances. When each real duration belongs to a known uncertainty interval, we show that such a proactive solution (possibly with a deadline constraint) can be polynomially computed.
Scheduling; Robust optimization; Anchored solutions;
http://www.sciencedirect.com/science/article/pii/S0377221717301121
Bendotti, Pascale
Chrétienne, Philippe
Fouilhoux, Pierre
Quilliot, Alain
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:302-316 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:302-316
article
An integrated assortment and shelf-space optimization model with demand substitution and space-elasticity effects
Retailers must define their assortments and assign shelf space to the items included in these assortments. These two planning problems are mutually dependent if space is scarce. We formulate a model that maximizes a retailer’s profit by selecting the optimal assortment and assigning limited shelf space to items. This model is the first decision model to integrate assortment and shelf-space planning by considering stochastic and space-elastic demand, out-of-assortment and out-of-stock substitution effects. To solve the model, we develop a specialized heuristic that efficiently yields near-optimal results, even for large-scale problems. We show that our approach outperforms alternative approaches, e.g. a sequential planning approach that first picks assortments and then assigns shelf space by up to 18%, and a greedy algorithm by up to 16% in terms of profit.
Retailing; Heuristic; Substitution; Space elasticity; Stochastic demand;
http://www.sciencedirect.com/science/article/pii/S0377221717300747
Hübner, Alexander
Schaal, Kai
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:30-42 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:30-42
article
Models for the piecewise linear unsplittable multicommodity flow problems
In this paper, we consider multicommodity flow problems with unsplittable flows and piecewise linear routing costs. We first focus on the case where the piecewise linear routing costs are convex. We show that this problem is NP-hard in the general case, but polynomially solvable when there is only one commodity. We then propose a strengthened mixed-integer programming formulation for the problem. We show that the linear relaxation of this formulation always gives the optimal solution of the problem in the single-commodity case. We present a wide array of computational experiments showing that this formulation also produces very tight linear programming bounds in the multicommodity case. Finally, we adapt our formulation to the non-convex case. Our experimental results indicate that the linear programming bounds for this case are only slightly weaker than those of state-of-the-art models for the splittable-flow version of the problem.
Networks; OR in telecommunications; Unsplittable flows; Integer programming; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221717300863
Fortz, Bernard
Gouveia, Luís
Joyce-Moniz, Martim
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1097-1111 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1097-1111
article
A Nearest Neighbour extension to project duration forecasting with Artificial Intelligence
In this paper, we provide a Nearest Neighbour based extension for project control forecasting with Earned Value Management. The k-Nearest Neighbour method is employed both as a predictor and as a means of reducing a training set to its most similar observations. An Artificial Intelligence (AI) method then uses the reduced training set to predict the real duration of a project. Additionally, we report on the forecasting stability of the various AI methods and their hybrid Nearest Neighbour counterparts. A large computational experiment is set up to assess the forecasting accuracy and stability of the existing and newly proposed methods. The experiments indicate that the Nearest Neighbour technique yields the best stability results and is able to improve the AI methods when the training set is similar, but not equal, to the test set. Sensitivity checks that vary the amount of historical data and the number of neighbours lead to the conclusion that having more historical data, from which a relevant subset can be selected by means of the proposed Nearest Neighbour technique, is preferable.
Project management; Earned Value Management (EVM); Prediction; Artificial Intelligence;
http://www.sciencedirect.com/science/article/pii/S0377221716309432
Wauters, Mathieu
Vanhoucke, Mario
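The Nearest Neighbour filtering-and-prediction idea can be sketched directly. The features below stand in for EVM-style indicators observed at a tracking point; all values and the choice k = 3 are illustrative assumptions:

```python
import math

# Each historical project: (features, real duration). The two features
# stand in for EVM-style performance indices at a tracking point.
history = [
    ([0.95, 1.02], 11.0),
    ([0.80, 0.90], 15.0),
    ([1.05, 1.10], 9.0),
    ([0.78, 0.95], 14.0),
    ([1.00, 1.00], 10.0),
]

def nearest(x, k=3):
    """The k most similar past projects: this reduced set can either feed
    an AI method as its training set, or be averaged directly."""
    return sorted(history, key=lambda h: math.dist(h[0], x))[:k]

def knn_predict(x, k=3):
    neighbours = nearest(x, k)
    return sum(duration for _, duration in neighbours) / k

print(knn_predict([0.82, 0.92]))   # mean duration of the 3 nearest projects
```

In the hybrid methods of the paper, `nearest` plays the filtering role (handing a smaller, more relevant training set to the AI predictor), while the plain average shown here is the k-NN-as-predictor baseline.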
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:421-431 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:421-431
article
Cell-and-bound algorithm for chance constrained programs with discrete distributions
Chance constrained programming (CCP) is often encountered in real-world applications when there is uncertainty in the data and parameters. In this paper, we consider a special case of CCP with finite discrete distributions and propose a novel solution approach. The methodology is based on the connection between CCP and arrangements of hyperplanes. By applying cell enumeration methods for hyperplane arrangements from discrete geometry, we develop a cell-and-bound algorithm to identify an exact solution to CCP, which is much more efficient than branch-and-bound algorithms, especially in the worst case. Furthermore, based on the cell-and-bound algorithm, a new polynomially solvable subclass of CCP is discovered. We also find that the probabilistic version of the classical transportation problem is polynomially solvable when the number of customers is fixed. We report preliminary computational results to demonstrate the effectiveness of our algorithm.
Global optimization; Chance constrained program; Discrete distribution; Cell enumeration; Polynomially solvable;
http://www.sciencedirect.com/science/article/pii/S0377221717300814
Zheng, Xiaojin
Wu, Baiyi
Cui, Xueting
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:337-354 2017-04-20 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:337-354
article
Link-based multi-class hazmat routing-scheduling problem: A multiple demon approach
This paper addresses a hazmat routing and scheduling problem for a general transportation network with multiple hazmat classes when incident probabilities are unknown or inaccurate. A multi-demon formulation is proposed for this purpose. The formulation is link-based (i.e., the decision variables are link flows) and can be transformed into other forms so that a wide range of solution methods can be used. This paper also proposes a solution strategy for obtaining route flow solutions without relying on exhaustive route enumeration or route generation heuristics. Examples are set up to illustrate the properties of the problem, the method of obtaining route flows from link flows, and the computational efficiency of the solution strategy. Moreover, a case study illustrates our methodology on real-life hazmat shipment problems, from which we obtain four key insights. First, for the safest shipment of a single hazmat type, different trucks carrying that hazmat should take different routes and links. Second, in the case of multiple-hazmat transportation, it is recommended to use different routes and links for the shipment of different hazmat types; this may increase travel time but results in safer shipment. Third, if the degree of connectivity of the transportation network is high, the shipment company may have multiple solutions. Fourth, the hazmat flows on critical links (whose removal would disconnect the network) must be distributed or scheduled over different periods to achieve safer shipment.
Transportation; Vehicle routing; Non-cooperative game; Hazardous materials; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221717300838
Szeto, W.Y.
Farahani, R.Z.
Sumalee, Agachai
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1180-11902017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1180-1190
article
Scheduling double round-robin tournaments with divisional play using constraint programming
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast even for larger league sizes. The experimental evaluation shows that the integrated approach takes considerably less computational effort to schedule Elitserien than the previous decomposed approach.
OR in sports; Scheduling; Constraint programming;
http://www.sciencedirect.com/science/article/pii/S0377221716309584
Carlsson, Mats
Johansson, Mikael
Larson, Jeffrey
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:887-8972017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:887-897
article
Hybrid optimization methods for time-dependent sequencing problems
In this paper, we introduce novel optimization methods for sequencing problems in which the setup times between a pair of tasks depend on the relative position of the tasks in the ordering. Our proposed methods rely on a hybrid approach where a constraint programming model is enhanced with two distinct relaxations: One discrete relaxation based on multivalued decision diagrams, and one continuous relaxation based on linear programming. Both relaxations are used to generate bounds and enhance constraint propagation. Experiments conducted on three variants of the time-dependent traveling salesman problem indicate that our techniques substantially outperform general-purpose methods, such as mixed-integer linear programming and constraint programming models.
Constraint programming; Sequencing; Decision diagrams; Additive bounding;
http://www.sciencedirect.com/science/article/pii/S0377221716309602
Kinable, Joris
Cire, Andre A.
van Hoeve, Willem-Jan
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:751-7662017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:751-766
article
Mean-VaR portfolio optimization: A nonparametric approach
Portfolio optimization involves the optimal assignment of limited capital to different available financial assets to achieve a reasonable trade-off between profit and risk. We consider an alternative to Markowitz’s mean–variance model in which the variance is replaced with an industry standard risk measure, Value-at-Risk (VaR), in order to better assess the market risk exposure associated with financial and commodity asset price fluctuations. Realistic portfolio optimization in the mean-VaR framework is a challenging problem since it leads to a non-convex NP-hard problem which is computationally intractable. In this work, an efficient learning-guided hybrid multi-objective evolutionary algorithm (MODE-GL) is proposed to solve mean-VaR portfolio optimization problems with real-world constraints such as cardinality, quantity, pre-assignment, round-lot and class constraints. A learning-guided solution generation strategy is incorporated into the multi-objective optimization process to promote efficient convergence by guiding the evolutionary search towards promising regions of the search space. The proposed algorithm is compared with the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2). Experimental results using historical daily financial market data from the S&P 100 and S&P 500 indices are presented. The results show that MODE-GL outperforms two existing techniques for this important class of portfolio investment problems in terms of solution quality and computational time. The results highlight that the proposed algorithm is able to solve the complex portfolio optimization problem without simplifications, obtaining good solutions in reasonable time, and has significant potential for use in practice.
Evolutionary computations; Multi-objective constrained portfolio optimization; Value at risk; Nonparametric historical simulation;
http://www.sciencedirect.com/science/article/pii/S0377221717300103
Lwin, Khin T.
Qu, Rong
MacCarthy, Bart L.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:494-5062017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:494-506
article
Markov Chain methods for the Bipartite Boolean Quadratic Programming Problem
We study the Bipartite Boolean Quadratic Programming Problem (BBQP) which is an extension of the well known Boolean Quadratic Programming Problem (BQP). Applications of the BBQP include mining discrete patterns from binary data, approximating matrices by rank-one binary matrices, computing the cut-norm of a matrix, and solving optimisation problems such as maximum weight biclique, bipartite maximum weight cut, maximum weight induced sub-graph of a bipartite graph, etc. For the BBQP, we first present several algorithmic components, specifically, hill climbers and mutations, and then show how to combine them in a high-performance metaheuristic. Instead of hand-tuning a standard metaheuristic to test the efficiency of the hybrid of the components, we chose to use an automated generation of a multi-component metaheuristic to save human time, and also improve objectivity in the analysis and comparisons of components. For this we designed a new metaheuristic schema which we call Conditional Markov Chain Search (CMCS). We show that CMCS is flexible enough to model several standard metaheuristics; this flexibility is controlled by multiple numeric parameters, and so is convenient for automated generation. We study the configurations revealed by our approach and show that the best of them outperforms the previous state-of-the-art BBQP algorithm by several orders of magnitude. In our experiments we use benchmark instances introduced in the preliminary version of this paper and described here, which have already become the de facto standard in the BBQP literature.
Artificial intelligence; Bipartite Boolean quadratic programming; Automated heuristic configuration; Benchmark;
http://www.sciencedirect.com/science/article/pii/S0377221717300061
Karapetyan, Daniel
Punnen, Abraham P.
Parkes, Andrew J.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1169-11792017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1169-1179
article
Stochastic model for maintenance in continuously deteriorating systems
We construct a stochastic model for maintenance suitable for the analysis of real-life systems which deteriorate over time before they eventually fail and are replaced. The model uses a continuous deterioration level, where the rate of change depends on the current operating mode as well as the current level of deterioration. We demonstrate how to construct a model in which the uncertainty about the state of deterioration, when the system is not continuously observed, is accurately represented. This feature addresses some drawbacks of previous work that is known to cause modelling errors. The key performance measures for this model can be evaluated efficiently using existing algorithms. The theory is illustrated using numerical examples, in which we discuss how this model can be used in a practical evaluation of different maintenance strategies.
Stochastic processes; Stochastic model for maintenance; Markov driven fluid process; Deterioration; Matrix-analytic methods;
http://www.sciencedirect.com/science/article/pii/S0377221716309572
Samuelson, Aviva
Haigh, Andrew
O'Reilly, Małgorzata M.
Bean, Nigel G.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:949-9712017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:949-971
article
Traveling worker assembly line (re)balancing problem: Model, reduction techniques, and real case studies
The assembly line balancing problem arises from the need to divide the workload equally among all workstations. Several solution methods explore different variants of the problem, but no model includes all the characteristics real assembly lines might contain. This paper presents a mixed integer linear programming model that solves the Traveling Worker Assembly Line Balancing Problem (TWALBP). In this problem, the balancing of tasks and the assignment of workers to one or more workstations are determined for a given layout. The assignment flexibility is handled with a traveling salesman problem formulation integrated into the balancing model. Adapted standard datasets and three real case scenarios are used as benchmark sets. These scenarios present particularities such as human and robotic workers, assignment restrictions, zoning constraints, and automatic and common tasks. The model successfully determines the task assignments and the routing of every worker for a layout-aware optimization of assembly lines. Better quality balancing solutions were achieved by allowing workers to perform tasks at multiple stations, showing a trade-off between assignment flexibility and movement time.
Combinatorial optimization; Assembly line rebalancing; Real-world application; Traveling salesman problem; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716309523
Sikora, Celso Gustavo Stall
Lopes, Thiago Cantos
Magatão, Leandro
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:725-7382017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:725-738
article
The block-information-sharing strategy for task allocation: A case study for structure assembly with aerial robots
A new paradigm for task allocation in cooperative multi-robot systems is proposed in this paper. The block-information-sharing (BIS) strategy is a fully distributed approach, where robots dynamically allocate their tasks following the principle of share & divide to maintain an optimal allocation according to their capabilities. Prior studies on multi-robot information sharing strategies do not formally address the proof of convergence to the optimal allocation, nor its robustness to dynamic changes in the execution of the global task. The BIS strategy is introduced in a general framework and the convergence to the optimal allocation is theoretically proved. As an illustration of the approach, the strategy is applied to the automatic construction of truss structures with aerial robots. In order to demonstrate the benefits of the strategy, algorithms and simulations are presented for a team of heterogeneous robots that can dynamically reallocate tasks during the execution of a mission.
Multi-agent systems; Dynamic task allocation; Distributed algorithm; Assembly line balancing;
http://www.sciencedirect.com/science/article/pii/S0377221717300048
Caraballo, L.E.
Díaz-Báñez, J.M.
Maza, I.
Ollero, A.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:514-5192017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:514-519
article
Two-agent parallel machine scheduling with a restricted number of overlapped reserved tasks
We consider a two-agent scheduling problem on parallel machines such that each task of agent 2 has a given time window. Furthermore, we introduce a resource constraint under which the number of simultaneously processed tasks of agent 2 is restricted, although some machines are available. The objective is to minimize the total completion time for agent 1 while the total weight of the processed tasks for agent 2 is at or above a given threshold. Because the problem is known to be strongly NP-hard, we focus on the case with unit processing time. We analyze the computational complexity for its special cases, which have some restrictions on four parameters: the weight and the duration of agent 2, the number of machines, and the maximum number of simultaneously processed tasks of agent 2.
Scheduling; Parallel machines; Competing agents; Complexity;
http://www.sciencedirect.com/science/article/pii/S0377221717300334
Choi, Byung-Cheon
Park, Myoung-Ju
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:432-4432017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:432-443
article
Medium-term power planning in electricity markets with pool and bilateral contracts
Many of the existing electricity markets are of the mixed type, which has pool auction and bilateral contracts between producers and distributors. In this case, the problem faced by a Generation Company (GenCo) is that of maximizing the revenues from participating in the market through the pool auction while honoring the bilateral contracts agreed, for which the revenue is fixed.
OR in energy; Medium-term power planning; Bilateral contracts; Stochastic programming; Heuristics;
http://www.sciencedirect.com/science/article/pii/S037722171730108X
Marí, L.
Nabona, N.
Pagès-Bernaus, A.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1017-10352017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1017-1035
article
Resilient supply chain network design under competition: A case study
This research, motivated by a real-life case study in a highly competitive automobile supply chain, experimentally studies the impact of disruption on the competitiveness of supply chains. The studied supply chain faces two major risks: disruption of suppliers and tough competition from competitors. Any disruption in upstream level of the supply chain leads to an inability to meet demand downstream and causes market share to be lost to the competitors. For such a setting, a resilient topology is redesigned that can recover from and react quickly to any disruptive incidents. To this aim, we speculate there are three policies that can be used to mitigate the disruption risk, namely keeping emergency stock at the retailers, reserving back-up capacity at the suppliers, and multiple-sourcing. The problem is addressed using a mixed integer non-linear model to find the most profitable network and mitigation policies. We design a piecewise linear method to solve the model. Based on the data extracted from an automotive supply chain, practical insights of the research are extracted in a controlled experiment. Our analysis suggests that implementing risk mitigation policies not only work to the advantage of the supply chain by sustaining and improving its market share but also benefit customers by stabilizing retail prices in the market. Using the case study, we analyze the contribution of each risk strategy in stabilizing the supply chain's profit, market share, and retail price. Our analysis reveals that downstream “emergency stock” is the most preferable risk mitigation strategy if suppliers are unreliable.
Supply chain management; Resilient supply chain; Disruption; Competition; Automotive industry;
http://www.sciencedirect.com/science/article/pii/S0377221716309663
Rezapour, Shabnam
Farahani, Reza Zanjirani
Pourakbar, Morteza
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:995-10132017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:995-1013
article
The electric location routing problem with time windows and partial recharging
Electric commercial vehicles are expected to contribute significantly to the mobility of the future, and the first pilot projects of logistics companies operating electric commercial vehicles are already under way. So far, planning approaches for electric fleets either address routing decisions, with emphasis on the limited driving range and long charging times of the vehicles, or focus on the siting of charging stations in order to implement the necessary charging infrastructure. In this paper, we present a location routing approach that considers routing of electric vehicles and siting decisions for charging stations simultaneously in order to support strategic decisions of logistics fleet operators. In doing so, we take into account different recharging options arising from real-world constraints. Furthermore, we consider alternative objective functions, minimizing not only the traveled distance but also the number of vehicles needed, the number of charging stations sited, and total costs. Results are presented for the total traveled distance of the location routing model, and potential improvements compared to a vehicle routing model are shown. Shorter overall distances can be achieved if simultaneous siting as well as extended recharging options are allowed. In addition, results for the other objective functions are shown with respect to the impact of the objectives and conflicting targets.
Routing; Electric logistics fleets; Location routing; Green logistics;
http://www.sciencedirect.com/science/article/pii/S0377221717300346
Schiffer, Maximilian
Walther, Grit
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:534-5452017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:534-545
article
Order picking along a crane-supplied pick face: The SKU switching problem
This paper treats an order picking system where a crane continuously relocates stock keeping units (SKUs) in a high-bay rack subdivided into the bottommost picking level and the upper reserve area. The capacity of the pick face is not large enough to store all SKUs, so the crane has to ensure that all SKUs demanded by a current picking order are provided in time and picker idle time is avoided. We aim at a processing sequence of picking orders and a SKU switching plan, i.e., an instruction specifying when to exchange which SKUs in the picking level, such that unobstructed order picking is enabled. Our problem is closely related to the tool switching problem of flexible manufacturing, in which each job requires a subset of tools to be loaded into the tool magazine (with limited capacity) of a single flexible machine. We show, however, that an alternative objective function, i.e., minimizing the maximum number of switches between any pair of successive jobs, is better suited in the warehouse context, and that even better results can be obtained with a multi-objective approach. Elementary complexity proofs as well as suited solution procedures are provided, and we also address managerial aspects such as the sizing of the pick face.
Logistics; Warehousing; Order picking; Sequencing;
http://www.sciencedirect.com/science/article/pii/S0377221716310712
Schwerdfeger, Stefan
Boysen, Nils
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:108-1282017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:108-128
article
A simulation study of the performance of twin automated stacking cranes at a seaport container terminal
This paper studies the effect of a handshake area on the performance of twin automated stacking cranes (ASCs) operating on top of a stack with transfer zones at both seaside and landside. The handshake area is a temporary storage location so that one crane can start a request and leave the container there for the other crane to complete the request. By testing settings with and without such a handshake area, the goal is to find robust rules which result in the best performance, measured as (1) the makespan to finish all requests and (2) the total waiting time of the cranes due to interference or nonconsecutive delivery of containers in the handshake area (blocking time). The effect of five decision variables on the performance are tested. The decision variables are (1) the way the requests are handled by the cranes (scheduling), (2) the storage location of the containers in the handshake area, (3) the location of the handshake area in the stack, (4) the size of the handshake area and (5) the number of handshake areas in the stack. For each decision variable, multiple heuristics are developed. The results indicate that settings without a handshake area outperform settings with a handshake area for virtually all instances tested when using the same scheduling heuristic. For both types of settings, the choice for a scheduling heuristic impacts the final performance the most. In this study, we opt for simple heuristics since container terminal operators prefer to avoid any complexity in coordinating and scheduling two ASCs for safety and simplicity reasons.
OR in maritime industry; Simulation model; Seaport container terminal; Twin automated stacking cranes; Handshake area;
http://www.sciencedirect.com/science/article/pii/S0377221717300723
Gharehgozli, Amir Hossein
Vernooij, Floris Gerardus
Zaerpour, Nima
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1181-11992017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1181-1199
article
Multivariate FX models with jumps: Triangles, Quantos and implied correlation
We propose an integrated model of the joint dynamics of FX rates and asset prices for the pricing of FX derivatives, including Quanto products; the model is based on a multivariate construction for Lévy processes which proves to be analytically tractable. The approach allows for simultaneous calibration to market volatility surfaces of currency triangles, and also gives access to market consistent information on dependence between the relevant variables. A successful joint calibration to real market data is presented for the particular case of the Variance Gamma process.
Option pricing; Calibration procedure; Implied correlation; Multivariate Lévy processes; Quanto products;
http://www.sciencedirect.com/science/article/pii/S0377221717301406
Ballotta, Laura
Deelstra, Griselda
Rayée, Grégory
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:886-9032017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:886-903
article
Ordered Weighted Average optimization in Multiobjective Spanning Tree Problem
Multiobjective Spanning Tree Problems are studied in this paper. The ordered median objective function is used as an averaging operator to aggregate the vector of objective values of feasible solutions. This leads to the Ordered Weighted Average Spanning Tree Problem, a nonlinear combinatorial optimization problem. Different mixed integer linear programs are proposed, based on the most relevant minimum cost spanning tree models in the literature. These formulations are analyzed and several enhancements are presented. Their empirical performance is tested over a set of randomly generated benchmark instances. The results of the computational experiments show that the choice of an appropriate formulation makes it possible to solve larger instances with more objectives than those previously solved in the literature.
Combinatorial optimization; Multiobjective optimization; Ordered median; Ordered Weighted Average; Spanning trees;
http://www.sciencedirect.com/science/article/pii/S0377221716308499
Fernández, Elena
Pozo, Miguel A.
Puerto, Justo
Scozzari, Andrea
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1085-10962017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1085-1096
article
The dynamic Black–Litterman approach to asset allocation
We generalize the Black–Litterman (BL) portfolio management framework to incorporate time-variation in the conditional distribution of returns in the asset allocation process. We evaluate the performance of the dynamic BL model using both standard performance ratios as well as other measures that are designed to capture tail risk in the presence of non-normally distributed asset returns. We find that the dynamic BL model outperforms a range of different benchmarks. Moreover, we show that the choice of volatility model has a considerable impact on the performance of the dynamic BL model.
Finance; Black–Litterman model; Multivariate conditional volatility; Portfolio optimization; Tail risk;
http://www.sciencedirect.com/science/article/pii/S0377221716309705
Harris, Richard D.F.
Stoja, Evarist
Tan, Linzhi
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:75-872017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:75-87
article
An iterated greedy heuristic for a market segmentation problem with multiple attributes
A real-world customer segmentation problem from a beverage distribution firm is addressed. The firm wants to partition a set of customers, who share geographical and marketing attributes, into segments according to certain requirements: (a) customers allocated to the same segment must have very similar attributes: type of contract, type of store, and average difference of purchase volume; and (b) compact segments are desired. The main reason for creating a partition with these features is that the firm wants to try different product marketing strategies. In this paper, a detailed attribute formulation and an iterated greedy heuristic that iteratively destroys and reconstructs a given partition are proposed. The initial partition is obtained by using a modified k-means algorithm that involves a GRASP philosophy to get the initial configuration of centers. The heuristic includes an improvement method that employs two local search procedures. Computational results and statistical analyses show the effectiveness of the proposed approach and its individual components. The proposed metaheuristic is also observed to be very competitive, faster, and more robust when compared to existing methods.
Metaheuristics; Market segmentation; Iterated greedy heuristics; GRASP; Variable neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221717301194
Huerta-Muñoz, Diana L.
Ríos-Mercado, Roger Z.
Ruiz, Rubén
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1132-11432017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1132-1143
article
Integrated optimization of strategic and tactical planning decisions in forestry
The traditional approach to plan the forest products value chain using a combination of sequential and hierarchical planning phases leads to suboptimal solutions. We present an integrated planning model to support long-term forest planning with anticipation of the impacts on the economic and logistic activities in the forest value chain on a shorter term, and we propose a novel optimization approach that includes acceleration strategies to efficiently solve large-scale practical instances of this integrated planning problem. Our model extends and binds the models implemented in two solver engines that have been developed in previous work. The first system, called Logilab, allows for defining and solving value chain optimization problems. The second system, called Silvilab, allows for generating and solving strategic problems. We revisit the tactical model in Logilab and extend the strategic model in Silvilab so that the integrated planning problem can be solved using column generation decomposition, with the subproblems formulated as hypergraphs and solved using a dynamic programming algorithm. A new set of spatial sustainability constraints is also considered in this model. Based on numerical experiments on large-scale industrial cases, the integrated approach resulted in up to 13% profit increase in comparison with the non-integrated approach. In addition, the proposed approach compares advantageously with a standard LP column generation approach to the integrated forest planning problem, both in CPU time (with an average 2.4-factor speed-up) and in memory requirement (with an average reduction by a factor of 20).
Large scale systems; Forest industry; Strategic and tactical planning; Integrated planning; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S037722171630947X
Bouchard, M.
D’Amours, S.
Rönnqvist, M.
Azouzi, R.
Gunn, E.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1105-11142017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1105-1114
article
When and how much to invest? Investment and capacity choice under product life cycle uncertainty
While empirical research indicates that innovations typically follow a product life cycle that is subject to uncertainty in many industries, endless cash flow growth is still at the heart of most papers guiding investment decisions under uncertainty. This paper studies the effect of an uncertain technological life cycle on the decision to invest in new product introduction, taking into account the combined effects of flexible investment timing and optimal capacity choice. Based on a numerical example referring to investment decisions in facilities for the production of electric vehicle batteries, we find that the optimal investment threshold follows an S-curve over the product life cycle and derive the optimal capacity choice for the given investment decision.
OR in research and development; Bass model based product life cycle; Electric vehicle battery; Real option; Capacity choice;
http://www.sciencedirect.com/science/article/pii/S0377221717300620
Lukas, Elmar
Spengler, Thomas Stefan
Kupfer, Stefan
Kieckhäfer, Karsten
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:805-8062017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:805-806
article
Feature cluster: Recent advances in exact methods for multi-objective optimisation
http://www.sciencedirect.com/science/article/pii/S0377221717301091
Ehrgott, Matthias
Ljubić, Ivana
Parragh, Sophie N.
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:17-292017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:17-29
article
Improved local search approaches to solve the post enrolment course timetabling problem
In this work, we are addressing the post enrollment course timetabling (PE-CTT) problem. We combine different local search algorithms into an iterative two stage procedure. In the first stage, Tabu Search with Sampling and Perturbation (TSSP) is used to generate feasible solutions. In the second stage, we propose an improved variant of Simulated Annealing (SA), which we call Simulated Annealing with Reheating (SAR), to improve the solution quality of feasible solutions. SAR has three features: a novel neighborhood examination scheme, a new way of estimating local optima and a reheating scheme. SAR eliminates the need for extensive tuning as is often required in conventional SA. The proposed methodologies are tested on the three most studied datasets from the scientific literature. Our algorithms perform well and our results are competitive, if not better, compared to the benchmarks set by the state of the art methods. New best known results are provided for many instances.
Timetabling; Combinatorial optimization; Local search; Tabu Search with Sampling and Perturbation (TSSP); Simulated Annealing with Reheating (SAR);
http://www.sciencedirect.com/science/article/pii/S0377221717300759
Goh, Say Leng
Kendall, Graham
Sabar, Nasser R.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1129-11412017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1129-1141
article
Technical efficiency, unions and decentralized labor contracts
This paper explores the link between the presence of unions in the workplace, the adoption of decentralized labor agreements, and technical efficiency, using a large sample of Italian manufacturing firms. We apply Data Envelopment Analysis, and its robust version based on bootstrap theory, to obtain reliable estimates of technical efficiency at the firm level in a standard first stage. We devote particular attention to the specific technology adopted, by distinguishing 20 different sector frontiers, as well as to the presence of outliers. The obtained efficiency scores are analyzed in a second stage by applying a truncated regression model estimated via Maximum Likelihood, following the Simar and Wilson (2007, 2011) methodology. Our results highlight that the presence of workplace unionization decreases the level of technical efficiency, while aspects limiting the unions’ power, such as strong exposure to international markets, high debt levels, or a prevalence of flexible assets, partially reduce the negative effect. However, when firms adopt decentralized labor agreements, the effect on efficiency is positive and partially compensates for the negative union effect.
Data envelopment analysis; Unions; Decentralized bargaining; Truncated regression model;
http://www.sciencedirect.com/science/article/pii/S0377221717300577
Devicienti, Francesco
Manello, Alessandro
Vannoni, Davide
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:260-2782017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:260-278
article
Different stakeholders’ perspectives for a surgical case assignment problem: Deterministic and robust approaches
Adequate access to health care is one of the main strategic axes considered in the Portuguese National Health Plan. This plan seeks to ensure the best performance and adequacy of care, maximizing the use of resources, quality, equity and access. This work results from a close collaboration with a large, publicly funded Portuguese hospital. The aim is to propose a systematic approach to help the surgical planner in the scheduling of elective surgeries, in order to optimize the use of the available surgical resources and improve equity and access for both operated and waiting patients. The decisions to be taken by this surgical case assignment problem are twofold: select patients to be scheduled in the planning horizon from a large surgery waiting list; and assign a day, an operating room, and a time block to the selected patients. Three versions are modeled in (mixed) integer linear programming: from the administration’s intention to the surgeons’ current practice, with a halfway version reflecting a negotiation with the surgeons. A robust approach is proposed to tackle uncertain surgery durations without assuming a given distribution for these random parameters, while allowing the level of conservatism in the solutions to be controlled. Practical, real-sized problems from the hospital are solved with very good optimality gaps within a short time limit, both for the deterministic and robust approaches. The schedules obtained are analyzed regarding quality and robustness, and are also compared with the surgical schedules performed by the hospital.
OR in health services; Operating room scheduling; Surgical case assignment problem; Mixed integer programming; Robust optimization;
http://www.sciencedirect.com/science/article/pii/S0377221717300711
Marques, Inês
Captivo, M. Eugénia
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:814-8272017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:814-827
article
Discrete representation of the non-dominated set for multi-objective optimization problems using kernels
In this paper, we are interested in producing discrete and tractable representations of the set of non-dominated points for multi-objective optimization problems, both in the continuous and discrete cases. These representations must satisfy conditions of coverage, i.e. providing a good approximation of the non-dominated set; spacing, i.e. avoiding redundancies; and cardinality, i.e. using the smallest possible number of points. This leads us to introduce the new concept of (ε, ε′)-kernels, or ε-kernels when ε′ = ε is possible, which correspond to ε-Pareto sets satisfying an additional condition of ε′-stability. Among these, the kernels of small, or possibly optimal, cardinality are claimed to be good representations of the non-dominated set.
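The coverage condition described above can be illustrated with a minimal sketch: a greedy pass that retains a subset of points such that every input point is ε-dominated (here in the standard multiplicative sense, for minimization with positive objective values) by some retained point. This covers only the ε-Pareto-set part; the ε′-stability condition that distinguishes kernels is not modeled, and the point data are invented for illustration.

```python
def eps_dominates(y, z, eps):
    # y (1+eps)-dominates z: y is, up to a factor (1+eps), at least as good
    # as z on every objective (minimization, positive values assumed)
    return all(yi <= (1.0 + eps) * zi for yi, zi in zip(y, z))

def eps_pareto_set(points, eps):
    # greedy cover: keep a point only if no already-kept point eps-dominates it;
    # hence every input point is eps-dominated by some kept point
    kept = []
    for p in sorted(points):  # deterministic lexicographic sweep
        if not any(eps_dominates(k, p, eps) for k in kept):
            kept.append(p)
    return kept
```

With eps = 0.1, two near-duplicate pairs collapse to one representative each, so five clustered points yield a three-point representation.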
Multiple objective programming; Pareto set; Non-dominated points; Discrete representation; Exact and approximation algorithms; Kernel;
http://www.sciencedirect.com/science/article/pii/S0377221716309456
Bazgan, Cristina
Jamain, Florian
Vanderpooten, Daniel
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:169-1812017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:169-181
article
An improved method for forecasting spare parts demand using extreme value theory
Inventory control for spare parts is essential for many organizations due to the trade-off between preventing high holding cost and stockouts. The lead time demand distribution plays a central role in inventory control. The estimation of this distribution is problematic as the spare part demand is often intermittent, and as a consequence often only a limited number of non-zero data points are available in practice. The well-known empirical method uses historical demand data to construct the lead time demand distribution. Although it performs reasonably well when service requirements are relatively low, it has difficulties in achieving high target service levels. In this paper, we improve the empirical method by applying extreme value theory to model the tail of the lead time demand distribution. To make the most out of a limited number of demand observations, we establish that extreme value theory can be applied to lead time demand periods computed over overlapping intervals. We consider two service levels: the expected waiting time and cycle service level. Our experiments show that our method improves the inventory performance compared to the empirical method and is competitive with the WSS method, Croston’s method and SBA for a range of demand distributions.
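As a hedged sketch of the general idea (not the authors' exact estimator), the following computes lead-time demand over overlapping windows and fits a generalized Pareto tail to the threshold exceedances by the method of moments; the demand series, lead time, and threshold quantile are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical intermittent demand: many zero periods, occasional positive sizes
demand = rng.binomial(1, 0.3, 1000) * rng.exponential(5.0, 1000)
L = 4  # assumed lead time in periods
# lead-time demand over *overlapping* windows, to extract more tail observations
ltd = np.convolve(demand, np.ones(L), mode="valid")

u = np.quantile(ltd, 0.90)    # tail threshold (90th percentile, an assumption)
exc = ltd[ltd > u] - u        # exceedances over the threshold
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / v)  # method-of-moments GPD shape estimate
sigma = m * (1.0 - xi)        # method-of-moments GPD scale estimate

def tail_prob(x):
    """Estimated P(lead-time demand > x) for x >= u: empirical exceedance
    rate at u multiplied by the fitted GPD survival function."""
    z = max(1.0 + xi * (x - u) / sigma, 0.0)
    return (ltd > u).mean() * z ** (-1.0 / xi)
```

The tail estimate can then be inverted to set base-stock levels at high target service levels, where a purely empirical distribution runs out of observations.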
Forecasting; Inventory; Spare parts; Extreme value theory; Semi-parametric;
http://www.sciencedirect.com/science/article/pii/S0377221717301030
Zhu, Sha
Dekker, Rommert
van Jaarsveld, Willem
Renjie, Rex Wang
Koning, Alex J.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1036-10442017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1036-1044
article
Refined large deviations asymptotics for Markov-modulated infinite-server systems
Many networking-related settings can be modeled by Markov-modulated infinite-server systems. In such models, the customers’ arrival rates and service rates are modulated by a Markovian background process; additionally, there are infinitely many servers (and consequently the resulting model is often used as a proxy for the corresponding many-server model). The Markov-modulated infinite-server model hardly allows any explicit analysis, apart from results in terms of systems of (ordinary or partial) differential equations for the underlying probability generating functions, and recursions to obtain all moments. As a consequence, recent research efforts have pursued an asymptotic analysis in various limiting regimes, notably the central-limit regime (describing fluctuations around the average behavior) and the large-deviations regime (focusing on rare events). Many of these results use the property that the number of customers in the system obeys a Poisson distribution with a random parameter. The objective of this paper is to develop techniques to accurately approximate tail probabilities in the large-deviations regime. We consider the scaling in which the arrival rates are inflated by a factor N, and we are interested in the probability that the number of customers exceeds a given level Na. Where earlier contributions focused on so-called logarithmic asymptotics of this exceedance probability (which are inherently imprecise), the present paper improves upon those results in that exact asymptotics are established. These are found in two steps: first the distribution of the random parameter of the Poisson distribution is characterized, and then this knowledge is used to identify the exact asymptotics. The paper is concluded by a set of numerical experiments, in which the accuracy of the asymptotic results is assessed.
Queueing; Communication networks; Markov-modulation; Rare events; Large deviations;
http://www.sciencedirect.com/science/article/pii/S0377221716309006
Blom, Joke
De Turck, Koen
Mandjes, Michel
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:182-1942017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:182-194
article
Enriching demand forecasts with managerial information to improve inventory replenishment decisions: Exploiting judgment and fostering learning
This paper is concerned with analysing and modelling the effects of judgmental adjustments to replenishment order quantities. Judgmentally adjusting replenishment quantities suggested by specialized (statistical) software packages is the norm in industry. Yet, to date, no studies have attempted to either analytically model this situation or practically characterize its implications in terms of ‘learning’. We consider a newsvendor setting where information available to managers is reflected in the form of a signal that may or may not be correct, and which may or may not be trusted. We show the analytical equivalence of adjusting an order quantity and deriving an entirely new one in light of a necessary update of the estimated demand distribution. Further, we assess the system’s behaviour through a simulation experiment on theoretically generated data and we study how to foster learning to efficiently utilize managerial information. Judgmental adjustments are found to be beneficial even when the probability of a correct signal is not known. More generally, some interesting insights emerge into the practice of judgmentally adjusting order quantities.
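A minimal newsvendor sketch of the signal idea, under assumptions not taken from the paper: demand is a two-state Poisson mixture, the managerial signal names the true state with probability q, and the order quantity is the critical fractile of the Bayesian posterior mixture. All numbers are illustrative.

```python
import math

# hypothetical two-state demand: "low" mean 20, "high" mean 50, equally likely a priori
states = {"low": 20.0, "high": 50.0}
prior = {"low": 0.5, "high": 0.5}
q = 0.8  # assumed probability that the manager's signal names the true state

def posterior(signal):
    # Bayes update: P(state | signal) is proportional to P(signal | state) * prior
    like = {s: (q if s == signal else 1.0 - q) for s in states}
    z = sum(like[s] * prior[s] for s in states)
    return {s: like[s] * prior[s] / z for s in states}

def poisson_cdf(k, lam):
    # cumulative Poisson probability, built iteratively to avoid factorials
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def order_quantity(signal, cu=3.0, co=1.0, kmax=200):
    # smallest Q whose posterior-mixture CDF reaches the critical
    # fractile cu/(cu+co), with cu = underage cost, co = overage cost
    post = posterior(signal)
    target = cu / (cu + co)
    for k in range(kmax + 1):
        if sum(post[s] * poisson_cdf(k, states[s]) for s in states) >= target:
            return k
    return kmax
```

A trusted "high" signal shifts posterior mass to the high-demand state and raises the order quantity; a "low" signal lowers it, which is the adjustment behavior the paper analyzes.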
Inventory; Judgement; Judgmental adjustments; Newsvendor model; Learning;
http://www.sciencedirect.com/science/article/pii/S0377221717301066
Rekik, Yacine
Glock, Christoph H.
Syntetos, Aris A.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:631-6492017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:631-649
article
Optimal production, pricing, and substitution policies in continuous review production-inventory systems
We consider the optimal production, pricing, and substitution policies of a continuous-review production-inventory system with two products: a high-end product and a low-end product. Each product has its associated customer stream; however, the demands for the low-end product may be satisfied by the high-end product. We formulate the problem as a Markov decision process and characterize the structure of the optimal control policy, which specifies when to produce for each product, when to use the high-end product as a substitute, and how to set the optimal prices. We show that a base-stock production policy is optimal; however, the optimal base-stock level for each product depends on the inventory level of the other product and it features a monotonic property. We also demonstrate that the optimal substitution policy is a rationing policy with the rationing level depending on the total inventory amount. We find that the optimal prices can be either decreasing or increasing in the inventory levels, depending on the forms of demand functions. Furthermore, we utilize numerical experiments to investigate the impact of different system characteristics on the benefits of using substitution and dynamic pricing. Finally, we investigate when the dynamic pricing strategy and the substitution strategy are complements or substitutes.
Production; Substitution; Dynamic pricing; Continuous review; Markov decision process;
http://www.sciencedirect.com/science/article/pii/S0377221717300358
Yu, Yimin
Shou, Biying
Ni, Yaodong
Chen, Li
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:507-5132017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:507-513
article
Fast approximation algorithms for uniform machine scheduling with processing set restrictions
We consider the problem of nonpreemptively scheduling a set of independent jobs on a set of uniform machines, where each job has a set of machines to which it can be assigned. This kind of restriction is called a processing set restriction. Many kinds of processing set restrictions have been studied in the literature. In this paper we consider two kinds: the “inclusive processing set” and the “tree-hierarchical processing set”. Epstein and Levin (2011) gave polynomial time approximation schemes (PTASs) for both classes; however, their running times are rather high. In this paper, we give fast approximation algorithms for both cases and show that they both have a worst-case performance bound of 4/3. Moreover, we show that the bounds are achievable.
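For intuition only, here is a naive LPT-style greedy for uniform machines with processing set restrictions; it is not the paper's 4/3-approximation algorithm, and the instance data are invented.

```python
def greedy_schedule(jobs, speeds, eligible):
    """
    jobs: processing requirements p_j
    speeds: machine speeds s_i (processing time of job j on machine i is p_j / s_i)
    eligible: eligible[j] = set of machine indices job j may be assigned to
    Returns (loads, assignment). A simple longest-processing-time baseline,
    not the 4/3-approximation from the paper.
    """
    loads = [0.0] * len(speeds)
    assignment = [None] * len(jobs)
    for j in sorted(range(len(jobs)), key=lambda j: -jobs[j]):  # longest first
        # among permitted machines, pick the one finishing this job earliest
        best = min(eligible[j], key=lambda i: loads[i] + jobs[j] / speeds[i])
        loads[best] += jobs[j] / speeds[best]
        assignment[j] = best
    return loads, assignment
```

On a toy instance with one restricted job, the greedy routes the long jobs to the fast machine until the restricted job's eligibility forces a rebalancing.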
Scheduling; Uniform machines; Inclusive processing set; Tree-hierarchical processing set; Makespan; Worst-case bound;
http://www.sciencedirect.com/science/article/pii/S037722171730036X
Leung, Joseph Y-T.
Ng, C.T.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:873-8852017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:873-885
article
The Quadrant Shrinking Method: A simple and efficient algorithm for solving tri-objective integer programs
We present a new variant of the full 2-split algorithm, the Quadrant Shrinking Method (QSM), for finding all nondominated points of a tri-objective integer program. The algorithm is easy to implement and solves at most 3|YN|+1 single-objective integer programs when computing the nondominated frontier, where YN is the set of all nondominated points. A computational study demonstrates the efficacy of QSM.
Tri-objective integer programs; Quadrant shrinking method; Criterion space search method; Nondominated frontier;
http://www.sciencedirect.com/science/article/pii/S0377221716301631
Boland, Natashia
Charkhgard, Hadi
Savelsbergh, Martin
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:789-8032017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:789-803
article
Economies of scale, technical change and persistent and time-varying cost efficiency in Indian banking: Do ownership, regulation and heterogeneity matter?
Banks with different ownership types adjust differently to changes in regulatory environments. Although this topic has been widely studied, previous studies fail to control for bank heterogeneity (bank-specific effects) in estimating cost structure and efficiency. We propose a model in which we control for bank heterogeneity and introduce both persistent and time-varying inefficiency. Additionally, we incorporate determinants of both persistent and time-varying inefficiency, as well as production risk. Furthermore, our model allows the joint estimation of different technologies for different ownership types. We use this model to analyze the effect of regulation in Indian banking. We find that private banks have not exhausted their economies of scale, foreign banks are operating under diseconomies of scale, especially after the reforms, and the scale economies of state owned banks are unaffected by regulation. Banks of all ownership types have enjoyed technical progress; however, foreign banks have benefited the most, followed by state owned banks. Only state banks were able to improve their cost efficiency, while private banks, and especially foreign banks, were lagging behind their cost frontiers.
Finance; Persistent and time-varying inefficiency; Production risk; Private, public and foreign; Regulation;
http://www.sciencedirect.com/science/article/pii/S0377221717300607
Badunenko, Oleg
Kumbhakar, Subal C.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:520-5332017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:520-533
article
Impact of coordination on costs and carbon emissions for a two-echelon serial economic order quantity problem
Coordination in supply chains consists in aligning the decisions made by several echelons to reach a globally optimal solution called the centralized solution, and to share the benefits among the actors. This concept has been studied widely from a cost optimization perspective but coordination is also proposed by practitioners and academics as a solution to reduce carbon emissions. This article compares the costs and carbon emissions resulting from a non-coordinated two-echelon serial economic order quantity model to that of the centralized solution. Our model accounts for transportation and inventory related costs and emissions and we consider vehicle capacities. We derive new results to solve the problem in the non-coordinated and in the centralized cases. We provide sufficient conditions ensuring that coordination enables reducing both costs and emissions and we show that these conditions are satisfied in many applications. On the other hand, we also identify situations for which coordination leads to an increase in emissions and we provide sufficient conditions. In such situations, we additionally show how to obtain a solution decreasing both costs and carbon emissions. We use multiobjective optimization to identify all these solutions and we provide a series of insights.
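As a toy stand-in for the paper's model, the following sketches a textbook two-echelon serial EOQ with an integer order multiplier n, and compares the decentralized solution (the downstream echelon optimizes alone, the upstream echelon reacts) with a centralized grid search. The holding-cost form and all parameters are illustrative assumptions; transportation capacities and emissions are omitted.

```python
import math

def serial_cost(Q, n, D, A1, h1, A2, h2):
    # nested policy: the upstream echelon orders n*Q whenever downstream orders Q;
    # (n - 1) * Q / 2 is the average upstream stock under this policy
    upstream = A1 * D / (n * Q) + h1 * (n - 1) * Q / 2.0
    downstream = A2 * D / Q + h2 * Q / 2.0
    return upstream + downstream

def centralized(D, A1, h1, A2, h2, nmax=20):
    # joint optimum: closed-form Q for each integer multiplier n, then the best n
    best = None
    for n in range(1, nmax + 1):
        Q = math.sqrt(2.0 * D * (A1 / n + A2) / (h1 * (n - 1) + h2))
        c = serial_cost(Q, n, D, A1, h1, A2, h2)
        if best is None or c < best[0]:
            best = (c, n, Q)
    return best

def decentralized(D, A1, h1, A2, h2, nmax=20):
    # downstream picks its own EOQ; upstream then picks the best multiplier
    Q = math.sqrt(2.0 * A2 * D / h2)
    n = min(range(1, nmax + 1),
            key=lambda k: A1 * D / (k * Q) + h1 * (k - 1) * Q / 2.0)
    return serial_cost(Q, n, D, A1, h1, A2, h2), n, Q
```

By construction the centralized cost is never worse, which is the cost side of the coordination benefit; the abstract's point is that the emissions comparison need not go the same way.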
Inventory; Coordination; Carbon emissions; Multiobjective optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716310529
Bouchery, Yann
Ghaffari, Asma
Jemai, Zied
Tan, Tarkan
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:43-532017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:43-53
article
Clustering data that are graph connected
A new combinatorial model for clustering is proposed for all applications in which individual and relational data are available. Individual data refer to the intrinsic features of units; they are stored in a matrix D and are the typical input of all clustering algorithms proposed so far. Relational data refer to the observed links between units, representing social ties such as friendship, joint participation in social events, and so on. Relational data are stored in the graph G=(V,E), and the data available for clustering are the triplet G=(V,E,D), called an attributed graph. Known clustering algorithms can take advantage of the relational structure of G to redefine and refine the units’ membership. For example, uncertain membership of units in groups can be resolved using the sociological principle that ties are more likely to form between similar units. The model proposed here shows how to take the graph information into account, combining the clique partitioning objective function (a known clustering methodology) with connectivity as the structural constraint of the resulting clusters. The model can be formulated and solved using integer linear programming and a new family of cutting planes. Moderate-size problems are solved, and heuristic procedures are developed for instances in which the optimal solution can only be approximated. Finally, tests conducted on simulated data show that cluster quality is greatly improved by this methodology.
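The connectivity requirement on clusters can be checked with a short BFS routine; this is a hedged illustration of the structural constraint only, not part of the authors' ILP formulation or cutting planes.

```python
from collections import deque

def clusters_connected(n, edges, labels):
    """Check that every cluster (units sharing a label) induces a
    connected subgraph of G=(V,E)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for c in set(labels):
        members = [v for v in range(n) if labels[v] == c]
        # BFS restricted to the cluster, starting from any member
        seen = {members[0]}
        queue = deque([members[0]])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if labels[w] == c and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if len(seen) != len(members):
            return False
    return True
```

On a path graph 0-1-2-3, labeling the two halves as clusters is feasible, while an alternating labeling splits each cluster into disconnected pieces.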
Combinatorial optimization; Clustering; Clique partitioning; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221717301145
Benati, Stefano
Puerto, Justo
Rodríguez-Chía, Antonio M.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:403-4202017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:403-420
article
A unified approach to uncertain optimization
In this paper we consider uncertain scalar optimization problems with infinite scenario sets. We apply methods from vector optimization in general spaces, set-valued optimization and scalarization techniques to develop a unified characterization of different concepts of robust optimization and stochastic programming. These methods provide new insights into the interrelation between different concepts for handling uncertainties and naturally lead to new concepts of robustness.
Robust optimization; Stochastic optimization; Set optimization; Vector optimization; Nonlinear scalarization;
http://www.sciencedirect.com/science/article/pii/S0377221716310797
Klamroth, Kathrin
Köbis, Elisabeth
Schöbel, Anita
Tammer, Christiane
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:214-2212017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:214-221
article
Investigation on irreducible cost vectors in minimum cost arborescence problems
We study cost allocation rules in minimum cost arborescence problems, where agents need to build a network to a source in order to obtain some resource. Given a vector of costs of edges (agent/source pairs), the agents cooperate to construct a minimum cost arborescence rooted at the source in order to reduce the total building cost. Minimum cost arborescence problems extend the well-studied minimum cost spanning tree problems to deal with asymmetric edge costs. Regarding cost allocation rules in minimum cost arborescence problems, Dutta and Mishra (2012) extended the folk rule, one of the most important rules in minimum cost spanning tree problems, based on the problem with the vector of the most reduced costs, called the irreducible form. In minimum cost spanning tree problems, several axiomatic characterizations of the folk rule have been proposed. However, it is difficult to extend them to minimum cost arborescence problems. One reason is that strong and reasonable axioms in minimum cost spanning tree problems, which imply irreducible-form dependence of cost allocation rules, are not satisfied by the folk rule in minimum cost arborescence problems. Hence, we search for other axioms which imply irreducible-form dependence. For this purpose, we investigate irreducible cost vectors in minimum cost arborescence problems and characterize the irreducible form.
Game theory; Graph theory; Minimum cost arborescence problem; Irreducible cost; Cost allocation rule;
http://www.sciencedirect.com/science/article/pii/S0377221717300760
Kusunoki, Yoshifumi
Tanino, Tetsuzo
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:613-6242017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:613-624
article
The study of the unidirectional quay crane scheduling problem: complexity and risk-aversion
As a special case of the quay crane scheduling problem, the unidirectional quay crane scheduling problem has received more and more attention. In this paper, we analyze the computational complexity of the unidirectional quay crane scheduling problem and propose a tighter mixed integer programming formulation. Next, we develop three makespan-constrained models to obtain risk-averse solutions that mitigate the impacts of ship instability and solution infeasibility (which would otherwise require revamping or even rescheduling the plans). Comprehensive numerical experiments are designed to investigate the benefits of the tighter formulation and the three optimization models for risk aversion.
OR in maritime industry; Port container terminal; Unidirectional quay crane scheduling problem; Risk aversion; Robustness;
http://www.sciencedirect.com/science/article/pii/S0377221717300127
Chen, Jiang Hang
Bierlaire, Michel
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:904-9192017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:904-919
article
A new method for optimizing a linear function over the efficient set of a multiobjective integer program
We present a new algorithm for optimizing a linear function over the set of efficient solutions of a multiobjective integer program (MOIP). The algorithm’s success relies on the efficiency of a new algorithm for enumerating the nondominated points of a MOIP, which results from employing a novel criterion space decomposition scheme that (1) limits the number of subspaces that are created, and (2) limits the number of sets of disjunctive constraints required to define the single-objective IP that searches a subspace for a nondominated point. An extensive computational study demonstrates the efficacy of the algorithm. Finally, we show that the algorithm can be easily modified to efficiently compute the nadir point of a multiobjective integer program.
Multiobjective integer programming; Nondominated points; Extension of the L-shape search method; Optimizing over the efficient set; Nadir point;
http://www.sciencedirect.com/science/article/pii/S0377221716300741
Boland, Natashia
Charkhgard, Hadi
Savelsbergh, Martin
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:665-6792017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:665-679
article
Weighted sum model with partial preference information: Application to multi-objective optimization
Multi-objective optimization problems often lead to large nondominated sets, as the size of the problem or the number of objectives increases. Generating the whole nondominated set requires significant computation time, while most of the corresponding solutions are irrelevant to the decision maker (DM). Optimizing an aggregation function reduces the computation time and produces one or a very limited number of more focused solutions. This requires, however, the elicitation of precise preference parameters, which is often difficult and partly arbitrary, and might discard solutions of interest. An intermediate approach consists in using partial preference information with an aggregation function. In this work, we present a preference relation based on the weighted sum aggregation, where the weights are not precisely defined. We give some properties of this preference relation and define the set of preferred points as the set of nondominated points with respect to this relation. We provide an efficient and generic way of generating this preferred set using any standard multi-objective optimization algorithm. This approach shows competitive performance in both computation time and the quality of the generated preferred set.
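Assuming the weight set is a polytope given by a (hypothetical) list of its vertices, the preference relation can be sketched as follows for minimization: y is preferred to z when its weighted sum is at least as good for every vertex weight (checking vertices suffices, by linearity), with strictness handled by the converse check. This is an illustrative filter over a finite point set, not the paper's generation procedure.

```python
def ws_prefers(y, z, weight_vertices, tol=1e-12):
    # y is at least as good as z for every admissible weight vector;
    # by linearity it is enough to test the vertices of the weight polytope
    return all(sum(wi * yi for wi, yi in zip(w, y))
               <= sum(wi * zi for wi, zi in zip(w, z)) + tol
               for w in weight_vertices)

def preferred_set(points, weight_vertices):
    # keep the points not strictly outperformed under every admissible weight
    return [p for p in points
            if not any(q != p
                       and ws_prefers(q, p, weight_vertices)
                       and not ws_prefers(p, q, weight_vertices)
                       for q in points)]
```

With weights restricted to w1 in [0.4, 0.6] (vertices (0.4, 0.6) and (0.6, 0.4)), a point such as (2, 4.5) that no admissible weighting ever favors is filtered out, while the extreme and central trade-offs survive.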
Multiple objective programming; Weighted sum; Partial preference information;
http://www.sciencedirect.com/science/article/pii/S0377221717300085
Kaddani, Sami
Vanderpooten, Daniel
Vanpeperstraete, Jean-Michel
Aissi, Hassene
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:949-9632017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:949-963
article
Changeover formulations for discrete-time mixed-integer programming scheduling models
Changeover times can have a significant impact on the scheduling of manufacturing operations. Unfortunately, accounting for changeovers in mixed-integer programming (MIP) scheduling formulations makes the resulting models computationally more expensive. We propose five new formulations for sequence-dependent changeovers, applicable to a wide range of scheduling problems. We generate constraints for different sets of time points and sets of tasks. We also propose valid inequalities for makespan minimization. Furthermore, we prove results regarding the relative tightness of each formulation. Finally, we perform a computational study. Interestingly, we find that tighter formulations do not always lead to faster solution times, and we show that some of the new formulations perform better than the previously proposed ones.
Scheduling; Integer programming; Formulation tightness; Manufacturing scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221717300097
Velez, Sara
Dong, Yachao
Maravelias, Christos T.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:856-8722017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:856-872
article
Multi-objective branch and bound
Branch and bound is a well-known generic method for computing an optimal solution of a single-objective optimization problem. Based on the idea of “divide and conquer”, it consists in an implicit enumeration principle viewed as a tree search. Although branch and bound was first suggested by Land and Doig (1960), the first complete multi-objective branch and bound algorithm that we identified was proposed by Kiziltan and Yucaoglu (1983). Rather few multi-objective branch and bound algorithms have been proposed since. This situation is not surprising, as the contributions on extending the components of branch and bound to multi-objective optimization are recent. For example, the concept of bound sets, which extends the classic notion of bounds, was mentioned by Villarreal and Karwan (1981), but it was only developed for the first time in 2001 by Ehrgott and Gandibleux, and fully defined in 2007.
Multiple objective programming; Branch and bound; Bound sets;
http://www.sciencedirect.com/science/article/pii/S037722171730067X
Przybylski, Anthony
Gandibleux, Xavier
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:972-9832017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:972-983
article
A methodology for determining an effective subset of heuristics in selection hyper-heuristics
We address the important step of determining an effective subset of heuristics in selection hyper-heuristics. Little attention has been devoted to this step in the literature, and the decision is left to the discretion of the investigator. The performance of a hyper-heuristic depends on the quality and size of the heuristic pool. Using more than one heuristic is generally advantageous; however, an unnecessarily large pool can decrease the performance of adaptive approaches. Our goal is to bring methodological rigour to this step. The proposed methodology uses non-parametric statistics and fitness landscape measurements from an available set of heuristics and benchmark instances in order to produce a compact subset of effective heuristics for the underlying problem. We also propose a new iterated local search hyper-heuristic using multi-armed bandits coupled with a change detection mechanism. The methodology is tested on two real-world optimization problems: course timetabling and vehicle routing. The proposed hyper-heuristic with a compact heuristic pool outperforms state-of-the-art hyper-heuristics and competes with problem-specific methods in course timetabling, even producing new best-known solutions in 5 out of the 24 studied instances.
Metaheuristics; Hyper-heuristics; Adaptive search; Combinatorial optimization; Iterated local search;
http://www.sciencedirect.com/science/article/pii/S0377221717300772
Soria-Alcaraz, Jorge A.
Ochoa, Gabriela
Sotelo-Figeroa, Marco A.
Burke, Edmund K.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:588-6002017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:588-600
article
Multifirm models of cybersecurity investment competition vs. cooperation and network vulnerability
In this paper, we develop and compare three distinct models for cybersecurity investment in competitive and cooperative situations to safeguard against potential and ongoing threats. We introduce a Nash equilibrium model of noncooperation in terms of cybersecurity levels of the firms involved, which is formulated, analyzed, and solved using variational inequality theory. The equilibrium of this model then acts as the disagreement point over which bargaining takes place in the setting of the second model, which yields a cooperative solution in which the firms are guaranteed that their expected utilities are no lower than those achieved under noncooperation. Nash bargaining theory is utilized to argue for information sharing and to quantify its monetary and security benefits in terms of reduction in network vulnerability to cyberattacks. The third model in this paper also focuses on cooperation among the firms in terms of their cybersecurity levels, but from a system-optimization perspective in which the sum of the expected utilities is maximized. Qualitative properties are provided for the models in terms of existence and uniqueness results along with numerical solutions to two cases focusing on retailers and financial service firms, since these have been subject to some of the most damaging cyberattacks. Sensitivity analysis results are also provided. We compare the solutions of the models for the cases and recommend a course of action that has both financial and policy-related implications.
Cybersecurity; Investments; Game theory; Nash equilibrium; Nash bargaining;
http://www.sciencedirect.com/science/article/pii/S0377221716310682
Nagurney, Anna
Shukla, Shivani
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:625-6302017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:625-630
article
A note on “A multi-period profit maximizing model for retail supply chain management”
In this note we present an efficient exact algorithm to solve the joint pricing and inventory problem for which Bhattacharjee and Ramesh (2000) proposed two heuristics. The algorithm is based on a method proposed by Thomas (1970) and we show additional properties which can be used to arrive at an even more efficient algorithm. Furthermore, we point out several shortcomings in the paper by Bhattacharjee and Ramesh.
Inventory; Pricing; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221717300371
van den Heuvel, Wilco
Wagelmans, Albert P.M.
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1191-11992017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1191-1199
article
Study of aggregation algorithms for aggregating imprecise software requirements’ priorities
Extensive Numerical Assignment (ENA) is a novel Requirements Prioritization Technique introduced by the authors that acknowledges the uncertain and imprecise nature of human judgment. A controlled experiment is conducted during which data are collected using ENA for the requirements assessment of a university website system. The objective of this paper is to study how the imprecise data obtained from ENA can be aggregated using two aggregation algorithms, Multiple Attribute Utility Theory (MAUT) and Interval Evidential Reasoning (IER), to generate requirements’ priorities in the presence of conflicting personal preferences among assessors. A simplified version of IER called Laplace Evidential Reasoning (LER) is introduced and the results are discussed. LER has the potential to emerge as a competent aggregation algorithm because of its reasonable processing requirements when compared to IER and its ability to produce a rich set of outputs when compared to MAUT.
Requirements Prioritization; Interval Evidential Reasoning; Uncertain assessment; Multiple Attribute Utility Theory;
http://www.sciencedirect.com/science/article/pii/S0377221716309651
Voola, Persis
A., Vinaya Babu
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1054-10632017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1054-1063
article
Analytical solution for an investment problem under uncertainties with shocks
We derive the optimal investment decision in a project where both demand and investment cost are stochastic processes, possibly subject to shocks. We extend the approach used in Dixit and Pindyck (1994) to deal with two sources of uncertainty, and we assume that the underlying processes are jump diffusion processes. Assuming certain conditions on the parameters, we are able to derive a closed-form expression for the value of the firm. Finally, we present comparative statics for the investment threshold with respect to the relevant parameters.
Markov processes; Jump-diffusion process; Investment decision; Optimal stopping time problem;
http://www.sciencedirect.com/science/article/pii/S0377221717300139
Nunes, Cláudia
Pimentel, Rita
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1156-11682017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1156-1168
article
Stochastic short-term hydropower planning with inflow scenario trees
This paper presents an optimization approach to solve the short-term hydropower unit commitment and loading problem with uncertain inflows. A scenario tree is built based on a forecasted fan of inflows, which is developed using the weather forecast and the historical weather realizations. The tree-building approach seeks to minimize the nested distance between the stochastic process of historical inflow data and the multistage stochastic process represented in the scenario tree. A two-phase multistage stochastic model is used to solve the problem. The proposed approach is tested on a 31-day rolling horizon with daily forecasted inflows for three power plants situated in the province of Quebec, Canada, that belong to the company Rio Tinto.
Large scale optimization; Nonlinear programming; OR in energy; Scenarios; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S0377221716309535
Séguin, Sara
Fleten, Stein-Erik
Côté, Pascal
Pichler, Alois
Audet, Charles
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:1024-10422017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:1024-1042
article
Stochastic impulse control with regime-switching dynamics
Optimal product management problems with multiple product generations in continuous time lead to the consideration of dynamic optimal control problems that feature both intervention costs and partially controlled regime shifts. We therefore investigate and solve such stochastic impulse control problems with regime-switching in a general setting. We analyze the associated coupled systems of quasi-variational inequalities in suitable Sobolev spaces, and we establish a direct approach to construct both the value function and optimal strategies. Our results in particular yield a numerical method for the computation of the optimal value function and the associated strategies.
Control; Product life cycle; Stochastic impulse control; Intervention costs; Regime shifts;
http://www.sciencedirect.com/science/article/pii/S0377221716310633
Korn, Ralf
Melnyk, Yaroslav
Seifried, Frank Thomas
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:984-9942017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:984-994
article
Supply chain forecasting when information is not shared
The operations management literature is abundant in discussions on the benefits of information sharing in supply chains. However, there are many supply chains where information may not be shared due to constraints such as compatibility of information systems, information quality, trust and confidentiality. Furthermore, a steady stream of papers has explored a phenomenon known as Downstream Demand Inference (DDI) where the upstream member in a supply chain can infer the downstream demand without the need for a formal information sharing mechanism. Recent research has shown that, under more realistic circumstances, DDI is not possible with optimal forecasting methods or Single Exponential Smoothing but is possible when supply chains use a Simple Moving Average (SMA) method. In this paper, we evaluate a simple DDI strategy based on SMA for supply chains where information cannot be shared. This strategy allows the upstream member in the supply chain to infer the consumer demand mathematically rather than it being shared. We compare the DDI strategy with the No Information Sharing (NIS) strategy and an optimal Forecast Information Sharing (FIS) strategy in the supply chain. The comparison is made analytically and by experimentation on real sales data from a major European supermarket located in Germany. We show that using the DDI strategy improves on NIS by reducing the Mean Square Error (MSE) of the forecasts, and cutting inventory costs in the supply chain.
Supply chain management; Information sharing; Simple moving average; ARIMA; Downstream demand inference;
http://www.sciencedirect.com/science/article/pii/S0377221716309717
Ali, Mohammad M.
Babai, Mohamed Zied
Boylan, John E.
Syntetos, A.A.
oai:RePEc:eee:ejores:v:260:y:2017:i:3:p:807-8132017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:3:p:807-813
article
The vector linear program solver Bensolve – notes on theoretical background
Bensolve is an open source implementation of Benson’s algorithm and its dual variant. Both algorithms compute primal and dual solutions of vector linear programs (VLP), which include the subclass of multiple objective linear programs (MOLP). The recent version of Bensolve can treat arbitrary vector linear programs whose upper image does not contain lines. This article surveys the theoretical background of the implementation. In particular, the role of VLP duality for the implementation is pointed out. Some numerical examples are provided. In contrast to the existing literature we consider a less restrictive class of vector linear programs.
Vector linear programming; Linear vector optimization; Multi-objective optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716300765
Löhne, Andreas
Weißing, Benjamin
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:1003-10162017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:1003-1016
article
An algorithmic framework for tool switching problems with multiple objectives
The tool switching problem is a classical and extensively studied problem in flexible manufacturing systems. The standard example is a CNC machine with a limited number of tool slots to which tools for drilling and milling have to be assigned, with the goal of minimizing the number of necessary tool switches and/or the number of machine stops over time. In this work we present a branch-and-bound based algorithmic framework for a very general and versatile formulation of this problem (involving arbitrary setup and processing times) that allows addressing both of these objectives simultaneously (or only one of them), and that improves over several known approaches from the literature. We demonstrate the usefulness of our algorithm by rigorous theoretical analysis and by experiments with both large real-world and random instances.
Flexible manufacturing system; Tool switching; Branch-and-bound;
http://www.sciencedirect.com/science/article/pii/S0377221716309596
Furrer, Martina
Mütze, Torsten
oai:RePEc:eee:ejores:v:259:y:2017:i:3:p:847-8632017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:259:y:2017:i:3:p:847-863
article
Decomposition algorithms for synchronous flow shop problems with additional resources and setup times
In this paper, we present decomposition algorithms for synchronous flow shop problems with additional resources and setup times. In such an environment, jobs are moved from one machine to the next by an unpaced synchronous transportation system, which implies that the processing is organized in synchronized cycles. In each cycle the current jobs start at the same time on the corresponding machines and after processing have to wait until the last job is finished. Afterwards, all jobs are moved to the next machine simultaneously. During processing, each job needs one additional resource and setup times have to be taken into account when changing from one resource to another. The goal is to find a production sequence of the jobs as well as a feasible assignment of resources to the jobs such that the total production time (makespan) is minimized. We propose two decomposition strategies dealing with the two subproblems of job sequencing and resource assignment hierarchically. Both approaches are computationally evaluated and compared. As a by-product, we also present efficient heuristics for the makespan minimization problem in synchronous flow shops without setup times.
Scheduling; Flow shop; Synchronous movement; Resources; Setup times;
http://www.sciencedirect.com/science/article/pii/S0377221716309407
Waldherr, Stefan
Knust, Sigrid
oai:RePEc:eee:ejores:v:261:y:2017:i:1:p:1-162017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:1:p:1-16
article
Mathematical optimization approaches for facility layout problems: The state-of-the-art and future research directions
Facility layout problems are an important class of operations research problems that has been studied for several decades. Most variants of facility layout are NP-hard, therefore global optimal solutions are difficult or impossible to compute in reasonable time. Mathematical optimization approaches that guarantee global optimality of solutions or tight bounds on the global optimal value have nevertheless been successfully applied to several variants of facility layout. This review covers three classes of layout problems, namely row layout, unequal-areas layout, and multifloor layout. We summarize the main contributions to the area made using mathematical optimization, mostly mixed integer linear optimization and conic optimization. For each class of problems, we also briefly discuss directions that remain open for future research.
Facilities planning and design; Unequal-areas facility layout; Row layout; Mixed integer linear optimization; Semidefinite optimization;
http://www.sciencedirect.com/science/article/pii/S037722171730084X
Anjos, Miguel F.
Vieira, Manuel V.C.
oai:RePEc:eee:ejores:v:260:y:2017:i:2:p:571-5872017-04-20RePEc:eee:ejores
RePEc:eee:ejores:v:260:y:2017:i:2:p:571-587
article
To collaborate or not to collaborate: Prompting upstream eco-efficient innovation in a supply chain
Large retailers are a source of great stress for suppliers in supply chains: they want better environmental performance and ever-lower prices without sacrificing product quality. Retailers’ initiatives pressure suppliers to invest substantially upfront to reduce packaging and energy use. The potential savings in packaging materials, production and shipping costs that could offset suppliers’ upfront investments, however, are not going mainly toward suppliers’ bottom lines, since the retailers appear to share only the savings but not the upfront investment. Thus, retailers’ heralded sustainability initiatives are weighed down by the substantial costs to be borne by suppliers alone, and retailers’ efforts to improve the environmental performance of their supply chains do not materialize as predicted. In this paper, we consider a two-echelon supply chain where an upstream supplier sells through a downstream retailer. The supplier is responsible for investing effort in an eco-efficient innovation, which decreases her unit production cost, improves the per-unit environmental performance of her product, and increases the value of the product to consumers (thus enhancing market demand); the retailer, who holds the channel power, sets the product price and sells to consumers. First, we delve into the non-collaborative case where the retailer imposes a minimum requirement on the level of eco-efficient innovation effort to be invested by the supplier. Second, we study the profit/cost implications of collaboration between the two parties for upstream eco-efficient innovation by scrutinizing two types of contracts: a cost-sharing agreement wherein the retailer shares a fraction of the supplier’s upfront cost of investment in innovation; and a revenue-sharing agreement under which the retailer shares a fraction of his revenues generated by the supplier’s eco-efficient innovation effort. 
For each contract, we also contemplate the possibility of negotiation between the retailer and supplier which forms the basis of division of costs and revenues under a cost- and revenue-sharing contract, respectively.
Supply chain management; Sustainable operations; Eco-efficient innovation; Contracts;
http://www.sciencedirect.com/science/article/pii/S0377221716310694
Yenipazarli, Arda
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:704-7142017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:704-714
article
Evaluation functions and decision conditions of three-way decisions with game-theoretic rough sets
Three-way decisions have been used over the years in many application areas. The rough sets and its extensions provide useful approaches for three-way decisions. Typically, these approaches employ a single evaluation function or criterion to induce three-way decisions. When extending the rough set based three-way decisions to multiple criteria decision making (MCDM), two issues are encountered. The first issue is related to the construction and definition of aggregation mechanisms for dealing with differences in results of evaluation functions. The second issue is related to the setting of choice structure for selecting the three types of decision choices. In this article, we consider the role and use of game-theoretic rough set (GTRS) model to resolve and address these two issues. The issue related to differences in evaluation functions is addressed with GTRS by implementing a game that considers multiple evaluation functions as game players. The game-theoretic analysis in the GTRS is employed to resolve the differences by determining a tradeoff between evaluation functions. The issue related to choice structure is addressed by considering the conditions under which different game outcomes could constitute a game solution. In particular, the equilibrium analysis within games is used to construct the rules for three-way decisions. A demonstrative example is used to explain the use of the proposed approach. The relationship between the proposed approach and the probabilistic rough sets is also discussed.
Game-theoretic rough sets; Game theory; Three-way decisions; Rough sets; Probabilistic rough sets;
http://www.sciencedirect.com/science/article/pii/S0377221717300036
Azam, Nouman
Zhang, Yan
Yao, JingTao
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1052-10652017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1052-1065
article
Improving fleet management in mines: The benefit of heterogeneous match factor
Mining requires large, expensive equipment: transportation is known to account for 50–60% of total operational costs. Mixed-fleet optimization is essential to support business sustainability. A new approach, based on differences in the match factor, is proposed for three cases: a heterogeneous truck fleet, a heterogeneous shovel fleet, and a fleet comprising both heterogeneous trucks and shovels. A simulation study provides evidence that the match factor can be used to determine ranges for the numbers of different types of trucks in an optimal fleet. The choice of heuristic truck dispatching method has a significant influence on performance. Additionally, the simulated results reveal differences in production across the different heterogeneous fleet types.
Simulation; Match factor; Heuristic truck dispatching methods; Heterogeneous fleets;
http://www.sciencedirect.com/science/article/pii/S0377221717301789
Chaowasakoo, Patarawan
Seppälä, Heikki
Koivo, Heikki
Zhou, Quan
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:613-6252017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:613-625
article
A control-chart-based queueing approach for service facility maintenance with energy-delay tradeoff
Maintenance planning and energy consumption control are critical issues in facility operations management. In practice, the energy consumption of a facility, which will be affected by the operation condition, is closely connected with the associated maintenance policy. Specifically, for an energy-consuming service system, though a frequent maintenance activity can keep the facility in a good condition with low energy consumption, it makes the delay time longer and leads to a poor customer experience. In this paper, we study a single-server queueing system with different energy consumption levels in the associated running states to address the conflict between energy consumption and customer delay. Two types of maintenance activities are implemented for the server, i.e., planned maintenance and reactive maintenance. Planned maintenance is adopted based on a frequency parameter at the beginning of an idle period, and reactive maintenance is initiated by a Shewhart individual control chart (condition-based maintenance). To capture the energy-delay tradeoff, our objective is to develop an optimal maintenance policy that minimizes the long-run expected total cost of the system under a customer waiting time constraint. Numerical experiments are conducted to analyze the problem, from which useful managerial insights are obtained for the optimal maintenance policy. The results demonstrate the robustness of the proposed maintenance model, its advantage over the model without a control chart, and its applicability in general situations.
OR in energy; Energy-delay tradeoff; Queueing system; Maintenance; Control chart;
http://www.sciencedirect.com/science/article/pii/S0377221717302308
Zhou, Wenhui
Zheng, Zhibin
Xie, Wei
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1098-11092017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1098-1109
article
The impact of design uncertainty in engineer-to-order project planning
A major driver of planning complexity in engineer-to-order (ETO) projects is design uncertainty far into the engineering and production processes. This leads to uncertainty in technical information and will typically lead to a revision of parts of the project network itself. Hence, this uncertainty is different from standard task completion uncertainty. We build a stochastic program to draw attention to, and analyse, the engineering-design planning problem, and in particular, to understand what role design flexibility plays in hedging against such uncertainty. The purpose is not to devise a general stochastic dynamic model to be used in practice, but to demonstrate by the use of small model instances how design flexibility actually adds value to a project and what, exactly, it is that produces this value. This will help us understand better where and when to develop flexibility and buffers, even when not actually solving stochastic models.
Project scheduling; Engineer-to-order; Design uncertainty; Design flexibility;
http://www.sciencedirect.com/science/article/pii/S0377221717301844
Vaagen, Hajnalka
Kaut, Michal
Wallace, Stein W.
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:994-10002017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:994-1000
article
Behavioral models for first-price sealed-bid auctions with the one-shot decision theory
We build an auction model with the one-shot decision theory which describes the process of a bidder deciding his/her bidding price in first-price sealed-bid auctions. The decision making procedure involves two steps: First, for each of his/her possible bidding prices, the bidder examines every possible highest bidding price provided by the other bidders and chooses one as a focus point of this bidding price of him/her. Then, the bidder determines such a bidding price as the optimal one that generates the best outcome when its focus point occurs. The optimal bidding price can be obtained, and two common phenomena in auction markets, throwing away and overbidding, are well explained.
Decision support systems; Behavioral models; Auctions/bidding; One-shot decision theory; Throwing away/overbidding;
http://www.sciencedirect.com/science/article/pii/S037722171730228X
Wang, Chao
Guo, Peijun
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1125-11402017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1125-1140
article
Estimating Malmquist productivity indexes using probabilistic directional distances: An application to the European banking sector
By adopting the latest advances in the probabilistic characterization of directional distance functions introduced by Daraio and Simar (2014), our paper develops a Malmquist productivity index and presents its main decompositions. Specifically, the proposed productivity index is based on the probabilistic version of directional distance functions, which are expressed as transformations of radial distances. We illustrate how these indexes can be computed and how different components can be derived. Specifically, we demonstrate how a probabilistic version of the following categories of change can be obtained: technical, efficiency, pure efficiency, scale efficiency, scale change factor and scale bias of technical change. Finally, we apply the probabilistic productivity indexes, alongside their decompositions, to input/output data from a sample of 644 banks from 28 European countries for the years 2007, 2010 and 2014. The results suggest that the EU banks’ productivity levels remained relatively unchanged from the onset of the U.S. subprime crisis and during the EU sovereign debt crisis. During the U.S. subprime crisis and the Global Financial Crisis, banks maintained their productivity levels by utilizing their inputs better and by exploiting scale economies. However, during the sovereign debt crisis banks maintained their productivity levels by investing in financial engineering competences.
Data envelopment analysis; Directional distance functions; Global financial crisis; Probabilistic approach;
http://www.sciencedirect.com/science/article/pii/S0377221717301984
Kevork, Ilias S.
Pange, Jenny
Tzeremes, Panayiotis
Tzeremes, Nickolaos G.
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1189-12022017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1189-1202
article
A real options game of alliance timing decisions in biopharmaceutical research and development
In this article we examine the alliance timing trade-off facing both pharmaceutical and biotech firms in a stochastic and competitive environment. Specifically, we introduce a real options game (ROG), where a pharmaceutical company can choose between two competing biotech firms by sequentially offering a licensing deal early or late in the new drug development process. We find that, when the alliance raises the drug market value significantly, the agreement is signed late in the drug development process. This suggests that the postponement effect implied by the use of real options prevails over the biotech firms’ competition effect, which would instead play in favor of an early agreement for pre-emption reasons. When the alliance does not raise the drug market value significantly, the optimal timing depends on the level of royalties retained by the pharmaceutical company. In particular, an early agreement is signed in the presence of a low level of royalties. In this case, indeed, the competition effect becomes predominant because the pharmaceutical company can substantially reduce the upfront payment and thus the potential loss incurred if the biotech partner does not exercise her option to continue the new drug development process. We also show that the alliance timing outcomes of our real options game considerably differ from those obtained when both parties use the net present value (NPV) to assess their payoffs.
OR in research & development; Research & development alliance timing; Real options games; Biopharmaceutical industry;
http://www.sciencedirect.com/science/article/pii/S0377221717302291
Morreale, Azzurra
Robba, Serena
Lo Nigro, Giovanna
Roma, Paolo
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:475-4852017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:475-485
article
Exact algorithms for the Equitable Traveling Salesman Problem
Given a weighted graph G=(V,E), the Equitable Traveling Salesman Problem (ETSP) asks for two perfect matchings in G such that (1) the two matchings together form a Hamiltonian cycle in G and (2) the absolute difference in costs between the two matchings is minimized. The problem is shown to be NP-hard, even when the graph G is complete. We present two integer programming models to solve the ETSP and compare the strength of these formulations. One model is solved through branch-and-cut, whereas the other model is solved through a branch-and-price framework. A simple local search heuristic is also implemented. We conduct computational experiments on different types of instances, often derived from the TSPLib. It turns out that the behavior of the different approaches varies with the type of instances. For small and medium-sized instances, branch-and-bound and branch-and-price produce comparable results. However, for larger instances branch-and-bound outperforms branch-and-price.
Combinatorial optimization; Traveling Salesman Problem; Branch-and-bound; Branch-and-price; Exact algorithms;
http://www.sciencedirect.com/science/article/pii/S037722171730139X
Kinable, Joris
Smeulders, Bart
Delcour, Eline
Spieksma, Frits C.R.
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:903-9172017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:903-917
article
Asymmetric retailers with different moving sequences: Group buying vs. individual purchasing
Given a quantity discount contract, retailers often prefer group buying to individual purchasing to acquire a lower wholesale price. However, under the combined effects of asymmetric demand information and different moving sequences, information revelation/acquisition between retailers may occur under group buying, thereby directly affecting their preference between individual purchasing and group buying. To capture their real preferences, we develop a model in which two retailers with asymmetric demand information purchase products from a common supplier under either individual purchasing or group buying (when moving first, later or simultaneously) and then sell to the market. We show that the informed retailer may forego group buying due to her loss of information advantage because her order quantity is revealed to the uninformed retailer. Moreover, the uninformed retailer may also reject group buying, despite obtaining demand information, because acquiring information by purposely moving later eliminates the uninformed retailer’s first-mover right and reduces his market share. Furthermore, in the context of information management, we demonstrate that the process of information revelation/acquisition harms the informed retailer to some extent but benefits the uninformed retailer to a greater extent relative to the first-best outcome under perfect information. We also show that the first-mover right becomes less valuable for both retailers as the value of information increases and, in particular, that their preferences concerning the moving sequence cannot reach a consensus in any case.
Supply chain management; Group buying; Quantity discount contract; Asymmetric information; Moving sequence;
http://www.sciencedirect.com/science/article/pii/S0377221717301479
Yan, Yingchen
Zhao, Ruiqing
Lan, Yanfei
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:735-7542017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:735-754
article
Modeling the Steering of International Roaming Traffic
Telecommunications operators offering international roaming services need to decide which foreign networks to steer their customers towards, in order to benefit from the best wholesale commercial conditions. This operational managerial decision translates into a least-cost traffic routing problem for which five mixed integer linear programming models, corresponding to the most used commercial agreements in the industry, are hereby introduced. The models are based on a minimum cost flow problem over a layered network following an underlying year-planning managerial approach, with multi-period decision dependency and in the presence of uncertainty. A computational experiment is carried out using a comprehensive framework designed to generate structured semi-random instances that simulate realistic market and business scenarios. Results for this experiment are discussed according to business sustainability performance metrics and confirm the soundness of the models. Given the nature of the problem, we consider the computational effort required to be low.
OR in telecommunications; Roaming; Traffic steering; Optimization; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221717301522
Martins, Carlos Lúcio
Fonseca, Maria da Conceição
Pato, Margarida Vaz
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:679-6892017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:679-689
article
Efficiency measurement and frontier projection identification for general two-stage systems in data envelopment analysis
The multiplier and envelopment models in data envelopment analysis (DEA) have a primal-dual relationship, and produce the same efficiency measure for a decision making unit (DMU). In addition to measuring efficiency, the multiplier model is able to identify the production frontier facets defined by the DMUs being evaluated, and the envelopment model is able to identify the projection point, based on which the efficiencies are measured. For general two-stage systems where the whole operation of the system is divided into two smaller operations carried out by two divisions connected in series, the multiplier model is generally used to measure division efficiencies, and the envelopment model is used to identify the projection point. This paper shows that the projection point identified by the envelopment model is not the one used by the multiplier model to measure the division efficiencies for general two-stage systems. Based on the primal-dual relationship of the two models, the envelopment model is reformulated to be able to obtain the projection point and measure the division efficiencies at the same time. The input- and output-orientation of the two divisions lead to four forms of the model. A case of measuring the innovation efficiency of thirty-five countries is used to illustrate the characteristics of this model and its differences from the multiplier model.
Data envelopment analysis; General two-stage system; Projection point;
http://www.sciencedirect.com/science/article/pii/S0377221717302023
Kao, Chiang
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:984-9932017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:984-993
article
Higher-degree stochastic dominance optimality and efficiency
We characterize a range of Stochastic Dominance (SD) relations by means of finite systems of convex inequalities. For ‘SD optimality’ of degree 1 to 4 and ‘SD efficiency’ of degree 2 to 5, we obtain exact systems that can be implemented using Linear Programming or Convex Quadratic Programming. For SD optimality of degree five and higher, and SD efficiency of degree six and higher, we obtain necessary conditions. We use separate model variables for the values of the derivatives of all relevant orders at all relevant outcome levels, which allows for preference restrictions beyond the standard sign restrictions. Our systems of inequalities can be interpreted in terms of piecewise polynomial utility functions with a number of pieces that increases with the number of outcomes and the degree of SD. An empirical study analyzes the relevance of higher-order risk preferences for comparing a passive stock market index with actively managed stock portfolios in standard data sets from the empirical asset pricing literature.
Decision analysis; Stochastic dominance; Expected utility; Linear programming; Convex quadratic programming;
http://www.sciencedirect.com/science/article/pii/S0377221717302394
Fang, Yi
Post, Thierry
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:893-9022017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:893-902
article
The Repair Kit Problem with positive replenishment lead times and fixed ordering costs
The Repair Kit Problem (RKP) concerns the determination of a set of items taken by a service engineer to perform on-site product support. Such a set is called a kit. Models developed in the literature have always ignored the lead times associated with delivering items to replenish the kit, thereby limiting the practical relevance of the proposed solutions. Motivated by a real-life case, we develop a model with positive lead times to control the replenishment quantities of the items in the kit, and study the performance of (s, S) policies under a service objective. The choice for (s, S) policies is made in order to accommodate fixed ordering costs. We present a method to calculate job fill rates with exact expressions, and discuss a heuristic approach to optimize the reorder level and order-up-to level for each item in the kit. The empirical utility of the model is assessed on real-world data from an equipment manufacturer and useful insights are offered to after-sales managers.
OR in service industries; Inventory; Repair kit problem; Lead times; Service parts;
http://www.sciencedirect.com/science/article/pii/S0377221717301418
Prak, Dennis
Saccani, Nicola
Syntetos, Aris
Teunter, Ruud
Visintin, Filippo
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:656-6652017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:656-665
article
Cost-based feature selection for Support Vector Machines: An application in credit scoring
In this work we propose two formulations based on Support Vector Machines for simultaneous classification and feature selection that explicitly incorporate attribute acquisition costs. This is a challenging task for two main reasons: the estimation of the acquisition costs is not straightforward and may depend on multivariate factors, and the inter-dependence between variables must be taken into account for the modelling process since companies usually acquire groups of related variables rather than acquiring them individually. Mixed-integer linear programming models are proposed for constructing classifiers that constrain acquisition costs while classifying adequately. Experimental results using credit scoring datasets demonstrate the effectiveness of our methods in terms of predictive performance at a low cost compared to well-known feature selection approaches.
Analytics; Feature selection; Support Vector Machines; Mixed-integer programming; Credit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221717301595
Maldonado, Sebastián
Pérez, Juan
Bravo, Cristián
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:436-4492017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:436-449
article
Alternative models for markets with nonconvexities
In many electricity markets, the market operator solves a social welfare maximization (SW) model to determine market prices and generation (and consumption) “dispatch” instructions to firms participating in the market. When generation costs (or consumption benefits) are described as mixed integer programs, linear prices cannot, in general, be found such that all market participants are satisfied that the operator’s dispatch instructions maximize profits, i.e., they perceive an opportunity cost. Often, “make whole” payments are made to market participants to bring negative profits up to zero, but not to adjust positive, nonoptimal profits. Make whole payments are added to “uplift” charges to customers for various non-market services provided by market participants. In previous research, “uplift” is extended to include the entire opportunity costs, and prices are adjusted to minimize the part of uplift that is due to discrete variables, while keeping the SW quantity instructions. We show that the SW instructions must be modified if the non-dispatchable demand is price sensitive; to allow for this, we define a model that minimizes total opportunity cost (MTOC), and we compare it to three other models – SW, SW with non-negative profit constraints, and a minimum complementarity (MC) model recently proposed by Gabriel et al. We show that the MC model approximates the MTOC model. Two unit commitment problems illustrate the models. In an online appendix, we also present small MTOC and MC two-commodity models for which an SW model cannot be formulated due to nonintegrability of demand.
OR in energy; Near equilibrium; Uplift; Complementarity;
http://www.sciencedirect.com/science/article/pii/S0377221717301546
David Fuller, J.
Çelebi, Emre
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:715-7342017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:715-734
article
Electricity forward curves with thin granularity: Theory and empirical evidence in the hourly EPEXspot market
We propose a constructive definition of the electricity forward price curve with cross-sectional timescales down to hourly frequency. The curve is jointly consistent with both risk-neutral market information, represented by baseload and peakload futures quotes, and historical market information, as mirrored by the periodical patterns exhibited by the time series of day-ahead prices. From a methodological standpoint, we combine nonparametric filtering with monotone convex interpolation such that the resulting forward curve is pathwise smooth and monotonic, cross-sectionally stable, and time local. From an empirical standpoint, we exhibit these features in the context of the EPEX Spot and EEX Derivative markets. We perform a backtesting analysis to assess the relative quality of our forward curve estimate compared to the benchmark market model of Benth, Koekebakker, and Ollmar (2007).
Energy finance; Forward pricing; Electricity markets; Forward curve construction;
http://www.sciencedirect.com/science/article/pii/S0377221717301224
Caldana, Ruggero
Fusai, Gianluca
Roncoroni, Andrea
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:563-5712017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:563-571
article
Loss aversion and rationality in the newsvendor problem under recourse option
The risk-neutral assumption in the newsvendor problem under recourse option predicts an order quantity that is insensitive to the selling price, while risk aversion modeled through common utility functions gives an order quantity that decreases as the selling price increases. In this paper, we consider loss aversion to model the choice preference of the decision maker in the newsvendor problem under recourse option, and prove that loss aversion predicts the rational ordering behavior of the newsvendor with respect to changes in price and cost parameters. Further, we find that loss aversion can significantly improve the performance of utility-function-based models in predicting the rational behavior. We extend the analysis to a supply chain setting and establish a coordinating contract between a loss-averse retailer facing a newsvendor problem and a risk-neutral supplier under recourse option. We find that the contract parameter does not depend on loss aversion; hence, the same contract can be implemented with retailers having different levels of loss aversion.
Inventory; Loss aversion; Newsvendor problem; Recourse option;
http://www.sciencedirect.com/science/article/pii/S0377221717301182
Vipin, B.
Amit, R.K.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:515-5292017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:515-529
article
The single machine weighted mean squared deviation problem
This paper studies a single machine problem related to the Just-In-Time (JIT) production objective, in which the goal is to minimize the sum of weighted mean squared deviations of the completion times with respect to a common due date. In order to solve the problem, several structural and dominance properties of the optimal solution are investigated. These properties are then integrated within a branch-and-cut approach to solve a time-indexed formulation of the problem. The results of a computational experiment with the proposed algorithm show that the method is able to optimally solve instances with up to 300 jobs within short running times, improving on other integer programming approaches.
Scheduling; Single machine; JIT; Branch-and-cut; Dominance properties;
http://www.sciencedirect.com/science/article/pii/S0377221717301807
Pereira, Jordi
Vásquez, Óscar C.
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:880-8922017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:880-892
article
Optimizing (s, S) policies for multi-period inventory models with demand distribution uncertainty: Robust dynamic programming approaches
This study considers a finite-horizon single-product periodic-review inventory management problem with demand distribution uncertainty. The problem is formulated as a dynamic program and the existence of an optimal (s, S) policy is proved. The corresponding dynamic robust counterpart models are then developed for the box and the ellipsoid uncertainty sets. These counterpart models are transformed into tractable linear and second-order cone programs, respectively. The effectiveness and practicality of the proposed robust optimization approaches are validated through a numerical study.
Inventory; Periodic-review (s, S) policy; Robust optimization; Demand distribution uncertainty; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221717301492
Qiu, Ruozhen
Sun, Minghe
Lim, Yun Fong
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:819-8342017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:819-834
article
A hybrid Particle Swarm Optimization – Variable Neighborhood Search algorithm for Constrained Shortest Path problems
In this paper, a well-known NP-hard problem, the Constrained Shortest Path problem, is studied. As efficient metaheuristic approaches are required for its solution, a new hybridized version of the Particle Swarm Optimization algorithm with Variable Neighborhood Search is proposed for solving this significant combinatorial optimization problem. Particle Swarm Optimization (PSO) is a population-based swarm intelligence algorithm that simulates the social behavior of organisms through the physical movements of the particles in the swarm. A Variable Neighborhood Search (VNS) algorithm is applied in order to optimize the particles’ positions. In the proposed algorithm, Particle Swarm Optimization with combined Local and Global Expanding Neighborhood Topology (PSOLGENT), a different equation for the velocities of the particles is given and a novel expanding neighborhood topology is used. Another issue in the application of the VNS algorithm to the Constrained Shortest Path problem is which local search algorithms are suitable for this problem; in this paper, a number of continuous local search algorithms are used. The algorithm is tested on a number of modified instances from the TSPLIB, and comparisons with classic versions of PSO and with other versions of the proposed method are performed. The results of the algorithm are also compared with those of a number of metaheuristic and evolutionary algorithms. The results obtained are very satisfactory and demonstrate the efficiency of the algorithm.
Particle Swarm Optimization; Variable Neighborhood Search; Expanding neighborhood topology; Constrained Shortest Path problem;
http://www.sciencedirect.com/science/article/pii/S0377221717302357
Marinakis, Yannis
Migdalas, Athanasios
Sifaleras, Angelo
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:549-5622017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:549-562
article
The directed profitable rural postman problem with incompatibility constraints
In this paper, we study a variant of the directed rural postman problem (RPP) where profits are associated with arcs to be served, and incompatibility constraints may exist between nodes and profitable arcs leaving them. If convenient, some of the incompatibilities can be removed provided that penalties are paid. The problem looks for a tour starting and ending at the depot that maximizes the difference between collected profits and total cost, the latter being the sum of traveling costs and paid penalties, while satisfying the remaining incompatibilities. The problem finds application in the domain of road transportation services, and in particular in the context of horizontal collaboration among carriers and shippers. We call this problem the directed profitable rural postman problem with incompatibility constraints. We propose two problem formulations and introduce a matheuristic procedure exploiting the presence of a variant of the generalized independent set problem (GISP) and of the directed rural postman problem (DRPP) as subproblems. Computational results show that the matheuristic is effective, in many cases outperforming the results obtained in one hour of computing time by a straightforward branch-and-cut approach implemented with IBM CPLEX 12.6.2, on instances with up to 500 nodes, 1535 arcs, 1132 profitable arcs, and 10,743 incompatibilities.
Routing; Rural postman problem; Incompatibility constraints; Generalized independent set problem;
http://www.sciencedirect.com/science/article/pii/S0377221717301078
Colombi, Marco
Corberán, Ángel
Mansini, Renata
Plana, Isaac
Sanchis, José M.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:755-7712017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:755-771
article
Generation flexibility in ramp rates: Strategic behavior and lessons for electricity market design
A ramp rate usually defines the speed at which an electric power producer can decrease or increase its production in limited time. The availability of fast-ramping generators significantly affects the economic dispatch, especially in systems with high penetration of intermittent energy sources, e.g. wind power, since fluctuations in supply are common and sometimes unpredictable. One regulatory practice intended to impel generators to reveal their true ramp rates is to separate the stages of submitting the bids on ramp rate and production. In this paper we distinguish two types of market structures: one-stage, when electric power producers decide their production and ramp rate at the same time, and two-stage, when generators decide their ramp rate first and choose their production levels at the second stage. We employ one-stage and two-stage equilibrium models, respectively, to represent these market setups and use a conjectured price response parameter ranging from perfect competition to Cournot oligopoly to investigate the effect of the market competition structure on the strategic decisions of the generators. We compare these two market setups in a symmetric duopoly case with two time periods and prove that in the two-stage market setup the level of ramp rate is independent of the strategic behavior in the spot market and generally lower than the one offered in the one-stage setup. We also show that the ramp-rate levels in the one- and two-stage models coincide under Cournot oligopoly. We extend the model to asymmetry, several load periods, portfolio bidding, and uncertainty, and show that withholding of the ramp rate still occurs in both models. Our findings prove that market regulators cannot rely on separating the decision stages alone as an effective measure to mitigate market power, and in certain cases it may lead to an adverse effect.
OR in energy; Ramp rate; Strategic decision-making; Bilevel programming; Equilibrium problems;
http://www.sciencedirect.com/science/article/pii/S0377221717301509
Moiseeva, Ekaterina
Wogrin, Sonja
Hesamzadeh, Mohammad Reza
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:450-4592017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:450-459
article
Polynomial optimization for water networks: Global solutions for the valve setting problem
This paper explores polynomial optimization techniques for two formulations of the energy conservation constraint for the valve setting problem in water networks. The sparse hierarchy of semidefinite programming relaxations is used to derive globally optimal bounds for an existing cubic and a new quadratic problem formulation. Both formulations use an approximation for friction loss that has an accuracy consistent with the experimental error of the classical equations. Solutions using the proposed approach are reported on four water networks ranging in size from 4 to 2000 nodes and are compared against a local solver (Ipopt) and a global solver (Couenne). Computational results found global solutions using both formulations, with the quadratic formulation having better time efficiency due to the reduced degree of the polynomial optimization problem and the sparsity of the constraint matrix. The approaches presented in this paper may also allow global solutions to other water network steady-state optimization problems formulated with continuous variables.
Global optimization; Polynomial optimization; Semidefinite programming; Valve setting problem; Water networks;
http://www.sciencedirect.com/science/article/pii/S0377221717301972
Ghaddar, Bissan
Claeys, Mathieu
Mevissen, Martin
Eck, Bradley J.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:666-6782017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:666-678
article
Stochastic dominance via quantile regression with applications to investigate arbitrage opportunity and market efficiency
Tests for stochastic dominance constructed by translating the inference problem of stochastic dominance into parameter restrictions in quantile regressions are proposed. They are variants of the one-sided Kolmogorov–Smirnov statistic with a limiting distribution of the standard Brownian bridge. The procedure to obtain the critical values of our proposed test statistics is provided. Simulation results show their superior size and power. They are applied to the NASDAQ 100 and S&P 500 indices to investigate dominance relationships before and after major turning points. Results show no arbitrage opportunity between the bear and bull markets. Our results suggest that markets are inefficient and that risk averters are better off investing in the bull rather than the bear market.
Quantile regression; Stochastic dominance; Brownian bridge; Internet bubble crisis; Subprime crisis;
http://www.sciencedirect.com/science/article/pii/S0377221717301923
Ng, Pin
Wong, Wing-Keung
Xiao, Zhijie
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1085-10972017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1085-1097
article
Work-education mismatch: An endogenous theory of professionalization
We model the education-workforce pipeline and offer an endogenous theory of professionalization and ever-higher degree attainment. We introduce two mechanisms that act on the education enterprise, causing the number of educated people to increase dramatically with relatively short-term changes in the job market. Using our illustrative dynamic model, we argue that the system is susceptible to small changes and the introduced self-driving growth engines are adequate to over-incentivize degree attainment. We also show that the mechanisms magnify effects of short-term recessions or technological changes, and create long-term waves of mismatch between workforce and jobs. The implication of the theory is degree inflation, magnified pressures on those with lower degrees, underemployment, and job market mismatch and inefficiency.
System dynamics; Education policy; Inefficiency; Education mismatch; Public policy;
http://www.sciencedirect.com/science/article/pii/S0377221717301856
Ghaffarzadegan, Navid
Xue, Yi
Larson, Richard C.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:540-5482017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:540-548
article
Strategic procurement in spot and forward markets considering regulation and capacity constraints
With the generalization of business-to-business electronic exchanges, online spot markets have become an important component of suppliers’ procurement strategies in their aim to increase flexibility and reduce transaction costs. In this article we analyze, both analytically and computationally, how these online spot markets interact with forward contracts as strategic procurement tools. We consider non-storable commodity markets in which the suppliers have market power. We derive the equations describing the equilibrium of this game considering capacity constraints and regulation. We show that price caps increase forward trading and we analyze the conditions under which, in the capacitated model, some suppliers can buy forward to sell spot. Furthermore, we prove that inefficient producers continue to operate in the market as arbitrageurs, selling forward and buying spot. We model the game with asymmetric suppliers, identifying the situations in which it is well defined, and describing how these asymmetries are important for market equilibrium. Finally, we analyze a game with multiple sequential forward contracts: we prove that, when suppliers readjust their forward positions until the start of the spot market, the number of time periods (i.e., market liquidity) affects neither the suppliers’ strategic procurement nor market efficiency.
Supply chain management; Forward contracts; Oligopoly; Procurement; Regulation; Spot markets;
http://www.sciencedirect.com/science/article/pii/S0377221717301029
Oliveira, Fernando S.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:640-6552017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:640-655
article
Efficiency measures and computational approaches for data envelopment analysis models with ratio inputs and outputs
In a recent paper in this journal, the authors developed a methodology that allows the incorporation of ratio inputs and outputs in the variable and constant returns-to-scale DEA models. Practical evaluation of the efficiency of decision making units (DMUs) in such models generally goes beyond the application of standard linear programming techniques. In this paper we discuss how the DEA models with ratio measures can be solved. We also introduce a new type of potential ratio (PR) inefficiency. It characterizes DMUs that are strongly efficient in the model of technology with ratio measures but become inefficient if the volume data used to calculate ratio measures become available. Potential ratio inefficiency can be tested by the programming approaches developed in this paper.
Data envelopment analysis; Ratio measures; Efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221717301431
Olesen, Ole Bent
Petersen, Niels Christian
Podinovski, Victor V.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:584-5942017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:584-594
article
Optimal spares allocation to an exchangeable-item repair system with tolerable wait
In a multi-location, exchangeable-item repair system with stochastic demand, the expected waiting time and the fill rate measures are oftentimes used as the optimization criteria for the spares allocation problem. These measures, however, do not take into account that customers will tolerate a reasonable delay and therefore, a firm does not incur reputation costs if customers wait less than their tolerable wait. Accordingly, we generalize the expected waiting time and fill rate measures to reflect customer patience. These generalized measures are termed the truncated waiting time and the window fill rate, respectively. We develop efficient algorithms to solve the problem for each of the criteria and demonstrate how incorporating customer patience provides considerable savings and profoundly affects the optimal spares allocation.
Inventory; Logistics; Truncated waiting time; Window fill rate; Optimization criteria;
http://www.sciencedirect.com/science/article/pii/S0377221717301534
Dreyfuss, Michael
Giat, Yahel
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1170-11882017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1170-1188
article
Bank branch efficiency under environmental change: A bootstrap DEA on monthly profit and loss accounting statements of Greek retail branches
The objective of this study is to measure the efficiency change of bank branches under external environment deterioration. In particular, we utilize a bootstrap input-oriented profit DEA and investigate homogeneous and heterogeneous branches according to branch size and location to measure efficiency change by contrasting expansion, recession and capital control effects that constitute a unique phenomenon in the postwar period in the Eurozone. Our primary research explicitly focuses on the whole retail network of a Greek systemic bank based on unpublished monthly branch Profit and Loss statements and covers the period from January 2006 to July 2016. We find that the early and deep recession reduces branch network efficiency on average. The imposition of capital controls (end-month June 2015) initially causes marginal effects, with a subsequent efficiency improvement in the first seven months of 2016 when economic conditions are normalized. The paper documents that branch size and location matter. On the whole, we capture efficiency deterioration in the long run, contrary to recent European evidence. Apart from the efficiency measurement over time, we provide directions to bank management for performance improvement in the capital control period. More specifically, a bootstrap DEA-based Decision Tree classification exactly quantifies for the first time a potential upgrading of underperforming branches, and a second-stage bootstrap DEA regression locates important efficiency drivers, such as the diversification of income and the deposit-oriented activity, that could improve efficiency of the total retail network.
Capital controls; Retail branches; Bootstrap DEA; Integrated bootstrap DEA-based DT classification; OR in banking;
http://www.sciencedirect.com/science/article/pii/S0377221717301959
Aggelopoulos, Eleftherios
Georgopoulos, Antonios
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:941-9572017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:941-957
article
Locating alternative-fuel refueling stations on a multi-class vehicle transportation network
The existing literature regarding the location of alternative fuel (AF) refueling stations in transportation networks generally assumes that all vehicles are capable of traveling the same driving range and have similar levels of fuel in their tanks at the moment they enter the network and when they exit it. In this article, we relax these assumptions and introduce a multi-class vehicle transportation network in which vehicles have different driving ranges and fuel tank levels at their origins and destinations. A 0-1 linear programming model is proposed for locating a given number of refueling stations that maximize the total traffic flow covered (in round trips per time unit) by the stations on the network. Through numerical experiments with the 2011 medium- and heavy-duty truck traffic data in the Pennsylvania Turnpike, we identify the optimal sets of refueling stations for AF trucks on a multi-class vehicle transportation network.
Location; Refueling infrastructure; Transportation networks; Multi-class alternative fuel vehicles; 0-1 linear programming model;
http://www.sciencedirect.com/science/article/pii/S0377221717301583
Hwang, Seong Wook
Kweon, Sang Jin
Ventura, Jose A.
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1066-10842017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1066-1084
article
A parallel multi-objective scatter search for optimising incentive contract design in projects
We present a novel optimisation approach for incentive contract design within a project setting. The structure of the remuneration is one of the key challenges faced by the project owner when (s)he decides to hire a contractor. The proposed technique builds on the recently proposed contract design methodology by Kerkhove and Vanhoucke (Omega, 2015). Specifically, a novel multi-objective scatter search heuristic is proposed and implemented using parallelisation. Both single- and multi-population implementations of this heuristic are compared to the original full-factorial approach as well as commercial optimisation software. The results of the computational experiments indicate that the single-population parallel scatter search procedure significantly outperforms the other solution strategies in terms of both speed and solution quality.
Project management; Contracting; Multi-objective optimisation; Scatter search; Parallel processing;
http://www.sciencedirect.com/science/article/pii/S037722171730187X
Kerkhove, L.-P.
Vanhoucke, M.
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:789-7992017-05-11RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:789-799
article
On the inefficiency of the merit order in forward electricity markets with uncertain supply
This paper provides insight into the economic inefficiency of the classical merit-order dispatch in electricity markets with uncertain supply. For this, we consider a power system whose operation is driven by a two-stage electricity market, with a forward and a real-time market. We analyze two different clearing mechanisms: a conventional one, whereby the forward and the balancing markets are independently cleared following a merit order, and a stochastic one, whereby both market stages are co-optimized with a view to minimizing the expected aggregate system operating cost. We first derive analytical formulae to determine the dispatch rule prompted by the co-optimized two-stage market for a stylized power system with flexible, inflexible and stochastic power generation and infinite transmission capacity. This exercise sheds light on the conditions for the stochastic market-clearing mechanism to break the merit order. We then introduce and characterize two enhanced variants of the conventional two-stage market that result in either price-consistent or cost-efficient merit-order dispatch solutions, respectively. The first of these variants corresponds to a conventional two-stage market that allows for virtual bidding, while the second requires that the stochastic power production be centrally dispatched. Finally, we discuss the practical implications of our analytical results and illustrate our conclusions through examples.
OR in energy; Electricity market; Uncertain supply; Merit order; Market-clearing mechanism;
http://www.sciencedirect.com/science/article/pii/S0377221717301558
Morales, Juan M.
Pineda, Salvador
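The merit-order rule the paper takes as its baseline, dispatching the cheapest offers first until demand is covered, can be sketched in a few lines; the offer data below are hypothetical.

```python
def merit_order_dispatch(offers, demand):
    """Dispatch cheapest offers first until demand is covered.

    offers: list of (marginal_cost, capacity) tuples.
    Returns (dispatch dict keyed by offer index, total cost).
    """
    dispatch, cost, remaining = {}, 0.0, demand
    for idx, (mc, cap) in sorted(enumerate(offers), key=lambda t: t[1][0]):
        q = min(cap, remaining)
        if q <= 0:
            break
        dispatch[idx] = q
        cost += mc * q
        remaining -= q
    return dispatch, cost

# Three hypothetical units: an expensive flexible unit, a cheap base
# unit and a mid-cost unit, serving a demand of 180.
d, c = merit_order_dispatch([(40, 50), (10, 100), (25, 60)], demand=180)
# d == {1: 100, 2: 60, 0: 20}, c == 3300
```

The paper's point is precisely that, under uncertain supply, the cost-minimizing stochastic two-stage clearing need not reproduce this ordering.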
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:690-703 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:690-703
article
Dominance-based rough fuzzy set approach and its application to rule induction
The theories of fuzzy sets and rough sets are related, complementary methodologies for handling the uncertainty of vagueness and coarseness, respectively. Combining the two yields the hybrid notion of rough fuzzy sets, which provides a more accurate account of imperfect information. In this paper, we focus on ordered fuzzy decision systems, where condition criteria are preference-ordered and decision classes are not only ordered but also fuzzy. First, the dominance-based rough fuzzy approximations of an upward or downward cumulated fuzzy set are introduced in ordered fuzzy decision systems. Second, lower and upper reducts relative to a certain cumulated fuzzy set are proposed to eliminate redundant criteria from the system. Then, two approaches to attribute reduction are presented, based on the discernibility matrix and on a heuristic strategy, respectively. Decision rules are also extracted directly from these approximations, and some applicable, simplified decision rules are obtained according to the requirements of decision makers. Finally, a case study in bankruptcy risk analysis illustrates the mechanism of the proposed methods.
Ordered fuzzy decision systems; Dominance-based rough fuzzy sets; Cumulated fuzzy sets; Attribute reduction; Rule induction;
http://www.sciencedirect.com/science/article/pii/S0377221716310232
Du, Wen Sheng
Hu, Bao Qing
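As an illustration of dominance-based approximation, the sketch below uses one common DRSA-style reading: the lower approximation of an upward fuzzy set assigns each object the worst membership among the objects dominating it, and the upper approximation the best membership among the objects it dominates. The paper's exact definitions for cumulated fuzzy sets may differ, and the objects and membership values are hypothetical.

```python
def dominates(y, x):
    """y dominates x if y is at least as good on every criterion."""
    return all(a >= b for a, b in zip(y, x))

def rough_fuzzy_approx(objects, mu):
    """Dominance-based rough fuzzy approximation of an upward fuzzy set.

    objects: dict name -> tuple of criterion values (larger is better).
    mu: dict name -> fuzzy membership in the upward cumulated set.
    """
    lower, upper = {}, {}
    for x, vx in objects.items():
        dom_plus = [mu[y] for y, vy in objects.items() if dominates(vy, vx)]
        dom_minus = [mu[y] for y, vy in objects.items() if dominates(vx, vy)]
        lower[x] = min(dom_plus)   # worst case over dominating objects
        upper[x] = max(dom_minus)  # best case over dominated objects
    return lower, upper

# "a" scores better than "b" on both criteria yet has a lower
# membership than "b" -- an inconsistency the approximations expose.
objects = {"a": (3, 3), "b": (2, 2), "c": (1, 1)}
mu = {"a": 0.9, "b": 1.0, "c": 0.2}
low, up = rough_fuzzy_approx(objects, mu)
```

Here the lower approximation of "b" drops to 0.9 and the upper approximation of "a" rises to 1.0, reflecting the dominance inconsistency between the two.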
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:835-848 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:835-848
article
Extended GRASP for the job shop scheduling problem with total weighted tardiness objective
The paper proposes a heuristic for the job shop scheduling problem with the objective of minimizing the total weighted tardiness of jobs. It builds on the well-known GRASP metaheuristic, strengthened by the inclusion of specific local search components. The design is based on an advanced disjunctive graph model that captures solution schedules through a tree graph called a critical tree. The critical tree allows effective steering of a first-descent search algorithm, which further incorporates powerful neighborhood operators and a fast move evaluation procedure based on heads updating. Additionally, amplification and path relinking are adaptively applied to the best schedules discovered. We present computational results of the new heuristic on two well-known sets of benchmark instances, identify ten new best solutions, and demonstrate the high potential of the approach through a comparison with state-of-the-art methods.
Scheduling; GRASP; Disjunctive graph; Local search; Heads updating;
http://www.sciencedirect.com/science/article/pii/S0377221717302345
Bierwirth, C.
Kuhpfahl, J.
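The construct-then-improve loop of GRASP can be sketched on a deliberately simplified single-machine weighted-tardiness relaxation; the disjunctive-graph, critical-tree, and path-relinking components of the paper are omitted, and the restricted-candidate-list rule, the alpha parameter, and the job data are illustrative assumptions.

```python
import random

def grasp_weighted_tardiness(jobs, alpha=0.5, restarts=20, seed=1):
    """GRASP sketch on a single-machine weighted-tardiness relaxation.

    jobs: list of (processing_time, due_date, weight).
    Construction: pick randomly from a restricted candidate list ranked
    by due-date urgency per unit weight; improvement: first-descent
    local search over adjacent swaps.
    """
    rng = random.Random(seed)

    def cost(seq):
        t, total = 0, 0
        for p, d, w in seq:
            t += p
            total += w * max(0, t - d)
        return total

    best = None
    for _ in range(restarts):
        # Greedy randomized construction.
        pool, seq = list(jobs), []
        while pool:
            pool.sort(key=lambda j: j[1] / j[2])  # urgent, heavy jobs first
            rcl = pool[:max(2, int(alpha * len(pool)))]
            pick = rng.choice(rcl)
            pool.remove(pick)
            seq.append(pick)
        # First-descent local search over adjacent swaps.
        improved = True
        while improved:
            improved = False
            for i in range(len(seq) - 1):
                cand = seq[:]
                cand[i], cand[i + 1] = cand[i + 1], cand[i]
                if cost(cand) < cost(seq):
                    seq, improved = cand, True
                    break
        if best is None or cost(seq) < cost(best):
            best = seq
    return best, cost(best)

seq, c = grasp_weighted_tardiness([(3, 4, 2), (2, 2, 1), (4, 12, 3)])
```

Larger alpha values widen the candidate list, trading greediness for diversification across restarts.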
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:918-928 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:918-928
article
An efficient algorithm for the 2-level capacitated lot-sizing problem with identical capacities at both levels
We present a polynomial time algorithm for solving the 2-level production-in-series lot-sizing problem with capacities at both levels. At each level, we consider a fixed setup cost together with linear production and holding costs. We assume that capacities are stationary and identical at both levels. We introduce a new cost structure, called path non-speculative, generalizing the classical non-speculative cost structure. We show that under this cost structure the problem can be solved in time complexity O(T^5), where T is the number of periods of the planning horizon. When the cost structure follows the classical non-speculative motives, the time complexity is reduced to O(T^3).
Combinatorial optimization; Capacitated lot sizing; Two-echelon; Dynamic programming; Non-speculative motives;
http://www.sciencedirect.com/science/article/pii/S0377221717301467
Goisque, Guillaume
Rapine, Christophe
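The 2-level capacitated recursion itself is too involved for a short sketch, but the flavor of lot-sizing dynamic programming, trading one setup cost against the holding cost of producing ahead, can be shown on the classical uncapacitated single-level case (a Wagner-Whitin-style O(T^2) recursion; this is a baseline illustration, not the paper's algorithm, and the data are hypothetical).

```python
def wagner_whitin(demand, setup, hold):
    """Uncapacitated single-level lot-sizing DP, O(T^2).

    f[t] = minimal cost to cover demand for periods 0..t-1.
    A setup in period j covers demand for periods j..t-1, paying a
    holding cost hold per unit per period for units carried forward.
    """
    T = len(demand)
    f = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(t):  # last setup placed in period j
            carry = sum(hold * (k - j) * demand[k] for k in range(j, t))
            f[t] = min(f[t], f[j] + setup + carry)
    return f[T]

total = wagner_whitin([20, 30, 0, 40], setup=50, hold=1)
```

For this instance it is cheapest to batch the first two periods together (paying 30 in holding) and produce period 4's demand separately, giving a total cost of 130.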
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1158-1169 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1158-1169
article
Capacity decisions with debt financing: The effects of agency problem
This paper studies the capacity management problem for a firm that uses debt financing, analyzing the effect of the associated agency problem on capacity decisions. The agency problem arises when there are potential conflicts of interest between the firm owner and the lender. We show that this agency problem can constrain the firm's optimal capacity decision, because the borrowing rate increases as the risk of default rises with the capacity level chosen. The firm will therefore choose the capacity level so as to reduce the risk of bankruptcy, which the lender takes into account, thereby controlling the risk associated with potentially high borrowing costs. However, even when the expected bankruptcy cost is carefully controlled, the optimal capacity decision is still made at the risk of incurring considerable agency costs. In addition, the corporate tax level can play a significant role in the capacity choice. We show that although a higher tax rate leads to a bigger tax benefit of debt and a lower agency cost, it also gives rise to a higher tax liability. After balancing the tax benefit of debt against the agency cost, the firm can make an optimal decision on the required capacity level. The efficacy of financial hedging for mitigating the agency cost is also analyzed. Finally, we compare and contrast our analysis with existing studies, obtaining deeper insight into the problem.
Decision analysis; Capacity decision; Debt financing; Conflicts of interest; Agency cost;
http://www.sciencedirect.com/science/article/pii/S0377221717301868
Ni, Jian
Chu, Lap Keung
Li, Qiang
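The central trade-off, a borrowing rate that rises with the capacity level because the lender prices in default risk, can be caricatured in a stylized numeric sketch. This is not the paper's model: the linear risk premium, the newsvendor-style revenue, and all numbers are hypothetical assumptions.

```python
def optimal_capacity(price, unit_cost, base_rate, risk_slope, demands, probs):
    """Stylized capacity choice under debt financing (illustrative only).

    The investment unit_cost * K is fully debt-financed, and the
    borrowing rate base_rate + risk_slope * K rises with capacity to
    mimic a lender pricing higher default risk. Expected revenue is
    price * E[min(demand, K)]. Grid search over integer capacities.
    """
    def expected_profit(K):
        sales = sum(p * min(d, K) for d, p in zip(demands, probs))
        debt = unit_cost * K
        return price * sales - debt * (1 + base_rate + risk_slope * K)

    return max(range(0, max(demands) + 1), key=expected_profit)

# Demand is 50 or 100 with equal probability; the rising borrowing
# cost makes it optimal to cover only the low-demand scenario.
k_star = optimal_capacity(10, 4, 0.05, 0.01, [50, 100], [0.5, 0.5])
```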
oai:RePEc:eee:ejores:v:261:y:2017:i:2:p:772-788 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:2:p:772-788
article
Optimization approaches to Supervised Classification
The Supervised Classification problem, one of the oldest and most recurrent problems in applied data analysis, has always been analyzed from many different perspectives. When the emphasis is placed on its overall goal of developing classification rules with minimal classification cost, Supervised Classification can be understood as an optimization problem. On the other hand, when the focus is on modeling the uncertainty involved in classifying future unknown entities, it can be formulated as a statistical problem. Other perspectives that pay particular attention to the pattern recognition and machine learning aspects of Supervised Classification also have a long history that has led to influential insights and different methodologies.
Multivariate statistics; Discriminant analysis; Mathematical programming; Support vector machines;
http://www.sciencedirect.com/science/article/pii/S037722171730142X
Pedro Duarte Silva, A.
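A minimal instance of the optimization view, choosing a classification rule by iteratively driving down misclassifications, is the classical perceptron; the method and the toy data below are illustrative and not drawn from the paper.

```python
def perceptron(X, y, epochs=100):
    """Train a linear classifier by iteratively correcting errors.

    X: list of feature tuples; y: labels in {-1, +1}.
    Returns (weights, bias) separating the classes when the data are
    linearly separable and epochs suffice.
    """
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Misclassified point: nudge the hyperplane toward it.
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                errors += 1
        if errors == 0:
            break
    return w, b

# Hypothetical linearly separable data.
X = [(2, 1), (3, 2), (-1, -2), (-2, -1)]
y = [1, 1, -1, -1]
w, b = perceptron(X, y)
```

Support vector machines refine this optimization view by also maximizing the margin of the separating hyperplane.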
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1013-1027 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1013-1027
article
Dynamic theory of losses in wars and conflicts
We present a new theory for the dynamic evolution of losses incurred in combat, which is verified using available published data from WW1, WW2 and later conflicts.
OR in defense; Dynamic loss rates; Risk management; Learning theory; Warfare;
http://www.sciencedirect.com/science/article/pii/S0377221717302679
Duffey, Romney B
oai:RePEc:eee:ejores:v:261:y:2017:i:3:p:1028-1051 2017-05-11 RePEc:eee:ejores
RePEc:eee:ejores:v:261:y:2017:i:3:p:1028-1051
article
A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization
In real-world applications, many fields involve dynamic multi-objective optimization problems (DMOPs), in which objectives conflict with each other and change over time or with the environment. In this paper, a modified coevolutionary multi-swarm particle swarm optimizer is proposed to solve DMOPs in rapidly changing environments (denoted CMPSODMO). A multi-swarm particle swarm optimization framework is adopted to optimize the problem in dynamic environments. In CMPSODMO, the number of swarms is determined by the number of objective functions, and all swarms evolve cooperatively through an information-sharing strategy. Moreover, a new velocity update equation and an effective boundary constraint technique are developed for the evolution of each swarm. A similarity detection operator is then used to detect whether a change has occurred, followed by a memory-based dynamic mechanism to respond to the change. The proposed CMPSODMO has been extensively compared with five state-of-the-art algorithms on a test suite of benchmark problems. Experimental results indicate that the proposed algorithm is promising for dealing with DMOPs in rapidly changing environments.
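The core particle swarm step, the canonical velocity update with inertia, cognitive, and social terms plus a simple boundary clamp, can be sketched for a single swarm; the coevolutionary multi-swarm coordination, change detection, and memory mechanisms of CMPSODMO are omitted, and the objective and all parameters are illustrative assumptions.

```python
import random

def pso_minimize(objective, dim, n_particles=15, iters=60, seed=0):
    """Single-swarm PSO sketch with the standard velocity update:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    plus a clamp-and-reset boundary constraint.
    """
    rng = random.Random(seed)
    lo, hi, w, c1, c2 = -5.0, 5.0, 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][j] = (w * vs[i][j]
                            + c1 * r1 * (pbest[i][j] - x[j])
                            + c2 * r2 * (gbest[j] - x[j]))
                x[j] += vs[i][j]
                # Boundary constraint: clamp position, zero the velocity.
                if not lo <= x[j] <= hi:
                    x[j] = max(lo, min(hi, x[j]))
                    vs[i][j] = 0.0
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
            if objective(x) < objective(gbest):
                gbest = x[:]
    return gbest

best = pso_minimize(lambda v: sum(t * t for t in v), dim=2)
```

A multi-swarm coevolutionary variant would run one such swarm per objective and share best-known positions across swarms, re-seeding from memory when a change is detected.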