2016-10-26T07:37:52Z
http://oai.repec.org/oai.php
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198
article
Exact and heuristic algorithms for the design of hub networks with multiple lines
In this paper we study a hub location problem in which the hubs to be located must form a set of interconnecting lines. The objective is to minimize the total weighted travel time between all pairs of nodes while taking into account a budget constraint on the total set-up cost of the hub network. A mathematical programming formulation, a Benders-branch-and-cut algorithm and several heuristic algorithms, based on variable neighborhood descent, greedy randomized adaptive search, and adaptive large neighborhood search, are presented and compared to solve the problem. Numerical results on two sets of benchmark instances with up to 70 nodes and three lines confirm the efficiency of the proposed solution algorithms.
Hub location; Hub-and-spoke networks; Lines; Network design;
http://www.sciencedirect.com/science/article/pii/S0377221715003100
Martins de Sá, Elisangela
Contreras, Ivan
Cordeau, Jean-François
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673
article
A model enhancement heuristic for building robust aircraft maintenance personnel rosters with stochastic constraints
This paper presents a heuristic approach to optimize staffing and scheduling at an aircraft maintenance company. The goal is to build robust aircraft maintenance personnel rosters that can achieve a certain service level while minimizing the total labor costs. Robust personnel rosters are rosters that can handle delays associated with stochastic flight arrival times. To deal with this stochasticity, a model enhancement algorithm is proposed that iteratively adjusts a mixed integer linear programming (MILP) model to a stochastic environment based on simulation results. We illustrate the performance of the algorithm with a computational experiment based on real-life data from a large aircraft maintenance company located at Brussels Airport in Belgium. The results are compared with those of deterministic and straightforward optimization. Experiments demonstrate that our model can ensure a desired service level with an acceptable increase in labor costs when stochasticity is introduced into the aircraft arrival times.
Model enhancement; Aircraft maintenance; Stochastic optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500380X
De Bruecker, Philippe
Van den Bergh, Jorne
Beliën, Jeroen
Demeulemeester, Erik
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169
article
Ant colony optimization based binary search for efficient point pattern matching in images
Point Pattern Matching (PPM) is the task of pairing up the points in two images of the same scene. Many approaches to point pattern matching exist in the literature; however, their drawback lies in the high complexity of the algorithms. To overcome this drawback, an Ant Colony Optimization based Binary Search Point Pattern Matching (ACOBSPPM) algorithm is proposed. In this approach, the edges of the image are stored in the form of point patterns. To match an incoming image with the stored images, the ant agent chooses a point value in the incoming image point pattern and employs binary search to find a match among the point values of the stored image point pattern chosen for comparison. Once a match occurs, the ant agent searches for a match for the next point value of the incoming image point pattern between the matching position and the last point value of the stored image point pattern. The stored image point pattern with the maximum number of matches is taken as the image matching the incoming one. Experimental results show that the ACOBSPPM algorithm is efficient compared with existing point pattern matching approaches in terms of time complexity and precision.
Decision support systems; Image recognition; Point pattern matching; Ant Colony Optimization; Binary search;
http://www.sciencedirect.com/science/article/pii/S0377221715002842
Sreeja, N.K.
Sankar, A.
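The binary-search matching loop described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes point patterns are encoded as sortable scalar point values, and the function names and tolerance parameter are hypothetical.

```python
import bisect

def count_matches(incoming, stored, tol=0.0):
    """Count how many point values of `incoming` can be matched, in order,
    against the sorted pattern `stored` via binary search. After a match,
    the search for the next point resumes from the matching position."""
    stored = sorted(stored)
    lo = 0                       # search window starts after the last match
    matches = 0
    for p in incoming:
        i = bisect.bisect_left(stored, p - tol, lo)
        if i < len(stored) and abs(stored[i] - p) <= tol:
            matches += 1
            lo = i + 1
    return matches

def best_match(incoming, stored_patterns):
    """The stored pattern with the maximum number of matches is taken
    as the image matching the incoming one."""
    return max(stored_patterns, key=lambda s: count_matches(incoming, s))
```

Restarting each binary search at the previous matching position is what keeps one full comparison near O(n log n) per stored pattern, the apparent source of the claimed efficiency gain.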
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516
article
Control of Condorcet voting: Complexity and a Relation-Algebraic approach
We study the constructive variant of the control problem for Condorcet voting, where control is exercised by deleting voters. We prove that this problem remains NP-hard if, instead of Condorcet winners, the alternatives in the uncovered set win. Furthermore, we present a relation-algebraic model of Condorcet voting and relation-algebraic specifications of the dominance relation and of the solutions of the control problem. All our relation-algebraic specifications can immediately be translated into the programming language of the OBDD-based computer system RelView. Our approach is very flexible, especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other voting rules and control problems.
Artificial intelligence; Condorcet voting; Control problem; Uncovered set; Relation algebra;
http://www.sciencedirect.com/science/article/pii/S0377221715003185
Berghammer, Rudolf
Schnoor, Henning
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43
article
An accelerated branch-and-price algorithm for multiple-runway aircraft sequencing problems
This paper presents an effective branch-and-price (B&P) algorithm for multiple-runway aircraft sequencing problems. This approach improves the tractability of the problem by several orders of magnitude when compared with solving a classical 0–1 mixed-integer formulation over a set of computationally challenging instances. Central to the computational efficacy of the B&P algorithm is solving the column generation subproblem as an elementary shortest path problem with aircraft time-windows and non-triangular separation times using an enhanced dynamic programming procedure. We underscore in our computational study the algorithmic features that contribute, in our experience, to accelerating the proposed dynamic programming procedure and, hence, the overall B&P algorithm.
Aircraft sequencing; Branch-and-price; Column generation; Dynamic programming; Elementary shortest path problems;
http://www.sciencedirect.com/science/article/pii/S0377221715003124
Ghoniem, Ahmed
Farhadi, Farbod
Reihaneh, Mohammad
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139
article
Efficient analysis of the MMAP[K]/PH[K]/1 priority queue
In this paper we consider the MMAP[K]/PH[K]/1 priority queue, in both the preemptive resume and the non-preemptive service case. The main idea of the presented analysis procedure is that the sojourn time of the low-priority jobs in the preemptive case (and the waiting time distribution in the non-preemptive case) can be represented by the duration of the busy period of a special Markovian fluid model. Making use of recent results on the busy period analysis of Markovian fluid models, it is possible to calculate several queueing performance measures efficiently, including the sojourn time distribution (both in the time domain and in the Laplace transform domain), the moments of the sojourn time, the generating function of the queue length, the queue length moments, and the queue length probabilities.
Queueing; Preemptive resume priority queue; Non-preemptive priority queue; Matrix-analytic methods;
http://www.sciencedirect.com/science/article/pii/S0377221715001976
Horváth, Gábor
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153
article
A noisy principal component analysis for forward rate curves
Principal Component Analysis (PCA) is the most common nonparametric method for estimating the volatility structure of Gaussian interest rate models. One major difficulty in the estimation of these models is the fact that forward rate curves are not directly observable from the market, so that non-trivial observational errors arise in any statistical analysis. In this work, we point out that classical PCA is not suitable for estimating factors of forward rate curves due to the presence of measurement errors induced by market microstructure effects and numerical interpolation. Our analysis indicates that PCA based on the long-run covariance matrix is capable of extracting the true covariance structure of the forward rate curves in the presence of observational errors. Moreover, it provides a significant reduction in the pricing errors due to the noisy data typically found in forward rate curves.
Finance; Pricing; Principal component analysis; Term-structure of interest rates; HJM models;
http://www.sciencedirect.com/science/article/pii/S0377221715003318
Laurini, Márcio Poletti
Ohashi, Alberto
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434
article
Bi-objective multi-mode project scheduling under risk aversion
The paper proposes a model for stochastic multi-mode resource-constrained project scheduling under risk aversion with the two objectives makespan and cost. Activity durations and costs are assumed to be uncertain and are modeled as random variables. For the scheduling part of the decision problem, the class of early-start policies is considered. In addition to the schedule, the assignment of execution modes to activities has to be selected. To take risk aversion into account, the approach of optimization under multivariate stochastic dominance constraints, recently developed in other fields, is adopted. For the resulting bi-objective stochastic integer programming problem, the Pareto frontier is determined by means of an exact solution method incorporating a branch-and-bound technique based on the forbidden set branching scheme from stochastic project scheduling. Randomly generated test instances, partially derived from a test case from the PSPLIB, are used to show the computational feasibility of the approach.
Project scheduling; Multi-objective optimization; Stochastic optimization; Risk aversion; Stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S0377221715003768
Gutjahr, Walter J.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306
article
Simulation-optimization approaches for water pump scheduling and pipe replacement problems
Network operation and rehabilitation are major concerns for water utilities due to their impact on providing a reliable and efficient service. Solving the optimization problems that arise in water networks is challenging mainly due to the nonlinearities inherent in the physics and the often binary nature of decisions. In this paper, we consider the operational problem of pump scheduling and the design problem of leaky pipe replacement. New approaches for these problems based on simulation-optimization are proposed as solution methodologies. For the pump scheduling problem, a novel decomposition technique uses solutions from a simulation-based sub-problem to guide the search. For the leaky pipe replacement problem a knapsack-based heuristic is applied. The proposed solution algorithms are tested and detailed results for two networks from the literature are provided.
Pump scheduling; Pipe replacement; Water networks; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003215
Naoum-Sawaya, Joe
Ghaddar, Bissan
Arandia, Ernesto
Eck, Bradley
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420
article
On heuristic solutions for the stochastic flowshop scheduling problem
We address the problem of scheduling jobs in a permutation flowshop when their processing times adopt a given distribution (stochastic flowshop scheduling problem) with the objective of minimizing the expected makespan. For this problem, optimal solutions exist only for very specific cases. Consequently, some heuristics have been proposed in the literature, all of them with similar performance. In our paper, we first focus on the critical issue of estimating the expected makespan of a sequence and find that, for instances with medium/large variability (expressed as the coefficient of variation of the processing times of the jobs), the number of samples or simulation runs usually employed in the literature may not be sufficient to derive robust conclusions about the performance of the different heuristics. We thus propose a procedure with a variable number of iterations that ensures that the percentage error in the estimation of the expected makespan is bounded with a very high probability. Using this procedure, we test the main heuristics proposed in the literature and find significant differences in their performance, in contrast with existing studies. We also find that the deterministic counterpart of the most efficient heuristic for the stochastic problem performs extremely well for most settings, which indicates that, in some cases, solving the deterministic version of the problem may produce competitive solutions for the stochastic counterpart.
Scheduling; Flowshop; Stochastic; Makespan objective; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715003781
Framinan, Jose M.
Perez-Gonzalez, Paz
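The variable-iteration estimation procedure described in this abstract can be sketched as a confidence-interval stopping rule. This is a minimal sketch under assumptions not stated in the abstract: lognormal processing times, a normal-approximation stopping rule, and the particular constants and function names are all illustrative, not the authors' specification.

```python
import math
import random
import statistics

def simulate_makespan(sequence, mean_times, cv, rng):
    """One run: sample lognormal processing times with the given coefficient
    of variation (an illustrative choice) and compute the permutation
    flowshop makespan by the standard completion-time recursion."""
    n_machines = len(mean_times[0])
    completion = [0.0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            mu = mean_times[job][m]
            s2 = math.log(1.0 + cv * cv)
            p = rng.lognormvariate(math.log(mu) - s2 / 2.0, math.sqrt(s2))
            prev = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], prev) + p
    return completion[-1]

def expected_makespan(sequence, mean_times, cv, rel_err=0.01, z=2.58,
                      min_runs=100, max_runs=100_000, seed=1):
    """Keep sampling makespans until the confidence-interval half-width
    drops below rel_err times the current mean estimate."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < max_runs:
        samples.append(simulate_makespan(sequence, mean_times, cv, rng))
        if len(samples) >= min_runs:
            mean = statistics.fmean(samples)
            half = z * statistics.stdev(samples) / math.sqrt(len(samples))
            if half <= rel_err * mean:
                break
    return statistics.fmean(samples), len(samples)
```

The number of runs thus grows with the coefficient of variation of the processing times, which is the point the abstract makes: a fixed sample size that suffices at low variability can be far too small at medium/large variability.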
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292
article
Optimal firm growth under the threat of entry
The paper studies the incumbent-entrant problem in a fully dynamic setting. We find that under an open-loop information structure the incumbent anticipates entry by overinvesting, whereas in the Markov perfect equilibrium the incumbent slightly underinvests in the period before the entry. The entry cost level at which entry accommodation turns into entry deterrence is lower in the Markov perfect equilibrium. Further, we find that the incumbent’s capital stock level needed to deter entry is hump-shaped as a function of the entry time, whereas the corresponding entry cost, at which the entrant is indifferent between entry and non-entry, is U-shaped.
Economics; Game theory; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003239
Kort, Peter M.
Wrzaczek, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504
article
Stochastic inflow modeling for hydropower scheduling problems
We introduce a new stochastic model for inflow time series that is designed with the requirements of hydropower scheduling problems in mind. The model is an “iterated function system”: it models inflow as continuous, but the random innovation at each time step has a discrete distribution. With this inflow model, hydro-scheduling problems can be solved by the stochastic dual dynamic programming (SDDP) algorithm exactly as posed, without the additional sampling error introduced by sample average approximations. The model is fitted to univariate inflow time series by quantile regression. We consider various goodness-of-fit metrics for the new model and some alternatives to it, including performance in an actual hydro-scheduling problem. The numerical data used are for inflows to New Zealand hydropower reservoirs.
OR in energy; Hydro-thermal scheduling; Stochastic dual dynamic programming; Time series; Quantile regression;
http://www.sciencedirect.com/science/article/pii/S0377221715004129
Pritchard, Geoffrey
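An “iterated function system” of the kind this abstract describes can be sketched as follows: the inflow state evolves continuously, but at each step the innovation is a draw from a finite set of response functions (fitted by quantile regression in the paper). The affine branch functions and uniform branch probabilities below are illustrative assumptions, not the fitted model.

```python
import random

def simulate_inflow(x0, branch_funcs, n_steps, seed=0):
    """Simulate an inflow path from an iterated function system: at each
    step one of finitely many functions is chosen at random and applied to
    the current inflow, so the innovation is discrete while the inflow
    itself remains continuous-valued."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        f = rng.choice(branch_funcs)  # discrete innovation
        x = f(x)
        path.append(x)
    return path

# Three hypothetical mean-reverting branches (stand-ins for fitted
# quantile curves); `q=q` binds each level at definition time.
branches = [lambda x, q=q: 0.8 * x + q for q in (5.0, 10.0, 20.0)]
```

Because the innovation support is finite, the per-stage scenario set is exactly the set of branches, which is what allows SDDP to solve the problem as posed, without a sample average approximation.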
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495
article
A direct search method for unconstrained quantile-based simulation optimization
Simulation optimization has gained popularity over the decades because of its ability to solve many practical problems that involve profound randomness. The methodology development of simulation optimization, however, is largely concerned with problems whose objective function is a mean-based performance metric. In this paper, we propose a direct search method to solve unconstrained simulation optimization problems with quantile-based objective functions. Because the proposed method does not require gradient estimation in the search process, it can be applied to many practical problems where the gradient of the objective function does not exist or is difficult to estimate. We prove that the proposed method possesses a desirable convergence guarantee, i.e., the algorithm converges to the true global optimum with probability one. An extensive numerical study shows that the performance of the proposed method is promising. Two illustrative examples are provided at the end to demonstrate the viability of the proposed method in real settings.
Simulation; Quantile; Direct search method; Nelder–Mead simplex method;
http://www.sciencedirect.com/science/article/pii/S0377221715003823
Chang, Kuo-Hao
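A gradient-free search on a quantile-based simulation objective, in the spirit of this abstract, can be sketched as follows. The one-dimensional compass (coordinate) search below is a simplified stand-in for the authors' Nelder–Mead-style method, and the noisy quadratic response surface is a made-up example, not from the paper.

```python
import random
import statistics

def sim_quantile(theta, q=0.90, n=400):
    """Estimate the q-quantile of a noisy performance measure by simulation.
    The response (theta - 3)^2 plus Gaussian noise is a hypothetical
    stand-in for a real simulation model; the fixed seed amounts to using
    common random numbers across candidate solutions."""
    rng = random.Random(0)
    samples = [(theta - 3.0) ** 2 + rng.gauss(0.0, 0.5) for _ in range(n)]
    return statistics.quantiles(samples, n=100)[int(q * 100) - 1]

def compass_search(f, x0, step=1.0, tol=1e-2, max_iter=200):
    """Gradient-free compass (direct) search on one variable: try +/- step
    moves and halve the step whenever neither move improves the estimated
    quantile objective."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in (step, -step):
            fc = f(x + d)
            if fc < fx:
                x, fx, improved = x + d, fc, True
                break
        if not improved:
            step *= 0.5
    return x
```

Only objective-value comparisons are used, so no gradient of the quantile needs to exist or be estimated, which is the practical appeal the abstract highlights.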
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684
article
A multi-criteria Police Districting Problem for the efficient and effective design of patrol sectors
The Police Districting Problem (PDP) concerns the efficient and effective design of patrol sectors in terms of performance attributes such as workload, response time, etc. A balanced design of the patrol sectors is desirable, as it results in crime reduction and better service. In this paper, a multi-criteria Police Districting Problem defined in collaboration with the Spanish National Police Corps is presented. This is the first model for the PDP that considers the attributes of area, risk, compactness, and mutual support. The decision-maker can specify his/her preferences on the attributes, on workload balance, and on efficiency. The model is solved by means of a heuristic algorithm that is empirically tested on a case study of the Central District of Madrid. The solutions identified by the model are compared to patrol sector configurations currently in use, and their quality is evaluated by public safety service coordinators. The model and the algorithm produce designs that significantly improve on the current ones.
Location; Police Districting Problem; Multi-criteria decision making;
http://www.sciencedirect.com/science/article/pii/S0377221715004130
Camacho-Collados, M.
Liberatore, F.
Angulo, J.M.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527
article
Elicitation of multiattribute value functions through high dimensional model representations: Monotonicity and interactions
This work addresses the early phases of the elicitation of multiattribute value functions, proposing a practical method for assessing interactions and monotonicity. We exploit the link between multiattribute value functions and the theory of high dimensional model representations. The resulting elicitation method makes no a priori assumptions about an individual’s preference structure. We test the approach via an experiment in a riskless context in which subjects are asked to evaluate mobile phone packages that differ on three attributes.
Multiattribute value theory; High dimensional model representations; Value function elicitation; Decision analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715003355
Beccacece, Francesca
Borgonovo, Emanuele
Buzzard, Greg
Cillo, Alessandra
Zionts, Stanley
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19
article
A review of theory and practice in scientometrics
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
Altmetrics; Citations; H-index; Impact factor; Normalisation;
http://www.sciencedirect.com/science/article/pii/S037722171500274X
Mingers, John
Leydesdorff, Loet
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618
article
A multi-objective approach with soft constraints for water supply and wastewater coverage improvements
In Brazil, due to public health and social and economic cohesion problems, access to water and wastewater services is one of the main concerns of the different stakeholders in the Brazilian water sector. But as the focus is mainly on the expansion and building of new infrastructure, other features, such as the robustness and resiliency of the systems, are being neglected. This, among other reasons, highlights the importance of sustainable development and financing for the Brazilian water sector. To pursue that goal, a multi-objective optimization model was built with the aim of formulating strategies that reach a predefined coverage while minimizing the time and costs incurred, under specific hard and soft constraints assembled to deal with key sustainability concepts (e.g., affordability and coverage targets), which should not be left aside. For that purpose, an achievement scalarizing function was adopted with three distinct scaling coefficient vectors for a given reference point. To solve this combinatorial optimization problem, we used a mixed integer linear programming optimizer that resorts to branch-and-bound methods. The work developed paves the way toward the creation of a decision-aiding tool, without disregarding the number of steps that still need to be taken to achieve the proposed objectives.
Multiple criteria analysis; Branch and bound; Combinatorial optimization; Reference point approach; Coverage of water and wastewater services;
http://www.sciencedirect.com/science/article/pii/S037722171500329X
Pinto, F.S.
Figueira, J.R.
Marques, R.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208
article
Pricing of fluctuations in electricity markets
In an electric power system, demand fluctuations may result in significant ancillary cost to suppliers. Furthermore, in the near future, deep penetration of volatile renewable electricity generation is expected to exacerbate the variability of demand on conventional thermal generating units. We address this issue by explicitly modeling the ancillary cost associated with demand variability. We argue that a time-varying price equal to the suppliers’ instantaneous marginal cost may not achieve social optimality, and that consumer demand fluctuations should be properly priced. We propose a dynamic pricing mechanism that explicitly encourages consumers to adapt their consumption so as to offset the variability of demand on conventional units. Through a dynamic game-theoretic formulation, we show that (under suitable convexity assumptions) the proposed pricing mechanism achieves social optimality asymptotically, as the number of consumers increases to infinity. Numerical results demonstrate that compared with marginal cost pricing, the proposed mechanism creates a stronger incentive for consumers to shift their peak load, and therefore has the potential to reduce the need for long-term investment in peaking plants.
OR in energy; Electricity market; Game theory; Dynamic pricing; Social welfare;
http://www.sciencedirect.com/science/article/pii/S0377221715003136
Tsitsiklis, John N.
Xu, Yunjian
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412
article
An integrative cooperative search framework for multi-decision-attribute combinatorial optimization: Application to the MDPVRP
We introduce the integrative cooperative search method (ICS), a multi-thread cooperative search method for multi-attribute combinatorial optimization problems. ICS musters the combined capabilities of a number of independent exact or meta-heuristic solution methods. A number of these methods work on sub-problems defined by suitably selected subsets of decision-set attributes of the problem, while others combine the resulting partial solutions into complete ones and, eventually, improve them. All these methods cooperate through an adaptive search-guidance mechanism, using the central-memory cooperative search paradigm. Extensive numerical experiments explore the behavior of ICS and demonstrate its interest through an application to the multi-depot, periodic vehicle routing problem, for which ICS improves on the results of the current state-of-the-art methods.
Multi-attribute combinatorial optimization; Integrative cooperative search; Meta-heuristics; Decision-set decomposition; Multi-depot periodic vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221715003793
Lahrichi, Nadia
Crainic, Teodor Gabriel
Gendreau, Michel
Rei, Walter
Crişan, Gloria Cerasela
Vidal, Thibaut
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553
article
Elicitation of criteria importance weights through the Simos method: A robustness concern
In the field of multicriteria decision aid, the Simos method is considered an effective tool for assessing criteria importance weights. Nevertheless, the method's input data do not lead to a single weighting vector but to infinitely many, which often exhibit great diversification and threaten the stability and acceptability of the results. This paper proves that the feasible weighting solutions, of both the original and the revised Simos procedures, are vectors of a non-empty convex polyhedral set, and it therefore proposes a set of complementary robustness analysis rules and measures, integrated into a Robust Simos Method. This framework supports analysts and decision makers in gaining insight into the degree of variation of the multiple acceptable sets of weights and their impact on the stability of the final results. In addition, the proposed measures determine whether, and which, actions should be implemented before reaching an acceptable set of criteria weights and forming a final decision. Two numerical examples are provided to illustrate these findings and to demonstrate the significance of consistently analyzing the robustness of the Simos method's results, in both the original and the revised versions of the method.
Multiple criteria; Decision analysis; Criteria weights; Robustness analysis; Simos method;
http://www.sciencedirect.com/science/article/pii/S0377221715003306
Siskos, Eleftherios
Tsotsolas, Nikos
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75
article
Stochastic lot sizing manufacturing under the ETS system for maximisation of shareholder wealth
The issues of carbon emission and global warming have increasingly attracted worldwide attention in recent years. Despite huge progress in carbon abatement, few research studies have reported on the impacts of carbon emission reduction mechanisms on manufacturing optimisation, which often leads to environmentally unsustainable operational decisions and misestimation of performance. This paper explores carbon management under the carbon emission trading mechanism for the optimisation of lot sizing production planning in stochastic make-to-order manufacturing, with the objective of maximising shareholder wealth. We are concerned not only with the economic benefits to investors, but also with the environmental impacts associated with production planning. Numerical experiments illustrate the significant influence of carbon emission trading, pricing, and caps on the dynamic decisions of the lot sizing policy. The results highlight the critical role of carbon management in production planning for achieving both environmental and economic benefits. They also provide managerial insights into operations management to help mitigate environmental deterioration arising from carbon emission, as well as improve shareholder wealth.
Production planning; Lot sizing; Carbon emission; ETS; Shareholder wealth;
http://www.sciencedirect.com/science/article/pii/S0377221715003148
Wang, X.J.
Choi, S.H.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262
article
On the estimation of the true demand in call centers with redials and reconnects
In many call centers, customers often perform redials (i.e., reattempts after an abandonment) and reconnects (i.e., reattempts after an answered call). Call center models in the literature usually do not cover these features, while real data analysis and simulation results show that ignoring them inevitably leads to inaccurate estimation of the total inbound volume. Therefore, in this paper we propose a performance model that includes both features. In our model, the total volume consists of three types of calls: (1) fresh calls (i.e., initial call attempts), (2) redials, and (3) reconnects. In practice, the total volume is used to make forecasts, while, according to the simulation results, this can lead to high forecast errors and, subsequently, wrong staffing decisions. However, most call center data sets lack customer-identity information, which makes it difficult to identify how many calls are fresh and what fractions of the calls are redials and reconnects.
Queueing; Forecasting; Redials; Reconnects; Call centers;
http://www.sciencedirect.com/science/article/pii/S0377221715003112
Ding, S.
Koole, G.
van der Mei, R.D.
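The three-way decomposition of the total volume described in this abstract implies a simple branching-process calculation. The sketch below uses assumed (hypothetical) abandonment, redial, and reconnect probabilities and is not the authors' estimation model:

```python
def total_volume(fresh, p_abandon, p_redial, p_reconnect):
    """Expected total call volume when each abandoned call is redialed with
    probability p_redial and each answered call triggers a reconnect with
    probability p_reconnect. Each follow-up call can itself spawn further
    follow-ups, giving the geometric series fresh / (1 - m)."""
    m = p_abandon * p_redial + (1 - p_abandon) * p_reconnect
    if m >= 1:
        raise ValueError("expected follow-ups per call must be below 1")
    return fresh / (1 - m)
```

For example, with 20 percent abandonment, a 50 percent redial rate, and a 10 percent reconnect rate, 1000 fresh calls generate about 1220 total calls, so forecasting on the total volume alone overstates fresh demand by roughly 22 percent.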
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660
article
The impact of the internet on the pricing strategies of the European low cost airlines
This study seeks to analyse the price determination of low cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily track the behaviour of users to adapt their pricing strategies using internal information.
Low cost airlines; Airline pricing; ICT; Travel industry strategies; Air fares;
http://www.sciencedirect.com/science/article/pii/S0377221715003859
Moreno-Izquierdo, L.
Ramón-Rodríguez, A.
Perles Ribes, J.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391
article
Scheduling resource-constrained projects with a flexible project structure
In projects with a flexible project structure, the activities that must be scheduled are not completely known in advance. Scheduling such projects includes deciding whether to perform particular activities. This decision also affects precedence constraints among the implemented activities. However, established model formulations and solution approaches for the resource-constrained project scheduling problem (RCPSP) assume that the project structure is provided in advance. In this paper, the traditional RCPSP is extended using a highly general model-endogenous decision on this flexible project structure. This extension is illustrated using the example of the aircraft turnaround process at airports. We present a genetic algorithm to solve this type of scheduling problem and evaluate it in an extensive numerical study.
Project scheduling; Genetic algorithms; RCPSP; Flexible projects;
http://www.sciencedirect.com/science/article/pii/S0377221715003732
Kellenbrink, Carolin
Helber, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:232-2412015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:232-241
article
Accommodating heterogeneity and nonlinearity in price effects for predicting brand sales and profits
We propose a hierarchical Bayesian semiparametric approach to account simultaneously for heterogeneity and functional flexibility in store sales models. To estimate own- and cross-price response flexibly, a Bayesian version of P-splines is used. Heterogeneity across stores is accommodated by embedding the semiparametric model into a hierarchical Bayesian framework that yields store-specific own- and cross-price response curves. More specifically, we propose multiplicative store-specific random effects that scale the nonlinear price curves while their overall shape is preserved. Estimation is fully Bayesian and based on novel MCMC techniques. In an empirical study, we demonstrate a higher predictive performance of our new flexible heterogeneous model over competing models that capture heterogeneity or functional flexibility only (or neither of them) for nearly all brands analyzed. In particular, allowing for heterogeneity in addition to functional flexibility can improve the predictive performance of a store sales model considerably, while incorporating heterogeneity alone only moderately improved or even decreased predictive validity. Taking into account model uncertainty, we show that the proposed model leads to higher expected profits as well as to materially different pricing recommendations.
Forecasting; Sales response modeling; Heterogeneity; Functional flexibility; Expected profits;
http://www.sciencedirect.com/science/article/pii/S0377221715001678
Lang, Stefan
Steiner, Winfried J.
Weber, Anett
Wechselberger, Peter
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:476-4862015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:476-486
article
Commodity derivatives pricing with cointegration and stochastic covariances
Empirically, cointegration and stochastic covariances, including stochastic volatilities, are statistically significant for commodity prices and energy products. To capture such market phenomena, we develop continuous-time dynamics of cointegrated assets with a stochastic covariance matrix and derive the joint characteristic function of asset returns in closed form. The proposed model offers an endogenous explanation for the stochastic mean-reverting convenience yield. The time series of spot and futures prices of WTI crude oil and gasoline show a cointegration relationship under both the physical and risk-neutral measures. The proposed model also allows us to fit the observed term structure of futures prices and calibrate the market-implied cointegration relationship. We apply it to value options on a single commodity and on multiple commodities.
Option pricing; Cointegration; Stochastic covariance; Stochastic convenience yield;
http://www.sciencedirect.com/science/article/pii/S0377221715003847
Chiu, Mei Choi
Wong, Hoi Ying
Zhao, Jing
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:218-2312015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:218-231
article
E-NAUTILUS: A decision support system for complex multiobjective optimization problems based on the NAUTILUS method
Interactive multiobjective optimization methods are not necessarily easy to use when (industrial) multiobjective optimization problems are involved. There are at least two important factors to be considered with any interactive method: computationally expensive functions and aspects of human behavior. In this paper, we propose a method based on the existing NAUTILUS method and call it the Enhanced NAUTILUS (E-NAUTILUS) method. This method borrows the motivation of NAUTILUS, along with the human aspects related to avoiding trading-off and anchoring bias, and extends its applicability to computationally expensive multiobjective optimization problems. In the E-NAUTILUS method, a set of Pareto optimal solutions is calculated in a pre-processing stage before the decision maker is involved. When the decision maker interacts with the solution process in the interactive decision-making stage, no new optimization problem is solved, thus avoiding the waiting time for the decision maker to obtain new solutions according to her/his preferences. In this stage, starting from the worst possible objective function values, the decision maker is shown a set of points in the objective space, from which (s)he chooses one as the preferable point. At successive iterations, (s)he always sees points which improve all the objective values achieved by the previously chosen point. In this way, the decision maker remains focused on the solution process, as there is no loss in any objective function value between successive iterations. The final post-processing stage ensures the Pareto optimality of the final solution. A real-life engineering problem is used to demonstrate how E-NAUTILUS works in practice.
Multiple objective programming; Interactive methods; Multiple criteria optimization; Computational cost; Trading-off;
http://www.sciencedirect.com/science/article/pii/S0377221715003203
Ruiz, Ana B.
Sindhya, Karthik
Miettinen, Kaisa
Ruiz, Francisco
Luque, Mariano
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:44-502015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:44-50
article
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a globally optimal subset using the branch and bound (BB) algorithm is limited to problems in very low dimension, typically d ≤ 5, as the complexity of the problem increases exponentially with d. We introduce a bold pruning strategy in the BB algorithm that results in a significant reduction in computing time, at the price of a negligible loss of accuracy. The novelty of our algorithm is that the bounds at the nodes of the BB tree come from pseudo-convexifications derived using a linearization technique with approximate bounds for the nonlinear terms. The approximate bounds are computed by solving an auxiliary semidefinite optimization problem. We show through a computational study that our algorithm performs well on a wide set of the most difficult instances of the LTSE problem.
Global optimization; Integer programming; High breakdown point regression; Branch and bound; Relaxation–linearization technique;
http://www.sciencedirect.com/science/article/pii/S0377221715003173
Flores, Salvador
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:528-5422015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:528-542
article
On the exact solution of the multi-period portfolio choice problem for an exponential utility under return predictability
In this paper we derive the exact solution of the multi-period portfolio choice problem for an exponential utility function under return predictability. It is assumed that the asset returns depend on predictable variables and that the joint random process of the asset returns and the predictable variables follows a vector autoregressive process. We prove that the optimal portfolio weights depend on the covariance matrices of the next two periods and the conditional mean vector of the next period. The case without predictable variables and the case of independent asset returns are special cases of our solution. Furthermore, we provide an exhaustive empirical study where the cumulative empirical distribution function of the investor’s wealth is calculated using the exact solution. It is compared with the investment strategy obtained under the additional assumption that the asset returns are independently distributed.
Multi-period asset allocation; Expected utility optimization; Exponential utility function; Return predictability;
http://www.sciencedirect.com/science/article/pii/S037722171500332X
Bodnar, Taras
Parolya, Nestor
Schmid, Wolfgang
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:435-4492015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:435-449
article
Reverse supply chains: Effects of collection network and returns classification on profitability
Used products collected for value recovery are characterized by higher uncertainty regarding their quality condition compared to the raw materials used in forward supply chains. Because of the need for timely information regarding their quality, a common business practice is to establish procedures for the classification of used products (returns), which is not always error-free. The existence of a multitude of sites where used products can be collected further increases the complexity of reverse supply chain design and management. In this paper we formulate the objective function for a reverse supply chain with multiple collection sites and the possibility of returns sorting, assuming general distributions of demand and returns quality in a single-period context. We derive conditions for the determination of the optimal acquisition and remanufacturing lot-sizing decisions under alternative locations of the unreliable classification/sorting operation. We provide closed-form expressions for the selection of the optimal sorting location in the special case of identical collection sites and guidelines for tackling the decision-making problem in the general case. Furthermore, we examine analytically the effect of the cost and accuracy of the classification procedure on the profitability of the alternative supply chain configurations. Our analysis, which is accompanied by a brief numerical investigation, offers insights regarding the impact of yield variability, number of collection sites, and location and characteristics of the returns classification operation both on the acquisition decisions and on the profitability of the reverse supply chain.
Multiple suppliers; Random yield; Location of sorting; Returns classification errors; Value of quality information;
http://www.sciencedirect.com/science/article/pii/S0377221715003744
Zikopoulos, Christos
Tagaras, George
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:471-4752015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:471-475
article
Pricing and sales-effort investment under bi-criteria in a supply chain of virtual products involving risk
This work develops a stochastic model of a two-echelon supply chain of virtual products in which the decision makers—a manufacturer and a retailer—may be risk-sensitive. Virtual products allow the retailer to avoid holding costs and ensure timely fulfillment of demand with no risk of shortage. We expand on the work of Chernonog and Avinadav (2014), who investigated the pricing of virtual products under uncertain and price-dependent demand, by including sales-effort as a decision variable that affects demand. Whereas in the previous work equilibrium was obtained exactly as in a deterministic case for any utility function, herein it is not. Consequently, we focus on the strategies of both the manufacturer and the retailer under different profit criteria, including the use of bi-criteria. By formulating the problem as a Stackelberg game, we show that the problem can be analytically solved by assuming certain common structures of the demand function and of the preferences of both the manufacturer and the retailer with regard to risk. We extend the solution to the case of imperfect information regarding the preferences and offer guidelines for the formation of efficient sets of decisions under bi-criteria. Finally, we provide numerical results.
Supply chain; Game theory; Risk; Multiple criteria; Imperfect information;
http://www.sciencedirect.com/science/article/pii/S0377221715004142
Chernonog, Tatyana
Avinadav, Tal
Ben-Zvi, Tal
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:562-5742015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:562-574
article
A systemic method for organisational stakeholder identification and analysis using Soft Systems Methodology (SSM)
This paper presents a systemic methodology for identifying and analysing the stakeholders of an organisation at many different levels. The methodology is based on soft systems methodology and is applicable to all types of organisation, both for profit and non-profit. The methodology begins with the top-level objectives of the organisation, developed through debate and discussion, and breaks these down into the key activities needed to achieve them. A range of stakeholders are identified for each key activity. At the end, the functions and relationships of all the stakeholder groups can clearly be seen. The methodology is illustrated with an actual case study in Hunan University.
Stakeholder identification; Stakeholder analysis; Soft systems methodology;
http://www.sciencedirect.com/science/article/pii/S0377221715003860
Wang, Wei
Liu, Wenbin
Mingers, John
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:76-852015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:76-85
article
Optimal inventory policy for two substitutable products with customer service objectives
We consider a firm facing stochastic demand for two products with downward, supplier-driven substitution and customer service objectives. We assume both products are perishable or prone to obsolescence, hence the firm faces a single period problem. The fundamental challenge facing the firm is to determine in advance of observing demand the profit maximizing inventory levels of both products that will meet given service level objectives. Note that while we speak of inventory levels, the products may be either goods or services. We characterize the firm’s optimal inventory policy with and without customer service objectives. Results of a numerical study reveal the benefits obtained from substitution and show how optimal inventory levels are impacted by customer service objectives.
Inventory management; Capacity management; Substitution; Perishability; Customer service objective;
http://www.sciencedirect.com/science/article/pii/S0377221715003264
Chen, Xu
Feng, Youyi
Keblis, Matthew F.
Xu, Jianjun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:331-3382015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:331-338
article
Tri-criterion modeling for constructing more-sustainable mutual funds
One of the most important factors shaping world outcomes is where investment dollars are placed. In this regard, there is the rapidly growing area called sustainable investing where environmental, social, and corporate governance (ESG) measures are taken into account. With people interested in this type of investing rarely able to gain exposure to the area other than through a mutual fund, we study a cross section of U.S. mutual funds to assess the extent to which ESG measures are embedded in their portfolios. Our methodology makes heavy use of points on the nondominated surfaces of many tri-criterion portfolio selection problems in which sustainability is modeled, after risk and return, as a third criterion. With the mutual funds acting as a filter, the question is: How effective is the sustainable mutual fund industry in carrying out its charge? Our findings are that the industry has substantial leeway to increase the sustainability quotients of its portfolios, even at no cost to risk and return, thus implying that the funds are unnecessarily falling short on the reasons why investors are investing in these funds in the first place.
Socially responsible investing; Multiple criteria optimization; Portfolio selection; Nondominated surfaces; Quadratically constrained linear programs;
http://www.sciencedirect.com/science/article/pii/S0377221715003288
Utz, Sebastian
Wimmer, Maximilian
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:619-6302015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:619-630
article
A moment-matching method to generate arbitrage-free scenarios
We propose a new moment-matching method to build scenario trees that rule out arbitrage opportunities when describing the dynamics of financial assets. The proposed scenario generator is based on the monomial method, a technique to solve systems of algebraic equations. Extensive numerical experiments show the accuracy and efficiency of the proposed moment-matching method when solving financial problems in complete and incomplete markets.
Scenarios; Monomial method; Moment-matching;
http://www.sciencedirect.com/science/article/pii/S0377221715003653
Staino, Alessandro
Russo, Emilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:392-3992015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:392-399
article
Optimality cuts and a branch-and-cut algorithm for the K-rooted mini-max spanning forest problem
Let G = (V, E) be an undirected graph with costs associated with its edges and K pre-specified root vertices. The K-rooted mini-max spanning forest problem asks for a spanning forest of G defined by exactly K mutually disjoint trees. Each tree must contain a different root vertex and the cost of the most expensive tree must be minimum. This paper introduces a branch-and-cut algorithm for the problem. It involves a multi-start linear programming heuristic and the separation of some new optimality cuts. Extensive computational tests indicate that the new algorithm significantly improves on the results available in the literature, the improvements being reflected in lower CPU times, smaller enumeration trees, and optimality certificates for previously unattainable K = 2 instances with as many as 200 vertices. Furthermore, for the first time, instances of the problem with K ∈ {3, 4} are solved to proven optimality.
Combinatorial optimization; Branch-and-cut; K-rooted mini–max spanning forest problem; Optimality cuts;
http://www.sciencedirect.com/science/article/pii/S0377221715003719
da Cunha, Alexandre Salles
Simonetti, Luidi
Lucena, Abilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:345-3782015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:345-378
article
The third comprehensive survey on scheduling problems with setup times/costs
Scheduling involving setup times/costs plays an important role in today's modern manufacturing and service environments for the delivery of reliable products on time. The setup process is not a value-added factor, and hence setup times/costs need to be explicitly considered when scheduling decisions are made in order to increase productivity, eliminate waste, improve resource utilization, and meet deadlines. However, the vast majority of the existing scheduling literature, more than 90 percent, ignores this fact. Interest in scheduling problems where setup times/costs are explicitly considered began in the mid-1960s and has been increasing since, though not at the anticipated level. The first comprehensive review paper (Allahverdi et al., 1999) on scheduling problems with setup times/costs appeared in 1999, covering about 200 papers from the mid-1960s to mid-1998, while the second comprehensive review paper (Allahverdi et al., 2008) covered about 300 papers published from mid-1998 to mid-2006. This paper is the third comprehensive survey, providing an extensive review of about 500 papers that have appeared from mid-2006 to the end of 2014, including static, dynamic, deterministic, and stochastic environments. This review classifies scheduling problems based on shop environments as single machine, parallel machine, flowshop, job shop, or open shop. It further classifies the problems as family and non-family as well as sequence-dependent and sequence-independent setup times/costs. Given that so many papers have been published in a relatively short period of time, different researchers have addressed the same problem independently, sometimes even using the same methodology. Throughout the survey, these independently addressed problems are identified, and the need to compare their results is emphasized.
Moreover, based on performance measures and on shop and setup times/costs environments, the less studied problems have been identified and the need to address them is specified. The current survey paper, along with those of Allahverdi et al. (1999, 2008), provides an up-to-date survey of scheduling problems involving static, dynamic, deterministic, and stochastic problems for different shop environments with setup times/costs since the first research on the topic appeared in the mid-1960s.
Scheduling; Review; Setup time; Setup cost;
http://www.sciencedirect.com/science/article/pii/S0377221715002763
Allahverdi, Ali
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:575-5812015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:575-581
article
On solving matrix games with pay-offs of triangular fuzzy numbers: Certain observations and generalizations
The purpose of this paper is to highlight a serious omission in the recent work of Li (2012) on solving two-person zero-sum matrix games with pay-offs of triangular fuzzy numbers (TFNs) and to propose a new methodology for solving such games. Li (2012) proposed a method which always assures that the max player's gain-floor and the min player's loss-ceiling have a common TFN value. The present paper exhibits a flaw in this claim of Li (2012). The flaw arises because Li (2012) does not explain the meaning of a solution of the game under consideration. The present paper provides certain appropriate modifications to Li’s model to take care of this serious omission. These modifications, in conjunction with the results of Clemente, Fernandez, and Puerto (2011), lead to an algorithm to solve matrix games with pay-offs of general piecewise linear fuzzy numbers.
Game theory; Fuzzy pay-offs; Fuzzy numbers; Multiobjective optimization; Pareto optimality;
http://www.sciencedirect.com/science/article/pii/S0377221715003835
Chandra, S.
Aggarwal, A.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:86-1072015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:86-107
article
A biased random-key genetic algorithm for the unequal area facility layout problem
This paper presents a biased random-key genetic algorithm (BRKGA) for the unequal area facility layout problem (UA-FLP), where a set of rectangular facilities with given area requirements has to be placed, without overlapping, on a rectangular floor space. The objective is to find the location and the dimensions of the facilities such that the sum of the weighted distances between the centroids of the facilities is minimized. A hybrid approach is developed, combining a BRKGA to determine the order of placement and the dimensions of each facility, a novel placement strategy to position each facility, and a linear programming model to fine-tune the solutions. The proposed approach is tested on 100 random datasets and 28 benchmark datasets taken from the literature, and is compared with 21 other approaches. The quality of the approach is validated by the improvement of the best known solutions for 19 of the 28 extensively studied benchmark datasets.
Facilities planning and design; Facility layout; Biased random-key genetic algorithms; Random-keys;
http://www.sciencedirect.com/science/article/pii/S0377221715003227
Gonçalves, José Fernando
Resende, Mauricio G.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:108-1182015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:108-118
article
Comments on the EOQ model for deteriorating items with conditional trade credit linked to order quantity in the supply chain management
Ouyang et al. (2009) consider an economic order quantity (EOQ) model for deteriorating items with a partially permissible delay in payments linked to order quantity. Their inventory model is practical, but it contains some mathematical flaws. In this paper, the functional behaviors of the annual total relevant costs are explored by rigorous mathematical methods. A complete solution procedure is also developed to make up for the shortcomings of Ouyang et al. (2009). Numerical examples show that the new solution procedure avoids making incorrect decisions and incurring the resulting cost penalties.
Inventory; EOQ; Trade credit; Partially permissible delay in payments; Deteriorating items;
http://www.sciencedirect.com/science/article/pii/S0377221715003665
Ting, Pin-Shou
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:339-3422015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:339-342
article
Optimal shelf-space stocking policy using stochastic dominance under supply-driven demand uncertainty
In this paper, we develop an optimal shelf-space stocking policy when demand, in addition to the exogenous uncertainty, is influenced by the amount of inventory displayed (supply) on the shelves. Our model exploits a stochastic dominance condition: we assume that the distribution of realized demand with a higher stocking level stochastically dominates the distribution of realized demand with a lower stocking level. We show that the critical fractile with endogenous demand may not exceed the critical fractile of the classical newsvendor model. Our computational results validate the optimality of the amount of units stocked on the retail shelves.
Displayed inventory; Stochastic dominance; Newsvendor; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S0377221715003240
Amit, R.K.
Mehta, Peeyush
Tripathi, Rajeev R.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:51-652015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:51-65
article
An efficient genetic algorithm with a corner space algorithm for a cutting stock problem in the TFT-LCD industry
In this study, we investigate a two-dimensional cutting stock problem in the thin film transistor liquid crystal display industry. Given the lack of an efficient and effective mixed production method that can produce various sizes of liquid crystal display panels from a glass substrate sheet, thin film transistor liquid crystal display manufacturers have relied on the batch production method, which only produces one size of liquid crystal display panel from a single substrate. However, batch production is not an effective or flexible strategy because it increases production costs by using an excessive number of glass substrate sheets and causes wastage costs from unused liquid crystal display panels. A number of mixed production approaches or algorithms have been proposed. However, these approaches cannot solve industrial-scale two-dimensional cutting stock problems efficiently because of their computational complexity. We propose an efficient and effective genetic algorithm that incorporates a novel placement procedure, called a corner space algorithm, and a mixed integer programming model to solve the problem. The key objectives are to reduce the total production costs and to satisfy the requirements of customers. Our computational results show that, in terms of solution quality and computation time, the proposed method significantly outperforms the existing approaches.
Two-dimensional cutting; Mixed production; Genetic algorithm; TFT-LCD; Corner space algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715003379
Lu, Hao-Chun
Huang, Yao-Huei
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:263-2802015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:263-280
article
Joint optimization for coordinated configuration of product families and supply chains by a leader-follower Stackelberg game
Product family design by module configuration is conducive to accommodating product variety while maintaining mass production efficiency. Effective fulfillment of product families necessitates joint decision making on product family configuration (PFC) and downstream supply chain configuration (SCC), as manufacturers nowadays are moving towards assembly-to-order production throughout distributed supply chain networks. Existing decision models for the joint optimization of product family and supply chain configuration originate from an “all-in-one” approach that assumes both PFC and SCC decisions can be integrated into one optimization problem by aggregating two different types of objectives into a single objective function. Such an assumption neglects the complex tradeoffs underlying the two different decision-making problems and fails to reveal the inherent coupling of PFC and SCC.
Product family; Supply chain; Module configuration; Stackelberg game; Bi-level optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500315X
Yang, Dong
Jiao, Jianxin (Roger)
Ji, Yangjian
Du, Gang
Helo, Petri
Valente, Anna
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:554-5612015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:554-561
article
A nonparametric methodology for evaluating convergence in a multi-input multi-output setting
This paper presents a novel nonparametric methodology to evaluate convergence in an industry, considering a multi-input multi-output setting for the assessment of total factor productivity. In particular, we develop two new indexes to evaluate σ-convergence and β-convergence that can be computed using nonparametric techniques such as Data Envelopment Analysis. The methodology developed is particularly useful to enhance productivity assessments based on the Malmquist index. The methodology is applied to a real world context, consisting of a sample of Portuguese construction companies that operated in the sector between 2008 and 2010. The empirical results show that Portuguese companies tended to converge, in the sense of both σ and β, in all construction activity segments in the aftermath of the financial crisis.
Convergence; Productivity; Malmquist index; Data envelopment analysis; Construction industry;
http://www.sciencedirect.com/science/article/pii/S0377221715003872
Horta, Isabel M.
Camanho, Ana S.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:597-6082015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:597-608
article
Exact and heuristic approaches to the airport stand allocation problem
The Stand Allocation Problem (SAP) consists in assigning aircraft activities (arrival, departure and intermediate parking) to aircraft stands (parking positions) with the objective of maximizing the number of passengers/aircraft at contact stands and minimizing the number of towing movements, while respecting a set of operational and commercial requirements. We first prove that the problem of assigning each activity to a compatible stand is NP-complete by a reduction from the circular arc graph coloring problem. As a corollary, this implies that the SAP is NP-hard. We then formulate the SAP as a Mixed Integer Program (MIP) and strengthen the formulation in several ways. Additionally, we introduce two heuristic algorithms based on a spatial and time decomposition leading to smaller MIPs. The methods are tested on realistic scenarios based on actual data from two major European airports. We compare the performance and the quality of the solutions with state-of-the-art algorithms. The results show that our MIP-based methods provide significant improvements over the solutions outlined in previously published approaches. Moreover, their low computation times make them very practical.
Mixed integer programming; Gate assignment problem; Heuristic algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221715003331
Guépet, J.
Acuna-Agost, R.
Briant, O.
Gayon, J.P.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:209-2172015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:209-217
article
A generalized equilibrium efficient frontier data envelopment analysis approach for evaluating DMUs with fixed-sum outputs
The recently published equilibrium efficient frontier data envelopment analysis (EEFDEA) approach (Yang et al., 2014) represents a step forward in evaluating decision-making units (DMUs) with fixed-sum outputs compared to prior approaches such as the FSODEA (fixed-sum outputs DEA) approach (Yang et al., 2011) and the ZSG-DEA (zero sum gains DEA) approach (Lins et al., 2003). Building on the EEFDEA approach, in this paper we propose a generalized equilibrium efficient frontier data envelopment analysis (GEEFDEA) approach that improves and strengthens it. Compared to the EEFDEA approach, the proposed approach makes several improvements: (1) it is not necessary to determine the evaluation order in advance, which overcomes the limitation that different evaluation orders lead to different results; (2) the equilibrium efficient frontier can be reached in a single step regardless of the number of DMUs, which greatly simplifies the procedure, especially when the number of DMUs is large; and (3) the constraint in prior approaches that the signs of each DMU's output adjustments must be the same (all non-positive or all non-negative) is relaxed. In this sense, the result obtained by the proposed approach is more consistent with the demands of practical applications. Finally, the proposed approach, combined with assurance regions (AR), is applied to a data set from the 2012 London Olympic Games.
Data envelopment analysis (DEA); Generalized equilibrium efficient frontier; Fixed sum outputs; Assurance region;
http://www.sciencedirect.com/science/article/pii/S0377221715003161
Yang, Min
Li, Yong Jun
Liang, Liang
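As background, the standard input-oriented CCR envelopment model on which fixed-sum-output approaches build can be solved as one LP per DMU. A minimal sketch with a hypothetical three-DMU, single-input, single-output data set (not the GEEFDEA model itself):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form):
        min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                         sum_j lam_j * y_j >= y_o,   lam >= 0.
    X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                       # decision vector [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):               # inputs:  lam.X[:, i] - theta*x_o[i] <= 0
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):               # outputs: -lam.Y[:, r] <= -y_o[r]
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return float(res.fun)
```

With inputs (2, 4, 3) and outputs (1, 2, 1), the first two DMUs lie on the constant-returns frontier (ratio 0.5) and the third scores 2/3.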
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:320-3302015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:320-330
article
On the value of exposure and secrecy of defense system: First-mover advantage vs. robustness
It is commonly accepted in the literature that, when facing a strategic terrorist, the government can be better off manipulating the terrorist's target selection by exposing her defense levels and thus moving first. However, the terrorist's private information may significantly affect this first-mover advantage, an effect that has not been extensively studied in the literature. To explore how asymmetry between the government's and the terrorist's valuations of targets affects the defense equilibrium, we propose a model in which the government chooses between disclosure (sequential game) and secrecy (simultaneous game) of her defense system. Our analysis shows that the government's first-mover advantage in a sequential game is considerable only when government and terrorist share relatively similar valuations of targets. In contrast, we find, interestingly, that the government no longer benefits from moving first by exposing her defense levels when the divergence between the two valuations is high. This is due to the robustness of the defense system under secrecy, in the sense that all targets are defended in equilibrium irrespective of how much the terrorist's valuation of targets differs from the government's. We identify two phenomena that lead to this result. First, when the terrorist values targets significantly more highly than the government believes, the government may waste her budget in a sequential game by over-investing in the high-valued targets. Second, when the terrorist's valuation is significantly lower, the government may incur a higher expected damage in a sequential game by leaving the low-valued targets undefended. Finally, we believe this paper provides novel insights for homeland security resource allocation problems.
Defense system; Game Theory; Secrecy; Exposure; Robustness;
http://www.sciencedirect.com/science/article/pii/S0377221715003367
Nikoofal, Mohammad E.
Zhuang, Jun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:119-1272015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:119-127
article
Multivariate control charts based on the James–Stein estimator
In this study, we focus on improving parameter estimation in the Phase I study to construct more accurate Phase II control limits for monitoring multivariate quality characteristics. For a multivariate normal distribution with unknown mean vector, the usual mean estimator is known to be inadmissible under the squared error loss function when the dimension of the variables is greater than 2. Shrinkage estimators, such as the James–Stein estimators, have been shown in the literature to perform better than the conventional estimators. We utilize the James–Stein estimators to improve the Phase I parameter estimation. Multivariate control limits for Phase II monitoring based on the improved estimators are proposed in this study. The resulting control charts, JS-type charts, are shown to offer substantial performance improvements over the existing ones.
Average run length; Control chart; Multivariate normal distribution; James–Stein estimator;
http://www.sciencedirect.com/science/article/pii/S0377221715001666
Wang, Hsiuying
Huwang, Longcheen
Yu, Jeng Hung
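The positive-part James–Stein estimator on which such charts build shrinks the sample mean toward the origin. A minimal sketch assuming a known common variance σ² and identity covariance (the paper's Phase I setting is more general):

```python
import numpy as np

def james_stein(xbar, sigma2, n):
    """Positive-part James-Stein estimator of a p-dimensional normal
    mean (p >= 3) from a sample mean `xbar` of `n` observations:
        (1 - (p - 2) * sigma2 / (n * ||xbar||^2))_+ * xbar."""
    xbar = np.asarray(xbar, dtype=float)
    p = xbar.size
    ss = n * np.dot(xbar, xbar)                  # n * ||xbar||^2
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / ss)
    return shrink * xbar
```

For p = 5, σ² = 1, n = 1 and xbar = (2, 2, 2, 2, 2), the shrinkage factor is 1 − 3/20 = 0.85.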
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:641-6502015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:641-650
article
Optimal design of bilateral contracts for energy procurement
In this paper, we consider the problem of optimizing the portfolio of an aggregator that interacts with the energy grid via bilateral contracts. The purpose of the contracts is to achieve the pointwise procurement of energy to the grid. The challenge raised by the coordination of scattered resources and the securing of obligations over the planning horizon is addressed through a twin-time scale model, where robust short term operational decisions are contingent on long term resource usage incentives that embed the full extent of contract specifications.
Distributed energy resource; Bilateral contract; Dynamic resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221715003707
Gilbert, François
Anjos, Miguel F.
Marcotte, Patrice
Savard, Gilles
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:307-3192015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:307-319
article
Cost-effectiveness measures on convex and nonconvex technologies
Camanho and Dyson (2005) extended Shephard's (1974) revenue-indirect cost efficiency approach to a cost-effectiveness framework, which helps to assess the ability of a firm to achieve the current revenue (expressed in the firm's own prices and quantities) at minimum cost. The degree of cost-effectiveness is quantified as the ratio of the minimum cost to the observed cost of the evaluated firm, where the minimum cost is computed by simultaneously adjusting the output levels at the current revenue. In this paper, we develop two cost-effectiveness approaches based on convex data envelopment analysis and nonconvex free disposable hull technologies. The objectives of this paper are threefold. Firstly, we develop a convex cost-effectiveness (CCE) measure which is equivalent to the Camanho–Dyson CCE measure under the constant returns-to-scale assumption. Secondly, we introduce three nonconvex cost-effectiveness (NCCE) measures which are shown to be equivalent for each returns-to-scale nonconvex technology. Finally, we apply our framework to real data.
Data envelopment analysis (DEA); Free disposal hull (FDH); Convex cost-effectiveness (CCE); Nonconvex cost-effectiveness (NCCE); Returns-to-scale;
http://www.sciencedirect.com/science/article/pii/S0377221715002751
Fukuyama, Hirofumi
Shiraz, Rashed Khanjani
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:462-4702015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:462-470
article
Decentral allocation planning in multi-stage customer hierarchies
This paper presents a novel allocation scheme to improve profits when splitting a scarce product among customer segments. These segments differ by demand and margin and they form a multi-level tree, e.g. according to a geography-based organizational structure. In practice, allocation has to follow an iterative process in which higher level quotas are disaggregated one level at a time, only based on local, aggregate information. We apply well-known econometric concepts such as the Lorenz curve and Theil’s index of inequality to find a non-linear approximation of the profit function in the customer tree. Our resulting Approximate Profit Decentral Allocation (ADA) scheme ensures that a group of truthfully reporting decentral planners makes quasi-coordinated decisions in support of overall profit-maximization in the hierarchy. The new scheme outperforms existing simple rules by a large margin and comes close to the first-best theoretical solution under a central planner and central information.
Supply chain management; Demand fulfillment; Allocation planning; Customer hierarchies; Customer heterogeneity;
http://www.sciencedirect.com/science/article/pii/S0377221715003811
Vogel, Sebastian
Meyr, Herbert
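Theil's inequality index, one of the econometric ingredients of the ADA scheme, is simple to compute. A minimal sketch (the allocation scheme itself uses it differently, on the customer tree):

```python
import numpy as np

def theil_index(x):
    """Theil's T inequality index: (1/n) * sum (x_i/mu) * ln(x_i/mu),
    where mu is the mean of x.  Zero for perfectly equal allocations,
    strictly positive otherwise."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))
```

Equal quotas give an index of zero; any dispersion pushes it above zero.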
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:20-332015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:20-33
article
Solving stochastic resource-constrained project scheduling problems by closed-loop approximate dynamic programming
Project scheduling problems with both resource constraints and uncertain task durations have applications in a variety of industries. While the existing literature has focused on finding an a priori open-loop task sequence that minimizes the expected makespan, finding a dynamic and adaptive closed-loop policy has been regarded as computationally intractable. In this research, we develop effective and efficient approximate dynamic programming (ADP) algorithms based on the rollout policy for this category of stochastic scheduling problems. To enhance the rollout algorithm, we employ constraint programming (CP) to improve the base policy offered by a priority-rule heuristic. We further devise a hybrid ADP framework that integrates both look-back and look-ahead approximation architectures, simultaneously achieving the quality of a rollout (look-ahead) policy, which sequentially improves a task sequence, and the efficiency of a lookup-table (look-back) approach. Computational results on benchmark instances show that our hybrid ADP algorithm obtains solutions competitive with state-of-the-art algorithms in reasonable computational time. It performs particularly well on instances with non-symmetric probability distributions of task durations.
Resource-constrained project scheduling; Uncertain task durations; Stochastic scheduling; Approximate dynamic programming; Simulation;
http://www.sciencedirect.com/science/article/pii/S037722171500288X
Li, Haitao
Womer, Norman K.
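The rollout idea at the core of the ADP scheme can be illustrated on a deterministic toy problem. This is a sketch with hypothetical data (single-machine weighted completion time, trivial base policy), not the paper's stochastic RCPSP with a CP-improved priority-rule base policy:

```python
def schedule_cost(seq, dur, w):
    """Total weighted completion time of a job sequence on one machine."""
    t = cost = 0
    for j in seq:
        t += dur[j]
        cost += w[j] * t
    return cost

def rollout(dur, w):
    """One-step rollout over a trivial base policy (schedule the
    remaining jobs in index order): at each step, commit the job whose
    base-policy completion of the remainder is cheapest.  The rollout
    sequence is by construction never worse than the base policy."""
    remaining = list(range(len(dur)))
    seq = []
    while remaining:
        def lookahead(j):
            rest = [k for k in remaining if k != j]
            return schedule_cost(seq + [j] + rest, dur, w)
        best = min(remaining, key=lookahead)
        seq.append(best)
        remaining.remove(best)
    return seq
```

On durations (3, 1, 2) with weights (1, 10, 1), the base order costs 49 while the rollout sequence [1, 2, 0] costs 19.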
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:582-5962015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:582-596
article
Methods for solving the mean query execution time minimization problem
One of the most significant and common techniques to accelerate user queries in multidimensional databases is view materialization. The problem of choosing an appropriate part of the data structure for materialization under limited resources is known as the view selection problem. In this paper, the problem of minimizing the mean query execution time under limited storage space is studied. Different heuristics based on a greedy method are examined, proofs regarding their performance are presented, and modifications are proposed that not only improve the solution cost but also shorten the running time. Additionally, the heuristics and a widely used Integer Programming solver are experimentally compared with respect to running time and solution cost. What distinguishes this comparison is its comprehensiveness, obtained through the use of performance profiles. Two computational-effort reduction schemes, which significantly accelerate both heuristics and optimal algorithms without increasing the value of the cost function, are also proposed. The experiments were performed on a large dataset, with special attention to large problems, rarely considered in previous experiments. The main disadvantage of the greedy method indicated in the literature was its long running time. The results of the conducted experiments show that the modification of the greedy algorithm, together with the computational-effort reduction schemes presented in this paper, yields a method that finds a solution in a short time, even for large lattices.
Decision support systems; Heuristics; OLAP; View materialization; View selection problem;
http://www.sciencedirect.com/science/article/pii/S0377221715003343
Łatuszko, Marek
Pytlak, Radosław
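A benefit-per-unit-space greedy of the kind analyzed for view selection can be sketched as follows. This is an illustrative knapsack-style variant with hypothetical view data; real view-selection benefits depend on the view lattice and query workload:

```python
def greedy_view_selection(views, space_budget):
    """Pick views to materialize greedily by benefit-per-unit-space
    until the storage budget is exhausted.  `views` is a list of dicts
    with keys "name", "benefit" (query-time saving) and "size"."""
    chosen, used = [], 0
    order = sorted(views, key=lambda v: v["benefit"] / v["size"],
                   reverse=True)
    for v in order:
        if used + v["size"] <= space_budget:
            chosen.append(v["name"])
            used += v["size"]
    return chosen
```

With views a (benefit 10, size 5), b (9, 3), c (4, 4) and a budget of 8, the ratios order b before a and c is dropped.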
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:631-6402015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:631-640
article
A multi-step rolled forward chance-constrained model and a proactive dynamic approach for the wheat crop quality control problem
Handling weather uncertainty during the harvest season is an indispensable aspect of seed-gathering activities. More precisely, this study focuses on the multi-period wheat quality control problem during the crop harvest season under meteorological uncertainty. To alleviate the problem's curse of dimensionality and to faithfully reflect exogenous uncertainties revealed progressively over time, we propose a multi-step joint chance-constrained model rolled forward step by step. This model is then solved by a proactive dynamic approach conceived specifically for this purpose. On instances derived from the real world, the computational results exhibit proactive and accurate harvest scheduling solutions for the wheat crop quality control problem.
OR in agriculture; Multi-step joint chance constrained programming; Proactive dynamic approach; Exogenous Markov decision process; Wheat crop quality control;
http://www.sciencedirect.com/science/article/pii/S0377221715003689
Borodin, Valeria
Bourtembourg, Jean
Hnaien, Faicel
Labadie, Nacima
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:450-4612015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:450-461
article
A frontier measure of U.S. banking competition
The three main measures of competition (HHI, Lerner index, and H-statistic) are uncorrelated for U.S. banks. We investigate why this occurs, propose a frontier measure of competition, and apply it to five major bank service lines. Fee-based banking services comprise 35 percent of bank revenues, so assessing competition by service line is preferable to extending a single measure for traditional activities to the entire bank. As the Lerner index and the H-statistic together explain only 1 percent of HHI variation, and the HHI is similarly unrelated to the frontier method developed here, current merger/acquisition guidelines should be adjusted, since banking concentration seems unrelated to competition measures that are likely more accurate.
(D) Productivity and competitiveness; Competition; Banks;
http://www.sciencedirect.com/science/article/pii/S0377221715003896
Bolt, Wilko
Humphrey, David
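Two of the three competition measures the paper compares are simple to state. A minimal sketch (the H-statistic requires a revenue regression and is omitted; the frontier measure is the paper's own construction):

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.
    With shares in percent, 10000 corresponds to a monopoly."""
    return sum(s * s for s in market_shares)

def lerner_index(price, marginal_cost):
    """Lerner index (P - MC) / P: zero under perfect competition,
    approaching one with increasing market power."""
    return (price - marginal_cost) / price
```

A symmetric duopoly scores an HHI of 5000; a 20 percent price-cost margin gives a Lerner index of 0.2.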
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:214-2252016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:214-225
article
Ambiguity in risk preferences in robust stochastic optimization
We consider robust stochastic optimization problems for risk-averse decision makers, where there is ambiguity about both the decision maker’s risk preferences and the underlying probability distribution. We propose and analyze a robust optimization problem that accounts for both types of ambiguity. First, we derive a duality theory for this problem class and identify random utility functions as the Lagrange multipliers. Second, we turn to the computational aspects of this problem. We show how to evaluate our robust optimization problem exactly in some special cases, and then we consider some tractable relaxations for the general case. Finally, we apply our model to both the newsvendor and portfolio optimization problems and discuss its implications.
Stochastic dominance; Robust optimization; Expected utility maximization;
http://www.sciencedirect.com/science/article/pii/S0377221716301448
Haskell, William B.
Fu, Lunce
Dessouky, Maged
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:202-2132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:202-213
article
The influence of challenging goals and structured method on Six Sigma project performance: A mediated moderation analysis
Over the past few decades, Six Sigma has diffused to a wide array of organizations across the globe, fueled by its reported financial benefits. Implementing Six Sigma entails carrying out a series of Six Sigma projects that improve business processes. Scholars have investigated some mechanisms that influence project success, such as setting challenging goals and adhering to the Six Sigma method. However, these mechanisms have been studied in a piecemeal fashion, which does not provide a deeper understanding of their interrelationships. Developing such an understanding helps identify the contingency and boundary conditions that influence Six Sigma project execution. Drawing on Sociotechnical Systems theory, this research conceptualizes and empirically examines the interrelationships of the key mechanisms that influence project execution. Specifically, we examine the interrelationship between Six Sigma project goals (Social System), adherence to the Six Sigma method (Technical System), and knowledge creation. The analysis uses a mediation-moderation approach to examine these relationships empirically. The data come from a survey of 324 employees in 102 Six Sigma projects from two organizations. The findings show that project goals and the Six Sigma method can compensate for one another. They also suggest that adherence to the Six Sigma method becomes more beneficial for projects that create substantial knowledge; otherwise, the method becomes less important. Prior research has not examined these contingencies and boundary conditions, which ultimately influence project success.
Six Sigma; Goal theory; Sociotechnical systems theory; Structured method; Mediated moderation;
http://www.sciencedirect.com/science/article/pii/S0377221716301503
Arumugam, V.
Antony, Jiju
Linderman, Kevin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:80-912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:80-91
article
An adaptive large neighborhood search for the two-echelon multiple-trip vehicle routing problem with satellite synchronization
The two-echelon vehicle routing problem (2E-VRP) consists in making deliveries to a set of customers using two distinct fleets of vehicles. First-level vehicles pick up requests at a distribution center and bring them to intermediate sites. At these locations, the requests are transferred to second-level vehicles, which deliver them. This paper addresses a variant of the 2E-VRP that integrates constraints arising in city logistics such as time window constraints, synchronization constraints, and multiple trips at the second level. The corresponding problem is called the two-echelon multiple-trip vehicle routing problem with satellite synchronization (2E-MTVRP-SS). We propose an adaptive large neighborhood search to solve this problem. Custom destruction and repair heuristics and an efficient feasibility check for moves have been designed and evaluated on modified benchmarks for the VRP with time windows.
Routing; Two-echelon VRP; Synchronization; City logistics; Adaptive large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221716301862
Grangier, Philippe
Gendreau, Michel
Lehuédé, Fabien
Rousseau, Louis-Martin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:226-2352016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:226-235
article
Quantifiers induced by subjective expected value of sample information with Bernstein polynomials
A personalized quantifier, the SEVSI-induced quantifier (SEVSI: Subjective Expected Value of Sample Information), is developed in this paper by introducing Bernstein polynomials of higher degree. This provides a novel way to improve the final representation of the quantifier, which generally performed poorly in our previous work, thus enhancing the quality of the global approximation of functions and improving the operability of this kind of quantifier in practice. We show some properties of the developed quantifier and prove the consistency of OWA aggregation guided by this type of quantifier. Finally, we show experimentally that the developed quantifier outperforms the one based on piecewise linear interpolation in both geometrical characteristics and operability. It can thus serve as an effective analytical tool for the complex cases, involving people's personalities or behavioral intentions, that must be considered in decision making under uncertainty.
Uncertainty modeling; Personalized quantifier; Bernstein polynomials; Ordered weighted averaging (OWA) aggregation;
http://www.sciencedirect.com/science/article/pii/S0377221716301436
Guo, Kaihong
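The Bernstein polynomial of degree n underlying the quantifier construction is standard; a minimal sketch:

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial approximation of f on [0, 1]:
        B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
    B_n converges uniformly to f for continuous f; higher degree gives
    the smoother global fits exploited in the paper."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

Bernstein operators reproduce affine functions exactly and interpolate f at the endpoints of [0, 1].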
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:92-1042016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:92-104
article
Efficient inventory control for imperfect quality items
In this paper, we present a general EOQ model for items that are subject to inspection for imperfect quality. Each lot delivered to the sorting facility undergoes 100 percent screening, and the percentage of defective items per lot decreases according to a learning curve. The generality of the model is important from both an academic and a practitioner perspective. The mathematical formulation considers arbitrary functions of time, allowing the decision maker to assess the consequences of a diverse range of strategies with a single inventory model. A rigorous methodology shows that the solution is a unique global optimum, and a general step-by-step solution procedure is presented for continuous intra-cycle periodic review applications. The value of the temperature history and of the flow time through the supply chain is also used to determine an efficient policy. Furthermore, coordination mechanisms that may affect the supplier and the retailer are explored to improve inventory control at both echelons. The paper provides illustrative examples that demonstrate the application of the theoretical model in different settings and lead to interesting managerial insights.
Inventory; Imperfect quality; Deterioration; Perishable items; Periodic review;
http://www.sciencedirect.com/science/article/pii/S0377221716302041
Alamri, Adel A.
Harris, Irina
Syntetos, Aris A.
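For reference, the classic EOQ formula to which such general models reduce in the simplest setting (no defects, constant parameters) is Q* = sqrt(2KD/h). A minimal sketch with hypothetical parameter values, not the paper's time-varying, imperfect-quality formulation:

```python
from math import sqrt

def eoq(K, D, h):
    """Classic economic order quantity: setup cost K per order,
    demand rate D per period, holding cost h per unit per period.
    Minimizes setup-plus-holding cost K*D/Q + h*Q/2 over Q."""
    return sqrt(2 * K * D / h)
```

With K = 100, D = 1000 and h = 5, the optimal lot size is sqrt(40000) = 200 units.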
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:418-4272016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:418-427
article
Value of information in portfolio selection, with a Taiwan stock market application illustration
Despite many proposed alternatives, the predominant model in portfolio selection is still mean–variance. Its main weakness, however, lies in specifying the expected returns of the individual securities involved. If this process is not accurate, the allocations of capital to the different securities will almost certainly be incorrect. If, however, the process can be made accurate, then correct allocations can be made, and the additional expected return that follows is the value of information. This paper therefore proposes a methodology to calculate the value of information; a related notion, the level of disappointment, is also presented. We discuss how value-of-information calculations can help a mutual fund decide how much to set aside for research, with reference to an illustrative Taiwan Stock Exchange application in which the value of information appears to be substantial. Heavy use is made of parametric quadratic programming to keep computation times down.
Efficient points; Portfolio selection; Value of information; Piecewise linear paths; Parametric quadratic programming;
http://www.sciencedirect.com/science/article/pii/S0377221716300315
Kao, Chiang
Steuer, Ralph E.
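As background to the mean–variance machinery, the global minimum-variance portfolio, the one allocation that requires no expected-return estimates at all, has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch (the paper itself traces whole efficient frontiers with parametric quadratic programming):

```python
import numpy as np

def min_variance_weights(cov):
    """Fully-invested portfolio with the smallest variance:
        w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    Independent of the error-prone expected-return estimates whose
    accuracy determines the value of information."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Sigma^{-1} 1 without inverting
    return w / w.sum()
```

For two uncorrelated assets with variances 1 and 4, the weights are inversely proportional to variance: (0.8, 0.2).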
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:113-1262016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:113-126
article
Risk measures and their application to staffing nonstationary service systems
In this paper, we explore the use of static risk measures from the mathematical finance literature to assess the performance of some standard nonstationary queueing systems. To do this we study two important queueing models, namely the infinite server queue and the multi-server queue with abandonment. We derive exact expressions for the value of many standard risk measures for the Mt/M/∞, Mt/G/∞, and Mt/Mt/∞ queueing models. We also derive Gaussian based approximations for the value of risk measures for the Erlang-A queueing model. Unlike more traditional approaches of performance analysis, risk measures offer the ability to satisfy the unique and specific risk preferences or tolerances of service operations managers. We also show how risk measures can be used for staffing nonstationary systems with different risk preferences and assess the impact of these staffing policies via simulation.
Queues and service systems; Risk measures; Healthcare; Time-inhomogeneous Markov processes; Staffing;
http://www.sciencedirect.com/science/article/pii/S0377221716301400
Pender, Jamol
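The Mt/M/∞ queue admits an explicit Poisson marginal: the queue length at time t is Poisson with mean m(t) = ∫₀ᵗ λ(s)e^{−μ(t−s)} ds, so static risk measures reduce to Poisson quantiles. A minimal numerical sketch (function names are illustrative; the paper derives exact expressions and Erlang-A approximations):

```python
import numpy as np
from scipy.stats import poisson

def offered_load(lam, mu, t, steps=100000):
    """Mean queue length of an Mt/M/infinity queue at time t:
        m(t) = int_0^t lam(s) * exp(-mu * (t - s)) ds
    via the trapezoidal rule; `lam` is a vectorized arrival-rate
    function.  The queue length itself is Poisson(m(t))."""
    s = np.linspace(0.0, t, steps)
    vals = lam(s) * np.exp(-mu * (t - s))
    h = t / (steps - 1)
    return float(np.sum((vals[1:] + vals[:-1]) * 0.5 * h))

def queue_var(m, alpha):
    """alpha-level value-at-risk of the Poisson(m) queue length: the
    smallest staffing level exceeded with probability at most 1 - alpha."""
    return int(poisson.ppf(alpha, m))
```

For a constant rate λ = 10 and μ = 1 the integral has the closed form 10(1 − e^{−t}), which the quadrature reproduces.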
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:290-2972016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:290-297
article
Scheduling under linear constraints
We introduce a parallel machine scheduling problem in which the processing times of jobs are not given in advance but are determined by a system of linear constraints. The objective is to minimize the makespan, i.e., the maximum job completion time among all feasible choices. This novel problem is motivated by various real-world application scenarios. We discuss the computational complexity and algorithms for various settings of this problem. In particular, we show that if there is only one machine with an arbitrary number of linear constraints, or there is an arbitrary number of machines with no more than two linear constraints, or both the number of machines and the number of linear constraints are fixed constants, then the problem is polynomial-time solvable via solving a series of linear programming problems. If both the number of machines and the number of constraints are inputs of the problem instance, then the problem is NP-hard. We further propose several approximation algorithms for the latter case.
Parallel machine scheduling; Linear programming; Computational complexity; Approximation algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716300650
Nip, Kameng
Wang, Zhenbo
Wang, Zizhuo
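For the single-machine case, the makespan is simply the sum of the processing times, so minimizing it over the linear constraints is a single LP. A sketch on a hypothetical toy instance:

```python
from scipy.optimize import linprog

# One machine, three jobs: makespan = p1 + p2 + p3.  Hypothetical
# linear constraints on the processing times:
#   min p1 + p2 + p3   s.t.   p1 + p2 >= 4,   p2 + p3 >= 5,   p >= 0
c = [1.0, 1.0, 1.0]
A_ub = [[-1.0, -1.0, 0.0],      # -(p1 + p2) <= -4
        [0.0, -1.0, -1.0]]      # -(p2 + p3) <= -5
b_ub = [-4.0, -5.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
makespan = res.fun               # optimum: p = (0, 5, 0)
```

Setting p2 = 5 satisfies both constraints, and p2 + p3 ≥ 5 bounds the makespan below by 5, so the LP optimum is 5.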
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:29-392016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:29-39
article
KKT optimality conditions in interval valued multiobjective programming with generalized differentiable functions
We devote this paper to the study of a class of interval valued multiobjective programming problems. To this end, we consider two order relations, LU and LS, on the set of all closed intervals and propose several concepts of Pareto optimal solutions. Based on convexity concepts (viz. LU- and LS-convexity) and generalized differentiability (viz. gH-differentiability) of interval valued functions, the KKT optimality conditions for the aforesaid problems are obtained. In addition, we compare our results with those given in Wu (2009) and show some advantages of our results. The theoretical development is illustrated by suitable examples.
Interval valued functions; gH-differentiability; LU- and LS-convex functions; Pareto optimal solutions; KKT optimality conditions;
http://www.sciencedirect.com/science/article/pii/S0377221716301886
Singh, D.
Dar, B.A.
Kim, D.S.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:825-8422016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:825-842
article
The Hybrid Electric Vehicle – Traveling Salesman Problem
Reducing carbon dioxide levels by using hybrid electric vehicles is an ongoing endeavor. Although this development is quite advanced for hybrid electric passenger cars, small transporters and trucks lag far behind. We address this challenge by introducing a new optimization problem that describes the delivery of goods with a hybrid electric vehicle to a set of customer locations. The Hybrid Electric Vehicle – Traveling Salesman Problem extends the well-known Traveling Salesman Problem by adding different modes of operation for the vehicle, causing different costs and driving times for each arc within a delivery network.
Travelling salesman; Hybrid electric vehicles; Transportation;
http://www.sciencedirect.com/science/article/pii/S0377221716301163
Doppstadt, C.
Koberstein, A.
Vigo, D.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:697-7102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:697-710
article
A data analytic approach to forecasting daily stock returns in an emerging market
Forecasting stock market returns is a challenging task due to the complex nature of the data. This study develops a generic methodology to predict daily stock price movements by deploying and integrating three data analytical prediction models: adaptive neuro-fuzzy inference systems, artificial neural networks, and support vector machines. The proposed approach is tested on the Borsa Istanbul BIST 100 Index over an 8 year period from 2007 to 2014, using accuracy, sensitivity, and specificity as metrics to evaluate each model. Using a ten-fold stratified cross-validation to minimize the bias of random sampling, this study demonstrates that the support vector machine outperforms the other models. For all three predictive models, accuracy in predicting down movements in the index outweighs accuracy in predicting the up movements. The study yields more accurate forecasts with fewer input factors compared to prior studies of forecasts for securities trading on Borsa Istanbul. This efficient yet also effective data analytic approach can easily be applied to other emerging market stock return series.
Prediction/forecasting; Stock market return; Business analytics; Borsa Istanbul (BIST 100); Istanbul Stock Exchange (ISE);
http://www.sciencedirect.com/science/article/pii/S0377221716301096
Oztekin, Asil
Kizilaslan, Recep
Freund, Steven
Iseri, Ali
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:659-6722016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:659-672
article
A model for clustering data from heterogeneous dissimilarities
Clustering algorithms partition a set of n objects into p groups (called clusters) such that objects assigned to the same group are homogeneous according to some criteria. To derive these clusters, the required data input is often a single n × n dissimilarity matrix. Yet in many applications more than one instance of the dissimilarity matrix is available, and to conform to model requirements it is common practice to aggregate (e.g., sum or average) the matrices. This aggregation masks the true nature of the original data in the resulting clustering solutions. In this paper we introduce a clustering model that handles the heterogeneity by using all available dissimilarity matrices and identifying groups of individuals who cluster the objects in a similar way. The model is a nonconvex problem that is difficult to solve exactly, so we introduce a Variable Neighborhood Search heuristic to provide solutions efficiently. Computational experiments and an empirical application to the perception of chocolate candy show that the heuristic is efficient and that the proposed model is well suited to recovering heterogeneous data. Implications for clustering researchers are discussed.
Data mining; Clustering; Heterogeneity; Optimization; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301618
Santi, Éverton
Aloise, Daniel
Blanchard, Simon J.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:489-5022016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:489-502
article
A DEA based composite measure of quality and its associated data uncertainty interval for health care provider profiling and pay-for-performance
Composite measures calculated from individual performance indicators are increasingly used to profile and reward health care providers. We illustrate an innovative way of using Data Envelopment Analysis (DEA) to create a composite measure of quality for profiling facilities, informing consumers, and pay-for-performance programs. We compare DEA results to several widely used alternative approaches for creating composite measures: opportunity-based-weights (OBW, a form of equal weighting) and a Bayesian latent variable model (BLVM, where weights are driven by variances of the individual measures). Based on point estimates of the composite measures, to a large extent the same facilities appear in the top decile. However, when high performers are identified because the lower limits of their interval estimates are greater than the population average (or, in the case of the BLVM, the upper limits are less), there are substantial differences in the number of facilities identified: OBWs, the BLVM and DEA identify 25, 17 and 5 high-performers, respectively. With DEA, where every facility is given the flexibility to set its own weights, it becomes much harder to distinguish the high performers. In a pay-for-performance program, the different approaches result in very different reward structures: DEA rewards a small group of facilities a larger percentage of the payment pool than the other approaches. Finally, as part of the DEA analyses, we illustrate an approach that uses Monte Carlo resampling with replacement to calculate interval estimates by incorporating uncertainty in the data generating process for facility input and output data. This approach, which can be used when data generating processes are hierarchical, has the potential for wider use than in our particular application.
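The interval-estimation step rests on resampling with replacement. A generic percentile-bootstrap sketch (the scores, sample size, and parameters are invented; this is not the paper's hierarchical data-generating procedure):

```python
import random

def bootstrap_interval(scores, n_boot=2000, alpha=0.05, seed=42):
    """Percentile interval for the mean via resampling with replacement."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        # Draw a resample of the same size, with replacement.
        sample = [rng.choice(scores) for _ in scores]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Invented facility quality scores, for illustration only:
scores = [0.62, 0.71, 0.55, 0.80, 0.66, 0.74, 0.59, 0.69]
lo, hi = bootstrap_interval(scores)
```

A facility would then be flagged as a high performer only if `lo` exceeds the population average, which is why interval-based identification is more conservative than ranking point estimates.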
Data Envelopment Analysis (DEA); Health care quality; Monte Carlo; Bootstrapping; Performance;
http://www.sciencedirect.com/science/article/pii/S0377221716301023
Shwartz, Michael
Burgess, James F.
Zhu, Joe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:524-5412016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:524-541
article
From stakeholders analysis to cognitive mapping and Multi-Attribute Value Theory: An integrated approach for policy support
One of the fundamental features of policy processes in contemporary societies is complexity. It follows from the plurality of points of view actors adopt in their interventions, and from the plurality of criteria upon which they base their decisions. In this context, collaborative multicriteria decision processes seem to be appropriate to address part of the complexity challenge. This study discusses a decision support framework that guides policy makers in their strategic decisions by using a multi-method approach based on the integration of three tools, i.e., (i) stakeholders analysis, to identify the multiple interests involved in the process, (ii) cognitive mapping, to define the shared set of objectives for the analysis, and (iii) Multi-Attribute Value Theory, to measure the level of achievement of the previously defined objectives by the policy options under investigation. The integrated decision support framework has been tested on a real world project concerning the location of new parking areas in a UNESCO site in Southern Italy. The purpose of this study was to test the operability of an integrated analytical approach to support policy decisions by investigating the combined and synergistic effect of the three aforementioned tools. The ultimate objective was to propose policy recommendations for a sustainable parking area development strategy in the region under consideration. The obtained results illustrate the importance of integrated approaches for the development of accountable public decision processes and consensus policy alternatives. The proposed integrated methodological framework will, hopefully, stimulate the application of other collaborative decision processes in public policy making.
Multiple criteria analysis; Decision analysis; Group decision and negotiations; Decision processes; Policy analytics;
http://www.sciencedirect.com/science/article/pii/S0377221716301072
Ferretti, Valentina
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:253-2682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:253-268
article
Forward thresholds for operation of pumped-storage stations in the real-time energy market
Pumped-storage hydroelectric plants are very valuable assets on the electric grid and in electric markets as they are able to pump and store water for generation, thus allowing for grid-level storage. Within the realm of short-term energy markets, we present a model for determining forward-looking thresholds for making generation and pumping decisions at such plants. A multistage stochastic programming framework is developed to optimize the thresholds with uncertain system prices over the next three days. Tractability issues are discussed and a novel method based on an implementation of the scatter search algorithm is proposed. Given the size of the multistage stochastic programming formulation, we argue that this novel method is a more accurate representation of the decision process. We demonstrate model stability and quality, and show that the forward thresholds obtained using a stochastic programming framework outperform the forward thresholds from a deterministic model, and thus can lead to efficiency gains for both the generation unit owner and the overall system in the real-time market.
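A threshold policy of the kind optimized above can be sketched as a simple back-test (prices, round-trip efficiency, and reservoir limits are invented; the paper's model is multistage stochastic and far richer):

```python
import random

def run_policy(prices, p_pump, p_gen, cap=10.0, rte=0.75):
    """Pump when the price is at or below p_pump, generate when it is at or
    above p_gen, subject to reservoir limits. Returns total profit."""
    level, profit = cap / 2, 0.0          # start half full
    for p in prices:
        if p <= p_pump and level < cap:   # cheap: buy energy, store water
            level += 1.0
            profit -= p / rte             # round-trip efficiency loss on pumping
        elif p >= p_gen and level >= 1.0: # expensive: release water, sell energy
            level -= 1.0
            profit += p
    return profit

# Invented real-time prices, uniform on [10, 60] $/MWh:
rng = random.Random(0)
prices = [rng.uniform(10, 60) for _ in range(500)]
profit = run_policy(prices, p_pump=20.0, p_gen=45.0)
```

The paper's contribution is choosing the thresholds optimally under price uncertainty; here they are fixed by hand purely to show the operating logic.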
Stochastic programming; OR in energy; Large scale optimization; Metaheuristics; Energy markets;
http://www.sciencedirect.com/science/article/pii/S0377221716301485
Vojvodic, Goran
Jarrah, Ahmad I.
Morton, David P.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:312-3192016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:312-319
article
Estimating the hyperbolic distance function: A directional distance function approach
Färe, Grosskopf, and Lovell (1985) merged Farrell’s input- and output-oriented technical efficiency measures into a new graph-type approach known as the hyperbolic distance function (HDF). In spite of its appealing special structure, which allows for a simultaneous and equiproportionate reduction in inputs and increase in outputs, the HDF is a non-linear optimization problem that is hard to solve, particularly when dealing with technologies operating under variable returns to scale. By connecting the HDF to the directional distance function, we propose a linear programming based procedure for estimating the exact value of the HDF within the non-parametric framework of data envelopment analysis. We illustrate the computational effectiveness of the algorithm on several real-world and simulated data sets, obtaining the optimal value of the HDF by solving, in general, at most two linear programs. Moreover, our approach has several desirable properties: (1) it introduces a computational dual formulation for the HDF and provides an economic interpretation in terms of shadow prices; (2) it is readily adaptable to measure hyperbolic-oriented super-efficiency; and (3) it is flexible enough to deal with HDF-based efficiency measures on environmental technologies.
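In standard DEA notation (ours, not necessarily the paper's exact formulation), the two functions and the resulting linear program can be sketched as follows. For an observed unit $(x_0, y_0)$ and technology $T$,

```latex
D_H(x_0, y_0) = \min\{\theta > 0 : (\theta x_0,\; y_0/\theta) \in T\},
\qquad
\vec{D}(x_0, y_0; g_x, g_y) = \max\{\beta : (x_0 - \beta g_x,\; y_0 + \beta g_y) \in T\}.
```

Under variable returns to scale the directional distance function is the linear program

```latex
\max_{\beta,\,\lambda}\ \beta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_j \le x_0 - \beta g_x,
\quad
\sum_{j=1}^{n} \lambda_j y_j \ge y_0 + \beta g_y,
\quad
\sum_{j=1}^{n} \lambda_j = 1,
\quad \lambda \ge 0,
```

whereas the HDF contracts inputs by $\theta$ and expands outputs by $1/\theta$ simultaneously, which is precisely what makes it non-linear and motivates the paper's reduction to (at most two) linear programs.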
Efficiency measurement; Data envelopment analysis; Hyperbolic distance function; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221716301916
Färe, Rolf
Margaritis, Dimitris
Rouse, Paul
Roshdi, Israfil
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:383-3912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:383-391
article
Optimal production planning for assembly systems with uncertain capacities and random demand
We study the optimal production planning for an assembly system consisting of n components in a single-period setting. Demand for the end-product is random, and production and assembly capacities are uncertain due to unexpected breakdowns, repairs, reworks, etc. The cost-minimizing firm (she) plans component production before the production capacities are realized and, after the outputs of the components are observed, decides the assembly amount before the demand realization. We start with a simplified system that sells two complementary products without an assembly stage and find that the firm's best choices can only be: (a) producing no products, or producing only the product with less stock such that its target amount is not higher than the other product's initial stock level, or (b) producing both products such that their target amounts are equal. Leveraging these findings, the two-dimensional optimization problem is reduced to two single-dimensional sub-problems and the optimal solution is characterized. For a general assembly system with n components, we show that if the firm initially has more end-products than a certain level, she will neither produce any component nor assemble the end-product; if she does not have that many end-products but does have enough mated components, she will produce nothing and assemble up to that level; otherwise she will try to assemble all mated components and plan component production accordingly. We characterize the structure of optimal solutions and find the solutions analytically.
Supply chain management; Assembly system; Uncertain capacity; Production planning;
http://www.sciencedirect.com/science/article/pii/S0377221716300583
Ji, Qingkai
Wang, Yunzeng
Hu, Xiangpei
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:681-6962016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:681-696
article
Unpacking multimethodology: Impacts of a community development intervention
Multimethodology interventions are being increasingly employed by operational researchers to cope with the complexity of real-world problems. In keeping with recent calls for more research into the ‘realised’ impacts of multimethodology, we present a detailed account of an intervention to support the planning of business ideas by a management team working in a community development context. Drawing on the rich stream of data gathered during the intervention, we identify a range of cognitive, task and relational impacts experienced by the management team during the intervention. These impacts are the basis for developing a process model that accounts for the personal, social and material changes reported by those involved in the intervention. The model explains how the intervention's analytic and relational capabilities incentivise the interplay of participants’ decision making efforts and integrative behaviours underpinning reported intervention impacts and change. Our findings add much-needed empirical case material to enrich further our understanding of the realised impacts of operational research interventions in general, and of multimethodology interventions in particular.
Decision processes; Problem structuring; Multimethodology; Intervention; Impacts;
http://www.sciencedirect.com/science/article/pii/S0377221716300972
Henao, Felipe
Franco, L. Alberto
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:265-2792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:265-279
article
A cycle-based evolutionary algorithm for the fixed-charge capacitated multi-commodity network design problem
This paper presents an evolutionary algorithm for the fixed-charge multicommodity network design problem (MCNDP), which concerns routing multiple commodities from origins to destinations by designing a network through selecting arcs, with an objective of minimizing the fixed costs of the selected arcs plus the variable costs of the flows on each arc. The proposed algorithm evolves a pool of solutions using principles of scatter search, interlinked with an iterated local search as an improvement method. New cycle-based neighborhood operators are presented which enable complete or partial re-routing of multiple commodities. An efficient perturbation strategy, inspired by ejection chains, is introduced to perform local compound cycle-based moves to explore different parts of the solution space. The algorithm also allows infeasible solutions violating arc capacities while performing the “ejection cycles”, and subsequently restores feasibility by systematically applying correction moves. Computational experiments on benchmark MCNDP instances show that the proposed solution method consistently produces high-quality solutions in reasonable computational times.
Multi-commodity network design; Scatter search; Evolutionary algorithms; Ejection chains; Iterated local search;
http://www.sciencedirect.com/science/article/pii/S0377221716000072
Paraskevopoulos, Dimitris C.
Bektaş, Tolga
Crainic, Teodor Gabriel
Potts, Chris N.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:625-6382016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:625-638
article
Hub and Chain: Process Flexibility Design in Non-Identical Systems Using Variance Information
In multi-product multi-plant manufacturing systems, process flexibility is the ability to produce different types of products in the same manufacturing plant or production line. While several design methods and flexibility indices have been proposed in the literature on how to design process flexibility, most of the insights generated are focused on identical production systems whereby all plants have the same capacity and all products have identically distributed demands. In this paper, we examine the process flexibility design problem for non-identical systems. We first study the effect of non-identical demand distributions on the performance of the well-known long chain design, and discover three interesting insights: (1) products with low demand mean will create a bottleneck effect, (2) products with low demand variance will result in inefficient utilization of flexibility links, and (3) long chain efficiency decreases in demand variance of any product, hence the need to provide this product with access to more capacity. Using these insights, we develop the variance-based hub-and-chain method (VHC), a simple and graphically intuitive method which decomposes the long chain into smaller chains, one of which will serve as a hub to which the other chains will be connected. Numerical tests show that VHC outperforms the long chain by 15% on average and outperforms the constraint sampling method by 38% on average. Lastly, we implement VHC on a case study in the edible oil industry in China and find substantial benefits. We then summarize with some managerial insights.
Process flexibility; Chaining strategy; Stochastic maximum flow; Demand variance;
http://www.sciencedirect.com/science/article/pii/S0377221716301473
Chua, Geoffrey A.
Chen, Shaoxiang
Han, Zhiguang
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:179-1872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:179-187
article
Hierarchical outcomes and collusion neutrality on networks
We investigate TU-game solutions that are neutral to collusive agreements among players. A collusive agreement binds collusion members to act as a single player and is feasible when they are connected on a network. Collusion neutrality requires that no feasible collusive agreement can change the total payoff of collusion members. We show that on the domain of network games, there is a solution satisfying collusion neutrality, efficiency and null-player property if and only if the network is a tree. Considering a tree network, we show that affine combinations of hierarchical outcomes (Demange, 2004; van den Brink, 2012) are the only solutions satisfying the three axioms together with linearity. As corollaries, we establish characterizations of the average tree solution (equally weighted average of hierarchical outcomes); one established earlier in the literature and the others new.
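A hierarchical outcome is easy to compute on a small example. A sketch for an invented four-player game on a tree rooted at player 1 (the game and tree are our illustrative choices, not from the paper):

```python
# Hierarchical outcome (Demange, 2004) for a TU-game on a rooted tree:
# player i receives v(F_i) - sum over children j of v(F_j), where F_i is
# the set of i's descendants in the rooted tree, including i itself.

# Tree rooted at 1:  edges 1-2, 1-3, 3-4, stored as children lists.
children = {1: [2, 3], 2: [], 3: [4], 4: []}

def v(coalition):
    # An invented symmetric TU-game: worth = |S|^2.
    return len(coalition) ** 2

def descendants(i):
    out = {i}
    for j in children[i]:
        out |= descendants(j)
    return out

payoff = {i: v(descendants(i)) - sum(v(descendants(j)) for j in children[i])
          for i in children}
```

Each hierarchical outcome is efficient (the payoffs sum to v(N)); the average tree solution mentioned in the abstract is the equally weighted average of these vectors over all choices of root.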
Game theory; Hierarchical outcomes; Collusion neutrality; TU-game; Network game;
http://www.sciencedirect.com/science/article/pii/S0377221716301394
Park, Junghum
Ju, Biung-Ghi
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:68-792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:68-79
article
A service network design model for multimodal municipal solid waste transport
A modal shift from road transport towards inland water or rail transport could reduce the total greenhouse gas emissions and societal impact associated with Municipal Solid Waste management. However, this shift will take place only if demonstrated to be at least cost-neutral for the decision makers. In this paper we examine the feasibility of using multimodal truck and inland water transport, instead of truck transport, for shipping separated household waste in bulk from collection centres to waste treatment facilities. We present a dynamic tactical planning model that minimises the sum of transportation costs, external environmental and societal costs. The Municipal Solid Waste Service Network Design Problem allocates Municipal Solid Waste volumes to transport modes and determines transportation frequencies over a planning horizon. This generic model is applied to a real-life case in Flanders, the northern region of Belgium. Computational results show that multimodal truck and inland water transportation can compete with truck transport by avoiding or reducing transhipments and using barge convoys.
Solid Waste Management; Supply chain management; OR in societal problem analysis; Linear Programming; Networks;
http://www.sciencedirect.com/science/article/pii/S0377221716301643
Inghels, Dirk
Dullaert, Wout
Vigo, Daniele
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:843-8552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:843-855
article
Progressive hedging applied as a metaheuristic to schedule production in open-pit mines accounting for reserve uncertainty
Scheduling production in open-pit mines is characterized by uncertainty about the metal content of the orebody (the reserve) and leads to a complex large-scale mixed-integer stochastic optimization problem. In this paper, a two-phase solution approach based on Rockafellar and Wets’ progressive hedging algorithm (PH) is proposed. PH is used in phase I where the problem is first decomposed by partitioning the set of scenarios modeling metal uncertainty into groups, and then the sub-problems associated with each group are solved iteratively to drive their solutions to a common solution. In phase II, a strategy exploiting information obtained during the PH iterations and the structure of the problem under study is used to reduce the size of the original problem, and the resulting smaller problem is solved using a sliding time window heuristic based on a fix-and-optimize scheme. Numerical results show that this approach is efficient in finding near-optimal solutions and that it outperforms existing heuristics for the problem under study.
Open-pit mine production scheduling; Progressive hedging method; Lagrangian relaxation; Sliding time window heuristic; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301357
Lamghari, Amina
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:746-7602016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:746-760
article
Preference stability over time with multiple elicitation methods to support wastewater infrastructure decision-making
We used a multi-method and repeated elicitation approach across different stakeholder groups to explore possible differences in the outcome of an environmental decision. We compared different preference elicitation procedures based on Multi Criteria Decision Analysis (MCDA) over time for a water infrastructure decision in Switzerland. We implemented the SWING and SMART/SWING weight elicitation methods and also compared results with earlier stakeholder interviews. In all procedures, the weights for environmental protection and well-functioning (waste-)water systems were higher than for cost reduction. The SMART/SWING variant produced statistically significantly different weights than SWING. Weights changed over time with both elicitation methods. Weights were more stable with the SWING method, which was also perceived as slightly more difficult than the SMART/SWING variant. We checked whether the difference in weights produced by the two elicitation methods and the difference in their stability affect the ranking of six alternatives. Overall, an unconventional decentralized alternative ranked first or second in 92 percent of all elicitation procedures (online surveys or interviews). For practical decision-making, using multiple methods across different stakeholder groups and repeating elicitation can increase our confidence that the results reflect the true opinions of the decision makers and stakeholders.
Behavioral OR; Weight elicitation; Multiple criteria analysis; Online survey; OR in environment and climate change;
http://www.sciencedirect.com/science/article/pii/S0377221716301382
Lienert, Judit
Duygan, Mert
Zheng, Jun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:777-7902016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:777-790
article
A queueing model for managing small projects under uncertainties
We consider a situation in which a home improvement project contractor has a team of regular crew members who receive compensation even when they are idle. Because both project arrivals and project completion times are uncertain, the contractor needs to manage the utilization of his crews carefully. One common approach adopted by many home improvement contractors is to accept multiple projects to keep crew members busy working on projects to generate positive cash flows. However, this approach has a major drawback because it causes “intentional” (or foreseeable) project delays. Intentional project delays can inflict explicit and implicit costs on the contractor when frustrated customers abandon their projects and/or file complaints or lawsuits. In this paper, we present a queueing model to capture uncertain customer (or project) arrivals and departures, along with the possibility of customer abandonment. Also, associated with each admission policy (i.e., the maximum number of projects that the contractor will accept), we model the underlying tradeoff between accepting too many projects (which can increase customer dissatisfaction) and accepting too few projects (which can reduce crew utilization). We examine this tradeoff analytically so as to determine the optimal admission policy and the optimal number of crew members. We further apply our model to analyze other issues including worker productivity and project pricing. Finally, our model can be extended to allow for multiple classes of projects with different types of crew members.
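The admission tradeoff can be sketched with a textbook birth–death model: an M/M/c queue with exponential abandonment and an admission cap K. All rates below are invented and the paper's model is richer; this only illustrates the direction of the tradeoff.

```python
def stationary(lam, mu, c, eta, K):
    """Stationary distribution of an M/M/c queue with per-waiting-customer
    abandonment rate eta and admission cap K, via the balance recursion
    pi[n+1] = pi[n] * lam / d(n+1), then normalization."""
    pi = [1.0]
    for n in range(K):
        dep = min(n + 1, c) * mu + max(n + 1 - c, 0) * eta
        pi.append(pi[-1] * lam / dep)
    s = sum(pi)
    return [p / s for p in pi]

def performance(lam, mu, c, eta, K):
    pi = stationary(lam, mu, c, eta, K)
    util = sum(min(n, c) * pi[n] for n in range(K + 1)) / c     # crew utilization
    abandon = eta * sum(max(n - c, 0) * pi[n]
                        for n in range(K + 1)) / lam            # abandoned share
    blocked = pi[K]                                             # turned-away share
    return util, abandon, blocked

# Invented rates: 2 arrivals/week, 1 completion/crew/week, 2 crews,
# abandonment rate 0.5 per waiting project.
u4, a4, _ = performance(lam=2.0, mu=1.0, c=2, eta=0.5, K=4)
u8, a8, _ = performance(lam=2.0, mu=1.0, c=2, eta=0.5, K=8)
```

Raising the cap from K = 4 to K = 8 raises crew utilization but also the abandonment probability, which is exactly the tradeoff the optimal admission policy balances.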
Project management; Multi-projects; Queueing models; Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301059
Bai, Jiaru
So, Kut C.
Tang, Christopher
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:880-8872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:880-887
article
From partial derivatives of DEA frontiers to marginal products, marginal rates of substitution, and returns to scale
The characterization of a technology, from an economic point of view, often uses the first derivatives of either the transformation or the production function. In a parametric setting, these quantities are readily available as they can be easily deduced from the first derivatives of the specified function. In the standard framework of data envelopment analysis (DEA) models these quantities are not so easily obtained. The difficulty resides in the fact that marginal changes of inputs and outputs might affect the position of the frontier itself, while the calculation of first derivatives for economic purposes assumes that the frontier is held constant. We develop a procedure to recover the first derivatives of transformation functions in DEA models and show how to circumvent the problem of the (marginal) shift of the frontier. We show how the first derivatives of the frontier estimated by DEA can be used to deduce and compute marginal products, marginal rates of substitution, and returns to scale for each decision making unit (DMU) in the sample.
Data envelopment analysis; Marginal products; Transformation function; First derivatives;
http://www.sciencedirect.com/science/article/pii/S037722171630073X
Ouellette, Pierre
Vigeant, Stéphane
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:761-7762016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:761-776
article
On consumer preferences and the willingness to pay for term life insurance
We run a choice-based conjoint (CBC) analysis for term life insurance on a sample of 2017 German consumers using data from web-based experiments. Individual-level part-worth profiles are estimated by means of a hierarchical Bayes model. Drawing on the elicited preference structures, we then compute relative attribute importances and different willingness to pay measures. In addition, we present comprehensive simulation results for a realistic competitive setting that allows us to assess product switching as well as market expansion effects. On average, brand, critical illness cover, and underwriting procedure turn out to be the most important nonprice product attributes. Hence, if a policy comprises their favored specifications, customers accept substantial markups in the monthly premium. Furthermore, preferences vary considerably across the sample. While some individuals are prepared to pay relatively high monthly premiums, a large fraction exhibits no willingness to pay for term life insurance at all, presumably due to the absence of a need for mortality risk coverage. We also illustrate that utility-driven product optimization is well-suited to gain market shares, avoid competitive price pressure, and access additional profit potential. Finally, based on estimated demand sensitivities and a set of cost assumptions, it is shown that insurers require an in-depth understanding of preferences to identify the profit-maximizing price.
Preferences; Willingness to pay; Term life insurance; Choice-based conjoint analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716300601
Braun, Alexander
Schmeiser, Hato
Schreiber, Florian
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:338-3462016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:338-346
article
The weighted additive distance function
Distance functions in production theory are mathematical structures that characterize membership in the reference technology through a numerical value, behave as technical efficiency measures when the focus is on analyzing an observed input–output vector within its production possibility set, and present a dual relationship with some support function (the profit, revenue, or cost function). In this paper, we endow the well-known weighted additive models in Data Envelopment Analysis with a distance function structure, introducing the Weighted Additive Distance Function and showing its main properties.
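For reference, the standard weighted additive model (under variable returns to scale, in our notation rather than necessarily the paper's) that is here endowed with a distance-function structure reads, for an observed unit $(x_0, y_0)$ with $m$ inputs and $s$ outputs:

```latex
\max_{\lambda,\, s^-,\, s^+}\ \sum_{i=1}^{m} w_i^- s_i^- + \sum_{r=1}^{s} w_r^+ s_r^+
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} = x_{i0} - s_i^-, \;\; i = 1, \dots, m,
```

```latex
\sum_{j=1}^{n} \lambda_j y_{rj} = y_{r0} + s_r^+, \;\; r = 1, \dots, s,
\qquad
\sum_{j=1}^{n} \lambda_j = 1,
\qquad
\lambda,\, s^-,\, s^+ \ge 0,
```

where $s^-$ and $s^+$ are input and output slacks and $w^-$, $w^+$ are the user-chosen weights. The optimal weighted sum of slacks is the value the paper reinterprets as a distance function with a dual link to the profit function.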
Data envelopment analysis; Distance functions; Weighted additive model; Profit function;
http://www.sciencedirect.com/science/article/pii/S0377221716302259
Aparicio, Juan
Pastor, Jesus T.
Vidal, Fernando
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:169-1782016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:169-178
article
A multi-agent based cooperative approach to scheduling and routing
In this paper, we propose a general agent-based distributed framework where each agent implements a different metaheuristic/local search combination. Moreover, each agent continuously adapts itself during the search process using a direct cooperation protocol based on reinforcement learning and pattern matching. Good patterns that make up improving solutions are identified and shared by the agents. This agent-based system aims to provide a modular, flexible framework to deal with a variety of different problem domains. We have evaluated the performance of this approach using the proposed framework, which embodies a set of well-known metaheuristics with different configurations as agents, on two problem domains: Permutation Flow-shop Scheduling and Capacitated Vehicle Routing. The results show the success of the approach, yielding three new best known results on the Capacitated Vehicle Routing benchmarks tested, whilst the results for Permutation Flow-shop Scheduling are commensurate with the best known values for all the benchmarks tested.
Combinatorial optimization; Scheduling; Vehicle routing; Metaheuristics; Cooperative search;
http://www.sciencedirect.com/science/article/pii/S0377221716300984
Martin, Simon
Ouelhadj, Djamila
Beullens, Patrick
Ozcan, Ender
Juan, Angel A.
Burke, Edmund K.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:791-8102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:791-810
article
Multicriteria decision support to evaluate potential long-term natural gas supply alternatives: The case of Greece
This paper assesses 27 alternative natural gas supply corridors for the case of Greece, according to a multicriteria analysis approach based on three main pillars: (1) economics of supply, (2) security of supply, and (3) cooperation between countries. The alternatives include onshore and offshore pipeline corridors and LNG shipping, determined after exhaustive investigation of all possible existing and future routes, taking into consideration all possible natural gas infrastructure development projects around Greece. A multicriteria additive value system is assessed via the robust ordinal regression methodology, aiming to support the national energy policy makers to devise favorable strategies, concerning both long-term national natural gas supplies and infrastructure developments. The obtained ranking shows that noticeable alternative corridors for gas passage to Greece do exist both in terms of maritime transport of LNG and in terms of potential future pipeline infrastructure projects.
Multiple criteria decision analysis; Natural gas supply; Energy policy; Robustness; Greece;
http://www.sciencedirect.com/science/article/pii/S0377221716301047
Androulaki, Stella
Psarras, John
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:279-2932016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:279-293
article
A multi-objective model for locating search and rescue boats
We present the Incident Based-Boat Allocation Model (IB-BAM), a multi-objective model designed to allocate search and rescue resources. The decision of where to locate search and rescue boats depends upon a set of criteria that are unique to a given problem, such as the density and types of incidents responded to in the area of interest, resource capabilities, geographical factors and governments' business rules. Thus, traditional models that incorporate only political decisions are no longer appropriate. IB-BAM considers all these criteria and determines optimal boat allocation plans with the objectives of minimizing response time to incidents, fleet operating cost and the mismatch between boats' workload and operation capacity hours.
Integer programming; Resource allocation; Search and rescue;
http://www.sciencedirect.com/science/article/pii/S0377221716301540
Razi, Nasuh
Karatas, Mumtaz
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:161-1682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:161-168
article
Queueing network MAP−(GI/∞)K with high-rate arrivals
An analysis of the open queueing network MAP−(GI/∞)K is presented in this paper. The MAP−(GI/∞)K network implements Markov routing, general service time distribution, and an infinite number of servers at each node. Analysis is performed under the condition of a growing fundamental rate for the Markovian arrival process. It is shown that the stationary probability distribution of the number of customers at the nodes can be approximated by multi-dimensional Gaussian distribution. Parameters of this distribution are presented in the paper. Numerical results validate the applicability of the obtained approximations under relevant conditions. The results of the approximations are applied to estimate the optimal number of servers for a network with finite-server nodes. In addition, an approximation of higher-order accuracy is derived.
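The single-node intuition behind the Gaussian limit: in an M/GI/∞ node the stationary customer count is Poisson with mean ρ = λE[S], which for a high fundamental rate is well approximated by a Normal(ρ, ρ) distribution. A quick numerical check (the parameters are invented; the paper treats the full network case):

```python
import math

def poisson_pmf(k, rho):
    # Stable evaluation of e^{-rho} rho^k / k! via logs.
    return math.exp(-rho + k * math.log(rho) - math.lgamma(k + 1))

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

rho = 400.0       # large lambda * E[S]: the high-rate regime
k = 410
exact = poisson_pmf(k, rho)          # exact stationary probability P(N = k)
approx = normal_pdf(k, rho, rho)     # Gaussian local approximation
rel_err = abs(exact - approx) / exact
```

At ρ = 400 the relative error near the mode is on the order of a percent, and it shrinks as the fundamental rate grows, which is the regime the asymptotic analysis exploits.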
Queueing network; Infinite number of servers; Markovian arrival process; Asymptotic analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716302302
Moiseev, Alexander
Nazarov, Anatoly
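As a rough illustration of the high-rate Gaussian regime described in the abstract, the following sketch simulates a single infinite-server node with Poisson arrivals (a simplification of the MAP) and hypothetical parameters; the stationary customer count, Poisson(λE[S]), is approximately normal when the arrival rate λ is large.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, horizon = 200.0, 50.0                  # high arrival rate, long run (hypothetical)
n = rng.poisson(lam * horizon)              # Poisson arrivals (simplification of the MAP)
arrivals = rng.uniform(0.0, horizon, n)
services = rng.gamma(2.0, 0.5, n)           # general service times, mean E[S] = 1.0

t = horizon - 5.0                           # observation epoch, past the transient
count = int(np.sum((arrivals <= t) & (arrivals + services > t)))

# Stationary count is Poisson(lam * E[S]); for a high rate it is close to
# Normal(lam * E[S], lam * E[S]) -- the Gaussian regime studied in the paper
mean = lam * 1.0
print(count, mean)
```

A multi-node network adds Markov routing between such nodes, but the single-node case already exhibits the asymptotic normality the paper analyzes.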
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:503-5132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:503-513
article
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
The problem of minimizing the sum of nonsmooth, convex objective functions defined on a real Hilbert space over the intersection of fixed point sets of nonexpansive mappings, onto which the projections cannot be efficiently computed, is considered. The use of proximal point algorithms that use the proximity operators of the objective functions and incremental optimization techniques is proposed for solving the problem. With the focus on fixed point approximation techniques, two algorithms are devised for solving the problem. One blends an incremental subgradient method, which is a useful algorithm for nonsmooth convex optimization, with a Halpern-type fixed point iteration algorithm. The other is based on an incremental subgradient method and the Krasnosel’skiĭ–Mann fixed point algorithm. It is shown that any weak sequential cluster point of the sequence generated by the Halpern-type algorithm belongs to the solution set of the problem and that there exists a weak sequential cluster point of the sequence generated by the Krasnosel’skiĭ–Mann-type algorithm, which also belongs to the solution set. Numerical comparisons of the two proposed algorithms with existing subgradient methods for concrete nonsmooth convex optimization show that the proposed algorithms achieve faster convergence.
Fixed point; Halpern algorithm; Incremental subgradient method; Krasnosel’skiĭ–Mann algorithm; Proximal point algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716301102
Iiduka, Hideaki
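A one-dimensional toy sketch of the second algorithmic idea, not the paper's algorithm itself: an incremental subgradient step over the component functions, averaged Krasnosel'skiĭ–Mann-style toward a nonexpansive mapping whose fixed point set is the constraint set. The instance (f(x) = |x−3| + |x−4| over [0, 2]) is hypothetical.

```python
# nonexpansive mapping whose fixed points form the constraint set [0, 2]
T = lambda x: min(max(x, 0.0), 2.0)
anchors = [3.0, 4.0]        # f(x) = |x - 3| + |x - 4|; over Fix(T) the minimum is x = 2
alpha = 0.5                 # Krasnosel'skii-Mann averaging weight

x = 0.0
for k in range(1, 3001):
    for a in anchors:                       # incremental pass over component functions
        g = (x > a) - (x < a)               # a subgradient of |x - a|
        y = x - (1.0 / k) * g               # diminishing-step subgradient step
        x = (1 - alpha) * y + alpha * T(y)  # KM-style averaging toward Fix(T)
print(round(x, 3))
```

In the paper the mapping is a general nonexpansive operator whose fixed point set cannot be projected onto efficiently; the projection here merely stands in for it.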
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:441-4552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:441-455
article
Setting the right incentives for global planning and operations
We study incentive issues seen in a firm performing global planning and manufacturing, and local demand management. The stochastic demands in local markets are best observed by the regional business units, and the firm relies on the business units’ forecasts for planning of global manufacturing operations. We propose a class of performance evaluation schemes that induce the business units to reveal their private demand information truthfully by turning the business units’ demand revelation game into a potential game with truth telling being a potential maximizer, an appealing refinement of Nash equilibrium. Moreover, these cooperative performance evaluation schemes satisfy several essential fairness notions. After analyzing the characteristics of several performance evaluation schemes in this class, we extend our analysis to include the impact of effort on demand.
Production systems; Information asymmetry; Incentive management; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300662
Norde, Henk
Özen, Ulaş
Slikker, Marco
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:602-6132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:602-613
article
Shared resource capacity expansion decisions for multiple products with quantity discounts
When multiple products compete for the same storage space, their optimal individual lot sizes may need to be reduced to accommodate the storage needs of other products. This challenge is exacerbated with the presence of quantity discounts, which tend to entice larger lot sizes. Under such circumstances, firms may wish to consider storage capacity expansion as an option to take full advantage of quantity discounts. This paper aims to simultaneously determine the optimal storage capacity level along with individual lot sizes for multiple products being offered quantity discounts (either all-units discounts, incremental discounts, or a mixture of both). By utilizing Lagrangian techniques along with a piecewise-linear approximation for capacity cost, our algorithms can generate precise solutions regardless of the functional form of capacity cost (i.e., concave or convex). The algorithms can incorporate simultaneous lot-sizing decisions for thousands of products in a reasonable solution time. We utilize numerical examples and sensitivity analysis to understand the key factors that influence the capacity expansion decision and the performance of the algorithms. The primary characteristic that influences the capacity expansion decision is the size of the quantity discount offered, but variability in demand and capacity per unit influence the expansion decision as well. Furthermore, we discover that all-units quantity discounts are more likely to lead to capacity expansion compared to incremental quantity discounts. Our analysis illuminates the potential for significant savings available to companies willing to explore the option of increasing storage capacity to take advantage of quantity discount offerings for their purchased products.
Purchasing; Quantity discounts; Capacity expansion; Lot sizing; Inventory;
http://www.sciencedirect.com/science/article/pii/S0377221716301527
Jackson, Jonathan E.
Munson, Charles L.
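A minimal single-product building block of the setting above: the classical all-units discount EOQ, which shows how a discount schedule entices larger lot sizes (and hence larger storage needs). All data are hypothetical; the paper's contribution, the joint capacity-expansion and multi-product Lagrangian treatment, is not reproduced here.

```python
import math

D, K, h_rate = 1200.0, 50.0, 0.2          # demand/yr, order cost, holding rate (hypothetical)
tiers = [(1, 200, 10.0), (200, 500, 9.5), (500, float("inf"), 9.0)]  # (lo, hi, unit price)

def annual_cost(Q, price):
    # ordering + holding + purchase cost under an all-units discount
    return D / Q * K + 0.5 * Q * h_rate * price + D * price

candidates = []
for lo, hi, price in tiers:
    Q = math.sqrt(2 * D * K / (h_rate * price))   # EOQ at this tier's price
    if lo <= Q < hi:
        candidates.append((annual_cost(Q, price), Q))
    candidates.append((annual_cost(lo, price), float(lo)))  # each price break itself
cost, Q = min(candidates)
print(round(Q), cost)
```

Here the optimal lot jumps to the 500-unit break to capture the deepest discount, exactly the kind of discount-driven lot inflation that motivates the storage capacity expansion question.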
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:711-7332016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:711-733
article
Optimal contract design in the joint economic lot size problem with multi-dimensional asymmetric information
Previous work has studied the classical joint economic lot size model as an adverse selection problem with asymmetric cost information. Solving this problem is challenging due to the presence of countervailing incentives and two-dimensional information asymmetry, under which the classical single-crossing condition need not hold. In the present work we advance the existing knowledge of the problem at hand by conducting its optimality analysis, which leads to a better-informed and easier solution of the problem: First, we refine the existing closed-form solution, which simplifies problem solving and its analysis. Second, we prove that the Karush–Kuhn–Tucker conditions are necessary for optimality, and demonstrate that the problem may, in general, possess non-optimal stationary points due to non-convexity. Third, we prove that certain types of stationary points are always dominated, which eases the analytical solution of the problem. Fourth, we derive a simple optimality condition stating that weak Pareto efficiency of the buyer’s possible cost structures implies optimality of any stationary point. This simplifies the analytical solution approach and ensures a successful solution of the problem by means of conventional numerical techniques, e.g. with a general-purpose solver. We further establish properties of optimal solutions and indicate how these relate to the classical results on adverse selection.
Supply chain coordination; Asymmetric information; Nonlinear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301060
Pishchulov, Grigory
Richter, Knut
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:392-4032016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:392-403
article
The role of co-opetition in low carbon manufacturing
Low carbon manufacturing has become a strategic objective for many developed and developing economies. This study examines the role of co-opetition in achieving this objective. We investigate the pricing and emissions reduction policies for two rival manufacturers with different emission reduction efficiencies under the cap-and-trade policy. We assume that the product demand is price and emission sensitive. Based on non-cooperative and cooperative games, the optimal solutions for the two manufacturers are derived in the purely competitive and co-opetitive market environments respectively. Through the discussion and numerical analysis, we find that in both the pure competition and co-opetition models, the two manufacturers’ optimal prices depend on the unit price of carbon emission trading. In addition, higher emission reduction efficiency leads to lower optimal unit carbon emissions and higher profit in both models. Interestingly, compared to pure competition, co-opetition leads to more profit and less total carbon emissions. However, the improvement in economic and environmental performance rests on higher product prices and unit carbon emissions.
Low carbon manufacturing; Co-opetition; Carbon emission reduction; Green technology investment; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300674
Luo, Zheng
Chen, Xu
Wang, Xiaojun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:734-7452016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:734-745
article
A simple yet effective decision support policy for mass-casualty triage
In the aftermath of a mass-casualty incident, effective policies for timely evaluation and prioritization of patients can mean the difference between life and death. While operations research methods have been used to study the patient prioritization problem, prior research has either proposed decision rules that only apply to very simple cases, or proposed formulating and solving a mathematical program in real time, which may be a barrier to implementation in an urgent situation. We connect these two regimes by proposing a general decision support rule that can handle survival probability functions and an arbitrary number of patient classifications. The proposed survival lookahead policy generalizes not only a myopic policy and a cμ type rule, but also the optimal solution to a version of the problem with two priority classes. This policy has other desirable properties, including index policy structure. Using simple heuristic parameterizations, the survival lookahead policy yields an expected number of survivors that is almost as large as published methods that require mathematical programming, while having the advantage of an intuitive structure and requiring minimal computational support.
Triage; Disaster response; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301151
Mills, Alex F.
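The survival lookahead policy generalizes, among others, a myopic rule and a cμ-type rule. A minimal sketch of that myopic special case with hypothetical exponentially decaying survival functions: serve the class whose expected survival is currently declining fastest.

```python
import math

# Hypothetical classes: survival probability p_c(t) = exp(-r_c * t), service rate mu_c
classes = {"immediate": (0.10, 1.0), "delayed": (0.02, 1.0)}

def myopic_index(c, t):
    r, mu = classes[c]
    # cμ-like index: rate of decline of expected survival if class c waits at time t
    return mu * r * math.exp(-r * t)

def next_patient(waiting, t):
    return max(waiting, key=lambda c: myopic_index(c, t))

print(next_patient(["delayed", "immediate"], t=5.0))   # -> immediate
print(next_patient(["delayed", "immediate"], t=40.0))  # -> delayed (priorities flip)
```

The index structure is what makes such rules usable in the field: each decision is a ranking of simple per-class scores, with no optimization run at decision time.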
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:639-6472016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:639-647
article
Designing repetitive screening procedures with imperfect inspections: An empirical Bayes approach
A batch of expensive items, such as IC chips, is often inspected multiple times in a sequential manner to discover more conforming items. After several rounds of screening, we need to estimate the number of conforming items that still remain in the batch. We propose in this paper an empirical Bayes estimation method and compare its performance with that of the traditional maximum likelihood method. In the repetitive screening procedure, another important decision problem is when to stop the screening process and salvage the remaining items. We propose various types of stopping rules and illustrate their procedures with simulated inspection data. Finally, we explore various extensions to our empirical Bayes estimation method in multiple inspection plans.
Inspection; Product quality; Reliability; Empirical Bayes estimation;
http://www.sciencedirect.com/science/article/pii/S0377221716301138
Chun, Young H.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:456-4712016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:456-471
article
Value added, educational accountability approaches and their effects on schools’ rankings: Evidence from Chile
Value added models have been proposed to analyze different aspects of school effectiveness on the basis of student growth. There is consensus in the literature about the need to control for socioeconomic status and other contextual variables at the student and school levels when estimating value added, for which the methodologies employed have largely relied on hierarchical linear models. However, this approach is problematic because results are based on comparisons to the school’s average, implying no real incentive for performance excellence. Meanwhile, activity analysis models to estimate school value added have been unable to control for contextual variables at both the student and school levels. In this study we propose a robust frontier model to estimate contextual value added which merges relevant branches of the activity analysis literature, namely metafrontiers and partial frontier methods. We provide an application to a large sample of Chilean schools, a relevant country to study due to the reforms made to its educational system, which point to the need for accountability measures. Results indicate not only the general relevance of including contextual variables but also how they contribute to explaining the performance differentials found for the three types of schools: public, privately-owned subsidized, and privately-owned fee-paying. The results also indicate that contextual value added models generate school rankings more consistent with the evaluation models currently used in Chile than any other type of evaluation model.
Efficiency; Order-m; School effectiveness; Value added;
http://www.sciencedirect.com/science/article/pii/S0377221716000527
Thieme, Claudio
Prior, Diego
Tortosa-Ausina, Emili
Gempp, René
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:356-3712016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:356-371
article
Pro-active real-time routing in applications with multiple request patterns
Recent research reveals that pro-active real-time routing approaches that use stochastic knowledge about future requests can significantly improve solution quality compared to approaches that simply integrate new requests upon arrival. Many of these approaches assume that request arrivals on different days follow an identical pattern. Thus, they define and apply a single profile of past request days to anticipate future request arrivals. In many real-world applications, however, different days may follow different patterns. Moreover, the pattern of the current day may not be known beforehand, and may need to be identified in real-time during the day. In such cases, applying approaches that use a single profile is not promising. In this paper, we propose a new pro-active real-time routing approach that applies multiple profiles. These profiles are generated by grouping together days with a similar pattern of request arrivals. For each combination of identified profiles, stochastic knowledge about future request arrivals is derived in an offline step. During the day, the approach repeatedly evaluates characteristics of request arrivals and selects a suitable combination of profiles. The performance of the new approach is evaluated in computational experiments in direct comparison with a previous approach that applies only a single profile. Computational results show that the proposed approach significantly outperforms the previous one. We analyze further potential for improvement by comparing the approach with an omniscient variant that knows the actual pattern in advance. Based on the results, managerial implications that allow for a practical application of the new approach are provided.
Dynamic vehicle routing; Multiple request patterns; Request forecasting; Scenario identification; K-means clustering;
http://www.sciencedirect.com/science/article/pii/S0377221716300364
Ferrucci, Francesco
Bock, Stefan
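The core online step described above, identifying during the day which request pattern the current day follows, can be sketched minimally. The two profiles and the observed slot counts below are hypothetical; the paper works with combinations of profiles derived from clustered historical days.

```python
import numpy as np

# Hypothetical request-arrival profiles (mean requests per 2-hour slot)
profiles = {
    "weekday": np.array([2, 8, 9, 5, 7, 3]),
    "weekend": np.array([1, 2, 3, 6, 8, 9]),
}

def identify(observed):
    # Online profile selection: compare the slots seen so far with each profile
    k = len(observed)
    return min(profiles, key=lambda p: np.sum((profiles[p][:k] - observed) ** 2))

print(identify(np.array([2, 7, 10])))   # early slots match the weekday pattern
```

The selected profile then supplies the stochastic knowledge (expected future requests per region and slot) that the pro-active routing component anticipates.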
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:570-5832016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:570-583
article
Robust mixed-integer linear programming models for the irregular strip packing problem
Two-dimensional irregular strip packing problems are cutting and packing problems in which small pieces have to be cut from a larger object, involving non-trivial handling of geometry. Increasingly sophisticated and complex heuristic approaches have been developed to address these problems but, despite the apparently good quality of the solutions, there is no guarantee of optimality. Therefore, mixed-integer linear programming (MIP) models started to be developed. However, these models are heavily limited by the complexity of the geometry handling algorithms needed for the piece non-overlapping constraints. This has led to simplifications of the pieces in order to specialize the mathematical models developed. In this paper, to overcome these limitations, two robust MIP models are proposed. In the first model (DTM) the non-overlapping constraints are stated based on direct trigonometry, while in the second model (NFP−CM) pieces are first decomposed into convex parts and the non-overlapping constraints are then written based on the no-fit polygons of the convex parts. Both approaches are robust in terms of the types of geometries they can address, handling any kind of non-convex polygon with or without holes. They are also simpler to implement than previous models. This simplicity allowed us to consider, for the first time, a variant of the models that deals with piece rotations. Computational experiments with benchmark instances show that NFP−CM outperforms both DTM and the best exact model published in the literature. New real-world based instances with more complex geometries are proposed and used to verify the robustness of the new models.
Packing; Cutting; Nesting; MIP models;
http://www.sciencedirect.com/science/article/pii/S0377221716301370
Cherri, Luiz H.
Mundim, Leandro R.
Andretta, Marina
Toledo, Franklina M.B.
Oliveira, José F.
Carravilla, Maria Antónia
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:304-3112016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:304-311
article
An auto-realignment method in quasi-Monte Carlo for pricing financial derivatives with jump structures
Discontinuities are common in the pricing of financial derivatives and have a tremendous impact on the accuracy of the quasi-Monte Carlo (QMC) method. If, however, the discontinuities are parallel to the axes, good efficiency of the QMC method can still be expected. By realigning the discontinuities to be axes-parallel, [Wang & Tan, 2013] succeeded in recovering the high efficiency of the QMC method for a special class of functions. Motivated by this work, we propose an auto-realignment method to deal with more general discontinuous functions. The k-means clustering algorithm, a classical algorithm of machine learning, is used to select the most representative normal vectors of the discontinuity surface. By applying this new method, the discontinuities of the resulting function are realigned to be friendly to the QMC method. Numerical experiments demonstrate that the proposed method significantly improves the performance of the QMC method.
Pricing; QMC; OT method; QR decomposition; Auto-realignment method;
http://www.sciencedirect.com/science/article/pii/S037722171630162X
Weng, Chengfeng
Wang, Xiaoqun
He, Zhijian
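The k-means step named in the abstract can be sketched generically: cluster sampled normal vectors of a discontinuity surface and keep the cluster centers as the representative directions. The synthetic two-direction data below are hypothetical, and the subsequent realignment (rotating so these directions become axis-parallel) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical sampled normal vectors of a discontinuity surface, two dominant directions
normals = np.vstack([rng.normal([1.0, 0.0], 0.05, (50, 2)),
                     rng.normal([0.0, 1.0], 0.05, (50, 2))])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Plain k-means (k = 2) to extract the most representative directions
centers = normals[[0, -1]].copy()
for _ in range(20):
    labels = ((normals[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([normals[labels == j].mean(0) for j in range(2)])
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)

print(np.round(np.abs(centers), 2))   # close to the two axis directions
```

In the paper these representative normals drive the orthogonal transformation (via QR decomposition) that makes the discontinuities QMC-friendly.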
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:320-3372016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:320-337
article
Understanding dynamic mean variance asset allocation
We provide a new portfolio decomposition formula that sheds light on the economics of portfolio choice for investors following the mean-variance (MV) criterion. We show that the number of components of a dynamic portfolio strategy can be reduced to two: the first is preference free and hedges the risk of a discount bond maturing at the investor’s horizon while the second hedges the time variation in pseudo relative risk tolerance. Both components entail strong horizon effects in the dynamic asset allocation as a result of time-varying risk tolerance and investment opportunity sets. We also provide closed-form solutions for the optimal portfolio strategy in the presence of market return predictability. The model parameters are estimated over the period 1963 to 2012 for the U.S. market. We show that (i) intertemporal hedging can be very large, (ii) the MV criterion hugely understates the true extent of risk aversion for high values of the risk aversion parameter, and the more so the shorter the investment horizon, and (iii) the efficient frontiers seem problematic for investment horizons shorter than one year but satisfactory for large horizons. Overall, adopting the MV model leads to acceptable results for medium and long term investors endowed with medium or high risk tolerance, but to very problematic ones otherwise.
Mean variance; Dynamic asset allocation; Time varying risk aversion; Intertemporal hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716302223
Lioui, Abraham
Poncet, Patrice
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:328-3362016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:328-336
article
An ejection chain approach for the quadratic multiple knapsack problem
In an algorithm for a problem whose candidate solutions are selections of objects, an ejection chain is a sequence of moves from one solution to another that begins by removing an object from the current solution. The quadratic multiple knapsack problem extends the familiar 0–1 knapsack problem both with several knapsacks and with values associated with pairs of objects. A hybrid algorithm for this problem extends a local search algorithm through an ejection chain mechanism to create more powerful moves. In addition, adaptive perturbations enhance the diversity of the search process. The resulting algorithm produces results that are competitive with the best heuristics currently published for this problem. In particular, it improves the best known results on 34 out of 60 test problem instances and matches the best known results on all but 6 of the remaining instances.
Ejection chain; Quadratic multiple knapsack problem; Adaptive perturbation; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716300960
Peng, Bo
Liu, Mengqi
Lü, Zhipeng
Kochenberger, Gary
Wang, Haibo
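A minimal sketch of one link of an ejection chain on a toy QMKP instance (hypothetical data): eject an object from a knapsack and refill from the unassigned pool, keeping the best improving move. The full mechanism chains several such ejections and adds adaptive perturbations, which are omitted here.

```python
import itertools

# Toy QMKP instance: values v, pairwise values q, weights w, capacities cap
v = [6, 5, 4, 3]
q = {(0, 1): 4, (0, 2): 1, (0, 3): 0, (1, 2): 2, (1, 3): 0, (2, 3): 3}
w = [3, 2, 2, 2]
cap = [5, 4]

def value(assign):  # assign[k] = set of objects packed in knapsack k
    return sum(sum(v[i] for i in s) +
               sum(q[tuple(sorted(p))] for p in itertools.combinations(s, 2))
               for s in assign)

def best_eject_insert(assign, pool):
    # One chain link: eject one packed object, insert one object from the pool
    best_val, best_move = value(assign), None
    for k, s in enumerate(assign):
        for out in s:
            for into in pool:
                if sum(w[i] for i in s) - w[out] + w[into] <= cap[k]:
                    trial = [set(t) for t in assign]
                    trial[k].remove(out); trial[k].add(into)
                    tv = value(trial)
                    if tv > best_val:
                        best_val, best_move = tv, (k, out, into)
    return best_val, best_move

val, move = best_eject_insert([{0, 3}, {2}], {1})
print(val, move)   # swapping object 3 for object 1 captures the pairwise value q[0,1]
```

The pairwise values q are what distinguish the quadratic problem from the plain multiple knapsack: a swap can pay off through the interaction terms even when the individual values do not improve.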
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:314-3272016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:314-327
article
Lagrangean relaxation of the hull-reformulation of linear generalized disjunctive programs and its use in disjunctive branch and bound
In this work, we present a Lagrangean relaxation of the hull-reformulation of discrete-continuous optimization problems formulated as linear generalized disjunctive programs (GDP). The proposed Lagrangean relaxation has three important properties. The first property is that it can be applied to any linear GDP. The second property is that the solution to its continuous relaxation always yields 0–1 values for the binary variables of the hull-reformulation. Finally, it is simpler to solve than the continuous relaxation of the hull-reformulation. The proposed Lagrangean relaxation can be used in different GDP solution methods. In this work, we explore its use as primal heuristic to find feasible solutions in a disjunctive branch and bound algorithm. The modified disjunctive branch and bound is tested with several instances with up to 300 variables. The results show that the proposed disjunctive branch and bound performs faster than other versions of the algorithm that do not include this primal heuristic.
MILP; Disjunctive programming; GDP; Lagrangean relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221716301011
Trespalacios, Francisco
Grossmann, Ignacio E.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:593-6012016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:593-601
article
Impact of structure, market share and information asymmetry on supply contracts for a single supplier multiple buyer network
Market share of buyers and the influence of supply chain structure on the choice of supply contracts have received scant attention in the literature. This paper focuses on this gap and examines a network consisting of one supplier and two buyers under complete and partial decentralization. In the completely decentralized setting both buyers are independent of the supplier. In the partially decentralized setting the supplier and one of the buyers form a vertically integrated entity. Both buyers order from the single supplier and produce similar products to sell in the same market. The supplier charges the buyer through a contract. We investigate the influence of supply chain structure, market-share and asymmetry of information on supplier's choice of contracts. We demonstrate that both linear two-part tariff and quantity discount contract can coordinate the supply chain irrespective of the supply chain structure. By comparing profit levels of supply chain agents across different supply chain structures, we show that if a buyer possesses a minimum threshold market potential, the supplier has an incentive to collude with her. We calculate the cut-off policies for wholesale price and two-part tariff contracts by incorporating the reservation profit level of individual agents. The managerial implications of the analyses and the directions of future research are presented in the conclusion.
Supply chain management; Pricing; Asymmetric information; Competition; Market share;
http://www.sciencedirect.com/science/article/pii/S0377221716301424
Biswas, Indranil
Avittathur, Balram
Chatterjee, Ashis K
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:557-5692016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:557-569
article
Benders decomposition without separability: A computational study for capacitated facility location problems
Benders is one of the most famous decomposition tools for Mathematical Programming, and it is the method of choice e.g., in mixed-integer stochastic programming. Its hallmark is the capability of decomposing certain types of models into smaller subproblems, each of which can be solved individually to produce local information (notably, cutting planes) to be exploited by a centralized “master” problem. As its name suggests, the power of the technique comes essentially from the decomposition effect, i.e., the separability of the problem into a master problem and several smaller subproblems. In this paper we address the question of whether the Benders approach can be useful even without separability of the subproblem, i.e., when its application yields a single subproblem of the same size as the original problem. In particular, we focus on the capacitated facility location problem, in two variants: the classical linear case, and a “congested” case where the objective function contains convex but non-separable quadratic terms. We show how to embed the Benders approach within a modern branch-and-cut mixed-integer programming solver, addressing explicitly all the ingredients that are instrumental for its success. In particular, we discuss some computational aspects that are related to the negative effects derived from the lack of separability. Extensive computational results on various classes of instances from the literature are reported, with a comparison with the state-of-the-art exact and heuristic algorithms. The outcome is that a clever but simple implementation of the Benders approach can be very effective even without separability, as its performance is comparable and sometimes even better than that of the most effective and sophisticated algorithms proposed in the previous literature.
Benders decomposition; Congested capacitated facility location; Perspective reformulation; Branch-and-cut; Mixed-integer convex programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301126
Fischetti, Matteo
Ljubić, Ivana
Sinnl, Markus
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:472-4882016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:472-488
article
Stability and chaos in demand-based pricing under social interactions
Demand-based pricing is often used to moderate demand fluctuations so as to level resource utilization and increase profitability. However, such pricing policies may not be effective when customers’ purchase decisions are influenced by social interactions. This paper investigates the demand dynamics, under a demand-based pricing policy, of a frequently purchased service when social interactions are at work. Customers are heterogeneous and adaptively forward-looking. Existing customers’ re-purchase decisions are based on adaptively formed price expectations and reservation prices. Potential customers are attracted through social interactions with existing customers. The demand process is characterized by a two-dimensional dynamical system. It is shown that the equilibrium demand can be unstable. For a given reservation price distribution, we first analyze the stability of the equilibrium demand under various scenarios of social interactions and customers’ adaptively forward-looking behavior, and then characterize their dynamics using the bifurcation plots, Lyapunov exponents and return maps. The results indicate that the demand process can be stable, periodic or chaotic. The study shows that the intended effect of a demand-based pricing policy may be offset by customers’ adaptively forward-looking behavior under the influence of social interactions. In fact, the interplay of these factors may even lead to chaotic demand dynamics. The result highlights the complex dynamics produced by a simple demand-price mechanism under social interactions. For a demand-based pricing strategy to be effective, companies must take social interactions into account.
OR in service industries; Demand dynamics; Forward-looking; Social interaction; Chaos;
http://www.sciencedirect.com/science/article/pii/S037722171630100X
Yuan, Xuchuan
Hwarng, H. Brian
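The paper characterizes its two-dimensional demand system with bifurcation plots and Lyapunov exponents. As a generic stand-in (not the paper's model), the sketch below estimates the Lyapunov exponent of a one-dimensional demand map to distinguish stable from chaotic regimes.

```python
import math

# Stand-in 1-D demand map d_{t+1} = r * d_t * (1 - d_t); the paper's system is 2-D,
# but the same Lyapunov-exponent diagnostic separates stable from chaotic demand
def lyapunov(r, d0=0.3, burn=500, n=5000):
    d = d0
    for _ in range(burn):                    # discard the transient
        d = r * d * (1 - d)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * d)) + 1e-300)  # log |map derivative|
        d = r * d * (1 - d)
    return acc / n

print(lyapunov(2.8), lyapunov(4.0))   # negative (stable) vs positive (chaotic)
```

A positive exponent signals sensitive dependence on initial conditions, which is the formal sense in which a demand-based pricing rule can turn demand chaotic.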
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:337-3552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:337-355
article
Modified Differential Evolution with Locality induced Genetic Operators for dynamic optimization
This article presents a modified version of the Differential Evolution (DE) algorithm for solving Dynamic Optimization Problems (DOPs) efficiently. The algorithm, referred to as Modified DE with Locality induced Genetic Operators (MDE-LiGO), incorporates changes in the three basic stages of a standard DE framework. The mutation phase has been entrusted to a locality-induced operation that retains traits of Euclidean distance-based closest individuals around a potential solution. Diversity maintenance is further enhanced by inclusion of a local-best crossover operation that empowers the algorithm with an explorative ability without directional bias. An exhaustive dynamic detection technique has been introduced to effectively sense the changes in the landscape. An even distribution of solutions over different regions of the landscape calls for a solution retention technique that adapts this algorithm to dynamism by using the previously stored information in diverse search domains. MDE-LiGO has been compared with seven state-of-the-art evolutionary dynamic optimizers on a set of benchmarks known as the Generalized Dynamic Benchmark Generator (GDBG) used in the competition on evolutionary computation in dynamic and uncertain environments held under the 2009 IEEE Congress on Evolutionary Computation (CEC). The experimental results clearly indicate that MDE-LiGO can outperform other algorithms for most of the tested DOP instances in a statistically meaningful way.
Continuous optimization; Dynamic optimization; Differential Evolution; Self adaptation; Genetic operators;
http://www.sciencedirect.com/science/article/pii/S0377221716300959
Mukherjee, Rohan
Debchoudhury, Shantanab
Das, Swagatam
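Of the three modified stages, the locality-induced mutation is easiest to isolate: donors are drawn from the Euclidean-nearest neighbours of the target individual rather than from the whole population. A sketch on a static sphere function (the paper's change detection, local-best crossover, and retention mechanisms for dynamic landscapes are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
sphere = lambda x: float(np.sum(x ** 2))          # static test objective (minimize)

pop = rng.uniform(-5.0, 5.0, (20, 3))
init_best = min(sphere(x) for x in pop)
F, Cr = 0.6, 0.9                                  # DE scale factor and crossover rate
for _ in range(300):
    for i in range(len(pop)):
        # locality-induced mutation: donors drawn from the Euclidean-nearest neighbours
        dist = np.linalg.norm(pop - pop[i], axis=1)
        nbrs = np.argsort(dist)[1:6]              # the five closest individuals
        a, b, c = pop[rng.choice(nbrs, 3, replace=False)]
        mutant = a + F * (b - c)
        trial = np.where(rng.random(3) < Cr, mutant, pop[i])
        if sphere(trial) <= sphere(pop[i]):       # greedy one-to-one DE selection
            pop[i] = trial
best = min(sphere(x) for x in pop)
print(best)
```

Restricting donors to a neighbourhood keeps moves local, which is useful when tracking a moving optimum; the full algorithm counteracts the resulting loss of global exploration with its crossover and retention stages.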
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:673-6802016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:673-680
article
Licensing under general demand and cost functions
We consider a Cournot duopoly under general demand and cost functions, where an incumbent patentee has a cost reducing technology that it can license to its rival by using combinations of royalties and upfront fees (two-part tariffs). We show that for drastic technologies: (a) licensing occurs and both firms stay active if the cost function is superadditive and (b) licensing does not occur and the patentee monopolizes the market if the cost function is additive or subadditive. For non-drastic technologies, licensing takes place provided the average efficiency gain from the cost reducing technology is higher than the marginal gain computed at the licensee’s reservation output. Optimal licensing policies have both royalties and fees for significantly superior technologies if the cost function is superadditive. By contrast, for additive and certain subadditive cost functions, optimal licensing policies have only royalties and no fees.
Patent licensing; Superadditive function; Subadditive function; Royalties; Two-part tariff;
http://www.sciencedirect.com/science/article/pii/S037722171600103X
Sen, Debapriya
Stamatopoulos, Giorgos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:869-8792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:869-879
article
Spline based survival model for credit risk modeling
Survival modeling has been adopted in retail banking because of its capability to analyze censored data. It is an important tool for credit risk scoring, stress testing and credit asset evaluation. In this paper, we introduce a regression spline based discrete-time survival model. The flexibility of the spline function allows us to model the nonlinear and irregular shapes of hazard functions. By incorporating the regression spline into the multinomial logistic regression, this approach complements the existing Cox model. From a practical perspective, logistic regression is relatively easy to understand and implement, and its simple parametric form is especially advantageous for predictive scoring. Using a credit card dataset, we demonstrate how to build a cubic regression spline based survival model. We also compare the performance of the spline based discrete-time survival model with that of the classical Cox model; our results show that the spline based survival model provides similar statistical explanatory power and improves prediction accuracy for an attrition model with a low event rate.
Retail banking; Credit risk scoring; Survival modeling; Regression spline;
http://www.sciencedirect.com/science/article/pii/S0377221716301035
Luo, Sirong
Kong, Xiao
Nie, Tingting
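A regression spline enters a discrete-time hazard model through a basis expansion of the period index, whose columns are then fed to the logistic regression on person-period data. As an illustrative sketch, a generic truncated-power cubic basis (one common choice, not necessarily the authors' exact specification) can be built as follows:

```python
import numpy as np

def cubic_spline_basis(t, knots):
    """Truncated-power cubic spline basis evaluated at periods t.

    Columns: 1, t, t^2, t^3, and (t - k)_+^3 for each interior knot k.
    Feeding these columns into a (multinomial) logistic regression on
    person-period data yields a smooth, flexible discrete-time hazard.
    """
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t), t, t ** 2, t ** 3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)
```

The knot positions and the number of knots control the flexibility of the fitted hazard shape.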
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:584-5922016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:584-592
article
Lower bounding procedure for the asymmetric quadratic traveling salesman problem
In this paper we consider the Asymmetric Quadratic Traveling Salesman Problem (AQTSP). Given a directed graph and a function that maps every pair of consecutive arcs to a cost, the problem consists in finding a cycle that visits every vertex exactly once and such that the sum of the costs is minimal. We propose an extended Linear Programming formulation that has a variable for each cycle in the graph. Since the number of cycles is exponential in the graph size, we propose a column generation approach. Moreover, we apply a particular reformulation-linearization technique on a compact representation of the problem, and compute lower bounds based on Lagrangian relaxation. We compare our new bounds with those obtained by some linearization models proposed in the literature. Computational results on sets of benchmark instances from the literature show that our lower bounding procedures are very promising.
Traveling salesman; Reformulation-linearization technique; Cycle cover; Column generation; Lower bound;
http://www.sciencedirect.com/science/article/pii/S037722171630159X
Rostami, Borzou
Malucelli, Federico
Belotti, Pietro
Gualandi, Stefano
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:298-3132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:298-313
article
Scheduling cranes at an indented berth
Container terminals are facing great challenges in meeting the shipping industry’s requirements. An important trend within the industry is the increasing vessel size: within the last decade, ship size in the Asia–Europe trade has effectively doubled. Port productivity, however, has not doubled along with the larger vessels, which has led to increased vessel turnaround times at ports, a severe problem. Meeting the industry targets therefore requires a game-changer in container handling, and the indented berth structure is one important opportunity to address this issue. This novel berth structure requires new models and solution techniques for scheduling the quay cranes serving the indented berth. Accordingly, in this paper, we approach the quay crane scheduling problem at an indented berth, focusing on the challenges and constraints related to the novel architecture. We model the quay crane scheduling problem under this special structure and develop a solution technique based on branch-and-price. Extensive experiments are conducted to validate the efficiency of the proposed algorithm.
Maritime logistics; Crane sequencing; Crane scheduling; Container terminal operations; Indented berth;
http://www.sciencedirect.com/science/article/pii/S0377221716300753
Beens, Marie-Anne
Ursavas, Evrim
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:856-8682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:856-868
article
It’s not now or never: Implications of investment timing and risk aversion on climate adaptation to extreme events
Public investment into risk reduction infrastructure plays an important role in facilitating adaptation to climate impacted hazards and natural disasters. In this paper, we provide an economic framework to incorporate investment timing and insurance market risk preferences when evaluating projects related to reducing climate impacted risks. The model is applied to a case study of bushfire risk management. We find that optimal timing of the investment may increase the net present value (NPV) of an adaptation project for various levels of risk aversion. Assuming risk neutrality, while the market is risk averse, is found to result in an unnecessary delay of the investment into risk reduction projects. The optimal waiting time is shorter when the insurance market is more risk averse or when a more serious scenario for climatic change is assumed. A higher investment cost or a higher discount rate will increase the optimal waiting time. We also find that a stochastic discount rate results in higher NPVs of the project than a discount rate that is assumed fixed at the long run average level.
Climate change adaptation; Investment timing; Catastrophic risk; Risk aversion; Real option;
http://www.sciencedirect.com/science/article/pii/S0377221716000898
Truong, Chi
Trück, Stefan
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:51-672016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:51-67
article
Modeling parallel movement of lifts and vehicles in tier-captive vehicle-based warehousing systems
This paper models and analyzes tier-captive autonomous vehicle storage and retrieval systems. While previous models assume sequential commissioning of the lift and vehicles, we propose a parallel processing policy for the system, under which an arriving transaction can request the lift and the vehicle simultaneously. To investigate the performance of this policy, we formulate a fork-join queueing network in which an arriving transaction is split into a horizontal movement task served by the vehicle and a vertical movement task served by the lift. We develop an approximation method based on decomposition of the fork-join queueing network to estimate the system performance, and build simulation models to validate the effectiveness of the analytical models. The results show that the fork-join queueing network is accurate in estimating the system performance under the parallel processing policy. Numerical experiments and a real case are carried out to compare the system response time of retrieval transactions under the parallel and sequential processing policies. The results show that, in systems with fewer than 10 tiers, the parallel processing policy outperforms the sequential processing policy by at least 5.51 percent. The advantage of the parallel processing policy decreases with rack height and aisle length. In systems with more than 10 tiers and a length-to-height ratio larger than 7, we can find a critical retrieval transaction arrival rate below which the parallel processing policy outperforms the sequential processing policy.
Logistics; Warehousing; AVS/RS; Analytical and simulation modelling; Performance analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716301679
Zou, Bipan
Xu, Xianhao
(Yale) Gong, Yeming
De Koster, René
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:148-1602016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:148-160
article
Strategic behavior in an observable fluid queue with an alternating service process
We consider a fluid queue with two modes of service that represents a production facility, where the processing of the customers (units) is typically carried out on a much faster time-scale than the machine-related processes. We examine the strategic behavior of the customers regarding the joining/balking dilemma under two levels of information upon arrival. Specifically, just after arriving and before making the decision, a customer observes the level of the fluid, but may or may not get informed about the state of the server (fast/slow). Assuming that the customers evaluate their utilities based on a natural reward/cost structure, which incorporates their desire for processing and their unwillingness to wait, we derive symmetric equilibrium strategy profiles. Moreover, we illustrate various effects of the information level on the strategic behavior of the customers. The corresponding social optimization problem is also studied, and the inefficiency of the equilibrium strategies is quantified via the Price of Anarchy (PoA) measure.
Queueing; Fluid flow models; Strategic customers; Balking; Equilibrium strategies;
http://www.sciencedirect.com/science/article/pii/S0377221716301928
Economou, Antonis
Manou, Athanasia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:372-3822016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:372-382
article
Generous, spiteful, or profit maximizing suppliers in the wholesale price contract: A behavioral study
Prior experimental research shows that, in aggregate, decision makers acting as suppliers to a newsvendor do not set the wholesale price to maximize supplier profits. However, these deviations from optimality have rarely been examined at an individual level. In this study, presented with scenarios that differ in terms of how profit is shared between retailer and supplier, suppliers set wholesale price contracts that deviate from profit maximization in ways that are either generous or spiteful. On an individual basis, these deviations were found to be consistent with how the profit-maximizing contract compares to the subject's idea of a fair contract. Suppliers moved nearer to self-reported ideal allocations when they indicated a high degree of concern for fairness, consistent with previously proposed fairness models, and were found to be more likely to act upon generous inclinations than spiteful ones.
Behavioral OR; Supply chain management; Newsvendor; Contracting; Supplier pricing;
http://www.sciencedirect.com/science/article/pii/S0377221716300595
Niederhoff, Julie A.
Kouvelis, Panos
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:40-502016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:40-50
article
How to escape a declining market: Capacity investment or Exit?
This paper considers a firm that faces a declining profit stream for its established product. The firm has the option to invest in a new technology with which it can produce an innovative product while having the option to exit at any point in time. In the presence of an exit option, earlier work determined the optimal timing to invest, where it was shown that higher uncertainty might accelerate investment timing. In the present paper the firm also decides on capacity. This extension leads to monotonicity, i.e. higher uncertainty delays investment timing. We also find that higher potential profitability of the innovative product market increases the incentive to invest earlier, where, however, we get the counterintuitive result that the firm invests in smaller capacity. Finally, if quantity has a smaller negative effect on price, the firm wants to acquire a larger capacity at a lower investment threshold.
Investment analysis; Exit; Capacity investment; Declining market; Real options;
http://www.sciencedirect.com/science/article/pii/S0377221716302284
Hagspiel, Verena
Huisman, Kuno J.M.
Kort, Peter M.
Nunes, Cláudia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:514-5232016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:514-523
article
A nonhomogeneous hidden Markov model of response dynamics and mailing optimization in direct marketing
Catalog firms mail billions of catalogs each year. To stay competitive, catalog managers need to maximize the return on these mailings by deciding who should receive a mail-order catalog. In this paper, we propose a two-step approach that allows firms to address the dynamic implications of mailing decisions and to make efficient mailing decisions by maximizing the long-term value generated by customers. Specifically, we first propose a nonhomogeneous hidden Markov model (HMM) to capture the interactive dynamics between customers and mailings. In the second step, we use the parameters obtained from the HMM to determine the optimal mailing decisions using a Partially Observable Markov Decision Process (POMDP). Both the immediate and the long-term effects of mailings are accounted for. The mailing endogeneity that may result in biased parameter estimates is also corrected. We conduct an empirical study using six years of quarterly solicitation data derived from the well-known DMEF donation data set. All metrics used suggest that the proposed model fits the data well in terms of correct predictions and outperforms all other benchmark models. The simulation results show that the proposed method for optimizing total accrued benefits outperforms the usual targeted-marketing methodology of optimizing each promotion in isolation. We also find that the sequential targeting rules produced by our proposed methods are more cost-containment oriented than the corresponding single-event targeting rules.
OR in marketing; HMM; POMDP; Customer lifetime value; Mailing optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301084
Ma, Shaohui
Hou, Lu
Yao, Wensong
Lee, Baozhen
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:269-2782016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:269-278
article
Age-structured linear-state differential games
In this paper we search for conditions on age-structured differential games to make their analysis more tractable. We focus on a class of age-structured differential games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are sub-game perfect. By means of a simple age-structured advertising problem, we provide an application of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium.
Age-structured models; Differential games; Advertising;
http://www.sciencedirect.com/science/article/pii/S0377221716301539
Grosset, Luca
Viscolani, Bruno
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:280-2892016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:280-289
article
Exact and heuristic algorithms for the Hamiltonian p-median problem
This paper presents an exact algorithm, a constructive heuristic algorithm, and a metaheuristic for the Hamiltonian p-Median Problem (HpMP). The exact algorithm is a branch-and-cut algorithm based on an enhanced p-median based formulation, which is proved to dominate an existing p-median based formulation. The constructive heuristic is a giant tour heuristic, based on a dynamic programming formulation to optimally split a given sequence of vertices into cycles. The metaheuristic is an iterated local search algorithm using 2-exchange and 1-opt operators. Computational results show that the branch-and-cut algorithm outperforms the existing exact solution methods.
Hamiltonian; p-median; Branch-and-cut; Metaheuristic;
http://www.sciencedirect.com/science/article/pii/S0377221716300327
Erdoğan, Güneş
Laporte, Gilbert
Rodríguez Chía, Antonio M.
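The giant-tour heuristic's splitting step admits a compact dynamic program. The sketch below is a hypothetical rendering of that idea, not the authors' exact formulation: it splits a fixed vertex sequence into p contiguous cycles of at least three vertices each, minimizing total cycle length.

```python
from functools import lru_cache

def split_into_cycles(seq, dist, p):
    """Minimum-cost split of the sequence seq into p contiguous cycles.

    dist[a][b] is a symmetric distance between vertices a and b; a segment's
    cost is its cycle length: the consecutive edges plus the edge closing the
    cycle. Each cycle must contain at least three vertices.
    """
    n = len(seq)

    def cycle_cost(i, j):  # cost of the cycle formed by seq[i:j]
        c = dist[seq[j - 1]][seq[i]]  # closing edge back to the segment start
        for k in range(i, j - 1):
            c += dist[seq[k]][seq[k + 1]]
        return c

    @lru_cache(maxsize=None)
    def best(i, k):  # cheapest way to cover seq[i:] with exactly k cycles
        if k == 0:
            return 0.0 if i == n else float("inf")
        return min(
            (cycle_cost(i, j) + best(j, k - 1)
             for j in range(i + 3, n - 3 * (k - 1) + 1)),
            default=float("inf"),
        )

    return best(0, p)
```

With two well-separated clusters of three points each and p = 2, the DP recovers the two cluster triangles as the optimal cycles.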
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:294-3032016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:294-303
article
Sustaining cooperation in a differential game of advertising goodwill accumulation
The paper suggests a differential game of advertising competition among three symmetric firms, played over an infinite horizon. The objective of the research is to see if a cooperative agreement among the firms can be sustained over time. For this purpose the paper determines the characteristic functions (value functions) of individual players and all possible coalitions. We identify an imputation that belongs to the core. Using this imputation guarantees that, in any subgame starting out on the cooperative state trajectory, no coalition has an incentive to deviate from what was prescribed by the solution of the grand coalition’s optimization problem.
Differential games; Advertising competition; Core imputation;
http://www.sciencedirect.com/science/article/pii/S0377221716301576
Jørgensen, Steffen
Gromova, Ekaterina
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:614-6242016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:614-624
article
Measures of dynamism and urgency in logistics
Dynamism was originally defined in the literature on dynamic logistics as the proportion of online versus offline orders. Such a definition, however, loses meaning when considering purely dynamic problems where all customer requests arrive dynamically. Existing measures of dynamism are limited to either (1) measuring the proportion of online versus offline orders or (2) measuring urgency, a concept that is orthogonal to dynamism. The present paper gives separate and independent formal definitions of dynamism and urgency applicable to purely dynamic problems. Using these formal definitions, instances of a dynamic logistic problem with varying levels of dynamism and urgency were constructed, and several route scheduling algorithms were executed on these problem instances. Contrary to previous findings, the results indicate that dynamism is positively correlated with route quality; urgency, however, is negatively correlated with route quality. The paper contributes the theory that dynamism and urgency are two distinct concepts that deserve to be treated separately.
Logistics; Transportation; Dynamism; Urgency; Measures;
http://www.sciencedirect.com/science/article/pii/S0377221716301497
van Lon, Rinde R.S.
Ferrante, Eliseo
Turgut, Ali E.
Wenseleers, Tom
Vanden Berghe, Greet
Holvoet, Tom
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:243-2642016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:243-264
article
Sustainable Operations
The field of “Sustainable Operations” and the term itself have arisen only in the last ten to twenty years in the context of sustainable development. Even though the term is frequently used in practice and research, it has hardly been characterized and defined precisely in the literature so far. For reasons of clarity and unambiguity, we present terms and definitions before we demarcate Sustainable Operations from its neighboring topics. We especially focus on the interactions between economic, social and ecological aspects as part of Sustainable Operations, but exclude the development of a normative ethics, instead focusing on the use of quantitative methods from Operations Research. Then the broad subject of Sustainable Operations is structured into various areas arising from the typical structure of an enterprise. For each area, we present examples of applications and refer to the existing literature. The paper concludes with future research directions.
Sustainable Operations; Sustainable development; Operations research; Computational sustainability; Triple bottom line;
http://www.sciencedirect.com/science/article/pii/S0377221716300996
Jaehn, Florian
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:811-8242016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:811-824
article
Local matching of flexible load in smart grids
Today’s power systems are experiencing a transition from primarily fossil fuel based generation toward greater shares of renewable energy sources. It becomes increasingly costly to manage the resulting uncertainty and variability in power system operations solely through flexible generation assets. Incorporating demand side flexibility through appropriately designed incentive structures can add an additional lever to balance demand and supply. Based on a supply model using empirical wind generation data and a discrete model of flexible demand with temporal constraints, we design and evaluate a local online market mechanism for matching flexible load and uncertain supply. Under this mechanism, truthful reporting of flexibility is a dominant strategy for consumers, reducing payments and increasing the likelihood of allocation. Suppliers, during periods of scarce supply, benefit from elevated critical-value payments as a result of flexibility-induced competition on the demand side. We find that, for a wide range of the key parameters (supply capacity, flexibility level), the cost of ensuring incentive compatibility in a smart grid market, relative to the welfare-optimal matching, is relatively small. This suggests that local matching of demand and supply can be organized in a decentralized manner in the presence of a sufficiently flexible demand side. Extending the stylized demand model to include complementary demand structures, we demonstrate that decentralized matching induces only minor efficiency losses if demand is sufficiently flexible. Furthermore, by accounting for physical grid limitations we show that flexibility and grid capacity exhibit complementary characteristics.
OR in energy; Smart grid; Load flexibility; Online mechanism design;
http://www.sciencedirect.com/science/article/pii/S037722171630114X
Ströhle, Philipp
Flath, Christoph M.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:236-2522016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:236-252
article
A two-stage classification technique for bankruptcy prediction
Ensemble techniques such as bagging or boosting, which are based on combinations of classifiers, make it possible to design models that are often more accurate than those that are made up of a unique prediction rule. However, the performance of an ensemble solely relies on the diversity of its different components and, ultimately, on the algorithm that is used to create this diversity. It means that such models, when they are designed to forecast corporate bankruptcy, do not incorporate or use any explicit knowledge about this phenomenon that might supplement or enrich the information they are likely to capture. This is the reason why we propose a method that is precisely based on some knowledge that governs bankruptcy, using the concept of “financial profiles”, and we show how the complementarity between this technique and ensemble techniques can improve forecasts.
Decision support systems; Finance; Bankruptcy; Forecasting; Financial profile;
http://www.sciencedirect.com/science/article/pii/S0377221716301369
du Jardin, Philippe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:404-4172016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:404-417
article
Logistics capacity planning: A stochastic bin packing formulation and a progressive hedging meta-heuristic
We consider the logistics capacity planning problem arising in the context of supply-chain management. We address the tactical-planning problem of determining the quantity of capacity units, hereafter called bins, of different types to secure for the next period of activity, given the uncertainty on future needs in terms of demand for loads (items) to be moved or stored, and the availability and costs of capacity for these movements or storage activities. We propose a modeling framework introducing a new class of bin packing problems, the Stochastic Variable Cost and Size Bin Packing Problem. The resulting two-stage stochastic formulation with recourse assigns to the first stage the tactical capacity-planning decisions of selecting bins, while the second stage models the subsequent adjustments to the plan, securing extra bins and packing the items into the selected bins, performed each time the plan is applied and new information becomes known. We propose a new meta-heuristic based on progressive hedging ideas that includes advanced strategies to accelerate the search and efficiently address the symmetry strongly present in the problem considered due to the presence of several equivalent bins of each type. Extensive computational results for a large set of instances support the claim of validity for the model, efficiency for the solution method proposed, and quality and robustness for the solutions obtained. The method is also used to explore the impact on the capacity plan and the recourse to spot-market capacity of a quite wide range of variations in the uncertain parameters and the economic environment of the firm.
Logistics capacity planning; Uncertainty; Stochastic Variable Cost and Size Bin Packing; Stochastic programming; Progressive hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716300777
Crainic, Teodor Gabriel
Gobbato, Luca
Perboli, Guido
Rei, Walter
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:105-1122016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:105-112
article
Offsetting inventory replenishment cycles
The inventory-staggering problem is a multi-item inventory problem in which replenishment cycles are scheduled or offset in order to minimize the maximum inventory level over a given planning horizon. We incorporate symmetry-breaking constraints in a mixed-integer programming model to determine optimal and near-optimal solutions. Local-search heuristics and evolutionary polishing heuristics are also presented to achieve effective and efficient solutions. We examine extensions of the problem that include a continuous-time framework as well as the effect of stochastic demand.
Inventory; Replenishment staggering; Symmetry reduction; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716302016
Russell, Robert A.
Urban, Timothy L.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:127-1372016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:127-137
article
Cost-effectiveness analysis for heterogeneous samples
The sampling information for cost-effectiveness analysis typically comes from different health care centers, and, as far as we know, it is taken for granted that the distribution of the cost and the effectiveness does not vary across centers. We argue that this assumption is unrealistic, and prove that ignoring sample heterogeneity will typically give misleading results. Consequently, a cost-effectiveness procedure for heterogeneous samples is proposed here.
Clustering; Cost-effectiveness; Decision processes; Meta-analysis; Heterogeneous samples;
http://www.sciencedirect.com/science/article/pii/S0377221716301606
Moreno, E.
Girón, F.J.
Vázquez–Polo, F.J.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:648-6582016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:648-658
article
An investigation of model risk in a market with jumps and stochastic volatility
The aim of this paper is to investigate model risk aspects of variance swaps and forward-start options in a realistic market setup where the underlying asset price process exhibits stochastic volatility and jumps. We devise a general framework in order to provide evidence of the model uncertainty attached to variance swaps and forward-start options. In our study, both variance swaps and forward-start options can be valued by means of analytic methods. We measure model risk using a set of 21 models embedding various dynamics with both continuous and discontinuous sample paths. To conduct our empirical analysis, we work with two major equity indices (S&P 500 and Eurostoxx 50) under different market situations. Our results evaluate model risk between 50 and 200 basis points, with an average value slightly above 100 basis points of the contract notional.
Risk management; Model risk; Robustness and sensitivity analysis; Variance swap; Forward-start option;
http://www.sciencedirect.com/science/article/pii/S0377221716301461
Coqueret, Guillaume
Tavin, Bertrand
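For reference, the floating leg of a variance swap is conventionally computed from daily log-returns. A minimal sketch of that market convention follows; the models studied in the paper are analytic and model-specific, so this only illustrates the contract payoff, not their valuation:

```python
import math

def realized_variance(prices, periods_per_year=252):
    """Annualized realized variance of a price path, using the standard
    market convention: sum of squared log-returns, no mean subtraction."""
    rets = [math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)]
    return periods_per_year / len(rets) * sum(r * r for r in rets)

def variance_swap_payoff(prices, strike_var, notional=1.0):
    """Payoff of a variance swap at expiry: notional * (realized var - strike)."""
    return notional * (realized_variance(prices) - strike_var)
```

Model risk, in the sense of the paper, shows up as the spread of the fair strike across the candidate models used to price this payoff.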
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:9-182016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:9-18
article
A computational study for bilevel quadratic programs using semidefinite relaxations
In this paper, we deal with bilevel quadratic programming problems with binary decision variables in the leader problem and convex quadratic programs in the follower problem. For this purpose, we transform the bilevel problems into equivalent quadratic single-level formulations by replacing the follower problem with the equivalent Karush-Kuhn-Tucker (KKT) conditions. Then, we use the single-level formulations to obtain mixed integer linear programming (MILP) models and semidefinite programming (SDP) relaxations. Thus, we compute optimal solutions and upper bounds using linear programming (LP) and SDP relaxations. Our numerical results indicate that the SDP relaxations are considerably tighter than the LP ones. Consequently, the SDP relaxations allow finding tight feasible solutions for the problem, especially when the number of variables in the leader problem is larger than in the follower problem. Moreover, they are solved at a significantly lower computational cost for large-scale instances.
Conic programming and interior point methods; Bilevel programming; Semidefinite programming; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716000497
Adasme, Pablo
Lisser, Abdel
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:1-82016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:1-8
article
Edge coloring: A natural model for sports scheduling
In this work, we consider some basic sports scheduling problems and introduce the notions of graph theory which are needed to build adequate models. We show, in particular, how edge coloring can be used to construct schedules for sports leagues. Due to the emergence of various practical requirements, one cannot be restricted to classical schedules given by standard constructions, such as the circle method, to color the edges of complete graphs. The need of exploring the set of all possible colorings inspires the design of adequate coloring procedures. In order to explore the solution space, local search procedures are applied. The standard definitions of neighborhoods that are used in such procedures need to be extended. Graph theory provides efficient tools for describing various move types in the solution space. We show how formulations in graph theoretical terms give some insights to conceive more general move types. This leads to a series of open questions which are also presented throughout the text.
OR in sports; Scheduling; Graph theory; Edge coloring; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221716301667
Januario, Tiago
Urrutia, Sebastián
Ribeiro, Celso C.
de Werra, Dominique
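The circle method mentioned in this abstract builds a 1-factorization of the complete graph K_n (n even): each round is a perfect matching, i.e. one color class of a proper edge coloring. A minimal sketch of the standard construction:

```python
def circle_method(n):
    """Single round robin schedule for n teams (n even) via the circle method.

    Returns n - 1 rounds; each round is a perfect matching of the teams,
    i.e. one color class of an edge coloring of the complete graph K_n.
    Team 0 stays fixed while the remaining teams rotate around the circle.
    """
    assert n % 2 == 0, "add a dummy team to handle an odd n"
    rest = list(range(1, n))
    rounds = []
    for _ in range(n - 1):
        pairs = [(0, rest[0])]
        pairs += [(rest[i], rest[-i]) for i in range(1, n // 2)]
        rounds.append(pairs)
        rest = rest[1:] + rest[:1]  # rotate the circle by one position
    return rounds
```

As the abstract notes, practical requirements often rule out this canonical schedule, which motivates local search over alternative edge colorings.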
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:138-1472016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:138-147
article
The predictive power of the business and bank sentiment of firms: A high-dimensional Granger Causality approach
We study the predictive power of industry-specific economic sentiment indicators for future macro-economic developments. In addition to the sentiment of firms towards their own business situation, we study their sentiment with respect to the banking sector – their main credit providers. The use of industry-specific sentiment indicators results in a high-dimensional forecasting problem. To identify the most predictive industries, we present a bootstrap Granger Causality test based on the Adaptive Lasso. This test is more powerful than the standard Wald test in such high-dimensional settings. Forecast accuracy is improved by using only the most predictive industries rather than all industries.
Bootstrap; Granger Causality; Lasso; Sentiment surveys; Time series forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221716301874
Wilms, Ines
Gelper, Sarah
Croux, Christophe
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:543-5562016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:543-556
article
Origin and early evolution of corner polyhedra
Corner Polyhedra are a natural intermediate step between linear programming and integer programming. This paper first describes how the concept of Corner Polyhedra arose unexpectedly from a practical operations research problem, and then describes how it evolved to shed light on fundamental aspects of integer programming and to provide a great variety of cutting planes for integer programming.
Integer programming; Cutting; Linear programming; Corner polyhedra;
http://www.sciencedirect.com/science/article/pii/S0377221716301114
Gomory, Ralph
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:19-282016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:19-28
article
Eidetic Wolf Search Algorithm with a global memory structure
Fong, Simon
A recently proposed metaheuristic called the Wolf Search Algorithm (WSA) has demonstrated its efficacy on various hard-to-solve optimization problems. In this paper, an improved version of WSA, namely Eidetic-WSA with a global memory structure (GMS), or simply eWSA, is presented. eWSA uses the GMS to improve its search for the optimal fitness value by preventing mediocre visited places in the search space from being revisited in future iterations. As in other swarm intelligence methods, the search agents in eWSA and the traditional WSA converge to an optimal solution although the agents behave and make decisions autonomously. Heuristic information gathered from the collective memory of the swarm search agents is stored in the GMS; this information eventually leads to faster convergence and improved optimal fitness. The concept is similar to a hybrid metaheuristic based on WSA and Tabu Search. eWSA is rigorously tested on seven standard optimization functions. In particular, eWSA is compared with two state-of-the-art metaheuristics, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), with which it shares some similarity with respect to directed random search. The similarity with ACO is stronger, however, as ACO uses pheromones as global information references that balance the use of previous knowledge against the exploration of new solutions. Under comparable experimental settings (identical population size and number of generations), eWSA is shown to outperform both ACO and PSO with statistical significance. When the same computation time is dedicated, only ACO is outperformed, owing to eWSA's comparably long run time per iteration.
Metaheuristics; Wolf Search Algorithm; Global memory structure; Ant Colony Optimization; Particle Swarm Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301898
Deb, Suash
Hanne, Thomas
Li, Jinyan (Leo)
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:188-2012016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:188-201
article
Timing of service investments for retailers under competition and demand uncertainty
We study how retailers can time their service investments when demand for a product is uncertain and consumers care about both price and service when choosing which retailer to buy from. By “service” we mean activities a retailer can invest in and which can drive traffic into the store. We consider offering extended operating hours as an example of such service and examine the timing of service investments for two competing retailers. Specifically, we analyze two retailers who compete on price and service level, and characterize the prices, the service levels, and the timing of their service investment decisions. Our model also considers two effects of retailer service—the effect on total demand for the product and the effect on a retailer’s market share. We show that investing in service before demand realization, although counterintuitive, can be beneficial for competing retailers. On the other hand, a large mismatch between actual and expected demand and a low probability of high demand justify postponing service investments until after demand is observed. We also show that the incentive to invest in service before demand realization becomes more pronounced when service investments can increase the overall demand for the product in addition to protecting market share. Our findings have important implications for retailers with regard to the timing of their service investment decisions.
Retail; Service; Uncertainty; Competition; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716301515
Perdikaki, Olga
Kostamis, Dimitris
Swaminathan, Jayashankar M.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:428-4402016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:428-440
article
Carbon efficiency evaluation: An analytical framework using fuzzy DEA
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g., carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured with crisp input and output data (desirable and undesirable). However, the input and output data, such as CO2 emissions, in real-world evaluation problems are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
Energy efficiency; Data envelopment analysis; Fuzzy expected interval; Fuzzy expected value; Fuzzy ranking approach;
http://www.sciencedirect.com/science/article/pii/S0377221716300340
Ignatius, Joshua
Ghasemi, M.-R.
Zhang, Feng
Emrouznejad, Ali
Hatami-Marbini, Adel
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:200-2072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:200-207
article
Forward search outlier detection in data envelopment analysis
In this paper we tackle the problem of outlier detection in data envelopment analysis (DEA). We propose a procedure where we merge the super-efficiency DEA and the forward search. Since DEA provides efficiency scores which are not parameters to fit the model to the data, we introduce a distance, to be monitored along the search. This distance is obtained through the integration of a regression model and the super-efficiency DEA. We simulate a Cobb–Douglas production function and we compare the super-efficiency DEA and the forward search analysis in both uncontaminated and contaminated settings. For inference about outliers, we exploit envelopes obtained through Monte Carlo simulations.
Data envelopment analysis (DEA); Super-efficiency; Forward search; Outlier detection;
http://www.sciencedirect.com/science/article/pii/S0377221711006254
Bellini, Tiziano
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:348-3572016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:348-357
article
From deterministic to stochastic surrender risk models: Impact of correlation crises on economic capital
In this paper we raise the question of using a stochastic model of the surrender rate instead of the classical S-shaped deterministic curve (as a function of the spread), still used in almost all insurance companies. For extreme scenarios, due to the lack of data, it could be tempting to assume that surrenders are conditionally independent with respect to an S-curve disturbance. However, we explain why this conditional independence between policyholders' decisions, which has the advantage of being the simplest assumption, looks particularly ill-suited when the spread increases. Indeed, the correlation between policyholders' decisions is most likely to increase in this situation. We suggest and develop a simple model that integrates these phenomena. Using stochastic orders, it can be compared qualitatively to the conditional independence approach. In a partially internal Solvency II model, we quantify the impact of the correlation phenomenon on a real life portfolio for a global risk management strategy.
Risk management; Applied probability; Life insurance; Surrender risk; Correlation risk;
http://www.sciencedirect.com/science/article/pii/S0377221711003821
Loisel, Stéphane
Milhaud, Xavier
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:108-1192016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:108-119
article
Supply chain coordination with controllable lead time and asymmetric information
This paper considers coordinated decisions in a decentralized supply chain consisting of a vendor and a buyer with controllable lead time. We analyze two supply chain inventory models. In the first model we assume the vendor has complete information about the buyer’s cost structure. By taking both the vendor’s and the buyer’s individual rationalities into consideration, a side payment coordination mechanism is designed to realize supply chain Pareto dominance. In the second model we consider a setting where the buyer possesses private cost information. We design the coordination mechanism using a principal–agent model to induce the buyer to report his true cost structure. Solution procedures are also developed to obtain the optimal solutions of these two models. The results of numerical examples show that shortening the lead time to a certain extent can reduce inventory cost, and that the coordination mechanisms designed for both the symmetric and asymmetric information situations are effective.
Supply chain management; Asymmetric information; Controllable lead time; Side payment; Coordination mechanism;
http://www.sciencedirect.com/science/article/pii/S037722171100806X
Li, Yina
Xu, Xuejun
Zhao, Xiande
Yeung, Jeff Hoi Yan
Ye, Fei
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:239-2512016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:239-251
article
Optimization models for assessing the peak capacity utilization of intelligent transportation systems
With limited economic and physical resources, it is not feasible to continually expand transportation infrastructure to adequately support the rapid growth in its usage. This is especially true for traffic coordination systems, where the expansion of road infrastructure has not been able to keep pace with the increasing number of vehicles, thereby resulting in congestion and delays. Hence, in addition to striving for the construction of new roads, it is imperative to develop new intelligent transportation management and coordination systems. The effectiveness of a new technique can be evaluated by comparing it with the optimal capacity utilization. If this comparison indicates that substantial improvements are possible, then the cost of developing and deploying an intelligent traffic system can be justified. Moreover, developing an optimization model can also help in capacity planning. For instance, at a given level of demand, if the optimal solution worsens significantly, this implies that no amount of intelligent strategies can handle this demand, and expanding the infrastructure would be the only alternative. In this paper, we demonstrate these concepts through a case study of scheduling vehicles on a grid of intersecting roads. We develop two optimization models, namely a mixed integer programming model and a space–time network flow model, and show that the latter is substantially more effective. Moreover, we prove that the problem is strongly NP-hard and develop two polynomial-time heuristics. The heuristic solutions are then compared with the optimal capacity utilization obtained using the space–time network model. We also present important managerial implications.
Traffic; Transportation; Integer programming; Intelligent system; Space–time network;
http://www.sciencedirect.com/science/article/pii/S0377221711006643
Shah, Nirav
Kumar, Subodha
Bastani, Farokh
Yen, I-Ling
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:252-2542016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:252-254
article
A note on pricing with risk aversion
We consider the pricing problem of a risk-averse seller facing uncertain demand. Demand uncertainty stems from buyers’ valuations being privately observed. By imposing very mild restrictions on the distribution of buyers’ valuations (an increasing generalized failure rate distribution) and the Bernoulli utility function, we show that a risk-averse seller will unambiguously post a lower price than a risk-neutral counterpart.
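The qualitative result can be illustrated numerically. In the hypothetical example below (not from the paper), buyer valuations are Uniform(0,1), which has an increasing generalized failure rate; a risk-neutral seller maximizes expected revenue (1-p)p, while a risk-averse seller with the concave Bernoulli utility u(x)=sqrt(x) maximizes (1-p)u(p), and posts a strictly lower price:

```python
# Hypothetical sketch: compare optimal posted prices for risk-neutral vs.
# risk-averse sellers when a buyer with valuation v ~ Uniform(0,1) buys iff v >= p.
def optimal_price(utility, grid=10_000):
    """Grid search for the price maximizing expected utility (1 - p) * u(p)."""
    prices = [i / grid for i in range(1, grid)]
    return max(prices, key=lambda p: (1 - p) * utility(p))

p_neutral = optimal_price(lambda x: x)        # maximizes (1-p)p, optimum at 1/2
p_averse = optimal_price(lambda x: x ** 0.5)  # maximizes (1-p)sqrt(p), optimum at 1/3
```

The concave utility penalizes the high-revenue/low-probability outcome, pulling the optimal price down, consistent with the paper's conclusion.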
Economics; Pricing; Risk management; Utility theory;
http://www.sciencedirect.com/science/article/pii/S037722171100659X
Colombo, Luca
Labrecciosa, Paola
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:633-6422016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:633-642
article
Multi-criteria diagnosis of control knowledge for cartographic generalisation
The development of interactive map websites increases the need for efficient automatic cartographic generalisation. The generalisation process, which aims at decreasing the level of detail of geographic data in order to produce a map at a given scale, is extremely complex. A classical method for automating the generalisation process consists in using a heuristic tree-search strategy. This type of strategy requires high-quality control knowledge (heuristics) to guide the search for the optimal solution. Unfortunately, this control knowledge is rarely perfect and its evaluation is often difficult. Yet, this evaluation can be very useful for managing knowledge and determining when to revise it. The objective of our work is to offer an automatic method for evaluating the quality of control knowledge for cartographic generalisation based on a heuristic tree-search strategy. Our diagnosis method consists in analysing the system’s execution logs and using a multi-criteria analysis method to evaluate the global quality of the knowledge. We present an industrial application as a case study using this method for building block generalisation; this experiment shows promising results.
(S) Multiple criteria analysis; (S) Knowledge-based systems; Control knowledge quality diagnosis; Heuristic tree-search strategy; Cartographic generalisation;
http://www.sciencedirect.com/science/article/pii/S0377221711009039
Taillandier, Patrick
Taillandier, Franck
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:175-1852016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:175-185
article
Improving envelopment in Data Envelopment Analysis under variable returns to scale
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple input and output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single input multiple output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs created by adjusting input and output levels of certain observed relatively efficient DMUs, in a manner which reflects a combination of technical information and the decision maker’s value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs, which are in line with the general VRS technology. The suggested procedure is illustrated using real data.
Data Envelopment Analysis; Efficiency; Productivity; Unobserved DMUs; Value judgements;
http://www.sciencedirect.com/science/article/pii/S0377221711009088
Thanassoulis, Emmanuel
Kortelainen, Mika
Allen, Rachel
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:687-6962016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:687-696
article
Data envelopment analysis models of investment funds
This paper develops theory missing in the sizable literature that uses data envelopment analysis to construct return–risk ratios for investment funds. It explores the production possibility set of the investment funds to identify an appropriate form of returns to scale. It discusses what risk and return measures can justifiably be combined and how to deal with negative risks, and identifies suitable sets of measures. It identifies the problems of failing to deal with diversification and develops an iterative approximation procedure to deal with it. It identifies relationships between diversification, coherent measures of risk and stochastic dominance. It shows how the iterative procedure makes a practical difference using monthly returns of 30 hedge funds over the same time period. It discusses possible shortcomings of the procedure and offers directions for future research.
Data envelopment analysis; Investment fund; Diversification; Coherent risk measure; Returns to scale; Stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S0377221711007600
Lamb, John D.
Tee, Kai-Hong
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:600-6082016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:600-608
article
A heuristic method to schedule training programs for Small and Medium Enterprises
During their incubation period, Small and Medium Enterprises (SMEs) need training programs to acquire the knowledge required to survive and succeed in the business environment. This paper presents a heuristic method based on an optimization model to schedule these programs at the most suitable times. Based on the proposed heuristic, each training program is implemented at a suitable time by considering the SMEs’ requirements and some other logical constraints. The proposed heuristic is described in detail, and its implementation is demonstrated via a real-life numerical example. The numerical results of the heuristic are compared with those of other methods.
Course scheduling; Heuristics; Incubators; Job scheduling; Small and Medium Enterprises;
http://www.sciencedirect.com/science/article/pii/S0377221711008083
Rezaei, Mahmood
Shamsaei, Fahimeh
Mohammadian, Iman
Van Vyve, Mathieu
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:146-1552016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:146-155
article
Simulation-based Selectee Lane queueing design for passenger checkpoint screening
There are two kinds of passenger checkpoint screening lanes in a typical US airport: a Normal Lane and a Selectee Lane that has enhanced scrutiny. The Selectee Lane is not effectively utilized in some airports due to the small number of passengers selected to go through it. In this paper, we propose a simulation-based Selectee Lane queueing design framework to study how to effectively utilize the Selectee Lane resource. We assume that passengers are classified into several risk classes via a passenger prescreening system. We consider how to assign passengers from different risk classes to the Selectee Lane based on how many passengers are already in it. The main objective is to maximize the screening system’s probability of true alarm. We first discuss a steady-state model, formulate it as a nonlinear binary integer program, and propose a rule-based heuristic. Then, a simulation framework is constructed and a neighborhood search procedure is proposed to generate possible solutions based on the heuristic solution of the steady-state model. Using the passenger arrival patterns from a medium-size airport, we conduct a detailed case study. We observe that the heuristic solution from the steady-state model results in a relative increase of more than 4% in the probability of true alarm with respect to current practice. Moreover, starting from the heuristic solution, we obtain even better solutions in terms of both the probability of true alarm and the expected time in system via a neighborhood search procedure.
Passenger checkpoint screening; Selectee Lane; Queueing design; Nonlinear binary integer programming; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711010897
Nie, Xiaofeng
Parab, Gautam
Batta, Rajan
Lin, Li
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:316-3262016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:316-326
article
Identifying large robust network clusters via new compact formulations of maximum k-club problems
Network robustness issues are crucial in a variety of application areas. In many situations, one of the key robustness requirements is the connectivity between each pair of nodes through a path that is short enough, which makes a network cluster more robust with respect to potential network component disruptions. A k-club, which by definition is a subgraph with a diameter of at most k, is a structure that addresses this requirement (assuming that k is small enough with respect to the size of the original network). We develop a new compact linear 0–1 programming formulation for finding maximum k-clubs that has substantially fewer entities compared to the previously known formulation (O(kn^2) instead of O(n^(k+1)), which is important in the general case of k > 2) and is rather tight despite its compactness. Moreover, we introduce a new related concept referred to as an R-robust k-club (or (k,R)-club), which naturally arises from the developed k-club formulations and extends the standard definition of a k-club by explicitly requiring that there be at least R distinct paths of length at most k between all pairs of nodes. A compact formulation for the maximum R-robust k-club problem is also developed, and the error and attack tolerance properties of the important special case of R-robust 2-clubs are investigated. Computational results are presented for multiple types of random graph instances.
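The k-club definition has a subtlety worth making concrete: the length-k paths must stay inside the induced subgraph, so a set whose members are pairwise close in the full graph need not be a k-club. An illustrative feasibility check (not the paper's 0–1 formulation; function name is ours):

```python
from collections import deque

def is_k_club(adj, subset, k):
    """True iff the subgraph induced by `subset` has diameter <= k.

    `adj` is an adjacency dict {node: [neighbors]}. BFS is restricted to
    nodes of `subset`, per the k-club definition.
    """
    S = set(subset)
    for source in S:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in S and v not in dist:  # only traverse induced edges
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if any(w not in dist or dist[w] > k for w in S):
            return False
    return True
```

For example, in the star 0–2, 1–2, the set {0, 1} is not a 2-club even though 0 and 1 are at distance 2 in the full graph: their only connecting path runs through node 2, which lies outside the set.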
Combinatorial optimization; Graph theory; Robust network clusters; k-clubs; R-robust k-clubs; Compact 0–1 formulations;
http://www.sciencedirect.com/science/article/pii/S0377221711009477
Veremyev, Alexander
Boginski, Vladimir
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:759-7672016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:759-767
article
Portfolio symmetry and momentum
This paper presents a novel theoretical framework to model the evolution of a dynamic portfolio (i.e., a portfolio whose weights vary over time) under a given investment policy. The framework is based on graph theory and quantum probability. Embedding the dynamics of a portfolio into a graph, with each node of the graph representing a plausible portfolio, we provide the probabilities for a dynamic portfolio to lie on different nodes of the graph, characterizing its optimality in terms of returns. The framework embeds cross-sectional phenomena, such as the momentum effect, in stochastic processes, using portfolios instead of individual stocks. We apply our methodology to an investment policy similar to the momentum strategy of Jegadeesh and Titman (1993). We find that the strategy's symmetry is a source of momentum.
(P) Finance; Graph theory; Momentum; Quantum probability; Spectral analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711004188
Billio, Monica
Calès, Ludovic
Guégan, Dominique
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:132-1392016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:132-139
article
Mixture cure models in credit scoring: If and when borrowers default
Mixture cure models were originally proposed in medical statistics to model long-term survival of cancer patients in terms of two distinct subpopulations – those that are cured of the event of interest and will never relapse, along with those that are uncured and are susceptible to the event. In the present paper, we introduce mixture cure models to the area of credit scoring, where, similarly to the medical setting, a large proportion of the dataset may not experience the event of interest during the loan term, i.e. default. We estimate a mixture cure model predicting (time to) default on a UK personal loan portfolio, and compare its performance to the Cox proportional hazards method and standard logistic regression. Results for credit scoring at an account level and prediction of the number of defaults at a portfolio level are presented; model performance is evaluated through cross-validation on discrimination and calibration measures. Discrimination performance for all three approaches was found to be high and competitive. Calibration performance for the survival approaches was found to be superior to logistic regression for intermediate time intervals and useful for fixed 12-month time horizon estimates, reinforcing the flexibility of survival analysis as both a risk-ranking tool and a means of providing robust estimates of the probability of default over time. Furthermore, the mixture cure model’s ability to distinguish between two subpopulations can offer additional insights by estimating the parameters that determine susceptibility to default in addition to parameters that influence the time to default of a borrower.
Credit scoring; Survival analysis; Mixture cure models; Regression; Risk analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711009064
Tong, Edward N.C.
Mues, Christophe
Thomas, Lyn C.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:379-3852016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:379-385
article
Uncertainty index based interval assignment by Interval AHP
In a multi-attribute decision making problem, values are assigned to attributes based on a decision maker’s subjective judgments. The given judgments are often uncertain, because of the uncertainty of situations and the intuitiveness of human judgments. In order to reflect the uncertainty in the assigned values, they are denoted as intervals whose widths represent the possibilities of the attributes. Since it is difficult for a decision maker to assign values directly to attributes when there are more than two attributes, he/she gives a pairwise comparison matrix by comparing two attributes at a time. The given matrix contains two kinds of uncertainty: inconsistency among comparisons and incompleteness of comparisons. This paper proposes models to obtain intervals of attributes from the given uncertain pairwise comparison matrix. First, the uncertainty indexes of a set of intervals are defined from the viewpoints of entropy in probability, the sum or maximum of widths, or ignorance. Then, considering that overly uncertain information is not useful, the intervals of the attributes are obtained by minimizing their uncertainty indexes.
Decision analysis; Interval analysis; Analytic Hierarchy Process; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221712000276
Entani, Tomoe
Sugihara, Kazutomi
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:356-3662016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:356-366
article
The implications of postponement on contract design and channel performance
We analyze a supply chain consisting of one manufacturer and one retailer under consignment sales with a revenue sharing contract. The manufacturer produces before the demand curve is revealed, but sets the price at which the products are sold through the retailer afterwards. The retailer deducts a fraction from the selling price for each unit sold and remits the balance to the manufacturer. We refer to the capability whereby firms delay the price decision and make sales in response to actual market conditions as postponement. We find that, when market demand admits a multiplicative structure, the revenue share and the allocation of channel profit between the firms when they have postponement capability are similar to when they do not. Postponement improves the profits of the individual firms. This effect is more pronounced in the centralized system than in the decentralized system, and when market demand is more sensitive to price changes. However, postponement causes the profit loss, defined as the percentage deviation of channel profit in the decentralized system relative to the centralized system, to worsen, and the gap widens with the retailer’s sales cost. When demand has an additive structure, while the roles of postponement in the firms’ decisions differ slightly from those under the multiplicative structure, the structure of the strategic interactions between the firms and the relative channel performance are not significantly altered.
Supply chain management; Consignment; Revenue sharing; Postponement;
http://www.sciencedirect.com/science/article/pii/S0377221711006849
Jiang, Li
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:629-6382016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:629-638
article
Generating and improving orthogonal designs by using mixed integer programming
Analysts faced with conducting experiments involving quantitative factors have a variety of potential designs in their portfolio. However, in many experimental settings involving discrete-valued factors (particularly if the factors do not all have the same number of levels), none of these designs are suitable. In this paper, we present a mixed integer programming (MIP) method that is suitable for constructing orthogonal designs, or improving existing orthogonal arrays, for experiments involving quantitative factors with limited numbers of levels of interest. Our formulation makes use of a novel linearization of the correlation calculation. The orthogonal designs we construct do not satisfy the definition of an orthogonal array, so we do not advocate their use for qualitative factors. However, they do allow analysts to study, without sacrificing balance or orthogonality, a greater number of quantitative factors than is possible with orthogonal arrays that have the same number of runs.
Orthogonal design creation; Design of experiments; Statistics;
http://www.sciencedirect.com/science/article/pii/S0377221711006072
Vieira Jr., Hélcio
Sanchez, Susan
Kienitz, Karl Heinz
Belderrain, Mischel Carmen Neyra
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:477-4862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:477-486
article
Coordination via cost and revenue sharing in manufacturer–retailer channels
The problem of establishing efficiency in a manufacturer–retailer channel (channel coordination) is extensively discussed in the industrial economics, the marketing and the operations research literature. However, studies considering consumer demand to be simultaneously affected by price and non-price variables are scarce. One subset of models investigates efficient contracts with non-linear tariffs, but requires mechanisms which are rarely observed in managerial practice. The other subset analyses channel efficiency effects of alternative royalty payments, but omits to design an efficient contract. We contribute to this literature by investigating a contract of royalty payments that is sufficient for channel coordination. Based on the analysis of the underlying vertical externalities, we show that channel coordination requires cost and revenue sharing via a revenue sharing rate and marketing effort participation rates on both manufacturer and retailer level. Some surprising findings are highlighted: there exists a continuum of efficient contracts. Efficiency requires a retailer’s participation of at least 50% in the manufacturer’s cost of marketing effort. Moreover, the elimination of double marginalisation is not necessary for channel coordination. Manufacturer and retailer can choose an efficient contract via bargaining over the wholesale price. The main challenge for managers will be to create acceptance of new types of royalty payments based on a trustful manufacturer–retailer relationship. We also discuss the cases of the Apple iPhone market launch and of innovative restaurant franchising to further illustrate and underline the relevance of our results.
Marketing; Channel coordination; Cooperative advertising; Revenue sharing; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221711006035
Kunter, Marcus
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:789-800 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:789-800
article
Supply chain design considering economies of scale and transport frequencies
In this paper we consider a 3-echelon, multi-product supply chain design model with economies of scale in transport and warehousing that explicitly takes transport frequencies into consideration. Our model simultaneously optimizes locations and sizes of tank farms, material flows, and transport frequencies within the network. We consider all relevant costs: product cost, transport cost, tank rental cost, tank throughput cost, and inventory cost. The problem is based on a real-life example from a chemical company. We show that considering economies of scale and transport frequencies in the design stage is crucial and that failing to do so can lead to substantially higher costs than optimal. We solve a wide variety of problems with branch-and-bound and with efficient solution heuristics that we develop based on iterative linearization techniques. We show that the heuristics are superior to the standard branch-and-bound technique for large problems like the one of the chemical company that motivated our research.
Supply chain design; Economies of scale; Transport frequencies; Iterative linearization;
http://www.sciencedirect.com/science/article/pii/S0377221711010411
Baumgartner, Kerstin
Fuetterer, André
Thonemann, Ulrich W.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:163-174 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:163-174
article
A new methodology for generating and combining statistical forecasting models to enhance competitive event prediction
Forecasting methods are routinely employed to predict the outcome of competitive events (CEs) and to shed light on the factors that influence participants’ winning prospects (e.g., in sports events, political elections). Combining statistical models’ forecasts, shown to be highly successful in other settings, has been neglected in CE prediction. Two particular difficulties arise when developing model-based composite forecasts of CE outcomes: the intensity of rivalry among contestants, and the strength/diversity trade-off among individual models. To overcome these challenges we propose a range of surrogate measures of event outcome to construct a heterogeneous set of base forecasts. To effectively extract the complementary information concealed within these predictions, we develop a novel pooling mechanism which accounts for competition among contestants: a stacking paradigm integrating conditional logit regression and log-likelihood-ratio-based forecast selection. Empirical results using data related to horseracing events demonstrate that: (i) base model strength and diversity are important when combining model-based predictions for CEs; (ii) average-based pooling, commonly employed elsewhere, may not be appropriate for CEs (because average-based pooling exclusively focuses on strength); and (iii) the proposed stacking ensemble provides statistically and economically accurate forecasts. These results have important implications for regulators of betting markets associated with CEs and in particular for the accurate assessment of market efficiency.
Forecasting; Forecast combination; Competitive event prediction;
http://www.sciencedirect.com/science/article/pii/S0377221711009714
Lessmann, Stefan
Sung, Ming-Chien
Johnson, Johnnie E.V.
Ma, Tiejun
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:652-658 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:652-658
article
Problem structuring methods ‘in the Dock’: Arguing the case for Soft OR
Problem structuring methods (‘soft’ OR) have been around for approximately 40 years, and yet these methods are still very much overlooked in the OR world. Whilst there are almost certainly a number of explanations for this, two key stumbling blocks are: (1) the subjective nature of the modelling, which yields insights rather than testable results, and (2) the demand on users to manage both content (through modelling) and process (working with, rather than ‘on behalf of’, groups). However, as evidenced from practice, there are also a number of significant benefits. This paper therefore aims to examine the case for Soft OR by weighing the arguments for and against problem structuring methods.
Problem structuring; Practice of OR; Mixing Methods;
http://www.sciencedirect.com/science/article/pii/S0377221711010010
Ackermann, Fran
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:434-444 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:434-444
article
Information security trade-offs and optimal patching policies
We develop and simulate a basic mathematical model of the costly deployment of software patches in the presence of trade-offs between confidentiality and availability. The model incorporates representations of the key aspects of the system architecture, the managers’ preferences, and the stochastic nature of the threat environment. Using the model, we compute the optimal frequencies for regular and irregular patching, for both networks and clients, for two example types of organization, military and financial. Such examples are characterized by their constellations of parameters. Military organizations, being relatively less cost-sensitive, tend to apply network patches upon their arrival. The relatively high cost of applying irregular client patches leads both types of organization to avoid deployment upon arrival.
Information security; Optimal policy; Risk reduction; Stochastic processes;
http://www.sciencedirect.com/science/article/pii/S037722171100498X
Ioannidis, Christos
Pym, David
Williams, Julian
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:312-325 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:312-325
article
Matching product architecture with supply chain design
Product architecture is typically established in the early stages of the product development (PD) cycle. Depending on the type of architecture selected, product design, manufacturing processes, and ultimately supply chain configuration are all significantly affected. Therefore, it is important to integrate product architecture decisions with manufacturing and supply chain decisions during the early stage of the product development. In this paper, we present a multi-objective optimization framework for matching product architecture strategy to supply chain design. In contrast to the existing operations management literature, we incorporate the compatibility between the supply chain partners into our model to ensure the long term viability of the supply chain. Since much of the supplier related information may be very subjective in nature during the early stages of PD, we use fuzzy logic to compute the compatibility index of a supplier. The optimization model is formulated as a weighted goal programming (GP) model with two objectives: minimization of total supply chain costs, and maximization of total supply chain compatibility index. The GP model is solved by using genetic algorithm. We present case examples for two different products to demonstrate the model’s efficacy, and present several managerial implications that evolved from this study.
Product architecture; Supply chain design; Modular strategy; Product development;
http://www.sciencedirect.com/science/article/pii/S0377221711006734
Nepal, Bimal
Monplaisir, Leslie
Famuyiwa, Oluwafemi
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:598-610 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:598-610
article
The Home Care Crew Scheduling Problem: Preference-based visit clustering and temporal dependencies
In the Home Care Crew Scheduling Problem a staff of home carers has to be assigned a number of visits to patients’ homes, such that the overall service level is maximised. The problem is a generalisation of the vehicle routing problem with time windows. Required travel time between visits and time windows of the visits must be respected. The challenge when assigning visits to home carers lies in the existence of soft preference constraints and in temporal dependencies between the start times of visits.
Home care; Crew scheduling; Vehicle routing; Generalised precedence constraints; Branch-and-price; Set partitioning;
http://www.sciencedirect.com/science/article/pii/S0377221711009891
Rasmussen, Matias Sevel
Justesen, Tor
Dohn, Anders
Larsen, Jesper
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:651-661 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:651-661
article
Short sales in Log-robust portfolio management
This paper extends the Log-robust portfolio management approach to the case with short sales, i.e., the case where the manager can sell shares he does not yet own. We model the continuously compounded rates of return, which have been established in the literature as the true drivers of uncertainty, as uncertain parameters belonging to polyhedral uncertainty sets, and maximize the worst-case portfolio wealth over that set in a one-period setting. The degree of the manager's aversion to ambiguity is incorporated through a single, intuitive parameter, which determines the size of the uncertainty set. The presence of short-selling requires the development of problem-specific techniques, because the optimization problem is not convex. In the case where assets are independent, we show that the robust optimization problem can be solved exactly as a series of linear programming problems; as a result, the approach remains tractable for large numbers of assets. We also provide insights into the structure of the optimal solution. In the case of correlated assets, we develop and test a heuristic where correlation is maintained only between assets invested in. In computational experiments, the proposed approach exhibits superior performance to that of the traditional robust approach.
Robust optimization; Nonlinear optimization; Portfolio management;
http://www.sciencedirect.com/science/article/pii/S0377221711005716
Kawas, Ban
Thiele, Aurélie
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:114-122 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:114-122
article
Arithmetic Brownian motion and real options
We treat real option value when the underlying process is arithmetic Brownian motion (ABM). In contrast to the more common assumption of geometric Brownian motion (GBM) and multiplicative diffusion, with ABM the underlying project value is expressed as an additive process. Its variance remains constant over time rather than rising or falling along with the project’s value, even admitting the possibility of negative values. This is a more compelling paradigm for projects that are managed as a component of overall firm value. After outlining the case for ABM, we derive analytical formulas for European calls and puts on dividend-paying assets as well as a numerical algorithm for American-style and other more complex options based on ABM. We also provide examples of their use.
Investment analysis; Real options; Risk-neutral valuation; Arithmetic Brownian motion;
http://www.sciencedirect.com/science/article/pii/S0377221711011003
Alexander, David Richard
Mo, Mengjia
Stent, Alan Fraser
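The record above refers to analytical formulas for European calls and puts under arithmetic Brownian motion (ABM). As an illustrative sketch only, here are the classical Bachelier-style formulas for options on a forward, with an assumed flat discount factor `df`; these are not necessarily the dividend-adjusted formulas derived in the paper:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(F, K, sigma, T, df=1.0):
    """European call on forward F under ABM (Bachelier model):
    C = df * [(F - K) * Phi(d) + sigma * sqrt(T) * phi(d)],
    with d = (F - K) / (sigma * sqrt(T))."""
    s = sigma * math.sqrt(T)
    d = (F - K) / s
    return df * ((F - K) * norm_cdf(d) + s * norm_pdf(d))

def bachelier_put(F, K, sigma, T, df=1.0):
    """European put under ABM, by symmetry of the normal distribution."""
    s = sigma * math.sqrt(T)
    d = (K - F) / s
    return df * ((K - F) * norm_cdf(d) + s * norm_pdf(d))
```

Note that put–call parity holds as C − P = df·(F − K), and the variance term σ√T is additive rather than proportional to the asset value, which is the ABM feature the abstract emphasizes.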
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:270-279 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:270-279
article
Assessing financial distress where bankruptcy is not an option: An alternative approach for local municipalities
The goal of this paper is to build an operational model for evaluating the financial viability of local municipalities in Greece. For this purpose, a multicriteria methodology is implemented, combining a simulation analysis approach (stochastic multicriteria acceptability analysis) with a disaggregation technique. In particular, an evaluation model is developed on the basis of accrual financial data from 360 Greek municipalities for 2007. A set of financial ratios customized to the local government context is defined to rate municipalities and distinguish those in good financial condition from those experiencing financial problems. The model’s results are analyzed on the 2007 data as well as on a subsample of 100 local governments in 2009. The model succeeded in correctly classifying distressed municipalities according to a benchmark set by the central government in 2010. Such a model and methodology could be particularly useful for performance assessment in several European Union countries that have a local government framework similar to the Greek one and apply accrual accounting techniques.
Local governments; Financial distress; Multiple criteria analysis; Financial ratios; Greece;
http://www.sciencedirect.com/science/article/pii/S0377221711009404
Cohen, Sandra
Doumpos, Michael
Neofytou, Evi
Zopounidis, Constantin
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:442-451 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:442-451
article
Linearized Nelson–Siegel and Svensson models for the estimation of spot interest rates
Linearized versions of the Nelson–Siegel (1987) and Svensson (1994) models for the cross-sectional estimation of spot yield curves from samples of coupon bonds are developed and analyzed. It is shown how these models can be made linear in the level, slope and curvature parameters and how prior information about these parameters can be incorporated in the estimation procedure. The performance of the linearized models is assessed in a Monte Carlo setting and with a sample of US government bonds. The results reveal that the linearized models compare favorably to the original models in terms of the stability of parameter estimates, computing effort and the prevalence of local optima.
Term structure of interest rates; Spot rate curves; Coupon bonds; Prior information; Linearization;
http://www.sciencedirect.com/science/article/pii/S0377221712000057
Gauthier, Geneviève
Simonato, Jean-Guy
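The key idea in the record above is that, with the decay parameter held fixed, the Nelson–Siegel curve is linear in the level, slope and curvature parameters. A minimal ordinary-least-squares sketch under that assumption (the decay value `lam` is a hypothetical input, and the paper's use of prior information and coupon-bond pricing is omitted):

```python
import numpy as np

def ns_basis(tau, lam):
    """Nelson-Siegel factor loadings for maturities tau and fixed decay lam:
    level = 1, slope = (1 - e^{-x})/x, curvature = slope - e^{-x}, x = lam*tau."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curv])

def fit_nelson_siegel(tau, yields, lam):
    """With lam fixed, estimating (beta0, beta1, beta2) is plain OLS."""
    X = ns_basis(tau, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta
```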
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:367-375 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:367-375
article
On the ordinal equivalence of the Johnston, Banzhaf and Shapley power indices
In this paper, we characterize the games in which Johnston, Shapley–Shubik and Penrose–Banzhaf–Coleman indices are ordinally equivalent, meaning that they rank players in the same way. We prove that these three indices are ordinally equivalent in semicomplete simple games, which is a newly defined class that contains complete games and includes most of the real-world examples of binary voting systems. This result constitutes a twofold extension of Diffo Lambo and Moulen’s result (Diffo Lambo and Moulen, 2002) in the sense that ordinal equivalence emerges for three power indices (not just for the Shapley–Shubik and Penrose–Banzhaf–Coleman indices), and it holds for a class of games strictly larger than the class of complete games.
Game theory; Decision support systems; Simple games; Complete simple games; Power indices; Ordinal equivalence;
http://www.sciencedirect.com/science/article/pii/S0377221711006606
Freixas, Josep
Marciniak, Dorota
Pons, Montserrat
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:225-231 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:225-231
article
A deterministic resource scheduling model in epidemic control: A case study
The resources available to tackle an epidemic infection are usually limited, while the time and effort required to control it are increasing functions of the starting time of the containment effort. The problem of scheduling limited available resources, when there are several areas where the population is infected, is considered. A deterministic model, appropriate for large populations, where random interactions can be averaged out, is used for the epidemic’s rate of spread. The problem is tackled using the concept of deteriorating jobs, i.e. the model represents increasing loss rate as more susceptibles become infected, and increasing time and effort needed for the epidemic’s containment. A case study for a proposed application of the model in the case of the mass vaccination against A(H1N1)v influenza in the Attica region, Greece and a comparative study of the model’s performance vs. the applied random practice are presented.
Scheduling; Disaster management; Deteriorating jobs; Case study;
http://www.sciencedirect.com/science/article/pii/S0377221711006114
Rachaniotis, Nikolaos P.
Dasaklis, Tom K.
Pappis, Costas P.
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:70-82 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:70-82
article
A hierarchy of relaxations for linear generalized disjunctive programming
Generalized disjunctive programming (GDP), originally developed by Raman and Grossmann (1994), is an extension of the well-known disjunctive programming paradigm developed by Balas in the mid 70s in his seminal technical report (Balas, 1974). This mathematical representation of discrete-continuous optimization problems, which represents an alternative to the mixed-integer program (MIP), led to the development of customized algorithms that successfully exploited the underlying logical structure of the problem. The underlying theory of these methods, however, borrowed only in a limited way from the theories of disjunctive programming, and the unique insights from Balas’ work have not been fully exploited.
Integer programming; Disjunctive programming; Hull relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221711006205
Sawaya, Nicolas
Grossmann, Ignacio
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:224-233 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:224-233
article
A binary particle swarm optimization algorithm inspired by multi-level organizational learning behavior
Recently, nature-inspired algorithms have increasingly attracted the attention of researchers. Because the position vectors in binary particle swarm optimization (BPSO) consist of ‘0’s and ‘1’s, they can be seen as decision behaviors (support or oppose). In this paper, we propose a BPSO with hierarchical structure (BPSO_HS for short), on the basis of multi-level organizational learning behavior. At each iteration of BPSO_HS, particles are divided into two classes, named ‘leaders’ and ‘followers’, and a different evolutionary strategy is used in each class. In addition, a mutation strategy is adopted to overcome premature convergence and slow convergence speed during the later stages of optimization. The algorithm was tested on two discrete optimization problems (Traveling Salesman and Bin Packing) as well as seven real-parameter functions. The experimental results showed that the performance of BPSO_HS was significantly better than that of several existing algorithms.
Binary particle swarm optimization; Multi-level organizational learning behavior; Hierarchical structure; Mutation strategy; Evolutionary strategy;
http://www.sciencedirect.com/science/article/pii/S0377221712000240
Bin, Wei
Qinke, Peng
Jing, Zhao
Xiao, Chen
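For context on the record above, the standard sigmoid-based BPSO update that hierarchical variants build on can be sketched as follows. This is plain BPSO on the OneMax toy objective, not the BPSO_HS variant from the paper, and all parameter values (inertia 0.7, acceleration 1.5, velocity clamp ±4) are illustrative assumptions:

```python
import math
import random

def bpso_onemax(n_bits=30, n_particles=20, iters=150, seed=1):
    """Minimal sigmoid-based binary PSO maximising OneMax (count of ones).
    Velocities are real-valued; a bit is set to 1 with probability
    sigmoid(velocity), following the classic Kennedy-Eberhart scheme."""
    rng = random.Random(seed)
    fit = sum                                        # OneMax fitness
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]                            # personal best positions
    g = max(P, key=fit)[:]                           # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_bits):
                v = (0.7 * V[i][j]
                     + 1.5 * rng.random() * (P[i][j] - X[i][j])
                     + 1.5 * rng.random() * (g[j] - X[i][j]))
                V[i][j] = max(-4.0, min(4.0, v))     # velocity clamp
                X[i][j] = 1 if rng.random() < sig(V[i][j]) else 0
            if fit(X[i]) > fit(P[i]):
                P[i] = X[i][:]
                if fit(P[i]) > fit(g):
                    g = P[i][:]
    return g, fit(g)
```

BPSO_HS differs by splitting the swarm into ‘leaders’ and ‘followers’ with distinct update strategies plus a mutation operator; the sketch above shows only the common substrate.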
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:252-263 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:252-263
article
The effectiveness of manufacturer vs. retailer rebates within a newsvendor framework
This paper studies the impact of direct rebates to the end customer from the manufacturer and/or from the retailer upon the profitability and effectiveness of the policies of both channels. Effectiveness is measured by the ratio of the retailer’s to the manufacturer’s profits and by the sum of the profits for the two parties across scenarios wherein at least one of the parties offers a rebate. The main result is to prove analytically the conditions under which either all three scenarios are equally profitable or the retailer-only rebate policy is dominant. Another important result is to illustrate the likelihood that the manufacturer is able to coordinate the supply chain, by the appropriate choice of its pricing and rebate policies, thereby inducing the retailer to do likewise with its associated best pricing, ordering and rebate policies. Finally, numerical examples highlight the main features of the paper.
Supply-chain management; Cross-functional interfaces; Operations; Marketing; Conceptual modeling;
http://www.sciencedirect.com/science/article/pii/S037722171100600X
Arcelus, F.J.
Kumar, Satyendra
Srinivasan, G.
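The newsvendor framework underlying the rebate analysis above reduces, in its textbook form, to ordering up to the critical fractile. A minimal sketch of that baseline only (this is not the paper's rebate model; `demand_ppf` is an assumed demand quantile function supplied by the caller):

```python
def newsvendor_quantity(price, cost, salvage, demand_ppf):
    """Classic newsvendor stocking rule: order q* = F^{-1}(r), where the
    critical ratio r = (price - cost) / (price - salvage) balances the
    underage cost (price - cost) against the overage cost (cost - salvage)."""
    critical_ratio = (price - cost) / (price - salvage)
    return demand_ppf(critical_ratio)
```

For example, with demand uniform on [0, 100], price 10, cost 4 and zero salvage, the critical ratio is 0.6 and the stocking quantity is 60 units.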
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:509-520 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:509-520
article
Copositive optimization – Recent developments and applications
Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.
Clique number; Completely positive matrix; Convexity gap; Crossing number; Robust optimization; Standard quadratic optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711003705
Bomze, Immanuel M.
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:546-558 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:546-558
article
Solving a continuous local access network design problem with a stabilized central column generation approach
In this paper, we focus on a variant of the multi-source Weber problem. In the multi-source Weber problem, the location of a fixed number of concentrators, and the allocation of terminals to them, must be chosen to minimize the total cost of links between terminals and concentrators. In our variant, we have a third hierarchical level, two categories of link costs, and the number of concentrators is unknown. To solve this difficult problem, we propose several heuristics, and use a new stabilized column generation approach, based on a central cutting plane method, to provide lower bounds.
Location; Combinatorial optimization; Column generation; Central cutting plane; Multi-source Weber problem;
http://www.sciencedirect.com/science/article/pii/S0377221711004462
Trampont, M.
Destré, C.
Faye, A.
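The building block of the multi-source Weber problem in the record above is the single-facility Weber problem, classically solved by the Weiszfeld iteration. A sketch of that subroutine only, under standard assumptions; the paper's hierarchical structure and stabilized column generation are not reproduced here:

```python
import math

def weiszfeld(points, iters=1000, tol=1e-9):
    """Weiszfeld iteration for the single-facility Weber problem: find the
    point minimising the sum of Euclidean distances to the given 2-D points.
    Each step replaces x by the distance-weighted average of the points."""
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]   # start at the centroid
    for _ in range(iters):
        num = [0.0, 0.0]
        den = 0.0
        for p in points:
            d = math.hypot(x[0] - p[0], x[1] - p[1])
            if d < tol:          # iterate coincides with a data point
                return p
            w = 1.0 / d
            num[0] += w * p[0]
            num[1] += w * p[1]
            den += w
        nxt = [num[0] / den, num[1] / den]
        if math.hypot(nxt[0] - x[0], nxt[1] - x[1]) < tol:
            return nxt
        x = nxt
    return x
```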
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:639-650 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:639-650
article
Strategic investment under uncertainty: A synthesis
Investment is a central theme in economics, finance, and operational research. Traditionally, the focus of analysis has been either on assessing the value of flexibility (investment under uncertainty) or on describing commitment effects in competitive settings (industrial organization). Research contributions addressing the intersection of investment under uncertainty and industrial organization have become numerous in recent years. In this paper, we provide an overview aimed at categorizing and relating these research streams. We highlight managerial insights concerning the nature of competitive advantage (first- versus second-mover advantage), the manner in which information is revealed, firm heterogeneity, capital increment size, and the number of competing firms.
Finance; Investment analysis; Real options; Strategic investment; Option games;
http://www.sciencedirect.com/science/article/pii/S0377221711004863
Chevalier-Roignant, Benoît
Flath, Christoph M.
Huchzermeier, Arnd
Trigeorgis, Lenos
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:439-447 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:439-447
article
A model for efficiency-based resource integration in services
Service processes, such as consulting, require coordinated efforts from the service recipient (client) and the service provider in order to deliver the desired output – a process known as resource integration. Client involvement directly affects the efficiency of service processes, thereby affecting capacity decisions. We present a mathematical model of the resource-integration decision for a service process through which the client and the service provider co-produce resource outputs. This workforce planning model is unique because we include the extent of client involvement as a policy variable and introduce to the resource-planning model efficiency and quality performance measures, which are functions of client involvement. The optimization of resource planning for services produces interesting policy prescriptions due to the presence of a client-modulated efficiency function in the capacity constraint and subjective client value placed on participation in the service process. The primary results of this research are optimal decision rules that provide insights into the optimal levels of client involvement and provider commitment in resource integration.
OR in manpower planning; Services; Coproduction; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221711008125
White, Sheneeta W.
Badinelli, Ralph D.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:333-341 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:333-341
article
Contracting with asymmetric demand information in supply chains
We solve a buyback contract design problem for a supplier who is working with a retailer who possesses private information about the demand distribution. We model the retailer’s private information as a space of either discrete or continuous demand states so that only the retailer knows its demand state, and the demand for the product is stochastically increasing in the state. We focus on contracts that are viable in practice, with the buyback price strictly less than the wholesale price, which is itself strictly less than the retail price. We derive the optimal (for the supplier) buyback contract that allows for arbitrary allocation of profits to the retailer (subject to the retailer’s reservation profit requirements) and show that in the limit this contract leads to the first-best solution, with the supplier keeping the entire channel’s profit (after the retailer’s reservation profit).
Supply chain management; Contracting; Asymmetric information; Return and buyback policies;
http://www.sciencedirect.com/science/article/pii/S0377221711008629
Babich, Volodymyr
Li, Hantao
Ritchken, Peter
Wang, Yunzeng
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:638-646 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:638-646
article
Whose deletion does not affect your payoff? The difference between the Shapley value, the egalitarian value, the solidarity value, and the Banzhaf value
This study provides a unified axiomatic characterization method for one-point solutions of cooperative games with transferable utilities. Any one-point solution that satisfies efficiency, the balanced cycle contributions property (BCC), and an axiom related to invariance under a player deletion is characterized as a corollary of our general result. BCC is a weaker requirement than the well-known balanced contributions property. Any one-point solution that is both symmetric and linear satisfies BCC. The invariance axioms require that the deletion of a specific player from a game does not affect the other players’ payoffs, where the relevant deletion differs across solutions. As corollaries of the above characterization result, we characterize the well-known one-point solutions, the Shapley, egalitarian, and solidarity values, in a unified manner. We also study characterizations of an inefficient one-point solution, the Banzhaf value, a well-known alternative to the Shapley value.
Game theory; Axiomatization; Shapley value; Egalitarian value; Solidarity value; Banzhaf value;
http://www.sciencedirect.com/science/article/pii/S0377221711007302
Kamijo, Yoshio
Kongo, Takumi
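Of the one-point solutions characterized in the record above, the Shapley value has a direct computational definition: average each player's marginal contribution over all orderings of the players. A brute-force sketch for small games (exponential in the number of players, illustration only; it does not implement the paper's axiomatic machinery):

```python
from itertools import permutations

def shapley(n, v):
    """Shapley value of an n-player TU game by enumeration: for every
    ordering of the players, accumulate each player's marginal contribution
    when joining the coalition of predecessors, then average.
    v maps a frozenset of players to a real payoff, with v(empty) = 0."""
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        coalition = set()
        for player in order:
            before = v(frozenset(coalition))
            coalition.add(player)
            phi[player] += v(frozenset(coalition)) - before
    return [p / len(perms) for p in phi]
```

As a check, in the three-player glove game (player 0 holds a left glove, players 1 and 2 each hold a right glove, and a pair is worth 1), the scarce left-glove holder receives 2/3 and each right-glove holder 1/6, and the payoffs sum to the grand-coalition worth (efficiency).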
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:134-145 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:134-145
article
Measuring the efficiency of highway maintenance contracting strategies: A bootstrapped non-parametric meta-frontier approach
Highly deteriorated US road infrastructure, major budgetary restrictions and the significant growth in traffic have led to an emerging need for improving performance of highway maintenance practices. Privatizing some portions of road maintenance operations by state Departments of Transportation (DOTs) under the auspices of performance-based contracts has been one of the innovative initiatives in response to such a need. This paper adapts the non-parametric meta-frontier framework to the two-stage bootstrapping technique to develop an analytical approach for evaluating the relative efficiency of two highway maintenance contracting strategies. The first strategy pertains to the 180 miles of Virginia’s Interstate highways maintained by Virginia DOT using traditional maintenance practices. The second strategy pertains to the 250 miles of Virginia’s Interstate highways maintained via a Public Private Partnership using a performance-based maintenance approach. The meta-frontier approach accounts for the heterogeneity that exists among different types of highway maintenance contracts due to different limitations and regulations. The two-stage bootstrapping technique accounts for the large set of uncontrollable factors that affect the highway deterioration processes. The preliminary findings, based on historical data for the state of Virginia, suggest that road authorities (counties) that have used traditional contracting seem to be more efficient at transforming maintenance expenditures into improved road conditions than road authorities that have used performance-based contracting. This paper recommends that road authorities use hybrid contracting approaches that include best practices of both traditional and performance-based highway maintenance contracting.
Data Envelopment Analysis; Meta-frontier; Bootstrapping; Highway maintenance contracting strategies; Performance-based contracting;
http://www.sciencedirect.com/science/article/pii/S0377221711010861
Fallah-Fini, Saeideh
Triantis, Konstantinos
de la Garza, Jesus M.
Seaver, William L.
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:45-56 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:45-56
article
Scheduling inspired models for two-dimensional packing problems
We propose two exact algorithms for two-dimensional orthogonal packing problems whose main components are simple mixed-integer linear programming models. Based on the different forms of time representation in scheduling formulations, we extend the concept of multiple time grids into a second dimension and propose a hybrid discrete/continuous-space formulation. By relying on events to continuously locate the rectangles along the strip height, we aim to reduce the size of the resulting mathematical problem when compared to a pure discrete-space model, with hopes of achieving a better computational performance. Through the solution of a set of 29 test instances from the literature, we show that this was mostly accomplished, primarily because the associated search strategy can quickly find good feasible solutions prior to the optimum, which may be very important in real industrial environments. We also provide a comprehensive comparison to seven other conceptually different approaches that have solved the same strip packing problems.
Optimization; Integer programming; Strip packing; Resource-Task Network; Spatial grids;
http://www.sciencedirect.com/science/article/pii/S0377221711005078
Castro, Pedro M.
Oliveira, José F.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:351-356 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:351-356
article
Effects of growth volatility on economic performance – Empirical evidence from Turkey
This paper examines the relationship between growth and growth volatility for a small open economy with high growth volatility: Turkey. Quarterly data for the period from 1987Q1 to 2007Q3 suggests that growth volatility reduces growth and that this result is robust under different specifications. This paper contributes to the literature by focusing on how growth volatility affects a set of variables that are crucial for growth. Empirical evidence from Turkey suggests that higher growth volatility reduces total factor productivity, investment, and the foreign currency value of local currency (depreciation). Moreover, it increases employment, though the evidence for this is not statistically significant.
Economics; Sustainability; Growth volatility; Total factor productivity; Investment; Real exchange rate;
http://www.sciencedirect.com/science/article/pii/S037722171100854X
Berument, M. Hakan
Dincer, N. Nergiz
Mustafaoglu, Zafer
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:86-952016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:86-95
article
Fitting piecewise linear continuous functions
We consider the problem of fitting a continuous piecewise linear function to a finite set of data points, modeled as a mathematical program with convex objective. We review some fitting problems that can be modeled as convex programs, and then introduce mixed-binary generalizations that allow variability in the regions defining the best-fit function’s domain. We also study the additional constraints required to impose convexity on the best-fit function.
Integer programming; Quadratic programming; Data fitting/regression; Piecewise linear function;
http://www.sciencedirect.com/science/article/pii/S0377221711011246
Toriello, Alejandro
Vielma, Juan Pablo
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:638-6402016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:638-640
article
A look at the past and present of optimization – An editorial
This special issue of the European Journal of Operational Research is devoted to the EURO XXIV Conference, which was held at the facilities of the University of Lisbon (Portugal) from July 11 to July 14, 2010. With over 700 sessions for a total of approximately 2350 presentations, and with 2700 participants (delegates and accompanying persons) coming from 69 countries, this was the largest EURO conference ever.
OR in research and development; Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711009994
Martello, Silvano
Pinto Paixão, José M.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:392-4002016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:392-400
article
Control of a production–inventory system with returns under imperfect advance return information
We consider a production–inventory system with product returns that are announced in advance by the customers. Demands and announcements of returns occur according to independent Poisson processes. An announced return is either actually returned or cancelled after a random return lead time. We consider both lost sale and backorder situations. Using a Markov decision formulation, the optimal production policy, with respect to the discounted cost over an infinite horizon, is characterized for situations with and without advance return information. We give insights into the potential value of this information. Also some attention is paid to combining advance return and advance demand information. Further applications of the model as well as topics for further research are indicated.
Reverse logistics; Inventory control; Stochastic dynamic programming; Advance return information;
http://www.sciencedirect.com/science/article/pii/S0377221711010046
Flapper, S.D.P.
Gayon, J.P.
Vercraene, S.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:538-5472016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:538-547
article
Estimating the population utility function: A parametric Bayesian approach
In this paper we consider the health utility index mark II for quantifying and describing a population’s health related quality of life over health states composed of multiple attributes. This measure can be used for various purposes such as evaluating the severity of the effect of a disease or comparing different treatment methods. We present a Bayesian framework for population utility estimation and health policy evaluation by introducing a probabilistic interpretation of the multi-attribute utility theory (MAUT) used in health economics. In doing so, our approach combines ideas from the MAUT and Bayesian statistics and provides an alternative method of modeling preferences and utility estimation.
Bayesian inference; Health services; Multi-attribute utility theory; OR in societal problem analysis; Group decision making;
http://www.sciencedirect.com/science/article/pii/S0377221711010083
Musal, R. Muzaffer
Soyer, Refik
McCabe, Christopher
Kharroubi, Samer A.
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:773-7832016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:773-783
article
Mathematical programming formulations for approximate simulation of multistage production systems
Mathematical programming representation has recently been used to describe the behavior of discrete event systems as well as their formal properties. This new way of representing discrete event systems paves the way to the creation of simpler mathematical programming models that reduce the complexity of the system analysis. The paper proposes an approximate representation for a class of production systems characterized by several stages, limited buffer capacities and stochastic production times. The approximation exploits the concept of a time buffer, modeled as a constraint that puts the completion times of two customers in a sample path into a temporal relationship. The main advantage of the proposed formulation is that it preserves its linearity even when used for optimization and, for this reason, it can be adopted in simulation–optimization problems to reduce the initial solution space. The approximate formulation is applied to relevant problems such as buffer capacity allocation in manufacturing systems and control parameter setting in pull systems.
Simulation; Optimization; Queueing systems; Bounds; Linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221712000306
Alfieri, Arianna
Matta, Andrea
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:287-2992016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:287-299
article
Optimally routing and scheduling tow trains for JIT-supply of mixed-model assembly lines
In recent years, more and more automobile producers adopted the supermarket-concept to enable a flexible and reliable Just-in-Time (JIT) part supply of their mixed-model assembly lines. Within this concept, a supermarket is a decentralized in-house logistics area where parts are intermediately stored and then loaded on small tow trains. These tow trains travel across the shop floor on specific routes to make frequent small-lot deliveries which are needed by the stations of the line. To enable a reliable part supply in line with the JIT-principle, the interdependent problems of routing, that is, partitioning stations to be supplied among tow trains, and scheduling, i.e., deciding on the start times of each tow train’s tours through its assigned stations, need to be solved. This paper introduces an exact solution procedure which solves both problems simultaneously in polynomial runtime. Additionally, management implications regarding the trade-off between number and capacity of tow trains and in-process inventory near the line are investigated within a comprehensive computational study.
Mixed-model assembly lines; Just-in-Time; Material supply; Tow trains;
http://www.sciencedirect.com/science/article/pii/S0377221711008162
Emde, Simon
Boysen, Nils
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:420-4282016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:420-428
article
A heuristic method to rectify intransitive judgments in pairwise comparison matrices
This paper investigates the effects of intransitive judgments on the consistency of pairwise comparison matrices. Statistical evidence regarding the occurrence of intransitive judgments in pairwise matrices of acceptable consistency is gathered by using a Monte Carlo simulation, which confirms that a relatively high percentage of comparison matrices satisfying Saaty’s CR criterion are ordinally inconsistent. It is also shown that ordinal inconsistency does not necessarily decrease in the group aggregation process, in contrast with cardinal inconsistency. A heuristic algorithm is proposed to improve ordinal consistency by identifying and eliminating intransitivities in pairwise comparison matrices. The proposed algorithm generates near-optimal solutions and outperforms other tested approaches with respect to computation time.
Decision analysis; AHP; Pairwise comparisons; Consistency; Simulation; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221711006667
Siraj, Sajid
Mikhailov, Ludmil
Keane, John
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:764-7742016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:764-774
article
Maritime crude oil transportation – A split pickup and split delivery problem
The maritime oil tanker routing and scheduling problem has been known in the literature since before 1950. In the presented problem, oil tankers transport crude oil from supply points to demand locations around the globe. The objective is to find ship routes, load sizes, as well as port arrival and departure times, in a way that minimizes transportation costs. We introduce a path flow model where paths are ship routes. Continuous variables distribute the cargo between the different routes. Multiple products are transported by a heterogeneous fleet of tankers. Pickup and delivery requirements are not paired to cargos beforehand and arbitrary splitting of amounts is allowed. Small realistic test instances can be solved with route pre-generation for this model. The results indicate possible simplifications and stimulate further research.
Routing; Scheduling; Maritime transportation; Pickup and delivery; Split;
http://www.sciencedirect.com/science/article/pii/S0377221711008964
Hennig, F.
Nygreen, B.
Christiansen, M.
Fagerholt, K.
Furman, K.C.
Song, J.
Kocis, G.R.
Warrick, P.H.
oai:RePEc:eee:ejores:v:215:y:2011:i:3:p:532-5382016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:3:p:532-538
article
Natural gas bilevel cash-out problem: Convergence of a penalty function method
This paper studies a special bi-level programming problem that arises from the dealings of a Natural Gas Shipping Company and the Pipeline Operator, with facilities of the latter used by the former. Because of the business relationships between these two actors, the timing and objectives of their decision-making process are different and sometimes even opposed. In order to model that, bi-level programming was traditionally used in previous works. Later, the problem was expanded and theoretically studied to facilitate its solution; this included extension of the upper level objective function, linear reformulation, heuristic approaches, and branch-and-bound techniques. In this paper, we present a linear programming reformulation of the latest version of the model, which is significantly faster to solve when implemented computationally. More importantly, this new formulation makes it easier to analyze the problem theoretically, allowing us to draw some conclusions about the nature of the solution of the modified problem. Numerical results concerning the running time, convergence, and optimal values, are presented and compared to previous reports, showing a significant improvement in speed without actual sacrifice of the solution's quality.
OR in energy; Bi-level programming; Linearization; Penalty method;
http://www.sciencedirect.com/science/article/pii/S0377221711006059
Dempe, Stephan
Kalashnikov, Vyacheslav V.
Pérez-Valdés, Gerardo A.
Kalashnykova, Nataliya I.
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:222-2312016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:222-231
article
Mixed-integer linear optimization for optimal lift-gas allocation with well-separator routing
The lift-gas allocation problem with well-separator routing constraints is a mixed-integer nonlinear program of considerable complexity. To address this, a mixed-integer linear formulation (compact) is obtained by piecewise-linearizing the nonlinear curves, using binary variables to express the linearization and routing decisions. A new formulation (integrated) combining the decisions on linearization and routing is developed by using a single binary variable. The structures of both formulations are explored to generate lifted cover cuts. Numerical tests show that the solution of the integrated formulation using cutting-plane generation is faster in spite of having more variables than the compact formulation.
Integer programming; Piecewise linearization; Lifted cover cuts; Lift-gas allocation; Routing constraints;
http://www.sciencedirect.com/science/article/pii/S0377221711007983
Codas, Andrés
Camponogara, Eduardo
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:679-6862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:679-686
article
A discrete model for optimal operation of fossil-fuel generators of electricity
This paper presents a new discrete approach to the price-based dynamic economic dispatch (PBDED) problem of fossil-fuel generators of electricity. The objective is to find a sequence of generator temperatures that maximizes profit over a fixed-length time horizon. The generic optimization model presented in this paper can be applied to automatic operation of fossil-fuel generators or to prepare market bids, and it works with various price forecasts. The model’s practical applications are demonstrated by the results of simulation experiments involving 2009 NYISO electricity market data, branch-and-bound, and tabu-search optimization techniques.
OR in energy; Price-based dynamic economic dispatch; Power generation; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221711007156
Kalczynski, Pawel J.
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:671-6792016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:671-679
article
Operations Research for green logistics – An overview of aspects, issues, contributions and challenges
The worldwide economic growth of the last century has given rise to a vast consumption of goods while globalization has led to large streams of goods all over the world. The production, transportation, storage and consumption of all these goods, however, have created large environmental problems. Today, global warming, created by large scale emissions of greenhouse gasses, is a top environmental concern. Governments, action groups and companies are asking for measures to counter this threat. Operations Research has a long tradition in improving operations and especially in reducing costs. In this paper, we present a review that highlights the contribution of Operations Research to green logistics, which involves the integration of environmental aspects in logistics. We give a sketch of the present and possible developments, focusing on design, planning and control in a supply chain for transportation, inventory of products and facility decisions. While doing this, we also indicate several areas where environmental aspects could be included in OR models for logistics.
Environment; Logistics; Supply chain management; Transportation;
http://www.sciencedirect.com/science/article/pii/S0377221711009970
Dekker, Rommert
Bloemhof, Jacqueline
Mallidis, Ioannis
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:296-3042016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:296-304
article
Multi-state throughput analysis of a two-stage manufacturing system with parallel unreliable machines and a finite buffer
This paper models and analyzes the throughput of a two-stage manufacturing system with multiple independent unreliable machines at each stage and one finite-sized buffer between the stages. The machines follow exponential operation, failure, and repair processes. Most of the literature uses binary random variables to model unreliable machines in transfer lines and other production lines. This paper first illustrates the importance of using more than two states to model parallel unreliable machines because of their independent and asynchronous operations in the parallel system. The system balance equations are then formulated based on a set of new notations of vector manipulations, and are transformed into a matrix form fitting the properties of the Quasi-Birth–Death (QBD) process. The Matrix-Analytic (MA) method for solving the generic QBD processes is used to calculate the system state probability and throughput. Numerical cases demonstrate that the solution method is fast and accurate in analyzing parallel manufacturing systems, and thus prove the applicability of the new model and the effectiveness of the MA-based method. Such multi-state models and their solution techniques can be used as a building block for analyzing larger, more complex manufacturing systems.
Manufacturing; Parallel machine; Markovian analysis; Matrix-Analytic method;
http://www.sciencedirect.com/science/article/pii/S0377221711011027
Liu, Jialu
Yang, Sheng
Wu, Aiguo
Hu, S. Jack
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:584-5932016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:584-593
article
Two-dimensional efficiency decomposition to measure the demand effect in productivity analysis
This paper proposes a two-dimensional efficiency decomposition (2DED) of profitability for a production system to account for the demand effect observed in productivity analysis. The first dimension identifies four components of efficiency: capacity design, demand generation, operations, and demand consumption, using Network Data Envelopment Analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework. Thus, each component’s profitability change can be analyzed based on technical efficiency change, scale efficiency change and allocative efficiency change. An empirical study based on data from 2006 to 2008 for the US airline industry finds that the regress of productivity is mainly caused by a demand fluctuation in 2007–2008 rather than technical regression in production capabilities.
Data envelopment analysis; Productivity and profitability change; Efficiency decomposition; Demand fluctuation; Airlines industry;
http://www.sciencedirect.com/science/article/pii/S0377221711007168
Lee, Chia-Yen
Johnson, Andrew L.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:401-4072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:401-407
article
The impact of cost uncertainty on the location of a distribution center
The location of a distribution center (DC) is a key consideration for the design of supply chain networks. When deciding on it, firms usually allow for transportation costs, but not supplier prices. We consider simultaneously the location of a DC and the choice of suppliers offering different, possibly random, prices for a single product. A buying firm attempts to minimize the sum of the price charged by a chosen supplier, and inbound and outbound transportation costs. No costs are incurred for switching suppliers. We first derive a closed-form optimal location for the case of a demand-populated unit line between two suppliers offering deterministic prices. We then let one of the two suppliers offer a random price. If the price follows a symmetric and unimodal distribution, the optimal location is closer to the supplier with a lower mean price. We also show the dominance of high variability: for any location, the buyer can achieve a larger decrease in total cost under higher price variability. The dominance result holds for normal, uniform, and gamma distributions. We propose an extended model with more than two suppliers on a plane and show that the dominance result still holds. From numerical examples for a line and a plane, we observe that an optimal location gets closer to the center of gravity of demands as the variability of any supplier’s price increases.
Logistics; Location; Transportation; Sourcing; Distribution; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221711010071
Huang, Rongbing
Menezes, Mozart B.C.
Kim, Seokjin
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:544-5522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:544-552
article
Approximation algorithms for the parallel flow shop problem
We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson’s rule. For m=2, we present a 3/2-approximation algorithm, and for m=3, we present a 12/7-approximation algorithm. Both these algorithms run in O(n log n) time. These are the first approximation algorithms with fixed worst-case performance guarantees for the parallel flow shop problem.
Scheduling; Parallel flow shop; Hybrid flow shop; Approximation algorithms; Worst-case analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711007193
Zhang, Xiandong
van de Velde, Steef
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:531-5402016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:531-540
article
Setting staffing requirements for time dependent queueing networks: The case of accident and emergency departments
An incentive scheme aimed at reducing patients’ waiting times in accident and emergency departments was introduced by the UK government in 2000. It requires 98% of patients to be discharged, transferred, or admitted to inpatient care within 4 hours of arrival. Setting the minimal hour by hour medical staffing levels for achieving the government target, in the presence of complexities like time-varying demand, multiple types of patients, and resource sharing, is the subject of this paper. Building on an extensive body of research on time dependent queues, we propose an iterative scheme which uses infinite server networks, the square root staffing law, and simulation to come up with a good solution. The implementation of this algorithm in a typical A&E department suggests that significant improvement on the target can be gained, even without an increase in total staff hours.
Staffing emergency departments; 98% Target; Time-dependent queues; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711009805
Izady, Navid
Worthington, Dave
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:602-6132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:602-613
article
Consumer environmental awareness and competition in two-stage supply chains
This paper focuses on the impact of competition and consumers’ environmental awareness on key supply chain players. We consider both the production competition between partially substitutable products made by different manufacturers, and the competition between retail stores. We use two-stage Stackelberg game models to investigate the dynamics between the supply chain players given three supply chain network structures. We find that as consumers’ environmental awareness increases, retailers and manufacturers with superior eco-friendly operations will benefit; while the profitability of the inferior eco-friendly firm will tend to increase if the production competition level is low, and will tend to decrease if the production competition level is high. In addition, higher levels of retail competition may make manufacturers with inferior eco-friendly operations more likely to benefit from the increase of consumers’ environmental awareness. Moreover, as production competition intensifies, the profits of the retailers will always increase, while the profits of the manufacturers with inferior eco-friendly operations will always decrease. The profitability of the manufacturers with superior eco-friendly operations will also tend to decrease, unless consumers’ environmental awareness is high and the superior manufacturer has a significant cost advantage related to product environmental improvement.
Environmental awareness; Environmental responsibility; Supply chain management; Stackelberg game;
http://www.sciencedirect.com/science/article/pii/S0377221711010368
Liu, Zugang (Leo)
Anderson, Trisha D.
Cruz, Jose M.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:455-4572016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:455-457
article
Note on "An efficient approach for solving the lot-sizing problem with time-varying storage capacities"
In a recent paper Gutiérrez et al. [1] show that the lot-sizing problem with inventory bounds can be solved in time. In this note we show that their algorithm does not lead to an optimal solution in general.
Inventory; Lot-sizing; Inventory bounds;
http://www.sciencedirect.com/science/article/pii/S0377221711003122
van den Heuvel, Wilco
Gutiérrez, José Miguel
Hwang, Hark-Chin
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:278-2862016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:278-286
article
Comparing branch-and-price algorithms for the Multi-Commodity k-splittable Maximum Flow Problem
The Multi-Commodity k-splittable Maximum Flow Problem consists in routing as much flow as possible through a capacitated network such that each commodity uses at most k paths and the capacities are satisfied. The problem appears in telecommunications, specifically when considering Multi-Protocol Label Switching. The problem has previously been solved to optimality through branch-and-price. In this paper we propose two exact solution methods both based on an alternative decomposition. The two methods differ in their branching strategy. The first method, which branches on forbidden edge sequences, shows some performance difficulty due to large search trees. The second method, which branches on forbidden and forced edge sequences, demonstrates much better performance. The latter also outperforms a leading exact solution method from the literature. Furthermore, a heuristic algorithm is presented. The heuristic is fast and yields good solution values.
Branch and bound; Combinatorial optimization; Multi-commodity flow; k-Splittable; Dantzig–Wolfe decomposition; Heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221711008988
Gamst, M.
Petersen, B.
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:198-2032016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:198-203
article
Arbitration procedures with multiple arbitrators
We consider two final-offer arbitration procedures in the case where there is more than one arbitrator. Two players, labeled 1 and 2 and interpreted here as Labor and Management, respectively, are in dispute about an increase in the wage rate. They submit final offers to a Referee. There are N arbitrators. Each of the arbitrators has her own assessment and selects the offer which is closest to her assessment. After that each arbitrator informs the Referee about her decision. The Referee counts the votes and declares the player obtaining the most votes to be the winner. Under the second arbitration scheme, the Referee takes into account only the assessments which lie between the players’ offers. The game is modeled as a zero-sum game. The Nash equilibrium in this arbitration game is derived.
Group decision and negotiation; Final-offer arbitration; Multiple arbitrators; Equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221711008174
Mazalov, Vladimir
Tokareva, Julia
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:533-5432016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:533-543
article
A honey-bee mating optimization algorithm for educational timetabling problems
In this work, we propose a variant of the honey-bee mating optimization algorithm for solving educational timetabling problems. The honey-bee algorithm is a nature inspired algorithm which simulates the process of real honey-bees mating. The performance of the proposed algorithm is tested on two benchmark problems: exam (Carter’s un-capacitated datasets) and course (Socha datasets) timetabling problems. We chose these two datasets as they have been widely studied in the literature and we would also like to evaluate our algorithm across two different, yet related, domains. Results demonstrate that the performance of the honey-bee mating optimization algorithm is comparable with the results of other approaches in the scientific literature. Indeed, the proposed approach obtains the best results on some instances compared with other approaches, indicating that the honey-bee mating optimization algorithm is a promising approach to solving educational timetabling problems.
Timetabling; Meta-heuristics; Honey-bee mating; Nature inspired;
http://www.sciencedirect.com/science/article/pii/S0377221711007181
Sabar, Nasser R.
Ayob, Masri
Kendall, Graham
Qu, Rong
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:519-5302016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:519-530
article
Optimizing system resilience: A facility protection model with recovery time
Optimizing system resilience is concerned with the development of strategies to restore a system to normal operations as quickly and efficiently as possible following potential disruption. To this end, we present in this article a bilevel mixed integer linear program for protecting an uncapacitated median type facility network against worst-case losses, taking into account the role of facility recovery time on system performance and the possibility of multiple disruptions over time. The model differs from previous types of facility protection models in that protection is not necessarily assumed to prevent facility failure altogether, but more precisely to speed up recovery time following a potential disruption. Three different decomposition approaches are devised to optimally solve medium to large problem instances. Computational results provide a cross comparison of the efficiency of each algorithm. Additionally, we present an analysis to estimate cost-efficient levels of investments in protection resources.
OR in strategic planning; Location; Protection; Bilevel programming; Decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221711008940
Losada, Chaya
Scaparra, M. Paola
O’Hanley, Jesse R.
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:755-7632016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:755-763
article
Stable network topologies using the notion of covering
An alternative perspective to evaluate networks and network evolution is introduced, based on the notion of covering. For a particular node in a network, covering captures the idea of being outperformed by another node in terms of, for example, visibility and possibility of information gathering. In this paper, we focus on networks where these subdued network positions do not exist. We call these networks stable. Within this set we identify the minimal stable networks, which frequently have a ‘bubble-like’ structure. Severing a link in such a network results in at least one of the nodes being covered. In a minimal stable network therefore all nodes cooperate to avoid that one of the nodes ends up in a subdued position. Our results can be applied to, for example, the design of (covert) communication networks and the dynamics of social and information networks.
Graph theory; Network evolution; Information network; Degree distribution; Network centric operations;
http://www.sciencedirect.com/science/article/pii/S0377221711010563
Janssen, R.H.P.
Monsuur, H.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:394-4032016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:394-403
article
Dynamic pricing of limited inventories for multi-generation products
In this research, we consider a retailer selling products from two different generations, both with limited inventory over a predetermined selling horizon. Due to spatial constraints or the popularity of a given product, the retailer may display goods from only one specific generation. If the transaction for the displayed item cannot be completed, the retailer may offer an alternative from the other generation. We analyze two models: a posted-pricing-first model and a negotiation-first model. In the former, negotiation is allowed only on the price of the second product; in the latter, only the price of the first product is negotiable. Our results show that the retailer can adopt both models effectively depending on the relative inventory levels of the products. In addition, the retailer is better off compared to take-it-or-leave-it pricing when the inventory level of the negotiable product is high.
Revenue management; Multi-generation products; Bargaining; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221711008484
Kuo, Chia-Wei
Huang, Kwei-Long
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:105-1132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:105-113
article
Combined m-consecutive and k-out-of-n sliding window systems
This paper proposes a new model that generalizes the linear multi-state sliding window system. In this model the system consists of n linearly ordered multi-state elements. Each element can have different states: from complete failure up to perfect functioning. A performance rate is associated with each state. The system fails if at least one of the following two conditions is met: (1) there exist at least m consecutive overlapping groups of r adjacent elements having the cumulative performance lower than V; (2) there exist at least k arbitrarily located groups of r adjacent elements having the cumulative performance lower than W. An algorithm for system reliability evaluation is suggested which is based on an extended universal moment generating function. Examples of evaluating system reliability and elements’ reliability importance indices are presented. Optimal sequencing of system elements is demonstrated.
m-Consecutive; k-Out-of-n; Sliding window system; Universal moment generating function; Multi-state system;
http://www.sciencedirect.com/science/article/pii/S0377221711010836
Xiang, Yanping
Levitin, Gregory
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:204-2132016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:204-213
article
A post-improvement procedure for the mixed load school bus routing problem
This paper aims to develop a mixed load algorithm for the school bus routing problem (SBRP) and to measure its effect on the number of required vehicles. The SBRP seeks optimal routes for a fleet of vehicles, where each vehicle transports students from their homes to their schools while satisfying various constraints. When mixed loads are allowed, students of different schools can ride the same bus at the same time. Although many real-world SBRPs allow mixed loads, only a few studies have considered these cases. In this paper, we present a new mixed load improvement algorithm and compare it with the only existing algorithm in the literature. Benchmark problems are proposed to compare the performance of the algorithms and to stimulate further study by other researchers. The proposed algorithm outperforms the existing algorithm on the benchmark problem instances. It has also been successfully applied to several real-world SBRPs, reducing the required number of vehicles compared with current practice.
Combinatorial optimization; School bus routing; Mixed load; Vehicle routing problem;
http://www.sciencedirect.com/science/article/pii/S0377221711007636
Park, Junhyuk
Tae, Hyunchul
Kim, Byung-In
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:434-4412016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:434-441
article
Multi-directional efficiency analysis of efficiency patterns in Chinese banks 1997–2008
DEA-type efficiency studies are often used to investigate levels of efficiencies, differences in those levels between subgroups within a data set and possible determinants of such differences. In the current paper we show how differences in the efficiency patterns between different subgroups within a data set can be investigated using the more recent MEA methodology.
Efficiency patterns; Multi-directional efficiency analysis (MEA); Chinese banks;
http://www.sciencedirect.com/science/article/pii/S0377221712000021
Asmild, Mette
Matthews, Kent
oai:RePEc:eee:ejores:v:215:y:2011:i:1:p:149-1602016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:215:y:2011:i:1:p:149-160
article
Optimizing yard assignment in an automotive transshipment terminal
This paper studies a yard management problem in an automotive transshipment terminal. Groups of cars arrive at and depart from the terminal in a given planning period. These groups must be assigned to parking rows under constraints resulting from managerial rules. The main objective is the minimization of total handling time. Model extensions handling application-specific issues, such as a rolling horizon and a manpower leveling objective, are also discussed. The main features of the problem are modeled as an integer linear program. However, solving this formulation with a state-of-the-art solver is impractical. In view of this, we develop a metaheuristic algorithm based on the adaptive large neighborhood search framework. Computational results on real-life data show the efficacy of the proposed metaheuristic algorithm.
Logistics; Yard management; Automotive transshipment terminal; Adaptive large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221711005376
Cordeau, Jean-François
Laporte, Gilbert
Moccia, Luigi
Sorrentino, Gregorio
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:627-6432016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:627-643
article
QoS commitment between vertically integrated autonomous systems
Vertically integrated autonomous systems bargain to provide quality of service (QoS) guarantees and revenue sharing. Depending on the perceived quality of service and the access price, consumers decide whether to subscribe to the access provider's service. Four types of contracts are compared: (i) best effort, (ii) bilateral bargaining, (iii) cascade negotiations and (iv) grand coalition cooperation; the impact of the consumers' QoS sensitivity parameter and of the power relation is tested for each contract. Assuming that the consumers' quality of service sensitivity parameter is unknown and might evolve dynamically due to judgement errors, word-of-mouth effects or competitive pressure, a learning algorithm is detailed and implemented by each integrated autonomous system under asymmetric information. Its convergence and the influence of bias introduced by the most informed autonomous system are analyzed.
Bilateral bargaining; Supply chain; Shapley value; Learning;
http://www.sciencedirect.com/science/article/pii/S0377221711003870
Le Cadre, Hélène
Barth, Dominique
Pouyllau, Hélia
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:94-1042016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:94-104
article
Real-time production planning and control system for job-shop manufacturing: A system dynamics analysis
Much attention has been paid to production planning and control (PPC) in job-shop manufacturing systems. However, a gap remains between theory and practice in the ability of PPC systems to capture dynamic disturbances in the manufacturing process. Since most job-shop manufacturing systems operate in a stochastic environment, sound PPC systems are needed to identify discrepancies between planned and actual activities in real time and to provide corrective measures. By integrating production ordering and batch sizing control mechanisms into a dynamic model, we propose a comprehensive real-time PPC system for arbitrary capacitated job-shop manufacturing. We adopt a system dynamics (SD) approach, which has proved appropriate for studying the dynamic behavior of complex manufacturing systems. We study the system’s response under different arrival patterns for customer orders and various types of real-time events related to customer orders and machine failures. We determine the near-optimal values of the control variables, which improve shop performance in terms of average backlogged orders, work-in-process inventories and tardy jobs. The results of an extensive numerical investigation are statistically examined using analysis of variance (ANOVA). The examination reveals that the near-optimal values are insensitive to real-time events and to the arrival pattern and variability of customer orders. In addition, it reveals a positive impact of the proposed real-time PPC system on shop performance. The efficiency of the PPC system is further examined using data from a real-world manufacturer.
System dynamics; Production; Job-shop; Batch sizing; Robustness and sensitivity analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711006242
Georgiadis, Patroklos
Michaloudis, Charalampos
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:580-5882016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:580-588
article
Optimal dynamic pricing of inventories with stochastic demand and discounted criterion
We consider a continuous-time dynamic pricing problem for selling a given number of items over a finite or infinite time horizon. Demand is price sensitive and follows a non-homogeneous Poisson process. We formulate the problem as maximizing the expected discounted revenue and obtain structural properties of the optimal revenue function and optimal price policy from the Hamilton–Jacobi–Bellman (HJB) equation. Moreover, we study the impact of the discount rate on the optimal revenue function and the optimal price. Further, we extend the problem to the infinite time horizon case with discounting and time-varying demand. Numerical examples are used to illustrate our analytical results.
Revenue management; HJB equation; Optimal pricing; Discounted criterion;
http://www.sciencedirect.com/science/article/pii/S0377221711009015
Cao, Ping
Li, Jianbin
Yan, Hong
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:386-3952016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:386-395
article
Ecological modernization in the electrical utility industry: An application of a bads–goods DEA model of ecological and technical efficiency
Newly developed data envelopment analysis techniques permit the simultaneous consideration of ‘good’ and ‘bad’ outputs in evaluating efficiency. We use these techniques to determine the joint ecological and technical efficiencies of the 437 largest fossil-fueled electricity-generating plants in the United States. Utilizing the EPA’s E-Grid and Clean Air Markets databases and drawing on ecological modernization theory, we evaluate whether innovations in organizational practices and technological solutions help achieve joint technical and environmental performance efficiencies.
Data envelopment analysis; Environment; Electrical utilities; Ecological modernization;
http://www.sciencedirect.com/science/article/pii/S0377221711008617
Sarkis, Joseph
Cordeiro, James J.
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:624-6372016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:624-637
article
Partnership formation based on multiple traits
A model of partnership formation based on two traits, called beauty and character, is presented. There are two classes of individuals, and partners must be of different classes. Individuals prefer prospective partners with a high beauty measure and of a similar character. This problem may be interpreted, for example, as a job search problem in which the classes are employer and employee, or a mate choice problem in which the classes are male and female. Beauty can be observed instantly. However, a costly date (or interview) is required to observe the character of a prospective partner. On observing the beauty of a prospective partner, an individual decides whether he/she wishes to date. During a date, the participants observe each other’s character and then decide whether to form a pair. Mutual acceptance is required both for a date to occur and for pair formation. On finding a partner, an individual stops searching. Beauty has a continuous distribution on a finite interval, while character ‘forms a circle’ and has a uniform distribution. Criteria based on the concept of a subgame perfect Nash equilibrium are used to define a symmetric equilibrium of this game. It is argued that this equilibrium is unique. When dating costs are high, this equilibrium is a block separating equilibrium as in more classical formulations of two-sided job search problems. However, for sufficiently small dating costs the form of the equilibrium is essentially different.
Game theory; Partnership formation; Multiple traits; Subgame perfect equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221711007545
Ramsey, David M.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:347-3592016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:347-359
article
Product line pricing for services with capacity constraints and dynamic substitution
In this paper, we address a service provider’s product line pricing problem for substitutable products in services, such as concerts, sporting events, or online advertisements. For each product, a static price is selected from a pre-defined set such that the total revenue is maximised. The products are differentiated by some of their attributes, and their availability is restricted due to individual capacity constraints. Furthermore, they are simultaneously sold during a common selling period at the end of which the service is delivered. Consumers differ from one another with respect to their willingness to pay, and, hence, their reservation prices vary depending on the product. In the event of a purchase, they choose the product that maximises their consumer surplus.
Pricing; Mixed-integer programming; Branch and bound; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221711011039
Burkart, Wolfgang R.
Klein, Robert
Mayer, Stefan
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:264-2712016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:264-271
article
Sales effort free riding and coordination with price match and channel rebate
This paper studies sales effort coordination for a supply chain with one manufacturer and two retail channels, where an online retailer offers a lower price and free-rides on a brick-and-mortar retailer’s sales effort. The free riding effect reduces the brick-and-mortar retailer’s desired effort level and thus hurts the manufacturer’s profit and overall supply chain performance. To achieve sales effort coordination, we design a contract with price match and a selective compensation rebate. We also examine other contracts, including the target rebate contract and the wholesale price discount contract, both with price match. Numerical analysis shows that the selective rebate outperforms the other contracts in coordinating the brick-and-mortar retailer’s sales effort and improving supply chain efficiency.
Supply chain management; Sales effort free riding; Price match; Selective rebate;
http://www.sciencedirect.com/science/article/pii/S0377221711010381
Xing, Dahai
Liu, Tieming
oai:RePEc:eee:ejores:v:214:y:2011:i:2:p:442-4522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:2:p:442-452
article
A tabu search heuristic for the dynamic transportation of patients between care units
The problem studied in this paper stems from a real application: the transportation of patients in the Hospital Complex of Tours (France). The ambulance central station of the Hospital Complex has to plan the transportation demands between care units that require a vehicle. Some demands are known in advance and the others arise dynamically. Each demand requires a specific type of vehicle, and a vehicle can transport only one person at a time. Demands can be subcontracted to a private company, which implies a high cost. Moreover, transportations are subject to particular constraints, among them the priority of urgent demands, the disinfection of a vehicle after transporting a patient with a contagious disease, and respect of the type of vehicle needed. These characteristics require distinguishing between vehicles and crews during the modeling phase. We propose a model for solving this difficult problem and a tabu search algorithm inspired by Gendreau et al. (1999). This method combines an adaptive memory with a tabu search procedure. Computational experiments on a real-life instance and on randomly generated instances show that the method can provide high-quality solutions for this dynamic problem within a short computation time.
Transportation; Real-time; Health care; Tabu search; Vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221711003778
Kergosien, Y.
Lenté, Ch.
Piton, D.
Billaut, J.-C.
oai:RePEc:eee:ejores:v:213:y:2011:i:2:p:388-3942016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:213:y:2011:i:2:p:388-394
article
Single row facility layout problem using a permutation-based genetic algorithm
In this paper, a permutation-based genetic algorithm (GA) is applied to the NP-hard problem of arranging a number of facilities on a line with minimum cost, known as the single row facility layout problem (SRFLP). The GA individuals are obtained by using rule-based as well as random permutations of the facilities, which are then improved towards the optimum by means of specially designed crossover and mutation operators. These schemes allow the GA to handle the SRFLP as an unconstrained optimization problem. In computational experiments carried out with large instances of sizes 60 to 80 available in the literature, the proposed GA improved several previously known best solutions.
Single row facility layout problem; Genetic algorithm; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711002712
Datta, Dilip
Amaral, André R.S.
Figueira, José Rui
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:435-4412016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:435-441
article
Optimal order lot sizing and pricing with free shipping
Companies, especially those in e-business, are increasingly offering free shipping to buyers whose order sizes exceed the free shipping quantity. In this paper, given that the supplier offers free shipping, we determine the retailer’s optimal order lot size and the optimal retail price. We explicitly incorporate the supplier’s quantity discount and transportation cost into the model. We analytically and numerically examine the impacts of free shipping, quantity discount and transportation cost on the retailer’s optimal lot sizing and pricing decisions. We find that free shipping can benefit the supplier, the retailer, and the end customers, and can effectively encourage the retailer to order more of the good, up to a few times the optimal order lot size without free shipping. The order lot size will increase and the retail price will decrease if the supplier offers appropriate free shipping.
Inventory; Logistics; Free shipping; Pricing; Purchasing;
http://www.sciencedirect.com/science/article/pii/S0377221711010307
Hua, Guowei
Wang, Shouyang
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:219:y:2012:i:1:p:188-1972016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:1:p:188-197
article
SimLean: Utilising simulation in the implementation of lean in healthcare
Discrete-event simulation (DES) and lean are approaches that have a similar motivation: improvement of processes and service delivery. Both are being used to help improve the delivery of healthcare, but rarely are they used together. This paper explores from a theoretical and an empirical perspective the potential complementary roles of DES and lean in healthcare. The aim is to increase the impact of both approaches in the improvement of healthcare systems. Out of this exploration, the ‘SimLean’ approach is developed in which three roles for DES with lean are identified: education, facilitation and evaluation. These roles are demonstrated through three examples of DES in action with lean. The work demonstrates how the fusion of DES with lean can improve both stakeholder engagement with DES and the impact of lean.
OR in health services; Lean; Discrete-event simulation;
http://www.sciencedirect.com/science/article/pii/S0377221711011234
Robinson, Stewart
Radnor, Zoe J.
Burgess, Nicola
Worthington, Claire
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:479-4822016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:479-482
article
On approximate Monetary Unit Sampling
Monetary Unit Sampling (MUS), also known as Dollar-Unit Sampling, is a popular sampling strategy in auditing, in which all units are to be randomly selected with probabilities proportional to their book values. However, if unit sizes have very large variability, no vector of probabilities exists fulfilling the requirement that all probabilities be proportional to the associated book values. In this note we propose a mathematical optimization approach to address this issue. An optimization program is posed, structural properties of the optimal solution are analyzed, and an algorithm yielding the optimal solution in time and space linear in the number of population units is given.
Nonlinear programming; Monetary Unit Sampling; Statistical sampling; Karush–Kuhn–Tucker conditions;
http://www.sciencedirect.com/science/article/pii/S0377221711008654
Carrizosa, Emilio
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:611-6212016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:611-621
article
Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming
Emergency service providers must locate ambulances such that, in case of emergency, patients can be reached in a time-efficient manner. Two fundamental decisions need to be made in real time. First, immediately after a request emerges, an appropriate vehicle must be dispatched and sent to the request’s site. After having served a request, the vehicle must be relocated to its next waiting location. We propose a model and solve the underlying optimization problem using approximate dynamic programming (ADP), an emerging and powerful tool for solving stochastic and dynamic problems typically arising in the field of operations research. Empirical tests based on real data from the city of Vienna indicate that, by deviating from the classical dispatching rules, the average response time can be decreased from 4.60 to 4.01 minutes, an improvement of 12.89%. Furthermore, we show that it is essential to explicitly consider time-dependent information such as travel times and changes in request volume. Ignoring the current time and its consequences during modeling and optimization leads to suboptimal decisions.
OR in health services; Emergency vehicles; Ambulance location; Approximate dynamic programming; Stochastic optimization;
http://www.sciencedirect.com/science/article/pii/S0377221711009830
Schmid, Verena
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:801-8092016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:801-809
article
A multicriteria approach to sustainable energy supply for the rural poor
Despite significant progress in energy technology, about two billion people worldwide, particularly the poor in rural areas of developing countries, have no access to electricity. Deciding on the most appropriate energy technology for supplying these areas has been difficult; existing energy decision-support tools have been useful but are mostly incomplete. Trade-offs, as well as impacts that can be positive or negative, may emerge as a result of implementing modern forms of energy. These can affect both communities’ livelihoods and the confidence of decision-makers in alternative technologies. The paper discusses a newly designed multicriteria approach, with a novel robustness analysis, for selecting energy generation systems to improve livelihoods in rural areas. The proposed methodology builds upon a sustainable rural livelihoods framework to address multiple interactions and calculate trade-offs aimed at boosting decision-makers’ confidence in the selected technologies. The methodology is tested via a case study in Colombia.
Decision analysis; OR in energy; Multiple criteria analysis; Robustness and sensitivity analysis; OR in societal problem analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711010423
Henao, Felipe
Cherni, Judith A.
Jaramillo, Patricia
Dyner, Isaac
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:94-1072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:94-107
article
A simulated annealing heuristic for the team orienteering problem with time windows
This paper presents a simulated annealing based heuristic approach for the team orienteering problem with time windows (TOPTW). Given a set of known locations, each with a score, a service time, and a time window, the TOPTW finds a set of vehicle tours that maximizes the total collected scores. Each tour is limited in length and a visit to a location must start within the location’s service time window. The proposed heuristic is applied to benchmark instances. Computational results indicate that the proposed heuristic is competitive with other solution approaches in the literature.
Routing; Team orienteering problem; Time window; Simulated annealing;
http://www.sciencedirect.com/science/article/pii/S037722171100765X
Lin, Shih-Wei
Yu, Vincent F.
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:643-6522016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:643-652
article
Efficiency of the medical care industry: Evidence from the Italian regional system
What might be the relation between clinical research and the efficiency of medical care suppliers? Is the hypothesis of a positive relation consistent? Considering efficiency as a supplier’s ability to maximize the number of patients hospitalized in a mobility process among regions (i.e. the mobility balance), this work aims to highlight the existence of a positive externality of pharmaceutical clinical research on this kind of efficiency. In other words, such an externality can affect patients’ perception of the good or bad quality of the outputs supplied by the medical care industry, driving their mobility process. Taking Italy and the mobility of patients among regions into account, an operational research study is performed to support this assumption.
OR in health services; Data Envelopment Analysis; Pharmaceutical clinical research; Medical researcher; Research subjects; Regional analysis;
http://www.sciencedirect.com/science/article/pii/S0377221711009295
Ippoliti, Roberto
Falavigna, Greta
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:48-572016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:48-57
article
Connectivity-and-hop-constrained design of electricity distribution networks
This paper addresses the problem of designing the configuration of an interconnected electricity distribution network so as to maximize the minimum power margin over the feeders. In addition to the limitation on feeder power capacity, the distance (as hop count) between any customer and its allocated feeder is also limited, to prevent power losses and voltage drops. Feasibility conditions are studied and a complexity analysis is performed before introducing a heuristic algorithm and two integer linear programming formulations for the problem. A cutting-plane algorithm, relying on the generation of two classes of cuts that enforce the connectivity and distance requirements respectively, is proposed for solving the second integer linear programming formulation. All the approaches are then compared on a set of 190 instances and their performances discussed.
OR in energy; Electricity distribution networks; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221711009052
Rossi, André
Aubry, Alexis
Jacomino, Mireille
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:397-4082016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:397-408
article
Multistage stochastic portfolio optimisation in deregulated electricity markets using linear decision rules
The deregulation of electricity markets increases the financial risk faced by retailers who procure electric energy on the spot market to meet their customers’ electricity demand. To hedge against this exposure, retailers often hold a portfolio of electricity derivative contracts. In this paper, we propose a multistage stochastic mean–variance optimisation model for the management of such a portfolio. To reduce computational complexity, we apply two approximations: we aggregate the decision stages and solve the resulting problem in linear decision rules (LDR). The LDR approach consists of restricting the set of recourse decisions to those affine in the history of the random parameters. When applied to mean–variance optimisation models, it leads to convex quadratic programs. Since their size grows typically only polynomially with the number of periods, they can be efficiently solved. Our numerical experiments illustrate the value of adaptivity inherent in the LDR method and its potential for enabling scalability to problems with many periods.
OR in energy; Electricity portfolio management; Stochastic programming; Risk management; Linear decision rules;
http://www.sciencedirect.com/science/article/pii/S0377221711007132
Rocha, Paula
Kuhn, Daniel
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:491-5072016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:491-507
article
Incorporating human behaviour in simulation models of screening for breast cancer
Simulation modelling is widely used in many industries to assess and evaluate alternative options and to test strategies or operating rules that are too complex to be modelled analytically. Simulation software has developed its capability in parallel with the growth in computing power since the 1980s. However, in practice, the results from even the most sophisticated and complex simulation model may not truly reflect what happens in the real world, because such models do not account for human behaviour. For example, in the domain of healthcare, simulation is often used to evaluate the outcomes of medical interventions such as new drug treatments. In reality, however, patients may not complete the course of a prescribed medication, perhaps because they find the side-effects unpleasant. A simulation study designed to evaluate this medication that ignores such behavioural factors may give unreliable results. In this paper we describe a model for screening for breast cancer which includes behavioural factors to model women’s decisions to attend for mammography. The model results indicate that increasing attendance through education or publicity campaigns can be as effective as decreasing the intervals between screens. This would have considerable cost implications for healthcare providers.
Discrete-event simulation; Health care modelling; Human behaviour; Breast cancer screening;
http://www.sciencedirect.com/science/article/pii/S0377221711009817
Brailsford, S.C.
Harper, P.R.
Sykes, J.
oai:RePEc:eee:ejores:v:218:y:2012:i:2:p:305-3152016-06-23RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:2:p:305-315
article
Instance-specific multi-objective parameter tuning based on fuzzy logic
Finding good parameter values for meta-heuristics is known as the parameter setting problem. We propose a new instance-specific parameter tuning strategy, called IPTS, which takes the trade-off between solution quality and computational time into consideration. The method involves two important steps: an a priori statistical analysis to identify the factors that determine heuristic performance, in both quality and time, for a specific type of problem; and the transformation of these insights into a fuzzy inference system rule base, which aims to return parameter values on the Pareto front with respect to a decision maker’s preference.
Metaheuristics; Combinatorial optimisation; Travelling Salesman Problem; Parameter setting; Fuzzy logic;
http://www.sciencedirect.com/science/article/pii/S0377221711009441
Ries, Jana
Beullens, Patrick
Salt, David
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:368-378 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:368-378
article
Willingness-to-pay estimation with choice-based conjoint analysis: Addressing extreme response behavior with individually adapted designs
The increasing consideration of behavioral aspects in operations management models has prompted greater use of choice-based conjoint (CBC) studies in operations research. Such studies can elicit consumers’ willingness to pay (WTP), a core input for many optimization models. However, optimization models can yield valid results only if consumers’ WTP is estimated accurately. A simulation study and two field studies show that extreme response behavior in CBC studies, such that consumers always or never choose the no-purchase option, harms the validity of WTP estimates. Reporting the share of consumers who always and never select the no-purchase option allows for detecting extreme response behavior. This study suggests an individually adapted design that avoids extreme response behavior and thus significantly improves WTP estimation accuracy.
Choice-based conjoint analysis; Willingness to pay; Marketing research;
http://www.sciencedirect.com/science/article/pii/S0377221712000033
Gensler, Sonja
Hinz, Oliver
Skiera, Bernd
Theysohn, Sven
oai:RePEc:eee:ejores:v:219:y:2012:i:3:p:541-556 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:3:p:541-556
article
A stochastic control formalism for dynamic biologically conformal radiation therapy
State-of-the-art methods for optimizing cancer treatment over several weeks of external beam radiotherapy take a static–deterministic view of the treatment planning process, mainly focusing on spatial distribution of dose. Recent progress in quantitative functional imaging as well as mathematical models of tumor response to radiotherapy is increasingly enabling treatment planners to monitor/predict a patient’s biological response over weeks of treatment. In this paper we introduce dynamic biologically conformal radiation therapy (DBCRT), a mathematical framework intended to exploit these emerging technological and biological modeling advances to design patient-specific radiation treatment strategies that dynamically adapt to the spatiotemporal evolution of a patient’s biological response over several treatment sessions in order to achieve the best possible health outcome. More specifically, we propose a discrete-time stochastic control formalism where we use the patient’s biological condition to model the system state and the beam intensities as controls. Three approximate control schemes are then applied and compared for efficiency. Numerical simulations on test cases show that DBCRT results in a 64–98% improvement in treatment efficacy as compared to the more conventional static–deterministic approach.
OR in health services; Control; Dynamic programming; Intensity modulated radiation therapy; Adaptive radiotherapy;
http://www.sciencedirect.com/science/article/pii/S0377221711009799
Kim, Minsun
Ghate, Archis
Phillips, Mark H.
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:425-433 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:425-433
article
A systematic two phase approach for the nurse rostering problem
Nurse rostering is an NP-hard combinatorial problem, which makes real-life instances extremely difficult to solve efficiently due to their size and complexity. Real problem instances usually have complicated work rules related to safety and quality-of-service issues, in addition to rules about the quality of life of the personnel. For these reasons, computer-supported scheduling and rescheduling for this problem is indispensable. The specifications of the problem addressed were defined by the First International Nurse Rostering Competition (INRC-2010), sponsored by the leading conference in the automated timetabling domain, PATAT-2010. Since the competition imposed quality and time-constraint requirements, the problem instances were partitioned into sub-problems of manageable computational size and then solved sequentially using integer mathematical programming. A two-phase strategy was implemented: in the first phase the workload for each nurse and for each day of the week was decided, while in the second phase the specific daily shifts were assigned. In addition, local optimization techniques for searching across combinations of nurses’ partial schedules were applied. This sequence is repeated several times depending on the available computational time. Our approach and the submitted software produced excellent solutions for both the known and the hidden problem instances, earning our team first place in all tracks of the INRC-2010 competition.
Nurse rostering; Integer programming; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221711011362
Valouxis, Christos
Gogos, Christos
Goulas, George
Alefragis, Panayiotis
Housos, Efthymios
oai:RePEc:eee:ejores:v:217:y:2012:i:3:p:509-518 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:3:p:509-518
article
Specification and estimation of primal production models
When estimating production technology in a primal framework, production functions, input and output distance functions, and input requirement functions are widely used in the empirical literature. This paper shows that these popular primal-based models are algebraically equivalent in the sense that they can be derived from the same underlying transformation (production possibility) function. By assuming that producers maximize profit, we show that in all cases except one, the use of ordinary least squares (OLS) gives inconsistent estimates irrespective of whether the production, input distance, or input requirement function is used. Based on several specifications of the production and input distance function models, we conclude that one can estimate the input elasticities and returns to scale consistently using instruments on only one regressor. No instruments are needed if either it is assumed that producers know the technology entirely (including the so-called error term) or a system approach is used. We use Norwegian timber harvesting data to illustrate the workings of various model specifications.
Production function; Input distance function; Input requirement function; Cobb–Douglas; Translog;
http://www.sciencedirect.com/science/article/pii/S0377221711008939
Kumbhakar, Subal C.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:140-151 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:140-151
article
Modeling age-based maintenance strategies with minimal repairs for systems subject to competing failure modes due to degradation and shocks
This paper deals with maintenance strategies with minimal repairs for single-unit repairable systems that are subject to competing and dependent failures due to degradation and traumatic shocks. The main aims are to study different approaches for making a minimal repair decision (i.e., time-based or condition-based), which is a possible corrective maintenance action under the occurrence of shocks, and to show which approach can lead to a greater saving in maintenance cost in a given situation. Two age-based maintenance policies, with age-based minimal repairs and degradation-based minimal repairs, are modeled, and their performance is compared with a classical pure age-based replacement policy without minimal repairs. Numerical results show the cost savings of the maintenance policies and allow us to draw some conclusions about their performance under different system characteristics and maintenance costs. It is shown that carrying out minimal repairs is useful in many situations to improve the performance of maintenance operations. Moreover, the comparison of the optimal maintenance costs incurred by the two maintenance policies with minimal repairs allows us to identify the conditions under which the time-based and condition-based minimal repair approaches are appropriate.
Gamma process; Non-homogeneous Poisson process; Age replacement policy; Minimal repairs; Random inspection; Dynamic environment;
http://www.sciencedirect.com/science/article/pii/S0377221711009453
Huynh, K.T.
Castro, I.T.
Barros, A.
Bérenguer, C.
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:280-292 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:280-292
article
Design of regional production networks for second generation synthetic bio-fuel – A case study in Northern Germany
In the medium term, second-generation synthetic bio-diesel will make an important contribution to sustainable mobility. However, owing to political, technical, and market-related uncertainties, it is still not clear which interest groups will invest in production capacities and which technologies will be used. Hence, a multi-period MIP model is presented for integrated location, capacity, and technology planning in the design of production networks for second-generation synthetic bio-diesel. The approach is applied to the region of Niedersachsen, Germany. Principal network configurations are developed for this region considering different scenarios and different risk attitudes of interest groups. Based on the investigation, recommendations are drawn regarding advantageous plant concepts as well as strategies for capacity installation. Finally, recommendations for political decision makers as well as for potential investors are deduced.
(D) Supply chain management; Network design; Facility location planning; Synthetic bio-fuel; Case study;
http://www.sciencedirect.com/science/article/pii/S037722171100943X
Walther, Grit
Schatka, Anne
Spengler, Thomas S.
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:697-699 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:697-699
article
Can international environmental cooperation be bought: Comment
Fuentes-Albero and Rubio (2010) analytically examine the effects of country heterogeneity on international environmental cooperation. They consider two types of countries, having different abatement costs in one case and different environmental damages in another. Furthermore, it is analyzed whether a self-financed transfer system can diminish these heterogeneity effects. The paper shows, for both scenarios of asymmetry and no transfers, that the maximum level of cooperation consists of three countries of the same type. For the case of heterogeneity in environmental damages, Fuentes-Albero and Rubio conclude that an agreement between one type 1 and one type 2 country is also self-enforcing, given that the differences in damages are not very large. In this comment, the derivation of this last result is shown to be incorrect by proving that the coalition is not self-enforcing.
Game theory; Self-enforcing international environmental agreements; Environment; Group decision and negotiation;
http://www.sciencedirect.com/science/article/pii/S037722171100717X
Glanemann, Nicole
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:257-269 2016-06-23 RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:257-269
article
Recent advances in optimization techniques for statistical tabular data protection
One of the main services of National Statistical Agencies (NSAs) for the current Information Society is the dissemination of large amounts of tabular data, which is obtained from microdata by crossing one or more categorical variables. NSAs must guarantee that no confidential individual information can be obtained from the released tabular data. Several statistical disclosure control methods are available for this purpose. These methods result in large linear, mixed integer linear, or quadratic mixed integer linear optimization problems. This paper reviews some of the existing approaches, with an emphasis on two of them: cell suppression problem (CSP) and controlled tabular adjustment (CTA). CSP and CTA have concentrated most of the recent research in the tabular data protection field. The particular focus of this work is on methods and results of practical interest for end-users (mostly, NSAs). Therefore, in addition to the resulting optimization models and solution approaches, computational results comparing the main optimization techniques – both optimal and heuristic – using real-world instances are also presented.
Linear programming; Network flows; Mixed integer linear programming; Statistical disclosure control; Large-scale optimization;
http://www.sciencedirect.com/science/article/pii/S037722171100316X
Castro, Jordi
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:567-576 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:567-576
article
Cooperation and game-theoretic cost allocation in stochastic inventory models with continuous review
We study cooperation strategies for companies that continuously review their inventories and face Poisson demand. Our main goal is to analyze stable allocations of the joint costs: allocations under which no group of companies pays more than it would on its own. If such allocations exist, they provide an incentive for the companies to cooperate.
Joint replenishment; Stochastic demand; Cost allocation; Continuous review; Game theory; Inventory model;
http://www.sciencedirect.com/science/article/pii/S0377221713004700
Timmer, Judith
Chessa, Michela
Boucherie, Richard J.
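The stability notion in the record above (no coalition of companies should pay more under the joint allocation than its stand-alone cost, i.e. a core allocation) can be checked directly. A minimal sketch with hypothetical, subadditive joint ordering costs, not data from the paper:

```python
from itertools import chain, combinations

def all_coalitions(players):
    """All non-empty subsets of the player set."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))

def in_core(allocation, cost):
    """Check whether a cost allocation is stable (lies in the core):
    it must split the grand-coalition cost exactly, and no coalition
    may be charged more than its stand-alone cost."""
    players = sorted(allocation)
    grand = tuple(players)
    if abs(sum(allocation.values()) - cost[grand]) > 1e-9:
        return False  # not efficient
    return all(sum(allocation[p] for p in S) <= cost[S] + 1e-9
               for S in all_coalitions(players))

# Hypothetical joint costs for three companies A, B, C:
cost = {('A',): 10.0, ('B',): 8.0, ('C',): 6.0,
        ('A', 'B'): 15.0, ('A', 'C'): 13.0, ('B', 'C'): 11.0,
        ('A', 'B', 'C'): 18.0}

stable = {'A': 7.0, 'B': 6.0, 'C': 5.0}
print(in_core(stable, cost))  # → True: every coalition saves by cooperating
```

Existence of such allocations is exactly what gives the companies an incentive to cooperate; if no allocation passes this check, the core is empty.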
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:76-89 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:76-89
article
Optimisation of maintenance routing and scheduling for offshore wind farms
An optimisation model and a solution method for maintenance routing and scheduling at offshore wind farms are proposed. The model finds the optimal schedule for maintaining the turbines and the optimal routes for the crew transfer vessels to service the turbines along with the number of technicians required for each vessel. The model takes into account multiple vessels, multiple periods (days), multiple Operation & Maintenance (O&M) bases, and multiple wind farms. We develop an algorithm based on the Dantzig–Wolfe decomposition method, where a mixed integer linear program is solved for each subset of turbines to generate all feasible routes and maintenance schedules for the vessels for each period. The routes have to consider several constraints such as weather conditions, the availability of vessels, and the number of technicians available at the O&M base. An integer linear program model is then proposed to find the optimal route configuration along with the maintenance schedules that minimise maintenance costs, including travel, technician and penalty costs. The computational experiments show that the proposed optimisation model and solution method find optimal solutions to the problem in reasonable computing times.
Maintenance scheduling; Routing problem; Offshore wind farm;
http://www.sciencedirect.com/science/article/pii/S0377221716303964
Irawan, Chandra Ade
Ouelhadj, Djamila
Jones, Dylan
Stålhane, Magnus
Sperstad, Iver Bakken
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:734-744 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:734-744
article
A competitive hub location and pricing problem
We formulate and solve a new hub location and pricing problem, describing a situation in which an existing transportation company operates a hub and spoke network, and a new company wants to enter into the same market, using an incomplete hub and spoke network. The entrant maximizes its profit by choosing the best hub locations and network topology and applying optimal pricing, considering that the existing company applies mill pricing. Customers’ behavior is modeled using a logit discrete choice model. We solve instances derived from the CAB dataset using a genetic algorithm and a closed expression for the optimal pricing. Our model confirms that, in competitive settings, seeking the largest market share is dominated by profit maximization. We also describe some conditions under which it is not convenient for the entrant to enter the market.
Location; Competitive models; Hub location problems; Location and pricing; Hub and spoke;
http://www.sciencedirect.com/science/article/pii/S037722171300475X
Lüer-Villagra, Armin
Marianov, Vladimir
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:690-701 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:690-701
article
Recent developments on Reactivity: Theoretical conceptualization and empirical verification
This paper seeks to enrich the operations and supply chain management literature by developing the concept of Reactivity and introducing related performance indicators. Reactivity describes the capability to perform operationally and economically under unexpected conditions. A qualitative investigation identifies useful managerial practices for achieving Reactivity, while an empirical analysis tests the relevance of each practice as well as the economic benefits that Reactivity provides. The findings suggest that managers and practitioners should develop a Reactivity orientation because it benefits firms’ economic performance when an unexpected event occurs; in addition, several recommended managerial practices should be undertaken to ensure its correct implementation.
Reactivity; Performance; Managerial practice; Unexpected demand; Empirical analysis;
http://www.sciencedirect.com/science/article/pii/S0377221713005377
De Giovanni, Pietro
Cariola, Alfio
Passarelli, Mariacarmela
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:654-666 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:654-666
article
A novel group ranking model for revealing sequence and quantity knowledge
The aggregation of individuals’ preferences into a consensus ranking is a group ranking problem that has been widely utilized in various applications, such as decision support systems, recommendation systems, and voting systems. Gathering preference comparisons and aggregating them to reach a consensus is a conventional issue. For example, b > c ⩾ d ⩾ a indicates that b is strictly preferred to c, and c (d) is weakly preferred to d (a), where > and ⩾ are comparators and a, b, c, and d are items. Recently, a new type of ranking model was proposed to provide temporal orders of items. The order b&c→a means that b and c can occur simultaneously and both precede a. Although this model can derive the order ranking of items, knowledge about the quantities of related items is also important for approaching more real-life circumstances. For example, when enterprises or individuals manage their portfolios in financial management, two considerations should be raised initially: the sequences and the amounts of money for investment objects. In this study, we propose a model for discovering consensus sequential patterns with quantitative linguistic terms. Experiments using synthetic and real datasets showed the model’s computational efficiency, scalability, and effectiveness.
Group ranking; Data mining; Sequential data; Quantitative data; Linguistic terms;
http://www.sciencedirect.com/science/article/pii/S0377221713005626
Huang, Tony Cheng-Kui
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:44-54 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:44-54
article
Optimal allocation of buffer times to increase train schedule robustness
Reliability and punctuality of railway traffic are among the key performance indicators, which have a significant impact on user satisfaction. A way to improve the reliability and on-time performance in the timetable design stage is by improving the timetable robustness. In order to increase the robustness, most railway companies in Europe insert a fixed amount of buffer time between possibly conflicting events in order to reduce or prevent delay propagation if the first event occurs with a delay. However, this often causes an increase in capacity consumption, which is a problem for heavily utilised lines. A sufficient amount of buffer time can therefore not be added between every pair of conflicting events. Thus, buffer times need to be allocated carefully to protect events with the highest priority. In this paper we consider the problem of increasing the robustness of a timetable by finding an optimal allocation of buffer times on a railway corridor. We model this resource allocation problem as a knapsack problem, where each candidate buffer time is treated as an object with a value (its priority for buffer time assignment) determined according to commercial and operational criteria, and a size equal to its time duration. The validity of the presented approach is demonstrated on a case study from a busy mixed-traffic line in Sweden.
Buffer times; Capacity; Knapsack problem; Timetable robustness;
http://www.sciencedirect.com/science/article/pii/S0377221716303332
Jovanović, Predrag
Kecman, Pavle
Bojović, Nebojša
Mandić, Dragomir
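The knapsack view of buffer allocation described in the record above can be sketched with a standard 0/1 knapsack dynamic program; the durations, priorities, and capacity below are hypothetical and purely illustrative, not taken from the paper:

```python
def allocate_buffers(candidates, capacity):
    """0/1 knapsack DP: pick the subset of candidate buffer times whose total
    duration fits within the capacity budget and whose total priority value is
    maximal. candidates: list of (duration, priority); capacity: integer time units."""
    best = [0] * (capacity + 1)  # best[w] = max priority using at most w time units
    keep = [[False] * (capacity + 1) for _ in candidates]
    for i, (dur, val) in enumerate(candidates):
        for w in range(capacity, dur - 1, -1):  # backwards: each candidate used once
            if best[w - dur] + val > best[w]:
                best[w] = best[w - dur] + val
                keep[i][w] = True
    # Trace back which candidates were chosen.
    chosen, w = [], capacity
    for i in range(len(candidates) - 1, -1, -1):
        if keep[i][w]:
            chosen.append(i)
            w -= candidates[i][0]
    return best[capacity], sorted(chosen)

# Hypothetical candidates: (buffer duration in minutes, assignment priority).
candidates = [(3, 10), (2, 7), (4, 12), (1, 3)]
print(allocate_buffers(candidates, 6))  # → (20, [0, 1, 3])
```

With a 6-minute budget the DP selects the first, second, and fourth candidate buffers, protecting the highest-priority conflicts without exceeding the capacity consumption allowance.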
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:631-644 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:631-644
article
Specifying measurement errors for required lifetime estimation performance
Lifetime estimation based on measured health monitoring data has long been investigated and applied in the reliability and operational management communities and practices, such as planning maintenance schedules, logistic support, and production planning. It is known that measurement error (ME) is a source of uncertainty in the measured data that considerably affects the performance of data-driven lifetime estimation. While the effect of ME on the performance of data-driven lifetime estimation models has been studied recently, the reverse problem, specifying the ME range required to achieve a desired lifetime estimation performance, has not been addressed. This problem concerns the usability of the measured health monitoring data for estimating the lifetime. In this paper, we deal with this problem and develop guidelines for formulating specification limits on the distribution-related ME characteristics. By referring to a widely applied Wiener process-based degradation model, permissible values for the ME bias and standard deviation can be given under a specified lifetime estimation requirement. If the ME does not satisfy the permissible values, the desired lifetime estimation performance cannot be ensured by the measured health monitoring data. We further analyze the effect of ME on an age-based replacement decision, one of the most common maintenance policies in maintenance scheduling. Numerical examples and a case study are provided to illustrate the implementation procedure and the usefulness of the theoretical results.
Reliability; Replacement; Lifetime; Wiener process; Measurement error;
http://www.sciencedirect.com/science/article/pii/S0377221713004633
Si, Xiao-Sheng
Chen, Mao-Yin
Wang, Wenbin
Hu, Chang-Hua
Zhou, Dong-Hua
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:116-125 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:116-125
article
The risk-averse newsvendor problem under spectral risk measures: A classification with extensions
We study the risk-averse newsvendor problem by defining the objective function as a spectral risk measure. We analyze the problem under different types of return formulations, focusing on the impact of risk aversion and cost parameters on the optimal ordering decision. We show that the monotonicity of the return function with respect to random demand determines the structural properties of the problem. When the return function is monotone in the demand realization, the optimal order quantity does not depend on the return margin but only on the overage and underage costs, and it has a monotone relation to risk aversion. However, if the return is non-monotone in demand, the impact of risk aversion depends on the specific setting and can also be non-monotone. Additionally, the optimal order quantity is non-increasing in the margin, which leads to a varying impact of the selling price under distinct settings.
Inventory; Newsvendor; Risk-aversion; Spectral risk measures; Shortage cost;
http://www.sciencedirect.com/science/article/pii/S0377221716304222
Arıkan, Emel
Fichtinger, Johannes
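As an illustration of the objective used in the record above, a spectral risk measure can be estimated from a loss sample as a weighted average of the sorted losses, with weights increasing toward the worst outcomes. The sketch below uses the exponential risk spectrum and hypothetical newsvendor cost parameters, and finds a risk-averse order quantity by simulation and grid search rather than by the paper's structural analysis:

```python
import numpy as np

def spectral_risk(losses, k=2.0):
    """Spectral risk measure of a loss sample: weights on the sorted losses are
    obtained by integrating the exponential risk spectrum
    phi(p) = k*exp(-k*(1-p)) / (1 - exp(-k)) over each empirical quantile
    interval, so the weights sum to one and increase toward the worst losses."""
    losses = np.sort(losses)
    n = len(losses)
    p = np.linspace(0.0, 1.0, n + 1)
    w = (np.exp(-k * (1 - p[1:])) - np.exp(-k * (1 - p[:-1]))) / (1 - np.exp(-k))
    return float(np.dot(w, losses))

def newsvendor_loss(q, demand, c=5.0, price=9.0, salvage=1.0):
    """Negative profit for order quantity q under a demand sample
    (hypothetical unit cost, selling price, and salvage value)."""
    sales = np.minimum(q, demand)
    leftover = np.maximum(q - demand, 0.0)
    return -(price * sales + salvage * leftover - c * q)

rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=20000)  # hypothetical demand model

# Grid search for the order quantity minimizing the spectral risk of the loss.
grid = np.arange(0.0, 301.0, 5.0)
q_star = min(grid, key=lambda q: spectral_risk(newsvendor_loss(q, demand)))
```

Because the spectrum is increasing, the measure always weakly exceeds the expected loss, so `q_star` reflects risk aversion; larger `k` tilts the weights further toward the worst demand outcomes.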
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:275-291 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:275-291
article
Decomposing productivity indexes into explanatory factors
Productivity measures are increasingly regarded as key indicators of economic performance. Identifying sources of productivity growth is of interest to both firms and policy makers. This paper revisits the debate on how to decompose productivity growth into explanatory factors, with a focus on extracting technical progress, technical efficiency change, and returns to scale components. Using Bjurek's concept of the Malmquist index, introduced into production theory in a systematic way by Caves, Christensen and Diewert, a reference technology is required to define the components of interest. Unlike other approaches, ours does not make any convexity assumptions on the reference technology but instead follows the example of Tulkens and his coauthors in assuming that the reference technology satisfies free disposability assumptions. A new decomposition of a productivity index is provided, with the existence and properties of the underlying distance functions of the decomposition proven under relatively unrestrictive assumptions. The paper also provides for the first time a theoretical justification for the geometric average form of the Bjurek productivity index. These rigorous theoretical contributions provide significant avenues for enhanced understanding of empirical productivity performance.
Malmquist indexes; Data Envelopment Analysis; Free Disposal Hulls; Nonparametric approaches to production theory; Distance functions;
http://www.sciencedirect.com/science/article/pii/S0377221716303800
Diewert, W. Erwin
Fox, Kevin J.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:35-43 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:35-43
article
Matrix completion under interval uncertainty
Matrix completion under interval uncertainty can be cast as a matrix completion problem with element-wise box constraints. We present an efficient alternating-direction parallel coordinate-descent method for the problem. We show that the method outperforms any other known method on a benchmark in image in-painting in terms of signal-to-noise ratio, and that it provides high-quality solutions for an instance of collaborative filtering with 100,198,805 recommendations within 5 minutes on a single personal computer.
Matrix completion; Robust optimization; Collaborative filtering; Coordinate descent; Large-scale optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716305513
Mareček, Jakub
Richtárik, Peter
Takáč, Martin
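Box-constrained matrix completion, as in the record above, can be illustrated with a much simpler iteration than the paper's alternating-direction parallel coordinate descent: alternate a truncated SVD with clipping every entry into its interval and re-imposing the observed entries. All data below are synthetic and the method is a naive sketch, not the authors':

```python
import numpy as np

def complete_with_box(M, mask, lo, hi, rank=2, iters=200):
    """Naive low-rank completion with element-wise interval constraints:
    alternate a best rank-r approximation (truncated SVD) with projection
    onto the box [lo, hi] and restoration of the observed entries."""
    X = np.where(mask, M, M[mask].mean())   # initialise missing entries
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        X = np.clip(X, lo, hi)                    # interval (box) constraints
        X[mask] = M[mask]                         # keep observed entries fixed
    return X

rng = np.random.default_rng(1)
A = rng.random((8, 2)) @ rng.random((2, 8))  # hypothetical rank-2 ground truth
mask = rng.random(A.shape) < 0.6             # observe ~60% of the entries
X = complete_with_box(A, mask, lo=0.0, hi=A.max(), rank=2)
```

The returned matrix agrees with the data on observed entries and satisfies the box constraints everywhere; the paper's factorized coordinate-descent method achieves the same feasibility at a far larger scale.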
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:770-778 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:770-778
article
A linear model for surface mining haul truck allocation incorporating shovel idle probabilities
We present models of trucks and shovels in oil sand surface mines. The models are formulated to minimize the number of trucks for a given set of shovels, subject to throughput and ore grade constraints. We quantify and validate the nonlinear relation between a shovel’s idle probability (which determines the shovel’s productivity) and the number of trucks assigned to the shovel via a simple approximation, based on the theory of finite source queues. We use linearization to incorporate this expression into linear integer programs. We assume in our integer programs that each shovel is assigned a single truck size but we outline how one could account for multiple truck sizes per shovel in an approximate fashion. The linearization of shovel idle probabilities allows us to formulate more accurate truck allocation models that are easily solvable for realistic-sized problems.
Queueing; OR in natural resources; Integer programming; Truck allocation; Oil sand mining;
http://www.sciencedirect.com/science/article/pii/S0377221713005043
Ta, Chung H.
Ingolfsson, Armann
Doucette, John
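The finite-source queueing relation mentioned in the abstract above (shovel idle probability as a nonlinear function of the number of assigned trucks) corresponds to the classical machine-repairman (M/M/1//N) model; a minimal sketch with illustrative rates, not the paper's calibrated parameters:

```python
from math import factorial

def shovel_idle_probability(n_trucks, cycle_rate, service_rate):
    """Idle probability of a single server (the shovel) fed by a finite source
    of n_trucks trucks (machine-repairman / M/M/1//N queue): P0 is the inverse
    of sum_{k=0..N} N!/(N-k)! * rho^k, with rho = cycle_rate / service_rate.
    cycle_rate is the rate at which each truck away from the shovel returns
    for loading; service_rate is the shovel's loading rate."""
    rho = cycle_rate / service_rate
    norm = sum(factorial(n_trucks) // factorial(n_trucks - k) * rho**k
               for k in range(n_trucks + 1))
    return 1.0 / norm

# More trucks keep the shovel busier: idle probability falls with fleet size,
# and shovel throughput is service_rate * (1 - P0).
probs = [shovel_idle_probability(n, cycle_rate=0.2, service_rate=1.0)
         for n in range(1, 8)]
```

It is exactly this nonlinear decay of the idle probability in the fleet size that the paper linearizes so the relation can be embedded in an integer program.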
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:645-653 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:645-653
article
Algorithmic aspects of mean–variance optimization in Markov decision processes
We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudopolynomial exact and approximation algorithms.
Markov processes; Dynamic programming; Control; Complexity theory;
http://www.sciencedirect.com/science/article/pii/S0377221713005079
Mannor, Shie
Tsitsiklis, John N.
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:535-546 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:535-546
article
Combining very large scale and ILP based neighborhoods for a two-level location problem
In this paper we tackle a generalization of the Single Source Capacitated Facility Location Problem in which two sets of facilities, called intermediate level and upper level facilities, have to be located; the dimensioning of the intermediate set, the assignment of clients to intermediate level facilities, and of intermediate level facilities to upper level facilities, must be optimized as well. Such a problem arises, for instance, in telecommunication network design: in fact, in hierarchical networks the traffic arising at client nodes often has to be routed through different kinds of facility nodes, which provide different services. We propose a heuristic approach, based on very large scale neighborhood search, to tackle the problem, in which both ad hoc algorithms and general purpose solvers are applied to explore the search space. We report on experimental results using datasets from the capacitated location literature. Such results show that the approach is promising and that Integer Linear Programming based neighborhoods are significantly effective.
Location; Local search; Variable neighborhood search; Very large scale neighborhood search; Matheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221713004797
Addis, Bernardetta
Carello, Giuliana
Ceselli, Alberto
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:196-204 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:196-204
article
Comparison of least squares Monte Carlo methods with applications to energy real options
Least squares Monte Carlo (LSM) is a state-of-the-art approximate dynamic programming approach used in financial engineering and real options to value and manage options with early or multiple exercise opportunities. It is also applicable to capacity investment and inventory/production management problems with demand/supply forecast updates arising in operations and hydropower-reservoir management. LSM has two variants, referred to as regress-now/later (LSMN/L), which compute continuation/value function approximations (C/VFAs). We provide novel numerical evidence for the relative performance of these methods applied to energy swing and storage options, two typical real options, using a common price evolution model. LSMN/L estimate C/VFAs that yield equally accurate (near optimal) and precise lower and dual (upper) bounds on the value of these real options. Estimating the LSMN/L C/VFAs and their associated lower bounds takes similar computational effort. In contrast, the estimation of a dual bound using the LSML VFA instead of the LSMN CFA takes seconds rather than minutes or hours. This finding suggests the use of LSML in lieu of LSMN when estimating dual bounds on the value of early or multiple exercise options, as well as of related capacity investment and inventory/production policies.
Energy; Real options; Least-squares Monte Carlo; Approximate dynamic programming; Information relaxation and duality;
http://www.sciencedirect.com/science/article/pii/S0377221716304404
Nadarajah, Selvaprabu
Margot, François
Secomandi, Nicola
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:178-1862016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:178-186
article
A Geo/G/1 retrial queueing system with priority services
This paper considers a discrete-time retrial queueing system in which an arriving customer can decide either to go directly to the server, expelling from the system the customer currently being served, if any, or to join the orbit in accordance with a FCFS discipline. An extensive analysis of the model is carried out and, using a generating functions approach, the distributions of the number of customers in the orbit and in the system, together with their respective means, are obtained. The stochastic decomposition law is derived and, as an application, bounds for the proximity between the steady-state distributions of the considered queueing system and its corresponding standard system are obtained. Recursive formulae for calculating the steady-state distributions of the orbit and system size are also developed. Moreover, we prove that the M/G/1 retrial queue with service interruptions can be approximated by the corresponding discrete-time system. The generating functions of the sojourn time of a customer in the orbit and in the system are also provided. Finally, some numerical examples illustrating the effect of the parameters on several performance characteristics are presented, together with a section of conclusions commenting on the main research contributions of this paper.
Discrete-time; Retrials; Approximation to continuous-time; Service upgrade; Sojourn times;
http://www.sciencedirect.com/science/article/pii/S0377221716305483
Atencia, I.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:1-162016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:1-16
article
Regression and Kriging metamodels with their experimental designs in simulation: A review
This article reviews the design and analysis of simulation experiments. It focusses on analysis via two types of metamodel (surrogate, emulator); namely, low-order polynomial regression and Kriging (or Gaussian process). The metamodel type determines the design of the simulation experiment, which determines the input combinations of the simulation model. For example, a first-order polynomial regression metamodel should use a “resolution-III” design, whereas Kriging may use “Latin hypercube sampling”. More generally, polynomials of first or second order may use resolution III, IV, V, or “central composite” designs. Before applying either regression or Kriging metamodeling, the many inputs of a realistic simulation model can be screened via “sequential bifurcation”. Optimization of the simulated system may use either a sequence of low-order polynomials—known as “response surface methodology” (RSM)—or Kriging models fitted through sequential designs—including “efficient global optimization” (EGO). Finally, “robust” optimization accounts for uncertainty in some simulation inputs.
Robustness and sensitivity; Metamodel; Design; Regression; Kriging;
http://www.sciencedirect.com/science/article/pii/S0377221716304623
Kleijnen, Jack P.C.
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:557-5662016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:557-566
article
Mathematical programming time-based decomposition algorithm for discrete event simulation
Mathematical programming has been proposed in the literature as an alternative technique to simulating a special class of Discrete Event Systems. There are several benefits to using mathematical programs for simulation, such as the possibility of performing sensitivity analysis and the ease of better integrating the simulation and optimisation. However, applications are limited by the usually long computational times. This paper proposes a time-based decomposition algorithm that splits the mathematical programming model into a number of submodels that can be solved sequentially to make the mathematical programming approach viable for long running simulations. The number of required submodels is the solution of an optimisation problem that minimises the expected time for solving all of the submodels. In this way, the solution time becomes a linear function of the number of simulated entities.
Simulation; Mathematical programming; Decomposition;
http://www.sciencedirect.com/science/article/pii/S0377221713005419
Alfieri, Arianna
Matta, Andrea
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:261-2742016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:261-274
article
Distributionally robust single machine scheduling with risk aversion
This paper presents a distributionally robust (DR) optimization model for the single machine scheduling problem (SMSP) with random job processing time (JPT). To the best of our knowledge, this is the first time a DR optimization approach has been applied to production scheduling problems in the literature. Unlike traditional stochastic programming models, which require an exact distribution, the presented DR-SMSP model needs only the mean-covariance information of JPT. Its aim is to find an optimal job sequence by minimizing the worst-case Conditional Value-at-Risk (Robust CVaR) of the job sequence’s total flow time. We give an explicit expression of Robust CVaR, and decompose the DR-SMSP into an assignment problem and an integer second-order cone programming (I-SOCP) problem. To efficiently solve the I-SOCP problem with uncorrelated JPT, we propose three novel Cauchy-relaxation algorithms. The effectiveness and efficiency of these algorithms are evaluated by comparing them to a CPLEX solver, and the robustness of the optimal job sequence is verified via comprehensive simulation experiments. In addition, the impact of confidence levels of CVaR on the tradeoff between optimality and robustness is investigated from both theoretical and practical perspectives. Our results convincingly show that the DR-SMSP model is able to enhance the robustness of the optimal job sequence and achieve risk reduction with only a small sacrifice in the optimality of the mean value. Through the simulation experiments, we have also been able to identify the strengths of each of the proposed algorithms.
Robustness and sensitivity analysis; Distributionally robust optimization; Single machine scheduling; CVaR; Total flow time;
http://www.sciencedirect.com/science/article/pii/S0377221716304453
Chang, Zhiqi
Song, Shiji
Zhang, Yuli
Ding, Jian-Ya
Zhang, Rui
Chiong, Raymond
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:215-2292016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:215-229
article
Robust two-stage stochastic linear optimization with risk aversion
We study a two-stage stochastic linear optimization problem where the recourse function is risk-averse rather than risk neutral. In particular, we consider the mean-conditional value-at-risk objective function in the second stage. The model is robust in the sense that the distribution of the underlying random variable is assumed to belong to a certain family of distributions rather than to be exactly known. We start by analyzing a simple case where uncertainty arises only in the objective function, and then explore the general case where uncertainty also arises in the constraints. We show that the former problem is equivalent to a semidefinite program and the latter problem is generally NP-hard. Applications to two-stage portfolio optimization, material order problems, the stochastic production-transportation problem and the single facility minimax distance problem are considered. Numerical results show that the proposed robust risk-averse two-stage stochastic programming model can effectively control the risk with solutions of acceptably good quality.
Uncertainty modeling; Stochastic programming; Robust optimization; Conditional value-at-risk; Semidefinite programming;
http://www.sciencedirect.com/science/article/pii/S0377221716304374
Ling, Aifan
Sun, Jie
Xiu, Naihua
Yang, Xiaoguang
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:55-612016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:55-61
article
FPTAS for the two identical parallel machine problem with a single operator under the free changing mode
We address in this paper the problem of scheduling a set of independent and non-preemptive jobs on two identical parallel machines with a single operator in order to minimize the makespan. The operator supervises the machines through a subset of a given set of modi operandi: the working modes. A working mode models the way the operator divides up his interventions between the machines. The processing times thus become variable as they now depend on the working mode being utilized. To build a schedule, we seek not only a partition of the jobs on the machines, but also a subset of working modes along with their duration. A pseudo-polynomial time algorithm is first exhibited, followed by a fully polynomial time approximation scheme (FPTAS) to generate an optimal solution within the free changing mode.
FPTAS; Free changing mode; Identical parallel machines; Operator;
http://www.sciencedirect.com/science/article/pii/S0377221716304167
Baptiste, Pierre
Rebaine, Djamal
Zouba, Mohammed
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:577-5862016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:577-586
article
Finite and infinite-horizon single vehicle routing problems with a predefined customer sequence and pickup and delivery
We consider the problem of finding the optimal routing of a single vehicle that starts its route from a depot and picks up from and delivers K different products to N customers that are served according to a predefined customer sequence. The vehicle is allowed during its route to return to the depot to unload returned products and restock with new products. The items of all products are of the same size. For each customer the demands for the products that are delivered by the vehicle and the quantity of the products that is returned to the vehicle are discrete random variables with known joint distribution. Under a suitable cost structure, it is shown that the optimal policy that serves all customers has a specific threshold-type structure. We also study a corresponding infinite-time horizon problem in which the service of the customers is not completed when the last customer has been serviced but it continues indefinitely with the same customer order. For each customer, the joint distribution of the quantities that are delivered and the quantity that is picked up is the same at each cycle. The discounted-cost optimal policy and the average-cost optimal policy have the same structure as the optimal policy in the finite-horizon problem. Numerical results are given that illustrate the structural results.
Logistics; Dynamic programming; Routing with pick up and delivery;
http://www.sciencedirect.com/science/article/pii/S0377221713004694
Pandelis, D.G.
Karamatsoukis, C.C.
Kyriakidis, E.G.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:126-1382016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:126-138
article
Heuristics for multi-item two-echelon spare parts inventory control subject to aggregate and individual service measures
We consider a multi-item two-echelon spare parts inventory system in which the central warehouse operates under a (Q, R) policy and local warehouses implement an (S−1,S) policy. The objective is to find the policy parameters minimizing expected system-wide inventory holding and fixed ordering costs subject to aggregate and individual response time constraints. Using an exact evaluation we provide a very efficient and effective heuristic, and also a tight lower bound for real-world, large-scale two-echelon spare parts inventory problems. An extensive numerical study reveals that as the number of parts increases – which is usually the case in practice – the relative gap between the cost of the heuristic solution and the lower bound approaches zero. In line with our findings, we show that the heuristic and the lower bound are asymptotically optimal and asymptotically tight, respectively, in the number of parts. In practice, this means we can solve real-life problems with large numbers of items optimally. We propose an alternative approach between the system and item approaches, which are based on setting individual and aggregate service level constraints, respectively. Using our alternative approach, we show that it is possible to keep the cost benefit of using aggregate service levels while avoiding long individual response times. We also show that the well-known sequential determination of policy parameters, i.e., determining the batch sizes first and then finding the other policy parameters using those batch sizes, which is known for its high performance in single-item models, performs relatively poorly for multi-item systems.
Inventory; Two-echelon; Multi-item; Batch ordering; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716304325
Topan, Engin
Bayındır, Z. Pelin
Tan, Tarkan
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:547-5562016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:547-556
article
An exact algorithm to minimize mean squared deviation of job completion times about a common due date
We consider a deterministic n-job, single machine scheduling problem with the objective of minimizing the Mean Squared Deviation (MSD) of job completion times about a common due date (d). The MSD measure is non-regular and its value can decrease when one or more completion times increase. The MSD problem is closely related to the Completion Time Variance (CTV) problem and has been proved to be NP-hard. This problem finds application in situations where uniformity of service is important. We present an exact algorithm of pseudo-polynomial complexity, using ideas from branch and bound and dynamic programming. We propose a dominance rule and also develop a lower bound on MSD. The dominance rule and lower bound are effectively combined and used in the development of the proposed algorithm. The search space is explored using the breadth first branching strategy. The asymptotic space complexity of the algorithm is O(nd). Irrespective of the version of the problem – tightly constrained, constrained or unconstrained – the proposed algorithm provides optimal solutions for problem instances of up to 1000 jobs under different due date settings.
Scheduling; Mean squared deviation; Branch and bound; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713005055
Srirangacharyulu, B.
Srinivasan, G.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:62-672016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:62-67
article
A new compact formulation for the discrete p-dispersion problem
This paper addresses the discrete p-dispersion problem (PDP), which is about selecting p facilities from a given set of candidates in such a way that the minimum distance between selected facilities is maximized. We propose a new compact formulation for this problem. In addition, we discuss two simple enhancements of the new formulation: simple bounds on the optimal distance can be exploited to reduce the size and to increase the tightness of the model at a relatively low cost of additional computation time, and the new formulation can be further strengthened by adding valid inequalities. We present a computational study carried out over a set of large-scale test instances in order to compare the new formulation against a standard mixed-integer programming model of the PDP, a line search, and a binary search. Our numerical results indicate that the new formulation in combination with the simple bounds is solved to optimality by an out-of-the-box mixed-integer programming solver in 34 out of 40 instances, while this is neither possible with the standard model nor with the search procedures. For instances in which the line and binary search fail to find a provably optimal solution, we achieve this by adding cuts to our enhanced formulation. With the new techniques we are able to exactly solve instances one order of magnitude larger than those previously solved in the literature.
Facility location; Dispersion problems; Max–min objective; Integer programming;
http://www.sciencedirect.com/science/article/pii/S037722171630457X
Sayah, David
Irnich, Stefan
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:17-232016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:17-23
article
A relaxed projection method for solving multiobjective optimization problems
In this paper, we propose an algorithm for solving multiobjective minimization problems on nonempty closed convex subsets of the Euclidean space. The proposed method combines a reflection technique for obtaining a feasible point with a projected subgradient method. Under suitable assumptions, we show that the sequence generated using this method converges to a Pareto optimal point of the problem. We also present some numerical results.
Multiple objective programming; Pareto optimality; Projected subgradient method;
http://www.sciencedirect.com/science/article/pii/S0377221716303460
Brito, A.S.
Cruz Neto, J.X.
Santos, P.S.M.
Souza, S.S.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:102-1152016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:102-115
article
A multi-item approach to repairable stocking and expediting in a fluctuating demand environment
We consider a single inventory location where multiple types of repairable spare parts are kept for service and maintenance of several different fleets of assets. Demand for each part is a Markov modulated Poisson process (MMPP). Each fleet has a target for the maximum expected number of assets down for lack of a spare part. The inventory manager can meet this target by stocking repairables and by expediting the repair of parts. Expedited repairs have a shorter lead time. There are multiple repair shops (or departments) that handle the repair of parts, and the load imposed on repair shops by expedited repairs is constrained. A dual-index policy makes stocking and expediting decisions that depend on demand fluctuations for each spare part type. We formulate the above problem as a non-linear non-convex integer programming problem and provide an algorithm based on column generation to compute feasible near optimal solutions and tight lower bounds. We show how to use the MMPP to model demand fluctuations in maintenance and other settings, including a moment fitting algorithm. We quantify the value of lead time flexibility and show that effective use of this flexibility can yield cost reductions of around 25 percent.
Inventory; Spare parts; Column generation; Maintenance; Markov modulated Poisson process;
http://www.sciencedirect.com/science/article/pii/S0377221716304234
Arts, Joachim
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:230-2412016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:230-241
article
Resource pooling in the presence of failures: Efficiency versus risk
This paper studies the effects of resource pooling on system performance in the presence of failures. The goal is to understand whether pooling increases efficiency and/or reduces risk. We consider four queueing systems with different degrees of pooling (one has no pooling, one has only queues pooled, one has queues and failures pooled, and one has servers pooled), estimate efficiency via the mean number of customers in each system, and assess risk via the probability that there are many customers in each system. Our results show that when servers are subject to failures, pooling queues is always beneficial, whereas pooling both queues and servers improves efficiency but also increases risk. Thus, there is a tradeoff between efficiency and risk in the presence of failures. These conclusions are different from reliable systems where pooling simultaneously improves efficiency and reduces risk and more pooling is better than less pooling (e.g., pooling queues and servers is better than pooling queues only). Thus, insights about resource pooling obtained from studying reliable systems should be used with caution in the presence of failures.
Unreliable servers; Pooling; Mean system size; Tail asymptotics; Stochastic ordering;
http://www.sciencedirect.com/science/article/pii/S0377221716303290
Andradóttir, Sigrún
Ayhan, Hayriye
Down, Douglas G.
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:514-5342016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:514-534
article
Mathematical optimization ideas for biodiversity conservation
Several major environmental issues like biodiversity loss and climate change currently concern the international community. These topics, which are related to the development of human societies, have become increasingly important since the United Nations Conference on Environment and Development (UNCED), or Earth Summit, in Rio de Janeiro in 1992. In this article, we are interested in the first issue. We present many examples of the help that mathematical programming can provide to decision-makers in the protection of biodiversity. The examples we have chosen concern the selection of nature reserves, the control of adverse effects caused by landscape fragmentation, including the creation or restoration of biological corridors, the ecological exploitation of forests, the control of invasive species, and the maintenance of genetic diversity. Most of the presented models are – or can be approximated with – linear-, quadratic- or fractional-integer formulations and emphasize spatial aspects of conservation planning. Many of them represent decisions taken in a static context, but the temporal dimension is also considered. The problems presented are generally difficult combinatorial optimization problems, some well solved and others less well. Research is still needed to make progress in solving them in order to deal with real instances satisfactorily. Moreover, relations between researchers and practitioners have to be strengthened. Furthermore, many recent achievements in the field of robust optimization could probably be successfully used for biodiversity protection, a domain in which many data are uncertain.
Biodiversity protection; Mathematical optimization; Integer programming; Nature reserve; Landscape fragmentation; Genetic diversity;
http://www.sciencedirect.com/science/article/pii/S0377221713002531
Billionnet, Alain
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:702-7112016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:702-711
article
Competition among non-life insurers under solvency constraints: A game-theoretic approach
We formulate a noncooperative game to model competition for policyholders among non-life insurance companies, taking into account market premium, solvency level, market share and underwriting results. We study Nash equilibria and Stackelberg equilibria for the premium levels, and give numerical illustrations.
Non-life insurance; Market model; Game theory; Nash equilibrium; Stackelberg equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713005365
Dutang, Christophe
Albrecher, Hansjoerg
Loisel, Stéphane
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:757-7692016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:757-769
article
A new approach for sheet nesting problem using guided cuckoo search and pairwise clustering
The nesting problem is commonly encountered in the sheet metal, clothing and shoe-making industries. It is a combinatorial optimization problem in which a given set of irregular polygons is required to be placed on a rectangular sheet. The objective is to minimize the length of the sheet while having all polygons inside the sheet without overlap. In this study, a methodology that hybridizes cuckoo search and guided local search optimization techniques is proposed.
Cutting; Nesting; No-fit polygon; Clustering; Cuckoo search; Guided local search;
http://www.sciencedirect.com/science/article/pii/S0377221713005080
Elkeran, Ahmed
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:90-1012016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:90-101
article
A feasibility-based heuristic for the container pre-marshalling problem
This paper addresses the container pre-marshalling problem (CPMP), which rearranges containers inside a storage bay to a desired layout. To date, target-driven algorithms have shown relatively good performance among all algorithms; they have two key components: first, containers are rearranged to their desired slots one by one in a certain order; and second, rearranging one container is completed by a sequence of movements. Our paper improves the performance of the target-driven algorithm in both aspects. The proposed heuristic determines the order of container rearrangements using the concepts of state feasibility, container stability, dead-end avoidance and tier-protection introduced in this paper. In addition, we improve the efficiency of performing container rearrangements by discriminating between different task types. Computational experiments show that the proposed heuristic performs well.
Container pre-marshalling problem; Feasibility-based heuristic; Tier-protection;
http://www.sciencedirect.com/science/article/pii/S0377221716304155
Wang, Ning
Jin, Bo
Zhang, Zizhen
Lim, Andrew
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:24-342016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:24-34
article
Computing equilibrium prices for a capital asset pricing model with heterogeneous beliefs and margin-requirement constraints
The mean-variance capital asset pricing model (CAPM) is a useful mathematical tool for studying a variety of financial problems. In contrast to existing work in the literature, which has primarily focused on deriving analytical solutions under restrictive assumptions, we propose a numerical algorithm for efficiently computing the set of equilibrium prices of a CAPM model with heterogeneous investors and arbitrary margin requirements. We present the mathematical formulation of the CAPM model, derive structural properties of the portfolio selection and excess demand functions, and establish the asymptotic convergence of the proposed algorithm under mild conditions. To illustrate the utility of the algorithm, we perform sensitivity analysis on a simple example to study the impact of margin requirements and interest rates on the resulting equilibrium prices. Numerical studies are also carried out to compare the performance of the algorithm with that of two other popular methods, namely, the fixed point method and the branch-and-bound algorithm.
Finance; Equilibrium price; Margin requirement; Tâtonnement;
http://www.sciencedirect.com/science/article/pii/S0377221716305471
Tong, Jun
Hu, Jiaqiao
Hu, Jianqiang
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:68-752016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:68-75
article
Group-buying and channel coordination under asymmetric information
Social media and improvements in technology allow retailers to offer a group-buying option to consumers in a variety of markets. Extant research shows that when consumers are sufficiently heterogeneous, group-buying helps a retailer practice price discrimination. Our paper examines when a manufacturer may prefer its reseller to employ the group-buying mechanism in conjunction with a traditional posted price. In our model, the retailer is privately informed about market heterogeneity, which is summarized via the relative size and the level of price sensitivity of two consumer segments. We show that any value to the manufacturer, of requiring the retailer to offer group-buying, revolves around how profitability varies with market heterogeneity. Our principal finding is that group-buying benefits the manufacturer more when the retailer is privately informed about market size than about the level of consumer price sensitivity.
E-commerce; Group buying; Channel; Asymmetric information;
http://www.sciencedirect.com/science/article/pii/S0377221716303915
Tran, Thanh
Desiraju, Ramarao
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:712-7192016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:712-719
article
Distributing inspections in space and time – Proposed solution of a difficult problem
There are several identical facilities in which precious or dangerous material is processed or stored. Since parts of this material may be diverted by some manager or employee of these facilities, or since failures in the processing of the material may occur, an authorized organization inspects these facilities regularly at the beginning and at the end of some reference time interval. In order to shorten the time required for detecting such an illegal activity or failures, some interim inspections are additionally performed in these facilities during the reference time interval. The optimal distribution of these interim inspections in space and time poses considerable analytical problems since adversary strategies have to be taken into account. So far only special cases have been analysed successfully, but these results lead to a conjecture for the solution of the general case which is surprisingly simple in view of the complexity of this inspection problem.
Game theory; Inspection games; Expected detection time; Illegal behavior; Error second kind;
http://www.sciencedirect.com/science/article/pii/S0377221713005353
Avenhaus, Rudolf
Krieger, Thomas
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:745-7562016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:745-756
article
Adaptive and robust radiation therapy optimization for lung cancer
A previous approach to robust intensity-modulated radiation therapy (IMRT) treatment planning for moving tumors in the lung involves solving a single planning problem before the start of treatment and using the resulting solution in all of the subsequent treatment sessions. In this paper, we develop an adaptive robust optimization approach to IMRT treatment planning for lung cancer, where information gathered in prior treatment sessions is used to update the uncertainty set and guide the reoptimization of the treatment for the next session. Such an approach allows for the estimate of the uncertain effect to improve as the treatment goes on and represents a generalization of existing robust optimization and adaptive radiation therapy methodologies. Our method is computationally tractable, as it involves solving a sequence of linear optimization problems. We present computational results for a lung cancer patient case and show that using our adaptive robust method, it is possible to attain an improvement over the traditional robust approach in both tumor coverage and organ sparing simultaneously. We also prove that under certain conditions our adaptive robust method is asymptotically optimal, which provides insight into the performance observed in our computational study. The essence of our method – solving a sequence of single-stage robust optimization problems, with the uncertainty set updated each time – can potentially be applied to other problems that involve multi-stage decisions to be made under uncertainty.
OR in health services; Intensity-modulated radiation therapy; Adaptive radiation therapy; Robust optimization; Linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221713004724
Chan, Timothy C.Y.
Mišić, Velibor V.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:242-2512016-09-15RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:242-251
article
Compact mixed integer linear programming models to the minimum weighted tree reconstruction problem
The Minimum Weighted Tree Reconstruction (MWTR) problem consists of finding a minimum length weighted tree connecting a set of terminal nodes in such a way that the length of the path between each pair of terminal nodes is greater than or equal to a given distance between the considered pair of terminal nodes. This problem has applications in several areas, namely, the inference of phylogenetic trees, the modeling of traffic networks and the analysis of internet infrastructures. In this paper, we investigate the MWTR problem and we present two compact mixed-integer linear programming models to solve the problem. Computational results using two different sets of instances, one from the phylogenetic area and another from the telecommunications area, show that the best of the two models is able to solve instances of the problem having up to 15 terminal nodes.
Mixed integer linear programming; Tree realization; Topology discovery; Routing topology inference; Minimum evolution problem;
http://www.sciencedirect.com/science/article/pii/S0377221716304349
Fortz, Bernard
Oliveira, Olga
Requejo, Cristina
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:667-689 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:667-689
article
Static search games played over graphs and general metric spaces
We define a general game which forms a basis for modelling situations of static search and concealment over regions with spatial structure. The game involves two players, the searching player and the concealing player, and is played over a metric space. Each player simultaneously chooses to deploy at a point in the space; the searching player receiving a payoff of 1 if his opponent lies within a predetermined radius r of his position, the concealing player receiving a payoff of 1 otherwise. The concepts of dominance and equivalence of strategies are examined in the context of this game, before focusing on the more specific case of the game played over a graph. Methods are presented to simplify the analysis of such games, both by means of the iterated elimination of dominated strategies and through consideration of automorphisms of the graph. Lower and upper bounds on the value of the game are presented and optimal mixed strategies are calculated for games played over a particular family of graphs.
Game theory; Search games; Networks; Graph theory; Metric spaces;
http://www.sciencedirect.com/science/article/pii/S0377221713005122
Oléron Evans, Thomas P.
Bishop, Steven R.
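The iterated elimination of dominated searcher strategies mentioned in this abstract can be illustrated on a small graph. The sketch below is not from the paper: it uses a hypothetical path graph and radius r = 1, and marks a searcher location as dominated when some other location's detection ball (all vertices within graph distance r) covers it, since that other location catches the concealer in every case the dominated one would.

```python
import itertools

# Hypothetical example: a path graph 0-1-2-3-4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5

# All-pairs graph distances via Floyd-Warshall.
INF = float('inf')
dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
for a, b in edges:
    dist[a][b] = dist[b][a] = 1
for k, i, j in itertools.product(range(n), repeat=3):
    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

r = 1
ball = lambda u: {v for v in range(n) if dist[u][v] <= r}

# Searcher strategy u dominates v if u's detection ball covers v's.
dominated = [v for v in range(n)
             if any(u != v and ball(u) >= ball(v) for u in range(n))]
print(dominated)  # the path's endpoints are dominated by their neighbors
```

On this path the endpoints 0 and 4 are eliminated, since deploying at 1 (resp. 3) detects the concealer in a strict superset of cases.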
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:292-307 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:292-307
article
Bayesian estimation of the global minimum variance portfolio
In this paper we consider the estimation of the weights of optimal portfolios from the Bayesian point of view under the assumption that the conditional distributions of the logarithmic returns are normal. Using the standard priors for the mean vector and the covariance matrix, we derive the posterior distributions for the weights of the global minimum variance portfolio. Moreover, we reparameterize the model to allow informative and non-informative priors directly for the weights of the global minimum variance portfolio. The posterior distributions of the portfolio weights are derived in explicit form for almost all models. The models are compared by using the coverage probabilities of credible intervals. In an empirical study we analyze the posterior densities of the weights of an international portfolio.
Global minimum variance portfolio; Posterior distribution; Credible interval; Wishart distribution;
http://www.sciencedirect.com/science/article/pii/S0377221716303812
Bodnar, Taras
Mazur, Stepan
Okhrin, Yarema
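The global minimum variance weights studied in this paper have, in the classical (non-Bayesian) case, the well-known closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below computes only this frequentist point estimate with a hypothetical three-asset covariance matrix; it is not the paper's Bayesian estimator.

```python
import numpy as np

# Hypothetical covariance matrix of logarithmic returns (illustrative only).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
ones = np.ones(3)
Sigma_inv = np.linalg.inv(Sigma)

# Classical GMV weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
w = Sigma_inv @ ones / (ones @ Sigma_inv @ ones)
print(w, w.sum())  # weights sum to one by construction
```

By construction these weights minimize portfolio variance among all fully invested portfolios, so any other weight vector summing to one (e.g. equal weights) has variance at least as large.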
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:617-630 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:617-630
article
Bundling decisions in supply chains
Firms often sell products in bundles to extract consumer surplus. While most bundling decisions studied in the literature are geared to integrated firms, we examine a decentralized supply chain where the suppliers retain decision rights. Using a generic distribution of customers’ reservation price we establish equilibrium solutions for three different bundling scenarios in a supply chain, and generate interesting insights for distributions with specific forms. We find that (i) in supply chain bundling the retailer’s margin equals the margin of each independent supplier, and it equals the combined margin when the suppliers are in a coalition, (ii) when the suppliers form a coalition to bundle their products the bundling gain in the supply chain is higher and retail price is lower than when the retailer bundles the products, (iii) the supply chain has more to gain from bundling relative to an integrated firm, (iv) the first-best supply chain bundling remains viable over a larger set of parameter values than those in the case of the integrated firm, (v) supplier led bundling is preferable to separate sales over a wider range of parameter values than if the retailer led the bundling, and (vi) if the reservation prices are uniformly distributed bundling can be profitable when the variable costs are low and valuations of the products are not significantly different from one another. For normally distributed reservation prices, we show that the bundling set is larger and the bundling gain is higher than that for a uniform distribution.
Supply chain management; Bundling; Price discrimination; Supply chain contract; Manufacturer retailer relationship;
http://www.sciencedirect.com/science/article/pii/S0377221713005146
Chakravarty, A.
Mild, A.
Taudes, A.
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:252-260 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:252-260
article
A more human-like portfolio optimization approach
Black and Litterman proposed an improvement to the Markowitz portfolio optimization model. They suggested the construction of views to represent the investor’s opinion about future stock returns. However, conceiving these views can be quite confusing, as it requires the investor to quantify several subjective parameters. In this article, we propose a new way of creating these views using Verbal Decision Analysis. Questionnaires were designed with the intent of making it easier for investors to express their vision about stocks. Following the ZAPROS methodology, the investor answers sets of questions that determine a Formal Index of Quality (FIQ). The views are then derived from the resulting FIQ. Our approach was implemented and tested on data from the Brazilian stock market. It allows investors to create a personal risk-return balanced portfolio without the help of an expert. The experiments show that the proposed method mitigates the impact of poor view estimation. Note that the method is qualitative and aims to create a more efficient portfolio that reflects the investor’s vision.
Decision support systems; Black–Litterman; Portfolio optimization; Asset allocation; Verbal Decision Analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716304386
Silva, Thuener
Pinheiro, Plácido Rogério
Poggi, Marcus
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:151-162 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:151-162
article
Mitigating variance amplification under stochastic lead-time: The proportional control approach
Logistic volatility is a significant contributor to supply chain inefficiency. In this paper we investigate the amplification of order and inventory fluctuations in a state-space supply chain model with stochastic lead-time, general auto-correlated demand and a proportional order-up-to replenishment policy. We identify the exact distribution functions of the orders and the inventory levels. We give conditions under which the proportional control mechanism can simultaneously reduce inventory and order variances. For AR(2) and ARMA(1,1) demand, we show that both variances can be lowered together under the proportional order-up-to policy. Simulation with real demand and lead-time data also confirms that a cost benefit exists.
Inventory; Bullwhip effect; Stochastic lead-time; Demand correlation;
http://www.sciencedirect.com/science/article/pii/S0377221716304301
Wang, Xun
Disney, Stephen M.
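The order-smoothing effect of the proportional feedback gain can be seen in a toy simulation. The sketch below is not the paper's model: it assumes i.i.d. normal demand and zero lead time (the paper treats stochastic lead times and auto-correlated demand), and the gain value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.normal(100, 10, 20000)  # i.i.d. demand, a simplifying assumption

def simulate(alpha, target=200):
    """Proportional order-up-to policy: recover only a fraction alpha of the
    inventory deficit each period (alpha = 1 is the classical OUT policy)."""
    inv = float(target)
    orders = np.empty_like(demand)
    for t, d in enumerate(demand):
        orders[t] = 100 + alpha * (target - inv)  # demand forecast + feedback
        inv += orders[t] - d                      # zero lead time assumed
    return orders

classical = simulate(1.0)
smoothed = simulate(0.3)
print(classical.var(), smoothed.var())  # the proportional gain damps orders
```

Under these i.i.d. assumptions, smoothing lowers order variance at the cost of higher inventory variance; the abstract's point is that for AR(2) and ARMA(1,1) demand the proportional policy can lower both variances together.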
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:163-177 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:163-177
article
Relevant states and memory in Markov chain bootstrapping and simulation
Markov chain theory is proving to be a powerful approach to bootstrap and simulate highly nonlinear time series. In this work, we provide a method to estimate the memory of a Markov chain (i.e. its order) and to identify its relevant states. In particular, the choice of memory lags and the aggregation of irrelevant states are obtained by looking for regularities in the transition probabilities. Our approach is based on an optimization model. More specifically, we consider two competing objectives that a researcher will in general pursue when dealing with bootstrapping and simulation: preserving the “structural” similarity between the original and the resampled series, and assuring a controlled diversification of the latter. A discussion based on information theory is developed to define the desirable properties for such optimal criteria. Two numerical tests are developed to verify the effectiveness of the proposed method.
Bootstrapping; Information theory; Markov chains; Optimization; Simulation;
http://www.sciencedirect.com/science/article/pii/S037722171630426X
Cerqueti, Roy
Falbo, Paolo
Pelizzari, Cristian
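The basic machinery this paper builds on (estimating a transition matrix from an observed series, then resampling from the estimated chain) can be sketched for a first-order, two-state chain. The memory-lag selection and state aggregation that form the paper's actual contribution are omitted, and the observed series below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
series = rng.integers(0, 2, 2000)  # toy observed series with two states

# Estimate the first-order transition matrix by counting transitions.
counts = np.zeros((2, 2))
for a, b in zip(series[:-1], series[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Bootstrap a new series of the same length from the estimated chain.
boot = np.empty_like(series)
boot[0] = series[0]
for t in range(1, len(series)):
    boot[t] = rng.choice(2, p=P[boot[t - 1]])
```

The resampled series preserves the estimated transition structure (the paper's "structural similarity") while differing path-by-path from the original (its "controlled diversification").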
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:308-316 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:308-316
article
Measurement of interest rates using a convex optimization model
Measurement of a single interest rate curve is an important and well-studied inverse problem. To select plausible interest rate curves from the infinite set of possible interest rate curves, forward rates should be used in the regularization. By discretizing the interest rate curve it is shown that the inverse problem can be reformulated as a convex optimization model that can be efficiently solved using existing solvers. The convex optimization model can include bills, bonds, certificates of deposits, forward rate agreements and interest rate swaps using both equality constraints and inequality constraints that stem from bid/ask prices. The importance of an appropriate regularization and allowing for deviations from market prices to obtain stable forward rate curves is illustrated using market data.
Finance; OR in banking; Forward rate; Term structure; Non-linear optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716303903
Blomvall, Jörgen
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:187-195 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:187-195
article
Multi-server tandem queue with Markovian arrival process, phase-type service times, and finite buffers
We consider multi-server tandem queues where both stations have a finite buffer and all service times are phase-type distributed. Arriving customers enter the first queueing station if buffer space is available or are lost otherwise. After completing service in the first station, customers proceed to the second station if buffer space is available; otherwise a server at the first station is blocked until buffer space becomes available at the second station. We provide an exact computational analysis of various steady-state performance measures, such as loss and blocking probabilities and expectations and higher moments of the numbers of customers in the queues and in the whole system, by modeling the tandem queue as a level-dependent quasi-birth-and-death process and applying suitable matrix-analytic methods. Numerical results are presented for selected representative examples.
Queueing; Multi-server tandem queue; Markovian arrival process; Phase-type service time distributions; Matrix-analytic method;
http://www.sciencedirect.com/science/article/pii/S0377221716305720
Baumann, Hendrik
Sandmann, Werner
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:139-150 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:139-150
article
The impact of customer returns in a supply chain with a common retailer
Efficient distribution of the product in a supply chain is a critical issue in supply chain management. In the paper, we study a manufacturer Stackelberg supply chain in which a retailer can sell either or both of two brands, a well-known brand and a new brand, in a market supplied by two manufacturers. The two brands are differentiated by customer satisfaction rate. The supply chain involves both vertical competition between the retailer and manufacturers, and horizontal competition between the two manufacturers. We identify the conditions under which the retailer should choose one or both of the two manufacturers, and we show that in certain circumstances, the retailer will prefer to work with both manufacturers, even though one brand of product may have no sales. We find that whether a money-back guarantee (MBG) returns policy should be offered for the supply chain depends only on whether or not the retailer can recover the value of the returned product efficiently, even when the retailer incurs a net cost by accepting returns. We also show that an MBG enhances the profit of the manufacturer with low satisfaction rate, resulting in an increase in both the wholesale price and demand, but it has an opposite impact on the manufacturer with high satisfaction rate. In addition, an MBG enhances the retailer's profit and expands the overall market. Numerical examples are included to illustrate the major results.
Customer returns policy; Game theory; Pricing; Money-back guarantee; Supply chain management;
http://www.sciencedirect.com/science/article/pii/S0377221716304313
Yang, Hui
Chen, Jing
Chen, Xu
Chen, Bintong
oai:RePEc:eee:ejores:v:256:y:2017:i:1:p:205-214 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:256:y:2017:i:1:p:205-214
article
Chance-constrained optimization for pension fund portfolios in the presence of default risk
In this paper, we consider the portfolio optimization problem for a pension fund consisting of various government and corporate bonds. The aim of the problem is to maximize the fund’s cash position at the end of the time horizon, while allowing for the possibility of bond defaults. We model this problem as a stochastic discrete-time optimal control problem with a chance constraint that ensures all future outgoing commitments can be met with sufficiently high probability. We then introduce a corresponding deterministic formulation that is a conservative approximation of the original stochastic optimal control problem. This approximate problem can be solved using gradient-based optimization techniques. We conclude the paper with a simulation study.
Pension fund; Portfolio optimization; Optimal control; Gradient-based optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716304398
Sun, Yufei
Aw, Grace
Loxton, Ryan
Teo, Kok Lay
oai:RePEc:eee:ejores:v:231:y:2013:i:3:p:720-733 2016-09-15 RePEc:eee:ejores
RePEc:eee:ejores:v:231:y:2013:i:3:p:720-733
article
Rethinking Soft OR interventions: Models as boundary objects
In this paper I draw on research on the role of objects in problem solving collaboration to make a case for the conceptualisation of models as potential boundary objects. Such conceptualisation highlights the possibility that the models used in Soft OR interventions perform three roles with specific effects: transfer to develop a shared language, translation to develop shared meanings, and transformation to develop common interests. If these roles are carried out effectively, models enable those involved to traverse the syntactic, semantic and pragmatic boundaries encountered when tackling a problem situation of mutual concern, and help create new knowledge that has consequences for action. I illustrate these roles and associated effects via two empirical case vignettes drawn from an ongoing action research programme studying the impact of Soft OR interventions. Building on the insights generated by the case vignettes, I develop an analytical framework that articulates the dynamics of knowledge creation within Soft OR interventions. The framework can shed new light on a core aspect of Soft OR practice, especially with regards to the impact of models on the possibilities for action they can afford to those involved. I conclude with a discussion of the prescriptive value of the framework for research into the evaluation of Soft OR interventions, and its implications for the conduct of Soft OR practice.
Soft OR; Models; Intervention; Boundary objects; Knowledge creation; Group decision and negotiation;
http://www.sciencedirect.com/science/article/pii/S0377221713005407
Alberto Franco, L.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:313-324 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:313-324
article
Economic implications of poor access to antenatal care in rural and remote Western Australian Aboriginal communities: An individual sampling model of pregnancy
Australian Aboriginal women attend antenatal care less frequently and experience poorer pregnancy outcomes than non-Aboriginal women. Improving access to antenatal care is recognised as a means to improve pregnancy outcomes for mother and baby.
Individual sampling model; Decision analytic model; Antenatal care; Pregnancy care; Neonatal outcomes;
http://www.sciencedirect.com/science/article/pii/S0377221712008065
Cannon, Jeffrey W.
Mueller, Ute A.
Hornbuckle, Janet
Larson, Ann
Simmer, Karen
Newnham, John P.
Doherty, Dorota A.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:55-61 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:55-61
article
Lot sizing with carbon emission constraints
This paper introduces new environmental constraints, namely carbon emission constraints, in multi-sourcing lot-sizing problems. These constraints aim at limiting the carbon emission per unit of product supplied with different modes. A mode corresponds to the combination of a production facility and a transportation mode and is characterized by its economic costs and its unit carbon emission. Four types of constraints are proposed and analyzed in the single-item uncapacitated lot-sizing problem. The periodic case is shown to be polynomially solvable, while the cumulative, global and rolling cases are NP-hard. Perspectives for extending this work are discussed.
Lot sizing; Carbon emission; Dynamic programming; Complexity;
http://www.sciencedirect.com/science/article/pii/S037722171200896X
Absi, Nabil
Dauzère-Pérès, Stéphane
Kedad-Sidhoum, Safia
Penz, Bernard
Rapine, Christophe
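The single-item uncapacitated problem underlying this paper can be solved by the classical Wagner-Whitin dynamic program. The sketch below omits the carbon emission constraints that are the paper's actual subject, and the instance data are hypothetical.

```python
def lot_size(demand, setup, hold):
    """Single-item uncapacitated lot sizing (Wagner-Whitin O(T^2) DP).
    f[t] = minimum cost of satisfying demand in periods 1..t."""
    T = len(demand)
    f = [0.0] + [float('inf')] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # last setup in period j covers j..t
            holding = sum(hold * (k - j) * demand[k - 1]
                          for k in range(j, t + 1))
            f[t] = min(f[t], f[j - 1] + setup + holding)
    return f[T]

# Hypothetical instance: producing in periods 1 and 3 is optimal.
print(lot_size([10, 20, 30], setup=40, hold=1))  # 100.0
```

Here two setups (period 1 covering periods 1-2, period 3 covering period 3) cost 40 + 20 + 40 = 100, beating both a single setup (cost 120) and three setups (cost 120). A carbon constraint would restrict which of these plans remains feasible.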
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:341-353 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:341-353
article
Benders Decomposition for multi-stage stochastic mixed complementarity problems – Applied to a global natural gas market model
This paper presents and implements a Benders Decomposition type of algorithm for large-scale, stochastic multi-period mixed complementarity problems. The algorithm is applied to various multi-stage natural gas market models accounting for market power exertion by traders. Due to the non-optimization nature of the natural gas market problem, a straightforward implementation of the traditional Benders Decomposition is not possible. The master and subproblems can be derived from the underlying optimization problems and transformed into complementarity problems. However, to complete the master problems, optimality cuts are added using the variational inequality-based method developed in Gabriel and Fuller (2010). In this manner, an alternative derivation of Benders Decomposition for Stochastic MCP is presented, thereby making this approach accessible to a broader audience. The algorithm can successfully solve problems with up to 256 scenarios and more than 600 thousand variables, and problems with over 117 thousand variables with more than two thousand first-stage capacity expansion variables. The algorithm is efficient for solving two-stage problems. The computational time reduction for other stochastic problems is considerable and would be even larger if a parallel implementation of the algorithm were used. The paper concludes with a discussion of infrastructure expansion results, illustrating the impact of hedging on investment timing and optimal capacity sizes.
Stochastic programming; OR in energy; Benders Decomposition; Natural gas market models; Stochastic mixed complementarity problem; Optimality cut;
http://www.sciencedirect.com/science/article/pii/S0377221712008727
Egging, Ruud
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:332-340 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:332-340
article
A multi-objective mathematical model for the industrial hazardous waste location-routing problem
Industrial hazardous waste management involves the collection, transportation, treatment, recycling and disposal of industrial hazardous materials that pose risk to their surroundings. In this paper, a new multi-objective location-routing model is developed, and implemented in the Marmara region of Turkey. The aim of the model is to help decision makers decide on locations of treatment centers utilizing different technologies, routing different types of industrial hazardous wastes to compatible treatment centers, locations of recycling centers and routing hazardous waste and waste residues to those centers, and locations of disposal centers and routing waste residues there. In the mathematical model, three criteria are considered: minimizing total cost, which includes total transportation cost of hazardous materials and waste residues and fixed cost of establishing treatment, disposal and recycling centers; minimizing total transportation risk related to the population exposure along transportation routes of hazardous materials and waste residues; and minimizing total risk for the population around treatment and disposal centers, also called site risk. A lexicographic weighted Tchebycheff formulation is developed and computed with CPLEX software to find representative efficient solutions to the problem. Data related to the Marmara region is obtained by utilizing Arcview 9.3 GIS software and Marmara region geographical database.
Routing; Multiple objective programming; Location-routing problem; Pareto optimization; Industrial hazardous waste management; Multi-objective model;
http://www.sciencedirect.com/science/article/pii/S0377221712008624
Samanlioglu, Funda
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:228-236 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:228-236
article
Double marginalization and coordination in the supply chain with uncertain supply
This paper explores a generalized supply chain model subject to supply uncertainty after the supplier chooses the production input level. Decentralized systems under wholesale price contracts are investigated, with double marginalization effects shown to lead to supply insufficiencies, in the cases of both deterministic and random demands. We then design coordination contracts for each case and find that an accept-all type of contract is required to coordinate the supply chain with random demand, which is a much more complicated situation than that with deterministic demand. Examples are provided to illustrate the application of our findings to specific industrial domains. Moreover, our coordination mechanisms are shown to be applicable to the multi-supplier situation, which fills the research gap on assembly system coordination with random yield and random demand under a voluntary compliance regime.
Supply chain management; Uncertain supply; Contract design; Double marginalization;
http://www.sciencedirect.com/science/article/pii/S0377221712008120
Li, Xiang
Li, Yongjian
Cai, Xiaoqiang
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:203-210 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:203-210
article
Master corner polyhedron: Vertices
We focus on the vertices of the master corner polyhedron (MCP), a fundamental object in the theory of integer linear programming. We introduce two combinatorial operations that transform vertices to their neighbors. This implies that each MCP can be defined by the initial vertices regarding these operations; we call them support vertices. We prove that the class of support vertices of all MCPs over a group is invariant under automorphisms of this group and describe MCP vertex bases. Among other results, we characterize its irreducible points, establish relations between a vertex and the nontrivial facets that pass through it, and prove that this polyhedron is of diameter 2.
Combinatorial optimization; Integer programming; Master corner polyhedron; Vertices;
http://www.sciencedirect.com/science/article/pii/S0377221712008351
Shlyk, Vladimir A.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:81-87 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:81-87
article
An inventory control model with stochastic review interval and special sale offer
Periodic review inventory models are widely used in practice, especially for inventory systems in which many different items are purchased from the same supplier. However, most periodic review models assume a fixed length for the review periods. In practice, the review periods may be of random (stochastic) length. This paper presents an inventory control model with random review intervals and a special sale offer from the supplier. The replenishment interval is assumed to follow one of two distributions, namely the exponential and the uniform distribution. Shortages are allowed in the form of partial backordering. For this model, the convexity condition is discussed and closed-form solutions are proposed.
Logistics; Special sale; Stochastic review interval; Partial backlogging;
http://www.sciencedirect.com/science/article/pii/S0377221712009022
Karimi-Nasab, M.
Konstantaras, I.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:522-535 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:522-535
article
Weighted Multivariate Mean Square Error for processes optimization: A case study on flux-cored arc welding for stainless steel claddings
The Multivariate Mean Square Error (MMSE) is a recently developed mathematical programming technique for optimizing multiple correlated characteristics. The MMSE approach has obtained noteworthy results by avoiding the inappropriate optimal points that can occur when a method fails to take a correlation structure into account. Where the MMSE approach is deficient, however, is in cases where the multiple correlated characteristics need to be optimized with varying degrees of importance. The MMSE approach, in treating all responses as having the same importance, is unable to attribute the desired weights. This paper thus introduces a strategy that weights the responses in the MMSE approach. The method, called the Weighted Multivariate Mean Square Error (WMMSE), utilizes a weighting procedure that integrates Principal Component Analysis (PCA) and Response Surface Methodology (RSM). In doing so, WMMSE obtains uncorrelated weighted objective functions from the original responses. After being mathematically programmed, these functions are optimized by employing optimization algorithms. We applied WMMSE to optimize a stainless steel cladding application executed via the flux-cored arc welding (FCAW) process. Four input parameters and eight response variables were considered. Stainless steel cladding, which carries potential benefits for a variety of industries, deposits materials with anti-corrosive properties over the surfaces of low-cost base materials. Optimal results were confirmed, ensuring the deposition of claddings with defect-free beads exhibiting the desired geometry and demonstrating good productivity indexes.
Multiple objective programming; Weighted Multivariate Mean Square Error (WMMSE); Stainless steel claddings; Flux-cored arc welding (FCAW); Response Surface Methodology (RSM);
http://www.sciencedirect.com/science/article/pii/S0377221712008946
Gomes, J.H.F.
Paiva, A.P.
Costa, S.C.
Balestrassi, P.P.
Paiva, E.J.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:268-276 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:268-276
article
Optimal benchmarking for active portfolio managers
Within an agency theoretic framework adapted to the portfolio delegation issue, we show how to construct optimal benchmarks. In accordance with US regulations, the benchmark-adjusted compensation scheme is taken to be symmetric. The investor’s control consists in forcing the manager to adopt the appropriate benchmark so that his first-best optimum is attained. Solving simultaneously the manager’s and the investor’s dynamic optimization programs in a fairly general framework, we characterize the optimal benchmark. We then provide completely explicit solutions when the investor’s and the manager’s utility functions exhibit different CRRA parameters. We find that, even under optimal benchmarking, it is never optimal for the manager, and therefore for the investor, to follow exactly the benchmark, except in a very restrictive case. We finally assess by simulation the practical importance, in particular in terms of the investor’s welfare, of selecting a sub-optimal benchmark.
Benchmarking; Incentive fees; Mutual funds; Continuous time trading; Martingale approach; Principal-agent model;
http://www.sciencedirect.com/science/article/pii/S0377221712008089
Lioui, Abraham
Poncet, Patrice
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:560-576 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:560-576
article
On selecting portfolio of international mutual funds using goal programming with extended factors
This paper proposes and investigates the use of several factors for portfolio selection of international mutual funds. Three of the selected factors are specific to mutual funds, three further factors are macroeconomic, and one factor represents regional and country preferences. Each factor is treated as an objective in the multiple objective approach of goal programming. Three variants of goal programming are utilized.
Goal programming; Portfolio selection; Extended factors; Mutual funds;
http://www.sciencedirect.com/science/article/pii/S0377221712008193
Tamiz, Mehrdad
Azmi, Rania A.
Jones, Dylan F.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:646-657 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:646-657
article
Efficiency of purchasing and selling agents in markets with quality uncertainty: The case of illicit drug transactions
Since Akerlof’s theory of lemons, economists have viewed quality uncertainty as an informational advantage for sellers. Drawing on frontier techniques, we propose in this paper a simple method for measuring inefficiency of both sellers and buyers in markets for goods with different levels of quality. We apply a non-parametric robust double-frontier framework to the case of illicit substance markets, which suffer from imperfect information about drug quality for purchasers and to a lesser extent for sellers. We use unique data on cannabis and cocaine transactions collected in France that include information about price, quantity exchanged and purity. We find that transactional inefficiency does not really benefit either dealers or purchasers. Furthermore, information influences the performance of agents during market transactions.
Data envelopment analysis; Economics; Pricing; Purchasing; Retailing; Uncertainty modelling;
http://www.sciencedirect.com/science/article/pii/S0377221712009150
Ben Lakhdar, Christian
Leleu, Hervé
Vaillant, Nicolas Gérard
Wolff, François-Charles
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:122-132 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:122-132
article
Patterns in stock market movements tested as random number generators
This paper shows that tests of Random Number Generators (RNGs) may be used to test the Efficient Market Hypothesis (EMH). It uses the Overlapping Serial Test (OST), a standard test in RNG research, to detect anomalous patterns in the distribution of sequences of upward and downward stock market movements. Our results show that most stock markets exhibit idiosyncratic recurrent patterns, contrary to the efficient market hypothesis, and that the OST detects a different kind of non-randomness from that found by standard econometric long- and short-memory tests. Exposure of these anomalies should contribute to making markets more efficient.
Stock market time series; Financial data mining; Forecasting; Finance; Overlapping serial test;
http://www.sciencedirect.com/science/article/pii/S0377221712009101
Doyle, John R.
Chen, Catherine H.
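The serial-test idea described in the abstract above can be sketched roughly as follows: code up/down movements as bits, count overlapping m-tuples, and compare the chi-square statistics for m-tuples and (m−1)-tuples. This is a minimal illustrative sketch of the generic overlapping serial test, not the authors’ implementation; the function names and the plain chi-square form are assumptions.

```python
from collections import Counter
from itertools import product

def chi_square_uniform(moves, m):
    """Chi-square of observed overlapping m-tuple counts in a 0/1
    sequence against the uniform expectation n / 2**m."""
    n = len(moves) - m + 1
    counts = Counter(tuple(moves[i:i + m]) for i in range(n))
    expected = n / 2 ** m
    return sum((counts.get(t, 0) - expected) ** 2 / expected
               for t in product((0, 1), repeat=m))

def overlapping_serial_statistic(moves, m):
    """Serial-test statistic: difference of the chi-squares for m-tuples
    and (m-1)-tuples; large values indicate recurrent patterns."""
    return chi_square_uniform(moves, m) - chi_square_uniform(moves, m - 1)
```

A perfectly alternating up/down series, for example, yields a large statistic, flagging the obvious pattern.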
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:88-1002013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:88-100
article
The impact of process deterioration on production and maintenance policies
This paper examines a single-stage production system that deteriorates with production actions, and improves with maintenance. The condition of the process can be in any of several discrete states, and transitions from state to state follow a semi-Markov process. The firm can produce multiple products, which differ by profit earned, expected processing time, and impact on equipment deterioration. The firm can also perform different maintenance actions, which differ by cost incurred, expected down time, and impact on the process condition. The firm needs to determine the optimal production and maintenance choices in each state in a way that maximizes the long-run expected average reward per unit time.
Manufacturing; Maintenance; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221712009046
Kazaz, Burak
Sloan, Thomas W.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:1-112013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:1-11
article
Existence and solution methods for equilibria
Equilibrium problems provide a mathematical framework which includes optimization, variational inequalities, fixed-point and saddle point problems, and noncooperative games as particular cases. This general format has received increasing interest in the last decade, mainly because many theoretical and algorithmic results developed for one of these models can often be extended to the others through the unifying language provided by this common format. This survey paper aims at covering the main results concerning the existence of equilibria and the solution methods for finding them.
Equilibrium problem; Monotonicity; Coercivity; Auxiliary principle; Regularization;
http://www.sciencedirect.com/science/article/pii/S0377221712008892
Bigi, Giancarlo
Castellani, Marco
Pappalardo, Massimo
Passacantando, Mauro
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:12-212013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:12-21
article
Constraint qualifications in linear vector semi-infinite optimization
Linear vector semi-infinite optimization deals with the simultaneous minimization of finitely many linear scalar functions subject to infinitely many linear constraints. This paper provides characterizations of the weakly efficient, efficient, properly efficient and strongly efficient points in terms of cones involving the data and Karush–Kuhn–Tucker conditions. The latter characterizations rely on different local and global constraint qualifications. The global constraint qualifications are illustrated on a collection of selected applications.
Multiple objective programming; Linear vector semi-infinite optimization; Constraint qualifications; Cone conditions; KKT conditions;
http://www.sciencedirect.com/science/article/pii/S0377221712006698
Goberna, M.A.
Guerra-Vazquez, F.
Todorov, M.I.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:286-2922013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:286-292
article
Revisiting a game theoretic framework for the robust railway network design against intentional attacks
This paper discusses and extends some competitive aspects of the games proposed in an earlier work, where a robust railway network design problem was formulated as a non-cooperative zero-sum game in normal form between a designer/operator and an attacker. Given the importance of the order of play and of the information available to the players at the moment of their decisions, we extend those previous models by formulating this situation as a dynamic game. In addition, we propose a new mathematical programming model that optimizes both the network design and the allocation of security resources over the network. The paper also proposes a model to distribute security resources over an already existing railway network in order to minimize the negative effects of an intentional attack. For the sake of readability, all concepts are introduced with the help of an illustrative example.
Robust network design; Game theory; Protection resource allocation; Equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221712008399
Perea, Federico
Puerto, Justo
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:246-2572013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:246-257
article
The golden number and Fibonacci sequences in the design of voting structures
Some distinguished types of voters, such as vetoers, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game, independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind we enumerate these special classes of games and find that their number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor, which is the golden number or its square. From a more general point of view, our study is related to the design of voting structures with a predetermined importance ranking.
91A12; 91A80; 91B12; Game theory; Voting systems; Complete simple games; Enumeration and classification; Operational research structures;
http://www.sciencedirect.com/science/article/pii/S0377221712007631
Freixas, Josep
Kurz, Sascha
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:615-6252013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:615-625
article
The extended QUALIFLEX method for multiple criteria decision analysis based on interval type-2 fuzzy sets and applications to medical decision making
QUALIFLEX, a generalization of Jacquet-Lagreze’s permutation method, is a useful outranking method in decision analysis because of its flexibility with respect to cardinal and ordinal information. This paper develops an extended QUALIFLEX method for handling multiple criteria decision-making problems in the context of interval type-2 fuzzy sets. Interval type-2 fuzzy sets contain membership values that are crisp intervals, which are the most widely used of the higher order fuzzy sets because of their relative simplicity. Using the linguistic rating system converted into interval type-2 trapezoidal fuzzy numbers, the extended QUALIFLEX method investigates all possible permutations of the alternatives with respect to the level of concordance of the complete preference order. Based on a signed distance-based approach, this paper proposes the concordance/discordance index, the weighted concordance/discordance index, and the comprehensive concordance/discordance index as evaluative criteria of the chosen hypothesis for ranking the alternatives. The feasibility and applicability of the proposed methods are illustrated by a medical decision-making problem concerning acute inflammatory demyelinating disease, and a comparative analysis with another outranking approach is conducted to validate the effectiveness of the proposed methodology.
Decision analysis; QUALIFLEX; Outranking method; Multiple criteria decision-making; Interval type-2 fuzzy set;
http://www.sciencedirect.com/science/article/pii/S0377221712008909
Chen, Ting-Yu
Chang, Chien-Hung
Rachel Lu, Jui-fen
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:626-6352013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:626-635
article
Diversification-consistent data envelopment analysis with general deviation measures
We propose new efficiency tests which are based on traditional DEA models and take into account portfolio diversification. The goal is to identify the investment opportunities that perform well without specifying our attitude to risk. We use general deviation measures as the inputs and return measures as the outputs. We discuss the choice of the set of investment opportunities including portfolios with limited number of assets. We compare the optimal values (efficiency scores) of all proposed tests leading to the relations between the sets of efficient opportunities. Strength of the tests is then discussed. We test the efficiency of 25 world financial indices using new DEA models with CVaR deviation measures.
Efficiency tests; Data envelopment analysis; General deviation measures; Diversification-consistency;
http://www.sciencedirect.com/science/article/pii/S0377221712008235
Branda, Martin
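For intuition, a CVaR deviation of the kind used as an input in the efficiency tests above can be estimated on equally probable return scenarios as the mean return minus the average of the worst (1−α) tail. This is a simple illustrative estimator under the assumption of equal scenario probabilities, not the paper’s formulation; the function name and tail-rounding rule are assumptions.

```python
def cvar_deviation(returns, alpha=0.95):
    """CVaR deviation at level alpha for equally probable scenarios:
    mean return minus the average of the worst (1 - alpha) fraction
    of outcomes (at least one scenario is always included)."""
    n = len(returns)
    k = max(1, round((1 - alpha) * n))
    mean = sum(returns) / n
    worst = sorted(returns)[:k]
    return mean - sum(worst) / k
```

A riskless position (all scenarios equal) has zero deviation, as any deviation measure requires.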
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:221-2272013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:221-227
article
Dual-channel closed-loop supply chain with government consumption-subsidy
Government plays an important role in the formation and operation of closed-loop supply chains. This paper focuses on how a consumption-subsidy influences a dual-channel closed-loop supply chain. After introducing the government consumption-subsidy program and the dual-channel closed-loop supply chain, the paper analyzes the channel members’ decisions before and after the government-funded program is implemented. Finally, the influence of the consumption-subsidy is considered from the perspectives of the consumers, the scale of the closed-loop supply chain and the enterprises, which provides an important basis for our propositions. The key propositions of the paper are as follows: all the consumers that purchase the new products benefit from the government consumption-subsidy in varying degrees; the consumption-subsidy is conducive to the expansion of the closed-loop supply chain; both the manufacturer and the retailer benefit from the consumption-subsidy, while whether the e-tailer benefits is uncertain.
Supply chain management; Consumption-subsidy; E-commerce; China;
http://www.sciencedirect.com/science/article/pii/S0377221712007989
Ma, Wei-min
Zhao, Zhang
Ke, Hua
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:636-6452013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:636-645
article
Computing tournament solutions using relation algebra and RelView
We describe a simple computing technique for the tournament choice problem. It rests upon relational modeling and uses the BDD-based computer system RelView for the evaluation of the relation-algebraic expressions that specify the solutions and for the visualization of the computed results. The Copeland set can immediately be identified using RelView’s labeling feature. Relation-algebraic specifications of the Condorcet non-losers, the Schwartz set, the top cycle, the uncovered set, the minimal covering set, the Banks set, and the tournament equilibrium set are delivered. We present an example of a tournament on a small set of alternatives, for which the above choice sets are computed and visualized via RelView. The technique described in this paper is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other problems of social choice and game theory.
Tournament; Relational algebra; RelView; Copeland set; Condorcet non-losers; Schwartz set;
http://www.sciencedirect.com/science/article/pii/S0377221712008739
Berghammer, Rudolf
Rusinowska, Agnieszka
de Swart, Harrie
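The Copeland set mentioned above also has a simple direct definition that can be computed without relation algebra: the alternatives with the maximum number of wins in the tournament. A minimal Python sketch (the boolean `beats` matrix encoding is an assumption; the paper itself works with RelView’s relational terms):

```python
def copeland_set(beats):
    """Copeland winners of a tournament: beats[i][j] is truthy when
    alternative i beats alternative j; returns the set of alternatives
    with the maximum number of wins."""
    n = len(beats)
    scores = [sum(1 for j in range(n) if j != i and beats[i][j])
              for i in range(n)]
    best = max(scores)
    return {i for i in range(n) if scores[i] == best}
```

In a 3-cycle every alternative wins once, so the Copeland set is the whole set of alternatives.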
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:190-1982013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:190-198
article
Multistage optimization of option portfolio using higher order coherent risk measures
Choosing a suitable risk measure to optimize an option portfolio’s performance represents a significant challenge. This paper illustrates the advantages of higher order coherent risk measures for evaluating the evolution of option risk. It discusses the detailed implementation of the resulting dynamic risk optimization problem using stochastic programming. We propose an algorithmic procedure to optimize an option portfolio based on the minimization of conditional higher order coherent risk measures. Illustrative examples demonstrate some advantages in the performance of the portfolio when higher order coherent risk measures are used in the risk optimization criterion.
Coherent risk measures; Duality; Average value-at-risk; Monte Carlo simulation; Kusuoka measure; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S0377221712009447
Matmoura, Yassine
Penev, Spiridon
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:481-4902013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:481-490
article
Optimal selection of process mean for a stochastic inventory model
It is very common to assume deterministic demand in the literature on integrated targeting–inventory models. However, if variability in demand is high, there may be significant disruptions from using the deterministic solution in a probabilistic environment. Thus, the model would not be applicable to real world situations and adjustments must be made. The purpose of this paper is to develop a model for the integrated targeting–inventory problem when the demand is a random variable. In particular, the proposed model jointly determines the optimal process mean, lot size and reorder point in a (Q,R) continuous review model. In order to investigate the effect of uncertainty in demand, the proposed model is compared with three baseline cases. The first considers a hierarchical model where the producer determines the process mean and lot-sizing decisions separately. This hierarchical model is used to show the effect of integrating process targeting with production/inventory decisions. Another baseline case is the deterministic demand case, which is used to show the effect of variation in demand on the optimal solution. The last baseline case is for the situation where the variation in the filling amount is negligible. This case demonstrates the sensitivity of the total cost with respect to the variation in the process output. Also, a procedure is developed to determine the optimal solution for the proposed models. Empirical results show that ignoring randomness in the demand pattern leads to underestimating the expected total cost. Moreover, the results indicate that the performance of a process can be improved significantly by reducing its variation.
Quality control; Targeting problem; Production; Demand uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221712008703
Darwish, M.A.
Abdulmalek, F.
Alkhedher, M.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:44-542013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:44-54
article
Bose–Einstein condensation in satisfiability problems
This paper is concerned with the complex behavior arising in satisfiability problems. We present a new statistical physics-based characterization of the satisfiability problem. Specifically, we design an algorithm that is able to produce graphs starting from a k-SAT instance, in order to analyze them and show whether a Bose–Einstein condensation occurs. We observe that, analogously to complex networks, the networks of k-SAT instances follow Bose statistics and can undergo Bose–Einstein condensation. In particular, k-SAT instances move from a fit-get-rich network to a winner-takes-all network as the ratio of clauses to variables decreases, and the phase transition of k-SAT approximates the critical temperature for the Bose–Einstein condensation. Finally, we employ the fitness-based classification to enhance SAT solvers (e.g., ChainSAT) and obtain the consistently highest performing SAT solver for CNF formulas, and therefore a new class of efficient hardware and software verification tools.
k-SAT; Complex networks; Bose–Einstein condensation; Phase transition; Performance;
http://www.sciencedirect.com/science/article/pii/S0377221712008910
Angione, Claudio
Occhipinti, Annalisa
Stracquadanio, Giovanni
Nicosia, Giuseppe
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:166-1732013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:166-173
article
The U.S. Navy explores detailing cost reduction via Data Envelopment Analysis
In this paper we show how a variation of Data Envelopment Analysis, the Generalized Symmetric Weight Assignment Technique, is used to assign sailors to jobs for the U.S. Navy. This method differs from others as the assignment is a multi-objective problem where the importance of each objective, called a metric, is determined by the decision-maker and promoted within the assignment problem. We explore how the method performs as the importance of particular metrics increases. Finally, we show that the proposed method leads to substantial cost savings for the U.S. Navy without degrading the resulting assignments’ performance on other metrics.
Data Envelopment Analysis; OR in military; OR in manpower planning;
http://www.sciencedirect.com/science/article/pii/S0377221712009113
Sutton, Warren
Dimitrov, Stanko
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:461-4702013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:461-470
article
Stochastic multiobjective problems with complementarity constraints and applications in healthcare management
We consider a class of stochastic multiobjective problems with complementarity constraints (SMOPCCs) in this paper. We derive first-order optimality conditions, including Clarke-, Mordukhovich- and strong-type stationarity in the Pareto sense, for the SMOPCC. Since these first-order optimality systems involve some unknown index sets, we reformulate them as nonlinear equations with simple constraints. Then, we introduce an asymptotic method to solve these constrained equations. Furthermore, we apply this methodology to a patient allocation problem in healthcare management.
Stochastic multiobjective problem with complementarity constraints; Pareto stationarity; Constrained equation; Asymptotic method; Healthcare;
http://www.sciencedirect.com/science/article/pii/S037722171200820X
Lin, Gui-Hua
Zhang, Dali
Liang, Yan-Chao
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:551-5592013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:551-559
article
The implementor/adversary algorithm for the cyclic and robust scheduling problem in health-care
A general problem in health-care consists in allocating some scarce medical resource, such as operating rooms or medical staff, to medical specialties in order to keep the queue of patients as short as possible. A major difficulty stems from the fact that such an allocation must be established several months in advance, and the exact number of patients for each specialty is an uncertain parameter. Another problem arises for cyclic schedules, where the allocation is defined over a short period, e.g. a week, and then repeated during the time horizon. However, the demand typically varies from week to week: even if we know in advance the exact demand for each week, the weekly schedule cannot be adapted accordingly. We model both the uncertain and the cyclic allocation problem as adjustable robust scheduling problems. We develop a row and column generation algorithm to solve this problem and show that it corresponds to the implementor/adversary algorithm for robust optimization recently introduced by Bienstock for portfolio selection. We apply our general model to compute master surgery schedules for a real-life instance from a large hospital in Oslo.
Health-care optimization; Master surgery scheduling; Robust optimization; Mixed-integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221712007862
Holte, Matias
Mannino, Carlo
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:199-2152013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:199-215
article
Robust supply chain network design with service level against disruptions and demand uncertainties: A real-life case
We have developed a stochastic mathematical formulation for designing a network of multi-product supply chains comprising several capacitated production facilities, distribution centres and retailers in markets under uncertainty. This model considers demand-side and supply-side uncertainties simultaneously, which makes it more realistic in comparison to models in the existing literature. In this model, we consider a discrete set as potential locations of distribution centres and retailing outlets and investigate the impact of strategic facility location decisions on the operational inventory and shipment decisions of the supply chain. We use a path-based formulation that helps us to consider supply-side uncertainties that are possible disruptions in manufacturers, distribution centres and their connecting links. The resultant model, which incorporates the cut-set concept in reliability theory and also the robust optimisation concept, is a mixed integer nonlinear problem. To solve the model to attain global optimality, we have created a transformation based on the piecewise linearisation method. Finally, we illustrate the model outputs and discuss the results through several numerical examples, including a real-life case study from the agri-food industry.
Supply chain network; Risk analysis; Disruption; Robust optimisation; Agri-food;
http://www.sciencedirect.com/science/article/pii/S0377221712009484
Baghalian, Atefeh
Rezapour, Shabnam
Farahani, Reza Zanjirani
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:152-1652013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:152-165
article
Integrated machine scheduling and vehicle routing with time windows
This paper integrates production and outbound distribution scheduling in order to minimize total tardiness. The overall problem consists of two subproblems. The first addresses scheduling a set of jobs on parallel machines with machine-dependent ready times. The second focusses on the delivery of completed jobs with a fleet of vehicles which may differ in their loading capacities and ready times. Job-dependent processing times, delivery time windows, service times, and destinations are taken into account. A genetic algorithm approach is introduced to solve the integrated problem as a whole. Two main questions are examined. Are the results of integrating machine scheduling and vehicle routing significantly better than those of classic decomposition approaches which break down the overall problem, solve the two subproblems successively, and merge the subsolutions to form a solution to the overall problem? And if so, is it possible to capitalize on these potentials despite the complexity of the integrated problem? Both questions are tackled by means of a numerical study. The genetic algorithm outperforms the classic decomposition approaches in case of small-size instances and is able to generate relatively good solutions for instances with up to 50 jobs, 5 machines, and 10 vehicles.
Supply Chain Scheduling; Parallel machines; Vehicle routing; Time windows; Total tardiness; Genetic algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221712009010
Ullrich, Christian A.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:301-3122013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:301-312
article
Modelling activities at a neurological rehabilitation unit
A queuing model of a specialist neurological rehabilitation unit is studied. The application is to the Neurological Rehabilitation Centre at Rookwood Hospital (Cardiff, UK), the national rehabilitation unit for Wales. Due to high demand this 21-bed inpatient facility is nearly always at maximum occupancy, and with a significant bed-cost per day this makes it a prime candidate for mathematical modelling. Central to this study is the concept that treatment intensity has an effect on patient length of stay. The model is constructed in four stages. First, appropriate patient groups are determined based on a number of patient-related attributes. Second, a purpose-built scheduling program is used to deduce typical levels of treatment to patients of each group. These are then used to estimate the mean length of stay for each patient group. Finally, the queuing model is constructed. This consists of a number of disconnected homogeneous server queuing systems; one for each patient group. A Coxian phase-type distribution is fitted to the length of time from admission until discharge readiness and an exponential distribution models the remainder of time until discharge. Some hypothetical scenarios suggested by senior management are then considered and compared on the grounds of a number of performance measures and cost implications.
Queuing theory; Markov modelling; Scheduling; OR in health services;
http://www.sciencedirect.com/science/article/pii/S0377221712008028
Griffiths, J.D.
Williams, J.E.
Wood, R.M.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:277-2852013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:277-285
article
p-Hub approach for the optimal park-and-ride facility location problem
Park and Ride facilities (P&R) are car parks at which users can transfer to public transportation to reach their final destination. We propose a mixed linear programming formulation to determine the location of a fixed number of P&R facilities so that their usage is maximized. The facilities are modeled as hubs. Commuters can use one of the P&R facilities or choose to travel by car to their destinations, and their behavior follows a logit model. We apply a p-hub approach in which users incur a known generalized cost of using each P&R facility, which serves as input for the logit model. For small instances of the problem, we propose a novel linearization of the logit model, which allows transforming the binary nonlinear programming problem into a mixed linear programming formulation. A modification of the Heuristic Concentration Integer (HCI) procedure is applied to solve larger instances of the problem. Numerical experiments are performed, including a case in Queens, NY. Further research is proposed.
Location; Park and Ride; p-Hub; Logit model; Heuristic concentration integer;
http://www.sciencedirect.com/science/article/pii/S0377221712008223
Aros-Vera, Felipe
Marianov, Vladimir
Mitchell, John E.
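The logit choice behavior assumed in the model above maps each alternative’s generalized cost to a choice probability. A minimal sketch of multinomial logit shares (the scale parameter `theta` and the function name are assumptions, not the paper’s notation):

```python
import math

def logit_shares(costs, theta=1.0):
    """Multinomial logit choice probabilities over alternatives with
    the given generalized costs: lower cost -> higher probability."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]
```

It is this exponential nonlinearity that the paper’s novel linearization removes to obtain a mixed linear formulation.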
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:602-6142013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:602-614
article
A Markovian queueing model for ambulance offload delays
Ambulance offload delays are a growing concern for health care providers in many countries. Offload delays occur when ambulance paramedics arriving at a hospital Emergency Department (ED) cannot transfer patient care to staff in the ED immediately. This is typically caused by overcrowding in the ED. Using queueing theory, we model the interface between a regional Emergency Medical Services (EMS) provider and multiple EDs that serve both ambulance and walk-in patients. We introduce Markov chain models for the system and solve for the steady state probability distributions of queue lengths and waiting times using matrix-analytic methods. We develop several algorithms for computing performance measures for the system, particularly the offload delays for ambulance patients. Using these algorithms, we analyze several three-hospital systems and assess the impact of system resources on offload delays. In addition, simulation is used to validate model assumptions.
Queueing theory; Matrix-analytic method; Ambulance offload delay; Priority queues;
http://www.sciencedirect.com/science/article/pii/S037722171200882X
Almehdawe, Eman
Jewkes, Beth
He, Qi-Ming
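For a feel of the kind of steady-state performance measure involved, the classical Erlang-C delay probability for an M/M/c queue, a far simpler model than the matrix-analytic one in the paper, can be computed directly. This is an illustrative sketch only, not the authors’ model:

```python
from math import factorial

def erlang_c(c, a):
    """Probability that an arrival must wait in an M/M/c queue with
    offered load a = lambda/mu (requires a < c for stability)."""
    assert a < c, "queue must be stable (offered load below capacity)"
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom
```

Adding servers at fixed offered load lowers the delay probability, the same qualitative effect the paper quantifies for offload delays with added ED resources.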
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:76-802013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:76-80
article
Single-machine scheduling problems with actual time-dependent and job-dependent learning effect
In this study, we introduce an actual time-dependent and job-dependent learning effect into single-machine scheduling problems. We show that the makespan minimization problem and the weighted sum of completion times minimization problem are both NP-hard, and that the maximum lateness minimization problem is NP-hard in the strong sense. We also provide three special cases which can be solved by polynomial time algorithms.
Scheduling; Learning effect; Actual time-dependent; Job-dependent; NP-hard;
http://www.sciencedirect.com/science/article/pii/S0377221712009381
Jiang, Zhongyi
Chen, Fangfang
Kang, Huiyan
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:142-1512013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:142-151
article
An arc cover–path-cover formulation and strategic analysis of alternative-fuel station locations
In this study, we present a new formulation of the generalized flow-refueling location model that takes vehicle range and trips between origin–destination pairs into account. The new formulation, based on covering the arcs that comprise each path, is more computationally efficient than previous formulations or heuristics. Next, we use the new formulation to provide managerial insights for some key concerns of the industry, such as: whether infrastructure deployment should focus on locating clusters of facilities serving independent regions or connecting these regions by network of facilities; what is the impact of uncertainty in the origin–destination demand forecast; whether station locations will remain optimal as higher-range vehicles are introduced; and whether infrastructure developers should be willing to pay more for stations at higher-cost intersections. Experiments with real and random data sets are encouraging for the industry, as optimal locations tend to be robust under various conditions.
Flow refueling; Alternative-fuel vehicle; Electric vehicle; Fuel station location; Fueling infrastructure;
http://www.sciencedirect.com/science/article/pii/S0377221712008855
Capar, Ismail
Kuby, Michael
Leon, V. Jorge
Tsai, Yu-Jiun
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:62-752013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:62-75
article
Using matrix approximation for high-dimensional discrete optimization problems: Server consolidation based on cyclic time-series data
We consider the assignment of enterprise applications in virtual machines to physical servers, also known as server consolidation problem. Data center operators try to minimize the number of servers, but at the same time provide sufficient computing resources at each point in time. While historical workload data would allow for accurate workload forecasting and optimal allocation of enterprise applications to servers, the volume of data and the large number of resulting capacity constraints in a mathematical problem formulation renders this task impossible for any but small instances. We use singular value decomposition (SVD) to extract significant features from a large constraint matrix and provide a new geometric interpretation of these features, which allows for allocating large sets of applications efficiently to physical servers with this new formulation. While SVD is typically applied for purposes such as time series decomposition, noise filtering, or clustering, in this paper features are used to transform the original allocation problem into a low-dimensional integer program with only the extracted features in a much smaller constraint matrix. We evaluate the approach using workload data from a large data center and show that it leads to high solution quality, but at the same time allows for solving considerably larger problem instances than what would be possible without data reduction and model transform. The overall approach could also be applied to similar packing problems in service operations management.
Matrix approximation; Multi-dimensional packing; Server consolidation; Dimensionality reduction;
http://www.sciencedirect.com/science/article/pii/S0377221712009368
Setzer, Thomas
Bichler, Martin
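The SVD feature-extraction step described above amounts to a rank-k approximation of the workload/constraint matrix. A minimal sketch using NumPy (the matrix and rank here are illustrative assumptions, not the paper’s data):

```python
import numpy as np

def rank_k_approximation(A, k):
    """Best rank-k approximation of A in the least-squares sense,
    obtained from the truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

Keeping only the top k singular triplets is what shrinks the constraint matrix enough to make the integer program tractable.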
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:30-432013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:30-43
article
Robust counterparts of inequalities containing sums of maxima of linear functions
This paper addresses the robust counterparts of optimization problems containing sums of maxima of linear functions. These problems include many practical problems, e.g. problems with sums of absolute values, and arise when taking the robust counterpart of a linear inequality that is affine in the decision variables, affine in a parameter with box uncertainty, and affine in a parameter with general uncertainty.
Robustness and sensitivity analysis; Sum of maxima of linear functions; Biaffine uncertainty; Robust conic quadratic constraints;
http://www.sciencedirect.com/science/article/pii/S0377221712007345
Gorissen, Bram L.
den Hertog, Dick
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:258-267 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:258-267
article
Super efficiencies or super inefficiencies? Insights from a joint computation model for slacks-based measures in DEA
The slacks-based measure (SBM) can incorporate input and output slacks that would otherwise be neglected in the classical DEA model. In parallel, the super-efficiency model for SBM (S-SBM) has been developed for the purpose of ranking SBM-efficient decision-making units (DMUs). When implementing SBM in conjunction with S-SBM, however, several issues can arise. First, unlike the standard super-efficiency model, S-SBM can only solve for super-efficiency scores but not SBM scores. Second, the S-SBM model may result in weakly efficient reference points. Third, the S-SBM and SBM scores for certain DMUs may be discontinuous under a perturbation of their inputs and outputs, making the scores hard to interpret and justify in applications and sensitive to small changes or errors in the data. Due to this discontinuity, the S-SBM model may overestimate the super-efficiency score. This paper extends the existing SBM approaches and develops a joint model (J-SBM) that addresses the above issues; namely, the J-SBM model can (1) simultaneously compute SBM scores for inefficient DMUs and super-efficiency scores for efficient DMUs, (2) guarantee that the reference points generated by the joint model are Pareto-efficient, and (3) ensure that the J-SBM scores of a firm are continuous in the input and output space. Interestingly, the radial DEA efficiency and super-efficiency scores for a DMU are continuous in the input–output space. The J-SBM model thus combines the merits of the radial and SBM models (i.e., continuity and Pareto-efficiency).
Data envelopment analysis; Slacks-based measure; Super-efficiency; Pareto-efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221712007965
Chen, Chien-Ming
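To make the DEA setting concrete, here is a sketch of the classical input-oriented radial (CCR) envelopment model that SBM-type models build on — not the SBM, S-SBM, or J-SBM model of the paper itself. The DMU data are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 6.0], [5.0, 5.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # outputs

def ccr_efficiency(o):
    """Input-oriented CCR envelopment model for DMU o:
    minimize theta s.t. sum_j lam_j x_j <= theta * x_o,
                        sum_j lam_j y_j >= y_o, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                             # objective: theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]                    # sum_j lam_j x_j - theta x_o <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                    # -sum_j lam_j y_j <= -y_o
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [round(ccr_efficiency(o), 4) for o in range(len(X))]
print(effs)  # efficient DMUs score 1.0; inefficient ones score below 1.0
```

The radial score ignores any remaining (non-radial) input/output slacks, which is exactly the gap that slacks-based measures address.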
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:174-181 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:174-181
article
Estimating technical and allocative efficiency in the public sector: A nonparametric analysis of Dutch schools
Public sector output provision is influenced not only by discretionary inputs but also by exogenous environmental factors. In this paper, we extend the literature by developing a conditional DEA estimator of allocative efficiency that allows a decomposition of overall cost efficiency into allocative and technical components while simultaneously controlling for the environment. We apply the model to analyze the technical and allocative efficiency of Dutch secondary schools. The results reveal that allocative efficiency represents a significant 37 percent of overall cost efficiency on average, although technical inefficiency is still the dominant part. Furthermore, the results show that the impact of the environment differs largely between schools and that a more unfavorable environment is very costly for schools. These results highlight the importance of including environmental variables in both technical and allocative efficiency analysis.
Data envelopment analysis; Education; Allocative efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221712009356
Haelermans, Carla
Ruggiero, John
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:536-550 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:536-550
article
Global sensitivity measures from given data
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures from datasets produced outside a conventional input–output functional framework.
Uncertainty analysis; Global sensitivity analysis; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221712008995
Plischke, Elmar
Borgonovo, Emanuele
Smith, Curtis L.
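The "given data" idea can be illustrated with a simple binning estimator of a first-order sensitivity index (a common given-data approach; the paper's own estimator is built on an L1-norm statistic, which is not reproduced here). The test model and sample below are invented:

```python
import numpy as np

# Invented model: y depends strongly on x0, weakly on x1, not on x2.
rng = np.random.default_rng(1)
x = rng.random((10_000, 3))
y = 4 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.1, 10_000)

def first_order_index(xi, y, bins=20):
    """Given-data estimate of S_i = Var(E[Y|X_i]) / Var(Y):
    bin X_i by quantiles, then take the (weighted) variance of the
    per-bin output means over the total output variance."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(bins)])
    bin_sizes = np.array([(idx == b).sum() for b in range(bins)])
    between = (bin_sizes * (bin_means - y.mean()) ** 2).sum() / len(y)
    return between / y.var()

s = [round(first_order_index(x[:, i], y), 2) for i in range(3)]
print(s)  # x0 dominates, x1 contributes little, x2 is near zero
```

No fresh model runs are needed: the estimator works on whatever input–output sample is already available, which is the practical appeal of given-data methods.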
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:507-515 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:507-515
article
Network DEA pitfalls: Divisional efficiency and frontier projection under general network structures
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently, network DEA models have been developed to examine the efficiency of DMUs with internal structures. The internal network structures range from a simple two-stage process to a complex system where multiple divisions are linked together with intermediate measures. In general, there are two types of network DEA models. One is developed under the standard multiplier DEA models based upon the DEA ratio efficiency, and the other under the envelopment DEA models based upon production possibility sets. While the multiplier and envelopment DEA models are dual models and equivalent under the standard DEA, such is not necessarily true for the two types of network DEA models. Pitfalls in network DEA are discussed with respect to the determination of divisional efficiency, frontier type, and projections. We point out that the envelopment-based network DEA model should be used for determining the frontier projection for inefficient DMUs, while the multiplier-based network DEA model should be used for determining the divisional efficiency. Finally, we demonstrate that under general network structures, the multiplier and envelopment network DEA models are two different approaches. The divisional efficiency obtained from the multiplier network DEA model can be infeasible in the envelopment network DEA model. This indicates that these two types of network DEA models use different concepts of efficiency. We further demonstrate that the envelopment model's divisional efficiency may actually be the overall efficiency.
Data envelopment analysis (DEA); Efficiency; Network; Intermediate measure; Link; Frontier;
http://www.sciencedirect.com/science/article/pii/S0377221712008697
Chen, Yao
Cook, Wade D.
Kao, Chiang
Zhu, Joe
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:789-790 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:789-790
article
Book Review by Etienne de Klerk “An Introduction to Polynomial and Semi-Algebraic Optimization” by Jean-Bernard Lasserre, Cambridge University Press, 2015.
http://www.sciencedirect.com/science/article/pii/S0377221715009170
de Klerk, Etienne
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:328-341 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:328-341
article
Are targets for renewable portfolio standards too low? The impact of market structure on energy policy
In order to limit climate change from greenhouse gas emissions, governments have introduced renewable portfolio standards (RPS) to incentivise renewable energy production. While the response of industry to exogenous RPS targets has been addressed in the literature, setting RPS targets from a policymaker’s perspective has remained an open question. Using a bi-level model, we prove that the optimal RPS target for a perfectly competitive electricity industry is higher than that for a benchmark centrally planned one. Allowing for market power by the non-renewable energy sector within a deregulated industry lowers the RPS target vis-à-vis perfect competition. Moreover, to our surprise, social welfare under perfect competition with RPS is lower than that when the non-renewable energy sector exercises market power. In effect, by subsidising renewable energy and taxing the non-renewable sector, RPS represents an economic distortion that over-compensates damage from emissions. Thus, perfect competition with RPS results in “too much” renewable energy output, whereas the market power of the non-renewable energy sector mitigates this distortion, albeit at the cost of lower consumer surplus and higher emissions. Hence, ignoring the interaction between RPS requirements and the market structure could lead to sub-optimal RPS targets and substantial welfare losses.
OR in environment and climate change; Renewable portfolio standards; Bi-level modelling; Market power;
http://www.sciencedirect.com/science/article/pii/S0377221715009893
Siddiqui, Afzal S.
Tanaka, Makoto
Chen, Yihsu
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:131-142 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:131-142
article
Outsource planning through option contracts with demand and cost uncertainty
This research considers a supply chain consisting of one supplier and one manufacturer that produces a type of product (e.g., innovative products) with a long supply lead-time, a short selling season and stochastic demand. Complete production of the final product requires both an initial and a final processing operation. The manufacturer performs the initial processing operation at a deterministic cost. The final processing operation may be either performed by the manufacturer or assigned to an outside firm through a bid process. At the time of the supply contract, the final processing cost (FPC) is estimated as a stochastic variable. The uncertainty on the FPC is resolved before the selling season starts. The present study determines how the manufacturer should place supply orders within the framework of wholesale price, put, call and bidirectional options. Option contracts provide the manufacturer with the flexibility to adjust his initial orders by exercising purchased options after the FPC is realized. We find the optimal exercised orders under each option contract, together with the equations that the optimal initial and option orders satisfy. According to our analysis, if the realized FPC is higher (lower) than a specific level, the manufacturer should decrease (increase) his initial orders. We analytically derive these specific levels under all types of option contracts. The numerical analysis and managerial insights shed light on the value of option contracts under different parameter settings.
Supply chain management; Outsource planning; Option contract; Demand/Cost uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171500956X
Nosoohi, Iman
Nookabadi, Ali Shahandeh
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:853-863 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:853-863
article
The effects of integrating management judgement into OUT levels: In or out of context?
Physical inventories constitute a significant proportion of companies' investments in today's competitive environment. The trade-off between customer service levels and inventory reserves is addressed in practice by statistical inventory software solutions; given the tremendous number of Stock Keeping Units (SKUs) that contemporary organisations deal with, such solutions are fully automated. However, empirical evidence suggests that managers habitually judgementally adjust the output of such solutions, such as replenishment orders or re-order levels. This research is concerned with the value being added, or not, when statistically derived inventory related decisions (Order-Up-To (OUT) levels in particular) are judgementally adjusted. We aim to develop our current understanding of the effects of incorporating human judgement into inventory decisions; to our knowledge such effects have not been studied empirically before, and this is the first endeavour to do so. A number of research questions are examined and a simulation experiment is performed, using an extended database of approximately 1800 SKUs from the electronics industry, in order to evaluate human judgement effects. The linkage between adjustments and their justification is also evaluated; given the apparent lack of comprehensive empirical evidence in this area, including the field of demand forecasting, this is a contribution in its own right. Insights are offered to academics, to facilitate further research in this area; to practitioners, to enable more constructive intervention in statistical inventory solutions; and to software developers, to consider the interface with human decision makers.
Judgemental adjustments; Inventory management; Behavioural operations;
http://www.sciencedirect.com/science/article/pii/S0377221715006542
Syntetos, Aris A.
Kholidasari, Inna
Naim, Mohamed M.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:691-705 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:691-705
article
Capacity market design options: A dynamic capacity investment model and a GB case study
Rising feed-in from renewable energy sources decreases margins, load factors, and thereby profitability of conventional generation in several electricity markets around the world. At the same time, conventional generation is still needed to ensure security of electricity supply. Therefore, capacity markets are currently being widely discussed as a measure to ensure generation adequacy in markets such as France, Germany, and the United States (e.g., Texas), or have even been implemented, as in Great Britain.
Capacity mechanism; Capacity market; Dynamic capacity investment model; Generation adequacy; Conventional electricity generation investment; Renewable energy sources;
http://www.sciencedirect.com/science/article/pii/S0377221715007894
Hach, Daniel
Chyong, Chi Kong
Spinler, Stefan
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1161-1168 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1161-1168
article
Investment and financing for SMEs with a partial guarantee and jump risk
We consider a small- and medium-sized enterprise (SME) with a funding gap intending to invest in a project, of which the cash flow follows a double exponential jump-diffusion process. In contrast to traditional corporate finance theory, we assume the SME is unable to get a loan directly from a bank and hence it enters into a partial guarantee agreement with an insurer and a lender. Utilizing a real options approach, we develop an investment and financing model with a partial guarantee. We explicitly derive the pricing and timing of the option to invest. We find that if the funding gap rises, the option value decreases but its investment threshold first declines and then increases. The larger the guarantee level, the lower the option value and the later the investment. The optimal coupon rate decreases with project risk and a growth of the guarantee level can effectively reduce agency conflicts.
Finance; Investment analysis; Guarantee level; Real options; Double exponential jump-diffusion process;
http://www.sciencedirect.com/science/article/pii/S0377221715008747
Luo, Pengfei
Wang, Huamao
Yang, Zhaojun
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:540-550 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:540-550
article
A variable neighborhood search for the multi-period collection of recyclable materials
We consider an approach for scheduling the multi-period collection of recyclable materials. Citizens can deposit glass and paper for recycling in small cubes located at several collection points. The cubes are emptied by a vehicle that carries two containers and the material is transported to two treatment facilities. We investigate how the scheduling of emptying and transportation should be done in order to minimize the operation cost, while providing a high service level and ensuring that capacity constraints are not violated. We develop a heuristic solution method for solving the daily planning problem with uncertain accretion rate for materials by considering a rolling time horizon of a few days. We apply a construction heuristic in the first period and re-optimize the solution every subsequent period with a variable neighborhood search. Computational experiments are conducted on real life data.
Inventory routing problem; Multi-period routing; Multi-compartment vehicle; Rolling time horizon; Waste management;
http://www.sciencedirect.com/science/article/pii/S0377221715007900
Elbek, Maria
Wøhlk, Sanne
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:657-666 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:657-666
article
Asymptotic behaviors of stochastic reserving: Aggregate versus individual models
In this paper, we investigate the asymptotic behaviors of the loss reservings computed by the individual data method and its aggregate data versions by the Chain-Ladder (CL) and Bornhuetter–Ferguson (BF) algorithms. It is shown that all deviations of the three reservings from the individual loss reserve (the projection of the outstanding liability on the individual data) converge weakly to a zero-mean normal distribution at the √n rate. The analytical forms of the asymptotic variances are derived and compared by both analytical and numerical examples. The results show that the individual method has the smallest asymptotic variance, followed by the BF algorithm, and the CL algorithm has the largest asymptotic variance.
Risk management; Stochastic reserving; Individual data model; Aggregate data model; Asymptotic variance;
http://www.sciencedirect.com/science/article/pii/S0377221715008814
Huang, Jinlong
Wu, Xianyi
Zhou, Xian
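The Chain-Ladder algorithm compared in this abstract can be illustrated on a toy cumulative run-off triangle (textbook CL mechanics; the figures below are invented and carry no connection to the paper's examples):

```python
import numpy as np

# Invented cumulative run-off triangle: rows = accident years,
# columns = development years; NaN marks future (unobserved) cells.
tri = np.array([
    [100., 150., 170., 175.],
    [110., 168., 188., np.nan],
    [ 95., 140., np.nan, np.nan],
    [125., np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Chain-Ladder development factors: ratio of column sums taken over the
# rows where both adjacent development years are observed.
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# Fill the lower triangle by repeated multiplication with the factors.
full = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * factors[j]

# Aggregate CL reserve = projected ultimates minus the latest observed
# diagonal (the cumulative payments to date).
latest = np.array([tri[i, n - 1 - i] for i in range(n)])
reserve = full[:, -1].sum() - latest.sum()
print(round(reserve, 2))
```

The BF algorithm mentioned alongside CL blends such development factors with an a priori ultimate-loss estimate; the paper's contribution is the asymptotic comparison of both against the individual-data reserve, not the algorithms themselves.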
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:827-841 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:827-841
article
Behavioural operational research: Towards a framework for understanding behaviour in OR interventions
Stimulated by the growing interest in behavioural issues in the management sciences, research scholars have begun to address the implications of behavioural insights for Operational Research (OR). This work reviews some foundational debates on the nature of OR to serve as a theoretical backdrop to orient a discussion on a behavioural perspective and OR. The paper addresses a specific research need by outlining that there is a distinct and complementary contribution of a behavioural perspective to OR. However, there is a need to build a theoretical base in which the insights from classical behavioural research are just one of a number of convergent building blocks that together point towards a compelling basis for behavioural OR. In particular, the focus of the paper is a framework that highlights the collective nature of OR practice and provides a distinct and interesting line of enquiry for future research.
Behavioural OR; Process of OR; Philosophy of OR; Collective behaviour;
http://www.sciencedirect.com/science/article/pii/S0377221715006657
White, Leroy
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1139-1143 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1139-1143
article
Dual cone approach to convex-cone dominance in multiple criteria decision making
In a paper published in Management Science in 1984, Korhonen, Wallenius, and Zionts presented the idea and method based on convex-cone dominance in the discrete Multiple Criteria Decision Making framework. In our current paper, we revisit the old idea from a new standpoint and provide the mathematical theory leading to a dual-cone based approach to solving such problems. Our paper makes the old results computationally more tractable. The results provided in the present paper also help extend the theory.
Multiple criteria analysis; Convex-cone dominance; Duality; Linear programming; Dual/Polar cone;
http://www.sciencedirect.com/science/article/pii/S0377221715008851
Korhonen, Pekka
Soleimani-damaneh, Majid
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1092-1101 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1092-1101
article
A branch-and-cut algorithm for the profitable windy rural postman problem
In this paper we study the profitable windy rural postman problem. This is an arc routing problem with profits, defined on a windy graph in which a profit is associated with some of the edges of the graph. The problem consists of finding a route that maximizes the difference between the total profit collected and the total cost. This problem generalizes the rural postman problem and other well-known arc routing problems and has real-life applications, mainly in snow removal operations. We propose here a formulation for the problem and study its associated polyhedron. Several families of facet-inducing inequalities are described and used in the design of a branch-and-cut procedure. The algorithm has been tested on a large set of benchmark instances and compared with other existing algorithms. The results obtained show that the branch-and-cut algorithm is able to solve large-sized instances optimally in reasonable computing times.
Windy rural postman problem; Arc routing; Profits; Branch-and-cut algorithm; Polyhedron;
http://www.sciencedirect.com/science/article/pii/S0377221715009236
Ávila, Thais
Corberán, Ángel
Plana, Isaac
Sanchis, José M.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:179-191 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:179-191
article
A new method for elicitation of criteria weights in additive models: Flexible and interactive tradeoff
This paper proposes the Flexible and Interactive Tradeoff (FITradeoff) method for eliciting scaling constants or weights of criteria. FITradeoff uses partial information about decision maker (DM) preferences to determine the most preferred alternative in a specified set, according to an additive model within the scope of MAVT (Multi-Attribute Value Theory). The method uses the concept of flexible elicitation to improve the applicability of the traditional tradeoff elicitation procedure. FITradeoff offers two main benefits: the information required from the DM is reduced, and the DM does not have to make adjustments to reach indifference between two consequences (trade-offs), which is a critical issue in the traditional tradeoff procedure. It is easier for the DM to compare consequences (or outcomes) based on strict preference rather than on indifference. The method is built into a decision support system and applied to two cases on supplier selection, already published in the literature.
Multiple criteria analysis; MAVT additive model; Flexible elicitation; Interactive elicitation; Tradeoff elicitation;
http://www.sciencedirect.com/science/article/pii/S0377221715008140
de Almeida, Adiel Teixeira
de Almeida, Jonatas Araujo
Costa, Ana Paula Cabral Seixas
de Almeida-Filho, Adiel Teixeira
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1082-1091 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1082-1091
article
Inventory performance under staggered deliveries and autocorrelated demand
Production plans often span a whole week or month, even when independent production lots are completed every day and service performance is tallied daily. Such policies are said to use staggered deliveries, meaning that production rates for multiple days are determined at a single point in time. Assuming autocorrelated demand, and linear inventory holding and backlog costs, we identify the optimal replenishment policy for order cycles of length P. With the addition of a once-per-cycle audit cost, we optimize the order cycle length P* via an inverse-function approach. In addition, we characterize periodic inventory costs, availability, and fill rate. As a consequence of staggering deliveries, the inventory level becomes cyclically heteroskedastic. This manifests itself as ripples in the expected cost and service levels. Nevertheless, the cost-optimal replenishment policy achieves a constant availability by using time-varying safety stocks; this is not the case with suboptimal constant safety stock policies, where the availability fluctuates over the cycle.
Inventory; Autoregressive demand; Order-up-to-policy; Staggered deliveries; Planning cycles;
http://www.sciencedirect.com/science/article/pii/S0377221715009029
Hedenstierna, Carl Philip T.
Disney, Stephen M.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:717-727 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:717-727
article
Incorporation of activity sensitivity measures into buffer management to manage project schedule risk
Critical Chain Scheduling and Buffer Management (CC/BM) has been shown to provide an effective approach for building robust project schedules and a valuable control tool for coping with schedule variability. Yet, the current buffer monitoring mechanism neglects the dynamic nature of project execution and the related activity information when taking corrective actions. The schedule risk analysis (SRA) method in a traditional PERT framework, on the other hand, provides important information about the relative activity criticality in relation to the project duration, which can focus management attention; it implicitly assumes, however, that control actions are independent of the current project schedule performance. This paper addresses these shortcomings of both tracking methods and proposes a new project schedule monitoring framework that introduces the activity cruciality index as a trigger for effective expediting, integrated into the buffer monitoring process. Furthermore, dynamic action threshold settings that depend on the project progress as well as the buffer penetration are presented and examined in order to obtain a more accurate control system. Our computational experiment demonstrates the relative dominance of the integrated schedule monitoring methods over the predominant buffer management approach in generating better control actions with less effort and increased tracking efficiency, especially when an increasing buffer trigger point is combined with decreasing sensitivity action threshold values.
Buffer management; Schedule monitoring; Activity sensitivity; Schedule risk; Action threshold;
http://www.sciencedirect.com/science/article/pii/S0377221715008243
Hu, Xuejun
Cui, Nanfang
Demeulemeester, Erik
Bie, Li
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:864-877 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:864-877
article
Developing and validating the multidimensional proactive decision-making scale
On the basis of an extensive interdisciplinary literature review, proactive decision-making (PDM) is conceptualised as a multidimensional concept. We conduct five studies with over 4000 participants from various countries to develop and validate a theoretically consistent and psychometrically sound scale of PDM. The PDM concept is developed and appropriate items are derived from the literature. Six dimensions are conceptualised: the four proactive cognitive skills 'systematic identification of objectives', 'systematic search for information', 'systematic identification of alternatives', and 'using a decision radar', and the two proactive personality traits 'showing initiative' and 'striving for improvement'. Using principal component factor analysis and subsequent item analysis as well as confirmatory factor analysis, six conceptually distinct dimensional factors are identified and shown to be acceptably reliable and valid. Our results are remarkably similar for individuals who are decision-makers, decision analysts, both, or neither, with different levels of experience. There is strong evidence that individuals with high scores in a PDM factor, e.g. proactive cognitive skills or personality traits, show significantly higher decision satisfaction. Thus, the PDM scale can be used in future research to analyse other concepts. Furthermore, the scale can be applied, e.g. by staff teams to work on OR problems effectively or to inform a decision analyst about the decision behaviour in an organisation.
Behavioural OR; Decision analysis; Problem structuring;
http://www.sciencedirect.com/science/article/pii/S0377221715005998
Siebert, Johannes
Kunz, Reinhard
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:959-967 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:959-967
article
The decoy effect in relative performance evaluation and the debiasing role of DEA
There is overwhelming evidence that performance ratings and evaluations are context dependent. A special case of such context effects is the decoy effect, which implies that the inclusion of a dominated alternative can influence the preference for non-dominated alternatives. Adapting the well-known experimental setting from the area of consumer behavior to the performance evaluation context of Data Envelopment Analysis (DEA), an experiment was conducted. The results show that adding a dominated decision making unit (DMU) to the set of DMUs augments the attractiveness of certain dominating DMUs and that DEA efficiency scores discriminating between efficient and inefficient DMUs serve as an appropriate debiasing procedure. The mention of the existence of slacks for distinguishing between strong and weak efficient DMUs also contributes to reducing the decoy effect, but it is also associated with other unexpected effects.
Behavioral OR; Decoy effect; Data Envelopment Analysis; Debiasing procedure; Performance measurement;
http://www.sciencedirect.com/science/article/pii/S0377221715006797
Ahn, Heinz
Vazquez Novoa, Nadia
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:899-907 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:899-907
article
Experimental behavioural research in operational research: What we know and what we might come to know
There is a long standing, but thin, stream of experimental behavioural research into understanding how modellers within operational research (OR) behave when constructing models, and how individuals use such models to make decisions. Such research aims to better understand the modelling process, using empirical studies to construct a body of knowledge. Drawing upon this research, and experimental behavioural research in associated research areas, this paper aims to summarise the current body of knowledge. It suggests that we have some experimentally generated findings concerning the construction of models, model usage, the impact of model visualisation, and the effect (or lack thereof) of cognitive style on decision quality. The paper also considers how experiments have been operationalised, and particularly the problem of the dependent variable in such research (that is, what beneficial outputs can be measured in an experiment). It concludes with a consideration of what we might come to know through future experimental behavioural research, suggesting a more inclusive approach to experimental design.
Behavioural OR; Decision processes; Decision Support Systems;
http://www.sciencedirect.com/science/article/pii/S0377221715008693
O'Keefe, Robert M.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1074-10812015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1074-1081
article
Complexity results for storage loading problems with stacking constraints
In this paper, we present complexity results for storage loading problems where the storage area is organized in fixed stacks with a limited common height. Such problems appear in several practical applications, e.g., in the context of container terminals, container ships or warehouses. Incoming items arriving at a storage area have to be assigned to stacks so that certain constraints are respected (e.g., not every item may be stacked on top of every other item). We study structural properties of the general model and special cases where at most two or three items can be stored in each stack. Besides providing polynomial time algorithms for some of these problems, we establish the boundary to NP-hardness.
Storage loading; Stacking; Complexity; Stacking constraints;
http://www.sciencedirect.com/science/article/pii/S0377221715008784
Bruns, Florian
Knust, Sigrid
Shakhlevich, Natalia V.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1063-10732015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1063-1073
article
On the role of psychological heuristics in operational research; and a demonstration in military stability operations
Keller, Niklas
Psychological heuristics are formal models for making decisions that (i) rely on core psychological capacities (e.g., recognizing patterns or recalling information from memory), (ii) do not necessarily use all available information, and process the information they use by simple computations (e.g., ordinal comparisons or un-weighted sums), and (iii) are easy to understand, apply and explain. The contribution of this article is fourfold: First, the conceptual foundation of the psychological heuristics research program is provided, along with a discussion of its relationship to soft and hard OR. Second, empirical evidence and theoretical analyses are presented on the conditions under which psychological heuristics perform on par with or even better than more complex standard models in decision problems such as multi-attribute choice, classification, and forecasting, and in domains as varied as health, economics and management. Third, we demonstrate the application of the psychological heuristics approach to the problem of reducing civilian casualties in military stability operations. Finally, we discuss the role that psychological heuristics can play in OR theory and practice.
Behavioural OR; Bounded rationality; Heuristics; Decision analysis; Forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221715006566
Katsikopoulos, Konstantinos V.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:407-4162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:407-416
article
Lending decisions with limits on capital available: The polygamous marriage problem
In order to stimulate or subdue the economy, banking regulators have sought to impose caps or floors on individual banks' lending to certain types of borrowers. This paper shows that the resultant decision problem for a bank, namely which potential borrower to accept, is a variant of the marriage/secretary problem in which one can accept several applicants. The paper solves the decision problem using dynamic programming. We give results on the form of the optimal lending policy and counterexamples to further “reasonable” conjectures which do not hold in the general case. By solving numerical examples we show the potential loss of profit and the inconsistency in the lending decision that are caused by introducing floors and caps on lending. The paper also describes some other situations where the same decision occurs.
Dynamic programming; Markov processes; Consumer credit lending;
http://www.sciencedirect.com/science/article/pii/S0377221715000673
So, Mee Chi
Thomas, Lyn C.
Huang, Bo
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:192-2032015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:192-203
article
Opening the technological innovation black box: The case of the electronics industry in Korea
In this system dynamics simulation study we analyze a series of causal feedback relationships in which R&D investments create new knowledge stocks, and increasing technological knowledge triggers interactions among the entities of technological innovation, leading to firm profits through the commercialization process.
R&D investment; Technological innovation; Investment portfolio; Product portfolio complexity; Product architecture complexity;
http://www.sciencedirect.com/science/article/pii/S0377221715008103
Choi, Kanghwa
Narasimhan, Ram
Kim, Soo Wook
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:262-2722015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:262-272
article
Performance measurement with multiple interrelated variables and threshold target levels: Evidence from retail firms in the US
In this study, we develop a DEA-based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance to construct appropriate production frontiers. The production frontiers we obtain are deemed appropriate as they consist solely of firms with desirable levels for all dimensions of performance. These levels should be at least equal to the critical values set by decision makers. The properties and advantages of our methodology over competing methodologies are presented through an application to a real-world case study of retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches to define appropriate production frontiers when directly or inversely related dimensions of performance are present and to express the interrelationships between the dimensions of performance.
OR in service industries; Performance management; Data envelopment analysis; Balanced Scorecard;
http://www.sciencedirect.com/science/article/pii/S0377221715008115
Zervopoulos, Panagiotis D.
Brisimi, Theodora S.
Emrouznejad, Ali
Cheng, Gang
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:155-1632015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:155-163
article
Constant approximation algorithms for the one warehouse multiple retailers problem with backlog or lost-sales
We consider the One Warehouse Multi-Retailer (OWMR) problem with deterministic time-varying demand in the case where shortages are allowed. Demand may be either backlogged or lost. We present a simple combinatorial algorithm to build an approximate solution from a decomposition of the system into single-echelon subproblems. We establish that the algorithm has a performance guarantee of 3 for the OWMR with backlog under mild assumptions on the cost structure. In addition, we improve this guarantee to 2 in the special case of the Joint-Replenishment Problem (JRP) with backlog. As a by-product of our approach, we show that our decomposition provides a new lower bound of the optimal cost. A similar technique also leads to a 2-approximation for the OWMR problem with lost-sales. In all cases, the complexity of the algorithm is linear in the number of retailers and quadratic in the number of time periods, which makes it a valuable tool for practical applications. To the best of our knowledge, these are the first constant approximations for the OWMR with shortages.
Approximation algorithms; Lot-sizing; Inventory control; Distribution systems;
http://www.sciencedirect.com/science/article/pii/S0377221715009807
Gayon, J.-P.
Massonnet, G.
Rapine, C.
Stauffer, G.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:525-5392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:525-539
article
Pricing derivatives with counterparty risk and collateralization: A fixed point approach
This paper studies a valuation framework for financial contracts subject to reference and counterparty default risks with collateralization requirement. We propose a fixed point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous PDEs, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask spreads.
Bilateral counterparty risk; Collateralization; Credit valuation adjustment; Fixed point method; Contraction mapping;
http://www.sciencedirect.com/science/article/pii/S0377221715005883
Kim, Jinbeom
Leung, Tim
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:56-642015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:56-64
article
Continuous multifacility ordered median location problems
In this paper we propose a general methodology for solving a broad class of continuous multifacility location problems, in any dimension and with ℓτ-norms, via two different approaches: (1) a new second-order cone mixed-integer programming formulation, and (2) a sequence of semidefinite programs that converges to the solution of the problem, each of these relaxed problems being solvable with SDP solvers in polynomial time.
Continuous multifacility location; Ordered median problems; Semidefinite programming; Second order cone programming;
http://www.sciencedirect.com/science/article/pii/S0377221715009911
Blanco, Víctor
Puerto, Justo
Ben-Ali, Safae El-Haj
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:487-4972015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:487-497
article
A new Mixture model for the estimation of credit card Exposure at Default
Using a large portfolio of historical observations on defaulted loans, we estimate Exposure at Default at the level of the obligor by estimating the outstanding balance of an account, not only at the time of default, but at any time over the entire loan period. We theorize that the outstanding balance on a credit card account at any time during the loan is a function of the spending by the borrower and is also subject to the credit limit imposed by the card issuer. The predicted value is modelled as a weighted average of the estimated balance and limit, with weights depending on how likely the borrower is to have a balance greater than the limit. The weights are estimated using a discrete-time repeated events survival model to predict the probability of an account having a balance greater than its limit. The expected balance and expected limit are estimated using two panel models with random effects. We are able to get predictions which, overall, are more accurate for outstanding balance, not only at the time of default, but at any time over the entire default loan period, than any other particular technique in the literature.
Risk management; Forecasting; Panel models; Survival models; Macroeconomic variables;
http://www.sciencedirect.com/science/article/pii/S037722171500908X
Leow, Mindy
Crook, Jonathan
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:647-6562015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:647-656
article
Dynamic mean-risk portfolio selection with multiple risk measures in continuous-time
While our society has begun to recognize the importance of balancing risk performance under different risk measures, the existing literature has confined its research work to a static mean-risk framework. This paper represents the first attempt to incorporate multiple risk measures into dynamic portfolio selection. More specifically, we investigate the dynamic mean-variance-CVaR (Conditional Value at Risk) formulation and the dynamic mean-variance-SFP (Safety-First Principle) formulation in a continuous-time setting, and derive the analytical solutions for both problems. Combining a downside risk measure with the variance (the second order central moment) in a dynamic mean-risk portfolio selection model helps investors control both a symmetric central risk measure and an asymmetric catastrophic downside risk. We find that the optimal portfolio policy derived from our mean-multiple risk portfolio optimization models exhibits a curved V-shape. Our numerical experiments using real market data clearly demonstrate the dominance of our dynamic mean-multiple risk portfolio policies over the static buy-and-hold portfolio policy.
Dynamic mean-risk portfolio selection; Conditional value at risk; Safety-first principle; Stochastic optimization; Martingale approach;
http://www.sciencedirect.com/science/article/pii/S0377221715008292
Gao, Jianjun
Xiong, Yan
Li, Duan
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1014-10232015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1014-1023
article
Elementary modelling and behavioural analysis for emergency evacuations using social media
Social media usage in evacuations and emergency management represents a rapidly expanding field of study. Our paper thus provides quantitative insight into a serious practical problem. Within this context a behavioural approach is key. We discuss when facilitators should consider model-based interventions amid further implications for disaster communication and emergency management. We model the behaviour of individual people by deriving optimal contrarian strategies. We formulate a Bayesian algorithm which enables the optimal evacuation to be conducted sequentially under worsening conditions.
Behavioural OR; OR in disaster relief; Decision analysis; Humanitarian logistics; Modelling;
http://www.sciencedirect.com/science/article/pii/S0377221715004397
Fry, John
Binner, Jane M.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1131-11382015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1131-1138
article
R&D for green technologies in a dynamic oligopoly: Schumpeter, Arrow and inverted-U’s
We extend a well-known differential oligopoly game to encompass the possibility for production to generate a negative environmental externality, regulated through Pigouvian taxation and price caps. We show that, if the price cap is set so as to fix the tolerable maximum amount of emissions, the resulting equilibrium investment in green R&D is indeed concave in the structure of the industry. Our analysis appears to indicate that inverted-U-shaped investment curves are generated by regulatory measures instead of being a ‘natural’ feature of firms’ decisions.
Dynamic games; Oligopoly; Environmental externality; R&D;
http://www.sciencedirect.com/science/article/pii/S0377221715008498
Feichtinger, Gustav
Lambertini, Luca
Leitmann, George
Wrzaczek, Stefan
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:305-3132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:305-313
article
Venture capital, staged financing and optimal funding policies under uncertainty
Our paper presents a dynamic model of entrepreneurial venture financing under uncertainty based on option exercise games between an entrepreneur and a venture capitalist (VC). In particular, we analyze the impact of multi-staged financing and both economic and technological uncertainty on optimal contracting in the context of VC-financing. Our novel approach combines compound option pricing with sequential non-cooperative contracting, allowing us to determine whether renegotiation will improve the probability of coming to an agreement and proceed with the venture. It is shown that both sources of uncertainty positively impact the VC-investor's optimal equity share. Specifically, higher uncertainty leads to a larger stake in the venture, and renegotiation may result in a dramatic shift of control rights in the venture, preventing the venture from failure. Moreover, given ventures with low volatility, situations might occur where the VC-investor loses his first-mover advantage. Based on a comparative-static analysis, new testable hypotheses for further empirical studies are derived from the model.
Bargaining; Decision-making; Game theory; Real options; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221715009777
Lukas, Elmar
Mölls, Sascha
Welling, Andreas
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:842-8522015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:842-852
article
Do ‘big losses’ in judgmental adjustments to statistical forecasts affect experts’ behaviour?
The behaviour of poker players and sports gamblers has been shown to change after winning or losing a significant amount of money on a single hand. In this paper, we explore whether there are changes in experts’ behaviour when performing judgmental adjustments to statistical forecasts and, in particular, examine the impact of ‘big losses’. We define a big loss as a judgmental adjustment that significantly decreases the forecasting accuracy compared to the baseline statistical forecast. In essence, big losses are directly linked with wrong direction or highly overshooting judgmental overrides. Using relevant behavioural theories, we empirically examine the effect of such big losses on subsequent judgmental adjustments exploiting a large multinational data set containing statistical forecasts of demand for pharmaceutical products, expert adjustments and actual sales. We then discuss the implications of our findings for the effective design of forecasting support systems, focusing on the aspects of guidance and restrictiveness.
Forecasting; Judgment; Behavioural analytics; Decision support systems;
http://www.sciencedirect.com/science/article/pii/S037722171500497X
Petropoulos, Fotios
Fildes, Robert
Goodwin, Paul
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:476-4862015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:476-486
article
Modelling repayment patterns in the collections process for unsecured consumer debt: A case study
Thomas, Lyn C.
One approach to modelling Loss Given Default (LGD), the percentage of the defaulted amount of a loan that a lender will eventually lose, is to model the collections process. This is particularly relevant for unsecured consumer loans, where LGD depends both on a defaulter's ability and willingness to repay and on the lender's collection strategy. When repaying such defaulted loans, defaulters tend to oscillate between repayment sequences, where the borrower is repaying every period, and non-repayment sequences, where the borrower is not repaying in any period. This paper develops two models – one a Markov chain approach and the other a hazard rate approach – to model such payment patterns of debtors. It also looks at simplifications of the models where one assumes that after a few repayment and non-repayment sequences the parameters of the model are fixed for the remaining payment and non-payment sequences. One advantage of these approaches is that they show the impact of different write-off strategies. The models are applied to a real case study and the LGD for that portfolio is calculated under different write-off strategies and compared with the actual LGD results.
OR in banking; Payment patterns; Collection process; Markov chain models; Survival analysis models;
http://www.sciencedirect.com/science/article/pii/S0377221715008371
Matuszyk, Anna
So, Mee Chi
Mues, Christophe
Moore, Angela
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:592-6042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:592-604
article
A disassembly line balancing problem with fixed number of workstations
In this study, a Disassembly Line Balancing Problem with a fixed number of workstations is considered. The product to be disassembled comprises various components, which are referred to as its parts. There is a specified finite supply of the product to be disassembled and specified minimum release quantities (possibly zero) for each part of the product. All units of the product are identical; however, different parts can be released from different units of the product. There is a finite number of identical workstations that perform the necessary disassembly operations, referred to as tasks. We present several upper and lower bounding procedures that assign the tasks to the workstations so as to maximize the total net revenue. The computational study has revealed that the procedures produce satisfactory results.
Integer programming; Heuristics; Disassembly lines; Linear programming relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221715008279
Kalaycılar, Eda Göksoy
Azizoğlu, Meral
Yeralan, Sencer
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:204-2132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:204-213
article
Solving hard control problems in voting systems via integer programming
Voting problems are central in the area of social choice. In this article, we investigate various voting systems and types of control of elections. We present integer linear programming (ILP) formulations for a wide range of NP-hard control problems. Our ILP formulations are flexible in the sense that they can work with an arbitrary number of candidates and voters. Using the off-the-shelf solver Cplex, we show that our approaches can manipulate elections with a large number of voters and candidates efficiently.
Voting system; Election model; Control problem; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715008085
Polyakovskiy, S.
Berghammer, R.
Neumann, F.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:214-2252015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:214-225
article
Hub location under competition
Hubs are consolidation and dissemination points in many-to-many flow networks. The hub location problem is to locate hubs among the available nodes and to allocate non-hub nodes to these hubs. Mainstream hub location studies focus on the optimal decisions of a single decision-maker with respect to some objective(s), even though the markets that benefit from hubbing are oligopolies. Therefore, in this paper, we propose a competitive hub location problem where the market is assumed to be a duopoly. Two decision-makers (or firms) sequentially decide the locations of their hubs, and then customers choose one firm with respect to the provided service levels. Each decision-maker aims to maximize his/her own market share. We propose two problems, for the leader (former decision-maker) and the follower (latter decision-maker): the (r|Xp) hub-medianoid and (r|p) hub-centroid problems, respectively. Both problems are proven to be NP-complete. Linear programming models are presented for these problems, as well as exact solution algorithms for the (r|p) hub-centroid problem. The performance of the models and algorithms is tested by computational analysis conducted on the CAB and TR data sets.
Hub location; Competition models; Competitive location;
http://www.sciencedirect.com/science/article/pii/S0377221715008322
Mahmutogullari, Ali Irfan
Kara, Bahar Y.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1050-10622015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1050-1062
article
A decision-analysis-based framework for analysing stakeholder behaviour in scenario planning
Scenario planning is a method widely used by strategic planners to address uncertainty about the future. However, current methods either fail to address the future behaviour and impact of stakeholders or they treat the role of stakeholders informally. We present a practical decision-analysis-based methodology for analysing stakeholder objectives and likely behaviour within contested unfolding futures. We address issues of power, interest, and commitment to achieve desired outcomes across a broad stakeholder constituency. Drawing on frameworks for corporate social responsibility (CSR), we provide an illustrative example of our approach to analyse a complex contested issue that crosses geographic, organisational and cultural boundaries. Whilst strategies can be developed by individual organisations that consider the interests of others – for example in consideration of an organisation's CSR agenda – we show that our augmentation of scenario method provides a further, nuanced, analysis of the power and objectives of all concerned stakeholders across a variety of unfolding futures. The resulting modelling framework is intended to yield insights and hence more informed decision making by individual stakeholders or regulators.
Strategic planning; Ethics in OR; Decision processes; Scenario method; Education;
http://www.sciencedirect.com/science/article/pii/S0377221715006669
Cairns, George
Goodwin, Paul
Wright, George
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:919-9302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:919-930
article
Can involving clients in simulation studies help them solve their future problems? A transfer of learning experiment
Monks, Thomas
It is often stated that involving the client in operational research studies increases conceptual learning about a system, which can then be applied repeatedly to other, similar systems. Our study provides a novel measurement approach for behavioural OR studies that aim to analyse the impact of modelling on long term problem solving and decision making. In particular, our approach is the first to operationalise the measurement of transfer of learning from modelling using the concepts of close and far transfer, and overconfidence. We investigate learning in discrete-event simulation (DES) projects through an experimental study. Participants were trained to manage queuing problems by varying the degree to which they were involved in building and using a DES model of a hospital emergency department. They were then asked to transfer learning to a set of analogous problems. Findings demonstrate that transfer of learning from a simulation study is difficult, but possible. However, this learning is only accessible when sufficient time is provided for clients to process the structural behaviour of the model. Overconfidence is also an issue when the clients who were involved in model building attempt to transfer their learning without the aid of a new model. Behavioural OR studies that aim to understand learning from modelling can ultimately improve our modelling interactions with clients, helping to ensure the benefits last longer and enabling modelling efforts to become more sustainable.
Behavioural OR; Psychology of decision; Model building; Model reuse; Discrete-event simulation;
http://www.sciencedirect.com/science/article/pii/S0377221715007924
Robinson, Stewart
Kotiadis, Kathy
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:551-5592015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:551-559
article
Branch-and-price algorithms for the solution of the multi-trip vehicle routing problem with time windows
We investigate the exact solution of the vehicle routing problem with time windows, where multiple trips are allowed for the vehicles. In contrast to previous works in the literature, we specifically consider the case in which it is mandatory to visit all customers and there is no limitation on duration. We develop two branch-and-price frameworks based on two set covering formulations: a traditional one where columns (variables) represent routes, that is, a sequence of consecutive trips, and a second one in which columns are single trips. One important difficulty related to the latter is the way mutual temporal exclusion of trips can be handled. It raises the issue of time discretization when solving the pricing problem. Our dynamic programming algorithm is based on the concept of groups of labels and representative labels. We provide computational results on modified small-sized instances (25 customers) from Solomon’s benchmarks in order to evaluate and compare the two methods. Results show that some difficult instances are out of reach for the first branch-and-price implementation, while they are consistently solved with the second.
Vehicle routing; Time windows; Multiple trips; Column generation; Branch-and-price;
http://www.sciencedirect.com/science/article/pii/S037722171500795X
Hernandez, Florent
Feillet, Dominique
Giroudeau, Rodolphe
Naud, Olivier
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1169-11772015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1169-1177
article
Aggregation heuristic for the open-pit block scheduling problem
In order to establish a production plan, an open-pit mine is partitioned into a three-dimensional array of blocks. The order in which blocks are extracted and processed has a dramatic impact on the economic value of the exploitation. Since realistic models have millions of blocks and constraints, the combinatorial optimization problem of finding the extraction sequence that maximizes the profit is computationally intractable. In this work, we present a procedure, based on innovative aggregation and disaggregation heuristics, that allows us to get feasible and nearly optimal solutions. The method was tested on the public reference library MineLib and improved the best known results in the literature in 9 of the 11 instances of the library. Moreover, the overall procedure is very scalable, which makes it a promising tool for large size problems.
Mine planning; Block aggregation; Open-pit block scheduling; Integer programming; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715009704
Jélvez, Enrique
Morales, Nelson
Nancel-Penard, Pierre
Peypouquet, Juan
Reyes, Patricio
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:945-9582015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:945-958
article
Critical Learning Incidents in system dynamics modelling engagements
This paper reports in-depth behavioural operational research to explore how individual clients learned to resolve dynamically complex problems in system dynamics model-based engagements. Consultant-client dyads involved in ten system dynamics consulting engagements were interviewed to identify individual clients' Critical Learning Incidents—defined as the moment of surprise caused after one's mental model produces unexpected failure and a change in one's mental model produces the desired result. The cases, which are reprised from interviews, include assessments of the nature of the engagement problem, the form of system dynamics model, and the methods employed by consultants during each phase of the engagement. Reported Critical Learning Incidents are noted by engagement phase and consulting method and constructivist learning theory is used to describe a pattern of learning. Research outcomes include descriptions of: the role of different methods applied in engagement phases (for example, the role of concept models to commence problem identification and to introduce iconography and jargon to the engagement participants); how model form associates with the timing of Critical Learning Incidents; and the role of social mediation and negotiation in the learning process.
System dynamics; Practice of OR; Critical Learning Incidents; Behavioural OR; Constructivism;
http://www.sciencedirect.com/science/article/pii/S0377221715008905
Thompson, James P.
Howick, Susan
Belton, Valerie
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:784-7882015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:784-788
article
Notes on technical efficiency estimation with multiple inputs and outputs
Collier, Johnson and Ruggiero (2011) deal with the problem of estimating technical efficiency using regression analysis that allows multiple inputs and outputs. This revives an old problem in the analysis of production. In this note we provide an alternative maximum likelihood estimator that addresses these concerns. A Monte Carlo experiment shows that the technique works well in practice. A test for homotheticity, a critical assumption in Collier, Johnson and Ruggiero (2011), is constructed and its behavior is examined using Monte Carlo simulation and an empirical application to European banking.
Efficiency; Least squares; Multiple-output production; Limited information maximum likelihood;
http://www.sciencedirect.com/science/article/pii/S0377221715009625
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:465-4752015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:465-475
article
Take it to the limit: Innovative CVaR applications to extreme credit risk measurement
The Global Financial Crisis (GFC) demonstrated the devastating impact of extreme credit risk on global economic stability. We develop four credit models to better measure credit risk in extreme economic circumstances, by applying innovative Conditional Value at Risk (CVaR) techniques to structural models (called Xtreme-S), transition models (Xtreme-T), quantile regression models (Xtreme-Q), and the authors' unique iTransition model (Xtreme-i), which incorporates industry factors into transition matrices. We find the Xtreme-S and Xtreme-Q models to be the most responsive to changing market conditions. The paper also demonstrates how the models can be used to determine the capital buffers required to deal with extreme credit risk.
Uncertainty modeling; Credit risk; Conditional Value at Risk; Conditional probability of default; Capital buffers;
http://www.sciencedirect.com/science/article/pii/S0377221714010182
Allen, D.E.
Powell, R.J.
Singh, A.K.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1144-11522015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1144-1152
article
A branch-and-cut algorithm for the truck dock assignment problem with operational time constraints
In this paper, we address a truck dock assignment problem with operational time constraints which has to be faced in the management of cross docks. This problem arises as a subproblem of more involved problems with additional constraints and criteria. We propose a new integer programming model for this problem. The dimension of the polytope associated with the proposed model is identified by introducing a systematic way of generating linearly independent feasible solutions. Several classes of valid inequalities are also introduced, some of which are proved to be facet-defining. Exact separation algorithms are then described for separating cuts for classes with an exponential number of constraints, and an efficient branch-and-cut algorithm that solves real-life size instances in reasonable time is provided. In most cases, the optimal solution is identified at the root node without requiring any branching.
Truck dock assignment; Polytope; Dimension; Valid inequalities; Facet-defining inequalities;
http://www.sciencedirect.com/science/article/pii/S0377221715008917
Gelareh, Shahin
Monemi, Rahimeh Neamatian
Semet, Frédéric
Goncalves, Gilles
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:273-2902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:273-290
article
Network-flow based algorithms for scheduling production in multi-processor open-pit mines accounting for metal uncertainty
The open-pit mine production scheduling problem (MPSP) deals with the optimization of the net present value of a mining asset and has received significant attention in recent years. Several solution methods have been proposed for its deterministic version. However, little is reported in the literature about its stochastic version, where metal uncertainty is accounted for. Moreover, most methods focus on the mining sequence and do not consider the flow of the material once mined. In this paper, a new MPSP formulation accounting for metal uncertainty and considering multiple destinations for the mined material, including stockpiles, is introduced. In addition, four different heuristics for the problem are compared; namely, a tabu search heuristic incorporating a diversification strategy (TS), a variable neighborhood descent heuristic (VND), a very large neighborhood search heuristic based on network flow techniques (NF), and a diversified local search (DLS) that combines VND and NF. The first two heuristics are extensions of existing methods recently proposed in the literature, while the last two are novel approaches. Numerical tests indicate that the proposed solution methods are effective, solving in a few minutes to a few hours instances that standard commercial solvers fail to solve. They also indicate that NF and DLS are in general more efficient and more robust than TS and VND.
Scheduling; Heuristics; Open-pit mining; Metal uncertainty; Network-flow algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221715008073
Lamghari, Amina
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:618-6272015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:618-627
article
The parallel stack loading problem to minimize blockages
This paper treats an elementary optimization problem, which arises whenever an inbound stream of items is to be intermediately stored in a given number of parallel stacks, so that blockages during their later retrieval are avoided. A blockage occurs whenever an item to be retrieved earlier is blocked by another item with lower priority stored on top of it in the same stack. Our stack loading problem arises, for instance, if containers arriving by vessel are intermediately stored in the container yard of a port or if, during nighttime, successively arriving wagons are to be parked in multiple parallel dead-end rail tracks of a tram depot. We formalize the resulting basic stack loading problem, investigate its computational complexity, and present suitable exact and heuristic solution procedures.
Container storage; Stacking yard; Stack loading; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715008759
Boysen, Nils
Emde, Simon
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:30-452015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:30-45
article
A differential evolution algorithm with self-adaptive strategy and control parameters based on symmetric Latin hypercube design for unconstrained optimization problems
This paper presents a differential evolution (DE) algorithm, namely SLADE, with self-adaptive strategy and control parameters for unconstrained optimization problems. In SLADE, the population is initialized by symmetric Latin hypercube design (SLHD) to increase the diversity of the initial population. Moreover, the trial vector generation strategy assigned to each target individual is adaptively selected from the strategy candidate pool to match different stages of the evolution according to previous successful experience. SLADE employs the Cauchy distribution and the normal distribution to update the control parameters CR and F to appropriate values during the evolutionary process. A large number of simulation experiments and comparisons were carried out on a set of 25 benchmark functions. Experimental results show that SLADE is better than, or at least comparable to, other classic or adaptive DE algorithms, and that SLHD is effective for improving the performance of SLADE.
Evolutionary computations; Differential evolution; Parameter adaptation; Strategy adaptation; Symmetric Latin hypercube design;
http://www.sciencedirect.com/science/article/pii/S0377221715009698
Zhao, Zhiwei
Yang, Jingming
Hu, Ziyu
Che, Haijun
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:506-5162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:506-516
article
A comparative analysis of the UK and Italian small businesses using Generalised Extreme Value models
This paper presents a cross-country comparison of significant predictors of small business failure between Italy and the UK. Financial measures of profitability, leverage, coverage, liquidity, and scale, together with non-financial information, are explored, and some commonalities and differences are highlighted. Several models are considered, starting with logistic regression, which is a standard approach in credit risk modelling. Some important improvements are investigated. Generalised Extreme Value (GEV) regression is applied in contrast to logistic regression in order to produce more conservative estimates of default probability. The assumption of linearity is relaxed through application of BGEVA, a non-parametric additive model based on the GEV link function. Two methods of handling missing values are compared: multiple imputation and Weights of Evidence (WoE) transformation. The results suggest that the best predictive performance is obtained by BGEVA, thus implying the necessity of taking into account the low volume of defaults and non-linear patterns when modelling SME performance. The WoE transformation shows better prediction than multiple imputation for the majority of models considered, suggesting that missing values could be informative.
Decision support systems; Risk analysis; Credit scoring; Small and Medium Sized Enterprises; Default prediction;
http://www.sciencedirect.com/science/article/pii/S0377221715007183
Andreeva, Galina
Calabrese, Raffaella
Osmetti, Silvia Angela
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:395-3962015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:395-396
article
Editorial
http://www.sciencedirect.com/science/article/pii/S0377221715009169
Crook, Jonathan
Bellotti, Tony
Mues, Christophe
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:806-8152015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:806-815
article
An outlook on behavioural OR – Three tasks, three pitfalls, one definition
In their recent paper, Hämäläinen, Luoma, and Saarinen (2013) have made a strong case for the importance of Behavioural OR. With the motivation to contribute to a broad academic outlook in this emerging discipline, this rather programmatic paper intends to further the discussion by describing three types of research tasks that should play an important role in Behavioural OR, namely a descriptive, a methodological and a technological task. Moreover, by relating Behavioural OR to similar academic endeavours, the paper presents three potential pitfalls that Behavioural OR should avoid: (1) a too narrow understanding of what “behavioural” means, (2) ignorance of interdisciplinary links, and (3) a development without close connection with the core disciplines of OR. The paper concludes by suggesting a definition of Behavioural OR that sums up all the points addressed.
Behavioural OR; Interdisciplinary; Social sciences; Organizations; Hard and soft OR;
http://www.sciencedirect.com/science/article/pii/S0377221715008978
Becker, Kai Helge
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:751-7702015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:751-770
article
A dynamic program for valuing corporate securities
We design and implement a dynamic program for valuing corporate securities, seen as derivatives on a firm’s assets, and computing the term structure of yield spreads and default probabilities. Our setting is flexible, for it accommodates an extended balance-sheet equality, arbitrary corporate debts, multiple seniority classes, and a reorganization process. This flexibility comes at the expense of a minor loss of efficiency: the analytical approach proposed in the literature is exchanged here for a quasi-analytical approach based on dynamic programming coupled with finite elements. To assess our construction, which proves both flexible and efficient, we carry out a numerical investigation along with a complete sensitivity analysis.
Option theory; Structural models; Corporate securities; Corporate bonds; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715009522
Ayadi, Mohamed A.
Ben-Ameur, Hatem
Fakhfakh, Tarek
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:983-10042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:983-1004
article
Understanding behaviour in problem structuring methods interventions with activity theory
This article argues that OR interventions, particularly problem structuring methods (PSM), are complex events that cannot be understood by conventional methods alone. In this paper an alternative approach is introduced, where the units of analysis are the activity systems constituted by and constitutive of PSM interventions. The paper outlines the main theoretical and methodological concerns that need to be appreciated in studying PSM interventions. The paper then explores activity theory as an approach to study them. A case study describing the use of this approach is provided.
Problem structuring methods; Behavioural OR; Activity theory; Collective intentionality;
http://www.sciencedirect.com/science/article/pii/S0377221715006785
White, Leroy
Burger, Katharina
Yearworth, Mike
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1153-11602015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1153-1160
article
A MILP model for the teacher assignment problem considering teachers’ preferences
The Teacher Assignment Problem is part of the University Timetabling Problem and involves assigning teachers to courses, taking their preferences into consideration. This is a complex problem, usually solved by means of heuristic algorithms. In this paper a Mixed Integer Linear Programming model is developed to balance teachers’ teaching load (first optimization criterion), while maximizing teachers’ preferences for courses according to their category (second optimization criterion). The model is used to solve the assignment of teachers to courses in the Department of Management at the School of Industrial Engineering of Barcelona, in the Universitat Politècnica de Catalunya. Results are discussed regarding the importance given to the optimization criteria. Moreover, to test the model’s performance, a computational experiment is carried out using randomly generated instances based on real patterns. Results show that the model is suitable for many situations (number of teachers and courses and weight of the criteria), making it useful for departments with similar requests.
Timetabling; Linear programming; Teacher assignment problem; MILP model;
http://www.sciencedirect.com/science/article/pii/S0377221715008139
Domenech, B
Lusa, A
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1124-11302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1124-1130
article
Optimal policies of M(t)/M/c/c queues with two different levels of servers
This paper deals with optimal control points of M(t)/M/c/c queues with periodic arrival rates and two levels of the number of servers. We use the results of this model to build a Markov decision process (MDP). The problem arose from a case study at the Kelowna General Hospital (KGH). The KGH uses surge beds when the emergency room is overcrowded, which results in two levels for the number of beds. The objective is to minimize a cost function. The findings of this work are not limited to healthcare; they may be used in any stochastic system with fluctuation in arrival rates and/or two levels of the number of servers, e.g., call centers, transportation, and internet services. We model the situation and define a cost function which needs to be minimized. In order to evaluate the cost function we need transient solutions of the M(t)/M/c/c queue. We modify the fourth-order Runge–Kutta method to calculate the transient solutions and obtain better solutions than the existing Runge–Kutta method. We show that the periodic variation of arrival rates makes the control policies time-dependent and periodic. We also study how fast the policies converge to a periodic pattern and obtain a criterion for the independence of policies in two sequential cycles.
Periodic MDP; Time-dependent queues; Health care; Two-level for number of servers; Hysteretic policy;
http://www.sciencedirect.com/science/article/pii/S0377221715009662
Tirdad, Ali
Grassmann, Winfried K.
Tavakoli, Javad
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:342-3462015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:342-346
article
Environmental efficiency measurement and the materials balance condition reconsidered
This note takes up a shortcoming of Coelli et al.’s (2007) popular environmental efficiency measure and its extension to economic-environmental trade-off analysis (see Van Meensel et al. (2010)), namely that they do not reward emission reductions achieved by pollution control. A new environmental efficiency measure that overcomes this issue and, like Coelli et al.’s efficiency measure, is in line with the materials balance principle is proposed and further decomposed into “technical environmental efficiency” and “material and nonmaterial allocative environmental efficiencies”. The new efficiency measure collapses into Coelli et al.’s efficiency measure if none of the considered Decision Making Units control pollutants. A numerical example using Data Envelopment Analysis is provided to further explore the properties of the new efficiency measure.
OR in environment and climate change; Environmental efficiency; Data envelopment analysis; Weak G-disposability;
http://www.sciencedirect.com/science/article/pii/S037722171500987X
Rødseth, Kenneth Løvold
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:120-1302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:120-130
article
A comprehensive annual delivery program for upstream liquefied natural gas supply chain
Developing a cost-effective annual delivery program (ADP) is a challenging task for liquefied natural gas (LNG) suppliers, especially for LNG supply chains with a large number of vessels and customers. Given the significant operational costs of LNG delivery operations, cost-effective ADPs can yield substantial savings, adding up to millions. Providing an extensive account of supply chain operations and contractual terms, this paper aims to consider a realistic ADP problem faced by large LNG suppliers; suggest alternative delivery options, such as split-delivery; and propose an efficient heuristic solution that outperforms commercial optimizers. The comprehensive numerical study in this research demonstrates that, contrary to common belief in practice, split-delivery may generate substantial cost reductions in LNG supply chains.
OR in maritime industry; Annual delivery planning problem; LNG supply chain; Large-scale optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715009571
Mutlu, Fatih
Msakni, Mohamed K.
Yildiz, Hakan
Sönmez, Erkut
Pokharel, Shaligram
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1033-10492015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1033-1049
article
Reference points in revenue sharing contracts—How to design optimal supply chain contracts
Coordinating supply chains is an important goal for contract designers because it enables the channel members to increase their profits. Recently, many experimental studies have shown that behavioral aspects have to be taken into account when choosing the type of contract and specifying the contract parameters. In this paper, we analyze behavioral aspects of revenue-sharing contracts. We extend the classical normative decision model by incorporating reference-dependent valuation and show how this affects inventory decisions. We conduct different lab experiments to test our model. Human inventory decisions deviate from classical normative predictions, and we find evidence for reference-dependent valuation by human decision makers. We also show how contract designers can use the insights gained to design better contracts.
Supply chain management; Contracting; Behavioral operations research; Experiments; Reference-dependent valuation;
http://www.sciencedirect.com/science/article/pii/S0377221715005214
Becker-Peth, Michael
Thonemann, Ulrich W.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:791-7952015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:791-795
article
Behavioural operational research: Returning to the roots of the OR profession
http://www.sciencedirect.com/science/article/pii/S0377221715009601
Franco, L. Alberto
Hämäläinen, Raimo P.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:440-4562015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:440-456
article
Accuracy of mortgage portfolio risk forecasts during financial crises
This paper explores whether factor based credit portfolio risk models are able to predict losses in severe economic downturns such as the recent Global Financial Crisis (GFC) within standard confidence levels. The paper analyzes (i) the accuracy of default rate forecasts, and (ii) whether forecast downturn percentiles (Value-at-Risk, VaR) are sufficient to cover default rate outcomes over a quarterly and an annual forecast horizon. Uninformative maximum likelihood and informative Bayesian techniques are compared as they imply different degrees of uncertainty.
Bayesian estimation; Maximum likelihood estimation; Model risk; Mortgage; Value-at-risk;
http://www.sciencedirect.com/science/article/pii/S0377221715008310
Lee, Yongwoong
Rösch, Daniel
Scheule, Harald
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:605-6172015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:605-617
article
Incentive strategies for an optimal recovery program in a closed-loop supply chain
We consider a dynamic closed-loop supply chain made up of one manufacturer and one retailer, with both players investing in a product recovery program to increase the rate of return of previously purchased products. End-of-use product returns have two impacts. First, they lead to a decrease in the production cost, as manufacturing with used parts is cheaper than using virgin materials. Second, returns boost sales through replacement items.
Closed-loop supply chain; Coordination; Incentive strategies; Pricing; Product recovery programs;
http://www.sciencedirect.com/science/article/pii/S0377221715008450
De Giovanni, Pietro
Reddy, Puduru V.
Zaccour, Georges
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:46-552015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:46-55
article
Fundamental properties and pseudo-polynomial-time algorithm for network containership sailing speed optimization
In container liner shipping, bunker cost is an important component of the total operating cost, and bunker consumption increases dramatically when the sailing speed of containerships increases. A higher speed implies higher bunker consumption (higher bunker cost), shorter transit time (lower inventory cost), and larger shipping capacity per ship per year (lower ship cost). Therefore, a container shipping company aims to determine the optimal sailing speed of containerships in a shipping network to minimize the total cost. We derive analytical solutions for sailing speed optimization on a single ship route with a continuous number of ships. The advantage of analytical solutions lies in that they unveil the underlying structure and properties of the problem, from which a number of valuable managerial insights can be obtained. Based on the analytical solution and the properties of the problem, the optimal integer number of ships to deploy on a ship route can be obtained by solving two equations, each in one unknown, using a simple bi-section search method. The properties further enable us to identify an optimality condition for network containership sailing speed optimization. Based on this optimality condition, we propose a pseudo-polynomial-time solution algorithm that can efficiently obtain an epsilon-optimal solution for the sailing speed of containerships in a liner shipping network.
Transportation; Liner shipping; Containership; Sailing speed; Bunker fuel;
http://www.sciencedirect.com/science/article/pii/S0377221715009789
Wang, Shuaian
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:91-1002015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:91-100
article
Obtaining cell counts for contingency tables from rounded conditional frequencies
We present an integer linear programming formulation and solution procedure for determining the tightest bounds on cell counts in a multi-way contingency table, given knowledge of a corresponding derived two-way table of rounded conditional probabilities and the sample size. The problem has application in statistical disclosure limitation, which is concerned with releasing useful data to the public and researchers while also preserving privacy and confidentiality. Previous work on this problem invoked the simplifying assumption that the conditionals were released as fractions in lowest terms, rather than the more realistic and complicated setting of rounded decimal values that is treated here. The proposed procedure finds all possible counts for each cell and runs fast enough to handle moderately sized tables.
OR in government; Integer linear programming; Statistical disclosure control; Tabular data; Fast Fourier transform;
http://www.sciencedirect.com/science/article/pii/S0377221715008358
Sage, Andrew J.
Wright, Stephen E.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:878-8892015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:878-889
article
Different paths to consensus? The impact of need for closure on model-supported group conflict management
Empirical evidence on how cognitive factors impact the effectiveness of model-supported group decision making is lacking. This study reports on an experiment on the effects of need for closure, defined as a desire for definite knowledge on some issue and the eschewal of ambiguity. The study was conducted with over 40 postgraduate student groups. A quantitative analysis shows that, compared to groups low in need for closure, groups high in need for closure experienced less conflict when using Value-Focused Thinking to make a budget allocation decision. Furthermore, low need for closure groups used the model to surface conflict and engaged in open discussions to come to an agreement. By contrast, high need for closure groups suppressed conflict and used the model to put boundaries on the discussion. Interestingly, both types of groups achieved similar levels of consensus, and high need for closure groups were more satisfied than low need for closure groups. A qualitative analysis of a subset of groups reveals that in high need for closure groups only a few participants control the model building process, and final decisions are based not on the model but on simpler tools. The findings highlight the need to account for the effects of cognitive factors when designing and deploying model-based support for practical interventions.
Behavioural OR; Need for closure; Decision processes; Conflict management; Model-based group support;
http://www.sciencedirect.com/science/article/pii/S0377221715005895
Franco, L. Alberto
Rouwette, Etiënne A.J.A.
Korzilius, Hubert
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:560-5762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:560-576
article
Finding robust timetables for project presentations of student teams
This article describes a methodology developed to find robust solutions to a novel timetabling problem encountered during a course. The problem requires grouping student teams according to diversity/homogeneity criteria and assigning the groups to time-slots for presenting their project results. In this article, we develop a mixed integer programming (MIP) formulation of the problem and then solve it with CPLEX. Rather than simply using the optimal solution reported, we obtain a set of solutions provided by the solution pool feature of the solution engine. We then map these solutions to a network, in which each solution is a node and an edge represents the distance between a pair of solutions (as measured by the number of teams assigned to a different time slot in those solutions). Using a scenario-based exact robustness measure, we test a set of metrics to determine which ones can be used to heuristically rank the solutions in terms of their robustness measure. Using seven semesters’ worth of actual data, we analyze performances of the solution approach and the metrics. The results show that by using the solution pool feature, analysts can quickly obtain a set of Pareto-optimal solutions (with objective function value and the robustness measure as the two criteria). Furthermore, two of the heuristic metrics have strong rank correlation with the robustness measure (mostly above 0.80) making them quite suitable for use in the development of new heuristic search algorithms that can improve the solution pool.
Timetabling; Mixed integer programming; Robustness; Diverse grouping; Bicriteria optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715008036
Akkan, Can
Erdem Külünk, M.
Koçaş, Cenk
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:314-3272015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:314-327
article
Accounting for externalities and disposability: A directional economic environmental distance function
The existence of positive and negative externalities ought to be considered in a productivity analysis in order to obtain unbiased measures of efficiency. In this research we present an additive-style data envelopment analysis model that considers the production of both negative and positive externalities and permits a limited increase in input utilisation where relevant. The directional economic environmental distance (DEED) function is a unified approach based on a linear program that evaluates the relative inefficiency of the units under examination with respect to a unique reference technology. We discuss the impact of disposability assumptions in depth and demonstrate how different versions of the DEED model improve on models presented in the literature to date.
Data envelopment analysis; Negative externalities; Disposability; Additive measure; Environment;
http://www.sciencedirect.com/science/article/pii/S037722171500990X
Adler, Nicole
Volta, Nicola
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:890-8982015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:890-898
article
Path dependence and biases in the even swaps decision analysis method
There are usually multiple paths that can be followed in a decision analysis process. It is possible that these different paths lead to different outcomes, i.e., that path dependence exists. To demonstrate the phenomenon, we show how path dependence emerges in the Even Swaps method. We also discuss the phenomenon in decision analysis in general. The Even Swaps process helps the decision maker find the most preferred alternative out of a set of multi-attribute alternatives. In our experiment, different paths are found to systematically lead to different choices in the Even Swaps process. This is explained by the accumulated effect of successive biased even swap tasks. The biases in these tasks are shown to be due to the scale compatibility and loss aversion phenomena. Estimates of the magnitudes of these biases in the even swap tasks are provided. We suggest procedures to cancel out the effects of these biases.
Behavioral Operational Research; Decision analysis; Path dependence; Biases; Trade-off; Scale compatibility;
http://www.sciencedirect.com/science/article/pii/S037722171500898X
Lahtinen, Tuomas J.
Hämäläinen, Raimo P.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:101-1192015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:101-119
article
The vehicle-routing problem with time windows and driver-specific times
This paper proposes a tabu search algorithm for the vehicle-routing problem with time windows and driver-specific times (VRPTWDST), a variant of the classical VRPTW that uses driver-specific travel and service times to model the familiarity of the different drivers with the customers to visit. We carry out a systematic investigation of the problem on a comprehensive set of newly generated benchmark instances. We find that consideration of driver knowledge in the route planning clearly improves the efficiency of vehicle routes, an effect that intensifies for higher familiarity levels of the drivers. Increased benefits are produced if the familiar customers of drivers are geographically contiguous. Moreover, a higher number of drivers that are familiar with the same (larger) region provides higher benefits compared to a scenario where each driver is only familiar with a dedicated (smaller) region. Finally, our tabu search demonstrates its performance on the Solomon test instances of the closely related VRPTW, yielding high-quality solutions in short time.
Vehicle routing; Time windows; Driver-specific times; Routing consistency; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715008395
Schneider, Michael
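The abstract's central modeling device, driver-specific travel and service times, can be sketched as a route-evaluation routine. This is not the paper's tabu search; the familiarity discount, the data, and all names below are illustrative assumptions.

```python
def route_duration(route, driver, base_travel, base_service, familiar, discount=0.8):
    """Total travel plus service time of `route` (customer list, depot = node 0)
    when driven by `driver`. Times at customers the driver is familiar with are
    scaled by `discount` < 1, giving driver-specific times."""
    total = 0.0
    prev = 0  # start at the depot
    for cust in route:
        travel = base_travel[(prev, cust)]
        service = base_service[cust]
        if cust in familiar[driver]:  # familiarity shortens both time components
            travel *= discount
            service *= discount
        total += travel + service
        prev = cust
    return total + base_travel[(prev, 0)]  # return leg to the depot

base_travel = {(0, 1): 10, (1, 0): 10, (1, 2): 5, (2, 1): 5, (0, 2): 12, (2, 0): 12}
base_service = {1: 4, 2: 6}
familiar = {"anna": {1, 2}, "bob": set()}  # anna knows both customers, bob neither

print(route_duration([1, 2], "anna", base_travel, base_service, familiar))
print(route_duration([1, 2], "bob", base_travel, base_service, familiar))
```

The gap between the two printed durations is the kind of familiarity-driven efficiency gain the computational study quantifies.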
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:143-1542015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:143-154
article
Control and enforcement in order to increase supplier inventory in a JIT contract
Prompt response to customer demand has long been a point of major concern in supply chains. “Inventory wars” between suppliers and their customers are common, owing to cases in which one supply chain party attempts to decrease its stock at the expense of the other party. In order to ensure that suppliers meet their commitments to fulfill orders on time, customers must formulate incentives or, alternatively, enforce penalties. This paper deals with a customer organization that has a contract with a supplier, based on a Just-In-Time strategy. Initiating a policy of sanctions, the customer becomes the lead player in a Stackelberg game and forces the supplier to hold inventory, which is made available to the customer in real-time. Using a class of sanctioning functions, we show that the customer can force the supplier to hold inventory up to some maximal value, rendering actual enforcement of sanctions unnecessary. However, contrary to expectations, escalation of the enforcement level can in fact reduce the capacity of the supplier to replenish on time. Consequently, the customer must sanction meticulously in order to receive his inventory on time. The possibility of devoting a few hours each day to sanctioning activity significantly reduces the customer's expected cost. In particular, numerical examples show that the customer's costs under an enforcement level may be only 2 percent higher than his costs in a situation in which all inventory is necessarily replenished on time.
Just-In-Time; Customer-led supply chain; Replenishment on-time enforcement;
http://www.sciencedirect.com/science/article/pii/S037722171500973X
Shnaiderman, Matan
Ben-Baruch, Liron
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1005-10132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1005-1013
article
Modelling adherence behaviour for the treatment of obstructive sleep apnoea
Continuous positive airway pressure therapy (CPAP) is known to be the most efficacious treatment for obstructive sleep apnoea (OSA). Unfortunately, poor adherence behaviour in using CPAP reduces its effectiveness and thereby also limits beneficial outcomes. In this paper, we model the dynamics and patterns of patient adherence behaviour as a basis for designing effective and economical interventions. Specifically, we define patient CPAP usage behaviour as a state and develop Markov models for diverse patient cohorts in order to examine the stochastic dynamics of CPAP usage behaviours. We also examine the impact of behavioural intervention scenarios using a Markov decision process (MDP), and suggest a guideline for designing interventions to improve CPAP adherence behaviour. Behavioural intervention policy that addresses economic aspects of treatment is imperative for translation to clinical practice, particularly in resource-constrained environments that are clinically engaged in the chronic care of OSA.
Behavioural OR; Obstructive sleep apnoea; Treatment adherence behaviour; Markov models; Cost-effective interventions;
http://www.sciencedirect.com/science/article/pii/S0377221715006724
Kang, Yuncheol
Sawyer, Amy M.
Griffin, Paul M.
Prabhu, Vittaldas V.
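The usage-as-a-Markov-state idea in the abstract above can be illustrated with a small transition matrix. The three states and all probabilities below are invented for illustration, not the cohort-specific estimates from the paper.

```python
# States: 0 = non-use, 1 = partial use, 2 = adherent use (illustrative)
P = [
    [0.70, 0.20, 0.10],  # from non-use
    [0.25, 0.50, 0.25],  # from partial use
    [0.05, 0.15, 0.80],  # from adherent use
]

def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def long_run(P, iters=500):
    """Approximate the stationary distribution by repeated transitions."""
    dist = [1.0, 0.0, 0.0]  # everyone starts as a non-user
    for _ in range(iters):
        dist = step(dist, P)
    return dist

pi = long_run(P)
print([round(x, 3) for x in pi])
```

An MDP layer, as in the paper, would add intervention actions that modify rows of P at some cost and ask which action to take in each state.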
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1113-11232015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1113-1123
article
Zero-inefficiency stochastic frontier models with varying mixing proportion: A semiparametric approach
In this paper, we propose a semiparametric version of the zero-inefficiency stochastic frontier model of Kumbhakar, Parmeter, and Tsionas (2013) by allowing the proportion of firms that are fully efficient to depend on a set of covariates via an unknown smooth function. We propose an (iterative) backfitting local maximum likelihood estimation procedure that achieves the optimal convergence rates of both the frontier parameters and the nonparametric function of the probability of being efficient. We derive the asymptotic bias and variance of the proposed estimator and establish its asymptotic normality. In addition, we discuss how to test for parametric specification of the proportion of firms that are fully efficient as well as how to test for the presence of fully inefficient firms, based on the sieve likelihood ratio statistics. The finite sample behaviors of the proposed estimation procedure and tests are examined using Monte Carlo simulations. An empirical application is further presented to demonstrate the usefulness of the proposed methodology.
Zero-inefficiency; Varying proportion; Semiparametric approach; Backfitting local maximum likelihood; Sieve likelihood ratio statistics;
http://www.sciencedirect.com/science/article/pii/S0377221715009455
Tran, Kien C.
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:1-292015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:1-29
article
ELECTRE: A comprehensive literature review on methodologies and applications
Multi-criteria decision analysis (MCDA) is a valuable resource within operations research and management science. Various MCDA methods have been developed over the years and applied to decision problems in many different areas. The outranking approach, and in particular the family of ELECTRE methods, continues to be a popular research field within MCDA, despite its more than 40 years of existence. In this paper, a comprehensive literature review of English scholarly papers on ELECTRE and ELECTRE-based methods is performed. Our aim is to investigate how ELECTRE and ELECTRE-based methods have been considered in various areas. This includes areas of application, modifications to the methods, comparisons with other methods, and general studies of the ELECTRE methods. Although a significant amount of the literature on ELECTRE is in languages other than English, we focus only on English-language articles, since many researchers would be unable to study work in the other languages. Each paper is categorized according to its main focus with respect to ELECTRE, i.e. if it considers an application, performs a review, considers ELECTRE with respect to the problem of selecting an MCDA method or considers some methodological aspects of ELECTRE. A total of 686 papers are included in the review. The group of papers considering an application of ELECTRE consists of 544 papers, and these are further categorized into 13 application areas and a number of sub-areas. In addition, all papers are classified according to the country of author affiliation, journal of publication, and year of publication. For the group of applied papers, the distributions by ELECTRE version vs. application area and by ELECTRE version vs. year of publication are provided. We believe that this paper can be a valuable source of information for researchers and practitioners in the field of MCDA and ELECTRE in particular.
Multiple criteria decision aiding (MCDA); Outranking; ELECTRE; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221715006529
Govindan, Kannan
Jepsen, Martin Brandt
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:291-3042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:291-304
article
Parameters measuring bank risk and their estimation
The paper develops estimation of three parameters of banking risk based on an explicit model of expected utility maximization by financial institutions subject to the classical technology restrictions of neoclassical production theory. The parameters are risk aversion, prudence or downside risk aversion and generalized risk resulting from a factor model of loan prices. The model can be estimated using standard econometric techniques, like GMM for dynamic panel data and latent factor analysis for the estimation of covariance matrices. An explicit functional form for the utility function is not needed and we show how measures of risk aversion and prudence (downside risk aversion) can be derived and estimated from the model. The model is estimated using data for Eurozone countries and we focus particularly on (i) the use of the modeling approach as a device close to an “early warning mechanism”, (ii) the bank- and country-specific estimates of risk aversion and prudence (downside risk aversion), and (iii) the derivation of a generalized measure of risk that relies on loan-price uncertainty. Moreover, the model provides estimates of loan price distortions and thus, allocative efficiency.
Financial stability; Banking; Expected utility maximization; Sub-prime crisis; Financial crisis;
http://www.sciencedirect.com/science/article/pii/S0377221715008991
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:164-1782015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:164-178
article
Cyclic inventory routing in a line-shaped network
The inventory routing problem (IRP) is a very challenging optimization task that couples two of the most important components of supply chain management, i.e., inventory control and transportation. Routes of vehicles are to be determined to repeatedly resupply multiple customers with constant demand rates from a single depot. We alter this basic IRP setting in two respects: (i) only cyclic tours are allowed, i.e., each vehicle continuously tours its dedicated route, and (ii) all customers are located along a line. Both characteristics occur, for instance, in liner shipping (when feeder ships service inland ports along a stream) and in facility logistics (when tow trains deliver part bins to the stations of an assembly line). We formalize the resulting problem setting, identify NP-hard as well as polynomially solvable cases, and develop suited solution procedures.
Inventory routing; Cyclic routes; Container shipping; Facility logistics;
http://www.sciencedirect.com/science/article/pii/S0377221715009935
Zenker, Michael
Emde, Simon
Boysen, Nils
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:65-762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:65-76
article
A mean-shift algorithm for large-scale planar maximal covering location problems
The planar maximal covering location problem (PMCLP) concerns the placement of a given number of facilities anywhere on a plane to maximize coverage. Solving PMCLP requires identifying a candidate location set (CLS) on the plane before reducing it to the relatively simple maximal covering location problem (MCLP). Techniques for identifying the CLS have mostly been dominated by the well-known circle intersect points set (CIPS) method. In this paper we first review PMCLP, and then discuss the advantages and weaknesses of the CIPS approach. We then present a mean-shift based algorithm, called MSMC, for treating large-scale PMCLPs. We test the performance of MSMC against the CIPS approach on randomly generated data sets that vary in size and distribution pattern. The experimental results illustrate MSMC’s outstanding performance in tackling large-scale PMCLPs.
Location; Large scale optimization; Planar maximal covering location problem; Mean shift;
http://www.sciencedirect.com/science/article/pii/S0377221715008309
He, Zhou
Fan, Bo
Cheng, T.C.E.
Wang, Shou-Yang
Tan, Chin-Hon
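A hedged sketch of the mean-shift step presumably underlying MSMC (the authors' exact procedure may differ): a candidate facility repeatedly moves to the weighted centroid of the demand points it currently covers, which typically grows its covered weight. The points and radius below are made up.

```python
def covered(center, points, r):
    """Demand points (x, y, weight) within distance r of `center`."""
    cx, cy = center
    return [(x, y, w) for (x, y, w) in points if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]

def mean_shift(center, points, r, max_iter=100, tol=1e-9):
    """Shift `center` to the weighted mean of covered points until it stabilizes."""
    for _ in range(max_iter):
        inside = covered(center, points, r)
        if not inside:
            return center
        tw = sum(w for _, _, w in inside)
        new = (sum(x * w for x, _, w in inside) / tw,
               sum(y * w for _, y, w in inside) / tw)
        if (new[0] - center[0]) ** 2 + (new[1] - center[1]) ** 2 < tol:
            return new
        center = new
    return center

# (x, y, weight) demand points; the radius is chosen so shifting gains coverage
pts = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (2.6, 0, 1)]
c = mean_shift((0.0, 0.0), pts, r=1.8)
print(c, sum(w for _, _, w in covered(c, pts, 1.8)))
```

Starting at the origin the facility covers only two points; the shifted position covers all four, which is the coverage-growing behavior a CLS built from mean-shift modes exploits.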
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:251-2612015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:251-261
article
Sparse and robust normal and t-portfolios by penalized Lq-likelihood minimization
Two important problems arising in traditional asset allocation methods are the sensitivity to estimation error of portfolio weights and the high dimensionality of the set of candidate assets. In this paper, we address both issues by proposing a new criterion for portfolio selection. The new criterion is a two-stage description of the available information, where the q-entropy, a generalized measure of information, is used to code the uncertainty of the data given the parametric model and the uncertainty related to the model choice. The information about the model is coded in terms of a prior distribution that promotes asset weights sparsity. Our approach carries out model selection and estimation in a single step, by selecting a few assets and estimating their portfolio weights simultaneously. The resulting portfolios are doubly robust, in the sense that they can tolerate deviations from both assumed data model and prior distribution for model parameters. Empirical results on simulated and real-world data support the validity of our approach.
Investment analysis; Penalized least squares; q-entropy; Sparsity; Index tracking;
http://www.sciencedirect.com/science/article/pii/S0377221715008127
Giuzio, Margherita
Ferrari, Davide
Paterlini, Sandra
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:498-5052015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:498-505
article
Models and forecasts of credit card balance
Credit card balance is an important factor in retail finance. In this article we consider multivariate models of credit card balance and use a real dataset of credit card data to test the forecasting performance of the models. Several models are considered in a cross-sectional regression context: ordinary least squares, two-stage and mixture regression. After that, we take advantage of the time series structure of the data and model credit card balance using a random effects panel model. The most important predictor variable is previous lagged balance, but other application and behavioural variables are also found to be important. Finally, we present an investigation of forecast accuracy on credit card balance 12 months ahead using each of the proposed models. The panel model is found to be the best model for forecasting credit card balance in terms of mean absolute error (MAE) and the two-stage regression model performs best in terms of root mean squared error (RMSE).
Credit cards; Balance estimation; Mixture model; Panel model;
http://www.sciencedirect.com/science/article/pii/S0377221714010157
Hon, Pak Shun
Bellotti, Tony
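The closing comparison in the abstract above (the panel model best on MAE, the two-stage regression best on RMSE) is easy to reproduce in miniature: the two metrics can rank forecasts differently because RMSE punishes a single large miss more heavily. All numbers below are made up.

```python
import math

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

actual   = [100, 200, 300, 400]
panel    = [100, 200, 300, 460]  # mostly exact, one large miss
twostage = [120, 180, 320, 380]  # uniformly moderate errors

print(mae(actual, panel), mae(actual, twostage))    # panel wins on MAE
print(rmse(actual, panel), rmse(actual, twostage))  # twostage wins on RMSE
```

With these toy numbers the panel-style forecast has the lower MAE (15 vs. 20) while the two-stage-style forecast has the lower RMSE (20 vs. 30), mirroring the split result reported in the paper.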
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:427-4392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:427-439
article
An empirical comparison of classification algorithms for mortgage default prediction: evidence from a distressed mortgage market
This paper evaluates the performance of a number of modelling approaches for future mortgage default status. Boosted regression trees, random forests, penalised linear and semi-parametric logistic regression models are applied to four portfolios of over 300,000 Irish owner-occupier mortgages. The main findings are that the selected approaches have varying degrees of predictive power and that boosted regression trees significantly outperform logistic regression. This suggests that boosted regression trees can be a useful addition to the current toolkit for mortgage credit risk assessment by banks and regulators.
Boosting; Random forests; Semi-parametric models; Mortgages; Credit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715008383
Fitzpatrick, Trevor
Mues, Christophe
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:771-7832015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:771-783
article
Controlling for spatial heterogeneity in nonparametric efficiency models: An empirical proposal
This paper introduces an original methodology, derived from the robust order-m model, to estimate technical efficiency with spatially autocorrelated data using a nonparametric approach. The methodology aims to identify potential competitors on a subset of productive units identified through spatial dependence, thus focusing on peers located in close proximity to each productive unit. The proposed method is illustrated in a simulation setting that verifies the territorial differences between the nonparametric unconditioned and conditioned estimates. A firm-level application to the Italian industrial districts is proposed in order to highlight the ability of the new method to separate the global intangible spatial effect from the efficiency term on real data.
Productive efficiency; Conditional nonparametric efficiency; Spatial heterogeneity; Industrial districts;
http://www.sciencedirect.com/science/article/pii/S0377221715009765
Vidoli, Francesco
Canello, Jacopo
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:226-2382015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:226-238
article
Dual sourcing under disruption risk and cost improvement through learning
As suppliers are crucial for successful supply chain management, buying companies have to deal with the risks of supply disruptions due to, e.g., labor strikes, natural disasters, supplier bankruptcy, and business failures. Dual sourcing is one potential countermeasure; however, when applying it one loses the full potential of economies of scale. To provide decision support, we analyze the trade-off between risk reduction via dual sourcing under disruption risk and learning benefits on sourcing costs induced by long-term relationships with a single supplier from a buyer’s perspective. The buyer’s optimal volume allocation strategy over a finite dynamic planning horizon is identified and we find that a symmetric demand allocation is not optimal, even if suppliers are symmetric. We obtain insights on how reliability, cost and learning ability of potential suppliers impact the buyer’s sourcing decision and find that the allocation balance increases with learning rate and decreases with reliability and demand level. Further, we quantify the benefit of dual sourcing compared to single sourcing, which increases with learning rate and decreases with reliability. When comparing the optimal policy to heuristic dual sourcing policies, a simple 75:25 allocation rule turns out to be a very robust policy. Finally, we perform sensitivity analysis and find that increasing certainty about supplier reliability and increasing risk aversion of a buyer yield more balanced supply volume allocations among the available suppliers and that the advantage of dual sourcing decreases with uncertainty about supplier reliability. Further, we discuss the impact of demand uncertainty.
Dual sourcing; Learning effects; Disruption risk;
http://www.sciencedirect.com/science/article/pii/S0377221715008413
Silbermayr, Lena
Minner, Stefan
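The trade-off analyzed above can be caricatured in a few lines: concentrating volume at one supplier buys more learning (a concave purchase cost), while splitting reduces exposure to a disruption. The learning-curve form is standard, but the convex shortage penalty and every number below are assumptions for illustration, not the paper's model.

```python
import math

def unit_cost(volume, c1=10.0, learning_rate=0.9):
    """Learning curve: doubling volume multiplies unit cost by `learning_rate`."""
    b = -math.log(learning_rate, 2)
    return c1 * max(volume, 1.0) ** (-b)

def cost_components(allocation, demand=100.0, p_disrupt=0.05, penalty=30.0):
    """Purchase cost (with supplier-specific learning) and expected disruption
    penalty for a demand split; `allocation` holds each supplier's share."""
    purchase, exp_penalty = 0.0, 0.0
    for share in allocation:
        q = share * demand
        if q > 0:
            purchase += q * unit_cost(q)                         # learning needs volume
            exp_penalty += p_disrupt * penalty * q * q / demand  # convex: big shares hurt more
    return purchase, exp_penalty

for alloc in [(1.0, 0.0), (0.75, 0.25), (0.5, 0.5)]:
    p, e = cost_components(alloc)
    print(alloc, round(p, 1), round(e, 1), round(p + e, 1))
```

Single sourcing minimizes the purchase cost and an even split minimizes the expected penalty; where the optimum lies between these extremes depends on reliability and learning rate, which is exactly the comparative statics the paper works out (including why an asymmetric split such as 75:25 can be robust).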
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:931-9442015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:931-944
article
An experimental investigation into the role of simulation models in generating insights
It is often claimed that discrete-event simulation (DES) models are useful for generating insights. There is, however, almost no empirical evidence to support this claim. To address this issue we perform an experimental study which investigates the role of DES, specifically the simulation animation and statistical results, in generating insight (an ‘Aha!’ moment). Undergraduate students were placed in three separate groups and given a task to solve using a model with only animation, a model with only statistical results, or using no model at all. The task was based around the UK's NHS111 telephone service for non-emergency health care. Performance was measured based on whether participants solved the task with insight, the time taken to achieve insight and the participants’ problem-solving patterns. The results show that there is some association between insight generation and the use of a simulation model, particularly the use of the statistical results generated from the model. While there is no evidence that insights were generated more frequently from statistical results than the use of animation, the participants using the statistical results generated insights more rapidly.
Discrete-event simulation; Insight; Animation; Experimentation; Behavioural operational research;
http://www.sciencedirect.com/science/article/pii/S037722171500884X
Gogi, Anastasia
Tako, Antuela A.
Robinson, Stewart
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:816-8262015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:816-826
article
Model-based organizational decision making: A behavioral lens
Operational research assumes that organizational decision-making processes can be improved by making them more rigorous and analytical through the application of quantitative and qualitative modeling. However, we have only a limited understanding of how modeling actually affects organizational decision-making behavior, positively or negatively. Drawing from the Carnegie School's tradition of organizational research, this paper identifies two types of organizational decision-making activities where modeling can be applied: routine decision making and problem solving. These two types of decision-making activities have very different implications for model-based decision support, both in terms of the positive and negative behavioral impacts associated with modeling as well as the criteria used to evaluate models and modeling practices. Overall, the paper offers novel insights that help understand why modeling activities are successful (or not), explains why practitioners adopt some approaches more readily than others and points to new opportunities for empirical research and method development.
Behavioral OR; Behavioral theory of the firm; Bounded rationality; Organizational behavior; Decision making;
http://www.sciencedirect.com/science/article/pii/S0377221715007948
Luoma, Jukka
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:728-7392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:728-739
article
Strategic entry in a triopoly market of firms with asymmetric cost structures
This paper examines the strategic investment timing decision in a triopoly market comprising firms with asymmetric cost structures. We present three novel results. First, in the case where there are relatively small cost asymmetries between firms and a relatively small first-mover advantage, the firm with the lowest cost structure is not always the first investor. In other cases, the firm with the lowest cost structure is the first investor. Second, an increase in volatility increases the possibility that a firm without the lowest cost structure is the first investor. Finally, even in the three-asymmetric-firm model, we show that the first investor threshold is larger in a triopoly than in a duopoly, although it is smaller in a duopoly than in a monopoly.
Investment analysis; Real options; Competition; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171500819X
Shibata, Takashi
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:77-902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:77-90
article
An iterated multi-stage selection hyper-heuristic
There is growing interest in the design of reusable general purpose search methods that are applicable to different problems instead of tailored solutions to a single particular problem. Hyper-heuristics have emerged as such high level methods that explore the space formed by a set of heuristics (move operators) or heuristic components for solving computationally hard problems. A selection hyper-heuristic mixes and controls a predefined set of low level heuristics with the goal of improving an initially generated solution by choosing and applying an appropriate heuristic to a solution in hand and deciding whether to accept or reject the new solution at each step under an iterative framework. Designing an adaptive control mechanism for the heuristic selection and combining it with a suitable acceptance method is a major challenge, because both components can influence the overall performance of a selection hyper-heuristic. In this study, we describe a novel iterated multi-stage hyper-heuristic approach which cycles through two interacting hyper-heuristics and operates based on the principle that not all low level heuristics for a problem domain would be useful at any point of the search process. The empirical results on a hyper-heuristic benchmark indicate the success of the proposed selection hyper-heuristic across six problem domains, beating the state-of-the-art approach.
Heuristics; Combinatorial optimisation; Hyper-heuristic; Meta-heuristic; Hybrid approach;
http://www.sciencedirect.com/science/article/pii/S0377221715008255
Kheiri, Ahmed
Özcan, Ender
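The select-apply-accept loop described in the abstract above can be skeletonized as follows. The toy objective, the three low-level heuristics, and the simple score-based selection with a non-worsening acceptance rule are illustrative assumptions, not the paper's multi-stage framework.

```python
import random

def objective(x):
    return sum(v * v for v in x)  # toy problem: drive every component to zero

def perturb_all(x, rng):
    return [v + rng.choice([-1, 1]) for v in x]

def zero_one(x, rng):
    i = rng.randrange(len(x))  # pick one component and reset it
    return [0 if j == i else v for j, v in enumerate(x)]

def shrink(x, rng):
    return [v // 2 for v in x]

LOW_LEVEL = [perturb_all, zero_one, shrink]

def hyper_heuristic(x, iters=200, seed=1):
    rng = random.Random(seed)
    scores = [1.0] * len(LOW_LEVEL)  # adaptive weights over low-level heuristics
    best = list(x)
    for _ in range(iters):
        h = rng.choices(range(len(LOW_LEVEL)), weights=scores)[0]  # selection
        cand = LOW_LEVEL[h](best, rng)                             # application
        if objective(cand) <= objective(best):                     # acceptance
            best = cand
            scores[h] += 1.0  # reward heuristics that keep producing accepted moves
    return best

print(hyper_heuristic([9, -7, 4]))
```

The paper's design goes further by cycling between two interacting hyper-heuristics in stages; this sketch keeps a single loop to show where selection and acceptance each sit.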
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:667-6762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:667-676
article
Accurate algorithms for identifying the median ranking when dealing with weak and partial rankings under the Kemeny axiomatic approach
Preference rankings appear in virtually all fields of science (political sciences, behavioral sciences, machine learning, decision making and so on). The well-known social choice problem consists in finding a reasonable procedure to aggregate the preferences or rankings expressed by subjects into a collective decision. This turns out to be equivalent to estimating the consensus (central) ranking from data, which is known to be an NP-hard problem. A useful solution was proposed by Emond and Mason in 2002 through the Branch-and-Bound algorithm (BB) within the Kemeny and Snell axiomatic framework. As a matter of fact, BB is a time-demanding procedure when the complexity of the problem becomes intractable, i.e. a large number of objects, with weak and partial rankings, in the presence of a low degree of consensus. As an alternative, we propose an accurate heuristic algorithm called FAST that finds at least one of the consensus ranking solutions found by BB while saving a great deal of computational time. In addition, we show that the building block of FAST is an algorithm called QUICK that already finds one of the BB solutions, so it can fruitfully be used to speed up the overall searching procedure even further when the number of objects is low. Simulation studies and applications to real data show the accuracy and the computational efficiency of our proposal.
Preference rankings; Median ranking; Kemeny distance; Social choice problem; Branch-and-bound algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715008048
Amodio, S.
D’Ambrosio, A.
Siciliano, R.
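The Kemeny-Snell setting above can be made concrete for complete rankings (the paper also handles weak and partial rankings, which this sketch does not): the median ranking minimizes the summed pairwise-disagreement distance. Brute force stands in here for the BB/FAST/QUICK algorithms.

```python
from itertools import permutations

def kemeny_distance(r1, r2):
    """Number of object pairs on which two complete rankings disagree.
    A ranking is a tuple of objects from most to least preferred."""
    items = list(r1)
    pos1 = {o: i for i, o in enumerate(r1)}
    pos2 = {o: i for i, o in enumerate(r2)}
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:  # opposite orders
                d += 1
    return d

def median_ranking(rankings):
    """Consensus ranking minimizing total Kemeny distance (exhaustive search)."""
    objects = rankings[0]
    return min(permutations(objects),
               key=lambda c: sum(kemeny_distance(c, r) for r in rankings))

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(median_ranking(votes))
```

The exhaustive minimum over all permutations is exactly why the problem is NP-hard and why the paper's algorithms matter: the candidate space grows factorially in the number of objects.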
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:677-6822015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:677-682
article
A new model for intuitionistic fuzzy multi-attributes decision making
In this study, we discuss linear orders of intuitionistic fuzzy values (IFVs). Then we introduce an intuitionistic fuzzy weighted arithmetic average operator. Some fundamental properties of this operator are investigated. Based on the introduced operator, we propose a new model for intuitionistic fuzzy multi-attributes decision making. The proposed model deals with the degree of membership and degree of nonmembership separately. It is resistant to extreme data.
Decision analysis; Intuitionistic fuzzy sets; Weighted arithmetic average; Weighted geometric average; Admissible order;
http://www.sciencedirect.com/science/article/pii/S0377221715007997
Ouyang, Yao
Pedrycz, Witold
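A component-wise weighted arithmetic average of intuitionistic fuzzy values, which treats membership and nonmembership separately as the abstract above describes, can be sketched as below. Whether this matches the authors' exact operator is an assumption; the ratings and weights are made up.

```python
def ifwa(values, weights):
    """Weighted arithmetic average of intuitionistic fuzzy values.
    `values` is a list of (mu, nu) pairs with mu + nu <= 1; weights sum to 1.
    Membership and nonmembership degrees are averaged separately."""
    assert abs(sum(weights) - 1.0) < 1e-9
    mu = sum(w * m for (m, _), w in zip(values, weights))
    nu = sum(w * n for (_, n), w in zip(values, weights))
    return mu, nu

ratings = [(0.6, 0.3), (0.8, 0.1), (0.5, 0.4)]  # one alternative on three attributes
weights = [0.5, 0.3, 0.2]
print(ifwa(ratings, weights))
```

Compared with the usual product-based intuitionistic aggregation (where a single extreme value can dominate the result), a plain arithmetic average reacts less sharply to one extreme (mu, nu) pair, which is in the spirit of the claimed resistance to extreme data.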
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:706-7162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:706-716
article
Optimizing layouts of initial AFV refueling stations targeting different drivers, and experiments with agent-based simulations
The number of refueling stations for AFVs (alternative fuel vehicles) is limited during the early stages of the diffusion of AFVs. Different layouts of these initial stations will result in different degrees of driver concern regarding refueling and will therefore influence individuals’ decisions to adopt AFVs. The question becomes “what is an optimal layout for these initial stations? Should it target all drivers or just a portion of them, and if so, which portion?” Further, how does the number of initial AFV refueling stations influence the adoption of AFVs? This paper explores these questions with agent-based simulations. Using Shanghai as the basis of computational experiments, this paper first generates different optimal layouts using a genetic algorithm to minimize the total concern of different targeted drivers and then conducts agent-based simulations on the diffusion of AFVs with these layouts. The main findings of this study are that (1) targeting drivers in the city center can induce the fastest diffusion of AFVs if AFV technologies are mature and (2) it is possible that a larger number of initial AFV refueling stations may result in slower diffusion of AFVs because these initial stations may not have sufficient customers to survive. The simulations can provide some insights for cities that are trying to promote the diffusion of AFVs.
Simulation; Optimal layout; Alternative fuel vehicles; Initial refueling stations; Agent-based model;
http://www.sciencedirect.com/science/article/pii/S0377221715008218
Zhao, Jiangjiang
Ma, Tieju
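The genetic-algorithm step in the abstract above (choose initial station sites minimizing total driver concern) can be caricatured as follows. The concern proxy (squared distance to the nearest open station), the driver and candidate coordinates, and the GA settings are all illustrative assumptions; the paper's concern model and its agent-based diffusion layer are far richer.

```python
import random

DRIVERS = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9), (5, 5)]
CANDIDATES = [(1, 1), (2, 2), (5, 5), (8, 8), (9, 9)]
K = 2  # number of initial stations to open

def concern(layout):
    """Total squared distance from each driver to the nearest open station."""
    return sum(min((dx - sx) ** 2 + (dy - sy) ** 2 for sx, sy in layout)
               for dx, dy in DRIVERS)

def ga(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.sample(CANDIDATES, K)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=concern)
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = list(set(a) | set(b))        # crossover: mix the parents' sites
            child = rng.sample(pool, K) if len(pool) >= K else list(a)
            if rng.random() < 0.2:              # mutation: swap in a random site
                child[rng.randrange(K)] = rng.choice(CANDIDATES)
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=concern)

best = ga()
print(sorted(best), concern(best))
```

In the paper each candidate layout is then fed into an agent-based simulation of AFV adoption; here the GA only illustrates how layouts targeting different driver clusters get scored and evolved.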
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:740-7502015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:740-750
article
Inverse portfolio problem with coherent risk measures
In general, a portfolio problem minimizes risk (or negative utility) of a portfolio of financial assets with respect to portfolio weights subject to a budget constraint. The inverse portfolio problem then arises when an investor assumes that his/her risk preferences have a numerical representation in the form of a certain class of functionals, e.g. in the form of expected utility, coherent risk measure or mean-deviation functional, and aims to identify such a functional, whose minimization results in a portfolio, e.g. a market index, that he/she is most satisfied with. In this work, the portfolio risk is determined by a coherent risk measure, and the rate of return of investor’s preferred portfolio is assumed to be known. The inverse portfolio problem then recovers investor’s coherent risk measure either through finding a convex set of feasible probability measures (risk envelope) or in the form of either mixed CVaR or negative Yaari’s dual utility. It is solved in single-period and multi-period formulations and is demonstrated in a case study with the FTSE 100 index.
Decision making under risk; Coherent risk measure; Portfolio optimization; Inverse portfolio problem;
http://www.sciencedirect.com/science/article/pii/S0377221715008929
Grechuk, Bogdan
Zabarankin, Michael
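One building block of the risk measures recovered in the paper above, CVaR, has a simple form for equally likely scenarios: the average of the worst (1 - alpha) fraction of losses. The scenario losses below are invented; mixed CVaR, as in the paper, would combine several alpha levels with weights.

```python
def cvar(losses, alpha):
    """Conditional value-at-risk of equally likely scenario losses:
    the average of the worst (1 - alpha) fraction of outcomes.
    Assumes (1 - alpha) * len(losses) is close to an integer."""
    n = len(losses)
    k = max(1, int(round((1 - alpha) * n)))   # tail size
    tail = sorted(losses, reverse=True)[:k]   # the k largest losses
    return sum(tail) / k

losses = [-5, -2, 0, 1, 3, 4, 6, 8, 12, 20]  # negative values are gains
print(cvar(losses, alpha=0.8))  # average of the two worst losses
print(cvar(losses, alpha=0.9))  # the single worst loss
```

A mixed CVaR, one of the functional forms the inverse problem can recover, would be a weighted sum such as `0.5 * cvar(losses, 0.8) + 0.5 * cvar(losses, 0.95)`.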
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:908-9182015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:908-918
article
Recent evidence on the effectiveness of group model building
Group model building (GMB) is a participatory approach to using system dynamics in group decision-making and problem structuring. This paper considers the published quantitative evidence base for GMB since the earlier literature review by Rouwette et al. (2002), to consider the level of understanding on three basic questions: what does it achieve, when should it be applied, and how should it be applied or improved? There have now been at least 45 such studies since 1987, utilising controlled experiments, field experiments, pretest/posttest, and observational research designs. There is evidence of GMB achieving a range of outcomes, particularly with regard to the behaviour of participants and their learning through the process. There is some evidence that GMB is more effective at supporting communication and consensus than traditional facilitation; however, GMB has not been compared to other problem structuring methods. GMB has been successfully applied in a range of contexts, but there is little evidence on which to select between different GMB tools, or to understand when certain tools may be more appropriate. There is improving evidence on how GMB works, but this has not yet been translated into changing practice. Overall, the evidence base for GMB has continued to improve, supporting its use for improving communication and agreement between participants in group decision processes. This paper argues that future research in group model building would benefit from three main shifts: from single cases to multiple cases; from controlled settings to applied settings; and by augmenting survey results with more objective measures.
Behavioural OR; Group model building; System dynamics; Evidence; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221715006323
Scott, Rodney J
Cavana, Robert Y
Cameron, Donald
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:397-4062015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:397-406
article
Time-to-profit scorecards for revolving credit
This paper defines and models time-to-profit for the first time for credit acceptance decisions within the context of revolving credit. This requires the definition of a time-related event: a customer is profitable when the monthly cumulative return is at least one (i.e. cumulative profits cover the outstanding balance). Time-to-profit scorecards were produced for a data set of revolving credit from a Colombian lending institution which included socio-demographic and first purchase individual characteristics. Results show that it is possible to obtain good classification accuracy and improve portfolio returns, which are continuous by definition, through the use of survival models for binary events (i.e. either being profitable or not). It is also shown how predicting time-to-profit can be used for investment planning purposes of credit programmes. It is possible to identify the earliest point in time at which a customer is profitable and hence generates internal (organic) funds for a credit programme to continue growing and become sustainable. For survival models the effect of segmentation on loan duration was explored. Results were similar in terms of classification accuracy and identifying organic growth opportunities. In particular, loan duration and credit limit usage have a significant economic impact on time-to-profit. This paper confirms that high risk credit programmes can be profitable at different points in time depending on loan duration. Furthermore, existing customers may provide internal funds for the credit programme to continue growing.
Risk management; Time-to-profit; Profit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715008942
Sanchez-Barrios, Luis Javier
Andreeva, Galina
Ansell, Jake
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1102-11122015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1102-1112
article
Estimating risk preferences of bettors with different bet sizes
We extend the literature on risk preferences of a representative bettor by including odds-dependent bet sizes in our estimations. Accounting for different bet sizes largely reduces the standard errors of all coefficients. Substituting the coefficients from the model with equal bet sizes into the model with odds-dependent sizes leads to a sharp decline in the likelihood, which shows that accounting for different amounts is important. Our estimations strongly reject the hypothesis that the overbetting of outcomes with low probabilities (favorite-longshot bias) can be explained by risk-seeking bettors. Depending on the exact specification within cumulative prospect theory, the data can best be described by an overweighting of small probabilities which is more pronounced in the gain domain. Models allowing for two probability-weighting parameters each in the gain and loss domains are superior.
Applied probability; Betting markets; Favorite-longshot bias; Estimation of risk preferences; Overweighting of small probabilities;
http://www.sciencedirect.com/science/article/pii/S0377221715008954
Feess, Eberhard
Müller, Helge
Schumacher, Christoph
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:517-5242015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:517-524
article
Spatial dependence in credit risk and its improvement in credit scoring
Credit scoring models are important tools in the credit granting process. These models measure the credit risk of a prospective client based on idiosyncratic variables and macroeconomic factors. However, small and medium sized enterprises (SMEs) are subject to the effects of the local economy. From a data set with the localization and default information of 9 million Brazilian SMEs, provided by Serasa Experian (the largest Brazilian credit bureau), we propose a measure of the local risk of default based on the application of ordinary kriging. This variable has been included in logistic credit scoring models as an explanatory variable. These models have shown better performance when compared to models without this variable. A gain of around 7 percentage points in KS and Gini was observed.
Risk analysis; Spatial dependence; SME credit risk; Ordinary kriging; Credit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715006463
Fernandes, Guilherme Barreto
Artes, Rinaldo
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:628-6302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:628-630
article
A note: An improved upper bound for the online inventory problem with bounded storage and order costs
This work gives an improved competitive analysis for an online inventory problem with bounded storage and order costs proposed by Larsen and Wøhlk (2010). We improve the upper bound of the competitive ratio from (2+1/k)M/m to less than (4/5)(2+1/k)M/m, where k, M and m are parameters of the given problem. The key idea is to use linear-fractional programming and primal-dual analysis methods to find the upper bound of a central inequality.
Inventory; Online algorithms; Competitive analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715008875
Dai, Wenqiang
Jiang, Qingzhu
Feng, Yi
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:417-4262015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:417-426
article
Instance-based credit risk assessment for investment decisions in P2P lending
Recent years have witnessed increased attention on peer-to-peer (P2P) lending, which provides an alternative way of financing without the involvement of traditional financial institutions. A key challenge for personal investors in P2P lending marketplaces is the effective allocation of their money across different loans by accurately assessing the credit risk of each loan. Traditional rating-based assessment models cannot meet the needs of individual investors in P2P lending, since they do not provide an explicit mechanism for asset allocation. In this study, we propose a data-driven investment decision-making framework for this emerging market. We designed an instance-based credit risk assessment model, which has the ability of evaluating the return and risk of each individual loan. Moreover, we formulated the investment decision in P2P lending as a portfolio optimization problem with boundary constraints. To validate the proposed model, we performed extensive experiments on real-world datasets from two notable P2P lending marketplaces. Experimental results revealed that the proposed model can effectively improve investment performances compared with existing methods in P2P lending.
Data mining; P2P lending; Credit risk assessment; Instance-based method; Investment decisions;
http://www.sciencedirect.com/science/article/pii/S0377221715004610
Guo, Yanhong
Zhou, Wenjun
Luo, Chunyu
Liu, Chuanren
Xiong, Hui
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:683-6902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:683-690
article
Optimal search for parameters in Monte Carlo simulation for derivative pricing
This paper provides a novel and general framework for the problem of searching parameter space in Monte Carlo simulations. We propose a deterministic online algorithm and a randomized online algorithm to search for suitable parameter values for derivative pricing which are needed to achieve desired precisions. We also give the competitive ratios of the two algorithms and prove the optimality of the algorithms. Experimental results on the performance of the algorithms are presented and analyzed as well.
Finance; Monte Carlo simulation; Deterministic online algorithm; Randomized online algorithm; Competitive ratio;
http://www.sciencedirect.com/science/article/pii/S0377221715008164
Wang, Chuan-Ju
Kao, Ming-Yang
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1024-10322015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1024-1032
article
Optimization and strategic behavior in a passenger–taxi service system
We study a passenger–taxi problem in this paper. The objective is to maximize the social welfare and optimize the allocation of taxi market resources. We analyze the strategic behavior of passengers who decide whether to join the system or balk in both observable and unobservable cases. In the observable case, we obtain the optimal selfish threshold that maximizes their individual revenues and give conditions for the existence of the optimal selfless threshold that maximizes the social welfare. In the unobservable case, we discuss the equilibrium strategies for the selfish passengers and derive the optimal arrival rate for the socially concerned passengers. Further, we analyze how the government controls the number of taxis by subsidizing taxis or levying a tax on taxis.
Double-ended queueing system; Equilibrium; Optimization; Strategic behavior; Threshold;
http://www.sciencedirect.com/science/article/pii/S0377221715006645
Shi, Ying
Lian, Zhaotong
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:968-9822015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:968-982
article
Boundary games: How teams of OR practitioners explore the boundaries of intervention
An operational research (OR) practitioner designing an intervention needs to engage in a practical process for choosing methods and implementing them. When a team of OR practitioners does this, and/or clients and stakeholders are involved, the social dynamics of designing the approach can be complex. So far, hardly any theory has been provided to support our understanding of these social dynamics. To this end, our paper offers a theory of ‘boundary games’. It is proposed that decision making on the configuration of the OR approach is shaped by communications concerning boundary judgements. These communications involve the OR practitioners in the team (and other participants, when relevant) ‘setting’, ‘following’, ‘enhancing’, ‘wandering outside’, ‘challenging’ and ‘probing’ boundaries concerning the nature of the context and the methods to be used. Empirical vignettes are provided of a project where three OR practitioners with different forms of methodological expertise collaborated on an intervention to support a Regional Council in New Zealand. In deciding how to approach a problem structuring workshop where the Regional Council employees would be participants, the OR team had to negotiate their methodological boundaries in some detail. The paper demonstrates that the theory of boundary games helps to analyse and describe the shifts in thinking that take place in this kind of team decision making. A number of implications for OR practitioners are discussed, including how this theory can contribute to reflective practice and improve awareness of what is happening during communications with OR colleagues, clients and participants.
Behavioural OR; Boundary games; Critical systems thinking; Multimethodology; Process of OR;
http://www.sciencedirect.com/science/article/pii/S0377221715007237
Velez-Castiblanco, Jorge
Brocklesby, John
Midgley, Gerald
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:239-2502015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:239-250
article
Humanitarian logistics network design under mixed uncertainty
In this paper, we address a two-echelon humanitarian logistics network design problem involving multiple central warehouses (CWs) and local distribution centers (LDCs) and develop a novel two-stage scenario-based possibilistic-stochastic programming (SBPSP) approach. The research is motivated by the urgent need for designing a relief network in Tehran in preparation for potential earthquakes to cope with the main logistical problems in pre- and post-disaster phases. During the first stage, the locations for CWs and LDCs are determined along with the prepositioned inventory levels for the relief supplies. In this stage, inherent uncertainties in both supply and demand data as well as the availability level of the transportation network's routes after an earthquake are taken into account. In the second stage, a relief distribution plan is developed based on various disaster scenarios aiming to minimize: total distribution time, the maximum weighted distribution time for the critical items, total cost of unused inventories and weighted shortage cost of unmet demands. A tailored differential evolution (DE) algorithm is developed to find good enough feasible solutions within a reasonable CPU time. Computational results using real data reveal promising performance of the proposed SBPSP model in comparison with the existing relief network in Tehran. The paper contributes to the literature on optimization based design of relief networks under mixed possibilistic-stochastic uncertainty and supports informed decision making by local authorities in increasing resilience of urban areas to natural disasters.
Humanitarian logistics; Integrated stock prepositioning and relief distribution; Mixed possibilistic-stochastic programming; Differential evolution;
http://www.sciencedirect.com/science/article/pii/S0377221715008152
Tofighi, S.
Torabi, S.A.
Mansouri, S.A.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:577-5912015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:577-591
article
An approach using SAT solvers for the RCPSP with logical constraints
This paper presents a new solution approach to solve the resource-constrained project scheduling problem in the presence of three types of logical constraints. Apart from the traditional AND constraints with minimal time-lags, these precedences are extended to OR constraints and bidirectional (BI) relations. These logical constraints extend the set of relations between pairs of activities and make the RCPSP definition somewhat different from the traditional RCPSP research topics in the literature. It is known that the RCPSP with AND constraints, and hence its extension to OR and BI constraints, is NP-hard.
Project scheduling; RCPSP; AND/OR/BI constraints; SAT;
http://www.sciencedirect.com/science/article/pii/S0377221715008000
Vanhoucke, Mario
Coelho, José
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:281-2912013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:281-291
article
Should retail stores also RFID-tag ‘cheap’ items?
Despite their implementations in a wide variety of applications, there are very few instances where every item sold at a retail store is RFID-tagged. While the business case for expensive items to be RFID tagged may be somewhat clear, we claim that even ‘cheap’ items (i.e., those that cost less than an RFID tag) should be RFID tagged for retailers to benefit from efficiencies associated with item-level visibility. We study the relative price premiums a retailer with RFID tagged items can command as well as the retailer’s profit to illustrate the significance of item-level RFID-tagging both cheap and expensive items at a retail store. Our results indicate that, under certain conditions, item-level RFID tagging of items that cost less than an RFID tag has the potential to generate significant benefits to the retailer. The retailer is also better off tagging all items regardless of their relative price with respect to that of an RFID tag compared to the case where only the expensive item is RFID-tagged.
RFID; Partial and complete tagging; Retailing;
http://www.sciencedirect.com/science/article/pii/S0377221713007352
Piramuthu, Selwyn
Wochner, Sina
Grunow, Martin
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:125-1302013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:125-130
article
A multi-objective approach to supply chain visibility and risk
This paper investigates the twin effects of supply chain visibility (SCV) and supply chain risk (SCR) on supply chain performance. Operationally, SCV has been linked to the capability of sharing timely and accurate information on exogenous demand, quantity and location of inventory, transport related cost, and other logistics activities throughout an entire supply chain. Similarly, SCR can be viewed as the likelihood that an adverse event has occurred during a certain epoch within a supply chain and the associated consequences of that event which affects supply chain performance. Given the multi-faceted attributes of the decision-making process, which involves many stages, objectives, and stakeholders, research into this aspect of the supply chain calls for a fuzzy multi-objective decision-making approach to model SCV and SCR from an operational perspective. Hence, our model incorporates the objectives of SCV maximization, SCR minimization, and cost minimization under the constraints of budget, customer demand, production capacity, and supply availability. A numerical example is used to demonstrate the applicability of the model. Our results suggest that decision makers tend to mitigate SCR first then enhance SCV.
Supply chain management; Multiple objective programming; Supply chain visibility; Supply chain risk;
http://www.sciencedirect.com/science/article/pii/S0377221713007212
Yu, Min-Chun
Goh, Mark
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:131-1442013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:131-144
article
Revenue deficiency under second-price auctions in a supply-chain setting
Consider a firm, called the buyer, that satisfies its demand over two periods by assigning both demands to a supplier via a second-price procurement auction; call this the Standard auction. In the hope of lowering its purchase cost, the firm is considering an alternative procedure in which it will also allow bids on each period individually, where there can be either one or two winners covering the two demands; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction can in fact result in a higher cost to the buyer. We provide a bound on how much greater the buyer’s cost can be in the Multiple Winner auction and show that this bound is tight. We then sharpen this bound for two scenarios that can arise when the buyer announces his demands close to the beginning of the demand horizon. Under a monotonicity condition, we achieve a further sharpening of the bound in one of the scenarios. Finally, this monotonicity condition allows us to generalize this bound to the T-period case in which bids are allowed on any subset of period demands.
Procurement; Supply chain; Second-price auction; VCG mechanism; Revenue deficiency;
http://www.sciencedirect.com/science/article/pii/S0377221713006437
Romero Morales, Dolores
Steinberg, Richard
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:16-222013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:16-22
article
Two-stage stochastic linear programs with incomplete information on uncertainty
Two-stage stochastic linear programming is a classical model in operations research. The usual approach to this model requires detailed information on the distribution of the random variables involved. In this paper, we only assume the availability of information on the first and second moments of the random variables. By using duality of semi-infinite programming and adopting a linear decision rule, we show that a deterministic equivalence of the two-stage problem can be reformulated as a second-order cone optimization problem. Preliminary numerical experiments are presented to demonstrate the computational advantage of this approach.
Stochastic programming; Linear decision rule; Second order cone optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713006413
Ang, James
Meng, Fanwen
Sun, Jie
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:23-332013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:23-33
article
On distributional robust probability functions and their computations
Consider a random vector, and assume that a set of its moments information is known. Among all possible distributions obeying the given moments constraints, the envelope of the probability distribution functions is introduced in this paper as the distributional robust probability function. We show that such a function is computable in the bi-variate case under some conditions. Connections to the existing results in the literature and its applications in risk management are discussed as well.
Risk management; Distributional robust; Moment bounds; Semidefinite programming (SDP); Conic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713007285
Wong, Man Hong
Zhang, Shuzhong
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:114-1242013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:114-124
article
Product variety and channel structure strategy for a retailer-Stackelberg supply chain
Motivated by the observations that the direct sales channel is increasingly used for customized products and that retailers wield leadership, we develop in this paper a retailer-Stackelberg pricing model to investigate the product variety and channel structure strategies of the manufacturer in a circular spatial market. To avoid channel conflict, we consider the commonly observed case where the indirect channel sells standard products whereas the direct channel offers custom products. Our analytical results indicate that if the reservation price in the indirect channel is sufficiently low, adding the direct channel raises the unit wholesale price and retail price in the indirect channel due to customization in the direct channel. Despite the fact that dual channels for the retailer may dominate the single indirect channel, we find that the motivation for the manufacturer to use dual channels decreases with the unit production cost, while it increases with (i) the marginal cost of variety, (ii) the retailer’s marginal selling cost, and (iii) the customer’s fit cost. Interestingly, our equilibrium analysis demonstrates that it is more likely for the manufacturer to use dual channels under the retailer Stackelberg channel leadership scenario than under the manufacturer Stackelberg scenario if offering a greater variety is very expensive. When offering a greater variety is inexpensive, the decentralization of the indirect channel may invert the manufacturer’s channel structure decision. Furthermore, endogenization of product variety will also invert the channel structure decision if the standard product’s reservation price is sufficiently low.
Supply chain management; Product variety; Customization; Dual channels; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713007224
Xiao, Tiaojun
Choi, Tsan-Ming
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:208-2192013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:208-219
article
Optimal two-phase vaccine allocation to geographically different regions under uncertainty
In this article, we consider a decision process in which vaccination is performed in two phases to contain the outbreak of an infectious disease in a set of geographic regions. In the first phase, a limited number of vaccine doses are allocated to each region; in the second phase, additional doses may be allocated to regions in which the epidemic has not been contained. We develop a simulation model to capture the epidemic dynamics in each region for different vaccination levels. We formulate the vaccine allocation problem as a two-stage stochastic linear program (2-SLP) and use the special problem structure to reduce it to a linear program with a similar size to that of the first stage problem. We also present a Newsvendor model formulation of the problem which provides a closed form solution for the optimal allocation. We construct test cases motivated by vaccine planning for seasonal influenza in the state of North Carolina. Using the 2-SLP formulation, we estimate the value of the stochastic solution and the expected value of perfect information. We also propose and test an easy to implement heuristic for vaccine allocation. We show that our proposed two-phase vaccination policy potentially results in a lower attack rate and a considerable saving in vaccine production and administration cost.
OR in health services; Epidemic control; Two-phase vaccine allocation; Stochastic linear program; Newsvendor model; Value of stochastic solution;
http://www.sciencedirect.com/science/article/pii/S0377221713006929
Yarmand, Hamed
Ivy, Julie S.
Denton, Brian
Lloyd, Alun L.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:159-1702013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:159-170
article
A scenario-based stochastic model for supplier selection in global context with multiple buyers, currency fluctuation uncertainties, and price discounts
Supplier networks in the global context, under price discounts and uncertain fluctuations of currency exchange rates, have become critical in today’s world economy. We study the problem of supplier selection in the presence of uncertain fluctuations of currency exchange rates and price discounts. We specifically consider a buyer with multiple sites sourcing a product from heterogeneous suppliers and address both the supplier selection and purchased quantity decisions. Suppliers are located worldwide and pricing is offered in suppliers’ local currencies. Exchange rates from the local currencies of suppliers to the standard currency of the buyer are subject to uncertain fluctuations over time. In addition, suppliers offer discounts as a function of the total quantity bought by the different customer sites over the time horizon, irrespective of the quantity purchased by each site.
Supplier selection; Currency fluctuation uncertainty; Multiple buyers; Price discounts; Global purchasing;
http://www.sciencedirect.com/science/article/pii/S0377221713006851
Hammami, Ramzi
Temponi, Cecilia
Frein, Yannick
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:234-2452013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:234-245
article
Improvements to a large neighborhood search heuristic for an integrated aircraft and passenger recovery problem
Because most commercial passenger airlines operate on a hub-and-spoke network, small disturbances can cause major disruptions in their planned schedules and have a significant impact on their operational costs and performance. When a disturbance occurs, the airline often applies a recovery policy in order to quickly resume normal operations. We present in this paper a large neighborhood search heuristic to solve an integrated aircraft and passenger recovery problem. The problem consists of creating new aircraft routes and passenger itineraries to produce a feasible schedule during the recovery period. The method is based on an existing heuristic, developed in the context of the 2009 ROADEF Challenge, which alternates between three phases: construction, repair and improvement. We introduce a number of refinements in each phase so as to perform a more thorough search of the solution space. The resulting heuristic performs very well on the instances introduced for the challenge, obtaining the best known solution for 17 out of 22 instances within five minutes of computing time and 21 out of 22 instances within 10 minutes of computing time.
Airline recovery; Fleet assignment; Aircraft routing; Passenger itineraries; Large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221713007182
Sinclair, Karine
Cordeau, Jean-François
Laporte, Gilbert
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:263-2722013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:263-272
article
Pricing and market segmentation using opaque selling mechanisms
In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. We develop a stylized model of consumer choice that illustrates the role of opaque selling in market segmentation. We model a firm selling a product via three selling channels: a regular full information channel, an opaque posted price channel and an opaque bidding channel where consumers specify the price they are willing to pay. We illustrate the segmentation created by opaque selling as well as compare optimal revenues and prices for sellers using regular full information channels with those using opaque selling mechanisms in conjunction with regular channels. We also study the segmentation and policy changes induced by capacity constraints.
Revenue management; Marketing: pricing; Segmentation; Auctions; Buyer behavior;
http://www.sciencedirect.com/science/article/pii/S0377221713006838
Anderson, Chris K.
Xie, Xiaoqing
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:246-2622013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:246-262
article
Stochastic models for strategic resource allocation in nonprofit foreclosed housing acquisitions
Increased rates of mortgage foreclosures in the U.S. have had devastating social and economic impacts during and after the 2008 financial crisis. As part of the response to this problem, nonprofit organizations such as community development corporations (CDCs) have been trying to mitigate the negative impacts of mortgage foreclosures by acquiring and redeveloping foreclosed properties. We consider the strategic resource allocation decisions for these organizations which involve budget allocations to different neighborhoods under cost and return uncertainty. Based on interactions with a CDC, we develop stochastic integer programming based frameworks for this decision problem, and assess the practical value of the models by using real-world data. Both policy-related and computational analyses are performed, and several insights such as the trade-offs between different objectives, and the efficiency of different solution approaches are presented.
OR in societal problem analysis; OR in strategic planning; Foreclosures; Stochastic programming; Resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221713007248
Bayram, Armagan
Solak, Senay
Johnson, Michael
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:145-1582013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:145-158
article
Issues Mapping: A problem structuring method for addressing science and technology conflicts
There are new opportunities for the application of problem structuring methods to address science and technology risk conflicts through stakeholder dialogue. Most previous approaches to addressing risk conflicts have been developed from a traditional risk communication perspective, which tends to construct engagement between stakeholders based on the assumption that scientists evaluate technologies using facts, and lay participants do so based on their values. ‘Understanding the facts’ is generally privileged, so the value framings of experts often remain unexposed, and the perspectives of lay participants are marginalized. When this happens, risk communication methodologies fail to achieve authentic dialogue and can exacerbate conflict. This paper introduces ‘Issues Mapping’, a problem structuring method that enables dialogue by using visual modelling techniques to clarify issues and develop mutual understanding between stakeholders. A case study of the first application of Issues Mapping is presented, which engaged science and community protagonists in the genetic engineering debate in New Zealand. Participant and researcher evaluations suggest that Issues Mapping helped to break down stereotypes of both scientists and environmental activists; increased mutual understanding; reduced conflict; identified common ground; started building trust; and supported the emergence of policy options that all stakeholders in the room could live with. The paper ends with some reflections and priorities for further research.
Problem structuring methods; Issues mapping; Science and technology conflicts; Risk communication; Dialogue; Genetic engineering;
http://www.sciencedirect.com/science/article/pii/S0377221713006772
Cronin, Karen
Midgley, Gerald
Jackson, Laurie Skuba
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:220-2332013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:220-233
article
Optimal relay node placement in delay constrained wireless sensor network design
The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in Wireless Sensor Network (WSN) design. In a WSN, Sensor Nodes are placed across a target geographical region to detect relevant signals. These signals are communicated to a central location, known as the Base Station, for further processing. The DCRNPP aims to place the minimum number of additional Relay Nodes at a subset of Candidate Relay Node locations in such a manner that signals from the various Sensor Nodes can be communicated to the Base Station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in the form of node-cut inequalities. We also derive conditions under which these inequalities are facet defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution for the problem, which is used as the initial incumbent solution in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.
Relay node placement; Cutting plane/facet; Polyhedral theory; Projection; Branch and cut; Lagrangian-relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221713006966
Nigam, Ashutosh
Agarwal, Yogesh K.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:64-742013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:64-74
article
The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost
We study a scheduling problem with rejection on a single serial batching machine, where the objectives are to minimize the total completion time and the total rejection cost. We consider four different problem variations. The first is to minimize the sum of the two objectives. The second and the third are to minimize one objective given an upper bound on the value of the other objective, and the last is to find a Pareto-optimal solution for each Pareto-optimal point. We provide a polynomial time procedure to solve the first variation and show that the three other variations are NP-hard. For solving the three NP-hard problems, we construct a pseudo-polynomial time algorithm. Finally, for one of the NP-hard variants of the problem we propose an FPTAS, provided that certain conditions hold.
Batch scheduling; Bicriteria scheduling; Rejection; Total completion time;
http://www.sciencedirect.com/science/article/pii/S0377221713006784
Shabtay, Dvir
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:84-932013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:84-93
article
Branch-and-price algorithm for the Resilient Multi-level Hop-constrained Network Design
In this work, we investigate the Resilient Multi-level Hop-constrained Network Design (RMHND) problem, which consists of designing hierarchical telecommunication networks that ensure resilience against random failures and maximum delay guarantees in the communication. Three mathematical formulations are proposed, and algorithms based on these formulations are evaluated. A Branch-and-price algorithm, based on a delayed column generation approach within a Branch-and-bound framework, is shown to perform well, finding optimal solutions for practical telecommunication scenarios within reasonable time. Computational results show that algorithms based on the compact formulations are able to prove optimality for instances of limited size in the scenarios of interest, while the proposed Branch-and-price algorithm exhibits much better performance.
Integer programming; Branch-and-price; Multi-level; Resilience; Hop-constrained;
http://www.sciencedirect.com/science/article/pii/S0377221713006899
Souza, Fernanda S.H.
Gendreau, Michel
Mateus, Geraldo R.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:34-422013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:34-42
article
A geometric characterisation of the quadratic min-power centre
For a given set of nodes in the plane, the min-power centre is a point such that the cost of the star centred at this point and spanning all nodes is minimised. The cost of the star is defined as the sum of the costs of its nodes, where the cost of a node is an increasing function of the length of its longest incident edge. The min-power centre problem provides a model for optimally locating a cluster-head amongst a set of radio transmitters; however, the problem can also be formulated within a bicriteria location model involving the 1-centre and a generalised Fermat-Weber point, making it suitable for a variety of facility location problems. We use farthest point Voronoi diagrams and Delaunay triangulations to provide a complete geometric description of the min-power centre of a finite set of nodes in the Euclidean plane when cost is a quadratic function. This leads to a new linear-time algorithm for its construction when the convex hull of the nodes is given. We also provide an upper bound for the performance of the centroid as an approximation to the quadratic min-power centre. Finally, we briefly describe the relationship between solutions under quadratic cost and solutions under more general cost functions.
Networks; Power efficient range assignment; Wireless ad hoc networks; Generalised Fermat–Weber problem; Farthest point Voronoi diagrams;
http://www.sciencedirect.com/science/article/pii/S0377221713007406
Brazil, M.
Ras, C.J.
Thomas, D.A.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:105-1132013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:105-113
article
Competition for cores in remanufacturing
We study competition between an original equipment manufacturer (OEM) and an independently operating remanufacturer (IO). Different from the existing literature, the OEM and IO compete not only for selling their products but also for collecting returned products (cores) through their acquisition prices. We consider a two-period model with manufacturing by the OEM in the first period, and manufacturing as well as remanufacturing in the second period. We find the optimal policies for both players by establishing a Nash equilibrium in the second period, and then determine the optimal manufacturing decision for the OEM in the first period. This leads to a number of managerial insights. One interesting result is that the acquisition price of the OEM depends only on its own cost structure, and not on the acquisition price of the IO. Further insights are obtained from a numerical investigation. We find that when the cost benefits of remanufacturing diminish and the IO has a greater chance of collecting the available cores, the OEM manufactures less in the first period, as the market in the second period gets larger, in order to protect its market share. Finally, we consider the case where consumers have a lower willingness to pay for remanufactured products and find that, in that case, remanufacturing becomes less profitable overall.
Inventory; Remanufacturing; Competition; Nash equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713006905
Bulmus, Serra Caner
Zhu, Stuart X.
Teunter, Ruud
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:75-832013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:75-83
article
Optimal single machine scheduling of products with components and changeover cost
We consider the problem of scheduling products with components on a single machine, where changeovers incur fixed costs. The objective is to minimize the weighted sum of total flow time and changeover cost. We provide properties of optimal solutions and develop an explicit characterization of optimal sequences, while showing that this characterization has recurrent properties. Our structural results have interesting implications for practitioners, primarily that the structure of optimal sequences is robust to changes in demand.
Scheduling; Single machine; Components; Flow time; Changeover cost;
http://www.sciencedirect.com/science/article/pii/S0377221713006814
Zhou, Feng
Blocher, James D.
Hu, Xinxin
Sebastian Heese, H.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:184-1922013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:184-192
article
Portfolio optimization in a regime-switching market with derivatives
We consider the optimal asset allocation problem in a continuous-time regime-switching market. The problem is to maximize the expected utility of the terminal wealth of a portfolio that contains an option, an underlying stock and a risk-free bond. The difficulty that arises in our setting is finding a way to represent the return of the option by the returns of the stock and the risk-free bond in an incomplete regime-switching market. To overcome this difficulty, we introduce a functional operator to generate a sequence of value functions, and then show that the optimal value function is the limit of this sequence. The explicit form of each function in the sequence can be obtained by solving an auxiliary portfolio optimization problem in a single-regime market, and the original optimal value function can then be approximated by taking the limit. Additionally, we show that the optimal value function is a solution to a dynamic programming equation, which leads to explicit forms for the optimal value function and the optimal portfolio process. Furthermore, we demonstrate that, as long as the current state of the Markov chain is given, it is still optimal for an investor in a multiple-regime market to simply allocate his/her wealth in the same way as in a single-regime market.
Functional operator; Elasticity approach; Portfolio optimization; Regime switching; Dynamic programming principle;
http://www.sciencedirect.com/science/article/pii/S0377221713007170
Fu, Jun
Wei, Jiaqin
Yang, Hailiang
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:193-2072013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:193-207
article
Customer acceptance mechanisms for home deliveries in metropolitan areas
Efficient and reliable home delivery is crucial for the economic success of online retailers. This is especially challenging for attended home deliveries in metropolitan areas where logistics service providers face congested traffic networks and customers expect deliveries in tight delivery time windows. Our goal is to develop and compare strategies that maximize the profits of a logistics service provider by accepting as many delivery requests as possible, while assessing the potential impact of a request on the service quality of a delivery tour. Several acceptance mechanisms are introduced, differing in the amount of travel time information that is considered in the decision of whether a delivery request can be accommodated or not. A real-world inspired simulation framework is used for comparison of acceptance mechanisms with regard to profits and service quality. Computational experiments utilizing this simulation framework investigate the effectiveness of acceptance mechanisms and help identify when more advanced travel time information may be worth the additional data collection and computational efforts.
Routing; Home delivery; Feasibility check; Congestion; City logistics;
http://www.sciencedirect.com/science/article/pii/S0377221713006930
Ehmke, Jan Fabian
Campbell, Ann Melissa
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:1-152013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:1-15
article
Multimodal freight transportation planning: A literature review
Multimodal transportation offers an advanced platform for more efficient, reliable, flexible, and sustainable freight transportation. Planning such a complicated system provides interesting areas in Operations Research. This paper presents a structured overview of the multimodal transportation literature from 2005 onward. We focus on the traditional strategic, tactical, and operational levels of planning, where we present the relevant models and their developed solution techniques. We conclude our review paper with an outlook to future research directions.
Freight transportation planning; Multimodal; Intermodal; Co-modal; Synchromodal; Review;
http://www.sciencedirect.com/science/article/pii/S0377221713005638
SteadieSeifi, M.
Dellaert, N.P.
Nuijten, W.
Van Woensel, T.
Raoufi, R.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:122-1292014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:122-129
article
Production planning and pricing policy in a make-to-stock system with uncertain demand subject to machine breakdowns
We consider a make-to-stock system served by an unreliable machine that produces one type of product, which is sold to customers at one of two possible prices depending on the inventory level at the time when a customer arrives (i.e., the decision point). The system manager must determine the production level and selling price at each decision point. We first show that the optimal production and pricing policy is a threshold control characterized by three threshold parameters, under both the long-run discounted profit and long-run average profit criteria. We then establish the structural relationships among the three threshold parameters: production is off when inventory is above the threshold, and the optimal selling price should be low when inventory is above the threshold, whether the machine is down or up. Finally, we provide some numerical examples to illustrate the analytical results and gain additional insights.
Production planning; Dynamic pricing; Machine breakdown; Uncertain demand; Inventory control;
http://www.sciencedirect.com/science/article/pii/S0377221714002318
Shi, Xiutian
Shen, Houcai
Wu, Ting
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:208-2202014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:208-220
article
The partial adjustment valuation approach with dynamic and variable speeds of adjustment to evaluating and measuring the business value of information technology
In this paper we develop the partial adjustment valuation approach, in which the speeds of (partial) adjustment are assumed to be dynamic and variable rather than fixed or constant, for assessing the value of information technology (IT). The speeds of adjustment are a function of a set of macroeconomic and/or microeconomic variables, observed and unobserved, and hence become time-varying, i.e. dynamic and variable over time. The approach is illustrated by a practical application. The results imply that constant speeds of adjustment may overestimate or underestimate the actual speeds of adjustment and, accordingly, may miscalculate the values of performance metrics. Thus, the partial adjustment valuation approach with dynamic and variable speeds of adjustment is more realistic and, more importantly, captures the changing patterns and trends of the adjustment speeds and the performance measures as well. The partial adjustment valuation approach with constant speeds of adjustment, by contrast, fails to adequately explain the dynamic production process of a decision making unit. The empirical evidence also conflicts with the lopsided view that the productivity paradox does not exist in developed countries.
Theory of partial adjustment; Constant speeds of adjustment; Dynamic and variable speeds of adjustment; IT productivity paradox; Non-linear least squares;
http://www.sciencedirect.com/science/article/pii/S0377221714002331
Lin, Winston T.
Kao, Ta-Wei (Daniel)
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:814-8232014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:814-823
article
The Red–Blue transportation problem
This paper considers the Red–Blue Transportation Problem (Red–Blue TP), a generalization of the transportation problem where supply nodes are partitioned into two sets and so-called exclusionary constraints are imposed. We encountered a special case of this problem in a hospital context, where patients need to be assigned to rooms. We establish the problem’s complexity, and we compare two integer programming formulations. Furthermore, a maximization variant of Red–Blue TP is presented, for which we propose a constant-factor approximation algorithm. We conclude with a computational study on the performance of the integer programming formulations and the approximation algorithms, by varying the problem size, the partitioning of the supply nodes, and the density of the problem.
Transportation problem; Exclusionary constraints; Complexity; Approximation; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001908
Vancroonenburg, Wim
Della Croce, Federico
Goossens, Dries
Spieksma, Frits C.R.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:1-172014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:1-17
article
A survey of recent research on location-routing problems
The design of distribution systems raises hard combinatorial optimization problems. For instance, facility location problems must be solved at the strategic decision level to place factories and warehouses, while vehicle routes must be built at the tactical or operational levels to supply customers. In fact, location and routing decisions are interdependent, and studies have shown that the overall system cost may be excessive if they are tackled separately. The location-routing problem (LRP) integrates the two kinds of decisions. Given a set of potential depots with opening costs, a fleet of identical vehicles and a set of customers with known demands, the classical LRP consists in opening a subset of depots, assigning customers to them and determining vehicle routes, to minimize a total cost including the cost of open depots, the fixed costs of vehicles used, and the total cost of the routes. Since the last comprehensive survey on the LRP, published by Nagy and Salhi (2007), the number of articles devoted to this problem has grown quickly, calling for a review of this new research. This paper analyzes the recent literature (72 articles) on the standard LRP and new extensions such as several distribution echelons, multiple objectives or uncertain data. Results of state-of-the-art metaheuristics are also compared on standard sets of instances for the classical LRP, the two-echelon LRP and the truck and trailer problem.
Location-routing problem; Facility location; Vehicle routing; Distribution; Truck and trailer routing problem;
http://www.sciencedirect.com/science/article/pii/S0377221714000071
Prodhon, Caroline
Prins, Christian
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1095-11042014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1095-1104
article
Developing an early warning system to predict currency crises
The purpose of this paper is to develop an early warning system to predict currency crises. In this study, a data set covering the period of January 1992–December 2011 for the Turkish economy is used, and an early warning system is developed with artificial neural networks (ANN), decision trees, and logistic regression models. The Financial Pressure Index (FPI) is an aggregated value composed of the percentage changes in the dollar exchange rate, the gross foreign exchange reserves of the Central Bank, and the overnight interest rate. In this study, FPI is the dependent variable, and thirty-two macroeconomic indicators are the independent variables. The three models, which are tested on Turkish crisis cases, gave clear signals that predicted the 1994 and 2001 crises 12 months in advance. Considering all three prediction model results, Turkey’s economy is not expected to have a currency crisis (ceteris paribus) until the end of 2012. This study is unique in that the decision support model developed here uses basic macroeconomic indicators to predict crises up to a year before they actually happened, with an accuracy rate of approximately 95%. It also ranks the leading factors of currency crises with regard to their importance in predicting a crisis.
Early warning system; Currency crisis; Perfect signal; Artificial neural networks (ANN); Decision tree; Logistic regression;
http://www.sciencedirect.com/science/article/pii/S0377221714001829
Sevim, Cuneyt
Oztekin, Asil
Bali, Ozkan
Gumus, Serkan
Guresen, Erkam
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:313-3262014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:313-326
article
Using a partially observable Markov chain model to assess colonoscopy screening strategies – A cohort study
Colorectal cancer (CRC) is notoriously hard to combat because of its high incidence and mortality rates. However, with improved screening technology and a better understanding of disease pathways, CRC is more likely to be detected at an early stage and thus more likely to be cured. Among the available screening methods, colonoscopy is the most commonly used in the U.S. because of its capability to visualize the entire colon and remove the polyps it detects. The current national guideline for colonoscopy screening recommends an observation-based screening strategy. Nevertheless, there is scant research studying the cost-effectiveness of the recommended observation-based strategy and its variants. In this paper, we describe a partially observable Markov chain (POMC) model which allows us to assess the cost-effectiveness of both fixed-interval and observation-based colonoscopy screening strategies. In our model, we consider detailed adenomatous polyp states and estimate state transition probabilities based on longitudinal clinical data from a specific population cohort. We conduct a comprehensive numerical study which investigates several key factors in screening strategy design, including screening frequency, initial screening age, screening end age, and screening compliance rate. We also conduct sensitivity analyses on the cost and quality of life parameters. Our numerical results demonstrate the usability of our model in assessing colonoscopy screening strategies with consideration of partial observation of true health states. This research facilitates the future design of better colonoscopy screening strategies.
Medical decision making; Cancer screening; Colorectal cancer natural history; Partially observable Markov chain; Cost-effectiveness analysis;
http://www.sciencedirect.com/science/article/pii/S0377221714002185
Li, Y.
Zhu, M.
Klein, R.
Kong, N.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:836-8452014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:836-845
article
Robust combinatorial optimization with variable cost uncertainty
We present in this paper a new model for robust combinatorial optimization with cost uncertainty that generalizes the classical budgeted uncertainty set. We suppose here that the budget of uncertainty is given by a function of the problem variables, yielding an uncertainty multifunction. The new model is less conservative than the classical model and better approximates Value-at-Risk objective functions, especially for vectors with few non-zero components. An example of a budget function is constructed from the probabilistic bounds computed by Bertsimas and Sim. We provide an asymptotically tight bound for the cost reduction obtained with the new model. We then turn to the tractability of the resulting optimization problems. We show that when the budget function is affine, the resulting optimization problems can be solved by solving n+1 deterministic problems. We propose combinatorial algorithms to handle problems with more general budget functions. We also adapt existing dynamic programming algorithms to solve the robust counterparts of optimization problems faster; these adaptations can be applied both to the traditional budgeted uncertainty model and to our new model. We evaluate numerically the reduction in the price of robustness obtained with the new model on the shortest path problem and on a survivable network design problem.
Combinatorial optimization; Robust optimization; Dynamic programming; Price of robustness; Budgeted uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221714002124
Poss, Michael
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1133-11412014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1133-1141
article
Operational transportation planning of freight forwarding companies in horizontal coalitions
In order to improve profitability, freight forwarding companies try to organize their operational transportation planning systematically, considering not only their own fleet but also external resources. Such external resources include vehicles from closely related subcontractors in vertical cooperation, autonomous common carriers on the transportation market, and cooperating partners in horizontal coalitions. In this paper, the transportation planning process of forwarders is studied and the benefit of including external resources is analyzed. By introducing subcontracting, the conventional routing of a company's own vehicles is extended to an integrated operational transportation planning, which simultaneously constructs fulfillment plans with the overall lowest costs using the company's own fleet and subcontractors’ vehicles. This is then combined with planning strategies that aim to increase profitability by exchanging requests among members of horizontal coalitions. Computational results show considerable cost reductions using the proposed planning approach.
Logistics; Distributed decision making; Transportation planning; Subcontracting; Collaborative planning; Request exchange;
http://www.sciencedirect.com/science/article/pii/S037722171400191X
Wang, Xin
Kopfer, Herbert
Gendreau, Michel
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:871-8862014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:871-886
article
Pricing in a supply chain for auction bidding under information asymmetry
We examine a supply chain in which a manufacturer participates in a sealed-bid lowest price procurement auction through a distributor. This form of supply chain is common when a manufacturer is active in an overseas market without establishing a local subsidiary. To gain a strategic advantage in the division of profit, the manufacturer and distributor may intentionally conceal information about the underlying cost distribution of the competition. In this environment of information asymmetry, we determine the equilibrium mark-up, the ex-ante expected mark-up and expected profit of the manufacturer, and the equilibrium bid of the distributor. Under unilateral communication, we demonstrate the informed agent’s advantage, which results in a higher mark-up. Under information sharing, we show that profit is equally shared among the supply chain partners, and we explicitly derive the mark-up when the underlying cost distribution is uniform in [0,1]. The model and findings are illustrated by a numerical example.
Auctions/bidding; Supply chain management; Equilibrium mark-up; Information asymmetry; Double marginalization; Information sharing;
http://www.sciencedirect.com/science/article/pii/S0377221714001866
Lorentziadis, Panos L.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:270-2802014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:270-280
article
An intelligent decomposition of pairwise comparison matrices for large-scale decisions
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of elements and is an integral component of widely applied decision making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, PCMs suffer from several issues that limit their application to large-scale decision problems. These limitations can be attributed to the curse of dimensionality, that is, the large number of pairwise comparisons that need to be elicited from a decision maker. This issue results in inconsistent preferences due to the limited cognitive powers of decision makers. To address these limitations, this research proposes a PCM decomposition methodology that reduces the number of elicited pairwise comparisons. A binary integer program is proposed to intelligently decompose a PCM into several smaller subsets using interdependence scores among elements. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets and derive the global weights of the elements from the original PCM. As a result, the number of pairwise comparisons is reduced and the consistency of the comparisons is improved. The proposed decomposition methodology is applied to both AHP and ANP to demonstrate its advantages.
AHP; ANP; Pairwise comparison matrices; Inconsistency; Binary integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002719
Jalao, Eugene Rex
Wu, Teresa
Shunk, Dan
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1037-10532014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1037-1053
article
The effects of asset specificity on maintenance financial performance: An empirical application of Transaction Cost Theory to the medical device maintenance field
This study uses multivariate regression analysis to examine the effects of asset specificity on the financial performance of both external and internal governance structures for medical device maintenance, and investigates how the financial performance of external governance structures differs depending on whether a hospital is private or public. The hypotheses were tested using information on 764 medical devices and 62 maintenance service providers, resulting in 1403 maintenance transactions. As such, our data sample is significantly larger than those used in previous studies in this area. The results empirically support our core theoretical argument that governance financial performance is influenced by asset specificity.
Maintenance; Multivariate statistics; Econometrics in health;
http://www.sciencedirect.com/science/article/pii/S0377221714001751
Cruz, Antonio Miguel
Haugan, Gregory L.
Rincon, Adriana Maria Rios
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:233-2442014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:233-244
article
Lease expiration management for a single lease term in the apartment industry
Lease expiration management (LEM) in the apartment industry aims to control the number of lease expirations and thus achieve maximal revenue growth. We examine rental rate strategies in the context of LEM for apartment buildings that offer a single lease term and face demand uncertainty. We show that the building may incur a significant revenue loss if it fails to account for LEM in the determination of the rental rate. We also show that the use of LEM is a compromise approach between a limited optimization, where no future demand information is available, and a global optimization, where complete future demand information is available. We show that the use of LEM can enhance the apartment building’s revenue by as much as 8% when the desired number of expirations and associated costs are appropriately estimated. Numerical examples are included to illustrate the major results derived from our models and the impact on the apartment’s revenue of sensitivity to the desired number of expirations and associated costs.
Revenue management; Pricing; Lease expiration management; Apartment industry;
http://www.sciencedirect.com/science/article/pii/S0377221714002549
Chen, Jing
Wang, Jian
Bell, Peter C.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:185-1982014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:185-198
article
Measurement of preferences with self-explicated approaches: A classification and merge of trade-off- and non-trade-off-based evaluation types
Self-explicated approaches are popular preference measurement approaches for products with many attributes. This article classifies previous self-explicated approaches according to their evaluation types, i.e. trade-off- versus non-trade-off-based, and outlines their advantages and disadvantages. In addition, it proposes a new method, the presorted adaptive self-explicated approach that is based on Netzer and Srinivasan’s (2011) adaptive self-explicated approach and that combines trade-off- and non-trade-off-based evaluation types. Two empirical studies compare this new method with the most popular existing self-explicated approaches, including the adaptive self-explicated approach and paired comparison preference measurement. The new method overcomes the insufficient discrimination between importance weights, as usually found in non-trade-off-based evaluation types; discourages respondents’ simplification strategies, as are frequently encountered in trade-off evaluation types; is easy to implement; and yields high predictive validity compared with other popular self-explicated approaches.
Preference measurement; Self-explicated approaches; Marketing research;
http://www.sciencedirect.com/science/article/pii/S0377221714002240
Schlereth, Christian
Eckert, Christine
Schaaf, René
Skiera, Bernd
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:254-2692014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:254-269
article
Impact of productivity on cross-training configurations and optimal staffing decisions in hospitals
Cross-training of nursing staff has been used in hospitals to reduce labor cost, provide scheduling flexibility, and meet patient demand effectively. However, cross-trained nurses may not be as productive as regular nurses in carrying out their tasks because of a new work environment and unfamiliar protocols in the new unit. This leads to the research question: What is the impact of productivity on optimal staffing decisions (both regular and cross-trained) in two-unit and multi-unit systems? We investigate the effect of mean demand, cross-training cost, contract nurse cost, and productivity on a two-unit, full-flexibility configuration and on three-unit, partial-flexibility and chaining (minimal complete chain) configurations under centralized and decentralized decision making. Under centralized decision making, the optimal staffing and cross-training levels are determined simultaneously, while under decentralized decision making, the optimal staffing levels are determined without any knowledge of future cross-training programs. We use two-stage stochastic programming to derive closed form equations and determine the optimal number of cross-trained nurses for two units facing stochastic demand following general, continuous distributions. We find that there exists a productivity level (threshold) beyond which the optimal number of cross-trained nurses declines, as fewer cross-trained nurses are sufficient to obtain the benefit of staffing flexibility. When we account for productivity variations, the chaining configuration provides on average 1.20% cost savings over the partial-flexibility configuration, while centralized decision making averages 1.13% cost savings over decentralized decision making.
Cross-training; Productivity; Chaining; Healthcare; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002720
Gnanlet, Adelina
Gilland, Wendell G.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1008-10202014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1008-1020
article
Computing approximate Nash equilibria in general network revenue management games
Computing optimal capacity allocations in network revenue management is computationally hard. The problem of computing exact Nash equilibria in non-zero-sum games is computationally hard, too. We present a fast heuristic that, when it cannot converge to an exact Nash equilibrium, computes an approximation instead for general network revenue management problems under competition. We also investigate whether it is worth taking competition into account when making (network) capacity allocation decisions. Computational results show that the payoffs in the approximate equilibria are very close to those in exact ones. Taking competition into account never leads to a lower revenue than ignoring competition, no matter what the competitor does. Since we apply linear continuous models, computation time is very short.
Network revenue management; Competition; Approximate Nash equilibria; Algorithmic game theory;
http://www.sciencedirect.com/science/article/pii/S0377221714001805
Grauberger, W.
Kimms, A.
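The heuristic described in this abstract is tailored to linear continuous network revenue management models; as a generic illustration of the underlying idea only, the sketch below runs pure-strategy best-response dynamics on a small bimatrix game and reports how far the resulting profile is from equilibrium. The function name, payoff matrices, and stopping rule are our own illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def best_response_dynamics(A, B, max_iter=100):
    """Pure-strategy best-response dynamics on a bimatrix game.

    A[i, j] is the row player's payoff, B[i, j] the column player's.
    Returns (i, j, eps) where eps is the largest unilateral gain at
    (i, j): eps == 0 means an exact pure Nash equilibrium, otherwise
    (i, j) is an eps-approximate one.
    """
    i = j = 0
    for _ in range(max_iter):
        bi = int(np.argmax(A[:, j]))   # row player's best response to j
        bj = int(np.argmax(B[i, :]))   # column player's best response to i
        if bi == i and bj == j:
            break                      # mutual best responses: pure Nash
        i, j = bi, bj
    eps = max(A[:, j].max() - A[i, j], B[i, :].max() - B[i, j])
    return i, j, eps
```

On a prisoner's-dilemma payoff pair the dynamics reach mutual defection, an exact pure equilibrium; on games with no pure equilibrium the loop cycles and eps quantifies the approximation quality.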
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:281-2892014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:281-289
article
Subgroup additivity in the queueing problem
Subgroup additivity requires that a rule assign the same expected ‘relative’ utility to each agent whether that utility is calculated from the problem involving all agents or from its sub-problems with fewer agents. In this paper, we investigate its implications for the queueing problem. As a result, we present characterizations of five important rules: the minimal transfer rule, the maximal transfer rule, the pivotal rule, the reward-based pivotal rule, and the symmetrically balanced VCG rule. Together with some basic axioms and subgroup additivity, the characterization results are obtained by imposing either a strategic axiom or an equity axiom.
Queueing problem; Subgroup additivity; Weak strategyproofness; Egalitarian equivalence;
http://www.sciencedirect.com/science/article/pii/S037722171400277X
Chun, Youngsub
Mitra, Manipushpak
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1142-11542014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1142-1154
article
Computationally efficient evaluation of appointment schedules in health care
We consider the problem of evaluating and constructing appointment schedules for patients in a health care facility where a single physician treats patients in a consecutive manner, as is common for general practitioners, clinics and for outpatients in hospitals. Specifically, given a fixed-length session during which a physician sees K patients, each patient has to be given an appointment time during this session in advance. Optimising a schedule with respect to patient waiting times, physician idle times, session overtime, etc. usually requires a heuristic search method involving a huge number of repeated schedule evaluations. Hence, our aim is to obtain accurate predictions at very low computational cost. This is achieved by (1) using Lindley’s recursion to allow for explicit expressions and (2) choosing a discrete-time (slotted) setting to make those expressions easy to compute. We assume general, possibly distinct, distributions for the patients’ consultation times, which allows us to account for multiple treatment types, emergencies and patient no-shows. The moments of waiting and idle times are obtained and the computational complexity of the algorithm is discussed. Additionally, we calculate the schedule’s performance in between appointments in order to assist a sequential scheduling strategy.
Stochastic programming; Scheduling; Queueing; Complexity theory;
http://www.sciencedirect.com/science/article/pii/S0377221714002100
De Vuyst, Stijn
Bruneel, Herwig
Fiems, Dieter
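The discrete-time Lindley recursion mentioned in this abstract can be sketched in a few lines. This is a minimal stand-alone illustration, assuming slotted time and a given consultation-length PMF per patient; the function name and fixed horizon are our own choices, not taken from the paper, and the paper's idle-time and overtime computations are omitted.

```python
import numpy as np

def lindley_waits(slots, service_pmfs, horizon=200):
    """PMFs of patient waiting times via W[i+1] = max(W[i] + S[i] - gap, 0).

    slots: appointment slot of each patient (non-decreasing integers).
    service_pmfs: one PMF over consultation length (in slots) per patient.
    Returns one waiting-time PMF (array over slots) per patient.
    """
    waits = []
    w = np.zeros(horizon)
    w[0] = 1.0                                   # the first patient never waits
    for i, pmf in enumerate(service_pmfs):
        waits.append(w.copy())
        if i == len(service_pmfs) - 1:
            break
        total = np.convolve(w, pmf)[:horizon]    # distribution of W[i] + S[i]
        gap = slots[i + 1] - slots[i]            # deterministic interarrival time
        nxt = np.zeros(horizon)
        nxt[0] = total[:gap + 1].sum()           # work done before next arrival
        nxt[1:horizon - gap] = total[gap + 1:horizon]
        w = nxt
    return waits
```

For example, with deterministic 3-slot consultations and appointments two slots apart, the second patient waits exactly one slot with probability 1.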
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:857-8702014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:857-870
article
A hybrid wrapper–filter approach to detect the source(s) of out-of-control signals in multivariate manufacturing process
With modern data-acquisition equipment and on-line computers used during production, it is now common to monitor several correlated quality characteristics simultaneously in multivariate processes. Multivariate control charts (MCC) are important tools for monitoring multivariate processes. One difficulty encountered with multivariate control charts is the identification of the variable or group of variables that cause an out-of-control signal. Expert knowledge, either in combination with a wrapper-based supervised classifier or with a pre-filter and wrapper, is the standard approach to detecting the sources of an out-of-control signal. However, gathering expert knowledge for source identification is costly and may introduce human error. Individual univariate control charts (UCC) and decomposition of the T2 statistic are also used simultaneously in many cases to identify the sources, but these either ignore the correlations between the sources or become more time-consuming as the number of dimensions grows. The aim of this paper is to develop a source identification approach that does not need any expert knowledge and can detect the sources of an out-of-control signal with lower computational complexity. We propose a hybrid wrapper–filter based source identification approach that hybridizes a Mutual Information (MI) based Maximum Relevance (MR) filter ranking heuristic with an Artificial Neural Network (ANN) based wrapper. The Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA) has been combined with MR (MR-ANNIGMA) to utilize the knowledge about the intrinsic pattern of the quality characteristics computed by the filter for directing the wrapper search process. To compute an optimal ANNIGMA score, we also propose a Global MR-ANNIGMA that uses a non-functional relationship between variables, is independent of the derivative of the objective function, and has the potential to overcome the local optimization problem of ANN training.
The novelty of the proposed approaches is that they combine the advantages of both filter and wrapper approaches and do not require any expert knowledge about the sources of the out-of-control signals. The heuristic-score-based subset generation process also reduces the search space to polynomial growth, which in turn reduces computational time. The proposed approaches were tested in exhaustive experiments using both simulated and real manufacturing data and compared to existing methods, including independent filter, wrapper and Multivariate EWMA (MEWMA) methods. The results indicate that the proposed approaches can identify the sources of out-of-control signals more accurately than existing approaches.
Multivariate control chart; Fault diagnosis; Global optimization; Wrapper and filter approaches;
http://www.sciencedirect.com/science/article/pii/S0377221714001672
Huda, Shamsul
Abdollahian, Mali
Mammadov, Musa
Yearwood, John
Ahmed, Shafiq
Sultan, Ibrahim
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:802-8132014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:802-813
article
An iterated local search algorithm for the single-vehicle cyclic inventory routing problem
The Single-Vehicle Cyclic Inventory Routing Problem (SV-CIRP) belongs to the class of Inventory Routing Problems (IRP) in which the supplier optimises both the distribution costs and the inventory costs at the customers. The goal of the SV-CIRP is to minimise both kinds of costs and to maximise the collected rewards, by selecting a subset of customers from a given set and determining the quantity to be delivered to each customer and the vehicle routes, while avoiding stockouts. A cyclic distribution plan should be developed for a single vehicle.
Routing; Inventory; Single-vehicle cyclic inventory routing problem; Iterated local search; Metaheuristic;
http://www.sciencedirect.com/science/article/pii/S0377221714001350
Vansteenwegen, Pieter
Mateo, Manuel
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:348-3622014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:348-362
article
Online stochastic UAV mission planning with time windows and time-sensitive targets
In this paper we simultaneously consider three extensions to the standard Orienteering Problem (OP) to model characteristics that are of practical relevance in planning reconnaissance missions of Unmanned Aerial Vehicles (UAVs). First, travel and recording times are uncertain. Second, the information about each target can only be obtained within a predefined time window; due to the travel and recording time uncertainty, it is also uncertain whether a target can be reached before the end of its time window. Finally, we consider the appearance of new targets during the flight, so-called time-sensitive targets, which need to be visited immediately if possible. We tackle this online stochastic UAV mission planning problem with time windows and time-sensitive targets using a re-planning approach. To this end, we introduce the Maximum Coverage Stochastic Orienteering Problem with Time Windows (MCS-OPTW). First, it aims at constructing a tour with maximum expected profit from targets that were already known before the flight. Second, it directs the planned tour to predefined areas where time-sensitive targets are expected to appear. We have developed a fast heuristic that can be used to re-plan the tour each time before leaving a target. In our computational experiments we illustrate how the MCS-OPTW planning approach balances the two objectives: the expected profits of foreseen targets and the expected percentage of time-sensitive targets reached on time. We compare it to a deterministic planning approach and show how it deals with uncertainty in travel and recording times and the appearance of time-sensitive targets.
Stochastic orienteering problem; Time windows; Online planning;
http://www.sciencedirect.com/science/article/pii/S0377221714002288
Evers, Lanah
Barros, Ana Isabel
Monsuur, Herman
Wagelmans, Albert
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:921-9312014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:921-931
article
Cost, revenue and profit efficiency measurement in DEA: A directional distance function approach
Estimation of efficiency of firms in a non-competitive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In this scenario, a value-based technology becomes an appropriate reference technology against which efficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA setup to develop new directional cost- and revenue-based measures of efficiency, which are then decomposed into their respective directional value-based technical and allocative efficiencies. These new directional value-based measures are more general and include the existing value-based measures as special cases. These measures satisfy several desirable properties of an ideal efficiency measure. These new measures are advantageous over the existing ones in terms of (1) their ability to satisfy the most important property of translation invariance; (2) choices over the use of suitable direction vectors in handling negative data; and (3) flexibility in providing the decision makers with the option of specifying preferable direction vectors to incorporate their preferences. Finally, under the condition of no prior unit price information, a directional value-based measure of profit inefficiency is developed for firms whose underlying objectives are profit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of the banking industry.
Data envelopment analysis; Cost efficiency; Revenue efficiency; Profit efficiency; Translation invariance; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221714001325
Sahoo, Biresh K.
Mehdiloozad, Mahmood
Tone, Kaoru
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:290-2992014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:290-299
article
Systemic risk elicitation: Using causal maps to engage stakeholders and build a comprehensive view of risks
As evidenced by both historical and contemporary reports of over-runs, managing projects can be a risky business. Managers are faced with the need to work effectively with a multitude of parties and deal with a wealth of interlocking uncertainties. This paper describes a modelling process developed to assist managers facing such situations. The process helps managers develop a comprehensive appreciation of risks and understand the impact of the interactions between these risks by explicitly engaging a wide stakeholder base using a group support system and causal mapping process. Using a real case, the paper describes the modelling process and outcomes along with their implications, before reflecting on the insights, limitations and future research.
Problem structuring; Risk analysis; Group decision making;
http://www.sciencedirect.com/science/article/pii/S0377221714002744
Ackermann, Fran
Howick, Susan
Quigley, John
Walls, Lesley
Houghton, Tom
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:988-9962014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:988-996
article
A system dynamics model for determining the waste disposal charging fee in construction
The waste disposal charging fee (WDCF) has long been adopted to stimulate the incentives of major project stakeholders (particularly project clients and contractors) to minimize solid waste and increase the recovery of wasted materials in the construction industry. However, the present WDCFs applied in many regions of China are mostly determined based on a rule of thumb. Consequently, the effectiveness of implementing these WDCFs is very limited. This study aims to address this research gap by developing a system dynamics based model to determine an appropriate WDCF in the construction sector. The data used to test and validate the model were collected from Shenzhen in southern China. Using the model, two types of simulations were carried out: one is a base-run simulation to investigate the status quo of waste generation in Shenzhen; the other is a policy-analysis simulation, with which an appropriate WDCF could be determined to reduce waste generation and landfilling, maximize waste recycling, and minimize the waste dumped inappropriately. The model developed can function as a tool to effectively determine an appropriate WDCF in Shenzhen. Further, it can also be used by other regions intending to stimulate construction waste minimization and recycling through implementing an optimal WDCF.
System dynamics; Decision making; Waste disposal charging fee (WDCF); Waste management;
http://www.sciencedirect.com/science/article/pii/S0377221714001696
Yuan, Hongping
Wang, Jiayuan
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:31-402014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:31-40
article
The Share-a-Ride Problem: People and parcels sharing taxis
New city logistics approaches are needed to ensure efficient urban mobility for both people and goods. Usually, these are handled independently in dedicated networks. This paper considers conceptual and mathematical models in which people and parcels are handled in an integrated way by the same taxi network. From a city perspective, this system has a potential to alleviate urban congestion and environmental pollution. From the perspective of a taxi company, new benefits from the parcel delivery service can be obtained. We propose two multi-commodity sharing models. The Share-a-Ride Problem (SARP) is discussed and defined in detail. A reduced problem based on the SARP is proposed: the Freight Insertion Problem (FIP) starts from a given route for handling people requests and inserts parcel requests into this route. We present MILP formulations and perform a numerical study of both static and dynamic scenarios. The obtained numerical results provide valuable insights into successfully implementing a taxi sharing service.
Transportation; Share-a-Ride Problem; Freight insertion problem; Multi-commodity; Taxi;
http://www.sciencedirect.com/science/article/pii/S0377221714002173
Li, Baoxiang
Krushinsky, Dmitry
Reijers, Hajo A.
Van Woensel, Tom
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1054-10662014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1054-1066
article
A travel time estimation model for a high-level picker-to-part system with class-based storage policies
Most previous studies on warehouse configurations and operations have investigated only single-level storage rack systems, in which neither the height of the storage racks nor the vertical movement of the picking operations is considered. However, in order to utilize space efficiently, high-level storage systems are often used in warehouses in practice. This paper presents a travel time estimation model for a high-level picker-to-part system that takes into account a class-based storage policy and various routing policies. The results indicate that the proposed model is sufficiently accurate for practical purposes. Furthermore, the effects of storage and routing policies on the travel time and the optimal warehouse layout are discussed in the paper.
Facilities planning and design; Logistics; Warehouse layout; Order picking;
http://www.sciencedirect.com/science/article/pii/S0377221714001726
Pan, Jason Chao-Hsien
Wu, Ming-Hung
Chang, Wen-Liang
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:175-1842014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:175-184
article
Take back costs and product durability
Extended Producer Responsibility (EPR) initiatives may require a manufacturer to be responsible in the future for taking back the products it produces today. A ramification of EPR is that take back costs may influence firms’ decisions regarding product durability. In the absence of EPR, prior literature has shown that a firm may intentionally lower durability, yielding planned obsolescence. We use a two period model to examine the impact of take back costs on a manufacturer’s product durability and pricing decisions, under both selling and leasing scenarios. We show that compared to selling, leasing provides a greater incentive to raise durability, thus extending a classic insight to a setting with product take backs. Interestingly, we also show that it is possible for the optimal product durability to decrease if the stipulated take back fraction increases. In such situations, we demonstrate that durability can increase if the take back fraction is tied to durability rather than fixed. We explore the impact of take backs on profits and surplus by alternatively considering products for which take back costs are either increasing or decreasing functions of durability. When increasing durability implies higher take back costs, our results demonstrate that leasing can increase durability, profits, and surplus significantly compared to selling. In contrast, when increasing durability implies a lower take back cost, there is a built-in incentive for the firm to increase durability, which can make selling more efficient (i.e., surplus enhancing) than leasing.
Durability; Take-backs; Obsolescence; Production; Pricing;
http://www.sciencedirect.com/science/article/pii/S0377221714002227
Pangburn, Michael S.
Stavrulaki, Euthemia
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:41-532014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:41-53
article
Robust optimization for interactive multiobjective programming with imprecise information applied to R&D project portfolio selection
A multiobjective binary integer programming model for R&D project portfolio selection with competing objectives is developed when problem coefficients in both objective functions and constraints are uncertain. Robust optimization is used in dealing with uncertainty while an interactive procedure is used in making tradeoffs among the multiple objectives. Robust nondominated solutions are generated by solving the linearized counterpart of the robust augmented weighted Tchebycheff programs. A decision maker’s most preferred solution is identified in the interactive robust weighted Tchebycheff procedure by progressively eliciting and incorporating the decision maker’s preference information into the solution process. An example is presented to illustrate the solution approach and performance. The developed approach can also be applied to general multiobjective mixed integer programming problems.
Multiobjective programming; Robust optimization; Imprecise information; Portfolio selection; Interactive procedures;
http://www.sciencedirect.com/science/article/pii/S0377221714002525
Hassanzadeh, Farhad
Nemati, Hamid
Sun, Minghe
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:130-1422014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:130-142
article
Coordination of production and interstage batch delivery with outsourced distribution
In this paper, we consider coordinated production and interstage batch delivery scheduling problems, where a third-party logistics provider (3PP) delivers semi-finished products in batches from one production location to another production location belonging to the same manufacturer. A batch cannot be delivered until all jobs of the batch are completed at the upstream stage. The 3PP is required to deliver each product within a time T from its release at the upstream stage. We consider two transportation modes: regular transportation, for which delivery departure times are fixed at the beginning, and express transportation, for which delivery departure times are flexible. We analyze the problems faced by the 3PP when either the manufacturer dominates or the 3PP dominates. In this context, we investigate the complexity of several problems, providing polynomiality and NP-completeness results.
Supply chain scheduling; Batching and delivery; Outsourced distribution; Two delivery modes; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002781
Agnetis, Alessandro
Aloulou, Mohamed Ali
Fu, Liang-Liang
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:824-8352014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:824-835
article
Service differentiation through selective lateral transshipments
We consider a multi-item spare parts problem with multiple warehouses and two customer classes, where lateral transshipments are used as a differentiation tool. Specifically, premium requests that cannot be met from stock at their preferred warehouse may be satisfied from stock at other warehouses (so-called lateral transshipments). We first derive approximations for the mean waiting time per class in a single-item model with selective lateral transshipments. Next, we embed our method in a multi-item model minimizing the holding costs and costs of lateral and emergency shipments from upstream locations in the network. Compared to the option of using only selective emergency shipments for differentiation, the addition of selective lateral transshipments can lead to significant further cost savings (14% on average).
Inventory; Service differentiation; Lateral transshipments; Spare parts;
http://www.sciencedirect.com/science/article/pii/S037722171400188X
Alvarez, E.M.
van der Heijden, M.C.
Vliegen, I.M.H.
Zijm, W.H.M.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:77-862014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:77-86
article
Effective learning hyper-heuristics for the course timetabling problem
Course timetabling is an important and recurring administrative activity in most educational institutions. This article combines a general modeling methodology with effective learning hyper-heuristics to solve this problem. The proposed hyper-heuristics are based on an iterated local search procedure that autonomously combines a set of move operators. Two types of learning for operator selection are contrasted: a static (offline) approach, with a clear distinction between training and execution phases; and a dynamic approach that learns on the fly. The resulting algorithms are tested over the set of real-world instances collected by the first and second International Timetabling competitions. The dynamic scheme statistically outperforms the static counterpart, and produces competitive results when compared to the state-of-the-art, even producing a new best-known solution. Importantly, our study illustrates that algorithms with increased autonomy and generality can outperform human designed problem-specific algorithms.
Timetabling; Hyper-heuristics; Heuristics; Metaheuristics; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714002859
Soria-Alcaraz, Jorge A.
Ochoa, Gabriela
Swan, Jerry
Carpio, Martin
Puga, Hector
Burke, Edmund K.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:363-3732014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:363-373
article
Mass-casualty triage: Distribution of victims to multiple hospitals using the SAVE model
During a mass casualty incident (MCI), to which one of several area hospitals should each victim be sent? These decisions depend on resource availability (both transport and care) and the survival probabilities of patients. This paper focuses on the critical time period immediately following the onset of an MCI and is concerned with how to effectively evacuate victims to the different area hospitals in order to provide the greatest good to the greatest number of patients while not overwhelming any single hospital. This resource-constrained triage problem is formulated as a mixed-integer program, which we call the Severity-Adjusted Victim Evacuation (SAVE) model. It is compared with a model in the extant literature and also against several current policies commonly used by the so-called incident commander. The experiments indicate that the SAVE model provides a marked improvement over the commonly used ad-hoc policies and an existing model. Two possible implementation strategies are discussed along with managerial conclusions.
OR in service industries; Risk management; Disaster management; Health care; Victim distribution;
http://www.sciencedirect.com/science/article/pii/S0377221714002574
Dean, Matthew D.
Nair, Suresh K.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1105-11182014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1105-1118
article
Modeling framework for optimal evacuation of large-scale crowded pedestrian facilities
The paper presents a simulation–optimization modeling framework for the evacuation of large-scale pedestrian facilities with multiple exit gates. The framework integrates a genetic algorithm (GA) and a microscopic pedestrian simulation–assignment model. The GA searches for the optimal evacuation plan, while the simulation model guides the search through evaluating the quality of the generated evacuation plans. Evacuees are assumed to receive evacuation instructions in terms of the optimal exit gates and evacuation start times. The framework is applied to develop an optimal evacuation plan for a hypothetical crowded exhibition hall. The obtained results show that the model converges to a superior optimal evacuation plan within an acceptable number of iterations. In addition, the obtained evacuation plan outperforms conventional plans that implement nearest-gate immediate evacuation strategies.
Crowd dynamics; Evacuation; Simulation; Cellular automata; Genetic algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221714001891
Abdelghany, Ahmed
Abdelghany, Khaled
Mahmassani, Hani
Alhalabi, Wael
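The GA-plus-simulation framework described in this abstract can be caricatured in a few lines: a chromosome assigns each evacuee group to an exit gate, and a crude distance-plus-overload penalty stands in for the microscopic pedestrian simulation the paper uses as its fitness oracle. All names, the penalty weight, and the operator choices below are our own assumptions, and evacuation start times are ignored.

```python
import random

def ga_gate_assignment(dist, capacity, pop=30, gens=60, pm=0.1, seed=0):
    """Toy GA assigning evacuee groups to exit gates.

    dist[g][k]: walking distance from group g to gate k.
    capacity[k]: how many groups gate k handles without congestion.
    Fitness = total distance + a heavy penalty for overloading gates
    (a stand-in for a pedestrian simulation model).
    """
    rng = random.Random(seed)
    G, K = len(dist), len(dist[0])

    def cost(plan):
        load = [0] * K
        for k in plan:
            load[k] += 1
        over = sum(max(l - capacity[k], 0) for k, l in enumerate(load))
        return sum(dist[g][plan[g]] for g in range(G)) + 50 * over

    popn = [[rng.randrange(K) for _ in range(G)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 2]            # keep the best half (elitism)
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, G)
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < pm:           # mutation: reassign one group
                child[rng.randrange(G)] = rng.randrange(K)
            children.append(child)
        popn = elite + children
    best = min(popn, key=cost)
    return best, cost(best)
```

On a toy instance with two gates and four groups, the GA quickly settles on a balanced assignment that sends each group to its nearby gate.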
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:221-2322014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:221-232
article
Vertical integration with endogenous contract leadership: Stability and fair profit allocation
This paper studies vertical integration in serial supply chains with a wholesale price contract. We consider a business environment where the contracting leader may be endogenously changed before and after forming the integration. A cooperative game is formulated to normatively analyze the stable and fair profit allocations under the grand coalition in such an environment. Our main result demonstrates that vertical integration is stable when all members are pessimistic in the sense that they are sure that they will not become the contracting leader if they deviate from the grand coalition. We find that in this case, the grand coalition’s profit must be allocated more to the retailer and the members with higher costs. Nevertheless, we also show the conditions under which the upstream manufacturer can have strong power as in traditional supply chains.
Vertical integration; Leader position; Cooperative game; Core allocation; Economics;
http://www.sciencedirect.com/science/article/pii/S0377221714002513
Kumoi, Yuki
Matsubayashi, Nobuo
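The stability notion in the abstract above is the core of a cooperative game. A minimal sketch of a core-membership check (the three-player characteristic function below is invented for illustration, not taken from the paper):

```python
from itertools import combinations

players = ["manufacturer", "wholesaler", "retailer"]
v = {  # value of each coalition (hypothetical numbers)
    ("manufacturer",): 2, ("wholesaler",): 1, ("retailer",): 3,
    ("manufacturer", "wholesaler"): 5,
    ("manufacturer", "retailer"): 7,
    ("wholesaler", "retailer"): 6,
    ("manufacturer", "wholesaler", "retailer"): 12,
}

def in_core(alloc):
    """alloc: dict player -> payoff. Core: the allocation is efficient and
    no coalition can obtain more by deviating on its own."""
    if abs(sum(alloc.values()) - v[tuple(players)]) > 1e-9:
        return False  # must allocate exactly the grand coalition's profit
    for r in range(1, len(players)):
        for coal in combinations(players, r):
            if sum(alloc[p] for p in coal) < v[coal] - 1e-9:
                return False  # coalition 'coal' would deviate
    return True

stable = in_core({"manufacturer": 4, "wholesaler": 3, "retailer": 5})
```

The paper's analysis additionally models the endogenous change of the contracting leader, which this sketch does not capture.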
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:245-253 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:245-253
article
A new nonlinear interval programming method for uncertain problems with dependent interval variables
This paper proposes a new nonlinear interval programming method that can be used to handle uncertain optimization problems when there are dependencies among the interval variables. The uncertain domain is modeled using a multidimensional parallelepiped interval model. The model depicts single-variable uncertainty using a marginal interval and depicts the degree of dependency among the interval variables using correlation angles and correlation coefficients. Based on the interval order relation and the interval possibility degree, the uncertain optimization problem is converted into a deterministic two-layer nesting optimization problem. Affine coordinates are then introduced to convert the uncertain domain of the multidimensional parallelepiped interval model into a standard interval uncertain domain. A highly efficient iterative algorithm is formulated to generate an efficient solution for the multi-layer nesting optimization problem after the conversion. Three computational examples are given to verify the effectiveness of the proposed method.
Uncertainty modeling; Nonlinear interval programming; Interval model; Uncertain optimization; Variable dependency;
http://www.sciencedirect.com/science/article/pii/S0377221714002586
Jiang, C.
Zhang, Z.G.
Zhang, Q.F.
Han, X.
Xie, H.C.
Liu, J.
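The two-layer nesting structure mentioned in the abstract above can be illustrated with a minimal robust-optimization sketch: the inner layer finds the worst realization of an interval-uncertain parameter, the outer layer chooses the design. The objective function, interval bounds, and grid-search approach are all invented for illustration and are not the paper's method:

```python
def f(x, u):
    return (x - u) ** 2 + 0.1 * x  # objective with uncertain parameter u

U_LO, U_HI = 0.8, 1.2  # marginal interval of u (hypothetical)

def worst_case(x, steps=100):
    us = [U_LO + (U_HI - U_LO) * i / steps for i in range(steps + 1)]
    return max(f(x, u) for u in us)  # inner layer: worst realization of u

def robust_minimize(lo=0.0, hi=2.0, steps=200):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=worst_case)   # outer layer: best design, worst case

x_star = robust_minimize()
```

The paper replaces this brute-force nesting with an efficient iterative algorithm and handles dependent intervals via the parallelepiped model, which the independent interval above does not capture.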
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1165-1169 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1165-1169
article
A new distance measure including the weak preference relation: Application to the multiple criteria aggregation procedure for mixed evaluations
We introduce a new distance measure between two preorders that captures indifference, strict preference, weak preference and incomparability relations. This measure is the first to capture weak preference relations. We illustrate how this distance measure affords decision makers greater modeling power to capture their preferences, or uncertainty and ambiguity around them, by using our proposed distance measure in a multiple criteria aggregation procedure for mixed evaluations.
Multiple criteria analysis; Uncertainty modeling; Preference relations; Stochastic dominance; Distance measure;
http://www.sciencedirect.com/science/article/pii/S0377221714002756
Ben Amor, Sarah
Martel, Jean-Marc
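The flavor of a preorder distance covering the four relations named in the abstract above can be sketched as follows. The penalty table is made up for illustration; it is not the paper's measure:

```python
# Relations between a pair of alternatives: indifference 'I', strict
# preference 'P', weak preference 'Q', incomparability 'R'. The distance
# compares, pair by pair, which relation holds in each preorder.
PENALTY = {  # hypothetical cost of replacing one relation by another
    ("I", "I"): 0, ("P", "P"): 0, ("Q", "Q"): 0, ("R", "R"): 0,
    ("I", "Q"): 1, ("Q", "P"): 1, ("I", "P"): 2,
    ("I", "R"): 2, ("Q", "R"): 2, ("P", "R"): 3,
}

def pair_cost(r1, r2):
    return PENALTY[(r1, r2)] if (r1, r2) in PENALTY else PENALTY[(r2, r1)]

def distance(rel_a, rel_b):
    """rel_a, rel_b: dict mapping an (x, y) pair to its relation."""
    return sum(pair_cost(rel_a[p], rel_b[p]) for p in rel_a)

a = {("x", "y"): "P", ("x", "z"): "I", ("y", "z"): "Q"}
b = {("x", "y"): "Q", ("x", "z"): "I", ("y", "z"): "P"}
d = distance(a, b)
```

Note how weak preference sits between indifference and strict preference in this penalty table, which is what allows the measure to treat 'Q' as a distinct relation rather than collapsing it into 'P' or 'I'.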
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1155-1164 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1155-1164
article
Lift ticket prices and quality in French ski resorts: Insights from a non-parametric analysis
Using a unique data set with 168 ski resorts located in France, this paper investigates the relationship between lift ticket prices and supply-related characteristics of ski resorts. A non-parametric analysis combined with a principal component analysis is used to identify the set of efficient ski resorts, defined as those where the lift ticket price is the cheapest for a given level of quality. Results show that the average inefficiency per lift ticket price is less than 1.5 euros for resorts located in the Pyrenees and the Southern Alps. The average inefficiency is three times higher for ski resorts located in the Northern Alps, which is explained by the presence of large connected ski areas offering many more runs for a small surcharge.
Data envelopment analysis; Free disposal hull model; Quality; Lift ticket price; Ski resorts;
http://www.sciencedirect.com/science/article/pii/S0377221714002148
Wolff, François-Charles
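The free disposal hull notion used in the abstract above reduces to a simple dominance test: a resort is inefficient if some other resort offers at least as much quality for a lower or equal price. A minimal sketch with invented data (the real study uses 168 resorts and several quality dimensions):

```python
resorts = {            # name: (price_in_euros, quality_score) -- invented
    "A": (30.0, 5.0),
    "B": (35.0, 5.0),   # same quality as A but pricier -> dominated
    "C": (40.0, 8.0),
    "D": (45.0, 7.5),   # less quality than C at a higher price -> dominated
}

def is_fdh_efficient(name):
    p, q = resorts[name]
    for other, (po, qo) in resorts.items():
        if other != name and po <= p and qo >= q and (po < p or qo > q):
            return False  # 'other' dominates 'name'
    return True

efficient = sorted(n for n in resorts if is_fdh_efficient(n))
```

Unlike DEA, the FDH test compares only against observed resorts, not convex combinations of them, which is why it works directly on the raw data.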
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:966-974 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:966-974
article
Interior analysis of the green product mix solution
When modeling the optimal product mix under emission restrictions produces a solution with an unacceptable level of profit, the analyst is moved to investigate the cause(s). Interior analysis (IA) is proposed for this purpose. With IA, the analyst can investigate the impact of accommodating emission controls in a step-by-step, one-at-a-time manner and, in doing so, track how profit and other important features of the product mix degrade and identify the emission control enforcements to which the diminution may be attributed. In this way, the analyst can assist the manager in identifying implementation strategies. Although IA is presented within the context of a linear programming formulation of the green product mix problem, its methodology may be applied to other modeling frameworks. Quantity-dependent penalty rates and transformations of emissions to forms with or without economic value are included in the modeling and the illustrations of IA.
Linear programming; Product mix problem; Implementation strategy; Sustainability; 0/1 Mixed integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001623
Wellington, John F.
Guiffrida, Alfred L.
Lewis, Stephen A.
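The profit degradation tracked in the abstract above can be illustrated with a tiny two-product LP, solved here by vertex enumeration to stay dependency-free. All coefficients, including the emission cap, are invented for illustration:

```python
from itertools import combinations

# maximize 3*x1 + 5*x2  subject to  a1*x1 + a2*x2 <= b for each row:
constraints = [
    (1.0, 0.0, 4.0),    # capacity on product 1
    (0.0, 2.0, 12.0),   # capacity on product 2
    (3.0, 2.0, 18.0),   # shared machine hours
]
emission_cap = (2.0, 1.0, 8.0)   # 2*x1 + 1*x2 <= 8 tonnes (hypothetical)

def solve(cons):
    """Enumerate pairwise constraint intersections, keep feasible vertices,
    and return (profit, x1, x2) at the best one."""
    cons = cons + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x1, x2 >= 0
    best = None
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue
        x1 = (b * c2 - a2 * d) / det    # Cramer's rule
        x2 = (a1 * d - b * c1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in cons):
            profit = 3 * x1 + 5 * x2
            if best is None or profit > best[0]:
                best = (profit, x1, x2)
    return best

unrestricted = solve(constraints)              # profit without emission cap
green = solve(constraints + [emission_cap])    # profit with emission cap
```

Comparing the two solves shows the profit lost to the emission cap; IA as described in the paper refines this by introducing the controls one at a time and attributing the diminution to specific enforcements.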
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1021-1036 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1021-1036
article
Backward induction algorithm for a class of closed-loop Stackelberg games
In this paper, a new deterministic continuum-strategy two-player discrete-time dynamic Stackelberg game with fixed finite time duration and closed-loop information structure is proposed. The considered payoff functions can be used in a wide range of applications, mainly in conflicts over the consumption of a limited resource, where one player, called the leader, is a superior authority that chooses its strategy first, and the other player, called the follower, chooses afterwards.
Game theory; Closed-loop Stackelberg game; Leader–follower equilibrium; Backward induction algorithm; Game regulation; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001921
Kicsiny, R.
Varga, Z.
Scarelli, A.
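The backward induction structure in the abstract above can be sketched for a discretized resource-consumption game: at each stage the leader moves first, the follower best-responds, and the leader anticipates that response. The stage payoffs, action grid, and horizon are invented for illustration; the paper works with continuum strategies:

```python
ACTIONS = [0.0, 0.5, 1.0]   # share of the remaining resource consumed
T = 3                        # number of stages (hypothetical)

def stage_payoffs(stock, a_l, a_f):
    take_l = a_l * stock
    take_f = a_f * (stock - take_l)
    return take_l, take_f, stock - take_l - take_f

def solve(t, stock):
    """Backward induction: value pair (leader, follower) from stage t."""
    if t == T or stock <= 1e-9:
        return 0.0, 0.0
    best = None
    for a_l in ACTIONS:
        # the follower best-responds to the leader's announced action
        f_best = None
        for a_f in ACTIONS:
            g_l, g_f, rest = stage_payoffs(stock, a_l, a_f)
            v_l, v_f = solve(t + 1, rest)
            cand = (g_f + v_f, g_l + v_l)  # (follower value, leader value)
            if f_best is None or cand[0] > f_best[0]:
                f_best = cand
        if best is None or f_best[1] > best[1]:
            best = f_best                  # leader anticipates the response
    return best[1], best[0]

v_leader, v_follower = solve(0, stock=1.0)
```

With these linear payoffs the leader's equilibrium strategy is simply to consume everything at once, a degenerate outcome; the paper's payoff functions are richer, which is what makes the regulation analysis non-trivial.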
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:143-154 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:143-154
article
Optimisation of integrated reverse logistics networks with different product recovery routes
Awareness of the importance of product recovery has grown swiftly in the past few decades. This paper focuses on a problem of inventory control and production planning optimisation for a generic type of integrated Reverse Logistics (RL) network, which consists of a traditional forward production route, two alternative recovery routes (repair and remanufacturing) and a disposal route. It is assumed that demand and return quantities are uncertain. A quality level is assigned to each of the returned products; due to the uncertainty in the return quantity, the quantity of returned products of a certain quality level is uncertain too. The uncertainties are modelled using fuzzy trapezoidal numbers. Quality thresholds are used to segregate the returned products into the repair, remanufacturing or disposal routes. A two-phase fuzzy mixed integer optimisation algorithm is developed to solve the inventory control and production planning problem. In Phase 1, the uncertainties in the quantity and quality of product returns are considered to calculate the quantities to be sent to the different recovery routes. These outputs are inputs into Phase 2, which generates decisions on component procurement, production, repair and disassembly. Finally, numerical experiments and sensitivity analysis are carried out to better understand the effects of the quality of returns and of the RL network parameters on network performance. These parameters include the quantity of returned products, unit repair costs, unit production cost, setup costs and unit disposal cost.
Supply chain management; Reverse logistics; Quality of returned products; Uncertainty modelling; Inventory control; Fuzzy optimisation;
http://www.sciencedirect.com/science/article/pii/S0377221714002732
Niknejad, A.
Petrovic, D.
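The two ingredients named in the abstract above, trapezoidal fuzzy quantities and quality-threshold routing, can be sketched as follows. The thresholds, quality scores, and return quantities are invented for illustration:

```python
def centroid(a, b, c, d):
    """Centroid defuzzification of a trapezoidal fuzzy number (a<=b<=c<=d)."""
    num = (c**2 + d**2 + c*d) - (a**2 + b**2 + a*b)
    den = 3 * ((c + d) - (a + b))
    return (a + b + c + d) / 4 if den == 0 else num / den

def route(quality, repair_min=0.8, remanufacture_min=0.4):
    """Quality thresholds segregate returns into the three recovery routes."""
    if quality >= repair_min:
        return "repair"            # high quality: light recovery
    if quality >= remanufacture_min:
        return "remanufacture"     # medium quality: heavier recovery
    return "disposal"              # low quality

# quality score -> trapezoidal fuzzy return quantity (hypothetical data)
returns = {0.9: (40, 50, 60, 70), 0.5: (10, 20, 20, 30), 0.2: (5, 5, 10, 10)}
plan = {route(q): centroid(*trap) for q, trap in returns.items()}
```

Defuzzifying before routing, as done here, is only one simple option; the paper keeps the fuzziness inside a two-phase mixed integer optimisation rather than collapsing it up front.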
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1083-1094 2014-06-05 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1083-1094
article
Capturing and prioritizing students’ requirements for course design by embedding Fuzzy-AHP and linear programming in QFD
Customer requirements play a vital and important role in the design of products and services. Quality Function Deployment (QFD) is a popular, widely used method that helps translate customer requirements into design specifications. Thus, the foundation for a successful QFD implementation lies in the accurate capturing and prioritization of these requirements. This paper proposes and tests the use of an alternative framework for prioritizing students’ requirements within QFD. More specifically, Fuzzy Analytic Hierarchy Process (Fuzzy-AHP) and the linear programming method (LP-GW-AHP) based on Data Envelopment Analysis (DEA) are embedded into QFD (QFD-LP-GW-Fuzzy AHP).