2016-05-29T20:58:54Z
http://oai.repec.org/oai.php
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:186-198
article
Exact and heuristic algorithms for the design of hub networks with multiple lines
In this paper we study a hub location problem in which the hubs to be located must form a set of interconnecting lines. The objective is to minimize the total weighted travel time between all pairs of nodes while taking into account a budget constraint on the total set-up cost of the hub network. A mathematical programming formulation, a Benders-branch-and-cut algorithm and several heuristic algorithms, based on variable neighborhood descent, greedy randomized adaptive search, and adaptive large neighborhood search, are presented and compared to solve the problem. Numerical results on two sets of benchmark instances with up to 70 nodes and three lines confirm the efficiency of the proposed solution algorithms.
Hub location; Hub-and-spoke networks; Lines; Network design;
http://www.sciencedirect.com/science/article/pii/S0377221715003100
Martins de Sá, Elisangela
Contreras, Ivan
Cordeau, Jean-François
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:661-673
article
A model enhancement heuristic for building robust aircraft maintenance personnel rosters with stochastic constraints
This paper presents a heuristic approach to optimize staffing and scheduling at an aircraft maintenance company. The goal is to build robust aircraft maintenance personnel rosters that can achieve a certain service level while minimizing the total labor costs. Robust personnel rosters are rosters that can handle delays associated with stochastic flight arrival times. To deal with this stochasticity, a model enhancement algorithm is proposed that iteratively adjusts a mixed integer linear programming (MILP) model to a stochastic environment based on simulation results. We illustrate the performance of the algorithm with a computational experiment based on real life data of a large aircraft maintenance company located at Brussels Airport in Belgium. The obtained results are compared to deterministic optimization and straightforward optimization. Experiments demonstrate that our model can ensure a certain desired service level with an acceptable increase in labor costs when stochasticity is introduced in the aircraft arrival times.
Model enhancement; Aircraft maintenance; Stochastic optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500380X
De Bruecker, Philippe
Van den Bergh, Jorne
Beliën, Jeroen
Demeulemeester, Erik
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:154-169
article
Ant colony optimization based binary search for efficient point pattern matching in images
Point Pattern Matching (PPM) is the task of pairing up the points in two images of the same scene. Many approaches to point pattern matching exist in the literature; however, their drawback lies in their high computational complexity. To overcome this drawback, an Ant Colony Optimization based Binary Search Point Pattern Matching (ACOBSPPM) algorithm is proposed. In this approach, the edges of the image are stored in the form of point patterns. To match an incoming image with the stored images, the ant agent chooses a point value in the incoming image point pattern and employs a binary search method to find a match with the point values in the stored image point pattern chosen for comparison. Once a match occurs, the ant agent finds a match for the next point value in the incoming image point pattern by searching between the matching position and the maximum number of point values in the stored image point pattern. The stored image point pattern with the maximum number of matches is the image matching the incoming image. Experimental results show that the ACOBSPPM algorithm is more efficient than existing point pattern matching approaches in terms of time complexity and precision accuracy.
Decision support systems; Image recognition; Point pattern matching; Ant Colony Optimization; Binary search;
http://www.sciencedirect.com/science/article/pii/S0377221715002842
Sreeja, N.K.
Sankar, A.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:505-516
article
Control of Condorcet voting: Complexity and a Relation-Algebraic approach
We study the constructive variant of the control problem for Condorcet voting, where control is done by deleting voters. We prove that this problem remains NP-hard if, instead of Condorcet winners, the alternatives in the uncovered set win. Furthermore, we present a relation-algebraic model of Condorcet voting and relation-algebraic specifications of the dominance relation and the solutions of the control problem. All our relation-algebraic specifications can immediately be translated into the programming language of the OBDD-based computer system RelView. Our approach is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other voting rules and control problems.
Artificial intelligence; Condorcet voting; Control problem; Uncovered set; Relation algebra;
http://www.sciencedirect.com/science/article/pii/S0377221715003185
Berghammer, Rudolf
Schnoor, Henning
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:34-43
article
An accelerated branch-and-price algorithm for multiple-runway aircraft sequencing problems
This paper presents an effective branch-and-price (B&P) algorithm for multiple-runway aircraft sequencing problems. This approach improves the tractability of the problem by several orders of magnitude when compared with solving a classical 0–1 mixed-integer formulation over a set of computationally challenging instances. Central to the computational efficacy of the B&P algorithm is solving the column generation subproblem as an elementary shortest path problem with aircraft time-windows and non-triangular separation times using an enhanced dynamic programming procedure. We underscore in our computational study the algorithmic features that contribute, in our experience, to accelerating the proposed dynamic programming procedure and, hence, the overall B&P algorithm.
Aircraft sequencing; Branch-and-price; Column generation; Dynamic programming; Elementary shortest path problems;
http://www.sciencedirect.com/science/article/pii/S0377221715003124
Ghoniem, Ahmed
Farhadi, Farbod
Reihaneh, Mohammad
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:128-139
article
Efficient analysis of the MMAP[K]/PH[K]/1 priority queue
In this paper we consider the MMAP[K]/PH[K]/1 priority queue, both the case of preemptive resume and the case of non-preemptive service. The main idea of the presented analysis procedure is that the sojourn time of the low priority jobs in the preemptive case (and the waiting time distribution in the non-preemptive case) can be represented by the duration of the busy period of a special Markovian fluid model. By making use of recent results on the busy period analysis of Markovian fluid models, it is possible to calculate several queueing performance measures in an efficient way, including the sojourn time distribution (both in the time domain and in the Laplace transform domain), the moments of the sojourn time, the generating function of the queue length, the queue length moments, and the queue length probabilities.
Queueing; Preemptive resume priority queue; Non-preemptive priority queue; Matrix-analytic methods;
http://www.sciencedirect.com/science/article/pii/S0377221715001976
Horváth, Gábor
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:140-153
article
A noisy principal component analysis for forward rate curves
Principal Component Analysis (PCA) is the most common nonparametric method for estimating the volatility structure of Gaussian interest rate models. One major difficulty in the estimation of these models is the fact that forward rate curves are not directly observable from the market, so that non-trivial observational errors arise in any statistical analysis. In this work, we point out that classical PCA is not suitable for estimating factors of forward rate curves due to the presence of measurement errors induced by market microstructure effects and numerical interpolation. Our analysis indicates that PCA based on the long-run covariance matrix is capable of extracting the true covariance structure of the forward rate curves in the presence of observational errors. Moreover, it provides a significant reduction in the pricing errors due to noisy data typically found in forward rate curves.
Finance; Pricing; Principal component analysis; Term-structure of interest rates; HJM models;
http://www.sciencedirect.com/science/article/pii/S0377221715003318
Laurini, Márcio Poletti
Ohashi, Alberto
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:421-434
article
Bi-Objective Multi-Mode Project Scheduling Under Risk Aversion
The paper proposes a model for stochastic multi-mode resource-constrained project scheduling under risk aversion with the two objectives makespan and cost. Activity durations and costs are assumed as uncertain and modeled as random variables. For the scheduling part of the decision problem, the class of early-start policies is considered. In addition to the schedule, the assignment of execution modes to activities has to be selected. To take risk aversion into account, the approach of optimization under multivariate stochastic dominance constraints, recently developed in other fields, is adopted. For the resulting bi-objective stochastic integer programming problem, the Pareto frontier is determined by means of an exact solution method, incorporating a branch-and-bound technique based on the forbidden set branching scheme from stochastic project scheduling. Randomly generated test instances, partially derived from a test case from the PSPLIB, are used to show the computational feasibility of the approach.
Project scheduling; Multi-objective optimization; Stochastic optimization; Risk aversion; Stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S0377221715003768
Gutjahr, Walter J.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:293-306
article
Simulation-optimization approaches for water pump scheduling and pipe replacement problems
Network operation and rehabilitation are major concerns for water utilities due to their impact on providing a reliable and efficient service. Solving the optimization problems that arise in water networks is challenging mainly due to the nonlinearities inherent in the physics and the often binary nature of decisions. In this paper, we consider the operational problem of pump scheduling and the design problem of leaky pipe replacement. New approaches for these problems based on simulation-optimization are proposed as solution methodologies. For the pump scheduling problem, a novel decomposition technique uses solutions from a simulation-based sub-problem to guide the search. For the leaky pipe replacement problem a knapsack-based heuristic is applied. The proposed solution algorithms are tested and detailed results for two networks from the literature are provided.
Pump scheduling; Pipe replacement; Water networks; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003215
Naoum-Sawaya, Joe
Ghaddar, Bissan
Arandia, Ernesto
Eck, Bradley
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:413-420
article
On heuristic solutions for the stochastic flowshop scheduling problem
We address the problem of scheduling jobs in a permutation flowshop when their processing times follow a given distribution (the stochastic flowshop scheduling problem), with the objective of minimizing the expected makespan. For this problem, optimal solutions exist only for very specific cases. Consequently, some heuristics have been proposed in the literature, all of them with similar performance. In our paper, we first focus on the critical issue of estimating the expected makespan of a sequence and find that, for instances with medium/large variability (expressed as the coefficient of variation of the processing times of the jobs), the number of samples or simulation runs usually employed in the literature may not be sufficient to derive robust conclusions with respect to the performance of the different heuristics. We thus propose a procedure with a variable number of iterations that ensures that the percentage error in the estimation of the expected makespan is bounded with a very high probability. Using this procedure, we test the main heuristics proposed in the literature and find significant differences in their performance, in contrast with existing studies. We also find that the deterministic counterpart of the most efficient heuristic for the stochastic problem performs extremely well for most settings, which indicates that, in some cases, solving the deterministic version of the problem may produce competitive solutions for the stochastic counterpart.
Scheduling; Flowshop; Stochastic; Makespan objective; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715003781
Framinan, Jose M.
Perez-Gonzalez, Paz
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:281-292
article
Optimal firm growth under the threat of entry
The paper studies the incumbent-entrant problem in a fully dynamic setting. We find that under an open-loop information structure the incumbent anticipates entry by overinvesting, whereas in the Markov perfect equilibrium the incumbent slightly underinvests in the period before the entry. The entry cost level where entry accommodation passes into entry deterrence is lower in the Markov perfect equilibrium. Further we find that the incumbent’s capital stock level needed to deter entry is hump shaped as a function of the entry time, whereas the corresponding entry cost, where the entrant is indifferent between entry and non-entry, is U-shaped.
Economics; Game theory; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715003239
Kort, Peter M.
Wrzaczek, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:496-504
article
Stochastic inflow modeling for hydropower scheduling problems
We introduce a new stochastic model for inflow time series that is designed with the requirements of hydropower scheduling problems in mind. The model is an “iterated function system”: it models inflow as continuous, but the random innovation at each time step has a discrete distribution. With this inflow model, hydro-scheduling problems can be solved by the stochastic dual dynamic programming (SDDP) algorithm exactly as posed, without the additional sampling error introduced by sample average approximations. The model is fitted to univariate inflow time series by quantile regression. We consider various goodness-of-fit metrics for the new model and some alternatives to it, including performance in an actual hydro-scheduling problem. The numerical data used are for inflows to New Zealand hydropower reservoirs.
OR in energy; Hydro-thermal scheduling; Stochastic dual dynamic programming; Time series; Quantile regression;
http://www.sciencedirect.com/science/article/pii/S0377221715004129
Pritchard, Geoffrey
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:487-495
article
A direct search method for unconstrained quantile-based simulation optimization
Simulation optimization has gained popularity over the decades because of its ability to solve many practical problems that involve profound randomness. The methodology development of simulation optimization, however, is largely concerned with problems whose objective function is a mean-based performance metric. In this paper, we propose a direct search method to solve unconstrained simulation optimization problems with quantile-based objective functions. Because the proposed method does not require gradient estimation in the search process, it can be applied to solve many practical problems where the gradient of the objective function does not exist or is difficult to estimate. We prove that the proposed method possesses a desirable convergence guarantee, i.e., the algorithm converges to the true global optimum with probability one. An extensive numerical study shows that the performance of the proposed method is promising. Two illustrative examples are provided at the end to demonstrate the viability of the proposed method in real settings.
Simulation; Quantile; Direct search method; Nelder–Mead simplex method;
http://www.sciencedirect.com/science/article/pii/S0377221715003823
Chang, Kuo-Hao
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:674-684
article
A multi-criteria Police Districting Problem for the efficient and effective design of patrol sectors
The Police Districting Problem (PDP) concerns the efficient and effective design of patrol sectors in terms of performance attributes such as workload, response time, etc. A balanced definition of the patrol sector is desirable as it results in crime reduction and in better service. In this paper, a multi-criteria Police Districting Problem defined in collaboration with the Spanish National Police Corps is presented. This is the first model for the PDP that considers the attributes of area, risk, compactness, and mutual support. The decision-maker can specify his/her preferences on the attributes, on workload balance, and efficiency. The model is solved by means of a heuristic algorithm that is empirically tested on a case study of the Central District of Madrid. The solutions identified by the model are compared to patrol sector configurations currently in use and their quality is evaluated by public safety service coordinators. The model and the algorithm produce designs that significantly improve on the current ones.
Location; Police Districting Problem; Multi-criteria decision making;
http://www.sciencedirect.com/science/article/pii/S0377221715004130
Camacho-Collados, M.
Liberatore, F.
Angulo, J.M.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:517-527
article
Elicitation of multiattribute value functions through high dimensional model representations: Monotonicity and interactions
This work addresses the early phases of the elicitation of multiattribute value functions, proposing a practical method for assessing interactions and monotonicity. We exploit the link between multiattribute value functions and the theory of high dimensional model representations. The resulting elicitation method does not impose any a priori assumption on an individual's preference structure. We test the approach via an experiment in a riskless context in which subjects are asked to evaluate mobile phone packages that differ on three attributes.
Multiattribute value theory; High dimensional model representations; Value function elicitation; Decision analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715003355
Beccacece, Francesca
Borgonovo, Emanuele
Buzzard, Greg
Cillo, Alessandra
Zionts, Stanley
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:1-19
article
A review of theory and practice in scientometrics
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the “laws” of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
Altmetrics; Citations; H-index; Impact factor; Normalisation;
http://www.sciencedirect.com/science/article/pii/S037722171500274X
Mingers, John
Leydesdorff, Loet
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:609-618
article
A multi-objective approach with soft constraints for water supply and wastewater coverage improvements
In Brazil, due to public health, social and economic cohesion problems, access to water and wastewater services is certainly one of the main concerns of the different stakeholders in the Brazilian water sector. However, as the focus is mainly on the expansion and construction of new infrastructure, other features such as the robustness and resiliency of the systems are being neglected. This reason, among others, highlights the importance of sustainable development and financing for the Brazilian water sector. To pursue that goal, a multi-objective optimization model was built with the aim of formulating strategies to reach a predefined coverage level while minimizing the time and costs incurred, under specific hard and soft constraints assembled to address key sustainability concepts (e.g., affordability and coverage targets), as these should not be set aside. For that purpose, an achievement scalarizing function was adopted with three distinct scaling coefficient vectors for a given reference point. To solve this combinatorial optimization problem, we used a mixed integer-linear programming optimizer that resorts to branch-and-bound methods. The work developed paves the way toward the creation of a decision-aiding tool, without disregarding the number of steps that still need to be taken to achieve the proposed objectives.
Multiple criteria analysis; Branch and bound; Combinatorial optimization; Reference point approach; Coverage of water and wastewater services;
http://www.sciencedirect.com/science/article/pii/S037722171500329X
Pinto, F.S.
Figueira, J.R.
Marques, R.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:199-208
article
Pricing of fluctuations in electricity markets
In an electric power system, demand fluctuations may result in significant ancillary cost to suppliers. Furthermore, in the near future, deep penetration of volatile renewable electricity generation is expected to exacerbate the variability of demand on conventional thermal generating units. We address this issue by explicitly modeling the ancillary cost associated with demand variability. We argue that a time-varying price equal to the suppliers’ instantaneous marginal cost may not achieve social optimality, and that consumer demand fluctuations should be properly priced. We propose a dynamic pricing mechanism that explicitly encourages consumers to adapt their consumption so as to offset the variability of demand on conventional units. Through a dynamic game-theoretic formulation, we show that (under suitable convexity assumptions) the proposed pricing mechanism achieves social optimality asymptotically, as the number of consumers increases to infinity. Numerical results demonstrate that compared with marginal cost pricing, the proposed mechanism creates a stronger incentive for consumers to shift their peak load, and therefore has the potential to reduce the need for long-term investment in peaking plants.
OR in energy; Electricity market; Game theory; Dynamic pricing; Social welfare;
http://www.sciencedirect.com/science/article/pii/S0377221715003136
Tsitsiklis, John N.
Xu, Yunjian
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:400-412
article
An integrative cooperative search framework for multi-decision-attribute combinatorial optimization: Application to the MDPVRP
We introduce the integrative cooperative search method (ICS), a multi-thread cooperative search method for multi-attribute combinatorial optimization problems. ICS musters the combined capabilities of a number of independent exact or meta-heuristic solution methods. A number of these methods work on sub-problems defined by suitably selected subsets of decision-set attributes of the problem, while others combine the resulting partial solutions into complete ones and, eventually, improve them. All these methods cooperate through an adaptive search-guidance mechanism, using the central-memory cooperative search paradigm. Extensive numerical experiments explore the behavior of ICS and its interest through an application to the multi-depot, periodic vehicle routing problem, for which ICS improves the results of the current state-of-the-art methods.
Multi-attribute combinatorial optimization; Integrative cooperative search; Meta-heuristics; Decision-set decomposition; Multi-depot periodic vehicle routing;
http://www.sciencedirect.com/science/article/pii/S0377221715003793
Lahrichi, Nadia
Crainic, Teodor Gabriel
Gendreau, Michel
Rei, Walter
Crişan, Gloria Cerasela
Vidal, Thibaut
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:543-553
article
Elicitation of criteria importance weights through the Simos method: A robustness concern
In the field of multicriteria decision aid, the Simos method is considered an effective tool for assessing criteria importance weights. Nevertheless, the method's input data do not lead to a single weighting vector but to infinitely many, which often exhibit great diversification and threaten the stability and acceptability of the results. This paper proves that the feasible weighting solutions, of both the original and the revised Simos procedures, are vectors of a non-empty convex polyhedral set, which is why it proposes a set of complementary robustness analysis rules and measures, integrated in a Robust Simos Method. This framework supports analysts and decision makers in gaining insight into the degree of variation of the multiple acceptable sets of weights and their impact on the stability of the final results. In addition, the proposed measures determine whether, and which, actions should be implemented prior to reaching an acceptable set of criteria weights and forming a final decision. Two numerical examples are provided to illustrate the paper's evidence and to demonstrate the significance of consistently analyzing the robustness of the Simos method results, in both the original and the revised versions of the method.
Multiple criteria; Decision analysis; Criteria weights; Robustness analysis; Simos method;
http://www.sciencedirect.com/science/article/pii/S0377221715003306
Siskos, Eleftherios
Tsotsolas, Nikos
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:66-75
article
Stochastic lot sizing manufacturing under the ETS system for maximisation of shareholder wealth
The issues of carbon emission and global warming have increasingly aroused worldwide attention in recent years. Despite huge progress in carbon abatement, few research studies have reported on the impacts of carbon emission reduction mechanisms on manufacturing optimisation, which often leads to environmentally unsustainable operational decisions and misestimation of performance. This paper attempts to explore carbon management under the carbon emission trading mechanism for optimisation of lot sizing production planning in stochastic make-to-order manufacturing, with the objective of maximising shareholder wealth. We are concerned not only about the economic benefits to investors, but also about the environmental impacts associated with production planning. Numerical experiments illustrate the significant influences of carbon emission trading, pricing, and caps on the dynamic decisions of the lot sizing policy. The results highlight the critical role of carbon management in production planning for achieving both environmental and economic benefits. They also provide managerial insights into operations management to help mitigate environmental deterioration arising from carbon emission, as well as improve shareholder wealth.
Production planning; Lot sizing; Carbon emission; ETS; Shareholder wealth;
http://www.sciencedirect.com/science/article/pii/S0377221715003148
Wang, X.J.
Choi, S.H.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:250-262
article
On the estimation of the true demand in call centers with redials and reconnects
In practice, customers in many call centers often perform redials (i.e., reattempt after an abandonment) and reconnects (i.e., reattempt after an answered call). In the literature, call center models usually do not cover these features, while real data analysis and simulation results show that ignoring them inevitably leads to inaccurate estimation of the total inbound volume. Therefore, in this paper we propose a performance model that includes both features. In our model, the total volume consists of three types of calls: (1) fresh calls (i.e., initial call attempts), (2) redials, and (3) reconnects. In practice, the total volume is used to make forecasts, while according to the simulation results, this could lead to large forecast errors and, subsequently, wrong staffing decisions. However, most call center data sets do not have customer-identity information, which makes it difficult to identify how many calls are fresh and what fractions of the calls are redials and reconnects.
Queueing; Forecasting; Redials; Reconnects; Call centers;
http://www.sciencedirect.com/science/article/pii/S0377221715003112
Ding, S.
Koole, G.
van der Mei, R.D.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:651-660
article
The impact of the internet on the pricing strategies of the European low cost airlines
This study seeks to analyse the price determination of low cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes obtained reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily monitor the behaviour of users to adapt their pricing strategies using internal information.
Low cost airlines; Airline pricing; ICT; Travel industry strategies; Air fares;
http://www.sciencedirect.com/science/article/pii/S0377221715003859
Moreno-Izquierdo, L.
Ramón-Rodríguez, A.
Perles Ribes, J.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391
2015-06-18
RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:379-391
article
Scheduling resource-constrained projects with a flexible project structure
In projects with a flexible project structure, the activities that must be scheduled are not completely known in advance. Scheduling such projects includes deciding whether to perform particular activities. This decision also affects precedence constraints among the implemented activities. However, established model formulations and solution approaches for the resource-constrained project scheduling problem (RCPSP) assume that the project structure is provided in advance. In this paper, the traditional RCPSP is extended using a highly general model-endogenous decision on this flexible project structure. This extension is illustrated using the example of the aircraft turnaround process at airports. We present a genetic algorithm to solve this type of scheduling problem and evaluate it in an extensive numerical study.
Project scheduling; Genetic algorithms; RCPSP; Flexible projects;
http://www.sciencedirect.com/science/article/pii/S0377221715003732
Kellenbrink, Carolin
Helber, Stefan
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:232-241 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:232-241
article
Accommodating heterogeneity and nonlinearity in price effects for predicting brand sales and profits
We propose a hierarchical Bayesian semiparametric approach to account simultaneously for heterogeneity and functional flexibility in store sales models. To estimate own- and cross-price response flexibly, a Bayesian version of P-splines is used. Heterogeneity across stores is accommodated by embedding the semiparametric model into a hierarchical Bayesian framework that yields store-specific own- and cross-price response curves. More specifically, we propose multiplicative store-specific random effects that scale the nonlinear price curves while their overall shape is preserved. Estimation is fully Bayesian and based on novel MCMC techniques. In an empirical study, we demonstrate a higher predictive performance of our new flexible heterogeneous model over competing models that capture heterogeneity or functional flexibility only (or neither of them) for nearly all brands analyzed. In particular, allowing for heterogeneity in addition to functional flexibility can improve the predictive performance of a store sales model considerably, while incorporating heterogeneity alone only moderately improved or even decreased predictive validity. Taking into account model uncertainty, we show that the proposed model leads to higher expected profits as well as to materially different pricing recommendations.
Forecasting; Sales response modeling; Heterogeneity; Functional flexibility; Expected profits;
http://www.sciencedirect.com/science/article/pii/S0377221715001678
Lang, Stefan
Steiner, Winfried J.
Weber, Anett
Wechselberger, Peter
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:476-486 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:476-486
article
Commodity derivatives pricing with cointegration and stochastic covariances
Empirically, cointegration and stochastic covariances, including stochastic volatilities, are statistically significant for commodity prices and energy products. To capture such market phenomena, we develop continuous-time dynamics of cointegrated assets with a stochastic covariance matrix and derive the joint characteristic function of asset returns in closed form. The proposed model offers an endogenous explanation for the stochastic mean-reverting convenience yield. The time series of spot and futures prices of WTI crude oil and gasoline show a cointegration relationship under both the physical and risk-neutral measures. The proposed model also allows us to fit the observed term structure of futures prices and calibrate the market-implied cointegration relationship. We apply it to value options on a single commodity and on multiple commodities.
Option pricing; Cointegration; Stochastic covariance; Stochastic convenience yield;
http://www.sciencedirect.com/science/article/pii/S0377221715003847
Chiu, Mei Choi
Wong, Hoi Ying
Zhao, Jing
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:218-231 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:218-231
article
E-NAUTILUS: A decision support system for complex multiobjective optimization problems based on the NAUTILUS method
Interactive multiobjective optimization methods are not always easy to apply to (industrial) multiobjective optimization problems. There are at least two important factors to be considered with any interactive method: computationally expensive functions and aspects of human behavior. In this paper, we propose a method based on the existing NAUTILUS method and call it the Enhanced NAUTILUS (E-NAUTILUS) method. This method borrows the motivation of NAUTILUS along with the human aspects related to avoiding trading-off and anchoring bias, and extends its applicability to computationally expensive multiobjective optimization problems. In the E-NAUTILUS method, a set of Pareto optimal solutions is calculated in a pre-processing stage before the decision maker is involved. When the decision maker interacts with the solution process in the interactive decision making stage, no new optimization problem is solved, thus avoiding the waiting time for the decision maker to obtain new solutions according to her/his preferences. In this stage, starting from the worst possible objective function values, the decision maker is shown a set of points in the objective space, from which (s)he chooses one as the preferable point. At successive iterations, (s)he always sees points which improve all the objective values achieved by the previously chosen point. In this way, the decision maker remains focused on the solution process, as there is no loss in any objective function value between successive iterations. The last post-processing stage ensures the Pareto optimality of the final solution. A real-life engineering problem is used to demonstrate how E-NAUTILUS works in practice.
Multiple objective programming; Interactive methods; Multiple criteria optimization; Computational cost; Trading-off;
http://www.sciencedirect.com/science/article/pii/S0377221715003203
Ruiz, Ana B.
Sindhya, Karthik
Miettinen, Kaisa
Ruiz, Francisco
Luque, Mariano
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:44-50 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:44-50
article
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a globally optimal subset using the branch and bound (BB) algorithm is limited to problems in very low dimension, typically d ≤ 5, as the complexity of the problem increases exponentially with d. We introduce a bold pruning strategy in the BB algorithm that results in a significant reduction in computing time, at the price of a negligible loss of accuracy. The novelty of our algorithm is that the bounds at nodes of the BB tree come from pseudo-convexifications derived using a linearization technique with approximate bounds for the nonlinear terms. The approximate bounds are computed by solving an auxiliary semidefinite optimization problem. We show through a computational study that our algorithm performs well on a wide set of the most difficult instances of the LTSE problem.
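For orientation, the exact LTSE that such branch-and-bound methods accelerate can be computed by brute-force subset enumeration on tiny instances; the sketch below handles only simple (one-regressor) linear regression on illustrative data:

```python
from itertools import combinations

def lts_exact(x, y, h):
    """Least Trimmed Squares for simple linear regression by exhaustive
    subset enumeration -- the exact (exponential-time) baseline that
    branch-and-bound methods accelerate.  Feasible only for tiny n."""
    best = (float("inf"), None, None)      # (rss, intercept, slope)
    n = len(x)
    for idx in combinations(range(n), h):
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        mx = sum(xs) / h
        my = sum(ys) / h
        sxx = sum((v - mx) ** 2 for v in xs)
        b = sum((xs[i] - mx) * (ys[i] - my) for i in range(h)) / sxx
        a = my - b * mx
        rss = sum((ys[i] - (a + b * xs[i])) ** 2 for i in range(h))
        if rss < best[0]:
            best = (rss, a, b)
    return best

# Four points on y = 2x plus one gross outlier; h = 4 trims the outlier.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 2.0, 4.0, 6.0, 50.0]
rss, a, b = lts_exact(x, y, h=4)
```

The combinatorial explosion of `combinations(range(n), h)` is exactly what makes pruning strategies like the paper's necessary.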
Global optimization; Integer programming; High breakdown point regression; Branch and bound; Relaxation–linearization technique;
http://www.sciencedirect.com/science/article/pii/S0377221715003173
Flores, Salvador
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:528-542 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:528-542
article
On the exact solution of the multi-period portfolio choice problem for an exponential utility under return predictability
In this paper we derive the exact solution of the multi-period portfolio choice problem for an exponential utility function under return predictability. It is assumed that the asset returns depend on predictable variables and that the joint random process of the asset returns and the predictable variables follows a vector autoregressive process. We prove that the optimal portfolio weights depend on the covariance matrices of the next two periods and the conditional mean vector of the next period. The case without predictable variables and the case of independent asset returns are special cases of our solution. Furthermore, we provide an exhaustive empirical study where the cumulative empirical distribution function of the investor’s wealth is calculated using the exact solution. It is compared with the investment strategy obtained under the additional assumption that the asset returns are independently distributed.
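A much-reduced special case conveys the flavour of such closed-form solutions: with a single risky asset, normally distributed returns, no predictability and one period, exponential utility yields the well-known weight w* = (mu − r) / (gamma · sigma²). The parameter values below are illustrative; the paper's multi-period vector-autoregressive solution is far more general:

```python
def exp_utility_weight(mu, r, sigma2, gamma):
    """Optimal risky-asset weight for exponential utility with normally
    distributed returns (one-period, one-asset special case:
    w* = (mu - r) / (gamma * sigma^2))."""
    return (mu - r) / (gamma * sigma2)

# Illustrative numbers: 8% expected return, 2% risk-free rate,
# 20% volatility (variance 0.04), risk aversion 3.
w = exp_utility_weight(mu=0.08, r=0.02, sigma2=0.04, gamma=3.0)
```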
Multi-period asset allocation; Expected utility optimization; Exponential utility function; Return predictability;
http://www.sciencedirect.com/science/article/pii/S037722171500332X
Bodnar, Taras
Parolya, Nestor
Schmid, Wolfgang
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:435-449 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:435-449
article
Reverse supply chains: Effects of collection network and returns classification on profitability
Used products collected for value recovery are characterized by higher uncertainty regarding their quality condition compared to raw materials used in forward supply chains. Because of the need for timely information regarding their quality, a common business practice is to establish procedures for the classification of used products (returns), which is not always error-free. The existence of a multitude of sites where used products can be collected further increases the complexity of reverse supply chain design and management. In this paper we formulate the objective function for a reverse supply chain with multiple collection sites and the possibility of returns sorting, assuming general distributions of demand and returns quality in a single-period context. We derive conditions for the determination of the optimal acquisition and remanufacturing lot-sizing decisions under alternative locations of the unreliable classification/sorting operation. We provide closed-form expressions for the selection of the optimal sorting location in the special case of identical collection sites and guidelines for tackling the decision-making problem in the general case. Furthermore, we examine analytically the effect of the cost and accuracy of the classification procedure on the profitability of the alternative supply chain configurations. Our analysis, which is accompanied by a brief numerical investigation, offers insights regarding the impact of yield variability, number of collection sites, and location and characteristics of the returns classification operation both on the acquisition decisions and on the profitability of the reverse supply chain.
Multiple suppliers; Random yield; Location of sorting; Returns classification errors; Value of quality information;
http://www.sciencedirect.com/science/article/pii/S0377221715003744
Zikopoulos, Christos
Tagaras, George
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:471-475 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:471-475
article
Pricing and sales-effort investment under bi-criteria in a supply chain of virtual products involving risk
This work develops a stochastic model of a two-echelon supply chain of virtual products in which the decision makers—a manufacturer and a retailer—may be risk-sensitive. Virtual products allow the retailer to avoid holding costs and ensure timely fulfillment of demand with no risk of shortage. We expand on the work of Chernonog and Avinadav (2014), who investigated the pricing of virtual products under uncertain and price-dependent demand, by including sales-effort as a decision variable that affects demand. Whereas in the previous work equilibrium was obtained exactly as in a deterministic case for any utility function, herein it is not. Consequently, we focus on the strategies of both the manufacturer and the retailer under different profit criteria, including the use of bi-criteria. By formulating the problem as a Stackelberg game, we show that the problem can be analytically solved by assuming certain common structures of the demand function and of the preferences of both the manufacturer and the retailer with regard to risk. We extend the solution to the case of imperfect information regarding the preferences and offer guidelines for the formation of efficient sets of decisions under bi-criteria. Finally, we provide numerical results.
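The backward-induction logic of such a Stackelberg game can be sketched numerically; the grid search below uses a risk-neutral toy with linear demand, not the paper's risk-sensitive bi-criteria model, and all parameter values are assumptions for illustration:

```python
def stackelberg(wholesale_grid, retail_grid, demand, cost):
    """Backward-induction sketch of a manufacturer-retailer Stackelberg
    game over price grids: for each wholesale price the retailer's best
    response is computed first, then the manufacturer (the leader)
    optimizes against it."""
    best = None
    for w in wholesale_grid:
        # Retailer best response to wholesale price w.
        p = max(retail_grid, key=lambda p: (p - w) * demand(p))
        m_profit = (w - cost) * demand(p)
        if best is None or m_profit > best[0]:
            best = (m_profit, w, p)
    return best

demand = lambda p: max(0.0, 10 - p)        # illustrative linear demand
profit, w_star, p_star = stackelberg(
    wholesale_grid=[i * 0.5 for i in range(21)],
    retail_grid=[i * 0.5 for i in range(21)],
    demand=demand, cost=1.0)
```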
Supply chain; Game theory; Risk; Multiple criteria; Imperfect information;
http://www.sciencedirect.com/science/article/pii/S0377221715004142
Chernonog, Tatyana
Avinadav, Tal
Ben-Zvi, Tal
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:562-574 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:562-574
article
A systemic method for organisational stakeholder identification and analysis using Soft Systems Methodology (SSM)
This paper presents a systemic methodology for identifying and analysing the stakeholders of an organisation at many different levels. The methodology is based on soft systems methodology and is applicable to all types of organisation, both for-profit and non-profit. The methodology begins with the top-level objectives of the organisation, developed through debate and discussion, and breaks these down into the key activities needed to achieve them. A range of stakeholders is identified for each key activity. At the end, the functions and relationships of all the stakeholder groups can clearly be seen. The methodology is illustrated with an actual case study at Hunan University.
Stakeholder identification; Stakeholder analysis; Soft systems methodology;
http://www.sciencedirect.com/science/article/pii/S0377221715003860
Wang, Wei
Liu, Wenbin
Mingers, John
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:76-85 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:76-85
article
Optimal inventory policy for two substitutable products with customer service objectives
We consider a firm facing stochastic demand for two products with downward, supplier-driven substitution and customer service objectives. We assume both products are perishable or prone to obsolescence, hence the firm faces a single-period problem. The fundamental challenge facing the firm is to determine, in advance of observing demand, the profit-maximizing inventory levels of both products that will meet given service level objectives. Note that while we speak of inventory levels, the products may be either goods or services. We characterize the firm’s optimal inventory policy with and without customer service objectives. Results of a numerical study reveal the benefits obtained from substitution and show how optimal inventory levels are impacted by customer service objectives.
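The role of downward substitution can be illustrated with a small Monte Carlo profit estimate; the demand distributions, prices and fill rule below are assumptions for illustration, not the paper's analytical characterization:

```python
import random

def expected_profit(q1, q2, price, cost, n_sims=20000, seed=1):
    """Monte Carlo estimate of single-period profit for two products with
    downward (supplier-driven) substitution: unmet demand for product 2
    may be served by leftover units of product 1.  Uniform integer
    demands and common price/cost are illustrative assumptions."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_sims):
        d1 = random.randint(0, 10)    # stochastic demand, product 1
        d2 = random.randint(0, 10)    # stochastic demand, product 2
        s1 = min(q1, d1)              # product-1 sales
        s2 = min(q2, d2)              # product-2 sales from own stock
        sub = min(q1 - s1, d2 - s2)   # downward substitution 1 -> 2
        total += price * (s1 + s2 + sub) - cost * (q1 + q2)
    return total / n_sims

profit = expected_profit(q1=5, q2=5, price=3, cost=1)
```

Adding a chance constraint on fill rate would turn this into the service-level-constrained version the abstract describes.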
Inventory management; Capacity management; Substitution; Perishability; Customer service objective;
http://www.sciencedirect.com/science/article/pii/S0377221715003264
Chen, Xu
Feng, Youyi
Keblis, Matthew F.
Xu, Jianjun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:331-338 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:331-338
article
Tri-criterion modeling for constructing more-sustainable mutual funds
One of the most important factors shaping world outcomes is where investment dollars are placed. In this regard, there is the rapidly growing area called sustainable investing, in which environmental, social, and corporate governance (ESG) measures are taken into account. With people interested in this type of investing rarely able to gain exposure to the area other than through a mutual fund, we study a cross section of U.S. mutual funds to assess the extent to which ESG measures are embedded in their portfolios. Our methodology makes heavy use of points on the nondominated surfaces of many tri-criterion portfolio selection problems in which sustainability is modeled, after risk and return, as a third criterion. With the mutual funds acting as a filter, the question is: How effective is the sustainable mutual fund industry in carrying out its charge? Our findings are that the industry has substantial leeway to increase the sustainability quotients of its portfolios even at no cost to risk and return, thus implying that the funds are unnecessarily falling short on the reasons why investors invest in these funds in the first place.
Socially responsible investing; Multiple criteria optimization; Portfolio selection; Nondominated surfaces; Quadratically constrained linear programs;
http://www.sciencedirect.com/science/article/pii/S0377221715003288
Utz, Sebastian
Wimmer, Maximilian
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:619-630 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:619-630
article
A moment-matching method to generate arbitrage-free scenarios
We propose a new moment-matching method to build scenario trees that rule out arbitrage opportunities when describing the dynamics of financial assets. The proposed scenario generator is based on the monomial method, a technique to solve systems of algebraic equations. Extensive numerical experiments show the accuracy and efficiency of the proposed moment-matching method when solving financial problems in complete and incomplete markets.
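The idea of moment matching can be seen in the smallest possible case: two equally likely scenarios reproduce a mean and variance exactly, and for a single asset the no-arbitrage requirement reduces to the risk-free return lying strictly between the scenario returns. This toy stands in for, and is far simpler than, the monomial-method generator the paper proposes:

```python
import statistics

def two_point_scenarios(mu, sigma):
    """Smallest moment-matching scenario set: two equally likely returns
    mu - sigma and mu + sigma reproduce the mean and variance exactly."""
    return [mu - sigma, mu + sigma]

scen = two_point_scenarios(mu=0.05, sigma=0.2)  # illustrative moments
mean = statistics.fmean(scen)
var = statistics.pvariance(scen)

# A one-asset scenario set admits no arbitrage only if the risk-free
# return lies strictly between the scenario returns.
rf = 0.01
arbitrage_free = scen[0] < rf < scen[1]
```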
Scenarios; Monomial method; Moment-matching;
http://www.sciencedirect.com/science/article/pii/S0377221715003653
Staino, Alessandro
Russo, Emilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:392-399 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:392-399
article
Optimality cuts and a branch-and-cut algorithm for the K-rooted mini-max spanning forest problem
Let G = (V, E) be an undirected graph with costs associated with its edges and K pre-specified root vertices. The K-rooted mini-max spanning forest problem asks for a spanning forest of G defined by exactly K mutually disjoint trees. Each tree must contain a different root vertex and the cost of the most expensive tree must be minimum. This paper introduces a branch-and-cut algorithm for the problem. It involves a multi-start Linear Programming heuristic and the separation of some new optimality cuts. Extensive computational tests indicate that the new algorithm significantly improves on the results available in the literature: improvements are reflected in lower CPU times, smaller enumeration trees, and optimality certificates for previously unattainable K = 2 instances with as many as 200 vertices. Furthermore, for the first time, instances of the problem with K ∈ {3, 4} are solved to proven optimality.
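A simple greedy heuristic conveys the mini-max objective (always extend the currently cheapest tree); this Prim-style sketch on illustrative data is not the paper's branch-and-cut algorithm:

```python
def minimax_forest(n, edges, roots):
    """Greedy heuristic for the K-rooted mini-max spanning forest: the
    currently cheapest tree is extended by its cheapest edge to an
    unassigned vertex, which tends to balance tree costs."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    trees = {r: {r} for r in roots}
    cost = {r: 0 for r in roots}
    owner = {r: r for r in roots}
    blocked = set()
    while len(owner) < n:
        active = [r for r in roots if r not in blocked]
        if not active:                       # graph is disconnected
            break
        r = min(active, key=lambda t: cost[t])
        best = None
        for u in trees[r]:                   # cheapest outgoing edge
            for w, v in adj[u]:
                if v not in owner and (best is None or w < best[0]):
                    best = (w, v)
        if best is None:
            blocked.add(r)                   # this tree cannot grow
            continue
        w, v = best
        trees[r].add(v)
        owner[v] = r
        cost[r] += w
    return max(cost.values()), trees

max_cost, trees = minimax_forest(
    n=4, roots=[0, 3],
    edges=[(0, 1, 1), (1, 2, 5), (3, 2, 2), (0, 2, 10)])
```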
Combinatorial optimization; Branch-and-cut; K-rooted mini–max spanning forest problem; Optimality cuts;
http://www.sciencedirect.com/science/article/pii/S0377221715003719
da Cunha, Alexandre Salles
Simonetti, Luidi
Lucena, Abilio
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:345-378 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:345-378
article
The third comprehensive survey on scheduling problems with setup times/costs
Scheduling involving setup times/costs plays an important role in today's modern manufacturing and service environments for the delivery of reliable products on time. The setup process is not a value-added factor, and hence, setup times/costs need to be explicitly considered while scheduling decisions are made in order to increase productivity, eliminate waste, improve resource utilization, and meet deadlines. However, the vast majority of the existing scheduling literature, more than 90 percent, ignores this fact. Interest in scheduling problems where setup times/costs are explicitly considered began in the mid-1960s and has been increasing, though not at the anticipated level. The first comprehensive review paper (Allahverdi et al., 1999) on scheduling problems with setup times/costs appeared in 1999, covering about 200 papers from the mid-1960s to mid-1998, while the second comprehensive review paper (Allahverdi et al., 2008) covered about 300 papers published from mid-1998 to mid-2006. This paper is the third comprehensive survey paper, providing an extensive review of about 500 papers that have appeared from mid-2006 to the end of 2014, including static, dynamic, deterministic, and stochastic environments. This review classifies scheduling problems based on shop environments as single machine, parallel machine, flowshop, job shop, or open shop. It further classifies the problems as family and non-family as well as sequence-dependent and sequence-independent setup times/costs. Given that so many papers have been published in a relatively short period of time, different researchers have addressed the same problem independently, sometimes even using the same methodology. Throughout the survey, such independently addressed problems are identified, and the need to compare these results is emphasized. Moreover, based on performance measures and on shop and setup times/costs environments, the less studied problems have been identified and the need to address them is specified. The current survey paper, along with those of Allahverdi et al. (1999, 2008), constitutes an up-to-date survey of scheduling problems involving static, dynamic, deterministic, and stochastic problems for different shop environments with setup times/costs since the first research on the topic appeared in the mid-1960s.
Scheduling; Review; Setup time; Setup cost;
http://www.sciencedirect.com/science/article/pii/S0377221715002763
Allahverdi, Ali
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:575-581 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:575-581
article
On solving matrix games with pay-offs of triangular fuzzy numbers: Certain observations and generalizations
The purpose of this paper is to highlight a serious omission in the recent work of Li (2012) for solving two person zero-sum matrix games with pay-offs of triangular fuzzy numbers (TFNs) and to propose a new methodology for solving such games. Li (2012) proposed a method which always assures that the max player gain-floor and the min player loss-ceiling have a common TFN value. The present paper exhibits a flaw in this claim of Li (2012). The flaw arises because Li (2012) does not explain the meaning of a solution of the game under consideration. The present paper provides certain appropriate modifications of Li’s model to take care of this serious omission. These modifications, in conjunction with the results of Clemente, Fernandez, and Puerto (2011), lead to an algorithm to solve matrix games with pay-offs of general piecewise linear fuzzy numbers.
Game theory; Fuzzy pay-offs; Fuzzy numbers; Multiobjective optimization; Pareto optimality;
http://www.sciencedirect.com/science/article/pii/S0377221715003835
Chandra, S.
Aggarwal, A.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:86-107 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:86-107
article
A biased random-key genetic algorithm for the unequal area facility layout problem
This paper presents a biased random-key genetic algorithm (BRKGA) for the unequal area facility layout problem (UA-FLP), where a set of rectangular facilities with given area requirements has to be placed, without overlapping, on a rectangular floor space. The objective is to find the location and the dimensions of the facilities such that the sum of the weighted distances between the centroids of the facilities is minimized. A hybrid approach is developed combining a BRKGA, to determine the order of placement and the dimensions of each facility, a novel placement strategy, to position each facility, and a linear programming model, to fine-tune the solutions. The proposed approach is tested on 100 random datasets and 28 benchmark datasets taken from the literature and compared with 21 other benchmark approaches. The quality of the approach is validated by the improvement of the best known solutions for 19 of the 28 extensively studied benchmark datasets.
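The defining step of any BRKGA is decoding a chromosome of random keys into a discrete structure; here, sorting the keys yields a placement order (the layout-construction and LP fine-tuning stages of the paper are omitted):

```python
import random

def decode(keys):
    """Decode a random-key chromosome into a placement order: facility i
    is placed according to the rank of its key in the sorted key vector.
    This is the generic BRKGA decoding step, not the paper's full
    placement strategy."""
    return sorted(range(len(keys)), key=keys.__getitem__)

random.seed(7)
keys = [random.random() for _ in range(5)]  # one chromosome, 5 facilities
order = decode(keys)
```

Because any vector in [0, 1)^n decodes to a valid permutation, standard crossover and mutation never produce infeasible offspring, which is the main appeal of the random-key encoding.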
Facilities planning and design; Facility layout; Biased random-key genetic algorithms; Random-keys;
http://www.sciencedirect.com/science/article/pii/S0377221715003227
Gonçalves, José Fernando
Resende, Mauricio G.C.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:108-118 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:108-118
article
Comments on the EOQ model for deteriorating items with conditional trade credit linked to order quantity in the supply chain management
Ouyang et al. (2009) consider an economic order quantity (EOQ) model for deteriorating items with a partially permissible delay in payments linked to order quantity. Their inventory model is practical, but it has some logical defects from a mathematical viewpoint. In this paper, the functional behaviors of the annual total relevant costs are explored by rigorous mathematical methods. A complete solution procedure is also developed to make up for the shortcomings of Ouyang et al. (2009). In numerical examples, it is shown that the new solution procedure avoids wrong decisions and the resulting cost penalties.
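For reference, the classical EOQ baseline that such deterioration-and-trade-credit models extend is a one-line formula; the parameter values are illustrative:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classical EOQ (no deterioration, no trade credit):
    q* = sqrt(2 * D * K / h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

# Illustrative data: annual demand 1200 units, $50 per order,
# $6 per unit per year to hold.
q = eoq(demand=1200, order_cost=50, holding_cost=6)
```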
Inventory; EOQ; Trade credit; Partially permissible delay in payments; Deteriorating items;
http://www.sciencedirect.com/science/article/pii/S0377221715003665
Ting, Pin-Shou
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:339-342 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:339-342
article
Optimal shelf-space stocking policy using stochastic dominance under supply-driven demand uncertainty
In this paper, we develop an optimal shelf-space stocking policy when demand, in addition to the exogenous uncertainty, is influenced by the amount of inventory displayed (supply) on the shelves. Our model exploits a stochastic dominance condition: we assume that the distribution of realized demand with a higher stocking level stochastically dominates the distribution of realized demand with a lower stocking level. We show that the critical fractile with endogenous demand may not exceed the critical fractile of the classical newsvendor model. Our computational results validate the optimality of the amount of units stocked on the retail shelves.
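The classical newsvendor critical fractile referenced above is easy to state in code; the uniform demand distribution below is an illustrative choice:

```python
def newsvendor_quantity(price, cost, salvage, demand_cdf_inv):
    """Classical newsvendor stocking level q* = F^{-1}((p - c)/(p - s)).
    The paper shows the fractile under supply-driven demand cannot
    exceed this classical one."""
    fractile = (price - cost) / (price - salvage)
    return demand_cdf_inv(fractile)

# Demand uniform on [0, 100]: F^{-1}(u) = 100 * u (illustrative).
q = newsvendor_quantity(price=10, cost=4, salvage=1,
                        demand_cdf_inv=lambda u: 100 * u)
```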
Displayed inventory; Stochastic dominance; Newsvendor; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S0377221715003240
Amit, R.K.
Mehta, Peeyush
Tripathi, Rajeev R.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:51-65 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:51-65
article
An efficient genetic algorithm with a corner space algorithm for a cutting stock problem in the TFT-LCD industry
In this study, we investigate a two-dimensional cutting stock problem in the thin film transistor liquid crystal display industry. Given the lack of an efficient and effective mixed production method that can produce various sizes of liquid crystal display panels from a glass substrate sheet, thin film transistor liquid crystal display manufacturers have relied on the batch production method, which only produces one size of liquid crystal display panel from a single substrate. However, batch production is not an effective or flexible strategy because it increases production costs by using an excessive number of glass substrate sheets and causes wastage costs from unused liquid crystal display panels. A number of mixed production approaches or algorithms have been proposed. However, these approaches cannot solve industrial-scale two-dimensional cutting stock problems efficiently because of their computational complexity. We propose an efficient and effective genetic algorithm that incorporates a novel placement procedure, called a corner space algorithm, and a mixed integer programming model to resolve the problem. The key objectives are to reduce the total production costs and to satisfy the requirements of customers. Our computational results show that, in terms of solution quality and computation time, the proposed method significantly outperforms existing approaches.
Two-dimensional cutting; Mixed production; Genetic algorithm; TFT-LCD; Corner space algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715003379
Lu, Hao-Chun
Huang, Yao-Huei
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:263-280 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:263-280
article
Joint optimization for coordinated configuration of product families and supply chains by a leader-follower Stackelberg game
Product family design by module configuration is conducive to accommodating product variety while maintaining mass production efficiency. Effective fulfillment of product families necessitates joint decision making of product family configuration (PFC) and downstream supply chain configuration (SCC), due to nowadays manufacturers’ moving towards assembly-to-order production throughout a distributed supply chain network. Existing decision models for joint optimization of product family and supply chain configuration are originated from an “all-in-one” approach that assumes both PFC and SCC decisions can be integrated into one optimization problem by aggregating two different types of objectives into a single objective function. Such an assumption neglects the complex tradeoffs underlying two different decision making problems and fails to reveal the inherent coupling of PFC and SCC.
Product family; Supply chain; Module configuration; Stackelberg game; Bi-level optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500315X
Yang, Dong
Jiao, Jianxin (Roger)
Ji, Yangjian
Du, Gang
Helo, Petri
Valente, Anna
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:554-561 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:554-561
article
A nonparametric methodology for evaluating convergence in a multi-input multi-output setting
This paper presents a novel nonparametric methodology to evaluate convergence in an industry, considering a multi-input multi-output setting for the assessment of total factor productivity. In particular, we develop two new indexes to evaluate σ-convergence and β-convergence that can be computed using nonparametric techniques such as Data Envelopment Analysis. The methodology developed is particularly useful to enhance productivity assessments based on the Malmquist index. The methodology is applied to a real world context, consisting of a sample of Portuguese construction companies that operated in the sector between 2008 and 2010. The empirical results show that Portuguese companies tended to converge, both in the sense of σ and β, in all construction activity segments in the aftermath of the financial crisis.
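The notion of σ-convergence can be sketched for scalar productivity as a declining cross-sectional dispersion of log productivity; the figures below are made up, and the paper's index instead operates on DEA-based total factor productivity in a multi-input multi-output setting:

```python
import math
import statistics

def sigma_convergence(productivity_by_year):
    """sigma-convergence check: the cross-sectional dispersion of log
    productivity should fall over time.  Scalar-productivity sketch of
    the idea only."""
    years = sorted(productivity_by_year)
    disp = [statistics.pstdev([math.log(p) for p in productivity_by_year[y]])
            for y in years]
    return disp, all(b <= a for a, b in zip(disp, disp[1:]))

# Illustrative productivity of three firms over three years.
disp, converging = sigma_convergence({
    2008: [1.0, 2.0, 4.0],
    2009: [1.2, 2.0, 3.2],
    2010: [1.5, 2.0, 2.6],
})
```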
Convergence; Productivity; Malmquist index; Data envelopment analysis; Construction industry;
http://www.sciencedirect.com/science/article/pii/S0377221715003872
Horta, Isabel M.
Camanho, Ana S.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:597-608 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:597-608
article
Exact and heuristic approaches to the airport stand allocation problem
The Stand Allocation Problem (SAP) consists in assigning aircraft activities (arrival, departure and intermediate parking) to aircraft stands (parking positions) with the objective of maximizing the number of passengers/aircraft at contact stands and minimizing the number of towing movements, while respecting a set of operational and commercial requirements. We first prove that the problem of assigning each operation to a compatible stand is NP-complete by a reduction from the circular arc graph coloring problem. As a corollary, this implies that the SAP is NP-hard. We then formulate the SAP as a Mixed Integer Program (MIP) and strengthen the formulation in several ways. Additionally, we introduce two heuristic algorithms based on a spatial and time decomposition leading to smaller MIPs. The methods are tested on realistic scenarios based on actual data from two major European airports. We compare the performance and the quality of the solutions with state-of-the-art algorithms. The results show that our MIP-based methods provide significant improvements over the solutions outlined in previously published approaches. Moreover, their low computation times make them very practical.
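Stripped of compatibility restrictions and preferences, the feasibility core of the problem (non-overlapping activities per stand) is an interval-partitioning problem, which a greedy heap-based sketch solves with the fewest stands; contact-stand preferences and towing, central to the paper, are ignored here:

```python
import heapq

def assign_stands(activities):
    """Interval-partitioning sketch of the feasibility core of stand
    allocation: activities (arrival, departure) are greedily assigned so
    that no two overlap on the same stand, reusing stands whenever
    possible."""
    order = sorted(range(len(activities)), key=lambda i: activities[i][0])
    free = []                          # heap of (departure time, stand id)
    assignment = {}
    n_stands = 0
    for i in order:
        arr, dep = activities[i]
        if free and free[0][0] <= arr:       # reuse an emptied stand
            _, stand = heapq.heappop(free)
        else:                                # open a new stand
            stand = n_stands
            n_stands += 1
        assignment[i] = stand
        heapq.heappush(free, (dep, stand))
    return assignment, n_stands

# Four illustrative activities as (arrival, departure) times.
assignment, n_stands = assign_stands([(0, 3), (1, 4), (3, 5), (4, 6)])
```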
Mixed integer programming; Gate assignment problem; Heuristic algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221715003331
Guépet, J.
Acuna-Agost, R.
Briant, O.
Gayon, J.P.
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:209-217 2015-06-18 RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:209-217
article
A generalized equilibrium efficient frontier data envelopment analysis approach for evaluating DMUs with fixed-sum outputs
The recently published equilibrium efficient frontier data envelopment analysis (EEFDEA) approach (Yang et al., 2014) represents a step forward in evaluating decision-making units (DMUs) with fixed-sum outputs compared to prior approaches such as the fixed-sum outputs DEA (FSODEA) approach (Yang et al., 2011) and the zero-sum gains DEA (ZSG-DEA) approach (Lins et al., 2003). Building on the EEFDEA approach, in this paper we propose a generalized equilibrium efficient frontier data envelopment analysis (GEEFDEA) approach that improves and strengthens it. Compared to the EEFDEA approach, the proposed approach makes three improvements: (1) it does not require determining the evaluation order in advance, overcoming the limitation that different evaluation orders lead to different results; (2) the equilibrium efficient frontier is reached in a single step regardless of how many DMUs there are, which greatly simplifies the procedure, especially when the number of DMUs is large; and (3) the constraint in prior approaches that the signs of each DMU's output adjustments must be the same (all non-positive or all non-negative) is relaxed. In this sense, the result obtained by the proposed approach is more consistent with the demands of practical applications. Finally, the proposed approach, combined with assurance regions (AR), is applied to the data set of the 2012 London Olympic Games.
Data envelopment analysis (DEA); Generalized equilibrium efficient frontier; Fixed sum outputs; Assurance region;
http://www.sciencedirect.com/science/article/pii/S0377221715003161
Yang, Min
Li, Yong Jun
Liang, Liang
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:320-3302015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:320-330
article
On the value of exposure and secrecy of defense system: First-mover advantage vs. robustness
It is commonly accepted in the literature that, when facing a strategic terrorist, the government can be better off by manipulating the terrorist's target selection: exposing her defense levels and thus moving first. However, the terrorist's private information may significantly affect this first-mover advantage, an effect that has not been extensively studied in the literature. To explore the impact of asymmetry between the government's and the terrorist's valuations of targets on the defense equilibrium, we propose a model in which the government chooses between disclosure (a sequential game) and secrecy (a simultaneous game) of her defense system. Our analysis shows that the government's first-mover advantage in the sequential game is considerable only when government and terrorist hold relatively similar valuations of the targets. In contrast, we find that the government no longer benefits from moving first by exposing her defense levels when the divergence between the two valuations is high. This is due to the robustness of the defense system under secrecy, in the sense that all targets are defended in equilibrium irrespective of how the terrorist's valuation of targets differs from the government's. We identify two phenomena that lead to this result. First, when the terrorist holds a significantly higher valuation of targets than the government believes, the government may waste her budget in the sequential game by over-investing in the high-valued targets. Second, when the terrorist holds a significantly lower valuation, the government may incur a higher expected damage in the sequential game because the low-valued targets are left undefended. Finally, we believe this paper provides novel insights for homeland security resource allocation problems.
Defense system; Game Theory; Secrecy; Exposure; Robustness;
http://www.sciencedirect.com/science/article/pii/S0377221715003367
Nikoofal, Mohammad E.
Zhuang, Jun
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:119-1272015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:119-127
article
Multivariate control charts based on the James–Stein estimator
In this study, we focus on improving parameter estimation in the Phase I study to construct more accurate Phase II control limits for monitoring multivariate quality characteristics. For a multivariate normal distribution with unknown mean vector, the usual mean estimator is known to be inadmissible under the squared error loss function when the dimension of the variables is greater than two. Shrinkage estimators, such as the James–Stein estimators, are shown in the literature to have better performance than the conventional estimators. We utilize the James–Stein estimators to improve the Phase I parameter estimation. Multivariate control limits for Phase II monitoring based on the improved estimators are proposed in this study. The resulting control charts, JS-type charts, are shown to have substantial performance improvement over the existing ones.
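The shrinkage idea at the core of the abstract can be sketched in a few lines. The snippet below is a minimal positive-part James–Stein estimator for a p-dimensional mean with known variance; the data and the known-variance assumption are illustrative, and the paper's full Phase I/II control-chart construction is not reproduced here.

```python
import numpy as np

# Positive-part James-Stein shrinkage of a p-dimensional sample mean
# (known-variance case; requires p >= 3).
def james_stein(x_bar, sigma2, n):
    p = x_bar.size
    shrink = 1.0 - (p - 2) * (sigma2 / n) / float(x_bar @ x_bar)
    return max(shrink, 0.0) * x_bar  # shrink the mean toward the origin

x_bar = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical sample mean of n = 10 observations
theta_hat = james_stein(x_bar, sigma2=1.0, n=10)
```

The estimator dominates the raw sample mean in squared error loss for p ≥ 3, which is what motivates its use for Phase I estimation.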
Average run length; Control chart; Multivariate normal distribution; James–Stein estimator;
http://www.sciencedirect.com/science/article/pii/S0377221715001666
Wang, Hsiuying
Huwang, Longcheen
Yu, Jeng Hung
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:641-6502015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:641-650
article
Optimal design of bilateral contracts for energy procurement
In this paper, we consider the problem of optimizing the portfolio of an aggregator that interacts with the energy grid via bilateral contracts. The purpose of the contracts is to achieve the pointwise procurement of energy to the grid. The challenge raised by the coordination of scattered resources and the securing of obligations over the planning horizon is addressed through a twin-time scale model, where robust short term operational decisions are contingent on long term resource usage incentives that embed the full extent of contract specifications.
Distributed energy resource; Bilateral contract; Dynamic resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221715003707
Gilbert, François
Anjos, Miguel F.
Marcotte, Patrice
Savard, Gilles
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:307-3192015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:307-319
article
Cost-effectiveness measures on convex and nonconvex technologies
Camanho and Dyson (2005) extended Shephard's (1974) revenue-indirect cost efficiency approach to a cost-effectiveness framework, which helps to assess the ability of a firm to achieve the current revenue (expressed in the firm's own prices and quantities) at minimum cost. The degree of cost-effectiveness is quantified as the ratio of the minimum cost to the observed cost of the evaluated firm, where the minimum cost is computed by simultaneously adjusting the output levels at the current revenue. In this paper, we develop two cost-effectiveness approaches based on convex data envelopment analysis and nonconvex free disposable hull technologies. The objectives of this paper are threefold. Firstly, we develop a convex cost-effectiveness (CCE) measure which is equivalent to the Camanho–Dyson CCE measure under the constant returns-to-scale assumption. Secondly, we introduce three nonconvex cost-effectiveness (NCCE) measures which are shown to be equivalent under each returns-to-scale assumption on the nonconvex technology. Finally, we apply our framework to real data.
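The nonconvex side of such frameworks has a computationally pleasant feature worth illustrating: under free disposability and no convexification, the minimum cost of producing at least a target output is attained at some observed unit, so a plain enumeration suffices. The sketch below computes ordinary FDH cost efficiency on hypothetical data; it illustrates the enumeration idea only, not the paper's revenue-indirect cost-effectiveness measures.

```python
# Hypothetical observed units: input vectors x_j, output vectors y_j,
# and the evaluated firm's input prices.
inputs  = [[2.0, 1.0], [1.0, 2.0], [3.0, 3.0]]
outputs = [[1.0], [1.0], [2.0]]
prices  = [1.0, 1.0]

def fdh_min_cost(y_target):
    # Minimum cost over observed units whose outputs dominate y_target.
    feasible = [
        sum(p * x for p, x in zip(prices, xj))
        for xj, yj in zip(inputs, outputs)
        if all(a >= b for a, b in zip(yj, y_target))
    ]
    return min(feasible)

observed_cost = 4.0                          # the evaluated firm's own cost
efficiency = fdh_min_cost([1.0]) / observed_cost
```

Convex (DEA) variants of the same measure require solving a linear program instead of this enumeration.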
Data envelopment analysis (DEA); Free disposal hull (FDH); Convex cost-effectiveness (CCE); Nonconvex cost-effectiveness (NCCE); Returns-to-scale;
http://www.sciencedirect.com/science/article/pii/S0377221715002751
Fukuyama, Hirofumi
Shiraz, Rashed Khanjani
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:462-4702015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:462-470
article
Decentral allocation planning in multi-stage customer hierarchies
This paper presents a novel allocation scheme to improve profits when splitting a scarce product among customer segments. These segments differ by demand and margin and they form a multi-level tree, e.g. according to a geography-based organizational structure. In practice, allocation has to follow an iterative process in which higher level quotas are disaggregated one level at a time, only based on local, aggregate information. We apply well-known econometric concepts such as the Lorenz curve and Theil’s index of inequality to find a non-linear approximation of the profit function in the customer tree. Our resulting Approximate Profit Decentral Allocation (ADA) scheme ensures that a group of truthfully reporting decentral planners makes quasi-coordinated decisions in support of overall profit-maximization in the hierarchy. The new scheme outperforms existing simple rules by a large margin and comes close to the first-best theoretical solution under a central planner and central information.
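Of the econometric concepts the abstract borrows, Theil's index is the most compact to state. The function below is the standard Theil T inequality index on a hypothetical allocation vector; it is offered as background on the building block, not as the paper's ADA scheme.

```python
import math

def theil_index(x):
    # Theil's T inequality index: 0 for a perfectly equal allocation,
    # ln(n) when everything is concentrated in one element.
    mu = sum(x) / len(x)
    return sum((xi / mu) * math.log(xi / mu) for xi in x if xi > 0) / len(x)
```

Applied to quotas in a customer tree, a low index means the scarce product is spread evenly across a node's children, a high one that it is concentrated in a few of them.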
Supply chain management; Demand fulfillment; Allocation planning; Customer hierarchies; Customer heterogeneity;
http://www.sciencedirect.com/science/article/pii/S0377221715003811
Vogel, Sebastian
Meyr, Herbert
oai:RePEc:eee:ejores:v:246:y:2015:i:1:p:20-332015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:1:p:20-33
article
Solving stochastic resource-constrained project scheduling problems by closed-loop approximate dynamic programming
Project scheduling problems with both resource constraints and uncertain task durations have applications in a variety of industries. While the existing research literature has focused on finding an a priori open-loop task sequence that minimizes the expected makespan, finding a dynamic and adaptive closed-loop policy has been regarded as computationally intractable. In this research, we develop effective and efficient approximate dynamic programming (ADP) algorithms based on the rollout policy for this category of stochastic scheduling problems. To enhance the performance of the rollout algorithm, we employ constraint programming (CP) to improve the base policy offered by a priority-rule heuristic. We further devise a hybrid ADP framework that integrates both the look-back and look-ahead approximation architectures, to simultaneously achieve both the quality of a rollout (look-ahead) policy that sequentially improves a task sequence and the efficiency of a lookup table (look-back) approach. Computational results on the benchmark instances show that our hybrid ADP algorithm obtains solutions competitive with state-of-the-art algorithms in reasonable computational time. It performs particularly well for instances with non-symmetric probability distributions of task durations.
Resource-constrained project scheduling; Uncertain task durations; Stochastic scheduling; Approximate dynamic programming; Simulation;
http://www.sciencedirect.com/science/article/pii/S037722171500288X
Li, Haitao
Womer, Norman K.
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:582-5962015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:582-596
article
Methods for solving the mean query execution time minimization problem
One of the most significant and common techniques to accelerate user queries in multidimensional databases is view materialization. The problem of choosing an appropriate part of the data structure for materialization under limited resources is known as the view selection problem. In this paper, the problem of minimizing the mean query execution time under limited storage space is studied. Different heuristics based on a greedy method are examined, proofs regarding their performance are presented, and modifications to them are proposed, which not only improve the solution cost but also shorten the running time. Additionally, the heuristics and a widely used Integer Programming solver are experimentally compared with respect to running time and solution cost. What distinguishes this comparison is its comprehensiveness, obtained through the use of performance profiles. Two computational effort reduction schemas, which significantly accelerate heuristics as well as optimal algorithms without increasing the value of the cost function, are also proposed. The presented experiments were done on a large dataset with special attention to large problems, rarely considered in previous experiments. The main disadvantage of a greedy method noted in the literature is its long running time. The results of the conducted experiments show that the modification of the greedy algorithm, together with the computational effort reduction schemas presented in this paper, yields a method that finds a solution in short time, even for large lattices.
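The greedy family the abstract studies can be sketched as a benefit-per-unit-space selection under a storage budget. The instance below (view names, sizes, benefits, budget) is hypothetical, and the sketch ignores that in a real view-selection lattice the benefits of the remaining candidates change after each pick; it shows the skeleton of the greedy method only, not the paper's improved variant.

```python
# Each candidate view: (size, estimated query-time saving if materialized).
views = {"v1": (10, 100), "v2": (20, 150), "v3": (15, 60)}
budget = 30

chosen, used = [], 0
remaining = dict(views)
while remaining:
    # consider only views that still fit in the storage budget
    fitting = {k: v for k, v in remaining.items() if used + v[0] <= budget}
    if not fitting:
        break
    # pick the best benefit-per-unit-space ratio
    best = max(fitting, key=lambda k: fitting[k][1] / fitting[k][0])
    chosen.append(best)
    used += remaining.pop(best)[0]
```

On this instance the greedy picks v1 and v2 and then stops because v3 no longer fits.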
Decision support systems; Heuristics; OLAP; View materialization; View selection problem;
http://www.sciencedirect.com/science/article/pii/S0377221715003343
Łatuszko, Marek
Pytlak, Radosław
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:631-6402015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:631-640
article
A multi-step rolled forward chance-constrained model and a proactive dynamic approach for the wheat crop quality control problem
Handling weather uncertainty during the harvest season is an indispensable aspect of seed gathering activities. More precisely, this study focuses on the multi-period wheat quality control problem during the crop harvest season under meteorological uncertainty. To alleviate the problem's curse of dimensionality and to faithfully reflect exogenous uncertainties revealed progressively over time, we propose a multi-step joint chance-constrained model rolled forward step by step. This model is subsequently solved by a proactive dynamic approach, specially conceived for this purpose. On real-world derived instances, the obtained computational results exhibit proactive and accurate harvest scheduling solutions for the wheat crop quality control problem.
OR in agriculture; Multi-step joint chance constrained programming; Proactive dynamic approach; Exogenous Markov decision process; Wheat crop quality control;
http://www.sciencedirect.com/science/article/pii/S0377221715003689
Borodin, Valeria
Bourtembourg, Jean
Hnaien, Faicel
Labadie, Nacima
oai:RePEc:eee:ejores:v:246:y:2015:i:2:p:450-4612015-06-18RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:2:p:450-461
article
A frontier measure of U.S. banking competition
The three main measures of competition (HHI, Lerner index, and H-statistic) are uncorrelated for U.S. banks. We investigate why this occurs, propose a frontier measure of competition, and apply it to five major bank service lines. Fee-based banking services comprise 35 percent of bank revenues so assessing competition by service line is preferred to using a single measure for traditional activities extended to the entire bank. As the Lerner index and the H-statistic together explain only 1 percent of HHI variation and the HHI is similarly unrelated to the frontier method developed here, current merger/acquisition guidelines should be adjusted as banking concentration seems unrelated to likely more accurate competition measures.
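For readers unfamiliar with the first of the three measures, the HHI is simple to compute from market shares; the definition below is the standard one used in merger guidelines and is given for illustration only (the paper's frontier measure is not reproduced here).

```python
# Herfindahl-Hirschman Index from percentage market shares: the sum of
# squared shares, ranging from near 0 (atomistic market) to 10,000 (monopoly).
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)
```

A market split evenly among four banks, for example, scores 2,500, the conventional threshold region for a "highly concentrated" market.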
(D) Productivity and competitiveness; Competition; Banks;
http://www.sciencedirect.com/science/article/pii/S0377221715003896
Bolt, Wilko
Humphrey, David
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:214-2252016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:214-225
article
Ambiguity in risk preferences in robust stochastic optimization
We consider robust stochastic optimization problems for risk-averse decision makers, where there is ambiguity about both the decision maker’s risk preferences and the underlying probability distribution. We propose and analyze a robust optimization problem that accounts for both types of ambiguity. First, we derive a duality theory for this problem class and identify random utility functions as the Lagrange multipliers. Second, we turn to the computational aspects of this problem. We show how to evaluate our robust optimization problem exactly in some special cases, and then we consider some tractable relaxations for the general case. Finally, we apply our model to both the newsvendor and portfolio optimization problems and discuss its implications.
Stochastic dominance; Robust optimization; Expected utility maximization;
http://www.sciencedirect.com/science/article/pii/S0377221716301448
Haskell, William B.
Fu, Lunce
Dessouky, Maged
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:202-2132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:202-213
article
The influence of challenging goals and structured method on Six Sigma project performance: A mediated moderation analysis
Over the past few decades, Six Sigma has diffused to a wide array of organizations across the globe, fueled by its reported financial benefits. Implementing Six Sigma entails carrying out a series of Six Sigma projects that improve business processes. Scholars have investigated some mechanisms that influence project success, such as setting challenging goals and adhering to the Six Sigma method. However, these mechanisms have been studied in a piecemeal fashion that does not provide a deeper understanding of their interrelationships. Developing such an understanding helps identify the contingency and boundary conditions that influence Six Sigma project execution. Drawing on Sociotechnical Systems theory, this research conceptualizes and empirically examines the interrelationships of the key mechanisms that influence project execution. Specifically, we examine the interrelationship between Six Sigma project goals (Social System), adherence to the Six Sigma method (Technical System), and knowledge creation. The analysis uses a mediation-moderation approach to examine these relationships empirically. The data come from a survey of 324 employees in 102 Six Sigma projects from two organizations. The findings show that project goals and the Six Sigma method can compensate for one another. They also suggest that adherence to the Six Sigma method becomes more beneficial for projects that create a lot of knowledge; otherwise the method becomes less important. Prior research has not examined these contingencies and boundary conditions, which ultimately influence project success.
Six Sigma; Goal theory; Sociotechnical systems theory; Structured method; Mediated moderation;
http://www.sciencedirect.com/science/article/pii/S0377221716301503
Arumugam, V.
Antony, Jiju
Linderman, Kevin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:80-912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:80-91
article
An adaptive large neighborhood search for the two-echelon multiple-trip vehicle routing problem with satellite synchronization
The two-echelon vehicle routing problem (2E-VRP) consists in making deliveries to a set of customers using two distinct fleets of vehicles. First-level vehicles pick up requests at a distribution center and bring them to intermediate sites. At these locations, the requests are transferred to second-level vehicles, which deliver them. This paper addresses a variant of the 2E-VRP that integrates constraints arising in city logistics such as time window constraints, synchronization constraints, and multiple trips at the second level. The corresponding problem is called the two-echelon multiple-trip vehicle routing problem with satellite synchronization (2E-MTVRP-SS). We propose an adaptive large neighborhood search to solve this problem. Custom destruction and repair heuristics and an efficient feasibility check for moves have been designed and evaluated on modified benchmarks for the VRP with time windows.
Routing; Two-echelon VRP; Synchronization; City logistics; Adaptive large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221716301862
Grangier, Philippe
Gendreau, Michel
Lehuédé, Fabien
Rousseau, Louis-Martin
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:226-2352016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:226-235
article
Quantifiers induced by subjective expected value of sample information with Bernstein polynomials
A personalized quantifier, the so-called SEVSI-induced quantifier (Subjective Expected Value of Sample Information), is developed in this paper by introducing Bernstein polynomials of higher degree. This provides a novel way to improve the final representation of the quantifier, which generally performed poorly in our previous work, thus enhancing the quality of the global approximation of functions and improving the operability of this kind of quantifier for practical use. We show some properties of the developed quantifier and prove the consistency of OWA aggregation guided by this type of quantifier. Finally, we show experimentally that the developed quantifier outperforms the one based on piecewise linear interpolation in many aspects of geometrical characteristics and operability. It can therefore be considered an effective analytical tool for handling complex cases in which people's personalities or behavioral intentions must be considered in decision making under uncertainty.
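The approximation device the abstract relies on is classical and easy to state: the degree-n Bernstein polynomial of a function f on [0, 1] is B_n(f)(x) = Σ_k f(k/n)·C(n,k)·x^k·(1−x)^(n−k). The sketch below implements that definition directly; it illustrates the building block, not the SEVSI-induced quantifier itself.

```python
from math import comb

def bernstein_approx(f, n, x):
    # Degree-n Bernstein polynomial approximation of f on [0, 1]:
    # B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

Bernstein polynomials reproduce affine functions exactly, interpolate f at the endpoints, and converge uniformly to any continuous f as n grows, which is what makes them attractive for smoothing a quantifier's representation.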
Uncertainty modeling; Personalized quantifier; Bernstein polynomials; Ordered weighted averaging (OWA) aggregation;
http://www.sciencedirect.com/science/article/pii/S0377221716301436
Guo, Kaihong
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:92-1042016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:92-104
article
Efficient inventory control for imperfect quality items
In this paper, we present a general EOQ model for items that are subject to inspection for imperfect quality. Each lot delivered to the sorting facility undergoes 100 percent screening, and the percentage of defective items per lot decreases according to a learning curve. The generality of the model is important from both an academic and a practitioner perspective. The mathematical formulation considers arbitrary functions of time, allowing the decision maker to assess the consequences of a diverse range of strategies with a single inventory model. A rigorous methodology is used to show that the solution is a unique global optimum, and a general step-by-step solution procedure is presented for continuous intra-cycle periodic review applications. The value of the temperature history and flow time through the supply chain is also used to determine an efficient policy. Furthermore, coordination mechanisms that may affect the supplier and the retailer are explored to improve inventory control at both echelons. The paper provides illustrative examples that demonstrate the application of the theoretical model in different settings and lead to the generation of interesting managerial insights.
Inventory; Imperfect quality; Deterioration; Perishable items; Periodic review;
http://www.sciencedirect.com/science/article/pii/S0377221716302041
Alamri, Adel A.
Harris, Irina
Syntetos, Aris A.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:418-4272016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:418-427
article
Value of information in portfolio selection, with a Taiwan stock market application illustration
Despite many proposed alternatives, the predominant model in portfolio selection is still mean–variance. However, the main weakness of the mean–variance model is in the specification of the expected returns of the individual securities involved. If this process is not accurate, the allocations of capital to the different securities will in almost all certainty be incorrect. If, however, this process can be made accurate, then correct allocations can be made, and the additional expected return following from this is the value of information. This paper thus proposes a methodology to calculate the value of information. A related idea of a level of disappointment is also shown. How value of information calculations can be important in helping a mutual fund settle on how much to set aside for research is discussed in reference to a Taiwan Stock Exchange illustrative application in which the value of information appears to be substantial. Heavy use is made of parametric quadratic programming to keep computation times down for the methodology.
Efficient points; Portfolio selection; Value of information; Piecewise linear paths; Parametric quadratic programming;
http://www.sciencedirect.com/science/article/pii/S0377221716300315
Kao, Chiang
Steuer, Ralph E.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:113-1262016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:113-126
article
Risk measures and their application to staffing nonstationary service systems
In this paper, we explore the use of static risk measures from the mathematical finance literature to assess the performance of some standard nonstationary queueing systems. To do this we study two important queueing models, namely the infinite server queue and the multi-server queue with abandonment. We derive exact expressions for the value of many standard risk measures for the Mt/M/∞, Mt/G/∞, and Mt/Mt/∞ queueing models. We also derive Gaussian based approximations for the value of risk measures for the Erlang-A queueing model. Unlike more traditional approaches of performance analysis, risk measures offer the ability to satisfy the unique and specific risk preferences or tolerances of service operations managers. We also show how risk measures can be used for staffing nonstationary systems with different risk preferences and assess the impact of these staffing policies via simulation.
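For the infinite-server case the abstract's program is concrete: in an Mt/M/∞ queue started empty, the number in system at time t is Poisson distributed, with mean (λ/μ)(1 − e^(−μt)) when the arrival rate is a constant λ, so risk measures of the queue length reduce to risk measures of a Poisson random variable. The sketch below computes a Value-at-Risk staffing target with hypothetical rates; it does not reproduce the paper's Mt/G/∞, Mt/Mt/∞, or Erlang-A results.

```python
import math

def poisson_var(mean, alpha):
    # Value-at-Risk at level alpha for a Poisson(mean) variable:
    # the smallest k with P(X <= k) >= alpha.
    k, pmf = 0, math.exp(-mean)
    cdf = pmf
    while cdf < alpha:
        k += 1
        pmf *= mean / k
        cdf += pmf
    return k

lam, mu, t = 12.0, 1.0, 2.0                       # hypothetical rates
mean_queue = (lam / mu) * (1.0 - math.exp(-mu * t))
staffing_level = poisson_var(mean_queue, 0.95)     # cover demand 95% of the time
```

Choosing α encodes the manager's risk tolerance, which is the flexibility the abstract highlights over a single mean-based performance target.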
Queues and service systems; Risk measures; Healthcare; Time-inhomogeneous Markov processes; Staffing;
http://www.sciencedirect.com/science/article/pii/S0377221716301400
Pender, Jamol
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:290-2972016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:290-297
article
Scheduling under linear constraints
We introduce a parallel machine scheduling problem in which the processing times of jobs are not given in advance but are determined by a system of linear constraints. The objective is to minimize the makespan, i.e., the maximum job completion time among all feasible choices. This novel problem is motivated by various real-world application scenarios. We discuss the computational complexity and algorithms for various settings of this problem. In particular, we show that if there is only one machine with an arbitrary number of linear constraints, or there is an arbitrary number of machines with no more than two linear constraints, or both the number of machines and the number of linear constraints are fixed constants, then the problem is polynomial-time solvable via solving a series of linear programming problems. If both the number of machines and the number of constraints are inputs of the problem instance, then the problem is NP-Hard. We further propose several approximation algorithms for the latter case.
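The single-machine case mentioned in the abstract is the easiest to make concrete: on one machine the makespan equals the sum of the processing times, so the worst case over the feasible polytope is a single linear program. The instance below (constraint matrix, bounds) is hypothetical and uses SciPy's LP solver for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Three jobs on one machine; processing times p >= 0 must satisfy A @ p <= b.
# The worst-case makespan is max sum(p) over the polytope. linprog minimizes,
# so we negate the objective to maximize.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 5.0])
res = linprog(c=-np.ones(3), A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
worst_case_makespan = -res.fun
```

With several machines the job-to-machine assignment interacts with the adversarial choice of processing times, which is where the complexity results in the abstract come from.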
Parallel machine scheduling; Linear programming; Computational complexity; Approximation algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716300650
Nip, Kameng
Wang, Zhenbo
Wang, Zizhuo
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:29-392016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:29-39
article
KKT optimality conditions in interval valued multiobjective programming with generalized differentiable functions
This paper studies a class of interval valued multiobjective programming problems. We consider two order relations, LU and LS, on the set of all closed intervals and propose several concepts of Pareto optimal solutions. Based on convexity concepts (viz. LU- and LS-convexity) and generalized differentiability (viz. gH-differentiability) of interval valued functions, we obtain KKT optimality conditions for these problems. In addition, we compare our results with those given in Wu (2009) and show some advantages of our results. The theoretical development is illustrated by suitable examples.
Interval valued functions; gH-differentiability; LU; LS-convex functions; Pareto optimal solutions; KKT optimality conditions;
http://www.sciencedirect.com/science/article/pii/S0377221716301886
Singh, D.
Dar, B.A.
Kim, D.S.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:825-8422016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:825-842
article
The Hybrid Electric Vehicle – Traveling Salesman Problem
The reduction in carbon dioxide levels by using hybrid electric vehicles is a currently ongoing endeavor. Although this development is quite advanced for hybrid electric passenger cars, small transporters and trucks are far behind. We try to address this challenge by introducing a new optimization problem that describes the delivery of goods with a hybrid electric vehicle to a set of customer locations. The Hybrid Electric Vehicle – Traveling Salesman Problem extends the well-known Traveling Salesman Problem by adding different modes of operation for the vehicle, causing different costs and driving times for each arc within a delivery network.
Travelling salesman; Hybrid electric vehicles; Transportation;
http://www.sciencedirect.com/science/article/pii/S0377221716301163
Doppstadt, C.
Koberstein, A.
Vigo, D.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:697-7102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:697-710
article
A data analytic approach to forecasting daily stock returns in an emerging market
Forecasting stock market returns is a challenging task due to the complex nature of the data. This study develops a generic methodology to predict daily stock price movements by deploying and integrating three data analytical prediction models: adaptive neuro-fuzzy inference systems, artificial neural networks, and support vector machines. The proposed approach is tested on the Borsa Istanbul BIST 100 Index over an 8 year period from 2007 to 2014, using accuracy, sensitivity, and specificity as metrics to evaluate each model. Using a ten-fold stratified cross-validation to minimize the bias of random sampling, this study demonstrates that the support vector machine outperforms the other models. For all three predictive models, accuracy in predicting down movements in the index outweighs accuracy in predicting the up movements. The study yields more accurate forecasts with fewer input factors compared to prior studies of forecasts for securities trading on Borsa Istanbul. This efficient yet also effective data analytic approach can easily be applied to other emerging market stock return series.
Prediction/forecasting; Stock market return; Business analytics; Borsa Istanbul (BIST 100); Istanbul Stock Exchange (ISE);
http://www.sciencedirect.com/science/article/pii/S0377221716301096
Oztekin, Asil
Kizilaslan, Recep
Freund, Steven
Iseri, Ali
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:659-6722016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:659-672
article
A model for clustering data from heterogeneous dissimilarities
Clustering algorithms partition a set of n objects into p groups (called clusters), such that objects assigned to the same groups are homogeneous according to some criteria. To derive these clusters, the data input required is often a single n × n dissimilarity matrix. Yet for many applications, more than one instance of the dissimilarity matrix is available, and so to conform to model requirements, it is common practice to aggregate (e.g., sum up, average) the matrices. This aggregation practice results in clustering solutions that mask the true nature of the original data. In this paper we introduce a clustering model which, to handle the heterogeneity, uses all available dissimilarity matrices and identifies groups of individuals who cluster the objects in a similar way. The model is a nonconvex problem and difficult to solve exactly, so we introduce a Variable Neighborhood Search heuristic to provide solutions efficiently. Computational experiments and an empirical application to the perception of chocolate candy show that the heuristic algorithm is efficient and that the proposed model is suited for recovering heterogeneous data. Implications for clustering researchers are discussed.
Data mining; Clustering; Heterogeneity; Optimization; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301618
Santi, Éverton
Aloise, Daniel
Blanchard, Simon J.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:489-5022016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:489-502
article
A DEA based composite measure of quality and its associated data uncertainty interval for health care provider profiling and pay-for-performance
Composite measures calculated from individual performance indicators are increasingly used to profile and reward health care providers. We illustrate an innovative way of using Data Envelopment Analysis (DEA) to create a composite measure of quality for profiling facilities, informing consumers, and pay-for-performance programs. We compare DEA results to several widely used alternative approaches for creating composite measures: opportunity-based-weights (OBW, a form of equal weighting) and a Bayesian latent variable model (BLVM, where weights are driven by variances of the individual measures). Based on point estimates of the composite measures, to a large extent the same facilities appear in the top decile. However, when high performers are identified because the lower limits of their interval estimates are greater than the population average (or, in the case of the BLVM, the upper limits are less), there are substantial differences in the number of facilities identified: OBWs, the BLVM and DEA identify 25, 17 and 5 high-performers, respectively. With DEA, where every facility is given the flexibility to set its own weights, it becomes much harder to distinguish the high performers. In a pay-for-performance program, the different approaches result in very different reward structures: DEA rewards a small group of facilities a larger percentage of the payment pool than the other approaches. Finally, as part of the DEA analyses, we illustrate an approach that uses Monte Carlo resampling with replacement to calculate interval estimates by incorporating uncertainty in the data generating process for facility input and output data. This approach, which can be used when data generating processes are hierarchical, has the potential for wider use than in our particular application.
Data Envelopment Analysis (DEA); Health care quality; Monte Carlo; Bootstrapping; Performance;
http://www.sciencedirect.com/science/article/pii/S0377221716301023
Shwartz, Michael
Burgess, James F.
Zhu, Joe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:524-5412016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:524-541
article
From stakeholders analysis to cognitive mapping and Multi-Attribute Value Theory: An integrated approach for policy support
One of the fundamental features of policy processes in contemporary societies is complexity. It follows from the plurality of points of view actors adopt in their interventions, and from the plurality of criteria upon which they base their decisions. In this context, collaborative multicriteria decision processes seem to be appropriate to address part of the complexity challenge. This study discusses a decision support framework that guides policy makers in their strategic decisions by using a multi-method approach based on the integration of three tools, i.e., (i) stakeholders analysis, to identify the multiple interests involved in the process, (ii) cognitive mapping, to define the shared set of objectives for the analysis, and (iii) Multi-Attribute Value Theory, to measure the level of achievement of the previously defined objectives by the policy options under investigation. The integrated decision support framework has been tested on a real world project concerning the location of new parking areas in a UNESCO site in Southern Italy. The purpose of this study was to test the operability of an integrated analytical approach to support policy decisions by investigating the combined and synergistic effect of the three aforementioned tools. The ultimate objective was to propose policy recommendations for a sustainable parking area development strategy in the region under consideration. The obtained results illustrate the importance of integrated approaches for the development of accountable public decision processes and consensus policy alternatives. The proposed integrated methodological framework will, hopefully, stimulate the application of other collaborative decision processes in public policy making.
Multiple criteria analysis; Decision analysis; Group decision and negotiations; Decision processes; Policy analytics;
http://www.sciencedirect.com/science/article/pii/S0377221716301072
Ferretti, Valentina
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:253-2682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:253-268
article
Forward thresholds for operation of pumped-storage stations in the real-time energy market
Pumped-storage hydroelectric plants are very valuable assets on the electric grid and in electric markets as they are able to pump and store water for generation, thus allowing for grid-level storage. Within the realm of short-term energy markets, we present a model for determining forward-looking thresholds for making generation and pumping decisions at such plants. A multistage stochastic programming framework is developed to optimize the thresholds with uncertain system prices over the next three days. Tractability issues are discussed and a novel method based on an implementation of the scatter search algorithm is proposed. Given the size of the multistage stochastic programming formulation, we argue that this novel method is a more accurate representation of the decision process. We demonstrate model stability and quality, and show that the forward thresholds obtained using a stochastic programming framework outperform the forward thresholds from a deterministic model, and thus can lead to efficiency gains for both the generation unit owner and the overall system in the real-time market.
Stochastic programming; OR in energy; Large scale optimization; Metaheuristics; Energy markets;
http://www.sciencedirect.com/science/article/pii/S0377221716301485
Vojvodic, Goran
Jarrah, Ahmad I.
Morton, David P.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:312-3192016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:312-319
article
Estimating the hyperbolic distance function: A directional distance function approach
Färe, Grosskopf, and Lovell (1985) merged Farrell’s input and output oriented technical efficiency measures into a new graph-type approach known as the hyperbolic distance function (HDF). In spite of its appealing special structure, allowing for the simultaneous and equiproportionate reduction in inputs and increase in outputs, the HDF is a non-linear optimization problem that is hard to solve, particularly when dealing with technologies operating under variable returns to scale. By connecting the HDF to the directional distance function, we propose a linear programming based procedure for estimating the exact value of the HDF within the non-parametric framework of data envelopment analysis. We illustrate the computational effectiveness of the algorithm on several real-world and simulated data sets, obtaining the optimal value of the HDF by generally solving at most two linear programs. Moreover, our approach has several desirable properties: (1) it introduces a computational dual formulation for the HDF and provides an economic interpretation in terms of shadow prices; (2) it is readily adaptable to measure hyperbolic-oriented super-efficiency; and (3) it is flexible enough to deal with HDF-based efficiency measures on environmental technologies.
Efficiency measurement; Data envelopment analysis; Hyperbolic distance function; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221716301916
Färe, Rolf
Margaritis, Dimitris
Rouse, Paul
Roshdi, Israfil
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:383-3912016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:383-391
article
Optimal production planning for assembly systems with uncertain capacities and random demand
We study the optimal production planning for an assembly system consisting of n components in a single-period setting. Demand for the end-product is random, and production and assembly capacities are uncertain due to unexpected breakdowns, repairs, reworks, etc. The cost-minimizing firm (she) plans component production before the production capacities are realized, and after the outputs of components are observed, she decides the assembly amount before the demand realization. We start with a simplified system of selling two complementary products without an assembly stage and find that the firm's best choices can only be: (a) producing no products, or producing only the product with the lower stock such that its target amount is not higher than the other product's initial stock level, or (b) producing both products such that their target amounts are equal. Leveraging these findings, the two-dimensional optimization problem is reduced to two single-dimensional sub-problems and the optimal solution is characterized. For a general assembly system with n components, we show that if the firm initially has more end-products than a certain level, she will neither produce any component nor assemble end-products; if she does not have that many end-products but does have enough mated components, she will produce nothing and assemble up to that level; otherwise she will try to assemble all mated components and plan production of components accordingly. We characterize the structure of optimal solutions and find the solutions analytically.
Supply chain management; Assembly system; Uncertain capacity; Production planning;
http://www.sciencedirect.com/science/article/pii/S0377221716300583
Ji, Qingkai
Wang, Yunzeng
Hu, Xiangpei
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:681-6962016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:681-696
article
Unpacking multimethodology: Impacts of a community development intervention
Multimethodology interventions are increasingly employed by operational researchers to cope with the complexity of real-world problems. In keeping with recent calls for more research into the ‘realised’ impacts of multimethodology, we present a detailed account of an intervention to support the planning of business ideas by a management team working in a community development context. Drawing on the rich stream of data gathered during the intervention, we identify a range of cognitive, task and relational impacts experienced by the management team during the intervention. These impacts are the basis for developing a process model that accounts for the personal, social and material changes reported by those involved in the intervention. The model explains how the intervention's analytic and relational capabilities incentivise the interplay of participants’ decision making efforts and integrative behaviours underpinning reported intervention impacts and change. Our findings add much needed empirical case material to further enrich our understanding of the realised impacts of operational research interventions in general, and of multimethodology interventions in particular.
Decision processes; Problem structuring; Multimethodology; Intervention; Impacts;
http://www.sciencedirect.com/science/article/pii/S0377221716300972
Henao, Felipe
Franco, L. Alberto
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:265-2792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:265-279
article
A cycle-based evolutionary algorithm for the fixed-charge capacitated multi-commodity network design problem
This paper presents an evolutionary algorithm for the fixed-charge multicommodity network design problem (MCNDP), which concerns routing multiple commodities from origins to destinations by designing a network through selecting arcs, with an objective of minimizing the fixed costs of the selected arcs plus the variable costs of the flows on each arc. The proposed algorithm evolves a pool of solutions using principles of scatter search, interlinked with an iterated local search as an improvement method. New cycle-based neighborhood operators are presented which enable complete or partial re-routing of multiple commodities. An efficient perturbation strategy, inspired by ejection chains, is introduced to perform local compound cycle-based moves to explore different parts of the solution space. The algorithm also allows infeasible solutions violating arc capacities while performing the “ejection cycles”, and subsequently restores feasibility by systematically applying correction moves. Computational experiments on benchmark MCNDP instances show that the proposed solution method consistently produces high-quality solutions in reasonable computational times.
Multi-commodity network design; Scatter search; Evolutionary algorithms; Ejection chains; Iterated local search;
http://www.sciencedirect.com/science/article/pii/S0377221716000072
Paraskevopoulos, Dimitris C.
Bektaş, Tolga
Crainic, Teodor Gabriel
Potts, Chris N.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:625-6382016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:625-638
article
Hub and Chain: Process Flexibility Design in Non-Identical Systems Using Variance Information
In multi-product multi-plant manufacturing systems, process flexibility is the ability to produce different types of products in the same manufacturing plant or production line. While several design methods and flexibility indices have been proposed in the literature on how to design process flexibility, most of the insights generated are focused on identical production systems whereby all plants have the same capacity and all products have identically distributed demands. In this paper, we examine the process flexibility design problem for non-identical systems. We first study the effect of non-identical demand distributions on the performance of the well-known long chain design, and discover three interesting insights: (1) products with low demand mean will create a bottleneck effect, (2) products with low demand variance will result in inefficient utilization of flexibility links, and (3) long chain efficiency decreases in demand variance of any product, hence the need to provide this product with access to more capacity. Using these insights, we develop the variance-based hub-and-chain method (VHC), a simple and graphically intuitive method which decomposes the long chain into smaller chains, one of which will serve as a hub to which the other chains will be connected. Numerical tests show that VHC outperforms the long chain by 15% on average and outperforms the constraint sampling method by 38% on average. Lastly, we implement VHC on a case study in the edible oil industry in China and find substantial benefits. We then summarize with some managerial insights.
Process flexibility; Chaining strategy; Stochastic maximum flow; Demand variance;
http://www.sciencedirect.com/science/article/pii/S0377221716301473
Chua, Geoffrey A.
Chen, Shaoxiang
Han, Zhiguang
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:179-1872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:179-187
article
Hierarchical outcomes and collusion neutrality on networks
We investigate TU-game solutions that are neutral to collusive agreements among players. A collusive agreement binds collusion members to act as a single player and is feasible when they are connected on a network. Collusion neutrality requires that no feasible collusive agreement can change the total payoff of collusion members. We show that on the domain of network games, there is a solution satisfying collusion neutrality, efficiency and null-player property if and only if the network is a tree. Considering a tree network, we show that affine combinations of hierarchical outcomes (Demange, 2004; van den Brink, 2012) are the only solutions satisfying the three axioms together with linearity. As corollaries, we establish characterizations of the average tree solution (equally weighted average of hierarchical outcomes); one established earlier in the literature and the others new.
Game theory; Hierarchical outcomes; Collusion neutrality; TU-game; Network game;
http://www.sciencedirect.com/science/article/pii/S0377221716301394
Park, Junghum
Ju, Biung-Ghi
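For a concrete sense of the hierarchical outcomes averaged in this paper, the sketch below computes them on a small tree: for a given root, each player receives the worth of their subtree minus the worths of their children's subtrees, and the average tree solution averages this over all roots. The tree and characteristic function v are illustrative choices, not from the paper.

```python
# Hierarchical outcomes and the average tree solution on a small tree game.
# The tree and characteristic function v used below are illustrative examples.
def subtree_sets(adj, root):
    """Orient the tree at `root`; return per-node descendant sets and child lists."""
    parent, order, children = {root: None}, [root], {}
    for u in order:                       # BFS; appending while iterating is intended
        children[u] = [w for w in adj[u] if w != parent[u]]
        for w in children[u]:
            parent[w] = u
            order.append(w)
    sets = {}
    for u in reversed(order):             # leaves first
        s = {u}
        for w in children[u]:
            s |= sets[w]
        sets[u] = frozenset(s)
    return sets, children

def hierarchical_outcome(adj, v, root):
    # Each player gets v(own subtree) minus the worths of the children's subtrees.
    sets, children = subtree_sets(adj, root)
    return {u: v(sets[u]) - sum(v(sets[w]) for w in children[u]) for u in adj}

def average_tree_solution(adj, v):
    # Equally weighted average of the hierarchical outcomes over all roots.
    n = len(adj)
    totals = {u: 0.0 for u in adj}
    for r in adj:
        for u, pay in hierarchical_outcome(adj, v, r).items():
            totals[u] += pay / n
    return totals
```

On the line network 1–2–3 with v(S) = |S|², this yields payoffs (7/3, 13/3, 7/3), which sum to v(N) = 9, illustrating the efficiency axiom.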
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:68-792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:68-79
article
A service network design model for multimodal municipal solid waste transport
A modal shift from road transport towards inland water or rail transport could reduce the total greenhouse gas emissions and societal impact associated with Municipal Solid Waste management. However, this shift will take place only if demonstrated to be at least cost-neutral for the decision makers. In this paper we examine the feasibility of using multimodal truck and inland water transport, instead of truck transport alone, for shipping separated household waste in bulk from collection centres to waste treatment facilities. We present a dynamic tactical planning model that minimises the sum of transportation costs and external environmental and societal costs. The Municipal Solid Waste Service Network Design Problem allocates Municipal Solid Waste volumes to transport modes and determines transportation frequencies over a planning horizon. This generic model is applied to a real-life case in Flanders, the northern region of Belgium. Computational results show that multimodal truck and inland water transportation can compete with truck transport by avoiding or reducing transhipments and using barge convoys.
Solid Waste Management; Supply chain management; OR in societal problem analysis; Linear Programming; Networks;
http://www.sciencedirect.com/science/article/pii/S0377221716301643
Inghels, Dirk
Dullaert, Wout
Vigo, Daniele
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:843-8552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:843-855
article
Progressive hedging applied as a metaheuristic to schedule production in open-pit mines accounting for reserve uncertainty
Scheduling production in open-pit mines is characterized by uncertainty about the metal content of the orebody (the reserve) and leads to a complex large-scale mixed-integer stochastic optimization problem. In this paper, a two-phase solution approach based on Rockafellar and Wets’ progressive hedging algorithm (PH) is proposed. PH is used in phase I where the problem is first decomposed by partitioning the set of scenarios modeling metal uncertainty into groups, and then the sub-problems associated with each group are solved iteratively to drive their solutions to a common solution. In phase II, a strategy exploiting information obtained during the PH iterations and the structure of the problem under study is used to reduce the size of the original problem, and the resulting smaller problem is solved using a sliding time window heuristic based on a fix-and-optimize scheme. Numerical results show that this approach is efficient in finding near-optimal solutions and that it outperforms existing heuristics for the problem under study.
Open-pit mine production scheduling; Progressive hedging method; Lagrangian relaxation; Sliding time window heuristic; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301357
Lamghari, Amina
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:746-7602016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:746-760
article
Preference stability over time with multiple elicitation methods to support wastewater infrastructure decision-making
We used a multi-method, repeated elicitation approach across different stakeholder groups to explore possible differences in the outcome of an environmental decision. We compared different preference elicitation procedures based on Multi Criteria Decision Analysis (MCDA) over time for a water infrastructure decision in Switzerland. We implemented the SWING and SMART/SWING weight elicitation methods and also compared results with earlier stakeholder interviews. In all procedures, the weights for environmental protection and well-functioning (waste-)water systems were higher than for cost reduction. The SMART/SWING variant produced statistically significantly different weights than SWING. Weights changed over time with both elicitation methods. Weights were more stable with the SWING method, which was also perceived as slightly more difficult than the SMART/SWING variant. We checked whether the difference in weights produced by the two elicitation methods and the difference in their stability affect the ranking of six alternatives. Overall, an unconventional decentralized alternative ranked first or second in 92 percent of all elicitation procedures (online surveys and interviews). For practical decision-making, using multiple methods across different stakeholder groups and repeating the elicitation can increase our confidence that the results reflect the true opinions of the decision makers and stakeholders.
Behavioral OR; Weight elicitation; Multiple criteria analysis; Online survey; OR in environment and climate change;
http://www.sciencedirect.com/science/article/pii/S0377221716301382
Lienert, Judit
Duygan, Mert
Zheng, Jun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:777-7902016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:777-790
article
A queueing model for managing small projects under uncertainties
We consider a situation in which a home improvement project contractor has a team of regular crew members who receive compensation even when they are idle. Because both project arrivals and the completion time of each project are uncertain, the contractor needs to manage the utilization of his crews carefully. One common approach adopted by many home improvement contractors is to accept multiple projects to keep crew members busy working on projects to generate positive cash flows. However, this approach has a major drawback because it causes “intentional” (or foreseeable) project delays. Intentional project delays can inflict explicit and implicit costs on the contractor when frustrated customers abandon their projects and/or file complaints or lawsuits. In this paper, we present a queueing model to capture uncertain customer (or project) arrivals and departures, along with the possibility of customer abandonment. Also, associated with each admission policy (i.e., the maximum number of projects that the contractor will accept), we model the underlying tradeoff between accepting too many projects (which can increase customer dissatisfaction) and accepting too few projects (which can reduce crew utilization). We examine this tradeoff analytically so as to determine the optimal admission policy and the optimal number of crew members. We further apply our model to analyze other issues including worker productivity and project pricing. Finally, our model can be extended to allow for multiple classes of projects with different types of crew members.
Project management; Multi-projects; Queueing models; Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301059
Bai, Jiaru
So, Kut C.
Tang, Christopher
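The admission tradeoff this abstract describes can be explored numerically with a standard birth-death model: projects arrive at rate lam, each of c crews completes work at rate mu, and each waiting project abandons at rate theta, with at most K projects accepted. This is a generic sketch under assumed parameter names, not the authors' exact model.

```python
# Birth-death sketch of the admission tradeoff: c crews, at most K accepted
# projects, arrival rate lam, service rate mu per busy crew, and abandonment
# rate theta per waiting project. Parameter names are assumptions.
def stationary_probs(lam, mu, theta, c, K):
    weights = [1.0]  # unnormalized stationary weights via detailed balance
    for n in range(1, K + 1):
        depart = min(n, c) * mu + max(n - c, 0) * theta
        weights.append(weights[-1] * lam / depart)
    total = sum(weights)
    return [w / total for w in weights]

def expected_projects(lam, mu, theta, c, K):
    # Mean number of projects in the system under admission cap K.
    return sum(n * p for n, p in enumerate(stationary_probs(lam, mu, theta, c, K)))
```

Sweeping K trades off crew utilization (rises with K) against expected waiting and abandonment (also rise with K), which is the tension the optimal admission policy resolves.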
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:880-8872016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:880-887
article
From partial derivatives of DEA frontiers to marginal products, marginal rates of substitution, and returns to scale
The characterization of a technology, from an economic point of view, often uses the first derivatives of either the transformation or the production function. In a parametric setting, these quantities are readily available, as they can easily be deduced from the first derivatives of the specified function. In the standard framework of data envelopment analysis (DEA) models, these quantities are not so easily obtained. The difficulty resides in the fact that marginal changes of inputs and outputs might affect the position of the frontier itself, while the calculation of first derivatives for economic purposes assumes that the frontier is held constant. We develop a procedure to recover first derivatives of transformation functions in DEA models and show how the problem of the (marginal) shift of the frontier can be circumvented. We show how knowledge of the first derivatives of the frontier estimated by DEA can be used to deduce and compute marginal products, marginal rates of substitution, and returns to scale for each decision making unit (DMU) in the sample.
Data envelopment analysis; Marginal products; Transformation function; First derivatives;
http://www.sciencedirect.com/science/article/pii/S037722171630073X
Ouellette, Pierre
Vigeant, Stéphane
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:761-7762016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:761-776
article
On consumer preferences and the willingness to pay for term life insurance
We run a choice-based conjoint (CBC) analysis for term life insurance on a sample of 2017 German consumers using data from web-based experiments. Individual-level part-worth profiles are estimated by means of a hierarchical Bayes model. Drawing on the elicited preference structures, we then compute relative attribute importances and different willingness to pay measures. In addition, we present comprehensive simulation results for a realistic competitive setting that allows us to assess product switching as well as market expansion effects. On average, brand, critical illness cover, and underwriting procedure turn out to be the most important nonprice product attributes. Hence, if a policy comprises their favored specifications, customers accept substantial markups in the monthly premium. Furthermore, preferences vary considerably across the sample. While some individuals are prepared to pay relatively high monthly premiums, a large fraction exhibits no willingness to pay for term life insurance at all, presumably due to the absence of a need for mortality risk coverage. We also illustrate that utility-driven product optimization is well-suited to gain market shares, avoid competitive price pressure, and access additional profit potential. Finally, based on estimated demand sensitivities and a set of cost assumptions, it is shown that insurers require an in-depth understanding of preferences to identify the profit-maximizing price.
Preferences; Willingness to pay; Term life insurance; Choice-based conjoint analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716300601
Braun, Alexander
Schmeiser, Hato
Schreiber, Florian
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:338-3462016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:338-346
article
The weighted additive distance function
Distance functions in production theory are mathematical structures that characterize membership in the reference technology through a numerical value, behave as technical efficiency measures when the focus is on analyzing an observed input–output vector within its production possibility set, and have a dual relationship with some support function (profit, revenue, or cost function). In this paper, we endow the well-known weighted additive models in Data Envelopment Analysis with a distance function structure, introducing the Weighted Additive Distance Function and showing its main properties.
Data envelopment analysis; Distance functions; Weighted additive model; Profit function;
http://www.sciencedirect.com/science/article/pii/S0377221716302259
Aparicio, Juan
Pastor, Jesus T.
Vidal, Fernando
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:169-1782016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:169-178
article
A multi-agent based cooperative approach to scheduling and routing
In this paper, we propose a general agent-based distributed framework in which each agent implements a different metaheuristic/local search combination. Moreover, each agent continuously adapts itself during the search process using a direct cooperation protocol based on reinforcement learning and pattern matching. Good patterns that make up improving solutions are identified and shared by the agents. This agent-based system aims to provide a modular, flexible framework to deal with a variety of different problem domains. We have evaluated the performance of this approach using the proposed framework, which embodies a set of well known metaheuristics with different configurations as agents, on two problem domains: Permutation Flow-shop Scheduling and Capacitated Vehicle Routing. The results show the success of the approach, yielding three new best known results for the Capacitated Vehicle Routing benchmarks tested, whilst the results for Permutation Flow-shop Scheduling are commensurate with the best known values for all the benchmarks tested.
Combinatorial optimization; Scheduling; Vehicle routing; Metaheuristics; Cooperative search;
http://www.sciencedirect.com/science/article/pii/S0377221716300984
Martin, Simon
Ouelhadj, Djamila
Beullens, Patrick
Ozcan, Ender
Juan, Angel A.
Burke, Edmund K.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:791-8102016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:791-810
article
Multicriteria decision support to evaluate potential long-term natural gas supply alternatives: The case of Greece
This paper assesses 27 alternative natural gas supply corridors for the case of Greece, according to a multicriteria analysis approach based on three main pillars: (1) economics of supply, (2) security of supply, and (3) cooperation between countries. The alternatives include onshore and offshore pipeline corridors and LNG shipping, determined after exhaustive investigation of all possible existing and future routes, taking into consideration all possible natural gas infrastructure development projects around Greece. A multicriteria additive value system is assessed via the robust ordinal regression methodology, aiming to support the national energy policy makers in devising favorable strategies concerning both long-term national natural gas supplies and infrastructure developments. The obtained ranking shows that noticeable alternative corridors for gas passage to Greece do exist both in terms of maritime transport of LNG and in terms of potential future pipeline infrastructure projects.
Multiple criteria decision analysis; Natural gas supply; Energy policy; Robustness; Greece;
http://www.sciencedirect.com/science/article/pii/S0377221716301047
Androulaki, Stella
Psarras, John
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:279-2932016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:279-293
article
A multi-objective model for locating search and rescue boats
We present the Incident Based-Boat Allocation Model (IB-BAM), a multi-objective model designed to allocate search and rescue resources. The decision of where to locate search and rescue boats depends upon a set of criteria that are unique to a given problem, such as the density and types of incidents responded to in the area of interest, resource capabilities, geographical factors and governments’ business rules. Thus, traditional models that incorporate only political decisions are no longer appropriate. IB-BAM considers all these criteria and determines optimal boat allocation plans with the objectives of minimizing response time to incidents, fleet operating cost and the mismatch between boats’ workload and operation capacity hours.
Integer programming; Resource allocation; Search and rescue;
http://www.sciencedirect.com/science/article/pii/S0377221716301540
Razi, Nasuh
Karatas, Mumtaz
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:161-1682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:161-168
article
Queueing network MAP−(GI/∞)K with high-rate arrivals
An analysis of the open queueing network MAP−(GI/∞)K is presented in this paper. The MAP−(GI/∞)K network implements Markov routing, general service time distribution, and an infinite number of servers at each node. Analysis is performed under the condition of a growing fundamental rate for the Markovian arrival process. It is shown that the stationary probability distribution of the number of customers at the nodes can be approximated by multi-dimensional Gaussian distribution. Parameters of this distribution are presented in the paper. Numerical results validate the applicability of the obtained approximations under relevant conditions. The results of the approximations are applied to estimate the optimal number of servers for a network with finite-server nodes. In addition, an approximation of higher-order accuracy is derived.
Queueing network; Infinite number of servers; Markovian arrival process; Asymptotic analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716302302
Moiseev, Alexander
Nazarov, Anatoly
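The Gaussian approximation described in the abstract above is centered at the nodes' mean offered loads. A minimal sketch, with made-up rates, of the standard open-network computation behind that centering (a fixed-point solve of the traffic equations, then mean customers = arrival rate times mean service time at each infinite-server node; this is textbook theory, not code from the paper):

```python
# Solve traffic equations for a 2-node open network with Markov routing
# (illustrative numbers, not from the paper). lam0 holds external arrival
# rates; R[i][j] is the probability that a customer leaving node i goes to
# node j (row sums may be < 1; the remainder leaves the network).

def node_arrival_rates(lam0, R, iters=200):
    """Fixed-point iteration for lambda_j = lam0_j + sum_i lambda_i * R[i][j]."""
    n = len(lam0)
    lam = lam0[:]
    for _ in range(iters):
        lam = [lam0[j] + sum(lam[i] * R[i][j] for i in range(n)) for j in range(n)]
    return lam

lam0 = [10.0, 0.0]                # external arrivals only at node 1
R = [[0.0, 0.5], [0.0, 0.0]]      # half of node-1 departures proceed to node 2
ES = [2.0, 3.0]                   # mean service times at nodes 1 and 2

lam = node_arrival_rates(lam0, R)
means = [l * s for l, s in zip(lam, ES)]   # mean customers = lambda_k * E[S_k]
```

For an infinite-server node the stationary mean number of customers is exactly the arrival rate times the mean service time, which is the quantity the Gaussian approximation is built around.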
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:503-5132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:503-513
article
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
The problem of minimizing the sum of nonsmooth, convex objective functions defined on a real Hilbert space over the intersection of fixed point sets of nonexpansive mappings, onto which the projections cannot be efficiently computed, is considered. The use of proximal point algorithms that use the proximity operators of the objective functions and incremental optimization techniques is proposed for solving the problem. With the focus on fixed point approximation techniques, two algorithms are devised for solving the problem. One blends an incremental subgradient method, which is a useful algorithm for nonsmooth convex optimization, with a Halpern-type fixed point iteration algorithm. The other is based on an incremental subgradient method and the Krasnosel’skiĭ–Mann fixed point algorithm. It is shown that any weak sequential cluster point of the sequence generated by the Halpern-type algorithm belongs to the solution set of the problem and that there exists a weak sequential cluster point of the sequence generated by the Krasnosel’skiĭ–Mann-type algorithm, which also belongs to the solution set. Numerical comparisons of the two proposed algorithms with existing subgradient methods for concrete nonsmooth convex optimization show that the proposed algorithms achieve faster convergence.
Fixed point; Halpern algorithm; Incremental subgradient method; Krasnosel’skiĭ–Mann algorithm; Proximal point algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221716301102
Iiduka, Hideaki
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:441-4552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:441-455
article
Setting the right incentives for global planning and operations
We study incentive issues in a firm performing global planning and manufacturing, and local demand management. The stochastic demands in local markets are best observed by the regional business units, and the firm relies on the business units’ forecasts for the planning of global manufacturing operations. We propose a class of performance evaluation schemes that induce the business units to reveal their private demand information truthfully by turning the business units’ demand revelation game into a potential game with truth-telling as a potential maximizer, an appealing refinement of Nash equilibrium. Moreover, these cooperative performance evaluation schemes satisfy several essential fairness notions. After analyzing the characteristics of several performance evaluation schemes in this class, we extend our analysis to include the impact of effort on demand.
Production systems; Information asymmetry; Incentive management; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300662
Norde, Henk
Özen, Ulaş
Slikker, Marco
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:602-6132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:602-613
article
Shared resource capacity expansion decisions for multiple products with quantity discounts
When multiple products compete for the same storage space, their optimal individual lot sizes may need to be reduced to accommodate the storage needs of other products. This challenge is exacerbated with the presence of quantity discounts, which tend to entice larger lot sizes. Under such circumstances, firms may wish to consider storage capacity expansion as an option to take full advantage of quantity discounts. This paper aims to simultaneously determine the optimal storage capacity level along with individual lot sizes for multiple products being offered quantity discounts (either all-units discounts, incremental discounts, or a mixture of both). By utilizing Lagrangian techniques along with a piecewise-linear approximation for capacity cost, our algorithms can generate precise solutions regardless of the functional form of capacity cost (i.e., concave or convex). The algorithms can incorporate simultaneous lot-sizing decisions for thousands of products in a reasonable solution time. We utilize numerical examples and sensitivity analysis to understand the key factors that influence the capacity expansion decision and the performance of the algorithms. The primary characteristic that influences the capacity expansion decision is the size of the quantity discount offered, but variability in demand and capacity per unit influence the expansion decision as well. Furthermore, we discover that all-units quantity discounts are more likely to lead to capacity expansion compared to incremental quantity discounts. Our analysis illuminates the potential for significant savings available to companies willing to explore the option of increasing storage capacity to take advantage of quantity discount offerings for their purchased products.
Purchasing; Quantity discounts; Capacity expansion; Lot sizing; Inventory;
http://www.sciencedirect.com/science/article/pii/S0377221716301527
Jackson, Jonathan E.
Munson, Charles L.
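As a point of reference for the lot-sizing side of the abstract above, here is the textbook evaluation of an all-units quantity discount without any capacity constraint (a hedged sketch with invented numbers; the paper's Lagrangian algorithms for capacity expansion go well beyond this):

```python
import math

def best_all_units_order(D, K, h_rate, tiers):
    """All-units quantity discount: for each price tier, compute the EOQ at
    that tier's price, snap it up to the tier's minimum quantity if needed,
    and keep the cheapest candidate. A tier whose EOQ overshoots into a
    higher tier is dominated by that tier's own candidate, so no upper-bound
    check is needed here (standard textbook argument).
    D: annual demand, K: fixed order cost, h_rate: holding cost as a
    fraction of unit price, tiers: list of (min_qty, unit_price)."""
    best = None
    for qmin, price in tiers:
        h = h_rate * price                       # annual holding cost per unit
        q = max(math.sqrt(2 * D * K / h), qmin)  # EOQ, snapped to tier minimum
        cost = D * price + D * K / q + h * q / 2
        if best is None or cost < best[1]:
            best = (q, cost)
    return best

best = best_all_units_order(D=1000, K=50, h_rate=0.2,
                            tiers=[(0, 10.0), (200, 9.0)])
```

With these invented numbers the discounted tier wins, illustrating how all-units discounts entice larger lot sizes, which is exactly the pressure on storage capacity the paper studies.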
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:711-7332016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:711-733
article
Optimal contract design in the joint economic lot size problem with multi-dimensional asymmetric information
Previous work has studied the classical joint economic lot size model as an adverse selection problem with asymmetric cost information. Solving this problem is challenging due to the presence of countervailing incentives and two-dimensional information asymmetry, under which the classical single-crossing condition need not hold. In the present work we advance the existing knowledge about the problem at hand by conducting its optimality analysis, which leads to a better-informed and easier solution of the problem: First, we refine the existing closed-form solution, which simplifies problem solving and its analysis. Second, we prove that the Karush–Kuhn–Tucker conditions are necessary for optimality, and demonstrate that the problem may, in general, possess non-optimal stationary points due to non-convexity. Third, we prove that certain types of stationary points are always dominated, which eases the analytical solution of the problem. Fourth, we derive a simple optimality condition stating that weak Pareto efficiency of the buyer’s possible cost structures implies optimality of any stationary point. It simplifies the analytical solution approach and ensures a successful solution of the problem by means of conventional numerical techniques, e.g. with a general-purpose solver. We further establish properties of optimal solutions and indicate how these are related to the classical results on adverse selection.
Supply chain coordination; Asymmetric information; Nonlinear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301060
Pishchulov, Grigory
Richter, Knut
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:392-4032016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:392-403
article
The role of co-opetition in low carbon manufacturing
Low carbon manufacturing has become a strategic objective for many developed and developing economies. This study examines the role of co-opetition in achieving this objective. We investigate the pricing and emissions reduction policies of two rival manufacturers with different emission reduction efficiencies under a cap-and-trade policy. We assume that product demand is price and emission sensitive. Based on non-cooperative and cooperative games, the optimal solutions for the two manufacturers are derived in purely competitive and co-opetitive market environments respectively. Through discussion and numerical analysis, we find that in both the pure competition and co-opetition models, the two manufacturers’ optimal prices depend on the unit price of carbon emission trading. In addition, higher emission reduction efficiency leads to lower optimal unit carbon emissions and higher profit in both models. Interestingly, compared to pure competition, co-opetition leads to more profit and less total carbon emissions. However, the improvement in economic and environmental performance rests on higher product prices and unit carbon emissions.
Low carbon manufacturing; Co-opetition; Carbon emission reduction; Green technology investment; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716300674
Luo, Zheng
Chen, Xu
Wang, Xiaojun
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:734-7452016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:734-745
article
A simple yet effective decision support policy for mass-casualty triage
In the aftermath of a mass-casualty incident, effective policies for the timely evaluation and prioritization of patients can mean the difference between life and death. While operations research methods have been used to study the patient prioritization problem, prior research has either proposed decision rules that only apply to very simple cases, or proposed formulating and solving a mathematical program in real time, which may be a barrier to implementation in an urgent situation. We connect these two regimes by proposing a general decision support rule that can handle survival probability functions and an arbitrary number of patient classifications. The proposed survival lookahead policy generalizes not only a myopic policy and a cμ-type rule, but also the optimal solution to a version of the problem with two priority classes. This policy has other desirable properties, including an index policy structure. Using simple heuristic parameterizations, the survival lookahead policy yields an expected number of survivors that is almost as large as that of published methods requiring mathematical programming, while having the advantage of an intuitive structure and requiring minimal computational support.
Triage; Disaster response; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716301151
Mills, Alex F.
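The index-policy structure mentioned in the abstract above can be illustrated with a toy myopic rule: serve next the class whose survival probability is currently decaying fastest per unit of service time, in the spirit of a cμ rule. The survival functions and parameters below are hypothetical, not the paper's calibrated policy:

```python
import math

def myopic_triage_order(patients, t):
    """Rank waiting patients by a myopic index: the current decay rate of
    the survival probability divided by expected service time (illustrative
    cμ-style rule, not the paper's survival lookahead policy).
    Each patient is (name, decay_rate, service_time); survival after
    waiting for time w is modeled as exp(-decay_rate * w)."""
    def index(p):
        name, decay, service = p
        survival = math.exp(-decay * t)
        return decay * survival / service  # instantaneous loss per service hour
    return sorted(patients, key=index, reverse=True)

patients = [("A", 0.05, 1.0), ("B", 0.30, 2.0), ("C", 0.10, 0.5)]
order = [p[0] for p in myopic_triage_order(patients, t=1.0)]
```

The appeal of an index policy in this setting is operational: each patient gets a single number, and responders just serve the largest, with no optimization run in the field.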
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:639-6472016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:639-647
article
Designing repetitive screening procedures with imperfect inspections: An empirical Bayes approach
A batch of expensive items, such as IC chips, is often inspected multiple times in a sequential manner to discover more conforming items. After several rounds of screening, we need to estimate the number of conforming items that still remain in the batch. We propose in this paper an empirical Bayes estimation method and compare its performance with that of the traditional maximum likelihood method. In the repetitive screening procedure, another important decision problem is when to stop the screening process and salvage the remaining items. We propose various types of stopping rules and illustrate their procedures with simulated inspection data. Finally, we explore various extensions of our empirical Bayes estimation method to multiple inspection plans.
Inspection; Product quality; Reliability; Empirical Bayes estimation;
http://www.sciencedirect.com/science/article/pii/S0377221716301138
Chun, Young H.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:456-4712016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:456-471
article
Value added, educational accountability approaches and their effects on schools’ rankings: Evidence from Chile
Value added models have been proposed to analyze different aspects of school effectiveness on the basis of student growth. There is consensus in the literature about the need to control for socioeconomic status and other contextual variables at the student and school level when estimating value added, for which the methodologies employed have largely relied on hierarchical linear models. However, this approach is problematic because results are based on comparisons to the school’s average, implying no real incentive for performance excellence. Meanwhile, activity analysis models to estimate school value added have been unable to control for contextual variables at both the student and school levels. In this study we propose a robust frontier model to estimate contextual value added which merges relevant branches of the activity analysis literature, namely metafrontiers and partial frontier methods. We provide an application to a large sample of Chilean schools, a relevant country to study due to the reforms made to its educational system, which point to the need for accountability measures. Results indicate not only the general relevance of including contextual variables but also how they contribute to explaining the performance differentials found for the three types of schools: public, privately-owned subsidized, and privately-owned fee-paying. The results also indicate that contextual value added models generate school rankings more consistent with the evaluation models currently used in Chile than other types of evaluation models.
Efficiency; Order-m; School effectiveness; Value added;
http://www.sciencedirect.com/science/article/pii/S0377221716000527
Thieme, Claudio
Prior, Diego
Tortosa-Ausina, Emili
Gempp, René
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:356-3712016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:356-371
article
Pro-active real-time routing in applications with multiple request patterns
Recent research reveals that pro-active real-time routing approaches that use stochastic knowledge about future requests can significantly improve solution quality compared to approaches that simply integrate new requests upon arrival. Many of these approaches assume that request arrivals on different days follow an identical pattern. Thus, they define and apply a single profile of past request days to anticipate future request arrivals. In many real-world applications, however, different days may follow different patterns. Moreover, the pattern of the current day may not be known beforehand, and may need to be identified in real-time during the day. In such cases, applying approaches that use a single profile is not promising. In this paper, we propose a new pro-active real-time routing approach that applies multiple profiles. These profiles are generated by grouping together days with a similar pattern of request arrivals. For each combination of identified profiles, stochastic knowledge about future request arrivals is derived in an offline step. During the day, the approach repeatedly evaluates characteristics of request arrivals and selects a suitable combination of profiles. The performance of the new approach is evaluated in computational experiments in direct comparison with a previous approach that applies only a single profile. Computational results show that the proposed approach significantly outperforms the previous one. We analyze further potential for improvement by comparing the approach with an omniscient variant that knows the actual pattern in advance. Based on the results, managerial implications that allow for a practical application of the new approach are provided.
Dynamic vehicle routing; Multiple request patterns; Request forecasting; Scenario identification; K-means clustering;
http://www.sciencedirect.com/science/article/pii/S0377221716300364
Ferrucci, Francesco
Bock, Stefan
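A hedged sketch of the online profile-identification step the abstract above describes: compare the request counts observed so far today with each stored profile and pick the closest one. The paper's approach additionally combines profiles and exploits richer stochastic knowledge; distances, profile names, and counts here are invented:

```python
def select_profile(observed, profiles):
    """Return the name of the profile whose partial-day request counts are
    closest (squared Euclidean distance) to today's observations so far.
    observed: counts for the hours elapsed so far.
    profiles: dict name -> full-day hourly counts from past request days."""
    k = len(observed)
    def dist(history):
        return sum((o - h) ** 2 for o, h in zip(observed, history[:k]))
    return min(profiles, key=lambda name: dist(profiles[name]))

profiles = {
    "weekday": [5, 9, 12, 14, 10, 6],
    "weekend": [2, 3, 5, 6, 7, 8],
}
today_so_far = [3, 4, 5]       # counts for the first three hours of today
best = select_profile(today_so_far, profiles)
```

Re-running this selection as new requests arrive is what lets a dispatcher switch anticipation strategies mid-day when the current day turns out to follow a different pattern than expected.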
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:570-5832016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:570-583
article
Robust mixed-integer linear programming models for the irregular strip packing problem
Two-dimensional irregular strip packing problems are cutting and packing problems in which small pieces have to be cut from a larger object, involving a non-trivial handling of geometry. Increasingly sophisticated and complex heuristic approaches have been developed to address these problems but, despite the apparently good quality of the solutions, there is no guarantee of optimality. Therefore, mixed-integer linear programming (MIP) models started to be developed. However, these models are heavily limited by the complexity of the geometry-handling algorithms needed for the piece non-overlapping constraints, which has led to simplifications of the pieces in order to specialize the mathematical models. In this paper, to overcome these limitations, two robust MIP models are proposed. In the first model (DTM) the non-overlapping constraints are stated based on direct trigonometry, while in the second model (NFP−CM) pieces are first decomposed into convex parts and the non-overlapping constraints are then written based on nofit polygons of the convex parts. Both approaches are robust in terms of the types of geometries they can address, handling any kind of non-convex polygon with or without holes. They are also simpler to implement than previous models. This simplicity made it possible to consider, for the first time, a variant of the models that deals with piece rotations. Computational experiments with benchmark instances show that NFP−CM outperforms both DTM and the best exact model published in the literature. New real-world based instances with more complex geometries are proposed and used to verify the robustness of the new models.
Packing; Cutting; Nesting; MIP models;
http://www.sciencedirect.com/science/article/pii/S0377221716301370
Cherri, Luiz H.
Mundim, Leandro R.
Andretta, Marina
Toledo, Franklina M.B.
Oliveira, José F.
Carravilla, Maria Antónia
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:304-3112016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:304-311
article
An auto-realignment method in quasi-Monte Carlo for pricing financial derivatives with jump structures
Discontinuities are common in the pricing of financial derivatives and have a tremendous impact on the accuracy of the quasi-Monte Carlo (QMC) method. If, however, the discontinuities are parallel to the axes, good efficiency of the QMC method can still be expected. By realigning the discontinuities to be axes-parallel, [Wang & Tan, 2013] succeeded in recovering the high efficiency of the QMC method for a special class of functions. Motivated by this work, we propose an auto-realignment method to deal with more general discontinuous functions. The k-means clustering algorithm, a classical machine learning algorithm, is used to select the most representative normal vectors of the discontinuity surface. By applying this new method, the discontinuities of the resulting function are realigned to be friendly to the QMC method. Numerical experiments demonstrate that the proposed method significantly improves the performance of the QMC method.
Pricing; QMC; OT method; QR decomposition; Auto-realignment method;
http://www.sciencedirect.com/science/article/pii/S037722171630162X
Weng, Chengfeng
Wang, Xiaoqun
He, Zhijian
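The clustering step named in the abstract above is ordinary k-means. A bare-bones Lloyd iteration on toy normal vectors shows the mechanics (in practice one would use a library implementation, and the paper applies it to estimated discontinuity normals, not these invented points):

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd iteration: assign each point to its nearest center,
    then move each center to the mean of its group (illustrative stand-in
    for the k-means step used to pick representative normals)."""
    for _ in range(iters):
        # assignment step: nearest center for each point
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # update step: mean of each non-empty group
        centers = [
            [sum(xs) / len(g) for xs in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers

# Toy normals roughly along two directions (not data from the paper)
normals = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]
reps = kmeans(normals, centers=[(1.0, 0.0), (0.0, 1.0)])
```

The resulting cluster centers serve as the representative directions; the realignment then rotates coordinates so these directions become axis-parallel, the regime where QMC retains its efficiency.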
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:320-3372016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:320-337
article
Understanding dynamic mean variance asset allocation
We provide a new portfolio decomposition formula that sheds light on the economics of portfolio choice for investors following the mean-variance (MV) criterion. We show that the number of components of a dynamic portfolio strategy can be reduced to two: the first is preference free and hedges the risk of a discount bond maturing at the investor’s horizon while the second hedges the time variation in pseudo relative risk tolerance. Both components entail strong horizon effects in the dynamic asset allocation as a result of time-varying risk tolerance and investment opportunity sets. We also provide closed-form solutions for the optimal portfolio strategy in the presence of market return predictability. The model parameters are estimated over the period 1963 to 2012 for the U.S. market. We show that (i) intertemporal hedging can be very large, (ii) the MV criterion hugely understates the true extent of risk aversion for high values of the risk aversion parameter, and the more so the shorter the investment horizon, and (iii) the efficient frontiers seem problematic for investment horizons shorter than one year but satisfactory for large horizons. Overall, adopting the MV model leads to acceptable results for medium and long term investors endowed with medium or high risk tolerance, but to very problematic ones otherwise.
Mean variance; Dynamic asset allocation; Time varying risk aversion; Intertemporal hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716302223
Lioui, Abraham
Poncet, Patrice
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:328-3362016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:328-336
article
An ejection chain approach for the quadratic multiple knapsack problem
In an algorithm for a problem whose candidate solutions are selections of objects, an ejection chain is a sequence of moves from one solution to another that begins by removing an object from the current solution. The quadratic multiple knapsack problem extends the familiar 0–1 knapsack problem both with several knapsacks and with values associated with pairs of objects. A hybrid algorithm for this problem extends a local search algorithm through an ejection chain mechanism to create more powerful moves. In addition, adaptive perturbations enhance the diversity of the search process. The resulting algorithm produces results that are competitive with the best heuristics currently published for this problem. In particular, it improves the best known results on 34 out of 60 test problem instances and matches the best known results on all but 6 of the remaining instances.
Ejection chain; Quadratic multiple knapsack problem; Adaptive perturbation; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716300960
Peng, Bo
Liu, Mengqi
Lü, Zhipeng
Kochenberger, Gary
Wang, Haibo
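To make the moves in the abstract above concrete, here is a minimal encoding of the quadratic multiple knapsack objective and a single ejection step, in which removing one object frees capacity to insert another (illustrative data and encoding, not the paper's instances or code):

```python
def qmkp_value(assign, p, q):
    """Objective of a quadratic multiple knapsack assignment.
    assign: dict item -> knapsack index (unassigned items are omitted).
    p: individual item profits.
    q: dict of pairwise profits earned when both items of a pair are
    placed in the SAME knapsack, keyed by (min_item, max_item)."""
    total = sum(p[i] for i in assign)
    items = list(assign)
    for a in range(len(items)):
        for b in range(a + 1, len(items)):
            i, j = items[a], items[b]
            if assign[i] == assign[j]:
                total += q.get((min(i, j), max(i, j)), 0)
    return total

p = {0: 5, 1: 4, 2: 6}
q = {(0, 1): 3, (1, 2): 7}
before = qmkp_value({0: 0, 1: 0}, p, q)   # items 0 and 1 in knapsack 0
# one ejection step: eject item 0, then insert item 2 into the freed space
after = qmkp_value({1: 0, 2: 0}, p, q)
```

An ejection chain strings several such steps together, letting the search reach assignments that no single swap or insertion could produce directly.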
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:314-3272016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:314-327
article
Lagrangean relaxation of the hull-reformulation of linear generalized disjunctive programs and its use in disjunctive branch and bound
In this work, we present a Lagrangean relaxation of the hull-reformulation of discrete-continuous optimization problems formulated as linear generalized disjunctive programs (GDP). The proposed Lagrangean relaxation has three important properties. The first property is that it can be applied to any linear GDP. The second property is that the solution to its continuous relaxation always yields 0–1 values for the binary variables of the hull-reformulation. Finally, it is simpler to solve than the continuous relaxation of the hull-reformulation. The proposed Lagrangean relaxation can be used in different GDP solution methods. In this work, we explore its use as primal heuristic to find feasible solutions in a disjunctive branch and bound algorithm. The modified disjunctive branch and bound is tested with several instances with up to 300 variables. The results show that the proposed disjunctive branch and bound performs faster than other versions of the algorithm that do not include this primal heuristic.
MILP; Disjunctive programming; GDP; Lagrangean relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221716301011
Trespalacios, Francisco
Grossmann, Ignacio E.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:593-6012016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:593-601
article
Impact of structure, market share and information asymmetry on supply contracts for a single supplier multiple buyer network
Market share of buyers and the influence of supply chain structure on the choice of supply contracts have received scant attention in the literature. This paper focuses on this gap and examines a network consisting of one supplier and two buyers under complete and partial decentralization. In the completely decentralized setting both buyers are independent of the supplier. In the partially decentralized setting the supplier and one of the buyers form a vertically integrated entity. Both buyers order from the single supplier and produce similar products to sell in the same market. The supplier charges the buyers through a contract. We investigate the influence of supply chain structure, market share and asymmetry of information on the supplier's choice of contracts. We demonstrate that both the linear two-part tariff and the quantity discount contract can coordinate the supply chain irrespective of the supply chain structure. By comparing profit levels of supply chain agents across different supply chain structures, we show that if a buyer possesses a minimum threshold market potential, the supplier has an incentive to collude with her. We calculate the cut-off policies for wholesale price and two-part tariff contracts by incorporating the reservation profit levels of individual agents. The managerial implications of the analyses and directions for future research are presented in the conclusion.
Supply chain management; Pricing; Asymmetric information; Competition; Market share;
http://www.sciencedirect.com/science/article/pii/S0377221716301424
Biswas, Indranil
Avittathur, Balram
Chatterjee, Ashis K
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:557-5692016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:557-569
article
Benders decomposition without separability: A computational study for capacitated facility location problems
Benders is one of the most famous decomposition tools for Mathematical Programming, and it is the method of choice e.g., in mixed-integer stochastic programming. Its hallmark is the capability of decomposing certain types of models into smaller subproblems, each of which can be solved individually to produce local information (notably, cutting planes) to be exploited by a centralized “master” problem. As its name suggests, the power of the technique comes essentially from the decomposition effect, i.e., the separability of the problem into a master problem and several smaller subproblems. In this paper we address the question of whether the Benders approach can be useful even without separability of the subproblem, i.e., when its application yields a single subproblem of the same size as the original problem. In particular, we focus on the capacitated facility location problem, in two variants: the classical linear case, and a “congested” case where the objective function contains convex but non-separable quadratic terms. We show how to embed the Benders approach within a modern branch-and-cut mixed-integer programming solver, addressing explicitly all the ingredients that are instrumental for its success. In particular, we discuss some computational aspects that are related to the negative effects derived from the lack of separability. Extensive computational results on various classes of instances from the literature are reported, with a comparison with the state-of-the-art exact and heuristic algorithms. The outcome is that a clever but simple implementation of the Benders approach can be very effective even without separability, as its performance is comparable and sometimes even better than that of the most effective and sophisticated algorithms proposed in the previous literature.
Benders decomposition; Congested capacitated facility location; Perspective reformulation; Branch-and-cut; Mixed-integer convex programming;
http://www.sciencedirect.com/science/article/pii/S0377221716301126
Fischetti, Matteo
Ljubić, Ivana
Sinnl, Markus
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:472-4882016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:472-488
article
Stability and chaos in demand-based pricing under social interactions
Demand-based pricing is often used to moderate demand fluctuations so as to level resource utilization and increase profitability. However, such pricing policies may not be effective when customers’ purchase decisions are influenced by social interactions. This paper investigates the demand dynamics, under a demand-based pricing policy, of a frequently purchased service when social interactions are at work. Customers are heterogeneous and adaptively forward-looking. Existing customers’ re-purchase decisions are based on adaptively formed price expectations and reservation prices. Potential customers are attracted through social interactions with existing customers. The demand process is characterized by a two-dimensional dynamical system. It is shown that the equilibrium demand can be unstable. For a given reservation price distribution, we first analyze the stability of the equilibrium demand under various scenarios of social interactions and customers’ adaptively forward-looking behavior, and then characterize their dynamics using the bifurcation plots, Lyapunov exponents and return maps. The results indicate that the demand process can be stable, periodic or chaotic. The study shows that the intended effect of a demand-based pricing policy may be offset by customers’ adaptively forward-looking behavior under the influence of social interactions. In fact, the interplay of these factors may even lead to chaotic demand dynamics. The result highlights the complex dynamics produced by a simple demand-price mechanism under social interactions. For a demand-based pricing strategy to be effective, companies must take social interactions into account.
OR in service industries; Demand dynamics; Forward-looking; Social interaction; Chaos;
http://www.sciencedirect.com/science/article/pii/S037722171630100X
Yuan, Xuchuan
Hwarng, H. Brian
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:337-3552016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:337-355
article
Modified Differential Evolution with Locality induced Genetic Operators for dynamic optimization
This article presents a modified version of the Differential Evolution (DE) algorithm for solving Dynamic Optimization Problems (DOPs) efficiently. The algorithm, referred to as Modified DE with Locality induced Genetic Operators (MDE-LiGO), incorporates changes in the three basic stages of a standard DE framework. The mutation phase has been entrusted to a locality-induced operation that retains traits of Euclidean distance-based closest individuals around a potential solution. Diversity maintenance is further enhanced by inclusion of a local-best crossover operation that empowers the algorithm with an explorative ability without directional bias. An exhaustive dynamic detection technique has been introduced to effectively sense changes in the landscape. An even distribution of solutions over different regions of the landscape calls for a solution retention technique that adapts the algorithm to dynamism by using previously stored information in diverse search domains. MDE-LiGO has been compared with seven state-of-the-art evolutionary dynamic optimizers on a set of benchmarks known as the Generalized Dynamic Benchmark Generator (GDBG) used in the competition on evolutionary computation in dynamic and uncertain environments held at the 2009 IEEE Congress on Evolutionary Computation (CEC). The experimental results clearly indicate that MDE-LiGO can outperform the other algorithms on most of the tested DOP instances in a statistically meaningful way.
Continuous optimization; Dynamic optimization; Differential Evolution; Self adaptation; Genetic operators;
http://www.sciencedirect.com/science/article/pii/S0377221716300959
Mukherjee, Rohan
Debchoudhury, Shantanab
Das, Swagatam
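The locality-induced mutation described in the abstract above can be sketched as a DE donor vector built only from the Euclidean-closest individuals around the target. The exact operator in MDE-LiGO differs in detail, so treat the following as an assumption-laden illustration:

```python
def locality_mutation(pop, target_idx, F=0.5, k=3):
    """DE-style donor vector built from the k Euclidean-closest individuals
    to the target (a sketch in the spirit of a locality-induced mutation;
    the published operator differs in detail).
    pop: list of real-valued vectors. Returns donor = x1 + F*(x2 - x3),
    where x1, x2, x3 are the target's nearest neighbors."""
    target = pop[target_idx]
    others = [i for i in range(len(pop)) if i != target_idx]
    # sort the rest of the population by squared distance to the target
    others.sort(key=lambda i: sum((a - b) ** 2 for a, b in zip(pop[i], target)))
    x1, x2, x3 = (pop[i] for i in others[:k])
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]

pop = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.2], [5.0, 5.0], [0.3, 0.1]]
donor = locality_mutation(pop, target_idx=0)
```

Restricting the donor construction to nearby individuals keeps mutation steps small around a promising region, which is useful when the landscape shifts and the population must track a moving optimum rather than jump across the space.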
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:673-6802016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:673-680
article
Licensing under general demand and cost functions
We consider a Cournot duopoly under general demand and cost functions, where an incumbent patentee has a cost-reducing technology that it can license to its rival using combinations of royalties and upfront fees (two-part tariffs). We show that for drastic technologies: (a) licensing occurs and both firms stay active if the cost function is superadditive and (b) licensing does not occur and the patentee monopolizes the market if the cost function is additive or subadditive. For non-drastic technologies, licensing takes place provided the average efficiency gain from the cost-reducing technology is higher than the marginal gain computed at the licensee’s reservation output. Optimal licensing policies include both royalties and fees for significantly superior technologies if the cost function is superadditive. By contrast, for additive and certain subadditive cost functions, optimal licensing policies have only royalties and no fees.
Patent licensing; Superadditive function; Subadditive function; Royalties; Two-part tariff;
http://www.sciencedirect.com/science/article/pii/S037722171600103X
Sen, Debapriya
Stamatopoulos, Giorgos
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:869-8792016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:869-879
article
Spline based survival model for credit risk modeling
Survival modeling has been adopted in retail banking because of its capability to analyze censored data. It is an important tool for credit risk scoring, stress testing and credit asset evaluation. In this paper, we introduce a regression-spline-based discrete-time survival model. The flexibility of spline functions allows us to model the nonlinear and irregular shapes of hazard functions. By incorporating regression splines into multinomial logistic regression, this approach complements the existing Cox model. From a practical perspective, logistic regression is relatively easy to understand and implement, and the simple parametric form is especially advantageous for predictive scoring. Using a credit card dataset, we demonstrate how to build a cubic-regression-spline-based survival model. We also compare the performance of the spline-based discrete-time survival model with the classical Cox model; our results show that the spline-based survival model provides similar statistical explanatory power and improves the prediction accuracy for attrition models, which have low event rates.
Retail banking; Credit risk scoring; Survival modeling; Regression spline;
http://www.sciencedirect.com/science/article/pii/S0377221716301035
Luo, Sirong
Kong, Xiao
Nie, Tingting
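The spline-based discrete-time hazard described in the abstract can be sketched with a generic truncated-power cubic spline basis fed through a logistic link. The knot placement, basis parametrization, and coefficients below are illustrative assumptions; the paper's exact construction may differ.

```python
import math

def cubic_spline_basis(t, knots):
    """Truncated-power cubic spline basis at time t:
    [1, t, t^2, t^3, (t-k1)_+^3, ..., (t-kq)_+^3].
    A generic textbook construction; the paper's basis may differ."""
    return [1.0, t, t ** 2, t ** 3] + [max(0.0, t - k) ** 3 for k in knots]

def hazard(t, beta, knots):
    """Discrete-time hazard via a logistic link on the spline basis,
    i.e., one binary logistic regression per time period."""
    eta = sum(b * x for b, x in zip(beta, cubic_spline_basis(t, knots)))
    return 1.0 / (1.0 + math.exp(-eta))

knots = [6.0, 12.0]            # hypothetical interior knots (in periods)
beta = [-3.0, 0.05, 0.0, 0.0, 0.0, 0.0]  # hypothetical fitted coefficients
print(len(cubic_spline_basis(1.0, knots)))  # 6 basis functions
print(0.0 < hazard(1.0, beta, knots) < 1.0)  # hazard is a probability
```

In a real application the `beta` vector would be estimated by maximum likelihood on person-period data, which is what makes the approach easy to implement with standard logistic regression tooling.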
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:584-5922016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:584-592
article
Lower bounding procedure for the asymmetric quadratic traveling salesman problem
In this paper we consider the Asymmetric Quadratic Traveling Salesman Problem (AQTSP). Given a directed graph and a function that maps every pair of consecutive arcs to a cost, the problem consists in finding a cycle that visits every vertex exactly once and such that the sum of the costs is minimal. We propose an extended Linear Programming formulation that has a variable for each cycle in the graph. Since the number of cycles is exponential in the graph size, we propose a column generation approach. Moreover, we apply a particular reformulation-linearization technique to a compact representation of the problem and compute lower bounds based on Lagrangian relaxation. We compare our new bounds with those obtained by some linearization models proposed in the literature. Computational results on benchmark sets used in the literature show that our lower bounding procedures are very promising.
Traveling salesman; Reformulation-linearization technique; Cycle cover; Column generation; Lower bound;
http://www.sciencedirect.com/science/article/pii/S037722171630159X
Rostami, Borzou
Malucelli, Federico
Belotti, Pietro
Gualandi, Stefano
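The defining feature of the AQTSP objective, as stated above, is that the cost attaches to pairs of consecutive arcs rather than to single arcs. A minimal sketch of evaluating that objective for a given tour (the instance and the dictionary encoding of the cost function are illustrative assumptions):

```python
def aqtsp_cost(tour, q):
    """Cost of a directed Hamiltonian cycle when the cost function q
    maps each pair of consecutive arcs (i->j, j->k) to a value.
    Objective evaluation only; solving AQTSP itself is NP-hard."""
    n = len(tour)
    total = 0
    for idx in range(n):
        i = tour[idx]
        j = tour[(idx + 1) % n]
        k = tour[(idx + 2) % n]
        total += q[(i, j, k)]  # cost of the consecutive arc pair
    return total

# toy instance on 3 vertices: every consecutive arc pair costs 1
q = {(i, j, k): 1 for i in range(3) for j in range(3) for k in range(3)}
print(aqtsp_cost([0, 1, 2], q))  # a 3-cycle has 3 arc pairs -> 3
```

Because each vertex couples two arcs in the objective, linearizing the products of arc variables is what drives the reformulation-linearization and column generation machinery discussed in the abstract.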
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:298-3132016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:298-313
article
Scheduling cranes at an indented berth
Container terminals are facing great challenges in order to meet the shipping industry’s requirements. An important trend within the industry is increasing vessel size: within the last decade, ship size in the Asia–Europe trade has effectively doubled. However, port productivity has not doubled along with the larger vessels, which has led to increased vessel turnaround times at ports, a severe problem. In order to meet industry targets, a game-changer in container handling is required. The indented berth structure is one important opportunity to address this issue. This novel berth structure requires new models and solution techniques for scheduling the quay cranes serving the indented berth. Accordingly, in this paper, we approach the quay crane scheduling problem for an indented berth structure. We focus on the challenges and constraints related to the novel architecture. We model the quay crane scheduling problem under this special structure and develop a solution technique based on branch-and-price. Extensive experiments are conducted to validate the efficiency of the proposed algorithm.
Maritime logistics; Crane sequencing; Crane scheduling; Container terminal operations; Indented berth;
http://www.sciencedirect.com/science/article/pii/S0377221716300753
Beens, Marie-Anne
Ursavas, Evrim
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:856-8682016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:856-868
article
It’s not now or never: Implications of investment timing and risk aversion on climate adaptation to extreme events
Public investment into risk reduction infrastructure plays an important role in facilitating adaptation to climate impacted hazards and natural disasters. In this paper, we provide an economic framework to incorporate investment timing and insurance market risk preferences when evaluating projects related to reducing climate impacted risks. The model is applied to a case study of bushfire risk management. We find that optimal timing of the investment may increase the net present value (NPV) of an adaptation project for various levels of risk aversion. Assuming risk neutrality, while the market is risk averse, is found to result in an unnecessary delay of the investment into risk reduction projects. The optimal waiting time is shorter when the insurance market is more risk averse or when a more serious scenario for climatic change is assumed. A higher investment cost or a higher discount rate will increase the optimal waiting time. We also find that a stochastic discount rate results in higher NPVs of the project than a discount rate that is assumed fixed at the long run average level.
Climate change adaptation; Investment timing; Catastrophic risk; Risk aversion; Real option;
http://www.sciencedirect.com/science/article/pii/S0377221716000898
Truong, Chi
Trück, Stefan
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:51-672016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:51-67
article
Modeling parallel movement of lifts and vehicles in tier-captive vehicle-based warehousing systems
This paper models and analyzes tier-captive autonomous vehicle storage and retrieval systems. While previous models assume sequential commissioning of the lift and vehicles, we propose a parallel processing policy for the system, under which an arriving transaction can request the lift and the vehicle simultaneously. To investigate the performance of this policy, we formulate a fork-join queueing network in which an arriving transaction is split into a horizontal movement task served by the vehicle and a vertical movement task served by the lift. We develop an approximation method based on decomposition of the fork-join queueing network to estimate the system performance. We build simulation models to validate the effectiveness of the analytical models. The results show that the fork-join queueing network is accurate in estimating the system performance under the parallel processing policy. Numerical experiments and a real case are carried out to compare the system response time of retrieval transactions under the parallel and sequential processing policies. The results show that, in systems with fewer than 10 tiers, the parallel processing policy outperforms the sequential processing policy by at least 5.51 percent. The advantage of the parallel processing policy decreases with the rack height and the aisle length. In systems with more than 10 tiers and a length-to-height ratio larger than 7, we can find a critical retrieval transaction arrival rate below which the parallel processing policy outperforms the sequential processing policy.
Logistics; Warehousing; AVS/RS; Analytical and simulation modelling; Performance analysis;
http://www.sciencedirect.com/science/article/pii/S0377221716301679
Zou, Bipan
Xu, Xianhao
(Yale) Gong, Yeming
De Koster, René
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:148-1602016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:148-160
article
Strategic behavior in an observable fluid queue with an alternating service process
We consider a fluid queue with two modes of service that represents a production facility, where the processing of the customers (units) is typically carried out at a much faster time scale than the machine-related processes. We examine the strategic behavior of the customers regarding the joining/balking dilemma under two levels of information upon arrival. Specifically, just after arriving and before making the decision, a customer observes the level of the fluid, but may or may not be informed about the state of the server (fast/slow). Assuming that the customers evaluate their utilities based on a natural reward/cost structure, which incorporates their desire for processing and their unwillingness to wait, we derive symmetric equilibrium strategy profiles. Moreover, we illustrate various effects of the information level on the strategic behavior of the customers. The corresponding social optimization problem is also studied, and the inefficiency of the equilibrium strategies is quantified via the Price of Anarchy (PoA) measure.
Queueing; Fluid flow models; Strategic customers; Balking; Equilibrium strategies;
http://www.sciencedirect.com/science/article/pii/S0377221716301928
Economou, Antonis
Manou, Athanasia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:372-3822016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:372-382
article
Generous, spiteful, or profit maximizing suppliers in the wholesale price contract: A behavioral study
Prior experimental research shows that, in aggregate, decision makers acting as suppliers to a newsvendor do not set the wholesale price to maximize supplier profits. However, these deviations from optimal have rarely been examined at an individual level. In this study, presented with scenarios that differ in terms of how profit is shared between retailer and supplier, suppliers set wholesale price contracts which deviate from profit-maximization in ways that are either generous or spiteful. On an individual basis, these deviations were found to be consistent with how the profit-maximizing contract compares to the subject's idea of a fair contract. Suppliers moved nearer to self-reported ideal allocations when they indicated a high degree of concern for fairness, consistent with previously proposed fairness models, and were found to be more likely to act upon generous inclinations than spiteful ones.
Behavioral OR; Supply chain management; Newsvendor; Contracting; Supplier pricing;
http://www.sciencedirect.com/science/article/pii/S0377221716300595
Niederhoff, Julie A.
Kouvelis, Panos
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:40-502016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:40-50
article
How to escape a declining market: Capacity investment or Exit?
This paper considers a firm that faces a declining profit stream for its established product. The firm has the option to invest in a new technology with which it can produce an innovative product while having the option to exit at any point in time. In the presence of an exit option, earlier work determined the optimal timing to invest, where it was shown that higher uncertainty might accelerate investment timing. In the present paper the firm also decides on capacity. This extension leads to monotonicity, i.e. higher uncertainty delays investment timing. We also find that higher potential profitability of the innovative product market increases the incentive to invest earlier, where, however, we get the counterintuitive result that the firm invests in smaller capacity. Finally, if quantity has a smaller negative effect on price, the firm wants to acquire a larger capacity at a lower investment threshold.
Investment analysis; Exit; Capacity investment; Declining market; Real options;
http://www.sciencedirect.com/science/article/pii/S0377221716302284
Hagspiel, Verena
Huisman, Kuno J.M.
Kort, Peter M.
Nunes, Cláudia
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:514-5232016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:514-523
article
A nonhomogeneous hidden Markov model of response dynamics and mailing optimization in direct marketing
Catalog firms mail billions of catalogs each year. To stay competitive, catalog managers need to maximize the return on these mailings by deciding who should receive a mail-order catalog. In this paper, we propose a two-step approach that allows firms to address the dynamic implications of mailing decisions, and to make efficient mailing decisions by maximizing the long-term value generated by customers. Specifically, we first propose a nonhomogeneous hidden Markov model (HMM) to capture the interactive dynamics between customers and mailings. In the second step, we use the parameters obtained from the HMM to determine the optimal mailing decisions using a Partially Observable Markov Decision Process (POMDP). Both the immediate and the long-term effects of mailings are accounted for. The mailing endogeneity that may result in biased parameter estimates is also corrected. We conduct an empirical study using six years of quarterly solicitation data derived from the well-known DMEF donation data set. All metrics used suggest that the proposed model fits the data well in terms of correct predictions and outperforms all other benchmark models. The simulation results show that the proposed method for optimizing total accrued benefits outperforms the usual targeted-marketing methodology for optimizing each promotion in isolation. We also find that the sequential targeting rules acquired by our proposed methods are more cost-containment oriented in nature compared with the corresponding single-event targeting rules.
OR in marketing; HMM; POMDP; Customer lifetime value; Mailing optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301084
Ma, Shaohui
Hou, Lu
Yao, Wensong
Lee, Baozhen
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:269-2782016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:269-278
article
Age-structured linear-state differential games
In this paper we search for conditions on age-structured differential games to make their analysis more tractable. We focus on a class of age-structured differential games which show the features of ordinary linear-state differential games, and we prove that their open-loop Nash equilibria are sub-game perfect. By means of a simple age-structured advertising problem, we provide an application of the theoretical results presented in the paper, and we show how to determine an open-loop Nash equilibrium.
Age-structured models; Differential games; Advertising;
http://www.sciencedirect.com/science/article/pii/S0377221716301539
Grosset, Luca
Viscolani, Bruno
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:280-2892016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:280-289
article
Exact and heuristic algorithms for the Hamiltonian p-median problem
This paper presents an exact algorithm, a constructive heuristic algorithm, and a metaheuristic for the Hamiltonian p-Median Problem (HpMP). The exact algorithm is a branch-and-cut algorithm based on an enhanced p-median based formulation, which is proved to dominate an existing p-median based formulation. The constructive heuristic is a giant tour heuristic, based on a dynamic programming formulation to optimally split a given sequence of vertices into cycles. The metaheuristic is an iterated local search algorithm using 2-exchange and 1-opt operators. Computational results show that the branch-and-cut algorithm outperforms the existing exact solution methods.
Hamiltonian; p-median; Branch-and-cut; Metaheuristic;
http://www.sciencedirect.com/science/article/pii/S0377221716300327
Erdoğan, Güneş
Laporte, Gilbert
Rodríguez Chía, Antonio M.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:294-3032016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:294-303
article
Sustaining cooperation in a differential game of advertising goodwill accumulation
The paper suggests a differential game of advertising competition among three symmetric firms, played over an infinite horizon. The objective of the research is to see if a cooperative agreement among the firms can be sustained over time. For this purpose the paper determines the characteristic functions (value functions) of individual players and all possible coalitions. We identify an imputation that belongs to the core. Using this imputation guarantees that, in any subgame starting out on the cooperative state trajectory, no coalition has an incentive to deviate from what was prescribed by the solution of the grand coalition’s optimization problem.
Differential games; Advertising competition; Core imputation;
http://www.sciencedirect.com/science/article/pii/S0377221716301576
Jørgensen, Steffen
Gromova, Ekaterina
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:614-6242016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:614-624
article
Measures of dynamism and urgency in logistics
Dynamism was originally defined as the proportion of online versus offline orders in the literature on dynamic logistics. Such a definition however, loses meaning when considering purely dynamic problems where all customer requests arrive dynamically. Existing measures of dynamism are limited to either (1) measuring the proportion of online versus offline orders or (2) measuring urgency, a concept that is orthogonal to dynamism, instead. The present paper defines separate and independent formal definitions of dynamism and urgency applicable to purely dynamic problems. Using these formal definitions, instances of a dynamic logistic problem with varying levels of dynamism and urgency were constructed and several route scheduling algorithms were executed on these problem instances. Contrary to previous findings, the results indicate that dynamism is positively correlated with route quality; urgency, however, is negatively correlated with route quality. The paper contributes the theory that dynamism and urgency are two distinct concepts that deserve to be treated separately.
Logistics; Transportation; Dynamism; Urgency; Measures;
http://www.sciencedirect.com/science/article/pii/S0377221716301497
van Lon, Rinde R.S.
Ferrante, Eliseo
Turgut, Ali E.
Wenseleers, Tom
Vanden Berghe, Greet
Holvoet, Tom
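The starting point of the abstract above is the classical degree of dynamism, the fraction of requests that arrive online, which the paper argues is uninformative for purely dynamic problems (it is always 1). A minimal sketch of that classical measure, for contrast with the paper's refined definitions:

```python
def degree_of_dynamism(n_dynamic, n_total):
    """Classical degree of dynamism: the fraction of requests that
    arrive online (dynamically) out of all requests. For a purely
    dynamic problem this is always 1, which is the limitation the
    paper's separate dynamism and urgency measures address."""
    if n_total == 0:
        raise ValueError("no requests")
    return n_dynamic / n_total

print(degree_of_dynamism(30, 40))  # 0.75
print(degree_of_dynamism(40, 40))  # 1.0 for a purely dynamic instance
```

Urgency, by contrast, concerns how little time remains between a request's arrival and its deadline, which is why the paper treats it as orthogonal to dynamism.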
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:243-2642016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:243-264
article
Sustainable Operations
The field of “Sustainable Operations” and the term itself have arisen only in the last ten to twenty years in the context of sustainable development. Even though the term is frequently used in practice and research, it has hardly been characterized and defined precisely in the literature so far. For reasons of clarity and unambiguity, we present terms and definitions before we demarcate Sustainable Operations from its neighboring topics. We especially focus on the interactions between economic, social and ecological aspects as part of Sustainable Operations, but exclude the development of a normative ethics, instead focusing on the use of quantitative methods from Operations Research. Then the broad subject of Sustainable Operations is structured into various areas arising from the typical structure of an enterprise. For each area, we present examples of applications and refer to the existing literature. The paper concludes with future research directions.
Sustainable Operations; Sustainable development; Operations research; Computational sustainability; Triple bottom line;
http://www.sciencedirect.com/science/article/pii/S0377221716300996
Jaehn, Florian
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:811-8242016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:811-824
article
Local matching of flexible load in smart grids
Today’s power systems are experiencing a transition from primarily fossil fuel based generation toward greater shares of renewable energy sources. It becomes increasingly costly to manage the resulting uncertainty and variability in power system operations solely through flexible generation assets. Incorporating demand side flexibility through appropriately designed incentive structures can add an additional lever to balance demand and supply. Based on a supply model using empirical wind generation data and a discrete model of flexible demand with temporal constraints, we design and evaluate a local online market mechanism for matching flexible load and uncertain supply. Under this mechanism, truthful reporting of flexibility is a dominant strategy for consumers, reducing payments and increasing the likelihood of allocation. Suppliers, during periods of scarce supply, benefit from elevated critical-value payments as a result of flexibility-induced competition on the demand side. We find that, for a wide range of the key parameters (supply capacity, flexibility level), the cost of ensuring incentive compatibility in a smart grid market, relative to the welfare-optimal matching, is relatively small. This suggests that local matching of demand and supply can be organized in a decentralized manner in the presence of a sufficiently flexible demand side. Extending the stylized demand model to include complementary demand structures, we demonstrate that decentralized matching induces only minor efficiency losses if demand is sufficiently flexible. Furthermore, by accounting for physical grid limitations we show that flexibility and grid capacity exhibit complementary characteristics.
OR in energy; Smart grid; Load flexibility; Online mechanism design;
http://www.sciencedirect.com/science/article/pii/S037722171630114X
Ströhle, Philipp
Flath, Christoph M.
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:236-2522016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:236-252
article
A two-stage classification technique for bankruptcy prediction
Ensemble techniques such as bagging or boosting, which are based on combinations of classifiers, make it possible to design models that are often more accurate than those made up of a unique prediction rule. However, the performance of an ensemble relies solely on the diversity of its different components and, ultimately, on the algorithm used to create this diversity. This means that such models, when designed to forecast corporate bankruptcy, do not incorporate or use any explicit knowledge about this phenomenon that might supplement or enrich the information they are likely to capture. This is why we propose a method that is precisely based on some of the knowledge that governs bankruptcy, using the concept of “financial profiles”, and we show how the complementarity between this technique and ensemble techniques can improve forecasts.
Decision support systems; Finance; Bankruptcy; Forecasting; Financial profile;
http://www.sciencedirect.com/science/article/pii/S0377221716301369
du Jardin, Philippe
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:404-4172016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:404-417
article
Logistics capacity planning: A stochastic bin packing formulation and a progressive hedging meta-heuristic
We consider the logistics capacity planning problem arising in the context of supply-chain management. We address the tactical-planning problem of determining the quantity of capacity units, hereafter called bins, of different types to secure for the next period of activity, given the uncertainty on future needs in terms of demand for loads (items) to be moved or stored, and the availability and costs of capacity for these movements or storage activities. We propose a modeling framework introducing a new class of bin packing problems, the Stochastic Variable Cost and Size Bin Packing Problem. The resulting two-stage stochastic formulation with recourse assigns to the first stage the tactical capacity-planning decisions of selecting bins, while the second stage models the subsequent adjustments to the plan, securing extra bins and packing the items into the selected bins, performed each time the plan is applied and new information becomes known. We propose a new meta-heuristic based on progressive hedging ideas that includes advanced strategies to accelerate the search and efficiently address the symmetry strongly present in the problem considered due to the presence of several equivalent bins of each type. Extensive computational results for a large set of instances support the claim of validity for the model, efficiency for the solution method proposed, and quality and robustness for the solutions obtained. The method is also used to explore the impact on the capacity plan and the recourse to spot-market capacity of a quite wide range of variations in the uncertain parameters and the economic environment of the firm.
Logistics capacity planning; Uncertainty; Stochastic Variable Cost and Size Bin Packing; Stochastic programming; Progressive hedging;
http://www.sciencedirect.com/science/article/pii/S0377221716300777
Crainic, Teodor Gabriel
Gobbato, Luca
Perboli, Guido
Rei, Walter
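Before any stochastic machinery, the deterministic core of the problem above is a variable cost and size bin packing: choose among bin types with different capacities and costs. A naive first-fit-decreasing baseline for that deterministic core is sketched below; it is an illustrative greedy, not the paper's two-stage model or progressive-hedging meta-heuristic, and the instance data are assumptions.

```python
def first_fit_decreasing(items, bin_types):
    """Greedy baseline for deterministic variable cost and size bin
    packing: place items (largest first) into the first open bin with
    room; otherwise open the cheapest bin type that can hold the item.
    A naive sketch, not the paper's progressive-hedging meta-heuristic."""
    bins = []        # each open bin: [remaining_capacity, cost]
    total_cost = 0
    for item in sorted(items, reverse=True):
        for b in bins:
            if b[0] >= item:
                b[0] -= item          # fits in an already-open bin
                break
        else:
            # open the cheapest bin type able to hold the item
            cap, cost = min((bt for bt in bin_types if bt[0] >= item),
                            key=lambda bt: bt[1])
            bins.append([cap - item, cost])
            total_cost += cost
    return total_cost, len(bins)

# hypothetical instance: bin types are (capacity, fixed cost) pairs
cost, n_bins = first_fit_decreasing([4, 3, 3, 2], [(5, 10), (10, 18)])
print(cost, n_bins)  # 30 3
```

In the paper's setting the bin-selection decisions are made before demand is known, and the recourse (extra bins, actual packing) is re-optimized per scenario, which is where the symmetry among equivalent bins of a type becomes a computational obstacle.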
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:105-1122016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:105-112
article
Offsetting inventory replenishment cycles
The inventory-staggering problem is a multi-item inventory problem in which replenishment cycles are scheduled or offset in order to minimize the maximum inventory level over a given planning horizon. We incorporate symmetry-breaking constraints in a mixed-integer programming model to determine optimal and near-optimal solutions. Local-search heuristics and evolutionary polishing heuristics are also presented to achieve effective and efficient solutions. We examine extensions of the problem that include a continuous-time framework as well as the effect of stochastic demand.
Inventory; Replenishment staggering; Symmetry reduction; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221716302016
Russell, Robert A.
Urban, Timothy L.
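The objective of the inventory-staggering problem above can be illustrated with a toy sawtooth model: each item is replenished cyclically, and choosing offsets changes the peak of the summed inventory. The instance below is a hypothetical two-item example solved by brute force, not the paper's MIP or heuristic approach.

```python
from itertools import product

def peak_inventory(offsets, cycles, demands, horizon):
    """Maximum total inventory over the horizon when item i is
    replenished every cycles[i] periods starting at offsets[i],
    receiving cycles[i]*demands[i] units (sawtooth depletion).
    Toy brute force, not the paper's MIP model with symmetry breaking."""
    def inv(i, t):
        age = (t - offsets[i]) % cycles[i]       # periods since last order
        return cycles[i] * demands[i] - demands[i] * age
    return max(sum(inv(i, t) for i in range(len(cycles)))
               for t in range(horizon))

cycles, demands = [2, 2], [5, 4]
# staggering the two cycles lowers the peak versus synchronized orders
best = min(product(range(2), range(2)),
           key=lambda o: peak_inventory(o, cycles, demands, 4))
print(peak_inventory((0, 0), cycles, demands, 4),
      peak_inventory(best, cycles, demands, 4))  # 18 14
```

Enumerating offsets is exponential in the number of items, which is why the paper turns to mixed-integer programming with symmetry-breaking constraints and to local-search heuristics.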
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:127-1372016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:127-137
article
Cost-effectiveness analysis for heterogeneous samples
The sampling information for cost-effectiveness analysis typically comes from different health care centers, and, as far as we know, it is taken for granted that the distribution of the cost and the effectiveness does not vary across centers. We argue that this assumption is unrealistic, and prove that ignoring the sample heterogeneity will typically give misleading results. Consequently, a cost-effectiveness procedure for heterogeneous samples is proposed here.
Clustering; Cost-effectiveness; Decision processes; Meta-analysis; Heterogeneous samples;
http://www.sciencedirect.com/science/article/pii/S0377221716301606
Moreno, E.
Girón, F.J.
Vázquez–Polo, F.J.
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:648-6582016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:648-658
article
An investigation of model risk in a market with jumps and stochastic volatility
The aim of this paper is to investigate model risk aspects of variance swaps and forward-start options in a realistic market setup where the underlying asset price process exhibits stochastic volatility and jumps. We devise a general framework in order to provide evidence of the model uncertainty attached to variance swaps and forward-start options. In our study, both variance swaps and forward-start options can be valued by means of analytic methods. We measure model risk using a set of 21 models embedding various dynamics with both continuous and discontinuous sample paths. To conduct our empirical analysis, we work with two major equity indices (S&P 500 and Eurostoxx 50) under different market situations. Our results evaluate model risk between 50 and 200 basis points, with an average value slightly above 100 basis points of the contract notional.
Risk management; Model risk; Robustness and sensitivity analysis; Variance swap; Forward-start option;
http://www.sciencedirect.com/science/article/pii/S0377221716301461
Coqueret, Guillaume
Tavin, Bertrand
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:9-182016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:9-18
article
A computational study for bilevel quadratic programs using semidefinite relaxations
In this paper, we deal with bilevel quadratic programming problems with binary decision variables in the leader problem and convex quadratic programs in the follower problem. For this purpose, we transform the bilevel problems into equivalent single-level quadratic formulations by replacing the follower problem with the equivalent Karush-Kuhn-Tucker (KKT) conditions. Then, we use the single-level formulations to obtain mixed integer linear programming (MILP) models and semidefinite programming (SDP) relaxations. Thus, we compute optimal solutions and upper bounds using linear programming (LP) and SDP relaxations. Our numerical results indicate that the SDP relaxations are considerably tighter than the LP ones. Consequently, the SDP relaxations allow finding tight feasible solutions for the problem, especially when the number of variables in the leader problem is larger than in the follower problem. Moreover, they are solved at a significantly lower computational cost for large-scale instances.
(I) Conic programming and interior point methods; Bilevel programming; Semidefinite programming; Mixed integer linear programming;
http://www.sciencedirect.com/science/article/pii/S0377221716000497
Adasme, Pablo
Lisser, Abdel
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:1-82016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:1-8
article
Edge coloring: A natural model for sports scheduling
In this work, we consider some basic sports scheduling problems and introduce the notions of graph theory which are needed to build adequate models. We show, in particular, how edge coloring can be used to construct schedules for sports leagues. Due to the emergence of various practical requirements, one cannot be restricted to classical schedules given by standard constructions, such as the circle method, to color the edges of complete graphs. The need of exploring the set of all possible colorings inspires the design of adequate coloring procedures. In order to explore the solution space, local search procedures are applied. The standard definitions of neighborhoods that are used in such procedures need to be extended. Graph theory provides efficient tools for describing various move types in the solution space. We show how formulations in graph theoretical terms give some insights to conceive more general move types. This leads to a series of open questions which are also presented throughout the text.
OR in sports; Scheduling; Graph theory; Edge coloring; Local search;
http://www.sciencedirect.com/science/article/pii/S0377221716301667
Januario, Tiago
Urrutia, Sebastián
Ribeiro, Celso C.
de Werra, Dominique
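The circle method mentioned in the abstract is the standard construction it refers to: it edge-colors the complete graph K_n (n even) with n-1 colors, i.e., it produces a round-robin schedule in which every team plays exactly once per round. A compact sketch:

```python
def circle_method(n):
    """Classic circle (polygon) method: an (n-1)-round round-robin
    schedule for n teams, n even -- equivalently, a proper edge
    coloring of K_n with n-1 colors, one color per round."""
    assert n % 2 == 0, "the circle method needs an even number of teams"
    teams = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        # keep teams[0] fixed and rotate the remaining teams one step
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return rounds

rounds = circle_method(4)
print(len(rounds))  # 3 rounds for 4 teams
games = {frozenset(p) for r in rounds for p in r}
print(len(games))   # 6 distinct games = all edges of K_4
```

The paper's point is precisely that practical requirements force one beyond this single canonical coloring, motivating local-search moves over the space of all edge colorings.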
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:138-1472016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:138-147
article
The predictive power of the business and bank sentiment of firms: A high-dimensional Granger Causality approach
We study the predictive power of industry-specific economic sentiment indicators for future macro-economic developments. In addition to the sentiment of firms towards their own business situation, we study their sentiment with respect to the banking sector – their main credit providers. The use of industry-specific sentiment indicators results in a high-dimensional forecasting problem. To identify the most predictive industries, we present a bootstrap Granger Causality test based on the Adaptive Lasso. This test is more powerful than the standard Wald test in such high-dimensional settings. Forecast accuracy is improved by using only the most predictive industries rather than all industries.
Bootstrap; Granger Causality; Lasso; Sentiment surveys; Time series forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221716301874
Wilms, Ines
Gelper, Sarah
Croux, Christophe
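The selection step behind such a test can be sketched in plain NumPy. This is an illustrative two-stage adaptive lasso (least-squares initial weights, then a coordinate-descent lasso on rescaled predictors); the paper's bootstrap Granger causality test is considerably more involved, and the function names below are this sketch's own.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, iters=500):
    """Coordinate-descent lasso: min (1/2T)||y - Xb||^2 + alpha*||b||_1."""
    T, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / T
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r / T
            b[j] = soft_threshold(rho, alpha) / col_sq[j]
    return b

def adaptive_lasso_select(X, y, alpha=0.1, gamma=1.0):
    """Return indices of predictors kept by a two-stage adaptive lasso."""
    b_init = np.linalg.lstsq(X, y, rcond=None)[0]     # stage-1 weights
    w = np.maximum(np.abs(b_init) ** gamma, 1e-8)
    b = lasso_cd(X * w, y, alpha) * w                 # stage-2, rescaled back
    return np.flatnonzero(np.abs(b) > 1e-10)
```

In a Granger setting, X would hold lagged industry sentiment series and the surviving indices are the candidate "most predictive industries".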
oai:RePEc:eee:ejores:v:253:y:2016:i:3:p:543-5562016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:3:p:543-556
article
Origin and early evolution of corner polyhedra
Corner Polyhedra are a natural intermediate step between linear programming and integer programming. This paper first describes how the concept of Corner Polyhedra arose unexpectedly from a practical operations research problem, and then describes how it evolved to shed light on fundamental aspects of integer programming and to provide a great variety of cutting planes for integer programming.
Integer programming; Cutting; Linear programming; Corner polyhedra;
http://www.sciencedirect.com/science/article/pii/S0377221716301114
Gomory, Ralph
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:19-282016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:19-28
article
Eidetic Wolf Search Algorithm with a global memory structure
A recently proposed metaheuristic called the Wolf Search Algorithm (WSA) has demonstrated its efficacy on various hard-to-solve optimization problems. In this paper, an improved version of WSA, namely Eidetic-WSA with a global memory structure (GMS), or simply eWSA, is presented. eWSA uses the GMS to improve its search for the optimal fitness value by preventing mediocre visited places in the search space from being revisited in future iterations. Inherited from swarm intelligence, search agents in eWSA and the traditional WSA converge to an optimal solution although the agents behave and make decisions autonomously. Heuristic information gathered from the collective memory of the swarm search agents is stored in the GMS and eventually leads to faster convergence and improved optimal fitness. The concept is similar to a hybrid metaheuristic based on WSA and Tabu Search. eWSA is rigorously tested on seven standard optimization functions. In particular, eWSA is compared with two state-of-the-art metaheuristics, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). eWSA shares some similarity with both approaches with respect to directed-random search. The similarity with ACO is, however, stronger, as ACO uses pheromones as global information references that allow a balance between using previous knowledge and exploring new solutions. Under comparable experimental settings (identical population size and number of generations), eWSA is shown to outperform both ACO and PSO with statistical significance. When the same computation time is allotted, only ACO is outperformed, owing to the comparatively long run time per iteration of eWSA.
Metaheuristics; Wolf Search Algorithm; Global memory structure; Ant Colony Optimization; Particle Swarm Optimization;
http://www.sciencedirect.com/science/article/pii/S0377221716301898
Fong, Simon
Deb, Suash
Hanne, Thomas
Li, Jinyan (Leo)
oai:RePEc:eee:ejores:v:254:y:2016:i:1:p:188-2012016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:254:y:2016:i:1:p:188-201
article
Timing of service investments for retailers under competition and demand uncertainty
We study how retailers can time their service investments when demand for a product is uncertain and consumers care both about price and service when choosing which retailer to buy from. By “service” we mean activities a retailer can invest in and which can drive traffic into the store. We consider offering extended operating hours as an example of such service and examine the timing of service investments for two competing retailers. Specifically, we analyze two retailers who compete on price and service level, and characterize both the prices and the service levels, as well as the timing of their service investment decisions. Our model also considers two effects of retailer service—the effect on total demand for the product and the effect on a retailer’s market share. We show that investing in service before demand realization, although counterintuitive, can be beneficial for competing retailers. On the other hand, a large mismatch between actual and expected demand and a low probability of high demand justifies the postponement of service investments after observing demand. We also show that the incentive to invest in service before demand realization becomes more pronounced when service investments can increase the overall demand for the product in addition to protecting market share. Our findings have important implications for retailers with regards to the timing of their service investment decisions.
Retail; Service; Uncertainty; Competition; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221716301515
Perdikaki, Olga
Kostamis, Dimitris
Swaminathan, Jayashankar M.
oai:RePEc:eee:ejores:v:253:y:2016:i:2:p:428-4402016-05-19RePEc:eee:ejores
RePEc:eee:ejores:v:253:y:2016:i:2:p:428-440
article
Carbon efficiency evaluation: An analytical framework using fuzzy DEA
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g., carbon emissions) increase as well, making the performance evaluation questionable. In addition, environmental efficiency has traditionally been measured using crisp inputs and outputs (desirable and undesirable). However, input and output data, such as CO2 emissions, are often imprecise or ambiguous in real-world evaluation problems. This paper proposes a DEA-based framework where the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application of energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
Energy efficiency; Data envelopment analysis; Fuzzy expected interval; Fuzzy expected value; Fuzzy ranking approach;
http://www.sciencedirect.com/science/article/pii/S0377221716300340
Ignatius, Joshua
Ghasemi, M.-R.
Zhang, Feng
Emrouznejad, Ali
Hatami-Marbini, Adel
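The crisp building block underlying such fuzzy frameworks is the standard input-oriented CCR multiplier model, which can be sketched with `scipy.optimize.linprog`. This is a minimal sketch of the classical crisp model only; the paper's fuzzy extension and undesirable-output treatment go well beyond it.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR (crisp) DEA efficiency scores.

    X: (n, m) input matrix, Y: (n, s) output matrix for n DMUs.
    For each DMU o: max u'y_o  s.t.  v'x_o = 1,  u'y_j - v'x_j <= 0 for all j,
    with multipliers u, v >= 0. Decision vector is [u, v]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])           # minimise -u'y_o
        A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v'x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores.append(-res.fun)
    return np.array(scores)
```

A DMU is CCR-efficient when its score equals 1; fuzzy variants replace the crisp data with interval or fuzzy-number coefficients.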
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:122-1312012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:122-131
article
Multi-criteria analysis and the resolution of sustainable development dilemmas: A stakeholder management approach
We demonstrate that stakeholder-oriented multi-criteria analysis (MCA) can adequately address a variety of sustainable development dilemmas in decision-making, especially when applied to complex project evaluations involving multiple objectives and multiple stakeholder groups. Such evaluations are typically geared towards satisfying simultaneously private economic goals, broader social objectives and environmental targets. We show that, under specific conditions, a variety of stakeholder-oriented MCA approaches may be able to contribute substantively to the resolution or improved governance of societal conflicts and the pursuit of the public good in the form of sustainable development. We contrast the potential usefulness of these stakeholder-oriented approaches – in terms of their ability to contribute to sustainable development – with more conventional MCA approaches and social cost–benefit analysis.
Multi-criteria analysis; Stakeholder management; Institutional economics; Sustainable development; Ethics;
http://www.sciencedirect.com/science/article/pii/S0377221712001385
De Brucker, Klaas
Macharis, Cathy
Verbeke, Alain
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:219-2262012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:219-226
article
High-order computational methods for option valuation under multifactor models
Many of the different numerical techniques in the partial differential equations framework for solving option pricing problems have employed only standard second-order discretization schemes. A higher-order discretization has the advantage of producing small matrix systems for computing sufficiently accurate option prices, and this paper proposes new computational schemes yielding high-order convergence rates for the solution of multi-factor option problems. These new schemes employ Galerkin finite element discretizations with quadratic basis functions for the approximation of the spatial derivatives in the pricing equations for stochastic volatility and two-asset option problems, and time integration of the resulting semi-discrete systems requires the computation of a single matrix exponential. The computations indicate that this combination of high-order finite elements and exponential time integration leads to efficient algorithms for multi-factor problems. Highly accurate European prices are obtained with relatively coarse meshes, and high-order convergence rates are also observed for options with the American early exercise feature. Various numerical examples are provided to illustrate the accuracy of the option prices for the Heston and Bates stochastic volatility models and for two-asset problems under Merton's jump-diffusion model.
Finance; American options; Galerkin discretization; Exponential time integration; Stochastic volatility model;
http://www.sciencedirect.com/science/article/pii/S0377221712005644
Rambeerich, N.
Tangman, D.Y.
Lollchund, M.R.
Bhuruth, M.
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:23-402012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:23-40
article
Deterministic and stochastic global optimization techniques for planar covering with ellipses problems
Problems of planar covering with ellipses are tackled in this work. Ellipses can have a fixed angle or each of them can be freely rotated. Deterministic global optimization methods are developed for both cases, while a stochastic version of the method is also proposed for large instances of the latter case. Numerical results show the effectiveness and efficiency of the proposed methods.
Global optimization; Non-linear programming; Planar covering with ellipses; Algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221712005619
Andretta, M.
Birgin, E.G.
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:752-7612012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:752-761
article
Measurement of simultaneous scale and mix changes in inputs and outputs using DEA facets and RTS
We show a new use of the efficient facets in DEA. Specifically, once we have identified all facets of the DEA technology, we are able to estimate (i) the potential changes in some inputs and outputs while fixing the other inputs and outputs, (ii) the ranges of simultaneous scale and mix changes in inputs and outputs while proportionally increasing or decreasing the other inputs and outputs, and (iii) the returns to scale (RTS). The proposed algorithms are applied to corporate planning processes of chemical companies.
Data envelopment analysis; Changes in inputs and outputs; Returns to scale; Facets;
http://www.sciencedirect.com/science/article/pii/S0377221712005395
Amatatsu, Hirofumi
Ueda, Tohru
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:65-782012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:65-78
article
The Capacitated Team Orienteering Problem: A Bi-level Filter-and-Fan method
This paper focuses on vehicle routing problems with profits and addresses the so-called Capacitated Team Orienteering Problem. Given a set of customers with a priori known profits and demands, the objective is to find the subset of customers, for which the collected profit is maximized, and to determine the visiting sequence and assignment to vehicle routes assuming capacity and route duration restrictions. The proposed method adopts a hierarchical bi-level search framework that takes advantage of different search landscapes. At the upper level, the solution space is explored on the basis of the collected profit, using a Filter-and-Fan method and a combination of profit oriented neighborhoods, while at the lower level the routing of customers is optimized in terms of traveling distance via a Variable Neighborhood Descent method. Computational experiments on benchmark data sets illustrate the efficiency and effectiveness of the proposed approach. Compared to existing results, new upper bounds are produced with competitive computational times.
Orienteering Problem; Vehicle routing; Filter-and-Fan; Tabu Search;
http://www.sciencedirect.com/science/article/pii/S0377221712005735
Tarantilis, C.D.
Stavropoulou, F.
Repoussis, P.P.
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:659-6682012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:659-668
article
A new continuous model for multiple re-entrant manufacturing systems
Semiconductor manufacturing systems that involve a large number of products and many steps can be modeled through conservation laws for a continuous density variable on production processes. In this paper, basic partial differential equation (PDE) models for single-product re-entrant manufacturing systems are proposed first. However, validation on numerical examples shows that these basic continuous models do not perform well for single-product re-entrant systems. A new state equation that takes into account the re-entrant degree of a product is therefore introduced to improve the basic continuous models. The applicability of the modified continuous model is illustrated through numerical examples. The influence of influx variation on the outflux is also discussed: as the influx changes, the outflux exhibits a reverse phenomenon. Based on the new state equation, a continuous model for multi-product re-entrant systems with different priorities is established, and an example is provided to illustrate its applicability.
Manufacturing; Continuous modeling; Simulation; Re-entrant factor; Multi-product;
http://www.sciencedirect.com/science/article/pii/S0377221712005231
Dong, Ming
He, Fenglan
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:701-7082012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:701-708
article
Equilibrium approach to asset pricing under Lévy processes
This work considers the equilibrium approach to asset pricing for Lévy processes. It derives the equity premium and pricing kernel analytically for the stock price process, obtains an equilibrium option pricing formula, and explains some empirical evidence, such as the negative variance risk premium, implied volatility smirk, and negative skewness risk premium, by comparing the physical and risk-neutral distributions of the log return. Unlike most current studies on equilibrium pricing under jump diffusion models, this work models the underlying asset price as the exponential of a Lévy process and thus allows a nearly arbitrary distribution of the jump component.
Pricing; Equilibrium approach; Lévy process; Equity risk premium; Variance risk premium;
http://www.sciencedirect.com/science/article/pii/S0377221712004924
Fu, Jun
Yang, Hailiang
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:189-2082012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:189-208
article
Lower and upper bounds for location-arc routing problems with vehicle capacity constraints
This paper addresses multi-depot location arc routing problems with vehicle capacity constraints. Two mixed integer programming models are presented for single and multi-depot problems. Relaxing these formulations leads to other integer programming models whose solutions provide good lower bounds for the total cost. A powerful insertion heuristic has been developed for solving the underlying capacitated arc routing problem. This heuristic is used together with a novel location–allocation heuristic to solve the problem within a simulated annealing framework. Extensive computational results demonstrate that the proposed algorithm can find high quality solutions. We also show that the potential cost saving resulting from adding location decisions to the capacitated arc routing problem is significant.
Location-arc routing; Arc routing; Integrated logistics; Mixed integer programming; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221712004705
Hashemi Doulabi, Seyed Hossein
Seifi, Abbas
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:739-7512012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:739-751
article
Unifying temporal and organizational scales in multiscale decision-making
In enterprise systems, making decisions is a complex task for agents at all levels of the organizational hierarchy. To calculate an optimal course of action, an agent has to include uncertainties and the anticipated decisions of other agents, recognizing that they also engage in a stochastic, game-theoretic reasoning process. Furthermore, higher-level agents seek to align the interests of their subordinates by providing incentives. Incentive-giving and receiving agents need to include the effect of the incentive on their payoffs in the optimal strategy calculations. In this paper, we present a multiscale decision-making model that accounts for uncertainties and organizational interdependencies over time. Multiscale decision-making combines stochastic games with hierarchical Markov decision processes to model and solve multi-organizational-scale and multi-time-scale problems. This is the first model that unifies the organizational and temporal scales and can solve a 3-agent, 3-period problem. Solutions can be derived as analytic equations with low computational effort. We apply the model to a service enterprise challenge that illustrates the applicability and relevance of the model. This paper makes an important contribution to the foundation of multiscale decision theory and represents a key step towards solving the general X-agent, T-period problem.
Distributed decision-making; Game theory; Markov processes; Multi-agent systems; OR in service industries;
http://www.sciencedirect.com/science/article/pii/S0377221712004936
Wernz, Christian
Deshmukh, Abhijit
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:775-7842012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:775-784
article
Belief rule-based system for portfolio optimisation with nonlinear cash-flows and constraints
A belief rule-based (BRB) system is a generic nonlinear modelling and inference scheme. It is based on the concept of belief structures and evidential reasoning (ER), and has been shown to be capable of capturing complicated nonlinear causal relationships between antecedent attributes and consequents. The aim of this paper is to develop a BRB system that complements the RiskMetrics WealthBench system for portfolio optimisation with nonlinear cash-flows and constraints. Two optimisation methods are presented to locate efficient portfolios under different constraints specified by the investors. Numerical studies demonstrate the effectiveness and efficiency of the proposed methodology.
Portfolio optimisation; Efficient frontier; Belief rule base; Evidential reasoning;
http://www.sciencedirect.com/science/article/pii/S0377221712005292
Chen, Yu-Wang
Poon, Ser-Huang
Yang, Jian-Bo
Xu, Dong-Ling
Zhang, Dongxu
Acomb, Simon
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:834-8412012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:834-841
article
Resampling DEA estimates of investment fund performance
Data envelopment analysis (DEA) is attractive for comparing investment funds because it handles different characteristics of fund distribution and gives a way to rank funds. There is substantial literature applying DEA to funds, based on the time series of funds’ returns. This article looks at the issue of uncertainty in the resulting DEA efficiency estimates, investigating consistency and bias. It uses the bootstrap to develop stochastic DEA models for funds, derive confidence intervals and develop techniques to compare and rank funds and represent the ranking. It investigates how to deal with autocorrelation in the time series and considers models that deal with correlation in the funds’ returns.
Data envelopment analysis; Bootstrap; Investment fund; Rank; Bias;
http://www.sciencedirect.com/science/article/pii/S0377221712005371
Lamb, John D.
Tee, Kai-Hong
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:52-642012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:52-64
article
A computational approach for eliminating error in the solution of the location set covering problem
The location set covering problem continues to be an important and challenging spatial optimization problem. The range of practical planning applications underscores its importance, spanning fire station siting, warning siren positioning, security monitoring and nature reserve design, to name but a few. It is challenging on a number of fronts. First, it can be difficult to solve for medium to large size problem instances, which are often encountered in combination with geographic information systems (GIS) based analysis. Second, the need to cover a region efficiently often brings about complications associated with the abstraction of geographic space. Representation as points can lead to significant gaps in actual coverage, whereas representation as polygons can result in a substantial overestimate of facilities needed. Computational complexity along with spatial abstraction sensitivity combine to make advances in solving this problem much needed. To this end, a solution framework for ensuring complete coverage of a region with a minimum number of facilities is proposed that eliminates potential error. Applications to emergency warning siren and fire station siting are presented to demonstrate the effectiveness of the developed approach. The approach can be applied to convex, non-convex and non-contiguous regions and is unaffected by arbitrary initial spatial representations of space.
Spatial optimization; GIS; Facility location;
http://www.sciencedirect.com/science/article/pii/S0377221712005681
Murray, Alan T.
Wei, Ran
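For intuition about what a set covering solver must do, the classical greedy heuristic can be sketched as follows. Note the paper develops an exact, error-eliminating framework; the greedy rule below is only the textbook ln(n)-approximation, and the function and variable names are this sketch's own.

```python
def greedy_set_cover(universe, candidate_covers):
    """Greedy heuristic for the location set covering problem (sketch).

    universe: set of demand points to cover.
    candidate_covers: dict mapping each candidate facility site to the set
    of demand points it covers. Returns the list of chosen sites."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the site that covers the most still-uncovered points.
        site = max(candidate_covers,
                   key=lambda s: len(candidate_covers[s] & uncovered))
        gained = candidate_covers[site] & uncovered
        if not gained:
            raise ValueError("remaining points cannot be covered")
        chosen.append(site)
        uncovered -= gained
    return chosen
```

The spatial-representation issues the abstract raises (points vs. polygons) enter through how `candidate_covers` is constructed from the GIS data, not through the covering logic itself.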
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:209-2182012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:209-218
article
Using the Viable System Model (VSM) to structure information processing complexity in disaster response
Earthquakes, hurricanes, flooding and terrorist attacks continue to threaten our society and, when the worst happens, lives depend on different agencies to manage the response. The literature shows that there is significant potential for operational research (OR) to aid disaster management and that, whilst some of this potential has been delivered, there is more that OR can contribute. In particular, OR can provide detailed support to analysing the complexity of information processing – an essential topic as failure could cause response agencies to act on low quality information or act too slowly – putting responders and victims at risk. However, there is a gap in methods for analysing information processing whilst delivering rapid response. This paper explores how OR can fill this gap through taking a Viable System Model (VSM) approach to analyse information processing. It contributes to the OR literature by showing how VSM can support the analysis of information processing as well as how the OR modelling technique can be further strengthened to facilitate the task.
Viable system model; Information processing; Emergency response; Disaster; Problem structuring;
http://www.sciencedirect.com/science/article/pii/S0377221712004870
Preece, Gary
Shaw, Duncan
Hayashi, Haruo
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:154-1662012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:154-166
article
Simulation-based framework to improve patient experience in an emergency department
The global economic crisis has a significant impact on healthcare resource provision worldwide. The management of limited healthcare resources is further challenged by the high level of uncertainty in demand, which can lead to unbalanced utilization of the available resources and a potential deterioration of patient satisfaction in terms of longer waiting times and perceived reduced quality of services. Therefore, healthcare managers require timely and accurate tools to optimize resource utilization in a complex and ever-changing patient care process. An interactive simulation-based decision support framework is presented in this paper for healthcare process improvement. Complexity and different levels of variability within the process are incorporated into the process modeling phase, followed by the development of a simulation model to examine the impact of potential alternatives. As a performance management tool, the balanced scorecard (BSC) is incorporated within the framework to support continual and sustainable improvement by using strategy-linked performance measures and actions. These actions are evaluated by the simulation model developed, whilst the trade-off between objectives, though somewhat conflicting, is analysed by a preference model. The preference model is designed in an interactive and iterative process considering decision makers' preferences regarding the selected key performance indicators (KPIs). A detailed implementation of the framework is demonstrated on an emergency department (ED) of an adult teaching hospital in north Dublin, Ireland. The results show that the unblocking of ED outflows by in-patient bed management is more effective than increasing only the ED physical capacity or the ED workforce.
Simulation; Multiple criteria decision analysis; Emergency department; Healthcare management;
http://www.sciencedirect.com/science/article/pii/S0377221712005693
Abo-Hamad, Waleed
Arisha, Amr
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:722-7312012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:722-731
article
Estimation of the number of failures in the Weibull model using the ordinary differential equation
When estimating the number of failures from right-truncated grouped data, we often encounter cases in which the estimate is smaller than the true value when the likelihood principle is applied to the conditional probability. In infectious disease spread prediction, the SIR model, described by simultaneous ordinary differential equations, is commonly used, and it can predict the number of infected patients reasonably well even when the observed data set is small. We have investigated whether an ordinary differential equation model can estimate the number of failures more accurately than the likelihood principle under right-truncated grouped data. Positive results are obtained for the Weibull model, similarly to the cases of SARS, A(H1N1), and FMD.
Reliability; Right truncated grouped data; Likelihood principle; Differential equation; Weibull distribution; SARS; A(H1N1); FMD;
http://www.sciencedirect.com/science/article/pii/S0377221712005322
Hirose, Hideo
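The estimation target here can be made concrete with a small sketch: fit a Weibull cumulative-failure curve to right-truncated counts and read off the implied total number of failures. This is purely illustrative least-squares fitting via `scipy.optimize.curve_fit`; the paper's ODE-based estimator and its comparison with the likelihood principle are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cum(t, n_total, eta, beta):
    """Expected cumulative number of failures by time t under a Weibull
    model with scale eta, shape beta, and n_total eventual failures."""
    return n_total * (1.0 - np.exp(-(t / eta) ** beta))

def fit_truncated_counts(t_obs, n_obs):
    """Least-squares fit of (n_total, eta, beta) to cumulative failure
    counts observed only up to a truncation time t_obs[-1]."""
    p0 = (n_obs[-1] * 2.0, t_obs[-1], 1.0)   # crude starting values
    popt, _ = curve_fit(weibull_cum, t_obs, n_obs, p0=p0, maxfev=20000)
    return popt
```

With truncated data the fitted `n_total` extrapolates beyond the observation window, which is exactly where naive conditional-likelihood estimates tend to fall short.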
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:842-8452012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:842-845
article
Forward dynamic utility functions: A new model and new results
A major obstacle in the existing models of forward dynamic utilities and investment performance evaluation is establishing the existence and uniqueness of the optimal solutions. Consequently, we present a new model of forward dynamic utilities. In doing so, we establish the existence and uniqueness of the solutions for a general (smooth) utility function, and we show that the assumptions needed for such solutions are similar to those under the backward formulation. Moreover, we provide unique viscosity solutions as well as discontinuous viscosity solutions. In addition, we introduce Hausdorff-continuous viscosity solutions to the portfolio model.
Forward utility; Investment; Portfolio; Stochastic; Viscosity solutions; HJB PDE;
http://www.sciencedirect.com/science/article/pii/S0377221712004985
Alghalith, Moawia
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:110-1212012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:110-121
article
A goal-driven approach to the 2D bin packing and variable-sized bin packing problems
In this paper, we examine the two-dimensional variable-sized bin packing problem (2DVSBPP), where the task is to pack all given rectangles into bins of various sizes such that the total area of the used bins is minimized. We partition the search space of the 2DVSBPP into sets and impose an order on the sets, and then use a goal-driven approach to take advantage of the special structure of this partitioned solution space. Since the 2DVSBPP is a generalization of the two-dimensional bin packing problem (2DBPP), our approach can be adapted to the 2DBPP with minimal changes. Computational experiments on the standard benchmark data for both the 2DVSBPP and 2DBPP show that our approach is more effective than existing approaches in the literature.
Packing; 2D bin packing; Goal-driven search; Best-fit heuristic;
http://www.sciencedirect.com/science/article/pii/S0377221712006054
Wei, Lijun
Oon, Wee-Chong
Zhu, Wenbin
Lim, Andrew
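For readers new to 2D packing, a baseline the goal-driven method competes against can be sketched as a simple shelf heuristic for fixed-size bins. This is only a first-fit shelf sketch under the assumption of equal bins and no rotation; it is far weaker than the paper's approach, and the function name is this sketch's own.

```python
def shelf_pack(rects, bin_w, bin_h):
    """Shelf first-fit heuristic for 2D bin packing (fixed-size bins).

    rects: list of (w, h) rectangles with w <= bin_w and h <= bin_h.
    Returns the number of bins used. Each bin is a list of shelves,
    each shelf tracked as [used_width, shelf_height]."""
    # Sort by decreasing height so each shelf's height is set by its
    # first (tallest) rectangle.
    rects = sorted(rects, key=lambda r: r[1], reverse=True)
    bins = []
    for w, h in rects:
        placed = False
        for shelves in bins:
            for shelf in shelves:
                if shelf[0] + w <= bin_w and h <= shelf[1]:
                    shelf[0] += w          # fits on an existing shelf
                    placed = True
                    break
            if placed:
                break
            if sum(s[1] for s in shelves) + h <= bin_h:
                shelves.append([w, h])     # open a new shelf in this bin
                placed = True
                break
        if not placed:
            bins.append([[w, h]])          # open a new bin
    return len(bins)
```

Variable-sized bins add a second decision layer (which bin type to open), which is where the paper's goal-driven ordering of the search space comes in.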
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:585-5942012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:585-594
article
Research advances in environmentally and socially sustainable operations
Consumers and governments are pressuring firms to strike a balance between profitability and sustainability. However, this balance can only be maintained in the long run if the firm can take a holistic approach to sustain the financial flow (profit), resource flow (planet) and development flow (people) for the entire ecosystem comprising poor producers in emerging/developing markets, global supply chain partners, consumers in developed countries, and the planet. By considering the flows associated with different entities within the ecosystem, we classify and summarize recent Operations Research/Management Science (OR/MS) research developments. Also, we identify several gaps for future research in this important area.
Environmental responsibility; Social responsibility; Sustainability; OR/MS models;
http://www.sciencedirect.com/science/article/pii/S0377221712005711
Tang, Christopher S.
Zhou, Sean
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:732-7382012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:732-738
article
On the DEA total weight flexibility and the aggregation in cross-efficiency evaluations
This paper discusses the DEA total weight flexibility in the context of the cross-efficiency evaluation. The DMUs in DEA are often assessed with unrealistic weighting schemes in their attempt to achieve the best ratings in their self-evaluation. We claim here that in a peer-appraisal like the cross-efficiency evaluation the cross-efficiencies provided by such weights cannot play the same role as those obtained with more reasonable weights. To address this issue, we propose to calculate the cross-efficiency scores by means of a weighted average of cross-efficiencies, instead of with the usual arithmetic mean, so the aggregation weights reflect the disequilibrium in the profiles of DEA weights that are used. Thus, the cross-efficiencies provided by profiles with large differences in their weights, especially those obtained with zero weights, would be attached lower aggregation weights (less importance) than those provided by more balanced profiles of weights.
Data envelopment analysis; Cross-efficiency evaluation; Aggregation;
http://www.sciencedirect.com/science/article/pii/S0377221712004663
Ruiz, José L.
Sirvent, Inmaculada
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:614-6252012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:614-625
article
A linear programming embedded probabilistic tabu search for the unequal-area facility layout problem with flexible bays
In this paper, a probabilistic tabu search (PTS) approach is proposed to solve the facility layout problem (FLP) with unequal-area departments. For the representation, the flexible bay structure (FBS), which is very common in many manufacturing and retail facilities, is used. In this paper, the FBS is relaxed by allowing empty spaces within bays, which results in more flexibility in assigning departments to bays. In addition, departments are allowed to be located more freely within the bays, and they can have different side lengths as long as they are within the bay boundaries and do not overlap. To achieve these goals, department shapes and their locations within bays are determined by linear programming (LP). A PTS approach is developed to search for an overall layout structure that describes the relative positions of departments for the relaxed FBS (RFBS). The proposed LP-embedded PTS–RFBS approach is used to solve thirteen FLP instances from the literature with varying sizes. The comparative results show that this approach is very promising and able to find new best solutions for several test problems.
Facilities planning and design; Unequal area facility layout; Flexible bay structure; Probabilistic tabu search; Constrained combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221712005607
Kulturel-Konak, Sadan
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:785-7932012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:785-793
article
Mean–variance asset–liability management: Cointegrated assets and insurance liability
The cointegration of major financial markets around the globe is well evidenced with strong empirical support. This paper considers the continuous-time mean–variance (MV) asset–liability management (ALM) problem for an insurer investing in an incomplete financial market with cointegrated assets. The number of trading assets is allowed to be less than the number of Brownian motions spanning the market. The insurer also faces the risk of paying uncertain insurance claims during the investment period. We assume that the cointegration market follows the diffusion limit of the error-correction model for cointegrated time series. Using the Markowitz (1952) MV portfolio criterion, we consider the insurer’s problem of minimizing variance in the terminal wealth, given an expected terminal wealth subject to interim random liability payments following a compound Poisson process. We generalize the technique developed by Lim (2005) to tackle this problem. The particular structure of cointegration enables us to solve the ALM problem completely in the sense that the solutions of the continuous-time portfolio policy and efficient frontier are obtained as explicit and closed-form formulas.
Asset–liability management; Cointegration; Mean–variance portfolio theory;
http://www.sciencedirect.com/science/article/pii/S0377221712005309
Chiu, Mei Choi
Wong, Hoi Ying
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:605-6132012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:605-613
article
Metaheuristics for multi-mode capital-constrained project payment scheduling
This paper involves the multi-mode capital-constrained project payment scheduling problem, in which the objective is to assign activity modes and payments so as to maximize the net present value (NPV) of the contractor under the constraint of capital availability. In light of the different payment patterns adopted, four optimization models are constructed using the event-based method. Given the NP-hardness of the problem, metaheuristics, including tabu search and simulated annealing, are developed and compared with multi-start iterative improvement and random sampling in a computational experiment performed on a randomly generated data set. The results indicate that the loop-nested tabu search is the most promising procedure for the problem studied. Moreover, the effects of several key parameters on the contractor’s NPV are investigated and the following conclusions are drawn: the contractor’s NPV rises with an increase in the contractor’s initial capital availability, the payment number, the payment proportion, or the project deadline; the contractor has a decreasing marginal return as initial capital availability goes up; and the contractor’s NPVs under the milestone-event-based payment pattern are not less than those under the other three payment patterns.
Project scheduling; Payments; Net present value; Capital constraint; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S037722171200536X
He, Zhengwen
Liu, Renjing
Jia, Tao
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:794-8062012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:794-806
article
Restoring infrastructure systems: An integrated network design and scheduling (INDS) problem
We consider the problem of restoring services provided by infrastructure systems after an extreme event disrupts them. This research proposes a novel integrated network design and scheduling problem that models these restoration efforts. In this problem, work groups must be allocated to build nodes and arcs into a network in order to maximize the cumulative weighted flow in the network over a horizon. We develop a novel heuristic dispatching rule that selects the next set of tasks to be processed by the work groups. We further propose families of valid inequalities for an integer programming formulation of the problem, one of which specifically links the network design and scheduling decisions. Our methods are tested on realistic data sets representing the infrastructure systems of New Hanover County, North Carolina in the United States and lower Manhattan in New York City. These results indicate that our methods can be used in both real-time restoration activities and long-term scenario planning activities. Our models are also applied to explore the effects on the restoration activities of aligning them with the goals of an emergency manager and to benchmark existing restoration procedures.
Infrastructure restoration; Extreme events; Network design; Scheduling; Dispatching rules;
http://www.sciencedirect.com/science/article/pii/S0377221712005310
Nurre, Sarah G.
Cavdaroglu, Burak
Mitchell, John E.
Sharkey, Thomas C.
Wallace, William A.
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:807-8172012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:807-817
article
An analytical theory of knowledge behaviour in networks
To date, OR has no means of modelling, and therefore predicting, the behaviour of knowledge in a system. Such knowledge-bearing systems are ubiquitous, and include social networking structures (of increasing importance in politics and in marketing) and more conventional organisational structures (such as communities of practice). Given the critical nature of knowledge production and dissemination as strategic issues for firms, this is a serious gap in our capability.
Organisation theory; OR in strategic planning; Systems dynamics; Knowledge networks; Knowledge dynamics;
http://www.sciencedirect.com/science/article/pii/S0377221712003657
Swart, Juani
Powell, John
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:132-1402012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:132-140
article
Shortest path games
We study cooperative games that arise from the problem of finding shortest paths from a specified source to all other nodes in a network. Such networks model, among other things, efficient development of a commuter rail system for a growing metropolitan area. We motivate and define these games and provide reasonable conditions for the corresponding rail application. We show that the core of a shortest path game is nonempty and satisfies the given conditions, but that the Shapley value for these games may lie outside the core. However, we show that the shortest path game is convex for the special case of tree networks, and we provide a simple, polynomial time formula for the Shapley value in this case. In addition, we extend our tree results to the case where users of the network travel to nodes other than the source. Finally, we provide a necessary and sufficient condition for shortest paths to remain optimal in dynamic shortest path games, where nodes are added to the network sequentially over time.
Game theory; Shortest paths; Network design;
http://www.sciencedirect.com/science/article/pii/S0377221712005346
Rosenthal, Edward C.
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:8-222012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:8-22
article
Carryover sequence-dependent group scheduling with the integration of internal and external setup times
This paper addresses a group scheduling problem in a two-machine flow shop with a bicriteria objective and carryover sequence-dependent setup times. This special type of group scheduling problem typically arises in the assembly of printed circuit boards (PCBs). The objective is to sequence all board types within a board group, as well as the board groups themselves, such that the objective function is minimized. We introduce the carryover sequence-dependent setup on machines and call it internal setup. As an opportunity for manufacturers to decrease costs, the focus is to completely eliminate the role of the kitting staff. Thus, we introduce the external setup (kitting) time for the next board group and require it to be performed by the machine operator during his idle time. Consequently, the internal and external setup times are integrated in this research, and to the best of our knowledge this is the first research on PCB group scheduling to integrate both setups. To solve this problem, a mathematical model is first developed. Then a heuristic, together with two meta-heuristic algorithms (one based on tabu search and the other on a genetic algorithm), is proposed, and their efficiency and effectiveness are tested on several problems. A statistical experimental design is also performed to evaluate the impact of different factors on the performance of the algorithms.
PCB group scheduling; Carryover sequence-dependent setups; Kitting times; Tabu search algorithm; Genetic algorithm; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221712005358
Yazdani Sabouni, M.T.
Logendran, Rasaratnam
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:227-2382012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:227-238
article
Scheduling electric power production at a wind farm
We present a model for scheduling power generation at a wind farm, and introduce a particle swarm optimization algorithm with a small world network structure to solve the model. The solution generated by the algorithm defines the operational status of wind turbines for a scheduling horizon selected by a decision maker. Different operational scenarios are constructed based on time series data of electricity price, grid demand, and wind speed. The computational results provide insights into management of a wind farm.
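The particle swarm optimization variant described in this abstract communicates over a small-world network among particles. As a hedged illustration only, the sketch below implements a local-best PSO on a ring neighbourhood (a small-world graph with zero rewiring probability), minimizing a generic objective; it is not the authors' wind-farm model, and all parameter values are conventional defaults rather than the paper's settings.

```python
import random

def pso_minimize(f, dim, n=20, iters=200, k=2, seed=0):
    """Local-best PSO: each particle follows the best personal best within
    k ring-neighbours on each side. A ring is the zero-rewiring limit of a
    small-world network -- a simplified stand-in for the paper's topology."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                 # personal best positions
    pf = [f(x) for x in X]                # personal best values
    w, c1, c2 = 0.72, 1.49, 1.49          # standard inertia/acceleration
    for _ in range(iters):
        for i in range(n):
            # best personal best in the ring neighbourhood of particle i
            nbrs = [(i + d) % n for d in range(-k, k + 1)]
            g = min(nbrs, key=lambda j: pf[j])
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
    b = min(range(n), key=lambda j: pf[j])
    return P[b], pf[b]

# Toy usage on a sphere function; a wind-farm model would instead evaluate
# turbine on/off schedules against price, demand and wind-speed series.
x, fx = pso_minimize(lambda v: sum(t * t for t in v), dim=3)
```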
Scheduling; Evolutionary computations; Wind farm; Particle swarm optimization; Small world network;
http://www.sciencedirect.com/science/article/pii/S0377221712006005
Zhang, Zijun
Kusiak, Andrew
Song, Zhe
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:79-922012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:79-92
article
Aggregate-level demand management in evacuation planning
Without successful large-scale regional evacuations, threats such as hurricanes and wildfires can cause a large loss of life. In this context, automobiles are oftentimes an essential transportation mode for evacuations, but the ensuing traffic typically overwhelms the roadway capacity and causes congestion on a massive scale. Congestion leads to many problems including longer, costlier, and more stressful evacuations, lower compliance rates, and increased risk to the population. Supply-based strategies have traditionally been used in evacuation planning, but they have been proven to be insufficient to reduce congestion to acceptable levels. In this paper, we study the demand-based strategies of aggregate-level staging and routing to structure the evacuation demand, both with and without congestion. We provide a novel modeling framework that offers strategic flexibility and utilizes a lexicographic objective function that represents a hierarchy of relevant evacuation-based goals. We also provide insights into the nature and effect of network bottlenecks. We compare our model with and without congestion in relation to tractability, normative optimality, and robustness under demand uncertainty. We also show the effectiveness of using demand-based strategies as opposed to using the status quo that involves a non-staged or simultaneous evacuation process. Effective solution procedures are developed and tested using hypothetical problem instances as well as a larger study based on a portion of coastal Virginia, USA.
Integer programming; Evacuation planning; Disaster management; Staging; Routing;
http://www.sciencedirect.com/science/article/pii/S0377221712005930
Bish, Douglas R.
Sherali, Hanif D.
oai:RePEc:eee:ejores:v:224:y:2013:i:1:p:93-1002012-10-18RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:1:p:93-100
article
Tabu search for the single row facility layout problem using exhaustive 2-opt and insertion neighborhoods
The single row facility layout problem (SRFLP) is the problem of arranging facilities with given lengths on a line, while minimizing the weighted sum of the distances between all pairs of facilities. The problem is NP-hard. In this paper, we present two tabu search implementations, one involving an exhaustive search of the 2-opt neighborhood and the other involving an exhaustive search of the insertion neighborhood. We also present techniques to significantly speed up the search of the two neighborhoods. Our computational experiments show that the speed up techniques are effective, and our tabu search implementations are competitive. Our tabu search implementations improved previously known best solutions for 23 out of the 43 large sized SRFLP benchmark instances.
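The SRFLP objective stated in this abstract is concrete enough to sketch directly: place facilities on a line in some order and sum the flow-weighted centre-to-centre distances. The instance data below is illustrative, not from the paper's benchmarks, and exhaustive search replaces the paper's tabu search, which scales to the large instances via 2-opt and insertion neighbourhoods.

```python
from itertools import permutations

def srflp_cost(order, lengths, weights):
    """Weighted sum of centre-to-centre distances for a linear arrangement.
    `order` is a permutation of facility indices, lengths[i] the facility
    lengths, weights[i][j] the (symmetric) flow between facilities i and j."""
    pos, x = {}, 0.0
    for f in order:                       # centre position of each facility
        pos[f] = x + lengths[f] / 2.0
        x += lengths[f]
    n = len(lengths)
    return sum(weights[i][j] * abs(pos[i] - pos[j])
               for i in range(n) for j in range(i + 1, n))

# Tiny illustrative instance; brute force is viable only for a handful of
# facilities, which is why the paper resorts to tabu search.
lengths = [3.0, 1.0, 2.0, 4.0]
weights = [[0, 2, 1, 4],
           [2, 0, 3, 0],
           [1, 3, 0, 1],
           [4, 0, 1, 0]]
best = min(permutations(range(4)),
           key=lambda p: srflp_cost(p, lengths, weights))
```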
Facilities planning and design; Single row facility layout; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221712005942
Kothari, Ravi
Ghosh, Diptesh
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:313-3242013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:313-324
article
Economic implications of poor access to antenatal care in rural and remote Western Australian Aboriginal communities: An individual sampling model of pregnancy
Australian Aboriginal women attend antenatal care less frequently and experience poorer pregnancy outcomes than non-Aboriginal women. Improving access to antenatal care is recognised as a means to improve pregnancy outcomes for mother and baby.
Individual sampling model; Decision analytic model; Antenatal care; Pregnancy care; Neonatal outcomes;
http://www.sciencedirect.com/science/article/pii/S0377221712008065
Cannon, Jeffrey W.
Mueller, Ute A.
Hornbuckle, Janet
Larson, Ann
Simmer, Karen
Newnham, John P.
Doherty, Dorota A.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:55-612013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:55-61
article
Lot sizing with carbon emission constraints
This paper introduces new environmental constraints, namely carbon emission constraints, in multi-sourcing lot-sizing problems. These constraints aim at limiting the carbon emission per unit of product supplied with different modes. A mode corresponds to the combination of a production facility and a transportation mode and is characterized by its economic costs and its unit carbon emission. Four types of constraints are proposed and analyzed in the single-item uncapacitated lot-sizing problem. The periodic case is shown to be polynomially solvable, while the cumulative, global and rolling cases are NP-hard. Perspectives to extend this work are discussed.
Lot sizing; Carbon emission; Dynamic programming; Complexity;
http://www.sciencedirect.com/science/article/pii/S037722171200896X
Absi, Nabil
Dauzère-Pérès, Stéphane
Kedad-Sidhoum, Safia
Penz, Bernard
Rapine, Christophe
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:341-3532013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:341-353
article
Benders Decomposition for multi-stage stochastic mixed complementarity problems – Applied to a global natural gas market model
This paper presents and implements a Benders Decomposition type of algorithm for large-scale, stochastic multi-period mixed complementarity problems. The algorithm is applied to various multi-stage natural gas market models accounting for market power exertion by traders. Due to the non-optimization nature of the natural gas market problem, a straightforward implementation of the traditional Benders Decomposition is not possible. The master and subproblems can be derived from the underlying optimization problems and transformed into complementarity problems. However, to complete the master problems, optimality cuts are added using the variational inequality-based method developed in Gabriel and Fuller (2010). In this manner, an alternative derivation of Benders Decomposition for stochastic MCPs is presented, thereby making this approach more applicable to a broader audience. The algorithm can successfully solve problems with up to 256 scenarios and more than 600 thousand variables, and problems with over 117 thousand variables with more than two thousand first-stage capacity expansion variables. The algorithm is efficient for solving two-stage problems. The computational time reduction for other stochastic problems is considerable and would be even larger if a parallel implementation of the algorithm were used. The paper concludes with a discussion of infrastructure expansion results, illustrating the impact of hedging on investment timing and optimal capacity sizes.
Stochastic programming; OR in energy; Benders Decomposition; Natural gas market models; Stochastic mixed complementarity problem; Optimality cut;
http://www.sciencedirect.com/science/article/pii/S0377221712008727
Egging, Ruud
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:332-3402013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:332-340
article
A multi-objective mathematical model for the industrial hazardous waste location-routing problem
Industrial hazardous waste management involves the collection, transportation, treatment, recycling and disposal of industrial hazardous materials that pose risk to their surroundings. In this paper, a new multi-objective location-routing model is developed, and implemented in the Marmara region of Turkey. The aim of the model is to help decision makers decide on locations of treatment centers utilizing different technologies, routing different types of industrial hazardous wastes to compatible treatment centers, locations of recycling centers and routing hazardous waste and waste residues to those centers, and locations of disposal centers and routing waste residues there. In the mathematical model, three criteria are considered: minimizing total cost, which includes total transportation cost of hazardous materials and waste residues and fixed cost of establishing treatment, disposal and recycling centers; minimizing total transportation risk related to the population exposure along transportation routes of hazardous materials and waste residues; and minimizing total risk for the population around treatment and disposal centers, also called site risk. A lexicographic weighted Tchebycheff formulation is developed and computed with CPLEX software to find representative efficient solutions to the problem. Data related to the Marmara region is obtained by utilizing Arcview 9.3 GIS software and Marmara region geographical database.
Routing; Multiple objective programming; Location-routing problem; Pareto optimization; Industrial hazardous waste management; Multi-objective model;
http://www.sciencedirect.com/science/article/pii/S0377221712008624
Samanlioglu, Funda
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:228-2362013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:228-236
article
Double marginalization and coordination in the supply chain with uncertain supply
This paper explores a generalized supply chain model subject to supply uncertainty after the supplier chooses the production input level. Decentralized systems under wholesale price contracts are investigated, with double marginalization effects shown to lead to supply insufficiencies, in the cases of both deterministic and random demands. We then design coordination contracts for each case and find that an accept-all type of contract is required to coordinate the supply chain with random demand, which is a much more complicated situation than that with deterministic demand. Examples are provided to illustrate the application of our findings to specific industrial domains. Moreover, our coordination mechanisms are shown to be applicable to the multi-supplier situation, which fills the research gap on assembly system coordination with random yield and random demand under a voluntary compliance regime.
Supply chain management; Uncertain supply; Contract design; Double marginalization;
http://www.sciencedirect.com/science/article/pii/S0377221712008120
Li, Xiang
Li, Yongjian
Cai, Xiaoqiang
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:203-2102013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:203-210
article
Master corner polyhedron: Vertices
We focus on the vertices of the master corner polyhedron (MCP), a fundamental object in the theory of integer linear programming. We introduce two combinatorial operations that transform vertices to their neighbors. This implies that each MCP can be defined by the initial vertices regarding these operations; we call them support vertices. We prove that the class of support vertices of all MCPs over a group is invariant under automorphisms of this group and describe MCP vertex bases. Among other results, we characterize its irreducible points, establish relations between a vertex and the nontrivial facets that pass through it, and prove that this polyhedron is of diameter 2.
Combinatorial optimization; Integer programming; Master corner polyhedron; Vertices;
http://www.sciencedirect.com/science/article/pii/S0377221712008351
Shlyk, Vladimir A.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:81-872013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:81-87
article
An inventory control model with stochastic review interval and special sale offer
Periodic review inventory models are widely used in practice, especially for inventory systems in which many different items are purchased from the same supplier. However, most periodic review models assume a fixed length for the review periods. In practice, the review periods may be of random (stochastic) length. This paper presents an inventory control model with random review intervals and a special sale offer from the supplier. The replenishment interval is assumed to follow one of two distributions, namely the exponential and uniform distributions. Shortages are allowed in the form of partial backordering. The model's convexity condition is discussed and closed-form solutions are proposed.
Logistics; Special sale; Stochastic review interval; Partial backlogging;
http://www.sciencedirect.com/science/article/pii/S0377221712009022
Karimi-Nasab, M.
Konstantaras, I.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:522-5352013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:522-535
article
Weighted Multivariate Mean Square Error for processes optimization: A case study on flux-cored arc welding for stainless steel claddings
The Multivariate Mean Square Error (MMSE) is a recently developed mathematical programming technique that optimizes multiple correlated characteristics. The MMSE approach has obtained noteworthy results by avoiding the inappropriate optimal points that can arise when a method fails to take a correlation structure into account. Where the MMSE approach is deficient, however, is in cases where the multiple correlated characteristics must be optimized with varying degrees of importance. The MMSE approach, in treating all responses as having the same importance, is unable to attribute the desired weights. This paper thus introduces a strategy that weights the responses in the MMSE approach. The method, called the Weighted Multivariate Mean Square Error (WMMSE), utilizes a weighting procedure that integrates Principal Component Analysis (PCA) and Response Surface Methodology (RSM). In doing so, WMMSE obtains uncorrelated weighted objective functions from the original responses. After being mathematically programmed, these functions are optimized by employing optimization algorithms. We applied WMMSE to optimize a stainless steel cladding application executed via the flux-cored arc welding (FCAW) process. Four input parameters and eight response variables were considered. Stainless steel cladding, which carries potential benefits for a variety of industries, deposits materials with anti-corrosive properties over the surfaces of low-cost materials. Optimal results were confirmed, ensuring the deposition of claddings with defect-free beads exhibiting the desired geometry and demonstrating good productivity indexes.
Multiple objective programming; Weighted Multivariate Mean Square Error (WMMSE); Stainless steel claddings; Flux-cored arc welding (FCAW); Response Surface Methodology (RSM);
http://www.sciencedirect.com/science/article/pii/S0377221712008946
Gomes, J.H.F.
Paiva, A.P.
Costa, S.C.
Balestrassi, P.P.
Paiva, E.J.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:268-2762013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:268-276
article
Optimal benchmarking for active portfolio managers
Within an agency theoretic framework adapted to the portfolio delegation issue, we show how to construct optimal benchmarks. In accordance with US regulations, the benchmark-adjusted compensation scheme is taken to be symmetric. The investor’s control consists in forcing the manager to adopt the appropriate benchmark so that his first-best optimum is attained. Solving simultaneously the manager’s and the investor’s dynamic optimization programs in a fairly general framework, we characterize the optimal benchmark. We then provide completely explicit solutions when the investor’s and the manager’s utility functions exhibit different CRRA parameters. We find that, even under optimal benchmarking, it is never optimal for the manager, and therefore for the investor, to follow exactly the benchmark, except in a very restrictive case. We finally assess by simulation the practical importance, in particular in terms of the investor’s welfare, of selecting a sub-optimal benchmark.
Benchmarking; Incentive fees; Mutual funds; Continuous time trading; Martingale approach; Principal-agent model;
http://www.sciencedirect.com/science/article/pii/S0377221712008089
Lioui, Abraham
Poncet, Patrice
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:560-5762013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:560-576
article
On selecting portfolio of international mutual funds using goal programming with extended factors
This paper proposes and investigates the use of several factors for portfolio selection of international mutual funds. Three of the selected factors are specific to mutual funds, three further factors are taken from macroeconomics, and one factor represents regional and country preferences. Each factor is treated as an objective in the multiple-objective approach of goal programming, and three variants of goal programming are utilized.
Goal programming; Portfolio selection; Extended factors; Mutual funds;
http://www.sciencedirect.com/science/article/pii/S0377221712008193
Tamiz, Mehrdad
Azmi, Rania A.
Jones, Dylan F.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:646-6572013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:646-657
article
Efficiency of purchasing and selling agents in markets with quality uncertainty: The case of illicit drug transactions
Since Akerlof’s theory of lemons, economists have viewed quality uncertainty as an informational advantage for sellers. Drawing on frontier techniques, we propose in this paper a simple method for measuring inefficiency of both sellers and buyers in markets for goods with different levels of quality. We apply a non-parametric robust double-frontier framework to the case of illicit substance markets, which suffer from imperfect information about drug quality for purchasers and to a lesser extent for sellers. We use unique data on cannabis and cocaine transactions collected in France that include information about price, quantity exchanged and purity. We find that transactional inefficiency does not really benefit either dealers or purchasers. Furthermore, information influences the performance of agents during market transactions.
(D) Data envelopment analysis; (P) Economics; (P) Pricing; (P) Purchasing; (P) Retailing; (P) Uncertainty modelling;
http://www.sciencedirect.com/science/article/pii/S0377221712009150
Ben Lakhdar, Christian
Leleu, Hervé
Vaillant, Nicolas Gérard
Wolff, François-Charles
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:122-1322013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:122-132
article
Patterns in stock market movements tested as random number generators
This paper shows that tests of Random Number Generators (RNGs) may be used to test the Efficient Market Hypothesis (EMH). It uses the Overlapping Serial Test (OST), a standard test in RNG research, to detect anomalous patterns in the distribution of sequences of stock market movements up and down. Our results show that most stock markets exhibit idiosyncratic recurrent patterns, contrary to the efficient market hypothesis, and that the OST detects a different kind of non-randomness from standard econometric long- and short-memory tests. Exposure of these anomalies should contribute to making markets more efficient.
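The pattern-counting idea behind the test named in this abstract can be sketched as follows. The snippet counts overlapping m-grams of an up/down (+1/−1) move sequence and forms a chi-square-type statistic against the equal-likelihood null. This is a simplified illustration only: the full Overlapping Serial Test additionally subtracts the corresponding (m−1)-gram statistic to correct for the dependence between overlapping counts, and the data here is invented.

```python
from collections import Counter

def overlapping_mgram_chisq(moves, m=3):
    """Chi-square-type statistic on overlapping m-grams of a +/-1 move
    sequence. Under the random-walk null all 2**m patterns are equally
    likely. Simplified relative to the full OST (no (m-1)-gram correction)."""
    n = len(moves) - m + 1                       # number of overlapping m-grams
    counts = Counter(tuple(moves[i:i + m]) for i in range(n))
    expected = n / 2 ** m
    stat = sum((c - expected) ** 2 / expected for c in counts.values())
    # patterns never observed contribute (0 - expected)**2 / expected each
    stat += (2 ** m - len(counts)) * expected
    return stat

# Hypothetical daily up/down moves; a large statistic suggests recurrent
# patterns inconsistent with the EMH null.
moves = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1]
stat = overlapping_mgram_chisq(moves, m=2)
```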
Stock market time series; Financial data mining; Forecasting; Finance; Overlapping serial test;
http://www.sciencedirect.com/science/article/pii/S0377221712009101
Doyle, John R.
Chen, Catherine H.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:88-1002013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:88-100
article
The impact of process deterioration on production and maintenance policies
This paper examines a single-stage production system that deteriorates with production actions, and improves with maintenance. The condition of the process can be in any of several discrete states, and transitions from state to state follow a semi-Markov process. The firm can produce multiple products, which differ by profit earned, expected processing time, and impact on equipment deterioration. The firm can also perform different maintenance actions, which differ by cost incurred, expected down time, and impact on the process condition. The firm needs to determine the optimal production and maintenance choices in each state in a way that maximizes the long-run expected average reward per unit time.
Manufacturing; Maintenance; Scheduling;
http://www.sciencedirect.com/science/article/pii/S0377221712009046
Kazaz, Burak
Sloan, Thomas W.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:1-112013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:1-11
article
Existence and solution methods for equilibria
Equilibrium problems provide a mathematical framework which includes optimization, variational inequalities, fixed-point and saddle point problems, and noncooperative games as particular cases. This general format has received increasing interest in the last decade, mainly because many theoretical and algorithmic results developed for one of these models can often be extended to the others through the unifying language provided by this common format. This survey paper aims at covering the main results concerning the existence of equilibria and the solution methods for finding them.
Equilibrium problem; Monotonicity; Coercivity; Auxiliary principle; Regularization;
http://www.sciencedirect.com/science/article/pii/S0377221712008892
Bigi, Giancarlo
Castellani, Marco
Pappalardo, Massimo
Passacantando, Mauro
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:12-21 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:12-21
article
Constraint qualifications in linear vector semi-infinite optimization
Linear vector semi-infinite optimization deals with the simultaneous minimization of finitely many linear scalar functions subject to infinitely many linear constraints. This paper provides characterizations of the weakly efficient, efficient, properly efficient and strongly efficient points in terms of cones involving the data and Karush–Kuhn–Tucker conditions. The latter characterizations rely on different local and global constraint qualifications. The global constraint qualifications are illustrated on a collection of selected applications.
Multiple objective programming; Linear vector semi-infinite optimization; Constraint qualifications; Cone conditions; KKT conditions;
http://www.sciencedirect.com/science/article/pii/S0377221712006698
Goberna, M.A.
Guerra-Vazquez, F.
Todorov, M.I.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:286-292 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:286-292
article
Revisiting a game theoretic framework for the robust railway network design against intentional attacks
This paper discusses and extends some competitive aspects of the games proposed in an earlier work, where a robust railway network design problem was formulated as a non-cooperative zero-sum game in normal form between a designer/operator and an attacker. Given the importance of the order of play and of the information available to the players at the moment of their decisions, we extend those previous models by formulating this situation as a dynamic game. In addition, we propose a new mathematical programming model that optimizes both the network design and the allocation of security resources over the network. The paper also proposes a model to distribute security resources over an already existing railway network in order to minimize the negative effects of an intentional attack. For the sake of readability, all concepts are introduced with the help of an illustrative example.
Robust network design; Game theory; Protection resource allocation; Equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221712008399
Perea, Federico
Puerto, Justo
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:246-257 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:246-257
article
The golden number and Fibonacci sequences in the design of voting structures
Some distinguished types of voters, such as vetoes, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game, independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind we enumerate these special classes of games and find that their number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence, we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor, which is the golden number or its square. From a more general point of view, our studies are related to the design of voting structures with a predetermined importance ranking.
91A12; 91A80; 91B12; Game theory; Voting systems; Complete simple games; Enumeration and classification; Operational research structures;
http://www.sciencedirect.com/science/article/pii/S0377221712007631
Freixas, Josep
Kurz, Sascha
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:615-625 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:615-625
article
The extended QUALIFLEX method for multiple criteria decision analysis based on interval type-2 fuzzy sets and applications to medical decision making
QUALIFLEX, a generalization of Jacquet-Lagreze’s permutation method, is a useful outranking method in decision analysis because of its flexibility with respect to cardinal and ordinal information. This paper develops an extended QUALIFLEX method for handling multiple criteria decision-making problems in the context of interval type-2 fuzzy sets. Interval type-2 fuzzy sets, whose membership values are crisp intervals, are the most widely used of the higher-order fuzzy sets because of their relative simplicity. Using a linguistic rating system converted into interval type-2 trapezoidal fuzzy numbers, the extended QUALIFLEX method investigates all possible permutations of the alternatives with respect to the level of concordance of the complete preference order. Based on a signed distance-based approach, this paper proposes the concordance/discordance index, the weighted concordance/discordance index, and the comprehensive concordance/discordance index as evaluative criteria of the chosen hypothesis for ranking the alternatives. The feasibility and applicability of the proposed methods are illustrated by a medical decision-making problem concerning acute inflammatory demyelinating disease, and a comparative analysis with another outranking approach is conducted to validate the effectiveness of the proposed methodology.
Decision analysis; QUALIFLEX; Outranking method; Multiple criteria decision-making; Interval type-2 fuzzy set;
http://www.sciencedirect.com/science/article/pii/S0377221712008909
Chen, Ting-Yu
Chang, Chien-Hung
Rachel Lu, Jui-fen
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:626-635 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:626-635
article
Diversification-consistent data envelopment analysis with general deviation measures
We propose new efficiency tests which are based on traditional DEA models and take portfolio diversification into account. The goal is to identify the investment opportunities that perform well without specifying our attitude to risk. We use general deviation measures as the inputs and return measures as the outputs. We discuss the choice of the set of investment opportunities, including portfolios with a limited number of assets. We compare the optimal values (efficiency scores) of all proposed tests, leading to relations between the sets of efficient opportunities. The strength of the tests is then discussed. We test the efficiency of 25 world financial indices using the new DEA models with CVaR deviation measures.
Efficiency tests; Data envelopment analysis; General deviation measures; Diversification-consistency;
http://www.sciencedirect.com/science/article/pii/S0377221712008235
Branda, Martin
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:221-227 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:221-227
article
Dual-channel closed-loop supply chain with government consumption-subsidy
The government plays an important role in the formation and operation of closed-loop supply chains. This paper focuses on how a consumption subsidy influences a dual-channel closed-loop supply chain. After introducing the government consumption-subsidy program and the dual-channel closed-loop supply chain, the paper analyzes the channel members’ decisions before and after the government-funded program is implemented. Finally, the influence of the consumption subsidy is examined from the perspectives of the consumers, the scale of the closed-loop supply chain, and the enterprises, which provides an important basis for our propositions. The key propositions of the paper are as follows: all consumers who purchase the new products benefit from the government consumption subsidy to varying degrees; the consumption subsidy is conducive to the expansion of the closed-loop supply chain; and both the manufacturer and the retailer benefit from the consumption subsidy, while whether the e-tailer benefits is uncertain.
Supply chain management; Consumption-subsidy; E-commerce; China;
http://www.sciencedirect.com/science/article/pii/S0377221712007989
Ma, Wei-min
Zhao, Zhang
Ke, Hua
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:636-645 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:636-645
article
Computing tournament solutions using relation algebra and RelView
We describe a simple computing technique for the tournament choice problem. It rests upon relational modeling and uses the BDD-based computer system RelView for the evaluation of the relation-algebraic expressions that specify the solutions and for the visualization of the computed results. The Copeland set can immediately be identified using RelView’s labeling feature. Relation-algebraic specifications of the Condorcet non-losers, the Schwartz set, the top cycle, the uncovered set, the minimal covering set, the Banks set, and the tournament equilibrium set are delivered. We present an example of a tournament on a small set of alternatives, for which the above choice sets are computed and visualized via RelView. The technique described in this paper is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other problems of social choice and game theory.
Tournament; Relational algebra; RelView; Copeland set; Condorcet non-losers; Schwartz set;
http://www.sciencedirect.com/science/article/pii/S0377221712008739
Berghammer, Rudolf
Rusinowska, Agnieszka
de Swart, Harrie
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:190-198 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:190-198
article
Multistage optimization of option portfolio using higher order coherent risk measures
Choosing a suitable risk measure to optimize an option portfolio’s performance represents a significant challenge. This paper illustrates the advantages of higher-order coherent risk measures for evaluating the evolution of option risk. It discusses the detailed implementation of the resulting dynamic risk optimization problem using stochastic programming. We propose an algorithmic procedure to optimize an option portfolio based on the minimization of conditional higher-order coherent risk measures. Illustrative examples demonstrate some advantages in portfolio performance when higher-order coherent risk measures are used in the risk optimization criterion.
Coherent risk measures; Duality; Average value-at-risk; Monte Carlo simulation; Kusuoka measure; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S0377221712009447
Matmoura, Yassine
Penev, Spiridon
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:481-490 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:481-490
article
Optimal selection of process mean for a stochastic inventory model
It is very common to assume deterministic demand in the literature on integrated targeting–inventory models. However, if variability in demand is high, there may be significant disruptions from using the deterministic solution in a probabilistic environment. Thus, the model would not be applicable to real-world situations and adjustments must be made. The purpose of this paper is to develop a model for the integrated targeting–inventory problem when demand is a random variable. In particular, the proposed model jointly determines the optimal process mean, lot size and reorder point in a (Q,R) continuous review model. In order to investigate the effect of uncertainty in demand, the proposed model is compared with three baseline cases. The first considers a hierarchical model in which the producer determines the process mean and lot-sizing decisions separately; this hierarchical model is used to show the effect of integrating process targeting with production/inventory decisions. The second baseline case is the deterministic demand case, which is used to show the effect of demand variation on the optimal solution. The last baseline case covers the situation where the variation in the filling amount is negligible; this case demonstrates the sensitivity of the total cost with respect to the variation in the process output. Also, a procedure is developed to determine the optimal solution for the proposed models. Empirical results show that ignoring randomness in the demand pattern leads to underestimating the expected total cost. Moreover, the results indicate that the performance of a process can be improved significantly by reducing its variation.
Quality control; Targeting problem; Production; Demand uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221712008703
Darwish, M.A.
Abdulmalek, F.
Alkhedher, M.
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:44-54 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:44-54
article
Bose–Einstein condensation in satisfiability problems
This paper is concerned with the complex behavior arising in satisfiability problems. We present a new statistical physics-based characterization of the satisfiability problem. Specifically, we design an algorithm that is able to produce graphs starting from a k-SAT instance, in order to analyze them and show whether a Bose–Einstein condensation occurs. We observe that, analogously to complex networks, the networks of k-SAT instances follow Bose statistics and can undergo Bose–Einstein condensation. In particular, k-SAT instances move from a fit-get-rich network to a winner-takes-all network as the ratio of clauses to variables decreases, and the phase transition of k-SAT approximates the critical temperature for the Bose–Einstein condensation. Finally, we employ the fitness-based classification to enhance SAT solvers (e.g., ChainSAT) and obtain the consistently highest performing SAT solver for CNF formulas, and therefore a new class of efficient hardware and software verification tools.
k-SAT; Complex networks; Bose–Einstein condensation; Phase transition; Performance;
http://www.sciencedirect.com/science/article/pii/S0377221712008910
Angione, Claudio
Occhipinti, Annalisa
Stracquadanio, Giovanni
Nicosia, Giuseppe
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:166-173 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:166-173
article
The U.S. Navy explores detailing cost reduction via Data Envelopment Analysis
In this paper we show how a variation of Data Envelopment Analysis, the Generalized Symmetric Weight Assignment Technique, is used to assign sailors to jobs for the U.S. Navy. This method differs from others as the assignment is a multi-objective problem where the importance of each objective, called a metric, is determined by the decision-maker and promoted within the assignment problem. We explore how the method performs as the importance of particular metrics increases. Finally, we show that the proposed method leads to substantial cost savings for the U.S. Navy without degrading the resulting assignments’ performance on other metrics.
Data Envelopment Analysis; OR in military; OR in manpower planning;
http://www.sciencedirect.com/science/article/pii/S0377221712009113
Sutton, Warren
Dimitrov, Stanko
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:461-470 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:461-470
article
Stochastic multiobjective problems with complementarity constraints and applications in healthcare management
We consider a class of stochastic multiobjective problems with complementarity constraints (SMOPCCs). We derive first-order optimality conditions, including Clarke-, Mordukhovich- and strong-type stationarity in the Pareto sense, for the SMOPCC. Since these first-order optimality systems involve some unknown index sets, we reformulate them as nonlinear equations with simple constraints. Then, we introduce an asymptotic method to solve these constrained equations. Furthermore, we apply this methodology to a patient allocation problem in healthcare management.
Stochastic multiobjective problem with complementarity constraints; Pareto stationarity; Constrained equation; Asymptotic method; Healthcare;
http://www.sciencedirect.com/science/article/pii/S037722171200820X
Lin, Gui-Hua
Zhang, Dali
Liang, Yan-Chao
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:551-559 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:551-559
article
The implementor/adversary algorithm for the cyclic and robust scheduling problem in health-care
A general problem in health-care consists in allocating some scarce medical resource, such as operating rooms or medical staff, to medical specialties in order to keep the queue of patients as short as possible. A major difficulty stems from the fact that such an allocation must be established several months in advance, and the exact number of patients for each specialty is an uncertain parameter. Another problem arises for cyclic schedules, where the allocation is defined over a short period, e.g. a week, and then repeated during the time horizon. However, the demand typically varies from week to week: even if we know in advance the exact demand for each week, the weekly schedule cannot be adapted accordingly. We model both the uncertain and the cyclic allocation problem as adjustable robust scheduling problems. We develop a row and column generation algorithm to solve this problem and show that it corresponds to the implementor/adversary algorithm for robust optimization recently introduced by Bienstock for portfolio selection. We apply our general model to compute master surgery schedules for a real-life instance from a large hospital in Oslo.
Health-care optimization; Master surgery scheduling; Robust optimization; Mixed-integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221712007862
Holte, Matias
Mannino, Carlo
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:199-215 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:199-215
article
Robust supply chain network design with service level against disruptions and demand uncertainties: A real-life case
We have developed a stochastic mathematical formulation for designing a network of multi-product supply chains comprising several capacitated production facilities, distribution centres and retailers in markets under uncertainty. This model considers demand-side and supply-side uncertainties simultaneously, which makes it more realistic than models in the existing literature. In this model, we consider a discrete set of potential locations for distribution centres and retailing outlets and investigate the impact of strategic facility location decisions on the operational inventory and shipment decisions of the supply chain. We use a path-based formulation that allows us to consider supply-side uncertainties, namely possible disruptions at manufacturers, at distribution centres and on their connecting links. The resultant model, which incorporates the cut-set concept from reliability theory together with robust optimisation, is a mixed integer nonlinear problem. To solve the model to global optimality, we have created a transformation based on the piecewise linearisation method. Finally, we illustrate the model outputs and discuss the results through several numerical examples, including a real-life case study from the agri-food industry.
Supply chain network; Risk analysis; Disruption; Robust optimisation; Agri-food;
http://www.sciencedirect.com/science/article/pii/S0377221712009484
Baghalian, Atefeh
Rezapour, Shabnam
Farahani, Reza Zanjirani
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:152-165 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:152-165
article
Integrated machine scheduling and vehicle routing with time windows
This paper integrates production and outbound distribution scheduling in order to minimize total tardiness. The overall problem consists of two subproblems. The first addresses scheduling a set of jobs on parallel machines with machine-dependent ready times. The second focusses on the delivery of completed jobs with a fleet of vehicles which may differ in their loading capacities and ready times. Job-dependent processing times, delivery time windows, service times, and destinations are taken into account. A genetic algorithm approach is introduced to solve the integrated problem as a whole. Two main questions are examined. Are the results of integrating machine scheduling and vehicle routing significantly better than those of classic decomposition approaches which break down the overall problem, solve the two subproblems successively, and merge the subsolutions to form a solution to the overall problem? And if so, is it possible to capitalize on these potentials despite the complexity of the integrated problem? Both questions are tackled by means of a numerical study. The genetic algorithm outperforms the classic decomposition approaches in case of small-size instances and is able to generate relatively good solutions for instances with up to 50 jobs, 5 machines, and 10 vehicles.
Supply Chain Scheduling; Parallel machines; Vehicle routing; Time windows; Total tardiness; Genetic algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221712009010
Ullrich, Christian A.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:301-312 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:301-312
article
Modelling activities at a neurological rehabilitation unit
A queuing model of a specialist neurological rehabilitation unit is studied. The application is to the Neurological Rehabilitation Centre at Rookwood Hospital (Cardiff, UK), the national rehabilitation unit for Wales. Due to high demand this 21-bed inpatient facility is nearly always at maximum occupancy, and with a significant bed-cost per day this makes it a prime candidate for mathematical modelling. Central to this study is the concept that treatment intensity has an effect on patient length of stay. The model is constructed in four stages. First, appropriate patient groups are determined based on a number of patient-related attributes. Second, a purpose-built scheduling program is used to deduce typical levels of treatment to patients of each group. These are then used to estimate the mean length of stay for each patient group. Finally, the queuing model is constructed. This consists of a number of disconnected homogeneous server queuing systems; one for each patient group. A Coxian phase-type distribution is fitted to the length of time from admission until discharge readiness and an exponential distribution models the remainder of time until discharge. Some hypothetical scenarios suggested by senior management are then considered and compared on the grounds of a number of performance measures and cost implications.
Queuing theory; Markov modelling; Scheduling; OR in health services;
http://www.sciencedirect.com/science/article/pii/S0377221712008028
Griffiths, J.D.
Williams, J.E.
Wood, R.M.
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:277-285 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:277-285
article
p-Hub approach for the optimal park-and-ride facility location problem
Park and Ride facilities (P&R) are car parks at which users can transfer to public transportation to reach their final destination. We propose a mixed linear programming formulation to determine the location of a fixed number of P&R facilities so that their usage is maximized. The facilities are modeled as hubs. Commuters can use one of the P&R facilities or choose to travel by car to their destinations, and their behavior follows a logit model. We apply a p-hub approach considering that users incur a known generalized cost of using each P&R facility, which serves as input for the logit model. For small instances of the problem, we propose a novel linearization of the logit model, which allows transforming the binary nonlinear programming problem into a mixed linear programming formulation. A modification of the Heuristic Concentration Integer (HCI) procedure is applied to solve larger instances of the problem. Numerical experiments are performed, including a case in Queens, NY. Further research is proposed.
Location; Park and Ride; p-Hub; Logit model; Heuristic concentration integer;
http://www.sciencedirect.com/science/article/pii/S0377221712008223
Aros-Vera, Felipe
Marianov, Vladimir
Mitchell, John E.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:602-614 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:602-614
article
A Markovian queueing model for ambulance offload delays
Ambulance offload delays are a growing concern for health care providers in many countries. Offload delays occur when ambulance paramedics arriving at a hospital Emergency Department (ED) cannot transfer patient care to staff in the ED immediately. This is typically caused by overcrowding in the ED. Using queueing theory, we model the interface between a regional Emergency Medical Services (EMS) provider and multiple EDs that serve both ambulance and walk-in patients. We introduce Markov chain models for the system and solve for the steady state probability distributions of queue lengths and waiting times using matrix-analytic methods. We develop several algorithms for computing performance measures for the system, particularly the offload delays for ambulance patients. Using these algorithms, we analyze several three-hospital systems and assess the impact of system resources on offload delays. In addition, simulation is used to validate model assumptions.
Queueing theory; Matrix-analytic method; Ambulance offload delay; Priority queues;
http://www.sciencedirect.com/science/article/pii/S037722171200882X
Almehdawe, Eman
Jewkes, Beth
He, Qi-Ming
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:76-80 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:76-80
article
Single-machine scheduling problems with actual time-dependent and job-dependent learning effect
In this study, we introduce an actual time-dependent and job-dependent learning effect into single-machine scheduling problems. We show that the makespan minimization problem and the sum of weighted completion times minimization problem are NP-hard, and that the maximum lateness minimization problem is NP-hard in the strong sense. We also provide three special cases which can be solved by polynomial-time algorithms.
Scheduling; Learning effect; Actual time-dependent; Job-dependent; NP-hard;
http://www.sciencedirect.com/science/article/pii/S0377221712009381
Jiang, Zhongyi
Chen, Fangfang
Kang, Huiyan
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:142-151 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:142-151
article
An arc cover–path-cover formulation and strategic analysis of alternative-fuel station locations
In this study, we present a new formulation of the generalized flow-refueling location model that takes vehicle range and trips between origin–destination pairs into account. The new formulation, based on covering the arcs that comprise each path, is more computationally efficient than previous formulations or heuristics. Next, we use the new formulation to provide managerial insights into some key concerns of the industry, such as whether infrastructure deployment should focus on locating clusters of facilities serving independent regions or on connecting these regions by a network of facilities; what the impact of uncertainty in the origin–destination demand forecast is; whether station locations will remain optimal as higher-range vehicles are introduced; and whether infrastructure developers should be willing to pay more for stations at higher-cost intersections. Experiments with real and random data sets are encouraging for the industry, as optimal locations tend to be robust under various conditions.
Flow refueling; Alternative-fuel vehicle; Electric vehicle; Fuel station location; Fueling infrastructure;
http://www.sciencedirect.com/science/article/pii/S0377221712008855
Capar, Ismail
Kuby, Michael
Leon, V. Jorge
Tsai, Yu-Jiun
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:62-75 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:62-75
article
Using matrix approximation for high-dimensional discrete optimization problems: Server consolidation based on cyclic time-series data
We consider the assignment of enterprise applications in virtual machines to physical servers, also known as server consolidation problem. Data center operators try to minimize the number of servers, but at the same time provide sufficient computing resources at each point in time. While historical workload data would allow for accurate workload forecasting and optimal allocation of enterprise applications to servers, the volume of data and the large number of resulting capacity constraints in a mathematical problem formulation renders this task impossible for any but small instances. We use singular value decomposition (SVD) to extract significant features from a large constraint matrix and provide a new geometric interpretation of these features, which allows for allocating large sets of applications efficiently to physical servers with this new formulation. While SVD is typically applied for purposes such as time series decomposition, noise filtering, or clustering, in this paper features are used to transform the original allocation problem into a low-dimensional integer program with only the extracted features in a much smaller constraint matrix. We evaluate the approach using workload data from a large data center and show that it leads to high solution quality, but at the same time allows for solving considerably larger problem instances than what would be possible without data reduction and model transform. The overall approach could also be applied to similar packing problems in service operations management.
Matrix approximation; Multi-dimensional packing; Server consolidation; Dimensionality reduction;
http://www.sciencedirect.com/science/article/pii/S0377221712009368
Setzer, Thomas
Bichler, Martin
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:30-43 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:30-43
article
Robust counterparts of inequalities containing sums of maxima of linear functions
This paper addresses the robust counterparts of optimization problems containing sums of maxima of linear functions. These problems include many practical problems, e.g. problems with sums of absolute values, and arise when taking the robust counterpart of a linear inequality that is affine in the decision variables, affine in a parameter with box uncertainty, and affine in a parameter with general uncertainty.
Robustness and sensitivity analysis; Sum of maxima of linear functions; Biaffine uncertainty; Robust conic quadratic constraints;
http://www.sciencedirect.com/science/article/pii/S0377221712007345
Gorissen, Bram L.
den Hertog, Dick
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:258-267 2013-02-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:258-267
article
Super efficiencies or super inefficiencies? Insights from a joint computation model for slacks-based measures in DEA
The slacks-based measure (SBM) can incorporate input and output slacks that would otherwise be neglected in the classical DEA model. In parallel, the super-efficiency model for SBM (S-SBM) has been developed for the purpose of ranking SBM-efficient decision-making units (DMUs). When implementing SBM in conjunction with S-SBM, however, several issues can arise. First, unlike the standard super-efficiency model, S-SBM can only solve for super-efficiency scores but not SBM scores. Second, the S-SBM model may result in weakly efficient reference points. Third, the S-SBM and SBM scores for certain DMUs may be discontinuous under a perturbation to their inputs and outputs, making the scores hard to interpret and justify in applications; the efficiency scores may also be sensitive to small changes or errors in the data. Due to this discontinuity, the S-SBM model may overestimate the super-efficiency score. This paper extends the existing SBM approaches and develops a joint model (J-SBM) that addresses the above issues; namely, the J-SBM model can (1) simultaneously compute SBM scores for inefficient DMUs and super-efficiency scores for efficient DMUs, (2) guarantee that the reference points generated by the joint model are Pareto-efficient, and (3) ensure that the J-SBM scores of a firm are continuous in the input and output space. Interestingly, the radial DEA efficiency and super-efficiency scores for a DMU are continuous in the input–output space. The J-SBM model thus combines the merits of the radial and SBM models (i.e., continuity and Pareto-efficiency).
Data envelopment analysis; Slacks-based measure; Super-efficiency; Pareto-efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221712007965
Chen, Chien-Ming
oai:RePEc:eee:ejores:v:227:y:2013:i:1:p:174-1812013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:227:y:2013:i:1:p:174-181
article
Estimating technical and allocative efficiency in the public sector: A nonparametric analysis of Dutch schools
Public sector output provision is influenced not only by discretionary inputs but also by exogenous environmental factors. In this paper, we extend the literature by developing a conditional DEA estimator of allocative efficiency that allows a decomposition of overall cost efficiency into allocative and technical components while simultaneously controlling for the environment. We apply the model to analyze the technical and allocative efficiency of Dutch secondary schools. The results reveal that allocative efficiency represents a significant 37 percent of overall cost efficiency on average, although technical inefficiency is still the dominant part. Furthermore, the results show that the impact of the environment differs greatly between schools and that having a more unfavorable environment is very costly for schools. These results highlight the importance of including environmental variables in both technical and allocative efficiency analysis.
Data envelopment analysis; Education; Allocative efficiency;
http://www.sciencedirect.com/science/article/pii/S0377221712009356
Haelermans, Carla
Ruggiero, John
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:536-5502013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:536-550
article
Global sensitivity measures from given data
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data) at minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) for assessing the risk of lunar space missions. We also introduce a symmetry result that extends the estimation of global sensitivity measures to datasets produced outside a conventional input–output functional framework.
Uncertainty analysis; Global sensitivity analysis; Simulation;
http://www.sciencedirect.com/science/article/pii/S0377221712008995
Plischke, Elmar
Borgonovo, Emanuele
Smith, Curtis L.
oai:RePEc:eee:ejores:v:226:y:2013:i:3:p:507-5152013-02-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:3:p:507-515
article
Network DEA pitfalls: Divisional efficiency and frontier projection under general network structures
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently, network DEA models have been developed to examine the efficiency of DMUs with internal structures. The internal network structures range from a simple two-stage process to a complex system where multiple divisions are linked together with intermediate measures. In general, there are two types of network DEA models. One is developed under the standard multiplier DEA models based upon the DEA ratio efficiency, and the other under the envelopment DEA models based upon production possibility sets. While the multiplier and envelopment DEA models are dual models and equivalent under standard DEA, the same is not necessarily true for the two types of network DEA models. Pitfalls in network DEA are discussed with respect to the determination of divisional efficiency, frontier type, and projections. We point out that the envelopment-based network DEA model should be used for determining the frontier projection for inefficient DMUs, while the multiplier-based network DEA model should be used for determining the divisional efficiency. Finally, we demonstrate that under general network structures, the multiplier and envelopment network DEA models are two different approaches. The divisional efficiency obtained from the multiplier network DEA model can be infeasible in the envelopment network DEA model. This indicates that these two types of network DEA models use different concepts of efficiency. We further demonstrate that the envelopment model's divisional efficiency may actually be the overall efficiency.
Data envelopment analysis (DEA); Efficiency; Network; Intermediate measure; Link; Frontier;
http://www.sciencedirect.com/science/article/pii/S0377221712008697
Chen, Yao
Cook, Wade D.
Kao, Chiang
Zhu, Joe
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:789-7902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:789-790
article
Book Review by Etienne de Klerk “An Introduction to Polynomial and Semi-Algebraic Optimization” by Jean-Bernard Lasserre, Cambridge University Press, 2015.
http://www.sciencedirect.com/science/article/pii/S0377221715009170
de Klerk, Etienne
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:328-3412015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:328-341
article
Are targets for renewable portfolio standards too low? The impact of market structure on energy policy
In order to limit climate change from greenhouse gas emissions, governments have introduced renewable portfolio standards (RPS) to incentivise renewable energy production. While the response of industry to exogenous RPS targets has been addressed in the literature, setting RPS targets from a policymaker’s perspective has remained an open question. Using a bi-level model, we prove that the optimal RPS target for a perfectly competitive electricity industry is higher than that for a benchmark centrally planned one. Allowing for market power by the non-renewable energy sector within a deregulated industry lowers the RPS target vis-à-vis perfect competition. Moreover, to our surprise, social welfare under perfect competition with RPS is lower than that when the non-renewable energy sector exercises market power. In effect, by subsidising renewable energy and taxing the non-renewable sector, RPS represents an economic distortion that over-compensates damage from emissions. Thus, perfect competition with RPS results in “too much” renewable energy output, whereas the market power of the non-renewable energy sector mitigates this distortion, albeit at the cost of lower consumer surplus and higher emissions. Hence, ignoring the interaction between RPS requirements and the market structure could lead to sub-optimal RPS targets and substantial welfare losses.
OR in environment and climate change; Renewable portfolio standards; Bi-level modelling; Market power;
http://www.sciencedirect.com/science/article/pii/S0377221715009893
Siddiqui, Afzal S.
Tanaka, Makoto
Chen, Yihsu
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:131-1422015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:131-142
article
Outsource planning through option contracts with demand and cost uncertainty
This research considers a supply chain consisting of one supplier and one manufacturer that produces a product, e.g. an innovative product, with a long supply lead-time, a short selling season and stochastic demand. Complete production of the final product requires both an initial and a final processing operation. The manufacturer performs the initial processing operation at a deterministic cost. The final processing operation may be either performed by the manufacturer or assigned to an outside firm through a bid process. At the time of the supply contract, the final processing cost (FPC) is estimated as a stochastic variable. The uncertainty about the FPC is resolved before the selling season starts. The present study determines how the manufacturer should place supply orders within the framework of wholesale price, put, call and bidirectional options. Option contracts provide the manufacturer with the flexibility to adjust his initial orders by exercising purchased options after the FPC is realized. We find the optimal exercised orders under each option contract, in addition to the equations that the optimal initial and option orders satisfy. According to our analysis, if the realized FPC is higher (lower) than a specific level, the manufacturer should decrease (increase) his initial orders. We obtain these specific levels analytically under all types of option contracts. The numerical analysis and managerial insights shed light on the value of option contracts under different parameter settings.
Supply chain management; Outsource planning; Option contract; Demand/Cost uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171500956X
Nosoohi, Iman
Nookabadi, Ali Shahandeh
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:853-8632015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:853-863
article
The effects of integrating management judgement into OUT levels: In or out of context?
Physical inventories constitute a significant proportion of companies' investments in today's competitive environment. The trade-off between customer service levels and inventory reserves is addressed in practice by statistical inventory software solutions; given the tremendous number of Stock Keeping Units (SKUs) that contemporary organisations deal with, such solutions are fully automated. However, empirical evidence suggests that managers habitually judgementally adjust the output of such solutions, such as replenishment orders or re-order levels. This research is concerned with the value being added, or not, when statistically derived inventory-related decisions (Order-Up-To (OUT) levels in particular) are judgementally adjusted. We aim to develop our current understanding of the effects of incorporating human judgement into inventory decisions; to our knowledge such effects do not appear to have been studied empirically before, and this is the first endeavour to do so. A number of research questions are examined and a simulation experiment is performed, using an extended database of approximately 1800 SKUs from the electronics industry, in order to evaluate human judgement effects. The linkage between adjustments and their justification is also evaluated; given the apparent lack of comprehensive empirical evidence in this area, including the field of demand forecasting, this is a contribution in its own right. Insights are offered to academics (to facilitate further research in this area), practitioners (to enable more constructive intervention into statistical inventory solutions), and software developers (to consider the interface with human decision makers).
Judgemental adjustments; Inventory management; Behavioural operations;
http://www.sciencedirect.com/science/article/pii/S0377221715006542
Syntetos, Aris A.
Kholidasari, Inna
Naim, Mohamed M.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:691-7052015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:691-705
article
Capacity market design options: A dynamic capacity investment model and a GB case study
Rising feed-in from renewable energy sources decreases margins, load factors, and thereby profitability of conventional generation in several electricity markets around the world. At the same time, conventional generation is still needed to ensure security of electricity supply. Therefore, capacity markets are currently being widely discussed as a measure to ensure generation adequacy in markets such as France, Germany, and the United States (e.g., Texas), or even implemented for example in Great Britain.
Capacity mechanism; Capacity market; Dynamic capacity investment model; Generation adequacy; Conventional electricity generation investment; Renewable energy sources;
http://www.sciencedirect.com/science/article/pii/S0377221715007894
Hach, Daniel
Chyong, Chi Kong
Spinler, Stefan
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1161-11682015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1161-1168
article
Investment and financing for SMEs with a partial guarantee and jump risk
We consider a small or medium-sized enterprise (SME) with a funding gap that intends to invest in a project whose cash flow follows a double exponential jump-diffusion process. In contrast to traditional corporate finance theory, we assume the SME is unable to obtain a loan directly from a bank, and hence it enters into a partial guarantee agreement with an insurer and a lender. Utilizing a real options approach, we develop an investment and financing model with a partial guarantee. We explicitly derive the pricing and timing of the option to invest. We find that if the funding gap rises, the option value decreases, but the investment threshold first declines and then increases. The larger the guarantee level, the lower the option value and the later the investment. The optimal coupon rate decreases with project risk, and a higher guarantee level can effectively reduce agency conflicts.
Finance; Investment analysis; Guarantee level; Real options; Double exponential jump-diffusion process;
http://www.sciencedirect.com/science/article/pii/S0377221715008747
Luo, Pengfei
Wang, Huamao
Yang, Zhaojun
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:540-5502015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:540-550
article
A variable neighborhood search for the multi-period collection of recyclable materials
We consider an approach for scheduling the multi-period collection of recyclable materials. Citizens can deposit glass and paper for recycling in small cubes located at several collection points. The cubes are emptied by a vehicle that carries two containers and the material is transported to two treatment facilities. We investigate how the scheduling of emptying and transportation should be done in order to minimize the operation cost, while providing a high service level and ensuring that capacity constraints are not violated. We develop a heuristic solution method for solving the daily planning problem with uncertain accretion rate for materials by considering a rolling time horizon of a few days. We apply a construction heuristic in the first period and re-optimize the solution every subsequent period with a variable neighborhood search. Computational experiments are conducted on real life data.
Inventory routing problem; Multi-period routing; Multi-compartment vehicle; Rolling time horizon; Waste management;
http://www.sciencedirect.com/science/article/pii/S0377221715007900
Elbek, Maria
Wøhlk, Sanne
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:657-6662015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:657-666
article
Asymptotic behaviors of stochastic reserving: Aggregate versus individual models
In this paper, we investigate the asymptotic behaviors of the loss reservings computed by the individual data method and by its aggregate data versions based on the Chain-Ladder (CL) and Bornhuetter–Ferguson (BF) algorithms. It is shown that all deviations of the three reservings from the individual loss reserve (the projection of the outstanding liability on the individual data) converge weakly to a zero-mean normal distribution at the √n rate. The analytical forms of the asymptotic variances are derived and compared by both analytical and numerical examples. The results show that the individual method has the smallest asymptotic variance, followed by the BF algorithm, and the CL algorithm has the largest asymptotic variance.
Risk management; Stochastic reserving; Individual data model; Aggregate data model; Asymptotic variance;
http://www.sciencedirect.com/science/article/pii/S0377221715008814
Huang, Jinlong
Wu, Xianyi
Zhou, Xian
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:827-8412015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:827-841
article
Behavioural operational research: Towards a framework for understanding behaviour in OR interventions
Stimulated by the growing interest in behavioural issues in the management sciences, research scholars have begun to address the implications of behavioural insights for Operational Research (OR). This work reviews some foundational debates on the nature of OR to serve as a theoretical backdrop for a discussion of a behavioural perspective on OR. The paper addresses a specific research need by outlining the distinct and complementary contribution of a behavioural perspective to OR. However, there is a need to build a theoretical base in which the insights from classical behavioural research are just one of a number of convergent building blocks that together point towards a compelling basis for behavioural OR. In particular, the focus of the paper is a framework that highlights the collective nature of OR practice and provides a distinct and interesting line of enquiry for future research.
Behavioural OR; Process of OR; Philosophy of OR; Collective behaviour;
http://www.sciencedirect.com/science/article/pii/S0377221715006657
White, Leroy
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1139-11432015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1139-1143
article
Dual cone approach to convex-cone dominance in multiple criteria decision making
In a paper published in Management Science in 1984, Korhonen, Wallenius, and Zionts presented an idea and a method based on convex-cone dominance in the discrete Multiple Criteria Decision Making framework. In the current paper, we revisit this idea from a new standpoint and provide the mathematical theory leading to a dual-cone-based approach to solving such problems. Our results make the old results computationally more tractable and also help extend the theory.
Multiple criteria analysis; Convex-cone dominance; Duality; Linear programming; Dual/Polar cone;
http://www.sciencedirect.com/science/article/pii/S0377221715008851
Korhonen, Pekka
Soleimani-damaneh, Majid
Wallenius, Jyrki
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1092-11012015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1092-1101
article
A branch-and-cut algorithm for the profitable windy rural postman problem
In this paper we study the profitable windy rural postman problem. This is an arc routing problem with profits defined on a windy graph in which there is a profit associated with some of the edges of the graph, consisting of finding a route maximizing the difference between the total profit collected and the total cost. This problem generalizes the rural postman problem and other well-known arc routing problems and has real-life applications, mainly in snow removal operations. We propose here a formulation for the problem and study its associated polyhedron. Several families of facet-inducing inequalities are described and used in the design of a branch-and-cut procedure. The algorithm has been tested on a large set of benchmark instances and compared with other existing algorithms. The results obtained show that the branch-and-cut algorithm is able to solve large-sized instances optimally in reasonable computing times.
Windy rural postman problem; Arc routing; Profits; Branch-and-cut algorithm; Polyhedron;
http://www.sciencedirect.com/science/article/pii/S0377221715009236
Ávila, Thais
Corberán, Ángel
Plana, Isaac
Sanchis, José M.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:179-1912015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:179-191
article
A new method for elicitation of criteria weights in additive models: Flexible and interactive tradeoff
This paper proposes the Flexible and Interactive Tradeoff (FITradeoff) method for eliciting the scaling constants or weights of criteria. FITradeoff uses partial information about decision maker (DM) preferences to determine the most preferred alternative in a specified set, according to an additive model within the scope of MAVT (Multi-Attribute Value Theory). The method uses the concept of flexible elicitation to improve the applicability of the traditional tradeoff elicitation procedure. FITradeoff offers two main benefits: the information required from the DM is reduced, and the DM does not have to make adjustments to reach indifference between two consequences (trade-offs), which is a critical issue in the traditional tradeoff procedure. It is easier for the DM to compare consequences (or outcomes) based on strict preference rather than on indifference. The method is built into a decision support system and applied to two cases of supplier selection already published in the literature.
Multiple criteria analysis; MAVT additive model; Flexible elicitation; Interactive elicitation; Tradeoff elicitation;
http://www.sciencedirect.com/science/article/pii/S0377221715008140
de Almeida, Adiel Teixeira
de Almeida, Jonatas Araujo
Costa, Ana Paula Cabral Seixas
de Almeida-Filho, Adiel Teixeira
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1082-10912015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1082-1091
article
Inventory performance under staggered deliveries and autocorrelated demand
Production plans often span a whole week or month, even when independent production lots are completed every day and service performance is tallied daily. Such policies are said to use staggered deliveries, meaning that the production rates for multiple days are determined at a single point in time. Assuming autocorrelated demand, and linear inventory holding and backlog costs, we identify the optimal replenishment policy for order cycles of length P. With the addition of a once-per-cycle audit cost, we optimize the order cycle length P* via an inverse-function approach. In addition, we characterize periodic inventory costs, availability, and fill rate. As a consequence of staggering deliveries, the inventory level becomes cyclically heteroskedastic. This manifests itself as ripples in the expected cost and service levels. Nevertheless, the cost-optimal replenishment policy achieves a constant availability by using time-varying safety stocks; this is not the case with suboptimal constant safety stock policies, where the availability fluctuates over the cycle.
Inventory; Autoregressive demand; Order-up-to-policy; Staggered deliveries; Planning cycles;
http://www.sciencedirect.com/science/article/pii/S0377221715009029
Hedenstierna, Carl Philip T.
Disney, Stephen M.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:717-7272015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:717-727
article
Incorporation of activity sensitivity measures into buffer management to manage project schedule risk
Critical Chain Scheduling and Buffer Management (CC/BM) has been shown to provide an effective approach for building robust project schedules and to offer a valuable control tool for coping with schedule variability. Yet, the current buffer monitoring mechanism neglects the dynamic nature of project execution and the related activity information when taking corrective actions. The schedule risk analysis (SRA) method in a traditional PERT framework, on the other hand, provides important information about relative activity criticality in relation to the project duration, which can focus management attention; its control actions, however, are independent of the current project schedule performance. This paper addresses these shortcomings of both tracking methods and proposes a new project schedule monitoring framework in which the activity cruciality index, integrated into the buffer monitoring process, serves as a trigger for effective expediting. Furthermore, dynamic action threshold settings that depend on the project progress as well as on the buffer penetration are presented and examined in order to obtain a more accurate control system. Our computational experiment demonstrates the relative dominance of the integrated schedule monitoring methods over the predominant buffer management approach: they generate better control actions with less effort and increased tracking efficiency, especially when an increasing buffer trigger point is combined with decreasing sensitivity action threshold values.
Buffer management; Schedule monitoring; Activity sensitivity; Schedule risk; Action threshold;
http://www.sciencedirect.com/science/article/pii/S0377221715008243
Hu, Xuejun
Cui, Nanfang
Demeulemeester, Erik
Bie, Li
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:864-8772015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:864-877
article
Developing and validating the multidimensional proactive decision-making scale
On the basis of an extensive interdisciplinary literature review, proactive decision-making (PDM) is conceptualised as a multidimensional concept. We conduct five studies with over 4000 participants from various countries to develop and validate a theoretically consistent and psychometrically sound scale of PDM. The PDM concept is developed and appropriate items are derived from the literature. Six dimensions are conceptualised: the four proactive cognitive skills 'systematic identification of objectives', 'systematic search for information', 'systematic identification of alternatives', and 'using a decision radar', and the two proactive personality traits 'showing initiative' and 'striving for improvement'. Using principal component factor analysis and subsequent item analysis as well as confirmatory factor analysis, six conceptually distinct dimensional factors are identified and shown to be acceptably reliable and valid. Our results are remarkably similar for individuals who are decision-makers, decision analysts, both, or neither, with different levels of experience. There is strong evidence that individuals with high scores in a PDM factor, e.g. proactive cognitive skills or personality traits, show significantly higher decision satisfaction. Thus, the PDM scale can be used in future research to analyse other concepts. Furthermore, the scale can be applied, e.g. by staff teams, to work on OR problems effectively or to inform a decision analyst about the decision behaviour in an organisation.
Behavioural OR; Decision analysis; Problem structuring;
http://www.sciencedirect.com/science/article/pii/S0377221715005998
Siebert, Johannes
Kunz, Reinhard
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:959-9672015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:959-967
article
The decoy effect in relative performance evaluation and the debiasing role of DEA
There is overwhelming evidence that performance ratings and evaluations are context dependent. A special case of such context effects is the decoy effect, which implies that the inclusion of a dominated alternative can influence the preference for non-dominated alternatives. Adapting the well-known experimental setting from the area of consumer behavior to the performance evaluation context of Data Envelopment Analysis (DEA), an experiment was conducted. The results show that adding a dominated decision making unit (DMU) to the set of DMUs augments the attractiveness of certain dominating DMUs, and that DEA efficiency scores discriminating between efficient and inefficient DMUs serve as an appropriate debiasing procedure. Mentioning the existence of slacks to distinguish between strongly and weakly efficient DMUs also contributes to reducing the decoy effect, but it is also associated with other unexpected effects.
Behavioral OR; Decoy effect; Data Envelopment Analysis; Debiasing procedure; Performance measurement;
http://www.sciencedirect.com/science/article/pii/S0377221715006797
Ahn, Heinz
Vazquez Novoa, Nadia
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:899-9072015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:899-907
article
Experimental behavioural research in operational research: What we know and what we might come to know
There is a long-standing, but thin, stream of experimental behavioural research into understanding how modellers within operational research (OR) behave when constructing models, and how individuals use such models to make decisions. Such research aims to better understand the modelling process, using empirical studies to construct a body of knowledge. Drawing upon this research, and experimental behavioural research in associated research areas, this paper aims to summarise the current body of knowledge. It suggests that we have some experimentally generated findings concerning the construction of models, model usage, the impact of model visualisation, and the effect (or lack thereof) of cognitive style on decision quality. The paper also considers how experiments have been operationalised, and particularly the problem of the dependent variable in such research (that is, which beneficial outputs can be measured in an experiment). It concludes with a consideration of what we might come to know through future experimental behavioural research, suggesting a more inclusive approach to experimental design.
Behavioural OR; Decision processes; Decision Support Systems;
http://www.sciencedirect.com/science/article/pii/S0377221715008693
O'Keefe, Robert M.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1074-10812015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1074-1081
article
Complexity results for storage loading problems with stacking constraints
In this paper, we present complexity results for storage loading problems where the storage area is organized in fixed stacks with a limited common height. Such problems appear in several practical applications, e.g., in the context of container terminals, container ships or warehouses. Incoming items arriving at a storage area have to be assigned to stacks so that certain constraints are respected (e.g., not every item may be stacked on top of every other item). We study structural properties of the general model and special cases where at most two or three items can be stored in each stack. Besides providing polynomial time algorithms for some of these problems, we establish the boundary to NP-hardness.
Storage loading; Stacking; Complexity; Stacking constraints;
http://www.sciencedirect.com/science/article/pii/S0377221715008784
Bruns, Florian
Knust, Sigrid
Shakhlevich, Natalia V.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1063-10732015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1063-1073
article
On the role of psychological heuristics in operational research; and a demonstration in military stability operations
Psychological heuristics are formal models for making decisions that (i) rely on core psychological capacities (e.g., recognizing patterns or recalling information from memory), (ii) do not necessarily use all available information, and process the information they use by simple computations (e.g., ordinal comparisons or unweighted sums), and (iii) are easy to understand, apply and explain. The contribution of this article is fourfold. First, the conceptual foundation of the psychological heuristics research program is provided, along with a discussion of its relationship to soft and hard OR. Second, empirical evidence and theoretical analyses are presented on the conditions under which psychological heuristics perform on par with or even better than more complex standard models in decision problems such as multi-attribute choice, classification, and forecasting, and in domains as varied as health, economics and management. Third, we demonstrate the application of the psychological heuristics approach to the problem of reducing civilian casualties in military stability operations. Finally, we discuss the role that psychological heuristics can play in OR theory and practice.
Behavioural OR; Bounded rationality; Heuristics; Decision analysis; Forecasting;
http://www.sciencedirect.com/science/article/pii/S0377221715006566
Keller, Niklas
Katsikopoulos, Konstantinos V.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:407-4162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:407-416
article
Lending decisions with limits on capital available: The polygamous marriage problem
In order to stimulate or subdue the economy, banking regulators have sought to impose caps or floors on individual banks' lending to certain types of borrowers. This paper shows that the resultant decision problem for a bank, of which potential borrowers to accept, is a variant of the marriage/secretary problem in which one can accept several applicants. The paper solves the decision problem using dynamic programming. We give results on the form of the optimal lending policy and counterexamples to further “reasonable” conjectures that do not hold in the general case. By solving numerical examples we show the potential loss of profit and the inconsistency in the lending decisions that are caused by introducing floors and caps on lending. The paper also describes some other situations where the same decision problem occurs.
Dynamic programming; Markov processes; Consumer credit lending;
http://www.sciencedirect.com/science/article/pii/S0377221715000673
So, Mee Chi
Thomas, Lyn C.
Huang, Bo
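The sequential accept/reject structure described in the abstract above can be illustrated with a small backward-induction dynamic program (a generic sketch assuming i.i.d. applicant values drawn from a known discrete distribution; the paper's actual model and notation are not reproduced here):

```python
def optimal_expected_value(n, k, values, probs):
    """Backward-induction DP for accepting up to k of n sequentially
    observed applicants. V[j][c] is the maximal expected total value
    with j applicants remaining and c acceptance slots remaining.
    On seeing value v, accept iff v + V[j-1][c-1] >= V[j-1][c]."""
    V = [[0.0] * (k + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for c in range(1, k + 1):
            V[j][c] = sum(
                p * max(v + V[j - 1][c - 1], V[j - 1][c])
                for v, p in zip(values, probs)
            )
    return V[n][k]
```

For two applicants, one slot, and values 0 or 1 with equal probability, the optimal policy accepts the first applicant only if its value is 1, giving an expected value of 0.75.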
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:192-2032015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:192-203
article
Opening the technological innovation black box: The case of the electronics industry in Korea
In this system dynamics simulation study, we analyze a series of causal feedback relationships wherein R&D investments create new knowledge stocks, the increasing stock of technological knowledge triggers interactions among the entities of technological innovation, and these interactions lead to firm profits through the commercialization process.
R&D investment; Technological innovation; Investment portfolio; Product portfolio complexity; Product architecture complexity;
http://www.sciencedirect.com/science/article/pii/S0377221715008103
Choi, Kanghwa
Narasimhan, Ram
Kim, Soo Wook
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:262-2722015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:262-272
article
Performance measurement with multiple interrelated variables and threshold target levels: Evidence from retail firms in the US
In this study, we developed a DEA–based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance to construct appropriate production frontiers. The production frontiers we obtained are deemed appropriate as they consist solely of firms with desirable levels for all dimensions of performance. These levels should be at least equal to the critical values set by decision makers. The properties and advantages of our methodology against competing methodologies are presented through an application to a real-world case study from retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches to define appropriate production frontiers when directly or inversely related dimensions of performance are present and to express the interrelationships between the dimensions of performance.
OR in service industries; Performance management; Data envelopment analysis; Balanced Scorecard;
http://www.sciencedirect.com/science/article/pii/S0377221715008115
Zervopoulos, Panagiotis D.
Brisimi, Theodora S.
Emrouznejad, Ali
Cheng, Gang
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:155-1632015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:155-163
article
Constant approximation algorithms for the one warehouse multiple retailers problem with backlog or lost-sales
We consider the One Warehouse Multi-Retailer (OWMR) problem with deterministic time-varying demand in the case where shortages are allowed. Demand may be either backlogged or lost. We present a simple combinatorial algorithm to build an approximate solution from a decomposition of the system into single-echelon subproblems. We establish that the algorithm has a performance guarantee of 3 for the OWMR with backlog under mild assumptions on the cost structure. In addition, we improve this guarantee to 2 in the special case of the Joint-Replenishment Problem (JRP) with backlog. As a by-product of our approach, we show that our decomposition provides a new lower bound of the optimal cost. A similar technique also leads to a 2-approximation for the OWMR problem with lost-sales. In all cases, the complexity of the algorithm is linear in the number of retailers and quadratic in the number of time periods, which makes it a valuable tool for practical applications. To the best of our knowledge, these are the first constant approximations for the OWMR with shortages.
Approximation algorithms; Lot-sizing; Inventory control; Distribution systems;
http://www.sciencedirect.com/science/article/pii/S0377221715009807
Gayon, J.-P.
Massonnet, G.
Rapine, C.
Stauffer, G.
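The single-echelon subproblems that such decompositions produce are uncapacitated lot-sizing problems; a minimal Wagner-Whitin-style dynamic program (an illustrative sketch of that building block, not the paper's approximation algorithm) looks like:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-echelon uncapacitated lot-sizing DP (Wagner-Whitin):
    C[t] = min over the last order period j <= t of
    C[j-1] + setup_cost + cost of carrying demand for periods j..t."""
    T = len(demand)
    C = [0.0] * (T + 1)  # C[t] = minimal cost covering periods 1..t
    for t in range(1, T + 1):
        best = float('inf')
        for j in range(1, t + 1):
            carry = sum(holding_cost * (i - j) * demand[i - 1]
                        for i in range(j, t + 1))
            best = min(best, C[j - 1] + setup_cost + carry)
        C[t] = best
    return C[T]
```

With demand (10, 10), ordering in both periods costs two setups, while a single order adds holding cost; the DP picks whichever is cheaper for the given parameters.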
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:525-5392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:525-539
article
Pricing derivatives with counterparty risk and collateralization: A fixed point approach
This paper studies a valuation framework for financial contracts subject to reference and counterparty default risks with collateralization requirement. We propose a fixed point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous PDEs, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask spreads.
Bilateral counterparty risk; Collateralization; Credit valuation adjustment; Fixed point method; Contraction mapping;
http://www.sciencedirect.com/science/article/pii/S0377221715005883
Kim, Jinbeom
Leung, Tim
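The contraction-mapping argument underlying the iterative valuation scheme can be illustrated in one dimension (a toy map with Lipschitz constant 0.5, not the paper's PDE iteration):

```python
def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Banach-style fixed point iteration: x_{n+1} = f(x_n).
    Converges geometrically when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction: the fixed point solves x = 0.5*x + 1, i.e. x = 2.
price = fixed_point(lambda x: 0.5 * x + 1.0, x0=0.0)
```

Each iteration halves the distance to the fixed point, mirroring how the paper's sequence of linear inhomogeneous PDE solutions converges to the fixed point price function.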
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:56-642015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:56-64
article
Continuous multifacility ordered median location problems
In this paper we propose a general methodology for solving a broad class of continuous multifacility location problems, in any dimension and with ℓτ-norms, via two different approaches: (1) a new second order cone mixed integer programming formulation and (2) a sequence of semidefinite programs that converges to the solution of the problem; each of these relaxed problems is solvable with SDP solvers in polynomial time.
Continuous multifacility location; Ordered median problems; Semidefinite programming; Second order cone programming;
http://www.sciencedirect.com/science/article/pii/S0377221715009911
Blanco, Víctor
Puerto, Justo
Ben-Ali, Safae El-Haj
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:487-4972015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:487-497
article
A new Mixture model for the estimation of credit card Exposure at Default
Using a large portfolio of historical observations on defaulted loans, we estimate Exposure at Default at the level of the obligor by estimating the outstanding balance of an account, not only at the time of default, but at any time over the entire loan period. We theorize that the outstanding balance on a credit card account at any time during the loan is a function of the spending by the borrower and is also subject to the credit limit imposed by the card issuer. The predicted value is modelled as a weighted average of the estimated balance and limit, with weights depending on how likely the borrower is to have a balance greater than the limit. The weights are estimated using a discrete-time repeated events survival model to predict the probability of an account having a balance greater than its limit. The expected balance and expected limit are estimated using two panel models with random effects. We are able to get predictions which, overall, are more accurate for outstanding balance, not only at the time of default, but at any time over the entire default loan period, than any other particular technique in the literature.
Risk management; Forecasting; Panel models; Survival models; Macroeconomic variables;
http://www.sciencedirect.com/science/article/pii/S037722171500908X
Leow, Mindy
Crook, Jonathan
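The weighted-average prediction described in the abstract can be sketched as follows (hypothetical inputs; in the paper the weight comes from a discrete-time survival model and the two components from panel models with random effects):

```python
def predicted_balance(est_balance, est_limit, p_over_limit):
    """EAD-style prediction: a weighted average of the estimated
    outstanding balance and the credit limit, weighted by the
    probability that the balance exceeds the limit (so the limit
    caps likely over-runs)."""
    return (1.0 - p_over_limit) * est_balance + p_over_limit * est_limit
```

For example, an estimated balance of 800, a limit of 1000, and a 25 percent chance of exceeding the limit yield a prediction of 850.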
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:647-6562015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:647-656
article
Dynamic mean-risk portfolio selection with multiple risk measures in continuous-time
While our society has begun to recognize the importance of balancing risk performance under different risk measures, the existing literature has confined its research to a static mean-risk framework. This paper represents the first attempt to incorporate multiple risk measures into dynamic portfolio selection. More specifically, we investigate the dynamic mean-variance-CVaR (Conditional Value at Risk) formulation and the dynamic mean-variance-SFP (Safety-First Principle) formulation in a continuous-time setting, and derive the analytical solutions for both problems. Combining a downside risk measure with the variance (the second order central moment) in a dynamic mean-risk portfolio selection model helps investors control both a symmetric central risk measure and an asymmetric catastrophic downside risk. We find that the optimal portfolio policy derived from our mean-multiple risk portfolio optimization models exhibits a curved V-shape. Our numerical experiments using real market data clearly demonstrate the dominance of our dynamic mean-multiple risk portfolio policies over the static buy-and-hold portfolio policy.
Dynamic mean-risk portfolio selection; Conditional value at risk; Safety-first principle; Stochastic optimization; Martingale approach;
http://www.sciencedirect.com/science/article/pii/S0377221715008292
Gao, Jianjun
Xiong, Yan
Li, Duan
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1014-10232015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1014-1023
article
Elementary modelling and behavioural analysis for emergency evacuations using social media
Social media usage in evacuations and emergency management represents a rapidly expanding field of study. Our paper thus provides quantitative insight into a serious practical problem. Within this context a behavioural approach is key. We discuss when facilitators should consider model-based interventions amid further implications for disaster communication and emergency management. We model the behaviour of individual people by deriving optimal contrarian strategies. We formulate a Bayesian algorithm which enables the optimal evacuation to be conducted sequentially under worsening conditions.
Behavioural OR; OR in disaster relief; Decision analysis; Humanitarian logistics; Modelling;
http://www.sciencedirect.com/science/article/pii/S0377221715004397
Fry, John
Binner, Jane M.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1131-11382015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1131-1138
article
R&D for green technologies in a dynamic oligopoly: Schumpeter, Arrow and inverted-U’s
We extend a well-known differential oligopoly game to encompass the possibility for production to generate a negative environmental externality, regulated through Pigouvian taxation and price caps. We show that, if the price cap is set so as to fix the tolerable maximum amount of emissions, the resulting equilibrium investment in green R&D is indeed concave in the structure of the industry. Our analysis appears to indicate that inverted-U-shaped investment curves are generated by regulatory measures instead of being a ‘natural’ feature of firms’ decisions.
Dynamic games; Oligopoly; Environmental externality; R&D;
http://www.sciencedirect.com/science/article/pii/S0377221715008498
Feichtinger, Gustav
Lambertini, Luca
Leitmann, George
Wrzaczek, Stefan
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:305-3132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:305-313
article
Venture capital, staged financing and optimal funding policies under uncertainty
Our paper presents a dynamic model of entrepreneurial venture financing under uncertainty based on option exercise games between an entrepreneur and a venture capitalist (VC). In particular, we analyze the impact of multi-staged financing and both economic and technological uncertainty on optimal contracting in the context of VC-financing. Our novel approach combines compound option pricing with sequential non-cooperative contracting, allowing us to determine whether renegotiation will improve the probability of coming to an agreement and proceeding with the venture. It is shown that both sources of uncertainty positively impact the VC-investor's optimal equity share. Specifically, higher uncertainty leads to a larger stake in the venture, and renegotiation may result in a dramatic shift of control rights in the venture, preventing the venture from failing. Moreover, given ventures with low volatility, situations might occur where the VC-investor loses his first-mover advantage. Based on a comparative-static analysis, new testable hypotheses for further empirical studies are derived from the model.
Bargaining; Decision-making; Game theory; Real options; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221715009777
Lukas, Elmar
Mölls, Sascha
Welling, Andreas
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:842-8522015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:842-852
article
Do ‘big losses’ in judgmental adjustments to statistical forecasts affect experts’ behaviour?
The behaviour of poker players and sports gamblers has been shown to change after winning or losing a significant amount of money on a single hand. In this paper, we explore whether there are changes in experts’ behaviour when performing judgmental adjustments to statistical forecasts and, in particular, examine the impact of ‘big losses’. We define a big loss as a judgmental adjustment that significantly decreases the forecasting accuracy compared to the baseline statistical forecast. In essence, big losses are directly linked with wrong direction or highly overshooting judgmental overrides. Using relevant behavioural theories, we empirically examine the effect of such big losses on subsequent judgmental adjustments exploiting a large multinational data set containing statistical forecasts of demand for pharmaceutical products, expert adjustments and actual sales. We then discuss the implications of our findings for the effective design of forecasting support systems, focusing on the aspects of guidance and restrictiveness.
Forecasting; Judgment; Behavioural analytics; Decision support systems;
http://www.sciencedirect.com/science/article/pii/S037722171500497X
Petropoulos, Fotios
Fildes, Robert
Goodwin, Paul
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:476-4862015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:476-486
article
Modelling repayment patterns in the collections process for unsecured consumer debt: A case study
One approach to modelling Loss Given Default (LGD), the percentage of the defaulted amount of a loan that a lender will eventually lose, is to model the collections process. This is particularly relevant for unsecured consumer loans, where LGD depends both on a defaulter's ability and willingness to repay and on the lender's collection strategy. When repaying such defaulted loans, defaulters tend to oscillate between repayment sequences, where the borrower is repaying every period, and non-repayment sequences, where the borrower is not repaying in any period. This paper develops two models – one a Markov chain approach and the other a hazard rate approach – to model such payment patterns of debtors. It also looks at simplifications of the models where one assumes that, after a few repayment and non-repayment sequences, the parameters of the model are fixed for the remaining payment and non-payment sequences. One advantage of these approaches is that they show the impact of different write-off strategies. The models are applied to a real case study and the LGD for that portfolio is calculated under different write-off strategies and compared with the actual LGD results.
OR in banking; Payment patterns; Collection process; Markov chain models; Survival analysis models;
http://www.sciencedirect.com/science/article/pii/S0377221715008371
Thomas, Lyn C.
Matuszyk, Anna
So, Mee Chi
Mues, Christophe
Moore, Angela
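A two-state Markov chain over paying/non-paying periods gives a simplified sketch of the oscillating repayment pattern described in the abstract above (the transition probabilities and repayment amount are illustrative assumptions, not the paper's estimates):

```python
def expected_recovery(p_stay_paying, p_start_paying, repayment, horizon):
    """Two-state Markov chain (paying / not paying). Starting in the
    non-paying state, track the probability of being in the paying
    state each period and accumulate the expected amount repaid
    over `horizon` periods."""
    prob_paying = 0.0  # start in the non-paying state
    total = 0.0
    for _ in range(horizon):
        prob_paying = (prob_paying * p_stay_paying
                       + (1.0 - prob_paying) * p_start_paying)
        total += prob_paying * repayment
    return total
```

Varying the horizon at which accounts are written off shows how a write-off strategy changes the expected recovery, echoing the paper's comparison of write-off strategies.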
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:592-6042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:592-604
article
A disassembly line balancing problem with fixed number of workstations
In this study, a Disassembly Line Balancing Problem with a fixed number of workstations is considered. The product to be disassembled comprises various components, which are referred to as its parts. There is a specified finite supply of the product to be disassembled and specified minimum release quantities (possibly zero) for each part of the product. All units of the product are identical; however, different parts can be released from different units of the product. There is a finite number of identical workstations that perform the necessary disassembly operations, referred to as tasks. We present several upper and lower bounding procedures that assign the tasks to the workstations so as to maximize the total net revenue. The computational study has revealed that the procedures produce satisfactory results.
Integer programming; Heuristics; Disassembly lines; Linear programming relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221715008279
Kalaycılar, Eda Göksoy
Azizoğlu, Meral
Yeralan, Sencer
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:204-2132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:204-213
article
Solving hard control problems in voting systems via integer programming
Voting problems are central in the area of social choice. In this article, we investigate various voting systems and types of control of elections. We present integer linear programming (ILP) formulations for a wide range of NP-hard control problems. Our ILP formulations are flexible in the sense that they can work with an arbitrary number of candidates and voters. Using the off-the-shelf solver Cplex, we show that our approaches can manipulate elections with a large number of voters and candidates efficiently.
Voting system; Election model; Control problem; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715008085
Polyakovskiy, S.
Berghammer, R.
Neumann, F.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:214-2252015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:214-225
article
Hub location under competition
Hubs are consolidation and dissemination points in many-to-many flow networks. The hub location problem is to locate hubs among available nodes and allocate non-hub nodes to these hubs. The mainstream hub location studies focus on the optimal decisions of a single decision-maker with respect to some objective(s), even though the markets that benefit from hubbing are oligopolies. Therefore, in this paper, we propose a competitive hub location problem where the market is assumed to be a duopoly. Two decision-makers (or firms) sequentially decide the locations of their hubs, and then customers choose one firm with respect to the provided service levels. Each decision-maker aims to maximize his/her own market share. We propose two problems, for the leader (former decision-maker) and follower (latter decision-maker): the (r|Xp) hub-medianoid and (r|p) hub-centroid problems, respectively. Both problems are proven to be NP-complete. Linear programming models are presented for these problems, as well as exact solution algorithms for the (r|p) hub-centroid problem. The performance of the models and algorithms is tested by computational analysis conducted on the CAB and TR data sets.
Hub location; Competition models; Competitive location;
http://www.sciencedirect.com/science/article/pii/S0377221715008322
Mahmutogullari, Ali Irfan
Kara, Bahar Y.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1050-10622015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1050-1062
article
A decision-analysis-based framework for analysing stakeholder behaviour in scenario planning
Scenario planning is a method widely used by strategic planners to address uncertainty about the future. However, current methods either fail to address the future behaviour and impact of stakeholders or they treat the role of stakeholders informally. We present a practical decision-analysis-based methodology for analysing stakeholder objectives and likely behaviour within contested unfolding futures. We address issues of power, interest, and commitment to achieve desired outcomes across a broad stakeholder constituency. Drawing on frameworks for corporate social responsibility (CSR), we provide an illustrative example of our approach to analyse a complex contested issue that crosses geographic, organisational and cultural boundaries. Whilst strategies can be developed by individual organisations that consider the interests of others – for example in consideration of an organisation's CSR agenda – we show that our augmentation of scenario method provides a further, nuanced, analysis of the power and objectives of all concerned stakeholders across a variety of unfolding futures. The resulting modelling framework is intended to yield insights and hence more informed decision making by individual stakeholders or regulators.
Strategic planning; Ethics in OR; Decision processes; Scenario method; Education;
http://www.sciencedirect.com/science/article/pii/S0377221715006669
Cairns, George
Goodwin, Paul
Wright, George
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:919-9302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:919-930
article
Can involving clients in simulation studies help them solve their future problems? A transfer of learning experiment
It is often stated that involving the client in operational research studies increases conceptual learning about a system which can then be applied repeatedly to other, similar, systems. Our study provides a novel measurement approach for behavioural OR studies that aim to analyse the impact of modelling in long term problem solving and decision making. In particular, our approach is the first to operationalise the measurement of transfer of learning from modelling using the concepts of close and far transfer, and overconfidence. We investigate learning in discrete-event simulation (DES) projects through an experimental study. Participants were trained to manage queuing problems by varying the degree to which they were involved in building and using a DES model of a hospital emergency department. They were then asked to transfer learning to a set of analogous problems. Findings demonstrate that transfer of learning from a simulation study is difficult, but possible. However, this learning is only accessible when sufficient time is provided for clients to process the structural behaviour of the model. Overconfidence is also an issue when the clients who were involved in model building attempt to transfer their learning without the aid of a new model. Behavioural OR studies that aim to understand learning from modelling can ultimately improve our modelling interactions with clients, helping to ensure the benefits last longer and enabling modelling efforts to become more sustainable.
Behavioural OR; Psychology of decision; Model building; Model reuse; Discrete-event simulation;
http://www.sciencedirect.com/science/article/pii/S0377221715007924
Monks, Thomas
Robinson, Stewart
Kotiadis, Kathy
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:551-5592015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:551-559
article
Branch-and-price algorithms for the solution of the multi-trip vehicle routing problem with time windows
We investigate the exact solution of the vehicle routing problem with time windows, where multiple trips are allowed for the vehicles. In contrast to previous works in the literature, we specifically consider the case in which it is mandatory to visit all customers and there is no limitation on duration. We develop two branch-and-price frameworks based on two set covering formulations: a traditional one where columns (variables) represent routes, that is, sequences of consecutive trips, and a second one in which columns are single trips. One important difficulty related to the latter is the way mutual temporal exclusion of trips can be handled. It raises the issue of time discretization when solving the pricing problem. Our dynamic programming algorithm is based on the concept of groups of labels and representative labels. We provide computational results on modified small-sized instances (25 customers) from Solomon’s benchmarks in order to evaluate and compare the two methods. Results show that some difficult instances are out of reach for the first branch-and-price implementation, while they are consistently solved with the second.
Vehicle routing; Time windows; Multiple trips; Column generation; Branch-and-price;
http://www.sciencedirect.com/science/article/pii/S037722171500795X
Hernandez, Florent
Feillet, Dominique
Giroudeau, Rodolphe
Naud, Olivier
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1169-11772015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1169-1177
article
Aggregation heuristic for the open-pit block scheduling problem
In order to establish a production plan, an open-pit mine is partitioned into a three-dimensional array of blocks. The order in which blocks are extracted and processed has a dramatic impact on the economic value of the exploitation. Since realistic models have millions of blocks and constraints, the combinatorial optimization problem of finding the extraction sequence that maximizes the profit is computationally intractable. In this work, we present a procedure, based on innovative aggregation and disaggregation heuristics, that allows us to get feasible and nearly optimal solutions. The method was tested on the public reference library MineLib and improved the best known results in the literature in 9 of the 11 instances of the library. Moreover, the overall procedure is very scalable, which makes it a promising tool for large size problems.
Mine planning; Block aggregation; Open-pit block scheduling; Integer programming; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715009704
Jélvez, Enrique
Morales, Nelson
Nancel-Penard, Pierre
Peypouquet, Juan
Reyes, Patricio
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:945-9582015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:945-958
article
Critical Learning Incidents in system dynamics modelling engagements
This paper reports in-depth behavioural operational research to explore how individual clients learned to resolve dynamically complex problems in system dynamics model-based engagements. Consultant-client dyads involved in ten system dynamics consulting engagements were interviewed to identify individual clients' Critical Learning Incidents—defined as the moment of surprise caused after one's mental model produces unexpected failure and a change in one's mental model produces the desired result. The cases, which are reprised from interviews, include assessments of the nature of the engagement problem, the form of system dynamics model, and the methods employed by consultants during each phase of the engagement. Reported Critical Learning Incidents are noted by engagement phase and consulting method and constructivist learning theory is used to describe a pattern of learning. Research outcomes include descriptions of: the role of different methods applied in engagement phases (for example, the role of concept models to commence problem identification and to introduce iconography and jargon to the engagement participants); how model form associates with the timing of Critical Learning Incidents; and the role of social mediation and negotiation in the learning process.
System dynamics; Practice of OR; Critical Learning Incidents; Behavioural OR; Constructivism;
http://www.sciencedirect.com/science/article/pii/S0377221715008905
Thompson, James P.
Howick, Susan
Belton, Valerie
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:784-7882015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:784-788
article
Notes on technical efficiency estimation with multiple inputs and outputs
Collier, Johnson and Ruggiero (2011) deal with the problem of estimating technical efficiency using regression analysis that allows multiple inputs and outputs. This revives an old problem in the analysis of production. In this note we provide an alternative maximum likelihood estimator that addresses these concerns. A Monte Carlo experiment shows that the technique works well in practice. A test for homotheticity, a critical assumption in Collier, Johnson and Ruggiero (2011), is constructed and its behavior is examined using Monte Carlo simulation and an empirical application to European banking.
Efficiency; Least squares; Multiple-output production; Limited information maximum likelihood;
http://www.sciencedirect.com/science/article/pii/S0377221715009625
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:465-4752015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:465-475
article
Take it to the limit: Innovative CVaR applications to extreme credit risk measurement
The Global Financial Crisis (GFC) demonstrated the devastating impact of extreme credit risk on global economic stability. We develop four credit models to better measure credit risk in extreme economic circumstances, by applying innovative Conditional Value at Risk (CVaR) techniques to structural models (called Xtreme-S), transition models (Xtreme-T), quantile regression models (Xtreme-Q), and the author's unique iTransition model (Xtreme-i) which incorporates industry factors into transition matrices. We find the Xtreme-S and Xtreme-Q models to be the most responsive to changing market conditions. The paper also demonstrates how the models can be used to determine capital buffers required to deal with extreme credit risk.
Uncertainty modeling; Credit risk; Conditional Value at Risk; Conditional probability of default; Capital buffers;
http://www.sciencedirect.com/science/article/pii/S0377221714010182
Allen, D.E.
Powell, R.J.
Singh, A.K.
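The CVaR measure applied throughout these models has a simple empirical form: the mean of the worst (1 − α) fraction of losses. A discrete, equally weighted sketch (a generic illustration, not the authors' Xtreme implementations):

```python
def cvar(losses, alpha):
    """Empirical Conditional Value at Risk at confidence level alpha:
    the mean of the worst (1 - alpha) fraction of losses, assuming
    equally weighted discrete observations."""
    sorted_losses = sorted(losses, reverse=True)  # worst losses first
    k = max(1, int(round((1 - alpha) * len(sorted_losses))))
    return sum(sorted_losses[:k]) / k
```

For losses 1 through 10 at alpha = 0.8, CVaR averages the two worst losses (10 and 9), giving 9.5; raising alpha narrows the tail being averaged.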
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1144-11522015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1144-1152
article
A branch-and-cut algorithm for the truck dock assignment problem with operational time constraints
In this paper, we address a truck dock assignment problem with operational time constraints which has to be faced in the management of cross docks. More specifically, this problem arises as a subproblem of more involved problems with additional constraints and criteria. We propose a new integer programming model for this problem. The dimension of the polytope associated with the proposed model is identified by introducing a systematic way of generating linearly independent feasible solutions. Several classes of valid inequalities are also introduced. Some of them are proved to be facet-defining. Then, exact separation algorithms are described for separating cuts for classes with an exponential number of constraints, and an efficient branch-and-cut algorithm solving real-life size instances in reasonable time is provided. In most cases, the optimal solution is identified at the root node without requiring any branching.
Truck dock assignment; Polytope; Dimension; Valid inequalities; Facet-defining inequalities;
http://www.sciencedirect.com/science/article/pii/S0377221715008917
Gelareh, Shahin
Monemi, Rahimeh Neamatian
Semet, Frédéric
Goncalves, Gilles
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:273-2902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:273-290
article
Network-flow based algorithms for scheduling production in multi-processor open-pit mines accounting for metal uncertainty
The open-pit mine production scheduling problem (MPSP) deals with the optimization of the net present value of a mining asset and has received significant attention in recent years. Several solution methods have been proposed for its deterministic version. However, little is reported in the literature about its stochastic version, where metal uncertainty is accounted for. Moreover, most methods focus on the mining sequence and do not consider the flow of the material once mined. In this paper, a new MPSP formulation accounting for metal uncertainty and considering multiple destinations for the mined material, including stockpiles, is introduced. In addition, four different heuristics for the problem are compared; namely, a tabu search heuristic incorporating a diversification strategy (TS), a variable neighborhood descent heuristic (VND), a very large neighborhood search heuristic based on network flow techniques (NF), and a diversified local search (DLS) that combines VND and NF. The first two heuristics are extensions of existing methods recently proposed in the literature, while the last two are novel approaches. Numerical tests indicate that the proposed solution methods are effective, able to solve, in a few minutes to a few hours, instances that standard commercial solvers fail to solve. They also indicate that NF and DLS are in general more efficient and more robust than TS and VND.
Scheduling; Heuristics; Open-pit mining; Metal uncertainty; Network-flow algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221715008073
Lamghari, Amina
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:618-6272015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:618-627
article
The parallel stack loading problem to minimize blockages
This paper treats an elementary optimization problem, which arises whenever an inbound stream of items is to be intermediately stored in a given number of parallel stacks, so that blockages during their later retrieval are avoided. A blockage occurs whenever an item to be retrieved earlier is blocked by another item with lower priority stored on top of it in the same stack. Our stack loading problem arises, for instance, if containers arriving by vessel are intermediately stored in a container yard of a port or if, during nighttime, successively arriving wagons are to be parked in multiple parallel dead-end rail tracks of a tram depot. We formalize the resulting basic stack loading problem, investigate its computational complexity, and present suited exact and heuristic solution procedures.
Container storage; Stacking yard; Stack loading; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715008759
Boysen, Nils
Emde, Simon
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:30-452015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:30-45
article
A differential evolution algorithm with self-adaptive strategy and control parameters based on symmetric Latin hypercube design for unconstrained optimization problems
This paper presents a differential evolution (DE) algorithm, namely SLADE, with self-adaptive strategy and control parameters for unconstrained optimization problems. In SLADE, the population is initialized by symmetric Latin hypercube design (SLHD) to increase the diversity of the initial population. Moreover, the trial vector generation strategy assigned to each target individual is adaptively selected from the strategy candidate pool to match different stages of the evolution according to their previous successful experience. SLADE employs Cauchy and normal distributions to update the control parameters CR and F to appropriate values during the evolutionary process. Extensive simulation experiments and comparisons were carried out on a set of 25 benchmark functions. Experimental results show that SLADE is better than, or at least comparable to, other classic or adaptive DE algorithms, and that SLHD is effective for improving the performance of SLADE.
Evolutionary computations; Differential evolution; Parameter adaptation; Strategy adaptation; Symmetric Latin hypercube design;
http://www.sciencedirect.com/science/article/pii/S0377221715009698
Zhao, Zhiwei
Yang, Jingming
Hu, Ziyu
Che, Haijun
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:506-5162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:506-516
article
A comparative analysis of the UK and Italian small businesses using Generalised Extreme Value models
This paper presents a cross-country comparison of significant predictors of small business failure between Italy and the UK. Financial measures of profitability, leverage, coverage, liquidity and scale, as well as non-financial information, are explored, and some commonalities and differences are highlighted. Several models are considered, starting with the logistic regression, which is a standard approach in credit risk modelling. Some important improvements are investigated. Generalised Extreme Value (GEV) regression is applied in contrast to the logistic regression in order to produce more conservative estimates of default probability. The assumption of non-linearity is relaxed through application of BGEVA, a non-parametric additive model based on the GEV link function. Two methods of handling missing values are compared: multiple imputation and Weights of Evidence (WoE) transformation. The results suggest that the best predictive performance is obtained by BGEVA, thus implying the necessity of taking into account the low volume of defaults and non-linear patterns when modelling SME performance. For the majority of models considered, WoE shows better prediction than multiple imputation, suggesting that missing values could be informative.
Decision support systems; Risk analysis; Credit scoring; Small and Medium Sized Enterprises; Default prediction;
http://www.sciencedirect.com/science/article/pii/S0377221715007183
Andreeva, Galina
Calabrese, Raffaella
Osmetti, Silvia Angela
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:395-3962015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:395-396
article
Editorial
http://www.sciencedirect.com/science/article/pii/S0377221715009169
Crook, Jonathan
Bellotti, Tony
Mues, Christophe
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:806-8152015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:806-815
article
An outlook on behavioural OR – Three tasks, three pitfalls, one definition
In their recent paper, Hämäläinen, Luoma, and Saarinen (2013) have made a strong case for the importance of Behavioural OR. With the motivation to contribute to a broad academic outlook in this emerging discipline, this rather programmatic paper intends to further the discussion by describing three types of research tasks that should play an important role in Behavioural OR, namely a descriptive, a methodological and a technological task. Moreover, by relating Behavioural OR to similar academic endeavours, three potential pitfalls are presented that Behavioural OR should avoid: (1) a too narrow understanding of what “behavioural” means, (2) ignorance of interdisciplinary links, and (3) a development without close connection with the core disciplines of OR. The paper concludes by suggesting a definition of Behavioural OR that sums up all points addressed.
Behavioural OR; Interdisciplinary; Social sciences; Organizations; Hard and soft OR;
http://www.sciencedirect.com/science/article/pii/S0377221715008978
Becker, Kai Helge
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:751-7702015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:751-770
article
A dynamic program for valuing corporate securities
We design and implement a dynamic program for valuing corporate securities, seen as derivatives on a firm’s assets, and computing the term structure of yield spreads and default probabilities. Our setting is flexible for it accommodates an extended balance-sheet equality, arbitrary corporate debts, multiple seniority classes, and a reorganization process. This flexibility comes at the expense of a minor loss of efficiency. The analytical approach proposed in the literature is exchanged here for a quasi-analytical approach based on dynamic programming coupled with finite elements. To assess our construction, which shows flexibility and efficiency, we carry out a numerical investigation along with a complete sensitivity analysis.
Option theory; Structural models; Corporate securities; Corporate bonds; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221715009522
Ayadi, Mohamed A.
Ben-Ameur, Hatem
Fakhfakh, Tarek
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:983-10042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:983-1004
article
Understanding behaviour in problem structuring methods interventions with activity theory
This article argues that OR interventions, particularly problem structuring methods (PSM), are complex events that cannot be understood by conventional methods alone. In this paper an alternative approach is introduced, where the units of analysis are the activity systems constituted by and constitutive of PSM interventions. The paper outlines the main theoretical and methodological concerns that need to be appreciated in studying PSM interventions. The paper then explores activity theory as an approach to study them. A case study describing the use of this approach is provided.
Problem structuring methods; Behavioural OR; Activity theory; Collective intentionality;
http://www.sciencedirect.com/science/article/pii/S0377221715006785
White, Leroy
Burger, Katharina
Yearworth, Mike
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1153-11602015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1153-1160
article
A MILP model for the teacher assignment problem considering teachers’ preferences
The Teacher Assignment Problem is part of the University Timetabling Problem and involves assigning teachers to courses, taking their preferences into consideration. This is a complex problem, usually solved by means of heuristic algorithms. In this paper a Mixed Integer Linear Programming model is developed to balance teachers' teaching load (first optimization criterion), while maximizing teachers' preferences for courses according to their category (second optimization criterion). The model is used to solve the teacher-course assignment in the Department of Management at the School of Industrial Engineering of Barcelona, in the Universitat Politècnica de Catalunya. Results are discussed regarding the importance given to the optimization criteria. Moreover, to test the model's performance, a computational experiment is carried out using randomly generated instances based on real patterns. Results show that the model is suitable for many situations (number of teachers and courses and weight of the criteria), making it useful for departments with similar requirements.
Timetabling; Linear programming; Teacher assignment problem; MILP model;
http://www.sciencedirect.com/science/article/pii/S0377221715008139
Domenech, B
Lusa, A
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1124-11302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1124-1130
article
Optimal policies of M(t)/M/c/c queues with two different levels of servers
This paper deals with optimal control points of M(t)/M/c/c queues with periodic arrival rates and two levels of the number of servers. We use the results of this model to build a Markov decision process (MDP). The problem arose from a case study in the Kelowna General Hospital (KGH). The KGH uses surge beds when the emergency room is overcrowded, which results in two levels for the number of beds. The objective is to minimize a cost function. The findings of this work are not limited to healthcare; they may be used in any stochastic system with fluctuations in arrival rates and/or two levels of the number of servers, e.g., call centers, transportation, and internet services. We model the situation and define a cost function which needs to be minimized. Finding the cost function requires transient solutions of the M(t)/M/c/c queue. We modify the fourth-order Runge–Kutta method to calculate the transient solutions and obtain better solutions than the existing Runge–Kutta method. We show that the periodic variation of arrival rates makes the control policies time-dependent and periodic. We also study how fast the policies converge to a periodic pattern and obtain a criterion for the independence of policies in two sequential cycles.
Periodic MDP; Time-dependent queues; Health care; Two-level for number of servers; Hysteretic policy;
http://www.sciencedirect.com/science/article/pii/S0377221715009662
Tirdad, Ali
Grassmann, Winfried K.
Tavakoli, Javad
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:342-3462015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:342-346
article
Environmental efficiency measurement and the materials balance condition reconsidered
This note takes up a shortcoming of Coelli et al.'s (2007) popular environmental efficiency measure and its extension to economic-environmental trade-off analysis (see Van Meensel et al. (2010)), namely that they do not reward emission reductions by pollution control. A new environmental efficiency measure that overcomes this issue and, like Coelli et al.'s measure, is in line with the materials balance principle is proposed and further decomposed into "technical environmental efficiency" and "material and nonmaterial allocative environmental efficiencies". The new efficiency measure collapses into Coelli et al.'s measure if none of the considered Decision Making Units control pollutants. A numerical example using Data Envelopment Analysis is provided to further explore the properties of the new efficiency measure.
OR in environment and climate change; Environmental efficiency; Data envelopment analysis; Weak G-disposability;
http://www.sciencedirect.com/science/article/pii/S037722171500987X
Rødseth, Kenneth Løvold
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:120-1302015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:120-130
article
A comprehensive annual delivery program for upstream liquefied natural gas supply chain
Developing a cost-effective annual delivery program (ADP) is a challenging task for liquefied natural gas (LNG) suppliers, especially for LNG supply chains with a large number of vessels and customers. Given significant operational costs in LNG delivery operations, cost-effective ADPs can yield substantial savings, adding up to millions. Providing an extensive account of supply chain operations and contractual terms, this paper aims to consider a realistic ADP problem faced by large LNG suppliers; suggest alternative delivery options, such as split-delivery; and propose an efficient heuristic solution that outperforms commercial optimizers. The comprehensive numerical study in this research demonstrates that, contrary to the common belief in practice, split-delivery may generate substantial cost reductions in LNG supply chains.
OR in maritime industry; Annual delivery planning problem; LNG supply chain; Large-scale optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715009571
Mutlu, Fatih
Msakni, Mohamed K.
Yildiz, Hakan
Sönmez, Erkut
Pokharel, Shaligram
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1033-10492015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1033-1049
article
Reference points in revenue sharing contracts—How to design optimal supply chain contracts
Coordinating supply chains is an important goal for contract designers because it enables the channel members to increase their profits. Recently, many experimental studies have shown that behavioral aspects have to be taken into account when choosing the type of contract and specifying the contract parameters. In this paper, we analyze behavioral aspects of revenue-sharing contracts. We extend the classical normative decision model by incorporating reference-dependent valuation into the decision model and show how this affects inventory decisions. We conduct different lab experiments to test our model. As a result, human inventory decisions deviate from classical normative predictions, and we find evidence for reference-dependent valuation of human decision makers. We also show how contract designers can use the insights we gained to design better contracts.
Supply chain management; Contracting; Behavioral operations research; Experiments; Reference-dependent valuation;
http://www.sciencedirect.com/science/article/pii/S0377221715005214
Becker-Peth, Michael
Thonemann, Ulrich W.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:791-7952015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:791-795
article
Behavioural operational research: Returning to the roots of the OR profession
http://www.sciencedirect.com/science/article/pii/S0377221715009601
Franco, L. Alberto
Hämäläinen, Raimo P.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:440-4562015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:440-456
article
Accuracy of mortgage portfolio risk forecasts during financial crises
This paper explores whether factor based credit portfolio risk models are able to predict losses in severe economic downturns such as the recent Global Financial Crisis (GFC) within standard confidence levels. The paper analyzes (i) the accuracy of default rate forecasts, and (ii) whether forecast downturn percentiles (Value-at-Risk, VaR) are sufficient to cover default rate outcomes over a quarterly and an annual forecast horizon. Uninformative maximum likelihood and informative Bayesian techniques are compared as they imply different degrees of uncertainty.
Bayesian estimation; Maximum likelihood estimation; Model risk; Mortgage; Value-at-risk;
http://www.sciencedirect.com/science/article/pii/S0377221715008310
Lee, Yongwoong
Rösch, Daniel
Scheule, Harald
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:605-6172015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:605-617
article
Incentive strategies for an optimal recovery program in a closed-loop supply chain
We consider a dynamic closed-loop supply chain made up of one manufacturer and one retailer, with both players investing in a product recovery program to increase the rate of return of previously purchased products. End-of-use product returns have two impacts. First, they lead to a decrease in the production cost, as manufacturing with used parts is cheaper than using virgin materials. Second, returns boost sales through replacement items.
Closed-loop supply chain; Coordination; Incentive strategies; Pricing; Product recovery programs;
http://www.sciencedirect.com/science/article/pii/S0377221715008450
De Giovanni, Pietro
Reddy, Puduru V.
Zaccour, Georges
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:46-552015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:46-55
article
Fundamental properties and pseudo-polynomial-time algorithm for network containership sailing speed optimization
In container liner shipping, bunker cost is an important component of the total operating cost, and bunker consumption increases dramatically when the sailing speed of containerships increases. A higher speed implies higher bunker consumption (higher bunker cost), shorter transit time (lower inventory cost), and larger shipping capacity per ship per year (lower ship cost). Therefore, a container shipping company aims to determine the optimal sailing speed of containerships in a shipping network to minimize the total cost. We derive analytical solutions for sailing speed optimization on a single ship route with a continuous number of ships. The advantage of analytical solutions is that they unveil the underlying structure and properties of the problem, from which a number of valuable managerial insights can be obtained. Based on the analytical solution and the properties of the problem, the optimal integer number of ships to deploy on a ship route can be obtained by solving two equations, each in one unknown, using a simple bi-section search method. The properties further enable us to identify an optimality condition for network containership sailing speed optimization. Based on this optimality condition, we propose a pseudo-polynomial-time solution algorithm that can efficiently obtain an epsilon-optimal solution for the sailing speed of containerships in a liner shipping network.
Transportation; Liner shipping; Containership; Sailing speed; Bunker fuel;
http://www.sciencedirect.com/science/article/pii/S0377221715009789
Wang, Shuaian
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:91-1002015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:91-100
article
Obtaining cell counts for contingency tables from rounded conditional frequencies
We present an integer linear programming formulation and solution procedure for determining the tightest bounds on cell counts in a multi-way contingency table, given knowledge of a corresponding derived two-way table of rounded conditional probabilities and the sample size. The problem has application in statistical disclosure limitation, which is concerned with releasing useful data to the public and researchers while also preserving privacy and confidentiality. Previous work on this problem invoked the simplifying assumption that the conditionals were released as fractions in lowest terms, rather than the more realistic and complicated setting of rounded decimal values that is treated here. The proposed procedure finds all possible counts for each cell and runs fast enough to handle moderately sized tables.
OR in government; Integer linear programming; Statistical disclosure control; Tabular data; Fast Fourier transform;
http://www.sciencedirect.com/science/article/pii/S0377221715008358
Sage, Andrew J.
Wright, Stephen E.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:878-8892015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:878-889
article
Different paths to consensus? The impact of need for closure on model-supported group conflict management
Empirical evidence on how cognitive factors impact the effectiveness of model-supported group decision making is lacking. This study reports on an experiment on the effects of need for closure, defined as a desire for definite knowledge on some issue and the eschewal of ambiguity. The study was conducted with over 40 postgraduate student groups. A quantitative analysis shows that, compared to groups low in need for closure, groups high in need for closure experienced less conflict when using Value-Focused Thinking to make a budget allocation decision. Furthermore, low need for closure groups used the model to surface conflict and engaged in open discussions to come to an agreement. By contrast, high need for closure groups suppressed conflict and used the model to put boundaries on the discussion. Interestingly, both types of groups achieved similar levels of consensus, and high need for closure groups were more satisfied than low need for closure groups. A qualitative analysis of a subset of groups reveals that in high need for closure groups only a few participants controlled the model building process, and final decisions were not based on the model but on simpler tools. The findings highlight the need to account for the effects of cognitive factors when designing and deploying model-based support for practical interventions.
Behavioural OR; Need for closure; Decision processes; Conflict management; Model-based group support;
http://www.sciencedirect.com/science/article/pii/S0377221715005895
Franco, L. Alberto
Rouwette, Etiënne A.J.A.
Korzilius, Hubert
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:560-5762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:560-576
article
Finding robust timetables for project presentations of student teams
This article describes a methodology developed to find robust solutions to a novel timetabling problem encountered during a course. The problem requires grouping student teams according to diversity/homogeneity criteria and assigning the groups to time-slots for presenting their project results. In this article, we develop a mixed integer programming (MIP) formulation of the problem and then solve it with CPLEX. Rather than simply using the optimal solution reported, we obtain a set of solutions provided by the solution pool feature of the solution engine. We then map these solutions to a network, in which each solution is a node and an edge represents the distance between a pair of solutions (as measured by the number of teams assigned to a different time slot in those solutions). Using a scenario-based exact robustness measure, we test a set of metrics to determine which ones can be used to heuristically rank the solutions in terms of their robustness measure. Using seven semesters’ worth of actual data, we analyze performances of the solution approach and the metrics. The results show that by using the solution pool feature, analysts can quickly obtain a set of Pareto-optimal solutions (with objective function value and the robustness measure as the two criteria). Furthermore, two of the heuristic metrics have strong rank correlation with the robustness measure (mostly above 0.80) making them quite suitable for use in the development of new heuristic search algorithms that can improve the solution pool.
Timetabling; Mixed integer programming; Robustness; Diverse grouping; Bicriteria optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715008036
Akkan, Can
Erdem Külünk, M.
Koçaş, Cenk
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:314-3272015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:314-327
article
Accounting for externalities and disposability: A directional economic environmental distance function
The existence of positive and negative externalities ought to be considered in a productivity analysis in order to obtain unbiased measures of efficiency. In this research we present an additive style, data envelopment analysis model that considers the production of both negative and positive externalities and permits a limited increase in input utilisation where relevant. The directional economic environmental distance (DEED) function is a unified approach based on a linear program that evaluates the relative inefficiency of the units under examination with respect to a unique reference technology. We discuss the impact of disposability assumptions in depth and demonstrate how different versions of the DEED model improve on models presented in the literature to date.
Data envelopment analysis; Negative externalities; Disposability; Additive measure; Environment;
http://www.sciencedirect.com/science/article/pii/S037722171500990X
Adler, Nicole
Volta, Nicola
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:890-8982015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:890-898
article
Path dependence and biases in the even swaps decision analysis method
There are usually multiple paths that can be followed in a decision analysis process. It is possible that these different paths lead to different outcomes, i.e., there can exist path dependence. To demonstrate the phenomenon, we show how path dependence emerges in the Even Swaps method. We also discuss the phenomenon in decision analysis in general. The Even Swaps process helps the decision maker to find the most preferred alternative out of a set of multi-attribute alternatives. In our experiment, different paths are found to systematically lead to different choices in the Even Swaps process. This is explained by the accumulated effect of successive biased even swap tasks. The biases in these tasks are shown to be due to the scale compatibility and loss aversion phenomena. Estimates of the magnitudes of these biases in the even swap tasks are provided. We suggest procedures to cancel out the effects of the biases.
Behavioral Operational Research; Decision analysis; Path dependence; Biases; Trade-off; Scale compatibility;
http://www.sciencedirect.com/science/article/pii/S037722171500898X
Lahtinen, Tuomas J.
Hämäläinen, Raimo P.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:101-1192015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:101-119
article
The vehicle-routing problem with time windows and driver-specific times
This paper proposes a tabu search algorithm for the vehicle-routing problem with time windows and driver-specific times (VRPTWDST), a variant of the classical VRPTW that uses driver-specific travel and service times to model the familiarity of the different drivers with the customers to visit. We carry out a systematic investigation of the problem on a comprehensive set of newly generated benchmark instances. We find that consideration of driver knowledge in the route planning clearly improves the efficiency of vehicle routes, an effect that intensifies for higher familiarity levels of the drivers. Increased benefits are produced if the familiar customers of drivers are geographically contiguous. Moreover, a higher number of drivers that are familiar with the same (larger) region provides higher benefits compared to a scenario where each driver is only familiar with a dedicated (smaller) region. Finally, our tabu search is able to prove its performance on the Solomon test instances of the closely related VRPTW, yielding high-quality solutions in short time.
Vehicle routing; Time windows; Driver-specific times; Routing consistency; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715008395
Schneider, Michael
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:143-1542015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:143-154
article
Control and enforcement in order to increase supplier inventory in a JIT contract
Prompt response to customer demand has long been a point of major concern in supply chains. "Inventory wars" between suppliers and their customers are common, owing to cases in which one supply chain party attempts to decrease its stock at the expense of the other party. In order to ensure that suppliers meet their commitments to fulfill orders on time, customers must formulate incentives or, alternatively, enforce penalties. This paper deals with a customer organization that has a contract with a supplier, based on a Just-In-Time strategy. Initiating a policy of sanctions, the customer becomes the lead player in a Stackelberg game and forces the supplier to hold inventory, which is made available to the customer in real-time. Using a class of sanctioning functions, we show that the customer can force the supplier to hold inventory up to some maximal value, rendering actual enforcement of sanctions unnecessary. However, contrary to expectations, escalation of the enforcement level can in fact reduce the capacity of the supplier to replenish on time. Consequently, the customer must sanction meticulously in order to receive his inventory on time. The possibility of devoting a few hours each day to sanctioning activity significantly reduces the customer's expected cost. In particular, numerical examples show that the customer's costs under an enforcement level may be only 2 percent higher than his costs in a situation in which all inventory is necessarily replenished on time.
Just-In-Time; Customer-led supply chain; Replenishment on-time enforcement;
http://www.sciencedirect.com/science/article/pii/S037722171500973X
Shnaiderman, Matan
Ben-Baruch, Liron
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1005-10132015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1005-1013
article
Modelling adherence behaviour for the treatment of obstructive sleep apnoea
Continuous positive airway pressure therapy (CPAP) is known to be the most efficacious treatment for obstructive sleep apnoea (OSA). Unfortunately, poor adherence behaviour in using CPAP reduces its effectiveness and thereby also limits beneficial outcomes. In this paper, we model the dynamics and patterns of patient adherence behaviour as a basis for designing effective and economical interventions. Specifically, we define patient CPAP usage behaviour as a state and develop Markov models for diverse patient cohorts in order to examine the stochastic dynamics of CPAP usage behaviours. We also examine the impact of behavioural intervention scenarios using a Markov decision process (MDP), and suggest a guideline for designing interventions to improve CPAP adherence behaviour. Behavioural intervention policy that addresses economic aspects of treatment is imperative for translation to clinical practice, particularly in resource-constrained environments that are clinically engaged in the chronic care of OSA.
Behavioural OR; Obstructive sleep apnoea; Treatment adherence behaviour; Markov models; Cost-effective interventions;
http://www.sciencedirect.com/science/article/pii/S0377221715006724
Kang, Yuncheol
Sawyer, Amy M.
Griffin, Paul M.
Prabhu, Vittaldas V.
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1113-11232015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1113-1123
article
Zero-inefficiency stochastic frontier models with varying mixing proportion: A semiparametric approach
In this paper, we propose a semiparametric version of the zero-inefficiency stochastic frontier model of Kumbhakar, Parmeter, and Tsionas (2013) by allowing the proportion of firms that are fully efficient to depend on a set of covariates via an unknown smooth function. We propose an (iterative) backfitting local maximum likelihood estimation procedure that achieves the optimal convergence rates of both the frontier parameters and the nonparametric function of the probability of being efficient. We derive the asymptotic bias and variance of the proposed estimator and establish its asymptotic normality. In addition, we discuss how to test for a parametric specification of the proportion of firms that are fully efficient, as well as how to test for the presence of fully inefficient firms, based on sieve likelihood ratio statistics. The finite sample behaviors of the proposed estimation procedure and tests are examined using Monte Carlo simulations. An empirical application is further presented to demonstrate the usefulness of the proposed methodology.
Zero-inefficiency; Varying proportion; Semiparametric approach; Backfitting local maximum likelihood; Sieve likelihood ratio statistics;
http://www.sciencedirect.com/science/article/pii/S0377221715009455
Tran, Kien C.
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:1-292015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:1-29
article
ELECTRE: A comprehensive literature review on methodologies and applications
Multi-criteria decision analysis (MCDA) is a valuable resource within operations research and management science. Various MCDA methods have been developed over the years and applied to decision problems in many different areas. The outranking approach, and in particular the family of ELECTRE methods, continues to be a popular research field within MCDA, despite its more than 40 years of existence. In this paper, a comprehensive literature review of English scholarly papers on ELECTRE and ELECTRE-based methods is performed. Our aim is to investigate how ELECTRE and ELECTRE-based methods have been considered in various areas. This includes application areas, modifications to the methods, comparisons with other methods, and general studies of the ELECTRE methods. Although a significant amount of the literature on ELECTRE is in languages other than English, we focus only on English articles, because many researchers may not be able to read some of the other languages. Each paper is categorized according to its main focus with respect to ELECTRE, i.e., whether it considers an application, performs a review, considers ELECTRE with respect to the problem of selecting an MCDA method, or considers some methodological aspect of ELECTRE. A total of 686 papers are included in the review. The group of papers considering an application of ELECTRE consists of 544 papers, and these are further categorized into 13 application areas and a number of sub-areas. In addition, all papers are classified according to the country of author affiliation, journal of publication, and year of publication. For the group of applied papers, the distributions by ELECTRE version vs. application area and by ELECTRE version vs. year of publication are provided. We believe that this paper can be a valuable source of information for researchers and practitioners in the field of MCDA, and ELECTRE in particular.
Multiple criteria decision aiding (MCDA); Outranking; ELECTRE; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221715006529
Govindan, Kannan
Jepsen, Martin Brandt
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:291-3042015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:291-304
article
Parameters measuring bank risk and their estimation
The paper develops estimation of three parameters of banking risk based on an explicit model of expected utility maximization by financial institutions subject to the classical technology restrictions of neoclassical production theory. The parameters are risk aversion, prudence or downside risk aversion and generalized risk resulting from a factor model of loan prices. The model can be estimated using standard econometric techniques, like GMM for dynamic panel data and latent factor analysis for the estimation of covariance matrices. An explicit functional form for the utility function is not needed and we show how measures of risk aversion and prudence (downside risk aversion) can be derived and estimated from the model. The model is estimated using data for Eurozone countries and we focus particularly on (i) the use of the modeling approach as a device close to an “early warning mechanism”, (ii) the bank- and country-specific estimates of risk aversion and prudence (downside risk aversion), and (iii) the derivation of a generalized measure of risk that relies on loan-price uncertainty. Moreover, the model provides estimates of loan price distortions and thus, allocative efficiency.
Financial stability; Banking; Expected utility maximization; Sub-prime crisis; Financial crisis;
http://www.sciencedirect.com/science/article/pii/S0377221715008991
Tsionas, Mike G.
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:164-1782015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:164-178
article
Cyclic inventory routing in a line-shaped network
The inventory routing problem (IRP) is a very challenging optimization task that couples two of the most important components of supply chain management, i.e., inventory control and transportation. Routes of vehicles are to be determined to repeatedly resupply multiple customers with constant demand rates from a single depot. We alter this basic IRP setting in two respects: (i) only cyclic tours are allowed, i.e., each vehicle continuously tours its dedicated route, and (ii) all customers are located along a line. Both characteristics occur, for instance, in liner shipping (when feeder ships service inland ports along a stream) and in facility logistics (when tow trains deliver part bins to the stations of an assembly line). We formalize the resulting problem setting, identify NP-hard as well as polynomially solvable cases, and develop suitable solution procedures.
Inventory routing; Cyclic routes; Container shipping; Facility logistics;
http://www.sciencedirect.com/science/article/pii/S0377221715009935
Zenker, Michael
Emde, Simon
Boysen, Nils
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:65-762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:65-76
article
A mean-shift algorithm for large-scale planar maximal covering location problems
The planar maximal covering location problem (PMCLP) concerns the placement of a given number of facilities anywhere on a plane to maximize coverage. Solving PMCLP requires identifying a candidate locations set (CLS) on the plane before reducing it to the relatively simple maximal covering location problem (MCLP). The techniques for identifying the CLS have been mostly dominated by the well-known circle intersect points set (CIPS) method. In this paper we first review PMCLP, and then discuss the advantages and weaknesses of the CIPS approach. We then present a mean-shift based algorithm, called MSMC, for treating large-scale PMCLPs. We test the performance of MSMC against the CIPS approach on randomly generated data sets that vary in size and distribution pattern. The experimental results illustrate MSMC’s outstanding performance in tackling large-scale PMCLPs.
Location; Large scale optimization; Planar maximal covering location problem; Mean shift;
http://www.sciencedirect.com/science/article/pii/S0377221715008309
He, Zhou
Fan, Bo
Cheng, T.C.E.
Wang, Shou-Yang
Tan, Chin-Hon
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:251-2612015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:251-261
article
Sparse and robust normal and t-portfolios by penalized Lq-likelihood minimization
Two important problems arising in traditional asset allocation methods are the sensitivity to estimation error of portfolio weights and the high dimensionality of the set of candidate assets. In this paper, we address both issues by proposing a new criterion for portfolio selection. The new criterion is a two-stage description of the available information, where the q-entropy, a generalized measure of information, is used to code the uncertainty of the data given the parametric model and the uncertainty related to the model choice. The information about the model is coded in terms of a prior distribution that promotes asset weights sparsity. Our approach carries out model selection and estimation in a single step, by selecting a few assets and estimating their portfolio weights simultaneously. The resulting portfolios are doubly robust, in the sense that they can tolerate deviations from both assumed data model and prior distribution for model parameters. Empirical results on simulated and real-world data support the validity of our approach.
Investment analysis; Penalized least squares; q-entropy; Sparsity; Index tracking;
http://www.sciencedirect.com/science/article/pii/S0377221715008127
Giuzio, Margherita
Ferrari, Davide
Paterlini, Sandra
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:498-5052015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:498-505
article
Models and forecasts of credit card balance
Credit card balance is an important factor in retail finance. In this article we consider multivariate models of credit card balance and use a real dataset of credit card data to test the forecasting performance of the models. Several models are considered in a cross-sectional regression context: ordinary least squares, two-stage and mixture regression. After that, we take advantage of the time series structure of the data and model credit card balance using a random effects panel model. The most important predictor variable is previous lagged balance, but other application and behavioural variables are also found to be important. Finally, we present an investigation of forecast accuracy on credit card balance 12 months ahead using each of the proposed models. The panel model is found to be the best model for forecasting credit card balance in terms of mean absolute error (MAE) and the two-stage regression model performs best in terms of root mean squared error (RMSE).
Credit cards; Balance estimation; Mixture model; Panel model;
http://www.sciencedirect.com/science/article/pii/S0377221714010157
Hon, Pak Shun
Bellotti, Tony
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:427-4392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:427-439
article
An empirical comparison of classification algorithms for mortgage default prediction: evidence from a distressed mortgage market
This paper evaluates the performance of a number of modelling approaches for predicting future mortgage default status. Boosted regression trees, random forests, and penalised linear and semi-parametric logistic regression models are applied to four portfolios of over 300,000 Irish owner-occupier mortgages. The main findings are that the selected approaches have varying degrees of predictive power and that boosted regression trees significantly outperform logistic regression. This suggests that boosted regression trees can be a useful addition to the current toolkit for mortgage credit risk assessment by banks and regulators.
Boosting; Random forests; Semi-parametric models; Mortgages; Credit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715008383
Fitzpatrick, Trevor
Mues, Christophe
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:771-7832015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:771-783
article
Controlling for spatial heterogeneity in nonparametric efficiency models: An empirical proposal
This paper introduces an original methodology, derived by the robust order-m model, to estimate technical efficiency with spatial autocorrelated data using a nonparametric approach. The methodology is aimed to identify potential competitors on a subset of productive units that are identified through spatial dependence, thus focusing on peers located in close proximity of the productive unit. The proposed method is illustrated in a simulation setting that verifies the territorial differences between the nonparametric unconditioned and the conditioned estimates. A firm-level application to the Italian industrial districts is proposed in order to highlight the ability of the new method to separate the global intangible spatial effect from the efficiency term on real data.
Productive efficiency; Conditional nonparametric efficiency; Spatial heterogeneity; Industrial districts;
http://www.sciencedirect.com/science/article/pii/S0377221715009765
Vidoli, Francesco
Canello, Jacopo
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:226-2382015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:226-238
article
Dual sourcing under disruption risk and cost improvement through learning
As suppliers are crucial for successful supply chain management, buying companies have to deal with the risks of supply disruptions due to, e.g., labor strikes, natural disasters, supplier bankruptcy, and business failures. Dual sourcing is one potential countermeasure; however, applying it means forgoing the full potential of economies of scale. To provide decision support, we analyze, from a buyer's perspective, the trade-off between risk reduction via dual sourcing under disruption risk and the learning benefits on sourcing costs induced by a long-term relationship with a single supplier. The buyer's optimal volume allocation strategy over a finite dynamic planning horizon is identified, and we find that a symmetric demand allocation is not optimal, even if suppliers are symmetric. We obtain insights into how the reliability, cost, and learning ability of potential suppliers impact the buyer's sourcing decision, and find that the allocation balance increases with the learning rate and decreases with reliability and demand level. Further, we quantify the benefit of dual sourcing compared to single sourcing, which increases with the learning rate and decreases with reliability. When comparing the optimal policy to heuristic dual sourcing policies, a simple 75:25 allocation rule turns out to be a very robust policy. Finally, we perform a sensitivity analysis and find that increasing certainty about supplier reliability and increasing risk aversion of the buyer yield more balanced supply volume allocations among the available suppliers, and that the advantage of dual sourcing decreases with uncertainty about supplier reliability. Further, we discuss the impact of demand uncertainty.
Dual sourcing; Learning effects; Disruption risk;
http://www.sciencedirect.com/science/article/pii/S0377221715008413
Silbermayr, Lena
Minner, Stefan
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:931-9442015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:931-944
article
An experimental investigation into the role of simulation models in generating insights
It is often claimed that discrete-event simulation (DES) models are useful for generating insights. There is, however, almost no empirical evidence to support this claim. To address this issue we perform an experimental study which investigates the role of DES, specifically the simulation animation and statistical results, in generating insight (an ‘Aha!’ moment). Undergraduate students were placed in three separate groups and given a task to solve using a model with only animation, a model with only statistical results, or no model at all. The task was based on the UK's NHS111 telephone service for non-emergency health care. Performance was measured based on whether participants solved the task with insight, the time taken to achieve insight, and the participants’ problem-solving patterns. The results show that there is some association between insight generation and the use of a simulation model, particularly the use of the statistical results generated from the model. While there is no evidence that insights were generated more frequently from the statistical results than from the animation, the participants using the statistical results generated insights more rapidly.
Discrete-event simulation; Insight; Animation; Experimentation; Behavioural operational research;
http://www.sciencedirect.com/science/article/pii/S037722171500884X
Gogi, Anastasia
Tako, Antuela A.
Robinson, Stewart
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:816-8262015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:816-826
article
Model-based organizational decision making: A behavioral lens
Operational research assumes that organizational decision-making processes can be improved by making them more rigorous and analytical through the application of quantitative and qualitative modeling. However, we have only a limited understanding of how modeling actually affects organizational decision-making behavior, positively or negatively. Drawing from the Carnegie School's tradition of organizational research, this paper identifies two types of organizational decision-making activities where modeling can be applied: routine decision making and problem solving. These two types of decision-making activities have very different implications for model-based decision support, both in terms of the positive and negative behavioral impacts associated with modeling as well as the criteria used to evaluate models and modeling practices. Overall, the paper offers novel insights that help understand why modeling activities are successful (or not), explains why practitioners adopt some approaches more readily than others and points to new opportunities for empirical research and method development.
Behavioral OR; Behavioral theory of the firm; Bounded rationality; Organizational behavior; Decision making;
http://www.sciencedirect.com/science/article/pii/S0377221715007948
Luoma, Jukka
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:728-7392015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:728-739
article
Strategic entry in a triopoly market of firms with asymmetric cost structures
This paper examines the strategic investment timing decision in a triopoly market comprising firms with asymmetric cost structures. We present three novel results. First, in the case where there are relatively small cost asymmetries between firms and a relatively small first-mover advantage, the firm with the lowest cost structure is not always the first investor. In other cases, the firm with the lowest cost structure is the first investor. Second, an increase in volatility increases the possibility that a firm without the lowest cost structure is the first investor. Finally, even in the three-asymmetric-firm model, we show that the first investor threshold is larger in a triopoly than in a duopoly, although it is smaller in a duopoly than in a monopoly.
Investment analysis; Real options; Competition; Uncertainty;
http://www.sciencedirect.com/science/article/pii/S037722171500819X
Shibata, Takashi
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:77-902015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:77-90
article
An iterated multi-stage selection hyper-heuristic
There is growing interest in the design of reusable general-purpose search methods that are applicable to different problems, as opposed to tailored solutions to a single particular problem. Hyper-heuristics have emerged as such high-level methods that explore the space formed by a set of heuristics (move operators) or heuristic components for solving computationally hard problems. A selection hyper-heuristic mixes and controls a predefined set of low-level heuristics with the goal of improving an initially generated solution, by choosing and applying an appropriate heuristic to the solution in hand and deciding whether to accept or reject the new solution at each step of an iterative framework. Designing an adaptive control mechanism for heuristic selection and combining it with a suitable acceptance method is a major challenge, because both components can influence the overall performance of a selection hyper-heuristic. In this study, we describe a novel iterated multi-stage hyper-heuristic approach which cycles through two interacting hyper-heuristics and operates on the principle that not all low-level heuristics for a problem domain are useful at every point of the search process. The empirical results on a hyper-heuristic benchmark indicate the success of the proposed selection hyper-heuristic across six problem domains, beating the state-of-the-art approach.
Heuristics; Combinatorial optimisation; Hyper-heuristic; Meta-heuristic; Hybrid approach;
http://www.sciencedirect.com/science/article/pii/S0377221715008255
Kheiri, Ahmed
Özcan, Ender
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:667-6762015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:667-676
article
Accurate algorithms for identifying the median ranking when dealing with weak and partial rankings under the Kemeny axiomatic approach
Preference rankings appear in virtually all fields of science (political science, behavioural science, machine learning, decision making and so on). The well-known social choice problem consists in finding a reasonable procedure to aggregate the preferences or rankings expressed by subjects in order to reach a collective decision. This turns out to be equivalent to estimating the consensus (central) ranking from data and is known to be an NP-hard problem. A useful solution was proposed by Emond and Mason in 2002 through the branch-and-bound algorithm (BB) within the Kemeny and Snell axiomatic framework. BB is, however, a time-demanding procedure when the problem becomes intractable, i.e. a large number of objects, with weak and partial rankings, in the presence of a low degree of consensus. As an alternative, we propose an accurate heuristic algorithm called FAST that finds at least one of the consensus ranking solutions found by BB while saving a great deal of computational time. In addition, we show that the building block of FAST is an algorithm called QUICK that already finds one of the BB solutions, so it can fruitfully be used to speed up the overall search procedure even further when the number of objects is low. Simulation studies and applications to real data show the accuracy and the computational efficiency of our proposal.
Preference rankings; Median ranking; Kemeny distance; Social choice problem; Branch-and-bound algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715008048
Amodio, S.
D’Ambrosio, A.
Siciliano, R.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:677-6822015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:677-682
article
A new model for intuitionistic fuzzy multi-attributes decision making
In this study, we discuss linear orders of intuitionistic fuzzy values (IFVs). Then we introduce an intuitionistic fuzzy weighted arithmetic average operator. Some fundamental properties of this operator are investigated. Based on the introduced operator, we propose a new model for intuitionistic fuzzy multi-attributes decision making. The proposed model deals with the degree of membership and degree of nonmembership separately. It is resistant to extreme data.
Decision analysis; Intuitionistic fuzzy sets; Weighted arithmetic average; Weighted geometric average; Admissible order;
http://www.sciencedirect.com/science/article/pii/S0377221715007997
Ouyang, Yao
Pedrycz, Witold
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:706-7162015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:706-716
article
Optimizing layouts of initial AFV refueling stations targeting different drivers, and experiments with agent-based simulations
The number of refueling stations for AFVs (alternative fuel vehicles) is limited during the early stages of the diffusion of AFVs. Different layouts of these initial stations will result in different degrees of driver concern regarding refueling and will therefore influence individuals’ decisions to adopt AFVs. The question becomes: what is an optimal layout for these initial stations? Should it target all drivers or just a portion of them, and if so, which portion? Further, how does the number of initial AFV refueling stations influence the adoption of AFVs? This paper explores these questions with agent-based simulations. Using Shanghai as the basis of computational experiments, this paper first generates different optimal layouts using a genetic algorithm to minimize the total concern of different targeted drivers, and then conducts agent-based simulations of the diffusion of AFVs with these layouts. The main findings of this study are that (1) targeting drivers in the city center can induce the fastest diffusion of AFVs if AFV technologies are mature, and (2) a larger number of initial AFV refueling stations may result in slower diffusion of AFVs because these initial stations may not have sufficient customers to survive. The simulations can provide insights for cities that are trying to promote the diffusion of AFVs.
Simulation; Optimal layout; Alternative fuel vehicles; Initial refueling stations; Agent-based model;
http://www.sciencedirect.com/science/article/pii/S0377221715008218
Zhao, Jiangjiang
Ma, Tieju
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:740-7502015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:740-750
article
Inverse portfolio problem with coherent risk measures
In general, a portfolio problem minimizes risk (or negative utility) of a portfolio of financial assets with respect to portfolio weights subject to a budget constraint. The inverse portfolio problem then arises when an investor assumes that his/her risk preferences have a numerical representation in the form of a certain class of functionals, e.g. in the form of expected utility, coherent risk measure or mean-deviation functional, and aims to identify such a functional, whose minimization results in a portfolio, e.g. a market index, that he/she is most satisfied with. In this work, the portfolio risk is determined by a coherent risk measure, and the rate of return of investor’s preferred portfolio is assumed to be known. The inverse portfolio problem then recovers investor’s coherent risk measure either through finding a convex set of feasible probability measures (risk envelope) or in the form of either mixed CVaR or negative Yaari’s dual utility. It is solved in single-period and multi-period formulations and is demonstrated in a case study with the FTSE 100 index.
Decision making under risk; Coherent risk measure; Portfolio optimization; Inverse portfolio problem;
http://www.sciencedirect.com/science/article/pii/S0377221715008929
Grechuk, Bogdan
Zabarankin, Michael
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:908-9182015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:908-918
article
Recent evidence on the effectiveness of group model building
Group model building (GMB) is a participatory approach to using system dynamics in group decision-making and problem structuring. This paper considers the published quantitative evidence base for GMB since the earlier literature review by Rouwette et al. (2002), to assess the level of understanding on three basic questions: what does it achieve, when should it be applied, and how should it be applied or improved? There have now been at least 45 such studies since 1987, utilising controlled experiments, field experiments, pretest/posttest, and observational research designs. There is evidence of GMB achieving a range of outcomes, particularly with regard to the behaviour of participants and their learning through the process. There is some evidence that GMB is more effective at supporting communication and consensus than traditional facilitation; however, GMB has not been compared to other problem structuring methods. GMB has been successfully applied in a range of contexts, but there is little evidence on which to select between different GMB tools, or to understand when certain tools may be more appropriate. There is improving evidence on how GMB works, but this has not yet been translated into changing practice. Overall, the evidence base for GMB has continued to improve, supporting its use for improving communication and agreement between participants in group decision processes. This paper argues that future research in group model building would benefit from three main shifts: from single cases to multiple cases; from controlled settings to applied settings; and by augmenting survey results with more objective measures.
Behavioural OR; Group model building; System dynamics; Evidence; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221715006323
Scott, Rodney J
Cavana, Robert Y
Cameron, Donald
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:397-4062015-12-17RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:397-406
article
Time-to-profit scorecards for revolving credit
This paper defines and models time-to-profit for the first time for credit acceptance decisions within the context of revolving credit. This requires the definition of a time-related event: a customer is profitable when the monthly cumulative return is at least one (i.e. cumulative profits cover the outstanding balance). Time-to-profit scorecards were produced for a data set of revolving credit from a Colombian lending institution which included socio-demographic and first-purchase individual characteristics. Results show that it is possible to obtain good classification accuracy and to improve portfolio returns, which are continuous by definition, through the use of survival models for binary events (i.e. either being profitable or not). It is also shown how predicting time-to-profit can be used for the investment planning of credit programmes. It is possible to identify the earliest point in time at which a customer is profitable and hence generates internal (organic) funds for a credit programme to continue growing and become sustainable. For the survival models, the effect of segmentation on loan duration was explored. Results were similar in terms of classification accuracy and identifying organic growth opportunities. In particular, loan duration and credit limit usage have a significant economic impact on time-to-profit. This paper confirms that high-risk credit programmes can be profitable at different points in time depending on loan duration. Furthermore, existing customers may provide internal funds for the credit programme to continue growing.
Risk management; Time-to-profit; Profit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715008942
Sanchez-Barrios, Luis Javier
Andreeva, Galina
Ansell, Jake
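The time-to-profit event defined in the abstract above (the month in which the cumulative return reaches one) can be sketched in a few lines. The function name, cash flows, and balance below are invented for illustration and are not taken from the paper:

```python
def time_to_profit(monthly_profits, outstanding_balance):
    """First month (1-based) at which cumulative profits cover the
    outstanding balance, i.e. the cumulative return reaches one.
    Returns None if the event never occurs over the observed horizon."""
    cumulative = 0.0
    for month, profit in enumerate(monthly_profits, start=1):
        cumulative += profit
        if cumulative / outstanding_balance >= 1:
            return month
    return None

# Hypothetical customer: outstanding balance of 100, monthly profits below.
print(time_to_profit([30, 40, 50, 60], 100))  # -> 3
```

A survival model would then treat this month index as the (possibly censored) event time.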
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1102-1112 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1102-1112
article
Estimating risk preferences of bettors with different bet sizes
We extend the literature on the risk preferences of a representative bettor by including odds-dependent bet sizes in our estimations. Accounting for different bet sizes substantially reduces the standard errors of all coefficients. Substituting the coefficients from the model with equal bet sizes into the model with odds-dependent bet sizes leads to a sharp decline in the likelihood, which shows that accounting for different amounts is important. Our estimations strongly reject the hypothesis that the overbetting of outcomes with low probabilities (the favorite-longshot bias) can be explained by risk-seeking bettors. Depending on the exact specification within cumulative prospect theory, the data are best described by an overweighting of small probabilities that is more pronounced in the gain domain. Models that allow for two probability-weighting parameters each, in the gain and in the loss domain, are superior.
Applied probability; Betting markets; Favorite-longshot bias; Estimation of risk preferences; Overweighting of small probabilities;
http://www.sciencedirect.com/science/article/pii/S0377221715008954
Feess, Eberhard
Müller, Helge
Schumacher, Christoph
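The overweighting of small probabilities mentioned above is commonly modelled with the one-parameter weighting function of Tversky and Kahneman (1992). The sketch below uses their original parameter estimates (0.61 for gains, 0.69 for losses), which are not the values estimated in this paper:

```python
def prob_weight(p, gamma):
    """One-parameter probability weighting function of Tversky and
    Kahneman (1992): w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# With gamma < 1 small probabilities are overweighted (w(p) > p) and
# large ones underweighted; 0.61/0.69 are Tversky and Kahneman's
# original gain/loss estimates, not this paper's.
for p in (0.01, 0.10, 0.50, 0.90):
    print(f"p={p:.2f}  w_gain={prob_weight(p, 0.61):.3f}  "
          f"w_loss={prob_weight(p, 0.69):.3f}")
```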
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:517-524 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:517-524
article
Spatial dependence in credit risk and its improvement in credit scoring
Credit scoring models are important tools in the credit granting process. These models measure the credit risk of a prospective client based on idiosyncratic variables and macroeconomic factors. However, small and medium sized enterprises (SMEs) are subject to the effects of the local economy. From a data set with the location and default information of 9 million Brazilian SMEs, provided by Serasa Experian (the largest Brazilian credit bureau), we propose a measure of the local risk of default based on the application of ordinary kriging. This variable has been included in logistic credit scoring models as an explanatory variable. These models have shown better performance when compared to models without this variable: a gain of around 7 percentage points in KS and Gini was observed.
Risk analysis; Spatial dependence; SME credit risk; Ordinary kriging; Credit scoring;
http://www.sciencedirect.com/science/article/pii/S0377221715006463
Fernandes, Guilherme Barreto
Artes, Rinaldo
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:628-630 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:628-630
article
A note: An improved upper bound for the online inventory problem with bounded storage and order costs
This work gives an improved competitive analysis for an online inventory problem with bounded storage and order costs proposed by Larsen and Wøhlk (2010). We improve the upper bound of the competitive ratio from (2 + 1/k)M/m to less than (4/5)(2 + 1/k)M/m, where k, M and m are parameters of the given problem. The key idea is to use linear-fractional programming and primal-dual analysis methods to find the upper bound of a central inequality.
Inventory; Online algorithms; Competitive analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715008875
Dai, Wenqiang
Jiang, Qingzhu
Feng, Yi
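The improvement in the competitive-ratio bound can be checked numerically. The parameter values below are illustrative only, not instances from the note:

```python
from fractions import Fraction

def original_bound(k, M, m):
    """Upper bound (2 + 1/k) * M/m from Larsen and Wohlk (2010)."""
    return (2 + Fraction(1, k)) * Fraction(M, m)

def improved_bound(k, M, m):
    """Improved upper bound (4/5) * (2 + 1/k) * M/m reported in the note."""
    return Fraction(4, 5) * original_bound(k, M, m)

# Illustrative parameters (not from the paper):
k, M, m = 4, 10, 2
print(original_bound(k, M, m), improved_bound(k, M, m))  # -> 45/4 9
```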
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:417-426 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:417-426
article
Instance-based credit risk assessment for investment decisions in P2P lending
Recent years have witnessed increased attention on peer-to-peer (P2P) lending, which provides an alternative way of financing without the involvement of traditional financial institutions. A key challenge for personal investors in P2P lending marketplaces is the effective allocation of their money across different loans by accurately assessing the credit risk of each loan. Traditional rating-based assessment models cannot meet the needs of individual investors in P2P lending, since they do not provide an explicit mechanism for asset allocation. In this study, we propose a data-driven investment decision-making framework for this emerging market. We design an instance-based credit risk assessment model, which can evaluate the return and risk of each individual loan. Moreover, we formulate the investment decision in P2P lending as a portfolio optimization problem with boundary constraints. To validate the proposed model, we perform extensive experiments on real-world datasets from two notable P2P lending marketplaces. Experimental results reveal that the proposed model can effectively improve investment performance compared with existing methods in P2P lending.
Data mining; P2P lending; Credit risk assessment; Instance-based method; Investment decisions;
http://www.sciencedirect.com/science/article/pii/S0377221715004610
Guo, Yanhong
Zhou, Wenjun
Luo, Chunyu
Liu, Chuanren
Xiong, Hui
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:683-690 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:683-690
article
Optimal search for parameters in Monte Carlo simulation for derivative pricing
This paper provides a novel and general framework for the problem of searching the parameter space in Monte Carlo simulations. We propose a deterministic online algorithm and a randomized online algorithm to search for suitable parameter values for derivative pricing that are needed to achieve the desired precision. We also give the competitive ratios of the two algorithms and prove their optimality. Experimental results on the performance of the algorithms are presented and analyzed as well.
Finance; Monte Carlo simulation; Deterministic online algorithm; Randomized online algorithm; Competitive ratio;
http://www.sciencedirect.com/science/article/pii/S0377221715008164
Wang, Chuan-Ju
Kao, Ming-Yang
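One simple instance of searching a simulation parameter online is a doubling strategy on the number of sample paths: keep doubling until the standard error of the estimate falls below a tolerance. This is a generic sketch of that idea, not the specific deterministic or randomized algorithms of the paper, and the Gaussian payoff below is purely illustrative:

```python
import random
import statistics

def samples_until_precise(sample_fn, tol, n0=1000, max_doublings=20):
    """Double the number of Monte Carlo samples until the standard
    error of the mean falls below tol; return (estimate, samples used)."""
    n = n0
    for _ in range(max_doublings):
        draws = [sample_fn() for _ in range(n)]
        mean = statistics.fmean(draws)
        stderr = statistics.stdev(draws) / n ** 0.5
        if stderr < tol:
            return mean, n
        n *= 2
    return mean, n  # give up: return the last estimate

random.seed(0)
price, n_used = samples_until_precise(lambda: random.gauss(5.0, 1.0), tol=0.01)
print(round(price, 2), n_used)
```

A competitive analysis, as in the paper, would compare the total work of such an online rule against an adversary that knows the required sample size in advance.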
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:1024-1032 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:1024-1032
article
Optimization and strategic behavior in a passenger–taxi service system
We study a passenger–taxi problem in this paper. The objective is to maximize social welfare and optimize the allocation of taxi market resources. We analyze the strategic behavior of passengers who decide whether to join the system or balk, in both the observable and unobservable cases. In the observable case, we obtain the optimal selfish threshold that maximizes individual revenues and give conditions for the existence of the optimal selfless threshold that maximizes social welfare. In the unobservable case, we discuss the equilibrium strategies of selfish passengers and derive the optimal arrival rate for socially concerned passengers. Further, we analyze how the government can control the number of taxis by subsidizing them or levying a tax on them.
Double-ended queueing system; Equilibrium; Optimization; Strategic behavior; Threshold;
http://www.sciencedirect.com/science/article/pii/S0377221715006645
Shi, Ying
Lian, Zhaotong
oai:RePEc:eee:ejores:v:249:y:2016:i:3:p:968-982 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:3:p:968-982
article
Boundary games: How teams of OR practitioners explore the boundaries of intervention
An operational research (OR) practitioner designing an intervention needs to engage in a practical process for choosing methods and implementing them. When a team of OR practitioners does this, and/or clients and stakeholders are involved, the social dynamics of designing the approach can be complex. So far, hardly any theory has been provided to support our understanding of these social dynamics. To this end, our paper offers a theory of ‘boundary games’. It is proposed that decision making on the configuration of the OR approach is shaped by communications concerning boundary judgements. These communications involve the OR practitioners in the team (and other participants, when relevant) ‘setting’, ‘following’, ‘enhancing’, ‘wandering outside’, ‘challenging’ and ‘probing’ boundaries concerning the nature of the context and the methods to be used. Empirical vignettes are provided of a project where three OR practitioners with different forms of methodological expertise collaborated on an intervention to support a Regional Council in New Zealand. In deciding how to approach a problem structuring workshop where the Regional Council employees would be participants, the OR team had to negotiate their methodological boundaries in some detail. The paper demonstrates that the theory of boundary games helps to analyse and describe the shifts in thinking that take place in this kind of team decision making. A number of implications for OR practitioners are discussed, including how this theory can contribute to reflective practice and improve awareness of what is happening during communications with OR colleagues, clients and participants.
Behavioural OR; Boundary games; Critical systems thinking; Multimethodology; Process of OR;
http://www.sciencedirect.com/science/article/pii/S0377221715007237
Velez-Castiblanco, Jorge
Brocklesby, John
Midgley, Gerald
oai:RePEc:eee:ejores:v:250:y:2016:i:1:p:239-250 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:250:y:2016:i:1:p:239-250
article
Humanitarian logistics network design under mixed uncertainty
In this paper, we address a two-echelon humanitarian logistics network design problem involving multiple central warehouses (CWs) and local distribution centers (LDCs) and develop a novel two-stage scenario-based possibilistic-stochastic programming (SBPSP) approach. The research is motivated by the urgent need for designing a relief network in Tehran in preparation for potential earthquakes to cope with the main logistical problems in pre- and post-disaster phases. During the first stage, the locations for CWs and LDCs are determined along with the prepositioned inventory levels for the relief supplies. In this stage, inherent uncertainties in both supply and demand data as well as the availability level of the transportation network's routes after an earthquake are taken into account. In the second stage, a relief distribution plan is developed based on various disaster scenarios aiming to minimize: total distribution time, the maximum weighted distribution time for the critical items, total cost of unused inventories and weighted shortage cost of unmet demands. A tailored differential evolution (DE) algorithm is developed to find good enough feasible solutions within a reasonable CPU time. Computational results using real data reveal promising performance of the proposed SBPSP model in comparison with the existing relief network in Tehran. The paper contributes to the literature on optimization based design of relief networks under mixed possibilistic-stochastic uncertainty and supports informed decision making by local authorities in increasing resilience of urban areas to natural disasters.
Humanitarian logistics; Integrated stock prepositioning and relief distribution; Mixed possibilistic-stochastic programming; Differential evolution;
http://www.sciencedirect.com/science/article/pii/S0377221715008152
Tofighi, S.
Torabi, S.A.
Mansouri, S.A.
oai:RePEc:eee:ejores:v:249:y:2016:i:2:p:577-591 2015-12-17 RePEc:eee:ejores
RePEc:eee:ejores:v:249:y:2016:i:2:p:577-591
article
An approach using SAT solvers for the RCPSP with logical constraints
This paper presents a new solution approach to the resource-constrained project scheduling problem (RCPSP) in the presence of three types of logical constraints. Apart from the traditional AND constraints with minimal time-lags, the precedence relations are extended to OR constraints and bidirectional (BI) relations. These logical constraints extend the set of relations between pairs of activities and make this RCPSP variant somewhat different from the traditional RCPSP studied in the literature. It is known that the RCPSP with AND constraints, and hence its extension with OR and BI constraints, is NP-hard.
Project scheduling; RCPSP; AND/OR/BI constraints; SAT;
http://www.sciencedirect.com/science/article/pii/S0377221715008000
Vanhoucke, Mario
Coelho, José
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:281-291 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:281-291
article
Should retail stores also RFID-tag ‘cheap’ items?
Despite RFID implementations in a wide variety of applications, there are very few instances where every item sold at a retail store is RFID-tagged. While the business case for RFID-tagging expensive items may be somewhat clear, we claim that even ‘cheap’ items (i.e., those that cost less than an RFID tag) should be RFID-tagged for retailers to benefit from the efficiencies associated with item-level visibility. We study the relative price premiums a retailer with RFID-tagged items can command, as well as the retailer’s profit, to illustrate the significance of item-level RFID-tagging of both cheap and expensive items at a retail store. Our results indicate that, under certain conditions, item-level RFID-tagging of items that cost less than an RFID tag has the potential to generate significant benefits for the retailer. The retailer is also better off tagging all items, regardless of their price relative to that of an RFID tag, compared to the case where only the expensive item is RFID-tagged.
RFID; Partial and complete tagging; Retailing;
http://www.sciencedirect.com/science/article/pii/S0377221713007352
Piramuthu, Selwyn
Wochner, Sina
Grunow, Martin
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:125-130 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:125-130
article
A multi-objective approach to supply chain visibility and risk
This paper investigates the twin effects of supply chain visibility (SCV) and supply chain risk (SCR) on supply chain performance. Operationally, SCV has been linked to the capability of sharing timely and accurate information on exogenous demand, quantity and location of inventory, transport related cost, and other logistics activities throughout an entire supply chain. Similarly, SCR can be viewed as the likelihood that an adverse event has occurred during a certain epoch within a supply chain, together with the associated consequences of that event for supply chain performance. Given the multi-faceted attributes of the decision-making process, which involves many stages, objectives, and stakeholders, this aspect of the supply chain calls for research using a fuzzy multi-objective decision-making approach to model SCV and SCR from an operational perspective. Hence, our model incorporates the objectives of SCV maximization, SCR minimization, and cost minimization under constraints on budget, customer demand, production capacity, and supply availability. A numerical example is used to demonstrate the applicability of the model. Our results suggest that decision makers tend to mitigate SCR first and then enhance SCV.
Supply chain management; Multiple objective programming; Supply chain visibility; Supply chain risk;
http://www.sciencedirect.com/science/article/pii/S0377221713007212
Yu, Min-Chun
Goh, Mark
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:131-144 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:131-144
article
Revenue deficiency under second-price auctions in a supply-chain setting
Consider a firm, called the buyer, that satisfies its demand over two periods by assigning both demands to a supplier via a second-price procurement auction; call this the Standard auction. In the hope of lowering its purchase cost, the firm is considering an alternative procedure in which it will also allow bids on each period individually, where there can be either one or two winners covering the two demands; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction can in fact result in a higher cost to the buyer. We provide a bound on how much greater the buyer’s cost can be in the Multiple Winner auction and show that this bound is tight. We then sharpen this bound for two scenarios that can arise when the buyer announces his demands close to the beginning of the demand horizon. Under a monotonicity condition, we achieve a further sharpening of the bound in one of the scenarios. Finally, this monotonicity condition allows us to generalize this bound to the T-period case in which bids are allowed on any subset of period demands.
Procurement; Supply chain; Second-price auction; VCG mechanism; Revenue deficiency;
http://www.sciencedirect.com/science/article/pii/S0377221713006437
Romero Morales, Dolores
Steinberg, Richard
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:16-22 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:16-22
article
Two-stage stochastic linear programs with incomplete information on uncertainty
Two-stage stochastic linear programming is a classical model in operations research. The usual approach to this model requires detailed information on the distributions of the random variables involved. In this paper, we only assume the availability of first- and second-moment information on the random variables. By using duality of semi-infinite programming and adopting a linear decision rule, we show that a deterministic equivalent of the two-stage problem can be reformulated as a second-order cone optimization problem. Preliminary numerical experiments are presented to demonstrate the computational advantage of this approach.
Stochastic programming; Linear decision rule; Second order cone optimization;
http://www.sciencedirect.com/science/article/pii/S0377221713006413
Ang, James
Meng, Fanwen
Sun, Jie
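The linear decision rule step can be sketched schematically. The notation below is generic rather than the paper's, and the reduction shown is the simpler ellipsoidal-support variant of the argument, not the exact semi-infinite duality used by the authors:

```latex
% Restrict the second-stage recourse to an affine function of the
% uncertain vector \xi (the linear decision rule):
y(\xi) = y^{0} + Y\xi .
% A semi-infinite second-stage constraint such as
a^{\top}x + b^{\top}\bigl(y^{0} + Y\xi\bigr) \ge 0
\qquad \forall\, \xi \in \bigl\{\mu + \Sigma^{1/2}u : \|u\|_{2} \le \kappa \bigr\}
% holds for all such \xi if and only if
a^{\top}x + b^{\top}\bigl(y^{0} + Y\mu\bigr) \;\ge\; \kappa\,\bigl\|\Sigma^{1/2} Y^{\top} b\bigr\|_{2},
% which is a second-order cone constraint in the decision variables
% (x, y^{0}, Y).
```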
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:23-33 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:23-33
article
On distributional robust probability functions and their computations
Consider a random vector and assume that a set of its moment information is known. Among all possible distributions obeying the given moment constraints, the envelope of the probability distribution functions is introduced in this paper as the distributionally robust probability function. We show that such a function is computable in the bivariate case under some conditions. Connections to existing results in the literature and applications in risk management are discussed as well.
Risk management; Distributional robust; Moment bounds; Semidefinite programming (SDP); Conic programming;
http://www.sciencedirect.com/science/article/pii/S0377221713007285
Wong, Man Hong
Zhang, Shuzhong
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:114-124 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:114-124
article
Product variety and channel structure strategy for a retailer-Stackelberg supply chain
Motivated by the observations that the direct sales channel is increasingly used for customized products and that retailers wield channel leadership, we develop in this paper a retailer-Stackelberg pricing model to investigate the product variety and channel structure strategies of a manufacturer in a circular spatial market. To avoid channel conflict, we consider the commonly observed case where the indirect channel sells standard products whereas the direct channel offers custom products. Our analytical results indicate that if the reservation price in the indirect channel is sufficiently low, adding the direct channel raises the unit wholesale price and the retail price in the indirect channel, owing to customization in the direct channel. Although dual channels may dominate the single indirect channel for the retailer, we find that the manufacturer's motivation to use dual channels decreases with the unit production cost, while it increases with (i) the marginal cost of variety, (ii) the retailer's marginal selling cost, and (iii) the customer's fit cost. Interestingly, our equilibrium analysis demonstrates that the manufacturer is more likely to use dual channels under retailer-Stackelberg channel leadership than under manufacturer-Stackelberg leadership if offering greater variety is very expensive. When offering greater variety is inexpensive, the decentralization of the indirect channel may invert the manufacturer's channel structure decision. Furthermore, endogenizing product variety will also invert the channel structure decision if the standard product's reservation price is sufficiently low.
Supply chain management; Product variety; Customization; Dual channels; Game theory;
http://www.sciencedirect.com/science/article/pii/S0377221713007224
Xiao, Tiaojun
Choi, Tsan-Ming
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:208-219 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:208-219
article
Optimal two-phase vaccine allocation to geographically different regions under uncertainty
In this article, we consider a decision process in which vaccination is performed in two phases to contain the outbreak of an infectious disease in a set of geographic regions. In the first phase, a limited number of vaccine doses are allocated to each region; in the second phase, additional doses may be allocated to regions in which the epidemic has not been contained. We develop a simulation model to capture the epidemic dynamics in each region for different vaccination levels. We formulate the vaccine allocation problem as a two-stage stochastic linear program (2-SLP) and use the special problem structure to reduce it to a linear program with a size similar to that of the first-stage problem. We also present a Newsvendor model formulation of the problem, which provides a closed-form solution for the optimal allocation. We construct test cases motivated by vaccine planning for seasonal influenza in the state of North Carolina. Using the 2-SLP formulation, we estimate the value of the stochastic solution and the expected value of perfect information. We also propose and test an easy-to-implement heuristic for vaccine allocation. We show that our proposed two-phase vaccination policy potentially results in a lower attack rate and a considerable saving in vaccine production and administration cost.
OR in health services; Epidemic control; Two-phase vaccine allocation; Stochastic linear program; Newsvendor model; Value of stochastic solution;
http://www.sciencedirect.com/science/article/pii/S0377221713006929
Yarmand, Hamed
Ivy, Julie S.
Denton, Brian
Lloyd, Alun L.
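The closed-form Newsvendor allocation mentioned above is, in its generic textbook form, a critical-fractile quantile of the demand distribution. The sketch below uses invented costs and a Normal demand fit rather than the paper's actual calibration:

```python
from statistics import NormalDist

def critical_fractile(underage_cost, overage_cost):
    """Classical newsvendor service level c_u / (c_u + c_o)."""
    return underage_cost / (underage_cost + overage_cost)

def dose_allocation(mu, sigma, underage_cost, overage_cost):
    """Stock level for Normal(mu, sigma) demand: the critical-fractile
    quantile of the demand distribution."""
    beta = critical_fractile(underage_cost, overage_cost)
    return NormalDist(mu, sigma).inv_cdf(beta)

# Hypothetical region: mean demand 1000 doses, sd 200, and a shortage
# (unmet demand) penalized 9x the cost of an unused dose.
print(round(dose_allocation(1000, 200, 9, 1)))  # -> 1256
```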
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:159-170 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:159-170
article
A scenario-based stochastic model for supplier selection in global context with multiple buyers, currency fluctuation uncertainties, and price discounts
Supplier networks in a global context, with price discounts and uncertain currency exchange rate fluctuations, have become critical in today’s world economy. We study the problem of supplier selection in the presence of uncertain fluctuations of currency exchange rates and price discounts. We specifically consider a buyer with multiple sites sourcing a product from heterogeneous suppliers and address both the supplier selection and purchased quantity decisions. Suppliers are located worldwide and pricing is offered in the suppliers’ local currencies. Exchange rates from the local currencies of the suppliers to the standard currency of the buyer are subject to uncertain fluctuations over time. In addition, suppliers offer discounts as a function of the total quantity bought by the different customer sites over the time horizon, irrespective of the quantity purchased by each site.
Supplier selection; Currency fluctuation uncertainty; Multiple buyers; Price discounts; Global purchasing;
http://www.sciencedirect.com/science/article/pii/S0377221713006851
Hammami, Ramzi
Temponi, Cecilia
Frein, Yannick
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:234-245 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:234-245
article
Improvements to a large neighborhood search heuristic for an integrated aircraft and passenger recovery problem
Because most commercial passenger airlines operate on a hub-and-spoke network, small disturbances can cause major disruptions in their planned schedules and have a significant impact on their operational costs and performance. When a disturbance occurs, the airline often applies a recovery policy in order to quickly resume normal operations. We present in this paper a large neighborhood search heuristic to solve an integrated aircraft and passenger recovery problem. The problem consists of creating new aircraft routes and passenger itineraries to produce a feasible schedule during the recovery period. The method is based on an existing heuristic, developed in the context of the 2009 ROADEF Challenge, which alternates between three phases: construction, repair and improvement. We introduce a number of refinements in each phase so as to perform a more thorough search of the solution space. The resulting heuristic performs very well on the instances introduced for the challenge, obtaining the best known solution for 17 out of 22 instances within five minutes of computing time and 21 out of 22 instances within 10 minutes of computing time.
Airline recovery; Fleet assignment; Aircraft routing; Passenger itineraries; Large neighborhood search;
http://www.sciencedirect.com/science/article/pii/S0377221713007182
Sinclair, Karine
Cordeau, Jean-François
Laporte, Gilbert
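The destroy-and-repair loop underlying large neighborhood search can be sketched generically. The operators below are placeholders for the airline-specific construction, repair and improvement moves of the paper, and the toy problem (walking an integer toward a target) is purely illustrative:

```python
import random

def large_neighborhood_search(initial, destroy_ops, repair_ops, cost,
                              iters=500, seed=0):
    """Generic LNS loop: repeatedly destroy part of the current solution,
    repair it, accept non-worsening candidates, and track the best."""
    rng = random.Random(seed)
    best = current = initial
    for _ in range(iters):
        partial = rng.choice(destroy_ops)(current, rng)
        candidate = rng.choice(repair_ops)(partial, rng)
        if cost(candidate) <= cost(current):
            current = candidate
            if cost(candidate) < cost(best):
                best = candidate
    return best

# Toy problem: move an integer toward 7 by random +/- steps.
destroy = [lambda x, rng: x + rng.choice([-2, -1, 1, 2])]
repair = [lambda x, rng: x]  # nothing to repair in this toy example
print(large_neighborhood_search(0, destroy, repair, lambda x: (x - 7) ** 2))
```

In a real recovery setting, a destroy operator would remove a subset of flights or itineraries and a repair operator would reinsert them feasibly.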
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:263-272 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:263-272
article
Pricing and market segmentation using opaque selling mechanisms
In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. We develop a stylized model of consumer choice that illustrates the role of opaque selling in market segmentation. We model a firm selling a product via three selling channels: a regular full information channel, an opaque posted price channel and an opaque bidding channel where consumers specify the price they are willing to pay. We illustrate the segmentation created by opaque selling as well as compare optimal revenues and prices for sellers using regular full information channels with those using opaque selling mechanisms in conjunction with regular channels. We also study the segmentation and policy changes induced by capacity constraints.
Revenue management; Marketing: pricing; Segmentation; Auctions; Buyer behavior;
http://www.sciencedirect.com/science/article/pii/S0377221713006838
Anderson, Chris K.
Xie, Xiaoqing
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:246-262 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:246-262
article
Stochastic models for strategic resource allocation in nonprofit foreclosed housing acquisitions
Increased rates of mortgage foreclosures in the U.S. have had devastating social and economic impacts during and after the 2008 financial crisis. As part of the response to this problem, nonprofit organizations such as community development corporations (CDCs) have been trying to mitigate the negative impacts of mortgage foreclosures by acquiring and redeveloping foreclosed properties. We consider the strategic resource allocation decisions for these organizations which involve budget allocations to different neighborhoods under cost and return uncertainty. Based on interactions with a CDC, we develop stochastic integer programming based frameworks for this decision problem, and assess the practical value of the models by using real-world data. Both policy-related and computational analyses are performed, and several insights such as the trade-offs between different objectives, and the efficiency of different solution approaches are presented.
OR in societal problem analysis; OR in strategic planning; Foreclosures; Stochastic programming; Resource allocation;
http://www.sciencedirect.com/science/article/pii/S0377221713007248
Bayram, Armagan
Solak, Senay
Johnson, Michael
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:145-158 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:145-158
article
Issues Mapping: A problem structuring method for addressing science and technology conflicts
There are new opportunities for the application of problem structuring methods to address science and technology risk conflicts through stakeholder dialogue. Most previous approaches to addressing risk conflicts have been developed from a traditional risk communication perspective, which tends to construct engagement between stakeholders based on the assumption that scientists evaluate technologies using facts, and lay participants do so based on their values. ‘Understanding the facts’ is generally privileged, so the value framings of experts often remain unexposed, and the perspectives of lay participants are marginalized. When this happens, risk communication methodologies fail to achieve authentic dialogue and can exacerbate conflict. This paper introduces ‘Issues Mapping’, a problem structuring method that enables dialogue by using visual modelling techniques to clarify issues and develop mutual understanding between stakeholders. A case study of the first application of Issues Mapping is presented, which engaged science and community protagonists in the genetic engineering debate in New Zealand. Participant and researcher evaluations suggest that Issues Mapping helped to break down stereotypes of both scientists and environmental activists; increased mutual understanding; reduced conflict; identified common ground; started building trust; and supported the emergence of policy options that all stakeholders in the room could live with. The paper ends with some reflections and priorities for further research.
Problem structuring methods; Issues mapping; Science and technology conflicts; Risk communication; Dialogue; Genetic engineering;
http://www.sciencedirect.com/science/article/pii/S0377221713006772
Cronin, Karen
Midgley, Gerald
Jackson, Laurie Skuba
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:220-233 2013-11-21 RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:220-233
article
Optimal relay node placement in delay constrained wireless sensor network design
The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in Wireless Sensor Network (WSN) design. In a WSN, Sensor Nodes are placed across a target geographical region to detect relevant signals. These signals are communicated to a central location, known as the Base Station, for further processing. The DCRNPP aims to place the minimum number of additional Relay Nodes at a subset of Candidate Relay Node locations in such a manner that signals from the various Sensor Nodes can be communicated to the Base Station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in the form of node-cut inequalities. We also derive conditions under which these inequalities are facet defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve the DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution for the problem that serves as an initial incumbent in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.
Relay node placement; Cutting plane/facet; Polyhedral theory; Projection; Branch and cut; Lagrangian-relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221713006966
Nigam, Ashutosh
Agarwal, Yogesh K.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:64-742013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:64-74
article
The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost
We study a scheduling problem with rejection on a single serial batching machine, where the objectives are to minimize the total completion time and the total rejection cost. We consider four different problem variations. The first is to minimize the sum of the two objectives. The second and the third are to minimize one objective, given an upper bound on the value of the other objective, and the last is to find a Pareto-optimal solution for each Pareto-optimal point. We provide a polynomial time procedure to solve the first variation and show that the three other variations are NP-hard. For solving the three NP-hard problems, we construct a pseudo-polynomial time algorithm. Finally, for one of the NP-hard variants of the problem we propose an FPTAS, provided some conditions hold.
Batch scheduling; Bicriteria scheduling; Rejection; Total completion time;
http://www.sciencedirect.com/science/article/pii/S0377221713006784
Shabtay, Dvir
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:84-932013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:84-93
article
Branch-and-price algorithm for the Resilient Multi-level Hop-constrained Network Design
In this work, we investigate the Resilient Multi-level Hop-constrained Network Design (RMHND) problem, which consists of designing hierarchical telecommunication networks, assuring resilience against random failures and maximum delay guarantees in the communication. Three mathematical formulations are proposed and algorithms based on the proposed formulations are evaluated. A Branch-and-price algorithm, which is based on a delayed column generation approach within a Branch-and-bound framework, is shown to work well, finding optimal solutions for practical telecommunication scenarios within reasonable time. Computational results show that algorithms based on the compact formulations are able to prove optimality for instances of limited size in the scenarios of interest, while the proposed Branch-and-price algorithm exhibits a much better performance.
Integer programming; Branch-and-price; Multi-level; Resilience; Hop-constrained;
http://www.sciencedirect.com/science/article/pii/S0377221713006899
Souza, Fernanda S.H.
Gendreau, Michel
Mateus, Geraldo R.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:34-422013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:34-42
article
A geometric characterisation of the quadratic min-power centre
For a given set of nodes in the plane the min-power centre is a point such that the cost of the star centred at this point and spanning all nodes is minimised. The cost of the star is defined as the sum of the costs of its nodes, where the cost of a node is an increasing function of the length of its longest incident edge. The min-power centre problem provides a model for optimally locating a cluster-head amongst a set of radio transmitters; however, the problem can also be formulated within a bicriteria location model involving the 1-centre and a generalised Fermat-Weber point, making it suitable for a variety of facility location problems. We use farthest point Voronoi diagrams and Delaunay triangulations to provide a complete geometric description of the min-power centre of a finite set of nodes in the Euclidean plane when cost is a quadratic function. This leads to a new linear-time algorithm for its construction when the convex hull of the nodes is given. We also provide an upper bound for the performance of the centroid as an approximation to the quadratic min-power centre. Finally, we briefly describe the relationship between solutions under quadratic cost and solutions under more general cost functions.
Networks; Power efficient range assignment; Wireless ad hoc networks; Generalised Fermat–Weber problem; Farthest point Voronoi diagrams;
http://www.sciencedirect.com/science/article/pii/S0377221713007406
Brazil, M.
Ras, C.J.
Thomas, D.A.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:105-1132013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:105-113
article
Competition for cores in remanufacturing
We study competition between an original equipment manufacturer (OEM) and an independently operating remanufacturer (IO). Different from the existing literature, the OEM and IO compete not only for selling their products but also for collecting returned products (cores) through their acquisition prices. We consider a two-period model with manufacturing by the OEM in the first period, and manufacturing as well as remanufacturing in the second period. We find the optimal policies for both players by establishing a Nash equilibrium in the second period, and then determine the optimal manufacturing decision for the OEM in the first period. This leads to a number of managerial insights. One interesting result is that the acquisition price of the OEM only depends on its own cost structure, and not on the acquisition price of the IO. Further insights are obtained from a numerical investigation. We find that when the cost benefits of remanufacturing diminish and the IO has a greater chance of collecting the available cores, the OEM manufactures less in the first period, as the market in the second period grows, to protect its market share. Finally, we consider the case where consumers have a lower willingness to pay for remanufactured products and find that in that case remanufacturing becomes less profitable overall.
Inventory; Remanufacturing; Competition; Nash equilibrium;
http://www.sciencedirect.com/science/article/pii/S0377221713006905
Bulmus, Serra Caner
Zhu, Stuart X.
Teunter, Ruud
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:75-832013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:75-83
article
Optimal single machine scheduling of products with components and changeover cost
We consider the problem of scheduling products with components on a single machine, where changeovers incur fixed costs. The objective is to minimize the weighted sum of total flow time and changeover cost. We provide properties of optimal solutions and develop an explicit characterization of optimal sequences, while showing that this characterization has recurrent properties. Our structural results have interesting implications for practitioners, primarily that the structure of optimal sequences is robust to changes in demand.
Scheduling; Single machine; Components; Flow time; Changeover cost;
http://www.sciencedirect.com/science/article/pii/S0377221713006814
Zhou, Feng
Blocher, James D.
Hu, Xinxin
Sebastian Heese, H.
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:184-1922013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:184-192
article
Portfolio optimization in a regime-switching market with derivatives
We consider the optimal asset allocation problem in a continuous-time regime-switching market. The problem is to maximize the expected utility of the terminal wealth of a portfolio that contains an option, an underlying stock and a risk-free bond. The difficulty that arises in our setting is finding a way to represent the return of the option by the returns of the stock and the risk-free bond in an incomplete regime-switching market. To overcome this difficulty, we introduce a functional operator to generate a sequence of value functions, and then show that the optimal value function is the limit of this sequence. The explicit form of each function in the sequence can be obtained by solving an auxiliary portfolio optimization problem in a single-regime market. The original optimal value function can then be approximated by taking the limit. Additionally, we show that the optimal value function is a solution to a dynamic programming equation, which leads to the explicit forms for the optimal value function and the optimal portfolio process. Furthermore, we demonstrate that, as long as the current state of the Markov chain is given, it is still optimal for an investor in a multiple-regime market to simply allocate his/her wealth in the same way as in a single-regime market.
Functional operator; Elasticity approach; Portfolio optimization; Regime switching; Dynamic programming principle;
http://www.sciencedirect.com/science/article/pii/S0377221713007170
Fu, Jun
Wei, Jiaqin
Yang, Hailiang
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:193-2072013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:193-207
article
Customer acceptance mechanisms for home deliveries in metropolitan areas
Efficient and reliable home delivery is crucial for the economic success of online retailers. This is especially challenging for attended home deliveries in metropolitan areas where logistics service providers face congested traffic networks and customers expect deliveries in tight delivery time windows. Our goal is to develop and compare strategies that maximize the profits of a logistics service provider by accepting as many delivery requests as possible, while assessing the potential impact of a request on the service quality of a delivery tour. Several acceptance mechanisms are introduced, differing in the amount of travel time information that is considered in the decision of whether a delivery request can be accommodated or not. A real-world inspired simulation framework is used for comparison of acceptance mechanisms with regard to profits and service quality. Computational experiments utilizing this simulation framework investigate the effectiveness of acceptance mechanisms and help identify when more advanced travel time information may be worth the additional data collection and computational efforts.
Routing; Home delivery; Feasibility check; Congestion; City logistics;
http://www.sciencedirect.com/science/article/pii/S0377221713006930
Ehmke, Jan Fabian
Campbell, Ann Melissa
oai:RePEc:eee:ejores:v:233:y:2014:i:1:p:1-152013-11-21RePEc:eee:ejores
RePEc:eee:ejores:v:233:y:2014:i:1:p:1-15
article
Multimodal freight transportation planning: A literature review
Multimodal transportation offers an advanced platform for more efficient, reliable, flexible, and sustainable freight transportation. Planning such a complicated system provides interesting areas in Operations Research. This paper presents a structured overview of the multimodal transportation literature from 2005 onward. We focus on the traditional strategic, tactical, and operational levels of planning, where we present the relevant models and their developed solution techniques. We conclude our review paper with an outlook to future research directions.
Freight transportation planning; Multimodal; Intermodal; Co-modal; Synchromodal; Review;
http://www.sciencedirect.com/science/article/pii/S0377221713005638
SteadieSeifi, M.
Dellaert, N.P.
Nuijten, W.
Van Woensel, T.
Raoufi, R.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:122-1292014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:122-129
article
Production planning and pricing policy in a make-to-stock system with uncertain demand subject to machine breakdowns
We consider a make-to-stock system served by an unreliable machine that produces one type of product, which is sold to customers at one of two possible prices depending on the inventory level at the time when a customer arrives (i.e., the decision point). The system manager must determine the production level and selling price at each decision point. We first show that the optimal production and pricing policy is a threshold control, which is characterized by three threshold parameters under both the long-run discounted profit and long-run average profit criteria. We then establish the structural relationships among the three threshold parameters: production is off when inventory is above its threshold, and the optimal selling price should be low when inventory is above the corresponding threshold, whether the machine is down or up. Finally, we provide some numerical examples to illustrate the analytical results and gain additional insights.
Production planning; Dynamic pricing; Machine breakdown; Uncertain demand; Inventory control;
http://www.sciencedirect.com/science/article/pii/S0377221714002318
Shi, Xiutian
Shen, Houcai
Wu, Ting
Cheng, T.C.E.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:208-2202014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:208-220
article
The partial adjustment valuation approach with dynamic and variable speeds of adjustment to evaluating and measuring the business value of information technology
In this paper we develop the partial adjustment valuation approach, in which the speeds of (partial) adjustment are assumed to be dynamic and variable rather than fixed or constant, for assessing the value of information technology (IT). The speeds of adjustment are a function of a set of macroeconomic and/or microeconomic variables, observed and unobserved, and hence become time-varying or dynamic and variable over time. The approach is illustrated by a practical application. The results imply that constant speeds of adjustment may overestimate or underestimate the actual speeds of adjustment and, accordingly, may miscalculate the values of performance metrics. Thus, the partial adjustment valuation approach with dynamic and variable speeds of adjustment is more realistic and, more importantly, captures the changing patterns and trends of the adjustment speeds and the performance measures as well. In contrast, the partial adjustment valuation approach with constant speeds of adjustment fails to adequately explain the dynamic production process of a decision making unit. The empirical evidence also conflicts with the lopsided view that the productivity paradox does not exist in developed countries.
Theory of partial adjustment; Constant speeds of adjustment; Dynamic and variable speeds of adjustment; IT productivity paradox; Non-linear least squares;
http://www.sciencedirect.com/science/article/pii/S0377221714002331
Lin, Winston T.
Kao, Ta-Wei (Daniel)
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:814-8232014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:814-823
article
The Red–Blue transportation problem
This paper considers the Red–Blue Transportation Problem (Red–Blue TP), a generalization of the transportation problem where supply nodes are partitioned into two sets and so-called exclusionary constraints are imposed. We encountered a special case of this problem in a hospital context, where patients need to be assigned to rooms. We establish the problem’s complexity, and we compare two integer programming formulations. Furthermore, a maximization variant of Red–Blue TP is presented, for which we propose a constant-factor approximation algorithm. We conclude with a computational study on the performance of the integer programming formulations and the approximation algorithms, by varying the problem size, the partitioning of the supply nodes, and the density of the problem.
Transportation problem; Exclusionary constraints; Complexity; Approximation; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001908
Vancroonenburg, Wim
Della Croce, Federico
Goossens, Dries
Spieksma, Frits C.R.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:1-172014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:1-17
article
A survey of recent research on location-routing problems
The design of distribution systems raises hard combinatorial optimization problems. For instance, facility location problems must be solved at the strategic decision level to place factories and warehouses, while vehicle routes must be built at the tactical or operational levels to supply customers. In fact, location and routing decisions are interdependent and studies have shown that the overall system cost may be excessive if they are tackled separately. The location-routing problem (LRP) integrates the two kinds of decisions. Given a set of potential depots with opening costs, a fleet of identical vehicles and a set of customers with known demands, the classical LRP consists in opening a subset of depots, assigning customers to them and determining vehicle routes, to minimize a total cost including the cost of open depots, the fixed costs of vehicles used, and the total cost of the routes. Since the last comprehensive survey on the LRP, published by Nagy and Salhi (2007), the number of articles devoted to this problem has grown quickly, calling for a review of this new work. This paper analyzes the recent literature (72 articles) on the standard LRP and new extensions such as several distribution echelons, multiple objectives or uncertain data. Results of state-of-the-art metaheuristics are also compared on standard sets of instances for the classical LRP, the two-echelon LRP and the truck and trailer problem.
Location-routing problem; Facility location; Vehicle routing; Distribution; Truck and trailer routing problem;
http://www.sciencedirect.com/science/article/pii/S0377221714000071
Prodhon, Caroline
Prins, Christian
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1095-11042014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1095-1104
article
Developing an early warning system to predict currency crises
The purpose of this paper is to develop an early warning system to predict currency crises. In this study, a data set covering the Turkish economy over the period January 1992–December 2011 is used, and an early warning system is developed with artificial neural networks (ANN), decision trees, and logistic regression models. The Financial Pressure Index (FPI) is an aggregate value composed of the percentage changes in the dollar exchange rate, the gross foreign exchange reserves of the Central Bank, and the overnight interest rate. In this study, FPI is the dependent variable, and thirty-two macroeconomic indicators are the independent variables. The three models, tested on Turkish crisis cases, gave clear signals that predicted the 1994 and 2001 crises 12 months in advance. Considering all three prediction model results, Turkey’s economy is not expected to have a currency crisis (ceteris paribus) until the end of 2012. This study is unique in that the decision support model it develops uses basic macroeconomic indicators to predict crises up to a year before they actually happen, with an accuracy rate of approximately 95%. It also ranks the leading factors of a currency crisis with regard to their importance in predicting the crisis.
Early warning system; Currency crisis; Perfect signal; Artificial neural networks (ANN); Decision tree; Logistic regression;
http://www.sciencedirect.com/science/article/pii/S0377221714001829
Sevim, Cuneyt
Oztekin, Asil
Bali, Ozkan
Gumus, Serkan
Guresen, Erkam
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:313-3262014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:313-326
article
Using a partially observable Markov chain model to assess colonoscopy screening strategies – A cohort study
Colorectal cancer (CRC) is notoriously hard to combat due to its high incidence and mortality rates. However, with improved screening technology and better understanding of disease pathways, CRC is more likely to be detected at an early stage and thus more likely to be cured. Among the available screening methods, colonoscopy is most commonly used in the U.S. because of its capability to visualize the entire colon and remove the polyps it detects. The current national guideline for colonoscopy screening recommends an observation-based screening strategy. Nevertheless, there is scant research studying the cost-effectiveness of the recommended observation-based strategy and its variants. In this paper, we describe a partially observable Markov chain (POMC) model which allows us to assess the cost-effectiveness of both fixed-interval and observation-based colonoscopy screening strategies. In our model, we consider detailed adenomatous polyp states and estimate state transition probabilities based on longitudinal clinical data from a specific population cohort. We conduct a comprehensive numerical study which investigates several key factors in screening strategy design, including screening frequency, initial screening age, screening end age, and screening compliance rate. We also conduct sensitivity analyses on the cost and quality of life parameters. Our numerical results demonstrate the usability of our model in assessing colonoscopy screening strategies with consideration of partial observation of true health states. This research facilitates future design of better colonoscopy screening strategies.
Medical decision making; Cancer screening; Colorectal cancer natural history; Partially observable Markov chain; Cost-effectiveness analysis;
http://www.sciencedirect.com/science/article/pii/S0377221714002185
Li, Y.
Zhu, M.
Klein, R.
Kong, N.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:836-8452014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:836-845
article
Robust combinatorial optimization with variable cost uncertainty
We present in this paper a new model for robust combinatorial optimization with cost uncertainty that generalizes the classical budgeted uncertainty set. We suppose here that the budget of uncertainty is given by a function of the problem variables, yielding an uncertainty multifunction. The new model is less conservative than the classical model and better approximates Value-at-Risk objective functions, especially for vectors with few non-zero components. An example of budget function is constructed from the probabilistic bounds computed by Bertsimas and Sim. We provide an asymptotically tight bound for the cost reduction obtained with the new model. We turn then to the tractability of the resulting optimization problems. We show that when the budget function is affine, the resulting optimization problems can be solved by solving n+1 deterministic problems. We propose combinatorial algorithms to handle problems with more general budget functions. We also adapt existing dynamic programming algorithms to solve faster the robust counterparts of optimization problems, which can be applied both to the traditional budgeted uncertainty model and to our new model. We evaluate numerically the reduction in the price of robustness obtained with the new model on the shortest path problem and on a survivable network design problem.
Combinatorial optimization; Robust optimization; Dynamic programming; Price of robustness; Budgeted uncertainty;
http://www.sciencedirect.com/science/article/pii/S0377221714002124
Poss, Michael
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1133-11412014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1133-1141
article
Operational transportation planning of freight forwarding companies in horizontal coalitions
In order to improve profitability, freight forwarding companies try to organize their operational transportation planning systematically, considering not only their own fleet but also external resources. Such external resources include vehicles from closely related subcontractors in vertical cooperation, autonomous common carriers on the transportation market, and cooperating partners in horizontal coalitions. In this paper, the transportation planning process of forwarders is studied and the benefit of including external resources is analyzed. By introducing subcontracting, the conventional routing of own vehicles is extended to an integrated operational transportation planning, which simultaneously constructs fulfillment plans with overall lowest costs using the own fleet and subcontractors’ vehicles. This is then combined with planning strategies, which aim to increase profitability by exchanging requests among members in horizontal coalitions. Computational results show considerable cost reductions using the proposed planning approach.
Logistics; Distributed decision making; Transportation planning; Subcontracting; Collaborative planning; Request exchange;
http://www.sciencedirect.com/science/article/pii/S037722171400191X
Wang, Xin
Kopfer, Herbert
Gendreau, Michel
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:871-8862014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:871-886
article
Pricing in a supply chain for auction bidding under information asymmetry
We examine a supply chain in which a manufacturer participates in a sealed-bid lowest price procurement auction through a distributor. This form of supply chain is common when a manufacturer is active in an overseas market without establishing a local subsidiary. To gain a strategic advantage in the division of profit, the manufacturer and distributor may intentionally conceal information about the underlying cost distribution of the competition. In this environment of information asymmetry, we determine the equilibrium mark-up, the ex-ante expected mark-up and expected profit of the manufacturer, and the equilibrium bid of the distributor. In unilateral communication, we demonstrate the informed agent’s advantage, which results in a higher mark-up. Under information sharing, we show that profit is equally shared among the supply chain partners and we explicitly derive the mark-up when the underlying cost distribution is uniform on [0,1]. The model and findings are illustrated by a numerical example.
Auctions/bidding; Supply chain management; Equilibrium mark-up; Information asymmetry; Double marginalization; Information sharing;
http://www.sciencedirect.com/science/article/pii/S0377221714001866
Lorentziadis, Panos L.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:270-2802014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:270-280
article
An intelligent decomposition of pairwise comparison matrices for large-scale decisions
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of elements and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, PCMs suffer from several issues limiting their application to large-scale decision problems. These limitations can be attributed to the curse of dimensionality, that is, a large number of pairwise comparisons need to be elicited from a decision maker. This issue results in inconsistent preferences due to the limited cognitive powers of decision makers. To address these limitations, this research proposes a PCM decomposition methodology that reduces the number of elicited pairwise comparisons. A binary integer program is proposed to intelligently decompose a PCM into several smaller subsets using interdependence scores among elements. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets and derive the global weights of the elements from the original PCM. As a result, the number of pairwise comparisons is reduced and the consistency of the comparisons is improved. The proposed decomposition methodology is applied to both AHP and ANP to demonstrate its advantages.
AHP; ANP; Pairwise comparison matrices; Inconsistency; Binary integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002719
Jalao, Eugene Rex
Wu, Teresa
Shunk, Dan
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1037-10532014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1037-1053
article
The effects of asset specificity on maintenance financial performance: An empirical application of Transaction Cost Theory to the medical device maintenance field
This study uses multivariate regression analysis to examine the effects of asset specificity on the financial performance of both external and internal governance structures for medical device maintenance, and investigates how the financial performance of external governance structures differs depending on whether a hospital is private or public. The hypotheses were tested using information on 764 medical devices and 62 maintenance service providers, resulting in 1403 maintenance transactions. As such, our data sample is significantly larger than those used in previous studies in this area. The results empirically support our core theoretical argument that governance financial performance is influenced by asset specificity.
Maintenance; Multivariate statistics; Econometrics in health;
http://www.sciencedirect.com/science/article/pii/S0377221714001751
Cruz, Antonio Miguel
Haugan, Gregory L.
Rincon, Adriana Maria Rios
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:233-2442014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:233-244
article
Lease expiration management for a single lease term in the apartment industry
Lease expiration management (LEM) in the apartment industry aims to control the number of lease expirations and thus achieve maximal revenue growth. We examine rental rate strategies in the context of LEM for apartment buildings that offer a single lease term and face demand uncertainty. We show that the building may incur a significant revenue loss if it fails to account for LEM in the determination of the rental rate. We also show that the use of LEM is a compromise approach between a limited optimization, where no future demand information is available, and a global optimization, where complete future demand information is available. We show that the use of LEM can enhance the apartment building’s revenue by as much as 8% when the desired number of expirations and associated costs are appropriately estimated. Numerical examples are included to illustrate the major results derived from our models and the sensitivity of the apartment building’s revenue to the desired number of expirations and associated costs.
Revenue management; Pricing; Lease expiration management; Apartment industry;
http://www.sciencedirect.com/science/article/pii/S0377221714002549
Chen, Jing
Wang, Jian
Bell, Peter C.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:185-1982014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:185-198
article
Measurement of preferences with self-explicated approaches: A classification and merge of trade-off- and non-trade-off-based evaluation types
Self-explicated approaches are popular preference measurement approaches for products with many attributes. This article classifies previous self-explicated approaches according to their evaluation types, i.e. trade-off- versus non-trade-off-based, and outlines their advantages and disadvantages. In addition, it proposes a new method, the presorted adaptive self-explicated approach that is based on Netzer and Srinivasan’s (2011) adaptive self-explicated approach and that combines trade-off- and non-trade-off-based evaluation types. Two empirical studies compare this new method with the most popular existing self-explicated approaches, including the adaptive self-explicated approach and paired comparison preference measurement. The new method overcomes the insufficient discrimination between importance weights, as usually found in non-trade-off-based evaluation types; discourages respondents’ simplification strategies, as are frequently encountered in trade-off evaluation types; is easy to implement; and yields high predictive validity compared with other popular self-explicated approaches.
Preference measurement; Self-explicated approaches; Marketing research;
http://www.sciencedirect.com/science/article/pii/S0377221714002240
Schlereth, Christian
Eckert, Christine
Schaaf, René
Skiera, Bernd
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:254-2692014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:254-269
article
Impact of productivity on cross-training configurations and optimal staffing decisions in hospitals
Cross-training of nursing staff has been used in hospitals to reduce labor cost, provide scheduling flexibility, and meet patient demand effectively. However, cross-trained nurses may not be as productive as regular nurses in carrying out their tasks because of a new work environment and unfamiliar protocols in the new unit. This leads to the research question: What is the impact of productivity on optimal staffing decisions (both regular and cross-trained) in two-unit and multi-unit systems? We investigate the effect of mean demand, cross-training cost, contract nurse cost, and productivity on a two-unit full-flexibility configuration and on three-unit partial-flexibility and chaining (minimal complete chain) configurations under centralized and decentralized decision making. Under centralized decision making, the optimal staffing and cross-training levels are determined simultaneously, while under decentralized decision making, the optimal staffing levels are determined without any knowledge of future cross-training programs. We use two-stage stochastic programming to derive closed form equations and determine the optimal number of cross-trained nurses for two units facing stochastic demand following general, continuous distributions. We find that there exists a productivity level (threshold) beyond which the optimal number of cross-trained nurses declines, as fewer cross-trained nurses are sufficient to obtain the benefit of staffing flexibility. When we account for productivity variations, the chaining configuration provides on average 1.20% cost savings over the partial flexibility configuration, while centralized decision making averages 1.13% cost savings over decentralized decision making.
Cross-training; Productivity; Chaining; Healthcare; Stochastic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002720
Gnanlet, Adelina
Gilland, Wendell G.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1008-10202014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1008-1020
article
Computing approximate Nash equilibria in general network revenue management games
Computing optimal capacity allocations in network revenue management is computationally hard. The problem of computing exact Nash equilibria in non-zero-sum games is computationally hard, too. We present a fast heuristic that, when it cannot converge to an exact Nash equilibrium, computes an approximation to one in general network revenue management problems under competition. We also investigate the question of whether it is worth taking competition into account when making (network) capacity allocation decisions. Computational results show that the payoffs in the approximate equilibria are very close to those in exact ones. Taking competition into account never leads to lower revenue than ignoring competition, no matter what the competitor does. Since we apply linear continuous models, computation time is very short.
Network revenue management; Competition; Approximate Nash equilibria; Algorithmic game theory;
http://www.sciencedirect.com/science/article/pii/S0377221714001805
Grauberger, W.
Kimms, A.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:281-2892014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:281-289
article
Subgroup additivity in the queueing problem
Subgroup additivity requires that a rule assigns the same expected ‘relative’ utility to each agent whether an agent’s expected relative utility is calculated from the problem involving all agents or from its sub-problems with a smaller number of agents. In this paper, we investigate its implications for the queueing problem. As a result, we present characterizations of five important rules: the minimal transfer rule, the maximal transfer rule, the pivotal rule, the reward based pivotal rule, and the symmetrically balanced VCG rule. In addition to some basic axioms and subgroup additivity, the characterization results can be obtained by additionally imposing either a strategic axiom or an equity axiom.
Queueing problem; Subgroup additivity; Weak strategyproofness; Egalitarian equivalence;
http://www.sciencedirect.com/science/article/pii/S037722171400277X
Chun, Youngsub
Mitra, Manipushpak
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1142-11542014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1142-1154
article
Computationally efficient evaluation of appointment schedules in health care
We consider the problem of evaluating and constructing appointment schedules for patients in a health care facility where a single physician treats patients in a consecutive manner, as is common for general practitioners, clinics and for outpatients in hospitals. Specifically, given a fixed-length session during which a physician sees K patients, each patient has to be given an appointment time during this session in advance. Optimising a schedule with respect to patient waiting times, physician idle times, session overtime, etc. usually requires a heuristic search method involving a huge number of repeated schedule evaluations. Hence, our aim is to obtain accurate predictions at very low computational cost. This is achieved by (1) using Lindley’s recursion to allow for explicit expressions and (2) choosing a discrete-time (slotted) setting to make those expressions easy to compute. We assume general, possibly distinct, distributions for the patients’ consultation times, which allows us to account for multiple treatment types, emergencies and patient no-shows. The moments of waiting and idle times are obtained and the computational complexity of the algorithm is discussed. Additionally, we calculate the schedule’s performance in between appointments in order to assist a sequential scheduling strategy.
Stochastic Programming; Scheduling; Queueing; Complexity theory;
http://www.sciencedirect.com/science/article/pii/S0377221714002100
De Vuyst, Stijn
Bruneel, Herwig
Fiems, Dieter
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:857-8702014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:857-870
article
A hybrid wrapper–filter approach to detect the source(s) of out-of-control signals in multivariate manufacturing process
With modern data-acquisition equipment and on-line computers used during production, it is now common to monitor several correlated quality characteristics simultaneously in multivariate processes. Multivariate control charts (MCC) are important tools for monitoring multivariate processes. One difficulty encountered with multivariate control charts is the identification of the variable or group of variables that cause an out-of-control signal. Expert knowledge, either in combination with a wrapper-based supervised classifier or with a pre-filter and wrapper, is the standard approach to detecting the sources of an out-of-control signal. However, gathering expert knowledge for source identification is costly and may introduce human error. Individual univariate control charts (UCC) and decomposition of T2 statistics are also often used simultaneously to identify the sources, but these either ignore the correlations between the sources or may take more time as the number of dimensions increases. The aim of this paper is to develop a source identification approach that does not need any expert knowledge and can detect the sources of out-of-control signals with lower computational complexity. We propose a hybrid wrapper–filter based source identification approach that hybridizes a Mutual Information (MI) based Maximum Relevance (MR) filter ranking heuristic with an Artificial Neural Network (ANN) based wrapper. The Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA) has been combined with MR (MR-ANNIGMA) to utilize the knowledge about the intrinsic pattern of the quality characteristics computed by the filter for directing the wrapper search process. To compute the optimal ANNIGMA score, we also propose a Global MR-ANNIGMA using a non-functional relationship between variables, which is independent of the derivative of the objective function and has the potential to overcome the local optimization problem of ANN training.
The novelty of the proposed approaches is that they combine the advantages of both filter and wrapper approaches and do not require any expert knowledge about the sources of the out-of-control signals. The heuristic-score-based subset generation process also reduces the search space to polynomial growth, which in turn reduces computational time. The proposed approaches were tested in exhaustive experiments using both simulated and real manufacturing data and compared to existing methods including independent filter, wrapper and Multivariate EWMA (MEWMA) methods. The results indicate that the proposed approaches can identify the sources of out-of-control signals more accurately than existing approaches.
Multivariate control chart; Fault diagnosis; Global optimization; Wrapper and filter approaches;
http://www.sciencedirect.com/science/article/pii/S0377221714001672
Huda, Shamsul
Abdollahian, Mali
Mammadov, Musa
Yearwood, John
Ahmed, Shafiq
Sultan, Ibrahim
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:802-8132014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:802-813
article
An iterated local search algorithm for the single-vehicle cyclic inventory routing problem
The Single-Vehicle Cyclic Inventory Routing Problem (SV-CIRP) belongs to the class of Inventory Routing Problems (IRP) in which the supplier optimises both the distribution costs and the inventory costs at the customers. The goal of the SV-CIRP is to minimise both kinds of costs and to maximise the collected rewards, by selecting a subset of customers from a given set and determining the quantity to be delivered to each customer and the vehicle routes, while avoiding stockouts. A cyclic distribution plan should be developed for a single vehicle.
Routing; Inventory; Single-vehicle cyclic inventory routing problem; Iterated local search; Metaheuristic;
http://www.sciencedirect.com/science/article/pii/S0377221714001350
Vansteenwegen, Pieter
Mateo, Manuel
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:348-3622014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:348-362
article
Online stochastic UAV mission planning with time windows and time-sensitive targets
In this paper we simultaneously consider three extensions to the standard Orienteering Problem (OP) to model characteristics that are of practical relevance in planning reconnaissance missions of Unmanned Aerial Vehicles (UAVs). First, travel and recording times are uncertain. Secondly, the information about each target can only be obtained within a predefined time window. Due to the travel and recording time uncertainty, it is also uncertain whether a target can be reached before the end of its time window. Finally, we consider the appearance of new targets during the flight, so-called time-sensitive targets, which need to be visited immediately if possible. We tackle this online stochastic UAV mission planning problem with time windows and time-sensitive targets using a re-planning approach. To this end, we introduce the Maximum Coverage Stochastic Orienteering Problem with Time Windows (MCS-OPTW). It aims at constructing a tour with maximum expected profit of targets that were already known before the flight. Secondly, it directs the planned tour to predefined areas where time-sensitive targets are expected to appear. We have developed a fast heuristic that can be used to re-plan the tour, each time before leaving a target. In our computational experiments we illustrate the benefits of the MCS-OPTW planning approach with respect to balancing the two objectives: the expected profits of foreseen targets, and expected percentage of time-sensitive targets reached on time. We compare it to a deterministic planning approach and show how it deals with uncertainty in travel and recording times and the appearance of time-sensitive targets.
Stochastic orienteering problem; Time windows; Online planning;
http://www.sciencedirect.com/science/article/pii/S0377221714002288
Evers, Lanah
Barros, Ana Isabel
Monsuur, Herman
Wagelmans, Albert
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:921-9312014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:921-931
article
Cost, revenue and profit efficiency measurement in DEA: A directional distance function approach
Estimation of efficiency of firms in a non-competitive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In this scenario, a value-based technology becomes an appropriate reference technology against which efficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA setup to develop new directional cost- and revenue-based measures of efficiency, which are then decomposed into their respective directional value-based technical and allocative efficiencies. These new directional value-based measures are more general, and include the existing value-based measures as special cases. These measures satisfy several desirable properties of an ideal efficiency measure. These new measures are advantageous over the existing ones in terms of (1) their ability to satisfy the most important property of translation invariance; (2) choices over the use of suitable direction vectors in handling negative data; and (3) flexibility in providing the decision makers with the option of specifying preferable direction vectors to incorporate their preferences. Finally, under the condition of no prior unit price information, a directional value-based measure of profit inefficiency is developed for firms whose underlying objectives are profit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of the banking industry.
Data envelopment analysis; Cost efficiency; Revenue efficiency; Profit efficiency; Translation invariance; Directional distance function;
http://www.sciencedirect.com/science/article/pii/S0377221714001325
Sahoo, Biresh K.
Mehdiloozad, Mahmood
Tone, Kaoru
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:290-2992014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:290-299
article
Systemic risk elicitation: Using causal maps to engage stakeholders and build a comprehensive view of risks
As evidenced by numerous reported over-runs, both historical and contemporary, managing projects can be a risky business. Managers are faced with the need to work effectively with a multitude of parties and deal with a wealth of interlocking uncertainties. This paper describes a modelling process developed to assist managers facing such situations. The process helps managers to develop a comprehensive appreciation of risks and gain an understanding of the impact of the interactions between these risks through explicitly engaging a wide stakeholder base using a group support system and causal mapping process. Using a real case, the paper describes the modelling process and outcomes along with its implications, before reflecting on the insights, limitations and future research.
Problem structuring; Risk analysis; Group decision making;
http://www.sciencedirect.com/science/article/pii/S0377221714002744
Ackermann, Fran
Howick, Susan
Quigley, John
Walls, Lesley
Houghton, Tom
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:988-9962014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:988-996
article
A system dynamics model for determining the waste disposal charging fee in construction
The waste disposal charging fee (WDCF) has long been adopted to give major project stakeholders (particularly project clients and contractors) incentives to minimize solid waste and increase the recovery of wasted materials in the construction industry. However, the WDCFs currently applied in many regions of China are mostly determined by rule of thumb. Consequently, the effectiveness of implementing these WDCFs is very limited. This study aims at addressing this research gap by developing a system dynamics based model to determine an appropriate WDCF in the construction sector. The data used to test and validate the model was collected from Shenzhen in south China. Using the established model, two types of simulations were carried out. One is a base-run simulation to investigate the status quo of waste generation in Shenzhen; the other is a policy analysis simulation, with which an appropriate WDCF can be determined to reduce waste generation and landfilling, maximize waste recycling, and minimize the waste dumped inappropriately. The model developed can function as a tool to effectively determine an appropriate WDCF in Shenzhen. Further, it can also be used by other regions intending to stimulate construction waste minimization and recycling through implementing an optimal WDCF.
System dynamics; Decision making; Waste disposal charging fee (WDCF); Waste management;
http://www.sciencedirect.com/science/article/pii/S0377221714001696
Yuan, Hongping
Wang, Jiayuan
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:31-402014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:31-40
article
The Share-a-Ride Problem: People and parcels sharing taxis
New city logistics approaches are needed to ensure efficient urban mobility for both people and goods. Usually, these are handled independently in dedicated networks. This paper considers conceptual and mathematical models in which people and parcels are handled in an integrated way by the same taxi network. From a city perspective, this system has a potential to alleviate urban congestion and environmental pollution. From the perspective of a taxi company, new benefits from the parcel delivery service can be obtained. We propose two multi-commodity sharing models. The Share-a-Ride Problem (SARP) is discussed and defined in detail. A reduced problem based on the SARP is proposed: the Freight Insertion Problem (FIP) starts from a given route for handling people requests and inserts parcel requests into this route. We present MILP formulations and perform a numerical study of both static and dynamic scenarios. The obtained numerical results provide valuable insights into successfully implementing a taxi sharing service.
Transportation; Share-a-Ride Problem; Freight insertion problem; Multi-commodity; Taxi;
http://www.sciencedirect.com/science/article/pii/S0377221714002173
Li, Baoxiang
Krushinsky, Dmitry
Reijers, Hajo A.
Van Woensel, Tom
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1054-10662014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1054-1066
article
A travel time estimation model for a high-level picker-to-part system with class-based storage policies
Most previous related studies on warehouse configurations and operations investigated only single-level storage rack systems, in which neither the height of the storage racks nor the vertical movement of the picking operations is considered. However, in order to utilize space efficiently, high-level storage systems are often used in warehouses in practice. This paper presents a travel time estimation model for a high-level picker-to-part system that considers a class-based storage policy and various routing policies. The results indicate that the proposed model appears to be sufficiently accurate for practical purposes. Furthermore, the effects of storage and routing policies on the travel time and the optimal warehouse layout are discussed in the paper.
Facilities planning and design; Logistics; Warehouse layout; Order picking;
http://www.sciencedirect.com/science/article/pii/S0377221714001726
Pan, Jason Chao-Hsien
Wu, Ming-Hung
Chang, Wen-Liang
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:175-1842014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:175-184
article
Take back costs and product durability
Extended Producer Responsibility (EPR) initiatives may require a manufacturer to be responsible in the future for taking back the products it produces today. A ramification of EPR is that take back costs may influence firms’ decisions regarding product durability. In the absence of EPR, prior literature has shown that a firm may intentionally lower durability, yielding planned obsolescence. We use a two period model to examine the impact of take back costs on a manufacturer’s product durability and pricing decisions, under both selling and leasing scenarios. We show that compared to selling, leasing provides a greater incentive to raise durability, thus extending a classic insight to a setting with product take backs. Interestingly, we also show that it is possible for the optimal product durability to decrease if the stipulated take back fraction increases. In such situations, were the take back fraction tied to durability rather than a fixed fraction, we demonstrate durability can increase. We explore the impact of take backs on profits and surplus by alternatively considering products for which take back costs are either increasing or decreasing functions of durability. When increasing durability implies higher take back costs, our results demonstrate that leasing can increase durability, profits, and surplus significantly compared to selling. In contrast, when increasing durability implies a lower take back cost, there is a built-in incentive for the firm to increase durability, which can make selling more efficient (i.e., surplus enhancing) than leasing.
Durability; Take-backs; Obsolescence; Production; Pricing;
http://www.sciencedirect.com/science/article/pii/S0377221714002227
Pangburn, Michael S.
Stavrulaki, Euthemia
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:41-532014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:41-53
article
Robust optimization for interactive multiobjective programming with imprecise information applied to R&D project portfolio selection
A multiobjective binary integer programming model for R&D project portfolio selection with competing objectives is developed when problem coefficients in both objective functions and constraints are uncertain. Robust optimization is used in dealing with uncertainty while an interactive procedure is used in making tradeoffs among the multiple objectives. Robust nondominated solutions are generated by solving the linearized counterpart of the robust augmented weighted Tchebycheff programs. A decision maker’s most preferred solution is identified in the interactive robust weighted Tchebycheff procedure by progressively eliciting and incorporating the decision maker’s preference information into the solution process. An example is presented to illustrate the solution approach and performance. The developed approach can also be applied to general multiobjective mixed integer programming problems.
Multiobjective programming; Robust optimization; Imprecise information; Portfolio selection; Interactive procedures;
http://www.sciencedirect.com/science/article/pii/S0377221714002525
Hassanzadeh, Farhad
Nemati, Hamid
Sun, Minghe
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:130-1422014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:130-142
article
Coordination of production and interstage batch delivery with outsourced distribution
In this paper, we consider coordinated production and interstage batch delivery scheduling problems, where a third-party logistics provider (3PP) delivers semi-finished products in batches from one production location to another production location belonging to the same manufacturer. A batch cannot be delivered until all jobs of the batch are completed at the upstream stage. The 3PP is required to deliver each product within a time T from its release at the upstream stage. We consider two transportation modes: regular transportation, for which delivery departure times are fixed at the beginning, and express transportation, for which delivery departure times are flexible. We analyze the problems faced by the 3PP when either the manufacturer dominates or the 3PP dominates. In this context, we investigate the complexity of several problems, providing polynomiality and NP-completeness results.
Supply chain scheduling; Batching and delivery; Outsourced distribution; Two delivery modes; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714002781
Agnetis, Alessandro
Aloulou, Mohamed Ali
Fu, Liang-Liang
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:824-8352014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:824-835
article
Service differentiation through selective lateral transshipments
We consider a multi-item spare parts problem with multiple warehouses and two customer classes, where lateral transshipments are used as a differentiation tool. Specifically, premium requests that cannot be met from stock at their preferred warehouse may be satisfied from stock at other warehouses (so-called lateral transshipments). We first derive approximations for the mean waiting time per class in a single-item model with selective lateral transshipments. Next, we embed our method in a multi-item model minimizing the holding costs and costs of lateral and emergency shipments from upstream locations in the network. Compared to the option of using only selective emergency shipments for differentiation, the addition of selective lateral transshipments can lead to significant further cost savings (14% on average).
Inventory; Service differentiation; Lateral transshipments; Spare parts;
http://www.sciencedirect.com/science/article/pii/S037722171400188X
Alvarez, E.M.
van der Heijden, M.C.
Vliegen, I.M.H.
Zijm, W.H.M.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:77-862014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:77-86
article
Effective learning hyper-heuristics for the course timetabling problem
Course timetabling is an important and recurring administrative activity in most educational institutions. This article combines a general modeling methodology with effective learning hyper-heuristics to solve this problem. The proposed hyper-heuristics are based on an iterated local search procedure that autonomously combines a set of move operators. Two types of learning for operator selection are contrasted: a static (offline) approach, with a clear distinction between training and execution phases; and a dynamic approach that learns on the fly. The resulting algorithms are tested over the set of real-world instances collected by the first and second International Timetabling competitions. The dynamic scheme statistically outperforms the static counterpart, and produces competitive results when compared to the state-of-the-art, even producing a new best-known solution. Importantly, our study illustrates that algorithms with increased autonomy and generality can outperform human designed problem-specific algorithms.
Timetabling; Hyper-heuristics; Heuristics; Metaheuristics; Combinatorial optimization;
http://www.sciencedirect.com/science/article/pii/S0377221714002859
Soria-Alcaraz, Jorge A.
Ochoa, Gabriela
Swan, Jerry
Carpio, Martin
Puga, Hector
Burke, Edmund K.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:363-3732014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:363-373
article
Mass-casualty triage: Distribution of victims to multiple hospitals using the SAVE model
During a mass casualty incident (MCI), to which one of several area hospitals should each victim be sent? These decisions depend on resource availability (both transport and care) and the survival probabilities of patients. This paper focuses on the critical time period immediately following the onset of an MCI and is concerned with how to effectively evacuate victims to the different area hospitals in order to provide the greatest good to the greatest number of patients while not overwhelming any single hospital. This resource-constrained triage problem is formulated as a mixed-integer program, which we call the Severity-Adjusted Victim Evacuation (SAVE) model. It is compared with a model in the extant literature and also against several current policies commonly used by the so-called incident commander. The experiments indicate that the SAVE model provides a marked improvement over the commonly used ad-hoc policies and an existing model. Two possible implementation strategies are discussed along with managerial conclusions.
OR in service industries; Risk management; Disaster management; Health care; Victim distribution;
http://www.sciencedirect.com/science/article/pii/S0377221714002574
Dean, Matthew D.
Nair, Suresh K.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1105-11182014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1105-1118
article
Modeling framework for optimal evacuation of large-scale crowded pedestrian facilities
The paper presents a simulation–optimization modeling framework for the evacuation of large-scale pedestrian facilities with multiple exit gates. The framework integrates a genetic algorithm (GA) and a microscopic pedestrian simulation–assignment model. The GA searches for the optimal evacuation plan, while the simulation model guides the search through evaluating the quality of the generated evacuation plans. Evacuees are assumed to receive evacuation instructions in terms of the optimal exit gates and evacuation start times. The framework is applied to develop an optimal evacuation plan for a hypothetical crowded exhibition hall. The obtained results show that the model converges to a superior optimal evacuation plan within an acceptable number of iterations. In addition, the obtained evacuation plan outperforms conventional plans that implement nearest-gate immediate evacuation strategies.
Crowd dynamics; Evacuation; Simulation; Cellular automata; Genetic algorithms;
http://www.sciencedirect.com/science/article/pii/S0377221714001891
Abdelghany, Ahmed
Abdelghany, Khaled
Mahmassani, Hani
Alhalabi, Wael
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:221-2322014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:221-232
article
Vertical integration with endogenous contract leadership: Stability and fair profit allocation
This paper studies vertical integration in serial supply chains with a wholesale price contract. We consider a business environment where the contracting leader may be endogenously changed before and after forming the integration. A cooperative game is formulated to normatively analyze the stable and fair profit allocations under the grand coalition in such an environment. Our main result demonstrates that vertical integration is stable when all members are pessimistic in the sense that they are sure that they will not become the contracting leader if they deviate from the grand coalition. We find that in this case, the grand coalition’s profit must be allocated more to the retailer and the members with higher costs. Nevertheless, we also show the conditions under which the upstream manufacturer can have strong power as in traditional supply chains.
Vertical integration; Leader position; Cooperative game; Core allocation; Economics;
http://www.sciencedirect.com/science/article/pii/S0377221714002513
Kumoi, Yuki
Matsubayashi, Nobuo
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:245-2532014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:245-253
article
A new nonlinear interval programming method for uncertain problems with dependent interval variables
This paper proposes a new nonlinear interval programming method that can be used to handle uncertain optimization problems when there are dependencies among the interval variables. The uncertain domain is modeled using a multidimensional parallelepiped interval model. The model depicts single-variable uncertainty using a marginal interval and depicts the degree of dependency among the interval variables using correlation angles and correlation coefficients. Based on the order relation and the possibility degree of intervals, the uncertain optimization problem is converted to a deterministic two-layer nesting optimization problem. The affine coordinate is then introduced to convert the uncertain domain of the multidimensional parallelepiped interval model to a standard interval uncertain domain. A highly efficient iterative algorithm is formulated to generate an efficient solution for the multi-layer nesting optimization problem after the conversion. Three computational examples are given to verify the effectiveness of the proposed method.
Uncertainty modeling; Nonlinear interval programming; Interval model; Uncertain optimization; Variable dependency;
http://www.sciencedirect.com/science/article/pii/S0377221714002586
Jiang, C.
Zhang, Z.G.
Zhang, Q.F.
Han, X.
Xie, H.C.
Liu, J.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1165-11692014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1165-1169
article
A new distance measure including the weak preference relation: Application to the multiple criteria aggregation procedure for mixed evaluations
We introduce a new distance measure between two preorders that captures indifference, strict preference, weak preference and incomparability relations. This measure is the first to capture weak preference relations. We illustrate how this distance measure affords decision makers greater modeling power to capture their preferences, or uncertainty and ambiguity around them, by using our proposed distance measure in a multiple criteria aggregation procedure for mixed evaluations.
Multiple criteria analysis; Uncertainty modeling; Preference relations; Stochastic dominance; Distance measure;
http://www.sciencedirect.com/science/article/pii/S0377221714002756
Ben Amor, Sarah
Martel, Jean-Marc
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1155-11642014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1155-1164
article
Lift ticket prices and quality in French ski resorts: Insights from a non-parametric analysis
Using a unique data set with 168 ski resorts located in France, this paper investigates the relationship between lift ticket prices and supply-related characteristics of ski resorts. A non-parametric analysis combined with a principal component analysis is used to identify the set of efficient ski resorts, defined as those where the lift ticket price is the cheapest for a given level of quality. Results show that the average inefficiency per lift ticket price is less than 1.5 euros for resorts located in the Pyrenees and the Southern Alps. The average inefficiency is three times higher for ski resorts located in the Northern Alps, which is explained by the presence of large connected ski areas offering many more runs for a small surcharge.
Data envelopment analysis; Free disposal hull model; Quality; Lift ticket price; Ski resorts;
http://www.sciencedirect.com/science/article/pii/S0377221714002148
Wolff, François-Charles
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:966-9742014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:966-974
article
Interior analysis of the green product mix solution
When modeling optimal product mix under emission restrictions produces a solution with an unacceptable level of profit, the analyst is prompted to investigate the cause(s). Interior analysis (IA) is proposed for this purpose. With IA, the analyst can investigate the impact of accommodating emission controls in a step-by-step, one-at-a-time manner and, in doing so, track how profit and other important features of the product mix degrade, as well as to which emission control enforcements this deterioration may be attributed. In this way, the analyst can assist the manager in identifying implementation strategies. Although IA is presented within the context of a linear programming formulation of the green product mix problem, its methodology may be applied to other modeling frameworks. Quantity-dependent penalty rates and transformations of emissions to forms with or without economic value are included in the modeling and in the illustrations of IA.
Linear programming; Product mix problem; Implementation strategy; Sustainability; 0/1 Mixed integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001623
Wellington, John F.
Guiffrida, Alfred L.
Lewis, Stephen A.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1021-10362014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1021-1036
article
Backward induction algorithm for a class of closed-loop Stackelberg games
In this paper, a new deterministic continuum-strategy two-player discrete-time dynamic Stackelberg game is proposed, with a fixed finite time duration and a closed-loop information structure. The considered payoff functions can be widely used in different applications, mainly in conflicts over the consumption of a limited resource, where one player, called the leader, is a superior authority that chooses its strategy first, and the other player, called the follower, chooses afterwards.
Game theory; Closed-loop Stackelberg game; Leader–follower equilibrium; Backward induction algorithm; Game regulation; Dynamic programming;
http://www.sciencedirect.com/science/article/pii/S0377221714001921
Kicsiny, R.
Varga, Z.
Scarelli, A.
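The leader-follower logic described above can be sketched by backward induction on a discretized strategy grid: the follower's best response is solved first, and the leader optimizes while anticipating it. The shared-resource payoffs in the test are illustrative assumptions, not the paper's model.

```python
# Minimal Stackelberg (leader-follower) equilibrium by backward induction
# over a discretized strategy grid. Payoff functions u_l, u_f and the
# resource budget R are supplied by the caller; the continuum-strategy
# game of the paper is only approximated by the grid.

def follower_best(x_leader, grid, R, u_f):
    """Follower's best response given the leader's consumption."""
    feasible = [y for y in grid if x_leader + y <= R]
    return max(feasible, key=u_f)

def stackelberg(grid, R, u_l, u_f):
    # Backward induction: anticipate the follower's best response for each
    # leader move, then let the leader optimize over its own moves.
    best = max((x for x in grid if x <= R),
               key=lambda x: u_l(x, follower_best(x, grid, R, u_f)))
    return best, follower_best(best, grid, R, u_f)
```

For example, with a budget R = 10 and simple quadratic payoffs, the leader commits to the move whose anticipated follower response yields the highest leader payoff.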
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:143-1542014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:143-154
article
Optimisation of integrated reverse logistics networks with different product recovery routes
Awareness of the importance of product recovery has grown swiftly in the past few decades. This paper focuses on the inventory control and production planning optimisation of a generic integrated Reverse Logistics (RL) network, which consists of a traditional forward production route, two alternative recovery routes (repair and remanufacturing), and a disposal route. It is assumed that demand and return quantities are uncertain. A quality level is assigned to each returned product. Owing to the uncertainty in the return quantity, the quantity of returned products at each quality level is also uncertain. The uncertainties are modelled using trapezoidal fuzzy numbers. Quality thresholds are used to segregate the returned products into the repair, remanufacturing or disposal routes. A two-phase fuzzy mixed integer optimisation algorithm is developed to solve the inventory control and production planning problem. In Phase 1, uncertainties in the quantity and quality of product returns are considered to calculate the quantities to be sent to the different recovery routes. These outputs are inputs to Phase 2, which generates decisions on component procurement, production, repair and disassembly. Finally, numerical experiments and a sensitivity analysis are carried out to better understand the effects of the quality of returns and of the RL network parameters on network performance. These parameters include the quantity of returned products, unit repair costs, unit production cost, setup costs and unit disposal cost.
Supply chain management; Reverse logistics; Quality of returned products; Uncertainty modelling; Inventory control; Fuzzy optimisation;
http://www.sciencedirect.com/science/article/pii/S0377221714002732
Niknejad, A.
Petrovic, D.
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:1083-10942014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:1083-1094
article
Capturing and prioritizing students’ requirements for course design by embedding Fuzzy-AHP and linear programming in QFD
Customer requirements play a vital role in the design of products and services. Quality Function Deployment (QFD) is a popular, widely used method that helps translate customer requirements into design specifications. Thus, the foundation for a successful QFD implementation lies in the accurate capturing and prioritization of these requirements. This paper proposes and tests an alternative framework for prioritizing students’ requirements within QFD. More specifically, Fuzzy Analytic Hierarchy Process (Fuzzy-AHP) and the linear programming method (LP-GW-AHP) based on Data Envelopment Analysis (DEA) are embedded into QFD (QFD-LP-GW-Fuzzy AHP) in order to account for the inherent subjectivity of human judgements. The effectiveness of the proposed framework is assessed in capturing and prioritizing students’ requirements regarding courses’ learning outcomes within the process of academic course design. Sensitivity analysis evaluates the robustness of the prioritization solution, and implications for course design specifications are discussed.
OR in service industries; Quality management; Quality Function Deployment; Education;
http://www.sciencedirect.com/science/article/pii/S0377221714001775
Kamvysi, Konstantina
Gotzamani, Katerina
Andronikidis, Andreas
Georgiou, Andreas C.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:114-1212014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:114-121
article
Hybrid algorithm for a vendor managed inventory system in a two-echelon supply chain
In this paper we address the issue of vendor managed inventory (VMI) by considering a two-echelon single-vendor/multiple-buyer supply chain network. We seek the optimal sales quantity by maximizing profit, given as a nonlinear and non-convex objective function. For such complicated combinatorial optimization problems, exact algorithms and commercial optimization software such as LINGO are inefficient, especially on practical-size problems. In this paper we develop a hybrid genetic/simulated annealing algorithm to deal with this nonlinear problem. Our results demonstrate that the proposed hybrid algorithm outperforms previous methodologies and achieves more robust solutions.
Supply chain management; Hybrid algorithms; Genetic algorithms; Simulated annealing; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221714002136
Diabat, Ali
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:199-2072014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:199-207
article
On the student evaluation of university courses and faculty members’ teaching performance
In the quest to determine higher education quality, one quickly arrives at one of its significant dimensions, namely the quality of faculty members’ teaching. This, and the quality of any university course more generally, should certainly be evaluated by their recipients, namely the students. In this paper we develop a statistical framework, based mainly on Statistical Quality Control, which can be used to exploit student evaluations as fully as possible. More specifically, we present two directions of data monitoring and analysis; one uses control charts and the other hypothesis testing. The results obtained through both directions are crucial for any decision maker.
Higher education; Student evaluation; Questionnaire; Statistics; Control charts; Hypotheses testing;
http://www.sciencedirect.com/science/article/pii/S037722171400232X
Nikolaidis, Yiannis
Dimitriadis, Sotirios G.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:18-302014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:18-30
article
Integrating stochastic time-dependent travel speed in solution methods for the dynamic dial-a-ride problem
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.
Dial-a-ride problem; Dynamic stochastic; Time-dependent; Variable neighborhood search; Multiple plan approach; Multiple scenario approach;
http://www.sciencedirect.com/science/article/pii/S0377221714002197
Schilde, M.
Doerner, K.F.
Hartl, R.F.
oai:RePEc:eee:ejores:v:238:y:2014:i:1:p:54-642014-06-05RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:1:p:54-64
article
Fast approximation algorithms for bi-criteria scheduling with machine assignment costs
We consider parallel machine scheduling problems where the processing of the jobs on the machines involves two types of objectives. The first type is one of two classical objective functions in scheduling theory: either the total completion time or the makespan. The second type involves an actual cost associated with the processing of a specific job on a given machine; each job-machine combination may have a different cost. Two bi-criteria scheduling problems are considered: (1) minimize the maximum machine cost subject to the total completion time being at its minimum, and (2) minimize the total machine cost subject to the makespan being at its minimum. Since both problems are strongly NP-hard, we propose fast heuristics and establish their worst-case performance bounds.
Bi-criteria scheduling; Maximum machine cost; Total machine cost; Makespan; Total completion time; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221714002550
Lee, Kangbok
Leung, Joseph Y-T.
Jia, Zhao-hong
Li, Wenhua
Pinedo, Michael L.
Lin, Bertrand M.T.
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:800-8142015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:800-814
article
Fast local search for single row facility layout
Given n facilities of prescribed lengths and a flow matrix, the single row facility layout problem (SRFLP) is to arrange the facilities along a straight line so as to minimize the total arrangement cost, which is the sum of the products of the flows and center-to-center distances between facilities. We propose interchange and insertion neighborhood exploration (NE) procedures with time complexity O(n2), which is an improvement over O(n3)-time NE procedures from the literature. Numerical results show that, for large SRFLP instances, our insertion-based local search (LS) algorithm is two orders of magnitude faster than the best existing LS techniques. As a case study, we embed this LS algorithm into the variable neighborhood search (VNS) framework. We report computational results for SRFLP instances of size up to 300 facilities. They indicate that our VNS implementation offers markedly better performance than the variant of VNS that uses a recently proposed O(n3)-time insertion-based NE procedure.
Combinatorial optimization; Single row facility layout; Local search; Variable neighborhood search;
http://www.sciencedirect.com/science/article/pii/S037722171500466X
Palubeckis, Gintaras
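The SRFLP objective stated in the abstract, the sum over facility pairs of flow times center-to-center distance, can be evaluated directly. The brute-force search below only makes the objective concrete for tiny instances; it does not reflect the paper's O(n2) neighborhood-exploration procedures.

```python
# SRFLP objective: given a permutation of facilities, their lengths and a
# flow matrix (upper triangle used), the cost is the sum of flow * distance
# between facility centers. Exhaustive search is viable only for tiny n.
from itertools import permutations

def layout_cost(perm, lengths, flow):
    centers, pos = {}, 0.0
    for f in perm:                       # place facilities left to right
        centers[f] = pos + lengths[f] / 2.0
        pos += lengths[f]
    n = len(lengths)
    return sum(flow[i][j] * abs(centers[i] - centers[j])
               for i in range(n) for j in range(i + 1, n))

def brute_force_srflp(lengths, flow):
    return min(permutations(range(len(lengths))),
               key=lambda p: layout_cost(p, lengths, flow))
```

For three facilities of length 2 where the pair (0, 2) carries the heaviest flow, the optimal layouts place 0 and 2 adjacent, as the cost function rewards short center-to-center distances for heavy flows.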
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:787-7992015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:787-799
article
Practical solutions for a dock assignment problem with trailer transportation
We study a distribution warehouse in which trailers need to be assigned to docks for loading or unloading. A parking lot is used as a buffer zone and transportation between the parking lot and the docks is performed by auxiliary resources called terminal tractors. Each incoming trailer has a known arrival time and each outgoing trailer a desired departure time. The primary objective is to produce a docking schedule such that the weighted sum of the number of late outgoing trailers and the tardiness of these trailers is minimized; the secondary objective is to minimize the weighted completion time of all trailers, both incoming and outgoing. The purpose of this paper is to produce high-quality solutions to large instances that are comparable to a real-life case. This will oblige us to abandon the guarantee of always finding an optimal solution, and we will instead look into a number of sub-optimal procedures. We implement four different methods: a mathematical formulation that can be solved using an IP solver, a branch-and-bound algorithm, a beam search procedure and a tabu search method. Lagrangian relaxation is embedded in the algorithms for computing lower bounds. The different solution frameworks are compared via extensive computational experiments.
Dock assignment; Multicriteria scheduling; Branch and bound; Beam search; Tabu search;
http://www.sciencedirect.com/science/article/pii/S0377221715004683
Berghman, Lotte
Leus, Roel
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:858-8732015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:858-873
article
A group decision-making approach based on evidential reasoning for multiple criteria sorting problem with uncertainty
A new group decision-making approach is developed to address a multiple criteria sorting problem with uncertainty. The uncertainty in this paper refers to imprecise evaluations of alternatives with respect to the considered criteria. The belief structure and the evidential reasoning approach are employed to represent and aggregate the uncertain evaluations. In our approach, the preference information elicited from a group of decision makers is composed of the assignment examples of some reference alternatives. The disaggregation–aggregation paradigm is utilized to infer compatible preference models from these assignment examples. To help the group reach an agreement on the assignment of alternatives, we propose a consensus-reaching process. In this process, a consensus degree is defined to measure the agreement among the decision makers’ opinions. When the decision makers are not satisfied with the consensus degree, possible solutions are explored to help them adjust assignment examples in order to improve the consensus level. If the consensus degree arrives at a satisfactory level, a linear program is built to determine the collective assignment of alternatives. The application of the proposed approach to a customer satisfaction analysis is presented at the end of the paper.
Multiple criteria analysis; Multiple criteria sorting; Evidential reasoning approach; Group consensus;
http://www.sciencedirect.com/science/article/pii/S0377221715004178
Liu, Jiapeng
Liao, Xiuwu
Yang, Jian-bo
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:155-1652015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:155-165
article
The effect of the autocorrelation on the performance of the T2 chart
In this article, we consider the T2 control chart for bivariate samples of size n with observations that are not only cross-correlated but also autocorrelated. The cross-covariance matrix of the sample mean vectors is derived under the assumption that the observations are described by a multivariate first-order autoregressive model, VAR(1). The combined effect of correlation and autocorrelation on the performance of the T2 chart is also investigated. Earlier studies showed that changes in only one variable are detected faster when the variables are correlated. This result extends to the case in which one or both variables are also autocorrelated.
Quality control; Statistical process control; Hotelling T2 chart; VAR (1) model;
http://www.sciencedirect.com/science/article/pii/S0377221715004890
Leoni, Roberto Campos
Costa, Antonio Fernando Branco
Machado, Marcela Aparecida Guerreiro
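For reference, the statistic monitored by the chart has the standard Hotelling form T2 = n (xbar - mu0)' Sigma0^{-1} (xbar - mu0). The sketch below computes it for a bivariate sample against a given in-control mean and covariance; the VAR(1) autocorrelation structure analyzed in the paper is not modeled here.

```python
# Hotelling T2 statistic for the mean of a bivariate sample, using the
# standard definition with a known in-control mean mu0 and covariance
# sigma0 (a 2x2 matrix, inverted in closed form).

def t2_statistic(sample, mu0, sigma0):
    n = len(sample)
    xbar = [sum(x[i] for x in sample) / n for i in (0, 1)]
    d = [xbar[0] - mu0[0], xbar[1] - mu0[1]]
    (a, b), (c, e) = sigma0
    det = a * e - b * c                    # 2x2 determinant
    inv = [[e / det, -b / det], [-c / det, a / det]]
    q = sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))
    return n * q
```

A signal is raised when the statistic exceeds a control limit derived from the chi-squared or F distribution, depending on whether the parameters are known or estimated.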
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:113-1232015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:113-123
article
Integrated production and logistics planning: Contract manufacturing and choice of air/surface transportation
We study the operational problem of a make-to-order contract manufacturer seeking to integrate production scheduling and transportation planning for improved performance under a commit-to-delivery model. The manufacturer produces customer orders on a set of unrelated parallel lines/processors, accounting for release dates and sequence-dependent setup times. A set of shipping options with different costs and transit times is available for order delivery through third-party logistics service providers. The objective is to manufacture and deliver multiple customer orders, selecting from the available shipping options, before preset due dates so as to minimize the total cost of fulfilling orders, including tardiness penalties. We formulate the problem as a mixed integer programming model and provide a novel decomposition scheme to solve it. An exact dynamic programming model and a heuristic approach are presented to solve the subproblems. The performance of the solution algorithm is tested through a set of experimental studies and the results are presented. The algorithm is shown to solve the test cases efficiently, even the complex instances, to near optimality.
Production scheduling; Third party logistics; Commit to delivery; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715005184
Azadian, Farshid
Murat, Alper
Chinnam, Ratna Babu
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:685-6852015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:685-685
article
Editors’ Awards for Excellence in Reviewing 2015
http://www.sciencedirect.com/science/article/pii/S0377221715004154
Słowiński, Roman
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:762-7712015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:762-771
article
Integration of selecting and scheduling urban road construction projects as a time-dependent discrete network design problem
Decision making on the selection of transportation infrastructure projects is an interesting subject for both transportation authorities and researchers. Due to resource limitations, the selected projects must then be scheduled over the planning horizon. Integrating project selection and scheduling into a single model increases the accuracy of the results; however, it also increases complexity. In this paper, first, three different mathematical programming models are presented that integrate the selection and scheduling of urban road construction projects as a time-dependent discrete network design problem. Then, the model that seems most flexible and realistic is selected and an evolutionary approach is proposed to solve it. The proposed approach combines three well-known techniques: phase I of the two-phase simplex method, the Frank-Wolfe algorithm, and a genetic algorithm. The Taguchi method is used to optimize the genetic algorithm parameters. The main difficulty in solving the model is the large number of network traffic assignment subproblems that must be solved, which makes the solution process very time-consuming. Therefore, a procedure is proposed to overcome this difficulty by significantly reducing the traffic assignment solution time. To verify the performance of the proposed approach, 27 randomly generated test problems of different scales are applied to the Sioux Falls urban transportation network. The proposed approach and a full enumeration method are used to solve the problems. Numerical results show that the proposed approach performs acceptably in terms of both solution quality and solution time.
Transportation; Project selection; Network design problem; Scheduling of transportation projects; Genetic algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715004294
Hosseininasab, Seyyed-Mohammadreza
Shetab-Boushehri, Seyyed-Nader
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:204-2152015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:204-215
article
Heterogeneous beliefs, regret, and uncertainty: The role of speculation in energy price dynamics
This paper proposes to investigate the impact of financialization on energy markets (oil, gas, coal, and electricity European forward prices) during both normal times and periods of extreme fluctuation by using an original behavioral and emotional approach. With this aim, we propose a new theoretical and empirical framework based on a heterogeneous agents model in which fundamentalists and chartists co-exist and are subject to regret and uncertainty. We find significant evidence that energy markets are composed of heterogeneous traders who behave differently depending on the intensity of the price fluctuations and the uncertainty context. In particular, energy prices are governed primarily by fundamental and chartist agents that are neutral to uncertainty during normal times, whereas these prices face irrational chartist investors averse to uncertainty during periods of extreme fluctuations. In this context, the recent surge in energy prices can be viewed as the consequence of irrational exuberance. Our new theoretical model is suitable for modeling energy price dynamics and outperforms both the random walk and the ARMA model in out-of-sample predictive ability.
Energy forward prices; Financialization; Heterogeneous agents; Uncertainty aversion; Regret;
http://www.sciencedirect.com/science/article/pii/S0377221715004725
Joëts, Marc
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:191-2032015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:191-203
article
National-strategic investment in European power transmission capacity
The transformation of the European electricity system requires substantial investment in transmission capacity to facilitate cross-border trade and to efficiently integrate renewable energy sources. However, network planning in the EU is still mainly a national prerogative. In contrast to other studies aiming to identify the pan-European (continental) welfare-optimal transmission expansion, we investigate the impact of zonal planners deciding on network investment strategically, with the aim of maximizing the sum of consumer surplus, generator profits and congestion rents in their jurisdiction. This reflects the inadequacy of current mechanisms to compensate for welfare re-allocations across national boundaries arising from network upgrades.
Electricity transmission; Network expansion; Generalized Nash equilibrium (GNE); Mixed-integer equilibrium problem under equilibrium constraints (MI-EPEC);
http://www.sciencedirect.com/science/article/pii/S0377221715004671
Huppmann, Daniel
Egerer, Jonas
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:886-8932015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:886-893
article
A group evidential reasoning approach based on expert reliability
The reliability of an expert is an important concept in multiple attribute group decision analysis (MAGDA). However, reliability is rarely considered in MAGDA; it is often simply assumed that all experts are fully reliable, so their reliabilities need not be considered explicitly. In fact, experts are only boundedly rational, and their varying degrees of reliability may significantly influence MAGDA results. In this paper, we propose a new method based on the evidential reasoning rule to explicitly measure the reliability of each expert in a group, and to combine expert assessments using both expert weights and reliabilities. Two sets of assessments, i.e., the original assessments and the updated assessments provided after group analysis and discussion, are taken into account to measure expert reliabilities. When the assessments of some experts are incomplete and global ignorance is incurred, pairs of optimization problems are constructed to determine interval-valued expert reliabilities. The resulting expert reliabilities are applied to combine the expert assessments of alternatives on each attribute and then to generate the aggregated assessments of the alternatives. An industry evaluation problem in Wuhu, a city in Anhui Province, China, is analyzed using the proposed method as a real case study to demonstrate its detailed implementation process, validity, and applicability.
Decision analysis; Multiple attribute group decision analysis; Expert reliability; Evidential reasoning rule; Evidential reasoning approach;
http://www.sciencedirect.com/science/article/pii/S0377221715004324
Fu, Chao
Yang, Jian-Bo
Yang, Shan-Lin
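The evidential reasoning rule used in the paper extends classical Dempster-Shafer combination with evidence weights and reliabilities. As background only, the sketch below implements the classical Dempster rule over basic probability assignments keyed by frozensets of hypotheses; the weighted, reliability-aware ER rule itself is not reproduced.

```python
# Classical Dempster's rule of combination for two basic probability
# assignments (dicts mapping frozensets of hypotheses to masses).
# This is the unweighted predecessor of the evidential reasoning rule,
# shown here for illustration only.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            inter = b & c
            if inter:                       # compatible evidence
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:                           # conflicting evidence
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

Normalizing by 1 minus the conflict mass redistributes conflicting evidence proportionally, which is precisely the behavior the ER rule refines by also discounting each source by its weight and reliability.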
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:936-9432015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:936-943
article
Identification of the anchor points in FDH models
This paper investigates the anchor points in nonconvex Data Envelopment Analysis (DEA) technologies, known as Free Disposal Hull (FDH) technologies. We develop the concept of anchor points under various returns-to-scale assumptions in FDH models. A necessary and sufficient condition for characterizing the anchor points is provided. Since the set of anchor points is a subset of the set of extreme units, a definition of extreme units in non-convex technologies, as well as a new method for obtaining these units, is given. Finally, a polynomial-time algorithm for identifying the anchor points in FDH models is provided. Both extreme units and anchor points are obtained by calculating only certain ratios, without solving any mathematical programming problem.
Data Envelopment Analysis (DEA); FDH models; Anchor point; Returns to scale; Polynomial-time algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715004622
Soleimani-damaneh, Majid
Mostafaee, Amin
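In line with the abstract's claim that no mathematical program needs to be solved, the extreme (non-dominated) units of an FDH technology can be identified with simple pairwise comparisons. The sketch below is a standard FDH dominance test; the further ratio tests the paper uses to single out anchor points among the extreme units are not shown.

```python
# FDH efficiency by pairwise dominance: a unit (inputs, outputs) is
# dominated if some other observed unit uses no more of every input and
# produces no less of every output, and differs from it. Units are
# represented as ((in1, in2, ...), (out1, out2, ...)) tuples.

def dominates(u, v):
    ins_u, outs_u = u
    ins_v, outs_v = v
    return (all(a <= b for a, b in zip(ins_u, ins_v)) and
            all(a >= b for a, b in zip(outs_u, outs_v)) and u != v)

def fdh_efficient(units):
    """Units not dominated by any other observed unit."""
    return [u for u in units if not any(dominates(v, u) for v in units)]
```

Because the FDH frontier is spanned by observed units alone (no convex combinations), this enumeration suffices to find the extreme units, which is why only ratio calculations remain for the anchor-point characterization.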
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:276-2932015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:276-293
article
A system dynamics view of the acute bed blockage problem in the Irish healthcare system
Global population ageing is creating immense pressures on hospitals and other healthcare services, compromising their abilities to meet the growing demand from elderly patients. Current demand–supply gaps result in prolonged waiting times in emergency departments (EDs), and several studies have focused on improving ED performance. However, the overcrowding in EDs generally stems from delayed patient flows to inpatient wards – which are congested with inpatients waiting for beds in post-acute facilities. This problem of bed blocking in acute hospitals causes substantial cost burdens on hospitals. This study presents a system dynamics methodology to model the dynamic flow of elderly patients in the Irish healthcare system aimed at gaining a better understanding of the dynamic complexity caused by the system's various parameters. The model evaluates the stock and flow interventions that Irish healthcare executives have proposed to address the problem of delayed discharges, and ultimately reduce costs. The anticipated growth in the nation's demography is also incorporated in the model. Policy makers can also use the model to identify the potential strategic risks that might arise from the unintended consequences of new policies designed to overcome the problem of the delayed discharge of elderly patients.
Delayed discharge; System dynamics; Patient pathways; Capacity planning; Irish healthcare system;
http://www.sciencedirect.com/science/article/pii/S0377221715004336
Rashwan, Wael
Abo-Hamad, Waleed
Arisha, Amr
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:16-262015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:16-26
article
Minimum edge blocker dominating set problem
This paper introduces and studies the minimum edge blocker dominating set problem (EBDP), which is formulated as follows. Given a vertex-weighted undirected graph and r > 0, remove a minimum number of edges so that the weight of any dominating set in the remaining graph is at least r. Dominating sets are used in a wide variety of graph-based applications such as the analysis of wireless and social networks. We show that the decision version of EBDP is NP-hard for any fixed r > 0. We present an analytical lower bound for the value of an optimal solution to EBDP and formulate this problem as a linear 0–1 program with a large number of constraints. We also study the convex hull of feasible solutions to EBDP and identify facet-inducing inequalities for this polytope. Furthermore, we develop the first exact algorithm for solving EBDP, which solves the proposed formulation by a branch-and-cut approach where nontrivial constraints are applied in a lazy fashion. Finally, we also provide the computational results obtained by using our approach on a test-bed of randomly generated instances and real-life power-law graphs.
Network interdiction; Minimum weighted dominating set; NP-hardness; Branch-and-cut algorithm; Critical elements detection;
http://www.sciencedirect.com/science/article/pii/S0377221715004270
Mahdavi Pajouh, Foad
Walteros, Jose L.
Boginski, Vladimir
Pasiliao, Eduardo L.
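To make the EBDP definition above concrete, the sketch below computes the weight of a minimum-weight dominating set by brute force; in EBDP, edges are removed until this value reaches at least r. The subset enumeration is exponential and purely illustrative, in contrast to the branch-and-cut approach of the paper.

```python
# Brute-force minimum-weight dominating set. The graph is an adjacency
# dict {vertex: set of neighbors}; weight maps each vertex to its weight.
# A subset S is dominating if every vertex is in S or adjacent to S.
from itertools import combinations

def is_dominating(graph, subset):
    covered = set(subset)
    for v in subset:
        covered |= graph[v]
    return covered == set(graph)

def min_dominating_weight(graph, weight):
    best = float("inf")
    verts = list(graph)
    for r in range(1, len(verts) + 1):
        for s in combinations(verts, r):
            if is_dominating(graph, s):
                best = min(best, sum(weight[v] for v in s))
    return best
```

On a star graph with an expensive center, the cheapest dominating set is the set of leaves; removing star edges (the EBDP operation) forces heavier dominating sets and thus raises this value.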
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:27-362015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:27-36
article
A minimum cost network flow model for the maximum covering and patrol routing problem
This paper shows how the maximum covering and patrol routing problem (MCPRP) can be modeled as a minimum cost network flow problem (MCNFP). Based on the MCNFP model, all available benchmark instances of the MCPRP can be solved to optimality in less than 0.4s per instance. It is furthermore shown that several practical additions to the MCPRP, such as different start and end locations of patrol cars and overlapping shift durations can be modeled by a multi-commodity minimum cost network flow model and solved to optimality in acceptable computational times given the sizes of practical instances.
Routing; Problem structuring; Minimum Cost Network Flow Problem; Multi-commodity;
http://www.sciencedirect.com/science/article/pii/S0377221715004798
Dewil, R.
Vansteenwegen, P.
Cattrysse, D.
Van Oudheusden, D.
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:721-7292015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:721-729
article
An inexact proximal method for quasiconvex minimization
In this paper we propose an inexact proximal point method to solve constrained minimization problems with locally Lipschitz quasiconvex objective functions. Assuming that the function is also bounded from below and lower semicontinuous, and using proximal distances, we show that the sequence generated by the method converges to a stationary point of the problem.
Computing science; Global optimization; Nonlinear programming; Proximal point methods; Quasiconvex minimization;
http://www.sciencedirect.com/science/article/pii/S0377221715004312
Papa Quiroz, E.A.
Mallma Ramirez, L.
Oliveira, P.R.
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:46-592015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:46-59
article
A bicriterion algorithm for the allocation of cross-trained workers based on operational and human resource objectives
The problem of allocating a pool of cross-trained workers across multiple departments, units, or work centers is important for both manufacturing and service environments. Within the context of services, such as hospital nursing, the problem has commonly been formulated as a nonlinear assignment problem with an operationally oriented objective function that focuses on the maximization of service utility as a function of deviations from target staffing levels in the departments. However, service managers might also deem it appropriate to consider human resource-oriented goals, such as accommodating worker preferences, avoiding decay of skill productivity, or the provision of training. We present a bicriterion formulation of the nonlinear worker assignment problem that incorporates both operational and human resource objective criteria. An algorithm for generating the entire Pareto efficient set associated with the bicriterion model is subsequently presented. A small numerical example is used to illustrate the bicriterion model and algorithm. A second example based on a test problem from the literature is also contained in the paper, and a third example is provided in an online supplement. In addition, a simulation experiment was conducted to evaluate the sensitivity of the algorithm to a variety of environmental characteristics.
Workforce assignment; Cross-training; Bicriterion programming; Algorithms; Services;
http://www.sciencedirect.com/science/article/pii/S0377221715005238
Brusco, Michael J.
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:772-7862015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:772-786
article
High-speed railway scheduling based on user preferences
This paper proposes an optimization model for high-speed railway scheduling. The model is composed of two sub-models. The first is a discrete event simulation model which represents the supply of the railway services, whereas the second is a constrained logit-type choice model which takes into account the behaviour of users. This discrete choice model evaluates the attributes of railway services such as the timetable, fare, travel time and seat availability (capacity constraints) and computes the high-speed railway demand for each planned train service.
Timetabling; High-speed railway; Constrained nested logit model; Utility theory; Metaheuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715004634
Espinosa-Aranda, José Luis
García-Ródenas, Ricardo
Ramírez-Flores, María del Carmen
López-García, María Luz
Angulo, Eusebio
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:708-7202015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:708-720
article
Circumventing the Slater conundrum in countably infinite linear programs
Duality results on countably infinite linear programs are scarce. Subspaces that admit an interior point, which is a sufficient condition for a zero duality gap, yield a dual where the constraints cannot be expressed using the ordinary transpose of the primal constraint matrix. Subspaces that permit a dual with this transpose do not admit an interior point. This difficulty has stumped researchers for a few decades; it has recently been called the Slater conundrum. We find a way around this hurdle.
Infinite-dimensional linear optimization; Markov decision processes; Shadow prices;
http://www.sciencedirect.com/science/article/pii/S0377221715003197
Ghate, Archis
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:294-3092015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:294-309
article
A simulation model to enable the optimization of ambulance fleet allocation and base station location for increased patient survival
An effective emergency medical service (EMS) is a critical part of any health care system. This paper presents the optimization of EMS vehicle fleet allocation and base station location through the use of a genetic algorithm (GA) with an integrated EMS simulation model. A two-tier EMS model captured the different demands on two vehicle classes: ambulances and rapid response cars. Multiple patient classes were modelled, and survival functions were used to differentiate the required levels of service. The objective was maximization of the overall expected survival probability across patient classes. Applications of the model were undertaken using real call data from the London Ambulance Service. The simulation model was shown to effectively emulate real-life performance. Optimization of the existing resource plan resulted in significant improvements in survival probability. Optimizing a selection of 1 hour periods in the plan, without introducing additional resources, resulted in a notable increase in the number of cardiac arrest patients surviving per year. The introduction of an additional base station further improved survival when its location and resourcing were optimized for key periods of service. Also, the removal of a base station from the system was found to have minimal impact on survival probability when the selected station and resourcing were optimized simultaneously.
Simulation; Optimization; Emergency medical service;
http://www.sciencedirect.com/science/article/pii/S0377221715004300
McCormack, Richard
Coates, Graham
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:750-7612015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:750-761
article
An efficient local search heuristic with row weighting for the unicost set covering problem
The Set Covering Problem (SCP) is NP-hard. We propose a new Row Weighting Local Search (RWLS) algorithm for solving the unicost variant of the SCP (USCP), in which the costs of all sets are identical. RWLS is a heuristic algorithm that has three major components united in its local search framework: (1) a weighting scheme, which updates the weights of uncovered elements to prevent convergence to local optima, (2) tabu strategies to avoid possible cycles during the search, and (3) a timestamp method to break ties when prioritizing sets. RWLS has been evaluated on a large number of problem instances from the OR-Library and compared with other approaches. It is able to find all the best known solutions (BKS) and improve 14 of them, although it requires higher computational effort on several instances. RWLS is especially effective on the combinatorial OR-Library instances and can improve the best known solution to the hardest instance CYC11 considerably. RWLS is conceptually simple and has no instance-dependent parameters, which makes it a practical and easy-to-use USCP solver.
Combinatorial optimization; Unicost set covering problem; Row weighting local search;
http://www.sciencedirect.com/science/article/pii/S0377221715004282
Gao, Chao
Yao, Xin
Weise, Thomas
Li, Jinlong
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:166-1782015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:166-178
article
Optimizing mining complexes with multiple processing and transportation alternatives: An uncertainty-based approach
Mining complexes contain multiple sequential activities that are strongly interrelated. Extracting the material from different sources may be seen as the first main activity, and any change in the sequence of extraction of the mining blocks modifies the activities downstream, including blending, processing and transporting the processed material to final stocks or ports. Similarly, modifying the operating conditions at a given processing path or the transportation systems implemented may affect the suitability of a previously optimized mining sequence. This paper presents a method to generate mining, processing and transportation schedules that simultaneously account for the previously mentioned activities (or stages) of the mining complex. The method uses an initial solution generated with conventional optimizers and improves it by means of perturbations associated with three different levels of decision: block-based perturbations, operating-alternative-based perturbations and transportation-system-based perturbations. The method accounts for the geological uncertainty of several deposits by considering scenarios originating from combinations of their respective stochastic orebody simulations. The implementation of the method in a multipit copper operation shows its ability to reduce deviations from capacity and blending targets while improving the expected NPV (cumulative discounted cash flows), which highlights the importance of stochastic optimizers, given their ability to generate more value with less risk.
Metaheuristics; OR in natural resources; Mining complex; Stochastic orebody simulations; Operating alternatives;
http://www.sciencedirect.com/science/article/pii/S0377221715003720
Montiel, Luis
Dimitrakopoulos, Roussos
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:321-3342015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:321-334
article
An integrated approach for planning a long-term care network with uncertainty, strategic policy and equity considerations
Considering key uncertainties and health policy options in the reorganization of a long-term care (LTC) network is crucial. This study proposes a stochastic mixed integer linear programming model for planning the delivery of LTC services within a network of care where such aspects are modeled in an integrated manner. The model assists health care planners on how to plan the delivery of the entire range of LTC services – institutional, home-based and ambulatory services – when the main policy objective is the minimization of expected costs and while respecting satisficing levels of equity. These equity levels are modeled as constraints, ensuring the attainment of equity of access, equity of utilization, socioeconomic equity and geographical equity. The proposed model provides planners with key information on: when and where to locate services and with which capacity, how to distribute this capacity across services and patient groups, and which changes to the network of care are needed over time. Model outputs take into account the uncertainty surrounding LTC demand, and consider strategic health policy options adopted by governments. The applicability of the model is demonstrated through the resolution of a case study in the Great Lisbon region in Portugal with estimates on the equity-cost trade-off for several equity dimensions being provided.
OR in health services; LTC planning; Multi-service; Uncertainty; Equity;
http://www.sciencedirect.com/science/article/pii/S0377221715004865
Cardoso, Teresa
Oliveira, Mónica Duarte
Barbosa-Póvoa, Ana
Nickel, Stefan
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:93-1002015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:93-100
article
An optimal (r, Q) policy in a stochastic inventory system with all-units quantity discount and limited sharable resource
This paper is concerned with the single-item inventory system with a resource constraint and all-units quantity discount under continuous review, where demand is stochastic and discrete. In most actual inventory systems, the resource available for inventory management is limited, and the system can confront resource shortages by incurring additional costs. Considering the resource constraint as a soft constraint, alongside a quantity discount opportunity, makes the model more practical. An optimization problem is formulated for finding an optimal (r, Q) policy for the problem in which the per unit resource usage depends on the unit purchasing price. The properties of the cost function are investigated and then an algorithm based on a one-dimensional search procedure is proposed for finding an optimal (r, Q) policy which minimizes the expected system costs and converges to a global optimum. Based on the properties of the partially conditioned cost functions, the presented algorithm is modified such that its search path to the optimal policy is changed. Experimental results show that the performance of the modified version of the presented algorithm is much better than the original algorithm in various environments of test problems.
Inventory; (r, Q) policy; Stochastic demand; Limited sharable resource; All-units discount;
http://www.sciencedirect.com/science/article/pii/S0377221715004853
Tamjidzad, Shahrzad
Mirmohammadi, S. Hamid
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:101-1122015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:101-112
article
Integrated optimization of safety stock and transportation capacity
In this paper we consider a segment of a supply chain comprising an inventory and a transportation system that cooperate in the fulfillment of stochastic customer orders. The inventory is operated under a discrete time (r, s, q) policy with backorders. The transportation system consists of an in-house transportation capacity which can be extended by costly external transportation capacity (such as a third-party logistics provider). We show that in a system of this kind stock-outs and the resulting accumulation of inventory backorders introduce volatility in the workload of the transportation process. Geunes and Zeng (2001) have shown, for a base-stock system, that backordering decreases the variability of transportation orders. Our findings show that in inventory systems with order cycles longer than one period the opposite is true. In both cases, however, inventory decisions and transportation decisions must be taken simultaneously.
Inventory; Vehicle fleet size; Excess transportation capacity;
http://www.sciencedirect.com/science/article/pii/S0377221715004816
Tempelmeier, Horst
Bantel, Oliver
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:907-9152015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:907-915
article
An optimal plan of zero-defect single-sampling by attributes for incoming inspections in assembly lines
This paper proposes a nonlinear integer program for determining an optimal plan of zero-defect, single-sampling by attributes for incoming inspections in assembly lines. Individual parts coming to an assembly line differ in the non-conforming (NC) risk, NC severity, lot size, and inspection cost-effectiveness. The proposed optimization model is able to determine the inspection sample size for each of the parts in a resource constrained condition where a product’s NC risk is not a linear combination of NC risks of the individual parts. This paper develops a three-step solution procedure that effectively reduces the solution time for larger size problems commonly seen in assembly lines. The proposed optimization model provides insightful implications for quality management. For example, it reveals the principle of sample size decisions for heterogeneous, dependent parts waiting for incoming inspection, and provides a tool for quantifying the expected return from investing additional inspection resources. The optimization model builds a foundation for extensions to advanced inspection sampling plans.
Quality management; Incoming inspection; Quality attributes; Acceptance sampling; Nonlinear integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715004658
Qin, Ruwen
Cudney, Elizabeth A.
Hamzic, Zlatan
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:137-1432015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:137-143
article
Optimal loading of system with random repair time
This paper considers single-component repairable systems supporting different levels of workloads and subject to random repair times. The mission is successful if the system can perform a specified amount of work within the maximum allowed mission time. The system can work with different load levels, each corresponding to different productivity, time-to-failure distribution, and per time unit operation cost. A numerical algorithm is first suggested to evaluate mission success probability and conditional expected cost of a successful mission for the considered repairable system. The load optimization problem is then formulated and solved for finding the system load level that minimizes the expected mission cost subject to providing a desired level of the mission success probability. Examples with discrete and continuous load variation are provided to illustrate the proposed methodology. Effects of repair efficiency, repair time distribution, and maximum allowed time on the mission reliability and cost are also investigated through the examples.
Loading; Repair; Mission success probability; Mission cost; Mission time;
http://www.sciencedirect.com/science/article/pii/S0377221715004233
Levitin, Gregory
Xing, Liudong
Dai, Yuanshun
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:124-1362015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:124-136
article
Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research
Many years have passed since Baesens et al. published their benchmarking study of classification algorithms in credit scoring [Baesens, B., Van Gestel, T., Viaene, S., Stepanova, M., Suykens, J., & Vanthienen, J. (2003). Benchmarking state-of-the-art classification algorithms for credit scoring. Journal of the Operational Research Society, 54(6), 627–635.]. Interest in prediction methods for scorecard development remains strong. However, there have been several advancements including novel learning methods, performance measures and techniques to reliably compare different classifiers, which the credit scoring literature does not reflect. To close these research gaps, we update the study of Baesens et al. and compare several novel classification algorithms to the state-of-the-art in credit scoring. In addition, we examine the extent to which the assessment of alternative scorecards differs across established and novel indicators of predictive accuracy. Finally, we explore whether more accurate classifiers are managerially meaningful. Our study provides valuable insight for professionals and academics in credit scoring. It helps practitioners to stay abreast of technical advancements in predictive modeling. From an academic point of view, the study provides an independent assessment of recent scoring methods and offers a new baseline to which future approaches can be compared.
Data mining; Credit scoring; OR in banking; Forecasting benchmark;
http://www.sciencedirect.com/science/article/pii/S0377221715004208
Lessmann, Stefan
Baesens, Bart
Seow, Hsin-Vonn
Thomas, Lyn C.
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:310-3202015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:310-320
article
Defining line replaceable units
Defective capital assets may be quickly restored to their operational condition by replacing the item that has failed. The item that is replaced is called the Line Replaceable Unit (LRU), and the so-called LRU definition problem is the problem of deciding which item to replace upon each type of failure: when a replacement action is required in the field, service engineers can either replace the failed item itself or replace a parent assembly that holds the failed item. One option may be fast but expensive, while the other may take longer but cost less. We consider a maintenance organization that services a fleet of assets, so that unavailability due to maintenance downtime may be compensated by acquiring additional standby assets. The objective of the LRU-definition problem is to minimize the total cost of item replacement and the investment in additional assets, given a constraint on the availability of the fleet of assets. We link this problem to the literature. We also present two cases to show how the problem is treated in practice. We next model the problem as a mixed integer linear program, and we use a numerical experiment to illustrate the model and the potential cost reductions that using such a model may lead to.
Maintenance; Replacement; Integer programming; Line replaceable unit definition;
http://www.sciencedirect.com/science/article/pii/S0377221715004774
Parada Puig, J.E.
Basten, R.J.I.
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:874-8852015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:874-885
article
Continuous (s, S) policy with MMPP correlated demand
This work considers a continuous inventory replenishment system where demand is stochastic and dependent on the state of the environment. A Markov Modulated Poisson Process (MMPP) is utilized to model the demand process, where the corresponding embedded Markov chain represents the state of the environment. The equations to calculate the system inventory measures and the number of orders per unit time are obtained for a continuous, infinite horizon and dynamically changing (s, S) policy. An efficient optimization heuristic is presented and compared to the commonly used approach of approximating the demand-count process over the lead time with a normal distribution. An investigation of the MMPP demand process is considered where we quantify the impact of variability in the demand-count process which is due to auto-correlation. Our findings indicate that when demand correlation is high, a dynamic control, where the (s, S) policy changes with the state of the environment governing the MMPP, is highly superior to the commonly used “static” heuristics. We propose two dynamic policies of varying computational complexity and cost efficiency, depending on the class of the product (one for class A, and one for classes B and C), to handle such high-correlation situations.
Stochastic demand; Correlated demand; MMPP; Inventory systems; Ordering policies;
http://www.sciencedirect.com/science/article/pii/S0377221715004191
Nasr, Walid W.
Maddah, Bacel
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:916-9262015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:916-926
article
Pulsation in a competitive model of advertising-firm's cost interaction
The literature reveals a contradiction between theoretical results (the superiority of a uniform policy under a concave advertising response function) and empirical results (concavity of the advertising response function together with the superiority of a pulsation policy). To reconcile this difference, this paper offers a resolution based on (1) the concavity of the advertising response function; (2) the convexity of the firm's cost function; and (3) over-advertising. The resolution is reached upon maximizing the net profit per unit time over the infinite planning horizon subject to an exogenous advertising budget constraint. Theoretical results for monopolistic markets are found to largely generalize to competitive markets. A numerical example is introduced to gain more insight into the theoretical findings, and an approach is introduced and implemented to empirically assess the shape of a firm's cost function and the advertising policy to be employed.
Marketing; Advertising pulsation; Game theory; Regression; Shape of firm's cost function;
http://www.sciencedirect.com/science/article/pii/S0377221715003756
Mesak, Hani Ibrahim
Bari, Abdullahel
Lian, Qin
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:927-9352015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:927-935
article
Cautious label ranking with label-wise decomposition
In this paper, we are interested in the label ranking problem. We are more specifically interested in the recent trend of predicting partial but more accurate orders (i.e., orders making fewer incorrect statements) rather than complete ones. To do so, we propose a ranking method based on label-wise decomposition. We estimate an imprecise probabilistic model over each label rank and we infer a partial order from it using optimization techniques. This leads to new results concerning a particular bilinear assignment problem. Finally, we provide some experiments showing the feasibility of our method.
Label ranking; Label-wise decomposition; Assignment problem; Bilinear optimization;
http://www.sciencedirect.com/science/article/pii/S037722171500377X
Destercke, Sébastien
Masson, Marie-Hélène
Poss, Michael
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:949-9572015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:949-957
article
Consignment contract for mobile apps between a single retailer and competitive developers with different risk attitudes
Consider n mobile application (app) developers selling their software through a common platform provider (retailer), who offers a consignment contract with revenue sharing. Each app developer simultaneously determines the selling price of his app and the extent to which he invests in its quality. The demand for the app, which depends on both price and quality investment, is uncertain, so the risk attitudes of the supply chain members have to be considered. The members' equilibrium strategies are analyzed under different attitudes toward risk: risk-aversion, risk-neutrality and risk-seeking. We show that the retailer's utility function has no effect on the equilibrium strategies, and suggest schemes to identify these strategies for any utility function of the developers. Closed-form solutions are obtained under the exponential utility function.
Supply chain; Risk attitude; Consignment; Supplier competition;
http://www.sciencedirect.com/science/article/pii/S0377221715003884
Avinadav, Tal
Chernonog, Tatyana
Perlman, Yael
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:944-9482015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:944-948
article
Risk pricing in a non-expected utility framework
Risk prices are calculated as the certainty equivalents of risky assets, using a recently developed non-expected utility (non-EU) approach to quantitative risk assessment. The present formalism for the pricing of risk is computationally simple, realistic in the sense of behavioural economics and straightforward to apply in operational research and risk and decision analyses.
Risk analysis; Risk pricing; Certainty equivalent; Utility theory; Non-expected utility;
http://www.sciencedirect.com/science/article/pii/S0377221715003252
Geiger, Gebhard
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:730-7432015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:730-743
article
The multi-compartment vehicle routing problem with flexible compartment sizes
In this paper, a capacitated vehicle routing problem is discussed which occurs in the context of glass waste collection. Supplies of several different product types (glass of different colors) are available at customer locations. The supplies have to be picked up at their locations and moved to a central depot at minimum cost. Different product types may be transported on the same vehicle, however, while being transported they must not be mixed. Technically this is enabled by a specific device, which allows for separating the capacity of each vehicle individually into a limited number of compartments where each compartment can accommodate one or several supplies of the same product type. For this problem, a model formulation and a variable neighborhood search algorithm for its solution are presented. The performance of the proposed heuristic is evaluated by means of extensive numerical experiments. Furthermore, the economic benefits of introducing compartments on the vehicles are investigated.
Vehicle routing; Multiple compartments; Glass waste collection; Variable neighborhood search; Heuristics;
http://www.sciencedirect.com/science/article/pii/S0377221715004105
Henke, Tino
Speranza, M. Grazia
Wäscher, Gerhard
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:245-2582015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:245-258
article
Dependence among single stations in series and its applications in productivity improvement
The theory of constraints has been commonly used in production systems to improve productivity. Since the improvement on an upstream workstation may have an impact on its downstream servers, finding the true bottleneck is not trivial in a stochastic production line. Due to the analytical intractability of general tandem queues, we develop methods to quantify the dependence among stations through simulation. Dependence is defined by the contribution queue time at each station, and contribution factors are developed based on the insight from Friedman's reduction method and Jackson networks. In a tandem queue, the dependence among stations can be either diffusion or blocking, and their impact depends on the positions relative to the bottlenecks. Based on these results, we show that improving the performance of the system bottleneck may not be the most effective way to reduce system cycle time. Rather than making independence assumptions, the proposed method points out a promising direction and sheds light on the dependence present in practical systems.
Productivity; Simulation; Theory of constraints; Tandem queue;
http://www.sciencedirect.com/science/article/pii/S037722171500418X
Wu, Kan
Zhao, Ning
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:229-2442015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:229-244
article
Capacitated p-center problem with failure foresight
This paper deals with a generalized version of the capacitated p-center problem. The model takes into account the possibility that a center might suffer a disruption (being unavailable to serve some demand) and assumes that every site will be covered by its closest available center. The problem is of interest when the goal is to locate emergency centers while, at the same time, taking precautions against an unforeseen incident (natural disaster, labor strike, accident…) which can cause failure of a center. We consider different formulations for the problem and extensive computational tests are reported, showing the potentials and limits of each formulation in several types of instances. Finally, a preprocessing phase for fixing variables has been developed and different families of valid inequalities have been proposed to strengthen the most promising formulations, obtaining in some cases much better resolution times.
p-center; Capacities; Integer programming;
http://www.sciencedirect.com/science/article/pii/S0377221715004841
Espejo, Inmaculada
Marín, Alfredo
Rodríguez-Chía, Antonio M.
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:338-3382015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:338-338
article
Corrigendum to “Solving mixed model sequencing problem in assembly lines with serial workstations with work overload minimisation and interruption rules” [Eur. J. Oper. Res. 210 (2011) 495–513]
http://www.sciencedirect.com/science/article/pii/S0377221715005366
Bautista, Joaquín
Cano, Alberto
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:179-1902015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:179-190
article
Achieving customer satisfaction through product–service systems
The purpose of this research is to help manufacturing companies identify the key performance evaluation criteria for achieving customer satisfaction through Balanced Scorecard (BSC) and Multiple Criteria Decision Making (MCDM) approaches. To explore the causal relationships among the four dimensions of business performance in the Balanced Scorecard as well as their key performance criteria, a MCDM approach combining DEMATEL and ANP techniques is adopted. Subsequently, the MCDM framework is tested using the Delphi method, and a questionnaire survey is conducted in 24 manufacturing firms from Taiwan, Vietnam and Thailand.
Product–service system; Customer satisfaction; Balanced scorecard; DEMATEL; ANP;
http://www.sciencedirect.com/science/article/pii/S0377221715003902
Pan, Jeh-Nan
Nguyen, Hung Thi Ngoc
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:1-152015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:1-15
article
Quantitative models for managing supply chain risks: A review
As supply chain risk management has transitioned from an emerging topic to a growing research area, there is a need to classify different types of research and examine the general trends of this research area. This helps identify fertile research streams with great potential for further examination. This paper presents a systematic review of the quantitative and analytical models (i.e. mathematical, optimization and simulation modeling efforts) for managing supply chain risks. We use bibliometric and network analysis tools to generate insights that have not been captured in the previous reviews on the topic. In particular, we complete a systematic mapping of the literature that identifies the key research clusters/topics, interrelationships, and generative research areas that have provided the field with the foundational knowledge, concepts, theories, tools, and techniques. Some of our findings include: (1) quantitative analysis of supply chain risk is expanding rapidly; (2) European journals are the most popular research outlets for the dissemination of the knowledge developed by researchers in the United States and Asia; and (3) sustainability risk analysis is an emerging and fast-evolving research topic.
Supply chain risk; Uncertainty; Quantitative model; Review; Bibliometrics and network analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715003276
Fahimnia, Behnam
Tang, Christopher S.
Davarzani, Hoda
Sarkis, Joseph
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:686-6992015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:686-699
article
A classification of the literature on the planning of substitutable products
A company's assortment of products and corresponding inventory levels are constrained by available resources, such as production capacity, storage space, and capital to acquire the inventory. Thus, customers may not always be able to find their most preferred product at the time of purchase; this unsatisfied demand is often substituted with an alternative. In the extant literature, an increasing number of studies consider product substitution when planning product assortment, inventory, and capacity, in conjunction with pricing. In this paper we classify the literature on the planning of substitutable products published in the major OM and marketing journals during the past thirty years (1974–2013) and present a comprehensive taxonomy of the literature. One criterion is adopted to discuss modeling objectives, and three major criteria are provided to define the nature of product substitution, including the substitution mechanism, the substitution decision maker, and the direction of substitutability. We also identify research gaps to provide guidance for related research in the future.
Inventory; Product substitution; Choice model; Assortment decision; Pricing;
http://www.sciencedirect.com/science/article/pii/S0377221715002854
Shin, Hojung
Park, Soohoon
Lee, Euncheol
Benton, W.C.
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:72-822015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:72-82
article
Bounded growth of the bullwhip effect under a class of nonlinear ordering policies
This paper analyzes the bullwhip effect in multi-echelon supply chains under a general class of nonlinear ordering policies. A describing-function approach from control theory is used to derive closed-form formulas to predict amplification of order fluctuations along the supply chain. It is proven that with consideration of nonlinearity in the ordering policy, the magnitude of the bullwhip effect will eventually become bounded after growing through the first few stages of the supply chain. It is also proven that the average customer demand as well as the demand fluctuation frequency would directly affect the bounded magnitude, while the suppliers’ demand forecasting method has no effect at all. For illustration, analytical results for a class of order-up-to policies are derived and verified by numerical simulations. The proposed modeling framework holds the promise to not only explain empirical observations, but also serve as the basis for developing counteracting strategies against the bullwhip effect.
Bullwhip effect; Frequency domain analysis; Nonlinear; Order-up-to; Describing function;
http://www.sciencedirect.com/science/article/pii/S0377221715003677
Wang, Zhaodong
Wang, Xin
Ouyang, Yanfeng
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:259-2752015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:259-275
article
Joining the CCS club! The economics of CO2 pipeline projects
This paper examines the conditions for a widespread adoption of Carbon Capture, transport and Storage (CCS) by a group of emitters that can be connected to a common CO2 pipeline. It details a modeling framework aimed at assessing the critical CO2 emission charge required for each of the emitters to decide to implement capture capabilities. This model can be used to analyze how the tariff structure imposed on the CO2 pipeline operator modifies the overall cost of CO2 abatement via CCS. The framework is applied to the case of a real European CO2 pipeline project. We find that the obligation to use cross-subsidy-free pipeline tariffs has a minor impact on the minimum CO2 price required to adopt CCS. In contrast, the obligation to charge non-discriminatory prices can either impede the adoption of CCS or significantly raise that price. In addition, we compare two alternative regulatory frameworks for CO2 pipelines: a common European organization as opposed to a collection of national regulations. The results indicate that the institutional scope of that regulation has a limited impact on the adoption of CCS compared to the detailed design of the tariff structure imposed on pipeline operators.
OR in environment and climate change; Carbon capture and storage; CO2 pipeline; Club theory; Regulation;
http://www.sciencedirect.com/science/article/pii/S0377221715004245
Massol, Olivier
Tchung-Ming, Stéphane
Banal-Estañol, Albert
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:837-8492015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:837-849
article
Demand information and spot price information: Supply chains trading in spot markets
This paper investigates the effect of information updating on the members of a two-stage supply chain in the presence of a spot market. The supplier decides the contract price. New information becomes available as time progresses. The manufacturer updates his belief on demand and/or spot price and subsequently decides the contract quantity. The demand and spot price are correlated. Thus, the new demand information also updates the belief on the spot price, and vice versa. We model the problem with an information-updating Stackelberg game and derive unique equilibrium strategies. Previous studies have considered only the demand information and concluded that improved demand information always benefits the supplier. By contrast, we demonstrate that improved demand information benefits both the supplier and manufacturer if the correlation coefficient between the two uncertainties has a small positive value, and benefits the manufacturer but hurts the supplier otherwise. Moreover, superior spot price information benefits only the manufacturer and always hurts the supplier. Surprisingly, superior information fails to improve the performance of the supply chain and only changes the allocation of the profits between the supplier and manufacturer. Our findings likewise provide insights into when the supplier intends to use the contract channel and which type of information-updating facility or expertise to invest in if a choice must be made.
Supply chain management; Information updating; Spot market; Forward contract; Dual sourcing;
http://www.sciencedirect.com/science/article/pii/S0377221715004361
Zhao, Xuan
Xing, Wei
Liu, Liming
Wang, Shouyang
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:83-922015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:83-92
article
Determining the retailer's replenishment policy considering multiple capacitated suppliers and price-sensitive demand
This article presents a mixed integer nonlinear programming model to find the optimal selling price and replenishment policy for a particular type of product in a supply chain defined by a single retailer and multiple potential suppliers. Each supplier offers all-unit quantity discounts as an incentive mechanism. Multiple orders are allowed to be submitted to the selected suppliers during a repeating order cycle. The demand rate is considered to be not constant but dependent upon the selling price. The model provides the optimal number of orders and corresponding order quantities for the selected suppliers, and the optimal demand rate and selling price that maximize the total profit per time unit under suppliers’ capacity and quality constraints. In addition, we provide sufficient conditions under which there exists an optimal solution where the retailer only orders from one supplier. We also apply the Karush–Kuhn–Tucker conditions to investigate the impact of a supplier's capacity on the optimal sourcing strategy. The results show that there may exist a range of capacity values for the dominating supplier where the retailer's optimal sourcing strategy is to consider multiple suppliers without fully utilizing the dominating supplier's capacity. A numerical example is presented to illustrate the proposed model.
Supplier selection; Price-sensitive demand; Supply chain inventory; All-unit quantity discounts; Mixed integer nonlinear programming model;
http://www.sciencedirect.com/science/article/pii/S0377221715004762
Adeinat, Hamza
Ventura, José A.
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:335-3372015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:335-337
article
Decomposing profit efficiency using a slack-based directional distance function
This paper develops a slack-based decomposition of profit efficiency based on a directional distance function. It is an alternative to Cooper, Pastor, Aparicio, and Borras (2011).
Directional distance function; Slacks; Data envelopment analysis;
http://www.sciencedirect.com/science/article/pii/S0377221715004373
Färe, Rolf
Fukuyama, Hirofumi
Grosskopf, Shawna
Zelenyuk, Valentin
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:827-8362015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:827-836
article
The impact of information sharing, random yield, correlation, and lead times in closed loop supply chains
We investigate the impact of advance notice of product returns on the performance of a decentralised closed loop supply chain. The market demands and the product returns are stochastic and are correlated with each other. The returned products are converted into “as-good-as-new” products and used, together with new products, to satisfy the market demand. The remanufacturing process takes time and is subject to a random yield. We investigate the benefit of the manufacturer obtaining advance notice of product returns from the remanufacturer. We demonstrate that lead times, random yields and the parameters describing the returns play a significant role in the benefit of the advance notice scheme. Our mathematical results offer insights into the benefits of lead time reduction and the adoption of information sharing schemes.
Supply chain management; Closed loop supply chain; Information sharing; Random yield; Lead time;
http://www.sciencedirect.com/science/article/pii/S0377221715004269
Hosoda, Takamichi
Disney, Stephen M.
Gavirneni, Srinagesh
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:60-712015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:60-71
article
A comparison of column-generation approaches to the Synchronized Pickup and Delivery Problem
In the Synchronized Pickup and Delivery Problem (SPDP), user-specified transportation requests from origin to destination points have to be serviced by a fleet of homogeneous vehicles. The task is to find a set of minimum-cost routes satisfying pairing and precedence, capacities, and time windows. Additionally, temporal synchronization constraints couple the service times at the pickup and delivery locations of the customer requests in the following way: a request has to be delivered within prespecified minimum and maximum time lags (called ride times) after it has been picked up. The presence of these ride-time constraints severely complicates the subproblem of the natural column-generation formulation of the SPDP, so that it is not clear if their integration into the subproblem pays off in an integer column-generation approach. Therefore, we develop four branch-and-cut-and-price algorithms for the SPDP based on column-generation formulations that use different subproblems. Two of these subproblems are considered for the first time in this paper. We derive new dominance rules and labeling algorithms for their effective solution. Extensive computational results indicate that integrating either both types of ride-time constraints or only the maximum ride-time constraints into the subproblem results in the strongest overall approach.
Vehicle routing; Pickup and delivery; Temporal synchronization; Labeling algorithm; Branch-and-cut-and-price;
http://www.sciencedirect.com/science/article/pii/S0377221715005317
Gschwind, Timo
oai:RePEc:eee:ejores:v:247:y:2015:i:1:p:37-452015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:247:y:2015:i:1:p:37-45
article
Improved branching disjunctions for branch-and-bound: An analytic center approach
In classical branch-and-bound algorithms, the branching disjunction is often based on a single variable, which is a special case of the more general approach that involves multiple variables. In this paper, we present a new approach to generate good general branching disjunctions based on the shape of the polyhedron and interior-point concepts. The approach is based on approximating the feasible polyhedron using Dikin’s inscribed ellipsoid, calculated using the analytic center from interior-point methods. We use the fact that the width of the ellipsoid in a given direction has a closed form expression to formulate a quadratic problem whose optimal solution is a thin direction of the ellipsoid. While solving a quadratic problem at each node of the branch-and-bound tree is impractical, we use an efficient neighborhood search heuristic for its solution. We report computational results on hard mixed integer problems from the literature showing that the proposed approach leads to smaller branch-and-bound trees and a reduction in the computational time as compared with classical branching and strong branching. As the computation of the analytic center is a bottleneck, we finally test the approach within a general interior-point based Benders decomposition where the analytic center is readily available, and show clear dominance of the approach over classical branching.
Interior-point methods; Analytic center; Integer programming; Branch-and-bound; Generalized branching;
http://www.sciencedirect.com/science/article/pii/S0377221715004786
Elhedhli, Samir
Naoum-Sawaya, Joe
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:894-9062015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:894-906
article
Step out–Step in sequencing games
In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First, we show that if the value of a coalition in a relaxed sequencing game is bounded from above by the gains made by all possible neighbor switches, then the game has a non-empty core. After that, we show that this is the case for Step out–Step in sequencing games. Moreover, this paper provides a polynomial time algorithm to determine the values of the coalitions in Step out–Step in sequencing games.
(Cooperative) game theory; Sequencing games; Core;
http://www.sciencedirect.com/science/article/pii/S037722171500435X
Musegaas, M.
Borm, P.E.M.
Quant, M.
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:700-7072015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:700-707
article
An iterative algorithm for two level hierarchical time minimization transportation problem
This paper discusses a two level hierarchical time minimization transportation problem, in which the whole set of source–destination links consists of two disjoint partitions, namely Level-I and Level-II links. Some quantity of a homogeneous product is first shipped from sources to destinations by Level-I decision makers using only Level-I links, and on its completion the Level-II decision maker transports the remaining quantity of the product in an optimal fashion using only Level-II links. The objective is to find the feasible solution for the Level-I decision for which the corresponding optimal feasible solution for the Level-II decision maker is such that the sum of shipment times in Level-I and Level-II is minimum. A polynomial time iterative algorithm is proposed to solve the two level hierarchical time minimization transportation problem. At each iteration a lexicographic optimal solution of a restricted version of a related standard time minimization transportation problem is examined to generate a pair of Level-I and Level-II shipment times, and finally the global optimal solution is obtained by selecting the best out of these generated pairs. A numerical illustration is included in support of the theory.
Global optimization; Concave minimization; Transportation problem; Hierarchical optimization; Polynomial algorithm;
http://www.sciencedirect.com/science/article/pii/S0377221715002416
Sharma, Anuj
Verma, Vanita
Kaur, Prabhjot
Dahiya, Kalpana
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:850-8572015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:850-857
article
Optimal sequence of container ships in a string
Container ships in a string may not have the same capacity. Therefore, the sequence of ships affects the number of containers that are delayed at export ports due to demand uncertainty: for instance, “a large ship, followed by a small ship, then another large ship, and finally another small ship” is better than “a large ship, followed by another large ship, then a small ship, and finally another small ship”. We hence aim to determine the sequence of the ships in a string to minimize the delay of containers, without requiring the probability distribution functions of the future demand. We propose three rules to identify an optimal or near-optimal string. The rules have been proved to be effective based on extensive numerical experiments. A rough estimation indicates that over 6 million dollars per year could be saved for all liner services in the world by optimizing the sequences of ships.
Logistics; Liner container shipping; Maritime transportation; Ship fleet deployment; Robust optimization;
http://www.sciencedirect.com/science/article/pii/S0377221715004695
Wang, Shuaian
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:744-7492015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:744-749
article
Scheduling for data gathering networks with data compression
This paper analyzes scheduling in a data gathering network with data compression. The nodes of the network collect some data and pass them to a single base station. Each node can, at some cost, preprocess the data before sending it, in order to decrease its size. Our goal is to transfer all data to the base station within a given time, at the minimum possible cost. We prove that the decision version of this scheduling problem is NP-complete. Polynomial-time heuristic algorithms for solving the problem are proposed and tested in a series of computational experiments.
Scheduling; Data gathering networks; Data compression;
http://www.sciencedirect.com/science/article/pii/S0377221715004166
Berlińska, Joanna
oai:RePEc:eee:ejores:v:246:y:2015:i:3:p:815-8262015-08-06RePEc:eee:ejores
RePEc:eee:ejores:v:246:y:2015:i:3:p:815-826
article
Innovative menu of contracts for coordinating a supply chain with multiple mean-variance retailers
We consider the coordination challenge for a risk-neutral manufacturer which supplies multiple heterogeneous retailers. We find that the manufacturer can maximize its expected profit only if the expected profit of the supply chain is maximized, or equivalently supply chain coordination (SCC) is achieved. The target sales rebate (TSR) contract is commonly used in practice to achieve SCC. However, as we find in this paper, the presence of heterogeneity in retailers’ minimum expected profit requirements is the major reason that a single TSR contract and the related single hybrid contracts all fail to achieve SCC and maximize the manufacturer’s expected profit simultaneously. Thus, we develop an innovative menu of TSR contracts with fixed order quantity (TSR-FOQ). Although there are multiple contracts in a menu, we find that the manufacturer only needs to decide one basic TSR contract and two newly developed parameters, termed the risk-level indicator and the separation indicator, in applying the sophisticated menu of TSR-FOQ contracts. By adjusting the two indicators, the manufacturer can control the profit variance of the retailers and the separations of the component contracts of the menu. We further propose another sophisticated menu of TSR with minimum order quantity and quantity discount contracts, which gives each retailer a higher degree of freedom in the selection of order quantity. Differences between the two menus are analytically examined. Some meaningful managerial insights are generated.
Supply chain management; Risk management; Coordination; Mean-variance; Menu of contracts;
http://www.sciencedirect.com/science/article/pii/S0377221715004257
Chiu, Chun-Hung
Choi, Tsan-Ming
Hao, Gang
Li, Xun
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:744-7552014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:744-755
article
Environmental implications for online retailing
Recent press has highlighted the environmental benefits associated with online shopping, such as emissions savings from individual drivers, economies of scale in package delivery, and decreased inventories. We formulate a dual channel model for a retailer who has access to both online and traditional market outlets to analyze the impact of customer environmental sensitivity on its supply. In particular, we analyze stocking decisions for each channel incorporating price dependent demand, customer preference/utility for online channels, and channel related costs. We compare and contrast results from both deterministic and stochastic models, and utilize numerical examples to illustrate the implications of industry specific factors on these decisions. Finally, we compare and contrast the findings for disparate industries, such as electronics, books and groceries.
Retailing; Pricing; Environment; e-Commerce; Dual channel;
http://www.sciencedirect.com/science/article/pii/S0377221714004615
Carrillo, Janice E.
Vakharia, Asoo J.
Wang, Ruoxuan
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:685-6982014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:685-698
article
Managing raw material in supply chains
In this paper, we explore how firms can manage their raw material sourcing better by developing appropriate sourcing relationships with their raw material suppliers. We detail three empirical case studies of firms explaining their different raw material sourcing strategies: (a) firms can adopt a hands-off approach to raw material management, (b) firms can supply raw material directly to their suppliers, which may be beneficial for some agents in the supply chain, and (c) firms can bring their component suppliers together, and the resulting cooperation between suppliers can be beneficial for the supply chain. We then analytically model the three raw material scenarios encountered in our empirical work, examine the resulting profits along the supply chain, and extend the results to a competitive buyer scenario. Overall, our results show that active management of raw material sourcing can add value to supply chains.
Sourcing; Raw material; Supply chains;
http://www.sciencedirect.com/science/article/pii/S0377221714004950
Agrawal, Anupam
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:625-6352014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:625-635
article
An adaptive stochastic knapsack problem
We consider a stochastic knapsack problem in which the event of overflow results in the problem ending with zero return. We assume that there are n types of items available, each type in infinite supply. An item has an exponentially distributed random weight with a known mean depending on its type, and the item’s value is proportional to its weight with a given factor depending on the item’s type. At each stage, we must decide whether to stop or to continue and place an item of a selected type in the knapsack. An item’s weight is learned when it is placed in the knapsack. The objective of this problem is to find a policy that maximizes the expected total value. Using the framework of dynamic programming, the optimal policy is found when n=2 and a heuristic policy is suggested for n>2.
Decision process; Dynamic programming; Stochastic knapsack;
http://www.sciencedirect.com/science/article/pii/S037722171400530X
Chen, Kai
Ross, Sheldon M.
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:802-8092014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:802-809
article
The stochastic ordering of mean-preserving transformations and its applications
The stochastic variability measures the degree of uncertainty for random demand and/or price in various operations problems. Its ordering property under mean-preserving transformation allows us to study the impact of demand/price uncertainty on the optimal decisions and the associated objective values. Based on Chebyshev’s algebraic inequality, we provide a general framework for stochastic variability ordering under any mean-preserving transformation that can be parameterized by a single scalar, and apply it to a broad class of specific transformations, including the widely used mean-preserving affine transformation, truncation, and capping. The application to mean-preserving affine transformation rectifies an incorrect proof of an important result in the inventory literature, which has gone unnoticed for more than two decades. The application to mean-preserving truncation addresses inventory strategies in decentralized supply chains, and the application to mean-preserving capping sheds light on using option contracts for procurement risk management.
Uncertainty modeling; Stochastic variability; Mean-preserving transformation; Inventory management; Procurement risk management;
http://www.sciencedirect.com/science/article/pii/S0377221714005001
Zhu, Wanshan
Wu, Zhengping
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:865-8672014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:865-867
article
Notes on ‘Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis’
In our previous work published in this journal, we showed how the Hit-And-Run (HAR) procedure enables efficient sampling of criteria weights from a space formed by restricting a simplex with arbitrary linear inequality constraints. In this short communication, we note that the method for generating a basis of the sampling space can be generalized to also handle arbitrary linear equality constraints. This enables the application of HAR to sampling spaces that do not coincide with the simplex, thereby allowing the combined use of imprecise and precise preference statements. In addition, it has come to our attention that one of the methods we proposed for generating a starting point for the Markov chain was flawed. To correct this, we provide an alternative method that is guaranteed to produce a starting point that lies within the interior of the sampling space.
Multiple criteria analysis; Simulation; Uncertainty modeling;
http://www.sciencedirect.com/science/article/pii/S0377221714005396
van Valkenhoef, Gert
Tervonen, Tommi
Postmus, Douwe
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:593-6082014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:593-608
article
Synchronization in cross-docking networks: A research classification and framework
Cross-docking is a distribution strategy that enables the consolidation of less-than-truckload shipments into full truckloads without long-term storage. Due to the absence of a storage buffer inside a cross-dock, local and network-wide cross-docking operations need to be carefully synchronized. This paper proposes a framework specifying the interdependencies between different cross-docking problem aspects with the aim of supporting future research in developing decision models with practical and scientific relevance. The paper also presents a new general classification scheme for cross-docking research based on the inputs and outputs for each problem aspect. After classifying the existing cross-docking research, we conclude that the overwhelming majority of papers fail to consider the synchronization of local and network-wide cross-docking operations. Lastly, to highlight the importance of synchronization in cross-docking networks, two real-life illustrative problems are described that have not yet been addressed in the literature.
Transportation; Cross-dock; Cross-docking network; Synchronization; Literature review;
http://www.sciencedirect.com/science/article/pii/S0377221714002264
Buijs, Paul
Vis, Iris F.A.
Carlo, Héctor J.
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:699-7102014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:699-710
article
More than a second channel? Supply chain strategies in B2B spot markets
The emergence of B2B spot markets has greatly facilitated spot trading and impacted supply chain structures as well as the way commercial transactions take place between firms in many industries. While providing new opportunities, the B2B spot market also exposes participants to a price risk. This new business landscape raises some important questions on how the supplier and manufacturer should change their sales channel and procurement strategies and tailor their decisions to this changing environment. In this paper, we study the channel-choice, pricing and ordering/production decisions of the risk-averse supplier and manufacturer in a two-tier supply chain with a B2B spot market. Our analysis shows that, to benefit from the B2B spot market and control the risk exposure, the upstream supplier should develop an integrated channel-choice and pricing strategy. When the supplier adopts a dual-channel strategy, the equilibrium contract price decreases in the supplier’s risk attitude, but increases in the demand uncertainty. However, it first decreases and then increases in the manufacturer’s risk attitude and spot price volatility. We conclude that rather than simply being a second channel, the B2B spot market provides a strategic tool to supply chain members to achieve an advantageous position in their contract trading.
Supply chain management; Pricing; Risk management; Spot market; Channel strategy;
http://www.sciencedirect.com/science/article/pii/S0377221714005323
Xing, Wei
Liu, Liming
Wang, Shouyang
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:830-8412014-09-25RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:830-841
article
Spanning trees with variable degree bounds
In this paper, we introduce and study a generalization of the degree-constrained minimum spanning tree problem in which one of several available transmission systems (each with a different cost) may be installed on each edge. The degree of the endnodes of an edge depends on the system installed on it. We also discuss a particular case that arises in the design of wireless mesh networks (in this variant, the degree of the endnodes of an edge depends on the transmission system installed on it as well as on the length of the edge). We propose three classes of models using different sets of variables and compare the models and their linear programming relaxations from both a theoretical and a computational point of view. The computational results show that some of the proposed models are able to solve to optimality instances with 100 nodes under different scenarios.
OR in telecommunications networks; Spanning tree; Degree constraints; Wireless mesh networks;
http://www.sciencedirect.com/science/article/pii/S0377221714004573
Gouveia, L.
Moura, P.
Ruthmair, M.
Sousa, A.
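The base problem that this record generalizes, the degree-constrained minimum spanning tree, can be illustrated with a brute-force sketch (Python; the per-edge transmission systems and variable degree bounds of the paper are not modeled, and the uniform bound `max_degree` is an assumption of this note):

```python
from itertools import combinations

def dcmst(n, edges, max_degree):
    """Brute-force degree-constrained minimum spanning tree.

    n: number of nodes (0..n-1); edges: list of (u, v, cost);
    max_degree: uniform degree bound. Returns (best_cost, best_tree).
    """
    best_cost, best_tree = float("inf"), None
    for tree in combinations(edges, n - 1):   # a spanning tree has n-1 edges
        deg = [0] * n
        parent = list(range(n))               # union-find over this candidate
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v, _ in tree:
            deg[u] += 1; deg[v] += 1
            ru, rv = find(u), find(v)
            if ru == rv:                      # edge closes a cycle
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic or max(deg) > max_degree:
            continue
        cost = sum(c for _, _, c in tree)
        if cost < best_cost:
            best_cost, best_tree = cost, tree
    return best_cost, best_tree
```

On a 4-node graph with a cheap star around node 0, tightening the bound from 3 to 2 forbids the star and forces a Hamiltonian path, raising the optimal cost; the paper's variable, system-dependent bounds play on exactly this trade-off.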
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:644-662 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:644-662
article
Modeling lotsizing and scheduling problems with sequence dependent setups
Several production environments require simultaneous planning of the sizing and scheduling of sequences of production lots. The integration of sequencing decisions in lotsizing and scheduling problems has received increasing attention from the research community due to its applicability to real-world problems. A two-dimensional classification framework is proposed to survey and classify the main modeling approaches for integrating sequencing decisions in discrete-time lotsizing and scheduling models. The Asymmetric Traveling Salesman Problem can be an important source of ideas for developing more efficient models and methods for this problem. Following this research line, we also present a new formulation for the problem using commodity-flow-based subtour elimination constraints. Computational experiments are conducted to assess the performance of the various models, in terms of running times and upper bounds, when solving real-world size instances.
Production planning; Lotsizing and scheduling; Mixed-integer programming; Sequence-dependent setups; Computational benchmark;
http://www.sciencedirect.com/science/article/pii/S0377221714004251
Guimarães, Luis
Klabjan, Diego
Almada-Lobo, Bernardo
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:820-829 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:820-829
article
Scheduling the part supply of mixed-model assembly lines in line-integrated supermarkets
Line-integrated supermarkets constitute a novel in-house parts logistics concept for feeding mixed-model assembly lines. In this context, supermarkets are decentralized logistics areas located directly in each station. Here, parts are withdrawn from their containers by a dedicated logistics worker and sorted just-in-sequence (JIS) into a JIS-bin. From this bin, assembly workers fetch the parts required by the current workpiece and mount them during the respective production cycle. This paper treats the scheduling of the part supply processes within line-integrated supermarkets. The scheduling problem for refilling the JIS-bins is formalized and a complexity analysis is provided. Furthermore, a heuristic decomposition approach is presented and important managerial aspects are investigated.
In-house logistics; Mixed-model assembly lines; Automobile industry; Part supply;
http://www.sciencedirect.com/science/article/pii/S0377221714004524
Boysen, Nils
Emde, Simon
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:764-775 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:764-775
article
Mean-risk analysis with enhanced behavioral content
We study a mean-risk model derived from a behavioral theory of Disappointment with multiple reference points. One distinguishing feature of the risk measure is that it is based on mutual deviations of outcomes, not deviations from a specific target. We prove necessary and sufficient conditions for strict first and second order stochastic dominance, and show that the model is, in addition, a Convex Risk Measure. The model allows for richer, and behaviorally more plausible, risk preference patterns than competing models with equal degrees of freedom, including Expected Utility (EU), Mean–Variance (M-V), Mean-Gini (M-G), and models based on non-additive probability weighting, such as Dual Theory (DT). In asset allocation, the model allows a decision-maker to abstain from diversifying in a positive expected value risky asset if its performance does not meet a certain threshold, and gradually invest beyond this threshold, which appears more acceptable than the extreme solutions provided by either EU and M-V (always diversify) or DT and M-G (always plunge). In asset trading, the model provides no-trade intervals, like DT and M-G, in some, but not all, situations. An illustrative application to portfolio selection is presented. The model can provide an improved criterion for mean-risk analysis by injecting a new level of behavioral realism and flexibility, while maintaining key normative properties.
Risk analysis; Uncertainty modeling; Utility theory; Stochastic dominance; Convex risk measures;
http://www.sciencedirect.com/science/article/pii/S0377221714004846
Cillo, Alessandra
Delquié, Philippe
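A minimal sketch of a risk measure based on mutual deviations of outcomes rather than deviations from a fixed target (this is the classical Gini mean difference, used here only to illustrate the idea; it is not the authors' Disappointment-based functional, and the trade-off weight `k` is an assumption of this note):

```python
def mutual_deviation_risk(outcomes, probs):
    """Expected absolute difference of two independent draws from the
    distribution (Gini mean difference): sum_ij p_i * p_j * |x_i - x_j|."""
    return sum(pi * pj * abs(xi - xj)
               for xi, pi in zip(outcomes, probs)
               for xj, pj in zip(outcomes, probs))

def mean_risk_score(outcomes, probs, k):
    """Mean-risk evaluation: expected value penalized by k times the
    mutual-deviation risk measure."""
    mean = sum(x * p for x, p in zip(outcomes, probs))
    return mean - k * mutual_deviation_risk(outcomes, probs)
```

Because the penalty depends only on pairwise spread, a sure outcome always has zero risk, while a 50/50 gamble between 0 and 10 is penalized by its full dispersion.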
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:756-763 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:756-763
article
Sufficient conditions under which SSD- and MR-efficient sets are identical
Three approaches are commonly used for analyzing decisions under uncertainty: expected utility (EU), second-degree stochastic dominance (SSD), and mean-risk (MR) models, with the mean–standard deviation (MS) being the best-known MR model. Because MR models generally lead to different efficient sets and thus are a continuing source of controversy, the specific concern of this article is not to suggest another MR model. Instead, we show that the SSD- and MR-efficient sets are identical, as long as (a) the risk measure satisfies both positive homogeneity and consistency with respect to the Rothschild and Stiglitz (1970) definition(s) of increasing risk and (b) the choice set includes the riskless asset and satisfies a generalized location and scale property, which can be interpreted as a market model. Under these conditions, there is no controversy among MR models and they all have a decision-theoretic foundation. They also offer a convenient way to compare the estimation error related to the empirical implementation of different MR models.
Efficient sets; Utility theory; Generalized location and scale property; Mean-risk models; Second-degree stochastic dominance;
http://www.sciencedirect.com/science/article/pii/S037722171400486X
Schuhmacher, Frank
Auer, Benjamin R.
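Second-degree stochastic dominance between two discrete distributions on a common grid can be tested by comparing running integrals of the CDFs, a standard textbook check (illustrative only; this is not the paper's characterization of the efficient sets):

```python
def ssd_dominates(px, py, values):
    """True if X (pmf px) second-degree stochastically dominates Y (pmf py),
    both given on the same sorted grid `values`: the running integral of
    X's CDF must never exceed that of Y's CDF."""
    Fx = Fy = 0.0   # CDF values at the current grid point
    Ix = Iy = 0.0   # running integrals of the CDFs
    for i, v in enumerate(values):
        Fx += px[i]
        Fy += py[i]
        if i + 1 < len(values):
            w = values[i + 1] - v       # width to the next grid point
            Ix += Fx * w
            Iy += Fy * w
            if Ix > Iy + 1e-12:         # integral condition violated
                return False
    return True
```

A sure payoff of 1 SSD-dominates a 50/50 gamble between 0 and 2 (same mean, less risk), but not vice versa; every risk-averse expected-utility maximizer prefers the former.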
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:794-801 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:794-801
article
Analytic hierarchy process-hesitant group decision making
In this paper, we consider that the judgments provided by the decision makers (DMs) cannot be aggregated or revised, and we define them as hesitant judgments to describe the hesitancy experienced by the DMs in decision making. When hesitant judgments exist in analytic hierarchy process-group decision making (AHP-GDM), we call it AHP-hesitant group decision making (AHP-HGDM), an extension of AHP-GDM. Based on hesitant multiplicative preference relations (HMPRs) that collect the hesitant judgments, we develop a hesitant multiplicative programming method (HMPM) as a new prioritization method to derive ratio-scale priorities from HMPRs. The HMPM is discussed in detail with examples to show its advantages and characteristics. The practicality and effectiveness of our methods are illustrated by an example of water conservancy in China.
Linear programming; Analytic hierarchy process (AHP); Group decision making (GDM); Hesitant judgment; Hesitant multiplicative preference relation (HMPR);
http://www.sciencedirect.com/science/article/pii/S0377221714005025
Zhu, Bin
Xu, Zeshui
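For contrast with the HMPM proposed above, the classical single-valued case derives ratio-scale priorities from a multiplicative pairwise-comparison matrix, for example by the row geometric mean (a standard AHP prioritization method, not the authors' new one):

```python
import math

def geometric_mean_priorities(A):
    """Priorities from a multiplicative pairwise-comparison matrix A
    (A[i][j] estimates w_i / w_j) via the row geometric mean,
    normalized so the priorities sum to 1."""
    n = len(A)
    g = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(g)
    return [x / s for x in g]
```

For a perfectly consistent matrix built from weights (1, 2, 4), the method recovers the normalized weights (1/7, 2/7, 4/7) exactly.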
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:786-793 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:786-793
article
Investment under duality risk measure
An index satisfies the duality axiom if, whenever an agent who is uniformly more risk-averse than another accepts a gamble, the latter also accepts any less risky gamble under the index. Aumann and Serrano (2008) show that only one index defined for so-called gambles satisfies the duality and positive homogeneity axioms. We call it a duality index. This paper extends the definition of the duality index to all outcomes, including all gambles, and considers a portfolio selection problem in a complete market in which the agent’s target is to minimize the index of the utility of the relative investment outcome. By linking this problem to a series of Merton’s optimum consumption-like problems, the optimal solution is derived explicitly. It is shown that if the prior benchmark level is too high (which can be verified), then the investment risk will exceed any agent’s risk tolerance. If the benchmark level is reasonable, then the optimal solution coincides with that of one of Merton’s series of problems, but with a particular value of absolute risk aversion, which is given by an explicit algebraic equation as part of the optimal solution. According to our result, it is riskier to achieve the same surplus profit in a stable market than in a less stable market, which is consistent with common financial intuition.
Duality axiom; Duality risk measure; Duality index; Portfolio selection;
http://www.sciencedirect.com/science/article/pii/S0377221714005256
Xu, Zuo Quan
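The Aumann–Serrano (2008) index referenced above is the unique R > 0 solving E[exp(-g/R)] = 1 for a gamble g with positive mean and possible losses; a bisection sketch (the paper's extension to general outcomes is not implemented here):

```python
import math

def aumann_serrano_index(outcomes, probs):
    """Riskiness index R(g): the unique R > 0 with E[exp(-g/R)] = 1.
    f(R) = E[exp(-g/R)] decreases from above 1 (small R, losses dominate)
    to below 1 (large R, positive mean dominates), so bisection applies.
    The extreme endpoints themselves are never evaluated."""
    def f(R):
        return sum(p * math.exp(-x / R) for x, p in zip(outcomes, probs))
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the gamble paying +2 or -1 with equal odds, the equation reduces to u^3 - 2u^2 + 1 = 0 with u = e^(1/R), whose relevant root is the golden ratio, so R = 1/ln(phi).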
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:810-819 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:810-819
article
Impulse control of pension fund contributions, in a regime switching economy
In defined benefit pension plans, allowances are independent of the financial performance of the fund, and the sponsoring firm regularly pays contributions to limit deviations of fund assets from the mathematical reserve necessary for covering the promised liabilities. This paper proposes a method to optimize the timing and size of contributions in a regime switching economy. The model takes into consideration important market frictions such as transaction costs, late payments and illiquidity. The problem is solved numerically using dynamic programming and impulse control techniques. Our approach is based on parallel grids, with trinomial links, discretizing the asset return in each economic regime.
Pension fund; Impulse control; Regime switching; Transaction costs; Liquidity risk;
http://www.sciencedirect.com/science/article/pii/S0377221714004998
Hainaut, Donatien
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:663-673 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:663-673
article
Buyback contracts with price-dependent demands: Effects of demand uncertainty
We explore buyback contracts in a supplier–retailer supply chain where the retailer faces a price-dependent downward-sloping demand curve subject to uncertainty. Differentiated from the existing literature, this work focuses on analytically examining how the uncertainty level embedded in market demand affects the applicability of buyback contracts in supply chain management. To this end, we seek to characterize the buyback model in terms of only the demand uncertainty level (DUL). With this new research perspective, we have obtained some interesting new findings for buyback. For example, we find that (1) even though the supply chain’s efficiency will change over the DUL with a wholesale price-only contract, it will be maintained constantly at that of the corresponding deterministic demand setting with buyback, regardless of the DUL; (2) in the practice of buyback, the buyback issuer should adjust only the buyback price in reaction to different DULs while leaving the wholesale price unchanged from the corresponding deterministic demand setting; (3) only in the demand setting with an intermediate level of uncertainty (which is identified quantitatively in Theorem 5) is buyback provision beneficial simultaneously for the supplier, the retailer, and the supply chain system, while this is not the case in the other demand settings. This work reveals that the DUL can be a critical factor affecting the applicability of supply chain contracts.
Supply chain management; Buyback contract; Structural property; Stochastic price-dependent demand; Demand uncertainty level;
http://www.sciencedirect.com/science/article/pii/S0377221714004913
Zhao, Yingxue
Choi, Tsan-Ming
Cheng, T.C.E.
Sethi, Suresh P.
Wang, Shouyang
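For intuition, the classical newsvendor model with a buyback contract and exogenous prices (a simplification of this note; the paper's demand is price-dependent) yields the critical-fractile order quantity, shown here for Uniform(0, D) demand:

```python
def newsvendor_buyback_q(p, w, b, D):
    """Retailer's optimal order under a buyback contract with demand
    X ~ Uniform(0, D): critical fractile (p - w) / (p - b), where
    p = retail price, w = wholesale price, b = buyback price."""
    assert b < w < p
    return D * (p - w) / (p - b)

def expected_profit(q, p, w, b, D):
    """E[p * min(q, X) + b * (q - X)^+ - w * q] for X ~ Uniform(0, D),
    using E[min(q, X)] = q - q^2/(2D) and E[(q - X)^+] = q^2/(2D)."""
    sold = q - q * q / (2 * D)
    returned = q * q / (2 * D)
    return p * sold + b * returned - w * q
```

With p = 10, w = 6, b = 2 and D = 100, the critical fractile is 1/2, so the retailer orders 50; a grid check confirms that no other order quantity does better.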
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:609-624 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:609-624
article
Retail store scheduling for profit
In spite of its tremendous economic significance, the problem of sales staff schedule optimization for retail stores has received relatively scant attention. Current approaches typically attempt to minimize payroll costs by closely fitting a staffing curve derived from exogenous sales forecasts, oblivious to the ability of additional staff to (sometimes) positively impact sales. In contrast, this paper frames the retail scheduling problem in terms of operating profit maximization, explicitly recognizing the dual role of sales employees as sources of revenues as well as generators of operating costs. We introduce a flexible stochastic model of retail store sales, estimated from store-specific historical data, that can account for the impact of all known sales drivers, including the number of scheduled staff, and provide an accurate sales forecast at a high intra-day resolution. We also present solution techniques based on mixed-integer programming (MIP) and constraint programming (CP) to efficiently solve the complex mixed-integer non-linear programming (MINLP) scheduling problem with a profit-maximization objective. The proposed approach allows solving full weekly schedules to optimality, or near-optimality with a very small gap. On a case-study with a medium-sized retail chain, this integrated forecasting–scheduling methodology yields significant projected net profit increases on the order of 2–3% compared to baseline schedules.
Shift scheduling; Constraint programming; Mixed integer programming; Statistical forecasting; Retail;
http://www.sciencedirect.com/science/article/pii/S0377221714004561
Chapados, Nicolas
Joliveau, Marc
L’Ecuyer, Pierre
Rousseau, Louis-Martin
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:776-785 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:776-785
article
Decomposing technical inefficiency using the principle of least action
In for-profit organizations, profit efficiency decomposition is considered important since estimates of profit drivers are of practical use to managers in their decision making. Profit efficiency is traditionally attributed to two sources – technical efficiency and allocative efficiency. The contribution of this paper is a novel decomposition of technical efficiency that could be more practical when the firm under evaluation wants to achieve technical efficiency as quickly as possible. For this purpose, we show how a new version of the Measure of Inefficiency Proportions (MIP), which seeks the minimization of the total technical effort by the assessed firm, is a lower bound of the value of technical inefficiency associated with the directional distance function. The targets provided by the new MIP could be beneficial for firms since they specify how a firm may become technically efficient simply by decreasing one input or increasing one output, suggesting that each firm should focus its effort on a specific dimension (input or output). This approach is operationalized in a data envelopment analysis framework and applied to a dataset of airlines.
Data envelopment analysis; Technical efficiency decomposition; Closest targets;
http://www.sciencedirect.com/science/article/pii/S0377221714004895
Aparicio, Juan
Mahlberg, Bernhard
Pastor, Jesus T.
Sahoo, Biresh K.
oai:RePEc:eee:ejores:v:239:y:2014:i:3:p:636-643 2014-09-25 RePEc:eee:ejores
RePEc:eee:ejores:v:239:y:2014:i:3:p:636-643
article
Minmax regret 1-facility location on uncertain path networks
Let P be an undirected path graph of n vertices. Each edge of P has a positive length and a constant capacity. Every vertex has a nonnegative supply, which is unknown but lies in a given interval. The goal is to find a point on P at which to build a facility and move all vertex supplies to the facility such that the maximum regret is minimized. The previous best algorithm solves the problem in O(n log^2 n) time and O(n log n) space. In this paper, we present an O(n log n) time and O(n) space algorithm; our approach is based on new observations and algorithmic techniques.
Algorithms; Path networks; Uncertainty; Facility location; Minmax regret;
http://www.sciencedirect.com/science/article/pii/S0377221714005293
Wang, Haitao
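A brute-force sketch of a simplified minmax-regret 1-median on a path (edge capacities are ignored; candidate facilities are restricted to vertices, which is valid for the 1-median objective, and worst cases are checked at the 2^n extreme scenarios, since regret is convex in the supplies):

```python
from itertools import product

def minmax_regret_median(pos, intervals):
    """Brute-force minmax-regret 1-median on a path.
    pos: sorted vertex coordinates; intervals: (lo, hi) supply bounds
    per vertex. Returns (best_location, best_regret)."""
    def cost(x, s):
        # weighted 1-median cost of placing the facility at x
        return sum(si * abs(v - x) for si, v in zip(s, pos))

    scenarios = list(product(*[(lo, hi) for lo, hi in intervals]))
    best_x, best_reg = None, float("inf")
    for x in pos:
        # regret of x = worst-case gap to the scenario optimum
        reg = max(cost(x, s) - min(cost(y, s) for y in pos)
                  for s in scenarios)
        if reg < best_reg:
            best_x, best_reg = x, reg
    return best_x, best_reg
```

With vertices at 0, 5 and 10 and uncertain unit supplies at the endpoints, the middle vertex hedges against both extreme scenarios and achieves the smallest worst-case regret.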
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:997-1007 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:997-1007
article
Input–output substitutability and strongly monotonic p-norm least distance DEA measures
In DEA, there are two frameworks for efficiency assessment and targeting: the greatest and the least distance framework. The greatest distance framework provides us with the efficient targets that are determined by the farthest projections to the assessed decision making unit via maximization of the p-norm relative to either the strongly efficient frontier or the weakly efficient frontier. Non-radial measures belonging to the class of greatest distance measures are the slacks-based measure (SBM) and the range-adjusted measure (RAM). Whereas these greatest distance measures have traditionally been utilized because of their computational ease, least distance projections are quite often more appropriate than greatest distance projections from the perspective of managers of decision-making units because closer efficient targets may be attained with less effort. In spite of this desirable feature of the least distance framework, the least distance (in)efficiency versions of the additive measure, SBM and RAM do not even satisfy weak monotonicity. In this study, therefore, we introduce and investigate least distance p-norm inefficiency measures that satisfy strong monotonicity over the strongly efficient frontier. In order to develop these measures, we extend a free disposable set and introduce a tradeoff set that implements input–output substitutability.
Data envelopment analysis (DEA); Least distance efficiency/inefficiency measures; Strong monotonicity; Input–output substitutability; Free disposability;
http://www.sciencedirect.com/science/article/pii/S0377221714001684
Fukuyama, Hirofumi
Maeda, Yasunobu
Sekitani, Kazuyuki
Shi, Jianming
oai:RePEc:eee:ejores:v:218:y:2012:i:1:p:106-112 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:1:p:106-112
article
An optimal replenishment policy for deteriorating items with effective investment in preservation technology
In this paper, considering the amount invested in preservation technology and the replenishment schedule as decision variables, we formulate an inventory model with a time-varying rate of deterioration and partial backlogging. The objective is to find the optimal replenishment and preservation technology investment strategies that maximize the total profit per unit time. For any given preservation technology cost, we first prove that the optimal replenishment schedule not only exists but is unique. Next, under a given replenishment schedule, we show that the total profit per unit time is a concave function of the preservation technology cost. We then provide a simple algorithm to determine the optimal preservation technology cost and replenishment schedule for the proposed model. We use numerical examples to illustrate the model.
Inventory; Deterioration; Partial backlogging; Preservation technology investment;
http://www.sciencedirect.com/science/article/pii/S0377221711009350
Dye, Chung-Yuan
Hsieh, Tsu-Pang
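The outer search exploits concavity of profit in the preservation investment; any unimodal maximization routine works, for example golden-section search (a generic sketch, with the inner replenishment subproblem abstracted as the callable `f`, an assumption of this note):

```python
import math

def golden_section_max(f, a, b, tol=1e-9):
    """Maximize a unimodal (e.g. concave) function f on [a, b] by
    golden-section search; returns the approximate maximizer."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):                     # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

In the paper's setting, `f` would map a preservation investment level to the profit of the corresponding optimal replenishment schedule; concavity guarantees the bracketing logic is sound.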
oai:RePEc:eee:ejores:v:226:y:2013:i:1:p:122-131 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:1:p:122-131
article
Trends in efficiency in response to regulatory reforms: The case of Indian and Pakistani commercial banks
This paper attempts to estimate trends in the efficiency levels of Indian and Pakistani commercial banks between 1985 and 2003, a time period which encompasses two phases of significant change to the regulation of the financial sector in both countries. Our efficiency estimates show that, during the initial years of the post-reform period, a reduction in efficiency is observed for banks in both countries. However, efficiency levels were found to have increased subsequently, suggesting a period of initial adjustment throughout much of the 1990s followed by a correction in the latter part of the sample period.
Efficiency; Technology; Deregulation; Privatisation; Indian banking; Pakistani banking;
http://www.sciencedirect.com/science/article/pii/S0377221712008168
Jaffry, Shabbar
Ghulam, Yaseen
Cox, Joe
oai:RePEc:eee:ejores:v:234:y:2014:i:2:p:546-560 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:2:p:546-560
article
International portfolio choice and political instability risk: A multi-objective approach
The benefits derived from international portfolio diversification into foreign nations (including the less developed countries) are well documented, yet this practice is discouraged due to market imperfections such as political instability. In practice, nations may be differentiated further by many aspects, such as border controls or political and social trends, which constrain private transactions and financial decisions. This paper attempts to examine (1) whether the home asset bias in a portfolio holding is associated with higher political instability risk, and (2) to what extent international diversification among stocks, in the presence of such risk, outperforms domestic stock portfolios. Using alternative instability risk proxies in the context of a discrete-time version of mean–variance framework, we corroborate the impact of this type of risk on international portfolio investment decisions.
International diversification; Political instability risk; Equity home bias; Corruption-averse; Mean–variance theory;
http://www.sciencedirect.com/science/article/pii/S0377221713000544
Smimou, K.
oai:RePEc:eee:ejores:v:236:y:2014:i:2:p:488-498 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:2:p:488-498
article
Knapsack problems with sigmoid utilities: Approximation algorithms via hybrid optimization
We study a class of non-convex optimization problems involving sigmoid functions. We show that sigmoid functions impart a combinatorial element to the optimization variables and make the global optimization computationally hard. We formulate versions of the knapsack problem, the generalized assignment problem and the bin-packing problem with sigmoid utilities. We merge approximation algorithms from discrete optimization with algorithms from continuous optimization to develop approximation algorithms for these NP-hard problems with sigmoid utilities.
Sigmoid utility/S-curve; Knapsack problem; Generalized assignment problem; Bin-packing problem; Multi-choice knapsack problem; Human attention allocation;
http://www.sciencedirect.com/science/article/pii/S0377221713010199
Srivastava, Vaibhav
Bullo, Francesco
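The combinatorial element of sigmoid utilities can be seen in a small dynamic program over a discretized budget (a brute-force illustration of this note, not the paper's hybrid approximation algorithms; the midpoint parameterization of the sigmoid is an assumption):

```python
import math

def sigmoid(x, mid, steep=1.0):
    """S-curve utility with inflection point at `mid`."""
    return 1.0 / (1.0 + math.exp(-steep * (x - mid)))

def allocate(B, mids):
    """Maximize the sum of sigmoid utilities over integer allocations of
    at most B budget units to len(mids) tasks, by DP over the budget."""
    n = len(mids)
    NEG = float("-inf")
    dp = [[NEG] * (B + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i, mid in enumerate(mids):
        for b in range(B + 1):
            if dp[i][b] == NEG:
                continue
            for x in range(B - b + 1):       # units given to task i
                val = dp[i][b] + sigmoid(x, mid)
                if val > dp[i + 1][b + x]:
                    dp[i + 1][b + x] = val
    return max(dp[n])
```

Concentrating the budget past one task's inflection point beats spreading it evenly across tasks, which is exactly the non-convex, combinatorial behavior the paper's algorithms must handle.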
oai:RePEc:eee:ejores:v:230:y:2013:i:3:p:656-665 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:230:y:2013:i:3:p:656-665
article
A vague set based decision support approach for evaluating research funding programs
Scientific Research Assessment (SRA) is receiving increasing attention in both academia and industry. More and more organizations are recognizing the importance of SRA for the optimal use of scarce resources. In this paper, a vague set theory based decision support approach is proposed for SRA. Specifically, a family of parameterized S-OWA operators is developed for the aggregation of vague assessments. The proposed approach is applied to evaluate the research funding programs of the National Natural Science Foundation of China (NSFC). It provides a soft and flexible way to help decision makers in the NSFC make their decisions. The proposed approach can also be used by other agencies to make similar assessments.
Decision analysis; Vague set theory; OWA operator; Decision support approach; Scientific research assessment;
http://www.sciencedirect.com/science/article/pii/S0377221713003597
Wang, Jue
Xu, Wei
Ma, Jian
Wang, Shouyang
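The base OWA operator underlying the S-OWA family aggregates values through weights applied to their ranks rather than to their sources (the standard Yager definition; the vague-set extension of the paper is not modeled here):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort values in descending order and
    take the weighted sum; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v
               for w, v in zip(weights, sorted(values, reverse=True)))
```

The weight vector tunes the operator between extremes: (1, 0, ..., 0) gives the maximum, (0, ..., 0, 1) the minimum, and uniform weights the plain mean, which is how parameterized families like S-OWA interpolate between optimistic and pessimistic aggregation.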
oai:RePEc:eee:ejores:v:235:y:2014:i:2:p:412-430 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:2:p:412-430
article
Storage yard operations in container terminals: Literature overview, trends, and research directions
Inbound and outbound containers are temporarily stored in the storage yard at container terminals. The combination of increasing container demand and scarce storage yard capacity creates complex operational challenges for storage yard managers. This paper presents an in-depth overview of storage yard operations, including the material handling equipment used, and highlights current industry trends and developments. A classification scheme for storage yard operations is proposed and used to classify scientific journal papers published between 2004 and 2012. The paper also discusses and challenges the current operational paradigms on storage yard operations. Lastly, the paper identifies new avenues for academic research based on current trends and developments in the container terminal industry.
Transportation; Container terminals; Literature overview; Yard operations; Material handling equipment; Stack;
http://www.sciencedirect.com/science/article/pii/S0377221713008771
Carlo, Héctor J.
Vis, Iris F.A.
Roodbergen, Kees Jan
oai:RePEc:eee:ejores:v:224:y:2013:i:3:p:497-506 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:224:y:2013:i:3:p:497-506
article
Inventory sharing in integrated network design and inventory optimization with low-demand parts
Service Parts Logistics (SPL) problems induce strong interaction between network design and inventory stocking due to the high costs and low demands of parts and response-time-based service requirements. These pressures motivate the practice of inventory sharing among stocking facilities. We incorporate inventory sharing effects within a simplified version of the integrated SPL problem, capturing the sharing fill rates in 2-facility inventory sharing pools. The problem decides which facilities in which pools should be stocked and how the demand should be allocated to stocked facilities, given full inventory sharing between the facilities within each pool, so as to minimize the total facility, inventory and transportation costs subject to a time-based service level constraint. Our analysis of the single pool problem leads us to model this otherwise non-linear integer optimization problem as a modified version of the binary knapsack problem. Our numerical results show that a greedy heuristic for a network of 100 facilities is on average within 0.12% of the optimal solution. Furthermore, we observe that a greater degree of sharing occurs when a large amount of customer demand is located in the area overlapping the time windows of both facilities in 2-facility pools.
Inventory; Logistics; Service parts logistics; Inventory sharing; Network design;
http://www.sciencedirect.com/science/article/pii/S0377221712007126
Iyoob, Ilyas Mohamed
Kutanoglu, Erhan
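The greedy skeleton for a binary knapsack, the kind of heuristic evaluated above (illustrative only; the paper's modified knapsack carries service-level structure not shown here):

```python
def greedy_knapsack(items, capacity):
    """Greedy 0-1 knapsack by value density. items: list of
    (value, weight); returns (total_value, chosen item indices)."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)              # best value per unit weight first
    total, used, chosen = 0.0, 0.0, []
    for i in order:
        v, w = items[i]
        if used + w <= capacity:              # take the item if it fits
            total += v
            used += w
            chosen.append(i)
    return total, chosen
```

Density-ordered greedy is fast but only a heuristic: on the instance below it returns 160, although packing the second and third items would give the optimal 220. The paper's near-optimal average gap (0.12%) shows how well such heuristics can do on structured instances.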
oai:RePEc:eee:ejores:v:226:y:2013:i:1:p:46-52 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:1:p:46-52
article
Outsourcing and scheduling for two-machine ordered flow shop scheduling problems
This paper considers a two-machine ordered flow shop problem in which each job is either processed through the in-house system or outsourced to a subcontractor. For in-house jobs, a schedule is constructed and its performance is measured by the makespan. Jobs processed by subcontractors incur an outsourcing cost. The objective is to minimize the sum of the makespan and the total outsourcing cost. Since this problem is NP-hard, we present an approximation algorithm. Furthermore, we consider three special cases in which job j has a processing time requirement pj and machine i has a characteristic value qi. The first case assumes the time job j occupies machine i is equal to its processing requirement divided by the characteristic value of machine i, that is, pj/qi. The second (third) case assumes that the time job j occupies machine i is equal to the maximum (minimum) of its processing requirement and the characteristic value of the machine, that is, max{pj,qi} (min{pj,qi}). We show that the first and second cases are NP-hard and that the third case is polynomially solvable.
Scheduling; Outsourcing; Ordered flow shop; Computational complexity;
http://www.sciencedirect.com/science/article/pii/S0377221712008132
Chung, Dae-Young
Choi, Byung-Cheon
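A brute-force sketch of the base problem: enumerate outsourced subsets and schedule the in-house jobs with Johnson's rule, which minimizes the two-machine flow-shop makespan (the paper's approximation algorithm and the ordered-shop special cases are not reproduced here):

```python
from itertools import combinations

def johnson_makespan(jobs):
    """Two-machine flow-shop makespan under the Johnson-rule sequence.
    jobs: list of (a, b) processing times on machines 1 and 2."""
    front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    t1 = t2 = 0
    for a, b in front + back:
        t1 += a                       # machine 1 finishes job at t1
        t2 = max(t2, t1) + b          # machine 2 starts when both are free
    return t2

def best_outsourcing(jobs, out_costs):
    """Enumerate outsourced subsets S; objective = in-house makespan
    (Johnson) + total outsourcing cost over S. Returns (cost, S)."""
    n = len(jobs)
    best = (float("inf"), None)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            inhouse = [jobs[i] for i in range(n) if i not in S]
            cost = johnson_makespan(inhouse) + sum(out_costs[i] for i in S)
            best = min(best, (cost, S))
    return best
```

With prohibitive outsourcing costs everything stays in-house; with cheap costs the whole workload is subcontracted and the makespan term vanishes.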
oai:RePEc:eee:ejores:v:234:y:2014:i:3:p:830-838 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:3:p:830-838
article
When should service firms provide free experience service?
By providing a free experience service, a service firm can attract more uninformed customers. However, doing so could adversely affect the decisions of delay-sensitive, informed customers. In this paper, we study a priority queueing system with free experience services. After deriving the expected customer waiting time, we study customer behavior in equilibrium. We then construct the service firm’s revenue function and obtain an optimal strategy for the service firm. Our results suggest that when the market size of informed customers is relatively small, the firm should consider providing free experience services for uninformed customers. Conversely, if the demand rate of potential informed customers is quite high, the firm should ignore uninformed customers.
Queueing; Revenue management; OR in service industries; Experience service; Delay-sensitive;
http://www.sciencedirect.com/science/article/pii/S037722171300859X
Zhou, Wenhui
Lian, Zhaotong
Wu, Jinbiao
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:149-161 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:149-161
article
Analysis of job assignment with batch arrivals among heterogeneous servers
We revisit the problem of job assignment to multiple heterogeneous servers in parallel. The system under consideration, however, has a few unique features. Specifically, repair jobs arrive to the queueing system in batches according to a Poisson process. In addition, the servers are heterogeneous and the service time distributions of the individual servers are general. The objective is to optimally assign each job within a batch arrival so as to minimize the long-run average number of jobs in the entire system. We focus on the class of static assignment policies, where jobs are routed to servers upon arrival according to pre-determined probabilities. We solve the model analytically and derive the structural properties of the optimal static assignment. We show that when the traffic is below a certain threshold, it is better not to assign any jobs to slower servers. As traffic increases (either due to an increase in the job arrival rate or in the batch size), more of the slower servers are utilized. We give an explicit formula for computing the threshold. Finally, we compare and evaluate the performance of the static assignment policy against two dynamic policies, namely the shortest expected completion policy and the shortest queue policy.
Job assignment; Parallel queues; Batch arrival; Threshold policy;
http://www.sciencedirect.com/science/article/pii/S0377221711008046
Zhang, Zhongju
Daigle, John
oai:RePEc:eee:ejores:v:220:y:2012:i:1:p:286-2942014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:220:y:2012:i:1:p:286-294
article
Revisiting corporate growth options in the presence of state-dependent cashflow risk
Given a non-trivial market price of risk, we study the impact of state-dependent cashflow risk on the optimal investment policy and on the ensuing value of an unlevered firm that holds the option of scaling up cashflows from its assets in place upon incurring an irreversible cost. The firm’s investment decision and value are studied as a function of the market price of risk and of the degree of state dependence in cashflow risk.
Corporate growth options; State-dependent risk; Stock pricing; Market price of risk;
http://www.sciencedirect.com/science/article/pii/S0377221711009003
Sbuelz, Alessandro
Caliari, Marco
oai:RePEc:eee:ejores:v:226:y:2013:i:1:p:77-842014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:1:p:77-84
article
Nonparametric predictive reliability of series of voting systems
Nonparametric Predictive Inference (NPI) for system reliability reflects the dependence of the reliabilities of similar components due to limited knowledge from testing. NPI has recently been presented for the reliability of a single voting system consisting of multiple types of components. The components are all assumed to play the same role within the system, but with regard to their reliability, components of different types are assumed to be independent. The information from tests is available per type of component. This paper presents NPI for systems with subsystems in a series structure, where all subsystems are voting systems and components of the same type can be in different subsystems. As NPI uses only a few modelling assumptions, system reliability is quantified by lower and upper probabilities, reflecting the limited information in the test data. The results are illustrated by examples, which also highlight important aspects of redundancy and diversity for system reliability.
k-out-of-m system; Lower and upper probabilities; Nonparametric predictive inference; Redundancy; System reliability; Voting system;
http://www.sciencedirect.com/science/article/pii/S0377221712008156
Aboalkhair, Ahmad M.
Coolen, Frank P.A.
MacPhee, Iain M.
oai:RePEc:eee:ejores:v:225:y:2013:i:3:p:472-4782014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:225:y:2013:i:3:p:472-478
article
Generalization of the weighted voting method using penalty functions constructed via faithful restricted dissimilarity functions
In this paper we present a generalization of the weighted voting method used in the exploitation phase of decision-making problems represented by preference relations. For each row of the preference relation we take the aggregation function (from a given set) that provides the value least dissimilar with all the elements in that row. Such a value is obtained by means of the selected penalty function. The relation between the concepts of penalty function and dissimilarity has prompted us to study a construction method for penalty functions from the well-known restricted dissimilarity functions. The development of this method has led us to consider under which conditions restricted dissimilarity functions are faithful. We present a characterization theorem of such functions using automorphisms. We also consider under which conditions penalty functions can be built from Kolmogoroff and Nagumo aggregation functions. In this setting, we propose a new generalization of the weighted voting method in terms of single-variable functions. We conclude with a real, illustrative medical case, followed by conclusions and future research lines.
Restricted dissimilarity function; Penalty function; Selection process; Weighted voting method;
http://www.sciencedirect.com/science/article/pii/S0377221712007369
Bustince, H.
Jurio, A.
Pradera, A.
Mesiar, R.
Beliakov, G.
oai:RePEc:eee:ejores:v:216:y:2012:i:2:p:326-3332014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:2:p:326-333
article
Transfer pricing in a dynamic marketing-operations interface
A transfer price mechanism is proposed to coordinate the strategies of the marketing and operations functional areas operating in a dynamic interface environment in a decentralized firm. Marketing and operations are strategic decision-makers in a differential game, in which marketing has price and advertising and operations has production as control variables, and advertising goodwill and production backlog are state variables. A constant transfer price is entered into the objective functionals for marketing and operations, and subgame perfect feedback strategies are derived for price, advertising, and production as functions of the state variables. The feedback strategies allow a solution for the dynamic system involving goodwill and backlog, and the total payoff to the firm, the sum of the payoffs to marketing and operations, is determined as a function of the transfer price. Finally, for certain parameter conditions an interior maximum of the payoff function is achieved, and the optimal transfer price is identified.
Transfer price; Marketing-operations interface; Differential game;
http://www.sciencedirect.com/science/article/pii/S0377221711006746
Erickson, Gary M.
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:459-4692014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:459-469
article
An Ant Colony Optimisation algorithm for solving the asymmetric traffic assignment problem
In this paper we propose an Ant Colony Optimisation (ACO) algorithm for defining the signal settings on urban networks following a local approach. This consists of optimising the signal settings of each intersection of an urban network as a function only of the traffic flows at the accesses to that intersection, while taking account of the effects of signal settings on costs and on user route choices. This problem, also known as Local Optimisation of Signal Settings (LOSS), has been widely studied in the literature and can be formulated as an asymmetric assignment problem. The proposed ACO algorithm is based on two kinds of behaviour of artificial ants that together solve the LOSS problem: traditional behaviour based on the response to pheromones, for simulating user route choice, and innovative behaviour based on the pressure of an ant stream, for solving the signal setting definition problem. Our results on real-scale networks show that the proposed approach obtains the solution in less time, but with the same accuracy, as traditional MSA (Method of Successive Averages) approaches.
Traffic; Ant Colony Optimisation; Signal settings design; Stochastic traffic assignment;
http://www.sciencedirect.com/science/article/pii/S0377221711008630
D’Acierno, Luca
Gallo, Mariano
Montella, Bruno
oai:RePEc:eee:ejores:v:217:y:2012:i:2:p:404-4162014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:2:p:404-416
article
An efficient Differential Evolution based algorithm for solving multi-objective optimization problems
In the present study, a modified variant of the Differential Evolution (DE) algorithm for solving multi-objective optimization problems is presented. The proposed algorithm, named the Multi-Objective Differential Evolution Algorithm (MODEA), utilizes the advantages of Opposition-Based Learning for generating an initial population of potential candidates and the concept of random localization in the mutation step. Finally, it introduces a new selection mechanism for generating a well-distributed Pareto optimal front. The performance of the proposed algorithm is investigated on a set of nine bi-objective and five tri-objective benchmark test functions, and the results are compared with some recently modified versions of DE for MOPs and some other Multi-Objective Evolutionary Algorithms (MOEAs). The empirical analysis of the numerical results shows the efficiency of the proposed algorithm.
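Opposition-Based Learning, mentioned in this abstract as the initialization device, keeps whichever of a random candidate and its "opposite" point scores better. A minimal sketch of that idea — the bounds, objective, and population size are illustrative assumptions, not taken from the paper:

```python
import random

def obl_init(pop_size, lb, ub, fitness, rng=random.Random(0)):
    """Opposition-based initialization: for each random point x, form its
    opposite x_opp = lb + ub - x (per coordinate) and keep the fitter of the two."""
    pop = []
    for _ in range(pop_size):
        x = [rng.uniform(l, u) for l, u in zip(lb, ub)]
        x_opp = [l + u - xi for xi, l, u in zip(x, lb, ub)]
        pop.append(min(x, x_opp, key=fitness))
    return pop

# Example: initialize a population for minimizing the sphere function on [-5, 5]^2.
sphere = lambda x: sum(v * v for v in x)
pop = obl_init(10, [-5.0, -5.0], [5.0, 5.0], sphere)
```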
Evolutionary computation; Global optimization; Multiple objective programming; Opposition-Based Learning; Random localization;
http://www.sciencedirect.com/science/article/pii/S0377221711008538
Ali, Musrrat.
Siarry, Patrick
Pant, Millie.
oai:RePEc:eee:ejores:v:234:y:2014:i:2:p:459-4682014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:2:p:459-468
article
Optimal multi-period mean–variance policy under no-shorting constraint
We consider in this paper the mean–variance formulation in multi-period portfolio selection under no-shorting constraint. Recognizing the structure of a piecewise quadratic value function, we prove that the optimal portfolio policy is piecewise linear with respect to the current wealth level, and derive the semi-analytical expression of the piecewise quadratic value function. One prominent feature of our findings is the identification of a deterministic time-varying threshold for the wealth process and its implications for market settings. We also generalize our results in the mean–variance formulation to utility maximization with no-shorting constraint.
Multi-period portfolio selection; Multi-period mean–variance formulation; Expected utility maximization; No-shorting;
http://www.sciencedirect.com/science/article/pii/S0377221713001732
Cui, Xiangyu
Gao, Jianjun
Li, Xun
Li, Duan
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:594-6042014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:594-604
article
Resequencing of mixed-model assembly lines: Survey and research agenda
Nowadays, mixed-model assembly lines are applied in a wide range of industries to mass-produce customized products to order, e.g., in the automobile industry. An important decision problem in this context, which has received a lot of attention from researchers and practitioners, is the sequencing problem, which decides on the succession of workpieces launched down the line. However, if multiple departments with diverging sequencing objectives are to be passed, or if unforeseen disturbances such as machine breakdowns or material shortages occur, resequencing a given production sequence often becomes equally essential. This paper reviews existing research on resequencing in a mixed-model assembly line context. Important problem settings, alternative buffer configurations, and the resulting decision problems are described. Finally, future research needs are identified, as some relevant real-world resequencing settings have not been dealt with in the literature so far.
Mixed-model assembly line; Resequencing; Survey;
http://www.sciencedirect.com/science/article/pii/S0377221711007284
Boysen, Nils
Scholl, Armin
Wopperer, Nico
oai:RePEc:eee:ejores:v:235:y:2014:i:1:p:138-1482014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:1:p:138-148
article
An approach to optimize block surgical schedules
We provide an approach to optimize a block surgical schedule (BSS) that adheres to the block scheduling policy, using a new type of newsvendor-based model. We assume that strategic decisions assign a specialty to each Operating Room (OR) day and deal with BSS decisions that assign sub-specialties to time blocks, determining block duration as well as sequence in each OR each day with the objective of minimizing the sum of expected lateness and earliness costs. Our newsvendor approach prescribes the optimal duration of each block and the best permutation, obtained by solving the sequential newsvendor problem, determines the optimal block sequence. We obtain closed-form solutions for the case in which surgery durations follow the normal distribution. Furthermore, we give a closed-form solution for optimal block duration with no-shows.
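For normally distributed surgery durations, a newsvendor-style block duration reduces to a critical-fractile quantile. The sketch below shows that standard single-block quantile solution; the cost values and the one-block setting are illustrative assumptions, not the paper's full sequential model:

```python
from statistics import NormalDist

def optimal_block_duration(mu, sigma, c_late, c_early):
    """Newsvendor quantile: choose duration d minimizing the expected cost
    c_late * max(D - d, 0) + c_early * max(d - D, 0) for D ~ N(mu, sigma^2),
    which gives d* = mu + sigma * Phi^{-1}(c_late / (c_late + c_early))."""
    fractile = c_late / (c_late + c_early)
    return mu + sigma * NormalDist().inv_cdf(fractile)

# Equal lateness/earliness costs -> plan exactly the mean duration;
# costlier lateness -> pad the block beyond the mean.
d_equal = optimal_block_duration(mu=120.0, sigma=20.0, c_late=1.0, c_early=1.0)
d_padded = optimal_block_duration(mu=120.0, sigma=20.0, c_late=4.0, c_early=1.0)
```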
Operations research in health service; Block surgical schedule; Sequential newsvendor; Normal distribution; No-shows;
http://www.sciencedirect.com/science/article/pii/S0377221713008631
Choi, Sangdo
Wilhelm, Wilbert E.
oai:RePEc:eee:ejores:v:232:y:2014:i:3:p:643-6532014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:232:y:2014:i:3:p:643-653
article
A multiple criteria decision making approach to manure management systems in the Netherlands
The intensification of livestock operations in the last few decades has increased social concern over their environmental impacts, making appropriate manure management decisions increasingly important. A socially acceptable manure management system is needed that achieves the pressing environmental objectives while balancing the socio-economic welfare of farmers and society at large. Manure management decisions involve a number of decision makers with different and conflicting views of what is acceptable in the context of sustainable development. This paper develops a decision-making tool based on a multiple criteria decision making (MCDM) approach to address the manure management problems in the Netherlands. It demonstrates the application of compromise programming and goal programming to evaluate key trade-offs between the socio-economic benefits and the environmental sustainability of manure management systems, while taking the decision makers’ conflicting views of the different criteria into account. The proposed methodology is a useful tool for assisting decision makers and policy makers in designing policies that promote the introduction of economically, socially and environmentally sustainable manure management systems.
Multiple criteria decision making; Multiple objective programming; Compromise programming; Goal programming; Analytical hierarchy process;
http://www.sciencedirect.com/science/article/pii/S0377221713006528
Gebrezgabher, Solomie A.
Meuwissen, Miranda P.M.
Oude Lansink, Alfons G.J.M.
oai:RePEc:eee:ejores:v:221:y:2012:i:1:p:87-982014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:221:y:2012:i:1:p:87-98
article
Optimal ordering and pricing strategies in the presence of a B2B spot market
In the current paper, we examine the effect of a B2B spot market on the strategic behavior and the performance of a reseller who continues to use the traditional channel while participating in a B2B spot market. We analyze the case in which a risk-neutral reseller faces an additive or multiplicative demand function and identify sufficient conditions under which the optimal order quantity and retail price exist and are unique. We then analytically examine the case in which a risk-averse reseller participates in a fully liquid spot market. We also study numerically how varying liquidity, spot price volatility, demand variability, and the correlation coefficient affect a firm’s strategies and performance. We find that demand variability significantly affects both pricing and ordering strategies, whereas spot price volatility has less influence on pricing decisions. Our results also show that it is desirable for a risk-averse reseller to charge a lower retail price when spot market liquidity increases. We further show that a B2B spot market cannot always improve a reseller’s utility. These findings shed light on how resellers can adjust their procurement and pricing strategies to align with the new business environment created by the emergence of B2B spot markets, and they have obvious implications for the development of such markets.
Supply chain management; Procurement and pricing strategy; B2B spot market; Market liquidity; Risk;
http://www.sciencedirect.com/science/article/pii/S037722171200210X
Xing, Wei
Wang, Shouyang
Liu, Liming
oai:RePEc:eee:ejores:v:235:y:2014:i:3:p:494-5022014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:235:y:2014:i:3:p:494-502
article
The self regulation problem as an inexact steepest descent method for multicriteria optimization
In this paper we study an inexact steepest descent method for multicriteria optimization whose step-size is chosen by Armijo’s rule. We show that this method is well defined. Moreover, by assuming quasi-convexity of the multicriteria function, we prove full convergence of any generated sequence to a Pareto critical point. As an application, we offer a model for the self-regulation problem in psychology, using a recent variational rationality approach.
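Armijo's step-size rule, named in this abstract, is easiest to see in the single-objective case (the multicriteria method replaces the gradient with a common descent direction). A sketch with an illustrative test function, constants, and starting point, none of which come from the paper:

```python
def armijo_descent(f, grad, x, beta=0.5, sigma=1e-4, tol=1e-8, max_iter=1000):
    """Steepest descent with Armijo backtracking: accept step length t once
    f(x - t*g) <= f(x) - sigma * t * ||g||^2, otherwise shrink t by the factor beta."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol:  # (near-)stationary point reached
            break
        t = 1.0
        while f([xi - t * gi for xi, gi in zip(x, g)]) > f(x) - sigma * t * gnorm2:
            t *= beta
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Example: minimize f(x, y) = x^2 + 2*y^2 starting from (3, -2).
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
grad = lambda x: [2 * x[0], 4 * x[1]]
x_star = armijo_descent(f, grad, [3.0, -2.0])
```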
Multiple objective programming; Steepest descent; Self regulation; Quasi-convexity;
http://www.sciencedirect.com/science/article/pii/S0377221714000046
Bento, G.C.
Cruz Neto, J.X.
Oliveira, P.R.
Soubeyran, A.
oai:RePEc:eee:ejores:v:236:y:2014:i:2:p:395-4022014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:2:p:395-402
article
The one-dimensional cutting stock problem with usable leftovers – A survey
In this article, we review published studies on the one-dimensional cutting stock problem (1DCSP) in which leftovers, if long enough, may be used to meet future demands. The one-dimensional cutting stock problem with usable leftovers (1DCSPUL) is frequently encountered in practical settings but is often not dealt with explicitly. For each work reviewed, we present the application, the mathematical model if one is proposed, and comments on the computational results obtained. The approaches are organized into three classes: heuristic, item-oriented, or cutting-pattern-oriented.
Usable leftovers; Cutting stock problem; Review;
http://www.sciencedirect.com/science/article/pii/S0377221713009430
Cherri, Adriana Cristina
Arenales, Marcos Nereu
Yanasse, Horacio Hideki
Poldi, Kelly Cristina
Gonçalves Vianna, Andréa Carla
oai:RePEc:eee:ejores:v:223:y:2012:i:3:p:644-6582014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:3:p:644-658
article
Scenario-based Supply Chain Network risk modeling
This paper provides a risk modeling approach to facilitate the evaluation and the design of Supply Chain Networks (SCNs) operating under uncertainty. The usefulness of the approach is demonstrated with two realistic case studies. Three event types are defined to describe plausible future SCN environments: random, hazardous and deeply uncertain events. A three-phase hazard modeling approach is also proposed. It involves a characterization of SCN hazards in terms of multihazards, vulnerability sources and exposure levels; the estimation of incident arrival, intensity and duration processes; and the assessment of SCN hit consequences in terms of damage and time to recovery. Based on these descriptive models, a Monte Carlo approach is then proposed to generate plausible future scenarios. The two cases studied illustrate the key aspects of the approach, and how it can be used to obtain resilient SCNs under disruptions.
Supply Chain Network; Uncertainty; Risk modeling; Multihazards; Scenario Planning; Supply Chain Disruptions;
http://www.sciencedirect.com/science/article/pii/S0377221712004821
Klibi, Walid
Martel, Alain
oai:RePEc:eee:ejores:v:216:y:2012:i:3:p:658-6672014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:3:p:658-667
article
Sequential market entries and competition modelling in multi-innovation diffusions
Diffusion-of-innovation models cannot take into account, or properly explain, the systematic perturbations due to competition–substitution effects if simultaneous processes are examined one by one. A first aspect of simultaneous competing diffusions is the distinction between simultaneous market entries (synchronic competition) and sequential entries (diachronic competition). In the latter case, the beginning of competition may upset the first entrant’s diffusion. A second important aspect of multiple competition is the choice to model the word-of-mouth effect either at the category level (balanced model) or at the brand level, separating the within-brand effect from the cross-brand one (unbalanced model). In this paper, balanced models are studied, and we propose a model that allows for a change in the parameter values of the first entrant as soon as the second one enters the market. The resulting differential system has a closed-form solution that enables, through sales data, an empirical validation of the assumptions underlying the model structure, improving the forecasting accuracy. An application to pharmaceutical drug competition is discussed.
Marketing; Strategic planning; Synchronic and diachronic competition;
http://www.sciencedirect.com/science/article/pii/S0377221711007594
Guseo, Renato
Mortarino, Cinzia
oai:RePEc:eee:ejores:v:214:y:2011:i:3:p:739-7482014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:214:y:2011:i:3:p:739-748
article
An adaptive evaluation mechanism for online traders
Economic agents in electronic markets generally consider reputation to be a significant factor in selecting trading partners. Most traditional online businesses publish reputation profiles for traders that reflect the average of the ratings received in previous transactions. Because of the importance of these ratings, traders have an incentive to engage in strategic behavior (for example, shilling) to artificially inflate their ratings. It is therefore important for an online business to be able to provide a robust estimate of a trader's reputation that is not easily affected by strategic behavior or noisy ratings. This paper proposes such an adaptive, ratings-based reputation model. The model is based on a trader's transaction history, witness testimony, and other weighting factors. Learning is integrated to make the ratings model adaptive and robust in a dynamic environment. To validate the proposed model and to demonstrate the significance of its constructs, a multi-agent system is built to simulate the interactions among buyers and sellers in an electronic marketplace. The performance of the proposed model is compared to that of the reputation model used in most online marketplaces such as Amazon, and to Huynh's model from the literature.
E-commerce; Reputation mechanism; Online ratings; Simulation; Multi-agent system;
http://www.sciencedirect.com/science/article/pii/S0377221711004565
You, Liangjun
Sikora, Riyaz
oai:RePEc:eee:ejores:v:229:y:2013:i:1:p:37-402014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:1:p:37-40
article
A library for continuous convex separable quadratic knapsack problems
The Continuous Convex Separable Quadratic Knapsack problem (CQKnP) is a simple but useful model with many different applications. Although the problem can be solved quickly, it typically must be solved many times within approaches to (much) more difficult models; hence an efficient solution approach is required. We present and discuss a small open-source library for its solution that we have recently developed and distributed.
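A common solution scheme for the CQKnP is bisection on the Lagrange multiplier of the knapsack constraint. The sketch below illustrates that scheme under assumed strictly convex diagonal costs and illustrative data; it is not the library's implementation:

```python
def solve_cqknp(q, c, l, u, b, iters=100):
    """min sum(q_i/2 * x_i^2 + c_i * x_i)  s.t.  sum(x_i) = b,  l_i <= x_i <= u_i,
    with all q_i > 0. Stationarity gives x_i(lam) = clip((-c_i - lam)/q_i, l_i, u_i);
    sum x_i(lam) is nonincreasing in lam, so we bisect lam to hit the budget b."""
    def x_of(lam):
        return [min(max((-ci - lam) / qi, li), ui)
                for qi, ci, li, ui in zip(q, c, l, u)]
    lo, hi = -1e6, 1e6  # assumed bracket, wide enough for this data
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(x_of(mid)) > b:
            lo = mid  # too much mass allocated -> raise the multiplier
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

# Tiny instance: optimum is x = (2, 1) with multiplier lam = -2.
x = solve_cqknp(q=[1.0, 2.0], c=[0.0, 0.0], l=[0.0, 0.0], u=[5.0, 5.0], b=3.0)
```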
Quadratic programming; Continuous Nonlinear Resource Allocation Problem; Lagrangian relaxation; Optimization software;
http://www.sciencedirect.com/science/article/pii/S0377221713001719
Frangioni, Antonio
Gorgone, Enrico
oai:RePEc:eee:ejores:v:234:y:2014:i:2:p:536-5452014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:2:p:536-545
article
Financial portfolio management through the goal programming model: Current state-of-the-art
Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models that simultaneously aggregate several conflicting attributes, such as the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to the present.
Multi-attribute portfolio management; Goal programming; Typology;
http://www.sciencedirect.com/science/article/pii/S0377221713007959
Aouni, Belaid
Colapinto, Cinzia
La Torre, Davide
oai:RePEc:eee:ejores:v:221:y:2012:i:3:p:557-5702014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:221:y:2012:i:3:p:557-570
article
Batch picking in narrow-aisle order picking systems with consideration for picker blocking
This paper develops strategies to control picker blocking that challenge the traditional assumptions regarding the tradeoffs between wide- and narrow-aisle order picking systems. We propose an integrated batching and sequencing procedure called the indexed batching model (IBM), with the objective of minimizing the total retrieval time (the sum of travel time, pick time and congestion delays). The IBM differs from traditional batching formulations by assigning orders to indexed batches, whereby each batch corresponds to a position in the batch release sequence. We develop a mixed integer programming solution for exact control, and demonstrate a simulated annealing procedure to solve large practical problems. Our results indicate that the proposed approach achieves a 5–15% reduction in the total retrieval time primarily by reducing picker blocking. We conclude that the IBM is particularly effective in narrow-aisle picking systems.
Facilities planning and design; Distribution center; Order picking; Batching and sequencing;
http://www.sciencedirect.com/science/article/pii/S0377221712002706
Hong, Soondo
Johnson, Andrew L.
Peters, Brett A.
oai:RePEc:eee:ejores:v:234:y:2014:i:3:p:701-7082014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:234:y:2014:i:3:p:701-708
article
On stabilizing volatile product returns
As input flows of secondary raw materials show high volatility and tend to behave in a chaotic way, identifying the main drivers of the dynamic behavior of returns plays a crucial role. Based on a stylized production–recycling system consisting of a set of nonlinear difference equations, we explicitly derive parameter constellations under which the system will or will not converge to its equilibrium. Using a constant elasticity of substitution production function, the model is then extended to cover real-world situations. Using waste paper as a reference raw material, we empirically estimate the parameters of the system. With these regression results, we show that the equilibrium solution is a Lyapunov-unstable saddle point. This implies that the system is sensitive to initial conditions, which impedes the predictability of product returns. Small variations in production input proportions could, however, stabilize the whole system.
OR in natural resources; Discrete dynamical systems; Production;
http://www.sciencedirect.com/science/article/pii/S0377221713009569
Nowak, Thomas
Hofer, Vera
oai:RePEc:eee:ejores:v:216:y:2012:i:1:p:214-2242014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:216:y:2012:i:1:p:214-224
article
Cost/risk balanced management of scarce resources using stochastic programming
We consider the situation in which a scarce renewable resource must be periodically distributed among different users by a Resource Management Authority (RMA). The replenishment of this resource, as well as user demand, is subject to considerable uncertainty. We develop cost optimization and risk management models that can assist the RMA in striking a balance between the level of target delivery to the users and the risk that this delivery will not be met. These models are based on the utilization and further development of the general methodology of stochastic programming for scenario optimization, taking appropriate risk management approaches into account. With a scenario optimization model, we obtain a target barycentric value with respect to selected decision variables. A successive reoptimization of the deterministic model for the worst-case scenarios allows the reduction of the risk of negative consequences arising from unmet resource demand. Our reference case study is the distribution of scarce water resources. We show the results of numerical experiments on real physical systems.
OR in natural resources; Risk management; Stochastic programming; Water resources management; Risk/performance tradeoff;
http://www.sciencedirect.com/science/article/pii/S0377221711005698
Gaivoronski, Alexei
Sechi, Giovanni M.
Zuddas, Paola
oai:RePEc:eee:ejores:v:226:y:2013:i:2:p:325-3312014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:226:y:2013:i:2:p:325-331
article
Economic value of greenhouse gases and nitrogen surpluses: Society vs farmers’ valuation
Livestock supply must keep pace with the growth of final demand in the developing countries. This challenge has to take into account its ecological effects, since the dairy and livestock sectors are clearly identified as human activities that contribute significantly to environmental deterioration. Therefore, livestock activity models have to include desirable and undesirable outputs simultaneously. From this perspective, we implement a Data Envelopment Analysis model to evaluate the shadow prices of outputs under contradictory objectives between society and the farmers. We show that farmers are able to reduce pollution significantly if society agrees to compensate the farmers’ opportunity cost. Finally, we observe that the initial levels of the CO2 tax implemented in European countries are in line with the farmers’ valuation, while the current level of the CO2 tax tends to reach the value of pollution targeted by society.
Environment; Data envelopment analysis; Agriculture;
http://www.sciencedirect.com/science/article/pii/S0377221712008417
Berre, David
Boussemart, Jean-Philippe
Leleu, Hervé
Tillard, Emmanuel
oai:RePEc:eee:ejores:v:229:y:2013:i:2:p:281-3022014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:229:y:2013:i:2:p:281-302
article
A review of urban transportation network design problems
This paper presents a comprehensive review of the definitions, classifications, objectives, constraints, network topology decision variables, and solution methods of the Urban Transportation Network Design Problem (UTNDP), which includes both the Road Network Design Problem (RNDP) and the Public Transit Network Design Problem (PTNDP). The current trends and gaps in each class of the problem are discussed and future directions in terms of both modeling and solution approaches are given. This review intends to provide a bigger picture of transportation network design problems, allow comparisons of formulation approaches and solution methods of different problems in various classes of UTNDP, and encourage cross-fertilization between the RNDP and PTNDP research.
Transportation; Urban transportation network design problem; Road network design; Transit network design and frequency setting problem; Multi-modal network design problem;
http://www.sciencedirect.com/science/article/pii/S0377221713000106
Farahani, Reza Zanjirani
Miandoabchi, Elnaz
Szeto, W.Y.
Rashidi, Hannaneh
oai:RePEc:eee:ejores:v:223:y:2012:i:2:p:573-5842014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:223:y:2012:i:2:p:573-584
article
Dynamic multi-appointment patient scheduling for radiation therapy
Seeking to reduce the potential impact of delays on radiation therapy cancer patients such as psychological distress, deterioration in quality of life and decreased cancer control and survival, and motivated by inefficiencies in the use of expensive resources, we undertook a study of scheduling practices at the British Columbia Cancer Agency (BCCA). As a result, we formulated and solved a discounted infinite-horizon Markov decision process for scheduling cancer treatments in radiation therapy units. The main purpose of this model is to identify good policies for allocating available treatment capacity to incoming demand, while reducing wait times in a cost-effective manner. We use an affine architecture to approximate the value function in our formulation and solve an equivalent linear programming model through column generation to obtain an approximate optimal policy for this problem. The benefits from the proposed method are evaluated by simulating its performance for a practical example based on data provided by the BCCA.
Patient scheduling; OR in health services; Markov decision processes; Linear programming; Approximate dynamic programming;
http://www.sciencedirect.com/science/article/pii/S037722171200522X
Sauré, Antoine
Patrick, Jonathan
Tyldesley, Scott
Puterman, Martin L.
oai:RePEc:eee:ejores:v:238:y:2014:i:2:p:497-5042014-08-14RePEc:eee:ejores
RePEc:eee:ejores:v:238:y:2014:i:2:p:497-504
article
Random sampling: Billiard Walk algorithm
Hit-and-Run is known to be one of the best random sampling algorithms; its mixing time is polynomial in the dimension. In practice, however, the number of steps required to obtain uniformly distributed samples is rather high. We propose a new random walk algorithm based on billiard trajectories. Numerical experiments demonstrate much faster convergence to the uniform distribution.
Sampling; Monte-Carlo; Hit-and-Run; Billiards;
http://www.sciencedirect.com/science/article/pii/S037722171400280X
Gryazina, Elena
Polyak, Boris
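The abstract describes the Billiard Walk only at a high level: follow a straight trajectory of random length, reflecting specularly off the boundary, and take the endpoint as the next sample. As a minimal, hedged sketch (not the authors' implementation), the idea can be illustrated on the unit square, where an axis-aligned box makes reflections decouple per coordinate into a simple folding; the step-length rule `L = -tau * log(U)` and the parameter name `tau` follow the usual Billiard Walk description and are assumptions here.

```python
import math
import random

def _fold(t):
    """Reflect a 1-D coordinate back into [0, 1] (triangle wave)."""
    s = t % 2.0
    return s if s <= 1.0 else 2.0 - s

def billiard_walk_box(start, n_steps, tau, rng):
    """Billiard Walk sampling from the unit square [0, 1]^2.

    Each step: draw a uniform direction and a trajectory length
    L = -tau * log(U), follow the straight line, reflecting
    specularly off the walls; the endpoint is the next sample.
    For an axis-aligned box the reflections decouple per
    coordinate, so they reduce to the folding above.
    """
    x, y = start
    samples = []
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        length = -tau * math.log(1.0 - rng.random())
        x = _fold(x + length * math.cos(theta))
        y = _fold(y + length * math.sin(theta))
        samples.append((x, y))
    return samples

rng = random.Random(42)
pts = billiard_walk_box((0.5, 0.5), 5000, tau=1.0, rng=rng)
mean_x = sum(p[0] for p in pts) / len(pts)
mean_y = sum(p[1] for p in pts) / len(pts)
```

For a general convex body the reflection step requires intersecting the ray with the boundary and reflecting off its normal, which is where the real implementation effort lies.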
oai:RePEc:eee:ejores:v:236:y:2014:i:1:p:1-13 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:236:y:2014:i:1:p:1-13
article
Transport operations in container terminals: Literature overview, trends, research directions and classification scheme
Internal transport operations connect the seaside, yard side, and landside processes at container terminals. This paper presents an in-depth overview of transport operations and the material handling equipment used, highlights current industry trends and developments, and proposes a new classification scheme for transport operations and scientific journal papers published up to 2012. The paper also discusses and challenges current operational paradigms of transport operations. Lastly, the paper identifies new avenues for academic research based on current trends and developments in the container terminal industry.
Container terminal; Literature overview; Transportation; Material handling equipment;
http://www.sciencedirect.com/science/article/pii/S0377221713009405
Carlo, Héctor J.
Vis, Iris F.A.
Roodbergen, Kees Jan
oai:RePEc:eee:ejores:v:217:y:2012:i:1:p:69-74 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:217:y:2012:i:1:p:69-74
article
A column generation approach for the unconstrained binary quadratic programming problem
This paper proposes a column generation approach based on Lagrangean relaxation with clusters to solve the unconstrained binary quadratic programming problem, which consists of maximizing a quadratic objective function by choosing suitable values for binary decision variables. The proposed method treats a mixed binary linear model for the quadratic problem with constraints represented by a graph. This graph is partitioned into clusters of vertices forming sub-problems whose solutions use the dual variables obtained from a coordinator problem. The column generation process presents alternative ways to find upper and lower bounds for the quadratic problem. Computational experiments were performed using hard instances, and the proposed method was compared against other methods, yielding improved results for most of these instances.
Quadratic programming; Column generation; Lagrangean relaxation;
http://www.sciencedirect.com/science/article/pii/S0377221711008198
Mauri, Geraldo Regis
Lorena, Luiz Antonio Nogueira
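To make the objective of the unconstrained binary quadratic programming problem concrete, here is a minimal brute-force sketch of what is being maximized; it illustrates the problem statement only, not the paper's column generation method, and the matrix values are made up for the example.

```python
from itertools import product

def ubqp_bruteforce(Q):
    """Exhaustively maximize x^T Q x over binary vectors x.

    Only practical for tiny n (2^n candidates); shown to make the
    UBQP objective concrete, not as a competitor to decomposition
    methods such as column generation.
    """
    n = len(Q)
    best_val, best_x = float("-inf"), None
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j]
                  for i in range(n) for j in range(n))
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Tiny example: picking only x1 avoids the -5 diagonal penalty.
val, x = ubqp_bruteforce([[1, 2], [2, -5]])
```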
oai:RePEc:eee:ejores:v:218:y:2012:i:3:p:847-855 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:218:y:2012:i:3:p:847-855
article
A risk analysis based on a two-stage delayed diagnosis regression model with application to chronic disease progression
This paper presents a two-stage regression model for quantifying different stages of a disease progression with delayed diagnosis time and for identifying the risk factors associated with each stage. Conventional chronic disease progression studies have relied on the assumption that the time at which a disease state is confirmed by diagnosis is the start time of that state. Clearly this leads to biased estimates of progression, since the disease state must have already occurred before the diagnosis, but the true occurrence time is unknown. This late confirmation is called the delayed diagnosis in this paper, and a delay-time modelling procedure is developed for the identification of the unknown stages of progression. A hazard-based regression model is also proposed for further risk analysis. We apply the developed methods to hepatitis C data, and the analysis shows that considering the delayed diagnosis significantly improved the model fit in comparison with the conventional model. We also find that the risk factors associated with each stage are more significant, particularly in the second stage of progression, than those based on the conventional model. We conclude that such delayed phenomena in diagnosis should be taken into account when modelling the chronic disease progression process and conducting related risk analysis.
Two-stage disease progression; Risk; Chronic disease; Hepatitis; Delayed diagnosis;
http://www.sciencedirect.com/science/article/pii/S0377221711010903
Fu, Bo
Wang, Wenbin
Shi, Xin
oai:RePEc:eee:ejores:v:232:y:2014:i:1:p:234-240 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:232:y:2014:i:1:p:234-240
article
A minimax distribution-free procedure for a newsvendor problem with free shipping
In this paper, we study the optimal policies of retailers who operate their inventory with a single period model (i.e., the newsvendor model) under a free shipping offer, in which a fixed shipping fee is waived if the order quantity is greater than or equal to a given minimum quantity. Zhou et al. (2009) explored this model; we extend their analysis of the optimal ordering policies, which they did not fully develop. Based on this investigation, we extend the base model to address a practically important aspect of inventory management: the case where the exact distribution function of demand is not available. We incorporate this aspect into the base model and present the optimal policies for the extended model with a numerical example. Finally, we conduct extensive numerical experiments to evaluate the performance of the extended model and analyze the impacts of the minimum free shipping quantity and the fixed shipping fee on that performance.
Newsvendor model; Distribution free; Free shipping;
http://www.sciencedirect.com/science/article/pii/S0377221713005663
Kwon, Kysang
Cheong, Taesu
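The distribution-free treatment when only the mean and variance of demand are known traces back to Scarf's minimax newsvendor rule. As a hedged illustration of the flavor of such results (the classic rule without free shipping, not the paper's extended model), the closed form below assumes zero salvage value and selling price above unit cost; parameter names are illustrative.

```python
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's (1958) minimax distribution-free order quantity.

    Knowing only the mean mu and standard deviation sigma of demand
    (zero salvage, price > cost), the quantity minimizing the
    worst-case expected cost over all demand distributions with
    those moments is
        Q* = mu + (sigma / 2) * (sqrt(m) - sqrt(1 / m)),
    where m = (price - cost) / cost is the profit-to-cost ratio.
    """
    m = (price - cost) / cost
    return mu + 0.5 * sigma * (math.sqrt(m) - math.sqrt(1.0 / m))

q = scarf_order_quantity(mu=100.0, sigma=20.0, price=10.0, cost=4.0)
```

Note that when price equals twice the cost (m = 1), the rule orders exactly the mean demand; higher margins push the order above the mean.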
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:312-323 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:219:y:2012:i:2:p:312-323
article
A periodic-review inventory system with a capacitated backup supplier for mitigating supply disruptions
We consider a periodic-review inventory system with two suppliers: an unreliable regular supplier that may be disrupted for a random duration, and a reliable backup supplier that can be used during a disruption. The backup supplier charges a higher unit purchasing cost and a higher fixed order cost than the regular supplier. Because the backup supplier is used at unplanned moments, its capacity to replenish inventory is considered limited. Analytical results partially characterize the structure of the optimal order policy: a state-dependent (X(i),Y(i)) band structure (with corresponding bounds on X(i) and Y(i) given), where i represents the status of the regular supplier. Numerical studies illustrate the structure of the optimal policy and investigate the impacts of major parameters on optimal order decisions and system costs.
Inventory; Supply risk; Disruption; (X,Y) band; Capacity;
http://www.sciencedirect.com/science/article/pii/S0377221711011258
Chen, Junlin
Zhao, Xiaobo
Zhou, Yun
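One simple reading of a state-dependent threshold policy of this kind can be sketched as follows. This is a hypothetical simplification for illustration only: the paper characterizes the (X(i),Y(i)) band structure only partially, and the rule below (order up to Y[state] whenever inventory falls below X[state]) is an assumption, not the paper's optimal policy.

```python
def band_order(inventory, state, X, Y):
    """Illustrative state-dependent threshold ordering rule.

    Hypothetical simplification of an (X(i), Y(i)) band policy:
    if on-hand inventory is below the lower threshold X[state]
    (state i = status of the regular supplier), order up to the
    level Y[state]; otherwise order nothing.
    """
    if inventory < X[state]:
        return Y[state] - inventory
    return 0

# Hypothetical thresholds: hold more stock while the regular
# supplier is disrupted than while it is up.
X = {"disrupted": 5, "up": 2}
Y = {"disrupted": 20, "up": 10}
q_disrupted = band_order(3, "disrupted", X, Y)
q_idle = band_order(6, "disrupted", X, Y)
```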
oai:RePEc:eee:ejores:v:237:y:2014:i:3:p:794-801 2014-08-14 RePEc:eee:ejores
RePEc:eee:ejores:v:237:y:2014:i:3:p:794-801
article
A fast algorithm for identifying minimum size instances of the equivalence classes of the Pallet Loading Problem
In this paper, a novel and fast algorithm for identifying the Minimum Size Instance (MSI) of an equivalence class of the Pallet Loading Problem (PLP) is presented. The new algorithm is based on the fact that PLP instances of the same equivalence class have the property that the aspect ratios of their items belong to an open interval of real numbers. This interval characterises the PLP equivalence classes and is referred to by the authors as the Equivalence Ratio Interval (ERI). The time complexity of the new algorithm is two polynomial orders lower than that of the best known algorithm. The authors also suggest that the concept of the MSI and its identifying algorithm can be used to transform the non-integer PLP into its equivalent integer MSI.
Packing; Pallet Loading Problem; Equivalence class;
http://www.sciencedirect.com/science/article/pii/S0377221714001234
Lu, Yiping
Cha, Jianzhong
oai:RePEc:eee:ejores:v:219:y:2012:i:2:p:415-424