Most, if not all, decisions are made under some (relevant) uncertainty about the future.
Airlines set up schedules without knowing exactly how many passengers they will carry, where and when airplanes will have technical difficulties, or what the weather will be. Companies make major investments in buildings and machines without fully knowing how much they will need to produce, indeed, often without knowing exactly what they will produce as time goes by. Firms trade in foreign markets facing both uncertain demand and uncertain exchange rates. And in all but very special cases, future prices are not known. In the public sector, hospitals face very uncertain demand for their services and, on a longer time scale, uncertainty about funding and the rules of the game. Sometimes there is a lot of data describing the decision problem; at other times the activity is new and very little is known.
How do we support decision-making when we do not know all the facts of a problem? How do we model in such situations? How do we treat uncertain future events? How do we determine whether a deterministic model will do well even though the future is not deterministic? When is it appropriate to set up an optimization model, and when are other tools, such as simulation, more appropriate? How do we test the quality of a suggested decision, wherever it comes from?
In this cluster we face all these questions. Modelling under uncertainty – also called “stochastic modelling” – opens up many questions not faced when deterministic models are formulated. The most obvious ones are how we should represent the uncertainty, and how it is revealed over time. Sequencing of decisions relative to the arrival of information is in many ways the core of stochastic modelling.
Once a model is formulated, irrespective of methodology, we face the difficulty of solving instances of it. In all but very special cases this is a major challenge in its own right. So the development of stochastic models almost always creates a need to develop solution tools as well.
Behind all these questions and difficulties looms the overriding issue: while stochastic modelling is certainly appropriate (as the decision-maker faces major uncertainty), is it really necessary? Can we not make do with simpler tools, somehow compensating for our simplifications? If, in optimization, we take this to mean solving deterministic models accompanied by sensitivity analysis, parametric optimization, what-if questions, or scenario analysis, the answer is no. Why this is the case is well established in the literature: deterministic models answer deterministic questions, and have no controllable properties in terms of average behaviour.
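The classical newsvendor problem gives a minimal illustration of this point. In the hypothetical sketch below (made-up prices and an assumed uniform demand distribution), the "deterministic" approach replaces random demand by its mean, while the stochastic approach maximizes expected profit over the demand scenarios directly; the two recommend different order quantities.

```python
import random

random.seed(0)
price, cost = 10.0, 6.0  # illustrative unit revenue and unit cost (made-up)
# demand scenarios: uniform on {50, ..., 150} (an assumed distribution)
demands = [random.randint(50, 150) for _ in range(10_000)]

def expected_profit(q):
    """Average profit over the scenarios when ordering q units up front."""
    return sum(price * min(q, d) - cost * q for d in demands) / len(demands)

mean_demand = sum(demands) / len(demands)
det_q = round(mean_demand)  # the 'deterministic' answer: order mean demand
# the stochastic answer: maximize expected profit directly over the scenarios
sto_q = max(range(50, 151), key=expected_profit)

print("deterministic order:", det_q, "stochastic order:", sto_q)
```

With these numbers the critical fractile is (price − cost)/price = 0.4, so the stochastic optimum orders roughly the 40% demand quantile, well below mean demand, and achieves a higher expected profit than the mean-based order; the deterministic model has no way of seeing this.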
However, at a somewhat deeper level, the question of simplifications is very relevant. There are cases where deterministic models, even if bad in their own right, produce useful information about the stochastic setting. But assuming that you face such a situation without actually verifying it is extremely risky. There is also the question: can we set up deterministic models that produce good solutions to stochastic models, that is, can we solve the wrong model and still get good answers? At times the answer is yes.
These are the kinds of questions we face in this cluster. We have a multitude of relevant, challenging and interesting problems to work with. Since the world is stochastic, we shall not run out of questions.
Professor Stein W. Wallace just completed a visit to CIRRELT in Montreal with partial LANCS support.
Dr Alan King from IBM Watson Research Center in New York will visit Lancaster with partial LANCS support in November 2011.
Professor Teodor G Crainic from Montreal will visit Lancaster with partial LANCS support in January 2012.
Professor Stein W. Wallace recently completed a visit to Hong Kong partially funded by the LANCS Scientific Outreach fund.