Interactive simulated annealing for solving imprecise discrete multiattribute problems under risk
Antonio Jiménez*; Sixto Ríos-Insua; Alfonso Mateos
Department of Artificial Intelligence, School of Computer Science, Madrid Technical University, Madrid, Spain
Address for correspondence: Antonio Jiménez, e-mail: ajimenez@fi.upm.es
ABSTRACT
We consider the multiattribute decision-making problem under risk with partial information about the decision-maker's preferences, modelled through an imprecise vectorial utility function, and where the consequences are assumed to be uncertain. In this imprecise framework we introduce the appropriate utility efficient set, which plays a fundamental role because it is where the decision-maker must look for a solution. However, this set can be difficult to determine, so we provide a decision support system with a solution strategy based on interactive simulated annealing. It generates a subset of efficient solutions that gives a fair representation of the whole set, from which one or several satisfying solutions are progressively derived to aid the decision-maker in his/her final choice.
Keywords: imprecise vectorial utility function, efficient set, simulated annealing.
1. Introduction
Multiattribute expected utility theory can be considered a leading paradigm for normative decision theory. However, multiattribute utility theory calls for the decision-maker (DM) to provide all the information describing the decision situation: on the one hand, to assess the component scalar utility functions $u_i$, von Neumann & Morgenstern (1947), Savage (1954) and Fishburn (1970), and, on the other, to determine an appropriate functional form of the global utility function $u$ combining these components $u_i$, Keeney & Raiffa (1976). These information requirements can be far too strict in most practical situations. This leads to the consideration of imprecise component utility functions in the first case, see e.g. Weber (1987) and Ríos et al. (2000), and, in the second, of a vectorial utility function $\mathbf{u}$, Roberts (1972, 1979) and Rietveld (1980), due to the difficulty of testing certain independence conditions; $\mathbf{u}$ is thus a proxy for the underlying scalar utility function $u$.
In this case, what could be called the set of imprecise utility efficient strategies plays an important role because of its well-known property, analogous to the precise scalar case: the DM can restrict his attention to it and discard the remaining strategies, because a nonefficient strategy can never be optimal. However, this set can be difficult to determine and, in many cases, its generation does not by itself solve the problem, because the set can have many elements and is not totally ordered. Thus, intelligent approaches are needed to generate a representative approximation of the whole set.
Moreover, we consider the situation where the consequences of the strategies may have associated uncertainty in their policy effects, as considered in Mateos et al. (2001).
In order to provide the DM, under this framework, with a manageable number of strategies under risk for evaluation, we introduce a method for approximating the imprecise utility efficient set, and for reducing it when the DM can improve his assessments through an interactive process, Colson & de Bruyn (1989), Mateos & Ríos-Insua (1997, 1998) and Ríos-Insua & Mateos (1998), that adapts multi-objective simulated annealing, Serafini (1992), Ulungu et al. (1998) and Teghem et al. (2000).
The paper includes four more sections. In section 2 we formally introduce the problem, some basic concepts and the imprecise approximation set. Section 3 describes multi-objective simulated annealing and its application to our problem, the multiattribute decision-making problem under risk with imprecise information. An interactive simulated annealing approach to construct 'good compromises' taking into account the DM's preferences is described in section 4. Finally, some conclusions are provided in section 5.
2. Problem Formulation: the approximation set
Throughout the paper we employ the following notation: for two scalars $a$ and $b$, $a \geq b$ denotes $a > b$ or $a = b$. For two vectors $x, y \in \mathbb{R}^m$, $x \geqq y$ denotes $x_i \geq y_i$ for $i = 1, \ldots, m$, and $x \geq y$ denotes $x \geqq y$ but $x \neq y$.
Let us consider a decision-making problem under risk with imprecision concerning the consequences of each decision strategy. We define the imprecise consequence of a given strategy in a particular state of nature in such a way that it is characterised by a vector of intervals

$$z = ([z_1^L, z_1^U], \ldots, [z_m^L, z_m^U]),$$

where $[z_i^L, z_i^U]$ defines the imprecise consequence of the strategy for attribute $Z_i$ of the whole set of attributes $\{Z_1, \ldots, Z_m\}$. Note that the situation under precision or under certainty is the particular case where the endpoints of each interval coincide, i.e., $z_i^L = z_i^U$ for $i = 1, \ldots, m$. Let $Z$ be the set of imprecise outcomes with elements $z$, and $P_Z$ the set of simple probability distributions over $Z$ (probability distributions with a finite number of imprecise outcomes), with elements $p, q, p', \ldots$, also called strategies, lotteries or risky prospects:

$$p = (p_1, z^1; \ldots; p_s, z^s),$$

where

$$z^t = ([z_1^{tL}, z_1^{tU}], \ldots, [z_m^{tL}, z_m^{tU}]), \quad t = 1, \ldots, s,$$

with $p_t \geq 0$, $t = 1, \ldots, s$, and $\sum_{t=1}^s p_t = 1$.
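To fix ideas, the following is a minimal Python sketch of these objects; the class names `ImpreciseOutcome` and `Lottery` are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import List, Tuple

Interval = Tuple[float, float]  # [z_i^L, z_i^U] for one attribute Z_i

@dataclass
class ImpreciseOutcome:
    """An imprecise consequence z: one interval per attribute."""
    intervals: List[Interval]

@dataclass
class Lottery:
    """A simple probability distribution p = (p_1, z^1; ...; p_s, z^s)."""
    probs: List[float]
    outcomes: List[ImpreciseOutcome]

    def __post_init__(self):
        assert all(pt >= 0 for pt in self.probs), "p_t >= 0"
        assert abs(sum(self.probs) - 1.0) < 1e-9, "sum p_t = 1"

# A certain consequence is the special case z_i^L = z_i^U:
sure = ImpreciseOutcome([(3.0, 3.0), (10.0, 10.0)])
# A risky prospect over two imprecise outcomes:
p = Lottery([0.25, 0.75],
            [ImpreciseOutcome([(2.0, 4.0), (8.0, 12.0)]), sure])
```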
The consideration of this type of mixed strategies is important in different areas, among which we can cite portfolio management, medicine and the environment.
For each attribute $Z_i$ it is necessary to assess a utility function $u_i$ that reflects the DM's preferences over its possible values. The difficulty of assessing utility functions is well known, even though good software is available to aid in this process, see e.g. Logical Decisions (1998). Several authors, see e.g. Hershey et al. (1982), McCord & de Neufville (1986) or Jaffray (1989), have suggested that elicited utility functions are generally method-dependent, and that bias and inconsistencies can be generated in the assignment process. To overcome these problems the system jointly uses two slightly modified standard procedures: a certainty equivalent (CE) method and a probability equivalent (PE) method, see Farquhar (1984). It assumes imprecision concerning the scalar utility functions, allowing the decision-maker to provide a range of responses instead of only one precise number in each probability question, as these methods demand. This is less stressful on experts, since they are allowed to provide incomplete preference statements by means of intervals rather than unique numbers, see von Nitzsch & Weber (1988) and Ríos et al. (1994).
Therefore, a class of utility functions, rather than a single one, is obtained for each method. The intersection of both ranges provides the range where the preferences elicited by the two methods agree. Should such an intersection be empty for some interval, the DM would be inconsistent and should re-elicit his preferences. The process ends when a consistent range is provided.
It is interesting to note that the use of imprecise utility functions facilitates the assignment process, as we have experienced in real cases, see Mateos et al. (2001) and Jiménez et al. (2002). Of course, this is not the definitive solution to all the objections to the elicitation process, but we consider it an important aid and an interesting starting point to motivate a deeper study of the assessment problem.
Specifically, we have implemented a CE method known as the fractile method, see e.g. Fishburn (1964, 1970), Holloway (1979) and Hull et al. (1973), where the DM is asked to provide certainty equivalent intervals (ranges of attribute values) for lotteries whose results are the extreme values $z_i^*$ and $z_{i*}$, representing the most and least preferred outcomes for attribute $Z_i$, with probabilities $p_t$ and $1 - p_t$, respectively. We take $p_1 = .25$, $p_2 = .50$ and $p_3 = .75$. This means that the DM considers

$$(p_t, z_i^*; 1 - p_t, z_{i*}) \sim z_i \quad \text{for all amounts } z_i \in [x_{it}^L, x_{it}^U],\ t = 1, 2, 3.$$

Figure 1 shows, in the $z_i/u_i(z_i)$-diagram, this class of utility functions drawn between the dotted lines and represented by the bounding utility functions $u_i^{CL}$ and $u_i^{CU}$, where L (U) means Lower (Upper).
The PE method included in the system is the extreme gambles method, see e.g. Schlaifer (1969) and Mosteller & Nogee (1951), where the DM has to specify probability intervals $[p_t^L, p_t^U]$, $t = 1, 2, 3$, such that

$$(p_t, z_i^*; 1 - p_t, z_{i*}) \sim x_{it} \quad \text{for all } p_t \in [p_t^L, p_t^U],$$

for some selected amounts $x_{it} \in [z_{i*}, z_i^*]$. By default, these selected amounts for attribute $Z_i$ are the upper endpoints of the intervals proposed by the DM in the CE method; other points can be used for comparison. To obtain these probability intervals, the system includes a routine implementing a wheel of fortune, see French (1986), which poses the probabilistic questions and guides the expert until an interval of indifference probabilities is obtained. A number of additional questions are included as consistency checks.
Figure 1 also shows this class of utility functions, drawn with continuous lines and represented by the bounding utility functions $u_i^{PL}$ and $u_i^{PU}$, with L and U as above.
As mentioned above, should the intersection of both ranges be empty for some values, the DM would have provided inconsistent responses and should reassess his preferences. Thus, the intersection will be the range for the DM's utility functions. The system is able to detect possible inconsistencies and suggests what the DM could change to achieve consistency. Thus, for each attribute $Z_i$, instead of having a unique utility $u_i(z_i)$, we have a utility interval $[u_i^L(z_i), u_i^U(z_i)]$, which represents the DM's elicited imprecision concerning the utility of value $z_i$. This is also shown by the striped area in Figure 1.
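As an illustration of how the agreed range could be computed, the following sketch stores each bounding function as a piecewise-linear interpolant over elicited knots; all function names and numbers here are hypothetical, not taken from the paper.

```python
import numpy as np

def bounding_fn(knots_z, knots_u):
    """Piecewise-linear bounding utility built from elicited points."""
    zs, us = np.asarray(knots_z, float), np.asarray(knots_u, float)
    return lambda z: float(np.interp(z, zs, us))

# Hypothetical CE- and PE-elicited classes for one attribute on [0, 100]:
ce_lo = bounding_fn([0, 50, 100], [0.0, 0.40, 1.0])
ce_up = bounding_fn([0, 50, 100], [0.0, 0.60, 1.0])
pe_lo = bounding_fn([0, 50, 100], [0.0, 0.45, 1.0])
pe_up = bounding_fn([0, 50, 100], [0.0, 0.55, 1.0])

def utility_interval(z):
    """[u^L(z), u^U(z)]: intersection of the CE and PE ranges at z."""
    lo, up = max(ce_lo(z), pe_lo(z)), min(ce_up(z), pe_up(z))
    if lo > up:  # empty intersection: inconsistent answers, re-elicit
        raise ValueError(f"inconsistent CE/PE responses at z = {z}")
    return lo, up

print(utility_interval(50.0))  # (0.45, 0.55)
```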
The second point is related to the kind of decomposition of the global utility function $u(z) = u(z_1, \ldots, z_m)$. Assuming precision, as in the classic case, the general setting is that the DM looks for a global utility function of the form

$$u(z_1, \ldots, z_m) = f(u_1(z_1), \ldots, u_m(z_m)),$$

where $z_i$ is a specific amount of $Z_i$, $f$ is a scalar-valued function and the $u_i$ are precise utility functions. To obtain a specific structure for $f$, independence conditions among the attributes must be fulfilled (additive independence, preferential independence or other possible independence conditions related to the set or subsets of attributes), Keeney & Raiffa (1976). This can be difficult in complex problems: it may be possible to verify only some of these independence conditions among certain attributes, in which case we consider a vectorial utility function instead of a scalar one as above (possibly with a reduced dimension). Thus, to maintain the notation, assume that no independence condition is fulfilled, so that we have the vectorial utility function (the dimension remains $m$ for ease)

$$\mathbf{u}(z) = (u_1(z_1), \ldots, u_m(z_m)).$$
From both imprecision sources, we associate with each consequence in a particular state of nature, $z = ([z_1^L, z_1^U], \ldots, [z_m^L, z_m^U])$, an imprecise vectorial utility given by

$$\mathbf{u}_I(z) = ([u_1^L(z_1^L), u_1^U(z_1^U)], \ldots, [u_m^L(z_m^L), u_m^U(z_m^U)]),$$

where $u_i^L(\cdot) = \max\{u_i^{CL}(\cdot), u_i^{PL}(\cdot)\}$ and $u_i^U(\cdot) = \min\{u_i^{CU}(\cdot), u_i^{PU}(\cdot)\}$, provided that all the classes of utility functions for the attributes are monotone increasing, see Figure 2. If the class of utility functions for attribute $Z_i$ were monotone decreasing, the corresponding utility interval would be $[u_i^L(z_i^U), u_i^U(z_i^L)]$. The approach we present requires all the utility functions to be monotone, which is not an important practical limitation. For simplicity, let us suppose from now on that the classes of utility functions are monotone increasing for all the attributes.
Now, given a strategy under risk with imprecise consequences, $p$, the imprecise expected utility vector is defined as the vector of intervals whose endpoints are the lower and upper expected utilities for each attribute (or utility component in the vectorial utility function),

$$E_I(\mathbf{u}, p) = ([E(u_1^L, p), E(u_1^U, p)], \ldots, [E(u_m^L, p), E(u_m^U, p)]),$$

where the subscript $I$ in $E$ means imprecision.
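Continuing the `Lottery` sketch above, the imprecise expected utility vector can be computed attribute by attribute; monotone increasing bounding functions are assumed, so lower consequence endpoints feed the lower expectation and upper endpoints the upper one.

```python
def imprecise_expected_utility(p, u_lo, u_up):
    """E_I(u, p): one [E(u_i^L, p), E(u_i^U, p)] interval per attribute.

    u_lo[i] and u_up[i] are the bounding utility functions of
    attribute i, assumed monotone increasing.
    """
    m = len(p.outcomes[0].intervals)
    ev = []
    for i in range(m):
        lo = sum(pt * u_lo[i](z.intervals[i][0])
                 for pt, z in zip(p.probs, p.outcomes))
        up = sum(pt * u_up[i](z.intervals[i][1])
                 for pt, z in zip(p.probs, p.outcomes))
        ev.append((lo, up))
    return ev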
Thus, $\mathbf{u}_I = ([u_1^L(\cdot), u_1^U(\cdot)], \ldots, [u_m^L(\cdot), u_m^U(\cdot)])$ represents a preference relation $\succ_{\mathbf{u}}$ on $P_Z$, leading to a dominance principle defined as

$$p \succ_{\mathbf{u}} q \iff E_I(\mathbf{u}, p) \geq E_I(\mathbf{u}, q),$$

where the comparison is componentwise between intervals: in every component $k$ the expected utility interval of $p$ lies at or above that of $q$, i.e., $E(u_k^L, p) \geq E(u_k^U, q)$, with strict inequality in at least one component. The relation $\succ_{\mathbf{u}}$ is a strict partial order on $P_Z$ (transitive and asymmetric) and, hence, we state the imprecise vector optimization problem under risk as

$$\max\ E_I(\mathbf{u}, p) \quad \text{s.t.}\quad p \in P_Z.$$
A natural concept is that of an efficient strategy: $p \in P_Z$ is an imprecise utility efficient vector strategy if there is no $q \in P_Z$ such that $E_I(\mathbf{u}, q) \geq E_I(\mathbf{u}, p)$. The set of such strategies is called the imprecise utility efficient vector set and is denoted $\varepsilon_I(P_Z, \mathbf{u})$. This definition extends, first, the one for precise problems under certainty and, then, the one under uncertainty.
Note that, if we consider a sure prospect $p_z = (1, z) \in P_Z$ or a strategy $p = (p_1, z^1; \ldots; p_n, z^n) \in P_Z$ with a precise vectorial utility function $\mathbf{u}$, it also makes sense to consider the utility efficient set for $Z$ given $\mathbf{u}$ in the first case, denoted

$$\varepsilon(Z, \mathbf{u}) = \{ z \in Z : \text{there exists no } z' \in Z \text{ such that } \mathbf{u}(z') \geq \mathbf{u}(z) \},$$

or the utility efficient set for $P_Z$ given $\mathbf{u}$ in the second case, denoted

$$\varepsilon(P_Z, \mathbf{u}) = \{ p \in P_Z : \text{there exists no } p' \in P_Z \text{ such that } E(\mathbf{u}, p') \geq E(\mathbf{u}, p) \}.$$
Thus, we are led to the problem "Given $P_Z$ and $\mathbf{u}$, find $\varepsilon_I(P_Z, \mathbf{u})$". Clearly, if $\varepsilon_I(P_Z, \mathbf{u})$ had a unique element $p$, it would be the most preferred strategy for the DM. However, this is not the case in most real problems, as $\varepsilon_I(P_Z, \mathbf{u})$ may contain many elements. The generation of this set can be difficult and, usually, does not by itself solve the problem, because it does not provide the DM with a small enough number of decision strategies to facilitate the choice. Thus, our problem should be restated as "select a single element from the set $\varepsilon_I(P_Z, \mathbf{u})$".
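A small sketch of the dominance test and pairwise filter behind these definitions; we read $E_I(\mathbf{u}, q) \geq E_I(\mathbf{u}, p)$ as interval dominance in every component (the lower expected utility of $q$ at least the upper one of $p$), consistent with the three-case comparison used by the algorithm of section 3. Function names are ours.

```python
def dominates(ev_q, ev_p):
    """q dominates p: in every component the interval of q lies at or
    above that of p (lower endpoint of q >= upper endpoint of p),
    with strict inequality in at least one component."""
    ge = all(ql >= pu for (ql, qu), (pl, pu) in zip(ev_q, ev_p))
    gt = any(ql > pu for (ql, qu), (pl, pu) in zip(ev_q, ev_p))
    return ge and gt

def efficient_subset(strategies, exp_utility):
    """Pairwise filter: keep the strategies whose expected utility
    vector is not dominated by that of any other strategy."""
    evs = [exp_utility(s) for s in strategies]
    return [s for s, ev in zip(strategies, evs)
            if not any(dominates(other, ev)
                       for other in evs if other is not ev)]
```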
One way to address this decision problem, favoured by behavioural approaches, is possible when the DM is able to reveal more information about his preferences, providing additional structural assumptions that yield a subset of the imprecise utility efficient vector set. So, we provide an interactive method based on multi-objective simulated annealing, Serafini (1992), Czyzak & Jaszkiewicz (1997) and Teghem et al. (2000), that aids the DM to progressively build a subset of the approximation set.
3. Multi-objective Simulated Annealing
The method we propose works on the set of imprecise expected utility vectors: we generate an approximation set $A(P_Z, \mathbf{u})$ of the utility efficient set $\varepsilon_I(P_Z, \mathbf{u})$, i.e., a set of strategies not dominated by any other generated strategy.
The basic idea of multi-objective simulated annealing (MSA) is simple. The method begins with a first iteration providing an initial strategy or solution $p_0$, and the set $A(P_Z, \mathbf{u})$ is initialized to contain only $p_0$. In the following iterations, another strategy $q$ from the neighbourhood of the current one is considered and is accepted if it is not dominated by any solution currently in the approximation set. In this case, we add $q$ to the set $A(P_Z, \mathbf{u})$ and throw out any solution in $A(P_Z, \mathbf{u})$ dominated by $q$. On the other hand, if $q$ is dominated by some solution in $A(P_Z, \mathbf{u})$, we would still move to $q$ for the next iteration with a certain probability. In this way, as the iterations move through the space, we simultaneously build the set $A(P_Z, \mathbf{u})$.
Next, we introduce the phases of the method in more detail. For this purpose, let us see the steps to be applied from a general point of view, together with some adaptations to our specific case, the imprecise multiattribute problem under risk. Some concepts that appear in the algorithm, such as the neighbourhood of a solution, will be clarified later.
Parameters and stopping conditions that are typically used in simulated annealing are the following:
- $n$, the number of iterations of the algorithm.
- $T_0$, the initial temperature (alternatively, it defines the initial acceptance probability).
- $N_{step}$, the number of iterations during which the temperature is held at the same value.
- $N_{count}$, the number of consecutive iterations without having found new solutions.
- $\alpha$ (< 1), the cooling factor, on which the updating of the temperature depends. If the temperature decreases slowly, so does the performance. The usual scheme is to hold the temperature for a number of iterations, in this case $N_{step}$ iterations, and then decrease it by multiplying it by $\alpha$. A typical value for $\alpha$ is 0.95, Hajek (1988).
- $T_{stop}$, the final temperature.
- $N_{stop}$, the maximum number of iterations without improvement, used as a stopping rule.
More information on these parameters and stopping conditions for the standard algorithm can be found in Pirlot (1996).
Now, the MSA algorithm can be formulated as follows:

- Initialisation: we set $N_{count} = n = 0$ and $A(P_Z, \mathbf{u}) = \{p_0\}$, where $p_0$ is drawn at random from $P_Z$.
- Step $n$: let $p_n$ be the solution at the current iteration, $V(p_n)$ a neighbourhood of $p_n$ and $q$ a solution drawn at random from $V(p_n)$. Let us consider the differences between the imprecise expected utilities in each component $k$, $k = 1, \ldots, m$, given the respective imprecise expectations

$$[E(u_k^L, p_n), E(u_k^U, p_n)] \quad \text{and} \quad [E(u_k^L, q), E(u_k^U, q)].$$

There are three possibilities, see Figure 3:

1. For all $k$, $E(u_k^L, q) - E(u_k^U, p_n) \geq 0$, which implies that $q$ dominates $p_n$. Then $p_{n+1} \leftarrow q$.
2. Neither $q$ nor $p_n$ dominates the other (for instance, there are $k$, $k'$ such that $E(u_k^U, q) - E(u_k^L, p_n) < 0$ and $E(u_{k'}^L, q) - E(u_{k'}^U, p_n) > 0$). Then $p_{n+1} \leftarrow q$.
3. For all $k$, $E(u_k^U, q) - E(u_k^L, p_n) \leq 0$, which implies that $p_n$ dominates $q$. In this case:
   (a) $p_{n+1} \leftarrow q$ with probability $\tau$;
   (b) $p_{n+1} \leftarrow p_n$ with probability $1 - \tau$.
   In both cases the number of consecutive iterations without having obtained a new solution is increased: $N_{count} = N_{count} + 1$.

The probability $\tau$ depends on the differences between the expected utilities of $q$ and $p_n$ and on the temperature, as we shall see next.
Moreover, in the first two cases, the set $A(P_Z, \mathbf{u})$ is updated by comparing $q$ with the solutions in the list:

- If $q$ is not dominated by any solution in $A(P_Z, \mathbf{u})$, then we add $q$ to $A(P_Z, \mathbf{u})$, throw out any solution in $A(P_Z, \mathbf{u})$ dominated by $q$, and set $N_{count} = 0$.
- Otherwise, $N_{count} = N_{count} + 1$.
One possible difficulty with this dominance principle is that it can be far too strict: in real applications, cases 1 and 3 arise on very few occasions, preventing an adequate search of the utility efficient set.
To overcome this inconvenience, we propose to relax this kind of comparison by allowing the DM to introduce a percentage quantity $\sigma$. Observe that when the percentage quantity equals zero we are back in the initial setting. Thus, for a given $p_n$, instead of the intervals $[E(u_k^L, p_n), E(u_k^U, p_n)]$, for each $k = 1, \ldots, m$, we would use the intervals

$$[E(u_k^L, p_n) + \sigma d_k,\ E(u_k^U, p_n) - \sigma d_k],$$

where

$$d_k = (E(u_k^U, p_n) - E(u_k^L, p_n))/2,$$

i.e., $d_k$ is half of the expected utility interval length in component $k$ for the solution $p_n$. Figure 4 displays the effect of this relaxation in case 2: it shows the relationship between $p_n$ and $q$ before introducing the percentage quantity $\sigma$ (including the dotted lines) and after (only the continuous lines).

The stopping condition makes reference to the current value of the temperature and to the number of iterations carried out without having obtained new solutions: the algorithm stops when the temperature falls below $T_{stop}$ or after $N_{stop}$ consecutive iterations without new solutions.
- (Updating of the temperature) If $(n \bmod N_{step}) = 0$ then $T_n = \alpha T_{n-1}$; otherwise, $T_n = T_{n-1}$.
- (Stopping condition) If $N_{count} = N_{stop}$ or $T_n < T_{stop}$, then stop.
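Putting the steps together, a skeleton of the MSA loop might look as follows; `neighbour` draws a random solution from $V(p_n)$, `exp_utility` returns $E_I(\mathbf{u}, \cdot)$, `accept_prob` is the acceptance rule $\tau$ discussed in the next paragraphs, and `dominates` is the interval-dominance test sketched in section 2. All function names are ours; this is a sketch, not the authors' implementation.

```python
import random

def msa(p0, neighbour, exp_utility, accept_prob,
        T0=1.0, alpha=0.95, n_step=50, t_stop=1e-3, n_stop=500):
    """Skeleton of the multi-objective simulated annealing above."""
    p, T, n, n_count = p0, T0, 0, 0
    archive = [p0]                                   # A(P_Z, u)
    while T >= t_stop and n_count < n_stop:
        n += 1
        q = neighbour(p)
        ev_p, ev_q = exp_utility(p), exp_utility(q)
        if dominates(ev_p, ev_q):                    # case 3: p_n dominates q
            if random.random() < accept_prob(ev_q, ev_p, T):
                p = q                                # accepted with prob. tau
            n_count += 1
        else:                                        # cases 1 and 2: move to q
            p = q
            evs = [exp_utility(a) for a in archive]
            if not any(dominates(e, ev_q) for e in evs):
                archive = [a for a, e in zip(archive, evs)
                           if not dominates(ev_q, e)]
                archive.append(q)                    # q enters A(P_Z, u)
                n_count = 0
            else:
                n_count += 1
        if n % n_step == 0:
            T *= alpha                               # cooling schedule
    return archive
```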
Fortemps et al. (1994) proposed several multi-objective rules for the acceptance probability in the third case. These rules are based on two different criteria for accepting the chosen neighbourhood point with probability one: the strong criterion, under which only points $q$ that dominate $p_n$ are accepted with probability one, and the weak criterion, under which only dominated points are accepted with probability strictly smaller than one.
A rule based on the strong criterion is the P-rule (Product), defined in our context as

$$\tau = \prod_{j=1}^m \min\left\{1,\ \exp\left(\frac{\lambda_j \Delta_j}{T}\right)\right\}, \qquad \Delta_j = E(u_j^L, q) - E(u_j^U, p_n),$$

where the amount $T/\lambda_j$ plays the role of the temperature $T_j$. This is like having a different temperature factor for each component.
Another rule is the W-rule (Weak) which, as its name indicates, uses the weak criterion. It is defined as

$$\tau = \min\left\{1,\ \exp\left(\max_{j=1,\ldots,m} \frac{\lambda_j \Delta_j}{T}\right)\right\}.$$
For the reasons described in Serafini (1992), we have decided to use a combination of both rules that keeps the advantages of each, leading to the M-rule (Mixed), defined as

$$\tau = r \prod_{j=1}^m \min\left\{1,\ \exp\left(\frac{\lambda_j \Delta_j}{T}\right)\right\} + (1 - r)\, \min\left\{1,\ \exp\left(\max_j \frac{\lambda_j \Delta_j}{T}\right)\right\},$$

where $r$ is a weighting factor provided by the DM.
In all the formulas, $T$ represents the temperature parameter and a set of weights $\lambda$ is necessary to define each function. Moreover, taking into account the less strict dominance principle, see Figure 4, we obtain the modified M-rule by replacing each difference $\Delta_k$ with $\Delta_k + \sigma(d_k + d'_k)$, where

$$d_k = (E(u_k^U, p_n) - E(u_k^L, p_n))/2 \quad \text{and} \quad d'_k = (E(u_k^U, q) - E(u_k^L, q))/2.$$

The set $A(P_Z, \mathbf{u})$ depends on $\lambda$, so we denote it from now on by $A(P_Z, \mathbf{u}, \lambda)$. Thus, $A(P_Z, \mathbf{u}, \lambda)$ contains all the potentially efficient imprecise utility vector solutions generated by MSA using the weights $\lambda$. Note that by controlling the weights we can increase or decrease the acceptance probability of new solutions; that is, selecting a certain set of weights can lead us towards a certain region of potentially efficient imprecise utility vector solutions.
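A sketch of these rules follows. Since the exact component differences were typeset as formulas we cannot fully recover, we take $\Delta_j$ as the gap between the $\sigma$-relaxed intervals of $q$ and $p_n$ in component $j$, and we read the M-rule as a convex combination of the two rules with weight $r$; both are our interpretations, not a verified transcription.

```python
import math

def gaps(ev_q, ev_p, sigma=0.0):
    """Delta_j: gap between the sigma-shrunk intervals of q and p_n,
    i.e. Delta_j = E(u_j^L, q) - E(u_j^U, p_n) + sigma*(d_j + d'_j)."""
    out = []
    for (ql, qu), (pl, pu) in zip(ev_q, ev_p):
        d_p, d_q = (pu - pl) / 2.0, (qu - ql) / 2.0  # half interval lengths
        out.append(ql - pu + sigma * (d_p + d_q))
    return out

def p_rule(deltas, lam, T):
    """Strong criterion (P-rule): product over components.
    min(1, exp(x)) is computed as exp(min(0, x)) to avoid overflow."""
    prob = 1.0
    for dj, lj in zip(deltas, lam):
        prob *= math.exp(min(0.0, lj * dj / T))
    return prob

def w_rule(deltas, lam, T):
    """Weak criterion (W-rule): driven by the best weighted component."""
    return math.exp(min(0.0, max(lj * dj / T
                                 for dj, lj in zip(deltas, lam))))

def m_rule(ev_q, ev_p, T, lam, r=0.5, sigma=0.0):
    """Mixed rule: convex combination of P- and W-rules (our reading)."""
    deltas = gaps(ev_q, ev_p, sigma)
    return r * p_rule(deltas, lam, T) + (1 - r) * w_rule(deltas, lam, T)
```

A closure such as `lambda ev_q, ev_p, T: m_rule(ev_q, ev_p, T, lam)` can then be passed as the `accept_prob` argument of the MSA skeleton above.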
The procedure for obtaining a good approximation of the set $A(P_Z, \mathbf{u})$ is as follows: $A(P_Z, \mathbf{u}, \lambda)$ is the list of potentially efficient imprecise utility vectors obtained with the weights $\lambda$. Taking several sets of weights $\lambda^{(l)}$, $l \in L$, generated in an extensively diversified way, we obtain for each one a list $A(P_Z, \mathbf{u}, \lambda^{(l)})$ that contains the potentially efficient utility solutions in the direction induced by $\lambda^{(l)}$. So, to obtain a good approximation $A(P_Z, \mathbf{u})$ of $\varepsilon_I(P_Z, \mathbf{u})$, we need to filter the union of the sets $A(P_Z, \mathbf{u}, \lambda^{(l)})$ by pairwise comparisons to throw out the dominated solutions. Denoting this filtering process by $\Delta$, we have

$$A(P_Z, \mathbf{u}) = \Delta\Big(\bigcup_{l \in L} A(P_Z, \mathbf{u}, \lambda^{(l)})\Big).$$
One more concept must be clarified: the neighbourhood $V(p_n)$ of $p_n$. For this purpose we make use of the notion of Euclidean distance between two solutions: the solutions whose distance to $p_n$ is smaller than a certain threshold $d$ belong to its neighbourhood.
Given two solutions $p$ and $q$, with imprecise expected utility vectors $E_I(\mathbf{u}, p)$ and $E_I(\mathbf{u}, q)$, the Euclidean distance between them, assessed on the midpoints of the respective imprecise components, must be smaller than or equal to $d$, i.e.,

$$\sqrt{\sum_{j=1}^m (p_j^A - q_j^A)^2} \leq d,$$

where $p_j^A = (E(u_j^L, p) + E(u_j^U, p))/2$, $j = 1, \ldots, m$, and similarly for $q_j^A$, $j = 1, \ldots, m$.
The distance threshold $d$ is dynamic: it depends on the temperature updating and on the cardinality of the neighbourhood, in such a way that it is high in the first iterations and decreases along them.
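A short sketch of the neighbourhood test over interval midpoints (function names are ours):

```python
import math

def midpoint_distance(ev_p, ev_q):
    """Euclidean distance between the midpoints p_j^A and q_j^A of the
    expected-utility intervals of p and q."""
    return math.sqrt(sum((((pl + pu) - (ql + qu)) / 2.0) ** 2
                         for (pl, pu), (ql, qu) in zip(ev_p, ev_q)))

def in_neighbourhood(ev_p, ev_q, d):
    """q belongs to V(p) when the midpoint distance does not exceed d."""
    return midpoint_distance(ev_p, ev_q) <= d
```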
4. Interactive procedure
As mentioned above, the DM is usually interested not in the complete generation of the approximation set but in obtaining one or several compromise solutions that satisfy his preferences. For this purpose, we propose an interactive method based on MSA, adapted from Teghem et al. (2000), that allows the DM to progressively build a subset of the approximation set of the imprecise utility efficient vector set $\varepsilon_I(P_Z, \mathbf{u})$.
4.1 Initial phase
The following points must first be considered:
- We assign values to the parameters of the MSA, i.e., $T_0$, $\alpha$, $N_{step}$, $T_{stop}$ and $N_{stop}$, for their later use as inputs of the algorithm.
- We apply simulated annealing to each objective separately, obtaining the highest expected upper utility for each component $k = 1, \ldots, m$, which we denote by $E(u_k^U, \hat{p}^k)$, $k$ being the optimised component. The best solution found for objective $k$ is denoted $\hat{p}^k$.
- We define a variation interval $[m_k, M_k]$ for each component $k$, with $M_k = E(u_k^U, \hat{p}^k)$ and $m_k = \min_{l=1,\ldots,m} E(u_k^U, \hat{p}^l)$, the minimum value of component $k$ over all the solutions $\hat{p}^l$, $l = 1, \ldots, m$. Thus $M_k$ and $m_k$ are approximations of the coordinates of the ideal and nadir points, respectively.
- We initialise the minimal satisfaction levels $\varepsilon_k$ for the different attributes $Z_k$. In our problem, as we start from a set of imprecise expected utility vectors, what the DM provides through these minimal satisfaction levels are bounds above which the utility of each component satisfies him individually.
- The DM can fix a unique satisfaction level applicable to all the attributes or provide one for each attribute. We suggest initialising these quantities with a value in the corresponding variation interval, i.e., $\varepsilon_k = v_k$, $v_k \in [m_k, M_k]$. However, it would also be possible to fix them as the lower bounds of the variation intervals, $v_k = m_k$, $k = 1, \ldots, m$, in such a way that in the first iteration the entire efficient frontier is explored. Through the interactive process posed in this section, the DM will modify the satisfaction levels according to his preferences, in reply to the information he obtains.
- We establish the set $W^{(0)}$ with $|L|$ ($|\cdot|$ means cardinality) weighting vectors,

$$W^{(0)} = \{\lambda^{(l)} = (\lambda_1^{(l)}, \ldots, \lambda_m^{(l)}),\ l \in L\}.$$

Note that these sets of weights should be widely diversified to make it possible to explore the whole set of potentially efficient solutions. Each set of weights is uniformly generated, i.e.,

$$\lambda_k^{(l)} \in \{0, 1/r, 2/r, \ldots, (r-1)/r, 1\} \quad \text{with} \quad \sum_{k=1}^m \lambda_k^{(l)} = 1, \quad \forall l \in L.$$

The number $r$ can be defined by the DM, in such a way that $|L| = \binom{r+m-1}{m-1}$ (a small generation sketch is given after this list).
- We apply the MSA to obtain $A(P_Z, \mathbf{u})$ as explained above, except that for each $\lambda^{(l)}$ we restrict the entry of a solution into $A(P_Z, \mathbf{u}, \lambda^{(l)})$ as follows:

$$p = ([E(u_1^L, p), E(u_1^U, p)], \ldots, [E(u_m^L, p), E(u_m^U, p)]) \in A(P_Z, \mathbf{u}, \lambda^{(l)})$$

if and only if $E(u_k^L, p) \geq \varepsilon_k$ for all $k$. Another, less strict, possibility is to state the last condition in terms of the average (M) utility instead of the lower one, i.e., $E(u_k^M, p) \geq \varepsilon_k$ for all $k$, with $E(u_k^M, p) = (E(u_k^L, p) + E(u_k^U, p))/2$.
- In this way, we generate an initial list

$$L_0 = \Delta\Big(\bigcup_{\lambda^{(l)} \in W^{(0)}} A(P_Z, \mathbf{u}, \lambda^{(l)})\Big)$$

as a result of the filtering operation considered above.
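The uniformly diversified set $W^{(0)}$ can be enumerated directly; a minimal sketch (for $m = 3$ and $r = 4$ it yields $|L| = \binom{6}{2} = 15$ vectors):

```python
from itertools import product

def weight_vectors(m, r):
    """All lambda with components in {0, 1/r, ..., 1} summing to 1."""
    return [tuple(k / r for k in combo)
            for combo in product(range(r + 1), repeat=m)
            if sum(combo) == r]

W0 = weight_vectors(m=3, r=4)
print(len(W0))  # 15 = C(4+3-1, 3-1)
```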
4.2 Mth iteration
Dialog phase with the DM
The list $L_{M-1}$ is presented to the DM, who:
- Discards from $L_{M-1}$ those solutions that do not satisfy him, keeping only the most preferred one(s).
- Modifies the minimal satisfaction levels $\varepsilon_k$, taking into account the information or characteristics furnished by the preferred solution(s). Based on this new information, and provided there exist one or several $k = 1, \ldots, m$ such that $\varepsilon_k \neq m_k$, a new set of weights $\lambda^{(|L|+1)}$ is defined from the new levels, as described in the computation phase below.
- Updates the parameters of the MSA if he considers it suitable. In each iteration, he can choose to intensify (increasing $N_{step}$ and $\alpha$) or not (decreasing these values) the search for new potentially optimal solutions of interest.
Computation phase
Based on the new satisfaction levels supplied by the DM in the dialog phase, we use a new restricted list of weights. We define the bounds

$$\alpha_k = \frac{\varepsilon_k - m_k}{M_k - m_k}, \quad k = 1, \ldots, m,$$

and the new set of weights is

$$W^{(M)} = \Big\{\lambda \in W^{(0)} : \lambda_k \geq \gamma\, \alpha_k \big/ \textstyle\sum_{j} \alpha_j\ \ \forall k\Big\} \cup \{\lambda^{(|L|+1)}\}.$$

We have the relationship $0 \leq \sum_k \alpha_k \leq m$. When $\sum_k \alpha_k = 0$, i.e., $\varepsilon_k = m_k$ for all $k$ (the DM has not provided satisfaction levels), no new set of weights is defined; we use the complete set $W^{(0)}$ of weights and, in this case, the whole efficient frontier is explored. The restricted set $W^{(M)}$ allows us to eliminate those weights that become useless with respect to the DM's satisfaction levels. In order not to be too restrictive in the elimination of weight sets, and to be sure to cover the reduced part of the efficient frontier that the DM wants to explore, we should assign a not too small value to the parameter $\gamma$; we use the value $\gamma = 0.9$ for our concrete problem. The new set of weights $\lambda^{(|L|+1)}$ is also introduced so as not to obtain an empty set of weights (when the DM's satisfaction levels are too strong or the value assigned to $\gamma$ is too small).
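The retention rule in the sketch below is only our reconstruction of the lost formulas, consistent with the surrounding text ($\alpha_k$ grows with the demand $\varepsilon_k$, $\sum_k \alpha_k \in [0, m]$, and $\gamma$ scales how aggressively weights are discarded); the exact bounds in the paper follow Teghem et al. (2000).

```python
def restricted_weights(W, eps, m_lo, M_up, gamma=0.9):
    """Keep the weight vectors compatible with the DM's satisfaction
    levels. alpha_k = (eps_k - m_k)/(M_k - m_k) in [0, 1] measures how
    demanding the level on component k is; the retention rule
    lambda_k >= gamma * alpha_k / sum(alpha) is our assumption."""
    alpha = [(e - lo) / (up - lo) if up > lo else 0.0
             for e, lo, up in zip(eps, m_lo, M_up)]
    s = sum(alpha)
    if s == 0:                 # no satisfaction levels: explore everything
        return list(W)
    kept = [lam for lam in W
            if all(lk >= gamma * ak / s for lk, ak in zip(lam, alpha))]
    # The extra weight lambda^(|L|+1), pointing towards the satisfaction
    # levels, guarantees a nonempty set (again, our reconstruction):
    kept.append(tuple(ak / s for ak in alpha))
    return kept
```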
- We perform the MSA method with the set of weights $W^{(M)}$, but updating $m_k$ and $M_k$ (which were approximations) during its execution, in the following way:

if there exists $p$ such that $E(u_k^U, p) > M_k$, then $M_k = E(u_k^U, p)$;
if there exists $p$ such that $E(u_k^L, p) < m_k$, then $m_k = E(u_k^L, p)$.
- We obtain from the MSA the list of solutions $L_M$, which we can state as

$$L_M = \Delta\Big( L_{M-1} \cup \big\{ A(P_Z, \mathbf{u}, \lambda^{(l)}) : \lambda^{(l)} \in W^{(M)} \big\} \Big) \cap \{ p : E(u_k^L, p) \geq \varepsilon_k\ \forall k \},$$

i.e., we filter the sets $A(P_Z, \mathbf{u}, \lambda^{(l)})$ and $L_{M-1}$ by pairwise comparisons to remove the dominated solutions. Moreover, we also remove the solutions that do not satisfy the new minimal satisfaction levels defined by the DM (these had not been tested on the solutions carried over from the previous iteration, $L_{M-1}$).

The interactive MSA ends when the DM is completely satisfied with a solution or a set of solutions of $L_M$.
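One way to code this update, reusing the `dominates` helper from section 2; a sketch under the assumption that solutions are screened on their lower expected utilities (function names ours):

```python
def update_list(prev_list, new_lists, exp_utility, eps):
    """L_M: pairwise-filter the union of L_{M-1} and the new MSA lists,
    then drop solutions below the new minimal satisfaction levels."""
    pool = list(prev_list)
    for lst in new_lists:          # one A(P_Z, u, lambda) per weight set
        pool.extend(lst)
    keep = []
    for p in pool:
        ev = exp_utility(p)
        if any(lo < e for (lo, up), e in zip(ev, eps)):
            continue               # fails a satisfaction level eps_k
        if not any(dominates(exp_utility(q), ev)
                   for q in pool if q is not p):
            keep.append(p)         # not dominated by any pooled solution
    return keep
```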
5. Conclusions
In imprecise multiattribute decision-making problems under risk, the utility efficient set plays an important role in the solution of the problem. However, its generation can be very difficult, or its cardinality too large for the DM to choose one solution. To overcome these inconveniences we have proposed a method, based on an adaptation of simulated annealing to the multi-objective framework, that aids the DM in finding an approximation to the imprecise utility efficient set. Finally, through an interactive process in which the DM progressively provides (depending on the information he receives) some characteristics that the preferred solution should satisfy (through the minimal satisfaction levels), one solution (or a small set of solutions) matching his/her preferences is obtained from the approximation set.
Acknowledgements
This paper was supported by the Ministry of Science and Technology project DPI2001-3731 and the Madrid regional government project CAM 07T/0027/2000.
Received September 2001;
accepted October 2002 after one revision.
References
- (1) Colson, G. & de Bruyn, C. (eds.) (1989). Models and Methods in Multiple Criteria Decision Making. Pergamon Press, Oxford.
- (2) Czyzak, P. & Jaszkiewicz, A. (1997). Pareto Simulated Annealing. Multiple Criteria Decision Making, Lecture Notes in Economics and Mathematical Systems, 448, 297-307.
- (3) Farquhar, P.H. (1984). Utility Assessment Methods. Management Science, 30, 1283-1300.
- (4) Fishburn, P.C. (1964). Decision and Value Theory. Wiley, New York.
- (5) Fishburn, P.C. (1970). Utility Theory for Decision Making. Wiley, New York.
- (6) Fortemps, P.; Teghem, J. & Ulungu, B. (1994). Heuristics for Multiobjective Combinatorial Optimization by Simulated Annealing. XI-th International Conference on MCDM, Coimbra, Portugal.
- (7) French, S. (1986). Decision Theory: An Introduction to the Mathematics of Rationality. Horwood, Chichester.
- (8) Hajek, B. (1988). Cooling Schedules for Optimal Annealing. Mathematics of Operations Research, 13, 311-329.
- (9) Hershey, J.C.; Kunreuther, H.C. & Schoemaker, P.J. (1982). Sources of Bias in Assessment Procedures for Utility Functions. Management Science, 28, 936-953.
- (10) Holloway, C.A. (1979). Decision Making Under Uncertainty: Models and Choices. Prentice-Hall, Englewood Cliffs, New Jersey.
- (11) Hull, J.; Moore, P.G. & Thomas, H. (1973). Utility and its Measurement. Journal of the Royal Statistical Society, Series A, 136, 226-247.
- (12) Jaffray, J.Y. (1989). Some Experimental Findings on Decision Making under Risk and their Implications. European Journal of Operational Research, 38, 301-306.
- (13) Jiménez, A.; Ríos-Insua, S. & Mateos, A. (2002). A Decision Support System for Multiattribute Utility Evaluation based on Imprecise Assignments. Decision Support Systems, (to appear).
- (14) Keeney, R.L. & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York.
- (15) Logical Decisions (1998). Multi-Measure Decision Analysis Software, V. 5.0, 1014 Wood Lily Dr, Golden, CO.
- (16) Mateos, A. & Ríos-Insua, S. (1997). Approximation of Value Efficient Solutions. Journal of Multicriteria Decision Analysis, 6, 227-232.
- (17) Mateos, A. & Ríos-Insua, S. (1998). An Approximation to the Value Efficient Set. Lecture Notes in Economics and Mathematical Systems, 448, 360-371.
- (18) Mateos, A.; Ríos-Insua, S. & Gallego, E. (2001). Postoptimal Analysis in a Multi-Attribute Decision Model for Restoring Contaminated Aquatic Ecosystems. Journal of the Operational Research Society, 52, 1-12.
- (19) McCord, M. & de Neufville, R. (1986). Lottery Equivalents: Reduction of the Certainty Effect Problem in Utility Assessment. Management Science, 32, 56-61.
- (20) Mosteller, F. & Nogee, P. (1951). An Experimental Measurement of Utility. Journal of Political Economy, 59, 371-404.
- (21) Pirlot, M. (1996). General Local Search Methods. European Journal of Operational Research, 92, 493-511.
- (22) Rietveld, P. (1980). Multiple Objective Decision Methods and Regional Planning. Studies in Regional Science and Urban Economics, 7, North-Holland, Amsterdam.
- (23) Ríos-Insua, S. & Mateos, A. (1998). Utility Efficient Set and its Interactive Reduction. European Journal of Operational Research, 105, 581-593.
- (24) Ríos, S.; Ríos-Insua, S.; Ríos Insua, D. & Pachon, J.G. (1994). Experiments in Robust Decision Making. Decision Theory and Decision Analysis: Trends and Challenges, 233-242.
- (25) Ríos Insua, D.; Gallego, E.; Mateos, A. & Ríos-Insua, S. (2000). MOIRA: A Decision Support System for Decision Making on Aquatic Ecosystems Contaminated by Radioactive Fallout. Annals of Operations Research, 95, 341-364.
- (26) Roberts, F.S. (1972). What if Utility Functions do not Exist? Theory and Decision, 3, 126-139.
- (27) Roberts, F.S. (1979). Measurement Theory. Addison-Wesley, Reading, Mass.
- (28) Savage, L.J. (1954). The Foundations of Statistics. Wiley, New York.
- (29) Schlaifer, R. (1969). Analysis of Decisions Under Uncertainty. McGraw-Hill, New York.
- (30) Serafini, P. (1992). Simulated Annealing for Multiple Objective Optimization Problems. In: Proceedings of the Tenth International Conference on MCDM [edited by G.H. Tzeng, H.F. Wang, V.P. Wen and P.L. Yu], Springer Verlag, Berlin, 87-96.
- (31) Teghem, J.; Tuyttens, D. & Ulungu, E.L. (2000). An Interactive Heuristic Method for Multi-Objective Combinatorial Optimization. Computers and Operations Research, 27, 621-634.
- (32) Ulungu, E.L.; Teghem, J. & Ost, Ch. (1998). Efficiency of Interactive Multi-Objective Simulated Annealing Through a Case Study. Journal of the Operational Research Society, 49, 1044-1050.
- (33) von Neumann, J. & Morgenstern, O. (1947). Theory of Games and Economic Behavior. Princeton University Press, Princeton.
- (34) von Nitzsch R. & Weber, M. (1988). Utility Function Assessment on a Micro-Computer: An Interactive Procedure. Annals of Operations Research, 16, 149-160.
- (35) Weber, M. (1987). Decision Making with Incomplete Information. European Journal of Operational Research, 28, 44-57.