
## Anton Benz

Centre for General Linguistics, ZAS, Berlin
We call approaches which use decision theoretic explications of Grice's relevance maxim for selecting best answers and calculating implicatures relevance scale approaches. In this paper we discuss these approaches with respect to the questions: Are intuitively optimal assertions identical to assertions with maximal relevance? Can classical relevance implicatures be explained by the assumption that propositions are implicated to be false exactly if they are more relevant than what the speaker has actually asserted? The answers to both questions are negative. We will show that there exists a decision theoretically defined relevance scale which the hearer can use for calculating implicatures, but we will also see that this hearer related scale is only defined after the speaker's assertion is known and, therefore, cannot be presupposed by a definition of Grice's relevance maxim.

A clarification of the status and a satisfying formulation of the Relevance principle is one of the major desiderata of Gricean pragmatics. In the traditional formulation, the three maxims of (Quality), (Quantity), and (Relevance) can be taken together as: (QQR) Be truthful and say as much as you can as long as it is relevant. Applications often rely on an intuitive everyday understanding of relevance. That this is insufficient for a useful theory of implicatures can be seen from examples such as the following from Grice (1989, p. 32):
A is standing by an obviously immobilized car and is approached by B, after which the following exchange takes place:

A: I am out of petrol.

B: There is a garage round the corner. (G) +> The garage is open. (H)
Grice notes that because B's remark can only be relevant if the garage is open, A can conclude that H. A possible derivation of this implicature along the lines of the standard theory (Levinson 1983) could proceed as follows: let ¬H denote the negation of H:
1. B said that G;
2. ¬H, that the garage is not open, is relevant and G ∧ ¬H is more informative than G;
3. B observes (QQR), hence the only reason for not saying that ¬H can be that ¬H is false;
4. Hence H.

But, in the given context, H, that the garage is open, can also be called relevant. Hence, the same argument can be made with H and ¬H interchanged, which leads to the conclusion that ¬H
∗I have to thank the Institute for Business Communication and Information Analysis, IFKI, at the University of Southern Denmark and the Centre for General Linguistics, Typology and Universals Research, ZAS, Berlin. This paper benefited a lot from discussions with Michael Franke and Robert van Rooij.

is implicated. The weak point of such derivations, clearly, is the fact that H and ¬H can both be called relevant.

Relevance is a central notion in decision theory (Pratt, Raiffa and Schlaifer 1995). It is defined as the value of information to a decision problem of a single agent. In this paper we discuss approaches that use decision theoretic measures as explications of Grice's notion of relevance. Such measures of relevance have been discussed e.g. by (Merin 1999), (Rooij 2004), and (Schulz and Rooij to appear). Decision theoretic relevance measures define a linear pre-order on propositions. In analogy to the standard theory of quantity implicatures, we can call these linear pre-orders relevance scales, and the approaches employing these scales relevance scale approaches. We discuss relevance scale approaches with respect to three questions: Are intuitively optimal assertions identical to assertions with maximal relevance? Can classical relevance implicatures be explained by the assumption that propositions are implicated to be false exactly if they are more relevant than what the speaker has actually asserted? Do these decision theoretic explications make the inference valid? In this paper, we show that all three questions receive a negative answer:
1. Negative Result: If we assume that the speaker maximises the relevance of his utterances, then no suitable decision theoretic explication of the notion of relevance can avoid misleading answers.

2. Negative Result: No relevance scale approach can avoid unintended implicatures in cases
If the speaker cannot follow (QQR) for selecting his utterances, nor the hearer rely on it as an interpretative principle, then two fundamental properties of conversational maxims are violated.

This point is confirmed by the third negative result:
3. Negative Result: The relevance of propositions that makes the inference in valid is itself implicated information and can therefore not define a conversational maxim.

The paper divides into three parts. In the first part, Section , we provide a non-technical overview and discussion of our results. In the second part, Sections , we introduce the general structures that characterise relevance scale approaches. We contrast them with a game theoretic model for deriving optimal assertions (Benz 2006, Benz and Rooij to appear). Finally, in Sections , we prove the three negative results.

An essential feature of Grice's theory of conversational implicatures is the assumption that there is a joint purpose underlying every talk exchange. In the following, we concentrate on answering situations which are subordinated to a decision problem of the inquirer. The question which creates the answering situation provides an explicit shared discourse goal. We include cases where the question remains implicit, as in the Out-of-Petrol example, where we can assume that B's assertion is an answer to the question "Where can I buy petrol for my car?" Implicatures will always be treated as particularised conversational implicatures. We restrict considerations to situations where the inquirer and the answering expert are fully cooperative, where the expert knows everything the inquirer knows, and where these facts are common knowledge. We use game and decision theory to represent these situations. Our discussion of Example showed that a precise definition of relevance is necessary for a useful theory of relevance implicatures.

Game and decision theory are attractive frameworks as they allow us to explicitly study the interaction of such central pragmatic concepts as the speaker's and hearer's preferences, information, choice of action, and coordination of interpretation.

(Grice 1989) only gave a brief formulation of the third maxim: Be relevant! Grice noted that there are analogues to most conversational maxims that relate to non-linguistic joint projects.

For the maxim of relevance, he illustrated this point with the following non-linguistic example (Grice 1989, p. 28): "If I am mixing ingredients for a cake, I do not expect to be handed a good book, or even an oven cloth (…)." In this situation, both persons seem to be equally competent to decide what is relevant and what is not; it is even more likely that the helping person is less competent than the person using the help. But in questioning and answering situations it is the inquirer who lacks information, and it is the answering person, whom we call the expert, who has the information. Grice didn't specify from whose perspective relevance has to be defined.

In principle, there are two possibilities.

The main concern of our discussion is the question whether we need a game or a decision theoretic explication of Grice's notion of relevance. Game and decision theory differ in the number of decision makers involved in decision making. Decision theory is concerned with the decision making of single agents. Context parameters are the possible actions the decision maker can choose from, their outcomes, and the decision maker's preferences over these outcomes. Game theory is concerned with the interdependent decision making of several agents.

Hence, the general question that underlies our discussion is the question whether we need an interactional model of communication in order to explain the choice of answers and their implicatures, or whether a non-interactional model is sufficient. The latter path was taken by previous approaches to pragmatics which are based on classical game or decision theory.1 We will show that hearer centred explications of relevance are insufficient both for choosing useful answers and for defining implicatures, even if we consider only highly favourable dialogue situations.

There seems to be a strong a priori argument in favour of the non-interactional view that stems from the calculability of implicatures. Calculability presupposes that interlocutors have access to the necessary contextual parameters. This seems to imply that the hearer must possess some measure of relevance that depends only on the semantics of utterances and common background knowledge, but not on the speaker's private knowledge. Furthermore, in order to coordinate the meaning of utterances successfully, it seems necessary that the speaker uses the same definition of relevance depending on the same parameters. This reasoning leads to a hearer centred definition of relevance, and hence to a decision theoretic explication based on the hearer's, i.e. the inquirer's, decision problem. We will see that this reasoning is not conclusive. Before we address the argument of calculability, we first discuss relevance as a principle for choosing optimal answers and relevance scale explanations of implicatures.

In the following, we use the term relevance if we refer to decision theoretically defined measures for the value of information. "Be relevant!" is then interpreted as meaning that the speaker should choose answers that have a positive value of information. In order to understand its effects, we first consider the maxim of quantity. In Grice's (1989) original formulation, the quantity maxim divides into two parts:
1. Make your contribution as informative as required (for the current purpose of exchange).

2. Do not make your contribution more informative than is required.

1See (Parikh 1992, Parikh 2001), (Parikh 1994), (Merin 1999), (Rooij 2004), (Schulz and Rooij to appear).

Grice himself noted that the second sub-maxim may be superfluous if we take relevance into account. If we see quantity and relevance in interaction, then we can simplify the first sub-maxim of quantity to "Say as much as you can," and restrict it by "Say only what is relevant." Hence, we can take the maxim of quantity and the relevance maxim together and phrase them as: Say as much as you can as long as it is relevant. It is commonly assumed that information can be more or less relevant. The two maxims together then lead to the constraint that the speaker's utterance should provide the most relevant information possible. This principle is restricted by the maxim of quality, which states that the speaker can only say what he believes to be true.

Hence, we end up with a constraint (QQR) that says that the speaker can only choose the most relevant proposition which he believes to be true. Let us contrast (QQR) with a principle that combines only quality and quantity (QQ): "Say as much as you can as long as it is true." We consider the following example:
Somewhere in the streets of Amsterdam.

I: Where can I buy an Italian newspaper?

E: At the station and at the Palace but nowhere else. (SE)

E: At the station. (A) / At the Palace. (B)
The inquirer has to decide where to go to buy an Italian newspaper. Let us assume that the answering expert knows that (SE) is true. What should he answer? If we assume (QQ), then he should say everything he knows, hence only SE would conform to the maxims. But intuitively, in the given situation, A and B are equally appropriate with respect to their usefulness. This is what (QQR) predicts.
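The claim that A and B are equally useful can be checked in a small probabilistic sketch. The world encoding, the uniform prior, and the 0/1 payoffs below are our own illustrative assumptions, not the paper's formal model (which is introduced in later sections):

```python
# Toy model of the Italian newspaper example: worlds are pairs
# (station_has_paper, palace_has_paper); a hypothetical encoding.
from itertools import product

worlds = list(product([0, 1], repeat=2))
P = {w: 0.25 for w in worlds}                 # inquirer's uniform prior

actions = ["go_station", "go_palace"]

def u(a, w):
    """Payoff 1 if the chosen place sells the paper, else 0."""
    return w[0] if a == "go_station" else w[1]

def EU(a, A):
    """Expected utility of action a after learning proposition A (a set of worlds)."""
    pA = sum(P[w] for w in A)
    return sum(P[w] * u(a, w) for w in A) / pA

A = {w for w in worlds if w[0] == 1}          # "At the station."
B = {w for w in worlds if w[1] == 1}          # "At the Palace."
SE = A & B                                    # "At the station and at the Palace."

for name, ans in [("A", A), ("B", B), ("SE", SE)]:
    print(name, max(EU(a, ans) for a in actions))
```

All three answers lead the inquirer to an action with the same maximal expected utility, which is the sense in which (QQR) treats A, B, and SE as equally appropriate here.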

We provide an explicit model of relevance scale approaches in Section . Our representation of an answering situation σ will consist of a decision problem D and the answering expert's expectations about the state of the world. We denote by Adm the set of all propositions which the expert believes to be true. We call our representation σ a support problem. Relevance is a property of propositions and depends on a given decision problem. Propositions can be compared according to their relevance. These very general properties of relevance can be represented by real-valued functions R with two arguments, decision problems and propositions. If D is a decision problem and A, B two propositions, then R(D, A) < R(D, B) means that A is less relevant for the decision problem D than B. We call these functions relevance measures. Hence, the set MR of all maximally relevant propositions that the answering expert believes to be true consists of all those propositions in Adm which are maximally relevant to D with respect to a given relevance measure R. MR will be defined in
The characterisation of relevance remains very general. Special measures which have beenwidely tested are sample value of information and utility value of information (Pratt et al. 1995).

By EU(a) we denote the expected utility of performing action a given the current background knowledge. By EU(a|A) we denote the expected utility of performing a after learning proposition A. Let a∗ denote the action that an agent would choose before learning anything. As we assume that agents are rational, it must hold that EU(a∗) = max_{a} EU(a). The sample value of information of A is defined as follows:
SVI(A) := max_{a} EU(a|A) − EU(a∗|A).

The utility value of information of A is defined as:
UV(A) := max_{a} EU(a|A) − max_{a} EU(a).

2From now on we assume that the inquirer is female and the answering expert male.

We see that SVI(A) can never be negative. It can only become positive if learning A induces the agent to choose a different action. In contrast, the utility value of a proposition A can become negative. It becomes positive if the maximal expected utility after learning A is higher than the maximal expected utility before learning A. In Example , both measures of relevance make the correct prediction if we assume that the station and the Palace are places where Italian newspapers might be available and that both possibilities are equally probable and optimal. But if we assume that there is a slightly higher a priori expectation that there are Italian newspapers at the station, then using sample value of information would predict that only the answer B, there are Italian newspapers at the Palace, is relevant, because only this proposition would lead to a different choice of action. If we use utility value as an explication of relevance, then the relevance principle would require the answering expert to increase the inquirer's expectations as much as possible. Even without an example, it is clear that such a prescription must lead to misleading answers. (Benz 2006) provides a proof for support problems with completely coordinated preferences but diverging expectations. Section contains an analogous result for support problems where the answering expert's expectations are derived from the inquirer's expectations by a Bayesian update. In order to prove this result we have to make stronger assumptions about relevance scales. We look at the following two examples to motivate these assumptions:
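The effect of a slightly skewed prior on the two measures can be illustrated numerically. The definitions of SVI and UV follow the text; the world encoding, probabilities, and payoffs are our own toy reconstruction of the example:

```python
# Worlds are pairs (station_has_paper, palace_has_paper); the prior makes
# the station slightly more likely to stock Italian newspapers.
from itertools import product

worlds = list(product([0, 1], repeat=2))
P = {(s, p): (0.6 if s else 0.4) * 0.5 for s, p in worlds}

actions = ["go_station", "go_palace"]
u = lambda a, w: w[0] if a == "go_station" else w[1]

def EU(a, A):
    pA = sum(P[w] for w in A)
    return sum(P[w] * u(a, w) for w in A) / pA

Omega = set(worlds)
a_star = max(actions, key=lambda a: EU(a, Omega))     # default: go_station

def SVI(A):
    """Sample value of information, as defined in the text."""
    return max(EU(a, A) for a in actions) - EU(a_star, A)

def UV(A):
    """Utility value of information, as defined in the text."""
    return max(EU(a, A) for a in actions) - max(EU(a, Omega) for a in actions)

A = {w for w in worlds if w[0] == 1}    # "At the station."
B = {w for w in worlds if w[1] == 1}    # "At the Palace."
print(SVI(A), SVI(B))   # only B changes the inquirer's choice of action
print(UV(A), UV(B))
```

As claimed above, SVI makes only B relevant (A leaves the default choice in place), while UV values both answers positively.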
There is a strike in Amsterdam and therefore the supply of foreign newspapers is a problem. The probability that there are Italian newspapers at the station is slightly higher than the probability that there are Italian newspapers at the Palace, and it might be that there are no Italian newspapers at all. All this is common knowledge between I and E. Now E learns that (N) the Palace has been supplied with foreign newspapers. In general, it is known that the probability that Italian newspapers are available at a shop increases significantly if the shop has been supplied with foreign newspapers.

We assume the same scenario as in , but E learns this time that (M) the Palace has been supplied with British newspapers. Due to the fact that the British delivery service is rarely affected by strikes and not related to the newspaper delivery services of other countries, this provides no evidence whether or not the Palace has been supplied with Italian newspapers.

What is of interest is the relation between the propositions N, M, and the uninformative proposition Ω, i.e. saying nothing. It is M ⊆ N ⊆ Ω and, as M has no influence on the expected success of going to the station or going to the Palace, M and Ω must be equally relevant to the underlying decision problem. In both examples, I's decision problem, i.e. her information, preferences and choices of action, are the same. This means that in both examples either N is more relevant than Ω, or it is not. But this means that N is either the most relevant answer in or irrelevant in . Both predictions are counterintuitive. The standard relevance measures introduced in and e.g. both predict that N is the most relevant answer in
Intuitively, in , E has nothing relevant to say because the most informative answer he could give has no influence on the expected utilities of any action. We can generalise this observation as follows: If A represents the expert's knowledge, and if A and Ω are equally relevant, then there must be no A ⊆ C ⊆ Ω which is more relevant than Ω. This condition would be sufficient for our purposes but, in order to have a constraint that depends only on the inquirer's decision problem, we formulate a slightly more general monotonicity condition: if A ⊆ B, then B must not be more relevant than A. This monotonicity condition is quite strong and rules out measures like and
In order to explain , we have to assume that relevance measures are monotone. In order to explain examples like , we have to assume that propositions A and B which lead to identical expected utilities are equally relevant: if ∀a EU(a|A) = EU(a|B), then the relevance of A and B must be equal. We call this property the Italian newspaper property. Finally, in situations where the answering expert thinks that saying nothing would induce the inquirer to choose a sub-optimal action, there must exist some relevant answer that he can choose. The precise definitions of these properties are stated in Section , Def. . A fourth condition is that relevance measures must not prescribe misleading answers. In Lemma we show that no relevance measure can satisfy all four of these properties.

As mentioned before, in principle the value of information can be determined from two perspectives, the speaker's and the hearer's. In Grice's example of handing someone ingredients for making a cake, a relevance based analogue would demand that I evaluate the ingredients according to the receiver's expectation. But why should I do so? Especially if I am more competent than the receiver and know exactly what she is going to do with the ingredients, it would be more reasonable to deliberate first how she can handle the different ingredients and then choose those ingredients she can make the best use of. Applied to answering situations, this means that the answering expert should first find out what the optimal actions for the inquirer are, and then choose an answer that will induce her to choose one of them. In order to do this, the expert has to calculate which action the inquirer will choose after receiving the different possible answers.

This leads to a game theoretic model in which the expert E calculates backward from the final outcome of I's actions a to his own decision situation where he chooses an answer A. We introduce the game theoretic model in Section . The associated set of optimal answers Op is defined in . It is identical to the set of all non-misleading answers.

Relevance scale approaches typically embrace the following assumptions: (1) propositions can be ordered according to their relevance to the joint purpose of the talk exchange, (2) speaker and hearer know this order, (3) the speaker is presumed to maximise the relevance of his talk contributions, and (4) whatever is not said but would have been more relevant is implicated to be false. The fourth assumption is a consequence of the third assumption: If the speaker is presumed to maximise relevance and asserted a proposition A which is not maximally relevant, then there must have been a reason for it. Ignoring reasons such as complexity or politeness, the only explanation is that A is the most relevant proposition which the speaker knows to be true. But if speakers cannot be assumed to maximise relevance, as shown before, then the relevance based account lacks a proper foundation. Moreover, we will show in Section that this approach necessarily predicts undesired implicatures. The Italian newspaper example is a case where there are two propositions, "At the station" A1 and "At the Palace" A2, which must be equally relevant in order to explain why the answer A1 does not implicate ¬A2 and vice versa.

In the Out-of-Petrol example we found a case where an answer A1 implicates some stronger proposition H2. By merging these two examples, we get the ultimate counterexample against relevance scale approaches:
Somewhere in Berlin. Suppose I approaches the information desk at the entrance of a shopping centre. She wants to buy Argentine wine. She knows that the staff at the information desk are very well trained and know exactly where you can buy which product in the centre. E, who serves at the information desk today, knows that there are two supermarkets selling Argentine wine, a Kaiser's supermarket in the basement and an Edeka supermarket on the first floor.

I: I want to buy some Argentine wine. Where can I get it?

E: Hm, Argentine wine. Yes, there is a Kaiser's supermarket downstairs in the basement
We show that no relevance scale approach can explain the (non-)implicatures in this example.

We consider the following propositions:
1. A1: There is a Kaiser’s supermarket in the shopping centre.

2. A2: There is an Edeka supermarket in the shopping centre.

3. H1: The Kaiser’s supermarket sells Argentine wine.

4. H2: The Edeka supermarket sells Argentine wine.

A1 and A2 are equally relevant to the joint goal of finding a shop where I can buy Argentine wine.

Due to the linearity of the pre-order induced by a real valued relevance measure, all implicatures of A2 must also be implicatures of A1. As answering Ai implicates Hi, it follows from a relevance scale approach that answering A1 must also implicate H2. But the assertion that there is a Kaiser's supermarket clearly does not implicate that there is an Edeka supermarket which sells Argentine wine.

The perhaps strongest argument in favour of relevance approaches seems to be the argument from calculability. Implicatures are part of what is communicated, hence speaker and hearer have to agree on their content, and especially the hearer has to be able to calculate them given a relevance measure that is defined relative to his local information, i.e. relative to his decision problem D. If optimality of answers can only be calculated when taking into account the speaker's expectations, then, it seems, a game theoretic approach cannot explain how the hearer is able to calculate implicatures. But this reasoning does not take into account that the hearer already knows the answer A when calculating implicatures A +> H. The hearer's local information must be identified with the pair (A, D). As we will see, this is sufficient information for calculating implicatures in an optimal answer model. We provide two criteria which can be used for calculation. The first, Lemma , allows us to calculate implicatures of the form A +> H from the fact that the action aA which the hearer will choose when learning A is optimal. The second, Lemma , is based on a relevance scale. As we saw in the previous section, relevance measures that define a linear pre-order on propositions cannot, in general, be used for calculating implicatures. Lemma makes use of the sample value of information, see , after learning answer A. In contrast to the relevance scale approaches discussed before, this relevance measure is defined relative to the posterior probability PI( . |A). It depends on the pair (A, D). We will see in Section that this notion of relevance makes the inference in valid. Both criteria are only applicable if certain epistemic conditions are satisfied. The preconditions of the second criterion are stronger than the preconditions of the first.

In the standard theory (Levinson 1983, Ch. 3), implicatures follow logically from the semantic content of an utterance and the assumption that the speaker adheres to a number of conversational maxims. It is a defining property of conversational maxims that their knowledge is a logical precondition for determining the speaker's utterance. But, as our results show, the appropriate notion of relevance that makes inferences like valid can only be measured after the answer has been given. The fact that a proposition is relevant is itself implicated information.

Hence, maximising relevance cannot be a maxim. The proper explication of Grice's concept of relevance and the meaning of relevance in cannot be the same thing. This is the third negative result about relevance measures.

The remainder of the paper contains the technical results. We first introduce our representations of answering situations, which we call support problems, in Section . Then we present two approaches to finding solutions to support problems. First we present the optimal answer approach, which is a game theoretic approach; then we characterise relevance scale approaches as described before. In Sections we show three negative results about relevance scale approaches. In Section we show that relevance approaches cannot avoid misleading answers; in Section we show that there are certain non-implicatures which cannot be explained by any relevance scale approach; in Section we argue that the appropriate notion of relevance that makes valid does not define a conversational maxim.

A decision problem is characterised by the possible states of the world, the decision maker's expectations about the state of the world, a set of actions the decision maker can choose from, and the decision maker's preferences over the outcomes of his actions. Let Ω be the set of all possible states of the world. We restrict our considerations to situations with finitely many possibilities. We represent an agent's expectations about the world by a probability distribution over Ω, i.e. a real valued function P : Ω → R with the following properties: (1) P(v) ≥ 0 for all v ∈ Ω and (2) ∑_{v∈Ω} P(v) = 1. For sets A ⊆ Ω we set P(A) = ∑_{v∈A} P(v). The pair (Ω, P) is called a finite probability space. We represent an agent's preferences over outcomes of actions by a real valued function over action–world pairs. We collect these elements in the following structure:
Definition 3.1 A decision problem is a triple ⟨(Ω, P), A, u⟩ such that (Ω, P) is a finite probability space, A a finite, non-empty set and u : A × Ω → R a function. A is called the action set, and its elements actions. u is called a payoff or utility function.
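Definition 3.1 translates directly into code. The sketch below is our own Python encoding with illustrative world and action names; it also computes the expected utility used in the text that follows:

```python
# A direct transcription of Definition 3.1: a finite probability space (Ω, P),
# an action set A, and a payoff function u : A × Ω → R.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionProblem:
    P: Dict[str, float]              # probability distribution over Ω
    A: List[str]                     # finite, non-empty action set
    u: Callable[[str, str], float]   # payoff function

    def EU(self, a: str) -> float:
        """Expected utility of action a under the prior P."""
        return sum(self.P[v] * self.u(a, v) for v in self.P)

d = DecisionProblem(
    P={"v1": 0.5, "v2": 0.5},
    A=["act1", "act2"],
    u=lambda a, v: 1.0 if (a, v) in {("act1", "v1"), ("act2", "v2")} else 0.0,
)
print([d.EU(a) for a in d.A])
```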

It is standard to assume that rational agents try to maximise their expected utilities. The expected utility of an action a is defined by:

EU(a) := ∑_{v∈Ω} P(v) · u(a, v).
In general, there might be several a ∈ A with EU(a) = max_{b∈A} EU(b). In order to make sure that there is always a unique solution to a decision problem, we assume that the decision maker has intrinsic preferences over the actions in A which only come to bear if there are several optimal actions. Hence, we add a linear order < to our decision problem and assume that the decision maker chooses a = max{a ∈ A | ∀b ∈ A EU(b) ≤ EU(a)}, where max is defined relative to <. We call ⟨(Ω, P), (A, <), u⟩ a decision problem with tie break rule.

In the following, a decision problem ⟨(Ω, P), (A, <), u⟩ represents the inquirer's situation before receiving information from an answering expert. We will assume that this problem is common knowledge. In order to get a model for the full questioning and answering situation we have to add a representation for the answering expert's situation. We only add a probability distribution PE that represents his expectations about the world:
Definition 3.2 A support problem is a five-tuple ⟨Ω, PE, PI, (A, <), u⟩ where (Ω, PE) is a finite probability space and ⟨(Ω, PI), (A, <), u⟩ a decision problem with tie break rule. We assume:
∀X ⊆ Ω PE(X) = PI(X|K) for K = {v ∈ Ω | PE(v) > 0}.

Condition implies that the expert's beliefs cannot contradict the inquirer's expectations, i.e. that for A, B ⊆ Ω:
PE (A) = 1 ⇒ PI(A) > 0 and PI(A|B) = 1 & PE (B) = 1 ⇒ PE (A) = 1.
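The defining condition PE(X) = PI(X|K) and its first consequence can be verified on a small concrete instance; the world names and numbers below are our own illustration:

```python
# Checking the defining condition of Definition 3.2 on a concrete instance.
worlds = ["v1", "v2", "v3"]
PI = {"v1": 0.2, "v2": 0.3, "v3": 0.5}   # inquirer's prior
K = {"v2", "v3"}                          # K = support of the expert's beliefs
pK = sum(PI[v] for v in K)
PE = {v: (PI[v] / pK if v in K else 0.0) for v in worlds}   # PE = PI(.|K)

def P(dist, X):
    """Probability of a proposition X (a set of worlds) under dist."""
    return sum(dist[v] for v in X)

# The condition  PE(X) = PI(X|K)  for a sample of propositions X:
for X in [{"v1"}, {"v2"}, {"v1", "v3"}, set(worlds)]:
    assert abs(P(PE, X) - P(PI, X & K) / pK) < 1e-12

# First consequence: PE(A) = 1 implies PI(A) > 0.
A = {"v2", "v3"}
assert abs(P(PE, A) - 1.0) < 1e-12 and P(PI, A) > 0
print("support problem condition holds")
```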

For support problems σ = ⟨Ω, PE, PI, (A, <), u⟩ we denote by D the associated decision problem ⟨(Ω, PI), (A, <), u⟩ with tie break rule.

A support problem represents just the fixed static parameters of the answering situation. We assume that I's decision does not depend on what she believes that E believes. Hence her epistemic state (Ω, PI) represents just her expectations about the actual world. E's task is to provide information that is optimally suited to support I in her decision problem. Hence, E faces a decision problem himself, where his actions are the possible answers. The utilities of the answers depend on how they influence I's final choice. We find two successive decision problems:
We assume that the answering expert E is fully cooperative and wants to maximise I’s finalsuccess; i.e. E’s payoff is identical with I’s (our representation of the Cooperative Principle).

E has to choose his answer in such a way that it optimally contributes towards I’s decision.

We first introduce a game theoretic solution based on (Benz 2006). Afterwards, we provide acharacterisation of relevance scale approaches.

The expected utility of actions may change if the decision maker learns new information. To determine this change of expected utility, we first have to know how learning new information affects the inquirer's beliefs. In probability theory the result of learning a proposition A is modelled by conditional probabilities. Let H be any proposition and A the newly learned proposition. Then, the probability of H given A, written P(H|A), is defined by:

P(H|A) := P(H ∩ A) / P(A).

This is only well-defined if P(A) ≠ 0. In terms of this conditional probability function, the expected utility after learning A is defined by:

EU(a|A) := ∑_{v∈Ω} P(v|A) · u(a, v).
I will choose the action which maximises her expected utility, i.e. she will only choose actions a where EU(a|A) is maximal. In addition, we assume that I always has a preference for one action over the others. We represent this preference by a linear order < on A. For A ⊆ Ω we can therefore denote the inquirer's unique choice by
aA := max{a ∈ A | ∀b ∈ A EUI(b|A) ≤ EUI(a|A)}.
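The unique choice aA can be written down directly. Encoding the linear tie-break order < by list position is our own convention, as are the numbers:

```python
# The inquirer's unique choice a_A: among the actions maximising EU(.|A),
# take the <-maximal one. Position in `actions` encodes < (later = greater).
P = {"v1": 0.5, "v2": 0.5}
actions = ["a", "b"]                      # "b" is <-maximal
u = {("a", "v1"): 1, ("a", "v2"): 0, ("b", "v1"): 0, ("b", "v2"): 1}

def EU(act, A):
    pA = sum(P[v] for v in A)
    return sum(P[v] * u[act, v] for v in A) / pA

def choice(A):
    """a_A: the <-maximal action among those maximising EU(.|A)."""
    best = max(EU(act, A) for act in actions)
    argmax = [act for act in actions if EU(act, A) == best]
    return max(argmax, key=actions.index)   # tie-break by the order <

print(choice({"v1", "v2"}))   # both actions tie, so the tie-break decides
print(choice({"v1"}))         # "a" is uniquely optimal
```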

As we assume that E is fully cooperative, E has the same preferences over outcomes as I. E hasto choose an answer that induces I to choose an action that maximises their common payoff.

We can see E's situation as a separate decision problem where he has to choose between the answers A ⊆ Ω. With aA defined as before, we can calculate the expected utilities of the different answers as follows:

EUE(aA) := ∑_{v∈Ω} PE(v) · u(aA, v).
We add here a further Gricean maxim, the Maxim of Quality. We call an answer admissible if PE(A) = 1. The Maxim of Quality is represented by the assumption that the expert E only gives admissible answers. This means that he believes them to be true. For a support problem σ = ⟨Ω, PE, PI, (A, <), u⟩ we set:

Adm := {A ⊆ Ω | PE(A) = 1}.
Hence, the set of optimal answers for σ is given by:

Op := {A ∈ Adm | EUE(aA) = max_{a∈A} EUE(a)}.
The expert may always answer everything he knows, i.e. he may answer K := {v ∈ Ω | PE(v) > 0}. From condition it trivially follows that EUE(aK) = max_{a∈A} EUE(a), hence K ∈ Op.

Let us call an answer C misleading iff EUE(aC) < max_{a∈A} EUE(a). It follows that Op is the set of all non-misleading answers.
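The sets Adm and Op, and the misleading answers, can be computed by brute force for a small support problem. The scenario below is our own encoding: worlds state which places sell the paper, the expert knows that only the Palace does, and ties are broken in favour of the station:

```python
# Brute-force computation of Adm, Op, and the misleading answers.
# Worlds: "sp" = both sell the paper, "s" = only the station,
# "p" = only the Palace, "n" = neither. The expert knows world "p".
from itertools import combinations

worlds = ["sp", "s", "p", "n"]
PI = {w: 0.25 for w in worlds}                        # inquirer's prior
PE = {w: (1.0 if w == "p" else 0.0) for w in worlds}  # expert's beliefs

actions = ["go_station", "go_palace"]
u = {("go_station", w): (1.0 if w in ("sp", "s") else 0.0) for w in worlds}
u.update({("go_palace", w): (1.0 if w in ("sp", "p") else 0.0) for w in worlds})

def EU_I(a, A):
    pA = sum(PI[w] for w in A)
    return sum(PI[w] * u[a, w] for w in A) / pA

def a_of(A):
    """The inquirer's choice a_A; ties are broken in favour of go_station."""
    return max(actions, key=lambda a: (EU_I(a, A), -actions.index(a)))

def EU_E(a):
    """The expert's expected payoff when the inquirer performs a."""
    return sum(PE[w] * u[a, w] for w in worlds)

subsets = [set(c) for r in range(1, 5) for c in combinations(worlds, r)]
Adm = [A for A in subsets if sum(PE[w] for w in A) == 1.0]   # PE(A) = 1
best = max(EU_E(a_of(A)) for A in Adm)
Op = [sorted(A) for A in Adm if EU_E(a_of(A)) == best]
misleading = [sorted(A) for A in Adm if EU_E(a_of(A)) < best]
print(Op)
print(misleading)
```

An answer like {"p", "s"} is admissible but misleading: it leaves the inquirer indifferent, the tie-break sends her to the station, and in the expert's actual world the station sells nothing. K = {"p"} itself is, as stated above, never misleading.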

Calculating Implicatures from Optimal Answers
From the previous model we can derive a technical definition of what an implicature is. In the standard model (Levinson 1983), implicatures A +> H follow logically from the fact that A has been uttered and the assumption that the speaker adheres to the conversational maxims. In our context, this means that implicatures follow from the fact that the utterance of A implies that it must be an optimal answer. If we assume that the speaker has true knowledge, then the truth of a proposition H follows if the speaker believes it to be true. Implicatures may depend on additional, contextually given information. This information can be represented by a subclass Ŝ of support problems. The following definition applies only to propositions that can be represented as subsets of Ω, i.e. it does not capture situations where H attributes a certain belief to the speaker.

Definition 4.1 (Implicature) Let σ = ⟨Ω, P_E, P_I, (A, <), u⟩ be a given support problem, σ ∈ Ŝ ⊆ S. For A, H ∈ P(Ω), A ∈ Op we define:

A +> H  :iff  ∀σ̂ ∈ [σ]_Ŝ (A ∈ Op_σ̂ ⇒ P_σ̂(H) = 1).

Let O(a) be the set of all worlds where a is an optimal action:

O(a) := {w ∈ Ω | ∀b ∈ A  u(a, w) ≥ u(b, w)}.

Lemma Let Ŝ be the set of all support problems with ∃a ∈ A  P_E(O(a)) = 1. Let σ ∈ Ŝ, A, H ⊆ Ω, A ∈ Op, and A* := {w ∈ A | P_I(w) > 0}. Then:

A +> H  iff  A* ∩ O(a_A) ⊆ H.

Proof of Lemma We first show that A* ∩ O(a_A) ⊆ H implies that A +> H. Let σ̂ ∈ [σ]_Ŝ be such that A ∈ Op_σ̂. We have to show that P_σ̂(H) = 1. By the defining condition of Ŝ and the optimality of A, P_σ̂(O(a_A) ∩ A*) = 1, and it follows that P_σ̂(H) = 1. Next we show that A +> H implies A* ∩ O(a_A) ⊆ H. Suppose A* ∩ O(a_A) ⊄ H, and let w ∈ A* ∩ O(a_A) \ H. Let σ̂ ∈ [σ]_Ŝ be a support problem with P_σ̂(w) > 0 and A ∈ Op_σ̂. From A +> H it follows that P_σ̂(H) = 1, in contradiction to w ∉ H.

Both A* and O(a_A) are known to the inquirer. The condition A* ∩ O(a_A) ⊆ H is equivalent to P_I(O(a_A) ∩ H | A) = 1. Hence, this result explains how the inquirer can calculate implicatures using her local information after learning the answer A.
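The hearer-side calculation can be illustrated with a toy version of the Out-of-Petrol example. The two-world modelling and the numbers are illustrative assumptions: after hearing A, the inquirer checks whether every world she still considers possible in which her induced action a_A is optimal lies inside H.

```python
# Hearer-side implicature test A* ∩ O(a_A) ⊆ H, in a toy version of the
# Out-of-Petrol example. Probabilities and utilities are illustrative.

worlds = ["open", "closed"]
actions = ["go", "stay"]
u = {("go", "open"): 1, ("go", "closed"): -1,
     ("stay", "open"): 0, ("stay", "closed"): 0}
p_I = {"open": 0.8, "closed": 0.2}   # hearer's beliefs after "there is a garage"

def eu_I(action, answer):
    total = sum(p_I[w] for w in answer)
    return sum(p_I[w] / total * u[(action, w)] for w in answer)

def a_A(answer):
    return max(actions, key=lambda a: eu_I(a, answer))

def O(action):                        # worlds where the action is optimal
    return {w for w in worlds
            if all(u[(action, w)] >= u[(b, w)] for b in actions)}

def implicates(A, H):                 # A* ∩ O(a_A) ⊆ H
    A_star = {w for w in A if p_I[w] > 0}
    return A_star & O(a_A(A)) <= H

A = {"open", "closed"}                # the answer itself leaves both worlds open
print(implicates(A, {"open"}))        # True: "the garage is open" is implicated
```

The answer makes going to the garage the best action, and going is optimal only in the "open" world, so the hearer infers that the garage is open.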

Any definition of relevance will define a real valued function R which orders propositions according to their relevance. For the question of how to choose a maximally relevant answer, we can abstract away from other desirable properties of relevance measures and assume that they are general functions R(D, ·) : P(Ω) → R for decision problems D = ⟨(Ω, P), A, u⟩. Given such a relevance measure, we can define the set of maximally relevant answers. This set is restricted to the propositions which the speaker believes to be true. Let σ = ⟨Ω, P_E, P_I, (A, <), u⟩ be any support problem. Then, the set of maximally relevant answers MR is given by

MR := {A ∈ Adm | ∀B ∈ Adm  R(D, B) ≤ R(D, A)}.
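For any concrete R, the set MR is just the set of R-maximisers within the admissible answers. A minimal sketch follows; the relevance measure used here (plain informativity: an answer is more relevant the fewer worlds it leaves open) is only an illustrative stand-in, not the paper's explication of relevance.

```python
# MR for an arbitrary relevance measure R. The measure below is only an
# illustrative stand-in for R(D, ·).

p_E = {"v1": 0.0, "v2": 0.5, "v3": 0.5}
candidates = [{"v2", "v3"}, {"v1", "v2", "v3"}, {"v1", "v2"}]

def R(A):
    return -len(A)                      # toy relevance: smaller = more relevant

adm = [A for A in candidates if sum(p_E[w] for w in A) == 1.0]  # Quality
best = max(R(A) for A in adm)
MR = [A for A in adm if R(A) == best]
print(MR == [{"v2", "v3"}])             # True: the most informative admissible answer
```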

I call a theory about relevance implicatures a relevance scale approach iff it defines or postulates a linear pre-order ≺ on propositions such that an utterance of proposition A implicates a proposition H iff A ≺ ¬H; i.e.:

A +> H  iff  A ≺ ¬H.

The reasoning behind this kind of approach is roughly as follows: (1) the speaker said A; (2) ¬H would have been more relevant, but the speaker didn't say that ¬H; (3) as the speaker should say as much as he can as long as it is relevant, it follows that ¬H must be false; (4) hence H.

The pre-order ≺ may again depend on a given decision problem. A representation by a pre-order is equivalent to a representation of preferences by a real valued function. Furthermore, implicatures may depend on commonly known background assumptions. The following definition makes these dependencies explicit. In addition, it adds the constraint that a relevance scale implicature H can only arise if the speaker is known to know whether H.

Definition 4.3 (Relevance Scale Implicature) Let Ŝ be a subset of S and σ ∈ Ŝ. Let R(D, ·) : P(Ω) → R be a given relevance measure. Then, it holds in σ that A +> H iff

∀σ̂ ∈ [σ]_Ŝ : (P_σ̂E(H) = 1 ∨ P_σ̂E(¬H) = 1) & R(D, A) < R(D, ¬H).
In the following sections we discuss relevance scale approaches. We will show that there are severe principled limitations to this approach. First, we will show that maximising relevance must necessarily lead to misleading answers even under extremely favourable conditions. Secondly, we will show that relevance scale approaches necessarily over-predict implicatures. These results already show that the maximisation of relevance and the derived explanation of implicatures cannot be principles of conversation which speaker and hearer are presumed to follow. As a last result, we will introduce a relevance measure that makes the relevance based reasoning in the Out-of-Petrol example valid but cannot qualify as an explication of the relevance principle.

In this section we show that maximisation of relevance leads necessarily to misleading answers.

Technically this will be achieved by comparing the set of maximally relevant answers with the set of optimal answers defined by the optimal answer approach. We know that the set of optimal answers is identical to the set of all non-misleading answers. Hence, if we know that the intersection of maximally relevant and optimal answers is empty for a given support problem, then, in this case, all maximally relevant answers must be misleading. In the previous section, we introduced relevance measures as functions R(D, ·) : P(Ω) → R. In order to achieve our result, we have to assume that the relevance measures satisfy some additional properties, which we motivated earlier.
Definition 5.1 Let R : P(Ω) → R and A ⊆ B ⊆ Ω. We call R a monotone relevance measure iff

1. ∀a ∈ A  EU(a|A) = EU(a|B) ⇒ R(A) = R(B) (Italian newspaper);
2. ∀a ∈ A (EU(a|B) = max_b EU(b|B) ⇒ EU(a|A) < max_b EU(b|A)) ⇒ R(B) < R(A);
3. R(B) ≤ R(A).
The following lemma shows that there cannot be a relevance measure that satisfies these properties and avoids misleading answers.

Lemma 5.2 For each support problem σ ∈ S let R(D, ·) : P(Ω) → R be a monotone relevance measure. Then there are support problems σ for which MR ∩ Op = ∅, i.e. for which all maximally relevant answers are misleading.

Proof: Let us assume that there are support problems σ1, σ2 with D_σ1 = D_σ2 and sets A ⊆ C ⊆ B with ∀a ∈ A  EU(a|A) = EU(a|B) and

(Σ1) A, B, C ∈ Adm_σ1, A, B ∈ Op_σ1, and C ∉ Op_σ1;
(Σ2) B, C ∈ Adm_σ2, A ∉ Adm_σ2, B ∉ Op_σ2, and C ∈ Op_σ2.

Suppose now that for all σ ∈ S : MR ∩ Op ≠ ∅. Then, if (Σ1), it follows with monotonicity conditions 1. and 3. that R(D, C) ≤ R(D, B). But if (Σ2), it follows with condition 2. that R(D, C) > R(D, B). Hence, the lemma follows if we can show that there are support problems σ1, σ2 such that (Σ1) and (Σ2) hold.

We first define the shared decision problem D = ⟨Ω, A, P_I, u⟩ of σ1, σ2. Let Ω = {v1, v2, v3, v4} with P_I(v1) = P_I(v2) = 1/4. We set A := {v1, v3}, B := Ω, and C := {v1, v3, v4}. Let σ1 be the support problem with P_E(X) := P_I(X|A), and σ2 the support problem with P_E(X) := P_I(X|C). Then σ1 has property (Σ1), and σ2 property (Σ2). This completes the proof.

The last section showed that relevance scale approaches cannot successfully account for the choice of answers. This does not necessarily entail that they cannot successfully account for relevance implicatures. The following lemma shows that relevance scale approaches have principled problems avoiding certain implicatures. If a proposition A1 is as relevant as a second proposition A2, then whatever A2 implicates is also implicated by A1. This massively over-generates implicatures.

Let Ŝ be the set of support problems σ where for each of the propositions X ∈ {A1, A2, H1, H2} it is commonly known that E knows whether X, i.e. where it is commonly known that P_σE(X) = 1 ∨ P_σE(¬X) = 1. There exists no relevance measure R(D, ·) : P(Ω) → R such that the following set of implicatures is satisfied in any σ ∈ Ŝ:
3 For this argument it suffices that A, B, C ∈ Adm. The lemma can be proven with the following slightly weaker conditions (IN1)–(IN3) if we take into account all conditions of (Σ1). Let K := {v | P_I(v) > 0} and K ⊆ B. Then let (IN1) be the Italian newspaper condition from Definition 5.1; (IN2): if EU(a_K|K) > EU(a_B|K), then R(D, B) < R(D, K); (IN3): ∀C (K ⊆ C ⊆ Ω ⇒ R(D_σ, C) ≤ R(D_σ, K)).
For σ ∈ Ŝ and a relevance measure R we write A ≺ B iff R(D, A) < R(D, B).

Hence, ≺ satisfies the conditions of Definition 4.3. We show that the above set of implicatures cannot be satisfied:

6. not A1 +> H2 implies A1 ⊀ ¬H2, in contradiction to the above.
This and the last result already show that the relevance principle cannot have the status of a conversational maxim. In the standard theory it is assumed that implicatures are derived from the assumption that the speaker adheres to the maxims. The first negative result shows that the relevance principle is not responsible for the choice of answers; hence the hearer cannot presume that the speaker is adhering to it. The second negative result shows that it necessarily produces unintended implicatures. Both results contradict what is commonly seen as a defining property of maxims, namely that they are the basic principles which govern the speaker's language use and thereby generate implicatures.

In the introduction, we considered the Out-of-Petrol example and two opposing derivations of implicatures. The validity of the relevance-based inference depends on G ∧ H being more relevant than G. We show that there is a reliable explication of relevance that makes this inference valid. In the last section we saw that the linearity of a relevance order implies that for every two equally relevant propositions A1, A2, whatever A1 implicates is also implicated by A2.

We can avoid this problem if we construct a new relevance scale for each answer. We will do this using a variant of the sample value of information (Pratt, Raiffa and Schlaifer 1995). We have to define it relative to the addressee's posterior probability, i.e. relative to the probability after learning the answer.

Lemma 7.1 Let σ = ⟨Ω, P_E, P_I, (A, <), u⟩ be a given support problem. Let O(a) be defined as above, A, H ⊆ Ω, and let Ŝ be the set of support problems where ∃a ∈ A  P_E(O(a)) = 1. The sample value of the information K posterior to learning A, SVI(K|A), is defined by:

SVI(K|A) := EU_I(a_{A∩K} | A ∩ K) − EU_I(a_A | A ∩ K).

If ∀K ⊆ ¬H  SVI(K|A) > 0, then A +> H.

Proof: Let σ̂ ∈ [σ]_Ŝ be any support problem with A ∈ Op_σ̂ and D_σ̂ = D_σ. We have to show that P_σ̂(H) = 1. As D_σ̂ = D_σ, it follows that ∀K ⊆ ¬H  SVI(K|A) > 0 holds also for σ̂. By the definition of Ŝ, P_σ̂(O(a_A)) = 1. By assumption, it holds for all v ∈ ¬H that if P_σ̂(v) > 0, then SVI({v}|A) > 0. But 0 < EU_I(a_{v} | {v}) − EU_I(a_A | {v}) = u(a_{v}, v) − u(a_A, v) implies v ∉ O(a_A). Therefore, P_σ̂(v) = 0 for all v ∈ ¬H. It follows that P_σ̂(¬H) = 0, i.e. P_σ̂(H) = 1.
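The posterior sample value of information and the sufficient condition of the lemma can be sketched on a toy version of the Out-of-Petrol example. All numbers are illustrative assumptions, and K is restricted to nonempty sets overlapping A, since the conditional expectation is otherwise undefined.

```python
# SVI(K|A) = EU_I(a_{A∩K} | A∩K) − EU_I(a_A | A∩K), and the sufficient
# condition "SVI(K|A) > 0 for every K ⊆ ¬H". Toy numbers as assumptions.
from itertools import combinations

worlds = ["open", "closed"]
actions = ["go", "stay"]
u = {("go", "open"): 1, ("go", "closed"): -1,
     ("stay", "open"): 0, ("stay", "closed"): 0}
p_I = {"open": 0.8, "closed": 0.2}

def eu_I(action, answer):
    total = sum(p_I[w] for w in answer)
    return sum(p_I[w] / total * u[(action, w)] for w in answer)

def a_A(answer):
    return max(actions, key=lambda a: eu_I(a, answer))

def svi(K, A):                       # posterior sample value of the information K
    AK = A & K
    return eu_I(a_A(AK), AK) - eu_I(a_A(A), AK)

def implicated(A, H):                # sufficient condition of the lemma
    not_H = set(worlds) - H
    Ks = [set(c) for n in range(1, len(not_H) + 1)
          for c in combinations(sorted(not_H), n) if A & set(c)]
    return all(svi(K, A) > 0 for K in Ks)

A = {"open", "closed"}
print(svi({"closed"}, A))            # 1.0: learning "closed" would switch the action
print(implicated(A, {"open"}))       # True: A implicates "the garage is open"
```

Learning "closed" would make the inquirer switch from going to staying, so every part of ¬H carries positive value of information, and the answer implicates that the garage is open.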

By definition SVI(A|A) = 0, hence the answer A itself is always the most irrelevant proposition. The relative relevance scales SVI( · |A) cannot be combined into a linear order on P(Ω); hence they do not allow us to compare the absolute relevance of two arbitrary propositions. This violates two essential assumptions about Grice' notion of relevance.

The question whether a proposition H is relevant or not is meaningful only after an answer A is known. H's relevance follows logically from the fact that A is optimal; hence, it is a consequence of the fact that the speaker adheres to the conversational maxims represented in the optimal answer model. But this means that it is itself implicated information. Hence, the posterior sample value of information cannot be used for defining a conversational relevance maxim. This is the third negative result.

Several concepts in this paper were discovered independently and at about the same time by Michael Franke (p.c. and unpublished manuscripts): he arrived at a definition of implicature for the Best Answer approach similar to Definition 4.1, calling it pragmatic enhancement; he proposed the posterior sample value of information, Lemma 7.1, in order to explain some older counter-examples to van Rooij–Schulz exhaustification (Schulz and Rooij to appear). The section on misleading answers is mainly motivated by Franke's observation that misleadingness of answers can be defined from the speaker's and from the hearer's perspective. If the speaker's probabilities are derived from the hearer's by a Bayesian update, as in the present paper, then the speaker's and hearer's misleadingness coincide.

Benz, A.: 2006, Utility and relevance of answers, in A. Benz, G. Jäger and R. van Rooij (eds), Game Theory and Pragmatics, Palgrave Macmillan, Basingstoke, pp. 195–219.

Benz, A. and Rooij, R. v.: to appear, Optimal assertions and what they implicate: A uniform game theoretic approach, Topoi.
Grice, H. P.: 1989, Studies in the Way of Words, Harvard University Press, Cambridge MA.

Levinson, S.: 1983, Pragmatics, Cambridge University Press, Cambridge.

Merin, A.: 1999, Information, relevance, and social decisionmaking: Some principles and results of decision-theoretic semantics, in L. Moss, J. Ginzburg and M. de Rijke (eds), Logic, Language, and Computation, Vol. 2, CSLI Publications, Stanford.

Parikh, P.: 1992, A game-theoretic account of implicature, in Y. Moses (ed.), Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann, Monterey, CA.

Parikh, P.: 2001, The Use of Language, CSLI Publications, Stanford CA.

Parikh, R.: 1994, Vagueness and utility: the semantics of common nouns, Linguistics and Philosophy.
Pratt, J., Raiffa, H. and Schlaifer, R.: 1995, Introduction to Statistical Decision Theory, The MIT Press, Cambridge MA.
Rooij, R. v.: 2004, Utility of mention-some questions, Research on Language and Computation.
Schulz, K. and Rooij, R. v.: to appear, Pragmatic meaning and non–monotonic reasoning: The
case of exhaustive interpretation, Linguistics and Philosophy .

Source: http://www.anton-benz.de/paper/benz-sub06.pdf
