An Awareness Epistemic Framework for
Belief, Argumentation and Their Dynamics
Abstract
The notion of argumentation and the one of belief stand in a problematic relation to one another. On the one hand, argumentation is crucial for belief formation: as the outcome of a process of arguing, an agent might come to (justifiably) believe that something is the case. On the other hand, beliefs are an input for argument evaluation: arguments with believed premisses are to be considered by the agent as strictly stronger than arguments whose premisses are not believed. An awareness epistemic logic that captures qualified versions of both principles was recently proposed in the literature. This paper extends that logic in three different directions. First, we try to improve its conceptual grounds, by laying out its philosophical foundations, critically discussing some of its design choices and exploring further possibilities. Second, we provide a (heretofore missing) completeness theorem for the basic fragment of the logic. Third, we study, using techniques from dynamic epistemic logic, how different forms of information change can be captured in the framework.
1 Introduction
Belief and argumentation are two central dimensions of human cognitive architecture. They have received attention from antiquity to the present day, and from a broad range of disciplines. It is then unsurprising that formal researchers have undertaken the task of modelling both phenomena. Regarding belief, there is a substantial range of options for capturing some of its formal aspects [23]. These models usually capture what kind of things are believed (typically, propositions or sentences); who believes them (intelligent agents); and, only sometimes, how strong or safe these beliefs are (for instance, in probabilistic models of belief or in plausibility structures [8]). However, most of them fail to capture why agents believe certain things. This lack motivates the recent efforts within the epistemic logic community to capture the missing justification component. This enterprise has been approached with a variety of methods: justification logic [4, 5, 2, 3], evidence logics based on neighbourhood semantics [13, 12] and their further topological development [6], amongst others. Yet another natural candidate for modelling justification consists in using conceptual and technical tools coming from argumentation theory (as done, e.g., in [24, 37, 29, 16]).
As for argumentation theory, it is a well-established, interdisciplinary field of research [20]. Over the last few decades, formal argumentation has gained more and more attention within artificial intelligence, and its general advantages have been highlighted several times [11, 33]. Within formal approaches to argumentation, it is common to distinguish between abstract approaches (those that treat arguments as primitive, atomic entities) and structured approaches (those that explicitly account for the structure of arguments). For expository purposes, we just mention Dung's popular approach to abstract argumentation [19], based on so-called abstract argumentation frameworks, and the ASPIC family of formalisms for structured argumentation, e.g., ASPIC+ [31, 32], which will be the main argumentative resources used in this paper.
Recently, some works have taken the first steps to explore and exploit the relations between the two traditions (epistemic logic and formal argumentation). These can be divided into two groups. On the one hand, there are works using epistemic logic tools to reason about argumentation frameworks [36, 35, 34]. On the other hand, there are works using argumentation tools to provide an (argumentatively inspired) notion of justified belief (the already mentioned [24, 37, 29, 16]). The current paper belongs to the latter group, and it follows the ideas of [16], which, contrary to [24, 37, 29], and in line with more standard ideas in structured argumentation, models arguments as syntactic entities.
We start by pointing out that the informal relation between argumentation and belief is itself problematic. Arguably, there is a tension between two intuitive principles governing belief formation and argument evaluation. These principles are:
- Beliefs are an input for argument evaluation, meaning that arguments with believed premisses are preferable to those with contingent or even rejected premisses. (We use the term contingent in its doxastic sense, that is, a sentence is said to be contingent iff it is neither believed nor believed to be false.)
- Argumentation is an input for belief formation, meaning that rational agents should believe sentences that are grounded in good arguments.
The mentioned tension arises when one tries to embrace both principles without any restriction, leading to an infinite regress. A very similar problem can be found at the root of a long-standing debate about the structure of epistemic justification within contemporary epistemology. Foundationalist solutions to this tension, to which we adhere here, consist in distinguishing between basic (non-inferred) beliefs and non-basic (inferred) beliefs, where the latter inherit their justification from the former [27]. This implies accepting qualified versions of both principles, but giving some sort of priority to the first (beliefs as an input for argument evaluation) over the second (argumentation as an input for belief formation). Curiously enough, an analogous distinction can be found as one of the bases of the recent argumentative theory of reason advocated by Mercier and Sperber [30]. In this context, basic beliefs are called intuitive beliefs while inferred beliefs are called reflective beliefs (see [38] for a detailed exposition of the distinction).
In the rest of this paper, we follow up the work made in [16], by extending it in three different directions. First, and after recalling the logic introduced there, whose language allows talking about basic beliefs and structured arguments, we provide its sound and complete axiomatisation (Section 2). We then explain how to use this logic for reasoning about explicit basic beliefs and argument-based belief, discussing some of the design choices, as well as depicting some alternatives (Section 3). Finally, we extend the basic fragment of the logic so as to capture different kinds of informational dynamics, illustrating their effects on both types of beliefs (Section 4).
2 An awareness logic for belief and argumentation
Let us start by recalling the logic introduced in [16]. We follow the traditional order of presentation: syntax, semantics, and proof theory. We assume a countable set of propositional letters as fixed from now on. The language is defined as the pair of formulas and arguments which are respectively generated by the following grammars:
The rest of the Boolean operators () and constants (), as well as the dual of (noted ), are defined as usual. Arguments of have the following informal readings. is an atomic argument. Note that this kind of argument is rather strange in real-life examples, since it has one sole premiss and conclusion, and there is no proper inference step. Mathematically, it can be understood as a one-line proof from to . As for (resp. ), it represents an argument claiming that follows deductively (resp. defeasibly) from the conclusions of arguments . As an example of a complex argument, consider , which informally reads “This has wings, because it is a bird and all birds have wings. Moreover, since it has wings, it presumably (defeasibly) flies”.
Regarding formulas, elements of represent factual, atomic propositions. means that the agent implicitly (ideally) believes that . reads “the agent is aware of ”. reads “the conclusion of is ”. means that does not contain defeasible inference steps. means that undercuts , that is, attacks some defeasible inference link of . Finally, means that has been constructed properly, that is, all its deductive inference steps are valid and all its defeasible inference steps are accepted by the agent.
We use to denote the set of all finite sequences over . We denote an arbitrary sequence of +1 elements over as . Sequences of formulas are useful to represent inference steps in the meta-language. Although strongly connected from a conceptual point of view, the sequence is not the same object as, for instance, the object-language argument . We use as a shorthand for . We can see as the simplest argument using . As an example, consider the rule : we have , but note that there are infinitely many other arguments using , for instance .
Let us define the following meta-syntactic functions for analysing an argument’s structure, taken from ASPIC+ [31]:
returns the premisses of and it is defined as follows: , where .
returns the conclusion of and it is defined as follows and where .
returns the subarguments of and it is defined as follows: and where .
returns the top rule of , i.e. the last rule applied in the formation of . It is defined as follows: is left undefined, .
returns the set of defeasible rules of and it is defined as , and .
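To make these meta-syntactic functions concrete, here is a minimal Python sketch, offered as an illustration only and not as part of the formal framework: arguments are encoded either as atomic terms or as inference steps, with a flag marking defeasible versus deductive steps. All names (`Atomic`, `Step`, `prem`, `conc`, `sub`, `defrules`) are our own illustrative choices, and formulas are represented as plain strings.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple, Union

@dataclass(frozen=True)
class Atomic:
    formula: str              # an atomic argument: one premiss = conclusion

@dataclass(frozen=True)
class Step:
    subs: Tuple['Arg', ...]   # immediate subarguments
    conclusion: str
    defeasible: bool          # True for a defeasible step, False for deductive

Arg = Union[Atomic, Step]

def prem(a: Arg) -> FrozenSet[str]:
    """Premisses: the formulas of the atomic subarguments."""
    if isinstance(a, Atomic):
        return frozenset({a.formula})
    return frozenset().union(*(prem(s) for s in a.subs))

def conc(a: Arg) -> str:
    """Conclusion of the argument."""
    return a.formula if isinstance(a, Atomic) else a.conclusion

def sub(a: Arg) -> FrozenSet[Arg]:
    """All subarguments, including the argument itself."""
    if isinstance(a, Atomic):
        return frozenset({a})
    return frozenset({a}).union(*(sub(s) for s in a.subs))

def defrules(a: Arg) -> FrozenSet[Tuple[Tuple[str, ...], str]]:
    """Defeasible rules used, as (antecedents, consequent) pairs."""
    if isinstance(a, Atomic):
        return frozenset()
    own = {(tuple(conc(s) for s in a.subs), a.conclusion)} if a.defeasible else set()
    return frozenset(own).union(*(defrules(s) for s in a.subs))
```

For instance, the bird example above becomes `Step((Step((Atomic("bird"), Atomic("all birds have wings")), "has wings", False),), "flies", True)`, whose only defeasible rule is the one from "has wings" to "flies".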
Let us also define semantic propositional negations, for any : abbreviates .
Let us now move to semantics. A model for is a tuple where:
•  is a set of possible worlds.
•  and is the set of worlds that are doxastically indistinguishable for the agent.
•  is the set of available arguments, also called the awareness set of the agent.
•  is a set of accepted defeasible rules. Moreover, for every  we require that:
  –  (defeasible rules are consistent), where  denotes the consequence relation of classical propositional logic, and
  –  (defeasible rules are not deductively valid).
•  is a (possibly partial) naming function for rules, where  informally means “the rule  is applicable”.
•  is an atomic valuation, i.e. a function .
Interpretation. In a given model , represents the set of arguments that the agent entertains or is aware of. Whenever , we assume that (i) the agent can determine her doxastic attitude toward the premisses of through non-inferential methods (for instance, through observations), and (ii) she knows the structure of (either because has been communicated to her, or because she has gone through the cognitive process of building ). Besides this, there is no semantic intuition underlying , so the agent can be perfectly aware of rather silly arguments, such as , without accepting them in any sense. Moreover, rules in the set are interpreted as rules whose inference strength lies in their content, rather than as purely formal schemas (as deductive rules are). As an example, consider the rule “Peter’s bike is in the bike parking area, therefore he should be in his office”. The term accepted means that the agent considers them applicable if there are no good reasons against doing so. Note that does not imply (informally corresponding to the intuition that an agent can be aware of a defeasible argument without accepting its rule). There are further restrictions that could arguably be adopted, but that we leave out for the sake of simplicity. For instance, we could require to be closed under subarguments, or that for any accepted defeasible rule, the agent is aware of at least one argument using it.
Let be a model for . The set of well-shaped arguments (depending on in ) is the smallest set fulfilling the following conditions:
1.  for any .
2.  iff both  for every  and .
3.  iff both  for every  and .
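A possible computational reading of the three conditions, under the assumption that arguments are encoded as nested tuples (`("atom", phi)` for atomic arguments, `(kind, subs, conclusion)` with kind `"->"` for deductive and `"=>"` for defeasible steps) and that classical propositional consequence is supplied by an external oracle (`entails` below is a hypothetical stand-in), is the following sketch:

```python
def well_shaped(arg, accepted, entails):
    """An argument is well-shaped iff all its deductive steps are classically
    valid (checked by the `entails` oracle) and all its defeasible steps use
    rules from the agent's accepted set."""
    if arg[0] == "atom":
        return True                      # atomic arguments are well-shaped
    kind, subs, conclusion = arg
    if not all(well_shaped(s, accepted, entails) for s in subs):
        return False
    antecedents = tuple(conc(s) for s in subs)
    if kind == "->":                     # deductive step: must be valid
        return entails(antecedents, conclusion)
    return (antecedents, conclusion) in accepted   # defeasible step

def conc(arg):
    return arg[1] if arg[0] == "atom" else arg[2]
```

Here `accepted` plays the role of the model component of accepted defeasible rules, so well-shapedness is relative to the model, as in the definition above.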
We drop the superscript from whenever there is no danger of confusion.
Let be a pointed model for , that is, is a model and . The truth relation, relating pointed models and formulas, is given by the following clauses. (Note that we do not need to consider as a primitive operator, since it could be defined through a (simpler) operator that captures the meaning of . We make this choice for the sake of succinctness, as well as for studying the axiomatic behaviour of .)
 iff for all :  implies .
 iff .
 iff .
 iff .
 iff  and .
 iff .
A formula is said to be valid (noted ) iff it is true at all pointed models. We use to denote the truth-set of , i.e., the set of worlds of where is true, and to denote the class of all models.
We now present a sound and complete axiomatisation of w.r.t. , a topic that was left out in [16], and that constitutes one of the main technical contributions of the current paper. Although our models provide a compact representation of the components needed for reasoning about basic and argument-based beliefs in a single-agent context, they are rather non-standard from a technical point of view. Besides the strongly syntactic character of some of their elements, their modal components are not defined as usual; therefore, the definition of the canonical model cannot be extrapolated straightforwardly. Nevertheless, we can provide an indirect completeness proof (see Appendix A1 for details).
Theorem 1.
The axiom system , defined in Table 1, is sound and complete for w.r.t. .
Modal core axioms
(Ax0) All propositional tautologies
(Ax1)  axioms for 
Introspection axioms
(Ax2) 
(Ax3) 
(Ax4) 
(Ax5) 
(Ax6) 
(Ax7) 
Axioms for syntactic operators
(Ax8)  whenever 
(Ax9)  whenever 
(Ax10)  whenever 
(Ax11)  whenever 
Wellshapedness axioms
(Ax12) 
(Ax13) 
(Ax14)  whenever 
(Ax15)  whenever 
(Ax16) 
(Ax17) 
(Ax18) 
Undercut axioms
(Ax19)  whenever 
(Ax20) 
(Ax21) 
(Ax22)  whenever  for some 
(Ax23) 
(Ax24) 
Rules
(MP) From  and  infer 
(Nec) From , infer 
3 Basic beliefs and argument-based beliefs
The logic introduced above can be used to study a rich repertoire of doxastic attitudes. We start by discussing basic beliefs, informally representing those that are not grounded in inferential processes. As mentioned, they can also be understood in terms of intuitive beliefs, i.e., those that the agent extracts from a sort of database, seen by her as completely trustworthy [38]. As usual in awareness epistemic logic, we have two versions of this notion. On the one hand, we have the implicit (ideal) version of basic beliefs, modelled through , which suffers from the extensively discussed problem of logical omniscience (see e.g. [22, Chapter 9]). On the other hand, we have its explicit counterpart, for which we have chosen . Note that, as in other logics for implicit and explicit belief, it holds that . Moreover, under the current semantics, is equivalent to a schema that resembles another usual option for modelling explicit beliefs (e.g. [40]): .
Besides basic beliefs, we can also capture in a sort of deductive-explicit belief. Deductive-explicit beliefs are those rooted in a deductive argument s.t. the agent has a basic belief that all its premisses are true. Formally, and following [7], we define doxastic argument acceptance as
,
and deductive-explicit belief as
.
Note that and . The first validity shows that deductive-explicit beliefs are a subset of basic-implicit beliefs. The second one shows that basic-explicit beliefs are an extreme case of deductive-explicit beliefs (those that are rooted in the trivial deduction that goes from to , i.e., in the atomic argument ).
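As an illustration of the two definitions just given, the following Python sketch treats the agent's basic beliefs and the structural properties of arguments as given oracles; all function names are our own illustrative choices, not notation from the paper.

```python
def accepts(argument, believes, premisses, is_deductive, is_well_shaped):
    """Doxastic argument acceptance (sketch): the agent accepts a deductive,
    well-shaped argument when she has a basic belief in each of its premisses."""
    return (is_deductive(argument)
            and is_well_shaped(argument)
            and all(believes(p) for p in premisses(argument)))

def deductive_explicit_belief(phi, available_args, conclusion_of, accepts_arg):
    """phi is deductive-explicitly believed iff some available (i.e. aware-of)
    argument concluding phi is accepted by the agent."""
    return any(conclusion_of(a) == phi and accepts_arg(a) for a in available_args)
```

Under this reading, a basic-explicit belief is the limiting case in which the accepted argument is the trivial atomic one from the believed sentence to itself.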
Up to now, we have not gone far from the kind of attitudes that are usually discussed in the awareness logic literature (e.g. in [21, 14, 25, 26]). We now take a small detour through argumentation theory in order to define argument-based beliefs. Roughly speaking, argument-based beliefs are grounded in arguments that may involve non-deductive steps. They can be understood, at least to some extent, in terms of the reflective beliefs of [38]. Recall that we are after formalising the second principle presented in the introduction: the beliefs of a rational agent should be grounded in good arguments. But what does good mean in this context? Following [10], the very notion of argument strength can be analysed along three different layers or dimensions: the support dimension (how strong the reason given by an argument to accept its conclusion is), the dialectic dimension (how arguments attack and defeat each other), and the evaluative dimension (how the resulting conflicts are to be resolved).
Hence, the first step is to set up a notion of argument strength regarding the support dimension. Formally, we seek to define a preference relation among the arguments of that takes into account the first principle of the introduction: arguments with believed premisses are to be preferred over arguments with premisses that are not believed. In [16], we showed how to do this by splitting all arguments into three preference classes based on the agent's basic doxastic attitude toward the premisses of the arguments. Here, for the sake of brevity, we take a much simpler view and directly exclude arguments whose premisses are not believed. Both options make the second principle of the introduction dependent on the first, since in the process of grounding arguments in beliefs, and these in turn in new arguments, we arrive at good arguments that are good just because the agent has a basic belief that all their premisses hold. However, inference links must still play a role when determining the relative strength of two arguments. The simplest principle that can be adopted in this regard is captured by the following binary relation among arguments of : . This relation informally corresponds to the idea that, ceteris paribus, deductive arguments are to be preferred to non-deductive ones.
Regarding the dialectic dimension of argument strength, we capture two forms of argumentative defeat, namely, undercutting (attacking a defeasible inference step of any subargument) and successful rebuttal (attacking the conclusion of a less or equally preferred subargument). Formally,
• Undercutting a subargument: .
• Unrestricted successful rebuttal: .
• Defeat: .
As discussed in the formal argumentation literature, there is a more restrictive alternative for the notion of rebuttal, requiring the top rule of the attacked subargument to be defeasible. (See [42] for a discussion of the two possible design choices. Note moreover that the other customary type of attack, i.e. undermining (attacking a premiss), makes sense only when non-believed premisses are taken into account.)
• Restricted successful rebuttal: .
Argumentation frameworks and their semantics [19] are the most studied tool for capturing the evaluative dimension of argument strength. We now explain how to incorporate them into the current approach. Let be a pointed model for ; we define its associated argumentation framework as , where and is given by iff . We stress the fact that in the domain of our frameworks (i.e., in ), basic beliefs act as a filter (in the clause ), instantiating a qualified, unproblematic version of the first principle of the introduction: basic beliefs are an input for argument evaluation. Given a set of possibly conflicting arguments (an argumentation framework), we need a mechanism for the agent to decide which of the arguments are to be selected (an argumentation semantics). We say that a set is conflict-free iff there are no s.t. . Moreover, we say that defends iff for every , implies that there is s.t. . We say that is a complete extension iff it is conflict-free and it contains precisely the elements of that it defends. We say that is the grounded extension of iff it is the smallest (w.r.t. set inclusion) complete extension. We use to denote the grounded extension of . As is well known, the grounded extension of an argumentation framework always exists and is moreover unique [19]. The unfamiliar reader is referred to [9] for an extensive discussion of argumentation semantics.
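For finite frameworks, the grounded extension can be computed as the least fixed point of the defense (characteristic) function, iterated from the empty set. A minimal Python sketch, with `args` any finite set and `attacks` a set of ordered pairs `(a, b)` meaning that `a` attacks `b`, might look as follows:

```python
def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function of a finite
    abstract argumentation framework (Dung-style grounded semantics)."""
    attackers = {b: {a for (a, x) in attacks if x == b} for b in args}
    extension = set()
    while True:
        # an argument is acceptable w.r.t. the current extension if every
        # one of its attackers is attacked by some argument already in it
        acceptable = {
            b for b in args
            if all(any((c, a) in attacks for c in extension) for a in attackers[b])
        }
        if acceptable == extension:
            return extension
        extension = acceptable
```

The first iteration collects the unattacked arguments, the second adds the arguments they defend, and so on; since the characteristic function is monotone and the framework is finite, the loop terminates at the least fixed point.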
Finally, we use the grounded extension to define the argument-based beliefs of the agent. First, let us extend to by adding a new kind of formulas where and . means that the agent believes that based on argument . We interpret the new language in the same class of models, by adding the truth clause:
 iff 
Note that and .
We close this section by analysing our notion of argument-based belief in light of [17]'s rationality postulates. In a nutshell, if no restrictions are imposed, our agent behaves according to a kind of minimal rationality (i.e. she does not explicitly believe inconsistencies). If, however, we add some idealising assumptions, then she satisfies all of [17]'s postulates.
Proposition 1.
Let be a pointed model for , where . Let be its associated argumentation framework, then:
•  satisfies direct consistency, that is, there are no  and  s.t. .
• If restricted rebuttal is assumed and , then  satisfies direct consistency; indirect consistency (that is, ); sub-argument closure (that is,  implies ); and strict closure (that is,  implies ).
Proof (sketched).
For the first item, we suppose the contrary, that is, that there are arguments s.t. , and is propositionally equivalent to the negation of . Then, we continue by cases on the shape of and (each of them can be an atomic argument, an argument whose last inference step is deductive, or an argument whose last inference step is defeasible). Of the nine resulting cases, three are redundant. In each of the six remaining cases, it is easy to arrive at or (which contradicts the assumption that both arguments are in the grounded extension, because it is conflict-free).
For the second item, it suffices to show that under both assumptions (adopting the definition of restricted rebuttal and assuming ), we are just working with an instance of well-defined ASPIC+ frameworks (one constructed over a knowledge base where the set of ordinary premisses is empty), which is guaranteed to satisfy all [17]’s rationality postulates (see [32, Section 3.3] for details).∎
4 Dynamics of information
The current framework can shed some light on the relations between dynamics of information, argumentation and doxastic attitudes. We can distinguish several kinds of actions that have different potential effects on basic and argument-based beliefs. The framework naturally allows for the use of tools imported from dynamic epistemic logic (DEL) [18]. In particular, we can describe these actions using dynamic modalities, for which complete axiomatisations can then be provided by finding a full list of reduction axioms [28, 18, 41]. In order to do so, one first needs to show that the rule of replacement of proved equivalents is sound (it preserves validity) in the extended language (see [28] for details). Although this is not the case in , as happens with other languages containing awareness operators [21, 25], we can restrict the domain of application of the rule, and it still does the job for axiomatising certain dynamic extensions. More precisely, we will work with the rule:
(RE) From , infer , with  the result of replacing one or more non- occurrences of  in  by . (A non- occurrence of  in  is just an occurrence of  in  where  is not inside the scope of . Note that we assume that  is inside the scope of  in the formula .)

Semantically, this amounts to showing that each of the actions we are about to discuss is well defined, in the sense that whenever we compute  (the result of executing action  in model ), we stay in the intended class of models. When this does not happen, as is the case with many DEL actions (e.g. public announcements [18]), one needs to find a set of preconditions for the action. Preconditions work as sufficient conditions for the action to be “safe”, i.e., to secure that, after executing it, we stay in the intended class of models.
Let us start by defining four different actions. Let be given, let be an -model, let , let , and let . We define:
• The act of acquiring argument  (resp. forgetting argument ) produces the model , where  (resp. , where ).
• The act of accepting the defeasible rule  produces the model , where .
• The act of publicly announcing  produces the model , where ; ; and  for every atom .
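A schematic rendering of the four actions, assuming a model is represented as a plain Python dictionary whose keys mirror the components of the tuple of Section 2 (worlds `W`, doxastic set `R`, awareness set `A`, accepted defeasible rules `D`, and a valuation `V` mapping atoms to their truth-sets). This encoding is our own, for illustration only.

```python
def acquire(model, argument):
    """Add an argument to the awareness set."""
    m = dict(model); m["A"] = model["A"] | {argument}; return m

def forget(model, argument):
    """Drop an argument from the awareness set."""
    m = dict(model); m["A"] = model["A"] - {argument}; return m

def accept_rule(model, rule):
    """Add a defeasible rule to the accepted set.  The precondition that the
    rule is consistent and not deductively valid is assumed checked elsewhere."""
    m = dict(model); m["D"] = model["D"] | {rule}; return m

def announce(model, truth_set):
    """Public announcement: keep only the worlds where the announced formula
    is true.  The precondition that the doxastic set does not become empty is
    assumed checked elsewhere."""
    m = dict(model)
    m["W"] = model["W"] & truth_set
    m["R"] = model["R"] & truth_set
    m["V"] = {p: ws & truth_set for p, ws in model["V"].items()}
    return m
```

Note that the first three actions touch only the syntactic components of the model, while announcements restrict its possible-worlds component, which is why only the latter (together with rule acceptance) needs explicit preconditions below.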
Interpretation.
Note that this list of actions is far from exhaustive; we analyse these because they are natural adaptations of other actions studied in the literature [25, 18]. The most basic argumentative change we can think of consists in adding an argument to the awareness set of the agent. Informally, this can be thought of as the result of a communicative event (e.g. an opponent advancing an argument), of learning (the agent reading an argument in a book), or of reflection (the agent herself constructing an argument). Formally, the action is a direct generalisation of the “consider” action defined for sentences in [25, 14]. Its straightforward counterpart is the act of forgetting an argument (i.e. dropping it from the awareness set of the agent). As for the action , defeasible rules can also be learnt in different ways. For instance, an agent can learn the rule because an ornithologist told her, because she observed repeatedly that birds fly, or because she read it in a textbook. Finally, public announcements are probably the most studied action in DEL (see e.g. [18, Chapter 4]). Announcements of this kind are assumed to be truthful and to come from a completely reliable source.
We now define a dynamic language, in order to talk about the different actions. Let be a language, formulas of the extended language are given by:
Let be any of the dynamic modalities we have just defined; we use as an abbreviation of , with informally meaning that action can be executed and, after executing it, holds.
Note that the class of all models is not closed under all the defined actions. In particular, it is not closed under nor under . For the former, the reason is that only rules that are consistent and non-deductive can be learnt as defeasible (see the definition of model in Section 2). For the latter, only truthful formulas that do not trivialise the beliefs of the agent (in the sense of making empty) can be announced. This inconvenience is solved by fixing preconditions (expressible in ) for both actions. Let and ; we define:
; and
.
It is almost immediate to check that, for any pointed model , any , and any we have that:
( and ) iff ; and
( and ) iff .
Moreover, note that iff for every . Let be a pointed model with ; we define the truth clause for the new kind of formulas:
 iff ;
 iff ;
 iff ,  implies ;
 iff  implies .
Finally, we establish a completeness result for w.r.t. . Note that in Table 2, denotes an arbitrary element of .
Proposition 2.
The proof system that extends that of Table 1 with all axioms of Table 2 and is closed under the rule (RE) is sound and complete for w.r.t. .
Proof.
Soundness follows from the validity of all axioms and the validity-preserving character of the rule (RE) in the extended language. Completeness follows from the usual reduction argument. In short, note that in the right-hand side of each axiom of Table 2, either the dynamic operator disappears or it is applied to a less complex formula than in the left-hand side. In the case of the reduction axioms for , either no dynamic modalities occur in the right-hand side of the equivalence, or they are applied to -formulas with less complex arguments than in the left-hand side. Therefore, we can define a meaning-preserving translation from to that, together with Theorem 1, provides the desired result. The validity-preserving character of (RE) in the extended language w.r.t. takes care of formulas with nested dynamic modalities. The reader is referred to [28] for details.
∎
We close this section by modelling a toy example, inspired by [39], illustrating how actions affect argument-based beliefs. Suppose that an agent is wondering whether another agent, Harry, is a British subject (). Suppose that the only basic-explicit belief she holds at the beginning is that Harry was born in Bermuda (). Other pieces of relevant information are: Harry’s parents are aliens (), and that the rule “If Harry was born in Bermuda, then he is presumably a British subject” is applicable (). Let . We start with the model , where , , , , , , and . It is then easy to check that . Moreover, we have that . In words, after learning the rule and becoming aware of the simplest argument using it, i.e. , the agent has an argument-based belief that Harry is a British subject. If, however, the agent subsequently learns from a completely trustworthy source that Harry’s parents are aliens (), together with the rule and the argument , then she revises her argument-based belief about Harry’s nationality. In symbols, .
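On one natural reading of the example, the two defeasible arguments rebut each other once both are available, and neither survives in the grounded extension. The following self-contained sketch (with its own small grounded-extension routine, and argument names of our own choosing) illustrates the resulting belief change:

```python
def grounded(args, attacks):
    """Grounded extension of a finite framework, via the least fixed point
    of the defense function."""
    ext = set()
    while True:
        attackers = lambda b: {a for (a, x) in attacks if x == b}
        new = {b for b in args
               if all(any((c, a) in attacks for c in ext) for a in attackers(b))}
        if new == ext:
            return ext
        ext = new

A1 = "born-in-Bermuda => British"       # available after learning the rule r
A2 = "parents-aliens => not-British"    # available after the announcement

# before the announcement: only A1 is around, so it is justified
assert grounded({A1}, set()) == {A1}

# after: A1 and A2 rebut each other and neither defends itself, so neither is
# in the grounded extension; the belief that Harry is British is withdrawn
assert grounded({A1, A2}, {(A1, A2), (A2, A1)}) == set()
```

This matches the non-monotonic flavour of argument-based belief: acquiring new arguments can retract conclusions that were previously justified.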
Table 2: Reduction axioms.
5 Concluding remarks
Closely related work.
Of all the works discussed throughout the paper, [25, 26] and [37] seem the closest to our approach. Regarding [25, 26], we have, in a sense, generalised their awareness of rules to our awareness of arguments (abstracting away from other forms of awareness treated there). As for [37], their choice of modelling arguments semantically (as open sets of a topology) permits a transparent axiomatisation of their notion of argument-based belief, which is easily guaranteed to be consistent (two of the weaknesses of our approach). On the other hand, we naturally treat arguments as first-class citizens in our language, and the argument-based beliefs of our agent escape every form of logical omniscience (while the beliefs of [37]'s agent are still closed under equivalent formulas).
Future work.
There are natural open paths for future work. An urgent task in the development of the logical aspects of the framework consists in axiomatising (if possible) the argument-based belief operator . Moreover, the modal semantic apparatus of our models could be extended to plausibility structures [8], so as to model fine-grained preference between arguments, based on the agent’s basic epistemic attitudes toward the premisses of the involved arguments (e.g. known premisses are to be preferred to strongly believed premisses, and the latter, in turn, are to be preferred to merely believed premisses). Finally, a multi-agent extension of the current framework could be used to model argument exchange in different kinds of scenarios (e.g. deliberation, persuasion dialogues or inquiry).
References
- [2] Sergei Artemov (2018): Justification Awareness Models. In Sergei Artemov & Anil Nerode, editors: Logical Foundations of Computer Science, LNCS 10703, Springer, pp. 22–36, 10.1007/978-3-319-72056-2_2.
- [3] Sergei Artemov & Melvin Fitting (2016): Justification Logic. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
- [4] Sergei Artemov & Elena Nogina (2005): Introducing justification into epistemic logic. Journal of Logic and Computation 15(6), pp. 1059–1073, 10.1093/logcom/exi053.
- [5] Sergei N Artemov (2012): The ontology of justifications in the logical setting. Studia Logica 100(1-2), pp. 17–30, 10.1007/s11225-012-9387-x.
- [6] Alexandru Baltag, Nick Bezhanishvili, Aybüke Özgün & Sonja Smets (2016): Justified Belief and the Topology of Evidence. In Jouko Väänänen, Åsa Hirvonen & Ruy de Queiroz, editors: Logic, Language, Information, and Computation, LNCS 9803, Springer, pp. 83–103, 10.1007/978-3-662-52921-8_6.
- [7] Alexandru Baltag, Bryan Renne & Sonja Smets (2012): The Logic of Justified Belief Change, Soft Evidence and Defeasible Knowledge. In Luke Ong & Ruy de Queiroz, editors: Logic, Language, Information and Computation. WoLLIC 2012., LNCS 7456, Springer, pp. 168–190, 10.1007/978-3-642-32621-9_13.
- [8] Alexandru Baltag & Sonja Smets (2008): A qualitative theory of dynamic interactive belief revision. In Wiebe van der Hoek, Giacomo Bonanno & Michael Wooldridge, editors: Logic and the foundations of game and decision theory (LOFT 7), Texts in Logic and Games 3, Amsterdam University Press, pp. 9–58.
- [9] Pietro Baroni, Martin Caminada & Massimiliano Giacomin (2018): Abstract argumentation frameworks and their semantics. In Pietro Baroni, Dov M. Gabbay, Massimilino Giacomin & Leendert van der Torre, editors: Handbook of formal argumentation, College Publications, pp. 159–236.
- [10] Mathieu Beirlaen, Jesse Heyninck, Pere Pardo & Christian Straßer (2018): Argument strength in formal argumentation. IfCoLog Journal of Logics and their Applications 5(3), pp. 629–675.
- [11] Trevor JM Bench-Capon & Paul E Dunne (2007): Argumentation in artificial intelligence. Artificial Intelligence 171(10-15), pp. 619–641, 10.1016/j.artint.2007.05.001.
- [12] Johan van Benthem, David Fernández-Duque & Eric Pacuit (2014): Evidence and plausibility in neighborhood structures. Annals of Pure and Applied Logic 165(1), pp. 106–133, 10.1016/j.apal.2013.07.007.
- [13] Johan van Benthem, David Fernández-Duque, Eric Pacuit et al. (2012): Evidence Logic: A New Look at Neighborhood Structures. Advances in Modal Logic 9, pp. 97–118.
- [14] Johan van Benthem & Fernando R Velázquez-Quesada (2010): The dynamics of awareness. Synthese 177(1), pp. 5–27, 10.1007/s11229-010-9764-9.
- [15] Patrick Blackburn, Maarten De Rijke & Yde Venema (2010): Modal Logic. Cambridge University Press, 10.1017/CBO9781107050884.
- [16] Alfredo Burrieza & Antonio Yuste-Ginel (2020): Basic beliefs and argument-based beliefs in awareness epistemic logic with structured arguments. In H. Prakken, S. Bistarelli, F. Santini & C. Taticchi, editors: Proceedings of the COMMA 2020, IOS Press, pp. 123–134, 10.3233/FAIA200498.
- [17] Martin Caminada & Leila Amgoud (2007): On the evaluation of argumentation formalisms. Artificial Intelligence 171(5-6), pp. 286–310, 10.1016/j.artint.2007.02.003.
- [18] Hans van Ditmarsch, Wiebe van der Hoek & Barteld Kooi (2007): Dynamic epistemic logic. Springer, 10.1007/978-1-4020-5839-4.
- [19] Phan Minh Dung (1995): On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2), pp. 321–357, 10.1016/0004-3702(94)00041-X.
- [20] Frans H. van Eemeren, Bart Garssen, Erik C. W. Krabbe, A. Francisca Snoeck Henkemans, Bart Verheij & Jean H. M. Wagemans (2014): Handbook of Argumentation Theory. Springer, 10.1007/978-90-481-9473-5.
- [21] Ronald Fagin & Joseph Y Halpern (1987): Belief, awareness, and limited reasoning. Artificial Intelligence 34(1), pp. 39–76, 10.1016/0004-3702(87)90003-8.
- [22] Ronald Fagin, Joseph Y Halpern, Yoram Moses & Moshe Vardi (2004): Reasoning about knowledge. MIT press, 10.7551/mitpress/5803.001.0001.
- [23] Konstantin Genin & Franz Huber (2021): Formal Representations of Belief. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
- [24] Davide Grossi & Wiebe van der Hoek (2014): Justified Beliefs by Justified Arguments. In Chitta Baral, Giuseppe De Giacomo & Thomas Eiter, editors: Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, AAAI Press, pp. 131–140, 10.5555/3031929.3031947.
- [25] Davide Grossi & Fernando R. Velázquez-Quesada (2009): Twelve Angry Men: A Study on the Fine-Grain of Announcements. In Xiangdong He, John Horty & Eric Pacuit, editors: Logic, Rationality, and Interaction, Springer, pp. 147–160, 10.1007/978-3-642-04893-712.
- [26] Davide Grossi & Fernando R Velázquez-Quesada (2015): Syntactic awareness in logical dynamics. Synthese 192(12), pp. 4071–4105, 10.1007/s11229-015-0733-1.
- [27] Ali Hasan & Richard Fumerton (2018): Foundationalist Theories of Epistemic Justification. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
- [28] Barteld Kooi (2007): Expressivity and completeness for public update logics via reduction axioms. Journal of Applied Non-Classical Logics 17(2), pp. 231–253, 10.3166/jancl.17.231-253.
- [29] Xu Li & Yì N. Wáng (2020): A Logic of Knowledge and Belief Based on Abstract Arguments. In Mehdi Dastani, Huimin Dong & Leon van der Torre, editors: Logic and Argumentation, Springer, pp. 116–130, 10.1007/978-3-030-44638-38.
- [30] Hugo Mercier & Dan Sperber (2011): Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34(2), pp. 57–74, 10.1017/S0140525X10000968.
- [31] Sanjay Modgil & Henry Prakken (2013): A general account of argumentation with preferences. Artificial Intelligence 195, pp. 361–397, 10.1016/j.artint.2012.10.008.
- [32] Sanjay Modgil & Henry Prakken (2018): Abstract rule-based argumentation. In Pietro Baroni, Dov M. Gabbay, Massimiliano Giacomin & Leendert van der Torre, editors: Handbook of formal argumentation, College Publications, pp. 287–364.
- [33] Sanjay Modgil, Francesca Toni, Floris Bex, Ivan Bratko, Carlos I. Chesñevar, Wolfgang Dvořák, Marcelo A. Falappa, Xiuyi Fan, Sarah Alice Gaggl, Alejandro J. García, María P. González, Thomas F. Gordon, João Leite, Martin Možina, Chris Reed, Guillermo R. Simari, Stefan Szeider, Paolo Torroni & Stefan Woltran (2013): The Added Value of Argumentation, pp. 357–403. Springer, 10.1007/978-94-007-5583-321.
- [34] Carlo Proietti & Antonio Yuste-Ginel (2021): Dynamic epistemic logics for abstract argumentation. Synthese, 10.1007/s11229-021-03178-5.
- [35] Chiaki Sakama & Tran Cao Son (2020): Epistemic Argumentation Framework: Theory and Computation. Journal of Artificial Intelligence Research 69, pp. 1103–1126, 10.1613/jair.1.12121.
- [36] François Schwarzentruber, Srdjan Vesic & Tjitze Rienstra (2012): Building an Epistemic Logic for Argumentation. In Luis Fariñas del Cerro, Andreas Herzig & Jérôme Mengin, editors: Logics in Artificial Intelligence, LNCS 7519, Springer, pp. 359–371, 10.1007/978-3-642-33353-828.
- [37] Chenwei Shi, Sonja Smets & Fernando R Velázquez-Quesada (2017): Argument-based belief in topological structures. In J Lang, editor: Proceedings TARK 2017. EPTCS, Open Publishing Association, 10.4204/EPTCS.251.36.
- [38] Dan Sperber (1997): Intuitive and reflective beliefs. Mind & Language 12(1), pp. 67–83, 10.1111/j.1468-0017.1997.tb00062.x.
- [39] Stephen E Toulmin ([1958] 2003): The uses of argument. Cambridge University Press, 10.1017/CBO9780511840005.
- [40] Fernando R Velázquez-Quesada (2014): Dynamic epistemic logic for implicit and explicit beliefs. Journal of Logic, Language and Information 23(2), pp. 107–140, 10.1007/s10849-014-9193-0.
- [41] Yanjing Wang & Qinxiang Cao (2013): On axiomatizations of public announcement logic. Synthese 190(1), pp. 103–134, 10.1007/s11229-012-0233-5.
- [42] Zhe Yu, Kang Xu & Beishui Liao (2018): Structured argumentation: Restricted rebut vs. unrestricted rebut. Studies in Logic 11(3), pp. 3–17.
Appendix (Proof sketch of Theorem 1)
The outline of the proof is as follows: we first define a new class of (non-standard) models for our language (Kripke models in which the syntactic components – awareness, accepted rules and names of rules – are maintained along the accessibility relation). We then show two things: (i) we can go from pointed Kripke models to their generated submodels without losing information (just as in the general modal case); and (ii) we can systematically transform generated submodels of Kripke models into our models (again, without losing information). Finally, we prove completeness w.r.t. the class of non-standard models and apply (i) and (ii) to obtain the desired result. Let us unfold some of the details.
First of all, we define a Kripke model for as a tuple where: is a set of possible worlds; is a serial, transitive and euclidean relation; is a function assigning an awareness set to each world; (with ) is a function assigning a set of accepted defeasible rules to each world; is a (possibly partial) naming function for defeasible rules, where informally means "the defeasible rule is applicable at"; and is a valuation function. Moreover, we assume that for every, implies, , and. We also assume that if, then and.
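As an illustration of the frame conditions imposed above, the following sketch (ours, not the paper's; world and relation names are hypothetical) checks that a finite accessibility relation is serial, transitive and euclidean, i.e. a KD45 relation as required of these Kripke models.

```python
# Minimal sketch: verifying the KD45 frame conditions (serial, transitive,
# euclidean) on a finite accessibility relation R over a set of worlds W.
# R is a set of pairs (w, v), read "v is accessible from w".

def is_serial(W, R):
    # every world has at least one successor
    return all(any((w, v) in R for v in W) for w in W)

def is_transitive(W, R):
    # wRv and vRu imply wRu
    return all((w, u) in R
               for (w, v) in R for (v2, u) in R if v == v2)

def is_euclidean(W, R):
    # wRv and wRu imply vRu
    return all((v, u) in R
               for (w, v) in R for (w2, u) in R if w == w2)

W = {"w0", "w1", "w2"}
R = {("w0", "w1"), ("w0", "w2"),
     ("w1", "w1"), ("w1", "w2"),
     ("w2", "w1"), ("w2", "w2")}

print(is_serial(W, R), is_transitive(W, R), is_euclidean(W, R))
# prints: True True True
```

Note that, as usual for doxastic logics, reflexivity is not required: in the example, w0 does not access itself.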
Note that now sets depend on both the model and the world at which we are looking (since may vary from one world to another). Consequently, we use to denote the set of well-shaped arguments at .
Truth w.r.t. pointed Kripke models is denoted by and defined as follows (the missing clauses are as expected):
iff implies
iff
iff
iff and.
We say that a Kripke model is uniform iff for every it holds that: (i) ; (ii) ; and (iii) for every. denotes the class of all pointed Kripke models, and denotes the class of all uniform pointed Kripke models. We abuse notation and use to denote the class of all pointed models (the standard ones defined in Section 2).
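The idea behind uniformity can be illustrated with a toy check (ours, not the paper's; the map names are hypothetical): a model is uniform when its syntactic components, here represented by a single awareness assignment, do not vary from world to world.

```python
# Toy illustration of uniformity: an awareness map A assigns a set of
# syntactic items to each world; the model is "uniform" (in the sense of
# the appendix) when all worlds receive the same set.

def is_uniform(W, A):
    sets = [frozenset(A[w]) for w in W]
    return all(s == sets[0] for s in sets)

W = {"w0", "w1"}
A1 = {"w0": {"p"}, "w1": {"p"}}        # same awareness everywhere
A2 = {"w0": {"p"}, "w1": {"p", "q"}}   # awareness differs at w1

print(is_uniform(W, A1), is_uniform(W, A2))
# prints: True False
```

In the actual definition the same constancy condition is imposed on all three syntactic components (awareness, accepted rules and rule names) simultaneously.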
Transformation lemmas.
Now, we need a couple of lemmas. The first one says that we can go from pointed Kripke models to uniform pointed Kripke models without losing information, by taking generated submodels. We use to denote the submodel of generated by (see [15, Chapter 2]).
Lemma 1.
Let . We have that:
- i) , i.e. each point-generated submodel of a Kripke model is a uniform Kripke model.
- ii) For every, , i.e. truth is preserved under generated submodels.
Item i) follows easily from the definitions of generated submodel and uniform Kripke model. Item ii) can be proved by induction on.
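The generated-submodel construction used in Lemma 1 amounts to restricting the model to the worlds reachable from the distinguished point. A minimal sketch (ours; world names are hypothetical) of the frame part of that construction:

```python
# Minimal sketch: the subframe of (W, R) generated by a point w, i.e. the
# restriction to the worlds reachable from w (including w itself) along R.

def generated_subframe(W, R, w):
    reached = {w}
    frontier = [w]
    while frontier:
        u = frontier.pop()
        for (x, y) in R:
            if x == u and y not in reached:
                reached.add(y)
                frontier.append(y)
    # restrict the relation to the reachable worlds
    return reached, {(x, y) for (x, y) in R if x in reached and y in reached}

W = {"w0", "w1", "w2", "w3"}
R = {("w0", "w1"), ("w1", "w1"), ("w3", "w0")}
Ws, Rs = generated_subframe(W, R, "w0")
print(sorted(Ws))
# prints: ['w0', 'w1']  -- w2 and w3 are unreachable from w0 and are dropped
```

Intuitively, item i) holds because the syntactic components are maintained along the accessibility relation, so once we keep only worlds reachable from the point, those components become constant; item ii) holds because the truth of a formula at a world only depends on the worlds reachable from it.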
The second lemma says that we can go from uniform Kripke models to our models (the standard ones, defined in Section 2) without losing information.
Lemma 2.
For every uniform pointed Kripke model , there is a pointed model s.t. for every :
Let us define the function for each uniform Kripke model as follows, where and s.t.:
for every
Now, it is easy to check that, for every : , that is . Once this is done, we can show that, for every , it holds that:
The proof of the last assertion is by induction on, where the step for is another inductive argument (on the construction of).
Completeness w.r.t. Kripke models.
We can now define the canonical Kripke model for as:
where the definition of and is as usual in modal logic [15], while the definition of the remaining elements mimics that of awareness operators [21]:
Now, we need to prove:
Lemma 3 (Canonicity).
is a Kripke model for .
To show that satisfies all the conditions, we reason using properties of maximally consistent sets and our axiom system. As illustrations: the semantic restrictions on the accessibility relation follow from (Ax1) (see e.g. [22] or [15]), while (Ax19) allows us to show that is a function.
Lemma 4 (Truth).
For every : iff .
The proof proceeds by induction on. The Boolean and modal cases are standard [15]. The cases for the operators, and are straightforward (they actually make no use of the induction hypothesis, due to their syntactic character). The cases for and are slightly more involved. For the latter, another inductive argument on the structure of is required.
Completeness w.r.t. standard models.
Finally, completeness w.r.t. standard models can be proved as follows. Suppose; then is consistent. By Lindenbaum's Lemma, there is a s.t. . By the Truth Lemma, we have that. By item ii) of Lemma 1, we have that, and by item i), is a uniform pointed Kripke model. Then, by Lemma 2, we know that, which implies, by definition of semantic logical consequence, that.