
An Awareness Epistemic Framework for
Belief, Argumentation and Their Dynamics

Alfredo Burrieza    Antonio Yuste-Ginel
Department of Philosophy, University of Málaga, Spain
burrieza@uma.es    antonioyusteginel@gmail.com
Abstract

The notion of argumentation and that of belief stand in a problematic relation to one another. On the one hand, argumentation is crucial for belief formation: as the outcome of a process of arguing, an agent might come to (justifiably) believe that something is the case. On the other hand, beliefs are an input for argument evaluation: arguments with believed premisses are to be considered by the agent as strictly stronger than arguments whose premisses are not believed. An awareness epistemic logic that captures qualified versions of both principles was recently proposed in the literature. This paper extends that logic in three different directions. First, we try to improve its conceptual grounds by depicting its philosophical foundations, critically discussing some of its design choices, and exploring further possibilities. Second, we provide a (heretofore missing) completeness theorem for the basic fragment of the logic. Third, we study, using techniques from dynamic epistemic logic, how different forms of information change can be captured in the framework.

1 Introduction

Belief and argumentation are two central dimensions of human cognitive architecture. They have received attention from antiquity to the present day, and from a broad range of disciplines. It is then unsurprising that formal researchers have undertaken the task of modelling both phenomena. Regarding beliefs, there is a substantial range of options for capturing their formal aspects [23]. These models usually capture what kind of things are believed (typically, propositions or sentences); who believes them (intelligent agents); and, only sometimes, how strong or safe these beliefs are (for instance, in probabilistic models of belief or in plausibility structures [8]). However, most of them fail to capture why agents believe certain things. This gap motivates the recent efforts within the epistemic logic community to capture the missing justification component. This enterprise has been approached with a variety of methods: justification logic [4, 5, 2, 3], evidence logics based on neighbourhood semantics [13, 12] and their further topological development [6], amongst others. Yet another natural candidate for modelling justification consists in using conceptual and technical tools coming from argumentation theory (as done, e.g., in [24, 37, 29, 16]).

As to argumentation theory, it is a well-established, interdisciplinary field of research [20]. Over the last few decades, formal argumentation has gained more and more attention within artificial intelligence, and its general advantages have been highlighted several times [11, 33]. Within formal approaches to argumentation, it is frequent to distinguish between abstract approaches (those that treat arguments as primitive, atomic entities) and structured approaches (those that explicitly account for the structure of arguments). For expository purposes, we just mention Dung's popular approach to abstract argumentation [19], based on so-called abstract argumentation frameworks, and the ASPIC family of formalisms for structured argumentation, e.g., ASPIC+ [31, 32], which will be the main argumentative resources used in this paper.

Recently, some works have taken the first steps towards exploring and exploiting the relations between the two traditions (epistemic logic and formal argumentation). These can be divided into two groups. On the one hand, there are works using epistemic logic tools to reason about argumentation frameworks [36, 35, 34]. On the other hand, there are works using argumentation tools to provide an (argumentatively inspired) notion of justified belief (the already mentioned [24, 37, 29, 16]). The current paper belongs to the latter group, and it follows the ideas of [16], which, contrary to [24, 37, 29] and in line with more standard ideas in structured argumentation, models arguments as syntactic entities.

We start by pointing out that the informal relation between argumentation and belief is itself problematic. Arguably, there is a tension between two intuitive principles governing belief formation and argument evaluation. These principles are:

$\mathsf{P1}$

Beliefs are an input for argument evaluation, meaning that arguments with believed premisses are better than those with contingent or even rejected premisses. (We use the term contingent in its doxastic sense, that is, a sentence is said to be contingent iff it is neither believed nor believed to be false.)

$\mathsf{P2}$

Argumentation is an input for belief formation, meaning that rational agents should believe sentences that are grounded in good arguments.

The mentioned tension arises when one tries to embrace both principles without any restriction, leading to an infinite regress. A very similar problem lies at the root of a long-standing debate about the structure of epistemic justification within contemporary epistemology. The foundationalist solution to this tension, to which we adhere here, consists in distinguishing between basic (non-inferred) beliefs and non-basic (inferred) beliefs, where the latter inherit their justification from the former [27]. This implies accepting qualified versions of both principles, but giving some sort of priority to $\mathsf{P1}$ over $\mathsf{P2}$. Curiously enough, an analogous distinction can be found as one of the bases of the recent argumentative theory of reason advocated by Mercier and Sperber [30]. In this context, basic beliefs are called intuitive beliefs while inferred beliefs are called reflective beliefs (see [38] for a detailed exposition of the distinction).

In the rest of this paper, we follow up on the work in [16] by extending it in three different directions. First, after recalling the logic introduced there, whose language allows talking about basic beliefs and structured arguments, we provide a sound and complete axiomatisation for it (Section 2). We then explain how to use this logic for reasoning about explicit basic beliefs and argument-based beliefs, discussing some of the design choices as well as depicting some alternatives (Section 3). Finally, we extend the basic fragment of the logic so as to capture different kinds of informational dynamics, illustrating their effects on both types of beliefs (Section 4).

2 An awareness logic for belief and argumentation

Let us start by recalling the logic introduced in [16]. We follow the traditional order of presentation: syntax, semantics, and proof theory. We assume a countable set of propositional letters $\mathsf{At}$ as fixed from now on. The language $\mathcal{L}$ is defined as the pair $(\mathsf{F},\mathsf{A})$ of formulas and arguments, which are respectively generated by the following grammars:

\[\varphi::=p\mid\lnot\varphi\mid(\varphi\land\varphi)\mid\square\varphi\mid\mathsf{aware}(\alpha)\mid\mathsf{conc}(\alpha)=\varphi\mid\mathsf{strict}(\alpha)\mid\mathsf{undercuts}(\alpha,\alpha)\mid\mathsf{wellshap}(\alpha)\qquad p\in\mathsf{At},\ \alpha\in\mathsf{A}\text{.}\]

\[\alpha::=\langle\varphi\rangle\mid\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle\mid\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle\qquad\varphi\in\mathsf{F},\ n\geq 1\text{.}\]

The rest of the Boolean operators ($\to,\lor,\leftrightarrow$) and constants ($\top,\perp$), as well as the dual of $\square$ (noted $\lozenge$), are defined as usual. Arguments of $\mathcal{L}$ have the following informal readings. $\langle\varphi\rangle$ is an atomic argument. Note that this kind of argument is rather strange in real-life examples, since its sole premise coincides with its conclusion and no proper inference step is involved. Mathematically, it can be understood as a one-line proof from $\varphi$ to $\varphi$. As for $\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle$ (resp. $\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle$), it represents an argument claiming that $\varphi$ follows deductively (resp. defeasibly) from the conclusions of the arguments $\alpha_1,\ldots,\alpha_n$. As an example of a complex argument, consider $\langle\langle\langle\mathsf{Bird}\rangle,\langle\mathsf{Bird}\to\mathsf{Wings}\rangle\twoheadrightarrow\mathsf{Wings}\rangle\Rightarrow\mathsf{Flies}\rangle$, which informally reads “This has wings, because it is a bird and all birds have wings. Moreover, since it has wings, it presumably (defeasibly) flies”.

Regarding formulas, elements of $\mathsf{At}$ represent factual, atomic propositions. $\square\varphi$ means that the agent implicitly (ideally) believes that $\varphi$. $\mathsf{aware}(\alpha)$ reads “the agent is aware of $\alpha$”. $\mathsf{conc}(\alpha)=\varphi$ reads “the conclusion of $\alpha$ is $\varphi$”. $\mathsf{strict}(\alpha)$ means that $\alpha$ does not contain defeasible inference steps. $\mathsf{undercuts}(\alpha,\beta)$ means that $\alpha$ undercuts $\beta$, that is, $\alpha$ attacks some defeasible inference link of $\beta$. Finally, $\mathsf{wellshap}(\alpha)$ means that $\alpha$ has been constructed properly, that is, all its deductive inference steps are valid and all its defeasible inference steps are accepted by the agent.

We use $\mathsf{SEQ}(\mathsf{F})$ to denote the set of all finite sequences over $\mathsf{F}$, and we write an arbitrary sequence of $n+1$ elements over $\mathsf{F}$ as $((\varphi_1,\ldots,\varphi_n),\varphi)$. Sequences of formulas are useful to represent inference steps in the meta-language. Although strongly connected from a conceptual point of view, the sequence $((\varphi_1,\ldots,\varphi_n),\varphi)$ is not the same object as, for instance, the object-language argument $\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle\Rightarrow\varphi\rangle$. Given $R=((\varphi_1,\ldots,\varphi_n),\varphi)\in\mathsf{SEQ}(\mathsf{F})$, we use $\alpha^{R}$ as a shorthand for $\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle\Rightarrow\varphi\rangle$. We can see $\alpha^{R}$ as the simplest argument using $R$. As an example, consider the rule $R_1=((\mathsf{Wings}),\mathsf{Flies})$: we have $\alpha^{R_1}=\langle\langle\mathsf{Wings}\rangle\Rightarrow\mathsf{Flies}\rangle$, but note that there are infinitely many other arguments using $R_1$, for instance $\langle\langle\langle\mathsf{Bird}\rangle,\langle\mathsf{Bird}\to\mathsf{Wings}\rangle\twoheadrightarrow\mathsf{Wings}\rangle\Rightarrow\mathsf{Flies}\rangle$.

Let us define the following meta-syntactic functions for analysing an argument's structure, taken from ASPIC+ [31] (a schematic implementation is sketched below):

  • $\mathsf{Prem}(\alpha)$ returns the premisses of $\alpha$ and is defined as follows: $\mathsf{Prem}(\langle\varphi\rangle):=\{\varphi\}$ and $\mathsf{Prem}(\langle\alpha_1,\ldots,\alpha_n\hookrightarrow\varphi\rangle):=\mathsf{Prem}(\alpha_1)\cup\ldots\cup\mathsf{Prem}(\alpha_n)$, where $\hookrightarrow\in\{\twoheadrightarrow,\Rightarrow\}$.

  • $\mathsf{Conc}(\alpha)$ returns the conclusion of $\alpha$ and is defined as follows: $\mathsf{Conc}(\langle\varphi\rangle):=\{\varphi\}$ and $\mathsf{Conc}(\langle\alpha_1,\ldots,\alpha_n\hookrightarrow\varphi\rangle):=\{\varphi\}$, where $\hookrightarrow\in\{\twoheadrightarrow,\Rightarrow\}$.

  • $\mathsf{sub_{A}}(\alpha)$ returns the subarguments of $\alpha$ and is defined as follows: $\mathsf{sub_{A}}(\langle\varphi\rangle):=\{\langle\varphi\rangle\}$ and $\mathsf{sub_{A}}(\langle\alpha_1,\ldots,\alpha_n\hookrightarrow\varphi\rangle):=\{\langle\alpha_1,\ldots,\alpha_n\hookrightarrow\varphi\rangle\}\cup\mathsf{sub_{A}}(\alpha_1)\cup\ldots\cup\mathsf{sub_{A}}(\alpha_n)$, where $\hookrightarrow\in\{\twoheadrightarrow,\Rightarrow\}$.

  • $\mathsf{TopRule}(\alpha)$ returns the top rule of $\alpha$, i.e. the last rule applied in the formation of $\alpha$. It is defined as follows: $\mathsf{TopRule}(\langle\varphi\rangle)$ is left undefined, and $\mathsf{TopRule}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)=\mathsf{TopRule}(\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle):=((\mathsf{Conc}(\alpha_1),\ldots,\mathsf{Conc}(\alpha_n)),\varphi)$.

  • $\mathsf{DefRule}(\alpha)$ returns the set of defeasible rules of $\alpha$ and is defined as follows: $\mathsf{DefRule}(\langle\varphi\rangle):=\emptyset$, $\mathsf{DefRule}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle):=\mathsf{DefRule}(\alpha_1)\cup\ldots\cup\mathsf{DefRule}(\alpha_n)$, and $\mathsf{DefRule}(\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle):=\{((\mathsf{Conc}(\alpha_1),\ldots,\mathsf{Conc}(\alpha_n)),\varphi)\}\cup\mathsf{DefRule}(\alpha_1)\cup\ldots\cup\mathsf{DefRule}(\alpha_n)$.

Let us also define semantic propositional negation: for any $\varphi,\psi\in\mathsf{F}$, $\varphi=\sim\psi$ abbreviates $\mathsf{wellshap}(\langle\langle\varphi\rangle\twoheadrightarrow\lnot\psi\rangle)\land\mathsf{wellshap}(\langle\langle\psi\rangle\twoheadrightarrow\lnot\varphi\rangle)$.
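For illustration only, the following Python sketch (ours, not taken from [16] or from ASPIC+) encodes arguments as a small recursive datatype and implements the five structural functions just defined; representing formulas as plain strings and the names Atomic and Inference are assumptions made for readability.

```python
# Minimal recursive encoding of arguments (our assumption: formulas are plain
# strings, and an argument is either Atomic or an Inference whose link is
# 'strict' (->>) or 'defeasible' (=>)).

from dataclasses import dataclass
from typing import Tuple, Union, Optional

@dataclass(frozen=True)
class Atomic:
    formula: str                       # <phi>

@dataclass(frozen=True)
class Inference:
    subs: Tuple['Argument', ...]       # alpha_1, ..., alpha_n
    conclusion: str                    # phi
    link: str                          # 'strict' or 'defeasible'

Argument = Union[Atomic, Inference]

def prem(a: Argument) -> frozenset:                       # Prem
    if isinstance(a, Atomic):
        return frozenset({a.formula})
    return frozenset().union(*(prem(s) for s in a.subs))

def conc(a: Argument) -> str:                             # Conc
    return a.formula if isinstance(a, Atomic) else a.conclusion

def sub_args(a: Argument) -> frozenset:                   # sub_A
    if isinstance(a, Atomic):
        return frozenset({a})
    return frozenset({a}).union(*(sub_args(s) for s in a.subs))

def top_rule(a: Argument) -> Optional[tuple]:             # TopRule (undefined for atomic arguments)
    if isinstance(a, Atomic):
        return None
    return (tuple(conc(s) for s in a.subs), a.conclusion)

def def_rules(a: Argument) -> frozenset:                  # DefRule
    if isinstance(a, Atomic):
        return frozenset()
    below = frozenset().union(*(def_rules(s) for s in a.subs))
    return below | ({top_rule(a)} if a.link == 'defeasible' else frozenset())

# Example: <<<Bird>, <Bird -> Wings> ->> Wings> => Flies>
wings = Inference((Atomic("Bird"), Atomic("Bird -> Wings")), "Wings", 'strict')
flies = Inference((wings,), "Flies", 'defeasible')
print(prem(flies))        # {'Bird', 'Bird -> Wings'}
print(def_rules(flies))   # {(('Wings',), 'Flies')}
```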

Let us now move to semantics. A model for $\mathcal{L}$ is a tuple $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ where:

  • $W\neq\emptyset$ is a set of possible worlds.

  • $\mathcal{B}\subseteq W$ with $\mathcal{B}\neq\emptyset$ is the set of worlds that are doxastically indistinguishable for the agent.

  • $\mathcal{O}\subseteq\mathsf{A}$ is the set of available arguments, also called the awareness set of the agent.

  • $\mathcal{D}\subseteq\mathsf{SEQ}(\mathsf{F})$ is a set of accepted defeasible rules. Moreover, for every $((\varphi_1,\ldots,\varphi_n),\varphi)\in\mathcal{D}$ we require that:

    • $\{\varphi_1,\ldots,\varphi_n,\varphi\}\nvdash_0\perp$ (defeasible rules are consistent), where $\vdash_0$ denotes the consequence relation of classical propositional logic, and

    • $\{\varphi_1,\ldots,\varphi_n\}\nvdash_0\varphi$ (defeasible rules are not deductively valid).

  • $\mathfrak{n}:\mathsf{SEQ}(\mathsf{F})\to\mathsf{At}$ is a (possibly partial) naming function for rules, where $\mathfrak{n}(R)$ informally means “the rule $R$ is applicable”.

  • $||\cdot||$ is an atomic valuation, i.e. a function $||\cdot||:\mathsf{At}\to\wp(W)$.

Interpretation. In a given model $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$, $\mathcal{O}$ represents the set of arguments that the agent entertains or is aware of. Whenever $\alpha\in\mathcal{O}$, we assume that (i) the agent can determine her doxastic attitude toward the premisses of $\alpha$ through non-inferential methods (for instance, through observations), and (ii) she knows the structure of $\alpha$ (either because $\alpha$ has been communicated to her, or because she has gone through the cognitive process of building $\alpha$). Besides this, there is no semantic intuition underlying $\mathcal{O}$, so the agent can be perfectly aware of rather silly arguments, such as $\langle\langle p\rangle\twoheadrightarrow q\rangle$, without accepting them in any sense. Moreover, rules in the set $\mathcal{D}$ are interpreted as rules whose inference strength lies in their content, rather than as purely formal schemas (as deductive rules are). As an example, consider the rule “Peter's bike is on the bike parking area, therefore he should be in his office”. The term accepted means that the agent considers them applicable if there are no good reasons against doing so. Note that $\alpha^{R}\in\mathcal{O}$ does not imply $R\in\mathcal{D}$ (informally corresponding to the intuition that an agent can be aware of a defeasible argument without accepting its rule). There are further restrictions that could arguably be adopted, but that we leave out for the sake of simplicity. For instance, we could require $\mathcal{O}$ to be closed under subarguments, or that for any accepted defeasible rule, the agent is aware of at least one argument using it.

Let $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ be a model for $\mathcal{L}=(\mathsf{F},\mathsf{A})$. The set of well-shaped arguments $WS^{M}\subseteq\mathsf{A}$ (depending on $\mathcal{D}$ in $M$) is the smallest set fulfilling the following conditions:

  1. $\langle\varphi\rangle\in WS^{M}$ for any $\varphi\in\mathsf{F}$.

  2. $\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle\in WS^{M}$ iff both $\alpha_i\in WS^{M}$ for every $1\leq i\leq n$ and $\{\mathsf{Conc}(\alpha_1),\ldots,\mathsf{Conc}(\alpha_n)\}\vdash_0\varphi$.

  3. $\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle\in WS^{M}$ iff both $\alpha_i\in WS^{M}$ for every $1\leq i\leq n$ and $((\mathsf{Conc}(\alpha_1),\ldots,\mathsf{Conc}(\alpha_n)),\varphi)\in\mathcal{D}$.

We drop the superscript $M$ from $WS^{M}$ whenever there is no danger of confusion.
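As a companion illustration (again ours, and deliberately standalone), the following sketch checks membership in $WS^{M}$ over a lightweight tuple encoding of arguments; the classical consequence relation $\vdash_0$ is passed in as an oracle, and the naive `entails` used in the demo is only a placeholder for a real propositional reasoner (e.g. a SAT solver or a truth-table check).

```python
# Arguments as nested tuples (an assumption of this sketch, not the paper's
# notation): ('atom', phi) encodes <phi>, ('strict', subs, phi) encodes
# <a_1,...,a_n ->> phi>, and ('def', subs, phi) encodes <a_1,...,a_n => phi>.

def conc(arg):
    """Conclusion of an argument."""
    return arg[1] if arg[0] == 'atom' else arg[2]

def well_shaped(arg, D, entails):
    """Membership in WS relative to the accepted defeasible rules D and a
    classical-consequence oracle entails(premisses, phi)."""
    kind = arg[0]
    if kind == 'atom':
        return True                                   # clause 1
    subs, phi = arg[1], arg[2]
    if not all(well_shaped(s, D, entails) for s in subs):
        return False                                  # subarguments first
    concs = tuple(conc(s) for s in subs)
    if kind == 'strict':
        return entails(concs, phi)                    # clause 2
    return (concs, phi) in D                          # clause 3

# Toy run with a deliberately naive oracle (phi follows iff it is a premiss);
# the rule ((Wings), Flies) is accepted as defeasible.
if __name__ == "__main__":
    naive_entails = lambda prems, phi: phi in prems
    D = {(("Wings",), "Flies")}
    bird_arg = ('def', (('atom', "Wings"),), "Flies")   # <<Wings> => Flies>
    print(well_shaped(bird_arg, D, naive_entails))      # True
```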

Let $(M,w)$ be a pointed model for $\mathcal{L}$, that is, $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ is a model and $w\in W$. The truth relation, relating pointed models and formulas, is given by the clauses below. (Note that we do not need to take $\mathsf{undercuts}$ as a primitive operator, since it could be defined through a (simpler) operator that captures the meaning of $\mathfrak{n}$. We make this choice for the sake of succinctness, as well as for studying the axiomatic behaviour of $\mathsf{undercuts}$.)

$M,w\models\square\varphi$ iff for all $w'\in W$: $w'\in\mathcal{B}$ implies $M,w'\models\varphi$.
$M,w\models\mathsf{aware}(\alpha)$ iff $\alpha\in\mathcal{O}$.
$M,w\models\mathsf{conc}(\alpha)=\varphi$ iff $\mathsf{Conc}(\alpha)=\varphi$.
$M,w\models\mathsf{strict}(\alpha)$ iff $\mathsf{DefRule}(\alpha)=\emptyset$.
$M,w\models\mathsf{undercuts}(\alpha,\beta)$ iff $\beta=\langle\beta_1,\ldots,\beta_n\Rightarrow\psi\rangle$ and $\mathsf{Conc}(\alpha)=\lnot\mathfrak{n}(\mathsf{TopRule}(\beta))$.
$M,w\models\mathsf{wellshap}(\alpha)$ iff $\alpha\in WS^{M}$.

A formula $\varphi$ is said to be valid (noted $\models\varphi$) iff it is true at all pointed models. We use $||\varphi||_{M}$ to denote the truth-set of $\varphi$, i.e., the set of worlds of $M$ where $\varphi$ is true, and $\mathcal{M}$ to denote the class of all models.

We now present a sound and complete axiomatisation of $\mathcal{L}$ w.r.t. $\mathcal{M}$, a topic that was left open in [16] and that constitutes one of the main technical contributions of the current paper. Although our models provide a compact representation of the components needed for reasoning about basic and argument-based beliefs in a single-agent context, they are rather non-standard from a technical point of view. Besides the strongly syntactic character of some of their elements, their modal components are not defined as usual, and therefore the definition of a canonical model cannot be extrapolated straightforwardly. Nevertheless, we can provide an indirect completeness proof (see Appendix A1 for details).

Theorem 1.

The axiom system $\mathsf{L}^{\mathsf{BA}}$, defined in Table 1, is sound and complete for $\mathcal{L}$ w.r.t. $\mathcal{M}$.

    Modal core axioms
(Ax0)  All propositional tautologies
(Ax1)  $KD45$ axioms for $\square$
    Introspection axioms
(Ax2)  $\mathsf{aware}(\alpha)\to\square\mathsf{aware}(\alpha)$
(Ax3)  $\lnot\mathsf{aware}(\alpha)\to\square\lnot\mathsf{aware}(\alpha)$
(Ax4)  $\mathsf{wellshap}(\alpha)\to\square\mathsf{wellshap}(\alpha)$
(Ax5)  $\lnot\mathsf{wellshap}(\alpha)\to\square\lnot\mathsf{wellshap}(\alpha)$
(Ax6)  $\mathsf{undercuts}(\alpha,\beta)\to\square\mathsf{undercuts}(\alpha,\beta)$
(Ax7)  $\lnot\mathsf{undercuts}(\alpha,\beta)\to\square\lnot\mathsf{undercuts}(\alpha,\beta)$
    Axioms for syntactic operators
(Ax8)  $\mathsf{conc}(\alpha)=\varphi$  whenever $\mathsf{Conc}(\alpha)=\varphi$
(Ax9)  $\lnot\mathsf{conc}(\alpha)=\varphi$  whenever $\mathsf{Conc}(\alpha)\neq\varphi$
(Ax10)  $\mathsf{strict}(\alpha)$  whenever $\mathsf{DefRule}(\alpha)=\emptyset$
(Ax11)  $\lnot\mathsf{strict}(\alpha)$  whenever $\mathsf{DefRule}(\alpha)\neq\emptyset$
    Wellshapedness axioms
(Ax12)  $\mathsf{wellshap}(\langle\varphi\rangle)$
(Ax13)  $\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)\to\bigwedge_{1\leq i\leq n}\mathsf{wellshap}(\alpha_i)$
(Ax14)  $\bigwedge_{1\leq i\leq n}\mathsf{wellshap}(\alpha_i)\to\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)$  whenever $\{\mathsf{Conc}(\alpha_i)\mid 1\leq i\leq n\}\vdash_0\varphi$
(Ax15)  $\lnot\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)$  whenever $\{\mathsf{Conc}(\alpha_i)\mid 1\leq i\leq n\}\nvdash_0\varphi$
(Ax16)  $\big(\bigwedge_{1\leq i\leq n}\mathsf{wellshap}(\alpha_i)\land\mathsf{wellshap}(\langle\langle\mathsf{Conc}(\alpha_1)\rangle,\ldots,\langle\mathsf{Conc}(\alpha_n)\rangle\Rightarrow\varphi\rangle)\big)\leftrightarrow\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle)$
(Ax17)  $\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle)\to\lnot\mathsf{wellshap}(\langle\langle\mathsf{Conc}(\alpha_1)\rangle,\ldots,\langle\mathsf{Conc}(\alpha_n)\rangle,\langle\varphi\rangle\twoheadrightarrow\perp\rangle)$
(Ax18)  $\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\Rightarrow\varphi\rangle)\to\lnot\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)$
    Undercut axioms
(Ax19)  $\mathsf{undercuts}(\langle\lnot p\rangle,\alpha^{R})\to\lnot\mathsf{undercuts}(\langle\lnot q\rangle,\alpha^{R})$  whenever $q\neq p$
(Ax20)  $\lnot\mathsf{undercuts}(\alpha,\langle\varphi\rangle)$
(Ax21)  $\lnot\mathsf{undercuts}(\alpha,\langle\alpha_1,\ldots,\alpha_n\twoheadrightarrow\varphi\rangle)$
(Ax22)  $\lnot\mathsf{undercuts}(\alpha,\beta)$  whenever $\mathsf{Conc}(\alpha)\neq\lnot p$ for every $p\in\mathsf{At}$
(Ax23)  $(\mathsf{undercuts}(\langle\lnot p\rangle,\langle\langle\mathsf{Conc}(\beta_1)\rangle,\ldots,\langle\mathsf{Conc}(\beta_n)\rangle\Rightarrow\varphi\rangle)\land\mathsf{conc}(\alpha)=\lnot p)\to\mathsf{undercuts}(\alpha,\langle\beta_1,\ldots,\beta_n\Rightarrow\varphi\rangle)$
(Ax24)  $(\mathsf{undercuts}(\alpha,\langle\beta_1,\ldots,\beta_n\Rightarrow\varphi\rangle)\land\mathsf{conc}(\alpha)=\lnot p)\to\mathsf{undercuts}(\langle\lnot p\rangle,\langle\langle\mathsf{Conc}(\beta_1)\rangle,\ldots,\langle\mathsf{Conc}(\beta_n)\rangle\Rightarrow\varphi\rangle)$
    Rules
(MP)  From $\varphi\to\psi$ and $\varphi$, infer $\psi$
(Nec)  From $\varphi$, infer $\square\varphi$
Table 1: Axiom system.

3 Basic beliefs and argument-based beliefs

The logic introduced above can be used to study a rich repertoire of doxastic attitudes. We start by discussing basic beliefs, informally representing those that are not grounded on inferential processes. As mentioned, they can also be understood in terms of intuitive beliefs, i.e., those that the agent extracts from a sort of database seen by her as completely trustworthy [38]. As usual in awareness epistemic logic, we have two versions of this notion. On the one hand, we have the implicit (ideal) version of basic beliefs, modelled through $\square\varphi$, which suffers from the extensively discussed problem of logical omniscience (see e.g. [22, Chapter 9]). On the other hand, we have its explicit counterpart, for which we have chosen $\square^{e}\varphi:=\square\varphi\land\mathsf{aware}(\langle\varphi\rangle)$. Note that, as in other logics for implicit and explicit belief, it holds that $\models\square^{e}\varphi\to\square\varphi$. Moreover, under the current semantics, $\square^{e}\varphi$ is equivalent to a schema that resembles another usual option for modelling explicit beliefs (e.g. [40]): $\models\square^{e}\varphi\leftrightarrow\square(\varphi\land\mathsf{aware}(\langle\varphi\rangle))$.

Besides basic beliefs, we can also capture in $\mathcal{L}$ a sort of deductive-explicit belief. Deductive-explicit beliefs are those rooted in a deductive argument s.t. the agent has a basic belief that all its premisses are true. Formally, and following [7], we define doxastic argument acceptance as

$\mathsf{accept}(\alpha):=\bigwedge_{\varphi\in\mathsf{Prem}(\alpha)}\square\varphi$,

and deductive-explicit belief as

$\mathsf{B}^{\mathsf{D}}(\alpha,\varphi):=\mathsf{accept}(\alpha)\land\mathsf{aware}(\alpha)\land\mathsf{conc}(\alpha)=\varphi\land\mathsf{strict}(\alpha)\land\mathsf{wellshap}(\alpha)$.

Note that $\models\mathsf{B}^{\mathsf{D}}(\alpha,\varphi)\to\square\varphi$ and $\models\square^{e}\varphi\leftrightarrow\mathsf{B}^{\mathsf{D}}(\langle\varphi\rangle,\varphi)$. The first validity shows that deductive-explicit beliefs are a subset of basic-implicit beliefs. The second one shows that basic-explicit beliefs are an extreme case of deductive-explicit beliefs (those rooted in the trivial deduction that goes from $\varphi$ to $\varphi$, i.e., in the atomic argument $\langle\varphi\rangle$).

Up to now, we have not gone far from the kind of attitudes that are usually discussed in the awareness logic literature (e.g. in [21, 14, 25, 26]). We now take a small detour through argumentation theory in order to define argument-based beliefs. Roughly speaking, argument-based beliefs are grounded in arguments that may involve non-deductive steps. They can be understood, at least to some extent, in terms of the reflective beliefs of [38]. Recall that we are after formalising the principle $\mathsf{P2}$ presented in the introduction: the beliefs of a rational agent should be grounded in good arguments. But what does good mean in this context? Following [10], the very notion of argument strength can be analysed in three different layers or dimensions: the support dimension (how strong the reason given by an argument to accept its conclusion is), the dialectic dimension (how arguments attack and defeat each other), and the evaluative dimension (how the former conflicts are to be resolved).

Hence, the first step is to set up a notion of argument strength regarding the support dimension. Formally, we seek to define a preference relation among the arguments of $\mathcal{L}$ that takes $\mathsf{P1}$ into account (arguments with believed premisses are to be preferred over arguments with premisses that are not believed). In [16], we showed how to do this by splitting all arguments into three preference classes based on the basic doxastic attitude of the agent toward the premisses of the arguments. Here, we take a much simpler view, for the sake of brevity, and directly exclude arguments whose premisses are not believed. Both options make $\mathsf{P2}$ dependent on $\mathsf{P1}$, since in the process of grounding arguments in beliefs, and these in turn in new arguments, we arrive at good arguments that are good just because the agent has a basic belief that all their premisses hold. However, inference links must still play a role when determining the relative strength of two arguments. The simplest principle that can be adopted in this regard is captured by the following binary relation among arguments of $\mathcal{L}$: $\alpha\geq\beta:=\mathsf{strict}(\alpha)\lor\lnot\mathsf{strict}(\beta)$. This relation informally corresponds to the idea that, ceteris paribus, deductive arguments are to be preferred to non-deductive ones.

Regarding the dialectic dimension of argument strength, we capture two forms of argumentative defeat, namely, undercutting (attacking a defeasible inference step of any subargument) and successful rebuttal (attacking the conclusion of a less or equally preferred subargument). Formally,

  • Undercutting a subargument: $\mathsf{undercuts}^{\ast}(\alpha,\beta):=\bigvee_{\beta'\in\mathsf{sub_{A}}(\beta)}\mathsf{undercuts}(\alpha,\beta')$.

  • Unrestricted successful rebuttal:
    $\mathsf{Urebuts}(\alpha,\beta):=\lnot\mathsf{strict}(\beta)\land\bigvee_{\beta'\in\mathsf{sub_{A}}(\beta)}(\mathsf{conc}(\alpha)=\varphi\land\mathsf{conc}(\beta')=\psi\land\varphi=\sim\psi\land\alpha\geq\beta')$.

  • Defeat: $\mathsf{defeat}(\alpha,\beta):=\mathsf{undercuts}^{\ast}(\alpha,\beta)\lor\mathsf{Urebuts}(\alpha,\beta)$.

As discussed in the formal argumentation literature, there is a more restrictive alternative for the notion of rebuttal, requiring the top rule of the attacked subargument to be defeasible. (See [42] for a discussion of the two possible design choices. Note moreover that the other customary type of attack, i.e. undermining (attacking a premise), makes sense only when non-believed premisses are taken into account.)

  • Restricted successful rebuttal:
    $\mathsf{Rrebuts}(\alpha,\beta):=\lnot\mathsf{strict}(\beta)\land\bigvee_{\langle\beta_1,\ldots,\beta_n\Rightarrow\varphi\rangle\in\mathsf{sub_{A}}(\beta)}(\mathsf{conc}(\alpha)=\psi\land\varphi=\sim\psi)$.

Argumentation frameworks and their semantics [19] are the most studied tool for capturing the evaluative dimension of argument strength. We now explain how to incorporate them in the current approach. Let $(M,w)$ be a pointed model for $\mathcal{L}=(\mathsf{F},\mathsf{A})$; we define its associated argumentation framework as $AF^{M}:=(\mathsf{A}^{M},\rightsquigarrow)$, where $\mathsf{A}^{M}:=\{\alpha\in\mathsf{A}\mid M,w\models\mathsf{aware}(\alpha)\land\mathsf{wellshap}(\alpha)\land\mathsf{accept}(\alpha)\}$ and $\rightsquigarrow\subseteq\mathsf{A}^{M}\times\mathsf{A}^{M}$ is given by $\alpha\rightsquigarrow\beta$ iff $M,w\models\mathsf{defeat}(\alpha,\beta)$. We stress the fact that, in the domain of our frameworks (i.e., in $\mathsf{A}^{M}$), basic beliefs act as a filter (through the clause $\mathsf{accept}(\alpha)$), instantiating a qualified, unproblematic version of $\mathsf{P1}$, namely $\mathsf{P1}'$: basic beliefs are an input for argument evaluation. Given a set of possibly conflicting arguments (an argumentation framework), we need a mechanism for the agent to decide which of the arguments are to be selected (an argumentation semantics). We say that a set $B\subseteq\mathsf{A}^{M}$ is conflict-free iff there are no $\alpha,\beta\in B$ s.t. $\alpha\rightsquigarrow\beta$. Moreover, we say that $B\subseteq\mathsf{A}^{M}$ defends $\alpha\in\mathsf{A}^{M}$ iff for every $\gamma\in\mathsf{A}^{M}$, $\gamma\rightsquigarrow\alpha$ implies that there is $\beta\in B$ s.t. $\beta\rightsquigarrow\gamma$. We say that $B\subseteq\mathsf{A}^{M}$ is a complete extension iff it is conflict-free and it contains precisely the elements of $\mathsf{A}^{M}$ that it defends. We say that $B\subseteq\mathsf{A}^{M}$ is the grounded extension of $AF^{M}=(\mathsf{A}^{M},\rightsquigarrow)$ iff it is the smallest (w.r.t. set inclusion) complete extension. We use $GE(AF^{M})$ to denote the grounded extension of $AF^{M}$. As is well known, the grounded extension of an argumentation framework always exists and is moreover unique [19]. The unfamiliar reader is referred to [9] for an extensive discussion of argumentation semantics.
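Since the grounded extension drives the definition of argument-based belief below, the following small sketch (ours; arguments are opaque objects and the defeat relation is given as a set of pairs) computes it for a finite framework as the least fixed point of the characteristic function $F(S)=\{\alpha\mid S\ \text{defends}\ \alpha\}$, which coincides with the smallest complete extension used above.

```python
# Grounded extension of a finite argumentation framework, computed by
# iterating the characteristic function from the empty set.

def defends(attacks, S, a):
    """S defends a iff every attacker of a is attacked by some member of S."""
    return all(any((c, b) in attacks for c in S)
               for (b, x) in attacks if x == a)

def grounded_extension(arguments, attacks):
    """arguments: finite iterable of (hashable) arguments;
    attacks: set of pairs (attacker, attacked)."""
    arguments = set(arguments)
    S = set()
    while True:
        new_S = {a for a in arguments if defends(attacks, S, a)}
        if new_S == S:
            return S
        S = new_S

# Toy usage: alpha defeats beta, beta defeats gamma; the grounded extension is
# {alpha, gamma} (alpha is unattacked and reinstates gamma).
if __name__ == "__main__":
    A = {"alpha", "beta", "gamma"}
    R = {("alpha", "beta"), ("beta", "gamma")}
    print(grounded_extension(A, R))   # {'alpha', 'gamma'}
```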

Finally, we use the grounded extension to define the argument-based beliefs of the agent. First, let us extend $\mathcal{L}=(\mathsf{F},\mathsf{A})$ to $\mathcal{L}^{\mathsf{AB}}=(\mathsf{F}^{\mathsf{AB}},\mathsf{A})$ by adding a new kind of formula, $\mathsf{B}(\alpha,\varphi)$, where $\alpha\in\mathsf{A}$ and $\varphi\in\mathsf{F}$. $\mathsf{B}(\alpha,\varphi)$ means that the agent believes that $\varphi$ based on argument $\alpha$. We interpret the new language in the same class of models by adding the truth clause:

$M,w\models\mathsf{B}(\alpha,\varphi)$ iff $\alpha\in GE(AF^{M})$ and $\mathsf{Conc}(\alpha)=\varphi$.

Note that $\models\mathsf{B}^{\mathsf{D}}(\alpha,\varphi)\to\mathsf{B}(\alpha,\varphi)$ and $\models\square^{e}\varphi\to\mathsf{B}(\langle\varphi\rangle,\varphi)$.

We close this section by analysing our notion of argument-based belief in light of [17]'s rationality postulates. In a nutshell, if no restrictions are imposed, our agent behaves according to a kind of minimal rationality (i.e. she does not explicitly believe inconsistencies). If, however, we add some ideal assumptions, then she satisfies all of [17]'s postulates.

Proposition 1.

Let $(M,w)$ be a pointed model for $\mathcal{L}=(\mathsf{F},\mathsf{A})$, where $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$. Let $AF^{M}$ be its associated argumentation framework. Then:

  • $AF^{M}$ satisfies direct consistency, that is, there are no $\alpha,\beta\in\mathsf{A}$ and $\varphi,\psi\in\mathsf{F}$ s.t. $M,w\models\mathsf{B}(\alpha,\varphi)\land\mathsf{B}(\beta,\psi)\land\varphi=\sim\psi$.

  • If restricted rebuttal is assumed and $\mathcal{O}=\mathsf{A}$, then $AF^{M}$ satisfies direct consistency; indirect consistency (that is, $\mathsf{Conc}(GE(AF^{M}))\nvdash_0\perp$); sub-argument closure (that is, $\alpha\in GE(AF^{M})$ implies $\mathsf{sub_{A}}(\alpha)\subseteq GE(AF^{M})$); and strict closure (that is, $\mathsf{Conc}(GE(AF^{M}))\vdash_0\varphi$ implies $\varphi\in\mathsf{Conc}(GE(AF^{M}))$).

Proof (sketched).

For the first item, we suppose the contrary, that is, that there are arguments $\alpha,\beta\in GE(AF^{M})$ s.t. $\mathsf{Conc}(\alpha)=\varphi$, $\mathsf{Conc}(\beta)=\psi$ and $\varphi$ is propositionally equivalent to the negation of $\psi$. Then, we continue by cases on the shape of $\alpha$ and $\beta$ (each of them can be either an atomic argument, or an argument whose last inference step is deductive (resp. defeasible)). Of the nine different cases, three are redundant. In each of the six remaining cases, it is easy to arrive at $\alpha\rightsquigarrow\beta$ or $\beta\rightsquigarrow\alpha$, which contradicts the assumption that both are in the grounded extension, because it is conflict-free.

For the second item, it suffices to show that under both assumptions (adopting the definition of restricted rebuttal and assuming $\mathcal{O}=\mathsf{A}$), we are just working with an instance of well-defined ASPIC+ frameworks (one constructed over a knowledge base where the set of ordinary premisses is empty), which is guaranteed to satisfy all [17]'s rationality postulates (see [32, Section 3.3] for details).∎

4 Dynamics of information

The current framework can shed some light on the relations between dynamics of information, argumentation and doxastic attitudes. We can distinguish several kinds of actions that have different potential effects on basic and argument-based beliefs. The framework naturally allows for the use of tools imported from dynamic epistemic logic (DEL) [18]. In particular, we can describe these actions using dynamic modalities, for which complete axiomatisations can then be provided by finding a full list of reduction axioms [28, 18, 41]. In order to do so, one first needs to show that the rule of replacement of proved equivalents is sound (it preserves validity) in the extended language (see [28] for details). Although this is not the case in $\mathcal{L}$, as happens with other languages containing awareness operators [21, 25], we can restrict the domain of application of the rule, and it still does the job for axiomatising certain dynamic extensions. More precisely, we will work with the rule:

(RE)

From $\varphi\leftrightarrow\psi$, infer $\delta\leftrightarrow\delta[\varphi/\psi]$,

with $\delta[\varphi/\psi]$ the result of replacing one or more non-$\star$ occurrences of $\psi$ in $\delta$ by $\varphi$. (A non-$\star$ occurrence of $\psi$ in $\delta$ is just an occurrence of $\psi$ in $\delta$ where $\psi$ is not inside the scope of any $\star\in\{\mathsf{aware},\mathsf{conc},\mathsf{wellshap},\mathsf{undercuts}\}$; note that we assume that $\varphi$ is inside the scope of $\mathsf{conc}$ in the formula $\mathsf{conc}(\alpha)=\varphi$.) Semantically, this amounts to showing that each of the actions $\mathsf{act}$ we are about to discuss is well defined, in the sense that whenever we compute $M^{\mathsf{act}}$ (the result of executing action $\mathsf{act}$ in model $M$), we stay in the intended class of models. When this does not happen, as is the case with many DEL actions (e.g. public announcements [18]), one needs to find a set of preconditions for the action. Preconditions work as sufficient conditions for the action to be “safe”, i.e., to ensure that after executing it, we stay in the intended class of models.

Let us start by defining four different actions. Let $\mathcal{L}=(\mathsf{F},\mathsf{A})$ be given, let $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ be an $\mathcal{L}$-model, let $\alpha\in\mathsf{A}$, let $R\in\mathsf{SEQ}(\mathsf{F})$, and let $\varphi\in\mathsf{F}$. We define:

  • The act of acquiring argument $\alpha$ (resp. forgetting argument $\alpha$) produces the model $M^{\alpha+!}:=(W,\mathcal{B},\mathcal{O}^{\alpha+!},\mathcal{D},\mathfrak{n},||\cdot||)$, where $\mathcal{O}^{\alpha+!}:=\mathcal{O}\cup\{\alpha\}$ (resp. $M^{\alpha-!}:=(W,\mathcal{B},\mathcal{O}^{\alpha-!},\mathcal{D},\mathfrak{n},||\cdot||)$, where $\mathcal{O}^{\alpha-!}:=\mathcal{O}\setminus\{\alpha\}$).

  • The act of accepting the defeasible rule $R$ produces the model $M^{R+!}:=(W,\mathcal{B},\mathcal{O},\mathcal{D}^{R+!},\mathfrak{n},||\cdot||)$, where $\mathcal{D}^{R+!}:=\mathcal{D}\cup\{R\}$.

  • The act of publicly announcing $\varphi$ produces the model $M^{\varphi!}:=(W^{\varphi!},\mathcal{B}^{\varphi!},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||^{\varphi!})$, where $W^{\varphi!}:=W\cap||\varphi||_{M}$; $\mathcal{B}^{\varphi!}:=\mathcal{B}\cap||\varphi||_{M}$; and $||p||^{\varphi!}_{M}:=||p||_{M}\cap||\varphi||_{M}$ for every atom $p$.

Interpretation.

Note that this list of actions is far from exhaustive; we analyse these four because they are natural adaptations of other actions studied in the literature [25, 18]. The most basic argumentative change we can think of consists in adding an argument to the awareness set of the agent. Informally, this can be thought of as the result of a communicative event (e.g. an opponent advancing an argument), of learning (the agent reading an argument in a book), or of reflection (the agent herself constructing the argument). Formally, the action is a direct generalisation of the “consider” action defined for sentences in [25, 14]. Its straightforward counterpart is the act of forgetting an argument (i.e. dropping it from the awareness set of the agent). As for the action $(\cdot)^{R+!}$, defeasible rules can also be learnt in different ways. For instance, an agent can learn the rule $((\mathsf{Bird}),\mathsf{Flies})$ because an ornithologist told her, because she repeatedly observed that birds fly, or because she read it in a textbook. Finally, public announcements are probably the most studied action in DEL (see e.g. [18, Chapter 4]). These announcements are supposed to be truthful and to come from a completely reliable source.
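To fix intuitions, here is a schematic rendering (ours, not the paper's code) of the four model transformations: a model is represented as a small dataclass, the naming function $\mathfrak{n}$ is omitted since no action modifies it, and the precondition checks anticipate the constraints that are made precise below.

```python
# Schematic and deliberately simplified rendering of the four updates.
# Arguments and rules are treated as opaque hashable objects, and the public
# announcement receives the truth set ||phi|| of the announced formula
# directly (computing it would require a full formula evaluator).

from dataclasses import dataclass, replace
from typing import FrozenSet, Dict, Hashable

@dataclass(frozen=True)
class Model:
    W: FrozenSet[Hashable]          # possible worlds
    B: FrozenSet[Hashable]          # doxastically indistinguishable worlds
    O: FrozenSet[Hashable]          # awareness set (arguments)
    D: FrozenSet[Hashable]          # accepted defeasible rules
    val: Dict[str, FrozenSet]       # atomic valuation ||.||

def acquire(M: Model, alpha) -> Model:          # (.)^{alpha+!}
    return replace(M, O=M.O | {alpha})

def forget(M: Model, alpha) -> Model:           # (.)^{alpha-!}
    return replace(M, O=M.O - {alpha})

def accept_rule(M: Model, R, consistent: bool, non_deductive: bool) -> Model:   # (.)^{R+!}
    # Precondition (cf. the constraints on D): the rule must be consistent and
    # not deductively valid. The two flags stand in for the |-_0 checks.
    if not (consistent and non_deductive):
        return M          # precondition fails: the action is simply not executed
    return replace(M, D=M.D | {R})

def announce(M: Model, truth_set: FrozenSet) -> Model:                           # (.)^{phi!}
    # Precondition: the announcement must not empty the belief set B.
    if not (M.B & truth_set):
        return M
    return Model(W=M.W & truth_set,
                 B=M.B & truth_set,
                 O=M.O, D=M.D,
                 val={p: ws & truth_set for p, ws in M.val.items()})
```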

We now define a dynamic language in order to talk about the different actions. Let $\mathcal{L}=(\mathsf{F},\mathsf{A})$ be a language; formulas of the extended language $\mathcal{L}^{!}=(\mathsf{F}^{!},\mathsf{A})$ are given by:

\[\varphi::=\psi\mid\lnot\varphi\mid(\varphi\land\varphi)\mid[\alpha+!]\varphi\mid[\alpha-!]\varphi\mid[R+!]\varphi\mid[\psi!]\varphi\qquad\psi\in\mathsf{F},\ \alpha\in\mathsf{A},\ R\in\mathsf{SEQ}(\mathsf{F})\text{.}\]

Let $[\mathsf{act}]$ be any of the dynamic modalities we have just defined; we use $\langle\mathsf{act}\rangle$ as an abbreviation of $\lnot[\mathsf{act}]\lnot$, with $\langle\mathsf{act}\rangle\varphi$ informally meaning that action $\mathsf{act}$ can be executed and, after executing it, $\varphi$ holds.

Note that the class of all models $\mathcal{M}$ is not closed under all of the defined actions. In particular, it is not closed under $(\cdot)^{R+!}$ nor under $(\cdot)^{\varphi!}$. For the former, the reason is that only rules that are consistent and non-deductive can be learnt as defeasible (see the definition of model in Section 2). For the latter, only truthful formulas that do not trivialise the beliefs of the agent (in the sense of making $\mathcal{B}$ empty) can be announced. This inconvenience is solved by fixing preconditions (expressible in $\mathcal{L}$) for both actions. Let $R=((\varphi_1,\ldots,\varphi_n),\varphi)\in\mathsf{SEQ}(\mathsf{F})$ and $\varphi\in\mathsf{F}$; we define:

$\mathsf{pre}(R):=\lnot\mathsf{wellshap}(\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle,\langle\varphi\rangle\twoheadrightarrow\perp\rangle)\land\lnot\mathsf{wellshap}(\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle\twoheadrightarrow\varphi\rangle)$; and
$\mathsf{pre}(\varphi!):=\varphi\land\lozenge\varphi$.

It is almost immediate to check that, for any pointed model $(M,w)$, any $R=((\varphi_1,\ldots,\varphi_n),\varphi)\in\mathsf{SEQ}(\mathsf{F})$, and any $\varphi\in\mathsf{F}$, we have that:

($\{\varphi_1,\ldots,\varphi_n,\varphi\}\nvdash_0\perp$ and $\{\varphi_1,\ldots,\varphi_n\}\nvdash_0\varphi$) iff $M,w\models\mathsf{pre}(R)$; and

($w\in||\varphi||_{M}$ and $||\varphi||_{M}\cap\mathcal{B}\neq\emptyset$) iff $M,w\models\mathsf{pre}(\varphi!)$.

Moreover, note that $M,w\models\mathsf{pre}(R)$ iff $M,u\models\mathsf{pre}(R)$ for every $u\in W$. Let $(M,w)$ be a pointed model with $M=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$; we define the truth clauses for the new kinds of formulas:

$M,w\models[\alpha+!]\varphi$ iff $M^{\alpha+!},w\models\varphi$,
$M,w\models[\alpha-!]\varphi$ iff $M^{\alpha-!},w\models\varphi$,
$M,w\models[R+!]\varphi$ iff $M,w\models\mathsf{pre}(R)$ implies $M^{R+!},w\models\varphi$,
$M,w\models[\varphi!]\psi$ iff $M,w\models\mathsf{pre}(\varphi!)$ implies $M^{\varphi!},w\models\psi$.

Finally, we establish a completeness result for $\mathcal{L}^{!}$ w.r.t. $\mathcal{M}$. Note that, in Table 2, $\pm$ denotes an arbitrary element of $\{+,-\}$.

Proposition 2.

The proof system $\mathsf{L}^{!}_{\mathsf{BA}}$ that extends the one of Table 1 with all the axioms of Table 2 and is closed under (RE) is sound and complete for $\mathcal{L}^{!}$ w.r.t. $\mathcal{M}$.

Proof.

Soundness follows from the validity of all axioms and the validity-preserving character of (RE) in the extended language. Completeness follows from the usual reduction argument. In short, note that in the right-hand side of each axiom of Table 2, either the dynamic operator disappears or it is applied to a less complex formula than in the left-hand side. In the case of the reduction axioms for $[R+!]\mathsf{wellshap}(\alpha)$, either no dynamic modalities occur in the right-hand side of the equivalence or they are applied to $\mathsf{wellshap}$-formulas with less complex arguments than in the left-hand side. Therefore, we can define a meaning-preserving translation from $\mathsf{F}^{!}$ to $\mathsf{F}$ that, together with Theorem 1, provides the desired result. The validity-preserving character of (RE) in the extended language w.r.t. $\mathcal{M}$ takes care of formulas with nested dynamic modalities. The reader is referred to [28] for details.∎

We close this section by modelling a toy example, inspired by [39], illustrating how actions affect argument-based beliefs. Suppose that an agent is wondering whether another agent, Harry, is a British subject ($\mathsf{br}$). Suppose that the only basic-explicit belief she holds at the beginning is that Harry was born in Bermuda ($\mathsf{be}$). Other pieces of relevant information are: Harry's parents are aliens ($\mathsf{a}$), and the rule “If Harry is born in Bermuda, then he is presumably a British subject” is applicable ($\mathsf{r1}$). Let $R_1=((\mathsf{be}),\mathsf{br})$. We start with the model $M_0=(W,\mathcal{B},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$, where $W=\mathcal{B}=\{w_0,w_1,w_2,w_3\}$, $\mathcal{D}=\emptyset$, $\mathcal{O}=\{\langle\mathsf{be}\rangle\}$, $\mathfrak{n}(R_1)=\mathsf{r1}$, $||\mathsf{be}||=W$, $||\mathsf{br}||=||\mathsf{r1}||=\{w_0,w_2\}$, and $||\mathsf{a}||=\{w_0,w_1\}$. It is then easy to check that $M_0,w_0\models\square^{e}\mathsf{be}$. Moreover, we have that $M_0,w_0\models[R_1+!][\alpha^{R_1}+!]\mathsf{B}(\alpha^{R_1},\mathsf{br})$. In words, after learning the rule $R_1$ and becoming aware of the simplest argument using it, i.e. $\langle\langle\mathsf{be}\rangle\Rightarrow\mathsf{br}\rangle$, the agent has an argument-based belief that Harry is a British subject. If, however, the agent subsequently learns from a completely trustworthy source that Harry's parents are aliens ($\mathsf{a}$), together with the rule $R_2=((\mathsf{a}),\lnot\mathsf{r1})$ and the argument $\langle\langle\mathsf{a}\rangle\Rightarrow\lnot\mathsf{r1}\rangle$, then she revises her argument-based belief about Harry's nationality. In symbols, $M_0,w_0\models[R_1+!][\alpha^{R_1}+!][\mathsf{a}!][R_2+!][\alpha^{R_2}+!]\lnot\mathsf{B}(\alpha^{R_1},\mathsf{br})$.
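As a small, self-contained illustration of the announcement step $[\mathsf{a}!]$ in this example (our own sketch; the dictionary encoding of $||\cdot||$ is a simplification, and the awareness set, rules and naming function, which the announcement leaves untouched, are omitted), the following snippet restricts $W$, $\mathcal{B}$ and the valuation of $M_0$ to $||\mathsf{a}||$ and shows the effect on the belief set:

```python
# Public announcement of 'a' on (a fragment of) the toy model M_0:
# worlds, belief set and valuation are restricted to ||a|| = {w0, w1}.

W = {"w0", "w1", "w2", "w3"}
B = set(W)                                   # B = W in M_0
val = {"be": {"w0", "w1", "w2", "w3"},
       "br": {"w0", "w2"},
       "r1": {"w0", "w2"},
       "a":  {"w0", "w1"}}

truth_set = val["a"]                         # ||a||_{M_0}
assert "w0" in truth_set and B & truth_set   # precondition pre(a!) holds at w0

W_new = W & truth_set                        # {'w0', 'w1'}
B_new = B & truth_set                        # {'w0', 'w1'}
val_new = {p: ws & truth_set for p, ws in val.items()}

print(B_new)            # {'w0', 'w1'}: the agent no longer considers w2, w3
print(val_new["r1"])    # {'w0'}: among the remaining worlds, r1 holds only at w0
```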

Reduction axioms for $[\alpha\pm!]$:
$[\alpha\pm!]p\leftrightarrow p$
$[\alpha\pm!]\lnot\varphi\leftrightarrow\lnot[\alpha\pm!]\varphi$
$[\alpha\pm!](\varphi\land\psi)\leftrightarrow([\alpha\pm!]\varphi\land[\alpha\pm!]\psi)$
$[\alpha\pm!]\square\varphi\leftrightarrow\square[\alpha\pm!]\varphi$
$[\alpha\pm!]\mathsf{aware}(\beta)\leftrightarrow\mathsf{aware}(\beta)$  for $\alpha\neq\beta$
$[\alpha+!]\mathsf{aware}(\alpha)\leftrightarrow\top$
$[\alpha-!]\mathsf{aware}(\alpha)\leftrightarrow\perp$
$[\alpha\pm!]\mathsf{conc}(\beta)=\varphi\leftrightarrow\mathsf{conc}(\beta)=\varphi$
$[\alpha\pm!]\mathsf{strict}(\beta)\leftrightarrow\mathsf{strict}(\beta)$
$[\alpha\pm!]\mathsf{undercuts}(\beta,\gamma)\leftrightarrow\mathsf{undercuts}(\beta,\gamma)$
$[\alpha\pm!]\mathsf{wellshap}(\beta)\leftrightarrow\mathsf{wellshap}(\beta)$

Reduction axioms for $[\varphi!]$:
$[\varphi!]p\leftrightarrow(\mathsf{pre}(\varphi!)\to p)$
$[\varphi!]\lnot\psi\leftrightarrow(\mathsf{pre}(\varphi!)\to\lnot[\varphi!]\psi)$
$[\varphi!](\delta\land\psi)\leftrightarrow([\varphi!]\delta\land[\varphi!]\psi)$
$[\varphi!]\square\psi\leftrightarrow(\mathsf{pre}(\varphi!)\to\square[\varphi!]\psi)$
$[\varphi!]\mathsf{aware}(\beta)\leftrightarrow(\mathsf{pre}(\varphi!)\to\mathsf{aware}(\beta))$
$[\varphi!]\mathsf{conc}(\beta)=\psi\leftrightarrow(\mathsf{pre}(\varphi!)\to\mathsf{conc}(\beta)=\psi)$
$[\varphi!]\mathsf{strict}(\beta)\leftrightarrow(\mathsf{pre}(\varphi!)\to\mathsf{strict}(\beta))$
$[\varphi!]\mathsf{undercuts}(\beta,\gamma)\leftrightarrow(\mathsf{pre}(\varphi!)\to\mathsf{undercuts}(\beta,\gamma))$
$[\varphi!]\mathsf{wellshap}(\beta)\leftrightarrow(\mathsf{pre}(\varphi!)\to\mathsf{wellshap}(\beta))$

Reduction axioms for $[R+!]$:
$[R+!]p\leftrightarrow(\mathsf{pre}(R)\to p)$
$[R+!]\lnot\varphi\leftrightarrow(\mathsf{pre}(R)\to\lnot[R+!]\varphi)$
$[R+!](\varphi\land\psi)\leftrightarrow([R+!]\varphi\land[R+!]\psi)$
$[R+!]\square\varphi\leftrightarrow\square[R+!]\varphi$
$[R+!]\mathsf{aware}(\alpha)\leftrightarrow(\mathsf{pre}(R)\to\mathsf{aware}(\alpha))$
$[R+!]\mathsf{conc}(\alpha)=\varphi\leftrightarrow(\mathsf{pre}(R)\to\mathsf{conc}(\alpha)=\varphi)$
$[R+!]\mathsf{strict}(\alpha)\leftrightarrow(\mathsf{pre}(R)\to\mathsf{strict}(\alpha))$
$[R+!]\mathsf{undercuts}(\alpha,\beta)\leftrightarrow(\mathsf{pre}(R)\to\mathsf{undercuts}(\alpha,\beta))$
$[R+!]\mathsf{wellshap}(\langle\varphi\rangle)\leftrightarrow\top$
$[R+!]\mathsf{wellshap}(\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle\twoheadrightarrow\varphi\rangle)\leftrightarrow(\mathsf{pre}(R)\to\mathsf{wellshap}(\langle\langle\varphi_1\rangle,\ldots,\langle\varphi_n\rangle\twoheadrightarrow\varphi\rangle))$
$[R+!]\mathsf{wellshap}(\alpha^{R})\leftrightarrow\top$
$[R+!]\mathsf{wellshap}(\alpha^{R'})\leftrightarrow(\mathsf{pre}(R)\to\mathsf{wellshap}(\alpha^{R'}))$  whenever $R\neq R'$
$[R+!]\mathsf{wellshap}(\langle\alpha_1,\ldots,\alpha_n\hookrightarrow\varphi\rangle)\leftrightarrow\Big(\mathsf{pre}(R)\to\big(\bigwedge_{1\leq i\leq n}[R+!]\mathsf{wellshap}(\alpha_i)\land[R+!]\mathsf{wellshap}(\langle\langle\mathsf{Conc}(\alpha_1)\rangle,\ldots,\langle\mathsf{Conc}(\alpha_n)\rangle\hookrightarrow\varphi\rangle)\big)\Big)$
Table 2: Reduction axioms for $\mathcal{L}^{!}$.

5 Concluding remarks

Closely related work.

Of all the works we have commented on throughout the paper, [25, 26] and [37] seem to be the closest to our approach. Regarding [25, 26], we have somewhat generalised their awareness of rules to our awareness of arguments (abstracting away from other forms of awareness treated there). As for [37], their choice of modelling arguments semantically (as opens of a topology) permits a transparent axiomatisation of their notion of argument-based belief, which is easily guaranteed to be consistent (two of the weaknesses of our approach). On the other hand, we naturally treat arguments as first-class citizens in our language, and the argument-based beliefs of our agent escape every form of logical omniscience (while the beliefs of [37]'s agent are still closed under equivalent formulas).

Future work.

There are natural open paths for future work. An urgent task in the development of the logical aspects of the framework consists in axiomatising (if possible) the argument-based belief operator $\mathsf{B}(\cdot,\cdot)$. Moreover, the modal semantic apparatus of our models could be extended to plausibility structures [8], so as to model fine-grained preferences between arguments based on the agent's basic epistemic attitudes toward the premisses of the involved arguments (e.g. known premisses are to be preferred to strongly believed premisses, and the latter, in turn, to merely believed premisses). Finally, a multi-agent extension of the current framework could be used to model argument exchange in different kinds of scenarios (e.g. deliberation, persuasion dialogues or inquiry).

References

  • [2] Sergei Artemov (2018): Justification Awareness Models. In Sergei Artemov & Anil Nerode, editors: Logical Foundations of Computer Science, LNCS 10703, Springer, pp. 22–36, 10.1007/978-3-319-72056-22.
  • [3] Sergei Artemov & Melvin Fitting (2016): Justification Logic. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
  • [4] Sergei Artemov & Elena Nogina (2005): Introducing justification into epistemic logic. Journal of Logic and Computation 15(6), pp. 1059–1073, 10.1093/logcom/exi053.
  • [5] Sergei N Artemov (2012): The ontology of justifications in the logical setting. Studia Logica 100(1-2), pp. 17–30, 10.1007/s11225-012-9387-x.
  • [6] Alexandru Baltag, Nick Bezhanishvili, Aybüke Özgün & Sonja Smets (2016): Justified Belief and the Topology of Evidence. In Jouko Väänänen, Åsa Hirvonen & Ruy de Queiroz, editors: Logic, Language, Information, and Computation, LNCS 9803, Springer, pp. 83–103, 10.1007/978-3-662-52921-86.
  • [7] Alexandru Baltag, Bryan Renne & Sonja Smets (2012): The Logic of Justified Belief Change, Soft Evidence and Defeasible Knowledge. In Luke Ong & Ruy de Queiroz, editors: Logic, Language, Information and Computation (WoLLIC 2012), LNCS 7456, Springer, pp. 168–190, 10.1007/978-3-642-32621-9_13.
  • [8] Alexandru Baltag & Sonja Smets (2008): A qualitative theory of dynamic interactive belief revision. In Wiebe van der Hoek, Giacomo Bonanno & Michael Wooldridge, editors: Logic and the foundations of game and decision theory (LOFT 7), Texts in Logic and Games 3, Amsterdam University Press, pp. 9–58.
  • [9] Pietro Baroni, Martin Caminada & Massimiliano Giacomin (2018): Abstract argumentation frameworks and their semantics. In Pietro Baroni, Dov M. Gabbay, Massimilino Giacomin & Leendert van der Torre, editors: Handbook of formal argumentation, College Publications, pp. 159–236.
  • [10] Mathieu Beirlaen, Jesse Heyninck, Pere Pardo & Christian Straßer (2018): Argument strength in formal argumentation. IfCoLog Journal of Logics and their Applications 5(3), pp. 629–675.
  • [11] Trevor JM Bench-Capon & Paul E Dunne (2007): Argumentation in artificial intelligence. Artificial intelligence 171(10-15), pp. 619–641, 10.1016/j.artint.2007.05.001.
  • [12] Johan van Benthem, David Fernández-Duque & Eric Pacuit (2014): Evidence and plausibility in neighborhood structures. Annals of Pure and Applied Logic 165(1), pp. 106–133, 10.1016/j.apal.2013.07.007.
  • [13] Johan van Benthem, David Fernández-Duque, Eric Pacuit et al. (2012): Evidence Logic: A New Look at Neighborhood Structures. Advances in modal logic 9, pp. 97–118.
  • [14] Johan van Benthem & Fernando R Velázquez-Quesada (2010): The dynamics of awareness. Synthese 177(1), pp. 5–27, 10.1007/s11229-010-9764-9.
  • [15] Patrick Blackburn, Maarten De Rijke & Yde Venema (2010): Modal Logic. Cambridge University Press, 10.1017/CBO9781107050884.
  • [16] Alfredo Burrieza & Antonio Yuste-Ginel (2020): Basic beliefs and argument-based beliefs in awareness epistemic logic with structured arguments. In H. Prakken, S. Bistarelli, F. Santini & C. Taticchi, editors: Proceedings of the COMMA 2020, IOS Press, pp. 123–134, 10.3233/FAIA200498.
  • [17] Martin Caminada & Leila Amgoud (2007): On the evaluation of argumentation formalisms. Artificial Intelligence 171(5-6), pp. 286–310, 10.1016/j.artint.2007.02.003.
  • [18] Hans van Ditmarsch, Wiebe van der Hoek & Barteld Kooi (2007): Dynamic epistemic logic. Springer, 10.1007/978-1-4020-5839-4.
  • [19] Phan Minh Dung (1995): On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2), pp. 321–357, 10.1016/0004-3702(94)00041-X.
  • [20] Frans H. van Eemeren, Bart Garssen, Erik C. W. Krabbe, A. Francisca Snoeck Henkemans, Bart Verheij & Jean H. M. Wagemans (2014): Handbook of Argumentation Theory. Springer, 10.1007/978-90-481-9473-5.
  • [21] Ronald Fagin & Joseph Y Halpern (1987): Belief, awareness, and limited reasoning. Artificial intelligence 34(1), pp. 39–76, 10.1016/0004-3702(87)90003-8.
  • [22] Ronald Fagin, Joseph Y Halpern, Yoram Moses & Moshe Vardi (2004): Reasoning about knowledge. MIT press, 10.7551/mitpress/5803.001.0001.
  • [23] Konstantin Genin & Franz Huber (2021): Formal Representations of Belief. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
  • [24] Davide Grossi & Wiebe van der Hoek (2014): Justified Beliefs by Justified Arguments. In Chitta Baral, Giuseppe De Giacomo & Thomas Eiter, editors: Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, AAAI Press, pp. 131–140, 10.5555/3031929.3031947.
  • [25] Davide Grossi & Fernando R. Velázquez-Quesada (2009): Twelve Angry Men: A Study on the Fine-Grain of Announcements. In Xiangdong He, John Horty & Eric Pacuit, editors: Logic, Rationality, and Interaction, Springer, pp. 147–160, 10.1007/978-3-642-04893-7_12.
  • [26] Davide Grossi & Fernando R Velázquez-Quesada (2015): Syntactic awareness in logical dynamics. Synthese 192(12), pp. 4071–4105, 10.1007/s11229-015-0733-1.
  • [27] Ali Hasan & Richard Fumerton (2018): Foundationalist Theories of Epistemic Justification. In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University.
  • [28] Barteld Kooi (2007): Expressivity and completeness for public update logics via reduction axioms. Journal of Applied Non-Classical Logics 17(2), pp. 231–253, 10.3166/jancl.17.231-253.
  • [29] Xu Li & Yì N. Wáng (2020): A Logic of Knowledge and Belief Based on Abstract Arguments. In Mehdi Dastani, Huimin Dong & Leon van der Torre, editors: Logic and Argumentation, Springer, pp. 116–130, 10.1007/978-3-030-44638-3_8.
  • [30] Hugo Mercier & Dan Sperber (2011): Why do humans reason? Arguments for an argumentative theory. Behavioral and brain sciences 34(2), pp. 57–74, 10.1017/S0140525X10000968.
  • [31] Sanjay Modgil & Henry Prakken (2013): A general account of argumentation with preferences. Artificial Intelligence 195, pp. 361–397, 10.1016/j.artint.2012.10.008.
  • [32] Sanjay Modgil & Henry Prakken (2018): Abstract rule-based argumentation. In Pietro Baroni, Dov M. Gabbay, Massimilino Giacomin & Leendert van der Torre, editors: Handbook of formal argumentation, College Publications, pp. 287–364.
  • [33] Sanjay Modgil, Francesca Toni, Floris Bex, Ivan Bratko, Carlos I. Chesñevar, Wolfgang Dvořák, Marcelo A. Falappa, Xiuyi Fan, Sarah Alice Gaggl, Alejandro J. García, María P. González, Thomas F. Gordon, João Leite, Martin Možina, Chris Reed, Guillermo R. Simari, Stefan Szeider, Paolo Torroni & Stefan Woltran (2013): The Added Value of Argumentation, pp. 357–403. Springer, 10.1007/978-94-007-5583-3_21.
  • [34] Carlo Proietti & Antonio Yuste-Ginel (2021): Dynamic epistemic logics for abstract argumentation. Synthese, 10.1007/s11229-021-03178-5.
  • [35] Chiaki Sakama & Tran Cao Son (2020): Epistemic Argumentation Framework: Theory and Computation. Journal of Artificial Intelligence Research 69, pp. 1103–1126, 10.1613/jair.1.12121.
  • [36] François Schwarzentruber, Srdjan Vesic & Tjitze Rienstra (2012): Building an Epistemic Logic for Argumentation. In Luis Fariñas del Cerro, Andreas Herzig & Jérôme Mengin, editors: Logics in Artificial Intelligence, LNCS 7519, Springer, pp. 359–371, 10.1007/978-3-642-33353-8_28.
  • [37] Chenwei Shi, Sonja Smets & Fernando R Velázquez-Quesada (2017): Argument-based belief in topological structures. In J Lang, editor: Proceedings TARK 2017. EPTCS, Open Publishing Association, 10.4204/EPTCS.251.36.
  • [38] Dan Sperber (1997): Intuitive and reflective beliefs. Mind & Language 12(1), pp. 67–83, 10.1111/j.1468-0017.1997.tb00062.x.
  • [39] Stephen E Toulmin ([1958] 2003): The uses of argument. Cambridge university press, 10.1017/CBO9780511840005.
  • [40] Fernando R Velázquez-Quesada (2014): Dynamic epistemic logic for implicit and explicit beliefs. Journal of Logic, Language and Information 23(2), pp. 107–140, 10.1007/s10849-014-9193-0.
  • [41] Yanjing Wang & Qinxiang Cao (2013): On axiomatizations of public announcement logic. Synthese 190(1), pp. 103–134, 10.1007/s11229-012-0233-5.
  • [42] Zhe Yu, Kang Xu & Beishui Liao (2018): Structured argumentation: Restricted rebut vs. unrestricted rebut. Studies in Logic 11(3), pp. 3–17.

Appendix (Proof sketch of Theorem 1)

The outline of the proof is as follows. We first define a new class of (non-standard) models for our language, namely Kripke models in which the syntactic components (awareness, accepted rules and names of rules) are kept constant along the accessibility relation. We then show two things: (i) we can go from pointed Kripke models to their generated submodels without losing $\mathcal{L}$-information (just as in the general modal case); and (ii) we can systematically transform generated Kripke submodels into our models (again, without losing $\mathcal{L}$-information). Finally, we prove completeness w.r.t. the class of non-standard models and apply (i) and (ii) to obtain the desired result. Let us unfold some of the details.

First of all, we define a Kripke model for $\mathcal{L}=(\mathsf{F},\mathsf{A})$ as a tuple $S=(W,\mathcal{R},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ where: $W\neq\emptyset$ is a set of possible worlds; $\mathcal{R}\subseteq W\times W$ is a serial, transitive and euclidean relation; $\mathcal{O}:W\to\wp(\mathsf{A})$ is a function assigning an awareness set $\mathcal{O}(w)$ to each world $w$; $\mathcal{D}:W\to\wp(\mathsf{SEQ}(\mathsf{F}))$ is a function assigning a set of accepted defeasible rules $\mathcal{D}(w)$ to each world $w$; $\mathfrak{n}:(W\times\mathsf{SEQ}(\mathsf{F}))\to\mathsf{At}$ is a (possibly partial) naming function for defeasible rules, where $\mathfrak{n}(w,R)$ informally reads "the defeasible rule $R$ is applicable at $w$"; and $||\cdot||:\mathsf{At}\to\wp(W)$ is a valuation function. Moreover, we assume that for every $w,w'\in W$, $w\mathcal{R}w'$ implies $\mathcal{O}(w)=\mathcal{O}(w')$, $\mathcal{D}(w)=\mathcal{D}(w')$, and $\mathfrak{n}(w,R)=\mathfrak{n}(w',R)$. We also assume that if $((\varphi_{1},\ldots,\varphi_{n}),\varphi)\in\mathcal{D}(w)$, then $\{\varphi_{1},\ldots,\varphi_{n},\varphi\}\nvdash_{0}\perp$ and $\{\varphi_{1},\ldots,\varphi_{n}\}\nvdash_{0}\varphi$.

Note that now the $WS$ sets depend on both the model and the world under consideration (since $\mathcal{D}$ may vary from one world to another). Consequently, we write $WS^{S}(w)$ for the set of well-shaped arguments at $(S,w)$.

Truth w.r.t. pointed Kripke models is denoted by $\models_{k}$ and defined as follows (the missing clauses are as expected):

$S,w\models_{k}\square\varphi$ iff $S,v\models_{k}\varphi$ for every $v\in W$ such that $w\mathcal{R}v$
$S,w\models_{k}\mathsf{aware}(\alpha)$ iff $\alpha\in\mathcal{O}(w)$
$S,w\models_{k}\mathsf{wellshap}(\alpha)$ iff $\alpha\in WS^{S}(w)$
$S,w\models_{k}\mathsf{undercuts}(\alpha,\beta)$ iff $\beta=\langle\beta_{1},\ldots,\beta_{n}\Rightarrow\varphi\rangle$ and $\mathsf{Conc}(\alpha)=\lnot\mathfrak{n}(w,\mathsf{TopRule}(\beta))$.
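
To illustrate the two definitions above, here is a minimal Python sketch (ours, purely illustrative and not part of the formal development): it encodes a two-world Kripke model in which the hypothetical strings "alpha", "R1", "p" and "q" stand for an argument, a defeasible rule and two atoms, and it evaluates the $\square$ and $\mathsf{aware}$ clauses; the remaining clauses are omitted.

# Toy Kripke model: W = {w, v}, with R serial, transitive and euclidean.
R = {("w", "v"), ("v", "v")}
O = {"w": {"alpha"}, "v": {"alpha"}}            # awareness sets
D = {"w": {"R1"}, "v": {"R1"}}                  # accepted defeasible rules
n = {("w", "R1"): "p", ("v", "R1"): "p"}        # naming function for rules
val = {"q": {"v"}}                              # valuation for atoms

def box(w, phi):
    # S,w |=k []phi  iff  phi holds at every R-successor of w
    return all(phi(v) for (u, v) in R if u == w)

def aware(w, arg):
    # S,w |=k aware(arg)  iff  arg belongs to the awareness set O(w)
    return arg in O[w]

def atom(w, p):
    return w in val.get(p, set())

print(box("w", lambda v: atom(v, "q")))        # True: v is the only successor of w and q holds there
print(aware("w", "alpha"))                     # True: evaluated locally, without looking at successors

# The invariance constraints of the definition hold along R.
assert all(O[u] == O[v] and D[u] == D[v] and
           n.get((u, "R1")) == n.get((v, "R1")) for (u, v) in R)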

We say that a Kripke model $S=(W,\mathcal{R},\mathcal{O},\mathcal{D},\mathfrak{n},||\cdot||)$ is uniform iff for every $w,w'\in W$ it holds that: (i) $\mathcal{O}(w)=\mathcal{O}(w')$; (ii) $\mathcal{D}(w)=\mathcal{D}(w')$; and (iii) $\mathfrak{n}(w,R)=\mathfrak{n}(w',R)$ for every $R\in\mathcal{D}(w)$. $\mathcal{K}$ denotes the class of all pointed Kripke models, and $\mathcal{K}^{u}$ denotes the class of all uniform pointed Kripke models. Abusing notation, we use $\mathcal{M}$ to denote the class of all pointed models (the standard ones defined in Section 2).

Transformation lemmas.

Now, we need a couple of lemmas. The first one says that we can go from pointed Kripke models to uniform pointed Kripke models without losing $\mathcal{L}$-information, by taking generated submodels. We use $S^{w}$ to denote the submodel of $S$ generated by $w$ (see [15, Chapter 2]).

Lemma 1.

Let $(S,w)\in\mathcal{K}$. We have that:

  • i) $(S^{w},w)\in\mathcal{K}^{u}$, i.e. each pointed generated submodel of a Kripke model is a uniform Kripke model.

  • ii) For every $\varphi\in\mathsf{F}$, $(S,w)\models_{k}\varphi$ iff $(S^{w},w)\models_{k}\varphi$, i.e. truth is preserved under generated submodels.

Item i) follows easily from the definitions of generated submodel and uniform Kripke model. Item ii) can be proved by induction on $\varphi$.
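
As an informal illustration of item i), the following Python sketch (again ours, on made-up data) computes the domain of the submodel generated by a world, i.e. the worlds reachable from it along $\mathcal{R}$, and checks that the syntactic components become uniform on that domain.

# Toy model with an extra world u that is not reachable from w.
R = {("w", "v"), ("v", "v"), ("u", "u")}
O = {"w": {"alpha"}, "v": {"alpha"}, "u": set()}   # O differs only at the unreachable u

def generated_domain(root):
    # Worlds reachable from root (including root) along R.
    dom, frontier = {root}, {root}
    while frontier:
        frontier = {y for (x, y) in R if x in frontier} - dom
        dom |= frontier
    return dom

dom = generated_domain("w")
print(sorted(dom))                                 # ['v', 'w']

# On the generated domain all awareness sets coincide, so the generated
# submodel is uniform (the same reasoning applies to D and to the naming
# function, using the invariance of these components along R).
print(len({frozenset(O[x]) for x in dom}) == 1)    # True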

The second lemma says that we can go from uniform pointed Kripke models to our models (the standard ones, defined in Section 2) without losing $\mathcal{L}$-information.

Lemma 2.

For every uniform pointed Kripke model $(S,v)\in\mathcal{K}^{u}$, there is a pointed model $(M,w)\in\mathcal{M}$ s.t. for every $\varphi\in\mathsf{F}$:

$S,v\models_{k}\varphi$ iff $M,w\models\varphi$.

Let us define the function $\tau$ for each uniform pointed Kripke model as follows: $\tau(S,w)=(\tau(S),\tau(w))$, where $\tau(w)=w$ and $\tau(S)=(\tau(W),\tau(\mathcal{R}),\tau(\mathcal{O}),\tau(\mathcal{D}),\tau(\mathfrak{n}),\tau(||\cdot||))$ s.t.:

$\tau(W):=\{w\}\cup\mathcal{R}[w]$,
$\tau(\mathcal{R}):=\mathcal{R}[w]$,
$\tau(\mathcal{O}):=\mathcal{O}(w)$,
$\tau(\mathcal{D}):=\mathcal{D}(w)$,
$\tau(\mathfrak{n}):=\{(R,p)\in\mathsf{SEQ}(\mathsf{F})\times\mathsf{At}\mid\mathfrak{n}(w,R)=p\}$,
$\tau(||p||):=||p||\cap\tau(W)$ for every $p\in\mathsf{At}$.

Now, it is easy to check that $\tau((S,w))\in\mathcal{M}$ for every $(S,w)\in\mathcal{K}^{u}$, that is, $\tau:\mathcal{K}^{u}\to\mathcal{M}$. Once this is done, we can show that, for every $\varphi\in\mathsf{F}$, it holds that:

$S,w\models_{k}\varphi$ iff $\tau(S,w)\models\varphi$.

The proof of the last assertion is by induction on $\varphi$, where the step for $\varphi=\mathsf{wellshap}(\alpha)$ requires another inductive argument (on the construction of $\alpha$).
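
The following Python sketch applies the clauses of $\tau$ to a toy uniform pointed Kripke model with designated world $w$; it is only schematic, since the standard models of Section 2 are not reproduced here, and the dictionary returned below merely mirrors the components $\tau(W),\ldots,\tau(||\cdot||)$ listed above.

# Toy uniform pointed Kripke model, designated world "w".
R = {("w", "v"), ("v", "v")}
O = {"w": {"alpha"}, "v": {"alpha"}}
D = {"w": {"R1"}, "v": {"R1"}}
n = {("w", "R1"): "p", ("v", "R1"): "p"}
val = {"q": {"v"}}

def tau(w):
    # tau(W) := {w} union R[w];  tau(R) := R[w];  the syntactic components
    # are frozen at the designated world; the valuation is restricted.
    successors = {y for (x, y) in R if x == w}
    new_W = {w} | successors
    return {
        "W": new_W,
        "R": successors,
        "O": O[w],
        "D": D[w],
        "n": {rule: p for ((x, rule), p) in n.items() if x == w},
        "val": {p: worlds & new_W for p, worlds in val.items()},
    }

M = tau("w")
print(sorted(M["W"]), sorted(M["R"]), M["n"])      # ['v', 'w'] ['v'] {'R1': 'p'}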

Completeness w.r.t. Kripke models.

We can now define the canonical Kripke model for $\mathcal{L}$ as:

$S^{c}=(W^{c},\mathcal{R}^{c},\mathcal{O}^{c},\mathcal{D}^{c},\mathfrak{n}^{c},||\cdot||^{c})$,

where the definition of $W^{c}$, $\mathcal{R}^{c}$ and $||\cdot||^{c}$ is as usual in modal logic [15], while the definition of the remaining elements mimics that of awareness operators [21]:

$\mathcal{O}^{c}(\Gamma):=\{\alpha\in\mathsf{A}\mid\mathsf{aware}(\alpha)\in\Gamma\}$,

$\mathcal{D}^{c}(\Gamma):=\{R\in\mathsf{SEQ}(\mathsf{F})\mid\mathsf{wellshap}(\alpha^{R})\in\Gamma\}$,

$((\Gamma,R),p)\in\mathfrak{n}^{c}$ iff $\mathsf{undercuts}(\langle\lnot p\rangle,\alpha^{R})\in\Gamma$.

Now, we need to prove:

Lemma 3 (Canonicity).

$S^{c}$ is a Kripke model for $\mathcal{L}$.

To show that $S^{c}$ satisfies all the required conditions, we reason using properties of maximally consistent sets together with our axiom system. As illustrations: the semantic restrictions on the accessibility relation follow from (Ax1) (see e.g. [22] or [15]), while (Ax19) permits showing that $\mathfrak{n}^{c}$ is a function.

Lemma 4 (Truth).

For every $\varphi\in\mathsf{F}$: $\varphi\in\Gamma$ iff $S^{c},\Gamma\models_{k}\varphi$.

The proof proceeds by induction on $\varphi$. The Boolean and modal cases are standard [15]. The cases for the operators $\mathsf{aware}$, $\mathsf{conc}$ and $\mathsf{strict}$ are straightforward (they actually do not use the induction hypothesis, due to their syntactic character). The cases for $\varphi=\mathsf{undercuts}(\alpha,\beta)$ and $\varphi=\mathsf{wellshap}(\alpha)$ are slightly more involved; for the latter, another inductive argument on the structure of $\alpha$ is required.

Completeness w.r.t. standard models.

Finally, completeness w.r.t. standard models can be proved as follows. Suppose $\Gamma\nvdash\varphi$; then $\Gamma\cup\{\lnot\varphi\}$ is consistent. By the Lindenbaum lemma, there is a $\Gamma^{+}\in W^{c}$ s.t. $\Gamma\cup\{\lnot\varphi\}\subseteq\Gamma^{+}$. By the Truth Lemma, $S^{c},\Gamma^{+}\models_{k}\Gamma\cup\{\lnot\varphi\}$. By item ii) of Lemma 1, $S^{c\,\Gamma^{+}},\Gamma^{+}\models_{k}\Gamma\cup\{\lnot\varphi\}$, and by item i), $(S^{c\,\Gamma^{+}},\Gamma^{+})$ is a uniform pointed Kripke model. Then, by Lemma 2, $\tau(S^{c\,\Gamma^{+}},\Gamma^{+})\models\Gamma\cup\{\lnot\varphi\}$, which implies, by the definition of semantic consequence, that $\Gamma\nvDash\varphi$.
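
Schematically, the argument chains the previous results as follows:

$\Gamma\nvdash\varphi \;\Rightarrow\; \Gamma\cup\{\lnot\varphi\}\ \text{consistent} \;\Rightarrow\; S^{c},\Gamma^{+}\models_{k}\Gamma\cup\{\lnot\varphi\} \;\Rightarrow\; S^{c\,\Gamma^{+}},\Gamma^{+}\models_{k}\Gamma\cup\{\lnot\varphi\} \;\Rightarrow\; \tau(S^{c\,\Gamma^{+}},\Gamma^{+})\models\Gamma\cup\{\lnot\varphi\} \;\Rightarrow\; \Gamma\nvDash\varphi,$

where the second step uses the Lindenbaum and Truth Lemmas, the third uses Lemma 1, and the fourth uses Lemma 2.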