
Duality for pairs of upward bipolar plane graphs and submodule lattices

Gábor Czédli czedli@math.u-szeged.hu http://www.math.u-szeged.hu/~czedli/ University of Szeged, Bolyai Institute. Szeged, Aradi vértanúk tere 1, HUNGARY 6720 Dedicated to Márta Madocsai, my brother György, and to the memory of Mrs. Pálné Haraszti (born Anna Miskó), members of an old summer team
Abstract.

Let G and H be acyclic, upward bipolarly oriented plane graphs with the same number n of edges. While G can symbolize a flow network, H has only a controlling role. Let φ and ψ be bijections from {1, …, n} to the edge set of G and that of H, respectively; their role is to define, for each edge of H, the corresponding edge of G. Let b be an element of an Abelian group 𝔸. An n-tuple (a_1, …, a_n) of elements of 𝔸 is a solution of the paired-bipolar-graphs problem P := (G, H, φ, ψ, 𝔸, b) if whenever a_i is the “all-or-nothing-flow” capacity of the edge φ(i) for i = 1, …, n and e⃗ is a maximal directed path of H, then by fully exploiting the capacities of the edges corresponding to the edges of e⃗ and neglecting the rest of the edges of G, we have a flow process transporting b from the source (vertex) of G to the sink of G. Let P′ := (H′, G′, ψ′, φ′, 𝔸, b), where H′ and G′ are the “two-outer-facet” duals of H and G, respectively, and ψ′ and φ′ are defined naturally. We prove that P and P′ have the same solutions. This result implies George Hutchinson’s self-duality theorem on submodule lattices.

Key words and phrases:
Upward plane graph, edge capacity, George Hutchinson’s self-duality theorem, lattice identity
1991 Mathematics Subject Classification:
06C05, 05C21
This research was supported by the National Research, Development and Innovation Fund of Hungary, under funding scheme K 138892. June 23, 2024

1. Introduction

We present and prove the main result in Sections 2–4, intended to be readable for most mathematicians. Section 5, an application of the preceding sections, presupposes a modest familiarity with some fundamental concepts from (universal) algebra and, mainly, from lattice theory.

Sections 2–4 prove a duality theorem, Theorem 1, for some pairs of finite, oriented planar graphs. The first graph, G, is a flow network with edge capacities belonging to a fixed Abelian group. The other graph plays a controlling role: each of its maximal paths determines a set of edges of G to be used at their full capacities while neglecting the rest of the edges; see Section 2 for a preliminary illustration.

Section 5 applies Theorem 1 to give a new and elementary proof of George Hutchinson’s self-duality theorem on identities that hold in submodule lattices; our approach is simpler (mainly conceptually simpler) than the earlier ones.

The aforementioned two parts of the paper are interdependent. The second part, Section 5, is based upon the first part (Sections 2–4), while the necessity for a suitable tool in the second part led to the creation of the first part.

2. An introductory example

Before delving into the technicalities of Section 3, consider the following example.


Figure 1. An introductory example

In Figure 1, G and H are oriented graphs. With the convention that every edge is upward oriented (as in the case of Hasse diagrams of partially ordered sets), the arrowheads are omitted. The subscripts 1, …, 17 supply a bijective correspondence between the edge set of G and that of H. We can think of G as a hypothetical concrete system in which the arcs (i.e., the edges) are transit routes, pipelines, fiber-optic cables, or freighters (or passenger vehicles) traveling on fixed routes, etc. The numbers in colored geometric shapes are the capacities of the arcs of G. (Even though we repeat these numbers on the arcs of H, they still mean the capacities of the corresponding arcs of G; the arcs of H have no capacities.) These numbers are “all-or-nothing-flow” capacities, that is, each arc should either be used at full capacity or avoided; this stipulation is due to physical limitations or economic inefficiency. (However, there can be parallel arcs with different capacities; see, for example, e_13 and e_15.) The vertices of G are repositories (or warehouses, depots, etc.). In contrast to G, the graph H is to provide visual or digital information within a hypothetical control room. Each maximal directed path of H defines a method to transport exactly 6 units (such as pieces, tons, barrels, etc.) of something from source(G) to sink(G) without changing the final contents of other repositories. For example, (e'_9, e'_15, e'_10) is a maximal directed path of H; its meaning for G is that we use exactly the arcs e_9, e_15, and e_10 of G. Namely, we use e_9 to transport 3 (units of something) from source(G) to v_1, e_15 to transport 6 from v_1 to sink(G), and e_10 to transport 3 from source(G) to v_1.
Depending on the physical realization of G, we can use e_9, e_15, and e_10 in this order, in any order, or simultaneously. No matter which of the ten maximal paths of H we choose, the result of the transportation is the same. The negative sign of −3 at e_5 and e_8 means that the arc is to transport 3 in the opposite direction (that is, downward). The scheme of transportation just described is very adaptive. Indeed, when choosing one of the ten maximal paths of H, several factors, like speed, cost, the operational conditions of the edges, etc., can be taken into account.
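The bookkeeping behind this example can be sketched in a few lines of code. The snippet below is only an illustration of the single path (e'_9, e'_15, e'_10) discussed above: the vertex names source, v_1, sink and the capacities 3, 6, and 3 come from the text, while the rest of Figure 1 is left out.

```python
# A minimal sketch of the transport along the path (e'_9, e'_15, e'_10),
# assuming the vertex names and capacities quoted in the text above.
from collections import defaultdict

# Each used edge of G: (tail, head, capacity); using an edge at full
# capacity moves `capacity` units from its tail to its head.
used_edges = [
    ("source", "v1", 3),   # e_9
    ("v1", "sink", 6),     # e_15
    ("source", "v1", 3),   # e_10
]

contents = defaultdict(int)
for tail, head, cap in used_edges:
    contents[tail] -= cap
    contents[head] += cap

# Net effect: 6 units leave the source and arrive at the sink, while the
# intermediate repository v1 is left unchanged.
print(dict(contents))  # {'source': -6, 'v1': 0, 'sink': 6}
```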

3. Paired-bipolar-graphs problems and schemes

First, we recall some mostly well-known and easy concepts and fix our notation. The notations are not unique in the literature, but we try to use the most expressive ones. We mainly follow Auer et al. [1] (at the time of writing, freely available at http://dx.doi.org/10.1016/j.tcs.2015.01.003) and Di Battista et al. [7] (at the time of writing, freely available at https://doi.org/10.1016/0925-7721(94)00014-X). In the present paper, every graph is assumed to be finite and directed. Sometimes, we say digraph to emphasize that our graphs are directed. A (directed) edge e of a graph starts at its tail, denoted by tail(e), and ends at its head, denoted by head(e). Occasionally, we say that e goes from tail(e) to head(e); see the middle of Figure 1. We can also say that e is an outgoing edge from tail(e) and an incoming edge into head(e). For a vertex c, let inc(c) and out(c) stand for the set of edges incoming into c and that of edges outgoing from c, respectively. Sometimes, head(e) is denoted by an arrowhead put on e. The vertex set (the set of all vertices) and the edge set of a graph G are denoted by V(G) and E(G), respectively. A graph containing no directed cycle is said to be acyclic. Such a graph has no loop edges, since there is no cycle of length 1, and it is oriented, that is, inc(tail(e)) ∩ out(head(e)) = ∅ for all e ∈ E(G). A vertex c ∈ V(G) is a source or a sink if inc(c) = ∅ or out(c) = ∅, respectively. A bipolarly oriented graph or, briefly, a bipolar graph is an acyclic digraph that has exactly one source, exactly one sink, and at least two vertices.
For such a graph G, source(G) and sink(G) denote the source and the sink of G, respectively. The uniqueness of source(G) and that of sink(G) imply that in a bipolar graph G,

each maximal directed path goes from source(G) to sink(G). (3.1)

Next, guided by Section 2 and Figure 1, we introduce the concept of a paired-bipolar-graphs problem. This problem together with one of its solutions forms a paired-bipolar-graphs scheme. For sets X and Y, X^Y denotes the set of functions from Y to X.

Definition 1.
  1. (pb1)

    Assume that G and H are bipolar graphs with the same number n of edges. Assume also that φ: {1, …, n} → E(G) and ψ: {1, …, n} → E(H) are bijections, and let e_i := φ(i) and e'_i := ψ(i) for i ∈ {1, …, n}; then φ ∘ ψ⁻¹: E(H) → E(G), defined by e'_i ↦ e_i, is again a bijection. Let 𝔸 = (A; +) be an Abelian group, and let b be an element of A. (In Section 2, 𝔸 = ℤ, the additive group of all integers, and b = 6.)

  2. (pb2)

    By a system of contents we mean a function S: V(G) → A, i.e., a member of A^{V(G)}. For v ∈ V(G), S(v) ∈ A is the content of v. The following three systems of contents deserve particular interest. (The notations of these systems and other acronyms are easy to locate in the PDF of the paper; for example, in most PDF viewers, a search for “Cntinit” or “bnd(” gives the first occurrence of Cnt_init,b[G] or bnd(G) (to be defined later), respectively.) The b-initial system of contents is the function Cnt_init,b[G]: V(G) → A defined by

    Cnt_init,b[G](v) = b, if v = source(G); 0 = 0_𝔸, if v ∈ V(G) ∖ {source(G)}.

    The b-terminal system of contents is Cnt_term,b[G]: V(G) → A defined by

    Cnt_term,b[G](v) = b, if v = sink(G); 0, if v ∈ V(G) ∖ {sink(G)}.

    The b-transporting system of contents is Cnt_transp,b[G]: V(G) → A defined by

    Cnt_transp,b[G](v) = −b, if v = source(G); b, if v = sink(G); 0, if v ∈ V(G) ∖ {source(G), sink(G)}. (3.2)
  3. (pb3)

    With respect to pointwise addition, the systems of contents form an Abelian group, namely, a direct power of 𝔸. The computation rule in this group is that (S⁽¹⁾ ± S⁽²⁾)(u) = S⁽¹⁾(u) ± S⁽²⁾(u) for all u ∈ V(G). For example, Cnt_term,b[G] = Cnt_init,b[G] + Cnt_transp,b[G].

  4. (pb4)

    Let a⃗ := (a_1, …, a_n) ∈ A^n be an n-tuple of elements of A. The effect of an edge e'_j of H on G with respect to a⃗ is the system EfEdge[G, a⃗, e'_j] of contents defined by

    EfEdge[G, a⃗, e'_j](u) := −a_j, if u = tail(e_j); a_j, if u = head(e_j); 0, if u ∈ V(G) ∖ {tail(e_j), head(e_j)}; (3.3)

    note that e'_j ∈ E(H) occurs on the left but e_j ∈ E(G) on the right.

  5. (pb5)

    For a⃗ := (a_1, …, a_n) ∈ A^n and a directed path e⃗′ := (e'_{j_1}, e'_{j_2}, …, e'_{j_k}) in H or a k-element subset X = {e'_{j_1}, e'_{j_2}, …, e'_{j_k}} of E(H), the effect of e⃗′ or X on G with respect to a⃗ is the following system of contents:

    EfSet[G, a⃗, {e'_{j_1}, …, e'_{j_k}}] := Σ_{i=1}^{k} EfEdge[G, a⃗, e'_{j_i}]. (3.4)
  6. (pb6)

    The paired-bipolar-graphs problem is the 6-tuple (G, H, φ, ψ, 𝔸, b), which we denote by

    PBGP(G, H, φ, ψ, 𝔸, b). (3.5)

    We say that a⃗ := (a_1, …, a_n) ∈ A^n is a solution of this paired-bipolar-graphs problem if for each maximal directed path e⃗′ := (e'_{j_1}, e'_{j_2}, …, e'_{j_k}) in H,

    EfSet[G, a⃗, {e'_{j_1}, …, e'_{j_k}}] = Cnt_transp,b[G]. (3.6)
  7. (pb7)

    If a⃗ := (a_1, …, a_n) ∈ A^n is a solution of PBGP(G, H, φ, ψ, 𝔸, b), then we say that the 7-tuple (G, H, φ, ψ, 𝔸, b, a⃗) is a paired-bipolar-graphs scheme, and we denote this scheme by

    PBGS(G, H, φ, ψ, 𝔸, b, a⃗). (3.7)

For example, Figure 1 determines PBGP(G, H, φ, ψ, ℤ, 6), where ℤ is the additive group of integers. As the numbers in colored geometric shapes form a solution, the figure defines a paired-bipolar-graphs scheme, too. Even though we do not use the following two properties of the figure, we mention them. First, the paired-bipolar-graphs problem defined by the figure has exactly one solution. Second, if k ∈ ℕ⁺ := {1, 2, 3, …}, we change ℤ to the (2k)-element additive group of integers modulo 2k, and b is (the residue class of) 1 rather than 6, then the problem determined by the figure has no solution.
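Definition 1 can be turned into a small mechanical check. The sketch below is only an illustration under simplifying assumptions: G is given as a dict mapping edge indices to (tail, head) pairs, the maximal directed paths of H are listed by edge index, and 𝔸 is the additive group ℤ. The names ef_set and is_solution are ours, not the paper's.

```python
# A sketch of a solution check for a paired-bipolar-graphs problem,
# assuming the Abelian group is the integers and the maximal directed
# paths of H have been precomputed as lists of edge indices.

def ef_set(G_edges, a, path_indices):
    """EfSet[G, a, {e'_i : i in path_indices}] as a dict of nonzero contents."""
    contents = {}
    for i in path_indices:
        tail, head = G_edges[i]
        contents[tail] = contents.get(tail, 0) - a[i]  # -a_i at tail(e_i)
        contents[head] = contents.get(head, 0) + a[i]  # +a_i at head(e_i)
    return {v: c for v, c in contents.items() if c != 0}

def is_solution(G_edges, max_paths_of_H, a, source, sink, b):
    """a is a solution iff every maximal path of H produces Cnt_transp,b[G]."""
    target = {source: -b, sink: b} if b != 0 else {}
    return all(ef_set(G_edges, a, p) == target for p in max_paths_of_H)

# Toy instance: G and H both consist of two parallel source-to-sink edges.
G_edges = {1: ("s", "t"), 2: ("s", "t")}
max_paths_of_H = [[1], [2]]          # H's maximal paths, by edge index
a = {1: 5, 2: 5}
print(is_solution(G_edges, max_paths_of_H, a, "s", "t", 5))  # True
```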

The next section says more about paired-bipolar-graphs problems but only for specific bipolar graphs, including those in Figure 1.

4. Bipolar plane graphs and the main theorem

For digraphs G_1 and G_2, a pair (γ, χ) of functions is an isomorphism from G_1 onto G_2 if both γ: V(G_1) → V(G_2) and χ: E(G_1) → E(G_2) are bijections and, for every e ∈ E(G_1), γ(tail(e)) = tail(χ(e)) and γ(head(e)) = head(χ(e)). Hence, V(G_i) and E(G_i) have been abstract sets so far and, in essence, a graph G_i has been the system (V(G_i), E(G_i), tail, head). However, in the case of a plane graph G, V(G) is a finite subset of the plane ℝ² and E(G) consists of oriented Jordan arcs (i.e., homeomorphic images of [0,1] ⊆ ℝ) such that each arc e ∈ E(G) goes from a vertex tail(e) ∈ V(G) to a vertex head(e) ∈ V(G); see the middle part of Figure 1. On the other hand, G is a planar graph if it is isomorphic to a plane graph. Note the difference: a plane graph is always a planar graph but not conversely.

The boundary bnd(G) of a plane graph G consists of those arcs of G that can be reached (i.e., each of their points can be reached) from any sufficiently distant point of the plane by walking along an open Jordan curve crossing no arc of the graph. Usually, we cannot define the boundary of a planar (rather than plane) graph.

Definition 2.

An upward bipolar plane graph is a bipolar plane graph G such that both source(G) and sink(G) are on the boundary of G. (To widen the scope of the main result, our definition of “upward” is seemingly more general than the standard one occurring in the literature. However, up to graph isomorphism, our definition is equivalent to the standard one, in which “upward” has its visual meaning; see Theorem 2, taken from Platt [11], later. Furthermore, if we followed the standard definition, then we should probably call the duals of these graphs “rightward”, so we would have to introduce one more concept.) An upward bipolarly oriented planar graph is a digraph isomorphic to an upward bipolar plane graph.

Next, let G be an upward bipolar plane graph. The arcs of G divide the plane into regions. Exactly one of these regions is geometrically unbounded; we call the rest of the regions inner facets. Take a Jordan curve C (we stipulate that C has exactly one point at infinity and, if possible, C is a projective line) such that C connects source(G) and sink(G) in the projective plane and the affine part C′ (the set of those points of C that are not on the line at infinity) lies in the unbounded region. Then C′ divides the unbounded region into two parts called outer facets. In Figure 2, C′ is the union of the two thick dotted half-lines. The facets of G are its inner facets and the two outer facets. In Figure 2, any two facets of G sharing an arc are indicated by different colors (or by distinct shades in a grey-scale version).


Figure 2. A facet F of H and the facets of G
Definition 3.

For an upward bipolar plane graph G, we define the dual of G, denoted by G^du, in the following way. Let V(G^du) be the set of all facets of G, including the two outer facets. For each edge e ∈ E(G), we define the dual edge e^du as follows. Let tail(e^du) and head(e^du) be the two facets such that the arc e is on their boundaries. Out of these two facets, tail(e^du) is the one on the left when we walk along e from tail(e) to head(e) (Miller and Naor [10] call this the “left-hand rule”, since if our left thumb points in the direction of e, then the left index finger shows the direction of e^du), while the other facet is head(e^du). The edge set of G^du is E(G^du) := {e^du : e ∈ E(G)}. In Figure 2, source(G^du) and sink(G^du) are the left outer facet and the right outer facet, respectively. (Only a bounded part of each of these two geometrically unbounded facets is drawn.) Note that C′, occurring before this definition, belongs neither to E(G) nor to E(G^du).

Note that there are isomorphic upward bipolar plane graphs G_1 and G_2 such that G_1^du and G_2^du are non-isomorphic; this is why we cannot define the dual of an upward bipolarly oriented planar graph. Observe that the dual of an upward bipolar plane graph is a bipolar graph (this is why Definition 3 deviates from the literature, where G^du has only one outer facet, the outer region; Fact 1, to be formulated later, asserts more), so the following definition makes sense.
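The left-hand rule of Definition 3 is easy to mechanize once the embedding has been analysed. The toy sketch below assumes that, for every edge e of G, we already know which facet lies to its left and which to its right when walking from tail(e) to head(e); the facet names and function names are illustrative, not taken from the paper.

```python
# A toy rendering of Definition 3: given, for each edge, its (left, right)
# facet pair, the dual edge e_du goes from the left facet to the right one.

def dual_graph(left_right):
    """left_right: {edge: (left_facet, right_facet)} -> {edge: dual edge},
    where a dual edge is the pair (tail(e_du), head(e_du))."""
    return {e: lr for e, lr in left_right.items()}

def source_and_sink(dual_edges):
    """The dual of an upward bipolar plane graph is bipolar: exactly one
    facet has no incoming dual edge (the source) and exactly one has no
    outgoing dual edge (the sink)."""
    tails = {t for t, _ in dual_edges.values()}
    heads = {h for _, h in dual_edges.values()}
    (source,) = tails - heads
    (sink,) = heads - tails
    return source, sink

# Two parallel edges from source(G) to sink(G); the facets are the left
# outer facet O_l, the inner facet F between the edges, and the right
# outer facet O_r.
lr = {"e1": ("O_l", "F"), "e2": ("F", "O_r")}
dual = dual_graph(lr)
print(source_and_sink(dual))  # ('O_l', 'O_r')
```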

Definition 4.

With upward bipolar plane graphs G and H, let P := PBGP(G, H, φ, ψ, 𝔸, b) be a paired-bipolar-graphs problem; see (3.5). Define the bijections ψ^du: {1, …, n} → E(H^du) and φ^du: {1, …, n} → E(G^du) by ψ^du(i) := e'_i^du = ψ(i)^du and φ^du(j) := e_j^du = φ(j)^du, respectively, for i, j ∈ {1, …, n}; here e_j^du and e'_i^du are edges of the dual graphs defined in Definition 3. Then the dual of the paired-bipolar-graphs problem P is the paired-bipolar-graphs problem

P^du := PBGP(H^du, G^du, ψ^du, φ^du, 𝔸, b). (4.1)

Briefly and roughly speaking, we obtain the dual problem by interchanging the two graphs and dualizing both.

Next, based on (3.5), (3.7), and (4.1), we state our main theorem and a corollary.

Theorem 1.

Let P := PBGP(G, H, φ, ψ, 𝔸, b) be a paired-bipolar-graphs problem such that both G and H are upward bipolar plane graphs. Then P and the dual problem P^du have exactly the same solutions.

This theorem, to be proved soon, trivially implies the following statement.

Corollary 1.

For G, H, φ, ψ, 𝔸, and b as in Theorem 1 and for every a⃗, PBGS(G, H, φ, ψ, 𝔸, b, a⃗) is a paired-bipolar-graphs scheme if and only if so is PBGS(H^du, G^du, ψ^du, φ^du, 𝔸, b, a⃗).

An arc {(x(t), y(t)) : 0 ≤ t ≤ 1} in the plane is strictly ascending if y(t_1) < y(t_2) for all 0 ≤ t_1 < t_2 ≤ 1. A plane graph is ascending if all its arcs are strictly ascending. Platt [11] proved the following result, mentioned also in Auer et al. [1]. (Indeed, as source(G) and sink(G) are on bnd(G), we can connect them by a new arc without violating planarity. Furthermore, we can add parallel arcs to any arc. Thus, Platt’s result applies.)

Theorem 2 (Platt [11]).

Each upward bipolar plane graph is isomorphic to an upward bipolar ascending plane graph.

Proof of Theorem 1.

Let P and P^du be as in the theorem. Theorem 2 allows us to assume that G and H are upward bipolar ascending plane graphs; see Figure 1 for an illustration. As the graphs are ascending, Figure 1 satisfactorily reflects generality. Note that the summation in (3.4) does not depend on the order in which the edges of a directed path are listed. Hence, we often give a directed path by the set of its edges. We claim that for any nonempty X ⊆ {1, 2, …, n},

Σ_{v ∈ V(G)} EfSet[G, a⃗, {e'_i : i ∈ X}](v) = 0. (4.2)

For |X| = 1, this is clear by (3.3). The |X| = 1 case and (3.4) imply the general case of (4.2), since

Σ_{v ∈ V(G)} EfSet[G, a⃗, {e'_i : i ∈ X}](v) = Σ_{v ∈ V(G)} Σ_{i ∈ X} EfSet[G, a⃗, {e'_i}](v),

and the two summations after the equality sign above can be interchanged.

Assume that a⃗ ∈ A^n is a solution of P. To show that a⃗ is a solution of P^du, too, take a maximal directed path Γ = {e_i^du : i ∈ M} in G^du. In Figure 2, M = {7, 8, 9, 10, 16, 17, 4, 5, 6} and, furthermore, {e_i : i ∈ M} consists of the thick edges of G. Note that (3.1), with G^du instead of G, is valid for Γ. Denote by V(Γ) the set of vertices of the path Γ; it consists of some facets of G. To mark these facets in the figure and also for a later purpose, for each facet X ∈ V(Γ), we pick a point, called its capital (since we think of the facets as path-connected countries on a map), in the geometric interior of X. These capitals are the red pentagon-shaped points in Figure 2. We assume that the capital of source(G^du), the left outer facet, is far on the left, that is, its abscissa is smaller than that of every vertex of G. Similarly, the capital of sink(G^du) is far on the right. We need to show that

C := EfSet[H^du, a⃗, {e_i^du : i ∈ M}] and D := Cnt_transp,b[H^du] (4.3)

are the same. So we need to show that C(F) = D(F) for all F ∈ V(H^du).

First, we deal with the case when F is an inner facet of H; see the left of Figure 2. As H is ascending, the set of arcs on the boundary bnd(F) of F is partitioned into a left half bnd_lft(F) and a right half bnd_rght(F). Furthermore, all arcs on bnd(F) (as well as in H) are ascending. Let L := {i : e'_i belongs to bnd_lft(F)} and R := {i : e'_i belongs to bnd_rght(F)}. In Figure 2, L = {3, 14, 6} and R = {9, 15}. For a directed path g⃗, let tail(g⃗) and head(g⃗) denote the tail of the first edge and the head of the last edge of g⃗, respectively.

For later reference, we point out that this paragraph, proving the forthcoming (4.6), uses only the following property of L and R: the directed paths

{e'_i : i ∈ L} and {e'_i : i ∈ R} have the same tail and the same head. (4.4)

Take a subset K ⊆ {1, …, n} such that K ∩ L = ∅ and {e'_i : i ∈ K ∪ L} is a maximal directed path in H. In Figure 2, K = {10}. Note that K ∩ R = ∅ and {e'_i : i ∈ K ∪ R} is also a maximal directed path in H. As a⃗ is a solution of PBGP(G, H, φ, ψ, 𝔸, b),

EfSet[G, a⃗, {e'_i : i ∈ L ∪ K}] = EfSet[G, a⃗, {e'_i : i ∈ R ∪ K}], (4.5)

simply because both are Cnt_transp,b[G]. By (3.4), both sides of (4.5) are sums. Subtracting EfSet[G, a⃗, {e'_i : i ∈ K}] from both sides, we obtain that

EfSet[G, a⃗, {e'_i : i ∈ L}] = EfSet[G, a⃗, {e'_i : i ∈ R}]. (4.6)

Connect the capitals of the facets belonging to V(Γ) by an open Jordan curve J such that for each e^du ∈ E(G^du) ∖ Γ, the arc e ∈ E(G) and J have no geometric point in common and, furthermore, for each e^du ∈ Γ, J and e have exactly one geometric point in common and this point is neither tail(e) nor head(e). In Figure 2, J is the thin dashed curve. Let B_dn := {v ∈ V(G) : v is (geometrically) below J}. Similarly, let B_up be the set of those vertices of G that are above J. Note that B_dn ∪ B_up = V(G) and B_dn ∩ B_up = ∅. In Figure 2, B_dn = {source(G), v_2} and B_up = {sink(G), v_1}. Consider the sum

Σ_{v ∈ B_up} EfSet[G, a⃗, {e'_i : i ∈ L}](v) = Σ_{v ∈ B_up} Σ_{i ∈ L} EfEdge[G, a⃗, e'_i](v) (4.7)
= Σ_{i ∈ L} Σ_{v ∈ B_up} EfEdge[G, a⃗, e'_i](v), (4.8)

where the first equality comes from (3.4). If {tail(e_i), head(e_i)} ⊆ B_up, then

EfEdge[G, a⃗, e'_i](tail(e_i)) = −a_i and EfEdge[G, a⃗, e'_i](head(e_i)) = a_i,

by virtue of (3.3), cancel each other in the inner summation in (4.8). If {tail(e_i), head(e_i)} ⊆ B_dn, then e'_i does not influence the inner summation at all. As G is ascending, the case tail(e_i) ∈ B_up and head(e_i) ∈ B_dn does not occur. So (4.8) depends only on those i for which tail(e_i) ∈ B_dn and head(e_i) ∈ B_up. However, by the definitions of G^du, Γ, J, B_up, and B_dn, these subscripts i are exactly the members of M. Thus, we can change i ∈ L in (4.8) to i ∈ L ∩ M. For such an i, only head(e_i) is in B_up and, by (3.3), only a_i contributes to the inner summation in (4.8). Therefore, we conclude that

Σ_{v ∈ B_up} EfSet[G, a⃗, {e'_i : i ∈ L}](v) = Σ_{i ∈ L∩M} a_i. (4.9)

As LL and RR have played the same role so far, we also have that

Σ_{v ∈ B_up} EfSet[G, a⃗, {e'_i : i ∈ R}](v) = Σ_{i ∈ R∩M} a_i. (4.10)

Therefore, combining (4.6), (4.9), and (4.10), we obtain that

Σ_{i ∈ L∩M} a_i = Σ_{i ∈ R∩M} a_i. (4.11)

For i ∈ L and j ∈ R, by the left-hand rule quoted in Definition 3, head(e'_i^du) = F and tail(e'_j^du) = F. So, at the equality sign marked with ∗ below, we can use (3.3) and the fact that F is not the endpoint of any further edge of H^du. Using (3.4), (4.3), (4.9), and (4.10), too,

C(F) = Σ_{i ∈ M} EfEdge[H^du, a⃗, e_i^du](F) (4.12)
=∗ Σ_{i ∈ L∩M} EfEdge[H^du, a⃗, e_i^du](F) + Σ_{j ∈ R∩M} EfEdge[H^du, a⃗, e_j^du](F) (4.13)
= Σ_{i ∈ L∩M} a_i + Σ_{j ∈ R∩M} (−a_j). (4.14)

Combining (3.2), (4.3), (4.14), and (4.11), we get C(F) = 0 = D(F), as required.

Next, we deal with the case F = source(H^du). So F is the outer facet to the left of H; see Figure 2. We modify the earlier argument as follows. Let R := {i : e'_i is on the left boundary of H}. In Figure 2, R = {1, 4, 13, 10}. Now {e'_i^du : i ∈ R} is the set of outgoing edges from F in H^du. As {e'_i : i ∈ R} is a maximal directed path in H,

EfSet[G, a⃗, {e'_i : i ∈ R}] = Cnt_transp,b[G]. (4.15)

Similarly to (4.7)–(4.8), we take the sum

Σ_{v ∈ B_up} EfSet[G, a⃗, {e'_i : i ∈ R}](v) = Σ_{i ∈ R} Σ_{v ∈ B_up} EfEdge[G, a⃗, e'_i](v). (4.16)

As earlier, the inner sum in (4.16) is 0 unless tail(e_i) ∈ B_dn and head(e_i) ∈ B_up, that is, unless i ∈ M. Thus, we can change the range of the outer sum in (4.16) from i ∈ R to i ∈ R ∩ M; note that R ∩ M = {4, 10} in Figure 2. For i ∈ R ∩ M, the inner sum is EfEdge[G, a⃗, e'_i](head(e_i)) = a_i. Therefore, (4.16) turns into

vBupEfSet[G,a,{ei:iR}](v)=iRMai.\sum_{v\in B_{\textup{up}}}\textup{EfSet}[G,\vec{a},\{e^{\prime}_{i}:i\in R\}](v)=\sum_{i\in R\cap M}a_{i}. (4.17)

So (4.17), (4.15), (3.2), sink(G)Bup\textup{sink}(G)\in B_{\textup{up}}, and source(G)Bup\textup{source}(G)\notin B_{\textup{up}} imply that

iRMai=vBupCnttransp,b[G](v)=b.\sum_{i\in R\cap M}a_{i}=\sum_{v\in B_{\textup{up}}}\textup{Cnt}_{\textup{transp,}b}[G](v)=b. (4.18)

Similarly to (4.14), but noting that now there is no incoming edge into F=source(Hdu)F=\textup{source}({H^{\textup{du}}}), so that the earlier set LL is \emptyset and not needed, we have that

C(F)=iMEfEdge[Hdu,a,eidu](F)\displaystyle C(F)=\sum_{i\in M}\textup{EfEdge}[{H^{\textup{du}}},\vec{a},{e_{i}^{\textup{du}}}](F) (4.19)
=iRMEfEdge[Hdu,a,eidu](F)=iRM(ai)=iRMai.\displaystyle=\sum_{i\in R\cap M}\textup{EfEdge}[{H^{\textup{du}}},\vec{a},{e_{i}^{\textup{du}}}](F)=\sum_{i\in R\cap M}(-a_{i})=-\sum_{i\in R\cap M}a_{i}. (4.20)

By (4.20) and (4.18), C(F)=bC(F)=-b. Since D(F)=Cnttransp,b[Hdu](source(Hdu))=bD(F)=\textup{Cnt}_{\textup{transp,}b}[{H^{\textup{du}}}](\textup{source}({H^{\textup{du}}}))=-b by (3.2) and (4.3), we obtain the required equality C(F)=D(F)C(F)=D(F).

The treatment for the remaining case F=sink(Hdu)F=\textup{sink}({H^{\textup{du}}}) could be similar, but we present a shorter approach. By (3.2), (4.3), and the dual of (4.2),

FV(Hdu)C(F)=0=FV(Hdu)D(F).\sum_{F\in V({H^{\textup{du}}})}C(F)=0=\sum_{F\in V({H^{\textup{du}}})}D(F). (4.21)

We already know that for each FV(Hdu)F\in V({H^{\textup{du}}}) except possibly for F=sink(Hdu)F=\textup{sink}({H^{\textup{du}}}), C(F)C(F) on the left of (4.21) equals the corresponding summand D(F)D(F) on the right. This fact and (4.21) imply that C(sink(Hdu))=D(sink(Hdu))C(\textup{sink}({H^{\textup{du}}}))=D(\textup{sink}({H^{\textup{du}}})), as required.

After settling all three cases, we have shown that CC and DD in (4.3) are the same. This proves that any solution a\vec{a} of PP is also a solution of Pdu{P^{\textup{du}}}.

To prove the converse, we need the following easy consequence of Platt [11].

Fact 1 (Platt [11]).

If XX is an upward bipolar plane graph, then its dual, Xdu{X^{\textup{du}}}, is isomorphic to an upward bipolar plane graph.

We can extract Fact 1 from Platt [11] as follows. As earlier, but now for each facet FF of XX, pick a capital cFc_{F} in the interior of FF. For any two neighboring facets FF and TT, connect cFc_{F} and cTc_{T} by a new arc through the common bordering arc of FF and TT. The capitals and the new arcs form a plane graph XX^{\prime} isomorphic to Xdu{X^{\textup{du}}}, in notation, XXduX^{\prime}\cong{X^{\textup{du}}}. As XX^{\prime} is an upward bipolar plane graph by Definition 2, we obtain Fact 1.

Temporarily, we call the way to obtain XX^{\prime} from XX above a prime construction; the indefinite article is explained by the fact that the vertices and the arcs of XX^{\prime} can be chosen in many ways in the plane. The transpose XTX^{\textup{T}} of a graph XX is obtained from XX by reversing all its edges. For eE(X)e\in E(X), eTe^{\textup{T}} stands for the transpose of ee; note that tail(eT)=head(e)\textup{{tail}}(e^{\textup{T}})=\textup{{head}}(e), head(eT)=tail(e)\textup{{head}}(e^{\textup{T}})=\textup{{tail}}(e), V(XT)=V(X)V(X^{\textup{T}})=V(X), and E(XT)={eT:eE(X)}E(X^{\textup{T}})=\{e^{\textup{T}}:e\in E(X)\}.
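As a minimal illustration of the transpose operation (the list-of-pairs encoding below is our own assumption, not notation from the paper), one can sketch it in Python:

```python
# A minimal sketch, assuming a directed graph is given as a list of
# (tail, head) pairs; this encoding is ours, not the paper's.
def transpose(edges):
    """Reverse every edge: tail(e^T) = head(e) and head(e^T) = tail(e)."""
    return [(head, tail) for (tail, head) in edges]

# A small upward path source -> m -> sink and its transpose.
path = [("source", "m"), ("m", "sink")]
print(transpose(path))                     # [('m', 'source'), ('sink', 'm')]
print(transpose(transpose(path)) == path)  # True: transposing twice restores X
```

The second line reflects the fact used below that (X^T)^T = X.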

Resuming the proof of Theorem 1, Theorem 2 allows us to assume that GG and HH are ascending. Let GG^{\prime} be a plane graph obtained from GG by a prime construction; GG^{\prime} is isomorphic to Gdu{G^{\textup{du}}}. In Figure 2, only some vertices of GG^{\prime} are indicated by red pentagons and only some of its arcs are drawn as segments of the thin dashed open Jordan curve, but the figure is still illustrative. To obtain a graph G′′G^{\prime\prime} isomorphic to (Gdu)du{({G^{\textup{du}}}){}^{\textup{du}}}, we apply a prime construction to GG^{\prime} so that the vertices of GG are the chosen capitals that form V(G′′)V(G^{\prime\prime}) and, geometrically, the original arcs of GG are the chosen arcs of G′′G^{\prime\prime} connecting these capitals. By the left-hand rule quoted in Footnote 6, G′′G^{\prime\prime} is GTG^{\textup{T}}. Hence (Gdu)duGT{({G^{\textup{du}}}){}^{\textup{du}}}\cong G^{\textup{T}}. Similarly, (Hdu)duHT{({H^{\textup{du}}}){}^{\textup{du}}}\cong H^{\textup{T}}. Let us define φT:{1,,n}E(GT)\varphi^{\textup{T}}\colon\{1,\dots,n\}\to E(G^{\textup{T}}) and ψT:{1,,n}E(HT)\psi^{\textup{T}}\colon\{1,\dots,n\}\to E(H^{\textup{T}}) in the natural way by φT(i):=(φ(i))T\varphi^{\textup{T}}(i):=(\varphi(i))^{\textup{T}} and ψT(i):=(ψ(i))T\psi^{\textup{T}}(i):=(\psi(i))^{\textup{T}}. We claim that

P and PT:=PBGP(GT,HT,φT,ψT,𝔸,b) have the same solutions.P\text{ and }P^{\textup{T}}:=\textup{PBGP}(G^{\textup{T}},H^{\textup{T}},\,\varphi^{\textup{T}},\psi^{\textup{T}},\,\mathbb{A},b)\text{ have the same solutions.} (4.22)

The reason is simple: to compensate for the reversal of the edges, a solution u\vec{u} of PP should be changed to u-\vec{u}. However, the source and the sink are also interchanged, and this results in a second change of the sign. So, a solution of PP is also a solution of PTP^{\textup{T}}. Similarly, a solution of PTP^{\textup{T}} is a solution of (PT)T=P(P^{\textup{T}})^{\textup{T}}=P, proving (4.22).

Finally, let a\vec{a} be a solution of Pdu{P^{\textup{du}}}. Fact 1 allows us to apply the already proven part of Theorem 1 to Pdu{P^{\textup{du}}} instead of PP, and we obtain that a\vec{a} is a solution of (Pdu)du{({P^{\textup{du}}}){}^{\textup{du}}}. We have seen that (Gdu)duGT{({G^{\textup{du}}}){}^{\textup{du}}}\cong G^{\textup{T}} and (Hdu)duHT{({H^{\textup{du}}}){}^{\textup{du}}}\cong H^{\textup{T}}. Apart from these isomorphisms, (φdu)du{({\varphi^{\textup{du}}}){}^{\textup{du}}} and (ψdu)du{({\psi^{\textup{du}}}){}^{\textup{du}}} are φT\varphi^{\textup{T}} and ψT\psi^{\textup{T}}, respectively. Thus, (Pdu)du{({P^{\textup{du}}}){}^{\textup{du}}} and PTP^{\textup{T}} have the same solutions. Hence a\vec{a} is a solution of PTP^{\textup{T}}, and so (4.22) implies that a\vec{a} is a solution of PP, completing the proof of Theorem 1. ∎

Remark 1.

Apart from applying the result of Platt [11], the proof above is self-contained. Even though Platt’s result may seem intuitively clear, its rigorous proof is not easy at all. Since, for the particular graphs occurring in the subsequent section, a trivial induction would suffice instead of relying on Platt [11], our aim to give an elementary proof of Hutchinson’s self-duality theorem is not compromised.

5. Hutchinson’s self-duality theorem

The paragraph on pages 272–273 in [9] gives a detailed account of the contribution of each of the two authors of [9]. In particular, the self-duality theorem, to be recalled soon, is due exclusively to George Hutchinson. Thus, we call it Hutchinson’s self-duality theorem, and we reference Hutchinson [9] in connection with it. A similar strategy applies when citing his other exclusive results from [9].

The original proof of the self-duality theorem is deep. It relies on Hutchinson [8], which belongs mainly to the theory of abelian categories, on the fourteen-page-long Section 2 of Hutchinson and Czédli [9], and on the nine-page-long Section 3 of Hutchinson [9]. A second proof given by Czédli and Takách [6] avoids Hutchinson [8] and abelian categories, but relying on the just-mentioned Sections 2 and 3, it is still complicated. No elementary proof of Hutchinson’s self-duality theorem has previously been given; in light of Remark 1, we present such a proof here.

By a module MM over a ring RR with 11 we always mean a unital left module, that is, 1m=m1m=m holds for all mMm\in M. The lattice of all submodules of MM is denoted by Sub(M)\textup{Sub}(M). For X,YSub(M)X,Y\in\textup{Sub}(M), XYX\leq Y and XYX\wedge Y mean XYX\subseteq Y and XYX\cap Y, respectively, while XYX\vee Y is the submodule generated by XYX\cup Y. A lattice term is built from variables and the operation symbols \vee and \wedge. For lattice terms pp and qq, the string “p=qp=q” is called a lattice identity. For example, x1(x2x3)=(x1x2)(x1x3)x_{1}\wedge(x_{2}\vee x_{3})=(x_{1}\wedge x_{2})\vee(x_{1}\wedge x_{3}) is a lattice identity; in fact, it is one of the two (equivalent) distributive laws. To obtain the dual of a lattice term, we interchange \vee and \wedge in it. For example, the dual of

r\displaystyle r =(x1(x2(x3x4))x5)(((x6x7)(x8x9))x10),\displaystyle=\Bigl{(}x_{1}\vee\bigl{(}x_{2}\wedge(x_{3}\vee x_{4})\bigr{)}\vee x_{5}\Bigr{)}\wedge\Bigl{(}\bigl{(}(x_{6}\vee x_{7})\wedge(x_{8}\vee x_{9})\bigr{)}\vee x_{10}\Bigr{)}, (5.1)
rdu\displaystyle{r^{\textup{du}}} =(x1(x2(x3x4))x5)(((x6x7)(x8x9))x10).\displaystyle=\Bigl{(}x_{1}\wedge\bigl{(}x_{2}\vee(x_{3}\wedge x_{4})\bigr{)}\wedge x_{5}\Bigr{)}\vee\Bigl{(}\bigl{(}(x_{6}\wedge x_{7})\vee(x_{8}\wedge x_{9})\bigr{)}\wedge x_{10}\Bigr{)}. (5.2)

The dual of a lattice identity is obtained by dualizing the lattice terms on both sides of the equality sign. For example, the dual of the above-mentioned distributive law is x1(x2x3)=(x1x2)(x1x3)x_{1}\vee(x_{2}\wedge x_{3})=(x_{1}\vee x_{2})\wedge(x_{1}\vee x_{3}), the other distributive law.
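To make dualization concrete, the following Python sketch computes the dual of a term; the tuple encoding of terms is our own illustrative assumption, not notation from the paper.

```python
# A minimal sketch (our own tuple encoding, not the paper's notation):
# a lattice term is a variable name (a string) or a triple (op, left, right)
# with op in {'v', '^'} standing for join and meet, respectively.
def dual(term):
    """Interchange join and meet throughout the term."""
    if isinstance(term, str):        # a variable is its own dual
        return term
    op, left, right = term
    return ('^' if op == 'v' else 'v', dual(left), dual(right))

# One distributive law: x1 ^ (x2 v x3) = (x1 ^ x2) v (x1 ^ x3).
lhs = ('^', 'x1', ('v', 'x2', 'x3'))
rhs = ('v', ('^', 'x1', 'x2'), ('^', 'x1', 'x3'))
# Dualizing both sides yields the other distributive law.
print(dual(lhs))  # ('v', 'x1', ('^', 'x2', 'x3'))
print(dual(rhs))  # ('^', ('v', 'x1', 'x2'), ('v', 'x1', 'x3'))
```

Applying `dual` twice returns the original term, mirroring the fact that dualization is an involution.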

Now we can state Hutchinson’s self-duality theorem.

Theorem 3 (Hutchinson [9, Theorem 7]).

Let RR be a ring with 11, and let λ\lambda be a lattice identity. Then λ\lambda holds in Sub(M)\textup{Sub}(M) for all unital modules MM over RR if and only if so does the dual of λ\lambda.

Even the following corollary of this theorem is interesting. For m0:={0,1,2,}m\in{\mathbb{N}}_{0}:=\{0,1,2,\dots\}, let 𝒜m\mathcal{A}_{m} be the class of Abelian groups101010We note but do not need that the 𝒜m\mathcal{A}_{m}s are exactly the varieties of Abelian groups. satisfying the identity x++x=0x+\dots+x=0 with mm summands on the left. In particular, 𝒜0\mathcal{A}_{0} is the class of all Abelian groups.

Corollary 2 (Hutchinson [9]).

For m0m\in{\mathbb{N}}_{0} and any lattice identity λ\lambda, λ\lambda holds in the subgroup lattices Sub(𝔸)\textup{Sub}(\mathbb{A}) of all 𝔸𝒜m\mathbb{A}\in\mathcal{A}_{m} if and only if so does the dual of λ\lambda.

By treating each 𝔸𝒜m\mathbb{A}\in\mathcal{A}_{m} as a left unital module over the residue-class ring m\mathbb{Z}_{m} in the natural way, Corollary 2 follows trivially from Theorem 3.

In the rest of the paper, we derive Theorem 3 from Theorem 1.

Proof of Theorem 3.

We can assume that λ\lambda is of the form pqp\leq q where pp and qq are lattice terms. Indeed, any identity of the form p=qp=q is equivalent to the conjunction of pqp\leq q and qpq\leq p. Thus, from now on, by a lattice identity we mean a universally quantified inequality of the form

λ:(x1)(xk)(p(x1,,xk)q(x1,,xk)).\lambda:\quad(\forall x_{1})\dots(\forall x_{k})\Bigl{(}p(x_{1},\dots,x_{k})\leq q(x_{1},\dots,x_{k})\Bigr{)}. (5.3)

The dual of λ\lambda, denoted by λdu{\lambda^{\textup{du}}}, is qdupdu{q^{\textup{du}}}\leq{p^{\textup{du}}}, where pdu{p^{\textup{du}}} and qdu{q^{\textup{du}}} are the duals of the terms pp and qq, respectively. Let us call λ\lambda in (5.3) a 11-balanced identity if every variable that occurs in the identity occurs exactly once in pp and exactly once in qq. For lattice identities λ1\lambda_{1} and λ2\lambda_{2}, we say that λ1\lambda_{1} and λ2\lambda_{2} are equivalent if for every lattice LL, λ1\lambda_{1} holds in LL if and only if so does λ2\lambda_{2}. As the first major step in the proof, we show that for each lattice identity pqp\leq q,

pq is equivalent to a 1-balanced lattice identity pq.p\leq q\text{ is equivalent to a 1-balanced lattice identity }p^{\prime}\leq q^{\prime}. (5.4)

To prove (5.4), observe that the absorption law y=y(yx)y=y\vee(y\wedge x) allows us to assume that every variable occurring in pqp\leq q occurs both in pp and qq. Indeed, if xix_{i} occurs, say, only in pp, then we can change qq to q(qxi)q\vee(q\wedge x_{i}). Let BB be the set111111(5.3) allows variables only from {xi:i+}\{x_{i}:i\in{\mathbb{N}^{+}}\}, so BB is a set. As usual, +={1,2,3,}{\mathbb{N}^{+}}=\{1,2,3,\dots\}. of those lattice identities λ\lambda in (5.3) for which (5.4) fails but the set of variables occurring in pp is the same as the set of variables occurring in qq. We need to show that B=B=\emptyset. Suppose the contrary. For an identity λ:pq\lambda:p\leq q belonging to BB, let β(λ)\beta(\lambda) be the number of those variables that occur at least three times in λ\lambda (that is, more than once in pp or qq). The notation β\beta comes from “badness”. Pick a member λ:pq\lambda:p\leq q of BB that minimizes β(λ)\beta(\lambda). As λB\lambda\in B, we know that β(λ)>0\beta(\lambda)>0. Let {x1,,xk}\{x_{1},\dots,x_{k}\} be the set of variables of λ\lambda. As β(λ)\beta(\lambda) remains the same when we permute the variables, we can assume that x1x_{1} occurs in λ\lambda at least three times. Let uu and vv denote the number of occurrences of x1x_{1} in pp and that in qq, respectively; note that u,v+:={1,2,3,}u,v\in{\mathbb{N}^{+}}:=\{1,2,3,\dots\} and u+v3u+v\geq 3. Clearly, there is a (u+k1)(u+k-1)-ary term p¯(y1\overline{p}(y_{1}, …, yuy_{u}, x2x_{2}, …, xk)x_{k}) such that each of y1y_{1}, …, yuy_{u} occurs in p¯\overline{p} exactly once and p(x1,,xk)p(x_{1},\dots,x_{k}) is of the form

p(x1,x2,,xk)=p¯(x1,,x1,x2,,xk)=p¯(x1,,x1,x)p(x_{1},x_{2},\dots,x_{k})=\overline{p}(x_{1},\dots,x_{1},x_{2},\dots,x_{k})=\overline{p}(x_{1},\dots,x_{1},\vec{x}\kern 1.5pt^{\prime})

where x1x_{1} is listed uu times in p¯\overline{p} and x=(x2,,xk)\vec{x}\kern 1.5pt^{\prime}=(x_{2},\dots,x_{k}). For example, if

p(x1,,x4)=((x1x2)(x1x3))((x2x4)(x1x3)),p(x_{1},\dots,x_{4})=\bigl{(}(x_{1}\vee x_{2})\wedge(x_{1}\vee x_{3})\bigr{)}\wedge\bigl{(}(x_{2}\vee x_{4})\wedge(x_{1}\vee x_{3})\bigr{)},

then we can let

p¯(y1,y2,y3,x2,x3,x4):=((y1x2)(y2x3))((x2x4)(y3x3)).\overline{p}(y_{1},y_{2},y_{3},x_{2},x_{3},x_{4}):=\bigl{(}(y_{1}\vee x_{2})\wedge(y_{2}\vee x_{3})\bigr{)}\wedge\bigl{(}(x_{2}\vee x_{4})\wedge(y_{3}\vee x_{3})\bigr{)}.

Similarly, there is a (v+k1)(v+k-1)-ary term q¯(z1\overline{q}(z_{1}, …, zvz_{v}, x2x_{2}, …, xk)x_{k}) such that each of z1z_{1}, …, zvz_{v} occurs in q¯\overline{q} exactly once and q(x1,,xk)q(x_{1},\dots,x_{k}) is of the form

q(x1,x2,,xk)=q¯(x1,,x1,x2,,xk)=q¯(x1,,x1,x)q(x_{1},x_{2},\dots,x_{k})=\overline{q}(x_{1},\dots,x_{1},x_{2},\dots,x_{k})=\overline{q}(x_{1},\dots,x_{1},\vec{x}\kern 1.5pt^{\prime})

where x1x_{1} is listed vv times in q¯\overline{q} and x\vec{x}\kern 1.5pt^{\prime} is still (x2,,xk)(x_{2},\dots,x_{k}). Consider the uu-by-vv matrix W=(wi,j)u×vW=(w_{i,j})_{u\times v} of new variables; it has uu rows and vv columns. Let

w:=(w1,1,w1,2,,w1,v,w2,1,w2,2,,w2,v,,wu,1,wu,2,,wu,v)\vec{w}:=(w_{1,1},w_{1,2},\dots,w_{1,v},\,w_{2,1},w_{2,2},\dots,w_{2,v},\,\dots,\,w_{u,1},w_{u,2},\dots,w_{u,v})

be the vector of variables formed from the elements of WW. That is, to obtain w\vec{w}, we have listed the entries of WW row-wise. We define the (uv+k1)(uv+k-1)-ary terms

p(w,x)\displaystyle p^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime}) :=p¯(j=1vw1,j,,j=1vwu,j,x) and\displaystyle:=\overline{p}(\bigwedge_{j=1}^{v}w_{1,j},\,\dots,\bigwedge_{j=1}^{v}w_{u,j},\,\vec{x}\kern 1.5pt^{\prime})\text{ and}
q(w,x)\displaystyle q^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime}) :=q¯(i=1uwi,1,,i=1uwi,v,x),\displaystyle:=\overline{q}(\bigvee_{i=1}^{u}w_{i,1},\,\dots,\bigvee_{i=1}^{u}w_{i,v},\,\vec{x}\kern 1.5pt^{\prime}),

and we let λ:p(w,x)q(w,x)\lambda^{\ast}:p^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime})\leq q^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime}). As each of the wi,jw_{i,j}s occurs in each of pp^{\ast} and qq^{\ast} exactly once and the numbers of occurrences of x2,,xkx_{2},\dots,x_{k} did not change, β(λ)=β(λ)1\beta(\lambda^{\ast})=\beta(\lambda)-1. So, by the choice of λ\lambda, we know that λ\lambda^{\ast} is outside BB. Thus, λ\lambda^{\ast} is equivalent to a 1-balanced lattice identity.

Next, we prove that λ\lambda^{\ast} is equivalent to λ\lambda. Assume that λ\lambda^{\ast} holds in a lattice LL. Letting all the wi,jw_{i,j}s equal x1x_{1} and using the fact that the join and the meet are idempotent operations, it follows immediately that λ\lambda also holds in LL. Conversely, assume that λ\lambda holds in LL, and let the wi,jw_{i,j}s and x2,,xkx_{2},\dots,x_{k} denote arbitrary elements of LL. Since the lattice terms and operations are order-preserving, we obtain that

p(w,x)\displaystyle p^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime}) =p¯(j=1vw1,j,,j=1vwu,j,x)\displaystyle=\overline{p}(\bigwedge_{j=1}^{v}w_{1,j},\,\dots,\bigwedge_{j=1}^{v}w_{u,j},\,\vec{x}\kern 1.5pt^{\prime})
p¯(i=1uj=1vwi,j,,i=1uj=1vwi,j,x)=p(i=1uj=1vwi,j,x)\displaystyle\leq\overline{p}(\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\dots,\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\vec{x}\kern 1.5pt^{\prime})=p(\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\vec{x}\kern 1.5pt^{\prime})
q(i=1uj=1vwi,j,x)=q¯(i=1uj=1vwi,j,,i=1uj=1vwi,j,x)\displaystyle\leq q(\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\vec{x}\kern 1.5pt^{\prime})=\overline{q}(\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\dots,\bigvee_{i=1}^{u}\bigwedge_{j=1}^{v}w_{i,j},\,\vec{x}\kern 1.5pt^{\prime})
q¯(i=1uwi,1,,i=1uwi,v,x)=q(w,x),\displaystyle\leq\overline{q}(\bigvee_{i=1}^{u}w_{i,1},\,\dots,\bigvee_{i=1}^{u}w_{i,v},\,\vec{x}\kern 1.5pt^{\prime})=q^{\ast}(\vec{w},\vec{x}\kern 1.5pt^{\prime}),

showing that λ\lambda^{\ast} holds in LL. So λ\lambda is equivalent to λ\lambda^{\ast}. Hence, λ\lambda is equivalent to a 11-balanced identity, since so is λ\lambda^{\ast}. This contradicts that λB\lambda\in B and proves (5.4).
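Being 1-balanced is a purely syntactic condition, so it can be tested mechanically by counting occurrences. The following Python sketch illustrates the definition under a hypothetical term encoding of our own; it plays no role in the proof.

```python
from collections import Counter

# Sketch with an assumed encoding (ours, not the paper's): a term is a
# variable name (a string) or a triple (op, left, right).
def occurrences(term, counter=None):
    """Count how many times each variable occurs in the term."""
    counter = Counter() if counter is None else counter
    if isinstance(term, str):
        counter[term] += 1
    else:
        _, left, right = term
        occurrences(left, counter)
        occurrences(right, counter)
    return counter

def is_one_balanced(p, q):
    """Each variable of p <= q occurs exactly once in p and once in q."""
    cp, cq = occurrences(p), occurrences(q)
    return all(cp[x] == 1 and cq[x] == 1 for x in set(cp) | set(cq))

# The distributive law is not 1-balanced: x1 occurs twice on the right.
p = ('^', 'x1', ('v', 'x2', 'x3'))
q = ('v', ('^', 'x1', 'x2'), ('^', 'x1', 'x3'))
print(is_one_balanced(p, q))  # False
print(is_one_balanced(p, p))  # True, since p is repetition-free
```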

Clearly, if λ\lambda is equivalent to a 1-balanced lattice identity λ\lambda^{\ast}, then the dual of λ\lambda is equivalent to the dual of λ\lambda^{\ast}, which is again a 1-balanced identity. Thus, it suffices to prove Theorem 3 only for 1-balanced identities. So, in the rest of the paper,

λ:p(x1,,xn)q(x1,,xn) (in short, pq)  is a 1-balanced\lambda:p(x_{1},\dots,x_{n})\leq q(x_{1},\dots,x_{n})\,\text{ (in short, }p\leq q\text{)\, is a }1\text{-balanced} (5.5)

lattice identity.


Figure 3. For rr and rdu{r^{\textup{du}}} given in (5.1) and (5.2), GrG_{r} and its facets on the left, and GrduG_{{r^{\textup{du}}}} on the right

For a lattice term rr, Vrb(r)\textup{Vrb}(r) will stand for the set of variables occurring in rr. We say that rr is repetition-free if each of its variables occurs in rr only once, that is, if rrr\leq r is 11-balanced. With the lattice terms given in (5.1) and (5.2), the following definition is illustrated by Figure 3.

Definition 5.

With each repetition-free lattice term rr, we are going to associate an upward bipolar ascending plane graph GrG_{r} up to isomorphism and a bijection ξr:Vrb(r)E(Gr)\xi_{r}\colon\textup{Vrb}(r)\to E(G_{r}) by induction as follows. If rr is a variable, then GrG_{r} is the two-element upward bipolar plane graph with a single directed edge, and ξr\xi_{r} is the only possible bijection from the singleton Vrb(r)\textup{Vrb}(r) to the singleton E(Gr)E(G_{r}). For r=r1r2r=r_{1}\vee r_{2}, we obtain GrG_{r} by putting Gr2G_{r_{2}} atop Gr1{G_{r_{1}}} and identifying (in other words, gluing together) sink(Gr1)\textup{sink}(G_{r_{1}}) and source(Gr2)\textup{source}(G_{r_{2}}). Then source(Gr)=source(Gr1)\textup{source}(G_{r})=\textup{source}(G_{r_{1}}) and sink(Gr)=sink(Gr2)\textup{sink}(G_{r})=\textup{sink}(G_{r_{2}}). For r=r1r2r=r_{1}\wedge r_{2}, we obtain GrG_{r} by bending or deforming, resizing, and moving Gr1{G_{r_{1}}} and Gr2{G_{r_{2}}} so that source(Gr1)=source(Gr2)\textup{source}(G_{r_{1}})=\textup{source}(G_{r_{2}}), sink(Gr1)=sink(Gr2)\textup{sink}(G_{r_{1}})=\textup{sink}(G_{r_{2}}), and the rest of Gr1{G_{r_{1}}} is on the left of the rest of Gr2{G_{r_{2}}}. Then source(Gr)=source(Gr1)=source(Gr2)\textup{source}(G_{r})=\textup{source}(G_{r_{1}})=\textup{source}(G_{r_{2}}) and sink(Gr)=sink(Gr1)=sink(Gr2)\textup{sink}(G_{r})=\textup{sink}(G_{r_{1}})=\textup{sink}(G_{r_{2}}). If r=r1r2r=r_{1}\vee r_{2} or r=r1r2r=r_{1}\wedge r_{2}, then let ξr:=ξr1ξr2\xi_{r}:=\xi_{r_{1}}\cup\xi_{r_{2}}, that is, for i{1,2}i\in\{1,2\} and xVrb(ri)x\in\textup{Vrb}(r_{i}), ξr(x):=ξri(x)\xi_{r}(x):=\xi_{r_{i}}(x).
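The induction in Definition 5 glues two-terminal graphs in series (for \vee) and in parallel (for \wedge). The following Python sketch builds the abstract directed graph underlying GrG_{r}; the encoding is our own assumption, and the sketch deliberately ignores the planar embedding, that is, the left-to-right order of Gr1G_{r_{1}} and Gr2G_{r_{2}}.

```python
import itertools

_fresh = itertools.count()  # supply of new vertex names

def build(term):
    """Return (source, sink, edges); an edge is (tail, head, variable).
    Terms are variables (strings) or triples (op, left, right), op in {'v','^'}
    -- our own encoding, not the paper's notation."""
    if isinstance(term, str):                 # a variable: a single edge
        s, t = next(_fresh), next(_fresh)
        return s, t, [(s, t, term)]
    op, r1, r2 = term
    s1, t1, e1 = build(r1)
    s2, t2, e2 = build(r2)
    if op == 'v':  # series: identify sink(G_{r1}) with source(G_{r2})
        e2 = [(t1 if a == s2 else a, t1 if b == s2 else b, x)
              for (a, b, x) in e2]
        return s1, t2, e1 + e2
    # meet: parallel composition, sharing both the source and the sink
    glue = {s2: s1, t2: t1}
    e2 = [(glue.get(a, a), glue.get(b, b), x) for (a, b, x) in e2]
    return s1, t1, e1 + e2

# G_r for r = x1 v (x2 ^ x3): one edge in series with two parallel edges.
s, t, edges = build(('v', 'x1', ('^', 'x2', 'x3')))
print(len(edges))  # 3 edges, one per variable, since r is repetition-free
```

As in Definition 5, the number of edges of the resulting graph equals the number of variable occurrences in rr.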

From the point of view of GrG_{r}, the lattice operations are associative but not commutative. A straightforward induction yields that for every lattice term rr,

Grdu is isomorphic to Grdu:=(Gr)du, and\displaystyle G_{{r^{\textup{du}}}}\text{ is isomorphic to }{G_{r}^{\textup{du}}}:={(G_{r})^{\textup{du}}}\text{, and} (5.6)
ξrdu(x)=ξr(x)du for all xVrb(rdu);\displaystyle\xi_{{r^{\textup{du}}}}(x)={\xi_{r}(x)^{\textup{du}}}\text{ for all }x\in\textup{Vrb}({r^{\textup{du}}}); (5.7)

(5.6) and (5.7) are exemplified by Figure 3, where Grdu{G_{r}^{\textup{du}}} is given by facets.

The ring RR with 1 in the proof is fixed, and (R,+)(R,+) is its additive group. For pp in (5.5), we denote by SSC(p,R)\textup{SSC}(p,R) the set of systems of contents of GpG_{p} with respect to (R,+)(R,+). That is, complying with the terminology of Definition 1(pb2),

SSC(p,R) is RV(Gp), the set of all maps from V(Gp) to R.\textup{SSC}(p,R)\text{ is }R^{V(G_{p})},\text{ the set of all maps from }V(G_{p})\text{ to }R. (5.8)

For a unital module MM over RR (an RR-module MM for short), similarly to (5.8), let

SSC(p,M):=MV(Gp), the set of all V(Gp)M maps.\textup{SSC}(p,M):=M^{V(G_{p})},\text{ the set of all }V(G_{p})\to M\text{ maps.}

Interrupting the proof of Theorem 3, we formulate and prove two lemmas.

Lemma 1.

For submodules B1B_{1}, …, BnB_{n} and elements u,vu,v of an RR-module MM, vup(B1,,Bn)v-u\in p(B_{1},\dots,B_{n}) if and only if there exists an SSSC(p,M)S\in\textup{SSC}(p,M) such that

S(source(Gp))=u, S(sink(Gp))=v, and S(head(ei))S(tail(ei))BiS(\textup{source}(G_{p}))=u,\text{ }S(\textup{sink}(G_{p}))=v,\text{ and }S(\textup{{head}}(e_{i}))-S(\textup{{tail}}(e_{i}))\in B_{i} (5.9)

for every edge eiE(Gp)e_{i}\in E(G_{p}). The same holds with qq and eie^{\prime}_{i} instead of pp and eie_{i}, respectively.

Letting u:=0u:=0, the lemma describes the containment vp(B1,,Bn)v\in p(B_{1},\dots,B_{n}). However, now that the lemma is formulated with vuv-u, it will be easier to apply it later. Based on the rule that for B,BSub(M)B,B^{\prime}\in\textup{Sub}(M), BB={h+h:hB,hB}B\vee B^{\prime}=\{h+h^{\prime}:h\in B,\,h^{\prime}\in B^{\prime}\}, we have that vuBBv-u\in B\vee B^{\prime} if and only if there is a ww such that wuBw-u\in B and vwBv-w\in B^{\prime}. (For the “only if” part: w:=u+h=vhw:=u+h=v-h^{\prime}.) Hence, the lemma follows by a trivial induction on the length of pp; the details are omitted. Alternatively (but with more work), one can derive the lemma from the congruence-permutable particular case of Czédli [2, Claim 1], [3, Proposition 3.1], [4, Lemma 3.3] or Czédli and Day [5, Proposition 3.1] together with the canonical isomorphism between Sub(M)\textup{Sub}(M) and the congruence lattice of MM. The following lemma, in which ξp\xi_{p} and ξq\xi_{q} are defined in Definition 5, is crucial and less obvious.
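The rule BB={h+h:hB,hB}B\vee B^{\prime}=\{h+h^{\prime}:h\in B,\,h^{\prime}\in B^{\prime}\} and the witness characterization of vuBBv-u\in B\vee B^{\prime} can be checked in a small Abelian group; the group Z_12 and the particular subgroups below are our toy choices, used only for illustration.

```python
# A toy check in the Abelian group Z_12 (our illustrative choice); its
# submodules, viewed as a Z-module, are exactly its subgroups.
n = 12
B  = {x % n for x in range(0, n, 4)}    # the subgroup {0, 4, 8}
Bp = {x % n for x in range(0, n, 6)}    # the subgroup {0, 6}
join = {(h + hp) % n for h in B for hp in Bp}
print(sorted(join))  # [0, 2, 4, 6, 8, 10], the subgroup generated by 2

# v - u lies in B v B'  iff  some w has w - u in B and v - w in B'.
u, v = 1, 7
direct  = (v - u) % n in join
witness = any((w - u) % n in B and (v - w) % n in Bp for w in range(n))
print(direct == witness)  # True: the two characterizations agree
```

Here w=1w=1 is one witness: wu=0Bw-u=0\in B and vw=6Bv-w=6\in B^{\prime}.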

Lemma 2.

Let RR be a ring with 1=1R1=1_{R} and let λ:pq\lambda:p\leq q be a 11-balanced lattice identity as in (5.5). Then the following two conditions are equivalent.

  1. (α\alpha1)

    For every (unital left) RR-module MM, pqp\leq q holds in Sub(M)\textup{Sub}(M).

  2. (α\alpha2)

    PBGP(Gp,Gq\textup{PBGP}(G_{p},G_{q}, ξp,ξq\xi_{p},\xi_{q}, (R,+),1R)(R,+),1_{R}) has a solution.

Proof of Lemma 2.

For i{1,,n}i\in\{1,\dots,n\}, we denote ξp(xi)\xi_{p}(x_{i}) and ξq(xi)\xi_{q}(x_{i}) by eie_{i} and eie^{\prime}_{i}, respectively.

Assume that (α\alpha1) holds. Let FF be the free unital RR-module121212We note but do not use the facts that (R,+)(R,+) can be treated as an RR-module denoted by RR{}_{R}R, and that the FF we are defining is the |V(Gp)||V(G_{p})|th direct power of RR{}_{R}R. generated by V(Gp)V(G_{p}). For each eiE(Gp)e_{i}\in E(G_{p}), let BiSub(F)B_{i}\in\textup{Sub}(F) be the submodule generated by head(ei)tail(ei)\textup{{head}}(e_{i})-\textup{{tail}}(e_{i}). In other words, Bi=R(head(ei)tail(ei)):={r(head(ei)tail(ei)):rR}B_{i}=R\cdot(\textup{{head}}(e_{i})-\textup{{tail}}(e_{i})):=\{r\cdot(\textup{{head}}(e_{i})-\textup{{tail}}(e_{i})):r\in R\}. Taking SidSSC(p,F)S_{\textup{id}}\in\textup{SSC}(p,F) defined by Sid(v):=vS_{\textup{id}}(v):=v (like an identity map) for vV(Gp)v\in V(G_{p}), Lemma 1 implies that sink(Gp)source(Gp)p(B1,,Bn)\textup{sink}(G_{p})-\textup{source}(G_{p})\in p(B_{1},\dots,B_{n}). So, as we have assumed (α\alpha1), sink(Gp)source(Gp)q(B1,,Bn)\textup{sink}(G_{p})-\textup{source}(G_{p})\in q(B_{1},\dots,B_{n}). Therefore, Lemma 1 yields a system TSSC(q,F)T\in\textup{SSC}(q,F) of contents such that T(source(Gq))=source(Gp)T(\textup{source}(G_{q}))=\textup{source}(G_{p}), T(sink(Gq))=sink(Gp)T(\textup{sink}(G_{q}))=\textup{sink}(G_{p}), and for every i{1,,n}i\in\{1,\dots,n\}, T(head(ei))T(tail(ei))Bi=R(head(ei)tail(ei))T(\textup{{head}}(e^{\prime}_{i}))-T(\textup{{tail}}(e^{\prime}_{i}))\in B_{i}=R\cdot(\textup{{head}}(e_{i})-\textup{{tail}}(e_{i})). Thus, for each i{1,,n}i\in\{1,\dots,n\}, we can pick an aiRa_{i}\in R such that

T(head(ei))T(tail(ei))=ai(head(ei)tail(ei)).T(\textup{{head}}(e^{\prime}_{i}))-T(\textup{{tail}}(e^{\prime}_{i}))=a_{i}\cdot(\textup{{head}}(e_{i})-\textup{{tail}}(e_{i})). (5.10)

Let PP stand for the paired-bipolar-graphs problem occurring in (α\alpha2). With the aia_{i}s in (5.10), let a:=(a1,,an)\vec{a}:=(a_{1},\dots,a_{n}). We claim that a\vec{a} is a solution of PP. To show this, let e:=(ej1\vec{e}\kern 1.5pt^{\prime}:=(e^{\prime}_{j_{1}}, …, ejk)e^{\prime}_{j_{k}}) be a maximal directed path in GqG_{q}. Let us compute, using the equality head(eji)=tail(eji+1)\textup{{head}}(e^{\prime}_{j_{i}})=\textup{{tail}}(e^{\prime}_{j_{i+1}}) for i{1,,k1}i\in\{1,\dots,k-1\} at =\overset{\ast\ast}{=} and (5.10) at =\overset{\oplus}{=}:

sink(Gp)source(Gp)=T(sink(Gq))T(source(Gq))\displaystyle\textup{sink}(G_{p})-\textup{source}(G_{p})=T(\textup{sink}(G_{q}))-T(\textup{source}(G_{q})) (5.11)
=T(head(ejk))T(tail(ej1))=i=1k(T(head(eji))T(tail(eji)))\displaystyle=T(\textup{{head}}(e^{\prime}_{j_{k}}))-T(\textup{{tail}}(e^{\prime}_{j_{1}}))\overset{\ast\ast}{=}\sum_{i=1}^{k}\bigl{(}T(\textup{{head}}(e^{\prime}_{j_{i}}))-T(\textup{{tail}}(e^{\prime}_{j_{i}}))\bigr{)} (5.12)
=i=1k(ajihead(eji)ajitail(eji)).\displaystyle\overset{\oplus}{=}\sum_{i=1}^{k}\bigl{(}a_{j_{i}}\cdot\textup{{head}}(e_{j_{i}})-a_{j_{i}}\cdot\textup{{tail}}(e_{j_{i}})\bigr{)}. (5.13)

For vV(Gp)v\in V(G_{p}), define Iv:={i:tail(eji)=vI_{v}:=\{i:\textup{{tail}}(e_{j_{i}})=v and 1ik}1\leq i\leq k\} and Jv:={i:head(eji)=vJ_{v}:=\{i:\textup{{head}}(e_{j_{i}})=v and 1ik}1\leq i\leq k\}. Expressing (5.13) as a linear combination of the free generators of FF with coefficients taken from RR, the coefficient of vv is iJvajiiIvaji\sum_{i\in J_{v}}a_{j_{i}}-\sum_{i\in I_{v}}a_{j_{i}}. Hence, it follows from (3.3) and (3.4) that

iJvaji\displaystyle\sum_{i\in J_{v}}a_{j_{i}} iIvaji=iJvEfEdge[Gp,a,eji](v)iIvEfEdge[Gp,a,eji](v)\displaystyle-\sum_{i\in I_{v}}a_{j_{i}}=\sum_{i\in J_{v}}\textup{EfEdge}[G_{p},\vec{a},e^{\prime}_{j_{i}}](v)-\sum_{i\in I_{v}}\textup{EfEdge}[G_{p},\vec{a},e^{\prime}_{j_{i}}](v) (5.14)
=i{1,,k}EfEdge[Gp,a,eji](v)=EfSet[Gp,a,{ej1,,ejk}](v)\displaystyle=\sum_{i\in\{1,\dots,k\}}\textup{EfEdge}[G_{p},\vec{a},e^{\prime}_{j_{i}}](v)=\textup{EfSet}[G_{p},\vec{a},\{e^{\prime}_{j_{1}},\dots,e^{\prime}_{j_{k}}\}](v) (5.15)

is the coefficient of vv in (5.13) and, by (5.11), also in the linear combination expressing sink(Gp)source(Gp)\textup{sink}(G_{p})-\textup{source}(G_{p}). On the other hand, the coefficients of source(Gp)\textup{source}(G_{p}), sink(Gp)\textup{sink}(G_{p}), and vV(Gp){source(Gp)v\in V(G_{p})\setminus\{\textup{source}(G_{p}), sink(Gp)}\textup{sink}(G_{p})\} in the straightforward linear combination expressing sink(Gp)source(Gp)\textup{sink}(G_{p})-\textup{source}(G_{p}) are 1R-1_{R}, 1R1_{R}, and 0R0_{R}, respectively. Since FF is freely generated by V(Gp)V(G_{p}), this linear combination is unique. Therefore, (5.15) is 1R-1_{R}, 1R1_{R}, and 0R0_{R} for v=source(Gp)v=\textup{source}(G_{p}), v=sink(Gp)v=\textup{sink}(G_{p}), and vV(Gp){source(Gp)v\in V(G_{p})\setminus\{\textup{source}(G_{p}), sink(Gp)}\textup{sink}(G_{p})\}, respectively. Thus, the function applied on the right of (5.15) to vv is the same as Cnttransp,1R[Gp]\textup{Cnt}_{\textup{transp,}1_{\kern-1.0ptR}}[G_{p}] defined in (3.2). As this holds for all vV(Gp)v\in V(G_{p}), the just-mentioned function equals Cnttransp,1R[Gp]\textup{Cnt}_{\textup{transp,}1_{\kern-1.0ptR}}[G_{p}]. Hence, a\vec{a} is a solution of PP; see (3.6). We have shown that (α\alpha1) implies (α\alpha2).

To show the converse implication, assume that (α\alpha2) holds, and let a\vec{a} be a solution of PP. Let MM be an RR-module, let B1,,BnSub(M)B_{1},\dots,B_{n}\in\textup{Sub}(M), and let vp(B1,,Bn)v\in p(B_{1},\dots,B_{n}). It is convenient to let u=0Mu=0_{M}; then Lemma 1 yields an SSSC(p,M)S\in\textup{SSC}(p,M) satisfying (5.9) for every edge eiE(Gp)e_{i}\in E(G_{p}). Note in advance that when we reference Section 4, 𝔸:=(R,+)\mathbb{A}:=(R,+), G:=GpG:=G_{p}, and H:=GqH:=G_{q}. For each dV(Gq)d\in V(G_{q}),

pick a directed path e(d)=(ej1,,ejk) from source(Gq) to d;\text{pick a directed path }\vec{e}\kern 1.5pt^{\prime}(d)=(e^{\prime}_{j_{1}},\dots,e^{\prime}_{j_{k}})\text{ from }\textup{source}(G_{q})\text{ to }d; (5.16)

here kk depends on the choice of this path (and on dd). With reference to (3.4), let

T(d):=wV(Gp)EfSet[Gp,a,{ej1,,ejk}](w)S(w).T(d):=\sum_{w\in V(G_{p})}\textup{EfSet}[G_{p},\vec{a},\{e^{\prime}_{j_{1}},\dots,e^{\prime}_{j_{k}}\}](w)\cdot S(w). (5.17)

We know from Section 4 that (4.4) implies (4.6). Hence, the coefficient of S(w)S(w) in (5.17) does not depend on the choice of e(d)\vec{e}\kern 1.5pt^{\prime}(d). Thus, T(d)T(d) is well defined; that is,

T(d) does not depend on the choice of e(d) in (5.16).T(d)\text{ does not depend on the choice of }\vec{e}\kern 1.5pt^{\prime}(d)\text{ in \eqref{eq:msTkvgrnVBrm}}. (5.18)

As S(w)S(w) in (5.17) belongs to MM and its coefficient to RR, T(d)MT(d)\in M. So, TSSC(q,M)T\in\textup{SSC}(q,M). As the empty sum in MM is 0M=u0_{M}=u, we have that T(source(Gq))=uT(\textup{source}(G_{q}))=u. Since a\vec{a} is a solution of PP and e(sink(Gq))\vec{e}\kern 1.5pt^{\prime}(\textup{sink}(G_{q})) is a maximal directed path in GqG_{q}, it follows from (5.17), (3.6), (3.2), and (5.9) that

T(sink(Gq))=1RS(sink(Gp))1RS(source(Gp))=vu.T(\textup{sink}(G_{q}))=1_{R}\cdot S(\textup{sink}(G_{p}))-1_{R}\cdot S(\textup{source}(G_{p}))=v-u.

To see the third part of (5.9) with qq and TT instead of pp and SS, let eiE(Gq)e^{\prime}_{i}\in E(G_{q}). According to (5.16) but with k1k-1 instead of kk, let e(tail(ei))\vec{e}\kern 1.5pt^{\prime}(\textup{{tail}}(e^{\prime}_{i})) be the chosen directed path for tail(ei)V(Gq)\textup{{tail}}(e^{\prime}_{i})\in V(G_{q}). By (5.18), we can assume that e(head(ei))\vec{e}\kern 1.5pt^{\prime}(\textup{{head}}(e^{\prime}_{i})) is obtained from e(tail(ei))\vec{e}\kern 1.5pt^{\prime}(\textup{{tail}}(e^{\prime}_{i})) by adding ejk:=eie^{\prime}_{j_{k}}:=e^{\prime}_{i} to its end. So jk=ij_{k}=i, ejk=eie^{\prime}_{j_{k}}=e^{\prime}_{i},

e(tail(ei))=(ej1,,ejk1), and e(head(ei))=(ej1,,ejk1,ejk).\vec{e}\kern 1.5pt^{\prime}(\textup{{tail}}(e^{\prime}_{i}))=(e^{\prime}_{j_{1}},\dots,e^{\prime}_{j_{k-1}}),\text{ and }\vec{e}\kern 1.5pt^{\prime}(\textup{{head}}(e^{\prime}_{i}))=(e^{\prime}_{j_{1}},\dots,e^{\prime}_{j_{k-1}},e^{\prime}_{j_{k}}).

Hence, applying (3.4) to the coefficient of each of the S(w)S(w) in (5.17),

T(head(ei))T(tail(ei))=wV(Gp)EfEdge[Gp,a,ejk](w)S(w).T(\textup{{head}}(e^{\prime}_{i}))-T(\textup{{tail}}(e^{\prime}_{i}))=\sum_{w\in V(G_{p})}\textup{EfEdge}[G_{p},\vec{a},e^{\prime}_{j_{k}}](w)\cdot S(w). (5.19)

As jk=ij_{k}=i and most of the summands above are zero by (3.3), (5.19) turns into

\begin{align*}
T(\mathrm{head}(e'_i))-T(\mathrm{tail}(e'_i)) &= -a_i\cdot S(\mathrm{tail}(e_i))+a_i\cdot S(\mathrm{head}(e_i))\\
&= a_i\cdot\bigl(S(\mathrm{head}(e_i))-S(\mathrm{tail}(e_i))\bigr),
\end{align*}

which belongs to $B_i$ since $S$ satisfies (5.9). Thus, Lemma 1 yields that $v=v-u\in q(B_1,\dots,B_n)$. Therefore, $p(B_1,\dots,B_n)\leq q(B_1,\dots,B_n)$, that is, ($\alpha$1) holds, completing the proof of Lemma 2. ∎

Next, we resume the proof of Theorem 3. As noted in (5.5), $\lambda\colon p\leq q$ is 1-balanced. Clearly, so is $\lambda^{\mathrm{du}}\colon q^{\mathrm{du}}\leq p^{\mathrm{du}}$. Letting $\mathcal{L}_R:=\{\mathrm{Sub}(M): M$ is an $R$-module$\}$ and $P:=\mathrm{PBGP}(G_p,G_q,\ \xi_p,\xi_q,\ (R,+),1_R)$, Lemma 2 gives that

\[
\lambda\text{ holds in }\mathcal{L}_R \iff P\text{ has a solution.} \tag{5.20}
\]

Tailoring Definition 4 to the present situation, define $\xi_p^{\mathrm{du}}\colon\{1,\dots,n\}\to E(G_{p^{\mathrm{du}}})$ and $\xi_q^{\mathrm{du}}\colon\{1,\dots,n\}\to E(G_{q^{\mathrm{du}}})$ in the natural way by $\xi_p^{\mathrm{du}}(i):=\xi_p(i)^{\mathrm{du}}={e_i}^{\mathrm{du}}$ and $\xi_q^{\mathrm{du}}(i):=\xi_q(i)^{\mathrm{du}}={e'_i}^{\mathrm{du}}$. With $P':=\mathrm{PBGP}(G_{q^{\mathrm{du}}},G_{p^{\mathrm{du}}},\ \xi_q^{\mathrm{du}},\xi_p^{\mathrm{du}},\ (R,+),1_R)$, Lemma 2 yields that

\[
\lambda^{\mathrm{du}}\text{ holds in }\mathcal{L}_R \iff P'\text{ has a solution.} \tag{5.21}
\]

Let $P^{\mathrm{du}}$ denote the dual of $P$; see Definition 4. It follows from (5.6)–(5.7) and Definitions 3 and 4 that $P'$ is the same as $P^{\mathrm{du}}$. Hence, (5.21) turns into

\[
\lambda^{\mathrm{du}}\text{ holds in }\mathcal{L}_R \iff P^{\mathrm{du}}\text{ has a solution.} \tag{5.22}
\]

Finally, Theorem 1, (5.20), and (5.22) imply that $\lambda$ holds in $\mathcal{L}_R$ if and only if so does $\lambda^{\mathrm{du}}$, completing the proof of Theorem 3. ∎
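The concluding argument can be summarized as a single chain of equivalences; the label over each step names the fact of this paper that justifies it, and no ingredient beyond (5.20), Theorem 1, and (5.22) is used:

```latex
\[
\lambda \text{ holds in } \mathcal{L}_R
  \;\overset{(5.20)}{\iff}\;
P \text{ has a solution}
  \;\overset{\text{Thm.\,1}}{\iff}\;
P^{\mathrm{du}} \text{ has a solution}
  \;\overset{(5.22)}{\iff}\;
\lambda^{\mathrm{du}} \text{ holds in } \mathcal{L}_R.
\]
```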
