Agent Spaces
Abstract
Exploration is one of the most important tasks in Reinforcement Learning, but it is not well-defined beyond finite problems in the Dynamic Programming paradigm (see Subsection 2.4). We provide a reinterpretation of exploration which can be applied to any online learning method.
We come to this definition by approaching exploration from a new direction. After finding that concepts of exploration created to solve simple Markov decision processes with Dynamic Programming are no longer broadly applicable, we reexamine exploration. Instead of extending the ends of dynamic exploration procedures, we extend their means. That is, rather than repeatedly sampling every state-action pair possible in a process, we define the act of modifying an agent to itself be explorative. The resulting definition of exploration can be applied in infinite problems and non-dynamic learning methods, which the dynamic notion of exploration cannot tolerate.
To understand the way that modifications of an agent affect learning, we describe a novel structure on the set of agents: a collection of distances (see footnote 8), which represent the perspectives of each agent possible in the process. Using these distances, we define a topology and show that many important structures in Reinforcement Learning are well behaved under the topology induced by convergence in the agent space.
0 Introduction
Reinforcement Learning (RL) is the study of stochastic processes which are composed of a decision process and an agent. The Reinforcement Learning problem of a decision process with reward is to find an optimal agent which maximizes the expectation of a reward function with respect to the distribution of paths drawn from the process. Typically, Reinforcement Learning methods seek optimal or otherwise high-reward agents via iterative learning processes, sometimes described as trial-and-error [1].
In an online learning algorithm, learners send one or more agents to interact with the process and collect information on those interactions. After studying these interactions, the learner develops new agents to experiment with, seeking to improve the reward of successive generations of agents (see Definition 1.15). In this pursuit, there is a trade-off between seeking high-quality agents during learning (exploitation) and seeking new information about the decision process (exploration).
Exploitation, being closely related to general iterative optimization algorithms, is well-understood. In simple problems, its trade-off with exploration has been extensively researched [1, 2]. However, in more complex problems the conclusions reached by studying simple problems seem to have little bearing; some of the exploration methods which are least efficient in simple problems have been used in the most impressive demonstrations of the power of Reinforcement Learning to date [3, 4].
This paper focuses on exploration, especially on methods of exploration which ignore a certain set of the tenets of Dynamic Programming [5, 6]. We call these methods naïve, and class among them Novelty Search [7], which we discuss in Subsection 4.1. We begin in Section 1 and Section 2, rigorously describing the problem of Reinforcement Learning and introducing the necessity for exploration. In Subsection 2.4 and Section 3, we study the notions of exploration coming from the study of Dynamic Programming, and consider their efficacy in modern Reinforcement Learning. In Section 4, we investigate the properties of a class of spaces based on what we call primitive behavior, and in Section 5 introduce a general form of these spaces, which we call agent spaces. The agent space can be defined in a broad class of decision processes and efficiently describes several important features of an agent. In Section 6, it is demonstrated that the distributions of truncated paths are continuous in the agent, and in Section 7 it is demonstrated that certain functions of those paths (chiefly, expected reward) are continuous in the agent space.
1 Reinforcement Learning
1.1 Definitions
Definition 1.1 (Decision Process).
A discrete time decision process is a controlled stochastic process indexed by . Associated with the process are a set of states , a set of actions , and a state-transition function .
Definition 1.2 (State).
A state is any state which is possible in the decision process.
Definition 1.3 (Action).
An action is a control which an agent can exert on the process.
Definition 1.4 (Path).
A path (sometimes called a trajectory) in a decision process is a sequence of state-action pairs generated by the interaction between the process and an agent :
p = (s_0, a_0, s_1, a_1, s_2, a_2, …).    (1.1.1)
Remark 1.1.
We will sometimes need to refer to truncated paths, i.e. paths which contain only the state-action pairs associated with indices 0 through t. This is common when referring to the domains of agents and the state-transition function. We will see that truncated paths suffice to describe the domain of the state-transition function, but because agents act only when a new state has been generated, an agent's domain is the set of truncated prime paths: paths which contain the t-th state, but not the t-th action.
Sometimes (as in equation (1.1.118)), it is necessary to refer to the -th state or action of a path . We denote the -th state , and the -th action , using and when unambiguous.
Because the first state has no antecedent, it is determined by the initial state distribution.
Definition 1.5 (Initial State Distribution).
The initial state distribution of a process is the probability distribution over which determines the first state in a path, .
Definition 1.6 (State-Transition Function).
The state-transition function of a process is the function which takes the truncated path to the distribution of states from which the next state is drawn.
Thus, the state at is determined by
s_{t+1} ∼ T(p_t) = T(s_0, a_0, …, s_t, a_t).    (1.1.2)
Definition 1.7 (Agent).
An agent (some authors call this function a policy, referring to its embodiment as an agent; see Subsection 1.2.1, Definition 5.6, and Subsection 7.1) is a function from the set of truncated prime paths P′ of a process (notice that P′ contains only truncated paths, and P contains only infinite paths) into the set of actions of the process:
π : P′ → A.    (1.1.3)
s_0 ∼ I,
a_0 = π(p′_0) = π(s_0),
s_1 ∼ T(p_0) = T(s_0, a_0),
a_1 = π(p′_1) = π(s_0, a_0, s_1),
⋮
s_{t+1} ∼ T(p_t),
a_{t+1} = π(p′_{t+1}),
⋮
Agents are judged by the quality of the control which they exert, as measured by the expectation of the reward of their paths.
Definition 1.8 (Reward).
The reward is a function R : P → ℝ which assigns a real value to each path.
Definition 1.9 (Expected Reward).
The expected reward is the expectation of the reward of paths drawn from the distribution of paths generated by the process :
E_{p ∼ D_π}[R(p)].    (1.1.121)
Reinforcement Learning is concerned with the optimal control of decision processes, pursuing an optimal agent which achieves the greatest possible expected reward . To this end, learning algorithms (see Subsection 1.3) are employed. Many algorithms pursue such agents by searching the agents which are “near” the agents which they have recently considered. In some contexts, this suffices to guarantee a solution in finitely many steps (see Theorem 2.1). When this cannot be guaranteed, the problem is often restated in terms of discovering “satisfactory” agents, or discovering the highest quality agent possible under certain constraints.
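In practice, the expectation above is rarely available in closed form and is instead estimated by sampling. The sketch below is a minimal illustration, not taken from this paper: it assumes a simplified environment object whose reset method draws an initial state and whose step method returns the next state and an immediate reward, and it estimates the expected reward of an agent by averaging the total reward of sampled truncated paths.

```python
def sample_path(env, agent, horizon):
    """Roll out one truncated path: alternating states and actions, plus rewards."""
    prime_path = [env.reset()]          # p'_0 = (s_0), with s_0 drawn from the initial distribution
    rewards = []
    for _ in range(horizon):
        action = agent(prime_path)      # a_t = agent(p'_t)
        state, reward = env.step(action)
        rewards.append(reward)
        prime_path += [action, state]   # extend to p'_{t+1}
    return prime_path, rewards

def estimate_expected_reward(env, agent, horizon=100, n_paths=32):
    """Monte Carlo estimate of the expected reward of `agent` (undiscounted sum of rewards)."""
    totals = []
    for _ in range(n_paths):
        _, rewards = sample_path(env, agent, horizon)
        totals.append(sum(rewards))
    return sum(totals) / len(totals)
```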
Remark 1.2 (Table of Notation).
We will frequently reference states, actions, paths, and sets and distributions of the same (without superscripts, P represents the set of possible paths; note that only truncated paths can be prime):
| | Individual | Set | Distribution |
|---|---|---|---|
| State | s | S | I, T(p_t) |
| Action | a | A | π(p′_t) |
| Path | p | P | D_π |
| Prime Path | p′ | P′ | D′_π |
Often, the decision processes in Reinforcement Learning are assumed to be Markov Decision Processes (MDPs); decision processes which satisfy the Markov property.
Definition 1.10 (Markov Property).
A decision process has the Markov Property if, given the prior state-action pair , the distribution of the state-action pair is independent of the pairs .
Remark 1.3 (A Simple Condition for the Markov Property).
Sometimes a slightly stricter definition of the Markov property is used: a decision process is guaranteed to be Markov if is a function of and is a function of alone (which is itself a function of ). We call such agents strictly Markov:
Definition 1.11 (Strictly Markov Agent).
A strictly Markov agent is a function from the set of states into the set of actions :
π : S → A.    (1.1.122)
1.2 Computational considerations
When studying modern Reinforcement Learning, it is important to keep the constraints of practice in mind. Among the most important of these is the quantification of states and actions. In practice, the sets of states and actions are typically real, finite, or a combination thereof.
1.2.1 The Agent
Agents can be represented in a variety of ways, but most modern algorithms represent agents with real function approximators (often neural networks) parameterized by a set of real numbers ,
f_θ : ℝ^m → ℝ^n,  θ ∈ ℝ^k.    (1.2.1)
This is true even in cases where the sets of states or actions are not real. To reconcile an approximator’s domain and range with such sets, the approximator is fit with an input function and an output function , which mediate the interaction between a process and a function approximator:
ι : P′ → ℝ^m,    (1.2.2)
ω : ℝ^n → A.    (1.2.3)
When composed with these functions, the approximator forms an agent:
π = ω ∘ f_θ ∘ ι.    (1.2.4)
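To make the composition above concrete, the following sketch is purely illustrative; the linear "approximator", the helper names, and the assumption that states are real vectors are all inventions for the example. It wraps a parameterized real function with an input function that encodes the most recent state and an output function that maps the real output to a discrete action.

```python
import numpy as np

n_state_features, n_actions = 4, 3
theta = np.random.randn(n_actions, n_state_features)   # parameters of the approximator

def approximator(x, theta=theta):
    """A real function R^m -> R^n (a linear map standing in for a neural network)."""
    return theta @ x

def input_fn(prime_path):
    """Encode the truncated prime path as a real vector (here: the most recent state)."""
    return np.asarray(prime_path[-1], dtype=float)

def output_fn(y):
    """Map the real output to a discrete action (greedy choice; see Definition 1.12)."""
    return int(np.argmax(y))

def agent(prime_path):
    """The agent is the composition output_fn ∘ approximator ∘ input_fn."""
    return output_fn(approximator(input_fn(prime_path)))
```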
1.2.2 The Output Function
The choice of output function is important and requires consideration of several aspects of the process, including the learning process and the action set . One of the most consequential roles of an output function occurs in processes where is finite. In such processes, the output function must map a real vector to a discrete action.
Some learning algorithms, such as -Learning, invoke the action-value function (see Definition 2.2) in their learning process, training an agent to estimate the expected reward of each action at each state, given an agent [9, 81]. Because the object of approximation in -learning is defined analytically, some output functions are less sensible than others. Other algorithms, like policy gradients, have outputs with less fixed meanings. Consequently, they can accommodate a variety of output functions, and the output function has a substantial effect on the agent itself. Let us consider some of the most common output functions. For further discussion of output functions, see Appendix A.1.
Definition 1.12 (Greedy Action Sampling).
In Greedy Action Sampling, the function approximator's output is taken to indicate a single "greedy" action, and this greedy action is taken [1]. In problems with finite sets of actions, the range of the parameterized agent is real, and the action associated with the dimension of greatest magnitude (the greedy action) is taken.
Definition 1.13 (-Greedy Sampling).
In ε-Greedy Sampling, the greedy action is taken with probability 1 − ε. Otherwise, an action is drawn from the uniform distribution on the set of actions [1].
Unlike the greedy sampling methods above, Thompson sampling requires a finite set of actions.
Definition 1.14 (Thompson Sampling).
In Thompson Sampling, the range of the parametrized agent is , and the action is selected by drawing from the random variable which gives action probability [10].
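Each of the three sampling rules above can be written in a few lines. The sketch below is a generic illustration rather than the paper's implementation; it assumes the approximator's output is a real vector with one entry per action and, for Thompson Sampling, that this vector has already been normalized into a probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy(values):
    """Greedy sampling: take the action whose entry has the greatest value."""
    return int(np.argmax(values))

def epsilon_greedy(values, epsilon=0.1):
    """With probability 1 - epsilon take the greedy action, otherwise a uniform action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(values)))
    return greedy(values)

def thompson(probabilities):
    """Draw an action with the probability the agent assigns to it."""
    return int(rng.choice(len(probabilities), p=probabilities))
```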
1.3 Learning Algorithms
We are concerned primarily with online learning algorithms, in which exploration is most sensible. Whereas most dynamic programming based methods study only a single agent in each epoch of training, we consider a broader class of learning algorithms:
Definition 1.15 (Learning Algorithm).
A learning algorithm (sometimes optimizer) is an algorithm which generates sets of candidate agents , observes their interactions with a process, and uses this information to generate a set of candidates in the next epoch. This procedure is repeated for each epoch to improve the greatest expected reward among considered agents,
max_{π ∈ Π_n} E_{p ∼ D_π}[R(p)].    (1.3.1)
Remark 1.4 (Loci).
Many learning algorithms have an additional property which can simplify discussions of learning algorithms: they center generations of agents around a locus agent, or locus of optimization, denoted .
Not every interesting algorithm falls within this definition (e.g. evolutionary methods with multiple loci), but many of the discussions in this paper apply, mutatis mutandis, to a broader class of methods.
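As a deliberately simplified instance of Definition 1.15, the sketch below implements an epoch loop organized around a locus: each epoch perturbs the locus parameters to generate a set of candidate agents, evaluates them, and moves the locus to the best candidate found so far. The evaluation function is assumed to be something like the Monte Carlo estimate of expected reward sketched earlier; none of the names here come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn(evaluate, locus, n_epochs=50, n_candidates=16, radius=0.1):
    """A minimal locus-centered learner.

    evaluate: maps a parameter vector to an estimate of its expected reward.
    locus:    initial parameter vector (the locus of optimization).
    """
    best_value = evaluate(locus)
    for _ in range(n_epochs):
        # Generate a set of candidate agents near the locus.
        candidates = [locus + radius * rng.standard_normal(locus.shape)
                      for _ in range(n_candidates)]
        values = [evaluate(c) for c in candidates]
        # Move the locus to the best candidate if it improves on the current best.
        i = int(np.argmax(values))
        if values[i] > best_value:
            locus, best_value = candidates[i], values[i]
    return locus, best_value
```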
1.4 Interpreting the Reinforcement Learning Problem
Many learning algorithms are influenced by a philosophy of optimization called Dynamic Programming [5] (DP). Dynamic programming approaches control problems state by state, determining the best action for each state according to an approximation of the state-action value function (see Definition 2.2). DP has compelling guarantees in finite (an MDP is called finite if its sets of states and actions are both finite) strictly Markov decision processes [11]. However, because DP methods require that every action value be approximated during each step of optimization [8, 12], these guarantees cannot transfer to infinite problems.
Although DP is dominant in Reinforcement Learning [1, 73], several trends indicate that it may not be required for effective learning. Broadly, the problems of interest in Reinforcement Learning have shifted substantially from those Bellman considered in the early 1950s [9]. Among other changes, the field now studies many infinite problems, agents are not generally tabular, and the problems of interest are not in general Markov. Each of these changes independently prevents a problem from satisfying the requirements of dynamic programming [12]. Simultaneously, approaches not based on DP, like Evolution Strategies [13] have shown that DP methods are not necessary to achieve performance on par with modern dynamic programming based methods in common reinforcement learning problems. Taken together, these trends indicate that DP may not be uniquely effective.
This paper considers exploration in the light of these developments. We distinguish DP methods like Q-Learning, which treat reinforcement learning problems as collections of subproblems, one for each state, from methods like Evolution Strategies [13], which treat reinforcement learning problems as unitary. We call the former class dynamic, and the latter class naïve.
2 Exploitation and its discontents
The study of Reinforcement Learning employs several abstractions to describe and compare the dynamics of learning algorithms which may have little in common. One particularly important concept is a division between two supposed tasks of learning: exploitation and exploration. Exploitation designates behavior which targets greater reward in the short term and exploration designates behavior which seeks to improve the learner’s understanding of the process. Frequently, these tasks conflict; learning more (exploration) can exclude experimenting with agents which are estimated to obtain high reward (exploitation) and vice versa. The problem of balancing these tasks is known as the Exploration versus Exploitation problem. Let us consider exploitation, exploration, and their conflicts.
2.1 Exploitation
Exploitation can be defined in a version of the Reinforcement Learning problem in which the goal is to maximize the cumulative total of rewards obtained by all sampled agents: let p_π be a path sampled from D_π; the cumulative objective is then
Σ_n Σ_{π ∈ Π_n} R(p_π).    (2.1.1)
Definition 2.1 (Exploitation).
A learning method is exploitative if its purpose is to increase the reward accumulated by the agents considered in the epochs .
Fortunately, this definition of exploitation remains relevant to our problem because it informs the cumulative version of the problem. Exploitation advises the dispatch of agents which are expected to be high-quality, rather than those whose behavior and quality are less certain. In effect, exploitation is conservative, preferring “safer” experimentation and incremental improvement to potentially destructive exploration. While exploitation is crucial to reinforcement learning, explorative experiments are often necessary because sometimes, exploitation alone fails.
2.2 Exploitation Fails
When exploitation fails, it is because its conservatism causes it to become stuck in local optima. Because Reinforcement Learning methods change the locus agent gradually, and exploitative methods typically generate sets of agents near the locus, exploitative learners can become trapped in local optima. Figure 3 provides a unidimensional representation of this problem: because the agents fall within the sampling radius, and updates to the locus agent are generally interpolative, the product of the learning process depends exclusively on the initial locus and the sampling distance.
2.2.1 The Policy Improvement Theorem
No discussion of local optima in Reinforcement Learning would be complete without a discussion of the Policy Improvement Theorem [12, 8]. Let us begin with a definition:
Definition 2.2 (Action Value Function (Q)).
Let the process be a strictly Markov decision process (Remark 1.3). Then, the Action Value Function Q_π is the expectation of reward (reward is assumed to be summable (1.1.118), and the discount is assumed to be exponential, γ^t), starting at state s, taking an action a, and continuing with an agent π [8]:
Q_π(s, a) = E[ Σ_{t=0}^{∞} γ^t r(s_t, a_t) | s_0 = s, a_0 = a, a_t = π(s_t) for t ≥ 1 ].    (2.2.1)
Theorem 2.1 (Policy Improvement Theorem).
For any agent in a finite, discounted, bounded-reward strictly Markov decision process, either there is an agent such that
Q_π(s, π′(s)) > Q_π(s, π(s))  for some state s,    (2.2.2)
or is an optimal agent.
This theorem also holds if this condition is appended:
π′(x) = π(x)  for every state x ≠ s.    (2.2.3)
With this condition, Theorem 2.1 shows that every agent either has a superior neighbor (an agent which differs from in its response to exactly one state), or it is optimal.
This means that any finite strictly Markov decision process has a discrete “convexity”; every imperfect agent has a superior neighbor. Unfortunately, many of the problems of interest in modern Reinforcement Learning are not finite or, if finite, are too large for the Policy Improvement Algorithm, which exploits this theorem, to be tractable [8, 54].
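For a finite, strictly Markov problem in which Q can be computed or estimated for every state-action pair, one policy-improvement step is simply a greedy sweep over states. The sketch below is a generic illustration of that step (the array names and shapes are assumptions, not the paper's notation); iterating it until no state changes is the kind of procedure the Policy Improvement Algorithm refers to.

```python
import numpy as np

def improve(q_values, policy):
    """One policy-improvement sweep.

    q_values: array of shape (n_states, n_actions) holding Q_policy(s, a).
    policy:   array of shape (n_states,) holding the current action for each state.
    Returns the improved policy and whether any state changed.
    """
    greedy = np.argmax(q_values, axis=1)
    changed = bool(np.any(greedy != policy))
    return greedy, changed
```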
2.3 The Assumptions of Dynamic Programming
Modern Reinforcement Learning confronts many problems which do not satisfy the assumptions of the Policy Improvement Theorem. Many modern problems fail directly, using infinite decision processes. Many others technically satisfy the requirements of the theorem, but are so large as to render the guarantees of policy improvement notional with modern computational techniques. Still others fail the qualitative conditions, for example, by not being strictly Markov.
The calculations necessary to guarantee that one policy is an improvement upon another require that the learner have knowledge of each state and action in the process, as well as of the dynamics of the decision process. While the excessive size of either set is easiest to exhibit, the interplay of the sizes of these sets tends to be the true source of the problem. Difficulties caused by this interplay are known as the curse of dimensionality, a term which Bellman used to describe the way that problems with many aspects can be more difficult than the sum of their parts [6, ix]. For example, the size of the set of agents grows exponentially in the number of states and polynomially in the number of actions:
|Π| = |A|^{|S|}.    (2.3.1)
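Even a modest problem makes this count astronomically large; the following toy computation (our own illustration, not an example from the paper) counts the deterministic strictly Markov agents of a small gridworld.

```python
# Deterministic strictly Markov agents on a 10x10 gridworld with 4 actions:
n_states, n_actions = 10 * 10, 4
n_agents = n_actions ** n_states   # |A| ** |S|
print(n_agents)                    # 4**100 is roughly 1.6e60 distinct agents
```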
In large problems, the curse of dimensionality makes the guarantees of Dynamic Programming almost impossible to achieve in practice. Curiously, the performance of DP methods seems to degrade slowly, even as the information necessary for the guarantees of DP quickly becomes unachievable.
2.4 Incomplete Information and Dynamic Programming
One way to address the size of a Reinforcement Learning problem is to collect the information necessary for Dynamic Programming more efficiently. Dynamic Programming methods which approximate the action-value function rely on several types of information, all based in the superiority condition:
Q_π(s, π′(s)) > Q_π(s, π(s)).    (2.4.1)
Q-based methods rely on the learner's ability to approximate the Q-function. There are two ways to approach this problem: the Q-function can be approximated directly (i.e. separately for each state), or it can be calculated from knowledge or approximations of several aspects of the process:
1. The set of states, S,
2. The set of actions, A,
3. The immediate reward function, r,
4. The state-transition function, T, and
5. The agent, π.
Calculating the function is, in a sense, more efficient: if the quantities above are known to an acceptable degree of precision, then a consistent action-value function can be imputed without repeated sampling.
The complexity of this operation is a function of . In a sense, this method is inexpensive; the number of state-action pairs is much smaller than the number of possible sample paths, . It is also typically much smaller than the number of agents, .
This kind of approximation is fundamental to Dynamic Programming methods, and serves as the basis for many exploration methods. In general, reinforcement learning algorithms are assumed to have full knowledge of the sets of states and actions, but they may not have complete information about the remaining quantities. Thus, exploration has sometimes been defined as pursuing experiences of state-action pairs, specifically the quadruplets (s, a, r, s′) which can be associated with them in a path. If enough of these are collected, it is possible to approximate the expected reward of each state-action pair, as well as the transition probabilities.
Importantly, because these quadruplets can be induced by particular actions (i.e., can be generated by taking the action at a time when is visited), the task of collecting the information about these quadruplets can be reduced to a simple two-step formula: first, visit each state , and second, attempt each action in that state. Under the right circumstances, proceeding in this way results in experience of every state-action pair, .
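In small finite problems this bookkeeping is straightforward. The sketch below (illustrative only; it assumes experience has already been collected as lists of (s, a, r, s′) quadruplets) tallies the experienced quadruplets and turns the tallies into empirical estimates of immediate reward and transition frequencies.

```python
from collections import defaultdict

def tabulate(paths):
    """Estimate mean immediate reward and transition frequencies from (s, a, r, s') quadruplets."""
    reward_sum = defaultdict(float)
    counts = defaultdict(int)
    next_counts = defaultdict(lambda: defaultdict(int))
    for path in paths:                       # each path: list of (s, a, r, s_next) tuples
        for s, a, r, s_next in path:
            reward_sum[(s, a)] += r
            counts[(s, a)] += 1
            next_counts[(s, a)][s_next] += 1
    mean_reward = {sa: reward_sum[sa] / counts[sa] for sa in counts}
    transition = {sa: {s2: n / counts[sa] for s2, n in nxt.items()}
                  for sa, nxt in next_counts.items()}
    return mean_reward, transition
```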
This simple notion of exploration is both efficient and effective in some circumstances: if is small and all of its states may be easily visited, and is small, then it is easy to consider every possible state-action pair. After enough sampling, this allows the -function to be approximated. Outside of problems which satisfy those conditions, it is natural to consider other notions of exploration. Sometimes, these definitions take a more descriptive form, for example in Thrun: “exploration seeks to minimize learning time” [14]. Even without sufficient information about the process to guarantee policy improvement, Dynamic Programming methods have performed admirably [3, 4].
These results could be seen as a testament to the efficacy of Dynamic Programming under non-ideal circumstances, however, there is a curious countervailing trend: in many of these problems, simple or black-box methods such as Evolution Strategies [13] (an implementation of finite-differences gradient approximation) have been able to match the performance of modern Dynamic Programming methods. This conflicts with the present theory in two ways: first, it challenges the idea that Dynamic Programming is uniquely efficient, or uniquely suited to Reinforcement Learning problems. Second, because these methods are not dynamic, they lack the usual information requirements which exploration is supposed to resolve, yet they are useless without exploration (as seen in Figure 3) - if information about state-action pairs is not directly employed, what is the role of exploration?
In the next section we begin by describing the properties of the methods of Subsection 1.2.2 which guarantee that eventually, every state-action pair will be experienced. We then discuss methods which address circumstances where certain states are difficult to handle.
3 Exploration and contentment
Dynamic exploration can be divided into two categories: directed exploration and undirected exploration [14]. Undirected methods explore by using random output functions like -Greedy Sampling and Thompson Sampling (see Subsection 1.2.2) to experience paths which would be impossible with the corresponding greedy agent. These are called undirected because the changes which the output functions make to the greedy version of the agent are not intended to cause the agent to visit particular states. Instead, these methods explore through a sort of serendipity.
Directed methods have specific goals and mechanisms more narrowly tailored to the optimization paradigm they support. Some directed methods, like #Exploration [15] (see Definition A.5), seek to experience particular state-action pairs by directing the learner, through exploration bonuses, to consider agents which lead to state-action pairs which have been visited less in the learning history. In order to apply the state-action pair formulation of exploration to large and even infinite problems, #Exploration employs a hashing function to simplify and discretize the set of states.
Other directed exploration methods, like that of Stadie et al. [16] (see Definition A.6), employ exploration bonuses to incentivize agents which visit states which are poorly understood. Stadie et al. begin by modeling the state-transition function (in a strictly Markov process) with a learned model. In each step of the process, the model guesses the next state. After guessing, the model is trained on the observed transition. When the model is less accurate (i.e. when the distance between the predicted and the observed next state is large; Stadie et al. assume that the set of states is a metric space), Stadie et al. reason, there is more to be learned there, and their method assigns exploration bonuses to encourage agents which visit such states. Unfortunately, none of these methods can recover the guarantees of Dynamic Programming in problems which do not satisfy the requirements of the theory of Dynamic Programming. More detailed descriptions of these methods may be found in Appendix A.
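The shape of such a bonus is easy to illustrate. The sketch below is a schematic prediction-error bonus in the spirit of Stadie et al., not their implementation: the linear forward model, its update rule, and the bonus scaling are placeholders, and states are assumed to be real vectors.

```python
import numpy as np

class PredictionErrorBonus:
    """Reward states whose transitions a simple forward model predicts poorly."""

    def __init__(self, n_features, learning_rate=0.01, scale=1.0):
        self.W = np.zeros((n_features, n_features))   # linear forward model: next_state ≈ W @ state
        self.lr = learning_rate
        self.scale = scale

    def bonus(self, state, action, next_state):
        # `action` is unused by this simplified state-only model.
        prediction = self.W @ state
        error = np.linalg.norm(next_state - prediction)   # distance in state space
        # Train the model on the observed transition (gradient step on squared error).
        self.W += self.lr * np.outer(next_state - prediction, state)
        # Larger error -> more to learn about this state -> larger bonus.
        return self.scale * error
```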
3.1 The Essence of Exploration
The brief survey above and in Appendix A is far from complete, but it contains the core strains of most modern exploration methods. A more thorough discussion of exploration may be found in [14] or [17]. In spite of its incompleteness, our survey suffices for us to reason generally about exploration, and about its greatest mystery: why do methods which were developed to collect exactly the information necessary for dynamic programming appear to help naïve methods succeed?
Because naïve methods do not make use of action-values, nor do they collect information about particular state-action pairs, the dynamic motivations of these exploration methods cannot explain their efficacy when paired with naïve methods of exploitation. Instead, there must be something about the process of exploration itself which aids naïve methods. That poses a further challenge to the dynamic paradigm: if exploration is effective in naïve methods for non-dynamic reasons, to what extent do those reasons contribute to their effect in dynamic methods?
These exploration methods seem to share little beyond their motivations. One other thing which they share - and which they by necessity share with every reinforcement learning algorithm - is that their mechanism is, ultimately, aiding in the selection of the next set of agents . Undirected methods accomplish this by selecting an output function, and directed methods go slightly further in influencing the parameters of the agents, but this is their shared fundamental mechanism.
What differentiates exploration methods from other reinforcement learning methods is that they influence the selection of agents not to improve reward in the next epoch, as is standard in exploitation methods, but to collect a more diverse range of information about the process. It is the combination of this mechanism and purpose which makes a method explorative:
Definition 3.1 (Exploration).
A reinforcement learning method is explorative if it influences the agents for the purpose of information collection.
We now know two things about exploration: in Section 2 we established that exploration was a process which sought additional information about the process. We have now added that exploration is accomplished by changing the agents which the learner considers. This, however, does not resolve our question: under the dynamic programming paradigm, these changes are made so as to collect the information necessary to calculate action-values. What is the information which naïve methods require, and how does dynamic exploration collect it? To what extent does that other information contribute to the effectiveness of those methods in dynamic programming?
To continue our study of exploration in naïve learning, we begin in the next section with a discussion of Novelty Search, an algorithm which uses a practitioner-defined behavior function to explore behavior spaces. In Section 5, we describe a general substrate for naïve exploration which is general enough to contain other exploration substrates, and is equipped with a useful topological structure.
4 Naïve Exploration
One of the most prominent examples of naïve exploration is an algorithm called Novelty Search [7]. In contrast to the other methods which we discuss in this work, its creators do not describe it as a learning algorithm. Instead, they call novelty search an “open-ended” method. Nonetheless, methods which incorporate Novelty Search can usually be analyzed as learning algorithms, since they typically satisfy Definition 1.15, with the possible exception of the “purpose” of the method.
4.1 Novelty Search
Novelty Search is an algorithmic component of many learning algorithms which was introduced by Joel Lehman and Kenneth O. Stanley [7, 18]. Unlike other learning methods, Novelty Search works to encourage novel, rather than high-reward agents.
Definition 4.1 (Novelty Search).
Novelty Search is a component which can be incorporated into many learning algorithms; it defines the behavior and novelty of the agents which the learning algorithm considers.
Because Novelty Search does not specify an optimizer, the details of implementation can vary, but the “search” in Novelty Search refers to the way that Novelty Search methods seek agents with higher novelty scores . These scores are based upon the scarcity of an agent’s behavior within a behavior archive (4.1.2).
Definition 4.2 (Behavior).
The behavior of an agent is the image of that agent (in practice, many behavior functions are functions of a sampled path of the agent, rather than the agent itself) under a behavior function (or behavior characterization)
b : Π → X,    (4.1.1)
a function from the set of agents into a space of behaviors X equipped with a distance (footnote 8: The literature on Novelty Search is most sensible when the range of the behavior function is assumed to be a metric space, and Novelty Search is usually discussed under that pretense. However, the "novelty metrics" employed in Abandoning Objectives are not metrics in the mathematical sense (see Definition 5.1); instead, they are squared Euclidean distances, which are symmetric [19]. We use the word distance to refer to a broader class of functions, and use the word metric and its derivatives in the formal sense.)
The behavior archive in Novelty Search is a subset of the behaviors which have been observed in the learning process,
(4.1.2) |
In general, the archive is meant to summarize the behaviors which have been observed so far using as few representative behaviors as possible, so as to minimize computational requirements.
Definition 4.3.
The novelty of a behavior is a measure of the sparsity of the behavior archive around it. In Abandoning Objectives, Lehman and Stanley use the average distance (see footnote 8) from that behavior to its k nearest neighbors in the archive:
ρ(x) = (1/k) Σ_{i=1}^{k} dist(x, μ_i),    (4.1.3)
where μ_1, …, μ_k are the k nearest neighbors of x in the archive.
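A direct implementation of this score needs only a distance on behaviors and the archive. The sketch below is illustrative; the Euclidean distance, the plain list archive, and the default k are assumptions rather than details taken from Lehman and Stanley.

```python
import numpy as np

def novelty(behavior, archive, k=15):
    """Mean distance from `behavior` to its k nearest neighbors in `archive`."""
    behavior = np.asarray(behavior, dtype=float)
    if not archive:
        return float("inf")   # everything is novel against an empty archive
    dists = sorted(np.linalg.norm(behavior - np.asarray(b, dtype=float)) for b in archive)
    return float(np.mean(dists[:k]))
```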
Novelty Search reveals something about naïve methods as a class: because they do not operate under the dynamic paradigm, individually manipulating the ways that agents respond to each situation possible in the process, they must employ another structure to understand the agents which they consider, and to determine the next generation of agents. For this purpose, Novelty Search uses the structure of the chosen behavior space X. Let us consider the structures other naïve methods might use.
In the case of exploitation, a simple structure is available: reward. The purpose of exploitation is to improve reward, so reward is the relevant structure. Under the dynamic framework, reward is decomposed into the immediate rewards, and agents are specified in relation to these. In the naïve framework, that decomposition is not available, so only the coarser reward of an agent can be used. In some problems, other structures may correlate with reward, but these correlations can be inverted by changes to the state-transition function or reward function, so the only a priori justifiable structure for exploitation is reward.
Exploration is more complex. In Dynamic Programming, the information necessary to solve a Reinforcement Learning problem is well defined, but in the naïve framework, there is not a general notion of information “sufficient” to solve a problem (except for exhaustion of the set of agents). That is, naïve exploration does not have a natural definition in the same way as naïve exploitation or dynamic exploration. Let us then consider a definition of exclusion:
Definition 4.4 (Naïve Exploration).
A learning method is a method of naïve exploration if
i. The method itself is naïve, and
ii. The set of agents is explored using a structure that is not induced by the expected reward.
Under this definition, Novelty Search is clearly a method of naïve exploration. Let us consider it further. Rather than treat it as a unique algorithm, we can consider Novelty Search to be a family of exploration methods, each characterized by the way that it projects the set of agents into a space of behaviors .
Novelty Search can be analyzed with respect to several goals: it may be viewed as an explorative method, or, when paired with a method of optimization, it may be seen as an open-ended or learning method in and of itself. Unfortunately, the capacity of Novelty Search to accomplish any of these goals is compromised by the subjectivity of the behavior function. Because the behavior function relies on human input, the exploration which is undertaken, the diversity which Novelty Search achieves, and the reward at the end of a learning process involving Novelty Search all rely on the beliefs of the practitioner. Instead of viewing this subjectivity as a problem, Lehman and Stanley embrace it, suggesting that behavior functions must be determined manually for each problem, writing:
There are many potential ways to measure novelty by analyzing and quantifying behaviors to characterize their differences. Importantly, like the fitness function, this measure must be fitted to the domain. -Lehman and Stanley [20]
If followed, this advice would make it virtually impossible to disentangle Novelty Search as an algorithm from either the problems to which it is applied or the practitioners applying it. Fortunately, some authors have rejected this suggestion, pursuing more general notions of behavior.
We now present a brief overview of some of the behavior functions in the literature, including those described in Lehman and Stanley’s pioneering Novelty Search papers [7, 18]. Appendix B presents a more detailed discussion of these functions as well as some other behavior functions which could not be included in this summary.
In their flagship paper on Novelty Search, Lehman and Stanley [7] consider as their primary example problem a two-dimensional maze. As a secondary example they take a similar navigation problem in three dimensions. Importantly for our discussion, behavior functions in both environments admit a concept of the agent's position. Lehman and Stanley introduce two behavior functions in their study of Novelty Search, both of which are functions of the position of the agent throughout a sampled truncated path.
The simpler of these behavior functions takes the agent's final position (i.e. the position of the agent when the path is truncated), and the more complex behavior function takes as behavior a list of positions recorded at temporal intervals throughout the path. These functions provide insight into Lehman and Stanley's intuitions about behavior: to them, behavior relates to the state of the process, rather than to the agent or its actions. Because the state of the process depends upon the interaction of the process and the agent, these definitions assure that behavior reflects the interaction between the process and the agent.
This is important. Notice that under the function-approximation framework (Subsection 1.2.2), any function with appropriate range and domain could be treated as an agent. As a result, if behavior were taken to be a matter of the function approximator alone, one would be forced to accept the premise that the structure of the set of agents should be identical for any pair of processes with the same sets of states and actions. Identical even when the state-transition functions differ. In other words, all three-dimensional navigation tasks would have the same space of agents. This issue certainly suffices to explain Lehman and Stanley’s attitude toward behavior functions. It does not, however, necessitate that approach.
Other authors have considered near-totally general notions of behavior. One group of these focuses on collating as much information as possible about every point in a path. In the case of Gomez et al. [21], this involves concatenating some number of observed states. Conti et al. [22] take a similar approach, replacing the states of the decision process with RAM states - the version of the state of the decision process stored by the computer.
Another class of general notions of behavior focuses instead on the actions of the agent itself, as viewed across a subset of . We call this class of functions Primitive Behavior.
4.2 Primitive Behavior
Primitive behavior functions define the behavior of a strictly Markov agent as the restriction of the agent to a subset of :
Definition 4.5 (Primitive Behavior).
A behavior function is said to be primitive if it is a collection of an agent’s actions in response to a finite set of states ; is primitive iff
b(π) = (π(s))_{s ∈ F}  for some finite F ⊆ S;    (4.2.1)
the distance on the set of behaviors is given by a weighted (by w) sum of distances between the actions of the agents on the chosen states (this assumes that the set of actions is a metric space with metric d_A):
dist(b(π_1), b(π_2)) = Σ_{s ∈ F} w(s) · d_A(π_1(s), π_2(s)).    (4.2.2)
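Computing this distance requires nothing beyond evaluating the two agents on the chosen states. A brief sketch, in which the weighting and the metric on actions (here a discrete metric) are placeholder choices:

```python
def primitive_distance(agent_1, agent_2, states, weights=None, action_metric=None):
    """Weighted sum of action distances over a finite set of states."""
    if weights is None:
        weights = [1.0] * len(states)
    if action_metric is None:
        action_metric = lambda a, b: 0.0 if a == b else 1.0   # discrete metric on actions
    return sum(w * action_metric(agent_1(s), agent_2(s))
               for s, w in zip(states, weights))
```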
This notion of behavior, with slight modifications, has appeared in several papers in the Reinforcement Learning literature [23, 24, 25, 26]. At least one existing work uses this notion of behavior in Novelty Search [23]. Another [24] uses it for optimization with an algorithm other than Novelty Search. [23, 25, 26] weight the constituent distances (i.e. is not constant), and [25] uses primitive behavior to study the relationship between behavior and reward.
A number of important properties of primitive behavior have been described. For example, [24] notes that agents with the same behavior may have different parameters. [23] takes implicit advantage of the fact that agents which do not encounter a state do not meaningfully have a response to it (a fact we address in Item iib of Section 5), and [25] considers the states which an agent encounters an important aspect of the agent, using them to create equivalence classes of agents.
We call this notion of behavior primitive because it is the simplest notion of behavior which completely describes the interaction between the agent and the process on the chosen states. Thus, for an appropriate set of states, the primitive behavior contains all of the relevant information. So long as a definition of behavior only depends on the interaction between the agent and the process, primitive behavior thus suffices to determine every other notion of behavior.
Clearly, this does not follow the advice of Lehman and Stanley [7]; it is completely unfitted to the underlying problem. In exchange for this lack of fit, primitive behavior is fully general. Further, because primitive behavior is simply a restriction of the agent itself, every other notion of behavior is downstream of primitive behavior, provided that an appropriate set is used.
However, because the selection of , and of the weights is itself a matter of choice, primitive behavior in general remains somewhat subjective. In finite problems, it is possible to assign to every state a non-zero weight, which produces a sort of objective distance, but this remains problematic; two agents might differ on a state which neither of them ever visits. Should agents and which produce identical processes and really be described as different? We contend in Definition 5.6 that the answer is “no”.
Our task in the next several sections is to resolve this and other issues with primitive behavior. In the next section we approach the matter of a general substrate for exploration from the ground up. We begin with the simplest version of primitive behavior (that associated with a single state) and proceed to a “complete” notion of behavior: a distance between agents which properly discriminates between agents (see Definition 5.6). In Section 6, we demonstrate some properties of this completed space, which we call the agent space.
5 Seeking a Structure for Naïve Exploration
Under Definition 4.4, naïve exploration is a category of exclusion. Any naïve method which is not exploitative, i.e. which does not use the structure induced by an agent’s expected reward, is a method of naïve exploration. Our task in this section is to develop a good structure for exploration. That is, to develop a structure on the space of agents, other than reward, which captures important aspects of the relationships of agents to one another and to the process. We seek a structure which:
i. Exists in every discrete-time decision process,
ii.
iii. Naturally describes important relations on the set of agents.
Such a structure would allow us to compare structures which are used in naïve exploration methods, including, for example, the various behavior functions which have been used in Novelty Search. If computationally tractable, such a structure could also provide the basis for a new exploration method, or perhaps even a new definition of exploration. Let us begin by considering one possible kind of structure for this.
5.1 Prototyping the Agent Space
In contending with a generic discrete-time decision process, few assets are available to define the structure of an agent space. At a basic level, there are only two types of interaction between an agent and a process: the generation of a state, in which the decision process acts on the agent, and the action of an agent, in which the agent acts on the decision process. Every other aspect of a decision process may be regarded as a function of those interactions. Let us begin by using these basic interactions to define a metric space, following the behavior functions used in Novelty Search.
Definition 5.1 (Metric).
A metric on a set is a function
d : M × M → [0, ∞)    (5.1.1)
which satisfies the metric axioms:
1. Identity of Indiscernibles: d(x, y) = 0 if and only if x = y,
2. Symmetry: d(x, y) = d(y, x),
3. Triangle Inequality: d(x, z) ≤ d(x, y) + d(y, z).
Let us begin with a simple case, comparing strictly Markov agents on their most basic elements: their actions on the process in response to a single state .
5.2 The Distance on
Let the set of actions A be a metric space with metric d_A. Then, we define the distance between agents π_1 and π_2 on a single state s:
Definition 5.2 (Distance on ).
The distance on s between π_1 and π_2 is the distance between their actions on s:
d_s(π_1, π_2) = d_A(π_1(s), π_2(s)).    (5.2.1)
Importantly, this distance is not a metric (see footnote 8). Instead, it is a pseudometric; it cannot distinguish agents which act identically on s but differently on another state.
Definition 5.3 (Pseudometric).
A pseudometric on a set is a function
d : M × M → [0, ∞)    (5.2.2)
which satisfies the pseudometric axioms
1. Indiscernibility of Identicals: x = y implies d(x, y) = 0,
2. Symmetry: d(x, y) = d(y, x),
3. Triangle Inequality: d(x, z) ≤ d(x, y) + d(y, z).
This distance describes the differences between and on the state , but decision processes involve many states, potentially infinitely many. Certainly, the action which agents take in response to a single state does not suffice to explain the differences between agents. Let us begin to resolve this by comparing agents on a finite set of states ().
5.3 The Distance on
Having defined the distance between agents on a single state , we can define the distance on a set of states by summation. Denote the distance between agents on a finite set of states as .
Definition 5.4 (Distance on ).
The distance on between and is the sum of the distances between and on each element of the set:
d_F(π_1, π_2) = Σ_{s ∈ F} d_s(π_1, π_2)    (5.3.1)
= Σ_{s ∈ F} d_A(π_1(s), π_2(s)).    (5.3.2)
Depending upon the process and the set itself, this could contain all of the states in a process (for decision processes with finite sets of states), or a set of important states. We can also extend this notion of distance by weighting the distance at each state with a weight , producing the primitive behavior of Subsection 4.2,
dist(b(π_1), b(π_2)) = Σ_{s ∈ F} w(s) · d_A(π_1(s), π_2(s)).    (4.2.2)
Consider a special case for : let be the set of states observed before time in a path :
Definition 5.5 (The distance on ).
d_{p,t}(π_1, π_2) = Σ_{k < t} d_A(π_1(s_k), π_2(s_k)),    (5.3.3)
This is the distance between π_1 and π_2 over a truncated path. When the path is drawn from the distribution of paths generated by one of the agents, this distance is especially interesting; it is a description of the way that the other agent would have acted differently over a path that the first agent actually encountered, a sort of backseat-driver metric.
The distance on is powerful in a number of respects. Let us consider the case . Clearly, this implies that and do not differ at all on this path - presented with the same initial state, they would produce exactly the same truncated path. Unfortunately, the distance at still fails to satisfactorily distinguish agents - it says nothing about paths in which other states are encountered, or longer paths, or about the stochastic nature of decision processes. Let us state these failings directly so that we may address them:
Remark 5.1 (Three Properties).
I. The distance on a truncated path drawn from is not reciprocal; it describes how differs from , but not how differs from ,
II. This distance ignores the stochasticity of the process; the ways in which and differ on do not necessarily imply anything about the other paths which experiences,
III. The distance on a truncated path cannot account for infinite paths; if the agents are not assumed to be strictly Markov, one can easily construct a pair of agents and which differ only on longer paths. Even in the strictly Markov case, some states might only be possible after .
5.4 The Role of Time
It is common in Reinforcement Learning to treat problems with infinite time horizons by weighting sums with a discount function γ. If the space of actions is bounded (a metric space is bounded iff the distances between its points have a finite upper bound) and the discount function satisfies
Σ_{t=0}^{∞} γ(t) < ∞,    (5.4.1)
then we may define the distance between strictly Markov agents π_1 and π_2 on a complete path:
d_p(π_1, π_2) = Σ_{t=0}^{∞} γ(t) · d_A(π_1(s_t), π_2(s_t)).    (5.4.2)
Clearly, if is bounded, then [27, 60]
d_p(π_1, π_2) ≤ sup_{a, a′ ∈ A} d_A(a, a′) · Σ_{t=0}^{∞} γ(t) < ∞.    (5.4.3)
Thus, this pseudometric resolves the problem of Item III of Remark 5.1, describing the differences in the action of agents over an infinite path.
Our definition of the distance on a path readily admits a change that allows it to be defined for agents which are not strictly Markov. All we must do is evaluate the agents on truncated prime paths rather than on single states:
d_p(π_1, π_2) = Σ_{t=0}^{∞} γ(t) · d_A(π_1(p′_t), π_2(p′_t)).    (5.4.4)
Let us use the distance on to define a notion of distance which incorporates the stochastic aspects of the interaction between an agent and a process, resolving the problem of Item II of Remark 5.1.
5.5 The Distance at
Consider , the distribution of paths generated by the interaction of an agent with . Given a discount function satisfying (5.4.1), [27, 318]
d_π(π_1, π_2) = E_{p ∼ D_π}[ d_p(π_1, π_2) ]    (5.5.1)
= E_{p ∼ D_π}[ Σ_{t=0}^{∞} γ(t) · d_A(π_1(p′_t), π_2(p′_t)) ]    (5.5.2)
exists and is bounded above. We call this quantity the distance at .
Because the expectation integrates the distance on over all of the paths of , compares and on every part of the process which experiences. This guarantees that the stochasticity involved in the interaction of and is considered in the comparison. However, the stochasticity involved in the processes and may be more relevant to their comparison than that produced by . In the next section, we begin to address this by considering the case .
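In practice, the distance at an agent can be estimated exactly the way expected reward is: sample paths from the vantage agent's distribution and average the discounted action distances along them. The sketch below is an illustration under the same assumed environment interface as the earlier sample_path helper; the discrete metric on actions and the exponential discount are placeholder choices.

```python
def path_distance(agent_1, agent_2, prime_path, gamma=0.99,
                  action_metric=lambda a, b: 0.0 if a == b else 1.0):
    """Discounted sum of action distances along the prefixes of one sampled prime path."""
    n_steps = (len(prime_path) + 1) // 2      # prime_path alternates s_0, a_0, s_1, ...
    return sum((gamma ** t) * action_metric(agent_1(prime_path[:2 * t + 1]),
                                            agent_2(prime_path[:2 * t + 1]))
               for t in range(n_steps))

def distance_at(env, vantage, agent_1, agent_2, horizon=100, n_paths=32, gamma=0.99):
    """Monte Carlo estimate of the distance at `vantage` between agent_1 and agent_2."""
    total = 0.0
    for _ in range(n_paths):
        prime_path, _ = sample_path(env, vantage, horizon)   # helper from the earlier sketch
        total += path_distance(agent_1, agent_2, prime_path, gamma)
    return total / n_paths
```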
5.6
In order to state our next result, we must introduce agent identity. Agent identity collapses two artificial distinctions between representations of agents caused by the use of function approximators. First, it treats approximators which are differently parameterized but identical as functions (e.g. because of a permutation of the order of parameters, as noted by [24]) as identical. Second, in keeping with the methods of [25], it treats functions which differ only on a set of probability 0 as identical.
Definition 5.6 (Agent Identity ()).
Let us say that the agents and are identical as agents () if and only if the set of paths where they differ has probability 0 in ;
(5.6.1) |
That is, and are identical as agents if and only if the probability of encountering a path where they differ is 0.
Remark 5.2 (Agent Identity and ).
Notice that this implies .
Theorem 5.1 (Identical agents have the same local distance).
If and are identical as agents, then
(5.6.2) |
Proof by induction.
[Base case:] Suppose . Then,
(5.6.3) | |||||
(5.6.4) | |||||
(5.6.5) |
That is, if ( and are identical), then (they act identically on truncated prime paths of length 0). Then, the joint distributions and are identical.
Thus, the joint distributions of these and the next state, given by , are also identical: the total variation distance of and is 0;
(5.6.6) | |||||
and | (5.6.7) | ||||
(5.6.8) | |||||
(5.6.9) | |||||
(5.6.10) |
The distribution determines the component of at . By (5.6.4), we have .
Let and suppose that . By (5.6.4), we have . Then, since and are the joint distributions of these, they also have total variation 0, and since is fixed, the resulting distributions and also have total variation 0;
(5.6.11) | |||||
and | (5.6.12) | ||||
(5.6.13) | |||||
(5.6.14) |
By assumption (5.6.4), we have
(5.6.15) |
and thus, the joint distributions also have total variation 0:
(5.6.16) |
The same holds for all , so we have
(5.6.17) | ||||
(5.6.18) | ||||
(5.6.19) |
∎
Corollary 5.1 (Identical agents are indiscernible under their shared local distance.).
While the identity of indiscernibles (see Definition 5.1) does not generally hold for , it does hold if
1. We consider the agents as agents, and
2. The distance is taken at one of the considered agents;
d_{π_1}(π_1, π_2) = 0  implies that  π_1 and π_2 are identical as agents.    (5.6.20)
Proof..
Remark 5.3 (Symmetry of ).
Notice that because is an integral of distances between agents, which are symmetric, is symmetric;
d_π(π_1, π_2) = d_π(π_2, π_1).    (5.6.24)
Corollary 5.1 demonstrates that the agent identity relation of Definition 5.6 is reflexive; for every pair of agents , .
5.7 : When Distance 0 Does Not Imply Agent Identity
The picture provided above when the distance is taken at one of the agents being compared is complicated when the comparison is made from a different vantage point. Let us consider some of these cases:
1. Sometimes, identical agents may be distinguished by a local distance,
2. Sometimes, different agents will not be distinguished,
3. Sometimes, identical distances imply identical agents,
4. Sometimes, identical distances don't.
All of these problems stem from the issues of state visitation mentioned in Subsection 4.2 and [25]: unless , may visit paths which does not, and vice versa.
Example 5.1 (, but ).
In general, the functions and can be identical as agents, while they differ in their responses to unvisited paths. Then, from the perspective of an agent which visits such paths, and appear different.
Example 5.2 (, but ).
Similarly, it is possible for and to be identical on every path which visits, but differ when or control the process.
Example 5.3 ().
In general, local distances form a bijection with the distributions and the processes , and thus .
Example 5.4 (, but ).
However, when the set of agents is restricted, this is not necessarily true. If, for example, agents are assumed to be strictly Markov (see Remark 1.3), then all that matters to the equivalence of the functions and is the states which they visit; if = , , then agents and which differ in their response to could nonetheless produce “identical” distances, when the range of the distance is restricted to pairs of strictly Markov agents.
With so few assurances, the local distances may seem pointless. Are they nothing more than markers of identity? No, they are much more; Subsection 6.1 demonstrates that this is not a special case. Instead, the local distances themselves are continuous in the agent space: as two agents approach one another under either local distance, i.e. as either local distance between them goes to 0, the distance between them under the other local distance goes to 0 as well.
6 The Agent Space
The collection of local distances described in Section 5 is an odd basis for the structure of an agent space; rather than a single, objective notion of distance, each agent defines its own local distance . When paired with the set of agents, each local distance defines a pseudometric space , which describes the ways that agents differ on .
Subsection 5.6 establishes relationships between local distances, but only in the case of identical agents which differ as functions. We have not yet related the local distances of non-identical agents. In particular, we have not established that a collection of local distances defines a single space.
One interpretation of the collection of local distances is as a premetric (see Subsection 6.2), in a manner analogous to the Kullback-Leibler Divergence. However, can also be treated as more than a premetric; it need not be asymmetrical, nor need it violate the triangle inequality, because each agent defines a local distance that describes an internally-consistent pseudometric space. We continue this discussion in greater detail in Subsection 6.2, employing the premetric to provide a simple topology equivalent to that defined by convergence in the agent space (Definition 6.1).
In the next section, we unify the pseudometric spaces produced by each local distance to create an objective agent space, whose topology is compatible with many important aspects of Reinforcement Learning, including standard function approximators (e.g. neural networks) and standard formulations of reward (see Equation 7.2.4).
6.1 Convergence in Agent Spaces
Theorem 5.1 proves that identical agents have the same local distance,
(6.1.1) |
Corollary 5.1 gives an important condition for equivalence: agents are identical, and thus have identical local distances, whenever . The next step in our analysis of the local distance is to consider the case where and are close to one another, but their distance is greater than 0. Consider the case
(6.1.2) |
with . In order to simplify the remainder of this section, we restrict ourselves to stochastic agents. Let the metric on , , be the total variation distance . In this case, the logic of Theorem 5.1 can be extended. Theorem 5.1 demonstrates that two agents which are at every time identical must produce identical distributions of paths, and, as a result, identical local distances. Consider a pair of agents and , which have a distance less than on , , with . Then,
(6.1.3) | |||||
(6.1.4) | |||||
(6.1.5) |
Since does not vary with the agent, we have
(6.1.7) | |||
(6.1.8) |
Likewise,
(6.1.9) |
should imply that
(6.1.10) |
except that since
(6.1.8) |
we must start from a baseline of , giving the bound . In general, we have
(6.1.11) |
This bound can be improved by noting that the total variation can be bounded above by (and is in fact equal to) the smaller quantity
(6.1.12) |
yielding in the general case
(6.1.13) |
With our assumption that , we can bound the right side of this inequality above, giving the looser inequality
(6.1.14) | ||||
(6.1.15) |
If the discount function is not the constant value (i.e. if for some ), as assumed above, the sum above gains a factor of :
(6.1.16) |
For simplicity we now assume that , though the following results apply to a more general family of functions (for example, they apply to all monotonic super-exponential decay functions).
Lemma 6.1 (The total variation can be bounded above by a function of the local distance).
Notice that when
(6.1.17) |
so for a fixed distance , the maximal total variation is achieved when
(6.1.18) |
Thus, we can bound the total variation above by . Thus,
(6.1.19) |
Lemma 6.1 enables us to prove our next theorem, the limit equivalent of Corollary 5.1. Let us begin with a definition.
Definition 6.1 (Convergence in the Agent Space).
We say that a sequence of agents converges to an agent if and only if the local distance between the agents in the sequence and goes to 0;
lim_{n→∞} d_π(π, π_n) = 0.    (6.1.20)
Theorem 6.1 (The Limit Behavior of Local Distances).
Let be a sequence of agents converging to . Then,
1. the distributions of truncated paths produced by π_n converge to those produced by π,
2. the local distances converge: d_{π_n}(π_1, π_2) → d_π(π_1, π_2) for all agents π_1 and π_2, and
3. d_{π_n}(π_n, π) → 0.
Proof of 1.
By (6.1.19), we have for any fixed and any agent
(6.1.21) |
By assumption, for every there is an with . For any , there is a with . Thus, we can select an with
(6.1.22) | ||||||
(6.1.23) |
∎
Proof of 2.
Per the proof of 1, we have for any and any an giving . Let be the bound on (in the case of total variation, ). For each path , we have
(6.1.24) | ||||
(6.1.25) |
and we have the analogous bound for
(6.1.26) |
Remark 6.1 (Notation for the Distance on Distributions of Truncated Paths).
We now need to manipulate terms of this type, for which a bit of notation will be useful: Let
(6.1.27)
(6.1.28)
Further, for any agent we can decompose into
(6.1.29)
(6.1.30)
(6.1.31)
Notice that the maximum value of is
(6.1.32)
and we can bound from above as well,
(6.1.33)
(6.1.34)
(6.1.35)
Clearly, as , this bound goes to 0.
Proof of 3.
This theorem demonstrates that agents which are close in the agent space have close perspectives and produce close local distances. In fact, the proof of Item 2 of Theorem 6.1 demonstrates that the local distances which represent those perspectives are uniformly continuous in the agent. Further, Item 1 of Theorem 6.1 demonstrates that similar agents produce similar distributions of truncated paths - not just similar distance functions.
In the next section we consider a loose method of interpreting the local distances: the interpretation of the local distance as a function of two, rather than three, agents, fixing the vantage point at the first agent being compared. This allows us to describe the local distance as a premetric. We use this fact to define the topology of the agent space in Subsection 6.3.
6.2 The Local Distance as a Premetric
A premetric is a generalization of a metric which relaxes several properties, giving the very general definition
Definition 6.2 (Premetric).
A function is called a premetric if [19, 23]
(6.2.1) $d(x, y) \geq 0$ and $d(x, x) = 0$ for all $x, y$
such a premetric is called separating if it also satisfies
(6.2.2) $d(x, y) = 0 \implies x = y$
Many important functions satisfy this definition, including the Kullback-Leibler Divergence.
Remark 6.2 (The Local Distance is a Premetric).
Notice that the function
(6.2.3)
is a separating premetric.
Importantly for the practical use of the local distances, this premetric (along with the other structures of the agent space) is able to describe the differences between agents and between the distributions of paths which they produce without actually sampling those distributions; compares the distributions and , but requires only information about the distribution and the functions and . This is valuable because in Reinforcement Learning it is typically simple to calculate the actions which an agent would take from that agent’s parameters, whereas information about the distribution usually needs to be sampled - an expensive operation. It is especially valuable when many nearby agents need to be compared (e.g. because the agents being considered are based on a single locus agent). Operations like this, which compare a pair of agents using the standard of a third, are common in Reinforcement Learning. For example, the function of Definition 2.2 is often used to judge the quality of the actions of other agents .
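As an illustration of this point, the comparison can be carried out on a batch of truncated prime paths collected once from the vantage (locus) agent, after which any number of candidate agents can be compared without further sampling. The following is a minimal Python sketch under assumed interfaces (each agent is callable on a path and returns a probability vector over a finite set of actions, and total variation is used on action distributions); it is an illustration, not the implementation used in this work.

import numpy as np

def total_variation(p, q):
    # total variation distance between two finite action distributions
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def approximate_local_distance(agent_a, agent_b, locus_paths, weights):
    # Approximate the distance between agent_a and agent_b from the vantage
    # point of a locus agent, using truncated prime paths already collected
    # from that locus agent; no further sampling of the process is needed.
    total = 0.0
    for path, w in zip(locus_paths, weights):
        total += w * total_variation(agent_a(path), agent_b(path))
    return total / sum(weights)

Because the locus agent's paths are gathered once, comparing many candidates against one another scales with the number of candidates, not with the cost of sampling the process.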
In the next section we describe a topology on the agent space which we will take as canonical (i.e. as the topology of the agent space). There are two basic ways to understand the topology: it may be understood as the topology of the premetric space given by the premetric on the agent space described above, or it may be understood as the topology given by the convergence relation of Definition 6.1. These are identical. In fact, Definition 6.1 can be defined using only the premetric description of the local distance.
6.3 The Topology of the Agent Space
Let us start by providing two equivalent definitions of the topology of the agent space: one definition of its open sets, and another definition of its closed sets.
Definition 6.3 (The Topology of : Open Sets).
We say that a set is open if and only if, about every point , it admits an open disk of positive radius:
(6.3.1)
Definition 6.4 (The Topology of : Closed Sets).
We say that a set is closed if and only if it contains its limit points; that is, for every convergent sequence with , is closed iff
(6.3.2)
These definitions suffice, in fact, to define the topology of any premetric (or metric). It may be demonstrated that these definitions produce the same topology (for example, by remarking that open sets in metric spaces may be characterized by the criterion (6.3.1)). It is important to note that this, along with the premetric version of the agent space, represents a sort of lower-bound on the structure which the local distances describe on the set of agents. In particular, the local distances may prove useful beyond simple problems of limits. In [28], we employ the local distances for exploration in an implementation of Novelty Search.
In the next section, we demonstrate that the topology of the agent space is compatible with many of the most important aspects of Reinforcement Learning. In particular, we show that standard formulations of reward are continuous in the agent space, and that the agent space itself is continuous in the parameters of most agent approximators, demonstrating that the agent space is a valid structure for the set of agents, and for Reinforcement Learning more generally.
7 Functions of the Agent
The topology of the agent space carries information about many important aspects of the decision process and its interaction with agents, including the distributions of truncated paths. However, we have not yet demonstrated any relationship between the agent space and the object of Reinforcement Learning: the expected reward of the agent, . In this section we demonstrate that an important class of reward functions (summable reward functions, (1.1.118)) are continuous functions of the agent in the topology of the agent space. We begin with a simple condition for the agent to be a continuous function of the parameters of a function approximator. We then use the continuity of finite distributions of paths established in Item 1 of Theorem 6.1 in the agent space to prove that the expectation of reward is a continuous function of the agent.
7.1 Parameterized Agents
Let be a function approximator parameterized by a set of real numbers , taking truncated prime paths into a set of actions. Then,
(7.1.1)
If we delay the selection of the truncated path, we may understand as a function from into the set of agents:
(7.1.2)
Notice that we have returned to the pre-quotient notion of an agent - the set of functions from the set of truncated prime paths to the set of actions before the equivalence relation of Definition 5.6 is applied. To better distinguish these functions, let us denote the pre-quotient set . Demonstrating that a particular function approximator is a continuous function from its parameters to the agent space may be divided into two parts: first, that the function approximator is a continuous function from the set of parameters to the set of functions; and second, that the quotient operation itself is a continuous function from the set of functions to the set of agents. We begin with the continuity of the quotient operation.
Let us assume the metric on the set of functions and denote the map taking a function to an agent by . Then, the quotient operation which takes the set of agents to the space of agents is continuous if and only if for every convergent sequence in , converges.
Theorem 7.1 (The Agent Identity Quotient Operation is Continuous).
The quotient operation defined by Definition 5.6 is a continuous function from the to .
Proof.
Let be a -convergent sequence of functions converging to
(7.1.3)
(7.1.4)
Then, we must show that
(7.1.5)
Consider the definition of :
(7.1.6)
(7.1.7)
Clearly, we have
(7.1.8)
(7.1.9)
Thus, . Recall that . Thus,
(7.1.10)
exists because an integer which satisfies
(7.1.11)
exists, by assumption. ∎
To finish the demonstration that a particular function approximator yields agents which are continuous in its parameters, then, it remains only to show that the function approximator is -continuous (uniformly continuous) in its parameters. One class of function approximators which satisfies this is feedforward neural networks, such as those discussed in [29] (specifically, neural networks with continuous, bounded activation functions are uniformly continuous in their parameters).
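As a rough numerical illustration of this condition (not a proof), one can perturb the parameters of a small network with bounded activations and observe that the induced action distributions move by a correspondingly small amount in total variation, uniformly over a batch of inputs. The architecture and names below are assumptions made for the example, not those of [29].

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def policy(theta, x):
    # one hidden layer with a bounded (tanh) activation, mapping an encoded
    # truncated prime path x to a distribution over 4 actions
    W1, W2 = theta
    return softmax(np.tanh(x @ W1) @ W2)

W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
theta = (W1, W2)
theta_eps = (W1 + 1e-4 * rng.normal(size=W1.shape),
             W2 + 1e-4 * rng.normal(size=W2.shape))

X = rng.normal(size=(1000, 8))  # a batch of encoded inputs
tv = 0.5 * np.abs(policy(theta, X) - policy(theta_eps, X)).sum(axis=1)
print(tv.max())  # small parameter changes yield small total-variation changes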
7.2 Reward and the Agent Space
In order for the agent space to be useful for the problem of Reinforcement Learning, it must be related to the object of Reinforcement Learning: reward.
(1.1.117)
We noted in Definition 1.8 that reward can frequently be described by a sum,
(1.1.118)
We also noted that this sum is often weighted by a discount function . Discount functions are employed because they offer general conditions under which the reward of a path (and thus its expectation) is bounded: so long as is finite and the immediate reward is bounded, so too is the sum (1.1.118).
This formulation of reward has several valuable properties which can be extracted: the reward function can be extended from to include truncated paths:
(7.2.1)
(7.2.2)
Clearly, for any path for which exists we have
(7.2.3)
If the immediate reward is bounded and the sum is weighted by a discount function with finite sum then is bounded and we have the stronger condition
(7.2.4)
That is, under such a summable discount function, the reward of a truncated path converges uniformly to the reward of the full path as .
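For example, with a geometric discount $\gamma^{t}$ ($0 < \gamma < 1$) and a bound $r_{\max}$ on the immediate reward (notation assumed here for illustration only), the tail of the sum gives
$\left| R(f) - R(f_{<t}) \right| \leq \sum_{k \geq t} \gamma^{k} \left| r_k \right| \leq r_{\max} \, \frac{\gamma^{t}}{1 - \gamma},$
which tends to 0 as $t \to \infty$ independently of the path, giving the uniform convergence claimed above.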
Theorem 7.2 (Functions Continuous in the Agent Space).
Proof.
Let us demonstrate that is a continuous function of by showing that for any convergent sequence converging to ,
(7.2.8)
Thus, our goal is to demonstrate that
(7.2.9)
By assumption of (7.2.4),
(7.2.10)
Consider the expectation of the reward of the truncated paths of ,
(7.2.11)
By (7.2.10), for an appropriate value of we have
(7.2.12)
Thus, we have
(7.2.13)
(7.2.14)
(7.2.15)
Now, let us consider Item 1 of Theorem 6.1, which demonstrates that for any and any sequence of agents converging to , goes to 0 as , so we have
(7.2.16)
Then we have
(7.2.17)
Thus, for sufficiently large and , we have for
(7.2.18)
(7.2.19)
(7.2.20)
(7.2.21)
(7.2.22)
∎
8 Conclusion
In this work we consider the problem of exploration in Reinforcement Learning. We find that exploration is understood and well-defined in the dynamic paradigm of Richard Bellman [5], but that it is not well-defined for other optimization paradigms used in Reinforcement Learning. In dynamic Reinforcement Learning, exploration serves to collect the information necessary for dynamic programming, as described in Subsection 2.4. In non-dynamic Reinforcement Learning - what we call naïve Reinforcement Learning - the situation is more complex. We find that dynamic methods of exploration remain effective in naïve methods, but the justification offered by dynamic programming cannot explain that efficacy, because naïve methods do not use the information which dynamic programming requires.
This leads us to several questions: Why are exploration methods designed to provide information which naïve Reinforcement Learning cannot use nonetheless effective for naïve methods? What should the definition of exploration be for naïve methods? To what extent does this more general kind of exploration contribute to the effectiveness of dynamic exploration in dynamic methods? To resolve these questions, we consult the commonalities of several dynamic methods of exploration, finding two: first, their dynamic justification, and second, their mechanism - considering different agents which are deemed likely to demonstrate different distributions of paths.
Of these, only the mechanism might serve to explain dynamic exploration’s efficacy in naïve methods, and we take this mechanism as the definition of naïve exploration. This definition, however, leaves a gap: under it, totally random experimentation with agents is explorative. This may be effective in small problems, but it is unprincipled. We find a principle in Novelty Search [7]: in exploration one should consider agents which are novel relative to the agents which have already been considered. To determine novelty, they use the distance between the behavior of an agent and the behaviors of those considered in the past.
However, we find their notion of novelty deficient for the purpose of defining naïve exploration; they require that the function which determines the behavior of an agent be separately and manually determined for each reinforcement learning problem. Fortunately, this view is not held universally in the literature. We consider a cluster of behavior functions which we call primitive behavior [23, 24, 25]. Primitive behavior is powerful: because it is composed of the actions of an agent, it is possible in some processes for primitive behavior to fully determine the distribution of paths, and thus to determine every notion of behavior derived from .
Unfortunately, primitive behavior has several flaws. First, only in certain finite processes may the primitive behavior of an agent fully determine . Second, primitive behavior can inappropriately distinguish between agents (see Definition 5.6). Third, it necessarily retains the manual selection requirement in decision processes with infinitely many states. In Section 5, we describe a more general notion of the distance between agents - one which does not require a behavior function. Instead, we define a structure on the set of agents itself. We call the resulting structure an agent space.
In Section 6 and Section 7, we describe the topology of the agent space, demonstrating that it carries information about many important aspects of Reinforcement Learning, including the distribution of paths produced by an agent and standard formulations of the reward of an agent. Using these facts, we demonstrate that, for many function approximators, reward is a continuous function of the parameters of an agent.
In a future work [28], we use techniques described in Appendix C to join the agent space with Novelty Search to perform Reinforcement Learning using a naïve, scalable learning system similar to Evolution Strategies [13]. We test this method in a variety of processes and find that it performs similarly to ES in problems which require little exploration, and is strictly superior to ES in problems in which exploration is necessary.
References
- [1] Richard S Sutton and Andrew G Barto “Reinforcement Learning: An Introduction” Cambridge, Massachusetts: MIT Press, 2020 URL: https://mitpress.mit.edu/books/reinforcement-learning-second-edition
- [2] J.C. Gittins and D.M. Jones “A Dynamic Allocation Index for the Discounted Multiarmed Bandit Problem” In Biometrika 66.3, 1979, pp. 561–565 URL: https://www.jstor.org/stable/2335176
- [3] Oriol Vinyals et al. “Grandmaster level in StarCraft II using multi-agent reinforcement learning” In Nature 575.7782 Springer US, 2019, pp. 350–354 DOI: 10.1038/s41586-019-1724-z
- [4] OpenAI et al. “Dota 2 with Large Scale Deep Reinforcement Learning”, 2019 arXiv: http://arxiv.org/abs/1912.06680
- [5] Richard Bellman “The Theory of Dynamic Programming” In Summer Meeting of the American Mathematical Society, 1954 URL: https://apps.dtic.mil/sti/citations/AD0604386
- [6] Richard Bellman “Dynamic programming” Princeton, New Jersey: Princeton University Press, 1957
- [7] Joel Lehman and Kenneth O Stanley “Abandoning Objectives: Evolution Through the Search for Novelty Alone” In Evolutionary Computation 19.2, 2011, pp. 189–233 DOI: 10.1162/EVCO_a_00025
- [8] Christopher J C H Watkins “Learning From Delayed Rewards”, 1989 URL: https://www.researchgate.net/publication/33784417
- [9] Richard E. Bellman and Stewart E. Dreyfus “Applied Dynamic Programming” In Journal of Mathematical Analysis and Applications, 1962 URL: https://www.rand.org/pubs/reports/R352.html
- [10] Shipra Agrawal and Navin Goyal “Further Optimal Regret Bounds for Thompson Sampling” In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics 31 Scottsdale, Arizona: PMLR, 2013, pp. 99–107 arXiv: http://proceedings.mlr.press/v31/agrawal13a.html
- [11] Christopher J C H Watkins and Peter Dayan “Technical Note: Q-Learning” In Machine Learning 8, 1992, pp. 279–292 DOI: 10.1023/A:1022676722315
- [12] Ronald A. Howard “Dynamic Programming and Markov Processes” MIT Press / John Wiley & Sons, Inc., 1960. Lib. Cong. 60-11030
- [13] Tim Salimans et al. “Evolution Strategies as a Scalable Alternative to Reinforcement Learning”, 2017, pp. 1–13 arXiv: https://openai.com/blog/evolution-strategies/
- [14] Sebastian B Thrun “THE ROLE OF EXPLORATION IN LEARNING CONTROL” In Handbook for Intelligent Control: Neural, Fuzzy and Adaptive Approaches, 1992 URL: http://www.cs.cmu.edu/%7B~%7Dthrun/papers/thrun.exploration-overview.html
- [15] Haoran Tang et al. “#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning” In 31st Conference on Neural Information Processing Systems (NIPS 2017), 2017 DOI: 10.5555/3294996.3295035
- [16] Bradly C. Stadie, Sergey Levine and Pieter Abbeel “Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models”, 2015 arXiv: https://ui.adsabs.harvard.edu/abs/2015arXiv150700814S/abstract
- [17] Lilian Weng “Exploration Strategies in Deep Reinforcement Learning” In lilianweng.github.io/lil-log, 2020 URL: https://lilianweng.github.io/lil-log/2020/06/07/exploration-strategies-in-deep-reinforcement-learning.html
- [18] Joel Lehman and Kenneth O Stanley “Exploiting Open-Endedness to Solve Problems Through the Search for Novelty” In Proceedings of the Eleventh International Conference on Artificial Life XI Cambridge, Massachusetts: MIT Press, 2008 URL: http://eplex.cs.ucf.edu/papers/lehman%7B%5C_%7Dalife08.pdf
- [19] A.V. Arkhangel’skiǐ and L.S. Pontryagin “General Topology I”, Encyclopaedia of Mathematical Sciences Springer-Verlag Berlin Heidelberg, 1990, pp. 202 DOI: 10.1007/978-3-642-61265-7_1
- [20] Joel Lehman and Kenneth O Stanley “NOVELTY SEARCH AND THE PROBLEM WITH OBJECTIVES” In Genetic Programming Theory and Practice IX Springer-Verlag, 2011, pp. 37–56 DOI: 10.1007/978-1-4614-1770-5_3
- [21] Faustino J. Gomez “Sustaining diversity using behavioral information distance” In GECCO ’09: Proceedings of the 11th Annual conference on Genetic and evolutionary computation Montréal, Québec, Canada: Association for Computing Machinery, 2009, pp. 113–120 DOI: 10.1145/1569901.1569918
- [22] Edoardo Conti et al. “Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents” In Advances in Neural Information Processing Systems 31, 2017 arXiv: https://proceedings.neurips.cc/paper/2018/hash/b1301141feffabac455e1f90a7de2054-Abstract.html
- [23] Elliot Meyerson, Joel Lehman and Risto Miikkulainen “Learning Behavior Characterizations for Novelty Search” In GECCO 2016 - Proceedings of the 2016 Genetic and Evolutionary Computation Conference Association for Computing Machinery, Inc, 2016, pp. 149–156 DOI: 10.1145/2908812.2908929
- [24] Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski and Stephen Roberts “Effective Diversity in Population Based Reinforcement Learning”, 2020 arXiv: https://research.google/pubs/pub49976/
- [25] Jörg Stork, Martin Zaefferer, Thomas Bartz-Beielstein and A E Eiben “Understanding the Behavior of Reinforcement Learning Agents” In International Conference on Bioinspired Methods and Their Applications 2020 Brussels, Belgium: Springer, 2020, pp. 148–160 DOI: 10.1007/978-3-030-63710-1_12
- [26] Aldo Pacchiano et al. “Learning to score behaviors for guided policy optimization” In 37th International Conference on Machine Learning, ICML 2020, 2020, pp. 7445–7454 arXiv: https://proceedings.mlr.press/v119/pacchiano20a.html
- [27] Walter Rudin “Principles of Mathematical Analysis” McGraw-Hill, 1976 URL: https://www.maa.org/press/maa-reviews/principles-of-mathematical-analysis
- [28] Matthew W. Allen, John C. Raisbeck and Hakho Lee “Distributed Policy Reward & Strategy Optimization” In Unpublished, 2021 DOI: TBD.
- [29] Kurt Hornik, Maxwell Stinchcombe and Halbert White “Multilayer feedforward networks are universal approximators” In Neural Networks 2.5, 1989, pp. 359–366 DOI: 10.1016/0893-6080(89)90020-8
- [30] Mathias Edman and Neil Dhir “Boltzmann Exploration Expectation–Maximisation” In arXiv, 2019 arXiv: https://arxiv.org/abs/1912.08869
- [31] Ronald J Williams and Jing Peng “Function Optimization Using Connectionist Reinforcement Learning Algorithms” In Connection Science 3.3 Taylor & Francis, 1991, pp. 241–268 DOI: 10.1080/09540099108946587
- [32] Marc G. Bellemare et al. “Unifying Count-Based Exploration and Intrinsic Motivation” In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 1479–1487 DOI: 10.5555/3157096.3157262
- [33] Achim Klenke “Probability Theory” Springer-Verlag, 2014 DOI: 10.1007/978-1-4471-5361-0
Appendix A Exploration Methods
This appendix contains a brief review of the exploration methods mentioned in Section 3.
A.1 Undirected Exploration
A.1.1 -Greedy
One of the simplest undirected exploration algorithms is the -greedy algorithm described in Definition 1.13.
In Definition 1.13, we assumed that was a real function approximator, but -Greedy can be applied to a broader range of intermediates. All that is necessary is that the underlying function approximator indicate a single action - referred to as the “greedy” action, a reference to Reinforcement Learning algorithms which explicitly predict the value of actions (see Definition 2.2). In such algorithms, the “greedy action” is the one which is predicted to have the highest value. We call functions with this property deterministic, and the greedy action their deterministic action.
Definition A.1 (-Greedy).
An -greedy output function renders a deterministic agent stochastic by changing its action with probability to one drawn from a uniform distribution over the set of actions, , and retaining the deterministic action with probability :
(A.1.1)
where is the output function which takes ’s greedy action. (For notational simplicity we assume that the function underlying is parameterized (); this is not necessary to apply the methods of this section.)
Remark A.1 ().
Notice that the case collapses to the deterministic agent , and the case is the uniformly random agent.
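A minimal sketch of an -greedy output function over a finite action set follows; the greedy_action argument and the interface are assumptions of this illustration rather than the notation of Definition A.1.

import numpy as np

def epsilon_greedy_action(greedy_action, n_actions, epsilon, rng=None):
    # With probability epsilon, take an action drawn uniformly from the
    # n_actions; otherwise retain the deterministic (greedy) action.
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return greedy_action

# epsilon = 0 collapses to the deterministic agent; epsilon = 1 is uniformly random.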
The major benefit of -greedy sampling is that in a finite decision process, every path has a non-zero probability (provided ). Unfortunately, that can only be accomplished by assigning a diminutive probability to each of those paths. As a path deviates further from the paths generated by the agent , its probability decreases exponentially with each action which deviates from .
That restriction is not necessarily bad for optimization; by visiting paths which require few changes to the actions of , the newly discovered states are nearly accessible to , which may make them more salient to learning algorithms, since such algorithms typically change locus agents only by small amounts in each epoch.
While -Greedy can be applied to processes with finite or infinite sets of actions, the next method, Thompson Sampling, can only be defined for processes with finite sets of actions.
A.1.2 Thompson Sampling and Related Methods
Other major undirected exploration methods operate using a similar mechanism to -Greedy, to very different effect. Just like -Greedy sampling, Thompson Sampling acts as , taking the range of a function to the set of probability distributions of actions. Whereas -Greedy produces a distribution which varies only in the agent’s deterministic action (different learning algorithms approximate different objects; in -Learning (Subsection 2.4), approximates the action-value of a state-action pair ; in policy gradients, its meaning is dependent on the output function ), Thompson Sampling produces a distribution which varies with the agent’s output for each action; unlike -Greedy, Thompson Sampling is continuous in the output of .
Definition A.2 (Thompson Sampling).
A Thompson Sampling output function produces a distribution of actions from the output of a real function approximator . Thompson Sampling requires that be a real vector of dimension , whose elements are nonnegative and have sum 1; . The Thompson Output Function produces the distribution of actions
(A.1.2)
Many function approximators do not naturally produce values which fall in the set of acceptable inputs to . Several methods may be employed to make these approximators compatible with Thompson Sampling. One common method is known as Boltzmann Exploration (or as a softmax layer) [1, 37]:
Definition A.3 (Boltzmann Exploration).
A Boltzmann Exploration output function produces a distribution of actions from the output of a real function approximator . It is most easily understood as a “pre-processing” function for Thompson Sampling. Let be a real parameter ( is sometimes called temperature [30]). Then,
softmax (A.1.3)
softmax (A.1.4)
This can be composed with the regular Thompson output function:
(A.1.5)
Boltzmann Exploration is among the most common methods of creating functions which are compatible with Thompson Sampling because of its beneficial analytical properties: it is continuous, has a simple derivative (especially important for back-propagation), and guarantees that every action has a non-zero probability.
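A minimal sketch of Boltzmann Exploration composed with Thompson Sampling, assuming the underlying approximator returns one real score per action (names and interfaces are illustrative, not the paper's notation):

import numpy as np

def boltzmann(scores, temperature=1.0):
    # Softmax with temperature: maps arbitrary real scores to a probability
    # vector in which every action receives non-zero probability.
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def thompson_sample(scores, temperature=1.0, rng=None):
    # Draw an action from the distribution produced by the scores.
    if rng is None:
        rng = np.random.default_rng()
    p = boltzmann(scores, temperature)
    return int(rng.choice(len(p), p=p))

print(thompson_sample([2.0, 0.5, -1.0], temperature=0.5))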
A different kind of augmentation of Thompson Sampling and other stochastic output functions is Entropy Maximization [31]. In contrast with Boltzmann Exploration, entropy maximization modifies the learning process itself through the immediate reward function.
Definition A.4 (Entropy Maximization).
Entropy Maximization is a method used with stochastic agents which adds the conditional entropy of the action with respect to the distribution to the immediate reward,
(A.1.6)
where is a positive real parameter of the optimizer [31].
These entropy bonuses cause the learner to consider both the reward which an agent attains and its propensity to select a diversity of actions. The learner is thus encouraged to consider agents which express greater “uncertainty” in their actions, slowing the convergence of the locus agent.
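A minimal sketch of an entropy bonus added to the immediate reward, assuming the stochastic agent exposes its action distribution at the current step (names are illustrative):

import numpy as np

def entropy_augmented_reward(immediate_reward, action_probs, beta=0.01):
    # Add beta times the entropy of the agent's action distribution at the
    # current step to the immediate reward.
    p = np.clip(np.asarray(action_probs, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum()
    return immediate_reward + beta * entropy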
With respect to dynamic exploration, there is little difference between -Greedy and Thompson Sampling. Both algorithms explore the process by selecting agents which allow them to experience unexplored aspects of the process. From the naïve perspective, this similarity is overshadowed by a difference in their analytical properties: in finite problems, Thompson Sampling agents act from a continuous set of actions, whereas -Greedy agents use a [modified] finite set of actions.
The exploration methods discussed in this section are fairly homogeneous, precisely because they are undirected; the only way to explore without direction is to inject stochasticity into the optimization process. Conversely, the methods of the next section are considerably more diverse; there are many ways to direct an explorative process.
A.2 Directed Exploration
The variety of directed exploration methods makes the genre difficult to summarize. Perhaps the simplest description of directed methods is as the complement of undirected methods. Undirected exploration methods use exclusively stochastic means to explore; they do not incorporate any information specific to the process. Directed exploration methods thus include any exploration method which does incorporate such information [14]. This section describes two major families of directed exploration methods, count-based and prediction-based, through a pair of representative methods [17]. We begin with count-based exploration, exemplified by #Exploration [15].
Count-based methods [15, 32] count the number of times that each state (or state-action pair, see Subsection 2.4) has been observed in the course of learning, and use that count to guide learning toward scarcely visited states. Count-based algorithms have appealing guarantees in finite processes [32], but lose those guarantees in infinite settings. Despite this, count-based exploration continues to inspire exploration techniques in the infinite setting. #Exploration is a recent method which discretizes infinite problems, imitating traditional count-based methods.
Definition A.5 (#Exploration).
#Exploration [15] is an algorithm which augments the immediate reward function with an exploration bonus in the same manner as the entropy bonus of Definition A.4. However, instead of encouraging the learner to pursue agents which attempt new actions or visit rarely visited states, #Exploration uses hash codes. The hash codes are generated by a hashing function which discretizes an unmanageable (i.e. large or infinite) set of states into a manageable finite set of hash codes. Using these hash codes as a proxy for states, #Exploration assigns its exploration bonus in much the same way as a traditional count-based method:
(A.2.1)
Here, is the combination of the immediate reward function and the exploration bonus for that state, is the state-count function, a tally of the number of times that a state with the same hash code as , , has been visited, and is a positive real number. #Exploration pursues its goal as a count-based method by assigning greater exploration bonuses to states which have been visited fewer times.
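A minimal sketch of a hash-based count bonus in the spirit of #Exploration, with the bonus shrinking as the square root of the count grows; the hash function shown (rounding a continuous state) is an assumption of the illustration, not the hashing scheme of [15].

from collections import defaultdict
import numpy as np

class HashCountBonus:
    # Counts visits to hash codes of states and returns an exploration bonus
    # that shrinks as a code is visited more often.
    def __init__(self, hash_fn, beta=0.1):
        self.hash_fn = hash_fn            # discretizes a state into a hash code
        self.beta = beta
        self.counts = defaultdict(int)

    def augmented_reward(self, state, immediate_reward):
        code = self.hash_fn(state)
        self.counts[code] += 1
        return immediate_reward + self.beta / np.sqrt(self.counts[code])

# Example: discretize a continuous state by rounding its coordinates.
bonus = HashCountBonus(hash_fn=lambda s: tuple(np.round(s, 1)))
print(bonus.augmented_reward(np.array([0.12, -0.53]), immediate_reward=1.0))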
The next class of exploration methods in this section is called prediction-based exploration. Whereas count-based methods estimate the new information that an action will collect with a measurement of the learner’s experience of each state, prediction-based methods attempt to estimate the quality, rather than the mere quantity, of the collected information. To do this, they employ a separate modeling method which predicts the next state of the process. The better that prediction is, the higher the quality of the information which the learner has about that part of the process.
Definition A.6.
In Incentivizing Exploration [16], Stadie et al. estimate the quality of information which the executor has gathered by using that information to train a dynamics model to estimate the next state from (this algorithm uses “state encodings”, similar to the hash codes of #Exploration, rather than states). They reason that if accurately estimates the next state (as measured by the distance between and ), then the executor has gathered better information about that state-action pair. Thus, they assign exploration bonuses so as to encourage consideration of agents which visit state-action pairs for which the distance is large:
(A.2.2)
where is a constant, and is a decay constant (i.e. an increasing function of the learning epoch).
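A minimal sketch of a prediction-based bonus of this kind, assuming a hypothetical dynamics-model interface and a simple increasing decay term; it illustrates the mechanism rather than the method of [16].

import numpy as np

def prediction_bonus(model, state, action, next_state, epoch, scale=1.0):
    # Exploration bonus proportional to the dynamics model's prediction error,
    # decayed by an increasing function of the learning epoch.
    predicted = model.predict(state, action)      # hypothetical interface
    error = np.linalg.norm(np.asarray(predicted) - np.asarray(next_state))
    decay = 1.0 + 0.1 * epoch                     # an increasing decay term
    return scale * error / decay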
Appendix B Behavior Functions in the Literature
Many of the behavior functions which have been proposed have been influenced by the behavior functions of Lehman and Stanley’s initial work, and by their advice on the subject in Abandoning Objectives: Evolution Through the Search for Novelty Alone:
Although generally applicable, novelty search is best suited to domains with deceptive fitness landscapes, intuitive behavioral characterizations, and domain constraints on possible expressible behaviors. - Lehman and Stanley [7, 200]
This passage provides important insight for those who wish to apply Novelty Search to new domains in the tradition of Lehman and Stanley, but their suggestions also make it difficult to analyze Novelty Search independent of the choice of behavior function. This is especially problematic for the open-ended use of Novelty Search; without a general notion of behavior, there are few options for the comparison of behavior functions to one another or their absolute evaluation. In a given decision process, one can compare the outcomes of Novelty Search processes with different behavior functions by considering the diversity of behaviors which they produce, but this diversity must be measured by one of these or a different notion of behavior. One could consider each behavior function’s propensity to find high-quality agents, but this is a return to the just-abandoned objectives.
Lehman and Stanley are forced to compare their behavior functions in the maze environment along these lines. In a discussion of the degrees of “conflation” (assignment of the same behavior to different agents) present in their behavior functions, they write: “[I]f some dimensions of behavior are completely orthogonal to the objective, it is likely better to conflate all such orthogonal dimensions of behavior together rather than explore them.” To address these issues and make it possible to apply Novelty Search to a wider range of decision processes, several authors have considered general behavior functions [21, 22, 23, 24, 25].
The rest of this appendix provides a brief survey of some behavior functions present in the literature, beginning with the specific functions of Abandoning Objectives [7], and proceeding to general behavior functions, including those of [21, 22].
B.1 The Behavior Functions of Abandoning Objectives
The main decision processes in Abandoning Objectives are two-dimensional mazes. They consider several behavior functions in this environment, all of which are based on the position of the agent. The primary behavior function they consider is what we call the final position behavior function:
Definition B.1 (Final Position Behavior).
(B.1.1)
Where is the position at the final time in a sampled truncated trajectory .
They consider another positional behavior function: the position of the agent over time.
Definition B.2 (Position Over Time Behavior).
(B.1.2)
For
As noted in footnote 8, while these are functions into in the case of final position behavior, and in the case of position over time behavior, neither nor are treated as metric spaces. Instead, both of these are equipped with a symmetric [19, 23]: the square of the usual Euclidean distance.
These examples reveal the intuitions about behavior which Lehman and Stanley relied upon to implement Novelty Search. First, rather than reflecting the actions of an agent alone, both of these behavior functions reflect the results of the agent’s interaction with the process - in fact, they reflect the position of the agent, a function of the state. Second, these behavior functions are distilled, considering only one or a few points of time .
In the other environment, Biped Locomotion, Lehman and Stanley take a different approach to selecting the times , opting to collect spatial information once per simulated second. Explaining that difference, they write: “Unlike in the maze domain, temporal sampling is necessary because the temporal pattern is fundamental to walking.” This is a strange argument, since reinforcement learning problems are defined by their temporality (see Definition 1.1).
B.2 General Behavior Functions
Since the publication of [18], Lehman and Stanley’s first paper on Novelty Search, many authors have sought general notions of behavior [21, 22, 23, 24, 25]. This section analyzes several of these behavior functions. Let us begin with a simple notion of distance on the set of agents which is defined with for any method using a parameterized agent:
Example B.1 (The distance of ).
Consider two agents, and , represented by function approximators of the same form. Assume that they are parameterized by an ordered list of real numbers, and let their parameters be and . Then,
(B.2.1)
is a behavior function and
(B.2.2)
is a metric on this set of behaviors.
Although it is a metric (though it does not satisfy the indiscernibility of identicals under the quotient operation of Definition 5.6), this distance is unsatisfactory in several ways. For example, it can assign an agent a non-zero distance from itself if the agent can be parameterized by two different sets of parameters (see Subsection 5.6 and [24]).
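This flaw is easy to exhibit concretely: permuting the hidden units of a network changes its parameter vector without changing the function it computes, so the parameter distance between two representations of the same agent can be large. A small sketch, under an assumed one-hidden-layer architecture:

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

perm = rng.permutation(8)
W1p, W2p = W1[:, perm], W2[perm, :]      # same function, different parameters

theta  = np.concatenate([W1.ravel(),  W2.ravel()])
thetap = np.concatenate([W1p.ravel(), W2p.ravel()])

x = rng.normal(size=4)
same_outputs = np.allclose(np.tanh(x @ W1) @ W2, np.tanh(x @ W1p) @ W2p)
param_dist = np.linalg.norm(theta - thetap)
print(same_outputs, param_dist)          # True, yet a large parameter distance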
An early work of Gomez et al. [21] introduced a behavior function which maps agents to a concatenation of truncated trajectories. They then use the normalized compression distance (NCD) as a metric on this set of finite sequences.
Definition B.3 (Gomez et al. Behavior).
Gomez et al. [21] define the behavior of an agent as a concatenation of a number of observed truncated paths,
(B.2.3)
As the distance on this set of behaviors , Gomez uses the normalized compression distance , which is an approximation of the mutual information of a pair of strings:
(B.2.4) $\mathrm{NCD}(x, y) = \dfrac{C(xy) - \min(C(x), C(y))}{\max(C(x), C(y))}$
Where is the length of the compressed sequence.
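A minimal sketch of the normalized compression distance using a general-purpose compressor (zlib); the serialization of trajectories into byte strings is an assumption of the illustration.

import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance between two byte strings, using the
    # compressed length as a stand-in for Kolmogorov complexity.
    cx, cy, cxy = (len(zlib.compress(s)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Example: behaviors serialized as concatenated observed truncated paths.
b1 = b"s0 a1 s2 a0 s1 a1"
b2 = b"s0 a0 s1 a1 s2 a0"
print(ncd(b1, b2))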
Appendix C Agent Spaces In Practice
While the agent space is in general not a metric space (see Section 5, Subsection 5.7, and Subsection C.1), this does not proscribe its use in e.g. Novelty Search, which has long used metric-adjacent spaces (see footnote 8). In an upcoming work [28], we describe an approach to the Reinforcement Learning problem based on an extension of Evolution Strategies [13] which combines naïve reward and Novelty Search [7] of the agent space, selecting the locus agent as the perspective for comparisons during each epoch, to solve a variety of reinforcement learning problems (see Subsection C.3).
We consider Novelty Search and the Agent Space to be naïve artifacts, but we cannot a priori restrict them to naïve learning methods. For example, [25] uses several versions of primitive behavior (see Subsection 4.2) to explain “reward behavior correlation[s]”, showing that agents which are similar under certain primitive behavior functions perform similarly. In Section 7, the Agent Space completes this line of inquiry: we demonstrate analytically that reward is a continuous function of the agent in the agent space. The reasoning of the Agent Space in Definition 5.6 also provides a clean explanation for the observation of [25] that certain states may be totally unimportant to performance.
C.1 When is Equivalent to
While local distances do not generally produce homeomorphic topologies, it is worthwhile to note that many basic problems in the literature do have agents which produce homeomorphic topologies, especially in the Markov and strictly Markov cases. Let us begin by considering the equivalence of the measures underlying local distances:
Definition C.1 (Equivalence of Measures).
A pair of measures , each on a measurable space are said to be equivalent iff [33, 157]
(C.1.1) $\mu(E) = 0 \iff \nu(E) = 0$ for every measurable set $E$
When the measures underlying and , and , are equivalent, they produce equivalent topologies. In general, this is rare. In most decision processes, some paths can only be visited by a subset of agents. However, there are certain circumstances where these distances are guaranteed to be equivalent.
Clearly, if two agents differ with non-zero probability, then the distributions of truncated paths which they produce must also differ. However, this does not apply when only Markov agents are considered (see Definition 1.10). In this case, if the distributions of states, rather than truncated paths, are equivalent, then the distances produce equivalent topologies.
Remark C.1 (Notation for the Probability of a State).
The next few results require a simple notation for the probability of a state occurring in a distribution of paths. This is complicated by the fact that in any path there are an infinity of states, so that the sum of the probabilities of a state occurring at each time might be infinite. To resolve this, we weight the probability of a state at by and then normalize these probabilities with . Let
(C.1.2)
In general, this gives
Theorem C.1 (A Condition for the Equivalence of and ).
In a Markov decision process with a finite set of states, the local distances and are equivalent whenever the distributions of states in and are equivalent.
This theorem has several important manifestations. Let us consider the case where the local distances of all agents possible in the process are equivalent:
Lemma C.1 (Conditions for the Equivalence of all Local Distances).
The most general condition for the equivalence of all local distances is
(C.1.3)
There are two common conditions which are more specific but easier to verify, which may help with the application of this result. First, if every transition probability is greater than zero, then certainly the probability of each state in the distribution of paths is greater than 0:
(C.1.4)
Even more specifically, but also easier to test, if the probability of each state at the beginning of the process is non-zero, then the probability of each state in the distribution of paths is greater than 0:
(C.1.5)
Of course, this result is of little importance if these local distances cease to be useful for the analysis of the underlying decision process. By construction, these equivalent local distances are relevant only to Markov processes. Importantly, these local distances cannot detect differences in distributions of paths which are not caused by differences in distributions of states, or more specifically of state-action pairs. Thus, these local distances cannot guarantee the continuity of all of the functions considered in Section 7, but they do apply to the cumulative reward functions described by (1.1.118).
C.2 Deterministic Agents
The theorems of Section 6 rely on stochastic agents to justify the topology of the Agent Space. This reliance stems from the fact that we are concerned in that section primarily with distributions of paths. Because paths consist of sequences of states and actions, distributions of paths can only approach one another (with respect to total variation) if, in response to a single truncated prime path , a distribution of actions is taken.
However, this does not mean that the agent space is useless when deterministic agents are considered. For example, the distributions of states mentioned in Subsection C.1 may be continuous even in an agent space composed entirely of deterministic agents, via the stochasticity of the state-transition function. Thus, if the set of actions is connected and for every truncated prime path the state-transition distribution is a continuous function of the final action , then the distributions of states are continuous in the agent space. Then, an immediate reward function which considers only the state would also be continuous in the agent space.
C.3 Sketch of the Novelty Search Methods of [28]
In an upcoming work, we use the Agent Space in conjunction with Novelty Search to develop a distributed optimization algorithm for Reinforcement Learning problems. Our method evaluates candidate agents with a path collected by the locus agent at each training epoch, resulting in a non-stationary objective that encourages the agent to behave in ways that it has not yet behaved, on states that it can currently encounter. The following pseudo-code is a summary of this method.
We approximate by evaluating candidate agents on a set of states that we gather every epoch. To do this, we follow the agent in the decision process until total states have been encountered, then store them in a set denoted . The distance can then be approximated by evaluating the responses of and on only , rather than by integration on .
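In outline, one epoch of this procedure can be sketched as follows; the environment and agent interfaces (reset, step, act, action_probs) and the archive structure are assumptions of this sketch, not the implementation of [28].

import numpy as np

def collect_states(env, agent, n):
    # Follow `agent` in the decision process until n states are gathered.
    states, s = [], env.reset()
    while len(states) < n:
        states.append(s)
        s, _, done = env.step(agent.act(s))   # hypothetical interface
        if done:
            s = env.reset()
    return states

def novelty_scores(locus_agent, candidates, archive, env, n_states=64):
    # Score each candidate by its approximate local distance (from the locus
    # agent's vantage point) to its nearest neighbour in the archive.
    states = collect_states(env, locus_agent, n_states)

    def distance(a, b):
        # total variation of action distributions, averaged over the states
        return np.mean([0.5 * np.abs(a.action_probs(s) - b.action_probs(s)).sum()
                        for s in states])

    if not archive:
        return {c: float("inf") for c in candidates}  # nothing to compare against yet
    return {c: min(distance(c, past) for past in archive) for c in candidates}

These scores are combined with the (naïve) reward estimates to select the next locus agent, and the candidates are then added to the archive for subsequent epochs.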