
Online Distributed Trajectory Planning for Quadrotor Swarm with Feasibility Guarantee using Linear Safe Corridor

Jungwon Park, Dabin Kim, Gyeong Chan Kim, Dahyun Oh and H. Jin Kim. The authors are with the Department of Mechanical and Aerospace Engineering, Seoul National University (SNU), and Automation and Systems Research Institute (ASRI), Seoul 08826, South Korea. {qwerty35, dabin404, skykim0609, qlass33, hjinkim}@snu.ac.kr
Abstract

This paper presents a new online multi-agent trajectory planning algorithm that is guaranteed to generate safe, dynamically feasible trajectories in a cluttered environment. The proposed algorithm utilizes a linear safe corridor (LSC) to formulate the distributed trajectory optimization problem with only feasible constraints, so it does not resort to slack variables or soft constraints to avoid optimization failure. We adopt a priority-based goal planning method to prevent deadlock without an additional procedure to decide which robot should yield. The proposed algorithm can compute the trajectories for 60 agents in 15.5 ms per agent on average with an Intel i7 laptop, and it shows flight time and distance similar to the baselines based on soft constraints. We verified that the proposed method can reach the goal without deadlock in both a random forest and an indoor space, and we validated the safety and operability of the proposed algorithm through a real flight test with ten quadrotors in a maze-like environment.

I INTRODUCTION

The trajectory planning algorithm must guarantee collision avoidance and dynamical feasibility when operating a quadrotor swarm, as a single accident can lead to total failure. Also, the need to handle a large number of agents requires a scalable online distributed algorithm. However, there are two main challenges in generating a safe trajectory online. First, it is difficult to find feasible collision avoidance constraints if the agents are crowded in a narrow space. Many researchers adopt soft constraints to avoid an infeasible optimization problem, but many of these methods cannot guarantee collision avoidance [1, 2, 3, 4]. Second, if the obstacles have a non-convex shape, it is difficult to resolve deadlock even in sparse environments using distributed planning approaches. Centralized methods like conflict-based search (CBS) [5] can solve this problem, but they are not appropriate for online planning due to their long computation time.

In this letter, we present an online trajectory planning algorithm that guarantees feasibility of the optimization problem. The key idea of the proposed method is a linear safe corridor (LSC), which yields a feasible set of collision avoidance constraints by utilizing the trajectory from the previous step. We design the LSC to combine the advantages of the buffered Voronoi cell (BVC) [6] and the relative safe flight corridor (RSFC) [7]. First, the LSC can prevent collision between agents without non-convex constraints or slack variables, similar to the BVC. Second, since the LSC has a form similar to the RSFC, it can reflect the previous solution in the current optimization problem. Hence it can reduce the flight time and distance to a level similar to on-demand collision avoidance [1] or discretization-based methods [3] while guaranteeing collision avoidance, which was not achieved in the previous works. By adopting the LSC, the proposed algorithm always returns a safe trajectory without optimization failure. Furthermore, we introduce priority-based goal planning to resolve deadlock in maze-like environments. Despite the limited performance in very dense environments due to the nature of a distributed approach, it can find the trajectory to the goal in sparse environments by using the previous trajectories to decide which robot should yield. We compared the proposed algorithm with state-of-the-art methods, BVC [6], DMPC [1], and EGO-Swarm [3], in simulations, and we conducted flight tests to demonstrate that the proposed algorithm can be applied to real-world settings (see Fig. 1). We will release the source code of this work at https://github.com/qwerty35/lsc_planner.

We summarize the main contributions as follows.

  • An online multi-agent trajectory planning algorithm that is guaranteed to generate safe, feasible trajectories in cluttered environments without potential optimization failure, by using the linear safe corridor.

  • Linear safe corridor that fully utilizes the previous solution to ensure collision avoidance and feasibility of the constraints.

  • Decentralized priority-based goal planning that prevents deadlock in a sparse maze-like environment without an additional procedure to decide which robot to yield.

Refer to caption
Figure 1: Experiment with 10 quadrotors in the maze-like environment.

II RELATED WORK

II-A Online Multi-Agent Collision Avoidance

To achieve online planning, many recent works adopt a discretization method, which applies the constraints only at some discrete points rather than over the whole trajectory [1, 2, 3, 4]. However, this approach can lead to collisions if the interval between discrete points is too long. Moreover, most of these works utilize slack variables or soft constraints to avoid an infeasible optimization problem, which increases the probability of collision. Velocity obstacle (VO)-based methods [8, 9] can be considered, but they can also cause collisions since the agents do not follow the constant velocity assumption. On the contrary, the proposed algorithm guarantees collision avoidance for all trajectory points, and it ensures feasibility of the optimization problem without slack variables or soft constraints.

There are several works that can guarantee both collision avoidance and feasibility. In [6], the buffered Voronoi cell (BVC) is presented to confine the agents to a safe region, and in [10], a hybrid braking controller is applied for safety if the planner faces feasibility issues. However, these methods require conservative collision constraints. On the other hand, the proposed method employs less conservative constraints because it constructs the LSC considering the previous trajectories. The authors of [11] suggest an asynchronous planning method using a collision check-recheck scheme. This method guarantees feasibility by executing only verified trajectories through communication. However, it often takes over a few seconds for replanning since it blocks the update if it receives other agents' trajectories during planning. Conversely, the proposed method uses a synchronized replanning method, which takes constant time to update trajectories.

Compared to our previous works [7, 12], the proposed method does not require a centralized grid-based path planner or slack variables for feasibility. Thus, the proposed algorithm guarantees inter-agent collision avoidance without soft constraints while enabling online distributed planning.

II-B Decentralized Deadlock Resolution

A deadlock can occur if the agents are crowded in a narrow space without a centralized coordinator. One approach to solve this is the right-hand rule [13], which rotates the goal to the right when the agent detects a deadlock. However, it often causes another deadlock in cluttered environments. In [14], the agents are sequentially asked to plan an alternative trajectory until the deadlock is resolved. However, this requires an additional procedure to decide which robot should yield. On the other hand, the proposed method does not require such a decision process because it determines the priority by the distance to the goal. The authors of [15] present a cooperative strategy based on bidding to resolve deadlock. However, this method takes more time for replanning as the number of agents increases because it updates only the trajectory of the winning bidder. A similar approach to our proposed method can be found in [16]. It utilizes a grid-based planner to avoid conflict between agents. However, it often fails to find the discrete path in a compact space because it treats all other agents as static obstacles. To solve this, the proposed method imposes a priority on the agents so that each agent ignores the lower-priority agents while planning the discrete path.

III PROBLEM FORMULATION

In this section, we formulate a distributed trajectory optimization problem that guarantees feasibility. We suppose that there are $N$ agents in an obstacle environment $\mathcal{W}$, and each agent $i$ has the mission to move from the start point $\mathbf{s}^{i}$ to the goal point $\mathbf{g}^{i}$. Our goal is to generate a safe, continuous trajectory so that all agents can reach their goal points without collision or exceeding the agents' dynamical limits. Throughout this paper, calligraphic uppercase letters denote sets, bold lowercase letters denote vectors, and italic lowercase letters denote scalar values. The superscript of each symbol indicates the index of the associated agent.

III-A Assumption

We assume that all agents start their mission from a collision-free initial configuration and that they can follow the trajectory accurately. We suppose that all agents can share their current positions, goal points and the trajectories planned in the previous step without delay or loss of communication. We also assume that the free space of the environment is given as prior knowledge.

III-B Trajectory Representation

Due to the differential flatness of quadrotor dynamics [17], we represent the trajectory of agent $i$ at replanning step $k$, $\mathbf{p}^{i}_{k}(t)$, with the piecewise Bernstein polynomial:

$\mathbf{p}^{i}_{k}(t)=\begin{cases}\sum_{l=0}^{n}\mathbf{c}^{i}_{k,1,l}\,b_{l,n}(\tau(t,k,1)) & t\in[T_{k},T_{k+1}]\\ \sum_{l=0}^{n}\mathbf{c}^{i}_{k,2,l}\,b_{l,n}(\tau(t,k,2)) & t\in[T_{k+1},T_{k+2}]\\ \quad\vdots & \quad\vdots\\ \sum_{l=0}^{n}\mathbf{c}^{i}_{k,M,l}\,b_{l,n}(\tau(t,k,M)) & t\in[T_{k+M-1},T_{k+M}]\end{cases}$ (1)

where $n$ is the degree of the polynomial, $\mathbf{c}^{i}_{k,m,l}\in\mathbb{R}^{3}$ is the $l^{th}$ control point of the $m^{th}$ segment of the trajectory, $b_{l,n}(t)$ is the Bernstein basis polynomial, $T_{k}$ is the start time of replanning step $k$, $T_{k+m}=T_{k}+m\Delta t$, $\Delta t$ is the duration of each segment, and $\tau(t,k,m)=(t-T_{k+m-1})/\Delta t$. Thus, the control points of the piecewise polynomial, $\mathbf{c}^{i}_{k,m,l}$ for $m=1,\cdots,M$ and $l=0,\cdots,n$, are the decision variables of our optimization problem. Here, the Bernstein basis can be replaced with another basis that satisfies the convex hull property, such as [18]. In this paper, we denote the $m^{th}$ segment of the trajectory as $\mathbf{p}^{i}_{k,m}(t)$ and the vector that contains all control points of the trajectory as $\mathbf{c}^{i}_{k}\in\mathbb{R}^{3M(n+1)}$.
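
To make the parameterization concrete, the sketch below (a minimal illustration, not the released planner) evaluates a point on the piecewise Bernstein trajectory (1) from its control points; the array layout and function names are our own assumptions.

```python
import math
import numpy as np

def bernstein_basis(l, n, tau):
    """Bernstein basis b_{l,n}(tau) for tau in [0, 1]."""
    return math.comb(n, l) * tau**l * (1.0 - tau)**(n - l)

def evaluate_piecewise(control_points, t, T_k, dt):
    """Evaluate p_k^i(t) from (1).

    control_points: array of shape (M, n+1, 3) with control_points[m-1, l] = c_{k,m,l}.
    t is clamped to the planning horizon [T_k, T_k + M*dt].
    """
    M, n_plus_1, _ = control_points.shape
    n = n_plus_1 - 1
    # Segment index m (1-based) and local time tau = (t - T_{k+m-1}) / dt.
    m = min(int((t - T_k) // dt) + 1, M)
    tau = (t - (T_k + (m - 1) * dt)) / dt
    tau = min(max(tau, 0.0), 1.0)
    basis = np.array([bernstein_basis(l, n, tau) for l in range(n + 1)])
    return basis @ control_points[m - 1]        # weighted sum of the segment's control points

# Example: a single cubic segment from (0, 0, 0) to (1, 0, 0.5)
cps = np.array([[[0, 0, 0], [0.3, 0, 0.2], [0.7, 0, 0.4], [1, 0, 0.5]]], dtype=float)
print(evaluate_piecewise(cps, t=0.1, T_k=0.0, dt=0.2))
```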

III-C Objective Function

III-C1 Error to goal

We minimize the distance between the agent and the current goal point $\mathbf{g}^{i}_{curr}$ so that the agent can reach the final goal point.

$J^{i}_{err}=\sum\limits_{m=1}^{M}w_{e,m}\|\mathbf{p}^{i}_{k}(T_{m})-\mathbf{g}^{i}_{curr}\|^{2}$ (2)

where $w_{e,m=1,\cdots,M}\geq 0$ is the weight coefficient and $\|\cdot\|$ is the Euclidean norm.

III-C2 Derivatives of trajectory

We minimize the derivatives of the trajectory to penalize aggressive maneuvers of the agent.

$J^{i}_{der}=\sum\limits_{r=1}^{n}w_{d,r}\int_{T_{0}}^{T_{M}}\left\|\frac{d^{r}}{dt^{r}}\mathbf{p}^{i}_{k}(t)\right\|^{2}dt$ (3)

where $w_{d,r=1,\cdots,n}\geq 0$ is the weight coefficient. In this work, we set the weights to minimize the jerk.

III-D Convex Constraints

The trajectory should satisfy the initial condition to match the agent's current position, velocity and acceleration. Also, the trajectory should be continuous up to the acceleration for smooth flight. Due to the dynamical limits of the quadrotor, the agents must not exceed their maximum velocity and acceleration. We can formulate these requirements as affine equality and inequality constraints using the convex hull property of the Bernstein polynomials [19]:

$A_{eq}\mathbf{c}^{i}_{k}=\mathbf{b}_{eq}$ (4)
$A_{dyn}\mathbf{c}^{i}_{k}\preceq\mathbf{b}_{dyn}$ (5)
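
As an illustration of how the convex hull property yields affine rows of (5), the sketch below builds per-axis bounds on the velocity control points of each Bernstein segment, which are $n(\mathbf{c}_{l+1}-\mathbf{c}_{l})/\Delta t$. This is one standard sufficient condition rather than the authors' exact construction, and the stacking order of the decision vector is an assumption.

```python
import numpy as np

def velocity_limit_rows(n, M, dt, v_max):
    """Build rows (A, b) with A c <= b bounding the velocity control points
    n*(c_{l+1} - c_l)/dt of every segment per axis -- a sufficient condition for
    the velocity limit by the convex hull property.  The decision vector stacks
    control points as c[(m*(n+1) + l)*3 + axis]; this layout is an assumption."""
    dim = 3 * M * (n + 1)
    rows, bounds = [], []
    for m in range(M):
        for l in range(n):
            for axis in range(3):
                row = np.zeros(dim)
                i0 = (m * (n + 1) + l) * 3 + axis
                i1 = (m * (n + 1) + l + 1) * 3 + axis
                row[i1] = n / dt           # coefficient of c_{l+1}
                row[i0] = -n / dt          # coefficient of c_{l}
                rows.append(row)           #  velocity component <= v_max
                bounds.append(v_max[axis])
                rows.append(-row)          # -velocity component <= v_max
                bounds.append(v_max[axis])
    return np.vstack(rows), np.array(bounds)

# Parameters from Sec. V: n = 5, M = 5, dt = 0.2 s, v_max = [1, 1, 1] m/s
A_vel, b_vel = velocity_limit_rows(n=5, M=5, dt=0.2, v_max=[1.0, 1.0, 1.0])
print(A_vel.shape, b_vel.shape)   # (150, 90) (150,)
```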

III-E Collision Avoidance Constraints

III-E1 Obstacle avoidance constraint

To avoid collision with static obstacles, we formulate the constraint as follows:

$\mathbf{p}^{i}_{k}(t)\oplus\mathcal{C}^{i,o}\subset\mathcal{F},\ \ \forall t\in[T_{k},T_{k+M}]$ (6)
$\mathcal{C}^{i,o}=\{\mathbf{x}\in\mathbb{R}^{3}\mid\|\mathbf{x}\|\leq r^{i}\}$ (7)

where $\oplus$ is the Minkowski sum, $\mathcal{F}$ is the free space of the environment, $\mathcal{C}^{i,o}$ is the spherical obstacle collision model, and $r^{i}$ is the radius of agent $i$.

III-E2 Inter-collision avoidance constraint

We can represent the collision avoidance constraint between the agents $i$ and $j$ as follows:

$\mathbf{p}^{i}_{k}(t)-\mathbf{p}^{j}_{k}(t)\notin\mathcal{C}^{i,j}$ (8)

where $\mathcal{C}^{i,j}$ is the inter-collision model, a compact convex set that satisfies $\mathcal{C}^{j,i}=-\mathcal{C}^{i,j}=\{-\mathbf{x}\mid\mathbf{x}\in\mathcal{C}^{i,j}\}$ to maintain symmetry between agents. In this work, we adopt an ellipsoidal collision model to account for the downwash effect of the quadrotor:

$\mathcal{C}^{i,j}=\{\mathbf{x}\in\mathbb{R}^{3}\mid\|E\mathbf{x}\|\leq r^{i}+r^{j}\}$ (9)

where $E=\text{diag}([1,1,1/c_{dw}])$ and $c_{dw}$ is the downwash coefficient. Since (8) is a non-convex constraint, we linearize it using the convex hull property of the Bernstein polynomial:

$\mathcal{H}^{i,j}_{k,m}\cap\mathcal{C}^{i,j}=\emptyset,\ \ \forall j\in\mathcal{I}\backslash i,\ m=1,\cdots,M$ (10)
$\mathcal{H}^{i,j}_{k,m}=\text{Conv}(\{\mathbf{c}^{i}_{k,m,l}-\mathbf{c}^{j}_{k,m,l}\mid l=0,\cdots,n\})$ (11)

where $\mathcal{H}^{i,j}_{k,m}$ is the convex hull of the control points of the relative trajectory between the agents $i$ and $j$, $\mathcal{I}$ is the set of agents, and $\text{Conv}(\cdot)$ is the convex hull operator that returns the convex hull of the input set.

Theorem 1.

If the trajectories of all agents satisfy the condition (10), then there is no collision between agents.

Proof.

For any pair of the agents $i$ and $j$, the relative trajectory between the two agents is a piecewise Bernstein polynomial:

$\mathbf{p}^{i}_{k,m}(t)-\mathbf{p}^{j}_{k,m}(t)=\sum_{l=0}^{n}(\mathbf{c}^{i}_{k,m,l}-\mathbf{c}^{j}_{k,m,l})b_{l,n}(\tau(t,k,m))$ (12)

Due to the convex hull property of the Bernstein polynomial and (10), we obtain the following for $\forall m$ and $t\in[T_{k+m-1},T_{k+m}]$:

$\mathbf{p}^{i}_{k,m}(t)-\mathbf{p}^{j}_{k,m}(t)\in\mathcal{H}^{i,j}_{k,m}$ (13)
$\mathbf{p}^{i}_{k,m}(t)-\mathbf{p}^{j}_{k,m}(t)\notin\mathcal{C}^{i,j}$ (14)

Thus, there is no collision between agents because they satisfy the collision constraint (8) for any segment $m$. ∎
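
The following numeric check (an illustration, not part of the planner) mirrors the argument above on a single segment: when every relative control point lies in a halfspace that excludes $\mathcal{C}^{i,j}$, every sampled point of the relative Bernstein curve (12) does too. The control-point values and the separating direction below are made up for illustration.

```python
import math
import numpy as np

def bernstein(l, n, tau):
    return math.comb(n, l) * tau**l * (1 - tau)**(n - l)

n = 5
r_sum, c_dw = 0.3, 2.0                        # r^i + r^j and downwash coefficient
E = np.diag([1.0, 1.0, 1.0 / c_dw])

# Relative control points c^i_{k,m,l} - c^j_{k,m,l} of one segment (illustrative values)
rel_cps = np.array([[0.6 + 0.1 * l, 0.1 * l, 0.05 * l] for l in range(n + 1)])

# A halfspace n_s . x > max<C^{i,j}, n_s>; the support of the ellipsoid (9) in
# direction n_s is r_sum * ||E^{-1} n_s||.  Here n_s = +x suffices for these points.
n_s = np.array([1.0, 0.0, 0.0])
support = r_sum * np.linalg.norm(np.linalg.inv(E) @ n_s)
assert all(c @ n_s > support for c in rel_cps)       # condition (10) via the halfspace

# Every sampled point of (12) is a convex combination of the control points, so it
# satisfies the same halfspace and therefore stays outside C^{i,j}, as in (13)-(14).
for tau in np.linspace(0.0, 1.0, 101):
    basis = np.array([bernstein(l, n, tau) for l in range(n + 1)])
    rel_pos = basis @ rel_cps
    assert rel_pos @ n_s > support
    assert np.linalg.norm(E @ rel_pos) > r_sum
print("relative trajectory verified collision-free on this segment")
```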

IV Algorithm

In this section, we introduce the trajectory planning algorithm that is guaranteed to return a feasible solution. Alg. 1 summarizes the overall process. First, we receive the current positions, goal points and previous-step trajectories of all agents as inputs. Then, we plan initial trajectories using the trajectory from the previous replanning step (line 2, Sec. IV-A). Next, we construct collision avoidance constraints based on the initial trajectories (lines 4-7, Sec. IV-B, IV-C). After that, we determine the intermediate goal using the grid-based path planner (line 8, Sec. IV-D). Finally, we obtain the safe trajectory by solving a quadratic programming (QP) problem (line 9, Sec. IV-E). We prove the feasibility of the optimization problem by showing that the initial trajectory satisfies all the constraints.

Input: current positions $\mathbf{p}^{i\in\mathcal{I}}_{k}(T_{k})$, goal points $\mathbf{g}^{i\in\mathcal{I}}$, trajectories at the previous step $\mathbf{p}^{i\in\mathcal{I}}_{k-1}(t)$, and 3D occupancy map $\mathcal{W}$
Output: trajectory $\mathbf{p}^{i}_{k}(t)$ for agent $i$, $t\in[T_{k},T_{k+M}]$
1 for $\forall i\in\mathcal{I}$ do
2   $\hat{\mathbf{p}}^{i}_{k}(t)\leftarrow$ planInitialTraj($\mathbf{p}^{i}_{k}(T_{k}),\mathbf{p}^{i}_{k-1}(t)$);
3 end for
4 $\mathcal{S}^{i}_{k}\leftarrow$ buildSFC($\hat{\mathbf{p}}^{i}_{k}(t),\mathcal{W}$);
5 for $\forall j\in\mathcal{I}\backslash i$ do
6   $\mathcal{L}^{i,j}_{k}\leftarrow$ buildLSC($\hat{\mathbf{p}}^{i}_{k}(t),\hat{\mathbf{p}}^{j}_{k}(t)$);
7 end for
8 $\mathbf{g}^{i}_{curr}\leftarrow$ planCurrentGoal($\hat{\mathbf{p}}^{i\in\mathcal{I}}_{k}(t),\mathbf{g}^{i\in\mathcal{I}},\mathcal{W}$);
9 $\mathbf{p}^{i}_{k}(t)\leftarrow$ trajOpt($\mathcal{S}^{i}_{k},\mathcal{L}^{i,\forall j\in\mathcal{I}\backslash i}_{k},\mathbf{g}^{i}_{curr}$);
10 return $\mathbf{p}^{i}_{k}(t)$
Algorithm 1: Trajectory planning for agent $i$ at replanning step $k$

IV-A Initial Trajectory Planning

We utilize the trajectory from the previous replanning step, $\mathbf{p}^{i}_{k-1}(t)$, as the initial trajectory. If it is the first step of the planning, we use the current position instead. Suppose that the replanning period is equal to the segment time $\Delta t$. Then we can represent the initial trajectory as a piecewise Bernstein polynomial:

$\hat{\mathbf{p}}^{i}_{k}(t)=\begin{cases}\mathbf{p}^{i}_{0}(T_{0}) & k=0,\ t\in[T_{0},T_{M}]\\ \mathbf{p}^{i}_{k-1,m+1}(t) & k>0,\ t\in[T_{k},T_{k+M-1}]\\ \mathbf{p}^{i}_{k-1,M}(T_{k+M-1}) & k>0,\ t\in[T_{k+M-1},T_{k+M}]\end{cases}$ (15)

where $\hat{\mathbf{p}}^{i}_{k}(t)$ is the initial trajectory at replanning step $k$. We denote the $m^{th}$ segment of the initial trajectory as $\hat{\mathbf{p}}^{i}_{k,m}(t)$ and the $l^{th}$ control point of this segment as $\hat{\mathbf{c}}^{i}_{k,m,l}$. Note that the control points of the initial trajectory can be represented using the previous ones (e.g. $\hat{\mathbf{c}}^{i}_{k,m,l}=\mathbf{c}^{i}_{k-1,m+1,l}$ for $\forall m<M,l$), and there is no collision between the initial trajectories if there was no collision between the previous ones.
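
A minimal sketch of (15) in terms of control points is given below; it assumes the replanning period equals $\Delta t$, and the array layout and names are illustrative.

```python
import numpy as np

def plan_initial_traj(prev_cps, current_position, k):
    """Initial-trajectory control points following (15).

    prev_cps: control points c_{k-1,m,l} of the previous solution, shape (M, n+1, 3).
    Returns hat_c_{k,m,l} with the same shape.
    """
    init_cps = np.empty_like(prev_cps)
    if k == 0:
        # First step: hold the current position over the whole horizon.
        init_cps[:] = np.asarray(current_position, dtype=float)
    else:
        # Shift segments forward by one: hat_c_{k,m,l} = c_{k-1,m+1,l} for m < M.
        init_cps[:-1] = prev_cps[1:]
        # Last segment: stop at the final point of the previous trajectory.
        init_cps[-1] = prev_cps[-1, -1]
    return init_cps

# Example: cubic (n = 3) trajectory with M = 2 segments
prev = np.arange(2 * 4 * 3, dtype=float).reshape(2, 4, 3)
print(plan_initial_traj(prev, current_position=[0, 0, 0], k=1)[-1])
```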

IV-B Safe Flight Corridor Construction

A safe flight corridor (SFC), $\mathcal{S}^{i}_{k,m}$, is a convex set that prevents the collision with the static obstacles:

$\mathcal{S}^{i}_{k,m}\oplus\mathcal{C}^{i,o}\subset\mathcal{F}$ (16)

To guarantee the feasibility of the constraints, we construct the SFC using the initial trajectory and the SFC of the previous step.

$\mathcal{S}^{i}_{k,m}=\begin{cases}\mathcal{S}(\hat{\mathbf{c}}^{i}_{k,M,n}) & k=0\text{ or }m=M\\ \mathcal{S}^{i}_{k-1,m+1} & \text{else}\end{cases}$ (17)

where $\mathcal{S}(\hat{\mathbf{c}}^{i}_{k,M,n})$ is a convex set that contains the point $\hat{\mathbf{c}}^{i}_{k,M,n}$ and satisfies $\mathcal{S}(\hat{\mathbf{c}}^{i}_{k,M,n})\oplus\mathcal{C}^{i,o}\subset\mathcal{F}$. Note that the SFC in (17) always exists for every replanning step if the control points of the initial trajectory do not collide with static obstacles. We can find such a set using the axis-search method [7].
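
The sketch below illustrates the recursion (17); the box-construction helper is a simplified stand-in for the axis-search method of [7], and all names and values are illustrative.

```python
import numpy as np

def free_box(point, half_size=0.5):
    """Toy stand-in for the axis-search method: a fixed axis-aligned box around the
    point, assumed obstacle-free after inflating F by the agent radius."""
    p = np.asarray(point, dtype=float)
    return p - half_size, p + half_size

def sfc_for_segment(k, m, M, prev_sfcs, last_init_cp, build_box):
    """Choose the SFC for segment m at step k following (17).

    prev_sfcs: SFCs from step k-1, with prev_sfcs[m-1] = S^i_{k-1,m} (0-indexed list).
    last_init_cp: hat_c_{k,M,n}, the terminal control point of the initial trajectory.
    """
    if k == 0 or m == M:
        return build_box(last_init_cp)       # S(hat_c_{k,M,n})
    return prev_sfcs[m]                      # S^i_{k-1,m+1}  (index m = segment m+1)

# Step k = 1 with M = 3: segments 1..2 reuse the previous SFCs, segment 3 is rebuilt.
prev_sfcs = [free_box([0, 0, 1]), free_box([1, 0, 1]), free_box([2, 0, 1])]
for m in range(1, 4):
    print(m, sfc_for_segment(1, m, 3, prev_sfcs, last_init_cp=[2.5, 0, 1],
                             build_box=free_box))
```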

Assume that the control points of $\mathbf{p}^{i}_{k,m}$ are confined in the corresponding SFC, i.e. for $m=1,\cdots,M$ and $l=0,\cdots,n$:

$\mathbf{c}^{i}_{k,m,l}\in\mathcal{S}^{i}_{k,m}$ (18)

Then, there is no collision with static obstacles due to the convex hull property of the Bernstein polynomial [20]. Also, we can prove that the control points of the initial trajectory always belong to the corresponding SFC for all replanning steps.

Lemma 1.

(Feasibility of SFC) Assume that $\mathbf{c}^{i}_{k-1,m,l}\in\mathcal{S}^{i}_{k-1,m}$ for $\forall m,l$ at replanning step $k>0$. Then $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{S}^{i}_{k,m}$ for $\forall k,m,l$.

Proof.

If $k=0$, we obtain $\hat{\mathbf{c}}^{i}_{0,m,l}\in\mathcal{S}^{i}_{0,m}$ for $\forall m,l$ since $\hat{\mathbf{c}}^{i}_{0,m,l}=\hat{\mathbf{c}}^{i}_{0,M,n}$ for $\forall m,l$. If $k>0$ and $m<M$, we have $\hat{\mathbf{c}}^{i}_{k,m,l}=\mathbf{c}^{i}_{k-1,m+1,l}\in\mathcal{S}^{i}_{k-1,m+1}=\mathcal{S}^{i}_{k,m}$ for $\forall l$ due to (15) and (18). If $k>0$ and $m=M$, we obtain $\hat{\mathbf{c}}^{i}_{k,M,l}=\hat{\mathbf{c}}^{i}_{k,M,n}\in\mathcal{S}(\hat{\mathbf{c}}^{i}_{k,M,n})=\mathcal{S}^{i}_{k,M}$ for $\forall l$. Therefore, we have $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{S}^{i}_{k,m}$ for $\forall k,m,l$. ∎

IV-C Linear Safe Corridor Construction

We define the LSC as a linear constraint that satisfies the following conditions:

$\mathcal{L}^{i,j}_{k,m,l}=\{\mathbf{x}\in\mathbb{R}^{3}\mid(\mathbf{x}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}-d^{i,j}_{m,l}>0\}$ (19)
$\mathbf{n}^{i,j}_{m}=-\mathbf{n}^{j,i}_{m}$ (20)
$d^{i,j}_{m,l}+d^{j,i}_{m,l}\geq\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle+(\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}$ (21)

where $\mathbf{n}^{i,j}_{m}\in\mathbb{R}^{3}$ is the normal vector, $d^{i,j}_{m,l}$ is the safety margin, and $\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle=\max_{\mathbf{x}\in\mathcal{C}^{i,j}}\mathbf{x}\cdot\mathbf{n}^{i,j}_{m}$. Let $\hat{\mathcal{H}}^{i,j}_{k,m}$ be the convex hull of the control points of the relative initial trajectory, i.e. $\hat{\mathcal{H}}^{i,j}_{k,m}=\text{Conv}(\{\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l}\mid l=0,\cdots,n\})$. Lemmas 2 and 3 present the main properties of the LSC.
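
For the ellipsoidal model (9), the support term $\max\langle\mathcal{C}^{i,j},\mathbf{n}\rangle$ in (21) has a closed form: substituting $\mathbf{y}=E\mathbf{x}$ gives $(r^{i}+r^{j})\|E^{-1}\mathbf{n}\|$. The sketch below (illustrative values, not the released implementation) computes this support value and tests membership in the halfspace (19).

```python
import numpy as np

def ellipsoid_support(n_vec, r_sum, c_dw):
    """max<C^{i,j}, n> for (9): with y = E x, max y . (E^{-1} n) over ||y|| <= r^i + r^j
    equals (r^i + r^j) * ||E^{-1} n||, where E^{-1} = diag(1, 1, c_dw)."""
    E_inv = np.diag([1.0, 1.0, c_dw])
    return r_sum * np.linalg.norm(E_inv @ np.asarray(n_vec, dtype=float))

def in_lsc(x, anchor, n_vec, d):
    """Membership test for the halfspace L^{i,j}_{k,m,l} in (19)."""
    return (np.asarray(x) - np.asarray(anchor)) @ np.asarray(n_vec) - d > 0.0

# Illustrative numbers: r^i = r^j = 0.15, c_dw = 2, normal along +x
n_vec = np.array([1.0, 0.0, 0.0])
print(ellipsoid_support(n_vec, r_sum=0.3, c_dw=2.0))                    # 0.3
print(in_lsc(x=[0.5, 0, 0], anchor=[0.0, 0, 0], n_vec=n_vec, d=0.2))    # True
```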

Lemma 2.

(Safety of LSC) If $\mathbf{c}^{i}_{k,m,l}\in\mathcal{L}^{i,j}_{k,m,l}$, $\mathbf{c}^{j}_{k,m,l}\in\mathcal{L}^{j,i}_{k,m,l}$ for $\forall m,l$, then we obtain $\mathcal{H}^{i,j}_{k,m}\cap\mathcal{C}^{i,j}=\emptyset$, which implies that the agent $i$ does not collide with the agent $j$.

Proof.

We obtain the following by adding the inequality in (19) for each agent $i$ and $j$:

$(\mathbf{c}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}+(\mathbf{c}^{j}_{k,m,l}-\hat{\mathbf{c}}^{i}_{k,m,l})\cdot\mathbf{n}^{j,i}_{m}-(d^{i,j}_{m,l}+d^{j,i}_{m,l})>0$ (22)

This can be simplified using (20) and (21):

$(\mathbf{c}^{i}_{k,m,l}-\mathbf{c}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}+(\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}-(d^{i,j}_{m,l}+d^{j,i}_{m,l})>0$ (23)
$(\mathbf{c}^{i}_{k,m,l}-\mathbf{c}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}>\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle$ (24)

The above inequality holds for $\forall l$; thus we have the following for any $\lambda_{l}\geq 0$ such that $\sum_{l=0}^{n}\lambda_{l}=1$:

$\sum_{l=0}^{n}\lambda_{l}(\mathbf{c}^{i}_{k,m,l}-\mathbf{c}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}>\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle$ (25)
$\mathcal{H}^{i,j}_{k,m}\cap\mathcal{C}^{i,j}=\emptyset$ (26)

Therefore, there is no collision between the agents $i$ and $j$ by Thm. 1. ∎

Lemma 3.

(Existence and Feasibility of LSC) If $\hat{\mathcal{H}}^{i,j}_{k,m}\cap\mathcal{C}^{i,j}=\emptyset$ for $\forall i,j\in\mathcal{I},m,l$, then there exists $\mathcal{L}^{i,j}_{k,m,l}$ that satisfies the definition of the LSC and $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{L}^{i,j}_{k,m,l}$ for $\forall i,j\in\mathcal{I},m,l$.

Proof.

$\hat{\mathcal{H}}^{i,j}_{k,m}$ and $\mathcal{C}^{i,j}$ are disjoint compact convex sets. Thus, by the hyperplane separation theorem [21], there exists $\mathbf{n}_{s}$ such that:

$\min\langle\hat{\mathcal{H}}^{i,j}_{k,m},\mathbf{n}_{s}\rangle>\max\langle\mathcal{C}^{i,j},\mathbf{n}_{s}\rangle$ (27)

where $\min\langle\hat{\mathcal{H}}^{i,j}_{k,m},\mathbf{n}_{s}\rangle=\min_{\mathbf{x}\in\hat{\mathcal{H}}^{i,j}_{k,m}}\mathbf{x}\cdot\mathbf{n}_{s}$. Here, we set the normal vector and the safety margin of $\mathcal{L}^{i,j}_{k,m,l}$ as follows:

$\mathbf{n}^{i,j}_{m}=-\mathbf{n}^{j,i}_{m}=\mathbf{n}_{s}$ (28)
$d^{i,j}_{m,l}=d^{j,i}_{m,l}=\frac{1}{2}\big(\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle+(\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}\big)$ (29)

We can observe that they fulfill the definition of the LSC. Furthermore, we have $(\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}-d^{i,j}_{m,l}=\frac{1}{2}\big((\hat{\mathbf{c}}^{i}_{k,m,l}-\hat{\mathbf{c}}^{j}_{k,m,l})\cdot\mathbf{n}^{i,j}_{m}-\max\langle\mathcal{C}^{i,j},\mathbf{n}^{i,j}_{m}\rangle\big)>0$ due to (27). It indicates that $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{L}^{i,j}_{k,m,l}$ for $\forall i,j\in\mathcal{I},m,l$. ∎

Refer to caption
Figure 2: Determination of the normal vector of the LSC. The red ellipsoid is the collision model between two agents, and the orange-shaded region is the convex hull of the control points of the relative initial trajectory.

We construct the LSC based on Lemma 3. First, we perform a coordinate transformation to convert the collision model to a sphere, as shown in Fig. 2. Next, we run the GJK algorithm [22] to find the closest points between $\hat{\mathcal{H}}^{i,j}_{k,m}$ and the collision model. Then, we determine the normal vector from the vector connecting the closest points and compute the safety margin using (29). Finally, we obtain the LSC by reversing the coordinate transformation.
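
A simplified sketch of this construction is given below. It replaces the GJK query with a small Frank-Wolfe routine that approximates the point of the relative hull closest to the origin of the transformed (spherical) collision model, and it assumes the hull is disjoint from the collision model, as in the premise of Lemma 3; all values and names are illustrative.

```python
import numpy as np

def min_norm_point_in_hull(points, iters=200):
    """Frank-Wolfe approximation of the point of Conv(points) closest to the origin;
    a lightweight stand-in for the GJK closest-point query."""
    x = points.mean(axis=0)
    for t in range(iters):
        vertex = points[np.argmin(points @ x)]   # linear minimization of the gradient 2x
        x = x + 2.0 / (t + 2.0) * (vertex - x)
    return x

def build_lsc(rel_init_cps, r_sum, c_dw):
    """Sketch of the LSC construction: transform so C^{i,j} becomes a sphere, take the
    direction toward the closest point of the relative hull, and set margins via (29).
    Returns the shared normal n^{i,j} and per-control-point margins d^{i,j}_{m,l};
    agent i then constrains its control points by the halfspace (19)."""
    E = np.diag([1.0, 1.0, 1.0 / c_dw])
    closest = min_norm_point_in_hull(rel_init_cps @ E)       # hull of E*(c^i - c^j)
    n_transformed = closest / np.linalg.norm(closest)        # points away from the sphere
    n_vec = E @ n_transformed                                # undo the coordinate change
    n_vec /= np.linalg.norm(n_vec)
    support = r_sum * np.linalg.norm(np.diag([1.0, 1.0, c_dw]) @ n_vec)  # max<C^{i,j}, n>
    d = [0.5 * (support + c @ n_vec) for c in rel_init_cps]  # safety margins per (29)
    return n_vec, d

rel_cps = np.array([[0.8, 0.2, 0.1], [0.9, 0.1, 0.1], [1.0, 0.0, 0.2], [1.1, 0.1, 0.2]])
n_vec, d = build_lsc(rel_cps, r_sum=0.3, c_dw=2.0)
print(n_vec, np.round(d, 3))
```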

Refer to caption
(a) BVC [6]
Refer to caption
(b) LSC
Figure 3: Trajectory planning comparison between BVC [6] and LSC. The colored lines, small squares and circles are the desired trajectories, goal points and desired positions at the end of the planning horizon (i.e. $\mathbf{p}^{i}_{k}(T_{k+M})$), respectively. The color-shaded regions denote the collision constraints for the red agent at the end of the planning horizon.

We compare the LSC with the BVC [6] in Fig. 3. Since the BVC is generated using the agent's current position only, the desired trajectories from the BVC remain within each static cell, which leads to a conservative maneuver as shown in Fig. 3(a). On the other hand, we construct the LSC utilizing the full trajectory from the previous step. Thus, we can obtain a more aggressive maneuver while ensuring collision avoidance, as depicted in Fig. 3(b).

Input: initial trajectories $\hat{\mathbf{p}}^{i\in\mathcal{I}}_{k}(t)$, goal points $\mathbf{g}^{i\in\mathcal{I}}$ and 3D occupancy map $\mathcal{W}$
Output: current goal point $\mathbf{g}^{i}_{curr}$
1 $\mathcal{P}\leftarrow\emptyset$;
2 for $\forall j\in\mathcal{I}\backslash i$ do
3   if the agent $j$ satisfies (30), (31), (32) or $\|\hat{\mathbf{p}}^{i}_{k}(T_{k})-\mathbf{g}^{i}\|<d_{g}$ then
4     $\mathcal{P}\leftarrow\mathcal{P}\cup\{j\}$;
5   end if
6 end for
7 $q\leftarrow$ findClosestAgent($\mathcal{P}$);
8 if $\|\hat{\mathbf{p}}^{i}_{k}(T_{k})-\hat{\mathbf{p}}^{q}_{k}(T_{k})\|<d_{th}$ then
9   $\mathbf{g}^{i}_{curr}\leftarrow\hat{\mathbf{p}}^{q}_{k}(T_{k})+\frac{\hat{\mathbf{p}}^{i}_{k}(T_{k})-\hat{\mathbf{p}}^{q}_{k}(T_{k})}{\|\hat{\mathbf{p}}^{i}_{k}(T_{k})-\hat{\mathbf{p}}^{q}_{k}(T_{k})\|}d_{rep}$;
10 else
11   $\mathcal{D}\leftarrow$ gridBasedPlanner($\hat{\mathbf{p}}^{i}_{k}(T_{k}),\mathcal{P},\mathcal{W}$);
12   $\mathbf{g}^{i}_{curr}\leftarrow$ findLOSFreeGoal($\hat{\mathbf{p}}^{i}_{k}(T_{k}),\mathcal{D},\mathcal{P},\mathcal{W}$);
13 end if
14 return $\mathbf{g}^{i}_{curr}$
Algorithm 2: planCurrentGoal

IV-D Goal Planning

In this work, we adopt priority-based goal planning to prevent deadlock in cluttered environments. Alg. 2 describes the proposed goal planning method. We give higher priority to an agent that has a smaller distance to its goal (30). However, we assign the lowest priority to agents that have already reached their goals to prevent them from blocking other agents' paths (31). If an agent moves away from the current agent, we do not consider the priority of that agent, for smooth traffic flow (32) (lines 1-6).

$\|\hat{\mathbf{p}}^{j}_{k}(T_{k})-\mathbf{g}^{j}\|<\|\hat{\mathbf{p}}^{i}_{k}(T_{k})-\mathbf{g}^{i}\|$ (30)
$\|\hat{\mathbf{p}}^{j}_{k}(T_{k})-\mathbf{g}^{j}\|>d_{g}$ (31)
$(\hat{\mathbf{p}}^{j}_{k}(T_{k+M})-\hat{\mathbf{p}}^{j}_{k}(T_{k}))\cdot(\hat{\mathbf{p}}^{i}_{k}(T_{k})-\hat{\mathbf{p}}^{j}_{k}(T_{k}))>0$ (32)

Next, we find the nearest agent among the higher-priority agents. If the distance to the nearest agent is less than the threshold $d_{th}$, we assign the goal so as to increase the distance to the closest agent (lines 7-9). This is necessary to prevent agents from clumping together. If there is no such agent, we search for a discrete path $\mathcal{D}$ using a grid-based path planner, where the higher-priority agents are considered as static obstacles. If the planner fails to find a solution, we retry to find the path without considering the higher-priority agents (line 11). Finally, we find the LOS-free goal on the discrete path (line 12). The LOS-free goal is a goal such that the line segment between the goal and the agent's current position does not collide with higher-priority agents or static obstacles.
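
The sketch below illustrates the priority set of lines 1-6 and the repulsion goal of lines 8-9. It reads line 3 of Alg. 2 as the conjunction of (30)-(32) (or agent $i$ having reached its goal), and it omits the grid-based planner and LOS-free goal search of lines 10-13; the data layout is illustrative.

```python
import numpy as np

def higher_priority_set(i, positions, goals, horizon_end_positions, d_g):
    """Lines 1-6 of Alg. 2: agents that agent i must treat as higher priority,
    using conditions (30)-(32).  Inputs are dicts keyed by agent id."""
    p_i, g_i = np.asarray(positions[i]), np.asarray(goals[i])
    priority = set()
    for j in positions:
        if j == i:
            continue
        p_j, g_j = np.asarray(positions[j]), np.asarray(goals[j])
        closer_to_goal = np.linalg.norm(p_j - g_j) < np.linalg.norm(p_i - g_i)        # (30)
        not_goal_reached = np.linalg.norm(p_j - g_j) > d_g                            # (31)
        approaching = (np.asarray(horizon_end_positions[j]) - p_j) @ (p_i - p_j) > 0  # (32)
        i_reached = np.linalg.norm(p_i - g_i) < d_g
        if (closer_to_goal and not_goal_reached and approaching) or i_reached:
            priority.add(j)
    return priority

def repulsion_goal(p_i, p_q, d_rep):
    """Lines 8-9 of Alg. 2: push the current goal away from the closest higher-priority agent."""
    p_i, p_q = np.asarray(p_i, dtype=float), np.asarray(p_q, dtype=float)
    return p_q + d_rep * (p_i - p_q) / np.linalg.norm(p_i - p_q)

positions = {0: [0.0, 0.0, 1.0], 1: [0.3, 0.0, 1.0]}
goals = {0: [2.0, 0.0, 1.0], 1: [0.5, 0.0, 1.0]}
horizon_end = {0: [0.4, 0.0, 1.0], 1: [0.2, 0.0, 1.0]}
print(higher_priority_set(0, positions, goals, horizon_end, d_g=0.1))   # {1}
print(repulsion_goal(positions[0], positions[1], d_rep=0.5))
```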

Refer to caption
Refer to caption
Figure 4: Goal planning depending on the direction of the initial trajectory. The blue and red circles are the current positions of agents $i$ and $j$, respectively. The small circles are the goal points of each agent, the small blue squares are the discrete path of agent $i$, and the black squares are static obstacles.

Fig. 4 describes the mechanism of Alg. 2. In the left figure, agent $j$ has lower priority because it is moving away from agent $i$. Due to that, agent $i$ ignores agent $j$ when planning the discrete path. On the other hand, agent $j$ in the right figure has higher priority because it moves toward agent $i$ and has a smaller distance to its goal. Thus, agent $i$ finds the LOS-free goal on the discrete path that detours around agent $j$ and the static obstacles.

IV-E Trajectory Optimization

We reformulate the distributed trajectory optimization problem as a quadratic programming (QP) problem using the SFC and LSC:

$\begin{aligned}\underset{\mathbf{c}^{i}_{k}}{\text{minimize}}\quad & J^{i}_{err}+J^{i}_{der}\\ \text{subject to}\quad & A_{eq}\mathbf{c}^{i}_{k}=\mathbf{b}_{eq}\\ & A_{dyn}\mathbf{c}^{i}_{k}\preceq\mathbf{b}_{dyn}\\ & \mathbf{c}^{i}_{k,m,l}\in\mathcal{S}^{i}_{k,m} && \forall m,l\\ & \mathbf{c}^{i}_{k,m,l}\in\mathcal{L}^{i,j}_{k,m,l} && \forall j\in\mathcal{I}\backslash i,m,l\\ & \mathbf{c}^{i}_{k,M,0}=\mathbf{c}^{i}_{k,M,l} && \forall l>0\end{aligned}$ (33)

Here, we add the last constraint to guarantee dynamic feasibility of the initial trajectory at the next replanning step. Note that the constraints in (33) guarantee collision avoidance due to the SFC and Lemma 2. Furthermore, Thm. 2 shows that this problem consists only of feasible constraints. Therefore, we can obtain a feasible solution from Alg. 1 at every replanning step by using a conventional convex solver.
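
A compact sketch of (33) is shown below using cvxpy in place of the CPLEX solver used in the paper. The smoothness term is a simplified surrogate for $J^{i}_{der}$, only the position part of the equality constraints is included, the SFCs are taken as axis-aligned boxes, and the strict LSC inequality is relaxed to a non-strict one as is common in practice; all names and values are illustrative.

```python
import numpy as np
import cvxpy as cp

def traj_opt(n, M, goal, p0, sfc_boxes, lscs, w_e=1.0, w_s=0.01):
    """Minimal sketch of the QP (33).

    sfc_boxes: M axis-aligned boxes (lo, hi) standing in for S^i_{k,m}.
    lscs: tuples (m, l, anchor, n_vec, d) encoding the halfspaces (19).
    """
    c = [cp.Variable((n + 1, 3)) for _ in range(M)]      # control points of each segment
    cost = 0
    for m in range(M):
        cost += w_e * cp.sum_squares(c[m][n, :] - goal)            # p(T_{k+m}) = c_{k,m,n}
        cost += w_s * cp.sum_squares(c[m][1:, :] - c[m][:-1, :])   # smoothness surrogate
    cons = [c[0][0, :] == p0]                                      # initial position
    cons += [c[m][n, :] == c[m + 1][0, :] for m in range(M - 1)]   # position continuity
    for m, (lo, hi) in enumerate(sfc_boxes):                       # SFC membership (18)
        cons += [c[m] >= np.tile(lo, (n + 1, 1)), c[m] <= np.tile(hi, (n + 1, 1))]
    for m, l, anchor, n_vec, d in lscs:                            # LSC halfspaces (19)
        cons += [(c[m][l, :] - anchor) @ n_vec >= d]
    cons += [c[M - 1][l, :] == c[M - 1][0, :] for l in range(1, n + 1)]  # last constraint of (33)
    cp.Problem(cp.Minimize(cost), cons).solve()
    return [ci.value for ci in c]

sol = traj_opt(n=3, M=2, goal=np.array([1.0, 0.0, 1.0]), p0=np.array([0.0, 0.0, 1.0]),
               sfc_boxes=[(np.array([-1.0, -1.0, 0.0]), np.array([2.0, 1.0, 2.0]))] * 2,
               lscs=[(0, 3, np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.1)])
print(np.round(sol[1][-1], 3))
```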

Theorem 2.

Suppose that there is no collision in the initial configuration and the replanning period is equal to the segment time $\Delta t$. Then the solution of (33) always exists for all replanning steps.

Proof.

If $k=0$, then we obtain $\hat{\mathcal{H}}^{i,j}_{0,m}\cap\mathcal{C}^{i,j}=\emptyset$ for $\forall j\in\mathcal{I}\backslash i,m$ by the assumption. Hence the control points of the initial trajectory $\hat{\mathbf{c}}^{i}_{0}$ are the solution to (33) by Lemmas 1, 3.

If there exists a solution at the previous replanning step $k-1$, then the initial trajectory satisfies $A_{eq}\hat{\mathbf{c}}^{i}_{k}=\mathbf{b}_{eq}$, $A_{dyn}\hat{\mathbf{c}}^{i}_{k}\preceq\mathbf{b}_{dyn}$, $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{S}^{i}_{k,m}$ and $\hat{\mathbf{c}}^{i}_{k,M,0}=\hat{\mathbf{c}}^{i}_{k,M,l}$ for $\forall m,l$ due to (15) and Lemma 1. Also, we have $\mathcal{H}^{i,j}_{k-1,m}\cap\mathcal{C}^{i,j}=\emptyset$ for $\forall j\in\mathcal{I}\backslash i,m$ by Lemma 2. Here, we can derive that $\hat{\mathcal{H}}^{i,j}_{k,m}\cap\mathcal{C}^{i,j}=\emptyset$ for $\forall j\in\mathcal{I}\backslash i,m$ using the fact that $\hat{\mathcal{H}}^{i,j}_{k,m}=\mathcal{H}^{i,j}_{k-1,m+1}$ for $\forall m<M$ and $\hat{\mathcal{H}}^{i,j}_{k,M}=\{\mathbf{c}^{i}_{k-1,M,n}-\mathbf{c}^{j}_{k-1,M,n}\}\subset\mathcal{H}^{i,j}_{k-1,M}$. Thus, we obtain $\hat{\mathbf{c}}^{i}_{k,m,l}\in\mathcal{L}^{i,j}_{k,m,l}$ by Lemma 3. To summarize, the control points of the initial trajectory $\hat{\mathbf{c}}^{i}_{k}$ satisfy all constraints of (33). Therefore, the solution of (33) always exists for all replanning steps by mathematical induction. ∎

If we conduct the mission with actual quadrotors, the assumption used in Thm. 2 may not be satisfied due to external disturbance or tracking error. To address this, we adopt the event-triggered replanning proposed in [1]. This method ignores the tracking error when the error is small. In other words, it utilizes the desired state from the previous trajectory ($\mathbf{p}^{i}_{k-1}(T_{k})$) as the initial condition of the current optimization problem (4). If the detected disturbance is too large, we replace the planner with our previous work [12] until the error vanishes.
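
A minimal sketch of this event-triggered rule is given below; the error threshold, return convention and data layout are our own assumptions.

```python
import numpy as np

def select_initial_state(measured_pos, desired_state, error_threshold):
    """Event-triggered replanning rule adopted from [1]: when the tracking error is small,
    keep planning from the previously commanded (desired) state so the feasibility argument
    of Thm. 2 still applies; otherwise signal that a fallback planner (e.g. [12]) should
    take over until the error vanishes."""
    error = np.linalg.norm(np.asarray(measured_pos) - np.asarray(desired_state["position"]))
    if error < error_threshold:
        return desired_state, False     # reuse p_{k-1}(T_k) as the initial condition in (4)
    return None, True                   # large disturbance: switch to the fallback planner

state, use_fallback = select_initial_state(
    measured_pos=[0.02, 0.0, 1.0],
    desired_state={"position": [0.0, 0.0, 1.0], "velocity": [0.1, 0.0, 0.0]},
    error_threshold=0.1)
print(use_fallback)
```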

V EXPERIMENTS

All simulations and experiments were conducted on a laptop running Ubuntu 18.04 with an Intel Core i7-9750H @ 2.60 GHz CPU and 32 GB RAM. All methods used for comparison were implemented in C++, except for DMPC, which was tested in MATLAB R2020a. We modeled the quadrotor with radius $r^{\forall i}=0.15$ m, downwash coefficient $c_{dw}=2$, maximum velocity $[1.0,1.0,1.0]$ m/s, and maximum acceleration $[2.0,2.0,2.0]$ m/s$^{2}$ based on the experimental results with Crazyflie 2.0. We use Octomap [23] to represent the obstacle environment. We formulated the piecewise Bernstein polynomial with the degree of polynomial $n=5$, the number of segments $M=5$ and the segment time $\Delta t=0.2$ s, so the total time horizon is 1 s. We set the parameters for the objective function and Alg. 2 to $w_{e,m}=1$, $w_{d,3}=0.01$, $d_{g}=0.1$ m, $d_{th}=0.4$ m and $d_{rep}=0.5$ m. We used the openGJK package [24] for the GJK algorithm and the CPLEX QP solver [25] for trajectory optimization. When executing the proposed algorithm, we set the interval between trajectory updates to $\Delta t=0.2$ s to match the assumption in Thm. 2. This means that all agents begin to plan their trajectories at the same time and update them at a rate of 5 Hz.

Refer to caption
Figure 5: Performance comparison in a 3 m × 3 m × 2 m empty space. We averaged the values over the success cases among 30 different random trials (best viewed in color).

V-A Simulation in Obstacle-free Space

To validate the performance of the proposed method, we compare four online trajectory planning algorithms in obstacle-free space: 1) DMPC (input space) [1], 2) BVC [6] with the right-hand rule, 3) LSC with the right-hand rule (LSC-RH), and 4) LSC with priority-based goal planning (LSC-PB). We conducted the simulation with 10 to 60 quadrotors in a 3 m × 3 m × 2 m empty space. We executed 30 missions with randomly allocated start and goal points for each number of agents. When checking for collisions, we use a smaller model with $r^{\forall i}=0.1$ m and $c_{dw}=2.25$ for DMPC because it uses soft constraints, while the other methods use the original model.

The top-left graph of Fig. 5 describes the success rate of each method. We observed that a collision occurred in every failure case of DMPC even though we adopted the smaller collision model. It implies that collision constraints with slack variables cannot guarantee safety in a dense environment, even with a larger safety margin. On the contrary, there was no collision when using the BVC- and LSC-based methods, and deadlock was the only reason for failure. It indicates that the proposed method does not cause optimization failure, as guaranteed by Thm. 2. Among the four algorithms, the LSC with priority-based goal planning shows the highest success rate in all cases. This result shows that priority-based goal planning can resolve deadlock better than the right-hand rule.

The top-right and bottom-left figures of Fig. 5 show the total flight time and flight distance per agent, respectively. The proposed method has 50% less flight time and 17% less flight distance on average compared to the BVC-based method. Such performance is similar to DMPC, but the proposed algorithm still guarantees collision avoidance. Compared to the right-hand rule, priority-based goal planning costs 11% more flight time and 6% more flight distance, but this is acceptable considering the increase in success rate.

The bottom-right graph of Fig. 5 describes the average computation time per agent. The proposed method takes 10.5 ms for 30 agents and 15.5 ms for 60 agents. The proposed algorithm consumes computation time similar to BVC, but needs more computation time than DMPC because it involves more constraints to guarantee collision avoidance.

Refer to caption
(a) Random forest
Refer to caption
(b) Indoor space
Figure 6: Trajectory planning results of the proposed method in obstacle environments. Each ellipsoid and line denote an agent and its trajectory, respectively, and the gray blocks are obstacles.
Table I: Comparison with EGO-Swarm [3] in cluttered environments. ($sr$: success rate (%), $T_{f}$: flight time (s), $l$: flight distance per agent (m), $T_{c}$: computation time (ms))
Env. Method $sr$ $T_{f}$ $l$ $T_{c}$
Forest (20 agents) EGO-Swarm 50 18.2 8.69 4.7
LSC-PB 100 18.4 8.93 9.1
Indoor (20 agents) EGO-Swarm 0 - - 53.0
LSC-PB 100 42.0 14.6 9.2

V-B Simulation in Cluttered Environments

We compared the proposed algorithm with EGO-Swarm [3] in two types of obstacle environments. One is a random forest consisting of 10 static obstacles. We deployed 20 agents in a circle with a 4 m radius at 1 m height, and we assigned the goal points to the antipodes of the start points, as shown in Fig. 6(a). The other is an indoor space with dimensions 10 m × 15 m × 2.5 m. For a fair comparison, we increased the sensor range of EGO-Swarm enough to recognize all obstacles. We randomly assigned the start and goal points among the 20 predetermined points, as depicted in Fig. 6(b). We executed 30 simulations for each environment, changing the obstacles' positions (random forest) or the start and goal points (indoor space).

Table I shows the planning results of both methods in the cluttered environments. EGO-Swarm shows faster computation in the random forest because it optimizes the trajectory without any hard constraints. However, its success rate is 50%, indicating that EGO-Swarm cannot guarantee collision avoidance. On the other hand, the proposed method shows a perfect success rate and has similar flight time and distance compared to EGO-Swarm. This result shows that the LSC is as efficient as the soft constraints of EGO-Swarm but still guarantees collision avoidance in obstacle environments.

Unlike in the random forest, EGO-Swarm fails to find the trajectory to the goal points in the indoor space. This is because EGO-Swarm cannot find a topologically distinct trajectory that detours around the wall-shaped obstacles. On the other hand, the proposed method reaches the goal in all cases due to the priority-based goal planning.

V-C Flight Test

We conducted the flight test with ten Crazyflie 2.0 quadrotors in the maze-like environment depicted in Fig. 1. We confined the quadrotors to a 10 m × 6 m × 1 m space to prevent them from bypassing the obstacle region completely. We assigned the quadrotors a mission to patrol two waypoints to validate the operability of the proposed algorithm. We sequentially computed the trajectories for all agents on a single laptop, and we used Crazyswarm [26] to broadcast the control commands to the quadrotors. We utilized an Optitrack motion capture system to observe the agents' positions at 120 Hz. The full flight is presented in the supplemental video.

The flight test was conducted for around 70 seconds, and there was no large disturbance triggering the planner switch. We accumulated the history of the smallest inter-agent distance and agent-to-obstacle distance in Fig. 7(a) and 7(b), respectively. Fig. 7(a) shows that the agents violated the desired safe distance at some sample times due to tracking error. However, there was no physical collision during the entire flight. The computation time per agent is reported in Fig. 7(c), and the average computation time was 6.8 ms.

Refer to caption
(a) Distance between quadrotors
Refer to caption
(b) Distance to static obstacles
Refer to caption
(c) Computation time per agent
Figure 7: Summary of the flight test with 10 quadrotors in the maze-like environment. The red and black dashed lines denote the physical and desired safe distances, respectively.

VI CONCLUSIONS

We presented an online distributed trajectory planning algorithm for quadrotor swarms that is guaranteed to return collision-free and dynamically feasible trajectories in cluttered environments. We constructed the collision avoidance constraints using the linear safe corridor so that the feasible set of the constraints always contains the initial trajectory. We utilized these constraints to reformulate the trajectory generation problem as convex optimization, and we proved that this problem remains feasible at every replanning step. We verified that priority-based goal planning resolves deadlock better than the right-hand rule and EGO-Swarm. The proposed method shows the highest success rate compared to the state-of-the-art baselines and yields 50% less flight time and 17% less flight distance compared to the BVC-based method. We validated the operability of our algorithm through a real flight test in a maze-like environment.

References

  • [1] C. E. Luis, M. Vukosavljev, and A. P. Schoellig, “Online trajectory generation with distributed model predictive control for multi-robot motion planning,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 604–611, 2020.
  • [2] S. Kandhasamy, V. B. Kuppusamy, and S. Krishnan, “Scalable decentralized multi-robot trajectory optimization in continuous-time,” IEEE Access, vol. 8, pp. 173308–173322, 2020.
  • [3] X. Zhou, X. Wen, J. Zhu, H. Zhou, C. Xu, and F. Gao, “Ego-swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments,” arXiv preprint arXiv:2011.04183, 2020.
  • [4] X. Zhou, Z. Wang, X. Wen, J. Zhu, C. Xu, and F. Gao, “Decentralized spatial-temporal trajectory planning for multicopter swarms,” arXiv preprint arXiv:2106.12481, 2021.
  • [5] G. Sharon, R. Stern, A. Felner, and N. R. Sturtevant, “Conflict-based search for optimal multi-agent pathfinding,” Artificial Intelligence, vol. 219, pp. 40–66, 2015.
  • [6] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered voronoi cells,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 1047–1054, 2017.
  • [7] J. Park, J. Kim, I. Jang, and H. J. Kim, “Efficient multi-agent trajectory planning with feasibility guarantee using relative bernstein polynomial,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 434–440.
  • [8] J. Van Den Berg, S. J. Guy, M. Lin, and D. Manocha, “Reciprocal n-body collision avoidance,” in Robotics research.   Springer, 2011, pp. 3–19.
  • [9] S. H. Arul and D. Manocha, “Dcad: Decentralized collision avoidance with dynamics constraints for agile quadrotor swarms,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1191–1198, 2020.
  • [10] L. Wang, A. D. Ames, and M. Egerstedt, “Safety barrier certificates for collisions-free multirobot systems,” IEEE Transactions on Robotics, vol. 33, no. 3, pp. 661–674, 2017.
  • [11] J. Tordesillas and J. P. How, “Mader: Trajectory planner in multiagent and dynamic environments,” IEEE Transactions on Robotics, 2021.
  • [12] J. Park and H. J. Kim, “Online trajectory planning for multiple quadrotors in dynamic environments using relative safe flight corridor,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 659–666, 2020.
  • [13] H. Zhu and J. Alonso-Mora, “B-uavc: Buffered uncertainty-aware voronoi cells for probabilistic multi-robot collision avoidance,” in 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS).   IEEE, 2019, pp. 162–168.
  • [14] M. Jager and B. Nebel, “Decentralized collision avoidance, deadlock detection, and deadlock resolution for multiple mobile robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems., vol. 3.   IEEE, 2001, pp. 1213–1219.
  • [15] V. R. Desaraju and J. P. How, “Decentralized path planning for multi-agent teams with complex constraints,” Autonomous Robots, vol. 32, no. 4, pp. 385–403, 2012.
  • [16] B. Şenbaşlar, W. Hönig, and N. Ayanian, “Robust trajectory execution for multi-robot teams using distributed real-time replanning,” in Distributed Autonomous Robotic Systems.   Springer, 2019, pp. 167–181.
  • [17] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on.   IEEE, 2011, pp. 2520–2525.
  • [18] J. Tordesillas and J. P. How, “Minvo basis: Finding simplexes with minimum volume enclosing polynomial curves,” arXiv preprint arXiv:2010.10726, 2020.
  • [19] M. Zettler and J. Garloff, “Robustness analysis of polynomials with polynomial parameter dependency using bernstein expansion,” IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 425–431, 1998.
  • [20] S. Tang and V. Kumar, “Safe and complete trajectory generation for robot teams with higher-order dynamics,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2016, pp. 1894–1901.
  • [21] S. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization.   Cambridge university press, 2004.
  • [22] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi, “A fast procedure for computing the distance between complex objects in three-dimensional space,” IEEE Journal on Robotics and Automation, vol. 4, no. 2, pp. 193–203, 1988.
  • [23] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “Octomap: An efficient probabilistic 3d mapping framework based on octrees,” Autonomous robots, vol. 34, no. 3, pp. 189–206, 2013.
  • [24] M. Montanari and N. Petrinic, “Opengjk for c, c# and matlab: Reliable solutions to distance queries between convex bodies in three-dimensional space,” SoftwareX, vol. 7, pp. 352–355, 2018.
  • [25] IBM ILOG CPLEX, “V12.7.0 user’s manual,” 2016.
  • [26] J. A. Preiss, W. Honig, G. S. Sukhatme, and N. Ayanian, “Crazyswarm: A large nano-quadcopter swarm,” in International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 3299–3304.