Adaptive Policies for Perimeter Surveillance Problems

We consider the problem of sequentially choosing observation regions along a line, with an aim of maximising the detection of events of interest. Such a problem may arise when monitoring the movements of endangered or migratory species, detecting crossings of a border, policing activities at sea, and in many other settings. In each case, the key operational challenge is to learn an allocation of surveillance resources which maximises successful detection of events of interest. We present a combinatorial multi-armed bandit model with Poisson rewards and a novel filtered feedback mechanism, arising from the failure to detect certain intrusions, where reward distributions are dependent on the actions selected. Our solution method is an upper confidence bound approach and we derive upper and lower bounds on its expected performance. We prove that the gap between these bounds is of constant order, and demonstrate empirically that our approach is more reliable in simulated problems than competing algorithms.


Introduction
Many common surveillance tasks concern the detection of activity along a border or perimeter. Monitoring the movements of endangered or migratory species through crossings using camera traps, covertly tracking illegal fishing in territorial waters via adaptive satellite technology, and quantifying traffic across a border using drone technology are a few of the many important applications in this domain. Equally, a number of common scheduling challenges involve events arising through time: for instance, scheduling call centre staff to meet random arrivals, or deciding at what times traffic cameras should be in operation to catch speeding drivers.
Approaches to the optimal design of observation strategies are invaluable not only at the operational level, but also at the strategic level, because they can inform decision makers about expected outcomes for different budget scenarios and policies. In each of these tasks the notion of optimality can be equated to maximising the rate of detection of events or, equivalently, detecting as many events as possible over some fixed time horizon.
We consider a scenario where observations are made by a team of searchers (representing cameras, sensors, human searchers, etc.), coordinated by a central agent, referred to as the controller, who chooses which segments of a line each searcher will observe. As the line may be thought of as indexing space or time, the formulation captures a wide range of examples (we will discuss the spatial problem in what follows for ease of exposition). We will assume that events arise according to a Poisson process and that the likelihood of an event being detected depends on the allocation of resource chosen by the controller.
The problem of designing an optimal deployment of searchers becomes truly challenging when the number of available searchers is insufficient to guarantee perfect detection of all events, which is often the case in tight fiscal environments. In such a setting the controller faces a classic resource allocation problem, where the action set is the set of possible allocations of searchers to segments of the line and the controller aims to find an action which maximises the rate of detection. To compute this rate of detection the controller must know the rate at which events occur along the length of the line and the probabilities with which searchers detect events that have appeared at particular points (under a particular allocation of searchers to parts of the line). It is, of course, a strong assumption that such information is available, particularly at the beginning of a new project.
In this work we consider the more realistic setting where the rate at which events occur is unknown. In this case the controller has two broad options: (a) to select an allocation which performs best in expectation according to some prior information (if it exists) and stick to that, or (b) (if possible) to take an adaptive strategy, which alters the allocation of searchers as observations are collected.
In this second scenario a sequential resource allocation problem is faced, in which the controller wishes to quickly and confidently converge on an optimal allocation while also ensuring appropriate experimentation. This sequential problem is our principal concern in this paper.
To permit analysis of this problem we shall assume two discretisations to simplify the controller's action set. We will consider that opportunities to update the allocation of searchers occur only at particular time points t ∈ N. Thus, the problem can be thought of as taking place over a series of rounds. We will also suppose that the search space has been divided into a number of cells such that each searcher is allocated a connected set of cells in which to patrol, disjoint from those allocated to other searchers. Imposing this discrete structure on the problem is useful as it allows us to draw on a large literature concerning multi-armed bandit problems when designing and analysing solutions to the problem.
Multi-armed bandit problems are relevant to this sequential resource allocation problem because they provide a framework for studying exploration-exploitation dilemmas, which constitute the principal challenge faced by the controller here. In order to reliably select optimal actions, data must be collected from all cells to accurately estimate the expected number of detections associated with an action; i.e. the action space should be explored. However, data is being collected on a live problem: real events are passing undetected when sub-optimal actions are played. As such there is pressure to exploit the information that has been collected and select actions which are believed to yield high detection rates over those with more exploratory value. A balance must be struck. One may suppose that this is a trivial issue which can be resolved by simply searching all cells in all rounds. However, searching more cells will not necessarily lead to more accurate information or a higher detection rate. Searchers become less effective at detecting events the more cells they are allocated, because events may go undetected if a searcher is attempting to cover too large a region. Indeed, an optimal action may well be to assign each searcher to just a single cell.
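To make the last point concrete, the following sketch uses a made-up detection model, in which a searcher covering w cells detects events in each of them with probability 1/w, and made-up event rates. It shows that spreading a single searcher over the whole line can detect fewer events in expectation than focusing it on the busiest cell.

```python
# Illustrative only: the 1/w detection model and the rates are assumptions,
# not taken from the paper.

def expected_detections(lams, regions):
    """regions: list of (start, end) cell index ranges, one per searcher."""
    total = 0.0
    for (i, j) in regions:
        width = j - i + 1
        gamma = 1.0 / width          # per-cell detection probability
        total += gamma * sum(lams[i:j + 1])
    return total

lams = [5.0, 1.0, 1.0, 1.0]          # event rates per cell

# One searcher spread over all four cells...
spread = expected_detections(lams, [(0, 3)])      # (1/4) * 8 = 2.0
# ...versus the same searcher focused on the busiest cell.
focused = expected_detections(lams, [(0, 0)])     # 5.0

print(spread, focused)  # focusing detects more events in expectation
```

Under this (assumed) model, narrowing the searched region strictly improves the detection rate whenever the abandoned cells carry little of the total event rate.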

Related Literature
We select a Poisson process as the data generating model for our problem. The Poisson process is widely used as a model for spatial and spatiotemporal event data in many settings, such as ecology (Heikkinen and Arjas 1999, Serra et al. 2014) and arrival process modelling (Benes 1957, Weinberg et al. 2007). There is a large literature on inference for Poisson processes, which has led to a variety of sophisticated techniques, such as those involving Gaussian processes (Adams et al. 2009, John and Hensman 2018) or kernel-based smoothing (Diggle 1985). However, the theoretical properties of the more complex methods are typically only understood asymptotically (Helmers et al. 2005, Kirichenko and Van Zanten 2015, Gugushvili et al. 2018), and therefore, in the interest of developing tight guarantees on the performance of sequential decision-making algorithms, we favour a simple piecewise-constant model for the Poisson process rate in this paper.
Search theory has its origins in WWII with the study of barrier patrols during the Battle of the Atlantic (Koopman 1946). The works of Stone (1976) and Washburn (2002) present a much broader and more contemporary range of applications in search theory and detection, and are by now the classic references on the subject. More closely related to our work is Szechtman et al. (2008), who study the perimeter protection problem when the parameters of the arrival process are fully known, for mobile and fixed searchers. Carlsson et al. (2016) study the problem of optimally partitioning a space in R^2 to maximise a function of an intensity of events over the space. Their problem bears resemblance to the full information version of our problem, though our solution method is quite different due to our discretisation of the problem. Our work is, to the best of our knowledge, the first to tackle the learning aspect of such a problem.
The sequential problem we consider is structurally similar to a combinatorial multi-armed bandit (CMAB) problem (Chen et al. 2013). To permit discussion of a CMAB we first describe the simpler multi-armed bandit (MAB) problem (first attributed to Thompson (1933)), which is a special case. The (stochastic) MAB problem models a scenario where an agent is faced with a series of potential actions (or arms), each associated with some underlying probability distribution. In each of a series of rounds, the agent selects a single action and receives a reward drawn from the underlying distribution associated with the selected action. The agent's aim is to maximise her cumulative expected reward over some number of rounds, or equivalently to minimise her cumulative regret, defined as the difference in expected reward between optimal actions and the actions actually selected. To succeed in this the agent must manage an exploration-exploitation trade-off as she learns which actions have high expected reward.
The CMAB problem models a richer framework where the agent may select multiple actions in each round and her reward is a function of the observations from the underlying distributions associated with the selected actions. Chen et al. (2013) consider a setting where this function may be non-linear. Numerous authors (Anantharam et al. (1987), Gai et al. (2012), Kveton et al. (2015b), Combes et al. (2015), and Luedtke et al. (2016)) consider a special case (known as a multiple-play bandit) where the reward is simply a sum of the random observations and the number of actions which may be selected in one round is limited. A number of other works have since extended the framework of Chen et al. (2013) to model other novel features. Chen et al. (2016a) and Kveton et al. (2015a) consider a setting where playing a subset of arms may randomly trigger additional rewards from other arms, and Chen et al. (2016b) consider a broader set of non-linear reward functions. However, the CMAB model and UCB approach of Chen et al. (2013) is the work closest to ours, as the later developments model features that are not present in our setting. The fundamental differences between our model and theirs are that we consider heavy-tailed rewards and a setting where reward distributions depend on the selected action.
Reward maximisation in a CMAB problem requires addressing a trade-off between exploration and exploitation similar to that faced in the MAB problem. For MAB-type problems, it has famously been shown that under certain assumptions optimal policies can be derived by formulating the problem as a Markov Decision Process and using an index approach (Gittins et al. 2011). In CMAB problems, however, these approaches are inappropriate, not least because the combinatorial action sets induce dependencies between the rewards generated by distinct actions, which invalidates Gittins' theory. See also Remark 1 in Section 2. More recently, so-called upper confidence bound (UCB) algorithms, first proposed by Lai and Robbins (1985) and Burnetas and Katehakis (1996), and popularised by Auer et al. (2002), have attracted much attention as approaches that enjoy efficient implementation and strong theoretical guarantees. These heuristic methods balance exploration and exploitation by selecting actions based on optimistic estimates of the associated expected rewards and can be applied to both MAB and CMAB problems. Auer et al. (2002) originally proposed a UCB approach for MAB problems with underlying distributions whose support lies entirely within [0, 1]. Chen et al. (2013) extended the principles of this algorithm to a version suitable for CMAB problems with non-linear rewards. Broader classes of unbounded distributions have been considered by other authors. Cowan et al. (2015), Bubeck and Cesa-Bianchi (2012), Bubeck et al. (2013), and Lattimore (2017) give UCB algorithms suitable for use with unbounded distributions, studying distributions that are Gaussian, have sub-Gaussian tails, known variance and known kurtosis respectively. Luedtke et al. (2016) have studied multiple-play bandits with exponential family distributions.

Our contributions in this paper are as follows:
• Introduction of a formal model for sequential event detection problems and an efficient integer programming solution to the full-information version of the problem;
• Introduction of the filtered feedback model for combinatorial multi-armed bandits;
• Development of a bespoke treatment of combinatorial bandits with Poisson rewards, leading to a new martingale inequality for filtered Poisson data and an accompanying UCB approach;
• Regret analysis yielding an optimal-order upper bound on the finite-time regret of the UCB algorithm and a problem-specific lower bound on asymptotic regret for any uniformly good algorithm.
We also present extensive numerical work demonstrating the robustness of the UCB approach in comparison with its competitors.

Paper Outline
The remainder of the paper is structured as follows. Section 2 introduces a model of the sequential problem. In Section 3 we solve the full information problem (the non-sequential resource allocation problem where the rate function of the arrival process is known). The proposed integer programming solution forms the backbone of the proposed solution methods for the sequential problem. In Section 4 we introduce a solution method, the Filtered Poisson Combinatorial Upper Confidence Bound algorithm, for the sequential resource allocation problem, and derive a performance guarantee in the form of an upper bound on the expected regret of the policy. Here, we also derive a lower bound on the expected regret possible for any policy, and thus show that our algorithm has a bound of the correct order. We conclude in Sections 5 and 6 with numerical experiments and a discussion respectively.

The Model
Before introducing solution methods we give a mathematical model of the problem. Throughout the paper, for a positive integer W, let the notation [W] represent the set {1, 2, ..., W}.
The observation domain (line) comprises K cells which can be searched by U searchers. We write a_k = u to denote the deployment of searcher u to cell k, while a_k = 0 is used when cell k goes unsearched. An action a := (a_1, a_2, ..., a_K) ∈ {0, 1, ..., U}^K describes a deployment of the searchers across the line. We impose the requirement that a ∈ A, the action set, where A consists of those deployments in which, for each u ∈ [U], the set of cells {k : a_k = u} is connected (a contiguous, possibly empty, block of cells). These conditions on A ensure that searchers are assigned to disjoint connected sub-regions of the perimeter. The actions are uniquely defined by indicator variables a_iju ∈ {0, 1} for i, j ∈ [K], i ≤ j and u ∈ [U] such that a_iju = 1 ⇔ agent u is assigned to the cells {i, i + 1, ..., j} only.
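For small instances the action set A can be enumerated by brute force; the sketch below is an illustration of the contiguity requirement, not the paper's own code, and simply checks that each searcher's cells form a connected block.

```python
from itertools import product

def is_valid(a, U):
    """a is a tuple in {0,...,U}^K; each searcher's cells must be contiguous."""
    for u in range(1, U + 1):
        cells = [k for k, v in enumerate(a) if v == u]
        if cells and cells[-1] - cells[0] + 1 != len(cells):
            return False  # searcher u's cells are not a connected block
    return True

def action_set(K, U):
    return [a for a in product(range(U + 1), repeat=K) if is_valid(a, U)]

# One searcher on three cells: the 6 non-empty contiguous blocks plus the
# empty deployment give 7 actions.
print(len(action_set(3, 1)))  # 7
```

Enumeration like this is only feasible for toy problems; the integer programming formulation of Section 3 handles realistic sizes.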
Each action a ∈ A gives rise to a certain detection probability γ_k(a) ∈ [0, 1] in each cell k ∈ [K]. The detection probabilities capture the effectiveness of each searcher in observing an event in a specific cell. We write γ(a) for the K-vector whose k-th component is γ_k(a). The detection probabilities are structured such that, for any a, b ∈ A which assign the same sub-region {i, ..., j} to the same searcher, we have γ_k(a) = γ_k(b) for all k ∈ {i, ..., j}. Hence, the detection probability in a cell depends only on the sub-region assigned to the single agent searching that cell and is unaffected by the sub-regions assigned to other searchers. We assume that if a cell is searched there will be some non-zero probability of detecting events that occur. That is to say, for any k ∈ [K], γ_k(a) > 0 for any a ∈ A such that a_k ≠ 0.
We consider two cases with respect to knowledge of the detection probabilities: (I) The detection probabilities γ(a) are known for all a ∈ A. This scenario occurs when the controller knows γ(a) from past experience.
(II) The functions γ have a particular known parametric form but unknown parameter values. This case is realistic when properties of the detection probabilities are dictated by physical considerations, such as the searchers' speed, the visibility in particular locations, or the time for which an event is observable.
Our sequential decision problem may now be described as follows:
1. At each time t ∈ N an action a_t ∈ A is taken, inducing a detection probability γ_k(a_t) in each cell k ∈ [K];
2. Events are generated by K independent Poisson processes, one for each cell. We use X_k to denote the number of events in cell k (whether observed or not) occurring during the period of a single search. We have X_k ~ Poisson(λ_k), where the rates λ_k ∈ R^+ are unknown, and write λ_max ≥ max_{k∈[K]} λ_k for a known upper bound on the arrival rates. We use X_kt for the number of events generated in cell k during search t.
3. Should action a_t be taken at time t, a random vector of observed events Y_t := (Y_1t, ..., Y_Kt) is generated. Events in the underlying X-process are observed or not independently of each other, an event in cell k being observed with probability γ_k(a_t). We write Y_kt for the number of events observed in cell k during search t. It follows from standard theory on the thinning of Poisson processes that Y_kt ~ Poisson(γ_k(a_t)λ_k) and that Y_kt and X_kt − Y_kt are independent random variables. It follows that the mean number of events observed under action a is given by r_λ,γ(a) := γ(a)^T λ, where T denotes vector transposition and λ is the K-vector whose k-th component is λ_k.
4. We write H_t for the history (of actions taken and events observed) available to the decision-maker at time t ∈ N. A policy is a rule for decision-making and is determined by some collection of functions π_t : H_t → A, t ∈ N, adapted to the filtration induced by H_t. In practice a policy will be determined by some algorithm A. We will use the terms policy and algorithm interchangeably in what follows.
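The thinning property used in item 3 above can be checked by a quick simulation (the rate λ and probability γ here are arbitrary): if each of X ~ Poisson(λ) events is detected independently with probability γ, the detected count Y has mean γλ.

```python
import random

random.seed(1)

def poisson(lam, rng=random):
    """Sample a Poisson variate via Knuth's method (fine for small lam)."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

lam, gamma, n = 3.0, 0.4, 20000
ys = []
for _ in range(n):
    x = poisson(lam)                                   # events that occur
    y = sum(1 for _ in range(x) if random.random() < gamma)  # events detected
    ys.append(y)

mean_y = sum(ys) / n
print(round(mean_y, 2))  # close to gamma * lam = 1.2
```

The same thinning argument also gives the independence of the detected and undetected counts, which is used repeatedly in the analysis.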
The goal of analysis is the elucidation of policies whose performance (as measured by the mean number of events observed) is strong uniformly over λ, γ and over partial horizons {1, 2, ..., n} ⊆ N. We write Rew_λ,γ(A, n) for the mean number of events observed up to time n ∈ N under algorithm A. If we write opt_λ,γ := max_{a∈A} r_λ,γ(a), then it is plain that, for any choice of A,

n · opt_λ,γ ≥ Rew_λ,γ(A, n),

with achievement of the left-hand side dependent on knowledge of λ. Assessment of algorithms will be based on the associated regret function, the expected reward lost through ignorance of λ, given for algorithm A and horizon n by

Reg_λ,γ(A, n) := n · opt_λ,γ − Rew_λ,γ(A, n),    (1)

which is necessarily nonnegative and nondecreasing in n, for any fixed A. In related bandit-type problems the regret of the best algorithms typically grows at O(log(n)) uniformly across all λ.
We will demonstrate both that this is also the case for the algorithms we propose and that the best achievable growth for this problem is also O(log(n)).
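As a minimal worked example of the regret notions above (the detection vectors and rates here are invented for illustration), the regret of a fixed suboptimal policy grows linearly in the horizon, at the per-round gap to the optimal action:

```python
# Toy instance: two cells, three actions described directly by their
# detection probability vectors (all values are made up).
lams = [2.0, 1.0]
actions = {
    "focus_1": [1.0, 0.0],
    "focus_2": [0.0, 1.0],
    "spread":  [0.5, 0.5],
}

def reward(gamma, lams):
    return sum(g * l for g, l in zip(gamma, lams))   # r(a) = gamma(a)^T lambda

opt = max(reward(g, lams) for g in actions.values())  # 2.0, from "focus_1"

# Regret of always playing "spread" for n rounds: n * (opt - r(spread)).
n = 100
regret = n * (opt - reward(actions["spread"], lams))
print(regret)  # 100 * (2.0 - 1.5) = 50.0
```

A good learning algorithm instead achieves regret growing only logarithmically in n, which is the target of the analysis in Section 4.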
Remark 1 An alternative, indeed classical, formulation uses Bayes sequential decision theory.
Here the goal of analysis is the determination of an algorithm A to maximise the expected number of events observed, where the outer expectation is taken over some prior distribution ρ for the unknown λ. A standard approach would formulate this as a Markov Decision Process (MDP) with an informational state at time t taken to be some sufficient statistic for λ. The objections to this approach in this context are many. First, any serious attempt to derive a tractable formulation will require strong assumptions on the prior ρ, including, for example, independence of the components of λ. These would each typically have a conjugate gamma prior. Even then the resulting dynamic program would be computationally intractable for any reasonable choices of K and n. Second, the realities of our problem (and, indeed, many others) are such that specification of any reasonably informed prior is impractical. Confidence in the analysis would inevitably require robustness of the performance of any proposed algorithm to the specification of the prior. Indeed, our formulation centred on regret simply seeks robustness of performance with respect to the values of the unknown λ. Third, the MDP approach would require up-front specification of the decision horizon n. This is practically undesirable for our problem. Moreover, the value of n is not unimportant: it will determine the nature of good policies in important ways. For example, the "last" decision at time n is guaranteed to be optimally "greedy", since there is no further need to learn about λ at that point.

The Full Information Problem
In order to develop strongly performing policies, it is critical that we are able to solve the full information optimisation problem opt_λ,γ := max_{a∈A} r_λ,γ(a) for any pre-specified λ ∈ (R^+)^K. A naive proposal for a policy addressing the problem outlined in the previous section would choose an action a_t at time t to solve the full information problem for some estimate λ_t of the unknown λ available at time t. While such a proposal would fail to adequately address the challenge of learning about λ, we will in the succeeding sections develop effective algorithms which choose allocations determined by solutions of full information problems for carefully chosen λ-values.
A challenge to the solution of the full information problem is the non-linearity in a of the objective r_λ,γ(a), inherited from the non-linearity of the detection mechanism γ(a). To develop efficient solution approaches we produce a formulation as a linear integer program (IP) in which this non-linearity is removed by precomputing key quantities. In particular we write

q_λ,γ,iju := Σ_{k=i}^{j} γ_k(a^{iju}) λ_k

for the mean number of events detected when agent u is allocated to the sub-region {i, i+1, ..., j}, where a^{iju} is any a ∈ A such that a_iju = 1. Efficient solution of the full information problem relies on precomputing these q_λ,γ,iju for all 1 ≤ i ≤ j ≤ K and u ∈ [U]. We now have that

opt_λ,γ = max Σ_{u∈[U]} Σ_{1≤i≤j≤K} q_λ,γ,iju · a_iju
subject to  Σ_{1≤i≤j≤K} a_iju ≤ 1, u ∈ [U],
            Σ_{u∈[U]} Σ_{i≤k≤j} a_iju ≤ 1, k ∈ [K],
            a_iju ∈ {0, 1}, 1 ≤ i ≤ j ≤ K, u ∈ [U].    (2)

The first constraint above guarantees that each searcher u is assigned to at most one sub-region, while the second constraint guarantees that each cell k is searched by at most one searcher. We view the solution of (2) as the optimal allocation strategy and the optimal value as the best achievable performance for an agent with perfect knowledge of γ and λ.
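For intuition, the sketch below mimics the precompute-then-optimise structure of (2) on a tiny instance, replacing the IP solver with exhaustive search over feasible interval assignments; the γ = 1/width detection model and the rates are assumptions for illustration only.

```python
from itertools import product

def solve_full_information(lams, U):
    """Brute-force stand-in for IP (2): precompute q for every interval,
    then search all disjoint assignments of intervals to searchers."""
    K = len(lams)
    intervals = [(i, j) for i in range(K) for j in range(i, K)]
    # q[(i, j)]: mean detections when a searcher covers cells i..j,
    # under the assumed 1/width detection model.
    q = {(i, j): sum(lams[i:j + 1]) / (j - i + 1) for (i, j) in intervals}

    best = 0.0
    # Each searcher picks one interval or None; intervals must be disjoint.
    for choice in product([None] + intervals, repeat=U):
        used, ok, value = set(), True, 0.0
        for iv in choice:
            if iv is None:
                continue
            cells = set(range(iv[0], iv[1] + 1))
            if cells & used:
                ok = False
                break
            used |= cells
            value += q[iv]
        if ok:
            best = max(best, value)
    return best

print(solve_full_information([2.0, 1.0], U=1))  # 2.0: search cell 1 alone
```

With a second searcher the optimum rises to 3.0 (one searcher per cell), matching the observation that narrowly focused searchers can be optimal.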
When we require solutions to the full information problem for the implementation of algorithms for the problem described in the preceding section, we solve an appropriate version of the above IP (i.e., for suitably chosen λ) by means of branch and bound. While it can be shown that the IP (2) belongs to a class of problems which is NP-hard (see Appendix A), we find that the solution of this IP is very efficient in practice. We believe that this is because the solution of the LP-relaxation of (2) often coincides with the exact solution of the IP. Indeed, in empirical tests this occurred more than 90% of the time, and in the remaining instances the gap between the two solutions was always less than 1%. For all problem sizes considered in this paper the pre-processing and solution steps can be completed in less than a second using basic linear program solvers in the statistical programming language R on a single laptop.

Sequential Problem
In the sequential problem, the controller's objective is to minimise regret (1) over a sequence of rounds. To do so the controller must construct a strategy which balances exploring all cells to accurately estimate the underlying rate parameters λ, while also exploiting the information gained to detect as many events as possible. In this section we introduce and analyse two upper confidence bound (UCB) algorithms as policies for the case of fully known detection probabilities (case (I)) and the case where only the nature of the scaling of detection probabilities is known (case (II)).
The model we introduced in Section 2 is closely related to the Combinatorial Multi-Armed Bandit (CMAB) model of Chen et al. (2013). The CMAB problem models a scenario where a decision-maker is faced with a set of K basic actions (or arms), each associated with a random variable of unknown probability distribution. In each round t ∈ N, the decision-maker may select a subset of basic actions to take (or arms to pull) and receives a reward which is a (possibly randomised) function of realisations of the random variables associated with the selected basic actions. The decision-maker's aim is to maximise her cumulative reward over a given horizon. Chen et al. study a CMAB problem where the decision-maker receives semi-bandit feedback on her actions, meaning she observes not only the overall reward but also all realisations of the random variables associated with the selected arms. Realisations of the random variables are identically distributed for a given arm and independent both across time and arms.
In our adaptive searching problem, electing to search a cell k in round t, i.e. setting a_kt ≠ 0, is the analogue of pulling an arm k. The total number of events detected in a round is the analogue of reward. The fundamental, and non-trivial, difference between our model and that of Chen et al. lies in the feedback mechanism. Our framework is more complex in two important regards. Firstly, we do not by default observe independent identically distributed (i.i.d.) realisations of the underlying random variable of interest X_kt each time we elect to search a cell. We observe a filtered observation Y_kt whose distribution depends on the action a_t selected in that round. This introduces complex dependencies within the sequence of rewards, meaning standard concentration results for independent observations do not apply. Secondly, because of the U possibly heterogeneous searchers, we can have multiple ways of searching the same collection of cells. While this is implicitly permitted within the framework of Chen et al., it is not explicitly acknowledged, nor, to the best of our knowledge, are any real problems with such a structure explored in related work.
Our analytical challenge is to extend earlier work in order to meet these novel features. Specifically, we will propose a UCB algorithm for both cases of our problem and derive upper bounds on the expected regret of these policies. UCB algorithms apply the principle of optimism in the face of uncertainty to sequential decision problems. Such an algorithm calculates an index for each action in each round, which is the upper limit of a high-probability confidence interval on the expected reward of that action, and then selects the action with the highest index. In this way the algorithm will select actions which have high indices either due to a large mean estimate (leading it to exploit what has been profitable so far) or due to a large uncertainty in the empirical mean (leading it to explore actions which are currently poorly understood). As the rounds proceed, the confidence intervals will concentrate on the true means and fewer exploratory actions will be selected in favour of exploitative ones.
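The following toy loop illustrates the UCB principle on two Poisson "cells". Note that the sqrt(2 log t / n) inflation used here is the textbook UCB1 width, chosen purely for illustration; it is not the de la Peña-based index for filtered Poisson data developed in this section.

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's method; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lams = [5.0, 1.0]                 # true (unknown to the learner) rates
counts, sums = [0, 0], [0.0, 0.0]

for t in range(1, 501):
    if 0 in counts:
        arm = counts.index(0)     # initialisation: play each arm once
    else:
        ucb = [sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(2)]
        arm = ucb.index(max(ucb)) # optimism in the face of uncertainty
    sums[arm] += poisson(lams[arm])
    counts[arm] += 1

print(counts)  # the better arm (rate 5) is played far more often
```

The exploration bonus shrinks as an arm accumulates plays, so poorly sampled arms are revisited until their confidence intervals rule them out.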

Case (I): Known detection probabilities
In our first version of the problem, case (I), the only unknowns are the underlying rate parameters λ. We assume that the detection probability vectors γ(a) are known for all a ∈ A. Therefore we do not need to explicitly form UCB indices for every action separately. It will suffice to form a UCB index on each unknown λ_k for k ∈ [K]. Optimistic estimates of the value of each action will then arise by calculating the q_λ,γ,iju quantities with the optimistic estimate of λ in place of the true λ.
Our proposed approach to the sequential search problem in case (I), the FP-CUCB algorithm (Filtered Poisson - Combinatorial Upper Confidence Bound), is given as Algorithm 1. The algorithm consists of an initialisation phase of length K, where allocations are selected such that every cell is searched in some capacity at least once. Then in every subsequent round an inflated index λ̄_k,t, given in (3), is calculated for each cell k. This particular UCB index is chosen because it can be shown to bound λ_k with high probability. Specifically, using de la Peña's inequality (de la Peña 1999), it can be shown that P(λ̄_k,t ≥ λ_k) approaches 1 as t → ∞ at an appropriate rate. A full derivation of this term is given in the proof of the following theorem. An action which is optimal with respect to the K-vector of inflated rates λ̄_t = (λ̄_1,t, ..., λ̄_K,t) is then selected by solving the IP (2) with λ̄_t in place of λ. The inflation terms involve a parameter λ_max ≥ max_{k∈[K]} λ_k. This is necessary to construct UCBs which concentrate at a rate that matches the concentration of Poisson random variables, which is determined by the mean parameter.
To analyse the regret of this algorithm we must first introduce some additional notation for optimality gaps, the differences in expected reward between optimal and suboptimal actions. The quantity ∆_max is the difference in expected reward between an optimal allocation of searchers and the worst possible allocation, while ∆_min is the difference in expected reward between an optimal allocation and the closest-to-optimal suboptimal allocation. The quantities ∆_max^k and ∆_min^k are the largest and smallest gaps between the expected reward of an optimal allocation and that of suboptimal allocations in which cell k is searched in some capacity. All ∆ terms depend on λ and γ, but we drop this dependence from the notation for simplicity.

Algorithm 1 FP-CUCB (case (I))
Initialisation Phase: for each round t = 1, ..., K,
• Select an arbitrary allocation a ∈ A such that a_t ≠ 0.
Iterative Phase: for each round t > K,
• Calculate the index λ̄_k,t for each cell k ∈ [K];
• Select an allocation a*_λ̄t such that r_λ̄t,γ(a*_λ̄t) = max_{a∈A} r_λ̄t,γ(a).

Upper bound on regret
Now, in Theorem 1 we provide an analytical bound on the expected regret of the FP-CUCB algorithm in n rounds.
Theorem 1 The regret of the FP-CUCB algorithm with parameter λ_max, applied to the sequential surveillance problem with known γ, satisfies the upper bound given in (4). To give a proof of this theorem we must introduce a new way of thinking about the action space. Consider that while we have previously (for ease of exposition) defined actions in terms of allocations of searchers to cells, a ∈ A, the real impact on reward comes from the vectors of detection probabilities, γ(a), which arise from these allocations. As multiple allocations may give rise to the same vector of detection probabilities (if, for instance, two searchers have identical capabilities, then switching their assignments would have no impact on the quality of the search), the set G = {γ(a) : a ∈ A} of possible detection probability vectors most parsimoniously describes the set of possible actions in this problem.
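This collapse from A to G can be seen computationally: with two identical searchers and a simple assumed 1/width detection model (an illustration only, not the paper's model), distinct allocations map to far fewer distinct detection vectors.

```python
from itertools import product

def action_set(K, U):
    """All deployments in {0,...,U}^K whose per-searcher cells are contiguous."""
    def ok(a):
        for u in range(1, U + 1):
            cells = [k for k, v in enumerate(a) if v == u]
            if cells and cells[-1] - cells[0] + 1 != len(cells):
                return False
        return True
    return [a for a in product(range(U + 1), repeat=K) if ok(a)]

def gamma_vector(a, U):
    """Detection vector under an assumed identical 1/width model per searcher."""
    g = [0.0] * len(a)
    for u in range(1, U + 1):
        cells = [k for k, v in enumerate(a) if v == u]
        for k in cells:
            g[k] = 1.0 / len(cells)
    return tuple(g)

A = action_set(2, 2)
G = {gamma_vector(a, 2) for a in A}
print(len(A), len(G))  # 9 actions collapse to 5 distinct detection vectors
```

Working with G rather than A avoids double-counting allocations that are equivalent in reward, which tightens the regret accounting.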
For an element g = (g_1, ..., g_K) ∈ G we then have expected reward g^T λ and optimality gap ∆_g = opt_λ,γ − g^T λ. Let G_k be the set of vectors g ∈ G with g_k > 0 and G_k,B ⊆ G_k be the set of vectors in G_k with suboptimal expected reward, i.e. those with ∆_g > 0, ordered in increasing order of expected reward. We use corresponding notation for the optimality gaps with respect to these ordered vectors, and the gaps defined previously can be expressed in terms of them. We also define the cumulative detection probabilities D_k,t := Σ_{s=1}^{t} g_k,s, where g_s is the detection probability vector selected in round s. These allow us to keep track of the total detection probability applied to cell k up to the end of round t.
The central idea in proving Theorem 1 is that if, for a certain suboptimal action g with ∆_g > 0, all the cells k with g_k > 0 have been sampled sufficiently, the mean estimates ought to be accurate enough that the probability of selecting that suboptimal action again before horizon n is small. We show that this sufficient sampling level is O(log(n)) and that the "small" probabilities of selecting the suboptimal action after sufficient sampling are so small that their sum converges to a constant. Thus, by re-expressing expected regret as a function of the number of plays of suboptimal actions, we can bound it from above by the sum of an O(log(n)) term derived from the sufficient sampling level and a constant independent of n.
To count the plays of suboptimal actions we maintain counters N_k,t, which collectively count the number of suboptimal plays. We update them as follows. Firstly, after the K initialisation rounds we set N_k,K = 1 for k ∈ [K]. Thereafter, in each round t > K, let k' = arg min_{j: g_j,t > 0} N_j,t−1 (i.e. k' indexes the cell involved in the current action which has the lowest counter), where if k' is non-unique, we choose a single value randomly from the minimising set. If g_t^T λ ≠ opt_λ,γ then we increment N_k' by one, i.e. set N_k',t = N_k',t−1 + 1. The key consequences of these updating rules are that Σ_{k=1}^{K} N_k,t provides an upper bound on the number of suboptimal plays in t rounds (since one of the first K actions may be optimal), and that D_k,t ≥ γ_k,min N_k,t for all k and t (since cell k is always searched with detection probability at least γ_k,min). While tracking the suboptimal plays in this way is more complex than maintaining a single counter of the number of suboptimal actions, it permits a convenient decomposition of regret that allows us to prove Theorem 1.
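A small sketch of these updating rules (with an arbitrary, invented sequence of plays) illustrates the key consequence that the counters sum to the initialisation total plus the number of subsequent suboptimal plays:

```python
import random

random.seed(2)

K = 3
N = [1] * K                   # counters after the K initialisation rounds
suboptimal_plays = 0

# Each entry: (set of cells searched this round, was the action suboptimal?)
rounds = [
    ({0, 1}, True),
    ({2}, True),
    ({0, 1, 2}, False),       # optimal action: no counter is incremented
    ({1, 2}, True),
]

for searched, suboptimal in rounds:
    if suboptimal:
        low = min(N[k] for k in searched)
        # increment the involved cell with the lowest counter (ties broken
        # uniformly at random)
        k_star = random.choice([k for k in searched if N[k] == low])
        N[k_star] += 1
        suboptimal_plays += 1

print(sum(N), K + suboptimal_plays)  # equal by construction
```

Because each increment corresponds to one suboptimal round after initialisation, the sum of the counters dominates the number of suboptimal plays, which is exactly what the regret decomposition requires.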
Proof of Theorem 1: We prove the theorem by decomposing regret into a function of the number of plays of sub-optimal arms, up to and after some sufficient sampling level. We then introduce two propositions which give bounds for quantities in the decomposition, and these are combined to give the bound in (4). The proofs of these propositions are reserved for Appendix C. A cell k is said to be sufficiently sampled with respect to a choice of detection probabilities once its counter has reached the sufficient sampling level, and thus N^{l,und}_{k,n} and N^{l,suf}_{k,n} count the sub-optimal plays leading to incrementing N^l_{k,n} up to and after the sufficient level, respectively. From the definitions (6) and (7) we obtain a decomposition of these counters, and the expected regret at time horizon n can be bounded above using this notation, where ∆_{k,1} arises as a worst-case view of the initialisation. We can derive an analytical bound on regret by bounding the expectations of the random variables in (8).
Firstly, for the beyond-sufficiency counter we have Proposition 1. Proposition 1 For any time horizon n > K, the following bound holds. The full proof of Proposition 1 is given in Appendix C, but it depends in particular on the following lemma describing the concentration of filtered Poisson data. The derivation of the concentration result for the observations Y_1, ..., Y_t requires careful treatment, as the parameters of these distributions, and therefore the observations themselves, are not independent. The stochastic dependencies between the sequence of random variables γ_1, ..., γ_s may be highly complex, so rather than attempt to quantify these relationships exactly, we appeal to martingale theory, which allows us to derive the concentration result without assuming independence. We provide the necessary concentration result in the lemma below.
Lemma 1 Let Y_1, ..., Y_s be any sequence of Poisson random variables with means γ_1 λ, ..., γ_s λ respectively, such that the sequence {Z_j}_{j=1}^s is a martingale. Then, given parameters t ≥ s and λ_max ≥ λ, the following holds. The proof of this lemma is given in Appendix B. The consequence of this lemma is that the UCB indices (3) are of the correct form to guarantee that the probability of making sub-optimal plays beyond the sufficient sampling level is small.
For the under-sufficiency counter we have the following proposition, also proved in Appendix C. Proposition 2 For any time horizon n > K and any k such that ∆_{k,min} > 0, the following bound holds. Combining the decomposition (8) with the bounds (9) and (11), we obtain the result. In the remainder of this section we show that the bound obtained in Theorem 1 is of optimal order, by deriving a lower bound on the expected regret of the best possible policies. We also proceed to show a second upper bound which is of sub-optimal order with respect to n, but which has the advantage of holding for any problem instance, and therefore does not depend on the optimality gaps ∆_{k,min} and ∆_{k,max}, k ∈ [K].

Lower Bound on Regret
To analyse the performance of the best possible policies, we introduce the notion of a uniformly good policy. A uniformly good policy (Lai and Robbins 1985) is one for which the expected number of plays of every action g with ∆_g > 0 is sub-polynomial in n, for every λ ∈ R_+^K. Clearly, then, all uniformly good policies must eventually favour optimal actions over sub-optimal ones, with the sub-optimal actions being necessary only to accurately estimate λ. For a given rate vector λ we define the set of optimal actions J(λ), and we write S(λ) = G \ J(λ) for the set of sub-optimal actions. The difficulty of a particular problem depends on the particular configuration of λ and γ. We define the set of arms which are played in at least one optimal action, and B(λ) as the set of mean vectors such that all actions in J(λ) are sub-optimal but this cannot be discerned by playing only actions in J(λ). The larger the set B(λ), the more challenging the problem. If B(λ) = ∅ then the problem is trivial, as one can simultaneously play optimal actions and gather the information necessary to affirm that these actions are optimal. In such a case the lower bound on expected regret is simply 0.
We have the following lower bound on regret for any uniformly good policy. A key consequence of this result is the assertion that policies with O(log(n)) regret are indeed of optimal order, and thus that the regret induced by the FP-CUCB algorithm in case (I) grows at the lowest achievable rate. This result is analogous to results in other classes of bandit problems, as shown by Lai and Robbins (1985) and Burnetas and Katehakis (1996).
Theorem 2 For any λ ∈ R_+^K such that B(λ) ≠ ∅, and for any uniformly good policy π for the sequential surveillance problem with known γ, we have the following bound, where c(λ) is the optimal value of the following optimisation problem over non-negative coefficients d = {d_g, g ∈ S(λ)}, and kl(λ, θ) = λ log(λ/θ) + θ − λ is the Kullback-Leibler divergence between two Poisson distributions with mean parameters λ and θ respectively.
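The Kullback-Leibler divergence between Poisson distributions appearing in Theorem 2 is simple to evaluate directly. A minimal Python helper (our own naming, not from the paper):

```python
import math

def poisson_kl(lam, theta):
    """kl(lambda, theta) = lambda * log(lambda / theta) + theta - lambda,
    the KL divergence between Poisson(lambda) and Poisson(theta), theta > 0."""
    if lam == 0:
        return theta  # limit of the expression as lambda -> 0
    return lam * math.log(lam / theta) + theta - lam
```

The divergence is zero exactly when the two means coincide, and strictly positive otherwise, which is what drives the information-theoretic lower bound.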
We prove this theorem fully in Appendix D, but note here that a key step of its proof is to invoke Theorem 1 of Graves and Lai (1997), which is a similar result for a more general class of controlled Markov chains. It is possible to derive an analytical expression giving a lower bound on c(λ) by following steps similar to those in the proof of Theorem 2 of Combes et al. (2015). However, we omit this in the interests of succinctness, as it is not an especially useful or elegant expression.
We note that the lower bound is based on the KL-divergence of the cell means, and this suggests that, as in simpler MAB problems, an approach incorporating the KL-divergence in the UCB indices could be asymptotically optimal. However, existing theory on the convergence of such approaches (Garivier and Cappé 2011, Combes et al. 2015) pertains only to the case of independent reward generation and action selection mechanisms. It is therefore not clear how to approach the optimal design and analysis of such a method.

Gap-free bound on regret
The logarithmic order bounds of Theorems 1 and 2 are useful as they establish the order-optimality of the FP-CUCB algorithm. We note that the coefficients of the two bounds are not the same, and the upper bound may be very large in problem instances where the ∆_{k,min} terms are very small.
The main purpose of the bounds in Theorems 1 and 2 is, however, analytical rather than numerical, as they can be challenging to compute in practice. The computation of the ∆_{k,min} and ∆_{k,max} terms used in the upper bound requires evaluating the expected reward of every possible action, which quickly becomes computationally challenging for even modest values of K and U. The computation of the lower bound again requires the expected reward of every possible action, to calculate the ∆_g terms, and also a minimisation over |S(λ)| variables subject to a non-linear constraint. This optimisation problem lacks a convenient analytical solution and must be solved numerically. Moreover, in the absence of knowledge of the true reward-generating parameters, these bounds do little to inform one of the expected performance of the algorithm. For these reasons, we also present the following upper bound on regret, which is order-suboptimal, being of order O(√(Kn log(n))), but holds uniformly across any choice of λ ∈ [0, λ_max]^K and does not depend on the optimality gaps.
Theorem 3 The regret of the FP-CUCB algorithm with λ_max applied to the sequential surveillance problem with known γ satisfies the following bound. Proof of Theorem 3: We first consider the following decomposition of regret. The terms of the first sum in (16) are very unlikely to be positive, increasingly so as more data is collected. If we obtain an upper bound by ignoring the case of negative terms, we have the following, where the penultimate inequality is due to Lemma 1. Now consider the second sum in (16). Considering the expectation in the final term, we have the following, and similarly for the other expectation. Pulling this all together, we have the gap-free bound on regret as stated in Theorem 3.

Case (II): Known scaling of detection probabilities
In the second case we suppose that we do not know exactly what probability of successful detection each searcher has in each cell, but that we have some idea of how these detection probabilities change as the searchers are assigned more cells to search. If, for example, a searcher is moving back and forth over l cells at a constant speed s, then the time between successive visits to a cell is 2l/s, suggesting that the detection probability may decay like s/(2l) with the number of cells l.
To be precise about this case, we suppose that detection probabilities have the form given in (17), where φ_u : A → [0, 1] are known scaling functions, and ω_{ku} ∈ (0, 1] for all k ∈ [K], u ∈ [U] are unknown baseline detection probabilities: the probability of searcher u detecting events in cell k given that it is the only cell they are assigned to search. The functions φ_u are assumed to be decreasing in the number of cells searcher u must search. For instance, and as suggested in the preceding paragraph, one suitable function may be φ_u(a) = (Σ_{k=1}^K I{a_k = u})^{−1}, the reciprocal of the number of cells searcher u is assigned. Searcher effectiveness may, however, decay more slowly as the number of assigned cells grows if, for instance, events are visible for an extended period of time.
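The reciprocal scaling function described above can be written down directly. The following is an illustrative Python sketch with our own encoding (the allocation a is a list where a[k] = u means cell k is assigned to searcher u, and 0 means unassigned), not code from the paper:

```python
def phi_reciprocal(a, u):
    """phi_u(a) = 1 / (number of cells assigned to searcher u),
    the reciprocal-workload scaling function. Returns 0.0 if searcher u
    has no assigned cells (the value is then never used)."""
    n_cells = sum(1 for assignment in a if assignment == u)
    return 1.0 / n_cells if n_cells > 0 else 0.0

def detection_prob(a, k, u, omega):
    """gamma for the pair (k, u) under allocation a:
    omega[k][u] * phi_u(a) if cell k is assigned to searcher u, else 0
    (searcher u never visits cell k under this allocation)."""
    if a[k] != u:
        return 0.0
    return omega[k][u] * phi_reciprocal(a, u)
```

Any other decreasing φ_u (e.g. a slower decay for long-visibility events) can be dropped in for `phi_reciprocal` without changing the rest of the sketch.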
In case (II) the action set and observed rewards remain exactly the same as in case (I); it is the information initially available to the controller that differs. Here, both λ, the K-vector of rate parameters, and ω = (ω_{1,1}, ..., ω_{1,U}, ω_{2,1}, ..., ω_{K,U}), the KU-vector of baseline detection probabilities, are unknown, as opposed to solely λ in case (I). Due to non-identifiability we cannot make direct inference on λ or ω. However, estimating the products of certain components is sufficient for optimal decision making, as estimating the expected reward does not depend on having separate estimates of each parameter. Therefore we can simply consider the KU unknowns τ = (ω_{1,1}λ_1, ..., ω_{1,U}λ_1, ω_{2,1}λ_2, ..., ω_{K,U}λ_K) when referring to the unknown parameters.
As such, this second case of the sequential search problem can also be modelled as a CMAB problem with filtered feedback. The set of arms is given by the searcher-cell pairs ku, k ∈ [K], u ∈ [U]. Each arm ku is associated with a Poisson distribution with unknown parameter τ_{ku} = ω_{k,u}λ_k. We continue to use A to specify the action set, and filtering is governed by the scaling function vectors φ(a) = (φ_1(a), ..., φ_U(a)). Let φ_{ku,t} denote the filtering probability associated with the searcher-cell pair ku in round t; it is 0 if a_{k,t} ≠ u and φ_u(a_t) if a_{k,t} = u. Reward in this setting is defined analogously. The appropriate FP-CUCB algorithm for case (II) then calculates upper confidence bounds for each τ_{ku} parameter instead of λ_k, and as in the FP-CUCB algorithm for case (I) this induces an optimistic estimate of the value of every a ∈ A. We describe this second variant in Algorithm 2.
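The shape of such an index computation can be sketched as follows. This is our own illustrative Python; in particular, the exploration radius below is a generic UCB-style placeholder and is NOT the paper's exact inflation term from index definition (3):

```python
import math

def ucb_index(total_obs, total_filter_prob, t, tau_max):
    """Optimistic index for an arm ku in case (II).

    total_obs         : sum of filtered counts observed from this arm so far
    total_filter_prob : cumulative filtering probability applied to the arm
                        (the analogue of D_{k,t}: the sum of phi_{ku,s} over
                        rounds s in which the arm was played)
    t                 : current round
    tau_max           : known upper bound on tau_ku = omega_ku * lambda_k

    The empirical mean is inflated by an exploration radius that shrinks
    as the cumulative filtering probability grows.
    """
    mean_hat = total_obs / total_filter_prob
    radius = math.sqrt(2.0 * tau_max * math.log(t) / total_filter_prob)
    return mean_hat + radius
```

The indices for all KU arms would then be passed to the full-information optimisation to pick the optimistic action, exactly as in the case (I) algorithm.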
Since our CMAB model in case (II) and the second variant of FP-CUCB are of the same form as in case (I), results analogous to Theorems 1 and 2 can be derived. Specifically, we have a regret upper bound for FP-CUCB in Corollary 1 and a lower bound on the regret of any uniformly good algorithm in Corollary 2.
Corollary 1 The regret of the FP-CUCB algorithm in case (II), defined by τ_max, applied to the sequential search problem as defined previously satisfies the following bound, where φ_{ku,min} = min_{a : a_k = u} φ_u(a).
Corollary 2 For any τ ∈ R_+^{KU} such that B(τ) ≠ ∅, and for any uniformly good policy π for the sequential surveillance problem with known φ, we have the following bound, where c(τ) is the solution of an optimisation problem analogous to (13).
Precise specification of c(τ) requires redefining notation from Section 4.1.2 in the context of case (II) and produces an entirely unsurprising analogue; in the interests of brevity we omit this. The techniques used in proving Theorems 1 and 2 extend straightforwardly to prove Corollaries 1 and 2.

Numerical Experiments
We now numerically evaluate the performance of the FP-CUCB algorithm in comparison to a greedy approach and Thompson Sampling (TS). The greedy approach always selects the action currently believed to be best (following an initialisation period in which each cell is searched at least once). As such, it is a fully exploitative policy which fails to recognise the benefit of the information gained through exploration. TS is a randomised, Bayesian approach in which an action is selected with the current posterior probability that it is the best one. This is achieved by sampling indices from a posterior distribution on each arm and passing these samples to the optimisation algorithm. We define these algorithms in the setting of known detection probabilities (case (I)) in Algorithms 3 and 4 respectively.

Algorithm 3 Greedy
Initialisation Phase: For t ∈ [K] • Select an arbitrary allocation a ∈ A such that a_t ≠ 0. Iterative Phase: • Select an allocation a*_{λ_t} such that r_{λ_t,γ}(a*_{λ_t}) = max_{a∈A} r_{λ_t,γ}(a), where λ_t is the current vector of mean estimates.
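The contrast between the two competitor policies can be sketched as follows. This is our own illustrative Python, assuming a Gamma conjugate prior on each rate for TS (consistent with the Gamma prior discussed later in the paper) and a black-box `oracle` standing in for the full-information optimisation; these names are our own:

```python
import random

def greedy_rates(obs_sums, det_sums):
    """Greedy: plug in the current mean estimates (filtered counts
    divided by cumulative detection probability in each cell)."""
    return [y / d for y, d in zip(obs_sums, det_sums)]

def thompson_rates(alphas, betas, rng):
    """TS: sample a rate for each cell from its Gamma(alpha, beta)
    posterior (random.gammavariate takes shape and scale = 1/beta)."""
    return [rng.gammavariate(a, 1.0 / b) for a, b in zip(alphas, betas)]

def select_action(rates, oracle):
    """Both policies pass their rate vector to the same optimisation
    oracle, which returns arg max_a r_{rates,gamma}(a)."""
    return oracle(rates)
```

The only difference between the two policies is the rate vector handed to the oracle: a point estimate for greedy, and a posterior sample for TS, which is what injects exploration.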
We compare the FP-CUCB, Greedy and TS algorithms by randomly sampling λ and ω values which define problem instances. We then test the algorithms' performance on data generated from the models of these problem instances. We assume that detection probabilities have the form given in (17) and that both the φ functions and the ω values are known.
Specifically, we conduct four tests encompassing a range of problem sizes and parameter values, to display the efficacy of our proposed approach uniformly across problem instances. In each test, 50 (λ, ω) pairs are sampled and functions φ are selected. For each (λ, ω) pair, 5 datasets are sampled, giving underlying counts of intrusion events in each cell in each round up to a horizon of n = 2000. Parameters are simulated as below. We test a variety of parametrisations of FP-CUCB (in terms of λ_max) and of TS (in terms of the prior mean and variance, from which particular α and β values can be uniquely determined) in each test. In each case we use λ_max values which are both larger and smaller than the true maximal rate. Similarly, we investigate TS with prior mean larger and smaller than the true maximal rate, and with several different levels of variance. It is not always realistic to assume that knowledge of λ_max will be perfect, and it is therefore of interest to investigate the effects of varying it. Equally, the choice of prior parameters in TS is a potentially subjective one, and it is important to understand its impact.
We measure the performance of the algorithms by calculating the expected regret incurred by their actions, rescaled by the expected reward of a single optimal action. For an algorithm A and a particular history H_n we write the scaled regret accordingly. We calculate this value for all algorithms, all 250 datasets, and all rounds 1 ≤ n ≤ 2000. We rescale regret to make a fairer comparison across the 50 different problem instances in each of tests (i)-(iv), which all have different optimal expected rewards.
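The rescaled measure described above amounts to the following computation. This is an illustrative Python sketch with our own names, where `opt_reward` is the expected reward opt_{λ,γ} of a single optimal action and `round_rewards` are the expected rewards of the actions the algorithm actually chose:

```python
def scaled_regret(round_rewards, opt_reward, n):
    """Cumulative expected regret over the first n rounds, rescaled by
    the expected reward of one optimal action, so that runs on problem
    instances with different optimal rewards are directly comparable."""
    regret = sum(opt_reward - r for r in round_rewards[:n])
    return regret / opt_reward
```

Under this rescaling, a value of 1.0 means the algorithm has forfeited the equivalent of one optimal round's reward in total.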
In Figure 1 we illustrate how regret evolves over time by plotting the median scaled regret across the 250 runs of each algorithm in all rounds of test (i). The rate of growth shown in these plots is typical of the results in the other three tests. An immediate observation is that the greedy algorithm performs very poorly on average; indeed, its full median regret over the 2000 rounds cannot be included in the graphs without obscuring the differences between the other algorithms. We see also that the performance of both FP-CUCB and TS is strongly linked to the chosen parameters. For the FP-CUCB algorithm, it appears in Figure 1 that the larger the parameter λ_max, the larger the cumulative regret. For TS, larger prior variances seem to induce lower regret; the relationship with the prior mean is more complex. Accurate specification of the prior mean seems to ensure good performance, but both underestimation and overestimation of the mean can lead to poor performance (particularly when the variance is small).
We analyse these behaviours further in Figures 2 and 3. Here we calculate the scaled regret at time n = 2000 for all 250 runs of each algorithm and plot the empirical distribution of these values for each parametrisation of each algorithm. The results for tests (i) and (ii) are given in Figure 2 and those for tests (iii) and (iv) in Figure 3. We omit the greedy algorithm's performance from these figures as its values are so large. In Appendix E we provide median values and lower and upper quantiles of the scaled regret for each algorithm. We see from these values that the greedy algorithm performs substantially worse than the FP-CUCB and TS algorithms, which better address the exploration-exploitation dilemma.
Examining Figures 2 and 3, we see that the FP-CUCB algorithm enjoys greater robustness to parameter choice than the TS approach. In particular, in the results of test (iii) we see that many parametrisations of TS give rise to a long-tailed distribution of round-2000 regret, meaning that the performance of TS is highly variable and often poor. This variability of performance does seem to coincide with underestimation of the mean; FP-CUCB, by contrast, maintains strong performance even when the λ_max parameter is far from the true maximal rate. When the prior variance is sufficiently large and the prior mean is close to the true λ_max, TS does the best job of balancing exploration and exploitation and incurs the smallest regret.

Discussion
In this paper we have considered the problem of adaptively assigning multiple searchers to cells along a line (in space or time) in order to detect the maximum number of events occurring along the line. The problem is real, and has important applications in ecology, security, defence and other areas. We have modelled the problem, and proposed and analysed solution methods. The challenge at the heart of this problem is to correctly balance exploration and exploitation in the face of initial ignorance of the arrival process of events. We formulated our sequential decision problem as a combinatorial multi-armed bandit with Poisson rewards and a novel filtered feedback mechanism. To design quality policies for this problem we first derived an efficient solution method for the full information problem. This IP forms the backbone of all policies for the sequential problem, as it allows us to quickly identify an optimal solution given some estimate of the arrival process's rate parameters.
We considered the sequential problem in two informational scenarios: firstly where the probability of detecting events is known, and secondly where these probabilities are unknown but one knows how they scale as the number of cells searched increases. For both of these cases we proposed an upper confidence bound approach. We derived lower bounds on the regret of all uniformly good algorithms under our new feedback mechanism, and upper bounds on the regret of our proposed approach.
In addition to the advantage of theoretical guarantees, the FP-CUCB algorithm is somewhat more reliable than TS. It is clear from the results of Section 5 that TS outperforms FP-CUCB for certain parametrisations (commonly larger choices of variance and a mean close to the true arrival rates). However, we see that TS is particularly vulnerable to poor performance when the mean of the prior underestimates the true rate parameters. Even though our theoretical results for FP-CUCB depend on λ_max ≥ λ_k, k ∈ [K], we see that it is robust to underestimating this parameter. The reason FP-CUCB still performs well even when a key assumption does not hold is likely that de la Peña's inequality does not give the tightest possible bound on Poisson tail probabilities (and therefore on the rate of concentration of the mean). However, in order to construct the algorithm we required a symmetric tail bound for which an inflation term giving the type of concentration in Lemma 1 could be identified. Other bounds may be tighter but lack these properties.
The variability of TS most likely arises from the potential for the Gamma conjugate prior to be dominated by a small number of observations, creating a scenario in which TS behaves similarly to a greedy policy: sometimes fixing on good actions, but sometimes on poor ones. This phenomenon of variability of regret is understudied in multi-armed bandits, not least because it is much more challenging to analyse theoretically. However, in practical scenarios (where, of course, the learning and regret minimisation process will only occur once) this is a real risk of TS. We note that both algorithms comfortably outperform the greedy algorithm in almost all examples, which speaks to the benefit of making some attempt to balance exploration and exploitation.
An alternative treatment of bandit decision making is the non-stochastic or adversarial bandit (Auer et al. 1995). Under such a model, the assumption that rewards are drawn i.i.d. from a fixed distribution is dropped, and rewards may instead form any arbitrary sequence. Adversarial bandits necessitate a randomised strategy to guarantee good performance across any chosen reward sequence. Such methods have been developed in the MAB and CMAB settings (Auer et al. 1995, Cesa-Bianchi and Lugosi 2012). As further work, the problem could be studied under a non-stochastic, or even a fully game-theoretic, framework, relaxing some of our assumptions. This would, however, require a markedly different set of algorithmic and analytical tools. Within application domains, variants of the problem exist all along the spectrum from purely stochastic to fully game-theoretic. Our work has considered the stochastic setting in detail and, in doing so, has provided a solution to many real-world problems.
Proof of Lemma 3: Firstly, we write ν_k as a recurrence relationship. We prove this lemma via an induction argument, which proceeds as follows. For µ_k we have the initial values µ_2 = λ, µ_3 = λ, µ_4 = 4λ², and for ν_k we have the corresponding initial values. Clearly, these initial values satisfy ν_k ≥ µ_k. Now assume that for some p > 5 we have ν_p ≥ µ_p and ν_{p−1} ≥ µ_{p−1}, and consider µ_{p+1}, completing the proof by induction. The first and third inequalities are due to the assumed relationships for p and p − 1; the second inequality is a consequence of Lemma 2 and the differentiation of a polynomial. The martingale difference sequence W_j satisfies the conditions of de la Peña's inequality with c = max(1, √λ). We have that Σ_{j=1}^s E(W_j² | ·) ≤ Σ_{j=1}^s γ_j λ with probability 1, so we may use the simplified result. For the estimates λ̂_{k,t} we have the following properties.
If the event N_t, defined over all s with g_{s,t} > 0, holds at time t, the following is implied, where g*_λ is an action that is optimal with respect to the rate vector λ. However, by definition 2KΛ_{k,l,t} ≥ ∆_{k,l}, and therefore (19) contradicts the definition of ∆_{k,l} = opt_{λ,γ} − g^l_{k,B} • λ. The bound on P(¬N_t) comes from applying Lemma 1, and this is sufficient to prove Proposition 1.

Proof of Proposition 2
Now consider the number of plays made prior to reaching the sufficient sampling level. Firstly, set h_{k,n}(∆_{k,0}) = 0 to simplify notation, and consider the following steps. Then, for any cell k, the bound follows since N_k can only be incremented a maximum of h_{k,n}(∆_{k,j}) − h_{k,n}(∆_{k,j−1}) times while remaining in this range. The last inequality holds since the h_{k,n}(x) are decreasing functions.

D Theorem 2 Proof: Lower bound on regret
To prove Theorem 2, we must define the additional quantities necessary to apply Theorem 1 of Graves and Lai (1997) and frame the problem accordingly. We consider the reward history (Y_t)_{t=1}^n to be a realisation of a controlled Markov chain moving on the state space N^K, where the controls are the detection probability vectors selected in each round. Each control g ∈ G then has an associated set of λ parameter vectors under which it is an optimal control, Λ_g = {λ ∈ R_+^K : g^T • λ = opt_{λ,γ}}, which may be the empty set. For any states y, z ∈ N^K, transition probabilities are straightforward Poisson probabilities due to independence across rounds: p(y, z; λ, g) = p(z; λ, g). These transition probabilities define the Kullback-Leibler information number I_g(λ, θ) for any control g ∈ G. With these quantities, and those defined in Section 4.1.2, we can apply Theorem 1 of Graves and Lai (1997).

Figure 1 :
Figure 1: Cumulative regret histories for test (i). Upper left: FP-CUCB; upper right: TS with a prior variance of 1; lower left: TS with a prior variance of 5; lower right: TS with a prior variance of 10. In each case the plotted lines are the median values of scaled regret calculated at each time point from 1 to 2000. Black lines represent λ_max = 1 or a prior mean of 1; red represents the same parameters taking the value 5; green 10; blue 20; grey 40; and pink 60. In all sub-figures the teal line represents the regret of the greedy algorithm.

Figure 2 :
Figure 2: Scaled regret distributions in tests (i) and (ii). In both tests the true largest rate is 20.

Figure 3 :
Figure 3: Scaled regret distributions in tests (iii) and (iv). In test (iii) the true largest rate is 100, and in test (iv) the true largest rate is 1.

Table 2 :
Quantiles of scaled regret at horizon n = 2000 for algorithms applied to Test (ii) data

Table 3 :
Quantiles of scaled regret at horizon n = 2000 for algorithms applied to Test (iii) data

Table 4 :
Quantiles of scaled regret at horizon n = 2000 for algorithms applied to Test (iv) data