Novel Sampling Design for Respondent-driven Sampling

Respondent-driven sampling (RDS) is a method of chain-referral sampling popular for sampling hidden and/or marginalized populations. However, even under ideal sampling assumptions, the performance of RDS is restricted by the underlying social network: if the network is divided into communities that are weakly connected to each other, then RDS is likely to oversample one of these communities. In order to diminish the "referral bottlenecks" between communities, we propose anti-cluster RDS (AC-RDS), an adjustment to the standard RDS implementation. Using a standard model in the RDS literature, namely, a Markov process on the social network that is indexed by a tree, we construct and study the Markov transition matrix for AC-RDS. We show that if the underlying network is generated from the Stochastic Blockmodel with equal block sizes, then the transition matrix for AC-RDS has a larger spectral gap and consequently faster mixing properties than the standard random walk model for RDS. In addition, we show that AC-RDS reduces the covariance of the samples in the referral tree compared to standard RDS and consequently leads to a smaller variance and design effect. We confirm the effectiveness of the new design using both the Add-Health networks and simulated networks.


Introduction
Several public policy and public health programs depend on estimating characteristics of hard-to-reach or hidden populations (e.g. HIV prevalence among people who inject drugs). These hard-to-reach populations cannot be sampled with standard techniques because there is no way to construct a sampling frame. [18,19] proposed respondent-driven sampling (RDS) as a variant of chain-referral methods, similar to snowball sampling [16,17], for collecting and analyzing data from hard-to-reach populations. Since then, RDS has been employed in over 460 studies spanning more than 69 countries [25,44].
RDS encompasses a collection of methods to both sample a population and infer population characteristics [33], referred to as RDS sampling and RDS inference, respectively. RDS sampling starts with a few "seed" participants chosen by a convenience sample of the target population. Then, the initial participants are given a few coupons to refer the second wave of respondents, the second wave refers the third wave, and so on. The participants receive a dual incentive to (i) take part in the study and (ii) successfully refer participants. The dual incentive, the limited number of coupons, and sampling without replacement are defining features of RDS. Referrals tend to travel along the connected and closed triplets (triangles) that participants form through their social ties; the prevalence of such triplets provides some measure of the clustering present in RDS samples.
To diminish referral bottlenecks, this paper proposes an adjustment to the current RDS implementation. Instead of asking participants to refer anyone from the target population, this paper proposes two basic types of "anti-cluster referral requests," which are described in Figure 1. These referral requests diminish referral bottlenecks by producing triples of participants that do not form a triangle (closed triplet) in the social network. The figure contains two types of such requests. In fact, as described in Section 3.3, we propose a procedure that probabilistically alternates between the two requests.
As compared to alternative methods, anti-cluster requests are more successful in diminishing referral bottlenecks for three reasons. First, this approach preserves privacy by refraining from asking participants to list their friends in the population. Second, anti-cluster requests do not require a priori knowledge about the nature of the bottleneck. For example, the most salient bottleneck could form on race, gender, neighborhood, or something else. If researchers knew which of these was most restricting the sampling process, then perhaps specific requests could be formed. However, in many populations, the bottlenecks are not known in advance. The final advantage is that the proposed adjustment is mathematically tractable; under certain assumptions, anti-cluster requests can form a reversible Markov chain.
We propose a novel variant of RDS, then study its theoretical properties under a statistical model. This work provides theoretical motivation to further develop and study novel referral requests. Additional work is needed before this variant should be employed in the field; this is discussed further in Section 6.
The remainder of the paper is organized as follows. Section 2 describes Designed RDS and presents our proposed design, anti-cluster RDS (AC-RDS). Section 3 sets the notation and provides the mathematical preliminaries. Section 4 gives our theoretical results, distinguishing between "population graph" and "sample graph" results. Section 5 contains numerical experiments which compare the performance of AC-RDS with standard RDS. Section 6 discusses some gaps between the theory and the practice of novel referral requests. We summarize the paper and offer a discussion in Section 7. All of the proofs are provided in the Appendix.

Novel sampling designs
When preparing to sample a target population with RDS, some aspects can be controlled by researchers (e.g. how many referral coupons to give each participant) and others cannot. In particular, the social network is beyond the control of researchers. Community structures are an intrinsic part of social networks [13] which, in RDS, lead to referral bottlenecks. To minimize these bottlenecks, RDS can be altered to make some referrals more or less likely. This is the essence of novel sampling designs for respondent-driven sampling.
As a thought experiment, suppose that the population of interest is divided into two communities, EAST and WEST. Furthermore, assume that people form most of their friendships within their own community. Under this simple model, referrals between communities are unlikely, creating a bottleneck. Now, suppose that these communities were known before performing the sample. The researchers could then request referrals from specific groups (e.g. flip a coin; if heads, request WEST and if tails, request EAST). This does not change the underlying social network, but it does change the probability of certain referrals. If participants followed this request, the referral bottleneck between EAST and WEST would be diminished. If 90% of a participant's friends belonged to the same community as the participant, then the standard approach would obtain a cross-community referral only 10% of the time. However, with the coin flip implementation, such a referral happens 50% of the time.

[29] propose an alternative technique, Network Sampling with Memory (NSM). In NSM sampling, researchers construct a sampling frame by asking RDS participants to nominate their friends in the target population. This list is combined with the friend lists from previous participants to form a sampling frame. In the "List" mode of the sampling process, the next individual to be recruited and interviewed is selected by sampling with replacement from the list of nominated members. In the "Search" mode, to improve the mixing properties of the sampling process, individuals who appear to be "bridge nodes" to the unexplored parts of the network are identified. Then, a node that has received only one nomination is selected at random from among the friends of these bridge nodes for the next interview. In computational experiments, [29] report that this novel approach decreases the design effect, the ratio of the sampling variance to the variance under simple random sampling.
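The arithmetic of the thought experiment can be checked with a short simulation. The sketch below is illustrative: the 9-to-1 friend list is a hypothetical example encoding the 90% homophily, and full compliance with the coin-flip request is assumed.

```python
import random

random.seed(0)

# Hypothetical friend list for one EAST participant: 90% of friends are
# in EAST, matching the homophily level in the thought experiment.
friends = ["EAST"] * 9 + ["WEST"]

def standard_referral():
    """Standard RDS: refer a uniformly chosen friend."""
    return random.choice(friends)

def coin_flip_referral():
    """Designed RDS: flip a coin, then refer uniformly from the requested community."""
    target = random.choice(["EAST", "WEST"])
    pool = [f for f in friends if f == target]
    return random.choice(pool)

n = 100_000
cross_standard = sum(standard_referral() == "WEST" for _ in range(n)) / n
cross_coin = sum(coin_flip_referral() == "WEST" for _ in range(n)) / n
print(round(cross_standard, 2), round(cross_coin, 2))  # roughly 0.1 and 0.5
```

As in the text, the cross-community referral rate rises from roughly 10% to roughly 50% once the coin-flip request reweights the referral probabilities.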
These two extensions of RDS (i.e. flipping a coin and NSM) are both forms of Designed RDS; through novel implementations of the sampling process they adjust the probability of certain referrals, thereby diminishing the referral bottlenecks. Unfortunately, the coin flipping example requires prior information about the social network, which may be unattainable given the hidden nature of the target population. The NSM approach requires respondents to reveal partial name and demographic information about their friends. Moreover, it asks respondents to refer (recruit) selected individuals from the list of nominees. When practically implemented in a hidden population, however, it is not clear whether respondents will be willing to provide the requested information or refer the selected individual from their list of nominees. Furthermore, the referral process may be based more heavily on participants' interactions with members of the target population following the survey than on any plan they make ahead of time.
Anti-cluster RDS is a type of Designed RDS that complements and builds upon both of these approaches. The implementation of anti-cluster RDS does not require a priori information on the communities in the social network, nor does it require that participants reveal sensitive information about individuals who have not consented. Anti-cluster sampling is designed to place larger referral probabilities on edges belonging to fewer triangles. There are at least two ways to consider why this strategy circumvents bottlenecks.
1. Many empirical networks share three properties. First, the number of edges is proportional to the number of nodes (i.e. the network is globally sparse). Second, friends of friends are likely to be friends (i.e. the network is locally dense). Third, shortest path lengths are small (i.e. the network has a small diameter); this is also known as the small-world phenomenon. [40] shows how a network can satisfy all three properties: take a deterministic graph that satisfies the first two features (e.g. a triangular tessellation), then select a few edges at random and randomly re-wire each to a randomly chosen node. Notice that these "random edges" are unlikely to be contained in a triangle. So, edges that are not part of triangles are more likely to lead to quicker traversal of the network. Anti-cluster RDS makes referrals along such edges more probable, and therefore potentially mixes faster and collects more representative samples from the target population.

2. The Markov chain has been a popular model for studying theoretical properties of RDS. Under the with-replacement sampling formulation of this model, people make referrals by selecting uniformly from their set of friends. A similar assumption could be made about anti-cluster referrals; the referral is drawn uniformly from the set of referrals that satisfy the anti-cluster request. If the Markov transition matrix for anti-cluster sampling can be shown to have a larger spectral gap than the Markov transition matrix for the simple random walk, then this suggests that anti-cluster sampling will obtain a more representative sample.
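The first point can be illustrated with a small sketch. In a ring lattice (locally dense), every lattice edge sits in triangles, while a randomly re-wired shortcut edge typically sits in none. The construction below is illustrative, not from the paper.

```python
# A ring lattice on 20 nodes, each node linked to its two nearest
# neighbors on each side; consecutive triples form triangles.
n = 20
adj = {u: set() for u in range(n)}

def add_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)

for u in range(n):
    for d in (1, 2):              # lattice edges within distance 2
        add_edge(u, (u + d) % n)

add_edge(0, 10)                   # one long-range "re-wired" shortcut edge

def triangles_on_edge(u, v):
    """Triangles containing edge (u, v) = common neighbors of u and v."""
    return len(adj[u] & adj[v])

print(triangles_on_edge(0, 1))    # lattice edge: contained in triangles
print(triangles_on_edge(0, 10))   # shortcut edge: contained in none
```

An anti-cluster request up-weights exactly the second kind of edge, which is the kind that shortens paths across the network.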
In this paper, we pursue the second approach.

Framework
This paper models the referral process as a Markov chain indexed by a tree [4]. A Markov chain indexed by a tree is a variant of branching Markov chains in which a fixed deterministic tree indicates branching. This model is a straightforward combination of the Markov models developed in the previous literature on RDS [18,34,38,14]. The model is built from four mathematical pieces: an underlying social graph, a node feature which is measured on each sampled node (e.g. HIV status), a Markov transition matrix on this graph, and a referral tree to index the Markov process. Figure 2 gives a graphical depiction of this process.

The social network. Denote the underlying social network by an undirected graph G = (V, E), where V = {1, . . . , N} is the set of individuals in the target population and E = {(u, v) : u and v are friends} is the set of social ties. Define the adjacency matrix A by A(u, v) = 1 if (u, v) ∈ E and A(u, v) = 0 otherwise, and define the node degree as deg(u) = Σ_v A(u, v).

Node features. After sampling an individual u ∈ V, we can measure their status y(u), where y : V → R is some node feature. For instance, y(u) could be a binary variable which is one if node u is HIV+ and zero otherwise. The aim of RDS is to estimate the population average of y over all nodes.

Markov chain. Let (X_i)_{i=0}^n be an irreducible Markov chain with the finite state space V of size N and transition matrix P ∈ R^{N×N}; for u, v ∈ V and for all i ∈ {0, . . . , n − 1},

    P(u, v) = Pr(X_{i+1} = v | X_i = u).

Define P_A as the Markov transition matrix of the simple random walk,

    P_A(u, v) = A(u, v) / deg(u).

The standard Markov model for RDS assumes that X_i is a simple random walk.

Novel designs. Designed RDS is any technique that assigns differing weights to the edges. Define the mapping W : E → R_+ as a weighting function on the edges. Then, W can be expressed as a matrix, with W(u, v) = 0 whenever (u, v) ∉ E. Define the diagonal matrix T to contain the row sums of W, so that T(u, u) = Σ_v W(u, v). Through novel implementations, Designed RDS alters the edge weights.
After weighting the edges, the Markov transition matrix becomes

    P_W(u, v) = W(u, v) / T(u, u).   (2)

If Designed RDS increases an edge weight, it makes the edge more likely to be traversed.
We restrict the analysis to symmetric weighting matrices. Because of this restriction, P_W is reversible and has a stationary distribution π : V → R_+ that is easily computable,

    π(u) = T(u, u) / Σ_v T(v, v).

Throughout, it will be assumed that X_0 is initialized with π. A more thorough treatment of Markov chains and their stationary distributions can be found in [24].

Referral tree. In the Markov chain model, participant X_i refers participant X_{i+1}. This assumes that each participant refers exactly one individual. In practice, RDS participants usually refer between zero and three future participants. To allow for this heterogeneity, it is necessary to index the Markov process with a tree, not a chain. Let T denote a rooted tree with n nodes. See Figure 2 for a graphical depiction.
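These definitions can be sketched numerically. The weight matrix below is a toy example (hypothetical values); the code builds P_W, its stationary distribution, and checks the detailed-balance identity that makes the chain reversible.

```python
import numpy as np

# Hypothetical symmetric edge weights on a 4-node graph (zero = no edge).
W = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 2.],
              [0., 1., 2., 0.]])

T = W.sum(axis=1)            # diagonal of T: row sums of W
P = W / T[:, None]           # P_W(u, v) = W(u, v) / T(u, u)
pi = T / T.sum()             # stationary distribution: pi(u) proportional to T(u, u)

print(np.allclose(pi @ P, pi))                              # stationarity: pi P = pi
print(np.allclose(pi[:, None] * P, (pi[:, None] * P).T))    # detailed balance
```

Both checks print True: when W is symmetric, π(u) P_W(u, v) = W(u, v)/Σ_w T(w, w) is itself symmetric in (u, v), which is exactly reversibility.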
To simplify notation, σ ∈ T is used to represent σ belonging to the node set of T. For any node σ ∈ T with σ ≠ root(T), denote parent(σ) ∈ T as the parent node of σ. The Markov process indexed by T is a set of random variables {X_σ ∈ V : σ ∈ T} such that X_root(T) is initialized from π and

    Pr(X_σ = v | X_parent(σ) = u) = P(u, v).

The distribution of X_σ is completely determined by the state of X_parent(σ). [4] called this process a (T, P)-walk on G. In the social network G, an edge represents friendship. In the referral tree, a directed edge (τ, σ) represents that random individual X_τ ∈ V refers random individual X_σ ∈ V in the (T, P)-walk on G.
Statistical estimation. For any function on the nodes of the graph y : V → R, denote

    μ_{π,y} := E_π y := Σ_{u∈V} y(u) π(u)   and   μ_y := E y := (1/N) Σ_{u∈V} y(u),

where N := |V| is the number of nodes in the social network. By assumption, X_0 ∼ π. So, X_τ ∼ π and the sample mean (1/n) Σ_{τ∈T} y(X_τ) consistently estimates μ_{π,y}, the population mean under stationarity. Thus, it is not a consistent estimator for the parameter of interest, namely the population mean μ_y. In order to estimate μ_y, one can use inverse probability weighting (IPW) with the stationary distribution. It can be shown that

    μ̂_IPW := (1/(nN)) Σ_{τ∈T} y(X_τ)/π(X_τ)   (3)

is an unbiased and consistent estimator of μ_y. Typically, N is unknown. The Hajek estimator circumvents this problem while remaining asymptotically unbiased,

    μ̂_y := [Σ_{τ∈T} y(X_τ)/π(X_τ)] / [Σ_{τ∈T} 1/π(X_τ)].   (4)

The typical "simple random walk" assumption in the RDS literature is that participants select uniformly from their contacts. This corresponds to T(u, u) = deg(u), making π(u) ∝ deg(u), which is something that can be asked of participants. Under these assumptions, (4) reduces to the RDS II estimator [20],

    μ̂_y = [Σ_{τ∈T} y(X_τ)/deg(X_τ)] / [Σ_{τ∈T} 1/deg(X_τ)].
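A quick sketch of this estimator under the simple-random-walk assumption, with hypothetical sample values: since π(u) ∝ deg(u), each observation is weighted by 1/deg and the unknown N cancels in the Hajek ratio.

```python
import numpy as np

# Hypothetical sample: feature values and self-reported degrees.
y   = np.array([1., 0., 1., 1., 0.])
deg = np.array([2., 4., 4., 8., 2.])

# Hajek / RDS II estimate: weight each observation by 1/deg.
mu_hat = np.sum(y / deg) / np.sum(1.0 / deg)
print(mu_hat)  # 0.538..., noticeably below the unweighted mean of 0.6
```

The high-degree participant with y = 1 is down-weighted because the random walk oversamples high-degree nodes; the naive sample mean (0.6) would overstate the population proportion.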

The variance of RDS
Many empirical social networks display community structures [13]. This can lead to referral bottlenecks in the Markov chain. These bottlenecks exist because respondents are likely to refer people within their own community who have similar characteristics. This section specifies how bottlenecks make successive samples dependent, increasing the variance of μ̂_y and the design effect of RDS. The spectral properties of the Markov transition matrix reveal the strength of these bottlenecks and control the variance of estimators like μ̂_IPW. These results motivate the main results of this paper, which show that anti-cluster sampling improves the relevant spectral properties of the Markov transition matrix under a certain class of Stochastic Blockmodels. As a result, anti-cluster sampling can decrease the variance of estimators like μ̂_IPW. Let λ_2(P_A) be the second largest eigenvalue of the Markov transition matrix for the simple random walk. The Cheeger bound demonstrates that the spectral properties of P_A can measure the strength of these communities; see [7] (Chapter 2) and [24] (p. 215) for more details. This relationship between communities in G and the spectral properties of P_A is exploited in the literature on spectral clustering. In that literature, G is observed and the spectral clustering algorithm uses the leading eigenvectors of P_A to partition V into communities [39].
Intuitively, if there are strong communities in G and the node features y are relatively homogeneous within communities, then successive samples X_i and X_{i+t} will likely belong to the same community and have similar values y(X_i) and y(X_{i+t}). This makes the samples highly dependent; the auto-covariance Cov(y(X_i), y(X_{i+t})) will decay slowly as a function of t. The next proposition decomposes the auto-covariance in the eigenbasis of the Markov transition matrix, showing that the auto-covariance decays like λ_2^t. The result applies to any reversible Markov chain with |λ_2| < 1; in particular, it applies to both P_A (RDS) and P_W̃ (AC-RDS). For a reversible Markov chain, the assumption |λ_2| < 1 is equivalent to assuming that the chain is irreducible and aperiodic.
Proposition 1. Let (X_i)_{i=0}^n be a Markov chain with reversible transition matrix P. Suppose that X_0 is initialized with π, the stationary distribution of P. For j = 1, 2, . . . , N, let (f_j, λ_j) be the eigenpairs of P, ordered so that |λ_j| ≥ |λ_{j+1}|. Because P is reversible, the f_j and λ_j are real valued and the f_j are orthonormal with respect to the inner product ⟨f, f_j⟩_π = Σ_{i∈V} f(i) f_j(i) π(i). If |λ_2| < 1, then

    Cov(y(X_i), y(X_{i+t})) = Σ_{j=2}^N λ_j^t ⟨y, f_j⟩_π².

In previous research, [3] and [36] used a similar expression to compute the variance.
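The eigenbasis decomposition of the auto-covariance can be verified numerically for a small reversible chain. The sketch below (toy weight matrix, not from the paper) computes a π-orthonormal eigenbasis through the symmetrized matrix D_π^{1/2} P D_π^{-1/2} and compares the spectral expression with the directly computed auto-covariance.

```python
import numpy as np

# Reversible chain from a toy symmetric weight matrix.
W = np.array([[0., 4., 1.],
              [4., 0., 2.],
              [1., 2., 0.]])
T = W.sum(axis=1)
P = W / T[:, None]
pi = T / T.sum()

# pi-orthonormal eigenbasis via the symmetric matrix S = D^{1/2} P D^{-1/2}.
S = np.sqrt(pi)[:, None] * P / np.sqrt(pi)[None, :]
lam, Q = np.linalg.eigh(S)
order = np.argsort(-np.abs(lam))          # sort so |lam_1| >= |lam_2| >= ...
lam, Q = lam[order], Q[:, order]
F = Q / np.sqrt(pi)[:, None]              # f_j(u) = Q[u, j] / sqrt(pi(u))

y = np.array([1., 0., 2.])
coef = (y * pi) @ F                       # inner products <y, f_j>_pi

for t in (1, 2, 3):
    Pt = np.linalg.matrix_power(P, t)
    direct = (pi * y) @ Pt @ y - (pi @ y) ** 2    # Cov(y(X_0), y(X_t))
    spectral = np.sum(lam[1:] ** t * coef[1:] ** 2)
    print(np.allclose(direct, spectral))          # True for each t
```

The j = 1 term drops out because f_1 is constant, so ⟨y, f_1⟩_π² equals the squared stationary mean, which is exactly the subtracted centering term.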

Anti-cluster random walk; constructing the weights W̃
This subsection describes a Markov model for AC-RDS. Section 4 then studies the spectral properties of the resulting AC-RDS Markov transition matrix. To describe the model we need the following notation. Let · denote element-wise matrix multiplication and let J_{K×K} denote a K × K matrix containing all ones. Finally, define the overbar operator for a K × K matrix B as B̄ := J_{K×K} − B.

This model creates a Markov transition matrix which can be expressed with matrix notation. Under the model, if i has one coupon, then the probability that i refers j is proportional to the (i, j)th element of the matrix (AĀ) · A. To see this, note that the (i, j)th element of AĀ is the number of nodes that are friends with i but not friends with j, that is,

    (AĀ)(i, j) = Σ_k A(i, k)(1 − A(k, j)).

Then, the element-wise multiplication ensures that i is friends with j, yielding the weight matrix (AĀ) · A. Note that the weight matrix (AĀ) · A is not symmetric and, thus, does not lead to a reversible Markov chain. However, we can use a second referral request to augment the first request and ensure reversibility. To this end, model the referral request "Please refer someone that knows many people that you do not know" as follows: if i is friends with j, then the probability that i refers j is proportional to the number of people that j knows that i does not know. In a similar fashion as above, this request produces the weight matrix (ĀA) · A.
To implement AC-RDS, choose between (AĀ) · A and (ĀA) · A with equal probability by flipping a coin. Consider the matrix W̃ given by

    W̃ := ½ [(AĀ) · A + (ĀA) · A].   (5)

The (i, j)th element of W̃ is proportional to the probability that i refers j in the process described above. By design, W̃ is symmetric, making P_W̃ a reversible Markov transition matrix.
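As a concrete sketch (toy adjacency matrix, not from the paper), the weight matrix W̃ takes a few lines of numpy. The symmetry claim holds because ((AĀ) · A)^T = (ĀA) · A whenever A is symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy symmetric adjacency matrix with zero diagonal.
n = 30
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T

Abar = 1.0 - A                          # complement of A (the overbar operator)

# (A @ Abar)[i, j] counts friends of i that are not friends of j; the
# elementwise product with A keeps weight only on actual edges (i, j).
W_type_a = (A @ Abar) * A               # "refer someone who doesn't know many of your contacts"
W_type_b = (Abar @ A) * A               # "refer someone who knows many people you don't know"
W_tilde = 0.5 * (W_type_a + W_type_b)   # coin flip between the two request types

print(np.allclose(W_tilde, W_tilde.T))  # True: W_type_a.T equals W_type_b
print(np.all(W_tilde[A == 0] == 0))     # True: weight only on existing edges
```

Neither one-sided matrix is symmetric on its own, but averaging the two request types produces a symmetric W̃ and hence a reversible chain.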
These ideas connecting the implementation instructions for AC-RDS with the Markov model are summarized in Table 1. The next section studies the spectral properties of P_W̃ under a statistical model for G.

Table 1. Implementation instructions compared to the Markov model.

Flip a coin.

If heads (type A):
  Implementation instructions: Ask "please refer contacts in the target population who don't know many of your contacts."
  Markov model: For the current participant i, list all pairs of nodes (j, k) such that (i, j) ∈ E, (i, k) ∈ E, and (j, k) ∉ E. From this list, uniformly choose a pair (j, k). Refer j.

If tails (type B):
  Implementation instructions: Ask "please refer contacts in the target population who have many contacts who don't know you."
  Markov model: For the current participant i, list all pairs of nodes (j, k) such that (i, j) ∈ E, (j, k) ∈ E, and (i, k) ∉ E. From this list, uniformly choose a pair (j, k). Refer j.

Theoretical results
To study the spectral properties of P_W̃ under a statistical model for the underlying social network, we break the analysis into "population results" and "sampling results." The "population results" in this section correspond to using the expected adjacency matrix 𝒜 = EA, where the expectation is with respect to the statistical model for generating the network. The expected adjacency matrix is a deterministic matrix, and various combinatorial techniques can be used to show its properties. Define

    𝒲 := ½ [(𝒜(J − 𝒜)) · 𝒜 + ((J − 𝒜)𝒜) · 𝒜],   (6)

the population analogue of W̃, and define the Markov transition matrices P_𝒲 and P_𝒜 as in (2). In these definitions, P_𝒜 corresponds to the population matrix for the simple random walk (RDS) and P_𝒲 corresponds to the population matrix for AC-RDS.

The "sampling" referred to in this section introduces an additional layer of randomness to generate the underlying social network G. The goal of the "sample results" is to show that the random graph generated by the model has properties similar to those of the expected graph; that is, the randomness of the graph does not move it far from the expected graph. To refer to the randomness of the Markov chain, this section will refer to "anti-cluster sampling," "Markov sampling," or "respondent-driven sampling."

The population results will show that under various statistical models for the underlying social network, the second eigenvalue of P_𝒲 is less than the second eigenvalue of P_𝒜. To extend these population results to a network which is sampled from the model, the sampling results use concentration of measure to show that A and W̃ are close (under the operator norm) to 𝒜 and 𝒲, respectively. Then, perturbation theorems show that the eigenvalues of P_A and P_W̃ are close to the eigenvalues of P_𝒜 and P_𝒲, respectively. Theorem 2 combines these results with Proposition 1 to show that AC-RDS reduces the covariance between Markov samples.

Population graph results
Anti-cluster sampling is motivated by the need to readily escape communities in a social network. The Stochastic Blockmodel (SBM) is a standard and popular model that parameterizes communities in the social network [22]. For this reason, the analyses below use the SBM to study anti-cluster sampling.
The results below condition on the partition z. Conditional on this partition, E[A|z] has a convenient block structure. Define the partition matrix Z ∈ {0, 1}^{N×K} by Z(u, k) = 1 if z(u) = k and Z(u, k) = 0 otherwise. Define the population weighting matrix 𝒲 as in (6). The following lemma shows that 𝒲 retains the block structure of 𝒜.
The following lemma shows that under a certain class of Stochastic Blockmodels, anti-cluster sampling decreases the probability of an in-block referral.
Lemma 2. Under the SBM with B = p I_{K×K} + r J_{K×K}, if 0 < r < p + r < 1 and Θ_ll r < Θ_kk (p + r) for all k ≠ l, then for any two nodes u and v with z(u) = z(v),

    P_𝒲(u, v) < P_𝒜(u, v).

Note that if every block has an equal population, then the first assumption, 0 < r < p + r < 1, implies the second assumption, Θ_ll r < Θ_kk (p + r). The next proposition uses Lemma 2 to show that anti-cluster sampling reduces the second eigenvalue of the population Markov transition matrix.

Proposition 2 (Spectral gap of the population graph). Under the SBM with B = p I_{K×K} + r J_{K×K}, 0 < r < p + r < 1, and K equally sized blocks,

    λ_2(P_𝒲) ≤ λ_2(P_𝒜) / c,   (8)

where c > 0 depends on K, p, and r, but is independent of N, the number of nodes in the graph. Specifically, λ_2(P_𝒜) = 1/(R + 1), where R = Kr/p. In the asymptotic setting where K grows and r shrinks, while p and R stay fixed, c converges to a limit that depends only on p and R.

For any single node, note that R is roughly the expected number of out-of-block edges divided by the expected number of in-block edges. To see this, multiply the numerator and denominator of Kr/p by the block population N/K. As such, it is approximately the odds that a random walker will change blocks. When R is large, the Markov chain mixes quickly and λ_2(P_𝒜) is small to reflect that.
AC-RDS is most useful in social networks with tight communities, where the walk is slow to mix; this corresponds to a larger value of p and a smaller value of R. In this setting, c in (8) is large, thus making λ_2(P_𝒲) much smaller than λ_2(P_𝒜). In particular, if p is close to one, then c ≈ 1 + R^{-1}, which becomes very large for small values of R. Notice that the second part of Proposition 2 makes no assumption on N, the number of nodes in the network.
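The identity λ_2(P_𝒜) = 1/(R + 1) can be checked numerically. The sketch below builds the expected adjacency matrix of a balanced SBM with illustrative parameters; keeping the expected self-loop term is a minor modeling choice that makes the identity exact at finite N.

```python
import numpy as np

# Expected adjacency matrix of an SBM with K equal blocks:
# in-block probability p + r, out-of-block probability r  (B = p*I + r*J).
K, m = 4, 25                      # K blocks of m nodes, N = K*m nodes total
p, r = 0.6, 0.05

z = np.repeat(np.arange(K), m)    # block label of each node
calA = np.where(z[:, None] == z[None, :], p + r, r)

P = calA / calA.sum(axis=1, keepdims=True)      # population random-walk matrix
lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]

R = K * r / p                     # odds of leaving the current block
print(lam[1], 1.0 / (R + 1.0))    # both are 0.75 here: lambda_2 = 1/(R+1)
```

The nontrivial spectrum consists of 1, then 1/(R + 1) with multiplicity K − 1, then zeros; the block-constant eigenvectors reduce the N × N walk to a K-state chain with in-block probability (p + r)/(p + Kr).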
The next proposition shows that anti-cluster sampling continues to perform well even when the community structure is exceedingly strong and standard approaches fail to mix well. Here, the reduction of λ_2 from anti-cluster sampling is dramatic.

Proposition 3 (Extreme bottlenecks). Consider the SBM with K = 2 equally sized blocks, in-block edge probability 1 − ε, and out-of-block edge probability ε. Then, as ε → 0+, λ_2(P_𝒜) → 1 while λ_2(P_𝒲) → 1/3.

For any Markov transition matrix P, λ_2(P) ≤ 1. The graph is disconnected if and only if λ_2 = 1; this is the most extreme form of a bottleneck. In the above proposition, if ε = 0, then the sampled graph will contain two disconnected cliques, one for each block. Under this regime, both P_A and P_W̃ will have second eigenvalues equal to one. However, if ε converges to zero from above, then Proposition 3 shows that λ_2(P_𝒲) approaches 1/3, while λ_2(P_𝒜) approaches 1.
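The two limits can be illustrated by computing both second eigenvalues on a small population matrix. The parameterization below (two equal blocks, in-block probability 1 − ε, out-of-block probability ε) is an assumption chosen to match the surrounding discussion of near-disconnected cliques.

```python
import numpy as np

def second_eigenvalues(eps, m=50):
    """Population-level lambda_2 for RDS and AC-RDS on two blocks of m
    nodes: in-block probability 1 - eps, out-of-block probability eps."""
    z = np.repeat([0, 1], m)
    calA = np.where(z[:, None] == z[None, :], 1.0 - eps, eps)
    calAbar = 1.0 - calA
    # Population anti-cluster weights, mirroring the definition of W-tilde.
    calW = 0.5 * ((calA @ calAbar) * calA + (calAbar @ calA) * calA)

    def lam2(M):
        P = M / M.sum(axis=1, keepdims=True)
        return np.sort(np.abs(np.linalg.eigvals(P)))[::-1][1]

    return lam2(calA), lam2(calW)

l2_rds, l2_acrds = second_eigenvalues(eps=0.01)
print(l2_rds)     # close to 1: the simple random walk rarely escapes a block
print(l2_acrds)   # close to 1/3: the anti-cluster walk keeps a healthy spectral gap
```

Intuitively, almost every in-block edge sits inside many triangles, so the anti-cluster weights shift a constant fraction of referral probability onto the rare between-block edges even as ε shrinks.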
Propositions 2 and 3 assume balanced block sizes (i.e. an equal number of nodes). To study unbalanced cases, the necessary algebra quickly becomes uninterpretable. We explore the role of unbalanced block sizes with numerical experiments in Section 5.

Sample graph results
Theorem 1 gives conditions which ensure that the population eigenvalues, λ_ℓ(P_𝒲), are close to the sample eigenvalues, λ_ℓ(P_W̃). As such, the population results in the previous section appropriately represent the behavior of Markov sampling (both AC-RDS and RDS) on a network sampled from the Stochastic Blockmodel. Chung and Radcliffe [6] prove a similar result for |λ_ℓ(P_A) − λ_ℓ(P_𝒜)|.
Theorem 1 (Concentration of the anti-cluster random walk). Let G = (V, E) be a random graph with independent edges and 𝒜 = EA be the expected adjacency matrix. Suppose that F_min = ω(ln N) and that the constant c_1 bounds the ratios described in Remark 2, where c_2 is a constant, ‖·‖ denotes the operator norm, T is a diagonal matrix with the row sums of W̃ on its diagonal, and 𝒯 is defined in the same way with respect to 𝒲. Then, with probability at least 1 − ϵ, the eigenvalues of P_W̃ concentrate around the eigenvalues of P_𝒲.

Remark 1. The theorem uses standard asymptotic notation, which we recall here for convenience. We write f(n) = O(g(n)) to indicate that |f| is bounded above by g asymptotically, that is, lim sup_{n→∞} |f(n)|/g(n) < ∞. We write f(n) = ω(g(n)) to indicate that f dominates g asymptotically, that is, lim_{n→∞} |f(n)|/g(n) = ∞.

Remark 2. F_ij gives the number of friends of node i that are not in the friend list of node j. So F_min = ω(ln N) ensures that the number of individuals that a node can refer under AC-RDS grows at a rate faster than ln N. Roughly speaking, it is similar to the sparsity condition required for concentration results on random graphs with independent edges. Since A is a symmetric matrix, F_ij = G_ji and, consequently, F_min = G_min. The condition on c_1 ensures that the ratio D_i/(F_ij + G_ij) stays bounded. These sampling results are sufficiently general to apply to all of the models studied in the previous section.
Theorem 2 presents the asymptotic behavior of AC-RDS in reducing the correlation among samples collected from a random graph under a Stochastic Blockmodel. The theorem is an aggregation of all the previous results in the paper. The result is asymptotic in the size of the population, not in the size of the sample.
Theorem 2 (Dependency reduction property of AC-RDS). Let G be a random graph with N nodes sampled from a Stochastic Blockmodel with B = p I_{K×K} + r J_{K×K}, for 0 < r < p + r < c < 1. Further assume an equal number of nodes in each of the K blocks. Let (X_i)_{i=1}^n and (X_i^ac)_{i=1}^n be two Markov chains with transition matrices P_A and P_W̃, respectively. The parameters p, r, and K can change with N. If ln(N)/(pK + rN) → 0, then asymptotically almost surely, for all i, i + t ∈ {1, . . . , n} with t ≠ 0,

    Cov(y(X_i^ac), y(X_{i+t}^ac)) < Cov(y(X_i), y(X_{i+t})),

where y : V → R is any bounded node feature.

Numerical experiments
We conduct three sets of numerical experiments to compare the performance of AC-RDS with standard RDS. The first set investigates the impact of unequal block sizes on the results of Propositions 2 and 3. The second set investigates the impact of community structures and homophily using the Stochastic Blockmodel. In the third set, we consider an empirical social network with unknown community structure. Finally, we consider two relaxations of the Markov model to allow for more realistic settings: sampling without replacement and preferential recruitment.

The role of unequal block sizes
In this experiment, we numerically calculate the eigenvalues of P A and PW under varying SBM parameterizations with K = 2. Given θ and B in the definition of the SBM, we can use results from Rohe et al. [32] (see the proof of Lemma 3.1) to compute the K non-zero eigenvalues of the transition matrix.
Consider the setting of Propositions 2 and 3 with K = 2 blocks. These results assume that the blocks contain an equal number of nodes; here we explore the role of unequal block sizes. As a measure of unbalance, we use the ratio of the largest block size to the smallest block size. The results of the study are displayed in Figure 3. The horizontal axis in both panels gives this ratio of unbalance; when this value is large (farther to the right), the blocks are exceedingly unbalanced. The vertical axis controls the expected number of in-block versus out-of-block edges with a parameter ε. In the left panel, ε plays the same role as in Proposition 3. In the right panel, ε does not control the in-block probabilities (i.e. the diagonal of B); there, the diagonal of B is set to 0.8 across all experiments.
For a range of unbalances and values of ε, Figure 3 plots the ratio of spectral gaps. In all of the parameterizations, this value is greater than one, indicating that anti-cluster sampling decreases λ_2 relative to the random walk model of RDS, even with unequal blocks. For example, the contour at 5.3 represents the class of models for which anti-cluster sampling increases the spectral gap by over five-fold.

[Figure 3. For each parameterization, the two panels display the ratio of spectral gaps as given in (9). All values are greater than one, indicating that anti-cluster sampling will increase the spectral gap, thus decreasing the dependence between adjacent samples. The benefits of anti-cluster sampling are especially prominent when ε is small; this corresponds to a model setting in which there are drastically fewer edges between blocks.]

Random networks
Here we investigate the impact of community structures and homophily using the Stochastic Blockmodel. We use an SBM with 2000 nodes and 50 communities of equal size to generate the underlying social network. To illustrate the impact of community structures, we vary the ratio of the expected number of in-block edges to the expected number of out-of-block edges. This ratio also controls the probability of generating an out-of-community referral. For example, with the ratio equal to one, the probability of an out-of-community referral is 1/2. We examine values of this ratio between 1/2 and 4. To do this, we fix the in-block probabilities to 0.9 and change the out-of-block probabilities. We simulate Markovian referral trees in which each participant refers exactly three members with replacement. The three referrals are sampled from the neighbors of the participant. RDS uses uniform samples, whereas AC-RDS uses non-uniform samples based on the weights described in (5). To show the effect of the communities, we choose the binary node feature based on community membership: the value is set to zero if the node belongs to communities 1 through 25 and to one otherwise. For both designs, we use the RDS II estimator to estimate the community proportion, where the inclusion probabilities are the stationary distribution of the simple random walk.
The datasets are simulated in the following way. First we generate a realization of an SBM and compute the stationary distribution of the simple random walk. We simulate the referral procedure of RDS and AC-RDS starting from a uniformly selected node and continuing until a certain number of samples are collected, either 1%, 5%, or 10% of the total nodes. We compute the RDS II estimates of the feature from samples collected by both procedures.
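To make this pipeline concrete, here is a minimal Python sketch of the Random Networks simulation: an SBM generator, a Markovian referral tree with three coupons per participant, and the RDS II (inverse-degree) estimator. All function names are ours, the network is far smaller than the 2000-node design above, and the AC-RDS weighting of (5) is omitted; the sketch covers only the standard uniform-referral arm.

```python
import numpy as np

def sample_sbm(n, K, p_in, p_out, rng):
    """Generate an SBM adjacency matrix with K equal-size blocks."""
    z = np.repeat(np.arange(K), n // K)                 # block labels
    probs = np.where(z[:, None] == z[None, :], p_in, p_out)
    upper = rng.random((n, n)) < probs
    A = np.triu(upper, 1)
    A = (A | A.T).astype(int)                           # symmetric, no self-loops
    return A, z

def rds_tree(A, seed, n_samples, coupons, rng):
    """Markovian referral tree: each participant refers `coupons`
    uniformly chosen neighbors, sampled with replacement."""
    samples, frontier = [seed], [seed]
    while len(samples) < n_samples:
        nxt = []
        for v in frontier:
            nbrs = np.flatnonzero(A[v])
            for u in rng.choice(nbrs, size=coupons):
                if len(samples) < n_samples:
                    samples.append(u)
                    nxt.append(u)
        frontier = nxt
    return np.array(samples)

def rds_ii(y, degrees, samples):
    """RDS II (Volz-Heckathorn) estimator: inverse-degree weights,
    since the random walk's stationary distribution is proportional
    to degree."""
    w = 1.0 / degrees[samples]
    return np.sum(w * y[samples]) / np.sum(w)

rng = np.random.default_rng(0)
A, z = sample_sbm(200, 2, 0.2, 0.02, rng)
y = (z >= 1).astype(float)                              # binary node feature
est = rds_ii(y, A.sum(axis=1), rds_tree(A, 0, 20, 3, rng))
```

The AC-RDS arm would differ only in `rds_tree`, replacing the uniform `rng.choice` over neighbors with the non-uniform weights of (5).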
This study is based on 5000 simulated datasets. Figure 4 displays box plots of the 5000 RDS II estimates of the proportion in the different settings. Comparing RDS to AC-RDS, we see that AC-RDS collects more representative samples. Additionally, as we increase the degree of homophily, the performance of AC-RDS degrades less than that of standard RDS. In panels (a) and (b), the chance that participants make referrals outside of their community is relatively high, 2/3 and 1/2, respectively; in these cases, both designs perform similarly. However, in panels (c) and (d), where there is a smaller chance of a cross-community referral, there is a stronger referral bottleneck. In this regime, AC-RDS collects more representative samples by encouraging participants to leave their communities more often. This is exactly the intended outcome of AC-RDS; in fact, at the population level, this is the result proven in Lemma 2.

Add-health networks
This set of simulations is based on friendship networks from the National Longitudinal Study of Adolescent Health (available at http://www.cpc.unc.edu/addhealth), which we refer to as the Add-Health Study. In the study, students were asked to list up to five friends of each gender and to report whether they had any interaction within a certain period of time. The reported friendships were then combined into an undirected network; that is, an edge connecting two students means that either student, not necessarily both, reported the friendship. We use the four largest networks in the dataset. Table 2 contains summary information for the largest connected component of each of these four networks. We use gender as the binary node feature and focus on estimating the proportion of males in the population. We simulate the referral procedure of RDS and AC-RDS starting from a uniformly selected node and continuing until a certain number of samples are collected, either 1%, 5%, or 10% of the total nodes. In these simulations, each participant refers exactly three members with replacement. We compute the RDS II estimate of the male proportion using the node degrees for the weights. Similar to the simulations in [15] and [2], these simulations are performed with replacement.
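The union rule above ("either student, not necessarily both") amounts to symmetrizing the directed nomination matrix. A toy sketch with a hypothetical four-student nomination matrix, not actual Add-Health data:

```python
import numpy as np

# Directed nominations: N[i, j] = 1 if student i listed student j.
# A toy 4-student example (hypothetical, not Add-Health data).
N = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]])

# Union rule: an undirected edge exists if either student nominated the other.
A = ((N + N.T) > 0).astype(int)
np.fill_diagonal(A, 0)        # no self-loops
```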
This study is based on 10,000 simulated samples. Figures 5 and 6 display box plots of the 10,000 RDS II estimates of the male proportion under different settings. Notice that in Figure 5 the interquartile range of AC-RDS with a 5% sample is often comparable to the interquartile range of a standard RDS with a sample that is twice as large. In Figure 6, only the Type B request is considered in the implementation of AC-RDS.

Without replacement sampling
We consider the impact on AC-RDS of simulating the sample with and without replacement from the underlying network. In the Random Networks simulation model, there is only a small difference between the two sampling settings, likely because the network is dense. In smaller networks, one expects a greater difference between with- and without-replacement sampling. In fact, in the Add-Health simulation model, under the without-replacement setting and a referral rate of one or two, the trees die quickly and often do not collect enough samples to attain 1% of the total nodes. Figure 7 displays plots of the 10,000 RDS II estimates of the male proportion under the without-replacement setting. In the simulation study of the Add-Health networks, the Type B implementation of AC-RDS collects more representative samples compared to the implementation that combines both types.

Non-uniform seeds
We consider the impact of non-uniform (biased) seed nodes on AC-RDS and standard RDS when simulating the sample with and without replacement from the underlying network. Figure 8 displays plots for the 10,000 RDS II estimates of the male proportion under the biased seed nodes.

Issues remaining
The aim of this research is to highlight how referral requests have the potential to alter referral patterns in a way that makes the resulting sample more representative of the target population. The Markov models for RDS and AC-RDS capture important features of reality, but both are necessarily approximations to the practicalities of gathering a sample from a marginalized and hard-to-reach population. These gaps between "theory" and "practice" have the potential to make AC-RDS either more or less desirable. If AC-RDS is to be implemented in the field, there are several issues that must be explored.
1. If there are pockets of the marginalized target population which are particularly hard to reach, AC-RDS has the potential to both help and hinder the sampling of these populations. Novel referral requests could help by encouraging participants to refer friends from different communities, potentially exposing a new community to the researchers. Alternatively, because AC-RDS referral requests are likely more difficult for participants, they could reduce the number of referrals that are made, making it more difficult to reach a target sample size.
2. Because AC-RDS leads to a reversible Markov chain, there exist natural formulations for the sampling weights, akin to the Volz-Heckathorn estimator [38] or the weights derived under the Successive Sampling model. The formulation of the sampling weights for AC-RDS could follow a similar argument as the Volz-Heckathorn weights. Because the Volz-Heckathorn estimator assumes a reversible Markov transition matrix P_A, its stationary distribution is proportional to the row sums of A (i.e. the node degrees). Since AC-RDS also assumes a reversible Markov transition matrix P_W̃, the stationary distribution of AC-RDS is proportional to the row sums of W̃ (i.e. π given in (3)). In both cases (A and W̃), the weights require asking participants questions about their local social network. Recently, [37] introduced data collection methods, survey questions, and estimators for RDS to estimate clustering properties of the underlying social network. Their estimators are designed to count the number of connected triplets and triangles to which a participant belongs. Data collected in this way would be the main ingredient for estimating the sampling weights in AC-RDS.
3. Preferential recruitment, the tendency of participants to refer particular friends, leads to a violation of the uniform referral assumption. AC-RDS gives participants specific instructions for the new referrals. Because these instructions are more specific, they may lead to a referral process that more closely satisfies the modeling assumptions. However, studying the reactions of members of a hidden population to this type of request, and the impact of preferential recruitment on AC-RDS, requires a rigorous field study that we will address in future research.
4. More generally, it is necessary to investigate how human subjects respond to both standard and non-standard referral requests. Because it is practically infeasible to use random number generators to ensure participants refer randomly chosen friends, all statistical approaches to RDS assume that participants refer a random collection of friends. Whether these statistical models lead to adequate approximations of the actual referral process is an empirical question that has received some attention and deserves more.

[Figure caption: Simulation results based on the Add-Health study described in Figure 6. Each column corresponds to a different network from the study. In Rows A and B the samples are collected with replacement and in Rows C and D without replacement. In Rows A and C the seed node is chosen from nodes with female attributes and in Rows B and D from nodes with male attributes.]
In practice, there are many conditions that are often appended to this request. These conditions help define your contacts (e.g. as people you (i) know on a first name basis, (ii) have seen in the last month, and (iii) who fit the eligibility criteria for the study). Page 330 of [23] and Appendix Q in [5] give further discussion of this topic. Of particular interest is the section of [23] titled "Script for explaining the recruitment process"; the AC-RDS requests provide a formalization for exactly the concept described there. Techniques for evaluating referral behavior already exist in the literature. For example, [42] designed a web-based method to sample undergraduate students and study the effectiveness and efficacy of RDS; [28] compared an RDS sample in Uganda with a total population survey of the same population; [27] studied how manipulating incentives might change referral patterns; [12] proposed statistical diagnostics to examine convergence properties; and [1] performed qualitative follow-up interviews to ask participants about difficulties in finding referrals. Similar techniques could be used to evaluate whether novel referral requests provide a more representative sample.

Discussion
In respondent-driven sampling, bottlenecks create dependencies between the samples; successive samples are more likely to belong to the same community. Because of these dependencies, bottlenecks increase the variability of the resulting estimators. While researchers cannot alter the social network to diminish bottlenecks, they can use novel implementations of RDS to implicitly encourage participants to refer friends in different communities. In comparison to other such techniques in the literature, AC-RDS does not require participants to reveal sensitive information, nor does it require a priori knowledge of what forms the bottlenecks (e.g. race, gender, neighborhood, some combination of these factors, or some entirely different factors). On closer inspection, AC-RDS, similar to the "Search" mode of NSM, increases the referrals that are more likely to lead to unexplored parts of the network. NSM aims to efficiently explore the network by targeting the best nodes for each sampling wave, while AC-RDS tries to find and explore the best local edges. A direct comparison of these two methods requires human subject experiments and is beyond the scope of the current paper. We call this approach anti-cluster RDS. This terminology stems from two distinct, but related, definitions of "clustering" in networks. First, the classical use of "clustering" in social networks is the clustering coefficient, a summary statistic of a network which describes the propensity of nodes to form triangles; this idea of "clustering" is a local measure. The second form of "clustering" is more global and is often used synonymously with community structure; the idea is that "clusters" of individuals form communities. Both of these types of clusters emerge from homophily, the tendency of individuals to become friends with people who are similar to them. As such, homophily produces a local-global duality in "clustering."
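The local notion of clustering can be made concrete: the global clustering coefficient (transitivity) is the ratio of closed ordered triples to all ordered paths of length two, computable directly from the adjacency matrix. A minimal sketch on toy graphs (ours, not data from the paper):

```python
import numpy as np

def transitivity(A):
    """Global clustering coefficient: closed ordered triples over
    all ordered connected triples (paths of length two)."""
    closed = np.trace(A @ A @ A)          # equals 6 x (number of triangles)
    deg = A.sum(axis=1)
    triples = np.sum(deg * (deg - 1))     # ordered paths of length two
    return closed / triples

# A triangle is fully clustered; a path graph has no closed triples.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]])
```

The triangle yields transitivity 1 and the path yields 0, illustrating the local, triangle-based meaning of "clustering" as opposed to the global, community-structure meaning.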
AC-RDS requests are built upon local structures in the network (which of your friends are friends with each other) and immediately access global network patterns, which could be unknown to the researchers and/or participants. This paper shows that AC-RDS is analytically tractable under the Markov model. One key benefit of the specific construction is that the chain X_i^ac is reversible and its Markov transition matrix can be expressed in terms of the underlying adjacency matrix and standard matrix operations (5). A key limitation of the Markov model is that it samples with replacement, while in practice the sampling is done without replacement; for further discussion of this topic, see [30]. The simulations in Section 5 show that the key insights from the Markov model continue to hold under sampling without replacement, so long as the sample size is not comparable to the population size.
Section 4 studies theoretical properties of AC-RDS. We first argue that AC-RDS can be approximated by a reversible Markovian process. Propositions 2 and 3 show that AC-RDS can decrease λ 2 , the second eigenvalue of the Markov transition matrix, on the population graph. Theorem 1 shows that these gains from Propositions 2 and 3 will continue to hold if the graph is sampled with independent edges. In addition, Theorem 2 shows that AC-RDS reduces the covariance of the samples in the referral tree under the Stochastic Blockmodel with equal block sizes.
Finally, in Section 6 we discuss some of the gaps between theory and practice, acknowledging that more work needs to be done before AC-RDS could be implemented in the field. For example, it is not clear how participants will actually respond to AC-RDS requests. Addressing this issue requires human subject experiments that are beyond the scope of the current paper. We are addressing this problem in concurrent research.

Appendix
This appendix provides the proofs contained in the main document. We begin by presenting some preliminary lemmas. We then provide the proofs for the results given in Sections 3.2, 4.1, and 4.2.

Appendix A: Preliminary lemmas
This section contains lemmas which are used to prove our main results. Lemmas 1 and 2 are contained in the main paper; we start the preliminary results with Lemma 3. First we state two standard results, given here for convenience.

Lemma 3. Let A be a symmetric matrix and D a diagonal matrix. Then
Lemma 4 (Bernstein's inequality). Let X_1, · · · , X_N be independent random variables with |X_i − E[X_i]| ≤ M almost surely for all i, and set σ² = Σ_{i=1}^N Var(X_i). Then, for all t ≥ 0, P( |Σ_{i=1}^N (X_i − E[X_i])| ≥ t ) ≤ 2 exp( −t² / (2(σ² + Mt/3)) ). We use the following result from [32] in the proof of Proposition 2.
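As a quick Monte Carlo sanity check of the classical Bernstein bound P(|S| ≥ t) ≤ 2 exp(−t²/(2(σ² + Mt/3))), the sketch below compares the empirical tail of a sum of centered Bernoulli variables against the bound. The parameters are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, trials, t = 50, 1.0, 20000, 6.0
p = 0.3
sigma2 = n * p * (1 - p)                  # total variance of the sum

# X_i are Bernoulli(p), so |X_i - E X_i| <= M = 1 almost surely.
X = rng.random((trials, n)) < p
S = X.sum(axis=1) - n * p                 # centered sums

emp_tail = np.mean(np.abs(S) >= t)        # empirical tail probability
bound = 2 * np.exp(-t**2 / (2 * (sigma2 + M * t / 3)))
```

The empirical tail probability falls below the Bernstein bound, as the inequality guarantees.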

Lemma 5 ([32]). Under the Stochastic Blockmodel, if B = pI + rJ and there are an equal number of nodes in each block, then the eigenvalues of P_A are 1, with multiplicity one; (Kr/p + 1)^{-1}, with multiplicity K − 1; and 0, with multiplicity N − K. For completeness we include the proof here.
Proof. The matrix B ∈ R^{K×K} is the sum of two matrices, B = pI_K + r1_K1_K^T, where I_K ∈ R^{K×K} is the identity matrix, 1_K ∈ R^K is the vector of ones, r ∈ (0, 1), and p ∈ (0, 1 − r). Let Z ∈ {0, 1}^{N×K} be such that Z^T 1_N = s1_K for some s ∈ R; this guarantees that all K blocks have equal size s. The Stochastic Blockmodel has the population adjacency matrix A = ZBZ^T. Moreover, P_A = ZB_L Z^T for B_L := B/(s(p + Kr)), since every row of A sums to s(p + Kr). The eigenvalues are found by construction.
• The constant vector 1_N is an eigenvector with eigenvalue 1: P_A 1_N = ZB_L Z^T 1_N = sZB_L 1_K = [s(p + Kr)/(s(p + Kr))] Z1_K = 1_N.
• Let b_2, . . . , b_K ∈ R^K be a set of orthogonal vectors which are also orthogonal to 1_K. For any i, Zb_i is an eigenvector with eigenvalue (Kr/p + 1)^{-1}: since 1_K^T b_i = 0, P_A(Zb_i) = ZB_L Z^T Zb_i = sZB_L b_i = [ps/(Nr + sp)] (Zb_i) = (Kr/p + 1)^{-1} (Zb_i), where the last equality follows because N = sK.
Because Zb_i and Zb_j are orthogonal for i ≠ j, the multiplicity of the eigenvalue (Kr/p + 1)^{-1} is at least K − 1.
Because rank(P A ) ≤ min(rank(Z), rank(B L ), rank(Z T )) ≤ K, there are at most K nonzero eigenvalues. The results follow.
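The eigenvalue structure in the proof can be checked numerically. The sketch below builds a small population adjacency matrix A = ZBZ^T with B = pI + rJ (toy values of K, s, p, r chosen by us), row-normalizes it, and confirms that the spectrum is {1, (Kr/p + 1)^{-1} with multiplicity K − 1, 0 otherwise}:

```python
import numpy as np

K, s = 3, 4                                   # K blocks of equal size s (toy values)
p, r = 0.3, 0.1
Z = np.kron(np.eye(K), np.ones((s, 1)))       # block membership matrix
B = p * np.eye(K) + r * np.ones((K, K))       # B = pI + rJ
A = Z @ B @ Z.T                               # population adjacency matrix
P = A / A.sum(axis=1, keepdims=True)          # random-walk transition matrix

eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
lam2 = 1.0 / (K * r / p + 1.0)                # claimed second eigenvalue
```

With these values, lam2 = 1/(3·0.1/0.3 + 1) = 0.5, and the computed spectrum is 1 once, 0.5 with multiplicity K − 1 = 2, and 0 for the remaining N − K directions.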
The following result is used for the computation of the eigenvalues in the proof of Proposition 3.

Lemma 6. Let P be a block-constant Markov transition matrix with two blocks of identical sizes, and suppose P is proportional to the matrix taking the value p on the diagonal blocks and r on the off-diagonal blocks. Then λ_2(P) = (p − r)/(p + r).
Proof. This follows from Lemma 5 using K = 2.

Lemma 7 (Operator norm of non-negative irreducible matrices). Let A ∈ R^{N×N} be a non-negative irreducible symmetric matrix and let r_i(A) denote the sum of the i-th row of A. Then λ_1(A) ≤ max_i r_i(A).

Proof. By the Perron-Frobenius theorem, A has a real leading eigenvalue. Additionally, for any y ∈ R^N and μ ∈ R with y ≥ 0 and μ ≥ 0, if Ay ≤ μy, then λ_1(A) ≤ μ. Taking y = 1 and μ = max_i r_i(A) yields the result.

Lemma 8. The matrices L_W̃ := T^{-1/2} W̃ T^{-1/2} and P_W̃ = T^{-1} W̃ have the same eigenvalues.

Proof. Let (x, λ) be an eigenpair of L_W̃, so that T^{-1/2} W̃ T^{-1/2} x = λx. Multiplying both sides by T^{-1/2} gives T^{-1} W̃ (T^{-1/2} x) = λ(T^{-1/2} x), where the left hand side is P_W̃ (T^{-1/2} x). This implies that (T^{-1/2} x, λ) is an eigenpair of P_W̃.
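The row-sum bound λ_1(A) ≤ max_i r_i(A) used in this proof is easy to check numerically. A minimal sketch on a random non-negative symmetric matrix (toy dimensions of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((8, 8))
A = (A + A.T) / 2                     # non-negative symmetric (hence irreducible) matrix

lam1 = np.linalg.eigvalsh(A).max()    # leading eigenvalue
max_row_sum = A.sum(axis=1).max()     # max_i r_i(A)
```

For a non-negative symmetric matrix, the leading eigenvalue equals the spectral radius, so `lam1` never exceeds `max_row_sum`.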

Appendix B: Design effect and variance
Here we provide the proof of Proposition 1 from Section 3.2.
Proof of Proposition 1. Lemma 12.2 in [24] shows that (i) the f_j and λ_j are real valued and (ii) the f_j are orthonormal with respect to the inner product ⟨·, ·⟩_π. Because λ_2 < 1, f_1 is the constant vector. Expanding the covariance in this eigenbasis gives (11); bounding the first term of (11) then yields the stated result.

Appendix C: Population graph results
Here we provide the proofs of the results given in Section 4.1: Lemmas 1 and 2 and Propositions 2 and 3.
Proof of Lemma 1. From the definitions of Z and Ā it follows that Z^T Z = Θ and Ā = J_{n×n} − ZBZ^T = ZB̄Z^T. The claimed identity then follows by direct computation.

Proof of Lemma 2. We first show that (12) holds.
With the above, we rewrite (12), where the step from (13) to (14) follows from algebraic manipulation. Note that (14) always holds because of the lemma assumptions. In addition, by going through the same procedure, the analogous inequality can be shown. In terms of the expected adjacency matrices, the above statement is equivalent to the following result: if nodes k and m belong to the same block and l belongs to a different block, then the corresponding entries of W̃ satisfy the stated inequality. Now, we show that P_W̃(u, v) < P_A(u, v) when u and v belong to the same block. Assume u and v belong to block C of size |C|, and factor out the transition probability between u and v. Since the summations are along the rows, the claim follows from inequality (15). Now consider the case where Θ_kk = Θ_ll for all k and l; then the same bound holds for w ∉ C.

Proof of Proposition 2. The first part of this proof focuses on the inequality λ_2(P_W̃) < λ_2(P_A). To this end, let f : {1, 2, · · · , K} → R and let r be the common row sum of B_RW and B_AC. Then I − (1/r)B_AC and I − (1/r)B_RW are Laplacian matrices. The desired inequality follows from inequality (16) and the fact that B_AC(u, v) > B_RW(u, v) for u ≠ v. We therefore conclude that λ_2(P_W̃) < λ_2(P_A); this result is extended in the calculations below. The fact that λ_2(P_A) = 1/(R + 1) follows immediately from Lemma 5. The rest of the proof is dedicated to equation (8) in the statement of the proposition. From Lemma 1, W̃ = ZB̃Z^T for B̃ = (BΘB̄ + B̄ΘB) · B. Define r̄ = 1 − r. Note that Θ = (N/K)I, so it can temporarily be ignored as a constant.
Note that r̃ and p̃ depend on the block populations N/K and thus on the number of nodes in the graph N. However, this term cancels in the ratio r̃/p̃, so neither λ_2(P_W̃) nor λ_2(P_A) depends on N. As such, the gap holds for some positive constant that is independent of N. As K grows and r shrinks, u → p(R + 1). Applying Lemma 5 to B̃ then yields the claimed expression, which concludes the proof.
Proof of Proposition 3. Both P_A and P_W̃ satisfy the conditions of Lemma 6; it is only necessary to compute the block values. For P_A, p = 1 − ε and r = ε, so λ_2(P_A) = 1 − 2ε. To compute λ_2(P_W̃), notice that it is only necessary to determine p and r up to proportionality. Under the assumed model, B̄_11 = ε, B̄_12 = 1 − ε, and Θ ∝ I. Moreover, the matrix (BΘB̄ + B̄ΘB) · B contains the elements p = 2ε(1 − ε)² and r = ε((1 − ε)² + ε²) for P_W̃. The result follows by Lemma 6.
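This computation can be verified numerically. The sketch below assumes the block-level construction B̃ = (BB̄ + B̄B) ∘ B from the proof (with Θ ∝ I absorbed into the proportionality, and a toy value of ε chosen by us), builds the two block-constant matrices, and compares their second eigenvalues after row normalization:

```python
import numpy as np

eps, s = 0.1, 5                                   # toy epsilon and block size
B = np.array([[1 - eps, eps], [eps, 1 - eps]])    # two-block connectivity matrix
Bbar = 1 - B                                      # complement J - B
Z = np.kron(np.eye(2), np.ones((s, 1)))           # block membership
Btil = (B @ Bbar + Bbar @ B) * B                  # anti-cluster weights (Theta ∝ I dropped)

def lam2(M):
    """Second-largest eigenvalue of the row-normalized transition matrix."""
    P = M / M.sum(axis=1, keepdims=True)
    return np.sort(np.linalg.eigvals(P).real)[-2]

l2_A = lam2(Z @ B @ Z.T)        # random-walk RDS: equals 1 - 2*eps
l2_W = lam2(Z @ Btil @ Z.T)     # anti-cluster RDS: strictly smaller
```

With ε = 0.1, l2_A = 0.8 while l2_W ≈ 0.328, matching (1 − 2ε)/(3(1 − ε)² + ε²) and illustrating the larger spectral gap of AC-RDS.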

Appendix D: Sampled graph results
Here we provide the proofs of Theorems 1 and 2 from Section 4.2.
Proof of Theorem 1. By Lemma 8 and Weyl's inequality, it suffices to study the right hand side of the resulting inequality. For convenience and compactness, we introduce the notation below. By the triangle inequality, the quantity of interest decomposes into three terms. The remainder of the proof is divided into four parts: the three terms are bounded in Parts 1, 2, and 3, respectively, and Part 4 combines these bounds and completes the argument.

Part 1.
Note that T is a diagonal matrix and that both of the weight matrices being compared are symmetric. Therefore, we apply Lemma 3 to obtain (17). It is sufficient to prove an upper bound for the first term in (17); the same bound will hold for the second term. Here | · | denotes the element-wise absolute value operator, and the inequality follows from the fact that, for any matrix M, ‖M‖ ≤ ‖|M|‖ [e.g. 26, Theorem 2.5].
We begin by bounding the row sums of |ÂĀ̂ − AĀ| · Â with a concentration inequality; we then use Lemma 7 to bound the operator norm. Define the row sum mapping r_i, so that for a matrix C, r_i(C) equals the sum of the i-th row of C. Define F_ij = Σ_k A_ik Ā_kj and G_ij = Σ_k Ā_ik A_kj. For fixed i and j, the random variables {Â_ik Ā̂_kj}_k are independent with expected value E[Â_ik Ā̂_kj] = A_ik Ā_kj and bounded variance. Let Δ_{F_ij} := 10 F_ij ln(2N²/δ). By Bernstein's inequality and the union bound, the stated bound holds, where the last inequality follows from the assumption that F_min ≫ ln N. So, with high probability, (19) holds. By Bernstein's inequality, (20) also holds with high probability. Consequently, combining (19), (21), and (22) yields the desired bound. Following the same steps, we obtain the analogous bound for the remaining term.

Part 2. We have (26). Similar to Part 1, it is sufficient to prove an upper bound for the first term in (26); the same bound will hold for the second term.
Let J be the N × N square matrix comprised of all ones. Consider the first term in (27). For i, j ∈ {1, · · · , N}, define A^{(ij)} ∈ {0, 1}^{N×N} to be the matrix with ones at elements (i, j) and (j, i) and zeros everywhere else. The term of interest is then a sum of independent, symmetric matrices, so we can apply Theorem 5 of [6] to bound it; doing so yields (28). For the second term of (27), because |Σ_k Â_ik Â_kj| ≤ D_ii D_jj, we obtain the same bound as (28), where the inequality follows from the assumption that F_ij + G_ij > c_1 D_ii for all i, j ∈ {1, · · · , N}. Combining the above results yields the bound on the first term of (26). As noted above, the second term in (26) satisfies the same bound, so that combining (26), (29), and (30) yields the claim of this part.

Part 3. First we bound |T̂_ii − T_ii|, and then we bound ‖T^{-1/2} T̂^{1/2} − I‖. We have
|T̂_ii − T_ii| ≤ |r_i((ÂĀ̂) · Â) − r_i((AĀ) · A)| + |r_i((Ā̂Â) · Â) − r_i((ĀA) · A)|. (32)
Consider the first term in (32), which is further decomposed in (33). To bound the first term of (33), we use (20) and (21); with probability at least 1 − δ, the stated bound holds. Consider the second term in (33). Note that E[Â_ik Ā̂_kj] = A_ik Ā_kj; in addition, the same upper bound on the variance allows us to apply (20). Hence, with probability at least 1 − δ, we have (36). From (35) and (36), we obtain the bound on the first term in (32); for the second term in (32), following the same steps yields the analogous bound. Therefore, the bound on |T̂_ii − T_ii| follows.
Now we consider ‖T^{-1/2} T̂^{1/2} − I‖, where the last inequality follows from the theorem's assumptions. Define the Laplacian matrix L_ac := T^{-1/2} W̃ T^{-1/2}; the next inequality follows from the fact that ‖I − L_ac‖ ≤ 1. This completes Part 3.

Proof of Theorem 2. Define f_j as the j-th eigenvector of P_A with respect to the inner product ⟨·, ·⟩_π; similarly, define f_j^ac as the j-th eigenvector of P_W̃ with respect to the inner product ⟨·, ·⟩_{π^ac}.
From Proposition 1, to prove the theorem, it is sufficient to show that Σ_{j=2}^{|V|} ⟨y, f_j^ac⟩²_{π^ac} λ_j(P_W̃)^t < Σ_{j=2}^{|V|} ⟨y, f_j⟩²_π λ_j(P_A)^t.
We break the proof into two steps. In the first step, we show that the above holds true in the population, i.e. we compare the Markov chains on P_W̃ and P_A. In the second step, we show that the sample quantities converge almost surely to the population quantities.

Part 1. In this step, we show the population analogue of the above display, with the population eigenvectors f̃_j and f̃_j^ac in place of the sample ones. We begin by analyzing the eigenpairs of the transition matrices. From Lemma 5, for i = 2, . . . , K, λ_i(P_A) = λ_2(P_A) and λ_i(P_W̃) = λ_2(P_W̃).
Moreover, for i > K, λ_i(P_A) = λ_i(P_W̃) = 0. Under the theorem conditions, P_W̃ and P_A have the same stationary distribution; refer to this as π̃ (in fact, this distribution is uniform over the nodes). Define f̃_j and f̃_j^ac as the j-th eigenvectors, with respect to ⟨·, ·⟩_π̃, of P_A and P_W̃, respectively. Proposition 2 shows that λ_2(P_W̃)^t + ε < λ_2(P_A)^t for some ε that does not change asymptotically as |V| grows. Thus, Part 1 will be complete after showing that the corresponding inner products agree.

Part 2. To ease notation, let λ_j := λ_j(P_A) and λ̂_j := λ_j(P_Â). Finally, let N := |V| denote the size of the graph. This part of the proof shows that, as N → ∞, the sample quantities converge to their population counterparts. The corresponding proof for the anti-cluster random walk follows from a similar argument.
We have