Translocation-Based Algorithm for Publishing Trajectories with Personalized Privacy Requirements

Up to now, a large amount of trajectory data has been collected by trusted servers because of the wide use of location-based services. One can extract useful information via an analysis of trajectory data. However, the privacy of trajectory bodies risks being inadvertently divulged to others. Therefore, trajectory data should be properly processed for privacy protection before being released to unknown analysts. This paper proposes a privacy protection scheme for publishing trajectories with personalized privacy requirements based on the translocation of trajectory points. The algorithm not only enables the published trajectory points to meet the personalized privacy requirements regarding desensitization and anonymity but also preserves the positions of all trajectory points. Our algorithm trades a loss in mobility patterns for a gain in the similarity of trajectory distances. Experiments on trajectory data sets with personalized privacy requirements verify the effectiveness and efficiency of our algorithm.


Introduction
With the maturity of location technology and wireless communication technology, location-based services (LBS) have become more and more popular. In LBS, trusted servers for anonymity (SA) receive location service requests from users and accumulate a large amount of trajectory data. The analysis of trajectory data can reveal useful knowledge, but it also threatens the privacy of the trajectory bodies. Trajectory data should therefore be processed to protect privacy before being published to a third party for analysis, a task known as privacy protection for publishing trajectories.
The location service requests (LSRs) contain the personalized privacy requirements of the body at that location, and these requirements inform the SA of the body's sensitivity. For trajectory data with personalized privacy requirements, existing privacy protection methods usually resist identity link attacks on trajectories and realize personalized anonymity of trajectories [1][2][3]. Due to the curse of dimensionality [4], the anonymity of trajectories causes a lot of data distortion.
We propose a Translocation-based Personalized Privacy Protection (TPPP) method for publishing trajectories with personalized privacy requirements on trajectory points. We assume that the personalized privacy requirements set by the body of the trajectory include the sensitivity and anonymity thresholds of the trajectory points. The method realizes personalized privacy protection through translocations of the trajectory points. Figure 1 shows a trajectory with personalized privacy requirements. Each trajectory point p_i has a specific sensitivity threshold d_i, and translocated trajectory points cannot fall within the sensitive area shown in red. The trajectory publisher defines a maximum distortion distance, denoted d_max; translocated trajectory points cannot exceed the range of the green dotted line. The anonymity threshold k_i of p_i is not marked in Figure 1; k_i means that the probability of inferring the position of p_i from its translocated trajectory point p*_i should be no more than 1/k_i. We design two privacy protection algorithms according to whether the adversary has mastered the privacy requirements of p_i. Experiments show that the two algorithms are efficient and produce less distortion of the published trajectory data. The main contributions of our proposed method are as follows. First, we propose a privacy protection framework for trajectory data with personalized privacy protection requirements, which include desensitization and anonymity of trajectory points. Second, we propose a translocation strategy by which the algorithm retains all the spatial-temporal positions in the original trajectory data. No dummy trajectory points are added, and no trajectory points that cannot be anonymized are removed, so the availability of the published data is improved.
Third, a modified Manhattan distance metric is proposed to measure the distance between trajectory points in the space-time region, which overcomes the disadvantage of the Euclidean distance in measuring the reachable distance between two points. Finally, for trajectory data with personalized privacy protection requirements, an experimental method is designed to compare the personalized privacy protection algorithm with a nonpersonalized one on effectiveness and efficiency. We also verify the effectiveness and efficiency of our algorithms by comparison with baseline algorithms. The organizational structure of this paper is as follows. The first section introduces the research status and the main contributions of our work. The second section introduces related work on personalized privacy protection for publishing trajectories. The third section describes the system model, defines the data symbols involved in the method, and introduces the problem statement. In the fourth section, we propose a modified Manhattan distance for measuring the accessible distance on the road network. The fifth section introduces two privacy protection algorithms under different background knowledge of the adversary. In the sixth section, 24 trajectory data sets with personalized privacy requirements are constructed based on two classical trajectory data sets, and different algorithms are compared using the median method. The last section summarizes the paper.

Related Work
Trajectory data is typical multidimensional data, and its k-anonymous publishing is NP-hard [5]. There are two kinds of nonpersonalized privacy protection methods, distinguished by their privacy objects. First, the publisher sets a unified privacy protection level for all trajectories [6][7][8][9][10][11] and then completes the privacy protection for publishing trajectories. Second, trajectory points are taken as atomic operation objects; the publisher sets a unified privacy protection level for all trajectory points [12][13][14][15][16] and then completes the privacy protection. Wang et al. [17] summarized and classified the nonpersonalized privacy protection methods for trajectory data. Bonchi et al. [18] introduced typical methods for publishing trajectories in detail.
Due to the maturity of location and communication technology, LBS has developed rapidly towards personalization. Some LBS allow users to set personalized privacy requirements on every LSR, so trajectory data with personalized privacy requirements can be collected. Personalized privacy protection for publishing trajectories can be divided into two categories by the originator of the personalized privacy requirements.

Trajectory Body Sets the Personalized Privacy Requirements.
This kind of method takes the trajectory identity as the privacy protection object and resists identity link attacks by anonymizing trajectories. The algorithm first clusters the trajectories and then selects the highest level of privacy requirement in each cluster as the privacy level of the cluster.
This method does not improve the effective privacy level but greatly increases the data distortion. Mahdavifar et al. [1] selected the trajectory with the highest level of anonymity requirement in the trajectory data as the center to cluster the trajectories. Then the cluster was expanded until its capacity was not less than the privacy requirement of the center. Finally, the trajectory data were reconstructed and published. Lan et al. [2] mapped the trajectories and their relations to a weighted graph, in which trajectories whose similarity is not less than a certain threshold are connected by an edge. This algorithm takes the privacy difference between the trajectories at the two endpoints as the weight of the edge and constructs anonymous sets by partitioning the graph. Hu et al. [3] assumed that there was an independent privacy level on different trajectory segments and selected the maximum privacy requirement of the trajectory segments as the privacy requirement. As with nonpersonalized trajectory privacy protection methods, taking the trajectory as the anonymous object causes serious data distortion.

Figure 1: Trajectory with personalized desensitization thresholds on trajectory points. The trajectory consists of 9 points p_1, ..., p_9, each of which has a unique sensitivity threshold d_i. A consistent maximum distortion threshold (d_max) is set for all trajectory points.

Trajectory Publisher Sets the Personalized Privacy
Requirements. First, the algorithm sets the privacy requirements of the trajectory points according to the distribution characteristics and geographical location of the trajectory data. Then, the trajectory point is taken as the atomic operation object, and personalized privacy protection of trajectory points is realized through perturbation [19, 20] or generalization [21][22][23]. The personalized privacy protection of trajectory points is usually based on a classification tree of their semantic attributes. Dai et al. [19] first annotated the semantic attributes of all sampling points on the trajectories to establish a classification tree. Then they extracted the sensitive stop points and replaced them by selecting appropriate user interest points. Finally, they published the reconstructed trajectory data. Komishani et al. [21] assumed that each trajectory has a unique sensitive attribute value and that all the sensitive attribute values in the trajectory data form an attribute classification tree. The algorithm took the level of sensitivity in the attribute classification tree as the privacy level of the trajectory and then generalized the trajectory. It further prevented identity link attacks by removing critical subtrajectories. Deldar and Abadi [22] created a personalized noise trajectory tree based on preallocation of privacy levels and realized differential privacy protection of trajectories with personalized location privacy requirements. Yang et al. [23] first defined information active points with levels of personalized privacy requirements. Then the information active points were clustered into anonymous areas, so that the information leakage of the information active points did not exceed the privacy threshold of the area.
Such methods abandon the anonymity of trajectories and cannot resist identity link attacks. The algorithms can only generalize the specific locations of trajectory points according to different privacy requirements and ensure that the probability of inferring the original location does not exceed a certain threshold [19][21][22][23]. Compared with the first category, this kind of method, which takes the trajectory point as the privacy protection object, may lower the level of privacy protection, but it reduces the data distortion significantly. The semantic attribute classification tree is the basis of generalization, and it is obtained through data analysis or subjective judgment; this subjectivity is a shortcoming of such methods. The authors of [24] proposed a privacy protection framework for LBS that allows each trajectory point to specify a minimum anonymity threshold and a maximum spatial-temporal resolution. The algorithm detects a set of neighbors for each newly added message, in which the new message and its neighbors are enclosed in each other's maximum tolerance box. Then the anonymous set containing the new message is obtained by eliminating the trajectory points that do not meet the anonymity requirement. If the number of trajectory points is not less than the highest anonymity requirement in the anonymous set, the anonymization succeeds; otherwise, the new message cannot be anonymized immediately. Gao et al. [25] allowed the publisher to select the model parameters of the algorithm individually without considering the personalized privacy requirements of different trajectories.

Problems and Shortcomings.
The existing personalized privacy protection methods based on trajectory anonymity cannot escape the curse of dimensionality, which is bound to cause a lot of data distortion. The generalization methods based on trajectory points frequently use the semantic attributes of the trajectory points to set their privacy levels, but these attributes are all subjectively assigned, so they often lead to errors in evaluating the privacy requirement level and in the privacy protection processing of trajectory points. For trajectory data with personalized privacy requirements defined by the trajectory bodies, the existing trajectory-point-based privacy protection algorithms target LBS rather than publishing.
Privacy protection algorithms that process trajectory data with personalized privacy requirements on trajectory points, as defined by the trajectory bodies, are seldom reported; our method belongs to this type.

System Model
The Generation Process of Trajectory Data. Trajectory data is typically generated in LBS, which involve three primary parties: the user, the trusted SA, and the untrusted location service provider (LSP), as shown in Figure 2. A location service request (LSR) issued by a user includes the user identity, location information, service content, and privacy requirements. In order to protect trajectory privacy, users send LSRs to the SA instead of the LSP, and the SA processes the received LSRs for privacy and generates a deliverable LSR set. This process is called privacy protection of trajectories for LBS. Then the SA sends the processed LSRs to the LSP, and finally the LSP returns the location service answers (LSAs) to the users through the SA.
During the process of LBS, the SA stores a large amount of original trajectory data, which can be analyzed to find useful information. Before publishing the trajectory data to a third party for analysis, the SA should transform it to generate releasable trajectory data and ensure that the third party cannot extract personal privacy from it. This process is called privacy protection for publishing trajectories, which is the focus of our study.

Personalized Privacy Requirements of Users.
The privacy of a trajectory is an objective requirement of the trajectory body. It should not be set by the collector only according to the semantics and distribution characteristics of the trajectory on the map. Due to differences among trajectory bodies and spatial-temporal locations, each trajectory point may have a different privacy requirement, which is also known as a personalized privacy requirement.

Mathematical Problems in Engineering
For example, some users set GPS accuracy to a high level throughout navigation services. For others, the acceptable GPS accuracy varies with the spatial-temporal location: sometimes users allow accurate GPS service to improve navigation accuracy, but they need to reduce GPS accuracy at sensitive spatial-temporal locations.

Model of Trajectory Data
Definition 1 (Location service request (LSR)). ⟨TID, r_no, {t, x, y}, k, {d_t, d_x, d_y}, Co⟩ [24], where TID is the user identity, r_no is the record number of the current request, {t, x, y} is the spatial-temporal location of the service request, k is the anonymity threshold, {d_t, d_x, d_y} is the desensitization threshold, and Co is the content of the location service request. It should be noted that here {d_t, d_x, d_y} is the minimum disturbance distance for the trajectory point to satisfy the desensitization requirement, whereas {d_t, d_x, d_y} in LBS refers to the maximum distortion threshold of the location service [24]. The trajectory data in the SA can be regarded as a collection of trajectory points; it is defined as follows.

Definition 2
(Trajectory). A trajectory is a collection of spatial-temporal sampling positions belonging to the same moving body [8], denoted by T = {p_1, p_2, ..., p_|T|}, where |T| represents the number of points in T. The trajectory point p_i is represented as a tetrad (TID_i, t_i, x_i, y_i), where TID_i denotes the body of T, t_i denotes the sampling time, and (x_i, y_i) denotes the spatial coordinates. (t_i, x_i, y_i) is called a triple. Trajectory data is a collection of trajectories [26], denoted by TD = {T_1, T_2, ..., T_|TD|}, where |TD| represents the number of trajectories. Intuitively, TD is a set of points {p_1, p_2, ..., p_|TD|_p}, where |TD|_p represents the number of trajectory points in TD [17].
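The data model above can be sketched in code. This is an illustrative sketch only; the class and variable names are ours, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class TrajectoryPoint:
    """Tetrad (TID, t, x, y) from Definition 2."""
    tid: str    # identity of the moving body
    t: float    # sampling time
    x: float    # spatial coordinate
    y: float

# A trajectory T is the list of points of one body; trajectory
# data TD is, intuitively, the flat set of all points.
T1 = [TrajectoryPoint("u1", 0.0, 1.0, 2.0),
      TrajectoryPoint("u1", 1.0, 1.5, 2.5)]
T2 = [TrajectoryPoint("u2", 0.0, 3.0, 1.0)]
TD = [T1, T2]
TD_points = [p for T in TD for p in T]   # |TD|_p = 3
```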
We add private information to Definition 2 and describe a trajectory point with a personalized privacy requirement in Definition 3.

Definition 3 (Trajectory point with personalized privacy requirement). A trajectory point p carries a personalized desensitization threshold {d_t, d_x, d_y}; that is to say, the spatial-temporal range with {t, x, y} as the center and {d_t, d_x, d_y} as the radius is the sensitive region of p, and p must be disturbed outside this range in order to be desensitized. Sometimes we also use d_s and d_t to represent the spatial and temporal components of the desensitization threshold, respectively. The anonymity threshold is denoted by k. When p is translocated to p*, in order to realize k-anonymity of p, there must be at least k adjacent individuals around p*.
k ≤ 1 indicates that there is no requirement for privacy protection: even if the trajectory body sets a desensitization threshold (d > 0), the restoration probability bound (1/k) is meaningless.
k > 1 indicates that the trajectory body needs to protect the trajectory point with an anonymity level no less than k. Even if there is no desensitization requirement (d = 0), the translocation range of p is [0, d_max], where d_max is the maximum distortion threshold proposed by the analyst. In the following statements, we use d_max only for its component in the spatial dimension; the component in the temporal dimension can be expressed as d_max / v, where v is the trajectory speed.

Definition 4 (Transformed trajectory database). According to certain rules, the replacement of p_i by another region is called a position transformation of p_i, where p_i is a trajectory point of T, 0 ≤ i ≤ |T|. The region may be a point, an area, or empty, and is denoted by p*_i. The new trajectory is called the transformed trajectory of T, denoted by T*. The new trajectory data is called the transformed trajectory data of TD, denoted by TD*. Typically, the published trajectory data is also represented by TD* [17].

Problem Statement.
Before publishing TD to a third party for analysis, the process that transforms TD into TD* based on the personalized privacy requirements is called personalized privacy protection for publishing trajectories.
For TD with the personalized privacy requirements described in the data model, the main concern of our algorithm is to guarantee the personalized privacy requirements while improving the availability of the published data as much as possible. We design a trajectory privacy protection algorithm based on trajectory points that meets the personalized privacy requirements, which include desensitization from the sensitive area and k-anonymity of the trajectory points; k-anonymity of whole trajectories is not required in our study. At the same time, the original positions of the trajectory points should be maintained to reduce the distortion of TD* and improve its usefulness. The background knowledge of the adversary directly determines TD's privacy policy. We divide the assumptions about this background into two levels. The strict assumption is that the adversary has mastered some trajectory points of the target and the privacy requirement levels of those trajectory points. The second level assumes that the adversary only knows some trajectory points of the target but does not know their privacy requirement levels.
This is also a more realistic set of background knowledge: the privacy requirements are more private than the locations of the trajectory points, and it is hard to imagine an adversary mastering the privacy requirements without knowing the sensitive locations of the target. When the adversary does not know the privacy levels of the target points, the trajectory points can achieve more efficient k-anonymity and desensitization. In the Solution section, we discuss the privacy protection policies in both cases.

Modified Manhattan Space Distance
Although the traditional Euclidean distance accurately expresses the linear distance between geographical objects, paths in an actual geographically connected region are usually polygonal. Therefore, it is more reasonable to use the Manhattan distance to measure the distance between trajectory points [3].
In most geographical areas, road direction depends on physical geographical conditions, and the direction of trajectory data reflects the road direction. Therefore, the main directions of the local roads can be estimated approximately from the statistical characteristics of the trajectories.
The coordinate axes can be adjusted so that they are parallel to the two main path directions in the region, and the trajectory point coordinates can be projected onto the new axes. This coordinate axis transformation is a compromise between the Manhattan distance and the road-network-constrained distance: it improves the fidelity of the distance between trajectory points while avoiding fitting the privacy protection to specific road network information. The transformation process is as follows: (a) Take due east as the reference vector and sample the trajectory segments in each direction. The angle formed by the direction of a trajectory segment and the reference vector is converted into a nonnegative angle and rounded to an integer degree. The trajectory length is then accumulated for each angle, and the two dominant directions are selected as the new coordinate axes (see Figure 3).
Considering that the roads in most areas intersect almost perpendicularly, when we simply select two mutually perpendicular directions as the new coordinate axes, the coordinate transformation simplifies to x' = x cos α + y sin α, y' = −x sin α + y cos α. (e) Using the new coordinates of each trajectory point, the modified Manhattan distance is the sum of the differences in each dimension: d(p_i, p_j) = |x'_i − x'_j| + |y'_i − y'_j| + v |t_i − t_j|, where v is the average speed of the trajectory.
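The transformation and the resulting metric can be sketched as follows, assuming the perpendicular-axes simplification above (a single rotation angle α). Function names and test values are illustrative, not from the paper:

```python
import math

def rotate(x, y, alpha_deg):
    """Project (x, y) onto axes rotated by alpha degrees (perpendicular case)."""
    a = math.radians(alpha_deg)
    xr = x * math.cos(a) + y * math.sin(a)
    yr = -x * math.sin(a) + y * math.cos(a)
    return xr, yr

def modified_manhattan(p, q, alpha_deg, v):
    """|x'_p - x'_q| + |y'_p - y'_q| + v * |t_p - t_q| in the rotated axes.

    p and q are (t, x, y) triples; v is the average trajectory speed,
    which converts the temporal difference into a distance.
    """
    (tp, xp, yp), (tq, xq, yq) = p, q
    xp_, yp_ = rotate(xp, yp, alpha_deg)
    xq_, yq_ = rotate(xq, yq, alpha_deg)
    return abs(xp_ - xq_) + abs(yp_ - yq_) + v * abs(tp - tq)

# With alpha_deg = 0 the metric reduces to the plain Manhattan
# distance plus the speed-scaled temporal term.
d = modified_manhattan((0, 0, 0), (0, 3, 4), alpha_deg=0, v=1.0)  # d == 7.0
```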
As an example, we take the trajectory data of New York (see Figure 3(a)) to compute the modified Manhattan distance; the sum of the trajectory lengths at various angles is shown in Figure 3(b).
Comparing our new coordinate axes with the Baidu map of New York (Figure 3(c)), we found that the main directions of the trajectories obtained by the statistics (about 60° and 150°) are basically consistent with the directions of the urban traffic roads. Then the rotation angles of the X-axis and Y-axis in the trajectory coordinate system are α = 60° − 0° = 60° and β = 150° − 90° = 60°, respectively. In the transformed coordinate system, the distance from B to O in Figure 3(d) changes from BC + CO to BF + FO, which is more reasonable.

Solution
Intuitively, TD with personalized privacy requirements is a collection of trajectory points. As shown in Definition 3, all trajectory points are distributed in a three-dimensional space spanned by the time and space dimensions. We can take trajectory points as the atomic objects of trajectory privacy protection algorithms. In order to improve the usability of TD*, our method gives up the k-anonymity of trajectories and only requires the trajectory points to meet the k-anonymity and desensitization requirements. The desensitization threshold d is a distance relation; that is, p* should keep a certain distance from its original position p. The anonymity threshold k is a structural relation; that is, the probability of recognizing the original position of p from p* is no more than 1/k.
We show the anonymity and desensitization of trajectory points after translocation in Figure 4. We combine the two spatial dimensions into one and mark the spatial-temporal dimensions as (x, y) for convenience.
In Figure 4, the dark gray area is the sensitive area of its centroid p, and the displaced position p* should be outside it to ensure desensitization. The light gray area is the replaceable area of p: if p is replaced within this area, desensitization is realized and the distortion of p* meets the requirements of the trajectory data analyst. If p is moved to the white area, the distortion exceeds the acceptable range of the data analyst. The upper limit of this distortion is usually set by the SA, so it is consistent for every trajectory point. We assume that, for each trajectory point p, the distance between the translocated position p* and the original position p cannot exceed the effective threshold d_max.
In order to ensure that the target trajectory point p* used for translocation satisfies k-anonymity, the number of trajectory points that can be translocated to p* should not be less than k. We use dotted arrows to represent the translocation relation between trajectory points. In Figure 4, five trajectory points can be translocated to p*, so p* meets 5-anonymity. The most important factor determining the strategy for protecting the privacy of trajectory points is whether the adversary knows the privacy requirements (k, d) of the target trajectory points. We discuss the privacy protection algorithms for the two cases separately in the following parts.

Privacy Requirements are Known by Adversary (PRK).
We assume that the adversary has mastered some trajectory points of the target as well as their privacy requirements (k_i, d_i). The trajectory body requires that p*_i be outside the sensitive area of p_i and that the probability of finding {t_i, x_i, y_i} be no more than 1/k_i. The data analyst requires that the distortion of p*_i not exceed d_max. In order to find a suitable p*_i for p_i, we design TPPP for publishing trajectories in the case where the Privacy Requirements are Known (PRK).

Description of the Algorithm.
We take two steps to find a trajectory point that meets the desensitization and anonymity requirements of p_i.
First, we search in TD for a candidate trajectory point for translocation (CTPT) p' that meets the desensitization requirements of p_i. The distance between p' and p_i is required to satisfy d_i ≤ dist(p_i, p') ≤ d_max (6). Second, we search for the trajectory points p_j that can be translocated to p' under the same desensitization threshold d_i as p_i: d_i ≤ dist(p_j, p') ≤ d_max (7). Then, the points conforming to formula (7) are added to the set P_j, whose number of elements is denoted |P_j|. Since d_i is uniformly used as the desensitization threshold, formula (7) actually reflects the adversary's method of inverting the position range of p_i, namely P_j, when d_i and p*_i are known. It must be noted that the anonymity level of p' is the number of trajectory points conforming to formula (7), namely |P_j|. If P_j satisfies |P_j| ≥ k_i (8), then p_i can be translocated to the position of p', and the probability of deducing the original position from d_i and p*_i is no more than 1/k_i.
Therefore, the TID_i of p_i can be translocated to p' to form a new trajectory point p*_i = (TID_i, t', x', y') (9). At this point, the records in TD* are (TID_i, t', x', y') and (t_i, x_i, y_i) (10). Record (10) shows that the algorithm deletes not p_i itself but its TID_i and privacy requirements. There are two benefits. On the one hand, the data analyst can make use of the trajectory points without a TID for necessary data analysis, such as computing the density of trajectory points at some time point, which improves the accuracy of the analysis. On the other hand, the trajectory points without a TID also increase the local density of trajectory points, which improves the anonymity level of the trajectory points adjacent to them.
If |P_j| < k_i, p' cannot satisfy k_i-anonymity; we then iteratively search TD for trajectory points that satisfy formulas (7) and (8) and translocate p_i to a new position.
What should we do for p_i when no trajectory point meets both formulas (7) and (8), that is, when there is no replaceable position that satisfies both the anonymity and desensitization requirements? To address this problem, we consider four options and choose the fourth, for the same reason as in Record (10).
(1) Increase the system distortion threshold d_max to expand the replaceable region of p_i; this causes more data distortion. (2) Decrease the sensitivity threshold d_i of p_i to expand the replaceable area; this leads to a privacy crisis. (3) Decrease p_i's anonymity threshold k_i; this would also lead to a privacy crisis. (4) Delete the TID_i of p_i. The pseudocode of TPPP (PRK) is shown in Algorithm 1.
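As a rough illustration of the two-step search described above, the following is a brute-force sketch without the time-window optimization introduced later. The function names and the stand-in distance metric are ours, not the paper's:

```python
import math
import random

def dist(p, q, v=1.0):
    # Any spatial-temporal metric works here; the paper uses the
    # modified Manhattan distance. A Euclidean term in (x, y) plus a
    # speed-scaled temporal term is used as a simple stand-in.
    return math.hypot(p[1] - q[1], p[2] - q[2]) + v * abs(p[0] - q[0])

def translocate_prk(points, privacy, d_max):
    """points: {pid: (t, x, y)}; privacy: {pid: (k, d)}.

    Returns {pid: pid of the chosen translocation target, or None
    when no position satisfies both requirements (option 4: only
    the TID is deleted)."""
    result = {}
    for i, p in points.items():
        k_i, d_i = privacy[i]
        candidates = []
        for j, q in points.items():              # step 1: CTPTs of p_i
            if j != i and d_i <= dist(p, q) <= d_max:
                # step 2: points that could also move to q under d_i
                # (this set includes p_i itself)
                P_j = [m for m, r in points.items()
                       if d_i <= dist(r, q) <= d_max]
                if len(P_j) >= k_i:              # k_i-anonymity check
                    candidates.append(j)
        result[i] = random.choice(candidates) if candidates else None
    return result
```

Note the quadratic inner loop: for every point, every candidate's anonymity set is recomputed over the whole data set, which is exactly the redundancy the time-window structure in the next subsection is designed to avoid.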

Data Structure.
In Algorithm 1, both step (3) and step (5) need to traverse the whole TD, which leads to many redundant calculations. We use a time-window structure based on a timing chain [17] to simplify this operation.
TD is stored as records, and we map TD to a linked list. The temporal distance between trajectory points is one-dimensional and is positively correlated with their spatial-temporal distance. Therefore, by filtering in the time dimension, the scope of comparison between trajectory points can be narrowed. We first sort the trajectory data list by sampling time to form a sequential linked list (SLL) and then limit the number of candidate trajectory points with the time-window structure.
This will greatly speed up the determination of the translocation relation between trajectory points. The time-window structure is shown in Figure 5. In step (3) of Algorithm 1, when the operation pointer (OP) points to p_i, we search for the CTPTs of p_i. Only for a point p' whose sampling time t' meets the condition |t' − t_i| ≤ d_max / v do we check whether the spatial-temporal distance between p_i and p' meets formula (7). If it does, p' is a CTPT of p_i.

Figure 4: The translocation relation between trajectory points. The distance between p* and p is greater than the sensitivity threshold d and less than the maximum distortion threshold d_max, so p* is in the desensitization region of p. There are 5 trajectory points that can be transferred to p*, so the trajectory points that can be transferred to p* satisfy 5-anonymity.
Although the purpose of step (5) is to find out which trajectory points can use p' as a CTPT, the desensitization threshold is the same as that in step (3). Therefore, step (5) can be regarded as searching for the CTPTs of p', analogous to searching for the CTPTs of p_i in step (3).
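A minimal sketch of the coarse time-dimension filter on a time-sorted list follows. The names are illustrative; the paper stores TD as a sequential linked list, while this sketch uses a sorted Python list with binary search, which plays the same role:

```python
import bisect

def time_window_candidates(points, i, d_max, v):
    """points: list of (t, x, y) triples sorted by sampling time t.

    Returns the indices of points whose sampling time lies within
    d_max / v of points[i]: the coarse one-dimensional filter applied
    before the full spatial-temporal distance check of formula (7)."""
    times = [p[0] for p in points]
    t_i = points[i][0]
    lo = bisect.bisect_left(times, t_i - d_max / v)
    hi = bisect.bisect_right(times, t_i + d_max / v)
    return [j for j in range(lo, hi) if j != i]

pts = [(0, 0, 0), (1, 5, 5), (2, 1, 1), (9, 9, 9)]
# window of width d_max / v = 2 around t = 1 keeps points with t in [-1, 3]
window = time_window_candidates(pts, 1, d_max=2, v=1)  # -> [0, 2]
```

Only the points inside the window are then compared with the full distance metric, which narrows the per-point work from the whole data set to N_w candidates.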

Analysis of the Efficiency.
In order to analyze the efficiency of the algorithm, we make the following assumptions. The sampling times of all trajectories are synchronous and their number is n, meaning that the length of every trajectory is n. The number of trajectories in TD is N. The trajectory points are uniformly distributed in the spatial-temporal range. The number of trajectory points in the time window is N_w. The number of CTPTs conforming to the desensitization requirements is N_d, and its ratio to N_w is a constant x%; that is, N_d = x% · N_w. We take a trajectory point p as an example to analyze the time complexity of the algorithm. The first step computes whether the distance between p and the trajectory points in its time window meets the desensitization threshold; its time complexity is O(N_w). The second step computes the anonymity level for each CTPT of p; its time complexity is O(N_w · N_d). Therefore, the time complexity for one trajectory point is O(N_w · (x% · N_w + 1)). We repeat this process for each point. Since the number of trajectory points in TD is N · n, the total time complexity of the algorithm is O(N · n · N_w · (x% · N_w + 1)).

Demonstration by Example.
We take p_1 in Figure 6 as an example to demonstrate the process of the PRK algorithm, where k_1 = 5.
First, we search for CTPTs in the desensitization region of p_1 (as shown in Figure 6): CTPTs(p_1) = {p_3, p_4, p_5, p_6}. Then, we further examine the anonymity of these CTPTs (Figure 7). Since the adversary has mastered the desensitization threshold d_1 of p_1, we suppose that p', a CTPT of p_1, can serve as the CTPT of k' trajectory points; these k' trajectory points include p_1, and their distances from p' satisfy formula (7). If k' ≥ 5, p' can be used as the replaceable position of p_1. Figure 7 shows whether each of the four CTPTs of p_1 satisfies 5-anonymity.
We can see from Figure 7 that p_4 and p_5 can be used as replaceable positions of p_1, so that p*_1 satisfies d_1-desensitization and 5-anonymity, and the distortion of p*_1 does not exceed d_max.

The Effectiveness on Privacy Protection.
In the example above, we assume that the translocation target of p_1 is p_5, and the published trajectory points are shown in Figure 8. The background knowledge known by the adversary includes k_1, d_1, and d_max. We analyze the effectiveness of p*_1 for privacy protection.
(1) Desensitization. As shown in Figure 8, we can draw a circle with p_1 as the center and d_1 as the radius. Since p*_1 (p_5) is outside the sensitive region of p_1, the desensitization of p*_1 for p_1 is guaranteed.
(2) Anonymity. The adversary can draw two concentric circles with d_max and d_1 as the radii, respectively, taking p*_1 as their center.
There are 5 trajectory points that can be translocated to the position of p*_1 while achieving desensitization, so the probability of determining the original position of p_1 is no more than 1/5; that is, p*_1 satisfies 5-anonymity.

(4) For each p′ in P
(5)   Select p_j from TD with d_i ≤ |p_j − p′| ≤ d_max into set P_j
(6)   If |P_j| ≥ k_i then send p′ to set P′
(7) Next p′
(8) If |P′| > 0 then select p′ from P′ randomly and add TID_i to p′
(9) Delete TID_i from p_i
(10) Next i
(11) Output TD as the TD*
ALGORITHM 1: Algorithm for desensitization and anonymity of trajectory points under PRK.
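The translocation loop of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the authors' implementation: the record fields (tid, pos, k, d) are assumed names, Euclidean distance stands in for the paper's modified Manhattan metric, and the time-window restriction is omitted for brevity.

```python
import math
import random

def dist(a, b):
    # Euclidean distance on the same time slice (stand-in for the
    # paper's modified Manhattan metric)
    return math.hypot(a[0] - b[0], a[1] - b[1])

def prk_translocate(points, d_max):
    """points: list of dicts {tid, pos, k, d}; returns published copies.
    Sketch of Algorithm 1 (PRK): for each p_i, find CTPTs within
    [d_i, d_max], keep those giving k_i-anonymity under the known d_i,
    then move TID_i to a random safe CTPT and delete it from p_i."""
    out = [dict(p) for p in points]
    for p in out:
        # candidate translocation points (CTPTs) of p_i
        ctpts = [q for q in out
                 if p["d"] <= dist(p["pos"], q["pos"]) <= d_max]
        # a CTPT is safe if at least k_i points could be replaced to it
        safe = []
        for c in ctpts:
            pj = [q for q in out
                  if p["d"] <= dist(q["pos"], c["pos"]) <= d_max]
            if len(pj) >= p["k"]:
                safe.append(c)
        if safe:
            target = random.choice(safe)
            target.setdefault("tids", []).append(p["tid"])
        p["tid"] = None  # identifier removed from the original position
    return out
```

Note that all original positions survive unchanged; only the identifiers move, which is the core of the translocation idea.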
information, so we gave up preserving the temporal characteristics in the published trajectory data. Our strategy divides the information of a trajectory point into two parts (〈TID〉, 〈t, x, y〉) and then transfers its 〈TID〉 to a location that meets the privacy protection requirements, so there is no spatial-temporal conflict in the published trajectories. The distortion of the transformed trajectories is limited by d_max, so the overall trend of the trajectories is preserved and the statistical characteristics of the trajectory points are less distorted.
Intuitively, our method defines the scope of the original trajectory T with the published trajectory T*: a tubular region centered on T*, where the wall thickness of the tube at the sampling time t can be represented as (d_max − d_t). The original trajectory T is on the wall of the tube. This is similar to the result of the methods based on trajectory clustering, which generalize the original trajectories by a curved column with axis T* and radius δ.

Privacy Requirements are Unknown by Adversary (PRU).
In the case that the privacy requirements are unknown, we assume that the adversary's background knowledge is only d_max. The adversary's target and the privacy requirements of the trajectory bodies are consistent with those in PRK. Since the adversary does not know (k_i, d_i), the privacy requirements of p_i, it is impossible to judge the position range of p_i based on d_i and p*_i. Therefore, formula (6) in PRK can be changed as follows:

d_j ≤ |p_j − p′| ≤ d_max. (12)

It is reasonable to replace d_i with the desensitization threshold d_j of p_j. Formula (12) reflects whether p′ can be used as a CTPT of p_j to realize the desensitization of p_j. Another benefit of this change is that |P_j| is fixed as the number of trajectory points that can be replaced to p′ while realizing desensitization, namely, the anonymity of p′. In this way, we can calculate the anonymity of all trajectory points in one pass, instead of calculating the anonymity based on d_i for each CTPT of p_i.
Comparing formula (7) with formula (12), we can see that they have a similar form. In other words, in the case of PRU, by one round of comparison such as formula (7), the judgment of whether p′ can be used as a CTPT of p_i can be completed, and the accumulation of the anonymity of p′ is finished at the same time.
In order to record the distance relation and structural relation between trajectory points in TD, we map the trajectory points and their position relations to a directed relation network of trajectory points (RNTP), where the nodes represent the trajectory points. Desensitization can be achieved by replacing the trajectory point at the beginning of a directed edge with the trajectory point at its end. The indegree of a node p indicates the number of points taking p as a CTPT, which also reflects the anonymity of p.
On the obtained RNTP, for each trajectory point p_i, we select a neighbor whose indegree is not less than k_i as the translocation trajectory point of p_i. If there is no such neighbor, we delete TID_i of p_i directly, for the same reason as in Record (10).

The Data Structure of RNTP.
In order to improve the efficiency of execution, we select the linked list as the storage structure of RNTP [17]. As shown in Figure 9, P_previous points to the previous node of the current node p, SerialNum is the unique serial number of p on the TD list, and TID is the identifier of the trajectory point, representing the body of the trajectory point. (t, x, y), k, and d are the spatial-temporal position, anonymity threshold, and desensitization threshold of the trajectory point, respectively. P_next points to the next node of p, NeighborNum is the indegree of p, and P_neighbor points to the neighbors tree (NT) of p.
The neighbor set of the current node is stored in the form of a binary search tree. P_father points to the parent node of p, SerialNum is the unique sequence number of the neighbor node, P_left and P_right point to the left and right child nodes of p, and P_neighbor points to the current neighbor in the main storage linked list.
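The node layout described above can be sketched as Python dataclasses. The field names mirror those in Figure 9; the flat class representation (rather than the original pointer-based C-style layout) is an assumption for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NTNode:
    """Node in a trajectory point's neighbors tree (a binary search
    tree keyed by the neighbor's serial number on the main list)."""
    serial_num: int
    neighbor: "RNTPNode"                # the neighbor on the main list
    left: Optional["NTNode"] = None     # P_left
    right: Optional["NTNode"] = None    # P_right

@dataclass
class RNTPNode:
    """Node of the main doubly linked list, one trajectory point."""
    serial_num: int                     # unique position on the TD list
    tid: Optional[int]                  # trajectory identifier (the body)
    t: float                            # sampling time
    x: float                            # spatial position
    y: float
    k: int                              # anonymity threshold
    d: float                            # desensitization threshold
    prev: Optional["RNTPNode"] = None   # P_previous
    next: Optional["RNTPNode"] = None   # P_next
    neighbor_num: int = 0               # indegree = achievable anonymity
    nt_root: Optional[NTNode] = None    # root of the neighbors tree
```

Keeping the indegree on the main-list node means the anonymity of a point can be read off in O(1) once the network is built.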

Construction of RNTP.
To improve efficiency, the trajectory points in TD are first stored in a SLL, and the concept of the time-window is introduced, which is the same as in PRK. The difference is that PRU needs to link the trajectory points in TD into a directed relation network through the neighbor tree of each trajectory point.
In order to build the RNTP, we detect CTPTs for each trajectory point in the time-window of p_i. If p′ satisfies d_i ≤ |p_i − p′| ≤ d_max, the directed edge from p_i to p′ is established, and the indegree of p′ is increased by 1, which is also the anonymity of p′. The schematic diagram of the time-window and neighbor tree of the node p_i is shown in Figure 10.
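The construction step can be sketched as follows. A plain dict-of-lists stands in for the linked-list-plus-BST storage, Euclidean distance stands in for the paper's metric, and the field names (id, t, x, y, d) are assumptions.

```python
import math

def build_rntp(points, d_max, window):
    """points: list of dicts {id, t, x, y, d}. Returns (edges, indeg):
    edges[i] lists the CTPTs of point i; indeg[j] is the indegree
    (achievable anonymity) of point j."""
    pts = sorted(points, key=lambda p: p["t"])
    edges = {p["id"]: [] for p in pts}
    indeg = {p["id"]: 0 for p in pts}
    for p in pts:
        for q in pts:
            if q is p or abs(q["t"] - p["t"]) > window:
                continue  # q is outside the time-window of p_i
            dpq = math.hypot(p["x"] - q["x"], p["y"] - q["y"])
            if p["d"] <= dpq <= d_max:
                # q is outside p's sensitive region but within d_max:
                # a directed edge p -> q, and q's indegree grows by 1
                edges[p["id"]].append(q["id"])
                indeg[q["id"]] += 1
    return edges, indeg
```

Because each edge test uses the *source* point's own threshold d_i, the network directly encodes the per-point personalized desensitization requirement.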

Algorithm Based on Translocation of Trajectory Points on RNTP.
In the RNTP, NT_i has recorded all CTPTs that satisfy desensitization. In order to make p*_i satisfy k_i-anonymity, we only need to select a trajectory point p′ randomly from NT_i whose indegree is not less than k_i and translocate TID_i of p_i into p′ to complete the personalized privacy protection of p_i. The new trajectory point formed after translocation is as shown in formula (9). At this time, the record form in TD* is consistent with Record (10) in the case of PRK. The pseudocode of the algorithm is shown as follows.
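The translocation step on the network (the idea behind Algorithm 3) can be sketched like this, again with an assumed dict representation of the RNTP rather than the original linked-list storage.

```python
import random

def translocate_on_rntp(points, edges, indeg):
    """points: {id: {"tid": ..., "k": ...}}; edges/indeg as built from
    the RNTP. Moves each TID to a random CTPT whose indegree >= k_i,
    or deletes it when no such CTPT exists (as for Record (10))."""
    placed = {pid: [] for pid in points}   # TIDs published at each position
    for pid, p in points.items():
        targets = [q for q in edges[pid] if indeg[q] >= p["k"]]
        if targets:
            placed[random.choice(targets)].append(p["tid"])
        # else: TID_i is simply deleted from the output
    return placed
```

A point whose anonymity requirement exceeds every neighbor's indegree (like p_9 in the worked example below) ends up with its TID deleted rather than translocated.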
By calling Algorithms 2 and 3 successively, the personalized privacy protection based on translocation can be realized, and we can obtain TD * .

Analysis of the Efficiency.
In the case of PRU, the assumptions are the same as for PRK. The privacy protection algorithm is divided into two steps, constructing the RNTP and the translocation of trajectory points, so we analyze the efficiency of the algorithm in two steps.
Step 1. Time complexity of Algorithm 2. Algorithm 2 estimates whether the distance between trajectory point p_i and the points in its time-window meets the desensitization requirement, joins each trajectory point p′ that meets the requirement into the neighbor tree NT_i of p_i, and then updates the indegree of p′. The time complexity is O(N_w). The time complexity of the above operation for all trajectory points is O(N · n · N_w).
Step 2. Time complexity of Algorithm 3. The time complexity of finding CTPTs that meet k_i-anonymity in the neighbor tree of p_i is O(N_d). The time complexity of the above operation for all trajectory points in the RNTP is O(N · n · N_d).
In the case of PRU, the privacy protection of the TD can be completed by calling Algorithms 2 and 3 successively, so the total time complexity is O (N · n · N w · (x% + 1)).

Demonstration of Example.
We take the trajectory points in Figure 6 as the example to demonstrate the process of the privacy protection algorithm in the case of PRU. The analyst sets a consistent maximum distortion threshold d_max for all trajectory points, but the sensitivity threshold d of each trajectory point is independent. Therefore, each trajectory point has an independent range for selecting its CTPTs. For example, in Figure 11(a), the sensitivity threshold of p_3 is small, and p_4 is not in its sensitivity range, so p_4 can be regarded as a CTPT of p_3. The sensitivity threshold of p_4 is relatively large, and p_3 is in the sensitive range of p_4, so p_3 cannot be regarded as a CTPT of p_4. According to Figure 11(a), we can establish the RNTP, as shown in Figure 11(b).
We can see from Figure 11(b) that the indegree of each trajectory point is its anonymity level, and it can be statistically calculated, as shown in Table 1.
The anonymity requirements of each trajectory point are shown in Table 1. Then we can randomly translocate the trajectory points to adjacent trajectory points that meet their anonymity requirements, as shown in Figure 12.
In Figure 11(b), the anonymity threshold of p_9 is 4, but there is no trajectory point whose anonymity reaches 4 among its CTPTs, so only the TID of p_9 is deleted, for the same reason as in Record (10). p_3 and p_8 were both translocated to the position of p_4, but no trajectory points were translocated to the position of p_8. The result is shown in Figure 12.

The Effectiveness of TPPP (PRU).
According to the assumption of the background knowledge of the adversary in the case of PRU, the information known is only d max . We take p * 1 in Figure 12 as an example to analyze the effectiveness of privacy protection after translocation.
(1) Desensitization. As shown in Figure 11(a), we draw a circle with p_1 as the center and d_1 as the radius. Since p*_1 is outside the sensitive region of p_1, the desensitization of p*_1 with respect to p_1 is guaranteed.
(2) Anonymity. As shown in Figure 12, the adversary can draw a circle with p*_1 as the center and d_max as the radius. There are three trajectory points that can be translocated to the position of p*_1 and achieve desensitization, so the probability of determining the original position of p*_1 is no more than 1/3. In Figure 12, if the adversary knows the privacy requirements (k_1, d_1) of p_1, then when d_1 > Dis(p*_1, p*_6) and d_1 > Dis(p*_1, p*_2), the original location of p_1 can be uniquely determined as p*_4. Therefore, the published trajectory data generated by the PRU algorithm is not safe in the case of PRK.

Figure 9: Data structure of the node and its NT in RNTP. (Fields on the chained list: P_previous, SerialNum, TID, (t, x, y), P_next, NeighborNum, P_neighbor, k, (dt, dx, dy); fields in the neighbors tree: P_father, SerialNum, P_left, P_right, P_neighbor.)

Simulation and Experimentation

6.1. The Purpose of Experiment and Baseline Algorithms. We proposed the TPPP algorithm and analyzed its effectiveness in the sections above. Here we compare typical trajectory privacy protection algorithms with TPPP in terms of data distortion, mobility patterns, and execution efficiency on TDs with personalized privacy requirements. Considering the personalization and granularity of the privacy protection objects, we select three typical trajectory privacy protection algorithms as baseline algorithms.
Considering the nonpersonalized trajectory privacy protection algorithms based on trajectory anonymity, we chose the classical NWA algorithm [7]. Considering the personalized privacy protection algorithms based on trajectory anonymity, we selected the algorithm in [1], represented as CPPP. Considering the nonpersonalized privacy protection algorithms based on anonymity of trajectory points, we chose the IPKN algorithm [17]. The TPPP algorithm is a personalized privacy protection algorithm based on anonymity of trajectory points.

(1) Input: TD
(2) Map TD to a sequential linked list (SLL)
(3) For i = 1 to |TD|_p
(4)   For each p′ in the time-window of p_i
(5)     If d_i ≤ |p_i − p′| ≤ d_max then
(6)       create a node in NT_i and point its P_neighbor to p′
(7)       p′.NeighborNum++
(8)     Endif
(9)   Next p′
(10) Next i
(11) Output SLL as the RNTP
ALGORITHM 2: Map TD to a relation network of trajectory points.

(1) Input: RNTP
(2) For i = 1 to |TD|_p
(3)   Select a node p′ in the NT_i of p_i with p′.NeighborNum ≥ k_i
(4)   If exist p′ then add TID_i to p′
(5)   Delete TID_i from p_i
(6) Next i
(7) Output RNTP as the TD*
ALGORITHM 3: Translocation of trajectory points on RNTP.
In this paper, the TPPP algorithm is proposed for the cases where the Privacy Requirements are Known and where the Privacy Requirements are Unknown; for simplicity, we refer to them as PRK and PRU. Therefore, there are five algorithms tested in the experiment: NWA, CPPP, IPKN, PRK, and PRU.

The Method of Experiment.
We design two groups of experiments.
The first group of experiments compares the classical algorithms with our algorithms. The second group of experiments compares the two proposed algorithms on TDs with different characteristics of personalized privacy requirements. Then the data distortion and mobility patterns of the processed TD*s and the time efficiency of the algorithms are evaluated.

Data Distortion.
The information loss in TD* is represented by formula (12).
|TD|_p represents the number of trajectory points in TD, p*_i is the translocated position of p_i, and Dis represents the Manhattan distance between the two positions.
If p_i is deleted by the privacy protection algorithm, its distortion is generally defined as a constant value Ω. For a suppressed trajectory point, the following method is adopted to calculate its Dis: we assume that, on the microscopic trajectory segment T, the trajectory body performs uniform linear motion. According to the time position of p_i between p_{i−1} and p_{i+1} in T, we can determine a position p′_i on the line from p*_{i−1} to p*_{i+1} and take the distance between p_i and p′_i as the distortion caused by deleting p_i. We illustrate this approach in Figure 13. Suppose that there are three continuous trajectory points p_1, p_2, and p_3. When p_2 is suppressed by the privacy protection operation, only p*_1 and p*_3 will be present in TD*. When calculating Dis(p_2, p*_2), we take x = |p_1 p_2| / |p_2 p_3| as the proportional parameter and take the point p′_2 on the line p*_1 p*_3 such that |p*_1 p′_2| / |p′_2 p*_3| = x. Then we treat Dis(p_2, p′_2) as the distortion of p_2 in TD*.
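A minimal sketch of this interpolation, with Euclidean distance standing in for the paper's modified Manhattan metric:

```python
import math

def suppressed_distortion(p1, p2, p3, p1_star, p3_star):
    """Distortion of a suppressed point p2: place p2' on the segment
    p1*-p3* so that |p1* p2'| / |p2' p3*| = |p1 p2| / |p2 p3|, then
    return Dis(p2, p2'). All points are (x, y) tuples."""
    d12 = math.dist(p1, p2)
    d23 = math.dist(p2, p3)
    lam = d12 / (d12 + d23)          # fraction of the way from p1* to p3*
    p2_prime = (p1_star[0] + lam * (p3_star[0] - p1_star[0]),
                p1_star[1] + lam * (p3_star[1] - p1_star[1]))
    return math.dist(p2, p2_prime)
```

For example, with p_1 = (0, 0), p_2 = (1, 0), p_3 = (3, 0) and published points p*_1 = (0, 2), p*_3 = (3, 2), the interpolated p′_2 is (1, 2) and the distortion charged to the suppressed p_2 is 2.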

Mobility Patterns.
In order to capture the difference in mobility patterns before and after the transformation of trajectory data, we refer to the method of [19], including trajectory direction deviation (TDD) and trajectory distance utility (TDU).
TDU is obtained by the following formula, which represents the distance similarity between trajectories. The experimental results on the trajectory data are the same as those of Infloss, and this conclusion can also be derived mathematically. We differ from [19] only in that we do not normalize the results; the advantage of this is that it is easier to compare with the formula for Infloss.
TDD is obtained by the following formula, where θ_i is the angle between the ith trajectory segment on T and that on T*. TDD reflects the direction change and temporal characteristics of the trajectory segments.
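Since the TDD formula itself is not reproduced in this text, the following sketch computes one plausible reading of it: the mean angle between corresponding segments of T and T*. The averaging step is an assumption; the per-segment angle θ_i is as described above.

```python
import math

def tdd(traj, traj_star):
    """Mean angle (radians) between corresponding segments of the
    original trajectory and its published counterpart. Both inputs are
    equal-length lists of (x, y) points."""
    angles = []
    for i in range(len(traj) - 1):
        v = (traj[i + 1][0] - traj[i][0], traj[i + 1][1] - traj[i][1])
        w = (traj_star[i + 1][0] - traj_star[i][0],
             traj_star[i + 1][1] - traj_star[i][1])
        nv, nw = math.hypot(*v), math.hypot(*w)
        if nv == 0 or nw == 0:
            angles.append(0.0)  # degenerate segment: no direction change
            continue
        cos_t = (v[0] * w[0] + v[1] * w[1]) / (nv * nw)
        angles.append(math.acos(max(-1.0, min(1.0, cos_t))))
    return sum(angles) / len(angles)
```

A straight eastbound trajectory published as a straight northbound one would yield θ_i = π/2 for every segment, i.e. maximal direction deviation short of a reversal.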

Two Instructions.
For trajectory data whose personalized privacy requirements are set on trajectory points by the trajectory bodies, there are few widely used privacy protection methods for publishing trajectories. Therefore, the baseline algorithms we chose cannot directly process a TD with personalized privacy requirements. In order to effectively compare and evaluate the various algorithms, we make the following two statements about issues encountered in the experiment.

(1) Personalized Privacy Requirements Fundamentally Change the Experimental Method of the Privacy Protection Algorithms
Traditionally, a TD is generated by the trajectory bodies without personalized privacy requirements, while the privacy requirements are set by the data publisher. Therefore, we would compare the performance of different algorithms under the same privacy requirements, and compare the applicable scope of the algorithms by changing the TD's overall privacy requirements.
However, in a TD with personalized privacy requirements, both the TD and the privacy requirements are generated by the trajectory bodies. A personalized privacy protection algorithm must handle the different privacy requirements in the TD at the same time. So the experiment cannot compare the applicable scope of the algorithms by adjusting the privacy requirement from the perspective of the TD publisher. To create different comparison scenarios for the algorithms, TDs with different personalized privacy requirements must be provided. This is why we constructed 24 TDs with personalized privacy requirements in the following section.

(2) How to Apply the Baseline Algorithms Is an Important Problem for a TD with Personalized Privacy Requirements on Trajectory Points
For the personalized privacy protection methods for trajectories, we take the median of the privacy requirements of all trajectory points in the current trajectory as the privacy requirement of that trajectory, where the anonymity threshold of trajectory T_i is represented by k_Ti.
For the nonpersonalized privacy protection methods for trajectories, we take the median of the privacy requirements of all trajectories in TD as the privacy requirement of TD, and the anonymity threshold of TD is represented by k_TD.
For the nonpersonalized privacy protection methods for trajectory points, we take the median of the privacy requirements of all trajectory points in TD as the privacy requirement of TD, expressed by k_P.
In the above three baseline algorithms, the distortion threshold d_max is the same, and it is set by the trajectory publisher according to the data accuracy required by the trajectory analyst. In the different algorithms, d_max represents the cluster radius or the maximum distortion radius. d_max is also the only parameter in PRK and PRU, because the privacy parameters used in these two algorithms are set by the trajectory bodies in TD.
The following example illustrates the estimation process of the above parameters. Suppose that a TD contains three trajectories of length nine; the anonymity parameters of the trajectory points are shown in Figure 14.
We can obtain the following results from the anonymity parameters of the trajectory points. The median of all the anonymity parameters in TD is k_P = 3. The anonymity parameters of the three trajectories are 3, 3, and 2, respectively. The anonymity parameter of all trajectories in TD is k_TD = 3.
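These medians can be checked with a few lines of Python. The 3 × 9 grid below is hypothetical (the actual entries of Figure 14 are not reproduced here), chosen only to be consistent with the medians reported in the text.

```python
from statistics import median

# Hypothetical per-point anonymity parameters: 3 trajectories x 9 points,
# standing in for the values of Figure 14.
k = [[1, 2, 3, 3, 4, 2, 3, 5, 3],
     [2, 2, 3, 4, 3, 3, 1, 3, 5],
     [1, 2, 2, 3, 2, 4, 2, 1, 2]]

k_Ti = [median(row) for row in k]            # per-trajectory threshold (for CPPP)
k_TD = median(k_Ti)                          # whole-TD threshold (for NWA)
k_P = median(v for row in k for v in row)    # per-point threshold (for IPKN)
```

With these values, k_Ti comes out as (3, 3, 2), k_TD as 3, and k_P as 3, matching the worked example.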
Inevitably, our choice of the median results in some privacy risks for TD * . However, in the experiment, compared with choosing the maximum privacy requirement as the privacy protection parameter of the algorithms, this choice is relatively fair for different algorithms.

The Data for Experiment.
Although there are some LBS that can set personalized privacy requirements, the availability of such data is limited. e data in the experiment is usually constructed manually. We select two examples of typical trajectory data as the basic data for constructing trajectory data with personalized privacy requirements, which are the real-life TD Trucks and the synthetic TD Oldenburg. For a detailed introduction of these two TDs, please refer to [17]. We only refer to the relevant Figure 15 and Table 2 in [17] to provide a brief description.
In Table 2, we describe the characteristics of the two TDs: |TD| is the number of trajectories, Area is the spatial span covered by TD, and T span is the time span; AS is the average speed, and Density is the spatial-temporal density of trajectory points, which is the quotient of the trajectory point's number and spatial-temporal span.
There are two methods to construct experimental data in the existing personalized trajectory privacy protection literature. The first method simulates the trajectory bodies and randomly adds personalized anonymity requirements to the trajectory data to form TDs [1-3, 24, 27-29]. The second method, according to the distribution of the trajectory data, uses the reverse compilation method of a digital map to set personalized privacy requirements for trajectory points; this setting is a subjective estimate based on the data itself [19, 21-23].
We use the first method and simulate real trajectory data by adding personalized privacy requirements to the TDs. According to the different densities of the trajectory data, we set different personalized anonymity thresholds. In the sparse Trucks, k_T ∈ [1, 10]. In the dense Oldenburg, k_O ∈ [1, 100]. The range of the desensitization threshold is uniformly restricted to d ∈ [0, 1000]. The maximum distortion threshold d_max is uniformly set to 8000.
Generally, the level of privacy requirements is inversely proportional to its share of TD. That is to say, there are more trajectory records with low privacy requirements and fewer trajectory records with high privacy requirements. Similar to [27, 30], the Zipf distribution is used to generate personalized privacy requirements that follow the probability density function P(r) = C · r^(−α), where r represents the level of the privacy requirement; the larger r is, the stronger the privacy requirement. P(r) represents the proportion of trajectory points in the whole trajectory data whose privacy requirement intensity is equal to r. α represents the distribution density of the trajectory points with different privacy requirements, and the density of the distribution increases with α. C is an empirical normalization constant whose function is to ensure that the proportions of the trajectory points with different privacy requirements sum to 1. In Table 3 we list C and the median of the generated data for different α and different ranges of r. Figure 16 shows the ratio of different sample values when r ∈ [1, 10] and α is 1, 0, and −1, respectively.
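A sketch of this sampling step, assuming P(r) ∝ r^(−α) over r ∈ [1, r_max] as described; the function name and the seeding scheme are illustrative, not from the paper.

```python
import random

def zipf_levels(n, r_max, alpha, seed=0):
    """Sample n privacy levels r in [1, r_max] with P(r) = C * r**(-alpha).
    The normalization constant C is handled implicitly by random.choices,
    which only needs relative weights."""
    rng = random.Random(seed)
    weights = [r ** (-alpha) for r in range(1, r_max + 1)]
    return rng.choices(range(1, r_max + 1), weights=weights, k=n)
```

With α = 1 the low levels dominate (many points with weak requirements), α = 0 gives a uniform spread, and α = −1 inverts the skew toward strong requirements, matching the three distributions of Figure 16.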
In the first group of experiments, the baseline algorithms NWA and CPPP do not need to desensitize trajectories. In order to compare the baseline algorithms with our algorithm, we constructed six TDs based on the two original TDs, which only contain personalized anonymous requirements but not personalized desensitization requirements.
Obviously, when α is 0 or − 1, the personalized privacy requirements generated for the trajectory points do not match the facts. However, it is beneficial to test the performance of algorithms on TDs with different characteristics of privacy requirements. e parameters for constructing these six TDs are shown in Table 4.
In the second group of experiments, in order to test our proposed algorithms PRK and PRU on multiple TDs, we set different distribution parameters α for the privacy parameters (k, d) on the different original TDs and constructed eighteen TDs with personalized privacy requirements. The list of parameters for constructing TD_7–TD_15 from Trucks is shown in

Figure 14: Estimation process of privacy protection parameters in different algorithms.

Table 4 (fragment): ID; original data and the range of k; distribution parameter of k. TD_1: Trucks, …

In the first group of experiments, we execute the three baseline algorithms and our algorithms on the constructed TD_1–TD_6. The TD*s generated by the privacy protection algorithms are compared with the constructed TDs, and the information loss under the different algorithms and constructed TDs is obtained by formula (12) and shown in Figure 17.
In Figure 17, α is the distribution parameter of the personalized anonymity requirements, and the distribution of the samples is shown in Figure 16. lg(Infloss) is the logarithmic form of the information loss. This expression has two advantages: first, it clearly distinguishes the magnitude difference in data distortion between the trajectory-based methods and the methods based on trajectory points; second, it reflects the slight differences between similar methods.
For TDs with personalized privacy requirements, the data distortion generated by a personalized algorithm is of the same magnitude as that generated by the nonpersonalized algorithm of the same granularity (NWA versus CPPP and IPKN versus TPPP). However, we must recognize that the nonpersonalized algorithms (NWA and IPKN) use the median of the personalized privacy parameters as the privacy requirement. This also means that 50% of the trajectory points fail to meet their personalized privacy requirements.
When TDs with personalized privacy requirements are processed by the nonpersonalized algorithm, if the maximum privacy requirement is regarded as the algorithm parameter in order to meet the privacy requirements of all trajectory points, the obtained TD * s will cause serious data distortion.

Infloss increases as α decreases.
This is because the distribution parameter α is negatively correlated with the level of the privacy requirements: as α decreases, the distributions of the sensitivity threshold d and the anonymity threshold k gradually shift toward larger values, so the distortion increases.
The following illustrations of the experimental results show a similar phenomenon.
In addition, the TD*s generated by the privacy protection algorithms based on trajectory anonymity cause serious data distortion, whereas the distortion of the TD*s generated by the algorithms based on anonymity and desensitization of trajectory points is two orders of magnitude smaller.
Since both algorithms are based on translocation of the original trajectory points, the distortions of the TD*s generated by them are similar; the difference lies mainly in efficiency and adaptability. A small amount of variation in data distortion results from the uneven distribution of the original data. The data distortion on the different data sets is shown in Figure 18.

Result of TDD.
In the test of mobility patterns, we refer to TDU and TDD. The results of TDU are the same as those of Infloss, so they are not drawn again. Only the results of TDD, which represent the directional similarity between trajectories, are shown here.
(1) TDD of the TD*s in the First Group of Experiments. We calculated the TDD with different algorithms and parameters on the two sets of experimental data generated from the different original data; Figure 19 shows how well the algorithms preserve the trajectories' mobility patterns.
As can be seen from Figure 19, since the replacement trajectory point is found on the same time slice, the published trajectories generated by the trajectory clustering algorithms (NWA and CPPP) maintain the mobility patterns well. However, the privacy protection algorithms based on translocation of trajectory points lead to a large deviation in mobility patterns. On the experimental data based on the different original data, the performances of the algorithms are similar, and the effect of mobility pattern maintenance is ordered as follows: NWA > CPPP > TPPP > IPKN.
(2) TDD of the TD * in the Second Group of Experiments. To compare PRK with PRU, we tested them on experimental trajectory data with different distribution of privacy requirement parameters (k, d) and calculated the TDD of the published trajectory data.
As can be seen from Figure 20, there is no significant difference between the TDD performances of the two algorithms on the different data sets, but both become worse as the distribution parameter α_k decreases, while there is no significant rule or difference between the TDD of PRK and PRU within each trajectory data set.
From the above experiments, we can see that our algorithm is not outstanding in mobility patterns retention but performs better in the similarity of trajectory distance and information loss. It means that we gain an advantage in the distance similarity of trajectory at the cost of the loss of mobility patterns.

Efficiency of Our Algorithms.
In the data distortion experiment, we select different TDs, algorithms, and parameters. We also measure the execution time of those experiments.
(1) The Time Efficiency of the Algorithms in the First Group of Experiments. It can be seen from Figure 21(a) that the execution time of IPKN is the largest, which is caused by the iteration that constructs the k-core subnet. The execution time of NWA and CPPP is relatively small because the time for calculating the trajectory relation network of TD is not included. The PRK algorithm needs to calculate the CTPTs for each trajectory point based on its personalized privacy requirements, so its time consumption is larger than that of PRU. PRU can construct the RNTP in one pass, so it saves a lot of time.
As shown in Figure 21, due to the different data sizes of the TDs, the differences in execution time are obvious. The efficiency of the algorithms is basically the same under different privacy requirements on the same TDs. This sublinear relation between efficiency and data scale is beneficial to the analysis and processing of large-scale TDs. As shown in Figure 22, the efficiency of both algorithms is not sensitive to the privacy distribution but is only related to the data size. PRU is more efficient because it can calculate the CTPTs of all trajectory points in one pass, and its premise that the adversary does not grasp the personalized privacy requirements of the target is also consistent with reality. Therefore, PRU is an ideal privacy protection algorithm at the granularity of trajectory points, and it also provides users with the function of setting personalized privacy requirements.

Conclusion
We propose a Translocation-based Personalized Privacy Protection (TPPP) method for publishing trajectories. Our algorithm abandons the clustering methods that take trajectories as the privacy protection objects and avoids the serious data distortion caused by the curse of dimensionality. TPPP achieves personalized anonymity and desensitization of trajectory points while retaining all original trajectory points. According to whether the privacy requirements are known to the adversary, the algorithm is divided into PRK and PRU. The two algorithms prove to be feasible when run on trajectory data with personalized privacy requirements and compared with the baseline algorithms. We also propose a modified Manhattan distance metric to measure the distance between trajectory points in the space-time region, which improves on the Euclidean distance in measuring the reachable distance between two points.
Although the algorithm in this paper improves the availability of the published trajectory data, it gives up trajectory anonymity and reduces the level of privacy protection. This tradeoff between the availability and privacy of the published data admittedly limits the applicability of the proposed algorithm.
In this article, we assume that the trajectory data is published statically. In fact, as the SA receives LSRs continually, the trajectory data is updated constantly, so the dynamic release of trajectory data is inevitable. How to ensure that the combination of successively released trajectory data does not divulge privacy and how to improve the efficiency of dynamically publishing trajectory data are problems to be investigated in the future.

Data Availability
All data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.