Designing a 6G Testbed for Location: Use Cases, Challenges, Enablers and Requirements

Location will have a central role in Research and Development (R&D) towards 6G networks, both as a service offered by the network (improving on the current offering of 5G) and as an input to increasingly location-aware services and network functions. To integrate location into 6G standards, it will be very important to design validation systems such as testbeds, even while the actual technology is not yet commercially available. This paper reviews the use cases and their requirements, the enabling technologies in 6G, and the associated challenges, and proposes a flexible testbed architecture for network-location R&D. This architecture allows the deployment of an evolving infrastructure that enables early validation of 6G technologies.


I. INTRODUCTION
In recent years, as mobile devices have taken over the world, location has become a key dimension in communications. New location-aware services are being proposed and deployed, such as localization in emergencies (floods, fires, earthquakes, etc.) [1], intruder detection [2], Unmanned Autonomous Robot (UAR) navigation [3], and self-driving vehicles. Existing network management procedures are also being enriched with location as a new dimension, such as location-based traffic prediction [4].
Many of these applications cannot rely on traditional Global Navigation Satellite Systems (GNSS). In some cases, they work in indoor environments, where GNSS cannot be used due to lack of satellite visibility. In other cases, energy constraints require that the location be estimated in the network instead of on the device, to save on computational power or the need for additional location circuitry. Some applications also require the network to know the position, which would require the terminals to transmit their location using specific protocols [5] (with their associated costs). Therefore,
there is a need for network-based location estimation [6]. In this case, the mobile network infrastructure is used to locate a user instead of satellites, using signal features such as received power or angle of arrival. The performance of network-based location will greatly depend on the capabilities of the underlying network technology.
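As a concrete illustration of the principle, network-based positioning can be reduced to multilateration over ranges derived from such signal features. The following minimal Python sketch is illustrative only (the base-station coordinates, the noiseless ranges, and the `trilaterate` helper are assumptions, not part of any standard); it linearizes the range equations and solves the resulting 2×2 system:

```python
import math

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from three anchor positions and range
    measurements, by linearizing the circle equations and applying
    Cramer's rule to the resulting 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    # Subtracting the first circle equation from the other two yields
    # a linear system A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three hypothetical base stations and noiseless ranges to a device at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(a, true_pos) for a in anchors]
print([round(v, 6) for v in trilaterate(anchors, ranges)])  # → [3.0, 4.0]
```

In practice the ranges would be noisy and more than three reference points would be combined in a least-squares fashion, but the sketch captures the geometry that network-based methods exploit.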
While 5G is still in early-stage deployment, studies foreseeing how Sixth Generation (6G) cellular networks will evolve in the next ten years have started, with white papers such as [7] by the European Commission and [8] by the International Telecommunication Union (ITU), and projects such as Hexa-X [9]. 6G will bring better network capabilities than 5G [10], such as throughputs of Tbps [11], latencies below one millisecond [12], very reliable communication (99.9999%) [13], and high-accuracy localization at the centimeter level [14].
While the grounds for the development of 6G are being laid, there is no unified definition of what such networks will specifically contain. Most authors [15], [16], [17] agree that Artificial Intelligence (AI) and Machine Learning (ML) will play a central role in both the user and control planes, giving rise to new applications such as ML as a Service (MLaaS) [18], [19], [20] and novel AI/ML-based network management schemes. At the architectural level, Network Function Virtualization (NFV) will implement the network elements that support both the user and control planes [21], using Commercial Off-The-Shelf (COTS) hardware and reducing the cost of infrastructure. In this respect, Open Radio Access Networks (Open RAN) [22], [23] are a major breakthrough that is already being used in 5G networks and will continue to be a central feature in 6G, enabling the easy integration of software components from different vendors and the speedy creation of new services and functions. Software Defined Networks (SDN [24]) will allow the definition of a dynamic architecture that can reconfigure the network quickly and adapt it to changes in context (such as changes in traffic, variations in user behavior, the Deep Network Slicing [17], [25] functionality, the occurrence of catastrophic events, etc.). In this context, 6G will develop a new concept of network operations based on dynamic resource allocation (both in terms of network structure and network functions) to optimize overall network efficiency [26]. At the physical layer, the migration to higher and wider bands will continue in 6G [27], and new elements, such as smart metasurfaces [2], [28] and massive antenna arrays [29], will enable faster data rates with a more efficient use of power.
Another important aspect of 6G will be interoperability [30] with other networks, such as prior 3GPP generations (4G/5G) or non-3GPP technologies (WiFi, LoRa, Sigfox, etc.). While most visions do not consider heterogeneity a core aspect of 6G, coexistence will be very important, both as a challenge and as an opportunity. Several projects have emerged for improving location accuracy in existing 5G networks to the meter level indoors and outdoors, such as the LOCUS [31] and 5G EVE [32] H2020 projects. With all the aforementioned novelties, 6G will bring a slew of opportunities for better network location. Some authors have already proposed visions of location in 6G [2], [33]. Location will be integrated into basic 6G operation, along with communications [34], [35], [36], thanks to THz frequencies that allow high resolutions for radio-based sensing. Narrower beams [33] will allow better resolution of multipath components for angle-based location. Reconfigurable Intelligent Surfaces (RIS) [2] will also improve location by making the radio environment more predictable. At a higher level, location estimated with these enablers will have its own network function and will be offered as a service [37]. This service can be offered to applications and network management [38].
While these works reflect on both the enablers for location in 6G (e.g., wideband signals and ML) and location as an enabler of some 6G functions (e.g., context-aware management), they do not address an important aspect of the development of location technologies in 6G: the infrastructure required for creating proof-of-concepts and evaluating the developed technical components. This aspect is especially relevant at the present time, when development is in its early stages. While simulations are a common way of evaluating location methods [39], the assumptions that are usually made have long been known to bias the results [40]. In prior mobile network generations, research on location [41], including testbeds, was done once the core components of the technology were well defined and commercial components were available [42], [43], [44], [45], [46], [47]. Testbeds are often limited to existing commercially available technology and implemented using closed vendor solutions. Thus, the architecture of the testbed is usually determined by these factors. This limits the type of experiments that can be done to those that the commercial equipment supports; therefore, there is a need for a well-planned architecture that is defined prior to the acquisition of equipment. This paper proposes a testbed architecture for 6G-based location that can be extended as 6G technologies progress. Since 6G will be the most location-centric of all 3GPP networks yet, it is very important to start this development from the very beginning of the definition cycle. This will help integrate location and its dependencies (e.g., services to compute location based on measurements, or the required signaling) into the early iterations of the first 6G definitions. This paper studies the task of designing a testbed for location research in 6G.
First, a review of the uses of location in future 6G-supported applications, its role within the operation of 6G networks, and the enablers offered by 6G technologies will be given, highlighting the open research challenges that will at some point need to be studied in a testbed. The requirements and challenges of location will then be explored, identifying which key aspects should be studied in the future and which hardware/software equipment would be needed to evaluate and demonstrate the developed technical components. With these elements, this paper will then propose an architecture for developing location testbeds and review existing implementations for the components of the identified system blocks. Figure 1 summarizes the contribution of this paper. This paper is organized as follows. In Section II, an overview of existing testbeds is given, describing some common design principles and relevant implementation aspects. This review will produce two inputs to the design of the architecture, in the form of ideas that can be used in the implementation and challenges that may be present in the process. In Section III, the key use cases are defined, with their requirements in terms of accuracy, latency, and update frequency, as well as the main challenges they present. The review of articles in this section will serve as a source of open research questions that may be answered by experiments in testbeds. In Section IV, location in 6G is discussed, reviewing both the 6G technologies that are enablers of location and the 6G functions that depend on location, both of which must be supported by the proposed architecture, and detailing further open research questions that require evaluation on testbeds. In Section V, a blueprint for a comprehensive testbed is described, detailing the components required to evaluate the technologies that will eventually lead to 6G standards.
A review of existing technologies that can be used for the implementation is also given in this section, along with recommendations for building a real testbed based on the proposed blueprint. Section VI describes a real implementation based on the proposed architecture, illustrating how to apply the guidelines. The open challenges for implementation are then discussed in Section VII, along with recommendations on how to overcome them. Finally, Section VIII draws the conclusions of the study.

II. RELATED WORKS
In this section, a brief review of existing location testbeds will be done, pointing out specific particularities and ideas that will be used for the development of the architecture proposed in this paper.
Ultimately, testbeds are used to build proof-of-concepts of pre-existing theoretical developments. In location, these developments can be either algorithms that provide an estimation of location, or algorithms that depend on location as an input. The workflow of research and development of such algorithms goes through several phases, starting with analytical development, normally followed by simulations, and validated through proof-of-concepts in testbeds. Such testbeds can be pre-existing ones, developed with enough flexibility to admit experiments on different algorithms; or ad-hoc, specifically designed and deployed for demonstrating a single algorithm.
Generic location testbeds [48], [49], [50] are designed to test any location technology, including location algorithms and location-aware services. Such testbeds normally provide a physical space where the experiments take place, a system for providing ground truth (i.e., the actual coordinates of a target, used to estimate the error of location algorithms) and a system for collecting data and running experiments. For instance, in [48], an office environment is equipped with elements to perform experiments. Such elements include mobile robots that automate tests, equipped with sensors that perform Simultaneous Localization and Mapping (SLAM) and which provide a ground-truth location. In [50], a factory lab emulates an industrial setting for algorithms used in Industry 4.0 services. In this case, no ground truth is provided, leaving this aspect open for experimental design, but a specific component for deploying experiments and collecting data is described. The Emulab testbed [49] provides a generic wireless network testbed, which includes functions for location. A robotic platform is deployed in a large office space, with overhead cameras recording a live video feed and tracking algorithms providing a real-time ground-truth location. A subsystem to program experiments, collect data, and control everything remotely has also been developed.
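The role of a ground-truth system in these testbeds can be made concrete: given an estimated track and the true one, per-sample Euclidean errors and summary statistics such as the RMSE or a CDF percentile are the usual figures of merit. The helper names below are illustrative, not taken from any of the cited testbeds:

```python
import math

def location_errors(estimates, ground_truth):
    """Per-sample Euclidean error between estimated and ground-truth positions."""
    return [math.dist(e, g) for e, g in zip(estimates, ground_truth)]

def rmse(errors):
    """Root mean square of the per-sample errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def percentile(errors, q):
    """Error at the q-th percentile of the empirical CDF (nearest-rank method)."""
    ranked = sorted(errors)
    return ranked[min(len(ranked) - 1, math.ceil(q / 100 * len(ranked)) - 1)]

# Toy track: two estimated fixes compared against the ground-truth system
errs = location_errors([(0.0, 0.0), (3.0, 4.0)], [(0.0, 1.0), (0.0, 0.0)])
print(errs)                   # → [1.0, 5.0]
print(round(rmse(errs), 3))   # → 3.606
```

A real evaluation would additionally time-align the two tracks, since ground-truth and estimation subsystems rarely sample at the same instants.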
Some testbeds are slightly less generic, limiting their scope to specific radio technologies; for instance, to WiFi [51] using Received Signal Strength Indicator (RSSI) measurements, or IEEE 802.15.4 [52] with Time of Flight (ToF) measurements. Cellular technologies are well represented in this category, thanks to the growing demand for location-based services in mobile networks. For instance, in [43], an experimental Long Term Evolution (LTE) cell is deployed in two different, reconfigurable, indoor scenarios (an empty room and an office setting). It supports ToF and Angle of Arrival (AoA) measurements, and the ground truth is provided by markings on the floor, which are manually fed into the collected data. In [45], a testbed for 5G location is proposed, with robots equipped with SLAM providing ground truth and automation. While it is oriented to 5G, it also supports other technologies, such as WiFi Fine Time Measurement (WiFi-FTM) and vision-based location. The physical setting spans several different indoor areas, including office and open spaces. While there are no location-specific testbeds for 6G, some proposals for generic testbeds are emerging, such as the Techtile testbed [53]. In this testbed, tile-based generic radio elements form an indoor scenario for testing multiple 6G technologies, including mmWave communications and visible light communications. While it is not specifically oriented to location, it has been used for experimenting with ultrasonic and visible-light location. It contains a subsystem for collecting data and programming experiments.
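For RSSI-based testbeds such as [51], range is commonly inferred by inverting a log-distance path-loss model. A minimal sketch follows; the reference power and path-loss exponent used as defaults are placeholder values that would have to be calibrated for each deployment scenario:

```python
def rssi_to_range(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model
    RSSI = ref_power - 10 * n * log10(d) to estimate the range d in meters.
    ref_power_dbm is the RSSI at a 1 m reference distance; both default
    parameters are illustrative and must be calibrated per environment."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_range(-40.0))  # → 1.0 (at the reference distance)
print(rssi_to_range(-67.0))  # → 10.0
```

Such ranges can then feed a multilateration step, which is one reason RSSI testbeds remain popular despite the model's sensitivity to clutter and fading.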
Ad-hoc testbeds are developed to demonstrate a specific algorithm. While they lack the flexibility to adapt to different experiments, they are sometimes as complex as generic testbeds in terms of the number and variety of hardware and software elements. The main difference is normally that they lack a programmable component and generic data collection systems that allow experimentation flexibility. For instance, in [54], a specific range-estimation algorithm for wireless networks is demonstrated in a testbed that supports both indoor and outdoor scenarios. In [42], a testbed for demonstrating a location algorithm based on a deep neural network is deployed in two different indoor scenarios (residential and office). A mobile app is developed for taking power measurements of surrounding cell towers and marking the ground-truth location on a map. In [44], vertical location with cellular networks is tested in a testbed consisting of Software Defined Radio (SDR) base stations and terminals, and compared with Global Positioning System (GPS) and barometer-based estimation. In some testbeds in this category, a full location is never estimated, only the elements required for location. For instance, in [46], only AoA measurements of a single terminal and a single 5G base station are taken in an outdoor parking lot, with the purpose of demonstrating a specific network management scheme.
In this paper, a generic architecture for testbeds is proposed in Section V. The proposed architecture provides a blueprint for any type of 6G location testbed, whether generic (which can be achieved by systematically implementing all the proposed components) or ad-hoc (using the architecture as a generic framework and implementing only the components required for a specific proof-of-concept). A special focus is set on 6G, studying the enabling technologies and new services that it will bring, and exploring the elements that can be procured for a 6G location testbed. This architecture synthesizes the different building blocks of the testbeds cited in this section, along with the experience of the authors in prior work using testbeds. The testbed used in [47], later expanded to add capabilities and flexibility, is used as an example of the proposed architecture in Section VI.

A. KEY TAKEAWAYS
In this section, a quick overview of existing location testbeds has been given. This paper proposes a blueprint for future 6G location testbeds, such that they are implemented following a pre-established plan that responds to the needs of 6G networks and applications. Some ideas proposed in the testbeds reviewed in this section, such as using a ground-truth system or robots for acquiring measurements, can be carried over to future 6G testbeds. A clear division between generic and ad-hoc testbeds is also noted, where generic testbeds are implemented to support any future experiment, and ad-hoc testbeds to test a specific algorithm or component.

III. KEY USE CASES AND REQUIREMENTS
When development for 5G started, mobile networks were already a commodity, and new services were constantly deployed. But mobile services were no longer limited to end users; around the same time, an explosion in Cellular Internet of Things (CIoT [55]) services and applications took place. Thus, 5G was designed not solely as a human-centric network, but also as a CIoT provider. While traditional human-centric applications had more or less simple Key Performance Indicator (KPI) requirements (mainly ever-increasing bandwidths for multimedia services) in previous generations, in 5G, requirements had many more dimensions (e.g., reliability, device density, latency, end-to-end quality of service, etc.). Three service categories were defined for 5G [56]:
• Ultra Reliable Low Latency Communications (URLLC): communications with very high reliability (above 99.999%) and very low latency (below 10 ms).
• Massive Machine Type Communications (mMTC): services with a very high density of devices (around 1 million devices per km²).
• Enhanced Mobile Broadband (eMBB): services with very high bandwidth requirements (up to 10 Gbps).
These KPIs, which measure the performance of the network, would, in turn, determine the performance of the applications that used the 5G network, which was measured in Service KPIs (SKPIs). The SKPIs were extracted from the use cases that were intended for 5G, and the KPIs that defined the different classes of traffic were derived from them.
Location SKPIs [5] were also defined in 5G, with accuracy (error lower than 50m horizontally and 5m vertically outdoors, and lower than 3m horizontally and vertically for indoors) and location acquisition latency (30 seconds outdoors and 1 second indoors) being the main ones. These requirements were not part of the initial release of 5G (Release 15), but came as an addition in Release 16.
In 6G, the services that will be supported are more demanding, responding to the decade of technological and social advances that will have passed since the inception of the first 5G definition. The SKPI requirements are extreme, requiring performance far superior to what current 5G technology can provide. From these extreme SKPIs, new KPI requirements will be derived, redefining the service categories [57] defined for 5G with traffic whose characteristics combine requirements from more than one category. In 6G, not only are the required SKPIs higher and the KPI requirements more complex, but networked applications are also expected to have a positive societal impact, measured in Key Societal Value Indicators (KVI [9]), such as sustainability, trustworthiness, or inclusiveness. Naturally, location SKPIs will also be much more demanding, and the applications will have KVI requirements that must be met by the network.
In this section, a review of a set of novel use cases for 6G networks that are highly location-dependent will be done, based on a selection of existing reviews of use cases [15], [34], [36], [58]. Figure 2 summarizes these use cases. The dashed line represents the limits of current mobile technology. The requirements for the use cases are detailed in the rest of this section. For each one a brief summary will be given, along with an analysis of the SKPIs (accuracy, latency, and update frequency) and the KVIs (trustworthiness, sustainability, and inclusiveness).

A. SELF DRIVING CARS
As electric vehicles gain market share, the demand for higher safety standards grows and drivers seek increasing degrees of comfort. To cover these demands, the development of autonomous driving [59] becomes necessary. Autonomous driving is at the crossroads of several cutting-edge technologies, such as AI/ML [60], [61], advanced sensors [62], URLLC communications [63] and high-accuracy location [64]. Autonomous cars interact with different elements of the environment, such as other vehicles, pedestrians, and road signalling. Some of these elements are not equipped with communications equipment, so advanced sensors, such as LiDAR [65], [66], are required. Others, such as traffic signals, can be equipped with communications elements [67] that interact with the vehicle through wireless Vehicle to Infrastructure (V2I) or Vehicle to Vehicle (V2V) links. With SLAM, the vehicle can combine the information of sensors and location providers to predict its trajectory and make decisions in real time in response to unforeseen events. Some of these decisions cannot be made with the information available to a single vehicle, such as choosing routes that help harmonize traffic, so they must be made by a centralized element outside the vehicles [61].
Naturally, location is a major aspect of autonomous driving. Location information is used in many different functions of autonomous driving, such as route planning and tracking [68], course prediction for collision avoidance (with other vehicles, pedestrians, and other obstacles) [65], lane changing [69], fleet management [70], traffic measurement [71], etc. Not all of these applications have similar requirements; for instance, the accuracy required for traffic measurement is in the tens of meters, while lane changing and course prediction would need sub-meter accuracy (along with accurate estimations of speed and acceleration). Regarding localization latency, to provide an accurate prediction of the course, a latency of 100 ms [72] would be required in order to leave a margin for reaction in case of potential collisions. The update frequency would depend on aspects such as the cruise speed and the type of road. For instance, on highways, while the speed is high, a relatively low location update frequency would work well for lane estimation, course prediction, etc. In an urban area, or in a parking lot, on the other hand, the geographical features are much smaller, so a high location update rate (up to 10 Hz) would be required.
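The interplay between cruise speed and update rate discussed above can be quantified with a one-line calculation (the speeds below are merely illustrative):

```python
def meters_per_update(speed_kmh, update_hz):
    """Distance a vehicle covers between two consecutive location fixes."""
    return (speed_kmh / 3.6) / update_hz

# Highway cruising with a 1 Hz fix vs. urban driving with a 10 Hz fix
print(round(meters_per_update(120, 1), 1))   # → 33.3 m between fixes
print(round(meters_per_update(30, 10), 2))   # → 0.83 m between fixes
```

Even at a tenth of the speed, the urban case yields a far finer spatial granularity, which is consistent with the smaller geographical features it must resolve.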
Location will also have to meet high KVI requirements for autonomous driving. Trustworthiness is the most important aspect in this case. The location provided by the network must be correct and resistant to tampering to avoid risky situations and accidents. Privacy, another aspect of trustworthiness, must also be preserved, so that an attacker cannot gain access to a specific vehicle's location [73]. Inclusiveness is a key factor in autonomous driving, in the sense that a higher number of connected vehicles will help the overall functions of autonomous driving work better. In [74], it is shown how a high penetration rate (i.e., the proportion of vehicles that can communicate) can drastically reduce traffic congestion. In fact, penetration rate has always been the main challenge in vehicular communications, along with proper infrastructure connectivity for V2I. While most of this activity happens outdoors, where advanced GNSS systems can provide a location down to the centimeter [75], some of it takes place in challenging scenarios such as urban canyons [76], [77], tunnels [78] and underground parking lots [79], where GNSS is either unavailable or offers very low accuracy. 6G networks are expected to cover both indoors and outdoors, especially in densely populated areas, so in these scenarios, 6G can be a viable alternative to GNSS (provided it can achieve the SKPI and KVI requirements disclosed earlier and summarized in Table 1). Some studies of 6G connectivity have been done for V2V and V2I [80], but there are currently no studies on location for this use case. A testbed with the appropriate infrastructure (roads, vehicles, signaling elements, etc.) would be required to test network location and location-dependent V2V and V2I services.

B. SOCIAL MEDIA
Social media have gained a central role in society in recent years. In the market, there are general social networks (such as Facebook or Twitter) and purpose-specific social networks (such as Foursquare, which is centered on location-specific data). In many cases, the revenue model of social media services is that of a free service sustained by customized ad delivery. Therefore, there are actually two groups of users of social media: the end users and the entities that use the network for delivering their ads.
Geographic information is becoming an increasingly important data source both for the end user and the ad delivery service [81], [82], [83]. This dependence is more obvious in social media that are built around geographical information, such as Foursquare or Google Maps. Location in social media is used mainly to geotag publications, to offer information about the surroundings of the user and to deliver ads that are relevant to a specific place. All these applications have relatively light requirements, with location accuracies of up to tens of meters [84], [85], several seconds being an acceptable latency and no regular update required (only when interaction with the network occurs).
While the technical requirements may be loose, the KVIs are the main issue in social networking. First of all, trustworthiness is the deciding factor in the usage of social media for many users. Location tracking should be used for the benefit of users, and not to violate their privacy. As per regulations such as the General Data Protection Regulation (GDPR) [86], users must be in control of their data, including their location, and must be able to fully disable it. Additionally, the entities that hold location information (e.g., the social network provider or the wireless operator) must also protect it and avoid information leaks [87]. Inclusiveness is also important, since social networks are becoming the main playground for public debate and freedom of speech [88]. Thus, it is important that the location function is available to all users, with a quality that is sufficient to provide an acceptable social network service.
The open research questions in this use case are mainly centered around the end services, owing to the fact that these use cases are not especially restrictive with location. Detection of bots [89] is a hot topic in social media, where location analysis can play an important role. Finally, the most important challenge in social media is in the privacy of the end users [90], which must be protected from other users and from undesired information leaks to ad networks and malicious actors. As such, a 6G testbed should offer the required functions for testing algorithms for privacy preservation.

C. E-HEALTH
According to the World Health Organization (WHO) [91], healthcare devices and facilities can benefit from wireless networks to provide personalized care. While some development was already well underway, the COVID-19 pandemic was a major booster of e-Health technologies [92]. These applications cover different types of monitoring (e.g., smartbands [93]), remote presence applications (remote operation [94] and remote doctor visits [95]) and transportation (ambulance services enhanced with vehicular communications and health sensors [96], or drones for transporting drugs and organs [97]).
Location plays an important role in some of these applications. Some smartbands use location for tracking the movement of users and logging physical activity. Telepresence robots require location for indoor navigation and object manipulation. Drones and ambulances also need location for route planning and navigation. The requirements in terms of SKPIs depend on the scenario. Outdoor applications, such as ambulances, drones, or smartband sports trackers, may work with location accuracies of a few meters, while indoor applications, such as telepresence, will normally require sub-meter accuracy. Regarding location acquisition latency, the only critical application would be drones, which may face catastrophic consequences if location is not timely (e.g., collisions with buildings or other drones). In this case, location sensors should have a maximum latency of 50 ms and a refresh rate of 20 Hz [98].
Health data is considered one of the most sensitive kinds of information about individuals; therefore, privacy should be a major priority in all these applications. This applies to patient location too. Sustainability will also be a major aspect, since many of the devices will be battery-powered. In order to avoid an increase in chemical waste and enhance user-friendliness, location (and, in general, wireless technologies in these use cases) should minimize the impact on device power and, hence, the need for replacing batteries. Regarding inclusiveness, the health sector strives to cover as much of the population as possible with an economically viable cost structure. Location technologies must be simple in order to have a low cost.
The convergence of location and communication [36] in 6G will help in the development of low cost devices. Very accurate location and pose estimation of instruments for remote surgery [99] must be investigated and tested. Specifically, whether 6G on its own can provide the required location quality is an open research question, which must be tested as the technology rolls out.

D. EMERGENCY SCENARIOS
In recent years, disaster management has received much attention from the wireless research community, with projects such as the European H2020 RESPOND-A [100] project and government agencies such as the US FirstNet [101]. This underlines the important role that wireless technologies can eventually have in the work of first responders.
There are numerous tasks in disaster management where location is a key tool. For instance, coordination of first responders [102] is a major issue, requiring that a central command monitors the location of the acting personnel to better manage resources and avoid hazardous zones. Another important task is mapping the disaster area, identifying deviations with respect to the pre-disaster map, and locating hazards such as chemical spills or flammable gases. For such tracking [103], location must have an accuracy of a few meters, and a latency and update period of a few seconds.
Victim location is another important task, which can be done by visual inspection, canine units, robots, or by detecting wireless devices [104]. Location in this case must again have an accuracy of a few meters, so that visual inspection can further determine the exact location and status of the victim. In this case, latency and location update frequency are not too critical, since the victims are not able to move. A higher latency with an accurate location is much better than a quick but imprecise location.
6G and location can be a major enabler of new life-saving protocols, improving the working conditions of first responders and the survival chances of victims. In this sense, the inclusiveness KVI must measure the survival opportunities of victims in different situations, such as those under rubble or closer to ground zero. Trustworthiness will be measured mainly by the fidelity of the information presented, for instance, at the first responder command unit (combining the accuracy in space, the delay of the presented location, and the information on surrounding hazards).
Emergency scenarios are often chaotic and disordered, each posing different challenges for first responders, victims, and deployed equipment. For location, the main challenges are the potential lack of infrastructure and the need to communicate with and locate victims under rubble. The lack of infrastructure that can serve as reference points for location can be partially solved with portable equipment, such as portable base stations [104], or with fusion of technologies [105]. Locating victims trapped under rubble is especially challenging, since rubble acts as heavy clutter for wireless signals [106], which may greatly degrade the vertical location accuracy. In these cases, a dense deployment of reference points, for instance from heavily sensorized buildings [3], will increase the chances of successful location.

E. HAPTIC SENSORS AND GAMING
Gaming has been one of the fastest growing entertainment markets in recent years. As gaming propagates among new demographics, the variety in experiences and devices grows. Technologies such as Extended Reality (XR) [107] will enable first-person views in an immersive and interactive experience over digital environments. Within XR, Virtual Reality (VR) provides a fully immersive experience, where the user is completely surrounded by virtual objects and can only interact with them; and Augmented Reality (AR) provides a mixed experience, where the virtual objects interact both with the user and the real environment. While devices like 3D glasses are used for displaying the virtual objects, haptic interfaces [108] allow tactile interaction with physical feedback to the user. To quantify the requirements of gaming over mobile networks, a new concept has emerged, Quality-of-Physical-Experience (QoPE) [57], which merges physical aspects of the Quality of Service (QoS) and the Quality of Experience (QoE), such as latency and video quality opinion, respectively.
Location plays a key role in XR and haptic interfaces. The user location and body pose are required to compute their view of the virtual objects. An accuracy of 10 cm or less [109] is required to provide a good experience. Most importantly, to avoid dizziness, the user location must be updated with a very low latency (below 20 ms [109]). These requirements can be met by devices that run location and tracking systems along with 3D rendering, but such devices have a very high cost that hampers market accessibility for casual users or users with lower purchasing power. Cloud gaming [110] partially solves this problem by moving the rendering to the cloud or network edge, but then the user location must also be sent to the network with a very low latency.
Trustworthiness will be one of the main KVIs for most users, who will need to trust that their privacy is respected within gaming sessions, especially when any kind of economic transaction occurs. Location must therefore be computed and used in a secure manner, ideally within the premises of the network operator (in the network edge). Inclusivity will be achieved mainly by keeping device costs low, so that they are affordable for all users, all while maintaining a certain QoPE.
The 5G localization requirement is <3 m for 80% of users in indoor deployments [111], which can be considered the main scenario for gaming. Therefore, 6G must overcome the limitations of 5G, providing extremely low latency and high-accuracy location in a challenging indoor scenario.

F. SMART EDUCATION
Education is one of the key pillars of modern society, from very early ages to university education and even mid-career training. As human knowledge advances, the topics that are taught become more complex and profound, and the teaching methods must evolve and adapt to new layers of complexity [112], [113]. For this reason, education is a very dynamic market that adopts not only new teaching methods but also new technologies. For example, a hot topic nowadays is teaching programming from early ages [114], and typical blackboard-and-chalk classes are not an efficient way to do this. Instead, Smart Education uses methods such as gamification, which transforms concepts into games and exploits the brain's dopamine response to enhance learning and better engage students. Such gamification methods use technologies like XR or holography [115], which have heavy processing and communication requirements while also needing to be physically portable and non-intrusive. These technologies, along with other networked technologies (file sharing, streaming, activity recognition), have requirements that need to be served by the 6G infrastructure.
In Smart Education, location plays an important role in several applications. Apart from XR and holography, location is also important in activity recognition. For instance, gesture recognition [116] and location can be used to interact with the AR/VR objects, reducing the complexity (and cost) of the end devices. Sentiment analysis [117] can also be applied to receive feedback on the learning experience, detecting whether the students are engaging in the lessons or not. The location requirements of activity recognition are similar to those described for gaming, since the underlying technologies will be very similar. Location also plays an important role as a feature of the traffic generated by Smart Education applications, which may be used for efficient network management. When students are all located within a classroom [118], broadband traffic will be concentrated in a hotspot served by a few, or even just one, access point. Such traffic will be similar for all the students, with changes dependent on their exact location (e.g., slightly different viewing angles of the same XR object), so edge resources can be used in a smart manner if the location is known.
In summary, location in Smart Education will have largely the same requirements as gaming, XR, and holography, with some particularities regarding content that may be rendered in batches for groups of users that are near each other. All of this must be done under strict privacy and security standards to achieve high trustworthiness, and at a low cost for high inclusivity.
The main challenge is the indoor nature of education, together with the high density of broadband users, which also translates into high computing power requirements.

G. AUTONOMOUS ROBOTS
Autonomous robots are cyber-physical systems able to move through space without a driver, and they have been an important part of innovations in several markets, such as manufacturing, logistics, first response, or wireless networks themselves. Overall, the autonomous robotics market [119] is expected to grow by 19.6% until 2027, so it will constitute an ever-growing use case for 6G networks.
Autonomous robots may move in a two-dimensional space (when they move on land [120] or over the water [121]) or in a three-dimensional space (in the case of drones [122] or submarines [123]). The required location accuracy depends on the size of the robot and the characteristics of the environment. In open spaces without obstacles, location will mainly be used for navigation and can have an accuracy of several meters. On the other hand, in an environment with obstacles such as walls or other robots, narrow corridors, etc., location will also be used for collision avoidance, and the accuracy must then be in the order of centimeters. For instance, cooperative autonomous robots will require centimeter-level localization, from 10 cm in industrial scenarios to 50 cm in regular consumer cases [34]. Latency and update frequency also depend on what location is used for, as well as on the speed of navigation. In the most critical case, the robot must have time to react to the location updates [124]. For instance, drones [98] moving at several meters per second in a dense area should receive location updates of themselves and neighboring drones with a frequency of several updates per second (20 Hz) and a latency in the range of tens of milliseconds. Not only should location be provided within these tight margins, but reliability should also be very high, especially avoiding situations where several consecutive location updates are missed. Another very important point is synchronization: to have a correct real-time view of the environment and plan safe trajectories, autonomous robots should be able to coordinate with timing correct down to the millisecond.
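A quick sanity check on the figures above is how far a robot travels "blind" between fixes, i.e., over one update period plus the reporting latency. The sketch below uses illustrative values taken from the ranges discussed above (10 m/s, 20 Hz, 30 ms); it is not a requirement from any cited specification:

```python
def blind_distance(speed_mps, update_hz, latency_s):
    """Distance travelled between the instant a fix is measured and the
    moment the *next* fix becomes available: one update period plus the
    reporting latency."""
    return speed_mps * (1.0 / update_hz + latency_s)

# Drone at 10 m/s with 20 Hz updates and 30 ms latency:
d = blind_distance(10.0, 20.0, 0.030)
print(f"{d:.2f} m")  # 0.80 m
```

At these rates the drone moves almost a meter between usable fixes, which is why collision avoidance in dense areas pushes both update frequency and latency so hard.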
Regarding trustworthiness, it will be more important in scenarios where the robots have critical roles or may cause harm if a wrong location is provided to them. Therefore, location must be provided in a way that makes it impossible to falsify the information. Sustainability must also be ensured, so that battery-powered robots are able to work for a long time without the need for recharging.
The challenges for location also vary depending on the environment. It will be simpler outdoors, where Line of Sight (LOS) is available and the requirements tend to be looser. Indoors, on the other hand, Non-Line of Sight (NLOS) propagation dominates, making accurate location harder, while the requirements are stricter. Indoor scenarios with high clutter will be common in this use case, since robots will be used in settings such as factories or distribution centers [125]. A testbed should include the elements required for testing new location algorithms in these scenarios.

H. KEY TAKEOUTS
In this section, a review of 6G-based location use cases has been presented, describing each of them along with their requirements in terms of SKPIs and KVIs. The results of this analysis are summarized in Table 1. A testbed designed to support these or similar use cases should be able to validate that they comply with the requirements.

IV. LOCATION IN 6G
Location consists of obtaining the coordinates of a target in a two- or three-dimensional space defined by a coordinate or reference system. To obtain the absolute coordinates, the relative position of the target must first be computed with respect to the position of one or more reference points whose coordinates are previously known. Several enabling techniques are used for obtaining the relative position information with respect to the reference points and for combining that information to obtain the location.
The obtained coordinates can then be used for location-aware applications, such as those described in Section III. In 6G networks, location will also be used in network functions and network management, making it an integral part of the system.
In this section, the role of 6G networks in location will be reviewed. First, an overview of location techniques will be given in Section IV-A, followed by a review of the enabling technologies in 6G in Section IV-B. Finally, the use of location in 6G functions will be shown in Section IV-C.

A. LOCATION TECHNIQUES
Location techniques obtain an estimate of the location given a set of readings from the reference points. There are several techniques that can be used, depending on the type of available information, computational resources, and performance requirements.

1) LOCATION BY PROXIMITY
The simplest form of location is by proximity to a reference point (Figure 3a). The gross location obtained in this case is equal to the position of the single reference point. The actual estimate is not exactly a single point, but a set of points covering the whole coverage area of the reference point with different degrees of certainty. Points further from the reference point have a lower certainty, because the probability of detection decreases with distance. This technique is used, for instance, in Bluetooth Low Energy (BLE) [126], [127], for applications where the required accuracy is normally not very high.
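In its most common form, proximity location simply returns the coordinates of the strongest-heard reference point. A minimal sketch (the beacon identifiers and RSSI values are invented for illustration):

```python
def locate_by_proximity(readings, positions):
    """readings: {ref_id: rssi_dbm} measured by the target;
    positions: {ref_id: (x, y)} of the known reference points.
    The estimate is simply the position of the strongest reference
    point; the uncertainty is that point's whole coverage area."""
    best = max(readings, key=readings.get)
    return positions[best]

pos = locate_by_proximity({"b1": -70, "b2": -55, "b3": -80},
                          {"b1": (0, 0), "b2": (5, 0), "b3": (0, 5)})
print(pos)  # (5, 0)
```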

2) TRILATERATION
A more accurate location can be obtained with trilateration [105] (Figure 3b). In this case, the data collected from the reference points is the distance or range. The range to each reference point defines a sphere (a circle in 2D) over which the target may be located. The location is then estimated from the intersection of four spheres (or three circles in 2D location). There are several methods for estimating the range with wireless technology:
• Power-based estimation: this method uses the received power to estimate the distance to the reference point. Given a known transmission power, the distance can be obtained by inverting a propagation model. The propagation model must be selected taking into account the radio technology in use, the frequency, and the environment. For instance, determining whether there is LOS or NLOS [128] is a key aspect of model selection. This method is subject to a high ranging error due to fast fading and multipath propagation. It has been used in previous mobile network generations [129], [130].
• Time of Flight (ToF) measurements: this method is based on knowing the time that a signal takes to travel through the air between the reference point and the target. The advantage of this method is that it is not affected by fading. Multipath can also be mitigated if the signals are short and there is LOS [131]; even with NLOS, the accuracy is higher than with power-based methods [132]. Direct ToF measurement requires very tight synchronization between the target and the reference points, making the system very costly. As an alternative, the Round-Trip Time (RTT) of the signal can be measured. This is done using a protocol where the signal transmitted by one end is answered by a signal from the other end after a predetermined time. The transmitting end can then estimate the ToF in both directions based solely on its own clock. This is the approach used in technologies such as Ultra Wide Band (UWB [105], [133]) or WiFi Fine Time Measurement (WiFi-FTM), which are capable of achieving cm-level accuracy [105]. ToF has also been tested in 5G [134].
• Time Difference of Arrival (TDoA) [135]: RTT requires a protocol between the target and the reference points. An alternative way to calculate distances relying on a single clock is to estimate the difference in ToF of a signal between the target and two reference points. This difference can be translated into a difference of distances, which defines a hyperbola (instead of a circle) on which the user is located. The position can then be estimated from the superposition of these hyperbolas instead of circles. This method has also been tested in 5G [136].
Nevertheless, range estimates have errors, and it may happen that the circles or hyperbolas used for trilateration do not cross at a single point, or at all, as shown in Figure 3b for 2D location. The actual ranges are represented as dashed lines, while the estimated ranges with errors are shown as solid lines. This creates an uncertainty in the location, shown as the red smudge. The darker areas of the smudge are those where the confidence in the location is higher given the available information. The actual position is not within the smudge, reflecting that there will be an error in this case once the uncertainty is resolved. To resolve the uncertainty, techniques such as Least Squares (LS) or Weighted Least Squares (WLS) are used.
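The LS resolution of this uncertainty can be sketched as follows. This is a minimal Gauss-Newton implementation over noisy ranges, with invented anchor coordinates, not the algorithm of any specific cited system:

```python
import math

def trilaterate_ls(anchors, ranges, iters=100):
    """Least Squares (LS) 2D trilateration via Gauss-Newton.
    anchors: list of (x, y) reference-point coordinates;
    ranges: measured (possibly noisy) distances to each anchor."""
    x = sum(a for a, _ in anchors) / len(anchors)  # start at the centroid
    y = sum(b for _, b in anchors) / len(anchors)
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), d in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            dist = math.hypot(dx, dy) or 1e-9
            r = dist - d                    # range residual
            jx, jy = dx / dist, dy / dist   # Jacobian row for this anchor
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r;   b2 += jy * r
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:
            break
        # Solve the 2x2 normal equations and take the Gauss-Newton step
        x -= (a22 * b1 - a12 * b2) / det
        y -= (a11 * b2 - a12 * b1) / det
    return x, y

# Three anchors and exact ranges to the point (2, 1):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(2 - ax, 1 - ay) for ax, ay in anchors]
print(trilaterate_ls(anchors, ranges))  # converges to ≈ (2.0, 1.0)
```

WLS follows the same scheme, simply weighting each anchor's residual by the inverse variance of its range estimate.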
Trilateration requires a certain density of reference points, such that every point in the area is covered by at least four of them. This can prove challenging, especially in indoor scenarios, where obstacles cause shadowing: either the density (and hence the cost) must be very high, or the scenario will have coverage holes. One technique that can mitigate this is opportunistic fusion [105], which combines ranges from different technologies. This improves the density by using reference points deployed not only for location but also for wireless communications. It is especially useful in indoor scenarios, where several technologies may coexist, or in emergency situations, where damaged infrastructure can be complemented with low-cost, low-deployment-effort reference points.

3) ANGLE OF ARRIVAL/DEPARTURE
Another magnitude that can be estimated between the reference points and the target is the Angle of Arrival (AoA), the angle at which the signal from a reference point reaches the target, or the Angle of Departure (AoD), the angle at which the reference point transmits the signal. These estimates can be made with Multiple Input Multiple Output (MIMO) systems that can either estimate the AoA [137] or perform beamforming. Location can be estimated from three AoA or AoD measurements, as shown in Figure 3c. As in the case of trilateration, the error in the estimated angles creates an uncertainty that is resolved with LS, WLS, etc.
AoA has been used in 5G [138], achieving sub-meter accuracy in simulations. However, AoA suffers heavily from multipath [139]. AoA can also be combined with ToA/TDoA systems [140], combining the advantages of each.
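The LS resolution of angle uncertainty can be sketched by minimizing the squared perpendicular distance of the position estimate to each bearing line. The reference-point coordinates and angles below are invented for illustration:

```python
import math

def triangulate_aod(refs, bearings_deg):
    """Least-squares position from angle (AoA/AoD) measurements.
    refs: list of (x, y) reference points; bearings_deg: the angle (from
    the x-axis) at which each reference point 'sees' the target. Solves
    sum_i (I - u_i u_i^T) p = sum_i (I - u_i u_i^T) a_i, i.e., minimizes
    the squared perpendicular distance to each bearing line."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (x0, y0), th in zip(refs, bearings_deg):
        ux, uy = math.cos(math.radians(th)), math.sin(math.radians(th))
        # Projector onto the normal of this bearing line: I - u u^T
        m = [[1 - ux * ux, -ux * uy], [-ux * uy, 1 - uy * uy]]
        for i in range(2):
            A[i][0] += m[i][0]; A[i][1] += m[i][1]
            b[i] += m[i][0] * x0 + m[i][1] * y0
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (b[1] * A[0][0] - b[0] * A[1][0]) / det
    return x, y

# Two reference points both seeing a target at (3, 3):
print(triangulate_aod([(0.0, 0.0), (6.0, 0.0)], [45.0, 135.0]))  # ≈ (3.0, 3.0)
```

With noisy angles from three or more reference points, the same normal equations yield the LS compromise position described in the text.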

4) FINGERPRINTING
In indoor scenarios, range estimation can be especially difficult and prone to large errors. While ToF greatly reduces these errors, it is not always possible to use it, due to the high cost of densely deploying location-specific radio devices. This is the case, for instance, with WiFi, which is commonly available in indoor scenarios such as offices and residential areas. Moreover, WiFi in such scenarios is densely deployed, with a large number of Access Points (APs) visible to a device at a given point in space.
While the received power may not follow a specific propagation model, if the environment does not change drastically, it tends to remain static over time. For instance, if at a point near a WiFi AP the measured power is abnormally low due to an obstacle like a wall, it will not change over time as long as the obstacle remains static. Thus, each point in space will have a collection of tuples of reference point identifiers (e.g., WiFi Service Set Identifiers) and received powers that do not change over time. These tuples form a unique signature or fingerprint that identifies each point in space. This is the basis of fingerprinting (Figure 3d).
Fingerprinting therefore has two steps: a training step, where a map of signatures is collected (normally dividing the area into a fixed-size grid), and an exploitation step, where the measured powers are compared with the signatures to find the most similar one. The highest possible accuracy depends on the size of the grid defined during training. There is a tradeoff between accuracy and complexity, since a fine grid also implies a much longer training step. Fingerprinting can achieve a high accuracy when the density of reference points is high.
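The exploitation step can be sketched as a nearest-neighbor search in signal space. The radio map, AP identifiers, and the -100 dBm floor for unheard APs are all illustrative assumptions, not values from the paper:

```python
def match_fingerprint(radio_map, reading):
    """radio_map: {grid_point: {ap_id: rssi_dbm}} collected in training.
    reading: {ap_id: rssi_dbm} measured during exploitation.
    Returns the grid point whose signature is closest in signal space
    (squared-Euclidean distance; an AP missing on one side is treated
    as a weak -100 dBm, an illustrative floor value)."""
    def dist(sig):
        aps = set(sig) | set(reading)
        return sum((sig.get(a, -100) - reading.get(a, -100)) ** 2 for a in aps)
    return min(radio_map, key=lambda p: dist(radio_map[p]))

radio_map = {
    (0, 0): {"ap1": -40, "ap2": -70},
    (0, 5): {"ap1": -65, "ap2": -45},
}
print(match_fingerprint(radio_map, {"ap1": -42, "ap2": -72}))  # (0, 0)
```

Practical systems refine this with k-nearest-neighbor averaging or probabilistic matching, but the grid-resolution tradeoff described above is already visible here: the answer can never be finer than the training grid.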
Fingerprinting is traditionally associated with WiFi [141], thanks to the high availability of signals in average residential or office indoor environments, but it has also been used in mobile networks such as LTE [142] or 5G [143]. In 6G, the higher densification of base stations will increase the resolution of fingerprinting-based location.
While fingerprinting can achieve a high accuracy with a low infrastructure investment, it has some major drawbacks. The main one is the need for complex training, which severely limits its applicability in scenarios where prior exploration is not possible or where a large area must be covered. Another important drawback is that on longer timescales the fingerprints may vary (e.g., due to relocation of WiFi APs or of objects that produce reflections or shadowing, changes in air humidity, etc.), requiring frequent retraining of the map.

5) POSE ESTIMATION
Up to this point, techniques that return the location in space have been described, location being a synonym for the vector of coordinates in a specific reference system. This kind of location treats the target as a single point. Another important magnitude that is often part of the location problem is the pose, which acknowledges that the target is not a single point but a collection of geometrical shapes oriented in specific ways.
There are two approaches to pose estimation that can be combined in different ways. The first one is the estimation of the orientation of an object within the reference system, also known as 6D pose estimation [144]. In this case, the three location dimensions are complemented by three orientation parameters (roll, pitch, and yaw, shown in Figure 4). These pose parameters describe a solid object, or define an outline of an object that can be further described by the other pose estimation approach. Wireless technologies have been used for 6D pose estimation; for instance, in [145], a mmWave (24 GHz) radar is used to estimate the pose of mobile robots.
The pose of a non-rigid object can also be defined as the position of its moving parts with respect to each other. The typical example is the estimation of human body pose [146], which allows gesture recognition [116] or even facial recognition [147]. In this case, the problem consists of locating several points of the object with respect to each other, and it can be formulated as the estimation of the 3D location of each point with respect to a common reference system. The major challenge here is that the accuracy must be very high [148], in situations where the object itself may interfere with the location signals [149].
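The three orientation parameters of a 6D pose can be composed into a rotation matrix; the sketch below uses the common ZYX (yaw-pitch-roll) convention, which is an assumption on our part, since conventions differ between systems:

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Compose the 3x3 rotation matrix R = Rz(yaw) * Ry(pitch) * Rx(roll)
    (ZYX convention; other conventions change the composition order).
    Together with the (x, y, z) location this gives the full 6D pose."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A pure 90-degree yaw maps the body x-axis onto the world y-axis:
R = rpy_to_matrix(0.0, 0.0, math.pi / 2)
print([round(row[0]) for row in R])  # first column ≈ [0, 1, 0]
```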

6) LOCATION FUNCTION PLACEMENT AND PRIVACY
All the location techniques described earlier are agnostic to placement; that is, they can run either in the target or in a network infrastructure that interconnects the reference points (e.g., the 6G network). The placement of the location function defines the set of mechanisms that must be implemented to allow location estimation.
If the location function runs in the target, mechanisms must be in place to inform the target of the location and identity of the reference points. This can be done, for instance, with a predefined map, or with SLAM [150] techniques. In mobile networks, the coordinates of the base station could be transmitted through broadcast channels, such as the System Information Block, but this has not been done in prior generations. Once the locations of the reference points are known, the target needs to know the distances if trilateration is used. The RTT algorithms used in UWB and WiFi-FTM both allow the device to estimate its own distance to the reference points. For AoA, the terminal must use some type of MIMO technique that allows it to calculate angles. The network may also transmit AoA, AoD, and TDoA measurements to the terminals [151], [152]. In the case of fingerprinting, the device must receive a digital copy of the computed (and updated) map. With these elements, the terminal can estimate its own location, without the network needing to compute it. The drawback is that this requires a high computing capacity (which may be a problem in IoT devices) and consumes energy at a higher rate. If the network needs the location, a protocol must be established for the device to send its coordinates to the network; such a protocol exists, for instance, in LTE: the LTE Positioning Protocol (LPP [5]).
Location can also be calculated in the network. In this case, a central location service must be defined within the network functions. Such an entity receives range or AoA/AoD estimates, either from the reference points or from the targets [151], [152], and computes the location. If computed in the network, TDoA is calculated at different reference points and is limited by the synchronization of their clocks [153].
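The impact of that clock synchronization limit is easy to quantify: an offset between two reference-point clocks shifts the measured time difference one-to-one, biasing the range difference (and thus the whole hyperbola) by the speed of light times the offset. A small worked example:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tdoa_range_bias(sync_error_s):
    """Range-difference bias introduced by a clock offset between the
    two reference points of a TDoA pair: c * offset."""
    return C * sync_error_s

for ns in (1, 10, 100):
    print(f"{ns:>4} ns sync error -> {tdoa_range_bias(ns * 1e-9):.2f} m bias")
```

That is, nanosecond-level synchronization is needed between reference points before sub-meter TDoA accuracy is even geometrically possible.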
For fingerprinting, a protocol should be established between the target and the location service over which the former can send its readings. This relieves the target of the required computational and energy expenses. If the target needs to know its own position, a client-server protocol must be established with the location service. This is the reasoning behind proposals such as the LOCUS platform [31] for 5G. Recent 3GPP technical specification work has acknowledged the problem of cellular-based positioning and its hybrid combination with non-3GPP technologies [10] in a centralized location service. These new specifications contemplate the possibility of the terminal transmitting measurements such as the ToF, AoA, and Reference Signal Received Power (RSRP) to the location service.
Since trustworthiness will be one of the main KVIs in 6G, it is important to point out the privacy implications of the placement decision. A decentralized scheme where the position is calculated by the target terminal (properly secured) is private by design if and only if the network does not participate in the estimation of the ranges. In other words, RTT protocols should not be used, since in that case the network can also obtain the location of the user, and a malicious third party can intercept such signals. Even then, the network will always have a gross location estimate through the proximity technique (i.e., the serving cell). In the case of centralized location, privacy is not guaranteed by design [90], and specific countermeasures must be put in place: for instance, the messages exchanged with the location service must be encrypted and anonymized, e.g., with temporary identifiers. In this case, it is up to the operator to follow the personal data management regulations which, in the case of the GDPR [86], for instance, force them to delete data on request and protect it from possible leaks.

B. 6G LOCATION ENABLERS
While there is not yet a written consensus on what exactly 6G will be, research is ramping up, white papers [2] are suggesting the core components of 6G, and research projects with international consortiums have started [9]. Some of these components may be included in earlier, Beyond 5G (B5G) releases, while some may not even make it into the initial release of 6G. As location is an increasingly hot topic in mobile networks and services, it will be an important objective for B5G/6G technologies. In this section, the main technologies expected to be part of 6G are reviewed, based on the visions of papers such as [33], [36], and [107], which explore the relation between 6G and location. For each technology, the main aspects that need to be tested in a testbed are explored.

1) HIGHER BANDS AND BANDWIDTHS
The migration to higher bandwidths has been a constant across mobile network generations, from the 30-200 kHz GSM/GPRS channels [154] up to 800 MHz in Frequency Range 2 (FR2) in 5G [155]. In 6G, channel bandwidths will further increase, up to 2-10 GHz per channel in the THz band [34]. To achieve this, higher carrier frequencies must be used. While one of the main novelties of 5G was the introduction of FR2 (also known as mmWave), 6G envisions frequencies above 100 GHz (also known as µmWave) [2] or even in the THz band [16].
This brings two main advantages with regard to location. Firstly, higher frequency bands allow much narrower beam widths [2], which, coupled with beamforming, can lead to very accurate angle estimation. Secondly, large bandwidths allow very short signals, which enable better discarding of multipath components in time [156] (similar to UWB [133], [157], [158]). On the other hand, atmospheric absorption is much higher at such high frequencies, leading to short ranges and the need for LOS propagation.
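The bandwidth advantage can be made concrete with a rule-of-thumb calculation: a pulse of duration roughly 1/B spans c/B meters, so multipath components closer than this blur together. This first-order approximation (which ignores pulse shaping and processing gain) is our illustrative assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Rule-of-thumb ranging/multipath resolution: a pulse of duration
    ~1/B occupies c/B metres of propagation path."""
    return C / bandwidth_hz

for label, b in [("5G FR2 channel (800 MHz)", 800e6),
                 ("6G THz channel (2 GHz)", 2e9),
                 ("6G THz channel (10 GHz)", 10e9)]:
    print(f"{label}: {100 * range_resolution(b):.1f} cm")
```

By this estimate, moving from an 800 MHz FR2 channel to a 10 GHz THz channel shrinks the resolvable path difference from tens of centimeters to a few centimeters, which is the basis of the cm-level ToF accuracy discussed above.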
These new features of 6G are a natural evolution of the 5G physical layer, and as such, their benefits for location were already observed in 5G with respect to prior generations: mmWave also offered increased directionality [159] and allowed ToF measurements, although not with the accuracy expected from µmWave. To evaluate the exact benefits of µmWave for location, several aspects must be measured. Firstly, the achievable directionality, which determines the angular accuracy. Secondly, the ToF resolution, which will depend on the shortest achievable pulse and on atmospheric effects, which are much more relevant at higher frequencies. The required 6G RTT protocols must also be designed and tested.

2) VIRTUALIZATION AND OPEN RAN
The architecture of the mobile network has changed throughout the generations. While in 2G and 3G network functions were tightly associated with specific, inflexible elements, 4G brought a simplification of the architecture [160], and in 5G the introduction of SDN [161] and Open RAN [22], [23] simplified the implementation of the core network with COTS hardware and even cloud-based virtual machines. In 6G, this trend continues: network functions are implemented as microservices that can be containerized with technologies such as Docker [162], [163], [164], [165] or Kernel-based Virtual Machines (KVM) [163], [166], deployed in the cloud, and orchestrated with scalable solutions such as Kubernetes [167] or OpenStack [168]. This approach defines a network infrastructure that is essentially a set of microservices that need a certain organization. Open RAN [169] defines a standardized architecture (Figure 5) that allows interoperability between components developed by different vendors. The main elements are:
• Distributed Unit (DU): contains the lower layers of a traditional base station (Physical layer, Medium Access Control, and Radio Link Control).
• Control Unit (CU): contains the higher layers of a base station. A single CU can have numerous DUs distributed over a wide area.
• Near Real-Time RAN Intelligent Controller (Near-RT-RIC): will contain the network functions that are time sensitive, such as mobility management, security, etc.
• Service Management and Orchestration Framework: contains the network management functions (in the Non-RT-RIC), configuration, policies, etc.
An important aspect of this architecture is its openness, which adds flexibility to the composition of the network, allowing the integration of new and more efficient implementations of network functions, which can be developed and distributed by small, specialized vendors. These software components are called xApps, and they will give rise to a market of competing solutions for low-level functions in the 6G network. This will also greatly simplify the inclusion of new network functions and reduce the time-to-market of novel schemes developed by the research community.
Open RAN greatly simplifies the integration of location into network operation. Access to information in the CU can provide physical layer measurements (such as Timing Advance, ToA, or AoA) taken at the DU. Furthermore, access to information from several DUs can enable trilateration within a single CU. This can be done either passively (an xApp collects data and derives distances to DU antennas) or actively (the DUs communicate with the terminal to estimate the distance with protocols such as RTT). Thanks to the tight integration offered by the Near-RT-RIC, this will yield very low location latencies and will also allow high-frequency location updates.
Many questions remain open, such as whether a location xApp will in practice have a negative impact on the performance of a CU, which would translate into a lower quality of service. To assess this, the memory footprint and processing time of different location algorithms and protocols should be measured within an Open RAN architecture.

3) DENSIFICATION AND CELL-FREE ARCHITECTURE
Another aspect that has constantly increased with each generation of mobile networks is the densification of base stations. This is a direct consequence of the migration to higher frequencies and bandwidths. As higher frequencies are used, path loss also increases, driving the need for either higher transmission power or a lower cell radius. Lower cell radii also imply fewer users per cell, which allows for higher bandwidth and more resources per user. Regarding cell radius, there are macrocells (with coverage up to several km), small cells (up to 100 m) and femtocells (which cover the area of a home or small office). For 6G, the concept of cells is expected to change [11], [17], [170]. DUs are much simpler (and therefore cheaper) and more flexible than full base stations, and can be added and removed with much less effort, allowing for a scalable network. Additionally, DUs may even be mounted on mobile platforms to provide temporary connectivity for IoT [171] or disaster scenarios [172] using, for instance, Unmanned Aerial Vehicles (UAV).
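The frequency-driven shrinkage of cell radius can be quantified with the free-space (Friis) path loss model. The sketch below, using an arbitrary 110 dB link budget purely for illustration, shows that for a fixed budget the achievable range falls proportionally with carrier frequency, which is the mechanism behind densification.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(d_m, f_hz):
    """Free-space path loss (Friis) in dB for distance d_m and frequency f_hz."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

# Same 110 dB link budget at two carrier frequencies:
for f in (3.5e9, 100e9):  # 5G mid-band vs. a candidate 6G sub-THz band
    # Invert the FSPL expression for distance at the fixed budget.
    d = 10 ** ((110 - 20 * math.log10(f) - 20 * math.log10(4 * math.pi / C)) / 20)
    print(f"{f/1e9:6.1f} GHz -> max free-space range ~ {d:8.1f} m")
```

Since FSPL grows with 20 log10(f), multiplying the frequency by ~28x (3.5 to 100 GHz) divides the free-space range by the same factor, pushing deployments from macrocell toward small-cell and DU-level radii.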
Regarding location, DUs can act as reference points, and as they will be deployed more densely, terminals are expected to have more location information available. Also, the smaller coverage area of a single DU will provide finer location by proximity. In addition, UAV-mounted DUs may improve location by temporarily increasing the density when needed [173] (e.g., in emergencies or special events).
To characterize the advantages of the cell-free architecture of 6G, the time required to coordinate several DUs must be measured, and potential pitfalls (such as a target being visible to fewer than three DUs) must be explored along with candidate solutions. Regarding densification, the tradeoff between lower inter-cell distance and higher path loss must be evaluated. The higher DU density may also favor the use of fingerprinting, so measurements that characterize this aspect (e.g., a map of visible DUs over an area) could also be of great value.
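A minimal sketch of the visible-DU fingerprinting idea mentioned above: each surveyed grid cell stores the set of DU identifiers detectable there, and a terminal is located by matching its currently visible set against the map. The radio map, DU names, and grid cells are all hypothetical.

```python
# Hypothetical radio map: for each grid cell, the set of DU identifiers
# whose signal was detectable there during a calibration survey.
radio_map = {
    (0, 0): {"du1", "du2"},
    (0, 1): {"du2", "du3"},
    (1, 0): {"du1", "du4"},
    (1, 1): {"du2", "du3", "du4"},
}

def locate(visible):
    """Return the grid cell whose stored DU set best matches (Jaccard index)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(radio_map, key=lambda cell: jaccard(radio_map[cell], visible))

print(locate({"du2", "du3"}))  # -> (0, 1)
```

The denser the DU deployment, the more distinctive each cell's visibility set becomes, which is why densification favors this family of techniques; real systems would also use received signal strength rather than binary visibility.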

4) AI AND ML AS ENABLERS OF LOCATION
One of the most cited novelties of 6G is the increased integration with AI and ML [17], [174]. There are three roles that AI/ML will play in 6G: running network functions, network management, and provision as a service for user applications (MLaaS) [18], [19], [20]. To support these roles, some works suggest the use of a specific AI/ML component within the network, such that all the computing resources (hardware accelerators, storage, software libraries, etc.) are centralized in a single point. This allows the dedicated resources to be better dimensioned and utilized. Such a component would centralize all the datasets and models used in the aforementioned functions. Nevertheless, this can be problematic in some cases. For instance, to protect privacy, some datasets may not be acquired and stored long-term, so schemes such as federated learning [175], [176] have also been proposed for 6G services and functions. In federated learning, several nodes perform ML on their own datasets and share the resulting model, which in theory does not contain sensitive information. The applications of AI/ML to the operation of the network are further explained in Section IV-C.
Cloud-based AI/ML services constitute a novel and active market [19]. Mobile network operators have an edge over other providers in offering such services. Firstly, they can offer services that are much closer to the end users, located within the CU, with much lower data transmission latency. Secondly, they can reuse hardware (such as AI/ML accelerators) and software (such as specific algorithms or pre-trained models) components that are already used for network functions and orchestration.
AI/ML as a service offered by the 6G network can be used as an enabler of location. Location with soft information [177] is a clear example. Some AI/ML techniques can help improve location accuracy; for instance, Kalman Filters [178] are commonly used to improve location through fusion with Inertial Motion Unit (IMU) data [179], and LOS/NLOS conditions can be estimated with ML [180] to improve ranging information. Weighting sources by their accuracy has also been shown to improve location accuracy [105].
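To make the Kalman-filter fusion concrete, the following one-dimensional sketch fuses noisy network position fixes with an IMU acceleration input under a constant-velocity motion model. The time step, noise covariances, and target trajectory are illustrative assumptions, not values from the cited works.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])    # control matrix for IMU acceleration
H = np.array([[1.0, 0.0]])             # only position is observed by the network
Q = 0.05 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[1.0]])                  # radio fix noise: 1 m std -> variance 1

x = np.zeros((2, 1))                   # state: [position, velocity]
P = np.eye(2)

def kf_step(x, P, accel, z):
    # Predict using the IMU acceleration as control input.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update with the (noisy) network position fix z.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
for k in range(50):                    # target moving at a constant 1 m/s
    true_pos = 1.0 * (k + 1) * dt
    z = np.array([[true_pos + rng.normal(0, 1.0)]])
    x, P = kf_step(x, P, accel=0.0, z=z)   # zero accel: constant velocity
print(float(x[0, 0]))                  # filtered position, near true_pos
```

With metre-level measurement noise, the filter's position estimate converges well below the raw fix error, illustrating why IMU-aided filtering improves both accuracy and effective update rate.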
In the near future, there will be many aspects of AI/ML to evaluate towards 6G in real testbeds. Aspects such as dataset sizes, computing performance (in terms of memory and processor time) and learning and estimation times will all determine the location acquisition latency, frequency, and accuracy when AI/ML is an enabler of location.

5) EDGE COMPUTING
In 5G, one of the key technologies for achieving low latency was Edge Computing [181], where end-to-end services were "moved" to the network edge. This was possible thanks to cloud computing technologies [182], where a service can be disaggregated into several servers that share information over a backhaul connection. In 6G, this trend continues, with more sophisticated Edge Computing schemes such as federated learning [175]; in addition, xApps allow cross-layer integration between end-to-end services and the CU/DU. While location already benefits from Edge Computing in 5G [183], [184], the integration with the CU/DU can further enhance location accuracy, for instance, by connecting it with external data services that enable context awareness (e.g., by integrating geographical Application Programming Interfaces (APIs) in the estimation of distances from physical magnitudes). This can be done without significantly increasing location acquisition latency.
To explore the integration of Edge Computing into 6G and location, several research questions remain open. For instance, instantiating Edge services implies some delay [185], so the impact of this delay should be further investigated and solutions developed to overcome it. One potential solution is predictive instantiation based on context awareness [186], [187]. Furthermore, the provisioning of computing resources at the Edge (in terms of memory and computing power) is also an open research question, which has some precedents in 5G, for instance for task offloading from the terminals [188], [189]. In 6G, other uses, such as federated learning [190], will also require careful resource planning at the Edge.

6) COEXISTENCE WITH OTHER NETWORKS
As mobile network generations have progressed, backwards compatibility has not been maintained. This has led to devices and networks where radio interfaces of several generations, ranging from 2G to 5G, are present simultaneously. Given the inertia of older generations, it is very likely that in 6G, several 3GPP technologies will coexist in the same network [30]. Moreover, other radio technologies are also present in the same spaces as mobile networks, such as WiFi, LoRaWAN [191], Bluetooth, etc. Coexistence with older generations and other radio technologies has been widely studied in prior generations, leading to the development of schemes such as Listen Before Talk (LBT) in 5G unlicensed spectrum bands [192]. While coexistence with other technologies is an important challenge, synergies have also been exploited to improve the quality of service. Inter-technology handovers [193] can be used to hand off traffic to Radio Access Networks (RANs) of different 3GPP Releases, for instance, when a traffic imbalance is detected or when a user exits the coverage area of the newer generation. Inter-technology synergy has also been proposed with non-3GPP technologies, for instance, with optical networks [194], WiFi [195], capillary networks [196] or LoRaWAN [191].
Location can greatly benefit from the coexistence of several RAN technologies. Technologies such as UWB [133] and WiFi-FTM [197] are currently competing in the indoor location market. Both use a flavor of the RTT protocol to estimate the distance to the reference points. UWB has long been a de facto standard for indoor location, and is now starting to acquire a significant market share in consumer devices [198]. WiFi-FTM [105], [199] is more recent and is part of IEEE 802.11mc, so it has great potential for adoption by consumer devices, especially because it can provide location without the need to connect to an access point [200]. Both technologies offer local coverage, and can complement mobile-network-based location indoors, with techniques such as range fusion [47], [105]. Other location technologies such as WiFi fingerprinting [141], GNSS [140], [201] (which is already integrated with LTE through LPP [5] and with 5G through the New Radio Positioning Protocol A, NRPPa [202]), GNSS with Real-Time Kinematics (RTK) [203], Bluetooth proximity [126], [127] or SigFox [204] can also be used to improve future 6G location. Thanks to mechanisms such as Kalman Filters, readings from sensors in devices (such as IMUs) or even from sensing functions of 6G (e.g., passive RADAR [36]) can also be used to improve location while tracking a specific user.
6G location can be complemented with the aforementioned technologies, either to improve accuracy (for instance, through fusion with UWB), latency, or update frequency (e.g., with Kalman Filters). Another important aspect that can be improved is the availability of location. As described in Section III-B, at least four ranges are required for 3D location, but at some points in space (especially indoors), fewer than four DUs may be visible. In that case, opportunistically complementing with other technologies [105] can improve the chances of acquiring location. The exact improvements that can be obtained will depend on environmental aspects such as the topography of the surroundings, available networks, density of DUs and reference points of other technologies, etc.; and will come at a cost that also has to be measured in a testbed, mainly in terms of excess power consumption (in the network, due to the deployment of other RANs, and in the terminals, due to the activity of more network interfaces) and computational resources.
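A sketch of the range-fusion idea: ranges from two co-deployed technologies with different error levels are combined in a weighted Gauss-Newton position solver, where each residual is weighted by the inverse variance of its source. Anchor positions, noise levels, and the target position are hypothetical example values.

```python
import numpy as np

# Hypothetical reference points from two co-deployed technologies.
anchors = np.array([[0.0, 0.0], [25.0, 0.0], [0.0, 25.0], [25.0, 25.0]])
sigma   = np.array([1.0, 1.0, 0.2, 0.2])   # first two noisier than the others
true_pos = np.array([10.0, 6.0])

rng = np.random.default_rng(2)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma)

# Gauss-Newton refinement of a rough initial guess, weighting each range
# residual by the inverse variance of its source technology.
p = anchors.mean(axis=0)                    # crude starting point (centroid)
for _ in range(10):
    d = np.linalg.norm(anchors - p, axis=1)
    J = (p - anchors) / d[:, None]          # Jacobian of the range model
    W = np.diag(1.0 / sigma**2)
    delta, *_ = np.linalg.lstsq(J.T @ W @ J, J.T @ W @ (ranges - d), rcond=None)
    p = p + delta
print(p)  # near true_pos, dominated by the more accurate ranges
```

The inverse-variance weighting is what lets an accurate local technology (e.g., UWB) dominate the solution when a noisier wide-area range is all that is otherwise available, without discarding the latter.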

7) SMART METASURFACES
In traditional mobile communications, reflections are usually considered random and uncontrollable, and are sometimes treated as a negative effect that causes scattering and needs to be mitigated. On the other hand, reflections are also used for NLOS communications. With smart metasurfaces [28], [205], reflections can be strategically modulated to improve the propagation conditions. Smart metasurfaces are made up of nanostructures and metamaterials that can shape the electromagnetic properties of the surface (such as its reflectivity, selectivity to frequencies and polarization, etc.).
Regarding location, smart metasurfaces can improve indoor location, where NLOS conditions are dominant, for instance by exploiting near-field effects [36].
Smart metasurfaces are a cutting-edge technology, where much research and development must still be done and major challenges overcome. For instance, a major question when using smart metasurfaces is where to install them [206] in order to obtain the best results. The physical characteristics of the materials are also an open research question [207], as are the propagation models for materials with different sets of characteristics [208].

8) MASSIVE MIMO
In Massive MIMO (mMIMO), devices have many antennas (up to millions of elements [16]) that enable different kinds of connectivity improvements. mMIMO has been used in 5G for beamforming [209] (allowing very narrow and quickly reconfigurable beams), spatial multiplexing (to increase the capacity of individual users), increased diversity gain [210] (to achieve higher reliability) and AoA estimation in communications [138]. In 6G, with THz frequencies, mMIMO antennas can be very small [211], using advanced fabrication processes with metamaterials.
mMIMO offers some interesting elements for location systems. AoA estimation is an obvious location enabler, as previously discussed in Section III-C. In 6G, thanks to a higher number of antennas, these estimations may be much more accurate. Furthermore, the use of THz arrays will allow the use of personal radars [2], which can create radio images of the surrounding environment, for applications such as SLAM.
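As a minimal illustration of array-based AoA estimation, the sketch below simulates per-element carrier phases of a plane wave across a hypothetical half-wavelength uniform linear array and recovers the angle from the least-squares phase slope. The frequency, array size, noise level, and true angle are assumptions for the example; a real system would use more robust estimators (e.g., subspace methods).

```python
import numpy as np

# Hypothetical uniform linear array: half-wavelength spacing at 100 GHz.
f = 100e9
lam = 3e8 / f
d = lam / 2
n = 64                                   # number of array elements
true_aoa = np.deg2rad(25.0)

# Per-element carrier phase of a far-field plane wave, plus phase noise.
rng = np.random.default_rng(3)
k = np.arange(n)
raw = 2 * np.pi * d * k * np.sin(true_aoa) / lam + rng.normal(0, 0.1, n)
# Phases are only measurable modulo 2*pi; unwrap before fitting the slope.
phase = np.unwrap(np.angle(np.exp(1j * raw)))

# Estimate the AoA from the least-squares phase slope across the array.
slope = np.polyfit(k, phase, 1)[0]
aoa_est = np.arcsin(slope * lam / (2 * np.pi * d))
print(np.rad2deg(aoa_est))  # close to 25 degrees
```

The slope-fit averages phase noise over all elements, which is why larger arrays (as expected in 6G mMIMO) directly translate into finer angular, and thus location, accuracy.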
To further investigate these technologies, testbed measurements should include mMIMO antennas and transceivers. Aspects such as spatial and angular resolution need to be assessed in order to establish the achievable location accuracy. Moreover, other practical aspects still need to be studied, such as power consumption, or the reliability of personal radars in different scenarios (for instance, whether the user needs to proactively interact with the device to obtain a radio map of the environment).

9) D2D
In mobile networks, terminals usually connect to a single serving base station. As new releases have arrived, this original scheme has been extended with alternative configurations such as multiconnectivity [212] and Device-to-Device communications (D2D) [213] in the 3GPP standards. D2D allows terminals to communicate directly with each other. D2D communications can help save energy [214] when Peer-to-Peer (P2P) services are running within a small geographical area. In this case, terminals within the local area can use a much lower transmission power to reach nearby terminals, instead of a distant base station. D2D can also be used to extend the coverage of the mobile network [215], using terminals as relays of the base station to serve other nearby terminals that are out of coverage due to obstacles. In 6G [216], all the technical improvements, such as AI/ML, Edge Computing, the novel architecture, etc., will enhance D2D communications.
D2D can be used for cooperative location [156], where terminals use the signals from other terminals (either passively or actively) to estimate distances for trilateration or fingerprinting. This is especially useful when some terminals are not within the coverage area of a fixed reference point. D2D thus helps in situations where coverage is problematic, both for communications and for location. However, these situations tend to occur when propagation conditions are harsh, for instance, in underground or industrial settings. In such situations, while D2D would potentially improve the chances of having enough distance measurements for location, these distances will likely be inaccurate, since they also depend on channel quality. To assess the usefulness of D2D for location in 6G, the accuracy in real situations must be studied, as well as the achievable improvement in reliability (in other words, the improvement in the chances of measuring enough reference points).

C. 6G LOCATION-DEPENDENT FUNCTIONALITY
While 6G provides many enablers for new and improved location functions, as shown in the previous section, it is also the most location-dependent mobile generation so far. Several potential B5G and 6G technologies will rely heavily on location information. The design and testing process of these technologies will soon require a testbed with location capabilities that mimic those expected for 6G. These location-dependent functionalities are described in this section, along with the general requirements they impose on the location service.

1) RAN FUNCTIONS
Mobile network operation relies on a complex set of individual functions in the RAN. For instance, handovers in their different flavors (soft, softer, and hard) and cell reselection have been the basis of mobility since the very early 3GPP releases, complemented with more advanced functions such as beam selection and secondary cell selection for multiconnectivity. Other RAN functions are not directly related to mobility, but to traffic management, for instance, traffic steering or admission control. In recent years, these functions have been complemented with AI/ML techniques to make them proactive. For instance, in [125], schemes for adjusting network slice resources in different scenarios are proposed, based on predictors of traffic composition. In 6G, as most RAN functions tend to be virtualized, management can be much more flexible, allowing the adjustment not only of radio resources, but also of computational resources [217]. In fact, Open RAN will greatly augment the possibilities of management in the releases leading to 6G [218].
Location will play a major role in network functions in 6G. For instance, resources can be dynamically assigned to DUs and specific beams as a function of the projected aggregated location of the users in the network. This will allow a more efficient usage of resources.
For these functions, location will need to meet specifications that depend on the specific function and the service being provided. For instance, in multiconnectivity for URLLC, the selection of new secondary base stations or the change of beams must be done in a short period, so that the terminal never loses network connectivity. Location latency will then need to be very low (on the order of a few ms) to ensure a correct assignment of resources. Regarding accuracy, it will depend on the size of the radio features (coverage area, beam width, etc.) that a specific function covers.

2) NETWORK MANAGEMENT
Network management comprises the configuration, optimization, and troubleshooting of RAN and core network functions. As mobile networks have become more complex, automation of network management has become increasingly necessary. In recent years, many different solutions have been proposed, ranging from very specific management problems [125], [206], [219] to whole Self-Organized Network (SON) architectures [220], [221], [222] based on AI/ML and Big Data analytics. Such proposals respond to the increased complexity of network management due to, among other factors, the coexistence of different RANs (2/3/4/5G, WiFi, LPWANs). In 6G, AI/ML will be an integral part of the core network, and management automation is expected to rely heavily on it [223], enabling intent-based management, which translates business requirements into specific network parameter configurations. Open RAN will increase the complexity of management, but will also increase the range of operations that can be performed without human intervention (such as adding or removing computing infrastructure).
Network optimization is normally done offline; that is, it is a long-term task that continuously monitors the network state and gradually implements fine-tuning of network functions. While it is not time-sensitive, automatic optimization cannot take too long to adapt the resources to changes in the environment or traffic. For instance, for human-centric services, the permissible timeframes have been reduced from several days in early network generations to several minutes in 5G.
Regarding troubleshooting, automation must fulfill four tasks [221]: detection of the problem and identification of which network elements are affected, compensation (i.e., redirection of redundant equipment in the network to serve the affected users), diagnosis or root cause analysis, and resolution.
The time frame of troubleshooting depends on the type of failure, ranging from low priority ones (such as cells having non-optimal configuration parameters) which can be observed for several days before being resolved, to critical ones (such as coverage holes in areas with URLLC terminals), which must be corrected proactively, before users are affected.
Location will be a key resource for many management functions. Both radio resources (such as bandwidth or radiated power) and computing resources (memory, compute time, and priority of virtual functions) can be proactively redistributed among different DUs depending on where the users are concentrated or where they are moving. Moreover, configuration can take into account specific users with critical requirements, helping to optimize the network for them. Location can also be an invaluable resource for troubleshooting, for instance to locate coverage holes or inefficiencies in beam parameters.
The requirements imposed on the location service will vary with the specific management function. For instance, for a CU to proactively optimize the resources (radio and computing) of several DUs serving end-user communications, the aggregated location of users must be calculated with a delay on the order of tens of seconds to a few minutes. Above this, users may notice some effects derived from resource scarcity. Nevertheless, if the DUs are serving traffic with high performance requirements, the resource management may have to be done in much shorter periods (tens of ms) to ensure that the terminals have resources when needed. In this case, location must be calculated with a latency on the order of a few ms. In any case, this low latency would only be required for critical terminals or for terminals that are moving very fast.

3) COMPUTATIONAL RESOURCE PLACEMENT
With Open RAN, network elements are virtualized, containerized, and run on Commercial Off-The-Shelf (COTS) computers with diverse platform architectures, operating systems, resources, and placements (i.e., the location of the physical computer running the virtual image). A single instance of a specific function (e.g., a CU) can even run on a distributed cloud, with parts of it running on different physical machines. This adds new dimensions to mobile network management: computing power (in terms of processing capacity), memory and storage space, energy consumption, and placement. Placement is especially relevant in time-sensitive applications where the additional latency introduced by the backbone network in higher-layer functions (e.g., authentication) is not affordable. In 6G, thanks to Open RAN and virtualization, placement has a more profound influence than in prior generations, since not only the end services can be moved to the edge (as in 5G), but also RAN and core network functions [224], [225].
To successfully exploit the capability of changing the placement of network functions in 6G, the key aspect to know about the users is their location. As with management, only the location of users with special needs must be known. With this information, virtualized functions can be moved closer to the users, cutting the latency introduced by the network. Furthermore, with trajectory analysis, proactive placement of the functions can be done.
In the case of resource placement, the required location is coarse, with an accuracy comparable to the service area of a DU being sufficient. Nevertheless, for more sophisticated mechanisms, such as proactive placement based on trajectory analysis, a higher accuracy may be required, down to several meters. To test such mechanisms, a testbed should be able to dynamically place functions within a representative area and measure KPIs such as the latency reduction due to correct placement or the proportion of incorrect placements in proactive placement.

4) CONTEXT AWARENESS
Mobile networks exist within a context [226] that directly affects them in several ways. Some of the most obvious contextual factors are external interference and traffic patterns due to events that occur within the coverage area of the network, such as social events [227], disasters (which may also damage the network infrastructure) or user mobility patterns [177]. On a smaller scale, the radioelectric environment may also change; for instance, within a small area, the passage of cars or the opening and closing of doors may modify the LOS/NLOS conditions. Knowledge of these contextual factors will enable better network planning, proactive management, and placement of functions.
The location of users with respect to the network infrastructure is a very important contextual factor. It may define not only the LOS/NLOS conditions of individual users, but also the mobility patterns of the users as a whole [177]. This information is important for context-aware functionality, such as context-aware network management or physical layer configuration. For location-aware functionality, the accuracy and latency required of the location will highly depend on the specific function. For instance, in high-level network management, the aggregation of locations will partially reduce the impact of individual errors, so a low accuracy (up to tens of meters) is acceptable. LOS/NLOS detection, on the other hand, may have a much stricter accuracy requirement, especially in indoor scenarios, where conditions change at sub-meter scales.
To test such functions in a testbed, magnitudes such as the sensitivity to location errors and latency must be measured. Another important aspect is the fraction of users reporting location. Since not all users may be located (due to device capabilities or privacy options), it may happen that only a sample of the users is located, so the representativeness of this sample will determine the performance of location-aware network functions.

5) LOCATION AS ENABLER OF AI AND ML
As described earlier in Section IV-B, AI and ML can be used as enablers for location. Conversely, location can be used as an input for location-aware AI/ML models. In prior mobile network generations, AI/ML has been proposed for several network functionalities and management mechanisms. The concept of SON, which was proposed back when 3G was rolling out, relies heavily on AI/ML for tasks such as troubleshooting and parameter optimization. In 4G, SON functions were also proposed [221], and some of them, such as Automatic Neighbor Relation (ANR), were even part of the 3GPP standard [228]. As research on 5G is still ongoing, AI/ML solutions to common problems abound, especially for managing complex functions such as Network Slicing [56], [125]. With 6G, the use of more sophisticated AI/ML systems is expected. There are three roles that AI/ML will take in 6G:
• AI/ML for running network functions: some common functions, such as resource assignment [217], traffic steering [229] or security [230], will be implemented with AI/ML algorithms that make them predictive and adaptable to changes in the environment. These functions will be part of the Near-RT-RIC and distributed as xApps.
• AI/ML for network management: AI/ML algorithms will also be used for network orchestration functions in the Non-RT-RIC. Research is ongoing for orchestration tasks such as network optimization [231], while others, like troubleshooting, have not yet received much attention.
• AI/ML as a service: as already explained in Section IV-B.
For all of these functions, location can be used as an input to AI/ML algorithms. Location information has been used for tasks such as network orchestration [177] or virtual function placement [186], [187].
When location is the enabler of other functions and services, the problem to study is whether its performance meets their requirements. Moreover, dataset security must also be explored, evaluating risks such as model inversion [230] or dataset poisoning [20].

D. KEY TAKEAWAYS
In this section, the location topics related to 6G have been reviewed.
First, a short review of location techniques was presented, showing the type of algorithms that run in the devices tested in a location testbed. This will, in turn, define which elements must be present in the testbed and the workflow that must be followed. For instance, a fingerprinting-based location device must include mechanisms for building the map prior to further evaluation.
The 6G technologies that will be enablers for location have also been reviewed, along with the open research questions. A location testbed may support all or some of these technologies, depending on whether an integrated or partial evaluation of these aspects is required.
Finally, location-aware functions in 6G have been reviewed. In this case, location acts as an input to the functions and must be provided by the testbed. It is up to the specific experiment whether the source of location is part of the test (i.e., location computed solely by 6G techniques or by components present in 6G devices) or just part of the ground truth.

V. ARCHITECTURE FOR A 6G LOCATION TESTBED
In this section, an architecture for a 6G location testbed is described, fully detailing each part and the considerations for material procurement. The proposed architecture is meant for implementing testbeds where an iterative approach can be applied. The operators of the testbed will devise experiments to test location devices, location algorithms, or location-based functionalities and services (referred to as the Device Under Test, or DUT, hereafter), and program the testbed to perform them and collect data. This data will then be automatically analyzed with previously programmed data analytics mechanisms within the infrastructure of the testbed. The output will inform the operators about the behavior of the DUT and its compliance with certain requirements, and adjustments can be made to it for the next iteration. Once the DUT reaches a certain level of maturity, it can leave the testbed and advance further in the R&D workflow.
Figure 6 shows the overall proposed architecture. This architecture has four scopes: the physical scope, which comprises the physical setting of the testbed; the 6G network scope, made up of an Open RAN based infrastructure; the service scope, containing the applications; and the Research and Development (R&D) scope, which adds the testbed functionality to the standard equipment and services of the other scopes. In the following subsections, each of these scopes is studied in further detail, listing some COTS components that may be procured to implement this architecture.

A. PHYSICAL SCOPE
The physical scope comprises all the elements of the physical setting of the testbed. Compared with testbeds for other purposes, the physical scope of a location testbed is especially important and complex, since location is a highly environment-dependent functionality. The physical scope contains the physical space, the elements to calculate the ground truth, and the hardware (6G and other technologies such as UWB and WiFi-FTM).

1) PHYSICAL SPACE
The physical space or setting is the environment where the testbed is deployed. It will greatly determine the challenges for location as well as the location solutions that can be used. There are several aspects that must be taken into account when choosing a site:
• Indoors vs. outdoors: in indoor scenarios [232], walls and furniture act as obstacles, making propagation more difficult and creating more NLOS propagation situations. On the other hand, indoor scenarios are also more prone to dense deployments, which helps location by providing more reference points. In outdoor environments, LOS propagation is more common, and deployments tend to be sparser. In outdoor scenarios, GNSS systems can also be used to complement 6G location.
• Mobility of targets: targets that move fast or change course pose several challenges, such as the need for high-frequency, low-latency location, and a high Doppler effect [233]. They also pose logistical challenges, requiring the physical setup to be specifically planned for the mobile targets, with appropriate clearance or elements such as rails or robots [48], [49].
• Density of clutter: in indoor scenarios, the density of obstacles such as furniture, metallic structures, walls, or even people, will define the propagation characteristics and hence the properties of location. In outdoor spaces, the density of obstacles also plays an important role, for instance in urban canyons or parking lots.
• Deployment density: the density of reference points plays a major role in the accuracy of location. Higher densities will normally produce a higher accuracy, at a higher hardware cost. Network access points usually act as location reference points, but to achieve a higher density, location-specific equipment can also be used, even with different technologies [105].
• Special scenarios: the testbed can emulate a generic environment (e.g., generic outdoors or indoors), or specific settings, such as factories [50] for industrial communications and location, cities (at full or reduced scale) for vehicular location [234], etc. Specific location-aware applications will be analyzed in the service scope, but having a realistic physical setting will help to better understand the interworking of the service with the environment.
The physical setting is one of the main decisions to take when deploying a testbed. Since the physical setting is the base over which the rest of the elements will be mounted, it is also important to choose it early in the design process. Apart from the type of environment, other practical decisions must be taken, such as the total area that the setting will cover, whether it is a public space or reserved only for experimentation, and the layout of the space. The setting can even be nomadic if the purpose of the testbed is to evaluate location in different settings. In this case, a protocol must be put in place to evaluate the terrain, acquire a map, and deploy the equipment each time the setting is changed.
Finally, another aspect that must be taken into account is spectrum licensing. To radiate at certain frequencies, a license (which can be shared with mobile network operators) and detailed frequency planning are required. Occasionally this is not enough, and to avoid interference with the public mobile network, some limits must be respected, such as transmission power, antenna tilt, or distance from the operator's base stations.

2) GROUND TRUTH
The ground truth refers to the actual value of a magnitude, free of estimation error. It is used to validate methods for approximating the value of the magnitude, and is normally acquired in measurement campaigns oriented to validation. In the case of location, the ground truth is the actual location of a target over the 2D or 3D map of the physical setting. If time is also taken into account (i.e., in the case of moving targets), the ground truth is a three- or four-dimensional vector, with a timestamp as one of its components. It is important to have an accurate ground truth, because the whole purpose of the testbed is to compare the outcome of location algorithms or location-aware mechanisms with the information provided by the ground truth. In other words, the accuracy of the ground truth determines the validity of the testbed.
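As a minimal illustration, such a sample can be modeled as a timestamped position record; the class and field names below are hypothetical, not part of any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GroundTruthSample:
    """One ground-truth sample: 3D position (map frame, metres) plus timestamp."""
    x: float
    y: float
    z: float
    t: float  # seconds since epoch; the timestamp makes the sample a 4D vector

    def as_vector(self):
        """Return the four-dimensional (x, y, z, t) vector described in the text."""
        return (self.x, self.y, self.z, self.t)

    def position_error(self, est_x: float, est_y: float, est_z: float) -> float:
        """Euclidean distance between this ground truth and an estimated position."""
        return ((self.x - est_x) ** 2 + (self.y - est_y) ** 2
                + (self.z - est_z) ** 2) ** 0.5
```

The `position_error` helper is the elementary operation the rest of the testbed builds on: comparing estimates against the ground truth.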
VOLUME 11, 2023
FIGURE 6. Proposed architecture for the 6G location testbed. Blue arrows indicate the data flow (measurements towards the R&D scope and commands from the IDCE element).
The ground truth can be estimated through three possible strategies:
• Markings on the floor: painting marks at predetermined places ensures that the target is in a known position. The achievable accuracy is very high in this case, with instruments such as laser distance scanners. Another option is using a grid with fixed-size cells, which helps to place targets in known positions. The grid can be used either as a guide for fine-tuning the target position (by measuring the distance to the closest grid lines), or as a coarse approximation to the ground truth (by using the position of the cell where the target is located as an approximation for its position). Some examples of testbeds using this approach are [43], [51]. Markings on the floor can provide a very reliable and accurate (down to the mm) 2D ground truth, but the approach has several problems. Firstly, it is not ideal for 3D positioning, since it requires supporting elements (e.g., metallic stands) that may introduce biases into the location methods studied in the testbed (e.g., due to reflections). Secondly, it is not valid for moving objects, since targets must be previously placed on the markings. Thirdly, it is not automatable, since it requires manually placing objects on the markings.
• Sensors: the location of the targets can also be acquired by external sensors, such as cameras [235], motion-sensing devices, radars, or IMUs. These sensors must be strategically placed in the physical setting in order to obtain timely and accurate information. The achievable accuracy depends greatly on the specific device used. In [235], for instance, an accuracy below 10 cm is achieved indoors with video feeds. Some sensors can also perform 3D location without the need for supports. Sensors also have the advantages of being able to easily measure moving targets and being fully automatable. This comes at a higher equipment cost, which may also increase the work in redeployments in nomadic testbeds. Testbeds like the one described in [49] use this approach combined with mobile robots.
• Location technologies: if the testbed is not intended for measuring multi-technology location, other technologies, such as UWB, WiFi-FTM, or GNSS, can be used for obtaining a ground truth. The accuracy achievable with these technologies depends greatly on the specific device, the radioelectric features of the physical setting, and the density of deployment of reference points. UWB and WiFi can achieve accuracies down to several cm [47], while GNSS varies between several meters [201] down to sub-cm with GPS-RTK [203]. The cost of such technologies is normally higher for more accurate and high-quality equipment, and they also have installation costs (in terms of planning, testing, and validating their setup). All of these technologies support 3D location and moving targets, and are highly automatable. For example, the testbed described in [45] uses this approach.
The ground truth system must be deployed and calibrated before the operation of the testbed. Calibration consists of a validation of the location provided by the system. To do this, several known positions must be measured with the ground truth system, and the provided location must match the known position. Once this is done, the information provided by the ground truth system is considered to be correct. To maintain this trust in the system, regular calibration might be needed. For this, tools like laser distance scanners might be used. The markings-on-the-floor approach can also be used for calibration, and a permanent grid can be set up to make this task easier, especially in the case of fixed testbeds. For nomadic testbeds, a calibration phase must be included in the redeployment protocol.
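The calibration step described above amounts to an acceptance check: the system's reported positions at known reference points must match them within a tolerance. A minimal sketch, where the function name and the tolerance value are illustrative assumptions:

```python
def calibration_passes(known_points, reported_points, tolerance_m=0.05):
    """Validate a ground-truth system: every reported 2D/3D position must lie
    within `tolerance_m` metres of the corresponding known reference point."""
    for known, reported in zip(known_points, reported_points):
        # Euclidean distance between the known point and the system's reading
        err = sum((k - r) ** 2 for k, r in zip(known, reported)) ** 0.5
        if err > tolerance_m:
            return False
    return True
```

Only after this check passes can the system's output be treated as error-free by the R&D scope; the same routine can be rerun for the periodic recalibrations mentioned above.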
Once set up and calibrated, the ground truth system will provide the two-, three-, or four-dimensional vectors of the location of the targets over the physical setting. An interface between the ground truth system and the R&D scope must be set up. For the markings-on-the-floor approach, a user interface that allows the operator to manually enter the measurements must be developed (e.g., the mobile application described in [42]), taking into account User Experience considerations that simplify operation and prevent human error. Sensors and other location technologies will usually provide interfaces for data acquisition, such as HTTP, MQTT, or USB interfaces, so an adapter must be developed to connect them to the testbed data acquisition system.
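Such an adapter can be sketched as a small translation function: a hypothetical ground-truth sensor publishing JSON (e.g., over MQTT or HTTP) is converted into the testbed's internal record format. The payload field names below are assumptions for illustration, not a real device API:

```python
import json

def adapt_sensor_payload(raw: bytes) -> dict:
    """Translate a hypothetical ground-truth sensor JSON payload into a
    common testbed record with normalized keys and SI units."""
    msg = json.loads(raw)
    return {
        "source": "ground_truth_sensor",
        "target_id": msg["tag"],
        "x_m": float(msg["pos"][0]),
        "y_m": float(msg["pos"][1]),
        "z_m": float(msg["pos"][2]),
        "t_s": float(msg["ts_ms"]) / 1000.0,  # device reports milliseconds
    }
```

One such adapter per ground-truth interface keeps the rest of the data acquisition system independent of the specific sensor vendor.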

3) HARDWARE
With the setting and the ground truth in place, the next item to consider in the design of a testbed is the location hardware. This item refers to the hardware of the location system under test, not the ground truth hardware. The required hardware can be classified into the following items:
• Access points and radio elements: the infrastructure of the physical layer of the network must be planned, acquired, and deployed according to a prior design. This will mainly include the Radio Unit (RU), made up of SDR devices (such as LimeSDR [236] or USRP [237]) with diverse types of antennas (dipoles, directional antennas, and mMIMO arrays). Depending on the objective of the testbed, the setup might reflect a well-planned network, with good coverage over the physical setting, or a deployment with coverage holes (e.g., for emulating disaster scenarios or underserved areas).
• RIS: since RIS will likely play an important role in 6G, they can also be part of the testbed. RIS may be deployed on preselected surfaces of the physical setting, with the possibility of moving them around if the purpose of a test is to assess the placement of RIS over different surfaces.
• Terminals: the terminals will be the targets for location. At the time of writing, 6G devices are still far from reaching the market (with few experimental devices starting development [238]), but several decisions may future-proof the testbed so that it can eventually admit them when they are released. One option is not to use COTS devices, but experimental prototypes based on SDR (such as LimeSDR [236] or USRP [237] devices) and programmable platforms, such as PCs/laptops, Raspberry Pi [239] or HiFive [240]. This will require deep knowledge of the end service by the testbed operators, or outsourcing the production of such devices to external organizations. Another option is to use 5G devices, especially while a 6G standard is not available and the testbed implements a 5G RAN/Core. Such devices would become obsolete once a 6G RAN/Core is available, but in the meantime, they can be used to reliably perform scaled-down experiments without the need to fully develop experimental prototypes of the end services. They will also provide a better grasp of secondary effects that may not be modeled in experimental prototypes. Such terminals can be static or mobile (carried by vehicles, robots, or persons).
• Computing hardware: with Open RAN, most of the 6G network is completely virtualized, so computing power is a very important requirement. A major advantage of network virtualization is that COTS computing equipment can be used, greatly simplifying the process of hardware acquisition. Therefore, the computing hardware can be standard computers, based on Intel/AMD or ARM [241] architectures. In the near future, the RISC-V [242], [243] architecture is also expected to gain popularity in open systems such as Open RAN. The computing hardware can be either concentrated in one powerful computer or distributed over a network-connected cloud. Also, hardware for backups (e.g., Network Attached Storage systems) must be acquired, as well as systems for ensuring continuous operation (i.e., UPS or generators, depending on the size of the installation).
• Backbone network: in the case that the computing hardware is designed as a cloud, a backbone network must also be deployed. To ensure low latencies and high bandwidths, it may be necessary to use optic fiber connections instead of traditional Ethernet cables.
• Complementary location technologies: if the testbed includes multi-technology location, other technologies such as UWB and WiFi-FTM may be planned, deployed, and used over the physical setting, and connected to the 6G core network to provide measurements. This design must also reflect the intended objective of the testbed (e.g., using sparse deployments of UWB/WiFi to complement 6G location). This setup will most likely differ from the setup of these technologies as ground truth. In any case, they cannot simultaneously play a dual role as part of the location system under test and as ground truth.
Once all these elements have been planned according to a specific testbed design, they can be acquired and installed in the physical setting. Installation may be permanent or removable, and in the case of nomadic setups, the impact on the environment (e.g., holes in walls for mounts) must be minimized.
Another important aspect to take into account is replacements. Ideally, replacements must be available before they are needed, so acquisitions should be done with a margin for extra replacement parts. Replacements for commercial equipment (such as specific models of smartphones) can be especially hard to find in the market after some time.

B. 6G NETWORK SCOPE
This fully software-based scope contains the 6G RAN and core network. To run the network, either cloud or local computing infrastructure is needed (which is part of the physical scope). To successfully run a testbed, such a network implementation must be open source or, at least, fully configurable with the possibility of adding new custom functions. The 6G network will also contain all the required logic for Edge Computing and MLaaS. The 6G network will run two types of virtual machines: the Open RAN virtual machine and the core network virtual machine, each made up of several software entities.

1) OPEN RAN VIRTUAL MACHINE
In traditional networks, the RAN is a separate entity made up of a network of physical nodes containing different network functions. In the Open RAN approach, which started with 5G and will be fully adopted in 6G, all the functions are virtualized and run as microservices in containers (e.g., Docker) or virtual machines (e.g., KVM). As described in Section IV-B, to have a functional 6G network, the following virtual elements would be needed:
• Base stations: what in prior generations was a single entity can be made up of three disaggregated elements in Open RAN: the RU (which stands between the 6G network and the physical scopes), the DU (which controls several RUs), and the CU (which controls several DUs and may run edge services). The RUs were described in Section V-A. The DU and CU will be software components running as microservices on the physical computer closest to the RUs, and can in some cases be integrated as a single element.
• Near-RT-RIC: it should be deployed on a physical machine where it has ample resources and priority, as well as good connectivity with the CUs of the network. This virtual function should also support the easy addition of microservices through a remote connection such as SSH or a remote package manager, to easily add and remove xApps.
• Non-RT-RIC: the network management functions running in the RAN (e.g., decentralized optimization algorithms) will be hosted in this element, which will run either in the computing infrastructure of the testbed or even in a remote cloud.
Implementations for 6G Open RAN do not exist yet, so two options can be weighed: either using a 5G Open RAN as a placeholder for 6G, or using an experimental 6G Open RAN platform. The first option can be used for developing services and schemes with performances that are scaled down to the capabilities of 5G, with the promise of providing better results once 6G functions are available to them. This option must include a path for upgrading the RAN to 6G once a viable implementation exists. Experimental 6G Open RAN platforms will be developed on the basis of 5G Open RAN [22], so most likely they will work out of the box, albeit requiring more effort from the operator of the testbed and being subject to possible software instability.
In either option, the adoption of open source solutions, such as the O-RAN Alliance [169] implementation, will have several advantages: access to the source code, allowing operators to better understand the inner workings of the system and to modify any functionality; easy development of new functionality that depends on the CU and DU microservices; and, most importantly, future-proofing the testbed by ensuring that it will support a 6G (and beyond) Open RAN once it is available.

2) CORE NETWORK VIRTUAL MACHINE
The core network of a mobile network traditionally contains the higher-layer functionality of the control plane, as well as the Packet Gateway (PGW or the equivalent function), which connects the user plane to the Internet. 2G and 3G also had gateways to the Public Switched Telephone Network (PSTN), but from 4G onwards this element was removed in favor of Voice over IP (VoIP). In 5G, the User Plane Function (UPF [244]) acts as an evolved PGW, with additional QoS functions, packet inspection services, and multi-RAN support. In the control plane, the 5G core network contains functions such as the Access and Mobility Management Function (AMF) and the Session Management Function (SMF). These functions are already virtualized in 5G. In 6G, it is expected that these functions will be complemented with novel services such as the MLaaS component and a specific component for user location.
The AI/ML component [17] will offer ML to other core network functions, to RAN functions that are not delay-sensitive (e.g., those running on the Non-RT-RIC), and even to end services through a yet-to-be-established MLaaS API. The AI/ML functions will run in a virtual machine over a physical machine with specialized ML accelerators (such as CUDA-capable GPUs [245] or the Intel Movidius [246] platform). Therefore, to have a powerful AI/ML component in the testbed, the acquisition of such devices is advisable, as is the development of a standardized interface to the rest of the core network, the RAN, and the end services.
The user location component will be of utmost importance in the proposed testbed architecture. It will contain the functions that gather information from the RAN (such as measurements of RTT or RSRP in 6G), measurements from other interconnected systems such as UWB or WiFi-FTM, and measurements from the terminals (e.g., GNSS readings), to estimate a user location. This component may offer the real-time calculated location to the terminals, other network functions, or external services (if the user privacy settings allow this), and also perform location-aware computations, such as tracking and mapping. There are ongoing works for implementing this component for 5G networks, such as the LOCUS project [31]. In the 6G testbed, this component may be populated with the functionality that is standardized in the next 3GPP releases leading to 6G; but at the moment of writing this paper, it is an empty component whose development will be the very objective of the testbed. Therefore, it will be the main component connected to the R&D scope, where new schemes will be developed, deployed in the location function, and validated.
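As an illustration of the kind of algorithm this component can host, the classic linearized multilateration from range measurements (e.g., ranges derived from RTT) can be sketched as follows. This is a textbook 2D least-squares formulation, not a 3GPP-specified method, and the function name is illustrative:

```python
def multilaterate_2d(anchors, ranges):
    """Estimate a 2D position from >= 3 anchor positions and measured ranges.

    Linearizes the range equations by subtracting the first one, then solves
    the 2x2 normal equations with Cramer's rule (pure Python, no NumPy).
    Anchors must not be collinear, or the system is singular."""
    (x1, y1), r1 = anchors[0], ranges[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        # Row of the linear system A [x, y]^T = b
        ax, ay = 2 * (xi - x1), 2 * (yi - y1)
        bi = r1 ** 2 - ri ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2
        # Accumulate the normal equations A^T A and A^T b
        a11 += ax * ax
        a12 += ax * ay
        a22 += ay * ay
        b1 += ax * bi
        b2 += ay * bi
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)
```

In the testbed, the output of such an estimator would be compared against the ground truth system to quantify its accuracy.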
Just as in the case of the 6G RAN, there is no standard implementation for a 6G core network yet. Again, the testbed designer can opt either to adopt an existing 5G core network (such as [247], [248], and [249]), or to use an experimental 6G core network, with the same implications as in the Open RAN case. Also in this case, it is advisable to use an open source implementation that adopts new technologies leading to 6G.

C. SERVICE SCOPE
The end services will use the 6G network to implement different location-aware applications over the physical setting. These services will be fully controlled by the testbed operator, such that they can be programmed, monitored, and assessed. They will run over hardware such as smartphones, laptops, mobile robots, or drones on the terminal side, and cloud servers on the backend side. In the case of a location testbed, location-related services will be evaluated. Such services either depend on location (i.e., are location-aware) or impact the network in different manners depending on their location.
Location-aware services are those that need to know the location of the users, with requirements such as accuracy, latency, or frequency. A comprehensive list of such applications is given in Section III. The testbed must measure two aspects:
• Application performance: given a specific set of location characteristics, the testbed will measure the performance of the applications by closely monitoring them with sensors embedded within the DUTs or with external sensors (such as cameras). These monitoring systems will measure different SKPIs, such as navigation errors in robots or self-driving cars, or QoPE in XR applications.
• Location as a Service performance: once the requirements of specific services are known, the performance of the location component will be monitored for different services, determining whether it fulfills the requirements, the factors that may affect the QoS, etc.
The selection of applications will determine the purpose of the testbed. General purpose testbeds may add generic services, such as location with potentially mobile users. Special purpose testbeds may, on the other hand, acquire a set of services that are specific to a special scenario. These choices will be complemented with specific choices in the physical layout, emulating the environment where these services will run. Ideally, such applications must also have open specifications, such that developing components to connect them to the R&D scope for monitoring and programming is feasible.
Applications will also play a role in creating different effects that may affect the location function. For instance, functions that rely on the aggregated location of users (such as location-aware network management algorithms) will ultimately depend on certain services running within the coverage area of the network. This scope will have to be able to emulate different situations where the services are not location-dependent, but where their characteristics will affect other location-dependent functions. For this, full applications or simply traffic emulators can be programmed and deployed in the testbed, the only requirement being that they can be controlled to perform experiments.

D. R&D SCOPE
This scope comprises all the actual monitoring of the other scopes and the development work. It includes a data management system that will receive ground truth measurements, measurements from devices and from the network, from external systems, and from end services. These measurements will be stored in data repositories and processed in an evaluation framework, offering the developer insights on the DUT. The developer will then use the obtained insights to modify the DUT with an Integrated Development and Control Environment (IDCE). From the IDCE, new programs or configurations can be deployed to the physical, network, and application scopes; the full testbed can be controlled; and the progress of the development process can be saved in development repositories, where the work may find a path to market.

1) DATA MANAGEMENT SYSTEM
The data management system is one of the most complex parts of the testbed. While the other parts of the system may potentially be implemented with COTS equipment, the data management system must be custom-developed for the acquired materials. The data will come from seven different origins:
• The 6G RAN: magnitudes present in the network operation, such as raw measurements used for location (e.g., RTT, RSRP, etc.), KPIs of the different RAN components (such as total RU throughput, packet loss, etc.), or the computing load of the virtual machines.
• The location component: as one of the most important components of the location testbed, the location component will be heavily monitored, measuring magnitudes related to the estimated location, such as the accuracy, the compliance of requirements, the resource consumption, etc.
• The 6G core network virtual machine: other components of the 6G core network may also be of interest for the development of location-aware management, such as the throughput at the UPF, or the resource consumption in AI/ML services.
• Device measurements: raw measurements taken from the device for location estimation (RTT, RSRP, etc. from the 6G network or other RAN technologies), GNSS measurements, device status, etc.
• Multi-tech measurements: range measurements from complementary technologies such as UWB or WiFi-FTM.
• Ground truth measurements: actual positions of the location targets, taken with the devices described in Section V-A. As earlier stated, for the R&D scope, these measurements will be considered free of errors.
• Service measurements: as described in Section V-C, numerous performance measurements must be taken from the service scope. Such measurements will be highly dependent on the specific service.
All of these data sources must be collected and centralized in the data management system, where some normalization must take place before the data can be stored and used. To do this, the data management system is made up of several subcomponents (Figure 7):
• Data collection network: one or several networks will be set up to connect the devices where the data is collected for the testbed. Such networks will ideally be isolated from the RANs of the technologies that are being tested. Depending on the device, alternative network interfaces will be used for this purpose. For instance, for DWM1001 UWB devices, USB and BLE connections [250] can be used to transmit the UWB measurements. In USRP SDR devices, Ethernet and/or PCIe interfaces can be used; and in smartphones, USB-OTG or BLE can be used, freeing 5/6G, WiFi, and UWB for the tests. This implies that the data collection network will itself be a multi-technology network, requiring that the data management system has the corresponding interfaces.
• Data collection probes: specific components, developed for each device hosting a data source, that collect the required data and send it to the testbed through the data collection network. The collected data will come from sources with varied data collection interfaces, so custom components using the tools provided by the manufacturers must be developed for each source, each with its own challenges. For instance, collecting data from proprietary devices in the service scope may be very difficult due to the lack of developer documentation or access to certain functions. Another challenge is the maintenance of a large code base with components for many different platforms and programming languages (e.g., C-based probes for USRP SDR [237] devices, Kotlin code for Android smartphones, etc.).
• Probe drivers: the probes, which run on the devices, must be managed remotely by the testbed. The driver for each probe will connect to the probe over the data collection network and send start/stop commands, reconfigurations, software updates, etc. Since each probe will behave differently (i.e., different protocols, command formats, etc., depending on the specific platform it is programmed for), these drivers must provide a common interface towards the testbed, with a common set of commands and parameters, to simplify the orchestration of the testbed.
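The common driver interface described above can be sketched as an abstract base class; each platform-specific driver (USRP, Android, UWB, etc.) would implement the same small set of commands behind it. All names here are illustrative assumptions:

```python
from abc import ABC, abstractmethod

class ProbeDriver(ABC):
    """Common interface every probe driver exposes to the testbed,
    hiding the platform-specific protocols and command formats."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def configure(self, params: dict) -> None: ...

class DummyProbeDriver(ProbeDriver):
    """In-memory driver, useful for testing the orchestration logic
    without real hardware attached."""
    def __init__(self):
        self.running = False
        self.params = {}

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def configure(self, params):
        self.params.update(params)
```

The orchestrator then only depends on `ProbeDriver`, so adding a new device type means writing one new subclass rather than touching the testbed core.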
• Data format adapters: the probes will extract data in formats that highly depend on the platform and the specific magnitude being measured. For instance, RTT measured on Google WiFi devices [251] will come as a Kotlin or Java object, while RTT measured on the DWM1001 comes in plain text. The probes should not do any conversion, to minimize the impact on computing resources and battery of the monitored devices; they will capture this raw data and send it to the testbed, where the adapters will translate it to a common format.
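As a sketch of this translation layer, two hypothetical raw formats (a DWM1001-style plain-text range line and a JSON RTT report) are mapped to one common record. The exact raw formats shown are assumptions for illustration, not the real device outputs:

```python
import json

def adapt_uwb_text(line: str) -> dict:
    """Hypothetical plain-text range line, e.g. 'T1 A3 4.21' (tag, anchor, metres)."""
    tag, anchor, dist = line.split()
    return {"tech": "uwb", "target": tag, "anchor": anchor, "range_m": float(dist)}

def adapt_wifi_json(raw: str) -> dict:
    """Hypothetical JSON RTT report, e.g. '{"sta": "T1", "ap": "AP2", "rtt_ns": 40}'."""
    msg = json.loads(raw)
    # Convert round-trip time to a one-way range: c * rtt / 2
    return {"tech": "wifi-ftm", "target": msg["sta"], "anchor": msg["ap"],
            "range_m": 299_792_458 * (msg["rtt_ns"] * 1e-9) / 2}
```

Downstream components (storage, evaluation framework) then only ever see the common `{"tech", "target", "anchor", "range_m"}` record, regardless of the source technology.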
• Data storage: the data obtained will be stored in a database that can later be accessed by the evaluation framework. The database must fulfill two requirements: a high capacity and a relatively high throughput. Since the testbed is designed to assess the performance of location, and not to control any system in a closed loop, latency is not a major issue.
• API interface: once the data is stored, it must be accessed from the evaluation framework. Here, an open and well-documented interface must be used. Most databases already offer this, but in some cases the interface may be cumbersome, and security policies within the organization running the testbed may limit network access to HTTP/HTTPS requests only. In these cases, Representational State Transfer (REST) interfaces may be a good option to add to the data management system, acting as the external query interface. Technologies such as the Django REST Framework [252] may simplify the development of this interface.

2) EVALUATION FRAMEWORK
The evaluation framework will take the measurements collected in the data repositories and extract the derived metrics whose assessment is the objective of the testbed. The evaluation framework produces human-readable outputs that will help developers iteratively work on new location schemes and location-aware applications.
The evaluation framework will extract the magnitudes (accuracy, latency, and frequency), and assess whether they meet the minimum requirements of the applications (as described in Sections III and IV-C). These metrics will be extracted from the data collected from the data sources described in Section V-D1.
The extraction of these metrics may imply a large amount of computations of relatively low complexity (mainly distances between the ground truth and the estimated position, differences between the performance indicators and the requirements, etc.). The key points in this component are flexibility (in other words, the ability to set up new calculations for different tests) and rich statistical analysis functions. Platforms such as Python [253] and Python-based data analytics tools (such as Jupyter [254] or Orange [255]), R [256], SPSS [257], or Matlab [258] may all provide such tools. The choice among these platforms is up to developer preference, as well as the cost of licensing.
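The core computation of this component, comparing estimates against the ground truth and against application requirements, can be sketched in plain Python (function names and the percentile-based acceptance rule are illustrative choices):

```python
def location_error_stats(ground_truth, estimates):
    """Per-sample Euclidean errors between ground-truth and estimated
    positions, summarized as mean and (approximate) 95th percentile."""
    errors = sorted(
        sum((g - e) ** 2 for g, e in zip(gt, est)) ** 0.5
        for gt, est in zip(ground_truth, estimates)
    )
    p95 = errors[min(len(errors) - 1, int(0.95 * len(errors)))]
    return {"mean": sum(errors) / len(errors), "p95": p95}

def meets_requirement(stats, accuracy_req_m):
    """Accept the DUT if the 95th-percentile error is within the requirement."""
    return stats["p95"] <= accuracy_req_m
```

In practice such calculations would be run over the stored measurement batches, with libraries like NumPy/pandas replacing the pure-Python loops for large datasets.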
Another important task that this block will do is live representation of metrics for live demos. For this, the aforementioned platforms offer tools for live graphics, and platforms like Grafana [259] can be deployed too.
Finally, libraries such as FPDF [260] can be used in this component to generate automated reports for product evaluation and certification.

3) INTEGRATED DEVELOPMENT AND CONTROL ENVIRONMENT
This element is the interface of the testbed users (developers and researchers) with the rest of the system. It allows full control of all the elements and enables the R&D workflow. This workflow, as described earlier, consists of two roles: testbed control and location development.
The control work that must be done on the testbed consists of four steps:
• Designing an experiment: the first phase is to determine the objective of the experiment (i.e., what needs to be measured) and the DUT, which may be a location method, a location device, or a location-aware service. Along with the objective and the DUT, a hypothesis must be formulated, defining the expected results, as well as the boundary conditions and the environment setup.
• Configuring the environment: the next step is to configure the environment according to the design specifications of the experiment. This may involve reconfiguring elements of the physical space, changing network parameters, and adjusting service configurations. The evaluation framework must also be programmed to collect and show the designed output magnitudes.
• Launching the experiment: the experiment will be run in an automated or manual manner. This stage may involve moving elements through the physical space to assess the DUT under mobility.
• Collecting results: the testbed will automatically collect measurements, store them, and process them in the evaluation framework. This stage consists of extracting information from the results of the evaluation framework, comparing them with the hypothesis and formulating conclusions based on this information.
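The four control steps above can be condensed into a minimal orchestration loop; the callables stand in for real testbed subsystems and are purely illustrative:

```python
def run_experiment(configure, launch, collect, evaluate, hypothesis):
    """One iteration of the testbed workflow: configure the environment,
    launch the experiment, collect and process the results, and check
    them against the hypothesis. Returns (measurements, hypothesis_holds)."""
    configure()                  # set physical/network/service parameters
    raw = launch()               # run the experiment (manual or automated)
    measurements = collect(raw)  # store and process in the evaluation framework
    return measurements, evaluate(measurements, hypothesis)
```

Each call to `run_experiment` corresponds to one iteration of the DUT maturation loop described at the start of this section; the outcome drives the adjustments made through the IDCE before the next run.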
The results of the experiments may lead to additional development on the DUT. To support this, the IDCE will offer functionality to help in the development, such as text editors with syntax highlighting and support for all the programming languages that the DUT may use. The IDCE will allow users to easily push updates to the DUT, whether it is an end service running in the cloud and/or terminals, a 6G orchestration function running in the Non-RT-RIC, an xApp running in the Near-RT-RIC, or a location algorithm running in the location component of the 6G core network. The IDCE will also offer these facilities for the development of computation programs in the evaluation framework and for updating software components throughout the testbed. Some testbeds that use a similar approach are [49], [50], [53], [232]. Another function that the IDCE must offer is some degree of automation for live demonstrations, where it may replay experiments with minimal operator intervention.
The IDCE will run on one or several computers. The main terminal may be a laptop or desktop computer with the appropriate network connections. Secondary terminals may be used for assistance during the execution of experiments; for instance, tablets or smartphones that allow control of certain components of the testbed (as done in [42]). For this role, consumer-grade devices with average computing power can be used, reducing material costs, since heavy computation is not done on them.
Regarding the software of the IDCE, it must have several interfaces: • With the 6G network: it must be able to modify the network settings to match the required experimental conditions, and to start/stop functions.
• With the end services: to configure them and send different events, such as connectivity interruption, user interaction, etc. This interface is dual, with one side being on the user terminal, and the other in the remote server.
• With the physical scope: to control the environment before and during the experiment. Some of the interactions must be done manually (e.g., moving partitions), while others can be automated with elements like robots as done in [48] and [49].
• With the development repositories: to publish development work on the DUT. For the implementation of this component, two approaches can be used. The first one is to use discrete software components, some of them provided by vendors (e.g., software for the control of the 6G network, or Integrated Development Environments such as Spyder [261] or Matlab [258]) and others customized for the testbed (e.g., control logic for the probes). While this option may result in a less integrated environment, it may also be less costly. The other approach is to develop a fully integrated environment, where all the functions are combined into a single interface. This option may be more expensive, but it results in a much more streamlined workflow.

4) DEVELOPMENT REPOSITORIES
The testbed will ultimately be used for developing new location schemes or location-aware functions. During the R&D workflow, the maturity of the DUT is expected to improve iteratively, resulting in code that can be distributed either within the entity that owns the testbed or publicly. In parallel, the development may be done by a team of developers, so a version control system is required for a seamless workflow. For all these reasons, a repository with version control is a very important component of the R&D scope. Tools such as Git [262] can be used for implementing this element.

E. KEY TAKEAWAYS
This section described an architecture for implementing a testbed for 6G location. The architecture has four loosely connected scopes, which enables updating each of them separately and therefore greatly simplifies extensibility as 6G technologies progress.
The Physical scope concerns all aspects related to the environment of the network and services, such as the physical space, the measurement of ground truth, and the hardware.
The 6G network scope contains a full 6G network, which must follow the latest developments and standards. Thanks to technologies such as Open RAN and SDR, this can be done with ease and relative cost-efficiency.
The service scope includes all the end services that rely on network-based location, and can be used for end-to-end evaluation.
Finally, the R&D scope includes all the control logic for the testbed, including data acquisition and processing, a development environment, and data and code repositories for integration with external systems.
For each of these scopes, a brief overview of existing technologies has been given, showing a path towards a real-world implementation.

VI. IMPLEMENTATION EXAMPLE
In this section, an example implementation of the proposed architecture is described. The purpose of this section is to show how the guidelines given in this paper can be applied in practice, and how they can be used to perform a successful test. Specifically, a nomadic testbed was implemented at the University of Málaga (UMA) in the context of the LOCUS project [31] to measure the performance improvements achieved with range fusion [105] over single-technology location. In this case, the range fusion algorithm is the DUT.
The experiments that were carried out sought to obtain the reliability and accuracy of the location service in two different scenarios: a classroom for the education use case [47] and a building under construction [263]. The hypotheses of these measurements are twofold. On the one hand, it is expected that network location will help improve the coverage of the accurate location systems (WiFi-FTM and UWB in this case), and therefore their reliability. On the other hand, it is expected that, at points where more than one accurate technology is available, it can help improve the accuracy. The experiments will show whether these hypotheses hold up or not.

A. PHYSICAL SCOPE SETUP
Two physical settings have been selected for the experiments: • Education use case: two laboratories separated by a partition and connected by a stretch of open-air corridor at UMA were selected. Figure 8 shows a map of the scenario, including the location of the reference points of three different technologies (UWB, WiFi-FTM, and LTE). The clutter in this scenario is moderate, as shown in Figure 9, consisting of long workshop tables with PCs, monitors, and electronic lab instrumentation. The total size of the scenario is 24 × 17 meters with a height of 3.5 meters. The ceiling contains some structural elements and light fixtures.
• Building under construction: the selected building (Figure 10) was in the phase where the main structure had already been built, with external walls and internal partitions still missing. The measurements were taken on three different floors: the ground floor (without walls) and two underground floors, where parking areas were eventually to be constructed. In the underground floors, the foundations protected the building from the surrounding underground aquifers. The structure was made of reinforced concrete, and large metallic elements such as cranes were present. The total area covered by the scenario was 45 × 28 meters. Figure 11 shows a map of the −1 floor, where the WiFi-FTM and UWB reference points were installed. It can be seen that there are no partitions in the space; only the pillars and some walls forming a stairwell (towards the center-left of the scenario) are present. The measurement points were replicated on the ground and −2 floors. In these scenarios, the following radio equipment was deployed: • DWM1001 UWB [250] from Qorvo as the high-accuracy location technology. Each reference point was powered by a USB adapter.
• Indoor Huawei LTE network [112] as cellular technology (only used in the education scenario).
• Experimental terminal made up of a stock Google Pixel 3 (which supports WiFi-FTM) with two DWM1001 UWB tags attached through BLE links (depicted in Figure 12 and summarized in Table 2). Regarding ground truth, in the three scenarios the markings-on-the-floor approach was used, establishing a relation between each sample and its point using time stamps. Figure 9 highlights the locations of the floor markings for the education use case.

B. 6G NETWORK SCOPE PARTIAL IMPLEMENTATION
Since 6G is not yet available, and the objective of this testbed is not to test the radio aspects of cellular technologies, WiFi was used as the backbone for connectivity. LTE was used for network-based measurements in the education scenario.
The LOCUS platform [31], [177] defines a network location function like the one described in Section V-B, which receives information from the terminals, the network, and contextual sources to determine user location, and also acts as a location server. In this testbed, the location function was implemented following the LOCUS platform definition, as shown in Figure 15, with the following elements: • Data collector: collects ranging information that may come from the reference points or the terminal. In this specific case, the ranges were measured in the terminal for simplicity.
• Data parser: unifies the format of the collected data and translates LTE power measurements into distance estimations. This is not done for UWB and WiFi-FTM because these technologies directly provide a range estimation.
• Location estimator: using the measured ranges, estimates the location of the user with range fusion as described in [47] and [105].
• RabbitMQ messaging service: connects the three elements and also acts as the external interface for location-dependent services and functions. As this is an experimental platform, no intermediary element is used, but in more mature prototypes, an element that also implements authentication and permission flags should be used as the public interface. The location function runs on a laptop that also hosts the R&D scope functionality.
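As a sketch of the parser and estimator stages, the snippet below converts an LTE RSRP value into a range with a log-distance path-loss model and fuses ranges from several technologies with linearized weighted least squares; the path-loss constants, anchor layout, and unit weights are illustrative assumptions, not the actual LOCUS parameters from [47] and [105].

```python
import math

def rsrp_to_range(rsrp_dbm, p0_dbm=-40.0, path_exp=2.5):
    """Invert a log-distance model: RSRP = p0 - 10*n*log10(d)  ->  d in metres.
    p0 (power at 1 m) and the path-loss exponent n are illustrative constants."""
    return 10.0 ** ((p0_dbm - rsrp_dbm) / (10.0 * path_exp))

def wls_position(anchors, ranges, weights):
    """2D weighted least-squares multilateration, linearized against anchor 0.
    Each row i>0 satisfies: 2(xi-x0)x + 2(yi-y0)y
                            = r0^2 - ri^2 + xi^2 + yi^2 - x0^2 - y0^2."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows = []
    for (xi, yi), ri, wi in list(zip(anchors, ranges, weights))[1:]:
        a1, a2 = 2.0 * (xi - x0), 2.0 * (yi - y0)
        b = r0**2 - ri**2 + xi**2 + yi**2 - x0**2 - y0**2
        rows.append((a1, a2, b, wi))
    # Accumulate the 2x2 weighted normal equations (A^T W A) p = A^T W b
    s11 = s12 = s22 = t1 = t2 = 0.0
    for a1, a2, b, wi in rows:
        s11 += wi * a1 * a1; s12 += wi * a1 * a2; s22 += wi * a2 * a2
        t1 += wi * a1 * b;  t2 += wi * a2 * b
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Four hypothetical anchors (UWB/WiFi-FTM positions), terminal truly at (3, 2)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
true_xy = (3.0, 2.0)
ranges = [math.hypot(true_xy[0] - x, true_xy[1] - y) for x, y in anchors]
est = wls_position(anchors, ranges, weights=[1.0, 1.0, 1.0, 1.0])
```

In practice, the weights would down-weight the noisy LTE-derived ranges relative to the direct UWB and WiFi-FTM range estimates, which is what makes fusion improve reliability without destroying accuracy.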
The data provided to the location service is collected in the terminal, using an Android app that gathers range estimations from the WiFi-FTM API [251], the UWB devices attached to the smartphone [265], and the RSRP of the serving and neighbor LTE cells [266]. The app sends this data encoded in JSON to the location service. Figure 14 shows a screenshot of the developed application, where the name and coordinates of a ground truth point can be inserted and the acquisition of data commanded. Figure 13 shows the flowchart of the concentrator component that runs once data collection is commanded in the GUI and that collects the data captured by background processes.
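A measurement batch sent by such an app could be encoded roughly as follows; all field names and values are hypothetical, since the actual message schema of the app is not reproduced here.

```python
import json

# Hypothetical payload: one measurement batch taken at a ground-truth point
payload = {
    "timestamp": 1694012345.0,
    "ground_truth": {"name": "P12", "x": 3.0, "y": 2.0},
    "wifi_ftm": [{"bssid": "aa:bb:cc:dd:ee:01", "range_m": 4.2}],
    "uwb": [{"tag": "DWM-1", "range_m": 4.05}],
    "lte": [{"cell_id": 101, "rsrp_dbm": -92.0}],
}
encoded = json.dumps(payload)   # serialized text sent to the location service
decoded = json.loads(encoded)   # parsed on arrival by the data collector
```

Note that UWB and WiFi-FTM entries already carry ranges, while LTE carries raw RSRP, mirroring the data parser's role of translating only the LTE measurements into distances.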

C. SERVICE SCOPE MEASUREMENTS
A generic service scope was implemented in this case, since E2E measurements were not being taken. The only important aspect in these experiments was measuring the following metrics, which will affect the potential end services: • Reliability: the probability that the network can provide a location. It can do so if the terminal is in coverage; that is, if it is within range of at least three reference points (for 2D location). In these experiments, the reliability will be measured as the proportion of measurement points in coverage.
• Accuracy: represents the correctness of the estimated location. In these experiments, the average horizontal (2-dimensional) accuracy will be measured in meters.
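Under the definitions above, both metrics reduce to a few lines; the sample data here is invented for illustration, with a fix requiring at least three visible reference points.

```python
# visible[i] = number of reference points in range at measurement point i
# err_m[i]   = horizontal error (metres) where a fix was obtained, else None
visible = [4, 3, 2, 5, 1, 3]
err_m = [0.9, 1.4, None, 0.7, None, 1.2]

MIN_ANCHORS = 3  # minimum ranges needed for a 2D fix

covered = [v >= MIN_ANCHORS for v in visible]
reliability = sum(covered) / len(visible)      # proportion of points in coverage
fixes = [e for e in err_m if e is not None]
mean_accuracy_m = sum(fixes) / len(fixes)      # average horizontal error
```

With this toy data, four of six points are covered (reliability 66.7%) and the average error over the obtained fixes is 1.05 m.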

D. R&D SCOPE IMPLEMENTATION
The R&D scope will be very simple in this case, including the following elements: • Probe in the location service: a RabbitMQ consumer will be implemented, listening to the publishers of the three other elements.
• Collection network: since the location service will run in the same laptop as the R&D scope functions, the probe will be connected through the loopback network interface.
• Reliability estimation: the raw data will be explored to find how many reference points are visible in each sample.
• Error estimation: the error will be calculated offline by comparing the location service output with the ground truth. A prior data preparation phase will associate the time stamps with ground truth locations (based on manually taken annotations), adding the ground truth location to each collected sample. The error will then be calculated and the Empirical Cumulative Distribution Function (ECDF) represented.
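The offline preparation and ECDF steps can be sketched as follows; the visit annotations and sample coordinates are invented, and the association rule (each sample belongs to the most recent annotated point) is an assumption about the manual annotation procedure.

```python
from bisect import bisect_right

# Manually annotated ground-truth visits: (start_timestamp, point_name, x, y)
visits = [(100.0, "P1", 0.0, 0.0), (160.0, "P2", 5.0, 0.0), (220.0, "P3", 5.0, 4.0)]
# Collected location estimates: (timestamp, x, y)
samples = [(130.0, 0.4, 0.3), (170.0, 5.1, 0.2), (230.0, 4.0, 4.0)]

starts = [v[0] for v in visits]

def attach_ground_truth(ts):
    """Associate a sample with the visit whose start time precedes it."""
    i = bisect_right(starts, ts) - 1
    return visits[i]

errors = []
for ts, x, y in samples:
    _, _, gx, gy = attach_ground_truth(ts)
    errors.append(((x - gx) ** 2 + (y - gy) ** 2) ** 0.5)  # 2D error in metres

def ecdf(values, x):
    """Empirical CDF: fraction of errors not exceeding x."""
    return sum(v <= x for v in values) / len(values)
```

Evaluating `ecdf` over a grid of error values yields the curves from which percentiles such as the 90th are read in the results section.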

E. RESULTS AND DISCUSSION
The reliability results are represented in Figures 16 and 17, and the accuracy results in Figures 19 and 20, for each scenario, using both a single technology and range fusion. It can be seen that, in all scenarios, reliability is higher when using fusion, since it "fills the gaps" where fewer than three reference points of a single technology are present. In the education scenario (Figure 16), UWB on its own provides a reliability of 48.46% and WiFi-FTM on its own, 85.77%. The fusion of both improves the reliability to 95.77%, since points that were previously not covered by a technology on its own, due to insufficient visible reference points, can now benefit from additional reference points. This is also the case with LTE, which on its own provides a reliability of 78.46%, but helps improve the reliability of a pure UWB setup up to 98.46%. In the construction scenario (Figure 17), a similar behavior can be observed, with UWB on its own having a reliability of 88.42%, WiFi-FTM 95.51%, and the fusion of both 98.17%. In this case, the nomadic testbed did not allow taking LTE measurements, so its effects cannot be evaluated for this scenario. Figure 18 shows the ground truth of the measurements in the classroom and three different estimations: with UWB only, with WiFi-FTM only, and with the fusion of both. The improvement of location accuracy with fusion is evident, and can also be observed in Figures 19 and 20. When more than three highly accurate ranges are used, accuracy is higher due to the overdetermination of the WLS problem. In the education scenario (Figure 19), UWB has a 90th percentile of error of 5.61 m, WiFi-FTM 9.33 m, and the fusion of both reduces the error to 1.46 m. On the other hand, if an inaccurate ranging technique is used, then accuracy will be low. This is very obvious in the fusion of UWB and LTE, which increases the 90th percentile of the error to 34.55 m.
This highlights that the role of LTE in this case is not to improve the accuracy, but the reliability, as shown in Figure 16. The inaccurate locations replace what would otherwise be a sample without enough ranges to estimate a location with a single technology. In the construction scenario, location is generally slightly less precise than in the education scenario, with 90th-percentile errors of 14.6 m for UWB, 7.75 m for WiFi-FTM, and 8.23 m for fusion. In this case, overdetermination does not improve the precision of WiFi-FTM. As Figure 20 shows, UWB is much less precise than WiFi-FTM in this scenario, and fusion produces results that are very close to pure WiFi-FTM. Combining this information with Figure 17, it can be seen that the effect of UWB here is an improvement in reliability, similar to what LTE provided over UWB in the education scenario.

F. KEY TAKEAWAYS
This example covered a simple testbed used for the specific demonstration of a range fusion algorithm in different scenarios. Due to the limited scope of these experiments, only a partial implementation of the architecture was required, but a more sophisticated setup would add flexibility for different kinds of experiments, as well as extensibility during the development and rollout of novel 6G technologies in the future.

VII. CHALLENGES
While the proposed architecture can alleviate many of the difficulties found when implementing and operating a location testbed, some important challenges may still be present in certain settings. When building a testbed of any type, there are general administrative challenges that are almost always present, such as the allocation of the required physical space, the procurement of funds, etc. There are other challenges that are specific to 6G, which are reviewed in this section, along with an outline of some possible lines of action. These challenges can be classified into two broad groups: administrative (those related to the non-technical factors of the testbed) and technical (those related directly to the technical components).

A. ADMINISTRATIVE CHALLENGES
Administrative challenges cover the difficulties related to the management of financial resources, regulations, and business relations with vendors. These challenges are often very limiting and may impose restrictions on the technical scope, such as the types of experiments that can and cannot be done in the testbed. In the case of 6G, the following challenges may be encountered: • Availability of 6G equipment and vendors: 6G is still far from having a standardized implementation [267]. The techniques that will be included are not even decided; all that can be found in the bibliography is still merely speculative. This may make it difficult for designers to choose the exact components that will go into the testbed. Moreover, because there is no 6G hardware/software commercially available yet, it may be difficult to justify the acquisition of highly experimental (and often expensive) hardware to the accounting departments of the organizations implementing the testbed. Once decided and approved, the procurement of components may also be difficult, with a limited set of vendors of specific experimental hardware. From the administrative point of view, there is no clear solution to this challenge, so finding alternatives becomes a technical problem.
• Vendor dependency: the reduced number of specialized vendors may become an administrative problem in several different ways. Firstly, the leverage for negotiating prices is quite limited. Secondly, the dependency on one vendor puts the organization implementing the testbed in a vulnerable position in case the vendor stops providing support or upgrades (e.g., because of contract expirations or even bankruptcy of the vendor). Some factors to take into account when choosing vendors should therefore be their solvency, possible offers of extended support, and more importantly, the use of generic components that are well documented and can be supported by third parties in case of need. The use of open source software and hardware components is a good safeguard against vendor dependency [268].
• Vendor limitations: some vendors offer their solutions with significant limitations, such as the inability to modify firmware or to access core components of the system. These limitations may be imposed in several ways: by license or by obfuscation of functions. Such terms of service limit the types of experiments that can be done with the components, becoming a technical challenge. From the administrative side, selecting vendors that impose few limitations should be a priority, but given the reduced number of vendors, this may sometimes be impossible. The establishment of agreements for joint research can then be used to obtain improved access to the acquired components.
• Spectrum licensing: in the case of a mobile network testbed, one major administrative aspect is obtaining the rights to radiate in licensed bands. Since there is a commercial interest in the usage of such bands, it is often limited to operators, which are reluctant to yield part of their capacity to research facilities. Moreover, as 6G is still under development, its bands are not yet allocated and are prone to change once the first digital dividends are released. For organizations implementing the testbed, especially small ones, obtaining licensed spectrum may be too costly to be feasible. The alternative is to establish partnerships with operators that cede part of their spectrum for limited periods so that experiments can be done. Avoiding interference with the wider mobile network may also help to ensure collaboration from operators.
• Privacy: when studying location, it is often overlooked that a terminal is tied to a user. The location of a person is especially privacy-sensitive data, so there must be mechanisms to avoid collecting this information, or to store it appropriately, following regulations such as the European Union's General Data Protection Regulation (GDPR) Article 17 [86]. Most testbeds will use their own terminals, meant to be used inside the premises of the organization and during work hours. Nevertheless, it is important to implement a clear data protection policy (e.g., instructing workers not to use the terminals outside of the periods established for experiments).
If personal equipment must necessarily be used, anonymization mechanisms must be put into place, along with an audit that ensures that personal data cannot be reconstructed.
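One possible anonymization measure, sketched here as an assumption rather than a prescribed mechanism, is to replace device identifiers with keyed pseudonyms whose key is destroyed after the experiment, so that stored records cannot be linked back to a person.

```python
import hashlib
import hmac
import secrets

# Per-campaign secret key; discarding it after the experiment makes the
# pseudonyms unlinkable to the original identifiers (illustrative measure,
# not a full GDPR compliance scheme)
key = secrets.token_bytes(32)

def pseudonymize(device_id: str) -> str:
    """Keyed hash of a device identifier; stable within one campaign only."""
    return hmac.new(key, device_id.encode(), hashlib.sha256).hexdigest()[:16]

# The stored record carries the pseudonym, never the raw identifier
record = {"device": pseudonymize("imei-123456789012345"), "x": 3.0, "y": 2.0}
```

A keyed hash is preferred over a plain hash because device identifiers have low entropy and could otherwise be recovered by brute force, which is exactly what the audit mentioned above should check for.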

B. TECHNICAL CHALLENGES
Technical challenges cover the difficulties related to the implementation and operation of the testbed from a purely technological perspective. These challenges stem from the capabilities of the acquired equipment and are often subject to the decisions taken on the administrative side. The architecture described in this paper aims to limit the effects of such decisions, by making it easier to acquire generic equipment and by isolating the limitations of individual components from the overall testbed. Still, some challenges may arise when implementing the architecture over real equipment: • Use of experimental 6G technology: as described earlier, procurement may be difficult for experimental 6G components that have been chosen based on speculation. Nevertheless, these speculations are based on educated guesses, and the ballpark estimations that are being made are enough for developing a testbed that can eventually host standard 6G techniques. Thanks to SDN and SDR, generic COTS hardware can be acquired, which will be compatible with 6G functions in the future. The choice of terminals for end services is another major challenge, since there are no 6G terminals yet, and no commercial terminals that can be upgraded to 6G in the future. To future-proof the terminals, the adoption of a flexible platform (e.g., SDR devices that can be programmed to act as a specific terminal) will also be necessary. In any case, the cost of acquiring 5G terminals and replacing them with 6G terminals in the future is not prohibitive.
• Interoperability of different components: in a testbed, all components should be able to interoperate; that is, they should have the interfaces for the R&D scope to monitor and manage them. This may mean that some development work is required for the adapters (as described in Sections V-D1 and V-D3), which may imply some hacking [269] if the original equipment does not support all the required functions. Vendors may help by either developing the required functions or providing documentation, but using open source components may avoid the need to rely on the vendor. Another problem that may occur is vendor lock-in (i.e., a component is only compatible with other components from the same vendor), which can again be avoided by using mostly open source components.
• Maintenance over time: the proposed architecture is meant to evolve over time, with new components or upgrades to those already integrated. This implies that a continuous effort must be made to maintain compatibility between the different components and to avoid bad development practices such as the lava flow anti-pattern [270], where a software component is quickly modified by different developers to support specific experiments, which may in turn result in code duplication, giving rise to parallel versions that grow incompatible over time.
• Interference with commercial networks: a 6G testbed will most likely run in an area where commercial cellular networks operate. As such, it can interfere with (see administrative challenges) and suffer interference from these networks. The interference from the commercial network may cause errors in the 6G-based location estimations. To avoid this, isolating the network is again a good solution, which can work for indoor environments. Outdoor settings, on the other hand, will have more problems, so interference should either be considered a limiting factor or be taken into account as a factor that adds realism to the experiments.

C. KEY TAKEAWAYS
In this section, the main challenges for implementing a testbed based on the proposed architecture have been reviewed. Administrative challenges are usually the main limitation, but with the proposed architecture, their effects can be alleviated. Nevertheless, some issues, such as licensing or accessing all the capabilities of acquired equipment, can only be solved by establishing partnerships with operators and vendors. Technical challenges mainly stem from the current lack of 6G standards, and can be resolved by using Open RAN and SDR components and by favoring open source over proprietary solutions.

VIII. CONCLUSION
This paper has presented a thorough review of the expectations for 6G location in the near future. 6G will bring a slew of new enabling technologies (such as THz communications, RIS, and AI/ML) that will improve the network's capacity to estimate ranges, and hence to calculate positions. 6G devices will also benefit from the increasing number of other location technologies, which can be used to further improve location thanks to fusion techniques. Conversely, an increasing number of location-aware services will benefit from the location that the 6G network can provide, as will network functions.
To develop and test these location techniques and the location-aware applications of the future, the R&D community needs a testbed that is purposefully built for location in 6G. The main requirements of such a platform are that it must cover all the possible 6G enablers, and do so while 6G technologies are still being developed. This is a very challenging task, which can be addressed by using as many open components as possible.
The proposed testbed architecture disaggregates the elements into four scopes, each loosely connected to the others, such that they can be upgraded separately as new 6G software is available. These four scopes (physical, network, service, and R&D) have been described in detail, showing examples of the building block implementations that are currently available.
The proposed architecture can be used as the base for a blueprint of a location testbed, resulting in a flexible and future-proof design. An example of a nomadic testbed based on this architecture has been shown in this paper. This testbed did not include any 6G hardware, since no such devices are available yet, but the development (hardware and software) done for it can be reused once some of the components (namely the terminal and the LTE network used in the example) are replaced with 6G equipment.

RAQUEL BARCO MORENO received the M.Sc. and Ph.D. degrees in telecommunication engineering. She is currently a Full Professor with the University of Málaga. She worked at Telefónica and the European Space Agency. She has participated in the Mobile Communication Systems Competence Center, jointly created by Nokia and the University of Málaga. She has published more than 100 scientific papers, filed several patents, and led projects with major companies.